does anyone have experience with bcache/flashcache? considering deploying a rig at arpnetworks with an ssd cache, but have only played with the technology for a month or three
curious to know how you get on jbergstroem, but no experience myself
mercutio: testing locally has been positive, but we all know the effects of running something in production for a longer while :)
yeah, i wouldn't do it in production myself, because that stuff changes the layout of the hdd too, right? zfs is just superior here. been using l2arc for a long time; it just works. i've disconnected the ssd multiple times, as well as parts of my raid.
not really. it's supposed to act as a pure cache, but bcache, flashcache and the new dm-cache (3.9) work a bit differently
zfs has the problem that it loses the ssd cache when you reboot, and it's complicated to split read/write caching on one ssd
for instance, dm-cache actually caches writes
write caching is a bigger benefit when you have 32gb of ram, which is quite cheap to do nowadays
yeah, i'm probably going there instead. but for reads, if you do raid1 (which i will).. gotta split. thanks for the feedback
with linux md you want raid10 with far=2 or something, i can't remember exactly
normally raid1 doesn't give much performance benefit unless you do that
then it stripes the fast/slow parts of the disks together
mind you, that could slow down writes, i imagine?
seeing network lag.. a few packets lost on my VPS
mercutio: sorry, 1 was a slip. i meant 0 (mirroring)
jbergstroem: mirroring is raid 1, not 0?
mercutio: your reply confused me (since i thought you referred to striping while mentioning performance benefits), so i assumed i had made a typo.
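the distinction drawn above (l2arc as a pure read cache vs bcache/dm-cache absorbing writes) can be sketched as a toy model. this is not real bcache, dm-cache or zfs code, just hypothetical classes illustrating why a write-back ssd cache reduces writes hitting the backing disk while a read-only cache does not:

```python
class Backing:
    """stand-in for the slow hdd; counts writes that reach it."""
    def __init__(self):
        self.data = {}
        self.writes = 0
    def read(self, key):
        return self.data.get(key)
    def write(self, key, value):
        self.data[key] = value
        self.writes += 1

class ReadCache:
    """l2arc-style read cache: writes go straight through to the disk."""
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}
    def write(self, key, value):
        self.backing.write(key, value)  # write is not absorbed by the ssd
        self.cache[key] = value
    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.backing.read(key)
        self.cache[key] = value  # populate cache on miss
        return value

class WriteBackCache:
    """bcache/dm-cache-style write-back cache: writes land on the ssd
    and only reach the disk on flush."""
    def __init__(self, backing):
        self.backing = backing
        self.cache = {}
        self.dirty = set()
    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)  # backing disk untouched until flush
    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.backing.read(key)
        self.cache[key] = value
        return value
    def flush(self):
        for key in self.dirty:
            self.backing.write(key, self.cache[key])
        self.dirty.clear()
```

the toy model also mirrors the reboot point: the read cache can be thrown away at any time with no data loss, while the write-back cache holds dirty data that must survive until flushed.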
and, i replied in the middle of the night :)
jbergstroem: oh true
jbergstroem: normal zfs raid 1 will read from disks alternately, but not write, and you were talking about the read benefits of raid 1 rather than read and write
i'm not sure what linux raid does, but with far=2 it uses the first half of each disk afaik
http://serverfault.com/questions/139022/explain-mds-raid10-f2
that explains it well, it seems
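the far=2 layout discussed in that serverfault answer can be sketched numerically. this is my own simplification of md raid10 with a two-way far layout: the first copy of each chunk is striped across the first half of the disks raid0-style (which is where the sequential read speed comes from), and the second copy lives in the second half, rotated one disk over so the two copies never share a spindle. the function and its parameters are illustrative, not any real md interface:

```python
def raid10_far2_layout(chunk, ndisks, chunks_per_half):
    """return ((disk, offset), (disk, offset)) for the two copies of a chunk
    under a simplified md raid10 far=2 layout.

    chunk: logical chunk number
    ndisks: number of member disks
    chunks_per_half: chunks that fit in half of one disk
    """
    # first copy: plain raid0 striping over the outer (fast) half of the disks
    d1 = chunk % ndisks
    o1 = chunk // ndisks
    # far copy: same striping, rotated by one disk, placed in the second half
    d2 = (chunk + 1) % ndisks
    o2 = chunks_per_half + chunk // ndisks
    return (d1, o1), (d2, o2)
```

with two disks, chunk 0 lands on disk 0 near the start and its mirror on disk 1 in the second half, so reads of consecutive chunks alternate disks like a stripe, while a write has to touch both halves, which is the write penalty mentioned above.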