In "writethrough" mode, data written to the device is copied to the SSD and to the HDD at the same time; the write is considered complete at the end of the write to the HDD. This tends to be more secure than "writeback", but lacks some of its performance gain.

In "writearound" mode, data written to the device goes directly to the HDD and is not written to the SSD at all, so, as with writethrough, you do not benefit from any write gain; in addition, the first time this data is read, it will be read from the HDD. The advantage of this is to have more space for caching reads, and to reduce the wear on the SSD.

Right now there is no split between a write cache and a read cache, as there can be on some filesystems (say ZFS), to handle the particularities of the technologies behind SSDs (SLC, MLC, etc.).
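For reference, these modes can be switched at runtime through sysfs; a sketch assuming a bcache device already registered as bcache0 (paths follow the standard bcache sysfs layout):

```shell
# List the available cache modes; the active one is shown in brackets.
cat /sys/block/bcache0/bcache/cache_mode
# e.g.: writethrough [writeback] writearound none

# Switch the running device to writethrough (safer, no write gain):
echo writethrough > /sys/block/bcache0/bcache/cache_mode

# Or send writes directly to the backing device, bypassing the SSD:
echo writearound > /sys/block/bcache0/bcache/cache_mode
```

The change takes effect immediately and persists in the bcache superblock, so no remount is needed.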
To overcome some of the shortcomings of HDDs, RAID came to the rescue, with goals like better reliability, better performance and better capacity. Some tools like lvm also allow aggregating physical disks and presenting them as a single, bigger one. So bcache tries to take the good from all those technologies by adding another level of indirection. It uses the SSD for what it is good at, IOPS and random reads/writes, but by default leaves sequential reads/writes to the HDD/RAID devices. It also uses the SSD as a huge cache of many gigabytes, so that data can almost always be written sequentially to the HDD. Some SSDs nowadays do have better sequential reads and writes than a single HDD; they cannot compete with a SAS RAID on that front, but they are still way better at random IO. There is a tunable option to allow sequential writes and reads to be cached by bcache as well; in the examples that follow, it will not be used.

Today, it appears to be possible to use a combination of lvm, md (Linux RAID) and bcache, but no extensive tests have been done so far on this setup, and some compatibility issues may occur depending on the chipsets used. In the future, it will be possible to use a native RAID mechanism on caching devices, improving reliability and maybe also performance (e.g. RAID0).

In "writeback" mode, data written to the device is first written to the SSD, then copied to the backing device asynchronously; the write is considered complete at the end of the write to the SSD. For safety, the mechanism always ensures that no data is considered safe until it has been completely written to the backing device (dirty pages), so if a power outage happens while data is still only on the SSD, it will be pushed back to the backing device at the next boot. The goal here was to be as secure with bcache and Linux software RAID as with a hardware RAID device with a BBU.
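The sequential-IO tunable and the dirty-page mechanism mentioned above are both exposed through sysfs; a sketch, again assuming a device registered as bcache0:

```shell
# Sequential cutoff: IO streams larger than this bypass the SSD and go
# straight to the backing device. Setting it to 0 caches sequential IO
# too (it is left at its default in the examples that follow).
cat /sys/block/bcache0/bcache/sequential_cutoff
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

# Dirty data: how much is on the SSD but not yet on the backing device.
cat /sys/block/bcache0/bcache/dirty_data

# State of the backing device: "dirty" means some writes have not yet
# reached it; "clean" means everything has been written back.
cat /sys/block/bcache0/bcache/state
```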
To sum up: HDDs have great capacity and achieve good sequential read and write performance, but they are very slow on random reads and writes, so they do not offer a high level of IOPS. SSDs have very good overall performance, especially high IOPS, so random reads and writes are way better than on HDDs, but they lack capacity. Bcache is an attempt to take all the advantages of both SSD and HDD drives (or RAID devices).
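As a minimal sketch of putting this into practice (the device names /dev/sda3 and /dev/sdb1 are placeholders; make-bcache comes from the bcache-tools package):

```shell
# Wipe any old filesystem signatures first, as bcache will refuse to
# format a partition that still carries a stale superblock:
wipefs -a /dev/sda3        # backing HDD partition (placeholder)
wipefs -a /dev/sdb1        # caching SSD partition (placeholder)

# Create the backing device (-B) and the caching device (-C) in one
# step; given together, they are attached to each other automatically.
make-bcache -B /dev/sda3 -C /dev/sdb1

# The combined device appears as /dev/bcache0 and can be formatted:
mkfs.ext4 /dev/bcache0
```

From there, /dev/bcache0 is used like any other block device, and the sysfs knobs shown earlier control its behaviour.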