No Data Corruption & Data Integrity
What does the 'No Data Corruption & Data Integrity' motto mean to each web hosting account owner?
Data corruption is the process of files being damaged due to a hardware or software failure, and it is among the main problems that web hosting companies face: the larger a hard drive is and the more information is stored on it, the more likely it is for data to become corrupted. There are various fail-safes, yet the data often gets damaged silently, so neither the file system nor the administrators notice a thing. As a result, a corrupted file is treated as a regular one, and if the drive is part of a RAID, that file is copied to all the other drives. In theory this is done for redundancy, but in practice the damage only spreads. Once a file is corrupted, it becomes partially or entirely unreadable: a text document can no longer be opened, an image shows a random mix of colors if it opens at all, and an archive cannot be unpacked, so you risk losing your content. Although the most widely used server file systems include various integrity checks, they often fail to detect a problem early enough, or they need a long time to scan all the files, during which the server is not operational.
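The way a naive mirror spreads silent corruption can be sketched in a few lines of Python. Everything here is hypothetical (the file names, the two-dictionary "disks", the sync routine) and only illustrates the idea: replication without an integrity check faithfully copies damaged data.

```python
# Hypothetical sketch: a naive two-disk mirror that replicates writes
# without verifying content, so a silently corrupted file is copied as-is.

disk_a = {"report.txt": b"quarterly figures"}
disk_b = {}

def sync_mirror(src, dst):
    """Blindly copy every file from src to dst -- no integrity check."""
    for name, data in src.items():
        dst[name] = data

# A bit flips on disk A (silent corruption: no error is raised anywhere).
disk_a["report.txt"] = b"quarterly fjgures"

# The RAID dutifully mirrors the damaged file onto the second disk.
sync_mirror(disk_a, disk_b)
print(disk_b["report.txt"])  # prints b'quarterly fjgures'
```

The mirror cannot tell a healthy file from a damaged one, so redundancy alone makes the corruption more persistent, not less.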
No Data Corruption & Data Integrity in Cloud Hosting
We guarantee the integrity of the information uploaded to every cloud hosting account created on our cloud platform, because we use the advanced ZFS file system. ZFS was designed to prevent silent data corruption by keeping a checksum for every single file. We store your data on a large number of NVMe drives that operate in a RAID, so the very same files exist in several places at once. ZFS verifies the digital fingerprint of every file on all the drives in real time, and if the checksum of a file differs from what it should be, the file system replaces that file with a healthy copy from another drive in the RAID. On a file system that does not verify checksums on every read, data can become silently corrupted and the damaged file can be replicated across all drives over time, but since that cannot happen on a server running ZFS, you will not have to worry about the integrity of your data.
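The checksum-and-repair cycle described above can be sketched as follows. This is a simplified illustration, not ZFS itself: the drive names, the SHA-256 fingerprint, and the `read_with_healing` helper are all assumptions made for the example, standing in for the per-block checksums and self-healing that ZFS performs internally.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Digital fingerprint of a file's contents (SHA-256 for this sketch)."""
    return hashlib.sha256(data).hexdigest()

original = b"customer database"
expected = fingerprint(original)          # recorded when the file is written

# Two mirrored replicas; one has been silently corrupted on disk.
replicas = {
    "drive1": original,
    "drive2": b"customer dat\x00base",
}

def read_with_healing(replicas, expected):
    """Return a verified copy and overwrite any replica failing the check."""
    healthy = next(d for d in replicas.values()
                   if fingerprint(d) == expected)
    for drive, data in replicas.items():
        if fingerprint(data) != expected:
            replicas[drive] = healthy     # self-heal the damaged replica
    return healthy

data = read_with_healing(replicas, expected)
print(data == original)                   # prints True
print(replicas["drive2"] == original)     # prints True -- bad copy repaired
```

Because the check runs on every read rather than during a scheduled scan, a mismatch is caught and repaired before the damaged copy can be mirrored onto the other drives.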