With each passing day, the amount of data in the world is increasing. We create files and rarely delete them, preferring to store the data "just in case." And in business, there are even more stringent rules dictating the retention of more and more data. All this leads to the constant need for new storage concepts.
Data recovery, by definition, follows innovations in the storage industry. After all, it is impossible to learn how to recover something that has not been invented yet. On the other hand, the recent trend is that the tasks data recovery faces are becoming more and more complex; moreover, some of these tasks are just fundamentally unsolvable. (Learn more in Disaster Recovery: The 5 Things That Often Go Wrong.)
Complexity and Big Storage
Big storage takes more time to extract data from, since at a minimum you need to read and copy the entire capacity. For example, simply reading all the data from a 2 terabyte disk takes about 10 hours at an average read speed of 60 MB/s.
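The arithmetic behind that estimate is straightforward; this short sketch makes the assumption explicit that 1 TB is counted as 1,000,000 MB (decimal units, as drive vendors do) and that the read speed is sustained for the whole pass:

```python
# Back-of-the-envelope time to read a whole disk end to end.
# Assumes decimal units (1 TB = 1,000,000 MB) and a constant
# sequential speed; real drives slow down toward the inner tracks.

def read_time_hours(capacity_tb: float, speed_mb_s: float) -> float:
    """Hours needed to read capacity_tb terabytes at speed_mb_s MB/s."""
    total_mb = capacity_tb * 1_000_000
    return total_mb / speed_mb_s / 3600

print(f"{read_time_hours(2, 60):.1f} hours")  # about 9.3 hours, i.e. roughly 10
```

The same formula shows why capacity growth hurts recovery times linearly: a 20 TB disk at the same speed needs nearly four days for a single pass.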
On the other hand, big storage demands new storage technology. For a storage of several terabytes, RAID is enough. For an effectively functioning storage of dozens of terabytes, you need schemes that combine RAID fault tolerance with the efficiency of, say, the file system driver's block allocation algorithms. In practice, something like this is implemented in ZFS from Sun Microsystems and in Storage Spaces from Microsoft. The alternative is a large RAID with an uncommon layout, such as RAID 60.
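The fault tolerance these schemes build on can be illustrated with the simplest parity arrangement, RAID 5 style: the parity block is the XOR of the data blocks, so any one missing block can be rebuilt by XOR-ing the survivors. This is a minimal sketch only; real arrays add parity rotation, striping, and on-disk metadata, and rediscovering exactly that layout is what configuration recovery is about.

```python
# RAID 5-style parity in miniature: parity = d0 XOR d1 XOR d2.
# Losing any single block is survivable; XOR the rest to rebuild it.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)

# Suppose the disk holding d1 fails: rebuild its block from the rest.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```

Lose two blocks, however, and XOR parity alone cannot help, which is why larger pools move to double parity (RAID 6, RAID-Z2) or mirrored layouts.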
In the past, recovering data, whether from a camera memory card or a regular hard drive, required only file system recovery. Nowadays, when dealing with a complex storage system consisting of several physical disks, you first need to recover the storage configuration (i.e., how the separate disks work together to form a single storage unit). Only then can you proceed with file recovery.
Storage configuration recovery is a complex, non-trivial task with a relatively modest chance of success. Even when recovery succeeds, the task is very time consuming, so it is often easier to just dismiss the case as unrecoverable. In our practice, we once dealt with a failed 50 TB Storage Spaces pool, for which our recovery estimate was two to three months (note that simply reading 50 TB of data twice would take 40 days). When the client heard this, he refused the recovery attempt outright, conceding the case was unrecoverable.
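It is worth noting what that 40-day figure implies. Reading 50 TB twice means moving 100 TB; dividing by 40 days gives an effective throughput of roughly 29 MB/s, well below a healthy drive's sequential speed, which is plausible when imaging degraded hardware:

```python
# Effective throughput implied by reading a given volume in a given time.
# Assumes decimal units (1 TB = 1,000,000 MB).

def implied_mb_s(total_tb: float, days: float) -> float:
    return total_tb * 1_000_000 / (days * 86_400)

print(f"{implied_mb_s(100, 40):.0f} MB/s")  # about 29 MB/s
```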
Automatic Hardware Encryption
There is a type of modern disk that encrypts data even if you never ask it to. The best known are WD My Book drives. Data stays encrypted even when no password is set. This scheme makes quick password changes possible: changing the password only re-wraps the encryption key, rather than re-encrypting the entire disk. The only copy of that encryption key is stored in flash memory on the controller board. If the board burns out, the data is lost even though the user never made a conscious effort to encrypt it (or set a password). In such a case, the data can be recovered neither at home nor in a data recovery lab.
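Why key loss is terminal can be shown with a toy keystream cipher. This is an illustration only, not the drive's actual scheme (self-encrypting drives use AES in dedicated hardware), and the key value here is hypothetical; the point it demonstrates is real: with the exact key, decryption is trivial, and without it, the ciphertext is indistinguishable from noise.

```python
import hashlib

# Toy stream cipher: XOR the data with a SHA-256-derived keystream.
# Encryption and decryption are the same operation.
# NOT a real drive cipher; used only to illustrate key dependence.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

board_key = b"stored-only-in-board-flash"   # hypothetical: lives only on the PCB
plaintext = b"user files"
ciphertext = keystream_xor(board_key, plaintext)

# With the key, the data comes right back; with a burnt board, it never does.
assert keystream_xor(board_key, ciphertext) == plaintext
assert keystream_xor(b"any-other-guess", ciphertext) != plaintext
```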
Monolithic SD Cards
A monolithic memory card (often called a monolith) is designed so that the flash chip storing user data is inseparable from its controller: both the memory and the controller are combined into a single chip and encased in the plastic that forms the card body. If the controller fails in a regular 2.5’’ SSD, it is still possible to recover data from the standalone memory chips, bypassing the failed controller. If the controller fails on a monolithic memory card, recovery is difficult because there is no direct access to the memory. Sometimes a manufacturer leaves service connection points on an SD card, which can help in data recovery. However, almost every SD card model has its own layout of service connection points, and the cost of researching and developing a recovery technique for each is unacceptably high.
Any new technology is good until it fails. Typically, the newer the technology, the more complex it is, and data storage technology is no exception. Before you commit your data to some shiny new storage technology, assess the damage a failure would cause, because, considering all the above, data recovery might be of no use.