How to back up Kubernetes and Docker
You don’t have to back up everything about every container, but it’s important to back up configurations.
Block storage in the cloud that is not properly backed up can result in lost data, while object storage in the cloud is more resilient.
A recent Amazon outage resulted in a small number of customers losing production data stored in their accounts.
Vendors of hyperconverged infrastructure provide options for making it easier to back up data on-site, in the cloud or both, but which is best?
Efficient backup and recovery requires carefully planned use of multiple backup levels: when incremental and differential backups are properly scheduled, full backups aren't needed as often.
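The interplay of levels can be sketched as a simple rotation. The particular schedule below (full on Sunday, differential mid-week, incrementals otherwise) is an illustrative assumption, not a prescribed standard:

```python
from datetime import date

def backup_level(day: date) -> str:
    """Pick a backup level for a given day under one common rotation:
    full on Sunday, differential on Wednesday, incremental otherwise.
    (The rotation itself is an assumption for illustration.)"""
    weekday = day.weekday()  # Monday == 0 ... Sunday == 6
    if weekday == 6:
        return "full"
    if weekday == 2:
        return "differential"
    return "incremental"

# A restore needs: the last full, the most recent differential after it
# (if any), and every incremental taken after that differential.
week = [date(2023, 1, d) for d in range(1, 8)]  # Jan 1, 2023 is a Sunday
print([backup_level(d) for d in week])
# ['full', 'incremental', 'incremental', 'differential',
#  'incremental', 'incremental', 'incremental']
```

With this rotation, a Friday restore touches at most four backup sets (full, differential, two incrementals) instead of replaying a week of incrementals.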
By eliminating redundant blocks of data within a dataset – a process called deduplication – enterprises can reduce the size of backups by 90-99%.
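The mechanism can be sketched in a few lines: split the data into blocks, fingerprint each block, and store only one copy per fingerprint. Real deduplicating backup targets typically use variable-size chunking and much more elaborate indexing; fixed-size blocks keep this sketch simple:

```python
import hashlib

def dedup_ratio(data: bytes, block_size: int = 4096) -> float:
    """Split data into fixed-size blocks, identify each block by its
    SHA-256 digest, and report the fraction of space saved by storing
    only one copy of each unique block."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).digest() for b in blocks}
    return 1 - len(unique) / len(blocks)

# 100 identical 4 KiB blocks deduplicate to a single stored block:
data = b"\x00" * 4096 * 100
print(f"{dedup_ratio(data):.0%}")  # 99%
```

Repeated full backups of mostly unchanged data are exactly this situation, which is why 90-99% reductions are plausible for them.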
Backing up and archiving data have distinct functions, and not recognizing that it’s important to have both can lead to access problems and even legal troubles.
I have a major problem with the Windows archive bit, and so should you. At the very least, backup product vendors should give us the option of not using it - without penalty. Here's why: If the "ready for archiving" bit is set on a file in Windows, it indicates that a file is new or changed, and that it should be backed up in an incremental backup. Once this happens, the archive bit is cleared. Therefore, the first problem with the archive bit is that it should be called the backup bit, because backups are not archives.
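The bit's behavior, and the conflict it creates, can be modeled with a toy simulation (a plain dict standing in for real Win32 file attributes, not the actual API): any product that clears the bit starves every other product that relies on it.

```python
def incremental_backup(files: dict) -> list:
    """Simulate a Windows-style incremental: copy every file whose
    archive bit is set, then clear the bit. `files` maps a filename
    to a dict with an 'archive' flag (a toy model of the attribute)."""
    backed_up = [name for name, attrs in files.items() if attrs["archive"]]
    for name in backed_up:
        files[name]["archive"] = False
    return backed_up

files = {
    "report.docx": {"archive": True},   # new or changed since last backup
    "old.log":     {"archive": False},  # already backed up
}

# Product A runs first and clears the bit...
print(incremental_backup(files))  # ['report.docx']
# ...so product B, relying on the same bit, now sees nothing to back up.
print(incremental_backup(files))  # []
```

That second, empty run is the danger: the file changed, but only one of the two products ever captured it.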
If you have more than one state-of-the-art tape drive behind your LAN-based backup server, chances are you have too many. That's right. If you've got a LAN-based backup server, and behind it is more than one tape drive capable of 45 MB/s to 50 MB/s, then you should probably rethink your design. Let me explain.
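The arithmetic behind that claim can be sketched quickly. Assuming roughly 100 MB/s of usable throughput on a gigabit LAN link (an assumption; real numbers vary with protocol overhead), the link can barely keep two such drives streaming, and a drive fed below its native rate tends to stop and reposition repeatedly, which usually lowers overall throughput further:

```python
GIGABIT_LAN_MBPS = 100  # rough usable throughput of one gigabit link, MB/s (assumption)
DRIVE_MBPS = 45         # native streaming speed of one modern tape drive, MB/s

def drives_a_link_can_feed(link_mbps: float = GIGABIT_LAN_MBPS,
                           drive_mbps: float = DRIVE_MBPS) -> int:
    """How many tape drives one LAN link can keep streaming at full
    native speed. A drive fed more slowly than its native rate stops
    and repositions, hurting throughput rather than helping it."""
    return int(link_mbps // drive_mbps)

print(drives_a_link_can_feed())        # 2 at best, with no headroom at 45 MB/s
print(drives_a_link_can_feed(100, 50)) # and exactly 2 at 50 MB/s
```

So a backup server on a single gigabit link cannot usefully drive a larger drive farm; the LAN, not the tape hardware, is the bottleneck.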