I have made a pull request to add a -persist option to duplicacy, so that it continues processing despite encountering missing or corrupt file chunks. This allows a restore to complete even when file chunk errors are encountered, restricting the failure to the actually affected files. I think this is a useful improvement to the robustness of duplicacy in dealing with missing or corrupt chunks, given that data storage is seldom 100% reliable.

One remaining point of failure, however, is the metadata (or snapshot) chunks: a missing or corrupt metadata chunk will still cause a complete failure of the restore process. In addition, multiple backup revisions or snapshot IDs can reference the same metadata chunks if they refer to similar directory trees, meaning that a single metadata chunk can be essential to multiple snapshots. In my view, it would be useful if robustness around missing or corrupt metadata chunks could be improved.

One intuitive way of doing so is for each backup to keep a 'secondary' copy of its snapshot metadata. This is similar to what is done in various file systems, e.g. the backup MFT (in NTFS) and the backup FAT table (in FAT file systems). In principle, this could be done by preparing backup metadata sequences (e.g. "chunks2", "length2", "files2") and using them when required (perhaps with a -usesecondary option). One difficulty with preparing such a backup snapshot 'table' is ensuring that the secondary metadata chunks differ from the primary metadata chunks, so that they are actually duplicated rather than being deduplicated back into the same chunks.
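To illustrate why byte-identical secondary metadata would be deduplicated away, here is a minimal sketch of content-addressed chunk storage. The `chunk_id` and `secondary_copy` names, the marker-byte transform, and the metadata layout are all hypothetical illustrations, not duplicacy's actual format; the point is only that any deliberate difference in the bytes (here, a one-byte prefix) yields a different content hash and therefore a physically distinct chunk.

```python
import hashlib
import json

def chunk_id(data: bytes) -> str:
    """Content hash used as the chunk's storage key. Identical bytes
    yield identical IDs, so a byte-identical secondary copy would be
    deduplicated rather than stored as an independent chunk."""
    return hashlib.sha256(data).hexdigest()

def secondary_copy(metadata: bytes) -> bytes:
    """Hypothetical transform: prefix a marker byte so the content
    hash differs, forcing the store to keep a separate copy."""
    return b"\x01" + metadata

# Toy snapshot metadata (illustrative layout only).
meta = json.dumps({"chunks": ["a1", "b2"], "lengths": [4096, 1024]}).encode()

primary = chunk_id(meta)
secondary = chunk_id(secondary_copy(meta))

# Distinct IDs mean the storage backend holds two physical copies,
# which is exactly what a redundancy scheme wants.
assert primary != secondary
```

A restore path would then strip the marker byte again before parsing the secondary metadata; the cost of the scheme is roughly double the metadata storage, which is usually small relative to file data.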