What steps will reproduce the problem?
1. I have a backup archive file with lots of redundant data (OS files,
similar DB files, etc.). I want to use xdelta the way rzip works, to find
block matches within the same file. However, when I run xdelta without the
-S parameter it only uses about 13 MB of RAM and does not appear to find
block matches within the same file; gzip -9 outperforms xdelta
significantly in this scenario. So I do not think xdelta is even trying to
find block matches within the same file.
2. I would hope to achieve something like what rzip does. I did another
contrived test where I took one ~400 MB file (file-a.dat), concatenated a
smaller file (file-b.dat), and then concatenated the same ~400 MB file
again. The resulting file-c.dat was not compressed much at all by xdelta
when no source file was used (a rough sketch of this test is below). See
attachment for more details.
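Roughly, the test in step 2 looked like the following Python sketch. The file names and the -9 level are the ones mentioned above; the exact commands I ran may have differed slightly, so treat this as an illustration of the setup rather than a verbatim transcript.

```python
import subprocess
from pathlib import Path

# file-a.dat (~400 MB) and file-b.dat are assumed to already exist
# in the working directory.
a = Path("file-a.dat").read_bytes()
b = Path("file-b.dat").read_bytes()

# Build file-c.dat = file-a + file-b + file-a, so roughly half of the
# bytes are an exact repeat of an earlier block in the same file.
Path("file-c.dat").write_bytes(a + b + a)

# Compress with xdelta3 in compress-only mode (no -s source file),
# which is the case where same-file block matches would have to be found.
subprocess.run(["xdelta3", "-e", "-9", "file-c.dat", "file-c.xd3"], check=True)

# Compress the same input with gzip -9 for comparison.
with open("file-c.gz", "wb") as out:
    subprocess.run(["gzip", "-9", "-c", "file-c.dat"], stdout=out, check=True)

# Compare the resulting sizes.
for name in ("file-c.dat", "file-c.xd3", "file-c.gz"):
    print(name, Path(name).stat().st_size)
```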
What is the expected output? What do you see instead?
I expect an output file that is much smaller than what I am getting, since
the redundant blocks from the common OS files and common database pages
should be eliminated.
What version of the product are you using? On what operating system?
3.0q on Windows XP SP2
Please provide any additional information below.
Original issue reported on code.google.com by [email protected] on 14 May 2007 at 3:14