|
Post by Admin on May 8, 2024 14:27:47 GMT
For those of us who have paid XP-Dev for S3 backup services, the possibility exists to restore our backed-up repositories. The issue is that the tools for recovery are on the XP-Dev servers, which are not currently functioning. An XP-Dev user, Arne, had a copy of the tools on hand. He has graciously shared them here: drive.google.com/file/d/1P6rpg_a4OVP5c0IMq6IGIYB7dY0O8mIb
The tools were written in Python. There were some issues with continuity of the commit chains that I am looking at now, but I am not a Python expert. I am attempting to recover to a new SVN repository. The recovery scripts do not work with Python 3, so Python 2 must be installed on the recovery machine. The description from Arne of the recovery process is:
1. Download and unzip the recovery tools above.
2. Install Python 2.
3. Run downloadrepo.py as "downloadrepo.py <amazon aws access key> <amazon aws private key> <bucket name> <repository unique id>" to download the backup and aggregate the single-commit dumps into an overall dump. This needs to be performed once for each repository, varying the <bucket name> and <repository unique id> arguments to the downloadrepo.py call above.
4. The aggregated single-commit file can then be loaded into a newly created (empty) SVN repository with "gunzip -c my_repo-0-<last revision>.dump.gz | svnadmin load /path/to/new/repo".
I have executed the downloadrepo script and am on to the next steps.
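For reference, the aggregation in steps 3-4 can also be approximated by hand: concatenate the decompressed per-revision dumps in numeric order and feed the result to svnadmin load. A rough Python 3 sketch, assuming the `repo-<start>-<end>.dump.gz` naming seen in the backups (untested against the actual XP-Dev files):

```python
import glob
import gzip
import re

def start_revision(name):
    # Extract the starting revision from names like "my_repo-12-12.dump.gz"
    # so the files can be sorted numerically without renaming them.
    m = re.search(r'-(\d+)-(\d+)\.dump\.gz$', name)
    return int(m.group(1))

def concatenate(files, out_path):
    # Decompress each per-revision dump in revision order and append it to
    # one combined dump that svnadmin load can read in a single pass.
    with open(out_path, 'wb') as out:
        for name in sorted(files, key=start_revision):
            with gzip.open(name, 'rb') as f:
                out.write(f.read())

# Usage (after running downloadrepo.py):
#   concatenate(glob.glob('*.dump.gz'), 'combined.dump')
# then:
#   svnadmin load /path/to/new/repo < combined.dump
```

Sorting numerically on the parsed revision number sidesteps the alphabetical-ordering problem with unpadded filenames.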
|
|
arne
New Member
Posts: 3
|
Post by arne on May 8, 2024 15:04:04 GMT
|
|
|
Post by jltransus on May 8, 2024 15:36:32 GMT
Arne,
I have executed downloadrepo.py, and it does not appear to have combined the individual commits into one dump. I see many .gz files but cannot identify any one of them as the combination, either by name or by file size.
Edit: Perhaps it is in the final .gz. Edit: Negative, I simply have many individual commit files. And the backups started 164 revisions in, so it appears I need the structure and data of the repository as of revision 163 in order to restore the rest individually.
|
|
arne
New Member
Posts: 3
|
Post by arne on May 9, 2024 6:13:53 GMT
Yes, the backups of some of my other repositories also had .dump.gz files for individual commits/revisions but no aggregated .dump.gz file. I added leading zeros to the revision numbers so that all revision numbers had the same number of digits and sorting the .dump.gz files alphabetically ordered them chronologically (e.g., "repo-3-3.dump.gz" becomes "repo-003-003.dump.gz", so that this file comes *before*, e.g., "repo-200-200.dump.gz"). Then, I ran:
gunzip -c `ls -tr *.dump.gz` | svnadmin load /path/to/new/empty/repo
Some other repos had a mix of .dump.gz files for individual commits/revisions and .dump.gz files for ranges of revisions. I removed some of these files so that each revision was included only once (either in a .dump.gz file for that individual commit/revision or in a .dump.gz file for a range of revisions that included it), added leading zeros to the revision numbers as described above, and used the same command as above to create a repository.
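Pruning the mixed individual and range files by hand is easy to get wrong, so a small check that every revision appears exactly once may help. A sketch in Python, again assuming the `repo-<start>-<end>.dump.gz` naming:

```python
import re
from collections import Counter

def revision_coverage(filenames):
    # Count how many dump files claim each revision, parsing the start
    # and end revisions out of names like "repo-5-9.dump.gz".
    counts = Counter()
    for name in filenames:
        m = re.search(r'-(\d+)-(\d+)\.dump\.gz$', name)
        if m:
            for rev in range(int(m.group(1)), int(m.group(2)) + 1):
                counts[rev] += 1
    return counts

def report(filenames):
    # Return revisions covered more than once and revisions missing from
    # the overall range; either would break a combined load.
    counts = revision_coverage(filenames)
    duplicated = sorted(r for r, n in counts.items() if n > 1)
    if not counts:
        return duplicated, []
    missing = [r for r in range(min(counts), max(counts) + 1) if r not in counts]
    return duplicated, missing
```

Any revision reported as duplicated still appears in two overlapping files and must be removed from one of them before loading; any revision reported as missing has no dump at all.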
As the statistical software R is my favourite scripting software, I used it to add leading zeros to the names of .dump.gz files, e.g. for adding a leading zero to three-digit revision numbers, I used:
files <- list.files(path = "./", pattern = "*.dump.gz", full.names = FALSE)
new_names <- sub("-([0-9]{3})-", "-0\\1-", sub("-([0-9]{3})\\.", "-0\\1.", files))
file.rename(from = files, to = new_names)
Well, there are certainly more elegant ways to do this...
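For anyone without R handy, roughly the same rename can be sketched in Python, padding every revision number to a fixed width in one pass instead of handling one digit count at a time (the `.dump.gz` naming is assumed as above):

```python
import os
import re

def padded_name(name, width=6):
    # Zero-pad both revision numbers in names like "repo-3-3.dump.gz" so
    # that alphabetical order matches numeric (chronological) order.
    return re.sub(r'-(\d+)-(\d+)\.dump\.gz$',
                  lambda m: '-%s-%s.dump.gz' % (m.group(1).zfill(width),
                                                m.group(2).zfill(width)),
                  name)

def rename_all(directory='.'):
    # Rename every dump file in the directory to its padded form.
    for name in os.listdir(directory):
        if name.endswith('.dump.gz'):
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, padded_name(name)))
```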
|
|
|
Post by jltransus on May 12, 2024 19:31:06 GMT
Thanks Arne,
It appears that I cannot restore anything. My dumps start at rev 140. If I try to load that dump, it complains that the earlier transactions are missing. Supposedly, when backups are turned on in XP-Dev, there is one full dump of the repo, and afterwards the dumps are incremental:
"The first time you enable Amazon S3 backups on your repository, an initial backup file of your repository will be uploaded to your bucket. Subsequent commits will be backed up in real-time as you commit data into your repository."
It appears that the initial backup is incremental as well for me, in all of my repositories.
Perhaps I can create a repository from the earliest clean working copy I have that is somewhere in the range of the backup dumps I have.
|
|
|
Post by jothiraj on May 16, 2024 18:33:39 GMT
Admin, yesterday we enabled the AWS backup, but the backup was never initiated. What should I do now?
I need that data. Please help me resolve this issue, Admin or anyone.
|
|
mlr
New Member
Posts: 3
|
Post by mlr on May 22, 2024 7:44:01 GMT
We are finding that a lot of our backup data on S3 is in Glacier, so we have to retrieve it from there first.
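Glacier-class objects have to be restored back to S3 before they can be downloaded, and with a modern SDK that restore can be requested per object. A sketch using boto3 (a third-party SDK, not part of this thread's tooling; bucket name and prefix are whatever your XP-Dev backups used):

```python
def needs_restore(storage_class):
    # Glacier-family storage classes must be restored before the
    # objects can be downloaded normally.
    return storage_class in ('GLACIER', 'GLACIER_IR', 'DEEP_ARCHIVE')

def request_glacier_restore(bucket, prefix='', days=7, tier='Standard'):
    # Ask S3 to temporarily restore every Glacier-class object under
    # the prefix; restores take minutes to hours depending on the tier.
    import boto3  # third-party SDK, assumed installed
    s3 = boto3.client('s3')
    pages = s3.get_paginator('list_objects_v2').paginate(Bucket=bucket,
                                                         Prefix=prefix)
    for page in pages:
        for obj in page.get('Contents', []):
            if needs_restore(obj.get('StorageClass', '')):
                s3.restore_object(
                    Bucket=bucket,
                    Key=obj['Key'],
                    RestoreRequest={'Days': days,
                                    'GlacierJobParameters': {'Tier': tier}})
```

Once the restores complete, the .dump.gz files can be downloaded as usual for the recovery steps above.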
Does anyone have a suggestion for a new hosting site? We need both SVN and Git. Thank you.
|
|
aj
New Member
Posts: 1
|
Post by aj on Jul 17, 2024 21:35:44 GMT
Thank you for creating this forum and for the assistance. We have just run into the issue ourselves. We have the AWS S3 backup initiated, but this is our first time trying to restore from it.
"downloadrepo.py <amazon aws access key> <amazon aws private key> <bucket name> <repository unique id>"
We are having access issues because the code is from 2009 and uses an old API method. Did this work for you without any changes?
Also, what exactly is the "repository unique id"?
thanks
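On the "repository unique id" question: one hedged way to find candidates is to list the bucket's keys and look at their prefixes, on the (unverified) assumption that XP-Dev stored each repository's dumps under a per-repository prefix. A sketch:

```python
def repo_ids_from_keys(keys):
    # Collect the distinct first path component of each backup key.
    # This assumes (unverified) that XP-Dev keyed each repository's
    # dumps under a per-repository prefix in the bucket.
    return sorted({key.split('/', 1)[0] for key in keys if '/' in key})

# The keys themselves can be listed with, e.g.:
#   aws s3 ls s3://<bucket name> --recursive
```

Whatever distinct prefixes come back are worth trying as the <repository unique id> argument to downloadrepo.py.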
|
|