Wayback Machine Downloader – Download an Entire Website from the Wayback Machine

Download an entire website from the Internet Archive Wayback Machine.

Installation

You need to install Ruby on your system (>= 1.9.2) – in case you don't already have it.
Then run:

gem install wayback_machine_downloader

Tip: If you run into permission errors, you may need to add sudo in front of this command.
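
If you are not sure whether a recent enough Ruby is installed, you can check the version first; anything reported as 1.9.2 or later will work:

ruby -v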

Basic Usage

Run wayback_machine_downloader with the base url of the website you want to retrieve as a parameter (e.g., http://example.com):

wayback_machine_downloader http://example.com

How it works

It will download the last version of every file present on the Wayback Machine to ./websites/example.com/. It will also re-create the directory structure and auto-create index.html pages so the result works seamlessly with Apache and Nginx. All downloaded files are the original ones, not the Wayback Machine rewritten versions. This way, URLs and link structure are the same as before.
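
For a quick local preview before putting the result behind Apache or Nginx, one option is Ruby's built-in static file server (webrick, bundled with older Rubys; on Ruby 3 or later you may need to run gem install webrick first), pointed at the downloaded directory:

ruby -run -e httpd websites/example.com -p 8080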

Advanced Usage

Usage: wayback_machine_downloader http://example.com

Download an entire website from the Wayback Machine.

Optional options:
    -d, --directory PATH             Directory to save the downloaded files into
                                     Default is ./websites/ plus the domain name
    -s, --all-timestamps             Download all snapshots/timestamps for a given website
    -f, --from TIMESTAMP             Only files on or after timestamp provided (ie. 20060716231334)
    -t, --to TIMESTAMP               Only files on or before timestamp provided (ie. 20100916231334)
    -e, --exact-url                  Download only the url provided and not the full site
    -o, --only ONLY_FILTER           Restrict downloading to urls that match this filter
                                     (use // notation for the filter to be treated as a regex)
    -x, --exclude EXCLUDE_FILTER     Skip downloading of urls that match this filter
                                     (use // notation for the filter to be treated as a regex)
    -a, --all                        Expand downloading to error files (40x and 50x) and redirections (30x)
    -c, --concurrency NUMBER         Number of multiple files to download at a time
                                     Default is one file at a time (ie. 20)
    -p, --maximum-snapshot NUMBER    Maximum snapshot pages to consider (Default is 100)
                                     Count an average of 150,000 snapshots per page
    -l, --list                       Only list file urls in a JSON format with the archived timestamps, won't download anything
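
These options can generally be combined in a single run. For instance, a backup restricted to a date range and to HTML files, downloaded eight at a time, might look like this (values are illustrative):

wayback_machine_downloader http://example.com --from 20060101 --to 20101231 --only "/\.html?$/i" --concurrency 8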

Specify directory to save files to

Optional. By default, Wayback Machine Downloader downloads files to ./websites/ followed by the domain name of the website. You may want to save files in a specific directory using this option.

Example:

wayback_machine_downloader http://example.com --directory downloaded-backup/

All Timestamps

Optional. This option will download all timestamps/snapshots for a given website. It will use the timestamp of each snapshot as the directory name.

Example:

wayback_machine_downloader http://example.com --all-timestamps 

Will download:
	websites/example.com/20060715085250/index.html
	websites/example.com/20051120005053/index.html
	websites/example.com/20060111095815/img/tag.png
	...

From Timestamp

Optional. You may want to supply a from timestamp to lock your backup to a specific version of the website. Timestamps can be found inside the urls of the regular Wayback Machine website (e.g., https://web.archive.org/web/20060716231334/http://example.com). You may also use years (2006), years + month (200607), and so on. It can be used in combination with the To Timestamp option.
Wayback Machine Downloader will then fetch only file versions on or after the timestamp specified.

Example:

wayback_machine_downloader http://example.com --from 20060716231334

To Timestamp

Optional. You may want to supply a to timestamp to lock your backup to a specific version of the website. Timestamps can be found inside the urls of the regular Wayback Machine website (e.g., https://web.archive.org/web/20100916231334/http://example.com). You may also use years (2010), years + month (201009), and so on. It can be used in combination with the From Timestamp option.
Wayback Machine Downloader will then fetch only file versions on or before the timestamp specified.

Example:

wayback_machine_downloader http://example.com --to 20100916231334
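
Since the two timestamp options can be combined, you can also restrict the backup to a window between two dates, for example (timestamps are illustrative):

wayback_machine_downloader http://example.com --from 20060716231334 --to 20100916231334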

Exact Url

Optional. If you want to retrieve only the file matching exactly the url provided, you can use this flag. It will avoid downloading anything else.

For example, if you only want to download the html homepage file of example.com:

wayback_machine_downloader http://example.com --exact-url
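
The flag should also accept a deeper url if you want a single archived page rather than the homepage, for example (path is illustrative):

wayback_machine_downloader http://example.com/about.html --exact-url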

Only URL Filter

Optional. You may want to retrieve only files of a certain type (e.g., .pdf, .jpg, .wrd…) or files in a specific directory. To do so, you can supply the --only flag with a string or a regex (using the '/regex/' notation) to limit which files Wayback Machine Downloader will download.

For example, if you only want to download files inside a specific directory called my_directory:

wayback_machine_downloader http://example.com --only my_directory

Or if you want to download all images and nothing else:

wayback_machine_downloader http://example.com --only "/\.(gif|jpg|jpeg)$/i"

Exclude URL Filter

 -x, --exclude EXCLUDE_FILTER

Optional. You may want to skip files of a certain type (e.g., .pdf, .jpg, .wrd…) or files in a specific directory. To do so, you can supply the --exclude flag with a string or a regex (using the '/regex/' notation) to limit which files Wayback Machine Downloader will download.

For example, if you want to avoid downloading files inside my_directory:

wayback_machine_downloader http://example.com --exclude my_directory

Or if you want to download everything except images:

wayback_machine_downloader http://example.com --exclude "/\.(gif|jpg|jpeg)$/i"

Expand downloading to all file types

Optional. By default, Wayback Machine Downloader limits itself to files that responded with a 200 OK code. If you also need error files (40x and 50x codes) or redirection files (30x codes), you can use the --all or -a flag and Wayback Machine Downloader will download them in addition to the 200 OK files. It will also keep empty files, which are removed by default.

Example:

wayback_machine_downloader http://example.com --all

Only list files without downloading

It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything. It's useful for debugging or for feeding another application.

Example:

wayback_machine_downloader http://example.com --list
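
Since the list is printed as JSON, it can be redirected to a file for later inspection or for another application to consume, for example:

wayback_machine_downloader http://example.com --list > example.com-files.json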

Maximum number of snapshot pages to consider

-p, --snapshot-pages NUMBER    

Optional. Specify the maximum number of snapshot pages to consider. Count an average of 150,000 snapshots per page. 100 is the default maximum number of snapshot pages and should be enough for most websites. Use a bigger number if you want to download a very large website.

Example:

wayback_machine_downloader http://example.com --snapshot-pages 300    

Download multiple files at a time

Optional. Specify the number of files you want to download at the same time. This allows you to speed up the download of a website significantly. The default is to download one file at a time.

Example:

wayback_machine_downloader http://example.com --concurrency 20

Using the Docker image

As an alternative installation method, we have a Docker image! Fetch the wayback-machine-downloader Docker image this way:

docker pull hartator/wayback-machine-downloader

Then, you should be able to use the Docker image to download websites. For example:

docker run --rm -it -v $PWD/websites:/websites hartator/wayback-machine-downloader http://example.com
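
Any of the options described above can be appended in the same way, since arguments after the image name are handed to the downloader just like the url in the example above, for instance (concurrency value is illustrative):

docker run --rm -it -v $PWD/websites:/websites hartator/wayback-machine-downloader http://example.com --concurrency 20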

Contributing

Contributions are welcome! Just submit a pull request via GitHub.

To run the tests:

bundle install
bundle exec rake test
