Linux Basics: How to Download Files on the Shell With Wget
Version: 1.0
Last Edited: June 25th, 2014
Contents
- 1.1 Wget - An Overview
- 1.2 Good to Know
- 1.3 Basic - Downloading One File
- 1.4 Download and Save the File using a Different Name
- 1.5 For Limiting the Speed of Download
- 1.6 Resuming a Stopped/Interrupted Download
- 1.7 Continuing the Download Process in the Background
- 1.8 Customizing the Number of Attempts (Increasing/Decreasing)
- 1.9 Reading a File for Multiple Downloads
- 1.10 Emulating a Complete Website
- 1.11 Rejection of Specific File Types
- 1.12 FTP Downloads
1.1 Wget - An Overview
Wget is a popular, user-friendly, free command-line utility for non-interactive downloading of files from the web. wget handles large files, multiple files, recursive downloads, and the HTTP, HTTPS, and FTP protocols. The basic syntax is shown below; the rest of this tutorial covers the most common download scenarios.
wget [option] [URL]
1.2 Good to Know
While a download is in progress, wget displays the following:
- Download progress (as a percentage)
- Amount of data downloaded
- Download speed
- Estimated time remaining until the download completes
The sections below walk through the download scenarios you are most likely to encounter when downloading files on the Linux shell with wget:
1.3 Basic - Downloading One File
In the most elementary case, you execute wget without any options, passing only the URL of the file to be downloaded:
wget [URL]
This downloads the file into the current directory, keeping the name it has on the server.
1.4 Download and Save the File using a Different Name
This is an extension of the previous step, useful when you want to store the download under a clearer, more readable name. Add the -O option to the basic command:
wget -O [Preferred_Name] [URL]
The file is saved under the name you specify rather than the name taken from the URL.
1.5 For Limiting the Speed of Download
By default, wget uses as much bandwidth as the connection allows. You can cap the download speed at a chosen value with the --limit-rate option:
wget --limit-rate=[VALUE] [URL]
Specify the preferred speed in the VALUE field. Add the suffix "k" for kilobytes or "m" for megabytes per second, e.g. "--limit-rate=2m" limits the download speed to 2 MB/s.
1.6 Resuming a Stopped/Interrupted Download
If a large download is interrupted (by a dropped connection, a reboot, and so on), you do not have to start the tedious process all over again. Execute wget with the -c option to resume the transfer from where it stopped:
wget -c [URL]
wget inspects the partially downloaded file and continues from its current length (provided the server supports resuming), so the completed portion is not transferred again.
1.7 Continuing the Download Process in the Background
When downloading a huge file, you may prefer to send the process to the background and keep using the shell prompt. Execute wget with the -b option; progress is then written to a wget-log file in the working directory:
wget -b [URL]
You can follow the download progress by tailing the wget-log file:
tail -f wget-log
This frees the shell prompt while the massive file downloads in the background, and still lets you keep a tab on progress.
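The -b workflow can be tried without a network connection by substituting a stand-in job for wget; the log file name bg-demo.log below is made up for this sketch, but it is monitored exactly the way wget-log would be:

```shell
# Stand-in for "wget -b [URL]": a background job that appends
# progress lines to a log, just as wget writes to wget-log.
( for pct in 25 50 75 100; do echo "progress: ${pct}%"; done ) > bg-demo.log &
wait  # let the background job finish

# "tail -f wget-log" would follow a live log; -n prints the last lines once.
tail -n 2 bg-demo.log
```

With a real download, replace the background job with `wget -b [URL]` and tail `wget-log` instead.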
1.8 Customizing the Number of Attempts (Increasing/Decreasing)
By default, wget retries a download up to 20 times when internet connectivity is lost or disrupted. You can change this number to suit your preference with the --tries option:
wget --tries=[DESIRED_VALUE] [URL]
Specify the preferred number in the DESIRED_VALUE field to control how many retries are attempted after an interruption.
1.9 Reading a File for Multiple Downloads
To download multiple files, prepare a text file listing the URLs of all the files to be fetched, one per line, and pass it to wget with the -i option:
wget -i [TEXT-FILE-NAME]
wget then works through the list, downloading each URL in turn.
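A minimal sketch of the workflow; the URLs below are placeholders, so the wget call itself is left commented out:

```shell
# Build a plain-text list, one URL per line (hypothetical addresses):
cat > download-list.txt <<'EOF'
https://example.com/files/archive-1.tar.gz
https://example.com/files/archive-2.tar.gz
ftp://example.com/pub/image.iso
EOF

# A single invocation then fetches every entry (needs network access):
# wget -i download-list.txt
echo "queued $(grep -c '' download-list.txt) URLs"
```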
1.10 Emulating a Complete Website
If you wish to keep a local copy of a website for offline reading, or to back up your blog to the hard disk, execute wget with the --mirror option:
wget --mirror [Website Name]
This mirrors the desired website to the local disk so it can be browsed offline, saving you the hassle of visiting the site again and again.
1.11 Rejection of Specific File Types
To download an entire website while skipping files of a particular type (videos or images, for example), use the --reject option:
wget --reject=[FILE-TYPE] [URL]
This downloads the website in its entirety while skipping the specified file types. Multiple extensions can be given as a comma-separated list, e.g. --reject=jpg,gif,png.
1.12 FTP Downloads
FTP downloads come in two types:
1. Anonymous FTP download
2. Authenticated FTP download
There is a corresponding command for each.
For Anonymous FTP downloading, please use the following command:
wget [FTP-URL]
For Authenticated FTP Download, please use the following command:
wget --ftp-user=[USERNAME] --ftp-password=[PASSWORD] [URL]
Each of the above commands performs the corresponding FTP download.
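Note that --ftp-password exposes the password in the shell history and process list. A common alternative is a netrc file, which wget reads automatically; the sketch below writes a demo copy (hostname and credentials are placeholders) rather than touching the real ~/.netrc:

```shell
# Store FTP credentials in netrc format instead of on the command line.
cat > netrc-demo <<'EOF'
machine ftp.example.com
login myuser
password mypass
EOF
chmod 600 netrc-demo   # credential files should not be world-readable

# Real usage: put these lines in ~/.netrc, then authenticate implicitly:
# wget ftp://ftp.example.com/pub/file.tar.gz
grep "^machine" netrc-demo
```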
https://www.howtoforge.com/linux-basics-how-to-download-files-on-the-shell-with-wget