Linux is a powerful operating system that is widely used for a variety of tasks. The two most common tools for interacting with the web via the command line are wget and curl. These tools are essential for anyone who needs to download files, make HTTP requests, or interact with Internet resources directly from the terminal. This guide will introduce you to the basics of both the wget and curl commands, discuss their differences, and show you how to use them effectively.
wget is a command-line utility that allows you to download files from the web. It supports the HTTP, HTTPS, and FTP protocols, making it a versatile tool for interacting with web resources. To get started, first check whether wget is installed on your Linux system. Open your terminal and type:
wget --version
If wget is installed, you'll see its version number. If it isn't, you can install it using your distribution's package manager. For example, on Ubuntu or Debian, use:
sudo apt-get install wget
On Fedora, use:
sudo dnf install wget
Once wget is installed, you can use it to download files. The easiest way to use wget is to type the command followed by the URL of the file you want to download:
wget http://example.com/file.txt
This command will download file.txt from http://example.com and save it to the current directory. wget uses the current directory as the base directory for saving files unless you specify otherwise.
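If you want the file to land somewhere other than the current directory, wget also provides the -P option to choose a target directory and the -O option to choose the output file name. A minimal sketch, with placeholder paths and URL:
# Save the file into the downloads/ directory
wget -P downloads/ http://example.com/file.txt
# Save the file under a different name
wget -O renamed.txt http://example.com/file.txt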
One of the most useful features of wget is its ability to resume interrupted downloads. If a download was cut off partway, you can pick it up again with the -c option:
wget -c http://example.com/largefile.zip
If largefile.zip has already been partially downloaded, this command will resume the download from where it stopped.
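On an unreliable connection it can help to combine -c with wget's retry options; --tries sets how many attempts are made and --timeout caps how long each attempt may hang. A sketch with placeholder values:
# Resume the download, retrying up to 10 times with a 30-second timeout per attempt
wget -c --tries=10 --timeout=30 http://example.com/largefile.zip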
To download multiple files with wget, you can use a text file containing a list of all the URLs you want to download. For example, create a file named urls.txt with one URL per line, then pass it to wget with the -i option:
wget -i urls.txt
This command will download every file listed in urls.txt.
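For reference, urls.txt is just a plain text file with one URL per line; the entries below are placeholders:
http://example.com/file1.txt
http://example.com/file2.zip
http://example.com/images/photo.jpg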
wget also supports recursive downloads, which can be useful for mirroring an entire website. To make wget download files recursively, use the -r option:
wget -r http://example.com
This command will download all files linked from the specified webpage, up to a default link depth of 5.
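In practice you will usually want to rein in a recursive download. The sketch below, using a placeholder URL, limits the recursion depth with -l, avoids climbing to parent directories with --no-parent, and rewrites links for offline viewing with -k:
# Mirror two levels deep, stay below the starting directory, and fix links for local browsing
wget -r -l 2 --no-parent -k http://example.com/docs/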
Sometimes you may want to limit the download speed to avoid consuming too much bandwidth. You can use the --limit-rate option to do this:
wget --limit-rate=100k http://example.com/file.iso
This command will download the file at a maximum speed of 100 kilobytes per second.
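The rate value accepts k and m suffixes, and it combines naturally with -c so a throttled download can still be resumed if it is interrupted. A sketch with placeholder values:
# Resume the download while capping the speed at 2 megabytes per second
wget -c --limit-rate=2m http://example.com/file.iso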
curl is another command-line tool for interacting with web resources. It is more versatile than wget, especially when it comes to making requests other than simple file downloads. To check if curl is installed, type:
curl --version
If curl is not installed, you can install it using the following:
For Ubuntu or Debian:
sudo apt-get install curl
For Fedora:
sudo dnf install curl
The simplest use of curl is to display the contents of a URL directly in the terminal. You can achieve this by typing:
curl http://example.com
This command will fetch the content from http://example.com and display it in your terminal window.
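Because curl writes to standard output, it pipes cleanly into other commands, and the -I option fetches only the response headers. A quick sketch using placeholder URLs and search text:
# Show only the HTTP response headers
curl -I http://example.com
# Fetch the page quietly and search it for a word
curl -s http://example.com | grep "Example"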
Unlike wget, curl does not save files by default. To save the output to a file, use the -o option followed by the desired file name:
curl -o file.txt http://example.com/file.txt
This command will download the remote file.txt and save it under the same name in the current directory.
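If you simply want to keep the name the file has on the server, the -O option (capital O) saves the output under its remote name, and you can repeat it to grab several files in one call. A sketch with placeholder URLs:
# Save each file under the name it has on the server
curl -O http://example.com/file1.txt -O http://example.com/file2.txt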
To resume a partially downloaded file with curl, use the -C - option:
curl -C - -o file.txt http://example.com/file.txt
This command will resume the download and continue from where it left off.
curl is particularly powerful for making HTTP requests and can handle a variety of request types, such as POST, PUT, and DELETE. To make a GET request (which is the default), you can simply type:
curl http://api.example.com/data
For a POST request, you can send data using the -d option:
curl -d "param1=value1&param2=value2" http://api.example.com/post
This sends a POST request with the parameters param1 and param2.
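GET and POST are not the only possibilities; the -X option lets you name the HTTP method explicitly, which is how you issue PUT or DELETE requests. A sketch against a hypothetical endpoint:
# Update a resource with PUT
curl -X PUT -d "param1=newvalue" http://api.example.com/data/42
# Remove it with DELETE
curl -X DELETE http://api.example.com/data/42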
curl allows you to add custom headers to your requests, which can be useful for API interactions. Use the -H option to include headers:
curl -H "Authorization: Bearer token" http://api.example.com/data
This command adds an Authorization header with a bearer token.
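You can repeat -H to send several headers at once, which is the usual pattern for JSON APIs that also require authentication. A sketch with a placeholder token and endpoint, saving the response to a file:
# Send an authenticated request asking for JSON and save the response
curl -H "Authorization: Bearer token" -H "Accept: application/json" -o response.json http://api.example.com/data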
Both wget and curl can use a proxy server when making requests. For wget, you can set a proxy by configuring the http_proxy and https_proxy environment variables:
export http_proxy="http://proxyaddress:port"
wget http://example.com/file.txt
For curl, you can specify a proxy using the -x option:
curl -x http://proxyaddress:port http://example.com/file.txt
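curl also honors the same http_proxy and https_proxy environment variables, and if the proxy requires credentials you can supply them with the -U (--proxy-user) option. A sketch with placeholder values:
export https_proxy="http://proxyaddress:port"
curl https://example.com/file.txt
# Or authenticate against the proxy explicitly
curl -x http://proxyaddress:port -U username:password http://example.com/file.txt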
Both wget and curl are extremely useful tools for interacting with web resources from the command line in Linux. wget is excellent for direct file downloads and supports features such as recursive downloading, while curl provides more flexibility for interacting with web services and handling different types of HTTP requests. Understanding how to use these tools efficiently can greatly enhance your workflow and productivity when working on the Linux command line.
In conclusion, mastering the wget and curl commands is a valuable asset for anyone who frequently works with web interactions. Whether it's downloading files, accessing APIs, or scraping data from websites, these tools provide robust mechanisms to achieve your goals directly from the terminal.