curl is a command-line program for transferring data to or from a server using any of its supported protocols (HTTP, FTP, IMAP, POP3, SCP, SFTP, SMTP, TFTP, TELNET, LDAP, FILE, etc.). libcurl is the library that powers curl. Because it is designed to work without user intervention, curl is well suited to automation. Let us look at some cURL examples that are helpful for beginners.

cURL Commands

1. Obtain a Single File

The following command will get the URL’s content and print it to STDOUT (i.e., on your terminal).

$ curl http://www.centos.org

2. Save the output

Using the -o/-O arguments, we may store the result of the curl operation in a file.

-o (lowercase o) saves the output to the filename specified on the command line.

-O (uppercase O) takes the filename from the URL and uses it as the name of the saved file.

$ curl -o mygettext.html http://www.google.com/software/gettext/manual/gettext.html

The page gettext.html will now be stored in the file 'mygettext.html'. It is also worth noting that when you run curl with the -o option, it displays a progress meter for the download, as seen below.

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
 66 1215k   66  805k    0     0  34060      0  0:00:37  0:00:24  0:00:13 45900
100 1215k  100 1215k    0     0  39994      0  0:00:31  0:00:31 --:--:-- 68987

When you use curl -O (uppercase O), the content is saved in the file 'gettext.html' on your local system.

$ curl -O http://www.google.com/software/gettext/manual/gettext.html

To avoid cluttering the output, curl disables the progress meter when it has to send data to the terminal.
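
If you want to control this behaviour explicitly, two long-standing curl options help: -s (--silent) hides the progress meter (and error messages, unless you also add -S), while -# (--progress-bar) replaces the default meter with a simpler bar. For example:

$ curl -s -O http://www.google.com/software/gettext/manual/gettext.html   # completely silent
$ curl -# -O http://www.google.com/software/gettext/manual/gettext.html   # compact progress bar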

To save the output to a file, we may use shell redirection ('>') or the '-o'/'-O' options. wget, like cURL, may also be used to download files.
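
For example, redirecting STDOUT with '>' behaves much like -o (the filename 'centos-index.html' below is only an illustrative choice), and a plain wget invocation saves the file under the name taken from the URL, much like -O:

$ curl http://www.centos.org > centos-index.html
$ wget http://www.google.com/software/gettext/manual/gettext.html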

3. Retrieve Multiple Files at Once

By entering the URLs on the command line, we may download multiple files at once.

$ curl -O URL1 -O URL2

The command below will download index.html and gettext.html and store them under their original names in the current directory.

$ curl -O http://www.google.com/software/gettext/manual/html_node/index.html -O http://www.google.com/software/gettext/manual/gettext.html

Please keep in mind that curl will attempt to reuse the connection when downloading multiple files from the same server.
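
curl can also expand ranges and lists written into the URL itself (URL globbing), which is convenient when filenames follow a pattern. A sketch against a hypothetical server:

$ curl -O "http://www.example.com/file[1-5].html"
$ curl -O "http://www.example.com/page{index,about,contact}.html"

Quoting the URL stops the shell from trying to interpret the brackets and braces itself.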

4. Use the -L option to follow HTTP Location Headers

By default, cURL does not follow HTTP Location headers, also known as redirects. When a requested web page has moved, the server sends back an HTTP Location header in its response, indicating where the actual web page is.

For example, if a user from India types google.com into their browser, they are immediately routed to 'google.co.in'. This is accomplished via the HTTP Location header, as seen below.

$ curl http://www.google.com
<TITLE>302 Moved</TITLE>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.co.in/">here</A>

According to the output above, the requested document has been relocated to 'http://www.google.co.in/'.

We may force curl to follow the redirect using the -L option, as illustrated below. It will then download the HTML source of google.co.in.

$ curl -L http://www.google.com
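
Two related options are handy here: -I (--head) fetches only the response headers, so you can inspect the Location header without downloading the body, and --max-redirs limits how many redirects -L may follow. For example:

$ curl -I http://www.google.com                  # show headers only, including Location
$ curl -L --max-redirs 3 http://www.google.com   # follow at most 3 redirects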

5. Use HTTP Authentication

Some websites require a username and password before you can view their content (this can be enforced with an .htaccess file, for example). As seen below, we may pass those credentials from cURL to the web server using the -u option.

$ curl -u username:password URL

Curl's default authentication method is Basic HTTP Authentication. We may select a different mechanism with the --ntlm or --digest options.
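
For instance, reusing the placeholder credentials and URL from the example above:

$ curl --digest -u username:password URL   # HTTP Digest authentication
$ curl --ntlm -u username:password URL     # NTLM authentication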