
How To Quickly Transfer Large Files Over Network In Linux And Unix

By sk

Today, I had to reinstall my Ubuntu server, which I often use to test different applications. The server holds over 200 GB of data, and I didn't want to lose it. I could transfer the data via scp, or set up NFS or FTP to copy the files, but I was pretty sure it would take several hours to move that much data. While searching for an alternative, I came across the following method. We can quickly transfer large files between two systems over the network using the netcat, tar, and pv commands on any GNU/Linux or Unix-like operating system.

Compared to the other methods, I found it very fast. For those who don't know, Netcat is a simple Unix utility that reads and writes data across network connections using the TCP or UDP protocol, tar is a command-line archiving tool, and pv (short for Pipe Viewer) monitors the progress of data through a pipe. Now, allow me to show you how to transfer large files quickly between two Linux systems. It's not as difficult as you may think. Read on.

Quickly Transfer Large Files Over Network Between Two Systems In GNU/Linux

Make sure you have installed the "netcat" and "pv" utilities on both systems. If they are not already installed, you can install them as shown below. The "tar" package is available by default on most Linux systems, so you don't have to install it.

On Arch Linux and its derivatives:

$ sudo pacman -S netcat pv

On RHEL and CentOS (pv is available in the EPEL repository):

$ sudo yum install epel-release
$ sudo yum install nc pv

On Fedora:

$ sudo dnf install nc pv

On Debian, Ubuntu, Linux Mint:

$ sudo apt-get install netcat pv

Now let us see how to quickly copy the large file(s) between two systems.

To do so, run the following command as root user on the receiving node (destination system):

# netcat -l -p 7000 | pv | tar x

On the sending node (source system), run this command as root user:

# tar cf - * | pv | netcat 192.168.1.105 7000

Here, 192.168.1.105 is the IP address of my destination system. The tar cf - * part archives everything in the current working directory and writes it to standard output, pv shows the transfer progress, and netcat streams the data to the destination, where it is extracted at the other end.
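
By default, tar extracts the files into whatever directory you started the listener in. If you want them to land somewhere else, tar's -C option can point it at another directory (a small variation on the recipe above; the path is a placeholder):

# netcat -l -p 7000 | pv | tar x -C /path/to/destination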

Note: On RHEL and CentOS systems, use "nc" instead of "netcat", as shown below. Also, you need to open port "7000" in iptables / firewall-cmd on the target system.
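
For example, on a system running firewalld, opening the port could look like this (adjust to your own firewall setup):

# firewall-cmd --permanent --add-port=7000/tcp
# firewall-cmd --reload

On a plain iptables setup, an equivalent rule would be:

# iptables -I INPUT -p tcp --dport 7000 -j ACCEPT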

After opening the port on the target system, you can transfer large files as shown below.

On destination system:

# nc -l -p 7000 | pv | tar x

On source system:

# tar cf - * | pv | nc 192.168.1.105 7000

Also, you can specify a particular file as shown below.

# tar cf - /home/sk/test.file | pv | netcat 192.168.1.105 7000
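
Note that tar strips the leading "/" from member names, so on the destination the file will be extracted under ./home/sk/ relative to the listener's directory. If you only want the bare file name on the other side, you can change into the directory with -C before naming the file (a variation on the command above):

# tar cf - -C /home/sk test.file | pv | netcat 192.168.1.105 7000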

Please be mindful that both systems should have netcat installed. Now, grab a cup of coffee. You'll see that the files are copied much more quickly than with traditional methods like scp.

Also, you will not see any sign of completion on either side; these commands keep running until you stop them manually. You need to check the file sizes on both systems using the "du -h <filename>" command. If the file size on the destination system is the same as on the source system, you can assume the transfer is complete and quit the command by pressing CTRL+C.
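
Since matching sizes don't strictly guarantee matching contents, comparing checksums is a stronger check (an extra verification step, not required by the method itself):

# md5sum /home/sk/test.file

Run it on both systems and make sure the hashes agree. Alternatively, as a commenter points out below, some netcat variants accept a -w <seconds> option that closes the connection after the given timeout, so the pipeline can exit on its own.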

Quickly Transfer Large Files Between Two Systems In Unix

On Unix operating systems, netcat is called nc. So, to copy large files between systems over the network, the commands would be:

On destination system:

# nc -l 7000 | pv | tar -xpf -

On source system:

# tar -cf - * | pv | nc 192.168.1.105 7000

Again, these commands should be run as the root user, and both the source and destination systems should have netcat and pv installed. Transferring large files over a LAN using netcat and tar can indeed save you a lot of time.
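
If your data compresses well and CPU power is not the bottleneck, you can also add a gzip stage to the pipe (a variation I haven't benchmarked; for already-compressed data such as videos or archives, it will only slow things down):

On destination system:

# nc -l 7000 | pv | tar -xzf -

On source system:

# tar -czf - * | pv | nc 192.168.1.105 7000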

Disclaimer: Please be mindful that there is no security in this method. As you can see in the above examples, there is no authentication on either side; anyone who knows the destination system's IP address and port can send data to it. Transferring files using netcat is recommended only inside protected networks. If you are concerned about security, I strongly suggest using the scp command.
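
For reference, a typical scp invocation for a whole directory looks like this (the user name, address, and paths are placeholders for your own):

$ scp -r /path/to/source/files/ sk@192.168.1.105:/path/on/destination/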

If security is important to you, you can use Rsync to transfer files securely over SSH:

$ rsync -ravz /path/to/source/files/ destination-ip:/path/on/destiny
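
Here, -a (archive mode) preserves permissions and timestamps and already implies -r, -v lists files as they are transferred, and -z compresses the data in transit, which helps on slower links. Since rsync runs over SSH by default, the transfer is both authenticated and encrypted.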

Thanks: Ppnman

That's it. Do you know any other way to copy large files quickly? Please share it in the comment section below.


13 comments

IJK January 18, 2017 - 8:33 pm

It would be nice if you could publish some figures comparing this mechanism against FTP or SCP in the same network.

SK January 19, 2017 - 6:46 am

Good point. I will try.

hi.itsme January 21, 2017 - 9:16 am

How about writing a bash script to automate this completely, so that you can fetch the files using only the client machine? For example, I generate data on my computing server (which I maintain) and want to transfer the important data to the results directory of a production machine, where I analyse it for further usage. You run the script once and forget about it.

SK January 21, 2017 - 2:14 pm

Yes, it would be very helpful. If I find anything, I will share it for sure.

ppnman January 18, 2017 - 10:02 pm

Don't know, dude… security is so important to me. I prefer rsync -ravz /path/to/source/files/ destination-ip:/path/on/destiny
Fast, secure, and easy.

William Chipman January 23, 2017 - 4:41 pm

Tried this from an Oracle Linux 6 box to a CentOS 7 box and ran into a few issues:
1. Had to install the EPEL repository to get pv.
2. Had to add the port to iptables / firewall-cmd on the target system. This could also be used to restrict possible source systems for improved security.
3. The command name is "nc" on both systems, not netcat.
After those changes, it worked as advertised and was very quick.

SK January 24, 2017 - 6:29 am

Happy to hear that it helped you. I added your notes to the guide now. Thanks.

Benjamin Furstenwerth February 21, 2019 - 9:19 pm

Use the -w flag with netcat and specify a time in seconds. This will close the connection on timeout. If you don't want a visual status, this also avoids having to install pv. I use pv with dd frequently, so it's a non-issue for me. Timing out the transfer is good for automation, and provided you have a stable network, it shouldn't fail prematurely… otherwise, set a larger timeout.

sk February 22, 2019 - 11:44 am

Thanks for your tip. Cheers!

LinuxLover January 15, 2020 - 7:48 am

I prefer SCP. I don't need to set up anything on the client end every time I want to do a transfer, and it gives me transfer status built in.

stasman January 15, 2020 - 7:50 am

Stats? Netcat vs FTP vs rsync vs SCP

BG January 15, 2020 - 5:45 pm

I have a hard time believing this would be significantly faster than rsync, unless there’s something wrong with one of your systems. On any modern system, for large files, the bottleneck should be the network bandwidth (or cpu if the pc is from the past century). For a huge amount of tiny files maybe tar would help. I’ve transferred large files over network many times using rsync, scp, samba, nfs, etc. and the bottleneck is always the network bandwidth (even on a gigabit network).

bwoo May 7, 2022 - 4:29 pm

Nice! I'm getting 2.5MiB/s instead of the soul-destroying 50KiB/s I get with scp or rsync and plain HTTP. Still much less than I would have expected given that I'm only transferring across a local network, but one hell of an improvement.

