
Very slow SMB transfer speeds

Hi everyone!

 

So I just put together a Samba file server to dump data to while I'm reconfiguring my desktop drives, and I'm having very slow transfer speeds from my desktop to the server. The average transfer speed is about 2 MB/s. Here is my hardware configuration:

 

Server:

Asus Z9PR-D12 w/ 2x Xeon E5-2620

72GB DDR3 ECC

2x SanDisk 60GB SSDs - OS on hardware RAID 1

6x 2TB SATA drives - ZFS RAIDZ2

 

I'm using a managed gigabit network switch. I created the zpool with the following commands:

 

sudo zpool create rz2TB -o ashift=12 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg 
sudo zfs set compression=lz4 rz2TB
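
To double-check that those settings took effect, something like this can be used (the zpool get ashift form assumes a reasonably recent OpenZFS release):

zfs get compression rz2TB
zpool get ashift rz2TB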

 

Then I checked the pool with the following:

 

zog@zogland:/etc/samba$ zpool status
  pool: rz2TB
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rz2TB       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

errors: No known data errors

 

Then I created my SMB share:

 

[Share]
   path = /rz2TB/share
   writable = yes
   guest ok = yes
   guest only = yes
   create mode = 0777
   directory mode = 0777
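
For what it's worth, testparm can be used to confirm smb.conf parses cleanly before restarting the services:

testparm -s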

 

And then I did the following:

 

sudo chgrp sambashare /rz2TB/share

sudo useradd -M -d /rz2TB/share/zog -s /usr/sbin/nologin -G sambashare zog
sudo mkdir /rz2TB/share/zog
sudo chown zog:sambashare /rz2TB/share/zog
sudo chmod 2770 /rz2TB/share/zog

sudo smbpasswd -a zog

sudo smbpasswd -e zog


sudo useradd -M -d /rz2TB/share/smbadmin -s /usr/sbin/nologin -G sambashare smbadmin
sudo mkdir /rz2TB/share/smbadmin
sudo smbpasswd -a smbadmin

sudo smbpasswd -e smbadmin

 

sudo chown smbadmin:sambashare /rz2TB/share/smbadmin
sudo chmod 2770 /rz2TB/share/smbadmin

sudo systemctl restart smbd nmbd
 

Does anyone have any suggestions as to what I did wrong?

 

Thanks!

 

zog


10 minutes ago, zogthegreat said:

The average transfer speed is about 2 MB/s

Are these large or small files? File size matters. Also, have you tested write speeds locally on the server itself?


 

socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

This may be what you need, but verify it yourself, as I haven't dug into it too deeply, and make sure you can revert or roll back any changes.


Samba Guide
https://www.samba.org/samba/docs/old/Samba3-HOWTO/speed.html

My point of reference
https://superuser.com/questions/713248/home-file-server-using-samba-slow-read-and-write-speed
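
If you do try it, the line goes in the [global] section of smb.conf, roughly like this (be aware that modern kernels auto-tune TCP buffers, so pinning SO_RCVBUF/SO_SNDBUF can just as easily hurt as help -- treat it as an experiment you can undo):

[global]
   socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

Then restart Samba (sudo systemctl restart smbd) and re-test.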

 


Hi @WereCatf

 

No, I was just trying to do a quick data dump. I guess I'll be testing tomorrow! The "file size" was a roughly 1.3TB folder containing my "Libraries" folders, i.e. /Desktop, /Downloads, /Documents, etc. The actual files within vary in size from a few kilobytes to 4GB ISOs.


3 minutes ago, zogthegreat said:

No, I was just trying to do a quick data dump. I guess I'll be testing tomorrow! The "file size" was a roughly 1.3TB folder containing my "Libraries" folders, i.e. /Desktop, /Downloads, /Documents, etc. The actual files within vary in size from a few kilobytes to 4GB ISOs.

You have to keep in mind that RAID, even a ZFS one, handles small files slowly, and Samba slows things down even more when dealing with small files. You should see what the speed is with those ISO files or other similarly large files -- there's not a lot you can do about the small ones.


Would you recommend something other than SMB for file backups? Because this really sucks!

[attached screenshot: smb_2.jpg]


Just now, zogthegreat said:

Would you recommend something other than SMB for file backups?

NFS is a bit faster when handling small files. iSCSI would be the fastest, but it takes quite a bit of learning, and you'd be creating a disk image on the server instead of individual files, so it may not be desirable. You could also use rsync or a similar backup tool to only transfer the differences between the current state of the files and the backup -- the first sync would obviously take longer, but any subsequent ones would be fast.
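
As a rough sketch of the rsync route (the path and hostname here are just placeholders, adjust to your setup):

rsync -avh --partial --progress /path/to/Libraries/ zog@your-server:/rz2TB/share/zog/

The first run copies everything; later runs only send what changed.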


Hmm, maybe I'll try hooking the drive up to the server with a caddy and doing the transfer from the command line. I know I'm approaching this somewhat backwards, but I just need to back up my desktop data. In reality, I should just sit down and try to figure out the problem.


15 minutes ago, RobinD said:

 


socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

This may be what you need, but verify it yourself, as I haven't dug into it too deeply, and make sure you can revert or roll back any changes.


Samba Guide
https://www.samba.org/samba/docs/old/Samba3-HOWTO/speed.html

My point of reference
https://superuser.com/questions/713248/home-file-server-using-samba-slow-read-and-write-speed

 

Thanks for the links, @RobinD. Reading through them now.

 


1 minute ago, zogthegreat said:

Thanks for the links, @RobinD. Reading through them now.

 

No worries, good luck troubleshooting this


Tested your ZFS array first? 

e.g.

root@tower:~# time dd if=/dev/zero of=/storage/ISO/testfile bs=1G count=10  
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 3.79571 s, 2.8 GB/s

 

Also consider testing your network with something like iPerf: https://www.linode.com/docs/networking/diagnostics/install-iperf-to-diagnose-network-speed-in-linux/

 

That will help to narrow the problem down. 
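
Basic iperf usage, in case it helps: start the server side on one box and point the client at it from the other (the address below is just a placeholder for your server's IP):

On the server:
iperf -s

On your desktop:
iperf -c <server-ip>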

 

 


6 hours ago, Jarsky said:

Tested your ZFS array first? 

e.g.


root@tower:~# time dd if=/dev/zero of=/storage/ISO/testfile bs=1G count=10  
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 3.79571 s, 2.8 GB/s

 

Also consider testing your network with something like iPerf: https://www.linode.com/docs/networking/diagnostics/install-iperf-to-diagnose-network-speed-in-linux/

 

That will help to narrow the problem down. 

 

 

@Jarsky Thanks for the advice. I had started trying your suggestions when it occurred to me that I had forgotten to add a SLOG drive. I dug up a SanDisk 60GB SSD and set it up like this:

 

 sudo zpool add rz2TB log -f /dev/disk/by-id/scsi-SATA_SanDisk_SD6SB1M0_141365400346

 

zog@zogland:/rz2TB$ sudo zpool status
  pool: rz2TB
 state: ONLINE
  scan: none requested
config:

        NAME                                        STATE     READ WRITE CKSUM
        rz2TB                                       ONLINE       0     0     0
          raidz2-0                                  ONLINE       0     0     0
            scsi-35000c5003ccc2ae0                  ONLINE       0     0     0
            scsi-35000c5002ac15634                  ONLINE       0     0     0
            scsi-35000c5002ac58799                  ONLINE       0     0     0
            scsi-35000c5002ac1ffde9                 ONLINE       0     0     0
            scsi-35000c5003cd01157                  ONLINE       0     0     0
            scsi-35000c5002ac54426                  ONLINE       0     0     0
        logs
          scsi-SATA_SanDisk_SD6SB1M0_141365400346  ONLINE       0     0     0
 

Then I ran iperf as suggested and received the following output:

 

zog@zogland:~$ sudo iperf -c 192.168.2.144
------------------------------------------------------------
Client connecting to 192.168.2.144, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 192.168.2.144 port 51720 connected with 192.168.2.144 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  25.9 GBytes  22.2 Gbits/sec
 

###

 

zog@zogland:~$ sudo iperf -c 192.168.2.144 -u
------------------------------------------------------------
Client connecting to 192.168.2.144, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.144 port 59182 connected with 192.168.2.144 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 892 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.005 ms    0/  892 (0%)
 

###

zog@zogland:~$ sudo iperf -c 192.168.2.144 -u -b 1000m
------------------------------------------------------------
Client connecting to 192.168.2.144, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.76 us (kalman adjust)
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.2.144 port 60262 connected with 192.168.2.144 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.16 GBytes  1000 Mbits/sec
[  3] Sent 850341 datagrams
[  3] Server Report:
[  3]  0.0-10.0 sec  1.16 GBytes   998 Mbits/sec   0.000 ms 1343/850341 (0.16%)
[  3] 0.0000-9.9993 sec  184 datagrams received out-of-order

 

Now I'm getting around 110 MB/s. Not great, but better than before. As previously stated, large files transfer faster than small ones: copying my ISO folder goes much faster than my Documents folder. Right now I'm reading up on tuning ZFS so that, in the long term, I'll have a better/faster server.


3 hours ago, zogthegreat said:

 sudo zpool add rz2TB log -f /dev/disk/by-id/scsi-SATA_SanDisk_SD6SB1M0_141365400346

What kind of device is this? A search on SD6SB1M0 comes back with very poor stats. If it's a low-performance drive, it's probably better to run without a ZIL/SLOG.
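
If you decide to ditch it, a log vdev can be removed from a live pool without affecting your data, e.g. (device name taken from your zpool add command):

sudo zpool remove rz2TB scsi-SATA_SanDisk_SD6SB1M0_141365400346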

 

3 hours ago, zogthegreat said:


 

Then I ran iperf as suggested and received the following output:

 

zog@zogland:~$ sudo iperf -c 192.168.2.144
------------------------------------------------------------
Client connecting to 192.168.2.144, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 192.168.2.144 port 51720 connected with 192.168.2.144 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  25.9 GBytes  22.2 Gbits/sec
 

 

That doesn't look right unless you have something like a 20Gbit LAN? You haven't started the iperf server and client on the same machine, have you?

You can ignore the UDP testing, as SMB/CIFS transfers traffic primarily over TCP.

 

What if you run a bi-directional test as well? 

e.g. iperf -c <ipaddress> -d

 

And what was the result of your ZFS performance test?

e.g. time dd if=/dev/zero of=/rz2TB/share/zog/testfile bs=1G count=10 && rm /rz2TB/share/zog/testfile

 

 

110MB/s seems fine if you have a standard 1Gbit LAN, so your tests will probably come back fine. It's quite normal for small files to transfer more slowly.

 

15 hours ago, zogthegreat said:

sudo zpool create rz2TB -o ashift=12 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg 
sudo zfs set compression=lz4 rz2TB

 

 

I see you manually set ashift=12 (i.e. 4K sectors) -- what record size are you using on your dataset?

 

Check your recordsize, e.g.

zfs get recordsize,compression rz2TB/share

If it's the default 128K, you could experiment with matching it to your workload, e.g.

zfs set recordsize=4K rz2TB/share
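
Keep in mind recordsize only applies to data written after the change; existing files keep whatever record size they were written with. For mostly large sequential files like your ISOs, a larger record size is usually the better experiment, e.g. (1M records need the large_blocks feature, which any reasonably current OpenZFS has):

zfs set recordsize=1M rz2TB/share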

 


16 hours ago, Jarsky said:

What kind of device is this? A search on SD6SB1M0 comes back with very poor stats. If it's a low-performance drive, it's probably better to run without a ZIL/SLOG.

 

 

Hi @Jarsky, yeah, the SanDisk is an old drive; I think it's actually SATA II. I have an Intel DC S3510 240GB SSD that I ordered sitting just across the border at my US mail service... which I can't get to until they reopen the US/Canada border. Gaa!

 

I should be clear that this is the first ZFS system I've built. I've used mdadm previously with no problems, but from what I've been reading, ZFS is the future. Any advice or pointers you might have are appreciated!


On 5/29/2020 at 11:00 AM, Jarsky said:

Tested your ZFS array first? 

e.g.


root@tower:~# time dd if=/dev/zero of=/storage/ISO/testfile bs=1G count=10  
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 3.79571 s, 2.8 GB/s

 

OP has compression enabled, so writing zeroes will NOT yield an accurate result.
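
If you want a local number anyway, write data lz4 can't squash, for example (note that /dev/urandom itself can be the bottleneck on some systems, so treat the result as a lower bound):

dd if=/dev/urandom of=/rz2TB/share/zog/testfile bs=1M count=4096 conv=fdatasync
rm /rz2TB/share/zog/testfile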


21 hours ago, zogthegreat said:

Then I ran iperf as suggested and received the following output:

 

zog@zogland:~$ sudo iperf -c 192.168.2.144
------------------------------------------------------------
Client connecting to 192.168.2.144, TCP port 5001
TCP window size: 2.50 MByte (default)
------------------------------------------------------------
[  3] local 192.168.2.144 port 51720 connected with 192.168.2.144 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  25.9 GBytes  22.2 Gbits/sec

You ran both the iperf server and client on the same machine. That's pointless and won't give any indication of your actual network performance. They are supposed to be run on different computers.

