ronaldnwb

Member
  • Content Count

    2
  • Joined

  • Last visited

Awards


This user doesn't have any awards

About ronaldnwb

  • Title
    Newbie
  1. Thanks, Alex, for your comments. The thing is, I'm a radiologist and I work with a lot of images, from simple X-rays to CT scans and MRIs. I have a flexible schedule, and I can send all my files from my work PACS to my home storage in the background (it's secure, encrypted, and validated). The upload speed there is 1 Gbps, the same as my download speed at home. When at home, I do pull large files from my NAS to my desktop; I only upload comments and reports (docs) from home to work, so the 200 Mbps upload cap isn't a real problem.

     The reason I'm interested in knowing the real bandwidth of my 10 Gbps LAN is that, if I still have bandwidth to grow into, I'd upgrade my NAS to an SSD RAID (1754 MB/s read, theoretical) instead of HDDs (488 MB/s). Even seconds per file add up and get annoying. I already get about 480 MB/s from the HDD RAID and 500 MB/s from a single SSD, which is pretty much the theoretical speed of the RAID configuration, but I'd like it to go at least as fast as 1 GB/s. The iperf testing was done with this in mind: find the maximum speed possible and upgrade accordingly! If the NICs (SFP+, 10 Gbps per spec) are the main bottleneck and are limited to 3.5-4 Gbps, I'd upgrade them too, or at least LAG them. But if I can't rely on iperf as you stated, I guess I'd have to put an NVMe in the NAS and see how fast it goes: kind of a real-life test without the RAID and the extra load on the CPU, obviously.

     When I said client, I meant `iperf -c` on my desktop and `iperf -s` on my NAS. I mentioned it because I read somewhere else that the speed can differ depending on which computer is running as the iperf server. I wouldn't know how to explain how the ISP set the connections up, because I really don't know, but they guarantee me 200 Mbps uncontended within the ISP's network (home to work) and contended for the rest of the internet, and it has proven accurate so far. The file transfers are super consistent: I transferred randomly generated 14 GB test files without a single speed drop. Not really a problem anyway, since I only transfer doc reports from home to work and seldom a large file; all the large files go from work to home. I apologize for any incorrect phrasing or terms. IT is not my field, just something I really enjoy doing!
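The "even seconds per file add up" point can be made concrete with a quick back-of-the-envelope calculation. This is just a sketch using the figures quoted in the post (~480 MB/s for the current HDD RAID, 1754 MB/s theoretical for the SSD RAID, and the 14 GB test-file size); the function name is mine:

```python
# Rough per-file transfer-time comparison using the speeds mentioned above.
# Decimal units throughout (1 GB = 1000 MB), matching how drive and iperf
# speeds are usually quoted.

def transfer_seconds(size_gb: float, speed_mb_per_s: float) -> float:
    """Seconds to move size_gb gigabytes at speed_mb_per_s megabytes/second."""
    return size_gb * 1000 / speed_mb_per_s

hdd_raid = transfer_seconds(14, 480)    # current HDD RAID, ~29 s per 14 GB file
ssd_raid = transfer_seconds(14, 1754)   # theoretical SSD RAID, ~8 s per file
print(f"HDD RAID: {hdd_raid:.1f} s, SSD RAID: {ssd_raid:.1f} s")
```

Of course, the SSD upgrade only pays off if the 10 Gb link can actually deliver more than ~480 MB/s end to end, which is exactly what the iperf testing is trying to establish.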
  2. I have a small home network and I've been testing the connection speed between its components. My router is a dedicated pfSense box and my NAS runs FreeNAS 11.2, linked together over 10 Gb NICs and a DAC cable. My laptop runs Windows 10 on a gigabit connection, and my desktop is connected with a 10 Gb NIC.

     When I transfer a file from my laptop to my NAS, I get a consistent 113 MB/s, or about 900 Mbit/s, and 500 MB/s from my desktop over the 10 Gbit NIC (pretty much the speed of the SSD). But when I run iperf I get totally different results and a much slower connection: from my laptop, running as a client, I only get ~500 Mbit/s, 530 tops, and from my desktop 413 MB/s, or 3.5 Gbit/s. I get exactly the same results when testing the connection from my pfSense box to my NAS. So, what's the deal?? Why are my actual transfer speeds higher than my iperf speeds?

     I have a 1000 Mbit down / 200 Mbit up connection and I want to make sure I can take full advantage of it. At work I have a symmetrical gigabit connection with the same ISP, actually connected to the same ISP server, so the connection is pretty reliable. I would love to transfer all my files to my NAS and just pull them to my desktop at home.

     BTW, I just ran an iperf test using UDP with -i 5 -t 30 -P 5 and got 6.15 Gbit/s total, so more streams did increase the speed, almost doubling it, but still not even close to 10 Gb. The gigabit connection to my laptop also improved with more UDP datastreams: I got 940 Mbit/s average, with spikes of up to 1.15 Gbit/s (really unexpected).
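Part of the confusion here is just units: file transfers are usually reported in megabytes per second, while iperf reports bits per second. A minimal conversion (1 byte = 8 bits, decimal units) applied to the figures above shows the numbers are closer than they first appear; the helper name is mine:

```python
# Convert file-transfer speeds (MB/s) to the Gbit/s units iperf reports.
# Decimal units (1 Gbit = 1000 Mbit), as iperf and ISPs use.

def mb_s_to_gbit_s(mb_per_s: float) -> float:
    """Megabytes/second to gigabits/second."""
    return mb_per_s * 8 / 1000

# Figures from the post:
laptop_copy  = mb_s_to_gbit_s(113)   # ~0.9 Gbit/s: the laptop saturates its GbE link
desktop_copy = mb_s_to_gbit_s(500)   # ~4 Gbit/s over the 10 Gb link
desktop_perf = mb_s_to_gbit_s(413)   # ~3.3 Gbit/s, close to the quoted 3.5 Gbit/s
print(laptop_copy, desktop_copy, desktop_perf)
```

So the 413 MB/s iperf reading and the 500 MB/s file copy are both roughly 3-4 Gbit/s, and a single TCP stream often won't fill a 10 Gb pipe on its own, which is consistent with the -P 5 parallel-stream test nearly doubling the total.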