Matioupi

Member
  • Posts: 18
  • Joined

  • Last visited


Matioupi's Achievements

  1. Aside from all the BOM/price discussion, which is interesting, I think there are other, more practical key factors.

     Power consumption has been mentioned and plays its role; I've dug into that a little more. When used at gigabit only, a X550-T2 is rated at 5.5 W average / 6.4 W max (rising to 11.2 W / 13 W at 10GbE, see http://www.intel.com/content/www/us/en/ethernet-products/converged-network-adapters/ethernet-x550-brief.html), while a dual gigabit port card (http://ark.intel.com/products/84804/Intel-Ethernet-Server-Adapter-I350-T2V2) is rated at 4.4 W max. If you scale per Gb/s, 10Gb NICs are more efficient, but at 1 Gb/s they are still worse by about 45% (of a small value), and at the end of the day the power bill will be bigger, so you have to sell more GB (TB) to earn more money (which is the trend...).

     The physical layer also plays its role. Are there any statistics/polls (could be a separate thread) on the actual percentage of users on WiFi only vs. those using wired connections (at home / at work)? I don't think 10Gb/s WiFi is quite there yet, so generalized WiFi usage may slow down adoption of faster network standards at large scale, much as it did for the 10/100 to 1Gb transition. A lot of consumer content/service providers will pin the bandwidth requirements of their products (games, video, ...) to what WiFi can handle. Only pros or wired users (a small percentage, I guess) will benefit from moving to 10GbE.

     On the good points: RJ45 10GBase-T was far from an easy way into 10GbE until now; almost all pro switches and NICs were SFP+ or CX4 only at the physical layer. Now we have the Intel X540 and X550 series for RJ45-based NICs, plus some consumer/small-business 10GBase-T / mixed-physical-layer switches, and this will let people move to 10GbE more smoothly. Even where cabling is only cat 5e, tests tend to show that 10GbE works quite well over reasonable distances.
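The per-port arithmetic behind that ~45% figure can be sketched quickly. The wattages are the Intel figures quoted above; the per-port and per-Gb/s comparisons are my own back-of-the-envelope calculation:

```python
# Per-port / per-Gb/s power comparison (max figures from the Intel
# product briefs cited above; both cards are dual-port).
x550_max_1g = 6.4    # W, X550-T2 with both ports at 1GbE
x550_max_10g = 13.0  # W, X550-T2 with both ports at 10GbE
i350_max = 4.4       # W, I350-T2V2 dual gigabit

per_port_x550_1g = x550_max_1g / 2           # 3.2 W/port at 1 Gb/s
per_port_i350 = i350_max / 2                 # 2.2 W/port at 1 Gb/s
per_gbs_x550_10g = (x550_max_10g / 2) / 10   # 0.65 W per Gb/s at 10GbE
per_gbs_i350 = per_port_i350 / 1             # 2.2 W per Gb/s

# The 10GbE card is far more efficient per Gb/s...
print(f"{per_gbs_x550_10g:.2f} W/Gb/s (X550 at 10GbE) vs {per_gbs_i350:.2f} W/Gb/s (I350)")
# ...but running the X550 at only 1 Gb/s still costs ~45% more per port:
print(f"penalty at 1GbE: {per_port_x550_1g / per_port_i350 - 1:.0%}")
```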
  2. I own one GS728TXS; the main difference from what you are describing is 4 SFP+ 10G ports instead of 2 SFP+ plus 2 10GBase-T. (I think your description is incorrect: as I understand it, it would mean a total of 8 10Gb ports, when the switch actually has only 4.) I'm pretty happy with my TXS version right now; I guess I could use transceivers to go 10GBase-T if needed.

     My network setup consists of one Netgear GS728TXS, one D-Link DXS1210-12TC and one Arista 7124S. I'm using 2 SFP+ ports of the Netgear for the uplink to the Arista and 2 SFP+ for the uplink to the D-Link (all in dynamic aggregation). It was all second-hand equipment bought on eBay, and with this setup I can easily handle the SFP+ part of the network on the Arista, the 10GBase-T part on the D-Link, and the 1Gb part on the Netgear.
  3. According to a graph found here, describing the memory bandwidth of the ARM Cortex-A53 that powers the RPi 3 (http://www.anandtech.com/show/8718/the-samsung-galaxy-note-4-exynos-review/3), memory bandwidth sits somewhere between 16 and 40 Gb/s depending on transfer size, so it will probably be a little short for an 8-port switch.

     By the way, for 10GbE users, this graph makes me wonder about this guide (and its following parts): http://www.cinevate.com/blog/confessions-of-a-10-gbe-network-newbie-part-1-basics/ which states that buffers should be tuned to maximum values for best throughput. I actually observed the opposite (with Intel X710-DA4 and X520-DA2 cards): reducing the buffers a little from the default values improved throughput. Maybe the fairly old 10GbE switch in the middle is at play here, with the same effect as above (I don't have time to test without the switch).
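A rough sketch of why 16-40 Gb/s is marginal for an 8-port gigabit switch; the two-memory-passes-per-frame assumption is mine, not from the Anandtech article:

```python
# Memory bandwidth needed for a software switch to run 8 gigabit ports
# at line rate, assuming every frame is written to RAM on receive and
# read back out on transmit (two memory passes per bit).
ports = 8
line_rate_gbps = 1

aggregate_ingress = ports * line_rate_gbps   # 8 Gb/s entering the switch
memory_passes = 2                            # one write + one read per frame
needed_gbps = aggregate_ingress * memory_passes

print(needed_gbps)  # 16 Gb/s: right at the bottom of the 16-40 Gb/s range,
                    # leaving little headroom for the OS and switching logic
```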
  4. Thanks for the interest in answering. So far you all seem to think that internet speed is the key limiting factor. I only have about 20 Mb/s internet, but 10Gb/s LAN is still really great for archiving and for professional storage optimization through high-speed NAS/SAN. If the mean WAN connection is 15 Mb/s right now (I didn't know that), then why would 1 Gb/s be the standard...?
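To put numbers on the archiving argument, here is an illustrative transfer-time comparison; the 1 TB archive size is my own example, not a figure from the thread:

```python
# How long moving a 1 TB archive takes at WAN vs LAN speeds
# (1 TB is an illustrative size; link speeds are nominal line rates).
size_bits = 1e12 * 8                     # 1 TB expressed in bits

def hours(link_bps):
    """Transfer time in hours at a given link speed in bits per second."""
    return size_bits / link_bps / 3600

print(f"20 Mb/s WAN : {hours(20e6):7.1f} h")   # ~111 h, almost 5 days
print(f"1 Gb/s LAN  : {hours(1e9):7.1f} h")    # ~2.2 h
print(f"10 Gb/s LAN : {hours(10e9):7.1f} h")   # ~13 minutes
```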
  5. Hello, when do you think 10 Gb/s will overtake/replace the 1 Gb/s LAN ports found on virtually all motherboards/systems sold today? Do you have any doubts that it will ever happen? Do you have any doubts that it will arrive via the 10GBase-T "RJ45" standard?
  6. http://boinc.berkeley.edu/ (ok, this is quite similar to Folding@home... maybe a little more generic). You can also learn to configure software such as HTCondor (https://research.cs.wisc.edu/htcondor/) to set up your own computing pool (I'm not sure it's easy to join existing pools without BOINC, which can be plugged into HTCondor as a fallback for idle CPU time). HTCondor can be used to parallelize all kinds of computation driven by command-line utilities with parameters (and more). Also, what kind of internet connection do you have to this machine?
  7. Sorry for the misunderstanding about 1 Gb/s vs. 10 Gb/s saturating the SSD cache. As for the size of the NVMe drive not adding to the pool, that is well understood; in fact, having it on a PCIe 3.0 adapter would even save one slot inside the NAS (before needing to add an expansion unit) for another HD. PS: very nice passively cooled machine shown in your signature.
  8. Hello, yes, I have a wall-embedded cat 6 network at home (where I work from...), plus some twinax DAC SFP+ cables in the server cabinet and a set of Intel X540-T2, X520-DA2 and X710-DA4 10GbE NICs (plus one Arista 7124S, a DXS1210-12TC and a Netgear GS728TXS for the distribution part).

     As for an SSD saturating a 10GbE link, I don't really agree: 10GbE is around 1220 MB/s (that's what I can achieve on a single link with an SSD RAID volume built from enough drives), whereas SATA III SSDs are "only" ~600 MB/s (half). I'm also not sure the NVMe drive would be better placed in a single PC (the OS is already installed on SATA III SSDs on every machine). Not all our machines have PCIe 3.0 yet, but they would all benefit from having the drive NAS-mounted. I also much prefer managing storage in a single place (inside the NAS), as that makes maintenance, backup and cost sharing much more efficient.
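The line-rate arithmetic behind the ~1220 MB/s vs ~600 MB/s comparison, as I understand it; the protocol-overhead remark is an approximation on my part:

```python
# Why a single SATA III SSD cannot saturate a 10GbE link.
tenGbE_raw = 10e9 / 8 / 1e6            # 1250.0 MB/s before protocol overhead
# Ethernet/IP/TCP headers shave off a few percent, hence ~1220 MB/s observed.

# SATA III: 6 Gb/s line rate with 8b/10b encoding (8 data bits per 10 line bits).
sata3_max = 6e9 * 8 / 10 / 8 / 1e6     # 600.0 MB/s theoretical maximum

print(tenGbE_raw / sata3_max)          # ~2.08: at least 2-3 SATA SSDs in RAID
                                       # are needed to fill a 10GbE link
```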
  9. Hello, I checked the Synology open source repository (which, by the way, is missing a lot of architectures and does not seem really up to date). I could not find the source code for the current version (DSM 5.2-5644.5, bromolow) for my architecture, but I could for other architectures and for the upcoming DSM 6.0. Both have the NVMe driver in the source code, but it is not configured to be compiled, neither as a module nor directly into the kernel. I'll try to build the module, but once that is done I'm still wondering whether the DSM GUI will let me use the drive as an SSD cache; I foresee that Synology "proprietary" mechanisms will be at play beyond the NVMe drive simply being detected. Besides that, have any other users, with other brands or self-built NAS boxes, experimented with such a feature (NVMe caching)?
  10. Indeed... flops/m², total flops and MTBF are all very poor. The really nice point is the fun, and learning how logic gates can be combined to perform numerical operations...
  11. Others even build computers (ok, the basic functions of a computer) with dominoes. If you've not already seen it, I think this is awesome: Domino "computer" https://youtu.be/OpLU__bhu2w
  12. I would love to see a video benchmarking air and liquid cooling. One idea I'd like to see: a dual-CPU board with one CPU air-cooled and the other liquid-cooled, both with high-end cooling systems. Monitoring the temperature of both CPUs would tell whether there is any difference.
  13. Hello, has anybody tested whether an NVMe drive, such as a Samsung 950 Pro mounted on a PCIe 3.0 adapter, can be used as an SSD cache on PCIe 3.0-enabled Synology models (such as the RS3614xs)? That would save disk slots and (I hope) give a good performance kick. It seems those drives are supported in recent Linux kernels: https://communities.intel.com/community/itpeernetwork/blog/2015/06/09/nvm-express-linux-driver-support-decoded The current Synology kernel is 3.10.35 (at least on my machine), but I don't see any nvme.ko module on the system... could it be built later or embedded into the kernel? What do you think of that?
  14. Hello, if you are using an Intel multi-port NIC (either multi-1Gb or multi-10Gb), then it appears that Intel and Microsoft have not supported VLANs and NIC aggregation in Windows 10 since last September (at least). Each company seems to say the other one is at fault, cf. the threads on the MS and Intel forums below:

     https://communities.intel.com/thread/77934?start=45&tstart=0
     https://social.technet.microsoft.com/Forums/en-US/66166918-5b67-4754-89e5-a2572b1888a2/nic-teaming-failed-build-10568-windows-10?forum=win10itpronetworking

     What do you think of such a situation? The feature obviously works in Windows 7, Windows 8.1, and Windows Server 2008 and 2012, so this is a regression. How can it take so long to solve? The lines of code probably exist somewhere! Do you people at LMG have a chance to use your contacts at Intel and Microsoft to sort this out, make a little noise and get it solved quicker? Regards
  15. Maybe the thermal compound got too old or dried out. I have a laptop that started to overheat (slowly, so it only upset me once it was already bad) and got noisy, as the fans were on all the time. I searched YouTube for "opening" + the laptop model to find a tutorial on how to open it cleanly (it's sometimes tricky, and screws often hide under the keyboard). Once the CPU and the other chips covered by the heatsink were accessible, I removed the radiators, cleaned and replaced the thermal compound, and this revived the laptop instantly. One of the radiators was actually barely touching the compound (on a Dell 630 ATG laptop). While it was open, I also upgraded the RAM and replaced the hard drive with an old SSD that was lying around. I then had a clone of the hard drive and decided to try the W10 upgrade... It went smoothly, and the laptop is now a very decent and silent "kitchen machine"! Hope this helps. Regards