Everything posted by zENjA

  1. Maybe I'm old school, but I think "more speed" isn't everything. I would love to step my PCIe N× x4 NVMe U.2 server down to PCIe 5.0 x1 per drive and then fit 36 SSDs in 1U. Or maybe it's just me, since I deal with servers all day.
  2. I build datacenters (megawatt scale) and I love single-mode fiber. The main question for me: isn't 12/24/48/96/144/288/864 fibers per run the default in the US/CA? I guess you get that kind of cable so you can pull it into underground conduit later... and I guess you just pre-order the terminated cable for convenience (I normally splice my own cables). DAC splitter cables: normally you go to the CLI and set the port to split mode - at least that's what I have to do on my Mellanox hardware. Did you see that single-wavelength BiDi 100G QSFP28 modules are available? Flexoptix here in Germany has them in stock -> Q.B161HG.10.AD + Q.B161HG.10.DA (Joking) You tell me 3x 400G? Let me introduce you to DWDM
  3. For power testing -> build yourself a 3-phase breaker box with, say, 2 outlets per phase and monitor it with a Shelly 3EM. I've used two of these in my datacenter at work to get monitoring data for an A & B line at 80A 3-phase in a fixed install. If you have something else, I could draw a quick wiring diagram so someone could cable up a box for that. So maybe this would be a quick way to test efficiency between 120V and 240V - or 400V if your power provider is nice to you. Maybe I just have a very German view on power infrastructure... Yes, I know you've already got a PSU tester for that, but this would be a small box you could just throw on the table and start testing with. (A small polling sketch for the Shelly follows below.)
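A minimal sketch of how that box could be logged, assuming a Gen1 Shelly 3EM reachable on the LAN and its local HTTP status endpoint; the IP address and polling interval here are made up for illustration:

```python
# Rough sketch: poll a Shelly 3EM over its local HTTP API and log per-phase power.
# Assumptions (not from the original post): a Gen1 Shelly 3EM at SHELLY_IP exposing
# GET /status with an "emeters" list (power / voltage / current / pf per phase).
import time
import requests

SHELLY_IP = "192.168.1.50"  # hypothetical address of the breaker-box Shelly

def read_phases():
    status = requests.get(f"http://{SHELLY_IP}/status", timeout=5).json()
    return [
        {
            "power_w": em["power"],
            "voltage_v": em["voltage"],
            "current_a": em["current"],
            "pf": em["pf"],
        }
        for em in status["emeters"]
    ]

if __name__ == "__main__":
    while True:
        for i, phase in enumerate(read_phases(), start=1):
            print(f"L{i}: {phase['power_w']:7.1f} W  {phase['voltage_v']:5.1f} V  "
                  f"{phase['current_a']:5.2f} A  PF {phase['pf']:.2f}")
        time.sleep(10)
```

Logging the same load once on the 120V outlets and once on the 240V outlets would give the efficiency comparison directly from the logged wattage.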
  4. I use a TV (Samsung GQ55QN95A) - 4K, HDR10+ & HLG
  5. I think the slide at 1:44 has an error - the scale is... strange. Cinebench R23? My AMD 3700U gave me 3146 in R23 multi-core. For comparison: i5-8259U | R20 - 1641 | R23 - 4206. So I guess it's either A: an R20 rating with a missing 0 in the scale, or B: an R23 rating where the scale is "just for comparison".
  6. For testing, I thought about connecting two external TB3 "GPU" boxes, but with 40/100G Mellanox cards in them. Yes, I know iperf is CPU bound at multi-40G and up on most systems I know. Maybe I'll see my DC customers bringing these Mac minis in as a cluster... that would be interesting.
  7. "15TB... more that your total household..." Datahorder... Hold my Ingest Drive...
  8. Yes, I guess it's a "Dell/EMC" OEM system -> then it's a handle to carry it / pull the server back into the rack from behind. (As a DC tech, I like this.)
  9. I've had a Shadow since they started. The streaming quality is OK, but I've found several downsides for me:
     - my hardware has no hardware decode for it - at least an Intel Core i 3000 series is needed
     - too little RAM & CPU for workloads without GPU support
     - only single-monitor support
     - mouse support on iOS
     - no IPv6 support
     I also got the Ghost; yes, it's OK, but a base i3 NUC is more versatile - I just need at least 3 monitors at 1440p. I think it's a great service for 95% of gamers (with the right internet uplink), but I'm too much of a heavy user. (I have 3 GPUs in my system: an ATI V7900 just for 4x 1440p, a GTX 1060 for 3x 1440p to do stuff, and an RTX 2070 Super on 1x 1440p for compute & too little gaming at the moment - it's all on a P9D-WS; compute is handled by real servers in my garage rack.)
  10. Their devices are very cool. 1280x800 is fine, it's 16:10, which I like, and I have the same resolution on my 12" ThinkPad X201. I've seen their devices with a lot of customer datacenter techs and I asked them why:
     - small & light
     - a real PC
     - no touch
     - "high" resolution screen - for the size
     - real storage
     - can do real work
     - all the IO they really need, like Ethernet, full-sized USB & USB-C
     - Linux/BSD etc. support
     - no strange T2 chip, locked-down BIOS etc.
     - some even run VMs on it (native Linux and their company Windows in a VM)
     - some told me they were waiting for one with Thunderbolt - they want to connect a Sonnet Solo 10G SFP+ adapter for network testing & troubleshooting
     I've heard them call it (the Pocket 2) the EeePC they always wanted, and for watching video and writing some mails they also carry an iPad. So: two light devices with long battery life and easy charging.
  11. I'm running my handy X201 with 8GB - that's OK. On my work laptop, where AV, Webex and other bloat is installed & starts automatically because IT says so, 8GB is at or beyond the limit. On my "desktop" ThinkPad W520, 16GB is fine, unless I do something heavy like having Firefox, Opera, Outlook, a large Visio and MS Project open at the same time. Then I'm at 14GB up to full RAM use - normally ~10-12GB. In a comparison between my W520 (16GB - 4x DDR3), a NUC8 i3 with 32GB (2x DDR4) and my test system (Asus P9D-WS, E3-1231 v3, 32GB - 4x DDR3 ECC), the W520 feels "slower", but that's mostly the fault of the GPU. On the work end, I need way more: multiple older servers (full spec - OK, 2x E56** 6c) with 48GB+ and one main host with 768GB is fine. OK, I'm doing load testing for customers on their clusters at the moment - nothing at "normal" scale.
  12. TL;DR: Yes, your server room is really too small to have a rack, switch box and a UPS in there. Cabling: think about zero-U PDUs with C13/14 or even C19/20 outlets for your servers; some can also be switched and/or monitored. Also have two of them, A from the grid & B from the UPS, since two UPS systems aren't realistic at your scale. Get an ATS for single-PSU devices so they can benefit from both sides. You could also set the UPS to 220-230V to get better efficiency out of your server PSUs. If you only use the PDU outlets, nobody can plug a 120V device in by accident, because the plugs are different.
     Long version: I plan/build/operate colocation datacenters at a scale of 90-120MW per site / ~5-15MW per building. It's like "Unboxing Canada's BIGGEST Supercomputer!" but at scale. UPS systems at scale sometimes have problems, but they should have internal redundancy, not be loaded over 45% and, for efficiency, not under 30%. I believe you've got a double-conversion UPS, so you could install an ATS or STS between the power wallbox and your UPS. This way you could switch the UPS's power source from grid to backup generator if needed. I don't know how often that happens in your area, but here in Germany I've only had one unplanned outage of 4 minutes at home this year (+ one announced for maintenance). In the DC we normally have no outages because of the 110kV / 3-transformer n+1 setup, but we get everything that comes over the line, like spikes from lightning, or even direct lightning strikes. When that happened, all buildings disconnected themselves from the site grid, ran on UPS and started their generators (mostly 16-24 cylinders with ~35-70l of displacement and 1.8-2.4MW of power per generator). Under full load they each want ~500l / ~3 barrels of super-light heating oil (like diesel) per hour, and we have at least enough fuel on site to run them for 72h. Our UPS systems get a refresh after 8 years; by then the batteries "only" have 80-90% of their capacity left. I've got some (18) of these and have them connected to an aged Dell 5500VA unit to get longer runtime. I use Yuasa NPLs, and at work we use Yuasa SWL UPS blocks, mostly 56-60 in a string to get high DC voltages for the UPS, because it's more efficient and because this way we get lower amps and don't have to use "thick" cables - OK, they are still as thick as your hand. All our UPS systems are separated from the battery blocks; they sit in different rooms. At scale there is not only Eaton, but also Vertiv (Emerson Network Power) and Schneider Electric (APC). (A quick back-of-the-envelope sketch for the string voltage and fuel numbers follows below.)
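The back-of-the-envelope math behind those numbers, assuming 12V nominal per battery block and an illustrative 1MW UPS load (both my assumptions, not figures from the post); the fuel figure is the ~500l/h stated above:

```python
# Back-of-the-envelope numbers for the UPS battery string and generator fuel.
# Assumptions (mine, not from the post): 12 V nominal per battery block and an
# example 1 MW UPS load. Fuel rate and runtime are the figures from the post.

BLOCK_VOLTAGE_V = 12          # nominal voltage of one battery block (assumed)
BLOCKS_PER_STRING = 60        # "mostly 56-60 in a string"
UPS_LOAD_W = 1_000_000        # illustrative load: 1 MW

string_voltage = BLOCK_VOLTAGE_V * BLOCKS_PER_STRING   # ~720 V DC
dc_current = UPS_LOAD_W / string_voltage                # I = P / V

FUEL_PER_GEN_L_PER_H = 500    # ~500 l/h per generator under full load
RUNTIME_H = 72                # at least 72 h of on-site fuel
fuel_per_generator_l = FUEL_PER_GEN_L_PER_H * RUNTIME_H

print(f"String voltage:      {string_voltage} V DC")
print(f"DC current at 1 MW:  {dc_current:,.0f} A (hence the hand-thick cables)")
print(f"Fuel per genset for {RUNTIME_H} h: {fuel_per_generator_l:,} l")
```

The same math shows why higher string voltage helps: at a fixed load, doubling the DC voltage halves the current and therefore the required conductor cross-section.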
  13. @RejZoR Yes, the naming is a bit strange. Same as SK hynix, Samsung or Hitachi/Toshiba (in the enterprise HDD market).
  14. Jake types in his user/password on an external system to give Linus access to his share - please don't do that, the other person could capture your credentials. Create a separate share & user for the person who wants to back up; in the share config you can also set quotas so your mate doesn't fill your NAS up with his stuff. I upload my "external" backup to work, where I have racked my own server (a very old system). But at some friends' places I have a Raspberry Pi with a USB HDD that opens a reverse SSH tunnel to me (it could also be a cheap VM) and maps its SSH port there. Hyper Backup can do backups over SSH, just use "rsync" as the backup type. You may have to edit the path to get it working; in my case I mount the HDD to "/home/$USER/backup". Even on some WebDAV services that don't work out of the box, just type in "/home" and then you can run your backup. (A small sketch of the reverse-tunnel idea follows below.)
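A minimal sketch of the reverse-tunnel part, assuming OpenSSH with key-based auth on the Pi; the host name and ports are hypothetical, since the post doesn't give concrete values:

```python
# Minimal sketch: keep a reverse SSH tunnel from the backup Pi to my box open,
# so the NAS at home can reach the Pi's SSH/rsync port through my server.
# Assumptions (hypothetical): key-based auth is set up, "my-server.example.org"
# is reachable from the Pi, and remote port 2222 is free on that server.
import subprocess
import time

REMOTE = "tunneluser@my-server.example.org"   # hypothetical target host
REMOTE_PORT = 2222                            # port opened on my server
LOCAL_SSH_PORT = 22                           # the Pi's own sshd

while True:
    # -N: no remote command; -R: expose the Pi's sshd as port 2222 on my server.
    # ServerAliveInterval makes a dead tunnel exit quickly so the loop restarts it.
    subprocess.run([
        "ssh", "-N",
        "-o", "ServerAliveInterval=30",
        "-o", "ExitOnForwardFailure=yes",
        "-R", f"{REMOTE_PORT}:localhost:{LOCAL_SSH_PORT}",
        REMOTE,
    ])
    time.sleep(10)  # brief pause before re-establishing the tunnel
```

Hyper Backup (or plain rsync) then targets port 2222 on the server instead of the Pi directly, so nothing at the friend's place needs a port forward.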
  15. Wow, maybe I'll replace my X201 / T410 / W520 with an E495 with this CPU. "Battery is rated for 8h" - do you have test results for what to expect in reality - office, remote desktop, watching YouTube (all the non-gaming tasks)?
  16. The "kids" sitting in a meeting and want wo show something - and fail, because the company refuses guest access and have no cellular coverage in the room. That already takes place and I'm sitting there with my x201 dualboot XP & Win7 and run everything offline. Also the IT department would say no, because thy can't manage it via MS Active Directory. Wellcome to good offline legacy IT world.
  17. Sounds like AMD is going to replace Threadripper (from a workstation home user and datacenter view). So here are my thoughts about it:
     1. the 16-core 2950X ($900) vs. the EPYC options (single 7302P - $825 & dual 7282 $650 / 7302 $978) - sounds like kind of a "no server premium" to me
     2. why get a CPU with 60 PCIe lanes when I can have 128 lanes for mostly the same price
     3. roughly double the RAM bandwidth on EPYC - virtual systems will love this
     4. no way for Intel Xeon W - the 16-core is more than twice the price
     5. running a SAP HANA system on one 2U system and not a Huawei 9008 with 8x E7-8880 v4 - wohooo, fewer inter-CPU communication performance headaches
     6. I've got a Dell system with dual CPU and quad GPU for VDI testing with GPU acceleration, 2x ConnectX-6 (one port used for storage & one for user traffic - redundant setup), NVMe SSDs only for local caching. From my testing I could replace each HP c7000 rev.2 BladeSystem with G7 & G8 blades with one of these systems (and there are 54 blade centers in use - I could fit everything into three racks incl. storage -> less networking & cabling mess + less power draw).
  18. I would use it as a storage or VM box; as a storage box I would use the SAS port together with the expander backplane of a JBOD and put a NIC in the PCIe slot / use an x8/x8 riser for the NIC & HBA card.
  19. @GabenJr Yes, this was also my thought, and yes, for a server / workstation system with lots of cards. But storage and networking cards for 200G / 2x100G, like a ConnectX-5 (or newer), could take advantage of PCIe Gen 4 by going from x16 down to x8 or x4, leaving space for 4x GPUs in a compute node. Ryzen 9 - 16x PCIe Gen 4 | Threadripper 2990WX - 64x PCIe Gen 3 | EPYC - 128x PCIe Gen 3. I had a Dell R7425 with 1x 7401, 1x GPU + Fibre Channel + 100G Ethernet + PCIe NVMe SSDs lent from them for Hannover Messe. It was awesome to see all cards performing at full speed without lane sharing. OK, Cinebench R15 was crap on it, because I could only run Windows 7 in a VM due to platform support... 110/3343 (ESXi 6.0 failed to boost right). (Quick lane-bandwidth math below.)
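The quick math behind the x16-to-x8 point, using the published line rates (8 GT/s for Gen 3, 16 GT/s for Gen 4, both with 128b/130b encoding); a rough sketch, not a benchmark:

```python
# Rough per-link bandwidth math showing why a Gen 4 x8 slot roughly matches a
# Gen 3 x16 slot. Figures are raw line rate after 128b/130b encoding; real
# throughput is somewhat lower due to protocol overhead.

def link_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe link."""
    line_rate_gtps = {3: 8.0, 4: 16.0}[gen]        # GT/s per lane
    encoding = 128 / 130                           # 128b/130b encoding efficiency
    return line_rate_gtps * encoding / 8 * lanes   # bits -> bytes, times lanes

for gen, lanes in [(3, 16), (3, 8), (4, 8), (4, 4)]:
    print(f"PCIe Gen {gen} x{lanes}: ~{link_bandwidth_gbs(gen, lanes):.1f} GB/s")

# A single 100G port needs ~12.5 GB/s, so it fits in Gen 4 x8 but not Gen 3 x8;
# both 100G ports saturated at once still want a full x16 link.
```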
  20. I watched it to check whether I should buy 3rd-gen Ryzen or 2nd-gen Threadripper. But the issue was that you benchmarked the new ones with R20 and the "old" ones with R15. I found a site, "cpu monkey", where someone had already benchmarked the old ones with R20.
     Your results: Ryzen 7 2700X - 235/4029 | Ryzen 7 3700X - 502/4875 | Ryzen 9 3900X - 516/7253
     Their results: Ryzen 5 3600 - 478/3509 | Threadripper 1920X - 406/5326 | Threadripper 1950X - 411/6731 | Threadripper 2920X - 439/5843 | Threadripper 2950X - 449/7003 | Threadripper 2990WX - 398/11463
  21. From an IT/sysadmin view, has anyone tried Citrix or MS Remote Desktop with iPadOS and a mouse? If yes, I could start handing out iPads instead of notebooks. Notebooks have way more problems with drivers etc., my users hate them because we spend too much time fixing them, and I could save about 40% of our first & second level support staff's work time by just reinstalling the device.
  22. I would call an iMac a portable device, because I've already seen people working on the train with their iMac.
  23. Sorry Linus, but 1,000,000 watts is only 1 MW. We in a datacenter consume about 80 MW and can pull up to 120 MW with n+1 redundancy. Also, a Tier 1 DC is nothing special; your server room could be close to Tier 1.
  24. As you have a 10G NIC, I would leave out the HDD and put the large files on a data server. Enable deduplication and you're done, since most of the systems will have the same game data on them. (A toy dedup-savings sketch follows below.)
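A toy sketch of why dedup pays off here: hash fixed-size chunks across the game folders and count how many repeat. This only estimates the potential savings and is not how any particular NAS implements dedup; the paths and chunk size are made up:

```python
# Toy estimate of deduplication savings: hash fixed-size chunks of every file
# under the given directories and count how much data repeats. Estimation only;
# this is not the dedup engine of any specific file server.
import hashlib
from pathlib import Path

CHUNK_SIZE = 1024 * 1024                                       # 1 MiB (arbitrary)
GAME_DIRS = [Path("/srv/games/pc1"), Path("/srv/games/pc2")]   # hypothetical paths

seen = set()
total_bytes = duplicate_bytes = 0

for root in GAME_DIRS:
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        with path.open("rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).digest()
                total_bytes += len(chunk)
                if digest in seen:
                    duplicate_bytes += len(chunk)   # would be stored only once
                else:
                    seen.add(digest)

if total_bytes:
    print(f"Scanned {total_bytes / 1e9:.1f} GB, "
          f"~{duplicate_bytes / total_bytes:.0%} duplicate chunks")
```

With identical game installs copied from several machines, most chunks repeat, which is exactly the case where server-side dedup saves the bulk of the space.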
  25. As a conclusion I would say: laptop/Apple users buy LG & pro (work) users buy Dell. I have 2x B2791QSU-B1 (and too many 22" monitors on my desk) with different devices connected and a 4-output USB KVM, so with the Dell I could use 8 different systems with "just" 4 monitors.