zENjA

Member
  • Content Count
    30
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About zENjA

  • Title
    Member

Contact Methods

Profile Information

  • Gender
    Not Telling
  • Location
    Kassel & Frankfurt a.M., Germany
  • Interests
    ccc related
  • Biography
    IT & Tech Nerd
  • Occupation
    Datacenter Engineer

Recent Profile Visitors

623 profile views
  1. Jake types his user/pw into an external system to give Linus access to his share - please don't; the other person could capture your credentials. Create a separate share & user for the person who wants to back up, and in the share config you can also set quotas so your mate doesn't fill your NAS with his crap. I do my "external" backup to work, where I have racked my own server (very old system). But at some friends' places I have a Raspberry Pi with a USB HDD that opens a reverse SSH tunnel to me (it could also be a cheap VM) and maps the SSH port there (a sketch of that tunnel is at the end of this list). Hyper Backup can do backups over SSH - just use "rsync" as the backup type. You may have to edit the path to get it working; in my case I mount the HDD to "/home/$USER/backup". Even on some WebDAV services that don't work out of the box, just type in "/home" and then you can run your backup.
  2. Wow, maybe I'll replace my x201 / T410 / W520 with an E495 with this CPU. "Battery is rated for 8h" - do you have test results for what to expect in reality - office, remote desktop, watching YouTube (all the non-gaming tasks)?
  3. The "kids" sit in a meeting and want to show something - and fail, because the company refuses guest access and there's no cellular coverage in the room. That already happens, and I'm sitting there with my x201 dual-booting XP & Win7, running everything offline. The IT department would also say no, because they can't manage it via MS Active Directory. Welcome to the good old offline legacy IT world.
  4. Sounds like AMD is going to replace Threadripper (from a workstation home user and datacenter view). So here are my thoughts about it:
     1. the 16-core 2950X ($900) vs. the Epyc ones (single 7302P - $825 & dual 7282 - $650 / 7302 - $978) - sounds like "no server premium" to me
     2. why get a CPU with 60 PCIe lanes when I can have 128 lanes for mostly the same price?
     3. roughly double the RAM bandwidth on Epyc - virtual systems will love this (rough numbers in the sketch at the end of this list)
     4. no way for Intel Xeon W - the 16-core is more than twice the price
     5. running a SAP HANA system on one 2U system instead of a Huawei 9008 with 8x E7-8880 v4 - woohoo, less inter-CPU communication performance shit
     6. I've got a Dell system with dual CPU and quad GPU for VDI testing with GPU acceleration, 2x ConnectX-6 (one port used for storage & one for user traffic - redundant setup), NVMe SSDs only for local caching. From my testing I could replace each HP c7000 rev. 2 BladeSystem with G7 & G8 blades with one of these systems (and there are 54 blade centers in use - I could fit everything into three racks incl. storage -> less networking & cabling shit + less power draw).
  5. I would use it as a storage or VM box, but as a storage box I would use the SAS port together with the expander backplane of a JBOD and put a NIC in the PCIe slot / use an 8x/8x riser for the NIC & HBA card.
  6. @GabenJr yes, this was also my thought, and yes, for a server / workstation system with lots of cards. But some storage and networking cards like 200G/2x100G ConnectX-5 (or newer) could take a boost from PCIe gen 4 by going from x16 to x8, or to x4, to leave space for 4x GPUs in a compute node (lane math in the sketch at the end of this list).
     Ryzen 9 - 16x PCIe gen 4
     Threadripper 2990WX - 64x PCIe gen 3
     Epyc - 128x PCIe gen 3
     I had a Dell R7425 with 1x 7401 with 1x GPU + Fibre Channel + 100G Ethernet + PCIe NVMe SSDs on loan from them for Hannover Messe. It was awesome to see all cards performing at full speed without lane sharing. OK, Cinebench R15 was crap on it, because I could only run Windows 7 in a VM due to platform support... 110/3343 (ESXi 6.0 failed to boost right)
  7. I watched it to check whether I should buy 3rd-gen Ryzen or 2nd-gen Threadripper. But the issue was that you benchmarked the new ones with R20 and the "old" ones with R15. I found a site, "cpu monkey", where someone has already benchmarked the old ones with R20.
     Your results (single/multi):
     Ryzen 7 2700X - 235/4029
     Ryzen 7 3700X - 502/4875
     Ryzen 9 3900X - 516/7253
     Their results:
     Ryzen 5 3600 - 478/3509
     Threadripper 1920X - 406/5326
     Threadripper 1950X - 411/6731
     Threadripper 2920X - 439/5843
     Threadripper 2950X - 449/7003
     Threadripper 2990WX - 398/11463
  8. From an IT/sysadmin view: has anyone tried Citrix or MS Remote Desktop on iPadOS with a mouse? If yes, I could start handing out iPads instead of notebooks - notebooks have way more problems with drivers etc., my users hate them because we spend so much time on fixes, and we'd save about 40% of our first & second level support staff's worktime by simply reinstalling the device.
  9. I would call an iMac a portable device, because I've already seen people working on a train with their iMac.
  10. Sorry Linus, but 1,000,000 watts is only 1 MW. We in a datacenter consume about 80 MW and can pull up to 120 MW with N+1 redundancy. Also, a Tier 1 DC is nothing special - your server room could be near Tier 1.
  11. As you have a 10G NIC, I would leave out the HDD and put the large files on a data server. Enable deduplication and you're done, as most of the systems will have the same game data on them.
  12. As a conclusion I would say: laptop/Apple users buy the LG & pro (work) users buy the Dell. I have 2x B2791QSU-B1 (and too many 22" monitors on the desk) with different devices connected and a 4-port USB KVM, so with the Dell I could use 8 different systems with "just" 4 monitors.
  13. Latency is not round-trip time... latency is always one way, not there and back again. Yes, the Ciena "CPE" will add latency; it's mostly a DWDM device without an optical splitter. It's like a switch with "extended monitoring functions".
      Overhead is a thing: if you test with only 64-byte frames you will never get the full speed. That's why "more carrier grade" routers like the EdgeRouter Infinity state the max. packet rate at this small size. If you go to 1024-1514 bytes you see "mostly the full performance", but you still have overhead, especially if your ISP limits your MTU to 1492 on xDSL, or sometimes even smaller on coax lines - sometimes because your ISP packs your IP/Ethernet into SDH or puts another header like PPPoE on top. You only see the max. speed if you go higher, like in a SAN with 9000-byte jumbo frames (efficiency numbers in the sketch at the end of this list). It's also possible to configure something larger than 9000, like 12394, so the ISP's overhead can be added without the customer seeing any downside - say, when you take jumbo frames from the customer and transport them over a link where your encapsulation and routing (GRE, MPLS, IPsec) adds more overhead, sometimes even combined. In my sub-AS I allow routing of "oversized packets" because my upstream AS accepts them. Then at the next hops:
      - some of the IXs (Internet Exchanges) also allow jumbo frames, others don't, so if the connected ISPs can handle it, you have less overhead
      - some of your direct peerings allow them too, like on a link to another competing DC provider (two customers even move around 30-60GbE to two other DC providers where they are also customers, and for whatever reason they decided to go through the "internet" and not via a private wave)
      - even one of our four transit providers allows large packets
      Testing with a browser is not very efficient - too much overhead from HTTP, encryption, etc.
      Peering at an IX or directly is cheaper than transit; transit is just someone doing peering all over the world for you. This is why you have a 10G wave to the DC and all connected peering partners (who have 10G or more), but "only" 5.xxxG to the "internet".
      And then layers 0 & 1: the distance rating on an SFP is only a rough hint of how well it can perform. What really counts is the power budget - some 10km SFPs have a budget of 8.2dB, and the customer runs them on an 18.8km link...
  14. I would expand my iFixit kit with some of them, but I've already extended it with a "standard" 1/4" bit kit (Wera 05056490001) & a Bosch GSR 12V-15 with a Wera 05052502001 & 15cm & 30cm extensions. And in my daily bag (as a DC engineer) I carry a Wera 05057460001 & 05073660001, because in most cases more force needs to be transferred to the screws when mounting switches that the customer delivers without full rack rails (especially with switches longer than 40cm), only with front brackets that bend too easily.
  15. OK, you said you want to go the 100G way. I have a demo setup in my work lab with a Mellanox MSN2700-CS2F & MCX416A-CCAT and some QSFP28 DACs running well. The main purpose of this setup is to show off the 300G NVMe storage box underneath it (delivered by memorysolution). I would not put VM storage on the Optane drive, because with (dual or quad) 100GbE you won't need a local drive - the latency will be imperceptibly low. For passthrough I recommend forwarding one USB controller to each VM, so any device lands on the right VM when plugged into the "right" card.
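
For post 1 above: a minimal sketch of how the Pi side of that reverse SSH tunnel could be kept alive. The endpoint "backup@tunnel.example.org" and port 2222 are placeholders, not the actual setup described; it assumes key-based auth and the OpenSSH client being installed (tools like autossh do the same job).

```python
# Reverse-SSH-tunnel keeper (sketch). Exposes the Pi's sshd on a remote
# endpoint so the NAS owner can reach it; reconnects whenever ssh exits.
import subprocess
import time

REMOTE = "backup@tunnel.example.org"  # hypothetical tunnel endpoint
REMOTE_PORT = 2222                    # where the Pi's SSH port appears remotely
LOCAL_SSH_PORT = 22

while True:
    # -N: no remote command, -R: reverse-forward the Pi's SSH port;
    # ExitOnForwardFailure makes ssh exit (and us retry) if the port is taken.
    subprocess.run([
        "ssh", "-N",
        "-o", "ServerAliveInterval=30",
        "-o", "ExitOnForwardFailure=yes",
        "-R", f"{REMOTE_PORT}:localhost:{LOCAL_SSH_PORT}",
        REMOTE,
    ])
    time.sleep(10)  # tunnel dropped; back off briefly and reconnect
```

Hyper Backup (rsync type) would then target the endpoint's port 2222 instead of the Pi directly.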
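For point 3 in post 4: a back-of-envelope check of the "roughly double the RAM bandwidth" claim, assuming the platform maximums of 8-channel DDR4-3200 on Epyc 7002 and 4-channel DDR4-2933 on the 2950X; these are theoretical peaks, not measured numbers.

```python
# Theoretical peak DDR4 bandwidth: channels * MT/s * 8 bytes per transfer.
def ddr4_peak_gbs(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 8 / 1000  # GB/s

print(f"Epyc 7302 (8ch DDR4-3200):          {ddr4_peak_gbs(8, 3200):5.1f} GB/s")
print(f"Threadripper 2950X (4ch DDR4-2933): {ddr4_peak_gbs(4, 2933):5.1f} GB/s")
# -> 204.8 vs 93.9 GB/s, i.e. a bit more than double
```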
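For post 6: the lane math behind "gen 3 x16 ≈ gen 4 x8", assuming 128b/130b encoding on both generations; real NICs lose a few percent more to PCIe protocol overhead.

```python
# Usable PCIe bandwidth per slot: GT/s * 128/130 encoding * lane count.
GEN_GT_S = {"gen3": 8.0, "gen4": 16.0}

def slot_gbit_s(gen: str, lanes: int) -> float:
    return GEN_GT_S[gen] * (128 / 130) * lanes  # Gbit/s per direction

for gen in ("gen3", "gen4"):
    for lanes in (4, 8, 16):
        print(f"{gen} x{lanes:<2}: {slot_gbit_s(gen, lanes):6.1f} Gbit/s")
# gen4 x8 (~126 Gbit/s) carries a single 100G port that needed x16 on gen3;
# a full 2x100G card still wants a gen4 x16 slot.
```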
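For post 13: the frame-size overhead in numbers, assuming plain untagged Ethernet (7B preamble + 1B SFD + 14B header + 4B FCS + 12B inter-frame gap = 38 bytes on the wire besides the payload); the PPPoE case is simplified here to just a smaller payload.

```python
# Share of line rate that is actual payload, per payload size.
WIRE_OVERHEAD = 7 + 1 + 14 + 4 + 12  # bytes per frame besides the payload

def goodput_fraction(payload_bytes: int) -> float:
    return payload_bytes / (payload_bytes + WIRE_OVERHEAD)

# min frame, PPPoE-limited MTU, standard MTU, jumbo frame
for payload in (46, 1492, 1500, 9000):
    print(f"{payload:5d} B payload -> {goodput_fraction(payload):6.1%} of line rate")
# -> ~54.8%, ~97.5%, ~97.5%, ~99.6%: why 64-byte tests never show full speed
```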