zENjA

Member
  • Content Count

    18
  • Joined

  • Last visited

Awards


This user doesn't have any awards

About zENjA

  • Title
    Newbie

Contact Methods

Profile Information

  • Gender
    Not Telling
  • Location
    Kassel & Frankfurt a.M., Germany
  • Interests
    ccc related
  • Biography
    IT & Tech Nerd
  • Occupation
    Datacenter Engineer

Recent Profile Visitors

484 profile views
  1. zENjA

    We got 10 GIGABIT Internet!!

    Latency is not round-trip time... latency is one way, not there and back again. Yes, the Ciena "CPE" will add latency; it's mostly a DWDM device without an optical splitter, something like a switch with "extended monitoring functions".

    Overhead is a thing: if you test with only 64-byte packets you will never get the full speed. That's why "more carrier-grade" routers like the EdgeRouter Infinity state their maximum packet rate at this small size. If you go to 1024-1514 bytes you see "mostly the full performance", but you still have overhead, especially if your ISP limits your MTU to 1492 on xDSL, or sometimes even smaller on coax lines. Sometimes that's because your ISP packs your IP/Ethernet into SDH, or puts another header like PPPoE on top. You see the maximum speed if you go higher, like in a SAN where you use 9000-byte jumbo frames. It's also possible to configure something larger than 9000, like 12394, so the ISP can add its own overhead without the customer seeing any downside - for example when you carry jumbo frames (from the customer) over a link where encapsulation and routing such as GRE, MPLS or IPsec adds more overhead, sometimes even combined.

    In my sub-AS I allow routing of "oversized packets" because my uplink AS accepts this. Then at the next hops:
    - some IXs (Internet Exchanges) also allow jumbo frames, others don't, so if the connected ISPs can handle it, you have less overhead
    - some of your direct peerings allow them too, like on a link to another competing DC provider (two customers even move around 30-60 GbE to two other DC providers where they are also customers, and for whatever reason they decided to go through the "internet" and not via a private wave)
    - even one of our four transit providers allows large packets

    Testing with a browser is not very efficient; too much overhead from HTTP, encryption, etc. Peering at an IX or directly is cheaper than transit - transit is just someone doing the peering all over the world for you.
    This is why you have a 10G wave to the DC and to all connected peering partners (who have 10G or more), but "only" 5.xxxG to the "internet". And then there are layers 0 & 1: the distance rating on an SFP is only a rough indication of how well it can perform. The only thing that really counts is the power budget - some 10 km SFPs have a budget of 8.2 dB and the customer is using them on an 18.8 km link...
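    The frame-size overhead described above can be put into numbers. A minimal sketch, assuming plain Ethernet on the wire (18 bytes of header + FCS per frame, plus 8 bytes preamble and 12 bytes inter-frame gap) and modelling PPPoE as an extra 8-byte header inside the payload:

    ```python
    # Share of the line rate that is usable payload, for a given frame payload size.
    # Each frame costs: payload + 18 B Ethernet header/FCS + 8 B preamble + 12 B IFG.
    ETH_OVERHEAD = 18    # Ethernet header (14 B) + FCS (4 B)
    WIRE_OVERHEAD = 20   # preamble (8 B) + inter-frame gap (12 B)

    def goodput_fraction(payload: int, extra_header: int = 0) -> float:
        """Usable payload share of the line rate for one frame size.

        extra_header models encapsulation inside the frame (e.g. 8 B PPPoE),
        which shrinks the usable payload without changing the on-wire size.
        """
        wire = payload + ETH_OVERHEAD + WIRE_OVERHEAD
        return (payload - extra_header) / wire

    for label, payload, extra in [
        ("64 B frame (46 B payload)", 46, 0),
        ("1500 B MTU", 1500, 0),
        ("MTU 1492 behind PPPoE", 1500, 8),
        ("9000 B jumbo", 9000, 0),
    ]:
        print(f"{label:28s} {goodput_fraction(payload, extra):6.1%}")
    ```

    At minimum-size frames barely half the line rate (about 54.8%) is payload, which is why carrier routers quote packets per second at that size; a 1500-byte MTU reaches roughly 97.5%, and 9000-byte jumbo frames push it above 99%.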
  2. zENjA

    Chinese Tools vs iFixit Kit!

    I would expand my iFixit kit with some of them, but I've already expanded mine with a standard 1/4" bit kit (Wera 05056490001) and a Bosch GSR 12V-15 with a Wera 05052502001 plus 15 cm and 30 cm extensions. And in my daily bag (as a DC engineer) I have a Wera 05057460001 & 05073660001, because in most cases more force needs to be transferred to the screws when mounting switches that the customer delivers without full rack rails (especially switches longer than 40 cm), with only the front brackets, which give way too easily.
  3. zENjA

    The $100,000 PC LIVES!

    OK, you said you want to go the 100G route. I have a demo setup in my work lab with a Mellanox MSN2700-CS2F & MCX416A-CCAT and some QSFP28 DACs, running well. The main purpose of this setup is to show off the 300G NVMe storage box underneath it (delivered by memorysolution). I would not put VM storage on the Optane drive, because if you have (dual or quad) 100GbE you won't need a local drive - the latency will be imperceptibly low. For passthrough, I recommend forwarding one USB controller to each VM, so any device lands on the VM when plugged into the "right" card.
  4. @Egg-Roll If you calculate the system's lifetime at 5 years and you use it a lot for gaming, OK. I'm also with you on the storage needs, and partly on the uplink pricing (because all the places I go to have good connectivity). But if you need this kind of resources on the go / multi-homed... would you buy more systems?
  5. I've tested it here in Germany with a low-latency connection (about 9 ms to the host's public IP via our own AS -> Telia -> Cogent -> Blade), a normal coax uplink (150/10, 40-60 ms, Unitymedia -> Cogent -> Blade) and a VDSL uplink (250/40, 22 ms, German Telekom -> hupus -> Blade). I also tested a mobile connection on the train... fail. The wired uplinks worked fine; I didn't notice any "lagging". The only downside is that the GPUs in my W520 & X201 don't like decoding the stream: I can hear the sound of Hitman shooting, but had a decoding delay of about 300-800 ms. So I'm more into this old-school type of computing, and because of that the GPUs' video engines can't deliver the needed performance. Alternatively, I also tested it on my old AMD Phenom 9750 + ATI HD5770 and on my NUC with an i5-5250U; there it works fine.

     The Steam Link also ran great with a small hack: set up a VPN endpoint on your home firewall, configure the VPN target on the Shadow system, establish the connection, and give the Steam Link the VPN IP of the Shadow... works for me. And because Valve stopped selling their Link, the Ghost will be an alternative as soon as it becomes buyable. I hope they bring dual- or triple-monitor clients soon, because I use this as a working setup. In my main house (weekends) and my work flat I have a fixed monitor setup that I can't carry around three times a week, and having local computers there also doesn't make any sense. Yes, I have laptops, but because of my needs they can't be newer than 2***/3*** gen CPUs (legacy BIOS needed).
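     The VPN hack above can be sketched with WireGuard - a hypothetical example, since the post doesn't say which VPN software was used, and all keys, hostnames and the 10.99.0.0/24 / 192.168.1.0/24 subnets here are made up. The home firewall runs the listening endpoint, the Shadow system runs a client config like this, and the Steam Link is then pointed at the Shadow's tunnel address (10.99.0.2):

     ```ini
     # /etc/wireguard/wg0.conf on the Shadow system (client side) - hypothetical values
     [Interface]
     PrivateKey = <shadow-private-key>
     Address    = 10.99.0.2/24

     [Peer]
     # VPN endpoint on the home firewall
     PublicKey           = <firewall-public-key>
     Endpoint            = home.example.net:51820
     # Route only the tunnel subnet and the home LAN through the VPN
     AllowedIPs          = 10.99.0.0/24, 192.168.1.0/24
     # Keep the NAT mapping alive so the Steam Link can reach the Shadow at any time
     PersistentKeepalive = 25
     ```

     Restricting AllowedIPs to the home LAN keeps the game stream itself off the tunnel; only the Steam Link discovery/control traffic needs to traverse it.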
  6. But while the CDN and financial customers in my DC may run Solarflare inside their servers, in the core they still use Juniper MX/QFX and Ciena DWDM systems to uplink to the DE-CIX. Most financial customers, and even fintech startups, still rely on 10G or even 1GbE, and their uplink hasn't seen 1G of traffic yet, even if the carrier could deliver more...
  7. I set up the 100G-over-Fabric showcase on Thursday for the Amsterdam OCP event next week. Yes, that's crazy stuff (from a normal standpoint). It's running an AIC FB127-LX with 3x 100G + MZ4LB3T8HALS-00003 (Samsung 3.8 TB M.3 NVMe SSDs), a Mellanox MSN2700-CS2F (32x 100G) switch, and 4 compute nodes with 100G each. But Linus, for your "office use" you should just get ConnectX-3 cards with an MSX1012B-2BRS switch; it's more cost-effective at the moment. In conclusion, some hardware p*rn:
  8. zENjA

    The Hotel Built for Gamers

    If I go to a hotel, only for business, I just take my small laptop with me. Worst are hotels that charge for the WiFi and then it sucks: low throughput, weak signal, data caps... Having a system that is "fresh" is always nice to have, because most hotel "internet points" suck even harder: viruses, unpatched systems, blocking my work VPN... But they really did think about their wiring and their APs.
  9. zENjA

    Do Dual CPU Sockets Matter in 2018?

    I've worked at a datacenter / multi-customer facility for two years, but I can't see any decrease in dual-socket systems. Quad-socket systems are decreasing because of the better density of dual-socket, especially for customers who can live with 1.5 TB RAM or less. Single-CPU systems are also getting rarer, because most customers want as much computing power in their racks as possible. But I can see an increase of single-CPU systems in "non-standard compute workloads": blackbox firewalls like Fortigates, VPN gateways like Junos Pulse gateways, and other networking- or transfer-intensive loads. Mostly because they need to get more efficient.
  10. zENjA

    56 Cores in ONE SYSTEM! - HOLY $H!T

    Some of my customers run Dell R940s with 4 CPUs, Mellanox ConnectX-5 cards and MSN2700-CS2F switches as their hyper-converged setup in my datacenter, but back to topic. Build your own NVIDIA GRID solution by allocating GPU performance dynamically to the users, and at night, when the editors aren't working, use it as an input/output render system. Or do remote editing on the vGPUs with RemoteFX on MS Server 2012/2016 with Tesla M10s, like Morten from My PlayHouse did it: And then combine it with 40G/100G networking, with a Mellanox SX1012 (or faster) and ConnectX-3 / -5 cards.
  11. zENjA

    Super-Ultra-Wide Monitor – Dank or Dumb?

    I have used wide monitors once, but only to simulate two 1280x1024 screens on one, because the medical system I was the operator of was optimized for that; it couldn't handle 1080p in mid-2017. But the height of the resolution is the main problem with all modern monitors: they are all wide. I had mostly 1920x1200 screens for a long time, and at the moment I'm changing to 2560x1600. Most programs I use run better on 16:10 than on 16:9 or smaller.
  12. zENjA

    3 Reasons to NOT Buy a $400 Laptop

    I have a ThinkPad X201 as my secondary (private & work mixed-use) machine.
    X201 - 220€
    120 GB SSD - 60€
    new battery - 50€
    8 GB RAM - 65€
    docking station - 20€
    But why? I gave back my smartphone to use a laptop - mouse and keyboard are my favorites.
  13. I think this shows that for 80% of normal office PCs, or your mom's dust-collector box, the G4560 is enough. I even configured the main standard system with it for a hospital I worked for; it was enough with 8 GB RAM and a 120 GB SSD. Some of my friends use the G4400 in their main home-server setups, and the only thing they think is "I need more RAM" (8/16 -> 16/32/64). I also thought about an i7, but I ended up with the E5-1620 v3 because of ECC support. It's running an old company DB that does better with fewer, higher-clocked cores. Overclocking was not an option.
  14. zENjA

    Windows vs Windows Server Performance

    Last time, on Server 2008 R2 and 2012 R2, you had to activate the "Feature -> WiFi Support/Service" first. After that I could install the driver.
  15. zENjA

    Steam Link Review

    I mean playing splitscreen with 2-4 players; I forgot to describe it well. Also, they already had this system - you can see it at linustech on Instagram (linked): https://www.instagram.com/p/BBq42kASbO9/?taken-by=linustech