tt2468

Member
  • Posts

    2,052
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Contact Methods

  • Discord
    tt2468#2468
  • Steam
    http://steamcommunity.com/id/big-tuna
  • Twitch.tv
    http://twitch.tv/tt2468

Profile Information

  • Gender
    Male
  • Location
    Here in my garage
  • Interests
    Enterprise Networking, Live Video Workflows, Big SSDs
  • Biography
    I do stuff. Lots of stuff. Notably I'm a maintainer of obs-websocket and obs-ndi
  • Occupation
    Owner of IRLToolkit and contract programmer
  • Member title
    pp

System

  • CPU
    Intel Core i7-6850K
  • Motherboard
    Asus X99-WS/IPMI
  • RAM
    32GB Vengeance LPX 2400MHz
  • GPU
    GeForce GTX 1070 Ti
  • Case
    Fractal Define C Tempered
  • Storage
    Samsung 960 Pro M.2, Intel 730 Series 240GB
  • PSU
    Corsair RM750
  • Display(s)
    BUNCHA FKIN MONITORS - xqc
  • Cooling
    Corsair H100i V1
  • Keyboard
    Corsair K65
  • Mouse
    Logitech G403
  • Sound
    ATH-M50x, AT2020 XLR, Midas MR12
  • Operating System
    Windows 10 Pro

Recent Profile Visitors

4,622 profile views
  1. New codecs are much more likely to result in higher quality streams, not lower bandwidth as one might assume. Users switching from AVC will continue streaming at the bitrate they are now (I've seen it already with my own eyes as someone working in this space), and on platforms like Twitch, you'll likely see similar bitrates being delivered to viewers. As for latency: no, if anything it might get slightly better, thanks to the lack of b-frame support in many of the newer encoders (b-frames add reordering delay). More complex to encode/decode != more latency. Like GRG said too, many other factors affect latency, like the delivery pipeline and transcode stages.

     In my professional opinion, this is never going to happen on any notable scale. I think you may be confusing container format and codec, as I don't remember HEVC supporting multiple video tracks (see the container-inspection sketch after this list). Either way, compositing before the final encode is always going to be more efficient than compositing after the final decode, so long as the final product (the presented stream) is the same.

     As far as I can tell, this is not likely to happen on Twitch any time soon due to limitations of their infrastructure. YouTube, I have no idea.

     Regarding HEVC licensing, it's my not-a-lawyer understanding that the primary limitation on delivery is that while a lot of devices do have hardware HEVC decoding, software HEVC decoding is effectively off-limits for most browsers. Some platforms like Windows do have the ability to "purchase" HEVC decode support, but it's generally not user-friendly enough to be worthwhile. In some of the circles I'm in, a lot of the legal questions about what is and isn't allowed within the licensing weren't answered until very recently. The lack of HEVC adoption is 100% a result of the confusing licensing IMO.

     I do also want to point out that Enhanced RTMP is effectively a stop-gap solution to there being no scalable replacement for RTMP currently available. An initiative called MoQ is being undertaken to create a protocol capable of replacing RTMP, but it is considered to be more than a year away, so platforms have been looking for a way to make things like HEVC work in the short term.
  2. I'm gonna go ahead and disagree with a lot of the users in this thread: the fiber could very well be fine. The only way to test is to plug in your transceivers and see if a link pops up, and as you saw, it works fine. Modern fiber is incredibly tough against bends, kinks, and all kinds of abuse. What you want to avoid is having a kink in your fiber during normal operation, but it's actually difficult to break modern fiber like OM3/OM4. Fiber can't really be compared to ethernet in this way: with fiber, it's either broken or it isn't. The only way you get performance problems is if your cable is kinked severely enough that the signal attenuation becomes too much for the transceivers to compensate for (there's a rough loss-budget sketch after this list). I've had scenarios where cables were chewed on and damaged in a very similar way to OP's, and those cables are still working great to this day. Just a few days ago I pulled some SMF through a quite busy 1" conduit and it got bent in half during pulling, but the actual fiber is completely undamaged.
  3. I use the Dell N2024 in my system. It has 2 SFP+ ports and 24 GbE ports. Got mine for a few hundred bucks on eBay and it's a perfect fit for my setup. I personally love the Dell/Force10 CLI. Network switches are among the computer hardware that ages slowly but loses value quickly, so there are many deals to be had on hardware that is still modern but is being phased out due to EOL/support contracts ending.
  4. Level 3 is among the big-boy providers (the kind that usually only peer with other ISPs), and I guarantee you that they are more expensive than Comcast. As the other post stated, you'll likely have to handle basically everything that an ISP like Comcast normally does for you, like BGP, routing, IP block management, etc. I'm not sure what kinds of services they offer other than direct peering, but for direct peering, these are some pretty normal rates. And yes, that's per `megabit` (see the billing sketch after this list).
  5. There is usually an access panel somewhere in the house. Commonly they are in closets and look like this. You put an ethernet switch in the panel and then hook your router up to the switch.
  6. I have an XRackPro that I was given by a business that closed down. It locks and is completely sound-dampened too. I would say get an APC NetShelter, but I don't think they sell anything smaller than 24U.
  7. Why use ESXi when you can use Proxmox, which is free and has every feature he needs? I've been using Proxmox in production for 2+ years and have never had any problems.
  8. I don't see any antenna ports on the device, but the eBay link has a picture of antennas. Which is it?
  9. I wouldn't recommend getting it. It's running at x1 speed, and it looks pretty sketchy.
  10. You can get a wireless adapter that is not as bad as the Asus one. Get one that is preferably PCIe and has external antennas. That tiny little dongle is not designed for the connection strength that gaming requires.
  11. TCP and IP are not an "or" thing; they are in different categories (see the socket sketch after this list). The Apple hotspot is just like any other hotspot in that it hosts a wifi network, provides a DHCP server, and performs gateway functions like NAT and routing.
  12. Like others have said, you can pretty much just plug both PCs into each other. But since there is no DHCP server on a point-to-point link, you will need to configure a static address on each PC. For example, one PC can be 192.168.200.1 and the other can be 192.168.200.2. Just make sure they're in the same subnet; with a standard 255.255.255.0 mask, that basically means keeping both static IP addresses identical except for the last octet (see the subnet-check sketch after this list). You can then go to the file/folder you want to share, right click, then go to the Sharing tab. Click Advanced Sharing, check "Share this folder", then go to Permissions and enable Full Control for the Everyone user. See this screenshot: You should then be able to go to PC #2, type \\[pc#1'sipaddress] into the address bar in Windows Explorer, and your share should pop up.
  13. Yeah, I personally do not see Hyper-V as a production hypervisor; it is outdated in so many ways. You should check out Proxmox. I've been using it for almost 2 years now in the datacenter and it's been great. It's got a Debian base and runs KVM or containers. Really awesome IMO. It will run Windows Server just fine, and it has its own web management interface that is used for taking care of everything.
  14. The 45Drives Storinator is literally just a server; it has no special firmware or anything. It's a Xeon-based server with 15 bays. I personally would go with option 3 and get a nice server and storage array. Something good would be a DL360 G8 or R420 with a Lenovo SA120. Here is a nice DL360 G8: https://www.ebay.com/itm/HP-Proliant-DL360-G8-Virtualization-16-Core-Server-48GB-RAM-P420-4-Trays/172530477994 And an external HBA if you want to run software RAID: https://www.ebay.com/itm/SAS9207-8E-LSI-8-port-6gb-s-SAS-SATA-TO-PCIe-Host-Bus-Adapter/253413945368 Or if you would like to do hardware RAID: https://www.ebay.com/itm/9285CV-8E-LSI-SATA-SAS-MEGARAID-CONTROLLER-1GB-CACHE-6G-PCI-E-2-0-x8/131953999051 SA120: https://www.ebay.com/itm/Lenovo-ThinkServer-SA120-DAS-Array-70F10000UX-3-Year-Warranty/332337644546 Everything supports 6Gb/s SATA and SAS, so you can put whatever drives you want in there. I think the biggest drive someone has put in an SA120 is 8TB, but I'm quite sure it will do more. What's nice is that if you ever need more storage, you can just purchase another array and stack them together if you run out of ports on your HBA/RAID card.
  15. While splitting lanes without a PLX is possible, and not actually that hard, I don't think there is a big enough market for things like this, sadly. There exist solutions that use PLX chips to expand the amount of available PCIe lanes to a backplane, but they are either PCIe Gen 2 or overly expensive.
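To make the container-vs-codec distinction from #1 concrete, here's a minimal sketch that lists the tracks inside a media file with ffprobe. It assumes ffprobe (shipped with ffmpeg) is on your PATH, and `input.mkv` is a placeholder filename. The point: multiple video tracks are a property of the container (MKV, MP4, ...), while HEVC/AVC describe the bitstream of each individual track.

```python
# Minimal sketch: list the streams inside a media container with ffprobe.
# Assumes ffprobe (part of ffmpeg) is on PATH; "input.mkv" is a placeholder.
import json
import subprocess

result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-show_streams",   # one entry per track in the container
        "-of", "json",     # machine-readable output
        "input.mkv",
    ],
    capture_output=True, text=True, check=True,
)

for stream in json.loads(result.stdout)["streams"]:
    # codec_name is per-track: a single MKV can hold an HEVC track next to
    # an AVC track; the codec itself says nothing about track count.
    print(stream["index"], stream["codec_type"], stream.get("codec_name"))
```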
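Regarding the attenuation point in #2, here's a rough loss-budget sketch showing why fiber either links or it doesn't: as long as the received power stays above the transceiver's RX sensitivity, the link comes up at full speed. All numbers below are illustrative placeholders, not from any particular datasheet.

```python
# Back-of-the-envelope optical loss budget. The link works as long as
# TX power minus total loss stays above the receiver's sensitivity.
# All values are illustrative placeholders; check your transceiver datasheet.
TX_POWER_DBM = -3.0           # transmit launch power
RX_SENSITIVITY_DBM = -14.0    # weakest signal the receiver can still decode

FIBER_LOSS_DB_PER_KM = 0.35   # typical-ish SMF attenuation
CONNECTOR_LOSS_DB = 0.5       # per mated connector pair
SPLICE_LOSS_DB = 0.1          # per fusion splice

def link_margin_db(length_km: float, connectors: int, splices: int,
                   bend_penalty_db: float = 0.0) -> float:
    """Remaining margin after all losses; negative means no link at all."""
    total_loss = (length_km * FIBER_LOSS_DB_PER_KM
                  + connectors * CONNECTOR_LOSS_DB
                  + splices * SPLICE_LOSS_DB
                  + bend_penalty_db)
    return (TX_POWER_DBM - total_loss) - RX_SENSITIVITY_DBM

# A 2 km run with 2 connectors survives even a nasty 3 dB bend penalty,
# leaving ~6.3 dB of margin -- the link is up and runs at full speed:
print(link_margin_db(2.0, connectors=2, splices=0, bend_penalty_db=3.0))
```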
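For the per-megabit rates in #4, this is a sketch of how 95th-percentile transit billing commonly works; the per-megabit rate, the commit, and the sample values are all placeholders for illustration, not quotes from any provider.

```python
# Sketch of per-megabit transit billing: the provider samples your port's
# throughput (commonly every 5 minutes), discards the top 5% of samples,
# and bills the highest remaining sample ("95th percentile") times a
# per-megabit rate. The rate below is a hypothetical placeholder.
RATE_PER_MBPS = 1.50  # $/Mbps/month, placeholder only

def monthly_bill(samples_mbps: list[float], commit_mbps: float = 0.0) -> float:
    """Bill from 5-minute throughput samples, with an optional commit floor."""
    ordered = sorted(samples_mbps)
    p95 = ordered[int(len(ordered) * 0.95) - 1]  # drop the top 5% of samples
    billable = max(p95, commit_mbps)             # never below committed rate
    return billable * RATE_PER_MBPS

# A month that mostly idles at 100 Mbps with occasional 900 Mbps bursts:
# the bursts fall inside the discarded top 5%, so only the commit is billed.
samples = [100.0] * 8000 + [900.0] * 100
print(monthly_bill(samples, commit_mbps=200.0))  # 200 Mbps * $1.50 = $300
```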
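To illustrate the layering point in #11, here's a minimal Python socket example: opening a TCP connection means picking an IP address family (network layer) and a stream type (transport layer) in the same call, because TCP runs on top of IP rather than competing with it.

```python
# TCP rides on top of IP; they are different layers, not alternatives.
# Creating a socket makes this explicit: you pick the network-layer family
# (AF_INET = IPv4) and the transport-layer type (SOCK_STREAM = TCP) together.
import socket

# SOCK_STREAM selects TCP; SOCK_DGRAM would select UDP. Either one is
# carried inside the IP packets chosen by AF_INET.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("example.com", 80))  # TCP handshake over IPv4
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(s.recv(200).decode(errors="replace"))  # first bytes of the reply
```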
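And for the static addressing in #12, a quick sketch using Python's standard ipaddress module to confirm that the two example addresses fall in the same /24 subnet:

```python
# Check that two statically assigned PCs land in the same subnet, using the
# addresses from the post and a typical /24 (255.255.255.0) mask.
import ipaddress

PC1 = "192.168.200.1"
PC2 = "192.168.200.2"
MASK = "255.255.255.0"  # /24: everything but the last octet is the network

net1 = ipaddress.ip_interface(f"{PC1}/{MASK}").network
net2 = ipaddress.ip_interface(f"{PC2}/{MASK}").network

# Both PCs must compute the same network for traffic to flow over the
# point-to-point cable without a router in between.
print(net1, net2, net1 == net2)  # 192.168.200.0/24 twice -> True
```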