
power666

Member
  • Posts

    56
  • Joined

  • Last visited



power666's Achievements

  1. Most of the solutions don't use Ethernet; rather they're point-to-point using CAT cabling. Plugging several of the solutions offered in the video into an Ethernet switch could actually damage it. The Icron 2304GE-LAN family is genuinely Ethernet but, oddly, it doesn't have an IP address. Rather, everything runs over Layer 2 and routing is done via a ZeroConf/Bonjour-style protocol using MAC addresses. And it is USB, so you can plug devices like webcams, conferencing gear and audio interfaces into it. It comes in handy when you only have one CAT drop at a location but you can fan it out via a local switch to perform several other functions.
  2. There are various AV over IP solutions that'll do transmission over a network but right now every big vendor is going a proprietary route. While some of them are arguably 'good' in terms of features and quality, I'd still wait it out for a true interoperable standard to emerge: any money spent now on a wide deployment will likely need to be replaced down the line when the real standard arrives. It is the future, just not today due to corporate shenanigans. I will say that UHD/4K@60 Hz resolutions over 1 Gbit links aren't the greatest as my eyes can tell that there is some compression going on. (1080p60 on 1 gig is fine though.) Newer solutions that are using 2.5 Gbit or 10 Gbit links look far superior at UHD/4K@60 Hz. If you want more than 60 Hz for gaming, you'll definitely have to look around. Similarly, while the network encoders are fast (and use tons of bandwidth), there is still a bit of latency added due to the encoding process and network transmission, making them less than ideal for hyper fast gaming. If you just need a point-to-point solution, HDbaseT is your best bet. It is a well-defined, well-supported standard. Some displays (though few consumer units) have a slot to add an HDbaseT input. There are several compromises here as there is only 18 Gbit of max bandwidth, though DSC is supported. Thus UHD/4K@120 Hz is possible but at the very edge of what HDbaseT 3.0 can do. On the flip side, since HDbaseT is more of a signal conversion than a full re-encode and the signal is point-to-point, there is no latency penalty here.
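To put numbers on why 4K60 over 1 GbE shows visible compression while 1080p60 doesn't, here's a rough sketch of the raw-bandwidth arithmetic. These are my own back-of-the-envelope figures, ignoring blanking intervals, chroma subsampling and packet overhead:

```python
# Back-of-the-envelope: uncompressed video bandwidth vs. a 1 Gbit/s link.

def raw_video_bandwidth_gbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bandwidth in Gbit/s (payload only)."""
    return width * height * fps * bits_per_pixel / 1e9

uhd60 = raw_video_bandwidth_gbps(3840, 2160, 60)   # ~11.9 Gbit/s
fhd60 = raw_video_bandwidth_gbps(1920, 1080, 60)   # ~3.0 Gbit/s

# Compression ratio needed to fit each into a 1 Gbit/s link:
print(f"4K60 needs roughly {uhd60:.0f}:1 compression on 1 GbE")
print(f"1080p60 needs roughly {fhd60:.0f}:1 compression on 1 GbE")
```

A ~12:1 squeeze at 4K60 is much harder to hide than ~3:1 at 1080p60, which lines up with what the eye picks up; a 10 Gbit link brings 4K60 back near the comfortable ratio.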
  3. Latency absolutely matters for USB over network transport, especially for USB 3.0 speeds. The USB protocols themselves dictate a maximum response time for a device to respond to a host request. Most of the time this doesn't matter as users typically have their USB devices attached locally, making the latency more dependent on the device being compliant than anything. Toss in a network switch (or several) and you can actually exceed the latency governed by the USB spec. Ditto if half of the USB extender is part of another product (see Crestron's NVX solution).
  4. I'm surprised that the awesomeness that is the Icron 2304GE-LAN was not brought up. This encapsulates USB data into Ethernet packets that can be transmitted over the network. The two gotchas are that speeds are only USB 2.0 and there is a discrete latency maximum for the delay between the two end points. This isn't purely limited by the cable run; you have to factor in any switch hops that are involved. Now the huge benefit of this is that you can do dynamic routing of devices on the network. The vanilla Icron 2304GE-LAN pairs a single host end point (USB-B) with a single device end point (USB-A). However, the 2304S model is a specialized host end point (USB-B) that can receive multiple network streams from different 2304GE-LAN device end points. This makes the network function as a multi-device hub. So what are the use cases? There is the obvious KVM over the network where you can switch between USB devices that way. Or the case in the review where you have a single system that you'd like to directly access from different locations. And yes, if you want to, with the 2304S model you can have multiple locations simultaneously active, much like plugging two keyboards/mice into a single system. A more unique use case for these systems was a hardware authentication key for some enterprise software that was running in a VM on a server. This permitted the VM to migrate across a set of physical hardware as long as the VM host also had an Icron unit attached. It should also be pointed out that Icron is the major OEM for USB extenders, with many other companies rebranding their products or using their IO board inside other products. If all you need is USB HID functionality (keyboards, mice etc.), the 3rd parties even have a software solution for a host computer that'll pair with the device end; it limits everything to a mundane 1.0 Mbit speed but is not as latency sensitive. Thus the only hardware you'd pay for would be the 2304GE-LAN device side.
Crestron's NUX solution is wholly rebranded Icron gear with a vendor flag set. The Icron protocols are also included on the USB ports of their NVX video over network extenders for a single box solution that'll put video, audio and USB over the network. I've gotten a set of the 2304GE-LAN models for about $360, though I haven't looked into the pricing of the newer 2304S or 2304PoE models.
  5. Each one of these amps counts as a network switch hop and thus adds latency with every hop. Small devices like these can have a small integrated three-port switch (the two external ports you see on the rear of the device and the third port existing as a link on the main board). A small integrated switch is generally good as it puts simple dedicated hardware on a simple problem, but it tends to cost more. Additional latency is still there, just not terrible. However, I've seen devices like this use two standard NIC interfaces bridged in software. Network traffic will still flow from one port to another but at relatively high latency vs. a switch chip. Any sort of synchronization clock has to be passed along instead of transmitted directly (think IEEE 1588 PTP). If you have an audio over IP topology like AVB, the audio stream itself has a time-to-live counter embedded into it that counts the number of switch hops: after a certain point it'll just stop working even if you are under the latency maximum.
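The additive-latency point can be sketched with a toy model. The per-hop figures below are my own illustrative assumptions (a dedicated store-and-forward switch chip adding a few microseconds, a software bridge over two NICs adding hundreds), not measurements of any particular product:

```python
# Toy model of additive per-hop latency in a daisy chain of devices,
# each acting as one switch hop.

def chain_latency_us(hops, per_hop_us):
    """Total one-way added latency in microseconds for a daisy chain."""
    return hops * per_hop_us

# Assumed figures: ~5 us per hop for a hardware switch chip,
# ~300 us per hop for a software bridge across two NICs.
hw_chain = chain_latency_us(hops=5, per_hop_us=5)     # 25 us total
sw_chain = chain_latency_us(hops=5, per_hop_us=300)   # 1500 us total
print(hw_chain, sw_chain)
```

The model is trivial on purpose: the point is that latency scales linearly with hop count, and the per-hop constant is wildly different between dedicated silicon and a software bridge.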
  6. I'll second the error between identifying the system memory and the VRAM. There is 4 MB of on-board memory and you can see it between the single 72 pin SIMM slot and the two VRAM slots. The practical limit is 68 MB of RAM as I haven't seen a single 128 MB 72 pin SIMM but have encountered a 64 MB unit. I have seen a slot adapter that would connect two 72 pin SIMMs into a single 72 pin slot. The gotcha here is that such an adapter wouldn't let you close the chassis due to this system's small size, limiting you to 68 MB in total. I also hope that someone there figured out that old school classic Mac OS didn't dynamically allocate memory for you. That extra 16 MB SIMM installed isn't going to be used unless you manually assign RAM to an application prior to launching it. (Select the application in the Finder and then Get Info under the Finder's File menu.) If you had the extra RAM, things generally were snappier even on this old hardware. Even to this day Apple ships about half the amount of RAM that they should on every baseline system. For those seeking ultimate performance who had the RAM to spare, these systems can boot off of a RAM disk. This would improve load times dramatically for games. 1:55 This machine could run Mac OS 8.1, however, which is a little bit more advanced than System 7.5.5, in particular the improved Finder. 9:00 Should've pulled up a game of Bolo if you wanted to know what retro Mac gaming was like in the early '90s. Then again, I did see Marathon, which would have been nice to see played since Bungie is dusting off that old series for a new installment soon.
  7. A PoE speaker setup using AVB/TSN or Dante would have vastly simplified this setup. This does limit speaker output power if there is a need for more than ~70W. There are also PoE amplifiers that let you use your own speakers and are plenum rated to be installed into ceiling spaces near the speakers themselves. This vastly simplifies wiring as only CAT cables are run to the rack regardless of using a PoE speaker or amp, plus a short speaker jumper if using the PoE amp solution. PoE speakers/amplifiers with built-in DSP processors to handle things like specific EQ for the speaker and its positioning in a room are also options. Depending on the model, you can also get a pass-through port on them to connect another device. Pass-through ports are nice for reusing a network run to a device that isn't latency sensitive, like a projector for control data. Newer generations of these devices are also integrating various sensors like temperature, humidity and ambient light, which are useful for tying into an automation system. Daisy chaining as seen in this video is not optimal for audio transmission due to the additive latency. There are at least 6 network hops involved for the last unit to get its audio data (to the wi-fi access point -> core switch -> switch dedicated to the amp cluster -> three more hops through other Sonos amps). For something like playing music in a room off of a phone, this isn't noticeable as there is no reference to determine the skew in time. However, with the audio source being say a movie or a console game, there is an end user expectation of when audio should be heard. This is similar in concept to how video scalers can introduce a few frames of latency in a video feed, which can become noticeable for gaming. If there were enough ports on the main switch, I'd have directly wired them all into that or provisioned a larger switch for just the amplifiers to cut down on the daisy chain hops.
While audio is latency sensitive, the bandwidth requirements are not that great.
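To show how modest the bandwidth side is, here's the payload arithmetic for uncompressed audio at parameters typical of Dante/AVB deployments (48 kHz, 24-bit; my assumed figures, ignoring packet overhead):

```python
# Raw audio payload bandwidth for an uncompressed multichannel stream.

def audio_stream_mbps(channels, sample_rate=48_000, bits=24):
    """Audio payload bandwidth in Mbit/s (no packet/framing overhead)."""
    return channels * sample_rate * bits / 1e6

print(audio_stream_mbps(2))    # stereo: ~2.3 Mbit/s
print(audio_stream_mbps(8))    # 7.1: ~9.2 Mbit/s
print(audio_stream_mbps(64))   # 64 channels: ~73.7 Mbit/s
```

Even 64 uncompressed channels sit well under a tenth of a gigabit link, which is why the hard problem in audio over IP is timing, not throughput.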
  8. Ugh, lots of nitpicks in this video. First off, none of the modern 4-way SLI setups from nVidia were mentioned. Granted, you can only get them from nVidia directly in their DGX workstations which use Quadro class graphics, but they are similar to that 8-way Voodoo card by incorporating a chip on the bridge connector. The nvLink chip on the bridge is what permits standard desktop Quadros, which are normally limited to 2-way SLI, to scale higher via this fanout switch chip. Someone who could get their hands on this DGX bridge board could build their own 4-way Quadro setup. Conceptually, 8-way SLI is still possible due to the number of nvLink buses supported on the nvLink chip; however, nVidia has kept 8 GPU and higher systems isolated to the data center and their mezzanine style carrier cards. Since nvLink is being leveraged, it also permits memory sharing and aggregation, which is another nitpick that I'll get to later on. The second nitpick is that the video doesn't dive too deep into the real challenge of leveraging multiple GPUs, which is simply load balancing. Splitting frames up evenly in terms of output pixel count doesn't inherently mean that the same amount of work needs to be performed in those regions. With an uneven load across the GPUs, performance is inherently limited to how long it takes the GPU with the most work to finish, a classic bottleneck scenario. Third nitpick is that SLI's interleaving nature is never actually displayed on screen. 3dfx figured out early on that having each GPU work on a different scan line is simple and relatively efficient for how 3D APIs worked back then. As more complex rendering techniques were developed, it was no longer possible to simply scale this technique: shaders would reference data from the pixels found on the previous scan line. Fourth nitpick is that split frame rendering was demonstrated as splitting the screen into quads, which isn't how it normally worked.
Rather, the splits would be in horizontal bands with the heights varying to load balance across the GPUs. The reason being is that each GPU would be responsible for a full display scan line. Splitting mid scan line was often glitchy from a visual perspective without having an additional aggregation buffer. The additional buffer was not optimal due to the small memory sizes of GPUs at the time and the lag it would introduce. Not impossible for a quad split to be used but it was not the norm. Fifth nitpick is that different multi-GPU techniques can be used in tandem when there are 4 or more GPUs in a system. AFR of SFR was used in some games. Not really a nitpick but more of a piece of trivia: AFR of SFR came to be as DirectX had a maximum of 8 frame buffers in flight at once. This figure is a bit deceptive as one buffer is being output to the screen while the next is being rendered, which is done on a per GPU basis with that API. This effectively limited AFR under DirectX to a 4-way maximum GPU setup. Hence why AFR of SFR was necessary to go beyond 4-way GPU setups or if triple buffering was being leveraged. DirectX 10 effectively dropped SFR support, which capped GPU support on the consumer side to 4-way. I haven't kept up, but Vulkan and DX12 should be able to bring these techniques back; support rests on the game/application developer though, not the hardware manufacturer/API developer. Checkerboard rendering is interesting in several aspects. First off, even single GPUs have leveraged this technique as a means to vary image quality in minor ways across a frame that makes it difficult to discern. Think of a frame split up into 64 x 64 tiles; for some complex tiles, to keep the frame rate up, the GPU will instead render a 16 x 16 or 8 x 8 pixel version of that tile and brute force scale it up to save on computational time. When multiple GPUs were involved, the tile size was important to evenly distribute the work.
There is a further technique to load balance by further breaking down larger tiles into even smaller ones and then distributing that workload across multiple GPUs. So instead of a complex 64 x 64 tile getting a 16 x 16 rendered tile that is scaled upward, in a multiple GPU scenario four 16 x 16 tiles are split across multiple GPUs to maintain speed and quality. Further subdividing tiles and assigning them to specific GPUs is indeed a very computationally complex task, but modern GPUs already have accelerators in place to tackle this workload. This is how various hardware video encoders function to produce high quality, compressed images by subdividing portions of the screen. While never explicitly said, I suspect that this is one of the reasons why modern GPUs have started to include multiple hardware video encoders. One technique not mentioned is when multiple monitors are used, as each display can be driven/rendered by a different GPU. While not load balanced, it is pretty straightforward to implement. Both AMD and nVidia provide additional hardware to synchronize refresh rates and output across multiple monitors as well as the GPUs. The original CrossFire with dongle was mostly AFR as there was a video output switch on the master card. The master card decided which GPU was sending output to the monitor. This was mostly done via detecting V-blank signals. The chip could conceptually switch mid-frame at the end of a scan line but ATI never got this to work reliably, so they focused on AFR. (Note: later implementations of CrossFire that used internal ribbon cables could switch on a per scan line basis, making this an issue only for ATI's early implementations.) In the early days of multiple GPUs, video memory was simply mirrored across GPUs. This wasn't emphasized in the video but was implied in the scenarios leveraging AFR due to each GPU doing the same work at different slices in time.
Modern GPUs that leverage nvLink from nVidia and Infinity Fabric links from AMD can actually aggregate memory spaces. They also permit dedicated regions for each GPU while having a portion mirrored across the GPUs to limit duplication. For example, two modern Quadros with 24 GB of memory on board could provide 24 GB (full mirroring), 36 GB (12 GB dedicated on each card with 12 GB mirrored), or 48 GB of memory (fully dedicated) to an application to use. That flexibility is great to have.
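The memory-pooling arithmetic above can be written down directly. A sketch of the accounting (my own framing: the mirrored region is stored on every GPU but only one copy counts toward what the application sees):

```python
# Application-visible memory for linked GPUs with a mirrored region.

def visible_memory_gb(per_gpu_gb, num_gpus, mirrored_gb):
    """One copy of the mirrored region plus the dedicated remainder
    of every GPU in the link group."""
    dedicated = per_gpu_gb - mirrored_gb
    return mirrored_gb + dedicated * num_gpus

# Two 24 GB Quadros, as in the example above:
print(visible_memory_gb(24, 2, mirrored_gb=24))  # 24 GB: full mirroring
print(visible_memory_gb(24, 2, mirrored_gb=12))  # 36 GB: half mirrored
print(visible_memory_gb(24, 2, mirrored_gb=0))   # 48 GB: fully dedicated
```

The same formula shows why the old mirror-everything approach scaled so poorly: with full mirroring, visible memory stays flat no matter how many GPUs you add.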
  9. Those PCIe switches are pricey and hence the high cost of the carrier card. One of the things on the spec sheet for the switch chip that isn't on the card is 100 Mbit Ethernet out-of-band management support. Mentioned in the video as a use case is leveraging these chips for composable infrastructure, but like all enterprise grade infrastructure you want management functionality to be separated from production data paths. So why would someone want out-of-band management on a particular card like this? First thing is encryption support, as it would provide a means to pass keys around that would not touch the host system or production pathways. While it would be painfully slow, it does provide an online means of extracting data in the event of a host system failure. Lastly, and seen in the video, out-of-band management can pass health information. For a carrier card crammed like this, being able to run an instance of monitoring software on the card itself to record drive temperatures for a monitoring platform would be great. One thing not fully explored here, but a power of the PCIe switch chip, is that while the link between the CPU and card is 16 lane PCIe 4.0, the drives could be simple 4 lane PCIe 3.0 units and performance would not be significantly impacted. Similar situation if a 100 lane PCIe 5.0 switch was used with PCIe 4.0 based NVMe drives. In fact, with a 100 lane PCIe 5.0 switch chip and 21 drives at PCIe 3.0 speeds, the link between the card and the host system will still be a bottleneck vs. the aggregate drive bandwidth.
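That last claim checks out with rough per-lane throughput figures (approximate usable GB/s per lane after encoding overhead; my own round numbers):

```python
# Rough PCIe throughput math: host uplink vs. aggregate drive bandwidth
# behind a switch chip. Per-lane GB/s figures are approximations.

GBPS_PER_LANE = {3.0: 0.985, 4.0: 1.969, 5.0: 3.938}  # approx. GB/s

def link_gbps(gen, lanes):
    """Approximate usable bandwidth of a PCIe link in GB/s."""
    return GBPS_PER_LANE[gen] * lanes

host_uplink = link_gbps(5.0, 16)        # ~63 GB/s to the CPU
drive_pool = 21 * link_gbps(3.0, 4)     # ~83 GB/s aggregate, 21 drives
print(f"host {host_uplink:.0f} GB/s vs drives {drive_pool:.0f} GB/s")
# Even with only Gen3 x4 drives, the x16 Gen5 uplink saturates first.
```

This is the whole value proposition of the switch chip: slower links on the downstream side cost little as long as their aggregate meets or exceeds the uplink.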
  10. For standard Ethernet, shielding isn't necessary. Shielding does matter for some other digital connections using CAT cables (HDbaseT for example). However, if you've already laid the STP CAT6, I'd stick with it. What might be a better option is selecting better ends, both female and male. While expensive, I like these female connectors as they're easy to crimp, and I use premade patch cables to jump between those and equipment. I use a patch panel like this to host the female keystone jacks. The nice thing is that since that panel is keystone based, the jacks don't all have to be RJ45 connectors. The obvious thing is that there needs to be a bit of slack in the cables by the patch panel to redo them.
  11. Why do you think moving from your motherboard's optical SPDIF to an external audio extractor off of HDMI, which also uses optical SPDIF to your speakers, would change anything? To answer the greater question of, 'Can I output audio out of the HDMI port on my graphics card?', the short answer is yes. I'm currently running a Radeon VII card (Vega) whose HDMI port is going to a video-over-IP box that acts as an audio extractor for my network based setup. It is set to output 7.1 audio in LPCM and I can monitor which of those 8 channels are active. I should also note here that the most you'll get out of optical is uncompressed stereo as there isn't enough bandwidth in the SPDIF connection for more. You can do additional channels like 5.1/7.1 but that requires a compressed/encoded signal.
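The S/PDIF limitation comes straight out of the payload arithmetic. Consumer S/PDIF carries two subframes per sample period, so the uncompressed payload is fixed at two channels; here's a sketch at assumed 48 kHz / 24-bit parameters:

```python
# LPCM payload bandwidth vs. what a two-channel S/PDIF frame carries.

def lpcm_mbps(channels, sample_rate=48_000, bits=24):
    """Uncompressed LPCM payload in Mbit/s."""
    return channels * sample_rate * bits / 1e6

spdif_payload = lpcm_mbps(2)   # ~2.3 Mbit/s: the two-channel frame
surround_51 = lpcm_mbps(6)     # ~6.9 Mbit/s: uncompressed 5.1
print(spdif_payload, surround_51)
# Uncompressed 5.1 needs ~3x the payload of the two-channel frame, so
# multichannel over optical has to ride inside a compressed bitstream
# (Dolby Digital, DTS) packed into those two channels.
```

HDMI sidesteps this entirely, which is why extracting 7.1 LPCM from an HDMI port works while optical tops out at stereo for uncompressed audio.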
  12. I'm perplexed why an audiophile portable rack and some 15-18" depth rackmount cases to fit everything wasn't considered. This would also permit inclusion of a good rackmount switch and leave room for a UPS (though shipping the UPS across borders might be problematic, you'd have the room*). The downside is that you'd have to ship it via freight but those cases are designed to be abused in transit. Hotels generally can hold some freight and it is always courteous to work with them ahead of time so they're prepared to receive the gear. Doubly so when going across borders to account for customs times. An alternative to freight shipping and dealing with customs would be to have someone drive it down since it is 'only' Canada to the US. Customs checks are relatively painless, you can move more and you have a built-in chain of custody presuming it is a member of the team driving it. Less lead time is necessary vs. freight but I'd still plan for a buffer in case anything goes wrong. The unused buffer time can be used for prep at the hotel prior to everyone else's arrival: you'd have the network built up and tested as people are arriving at the hotel. If you reserve neighboring rooms, often there are side doors connecting them all which you can run cables underneath. Literally get everyone onto a high speed hardwired network. The downside is that this ties up members of the staff with rather mundane work when they'd otherwise be doing something else. Gotta weigh the cost-benefits here as manpower and insurance are factors. There is yet another solution given this is Las Vegas: rental gear from various AV production companies. Most have huge warehouses in Las Vegas due to the numerous shows taking place there. Various Dell/HP/Lenovo workstations can be rented ahead of time and delivered to the hotel without worry from customs. Same day replacement options are possible (still buffer in some setup time prior to production). Items like storage would have to be flown in, but that's straightforward.
This setup can be tested ahead of time by having a unit shipped to the offices and mocked up there. It might even be possible to arrange for the same unit used in the mock-up to be the one used in Las Vegas. This eliminates much of the freight shipping but the upfront costs are not cheap. This can still provide savings if you have good tracking on how much your logistics cost. One downside for CES in particular is that you have to get your rental requests in early. CES is one of the few shows that can drain inventory. If carry-on was a requirement due to the costs, have you considered breaking the system up across multiple chassis and multiple carry-ons? Just moving storage externally frees up so much volume in this design. Or why not do things the LTT way and build a custom case that'd be purpose built to fit inside the Pelican box? You'd be able to put in a larger motherboard, have hot swap drive bays and utilize far more powerful hardware even after losing some volume for the necessary vibration mounting/foam. To actually run the system, the Pelican case would have to be open like it was sitting on the table in the video, but inside a hotel room this is not a big deal. Obviously you can't put ventilation holes into the external shell. There is a time-cost-benefit analysis that'd need to be done for this. For a media company like this though, the ROI could be spread out across multiple trips, especially if the internal rig adheres to motherboard/component standards for upgrades later. Also good to see some testing done prior to shipping. This will need to be repeated upon arrival before production work starts on it. *One of the things I've learned from trade shows is that if something is deemed mission critical but complicates shipping, consider purchasing it locally and having it shipped to your hotel. If you're already shipping freight there, the hotel will also accept a package.
It is possible to flip such things quickly on CL or eBay to recover some of the cost after the event is over. Vibration rigging is there so that damage done to the case doesn't get transferred to the components on the inside. High end travel cases suspend the contents so that the exterior can be dented or even ruptured without damaging stuff on the inside. It is possible but generally custom built, which means not cheap. It is worth pointing out that this video's solution negates any sort of protection for the items inside due to the tight fit. There needs to be some interior padding, especially for that fragile InWin case. That looks like I could warp it by just staring menacingly at it.
  13. Brocade is sort of on that list: Extreme Networks acquired them. You do need to run in EXOS mode currently as SLX and the Fabric Engine do not support AVB/TSN features (yet).
  14. I will agree that Dante based speakers do save money in terms of labor and deployment due to the reliance on CAT cabling. That is also why I would argue that they are a good fit for home usage: one cable type to rule them all. The next logical move is to go wireless, and WiFi 7 will bring AVB/TSN features. You'd just need to power the speakers, which brings up the classic passive/active speaker debate.
  15. The difference between Dante and AVB is only a few years. The real difference is that Dante was designed to work over existing switches at the time instead of needing new network switch chips spun to support the queuing enhancements. Audinate also got the protocol discovery right and all of their products early on needed to be certified for interoperability. That is why it took off: they got the first mover advantage. While the Harman group does support AVB in a few select products, they're mostly a Dante outfit at the professional level. Both AVB and Dante support more than 512 channels over a network. How much you can move over a single link is bandwidth dependent and tied to how the audio channels are configured (i.e. how much bandwidth an individual audio stream takes). There is a slight advantage to AVB due to how it can bundle channels together to reduce some of the overhead. AVB does have a unique limitation in the number of queues a switch can support simultaneously, with the enterprise stuff typically having 1024 queues and consumer gear 128/256 queues. That's for how many streams can be moved, and with bundling the raw number of audio channels can be higher. For both AVB and Dante, failsafe connections essentially require doubling the hardware to get it right: the goal is to remove a single point of failure. Not only does AVB support redundant connections, it can also support fabrics between switches. Redundant streams are tagged as such and the switch network actually attempts to have them take a different path than the primary stream. Interestingly enough, the primary and secondary streams can coexist on the same logical network, something Dante cannot do (the primary and secondary require separate subnets).
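The queue-vs-channel distinction is easy to miss, so here's the arithmetic spelled out. The bundling factor is an assumed illustrative value, not a figure from any spec:

```python
# Streams (limited by switch queues) vs. raw audio channels (streams
# times the channels bundled per stream).

def max_channels(stream_queues, channels_per_stream):
    """Upper bound on audio channels given a switch's stream queue
    count and a per-stream bundling factor."""
    return stream_queues * channels_per_stream

# Assumed bundling of 8 channels per stream:
print(max_channels(1024, 8))   # enterprise-class: 8192 channels
print(max_channels(128, 8))    # consumer-class: 1024 channels
```

So the queue limit caps streams, not channels; bundling is how AVB stretches a fixed queue budget well past the raw queue count.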