Everything posted by power666

  1. Most of the solutions don't use Ethernet; rather, they're point-to-point over CAT cabling. Plugging several of the solutions offered in the video into an Ethernet switch could actually damage it. The Icron 2304GE-LAN family is genuinely Ethernet, but oddly the units don't have an IP address. Rather, everything runs over Layer 2 and routing is done via a ZeroConf/Bonjour protocol using MAC addresses. And it is USB, so you can plug devices like webcams, conferencing gear and audio interfaces into it. It comes in handy when you only have one CAT drop at a location, since you can fan it out via a local switch to perform several other functions.
  2. There are various AV-over-IP solutions that'll do transmission over a network, but right now every big vendor is going a proprietary route. While some of them are arguably 'good' in terms of features and quality, I'd still wait it out for a true interoperable standard to emerge: any money spent now on a wide deployment will likely need to be replaced down the line when the real standard arrives. It is the future, just not today due to corporate shenanigans. I will say that UHD/4K@60 Hz resolutions over 1 Gbit links aren't the greatest, as my eyes can tell that there is some compression going on. (1080p60 on 1 gig is fine though.) Newer solutions that use 2.5 Gbit or 10 Gbit links look far superior at UHD/4K@60 Hz. If you want more than 60 Hz for gaming, you'll definitely have to look around. Similarly, while the network encoders are fast (and use tons of bandwidth), there is still a bit of latency added due to the encoding process and network transmission, making them less than ideal for hyper-fast gaming. (Some rough bandwidth numbers are sketched below.) If you just need a point-to-point solution, HDBaseT is your best option. It is a defined, well-supported standard. Some displays (though few consumer units) have a slot to add an HDBaseT input. There are several compromises here as there is only 18 Gbit max bandwidth, though DSC is supported. Thus UHD/4K@120 Hz is possible but at the very edge of what HDBaseT 3.0 can do. On the flip side, since HDBaseT is more of a signal conversion than a full re-encode and the signal is point-to-point, there is no latency penalty here.
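To put some rough numbers behind the compression point above, here is a back-of-the-envelope sketch (assuming 8-bit RGB pixels and ignoring blanking and protocol overhead, so the figures are illustrative only):

```python
# Raw (uncompressed) video bandwidth vs. a 1 Gbit link.
# Assumes 8 bits per channel RGB and ignores blanking intervals and
# network protocol overhead -- rough illustrative numbers only.

def raw_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

link_gbps = 1.0
for name, (w, h, fps) in {"1080p60": (1920, 1080, 60),
                          "UHD/4K@60": (3840, 2160, 60)}.items():
    raw = raw_gbps(w, h, fps)
    print(f"{name}: ~{raw:.1f} Gbit/s raw -> needs ~{raw / link_gbps:.0f}:1 "
          f"compression for a {link_gbps:g} Gbit link")
```

Roughly 3:1 for 1080p60 versus roughly 12:1 for UHD/4K@60, which lines up with light compression being hard to spot at 1080p and easy to spot at 4K.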
  3. Latency absolutely matters for USB-over-network transport, especially at USB 3.0 speeds. The USB protocols themselves dictate a maximum response time for a device to respond to a host request. Most of the time this doesn't matter, as users typically have their USB devices attached locally, making the latency more dependent on the device being compliant than anything else. Toss in a network switch (or several) and you can actually exceed the latency allowed by the USB spec. Ditto if half of the USB extender is part of another product (see Crestron's NVX solution).
  4. I'm surprised that the awesomeness that is the Icron 2304GE-LAN was not brought up. This encapsulates USB data into Ethernet packets that can be transmitted over the network. The two gotchas are that speeds are only USB 2.0 and there is a discrete maximum for the latency between the two end points. This isn't purely limited by the cable run; you have to factor in any switch hops that are involved. Now the huge benefit of this is that you can do dynamic routing of devices on the network. The vanilla Icron 2304GE-LAN pairs a single host end point (USB-B) with a single device end point (USB-A). However, the 2304S model is a specialized host end point (USB-B) that can receive multiple network streams from different 2304GE-LAN device end points. This makes the network function as a multi-device hub. So what are the use cases? There is the obvious KVM over the network, where you can switch between USB devices that way. Or the scenario in the review where you have a single system that you'd like to directly access from different locations. And yes, if you want, with the 2304S model you can have multiple locations simultaneously active, much like plugging two keyboards/mice into a single system. A more unique use case for these systems was a hardware authentication key for some enterprise software that was running in a VM on a server. This permitted the VM to migrate to a different physical host as long as that VM host also had an Icron unit attached. It should also be pointed out that Icron is the major OEM for USB extenders, with many other companies rebranding their products or using their IO boards inside other products. If all you need is USB HID functionality (keyboards, mice etc.), the third parties even have a software solution for the host computer that'll pair with the device end; it limits everything to a mundane 1.0 Mbit speed but is not as latency sensitive. Thus the only hardware you'd pay for would be the 2304GE-LAN device side. Crestron's USB extender solution is wholly rebranded Icron gear with a vendor flag set. The Icron protocols are also included on the USB ports of their NVX video-over-network extenders for a single-box solution that'll put video, audio and USB over the network. I've gotten a set of the 2304GE-LAN models for about $360, though I haven't looked into the pricing of the newer 2304S or 2304PoE models.
  5. Each one of these amps counts as a network switch hop and thus adds latency at every hop. Small devices like these can have a small integrated three-port switch (the two external ports you see on the rear of the device and a third port existing as a link on the main board). A small integrated switch is generally good, as it uses simple dedicated hardware to solve a simple problem, but it tends to cost more. Additional latency is still there, just not terrible. However, I've seen devices like this use two standard NIC interfaces bridged in software. Network traffic will still flow from one port to another, but at relatively high latency vs. a switch chip. Any sort of synchronization clock has to be passed along instead of transmitted directly (think IEEE 1588 PTP). If you have an audio-over-IP topology like AVB, the audio stream itself has a time-to-live counter embedded in it that counts the number of switch hops: after a certain point it'll just stop working even if you are under the latency maximum. (A quick sketch of how the per-hop math stacks up is below.)
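Here is a minimal sketch of that per-hop accounting. The per-hop latency figures and the 2 ms / 7-hop budget (roughly in line with AVB SR Class A) are assumptions for illustration, not measured values:

```python
# Sum per-hop latency along a chain of devices and check it against a stream
# budget. Per-hop costs and the budget below are illustrative assumptions.

HOP_LATENCY_US = {"switch_chip": 50, "software_bridge": 500}  # assumed per-hop cost

def check_path(hops, budget_us=2000, max_hops=7):
    total = sum(HOP_LATENCY_US[h] for h in hops)
    return total, (total <= budget_us and len(hops) <= max_hops)

# Hypothetical daisy chain: three real switch chips plus two software bridges.
path = ["switch_chip"] * 3 + ["software_bridge"] * 2
total, ok = check_path(path)
print(f"{len(path)} hops, {total} us accumulated, within budget: {ok}")
```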
  6. I'll second the correction about mixing up the system memory and the VRAM. There is 4 MB of on-board memory and you can see it between the single 72-pin SIMM slot and the two VRAM slots. The practical limit is 68 MB of RAM, as I haven't seen a single 128 MB 72-pin SIMM but have encountered a 64 MB unit. I have seen a slot adapter that would connect two 72-pin SIMMs into a single 72-pin slot. The gotcha is that such an adapter wouldn't let you close the chassis due to this system's small size, limiting you to 68 MB in total. I also hope that someone there figured out that old-school classic Mac OS didn't dynamically allocate memory for you. That extra 16 MB SIMM installed isn't going to be used unless you manually assign RAM to an application prior to launching it. (Select the application in the Finder and then Get Info under the Finder's File menu.) If you had the extra RAM, things generally were snappier even on this old hardware. Even to this day Apple ships about half the amount of RAM that they should on every baseline system. For those seeking ultimate performance who had the RAM to spare, these systems can boot off of a RAM disk. This would improve load times dramatically for games. 1:55 This machine could, however, run System 8.1, a little bit more advanced than System 7.5.5, in particular the improved Finder. 9:00 Should've pulled up a game of Bolo if you wanted to know what retro Mac gaming was like in the early '90s. Then again, I did see Marathon, which would have been nice to see played since Bungie is dusting off that old series for a new installment soon.
  7. A PoE speaker setup using AVB/TSN or Dante would have vastly simplified this setup. It does limit speaker output power if there is a need for more than ~70 W. There are also PoE amplifiers that let you use your own speakers and are plenum rated to be installed in ceiling spaces near the speakers themselves. This vastly simplifies wiring, as only CAT cables are run to the rack regardless of whether you use a PoE speaker or amp, plus a short speaker jumper if using the PoE amp solution. PoE speakers/amplifiers with built-in DSP processors to handle things like speaker-specific EQ and room positioning are also options. Depending on the model, you can also get a pass-through port on them to connect another device. Pass-through ports are nice for reusing a network run to a device that isn't latency sensitive, like a projector for control data. Newer generations of these devices are also integrating various sensors like temperature, humidity and ambient light, which are useful for tying into an automation system. Daisy chaining as seen in this video is not optimal for audio transmission due to the additive latency. There are at least six network hops involved for the last unit to get its audio data (to the Wi-Fi access point -> core switch -> switch dedicated to the amp cluster -> three more hops through other Sonos amps). For something like playing music in a room off of a phone, this isn't noticeable, as there is no reference to determine the skew in time. However, with the audio source being, say, a movie or a console game, there is an end-user expectation of when audio should be heard. This is similar in concept to how video scalers can introduce a few frames of latency in a video feed, which can become noticeable for gaming. If there were enough ports on the main switch, I'd have wired them all directly into it or provisioned a larger switch just for the amplifiers to cut down on the daisy-chain hops. While audio is latency sensitive, the bandwidth requirements are not that great.
  8. Ugh, lots of nitpicks in this video. First off, none of the modern 4-way SLI setups from nVidia were mentioned. Granted, you can only get them from nVidia directly in their DGX workstations, which use Quadro-class graphics, but they are similar to that 8-way Voodoo card in that they incorporate a chip on the bridge connector. The nvLink chip on the bridge is what permits standard desktop Quadros, which are normally limited to 2-way SLI, to scale higher via this fan-out switch chip. Someone who could get their hands on this DGX bridge board could build their own 4-way Quadro setup. Conceptually 8-way SLI is still possible due to the number of nvLink buses supported on the nvLink chip; however, nVidia has kept 8-GPU and higher systems isolated to the data center and their mezzanine-style carrier cards. Since nvLink is being leveraged, it also permits memory sharing and aggregation, which is another nitpick that I'll get to later on. The second nitpick is that the video doesn't dive too deep into the main challenge of leveraging multiple GPUs, which is simply load balancing. Splitting frames up evenly in terms of output pixel count doesn't inherently mean that each GPU has the same amount of work to perform in those regions. With an uneven load across the GPUs, performance is inherently limited by how long it takes the GPU with the most work to finish, a classic bottleneck scenario. Third nitpick is that SLI's interleaving nature is never actually shown on screen. 3dfx figured out early on that having each GPU work on a different scan line was a simple and relatively efficient fit for how 3D APIs worked back then. As more complex rendering techniques were developed, it was no longer simple to scale this technique: shaders would reference data from the pixels found on the previous scan line. Fourth nitpick is that split-frame rendering was demonstrated as splitting the screen into quads, which isn't how it normally worked. Rather, the splits would be horizontal bands with the heights varying to load balance across the GPUs (see the sketch after this post). The reason is that each GPU would be responsible for a full display scan line. Splitting mid scan line was often glitchy from a visual perspective without an additional aggregation buffer, and that buffer was not optimal due to the small memory sizes of GPUs at the time and the lag it would introduce. Not impossible for a quad split to be used, but it was not the norm. Fifth nitpick is that different multi-GPU techniques can be used in tandem when there are 4 or more GPUs in a system. AFR of SFR was used in some games. Not really a nitpick but more a piece of trivia: AFR of SFR came to be because DirectX had a maximum of 8 frame buffers in flight. This figure is a bit deceptive, as one buffer is being output to the screen while the next is being rendered, which is done on a per-GPU basis with that API. This effectively limited AFR under DirectX to a 4-way maximum GPU setup. Hence why AFR of SFR was necessary to go beyond 4-way GPU setups or if triple buffering was being leveraged. DirectX 10 effectively killed off SFR support, which capped GPU support on the consumer side to 4-way. I haven't kept up, but Vulkan and DX12 should be able to bring these techniques back; support rests on the game/application developer though, not the hardware manufacturer/API developer. Checkerboard rendering is interesting in several respects.
First off, even single GPUs have leveraged this technique as a means to vary image quality in minor ways across a frame that are difficult to discern. Think of a frame split up into 64 x 64 tiles: for some complex tiles, to keep the frame rate up, the GPU will instead render a 16 x 16 or 8 x 8 pixel version of that tile and brute-force scale it up to save on computational time. When multiple GPUs were involved, the tile size was important to evenly distribute the work. There is a further technique to load balance by breaking down larger tiles into even smaller ones and then distributing that workload across multiple GPUs. So instead of a complex 64 x 64 tile getting a 16 x 16 rendered tile that is scaled upward, in a multi-GPU scenario four 16 x 16 tiles are split across multiple GPUs to maintain speed and quality. Further subdividing tiles and assigning them to specific GPUs is indeed a very computationally complex task, but modern GPUs already have accelerators in place to tackle this workload. This is how various hardware video encoders function to produce high-quality, compressed images by subdividing portions of the screen. While never explicitly said, I suspect that this is one of the reasons why modern GPUs have started to include multiple hardware video encoders. One technique not mentioned is when multiple monitors are used, as each display can be driven/rendered by a different GPU. While not load balanced, it is pretty straightforward to implement. Both AMD and nVidia provide additional hardware to synchronize refresh rates and output across multiple monitors as well as the GPUs. The original CrossFire with the dongle was mostly AFR, as there was a video output switch on the master card. The master card decided which GPU was sending output to the monitor. This was mostly done via detecting V-blank signals. The chip could conceptually switch mid-frame at the end of a scan line, but ATI never got this to work reliably so they focused on AFR. (Note: later implementations of CrossFire that used internal ribbon cables could switch on a per-scanline basis, making this an issue only for ATI's early implementations.) In the early days of multiple GPUs, video memory was simply mirrored across GPUs. This wasn't emphasized in the video but was implied in the scenarios leveraging AFR due to each GPU doing the same work at different slices in time. Modern GPUs that leverage nvLink from nVidia and Infinity Fabric links from AMD can actually aggregate memory spaces. They also permit regions dedicated to each GPU while having a portion mirrored across the GPUs to limit traffic over the link. For example, two modern Quadros with 24 GB of memory on board could provide 24 GB (full mirroring), 36 GB (12 GB dedicated on each card with 12 GB mirrored), or 48 GB of memory (fully dedicated) to an application. That flexibility is great to have.
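A minimal sketch of the horizontal-band splitting described in the post above: given some per-scanline cost estimate, the frame is carved into contiguous bands so each GPU gets roughly the same amount of work. The cost model and the greedy split are purely illustrative, not how any particular driver does it.

```python
# Split a frame into horizontal bands of varying height so each GPU gets
# roughly equal estimated work. Greedy prefix partition for illustration.

def split_scanlines(line_costs, num_gpus):
    target = sum(line_costs) / num_gpus
    bands, start, acc = [], 0, 0.0
    for y, cost in enumerate(line_costs):
        acc += cost
        # close the current band once one GPU's share of work is accumulated
        if acc >= target and len(bands) < num_gpus - 1:
            bands.append((start, y))
            start, acc = y + 1, 0.0
    bands.append((start, len(line_costs) - 1))
    return bands  # one (first_line, last_line) band per GPU

# Hypothetical 1080-line frame where the bottom half costs twice as much to shade.
costs = [1.0] * 540 + [2.0] * 540
print(split_scanlines(costs, num_gpus=2))   # first band extends past mid-frame
```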
  9. Those PCIe switches are pricey, hence the high cost of the carrier card. One of the things on the spec sheet for the switch chip that isn't exposed on the card is 100 Mbit Ethernet out-of-band management support. Mentioned in the video as a use case is leveraging these chips for composable infrastructure, but like all enterprise-grade infrastructure, you want management functionality separated from the production data paths. So why would someone want out-of-band management on a particular card like this? The first thing is encryption support, as it would provide a means to pass keys around that would not touch the host system or the production pathways. While it would be painfully slow, it also provides an online means of extracting data in the event of a host system failure. Lastly, and seen in the video, out-of-band management can pass health information. For a carrier card crammed like this, being able to run an instance of monitoring software on the card itself to record drive temperatures for a monitoring platform would be great. One thing not fully explored here, but which is a strength of the PCIe switch chip, is that while the link between the CPU and the card is 16-lane PCIe 4.0, the drives could be simple 4-lane PCIe 3.0 units and performance would not be significantly impacted. It is a similar situation if a 100-lane PCIe 5.0 switch were used with PCIe 4.0 based NVMe drives. In fact, with a 100-lane PCIe 5.0 switch chip and 21 drives at PCIe 3.0 speeds, the link between the card and the host system will still be the bottleneck vs. the aggregate drive bandwidth. (The arithmetic is sketched below.)
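Rough arithmetic behind that last point, using approximate usable per-lane throughput figures (about 1, 2 and 4 GB/s per lane for PCIe 3.0/4.0/5.0) and assuming an x16 uplink:

```python
# Compare the host uplink against the aggregate drive bandwidth behind the switch.
# Per-lane figures are approximate usable rates in GB/s, for illustration.

PER_LANE_GBS = {"3.0": 1.0, "4.0": 2.0, "5.0": 4.0}

def link_gbs(gen, lanes):
    return PER_LANE_GBS[gen] * lanes

host_uplink = link_gbs("5.0", 16)      # x16 PCIe 5.0 uplink to the CPU
drive_total = 21 * link_gbs("3.0", 4)  # 21 drives, each x4 PCIe 3.0
print(f"uplink ~{host_uplink:.0f} GB/s vs drives ~{drive_total:.0f} GB/s")
# ~64 GB/s uplink vs ~84 GB/s aggregate -> the uplink is the bottleneck
```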
  10. For standard Ethernet, shielding isn't necessary. Shielding does matter for some other digital connections using CAT cables (HDBaseT, for example). However, if you've already laid the STP CAT6, I'd stick with it. What might be a better option is selecting better ends, both female and male. While expensive, I like these female connectors as they're easy to crimp, and I use premade patch cables to jump between those and equipment. I use a patch panel like this to host the female keystone jacks. The nice thing is that since that panel is keystone based, it doesn't have to host all RJ45 connectors. The obvious requirement is that there needs to be a bit of slack in the cables by the patch panel to redo the ends.
  11. Why do you think moving from your motherboard's optical SPDIF to an external audio extractor off of HDMI that also uses optical SPDIF to your speakers would change anything? To answer the greater question of 'Can I output audio out of the HDMI port on my graphics card?', the short answer is yes. I'm currently running a Radeon VII card (Vega) whose HDMI port is going to a video-over-IP box that acts as an audio extractor for my network-based setup. It is set to output 7.1 audio in LPCM, and I can monitor which of those 8 channels are active. I should also note here that the most you'll get out of optical is uncompressed stereo, as there isn't enough bandwidth in the SPDIF connection for more. You can do additional channels (5.1 and up), but that requires a compressed/encoded signal. (A quick bandwidth check follows below.)
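A quick back-of-the-envelope check on that bandwidth limit, assuming 24-bit/48 kHz LPCM (S/PDIF carries two 32-bit subframes per sample period, with 24 bits of audio payload each):

```python
# Why multichannel LPCM doesn't fit over S/PDIF: compare payload capacity
# (two 24-bit channels at 48 kHz) against the LPCM bandwidth each layout needs.

def lpcm_mbps(channels, sample_rate=48_000, bits=24):
    return channels * sample_rate * bits / 1e6

spdif_payload = lpcm_mbps(2)           # ~2.3 Mbit/s of audio payload capacity
print(f"S/PDIF payload ~{spdif_payload:.1f} Mbit/s")
for ch in (2, 6, 8):                   # stereo, 5.1, 7.1
    need = lpcm_mbps(ch)
    print(f"{ch}ch LPCM needs ~{need:.1f} Mbit/s -> fits: {need <= spdif_payload}")
```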
  12. I'm perplexed why a portable audio-style rack and some 15-18" deep rackmount cases to fit everything weren't considered. This would also permit inclusion of a good rackmount switch and leave room for a UPS (though shipping the UPS across borders might be problematic, you'd have the room*). The downside is that you'd have to ship it via freight, but those cases are designed to be abused in transit. Hotels generally can hold some freight, and it is always courteous to work with them ahead of time so they are prepared to receive the gear. Doubly so when going across borders, to account for customs delays. An alternative to freight shipping and dealing with customs would be to have someone drive it down, since it is 'only' Canada to the US. Customs checks are relatively painless, you can move more, and you have a built-in chain of custody presuming it is a member of the team driving it. Less lead time is necessary vs. freight, but I'd still plan for a buffer in case anything goes wrong. The unused buffer time can be used for prep at the hotel prior to everyone else's arrival: you'd have the network built up and tested as people are arriving at the hotel. If you reserve neighboring rooms, often there are side doors connecting them all which you can run cables underneath. Literally get everyone onto a high-speed hardwired network. The downside is that this ties up staff members with rather mundane work when they'd otherwise be doing something else. Gotta weigh the cost-benefits here, as manpower and insurance are factors. There is yet another solution, this being Las Vegas: rental gear from various AV production companies. Most have huge warehouses in Las Vegas due to the numerous shows taking place there. Various Dell/HP/Lenovo workstations can be rented ahead of time and delivered to the hotel without worry about customs. Same-day replacement options are possible (still buffer in some setup time prior to production). Items like storage would have to be flown in, but that is straightforward. This setup can be tested ahead of time by having a unit shipped to the offices and mocked up there. It might even be possible to arrange for the same unit used in the mock-up to be the one used in Las Vegas. This eliminates much of the freight shipping, but the upfront costs are not cheap. It can still provide savings if you have good tracking of how much your logistics cost. One downside for CES in particular is that you have to get your rental requests in early; CES is one of the few shows that can drain inventory. If carry-on was a requirement due to the costs, have you considered breaking the system up across multiple chassis and multiple carry-ons? Just moving storage externally frees up so much volume in this design. Or why not do things the LTT way and build a custom case purpose-built to fit inside the Pelican box? You'd be able to put in a larger motherboard, have hot-swap drive bays and utilize far more powerful hardware even after losing some volume to the necessary vibration mounting/foam. To actually run the system, the Pelican case would have to be open like it was sitting on the table in the video, but inside a hotel room this is not a big deal. Obviously you can't put ventilation holes into the external shell. There is a time-cost-benefit analysis that'd need to be done for this. For a media company like this though, the ROI could be spread out across multiple trips, especially if the internal rig adheres to motherboard/component standards for upgrades later.
Also good to see some testing done prior to shipping. This will need to be repeated upon arrival before production work starts on it. *One of the things I've learned for trade shows is that if something is deemed mission critical but complicates shipping, consider purchasing it locally and having it shipped to your hotel. If you're already shipping freight there, the hotel will also accept a package. It is possible to flip such things quickly on CL or eBay to recover some of the cost after the event is over. Vibration rigging exists so that you can ensure that damage done to the case doesn't get transferred to the components on the inside. High-end travel cases suspend the contents so that the exterior can be dented or even ruptured without damaging stuff on the inside. It is possible, but generally custom built, which means not cheap. It is worth pointing out that this video's solution negates any sort of protection for the items inside due to the tight fit. There needs to be some interior padding, especially for that fragile InWin case. That looks like I could warp it by just staring menacingly at it.
  13. Brocade is sort of on that list: Extreme Networks acquired them. You do need to run in EXOS mode currently as SLX and the Fabric Engine do not support AVB/TSN features (yet).
  14. I will agree that Dante-based speakers do save money in terms of labor and deployment due to the reliance on CAT cabling. That is also why I would argue that they are a good fit for home usage: one cable type to rule them all. The next logical move is to go wireless, and Wi-Fi 7 will bring AVB/TSN features. You'd just need to power the speakers, which brings things back to the classic passive/active speaker debate.
  15. The difference in age between Dante and AVB is only a few years. The real difference is that Dante was designed to work over existing switches at the time, instead of needing new network switch chips spun to support the queuing enhancements. Audinate also got the protocol discovery right, and all of their products early on needed to be certified for interoperability. That is why it took off: they got the first-mover advantage. While the Harman group does support AVB in a few select products, they're mostly a Dante outfit at the professional level. Both AVB and Dante support more than 512 channels over a network. How much you can move over a single link depends on bandwidth and on how the audio channels are configured (i.e. how much bandwidth an individual audio stream takes). There is a slight advantage to AVB due to how it can bundle channels together to reduce some of the overhead. AVB does have a unique limitation in the number of queues a switch can support simultaneously, with the enterprise stuff typically having 1024 queues and consumer gear 128/256. That limit is on how many streams can be moved, and with bundling the raw number of audio channels can be higher. For both AVB and Dante, fail-safe connections essentially require doubling the hardware to get it right: the goal is to remove any single point of failure. Not only does AVB support redundant connections, it can also support fabrics between switches. Redundant streams are tagged as such, and the switch network actually attempts to have them take a different path than the primary stream. Interestingly enough, the primary and secondary streams can coexist on the same logical network, something Dante cannot do (the primary and secondary require separate subnets). (Some per-channel bandwidth math is sketched below.)
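For a feel of the per-link math, here is a rough sketch assuming 48 kHz / 24-bit PCM, a flat 25% allowance for packet and stream overhead, and only 75% of the link reserved for media (figures chosen for illustration; real numbers depend on how channels are bundled into streams):

```python
# How many uncompressed audio channels roughly fit on a single link.
# Overhead and reservation percentages are illustrative assumptions.

def channel_mbps(sample_rate=48_000, bits=24, overhead=1.25):
    return sample_rate * bits * overhead / 1e6

per_channel = channel_mbps()                  # ~1.44 Mbit/s per channel
reservable = 1000 * 0.75                      # reserve at most ~75% of a 1 Gbit link
print(f"~{per_channel:.2f} Mbit/s per channel -> "
      f"~{int(reservable // per_channel)} channels on a 1 Gbit link")
```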
  16. You do know that that is a link to an audio manufacturer's page that isn't comprehensive, right? For example, that page doesn't list any Arista gear, which does support AVB. Ditto for some of the Mellanox/nVidia switches. The key thing is that AVB/TSN is being added to modern switch ASICs. They're pretty much going to have to, as Wi-Fi 7 is set to introduce TSN to the masses. It is still up to the switch manufacturer to expose that capability.
  17. AVB is Audio Video Bridging. I haven't checked the latest macOS release, but AVB has still been supported on capable hardware. The gotcha is that few Apple devices currently have hardwired networking, so it is becoming increasingly rare unless a Thunderbolt dongle is acquired. It is a pain to wire up a large building, but the great thing is that the cabling doesn't need to change to add AVB/TSN features. CAT5E is good for 2.5 Gbit and PoE. There is also the factor that new buildings will have some form of wired infrastructure regardless, if only to support wireless access points for end users. TSN is becoming part of the Wi-Fi 7 standard. That is where things are rapidly changing, as AVB/TSN-capable switches are going to become mainstream to support these additional features. I personally have five AVB-capable switches at home, but I'm also not a typical end user. I'm bringing my work home a bit.
  18. The digital data itself is not altered, as you point out, but rather the data flow is. AVB/TSN reins in jitter by bounding latencies. Real-time devices generally just play back a buffer, which gets interesting when data arrives out of order or at an inconsistent rate due to highly variable latencies, or when clocks are not properly synced. By changing how data arrives in that buffer, the playback is then changed. The closer to real time the work is, the more likely playing back data that arrives out of order, or simply skipping missing data, becomes. That is where the chaos is introduced in the digital world: the data itself isn't corrupted, but how it gets to the destination over a switched network can be. (A small sketch of this playback-buffer behavior is below.) There is no need to pack/unpack the data stream when you can properly register what it is with a header. This registration of what the data stream is, is defined as part of MSRP (Multiple Stream Reservation Protocol) inside the switch. The presence of these streams is broadcast out so that devices across different switches can exchange data. The data type is part of the header, so video streams are distinct from audio streams. Other data types are also permitted, as AVB/TSN is leveraged in the automotive industry for sensors as well as in industrial applications like manufacturing. One of the more interesting applications I've heard of is encapsulating USB as a data stream, since that too is latency sensitive, for extending a USB signal over a network. As for what I do, my typical day-to-day is as part of an in-house AV integration team for a Fortune 100 company. The audio work is mainly for conferencing/classroom applications, though I have done theaters/auditoriums as part of the job (they're just far more rare). My biggest project was a campus-wide audio network encompassing six buildings where only the network-based microphones and amplifiers/speakers were in the rooms, with the DSP processing all centralized in a single networking closet. This permitted things like paging to be incorporated 'for free' as the infrastructure was already in place for it. I just had to add a PoE-capable AVB/TSN switch to every floor and then leverage some fiber runs that the networking team was abandoning in favor of new OM5 fiber they were having laid for normal data traffic. Outside of my day-to-day job I've done a bit of consulting and helped a friend of mine design the industrial manufacturing network for a chemical plant. That consulting gig did involve some mission-critical devices, though not quite like you were describing.
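A minimal sketch of that playback-buffer behavior: packets are held briefly so late or out-of-order arrivals can be reordered, and anything missing past its slot is skipped. This is purely illustrative, not any particular device's implementation.

```python
import heapq

class JitterBuffer:
    """Hold a few packets so late/out-of-order arrivals can be reordered."""
    def __init__(self, depth=2):
        self.depth = depth            # packets to hold before playback starts
        self.heap = []                # (sequence_number, payload)
        self.next_seq = 0
        self.primed = False

    def push(self, seq, payload):
        heapq.heappush(self.heap, (seq, payload))

    def pop(self):
        if not self.primed:                       # wait until the buffer fills once
            if len(self.heap) < self.depth:
                return None
            self.primed = True
        if not self.heap or self.heap[0][0] > self.next_seq:
            self.next_seq += 1
            return "<silence>"                    # missing or late packet: skip it
        seq, payload = heapq.heappop(self.heap)
        self.next_seq = seq + 1
        return payload

buf = JitterBuffer(depth=2)
out = []
for seq in (0, 2, 1, 4, 5):                       # packet 3 never arrives; 1 and 2 swap
    buf.push(seq, f"pkt{seq}")
    out.append(buf.pop())
out += [buf.pop(), buf.pop()]                     # drain the rest
print(out)   # [None, 'pkt0', 'pkt1', 'pkt2', '<silence>', 'pkt4', 'pkt5']
```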
  19. The sad thing is that there are legitimate audiophile network switches. They are expensive and specialized, as they extend the actual Ethernet specification to support features like real bandwidth reservation in the switch, queuing alterations, a time clock and determinism. These come from several suites of standards like 802.1Qav (AVB), 802.1Qbv (TSN), 802.1Qcv (TSN), and IEEE 1588/802.1AS (time clock). These features are distinctly marked on the switches. Since the AQ-SWITCH doesn't have any of these features, it is indeed snake oil. Now for some errors in the video. While it is absolutely true that with a digital signal you either get it or you don't, the CAT cables that connect the audio device and the switch do carry one other thing: ground. Benign in the digital world, but if the audio end point has analog components (ADC, amplifier etc.) there is the potential for ground hum to travel from the switch to the analog end point. (Ground hum like this does not matter for a digital-to-digital end point, as noted in the video.) This can be avoided by using unshielded CAT cables, which don't carry the ground lines used for shielding. I have actually encountered this in the field with a microphone, with the manufacturer's solution simply being 'don't use shielded cables'. Layer 2 connections can be aware of the streams because technologies like AVB/TSN are Layer 2 based. All the streams are broadcast and subscribed at the Layer 2 level using the MSRP protocol as part of the greater AVB/TSN spec. VXLAN is necessary to tunnel this Layer 2 traffic over a Layer 3 connection that would not normally be bridged, and you have to hope that the switches are fast enough to keep traffic within the proper latency bounds for AVB/TSN.
  20. The system was not running true quad SLI, as there was not a 4-way nvLink bridge installed. Rather, it was two 2-way SLI pairs. Same number of GPUs in total, but applications see them as two logical GPUs, with the driver then splitting the workload in half again across each pair's physical GPUs. I suspect that stable results would have been achieved by either removing the nvLink bridges so the system would see four physical GPUs, or by disabling one of the pairs with an nvLink bridge between them. On a more positive note, each logical GPU would have had 96 GB of memory to leverage, as memory is shared over the nvLink connector. With a true quad nvLink setup, there would be 192 GB of memory for the GPUs to leverage.
  21. The most distracting thing in this video is the amount of glare you see depending on the camera angle. In the side shots you can see the deck fence reflected on the monitor, but when it cuts to the over-the-shoulder view, it nearly disappears. Between the two different cameras recording, the perspectives, and where the cameras are focused, the gameplay footage appears as if it was pasted over the monitor itself in post production. It wasn't until a rewatch, when I noticed a moving reflection in the Resident Evil segment, that I was able to spot a reflection in the over-the-shoulder view. Something just seemed really, really off while watching, as if a blue screen was being used. This probably should have been made in the studio. One thing that does seem off is the sponsor ad at the end, which plays stock web-store capture for the pillow. Feels like someone inserted the wrong clip there. As for the monitor itself, specs and pricing seem to be great. I do think the buyer-beware section is well merited, but at this instant there is nothing like it for the price. I'm lucky to have a MicroCenter near me, so I'll hopefully be able to view this unit in person sometime soon.
  22. Paging/swap has been around in the PC space in some form since the '80s, as have RAM disks. Most people don't tune these features like they did decades ago. In fact, with more modern hardware it has become increasingly difficult to use a RAM disk properly. The thing about paging/swapping is that on SSDs it can conceptually wear out typical NAND flash quicker than ordinary consumer bulk-storage usage would. The one thing that has saved SSDs is that all the major operating systems have defaulted to a dynamic scheme where they'll leverage disk space only as necessary. The default maximum is still twice the amount of physical memory in a system. If you need more than what an M.2 NVMe can provide on a desktop, you can get some M.2 to U.2 adapters and cable a U.2 drive into the system. At that point the drives are server class, with server-class capacities and prices. For laptops, you are indeed limited to just M.2.
  23. It is more about accessibility and flexibility, as it doesn't necessarily have to be PlayStations or DVD/Blu-ray players in this type of topology. You can pretty much embed any HDMI signal + USB and output it elsewhere over a network to leverage a system in another room. The idea is great for keeping loud, noisy, power-hungry systems away from where the work is actually done. However, that doesn't mean there aren't any trade-offs... The problem is that NVX has its issues in terms of quality, as compressing 3840 x 2160@60 Hz down to a bit rate that can be transmitted over 1 Gbit Ethernet does not yield the greatest quality. I've seen both of Crestron's algorithms, JPEG 2000 (Gen 1) and JPEG-XS (Gen 2), and I can tell that what I'm seeing is being compressed. For some applications it is more difficult to tell. Simply put, it is unrealistic to expect high-quality, low-latency 4K/UHD video to be pumped over 1 Gbit of bandwidth using realtime encoding. This goes for other AV-over-IP video implementations including Extron, Biamp, Atlona, QSC, Samsung/Harman/AMX/SVSI and many others. The vendors that do have good quality are all using 10 Gbit Ethernet, which has its trade-offs in terms of cost. Granted, for businesses and video production, going to 10 Gbit (or faster) is far more common than in a residential environment. The home network here certainly can support 10 Gbit in many locations, so that is not much of an issue, but it would require upgrading more of the backend infrastructure. The compromise I see is 2.5 Gbit Ethernet, as that should be the big consumer networking upgrade. However, the options for AV-over-IP solutions supporting 2.5 Gbit are surprisingly slim at this time. Crestron has it on their roadmap for 8K video support, but I fear that for that level of quality, visual compromises will also occur similar to 4K/UHD over 1 Gbit. However, 2.5 Gbit support probably would be the necessary bandwidth boost to finally provide proper 4K/UHD quality (see the ratio sketch after this post). I will say that at 1080p over 1 Gbit, quality is perfectly fine for nearly every AV-over-IP solution I've seen. The other big downside to AV-over-IP, even at high bit rates, is that there is no single interoperable standard (yet). My Crestron (and other vendor) reps have yet to provide me with a viable transition plan for when that eventually does happen, as they all know a bigger player in the media market (Apple, Google, Samsung, nVidia, AMD, Intel, take your pick) will come out with a solution that deprecates all of these proprietary implementations. The closest to a reasonable response came from those piggybacking on other standards (like SMPTE or AVB), which will continue to be supported out of necessity/continued interoperability. AV-over-IP is the wave of the future, but until that interoperable standard arrives, it is best to keep a distance and watch what happens in this segment. While not low latency, there is an interoperable standard in H.264/H.265 streaming, a technology Crestron fully supports on other products. However, support for these on NVX is surprisingly absent. Conceptually an NVX end point should be able to play a stream directly off of a Plex server or a DMC-STRO card straight off of a Crestron DM matrix switcher. The other downside with NVX is that various features are lost, such as variable refresh rate. Considering the latency-sensitive nature of NVX, this is a feature that could cut down on the end-to-end latency by being able to pre-empt a refresh call from the display when a frame arrives.
I've asked this directly to Crestron engineers at Masters before and got some interesting looks of 'why didn't I think of this?'. In comparison, the links in this video are simply DP over fiber, which provides the full DP bandwidth and feature set end-to-end with only a few milliseconds of latency for media conversion. More expensive, but zero compromise in a point-to-point scenario. While the video world is full of compromises, moving to an IP-based audio system does have huge benefits. First off, there already are interoperable standards in AES67 for media transport (which the Crestron NAX line does support) and AVB. The most popular audio-over-the-network solution, Dante, is also interoperable with AES67. The tricky part is de-embedding multichannel audio and getting it onto the network. If all you're doing is PCM, or decoding at the host, there are far more options since DRM doesn't get in the way. Once on the network, I'd recommend sending everything to a centralized DSP for mixing/speaker-specific EQ from any source and then outputting that to some PoE-based network speakers or a network-capable amplifier as appropriate. It saves so much in terms of cabling and is very flexible in terms of multichannel source routing/mixing. Crestron is expanding into this market, but what they've shown so far is not ideal for this residential use case. While my Crestron rep hasn't explicitly stated as much, I strongly suspect the silicon shortage is impacting what they can release, as they've acknowledged several holes in their lineup (Procise retired).
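Following on from the 4K-over-1-Gbit point above, the required compression ratio at each common link speed falls straight out of the arithmetic (assuming roughly 12 Gbit/s of raw 8-bit RGB 4K60 pixel data and leaving ~25% of the link for everything else; both figures are assumptions for illustration):

```python
# Approximate compression ratios needed to carry raw UHD/4K@60 over common links.

RAW_4K60_GBPS = 12.0                      # ~3840*2160*60*24 bits, rounded
for link_gbps in (1.0, 2.5, 10.0):
    usable = link_gbps * 0.75             # keep ~25% of the link for other traffic
    print(f"{link_gbps:>4} Gbit link -> ~{RAW_4K60_GBPS / usable:.0f}:1 compression")
```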
  24. The video is a good refresher on how the memory hierarchy works and it is fairly accurate for consumer systems, but things are a bit different at the top of the computing world. As always, stuff that first appears at the top tends to trickle down as time goes on. IBM high-end server/mainframe hardware purchasing and licensing works like this: you buy the hardware and then you need a license to actually enable it for use. So download a license key and enable more of the memory. Woah, the video title can come true! Bonus: these systems support hardware-accelerated memory compression and encryption. Intel has toyed with such things as enabling additional L2 cache on some Celerons in the past via license key. Similar in concept was the VROC functionality on Skylake workstations (socket 2066/3647). While not a secondary license, Intel notoriously did artificially limit how much memory could be connected to a Skylake-SP Xeon based upon its SKU, forcing server owners to pay more if they wanted that capacity unlocked. High-end clustered servers are migrating to a flat memory space where memory in one system can be directly addressed by a remote node using RoCE. This is a bit different than paging, as RAM in the remote systems is seen a bit higher up in the hierarchy, effectively another tier in a NUMA topology. While still over a network, the RoCE protocols remove much of the software stack that adds latency on both ends of the connection. Bandwidth is of course dependent on the network topology, but RoCE typically uses 100 Gbit Ethernet, so bandwidth is not terrible and can be increased by adding additional 100 Gbit Ethernet connections. At the very top, instead of RoCE, some networks are dropping Ethernet entirely and moving towards PCIe. This removes the traditional network switch in favor of large PCIe switches interlinked together. The advantage here is massive bandwidth and far lower latency than Ethernet. The trade-off is scalability and cost due to the expense of PCIe switch chips, cabling, and the limited number of devices that can be interlinked. While this type of topology is rarely deployed today even in the server niche, with PCIe 5.0 and the adoption of CXL (and similar technologies), putting memory directly on the PCIe bus will happen more frequently. This would permit dedicated memory boxes to sit alongside otherwise normal servers to expand memory capacity. Other features like hot memory additions and hot host/processor node additions become possible to increase capacity on demand. The last thing to really upset the memory hierarchy in the server space has been non-volatile memory. Intel's Optane DIMM technology is the premier version of this, which allows high-capacity Optane modules to sit on the memory bus and be addressed as an extension of DRAM. This is vastly faster than even paging to NVMe SSDs in both latency and bandwidth, but slower than DRAM. The three main advantages are higher capacity, lower cost than DRAM, and that it survives shutdown of a system. (A toy average-access-time calculation is below.) Personally I'm surprised that Intel has kept this technology exclusive to the server market as a DRAM alternative, as it would have tangible benefits in the consumer space (it has hit consumers as an NVMe drive). Longer battery life, more main memory and instant resume are some immediate potential benefits, even if performance took a hit. A large L3 or even an eDRAM L4 cache would greatly offset the usage of Optane as main memory on the consumer side.
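A toy illustration of why a slower-but-larger tier (Optane DIMMs, RoCE-attached remote RAM, CXL memory boxes) still beats paging out to NVMe: average access time is the hit-rate-weighted sum of each tier's latency. The latencies and hit rates below are rough orders of magnitude chosen purely for illustration.

```python
# Average access time = sum(fraction_of_accesses * latency) over the tiers.
# All numbers are illustrative orders of magnitude, not benchmarks.

def avg_access_ns(tiers):
    return sum(frac * latency_ns for frac, latency_ns in tiers)

dram_plus_nvme_swap = [(0.95, 100), (0.05, 100_000)]   # DRAM hit vs. NVMe page-in
dram_plus_nvm_tier  = [(0.95, 100), (0.05, 350)]       # DRAM hit vs. Optane-style tier
print(f"DRAM + NVMe swap: ~{avg_access_ns(dram_plus_nvme_swap):,.0f} ns average")
print(f"DRAM + NVM tier:  ~{avg_access_ns(dram_plus_nvm_tier):,.0f} ns average")
```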
  25. Why worry about accurate packet timestamps generated within a machine when you can leverage determinism to know, within a precise range, when information will arrive at the far end before you even send it? This video gets so, so close, as it shows the data sheet of the Intel I210 Ethernet controller at 10:13 regarding the hardware timestamp feature, but skips over the AVB protocols for determinism listed right above that line item. For more accurate network time, there are devices that can sync to GPS with their own atomic clock and then act as an IEEE 1588/802.1AS master clock. The result is pretty much the same effect as the atomic clock card in the video, but implemented at the network level where it is shared. Because it is shared, the cost per node is lower. The security aspect mentioned, of knowing that a packet took too long to arrive, is already part of the AVB specification due to the bounds on jitter. The packet may not be discarded by default though, as what happens in that scenario is configurable. It can be discarded as mentioned, flagged (which can then be programmed for quarantine) or reinserted properly into the data stream. All are valid choices depending on the task at hand, and these policies can be applied on a per-data-stream basis. The benefits for streaming mentioned at 12:20 are real as well; I've seen them using AVB. The main benefit is that if you know what your maximum jitter is, you know the maximum amount of buffering needed for any potential reorder. (The arithmetic is sketched below.) The end point applications can display various stream issues, but since the switch is involved, you can actually watch data stream performance at that level depending upon the switch itself. I'm also fairly certain that LTT is already using some form of IEEE 1588 on their network given some of the production gear I've seen used.
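That buffering point reduces to one line of arithmetic: buffer size ≈ stream bitrate × maximum jitter. The bitrate and jitter figures below are arbitrary examples.

```python
# Reorder-buffer sizing from a jitter bound: bytes = bitrate/8 * max_jitter.

def buffer_bytes(bitrate_mbps, max_jitter_ms):
    return bitrate_mbps * 1e6 / 8 * (max_jitter_ms / 1000)

for jitter_ms in (2, 20, 200):   # tightly bounded (AVB-like) vs. loose networks
    kib = buffer_bytes(50, jitter_ms) / 1024
    print(f"{jitter_ms:>3} ms max jitter @ 50 Mbit/s -> ~{kib:.0f} KiB of buffering")
```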