power666

Member
  • Content Count

    19
  • Joined

  • Last visited

Awards


This user doesn't have any awards

About power666

  • Title
    Newbie


  1. power666

    The New Mac Pro…

    A couple of corrections from the video: the trash can Mac Pro is actually a six year old design. It was announced at WWDC 2013 but first shipped in December 2013, and given that Apple is announcing this system at WWDC 2019 and shipping later, the six year mark is apt. Apple has mentioned the 8 pin PCIe power connectors, but no normal headers have been spotted in the motherboard shots I've seen. Either Apple has hidden them very well or they're using some proprietary connector. Apple has not said either way whether the system will include such cables or whether they'll be a separate accessory. The Xeon W's used in this Mac Pro are not yet available on the PC side and they are all LGA 3647. For those keeping track, this may be the fourth iteration of LGA 3647 (Xeon Phi, Xeon Scalable, Xeon Scalable with FPGA and now this), which brings 64 PCIe lane support but no UPI interfaces to other sockets. It is unknown how interoperable this socket is with existing LGA 3647 parts, or whether anything else in the future will use it at all.

    Speaking of costs, you are omitting the dual 10 Gbit NIC, though that may or may not be included on a workstation motherboard. I would argue that the video card is more equivalent to the workstation Radeon Pro WX 7100, which does cost a bit more. The Boot Camp drivers have enabled several of the workstation-only features on the trash can Mac Pro's graphics cards, so there is merit there. Still, Apple is overcharging versus DIY, but they are not out of line with similar single socket systems from Dell/HP/Lenovo. A statement made in the video also applies to the builds from Dell/HP/Lenovo: they do offer different component selections that permit a lower entry level price. Yeah, the 1.7 GHz hexacore entry level Xeon-SP you can get in an entry level Windows workstation is going to save you several thousand versus a 3.5 GHz, 8 core Xeon-SP.

    Now for my own comments on the system: about time. Apple again went in a different direction with their design. The most interesting aspect in my opinion is that, to conserve system volume, they put the DIMM slots and SSDs on the back of the motherboard. I would have moved the CPU to the other side as well to reduce the overall system height even further at the cost of making it wider. Using specialty video cards that are 'passive' rings of a server-like build, where the high pressure fans at the front of the chassis are responsible for the airflow. Apple has a good track record of keeping these quiet but sometimes ventures into form over function. Ever put your hand on top of the new Mac Mini while under load?

    I do think Apple made a mistake by not leveraging Epyc, as that would have provided more memory support (with no memory capacity premium either) and more PCIe lanes. There does appear to be a PCIe switch on the motherboard. A far lesser mistake here is that they are not leveraging any of the Xeon Scalable chips with an on-package FPGA like the Xeon Gold 6138P. Intel had far more plans here at 10 nm, but we all know how that roadmap has played out. Instead we get an external FPGA on a PCIe card rather than inside the CPU socket, which is fine for now given the niche. I hope the pricing for the LGA 3647 Xeon W's is not outrageous now that Intel has competition.

    The GPU selection is very interesting. Good to see Infinity Fabric leveraged at last on the GPU side of things. Very curious to see how this system performs with 1, 2 and 4 GPUs tied together. While insane for this purpose, I am curious about gaming when topped out, especially at resolutions beyond 4K. Frame rates shouldn't be bad, but frame time is the thing to really watch.

    SSD pricing is poor and capacity is lacking. For a proprietary solution, there is little advantage to going this route. There are a couple of SATA ports for spinning rust if capacity is warranted, and the PCIe slots can be adapted to M.2 for more commodity storage. There is really no incentive to go with Apple's storage options unless they force booting from them.

    The optional wheels are going to be nice. I like wheels. See the Lian Li PC343.

    The XDR display is going to be nice and I look forward to seeing one. I have experience using a Barco reference display for medical imaging that can attain an even higher brightness, so it'll be a good comparison. The Barco unit was ~$38K a few years back and likely hasn't dropped in price due to its certification as an FDA medical device. Apple is nickel and diming a bit here by not including the stand with the 'base' model and then offering a VESA mount model for slightly less. Yeah, displays like these often get custom mounts, but plenty of them don't. This reeks of being cheap on a $5000 product.
  2. power666

    HP Enterprise to Acquire Cray Inc for $1.3B

    The key thing to look for in Ethernet deployments is the node count and topology. That extra latency in the middle quickly adds up. Case in point: only four of the top 100 systems leverage Ethernet. It does get more popular further down the list. This is one of the reasons why I called HPE's purchase a defensive move: they wanted an interconnect, and Cray was one of the 'smaller' companies that wasn't a big competitor or one of HPE's suppliers.

    In terms of large socket machines, HPE's DragonHawk was not that impressive, as it leveraged a design that was initially developed for Itanium (note that this was the Tukwila chip with QPI, like Xeons). SGI had NUMAlink, which performed better. So what did HPE do? Well, they didn't buy SGI; well, not yet. Rather, HPE rebranded SGI hardware under the Integrity MC990 name. (At the time, Dell was doing this too.) Now Dell has no means of going beyond 8 sockets in a server, and thus many of the benchmark crowns are HPE's for the taking. Even if scaling is poor, HPE can brute force performance higher.

    While most of it right now is #marketing, there is some genuinely good research coming out of "The Machine". I do think HPE's goals were a bit too lofty for the timetable they wanted. Memristors only exist in their labs, and photonics is on the way but likely won't take off until the entire industry moves to chiplets, so that the changes needed to make silicon photonics work won't compromise other aspects of the design.

    HPE has been the biggest backer of and the biggest contributor to Gen-Z. There are indeed other companies helping in the effort, but HPE has a strong influence in the consortium. This circles back to the idea of interconnects: Ethernet is commodity. Dell and Lenovo can leverage Ethernet just like they can roll out the same x86 commodity servers. The other competitor to Ethernet was Omnipath, but it could also be seen as 'commodity': select Xeon and Xeon Phi chips could get on-package fabric without sacrificing PCIe lanes in the host system. Purchasing Cray gets HPE access to an interconnect of their own and thus the edge they need over their competitors.

    This could split into its own conversation given how we now have HP and HPE as two separate entities. Mismanagement at the top had HP in a downward spiral until Meg Whitman. There was more than just dead weight shed to keep the company afloat during this period though. Now that they have stabilized, HPE is essentially re-acquiring talent and technologies they used to have internally. The Dell EMC success has much to do with their own internal re-alignment and, for a period of time, going private. Dell played their hand well by going private to tackle some internal re-alignments that could be done profitably, just not to a shareholder's desire of profit. The refocus was a success there as well.
  3. power666

    HP Enterprise to Acquire Cray Inc for $1.3B

    RDMA is nice, as it helps remove the bottleneck at the system nodes, but there is still the issue of Ethernet switches in between the nodes. In particular I will cite this paper (PDF), which puts RDMA Ethernet switching at 100 to 300 ns higher than InfiniBand. For a single switch that isn't bad, but when your topology involves several layers of switches, that adds up and gives InfiniBand and similar fabrics a clear advantage over RDMA-capable 100 Gbit Ethernet (see the sketch at the end of this post). You also cite the reason why InfiniBand market share has been declining: Intel purchased one of the bigger InfiniBand manufacturers and created Omnipath. This result should not be surprising.

    I believe that you are referring to NUMAlink, which I mentioned. It replaces the similar coherent interconnect HP re-used for their DragonHawk based SuperDome systems. The Intel partnership was mainly to get a chipset license in effect, something Intel hasn't granted in years except for these high end server exceptions. The interconnect chip is entirely an HPE affair.

    Chicken, meet egg. "The Machine" was publicly announced in 2014, with internal research prior to that. Gen-Z was announced in 2016. The core concept of "The Machine" was that it was a memory focused architecture, just like Gen-Z. In fact HPE even says so: "The fabric is what ties physical packages of memory together to form the vast pool of memory at the heart of Memory-Driven Computing. [...] But we're not the only people in the industry who see the necessity for a fast fabric. We're contributing these findings to an industry-led consortium called Gen-Z..."

    Gen-Z has similarities with OpenCAPI and CCIX in that it enables high bandwidth, low latency, low overhead communication between accelerators and memory. The end goal is the same, but how they go about achieving it is indeed very different.

    I would disagree with that. Factoring out the recent purchase of SGI, HPE still has a number of systems on the Top500 list. Interestingly enough, one system isn't even x86 based. Sans SGI, HPE would be the 7th largest vendor on the Top500 list today. Summing up Cray, SGI and HPE would put them at the number 2 spot. If you look through previous lists, HPE has been fairly well represented historically, especially in the era when dual socket x86 CPU nodes for clustering were dominant.
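    To put rough numbers on how that per-switch penalty compounds, here is a minimal Python sketch. Only the 100 to 300 ns per-switch figure comes from the cited paper; the 5-hop worst-case fat tree path and the round-trip doubling are illustrative assumptions of mine.

    ```python
    # Toy model: a fixed per-switch latency penalty compounds with every switch
    # layer a packet crosses. Hop count and topology are assumed, not measured.

    def extra_latency_ns(per_switch_penalty_ns, switch_hops):
        """Total added latency for one traversal through `switch_hops` switches."""
        return per_switch_penalty_ns * switch_hops

    hops = 5  # assumed worst case in a 3-tier fat tree: leaf -> spine -> core -> spine -> leaf
    for penalty_ns in (100, 300):
        one_way = extra_latency_ns(penalty_ns, hops)
        print(f"{penalty_ns} ns/switch x {hops} hops: "
              f"{one_way} ns one way, {2 * one_way} ns round trip")
    ```

    Even at the optimistic end, that is half a microsecond of added latency each way before any node-side overhead, which matters when fabrics compete on microsecond-scale end-to-end latencies.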
  4. Did Intel know about this particular bug in 2007? No. Some of the MDS family of attacks require hyperthreading, which Intel at the time had already abandoned with the Pentium 4 years prior. I have not seen any mention that this could (or could not) be exploited on NetBurst based designs. Hyperthreading would not return until 2009, years after that post was made. Has Intel known about various bugs and errata for a long time? Yes. Does Intel catch some of these before release? Yes. That is why TSX was disabled on consumer Haswell post release and on all Haswell-EP/EX parts. Similarly, errata are why Intel never shipped Optane DIMMs with Skylake-SP. Is Theo literally ROTFL right now because some of his predictions came true? YES. He has been advocating for the disabling of SMT/hyperthreading for years now due to his perceptions of lax security.
  5. power666

    HP Enterprise to Acquire Cray Inc for $1.3B

    HPE has been doing a lot on the fabric side of things. Gen-Z looks like an idea that was spawned from their "The Machine" research. HPE's purchase of SGI got them NUMAlink, which is used inside the SuperDome Flex servers (though HPE doesn't scale them as large as SGI used to, for various reasons). Acquiring Cray puts another fabric designer into their portfolio while HPE is also promoting their own technology.

    This could be a response to market changes, as Mellanox is being purchased by nVidia. Intel has been quiet about Omnipath so far this year, and the on-package options are missing from Cascade Lake Xeons; Intel should be debuting 200 Gbit Omnipath this year. Whatever the case may be, the number of companies providing fabric, or at least fabric that likely will not be tied to another piece of hardware (x86 + Omnipath, nVidia Tesla + InfiniBand), is declining.* This could be seen as a defensive move in the market to ensure that they have continued access to high speed fabrics.

    HPE could be moving in an offensive nature here too. HP purchased Compaq many years ago, with one of the motivators being the DEC Alpha architecture, only to kill it off in favor of the partially HP developed Itanium chip. A bit of a shell game happened where the Alpha assets were spun off into another company (which Intel quickly bought). The result was a reduction in competition while freeing up a customer base who needed a new hardware platform to migrate to. Unfortunately Itanium was a flop while the combination of x86-64 + Linux exploded in popularity in the data center. HP's offensive move here ended up shooting themselves in the foot. Acquiring Cray would remove a competitor from the market just as HPE is beginning to push Gen-Z, and Cray is one of the few companies that could produce an open market competitor. The likes of OpenCAPI and CCIX do overlap a bit with Gen-Z in functionality, but they are focused more on communication inside a system, not between systems, which is what Gen-Z is attempting. It would be difficult for history to repeat itself here.

    *High speed Ethernet is an option for bandwidth, but for HPC workloads its high latency is often suboptimal.
  6. power666

    We Got SCAMMED on Deal Extreme...

    I bet the Aquantia NIC also adds AVB audio when connected to a Mac, so you could also use it as an audio card.
  7. power666

    An Open Source Motherboard?!

    Ummm... those POWER9 chips have eight memory channels. As such, the more fully populated the memory channels are, the faster the system gets. POWER loves memory bandwidth. The big iron POWER9's use a different socket, as they support buffered memory. It is still eight channels to those buffers, but behind each buffer there can be four DDR4 channels. That allows a maximum of 64 DIMMs per socket and 16 TB of memory using the new 256 GB LR-DIMMs. Oh, and 16 of those sockets can be configured together, bringing memory capacity up to 256 TB. That's four times more than x86 chips can physically support in a system.* (The capacity arithmetic is sketched at the end of this post.)

    One thing not mentioned is that the high speed nvLink bus on the POWER9 chips only works through mezzanine connectors for Tesla accelerators. Conceptually, nVidia could enable nvLink over the PCIe connector at boot, but the Quadro GP100 and Quadro GV100 are limited to nvLink off of their own connector at the top of the graphics board for multi-GPU setups. Still hyper fast and coherent with the POWER9 memory base. Oh, and the nvLink ports on the POWER9 don't go through the normal nvLink ports on the nvSwitch. It is possible to build a dual socket POWER9 system with 32 Tesla GV100 chips, all fully coherent, with two hops (source -> nvSwitch -> destination) between nodes. If only there were a Tesla with a monitor output so we could try 32-way SLI (eat your heart out, Quantum3D).

    *Cascade Lake is opening up higher capacity support as well as enabling Optane DIMMs for some truly crazy capacities.
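    Here is a quick Python sanity check of that capacity arithmetic. The split of two DIMMs per DDR4 channel is my assumption to reach the stated 64 DIMM figure; the other numbers come from the post above.

    ```python
    # Sanity check of the POWER9 buffered-memory capacity figures.

    buffer_channels_per_socket = 8   # memory buffer channels off each POWER9
    ddr4_channels_per_buffer = 4     # DDR4 channels behind each buffer
    dimms_per_ddr4_channel = 2       # assumed, to reach the 64-DIMM figure
    lrdimm_gb = 256                  # the new 256 GB LR-DIMMs

    dimms_per_socket = (buffer_channels_per_socket
                        * ddr4_channels_per_buffer
                        * dimms_per_ddr4_channel)
    tb_per_socket = dimms_per_socket * lrdimm_gb / 1024

    print(f"{dimms_per_socket} DIMMs per socket, {tb_per_socket:.0f} TB per socket")
    print(f"16 sockets: {16 * tb_per_socket:.0f} TB")  # 64 DIMMs, 16 TB, 256 TB
    ```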
  8. power666

    I Hope We Don't Get a Noise Complaint...

    This is not how you install a hanging projector. That thing is going to fall down. Mainly, the base is not that secure on the wooden bracket; even in the video you can see how unstable it is as it wobbles. To this point, the bottom of the wooden base is screwed into the two vertical beams in the same direction as the weight is pulling on it.

    Ideally you'd want a single pipe dropping from the ceiling to the proper height. If you want/need the adjustable portion, you can simply get a pipe extension to combine a fixed length with the adjustable portion. The fixed length of pipe can have a hole cut into the top so that cabling can be fed through it, maintaining that clean external look. (Generally, using the hole in the mounting base for cabling is practical when using ceiling tiles, since the projector mount itself is suspended by wiring above the ceiling tile, not on an actual support.) The base of the pipe would be secured to a board by drilling all the way through the board, with long bolts that run through the entire assembly and a metal plate acting as a glorified washer to help distribute the weight across the board. This board assembly would then be secured into multiple beams to distribute that weight. Probably a bit overkill given the relatively light weight of the projector being used, but it would be stable, I wouldn't worry about it falling down on someone, and it would also look far better than the solution used in the video.

    Another sneaky thing would be to hang the screen from pipes connected to the ceiling. This would provide room for the display and other items behind the screen, and the cabling for the drop trigger can run through the pipes as well, hiding that.

    For a home theater setup, I would have gone with a screen that has ambient light rejection to help deal with other light sources in the room. Some manufacturers make ALR screens that drop, but others only do ALR in a fixed frame.

    Given that Linus did a recent video leveraging HDbaseT, I would have presumed that that is what was used to bring video to the projector. The cable distances involved are what HDbaseT was designed to handle.

    Though given the eagerness for more power and this channel's ability to get demo/evaluation gear at no cost to them, I suspect that this home theater setup will be replaced by a microLED wall in roughly a year. That'd be a 130 in or so display with an 8K resolution, and depending on the LED controller and how many logical displays it actually is, it could do 240 Hz and be fine for gaming too. I would also predict some sound dampening material being added to the room.
  9. power666

    The WEIRDEST Video Card We’ve EVER Seen..

    Nope, due to it being a glorified MXM carrier card. The outputs of the MXM card go through the edge connector, and those outputs are routed on the carrier card to the HDbaseT transceivers. Conceptually you could swap MXM cards, but there are some quirks (nVidia GPUs only support four outputs, whereas the card in the video has six). If you wanted to go that distance, you can get external extenders that'll accept an ordinary HDMI input.
  10. power666

    The WEIRDEST Video Card We’ve EVER Seen..

    Easier to get through walls and conduit for installs. Fiber HDMI cables are horrible for this, as they generally have fixed ends, which makes pulling them more difficult than something you can terminate yourself.
  11. power666

    The WEIRDEST Video Card We’ve EVER Seen..

    HDbaseT uses category cabling, but the raw signal going over the wiring is not Ethernet compliant. Thus it is its own thing, which needs specialized switchers to do any sort of routing. It can do uncompressed, low latency (sub one frame) 4K30. The 4K60 implementations out there drop down to 4:2:0 chroma sampling or they leverage display stream compression (DSC). HDbaseT-IP, however, is different and is yet another AV-over-IP (AVoIP) implementation.

    SDVoE is a form of AVoIP. SDVoE can do good high quality 4K video because it requires a 10 Gbit Ethernet connection; most of those are also based around fiber. There is a codec involved with SDVoE, but since it requires 10 Gbit, the only time it kicks in is with 4K60 4:4:4 video. The codec part is important, as it can scale upward to 8K resolution. I will say that, on paper, what impresses me about the Netgear M4300 switch is that it has SDVoE encoder and decoder modules with HDMI 2.0 ports. Regardless of what format wins the AVoIP war, the core switches will look like the M4300. SDVoE is also nice as it is being promoted as a consortium solution. The downside is that SDVoE doesn't have any really big players pushing it from the AV side of things (it does have several name brand IT backers though).

    Dante is currently audio only (Audinate did announce an M-JPEG2000 AVoIP product at ISE last month, but that is sort of separate from the audio side). It has become the de facto standard for moving real time professional/production audio over a network. AES67 is the fully interoperable version of Dante that lacks a few minor features. Audinate, the company behind Dante, fully supports AES67 in their controller application and, oddly enough, SDVoE.

    AVB does both audio and video but doesn't really define much on the video side. Biamp leverages M-JPEG2000 for their video codec. It supports both 1 Gbit and 10 Gbit for video. Unfortunately you can't do both simultaneously (i.e. leverage an 8 Gbit video stream for local presentation while sending an 800 Mbit stream for transport across a bottlenecked corporate network). Biamp's endpoints also support decoding of H.264, which is a nice feature to have if you need to bring in another stream. The big catch for AVB is that it requires special Ethernet switches to handle a deterministic connection, which essentially guarantees a maximum time to reach the destination; conceptually this is a higher order form of quality of service. On the switch side, most newer high speed switches are incorporating AVB features (mainly because high frequency trading outfits love them), including those at higher data rates. Hence there are 40 Gbit and 100 Gbit Ethernet switches that support AVB out today, just no encoders or decoders to bring in video (yet). On the audio side, AVB has been slowly pushed out of the market by Dante and AES67. One of the underreported things about AVB is that all recent Macs with Ethernet (or Apple's Thunderbolt Ethernet adapter) support AVB; the iMac Pro is capable of moving 1024 audio channels in and out. Like SDVoE, AVB is a consortium effort by the AVnu Alliance, but only Biamp is really pushing it in the AV market. However, AVB has seen an uptick in interest from the automotive market, as several companies are looking at it as a method to pass data between sensors (including camera video feeds).

    Crestron offers their NVX solution, which leverages 1 Gbit Ethernet. It was initially based off of M-JPEG2000, but they switched to something proprietary which they are calling Pixel Perfect to improve the quality of slow motion/static or complex images. Yep, they pretty much admitted that M-JPEG2000 was horrible for 4K. I've personally seen the older M-JPEG2000 solutions and yeah, you could tell where details were being removed; the best consumer analogy would be comparing a 4K Blu-ray to a 4K Netflix stream. Like everything else Crestron, it integrates well with their other controller and audio products. The first generation of NVX products was based around an Intel/Altera FPGA solution which could do decoding or encoding, but not both simultaneously (a reboot was required to switch modes). The second generation of NVX boxes has dedicated encoder and decoder endpoints, and the decoders also support H.264 viewing of streams. NVX is also able to act as a USB 2.0 extender for KVM functionality. However, if you do want to use the full bandwidth of USB 2.0, the video stream would only be able to occupy 400 Mbit itself.

    Extron offers their own AVoIP solution that leverages a custom developed codec. This is a recent announcement and I haven't seen it, but given the compression ratio required for 4K over 1 Gbit and every other vendor's results so far, I don't think this will change anything other than adding yet another 'standard' for AVoIP.

    Samsung purchased the Harman Group a few years ago; Harman had purchased AMX a year before that, and AMX had purchased SVSI, an early player in the video over IP segment. SVSI's products have several endpoints based around specific codecs, including M-JPEG2000 and even H.264. I've only seen the earlier generations of these products, before they picked up the AMX branding, and at 1920 x 1080 they were OK but explicitly could not do 4K. Later iterations did pick up 4K60 support but have the same quality-over-1-Gbit issues that every other vendor has. The series of acquisitions initially held promise that the technology SVSI pioneered would be implemented in Samsung's commercial/professional products (ie business displays), but it seems that Samsung is winding down parts of the Harman Group. Right now I'd avoid these, as the long term outlook is very cloudy.

    Atlona offers their OmniStream solution for AVoIP, which does do something different: it leverages the Dirac codec. It is fine for HD content over 1 Gbit Ethernet, but it can't do 4K with high quality.

    NewTek has their NDI solution for video over the network using a proprietary algorithm. Adoption is based around professional live video production. However, NewTek has opened up the spec to be royalty free to help spur adoption across other manufacturers, and they also promote plug-ins for other manufacturers' software to bring in NDI feeds for production. Also, NewTek is sane in that they recommend a 10 Gbit connection for 4K video, while HD is fine over 1 Gbit.

    SMPTE 2022 is a production video standard for encapsulating SDI video over Ethernet. It is uncompressed and requires 10 Gbit. However, 2.5 and 5 Gbit Ethernet speeds should be able to handle HD-SDI (1.5 Gbit) and 3G-SDI (3 Gbit) streams respectively. Extensions are being proposed to encapsulate 12G-SDI and the coming 24G-SDI over 25 Gbit and 40 Gbit Ethernet, but I doubt those will really go anywhere. As a production focused spec, it won't see widespread adoption in its pure form, but being from a standards body, this could be what emerges as a temporary interoperable standard between manufacturers.
    TL;DR: As far as quality goes, if any vendor claims to be able to do a high quality, low latency 4K picture over a 1 Gbit connection, they are lying to you. For just 1920 x 1080, 1 Gbit Ethernet is sufficient. AVoIP technologies that run over 10 Gbit can produce high quality 4K video, but they incur the higher costs of 10 Gbit networking. What I am personally eager to see is 2.5 and 5 Gbit networking support coming to AVoIP solutions: the compression ratio needed for 4K60 4:4:4 is much less of a quality issue at those data rates. (A rough pass at the bandwidth math is sketched below.)

    Also, while the M-JPEG2000 codec is used across multiple manufacturers, none of the solutions are interoperable with each other. Conceptually you should be able to exchange data streams, but there is exceedingly heavy vendor lock-in right now. I see this as the greatest barrier to adoption of any AVoIP solution. The market right now is oversaturated with proprietary implementations and it is a battle for market share. I do see 4K60 being the breaking point, where products stuck at 1 Gbit are going to suffer.

    Distance for any IP solution is limited by the physical medium of the IP network itself. For category cable, you can get ~330 ft (100 m) before hitting a switch. For most installs, this is fine. Fiber of course can go further, with options all the way into tens of miles between needing a repeater.
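    A minimal sketch of the arithmetic behind that claim, counting active pixels only (blanking intervals, audio and protocol overhead are ignored, so real ratios run a bit higher; the resolutions and bit depth are generic examples, not any vendor's spec):

    ```python
    # Raw video bandwidth vs. link speed: the compression ratio an AVoIP codec
    # must hit to squeeze a given stream down a given Ethernet link.

    def raw_gbps(width, height, fps, bits_per_pixel):
        """Uncompressed bandwidth in Gbit/s, active pixels only."""
        return width * height * fps * bits_per_pixel / 1e9

    streams = [
        ("1080p60 4:4:4 8-bit", 1920, 1080, 60, 24),
        ("4K60 4:4:4 8-bit",    3840, 2160, 60, 24),
    ]
    for label, w, h, fps, bpp in streams:
        raw = raw_gbps(w, h, fps, bpp)
        for link_gbps in (1, 2.5, 5, 10):
            print(f"{label}: {raw:.1f} Gbit/s raw -> "
                  f"{raw / link_gbps:.1f}:1 compression to fit {link_gbps} Gbit")
    ```

    Roughly 3:1 gets 1080p60 onto 1 Gbit, while 4K60 4:4:4 needs about 12:1 on 1 Gbit (deep into visibly lossy territory), about 2:1 to 5:1 at 5 and 2.5 Gbit, and only about 1.2:1 on 10 Gbit.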
  12. power666

    The WEIRDEST Video Card We’ve EVER Seen..

    Most installs that large don't use ordinary displays because bezels are ugly. The majority of LED video walls also leverage RJ45 connectors and category (STP CAT6A) cabling, but they are not HDbaseT either. Most are based around the NovaStar platform, with many vendors simply reselling the design wholesale. Mixing the NovaStar connections with HDbaseT or Ethernet will cause bad things to happen (TM). The NovaStar platform relies on HDMI or DVI inputs and doesn't do any sort of signal cropping (it will do scaling for the oddball LED resolutions). Chances are that you'd also want a matrix switcher or multi-view processor before the NovaStar inputs, so you'd be using more traditional video card outputs anyway. I've personally built 2688 x 1536 and 7680 x 2160 resolution walls based off of NovaStar platforms.

    The newer LED wall stuff from Barco and Christie, oddly enough, is IP based using true Ethernet and can leverage PoE to power the LED panels. Though these installs typically use data daisy chaining, so PoE, while nice, can only provide power for a handful of panels. The input side of these is still HDMI, DP, SDI or SDVoE (Ethernet), so the Advoli card would be useless. These platforms are not as mature as I had hoped, as the IP transport is designed to be on its own private 10 Gbit network, separate from all other traffic. I understand why, as the design schema relies on active-active multipathing to provide link redundancy, which would normally cause havoc on normal tree-leaf corporate network topologies.

    The exception is Planar, which makes a 27", 1280 x 720 resolution bezel-less cabinet with HDbaseT in. The cabinet has outputs for eight other cabinets to be chained from it, so you can get a 3840 x 2160 resolution across 81" from one HDbaseT run (the tiling math is sketched after this post). The neat thing is that the Quadro version of the Advoli carrier card supports genlock, and the Planar cabinet supports genlock and 240 Hz refresh if the HDbaseT run is only driving that one cabinet (no daisy chaining). LED walls are pretty crazy and advertise up to 480 Hz refresh rates but are often limited by the source connection. So it is possible to do a 42" 2560 x 1440 @ 240 Hz logical display from a single card 100 m away from the host. Unfortunately there is no G-Sync support.
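    A quick check of that Planar tiling math in Python. The 3 x 3 grid is my reading of the "one head cabinet plus eight chained" arrangement described above.

    ```python
    # Tiling 27-inch 1280 x 720 bezel-less cabinets into one logical display.

    cab_w_px, cab_h_px, cab_diag_in = 1280, 720, 27
    cols = rows = 3  # assumed: 1 head cabinet + 8 chained cabinets

    wall_w_px = cols * cab_w_px        # 3840
    wall_h_px = rows * cab_h_px        # 2160
    # Cabinets share one aspect ratio, so the diagonal scales linearly with the grid.
    wall_diag_in = rows * cab_diag_in  # 81

    print(f"{wall_w_px} x {wall_h_px} across {wall_diag_in} inches")
    ```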
  13. power666

    Our EPIC New Setup!

    It is worse than that. Samsung purchased the Harman Group a couple of years ago, which includes a slew of professional companies like Crown, JBL, AMX, SVSI, BSS, dbx and Martin, to name a few. As weird as it was, the super conglomerate was able to keep each of these wings from competing with each other, or they'd simply rebrand a product from one wing to another (if you thought AMD and nVidia rebranding was bad...). Under Samsung's leadership they are officially rather quiet, but behind the scenes it is clear that they're winding down several of these operations in favor of Samsung's own. I expect technologies from AMX and SVSI to be fully integrated into future Samsung displays, while those companies themselves become fully absorbed into the Samsung corporate juggernaut. The problem I have is that Samsung is horrible in terms of support, which is terrifying when it comes to professional systems like Crown, AMX, BSS etc. The professional market is patiently waiting to see where things ultimately fall from these mergers, but right now the wise decision is to look elsewhere in the short term.
  14. power666

    The WEIRDEST Video Card We’ve EVER Seen..

    It should be emphasized that this IS NOT Ethernet but HDbaseT. It is a different spec and leverages different signaling over category cable. The two things they have in common are the physical connector and the cabling. (But to make things perfectly confusing, there is a 100 Mbit Ethernet channel encapsulated in the HDbaseT signal.)

    Cabling matters a lot for HDbaseT. The Belden cable being used in the video is extremely high quality and necessary to go the full 100 m. It is possible to run short distances (<15 m) using unshielded Cat5e, though for an install of any kind I'd still recommend shielded CAT6A; many vendors don't officially support transmission over anything less than that.

    The power over cable is 802.3at based, for up to 30 W. The HDbaseT spec has extensions to go all the way up to 100 W over the cable, the goal being one single cable to drive a display. Various other HDbaseT vendors implement power over cable via PoE pass-through, meaning that if you want power over HDbaseT, you first have to connect a PoE source. For others, including Advoli but not the model in the video, this is also how 100 Mbit Ethernet is embedded into the cable. Offhand, I think that this Advoli card only supports two power-over-cable connections; if they wanted to support more, they'd have to add more 6/8 pin PCIe power headers. (A rough power budget sketch follows this post.)

    Not explored in the video are the RS-232 and USB extension functionality. Yep, this card will pass USB 2.0 alongside video over the same cable, making this a great long distance KVM solution. The catch is that you have to plug a USB header from the motherboard into the card for this, and it connects to a USB hub on the carrier card itself, so total USB bandwidth is limited by that one uplink (you can see this in a picture I've attached). For keyboard/mouse this is plenty, but I wouldn't expect to be able to move large files over a thumb drive quickly at the remote end. One cable can pass video, audio, 100 Mbit Ethernet (not supported on the model shown in the video), RS-232, IR, USB and power simultaneously.

    There are some command line tools that permit exploring things like signal quality, power-over-cable usage, how RS-232 is forwarded, etc. I wouldn't say they're end user friendly, but they're fine for a system administrator.

    They also make versions with nVidia Quadro MXM cards. These still have six RJ45 connectors but only four outputs; the other two connectors are for Ethernet pass-through. This is where MXM card swapping being unsupported comes into play: the AMD MXM cards have six outputs on the MXM connector, whereas nVidia only has four. Putting an nVidia card into the Advoli AMD carrier will only permit four of the six outputs to work. The other reason why card swapping is unsupported is that higher power cards could eliminate the power-over-cable outputs the card supports.

    Overall this is a nice but also niche product. Most of the use cases for multiple long range extenders also incorporate some video switching or can be just as easily accomplished by adding an external extender. The big use-cases for this are digital signage from a central host PC, absolutely the lowest video latency over distance, or fixed content displays in an operations center (SOC/NOC/IRC etc.). It appears to be a good product, but you have to weigh the need for specifically this vs. alternatives for your use-case. I've attached some pictures from when I visited the Advoli booth at last year's InfoComm trade show.
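    An illustrative power budget for that point, in Python. The 30 W per-port figure comes from the discussion above; the slot and connector limits are generic PCIe numbers, and the GPU draw is hypothetical rather than an Advoli spec.

    ```python
    # Why more power-over-cable ports would require more PCIe power headers.

    SLOT_W = 75      # generic PCIe x16 slot power budget
    SIX_PIN_W = 75   # one 6-pin auxiliary power connector
    POH_PORT_W = 30  # 802.3at-level power over HDbaseT, per port

    def poh_ports_supported(total_budget_w, gpu_draw_w):
        """How many 30 W power-over-cable ports fit in the leftover budget?"""
        return max(0, (total_budget_w - gpu_draw_w) // POH_PORT_W)

    gpu_draw = 75  # hypothetical MXM module draw
    print(poh_ports_supported(SLOT_W, gpu_draw))              # 0 on slot power alone
    print(poh_ports_supported(SLOT_W + SIX_PIN_W, gpu_draw))  # 2 with one 6-pin added
    ```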