rcm024

rcm024's Achievements

  1. With the announcement of the 7995WX, you guys should revisit "X Gamers, 1 CPU" and see just how many gaming rigs you could run off it at once.
  2. https://arstechnica.com/gadgets/2019/10/the-internets-horrifying-new-method-for-installing-google-apps-on-huawei-phones/
  3. Are you guys going to review Corsair's Virtuoso RGB SE headset? Perhaps pitted against the Sennheiser PC37X, which I've heard is a great pair of cans for the cost? I'd be interested to hear your thoughts on their "Broadcast Quality" claims for the built-in mic. At ~$200, if the quality is good, it could be a great value for someone trying to get into streaming on a budget.
  4. IIRC, they did this in a Scrapyard Wars already
  5. Make an ultra-small form factor build using this: https://www.techspot.com/news/81950-picobox-56mm-psu-plugs-directly-motherboard.html
  6. I'm sure it is, but that's not what I want to do. Sure, it doesn't "need" it, but I WANT it. Also, I already own the NVMe drives (got them on sale real cheap), so I'm not going SATA. A virtual drive is a possibility, but it still doesn't negate the need for full bandwidth, especially if I decide to stripe the array. Also, I'm not too worried about snapshots and backups. I have on-site and off-site backups in place for important files, and if something catastrophic happened to the machine, I'd just start from scratch anyway.
  7. Multi-user gaming rig (my roommate and I, plus potentially a guest) + dedicated server(s) for games and other services + maybe a router VM (this will probably end up on a different box). The SSDs are plenty fast, but I want to be able to potentially hit them all simultaneously, hence the question. Does a PLX bridge allow you to do individual device passthrough, i.e. each VM gets its own NVMe drive even though they are all on the same physical card? (See the IOMMU-group sketch after this list.)
  8. TL;DR: Is it possible to get 2x PCIe 3.0 out of 1x PCIe 4.0? With PCIe 4.0, the bandwidth available per lane has doubled, even though the lane count on AM4 boards remains essentially the same (yes, I know the X570 chipset has up to 16 additional lanes, but they're only connected to the CPU over a PCIe 4.0 x4 link). I plan on doing a build in the near future (I'm still running my 3770K, so I'm not desperate) and would like to use the AM4 platform if possible for my use case. I plan on running multiple high-performance VMs simultaneously, each with its own graphics card, NVMe drive, USB controller, etc., meaning that I need a lot of addressable PCIe lanes. So my question (as I'm not too knowledgeable on the actual workings of PCIe) is: can a PCIe 4.0 x8 interface be split out on an add-in card into 4 PCIe 3.0 x4 interfaces (without dropping the add-in card to 3.0, obviously)? My idea is that I have 4 PCIe 3.0 NVMe drives, which is equivalent to PCIe 3.0 x16 capacity, which is equivalent to PCIe 4.0 x8 capacity (see the bandwidth arithmetic after this list). Is something like that possible? I haven't been able to find anything on the market that does this, so it makes me think it's not possible, but perhaps someone who's more knowledgeable in PCIe can shed some light on the topic for me. I would love to go with AM4 (though I'll definitely be waiting for the 16-core part that we all know exists), but if this is impossible, I'll just wait for Threadripper 3000, I guess.
  9. SR-IOV is fine for things like Ethernet, but I'm planning on having at least 2, maybe 3 graphics cards, and I have 4x 1 TB NVMe drives which each take up an x4 connection and might be hit simultaneously. Like I said, I need lots of addressable PCIe lanes, hence Threadripper 3000.
  10. Usually the implementations (especially for USB 3) do use PCIe lanes. And I'm planning a home-lab-type build: multiple high-performance VMs running simultaneously, each with its own dedicated IO, which will require lots of assignable PCIe addresses.
  11. Fair enough, I suppose. It won't meet my needs in that configuration, especially since it'll also be splitting those 16 PCH lanes across SATA, USB, etc. as well. So I guess I'll just have to wait for Threadripper 3000.
  12. Yeah, that's what threw me off. I saw the slide showing "Up to 40 PCIe 4.0 Lanes" and actually got excited, and then all of the announced processors only have the normal AM4 lane count.
  13. So I have noticed that no one in the media has yet commented on the fact that AM4 with X570 will support up to 40 PCIe 4.0 lanes. That seems like huge news to me, but it does beg the question of where all of those lanes are coming from. According to the chart below by AnandTech (Source), all of the announced chips have 24 lanes available: x4 to the chipset, x16 or x8/x8 to PCIe slots, and x4 to M.2 NVMe is the usual AM4 breakdown. So where are the additional 16 lanes? Are they all on the chipset and sharing that x4 link to the CPU (see the lane-count math after this list)? Or do you think the 16-core that we all know they have coming at some point will have more lanes? I'd love to be able to take advantage of PCIe 4.0 and the other Ryzen 3000 features, but I do need more than 20 accessible lanes for my next build.
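
A note on the passthrough question in post 7: with VFIO, devices are handed to a guest per IOMMU group, so each NVMe drive behind a PLX-style switch is individually assignable only if the switch's ACS support lets the kernel put each drive in its own group. The sketch below is an illustration added here (not from the original post); it assumes a Linux host with the IOMMU enabled in firmware and on the kernel command line, and simply walks the standard sysfs layout to show how the devices ended up grouped.

#!/usr/bin/env python3
"""List IOMMU groups so you can see which PCIe devices can be passed
through to a VM on their own (VFIO assigns whole groups, not single devices)."""

from pathlib import Path

GROUPS_DIR = Path("/sys/kernel/iommu_groups")  # standard sysfs location on Linux

def list_iommu_groups() -> None:
    if not GROUPS_DIR.is_dir():
        print("No IOMMU groups found - is the IOMMU enabled in firmware/kernel?")
        return
    for group in sorted(GROUPS_DIR.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            print(f"  {dev.name}")  # PCI address, e.g. 0000:01:00.0

if __name__ == "__main__":
    list_iommu_groups()

If every drive shows up in its own group, each VM can get its own drive; if the switch lumps them together, they can only be assigned to a guest as a set.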
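
On the arithmetic behind post 8: the per-lane figures below are the commonly published post-encoding throughput numbers for PCIe 3.0 and 4.0 (not measurements from this build), but they show why a Gen4 x8 uplink is in the right ballpark for four Gen3 x4 drives.

# Approximate per-lane throughput after 128b/130b encoding, in GB/s.
GEN3_PER_LANE = 0.985
GEN4_PER_LANE = 1.969

drives = 4
per_drive_bw = 4 * GEN3_PER_LANE        # each NVMe drive uses a Gen3 x4 link
all_drives_bw = drives * per_drive_bw   # worst case: all four hit at once
uplink_bw = 8 * GEN4_PER_LANE           # the proposed Gen4 x8 slot

print(f"Four Gen3 x4 drives, all busy: ~{all_drives_bw:.1f} GB/s")  # ~15.8
print(f"Gen4 x8 uplink:                ~{uplink_bw:.1f} GB/s")      # ~15.8

So the raw bandwidth matches; the open question from the post is whether any add-in card actually pairs a Gen4 x8 upstream port with four Gen3 x4 downstream ports.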
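
And on the "up to 40 lanes" question in post 13, using only the numbers already quoted in these posts (24 CPU lanes per the AnandTech chart, plus the 16 lanes post 8 attributes to the X570 chipset): the total adds up, but everything hung off the chipset shares the x4 Gen4 uplink.

cpu_lanes = 24       # x16 (or x8/x8) slots + x4 M.2 + x4 link to the chipset
chipset_lanes = 16   # downstream of the chipset, behind that x4 link
print("Advertised platform total:", cpu_lanes + chipset_lanes)  # 40

GEN4_PER_LANE = 1.969  # GB/s per lane, approximate
print(f"Chipset uplink ceiling: ~{4 * GEN4_PER_LANE:.1f} GB/s, "
      "shared by all 16 chipset lanes plus SATA/USB")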