
8 (or is it 10??) Gamers, 1 CPU

LinusTech
6 hours ago, agent_x007 said:

I think it should be packed with GTX 980s or 1080s, since they would throttle less under these conditions.

Also: what's the difference between this and a mainframe PC?

A mainframe would consist of multiples of these boxes, something like 20 or 100 racks of them.

 

Also, was it 8 or 10 in the end?

Why don't you finally do NVIDIA GRID?

CPU: Intel i7 5820K @ 4.20 GHz | Motherboard: MSI X99S SLI PLUS | RAM: Corsair LPX 16GB DDR4 @ 2666MHz | GPU: Sapphire R9 Fury (x2 CrossFire)
Storage: Samsung 950Pro 512GB // OCZ Vector150 240GB // Seagate 1TB | PSU: Seasonic 1050 Snow Silent | Case: NZXT H440 | Cooling: Nepton 240M
FireStrike // Extreme // Ultra // 8K // 16K

 


1 minute ago, DXMember said:

A mainframe would consist of multiples of these boxes, something like 20 or 100 racks of them.

That would just be a rack of servers and would really stretch the definition of a mainframe; see my post above.


8 hours ago, LinusTech said:

Taking it to the next level. That's what this is all about! 8 Gamers, 1 CPU is born!

 

Vessel: https://www.vessel.com/videos/agiMyoSm2

YouTube: 

 

 

Supermicro 4028GR-TRT Barebones Server

Supermicro.com: http://bit.ly/1XJ8TMY

NCIX: http://bit.ly/1XprzRl

 

Intel Xeon E5-2699 v4 22-Core CPU (x2)

Amazon: http://geni.us/2Kgk

 

Zotac GeForce GTX 980 Ti Amp! (x8)

Amazon: http://geni.us/2Caa

 

Zotac GTX Titan X (x2)

Amazon: http://geni.us/2Vm8

 

Kingston KC400 1TB Business SSD (x10)

Amazon: http://geni.us/3c6w

 

 

Kingston 32GB DDR4 ECC RAM (x8)

Amazon: http://geni.us/3N2B

 

Noctua NH-B9 (with narrow ILM mount) (x2)

Amazon: http://geni.us/22GN

 

 

Zotac B Series Barebones

Amazon: http://geni.us/3BUk

 

Zotac C Series Barebones

Amazon: http://geni.us/1jf4

 

LG 27UD88-W 27in 4K IPS FreeSync Monitor

Amazon: http://geni.us/3Ig4

 

Lime Technology unRAID Server Pro: http://lime-technology.com/

You forgot to mention the streaming stick from "Fit PC".


Could you do the same kind of thing, but running VR, to see how many headsets you could run at a time? Wait until the 1070/1080 if you want.

Link to comment
Share on other sites

Link to post
Share on other sites

28 minutes ago, Nardella said:

That would just be a rack of servers and would really stretch the definition of a mainframe; see my post above.

What you described are... well... THE mainframes.

What I described is a mainframe.

When all the racks work on the same task at once, it's a mainframe; when they're individual servers, they're just shelves of servers.



I would like to see a setup with more modest specifications for arcades and other such applications.

 

Other than that, give the tower a proper name please. :P

 

Watching video now. Will add in here if needed.

 

Edit: It seems Linus shares my thoughts on future computing as well. I'd always written off internet-based cloud gaming for latency and quality reasons; however, the concept would certainly work as a home-based cloud server, perhaps managed and set up the way one would an air conditioner. Surely the additional power could provide another gigantic push in visual fidelity while simplifying the modern home.

 

The biggest obstacle here is price, though if boards were to become more readily available, one could make do with a single 22-core Xeon (or a lesser chip) instead.

My eyes see the past…

My camera lens sees the present…


Hey Linus,

 

Check out Teradici, which has an office in Burnaby. The company has PCoIP solutions that may be of interest for further improving this project. I've seen setups where a whole office uses only thin clients, with all the hardware sitting in a data centre, and it actually works. It would be interesting to see this with gaming and not just office computers.


5 hours ago, DXMember said:

mainframe will consist of multiple of these boxes like 20 or 100 racks like that.

isnt "mainframe" a rather old concept?

i mean, its still around, but its mostly just part of the datacenter infrastructure now i guess.


1 hour ago, MrRoboCarrot said:

So what were the Zotac boxes for?

Clients.


@LinusTech I'm wondering why the Zotac "thin" clients are necessary in this case. Why can't you go HDMI over Cat6 using HDBaseT transmitters/receivers? As for the USB devices for each VM, there are a number of USB-over-Ethernet solutions as well. This would just require 2x Cat6 home runs to each monitor and keyboard/mouse.

 

It would be nice if, at each mounted LCD/TV, you could also have the USB and HDMI receivers mounted behind it, creating a super-slick setup for 10 other locations.

 

HDMI matrix switches that provide PoE to the receivers are nice too; Atlona makes a good one. Any thoughts on this, anyone, or am I just being hopeful?


HDMI over Cat6 would bypass the need for Windows 7+ and Steam In-Home Streaming. You would no longer be relying on your network bandwidth. Rather, your connection would run from the host GPU via HDMI --> HDBaseT transmitter --> patch panel via Cat6 --> Cat6 drop --> HDBaseT receiver --> HDMI to the LCD/TV. USB extension would work similarly.

 

Again, if you use an HDMI matrix switch (e.g. 10x10), you could get a very clean setup to all the receivers you want.

 

@LinusTech I hope you reconsider your final thoughts, as I think HDBaseT and USB over Cat6 would solve the drawbacks described in the video. I'm actually 3/4 of the way through my own 3 HTPC + NAS, 1 CPU project inspired by your videos. It would be awesome to see the 10 Gamers, 1 CPU build come out flawless and without issues. I'm a VMware vSphere datacenter sysadmin with 1300+ VMs and 40+ hosts, and I come back to your videos every day for enjoyment!


Wow, another ridiculously budgeted build that will literally never get made outside of a YouTube video and will be dismantled shortly after. I'm so done with these. LTT quality has gone way down since I started watching. I'm unsubscribing after I dislike this video. This type of video may impress the 12-year-old kids you cater to, but it's gotten old for me.


R E H A S H E D C O N T E N T 

If you want to join a really cool Discord chatroom with some great guys here from LTT and outside this community then PM me!


Wish there was more in-depth coverage of the setup and the work involved. My personal rig runs Arch Linux; due to Skylake IOMMU issues, I'm running the VFIO kernel with the ACS override patch, using KVM. I have a Windows VM that I game in on it. It runs The Division, Fallout 4, and others great, with no issues at near-max settings. The video card is a 970.
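
Since people asked for more depth: here's a minimal sketch of the host-side config for that kind of VFIO passthrough on Arch. The PCI IDs below are placeholders for a GTX 970 and its HDMI audio function (find yours with lspci -nn), and the ACS override parameter only does anything on a patched kernel such as the AUR's linux-vfio:

    # /etc/default/grub -- enable the IOMMU; the ACS override splits up stubborn IOMMU groups
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pcie_acs_override=downstream"

    # /etc/modprobe.d/vfio.conf -- have vfio-pci claim the guest GPU before the host driver does
    options vfio-pci ids=10de:13c2,10de:0fbb

    # /etc/mkinitcpio.conf -- load the VFIO modules early in boot
    MODULES="vfio vfio_iommu_type1 vfio_pci vfio_virqfd"

Rebuild the initramfs and reboot; lspci -nnk should then show the card bound to vfio-pci, ready to be handed to a KVM/QEMU guest.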

 

And for the record, unRAID, as Linus said, uses KVM. I tend to think of it as a specialized distro that does just about all of the work and setup for you. It's great for beginners and those who don't want to put time into getting things working. I was tempted to try it out myself, but in the end I like my config better than what I would have gotten from unRAID.

 

And I am also curious about the HDMI dongle.


Does anyone else find it funny that Linus' vision for the future is how things were done in the 70s? :P  Seriously, we used to have one big central computer at the heart of <insert local area (business, home, etc.) here>, and a bunch of super-lightweight "PCs" (terminals) would connect to it. Then we moved to having every computer be its own powerhouse, but I guess Linus wants to go back to that other model xD  I can see his point about it potentially being a better use of resources and money, what with being able to divert power where needed as needed instead of having a bunch of gaming machines idling all the time, but there are problems with this idea too, and I think he ran into most of them in the video: cooling this 10-card super beast, and latency/connection flakiness.

 

Just to be clear, I did like the video, and I think this is a far more practical system than the 7 gamers around one tower, especially since it could be scaled down for us "normal people", but I couldn't help but be amused by the fact that we seem to have come full circle 50 years later xD

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


I want to test my APU vs. your 44 cores + 8x 980 Ti + 2x Titan X xD


On 5/22/2016 at 11:45 AM, DXMember said:

What you described are... well... THE mainframes.

What I described is a mainframe.

When all the racks work on the same task at once, it's a mainframe; when they're individual servers, they're just shelves of servers.

I suppose one could, if one wanted to be pedantic like that. But I don't really hear people in data centers using the term "mainframe" to describe a rack full of commodity hardware, no matter what it is doing. I have also never seen a rack of commodity servers referred to as a mainframe in any marketing material anywhere.

On 5/22/2016 at 5:08 PM, manikyath said:

isnt "mainframe" a rather old concept?

i mean, its still around, but its mostly just part of the datacenter infrastructure now i guess.

Reading up on mainframes like the IBM z13 is pretty cool and interesting. They look very little like a rack full of servers, or like anything you will find in most data centers.

Everything, or almost everything, in a mainframe (as I define them; really, the definition could be stretched to include a pair of Raspberry Pis on the same network, each working on a different part of a program) is custom made and optimized. You can't simply order parts off of Amazon to upgrade these things, except maybe RAM or mass storage. In my earlier post I detail some of the things that are special about products actually marketed as mainframes. Here are some highlights:

 

Mainframes are redundant in every way possible. The CPUs are often custom designed for mainframes; they will actually execute each instruction twice in parallel and redo anything that doesn't match, all without a performance hit.

 

Often the CPU is a board several inches to a side that carries dozens of dies, all interconnected.

 

Everything on many mainframes is hot-swappable: power supplies, fans, storage, PCIe cards, CPU/RAM combo units. If it can fail or be upgraded, it can probably be swapped out while the machine is running.

 

 


On 23/5/2016 at 4:02 PM, Ryan_Vickers said:

Does anyone else find it funny that Linus' vision for the future is how things were done in the 70s? :P  Seriously, we used to have one big central computer at the heart of <insert local area (business, home, etc.) here>, and a bunch of super-lightweight "PCs" (terminals) would connect to it. Then we moved to having every computer be its own powerhouse, but I guess Linus wants to go back to that other model xD  I can see his point about it potentially being a better use of resources and money, what with being able to divert power where needed as needed instead of having a bunch of gaming machines idling all the time, but there are problems with this idea too, and I think he ran into most of them in the video: cooling this 10-card super beast, and latency/connection flakiness.

 

Just to be clear, I did like the video, and I think this is a far more practical system than the 7 gamers around one tower, especially since it could be scaled down for us "normal people", but I couldn't help but be amused by the fact that we seem to have come full circle 50 years later xD

No, I think it's a pretty good idea. Of course, running 10 high-end gaming computers is far from ideal (for temps, like you said), but who would really do that in a home setup? Do you really need 10 gaming PCs at home?

 

I can imagine doing it for a far more modest home setup. For example: get a computer that can run 2-3 VMs for gaming/desktop applications, 1 for an HTPC, and 1 as a NAS for shared files (or maybe some other way of sharing the drives). Use the PCIe slots for 5 graphics cards (of course, not all high end) and 5 Thunderbolt cards, so there's no need for "client" machines, only a Thunderbolt dock like Linus used in his personal setup... and there seem to be 30 m (98 ft) cables available, so that's a cool possibility (given a relatively small home/condo with perfect placement of the screens around the machine, maybe with wires run in perfectly straight lines through the walls).

 

The only things I would like to add are the ability to dynamically share the resources (1 VM open, it gets all the CPU cores; 2 VMs open, each gets half...) and maybe the ability to log a VM in to a specific screen. Then everybody in the house would have their own VM with their own Windows (and their own settings) and could use it on the screen of their choice. That would even remove the need for the HTPC VM, since you would only have to log in with your VM (assuming you use only one at a time, but that makes sense). The second point could be done in a ghetto way, I'm sure (manually plug the Thunderbolt wire you want to use into the port you want); a rough sketch of the first point is below.
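
For what it's worth, KVM/libvirt can already approximate that dynamic sharing from the host side. A sketch, assuming VMs named gamer1 and gamer2 (hypothetical names) that were defined with a high maximum vCPU count:

    # gamer1 is the only VM running: give it all 8 vCPUs
    virsh setvcpus gamer1 8 --live

    # gamer2 starts up: rebalance to half each
    virsh setvcpus gamer1 4 --live
    virsh setvcpus gamer2 4 --live

    # memory can be ballooned the same way (up to the domain's maximum)
    virsh setmem gamer1 8G --live

The live vCPU count can only go up to the maximum defined for the domain, and the guest OS has to support CPU hotplug, so it isn't fully automatic; still, a small script watching which VMs are running could get close to the "divert power where needed" idea.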

 

I would like to see a breakdown of the cost of a similar setup: which is cheaper, 3 gaming computers plus 1 HTPC and 1 NAS, or a single machine running VMs?


Haha, Fit PC quoted Linus' words and put them up as a product testimonial.

 

 

This has been done in mission-critical places.

 

It's pretty much a server rack full of mainframes, with the KVM duties handled by Teradici thin-client boxes that feed video and audio from the PCs back to the user over Ethernet connections.

 

I worked for Lucasfilm Singapore, so I kind of know how these systems work in the background.

 

http://www.teradici.com/

 

Linus' idea is not new, but cost is still the limiting factor here.

 

That is, unless he has millions of dollars he can just sign over to fund, implement, and deploy a system like this, the way Lucasfilm, DreamWorks, or Disney/Pixar can.

 

 

 

[Attached image: linus.PNG]

Budget? Uses? Currency? Location? Operating System? Peripherals? Monitor? Use PCPartPicker wherever possible. 

Quote whom you're replying to, and set option to follow your topics. Or Else we can't see your reply.

 


So if I'm thinking about this right... the thin PCs just remote-desktop into the main server's VMs, correct? (Sorry, having a brain fart.)

