Crazy Nerd Idea

The Spectator

After getting some old hardware and a few other things for free, I had a crazy idea: what if I took apart a bunch of older laptops, stripped them down to their motherboards, built an array of them in a custom case, and found software that could mesh all of their processing power together into a Frankenstein computer of sorts? So I came to the LTT forum because I need some help. I can build the case myself with some CAD design and other work, but does anyone know of software that can combine multiple computers and mesh all of their display outputs, RAM/memory, processors and everything else into one functional computer?

I'm not sure if this is possible, but if you manage it, please keep us updated.

|| CPU: AMD Ryzen 5 1600 (@3.9GHz) || Motherboard: ASUS Prime B350 Plus || Cooler: Arctic Freezer 33 eSports Edition || GPU: EVGA GTX 1070 SC || Memory: 16GB G.Skill Trident Z RGB C16 (@2933MHz) || SSD: SanDisk 128GB || HDD: WD Blue 2TB, Toshiba 2TB, Transcend 1TB || PSU: Corsair RM550x || Case: Fractal Design Focus G || Monitor: 2x AOC 23” I2369VM IPS Full HD, Samsung 32" LED TV Monitor || Mouse: Logitech G703 Wireless || Keyboard: Cooler Master MK750 RGB (Cherry MX Brown) || Speakers: Dell Stereo Speakers || Headphones: Sennheiser HD 4.40 BT / Samsung Galaxy Buds ||

1 minute ago, Inversion said:

I'm not sure if this is possible, but if you manage it, please keep us updated.

Ok, will do.

That's not really how that works. You need software that supports distributed computing to make use of a setup like that. Search for distributed computing or cluster. I'm not aware of software that can combine multiple computers into a single cohesive whole.

Remember to either quote or @mention others, so they are notified of your reply

10 minutes ago, Budddy1 said:

After getting some old hardware and a few other things for free, I had a crazy idea: what if I took apart a bunch of older laptops, stripped them down to their motherboards, built an array of them in a custom case, and found software that could mesh all of their processing power together into a Frankenstein computer of sorts? So I came to the LTT forum because I need some help. I can build the case myself with some CAD design and other work, but does anyone know of software that can combine multiple computers and mesh all of their display outputs, RAM/memory, processors and everything else into one functional computer?

I know it's been done many times with Raspberry Pi computers...

CPU: Ryzen 5 5600x  | GPU: GTX 1070 FE | RAM: TridentZ 16GB 3200MHz | Motherboard: Gigabyte B450 Aorus M | PSU: EVGA 650 B3 | STORAGE: Boot drive: Crucial MX500 1TB, Secondary drive: WD Blue 1TB hdd | CASE: Phanteks P350x | OS: Windows 10 | Monitor: Main: ASUS VP249QGR 144Hz, Secondary: Dell E2014h 1600x900

1 minute ago, Eigenvektor said:

That's not really how that works. You need software that supports distributed computing to make use of a setup like that. Search for distributed computing or cluster. I'm not aware of software that can combine multiple computers into a single cohesive whole.

Ok, I'm looking into clusters already. Any help is greatly appreciated.

As said above, it's not really practical to combine a whole bunch of computers into one. Many applications still do a lot of their work single-threaded, because you can't rely on two (or more) cores finishing a given task at exactly the same time. When that doesn't happen, you have to code your program so one core waits for another before proceeding... and so on.

Now multiply this by 8 for a fairly normal desktop chip, then multiply that problem by 10 for a cluster. That's not very practical.
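The diminishing-returns point above is essentially Amdahl's law. Here is a minimal sketch; the 20% serial fraction is just an illustrative number, not a measurement of any real workload:

```python
# Amdahl's law: overall speedup is capped by the serial fraction of the
# work, no matter how many cores (or cluster nodes) you throw at it.

def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Theoretical speedup with `workers` parallel units."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# A workload that is 20% serial barely benefits past a handful of nodes:
for n in (1, 8, 80):
    print(n, round(amdahl_speedup(0.2, n), 2))  # 1.0, 3.33, 4.76
```

Even with 80 workers the speedup stays under 5x here, which is why meshing many machines into "one computer" only pays off for workloads with a tiny serial fraction.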

 

You need to find a workload that can spread across multiple processing nodes and still be effective.

A good example of that is Folding@home: a distributed network that processes tasks (work units) across many machines. Whenever you finish a task, you just get another one. That way your system is never sitting idle, waiting on others before it can continue.

 

A server cluster would be another good example: you have many machines, but each is doing its own task.

 

If you want to read more about this, read up on PS3 clusters and Raspberry Pi clusters, the only clusters that have been somewhat attainable and realistic for the average person (well, maybe PS3 clusters weren't).

https://projects.raspberrypi.org/en/projects/build-an-octapi

https://en.wikipedia.org/wiki/PlayStation_3_cluster

 

"We're all in this together, might as well be friends" Tom, Toonami.

 

mini eLiXiVy: my open source 65% mechanical PCB, a build log, PCB anatomy and discussing open source licenses: https://linustechtips.com/topic/1366493-elixivy-a-65-mechanical-keyboard-build-log-pcb-anatomy-and-how-i-open-sourced-this-project/

 

mini_cardboard: a 4% keyboard build log and how keyboards work: https://linustechtips.com/topic/1328547-mini_cardboard-a-4-keyboard-build-log-and-how-keyboards-work/

Just now, TrainFan2019 said:

I know it's been done many times with Raspberry Pi computers...

I've seen that before, but that basically uses a router to make a network-attached computer, like businesses throwing complex tasks at Amazon or other big-brand servers instead of buying thousands of dollars of hardware themselves that would only be used sporadically.

29 minutes ago, Budddy1 said:

After getting some old hardware and a few other things for free, I had a crazy idea: what if I took apart a bunch of older laptops, stripped them down to their motherboards, built an array of them in a custom case, and found software that could mesh all of their processing power together into a Frankenstein computer of sorts? So I came to the LTT forum because I need some help. I can build the case myself with some CAD design and other work, but does anyone know of software that can combine multiple computers and mesh all of their display outputs, RAM/memory, processors and everything else into one functional computer?

I believe you're referring to cluster computing. It's been done, but I don't believe it would be an easy task for the average user to 'mesh' all of the processing power together from various computers. For example, you would not be able to have all of the laptops running, say, regular old Windows 10 and combine them to create a 'super' computer without some serious code behind it. Unfortunately, it's not a simple task...

 

It would be much simpler to create something that runs parallel tasks, i.e. two PCs running separate tasks to achieve a final result. In some instances, this will double your output.

1 minute ago, minibois said:

As said above, it's not really practical to combine a whole bunch of computers into one. Many applications still do a lot of their work single-threaded, because you can't rely on two (or more) cores finishing a given task at exactly the same time. When that doesn't happen, you have to code your program so one core waits for another before proceeding... and so on.

Now multiply this by 8 for a fairly normal desktop chip, then multiply that problem by 10 for a cluster. That's not very practical.

 

You need to find a workload that can spread across multiple processing nodes and still be effective.

A good example of that is Folding@home: a distributed network that processes tasks (work units) across many machines. Whenever you finish a task, you just get another one. That way your system is never sitting idle, waiting on others before it can continue.

 

A server cluster would be another good example: you have many machines, but each is doing its own task.

 

If you want to read more about this, read up on PS3 clusters and Raspberry Pi clusters, the only clusters that have been somewhat attainable and realistic for the average person (well, maybe PS3 clusters weren't).

https://projects.raspberrypi.org/en/projects/build-an-octapi

https://en.wikipedia.org/wiki/PlayStation_3_cluster

 

Wow, thanks. I was originally thinking of making it more of a "normal" computer, in the sense that it would run Windows or even Linux, share tasks such as rendering when needed, combine all the integrated graphics on each chip, and possibly even game if it came to that.

3 minutes ago, steelo said:

I believe you're referring to cluster computing. It's been done, but I don't believe it would be an easy task for the average user to 'mesh' all of the processing power together from various computers. It would be much simpler to create something that runs parallel tasks.

Ok, I know this seems like a far-fetched idea, but on a larger or smaller scale, even recycled components could be reused in a bigger way, possibly for portable and power-efficient computing.

Just now, Budddy1 said:

Ok, I'm looking into clusters already. Any help is greatly appreciated.

Like I said, you need software that supports it.

 

For example, with web servers you can have one computer that acts as a load balancer and n computers that act as web servers. Whenever a request comes in, it is distributed to one of the web servers. This way each web server only gets a fraction of the visitors, which makes it possible to serve many more people at once than with a single machine.
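The round-robin idea behind such a load balancer can be sketched in a few lines (the backend names here are made up for illustration):

```python
import itertools

# Hypothetical pool of web servers sitting behind the load balancer.
backends = ["web1", "web2", "web3"]
choose = itertools.cycle(backends).__next__

# Each incoming request is handed to the next server in turn,
# so every backend sees roughly 1/3 of the traffic.
handled = [choose() for _ in range(6)]
print(handled)  # ['web1', 'web2', 'web3', 'web1', 'web2', 'web3']
```

Real load balancers (nginx, HAProxy, etc.) add health checks and weighting on top of this basic rotation.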

 

Databases would be another example. Most database servers support a cluster setup, either for redundancy (failover) or for performance where each node serves only a portion of requests to make use of multiple computers at once.

 

Then there are also things like ray tracing, where each node in the cluster works on one image (of e.g. a movie), which allows you to render as many images in parallel as you have computers available. Whenever a machine is done, it receives the next task from the master node, which is responsible for distributing tasks and combining the results.
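The master/worker pattern described above can be sketched with threads standing in for cluster nodes; the queue plays the role of the master's task list, and all names are illustrative:

```python
import queue
import threading

# The "master" holds a queue of tasks (here: frame numbers to render).
tasks = queue.Queue()
results = queue.Queue()
for frame in range(8):
    tasks.put(frame)

def worker() -> None:
    """An idle node repeatedly pulls the next task until none remain."""
    while True:
        try:
            frame = tasks.get_nowait()
        except queue.Empty:
            return
        results.put((frame, f"frame_{frame:04d}.png"))  # pretend render

# Three "nodes" drain the queue in parallel.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results.qsize())  # 8 completed "frames"
```

In a real cluster the queue lives on the master and workers talk to it over the network, but the pull-based structure is the same: nodes only communicate to fetch work and hand back results.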

Remember to either quote or @mention others, so they are notified of your reply

10 minutes ago, Eigenvektor said:

That's not really how that works. You need software that supports distributed computing to make use of a setup like that. Search for distributed computing or cluster. I'm not aware of software that can combine multiple computers into a single cohesive whole.

Ok, I will research this cluster idea everyone is pointing me to a little more. My idea seems more business than consumer at the moment, but we will see in the future.

2 minutes ago, Eigenvektor said:

Like I said, you need software that supports it.

 

For example, with web servers you can have one computer that acts as a load balancer and n computers that act as web servers. Whenever a request comes in, it is distributed to one of the web servers. This way each web server only gets a fraction of the visitors, which makes it possible to serve many more people at once than with a single machine.

 

Databases would be another example. Most database servers support a cluster setup, either for redundancy (failover) or for performance where each node serves only a portion of requests to make use of multiple computers at once.

 

Then there are also things like ray tracing, where each node in the cluster works on one image (of e.g. a movie), which allows you to render as many images in parallel as you have computers available. Whenever a machine is done, it receives the next task from the master node, which is responsible for distributing tasks and combining the results.

That sounds amazing. This could theoretically be applied to hosting web servers and other server tasks, correct?

Just now, Budddy1 said:

That sounds amazing. This could theoretically be applied to hosting web servers and other server tasks, correct?

Yes, that's one use case for cluster setups. Distribute load across multiple machines so that e.g. your webpage remains responsive even when you have thousands of visitors at once. From the outside it looks like a single web server, but in reality each request is sent to the next available node (e.g. round robin) so that the load is distributed evenly.

 

This works best when tasks are independent of one another, so that each computer can work on its own task without having to communicate too much with the other nodes. Otherwise network latency can quickly eat up the performance advantage.

Remember to either quote or @mention others, so they are notified of your reply

27 minutes ago, Budddy1 said:

That sounds amazing. This could theoretically be applied to hosting web servers and other server tasks, correct?

Yes, the load balancer ensures no single computer is overwhelmed by traffic. Theoretically, you could use something like this to distribute any workload evenly... instead of having one computer attempting to complete x number of tasks, waiting for resources to free up before processing the next, you can have multiple computers each processing a job. It's usually more efficient.

Another use case would be render farms. For example, let's say Avatar 2 is 120 minutes long (7,200 seconds) and runs at 48 fps. That means a total of 345,600 frames to render. If it takes about 12 hours to render a frame (which is conservative), that's 4,147,200 hours, or roughly 473 years, to render the movie on one machine.
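The back-of-the-envelope numbers above can be verified in a few lines, assuming the 12-hour-per-frame figure:

```python
runtime_minutes = 120
fps = 48
frames = runtime_minutes * 60 * fps      # 345,600 frames total
hours_per_frame = 12
total_hours = frames * hours_per_frame   # 4,147,200 hours
years = total_hours / (24 * 365)

print(frames, total_hours, round(years, 1))  # 345600 4147200 473.4
```

Spread over a few thousand render nodes, those centuries collapse into weeks, which is the whole point of a farm.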

 

The idea is, you have one computer (the master node) that knows all the frames that need to be rendered. You have a few hundred or even thousands of slave nodes.

 

Whenever a slave node is idle, it connects to the master node and asks for a frame to render. Once it is done with that frame, it hands the completed image back to the master node before receiving its next task. Each node can work independently of all the others, and network traffic only occurs when work is distributed or results are collected.

 

Blender, for example, supports this.

 

Remember to either quote or @mention others, so they are notified of your reply

17 hours ago, TrainFan2019 said:

I know it's been done many times with Raspberry Pi computers...

It hasn't. Those are Beowulf clusters. They can use multiple computers to work on a single problem given the right kind of software, but that kind of software only works on the right kind of problem, usually very specific types of math. For many things they're close to useless. Also, they're not really combined, they're just connected.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

This doesn't seem like it would be practical, but it sounds like it could be a fun side project.
