LMG Server Closet / Infrastructure Build / Suggestion


Hello L.M.G. & Friends,

 

So today I am admittedly bored at work and I was thinking about the L.M.G. server closet. I noticed that there are a lot of systems dedicated to one thing or another, and I thought to myself, "There must be a lot of wasted resources sitting idle on most of these systems." I myself work in IT as a system admin and mostly do a lot of stuff in Amazon Web Services, but I like to think about hardware a lot too. So my thinking is: why not build a system with the idea of "pooled" resources? Instead of having a lot of individual systems each dedicated to one task, have infrastructure dedicated to doing all of it.

 

So here is what I know:

  • You have a router system
  • You have a security cam system
  • You have a storage system for video and such
  • You have transcoding systems

Instead why not have the following:

  • A main head system that is the router / NAS / Docker cluster control / VM controller
  • A security cam server to consolidate the connection and pipe the data to the NAS
  • A cluster of storage / compute machines

The main head unit could run a pfSense VM with the mainboard network hardware passed through. That would increase security, since the hardware is directly attached to the VM. All other connectivity would come off the 10Gb network cards, so when saving a file it goes straight to the storage without additional hops.
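
For anyone curious what that passthrough could actually look like, here is a minimal sketch assuming KVM/libvirt as the hypervisor; the guest name "pfsense" and the PCI address 0000:03:00.0 are hypothetical placeholders, and IOMMU/VT-d still has to be enabled in the BIOS and kernel for it to work:

```python
# Minimal sketch: hand the onboard WAN NIC directly to a pfSense guest via
# PCI passthrough. Assumes KVM/libvirt; the guest name and PCI address are
# hypothetical placeholders.
import libvirt

HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")      # local hypervisor on the head unit
dom = conn.lookupByName("pfsense")         # the pfSense router VM
# Attach the NIC to the persistent config so it survives guest reboots.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```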

 

This unit could also directly house the high-speed storage made of SSDs. From here it would also create RAIDZ pools using iSCSI devices for the larger, slower storage and for the security camera storage. Those devices come from the cluster of servers in the rack. Each cluster node can contribute to two pools with two types of drives, since the security cam storage could use a different type of drive.
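
As a rough sketch of how the head unit might assemble one of those pools (assuming open-iscsi and ZFS on Linux; the portal IPs and device paths below are hypothetical placeholders):

```python
# Rough sketch: log the head unit into each cluster node's iSCSI target and
# build one RAIDZ pool from the resulting block devices. Portal IPs and
# device paths are hypothetical placeholders.
import subprocess

PORTALS = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]  # cluster nodes

devices = []
for ip in PORTALS:
    # Discover and log into the iSCSI target exported by this node.
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", ip], check=True)
    subprocess.run(["iscsiadm", "-m", "node", "-p", ip, "--login"], check=True)
    # In practice you would resolve the real /dev/disk/by-path entry here.
    devices.append(f"/dev/disk/by-path/ip-{ip}:3260-iscsi-lun-0")

# One RAIDZ vdev across the remote devices (the bulk storage pool); the
# security-cam pool would be built the same way from the second device set.
subprocess.run(["zpool", "create", "tank", "raidz", *devices], check=True)
```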

 

This device would also operate as the cluster control and node 0 for Docker containers. I know you have lots of stuff you do there: you could have a cluster for rendering, transcoding, (maybe) parallel transcoding?, internal web services, databases, external web services, and I am guessing there are things I am not even thinking of. This could all be managed in the Docker cluster. I also know that you have things that are uniquely Windows-driven. This unit could be what hosts the OS images and triggers the launch of VMs on whatever cluster hardware is available. You could easily launch 12 or more Windows installs to do whatever you need them to do, then bring them down when you are done.
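
To show what "spawned on demand" could look like in practice, here is a minimal sketch assuming Docker Swarm with the head unit as the manager; the image name, service name, and mount are hypothetical placeholders, and the swarm scheduler (not us) picks whichever node is up and has capacity:

```python
# Minimal sketch: ask a Docker Swarm to run a transcode job somewhere in the
# cluster. Image, service name, and mount are hypothetical placeholders.
import docker

client = docker.from_env()  # talks to the swarm manager (the head unit)

service = client.services.create(
    "lmg/transcoder:latest",                        # hypothetical transcode image
    command=["transcode", "/footage/raw/ep42.mov"],
    name="transcode-ep42",
    mounts=["footage:/footage:rw"],                 # shared volume backed by the NAS
)

# The scheduler places the task on any healthy node; if that node dies,
# the task is rescheduled on another node automatically.
print(service.tasks())
```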

 

After the main unit, every other unit is just one identical cluster unit after another. Each has identical specs to the main head unit, except for the extra network connectivity. Each unit hosts two iSCSI devices: one composed of standard enterprise storage and the other for the security cam footage.

 

Another cool thing about this setup is that if you build the employee workstations with extra processing hardware, you can include them in the cluster pool. They could virtualize Windows with GPU passthrough, and all the extra unassigned CPU cores could be doing Docker cluster or other VM work.
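
A rough sketch of pulling a workstation into the pool and keeping it quiet during office hours (again assuming Docker Swarm; the join token, manager address, and node name are hypothetical placeholders):

```python
# Rough sketch: enroll a workstation as a swarm worker, then drain it during
# office hours so cluster jobs don't fight the editor for cores. Token,
# manager address, and node name are hypothetical placeholders.
import subprocess

JOIN_TOKEN = "SWMTKN-1-example-worker-token"   # from `docker swarm join-token worker`
MANAGER = "10.0.0.2:2377"                      # the head unit

# Run once on the workstation to add it to the cluster.
subprocess.run(["docker", "swarm", "join", "--token", JOIN_TOKEN, MANAGER], check=True)

# Run on the manager (e.g. from cron) to pause or resume scheduling on a node.
def set_availability(node: str, availability: str) -> None:
    subprocess.run(["docker", "node", "update", "--availability", availability, node], check=True)

# set_availability("editor-ws-01", "drain")    # work hours: no cluster jobs
# set_availability("editor-ws-01", "active")   # after hours: back in the pool
```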

 

I should note that I know you like to go full bore on design at Linus Media Group. Well, me too. That being said, and given the ridiculous cost of something like this, you know it could easily be scaled down. Like I said, I was bored and had nothing better to do than dream.

 

Example:

ltt_infrastructure.png

Thoughts? Concerns? Yes, I am crazy; no, I am not medicated; no, I am not a harm to myself or anyone around me... Yet ;-P


4 minutes ago, qops1981 said:

 

This will cost A TON of money. I'm sure most of their networking equipment was sponsored. 



1 minute ago, Abdul201588 said:

This will cost A TON of money. I'm sure most of their networking equipment was sponsored. 

Yeah, I was dreaming without boundaries, lol. But I did say it could be scaled back for sure.


I'd probably run VMs, because with Docker you're limited to one OS, and you want to have different OSes.

 

Here is what I'd run:

 

1 router (probably Cisco)

Switches as needed (probably 1x 24-port 10GbE SFP+ and 1x 1GbE RJ45)

4x (or however many needed) Dell R730s running ESXi (dual Xeons, 128 GB+ of RAM, 2x 8 GB SD cards for boot)

You could also run your compute in a blade box, but it costs more for no real benefit.

 

1x SSD SAN for VM storage and high-speed video

 

1x (or more as needed) DAS (MD1280)

1x Dell R630 to present the DAS over iSCSI


1 minute ago, qops1981 said:

Yeah, I was dreaming without boundaries, lol. But I did say it could be scaled back for sure.

My dad's workplace has 2 Dell PowerEdge servers, R910s: 4x E7-8870 Xeons and 1 TB of RAM each, with 16 TB of storage. They use them as a cluster for VMware. I forget how much the total price was.



Linus knows what he is doing. I bet he doesn't spend much, and gets lots donated.

I wish I was Linus' friend, then I could get some free shit too :)


If we're going balls to the wall here, why have two separate switches? Just go with a nice Nexus 93180 for 48x 10Gb ports and 6x 40Gb ports :)



I don't think putting the gateway router on the same physical hardware (i.e. virtualized) as the actual enterprise applications server is a good idea. If you're on the phone to the ISP and you have to make a change because something isn't working, the last thing you want to be doing is fighting with the virtualizer, especially if there's any question of whether the virtualizer is at fault. A standalone pfSense box is relatively simple to troubleshoot and connect/disconnect, and the cost of the hardware and electricity for such a machine is negligible.

 

Likewise, that big machine you propose is basically concentrating all one's eggs in one basket. If it goes down, a company basically has to shut their doors until it can be fixed and is back up and running. The 'cost' of idle capacity is largely the amortized capital cost of the hardware, as electricity scaling is actually very good these days (i.e. a lightly loaded machine uses very little of it!). And with the cost of hardware these days, that's trivial and negligible.

 

If they were a much larger business, had a dozen or a few hundred servers, and could have their system structured with a few spares ready to back up, then sure, the sort of fanciness you propose certainly can help reduce costs and improve uptime. But I'd say for an outfit like LMG, the chief concern should be keeping their IT simple enough that they can focus most of their efforts on generating revenue-creating content. This means going with systems they have expertise in (Linus freely and readily admits he knows little about Linux, which is perfectly okay if he/they can accomplish what they need to accomplish in Windows).


12 hours ago, Canada EH said:

Linus knows what he is doing. I bet he doesn't spend much, and gets lots donated.

I wish I was Linus' friend, then I could get some free shit too :)

Linus' knowledge of servers and networking is, kindly put, limited. He has already wasted some money on that server room because he didn't know what kind of adapters or brackets he would need. I can't even guess how much of the actual server hardware is a bit off because he didn't use an external advisor on it. The security system is the only part that is for sure exactly as it should be; it's from an external provider, after all.



19 hours ago, Electronics Wizardy said:

I'd probably run VMs, because with Docker you're limited to one OS, and you want to have different OSes.

How about OpenStack B| ... It just occurred to me. I have had a lot of businesses come to me asking for support in their own distributed environments. You could actually do both, I assume: Docker for all of the Linux-y stuff and OpenStack for all the Windows. - shrugs - Just a lot of fun in my own mind :D


18 minutes ago, qops1981 said:

How about OpenStack B| ... It just occurred to me. I have had a lot of businesses come to me asking for support in their own distributed environments. You could actually do both, I assume: Docker for all of the Linux-y stuff and OpenStack for all the Windows. - shrugs - Just a lot of fun in my own mind :D

It would work, but it's designed for much bigger deployments. For a small system like this, VMware would work much better and is much better supported (OpenStack doesn't support vGPU).


19 hours ago, Canada EH said:

Linus knows what he is doing.

 

BHAAHAHAAHAHAHA that's a funny one.



Yeah, the infrastructure he uses makes very little sense to me tbh. He could have used a proper enterprise router, a good virtualisation cluster, and a good SAN setup, and had a better implementation than he has now.



Great idea, but I don't think Linus Media Group would be able to implement such heavy-duty equipment in less than 2 months, if we consider the negotiations they'd be involved in. Because Linus Sebastian is known as the "Hardware on arrival destroyer", this could be a bad idea. @qops1981 what program did you use to build this massive rack system? Looks like something you would use Visio for, but there's stuff in there I couldn't find in Visio 2016 Professional :o. Or I actually did find it and I don't remember :P.



7 hours ago, VerticalDiscussions said:

what program did you use to build this massive rack system? 

http://gliffy.com

I also grabbed the image of a 24-bay rack case from the interwebs. It's a great website for quick little projects.

 

Quote

"Hardware on arrival destroyer"

Lol, yeah, I have seen a couple of videos that were amusing.


On 9/23/2016 at 4:39 PM, Mark77 said:

I don't think putting the gateway router on the same physical hardware (i.e. virtualized) as the actual enterprise applications server is a good idea. If you're on the phone to the ISP and you have to make a change because something isn't working, the last thing you want to be doing is fighting with the virtualizer, especially if there's any question of whether the virtualizer is at fault. A standalone pfSense box is relatively simple to troubleshoot and connect/disconnect, and the cost of the hardware and electricity for such a machine is negligible.

Hey Mark,

Typically when diagnosing issues, I exhaust all possible diagnoses myself before calling a support line. Most support lines don't know much anyway. Furthermore, I would never even mention it was virtualized hardware. OMG, that would just be an excuse for them to blame it on something that doesn't involve them as the source of the problem. As far as the stability of hypervisors goes, in my experience they are pretty good. I myself have never had problems outside of nailing down the initial configuration. A nice feature of virtualization is that you can just restart the guest to fix a virtualization issue, in the (in my experience) unlikely event one should occur.

 

Quote

Likewise, that big machine you propose is basically concentrating all one's eggs in one basket. If it goes down, a company basically has to shut their doors until it can be fixed and is back up and running. The 'cost' of idle capacity is largely the amortized capital cost of the hardware, as electricity scaling is actually very good these days (i.e. a lightly loaded machine uses very little of it!). And with the cost of hardware these days, that's trivial and negligible.

 

Distributed systems like this actually offer redundancy, in that any one system could go down and everything would continue to run without issue. Now, in the configuration I outline there are a couple of caveats. 1. The cluster servers that form the RAIDZ could only suffer the loss of one system; once that system came back up, the RAIDZ would need to catch that iSCSI device back up with the storage changes. 2. The main head server does not have a redundant replacement. So in that you are correct, but there is no redundancy in a single router system either. Also, the RAIDZ would be rediscoverable once the system was repaired.
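
For what it's worth, catching a returned node back up is mostly a resilver. A rough sketch, assuming ZFS on Linux with hypothetical pool, portal, and device names:

```python
# Rough sketch: when a cluster node (and its iSCSI device) comes back, log back
# into its target and let ZFS resilver the vdev. Pool, portal, and device
# names are hypothetical placeholders.
import subprocess

POOL = "tank"
PORTAL = "10.0.0.13"
RETURNED_DEVICE = "/dev/disk/by-path/ip-10.0.0.13:3260-iscsi-lun-0"

# Re-login to the node's iSCSI target, then bring the vdev back online;
# ZFS only resilvers the blocks written while the device was away.
subprocess.run(["iscsiadm", "-m", "node", "-p", PORTAL, "--login"], check=True)
subprocess.run(["zpool", "online", POOL, RETURNED_DEVICE], check=True)

# Check resilver progress.
print(subprocess.run(["zpool", "status", POOL], capture_output=True, text=True).stdout)
```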

 

So, for example, should the transcoding system go down in the existing setup, it needs to be repaired before transcoding can continue. In my setup, the transcoding system is spawned on demand on whatever system is available. Systems could go down and the transcoding job would just spawn on another machine that is up.

 

Quote

If they were a much larger business, had a dozen or a few hundred servers, and could have their system structured with a few spares ready to back up, then sure, the sort of fanciness you propose certainly can help reduce costs and improve uptime. But I'd say for an outfit like LMG, the chief concern should be keeping their IT simple enough that they can focus most of their efforts on generating revenue-creating content. This means going with systems they have expertise in (Linus freely and readily admits he knows little about Linux, which is perfectly okay if he/they can accomplish what they need to accomplish in Windows).

I think LMG is plenty large enough. I have actually seen many smaller businesses using similar infrastructures. They like the flexible virtual development environments they can create on demand.

 

I hope this is seen as just a follow-up response and my general disagreement with your stance. You could be right in the end.


  • 2 weeks later...
On 9/25/2016 at 6:02 PM, qops1981 said:

Typically when diagnosing issues, I exhaust all possible diagnoses myself before calling a support line. Most support lines don't know much anyway. Furthermore, I would never even mention it was virtualized hardware. OMG, that would just be an excuse for them to blame it on something that doesn't involve them as the source of the problem.

Then you have never rung NetApp or Commvault for support, Commvault especially. These guys know their stuff, even tier 1 support, and it's one of the reasons their products cost so much.

 

On 9/25/2016 at 6:02 PM, qops1981 said:

Distributed systems like this actually offer redundancy, in that any one system could go down and everything would continue to run without issue

The problem being pointed out was the single proposed head compute server; that is a single point of failure.

 

 

Personally, I would go with a dual-controller NetApp FAS2650 licensed for NFS and SMB, with high-performance flash cache, SAS or all-SSD storage, and a high-capacity tray. I would then add 3 ESXi servers to host all the required servers and a FortiGate 200D/300D firewall, and use whatever 10Gb switches in a stack.

 

Cost-wise, if Linus actually paid for what he has, this setup would be much cheaper, much faster, and much more reliable.

 

However, this is boring and standard and wouldn't be of any real interest to the LTT community, and it certainly wouldn't make for much interesting video content that would generate income: 1 video at most, like the Eaton UPS.


5 hours ago, leadeater said:

Cost-wise, if Linus actually paid for what he has, this setup would be much cheaper, much faster, and much more reliable.

 

However, this is boring and standard and wouldn't be of any real interest to the LTT community, and it certainly wouldn't make for much interesting video content that would generate income: 1 video at most, like the Eaton UPS.

 

Sometimes we just wish the best for LMG. Thank god Linus didn't want to do the new office repairs himself, or install the AC himself.



My guess is that some (or all) of the network hardware was sponsored with the implication that LMG "use it." Free advertising (minus the cost of the device.)

