
Central computer + terminals in office instead of individual PCs, questions/suggestions?

Luciel

Hi all,

At work we're considering getting rid of individual machines with individual OSes, etc., for security and functionality.

Now, today we tried something very interesting which was the following:

- One central machine (let's call it the Server) with Ubuntu MATE installed.
- Then, the terminal (i.e. one of the workers' machines) would connect to its desktop through local SSH tunneling and then run mate-session (basically the equivalent of X11 tunneling); a rough sketch of the command is below.
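
For reference, each terminal is doing roughly the following (the user and server names are just placeholders):

# -X enables X11 forwarding and -C compresses the stream, which helps a little
# on slower links; "alice" and "server" are placeholder names.
ssh -XC alice@server mate-session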

This works nicely. The problem is, the moment you have a couple of machines connecting to their desktops, there appears to be a serious network bottleneck. This is a shame, because we're loving the concept of centralized user account management, software management and ease of backup.

The network bottleneck is, I guess, to be expected, as the server is essentially streaming each user's entire desktop and every interaction.

In an ideal world, the terminal would connect to the server, and the server would not stream the desktop itself but rather give the terminal access to the necessary files and let the processing and computing be done by the terminal. In my mind this would allow a lot more terminals before hitting a network bottleneck, but I don't know if anything like this exists.

Any ideas? Are we going about this the wrong way?

We would need 12 terminals to be able to connect at the same time and work as if they were their own desktops: the equivalent of something like Citrix, but at a LAN level. Hardware isn't an issue, we can invest as needed; it's more about knowing what we need. We would also like to stick to Linux.

Thanks in advance!


Ah, we've come full circle.  It's like the 70s/80s all over again xD

 

Considering you still need a physical machine on every desk regardless of whether the system they're using runs on that machine or on a mainframe they connect to, I feel like it would make sense to just stick with individual computers.


Cheers for the reply, Ryan!

 

I'm 50/50 on this, to be entirely honest, and I definitely see the pros and cons of each. After seeing the initial tests, however, I really did like the SSH tunneling method, but I was swiftly let down by slowdowns due to the network bottleneck and promptly figured something wasn't quite right. Why would this work so nicely but be let down so hard by network constraints? Surely there must be a better way of doing this.

 

Alas, none of us at the office has any experience with this kind of infrastructure, and yet we're all equally curious :)


Would it solve your management problem to boot from the network but use the local processing capabilities?


Just now, FredyBobJoe said:

Would it solve your management problem to boot from the network but use the local processing capabilities?

That seems like the ideal scenario
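
Something like a PXE/network-boot setup, I suppose. A very rough sketch of the server side (untested on our end; the subnet, paths and package names are assumptions for Ubuntu), using dnsmasq in proxy-DHCP mode alongside the existing DHCP server:

# Install dnsmasq plus a PXE bootloader (package names and paths may differ per distro).
sudo apt-get install dnsmasq pxelinux syslinux-common
sudo mkdir -p /srv/tftp
sudo cp /usr/lib/PXELINUX/pxelinux.0 /srv/tftp/

# Run dnsmasq as a proxy-DHCP + TFTP server only (--port=0 disables its DNS side);
# 192.168.1.0 is a placeholder for the office subnet.
sudo dnsmasq --port=0 \
             --enable-tftp --tftp-root=/srv/tftp \
             --dhcp-range=192.168.1.0,proxy \
             --pxe-service=x86PC,"Network boot",pxelinux

That only gets clients as far as a bootloader; a full diskless setup (LTSP-style) would need a kernel plus an NFS/NBD root on top of this, with the computing still done locally.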


I imagine you would be able to get a basic computer for about the same price as a network hub... It sounds to me that the ability for people to keep working (even in a limited fashion) while the main computer is offline would be a good thing to have.


30 minutes ago, Luciel said:

--SNIP--

What kind of network equipment do you have? Given that you are facing network bottleneck issues, this is important information.

 

What kind of network switches are you using, and at what speeds? Links between switches also need to be very quick, and depending on the number of systems you're planning to use, I would recommend AT LEAST a 10G connection from the switch to the server, and between switches.

Do you have routers in the mix? Routers are slower than switches due to the lack of ASICs, so I wouldn't put any of those between the systems and the server.

 

*I have never done a setup like this, so I am unaware of the requirements, but I am still happy to come up with ideas

 

 

*EDIT*

 

"In an ideal world what would happen is that the terminal would connect to the server and the server would not stream the desktop itself but rather allow the terminal access to the necesary files and let the processing and computing be done by the terminal. This in my mind would allow a lot more terminals before network bottleneck but I do not know if anything like this exists."

 

This sounds like a normal PC connecting to a server. Am I missing part of this? lol


30 minutes ago, Luciel said:

--SNIP--
In an ideal world, the terminal would connect to the server, and the server would not stream the desktop itself but rather give the terminal access to the necessary files and let the processing and computing be done by the terminal. In my mind this would allow a lot more terminals before hitting a network bottleneck, but I don't know if anything like this exists.

Any ideas? Are we going about this the wrong way?
We would need 12 terminals to be able to connect at the same time and work as if they were their own desktops: the equivalent of something like Citrix, but at a LAN level. Hardware isn't an issue, we can invest as needed; it's more about knowing what we need. We would also like to stick to Linux.
Thanks in advance!

You've got the right idea, but are sort of going about it the wrong way, yes. What you want is a central server that stores all the critical company data: user data, email (if not going with properly hosted options like hosted Exchange), programs for installation, and/or company policy management for the remote clients connecting to the server. You'd still have individual PCs that would run full desktop OSes and have hardware capable of running the software that users need.

 

Users would log on to the clients as if they were normal computers, but what they'd actually be doing is authenticating remotely with the server, which would then either map their storage drives as network shares on the client computer or create a local copy of the files inside their user folder. Programs required for daily usage would ideally be installed and auto-configure themselves for each user upon launch, using the same credentials users log in with. Alternatively, you could also choose to only install the base software that most users will use, and then allow privileged users to remotely auto-install extra software via the server's app management platform.
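
On the Linux side, the "map their storage drives as network shares" piece could be as simple as an NFS export plus a per-user mount. A minimal sketch with made-up hostnames and paths (in practice something like pam_mount or autofs would do the client-side mount at login):

# On the server: export the home folders (placeholder path /srv/homes) over NFS.
sudo apt-get install nfs-kernel-server
echo "/srv/homes 192.168.1.0/24(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra

# On a client: mount one user's folder by hand (an automounter or pam_mount
# would normally do this automatically when the user logs in).
sudo mount -t nfs server:/srv/homes/alice /home/alice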

 

The entire idea here is to use real computers running good hardware with a full OS for the client PCs, which your users log in to like normal, but behind the scenes they are actually authenticating with the server that stores their files, settings and emails, and manages the client PCs. In the Microsoft world this would be achieved using Active Directory + Exchange running on top of Windows Server 2012 R2, with the client PCs domain-joined; on the Linux side I know of LDAP and... well, there's a reason I chose web development and general IT consulting for a trade. I'd rather leave this overly complicated proprietary crap to someone else, similarly to how I refuse to even think about getting my CCNA.

 

Anywho, this is usually how most public schools are set up: real client PCs with fully installed apps that users log in to through a remote server for authentication and mapping of their H:\ storage drives. Lots of companies do it this way too. That being said, you can certainly attempt to roll your own Citrix XenApp-style streaming desktop system, but in my own personal experience it's going to tank unless you've got the budget to upgrade the network as well.


While the best user experience still lies with full-OS client PCs, as noted in the posts above, if you are really gung-ho about a modern version of a terminal server, then look into TrueOS https://www.trueos.org/ and especially TrueOS Pico https://www.trueos.org/trueos-pico/. TrueOS is from the folks behind FreeNAS; it is a BSD-based desktop/server operating system.


I can just say: don't do it. There is a reason just about every big company is moving away from this.

 

We are replacing 400 thin clients with PCs by summer next year so we can get rid of that bullshit.

 

PCs are so cheap these days that there is absolutely no reason to go for thin clients. Get yourself a proper management system to manage the PCs instead of going back two decades.


As much fun as it sounds, I wouldn't home-lab an actual workplace. In other words, don't use Linux for a central server.

Yes, it's cheap compared to Windows, but the de facto industry standard is Windows. Yes, it sucks; live with it.

 

Going with full PCs, a terminal server, VDI or a hybrid environment really depends on the characteristics of your users and their work.

Whatever you choose, always go with central management of credentials and data storage (backup, backup, backup!).

 


On 2017-10-25 at 5:40 PM, Luciel said:

--SNIP--

Here’s my suggestion:

 

First, keep individual PCs. You don't want the server going down to affect all users completely.

 

I would get a centralized server and run Active Directory (or an open-source equivalent if you want).

 

I would create user accounts in AD for every employee. 

 

Then, on the server, create a file share, and use AD Group Policies to map the file share to each user. Bonus points if you redirect documents to a separate home folder for each account. 

 

You can configure security, etc, for each user account and for different folders on the file share. 

 

That way, all data is still stored on the server (and thus a central backup point). 

 

But also, if the server goes down, users can still log in with locally cached credentials on each PC.
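
For the open-source route, the "locally cached logins" part maps to sssd's cache_credentials option. A minimal client-side sketch, assuming LDAP authentication through sssd (the domain name, server address and base DN are placeholders):

sudo apt-get install sssd

# Placeholder values below: "office" (domain), server.example.lan, dc=example,dc=lan
sudo tee /etc/sssd/sssd.conf > /dev/null <<'EOF'
[sssd]
services = nss, pam
domains = office

[domain/office]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://server.example.lan
ldap_search_base = dc=example,dc=lan
# Keep a local copy of credentials so users can still log in if the server is down.
cache_credentials = True
EOF

# sssd refuses to start unless its config is owned by root and not world-readable.
sudo chmod 600 /etc/sssd/sssd.conf
sudo systemctl restart sssd

There's more to a real client than this (PAM/NSS wiring, home directory handling), but that option is what keeps logins working when the server is unreachable.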


  • 4 weeks later...

Hi Everyone,

 

Just thought I'd come back and let you know what we ended up doing:

- We used Zentyal on top of Ubuntu 16.04 Server for the server.
- Users log in to their accounts through LDAP (roughly the Linux equivalent of Active Directory) from their desktops (Linux Mint). It looks just like logging in to a local account, but in the background the credentials are actually checked against the server, not locally.
- Once they log in, their PC mounts their server home folder automatically. This way all processing is done locally, but their settings and files are stored on the server. There's some "lag" when logging in, but once logged in, it's not much different from a conventional desktop experience.

 

So basically it's not terminal clients: each computer does its own workload, but settings are kept elsewhere. This makes it really easy for any authorized user to log in to their account from any PC in the office. It also makes deployment of new desktops considerably easier and allows the server to be pretty low-spec. As a side note, all home folders are individual, meaning only the logged-in user can see their own files.
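
For anyone wanting to replicate this, a couple of quick sanity checks on a freshly configured desktop (the username is a placeholder):

# If this prints a passwd-style line for an account that doesn't exist locally,
# the client is resolving users from the server correctly.
getent passwd alice

# Group memberships should come back from the directory as well.
id alice

# After logging in as that user, confirm the home folder really is the
# server-side one (assuming it's mounted over NFS or similar).
mount | grep /home/alice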

 

(Very similar to what @dalekphalm suggested).

 

Thanks again, everyone, for your input! :)


On 26/10/2017 at 8:40 AM, Luciel said:

We would need 12 terminals to be able to connect at the same time and work as if they were their own desktops: the equivalent of something like Citrix, but at a LAN level. Hardware isn't an issue, we can invest as needed; it's more about knowing what we need. We would also like to stick to Linux.

Looked at AADS?


2 minutes ago, Windspeed36 said:

Looked at AADS?

We did, but we ended up deciding against a terminal-server-style solution and geared more towards an Active Directory-based solution. We also wanted to stick with a Linux-based solution for servers and nodes (desktops) if possible.


  • 1 month later...

Just thought I'd add a follow-up. At some point in the new year I may upload a full walkthrough, as this has expanded quickly. So, we started with LDAP authentication (Linux's rough equivalent of Active Directory) via Zentyal, which is also fully compatible with Windows. That means we've been able to add Windows-based VMs for certain uses while maintaining a centralized system for logins. We've also made use of this for ownCloud, which we have hosted on Amazon S3 (to replace the Dropbox we used to use), plus Hamachi as our easy-to-use VPN (as some members of staff will have to work remotely on occasion).
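
As a side note for anyone following along, plain ldapsearch is a handy way to poke at the Zentyal directory from any Linux box. A sketch with a placeholder host and base DN (Zentyal shows the real values in its admin UI; depending on the server's ACLs you may need to bind with -D/-W rather than query anonymously):

sudo apt-get install ldap-utils

# Simple-auth (-x) search for one user's entry; replace the host, base DN and uid.
ldapsearch -x -H ldap://zentyal.example.lan \
           -b "dc=example,dc=lan" "(uid=alice)"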

 

We're now working on integrating PMP (a password manager) with LDAP and Zentyal. It's definitely been, and continues to be, an interesting journey.

