
Help Please: Building Web Server

Go to solution: Solved by SupaKomputa.

Greetings. I need help deciding on hardware for a server I have to build. The plan so far is this:

Internet <-> Node 1 <-> Node 2

Node 1
Firewall -> Kubernetes / Docker -> Reverse Proxy -> Apache or nginx

Node 2

Firewall -> MySQL
(only responds to requests directly from Node 1, MySQL might also run in containers)

This server will be used to host a proof-of-concept for my client, and is not a permanent solution, but it needs to be as secure as possible and scale up to 50 simultaneous users who could potentially all be accessing video (8,000-12,000 kbps bitrate). Both nodes will be running Linux, probably Ubuntu Server. I will be accessing the server nodes remotely through SSH, and Node 2 should physically only be connected directly to Node 1. But if the server becomes unresponsive and requires on-site access, I suppose I will need a cheap video solution for that. I will probably try to build the two computers into a single case, but I'm very open to suggestions.

I do need to save as much money as possible. To start with, the server will likely be on a consumer 600 Mbps connection, but it might move up to a business-grade line later if we start maxing out the connection or if the ISP complains. Ultimately, my client is aware that I am a first-year college student, but this is a serious project. I want the right hardware for the job and don't want to just go picking things at random.

Any input on where I can start? Thank you in advance. :)


Why don't you just rent a cloud server? Build when the project is funded.

If you're not encoding / decoding videos, 50 users at a time is not that much; an 8-core CPU can handle that.

50 × 8 Mbps = 400 Mbps, which is not much.

You will usually get a 1 Gbps connection for free.
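That back-of-envelope estimate is easy to sanity-check in a shell (the 8 Mbps per-stream figure is the assumption here):

```shell
# 50 simultaneous viewers at an assumed 8 Mbps per stream.
streams=50
mbps_per_stream=8
total=$((streams * mbps_per_stream))
echo "${total} Mbps total"   # prints "400 Mbps total"
```

At 12 Mbps per stream (the top of the quoted bitrate range) the same math gives 600 Mbps, which would fully saturate a 600 Mbps consumer line.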

Ryzen 5700g @ 4.4ghz all cores | Asrock B550M Steel Legend | 3060 | 2x 16gb Micron E 2666 @ 4200mhz cl16 | 500gb WD SN750 | 12 TB HDD | Deepcool Gammax 400 w/ 2 delta 4000rpm push pull | Antec Neo Eco Zen 500w


41 minutes ago, MegamanXGold said:

I will be accessing the server nodes remotely through SSH, and Node 2 should physically only be connected to Node 1 directly. But if the server becomes unresponsive and requires on-site access, I suppose I will need a cheap video solution for that. I will probably try to build the two computers into a single case, but I'm very open to suggestions.

 

If you have a requirement that they be "physically connected", why not get a single server and virtualise both Ubuntu Servers?

What benefit are you considering two servers for?

 

I believe Ubuntu Server comes with UFW by default, which you can set up with a Deny All policy, and then only allow ports like MySQL's 3306 to a specific IP address, and only allow ports like SSH to a specific subnet.
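A minimal sketch of that UFW policy, assuming Node 1 sits at 10.0.0.1 and the admin machines live on 192.168.1.0/24 (both addresses are hypothetical):

```shell
# Default-deny everything inbound, allow outbound.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# MySQL (3306) only from Node 1's address (hypothetical 10.0.0.1).
sudo ufw allow from 10.0.0.1 to any port 3306 proto tcp

# SSH (22) only from the admin subnet (hypothetical 192.168.1.0/24).
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp

sudo ufw enable
```

Make sure the SSH rule is in place before running `ufw enable`, or you can lock yourself out of a remote session.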

Spoiler

Desktop: Ryzen9 5950X | ASUS ROG Crosshair VIII Hero (Wifi) | EVGA RTX 3080Ti FTW3 | 32GB (2x16GB) Corsair Dominator Platinum RGB Pro 3600Mhz | EKWB EK-AIO 360D-RGB | EKWB EK-Vardar RGB Fans | 1TB Samsung 980 Pro, 4TB Samsung 980 Pro | Corsair 5000D Airflow | Corsair HX850 Platinum PSU | Asus ROG 42" OLED PG42UQ + LG 32" 32GK850G Monitor | Roccat Vulcan TKL Pro Keyboard | Logitech G Pro X Superlight  | MicroLab Solo 7C Speakers | Audio-Technica ATH-M50xBT2 LE Headphones | TC-Helicon GoXLR | Audio-Technica AT2035 | LTT Desk Mat | XBOX-X Controller | Windows 11 Pro

 

Spoiler

Server: Fractal Design Define R6 | Ryzen 3950x | ASRock X570 Taichi | EVGA GTX1070 FTW | 64GB (4x16GB) Corsair Vengeance LPX 3000Mhz | Corsair RM850v2 PSU | Fractal S36 Triple AIO | 12 x 8TB HGST Ultrastar He10 (WD Whitelabel) | 500GB Aorus Gen4 NVMe | 2 x 2TB Samsung 970 Evo Plus NVMe | LSI 9211-8i HBA

 


Those are both very good questions, thank you.

 

@SupaKomputa
Is it really that much better a solution than my client buying their own hardware?

 

Honestly, because I'm familiar with Linux and hosting some simple web services myself. I've seen AWS at a distance, but I don't know if I have time to learn all that. My experience with AWS is the instructor showing us his AWS Services on a projector. We weren't tested on any of that.

If I go with cloud then I really don't know what the costs involved are, and if it ends up being free then what's the catch? I'm a bit skeptical.

How easy is it to screw something up with a cloud service? I have experience setting up web services on Docker with firewalls from my first-year classes. If I put in one wrong setting on a cloud server and every video file becomes open to anyone typing in its URL, that would be really bad for me.

 

But am I just being dumb not to go with cloud?


@Jarsky

It is my impression that having a separate physical server for the database is just good practice. Of course there would be regular back-ups, though, and I can't really justify that more than it just feels safer in terms of hardware failure risk. Also, I would hope that I could build a second (db only) server for less than the price of the VMWare business license ($576.96 USD), unless there's a solution that doesn't have license fees for business.


4 minutes ago, MegamanXGold said:

It is my impression that having a separate physical server for the database is just good practice. Of course there would be regular back-ups, though, and I can't really justify that more than it just feels safer in terms of hardware failure risk. Also, I would hope that I could build a second server for less than the price of the VMWare business license ($576.96 USD), unless there's a solution that doesn't have license fees for business.

 

@MegamanXGold

 

It's not really an issue, since you can create separate storage to maintain IOPS, and you can allocate hardware reservations for CPU/memory in the hypervisor.

 

I assume the licensing you're talking about is Essentials? You only need that if you have more than one physical machine, so you can run a vCenter Server Appliance (VCSA) to do advanced functions. One way of getting the full suite is a VMUG subscription, which is $200 per year. For one physical machine, you can just use the free ESXi license.

 

Rather than getting two lower-end machines, you could get a server like an R720 with a couple of E5 Xeons. They also include a proper RAID controller, which would be beneficial for VMware datastores, and IPMI so you can remote into it if you have a catastrophic failure.



As I mentioned, the resources needed to serve 50 users are not that big. A 4-core/8-thread CPU could handle that (including the DB server).

I say just rent a server for 1-2 months, play with it, and if you're not satisfied with the performance, that's the time to think about cloud.

If I were you I would just set up a bare-metal native LAMP server; it's better than Docker in terms of raw performance, since Docker runs your app in a container.

If you don't have any problems running a server from a European DC, there are plenty of cheap ones here: https://www.hetzner.com/sb



Some more information:

I have a hard deadline of August 15th to completely develop the site with a working database and ability for my client to upload video and audio files which will be shared with his users. Sadly, with my time constraints, I need to make a decision and live with it.

 

Currently my project will max out at 50 users, but it may get moved to an existing larger server with thousands of users, and I will no longer be the developer if this happens. I have to guess what that server is like because my client doesn't have access to that information (large company). It is my understanding that Kubernetes and Docker are rather industry standard. Everything I develop has to be capable of being moved.

 

Cloud services, instead of building a server, are not out of the question but I have zero usable experience there. And unpredictable costs will not fly.

 

---------------

Assuming I still build my own server, but make it a single Linux box running two Ubuntu Server VMs, one using Kubernetes/Docker for web serving and the other strictly for the database, what hardware could I go with?

Should I be rocking a Ryzen 5 with 16 gigs of ECC RAM? Or should I stick to a Xeon platform? SSD for the reverse proxy cache and WD Red drives for large video file storage? What kind of drive is most appropriate for a DB?

 

@Jarsky

It's good to know that there's the option to use VMWare for free commercially for one machine.

@SupaKomputa

For LAMP vs Docker, I'm a little confused by your comments. Unless I am mistaken, Docker enables running containers, and Kubernetes controls Docker containers so that I can scale or load balance based on concurrent connections. So I would run an instance of Apache with PHP in a container, and if that container is getting overloaded, Kubernetes can automagically run a new container with Apache to support those connections. I don't really expect the database would ever need to run multiple dockerized instances. I have read conflicting information: some sources say Apache can handle tons of connections, others that Apache with PHP can only handle one connection at a time. I have to be 100% certain that if 49 users are streaming video or are all interacting with an element that uses PHP to access the database, a 50th user wouldn't even notice a speed loss. Would a single bare-bones LAMP box still perform better than Kubernetes controlling load balancing?
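The scaling behaviour described here is roughly what a Kubernetes Horizontal Pod Autoscaler does. A hypothetical sketch with kubectl (the deployment name, image, and thresholds are all assumptions, not a tested setup):

```shell
# Run Apache+PHP as a deployment and expose it inside the cluster.
kubectl create deployment web --image=php:apache
kubectl expose deployment web --port=80

# Let Kubernetes add replicas (up to 5) when average CPU crosses 70%.
kubectl autoscale deployment web --cpu-percent=70 --min=1 --max=5

# Watch the replica count change under load.
kubectl get hpa web --watch
```

This requires a running cluster with a metrics source, so treat it as an illustration of the concept rather than something to paste in as-is.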

For cloud, I can't risk the cost going from $10 one month to $4,000 the next. I don't see a way to be 100% certain that I can say to my client, "it will cost you this much to do what you need". Is there a way to find that out?

 

Keep in mind, I don't actually know how many services with AWS, Google, or Cloudflare I would need, or even which services give me what I need (I assume for AWS it would be CloudFront, S3, and RDS). I would probably have to start a new topic for those questions.

For renting a server, what would the benefit be there versus something like GoDaddy.com?
I have no idea who Hetzner are. I already have reservations about asking my client to upload confidential files to a cloud, so I don't think I'll be asking him to trust a server on another continent that I know nothing about.

-------------
I know I'm asking a lot of questions, and there's a lot to digest here. It would definitely help if I had a better idea of what I need. So your feedback really helps! Thank you


The Kubernetes concept is really alien to me; I understand that it manages multiple instances of a container, so it would scale automatically.

For me, running a bare-metal server is still better than running multiple container instances.

Each container can cost up to 30% of your server's compute; running your app natively would be more efficient.

Quote

Everything I develop has to be capable of being moved.

On bare metal, you can scale up (get more powerful servers) or scale horizontally (create load balancers / clones).

 

Quote

Assuming I still build my own server, but make it a single Linux box running two Ubuntu Server VMs, one using Kubernetes/Docker for web serve and the other strictly be the database, what hardware could I go with?

Any hardware with a multicore processor can run multiple VMs if you want to isolate the resources, but bear in mind that running VMs still carries a performance penalty.

You can also get three different servers: one for web processing, one for file storage, and one for the database.

Quote

Should I be rocking a Ryzen 5 with 16 gigs of ECC RAM? Or should I stick to a Xeon platform? SSD for the reverse proxy cache and WD Red drives for large video file storage? What kind of drive is most appropriate for a DB?

For development / testing, a Ryzen should be enough, but Ryzen CPUs don't support registered ECC and the maximum amount of RAM is limited; that's when you need a Xeon / AMD EPYC. A drive that would be good for a DB: a Samsung 970 enterprise SSD.

 

Quote

For renting a server, what would the benefit be there versus something like GoDaddy.com?

Price: for the same kind of hardware, GoDaddy can be 2-3x more expensive. Hetzner is one of the biggest datacenter operators in Europe.

 

Quote

Keep in mind, I don't actually know how many services with AWS, Google, or Cloudflare I would need, or even which services give me what I need (I assume for AWS it would be CloudFront, S3, and RDS). I would probably have to start a new topic for those questions.

Depends on the application; you'll need them sooner or later.



Is the file storage server for making regular back-ups?

 

30% per container sounds crazy. Are you saying that if Kubernetes decides to open a fourth container, the whole physical machine will crap the bed? Or just that each container is 30% slower than it would be if it ran on four physical machines?


I'm starting my second year of College in September.

Anything I can set up to be automatic is a must. If something fails, I need it to be shut down and replaced on the fly without my intervention. I will not be paid for servicing or maintenance the second my first day of classes starts. Kubernetes seems to be a solution that can do that.

My client is not the leader of a company, and so resources are limited. But there are three possibilities:
1. The server I build and the site I develop only ever serve the 50 users.
2. The higher-ups say "that's great" and need to relocate everything I've done for further development, or to redeploy it immediately on hardware I will never know anything about.
3. The higher-ups say "that's great", open it up to thousands of users in its current state while I'm in school, and suddenly they have to hire someone else to make the existing server handle that...

I am documenting everything as well as a first-year software engineering college student knows how. If my client can hire someone else, then I need to know that that person can scale it easily without any additional input. If they have to troubleshoot because my documentation missed one thing that could have been avoided by using a Docker image, doesn't that just make me look bad?

So far, assuming I still build this server instead of renting one, the specs will be:
AMD Epyc or Intel Xeon
16 to 64 GB ECC RAM
Samsung 970 Enterprise SSD

 

Thank you so much! Your continued input is invaluable.


Going for the cloud is probably the easiest solution here, plus (depending on where you decide to have your cloud instance hosted) it'll probably include failsafe measures. And scaling the available resources for the cloud machine will be easy if and when your project grows.

 

Docker and kubernetes are great if you know what you're doing, but they can be a hassle otherwise.

75% of what I say is sarcastic

 

So is the rest probably


4 hours ago, myselfolli said:

Going for the cloud is probably the easiest solution here, plus (depending on where you decide to have your cloud instance hosted) it'll probably include failsafe measures. And scaling the available resources for the cloud machine will be easy if and when your project grows.

Thank you. I tried to use the AWS cost calculator (with the limited knowledge I have of the services and with no clue what most of the fields mean), and if the site gets the usage I expect, it could cost $750 a month or more. But ultimately the issue with cloud is the uncertainty: it could be free one month and $4,000 the next. Unless there's a better cloud service, or one whose costs are easier to calculate, it isn't worth considering. If anyone knows a concrete way to figure this out for someone who's never used the cloud before, can you share it with me please?

 

4 hours ago, myselfolli said:

Docker and kubernetes are great if you know what you're doing, but they can be a hassle otherwise.

I haven't really had any issues setting up Docker before, but I haven't used it in a production environment, just school. Kubernetes, on the other hand, I haven't touched at all. I just felt it would be necessary to familiarize myself with it because it seems to be the direction that large web servers are going, so it seemed natural to develop for that environment. SupaKomputa definitely has me questioning my logic behind that decision, and with only three actual weeks of real development time it likely won't matter at all what environment I develop for. A VM with nginx running as a reverse proxy and another VM with LAMP might just be the way to go.
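A minimal sketch of that nginx-in-front-of-LAMP layout, assuming the LAMP VM answers at 10.0.0.2 (a hypothetical address) and nginx is already installed on the proxy VM:

```shell
# Write a bare-bones reverse-proxy config on the nginx VM.
sudo tee /etc/nginx/conf.d/lamp-proxy.conf >/dev/null <<'EOF'
server {
    listen 80;
    location / {
        proxy_pass http://10.0.0.2;           # hypothetical LAMP VM address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF

# Validate the config, then reload without dropping connections.
sudo nginx -t && sudo nginx -s reload
```

Caching for the video files would go on top of this (`proxy_cache`), but even the bare version keeps the LAMP VM off the public interface.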

 

Thank you for your input! :)

 

-------------

A general update on where I am

 

For renting a server:

Right now I'm just trying to gather information so that I can discuss the options with my client. I've looked a bit more into Hetzner, and will consider that.

Hetzner's auctions have the advantage of an actual dedicated server with large amounts of storage space at affordable pricing. But both Hetzner's and GoDaddy's managed options might suit our needs better.

 

For building a server:
This is quickly becoming an unfavourable option, for sure. Even if my client prefers this option, the highest tier of the available business internet line has a similar cost to the highest tier of hosting available from GoDaddy. But if the client chooses this route regardless...

I looked at EPYC (16-core), the Xeons available from Newegg (six Kaby Lake, five Broadwell, and one Coffee Lake), four Core i7 CPUs (7700k, 8700k, 9700k, 9900k), the W-3175X, Ryzen 7, and Ryzen 5. Without factoring in the cost of the platforms, and relying on PassMark scores at cpubenchmark.net, I came to these conclusions:

No surprise to anyone, ever, the W-3175X wins on pure performance overall, and specifically in Multi-threaded.

The 9900k wins on single-threaded performance, and takes second-place in multi-threaded (at 51.72% of the W-3175X).

The winner of performance per dollar for single-threaded was the Coffee Lake Xeon E-2124.
The winner of performance per dollar for multi-threaded was the Ryzen 5 2600.

Reasonably well balanced for performance vs. value seems to be either the Ryzen 7 2700X or the 9700K.

 

The closest compromise for Xeon, for me, is probably the E3-1275 v6, which at least has integrated video to make first-time setup and maintenance a bit more convenient. It has 81.33% of the 9900K's single-thread score, and its multi-thread score is 28.19% of the W-3175X's versus the 9900K's 51.72%.

Worst performance per dollar, before factoring in the platform (which only makes it worse), is the W-3175X, then EPYC, then the E5-2660 v4.

 

So if we still build a server, we will probably just forget ECC RAM and go with a consumer desktop rocking either a Ryzen 7 2700X or a 9700K. Leaning towards the 9700K with a Gigabyte Aorus board of some flavour, or a 3rd-gen Ryzen if that's practical and possible to get.

 

tl;dr:
Cloud pricing seems unpredictable or hard to calculate, and I'd appreciate help sorting that out for my client. But we'll probably be lame and stick with GoDaddy, or build a consumer-grade system as a server with a 3rd-gen Ryzen or a 9700K, using nginx as a reverse-proxy VM and LAMP as the main server VM.


Do yourself a favor and figure out a hypervisor like VMware or Hyper-V. Hyper-V is free, but ESXi also has a free version that will fit your use case and is a little more straightforward to use, IMO.

* You only need one metal box

* If either OS dies it doesn't matter. You can access the hypervisor.

 

Additionally, 

* Strongly recommend just buying a Proliant or PowerEdge. WITH A WARRANTY. If you can't afford new, there's lots of decent used stuff for cheap on ebay. Talk to your professors about this project, maybe they can loan you some gear. Academic institutions like to hoard perfectly good stuff just in case. 

* A business grade server will have an out-of-band manager like iLO or iDRAC that can let you do stuff to the bare metal remotely. 

* Database servers are a great use case for ECC. Memory errors are somewhat rare, but they do happen, especially as the machine ages. 

* All IP KVMs are hot garbage. Trust me, I've been looking. I usually just set up my own vpn account on the client's firewall and jump in from a clean VM on my laptop. 

* Define a service level with your client. They are gonna expect this thing to work untouched for years unless you tell them otherwise. Figure out regular maintenance. Figure out monitoring & alerting. Figure out an SLA (when it breaks, how long do you have to acknowledge / start working / resolve?). If you do it right you can sell them on some recurring revenue for you. At my worst, most rubber-band-and-paperclip client I have a monthly retainer that's equivalent to around 10 hours of labor. Best of all, when you graduate or otherwise decide to leave town, you can sell the contract to someone else without shaking up your client's experience too badly.

* Backups??????

Intel 11700K - Gigabyte 3080 Ti- Gigabyte Z590 Aorus Pro - Sabrent Rocket NVME - Corsair 16GB DDR4


11 hours ago, MegamanXGold said:

Hetzner

Well if Hetzner is an option, maybe have a look at their cloud service?

 

You can get pretty decent bang for your buck there, plus scaling would be a breeze. And their pricing is solely dependent on the cloud machine tier you opt for, which is about as transparent as it'll get.



THANK YOU EVERYONE!

My client and I had our meeting today. I discussed a lot of what was said here with them, and we ultimately agreed to start by trying GoDaddy's "Expand" option under their Business Hosting Plans.

On sale for $64.99/mo (then $128.99/mo on renewal): 4 allocated CPU cores, 8 GB of RAM, unmetered traffic/bandwidth, and unlimited databases. If it isn't enough for us, it looks like migrating to a higher-tier plan won't be difficult.

 

On 7/3/2019 at 12:21 AM, myselfolli said:

Well if Hetzner is an option, maybe have a look at their cloud service?


I expressed specific interest in Hetzner, including the cloud option, but we needed to make a decision immediately and we both felt more confident with GoDaddy. It is possible that we might try a Hetzner cloud account at a later date just to see how the service feels and if it is worth it to move the site there - but that assumes we would have a reason to migrate before this project moves to other developers. I may try some Hetzner services in the future even if just for personal use.

 

On 7/2/2019 at 7:52 PM, jake9000 said:

Do yourself a favor and figure out a hypervisor like vmware or hyper-v. HV is free but esxi also has a free version that will fit your use case and is a little more straightforward to use IMO. 

* You only need one metal box

* If either OS dies it doesn't matter. You can access the hypervisor.

Thank you jake9000. I've used Hyper-V in Windows Server, and I've used VirtualBox for occasionally playing with Ubuntu. Now that I understand how ESXi works, I will likely set up VMware just for the experience.

 

About your additional points, that's all a lot of really great info.
About the warranty, that's an extremely useful point that's very easy to forget. If we had chosen to still build a server, I would have asked for more time to investigate that option.
The regular KVM switches at my school are 'hot garbage' too, so I can't even imagine using one over IP! Though I would have logged in over SSH anyway.

 

--------------
Once again I appreciate all of the help here! Definitely helped me to see reason before I had my client spend a drastic amount of money.

--------------

 

EDIT:

Right after posting this, I found a Forbes article from 2012 that doesn't paint too good a picture of GoDaddy, and I never had time to look into other hosts besides Hetzner. We've already made our choice and we're going to stick with it, but if anyone finds this thread and thinks they'll choose GoDaddy too, there are more hosts to investigate that I wish I had time to look into. Here is an article that lists some of those hosts:

https://www.websitebuilderexpert.com/web-hosting-services/best/

 

I still think, given the type of content we need to host, GoDaddy was a good choice. But not everyone has the same needs. For literally any other site I've ever made, the cloud hosting options all sound awesome, particularly Hetzner's.

 

EDIT 2:

I marked SupaKomputa's post as the answer because renting a server was what we ended up deciding.

But every answer was fantastic, and it was difficult to choose a Best.


Wait, $64.99/month?

I pay ~45€/month for a dedicated quad-core hyperthreaded server with 64 GB of RAM, an unmetered connection, and 3 TB of RAID 1 hard drive storage...



[EDIT]

It has been almost a year since I made this post. The issues with GoDaddy that I list below have not improved. I am learning new skills in my second year of college that are useful for making our switch to the cloud, and we have begun redeveloping our project for AWS.

 

All in all, our experience is that GoDaddy's Shared Server platform is an unresponsive nightmare to develop on, and now that I have some AWS experience, I honestly can't see a reason for it to exist. Between AWS's trial (limitations apply, of course), cheap S3 storage, and other options, it's worth learning what's available there.

 

GoDaddy's shared servers are, in my opinion, only marginally more useful than building a free static website on a service like Angelfire, and far from worth the $1,000 my employer threw down for the year. The value proposition would be different if it were $50-100. As it is, nobody should waste their time on those options.

 

I can't speak on the quality of their dedicated servers, but if the only tangible difference is a faster response time in exchange for the costs involved then, again, it doesn't make sense not to go with AWS. As long as you're smart with keeping server-side processing to a minimum, and use Cloudfront effectively, you're probably going to be paying about the same amount for a much more reliable service.

 

This is all personal opinion based on my experience, of course. Perhaps others have had an amazing experience with GoDaddy. I wish I was one of them. But I'm excited to build sites using S3 buckets, React, API Gateway, NodeJS on Lambda, and Cloudfront.

[/EDIT]

 

In case this helps someone else, here's an update on my experience with GoDaddy so far:

  • My client only wanted one year of hosting, which meant the $64.99/month (CAD) was marketing; it ended up costing around $950 for the year (~$79.17 CAD/month, or ~$60.78 USD, or ~53.93€).
  • Out of the included storage space, 20 GB are already used, probably with garbage I'll never use, like WordPress.
  • It took hours to figure out that a Shared IP results in a web address like http://xx.xx.xx.xx/~identifier/ (see edit)
  • Because of the above, the DNS can't ever actually point to our site, and the forwarding options are complete garbage. (see edit)
  • Adding just one Dedicated IP to our service will cost about $8.99 CAD per month, but a basic plan (on sale) for $32.99 comes with 3 Dedicated IPs...
  • The administration panel on the site kept timing out, and it took hours before something as simple as FTP finally started working, and then it broke itself again with no changes on our end.
  • Loading my HTML pages from the host often (every 3-4 page loads) results in a long wait time, upwards of 30 seconds, but is otherwise quick. Still, that's unacceptable for text-only documents (so far).
  • GoDaddy's site is so bad, I kept running into user experience issues. Example: when you pay for a host, the first step you HAVE to perform is to tell it your domain name. It didn't ask if we owned that domain, it didn't mention setting up DNS; it just took in what I typed and the configuration panel started working (barely). I had read that GoDaddy and many competitors offer free domains when setting up a new web host. I was googling to see if I missed any steps, or if it just takes a long time to show up, before I finally realized that we still had to buy the domain separately.

I am almost ready to push for a refund, instead of asking my client to throw down another $107.88 for something that should've been free at this "Business" tier. But since I've already wasted my client's time, I can't do that without concrete alternatives. I might call Go Daddy tomorrow to discuss my issues, but we'll see.

I don't want a server made for hosting blogs, it has to host HD video, and it has to do that quickly and reliably for users in North America. It has to be hosted in North America. And it has to have a predictable cost.

 

Personal rant:
Taking $1,000 from anyone and offering piss-poor quality like this is unbelievable. The day was an infuriating test of my sanity and patience, and I'm not even getting paid for this nonsense anymore. This is taking up my personal time now, because my paid time is required for developing the actual site. The reviewers who wrote the articles I read saying GoDaddy was any good should be jailed. /end_rant

 

(long-winded) EDIT: Concerning the odd URL situation, I wasn't wrong, but the expected behaviour wasn't working at the time I tested. When the DNS server points the domain at the shared server's IP, and the server knows which site that domain belongs to, the shared server will properly direct the traffic to the right virtual host. When I initially set the DNS to the correct IP, it was simply misdirecting traffic at first, which led to me developing a convoluted JavaScript workaround.
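One hypothetical way to test that name-based routing without waiting on DNS is to send the Host header by hand (the IP below is the same placeholder as above, and example.com stands in for the real domain):

```shell
# Ask the shared server's IP for a specific site by name; a 200 here
# means the virtual host is routing correctly even before DNS updates.
curl -s -o /dev/null -w '%{http_code}\n' \
     -H 'Host: example.com' http://xx.xx.xx.xx/
```

If this returns the right site but the bare IP doesn't, the problem is DNS rather than the server's virtual-host config.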

Our domain works properly now, and I can SSH/SFTP into the server with Bitvise, which (like magic) seems to be much more reliable than their slow web tools, and much better than their completely insecure FTP method. At least, I couldn't get FileZilla or WinSCP to connect with SFTP, or FTP with SSL/TLS, and when I asked tech support about it, they specifically said "no, just use regular FTP". I don't use plain FTP at school, or even to my own Ubuntu VirtualBox test server on the same machine.

One thing that makes me uneasy is that during tech support, the representative accessed files on our server without (as far as I remember) asking permission. Creating folders and moving files around to test it. That was the very first thing he did before I knew what was happening. He didn't cause any damage, of course, but it made me uneasy, again, considering our desire for a secure server for confidential content.

 

When I asked the rep about the slow performance, he just passed the blame on to our ISP with the scapegoat "nobody else is complaining", even though a tracert revealed that the entire timeout happens between the shared server and their load balancer. I have a feeling our shared server essentially goes to sleep whenever it gets the chance, causing a periodic delay while it wakes back up. I'm hoping the problem disappears once users have PHP sessions, assuming that keeps it awake for the 30-minute session duration. But then, I really have no experience with what might make a shared server or its related tools behave that way.

 

I will edit this post again if my experience with Go Daddy changes (for better or worse). Maybe some of my pain will help LMG make a video some day, lol.

