Nginx - Can I do this?

Mornincupofhate

So basically I want to create a massive backend for filtering incoming requests.

 

If I did this: 

 

                         /--> Server 1 --\
DNS --> Load Balancer --+--> Server 2 ---+--> Exit Server --> Client Web Server
                         \--> Server 3 --/

 

 

Would I only need to add their domains to the load balancer and exit server, and then have each client point their domain to the load balancer?


Yes, that sounds correct, although some extra details would help. What is the load balancer in your diagram? Is it a dedicated software/hardware appliance, or is this also nginx?

 

One thing to be careful of is not to design a single point of failure into your setup, i.e. the load balancer. You also have to make sure you configure session persistence etc. to give a proper user experience.
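If nginx ends up being the load balancer, session persistence can be as simple as the ip_hash directive, which pins each client IP to the same backend. A minimal sketch (the backend hostnames are placeholders):

```nginx
upstream backend {
    # hash on the client IP so the same visitor always lands
    # on the same backend server (basic session persistence)
    ip_hash;

    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
    }
}
```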

 

https://www.nginx.com/resources/admin-guide/load-balancer/



23 hours ago, leadeater said:

Yes, that sounds correct, although some extra details would help. What is the load balancer in your diagram? Is it a dedicated software/hardware appliance, or is this also nginx?

 

One thing to be careful of is not to design a single point of failure into your setup, i.e. the load balancer. You also have to make sure you configure session persistence etc. to give a proper user experience.

 

https://www.nginx.com/resources/admin-guide/load-balancer/

Yes, they're all going to run nginx.

 

And your answer was a little bit vague. When I'm adding a new client's domain, would I need to configure it on all of the machines listed above, or just the load balancer? And would the client just need to point his domain to the load balancer's name servers?


I'm pretty sure you would have to update every web server with the details of new domains added if you are using a single IP to host multiple domains (i.e. the nginx config recognizes the requested URL and loads a different folder for each domain). You should be using something to synchronize all files and settings between the servers anyway, so that a change on one replicates to them all.


2 hours ago, Mornincupofhate said:

Yes, they're all going to run nginx.

 

And your answer was a little bit vague. When I'm adding a new client's domain, would I need to configure it on all of the machines listed above, or just the load balancer? And would the client just need to point his domain to the load balancer's name servers?

Your diagram is kind of strange, and you will need to explain the use case a bit more. I don't know what your "exit" server does, and the web servers are usually the three load-balanced servers. The response goes back out the way it came in, not out a different server.

 

This is what I would expect:

                        |    /--> Server 1 --\
DNS --> Load Balancer --|---+--> Server 2 ---+--> database
                        |    \--> Server 3 --/
                        ^ Firewall

 

Then you could do one of two things. You could host each site as a vhost and target them with server_name:

 

server {
    listen       80;
    server_name  client1site.com;
}

server {
    listen       80;
    server_name  client2site.com;
}

Then you need to add a DNS record for each client pointing at the load balancer, and add an entry on each web server.
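On the load balancer itself, the per-client entry might look something like this (a sketch, assuming the three web servers sit in one shared upstream; the domain and addresses are placeholders):

```nginx
upstream webservers {
    server 10.0.0.11;
    server 10.0.0.12;
    server 10.0.0.13;
}

server {
    listen 80;
    # route this client's domain to the shared pool of web servers
    server_name client1site.com;

    location / {
        proxy_pass http://webservers;
    }
}
```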

 

Or you could have each load-balanced server run a VM per client and have it all routed by the load balancer.


7 hours ago, Mornincupofhate said:

Yes, they're all going to run nginx.

 

And your answer was a little bit vague. When I'm adding a new client's domain, would I need to configure it on all of the machines listed above, or just the load balancer? And would the client just need to point his domain to the load balancer's name servers?

 

5 hours ago, WaxyMaxy said:

Your diagram is kind of strange, and you will need to explain the use case a bit more. I don't know what your "exit" server does, and the web servers are usually the three load-balanced servers. The response goes back out the way it came in, not out a different server.

I had to be vague, as your diagram/post doesn't fully show all the details needed to make a proper detailed response; it's a little confusing, as @WaxyMaxy pointed out.

 

The simple answer is what @WaxyMaxy showed you, but remember that load balancers like nginx alone are not the answer to all problems. Getting the reliability and uptime of someone like Squarespace is both more complex and simpler than you might expect, depending on what resources you have.

 

Normally you have a redundant/resilient border to your network, which can be done a few ways or with a mix of more than one:

  • Round robin DNS to multiple IP addresses of different border entry paths
  • HA or VRRP router/firewall on each border entry path
  • BGP peering so if a path goes down the route tables are updated and traffic comes in a new path

Remember, some of the above is usually done by a hosting provider, so if that is the case you don't even have to worry about it.

 

After you have created a resilient border, you then start looking at load balancing web sites/applications. What you can do is have multiple nginx load balancers and set the public DNS records to their IP addresses; a DNS record can have more than one IP. If a load balancer goes down, you will want its IP address to become live on another nginx server. The reason for this is DNS TTL: even with a low value, people will still experience moderate downtime otherwise.

 

What I outlined above is fairly similar to what we run at work. We have three data centers, each with multiple 10Gb/40Gb connections and resilient routers and firewalls, and we have multiple /16 public IP spaces which we control with BGP. There is also backend connectivity between the data centers, so if a public entry point goes down, traffic can enter from another data center and travel down the backend. For load balancers we use virtual Citrix NetScalers.

 

I am by no means a networking expert; I'm in the systems engineering team, so my work stops at firewall configuration. If you want someone who is much more of a master at this, then @Wombo is the person you want.


12 hours ago, WaxyMaxy said:

Your diagram is kind of strange, and you will need to explain the use case a bit more. I don't know what your "exit" server does, and the web servers are usually the three load-balanced servers. The response goes back out the way it came in, not out a different server.

 

This is what I would expect:

                        |    /--> Server 1 --\
DNS --> Load Balancer --|---+--> Server 2 ---+--> database
                        |    \--> Server 3 --/
                        ^ Firewall

 

Then you could do one of two things. You could host each site as a vhost and target them with server_name:

 


server {
    listen       80;
    server_name  client1site.com;
}

server {
    listen       80;
    server_name  client2site.com;
}

Then you need to add a DNS record for each client pointing at the load balancer, and add an entry on each web server.

 

Or you could have each load-balanced server run a VM per client and have it all routed by the load balancer.

Load balancing three servers with the same content isn't really what I had in mind. Those three servers were just supposed to filter out spam HTTP requests and then forward good traffic to the client's web server.

 

Let me create a simpler diagram:

DNS --> Reverse Proxy (stuff gets filtered here) --> Dedicated Server
                                                           |
                                                        VHosts


Basically, I'm just trying to keep things as simple as I can here. My goal is for people to buy from a WHMCS panel and then have WHMCS write a new config for the web server. The reason I'm doing this is that the host I'm buying from meets all my expectations, except their layer 7 filtering is garbage, and I'd like to make my own.

If I just forwarded all traffic from the reverse proxy to the IP address of the dedicated server, would the dedicated server be able to point HTTP requests to where they need to go inside the vhosts? I want to make it as simple as can be and only write to one server's config when someone buys a web server.


3 hours ago, Mornincupofhate said:

Load balancing three servers with the same content isn't really what I had in mind. Those three servers were just supposed to filter out spam HTTP requests and then forward good traffic to the client's web server.

 

Let me create a simpler diagram:

DNS --> Reverse Proxy (stuff gets filtered here) --> Dedicated Server
                                                           |
                                                        VHosts


Basically, I'm just trying to keep things as simple as I can here. My goal is for people to buy from a WHMCS panel and then have WHMCS write a new config for the web server. The reason I'm doing this is that the host I'm buying from meets all my expectations, except their layer 7 filtering is garbage, and I'd like to make my own.

If I just forwarded all traffic from the reverse proxy to the IP address of the dedicated server, would the dedicated server be able to point HTTP requests to where they need to go inside the vhosts? I want to make it as simple as can be and only write to one server's config when someone buys a web server.

I see, that makes more sense. The front-facing load balancer config should remain rather static unless you add more filter servers.

 

http {
    upstream backend {
        server backend1.example.com;  # hostname or IP address
        server backend2.example.com;  # hostname or IP address
    }
    server {
        location / {
            proxy_pass http://backend;
        }
    }
}

 

For your filter servers, you would update them with a script; the basic config per hosted client on these is rather simple.

 

location /some/path/ {
    proxy_pass http://www.example.com/link/;
}

 

Here I'm reverse proxying at the filter servers, but you could move that to the front load balancer.
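As one example of what the "filtering" on those servers could look like, nginx's stock limit_req module can rate-limit abusive clients by IP. A sketch (the zone name, rate, and proxied URL are placeholders):

```nginx
# shared-memory zone keyed by client IP, budget of 10 requests/second
# (the limit_req_zone directive belongs in the http {} block)
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    listen 80;

    location / {
        # queue short bursts of up to 20 requests; anything beyond
        # that is rejected (503 by default) instead of being proxied
        limit_req zone=perip burst=20;
        proxy_pass http://www.example.com/link/;
    }
}
```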

 

My real question would be: are you not trying to recreate what Cloudflare does? Would it not simply be easier to integrate that into your solution?


2 hours ago, leadeater said:

I see, that makes more sense. The front-facing load balancer config should remain rather static unless you add more filter servers.

 

http {
    upstream backend {
        server backend1.example.com;  # hostname or IP address
        server backend2.example.com;  # hostname or IP address
    }
    server {
        location / {
            proxy_pass http://backend;
        }
    }
}

 

For your filter servers, you would update them with a script; the basic config per hosted client on these is rather simple.

 

location /some/path/ {
    proxy_pass http://www.example.com/link/;
}

 

Here I'm reverse proxying at the filter servers, but you could move that to the front load balancer.

 

My real question would be: are you not trying to recreate what Cloudflare does? Would it not simply be easier to integrate that into your solution?

I'm somewhat trying to do what Cloudflare does, because Cloudflare does a horrible job at filtering layer 7 attacks, even when I've set it to Under Attack mode.

nginx would be a lot better, IMO, because I can add new filter settings whenever I need to.

 

Also, would I be able to integrate this process into WHMCS?

 

Also, as shown in my first example, could I just point all my filter servers to a fourth server and then have that point to all of the client servers, so I would only have to update the config on one server?


12 hours ago, leadeater said:

 

I had to be vague, as your diagram/post doesn't fully show all the details needed to make a proper detailed response; it's a little confusing, as @WaxyMaxy pointed out.

 

The simple answer is what @WaxyMaxy showed you, but remember that load balancers like nginx alone are not the answer to all problems. Getting the reliability and uptime of someone like Squarespace is both more complex and simpler than you might expect, depending on what resources you have.

 

Normally you have a redundant/resilient border to your network, which can be done a few ways or with a mix of more than one:

  • Round robin DNS to multiple IP addresses of different border entry paths
  • HA or VRRP router/firewall on each border entry path
  • BGP peering so if a path goes down the route tables are updated and traffic comes in a new path

Remember, some of the above is usually done by a hosting provider, so if that is the case you don't even have to worry about it.

 

After you have created a resilient border, you then start looking at load balancing web sites/applications. What you can do is have multiple nginx load balancers and set the public DNS records to their IP addresses; a DNS record can have more than one IP. If a load balancer goes down, you will want its IP address to become live on another nginx server. The reason for this is DNS TTL: even with a low value, people will still experience moderate downtime otherwise.

 

What I outlined above is fairly similar to what we run at work. We have three data centers, each with multiple 10Gb/40Gb connections and resilient routers and firewalls, and we have multiple /16 public IP spaces which we control with BGP. There is also backend connectivity between the data centers, so if a public entry point goes down, traffic can enter from another data center and travel down the backend. For load balancers we use virtual Citrix NetScalers.

 

I am by no means a networking expert; I'm in the systems engineering team, so my work stops at firewall configuration. If you want someone who is much more of a master at this, then @Wombo is the person you want.

I answer the call.

 

Hmm, intriguing. I like to think I have domain over layer 3 and under, and a bit of layer 4. Once we start hitting those higher layers I tend to fall by the wayside. I'll get the packets to you, but it's up to you what you do with them!

 

I'm familiar with the concepts we're discussing here, but not with the implementation. I could definitely help with any routing concerns/configs for routing or high-availability protocols, but I'm a bit lost with the rest.

 

I think you're handling this one far better than I could!


5 hours ago, Mornincupofhate said:

I'm somewhat trying to do what Cloudflare does, because Cloudflare does a horrible job at filtering layer 7 attacks, even when I've set it to Under Attack mode.

nginx would be a lot better, IMO, because I can add new filter settings whenever I need to.

 

Also, would I be able to integrate this process into WHMCS?

 

Also, as shown in my first example, could I just point all my filter servers to a fourth server and then have that point to all of the client servers, so I would only have to update the config on one server?

Unfortunately, I haven't used WHMCS, so I can't answer with certainty, but automation is a big part of WHMCS, so I would say yes: you could do practically anything you want, so long as you can write the script/code to do it and call that as part of the provisioning process in WHMCS.

 

You could point the filter servers to a downstream reverse-proxy-type server, yes, but the logic behind it seems less than ideal. If you picture a configuration with one front-end load balancer which hands off to many filter servers, which then hand off to one backend server, you have two single points of failure which would affect all customers. The load that the front-end and backend servers would be under may also be an issue: if you need multiple filter servers, even though they do more work, could a single server alone handle the raw traffic?

 

Personally, I would make the front-end server as basic as possible and then combine the filtering and the hand-off to customer web servers on the same logical layer; this would also scale out much more simply if you need it to. Spin up a new filter server, add it to the load balancer configuration, then update the WHMCS provisioning workflow to account for the new server.
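Scaling out in that design is then mostly a one-line change on the front-end server: add the new filter server to the upstream block and reload nginx. A sketch (the hostnames are placeholders):

```nginx
upstream filter_pool {
    server filter1.example.com;
    server filter2.example.com;
    server filter3.example.com;  # newly provisioned filter server
}

server {
    listen 80;

    location / {
        proxy_pass http://filter_pool;
    }
}
```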

 

I don't know how big a service you are going to be offering, but one thing I always try to avoid is single points of failure.


10 hours ago, leadeater said:

I don't know how big a service you are going to be offering, but one thing I always try to avoid is single points of failure.

Two is one, and one is none.

