
3 Network Setup

In this chapter, we'll look at the different ways to set up an InterWorx cluster. Note that clustering works the same from the user-interface perspective regardless of which LVS (Linux Virtual Server) mode is chosen. The examples here use LVS-DR (direct routing), as it is by far the most popular way to build an InterWorx Cluster.

3.1 Basic Network Configuration

[Figure 3.1: A Basic, Single-Server InterWorx Installation (images/diagrams/01-single-server.png)]
Figure 3.1 shows a basic single-server hosting setup: all hosting services are handled by one server. We break the services down into the following Roles: AppServer (web, mail, DNS, etc.), File Server, and Database. In this setup, the server has a single network device, connected directly to the public network, which in turn connects to the greater internet. This is not a clustered setup; it is the baseline from which we can expand into one.
[Figure 3.2: InterWorx Clustering With A Single NIC Per Application Server (images/diagrams/02-cluster.png)]
The first thing to note in Figure 3.2 is the addition of an extra Role on the first server: Load Balancer. This server can now load balance application requests to the other AppServers, or Nodes, in the cluster. Also note the blue directional arrows, which indicate intra-cluster communication. Because this cluster still uses only the public network, one public IP address on the Load Balancer must be reserved for intra-cluster communication; this IP is called the quorum IP. At least one other public IP address must be assigned to the Load Balancer, and these are the IPs that hosting services utilize.
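As a hypothetical illustration (addresses drawn from the 203.0.113.0/24 documentation range, and assuming the public NIC is eth0), the Load Balancer in Figure 3.2 would carry both addresses on its single network device:

    # Both IPs live on the one public NIC (hypothetical values)
    ip addr add 203.0.113.10/24 dev eth0   # quorum IP, reserved for intra-cluster traffic
    ip addr add 203.0.113.11/24 dev eth0   # public IP used by hosting services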
[Figure 3.3: InterWorx Clustering With Two NICs Per Application Server (images/diagrams/03-cluster.png)]
Figure 3.3 shows the addition of a second, private network for the cluster to utilize. The cluster uses the private network exclusively for intra-cluster communication. The quorum IP, that is, the address on the load balancer that the other servers use to communicate with it, is now a private IP address rather than a public one. This removes the need for a second public IP address for cluster load balancing to work.
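Continuing the same hypothetical addressing, the Load Balancer in Figure 3.3 would keep only the hosting IP on the public NIC and move the quorum IP to the private NIC (here eth1, on an assumed 192.168.10.0/24 private network):

    # eth0 faces the public network; eth1 carries intra-cluster traffic
    ip addr add 203.0.113.11/24 dev eth0   # public IP used by hosting services
    ip addr add 192.168.10.1/24 dev eth1   # quorum IP, now on the private network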
[Figure 3.4: InterWorx Clustering With Two NICs Per Application Server, And Separate File And Database Servers (images/diagrams/04-cluster.png)]
Figure 3.4 shows a more advanced InterWorx Cluster setup, where the File Server and Database Roles have been moved to two separate servers connected to the rest of the cluster only via the private network.

3.2 Public vs. Private Network For Intra-cluster Communication

Intra-cluster communication refers to the sharing of site and email data between servers, commands sent through the InterWorx command queue, and the routing of load-balanced requests from the cluster manager to the nodes for processing. Before setting up your InterWorx Cluster, you must decide whether to use a private or public network for intra-cluster communication.

3.2.1 Public Intra-cluster Communication

Pros:
  • Requires only one network device per server.
  • Simpler setup.
Cons:
  • One public IP address must be dedicated as the cluster quorum IP for intra-cluster communication.
  • If your network connection is throttled at the switch, intra-cluster traffic may push you to the throttle limit sooner.
  • Depending on how your provider calculates bandwidth usage, your intra-cluster communication may be billed.

3.2.2 Private Intra-cluster Communication

Pros:
  • Allows use of all public IPs for hosting services in the cluster.
  • Keeps cluster communication separate from public/internet communication.
Cons:
  • Requires two network devices on the cluster manager and on each cluster node.
  • Slightly more complex setup.

3.3 Home Directory Considerations

If you want a separate partition for /home on the InterWorx Cluster Node servers, name the partition /chroot instead and symlink /home to /chroot/home. This allows the shared storage mounts to mount cleanly.
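As a minimal sketch, assuming a fresh install where /home is still empty and the new partition is already mounted at /chroot:

    # /chroot is the mounted partition; relocate /home beneath it
    mkdir -p /chroot/home
    rmdir /home        # assumes /home is empty; move any existing data first
    ln -s /chroot/home /home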

3.4 Shared Storage

The default setup is to use the Cluster Manager server as the shared storage device. However, it is possible to set up the cluster using an external storage device. You need to decide prior to setting up the cluster because switching later will require a complete rebuild of the cluster.
If you want to use an external device for shared storage, there are several options for configuring the mounts. If you do not already have a separate /home partition, simply mount the NFS export at /chroot, create a /chroot/home directory, and symlink /home to /chroot/home, as in the first sketch below.
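A sketch under assumed names (the NFS server nfs.example.com and export path /export are hypothetical):

    # Mount the external NFS share at /chroot, then relocate /home beneath it
    mkdir -p /chroot
    mount -t nfs nfs.example.com:/export /chroot
    mkdir -p /chroot/home
    rmdir /home        # assumes /home is empty; migrate any data first
    ln -s /chroot/home /home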
If you already have /home mounted as a separate partition, remount it at /chroot and symlink /home to /chroot/home, as in the second sketch below.
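A sketch of the remount, assuming the existing /home partition is the hypothetical device /dev/sdb1:

    # Remount the former /home partition at /chroot
    umount /home
    mkdir -p /chroot
    mount /dev/sdb1 /chroot
    mkdir -p /chroot/home   # if user directories sit at the partition root, move them under home/
    rmdir /home             # the old mountpoint is now an empty directory
    ln -s /chroot/home /home
    # Also update /etc/fstab so the partition mounts at /chroot on boot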
Then, proceed with the cluster setup in NodeWorx. InterWorx will make the cluster nodes mount the correct location.

(C) 2017 by InterWorx LLC