InterWorx Control Panel Installation Guide

by InterWorx LLC

Preface

About This Guide

This guide is intended to walk the user through installation of both the InterWorx Control Panel and the InterWorx Cluster Panel. The two products are the same software - you download and run the same installer in the same fashion in both cases. The difference is that there are additional requirements when putting the InterWorx Control Panel into cluster mode, which makes it the InterWorx Cluster Panel.

Pre-Requisites

It is recommended that readers of this guide be competent Linux administrators - that is, that they have experience setting up and maintaining, from the command line, a Linux server that hosts services on the internet. While the panel is designed to reduce the amount of knowledge required to interact with a Linux server, troubleshooting problems and making certain configuration changes is far easier if the Server Administrator is comfortable in a text-based shell. InterWorx is primarily designed to run on the CentOS Linux distribution, with support also for Red Hat Enterprise Linux. As such, experience using RPM-based operating systems may also be helpful to a server administrator setting up InterWorx for the first time.

Part I. Installing InterWorx Control Panel

1 InterWorx Installation Requirements

1.1 Operating System Support

InterWorx is a Linux based hosting control panel. Linux is the most popular, and (in our opinion) best platform for web hosting applications. InterWorx relies heavily on the RPM package system for distribution of InterWorx itself, as well as the various software packages that handle all web hosting related needs. Therefore, an RPM-compatible Linux distribution is required. In general, this means Red Hat Linux compatible systems. As of this writing (2017-02-07), the following distributions are fully supported:
  • Red Hat Enterprise Linux 5 (Not Recommended)
  • Red Hat Enterprise Linux 6
  • Red Hat Enterprise Linux 7
  • CentOS Linux 5 (Not Recommended)
  • CentOS Linux 6
  • CentOS Linux 7
The following distributions were at one point supported, but are now past EOL (end of life) and are NO LONGER recommended or supported:
  • Red Hat Enterprise Linux 3
  • Red Hat Enterprise Linux 4
  • CentOS Linux 3
  • CentOS Linux 4
  • Red Hat 9
If you aren’t sure which OS to choose, we recommend either Red Hat Enterprise Linux 7 (requires a paid subscription) or CentOS Linux 7 (free).

What about Fedora Linux?

The Fedora release, support, and maintenance schedules are much shorter than those of the alternatives above. A new Fedora version is released every 6 months. This is great for testing new things for a limited amount of time, but not so great for web hosting, where stability is a key requirement. Therefore, InterWorx does not officially support the Fedora Linux distributions, even though various Fedora releases are binary compatible with the supported Linux distributions above.

1.2 Virtual Machine Support

Any virtual machine environment in which the supported operating systems above can be installed will work just fine with InterWorx. Here is a non-exhaustive list:
  • VMware
  • Xen
  • VirtualBox
  • OpenVZ / Virtuozzo*
  • Rackspace Cloud
* In OpenVZ / Virtuozzo, the “Second Level Quotas” feature must be enabled for things to work properly.
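As a hedged sketch only (the container ID and limit below are example values, and your provider’s tooling may differ), second-level quotas are typically enabled from the OpenVZ/Virtuozzo hardware node with vzctl:
[root@hwnode ~]# vzctl set 101 --quotaugidlimit 3000 --save   # enable second-level (per-UID/GID) quotas for container 101
[root@hwnode ~]# vzctl restart 101                             # restart the container so the change takes effect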

1.3 InterWorx License

The final step of installation is the activation of the InterWorx License. To activate a license you need a “license key.” InterWorx license keys generally look something like: INTERWORX_XXXXXXXXX, and are sent when a license is purchased, or a demo license request is received. Details of how to activate a license are described later on.

2 The InterWorx Install Script

2.1 Install Overview

Installing InterWorx is pretty straightforward. We will break it down into the following simple steps:
  1. Install the operating system.
  2. Download the install script.
  3. Run the install script.
  4. Activate the InterWorx License.

2.2 Installing the Operating System

Once you’ve decided on the compatible operating system you want to use, it needs to be installed. If you’re renting a dedicated server or VPS from a hosting provider, this step may already be completed for you. If so, you can skip to section 2.3.

2.2.1 Disk Partitioning

During the Linux install process, the installer will give you the opportunity to define disk partition size and configurations. The default configuration is usually to have one large partition mounted at the root (/) directory. The default configuration is acceptable, but if you want to make customizations to the partition layout, that is fine. As you make these plans, keep the following things in mind:
  1. MySQL databases will be stored in the /var/lib/mysql directory.
  2. All user web and mail data will reside under the /home directory.
If you want a separate partition for /home, we recommend that you instead set it up as a partition mounted at /chroot rather than /home. Later, the InterWorx install script will move /home into /chroot/ and recreate /home as a symlink to /chroot/home. The reason for this is two-fold: first, for compatibility with the jailed-ssh feature, and second, for compatibility with the Clustering setup. If you know you won’t use the jailed-ssh feature or Clustering in the future, this step is not required - you can simply create the partition as /home.
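For illustration only, the relocation the install script performs is roughly equivalent to the following (a sketch, not the script’s literal commands):
[root@server1 ~]# mv /home /chroot/home     # move the existing home data under /chroot
[root@server1 ~]# ln -s /chroot/home /home  # recreate /home as a symlink to /chroot/home
If /chroot is already its own partition, the data simply ends up on that partition and the rest of the system continues to reference it through the /home symlink.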

2.2.2 Choosing Packages

During Linux installation, you will be given the opportunity to choose what kind of server this will be. You can choose whatever you like, as the InterWorx install script will uninstall any conflicting packages, and install any packages that are needed but not initially installed. If you’re looking for a recommendation, we suggest Basic Server, no GUI.

2.3 InterWorx Install Script Overview

InterWorx is installed via a bash script which essentially does a few things:
  1. The script will deactivate SELinux, since it gets in the way of installation.
  2. The script will move /home/ to /chroot/home/ if it is not on its own partition, and then recreate /home as a symlink to /chroot/home/.
  3. The script will uninstall any conflicting RPM packages that are installed initially. It does this with yum.
  4. The script will install InterWorx and its supporting software packages using yum.
  5. Finally, disk usage quotas will be enabled on the primary user partition (usually /, /home, or /chroot).
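These changes can be verified later, after the script has run, with a few quick, hedged sanity checks (standard CentOS/RHEL utilities; exact output will vary by system):
[root@server1 ~]# getenforce            # SELinux should report Disabled (or Permissive)
[root@server1 ~]# ls -ld /home          # should show /home as a symlink to /chroot/home if it was relocated
[root@server1 ~]# mount | grep quota    # the quota-enabled partition should list the usrquota/grpquota options
[root@server1 ~]# repquota -a           # prints current disk quota usage on quota-enabled filesystems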

2.4 Getting the Install Script

After OS installation is complete and the server has been rebooted, the following steps need to be run at the Linux terminal/command line. Log in to the server as root via SSH, and then continue.
The InterWorx install script is located at http://updates.interworx.com/iworx/scripts/iworx-cp-install.sh. The easiest way to grab it is via the command “wget”. If wget is not installed, run the command: yum install wget
[root@server1 ~]# wget http://updates.interworx.com/iworx/scripts/iworx-cp-install.sh
Resolving updates.interworx.com... 69.39.81.135
Connecting to updates.interworx.com|69.39.81.135|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 41045 (40K) [application/x-sh]
Saving to: ‘iworx-cp-install.sh.1’
100%[======================================>] 41,045      --.-K/s   in 0.006s
2012-07-05 15:53:37 (6.90 MB/s) - ‘iworx-cp-install.sh.1’ saved [41045/41045] 
[root@server1 ~]#

2.5 Running the Install Script (Simple Version)

Run the install script like this:
[root@server1 ~]# sh iworx-cp-install.sh
At this point the script will take over and install automatically. It has been designed to anticipate differences between distributions and thus is very good at performing the actions necessary to get InterWorx installed properly on your operating system. You will occasionally be prompted and warned when the script is about to make a significant change to your system. For example, prior to disabling SELinux, the script asks you if this is OK. You can suppress these prompts by running the script with the -l option.
In the event that yum is unable to install a package due to a dependency issue, the script will halt. You will usually be able to see the output from yum and can either use it to diagnose and repair the issue yourself or forward the output to InterWorx Support and have them repair the issue for you. In either case, you can run the script as many times as you need until the install completes successfully. Since most of what the script does is run yum, running it multiple times simply causes yum to skip packages that are already installed.

2.6 Running the Install Script (Advanced Version)

The install script has a few optional parameters that can be useful for installation in advanced environments.
[root@server1 ~]# sh iworx-cp-install.sh -h
Usage: $0 [-s rpm/ks server hostname] [-s server] [-d] [-f] [-u] [-k] [-l] [-i] [-h]
    -s <server> Specify a different rpm host to grab 
                rpms from (default: updates.interworx.com).
    -d Turn debugging on (you’ll have to hit enter
       sometimes to keep things moving, output is halted so 
       you can see it).
    -f FORCE installation even if the distro isn’t supported.
    -u Perform a "yum update" prior to installation of
       InterWorx-CP.
    -k Force the removal of any conflicting packages that will
       interrupt the InterWorx-CP install.
    -l Run in headless mode.  No prompting will occur.
    -i Install all packages *except* InterWorx-CP itself.
    -h Show this help message
For example - if you wanted to run the install script and have it not prompt the user for anything, and run yum update first, you could run it like this:
[root@server1 ~]# sh iworx-cp-install.sh -l -u
...
Once installation is complete, you can move on to the license activation step.

3 License Activation

3.1 Activating your license

The next step after running the install script is to activate InterWorx with a license key. Activation is a bit of a misnomer, as this process will also initialize the InterWorx database and set up cron jobs.

3.2 Command-line License Activation

The command-line script to activate and initialize InterWorx is /home/interworx/bin/goiworx.pex. You’d run that script just like you’d run any script. Assuming you are logged in as root:
[root@server1 ~]# /home/interworx/bin/goiworx.pex
You will then be prompted for the license key you have been assigned. Keep in mind that the license key authorization will fail if
  • The license key server license.interworx.info cannot be contacted on TCP port 2443
  • The license key has been activated before on another server with a different IP on the primary network device (typically eth0).
Once the license is authenticated, you will be prompted for the email address and password for the master NodeWorx user. Keep in mind that it is possible to change this email address later. Also keep in mind that all server warning emails will be sent to this address. Once you have entered the credentials for the master NodeWorx user, the script will initialize the databases and the cron jobs for the InterWorx control panel and you will be ready to log in and start using it.
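If activation fails, a quick connectivity test from the server can help rule out network or firewall issues. A hedged example using bash’s built-in /dev/tcp redirection (no extra tools required; the timeout command is part of coreutils):
[root@server1 ~]# timeout 5 bash -c 'cat < /dev/null > /dev/tcp/license.interworx.info/2443' && echo "port 2443 reachable"
If this does not print "port 2443 reachable", check outbound firewall rules for TCP port 2443.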

3.3 Browser-based License Activation

It is also possible to activate InterWorx via a web-based activation screen that is accessible after running the initial install script, at https://[server ip address or DNS hostname]:2443/nodeworx. This screen has form fields identical to what is asked in the script-based activation, and operates behind the scenes in a similar fashion. This feature is especially nice for InterWorx license resellers who wish to give new dedicated server and VPS clients the ability to designate the master user login and password themselves.

Part II. Installing InterWorx Cluster Panel

4 InterWorx Cluster Installation Requirements

It is important to note that the Cluster Panel is essentially the InterWorx Control Panel in “cluster mode” - i.e. with clustering set up. Therefore we encourage you to review the OS and VPS requirements found in chapter 1 in order to understand what is expected. This chapter will expand on those requirements. If the concept of clustering is foreign to you, or you don’t really understand what clustering means in the context of InterWorx, we encourage you to read our clustering guides, found in the documentation section of our website. Our clustering guides also cover the configurations we support in more depth.

4.1 Server Requirements

Clustering essentially involves spreading the workload of specific services across multiple servers to reduce the amount of work a given set of hardware is forced to process. It does this by load-balancing incoming connections destined for specific services across the different machines in your cluster. Therefore, you need a minimum of 2 servers in order to benefit from the clustering system.

4.1.1 The Cluster Manager

One of the servers in your cluster will be designated the cluster manager. This will be the point of entry for incoming connections to your cluster and where the load balancer will reside. The cluster manager is also tasked with maintaining synchronization of its child cluster nodes such that when, say, you add a new SiteWorx account, that account’s data is propagated to the nodes.

Minimum Responsibilities of a Cluster Manager

The load balancer will be handling incoming connections and routing them to the child nodes for processing. This is typically not a processor- or memory-intensive task, but it is important to note that even if you move all the other responsibilities of a typical webserver to other nodes, the cluster manager will still, at minimum, be acting as a load-balancing router.
In addition, the InterWorx MySQL database (note: this is not the same as the MySQL database that your SiteWorx accounts utilize) is used to maintain synchronization across the cluster. Email, FTP, and SiteWorx account metadata are stored in the InterWorx panel database residing on the cluster manager. This is also a responsibility that cannot easily be moved to another server. Lastly, the database stores the state of the command queue - i.e. the queue of actions the nodes need to perform in order to maintain synchronization with the cluster manager.
Thus, if load balancing and running the InterWorx database are the only things your cluster manager is doing, the CPU and memory requirements are probably going to be quite reasonable.

Typical Responsibilities of a Cluster Manager

In practice, however, in addition to load-balancing and running the InterWorx database, cluster managers are typically doing a lot more.
  • They are typically also acting as a node and handling part of the incoming connection workload. They handle HTTP/S requests much like the other nodes do.
  • The storage of SiteWorx user data - i.e. webpages and email - is by default set to the Cluster Manager and shared to the nodes via NFS. This means that the disks backing the SiteWorx user data are typically extremely busy as they are responsible for handling read/write requests from the CM itself and also from the nodes via NFS.
  • The SiteWorx databases by default use the localhost MySQL server, whose files reside in /var/lib/mysql. The CPU usage of a MySQL server, plus the disk access, can also be taxing on the server’s hardware.
  • The control panel and webmail are typically accessed mostly via the cluster manager. Luckily, the control panel does not utilize much CPU/memory and is one of the more modestly used services on a given cluster. On the other hand, many users prefer webmail over an IMAP or POP3 client, and Roundcube, arguably the nicest of the three webmail clients InterWorx ships with, is known to be highly resource hungry when many clients are using it concurrently.

Spreading computation responsibility to other servers

There are many ways to mitigate the cluster manager being crushed by high demand.
  • Set the load-balancer to route all connections away from the cluster manager. This means sending all HTTP/S, FTP, SMTP, IMAP, and POP3 connections to the nodes so they are handled there. That way, the cluster manager’s web server, FTP server, and mail server are not using CPU/memory handling those requests. Using the load-balancer is covered more extensively in our clustering documentation, found on our documentation site.
  • InterWorx Cluster Panel supports moving the /home partition where SiteWorx user data is stored (actually, it’s typically in /chroot/home/ and /home is a symlink to /chroot/home) to a storage appliance or file server elsewhere, shared with the cluster via NFS. That way your user data can be stored on a high-performance RAID device with, say, RAID 5 for data protection, and the cluster manager doesn’t have to use local disk to service disk requests for the SiteWorx user data.
  • For MySQL, InterWorx supports setting up standalone MySQL servers and giving InterWorx the root MySQL password in order to create users and databases for SiteWorx accounts on them. These are called “remote MySQL servers” in InterWorx-speak. Essentially, InterWorx will have a SiteWorx account use a remote MySQL server instead of localhost, offloading that site’s MySQL workload to a server whose sole responsibility is servicing MySQL. In addition, that server’s MySQL data files can be backed by a fast, redundant RAID as well.
  • It’s a bit difficult to mitigate use of the control panel via the cluster manager. If clients use http(s)://domain.com/webmail, their connection to the webmail service will first be load balanced, since they are making their connection via port 80 or 443 - both of which are services/ports that InterWorx can load balance. On the other hand, InterWorx does not load balance connections to its panel web service on ports 2080 and 2443. That means clients connecting to NodeWorx or SiteWorx will be connecting directly to the cluster manager.

4.1.2 The Nodes

The cluster nodes are much simpler to describe. They receive connections from the load balancer on the Cluster Manager as if the connections came directly from the external client making the initial request. The service listening on the port for which the connection is destined handles and processes the connection and replies directly to the client. As such, most of the load is network-bound (and subsequently disk-bound on the device storing SiteWorx user data), since the node accesses the SiteWorx user data via NFS.
PHP applications can still create high demand for CPU and memory, though, in the event that the code has to process large data sets or runs very CPU-intensive algorithms. An example would be a PHP application that needs to sort an array with a million entries every time a certain page is loaded - if that page receives 100 GET requests per second, your CPU/memory demands will be quite high. While you could mitigate this by improving the design of the application, you are often acting only as the host and are unable to modify client code.
In any case, disk-use locally on the nodes is typically much lighter than it is on the cluster manager or the devices which are hosting the SiteWorx user data and MySQL server.

4.1.3 Resource Requirements

The resource requirements are quite modest and comparable to what we require for a single server: a modern-ish (2005 or later) processor with 2 cores or hyperthreading, around 1-2GB of RAM, and 10-20GB of disk space. This is of course a very low “minimum requirements” figure that really answers the question “What is the minimum I need to invest in a machine to get InterWorx and CentOS running?” The answer is relatively little.
But if you intend to build a high-performance cluster, or a cluster that is going to service a heavy workload, you would be silly not to use multi-core high-speed processors, high-speed memory, and the fastest disks available to ensure your clients never see any slowdowns. On the other hand, the nice thing about a cluster is that you can build it small initially with modest systems and, if necessary, later buy a very powerful node and route most traffic to it if you find that your cluster is struggling. Ideally, we recommend that for each device you pick the resources you would dedicate if you were building a single-server setup.
Obviously the nodes can make do with less disk space, and the cluster manager can make do with less disk space if the responsibility of storing SiteWorx user data and the MySQL database is offloaded to other machines.

4.2 Network Requirements

The InterWorx Cluster Panel can operate in 3 different network topologies: the cluster manager and nodes all sitting on the public network; the cluster manager and nodes on both a public network and a private network used for intra-cluster communication; and the cluster manager and nodes sitting on a private network behind a NAT.

4.2.1 All servers on a public network

This is probably the easiest topology to understand: connections come into the cluster manager, get routed to the nodes on the public network, and the nodes respond back over their public network connections. The requirements are:
  • The cluster manager needs at minimum 2 public IPs in order to perform its duty. One of the IPs is where incoming connections to the services hosted on the cluster will arrive. The other will be designated the cluster’s quorum IP, which is the main IP where all server-to-server communication occurs. No SiteWorx accounts can be placed on this IP. To understand more about the quorum, consult our cluster documentation.
  • The nodes need just a single public IP.
  • All IPs need to be within the same subnet.
  • If you want to be able to host SSL sites that need their own dedicated IP, or simply want more than a single IP, you can add additional IPs on the cluster manager.

4.2.2 Servers on both public and private network

If we were to recommend the best network topology for a cluster, it would be this one. It allows you to segment intra-cluster communication on the private LAN network and incoming/outgoing traffic on the public network. The requirements are similar to those in section 4.2.1, with some slight differences.
  • You don’t need 2 public IP addresses, because the cluster manager quorum can be its private LAN IP.
  • On the other hand, all public IPs still need to be within the same subnet.
  • You should have 2 NICs in each server - one to communicate on the public network and one to communicate on the private network.
  • Naturally, every server needs to be connected to the private network with a private IP.
The additional benefit of this setup is that if you intend to have a segregated MySQL server or keep SiteWorx user data on a dedicated storage appliance or file server, you can put these on only the private network and prevent direct access to these devices from outside the cluster. In addition, your outside traffic and inside traffic are segregated on different hardware and therefore aren’t competing for bandwidth or packet priority.

4.2.3 All servers on private network behind NAT

This makes it possible to run a cluster without multiple public IPs or exposing all your cluster servers to the internet. On the other hand, you are usually at the mercy of your NAT device, in that it will determine the capabilities of your cluster. Most SOHO NAT devices do not permit one-to-one NAT from a public IP to a private IP, so most private clusters behind NAT are limited to one public IP. This means that if you are hosting multiple sites on the cluster, none will be able to take advantage of SSL. Also, high traffic to your cluster may cripple your NAT device’s memory, as it may completely expend the memory available for the translation table.
In any case, the requirements are that:
  • The cluster manager needs 2 private IPs - one that will receive traffic from the NAT, the other acting as the quorum.
  • Each node needs just 1 private IP. When it sends packets out to clients, it spoofs the address of the cluster manager, so the NAT should simply accept the packet and route it to the correct host on the outside.
  • The NAT needs to forward ports 20, 21, 22, 25, 53, 80, 110, 143, 443, 993, 995, 2080, 2443 to the non-quorum IP of your cluster manager. In addition, you may want to forward 3306 if you are having outsiders connect to your MySQL server.
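As an illustrative sketch of that last requirement only (the interface name and IP below are hypothetical, and most consumer NAT devices expose this as “port forwarding” in a web UI rather than raw iptables), the forwarding on a Linux-based NAT box might look like:
[root@nat ~]# iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 20,21,22,25,53,80,110,143,443,993,995,2080,2443 -j DNAT --to-destination 192.168.1.10
[root@nat ~]# iptables -t nat -A PREROUTING -i eth0 -p udp --dport 53 -j DNAT --to-destination 192.168.1.10
Here eth0 is the NAT device’s public interface and 192.168.1.10 stands in for the cluster manager’s non-quorum private IP; the second rule reflects the fact that DNS typically also needs UDP port 53 forwarded.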

4.3 Storage Options

As stated earlier under “Spreading computation responsibility to other servers” in section 4.1.1, you can split the SiteWorx user data and the MySQL server out from the cluster manager onto standalone servers or appliances. This is helpful in keeping the workload on the cluster manager reasonable on extremely high-throughput clusters.

4.3.1 Segregated Storage Requirements

In order to integrate with InterWorx, the segregated storage solution must share its data partition with the cluster manager using NFS. This means the appliance or file server must support NFS, NFS quotas, and NFS file locking. It also means that the local filesystem backing the NFS share must support quotas and file locking. We recommend something common like EXT3 or EXT4.
In order to support quotas, InterWorx has developed a plugin that uses SSH to communicate with the storage server every time a new SiteWorx account is created or deleted, so that it can create identical unix users with identical UIDs and GIDs. This is necessary to ensure quotas work properly, as quotas are set via the normal quota utilities on the cluster manager. When using a standalone fileserver for SiteWorx user data, a command to set a quota limit eventually reaches the fileserver’s quota implementation as an instruction to set the quota on a specific GID. If that GID does not exist, the quota will not be set and thus will not be enforced.
Therefore, if you need storage restrictions on your cluster, you either need a file server or appliance that supports:
  • InterWorx SSHing in and doing useradd/groupadd
  • A local filesystem that supports quotas and file locking
  • NFSv3 with locking and quota support
Or you should consider simply having the SiteWorx user data served from the cluster manager.
When you mount the NFS share as /chroot, InterWorx will automatically detect that /chroot is an NFS share during cluster setup and instruct the nodes to mount the same NFS share as /chroot instead of trying to mount /chroot from the cluster manager (which is the case when SiteWorx user data is served from the cluster manager). This means that in /etc/exports on your fileserver or storage appliance, all InterWorx cluster nodes need to be permitted to access the share. Not allowing the nodes to access the standalone fileserver’s share will cause cluster setup to fail!
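For illustration, a hedged sketch of what this might look like on a standalone Linux file server (the hostnames, network range, and export options are example values; consult your appliance’s documentation for its equivalent settings):
# /etc/exports on the standalone file server - the cluster manager and every node must be covered by the export
/chroot  10.0.0.0/24(rw,sync,no_root_squash)
[root@fileserver ~]# exportfs -ra                                   # re-read /etc/exports
[root@cm ~]# showmount -e fileserver.example.com                    # verify the export is visible from the cluster
[root@cm ~]# mount -t nfs fileserver.example.com:/chroot /chroot    # mount the share as /chroot on the cluster manager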

4.3.2 Separate MySQL Server

The nice thing about a segregated MySQL server is that you can run whatever MySQL server you prefer on whatever OS you prefer - as long as InterWorx can access the MySQL root user of the server and issue commands via the PHP MySQL API, that server should suffice. It should be noted that InterWorx does not support clustered MySQL.[A]
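As a hedged illustration of the footnote above (the user, database, and password are placeholders, and SiteWorx users would normally create these through the panel rather than by hand), a MySQL user whose host is % is reachable from every server in the cluster, whereas a localhost-only user is not:
[root@mysql1 ~]# mysql -u root -p -e "CREATE USER 'site_user'@'%' IDENTIFIED BY 'changeme';"
[root@mysql1 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON site_db.* TO 'site_user'@'%';"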

4.4 Additional VPS Considerations

We occasionally have hosts set up cluster managers and nodes within VPS containers, which is fine. Note that you cannot cluster with InterWorx VPS licenses; you must use a regular unlimited-domain license in order to cluster. The VPS requirements are similar to what is discussed in section 1.2, with an additional caveat:
  • OpenVZ and Virtuozzo do not support kernel-level NFS by default.
This means that if you are a host without access to the hypervisor of your Virtuozzo/OpenVZ VPS instance, you will probably run into issues. Kernel-level NFS is required in order for the CM to share the SiteWorx user data with the nodes. In addition, the nodes require kernel-level NFS to share Apache logs back to the cluster manager, which is required to calculate statistics and data usage on the cluster manager.
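A quick, hedged way to check whether kernel-level NFS is available from inside the instance (in a restricted OpenVZ container these checks will typically fail or show nothing):
[root@node1 ~]# lsmod | grep nfsd                # is the kernel NFS server module present?
[root@node1 ~]# service nfs status               # CentOS/RHEL 5 and 6
[root@node1 ~]# systemctl status nfs-server      # CentOS/RHEL 7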

5 Installation

Installation is fairly straightforward, and this is just a brief overview of the install process, with more detail to come at a later date.
  1. If you intend to use an off-server storage solution for your SiteWorx user data, mount it as /chroot on the cluster manager (see the mount sketch after this list).
  2. Install and activate InterWorx on each server as detailed in chapter 2 and chapter 3.
  3. Log into NodeWorx on the cluster manager and set your default DNS servers.
  4. Go to Clustering ▷ Setup on the cluster manager and click the Setup button under the cluster manager panel. If you are using the split public/private network setup, you should pick your private IP address as the quorum. Otherwise, you should use the most primary IP address (i.e. the lowest-numbered eth device) as the quorum for simplicity.
  5. Now go to Clustering ▷ Nodes and prepare to add nodes to your cluster.
  6. Log into NodeWorx on your cluster nodes. You will probably have to accept the EULA and put in some temporary default DNS servers as part of the standard “first time login” process.
  7. Go to NodeWorx ▷ API Key and click “generate” to generate an API key for the node.
  8. Return to NodeWorx on the cluster manager and add the node’s IP address and API key. The IP address should be the private IP if using a split public/private network setup. First click “test” to verify that the CM can talk to the node. If the test passes, go ahead and add the node.
  9. Repeat steps 6-8 for each node in the cluster.
  10. Go to Clustering ▷ Load Balancing to set policies for how you want incoming traffic distributed. This is discussed more in the clustering guide.
  11. Set up your remote MySQL server in System Services ▷ MySQL Server ▷ Remote Servers.
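Regarding step 1, a hedged sketch of mounting off-server storage as /chroot on the cluster manager (the file server hostname and mount options are example values):
[root@cm ~]# mkdir -p /chroot
[root@cm ~]# echo "fileserver.example.com:/chroot  /chroot  nfs  rw,hard,intr  0 0" >> /etc/fstab
[root@cm ~]# mount /chroot     # mounts the NFS share using the new fstab entry
As noted in section 4.3.1, InterWorx will detect during cluster setup that /chroot is an NFS share and instruct the nodes to mount the same share.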

Footnotes

[A]It should also be noted that when your SiteWorx users create MySQL users, they should set the “host” of the MySQL user to %, in order to allow all servers to use that MySQL user. Using localhost will make PHP applications executing from the CM and nodes unable to access the MySQL server.

(C) 2017 by InterWorx LLC