Configuring a centralized global datastore

To configure high availability (HA) MFT Server clusters, all nodes in the cluster must share the same server configuration.

 

This is accomplished by installing MFT Server using one of the supported third-party databases.

 

Warning: The embedded H2 database that ships with MFT Server cannot be used for high availability purposes. If you are currently using the H2 database, you can migrate the data to a supported relational database by following the instructions outlined here: Migrating existing data to a centralized global datastore

 

The third-party database serves as a shared, centralized global datastore for all nodes in the HA cluster. With this implementation, any change made to the server configuration of one MFT Server instance is automatically applied to all other nodes in the cluster.

 

The documentation for each installation (see the links below) provides instructions on how to install MFT Server using a supported third-party database. Regardless of the operating system, you must create the database before starting the installation process.
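
For example, on MySQL the database can be created with the client before running the installer. The database name, user, and password below are illustrative placeholders, not values required by MFT Server; any supported third-party database works, so consult its own documentation for the equivalent steps:

```shell
# Hypothetical example using the mysql client; all names are placeholders.
mysql -u root -p <<'SQL'
CREATE DATABASE mft_server CHARACTER SET utf8mb4;
CREATE USER 'mftuser'@'%' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON mft_server.* TO 'mftuser'@'%';
FLUSH PRIVILEGES;
SQL
```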

 

When performing an installation on your primary MFT Server system, keep the following in mind:

 

  • Do not use the built-in H2 database. As previously mentioned, it is not supported in an HA environment. The installation instructions describe both the H2 option and the supported third-party database options; be sure to choose a third-party database.

     

  • When you are prompted to enter an IP address for the two MFT Server services (REST and Management), you must use 0.0.0.0 so that the services listen on all network interfaces.

Note: The MFT Server Manager UI is built on REST, and the same REST API is available for programmatic access. The Management (sometimes called "Administrative") service lets you manage MFT Server programmatically using the Java API.
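
As a quick sanity check after installation, you can confirm that both services are bound to all interfaces. The port numbers below are the defaults used later in this guide; `ss` is a Linux utility, so on Windows or Mac OS X use `netstat -an` instead:

```shell
# Linux sketch: list listening TCP sockets and filter for the default
# REST (11880) and Management (10880) ports. Each service should show a
# local address of 0.0.0.0:<port> rather than 127.0.0.1:<port>.
ss -tln | grep -E ':(11880|10880)\b'
```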

Installation documentation


Once you have a primary MFT Server installation configured with a supported third-party database, the instructions for secondary MFT Server installations vary depending on the operating system you are installing on.

 

For GUI-based installations (Windows and Mac OS X), follow the instructions below.

 

  • Run the installer on the secondary system, as usual.

     

  • Use the same license key as the primary MFT Server.

     

  • Use the built-in H2 database, following the instructions in the installation documentation that describe how to configure it. Here, H2 serves only as a temporary placeholder to get through the installation process.

     

  • Like the primary installation, when configuring the REST and Management ("Administrative") services, you must use the IP address 0.0.0.0.

     

    After the secondary installation is complete, copy the etc/database.properties file (which includes the third-party database URL) from the primary MFT Server installation to the secondary MFT Server installation. To do this:

    • Stop the MFT Server service on the secondary installation.

       

    • Copy the primary server's <MFT Server Installation directory>/etc/database.properties file to the secondary's <MFT Server installation directory>/etc directory (overwrite the existing file).

       

    • Start the MFT Server service.
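
The three steps above can be sketched as follows. This block is a self-contained demonstration that uses temporary directories in place of the real installation directories, so the paths and the property contents (including the url= key) are stand-ins; on a real system you would stop the service, copy between the actual <MFT Server installation directory>/etc directories, and start the service again:

```shell
# Stand-in directories for the primary and secondary installations.
PRIMARY_HOME=$(mktemp -d)
SECONDARY_HOME=$(mktemp -d)
mkdir -p "$PRIMARY_HOME/etc" "$SECONDARY_HOME/etc"

# The primary's file points at the shared third-party database;
# the secondary's still points at the placeholder H2 database.
# (Property names here are illustrative, not the exact keys MFT Server writes.)
echo "url=jdbc:mysql://db-host:3306/mft_server" > "$PRIMARY_HOME/etc/database.properties"
echo "url=jdbc:h2:mft_server" > "$SECONDARY_HOME/etc/database.properties"

# (On a real system: stop the MFT Server service on the secondary here.)

# Keep a backup of the secondary's file, then overwrite it with the primary's copy.
cp "$SECONDARY_HOME/etc/database.properties" "$SECONDARY_HOME/etc/database.properties.bak"
cp "$PRIMARY_HOME/etc/database.properties" "$SECONDARY_HOME/etc/database.properties"

# (On a real system: start the MFT Server service on the secondary here.)

cat "$SECONDARY_HOME/etc/database.properties"
```

After the copy, both installations read their configuration from the same shared datastore.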

 

For command-line installations (Linux, etc.), follow the instructions below.

 

  • Run the installer as usual.

     

  • Run the following commands:

  • ./js-database-configuration -configure -url <URL of the primary node> -user <DB user> -password <DB password>. For more details, see js-database-configuration. Because this command points the secondary installation at the primary node's database URL, both installations (primary and secondary) use the shared datastore; no further database action is required.
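
Filled in, the command might look like the following. The JDBC URL, user, and password are placeholders; the URL must be the same database URL the primary node was installed with:

```shell
# Illustrative values only; substitute the JDBC URL and credentials
# that the primary MFT Server installation already uses.
./js-database-configuration -configure \
  -url "jdbc:mysql://db-host:3306/mft_server" \
  -user mftuser \
  -password change-me
```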

     

    The next two commands set the host IP, port, and timeout for the MFT Server services: the REST and Management ("Administrative") services, respectively.


  • ./js-web-configuration -host [ip address] -port [port] -timeout [timeout in seconds]. The host IP must be 0.0.0.0. The default port is 11880.

 

Example: ./js-web-configuration -host 0.0.0.0 -port 11880 -timeout 10

 

  • ./js-server-configuration -host [ip address] -port [port] -timeout [timeout in seconds]. The host IP must be 0.0.0.0. The default port is 10880.

 

Example: ./js-server-configuration -host 0.0.0.0 -port 10880 -timeout 10

 

  • Copy the license key file to the etc directory under MFT Server's installation directory (e.g., /opt/mft_server/etc). This must be done before attempting to start the service. Use the same license key file as the primary MFT Server installation.
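
For example, on Linux (the license file name and the service command below are assumptions; use your actual license file and whatever service mechanism your installation registered):

```shell
# Copy the same license key file used by the primary installation,
# then start the service. Names below are placeholders.
cp /path/to/license.key /opt/mft_server/etc/
systemctl start mft-server   # or the startup script your install provides
```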

 

Note: Several steps are not required when installing MFT Server on a secondary non-Windows system using the command line, because they were already performed during the primary installation. These include initializing the database (which only needs to be done once), creating a user name and password for logging in to the MFT Server Manager UI, and creating a Server Key. The admin credentials and Server Key entered during the primary installation are stored in the global (shared) datastore.

 

Triggers and Directory Monitors

 

Below you will find information on how Triggers and Directory Monitors are handled in a high availability MFT Server environment.

 

  • Triggers that fire due to user activity (e.g., file upload or download) are tied to the node that is managing the user session.

     

  • Triggers that are time-based are raised on all nodes. If you want such a trigger to run on only one node, use the Health Monitor Trigger Action.

     

  • Directory Monitor behavior is determined by the Raise events on <First> <All> instance(s) field when adding or editing a Directory Monitor.

     

    • If First is selected, the first server in the cluster (the active MFT Server) listens for directory monitor events. If the active (First) server goes down, the next server in the queue automatically takes over listening for directory events. This avoids duplicate events when using a cluster.

       

    • If All is selected, all servers in an Active/Active cluster listen for directory monitor events. Note: When MFT Server HA is configured in an Active/Passive configuration, only First is applicable.