In a previous post, we explored a basic session-based replication setup for SpagoBI. However, that configuration lacked true session redundancy.
This post provides a comprehensive guide to configuring SpagoBI Tomcat clustering using `mod_jk` and `httpd` on CentOS, focusing on in-memory session replication for high availability. We’ll use `mod_jk` to balance requests across all Tomcat servers in the cluster, with Tomcat’s own clustering support replicating sessions between them, ensuring seamless failover and an improved user experience.
We’ve chosen `mod_jk` over `mod_proxy` for the following reasons:

- Maturity and Community Support: `mod_jk` is a well-established load-balancing connector with a large and active user base within the Tomcat community. This translates to better documentation, readily available support, and a proven track record.
- Independent Development Cycle: `mod_jk`’s development is independent of Apache HTTPD releases. This allows it to incorporate new features and improvements more rapidly than `mod_proxy`, whose updates are tied to the Apache HTTPD release cycle. As a result, `mod_jk` tends to be more current with the latest advancements.
`mod_jk` acts as the bridge between the Apache HTTPD web server (which handles client requests) and the Tomcat application servers. It uses the AJP (Apache JServ Protocol) for efficient communication, ensuring fast and reliable data transfer within the cluster.
1. Setting the Stage: Installing Apache Tomcat and HTTPD
First, we need to install Apache Tomcat and HTTPD on our CentOS server. We’ll create two Tomcat instances for our cluster.
1.1. Extracting and Renaming Apache Tomcat
Download the Apache Tomcat distribution from the Apache Tomcat website. For this example, we’re using version 7.0.59. Adjust the commands accordingly if you’re using a different version.
[ahmed@ahmed-server ~]# tar xvzf apache-tomcat-7.0.59.tar.gz -C /opt
[ahmed@ahmed-server ~]# mv /opt/apache-tomcat-7.0.59 /opt/apache-tomcat-7.0.59-8009
Here, we extract the Tomcat archive to the `/opt` directory and rename the resulting directory to `apache-tomcat-7.0.59-8009`. The `8009` suffix indicates that this instance will use port 8009 for AJP communication.
1.2. Creating a Second Tomcat Instance
We create a second Tomcat instance by copying the first one. This instance will be configured to use AJP port 8019.
[ahmed@ahmed-server ~]# cp -rf /opt/apache-tomcat-7.0.59-8009 /opt/apache-tomcat-7.0.59-8019
1.3. Installing HTTPD and HTTPD-devel
To build `mod_jk` in the subsequent steps, we require the `httpd-devel` package.
[ahmed@ahmed-server ~]# yum install httpd httpd-devel
2. Building mod_jk from Source
Next, we download the `tomcat-connectors` source code and build the `mod_jk` module.
2.1. Downloading tomcat-connectors Source
Download the `tomcat-connectors` source archive from the Apache Tomcat Connectors download page.
[ahmed@ahmed-server ~]$ cd Downloads
[ahmed@ahmed-server Downloads]$ wget \
http://www.apache.org/dist/tomcat/tomcat-connectors/jk/tomcat-connectors-1.2.40-src.tar.gz
[ahmed@ahmed-server Downloads]$ tar xzf tomcat-connectors-1.2.40-src.tar.gz
[ahmed@ahmed-server Downloads]$ ls -l tomcat-connectors-1.2.40-src
total 64
drwxrwxr-x. 3 ahmed ahmed 4096 Mar 4 22:43 build
drwxr-xr-x. 2 ahmed ahmed 4096 Mar 4 22:57 conf
drwxr-xr-x. 10 ahmed ahmed 4096 Apr 11 2014 docs
-rw-r--r--. 1 ahmed ahmed 7819 Mar 31 2014 HOWTO-RELEASE.txt
drwxr-xr-x. 6 ahmed ahmed 4096 Apr 11 2014 jkstatus
-rw-r--r--. 1 ahmed ahmed 13597 May 4 2008 LICENSE
drwxr-xr-x. 9 ahmed ahmed 4096 Mar 4 22:54 native
-rw-r--r--. 1 ahmed ahmed 269 Jan 3 2014 NOTICE
-rw-r--r--. 1 ahmed ahmed 1238 Mar 18 2012 README.txt
drwxr-xr-x. 2 ahmed ahmed 4096 Apr 11 2014 support
drwxr-xr-x. 4 ahmed ahmed 4096 Apr 11 2014 tools
drwxr-xr-x. 9 ahmed ahmed 4096 Apr 11 2014 xdocs
[ahmed@ahmed-server Downloads]$ cd tomcat-connectors-1.2.40-src/native/
2.2. Configuring mod_jk
Configure the `mod_jk` source to build against your Apache HTTPD installation.
[ahmed@ahmed-server native]$ ./configure --with-apxs=/usr/sbin/apxs
The `--with-apxs` option specifies the path to the `apxs` tool, which is used to build Apache modules.
2.3. Building mod_jk
Build the `mod_jk` module using the `make` command.
[ahmed@ahmed-server native]$ make
[ahmed@ahmed-server native]$ ls
aclocal.m4 buildconf.sh config.log configure iis Makefile.am README.txt TODO
apache-1.3 BUILDING.txt config.nice configure.ac libtool Makefile.in scripts
apache-2.0 common config.status docs Makefile netscape STATUS.txt
The compiled module (`mod_jk.so`) will be located in the `apache-2.0` directory.
[ahmed@ahmed-server native]$ cd apache-2.0
[ahmed@localhost apache-2.0]$ ls -l mod_jk.so
-rwxrwxr-x. 1 ahmed ahmed 1161265 Mar 4 22:55 mod_jk.so
2.4. Installing mod_jk
Copy the compiled `mod_jk.so` module to the Apache HTTPD modules directory.
sudo cp /home/ahmed/Downloads/tomcat-connectors-1.2.40-src/native/apache-2.0/mod_jk.so \
/usr/lib64/httpd/modules/
2.5. Copying workers.properties
Copy the `workers.properties` file to the Apache HTTPD configuration directory. This file defines the Tomcat workers that `mod_jk` will use for load balancing.
sudo cp /home/ahmed/Downloads/tomcat-connectors-1.2.40-src/conf/workers.properties \
/etc/httpd/conf/
3. Configuring Apache HTTPD for mod_jk
Now we need to configure Apache HTTPD to load the `mod_jk` module and define the load-balancing rules.
3.1. Editing httpd.conf
Open the `/etc/httpd/conf/httpd.conf` file and add the following lines after the `LoadModule` section:
# Load mod_jk module
LoadModule jk_module /usr/lib64/httpd/modules/mod_jk.so
# Specify path to worker configuration file
JkWorkersFile /etc/httpd/conf/workers.properties
# Configure logging and memory
JkShmFile /var/log/httpd/mod_jk.shm
JkLogFile /var/log/httpd/mod_jk.log
JkLogLevel info
Add the following configuration at the end of the `httpd.conf` file:
# Configure monitoring
JkMount /jkmanager/* jk-status
<Location /jkmanager>
Order deny,allow
Deny from all
Allow from localhost
</Location>
# Configure applications
JkMount /* balancer
Explanation of the Parameters:

- `LoadModule`: Loads the `mod_jk` module into Apache HTTPD. The file extension may vary based on the operating system.
- `JkWorkersFile`: Specifies the path to the `workers.properties` file, which contains the definitions of the Tomcat workers.
- `JkShmFile`: Sets the path to the shared memory file used by `mod_jk`. It’s generally good practice to keep this file in the logs directory.
- `JkLogFile`: Defines the path to the `mod_jk` log file.
- `JkLogLevel`: Sets the logging level for `mod_jk`. Valid values are “debug”, “info”, and “error”, in descending order of verbosity.
- `JkMount`: Maps URL patterns to specific workers defined in the `workers.properties` file.
  - `JkMount /jkmanager/* jk-status`: Maps the `/jkmanager` URL to the `jk-status` worker, which provides a monitoring interface for `mod_jk`.
  - `JkMount /* balancer`: Maps all other requests to the `balancer` worker, which handles load balancing across the Tomcat instances.
- `<Location /jkmanager>`: Defines access control for the `/jkmanager` URL. In this case, it allows access only from localhost, which is a security best practice.
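Before (re)starting httpd, it is worth letting it validate the merged configuration; a typo in these directives or a wrong `mod_jk.so` path is reported here instead of failing at startup:

```shell
# Validate the Apache configuration, including the new mod_jk directives
httpd -t
# On success this prints: Syntax OK
```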
4. Configuring Cluster Workers in workers.properties
The `workers.properties` file defines the Tomcat instances (workers) that `mod_jk` will use for load balancing and failover.
4.1. Editing workers.properties
Open the `/etc/httpd/conf/workers.properties` file and add the following configuration:
# Define status/manager workers
worker.list=jk-status,jk-manager
worker.jk-status.type=status
worker.jk-status.read_only=true
worker.jk-manager.type=status
# Define the load balancer worker
worker.list=balancer
worker.balancer.type=lb
worker.balancer.balance_workers=spagobi-node-1,spagobi-node-2
# Worker for spagobi-node-1
worker.spagobi-node-1.type=ajp13
worker.spagobi-node-1.host=localhost
worker.spagobi-node-1.port=8009
worker.spagobi-node-1.activation=A
# Worker for spagobi-node-2
worker.spagobi-node-2.type=ajp13
worker.spagobi-node-2.host=localhost
worker.spagobi-node-2.port=8019
worker.spagobi-node-2.activation=A
Explanation of the Configuration:

- `worker.list`: Defines the list of workers. Here, we define `jk-status`, `jk-manager`, and `balancer`.
- `worker.jk-status.type=status` and `worker.jk-manager.type=status`: Define status workers for monitoring.
- `worker.balancer.type=lb`: Defines a load balancer worker.
- `worker.balancer.balance_workers`: Specifies the Tomcat workers that the load balancer will use; in this case, `spagobi-node-1` and `spagobi-node-2`.
- `worker.spagobi-node-1.type=ajp13` and `worker.spagobi-node-2.type=ajp13`: Specify the worker type as `ajp13`, indicating that it will use the AJP protocol.
- `worker.spagobi-node-1.host` and `worker.spagobi-node-2.host`: Define the hostname or IP address of the Tomcat instances.
- `worker.spagobi-node-1.port` and `worker.spagobi-node-2.port`: Define the AJP port for each Tomcat instance.
- `worker.spagobi-node-1.activation` and `worker.spagobi-node-2.activation`: Determine how the node is used:
  - `A`: Active - the node is fully used.
  - `D`: Disabled - the node is only used if sticky sessions require it.
  - `S`: Stopped - the node is not used.
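Adding a third node later means repeating the same four properties with a new name and port (and appending the name to `worker.balancer.balance_workers`). A small shell helper can stamp out consistent entries; `gen_worker` is a hypothetical function, and the node name and port below are assumptions for illustration only:

```shell
#!/bin/sh
# Emit the four ajp13 worker properties for one Tomcat node.
gen_worker() {
    name="$1"
    port="$2"
    printf 'worker.%s.type=ajp13\n' "$name"
    printf 'worker.%s.host=localhost\n' "$name"
    printf 'worker.%s.port=%s\n' "$name" "$port"
    printf 'worker.%s.activation=A\n' "$name"
}

# Example: a hypothetical third node on AJP port 8029
gen_worker spagobi-node-3 8029
```

Appending its output to `/etc/httpd/conf/workers.properties` keeps the per-node blocks uniform as the cluster grows.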
5. Configuring Tomcat for Clustering
To enable in-memory session replication, we need to configure the `server.xml` file for each Tomcat instance.
5.1. Editing server.xml
Edit the `server.xml` file for the first Tomcat instance (`/opt/apache-tomcat-7.0.59-8009/conf/server.xml`). We’ll then copy the modified file to the second instance, making the necessary port adjustments.
5.1.1. Setting the SHUTDOWN Port
Configure a unique shutdown port for each Tomcat instance:
- `spagobi-node-1`: 8005
- `spagobi-node-2`: 8015
Modify the `<Server>` tag as follows:
<Server port="8005" shutdown="SHUTDOWN">
5.1.2. Disabling the HTTP Connector
Since we’re using `mod_jk` for all requests, we can disable the default HTTP connector on port 8080. Comment out the following section:
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
5.1.3. Configuring the AJP Connector
Configure the AJP connector. Note the AJP and SSL redirect ports for each instance:

- `spagobi-node-1`: AJP - 8009, SSL - 8443
- `spagobi-node-2`: AJP - 8019, SSL - 8444
For `spagobi-node-1`, the configuration should look like this:
<Connector port="8009" URIEncoding="UTF-8" protocol="AJP/1.3" redirectPort="8443" />
5.1.4. Setting the Node Name (jvmRoute)
Set the `jvmRoute` attribute in the `<Engine>` tag to match the corresponding worker name in `workers.properties`. This is crucial for session affinity.
<Engine name="Catalina" defaultHost="localhost" jvmRoute="spagobi-node-1">
5.1.5. Adding the Cluster Configuration
Add the `<Cluster>` tag within the `<Engine>` tag:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="spagobi-node-1">
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564"
frequency="500"
dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="auto"
port="4000"
autoBind="100"
selectorTimeout="5000"
maxThreads="6"/>
<Interceptor
className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor
className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
</Engine>
Explanation of the Cluster Configuration:
- `<Cluster>`: The main element for clustering configuration. `channelSendOptions="8"` enables asynchronous communication.
- `<Manager>`: Configures the session manager. `DeltaManager` replicates session changes to every node in the cluster. `expireSessionsOnShutdown="false"` prevents session loss when a node shuts down, and `notifyListenersOnReplication="true"` notifies listeners when a session is updated.
- `<Channel>`: Handles communication between cluster nodes using the Tribes component.
- `<Membership>`: Defines how nodes discover each other, using multicast communication by default.
- `<Receiver>`: Configures how messages are received from other nodes; `address="auto"` lets each node bind its own address automatically.
- `<Sender>`: Configures how messages are sent to other nodes, using the NIO transport for best performance.
- `<Interceptor>`: Modifies messages sent between nodes. `TcpFailureDetector` verifies a suspected failure over a TCP connection before the node is dropped, and `MessageDispatch15Interceptor` dispatches replication messages asynchronously.
- `<Valve>`: `ReplicationValve` filters which requests trigger session replication, and `JvmRouteBinderValve` rebinds a session to the correct JVM route after a failover.
- `<ClusterListener>`: Listens for cluster messages and intercepts those that match its specification.
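The nodes can only form a cluster if this traffic is allowed between them. On CentOS with iptables, rules along these lines are a reasonable starting sketch; the ports come from the configuration above (multicast membership on 228.0.0.4:45564/udp, the NIO receiver on TCP 4000, with `autoBind="100"` allowing fallback ports up to 4100), and you should adapt them to your own firewall policy:

```shell
# Allow cluster membership multicast (McastService)
iptables -A INPUT -d 228.0.0.4 -p udp --dport 45564 -j ACCEPT
# Allow session replication traffic (NioReceiver, autoBind range)
iptables -A INPUT -p tcp --dport 4000:4100 -j ACCEPT
service iptables save
```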
5.2. Copying and Modifying server.xml for the Second Instance
Copy the modified `server.xml` file from the first instance to the second instance:
cp /opt/apache-tomcat-7.0.59-8009/conf/server.xml /opt/apache-tomcat-7.0.59-8019/conf/server.xml
Then, edit `/opt/apache-tomcat-7.0.59-8019/conf/server.xml` and make the following changes:

- Set `<Server port="8015" shutdown="SHUTDOWN">`
- Set `<Connector port="8019" URIEncoding="UTF-8" protocol="AJP/1.3" redirectPort="8444" />`
- Set `<Engine name="Catalina" defaultHost="localhost" jvmRoute="spagobi-node-2">`
6. Starting the Servers and Verifying the Setup
With the configuration complete, we can now start the Tomcat instances and the Apache HTTPD web server.
6.1. Starting Tomcat
Start both Tomcat instances:
/opt/apache-tomcat-7.0.59-8009/bin/startup.sh
/opt/apache-tomcat-7.0.59-8019/bin/startup.sh
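To confirm the two nodes actually found each other, watch each instance’s log after startup; with the multicast membership configured above, Tomcat logs a membership message (wording varies by version, typically “Replication member added”) when the peer joins:

```shell
# Watch node 1's log for cluster membership events
tail -f /opt/apache-tomcat-7.0.59-8009/logs/catalina.out | grep -i member
```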
6.2. Starting HTTPD
Start the Apache HTTPD web server:
service httpd start
6.3. Testing the Setup
- Access the SpagoBI Application: Open your web browser and navigate to `http://localhost/SpagoBI`. This should direct you to the SpagoBI user interface.
- Check the mod_jk Status: Access the `mod_jk` status page by navigating to `http://localhost/jkmanager`. This page provides valuable information about the status of the Tomcat workers and the load-balancing configuration.
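Session affinity is also easy to spot from the browser side: with `jvmRoute` set, Tomcat appends the node name to the `JSESSIONID` (for example `ABC123.spagobi-node-1`). The sketch below extracts that suffix from a `Set-Cookie` header; `route_of` is a hypothetical helper, and the header line is a made-up example — in practice you would feed it the output of `curl -s -I http://localhost/SpagoBI/`:

```shell
#!/bin/sh
# Extract the jvmRoute suffix from a JSESSIONID Set-Cookie header.
route_of() {
    echo "$1" | sed -n 's/.*JSESSIONID=[^.;]*\.\([^;]*\);.*/\1/p'
}

# Hypothetical header for illustration:
header='Set-Cookie: JSESSIONID=ABC123.spagobi-node-1; Path=/SpagoBI'
route_of "$header"   # -> spagobi-node-1
```

If the suffix changes on every request, sticky sessions are not working and `jvmRoute` probably does not match the worker name in `workers.properties`.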
6.4. Interpreting the JK Manager Status
The JK Manager status page displays information about each worker. Key indicators include:
- Active: Indicates whether the worker is currently handling requests.
- Busy: Shows the number of active requests being handled by the worker.
- Error: Indicates any errors encountered by the worker.
The legend provides a visual representation of the worker status:
- Green: Active and healthy
- Yellow: Active, but potentially experiencing issues
- Red: Inactive or experiencing errors
By monitoring the JK Manager status page, you can ensure that your Tomcat cluster is functioning correctly and that load is being distributed evenly across the nodes.
This detailed guide provides a comprehensive approach to configuring SpagoBI Tomcat clustering with `mod_jk` for in-memory session replication. By following these steps, you can achieve a highly available and scalable SpagoBI deployment, ensuring a robust and reliable business intelligence platform.