Apache v2.2.22
Tomcat v6.0.32
instance1: c:\cluster1
instance2: c:\cluster2
1) There are two options for configuring communication between the web server and the app server: mod_jk and mod_proxy. This guide uses mod_jk; a mod_proxy_balancer sketch is shown after the comparison for reference.
A pros/cons comparison of the two modules, taken from http://blog.jboss.org/, is as follows:
mod_proxy:
* Pros:
  o No need for a separate module compilation and maintenance. mod_proxy, mod_proxy_http, mod_proxy_ajp and mod_proxy_balancer come as part of the standard Apache 2.2+ distribution.
  o Ability to use HTTP, HTTPS or AJP protocols, even within the same balancer.
* Cons:
  o mod_proxy_ajp does not support packet sizes larger than 8K.
  o Basic load balancer.
  o Does not support domain model clustering.
mod_jk:
* Pros:
  o Advanced load balancer.
  o Advanced node failure detection.
  o Support for large AJP packet sizes.
* Cons:
  o Need to build and maintain a separate module.
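For reference only, since this guide continues with mod_jk: a minimal sketch of the mod_proxy_balancer alternative, assuming the same two AJP ports and jvmRoute names configured later in this guide (the balancer name "mycluster" is arbitrary), might look like this in httpd.conf:
# modules shipped with Apache 2.2+
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
# one balancer with two AJP members; route must match each instance's jvmRoute
<Proxy balancer://mycluster>
BalancerMember ajp://localhost:8889 route=cluster1
BalancerMember ajp://localhost:8899 route=cluster2
</Proxy>
ProxyPass / balancer://mycluster/ stickysession=JSESSIONID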
1.1) JK connector
a) download mod_jk.so from http://tomcat.apache.org/download-connectors.cgi and put it into the modules folder of Apache.
b) create two new files, mod_jk.conf and workers.properties, and put them into the conf folder of Apache.
-- content of mod_jk.conf (note that Apache does not allow comments on the same line as a directive, so each comment goes on its own line)
# load mod_jk.so
LoadModule jk_module modules/mod_jk.so
# load the workers file
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info
# forward all incoming requests to the loadbalancer worker
JkMount /* loadbalancer
# alternatively, forward only requests for jsp files to the loadbalancer worker
#JkMount /*.jsp loadbalancer
HostnameLookups Off
-- content of workers.properties
# the list of workers: the load balancer plus the two Tomcat instances
worker.list=loadbalancer,cluster1,cluster2
#========cluster1========
worker.cluster1.port=8889
worker.cluster1.host=localhost
worker.cluster1.type=ajp13
worker.cluster1.lbfactor=1
#========cluster2========
worker.cluster2.port=8899
worker.cluster2.host=localhost
worker.cluster2.type=ajp13
# the higher the lbfactor of a Tomcat instance, the more requests it will receive, and vice versa
worker.cluster2.lbfactor=1
#========loadbalancer======
# the load balancer worker
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=cluster1,cluster2
# keep requests belonging to the same session (i.e. the same user) on the same worker
worker.loadbalancer.sticky_session=false
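The lbfactor is a relative weight. For example, if instance1 ran on a more powerful machine, it could be made to take roughly twice the traffic of instance2:
worker.cluster1.lbfactor=2
worker.cluster2.lbfactor=1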
c) add one more directive at the end of httpd.conf to load mod_jk.conf:
Include conf/mod_jk.conf
2) edit server.xml in instance cluster1
2.1) change the shutdown listen port from <Server port="8005" shutdown="SHUTDOWN"> to <Server port="8006" shutdown="SHUTDOWN">
2.2) change the listen port of the HTTP connector
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
Java AJP Connector: /docs/config/ajp.html
APR (HTTP/AJP) Connector: /docs/apr.html
Define a non-SSL HTTP/1.1 Connector on port 8080
-->
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
to
<Connector port="8081" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
2.3) change the AJP connector port
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
to
<Connector port="8889" protocol="AJP/1.3" redirectPort="8443" />
2.4) uncomment the Engine element that specifies jvmRoute and set jvmRoute to cluster1; the value must match the worker name defined in workers.properties
<!-- You should set jvmRoute to support load-balancing via AJP ie :-->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="cluster1">
2.5) uncomment this element to enable all-to-all session replication using the DeltaManager, which replicates session deltas to all the other nodes in the cluster.
The other option is to replicate sessions to a backup node only; see http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
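The single line above relies on the defaults. If you prefer to spell out the replication manager, a sketch along the lines of the Tomcat 6 cluster-howto (still using the default all-to-all DeltaManager) would be:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
  <!-- DeltaManager replicates session deltas to every other node in the cluster -->
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
</Cluster>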
3) edit server.xml in instance cluster2 (instance2 keeps the default shutdown port 8005 and HTTP port 8080, so only the AJP port and jvmRoute need to change)
3.1) <!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
to
<Connector port="8899" protocol="AJP/1.3" redirectPort="8443" />
3.2) uncomment the engine element and specify the value for jvmRoute
<Engine name="Catalina" defaultHost="localhost" jvmRoute="cluster2">
3.3) uncomment this element
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
4) create a new web application; the <distributable/> element must be added to its web.xml so that its sessions are replicated
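A minimal web.xml for the test application might look like this (assuming Servlet 2.5, the version supported by Tomcat 6; the display-name is arbitrary):
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
  <display-name>cluster-test</display-name>
  <!-- mark the application as distributable so its sessions are replicated -->
  <distributable/>
</web-app>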
5) add an index.jsp file for testing.
<%@ page contentType="text/html; charset=utf-8" %>
<%@ page import="java.util.*" %>
<html><head><title>Cluster App Test</title></head>
<body>
Server Info:
<%
    // show which instance served this request
    out.println(request.getLocalAddr() + " : " + request.getLocalPort() + "<br>");
%>
<%
    out.println("<br> ID " + session.getId() + "<br>");
    // store the submitted name/value pair in the session
    String dataName = request.getParameter("dataName");
    if (dataName != null && dataName.length() > 0) {
        String dataValue = request.getParameter("dataValue");
        session.setAttribute(dataName, dataValue);
    }
    // dump all session attributes so replication can be verified on the other instance
    out.println("<b>Session List</b><br>");
    System.out.println("============================");
    Enumeration e = session.getAttributeNames();
    while (e.hasMoreElements()) {
        String name = (String) e.nextElement();
        String value = session.getAttribute(name).toString();
        out.println(name + " = " + value + "<br>");
        System.out.println(name + " = " + value);
    }
%>
<form action="index.jsp" method="POST">
Name:<input type="text" size="20" name="dataName"><br/>
Value:<input type="text" size="20" name="dataValue"><br/>
<input type="submit">
</form>
</body>
</html>
6) test
This setup works with the scenario described in http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html#How_it_Works
If sticky_session is set to true,
worker.loadbalancer.sticky_session=true
requests for an existing session can still be forwarded to the other worker, so the following setting needs to be added to workers.properties to force sticky sessions (note that with sticky_session_force, requests for a session whose worker is down are rejected instead of failing over):
worker.loadbalancer.sticky_session_force=1
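Putting that together, the loadbalancer section of workers.properties for strict sticky sessions ends up as:
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=cluster1,cluster2
worker.loadbalancer.sticky_session=true
worker.loadbalancer.sticky_session_force=1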