Tomcat session persistence: a case study

apache in front of tomcats, three options:
        (1) apache:
                mod_proxy
                mod_proxy_http  (HTTP proxying)
                mod_proxy_balancer  (load balancing)
            tomcat:
                http connector
        (2) apache:
                mod_proxy
                mod_proxy_ajp  (AJP support)
                mod_proxy_balancer
            tomcat:
                ajp connector
        (3) apache:
                mod_jk
            tomcat:
                ajp connector
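Schemes (2) and (3) are configured in full below; scheme (1) is never shown, so here is a minimal mod_proxy_http sketch. The ServerName and back-end host are assumptions, chosen to match the nodes used later:

```apache
# Sketch of scheme (1): plain HTTP reverse proxy, no load balancing yet
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

<VirtualHost *:80>
    ServerName web1.lee.com
    ProxyRequests Off          # reverse proxy only, never a forward proxy
    ProxyPreserveHost On
    ProxyPass / http://node1.lee.com:8080/
    ProxyPassReverse / http://node1.lee.com:8080/
</VirtualHost>
```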

Solution 1: use nginx to reverse-proxy user requests to Tomcat (load balancing plus session binding)
Configure the hosts file:
192.168.20.1 node1.lee.com node1
192.168.20.2 node2.lee.com node2
192.168.20.8 node4.lee.com node4
192.168.20.7 node3.lee.com node3
Front-end nginx configuration for load balancing:
1. Define the upstream servers in the http context:
upstream tcsrvs {
        ip_hash;  # session binding: a client always hashes to the same backend
        server node1.lee.com:8080;
        server node2.lee.com:8080;
}
2. Reference it in the server block:
    location / {
        root  /usr/share/nginx/html;
    }
    location ~* \.(jsp|do)$ {
        proxy_pass http://tcsrvs;
    }
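Assembled, the two pieces sit in nginx.conf like this; the listen port and server_name are assumptions added for illustration:

```nginx
http {
    upstream tcsrvs {
        ip_hash;                      # session binding
        server node1.lee.com:8080;
        server node2.lee.com:8080;
    }

    server {
        listen 80;
        server_name web1.lee.com;     # assumed front-end name

        location / {
            root /usr/share/nginx/html;
        }
        location ~* \.(jsp|do)$ {
            proxy_pass http://tcsrvs; # only dynamic content goes to Tomcat
        }
    }
}
```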
Configure server.xml on the two back-end Tomcats.
Only the first node's configuration is shown; on the second, replace every node1 with node2:
<Engine name="Catalina" defaultHost="node1.lee.com">
  <Host name="node1.lee.com" appBase="/data/webapps/" unpackWARs="true" autoDeploy="true">
    <Context path="" docBase="/data/webapps" reloadable="true">
      <Valve className="org.apache.catalina.valves.AccessLogValve" directory="/data/logs"
             prefix="web1_access_log" suffix=".txt"
             pattern="%h %l %u %t &quot;%r&quot; %s %b" />
    </Context>
  </Host>
</Engine>
The /data/webapps/index.jsp test page:
<%@ page language="java" %>
<%@ page import="java.util.*" %>
  <html>
      <head>
        <title>JSP Test Page</title>
      </head>
      <body>
        <% out.println("Hello, world."); %>
      </body>
  </html>

Solution 2: use httpd to reverse-proxy user requests to Tomcat
Front-end httpd reverse-proxy configuration:
<proxy balancer://lbcluster1>
        BalancerMember http://172.16.100.68:8080 loadfactor=10 route=TomcatA
        BalancerMember http://172.16.100.69:8080 loadfactor=10 route=TomcatB
</proxy>
<VirtualHost *:80>
    ServerName web1.lee.com
    ProxyVia On
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        Order Deny,Allow
        Allow from all
    </Proxy>
    ProxyPass /status !
    ProxyPass / balancer://lbcluster1/
    ProxyPassReverse / balancer://lbcluster1/
    <Location />
        Order Deny,Allow
        Allow from all
    </Location>
</VirtualHost>
 
Back-end Tomcat configuration (node1 shown; on node2 set jvmRoute="TomcatB" and adjust the test page accordingly):
<!-- jvmRoute lets the front-end httpd identify exactly which instance it is talking to -->
<Engine name="Catalina" defaultHost="node1.lee.com" jvmRoute="TomcatA">
 
Edit the test page /data/webapps/index.jsp:
<%@ page language="java" %>
<html>
  <head><title>TomcatA</title></head>
  <body>
    <h1><font color="red">TomcatA.lee.com</font></h1>
    <table align="center" border="1">
      <tr>
        <td>Session ID</td>    <% session.setAttribute("lee.com","lee.com"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
    </tr>
    </table>
  </body>
</html>

Test:

Even when requests are dispatched to the same host the session id changes, let alone when they land on different hosts.

Fix: make the following two changes to enable session stickiness:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<proxy balancer://lbcluster1>
        BalancerMember http://192.168.20.1:8080 loadfactor=10 route=TomcatA
        BalancerMember http://192.168.20.2:8080 loadfactor=10 route=TomcatB
        ProxySet stickysession=ROUTEID
</proxy>

Re-testing shows the session binding now works.
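The Header directive above emits a cookie of the form ROUTEID=.TomcatA; the balancer strips the leading dot and matches the rest against each BalancerMember's route=. A quick sketch of that extraction, using a hypothetical sample header:

```shell
# Hypothetical Set-Cookie header as produced by the Header directive above
header='Set-Cookie: ROUTEID=.TomcatA; path=/'

# Pull out the route value the balancer compares against route=TomcatA / route=TomcatB
route=$(printf '%s\n' "$header" | sed -n 's/^Set-Cookie: ROUTEID=\.\([^;]*\);.*/\1/p')
echo "$route"    # → TomcatA
```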

Switching to an AJP connection only requires changing two lines:
#Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
The Header line above is commented out because, over the AJP protocol, the single ProxySet stickysession=ROUTEID directive is enough for binding.
<proxy balancer://lbcluster1>
        BalancerMember ajp://172.16.100.68:8009 loadfactor=10 route=TomcatA
        BalancerMember ajp://172.16.100.69:8009 loadfactor=10 route=TomcatB
        ProxySet stickysession=ROUTEID
</proxy>

Solution 3: use mod_jk as the reverse proxy. mod_jk can only connect to the back end over the AJP protocol.
tar  xf tomcat-connectors-1.2.40-src.tar.gz
cd tomcat-connectors-1.2.40-src/native
Prepare the build environment:
yum install httpd-devel gcc glibc-devel
yum groupinstall "Development tools"
mod_jk depends on apxs:
[root@node3 rpm]# which apxs
/usr/sbin/apxs
In the native directory, build and install the module:
./configure --with-apxs=/usr/sbin/apxs
make && make install
Load the mod_jk module in httpd.conf:
LoadModule  jk_module  modules/mod_jk.so
Verify it loaded:
[root@node3 conf]# httpd -M | grep jk
Syntax OK
 jk_module (shared)
Configure jk_module properties in httpd.conf:
JkWorkersFile  /etc/httpd/conf.d/workers.properties
JkLogFile  logs/mod_jk.log
JkLogLevel  debug
JkMount  /*  TomcatA  # TomcatA here must match the worker name defined for the back-end Tomcat's Engine
JkMount  /status/  stat1
Create /etc/httpd/conf.d/workers.properties:
worker.list=TomcatA,stat1
worker.TomcatA.port=8009
worker.TomcatA.host=192.168.20.1
worker.TomcatA.type=ajp13
worker.TomcatA.lbfactor=1
worker.stat1.type = status
Visit mod_jk's built-in status page; it also provides management functions.

Switch to load balancing, with session stickiness.
Modify httpd.conf:
JkWorkersFile  /etc/httpd/conf.d/workers.properties
JkLogFile  logs/mod_jk.log
JkLogLevel  debug
JkMount  /*  lbcluster1
JkMount  /jkstatus/  stat1
Modify /etc/httpd/conf.d/workers.properties:
worker.list = lbcluster1,stat1
worker.TomcatA.type = ajp13
worker.TomcatA.host = 192.168.20.1
worker.TomcatA.port = 8009
worker.TomcatA.lbfactor = 5
worker.TomcatB.type = ajp13
worker.TomcatB.host = 192.168.20.2
worker.TomcatB.port = 8009
worker.TomcatB.lbfactor = 5
worker.lbcluster1.type = lb
worker.lbcluster1.sticky_session = 1
worker.lbcluster1.balance_workers = TomcatA, TomcatB
worker.stat1.type = status
Testing succeeds.

Aside: using the balancer-manager page provided by mod_proxy_balancer

<proxy balancer://lbcluster1>
        BalancerMember http://192.168.20.1:8080 loadfactor=10 route=TomcatA
        BalancerMember http://192.168.20.2:8080 loadfactor=10 route=TomcatB
        ProxySet stickysession=ROUTEID
</proxy>
<VirtualHost *:80>
    ServerName web1.lee.com
    ProxyVia On
    ProxyRequests Off
    ProxyPreserveHost On
    <Location /balancer-manager>
        SetHandler balancer-manager
        ProxyPass !
        Order Deny,Allow
        Allow from all
    </Location>
 
    <Proxy *>
        Order Deny,Allow
        Allow from all
    </Proxy>
    ProxyPass /status !
    ProxyPass / balancer://lbcluster1/
    ProxyPassReverse / balancer://lbcluster1/
    <Location />
        Order Deny,Allow
        Allow from all
    </Location>
</VirtualHost>

Test:

Session replication cluster with DeltaManager

Add the following inside the Host context in server.xml:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
                channelSendOptions="8">
 
          <Manager className="org.apache.catalina.ha.session.DeltaManager"
                  expireSessionsOnShutdown="false"
                  notifyListenersOnReplication="true"/>
 
          <Channel className="org.apache.catalina.tribes.group.GroupChannel">
            <Membership className="org.apache.catalina.tribes.membership.McastService"
                        address="228.0.1.7"
                        port="45564"
                        frequency="500"
                        dropTime="3000"/>
            <!-- on node2 set address="192.168.20.2" -->
            <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                      address="192.168.20.1"
                      port="4000"
                      autoBind="100"
                      selectorTimeout="5000"
                      maxThreads="6"/>
 
            <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
              <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
            </Sender>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
            <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
          </Channel>
 
          <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
                filter=""/>
          <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
 
          <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
                    tempDir="/tmp/war-temp/"
                    deployDir="/tmp/war-deploy/"
                    watchDir="/tmp/war-listen/"
                    watchEnabled="false"/>
 
          <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
        </Cluster>
Configure our application to use the cluster feature defined above:
[root@node1 conf]# cp web.xml /data/webapps/WEB-INF/
[root@node1 conf]# vim /data/webapps/WEB-INF/web.xml
Add <distributable/> inside the <web-app> element.
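After the edit, web.xml needs nothing beyond the one empty element; a minimal sketch, with the namespace and version assumed from a stock Tomcat 8 (Servlet 3.1) web.xml:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <!-- Marks the application as distributable so the cluster manager replicates its sessions -->
    <distributable/>
</web-app>
```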
[root@node1 conf]# scp /data/webapps/WEB-INF/web.xml node2:/data/webapps/WEB-INF/
web.xml                                                100%  163KB 162.7KB/s  00:00
Check the logs; a member joining the cluster is recorded:
tail -100 /usr/local/tomcat/logs/catalina.out
01-Nov-2015 00:09:04.215 INFO [Membership-MemberAdded.] org.apache.catalina.ha.tcp.SimpleTcpCluster.memberAdded Replication member added:org.apache.catalina.tribes.membership.MemberImpl[tcp://{192, 168, 20, 2}:4000,{192, 168, 20, 2},4000, alive=1036, securePort=-1, UDP Port=-1, id={-40 -58 -73 -47 -114 -18 76 74 -81 -66 125 -30 -36 -78 -87 -23 }, payload={}, command={}, domain={}, ]

Testing shows that even when the load balancer switches hosts, the session no longer changes.

Likewise, mod_jk with an AJP back-end connection also works, as does nginx as the reverse proxy; details are omitted here.

Session server with msm (memcached-session-manager):

Built on memcached:

yum install memcached
[root@node3 ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=""
 
Provide the required Java class libraries:
[root@node1 msm-1.8.3]# ls
memcached-session-manager-1.8.3.jar      msm-javolution-serializer-1.8.3.jar
memcached-session-manager-tc8-1.8.3.jar  spymemcached-2.10.2.jar
javolution-5.5.1.jar
Place them under /usr/local/tomcat/lib on both Tomcat servers:
[root@node1 ~]# scp -r msm-1.8.3/ node2:/usr/local/tomcat/lib
The authenticity of host 'node2 (192.168.20.2)' can't be established.
RSA key fingerprint is d5:69:d0:fc:ce:90:14:14:6d:4c:52:82:53:a5:ed:0b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.20.2' (RSA) to the list of known hosts.
root@node2's password:
spymemcached-2.10.2.jar                                100%  429KB 428.8KB/s  00:00   
memcached-session-manager-tc8-1.8.3.jar                100%  10KB  10.2KB/s  00:00   
msm-javolution-serializer-1.8.3.jar                    100%  69KB  69.4KB/s  00:00   
memcached-session-manager-1.8.3.jar                    100%  144KB 143.6KB/s  00:00
javolution-5.5.1.jar                                    100%  144KB 143.6KB/s  00:00
In server.xml, define the Context within the Host section:
<Context path="" docBase="/data/webapps" reloadable="true">
        <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
                memcachedNodes="n1:192.168.20.7:11211,n2:192.168.20.8:11211"
                failoverNodes="n1"
                requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
                transcoderFactoryClass="de.javakaffee.web.msm.serializer.javolution.JavolutionTranscoderFactory"
              />
        <Valve className="org.apache.catalina.valves.RemoteAddrValve"
                deny="172\.16\.100\.100"/>
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="/data/logs"
                prefix="web1_access_log" suffix=".txt"
                pattern="%h %l %u %t &quot;%r&quot; %s %b" />
        </Context>
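msm appends the id of the memcached node holding the session to the session id (e.g. ...-n2), so failover can be observed from the JSESSIONID cookie alone. A quick sketch of reading that suffix; the sample session id is hypothetical:

```shell
# Hypothetical JSESSIONID as issued by msm: "<id>-<memcached node>"
sid='A23F7D9C4E5B6F0812-n2'

# Everything after the last dash is the memcached node currently holding the session
node=${sid##*-}
echo "$node"    # → n2
```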


Reprinted from www.linuxidc.com/Linux/2016-12/138750.htm