Tomcat Source Code Series --- HTTP Request Processing



Analyzing how Tomcat processes an HTTP request has to start from the architecture of the Connector. As described earlier, the Connector accepts requests, wraps them into a Request and a Response, and hands them to the Container for processing; once processing is finished, the Connector returns the result to the client.
Structure of the Connector (original diagram omitted)
The Connector uses a ProtocolHandler to process requests. The ProtocolHandler consists of three parts: the Endpoint, the Processor and the Adapter.
- The Endpoint handles the underlying socket network connections, the Processor turns the sockets received on those connections into a Request, and the Adapter hands the Request to the Container for the actual processing.
- Since the Endpoint deals with the low-level socket connections, it is the part that implements the TCP/IP level; the Processor implements the HTTP protocol, and the Adapter adapts the request to the Servlet container.
- The abstract class AbstractEndpoint defines two inner classes, Acceptor and AsyncTimeout, and a Handler interface. The Acceptor listens for incoming connections, AsyncTimeout checks asynchronous requests for timeouts, and the Handler processes the accepted sockets, internally delegating to a Processor.

The above covered the Connector's architecture and how its key classes cooperate on a request; the following sections look at how each of these classes handles the flow of a single HTTP request.

Connector

As the Connector architecture diagram shows, the Connector class has two important fields: the protocolHandler (protocol) and the adapter. Being a connector, it has to accept connections from clients, parse the data arriving on the client socket into the HTTP format, and then hand the HTTP data to the container for processing. The protocol handler accepts the connection and packages the data; the adapter adapts the packaged data to the container.

  1. The Connector constructor
public Connector(String protocol) {
    setProtocol(protocol);
    // Instantiate protocol handler
    ProtocolHandler p = null;
    try {
        Class<?> clazz = Class.forName(protocolHandlerClassName);
        p = (ProtocolHandler) clazz.getConstructor().newInstance();
    } catch (Exception e) {
        log.error(sm.getString(
                "coyoteConnector.protocolHandlerInstantiationFailed"), e);
    } finally {
        this.protocolHandler = p;
    }

    if (Globals.STRICT_SERVLET_COMPLIANCE) {
        uriCharset = StandardCharsets.ISO_8859_1;
    } else {
        uriCharset = StandardCharsets.UTF_8;
    }
}

In the Connector constructor, the ProtocolHandler is instantiated via reflection. The protocol itself is configured in conf/server.xml and is assigned through setProtocol.

In Tomcat 8 the default implementation is Http11NioProtocol, whose constructor instantiates a NioEndpoint.
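
For reference, such a connector is typically declared in conf/server.xml along the following lines; the attribute values here are illustrative rather than taken from this article. On Tomcat 8, protocol="HTTP/1.1" also resolves to Http11NioProtocol, or the handler class can be named explicitly:

<!-- Illustrative conf/server.xml fragment; port and timeout values are placeholders -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           redirectPort="8443" />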

  2. Connector initialization
    The Connector's initInternal method does two main things:
    • Instantiate the adapter, a CoyoteAdapter
    • Call the ProtocolHandler's init method
@Override
protected void initInternal() throws LifecycleException {
    ……
    adapter = new CoyoteAdapter(this);
    protocolHandler.setAdapter(adapter);
    ……
    try {
        protocolHandler.init();
    } catch (Exception e) {
       ……
    }
}

The ProtocolHandler's init method goes through the init method of the parent class AbstractProtocol:

public void init() throws Exception {
   ……
    String endpointName = getName();
    endpoint.setName(endpointName.substring(1, endpointName.length()-1));
    endpoint.setDomain(domain);

    endpoint.init();
}

That parent init method calls the endpoint's init method, and the endpoint's init calls bind, which binds the underlying server socket to its port and starts listening:

public void bind() throws Exception {

   serverSock = ServerSocketChannel.open();
   socketProperties.setProperties(serverSock.socket());
   InetSocketAddress addr = (getAddress()!=null?new InetSocketAddress(getAddress(),getPort()):new InetSocketAddress(getPort()));
   serverSock.socket().bind(addr,getAcceptCount());
   serverSock.configureBlocking(true); //mimic APR behavior

   // Initialize thread count defaults for acceptor, poller
   if (acceptorThreadCount == 0) {
       // FIXME: Doesn't seem to work that well with multiple accept threads
       acceptorThreadCount = 1;
   }
   if (pollerThreadCount <= 0) {
       //minimum one poller thread
       pollerThreadCount = 1;
   }
   setStopLatch(new CountDownLatch(pollerThreadCount));

   // Initialize SSL if needed
   initialiseSsl();

   selectorPool.open();
}
  3. Starting the Connector
protected void startInternal() throws LifecycleException {
   try {
        protocolHandler.start();
    } catch (Exception e) {

    }
}

Connector's startInternal calls the ProtocolHandler's start method, and the ProtocolHandler's start method starts the Endpoint.
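
For context, AbstractProtocol.start in the Tomcat 8.5-era sources looks roughly as follows; this is a simplified sketch (logging and priority clamping are omitted), not the exact code. Besides starting the Endpoint, it also launches the AsyncTimeout background thread mentioned earlier:

// Simplified sketch of AbstractProtocol.start() (Tomcat 8.5-era); treat it as an
// approximation -- logging and priority clamping from the real source are omitted.
@Override
public void start() throws Exception {
    // Start the Endpoint: bind (if not bound at init) and start acceptor/poller threads
    endpoint.start();

    // Start the background thread that times out asynchronous requests
    asyncTimeout = new AsyncTimeout();
    Thread timeoutThread = new Thread(asyncTimeout, getNameInternal() + "-AsyncTimeout");
    timeoutThread.setPriority(endpoint.getThreadPriority());
    timeoutThread.setDaemon(true);
    timeoutThread.start();
}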

Analysis of the Endpoint and its inner classes

The Endpoint provides the basic network I/O services and encapsulates the details of network communication.
1. AbstractEndpoint's thread pool
AbstractEndpoint has an Executor field, the thread pool it uses. The pool can either be supplied externally or created by the AbstractEndpoint itself; the internalExecutor flag records which of the two is in use. This pool is what actually handles the reads and writes on the network connections.

When the caller does not explicitly specify a thread pool, the Endpoint creates one of its own:

public void createExecutor() {
    internalExecutor = true;
    TaskQueue taskqueue = new TaskQueue();
    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS,taskqueue, tf);
    taskqueue.setParent( (ThreadPoolExecutor) executor);
}
  2. AbstractEndpoint's Acceptor
    AbstractEndpoint defines the Acceptor class (which implements Runnable) together with an acceptors field; they are responsible for accepting incoming network connections.
    When the acceptors are started they do not use the thread pool described above; instead, each Acceptor runs on a newly created daemon thread. The concrete Acceptor is produced by the subclass, which overrides the abstract createAcceptor method:
protected final void startAcceptorThreads() {
    int count = getAcceptorThreadCount();
    acceptors = new Acceptor[count];

    for (int i = 0; i < count; i++) {
        acceptors[i] = createAcceptor();
        String threadName = getName() + "-Acceptor-" + i;
        acceptors[i].setThreadName(threadName);
        Thread t = new Thread(acceptors[i], threadName);
        t.setPriority(getAcceptorThreadPriority());
        t.setDaemon(getDaemon());
        t.start();
    }
}

The AbstractEndpoint base class mainly defines common properties and fixes the order of the lifecycle calls; the actual initialization and startup of the Endpoint are performed by the bind and startInternal methods implemented in the concrete subclass.

public final void init() throws Exception {
    if (bindOnInit ) {
        bind();
        bindState = BindState.BOUND_ON_INIT;
    }
}

public final void start() throws Exception {
    if (bindState == BindState.UNBOUND) {
        bind();
        bindState = BindState.BOUND_ON_START;
    }
    startInternal();
}
  3. NioEndpoint
    NioEndpoint's bind was analyzed above: it mainly computes some configuration defaults and binds to and listens on the port.
    Next, look at startInternal:
@Override
public void startInternal() throws Exception {

    if (!running) {
        running = true;
        paused = false;

        processorCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                socketProperties.getProcessorCache());
        keyCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                        socketProperties.getKeyCache());
        eventCache = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                        socketProperties.getEventCache());
        nioChannels = new SynchronizedStack<>(SynchronizedStack.DEFAULT_SIZE,
                socketProperties.getBufferPool());

        // Create worker collection
        if ( getExecutor() == null ) {
            createExecutor();
        }

        initializeConnectionLatch();

        // Start poller threads
        pollers = new Poller[getPollerThreadCount()];
        for (int i=0; i<pollers.length; i++) {
            pollers[i] = new Poller();
            Thread pollerThread = new Thread(pollers[i], getName() + "-ClientPoller-" +i);
            pollerThread.setPriority(threadPriority);
            pollerThread.setDaemon( true);
            pollerThread.start();
        }

        startAcceptorThreads();
    }
}

In startInternal, the object caches and the thread pool are initialized, the poller threads are created and started, and the acceptor threads that receive network connections are created and started. The Poller is tied to how network data is received and is analyzed below.

  4. When the thread pool comes into play
    The Acceptor and the Poller determine when the thread pool is actually used. The Acceptor is the background thread that listens for network connections and dispatches work: it accepts a connection and, once the connection is established, hands the socket over to a Poller. The Poller is then responsible for driving the reads and the subsequent processing.

In the Acceptor's run method, an accepted socket is handed over to a Poller through setSocketOptions. setSocketOptions is the method dedicated to handling a new connection: it wraps the SocketChannel in a NioChannel and registers it with a Poller. A simplified view of the accept loop is sketched below, followed by setSocketOptions itself.
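
The core of NioEndpoint.Acceptor.run is roughly the following; this is a simplified sketch of the Tomcat 8.5-era source, with pause handling, connection limiting and error backoff left out:

// Simplified sketch of NioEndpoint.Acceptor.run(); connection limiting, pause
// handling and error backoff from the real source are omitted here.
@Override
public void run() {
    while (running) {
        try {
            // Block until a client connects
            SocketChannel socket = serverSock.accept();
            if (running && !paused) {
                // Wrap the channel and register it with a Poller; close it on failure
                if (!setSocketOptions(socket)) {
                    closeSocket(socket);
                }
            } else {
                closeSocket(socket);
            }
        } catch (Throwable t) {
            ExceptionUtils.handleThrowable(t);
            log.error("Socket accept failed", t);
        }
    }
}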

protected boolean setSocketOptions(SocketChannel socket) {
    // Process the connection
    try {
        //disable blocking, APR style, we are gonna be polling it
        socket.configureBlocking( false);
        Socket sock = socket.socket();
        socketProperties.setProperties(sock);
        NioChannel channel = nioChannels.pop();
        // Statements that configure the channel are omitted here
        getPoller0().register(channel);
    } catch (Throwable t) {
        // Exception handling code omitted
        return false ;
    }
    return true ;
}

getPoller0 picks one Poller from the pollers array that was initialized in startInternal, and the channel is registered on that Poller through its register method. The size of the pollers array defaults to a value computed from the current runtime environment (at most two, capped by the number of available processors); it can be overridden with the connector's pollerThreadCount attribute.

private int pollerThreadCount = Math.min(2,Runtime.getRuntime().availableProcessors());
  5. Poller
    Poller implements Runnable, and the pollers are started when the NioEndpoint starts. Each Poller has its own Selector, created with Selector.open in the Poller constructor. Its run method processes the pending events:
public void run() {
    while (true) {
        try {
            if (!close) {
                hasEvents = events();
                if (wakeupCounter.getAndSet(-1) > 0) {                   
                    keyCount = selector.selectNow();
                } else {
                    keyCount = selector.select(selectorTimeout);
                }
                wakeupCounter.set(0);
            }
            if (close) {
                events();
                timeout(0, false);
                try {
                    selector.close();
                } catch (IOException ioe) {
                    log.error(sm.getString("endpoint.nio.selectorCloseFail"), ioe);
                }
                break;
            }
        } catch (Throwable x) {
            ExceptionUtils.handleThrowable(x);
            log.error("",x);
            continue;
        }
        //either we timed out or we woke up, process events first
        if ( keyCount == 0 ) hasEvents = (hasEvents | events());

        Iterator<SelectionKey> iterator =
            keyCount > 0 ? selector.selectedKeys().iterator() : null;
        // Walk through the collection of ready keys and dispatch
        // any active event.
        while (iterator != null && iterator.hasNext()) {
            SelectionKey sk = iterator.next();
            NioSocketWrapper attachment = (NioSocketWrapper)sk.attachment();
            // Attachment may be null if another thread has called
            // cancelledKey()
            if (attachment == null) {
                iterator.remove();
            } else {
                iterator.remove();
                processKey(sk, attachment);
            }
        }//while

        //process timeouts
        timeout(keyCount,hasEvents);
    }//while

    getStopLatch().countDown();
}

processKey's main job is to call NioEndpoint's processSocket, which performs the actual reads and writes on the socket; a simplified version of processKey is sketched below. processSocket then wraps the work into a SocketProcessor task and hands it to the thread pool analyzed earlier, as shown after the sketch.
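
A simplified sketch of Poller.processKey (Tomcat 8.5-era; sendfile support, cancelled-key handling and close handling are omitted):

// Simplified sketch of Poller.processKey; sendfile support, cancelled-key
// handling and close handling from the real source are omitted.
protected void processKey(SelectionKey sk, NioSocketWrapper attachment) {
    if (sk.isValid() && attachment != null) {
        if (sk.isReadable() || sk.isWritable()) {
            // Remove the ready ops from the interest set so the same event is not
            // selected again while it is still being processed
            unreg(sk, attachment, sk.readyOps());
            // Reads are handled before writes
            if (sk.isReadable()) {
                processSocket(attachment, SocketEvent.OPEN_READ, true);
            }
            if (sk.isWritable()) {
                processSocket(attachment, SocketEvent.OPEN_WRITE, true);
            }
        }
    }
}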

public boolean processSocket(SocketWrapperBase<S> socketWrapper,
            SocketEvent event, boolean dispatch) {
    ……
    SocketProcessorBase<S> sc = processorCache.pop();
    if (sc == null) {
        sc = createSocketProcessor(socketWrapper, event);
    } else {
        sc.reset(socketWrapper, event);
    }
    Executor executor = getExecutor();
    if (dispatch && executor != null) {
        executor.execute(sc);
    } else {
        sc.run();
    }
   ……
}

Registering with the Poller's Selector
As mentioned earlier, the Acceptor's main job is to register the accepted socket with a Poller, which happens through the register method. Poller.register wraps the established socket connection in a PollerEvent object and puts it into the event queue maintained by the Poller. That queue is declared as follows:

private final SynchronizedQueue<PollerEvent> events = new SynchronizedQueue<>();

events is a queue of PollerEvent objects. The events() method takes each PollerEvent out of the queue in a loop and runs it. PollerEvent implements Runnable, and its run method performs the actual registration of the channel with the Selector; a simplified version of register and PollerEvent.run follows.
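
Simplified from the Tomcat 8.5-era NioEndpoint (wrapper timeout configuration and the non-register branches of PollerEvent are omitted), register and PollerEvent.run look roughly like this:

// Simplified sketch of Poller.register; timeout/keep-alive configuration of the
// socket wrapper is omitted.
public void register(final NioChannel socket) {
    socket.setPoller(this);
    NioSocketWrapper ka = new NioSocketWrapper(socket, NioEndpoint.this);
    socket.setSocketWrapper(ka);
    ka.setPoller(this);
    ka.interestOps(SelectionKey.OP_READ);   // this is what OP_REGISTER will turn into
    PollerEvent r = eventCache.pop();
    if (r == null) {
        r = new PollerEvent(socket, ka, OP_REGISTER);
    } else {
        r.reset(socket, ka, OP_REGISTER);
    }
    addEvent(r);                            // queued; the Poller thread runs it via events()
}

// Core of PollerEvent.run(): the channel is finally registered with the Poller's Selector
if (interestOps == OP_REGISTER) {
    socket.getIOChannel().register(
            socket.getPoller().getSelector(), SelectionKey.OP_READ, socketWrapper);
}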

  6. SocketProcessor
    SocketProcessor implements Runnable; it is the task submitted to the thread pool in processSocket, as analyzed above. In its doRun method it obtains the ConnectionHandler, which hands the received socket to a Processor:
SocketState state = SocketState.OPEN;
// Process the request from this socket
if (event == null) {
    state = getHandler().process(socketWrapper, SocketEvent.OPEN_READ);
} else {
    state = getHandler().process(socketWrapper, event);
}

The Handler is where the protocol gets processed. The process method is implemented in AbstractProcessorLight, a lightweight abstract Processor implementation.

public SocketState process(SocketWrapperBase<?> socketWrapper, SocketEvent status)
            throws IOException {

   SocketState state = SocketState.CLOSED;
   Iterator<DispatchType> dispatches = null;
   do {
       if (dispatches != null) {
           DispatchType nextDispatch = dispatches.next();
           state = dispatch(nextDispatch.getSocketStatus());
       } else if (status == SocketEvent.DISCONNECT) {
           // Do nothing here, just wait for it to get recycled
       } else if (isAsync() || isUpgrade() || state == SocketState.ASYNC_END) {
           state = dispatch(status);
           if (state == SocketState.OPEN) {
             state = service(socketWrapper);
           }
       } else if (status == SocketEvent.OPEN_WRITE) {
           // Extra write event likely after async, ignore
           state = SocketState.LONG;
       } else if (status == SocketEvent.OPEN_READ){
           state = service(socketWrapper);
       } else {
           state = SocketState.CLOSED;
       }

       if (state != SocketState.CLOSED && isAsync()) {
           state = asyncPostProcess();
       }
       if (dispatches == null || !dispatches.hasNext()) {
           // Only returns non-null iterator if there are
           // dispatches to process.
           dispatches = getIteratorAndClearDispatches();
       }
   } while (state == SocketState.ASYNC_END ||
           dispatches != null && state != SocketState.CLOSED);

   return state;
}

The default implementation is Http11Processor's service method; this is where the HTTP protocol is actually implemented, and in the end the request is handed to the adapter, which adapts it to the Container for processing.

public SocketState service(SocketWrapperBase<?> socketWrapper) throws IOException {
    // ……
    try {
        rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
        getAdapter().service(request, response);
    } catch (Exception e) {
        // exception handling omitted
    }
    // …… (keep-alive handling and the returned SocketState omitted)
}

In the CoyoteAdapter, the coyote Request and Response are converted into the Request and Response handled by the Servlet container; the container is then obtained from the Service, and the invoke method of the first Valve in the container's Pipeline is called:

public void service(org.apache.coyote.Request req, org.apache.coyote.Response res)
            throws Exception {
    try {
            // Parse and set Catalina and configuration specific
            // request parameters
            postParseSuccess = postParseRequest(req, request, res, response);
            if (postParseSuccess) {
                //check valves if we support async
                request.setAsyncSupported(
                        connector.getService().getContainer().getPipeline().isAsyncSupported());
                // Calling the container
                connector.getService().getContainer().getPipeline().getFirst().invoke(
                        request, response);
            }
         } catch (IOException e) {
            // Ignore
        } finally {
        }
}

The execution chain through the container passes through the Pipelines of the Engine, Host, Context and Wrapper containers in turn, ending at StandardWrapperValve (original figure omitted).
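
Each level follows the same pattern: the container's basic valve selects the appropriate child container and delegates to that child's pipeline. As an illustration, StandardEngineValve.invoke is roughly the following (simplified; the error message and async-support propagation are trimmed):

// Simplified sketch of StandardEngineValve.invoke (Tomcat 8.5-era); the error
// message and async-support propagation from the real source are trimmed.
@Override
public final void invoke(Request request, Response response)
        throws IOException, ServletException {

    // Select the Host that was mapped for this request (done earlier in postParseRequest)
    Host host = request.getHost();
    if (host == null) {
        response.sendError(HttpServletResponse.SC_BAD_REQUEST);
        return;
    }

    // Delegate to the Host's pipeline; the same pattern repeats for Context and Wrapper
    host.getPipeline().getFirst().invoke(request, response);
}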

Calling Servlet.service

In StandardWrapperValve's invoke, the Servlet is allocated, which also initializes it if it has not been initialized yet:

 public final void invoke(Request request, Response response)
    throws IOException, ServletException {
    boolean unavailable = false;
    Throwable throwable = null;
    // This should be a Request attribute...
    long t1=System.currentTimeMillis();
    requestCount.incrementAndGet();
    StandardWrapper wrapper = (StandardWrapper) getContainer();
    Servlet servlet = null;
    Context context = (Context) wrapper.getParent();
    try {
        if (!unavailable) {
            // Allocate the Servlet (loading and initializing it if necessary)
            servlet = wrapper.allocate();
        }
    } catch (UnavailableException e) {
    }

    MessageBytes requestPathMB = request.getRequestPathMB();
    DispatcherType dispatcherType = DispatcherType.REQUEST;
    if (request.getDispatcherType()==DispatcherType.ASYNC) dispatcherType = DispatcherType.ASYNC;
    request.setAttribute(Globals.DISPATCHER_TYPE_ATTR,dispatcherType);
    request.setAttribute(Globals.DISPATCHER_REQUEST_PATH_ATTR,
            requestPathMB);
    // Build the filter chain, containing the matched Filters and the Servlet
    ApplicationFilterChain filterChain =
            ApplicationFilterFactory.createFilterChain(request, wrapper, servlet);

    try {
        if ((servlet != null) && (filterChain != null)) {
            // Swallow output if needed
            if (context.getSwallowOutput()) {
                try {
                    SystemLogHandler.startCapture();
                    if (request.isAsyncDispatching()) {
                        request.getAsyncContextInternal().doInternalDispatch();
                    } else {
                        // Invoke the filter chain (chain of responsibility)
                        filterChain.doFilter(request.getRequest(),
                                response.getResponse());
                    }
                } finally {
                    String log = SystemLogHandler.stopCapture();
                    if (log != null && log.length() > 0) {
                        context.getLogger().info(log);
                    }
                }
            } else {
                if (request.isAsyncDispatching()) {
                    request.getAsyncContextInternal().doInternalDispatch();
                } else {
                    filterChain.doFilter
                        (request.getRequest(), response.getResponse());
                }
            }

        }
    } catch (ClientAbortException e) {
    }

    if (filterChain != null) {
        filterChain.release();
    }

    try {
        if (servlet != null) {
            wrapper.deallocate(servlet);
        }
    } catch (Throwable e) {
    }
}
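
filterChain.doFilter is where the Servlet is finally reached: ApplicationFilterChain walks through the matched filters and, once they are exhausted, calls the Servlet's service method. Its core logic, heavily simplified (security-manager paths, instance listeners and exception translation are omitted), looks roughly like this:

// Simplified sketch of ApplicationFilterChain.internalDoFilter (Tomcat 8.5-era);
// security-manager paths, instance listeners and exception translation are omitted.
private void internalDoFilter(ServletRequest request, ServletResponse response)
        throws IOException, ServletException {

    // If there is another filter in the chain, call it; the filter is expected to
    // call chain.doFilter(...) again, which re-enters this method
    if (pos < n) {
        ApplicationFilterConfig filterConfig = filters[pos++];
        try {
            Filter filter = filterConfig.getFilter();
            filter.doFilter(request, response, this);
        } catch (Exception e) {
            throw new ServletException(e);  // the real code translates exceptions differently
        }
        return;
    }

    // No filters left: invoke the Servlet itself
    servlet.service(request, response);
}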
