How Much Do You Know About Netty?

Netty Principles
Netty is a high-performance, asynchronous, event-driven NIO framework built on the Java NIO API. It provides support for TCP, UDP, and file transfer. As an asynchronous NIO framework, all of Netty's IO operations are asynchronous and non-blocking: through the Future-Listener mechanism, users can either actively retrieve the result of an IO operation or be notified of it via a callback.
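The Future-Listener mechanism can be sketched as follows. This is a minimal illustration, assuming Netty is on the classpath and `channel` is an already-connected `Channel`:

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.util.CharsetUtil;

public class FutureListenerSketch {
    static void writeAsync(Channel channel) {
        // writeAndFlush returns immediately with a ChannelFuture;
        // the IO thread completes the actual write later.
        ChannelFuture future =
            channel.writeAndFlush(Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8));

        // Register a listener instead of blocking on the result.
        future.addListener((ChannelFutureListener) f -> {
            if (f.isSuccess()) {
                System.out.println("write completed");
            } else {
                f.cause().printStackTrace();
            }
        });
    }
}
```

The calling thread never blocks: the listener runs on the channel's IO thread once the operation finishes.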

Netty High Performance
In IO programming, when multiple client requests must be handled, either multi-threading or IO multiplexing can be used. IO multiplexing registers several blocking IO streams with the same select call, so that a single thread can serve multiple client requests simultaneously. Compared with the traditional multi-thread/multi-process model, the biggest advantage of IO multiplexing is its low system overhead: the system does not need to create additional processes or threads, nor maintain them while they run, which reduces maintenance workload and saves system resources.

Corresponding to Socket and ServerSocket, NIO provides two socket channel implementations: SocketChannel and ServerSocketChannel.

Multiplexed communication

Netty's architecture is designed and implemented according to the Reactor pattern. Its server-side communication sequence is as follows:

(server-side communication sequence diagram)
The client-side communication sequence is as follows:
(client-side communication sequence diagram)
Because Netty's IO thread, NioEventLoop, aggregates the multiplexer Selector, it can handle hundreds of concurrent client Channels. Since read and write operations are non-blocking, this fully exploits the IO thread's efficiency and avoids the thread suspension caused by frequent blocking on IO.
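The multiplexing idea that NioEventLoop builds on is visible in the plain JDK NIO API: one Selector watches many channels, so a single thread can service them all. A minimal JDK-only sketch (no Netty required; the ephemeral port and single accept are illustration choices):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorSketch {
    // One Selector multiplexes readiness events for many channels,
    // so a single thread can serve multiple clients.
    public static int acceptOne() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);                    // required for register()
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Connect a client so the selector has an event to report.
        SocketChannel client = SocketChannel.open(server.getLocalAddress());

        int ready = selector.select(5000); // blocks until a channel is ready
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) {
                SocketChannel accepted = server.accept();
                if (accepted != null) accepted.close();
            }
        }
        client.close();
        server.close();
        selector.close();
        return ready; // number of ready channels reported by select()
    }

    public static void main(String[] args) throws IOException {
        System.out.println("ready channels: " + acceptOne());
    }
}
```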

Asynchronous Communication with NIO

Because Netty adopts an asynchronous communication model, one IO thread can concurrently handle N client connections and their read/write operations. This fundamentally resolves the one-connection-one-thread model of traditional synchronous blocking IO, greatly improving the architecture's performance, elastic scalability, and reliability.

Zero Copy (DIRECT BUFFERS: off-heap direct memory)

  1. Netty's receive and send ByteBuffers use DIRECT BUFFERS, reading and writing sockets with off-heap direct memory, so no second copy of the byte buffer is needed. If traditional heap memory (HEAP BUFFERS) were used for socket IO, the JVM would first copy the heap Buffer into direct memory before writing it to the socket. Compared with off-heap direct memory, this adds one extra buffer copy while sending a message.

  2. Netty provides a composite Buffer object that can aggregate multiple ByteBuffer objects. Users can operate on the composite Buffer as conveniently as on a single Buffer, avoiding the traditional approach of merging several small Buffers into one large Buffer via memory copies.

  3. Netty's file transfer uses the transferTo method, which sends data from the file buffer directly to the target Channel, avoiding the memory copies incurred by the traditional approach of writing in a loop.

Memory Pool (buffer reuse based on memory pooling)

With the development of the JVM and JIT compilation technology, object allocation and collection have become very lightweight operations. For buffers, however, the situation is slightly different: allocating and reclaiming off-heap direct memory in particular is time-consuming. To reuse buffers as much as possible, Netty provides a buffer reuse mechanism based on memory pools.
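The transferTo path mentioned in item 3 comes from the JDK's FileChannel, which Netty's file-region support builds on. A minimal JDK-only sketch (file names and contents are arbitrary for illustration):

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TransferToDemo {
    // Copy a file with FileChannel.transferTo: the kernel can move bytes
    // between the channels directly, without bouncing through a user-space buffer.
    static void copy(Path src, Path dst) throws IOException {
        try (FileChannel in = FileChannel.open(src, StandardOpenOption.READ);
             FileChannel out = FileChannel.open(dst,
                     StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long pos = 0, size = in.size();
            // transferTo may transfer fewer bytes than requested, so loop.
            while (pos < size) {
                pos += in.transferTo(pos, size - pos, out);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".bin");
        Files.write(src, "zero-copy".getBytes());
        Path dst = Files.createTempFile("dst", ".bin");
        copy(src, dst);
        System.out.println(new String(Files.readAllBytes(dst)));
    }
}
```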

Efficient Reactor Thread Models

There are three common Reactor thread models: the single-threaded Reactor model, the multi-threaded Reactor model, and the master-slave multi-threaded Reactor model.

Single-Threaded Reactor Model

In the single-threaded Reactor model, all IO operations are completed on the same NIO thread, whose responsibilities are as follows:

1) As an NIO server, accept TCP connections from clients;

2) As an NIO client, initiate TCP connections to the server;

3) Read request or response messages from the communicating peer;

4) Send request or response messages to the communicating peer.


Because the Reactor pattern uses asynchronous non-blocking IO, none of the IO operations block, so in theory one thread can independently handle all IO-related work. From an architectural standpoint, a single NIO thread can indeed fulfill these responsibilities. For example, the Acceptor receives clients' TCP connection requests; once a link is established, Dispatch delivers the corresponding ByteBuffer to the designated Handler for message decoding, and the user's Handler can send messages back to the client through the NIO thread.

Multi-Threaded Reactor Model

The biggest difference between the multi-threaded Reactor model and the single-threaded model is that a group of NIO threads handles the IO operations. One dedicated NIO thread, the Acceptor thread, listens on the server and accepts clients' TCP connection requests. Network IO operations (reads, writes, and so on) are handled by an NIO thread pool, which can be implemented with a standard JDK thread pool containing a task queue and N available threads; these NIO threads are responsible for reading, decoding, encoding, and sending messages.

Master-Slave Multi-Threaded Reactor Model

On the server side, client connections are no longer accepted by a single NIO thread but by an independent NIO thread pool. After the Acceptor finishes processing a client's TCP connection request (which may include steps such as access authentication), it registers the newly created SocketChannel with an IO thread in the IO thread pool (the sub-reactor pool), which then handles that SocketChannel's reads, writes, encoding, and decoding. The Acceptor thread pool is used only for client login, handshake, and security authentication; once the link is established, it is registered with an IO thread in the backend subReactor pool, which takes over all subsequent IO operations.
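In Netty, the master-slave model corresponds to the familiar boss/worker EventLoopGroup pair. A minimal server bootstrap sketch (assuming Netty on the classpath; port 8080 and the empty handler are illustration choices):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class MasterSlaveReactorServer {
    public static void main(String[] args) throws InterruptedException {
        // Boss (main reactor) group: accepts incoming connections.
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        // Worker (sub reactor) group: handles read/write for accepted channels.
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // Handlers run on the sub-reactor IO thread bound to this channel.
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter());
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```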

Lock-Free Design and Thread Binding

Netty adopts a serial, lock-free design, performing serialized operations inside the IO thread to avoid the performance degradation caused by multi-threaded lock contention. On the surface, this serial design appears to under-utilize the CPU and offer insufficient concurrency. However, by adjusting the NIO thread pool parameters, multiple serialized threads can be started and run in parallel. This locally lock-free serial-thread design performs better than the one-queue-multiple-worker-threads model.

After Netty's NioEventLoop reads a message, it directly calls ChannelPipeline.fireChannelRead(Object msg). As long as the user does not deliberately switch threads, the NioEventLoop keeps invoking the user's Handlers without any thread switch. This serialized approach avoids the lock contention that arises from multi-threaded operation and is optimal from a performance standpoint.

High-Performance Serialization Frameworks

Netty provides default support for Google Protobuf. Through Netty's extensible codec interface, users can plug in other high-performance serialization frameworks, such as Thrift's compact binary codec.
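A sketch of such a codec pipeline, assuming Netty and protobuf are on the classpath; `MyRequest` is a hypothetical protobuf-generated message class:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.protobuf.ProtobufDecoder;
import io.netty.handler.codec.protobuf.ProtobufEncoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32FrameDecoder;
import io.netty.handler.codec.protobuf.ProtobufVarint32LengthFieldPrepender;

// MyRequest stands in for a real protobuf-generated message class.
public class ProtobufPipelineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          // Inbound: split the byte stream into frames, then decode each frame.
          .addLast(new ProtobufVarint32FrameDecoder())
          .addLast(new ProtobufDecoder(MyRequest.getDefaultInstance()))
          // Outbound: prepend a varint length field, then encode the message.
          .addLast(new ProtobufVarint32LengthFieldPrepender())
          .addLast(new ProtobufEncoder());
    }
}
```

Swapping in another framework (e.g. a Thrift codec) means replacing the decoder/encoder pair while keeping the same pipeline structure.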

  1. SO_RCVBUF and SO_SNDBUF: the recommended values are usually 128K or 256K.

Coalescing small packets into larger ones to prevent network congestion

  1. TCP_NODELAY: the Nagle algorithm automatically coalesces small packets in the buffer into larger packets, preventing the network congestion caused by sending large numbers of small packets and thereby improving the efficiency of network applications. For latency-sensitive application scenarios, however, this optimization must be turned off.
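In Netty these socket options are set through `ChannelOption`. A sketch of applying the values above to accepted child channels (assuming Netty on the classpath and a `ServerBootstrap` being configured):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelOption;

public class TcpTuning {
    static void applyOptions(ServerBootstrap b) {
        // childOption applies to channels accepted from the server channel.
        b.childOption(ChannelOption.SO_RCVBUF, 128 * 1024)   // receive buffer: 128K
         .childOption(ChannelOption.SO_SNDBUF, 128 * 1024)   // send buffer: 128K
         .childOption(ChannelOption.TCP_NODELAY, true);      // disable Nagle for low latency
    }
}
```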

Binding soft interrupts to CPUs by hash value

  1. Soft interrupts: enabling RPS (Receive Packet Steering) can improve network throughput. RPS computes a hash from a packet's source address, destination address, and source and destination ports, then selects the CPU that runs the soft interrupt according to that hash. Viewed from a higher level, this binds each connection to a CPU and uses the hash value to balance soft interrupts across multiple CPUs, improving network performance through parallel processing.
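On Linux, RPS is configured through sysfs. A hedged example (the interface name `eth0`, queue `rx-0`, and CPU mask are assumptions; adjust for your NIC, and note that writing requires root):

```shell
# Spread receive soft interrupts for eth0's rx queue 0 across CPUs 0-3
# (bitmask 0xf = binary 1111, one bit per CPU).
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus

# Verify the current mask.
cat /sys/class/net/eth0/queues/rx-0/rps_cpus
```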

To be continuously updated...


Origin blog.51cto.com/14587687/2462783