How does Netty implement Reactor?

Hello, my name is yes.

This is the first technical article of the year, pulled from my backlog. Updates in the near term will keep focusing on interview questions, since the "gold March, silver April" hiring season is coming up.

This article covers another interview topic: after all, Reactor does get asked about. It looks at how Reactor is implemented in Netty, deepens your understanding of Reactor, and also touches on the evolution of the Reactor model.

By the way, I have written an introduction to Reactor before. If you haven't read it, I recommend reading that article first and then coming back to this one.

Without further ado, let's go!

Reactor in Netty

We all know that Netty typically uses two thread groups: a bossGroup and a workerGroup. As mentioned before, the bossGroup is mainly responsible for accepting new connections (the boss takes on the work), and the workerGroup is responsible for all subsequent I/O on those connections (the employees do the work).

Mapped onto the Reactor model, the eventLoop in the bossGroup is the main Reactor. Its job is to listen for and wait on connection events (OP_ACCEPT), create a child channel, pick an eventLoop from the workerGroup, and bind the child channel to it; from then on, all I/O events for that child channel are handled by that eventLoop.

An eventLoop in the workerGroup is the so-called sub-Reactor; it is responsible for all I/O requests after the connection has been established.

In fact, the name eventLoop already tells you what it does: it loops over events. It is a thread that waits in an infinite loop for events to occur and then performs different follow-up processing depending on the event type. That's all there is to it.

Under normal circumstances, the bossGroup is configured with only one eventLoop, i.e. one thread, because a typical service exposes only one port, so a single eventLoop listening on that port and accepting connections is enough.

The workerGroup in Netty defaults to the number of CPU cores * 2. For example, with a 4-core CPU, the workerGroup builds 8 eventLoops by default, so there are 8 sub-Reactors.

Therefore, a typical Netty server configuration is one main Reactor plus multiple sub-Reactors, the so-called master-slave Reactor. That is basically the mainstream configuration today.
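To make this concrete, here is a minimal sketch of the usual master-slave setup. The port, the LoggingHandler, and the class name are placeholders for illustration; the point is only the two groups passed to ServerBootstrap.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.logging.LoggingHandler;

public class MasterSlaveReactorServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // 1 main Reactor: accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // defaults to CPU cores * 2 sub-Reactors
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(bossGroup, workerGroup)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // handlers added here run on the sub-Reactor bound to this child channel
                     ch.pipeline().addLast(new LoggingHandler());
                 }
             });
            ChannelFuture f = b.bind(8080).sync();
            f.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```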

The evolution of the Reactor model

Before diving into Netty's Reactor implementation, let's take a look at why the model evolved into the master-slave Reactor.

The initial model is single Reactor, single thread: one thread both listens for new connections and responds to requests on existing connections. If the business logic is fast, there is no problem; just look at Redis. But if the logic is slow, it blocks other requests.

Hence single Reactor, multi-threaded: one thread still monitors all the underlying sockets, but time-consuming business processing is handed off to a thread pool, so the Reactor is no longer blocked by slow logic.

However, this model still has a bottleneck: accepting new connections and responding to existing ones are handled by the same thread. As existing connections accumulate and the number of events to respond to grows, the acceptance of new connections suffers. And on today's multi-core CPUs, can a single thread really keep up?

Therefore, it evolved into the master-slave Reactor: one thread, the main Reactor, waits for new connections to be established, while multiple threads act as sub-Reactors, each responsible for a share of the established connections. This way the speed of accepting new connections is not affected, and multi-core CPUs are better utilized for responding to requests on existing connections.

This is about the evolution of the Reactor model.

Well, let's take a look at the core class with which Netty implements Reactor. Since we generally use NIO, we'll look at the NioEventLoop class.

Friendly reminder: if possible, read the following on a PC; source code is not very comfortable to read on a phone.

NioEventLoop

We mentioned earlier that a NioEventLoop is a thread, and the core of any thread is its run method.

Based on our understanding, the backbone of this run method must be an infinite loop that waits for I/O events and then processes them.

And indeed it is. NioEventLoop mainly does three things (a simplified sketch follows the list):

  1. select: wait for I/O events to occur
  2. process the I/O events that occurred
  3. process tasks submitted to the thread, including submitted asynchronous tasks, scheduled tasks, and tail tasks
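To make the shape of that loop concrete, here is a deliberately simplified, generic sketch of such an event loop in plain Java NIO. It is not Netty's actual run() method; the Selector, the task queue, and the handle() placeholder are all illustrative.

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified Reactor-style event loop: select -> handle I/O -> run submitted tasks.
public class SimpleEventLoop implements Runnable {
    private final Selector selector;
    private final Queue<Runnable> taskQueue = new ConcurrentLinkedQueue<>();

    public SimpleEventLoop(Selector selector) {
        this.selector = selector;
    }

    public void execute(Runnable task) {
        taskQueue.offer(task);
        selector.wakeup(); // wake the loop so the task doesn't wait for the next I/O event
    }

    @Override
    public void run() {
        for (;;) { // 1. the infinite loop
            try {
                selector.select(); // block until an I/O event arrives (or wakeup() is called)

                // 2. handle the ready I/O events
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    handle(key); // dispatch by event type (accept/read/write/...)
                }

                // 3. run the submitted tasks
                Runnable task;
                while ((task = taskQueue.poll()) != null) {
                    task.run();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private void handle(SelectionKey key) {
        // placeholder: real code dispatches on key.readyOps()
    }
}
```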

First, fold the code down and you can see a proper infinite loop, the standard setup for a Reactor thread. Its whole life is spent waiting for events to happen and processing them.

In Netty's implementation, the NioEventLoop thread not only processes I/O events but also submitted asynchronous tasks, scheduled tasks, and tail tasks, so it needs to balance the time spent on I/O processing against the time spent on task processing.

Therefore, there is a selectStrategy. It checks whether there are tasks waiting to be executed: if there are, it immediately performs a non-blocking select to try to grab any ready I/O events; if there are no tasks, the SelectStrategy.SELECT strategy is chosen.

As the code also shows, this strategy caps the maximum blocking time of the select according to when the nearest scheduled task is due to run.

As you can see from the code, a 5-microsecond window is reserved based on when the next scheduled task is due. If the task is due within 5 microseconds, the select does not block; a non-blocking select is performed immediately to grab any ready I/O events.
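Here is a sketch of that deadline-aware select, modeled on the idea rather than copied from Netty's source. The NO_DEADLINE sentinel and the class name are made up for illustration; deadlineNanos is the absolute (System.nanoTime-based) time of the next scheduled task.

```java
import java.io.IOException;
import java.nio.channels.Selector;

public final class DeadlineSelect {
    static final long NO_DEADLINE = Long.MAX_VALUE; // hypothetical: "no scheduled task pending"

    static int select(Selector selector, long deadlineNanos) throws IOException {
        if (deadlineNanos == NO_DEADLINE) {
            return selector.select();                 // nothing scheduled: block until an event arrives
        }
        long delayNanos = deadlineNanos - System.nanoTime();
        // round up to whole milliseconds, but leave a ~5 microsecond window:
        // if the next task is due in under 5us, don't block at all
        long timeoutMillis = (delayNanos + 995_000L) / 1_000_000L;
        return timeoutMillis <= 0
                ? selector.selectNow()                // non-blocking: grab whatever is ready right now
                : selector.select(timeoutMillis);     // block at most until just before the task is due
    }

    private DeadlineSelect() { }
}
```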

After the above, the select has completed, and the number of ready I/O events is assigned to strategy (0 if there are none). Then it is time to process the I/O events and the tasks.


I have highlighted the key parts of the code above. There is a selectCnt that counts the number of selects; it is used to deal with the JDK Selector's empty-polling bug, which I'll come back to later.

The ioRatio parameter controls the ratio between time spent handling I/O events and time spent running tasks. After all, one thread has to do several things, so everyone has to get their fair share, right? Nothing can be left out.

As you can see, the concrete implementation records how long I/O processing took, then uses the ratio to compute the maximum time tasks may run, and bounds task execution accordingly.
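A sketch of that time split, modeled on what the run loop does; processSelectedKeys() and runAllTasks(budget) here are stand-ins for the real NioEventLoop methods.

```java
class IoRatioSketch {
    int ioRatio = 50; // default: tasks get roughly as much time as I/O just took

    void handleIoThenTasks() {
        final long ioStartTime = System.nanoTime();
        try {
            processSelectedKeys();                       // handle the ready I/O events first
        } finally {
            final long ioTime = System.nanoTime() - ioStartTime;
            // the task budget shrinks as ioRatio grows: ioTime * (100 - ioRatio) / ioRatio
            runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
        }
    }

    void processSelectedKeys() { /* dispatch ready SelectionKeys, see below */ }
    void runAllTasks(long timeoutNanos) { /* drain the task queue until the budget runs out */ }
}
```

With ioRatio = 50 the thread gives tasks about as long as the I/O handling took; with ioRatio = 80 tasks get at most a quarter of that time, and so on.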

Handling of I/O events

Let's take a look at how I/O events are actually handled, in the processSelectedKeys method.

Clicking in, you can see there are actually two processing paths: an optimized version and a plain version.


The logic of the two versions is the same. The difference is that the optimized version replaces the type of selectedKeys: the JDK implements selectedKeys as a Set, and Netty believes there is still room for optimization in that choice.

Netty replaces the Set with its own SelectedSelectionKeySet type, which is essentially backed by an array.


Compared with a Set, traversing an array is more efficient, and appending to the tail of an array is also cheaper than inserting into a Set, which may hit hash collisions. Of course, this is Netty chasing extreme low-level optimization; our everyday code does not need to be this "fancy", as the gain would be negligible.
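A simplified sketch of the idea behind SelectedSelectionKeySet (the real class has more methods and slightly different details): add() just appends to an array, with no hashing involved.

```java
import java.nio.channels.SelectionKey;
import java.util.AbstractSet;
import java.util.Arrays;
import java.util.Iterator;

final class ArrayBackedKeySet extends AbstractSet<SelectionKey> {
    SelectionKey[] keys = new SelectionKey[1024];
    int size;

    @Override
    public boolean add(SelectionKey key) {
        if (key == null) {
            return false;
        }
        if (size == keys.length) {
            keys = Arrays.copyOf(keys, size << 1); // grow when full
        }
        keys[size++] = key;                        // O(1) append, no hash computation
        return true;
    }

    @Override
    public int size() {
        return size;
    }

    @Override
    public Iterator<SelectionKey> iterator() {
        return Arrays.asList(Arrays.copyOf(keys, size)).iterator();
    }

    void reset() {
        Arrays.fill(keys, 0, size, null);          // null out so the GC can reclaim the keys
        size = 0;
    }
}
```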

So how does Netty replace this type?

Reflection.

Look at the code, it's not very complicated:

This can also give us some ideas. For example, when you use a third-party jar whose source code you cannot modify, but you want to enhance some of its behavior, you can imitate Netty and swap the implementation in via reflection.
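The gist of the trick, heavily simplified and using the array-backed set from the earlier sketch. Note that on modern JDKs this needs --add-opens java.base/sun.nio.ch=ALL-UNNAMED, and Netty itself wraps the whole thing in privileged actions and simply falls back to the plain HashSet if anything fails.

```java
import java.lang.reflect.Field;
import java.nio.channels.Selector;

final class SelectedKeysSwapper {

    static void replace(Selector selector, ArrayBackedKeySet keySet) throws Exception {
        // the JDK's Selector implementations extend sun.nio.ch.SelectorImpl,
        // which holds the two Set fields we want to swap out
        Class<?> selectorImplClass =
                Class.forName("sun.nio.ch.SelectorImpl", false, ClassLoader.getSystemClassLoader());

        Field selectedKeysField = selectorImplClass.getDeclaredField("selectedKeys");
        Field publicSelectedKeysField = selectorImplClass.getDeclaredField("publicSelectedKeys");
        selectedKeysField.setAccessible(true);
        publicSelectedKeysField.setAccessible(true);

        // from now on the Selector fills our array-backed set instead of the JDK HashSet
        selectedKeysField.set(selector, keySet);
        publicSelectedKeysField.set(selector, keySet);
    }
}
```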

Let's set a breakpoint to see the type of selectedKeys before and after the replacement. Before, it was a HashSet:


After replacement, it becomes SelectedSelectionKeySet.

OK, now let's look at the optimized version of the loop that handles I/O events. It is the same as the plain version, except that it traverses the array.

In many open-source projects you will find this kind of statement that directly sets an entry to null; it is there to help the GC.
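The pattern, sketched with the array-backed set from above (processSelectedKey is the per-key dispatch shown next): each slot is nulled as soon as its key has been handled, so the long-lived array does not pin SelectionKeys of channels that have since closed.

```java
for (int i = 0; i < selectedKeys.size; i++) {
    final SelectionKey key = selectedKeys.keys[i];
    selectedKeys.keys[i] = null;    // help the GC: drop the reference immediately
    processSelectedKey(key);        // dispatch the event for this key
}
```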

Next, let's take a look at the method that actually handles I/O events: processSelectedKey.


As you can see, the essence of this method is to dispatch on the event type. The event is then propagated along the corresponding channel's pipeline, triggering the various corresponding handler callbacks. I'll take the OP_ACCEPT event as an example.
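A trimmed-down sketch of the dispatch idea inside processSelectedKey; "IoHandler" here is a made-up stand-in for the channel's internal unsafe, and the real method also handles OP_CONNECT, cancelled keys, and error cases.

```java
import java.nio.channels.SelectionKey;

final class DispatchSketch {

    interface IoHandler {
        void read();        // read data, or accept a new connection on a server channel
        void forceFlush();  // flush pending writes once the socket is writable again
    }

    static void dispatch(SelectionKey key, IoHandler handler) {
        int readyOps = key.readyOps();

        if ((readyOps & SelectionKey.OP_WRITE) != 0) {
            handler.forceFlush();
        }
        // readyOps == 0 is also treated as readable, to work around the JDK empty-poll bug
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            handler.read();   // for a server channel, OP_ACCEPT lands here and accepts the child channel
        }
    }
}
```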

For the OP_ACCEPT event, unsafe.read actually calls the NioMessageUnsafe#read method.


From the code above, the logic is not complicated: read the newly created child channels in a loop, and fire channelRead and channelReadComplete events to propagate them along the pipeline. Along the way, the previously added ServerBootstrapAcceptor#channelRead is triggered, which assigns the child channel to an eventLoop in the workerGroup, i.e. a sub-Reactor thread.
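The gist of what ServerBootstrapAcceptor#channelRead does, simplified: childHandler and childGroup are fields of the acceptor (set from .childHandler(...) and the workerGroup), and the real class also copies child options and attributes.

```java
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    final Channel child = (Channel) msg;        // the "message" here is the accepted child channel

    child.pipeline().addLast(childHandler);     // install the user's ChannelInitializer

    // hand the child over to the workerGroup: a sub-Reactor eventLoop owns it from now on
    childGroup.register(child).addListener((ChannelFutureListener) future -> {
        if (!future.isSuccess()) {
            child.unsafe().closeForcibly();     // registration failed: close the connection
        }
    });
}
```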

Of course, our custom handlers can also implement these two callbacks, so that when the corresponding event arrives we can run our own logic.
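For example, a minimal custom handler (the class name and logic are made up) that hooks the two callbacks mentioned above; it would be added in the ChannelInitializer shown earlier.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

public class MyBusinessHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf buf = (ByteBuf) msg;
        try {
            System.out.println("received " + buf.readableBytes() + " bytes");
            // ... business logic for this chunk of data ...
        } finally {
            ReferenceCountUtil.release(msg);   // don't leak the buffer
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();                           // all reads for this loop iteration are done
    }
}
```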

Well, that wraps up the analysis of Netty's OP_ACCEPT handling. Other events are similar: they fire the corresponding events, which propagate along the pipeline and invoke the methods of the various ChannelHandlers for business processing.

The above is the master-slave Reactor model implemented by Netty.

Of course, Netty also supports a single Reactor; you simply don't configure a workerGroup. The number of threads is up to you, which is very flexible, but these days the master-slave Reactor model is what's generally used.

Finally

This article not only talks about Netty's Reactor implementation, but also covers how Netty handles I/O operations.

The next article will be about Netty's pipeline mechanism. This chain-of-responsibility design is also very important and quite inspiring.

Once the pipeline article is done, you should have a clearer picture of Netty as a whole. After that I'll write about TCP packet sticking and splitting (message framing), memory management, and some "advanced" uses of Netty. About half of that content isn't written yet; once it's all done I'll do a complete review, and then you'll be able to "show off" your Netty knowledge wherever you go.

OK, no more rambling. I'm yes; from a little bit to a billion bits, see you in the next one~
