Five I/O Models of Network Programming

There are five I/O models in network programming. Today we will talk about the principles of each and the differences between them.

1. Blocking I/O model.

The communication diagram of the blocking I/O model is as follows:

Blocking I/O model communication diagram

When the user process calls the recvfrom system call, the kernel begins to prepare the data. For network I/O the data often has not arrived yet, so the kernel must wait until enough data is available, and during this wait the user process is blocked. Once the data is ready, the kernel copies it from kernel space to user memory and returns the result, at which point the user process is unblocked and runs again. In the blocking I/O model the blocked process is suspended and consumes no CPU resources, and each operation is handled promptly, so the model suits network applications with low concurrency. For applications with a large amount of concurrency, a separate process must be allocated for each request, which makes the system overhead high.
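A minimal sketch of the blocking model is shown below: a UDP receiver that simply sits in recvfrom() until the kernel has a datagram ready. Port 9000 is an arbitrary example value, not something from the original article.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);              /* example port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[1024];
    /* The process sleeps here until data arrives and has been copied
     * from the kernel buffer into buf. */
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
    if (n > 0)
        printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}
```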

2. Non-blocking I/O model.

The communication diagram of the non-blocking I/O model is as follows:

Schematic diagram of non-blocking I/O model communication

In the non-blocking I/O model, when the user process issues a read operation and the data in the kernel is not yet ready, the call does not block the user process; it returns an error immediately. From the user process's point of view, the read returns a result at once instead of waiting. When the user process sees the error and learns that the kernel has not prepared the data yet, it issues the read again. Once the kernel has the data ready and receives another call from the user process, it copies the data to user memory immediately and returns. In this model the user process must keep asking whether the kernel is ready, which consumes CPU resources, so it suits network applications with low concurrency that do not require timely responses.
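A minimal sketch of the non-blocking model, under the assumption that fd is an already bound UDP socket as in the previous example: the descriptor is switched to O_NONBLOCK, so recvfrom() returns -1 with EAGAIN/EWOULDBLOCK instead of blocking, and the process polls in a loop.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

static void poll_read(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* mark the fd non-blocking */

    char buf[1024];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n >= 0) {                         /* kernel had data ready */
            printf("received %zd bytes\n", n);
            break;
        }
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            break;                            /* a real error, give up */
        /* Not ready yet: ask again. This busy polling is what costs CPU. */
        usleep(1000);
    }
}
```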

3. Multiplexing I/O model.

The schematic diagram of the multiplexed I/O model is as follows:

Schematic diagram of multiplexed I/O model communication

In the multiplexed I/O model, the I/O of multiple processes can be registered with a multiplexer (Selector). When the user process calls the selector and none of the I/O it monitors has readable data in the kernel buffer, the select call blocks. As soon as any monitored I/O has data in its kernel buffer, the select call returns; the calling process can then initiate the read itself, or notify another process to do so, and read the data the kernel has prepared.

Compared with the non-blocking model, the multiplexing model requires two system calls (select and then recvfrom), but the advantage of the selector is that it can handle many connections at once. That advantage is obvious when there are many connections, although no single connection is processed any faster. The multiplexing model suits high-concurrency server development.
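A minimal sketch of the multiplexing model using select(): the process blocks in select() watching several descriptors at once and only calls recvfrom() on the ones reported readable. The fds/nfds parameters are hypothetical inputs holding already created sockets.

```c
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>

static void serve(int *fds, int nfds) {
    for (;;) {
        fd_set readset;
        FD_ZERO(&readset);
        int maxfd = -1;
        for (int i = 0; i < nfds; i++) {
            FD_SET(fds[i], &readset);         /* register each fd with the selector */
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }

        /* First system call: block until any registered fd is readable. */
        int ready = select(maxfd + 1, &readset, NULL, NULL, NULL);
        if (ready < 0)
            break;

        char buf[1024];
        for (int i = 0; i < nfds; i++) {
            if (FD_ISSET(fds[i], &readset)) {
                /* Second system call: copy the prepared data out of the kernel. */
                recvfrom(fds[i], buf, sizeof(buf), 0, NULL, NULL);
            }
        }
    }
}
```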

4. Signal-driven I/O model.

The signal-driven I/O model communication diagram is as follows:

Schematic diagram of signal-driven I/O model communication

In the signal-driven model, the process registers a signal-handling function with the kernel and then returns to its own work without blocking. When the kernel has the data ready, it delivers a signal to the process, and the user process issues the I/O call inside the handler to read the data. The copy from the kernel to the user process is still blocking, so this mode does not achieve true asynchrony; it is pseudo-asynchronous and is rarely used in practice.
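A minimal sketch of the signal-driven model, assuming a bound UDP socket set up elsewhere: the process installs a SIGIO handler, tells the kernel to deliver SIGIO for this socket (F_SETOWN plus O_ASYNC), and then goes on with other work; the recvfrom() inside the handler is the part that still blocks briefly while the kernel copies data to user memory.

```c
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>
#include <sys/socket.h>

static int sock_fd;                       /* bound UDP socket, set up elsewhere */

static void on_sigio(int signo) {
    (void)signo;
    char buf[1024];
    /* The kernel has data ready; this copy to user space is still synchronous. */
    recvfrom(sock_fd, buf, sizeof(buf), 0, NULL, NULL);
}

static void enable_sigio(int fd) {
    sock_fd = fd;
    signal(SIGIO, on_sigio);              /* register the signal handler */
    fcntl(fd, F_SETOWN, getpid());        /* deliver SIGIO to this process */
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_ASYNC);  /* enable signal-driven I/O on the fd */
    /* The process now returns to its own work until SIGIO arrives. */
}
```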

5. Asynchronous I/O model.

The communication diagram of the asynchronous I/O model is as follows:

Schematic diagram of asynchronous I/O communication model

After the user process initiates an aio_read operation, it passes the kernel the same arguments as a read: the descriptor, the buffer pointer, the buffer size, and the file offset, and it also tells the kernel how to notify it when the whole operation is complete; the process can then immediately go on with other work. From the kernel's point of view, when it receives the aio_read it returns at once, so the user process is not blocked. The kernel waits for the data to be ready and then copies it into user memory; only after all of this is done does the kernel send a signal to the user process to tell it that the aio_read operation has completed.

The mechanism of asynchronous I/O is therefore: tell the kernel to start an operation and have the kernel notify us when the entire operation, including the copy to user memory, has completed. This is the only truly asynchronous model of the five, and it suits high-performance, high-concurrency applications.
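A minimal sketch of the asynchronous model using POSIX AIO: aio_read() hands the kernel the descriptor, buffer pointer, buffer size, and offset, returns immediately, and the process is notified (here by a signal) only after the data has already been copied into the buffer. The file name "data.txt" and the choice of SIGUSR1 are arbitrary example values; on Linux this typically links with -lrt.

```c
#include <aio.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t done = 0;

static void on_complete(int signo) {
    (void)signo;
    done = 1;                              /* data is already in our buffer */
}

int main(void) {
    signal(SIGUSR1, on_complete);

    int fd = open("data.txt", O_RDONLY);   /* placeholder input file */
    char buf[1024];

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;                    /* descriptor */
    cb.aio_buf = buf;                      /* buffer pointer */
    cb.aio_nbytes = sizeof(buf);           /* buffer size */
    cb.aio_offset = 0;                     /* file offset */
    cb.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
    cb.aio_sigevent.sigev_signo = SIGUSR1; /* how the kernel should notify us */

    aio_read(&cb);                         /* returns immediately, no blocking */

    while (!done)
        pause();                           /* free to do other work here instead */

    printf("read %zd bytes\n", aio_return(&cb));
    close(fd);
    return 0;
}
```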


Source: blog.csdn.net/wzs535131/article/details/107599887