[Computer Network]-Link Layer-Overview, Error Detection Technology and Multiple Access Protocols

Overview

The network layer provides a communication service between any two hosts. Between those two hosts, datagrams must be transmitted across a series of communication links, and this is where the link layer comes in.

We call any device that runs a link layer protocol a node, including hosts, routers, switches and WiFi access points; the communication channel connecting adjacent nodes along the communication path is called a link. Over a given link, the transmitting node encapsulates the datagram in a link layer frame and transmits the frame into the link.

To better understand the link layer and how it relates to the network layer, consider a transportation analogy: suppose a travel agency plans a route for tourists from point A to point D and decides the most convenient plan is to take a bus from A to B, a plane from B to C, and a train from C to D. Each pair of "adjacent" points on this route is directly connected, and the three segments use different modes of transportation handled by different companies. Although the modes differ, each provides the same basic service of moving passengers from one location to an adjacent one. The tourists are like a datagram, each transportation segment is like a link, each transportation mode is like a link layer protocol, and the travel agency is like a routing protocol.

Services provided by the link layer

The basic service of any link layer is to move a datagram from one node to an adjacent node over a single communication link, but the details of the services provided vary with the link layer protocol. The services a protocol may provide are as follows:

  • Framing: Before each network layer datagram is transmitted over the link, almost all link layer protocols encapsulate it in a link layer frame, with the datagram carried as the data field of the frame.

  • Link access: A medium access control (MAC) protocol specifies the rules by which a frame is transmitted onto the link.
    Links can be divided into two types based on the nodes at their ends. A point-to-point link has a single sender at one end and a single receiver at the other; in this case the MAC protocol is simple or even nonexistent, because the sender can transmit a frame whenever the link is idle. Examples of point-to-point link protocols are the point-to-point protocol (PPP) and high-level data link control (HDLC).
    The other type is a broadcast link, where multiple nodes share a single link and face the multiple access problem; here the MAC protocol coordinates frame transmission among the many nodes. Examples of broadcast link layer technologies are Ethernet and wireless LANs.

  • Reliable delivery: A link layer protocol providing a reliable delivery service guarantees that each network layer datagram moves across the link without error. As with TCP, a reliable delivery service at the link layer is usually achieved through acknowledgments and retransmissions.

    Link layer reliable delivery services are typically used on links prone to high error rates, such as wireless links, where the purpose is to correct an error locally rather than forcing an end-to-end retransmission by a transport layer or application layer protocol.
    For low bit-error links, however, such as fiber, coax, and many twisted-pair copper links, reliable delivery at the link layer is considered unnecessary overhead, so many wired link layer protocols do not provide it.

  • Error detection and correction: A bit in a frame transmitted as a 1 may be incorrectly interpreted as a 0 by the link layer hardware in the receiving node, and vice versa. Such bit errors are caused by signal attenuation and electromagnetic noise.

    There is no point in forwarding a datagram that contains errors, so many link layer protocols provide a mechanism to detect such bit errors: the sending node includes error detection bits in the frame, and the receiving node performs an error check.
    Error detection at the link layer is typically more sophisticated than the Internet checksum used at the transport and network layers, and it is implemented in hardware.

    Error correction is similar to error detection, except that the receiver can not only detect bit errors in the frame but also determine exactly where they occurred and correct them.

Where is the link layer implemented

The main part of the link layer is implemented in the network adapter, also known as the network interface card (NIC).

At the core of the network adapter is the link layer controller. This controller is usually a dedicated chip that implements many link layer services such as framing, link access and error detection; different controllers may implement different link layer protocols, such as Ethernet.

At the sending side, the controller takes a datagram that was generated by the higher layers of the protocol stack and stored in host memory, encapsulates it in a link layer frame, and then transmits the frame into the communication link following the link access protocol. At the receiving side, the controller receives the entire frame and extracts the network layer datagram. If the link layer performs error detection, the sending controller sets the error detection bits in the frame header and the receiving controller performs the error check.

Error detection and correction techniques

The link layer typically provides bit-level error detection and correction , that is, the detection and correction of bit impairments in link layer frames sent from one node to another physically connected adjacent node.
At the sending node, to protect the data against bit errors, the sender augments the original data segment D with error-detection-and-correction (EDC) bits. The receiver receives the possibly altered sequences D' and EDC', and its task is to determine whether D' is the same as the original D.

Error detection and correction techniques allow the receiver to sometimes but not always detect the presence of bit errors, which means that the receiver may not be able to detect that the received information contains bit errors. In order to reduce the probability of this happening, it is necessary to choose a more complex and expensive error detection scheme that requires more calculations and more error detection and error correction bits

Three of these techniques are discussed below: parity checks, checksum methods, and the cyclic redundancy check (CRC).

Parity

For the information D to be sent, suppose it contains d bits. In an even parity scheme, the sender simply includes one additional parity bit and chooses its value (0 or 1) so that the total number of 1s among these d + 1 bits is even; in an odd parity scheme, the parity bit value is chosen so that the total number of 1s is odd.

When checking, the receiver only needs to count the number of 1's in the received d + 1 bits. Taking the even parity scheme as an example, if an odd number of bits with a value of 1 are found, the receiver will know that an odd number of bit errors have occurred.

But if an even number of bit errors occurs, the errors go undetected. If the probability of a bit error is small and errors can be considered to occur independently from bit to bit, then the probability of multiple bit errors in a frame is extremely small, and a single parity bit would suffice.
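A minimal sketch in Python of how a sender might compute an even parity bit and how a receiver might check it (the function names are illustrative only):

```python
def even_parity_bit(bits):
    """Return the parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

def check_even_parity(bits_with_parity):
    """Return True if the received d + 1 bits contain an even number of 1s."""
    return sum(bits_with_parity) % 2 == 0

# Sender: d = 7 data bits plus one parity bit
data = [0, 1, 1, 1, 0, 0, 1]
frame_bits = data + [even_parity_bit(data)]   # total number of 1s is now even

# Receiver: an odd count of 1s reveals that an odd number of bit errors occurred
frame_bits[3] ^= 1                            # flip one bit to simulate an error
print(check_even_parity(frame_bits))          # False -> error detected
```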

However, measurements show that errors often cluster together in "bursts" rather than occurring individually . Therefore, a more robust error detection scheme is needed

Checksum method

Checksums are used more often at the transport layer.

In checksum techniques, the d bits of data above are treated as a sequence of k-bit integers. A simple checksum method is to add these k-bit integers and use the resulting sum as the error detection bits.

The Internet checksum is based on this approach: the data is treated as a sequence of 16-bit integers. For IP, UDP, and TCP, the checksum field in the header is set to 0 during the calculation. The 16-bit integers are then added using 1's complement arithmetic, which is addition with end-around carry: any carry out of the most significant bit is added back into the least significant bit. The complement of the resulting sum is the checksum carried in the segment header.

The receiver performs the same summation and one's complement operation on the received data, including the checksum field. If the result is 0, the transmission is assumed to be correct; otherwise an error has occurred.
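The following is a rough Python sketch of the Internet checksum computation described above, treating the data as big-endian 16-bit words with end-around carry; it is illustrative rather than a production implementation:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's complement of the one's complement sum (end-around carry)."""
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the low bits
    return ~total & 0xFFFF                         # one's complement of the sum

# Sender computes the checksum with the checksum field set to 0;
# the receiver sums the data plus the received checksum and expects 0 after complementing.
segment = b"\x12\x34\x56\x78"
csum = internet_checksum(segment)
print(internet_checksum(segment + csum.to_bytes(2, "big")) == 0)   # True if no errors
```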

Cyclic redundancy check

An error detection technique widely used today is the cyclic redundancy check (CRC).
Consider a d-bit block of data D to be sent:

  1. The sender and receiver must first agree on an r + 1 bit pattern, called a generator polynomial , here we denote it as G
  2. For a given data segment, the sender chooses r additional bits R to append to the end of D such that the resulting d + r bit pattern is exactly divisible by G modulo 2 . What is sent to the receiving node is this d + r bit pattern.
    Mod-2 arithmetic here is binary arithmetic without carries or borrows: 1 + 1 = 0 with no carry, and 0 − 1 = 1 with no borrow from the higher bit. Each bit position is therefore computed independently of the neighboring positions, which means that addition and subtraction are identical and both are equivalent to the bitwise XOR of the operands.
  3. The receiver divides the received d + r bits by G. If the remainder is non-zero, the receiver knows an error has occurred; otherwise the data is accepted as correct.
  4. The remaining question is how the sender computes R. We want D · 2^r XOR R to be exactly divisible by G, i.e. D · 2^r XOR R = nG for some n. Thinking of XOR as mod-2 addition and subtraction, this means that R is the remainder left over when D · 2^r is divided by G. Therefore R can be found by dividing D · 2^r by G (mod 2) and taking the remainder, as in the sketch below.
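Below is a small illustrative Python sketch of mod-2 division: the sender appends r zero bits to D and takes the remainder after dividing by G as R, while the receiver divides the received d + r bits by G and checks for a zero remainder. The values of D and G here are just example inputs:

```python
def mod2_remainder(dividend: str, generator: str) -> str:
    """Mod-2 (XOR) long division of a bit string by the generator; returns the r-bit remainder."""
    r = len(generator) - 1
    bits = list(dividend)
    for i in range(len(bits) - r):
        if bits[i] == "1":                      # only subtract (XOR) when the leading bit is 1
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-r:])

D = "101110"                          # d = 6 data bits (example values)
G = "1001"                            # r + 1 = 4 bit generator, so r = 3
R = mod2_remainder(D + "0" * 3, G)    # sender: remainder of D * 2^r divided by G
frame = D + R                         # the d + r bits actually transmitted
print(R)                              # '011'
print(mod2_remainder(frame, G))       # '000' -> receiver accepts the frame
```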

Multiple access links and protocols

As mentioned earlier, network links are divided into point-to-point links and broadcast links. Point-to-point links are relatively simple, while broadcast links face the multiple access problem, that is, how to coordinate the access of multiple sending nodes to a shared channel. Nodes regulate their transmissions onto the shared broadcast channel by means of a multiple access protocol.

Because all nodes are able to transmit frames, it is possible for two or more nodes to transmit frames at the same time. When this happens, all nodes receive multiple frames simultaneously, that is, the transmitted frames collide at all of the receivers. When a collision occurs, none of the receiving nodes can make sense of any of the transmitted frames: the signals of the colliding frames become tangled together, all frames involved in the collision are lost, and the broadcast channel is wasted during the collision interval.

To ensure that the broadcast channel performs useful work, the activities of the active nodes must be coordinated; this coordination is the job of the multiple access protocol.

Multiple access protocols can be divided into three categories: channel partitioning protocols, random access protocols, and round-robin protocols.

When describing the various protocols below, we use the following example: a channel supporting N nodes with a transmission rate of R bps.

Channel partitioning protocols

time division multiplexing

Time division multiplexing (TDM) divides time into time frames, further divides each time frame into N time slots, and assigns each slot to one of the N nodes. When a node has a link layer frame to send, it transmits the frame's bits during its assigned slot in the revolving TDM frame, so the slot length should be chosen so that a single packet can be transmitted within one slot.

TDM eliminates collisions and is perfectly fair: each node gets a dedicated transmission rate of R/N bps during each frame time.
Its main drawbacks are that a node is limited to an average rate of R/N bps even when it is the only node with frames to send, and that a node must always wait for its turn in the transmission sequence, again even when it is the only node with frames to send. TDM is therefore a poor choice when only a few nodes, or even a single node, are frequently active.

frequency division multiplexing

Frequency division multiplexing (FDM) divides the R bps channel into different frequency bands, each with a bandwidth of R/N, and assigns each band to one of the N nodes; that is, FDM creates N smaller channels of R/N bps out of the single larger R bps channel.

FDM also eliminates collisions and divides the bandwidth fairly among the N nodes.
Its main disadvantage is that a node is limited to a bandwidth of R/N even when it is the only node that needs to send frames.

CDMA

Code division multiple access (CDMA) assigns a different code to each node, and each node uses its unique code to encode the data bits it sends. If the codes are chosen carefully and each receiver knows its sender's code, different nodes can transmit simultaneously and their respective receivers can still correctly recover the sender's encoded data bits despite the interference from other nodes.
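As a rough single-sender illustration (it ignores the multi-sender superposition case), assuming a chipping code represented as +1/−1 values: the sender multiplies each data bit (mapped to +1 or −1) by its code, and the receiver recovers the bit by taking the normalized inner product with the same code:

```python
code = [1, 1, 1, -1, 1, -1, -1, -1]        # hypothetical 8-chip code for this node

def cdma_encode(bits, code):
    """Each data bit (0 -> -1, 1 -> +1) is multiplied by every chip of the code."""
    signal = []
    for b in bits:
        d = 1 if b == 1 else -1
        signal.extend(d * c for c in code)
    return signal

def cdma_decode(signal, code):
    """Recover each bit from the normalized inner product of a chunk with the code."""
    n, bits = len(code), []
    for i in range(0, len(signal), n):
        chunk = signal[i:i + n]
        d = sum(s * c for s, c in zip(chunk, code)) / n
        bits.append(1 if d > 0 else 0)
    return bits

print(cdma_decode(cdma_encode([1, 0, 1], code), code))   # [1, 0, 1]
```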

random access protocol

In a random access protocol, a transmitting node always transmits at the full rate of the channel (i.e., R bps). When a collision occurs, each node involved in the collision repeatedly retransmits its frame until the frame gets through without a collision. However, a node does not retransmit immediately after a collision; instead it waits a random delay before retransmitting. As long as the delay chosen by one node differs sufficiently from the delays of the other colliding nodes, that node can transmit its frame without a collision.

Slotted ALOHA

Transmission time is divided into slots. When a node has a frame to send, it waits until the beginning of the next slot and transmits the entire frame within that slot. If there is no collision, the node has successfully transmitted its frame; if there is a collision, the node detects it before the end of the slot and then retransmits the frame in each subsequent slot with probability p until the frame is transmitted without a collision.

Compared with channel partitioning protocols, slotted ALOHA allows a node to transmit continuously at full rate (i.e., R bps) when it is the only active node. It is also highly decentralized: each node detects collisions of its own frames and independently decides when to retransmit (with probability p).
Its problems are that slotted ALOHA requires the slots to be synchronized across the nodes, that is, every node must know when a slot begins, because frames are sent at the start of a slot; and when more than one node is active, a fraction of the slots is wasted, some by collisions and others left empty because, under the probabilistic retransmission strategy, none of the backlogged nodes chooses to transmit.
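A minimal simulation sketch in Python, under the simplifying assumption that all N nodes are always backlogged and each transmits in every slot with probability p; a slot is useful only when exactly one node transmits:

```python
import random

def slotted_aloha_success_rate(n_nodes=10, p=0.1, n_slots=100_000, seed=0):
    """Fraction of slots in which exactly one (always-backlogged) node transmits."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        transmitters = sum(rng.random() < p for _ in range(n_nodes))
        if transmitters == 1:          # exactly one sender: no collision, no empty slot
            successes += 1
    return successes / n_slots

# Efficiency is N*p*(1-p)^(N-1); with p = 1/N it approaches 1/e for large N.
print(round(slotted_aloha_success_rate(), 3))
```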

ALOHA

The fully decentralized ALOHA protocol does not require nodes to synchronize on slots. When a frame first arrives, that is, when a datagram is passed down from the network layer to the network card at the sending node, the node immediately transmits the entire frame into the broadcast channel. If a transmitted frame suffers a collision, the node immediately retransmits the frame with probability p; otherwise it waits for one frame transmission time and then again transmits the frame with probability p.

Carrier Sense Multiple Access CSMA

In both slotted and pure ALOHA, a node's decision to transmit is made independently of the activity of the other nodes attached to the broadcast channel. In particular, a node neither pays attention to whether another node happens to be transmitting when it begins its own transmission, nor stops transmitting if another node begins to interfere with that transmission.

To address these problems, carrier sense multiple access (CSMA) and CSMA with collision detection (CSMA/CD) embody two rules:

  1. Carrier sensing: a node listens to the channel before transmitting. If a frame from another node is currently being sent on the channel, the node waits until it detects no transmission for a short amount of time and then begins its own transmission.
  2. Collision detection: a transmitting node listens to the channel while it is transmitting. If it detects that another node is transmitting an interfering frame, it stops transmitting and waits a random amount of time before repeating the sense-and-transmit-when-idle cycle.

In other words, a node not only listens for other nodes' frames before sending its own, it also watches for collisions while its frame is being sent.

Under CSMA, consider the following situation: at some time t0, node B senses that the channel is idle because no other node is currently transmitting, so B begins to transmit. At a later time t1, B is still transmitting, but the bits it has transmitted have not yet reached node D, so D senses an idle channel at t1 and, following the CSMA protocol, begins its own transmission. After a short time, B's transmission collides with D's transmission somewhere on the channel. This is a consequence of the channel's end-to-end propagation delay: the longer the delay, the greater the chance that a carrier-sensing node cannot yet detect that another node in the network has already begun transmitting.

If the nodes do not perform collision detection, then even after a collision occurs, B and D will each continue to transmit their frames in their entirety. With collision detection, a node stops transmitting as soon as it detects a collision, which improves the protocol's performance by not wasting the channel on a useless, damaged frame. In short, carrier sensing ensures that no other node is transmitting when a frame is started; collision detection ensures that transmission stops immediately once a collision is detected during the frame's transmission.

Carrier sense multiple access with collision detection (CSMA/CD)

The operation process of CSMA/CD is as follows:

  1. The adapter obtains a datagram from the network layer, prepares a link layer frame, and places the frame in the adapter's buffer.
  2. If the adapter senses that the channel is idle, that is, no signal energy is entering the adapter from the channel, it starts transmitting the frame; if instead the adapter senses that the channel is busy, it waits until it senses no signal energy and then starts transmitting the frame.
  3. During transmission, the adapter monitors for the presence of signal energy from other adapters using the broadcast channel
  4. If the adapter transmits the entire frame without detecting signal energy from other adapters, it is finished with the frame; if instead it detects signal energy from other adapters while transmitting, it aborts the transmission.
  5. After aborting a transmission, the adapter waits a random amount of time and then returns to the sense-and-transmit-when-idle cycle. In Ethernet, the random amount of time is chosen using the binary exponential backoff algorithm, sketched below.
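As a sketch of classic Ethernet's binary exponential backoff: after the n-th collision of a frame, the adapter chooses K uniformly at random from {0, 1, ..., 2^m − 1}, where m = min(n, 10), and waits K · 512 bit times before sensing the channel again (the helper below is a hypothetical illustration, not any particular driver's code):

```python
import random

def backoff_bit_times(n_collisions: int) -> int:
    """Binary exponential backoff: pick K in {0, ..., 2^min(n,10) - 1} and wait K * 512 bit times."""
    m = min(n_collisions, 10)
    k = random.randrange(2 ** m)
    return k * 512                     # classic Ethernet waits K * 512 bit times

# After the first collision K is 0 or 1; after the second, 0..3; and so on.
print([backoff_bit_times(n) for n in (1, 2, 3)])
```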

Round-robin protocols

polling protocol

The polling protocol requires that one of the nodes be designated as the master node. The master node polls each node in a round-robin fashion, telling each node the maximum number of frames it may transmit; after that node has transmitted its frames, the master node moves on and tells the next node the maximum number of frames it may transmit. The master node can determine when a node has finished sending its frames by observing the absence of signal on the channel.

With the master node managing access in this way, the polling protocol eliminates the collisions and empty slots that plague random access protocols, making it much more efficient. Its drawbacks are, first, that it introduces polling delay, the time required to notify a node that it may transmit, and second, that it centralizes control in the master node: if the master node fails, the entire channel becomes inoperative.

Token Passing Protocol

There is no master node in this protocol. A small special frame called a token is passed among the nodes in a fixed order. When a node receives the token, it holds the token only if it has frames to send; otherwise it immediately forwards the token to the next node. If it does have frames to transmit, it sends up to a maximum number of frames and then forwards the token to the next node.

Token passing is decentralized and highly efficient. Its problems are that the failure of a single node can crash the entire channel, and that if a node neglects to release the token, a recovery procedure must be invoked to get the token back into circulation.


Origin blog.csdn.net/Pacifica_/article/details/125226505