swarm, conn & stream
In the swarm, a Conn represents a connection, on top of which streams are multiplexed.
Before any stream operation can start, a connection must exist. A connection is created either outbound via swarm.dial() or inbound via swarm.listen().
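For orientation, here is a minimal sketch of the outbound path, assuming an already-constructed *swarm.Swarm and a peer whose addresses are already in the peerstore; exact signatures vary between go-libp2p versions, so treat it as illustration only.

import (
	"context"

	"github.com/libp2p/go-libp2p-core/network"
	"github.com/libp2p/go-libp2p-core/peer"
	swarm "github.com/libp2p/go-libp2p-swarm"
)

// dialAndOpenStream is an illustrative helper: dial first, then open a
// stream that gets multiplexed on top of the resulting connection.
func dialAndOpenStream(ctx context.Context, s *swarm.Swarm, pid peer.ID) (network.Stream, error) {
	// Outbound: the swarm reuses an existing connection to pid if one is open.
	c, err := s.DialPeer(ctx, pid)
	if err != nil {
		return nil, err
	}
	// Only once the connection exists can a stream be opened on it.
	return c.NewStream()
}

Either way, the swarm wraps the resulting transport connection in its own Conn type: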
type Conn struct {
	conn  transport.CapableConn // the underlying secured, multiplexed transport connection
	swarm *Swarm                // back-reference to the owning swarm

	closeOnce sync.Once // ensures Close tears the connection down only once
	err       error

	notifyLk sync.Mutex

	streams struct {
		sync.Mutex
		m map[*Stream]struct{} // set of live streams multiplexed on this connection
	}

	stat network.Stat
}
swarm.addConn
Whenever the swarm gets an incoming or outgoing connection (via swarm.dial or swarm.listen), addConn is called. It:
- calls notifyAll() to deliver the Connected notification
- calls start() to launch a goroutine for stream handling
- invokes the connHandler registered in the Swarm (s.ConnHandler()) in a new goroutine
...
s.notifyAll(func(f network.Notifiee) {
f.Connected(s, c)
})
c.notifyLk.Unlock()
c.start()
// TODO: Get rid of this. We use it for identify but that happen much
// earlier (really, inside the transport and, if not then, during the
// notifications).
if h := s.ConnHandler(); h != nil {
go h(c)
}
return c, nil
}
go h(c) -> our conn handler is executed in a new goroutine
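As a side note, the Connected notification delivered by notifyAll above is what any registered network.Notifiee receives. A minimal sketch using network.NotifyBundle (field names as in go-libp2p-core/network at the time; check your version):

import (
	"fmt"

	"github.com/libp2p/go-libp2p-core/network"
)

// registerConnNotifiee subscribes to connection open/close events on the
// given network (the Swarm implements network.Network).
func registerConnNotifiee(n network.Network) {
	n.Notify(&network.NotifyBundle{
		ConnectedF: func(_ network.Network, c network.Conn) {
			fmt.Println("connected:", c.RemotePeer())
		},
		DisconnectedF: func(_ network.Network, c network.Conn) {
			fmt.Println("disconnected:", c.RemotePeer())
		},
	})
}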
Take a look at start(): it listens for new incoming streams and calls addStream on each.
func (c *Conn) start() {
go func() {
defer c.swarm.refs.Done()
defer c.Close()
for {
ts, err := c.conn.AcceptStream()
if err != nil {
log.Error(err, c)
return
}
c.swarm.refs.Add(1)
go func() {
s, err := c.addStream(ts, network.DirInbound)
// Don't defer this. We don't want to block
// swarm shutdown on the connection handler.
c.swarm.refs.Done()
// We only get an error here when the swarm is closed or closing.
if err != nil {
return
}
if h := c.swarm.StreamHandler(); h != nil {
h(s)
}
}()
}
}()
}
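The handler returned by c.swarm.StreamHandler() above is whatever was registered on the swarm; BasicHost installs its newStreamHandler (shown later) during construction. A minimal sketch of registering one directly, assuming the SetStreamHandler method on *swarm.Swarm:

import (
	"fmt"

	"github.com/libp2p/go-libp2p-core/network"
	swarm "github.com/libp2p/go-libp2p-swarm"
)

// installStreamHandler registers the swarm-level stream handler that the
// accept loop in Conn.start dispatches every inbound stream to.
func installStreamHandler(s *swarm.Swarm) {
	s.SetStreamHandler(func(st network.Stream) {
		// protocol negotiation and per-protocol dispatch would follow here,
		// which is exactly what BasicHost.newStreamHandler does
		fmt.Println("new inbound stream from", st.Conn().RemotePeer())
	})
}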
AcceptStream() of yamux: waits for a new stream on a channel
func (s *Session) AcceptStream() (*Stream, error) {
select {
case stream := <-s.acceptCh:
if err := stream.sendWindowUpdate(); err != nil {
return nil, err
}
return stream, nil
case <-s.shutdownCh:
return nil, s.shutdownErr
}
}
swarm.connHandler
As soon as a new connection is established, the identify service is started.
IdentifyConn creates a new stream, exchanges the protocol ID, and tries to obtain the identify information from the peer.
// newConnHandler is the remote-opened conn handler for inet.Network
func (h *BasicHost) newConnHandler(c network.Conn) {
// Clear protocols on connecting to new peer to avoid issues caused
// by misremembering protocols between reconnects
h.Peerstore().SetProtocols(c.RemotePeer())
h.ids.IdentifyConn(c)
}
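For context, this handler is registered on the swarm when the BasicHost is constructed. A sketch of that wiring, assuming the SetConnHandler method of that go-libp2p-swarm generation (the TODO in addConn above already marks this hook for removal):

import (
	"github.com/libp2p/go-libp2p-core/network"
	swarm "github.com/libp2p/go-libp2p-swarm"
)

// installConnHandler mirrors what BasicHost does at construction time:
// onConn plays the role of newConnHandler above, so identify can start as
// soon as a remote-opened connection is reported by addConn.
func installConnHandler(s *swarm.Swarm, onConn network.ConnHandler) {
	s.SetConnHandler(onConn)
}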
swarm.NewStream
Taking IdentifyConn as an example, the call stack is:
github.com/libp2p/go-yamux.(*Session).OpenStream at session.go:194
github.com/libp2p/go-libp2p-yamux.(*conn).OpenStream at yamux.go:28
<autogenerated>:2
github.com/libp2p/go-libp2p-swarm.(*Conn).NewStream at swarm_conn.go:172
github.com/libp2p/go-libp2p/p2p/protocol/identify.(*IDService).IdentifyConn at id.go:185
github.com/libp2p/go-libp2p/p2p/host/basic.(*BasicHost).newConnHandler at basic_host.go:247
github.com/libp2p/go-libp2p/p2p/host/basic.(*BasicHost).newConnHandler-fm at basic_host.go:243
runtime.goexit at asm_amd64.s:1357
- Async stack trace
github.com/libp2p/go-libp2p-swarm.(*Swarm).addConn at swarm.go:243
- NewStream -> mux.SelectOneOf(ProtocolID) -> SelectProtoOrFail: negotiate the protocol
- Write("/multistream/1.0.0") -> Write(Proto)
- Read("/multistream/1.0.0") -> Read(Proto)
func SelectProtoOrFail(proto string, rwc io.ReadWriteCloser) error {
errCh := make(chan error, 1)
go func() {
var buf bytes.Buffer
delimWrite(&buf, []byte(ProtocolID))
delimWrite(&buf, []byte(proto))
_, err := io.Copy(rwc, &buf)
errCh <- err
}()
// We have to read *both* errors.
err1 := readMultistreamHeader(rwc)
err2 := readProto(proto, rwc)
if werr := <-errCh; werr != nil {
return werr
}
if err1 != nil {
return err1
}
if err2 != nil {
return err2
}
return nil
}
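Each negotiation message on the wire is length-delimited. A minimal sketch of the framing performed by delimWrite, assuming the multistream-select format of a uvarint length prefix followed by the payload and a trailing newline:

import (
	"encoding/binary"
	"io"
)

// writeDelimited frames one multistream-select message: uvarint(len+1),
// the payload, then '\n'. Illustrative only; go-multistream's delimWrite is
// the authoritative implementation.
func writeDelimited(w io.Writer, msg []byte) error {
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(msg)+1)) // +1 for the trailing '\n'
	if _, err := w.Write(lenBuf[:n]); err != nil {
		return err
	}
	if _, err := w.Write(msg); err != nil {
		return err
	}
	_, err := w.Write([]byte{'\n'})
	return err
}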
swarm.StreamHandler
mux.Negotiate exchanges the protocol ID and determines which handler to use by looking up the handler map.
func (h *BasicHost) newStreamHandler(s network.Stream) {
before := time.Now()
if h.negtimeout > 0 {
if err := s.SetDeadline(time.Now().Add(h.negtimeout)); err != nil {
log.Error("setting stream deadline: ", err)
s.Reset()
return
}
}
lzc, protoID, handle, err := h.Mux().NegotiateLazy(s)
took := time.Since(before)
if err != nil {
if err == io.EOF {
logf := log.Debugf
if took > time.Second*10 {
logf = log.Warningf
}
logf("protocol EOF: %s (took %s)", s.Conn().RemotePeer(), took)
} else {
log.Debugf("protocol mux failed: %s (took %s)", err, took)
}
s.Reset()
return
}
s = &streamWrapper{
Stream: s,
rw: lzc,
}
if h.negtimeout > 0 {
if err := s.SetDeadline(time.Time{}); err != nil {
log.Error("resetting stream deadline: ", err)
s.Reset()
return
}
}
s.SetProtocol(protocol.ID(protoID))
log.Debugf("protocol negotiation took %s", took)
go handle(protoID, s)
}
go handle(protoID, s) -> our stream handler is executed in a new goroutine
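The handler map consulted by NegotiateLazy is populated through Host.SetStreamHandler. A minimal sketch of registering a per-protocol handler (the protocol ID and handler body here are illustrative):

import (
	"bufio"
	"fmt"

	"github.com/libp2p/go-libp2p-core/host"
	"github.com/libp2p/go-libp2p-core/network"
	"github.com/libp2p/go-libp2p-core/protocol"
)

// registerEchoHandler adds an entry to the host's mux; once newStreamHandler
// negotiates "/echo/1.0.0" on an inbound stream, this function is invoked
// via go handle(protoID, s).
func registerEchoHandler(h host.Host) {
	h.SetStreamHandler(protocol.ID("/echo/1.0.0"), func(s network.Stream) {
		defer s.Close()
		// read one line and echo it back
		line, err := bufio.NewReader(s).ReadString('\n')
		if err != nil {
			return
		}
		fmt.Fprint(s, line)
	})
}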
stream header definition
The header is 12 bytes, as shown below:
const (
sizeOfVersion = 1
sizeOfType = 1
sizeOfFlags = 2
sizeOfStreamID = 4
sizeOfLength = 4
headerSize = sizeOfVersion + sizeOfType + sizeOfFlags +
sizeOfStreamID + sizeOfLength
)
type header [headerSize]byte
func encode(msgType uint8, flags uint16, streamID uint32, length uint32) header {
var h header
h[0] = protoVersion
h[1] = msgType
binary.BigEndian.PutUint16(h[2:4], flags)
binary.BigEndian.PutUint32(h[4:8], streamID)
binary.BigEndian.PutUint32(h[8:12], length)
return h
}
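Decoding reverses the same layout; a small illustrative helper (not part of the yamux API) that reads the 12-byte header back:

// decode extracts the fields written by encode above; purely illustrative.
func decode(h header) (version, msgType uint8, flags uint16, streamID, length uint32) {
	version = h[0]
	msgType = h[1]
	flags = binary.BigEndian.Uint16(h[2:4])
	streamID = binary.BigEndian.Uint32(h[4:8])
	length = binary.BigEndian.Uint32(h[8:12])
	return
}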
stream.Write
stream.Write -> multistream.Write -> yamux.Write -> fragment (max 65535 bytes) if needed -> session.sendMsg
Call Stack
github.com/libp2p/go-yamux.(*Session).sendMsg at session.go:358
github.com/libp2p/go-yamux.(*Stream).write at stream.go:172
github.com/libp2p/go-yamux.(*Stream).Write at stream.go:128
github.com/libp2p/go-libp2p-swarm.(*Stream).Write at swarm_stream.go:88
github.com/multiformats/go-multistream.(*lazyServerConn).Write at lazyServer.go:25
github.com/libp2p/go-libp2p/p2p/host/basic.(*streamWrapper).Write at basic_host.go:804
main.serveStub.func1.1 at proxy.go:45
runtime.goexit at asm_amd64.s:1357
- Async stack trace
main.serveStub.func1 at proxy.go:31
yamux first fragments the message body if necessary, then encodes the stream header for each fragment, and finally hands the frames to the send-loop goroutine through a channel.
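The fragmentation step itself is simple. A minimal sketch, where maxFrameSize stands in for whatever limit applies to the stream (e.g. the remaining send window):

// fragment splits a message body into chunks no larger than maxFrameSize;
// each chunk then gets its own 12-byte header before being queued on sendCh.
func fragment(body []byte, maxFrameSize int) [][]byte {
	var frames [][]byte
	for len(body) > 0 {
		n := len(body)
		if n > maxFrameSize {
			n = maxFrameSize
		}
		frames = append(frames, body[:n])
		body = body[n:]
	}
	return frames
}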
That send-loop goroutine is started when the yamux session is created: newSession launches it to wait for buffers coming from the mux session.
func newSession(config *Config, conn net.Conn, client bool, readBuf int) *Session {
var reader io.Reader = conn
if readBuf > 0 {
reader = bufio.NewReaderSize(reader, readBuf)
}
s := &Session{
config: config,
client: client,
logger: log.New(config.LogOutput, "", log.LstdFlags),
conn: conn,
reader: reader,
pings: make(map[uint32]chan struct{}),
streams: make(map[uint32]*Stream),
inflight: make(map[uint32]struct{}),
synCh: make(chan struct{}, config.AcceptBacklog),
acceptCh: make(chan *Stream, config.AcceptBacklog),
sendCh: make(chan []byte, 64),
recvDoneCh: make(chan struct{}),
sendDoneCh: make(chan struct{}),
shutdownCh: make(chan struct{}),
}
if client {
s.nextStreamID = 1
} else {
s.nextStreamID = 2
}
if config.EnableKeepAlive {
s.startKeepalive()
}
go s.recv()
go s.send()
return s
}
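For completeness, a minimal sketch of using the session directly over a plain net.Conn; the constructor names come from the hashicorp yamux lineage that go-yamux forked, and signatures differ slightly across versions:

import (
	"net"

	yamux "github.com/libp2p/go-yamux"
)

// newClientSession wraps an established net.Conn in a client-side session;
// the client uses odd stream IDs (nextStreamID = 1 above).
func newClientSession(conn net.Conn) (*yamux.Session, error) {
	return yamux.Client(conn, yamux.DefaultConfig())
}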
Check out s.send:
// send is a long running goroutine that sends data
func (s *Session) send() {
if err := s.sendLoop(); err != nil {
s.exitErr(err)
}
}
In sendLoop:
select {
case buf = <-s.sendCh:
case <-s.shutdownCh:
...
_, err := writer.Write(buf)
Here, writer is the low-level net.Conn or, more specifically, the secio ETMWriter when the connection is secured with secio.
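Putting it together, the send-loop pattern looks roughly like this (a sketch only; the real sendLoop in go-yamux also handles write deadlines, pooled buffers, and error propagation):

import "io"

// sendLoopSketch drains sendCh and writes each buffer to the underlying
// writer (the net.Conn, or the secio writer when the connection is secured),
// until shutdown is signalled.
func sendLoopSketch(sendCh <-chan []byte, shutdownCh <-chan struct{}, w io.Writer) error {
	for {
		select {
		case buf := <-sendCh:
			if _, err := w.Write(buf); err != nil {
				return err
			}
		case <-shutdownCh:
			return nil
		}
	}
}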