My Obsession with Hyperledger Fabric 1.1.0 (6): Kafka Cluster Deployment

1. The deployment uses twelve servers, listed below:

Name      IP             Hostname                  Organization
zk1       192.168.2.237  zookeeper1
zk2       192.168.2.131  zookeeper2
zk3       192.168.2.188  zookeeper3
kafka1    192.168.2.182  kafka1
kafka2    192.168.2.213  kafka2
kafka3    192.168.2.137  kafka3
kafka4    192.168.2.186  kafka4
orderer0  192.168.2.238  orderer0.example.com
orderer1  192.168.2.210  orderer1.example.com
orderer2  192.168.2.235  orderer2.example.com
peer0     192.168.2.118  peer0.org1.example.com    Org1
peer1     192.168.2.21   peer1.org2.example.com    Org2

2. crypto-config.yaml configuration:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

# ---------------------------------------------------------------------------
# "OrdererOrgs" - Definition of organizations managing orderer nodes
# ---------------------------------------------------------------------------
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: orderer2
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
  # ---------------------------------------------------------------------------
  # Org1
  # ---------------------------------------------------------------------------
  - Name: Org1
    Domain: org1.example.com
    # ---------------------------------------------------------------------------
    # "Specs"
    # ---------------------------------------------------------------------------
    # Uncomment this section to enable the explicit definition of hosts in your
    # configuration.  Most users will want to use Template, below
    #
    # Specs is an array of Spec entries.  Each Spec entry consists of two fields:
    #   - Hostname:   (Required) The desired hostname, sans the domain.
    #   - CommonName: (Optional) Specifies the template or explicit override for
    #                 the CN.  By default, this is the template:
    #
    #                              "{{.Hostname}}.{{.Domain}}"
    #
    #                 which obtains its values from the Spec.Hostname and
    #                 Org.Domain, respectively.
    # ---------------------------------------------------------------------------
    # Specs:
    #   - Hostname: foo # implicitly "foo.org1.example.com"
    #     CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
    #   - Hostname: bar
    #   - Hostname: baz
    # ---------------------------------------------------------------------------
    # "Template"
    # ---------------------------------------------------------------------------
    # Allows for the definition of 1 or more hosts that are created sequentially
    # from a template. By default, this looks like "peer%d" from 0 to Count-1.
    # You may override the number of nodes (Count), the starting index (Start)
    # or the template used to construct the name (Hostname).
    #
    # Note: Template and Specs are not mutually exclusive.  You may define both
    # sections and the aggregate nodes will be created for you.  Take care with
    # name collisions
    # ---------------------------------------------------------------------------
    Template:
      Count: 2
      # Start: 5
      # Hostname: {{.Prefix}}{{.Index}} # default
    # ---------------------------------------------------------------------------
    # "Users"
    # ---------------------------------------------------------------------------
    # Count: The number of user accounts _in addition_ to Admin
    # ---------------------------------------------------------------------------
    Users:
      Count: 1
  # ---------------------------------------------------------------------------
  # Org2: See "Org1" for full specification
  # ---------------------------------------------------------------------------
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 2
    Users:
      Count: 1
    Specs:
      - Hostname: foo
        CommonName: foo27.org2.example.com
      - Hostname: bar
      - Hostname: baz

  - Name: Org3
    Domain: org3.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org4
    Domain: org4.example.com
    Template:
      Count: 2
    Users:
      Count: 1

  - Name: Org5
    Domain: org5.example.com
    Template:
      Count: 2
    Users:
      Count: 1

3. Upload the crypto-config.yaml file to the /opt/gopath/src/github.com/hyperledger/fabric/aberic folder on the 192.168.2.238 server, then run the following command to generate the certificates and keys the nodes need:

./bin/cryptogen generate --config=./crypto-config.yaml
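
If generation succeeds, cryptogen writes everything into a crypto-config directory next to the YAML file. A quick sanity check of the output (paths follow from the crypto-config.yaml above):

# The three orderer identities declared under OrdererOrgs -> Specs:
ls crypto-config/ordererOrganizations/example.com/orderers
# expected: orderer0.example.com  orderer1.example.com  orderer2.example.com

# The Org1 peers generated from Template (Count: 2):
ls crypto-config/peerOrganizations/org1.example.com/peers
# expected: peer0.org1.example.com  peer1.org1.example.com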

4. Write the configtx.yaml configuration file:

# Copyright IBM Corp. All Rights Reserved.
#
# SPDX-License-Identifier: Apache-2.0
#

---
################################################################################
#
#   Profile
#
#   - Different configuration profiles may be encoded here to be specified
#   as parameters to the configtxgen tool
#
################################################################################
Profiles:

    TwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:                
                    - *Org1
                    - *Org2
                    - *Org3
                    - *Org4
                    - *Org5
    TwoOrgsChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
                - *Org3
                - *Org4
                - *Org5

################################################################################
#
#   Section: Organizations
#
#   - This section defines the different organizational identities which will
#   be referenced later in the configuration.
#
################################################################################
Organizations:

    # SampleOrg defines an MSP using the sampleconfig.  It should never be used
    # in production but may be used as a template for other definitions
    - &OrdererOrg
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: OrdererOrg

        # ID to load the MSP definition as
        ID: OrdererMSP

        # MSPDir is the filesystem path which contains the MSP configuration
        MSPDir: crypto-config/ordererOrganizations/example.com/msp

    - &Org1
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org1MSP

        # ID to load the MSP definition as
        ID: Org1MSP

        MSPDir: crypto-config/peerOrganizations/org1.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication.  Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org1.example.com
              Port: 7051

    - &Org2
        # DefaultOrg defines the organization which is used in the sampleconfig
        # of the fabric.git development environment
        Name: Org2MSP

        # ID to load the MSP definition as
        ID: Org2MSP

        MSPDir: crypto-config/peerOrganizations/org2.example.com/msp

        AnchorPeers:
            # AnchorPeers defines the location of peers which can be used
            # for cross org gossip communication.  Note, this value is only
            # encoded in the genesis block in the Application section context
            - Host: peer0.org2.example.com
              Port: 7051

    - &Org3
        Name: Org3MSP
        ID: Org3MSP
      
        MSPDir: crypto-config/peerOrganizations/org3.example.com/msp

        AnchorPeers:
            - Host: peer0.org3.example.com
              Port: 7051

    - &Org4
        Name: Org4MSP
        ID: Org4MSP
      
        MSPDir: crypto-config/peerOrganizations/org4.example.com/msp
  
        AnchorPeers:
            - Host: peer0.org4.example.com
              Port: 7051

    - &Org5
        Name: Org5MSP
        ID: Org5MSP
      
        MSPDir: crypto-config/peerOrganizations/org5.example.com/msp

        AnchorPeers:
            - Host: peer0.org5.example.com
              Port: 7051

################################################################################
#
#   SECTION: Orderer
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for orderer related parameters
#
################################################################################
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: kafka

    Addresses:
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 98 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB

    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects
        # NOTE: Use IP:port notation
        Brokers:
            - 192.168.2.182:9092
            - 192.168.2.213:9092
            - 192.168.2.137:9092
            - 192.168.2.186:9092

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:

################################################################################
#
#   SECTION: Application
#
#   - This section defines the values to encode into a config transaction or
#   genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults

    # Organizations is the list of orgs which are defined as participants on
    # the application side of the network
    Organizations:

Capabilities:
    Global: &ChannelCapabilities
        V1_1: true

    Orderer: &OrdererCapabilities
        V1_1: true

    Application: &ApplicationCapabilities
        V1_1: true

5. Upload the configtx.yaml file to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on the 192.168.2.238 server, create a channel-artifacts folder there, and run the following command to generate the genesis block:

./bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
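
To confirm the profile was encoded as intended, configtxgen can decode the block it just wrote (the output is a verbose dump; this is only a sanity check):

./bin/configtxgen -inspectBlock ./channel-artifacts/genesis.block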

6. The genesis.block file is used by the ordering service when the orderers start. The channel configuration transaction that the peers will need to create the channel after they start is generated here as well, with the following command:

./bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx -channelID mychannel
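
The channel creation transaction can be decoded the same way before it is used in section 13:

./bin/configtxgen -inspectChannelCreateTx ./channel-artifacts/mychannel.tx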

7. ZooKeeper configuration:

7.1 Write the docker-zookeeper1.yaml file:

version: '2'

services:

  zookeeper1:
    container_name: zookeeper1
    hostname: zookeeper1
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

7.2 Write the docker-zookeeper2.yaml file:

version: '2'

services:

  zookeeper2:
    container_name: zookeeper2
    hostname: zookeeper2
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

7.3 Write the docker-zookeeper3.yaml file:

version: '2'

services:

  zookeeper3:
    container_name: zookeeper3
    hostname: zookeeper3
    image: hyperledger/fabric-zookeeper
    restart: always
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper1:2888:3888 server.2=zookeeper2:2888:3888 server.3=zookeeper3:2888:3888
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

8. Kafka configuration

8.1 Write the docker-kafka1.yaml file:

version: '2'

services:

  kafka1:
    container_name: kafka1
    hostname: kafka1
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

8.2 Write the docker-kafka2.yaml file:

version: '2'

services:

  kafka2:
    container_name: kafka2
    hostname: kafka2
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

8.3 Write the docker-kafka3.yaml file:

version: '2'

services:

  kafka3:
    container_name: kafka3
    hostname: kafka3
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

8.4 Write the docker-kafka4.yaml file:

version: '2'

services:

  kafka4:
    container_name: kafka4
    hostname: kafka4
    image: hyperledger/fabric-kafka
    restart: always
    environment:
      - KAFKA_BROKER_ID=4
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper1:2181,zookeeper2:2181,zookeeper3:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_LOG_RETENTION_MS=-1
    ports:
      - "9092:9092"
    extra_hosts:
      - "zookeeper1:192.168.2.237"
      - "zookeeper2:192.168.2.131"
      - "zookeeper3:192.168.2.188"
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

9. Orderer configuration

9.1 Write docker-orderer0.yaml:

version: '2'

services:

  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=TRUE
      - ORDERER_KAFKA_BROKERS=[192.168.2.182:9092,192.168.2.213:9092,192.168.2.137:9092,192.168.2.186:9092]      
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - example
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

9.2 Write docker-orderer1.yaml:

version: '2'

services:

  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=TRUE
      - ORDERER_KAFKA_BROKERS=[192.168.2.182:9092,192.168.2.213:9092,192.168.2.137:9092,192.168.2.186:9092]      
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - example
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

9.3 Write docker-orderer2.yaml:

version: '2'

services:

  orderer2.example.com:
    container_name: orderer2.example.com
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - ORDERER_GENERAL_LOGLEVEL=error
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      #- ORDERER_GENERAL_GENESISPROFILE=AntiMothOrdererGenesis
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      #- ORDERER_GENERAL_LEDGERTYPE=ram
      #- ORDERER_GENERAL_LEDGERTYPE=file
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]

      - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
      - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=TRUE
      - ORDERER_KAFKA_BROKERS=[192.168.2.182:9092,192.168.2.213:9092,192.168.2.137:9092,192.168.2.186:9092]      
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
    - ./channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp:/var/hyperledger/orderer/msp
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/tls/:/var/hyperledger/orderer/tls
    networks:
      default:
        aliases:
          - example
    ports:
      - 7050:7050
    extra_hosts:
      - "kafka1:192.168.2.182"
      - "kafka2:192.168.2.213"
      - "kafka3:192.168.2.137"
      - "kafka4:192.168.2.186"

10. Start the cluster. The startup order is: ZooKeeper cluster, then Kafka cluster, then the orderer (ordering service) cluster.

10.1 Start the ZooKeeper cluster:

Upload docker-zookeeper1.yaml to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on 192.168.2.237, as shown in the figure:

Run the following command to start docker-zookeeper1.yaml:

docker-compose -f docker-zookeeper1.yaml up

At this point the log reports connection-refused errors, because zookeeper2 and zookeeper3 have not been started yet, as shown in the figure:

Start zookeeper2 (192.168.2.131) and zookeeper3 (192.168.2.188) in the same way, then look back at the zookeeper1 log:

This indicates that zookeeper1 is now communicating with zookeeper2 and zookeeper3 and has set its own state to FOLLOWING.

zookeeper3 behaves like zookeeper1 and is also FOLLOWING:

The zookeeper2 log, however, looks like this:

Through the election, zookeeper2 has become the new leader.

10.2 On the 192.168.2.131 server where zookeeper2 runs, execute the following command to enter the ZooKeeper container:

docker exec -it zookeeper2 bash

Then change into the bin directory and run the following command; you should see the information below:

zkServer.sh status

It shows the path of the ZooKeeper configuration in use and that the mode is leader, consistent with the logs observed at startup: ZooKeeper has started and everything is working.
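
The same check can be made from any host without entering the containers, using ZooKeeper's four-letter commands on the mapped client port (assuming nc is available and the commands are enabled, which is the default for the version these images ship):

echo srvr | nc 192.168.2.237 2181 | grep Mode   # follower
echo srvr | nc 192.168.2.131 2181 | grep Mode   # leader (zookeeper2 in the run above)
echo srvr | nc 192.168.2.188 2181 | grep Mode   # follower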

11. Start the Kafka cluster

Upload docker-kafka1.yaml, docker-kafka2.yaml, docker-kafka3.yaml and docker-kafka4.yaml to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on the 192.168.2.182, 192.168.2.213, 192.168.2.137 and 192.168.2.186 servers respectively, as shown in the figure:

On the 192.168.2.182 server, run the following command to start docker-kafka1.yaml:

docker-compose -f docker-kafka1.yaml up

As shown in the figure, Kafka has been initialized, instantiated and started; next, start the Kafka brokers on 192.168.2.213, 192.168.2.137 and 192.168.2.186 in the same way.

When each individual Kafka node starts, the ZooKeeper leader logs corresponding feedback. For example, when kafka1 starts, the zookeeper2 server logs the following:
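
Once all four brokers are up, their registration can be confirmed from inside any ZooKeeper container: each broker creates an ephemeral node under /brokers/ids (a sketch, assuming zkCli.sh is on the container's PATH as in the stock image):

docker exec -it zookeeper2 bash -c "zkCli.sh -server localhost:2181 ls /brokers/ids"
# the last line of output should be: [1, 2, 3, 4]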

12. Start the orderer nodes

12.1 Upload docker-orderer0.yaml, docker-orderer1.yaml and docker-orderer2.yaml to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on the 192.168.2.238, 192.168.2.210 and 192.168.2.235 servers respectively.

12.2 Upload the genesis.block file generated earlier on the 192.168.2.238 server to the /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts directory on each orderer server; if the directory does not exist, create it manually.

12.3 Upload the crypto-config.yaml file to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on each server.

12.4 Upload the ordererOrganizations folder from the crypto-config directory to the /opt/gopath/src/github.com/hyperledger/fabric/aberic/crypto-config directory on each server; if the directory does not exist, create it manually.

Note: the /opt/gopath/src/github.com/hyperledger/fabric/aberic/crypto-config/ordererOrganizations/example.com/orderers directory contains three folders; keep only the one matching the server. For example, the orderer1 server only needs to keep orderer1.example.com.

After uploading, run tree to verify; the directory structure looks like the figure:

12.5 On each of the three orderer servers, start the corresponding docker-orderer*.yaml with the following commands:

docker-compose -f docker-orderer0.yaml up

docker-compose -f docker-orderer1.yaml up -d

docker-compose -f docker-orderer2.yaml up -d
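
orderer0 was started in the foreground, so its log is visible directly; for the two detached orderers, a quick check on the respective server confirms the container is up and not stuck retrying its Kafka connections:

docker ps --filter name=orderer
docker logs --tail 50 orderer1.example.com   # or orderer2.example.com on its own server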

13. Testing the cluster environment

13.1 Write the docker-peer0org1.yaml file:

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - "5984:5984"

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/5e9cad71528b55c1e42b4c1e44bb656aae91b11a419ea146b26be21359cfa159_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/5e9cad71528b55c1e42b4c1e44bb656aae91b11a419ea146b26be21359cfa159_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984

      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_CHAINCODELISTENADDRESS=peer0.org1.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
        - /var/run/:/host/var/run/
        - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
        - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - example
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/[email protected]/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/aberic/chaincode/go
        - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
        - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org1.example.com
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"
      - "peer0.org1.example.com:192.168.2.118"

13.2 Upload the finished docker-peer0org1.yaml file to the /opt/gopath/src/github.com/hyperledger/fabric/aberic directory on the 192.168.2.118 server.

13.3 Upload the mychannel.tx channel transaction file generated earlier to the /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts directory on the 192.168.2.118 server.

13.4 Upload the previously generated peerOrganizations material to the /opt/gopath/src/github.com/hyperledger/fabric/aberic/crypto-config folder on the 192.168.2.118 server; only the Org1-related material is needed.

13.5 In the aberic directory on the 192.168.2.118 server, run the following command to start the peer node services:

docker-compose -f docker-peer0org1.yaml up -d

13.6 Run docker ps -a to check whether everything started successfully:

13.7 Enter the client container and create the channel:

docker exec -it cli bash

peer channel create -o orderer0.example.com:7050 -c mychannel -t 50 -f ./channel-artifacts/mychannel.tx
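
The create command also writes mychannel.block into the cli container's working directory; other peers will need this file to join. If it ever has to be re-obtained, it can be fetched from the ordering service from within the cli (an optional alternative, not required for this flow):

peer channel fetch 0 mychannel.block -o orderer0.example.com:7050 -c mychannel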

13.8 Join the channel:

peer channel join -b mychannel.block

13.9 Install the chaincode:

peer chaincode install -n mycc -p github.com/hyperledger/fabric/aberic/chaincode/go/chaincode_example02 -v 1.0

13.10 Instantiate the chaincode:

peer chaincode instantiate -o orderer0.example.com:7050 -C mychannel -n mycc -c '{"Args":["init","A","10","B","10"]}' -P "OR ('Org1MSP.member','Org2MSP.member')" -v 1.0

Note:

    List installed chaincodes: peer chaincode list --installed

    List instantiated chaincodes: peer chaincode list --instantiated -C mychannel
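
Optionally, the chaincode can be exercised with an invoke before querying; for chaincode_example02 the following transfers 1 unit from A to B (not required for the test below):

peer chaincode invoke -o orderer0.example.com:7050 -C mychannel -n mycc -c '{"Args":["invoke","A","B","1"]}'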

13.11 Query the value of asset A:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","A"]}'

As shown in the figure, the value can be queried, which means the deployment succeeded.

14. Testing the foo27.org2 node

14.1 Write docker-foo27org2.yaml:

version: '2'

services:

  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb
    # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service,
    # for example map it to utilize Fauxton User Interface in dev environments.
    ports:
      - "5984:5984"

  ca:
    container_name: ca
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca
      - FABRIC_CA_SERVER_TLS_ENABLED=false
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/73281a53ab19100240ebc4633e8b489514c0fef921a3a2bc5ba348c595fc6765_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org2.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/73281a53ab19100240ebc4633e8b489514c0fef921a3a2bc5ba348c595fc6765_sk -b admin:adminpw -d'
    volumes:
      - ./crypto-config/peerOrganizations/org2.example.com/ca/:/etc/hyperledger/fabric-ca-server-config

  foo27.org2.example.com:
    container_name: foo27.org2.example.com
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=192.168.2.221:5984

      - CORE_PEER_ID=foo27.org2.example.com
      - CORE_PEER_NETWORKID=aberic
      - CORE_PEER_ADDRESS=foo27.org2.example.com:7051
      - CORE_PEER_CHAINCODEADDRESS=foo27.org2.example.com:7052
      - CORE_PEER_CHAINCODELISTENADDRESS=foo27.org2.example.com:7052
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=foo27.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP

      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # the following setting starts chaincode containers on the same
      # bridge network as the peers
      # https://docs.docker.com/compose/networking/
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=aberic_default
      - CORE_VM_DOCKER_TLS_ENABLED=false
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_PROFILE_ENABLED=false
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/aberic/chaincode/go
        - ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/msp:/etc/hyperledger/fabric/msp
        - ./crypto-config/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls:/etc/hyperledger/fabric/tls
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
    depends_on:
      - couchdb
    networks:
      default:
        aliases:
          - aberic
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      # - CORE_LOGGING_LEVEL=ERROR
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=foo27.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_TLS_ENABLED=false
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/foo27.org2.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/[email protected]/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
        - /var/run/:/host/var/run/
        - ./chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/aberic/chaincode/go
        - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
        - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - foo27.org2.example.com
    extra_hosts:
      - "orderer0.example.com:192.168.2.238"
      - "orderer1.example.com:192.168.2.210"
      - "orderer2.example.com:192.168.2.235"
      - "foo27.org2.example.com:192.168.2.221"

14.2 Upload the docker-foo27org2.yaml file to the aberic directory on the 192.168.2.221 server and start it:

docker-compose -f docker-foo27org2.yaml up -d

14.3 Copy mychannel.block from the /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts directory on the 192.168.2.118 server to the same directory on 192.168.2.221, then copy it into the cli container with the following command:

docker cp /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts/mychannel.block 4f3d4a373e0c:/opt/gopath/src/github.com/hyperledger/fabric/peer/

Here 4f3d4a373e0c is the id of the cli container.
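
The container id differs on every machine; instead of hard-coding it, it can be looked up first (or the container name cli can be used directly as the docker cp target):

CLI_ID=$(docker ps -qf name=cli)
docker cp /opt/gopath/src/github.com/hyperledger/fabric/aberic/channel-artifacts/mychannel.block ${CLI_ID}:/opt/gopath/src/github.com/hyperledger/fabric/peer/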

14.4 Enter the client: docker exec -it cli bash

14.5 Run ls to check that mychannel.block is present.

14.6 Join the channel: peer channel join -b mychannel.block

14.7 Install the chaincode:

peer chaincode install -n mycc -p github.com/hyperledger/fabric/aberic/chaincode/go/chaincode_example02 -v 1.0

14.8 Query the value of account A:

peer chaincode query -C mychannel -n mycc -c '{"Args":["query","A"]}'


Reprinted from blog.csdn.net/tianshuhao521/article/details/84726849