Building microservices with Micronaut and Kafka

Today, we will use Apache Kafka to build some microservices that communicate asynchronously with one another through topics. We use the Micronaut framework, which provides a dedicated library for Kafka integration. Let me briefly describe the architecture of our example system. It consists of four microservices: order-service, trip-service, driver-service and passenger-service. The implementation of these applications is very simple: they store their data in memory and all connect to the same Kafka instance.

The main goal of our system is to arrange trips for customers. The order-service application also acts as a gateway: it receives requests from clients, stores the order history and sends events to the orders topic. All the other microservices listen on the orders topic and process the orders sent by order-service. Each microservice also has its own dedicated topic, to which it sends events containing information about its state changes. Such events are received by some of the other microservices. The architecture is shown in the figure below.

(Figure: architecture of the example system)

Before reading this article, it is worth familiarizing yourself with the Micronaut framework. You can start with my earlier article, which describes the process of building microservice communication over a REST API: A Quick Guide to Microservices with the Micronaut Framework.

1. Run Kafka

To run Apache Kafka on the local machine, we can use its Docker image. The latest images are shared by wurstmeister at https://hub.docker.com/u/wurstmeister. Before starting the Kafka container, we have to start the ZooKeeper server that Kafka depends on. If you run Docker on Windows, the default address of its virtual machine is 192.168.99.100, and this address must be set in the Kafka container's environment.

The ZooKeeper and Kafka containers are started on the same network. ZooKeeper runs in Docker under the service name zookeeper and exposes port 2181. The Kafka container needs that address in its KAFKA_ZOOKEEPER_CONNECT environment variable.

$ docker network create kafka
$ docker run -d --name zookeeper --network kafka -p 2181:2181 wurstmeister/zookeeper
$ docker run -d --name kafka -p 9092:9092 --network kafka --env KAFKA_ADVERTISED_HOST_NAME=192.168.99.100 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 wurstmeister/kafka

2. Adding the Micronaut Kafka dependency

An application built with Micronaut Kafka can be started with or without an embedded HTTP server. To enable Micronaut Kafka, add the micronaut-kafka library to the dependencies. If you also want to expose an HTTP API, add micronaut-http-server-netty:

<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>micronaut-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-http-server-netty</artifactId>
</dependency>
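
One configuration detail worth noting: Micronaut Kafka reads the broker address from the kafka.bootstrap.servers property in application.yml (it defaults to localhost:9092). This entry is not shown in the article; assuming the Docker VM address used in section 1, a minimal sketch might be:

```yaml
kafka:
  bootstrap:
    servers: 192.168.99.100:9092
```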

3. Building order-service

order-service is the only application that starts an embedded HTTP server and exposes a REST API. That's why we can also provide Micronaut's built-in management features for it, such as health checks. To do this, we should first add the micronaut-management dependency:

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-management</artifactId>
</dependency>

For convenience, we enable all management endpoints and disable HTTP authentication for them with the following configuration in application.yml:

endpoints:
  all:
    enabled: true
    sensitive: false

Now the health check is available at http://localhost:8080/health. Our sample application also exposes a simple REST API for adding a new order and listing all previously created orders. Here is the Micronaut controller that exposes these endpoints:

@Controller("orders")
public class OrderController {

    @Inject
    OrderInMemoryRepository repository;
    @Inject
    OrderClient client;

    @Post
    public Order add(@Body Order order) {
        order = repository.add(order);
        client.send(order);
        return order;
    }

    @Get
    public Set<Order> findAll() {
        return repository.findAll();
    }

}

Each microservice uses an in-memory repository implementation. Here is the repository implementation for order-service:

@Singleton
public class OrderInMemoryRepository {

    private Set<Order> orders = new HashSet<>();

    public Order add(Order order) {
        order.setId((long) (orders.size() + 1));
        orders.add(order);
        return order;
    }

    public void update(Order order) {
        orders.remove(order);
        orders.add(order);
    }

    public Optional<Order> findByTripIdAndType(Long tripId, OrderType type) {
        return orders.stream().filter(order -> order.getTripId().equals(tripId) && order.getType() == type).findAny();
    }

    public Optional<Order> findNewestByUserIdAndType(Long userId, OrderType type) {
        return orders.stream().filter(order -> order.getUserId().equals(userId) && order.getType() == type)
                .max(Comparator.comparing(Order::getId));
    }

    public Set<Order> findAll() {
        return orders;
    }

}
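
A side note on the update method above: the Order class (shown later) does not override equals and hashCode, so HashSet falls back to reference equality, and update only behaves as expected when the caller mutates and re-passes the very same Order instance it added earlier. A small standalone sketch, using a stripped-down Item class as a stand-in for Order, illustrates this behavior:

```java
import java.util.HashSet;
import java.util.Set;

public class UpdateSemanticsDemo {

    // Stand-in for Order: no equals/hashCode override, so identity semantics apply
    static class Item {
        long id;
        String status;
        Item(long id, String status) { this.id = id; this.status = status; }
    }

    public static void main(String[] args) {
        Set<Item> store = new HashSet<>();
        Item item = new Item(1L, "NEW");
        store.add(item);

        // Mutating the same instance and re-adding it keeps exactly one element
        item.status = "IN_PROGRESS";
        store.remove(item);
        store.add(item);
        System.out.println(store.size()); // prints 1

        // A different instance with identical field values is NOT removed,
        // because the default equals() compares references
        store.remove(new Item(1L, "IN_PROGRESS"));
        System.out.println(store.size()); // prints 1
    }
}
```

This is why the listeners in the following sections pass the same object instance back to the repository after modifying it.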

The in-memory repository stores Order object instances. Order objects are also sent to the Kafka topic named orders. Here is the implementation of the Order class:

public class Order {

    private Long id;
    private LocalDateTime createdAt;
    private OrderType type;
    private Long userId;
    private Long tripId;
    private float currentLocationX;
    private float currentLocationY;
    private OrderStatus status;

    // ... GETTERS AND SETTERS
}
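
The Order class above references the OrderType and OrderStatus enums, which are never listed in this article. Their exact constants live in the source repository; the sketch below is hypothetical, with only NEW_TRIP confirmed by the text and the remaining constants being illustrative assumptions:

```java
// Hypothetical sketch: only NEW_TRIP is confirmed by the article,
// the remaining constants are illustrative assumptions.
public class OrderEnumsSketch {

    enum OrderType {
        NEW_TRIP,      // used in section 4 to start the flow
        CANCEL_TRIP,   // assumed: passenger-service cancels underfunded trips
        PAYMENT        // assumed
    }

    enum OrderStatus {
        NEW, IN_PROGRESS, REJECTED, COMPLETED  // assumed lifecycle states
    }

    public static void main(String[] args) {
        // The listener in section 4.2 switches on the order type:
        OrderType type = OrderType.NEW_TRIP;
        System.out.println(type.name()); // prints NEW_TRIP
    }
}
```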

4. Asynchronous communication with Kafka

Now, let's walk through one use case that our example system implements: adding a new trip.

We have created the OrderType.NEW_TRIP order type for this case. (1) order-service creates an order and sends it to the orders topic. The order is received by three microservices: driver-service, passenger-service and trip-service.
(2) All of these applications process the new order. passenger-service checks whether the passenger's account has sufficient funds; if not, it cancels the trip, otherwise it does nothing. driver-service looks for the nearest available driver, and (3) trip-service creates and stores a new trip. Both driver-service and trip-service send events containing information about their changes to their own topics (drivers, trips).

Each such event is also available to the other microservices; for example, (4) trip-service listens for events from driver-service in order to assign a new driver to the trip.

The figure below illustrates the communication between our microservices when a new trip is added.

(Figure: communication flow for adding a new trip)

Now, let's discuss the implementation details.

4.1. Send Order

First of all, we need to create a Kafka client responsible for sending messages to a topic. We create an interface named OrderClient, annotate it with @KafkaClient, and declare one or more methods for sending messages. Each method should set the target topic name with the @Topic annotation. For the method parameters we can use three annotations: @KafkaKey, @Body or @Header. @KafkaKey is used for partitioning and is not needed in our sample application. In the client implementation below, we only use the @Body annotation.

@KafkaClient
public interface OrderClient {

    @Topic("orders")
    void send(@Body Order order);

}

4.2. Receive orders

Once the client sends an order, it is received by all the other microservices listening on the orders topic. Here is the listener implementation from driver-service. The listener class OrderListener is annotated with @KafkaListener. We can declare a groupId as an annotation parameter to prevent multiple instances of the same application from receiving the same message. Then we declare a method for handling incoming messages. As on the client side, the target topic is set with the @Topic annotation, and since we are listening for Order objects, we use the @Body annotation, the same as in the corresponding client method.

@KafkaListener(groupId = "driver")
public class OrderListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderListener.class);

    private DriverService service;

    public OrderListener(DriverService service) {
        this.service = service;
    }

    @Topic("orders")
    public void receive(@Body Order order) {
        LOGGER.info("Received: {}", order);
        switch (order.getType()) {
            case NEW_TRIP -> service.processNewTripOrder(order);
        }
    }

}

4.3. Send to another topic

Now, let's look at the processNewTripOrder method of driver-service. DriverService injects two different Kafka client beans: OrderClient and DriverClient. When handling a new order, it tries to find the driver nearest to the passenger who sent the order. After finding one, it changes the driver's status to UNAVAILABLE and sends a Driver event object to the drivers topic.

@Singleton
public class DriverService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverService.class);

    private DriverClient client;
    private OrderClient orderClient;
    private DriverInMemoryRepository repository;

    public DriverService(DriverClient client, OrderClient orderClient, DriverInMemoryRepository repository) {
        this.client = client;
        this.orderClient = orderClient;
        this.repository = repository;
    }

    public void processNewTripOrder(Order order) {
        LOGGER.info("Processing: {}", order);
        Optional<Driver> driver = repository.findNearestDriver(order.getCurrentLocationX(), order.getCurrentLocationY());
        driver.ifPresent(driverLocal -> {
            driverLocal.setStatus(DriverStatus.UNAVAILABLE);
            repository.updateDriver(driverLocal);
            client.send(driverLocal, String.valueOf(order.getId()));
            LOGGER.info("Message sent: {}", driverLocal);
        });
    }

    // ...
}
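
The findNearestDriver method used above is not shown in the article; its real implementation is in the source repository. A minimal standalone sketch, under the assumption that "nearest" means the smallest Euclidean distance between the order's coordinates and the driver's location and that only available drivers qualify, might look like this (Driver here is a simplified stand-in for the real class):

```java
import java.util.Comparator;
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;

public class NearestDriverDemo {

    enum DriverStatus { AVAILABLE, UNAVAILABLE }

    // Simplified stand-in for the article's Driver class
    static class Driver {
        long id;
        float locationX, locationY;
        DriverStatus status = DriverStatus.AVAILABLE;
        Driver(long id, float x, float y) { this.id = id; locationX = x; locationY = y; }
    }

    static class DriverInMemoryRepository {
        private final Set<Driver> drivers = new HashSet<>();

        void add(Driver driver) { drivers.add(driver); }

        // Assumed semantics: closest AVAILABLE driver by Euclidean distance
        Optional<Driver> findNearestDriver(float x, float y) {
            return drivers.stream()
                    .filter(d -> d.status == DriverStatus.AVAILABLE)
                    .min(Comparator.comparingDouble(
                            d -> Math.hypot(d.locationX - x, d.locationY - y)));
        }
    }

    public static void main(String[] args) {
        DriverInMemoryRepository repo = new DriverInMemoryRepository();
        repo.add(new Driver(1L, 0f, 0f));
        repo.add(new Driver(2L, 5f, 5f));
        // Driver 2 at (5,5) is closer to (4,4) than driver 1 at (0,0)
        System.out.println(repo.findNearestDriver(4f, 4f).get().id); // prints 2
    }
}
```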

Here is the Kafka client in driver-service used for sending messages to the drivers topic. Because we need to associate a Driver with an Order, we use the @Header annotation for the orderId parameter. It does not have to be included in the Driver class; it is used on the listener side to assign the driver to the right trip.

@KafkaClient
public interface DriverClient {

    @Topic("drivers")
    void send(@Body Driver driver, @Header("Order-Id") String orderId);

}

4.4. Communication between services

These events are received by DriverListener, declared in trip-service with the @KafkaListener annotation. It listens for incoming messages on the drivers topic. The parameters of the receiving method mirror those of the client's sending method, as shown below:

@KafkaListener(groupId = "trip")
public class DriverListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverListener.class);

    private TripService service;

    public DriverListener(TripService service) {
        this.service = service;
    }

    @Topic("drivers")
    public void receive(@Body Driver driver, @Header("Order-Id") String orderId) {
        LOGGER.info("Received: driver->{}, header->{}", driver, orderId);
        service.processNewDriver(driver, orderId);
    }

}

In the final step, trip-service looks up the Trip by orderId and sets its driverId, which completes the whole process.

@Singleton
public class TripService {

    private static final Logger LOGGER = LoggerFactory.getLogger(TripService.class);

    private TripInMemoryRepository repository;
    private TripClient client;

    public TripService(TripInMemoryRepository repository, TripClient client) {
        this.repository = repository;
        this.client = client;
    }


    public void processNewDriver(Driver driver, String orderId) {
        LOGGER.info("Processing: {}", driver);
        Optional<Trip> trip = repository.findByOrderId(Long.valueOf(orderId));
        trip.ifPresent(tripLocal -> {
            tripLocal.setDriverId(driver.getId());
            repository.update(tripLocal);
        });
    }

    // ... OTHER METHODS

}

5. Distributed tracing

With Micronaut Kafka we can easily enable distributed tracing. First, we need to enable and configure Micronaut tracing by adding the following dependencies:

<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-instrumentation-http</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.opentracing.brave</groupId>
    <artifactId>brave-opentracing</artifactId>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-client</artifactId>
    <version>0.0.16</version>
    <scope>runtime</scope>
</dependency>

We also need to set the address of Zipkin in the application.yml configuration file:

tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      probability: 1

Before starting the applications, we have to run the Zipkin container:

$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin

6. Summary

In this article, you learned how to build a microservices architecture with asynchronous communication over Apache Kafka. I have shown the most important features of the Micronaut Kafka library, which lets you easily declare producers and consumers of Kafka topics and enable health checks and distributed tracing for your microservices. I have described a simple scenario for our system: adding a new trip based on a customer request. For the full implementation of this example system, see the source code on GitHub.

Original link: https://piotrminkowski.wordpress.com/2019/08/06/kafka-in-microservices-with-micronaut/

Author: Piotr Mińkowski

Translator: Dong

Origin www.cnblogs.com/springforall/p/11610643.html