Today, we will build microservices that communicate asynchronously with one another through an Apache Kafka topic. We will use the Micronaut framework, which provides a dedicated library for Kafka integration. Let us briefly describe the architecture of our example system. We have four microservices: order-service, trip-service, driver-service, and passenger-service. The implementation of these applications is very simple. They store data in memory, and all of them connect to the same Kafka instance.
The main goal of our system is to arrange trips for customers. The order-service application also acts as a gateway. It receives requests from clients, stores the order history, and sends events to the orders topic. All other microservices listen on the orders topic and process the orders sent by order-service. Each microservice also has its own dedicated topic, to which it sends events containing information about state changes. Such events are received by some of the other microservices. The architecture is shown in the figure below.
Before reading this article, it is worth familiarizing yourself with the Micronaut framework. You can start with an earlier article that describes the process of building microservice communication over a REST API with the Micronaut framework: A Quick Guide to Microservices with the Micronaut Framework.
1. Running Kafka
To run Apache Kafka on the local machine, we can use its Docker image. The latest images are shared at https://hub.docker.com/u/wurstmeister. Before starting the Kafka container, we have to start the ZooKeeper server used by Kafka. If you run Docker on Windows, the default address of its virtual machine is 192.168.99.100, and it must be set in the environment of the Kafka container.
The ZooKeeper and Kafka containers are started on the same network. ZooKeeper is available under the name zookeeper and exposes port 2181. The Kafka container requires the ZooKeeper address in the KAFKA_ZOOKEEPER_CONNECT environment variable.
$ docker network create kafka
$ docker run -d --name zookeeper --network kafka -p 2181:2181 wurstmeister/zookeeper
$ docker run -d --name kafka -p 9092:9092 --network kafka --env KAFKA_ADVERTISED_HOST_NAME=192.168.99.100 --env KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 wurstmeister/kafka
2. Adding the Micronaut Kafka dependencies
An application built with Micronaut Kafka can be started either with or without an embedded HTTP server. To enable Micronaut Kafka, add the micronaut-kafka library to the dependencies. If you want to expose an HTTP API, you should also add micronaut-http-server-netty:
<dependency>
    <groupId>io.micronaut.configuration</groupId>
    <artifactId>micronaut-kafka</artifactId>
</dependency>
<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-http-server-netty</artifactId>
</dependency>
3. Building the order-service microservice
order-service is the only application that starts an embedded HTTP server and exposes a REST API. That is why we can use Micronaut's built-in health checks for Kafka. To do this, we should first add the micronaut-management dependency:
<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-management</artifactId>
</dependency>
For convenience, we enable all management endpoints and disable HTTP authentication for them with the following configuration in application.yml:
endpoints:
  all:
    enabled: true
    sensitive: false
Now the health check is available at http://localhost:8080/health. Our sample application also exposes a simple REST API for adding a new order and listing all previously created orders. Here is the Micronaut controller that exposes these endpoints:
@Controller("orders")
public class OrderController {

    @Inject
    OrderInMemoryRepository repository;
    @Inject
    OrderClient client;

    @Post
    public Order add(@Body Order order) {
        order = repository.add(order);
        client.send(order);
        return order;
    }

    @Get
    public Set<Order> findAll() {
        return repository.findAll();
    }
}
Each microservice uses an in-memory repository implementation. Here is the repository implementation for order-service:
@Singleton
public class OrderInMemoryRepository {

    private Set<Order> orders = new HashSet<>();

    public Order add(Order order) {
        order.setId((long) (orders.size() + 1));
        orders.add(order);
        return order;
    }

    public void update(Order order) {
        orders.remove(order);
        orders.add(order);
    }

    public Optional<Order> findByTripIdAndType(Long tripId, OrderType type) {
        return orders.stream().filter(order -> order.getTripId().equals(tripId) && order.getType() == type).findAny();
    }

    public Optional<Order> findNewestByUserIdAndType(Long userId, OrderType type) {
        return orders.stream().filter(order -> order.getUserId().equals(userId) && order.getType() == type)
                .max(Comparator.comparing(Order::getId));
    }

    public Set<Order> findAll() {
        return orders;
    }
}
The in-memory repository stores Order object instances. The Order object is also sent to the Kafka topic named orders. Here is the implementation of the Order class:
public class Order {
    private Long id;
    private LocalDateTime createdAt;
    private OrderType type;
    private Long userId;
    private Long tripId;
    private float currentLocationX;
    private float currentLocationY;
    private OrderStatus status;
    // ... GETTERS AND SETTERS
}
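One detail worth noting: the repository's update() method calls orders.remove(order) on a HashSet, which only finds the stored copy if the two objects compare equal. In the flow above the same instance is usually reused, but if update() is ever called with a freshly deserialized object, Order needs an id-based equals/hashCode. The getters, setters, and equality methods are elided in the listing, so the following is a minimal sketch (an assumption, not taken from the original source) of how that pattern behaves with a simplified stand-in class:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class OrderEqualityDemo {

    // Simplified stand-in for the Order class, with an id-based equals/hashCode.
    static class Order {
        Long id;
        String status;

        Order(Long id, String status) {
            this.id = id;
            this.status = status;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof Order && Objects.equals(((Order) o).id, id);
        }

        @Override
        public int hashCode() {
            return Objects.hash(id);
        }
    }

    // Mirrors OrderInMemoryRepository.update(): remove the old copy, add the new one.
    static int updateAndCount() {
        Set<Order> orders = new HashSet<>();
        orders.add(new Order(1L, "NEW"));
        Order changed = new Order(1L, "COMPLETED");
        orders.remove(changed); // finds the stored order because equals() matches on id
        orders.add(changed);
        return orders.size();
    }

    public static void main(String[] args) {
        System.out.println(updateAndCount()); // prints 1: the set still holds one order
    }
}
```

Without the id-based equals(), remove() would miss the stored copy and the set would end up holding two versions of the same order.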
4. Asynchronous communication with Kafka
Now, let's consider a use case that our example system can realize: adding a new trip.
We have created a new order type, OrderType.NEW_TRIP, for it. After that, (1) order-service creates an order and sends it to the orders topic. The order is received by three microservices: driver-service, passenger-service, and trip-service.
(2) All of these applications process the new order. passenger-service checks whether the passenger's account has sufficient funds. If not, it cancels the trip; otherwise it does nothing. driver-service looks for the nearest available driver, and (3) trip-service creates and stores a new trip. Both driver-service and trip-service send events to their own topics (drivers, trips) containing information about the changes.
Each event can be accessed by the other microservices. For example, (4) trip-service listens for events from driver-service in order to assign a new driver to the trip. The following figure illustrates the communication between our microservices when a new trip is added.
Now, let's move on to the implementation details.
4.1. Sending orders
First, we need to create a Kafka client responsible for sending messages to a topic. We create an interface named OrderClient, annotate it with @KafkaClient, and declare one or more methods for sending messages. Each method should set the target topic name with the @Topic annotation. For method parameters we can use three annotations: @KafkaKey, @Body, or @Header. @KafkaKey is used for partitioning, which our sample application does not need. In the client implementation below, we use only the @Body annotation.
@KafkaClient
public interface OrderClient {

    @Topic("orders")
    void send(@Body Order order);
}
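For the client (and the listeners shown later) to reach the broker, each application also needs the Kafka address in its application.yml. A possible configuration, assuming the Docker setup from step 1 with the virtual machine address 192.168.99.100:

```yaml
kafka:
  bootstrap:
    servers: 192.168.99.100:9092
```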
4.2. Receiving orders
Once the client sends an order, it is received by all the other microservices listening on the orders topic. Here is the listener implementation in driver-service. The listener class OrderListener should be annotated with @KafkaListener. We can declare groupId as an annotation parameter to prevent a single message from being received by multiple instances of the same application. Then we declare a method for handling incoming messages. As on the client side, the target topic name is set with the @Topic annotation, and because we are listening for Order objects, we should use the @Body annotation, the same as in the corresponding client method.
@KafkaListener(groupId = "driver")
public class OrderListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderListener.class);

    private DriverService service;

    public OrderListener(DriverService service) {
        this.service = service;
    }

    @Topic("orders")
    public void receive(@Body Order order) {
        LOGGER.info("Received: {}", order);
        switch (order.getType()) {
            case NEW_TRIP -> service.processNewTripOrder(order);
        }
    }
}
4.3. Sending to another topic
Now, let's take a look at the processNewTripOrder method in driver-service. DriverService injects two different Kafka client beans: OrderClient and DriverClient. When handling a new order, it tries to find the driver nearest to the passenger who sent the order. Having found one, it sets the driver's status to UNAVAILABLE and sends an event with the Driver object to the drivers topic.
@Singleton
public class DriverService {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverService.class);

    private DriverClient client;
    private OrderClient orderClient;
    private DriverInMemoryRepository repository;

    public DriverService(DriverClient client, OrderClient orderClient, DriverInMemoryRepository repository) {
        this.client = client;
        this.orderClient = orderClient;
        this.repository = repository;
    }

    public void processNewTripOrder(Order order) {
        LOGGER.info("Processing: {}", order);
        Optional<Driver> driver = repository.findNearestDriver(order.getCurrentLocationX(), order.getCurrentLocationY());
        driver.ifPresent(driverLocal -> {
            driverLocal.setStatus(DriverStatus.UNAVAILABLE);
            repository.updateDriver(driverLocal);
            client.send(driverLocal, String.valueOf(order.getId()));
            LOGGER.info("Message sent: {}", driverLocal);
        });
    }

    // ...
}
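The findNearestDriver repository method is not shown in the article. A minimal sketch of what it might look like, assuming drivers are compared by straight-line (Euclidean) distance to the passenger's location; the Driver record here is a simplified stand-in for the real class:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class NearestDriverDemo {

    // Simplified stand-in for the Driver class (only the fields the search needs).
    record Driver(Long id, float locationX, float locationY) {}

    // A possible implementation of findNearestDriver: pick the driver with the
    // smallest Euclidean distance to the passenger's coordinates.
    static Optional<Driver> findNearestDriver(List<Driver> drivers, float x, float y) {
        return drivers.stream()
                .min(Comparator.comparingDouble((Driver d) ->
                        Math.hypot(d.locationX() - x, d.locationY() - y)));
    }

    public static void main(String[] args) {
        List<Driver> drivers = List.of(
                new Driver(1L, 0.0f, 0.0f),
                new Driver(2L, 5.0f, 5.0f));
        // The passenger at (4, 4) is closer to driver 2.
        System.out.println(findNearestDriver(drivers, 4.0f, 4.0f).get().id()); // prints 2
    }
}
```

A production version would also filter out drivers whose status is not AVAILABLE before taking the minimum.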
Here is the Kafka client in driver-service responsible for sending messages to the drivers topic. Because we need to associate the Driver with the Order, we use the @Header annotation on the orderId parameter. Thanks to it, there is no need to include the order id in the Driver class, and it can still be used to assign the right trip on the listener side.
@KafkaClient
public interface DriverClient {

    @Topic("drivers")
    void send(@Body Driver driver, @Header("Order-Id") String orderId);
}
4.4. Communication between services
The messages are received by DriverListener, declared with @KafkaListener in trip-service. It listens for messages incoming to the drivers topic. The parameters of the receiving method are similar to those of the client's sending method, as shown below:
@KafkaListener(groupId = "trip")
public class DriverListener {

    private static final Logger LOGGER = LoggerFactory.getLogger(DriverListener.class);

    private TripService service;

    public DriverListener(TripService service) {
        this.service = service;
    }

    @Topic("drivers")
    public void receive(@Body Driver driver, @Header("Order-Id") String orderId) {
        LOGGER.info("Received: driver->{}, header->{}", driver, orderId);
        service.processNewDriver(driver, orderId);
    }
}
In the final step, the trip with the given orderId is looked up and associated with the driverId, which completes the whole process.
@Singleton
public class TripService {

    private static final Logger LOGGER = LoggerFactory.getLogger(TripService.class);

    private TripInMemoryRepository repository;
    private TripClient client;

    public TripService(TripInMemoryRepository repository, TripClient client) {
        this.repository = repository;
        this.client = client;
    }

    public void processNewDriver(Driver driver, String orderId) {
        LOGGER.info("Processing: {}", driver);
        Optional<Trip> trip = repository.findByOrderId(Long.valueOf(orderId));
        trip.ifPresent(tripLocal -> {
            tripLocal.setDriverId(driver.getId());
            repository.update(tripLocal);
        });
    }

    // ... OTHER METHODS
}
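The TripInMemoryRepository and its findByOrderId method are not shown in the article. A minimal sketch, assuming it follows the same pattern as OrderInMemoryRepository (an in-memory set filtered by stream); the Trip class and its fields here are simplified stand-ins:

```java
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;

public class TripRepositoryDemo {

    // Simplified stand-in for the Trip class; the field names are assumptions.
    static class Trip {
        Long id;
        Long orderId;
        Long driverId;

        Trip(Long id, Long orderId) {
            this.id = id;
            this.orderId = orderId;
        }
    }

    private final Set<Trip> trips = new HashSet<>();

    public void add(Trip trip) {
        trips.add(trip);
    }

    // A possible implementation of findByOrderId: filter the in-memory set
    // by the orderId field, as the order repository does for its queries.
    public Optional<Trip> findByOrderId(Long orderId) {
        return trips.stream()
                .filter(trip -> trip.orderId.equals(orderId))
                .findAny();
    }

    public static void main(String[] args) {
        TripRepositoryDemo repository = new TripRepositoryDemo();
        repository.add(new Trip(1L, 10L));
        // Mirrors TripService.processNewDriver(): look up the trip, set the driver.
        repository.findByOrderId(10L).ifPresent(trip -> trip.driverId = 5L);
        System.out.println(repository.findByOrderId(10L).get().driverId); // prints 5
    }
}
```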
5. Tracing
With Micronaut Kafka we can easily enable distributed tracing. First, we need to enable and configure Micronaut tracing. To do this, we should add some dependencies:
<dependency>
    <groupId>io.micronaut</groupId>
    <artifactId>micronaut-tracing</artifactId>
</dependency>
<dependency>
    <groupId>io.zipkin.brave</groupId>
    <artifactId>brave-instrumentation-http</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.opentracing.brave</groupId>
    <artifactId>brave-opentracing</artifactId>
</dependency>
<dependency>
    <groupId>io.opentracing.contrib</groupId>
    <artifactId>opentracing-kafka-client</artifactId>
    <version>0.0.16</version>
    <scope>runtime</scope>
</dependency>
We also need to set the address of the Zipkin instance in the application.yml configuration file:
tracing:
  zipkin:
    enabled: true
    http:
      url: http://192.168.99.100:9411
    sampler:
      probability: 1
Before starting the applications, we have to run the Zipkin container:
$ docker run -d --name zipkin -p 9411:9411 openzipkin/zipkin
6. Summary
In this article, you have learned how to build a microservices architecture with asynchronous communication based on Apache Kafka. I have shown the most important features of the Micronaut Kafka library, which allows you to easily declare producers and consumers of Kafka topics, and to enable health checks and distributed tracing for your microservices. I have described a simple scenario for our system: adding a new trip based on a customer request. For the full implementation of this example system, please refer to the source code on GitHub.
Original link: https://piotrminkowski.wordpress.com/2019/08/06/kafka-in-microservices-with-micronaut/
Author: Piotr Minkowski
Translator: Dong