[Translated] Akka in Action, akka-stream (Part 2: Streaming HTTP)

2 HTTP Streaming

The log-stream processor will run as an HTTP service. Let's look at what that means. Akka-http uses akka-stream, so not much glue code is needed to go from the app's Flows to an HTTP-based service. Akka-http is a good example of a library built on top of akka-stream.

 

First of all, we need to add a few more dependencies to the project:
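The dependency listing is missing here; in sbt it would look roughly like this (the module names are akka-http's published artifacts, but the version number is illustrative and should match your Akka release):

```scala
// build.sbt -- version is illustrative; use the akka-http release
// that matches your Akka version
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-http"            % "10.0.9",
  "com.typesafe.akka" %% "akka-http-spray-json" % "10.0.9"
)
```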

 

 

This time we will build a LogsApp that can receive and serve streams of log events. To keep this example simple, we will stream directly to and from files.

 

There are quite a few Reactive Streams client libraries available. Connecting the example to some other kind of storage (a database, for instance) is left as an exercise for the reader.

 

2.1 Receiving a stream over HTTP

Our service allows clients to stream log event data to it with HTTP POST. The data will be stored in a file on the server: a POST to the URL /logs/[log_id] creates a file named [log_id] in the logs directory. Later, a GET on /logs/[log_id] returns a stream read from that file. Setting up the HTTP server is omitted here.

 

The HTTP routes are defined in the LogsApi class, shown below. logsDir points to the directory where the logs are stored; the logFile method simply returns the file for a given ID. The EventMarshalling trait is mixed in to support JSON marshalling. You will notice an ExecutionContext and an ActorMaterializer in implicit scope; both are required to run Flows.
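A sketch of the class outline described here (the maxLine parameter is an assumption; EventMarshalling and the implicits come from the surrounding text, and this is not the book's exact listing):

```scala
import java.nio.file.Path
import scala.concurrent.ExecutionContext
import akka.stream.ActorMaterializer

class LogsApi(
  val logsDir: Path,  // directory where the log files are stored
  val maxLine: Int    // maximum accepted line length when framing
)(
  // required to run the Flows used by the routes
  implicit val executionContext: ExecutionContext,
  val materializer: ActorMaterializer
) extends EventMarshalling {  // mixes in the JSON marshalling support
  // Resolve the file that backs one log ID.
  def logFile(id: String): Path = logsDir.resolve(id)
  // postRoute and getRoute are shown in the listings that follow.
}
```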

 

We will reuse the BidiFlow from before, since the protocol from the log-file format to Event and on to JSON has already been defined there. The following code shows the Flow and Sink that will be used when handling an HTTP POST, and the Source that we will return for an HTTP GET.

Flow, Sink, and Source used by the logs API
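The listing would contain definitions along these lines; bidiFlow and Event come from the previous section, and the exact helper names here are assumptions:

```scala
import java.nio.file.StandardOpenOption.{ APPEND, CREATE, WRITE }
import scala.concurrent.Future
import akka.NotUsed
import akka.stream.IOResult
import akka.stream.scaladsl.{ FileIO, Flow, Sink, Source }
import akka.util.ByteString

// Joining the BidiFlow with an identity Flow[Event] parses log lines
// on the way in and serializes the (unchanged) events to JSON on the way out.
val logToJsonFlow: Flow[ByteString, ByteString, NotUsed] =
  bidiFlow.join(Flow[Event])

// Sink: append the JSON-framed bytes to the file for this log ID.
def logFileSink(logId: String): Sink[ByteString, Future[IOResult]] =
  FileIO.toPath(logFile(logId), Set(CREATE, WRITE, APPEND))

// Source: stream the stored file back out for a GET.
def logFileSource(logId: String): Source[ByteString, Future[IOResult]] =
  FileIO.fromPath(logFile(logId))
```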

In this example, the events pass through unchanged; all log event lines are simply converted to JSON. Connecting stream transformations to the left side of the BidiFlow, for instance to filter events based on a query parameter, is left as an exercise for the reader.

 

Read the entity Source completely before responding
It is very important to read all the data from the entity's dataBytes Source. If a response is sent before all the data has been read from the Source, an HTTP client that uses persistent connections may, for example, decide that the TCP socket is still usable for its next request; the leftovers of the incompletely read Source could then end up in the next request/response cycle, breaking the connection.
An HTTP client normally expects its request to be processed completely, so the request should be fully processed before responding. Even when persistent connections are not in use, it is best to process the request completely.
This is especially a problem with blocking HTTP clients, which do not begin to read the response until the entire request has been written.
None of this means that a request/response cycle has to be processed synchronously. In the examples in this section, the response is returned after asynchronously processing the request.

 

The postRoute method handles the HTTP POST, as shown below. Because akka-http is built on akka-stream, receiving a stream over HTTP is quite simple: the HTTP request entity has a dataBytes Source from which we can read the data.

Handling the POST
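A sketch of what postRoute might look like, assuming the helpers named in the text are in scope (logToJsonFlow, the joined BidiFlow, and logFileSink, a FileIO sink; the response bodies are simplified, not the book's exact ones):

```scala
import scala.util.{ Failure, Success }
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Keep

def postRoute =
  pathPrefix("logs" / Segment) { logId =>
    pathEndOrSingleSlash {
      post {
        extractRequest { req =>
          onComplete(
            req.entity.dataBytes          // Source[ByteString, _] of the body
              .via(logToJsonFlow)         // log lines in, JSON out
              .toMat(logFileSink(logId))(Keep.right)
              .run()                      // materializes a Future[IOResult]
          ) {
            case Success(ioResult) =>
              complete(s"wrote ${ioResult.count} bytes to '$logId'")
            case Failure(e) =>
              complete((StatusCodes.BadRequest, e.getMessage))
          }
        }
      }
    }
  }
```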

The run method returns a Future[IOResult], so we use the onComplete directive, which eventually passes the result of the Future to the inner route, where the Success and Failure cases are handled. The complete directive returns the response.

 

In the next section, we will look at responding to an HTTP GET request; the response will stream the log file back to the client in JSON format.

 

2.2 Responding with a stream over HTTP

With HTTP GET, a client can retrieve a stream of log events. The route is implemented as follows:

Handling the GET
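A sketch of the GET route, assuming a logFileSource helper that wraps FileIO.fromPath for the given log ID:

```scala
import java.nio.file.Files
import akka.http.scaladsl.model.{ ContentTypes, HttpEntity, StatusCodes }
import akka.http.scaladsl.server.Directives._

def getRoute =
  pathPrefix("logs" / Segment) { logId =>
    pathEndOrSingleSlash {
      get {
        if (Files.exists(logFile(logId))) {
          // HttpEntity(ContentType, Source[ByteString, Any]) streams the
          // file to the client; the log is stored as JSON on disk.
          complete(
            HttpEntity(ContentTypes.`application/json`, logFileSource(logId))
          )
        } else {
          complete(StatusCodes.NotFound)
        }
      }
    }
  }
```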

Backquoted identifiers
Akka-http stays as close as possible to the HTTP specification, which is reflected in the identifier names for HTTP headers, content types, and other elements of the spec. Backquotes in Scala make it possible to create identifiers containing characters that are common in the HTTP specification but normally not allowed in identifiers, such as dashes and slashes.
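For example, a backquoted Scala identifier can contain a slash or a dash, mirroring the spec's own names (a small standalone illustration, not akka-http's actual definitions):

```scala
object HttpSpecNames {
  // Backquotes allow characters that are illegal in plain identifiers.
  val `text/plain` = "text/plain"
  val `Cache-Control` = "Cache-Control"
}

object BackquoteDemo extends App {
  // Backquotes are also used at the call site.
  println(HttpSpecNames.`text/plain`)    // prints "text/plain"
  println(HttpSpecNames.`Cache-Control`) // prints "Cache-Control"
}
```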

 

HttpEntity has an apply method that takes a ContentType and a Source. The Source that streams data from the file is passed to this method, and the response is completed with the complete directive. In the POST example we simply assumed that text was sent in the expected log format; in the GET example we return the data in JSON format.

 

Now that we have the simplest possible streaming GET and POST examples, let's look at how to do content negotiation with akka-http, which will allow clients to GET and POST data in either JSON or the log format.

 

2.3 Custom marshallers and unmarshallers for content types and negotiation

If more than one media type is available, the Accept header allows an HTTP client to specify the format it wants to GET. An HTTP client can set the Content-Type header to specify the format of the entity in a POST. We will handle both cases in this section, making it possible to POST and GET data in either JSON or the log format, similar to the BidiEventFilter example.

 

Luckily, akka-http supports custom marshalling and unmarshalling and takes care of content negotiation, which means less work for us. Let's start with handling the Content-Type header of the POST.

 

Handling Content-Type in a custom unmarshaller

Akka-http provides unmarshallers for many predefined types, for instance to extract byte arrays or strings from an entity. It is also possible to define a custom unmarshaller. In this example we support only two content types: text/plain for the log format and application/json for log events in JSON format. Depending on the Content-Type, the entity.dataBytes Source is framed either by delimiter or as JSON, and then processed as before.

 

The Unmarshaller trait requires only one method to be implemented.

Handling Content-Type in the EventUnmarshaller
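A sketch of what the EventUnmarshaller might look like; the LogJson framing helpers (textInFlow, jsonInFlow) are assumed names for the delimiter and JSON framing Flows described in the text:

```scala
import scala.concurrent.{ ExecutionContext, Future }
import akka.http.scaladsl.model.{ ContentTypeRange, ContentTypes, HttpEntity }
import akka.http.scaladsl.unmarshalling.Unmarshaller
import akka.http.scaladsl.unmarshalling.Unmarshaller.UnsupportedContentTypeException
import akka.stream.Materializer
import akka.stream.scaladsl.Source

object EventUnmarshaller extends EventMarshalling {
  val supported = Set[ContentTypeRange](
    ContentTypes.`text/plain(UTF-8)`,
    ContentTypes.`application/json`)

  def create(maxLine: Int, maxJsonObject: Int) =
    new Unmarshaller[HttpEntity, Source[Event, _]] {
      def apply(entity: HttpEntity)(implicit ec: ExecutionContext,
          materializer: Materializer): Future[Source[Event, _]] = {
        val futureFlow = entity.contentType match {
          case ContentTypes.`text/plain(UTF-8)` =>
            Future.successful(LogJson.textInFlow(maxLine))       // frame by line
          case ContentTypes.`application/json` =>
            Future.successful(LogJson.jsonInFlow(maxJsonObject)) // frame JSON objects
          case other =>
            Future.failed(new UnsupportedContentTypeException(supported))
        }
        // Combine the chosen Flow with the entity's dataBytes Source.
        futureFlow.map(flow => entity.dataBytes.via(flow))
      }
    }.forContentTypes(supported.toList: _*)
}
```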

The create method creates an anonymous Unmarshaller instance. Its apply method first creates a Flow for processing the incoming data and combines it with the dataBytes Source, using via, into a new Source.

 

This Unmarshaller must be in implicit scope so that the entity directive can be used to extract a Source[Event, _]; this can be seen in the ContentNegLogsApi class.

Using the EventUnmarshaller in the POST
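A sketch of the POST route in ContentNegLogsApi; the implicit unmarshaller lets the entity directive extract the event Source directly (maxJsObject, outFlow, the serializing Flow, and logFileSink are assumed helpers):

```scala
import scala.util.{ Failure, Success }
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.{ Keep, Source }

implicit val unmarshaller = EventUnmarshaller.create(maxLine, maxJsObject)

def postRoute =
  pathPrefix("logs" / Segment) { logId =>
    pathEndOrSingleSlash {
      post {
        // The implicit EventUnmarshaller turns the entity into a
        // Source[Event, _] according to its Content-Type.
        entity(as[Source[Event, _]]) { src =>
          onComplete(
            src.via(outFlow)   // serialize events to JSON ByteStrings
              .toMat(logFileSink(logId))(Keep.right)
              .run()
          ) {
            case Success(ioResult) =>
              complete(s"wrote ${ioResult.count} bytes to '$logId'")
            case Failure(e) =>
              complete((StatusCodes.BadRequest, e.getMessage))
          }
        }
      }
    }
  }
```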

 

Trying out aia.stream.ContentNegLogsApp is left as an exercise for the reader. Be sure to specify the Content-Type, for instance when using httpie. Below is a POST example with httpie that sets the Content-Type header:
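For instance (the host, port, log ID, and input file are hypothetical):

```shell
http POST localhost:5000/logs/1 Content-Type:text/plain < test.log
```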

 

 

Next, we will look at using a custom marshaller to handle the Accept header for content negotiation.

 

Content negotiation with a custom marshaller

We will write a custom Marshaller to support the text/plain and application/json content types in responses. The Accept header can be used to specify the media types that are acceptable for the response, as shown below (using httpie):

A GET example with httpie, using the Accept header
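For instance (the host, port, and log ID are hypothetical):

```shell
http GET localhost:5000/logs/1 Accept:text/plain
```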

 

A client can indicate that it only accepts a specific content type, or that it has certain preferences. The logic that determines which content type to respond with is implemented in Akka; all we have to do is create a Marshaller that supports a set of content types.

 

The LogEntityMarshaller object creates a ToEntityMarshaller.

Providing marshallers for content negotiation
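A sketch of LogEntityMarshaller as described in the text; LogJson.jsonToLogFlow is the helper named in the surrounding prose, while the rest of the shape is an assumption:

```scala
import akka.http.scaladsl.marshalling.{ Marshaller, ToEntityMarshaller }
import akka.http.scaladsl.model.{ ContentTypes, HttpEntity }
import akka.stream.scaladsl.Source
import akka.util.ByteString

object LogEntityMarshaller extends EventMarshalling {
  type LEM = ToEntityMarshaller[Source[ByteString, Any]]

  def create(maxJsonObject: Int): LEM = {
    val js = ContentTypes.`application/json`
    val txt = ContentTypes.`text/plain(UTF-8)`
    // One fixed-content-type marshaller per supported format:
    // the stored JSON can be passed through as-is...
    val jsMarshaller = Marshaller.withFixedContentType(js) {
      (src: Source[ByteString, Any]) => HttpEntity(js, src)
    }
    // ...or converted back to the log format on the way out.
    val txtMarshaller = Marshaller.withFixedContentType(txt) {
      (src: Source[ByteString, Any]) =>
        HttpEntity(txt, src.via(LogJson.jsonToLogFlow(maxJsonObject)))
    }
    // Marshaller.oneOf lets Akka pick one during content negotiation.
    Marshaller.oneOf(jsMarshaller, txtMarshaller)
  }
}
```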

 

Marshaller.withFixedContentType is a convenience method for creating a Marshaller for a given Content-Type. It takes a function A => B, in this case Source[ByteString, Any] => HttpEntity: src provides the bytes of the JSON log file, and it is turned into an HttpEntity.

 

The LogJson.jsonToLogFlow method uses the same trick we used before, joining a Flow[Event] with a BidiFlow, this time going from JSON to the log format.

 

This Marshaller must be put in implicit scope so it can be used in the HTTP GET route.

The GET route
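A sketch of the GET route using the marshaller; logFileSource is the assumed file-reading helper from earlier, and maxJsObject is an assumed size limit:

```scala
import java.nio.file.Files
import akka.http.scaladsl.marshalling.Marshal
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._

implicit val marshaller = LogEntityMarshaller.create(maxJsObject)

def getRoute =
  pathPrefix("logs" / Segment) { logId =>
    pathEndOrSingleSlash {
      get {
        extractRequest { req =>
          if (Files.exists(logFile(logId))) {
            val src = logFileSource(logId)
            // Content negotiation: the implicit LogEntityMarshaller and
            // the request's Accept header determine the response format.
            complete(Marshal(src).toResponseFor(req))
          } else {
            complete(StatusCodes.NotFound)
          }
        }
      }
    }
  }
```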

Marshal(src).toResponseFor(req) takes the log-file Source and creates a response for the request (taking the Accept header into account), with the LogEntityMarshaller providing the content negotiation.

 

This concludes the example of supporting both formats with the Content-Type header and negotiating content with the Accept header.

 

LogsApi and ContentNegLogsApp read and write events unchanged. We could filter them by state at request time, but it makes more sense to split the events by state (OK, warning, error, critical) and store them in separate files: then, for example, all errors can be retrieved without filtering on every request. In the next section we will look at fan-in and fan-out in akka-stream. We will store events in different files on the server according to their state, while still making it possible to retrieve a subset of states, such as all events that are not OK.

 

JSON streaming support
This example supports both the text log format and log events in JSON format. If you only want to support JSON, there is a simpler option. The EntityStreamingSupport object in the akka.http.scaladsl.common package provides a JsonEntityStreamingSupport via EntityStreamingSupport.json which, when put in implicit scope, makes it possible to complete an HTTP request directly with complete(events). It also makes it possible to get a Source[Event, NotUsed] directly from entity(asSourceOf[Event]).
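A sketch of what that can look like (the route shape and the eventsSource helper are illustrative; asSourceOf[Event] additionally needs a per-element JSON unmarshaller for Event in implicit scope, such as the one from EventMarshalling with spray-json support):

```scala
import akka.NotUsed
import akka.http.scaladsl.common.EntityStreamingSupport
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Source
import scala.util.{ Failure, Success }

implicit val jsonStreaming = EntityStreamingSupport.json()

def route =
  pathPrefix("logs" / Segment) { logId =>
    post {
      // With JsonEntityStreamingSupport in implicit scope, the entity
      // can be read directly as a Source[Event, NotUsed].
      entity(asSourceOf[Event]) { src =>
        // Drain the source (a real app would store the events).
        onComplete(src.runFold(0)((count, _) => count + 1)) {
          case Success(n) => complete(s"received $n events for '$logId'")
          case Failure(e) => complete((StatusCodes.BadRequest, e.getMessage))
        }
      }
    } ~
    get {
      val events: Source[Event, NotUsed] = eventsSource(logId) // assumed helper
      complete(events) // streamed back to the client as JSON
    }
  }
```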


Origin blog.csdn.net/hany3000/article/details/96481175