Integrating akka-stream with actor systems, and how to deal with the attendant back-pressure problems

There are four APIs in total:

Source.actorRef — returns an ActorRef; messages sent to it are consumed by the downstream consumers.
Sink.actorRef — takes an ActorRef, which acts as the terminal consumer node of the stream.
Source.actorPublisher — returns an ActorRef that serves as the Publisher in a Reactive Stream.
Sink.actorSubscriber — serves as the Subscriber in a Reactive Stream.
Source.actorRef

  val stringSource = Source.actorRef[String](100, OverflowStrategy.fail) // buffer up to 100 elements; beyond that, fail
  val hahaStrSource = stringSource.filter(str => str.startsWith("haha")) // filter out elements that do not start with "haha"
  val actor = hahaStrSource.to(Sink.foreach(println)).run()
  actor ! "asdsadasd"
  actor ! "hahaasd"
  actor ! Success("ok") // complete the stream successfully and close it

    "How do I create a Source that can receive elements later via a method call?" This question comes up all the time. In akka-http, Source[T, M] appears in file upload/download code (file IO) as a complete file => Source[ByteString, _] conversion, or in hello-world toy code such as Source(List(1,2,3,4,5)). In both cases, the contents of the data stream must already be known at the moment the Source is defined. So how do you define the stream first and push data into it afterwards? The answer is Source.actorRef. An important caveat: Source.actorRef has no back-pressure strategy (back pressure, simply put, occurs when the producer emits faster than the consumer can process, causing a backlog of data).
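If feedback to the producer is actually needed, later Akka versions offer Source.queue as a back-pressure-aware alternative to Source.actorRef: offer() returns a Future that, with OverflowStrategy.backpressure, completes only once the buffer has room. A minimal sketch (assuming an implicit ActorSystem and materializer are already in scope):

```scala
import akka.stream.{OverflowStrategy, QueueOfferResult}
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

// Materializes a SourceQueueWithComplete[String] instead of a plain ActorRef.
val queue = Source.queue[String](100, OverflowStrategy.backpressure)
  .filter(_.startsWith("haha"))
  .to(Sink.foreach(println))
  .run()

// offer() gives feedback: await the Future before offering the next element,
// and the producer is slowed down to the consumer's rate.
val accepted: Future[QueueOfferResult] = queue.offer("hahaasd")
queue.complete() // close the stream, like sending Success("ok") to Source.actorRef
```

Unlike the fire-and-forget tell to the actorRef above, each offer tells the caller whether the element was enqueued, dropped, or the stream has failed.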

Sink.actorRef

class MyActor extends Actor{
  override def receive: Receive = {
    case "FIN"=>
      println("Done!!!")
      context.stop(self)
    case str:String =>
      println("msgStr:"+str)
  }
}
......
  val actor=system.actorOf(Props[MyActor],"myActor")
  val sendToActor=Sink.actorRef(actor,onCompleteMessage = "FIN")
  val hahaStringSource=Source.actorRef[String](100,OverflowStrategy.dropHead).filter(str=>str.startsWith("haha"))
  val actorReceive=hahaStringSource.to(sendToActor).run()
  actorReceive!"hahasdsadsa1"
  actorReceive!"hahasdsadsa2"
  actorReceive!"hahasdsadsa3"
  actorReceive!"hahasdsadsa4"
  actorReceive!Success("ok")
// Output
msgStr:hahasdsadsa1
msgStr:hahasdsadsa2
msgStr:hahasdsadsa3
msgStr:hahasdsadsa4
Done!!!
  
    A Sink node is the terminal consumer of a data stream; common examples are Sink.foreach[T](t: T => Unit), Sink.fold[U, T](z: U)((u: U, t: T) => U) and the like. Sink.actorRef specifies an ActorRef instance, and every element of the stream is delivered to that ActorRef for processing. In the program above, Sink.actorRef states which ActorRef receives the messages, and which message that ActorRef will receive as the completion signal when the upstream data flow finishes. Note that the onCompleteMessage is not affected by filter conditions such as str => str.startsWith("haha"). Likewise, Sink.actorRef has no back-pressure handling; when too much data piles up, the only strategies are to drop some of it or simply fail.
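The back-pressure-capable counterpart of Sink.actorRef is Sink.actorRefWithAck: the stream delivers the next element only after the actor acknowledges the previous one. A sketch along the lines of MyActor above (the Init and Ack message names are our own choice; only the handshake protocol itself is fixed by the API):

```scala
import akka.actor.{Actor, Props}
import akka.stream.scaladsl.Sink

case object Init
case object Ack

class AckingActor extends Actor {
  override def receive: Receive = {
    case Init => sender() ! Ack // handshake before the first element arrives
    case "FIN" =>
      println("Done!!!")
      context.stop(self)
    case str: String =>
      println("msgStr:" + str)
      sender() ! Ack // acknowledging is what lets the stream send the next element
  }
}

val ackingActor = system.actorOf(Props[AckingActor])
// The stream waits for Ack after every element, so the actor's mailbox cannot
// be flooded -- this is back pressure at the Sink side.
val ackSink = Sink.actorRefWithAck[String](ackingActor,
  onInitMessage = Init, ackMessage = Ack, onCompleteMessage = "FIN")
```

If the actor never replies with Ack, the stream simply stops sending; a slow actor therefore slows the whole stream rather than overflowing.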

Back-pressure handling

Neither Source.actorRef nor Sink.actorRef above supports back pressure. We can use Source.actorPublisher or Sink.actorSubscriber to handle back pressure on the upstream or downstream side of the data flow, but we need to extend ActorPublisher[T] or ActorSubscriber and implement the processing logic ourselves.

Source.actorPublisher

To implement back-pressure handling manually on the upstream side of a data stream:

case object JobAccepted
case object JobDenied
case class Job(msg: String)
...
class MyPublisherActor extends ActorPublisher[Job] {
  import akka.stream.actor.ActorPublisherMessage._
  val MaxSize = 10
  var buf = Vector.empty[Job]
  override def receive: Receive = {
    case job: Job if buf.size == MaxSize =>
      sender() ! JobDenied // refuse to process: the cache is full
    case job: Job =>
      sender() ! JobAccepted // confirm that the job will be processed
      if (buf.isEmpty && totalDemand > 0)
        onNext(job)
      else {
        buf :+= job // cache the Job first
        deliverBuf() // consume Jobs from the cache once downstream demand exists
      }
    case req @ Request(n) =>
      deliverBuf()
    case Cancel =>
      context.stop(self)
  }

  def deliverBuf(): Unit =
    if (totalDemand > 0) {
      if (totalDemand <= Int.MaxValue) {
        val (use, keep) = buf.splitAt(totalDemand.toInt) // equivalent to (buf.take(n), buf.drop(n))
        buf = keep
        use.foreach(onNext(_)) // split buf in two: send the first part downstream, keep the rest
      } else {
        buf.take(Int.MaxValue).foreach(onNext(_))
        buf = buf.drop(Int.MaxValue)
        deliverBuf() // recurse for the remaining demand
      }
    }
}
...
val jobSource = Source.actorPublisher[Job](Props[MyPublisherActor])
val jobSourceActor = jobSource.via(Flow[Job].map(job => Job(job.msg * 2))).to(Sink.foreach(println)).run()
jobSourceActor ! Job("ha")
jobSourceActor ! Job("he")
  
    The signature of actorPublisher is def actorPublisher[T](props: Props): Source[T, ActorRef]. In the code above, totalDemand is determined by how much the downstream node has requested. onNext(e), defined in ActorPublisher, sends an element to the downstream node. There are also onComplete() and onError(ex), which notify the downstream node so that it can take the appropriate action.
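ActorPublisher additionally provides onCompleteThenStop() and onErrorThenStop(ex), which signal the downstream and then stop the actor once the signal has been delivered. A minimal sketch of ending a stream from inside the publisher actor (the Shutdown message is our own convention, not part of the API):

```scala
import akka.stream.actor.ActorPublisher
import akka.stream.actor.ActorPublisherMessage._

case object Shutdown

class CountPublisher extends ActorPublisher[Int] {
  var counter = 0
  override def receive: Receive = {
    case Request(n) =>
      // onNext decrements totalDemand, so this loop emits exactly as many
      // elements as the downstream has requested.
      while (totalDemand > 0) { counter += 1; onNext(counter) }
    case Shutdown =>
      onCompleteThenStop() // complete the downstream, then stop this actor
    case Cancel =>
      context.stop(self) // downstream cancelled; no completion signal needed
  }
}
```

Calling plain onComplete() and then context.stop(self) immediately risks losing the completion signal; the ThenStop variants wait for delivery first.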

Sink.actorSubscriber

case class Reply(id:Int)
...
class Worker extends Actor{
  override def receive: Receive = {
    case (id:Int,job:Job)=>
      println("finish job:"+job)
      sender()!Reply(id)
  }
}
...
class CenterSubscriber extends ActorSubscriber{
  val router={ // router group
    val routees=Vector.fill(3){ActorRefRoutee(context.actorOf(Props[Worker]))}
    Router(RoundRobinRoutingLogic(),routees)
  }
  var buf=Map.empty[Int,Job]
  override def requestStrategy: RequestStrategy = WatermarkRequestStrategy.apply(100)
  import akka.stream.actor.ActorSubscriberMessage._
  override def receive: Receive = {
    case OnNext(job: Job) =>
      val temp = scala.util.Random.nextInt(10000) -> job
      buf += temp // record the dispatched task
      router.route(temp, self)
    case OnError(ex) =>
      println("error occurred upstream: " + ex.getMessage)
    case OnComplete =>
      println("the data stream has finished..")
    case Reply(id) =>
      buf -= id // delete the record once processing is complete
  }
}
...
val actor = Source.actorPublisher[Job](Props[MyPublisherActor]).to(Sink.actorSubscriber[Job](Props[CenterSubscriber])).run()
actor ! Job("job1")
actor ! Job("job2")
actor ! Job("job3")
  
    An ActorSubscriber can receive the following message types: OnNext for elements from upstream, OnComplete when the upstream data stream has ended, OnError when an error occurs upstream, plus ordinary messages of other types. A subclass of ActorSubscriber must override requestStrategy to provide the back-pressure strategy that controls the data flow (built around request demand: when to request data from upstream, how much data to request at a time, and so on).
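For reference, akka-stream ships several RequestStrategy implementations besides WatermarkRequestStrategy; which one fits depends on how the subscriber tracks its workload. For CenterSubscriber above, MaxInFlightRequestStrategy is arguably the better match, since buf already counts the jobs still in flight. A sketch, not tied to the original code:

```scala
import akka.stream.actor.{MaxInFlightRequestStrategy, OneByOneRequestStrategy,
  RequestStrategy, WatermarkRequestStrategy, ZeroRequestStrategy}

// Request more whenever outstanding demand falls below the high watermark:
val watermark: RequestStrategy = WatermarkRequestStrategy(highWatermark = 100)

// Request exactly one element at a time -- simplest, lowest throughput:
val oneByOne: RequestStrategy = OneByOneRequestStrategy

// Never request automatically; the actor must call request(n) itself:
val manual: RequestStrategy = ZeroRequestStrategy

// Cap the number of unfinished jobs; inside CenterSubscriber this could read:
// override val requestStrategy = new MaxInFlightRequestStrategy(max = 10) {
//   override def inFlightInternally: Int = buf.size // dispatched but not yet Replied
// }
```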
--------------------- 
Author: hangscer 
Source: CSDN 
Original: https://blog.csdn.net/hangscer/article/details/78375134 
Disclaimer: This is the blogger's original article; please include a link to the original when reposting!
