Hadoop 2.x - the Hadoop RPC protocols

1. Submitting an MR job


2. Flow from NodeManager to ResourceManager:



[figure: the RPC call path, with WritableRpcEngine as the default engine and ProtobufRpcEngine substituted in]

  So from this figure, we know that YARN would use WritableRpcEngine.java as its RPC engine by default, but here, of course, Hadoop uses ProtobufRpcEngine.java instead, so the serialization/deserialization of parameters is done with Protocol Buffers.
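
  As a minimal sketch of where that switch happens (modeled on the constructor of ApplicationClientProtocolPBClientImpl; exact signatures vary across Hadoop versions, so treat this as illustrative):

import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.ProtobufRpcEngine;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.yarn.api.ApplicationClientProtocolPB;

// Simplified sketch, not the real ApplicationClientProtocolPBClientImpl.
public class PbClientSketch {

  private final ApplicationClientProtocolPB proxy;

  public PbClientSketch(long clientVersion, InetSocketAddress rmAddress,
                        Configuration conf) throws IOException {
    // Override the default WritableRpcEngine: parameters for this protocol
    // are now (de)serialized with Protocol Buffers.
    RPC.setProtocolEngine(conf, ApplicationClientProtocolPB.class,
        ProtobufRpcEngine.class);
    // Obtain a dynamic proxy that talks to the ResourceManager over Hadoop RPC.
    this.proxy = RPC.getProxy(ApplicationClientProtocolPB.class, clientVersion,
        rmAddress, conf);
  }
}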

  For more detail, look into YarnServiceProtos.java to see how YARN converts a plain parameter into its protobuf counterpart.
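
  Roughly speaking (this is an illustrative sketch, not the real *PBImpl code), the generated messages in YarnServiceProtos carry the request/response level while the ones in YarnProtos carry the nested fields, so building a request for the wire looks like this:

import org.apache.hadoop.yarn.proto.YarnProtos.ApplicationIdProto;
import org.apache.hadoop.yarn.proto.YarnServiceProtos.GetApplicationReportRequestProto;

// Hypothetical helper showing the conversion idea; in YARN the actual work is
// done by the *PBImpl wrapper classes around these generated protobuf messages.
public class PbConversionSketch {

  static GetApplicationReportRequestProto toProto(long clusterTimestamp, int appId) {
    // Field-level message, generated from yarn_protos.proto.
    ApplicationIdProto applicationId = ApplicationIdProto.newBuilder()
        .setClusterTimestamp(clusterTimestamp)
        .setId(appId)
        .build();
    // Request-level message, generated from yarn_service_protos.proto.
    return GetApplicationReportRequestProto.newBuilder()
        .setApplicationId(applicationId)
        .build();
  }
}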


Some important classes:

 ApplicationClientProtocolPBClientImpl--the client-side proxy to the remote ResourceManager.

 RpcClientFactoryPBImpl--supplies the client proxy implementation for a given protocol, e.g. ApplicationClientProtocol; that is, it determines which class serves a given API protocol, for example by rewriting the package path to impl.pb.client and appending the suffix PBClientImpl to the protocol class name (see the naming sketch after this list).

 HadoopYarnProtoRPC--provides the RPC proxy to the remote server, so the above proxy is generated through this class.

 DefaultFailoverProxyProvider--the failover provider used when the retry proxy decides a failover is warranted.

 RetryInvocationHandler--the entry point of the proxy; it delivers each invocation to the underlying implementation, e.g. ApplicationClientProtocolPBClientImpl.

 ApplicationClientProtocolPBClientImpl--the client proxy class. Through this proxy, every method invocation is wrapped with the retry mechanism; yep, that is what 'proxy' means here (see the retry sketch after this list).

 Client--the final RPC service on the client side; it actually puts the call on the wire.

 YarnServiceProtos--holds the generated protobuf messages that request and response parameters are converted into; corresponds to 'yarn_service_protos.proto'.

 YarnProtos--similar to YarnServiceProtos, but it holds the generated messages for the individual fields nested inside the parameters built from YarnServiceProtos; corresponds to 'yarn_protos.proto'.
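
The naming rule mentioned under RpcClientFactoryPBImpl, as a rough sketch (the real factory resolves the class via reflection with extra caching and error handling; this assumes the YARN client jars are on the classpath):

import org.apache.hadoop.yarn.api.ApplicationClientProtocol;

// Hypothetical helper mirroring RpcClientFactoryPBImpl's protocol -> PB client mapping.
public class PbClientNameSketch {

  // e.g. org.apache.hadoop.yarn.api.ApplicationClientProtocol
  //  ->  org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl
  static String pbClientImplName(Class<?> protocol) {
    String pkg = protocol.getPackage().getName();
    return pkg + ".impl.pb.client." + protocol.getSimpleName() + "PBClientImpl";
  }

  public static void main(String[] args) throws Exception {
    String clazzName = pbClientImplName(ApplicationClientProtocol.class);
    // The factory would load this class and invoke its
    // (long, InetSocketAddress, Configuration) constructor reflectively.
    System.out.println(Class.forName(clazzName).getName());
  }
}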

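And roughly how the retry layer stacks on top of the PB client proxy, using Hadoop's retry utilities (a sketch of the non-HA case; the retry count and sleep below are arbitrary):

import java.util.concurrent.TimeUnit;

import org.apache.hadoop.io.retry.DefaultFailoverProxyProvider;
import org.apache.hadoop.io.retry.RetryPolicies;
import org.apache.hadoop.io.retry.RetryPolicy;
import org.apache.hadoop.io.retry.RetryProxy;
import org.apache.hadoop.yarn.api.ApplicationClientProtocol;

// Hypothetical helper: wrap an already-created PB client proxy with retries.
public class RetryProxySketch {

  static ApplicationClientProtocol wrapWithRetries(ApplicationClientProtocol pbClient) {
    // Retry each failed invocation a bounded number of times with a fixed sleep.
    RetryPolicy policy =
        RetryPolicies.retryUpToMaximumCountWithFixedSleep(30, 1, TimeUnit.SECONDS);
    // RetryProxy returns a java.lang.reflect.Proxy whose InvocationHandler is a
    // RetryInvocationHandler; DefaultFailoverProxyProvider just keeps handing back
    // the same underlying proxy, since there is nothing to fail over to.
    return (ApplicationClientProtocol) RetryProxy.create(
        ApplicationClientProtocol.class,
        new DefaultFailoverProxyProvider<ApplicationClientProtocol>(
            ApplicationClientProtocol.class, pbClient),
        policy);
  }
}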
 

 


Reposted from leibnitz.iteye.com/blog/2164221