Android Binder Study Notes 1 - Starting ServiceManager

1. Introduction

This article is a set of summary notes for a series of binder articles. The goal is to lay out binder's overall flow and architecture; along the way the code is re-read against Android R, and the material is reorganized according to my own understanding to deepen the overall picture of binder. This installment covers starting ServiceManager.
Given the Binder driver, how does one actually communicate with it? Through ServiceManager. Many articles call ServiceManager the daemon or "housekeeper" of the Binder driver, but its job is actually simple: it provides service registration and service lookup. Let's walk through how ServiceManager starts.

  • ServiceManager has a framework layer and a native layer; the framework layer is just a wrapper around the native layer for easier calling. What follows is the native-layer startup flow.

  • ServiceManager is started at boot: the init process parses its rc entry and launches the binary, entering at main(). (In older releases the entry point was main() in service_manager.c, with a binder.c that wrapped the raw interaction with the Binder driver; since Android Q the daemon is C++ code built on libbinder, and that is the code Android R uses.)

  • Startup takes three steps: first open the driver, which creates the global binder_procs list in the driver; then save the current process's information into binder_procs; finally start a loop that keeps processing data in the shared memory and handling BR_xxx commands (ioctl-level commands; BR can be read as "binder reply", the driver's response after it finishes processing). A minimal client-side sketch of the register/lookup usage follows.
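
To make the register/lookup role concrete, here is a minimal client-side sketch using the public libbinder C++ API. The service name "demo.service" and the bare BBinder object are made-up placeholders; a real service subclasses BBinder (usually via an AIDL-generated Bn* class).

#include <binder/Binder.h>
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>
#include <utils/String16.h>

using namespace android;

int main() {
    // Register: hand a binder object to servicemanager under a name.
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> service = new BBinder();  // placeholder service object
    sm->addService(String16("demo.service"), service);

    // Lookup: any process can now query the same name and get a proxy.
    sp<IBinder> binder = sm->getService(String16("demo.service"));
    (void)binder;

    // Spin up binder threads so incoming transactions are serviced.
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    return 0;
}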

2. Starting ServiceManager

# frameworks/native/cmds/servicemanager/servicemanager.rc
service servicemanager /system/bin/servicemanager
    class core animation
    user system
    group system readproc
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart audioserver
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart inputflinger
    onrestart restart drm
    onrestart restart cameraserver
    onrestart restart keystore
    onrestart restart gatekeeperd
    onrestart restart thermalservice
    writepid /dev/cpuset/system-background/tasks
    shutdown critical

ServiceManager is created by the init process when it parses the rc file above. The executable is /system/bin/servicemanager, the corresponding source file is android/frameworks/native/cmds/servicemanager/main.cpp, and the process name is /system/bin/servicemanager.
The startup flow is described below, starting from main().

3. main

/*android/frameworks/native/cmds/servicemanager/main.cpp*/
int main(int argc, char** argv)
    |--const char* driver = argc == 2 ? argv[1] : "/dev/binder";
    |--sp<ProcessState> ps = ProcessState::initWithDriver(driver);
    |--ps->setThreadPoolMaxThreadCount(0);
    |      |--ioctl(mDriverFD, BINDER_SET_MAX_THREADS, &maxThreads)
    |--ps->setCallRestriction(ProcessState::CallRestriction::FATAL_IF_NOT_ONEWAY);
    |--sp<ServiceManager> manager = new ServiceManager(std::make_unique<Access>());
    |--manager->addService("manager", manager, false /*allowIsolated*/, 
    |                   IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk())
    |--IPCThreadState::self()->setTheContextObject(manager);
    |  //成为上下文管理者
    |--ps->becomeContextManager(nullptr, nullptr);
    |--sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);
    |--BinderCallback::setupTo(looper);
    |  // 进入无限循环,处理client端发来的请求
    \--while(true)
           looper->pollAll(-1)

ServiceManager is the daemon of the Binder IPC mechanism and is itself a Binder service. Its main work is:

  1. initWithDriver: opens the driver (binder_open in the kernel creates the binder_proc), sets the default maximum number of binder threads (DEFAULT_MAX_BINDER_THREADS, 15), and maps the binder communication buffer via mmap;

  2. ps->setThreadPoolMaxThreadCount(0): sets the thread-pool limit to 0. servicemanager spawns no binder threads; it handles every transaction on the main thread through the Looper set up below, so the pool is deliberately empty;

  3. ps->setCallRestriction: configures what happens when this process itself makes a blocking (non-oneway) outgoing binder call: log an error or abort. With FATAL_IF_NOT_ONEWAY, such a call is fatal;

  4. new ServiceManager: creates the ServiceManager object;

  5. manager->addService("manager", …): adds servicemanager itself to the service list;

  6. IPCThreadState::self()->setTheContextObject(manager): registers the ServiceManager object as the local context object, so incoming transactions addressed to handle 0 are dispatched directly to it;

  7. ps->becomeContextManager: registers this process with the driver as the context manager, the "housekeeper" of all binder services;

  8. enters the infinite loop and starts servicing the binder fd (see the BinderCallback sketch after this list).
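
Step 8 is driven by BinderCallback. The following is a condensed sketch of BinderCallback from Android R's main.cpp (error handling trimmed): it obtains the binder fd from IPCThreadState, registers it with the Looper, and drains driver commands whenever the fd becomes readable.

#include <binder/IPCThreadState.h>
#include <utils/Looper.h>

using namespace android;

class BinderCallback : public LooperCallback {
public:
    static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
        sp<BinderCallback> cb = new BinderCallback;

        int binderFd = -1;
        // Sends BC_ENTER_LOOPER and returns the driver fd for polling.
        IPCThreadState::self()->setupPolling(&binderFd);
        // Flush after setupPolling() so the driver is up to date.
        IPCThreadState::self()->flushCommands();

        looper->addFd(binderFd, Looper::POLL_CALLBACK, Looper::EVENT_INPUT,
                      cb, nullptr /*data*/);
        return cb;
    }

    int handleEvent(int /*fd*/, int /*events*/, void* /*data*/) override {
        // Read and dispatch pending BR_* commands from the driver.
        IPCThreadState::self()->handlePolledCommands();
        return 1;  // keep the fd registered with the Looper
    }
};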

|- -ProcessState::initWithDriver(driver)

sp<ProcessState> ps = ProcessState::initWithDriver(driver);
    |--if (gProcess != nullptr)
    |      if (!strcmp(gProcess->getDriverName().c_str(), driver))
    |          return gProcess;
    |  // open the binder driver and map the transaction buffer
    |--gProcess = new ProcessState(driver)
          |--mDriverFD(open_driver(driver))
          |--mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE |
                               MAP_NORESERVE, mDriverFD, 0);

This initializes the per-process singleton ProcessState. open_driver reaches the driver's open callback (binder_open), which creates the binder_proc for the current process; the mmap callback (binder_mmap) allocates the VMA and creates the first binder_buffer, which is placed on the binder_proc's proc->buffers list.

  1. strcmp(gProcess->getDriverName().c_str(), driver): first check whether ProcessState has already been initialized; if it has, mDriverName was assigned and can be read back via getDriverName, and the existing instance is returned directly. servicemanager's ProcessState may only be initialized once;

  2. open_driver
    Constructing the ProcessState object runs open_driver(), which opens the binder device. The open() system call traps into the Binder driver and lands in binder_open(); that function creates a binder_proc object in the driver, stores it in filp->private_data, and links it into the global binder_procs list.

  3. mmap
    Constructing the ProcessState object also calls mmap() to set up the memory mapping. It corresponds to binder_mmap() in the driver, which creates a binder_buffer object and puts it on the binder_proc's proc->buffers list.

|- - -open_driver

mDriverFD(open_driver(driver))
    |--int fd = open(driver, O_RDWR | O_CLOEXEC)
    |  // query the binder version for the compatibility check below
    |--status_t result = ioctl(fd, BINDER_VERSION, &vers)
    |--check the binder protocol version
    |--size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
    |  // set the maximum number of binder threads
    |--result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);

  1. open: open_driver opens the /dev/binder device, which invokes binder_open in the driver; it allocates and initializes a binder_proc and links it into the global binder_procs list;

  2. ioctl(fd, BINDER_VERSION, &vers): fetches the binder version via BINDER_VERSION and checks it against the expected protocol version;

  3. ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads): sets the maximum number of binder threads this process supports via BINDER_SET_MAX_THREADS.

The flow of binder_open in the binder driver is as follows:

static int binder_open(struct inode *nodp, struct file *filp)
    |--struct binder_proc *proc;
    |  struct binder_device *binder_dev;
    |  struct binderfs_info *info;
    |  // allocate a binder_proc; it represents a process taking part in binder IPC (one per process that opens the driver)
    |--proc = kzalloc(sizeof(*proc), GFP_KERNEL)
    |  // initialize the binder_proc
    |--proc->tsk = current->group_leader;
    |  INIT_LIST_HEAD(&proc->todo);
    |--proc->default_priority.sched_policy = current->policy;
    |  proc->default_priority.prio = current->normal_prio;
    |--binder_dev = container_of(filp->private_data,struct binder_device, miscdev);
    |  refcount_inc(&binder_dev->ref);
    |  proc->context = &binder_dev->context;
    |  binder_alloc_init(&proc->alloc);
    |      |--alloc->pid = current->group_leader->pid
    |      |--INIT_LIST_HEAD(&alloc->buffers)
    |  proc->pid = current->group_leader->pid
    |  INIT_LIST_HEAD(&proc->delivered_death);
    |  INIT_LIST_HEAD(&proc->waiting_threads)
    |  filp->private_data = proc
    |  // link the new binder_proc into the global binder_procs list
    |  hlist_add_head(&proc->proc_node, &binder_procs);

This allocates and initializes a binder_proc and links it into the global binder_procs list:

1. kzalloc(sizeof(*proc), GFP_KERNEL): allocates the binder_proc;

2. the newly created binder_proc is initialized (todo list, default priority, binder_alloc, pid, and so on);

3. hlist_add_head(&proc->proc_node, &binder_procs): links it into the global binder_procs list.

|- - -mmap

mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE |   
                              MAP_NORESERVE, mDriverFD, 0)
    |--binder_mmap(filp, struct vm_area_struct *vma)
           |--struct binder_proc *proc = filp->private_data
           |--vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
           |  vma->vm_flags &= ~VM_MAYWRITE;
           |--vma->vm_ops = &binder_vm_ops
           |  vma->vm_private_data = proc
           |--binder_alloc_mmap_handler(&proc->alloc, vma)    
                  |--alloc->buffer = (void __user *)vma->vm_start
                  |--alloc->pages = kcalloc((vma->vm_end - vma->vm_start) / PAGE_SIZE,
                  |                            sizeof(alloc->pages[0]),
                  |                            GFP_KERNEL); 
                  |--alloc->buffer_size = vma->vm_end - vma->vm_start      
                  |--buffer = kzalloc(sizeof(*buffer), GFP_KERNEL) 
                  |--buffer->user_data = alloc->buffer    
                  |--list_add(&buffer->entry, &alloc->buffers)  
                  |--binder_insert_free_buffer(alloc, buffer)
                  |--alloc->free_async_space = alloc->buffer_size / 2     
                  |--binder_alloc_set_vma(alloc, vma)      

The mmap system call gives user space a VMA that shares physical pages with the kernel-side binder buffer, so the receiver reads transaction data directly from the mapped region. This saves one memory copy: the driver copies the sender's data exactly once, straight into pages that are already visible to the receiving process.
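
Putting open_driver() and mmap() together, the whole setup boils down to a handful of syscalls. Here is a standalone sketch of the same sequence (assuming the uapi header linux/android/binder.h is available; the size expression mirrors BINDER_VM_SIZE from ProcessState.cpp, and 15 is DEFAULT_MAX_BINDER_THREADS):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open"); return 1; }

    // Version handshake, as in open_driver().
    struct binder_version vers {};
    if (ioctl(fd, BINDER_VERSION, &vers) < 0 ||
        vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION) {
        fprintf(stderr, "binder version mismatch\n");
        close(fd);
        return 1;
    }

    // Thread-pool limit, as in open_driver().
    uint32_t maxThreads = 15;  // DEFAULT_MAX_BINDER_THREADS
    ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);

    // Map the transaction buffer; this lands in the driver's binder_mmap().
    size_t vmSize = 1 * 1024 * 1024 - sysconf(_SC_PAGE_SIZE) * 2;  // BINDER_VM_SIZE
    void* vmStart = mmap(nullptr, vmSize, PROT_READ,
                         MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vmStart == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    munmap(vmStart, vmSize);
    close(fd);
    return 0;
}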

|- -addService("manager"…)

status_t ServiceManagerShim::addService(const String16& name, const sp<IBinder>& service,
                                        bool allowIsolated, int dumpsysPriority)
{
    Status status = mTheRealServiceManager->addService(
        String8(name).c_str(), service, allowIsolated, dumpsysPriority);
    return status.exceptionCode();
}
#frameworks/native/cmds/servicemanager/ServiceManager.cpp
ServiceManager::addService(const std::string& name,
	const sp<IBinder>& binder, bool allowIsolated, int32_t dumpPriority)
	|--// Overwrite the old service if it exists
	    mNameToService[name] = Service {
	        .binder = binder,
	        .allowIsolated = allowIsolated,
	        .dumpPriority = dumpPriority,
	        .debugPid = ctx.debugPid,
	    };
	    auto it = mNameToRegistrationCallback.find(name);
	    if (it != mNameToRegistrationCallback.end()) {
	        for (const sp<IServiceCallback>& cb : it->second) {
	            mNameToService[name].guaranteeClient = true;
	            // permission checked in registerForNotifications
	            cb->onRegistration(name, binder);
	        }
	    }

from: https://www.jianshu.com/p/1bb2e9a87300
addService simply records the mapping from the service name to its binder. In some scenarios a process needs the binder registered under a given name but cannot know when it will become available; there are two ways to handle this: poll with getService, or register for a notification when the service is added. A sketch of both follows.
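A sketch of both approaches ("demo.service"-style names are made up; waitForService is the API Android R added on top of the registration-callback path shown above):

#include <binder/IServiceManager.h>
#include <unistd.h>

using namespace android;

sp<IBinder> obtainService(const char* name) {
    sp<IServiceManager> sm = defaultServiceManager();

    // Option 1: poll. getService() already retries internally for a few
    // seconds, so a thin outer loop is usually enough.
    sp<IBinder> binder;
    while ((binder = sm->getService(String16(name))) == nullptr) {
        usleep(100 * 1000);  // back off 100 ms between attempts
    }
    return binder;

    // Option 2 (alternative): block until the service appears. In Android R,
    // waitForService() is built on registerForNotifications /
    // IServiceCallback::onRegistration, i.e. the callback path shown above.
    // return sm->waitForService(String16(name));
}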

|- -ps->becomeContextManager

ps->becomeContextManager(nullptr, nullptr);
    |  // issue BINDER_SET_CONTEXT_MGR_EXT via ioctl
    |  // (falls back to plain BINDER_SET_CONTEXT_MGR on older kernels)
    |--ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR_EXT, &obj)
           |--struct flat_binder_object fbo;
           |--binder_ioctl_set_ctx_mgr(filp, &fbo)
                |  // the binder_proc was created in binder_open
                |--struct binder_proc *proc = filp->private_data;
                |  struct binder_context *context = proc->context;
                |  struct binder_node *new_node;
                |  kuid_t curr_euid = current_euid();
                |
                |  // ensure binder_context_mgr_node is only ever created once
                |--if (context->binder_context_mgr_node) goto out;
                |  // record the current thread's euid as Service Manager's uid
                |--context->binder_context_mgr_uid = curr_euid
                |  // create the global binder_node: binder_context_mgr_node
                |--new_node = binder_new_node(proc, fbo)
                |--new_node->local_weak_refs++;
                |  new_node->local_strong_refs++
                |  new_node->has_strong_ref = 1
                |  new_node->has_weak_ref = 1
                |--context->binder_context_mgr_node = new_node;

This makes servicemanager the context manager; the whole system has exactly one. It creates the globally unique binder_node, binder_context_mgr_node (the binder_new_node flow follows the sketch below).
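
The practical consequence is that handle 0 always resolves to this node, so any process can reach servicemanager without a prior lookup; defaultServiceManager() is built on exactly this. A short sketch:

#include <binder/IServiceManager.h>
#include <binder/ProcessState.h>

using namespace android;

int main() {
    // A BpBinder for handle 0: the driver routes every transaction sent to
    // this handle to binder_context_mgr_node, i.e. to servicemanager.
    sp<IBinder> smBinder = ProcessState::self()->getContextObject(nullptr);

    // defaultServiceManager() wraps this same object (in Android R it is
    // interface_cast to the AIDL interface inside a ServiceManagerShim).
    sp<IServiceManager> sm = defaultServiceManager();
    (void)smBinder; (void)sm;
    return 0;
}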

new_node = binder_new_node(proc, fbo)
   |--struct binder_node *new_node = kzalloc(sizeof(*new_node), GFP_KERNEL)
   |--struct binder_node *node = binder_init_node_ilocked(proc, new_node, fp)
       |--find the insertion point in the proc->nodes red-black tree
       |--node = new_node;
       |  // link the new node into the proc's red-black tree
       |--rb_link_node(&node->rb_node, parent, p);
       |--rb_insert_color(&node->rb_node, &proc->nodes);
       |--node->proc = proc
       |--node->ptr = ptr;
       |--node->work.type = BINDER_WORK_NODE;
       |--INIT_LIST_HEAD(&node->work.entry);
       |--INIT_LIST_HEAD(&node->async_todo);

|- -looper->pollAll

looper->pollAll(-1)
    |--pollAll(timeoutMillis, nullptr, nullptr, nullptr)
           |--if (timeoutMillis <= 0)
                   do {
                       result = pollOnce(timeoutMillis, outFd, outEvents, outData)
                   } while (result == POLL_CALLBACK);
result = pollOnce(timeoutMillis, outFd, outEvents, outData)
	|--for (;;)
	        if (result != 0)
	            if (outFd != nullptr) *outFd = 0;
	            if (outEvents != nullptr) *outEvents = 0;
	            if (outData != nullptr) *outData = nullptr;
	            return result;
	        result = pollInner(timeoutMillis);
int Looper::pollInner(int timeoutMillis)
    |  //Adjust the timeout based on when the next message is due
    |--if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX)
    |       nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
    |       int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
    |       if (messageTimeoutMillis >= 0 && (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis))
    |           timeoutMillis = messageTimeoutMillis;
    |--struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    |--int eventCount = epoll_wait(mEpollFd.get(), eventItems, EPOLL_MAX_EVENTS, timeoutMillis);
    .......

Once servicemanager has successfully registered as the context manager of the Binder mechanism, it is the "steward" of the whole framework and must handle client requests for as long as the system runs. Because requests can arrive at any time, it uses an infinite loop: it creates a Looper and installs two callbacks, BinderCallback to handle binder transactions and ClientCallbackCallback to periodically report client changes (sketched below).
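
For completeness, a condensed sketch of ClientCallbackCallback from Android R's main.cpp: it arms a timerfd that fires every five seconds, and each tick drains the timer and lets ServiceManager walk its registered client callbacks (handleClientCallbacks).

#include <sys/timerfd.h>
#include <unistd.h>
#include <utils/Looper.h>
#include "ServiceManager.h"  // servicemanager's own header

using namespace android;

class ClientCallbackCallback : public LooperCallback {
public:
    static sp<ClientCallbackCallback> setupTo(const sp<Looper>& looper,
                                              const sp<ServiceManager>& manager) {
        sp<ClientCallbackCallback> cb = new ClientCallbackCallback(manager);

        // Fire every 5 seconds, starting 5 seconds from now.
        int fdTimer = timerfd_create(CLOCK_MONOTONIC, 0 /*flags*/);
        itimerspec timespec{
            .it_interval = {.tv_sec = 5, .tv_nsec = 0},
            .it_value = {.tv_sec = 5, .tv_nsec = 0},
        };
        timerfd_settime(fdTimer, 0 /*flags*/, &timespec, nullptr);

        looper->addFd(fdTimer, Looper::POLL_CALLBACK, Looper::EVENT_INPUT,
                      cb, nullptr);
        return cb;
    }

    int handleEvent(int fd, int /*events*/, void* /*data*/) override {
        uint64_t expirations;
        read(fd, &expirations, sizeof(expirations));  // drain the timerfd
        mManager->handleClientCallbacks();  // notify services of client-count changes
        return 1;  // keep the timer fd registered
    }

private:
    explicit ClientCallbackCallback(const sp<ServiceManager>& manager)
        : mManager(manager) {}
    sp<ServiceManager> mManager;
};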


Reposted from blog.csdn.net/jasonactions/article/details/119732367