Asynchronous task processing in Python with Beanstalkd

This article uses Beanstalkd as a message queue service and combines it with Python's decorator syntax to implement a simple asynchronous task processing tool.

Final effect

Define the task:

from xxxxx.job_queue import JobQueue

queue = JobQueue()

@queue.task('task_tube_one')
def task_one(arg1, arg2, arg3):
 # do the actual work here
 pass

Submit a task:

task_one.put(arg1="a", arg2="b", arg3="c")

These tasks are then picked up and executed by background worker processes.
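To consume the queue, start a worker process that watches the same tube. A minimal sketch using the Worker class developed below (the module path xxxxx.job_queue and the entry-point script are assumptions):

# run_worker.py (hypothetical entry point)
from xxxxx.job_queue import Worker

if __name__ == '__main__':
 # watch the tube the task was registered on
 Worker(['task_tube_one']).run()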

Implementation process

1. Understand the beanstalkd server

Beanstalk is a simple, fast work queue. https://github.com/kr/beanstalkd

Beanstalkd is a message queue service implemented in C. It exposes a generic interface and was originally designed to reduce page latency in high-traffic web applications by running time-consuming tasks asynchronously. Client implementations exist for many languages; for Python there is beanstalkc, among others. Here, beanstalkc is used simply as the tool for talking to the beanstalkd server.
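Basic beanstalkc usage looks like this (a quick sketch against a beanstalkd running locally on its default port 11300):

import beanstalkc

beanstalk = beanstalkc.Connection(host='localhost', port=11300)
beanstalk.put('hello')     # enqueue a raw string
job = beanstalk.reserve()  # block until a job is available
print(job.body)            # -> 'hello'
job.delete()               # acknowledge and remove the job
beanstalk.close()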

2. The principle of asynchronous task execution

beanstalkd can only queue strings. For the program to support submitting a function with its arguments, and have a worker later execute that function with those arguments, an intermediate layer is needed that registers functions and serializes the submitted parameters.
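Concretely, what travels through beanstalkd is just a JSON string describing the call. For the example at the top, task_one.put(arg1="a", arg2="b", arg3="c") results in a job body like:

{"func_name": "task_one", "tube": "task_tube_one", "kwargs": {"arg1": "a", "arg2": "b", "arg3": "c"}}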

The implementation mainly consists of four parts:

Subscriber: responsible for registering a function on a beanstalkd tube. The implementation is very simple: record the mapping from function name to the function itself (which means the same function name cannot appear twice under the same tube). The mapping is stored in a class variable.

import logging
from collections import defaultdict

logger = logging.getLogger(__name__)

class Subscriber(object):
 # tube -> {function name -> function}
 FUN_MAP = defaultdict(dict)

 def __init__(self, func, tube):
  logger.info('register func:{} to tube:{}.'.format(func.__name__, tube))
  Subscriber.FUN_MAP[tube][func.__name__] = func
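After the decorator from the example has run, the registry maps tube and name back to the function, e.g.:

# Subscriber.FUN_MAP == {'task_tube_one': {'task_one': <function task_one>}}
func = Subscriber.FUN_MAP['task_tube_one']['task_one']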

JobQueue: a convenience wrapper whose task decorator registers an ordinary function with Subscriber and turns it into a Putter.

class JobQueue(object):
 @classmethod
 def task(cls, tube):
  def wrapper(func):
   Subscriber(func, tube)
   return Putter(func, tube)

  return wrapper
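Written without decorator syntax, the @queue.task('task_tube_one') line from the example is roughly equivalent to:

def task_one(arg1, arg2, arg3):
 pass

Subscriber(task_one, 'task_tube_one')         # record the name -> function mapping
task_one = Putter(task_one, 'task_tube_one')  # rebind the name to a Putter (defined next)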

Putter: combines the function name, the function arguments, and the specified tube into an object, serializes it to a JSON string, and pushes it onto the beanstalkd queue through beanstalkc.

import json

import beanstalkc

# BEANSTALK_CONFIG is the application's beanstalkd address,
# e.g. {'host': 'localhost', 'port': 11300}

class Putter(object):
 def __init__(self, func, tube):
  self.func = func
  self.tube = tube

 # Calling the object directly runs the function synchronously
 def __call__(self, *args, **kwargs):
  return self.func(*args, **kwargs)

 # Serialize the call and push it onto the beanstalkd queue for a worker
 def put(self, **kwargs):
  args = {
   'func_name': self.func.__name__,
   'tube': self.tube,
   'kwargs': kwargs
  }
  logger.info('put job:{} to queue'.format(args))
  beanstalk = beanstalkc.Connection(host=BEANSTALK_CONFIG['host'], port=BEANSTALK_CONFIG['port'])
  try:
   beanstalk.use(self.tube)
   job_id = beanstalk.put(json.dumps(args))
   return job_id
  finally:
   beanstalk.close()
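The decorated function can therefore be used both ways (continuing the example from the top):

task_one("a", "b", "c")                              # runs synchronously, in-process
job_id = task_one.put(arg1="a", arg2="b", arg3="c")  # queued; a worker executes it later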

Worker: takes the string off the beanstalkd queue, deserializes it with json.loads to recover the function name, arguments and tube, looks up the actual function registered under that name in Subscriber, and calls it with the arguments.

import json
import signal
import time

import beanstalkc

class Worker(object):
 worker_id = 0

 def __init__(self, tubes):
  self.beanstalk = beanstalkc.Connection(host=BEANSTALK_CONFIG['host'], port=BEANSTALK_CONFIG['port'])
  self.tubes = tubes
  self.reserve_timeout = 20
  self.timeout_limit = 1000
  self.kick_period = 600
  self.signal_shutdown = False
  self.release_delay = 0
  self.age = 0
  signal.signal(signal.SIGTERM, lambda signum, frame: self.graceful_shutdown())
  Worker.worker_id += 1
  # import the modules that define tasks, so that their decorators
  # run and register the name -> function mapping with Subscriber
  import_module_by_str('pear.web.controllers.controller_crawler')

 def subscribe(self):
  if isinstance(self.tubes, list):
   for tube in self.tubes:
    if tube not in Subscriber.FUN_MAP.keys():
     logger.error('tube:{} not register!'.format(tube))
     continue
    self.beanstalk.watch(tube)
  else:
   if self.tubes not in Subscriber.FUN_MAP.keys():
    logger.error('tube:{} not register!'.format(self.tubes))
    return
   self.beanstalk.watch(self.tubes)

 def run(self):
  self.subscribe()
  while True:
   if self.signal_shutdown:
    logger.info("graceful shutdown")
    break
   # block until a job arrives, waiting at most reserve_timeout seconds
   job = self.beanstalk.reserve(timeout=self.reserve_timeout)
   if not job:
    continue
   try:
    self.on_job(job)
    self.delete_job(job)
   except beanstalkc.CommandFailed as e:
    logger.warning(e, exc_info=1)
   except Exception as e:
    logger.error(e)
    kicks = job.stats()['kicks']
    if kicks < 3:
     self.bury_job(job)
    else:
     message = json.loads(job.body)
     logger.error("Kicks reach max. Delete the job", extra={'body': message})
     self.delete_job(job)

 @classmethod
 def on_job(cls, job):
  start = time.time()
  msg = json.loads(job.body)
  logger.info(msg)
  tube = msg.get('tube')
  func_name = msg.get('func_name')
  try:
   func = Subscriber.FUN_MAP[tube][func_name]
   kwargs = msg.get('kwargs')
   func(**kwargs)
   logger.info(u'{}-{}'.format(func, kwargs))
  except Exception as e:
   logger.error(e, exc_info=True)  # e.message is Python 2-only and unreliable
  cost = time.time() - start
  logger.info('{} cost {}s'.format(func_name, cost))

 @classmethod
 def delete_job(cls, job):
  try:
   job.delete()
  except beanstalkc.CommandFailed as e:
   logger.warning(e, exc_info=1)

 @classmethod
 def bury_job(cls, job):
  try:
   job.bury()
  except beanstalkc.CommandFailed as e:
   logger.warning(e, exc_info=1)

 def graceful_shutdown(self):
  self.signal_shutdown = True
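Note that bury_job moves a failed job into beanstalkd's buried state, and the run loop deletes a job once its kicks counter reaches 3. The kick_period attribute hints at a companion loop that periodically kicks buried jobs back to the ready queue; it is not shown above, but a sketch of the idea (an assumption on my part, using beanstalkc's kick command) could be:

 def kick_jobs_periodically(self):
  # move buried jobs back to the ready queue every kick_period seconds
  while not self.signal_shutdown:
   self.beanstalk.kick(bound=100)
   time.sleep(self.kick_period)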

When writing the code above, I ran into a problem:

The name-to-function mapping that Subscriber registers is built inside one Python interpreter, i.e. one process, while the Worker runs asynchronously in a different process. How can the Worker see the same Subscriber registry as the Putter? It turns out Python's decorator mechanism solves this.

It is this one line that solves the Subscriber problem:

import_module_by_str('pear.web.controllers.controller_crawler')

# implementation of import_module_by_str
def import_module_by_str(module_name):
 if isinstance(module_name, unicode):  # Python 2: __import__ wants a str, not unicode
  module_name = str(module_name)
 __import__(module_name)

When import_module_by_str is executed, __import__ is called to load the module dynamically, bringing the module that contains the JobQueue-decorated functions into memory. So when the Worker starts, the Python interpreter first executes the @-decorator code at import time, which loads the name-to-function mapping into Subscriber in the Worker's own process.
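In other words, importing the task modules is what fills the registry inside the worker process. A tiny illustration (the module name is hypothetical):

# xxxxx/tasks.py defines task_one with @queue.task('task_tube_one')
import_module_by_str('xxxxx.tasks')
assert 'task_one' in Subscriber.FUN_MAP['task_tube_one']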
