day 31-1: daemon processes, mutexes, queues, and the producer-consumer model

Daemon

Two points need to be emphasized about daemon processes:

First: a daemon process is terminated as soon as the main process's code finishes executing.

Second: a daemon process cannot spawn child processes of its own; attempting to do so raises an exception: AssertionError: daemonic processes are not allowed to have children

from multiprocessing import Process
import time

def foo():
    pass

def run():
    print('this is the child')
    time.sleep(3)
    p1 = Process(target=foo)    # AssertionError: daemonic processes are not allowed to have children
    p1.start()


if __name__ == '__main__':
    p = Process(target=run)
    p.daemon = True              # daemon must be set before p.start() is called
    p.start()
    p.join()
    print('primary process')
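
The example above triggers the second point. To see the first point in action, here is a minimal sketch (a variation on the example above, not from the original post) that drops join(): the daemon child is killed as soon as the main process's code finishes, so its print never appears.

from multiprocessing import Process
import time


def task():
    time.sleep(3)
    print('this line never runs')    # the daemon is killed long before the sleep finishes


if __name__ == '__main__':
    p = Process(target=task)
    p.daemon = True
    p.start()                        # no join(): the main process's code ends immediately
    print('primary process done')    # after this line the daemon p is terminated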

 

Mutex

Processes do not share data with each other, but they do share the same file system, so they can access the same file or print to the same terminal. That by itself is fine, but sharing a resource brings competition, and unmanaged competition brings disorder.

The main methods:

from multiprocessing import Process, Lock   # import the lock
mutex.acquire()      # acquire the lock
mutex.release()      # release the lock
from multiprocessing import Process, Lock
import time


def run(name, mutex):
    mutex.acquire()      # acquire the lock
    print('this is process %s 1' % name)
    time.sleep(1)
    print('this is process %s 2' % name)
    time.sleep(1)
    print('this is process %s 3' % name)
    mutex.release()      # release the lock


if __name__ == '__main__':
    mutex = Lock()
    for i in range(3):
        p = Process(target=run, args=(i, mutex))
        p.start()
    print('primary process, execution ends')

Result (this output is from the ticket-grabbing example in the next section, run with the mutex enabled: the checks run concurrently, then the purchases are serialized by the lock):

User 0 checks, remaining tickets: 5
User 1 checks, remaining tickets: 5
User 2 checks, remaining tickets: 5
User 3 checks, remaining tickets: 5
User 4 checks, remaining tickets: 5
User 5 checks, remaining tickets: 5
User 6 checks, remaining tickets: 5
User 7 checks, remaining tickets: 5
User 8 checks, remaining tickets: 5
User 9 checks, remaining tickets: 5
User 0 bought a ticket successfully
User 1 bought a ticket successfully
User 2 bought a ticket successfully
User 3 bought a ticket successfully
User 4 bought a ticket successfully
User 5 failed to buy a ticket
User 6 failed to buy a ticket
User 7 failed to buy a ticket
User 8 failed to buy a ticket
User 9 failed to buy a ticket

 

Mutex and join

Using join turns concurrent execution into serial execution, and a mutex works on the same principle of serializing access. So why not simply use join instead of a mutex? Let's try it.

from multiprocessing import Process, Lock
import time
import json    # note: db.txt must already exist, e.g. containing {"count": 5}


def show(name):
    dic = json.load(open('db.txt', 'r', encoding='utf-8'))
    print('%s checks, remaining tickets: %s' % (name, dic['count']))


def get(name):
    time.sleep(0.5)
    dic = json.load(open('db.txt', 'r', encoding='utf-8'))
    if dic['count'] != 0:
        dic['count'] -= 1
        time.sleep(0.5)
        json.dump(dic, open('db.txt', 'w', encoding='utf-8'))
        print('%s bought a ticket successfully' % name)
    else:
        print('%s failed to buy a ticket' % name)


def run(name):
    show(name)
    # mutex.acquire()
    get(name)
    # mutex.release()


if __name__ == '__main__':
    # mutex = Lock()
    for i in range(10):
        p = Process(target=run, args=('user %s' % i,))
        p.start()
        p.join()

Result:

User 0 checks, remaining tickets: 5
User 0 bought a ticket successfully
User 1 checks, remaining tickets: 4
User 1 bought a ticket successfully
User 2 checks, remaining tickets: 3
User 2 bought a ticket successfully
User 3 checks, remaining tickets: 2
User 3 bought a ticket successfully
User 4 checks, remaining tickets: 1
User 4 bought a ticket successfully
User 5 checks, remaining tickets: 0
User 5 failed to buy a ticket
User 6 checks, remaining tickets: 0
User 6 failed to buy a ticket
User 7 checks, remaining tickets: 0
User 7 failed to buy a ticket
User 8 checks, remaining tickets: 0
User 8 failed to buy a ticket
User 9 checks, remaining tickets: 0
User 9 failed to buy a ticket

We find that using join does turn the concurrent code serial and does guarantee data safety, but now even the check operation is serialized: only one user can check at a time. Clearly the checks should run concurrently, since merely querying the data needs no safety guarantee. This is the obvious difference between join and a mutex: join serializes the entire task, while the benefit of a mutex is that it can serialize just one piece of code within a task, for example letting only the get() part run serially.
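
For comparison, here is a sketch with the commented-out mutex lines from the example above enabled (it reuses show() and get() exactly as defined there): the check stays concurrent while only the purchase is serialized. This is presumably the kind of run that produced the earlier result block, where all ten users see 5 remaining tickets but only 5 purchases succeed.

# reusing show() and get() from the example above
def run(name, mutex):
    show(name)           # no lock here: all the checks run concurrently
    mutex.acquire()      # only the purchase is serialized
    get(name)
    mutex.release()


if __name__ == '__main__':
    mutex = Lock()
    for i in range(10):
        p = Process(target=run, args=('user %s' % i, mutex))
        p.start()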

Mutex Summary

Locking ensures that when multiple processes modify the same piece of data, only one task can modify it at a time, i.e. modifications are serialized. Yes, this is slower, but speed is sacrificed to guarantee data safety.
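
As a side note, multiprocessing.Lock also supports the context-manager protocol, so an acquire/release pair can be written as a with block, which releases the lock even if the body raises an exception. A sketch of the locked section from the first mutex example rewritten this way:

def run(name, mutex):
    with mutex:                      # acquired on entry, released on exit, even on error
        print('this is process %s 1' % name)
        time.sleep(1)
        print('this is process %s 2' % name)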

Although sharing a file can achieve inter-process communication, this approach has two problems:

1. Low efficiency (the shared data lives in a file on disk).

2. You need to handle the locking yourself.

So we would prefer a solution that offers both:

1. High efficiency (multiple processes share data in memory).

2. Locking handled for us.

This is exactly the message-based IPC mechanism the multiprocessing module offers us: queues and pipes.

Queues and pipes both store their data in memory. A queue is implemented on top of a pipe plus locks, freeing us from the complexity of locking, which makes queues the best choice for inter-process communication.

We should try to avoid sharing data, and use message passing and queues as much as possible. This avoids complex synchronization and locking problems, and as the number of processes grows it usually gives better scalability.

Queue

The main methods

q.put(): insert data into the queue.
q.get(): read one element from the queue and remove it.
q.full(): whether the queue is full; returns True or False.
q.empty(): whether the queue is empty; returns True or False.

Queue Introduction

Processes are isolated from one another. To achieve inter-process communication (IPC), the multiprocessing module supports two forms, queues and pipes, and both use message passing.

Queue([maxsize]): creates a shared process queue. Queue is a multi-process-safe queue and can be used to pass data between multiple processes.

maxsize is the maximum number of entries allowed in the queue; omit it for no size limit.
But be clear about two things:
1. A queue is meant for passing messages, not large data.
2. Queue space is memory, so even without a maxsize the queue is still limited by available memory.

Usage example

Note: when the queue is full (or empty), putting into (or getting from) it again will block the caller until a value is taken out of (or put into) the queue.

from multiprocessing import Queue

q = Queue(3)                # set the queue capacity; optional

print(q.full())             # False
print(q.empty())            # True
q.put('hello')
q.put({'name': 'ysg'})
q.put([1, 2, 3])
print(q.full())             # True
print(q.empty())            # False
print(q.get())              # hello
print(q.get())              # {'name': 'ysg'}
print(q.get())              # [1, 2, 3]
print(q.full())             # False
print(q.empty())            # True
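
To observe the blocking described in the note above without hanging the program, put() and get() also accept block and timeout arguments; when the timeout expires they raise the Full / Empty exceptions from the standard queue module. A small sketch:

import queue                          # multiprocessing.Queue raises queue.Full / queue.Empty
from multiprocessing import Queue

q = Queue(1)
q.put('only item')                    # the queue is now full
try:
    q.put('second item', timeout=1)   # blocks for 1 second, then raises
except queue.Full:
    print('put timed out: the queue is full')

print(q.get())                        # only item
try:
    q.get(timeout=1)                  # the queue is empty: blocks 1 second, then raises
except queue.Empty:
    print('get timed out: the queue is empty')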

Producer-consumer model

Why use the producer-consumer model

A producer is a task that produces data; a consumer is a task that processes that data. In concurrent programming, if the producer works fast while the consumer is slow, the producer has to wait for the consumer to catch up before it can produce more data. Likewise, if the consumer's capacity exceeds the producer's, the consumer has to wait for the producer. The producer-consumer pattern was introduced to solve this problem.

What is the producer-consumer pattern

The producer-consumer pattern resolves the strong coupling between producers and consumers through a container. Producers and consumers never communicate with each other directly; they communicate through a blocking queue. After producing a piece of data, a producer does not wait for a consumer but simply throws it into the blocking queue; a consumer does not ask a producer for data but takes it straight from the blocking queue. The blocking queue acts as a buffer that balances the processing capacity of producers and consumers.

The blocking queue is what decouples the producers from the consumers.

from multiprocessing import Queue, Process
import time


def server(q, s):
    for i in range(10):
        res = '%s %s' % (s, i)
        time.sleep(0.5)
        print('produced %s' % res)
        q.put(res)


def client(q):
    while 1:
        time.sleep(1)
        res = q.get()
        if res is None: break
        print('consumed %s' % res)


if __name__ == '__main__':
    q = Queue()
    ss = 'buns'
    ss1 = 'donut'
    ss2 = 'cake'

    # producers
    s = Process(target=server, args=(q, ss))
    s1 = Process(target=server, args=(q, ss1))
    s2 = Process(target=server, args=(q, ss2))

    # consumers
    c = Process(target=client, args=(q,))
    c1 = Process(target=client, args=(q,))

    s_l = [s, s1, s2]
    c_l = [c, c1]
    for i in s_l:
        i.start()
    for i in c_l:
        i.start()
    for i in s_l:
        i.join()
    q.put(None)    # one end-signal per consumer
    q.put(None)

    print('----- main process -----')

 

Producer-consumer model summary

1. The program has two kinds of roles:

one kind responsible for producing data (producers)
one kind responsible for processing data (consumers)

2. The producer-consumer model is introduced to solve two problems:

balancing the speed difference between producers and consumers
decoupling the program

3. How to implement the producer-consumer model:

Producer <---> queue <---> Consumer

 

JoinableQueue([maxsize])

It is like a Queue object, but the queue allows an item's consumer to notify the item's producer that the item has been processed successfully. The notification process is implemented with a shared semaphore and condition variables.

Parameter Description

maxsize is the maximum number of entries allowed in the queue; omit it for no size limit.


Methods Introduction

A JoinableQueue instance q has the same methods as Queue, plus:
q.task_done(): the consumer uses this method to signal that an item returned by q.get() has been processed. If this method is called more times than there were items removed from the queue, a ValueError is raised.
q.join(): the producer calls this method to block until all items in the queue have been processed. Blocking continues until q.task_done() has been called for every item that was put into the queue.

A producer-consumer model based on JoinableQueue

1. The main process waits for the producers p1, p2, p3 to end.
2. p1, p2, p3 only end after the consumers have taken all of their data out of the queue.
3. So once p1, p2, p3 are over, the consumers have no reason to exist and should die with the main process, which is why the consumers are set as daemons.

from multiprocessing import JoinableQueue, Process
import time


def server(q, s):
    for i in range(3):
        res = '%s %s' % (s, i)
        time.sleep(0.5)
        print('produced %s' % res)
        q.put(res)
    q.join()   # wait until all the data this producer put into the queue has been taken out and processed; only then does the producer end


def client(q):
    while 1:
        time.sleep(1)
        res = q.get()
        print('consumed %s' % res)
        q.task_done()    # signal q.join() that one item has been taken from the queue and processed


if __name__ == '__main__':
    q = JoinableQueue()
    ss = 'buns'
    ss1 = 'donut'
    ss2 = 'cake'

    # producers
    s = Process(target=server, args=(q, ss))
    s1 = Process(target=server, args=(q, ss1))
    s2 = Process(target=server, args=(q, ss2))

    # consumers
    c = Process(target=client, args=(q,))
    c1 = Process(target=client, args=(q,))
    c.daemon = True
    c1.daemon = True

    s_l = [s, s1, s2]
    c_l = [c, c1]
    for i in s_l:
        i.start()
    for i in c_l:
        i.start()
    for i in s_l:
        i.join()

    print('----- main process -----')

 

Source: www.cnblogs.com/ysging/p/12319763.html