1-5 Mutex

One: The mutex lock

Processes do not share data with each other, but they do share the same file system, so they can access the same file or print to the same terminal. That in itself is no problem, but sharing leads to competition, and uncontrolled competition produces garbled results, as follows:

# Runs concurrently, which is efficient, but the processes compete for the same terminal, so the output gets interleaved
from multiprocessing import Process
import os,time
def work():
    print('%s is running' %os.getpid())
    time.sleep(2)
    print('%s is done' %os.getpid())

if __name__ == '__main__':
    for i in range(3):
        p=Process(target=work)
        p.start()

How do we control this? With a lock. A mutex is mutually exclusive. If multiple processes are like multiple people, then a mutex makes those people compete for a single resource, say a bathroom: once one person grabs the bathroom and locks the door, everyone else must wait until that person finishes and releases the lock before the next one can grab it. This is the principle of a mutex: it turns concurrent execution into serial execution, lowering efficiency but keeping the shared data safe and orderly.

# Concurrency becomes serial execution; we sacrifice speed but avoid the race
from multiprocessing import Process,Lock
import os,time
def work(lock):
    lock.acquire() # acquire the lock
    print('%s is running' %os.getpid())
    time.sleep(2)
    print('%s is done' %os.getpid())
    lock.release() # release the lock
if __name__ == '__main__':
    lock=Lock()
    for i in range(3):
        p=Process(target=work,args=(lock,))
        p.start()

Two: Simulating ticket grabbing

Multiple processes can share the same file, so we can treat the file as a database and use multiple processes to simulate several people grabbing tickets at the same time.
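A quick aside before the code: the comment at the top of the snippet insists on double quotes in db.txt. That is because the json module only accepts standard JSON, where keys and strings must be double-quoted. A tiny demonstration:

```python
import json

# Standard JSON uses double quotes around keys and strings
print(json.loads('{"count": 1}'))  # {'count': 1}

# Single quotes are not valid JSON and raise an error
try:
    json.loads("{'count': 1}")
except json.JSONDecodeError:
    print('single quotes: parse failed')
```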

#The file db.txt contains: {"count":1}
#Note: the quotes must be double quotes, otherwise json cannot parse the file
from multiprocessing import Process
import time,json

def search(name):
    dic=json.load(open('db.txt'))
    time.sleep(1)
    print('\033[43m%s 查到剩余票数%s\033[0m' %(name,dic['count']))

def get(name):
    dic=json.load(open('db.txt'))
    time.sleep(1) # simulate network latency when reading
    if dic['count'] >0:
        dic['count']-=1
        time.sleep(1) # simulate network latency when writing
        json.dump(dic,open('db.txt','w'))
        print('\033[46m%s 购票成功\033[0m' %name)

def task(name):
    search(name)
    get(name)

if __name__ == '__main__':
    for i in range(10): # simulate 10 clients grabbing tickets concurrently
        name='<路人%s>' %i
        p=Process(target=task,args=(name,))
        p.start()

Running concurrently is efficient, but the processes race to write the same file and corrupt the data: there is only one ticket, yet all 10 people buy it successfully.

<路人0> 查到剩余票数1
<路人1> 查到剩余票数1
<路人2> 查到剩余票数1
<路人3> 查到剩余票数1
<路人4> 查到剩余票数1
<路人5> 查到剩余票数1
<路人6> 查到剩余票数1
<路人7> 查到剩余票数1
<路人8> 查到剩余票数1
<路人9> 查到剩余票数1
<路人0> 购票成功
<路人4> 购票成功
<路人1> 购票成功
<路人5> 购票成功
<路人3> 购票成功
<路人7> 购票成功
<路人2> 购票成功
<路人6> 购票成功
<路人8> 购票成功
<路人9> 购票成功

With lock handling, the booking step changes from concurrent to serial execution, sacrificing efficiency but ensuring data safety.

#Reset the contents of db.txt to: {"count":1}
from multiprocessing import Process,Lock
import time,json

def search(name):
    dic=json.load(open('db.txt'))
    time.sleep(1)
    print('\033[43m%s 查到剩余票数%s\033[0m' %(name,dic['count']))

def get(name):
    dic=json.load(open('db.txt'))
    time.sleep(1) # simulate network latency when reading
    if dic['count'] >0:
        dic['count']-=1
        time.sleep(1) # simulate network latency when writing
        json.dump(dic,open('db.txt','w'))
        print('\033[46m%s 购票成功\033[0m' %name)

def task(name,lock):
    search(name)
    with lock: # equivalent to lock.acquire(); lock.release() runs automatically when the block exits
        get(name)

if __name__ == '__main__':
    lock=Lock()
    for i in range(10): # simulate 10 clients grabbing tickets concurrently
        name='<路人%s>' %i
        p=Process(target=task,args=(name,lock))
        p.start()

The result:

<路人0> 查到剩余票数1
<路人1> 查到剩余票数1
<路人2> 查到剩余票数1
<路人3> 查到剩余票数1
<路人4> 查到剩余票数1
<路人5> 查到剩余票数1
<路人6> 查到剩余票数1
<路人7> 查到剩余票数1
<路人8> 查到剩余票数1
<路人9> 查到剩余票数1
<路人0> 购票成功

Three: Mutex vs. join

Using join can also turn concurrency into serial execution, and the mutex achieves its guarantee the same way, by serializing. So why not just use join directly instead of a mutex? Let's try it:

#Reset the contents of db.txt to: {"count":1}
from multiprocessing import Process,Lock
import time,json

def search(name):
    dic=json.load(open('db.txt'))
    print('\033[43m%s 查到剩余票数%s\033[0m' %(name,dic['count']))

def get(name):
    dic=json.load(open('db.txt'))
    time.sleep(1) # simulate network latency when reading
    if dic['count'] >0:
        dic['count']-=1
        time.sleep(1) # simulate network latency when writing
        json.dump(dic,open('db.txt','w'))
        print('\033[46m%s 购票成功\033[0m' %name)

def task(name):
    search(name)
    get(name)

if __name__ == '__main__':
    for i in range(10):
        name='<路人%s>' %i
        p=Process(target=task,args=(name,))
        p.start()
        p.join()

The result:

<路人0> 查到剩余票数1
<路人0> 购票成功
<路人1> 查到剩余票数0
<路人2> 查到剩余票数0
<路人3> 查到剩余票数0
<路人4> 查到剩余票数0
<路人5> 查到剩余票数0
<路人6> 查到剩余票数0
<路人7> 查到剩余票数0
<路人8> 查到剩余票数0
<路人9> 查到剩余票数0

We find that join does turn the concurrent run into a serial one and does guarantee data safety, but now even the query has become serial: only one person at a time can check the remaining tickets. Clearly the queries should run concurrently, because reading data does not have to be exact. This is where join and the mutex differ: join serializes the entire task, whereas the advantage of a mutex is that it can serialize just one piece of code within a task, for example only the call to get inside task:

def task(name):
    search(name) # runs concurrently

    lock.acquire()
    get(name)    # runs serially
    lock.release()
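The fragment above leaves out how lock reaches task. A complete runnable version might look like the sketch below. This is my own assembly, not from the original text: the delays are shortened to 0.1 s so the demo finishes quickly, the client names use plain ASCII, and files are opened with `with` so they are closed properly.

```python
import json
import time
from multiprocessing import Process, Lock

DB = 'db.txt'  # same file-as-database idea as the earlier examples

def search(name):
    with open(DB) as f:
        dic = json.load(f)
    print('%s sees %s ticket(s) left' % (name, dic['count']))

def get(name):
    with open(DB) as f:
        dic = json.load(f)
    time.sleep(0.1)  # simulate network latency when reading
    if dic['count'] > 0:
        dic['count'] -= 1
        time.sleep(0.1)  # simulate network latency when writing
        with open(DB, 'w') as f:
            json.dump(dic, f)
        print('%s bought a ticket' % name)

def task(name, lock):
    search(name)   # the query runs concurrently
    with lock:     # only the buying step is serialized
        get(name)

if __name__ == '__main__':
    with open(DB, 'w') as f:
        json.dump({'count': 1}, f)  # reset the "database" to one ticket
    lock = Lock()
    procs = [Process(target=task, args=('<client %s>' % i, lock)) for i in range(5)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    with open(DB) as f:
        print('tickets left:', json.load(f)['count'])
```

Only one client actually buys the single ticket, yet several clients may still report seeing 1 ticket left, because the queries all run before anyone has written.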

Four: Summary

Locking ensures that when multiple processes modify the same piece of data, only one task can modify it at a time, i.e. modifications are serialized. Yes, this is slower, but the speed is sacrificed to guarantee data safety.

Although sharing data through files can implement inter-process communication, it has two problems:

1. Low efficiency (the shared data is based on files, and files live on disk)

2. You have to handle the locking yourself

So we would rather find a solution that gives us both:

1. High efficiency (multiple processes sharing data in memory)

2. Locking handled for us

This is exactly the message-based IPC mechanism the multiprocessing module offers: queues and pipes.

Both queues and pipes store their data in memory, and a queue is implemented as a pipe plus a lock, which frees us from dealing with complex locking problems. This makes queues the best choice for inter-process communication.

We should try to avoid shared state and use message passing and queues wherever possible. This sidesteps complex synchronization and locking problems, and it usually scales better as the number of processes grows.
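As a closing sketch of that queue-based approach (the worker function and the squaring are only illustration, not from the original text): multiprocessing.Queue does its locking internally, so many processes can put results at once without any explicit Lock.

```python
from multiprocessing import Process, Queue

def worker(q, i):
    q.put(i * i)  # safe to call from many processes at once

if __name__ == '__main__':
    q = Queue()
    procs = [Process(target=worker, args=(q, i)) for i in range(4)]
    for p in procs:
        p.start()
    results = [q.get() for _ in range(4)]  # blocks until all 4 items arrive
    for p in procs:
        p.join()
    print(sorted(results))  # [0, 1, 4, 9]
```

Note that the parent drains the queue before joining the children; joining a child that still has buffered queue data is a known way to deadlock.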


Origin www.cnblogs.com/shibojie/p/11664754.html