Lock
To use a lock, import the Lock class from the threading module.
Use locks:
- When multiple threads modify shared data at (nearly) the same time, synchronization control is needed.
- Thread synchronization ensures safe access when multiple threads compete for a resource; the simplest synchronization mechanism is the mutex (mutual-exclusion lock).
- A mutex introduces a state for the resource: locked / unlocked.
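A minimal sketch of the race the list above describes: several threads increment a shared counter, and the lock makes each read-modify-write atomic (the counter, thread count, and iteration count here are illustrative choices, not from the original).

```python
import threading

counter = 0
lock = threading.Lock()

def increment():
    global counter
    for _ in range(100000):
        # without the lock, the read-modify-write of counter can
        # interleave between threads and lose updates
        with lock:
            counter += 1

threads = [threading.Thread(target=increment) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; can be less without it
```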
Lock syntax
Create a lock, acquire it (lock), then release it (unlock):
from threading import Lock
# create a lock
mutex = Lock()
# acquire the lock (lock)
mutex.acquire()
# release the lock (unlock)
mutex.release()
The acquire() method accepts a blocking parameter:
If blocking is True, the current thread blocks until it obtains the lock (True is the default).
If blocking is False, the current thread does not block: acquire() returns immediately, whether or not the lock was obtained.
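A short sketch of the blocking=False behavior described above: acquire() returns a boolean indicating whether the lock was obtained, instead of waiting.

```python
from threading import Lock

mutex = Lock()
mutex.acquire()          # lock is now held

# blocking=False: returns False immediately instead of waiting
got_it = mutex.acquire(blocking=False)
print(got_it)            # False: the lock is already held

mutex.release()
got_it = mutex.acquire(blocking=False)
print(got_it)            # True: the lock was free
mutex.release()
```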
Locking and unlocking process (assuming multi-threaded scheduling):
A lock usually guards a shared resource that multiple threads use at the same time. Only one thread at a time can hold the lock; the other threads block. When the currently scheduled thread releases the lock, a blocked thread can be scheduled.
Lock advantages:
Ensures that a critical section of code is executed completely, from beginning to end, by only one thread at a time.
Lock disadvantages:
Concurrent execution is prevented: code that holds a lock effectively runs in single-threaded mode, so efficiency drops sharply. Also, when the code uses multiple locks and multiple threads each hold some of them, deadlock can easily occur.
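One way to reduce the risk that a lock is never released (for example, when the critical section raises an exception) is the context-manager form, which Lock supports directly; this is a small sketch, with the shared list and helper function invented for illustration.

```python
import threading

mutex = threading.Lock()
shared = []

def append_safely(item):
    # the with-block acquires the lock on entry and releases it on
    # exit, even if the body raises an exception
    with mutex:
        shared.append(item)

append_safely(1)
print(shared)  # [1]
```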
Deadlock phenomenon (example):
# Deadlock: neither thread will release the lock the other needs, and each
# can only proceed once the other releases its lock
import threading
from time import sleep

# thread 1
class MyThread1(threading.Thread):
    def __init__(self):
        super().__init__()

    def run(self):
        # thread 1 acquires lock A
        if mutexA.acquire():
            print(self.name + "----- do1 --- up -----")
            sleep(1)
            # by now thread 2 holds lock B, so wait for thread 2 to release B
            if mutexB.acquire():
                print(self.name + "----- do1 --- down -----")
                mutexB.release()
            mutexA.release()

# thread 2
class MyThread2(threading.Thread):
    def __init__(self):
        super().__init__()

    def run(self):
        # thread 2 acquires lock B
        if mutexB.acquire():
            print(self.name + "----- do2 --- up -----")
            sleep(1)
            # by now thread 1 holds lock A, so wait for thread 1 to release A
            if mutexA.acquire():
                print(self.name + "----- do2 --- down -----")
                mutexA.release()
            mutexB.release()

mutexA = threading.Lock()
mutexB = threading.Lock()

if __name__ == '__main__':
    # start thread 1 and thread 2 at the same time
    t1 = MyThread1()
    t2 = MyThread2()
    t1.start()
    t2.start()
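The simplest practical fix for the deadlock example above is to make every thread acquire the locks in the same global order (here always A before B), so a circular wait cannot form. This is a sketch of that ordering rule (the worker function and sleep duration are illustrative):

```python
import threading
from time import sleep

mutexA = threading.Lock()
mutexB = threading.Lock()

def worker(name):
    # both threads take A first, then B, so neither can end up
    # holding one lock while waiting for the other
    with mutexA:
        sleep(0.1)
        with mutexB:
            print(name, "finished")

t1 = threading.Thread(target=worker, args=("thread-1",))
t2 = threading.Thread(target=worker, args=("thread-2",))
t1.start()
t2.start()
t1.join()
t2.join()
```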
A classic deadlock-avoidance method is the banker's algorithm.
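A toy sketch of the banker's-algorithm safety check mentioned above (the matrices below are invented for illustration): the system is in a safe state if some ordering lets every process finish using only the currently available resources.

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: return True if some execution
    order lets every process finish with the available resources."""
    work = available[:]
    finished = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finished):
            # process i can finish if its remaining need fits in work
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # it finishes and returns its allocated resources
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# invented example: 3 processes, 2 resource types
available = [1, 1]
allocation = [[0, 1], [2, 0], [3, 0]]
need = [[2, 1], [1, 2], [1, 0]]
print(is_safe(available, allocation, need))  # True: P2, P0, P1 can finish
```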
Choosing between multi-process and multi-threaded
Whether to use multitasking, and which kind, depends on the type of task.
For CPU-bound tasks that need a lot of CPU time for computation, where the efficiency of the code is critical, multithreading is generally not used: frequent task switching slows the computation down.
For IO-bound tasks, such as disk reads/writes and network reads/writes, most of the time is spent waiting for IO operations to complete, so this type of task can be run with multiple threads or multiple processes.
Implementing the same code single-threaded, multi-threaded, and multi-process (and timing each)
# single-threaded vs. multi-threaded vs. multi-process
import threading
from time import time, ctime
from multiprocessing import Process

# simple summation
def fib(x):
    res = 0
    for i in range(100000000):
        res += i * x
    return res

# factorial
def fac(x):
    if x < 2:
        return 1
    return x * fac(x - 1)

# simple summation (note: shadows the built-in sum)
def sum(x):
    res = 0
    for i in range(50000000):
        res += i * x
    return res

# function list
funcs = [fib, fac, sum]
n = 100
class MyThread(threading.Thread):
    def __init__(self, func, args, name=""):
        super().__init__()
        self.name = name
        self.func = func
        self.args = args
        self.res = 0

    def getResult(self):
        return self.res

    def run(self):
        print("starting ", self.name, " at: ", ctime())
        self.res = self.func(self.args)
        print(self.name, "finished at: ", ctime())
def main():
    nfuncs = range(len(funcs))

    print("single-threaded".center(30, "*"))
    start = time()
    for i in nfuncs:
        print("start {} at: {}".format(funcs[i].__name__, ctime()))
        start_task = time()
        print(funcs[i](n))
        end_task = time()
        print("task time:", end_task - start_task)
        print("{} finished at: {}".format(funcs[i].__name__, ctime()))
    end = time()
    print("single-threaded runtime:", end - start)
    print("single-threaded ends".center(30, "*"))
    print()

    print("multi-threaded".center(30, "*"))
    start = time()
    threads = []
    for i in nfuncs:
        # bind one function to one thread
        t = MyThread(funcs[i], n, funcs[i].__name__)
        threads.append(t)
    for i in nfuncs:
        # start the threads together
        threads[i].start()
    for i in nfuncs:
        threads[i].join()
        print(threads[i].getResult())
    end = time()
    print("multi-thread runtime:", end - start)
    print("multi-threaded ends".center(30, "*"))
    print()

    print("multi-process".center(30, "*"))
    start = time()
    process_list = []
    for i in nfuncs:
        # bind one function to one process
        p = Process(target=funcs[i], args=(n,))
        process_list.append(p)
    for i in nfuncs:
        # start the processes together
        process_list[i].start()
    for i in nfuncs:
        process_list[i].join()
    end = time()
    print("multi-process run time:", end - start)
    print("multi-process ends".center(30, "*"))
if __name__ == "__main__":
main()