1. Using the Queue class to synchronize threads
Python's Queue class already wraps all of the locking it needs: every operation on the underlying data is synchronized internally, so you do not have to worry about concurrent access corrupting the shared data. This means threads can be coordinated through a Queue without any explicit mutex of your own (note that "synchronized" here means access to the data is synchronized; the order in which the threads themselves run is not). Let's look at how Queue is defined:
class Queue:
    '''Create a queue object with a given maximum size.

    If maxsize is <= 0, the queue size is infinite.
    '''

    def __init__(self, maxsize=0):
        self.maxsize = maxsize
        self._init(maxsize)

        # mutex must be held whenever the queue is mutating. All methods
        # that acquire mutex must release it before returning. mutex
        # is shared between the three conditions, so acquiring and
        # releasing the conditions also acquires and releases mutex.
        self.mutex = threading.Lock()

        # Notify not_empty whenever an item is added to the queue; a
        # thread waiting to get is notified then.
        self.not_empty = threading.Condition(self.mutex)

        # Notify not_full whenever an item is removed from the queue;
        # a thread waiting to put is notified then.
        self.not_full = threading.Condition(self.mutex)

        # Notify all_tasks_done whenever the number of unfinished tasks
        # drops to zero; thread waiting to join() is notified to resume
        self.all_tasks_done = threading.Condition(self.mutex)
        self.unfinished_tasks = 0

    def task_done(self):
        '''Indicate that a formerly enqueued task is complete.

        Used by Queue consumer threads. For each get() used to fetch a task,
        a subsequent call to task_done() tells the queue that the processing
        on the task is complete.

        If a join() is currently blocking, it will resume when all items
        have been processed (meaning that a task_done() call was received
        for every item that had been put() into the queue).

        Raises a ValueError if called more times than there were items
        placed in the queue.
        '''
        with self.all_tasks_done:
            unfinished = self.unfinished_tasks - 1
            if unfinished <= 0:
                if unfinished < 0:
                    raise ValueError('task_done() called too many times')
                self.all_tasks_done.notify_all()
            self.unfinished_tasks = unfinished

    def join(self):
        '''Blocks until all items in the Queue have been gotten and processed.

        The count of unfinished tasks goes up whenever an item is added to the
        queue. The count goes down whenever a consumer thread calls task_done()
        to indicate the item was retrieved and all work on it is complete.
        When the count of unfinished tasks drops to zero, join() unblocks.
        '''
        with self.all_tasks_done:
            while self.unfinished_tasks:
                self.all_tasks_done.wait()

    def qsize(self):
        '''Return the approximate size of the queue (not reliable!).'''
        with self.mutex:
            return self._qsize()

    def empty(self):
        '''Return True if the queue is empty, False otherwise (not reliable!).

        This method is likely to be removed at some point. Use qsize() == 0
        as a direct substitute, but be aware that either approach risks a race
        condition where a queue can grow before the result of empty() or
        qsize() can be used.

        To create code that needs to wait for all queued tasks to be
        completed, the preferred technique is to use the join() method.
        '''
        with self.mutex:
            return not self._qsize()

    def full(self):
        '''Return True if the queue is full, False otherwise (not reliable!).

        This method is likely to be removed at some point. Use qsize() >= n
        as a direct substitute, but be aware that either approach risks a race
        condition where a queue can shrink before the result of full() or
        qsize() can be used.
        '''
        with self.mutex:
            return 0 < self.maxsize <= self._qsize()

    def put(self, item, block=True, timeout=None):
        '''Put an item into the queue.

        If optional args 'block' is true and 'timeout' is None (the default),
        block if necessary until a free slot is available. If 'timeout' is
        a non-negative number, it blocks at most 'timeout' seconds and raises
        the Full exception if no free slot was available within that time.
        Otherwise ('block' is false), put an item on the queue if a free slot
        is immediately available, else raise the Full exception ('timeout'
        is ignored in that case).
        '''
        with self.not_full:
            if self.maxsize > 0:
                if not block:
                    if self._qsize() >= self.maxsize:
                        raise Full
                elif timeout is None:
                    while self._qsize() >= self.maxsize:
                        self.not_full.wait()
                elif timeout < 0:
                    raise ValueError("'timeout' must be a non-negative number")
                else:
                    endtime = time() + timeout
                    while self._qsize() >= self.maxsize:
                        remaining = endtime - time()
                        if remaining <= 0.0:
                            raise Full
                        self.not_full.wait(remaining)
            self._put(item)
            self.unfinished_tasks += 1
            self.not_empty.notify()

    def get(self, block=True, timeout=None):
        '''Remove and return an item from the queue.

        If optional args 'block' is true and 'timeout' is None (the default),
        block if necessary until an item is available. If 'timeout' is
        a non-negative number, it blocks at most 'timeout' seconds and raises
        the Empty exception if no item was available within that time.
        Otherwise ('block' is false), return an item if one is immediately
        available, else raise the Empty exception ('timeout' is ignored
        in that case).
        '''
        with self.not_empty:
            if not block:
                if not self._qsize():
                    raise Empty
            elif timeout is None:
                while not self._qsize():
                    self.not_empty.wait()
            elif timeout < 0:
                raise ValueError("'timeout' must be a non-negative number")
            else:
                endtime = time() + timeout
                while not self._qsize():
                    remaining = endtime - time()
                    if remaining <= 0.0:
                        raise Empty
                    self.not_empty.wait(remaining)
            item = self._get()
            self.not_full.notify()
            return item

    def put_nowait(self, item):
        '''Put an item into the queue without blocking.

        Only enqueue the item if a free slot is immediately available.
        Otherwise raise the Full exception.
        '''
        return self.put(item, block=False)

    def get_nowait(self):
        '''Remove and return an item from the queue without blocking.

        Only get an item if one is immediately available. Otherwise
        raise the Empty exception.
        '''
        return self.get(block=False)

    # Override these methods to implement other queue organizations
    # (e.g. stack or priority queue).
    # These will only be called with appropriate locks held

    # Initialize the queue representation
    def _init(self, maxsize):
        self.queue = deque()

    def _qsize(self):
        return len(self.queue)

    # Put a new item in the queue
    def _put(self, item):
        self.queue.append(item)

    # Get an item from the queue
    def _get(self):
        return self.queue.popleft()
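Before the multi-threaded test, here is a minimal single-threaded sketch (not from the original article) of the put()/get() semantics described in the docstrings above: blocking calls, timeouts, and the Full/Empty exceptions.

import queue

q = queue.Queue(maxsize=2)          # bounded queue: at most 2 items

q.put('a')                          # succeeds immediately
q.put('b')                          # succeeds immediately; queue is now full

try:
    q.put('c', timeout=0.1)         # blocks for up to 0.1 s, then raises Full
except queue.Full:
    print('queue is full')

print(q.get())                      # -> 'a'
print(q.get_nowait())               # -> 'b' (non-blocking form of get)

try:
    q.get(block=False)              # queue is empty, so Empty is raised at once
except queue.Empty:
    print('queue is empty')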
Now let's test how Queue behaves when used from several threads at once.

import queue
import threading

QUEUE_VALUE = 0
QUEUE_URL_LIST = queue.Queue(30)

def Doing_Single_Jobs(name):
    QUEUE_VALUE = QUEUE_URL_LIST.get()   # blocking get; each item is consumed exactly once
    # print('thread name is : ', name)
    print('doing jobs num : ', QUEUE_VALUE)
    QUEUE_URL_LIST.task_done()           # tell the queue this item has been fully processed

# define thread class
class Queue_Thread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):  # worker loop
        while True:
            # qsize() can change between this check and the get() inside
            # Doing_Single_Jobs, so a worker may end up blocked in get();
            # the daemon flag below is what still lets the program exit.
            if QUEUE_URL_LIST.qsize() > 0:
                Doing_Single_Jobs(self.name)
            else:
                break

INT_MAX_JOBS_VALUE = 30
INT_MAX_THREAD_NUM = 5

for iIndexJobs in range(INT_MAX_JOBS_VALUE):
    QUEUE_URL_LIST.put(iIndexJobs)       # fill the queue with 30 job numbers

for iThreadIndex in range(INT_MAX_THREAD_NUM):
    thread = Queue_Thread()
    thread.daemon = True                 # daemon threads: they do not keep the process alive
    thread.start()

QUEUE_URL_LIST.join()                    # block until task_done() has been called for every item

The output is:
doing jobs num : 0
doing jobs num : 1
doing jobs num : 2
doing jobs num : 3
doing jobs num : 4
doing jobs num : 5
doing jobs num : 6
doing jobs num : 7
doing jobs num : 8
doing jobs num : 9
doing jobs num : 10
doing jobs num : 11
doing jobs num : 12
doing jobs num : 13
doing jobs num : 14
doing jobs num : 15
doing jobs num : 16
doing jobs num : 17
doing jobs num : 18
doing jobs num : 19
doing jobs num : 20
doing jobs num : 21
doing jobs num : 22
doing jobs num : 23
doing jobs num : 24
doing jobs num : 25
doing jobs num : 26
doing jobs num : 27
doing jobs num : 28
doing jobs num : 29
Note that the threads still compete with one another for items; which thread handles which job depends entirely on scheduling. To see this, restore the commented-out line in Doing_Single_Jobs:
# print('thread name is : ', name)
(A variant of the worker loop that avoids the qsize()/get() race entirely is sketched after the output below.) The output now interleaves thread names with job numbers:
thread name is : Thread-1
doing jobs num : 0
thread name is : Thread-1
doing jobs num : 1
thread name is : Thread-1
doing jobs num : 2
thread name is : Thread-1
doing jobs num : 3
thread name is : Thread-1
doing jobs num : 4
thread name is : Thread-1
doing jobs num : 5
thread name is : Thread-1
doing jobs num : 6
thread name is : Thread-1
doing jobs num : 7
thread name is : Thread-1
thread name is : Thread-2
doing jobs num : 8
doing jobs num : 9
thread name is : Thread-3
thread name is : Thread-1
thread name is : Thread-2
doing jobs num : 10
doing jobs num : 11
doing jobs num : 12
thread name is : Thread-3
thread name is : Thread-4
thread name is : Thread-1
thread name is : Thread-2
doing jobs num : 13
doing jobs num : 14
doing jobs num : 15
doing jobs num : 16
thread name is : Thread-3
thread name is : Thread-4
thread name is : Thread-1
thread name is : Thread-5
thread name is : Thread-2
doing jobs num : 17
doing jobs num : 18
doing jobs num : 19
doing jobs num : 20
doing jobs num : 21
thread name is : Thread-3
thread name is : Thread-4
thread name is : Thread-1
thread name is : Thread-5
thread name is : Thread-2
doing jobs num : 22
doing jobs num : 23
doing jobs num : 24
doing jobs num : 25
doing jobs num : 26
thread name is : Thread-3
thread name is : Thread-4
thread name is : Thread-1
doing jobs num : 27
doing jobs num : 28
doing jobs num : 29
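As noted above, checking qsize() and then calling get() is racy: another thread can drain the queue between the two calls, leaving a worker blocked in get(). The example only terminates cleanly because the workers are daemon threads. A common alternative is to block directly in get() and shut the workers down with a sentinel value. The following is a sketch, not part of the original example; the names (worker, SENTINEL, job_queue) are illustrative.

import queue
import threading

NUM_WORKERS = 5
SENTINEL = None                       # marks "no more work" for a worker
job_queue = queue.Queue()

def worker():
    while True:
        item = job_queue.get()        # blocks until an item is available; no qsize() check needed
        if item is SENTINEL:
            job_queue.task_done()
            break                     # exit the loop once the sentinel is seen
        print('doing jobs num : ', item)
        job_queue.task_done()

for i in range(30):
    job_queue.put(i)
for _ in range(NUM_WORKERS):
    job_queue.put(SENTINEL)           # one sentinel per worker

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

job_queue.join()                      # returns once every item (and sentinel) got a task_done()
for t in threads:
    t.join()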
2. Using _dummy_thread.allocate_lock() for mutual exclusion between threads
In Python, a mutex lock gives threads mutually exclusive access to shared data. A word of caution about this particular module: _dummy_thread is the no-op fallback that the standard library uses on platforms without thread support (it was deprecated and later removed from Python 3), and its lock never actually blocks, so by itself it does not provide real mutual exclusion. The real lock is threading.Lock(), which is backed by _thread.allocate_lock(); a sketch using it appears after the output below. With that caveat, here is the definition of allocate_lock():
def allocate_lock():
    """Dummy implementation of _thread.allocate_lock()."""
    return LockType()
Note that the lock itself is implemented by the LockType class:
class LockType(object):
    """Class implementing dummy implementation of _thread.LockType.

    Compatibility is maintained by maintaining self.locked_status
    which is a boolean that stores the state of the lock. Pickling of
    the lock, though, should not be done since if the _thread module is
    then used with an unpickled ``lock()`` from here problems could
    occur from this class not having atomic methods.
    """

    def __init__(self):
        self.locked_status = False

    def acquire(self, waitflag=None, timeout=-1):
        """Dummy implementation of acquire().

        For blocking calls, self.locked_status is automatically set to
        True and returned appropriately based on value of
        ``waitflag``. If it is non-blocking, then the value is
        actually checked and not set if it is already acquired. This
        is all done so that threading.Condition's assert statements
        aren't triggered and throw a little fit.
        """
        if waitflag is None or waitflag:
            self.locked_status = True
            return True
        else:
            if not self.locked_status:
                self.locked_status = True
                return True
            else:
                if timeout > 0:
                    import time
                    time.sleep(timeout)
                return False

    __enter__ = acquire

    def __exit__(self, typ, val, tb):
        self.release()

    def release(self):
        """Release the dummy lock."""
        # XXX Perhaps shouldn't actually bother to test? Could lead
        # to problems for complex, threaded code.
        if not self.locked_status:
            raise error
        self.locked_status = False
        return True

    def locked(self):
        return self.locked_status
In short, the lock's interface consists of acquire() and release(). The example below uses a single lock around a shared job counter:
import _dummy_thread   # dummy fallback for _thread; deprecated and later removed from Python 3
import threading

def Doing_Single_Jobs(name, num):
    print('thread name is : ', name)
    print('doing jobs num : ', num)

# define thread class
class MZ0_Thread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):  # run proc
        global INT_MAX_JOBS_VALUE
        while True:
            MutexLock.acquire()
            if INT_MAX_JOBS_VALUE > 0:
                Doing_Single_Jobs(self.name, INT_MAX_JOBS_VALUE)
                INT_MAX_JOBS_VALUE = INT_MAX_JOBS_VALUE - 1
            else:
                MutexLock.release()    # release before leaving, or a real lock would stay held forever
                break
            MutexLock.release()

INT_MAX_JOBS_VALUE = 30
INT_MAX_THREAD_NUM = 5
MutexLock = _dummy_thread.allocate_lock()   # note: this dummy lock never actually blocks

if __name__ == '__main__':
    print('Main Thread Run :', __name__)
    for iThreadIndex in range(INT_MAX_THREAD_NUM):
        thread = MZ0_Thread()
        thread.daemon = False               # non-daemon: the process waits for the workers
        thread.start()
    print('Main Thread Exit :', __name__)

The output is:
Main Thread Run : __main__
thread name is : Thread-1
doing jobs num : 30
thread name is : Thread-1
doing jobs num : 29
thread name is : Thread-1
doing jobs num : 28
thread name is : Thread-1
doing jobs num : 27
thread name is : Thread-1
doing jobs num : 26
thread name is : Thread-1
doing jobs num : 25
thread name is : Thread-1
doing jobs num : 24
thread name is : Thread-1
doing jobs num : 23
thread name is : Thread-1
doing jobs num : 22
thread name is : Thread-1
doing jobs num : 21
thread name is : Thread-1
doing jobs num : 20
thread name is : Thread-1
doing jobs num : 19
thread name is : Thread-1
doing jobs num : 18
thread name is : Thread-1
doing jobs num : 17
thread name is : Thread-1
doing jobs num : 16
thread name is : Thread-1
doing jobs num : 15
thread name is : Thread-1
doing jobs num : 14
thread name is : Thread-1
doing jobs num : 13
thread name is : Thread-1
doing jobs num : 12
thread name is : Thread-1
doing jobs num : 11
thread name is : Thread-1
doing jobs num : 10
thread name is : Thread-1
doing jobs num : 9
thread name is : Thread-1
doing jobs num : 8
thread name is : Thread-1
doing jobs num : 7
thread name is : Thread-1
doing jobs num : 6
thread name is : Thread-1
doing jobs num : 5
thread name is : Thread-1
doing jobs num : 4
thread name is : Thread-1
doing jobs num : 3
thread name is : Thread-1
doing jobs num : 2
thread name is : Thread-1
doing jobs num : 1
Main Thread Exit : __main__
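Because _dummy_thread's lock never blocks, the mutual exclusion in the example above really comes from the GIL and from thread scheduling rather than from the lock itself. For a real mutex, the same pattern can be written with threading.Lock(). The following is a minimal sketch; the names (jobs_remaining, mutex, worker) are illustrative and not from the original code.

import threading

jobs_remaining = 30
mutex = threading.Lock()

def worker():
    global jobs_remaining
    while True:
        with mutex:                        # acquire() on entry, release() on exit, even on break or exception
            if jobs_remaining <= 0:
                break
            print(threading.current_thread().name, 'doing jobs num :', jobs_remaining)
            jobs_remaining -= 1

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Using the lock as a context manager (the with statement) avoids the release-before-break pitfall shown in the example above.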
Similarly, a semaphore can give threads mutually exclusive access to shared data: a Semaphore with an initial value of 1 behaves like a mutex, while a larger initial value lets up to N threads hold it at once (that use is sketched after the output below). Here is the definition of Semaphore:
class Semaphore:
    """This class implements semaphore objects.

    Semaphores manage a counter representing the number of release() calls minus
    the number of acquire() calls, plus an initial value. The acquire() method
    blocks if necessary until it can return without making the counter
    negative. If not given, value defaults to 1.
    """

    # After Tim Peters' semaphore class, but not quite the same (no maximum)

    def __init__(self, value=1):
        if value < 0:
            raise ValueError("semaphore initial value must be >= 0")
        self._cond = Condition(Lock())
        self._value = value

    def acquire(self, blocking=True, timeout=None):
        """Acquire a semaphore, decrementing the internal counter by one.

        When invoked without arguments: if the internal counter is larger than
        zero on entry, decrement it by one and return immediately. If it is zero
        on entry, block, waiting until some other thread has called release() to
        make it larger than zero. This is done with proper interlocking so that
        if multiple acquire() calls are blocked, release() will wake exactly one
        of them up. The implementation may pick one at random, so the order in
        which blocked threads are awakened should not be relied on. There is no
        return value in this case.

        When invoked with blocking set to true, do the same thing as when called
        without arguments, and return true.

        When invoked with blocking set to false, do not block. If a call without
        an argument would block, return false immediately; otherwise, do the
        same thing as when called without arguments, and return true.

        When invoked with a timeout other than None, it will block for at
        most timeout seconds. If acquire does not complete successfully in
        that interval, return false. Return true otherwise.
        """
        if not blocking and timeout is not None:
            raise ValueError("can't specify timeout for non-blocking acquire")
        rc = False
        endtime = None
        with self._cond:
            while self._value == 0:
                if not blocking:
                    break
                if timeout is not None:
                    if endtime is None:
                        endtime = _time() + timeout
                    else:
                        timeout = endtime - _time()
                        if timeout <= 0:
                            break
                self._cond.wait(timeout)
            else:
                self._value -= 1
                rc = True
        return rc

    __enter__ = acquire

    def release(self):
        """Release a semaphore, incrementing the internal counter by one.

        When the counter is zero on entry and another thread is waiting for it
        to become larger than zero again, wake up that thread.
        """
        with self._cond:
            self._value += 1
            self._cond.notify()

    def __exit__(self, t, v, tb):
        self.release()
Again, the two methods of interest are acquire() and release(). Example code:
import threading

def Doing_Single_Jobs(name, num):
    print('thread name is : ', name)
    print('doing jobs num : ', num)

# define thread class
class MZ0_Thread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):  # run proc
        global INT_MAX_JOBS_VALUE
        while INT_MAX_JOBS_VALUE > 0:
            THREAD_SEMAPHORE.acquire()
            if INT_MAX_JOBS_VALUE > 0:
                Doing_Single_Jobs(self.name, INT_MAX_JOBS_VALUE)
                INT_MAX_JOBS_VALUE = INT_MAX_JOBS_VALUE - 1
            else:
                THREAD_SEMAPHORE.release()   # release before leaving, otherwise the other threads deadlock
                break
            THREAD_SEMAPHORE.release()

INT_MAX_JOBS_VALUE = 30
INT_MAX_THREAD_NUM = 5
THREAD_SEMAPHORE = threading.Semaphore(1)    # initial value 1: a binary semaphore acting as a mutex

if __name__ == '__main__':
    print('Main Thread Run :', __name__)
    for iThreadIndex in range(INT_MAX_THREAD_NUM):
        thread = MZ0_Thread()
        thread.daemon = False                # non-daemon: the process waits for the workers
        thread.start()
    print('Main Thread Exit :', __name__)

The output is:
Main Thread Run : __main__
thread name is : Thread-1
doing jobs num : 30
thread name is : Thread-1
doing jobs num : 29
thread name is : Thread-1
doing jobs num : 28
thread name is : Thread-1
doing jobs num : 27
thread name is : Thread-1
doing jobs num : 26
thread name is : Thread-1
doing jobs num : 25
thread name is : Thread-1
doing jobs num : 24
thread name is : Thread-1
doing jobs num : 23
thread name is : Thread-1
doing jobs num : 22
thread name is : Thread-1
doing jobs num : 21
thread name is : Thread-1
doing jobs num : 20
thread name is : Thread-1
doing jobs num : 19
thread name is : Thread-1
doing jobs num : 18
thread name is : Thread-1
doing jobs num : 17
thread name is : Thread-1
doing jobs num : 16
thread name is : Thread-1
doing jobs num : 15
thread name is : Thread-1
doing jobs num : 14
thread name is : Thread-1
doing jobs num : 13
thread name is : Thread-1
doing jobs num : 12
thread name is : Thread-1
doing jobs num : 11
thread name is : Thread-1
doing jobs num : 10
thread name is : Thread-1
doing jobs num : 9
thread name is : Thread-1
doing jobs num : 8
thread name is : Thread-1
doing jobs num : 7
thread name is : Thread-1
doing jobs num : 6
thread name is : Thread-1
doing jobs num : 5
thread name is : Thread-1
doing jobs num : 4
thread name is : Thread-1
doing jobs num : 3
thread name is : Thread-1
doing jobs num : 2
thread name is : Thread-1
doing jobs num : 1
Main Thread Exit : __main__
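A binary semaphore (initial value 1), as above, behaves like a lock. The more typical use of Semaphore is to allow up to N threads into a section at the same time. The following is a small sketch with illustrative names (download_slots, download), not code from the original article.

import threading
import time

MAX_CONCURRENT = 3                        # at most 3 workers may "download" at once
download_slots = threading.Semaphore(MAX_CONCURRENT)

def download(url_id):
    with download_slots:                  # take one of the 3 slots; released automatically on exit
        print(threading.current_thread().name, 'downloading', url_id)
        time.sleep(0.1)                   # simulate I/O work

threads = [threading.Thread(target=download, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()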
Of course, the approaches above are only a few of the common ones; you can also use other classes and interfaces to implement synchronization and mutual exclusion, such as Condition, RLock, multiprocessing, and so on.
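As one last illustration, a Condition can coordinate a producer and a consumer around a shared buffer. This is a minimal sketch (illustrative code, not from the original article) assuming a simple list used as the buffer.

import threading

condition = threading.Condition()
buffer = []                               # shared data protected by the condition's lock

def producer():
    for i in range(5):
        with condition:
            buffer.append(i)
            condition.notify()            # wake up a waiting consumer

def consumer():
    for _ in range(5):
        with condition:
            while not buffer:             # re-check the predicate; guards against spurious wakeups
                condition.wait()
            item = buffer.pop(0)
            print('consumed', item)

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t1.start()
t2.start()
t1.join()
t2.join()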