Python Concurrency Series 3: A Battle of Examples

This article takes a deep dive into concurrent programming in Python, comparing the use cases, strengths, and weaknesses of multithreading, multiprocessing, and AsyncIO, with examples showing how to use these techniques to make programs more efficient.


3. Processes, Threads, and Coroutines: Examples in Action

Translated from: AsyncIO, Threading, and Multiprocessing in Python

AsyncIO is a relatively new framework for achieving concurrency in Python. In this article, I will compare it with traditional approaches such as multithreading and multiprocessing.

  • CPython enforces the GIL (Global Interpreter Lock), which prevents multithreading from being fully exploited: every thread must acquire this mutex before running any bytecode (see the sketch after this list).
  • For network I/O or disk I/O, multithreading is usually the first choice, because threads blocked on I/O do not have to compete fiercely for the GIL.
  • Multiprocessing is usually preferred for CPU-bound tasks. It does not need the GIL, since each process has its own interpreter state; however, creating and destroying processes is far from free.
  • Multithreading with the threading module is preemptive, which entails both voluntary and involuntary swapping of threads.
  • AsyncIO is single-threaded, single-process cooperative multitasking: an asyncio task has exclusive use of the CPU until it chooses to hand it over to the coordinator, or event loop (terminology covered later).
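
A minimal sketch of the GIL's effect (an illustration added here, assuming plain CPython): a CPU-bound countdown gains nothing from a second thread, because only one thread can execute bytecode at a time.

import threading
import time

def count_down(n):
    # pure CPU work; the thread never releases the GIL voluntarily
    while n > 0:
        n -= 1

N = 10_000_000

start = time.perf_counter()
count_down(N)
print(f"sequential: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=count_down, args=(N // 2,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"two threads: {time.perf_counter() - start:.2f}s")  # typically no faster, often slower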

3.1 Sequential Execution

The program prints messages after a delay. While the main thread sleeps, the CPU sits idle, which is an inefficient use of resources.

import logging
import time

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

num_word_mapping = {1: 'ONE', 2: 'TWO', 3: "THREE", 4: "FOUR",
                    5: "FIVE", 6: "SIX", 7: "SEVEN", 8: "EIGHT",
                    9: "NINE", 10: "TEN"}

def delay_message(delay, message):
    logging.info(f"{message} received")
    time.sleep(delay)
    logging.info(f"Printing {message}")

def main():
    logging.info("Main started")
    delay_message(2, num_word_mapping[2])
    delay_message(3, num_word_mapping[3])
    logging.info("Main Ended")

main()
12:43:30:MainThread:Main started
12:43:30:MainThread:TWO received
12:43:32:MainThread:Printing TWO
12:43:32:MainThread:THREE received
12:43:35:MainThread:Printing THREE
12:43:35:MainThread:Main Ended

3.2 Concurrency with Threads

The basic threading module
import logging
import time
import threading

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

num_word_mapping = {1: 'ONE', 2: 'TWO', 3: "THREE", 4: "FOUR",
                    5: "FIVE", 6: "SIX", 7: "SEVEN", 8: "EIGHT",
                    9: "NINE", 10: "TEN"}


def delay_message(delay, message):
    logging.info(f"{message} received")
    time.sleep(delay)
    logging.info(f"Printing {message}")


def main():
    logging.info("Main started")
    threads = [threading.Thread(target=delay_message, args=(delay, message))
               for delay, message in zip([2, 3], [num_word_mapping[2], num_word_mapping[3]])]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()     # waits for thread to complete its task
    logging.info("Main Ended")


main()
18:19:07:MainThread:Main started
18:19:07:Thread-1:TWO received
18:19:07:Thread-2:THREE received
18:19:09:Thread-1:Printing TWO
18:19:10:Thread-2:Printing THREE
18:19:10:MainThread:Main Ended

This uses Python's threading module to run the delay_message calls on separate non-daemon threads. Unsurprisingly, the program finishes 2 seconds faster than the synchronous version above: the operating system swaps a thread out while it is idle (sleeping).

Thread pools

Although threads are lightweight, creating and destroying large numbers of them is expensive. concurrent.futures builds on top of the threading module; instead of creating a new thread for every task, it reuses the existing threads in a pool.

import concurrent.futures as cf
import logging
import time

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

num_word_mapping = {1: 'ONE', 2: 'TWO', 3: "THREE", 4: "FOUR", 5: "FIVE", 
                    6: "SIX", 7: "SEVEN", 8: "EIGHT", 9: "NINE", 10: "TEN"}


def delay_message(delay, message):
    logging.info(f"{message} received")
    time.sleep(delay)
    logging.info(f"Printing {message}")
    return message


if __name__ == '__main__':
    with cf.ThreadPoolExecutor(max_workers=2) as executor:
        future_to_mapping = {executor.submit(delay_message, i, num_word_mapping[i]): num_word_mapping[i] for i in
                             range(2, 4)}
        for future in cf.as_completed(future_to_mapping):
            logging.info(f"{future.result()} Done")

11:04:43:ThreadPoolExecutor-0_0:TWO received
11:04:43:ThreadPoolExecutor-0_1:THREE received
11:04:45:ThreadPoolExecutor-0_0:Printing TWO
11:04:45:MainThread:TWO Done
11:04:46:ThreadPoolExecutor-0_1:Printing THREE
11:04:46:MainThread:THREE Done
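
As a usage note: when results are only needed in input order, executor.map is a slightly terser alternative to submit plus as_completed. A minimal sketch under the same setup as above (it assumes delay_message and num_word_mapping are already defined):

if __name__ == '__main__':
    with cf.ThreadPoolExecutor(max_workers=2) as executor:
        # map yields results in submission order, not completion order
        for message in executor.map(delay_message, [2, 3], [num_word_mapping[2], num_word_mapping[3]]):
            logging.info(f"{message} Done")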

3.3 AsyncIO

The basic asyncio module
  1. Coroutine: unlike a regular function with a single point of exit, a coroutine can pause and resume its execution. Creating a coroutine is as simple as putting the async keyword before the function declaration.
  2. Event loop, or coordinator: a coroutine that manages other coroutines. You can think of it as a scheduler or master.
  3. Awaitable: Coroutines, Tasks, and Futures are all awaitable objects. A coroutine can await an awaitable; while it does, its execution is suspended, and it resumes once the awaitable completes.
# runs under Python 3.8
import asyncio
import logging
import time

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

num_word_mapping = {1: 'ONE', 2: 'TWO', 3: "THREE", 4: "FOUR", 5: "FIVE", 
                    6: "SIX", 7: "SEVEN", 8: "EIGHT", 9: "NINE", 10: "TEN"}


async def delay_message(delay, message):
    logging.info(f"{message} received")
    await asyncio.sleep(delay)  # time.sleep is a blocking call, so it cannot be awaited; asyncio.sleep must be used instead
    logging.info(f"Printing {message}")


async def main():
    logging.info("Main started")
    logging.info(f'Current registered tasks: {len(asyncio.all_tasks())}')
    logging.info("Creating tasks")
    task_1 = asyncio.create_task(delay_message(2, num_word_mapping[2]))
    task_2 = asyncio.create_task(delay_message(3, num_word_mapping[3]))
    logging.info(f'Current registered tasks: {len(asyncio.all_tasks())}')
    await task_1  # suspends the coroutine and hands control back to the event loop until the task completes
    await task_2
    logging.info("Main Ended")


if __name__ == '__main__':
    asyncio.run(main())                 # creates an event loop

11:18:17:MainThread:Main started
11:18:17:MainThread:Current registered tasks: 1
11:18:17:MainThread:Creating tasks
11:18:17:MainThread:Current registered tasks: 3
11:18:17:MainThread:TWO received
11:18:17:MainThread:THREE received
11:18:19:MainThread:Printing TWO
11:18:20:MainThread:Printing THREE
11:18:20:MainThread:Main Ended

Even though the program runs on a single thread, cooperative multitasking lets it reach the same level of performance as the multithreaded code.

A better way: creating AsyncIO tasks

Use asyncio.gather to create multiple tasks in one go.

import asyncio
import logging
import time

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

num_word_mapping = {1: 'ONE', 2: 'TWO', 3: "THREE", 4: "FOUR", 5: "FIVE", 
                    6: "SIX", 7: "SEVEN", 8: "EIGHT", 9: "NINE", 10: "TEN"}


async def delay_message(delay, message):
    logging.info(f"{message} received")
    await asyncio.sleep(delay)
    logging.info(f"Printing {message}")


async def main():
    logging.info("Main started")
    logging.info("Creating multiple tasks with asyncio.gather")
    await asyncio.gather(
        *[delay_message(i + 1, num_word_mapping[i + 1]) for i in range(5)])  # awaits completion of all tasks
    logging.info("Main Ended")


if __name__ == '__main__':
    asyncio.run(main())  # creates an event loop
11:23:03:MainThread:Main started
11:23:03:MainThread:Creating multiple tasks with asyncio.gather
11:23:03:MainThread:ONE received
11:23:03:MainThread:TWO received
11:23:03:MainThread:THREE received
11:23:03:MainThread:FOUR received
11:23:03:MainThread:FIVE received
11:23:04:MainThread:Printing ONE
11:23:05:MainThread:Printing TWO
11:23:06:MainThread:Printing THREE
11:23:07:MainThread:Printing FOUR
11:23:08:MainThread:Printing FIVE
11:23:08:MainThread:Main Ended
A warning about blocking calls inside async tasks

As I said earlier, an async task has exclusive use of the CPU until it gives it up voluntarily. If a blocking call sneaks into your task by mistake, it stalls the progress of the entire program.

import asyncio
import logging
import time

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

num_word_mapping = {1: 'ONE', 2: 'TWO', 3: "THREE", 4: "FOUR", 5: "FIVE", 
                    6: "SIX", 7: "SEVEN", 8: "EIGHT", 9: "NINE", 10: "TEN"}


async def delay_message(delay, message):
    logging.info(f"{message} received")
    if message != 'THREE':
        await asyncio.sleep(delay)  # non-blocking call. gives up execution
    else:
        time.sleep(delay)  # blocking call
    logging.info(f"Printing {message}")


async def main():
    logging.info("Main started")
    logging.info("Creating multiple tasks with asyncio.gather")
    await asyncio.gather(
        *[delay_message(i + 1, num_word_mapping[i + 1]) for i in range(5)])  # awaits completion of all tasks
    logging.info("Main Ended")


if __name__ == '__main__':
    asyncio.run(main())  # creates an event loop
13:33:32:MainThread:Main started
13:33:32:MainThread:Creating multiple tasks with asyncio.gather
13:33:32:MainThread:ONE received
13:33:32:MainThread:TWO received
13:33:32:MainThread:THREE received
13:33:35:MainThread:Printing THREE
13:33:35:MainThread:FOUR received
13:33:35:MainThread:FIVE received
13:33:35:MainThread:Printing ONE
13:33:35:MainThread:Printing TWO
13:33:39:MainThread:Printing FOUR
13:33:40:MainThread:Printing FIVE
13:33:40:MainThread:Main Ended

When delay_message receives the message THREE, it makes a blocking call and does not yield control back to the event loop until that task is done, which delays overall progress; the run therefore takes 3 seconds longer than the previous one. This example may look contrived, but it can happen if you are not careful. Threads, on the other hand, are preemptive: the operating system switches a thread out when it is waiting on a blocking call.
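
If a blocking call truly cannot be avoided inside a coroutine, it can be pushed onto a worker thread so the event loop stays responsive. A minimal sketch, assuming Python 3.9+ for asyncio.to_thread (on 3.8, loop.run_in_executor(None, ...) plays the same role):

import asyncio
import time

async def delay_message(delay, message):
    # time.sleep blocks, so hand it to a worker thread instead of stalling the loop
    await asyncio.to_thread(time.sleep, delay)
    print(f"Printing {message}")

async def main():
    await asyncio.gather(delay_message(2, "TWO"), delay_message(3, "THREE"))

asyncio.run(main())  # finishes in ~3 s, not 5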

Race conditions

Multithreaded code can fall apart quickly if race conditions are not accounted for. This is especially tricky when using external libraries, because we need to verify that they support multithreaded use. For example, the Session object of the widely used requests module is not thread-safe, so trying to parallelize network requests over a shared Session object can produce unexpected results.
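
A common workaround for the Session case, sketched below (this assumes the third-party requests package is installed; the URL is a placeholder), is to give each worker thread its own Session via threading.local:

import threading
import concurrent.futures as cf
import requests

thread_local = threading.local()

def get_session():
    # lazily create one Session per thread; Sessions are never shared across threads
    if not hasattr(thread_local, "session"):
        thread_local.session = requests.Session()
    return thread_local.session

def fetch(url):
    return get_session().get(url).status_code

urls = ["https://example.com"] * 4  # placeholder URLs
with cf.ThreadPoolExecutor(max_workers=2) as executor:
    print(list(executor.map(fetch, urls)))

The example below returns to demonstrating the race itself, using shared state updated by two threads.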

import concurrent.futures as cf
import logging
import time

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")


class DbUpdate:
    def __init__(self):
        self.value = 0

    def update(self):
        logging.info("Update Started")
        logging.info("Sleeping")
        time.sleep(2)  # thread gets switched
        logging.info("Reading Value From Db")
        tmp = self.value ** 2 + 1
        logging.info("Updating Value")
        self.value = tmp
        logging.info("Update Finished")


db = DbUpdate()
with cf.ThreadPoolExecutor(max_workers=5) as executor:
    updates = [executor.submit(db.update) for _ in range(2)]
logging.info(f"Final value is {db.value}")
13:49:52:ThreadPoolExecutor-0_0:Update Started
13:49:52:ThreadPoolExecutor-0_0:Sleeping
13:49:52:ThreadPoolExecutor-0_1:Update Started
13:49:52:ThreadPoolExecutor-0_1:Sleeping
13:49:54:ThreadPoolExecutor-0_0:Reading Value From Db
13:49:54:ThreadPoolExecutor-0_1:Reading Value From Db
13:49:54:ThreadPoolExecutor-0_0:Updating Value
13:49:54:ThreadPoolExecutor-0_1:Updating Value
13:49:54:ThreadPoolExecutor-0_0:Update Finished
13:49:54:ThreadPoolExecutor-0_1:Update Finished
13:49:54:MainThread:Final value is 1

Ideally the final value should be 2: the first update computes 0**2 + 1 = 1 and the second then computes 1**2 + 1 = 2. But because of preemptive thread swapping, Thread-1 gets swapped out before it writes its value, both threads read 0, and the update incorrectly yields a final value of 1. We have to prevent this with a lock. (Run the program above a few times: the result is usually 2, but the incorrect result of 1 really does occur.)

import concurrent.futures as cf
import logging
import time
import threading

LOCK = threading.Lock()

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")


class DbUpdate:
    def __init__(self):
        self.value = 0

    def update(self):
        logging.info("Update Started")
        logging.info("Sleeping")
        time.sleep(2)  # thread gets switched
        with LOCK:
            logging.info("Reading Value From Db")
            tmp = self.value ** 2 + 1
            logging.info("Updating Value")
            self.value = tmp
            logging.info("Update Finished")


db = DbUpdate()
with cf.ThreadPoolExecutor(max_workers=5) as executor:
    updates = [executor.submit(db.update) for _ in range(2)]
logging.info(f"Final value is {db.value}")
13:54:16:ThreadPoolExecutor-0_0:Update Started
13:54:16:ThreadPoolExecutor-0_0:Sleeping
13:54:16:ThreadPoolExecutor-0_1:Update Started
13:54:16:ThreadPoolExecutor-0_1:Sleeping
13:54:18:ThreadPoolExecutor-0_0:Reading Value From Db
13:54:18:ThreadPoolExecutor-0_0:Updating Value
13:54:18:ThreadPoolExecutor-0_0:Update Finished
13:54:18:ThreadPoolExecutor-0_1:Reading Value From Db
13:54:18:ThreadPoolExecutor-0_1:Updating Value
13:54:18:ThreadPoolExecutor-0_1:Update Finished
13:54:18:MainThread:Final value is 2
AsyncIO rarely runs into race conditions

Because a task has full control over when its execution is suspended, race conditions are rare in asyncio.

import asyncio
import logging
import time

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")


class DbUpdate:
    def __init__(self):
        self.value = 0

    async def update(self):
        logging.info("Update Started")
        logging.info("Sleeping")
        await asyncio.sleep(1)
        logging.info("Reading Value From Db")
        tmp = self.value ** 2 + 1
        logging.info("Updating Value")
        self.value = tmp
        logging.info("Update Finished")


async def main():
    db = DbUpdate()
    await asyncio.gather(*[db.update() for _ in range(2)])
    logging.info(f"Final value is {db.value}")


asyncio.run(main())
13:57:07:MainThread:Update Started
13:57:07:MainThread:Sleeping
13:57:07:MainThread:Update Started
13:57:07:MainThread:Sleeping
13:57:08:MainThread:Reading Value From Db
13:57:08:MainThread:Updating Value
13:57:08:MainThread:Update Finished
13:57:08:MainThread:Reading Value From Db
13:57:08:MainThread:Updating Value
13:57:08:MainThread:Update Finished
13:57:08:MainThread:Final value is 2

As you can see, once a task resumes after its sleep, it does not give up control until it has finished executing its coroutine. With threads, you cannot easily tell when a swap happens; with asyncio, we control exactly when a coroutine's execution is suspended. Even so, things can still go wrong, for example when two coroutines end up deadlocked.

import asyncio


async def foo():
    await boo()


async def boo():
    await foo()


async def main():
    await asyncio.gather(*[foo(), boo()])


asyncio.run(main())

# RecursionError: maximum recursion depth exceeded (a bottomless pit of mutual awaits)
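
Strictly speaking, the example above does not deadlock: foo and boo await each other in unbounded mutual recursion, so the task blows the Python stack and raises RecursionError. A genuine asyncio deadlock looks more like the following sketch (my own illustration), where two tasks each wait forever on an event that only the other would set:

import asyncio

async def task_a(ev_a, ev_b):
    await ev_b.wait()   # waits for task_b's signal, which never comes
    ev_a.set()

async def task_b(ev_a, ev_b):
    await ev_a.wait()   # waits for task_a's signal, which never comes
    ev_b.set()

async def main():
    ev_a, ev_b = asyncio.Event(), asyncio.Event()
    # both tasks suspend forever; the timeout is only there so the demo terminates
    await asyncio.wait_for(asyncio.gather(task_a(ev_a, ev_b), task_b(ev_a, ev_b)), timeout=2)

asyncio.run(main())  # raises asyncio.TimeoutError: the two tasks are deadlocked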

3.4 Multiprocessing

As mentioned earlier, multiprocessing comes in handy for CPU-bound programs. The code below runs merge sort on 1000 lists of 30000 elements each. Please excuse the somewhat clumsy merge sort implementation.

Synchronous version
import logging
import math
import numpy as np

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

r_lists = [[np.random.randint(500000) for _ in range(30000)] for _ in range(1000)]

def merge(l_1, l_2):
    out = []
    key_1 = 0
    key_2 = 0
    for i in range(len(l_1) + len(l_2)):
        if l_1[key_1] < l_2[key_2]:
            out.append(l_1[key_1])
            key_1 += 1
            if key_1 == len(l_1):
                out = out + l_2[key_2:]
                break
        else:
            out.append(l_2[key_2])
            key_2 += 1
            if key_2 == len(l_2):
                out = out + l_1[key_1:]
                break
    return out

def merge_sort(l):
    if len(l) == 1:
        return l
    mid_point = math.floor((len(l) + 1) / 2)
    l_1, l_2 = merge_sort(l[:mid_point]), merge_sort(l[mid_point:])
    out = merge(l_1, l_2)
    del l_1, l_2
    return out

if __name__ == '__main__':
    logging.info("Starting Sorting")
    for r_list in r_lists:
        _ = merge_sort(r_list)
    logging.info("Sorting Completed")
14:11:56:MainThread:Starting Sorting
14:14:32:MainThread:Sorting Completed
Multiprocessing version
import concurrent.futures as cf
import logging
import math
import numpy as np

logger_format = '%(asctime)s:%(threadName)s:%(message)s'
logging.basicConfig(format=logger_format, level=logging.INFO, datefmt="%H:%M:%S")

r_lists = [[np.random.randint(500000) for _ in range(30000)] for _ in range(1000)]

def merge(l_1, l_2):
    out = []
    key_1 = 0
    key_2 = 0
    for i in range(len(l_1) + len(l_2)):
        if l_1[key_1] < l_2[key_2]:
            out.append(l_1[key_1])
            key_1 += 1
            if key_1 == len(l_1):
                out = out + l_2[key_2:]
                break
        else:
            out.append(l_2[key_2])
            key_2 += 1
            if key_2 == len(l_2):
                out = out + l_1[key_1:]
                break
    return out

def merge_sort(l):
    if len(l) == 1:
        return l
    mid_point = math.floor((len(l) + 1) / 2)
    l_1, l_2 = merge_sort(l[:mid_point]), merge_sort(l[mid_point:])
    out = merge(l_1, l_2)
    del l_1, l_2
    return out

if __name__ == '__main__':
    logging.info("Starting Sorting")
    with cf.ProcessPoolExecutor() as executor:
        sorted_lists_futures = [executor.submit(merge_sort, r_list) for r_list in r_lists]
    logging.info("Sorting Completed")
21:29:33:MainThread:Starting Sorting
21:30:03:MainThread:Sorting Completed

# The run below is from my own machine; who knows what happened
14:31:02:MainThread:Starting Sorting
14:39:50:MainThread:Sorting Completed
# Shrinking the lists: r_lists = [... for _ in range(300)] for _ in range(100)]
# multiprocessing takes about 1 s
16:01:06:MainThread:Starting Sorting
16:01:07:MainThread:Sorting Completed
# the synchronous version takes < 1 s
16:03:13:MainThread:Starting Sorting
16:03:13:MainThread:Sorting Completed

# Shrinking the lists: r_lists = [... for _ in range(3000)] for _ in range(100)]
# synchronous version
16:03:50:MainThread:Starting Sorting
16:03:52:MainThread:Sorting Completed
# multiprocessing version
16:04:20:MainThread:Starting Sorting
16:04:25:MainThread:Sorting Completed

By default, the number of processes equals the number of processors on the machine. You can see a considerable improvement in execution time between the two versions. (On my machine there was no speedup at all: four processes used more resources and still ran several times slower, so the multiprocessing approach looked worse either way. Perhaps it depends on the CPU?)
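
A plausible explanation for the slowdown (an assumption on my part, not something the original article verifies) is inter-process overhead: every submit pickles a whole 30000-element list over to a worker and pickles the sorted result back, and for cheap tasks that traffic can dominate the actual sorting. Batching the work with executor.map and a chunksize amortizes that cost. A minimal sketch reusing merge_sort and r_lists from above:

if __name__ == '__main__':
    logging.info("Starting Sorting")
    with cf.ProcessPoolExecutor() as executor:
        # chunksize ships many lists per round trip, cutting pickling overhead
        sorted_lists = list(executor.map(merge_sort, r_lists, chunksize=50))
    logging.info("Sorting Completed")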
