The asyncio library in Python

The asyncio library in Python provides a framework for writing single-threaded concurrent code using coroutines: functions declared with async def that can suspend and resume execution. It lets you manage asynchronous operations cleanly and is well suited to I/O-bound and high-level structured network code.

Key Concepts

  1. Event Loop: The core of every asyncio application. It runs asynchronous tasks and callbacks, performs network I/O operations, and runs subprocesses.
  2. Coroutines: Special functions defined with async def, which can use await to call other asynchronous functions.
  3. Tasks: A higher-level way to manage coroutines. They allow you to schedule coroutines concurrently.
  4. Futures: Objects that represent the result of an asynchronous operation, typically managed by the event loop.
  5. Streams: High-level APIs for working with network connections.

Basic Usage

Here’s a simple example of using asyncio to run a couple of coroutines:

import asyncio

async def say_hello():
    await asyncio.sleep(1)
    print("Hello")

async def say_world():
    await asyncio.sleep(1)
    print("World")

async def main():
    await asyncio.gather(
        say_hello(),
        say_world(),
    )

asyncio.run(main())

Creating Tasks

You can use asyncio.create_task() to schedule a coroutine to run concurrently:

import asyncio

async def task_example():
    print("Task started")
    await asyncio.sleep(2)
    print("Task finished")

async def main():
    task = asyncio.create_task(task_example())
    await task

asyncio.run(main())

Working with Futures

Futures represent a value that may not be available yet. You can create and wait for a future:

import asyncio

async def set_future(fut):
    await asyncio.sleep(2)
    fut.set_result("Future is done")

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()  # preferred over instantiating asyncio.Future() directly
    asyncio.create_task(set_future(fut))
    print(await fut)  # suspends until set_result() is called

asyncio.run(main())

Streams

Working with TCP streams using asyncio is straightforward. Here is a simple echo server:

import asyncio

async def handle_client(reader, writer):
    data = await reader.read(100)
    message = data.decode()
    addr = writer.get_extra_info('peername')

    print(f"Received {message} from {addr}")

    # Echo the data back to the client
    writer.write(data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, '127.0.0.1', 8888)
    addr = server.sockets[0].getsockname()
    print(f'Serving on {addr}')

    async with server:
        await server.serve_forever()

asyncio.run(main())
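
To try the server out, a minimal client sketch using asyncio.open_connection() could look like this (the host, port, and message are assumptions matching the server above):

import asyncio

async def tcp_client():
    # Connect to the echo server above (assumed running on 127.0.0.1:8888)
    reader, writer = await asyncio.open_connection('127.0.0.1', 8888)

    writer.write(b"Hello, server")
    await writer.drain()

    # The server echoes the data back
    data = await reader.read(100)
    print(f"Received: {data.decode()}")

    writer.close()
    await writer.wait_closed()

asyncio.run(tcp_client())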

Exception Handling

You can handle exceptions within asyncio tasks:

import asyncio

async def error_task():
    raise ValueError("An example error")

async def main():
    task = asyncio.create_task(error_task())
    try:
        await task
    except ValueError as e:
        print(f"Caught an exception: {e}")

asyncio.run(main())
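
Alternatively, when running several coroutines with asyncio.gather(), passing return_exceptions=True collects exceptions in the result list instead of raising the first one. A minimal sketch (the coroutine names are illustrative):

import asyncio

async def ok():
    return "success"

async def fails():
    raise ValueError("An example error")

async def main():
    # Exceptions are returned as results rather than propagating out of gather()
    results = await asyncio.gather(ok(), fails(), return_exceptions=True)
    for result in results:
        if isinstance(result, Exception):
            print(f"Caught: {result}")
        else:
            print(result)

asyncio.run(main())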

Integration with Other Libraries

asyncio can be integrated with various libraries, including web frameworks like FastAPI, databases, and more. This allows for highly responsive applications that can handle many tasks concurrently.
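
For example, blocking calls from a synchronous library can be offloaded to a worker thread with asyncio.to_thread() (available in Python 3.9+) so they do not stall the event loop. A minimal sketch, where blocking_io() stands in for any blocking call:

import asyncio
import time

def blocking_io():
    # Placeholder for a blocking library call (file I/O, a sync DB driver, etc.)
    time.sleep(1)
    return "blocking result"

async def main():
    # Runs blocking_io() in a worker thread while the event loop keeps running
    result = await asyncio.to_thread(blocking_io)
    print(result)

asyncio.run(main())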

Useful Functions

  • asyncio.sleep(): Suspend the current coroutine for a given number of seconds.
  • asyncio.gather(): Run multiple awaitables concurrently and wait for all of them to complete.
  • asyncio.wait_for(): Wait for an awaitable with a timeout, raising asyncio.TimeoutError if it does not finish in time (see the sketch below).
  • asyncio.shield(): Protect an awaitable from being cancelled.
  • asyncio.run(): Create a new event loop, run the given coroutine until it completes, and close the loop.
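
As an illustration, here is a minimal sketch of asyncio.wait_for() (the coroutine name and timeout value are arbitrary):

import asyncio

async def slow_operation():
    await asyncio.sleep(5)
    return "done"

async def main():
    try:
        # Cancel slow_operation() if it takes longer than 2 seconds
        result = await asyncio.wait_for(slow_operation(), timeout=2)
        print(result)
    except asyncio.TimeoutError:
        print("Operation timed out")

asyncio.run(main())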

The asyncio library is a powerful tool for managing asynchronous operations in Python, making concurrent code easier to write, read, and maintain. For more detailed information, refer to the official asyncio documentation.
