Mastering Asynchronous Operations with Python's async/await

Modern applications often need to perform multiple tasks concurrently without blocking the main execution thread. Python's async/await syntax provides a powerful and elegant way to handle such asynchronous operations, making your programs more responsive and efficient, especially when dealing with I/O-bound tasks like network requests or file operations.
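As a minimal illustration of the syntax (the coroutine name greet and the one-second sleep are placeholders, not part of the problem):

import asyncio

async def greet() -> None:
    # await pauses greet() here until the sleep finishes, freeing the
    # event loop to run other coroutines in the meantime.
    await asyncio.sleep(1)
    print("Hello from a coroutine!")

asyncio.run(greet())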

Problem Description

Your task is to implement a program that simulates downloading multiple web pages concurrently using Python's asyncio library and the async/await keywords. You will need to:

  1. Define an asynchronous function that simulates downloading a web page. This function should take a URL and a delay (representing download time) as arguments.
  2. Simulate the download process within the asynchronous function. Instead of actual network requests, use asyncio.sleep() to mimic the time it takes to download.
  3. Create a main asynchronous function that orchestrates the concurrent downloads. This function should take a list of URLs and initiate the download tasks for each.
  4. Use asyncio.gather() to run these download tasks concurrently and wait for all of them to complete.
  5. Measure and print the total time taken for all downloads to finish.

The goal is to demonstrate how async/await can significantly reduce the overall execution time compared to a sequential approach.
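One possible skeleton is sketched below; the names download_page and main, and the use of time.perf_counter for timing, are illustrative choices rather than requirements:

import asyncio
import time

async def download_page(url: str, delay: int) -> None:
    # asyncio.sleep stands in for the real network request.
    print(f"Starting download for {url} ({delay} seconds)...")
    await asyncio.sleep(delay)
    print(f"Finished downloading {url}")

async def main(urls: list[str], delays: list[int]) -> None:
    if not urls:
        print("No URLs provided.")
    start = time.perf_counter()
    # gather schedules every coroutine on the event loop at once and
    # returns only after all of them have completed.
    await asyncio.gather(*(download_page(u, d) for u, d in zip(urls, delays)))
    elapsed = time.perf_counter() - start
    print(f"All downloads completed in {elapsed:.2f} seconds.")

asyncio.run(main(
    ["http://example.com/page1", "http://example.com/page2", "http://example.com/page3"],
    [2, 1, 3],
))

Because the three sleeps overlap, this runs for about 3 seconds (the longest delay), not 6.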

Examples

Example 1:

Input:
URLs: ["http://example.com/page1", "http://example.com/page2", "http://example.com/page3"]
Delays: [2, 1, 3] (seconds for each URL respectively)

Expected Output (approximate):
Starting download for http://example.com/page1 (2 seconds)...
Starting download for http://example.com/page2 (1 seconds)...
Starting download for http://example.com/page3 (3 seconds)...
Finished downloading http://example.com/page2
Finished downloading http://example.com/page1
Finished downloading http://example.com/page3
All downloads completed in X.XX seconds.

Explanation: The program initiates downloads for all three URLs. http://example.com/page2 finishes first due to its shorter delay. The total time taken should be close to the longest individual delay (3 seconds), plus a small overhead, rather than the sum of all delays (2 + 1 + 3 = 6 seconds).
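For contrast, a hypothetical sequential version that awaits each download inside a loop (download_page here is the same simulated coroutine as in the sketch above) takes roughly the sum of the delays:

import asyncio

async def download_page(url: str, delay: int) -> None:
    await asyncio.sleep(delay)  # same simulated download as in the sketch above

async def main_sequential(urls: list[str], delays: list[int]) -> None:
    # Awaiting inside the loop serializes the work: each download must
    # finish before the next one starts, so the total time is the sum
    # of the delays rather than the maximum.
    for url, delay in zip(urls, delays):
        await download_page(url, delay)

asyncio.run(main_sequential(
    ["http://example.com/page1", "http://example.com/page2", "http://example.com/page3"],
    [2, 1, 3],
))  # takes roughly 2 + 1 + 3 = 6 seconds instead of about 3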

Example 2:

Input:
URLs: ["http://example.com/data_fetch"]
Delays: [5]

Expected Output (approximate):
Starting download for http://example.com/data_fetch (5 seconds)...
Finished downloading http://example.com/data_fetch
All downloads completed in X.XX seconds.

Explanation: With a single URL, the program waits for its simulated download to complete. The total time will be approximately the delay time.

Example 3: (Edge Case - Empty list)

Input:
URLs: []
Delays: []

Expected Output:
No URLs provided.
All downloads completed in 0.00 seconds.

Explanation: If no URLs are provided, the program should handle this gracefully and report that no downloads were initiated, with a completion time of 0.

Constraints

  • The number of URLs will be between 0 and 50.
  • Each delay will be an integer between 1 and 10 seconds.
  • The input will always be two lists of equal length: one for URLs (strings) and one for corresponding delays (integers).
  • When more than one URL is given, the total execution time should be demonstrably shorter than the sum of all delays, showcasing the benefit of concurrency.

Notes

  • You will need to import the asyncio library.
  • The async def syntax is used to define coroutines (asynchronous functions).
  • The await keyword is used to pause the execution of a coroutine until an awaitable (like asyncio.sleep()) completes.
  • asyncio.run() is used to start the asyncio event loop and run your main asynchronous function.
  • Consider using f-strings for formatted output.
  • When simulating a download, print a message indicating the start of the download with its URL and delay, and print another message when each download finishes.