
Implementing an Asynchronous Mutex in Rust

Concurrent access to shared mutable data is a fundamental challenge in multithreaded programming. A synchronous mutex solves it for OS threads by blocking, but asynchronous code needs a mechanism that cooperates with the async runtime rather than blocking its worker threads. This challenge asks you to build a basic asynchronous mutex in Rust, enabling safe concurrent access to shared data within an async environment.

Problem Description

Your task is to implement a custom asynchronous mutex in Rust, named AsyncMutex. This mutex should allow multiple asynchronous tasks to safely acquire exclusive access to a shared piece of data.

Key Requirements:

  • Asynchronous Acquisition: The lock() method should return a future that resolves once the mutex has been acquired. If the mutex is already locked, tasks attempting to acquire it should yield to the runtime and be resumed when the mutex becomes available.
  • Exclusive Access: Only one task should hold the lock at any given time.
  • Data Protection: The mutex should wrap a piece of data (generic type T) and provide safe access to it through the acquired lock guard.
  • Lock Guard: Upon successful acquisition, lock() should return a guard object. This guard should provide mutable access to the protected data (&mut T) and automatically release the mutex when it goes out of scope (RAII). See the API sketch after this list.
  • No Blocking: The implementation must not use any blocking operations that would prevent the async runtime from polling other tasks.
  • Thread Safety (for runtime): The AsyncMutex itself should be safe to use across different threads that might be running async tasks (e.g., when using tokio::spawn on a multi-threaded runtime).
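
Taken together, these requirements suggest a public API along the following lines. This is only a sketch of one possible shape: the guard type name AsyncMutexGuard, the placeholder field, and the todo!() bodies are illustrative, not part of the specification.

use std::marker::PhantomData;
use std::ops::{Deref, DerefMut};

pub struct AsyncMutex<T> {
    // Real fields (lock state, wait/notify machinery, the protected value) are elided.
    _placeholder: PhantomData<T>,
}

pub struct AsyncMutexGuard<'a, T> {
    // Holds a borrow of the mutex so the lock can be released on drop.
    _mutex: &'a AsyncMutex<T>,
}

impl<T> AsyncMutex<T> {
    pub fn new(_value: T) -> Self {
        todo!()
    }

    // Takes &self (not self), so the mutex can be locked repeatedly by different tasks.
    pub async fn lock(&self) -> AsyncMutexGuard<'_, T> {
        todo!()
    }
}

impl<T> Deref for AsyncMutexGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T {
        todo!()
    }
}

impl<T> DerefMut for AsyncMutexGuard<'_, T> {
    fn deref_mut(&mut self) -> &mut T {
        todo!()
    }
}

impl<T> Drop for AsyncMutexGuard<'_, T> {
    fn drop(&mut self) {
        // Releasing the lock here is what gives the guard its RAII behaviour.
    }
}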

Expected Behavior:

When multiple tasks try to acquire the AsyncMutex:

  1. The first task to call lock() will acquire it immediately.
  2. Subsequent tasks calling lock() will be queued.
  3. When the current lock holder releases the mutex, the next task in the queue will acquire it.

Examples

Example 1:

use std::sync::Arc;
use tokio::task;
use tokio::time::{sleep, Duration};

// Assume AsyncMutex is implemented as described.
// The following code is for demonstration of expected usage.

#[tokio::main]
async fn main() {
    let shared_data = Arc::new(AsyncMutex::new(0));
    let mut handles = vec![];

    for i in 0..5 {
        let shared_data_clone = Arc::clone(&shared_data);
        let handle = task::spawn(async move {
            println!("Task {} trying to acquire lock...", i);
            let mut data = shared_data_clone.lock().await;
            println!("Task {} acquired lock. Data: {}", i, *data);
            *data += 1;
            sleep(Duration::from_millis(10 + i as u64)).await; // Simulate work
            println!("Task {} releasing lock. Data: {}", i, *data);
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }

    let final_data = shared_data.lock().await;
    println!("Final data: {}", *final_data);
}

Expected Output (order of acquisition may vary, but final result is consistent):

Task 0 trying to acquire lock...
Task 0 acquired lock. Data: 0
Task 1 trying to acquire lock...
Task 2 trying to acquire lock...
Task 3 trying to acquire lock...
Task 4 trying to acquire lock...
Task 0 releasing lock. Data: 1
Task 1 acquired lock. Data: 1
Task 1 releasing lock. Data: 2
Task 2 acquired lock. Data: 2
Task 2 releasing lock. Data: 3
Task 3 acquired lock. Data: 3
Task 3 releasing lock. Data: 4
Task 4 acquired lock. Data: 4
Task 4 releasing lock. Data: 5
Final data: 5

Example 2 (illustrating contention):

use std::sync::Arc;
use tokio::task;
use tokio::time::{sleep, Duration};

// Assume AsyncMutex is implemented as described.

#[tokio::main]
async fn main() {
    let shared_counter = Arc::new(AsyncMutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter_clone = Arc::clone(&shared_counter);
        let handle = task::spawn(async move {
            let mut counter = counter_clone.lock().await;
            *counter += 1;
            sleep(Duration::from_millis(5)).await; // Small delay to encourage interleaving
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }

    let final_count = shared_counter.lock().await;
    println!("Total increments: {}", *final_count);
}

Expected Output:

Total increments: 10

Constraints

  • The AsyncMutex should be generic over the data type T.
  • T must be Send if the mutex is shared across threads (typical with a multi-threaded async runtime); the AsyncMutex<T> itself should then be Send and Sync so it can be shared via Arc and used from spawned tasks. See the compile-time check sketched after this list.
  • Your implementation should not rely on external crates that already provide an async mutex. You may use standard library synchronization primitives (such as std::sync::Mutex for internal bookkeeping, if absolutely necessary) or lower-level async primitives from crates like tokio if your environment permits, though a pure implementation in terms of futures and wakers is more illustrative. For this challenge, you may treat tokio's Mutex as inspiration and use tokio's signaling primitives for the underlying wakeups, but the core acquisition/release logic and guard management must be your own.
  • The lock() method should borrow the AsyncMutex (take &self) rather than consume it, so the mutex can be locked repeatedly by different tasks over time.
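
As a concrete reading of the thread-safety constraint, a check along the following lines should compile once your types are in place. assert_send_sync is a hypothetical helper written here for illustration; it is not part of the required API.

// Assume AsyncMutex is implemented as described.
fn assert_send_sync<M: Send + Sync>() {}

fn _check_thread_safety() {
    // Compiles only if AsyncMutex<i32> can be shared across tokio's worker threads.
    assert_send_sync::<AsyncMutex<i32>>();
}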

Notes

  • Consider how to manage waiting tasks when the mutex is locked. An internal queue of tasks waiting for the lock is a common pattern.
  • tokio::sync::Notify or similar mechanisms can be useful for signaling waiting tasks when the mutex is released; a minimal end-to-end sketch along these lines follows these notes.
  • The lock guard's Drop implementation is crucial for automatically releasing the mutex.
  • Think about the Send and Sync requirements on the mutex itself if it is to be shared across threads; wrapping it in an Arc is the usual way to share it between tasks.
  • The core challenge lies in implementing the asynchronous waiting and notification mechanism without blocking threads.
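
Pulling these notes together, here is a minimal sketch of one possible approach, assuming tokio (with its sync primitives) is available. It uses an AtomicBool as the lock flag, tokio::sync::Notify for wakeups, and an UnsafeCell for the protected data; all names are placeholders. It is illustrative only and is not FIFO-fair: a newly arriving task may acquire the lock ahead of a woken waiter.

use std::cell::UnsafeCell;
use std::ops::{Deref, DerefMut};
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::sync::Notify;

pub struct AsyncMutex<T> {
    locked: AtomicBool,  // true while some task holds the lock
    notify: Notify,      // wakes one waiting task when the lock is released
    data: UnsafeCell<T>, // the protected data
}

// SAFETY: access to `data` is serialized by the `locked` flag, so sharing the
// mutex between threads is sound as long as T itself can be sent between them.
unsafe impl<T: Send> Sync for AsyncMutex<T> {}

impl<T> AsyncMutex<T> {
    pub fn new(value: T) -> Self {
        Self {
            locked: AtomicBool::new(false),
            notify: Notify::new(),
            data: UnsafeCell::new(value),
        }
    }

    pub async fn lock(&self) -> AsyncMutexGuard<'_, T> {
        loop {
            // Try to flip the flag from unlocked to locked.
            if self
                .locked
                .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
                .is_ok()
            {
                return AsyncMutexGuard { mutex: self };
            }
            // Lock is held: suspend until the holder signals a release, then retry.
            // Notify stores a permit if the release happens before we start waiting,
            // so the wakeup is not lost.
            self.notify.notified().await;
        }
    }
}

pub struct AsyncMutexGuard<'a, T> {
    mutex: &'a AsyncMutex<T>,
}

impl<T> Deref for AsyncMutexGuard<'_, T> {
    type Target = T;
    fn deref(&self) -> &T {
        // SAFETY: holding the guard means we hold the lock.
        unsafe { &*self.mutex.data.get() }
    }
}

impl<T> DerefMut for AsyncMutexGuard<'_, T> {
    fn deref_mut(&mut self) -> &mut T {
        // SAFETY: holding the guard means we hold the lock exclusively.
        unsafe { &mut *self.mutex.data.get() }
    }
}

impl<T> Drop for AsyncMutexGuard<'_, T> {
    fn drop(&mut self) {
        // Release the lock, then wake one waiting task (RAII release).
        self.mutex.locked.store(false, Ordering::Release);
        self.mutex.notify.notify_one();
    }
}

The guard's Drop implementation is what makes release automatic: clearing the flag and then calling notify_one() hands the lock to at most one waiter at a time. A FIFO-fair variant would keep an explicit queue of wakers instead of having every waiter retry after each notification.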