
Custom Arc Implementation in Rust

This challenge asks you to implement a simplified version of Rust's Arc (Atomically Reference Counted) smart pointer. Arc allows multiple owners of a shared resource, automatically managing its lifetime and deallocating it when the last owner goes out of scope. This is crucial for concurrency and shared data structures.

Problem Description

You are to implement a CustomArc struct that mimics the core functionality of std::sync::Arc. Your CustomArc should provide a way to wrap a shared resource (represented by a generic type T) and track the number of owners. The key features to implement are:

  1. Wrapping: A constructor that takes ownership of the resource T.
  2. Cloning: A clone() method that increments the reference count and returns a new CustomArc pointing to the same resource.
  3. Decrementing: When a CustomArc instance goes out of scope, the reference count should be decremented.
  4. Deallocation: When the reference count reaches zero, the resource T should be deallocated (dropped).
  5. Accessing the Resource: Provide a way to access the underlying resource T immutably (e.g., via a &T reference or a lock guard).

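One way to satisfy these requirements (a sketch of a possible layout, not a prescribed design) is to keep the reference count and the resource together in a single heap allocation that every handle points to. The ArcInner name and the skeleton below are purely illustrative:

```rust
use std::ptr::NonNull;
use std::sync::atomic::AtomicUsize;
use std::sync::{Mutex, MutexGuard};

// One shared allocation per group of handles: the count and the data live together.
struct ArcInner<T> {
    strong_count: AtomicUsize,
    data: Mutex<T>,
}

// Each handle is just a pointer to that shared allocation.
pub struct CustomArc<T> {
    ptr: NonNull<ArcInner<T>>,
}

impl<T> CustomArc<T> {
    pub fn new(data: T) -> Self { todo!() }             // 1. wrapping
    pub fn lock(&self) -> MutexGuard<'_, T> { todo!() } // 5. accessing the resource
}

impl<T> Clone for CustomArc<T> {
    fn clone(&self) -> Self { todo!() }                 // 2. cloning
}

impl<T> Drop for CustomArc<T> {
    fn drop(&mut self) { todo!() }                      // 3. and 4. decrement, deallocate
}
```
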
You should use std::sync::atomic::{AtomicUsize, Ordering} for atomic operations on the reference count. This is essential for thread safety.
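If you have not used atomics before, the relevant operations are fetch_add and fetch_sub, which update the counter and return its previous value; that previous value is what lets the last owner detect that it should deallocate. A minimal, standalone illustration:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let count = AtomicUsize::new(1); // one owner to start with

    // A clone would bump the count; fetch_add returns the value before the add.
    let before = count.fetch_add(1, Ordering::SeqCst);
    assert_eq!(before, 1);
    assert_eq!(count.load(Ordering::SeqCst), 2);

    // A drop would decrement; if the previous value was 1, this was the last owner.
    let prev = count.fetch_sub(1, Ordering::SeqCst);
    assert_eq!(prev, 2); // not the last owner yet

    let prev = count.fetch_sub(1, Ordering::SeqCst);
    assert_eq!(prev, 1); // last owner: this is where deallocation would happen
}
```
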

Key Requirements:

  • The CustomArc must be thread-safe.
  • The resource T must be deallocated only when the last CustomArc pointing to it is dropped.
  • The clone() method must return a new CustomArc that shares the same underlying resource.
  • Accessing the resource should be safe and provide an immutable reference.

Expected Behavior:

  • Multiple CustomArc instances can point to the same resource.
  • Changes to the reference count must be atomic.
  • The resource T should be dropped only when the reference count reaches zero.
  • Accessing the resource through a CustomArc should not cause data races.

Edge Cases to Consider:

  • Creating a CustomArc with an invalid resource: in safe Rust the constructor takes ownership of a value of type T, so for this challenge you may assume the resource is always valid and no special handling is required.
  • Cloning a CustomArc multiple times.
  • Dropping multiple CustomArc instances pointing to the same resource.
  • Concurrent access and modification of the reference count from multiple threads (though full thread safety testing is not required for this challenge, the design should accommodate it).

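One way to exercise the cloning and dropping edge cases is a sentinel type whose Drop implementation records that it ran. The DropCounter type and the test below are illustrative scaffolding (they assume your CustomArc is in scope), not part of the required API:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many times a value of this type has been dropped.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct DropCounter;

impl Drop for DropCounter {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    let arc = CustomArc::new(DropCounter);
    let clones: Vec<_> = (0..10).map(|_| arc.clone()).collect();

    drop(arc);
    assert_eq!(DROPS.load(Ordering::SeqCst), 0); // clones are still alive

    drop(clones);
    assert_eq!(DROPS.load(Ordering::SeqCst), 1); // dropped exactly once, at the very end
}
```
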
Examples

Example 1:

Input:
```rust
let data = String::from("Hello, world!");
let arc = CustomArc::new(data);
let arc_clone = arc.clone();

// Drop arc
drop(arc);

// Check if data is still valid (it should be, as arc_clone still exists)
assert_eq!(arc_clone.lock().as_str(), "Hello, world!");

// Drop arc_clone
drop(arc_clone);
```

Output: data is deallocated when arc_clone is dropped.

Explanation: The initial CustomArc has a reference count of 1. Cloning it creates a second CustomArc and raises the shared count to 2. Dropping the first CustomArc decrements the count to 1. Dropping the second CustomArc decrements the count to 0, triggering deallocation of the underlying String.

Example 2:

Input:
```rust
use std::thread;

let data = String::from("Shared data");
let arc = CustomArc::new(data);
let arc_clone = arc.clone();

let handle = thread::spawn(move || {
    let another_arc = arc_clone.clone();
    drop(arc_clone);
    assert_eq!(another_arc.lock().as_str(), "Shared data");
    drop(another_arc);
});

drop(arc);
handle.join().unwrap();
```

Output: The String "Shared data" is deallocated after the thread completes and drops its CustomArc instances.

Explanation: This demonstrates cloning across threads. The main thread and the spawned thread both have CustomArc instances pointing to the same data. The main thread drops its CustomArc, but the data remains alive because the spawned thread still holds a clone. When the thread finishes and drops its CustomArc, the reference count reaches zero, and the data is deallocated.
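A detail this example relies on: for thread::spawn to accept the closure, CustomArc<String> must implement Send. If your CustomArc stores a raw pointer to a shared allocation, the compiler will not derive that automatically, so you opt in with unsafe marker impls. A sketch, assuming the Mutex-plus-atomic-counter layout discussed earlier (this fragment belongs inside your implementation, not on its own):

```rust
// SAFETY (sketch): all access to the data goes through the Mutex and the
// reference count is updated atomically, so a handle can be moved to or
// shared with another thread whenever the payload T itself is Send.
unsafe impl<T: Send> Send for CustomArc<T> {}
unsafe impl<T: Send> Sync for CustomArc<T> {}
```
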

Constraints

  • The CustomArc implementation must use std::sync::atomic::AtomicUsize for the reference count.
  • The resource T must be deallocated only when the reference count reaches zero.
  • The code should be reasonably efficient (avoid unnecessary allocations or copies).
  • The implementation should be thread-safe, although extensive thread safety testing is not required.
  • The lock() method should return a MutexGuard<'_, T> (you can use std::sync::Mutex for this).

Notes

  • Use a Mutex to protect access to the underlying resource T: the lock() constraint above requires one, and it also leaves room for future mutation (although this challenge focuses on immutable access).
  • The Ordering for the atomic operations is important for correctness. Ordering::SeqCst is the simplest and most conservative choice, but you can explore weaker orderings for potential performance gains if you understand their implications (a brief sketch of the weaker pattern follows these notes).
  • Focus on the core reference counting and deallocation logic. Error handling and advanced features (like interior mutability) are not required for this challenge.
  • The lock() method is a placeholder for accessing the underlying data. You'll need to implement it to acquire a lock on the Mutex protecting the data.
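As a point of reference for the ordering note above: the standard library's Arc performs its increment with Relaxed ordering on clone and its decrement with Release ordering on drop, followed by an Acquire fence before freeing the allocation. The standalone sketch below shows that shape in isolation; free_allocation is a hypothetical placeholder for whatever cleanup your design performs, and plain SeqCst everywhere (as in the implementation that follows) is perfectly acceptable for this challenge:

```rust
use std::sync::atomic::{fence, AtomicUsize, Ordering};

// Hypothetical stand-in for releasing the shared allocation.
fn free_allocation() {
    println!("deallocating");
}

fn main() {
    let strong_count = AtomicUsize::new(1);

    // Cloning only needs the increment itself to be atomic, so Relaxed suffices.
    strong_count.fetch_add(1, Ordering::Relaxed);

    // Dropping: Release publishes this owner's writes to whoever ends up freeing...
    if strong_count.fetch_sub(1, Ordering::Release) == 1 {
        // ...and the Acquire fence makes earlier owners' writes visible before freeing.
        fence(Ordering::Acquire);
        free_allocation();
    }

    // Second drop: the previous value is 1, so this path actually frees.
    if strong_count.fetch_sub(1, Ordering::Release) == 1 {
        fence(Ordering::Acquire);
        free_allocation();
    }
}
```
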
One complete implementation along these lines, using SeqCst throughout as the notes suggest:

```rust
use std::ptr::NonNull;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Mutex, MutexGuard};

// The shared allocation: the reference count and the resource live together,
// so every handle observes the same counter and the same data.
struct ArcInner<T> {
    strong_count: AtomicUsize,
    data: Mutex<T>,
}

pub struct CustomArc<T> {
    ptr: NonNull<ArcInner<T>>,
}

// SAFETY: the count is atomic and the data is only reachable through the Mutex,
// so handles may be sent to and shared between threads whenever T is Send.
unsafe impl<T: Send> Send for CustomArc<T> {}
unsafe impl<T: Send> Sync for CustomArc<T> {}

impl<T> CustomArc<T> {
    pub fn new(data: T) -> Self {
        let inner = Box::new(ArcInner {
            strong_count: AtomicUsize::new(1),
            data: Mutex::new(data),
        });
        CustomArc {
            // Leak the Box into a raw pointer; Drop reclaims it later.
            ptr: NonNull::new(Box::into_raw(inner)).unwrap(),
        }
    }

    fn inner(&self) -> &ArcInner<T> {
        // SAFETY: the allocation stays alive while at least one handle exists.
        unsafe { self.ptr.as_ref() }
    }

    pub fn lock(&self) -> MutexGuard<'_, T> {
        self.inner().data.lock().unwrap()
    }
}

impl<T> Clone for CustomArc<T> {
    fn clone(&self) -> Self {
        // Bump the shared count; the new handle points at the same allocation.
        self.inner().strong_count.fetch_add(1, Ordering::SeqCst);
        CustomArc { ptr: self.ptr }
    }
}

impl<T> Drop for CustomArc<T> {
    fn drop(&mut self) {
        // fetch_sub returns the previous value: 1 means this was the last handle.
        if self.inner().strong_count.fetch_sub(1, Ordering::SeqCst) == 1 {
            // SAFETY: no other handles remain, so the allocation can be freed,
            // which also drops the Mutex and the resource T inside it.
            unsafe { drop(Box::from_raw(self.ptr.as_ptr())) };
        }
    }
}
```