Implement an In-Memory Cache in Go
This challenge involves building a fundamental in-memory cache mechanism in Go. Caching is a crucial technique for improving application performance by storing frequently accessed data closer to the application, reducing the need for expensive operations like database queries or API calls. You will create a flexible and thread-safe cache that can store and retrieve data based on a key.
Problem Description
Your task is to design and implement an in-memory cache in Go. The cache should support basic operations: Set (to add or update a key-value pair) and Get (to retrieve a value by its key). Additionally, the cache needs to have a mechanism for managing its size, specifically an eviction policy. For this challenge, you will implement a Least Recently Used (LRU) eviction policy.
Key Requirements:
- Thread Safety: The cache must be safe for concurrent access from multiple goroutines.
- Key-Value Storage: The cache should store data as key-value pairs. Keys should be strings, and values can be of any type (interface{}).
- Get Operation:
  - If a key exists, return its value and true.
  - If a key does not exist, return nil and false.
  - When a key is accessed (via Get), it should be marked as recently used.
- Set Operation:
- Add a new key-value pair to the cache.
- If the key already exists, update its value and mark it as recently used.
- If adding a new item causes the cache to exceed its capacity, an item must be evicted.
- Capacity and Eviction (LRU):
- The cache should have a defined maximum capacity.
- When the capacity is reached and a new item needs to be added, the least recently used item must be evicted.
- The "least recently used" item is the one that has not been accessed (via Get or a Set update) for the longest time.
Expected Behavior:
- The cache should behave correctly under concurrent access.
- Get operations on existing keys should retrieve values and update their recency.
- Set operations should add or update items and trigger eviction when capacity is exceeded.
- Eviction should consistently remove the least recently used item.
Edge Cases to Consider:
- Cache with zero or very small capacity.
- Accessing a key that has been evicted.
- Setting a value for a key that has been evicted and re-added.
- Concurrent Set and Get operations on the same or different keys.
Examples
Example 1: Basic Set and Get
Input:
cache = NewCache(2) // Capacity of 2
cache.Set("key1", "value1")
cache.Set("key2", "value2")
val1, ok1 := cache.Get("key1")
val2, ok2 := cache.Get("key2")
Output:
val1: "value1", ok1: true
val2: "value2", ok2: true
Explanation: Both keys are successfully set and retrieved.
Example 2: LRU Eviction
Input:
cache = NewCache(2) // Capacity of 2
cache.Set("key1", "value1")
cache.Set("key2", "value2")
// Accessing key1 makes it most recently used
cache.Get("key1")
// Setting key3 should evict key2 (the least recently used)
cache.Set("key3", "value3")
val1, ok1 := cache.Get("key1")
val2, ok2 := cache.Get("key2") // key2 should be evicted
val3, ok3 := cache.Get("key3")
Output:
val1: "value1", ok1: true
val2: nil, ok2: false
val3: "value3", ok3: true
Explanation: When "key3" was set, "key2" was the LRU item and was evicted. "key1" was accessed, making it the MRU.
Example 3: Update and Eviction Order
Input:
cache = NewCache(3) // Capacity of 3
cache.Set("a", 1)
cache.Set("b", 2)
cache.Set("c", 3)
cache.Get("a") // a is now MRU
cache.Set("b", 20) // b is updated, now MRU
// LRU is c
cache.Set("d", 4) // c should be evicted
Output:
// (Implicitly, if we were to get them)
// cache.Get("a") -> 1, true
// cache.Get("b") -> 20, true
// cache.Get("c") -> nil, false
// cache.Get("d") -> 4, true
Explanation: "a" was accessed, then "b" was updated. This made "c" the LRU. Adding "d" evicted "c".
Constraints
- Cache capacity will be a non-negative integer. A capacity of 0 means no items can be stored.
- Keys will be strings.
- Values can be of any type (interface{}).
- The cache should handle a reasonable number of concurrent operations without deadlocks or data corruption.
- The implementation should aim for efficient lookups and updates: O(1) on average for Get and Set, assuming a good hash map implementation.
Notes
- Consider using a combination of a hash map for O(1) average time key lookups and a doubly linked list to maintain the order of recency for LRU eviction.
- Synchronization primitives such as sync.Mutex or sync.RWMutex will be essential for thread safety.
- Think about how to efficiently move elements in your data structures when they are accessed or updated, so that their recency is reflected.
- An empty cache or a cache with capacity 0 should gracefully handle Get and Set operations.
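Putting the notes together, here is one possible sketch of the full cache: a map for O(1) lookups, container/list for recency order, and a sync.Mutex for thread safety. This is a reference shape, not the only valid solution; the entry type and field names are illustrative.

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// entry is stored in the list so eviction can recover the key.
type entry struct {
	key   string
	value interface{}
}

// Cache is a thread-safe LRU cache.
type Cache struct {
	mu       sync.Mutex
	capacity int
	items    map[string]*list.Element // key -> list node
	order    *list.List               // front = most recently used
}

func NewCache(capacity int) *Cache {
	return &Cache{
		capacity: capacity,
		items:    make(map[string]*list.Element),
		order:    list.New(),
	}
}

func (c *Cache) Get(key string) (interface{}, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if el, ok := c.items[key]; ok {
		c.order.MoveToFront(el) // mark as most recently used
		return el.Value.(*entry).value, true
	}
	return nil, false
}

func (c *Cache) Set(key string, value interface{}) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.capacity <= 0 {
		return // a zero-capacity cache stores nothing
	}
	if el, ok := c.items[key]; ok {
		// Existing key: update in place and mark as MRU.
		el.Value.(*entry).value = value
		c.order.MoveToFront(el)
		return
	}
	if c.order.Len() >= c.capacity {
		// At capacity: evict the LRU item (back of the list).
		if lru := c.order.Back(); lru != nil {
			c.order.Remove(lru)
			delete(c.items, lru.Value.(*entry).key)
		}
	}
	c.items[key] = c.order.PushFront(&entry{key, value})
}

func main() {
	cache := NewCache(2)
	cache.Set("key1", "value1")
	cache.Set("key2", "value2")
	cache.Get("key1")           // key1 becomes MRU
	cache.Set("key3", "value3") // evicts key2
	_, ok := cache.Get("key2")
	fmt.Println(ok) // prints "false"
}
```

Note that Get takes the full mutex rather than an RWMutex read lock: even a read must mutate the recency list, so a plain sync.Mutex is the simpler correct choice here.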