
Implement an LRU Cache

You are tasked with implementing a Least Recently Used (LRU) cache. An LRU cache is a caching mechanism that discards the least recently used items first when the cache is full. This is a common and useful data structure in system design, used in databases, web servers, and operating systems to manage memory efficiently.

Problem Description

Implement a data structure that supports two operations: get(key) and put(key, value).

  1. get(key): Returns the value of the key if the key exists in the cache. Otherwise, it returns -1. Accessing a key should mark it as recently used.
  2. put(key, value): Inserts or updates the value of a key. If the number of keys in the cache exceeds the capacity, the least recently used key must be removed before inserting the new key-value pair.

The cache should maintain its order of usage, with the most recently used items at one end and the least recently used at the other.

Key Requirements:

  • The cache must have a fixed capacity.
  • When get is called on an existing key, that key becomes the most recently used.
  • When put is called for a new key and the cache is at full capacity, the least recently used key must be evicted.
  • When put is called for an existing key, its value is updated, and it becomes the most recently used.

Expected Behavior:

  • get(key):
    • If key is found, return its value and move key to the "most recently used" position.
    • If key is not found, return -1.
  • put(key, value):
    • If key already exists, update its value and move it to the "most recently used" position.
    • If key does not exist:
      • If the cache is at full capacity, evict the "least recently used" key-value pair.
      • Insert the new key-value pair and mark it as the "most recently used".

Edge Cases:

  • Cache capacity of 1.
  • Putting the same key multiple times.
  • Getting keys that have been recently evicted.

Examples

Example 1:

Capacity: 2

Operations:
put(1, 1)
put(2, 2)
get(1)     -> returns 1. Cache state: (1,1), (2,2) [1 is MRU, 2 is LRU]
put(3, 3)  -> evicts key 2. Cache state: (3,3), (1,1) [3 is MRU, 1 is LRU]
get(2)     -> returns -1 (not found). Cache state: (3,3), (1,1)
put(4, 4)  -> evicts key 1. Cache state: (4,4), (3,3) [4 is MRU, 3 is LRU]
get(1)     -> returns -1 (not found). Cache state: (4,4), (3,3)
get(3)     -> returns 3. Cache state: (3,3), (4,4) [3 is MRU, 4 is LRU]
get(4)     -> returns 4. Cache state: (4,4), (3,3) [4 is MRU, 3 is LRU]

Output:

[1, -1, -1, 3, 4]

Explanation: The sequence of operations and their corresponding return values are demonstrated above. The internal state of the cache (ordered from most recently used to least recently used) is shown for clarity.
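As a quick sanity check, the trace above can be replayed with Python's collections.OrderedDict, whose move_to_end and popitem methods provide exactly the recency operations an LRU cache needs. This is a sketch for verifying the expected output, not necessarily the intended from-scratch solution:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # rightmost item = most recently used

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as MRU
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) == self.capacity:
            self.data.popitem(last=False)  # evict the LRU entry
        self.data[key] = value

# Replay Example 1 (capacity 2):
cache = LRUCache(2)
cache.put(1, 1); cache.put(2, 2)
results = [cache.get(1)]
cache.put(3, 3)
results.append(cache.get(2))
cache.put(4, 4)
results += [cache.get(1), cache.get(3), cache.get(4)]
print(results)  # [1, -1, -1, 3, 4]
```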

Example 2:

Capacity: 1

Operations:
put(2, 1)  -> Cache state: (2,1)
get(2)     -> returns 1. Cache state: (2,1)
put(3, 2)  -> capacity is 1, so key 2 is evicted. Cache state: (3,2)
get(2)     -> returns -1 (not found). Cache state: (3,2)
get(3)     -> returns 2. Cache state: (3,2) [3 is already the MRU, so the order is unchanged]

Output:

[1, -1, 2]

Constraints

  • 1 <= capacity <= 10^4
  • The keys and values will be integers.
  • The number of operations will be between 1 and 10^5.
  • Both get and put should run in O(1) average time.

Notes

Consider using a combination of a hash map (for O(1) key lookup) and a doubly linked list (for O(1) insertion/deletion and maintaining order) to achieve the desired performance. The hash map will store key-to-node mappings, and the doubly linked list will maintain the recency order.
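A minimal sketch of that design, with sentinel head and tail nodes to simplify the list edge cases (names here are illustrative, not part of the problem statement):

```python
class _Node:
    """Doubly linked list node holding one cache entry."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    """Hash map + doubly linked list; head side is MRU, tail side is LRU."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}  # key -> node
        # Sentinel nodes avoid None checks when the list is empty.
        self.head = _Node(0, 0)
        self.tail = _Node(0, 0)
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_front(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)      # move to the MRU position
        self._add_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._remove(node)
            self._add_front(node)
            return
        if len(self.map) == self.capacity:
            lru = self.tail.prev  # least recently used entry
            self._remove(lru)
            del self.map[lru.key]
        node = _Node(key, value)
        self.map[key] = node
        self._add_front(node)
```

Every operation touches only the hash map and a constant number of list links, so both get and put run in O(1) time.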
