146. LRU Cache

Solve on LeetCode


Description

Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

  • LRUCache(int capacity) Initialize the LRU cache with positive size capacity.
  • int get(int key) Return the value of the key if the key exists, otherwise return -1.
  • void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.

Example 1

Input:

["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]

Output: [null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation:

LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // return -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // return -1 (not found)
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4

Constraints

  • 1 <= capacity <= 3000
  • 0 <= key <= 10^4
  • 0 <= value <= 10^5
  • At most 2 * 10^5 calls will be made to get and put.

Solution: Doubly-Linked List + Hash Map

We combine a HashMap for O(1) key lookup with a doubly-linked list to track recency order — the head holds the most recently used node, the tail holds the least recently used.

Dummy sentinel head and tail nodes are added so that every insertion just rewires their neighbors, with no empty-list edge cases to handle.
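As a minimal sketch of the sentinel idea (using a simplified node type, separate from the full `CacheNode` solution below), the wiring looks like this:

```typescript
// Simplified sketch: two dummy sentinels mean insertion never has to
// special-case an empty list.
type SketchNode = { key?: number; prev?: SketchNode; next?: SketchNode };

const head: SketchNode = {}; // dummy head (most recently used side)
const tail: SketchNode = {}; // dummy tail (least recently used side)
head.next = tail; // an "empty" list is just head <-> tail
tail.prev = head;

// Inserting right after the dummy head rewires only two neighbors,
// whether or not the list was empty beforehand.
function insertAtHead(node: SketchNode): void {
    node.prev = head;
    node.next = head.next;
    head.next!.prev = node;
    head.next = node;
}

insertAtHead({ key: 1 }); // head <-> 1 <-> tail
insertAtHead({ key: 2 }); // head <-> 2 <-> 1 <-> tail
```

The same four pointer assignments handle the first insertion and every later one, which is exactly why the sentinels remove the edge cases.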

  • get(key) — Look up the node in the map. If missing, return -1. Otherwise move the node to the head (marking it recently used) and return its value.
  • put(key, value) — If the key exists, update its value and move its node to the head. If it's new, insert a new node at the head; if the map's size now exceeds capacity, remove the node just before the dummy tail — that's the LRU entry.

caches.size gives the exact entry count directly because dummy head and tail nodes are never inserted into the map, so no offset arithmetic is needed.

  • Time Complexity: O(1) for both get and put — HashMap lookup is O(1), and all linked list operations (insert at head, remove by node reference) are O(1)
  • Space Complexity: O(n) where n is capacity — the HashMap and linked list each hold at most capacity entries
interface CacheNode {
    key?: number;
    value?: number;
    prev?: CacheNode;
    next?: CacheNode;
}

class LRUCache {
    private maxLength: number;
    private caches: Map<number, CacheNode>;
    private head: CacheNode = {};
    private tail: CacheNode = {};

    constructor(capacity: number) {
        this.maxLength = capacity;
        this.caches = new Map();

        // 1. dummy head and tail sentinels keep every insert/remove O(1)
        // 2. no need to care whether the linked list is empty or not
        this.head.next = this.tail;
        this.tail.prev = this.head;
    }

    private addNode(key: number, node: CacheNode) {
        // Insert the node right after the dummy head
        node.prev = this.head;
        node.next = this.head.next;

        // Rewire the old first node (or the dummy tail) back to the new node
        this.head.next!.prev = node;
        this.head.next = node;

        // Register the node in the map
        this.caches.set(key, node);
    }

    private removeNode(node: CacheNode) {
        if (!node.prev || !node.next) return; // safety check

        const prev = node.prev;
        const next = node.next;

        // Unlink the node by wiring its neighbors to each other
        prev.next = next;
        next.prev = prev;

        // Delete from cache
        this.caches.delete(node.key!);
    }

    private moveToHead(node: CacheNode) {
        this.removeNode(node);
        this.addNode(node.key!, node);
    }

    get(key: number): number {
        const node = this.caches.get(key);

        if (!node) return -1;

        // Move the node to the head of the linked list
        this.moveToHead(node);

        return node.value!;
    }

    put(key: number, value: number): void {
        const node = this.caches.get(key);

        if (!node) {
            const newNode: CacheNode = {
                key,
                value,
            };

            this.addNode(key, newNode);

            // caches only stores real entries — dummy head/tail are never added to the map,
            // so caches.size is an exact count with no offset needed
            if (this.caches.size > this.maxLength) {
                // The node just before the dummy tail is the LRU entry
                const oldTailNode = this.tail.prev;
                this.removeNode(oldTailNode!);
            }

        } else {
            node.value = value;
            this.moveToHead(node);
        }
    }
}

/**
 * Your LRUCache object will be instantiated and called as such:
 * var obj = new LRUCache(capacity)
 * var param_1 = obj.get(key)
 * obj.put(key,value)
 */
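As an aside, in JavaScript/TypeScript the built-in Map iterates its keys in insertion order, so a variant sketch can drop the linked list entirely: re-inserting a key on every access moves it to the "most recent" end, and the first key in iteration order is the eviction candidate. This is an alternative technique, not the solution above, and the class name here is just for illustration:

```typescript
// Alternative sketch: LRU cache backed only by Map's insertion order.
class LRUCacheMap {
    private cache = new Map<number, number>();

    constructor(private capacity: number) {}

    get(key: number): number {
        if (!this.cache.has(key)) return -1;
        const value = this.cache.get(key)!;
        // delete + set moves the key to the "most recent" end
        this.cache.delete(key);
        this.cache.set(key, value);
        return value;
    }

    put(key: number, value: number): void {
        if (this.cache.has(key)) this.cache.delete(key);
        this.cache.set(key, value);
        if (this.cache.size > this.capacity) {
            // First key in iteration order is the least recently used
            const lruKey = this.cache.keys().next().value!;
            this.cache.delete(lruKey);
        }
    }
}
```

Fetching the first key via `keys().next()` is effectively O(1) in practice, since the iterator starts at the oldest entry; the doubly-linked-list version above makes that O(1) guarantee explicit rather than relying on the Map implementation.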