146. LRU Cache
Description
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.
Implement the LRUCache class:
- `LRUCache(int capacity)`: Initialize the LRU cache with positive size `capacity`.
- `int get(int key)`: Return the value of the `key` if the key exists, otherwise return `-1`.
- `void put(int key, int value)`: Update the value of the `key` if the `key` exists. Otherwise, add the `key`-`value` pair to the cache. If the number of keys exceeds the `capacity` from this operation, evict the least recently used key.
The functions get and put must each run in O(1) average time complexity.
Example 1
Input:
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output: [null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation:
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1); // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2); // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1); // return -1 (not found)
lRUCache.get(3); // return 3
lRUCache.get(4); // return 4
Constraints
- 1 <= capacity <= 3000
- 0 <= key <= 10^4
- 0 <= value <= 10^5
- At most 2 * 10^5 calls will be made to `get` and `put`.
Solution: Doubly-Linked List + Hash Map
We combine a HashMap for O(1) key lookup with a doubly-linked list to track recency order — the head holds the most recently used node, the tail holds the least recently used.
Dummy sentinel head and tail nodes are added so that every insertion just rewires their neighbors, with no empty-list edge cases to handle.
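A minimal sketch of the sentinel trick, with hypothetical `Node`, `addToHead`, and `lruKey` names (not from the original): because `head.next` and `tail.prev` are never null, insertion after the dummy head needs no empty-list special case.

```java
public class SentinelDemo {
    // Doubly-linked node; stores the key so eviction can also remove the map entry later.
    static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    final Node head = new Node(0, 0); // dummy sentinel: head.next is the MRU node
    final Node tail = new Node(0, 0); // dummy sentinel: tail.prev is the LRU node

    SentinelDemo() {
        head.next = tail;
        tail.prev = head;
    }

    // Splice a node in right after the dummy head. This same rewiring works
    // even when the list holds only the two sentinels.
    void addToHead(Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }

    int lruKey() { return tail.prev.key; }

    public static void main(String[] args) {
        SentinelDemo d = new SentinelDemo();
        d.addToHead(new Node(1, 1));
        d.addToHead(new Node(2, 2)); // key 2 is now MRU; key 1 is LRU
        System.out.println(d.lruKey()); // prints 1
    }
}
```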
- `get(key)`: Look up the node in the map. If missing, return `-1`. Otherwise move the node to the head (marking it recently used) and return its value.
- `put(key, value)`: If the key exists, update its value and move it to the head. If it's new, push a new node to the head. If `caches.size` exceeds `capacity`, remove the node just before the dummy tail, which is the LRU entry.
caches.size gives the exact entry count directly because dummy head and tail nodes are never inserted into the map, so no offset arithmetic is needed.
- Time Complexity: O(1) for both `get` and `put`. HashMap lookup is O(1), and all linked list operations (insert at head, remove by node reference) are O(1).
- Space Complexity: O(n) where n is `capacity`. The HashMap and linked list each hold at most `capacity` entries.
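The approach above can be sketched as follows. This is a reconstruction under the stated design, not the original listing; the `Node` class and the `addToHead`/`unlink`/`moveToHead` helper names are assumptions, while `caches` follows the name used in the text.

```java
import java.util.HashMap;
import java.util.Map;

public class LRUCache {
    // Doubly-linked node; stores the key so eviction can delete the map entry too.
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<Integer, Node> caches = new HashMap<>();
    private final Node head = new Node(0, 0); // dummy sentinel: head.next is MRU
    private final Node tail = new Node(0, 0); // dummy sentinel: tail.prev is LRU

    public LRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public int get(int key) {
        Node node = caches.get(key);
        if (node == null) return -1;
        moveToHead(node); // mark as most recently used
        return node.value;
    }

    public void put(int key, int value) {
        Node node = caches.get(key);
        if (node != null) {
            node.value = value;
            moveToHead(node);
            return;
        }
        node = new Node(key, value);
        caches.put(key, node);
        addToHead(node);
        if (caches.size() > capacity) {
            Node lru = tail.prev; // node just before the dummy tail is the LRU entry
            unlink(lru);
            caches.remove(lru.key);
        }
    }

    private void addToHead(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    private void unlink(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void moveToHead(Node node) {
        unlink(node);
        addToHead(node);
    }

    public static void main(String[] args) {
        // Replays Example 1 from the problem statement.
        LRUCache c = new LRUCache(2);
        c.put(1, 1);
        c.put(2, 2);
        System.out.println(c.get(1)); // 1
        c.put(3, 3);                  // evicts key 2
        System.out.println(c.get(2)); // -1
        c.put(4, 4);                  // evicts key 1
        System.out.println(c.get(1)); // -1
        System.out.println(c.get(3)); // 3
        System.out.println(c.get(4)); // 4
    }
}
```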