Recent advances in memory technology have accelerated the adoption of a large-capacity memory layer for key-value stores. RocksDB, one of the most popular key-value stores, exploits this memory layer through an LRU cache to serve fast lookup queries, and it guards the consistency of the LRU cache with mutex locks. However, this design suffers from lock contention, and performance degrades when multiple threads access the LRU cache simultaneously. To address this problem, we propose DCA, a novel LRU cache architecture that exploits an additional hash table to enable parallel lookups. Experimental results show that DCA outperforms Vanilla RocksDB by up to $2.75\times$ on read-intensive workloads.
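
To illustrate the contention problem and the auxiliary-hash-table idea, the following is a minimal C++ sketch, not DCA's actual design: in a conventional mutex-guarded LRU cache every lookup serializes on one lock (a hit must also move its entry in the recency list), whereas here an additional hash table serves lookups under a shared (reader) lock so reads can proceed in parallel, with recency updates deferred and applied in batch on the write path. All class and member names are hypothetical.

```cpp
#include <list>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>
#include <vector>

// Illustrative sketch only (not DCA's real implementation).
class ParallelLookupLru {
 public:
  explicit ParallelLookupLru(size_t capacity) : capacity_(capacity) {}

  // Read path: shared lock on the auxiliary hash table, so concurrent
  // lookups no longer serialize on the LRU list's mutex.
  bool Lookup(const std::string& key, std::string* value) {
    {
      std::shared_lock<std::shared_mutex> rd(table_mu_);
      auto it = table_.find(key);
      if (it == table_.end()) return false;
      *value = it->second;
    }
    // Defer the move-to-front: record the hit in a short critical section.
    std::lock_guard<std::mutex> g(pending_mu_);
    pending_hits_.push_back(key);
    return true;
  }

  // Write path: exclusive lock; also drains deferred recency updates.
  void Insert(const std::string& key, const std::string& value) {
    std::unique_lock<std::shared_mutex> wr(table_mu_);
    ApplyPendingLocked();
    auto it = index_.find(key);
    if (it != index_.end()) {
      it->second->second = value;
      lru_.splice(lru_.begin(), lru_, it->second);  // move-to-front
    } else {
      lru_.emplace_front(key, value);
      index_[key] = lru_.begin();
      if (lru_.size() > capacity_) {
        const std::string& victim = lru_.back().first;  // evict LRU entry
        table_.erase(victim);
        index_.erase(victim);
        lru_.pop_back();
      }
    }
    table_[key] = value;  // keep the lookup table in sync
  }

 private:
  // Caller holds table_mu_ exclusively; replay recorded hits on the list.
  void ApplyPendingLocked() {
    std::vector<std::string> hits;
    {
      std::lock_guard<std::mutex> g(pending_mu_);
      hits.swap(pending_hits_);
    }
    for (const auto& k : hits) {
      auto it = index_.find(k);
      if (it != index_.end()) lru_.splice(lru_.begin(), lru_, it->second);
    }
  }

  using Entry = std::pair<std::string, std::string>;
  size_t capacity_;
  std::shared_mutex table_mu_;
  std::unordered_map<std::string, std::string> table_;  // parallel lookups
  std::list<Entry> lru_;                                // recency order
  std::unordered_map<std::string, std::list<Entry>::iterator> index_;
  std::mutex pending_mu_;
  std::vector<std::string> pending_hits_;  // deferred move-to-front
};
```

The design choice mirrored here is that reads touch only shared state, while the exclusive-lock work (eviction and recency reordering) is amortized onto writers; the paper's actual mechanism for keeping the hash table and LRU list consistent may differ.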