Novel Cache Optimization Strategies for Multicore Processor Architectures
Description
The rapid evolution of multicore processor architectures has intensified the demand for efficient cache management techniques to meet the growing computational and memory requirements of modern applications. Traditional cache optimization approaches often face challenges such as high latency, frequent cache misses, and scalability issues when applied to parallel workloads. This paper proposes novel cache optimization strategies that combine adaptive replacement policies, data prefetching mechanisms, and cooperative caching techniques tailored for multicore environments. Simulation studies conducted on benchmark workloads demonstrate that the proposed strategies reduce cache miss rates by up to 18 percent and improve execution time by nearly 12 percent compared to conventional policies such as LRU and FIFO. Furthermore, energy consumption is optimized through selective prefetching and intelligent block replacement, making the strategies suitable for power-constrained computing platforms. The findings highlight the potential of innovative cache management frameworks to enhance system performance, scalability, and energy efficiency in next-generation multicore processors.
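The abstract benchmarks the proposed strategies against the conventional LRU and FIFO replacement policies. As background, the following is a minimal, illustrative sketch (not the paper's simulator; the synthetic trace and cache capacity are made-up assumptions) of how miss counts under those two baseline policies can be compared on an access trace:

```python
from collections import OrderedDict, deque

def simulate(trace, capacity, policy):
    """Count cache misses for an access trace under LRU or FIFO replacement."""
    misses = 0
    if policy == "lru":
        cache = OrderedDict()  # keys ordered from least to most recently used
        for block in trace:
            if block in cache:
                cache.move_to_end(block)       # refresh recency on a hit
            else:
                misses += 1
                if len(cache) >= capacity:
                    cache.popitem(last=False)  # evict least recently used
                cache[block] = True
    elif policy == "fifo":
        cache = set()
        order = deque()  # insertion order; hits do not change it
        for block in trace:
            if block not in cache:
                misses += 1
                if len(cache) >= capacity:
                    cache.discard(order.popleft())  # evict oldest insertion
                cache.add(block)
                order.append(block)
    else:
        raise ValueError(f"unknown policy: {policy}")
    return misses

# A loop-heavy trace with a hot working set (blocks 0 and 1): LRU keeps the
# hot blocks resident, while FIFO evicts them by insertion age.
trace = [0, 1, 2, 0, 1, 3, 0, 1, 2, 4, 0, 1] * 50
print("LRU misses:", simulate(trace, 4, "lru"))
print("FIFO misses:", simulate(trace, 4, "fifo"))
```

On this kind of reuse-heavy trace, LRU incurs far fewer misses than FIFO, which is why recency-aware and adaptive policies are the natural baselines for the miss-rate reductions reported above.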
Files
- 1760426991_IJAEASEP25V2A95.pdf (456.2 kB)
Additional details
Dates
- Issued: 2025-09-30