
      On Algorithmic Cache Optimization

      Preprint


          Abstract

          We study matrix-matrix multiplication of two matrices, \(A\) and \(B\), each of size \(n \times n\), producing a matrix \(C\) of size \(n \times n\). Our goal is to compute \(C\) as efficiently as possible given a cache: a small set of data values held in fast memory, on which we can perform elementary operations (additions, multiplications, etc.). That is, we attempt to reuse as much data from \(A\), \(B\), and \(C\) as possible during the computation, or equivalently, to serve data from the fast-access cache as often as possible. First, we introduce the matrix-matrix multiplication algorithm. Second, we present a standard two-level memory model that simulates the architecture of a computer, and we explain the LRU (Least Recently Used) cache policy, which is standard in most computers. Third, we introduce a basic Cache Simulator, which has \(\mathcal{O}(M)\) time complexity and therefore limits us to small cache sizes \(M\). We then discuss and model the LFU (Least Frequently Used) cache policy and the explicit-control cache policy. Finally, we introduce the main result of this paper, the \(\mathcal{O}(1)\) Cache Simulator, and use it to compare, experimentally, the savings in time, energy, and communication achieved by the ideal cache-efficient algorithm for matrix-matrix multiplication. The Cache Simulator measures the amount of data movement between the main memory and the cache of the computer. One finding of this project is that, in some cases, there is a significant discrepancy in communication volume between an LRU cache policy and explicit cache control. We propose to alleviate this problem by "tricking" the LRU cache policy: updating the timestamp of the data we want to keep in cache (namely, entries of matrix \(C\)). This gives us the benefits of an explicit cache policy while remaining within the LRU paradigm (the realistic policy on a CPU).
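          The two-level memory model and the LRU "tricking" idea described above can be sketched in a few lines. The following is a minimal illustration, not the paper's simulator: it counts cache misses (i.e., transfers from main memory) under an LRU policy, and shows that re-accessing an entry refreshes its timestamp so that it survives eviction. The class and method names are ours, chosen for the example.

          ```python
          from collections import OrderedDict

          class LRUCacheSimulator:
              """Count cache misses (main-memory transfers) under an LRU policy."""

              def __init__(self, capacity):
                  self.capacity = capacity    # M: number of values the cache can hold
                  self.cache = OrderedDict()  # keys kept in LRU -> MRU order
                  self.misses = 0

              def access(self, key):
                  if key in self.cache:
                      # Hit: refresh the timestamp, making this key most recently used.
                      # This is the "trick": touching entries of C keeps them in cache.
                      self.cache.move_to_end(key)
                  else:
                      # Miss: the value must be moved from main memory into the cache.
                      self.misses += 1
                      if len(self.cache) >= self.capacity:
                          self.cache.popitem(last=False)  # evict least recently used
                      self.cache[key] = True

          sim = LRUCacheSimulator(capacity=2)
          sim.access('a')  # miss
          sim.access('b')  # miss
          sim.access('a')  # hit: 'a' becomes most recently used
          sim.access('c')  # miss: evicts 'b', not the freshly touched 'a'
          ```

          Because each `access` is a dictionary lookup, this naive simulator still scans no data structures of size \(M\) per access; the paper's \(\mathcal{O}(M)\)-vs-\(\mathcal{O}(1)\) distinction concerns its own simulator designs, which this sketch does not reproduce.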


          Author and article information

          Journal
          12 November 2023
          Article
          2311.07615

          http://creativecommons.org/licenses/by/4.0/

          History
          Custom metadata
          20 pages, 3 figures, 2 tables
          cs.DS

          Data Structures and Algorithms
