How does cache work?

Last modified by Microchip on 2023/11/10 11:08

Using the cache is optional, but it is critically important if you want to maximize performance. You can manually load the cache or have it automatically load itself as you execute code.

Once data is in the cache, it can be read and modified without accessing the main memory. If the cache is large enough to hold all the data needed by your application, the main memory may never need to be accessed at all. This enables maximum performance and reduces the power consumed by accessing the main memory.

Cache Hit

When the CPU fetches data from cached memory, the system checks if the required data exists in the cache. If it exists, the data is read directly from the cache with no performance penalty. This is called a "cache hit".
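The hit check described above can be sketched as a small C model. This is an illustrative direct-mapped cache, not the hardware of any specific Microchip device; the line count, line size, and names are assumptions made for the example:

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 16U  /* hypothetical number of cache lines */
#define LINE_SIZE 4U   /* hypothetical line size in bytes */

typedef struct {
    bool     valid;  /* line currently holds data */
    uint32_t tag;    /* upper address bits identifying the cached block */
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* A "cache hit": the line selected by the address is valid and its tag
 * matches the upper bits of the requested address, so the data can be
 * read directly from the cache. */
bool cache_hit(uint32_t addr)
{
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);
    return cache[index].valid && cache[index].tag == tag;
}
```

Real hardware performs this valid-bit and tag comparison in parallel with the access, which is why a hit carries no performance penalty.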



Cache Miss

If the required data does not exist in the cache, hardware fills the cache with data from the main memory. The CPU must wait for the cache fill to complete before reading the data. This is called a "cache miss". A cache miss incurs a performance penalty proportional to the time required to perform the cache fill.
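The miss-and-fill sequence can be sketched as follows. Again this is a hypothetical direct-mapped model, not a specific device; `main_memory` stands in for the slower backing store:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_LINES 16U  /* hypothetical number of cache lines */
#define LINE_SIZE 4U   /* hypothetical line size in bytes */

typedef struct {
    bool     valid;
    uint32_t tag;
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];
static uint8_t main_memory[1024];  /* stand-in for the slow backing store */

uint8_t cache_read(uint32_t addr)
{
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);
    cache_line_t *line = &cache[index];

    if (!line->valid || line->tag != tag) {
        /* Cache miss: fill the whole line from main memory. In real
         * hardware the CPU stalls here until the fill completes. */
        uint32_t base = addr - (addr % LINE_SIZE);  /* line-aligned address */
        memcpy(line->data, &main_memory[base], LINE_SIZE);
        line->tag   = tag;
        line->valid = true;
    }
    /* Hit (or just-filled line): serve the byte from the cache. */
    return line->data[addr % LINE_SIZE];
}
```

Note that a second read of the same address is served entirely from the cache, which is also why the cache and main memory can disagree until a write-back occurs.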



Dirty Cache

If the CPU modifies a location in cache without writing it back to main memory, that cache line is said to be “dirty”. The cache controller keeps track of these dirty lines.


Update Main Memory Before Evicting Data From Cache

At some point, the hardware may need to make room in the cache for new data. Before modified (dirty) data can be evicted from the cache, it must be written back to the main memory. The hardware will do this automatically.


There are two methods the hardware uses to update main memory:

  1. Immediately update the main memory when the cache changes (write-through).
  2. Wait until data is evicted from the cache before updating the main memory (write-back).

You control which method is used.
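The two policies and the write-back-on-eviction rule can be sketched together. This is a simplified software model under the same illustrative direct-mapped assumptions as above, not the behavior of a particular device's cache controller:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_LINES 16U  /* hypothetical number of cache lines */
#define LINE_SIZE 4U   /* hypothetical line size in bytes */

typedef enum { WRITE_THROUGH, WRITE_BACK } write_policy_t;

typedef struct {
    bool     valid;
    bool     dirty;
    uint32_t tag;
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];
static uint8_t main_memory[1024];

/* Write-through: main memory is updated immediately, so the line never
 * becomes dirty. Write-back: only the cache is updated, and the update
 * of main memory is deferred until the line is evicted. */
void cache_write(uint32_t addr, uint8_t value, write_policy_t policy)
{
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;
    cache[index].data[addr % LINE_SIZE] = value;
    if (policy == WRITE_THROUGH) {
        main_memory[addr] = value;  /* immediate update, line stays clean */
    } else {
        cache[index].dirty = true;  /* deferred update until eviction */
    }
}

/* Before a dirty line is replaced, its contents must be copied back to
 * main memory; the hardware does this automatically. */
void evict_line(uint32_t index, uint32_t base_addr)
{
    if (cache[index].dirty) {
        memcpy(&main_memory[base_addr], cache[index].data, LINE_SIZE);
        cache[index].dirty = false;
    }
    cache[index].valid = false;
}
```

Write-through keeps main memory consistent at the cost of a memory access on every write; write-back minimizes memory traffic but requires the eviction step shown above.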
