Using the cache is optional, but it is critically important if you want to maximize performance.
You can load the cache manually, or let the hardware load it automatically as your code executes.
Once data is in the cache, it can be read and modified without accessing main memory. If the cache is large enough to hold all the data your application needs, main memory may never need to be accessed again. This enables maximum performance and reduces the power consumed by main-memory accesses.
When the CPU fetches data from cacheable memory, the hardware checks whether the required data is already in the cache. If it is, the data is read directly from the cache with no performance penalty. This is called a "cache hit".
If the required data is not in the cache, hardware fills the cache with the data from main memory, and the CPU must wait for the fill to complete before it can read the data. This is called a "cache miss". A cache miss incurs a performance penalty proportional to the time required to perform the cache fill.
If the CPU modifies a location in cache without writing it back to main memory, that cache line is said to be "dirty". The cache controller keeps track of these dirty lines.
Update main memory before evicting data from cache
At some point, the hardware may need to make room in the cache for new data. Before modified (dirty) data can be evicted from the cache, it must be written back to main memory. The hardware will do this automatically.
There are two methods the hardware uses to update main memory:
- Immediately update main memory when cache changes (write-through).
- Wait to update main memory until the data is evicted from the cache (write-back).
You control which method is used.