Embedded Systems

L1 Instruction and data memory and why cache memory is important

L1 instruction and data cache

This post covers L1 instruction memory and L1 data cache memory. Processor instructions may vary in size in order to achieve optimal code density: they can be 16, 32, or 64 bits wide.

Instruction memory stores instructions only, not data. It is typically accessed through instruction fetches or by DMA controllers.

L1 data memory is usually organised into sub-banks that can each be accessed in a single cycle. Spreading data across the sub-banks increases memory performance, because accesses to different sub-banks can proceed in parallel instead of conflicting.
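As a rough illustration of why sub-bank placement matters, the sketch below places two working buffers in different L1 data sub-banks so that accesses to one do not block same-cycle accesses to the other. It assumes a GCC-style toolchain with named linker sections; the section names "l1_data_bank_a" and "l1_data_bank_b" are hypothetical placeholders, and the real names would come from your processor's linker description file.

/* Minimal sketch, assuming GCC-style section placement.  The section
 * names are hypothetical; real names depend on your toolchain and
 * linker script. */
#include <stdint.h>

#define BLOCK 512

/* Keeping the two working buffers in different L1 data sub-banks means
 * an access to one does not block a same-cycle access to the other
 * (for example the core reading samples[] while DMA fills results[]). */
static int16_t samples[BLOCK] __attribute__((section("l1_data_bank_a")));
static int16_t results[BLOCK] __attribute__((section("l1_data_bank_b")));

void scale_block(int16_t gain)
{
    for (int i = 0; i < BLOCK; i++) {
        /* Read from one sub-bank, write to the other. */
        results[i] = (int16_t)((samples[i] * gain) >> 8);
    }
}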

L1 memory is reached over two buses, so up to two fetches can be performed at once. For data outside L1, accesses are performed in pipelined mode: as soon as the first fetch is issued, the second fetch starts, so the processor can have both fetch operations of the same instruction in flight.
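A typical place where the two data buses help is a loop that needs two operand fetches per iteration, as in the sketch below. If both arrays sit in L1 (ideally in different sub-banks) and the compiler pairs the loads, the two fetches can complete in the same cycle rather than back to back; whether that pairing actually happens depends on the target and compiler, so treat this as an illustration, not a guarantee.

/* Sketch: a loop with two independent data fetches per iteration.
 * With both operands in L1 and the loads paired by the compiler, the
 * dual data buses allow both fetches in the same cycle. */
#include <stdint.h>
#include <stddef.h>

int32_t dot_q15(const int16_t *a, const int16_t *b, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++) {
        /* One load from a[], one from b[] each iteration. */
        acc += (int32_t)a[i] * (int32_t)b[i];
    }
    return acc;
}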

Cache memory is a good alternative to adding more L1 memory, which would increase the processor's cost.

A cache is a small amount of fast memory that speeds up access to large amounts of slower memory. It stores the data and instructions the application needs for fast access. A small cache can be mapped onto a relatively large cacheable memory space.
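To get a feel for how small the cache is compared with the space it covers, the back-of-the-envelope sketch below uses assumed sizes (a 16 KB cache in front of a 128 MB cacheable SDRAM region); the numbers are illustrative, not taken from any particular processor.

/* Illustrative arithmetic: how many bytes of cacheable memory compete
 * for each byte of cache.  Sizes are assumptions for this sketch. */
#include <stdio.h>

int main(void)
{
    const unsigned long cache_bytes     = 16UL * 1024;          /* 16 KB  */
    const unsigned long cacheable_bytes = 128UL * 1024 * 1024;  /* 128 MB */

    /* A large ratio is why line replacement and cache hits matter. */
    printf("mapping ratio: %lu : 1\n", cacheable_bytes / cache_bytes);
    return 0;
}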

Each cache sub-bank consists of ways, each way consists of lines, and each cache line holds a number of consecutive bytes. Together, the way and the line give the location where data and instructions can be found.
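The sketch below shows how an address selects a location in a hypothetical 4-way set-associative cache of 16 KB with 32-byte lines (512 lines arranged as 128 sets of 4 ways). The parameters are assumptions for illustration, not a specific processor's cache geometry.

/* Decompose an address into byte offset, set, and tag for an assumed
 * 16 KB, 4-way, 32-byte-line cache. */
#include <stdio.h>
#include <stdint.h>

#define LINE_BYTES   32u                     /* bytes per cache line      */
#define NUM_WAYS     4u                      /* lines per set             */
#define CACHE_BYTES  (16u * 1024u)           /* total cache size          */
#define NUM_SETS     (CACHE_BYTES / (LINE_BYTES * NUM_WAYS))  /* 128 sets */

int main(void)
{
    uint32_t addr = 0x2003A5C4u;             /* example address           */

    uint32_t offset = addr % LINE_BYTES;              /* byte within line  */
    uint32_t set    = (addr / LINE_BYTES) % NUM_SETS; /* which set         */
    uint32_t tag    = addr / (LINE_BYTES * NUM_SETS); /* identifies block  */

    /* The address picks the set; the way is whichever of the 4 lines in
     * that set currently holds (or will receive) the cached block. */
    printf("addr 0x%08X -> set %u, offset %u, tag 0x%X\n",
           addr, set, offset, tag);
    return 0;
}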

Our educational content can also be reached through our Reddit community r/ElectronicsEasy.