Valid bits serve as cache indexes

If every location in main memory can be cached in either of two locations in the cache, a logical question is: which of the two? The simplest and most commonly used scheme, shown in the right-hand diagram above, is to use the least significant bit of the memory location's index as the cache index, with two entries for each index.
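As a rough illustration of how an index is derived from an address, the C sketch below assumes an invented geometry of 64-byte lines and 128 sets with two ways each (these numbers are not from the text above): the address splits into a byte offset, a set index, and a tag, and both entries of the selected set are candidates for holding the line.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical geometry: 64-byte lines, 128 sets, 2 ways per set. */
    #define LINE_BYTES 64u
    #define NUM_SETS   128u

    int main(void)
    {
        uint32_t addr = 0x12345678u;

        /* Low bits select the byte within the line, the next bits select
           the set; everything above the set index is stored as the tag. */
        uint32_t offset = addr % LINE_BYTES;
        uint32_t set    = (addr / LINE_BYTES) % NUM_SETS;
        uint32_t tag    = addr / (LINE_BYTES * NUM_SETS);

        printf("offset=%u set=%u tag=0x%x\n",
               (unsigned)offset, (unsigned)set, (unsigned)tag);
        return 0;
    }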

An advantage of this scheme is that the tags stored in the cache do not have to include the part of the main memory address that is implied by the cache index. Because the cache tags have fewer bits, they require fewer transistors, take up less space on the processor circuit board or on the microprocessor chip, and can be read and compared more quickly. LRU replacement is also especially simple, since only one bit needs to be stored for each pair of entries.
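To make the one-bit LRU point concrete, here is a minimal C sketch; the struct set2 layout is invented for illustration. Because each set holds only two entries, a single bit per pair is enough to record which entry was used less recently.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical two-way set: one LRU bit per pair is enough, because
       the "least recently used" way can only be the other one. */
    struct set2 {
        bool     valid[2];
        uint32_t tag[2];
        uint8_t  lru;          /* way (0 or 1) to evict on the next miss */
    };

    /* After a hit in `way`, the other way becomes the eviction candidate. */
    static void touch(struct set2 *s, int way)
    {
        s->lru = (uint8_t)(1 - way);
    }

    /* On a miss, fill the way named by the LRU bit, then update the bit. */
    static int fill(struct set2 *s, uint32_t tag)
    {
        int victim = s->lru;
        s->valid[victim] = true;
        s->tag[victim]   = tag;
        touch(s, victim);
        return victim;
    }

    int main(void)
    {
        struct set2 s = { {false, false}, {0, 0}, 0 };
        printf("filled way %d\n", fill(&s, 0x1a));   /* fills way 0 */
        printf("filled way %d\n", fill(&s, 0x2b));   /* fills way 1 */
        touch(&s, 0);                                /* hit in way 0 ... */
        printf("next victim: %d\n", s.lru);          /* ... so way 1 is LRU */
        return 0;
    }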

One of the advantages of a direct-mapped cache is that it allows simple and fast speculation. Once the address has been computed, the one cache index that might hold a copy of that memory location is known. That cache entry can be read, and the processor can continue working with the data before it finishes checking that the tag actually matches the requested address.
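A simplified model of that speculation, again with invented parameters (a direct-mapped cache of 256 entries with 64-byte lines), might look like the following: the indexed entry's data is handed out immediately, while the tag comparison that would confirm or squash the result completes afterwards.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical direct-mapped cache: 64-byte lines, 256 entries. */
    #define NUM_SETS 256u

    struct line {
        bool     valid;
        uint32_t tag;
        uint32_t data;        /* stand-in for a whole cache line */
    };

    static struct line cache[NUM_SETS];

    /* The index alone names exactly one entry, so its data can be given to
       the consumer right away; the tag comparison resolves afterwards, and
       a mismatch would discard the speculatively used value. */
    static uint32_t read_speculative(uint32_t addr, bool *hit)
    {
        uint32_t set = (addr >> 6) % NUM_SETS;   /* skip 6 offset bits */
        uint32_t tag = addr >> 14;               /* bits above offset+index */

        uint32_t speculative_data = cache[set].data;       /* used now */
        *hit = cache[set].valid && cache[set].tag == tag;  /* checked later */
        return speculative_data;
    }

    int main(void)
    {
        bool hit;
        uint32_t v = read_speculative(0x1234u, &hit);
        printf("data=%u hit=%d\n", (unsigned)v, hit ? 1 : 0);
        return 0;
    }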

The idea of having the processor use the cached data before the tag match completes can also be applied to associative caches. A subset of the tag bits, called a hint, can be used to pick just one of the possible cache entries that map to the requested address. The entry selected by the hint can then be used in parallel with checking the full tag. As described below, the hint technique works best when used in the context of address translation.
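The way-hint idea can be sketched along the same lines; the two-bit hint width and the struct layout below are invented for illustration. A few low bits of each tag are kept alongside the entry, the hint match picks one way right away, and the full tag comparison runs in parallel to confirm or override that choice (if two ways happen to share the same hint bits, the full check is what catches a wrong pick).

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical way hint: the low HINT_BITS of the tag, kept in a small,
       fast field so a way can be chosen before the full tags are compared. */
    #define HINT_BITS 2u
    #define HINT_MASK ((1u << HINT_BITS) - 1u)

    struct way {
        bool     valid;
        uint32_t tag;
        uint8_t  hint;        /* low HINT_BITS of the tag, stored separately */
    };

    /* Pick a way using only the hint; -1 means neither way's hint matches. */
    static int pick_by_hint(const struct way w[2], uint32_t tag)
    {
        uint8_t h = (uint8_t)(tag & HINT_MASK);
        for (int i = 0; i < 2; i++)
            if (w[i].valid && w[i].hint == h)
                return i;     /* speculative choice */
        return -1;
    }

    /* The full comparison runs in parallel with using the hinted way's data
       and confirms (or squashes) the speculative choice. */
    static bool confirm(const struct way *w, uint32_t tag)
    {
        return w->valid && w->tag == tag;
    }

    int main(void)
    {
        struct way set[2] = {
            { true, 0x1a5, 0x1a5 & HINT_MASK },
            { true, 0x2b6, 0x2b6 & HINT_MASK },
        };
        uint32_t tag = 0x2b6;
        int w = pick_by_hint(set, tag);
        printf("hinted way=%d confirmed=%d\n",
               w, (w >= 0 && confirm(&set[w], tag)) ? 1 : 0);
        return 0;
    }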