Figure: the level-1 data cache in an AMD Athlon is two-way set associative.
The placement policy decides where in the cache a copy of a particular entry of main memory will be stored. If the placement policy is free to choose any entry in the cache to hold the copy, the cache is called fully associative. At the other extreme, if each entry in main memory can go in just one place in the cache, the cache is direct-mapped.
Many caches implement a compromise in which each entry in main memory can go to any one of N places in the cache; these are described as N-way set associative. [10] For example, the level-1 data cache in an AMD Athlon is two-way set associative, which means that any particular location in main memory can be cached in either of two locations in the level-1 data cache.
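The lookup described above can be sketched in code. The following is a minimal illustrative model, not any real CPU's design; the block size, set count, and LRU replacement policy are all assumptions chosen for the example:

```python
# A toy model of an N-way set-associative cache lookup: an address is
# split into a tag and a set index, and only the N ways of that one set
# are searched on each access.

BLOCK_SIZE = 64      # bytes per cache line (assumed)
NUM_SETS = 128       # number of sets (assumed)
WAYS = 2             # two-way set associative, as in the Athlon example

# Each set holds up to WAYS tags, ordered least- to most-recently used.
# A real cache would also store the cached data alongside each tag.
cache = [[] for _ in range(NUM_SETS)]

def split_address(addr):
    """Split a byte address into (tag, set index)."""
    block = addr // BLOCK_SIZE
    return block // NUM_SETS, block % NUM_SETS

def access(addr):
    """Return True on a hit; on a miss, fill the line, evicting the LRU way."""
    tag, index = split_address(addr)
    ways = cache[index]
    if tag in ways:
        ways.remove(tag)       # move to most-recently-used position
        ways.append(tag)
        return True
    if len(ways) == WAYS:      # set is full: evict least recently used
        ways.pop(0)
    ways.append(tag)
    return False
```

With WAYS = 2, two addresses whose blocks map to the same set (for example, addresses exactly BLOCK_SIZE * NUM_SETS bytes apart) can both reside in the cache at once, which a direct-mapped cache could not do.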
Choosing the right value of associativity involves a trade-off. If there are ten places to which the placement policy could have mapped a memory location, then ten cache entries must be searched to check whether that location is in the cache. [9] Checking more places takes more power and chip area, and potentially more time. On the other hand, caches with more associativity suffer fewer misses (see conflict misses), so the CPU wastes less time reading from the slow main memory.
The general guideline is that doubling the associativity, from direct-mapped to two-way, or from two-way to four-way, has about the same effect on the hit rate as doubling the cache size. However, increasing associativity beyond four does not improve the hit rate as much, [11] and is generally done for other reasons (see virtual aliasing). Some CPUs can dynamically reduce the associativity of their caches in low-power states, as a power-saving measure. [12]