Computer Organization Chapter 5


Memory Hierarchy


This chapter mainly discusses how to build the illusion of a fast memory with virtually unlimited capacity.

Questions to ask

  1. What do I already know?
    • I know that memory is organized as a hierarchy, and that the hierarchy reduces the slowdown caused by the speed gap between the disk and the CPU.
  2. What don't I know?
    • Why a hierarchy improves speed, and what the concrete levels of the memory hierarchy are.
  3. What can I learn?
    • How memory is layered; why layering improves speed; how to calculate memory speed concretely.
  4. What can't I learn?
    • How to design a memory system systematically; the concrete hardware implementation of memories such as DRAM.

Abstract

  • Temporal locality, spatial locality, block, line, hit rate, miss rate
  • SRAM, DRAM, address interleaving, EEPROM, tracks, sectors
  • Direct mapped, valid bit, cache-related calculations, cache miss handling, write-through, write buffer, write-back
  • Cache performance evaluation and calculation, set associative
  • Virtual memory, page table, TLB (translation-lookaside buffer)

Terms

  1. Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon
  2. Spatial locality (locality in space): if an item is referenced, items whose addresses are close by will tend to be referenced soon
  3. Block, line: the minimum unit of information that can be either present or not present in the two-level hierarchy is called a block or a line
  4. The hit rate, or hit ratio, is the fraction of memory accesses found in the upper level
  5. Direct mapped: A cache structure in which each memory location is mapped to exactly one location in the cache
  6. write-through: A scheme in which writes always update both the cache and the next lower level of the memory hierarchy, ensuring that data is always consistent between the two.
  7. write buffer: A queue that holds data while the data is waiting to be written to memory.
  8. write-back: A scheme that handles writes by updating values only to the block in the cache, then writing the modified block to the lower level of the hierarchy when the block is replaced.
  9. Handling Cache Misses:
    1. Send the original PC value (current PC – 4) to the memory.
    2. Instruct main memory to perform a read and wait for the memory to complete its access.
    3. Write the cache entry, putting the data from memory in the data portion of the entry, writing the upper bits of the address (from the ALU) into the tag field, and turning the valid bit on.
    4. Restart the instruction execution at the first step, which will refetch the instruction, this time finding it in the cache.
  10. fully associative cache: A cache structure in which a block can be placed in any location in the cache.
  11. set-associative cache: A cache that has a fixed number of locations (at least two) where each block can be placed.
  12. virtual memory: A technique that uses main memory as a “cache” for secondary storage.
  13. physical address An address in main memory.
  14. protection A set of mechanisms for ensuring that multiple processes sharing the processor, memory, or I/O devices cannot interfere, intentionally or unintentionally, with one another by reading or writing each other’s data. These mechanisms also isolate the operating system from a user process.
  15. page fault An event that occurs when an accessed page is not present in main memory.
  16. virtual address An address that corresponds to a location in virtual space and is translated by address mapping to a physical address when memory is accessed.
  17. address translation Also called address mapping. The process by which a virtual address is mapped to an address used to access memory.
  18. segmentation A variable-size address mapping scheme in which an address consists of two parts: a segment number, which is mapped to a physical address, and a segment offset.
  19. page table The table containing the virtual to physical address translations in a virtual memory system. The table, which is stored in memory, is typically indexed by the virtual page number; each entry in the table contains the physical page number for that virtual page if the page is currently in memory.
  20. swap space The space on the disk reserved for the full virtual memory space of a process.
  21. reference bit Also called use bit. A field that is set whenever a page is accessed and that is used to implement LRU or other replacement schemes.
  22. translation-lookaside buffer (TLB) A cache that keeps track of recently used address mappings to try to avoid an access to the page table.
  23. virtually addressed cache A cache that is accessed with a virtual address rather than a physical address.
  24. aliasing A situation in which the same object is accessed by two addresses; can occur in virtual memory when there are two virtual addresses for the same physical page.
  25. physically addressed cache A cache that is addressed by a physical address.
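The direct-mapped and valid-bit terms above become concrete once you see how a byte address is split into tag, index, and offset fields. A minimal sketch, assuming a hypothetical cache geometry of 16-byte blocks and 64 blocks (these parameters are illustrative, not from the text):

```python
# Splitting a byte address for a direct-mapped cache.
# Assumed geometry: 16-byte blocks (4 offset bits), 64 blocks (6 index bits).
BLOCK_SIZE = 16   # bytes per block
NUM_BLOCKS = 64   # blocks in the cache

def split_address(addr):
    offset = addr % BLOCK_SIZE                      # byte within the block
    index = (addr // BLOCK_SIZE) % NUM_BLOCKS       # which cache entry
    tag = addr // (BLOCK_SIZE * NUM_BLOCKS)         # stored in the tag field
    return tag, index, offset

print(split_address(0x1A2B))  # -> (6, 34, 11)
```

On a lookup, the cache compares the stored tag at `index` against the address's tag and checks the valid bit; both must match for a hit.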
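The hit rate and miss rate defined above feed directly into the standard average memory access time (AMAT) formula used for cache performance evaluation. A small sketch; the hit time, miss rate, and miss penalty values below are illustrative assumptions:

```python
# AMAT = hit time + miss rate * miss penalty (all times in clock cycles).
def amat(hit_time, miss_rate, miss_penalty):
    return hit_time + miss_rate * miss_penalty

# Assumed numbers: 1-cycle hit, 5% miss rate, 100-cycle miss penalty.
print(amat(1, 0.05, 100))  # -> 6.0
```

Even a 5% miss rate multiplies the effective access time by six here, which is why reducing the miss rate (or the miss penalty) dominates cache design.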
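The virtual memory terms above (page table, page fault, swap space, TLB) fit together in one translation path, sketched below as a toy model. The 4 KiB page size and the page-table contents are hypothetical:

```python
PAGE_SIZE = 4096                     # assumed page size -> 12-bit page offset
page_table = {0: 7, 1: 3, 2: None}   # virtual page -> physical page (None = on disk)
tlb = {}                             # small cache of recent translations

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                   # TLB hit: no page-table access needed
        ppn = tlb[vpn]
    else:
        ppn = page_table.get(vpn)    # TLB miss: walk the page table in memory
        if ppn is None:
            raise RuntimeError("page fault")  # OS would fetch the page from swap space
        tlb[vpn] = ppn               # cache the translation for next time
    return ppn * PAGE_SIZE + offset  # physical address keeps the same offset

print(hex(translate(0x1234)))  # -> 0x3234
```

Note that only the virtual page number is translated; the page offset passes through unchanged, which is also what makes the TLB lookup fast.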
