3.2. MAXIMIZING PERFORMANCE

Computer processor speeds have been increasing rapidly over the past several years. Increasing the speed of the processor increases the overall performance of the computer. However, the processor is only one part of the computer, and it still relies on the other components in the system to get its work done. Because all the information the CPU processes must be written to or read from memory, the overall performance of a system is dramatically affected by how fast information can travel between the CPU and main memory.

So, faster memory technologies contribute a great deal to overall system performance. But increasing the speed of the memory itself is only part of the solution. The time it takes for information to travel between memory and the processor is typically longer than the time it takes for the processor to perform its functions. The technologies and innovations described in this section are designed to speed up the communication process between memory and the processor.

CACHE MEMORY

Cache memory is a relatively small amount (normally less than 1MB) of high-speed memory that resides very close to the CPU. Cache memory is designed to supply the CPU with the most frequently requested data and instructions. Because retrieving data from cache takes a fraction of the time that it takes to access it from main memory, having cache memory can save a lot of time. If the information is not in cache, it still has to be retrieved from main memory, but checking cache memory takes so little time, it's worth it. This is analogous to checking your refrigerator for the food you need before running to the store to get it: it's likely that what you need is there; if not, it only took a moment to check.
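
To make this lookup order concrete, here is a minimal Python sketch of the check-cache-first pattern described above (the names cache, main_memory, and read are illustrative, not part of any real memory-controller interface):

    # Illustrative only: a dict stands in for the small, fast cache,
    # and a second dict stands in for the larger, slower main memory.
    cache = {}
    main_memory = {addr: "data@%d" % addr for addr in range(1024)}

    def read(addr):
        if addr in cache:              # check the "refrigerator" first
            return cache[addr]         # cache hit: fast path
        value = main_memory[addr]      # cache miss: go to the "store"
        cache[addr] = value            # keep a copy for next time
        return value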

The concept behind caching is the "80/20" rule, which states that of all the programs, information, and data on your computer, about 20% of it is used about 80% of the time. (This 20% data might include the code required for sending or deleting email, saving a file onto your hard drive, or simply recognizing which keys you've touched on your keyboard.) Conversely, the remaining 80% of the data in your system gets used about 20% of the time. Cache memory makes sense because there's a good chance that the data and instructions the CPU is using now will be needed again.

HOW CACHE MEMORY WORKS

Cache memory is like a "hot list" of instructions needed by the CPU. The memory controller saves in cache each instruction the CPU requests; each time the CPU gets an instruction it needs from cache - called a "cache hit" - that instruction moves to the top of the "hot list." When cache is full and the CPU calls for a new instruction, the system overwrites the data in cache that hasn't been used for the longest period of time. This way, the high priority information that's used continuously stays in cache, while the less frequently used information drops out.
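
The "hot list" described above amounts to a least-recently-used (LRU) replacement policy. The following Python fragment is a simplified software model of it, assuming a fixed capacity of four entries; real memory controllers implement this logic in hardware:

    from collections import OrderedDict

    CAPACITY = 4                # assumed size of the "hot list"
    cache = OrderedDict()       # insertion order tracks recency of use

    def access(addr, fetch_from_memory):
        if addr in cache:
            cache.move_to_end(addr)          # cache hit: move to the "top"
            return cache[addr]
        value = fetch_from_memory(addr)      # miss: read from main memory
        if len(cache) >= CAPACITY:
            cache.popitem(last=False)        # evict the least recently used
        cache[addr] = value
        return value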

LEVELS OF CACHE

Today, most cache memory is incorporated into the processor chip itself; however, other configurations are possible. In some cases, a system may have cache located inside the processor, just outside the processor on the motherboard, or in a memory cache socket near the CPU that can hold a cache memory module. Whatever the configuration, any cache memory component is assigned a "level" according to its proximity to the processor. For example, the cache closest to the processor is called Level 1 (L1) Cache, the next level of cache is numbered L2, then L3, and so on. Computers often have other types of caching in addition to cache memory. For example, sometimes the system uses main memory as a cache for the hard drive. While we won't discuss these scenarios here, it's important to note that the term cache can refer to other storage technologies as well as to memory.

You might wonder: if having cache memory near the processor is so beneficial, why isn't cache memory used for all of main memory? For one thing, cache memory typically uses a type of memory chip called SRAM (Static RAM), which is more expensive and requires more space per megabyte than the DRAM typically used for main memory. Also, while cache memory does improve overall system performance, it does so only up to a point. The real benefit of cache memory is in storing the most frequently used instructions. A larger cache would hold more data, but if that data isn't needed frequently, there's little benefit to having it near the processor.

It can take as long as 195ns for main memory to satisfy a memory request from the CPU. External cache can satisfy a memory request from the CPU in as little as 45ns.
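
With these figures you can estimate the average (effective) access time for a given cache hit rate. Assuming, for illustration, that 90% of requests are satisfied from cache: 0.90 x 45ns + 0.10 x 195ns = 60ns on average, roughly three times faster than the 195ns needed when every request goes to main memory.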

SYSTEM BOARD LAYOUT

As you've probably figured out, the placement of memory modules on the system board has a direct effect on system performance. Because local memory must hold all the information the CPU needs to process, the speed at which the data can travel between memory and the CPU is critical to the overall performance of the system. And because exchanges of information between the CPU and memory are so intricately timed, the distance between the processor and the memory becomes another critical factor in performance.

INTERLEAVING

The term interleaving refers to a process in which the CPU alternates communication between two or more memory banks. Interleaving technology is typically used in larger systems such as servers and workstations. Here's how it works: every time the CPU addresses a memory bank, the bank needs about one clock cycle to "reset" itself. The CPU can save processing time by addressing a second bank while the first bank is resetting. Interleaving can also function within the memory chips themselves to improve performance. For example, the memory cells inside an SDRAM chip are divided into two independent cell banks, which can be activated simultaneously. Interleaving between the two cell banks produces a continuous flow of data. This cuts down the length of the memory cycle and results in faster transfer rates.
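
The timing benefit is easy to see in a toy model. The Python sketch below assumes each access takes one clock cycle and each bank then needs one cycle to reset; the figures it prints are illustrative, not measurements of real hardware:

    ACCESS = 1   # assumed cycles per access
    RESET = 1    # assumed cycles for a bank to "reset"

    def total_cycles(num_accesses, banks):
        clock = 0
        ready = [0] * banks                  # cycle at which each bank is free
        for i in range(num_accesses):
            bank = i % banks                 # alternate between banks
            clock = max(clock, ready[bank]) + ACCESS
            ready[bank] = clock + RESET      # bank busy while resetting
        return clock

    print(total_cycles(8, banks=1))   # 15: every access waits out a reset
    print(total_cycles(8, banks=2))   # 8: resets overlap with accesses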

BURSTING

Bursting is another time-saving technology. The purpose of bursting is to provide the CPU with additional data from memory based on the likelihood that it will be needed. So, instead of the CPU retrieving information from memory one piece at a time, it grabs a block of information from several consecutive addresses in memory. This saves time because there's a statistical likelihood that the next data address the processor requests will be sequential to the previous one. This way, the CPU gets the instructions it needs without having to send an individual request for each one. Bursting can work with many different types of memory and can function when reading or writing data.
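
As a sketch, bursting replaces many single-word requests with one request for a block of consecutive addresses. The burst length of four below is an assumption for illustration:

    BURST_LENGTH = 4    # assumed number of consecutive words per burst

    def burst_read(memory, start_addr):
        # One request returns BURST_LENGTH consecutive words, on the bet
        # that the CPU will ask for the neighboring addresses next.
        return [memory[start_addr + i] for i in range(BURST_LENGTH)]

    memory = {addr: addr * 2 for addr in range(64)}
    print(burst_read(memory, 16))    # words at 16..19 -> [32, 34, 36, 38]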

Both bursting and pipelining became popular at about the same time that EDO technology became available. EDO chips that featured these functions were called "Burst EDO" or "Pipeline Burst EDO" chips.

PIPELINING

Pipelining is a computer processing technique in which a task is divided into a series of stages, with some of the work completed at each stage. By dividing a larger task into smaller, overlapping tasks, pipelining improves performance beyond what is possible with non-pipelined processing. Once the flow through a pipeline is started, the execution rate of instructions is high, regardless of the number of stages through which they progress.
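
A back-of-the-envelope model shows why the stage count barely matters once the pipeline is full. Assuming stages of one cycle each (an illustration, not a real processor):

    def non_pipelined_cycles(tasks, stages):
        return tasks * stages              # each task runs start to finish

    def pipelined_cycles(tasks, stages):
        return stages + (tasks - 1)        # fill time, then one result/cycle

    print(non_pipelined_cycles(100, 4))    # 400 cycles
    print(pipelined_cycles(100, 4))        # 103 cycles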
