Address: Place where a data value resides in memory

Bandwidth: The rate at which data can be transferred over a bus or other communication path, typically measured in bits or bytes per second

Bit: Smallest data size; either a 0 or a 1

Branch instruction: A decision instruction (similar to a fork in the road)

Bus: A signal path that serves multiple devices or multiple points on a circuit board

Cache coherence: The property that every cached copy of a memory location is valid and current, i.e. it has not been made stale by an update to the same location in another cache or in main memory

Cache miss: A data value does not reside in cache and must be obtained from a higher level of memory

Clock gating: The ability to turn off the clock to a chip, or to part of a chip, when it is idle in order to save power

Coherence protocol: A protocol, followed by all caches in the system, that keeps every cached copy of a memory location correct and current
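
As an illustration, here is a minimal sketch in C of the state transitions of a simplified MSI (Modified/Shared/Invalid) protocol, one common family of coherence protocols; the enum names and the main() walkthrough are illustrative, not taken from any particular processor.

    #include <stdio.h>

    /* Minimal sketch of a simplified MSI coherence protocol: each cached
       copy of a block is Modified, Shared, or Invalid, and changes state on
       local reads/writes and on bus traffic observed from other caches. */

    typedef enum { INVALID, SHARED, MODIFIED } msi_state;
    typedef enum { LOCAL_READ, LOCAL_WRITE, BUS_READ, BUS_WRITE } msi_event;

    msi_state msi_next(msi_state s, msi_event e) {
        switch (s) {
        case INVALID:
            if (e == LOCAL_READ)  return SHARED;    /* read miss: fetch block */
            if (e == LOCAL_WRITE) return MODIFIED;  /* write miss: fetch and own it */
            return INVALID;
        case SHARED:
            if (e == LOCAL_WRITE) return MODIFIED;  /* upgrade: invalidate other copies */
            if (e == BUS_WRITE)   return INVALID;   /* another cache wrote: drop our copy */
            return SHARED;
        case MODIFIED:
            if (e == BUS_READ)    return SHARED;    /* write back, then share the block */
            if (e == BUS_WRITE)   return INVALID;   /* write back, then drop the block */
            return MODIFIED;
        }
        return INVALID;
    }

    int main(void) {
        msi_state s = INVALID;
        s = msi_next(s, LOCAL_READ);   /* -> SHARED   */
        s = msi_next(s, LOCAL_WRITE);  /* -> MODIFIED */
        s = msi_next(s, BUS_READ);     /* -> SHARED   */
        printf("final state: %d\n", s);
        return 0;
    }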

Data value prediction: Prediction of an actual data value rather than a branch outcome; more complicated, and typically lower in accuracy, than branch prediction
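
One simple form is a "last value" predictor, sketched below in C under the assumption of a small direct-mapped table indexed by instruction address; the table size, the instruction address, and the example values are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of a "last value" data value predictor: a small table indexed
       by (hashed) instruction address predicts that an instruction will
       produce the same value it produced the previous time it executed. */

    #define TABLE_SIZE 1024

    static uint64_t last_value[TABLE_SIZE];

    static uint64_t predict(uint64_t pc) {
        return last_value[pc % TABLE_SIZE];        /* guess: same value as before */
    }

    static void train(uint64_t pc, uint64_t actual) {
        last_value[pc % TABLE_SIZE] = actual;      /* remember the real outcome */
    }

    int main(void) {
        uint64_t pc = 0x400a10;                    /* hypothetical load instruction */
        uint64_t values[] = { 7, 7, 7, 9 };        /* values it actually produces */
        int correct = 0;

        for (int i = 0; i < 4; i++) {
            if (predict(pc) == values[i])
                correct++;
            train(pc, values[i]);
        }
        printf("correct predictions: %d of 4\n", correct);
        return 0;
    }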

Distributed memory model: Each core has its own cache

Dynamic/software scheduling: Reorganizing a loop so that each new iteration is made up of instructions chosen from different iterations of the original loop (software pipelining)
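
The sketch below shows the idea in C for a simple loop that adds a scalar to every array element; the function names are illustrative, and a real compiler would perform the transformation at the instruction level rather than in source code.

    #include <stddef.h>

    /* Original loop: each iteration loads x[i], adds s, and stores the result. */
    void add_scalar(double *x, size_t n, double s) {
        for (size_t i = 0; i < n; i++)
            x[i] = x[i] + s;
    }

    /* Software-pipelined sketch: each new iteration mixes the load from a
       later original iteration with the add/store of the current one, so the
       load latency of iteration i+1 overlaps the arithmetic of iteration i. */
    void add_scalar_pipelined(double *x, size_t n, double s) {
        if (n == 0)
            return;
        double cur = x[0];                 /* prologue: first load */
        for (size_t i = 0; i + 1 < n; i++) {
            double next = x[i + 1];        /* load taken from iteration i+1 */
            x[i] = cur + s;                /* add/store belonging to iteration i */
            cur = next;
        }
        x[n - 1] = cur + s;                /* epilogue: final add/store */
    }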

Homogeneous: The cores are exactly the same, containing identical parts

I/O: input/output

L1 cache: Fastest memory and closest to the processor

L2 cache: Slightly slower than the L1 cache, but much faster than main memory

Loop unrolling: A programming or compiler strategy whereby the instructions executed within a loop body are copied one or more times to reduce (or eliminate) the number of times the loop branch and counter update are executed
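
A minimal C sketch, assuming a loop that scales an array by a constant; the unroll factor of 4 and the remainder loop are illustrative.

    #include <stddef.h>

    /* Original loop: one compare/branch and one index update per element. */
    void scale(float *a, size_t n, float k) {
        for (size_t i = 0; i < n; i++)
            a[i] *= k;
    }

    /* Unrolled by a factor of 4: the body is copied four times, so the
       loop overhead is paid once per four elements instead of per element. */
    void scale_unrolled(float *a, size_t n, float k) {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            a[i]     *= k;
            a[i + 1] *= k;
            a[i + 2] *= k;
            a[i + 3] *= k;
        }
        for (; i < n; i++)             /* remainder when n is not a multiple of 4 */
            a[i] *= k;
    }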

MIMD: Multiple instructions, multiple data; Multiple computer instructions, which may or may not be the same, and which may or may not be synchronized with each other, perform actions simultaneously on two or more pieces of data

Misprediction rate: The fraction of executed branch instructions whose outcome is predicted incorrectly (mispredicted branches divided by total branches)

Multithread: The capability of a processor core to switch to another processing thread, i.e. another set of logically connected instructions that makes up (part of) a process

Multicore processor: A processor with two or more cores on a single chip, together with the communication mechanism and memory model that connect them

Neural network-based predictor: A branch predictor that bases its prediction on a simple neural model (such as a perceptron) applied to the branch history; its main advantage is the ability to exploit long histories while requiring only linear growth in hardware resources
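
The sketch below shows one possible perceptron-style predictor in C: the prediction is the sign of a dot product between small integer weights and the recent global branch history. The table size, history length, and training threshold are illustrative, and weight saturation is omitted for brevity.

    #include <stdint.h>
    #include <stdlib.h>

    #define HISTORY_LEN 32
    #define NUM_PERCEPTRONS 512
    #define THRESHOLD 60               /* keep training while |sum| is small */

    static int8_t weights[NUM_PERCEPTRONS][HISTORY_LEN + 1];
    static int history[HISTORY_LEN];   /* +1 = taken, -1 = not taken */

    static int dot(uint64_t pc) {
        int8_t *w = weights[pc % NUM_PERCEPTRONS];
        int sum = w[0];                /* bias weight */
        for (int i = 0; i < HISTORY_LEN; i++)
            sum += w[i + 1] * history[i];
        return sum;
    }

    int predict_taken(uint64_t pc) {
        return dot(pc) >= 0;           /* sign of the dot product */
    }

    void train(uint64_t pc, int taken) {   /* taken is 1 or 0 */
        int outcome = taken ? 1 : -1;
        int sum = dot(pc);
        int8_t *w = weights[pc % NUM_PERCEPTRONS];

        /* train on a misprediction or when the output was not confident */
        if ((sum >= 0) != taken || abs(sum) <= THRESHOLD) {
            w[0] += outcome;
            for (int i = 0; i < HISTORY_LEN; i++)
                w[i + 1] += outcome * history[i];
        }

        /* shift the actual outcome into the global history */
        for (int i = HISTORY_LEN - 1; i > 0; i--)
            history[i] = history[i - 1];
        history[0] = outcome;
    }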

Off-chip: Not located directly on the microprocessor chip

Out-of-order: Instructions are not processed in the same order in which they were received

Parallel processing: Concurrent or simultaneous execution of two or more processes, or programs within the same processor, as contrasted with serial or sequential processing

Power processing element: The main general-purpose core of the CELL processor

Pre-fetched: Fetched from memory before it is actually needed, in anticipation of an upcoming instruction or data access

Replacement policy: The policy that decides which cache block to evict when a new block must be brought into a full cache set
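
A common choice is least recently used (LRU). The C sketch below tracks one set of a hypothetical 4-way cache and evicts the way that has gone longest without being touched; real hardware usually approximates this with a few status bits per line (pseudo-LRU).

    #include <stdint.h>

    #define WAYS 4

    typedef struct {
        uint64_t tag[WAYS];
        int      valid[WAYS];
        unsigned age[WAYS];            /* larger age = touched longer ago */
    } cache_set;

    /* Returns the way that now holds 'tag'; evicts the LRU way on a miss. */
    int access_set(cache_set *s, uint64_t tag) {
        int hit = -1, victim = 0;

        for (int w = 0; w < WAYS; w++) {
            if (s->valid[w] && s->tag[w] == tag)
                hit = w;               /* already cached: reuse this way */
            if (!s->valid[w])
                victim = w;            /* prefer filling an empty way */
        }
        if (hit < 0) {                 /* miss */
            if (s->valid[victim])      /* no empty way: evict the oldest */
                for (int w = 1; w < WAYS; w++)
                    if (s->age[w] > s->age[victim])
                        victim = w;
            s->tag[victim] = tag;
            s->valid[victim] = 1;
            hit = victim;
        }
        for (int w = 0; w < WAYS; w++) /* every line ages by one access */
            s->age[w]++;
        s->age[hit] = 0;               /* this line was touched just now */
        return hit;
    }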

Register renaming: A technique used to avoid unnecessary serialization of program operations caused by the reuse of architectural registers by those operations
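
A minimal C sketch of the idea, assuming a simple map table and a free list kept as a counter; the register counts and instruction format are illustrative. Both writes to r1 below end up in different physical registers, so they no longer serialize.

    #include <stdio.h>

    #define ARCH_REGS 8
    #define PHYS_REGS 32               /* pool size; exhaustion not handled here */

    static int map_table[ARCH_REGS];   /* architectural -> physical mapping */
    static int next_free = ARCH_REGS;  /* free list kept as a simple counter */

    /* Rename one instruction "rd = op rs1, rs2" and print the renamed form. */
    void rename(const char *op, int rd, int rs1, int rs2) {
        int p1 = map_table[rs1];       /* sources read the current mapping */
        int p2 = map_table[rs2];
        int pd = next_free++;          /* destination gets a fresh register */
        map_table[rd] = pd;
        printf("%s p%d = p%d, p%d   (arch r%d = r%d, r%d)\n",
               op, pd, p1, p2, rd, rs1, rs2);
    }

    int main(void) {
        for (int r = 0; r < ARCH_REGS; r++)
            map_table[r] = r;          /* initial identity mapping */

        /* Both instructions write r1; after renaming they write different
           physical registers and can be in flight at the same time. */
        rename("add", 1, 2, 3);        /* r1 = r2 + r3 */
        rename("mul", 1, 4, 5);        /* r1 = r4 * r5 */
        rename("sub", 6, 1, 2);        /* r6 = r1 - r2: reads the newest r1 */
        return 0;
    }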

Reorder buffer: Puts instructions back into program order so that, even if they are issued and executed out of order, they are still committed in order

Scalable: Able to be extended with additional units (e.g. more cores) that work together correctly, without the design breaking down as the count grows

Shared memory model: All cores share the same cache

Superscalar: A processor architecture in which the processor can execute multiple instructions (typically two or four) per instruction cycle

Synergistic processing element: The smaller, more numerous co-processor cores in the CELL processor

Trace cache: A cache into which a trace of the upcoming instructions, in predicted program order, is read; it has many advantages, again aimed at minimizing the delays the processor incurs in instruction fetch and on wrong predictions

Transistor: An electronic device used to control the flow of electricity

Vdd: Supply voltage (the voltage powering a chip)

VLIW: Very long instruction word; The use of large instruction words to keep many functional units busy in parallel

References:

http://www.m2ktech.com/hardware_glossary.htm

http://www.top500.org/2007_overview_recent_supercomputers/glossary_terms

http://acts.nersc.gov/glossary.html

http://www.bdti.com/articles/dspdictionary.html

http://en.wikipedia.org/wiki/

http://www.iec.net/Browse05/GLSP.html

http://www.cse.iitb.ac.in/~br/iitk-webpage/courses/cs422-spring2004/slides/lec15.pdf