Florida Institute of Technology Department of Computer Sciences

Computer Architectures

Computer Architecture

A generic digital computer can be represented by a block diagram:


[Block diagram: processor, memory, and input/output units connected by busses]

A processor typically contains 4 modules: the CPU, FPU, MMU, and Cache.

Memory is composed of several modules: the MMU, processor cache, external cache, and RAM. The connections between the components of a computer are called busses. A bus interface controls the movement of data along the busses to different parts of the computer. The busses may carry different numbers of bits and operate at different speeds. The processor bus carries data between the processor, external cache, and RAM; it is sometimes composed of a data bus and an instruction bus. The I/O bus carries data between the I/O devices and the processor (through the bus interface).

The remaining parts of a computer are the input/output (I/O) devices and include the keyboard, mouse, graphics card, monitor, disk controller, and hard disk.

Computer architecture is the study of designing these components for performance (efficiency/speed) and price.

Types of Architectures

There are a few basic types of computer architectures.

All of these basic types fall under what is called the von Neumann architecture, which can be described by the following execution cycle:


pc = 0; /* initialize the program counter */


do {
    instruction = memory[pc++]; /* instruction fetch */
    decode(instruction); /* instruction decode */
    fetch(operands); /* get data */
    execute(instruction); /* execute instruction on data */
    store(results); /* save answer */
} while (instruction != halt);

Accumulator machines

These machines are virtually obsolete; the EDSAC (1950) computer was an accumulator machine. Their simplicity, however, makes them a useful starting point before the other architectures are studied.

[Figure: The accumulator machine architecture]

All operations have the form:

accumulator $\oplus$ operand;
which causes the operation $\oplus$ (say it is addition) to be performed on the value in the accumulator (ACC) and the operand, with the result stored back in the ACC.

There are two instructions to transfer data between the ACC and memory:


load operand; /* transfer data from memory to the accumulator */
store operand; /* transfer data from the accumulator to memory */
The memory data register (MDR) is used to temporarily hold the data in loads and stores. The memory address register (MAR) holds the address in memory that data is to be transferred from or to.

A program counter (PC) contains the address of the next instruction to be executed. An instruction register (IR) holds the instruction being executed.

Accumulator machines have very regular instruction patterns which makes decoding them fast, but they typically require more instructions than other architecture types to perform identical tasks. This space versus time tradeoff is a fundamental question in the design of computers. We can add more (less regular) instructions to reduce program size (space), but decoding instructions and fetching operands might take more time.
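
To make this concrete, the following is a minimal sketch, in Java, of a hypothetical accumulator machine that runs the fetch-decode-execute cycle above over a tiny load/add/store instruction set; the opcode names and encoding are illustrative assumptions, not any real machine's instruction set. Notice that computing a single sum takes three instructions (load, add, store), exactly the kind of extra traffic to and from the ACC that the space versus time tradeoff refers to.


// A minimal sketch of a hypothetical accumulator machine
// (illustrative opcodes and encoding; not a real instruction set).
public class AccumulatorMachine {
    static final int LOAD = 0, ADD = 1, STORE = 2, HALT = 3;

    public static void main(String[] args) {
        int[] data = {5, 7, 0};        // cells 0 and 1 hold operands, cell 2 the result
        int[][] program = {            // acc = data[0]; acc += data[1]; data[2] = acc; halt
            {LOAD, 0}, {ADD, 1}, {STORE, 2}, {HALT, 0}
        };

        int pc = 0;                    // program counter
        int acc = 0;                   // the accumulator
        boolean running = true;

        while (running) {
            int[] ir = program[pc++];              // fetch into the instruction register
            int opcode = ir[0], operand = ir[1];   // decode
            switch (opcode) {                      // execute
                case LOAD:  acc = data[operand];  break;   // memory -> ACC
                case ADD:   acc += data[operand]; break;   // ACC = ACC + memory
                case STORE: data[operand] = acc;  break;   // ACC -> memory
                case HALT:  running = false;      break;
            }
        }
        System.out.println("data[2] = " + data[2]);        // prints data[2] = 12
    }
}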

Some authors call the Intel 80x86 architecture an extended accumulator machine since most registers have a dedicated use.

Stack machines

Stack machines are similar to accumulator machines except that the single accumulator is replaced by a stack of registers.

[Figure: The stack machine architecture]

HP calculators use an architecture similar to the stack architecture, based on reverse Polish notation: operands are ``entered'' onto the stack and then the instructions are given, e.g., 2 ENTER, 3 ENTER, +. Burroughs developed the first stack machine in 1963. More recently, the Java Virtual Machine is based on a stack architecture.
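
Before looking at the JVM itself, here is a minimal Java sketch of how a stack machine evaluates a reverse Polish expression; the token format and the choice of operators are assumptions made only for this example. Operands are pushed, and each operation pops its operands off the top of the stack and pushes the result, just as the calculator keystrokes above (and the bytecode below) do.


import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of stack-machine evaluation of (2 + 3) * 4 written in RPN.
public class RpnSketch {
    public static void main(String[] args) {
        String[] tokens = {"2", "3", "+", "4", "*"};   // assumed token format
        Deque<Double> stack = new ArrayDeque<>();
        for (String t : tokens) {
            switch (t) {
                case "+": stack.push(stack.pop() + stack.pop()); break;
                case "*": stack.push(stack.pop() * stack.pop()); break;
                default:  stack.push(Double.parseDouble(t));     // operand: push it
            }
        }
        System.out.println(stack.pop());               // prints 20.0
    }
}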

Given the Java program:


public class Factorial {
    public static void main(String[] args) {
        int input = Integer.parseInt(args[0]);
        double result = factorial(input);
        System.out.println(result);
    }
    public static double factorial(int x) {
        if (0 > x) return 0.0;
        double fact = 1.0;
        while (1 < x) {
            fact *= x;
            x -= 1;
        }
        return fact;
    }
}
Running javap -c Factorial disassembles the Factorial.class file into JVM bytecode (comments added by hand):

Method Factorial()
0 aload_0
1 invokespecial #1 <Method java.lang.Object()>
4 return


Method void main(java.lang.String[])
0 aload_0
1 iconst_0
2 aaload
3 invokestatic #2 <Method int parseInt(java.lang.String)>
6 istore_1
7 iload_1
8 invokestatic #3 <Method double factorial(int)>
11 dstore_2
12 getstatic #4 <Field java.io.PrintStream out>
15 dload_2
16 invokevirtual #5 <Method void println(double)>
19 return


Method double factorial(int)
0 iconst_0 // push 0 onto stack
1 iload_0 // push local (integer) variable 0 onto stack
2 if_icmple 7 // compare top 2 stack registers; jump to 7 if less or equal
5 dconst_0 // push double constant 0.0 onto stack
6 dreturn // return top of stack
7 dconst_1 // push double constant 1.0 onto stack
8 dstore_1 // store 1.0 in local variable 1
9 goto 20 // jump to instruction 20
12 dload_1 // push local (double) variable 1 onto stack
13 iload_0 // push local (integer) variable 0 onto stack
14 i2d // convert integer to double
15 dmul // multiply top 2 stack registers
16 dstore_1 // store the product in local variable 1
17 iinc 0 -1 // increment local (integer) variable 0 by -1
20 iconst_1 // push integer constant 1 onto stack
21 iload_0 // push local (integer) variable 0 onto stack
22 if_icmplt 12 // compare top 2 stack registers; loop to 12 if less
25 dload_1 // push local (double) variable 1 onto stack
26 dreturn // return top of stack

Load/Store machines

Load/store computers look superficially like stack machines. The difference is that any register can be selected to provide operands for instructions. This architecture is also called a (general-purpose) register architecture. Operands can also come from memory in this design. Most machines today are load/store machines.

[Figure: The load/store machine architecture]

Instructions frequently have up to three operands:


operation     source1, source2, destination
where the operation may be something like add or multiply, and the source operands and destination may be any register or memory.
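
A minimal Java sketch of this three-operand, register-to-register style is below; the opcode names, register numbers, and the assumption that the operands have already been loaded into registers are all illustrative. It computes r5 = r1*r2 + r3 in two instructions, something an accumulator machine would need extra loads and stores to do.


// Sketch of a three-operand register machine (illustrative opcodes and registers).
public class RegisterMachineSketch {
    static final int ADD = 0, MUL = 1;
    static int[] reg = new int[8];               // a small general-purpose register file

    // operation source1, source2, destination
    static void execute(int op, int src1, int src2, int dest) {
        switch (op) {
            case ADD: reg[dest] = reg[src1] + reg[src2]; break;
            case MUL: reg[dest] = reg[src1] * reg[src2]; break;
        }
    }

    public static void main(String[] args) {
        reg[1] = 2; reg[2] = 3; reg[3] = 4;      // assume a, b, c were loaded into r1, r2, r3
        execute(MUL, 1, 2, 4);                   // mul r1, r2, r4   (r4 = a * b)
        execute(ADD, 4, 3, 5);                   // add r4, r3, r5   (r5 = a*b + c)
        System.out.println("r5 = " + reg[5]);    // prints r5 = 10
    }
}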


William D. Shoaff
2002-01-23