
TU856-1 & TU858-1 Computer Architecture and Technology
Module Code: CMPU 1006
VON NEUMANN ARCHITECTURE
Presenter: Dr Art Sloan
Semester 1, Week 7
1

Presentation Outline
This presentation begins with a historical reference to the person credited with computer architecture as it has become known.
It then focuses on the electronic components that the architecture requires, and investigates their functionality.

There are a number of examples of the fetch-execute cycle associated with von Neumann architecture.
2

Presentation Content, including:
John von Neumann
The Architecture
The Stored Program Concept
Memory Types
Memory Addressing
The Arithmetic Logic Unit
The Control unit
Instruction Set
Engineering Needs
The Fetch-Execute Cycle
Memory Operations
von Neumann Architecture Summary
Where to Next?
3

John von Neumann
The father of the modern computer.

This quote is often cited in relation to John von Neumann.
Born in Hungary and named Neumann János Lajos Margittai, John spent much of his career as a professor at Princeton University, USA.
Much of his work was ethically questionable (atomic bomb development), but he was a scientific genius (physics, mathematics, chemistry).

4

John von Neumann (2)
Mauchly and Eckert (as mentioned in the notes of Week 2) had worked out the architectural advances for which von Neumann has become famous while developing the ENIAC computer.

Von Neumann worked with Mauchly and Eckert and helped with the ENIAC project, but it was his publication of a journal paper on the internal workings of an ENIAC-type machine (the First Draft of a Report on the EDVAC) that earned him the accolades.
5

The Architecture
Before the ENIAC (Electronic Numerical Integrator And Computer), electrical computing machines were manually loaded with instructions, which were executed as they came through the processor, producing immediate output.
Von Neumann Architecture is based on the stored program concept.
Going from in-the-moment computing to stored programs was a big leap in computing at the time (the mid-1940s).
6

The Architecture (2)
The von Neumann Architecture is a computer design model that uses a single storage structure to hold both instructions and data.
A computer that implements this referential model, specifying a sequential architecture (in contrast with parallel architectures), is described as having von Neumann Architecture.
The separation of storage from the processing unit is implicit in the von Neumann Architecture.
7

Referential Model
8

The architecture developed by von Neumann, Mauchly and Eckert (as above), and the stored program concept are the basis of the Referential Model.

The Architecture (3)
The term stored-program computer is generally used to mean a computer of this design.
Von Neumann begins his idea with a broad description of the general-purpose computing machine containing four main sub-components.
The sub-components relate to memory, control, central processing and connection with the human operator (sometimes defined as input/output).
These are represented by the arithmetic logic unit, the control unit, the memory, and the input-output devices of the classical computer model.
9

The Architecture (4)
Diagram of the Architecture:
10

The Architecture (5)
Another diagram of the Architecture; this one matches the ENIAC architecture:

11

(Diagram labels: Main Memory; Arithmetic Logic Unit; Program Control Unit; I/O Equipment.)

The Architecture (6)
In a special-purpose machine the computational procedure can be part of the hardware. In a general-purpose machine the instructions must be as changeable as the numbers they act upon.
To von Neumann, the key to building a general-purpose computing machine was its ability to store not only its data and the intermediate results of computation, but also the instructions, or orders, that allowed the computation.

12

The Architecture (7)
Therefore, why not encode the instructions into numeric form and store instructions and data in the same memory?

This is frequently viewed as the principal contribution provided by von Neumann's insight (and Mauchly and Eckert's) into the nature of what a computer should be.

13

The Architecture (8)
The stored program concept was proposed roughly sixty-five years ago, and it remains the fundamental architecture employed by computers even now.

Computer technologies come and go (or get superseded) but the stored-program concept endures.

14

The Stored Program Concept
The stored program concept had several technical implications:
Four key sub-components operate together to make the stored program concept work.
The process that moves information through the sub-components is called the Fetch-Execute Cycle.
Unless otherwise indicated, program instructions are executed in sequential order (see the sketch below).
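
A minimal sketch of the idea (the encoding is hypothetical, not from these slides): instructions and data sit side by side in the same memory, distinguished only by how the CPU interprets each cell.

    # Hypothetical sketch: instructions and data share one memory; only the way
    # the CPU interprets a cell decides whether it is an order or a number.
    memory = [
        0x1005,  # address 0: opcode 1 (LOAD)  with operand address 5 (instruction)
        0x2006,  # address 1: opcode 2 (ADD)   with operand address 6 (instruction)
        0x3007,  # address 2: opcode 3 (STORE) with operand address 7 (instruction)
        0xF000,  # address 3: opcode F (HALT)                         (instruction)
        0x0000,  # address 4: unused
        0x0002,  # address 5: the number 2                            (data)
        0x0003,  # address 6: the number 3                            (data)
        0x0000,  # address 7: the result will be written here         (data)
    ]

Nothing in the cells themselves marks which is an instruction and which is a number; execution simply starts at address 0 and proceeds cell by cell.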

15

The Four Sub-Components
Physically, the four sub-components in von Neumann Architecture are usually named:
Input/Output (or I/O)
Control Unit
Arithmetic-Logic Unit
Memory

While only four sub-components are listed in the books, there is a fifth part to the operation: a bus (a set of connecting wires) that links the components together and over which data flows from one sub-component to another. (More on buses next week.)
16

Memory
There are several different types of memory at processor level:
RAM (Random Access Memory)
ROM (Read Only Memory)
Registers
Cache
Why not just engineer one type?
Each type of memory represents a trade-off between capability and cost.

17

Memory (2)
18

Memory Types
RAM is an array of cells, each with a unique address.
It should take the same amount of time to access any memory cell, regardless of its location within the memory bank. (Hence random access.)

ROM memory cells can be read from, but not written to. Cells are protected.
ROM is more expensive than RAM, so only critical information is stored in it (for example, firmware instructions).
19

Memory Types (2)
Register cells are powerful, costly, and physically located close to the heart of computing.
Some registers are the main participants in the Fetch-Execute Cycle. (More on the Fetch-Execute Cycle later.)

Cache memory is a type of RAM. The main difference between cache and main RAM is that the CPU checks the cache before RAM, so instructions can be executed more quickly if they are already in cache.
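
A hedged sketch of that difference (the dictionaries below stand in for hardware): the CPU looks in the small, fast cache first and only falls back to slower RAM on a miss.

    ram = {addr: addr * 10 for addr in range(1024)}  # stand-in for main memory
    cache = {}                                       # small and fast, starts empty

    def read(addr):
        if addr in cache:            # cache hit: the fast path
            return cache[addr]
        value = ram[addr]            # cache miss: go to the slower RAM
        cache[addr] = value          # keep a copy so the next access is quick
        return value

    read(42)   # miss: fetched from RAM, then cached
    read(42)   # hit: served straight from the cache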
20

Memory Addressing
For all these memory types, cell size (or cell width) is a measure of how many bits (individual memory switches) are addressed at a time.

At a minimum this is a byte (8 bits), but to support all data types and operations the cell size can be larger (for example, a 16-bit word).
21

Memory Addressing (2)
22

Memory Addressing (3)
All cells have an address and can contain data (where data = data or instructions).

The cell address is a label (like a post code) that identifies a particular cell.

The cell contents are whatever data is stored at a given address location.

23

Memory Addressing (4)
The memory labels are themselves binary numbers.

One of the internal registers is used to hold address locations: the MAR (memory address register, or machine address register).

This register has to be big enough to hold the largest address of any memory location (a worked example follows).
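
A quick worked example (the register widths below are assumed for illustration): an n-bit MAR can hold 2 to the power n distinct addresses, so its width fixes how much memory can be addressed.

    # An n-bit MAR can hold addresses 0 .. 2**n - 1, i.e. 2**n cells in total.
    for bits in (8, 16, 32):
        print(f"a {bits}-bit MAR can address {2 ** bits} cells")
    # prints 256, 65536 and 4294967296 cells respectively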

24

I/O: Input and Output
There is both a human-machine interface and a machine-machine interface to I/O.

Examples of the human-machine interface include a keyboard, screen or printer.

Examples of the machine-machine interface include mass storage and secondary storage devices.

25

Arithmetic Logic Unit (The ALU)
The third component in the von Neumann Architecture is called the Arithmetic Logic Unit.

This is the sub-component that performs the arithmetic and logic operations, generally built from combinations of flip-flops, latches and gates.

The ALU is the brain of the computer.

26

The ALU (2)
27
The graphic on the left is a 3-D, Lego-style representation of the ALU.

The ALU (3)
The ALU contains many of the registers.

Its flip-flop and latch circuitry allows it to perform addition, subtraction, multiplication and division, as well as logical comparisons: less than, equal to and greater than.
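
As a hedged, gate-level sketch (not taken from the slides), here is a 1-bit full adder written purely with logic operations; an ALU chains such stages together to add whole words.

    def full_adder(a, b, carry_in):
        s = a ^ b ^ carry_in                        # XOR gates produce the sum bit
        carry_out = (a & b) | (carry_in & (a ^ b))  # AND/OR gates produce the carry
        return s, carry_out

    print(full_adder(1, 1, 0))   # (0, 1): 1 + 1 is binary 10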
28

Control Unit
The control unit drives the fetch-execute cycle.

As mentioned before, a cell address in memory is loaded into the MAR; it is the control unit that determines which address is loaded, and what operation is to be performed on the data moved into the memory data register (or machine data register), the MDR.
29

Control Unit (2)
30

Control Unit (3)
To do its job, the Control Unit has several tools, including:
Special memory registers
Hard-wired understanding of an Instruction Set
31

Control Unit (4)
Special Memory Registers
The Control Unit must keep track of where it is within a program, and what it should do next.
Two special registers are used to accomplish this:
A program counter, typically referred to as a PC, holds the address of the next instruction to be executed.
An instruction register, typically referred to as an IR, holds an instruction fetched from memory.

32

Control Unit (5)
Along with the special registers, the Control Unit utilises special circuitry, called an instruction decoder.

The instruction decoder is a typical decoder circuit; its purpose is to read an instruction from the instruction register (IR) and then to activate the appropriate circuit line.
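
A hedged model of that decoder (the opcodes and field widths are hypothetical): it inspects the opcode bits held in the IR and activates exactly one circuit line, represented here by a function.

    def activate_load():  print("LOAD circuit line active")
    def activate_add():   print("ADD circuit line active")
    def activate_store(): print("STORE circuit line active")

    DECODE = {0b0001: activate_load, 0b0010: activate_add, 0b0011: activate_store}

    IR = 0b0010_000000001100   # 4-bit opcode (ADD) followed by a 12-bit address
    opcode = IR >> 12          # isolate the opcode bits
    DECODE[opcode]()           # prints "ADD circuit line active"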
33

Simple View of CPU in Operation
34

How this Works
Remember, we are trying to figure out how the stored program concept works.

In this model, the program and the data upon which it operates are stored in memory locations.

We know how to encode the data.

We need to learn how to encode the programming instructions.

35

The Instruction Set
At the heart of all programming are a few basic, general instructions.
The set of instructions is small and particular to the processor's specification.
The power of the instruction set is that by identifying a definite, bounded, simple task, an instruction can be executed with appreciable speed (typically within a few billionths of a second).
36

The Instruction Set (2)
37

The Instruction Set (3)
The instruction set is something like the ASCII alphabet encoding scheme.

The specific instructions are given unique binary codes.

Obviously, the instruction register must be big enough to hold any instruction within the numbered set.
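
A hypothetical encoding along those lines (the opcodes and field widths are invented): each instruction is given a unique 4-bit code and packed with a 12-bit operand address into one 16-bit word, so a 16-bit IR is wide enough to hold any instruction in the set.

    OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

    def encode(mnemonic, address=0):
        return (OPCODES[mnemonic] << 12) | (address & 0x0FFF)  # 4 + 12 bits

    word = encode("LOAD", 5)
    print(format(word, "016b"))   # 0001000000000101, which fits a 16-bit IR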
38

Execution Happens in Cycles
Data and program instructions are translated into binary form and stored in RAM.
As the information is needed, it is moved to the high speed, costlier registers where it is processed.
This process occurs in a cycle: fetch information into the registers, execute it there, fetch the next information into the registers, execute it, and so on.
39

Engineering Needs
Once we know which data we should be working on, we know how to build circuitry to perform the processing operations. (We can add, subtract, divide and compare.)

How do we figure out which data to work on, and exactly which operation to perform?
40

Engineering Needs (2)
Specifically, this is what we need to be able to do:
Build a circuit that will allow us to take whatever number is in the Memory Address Register (MAR), and use this number to access a specific memory cell.
Build a circuit that will allow us to choose which data results should be placed in the Memory Data Register (MDR).
This magic happens in the Control Unit, also known as the PCU (Program Control Unit).
41

Choosing a Memory Location
The initial requirement first: how do we determine which address location holds the data on which we need to operate?

As mentioned before, the MAR holds an address, which is a binary number.

We need some circuitry to read that number, and based on its value, find exactly the correct address location to read.
The circuit is called a decoder.
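
A hedged sketch of what that decoder does: it turns the binary number held in the MAR into a one-hot pattern, so that exactly one cell's select line is driven.

    def decode(mar_value, address_bits=3):
        lines = [0] * (2 ** address_bits)   # one select line per memory cell
        lines[mar_value] = 1                # enable only the addressed cell
        return lines

    print(decode(5))   # [0, 0, 0, 0, 0, 1, 0, 0]: select line 5 is active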

42

Decoder
43

A Von Neumann Architecture Effect
The stored program concept had a specific effect: the stored program needed procedures that allowed the individual instructions and data to be called and dealt with, that is, a fetch-and-execute arrangement (or fetch-execute-store arrangement).
44

The Fetch-Execute Cycle
The CPU executes a program that is stored as a sequence of machine language instructions in main memory.

It does this by repeatedly reading, or fetching, an instruction from memory and then carrying out, or executing, that instruction.
45

The Fetch-Execute Cycle (2)
This process of fetch an instruction, execute it, fetch another instruction, execute it and so on, is called the Fetch-Execute Cycle.

This continuous checking that the CPU performs (fetching, executing and pushing out the result) is called polling, but it can be suspended by an interrupt.
46

The Fetch-Execute Cycle (3)
This is a two-cycle process because both instructions and data are in memory.
Fetch
Decode (find) an instruction, load it from memory into a register and signal the ALU.
Execute
Perform the operation that the instruction requires:
move or transform data.
47

The Fetch-Execute Cycle (4)
48

The Fetch-Execute Cycle (5)
With an interrupt the CPU reacts to a signal to jump to an address. There will be interrupt handler instructions there.
An interrupt handler is part of the device driver software for the device that signalled the interrupt.
It will allow the diversion to another program and, after the interrupt, direct the CPU back to the previous program where it left off.
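
A hedged sketch of that diversion (the function names are invented): after each instruction the CPU checks for a pending interrupt, saves where it was, runs the handler, and then carries on where it left off.

    def run(program, handler, interrupt_pending):
        pc = 0
        while pc < len(program):
            program[pc]()              # execute the current instruction
            pc += 1
            if interrupt_pending():    # poll for an interrupt request
                saved_pc = pc          # remember where the program got to
                handler()              # divert to the interrupt handler
                pc = saved_pc          # then direct the CPU back again

    run([lambda: print("instruction")] * 3,
        handler=lambda: print("interrupt handled"),
        interrupt_pending=lambda: False)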
49

The Fetch-Execute Cycle (6)
Instruction cycle example:
The ALU and PCU are used for the fetch-execute cycle:

Arithmetic and Logic Unit (ALU)
The ALU is part of the Central Processing Unit. All computations are performed in this unit.
The ALU comprises adders, counters and registers, and performs numerical operations (+, -, x, /) and logical operations (AND, OR, program branching).
50

The Fetch-Execute Cycle (7)
(Program) Control Unit (PCU)
The PCU:
Coordinates all the other units in the computer,
Organises the movement of data to and from Input/Output, memory and registers,
Directs the ALU, specifically indicating the operations to be performed.
The control unit operates according to the stored program, receiving and executing its instructions one at a time.
51

The Fetch-Execute Cycle (8)
Instruction Cycle (or Fetch-Execute Cycle)
Fetch Cycle:
Fetch instruction from memory.
Execute Cycle:
Decode instruction,
Fetch required operands,
Perform operation.
52

The Fetch-Execute Cycle (9)
The components of a processor
DR: Data Register
M: Memory
MAR: Memory Address Register
MBR (or MDR): Memory Buffer Register (or Memory Data Register)
AC: Accumulator
PC: Program Counter
IR: Instruction Register
ALU: Arithmetic Logic Unit (including the arithmetic logic circuits)
PCU: Program Control Unit
53

The General Sequence of a Fetch-Execute Cycle
A program stored in the main memory of a computer is obeyed by the CPU carrying out the following sequence of events, described with symbolic notation (a sketch in code follows the steps):
1. [PC] -> MAR
2. [M] -> MBR
3. [PC]+1 -> PC
4. [MBR] -> IR
5. Decode
6. Execute this instruction
7. Repeat from step (1)
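
The same sequence as a hedged sketch in Python, with the registers written as plain variables (the memory contents are invented for illustration):

    M = {0: ("ADD", 7), 1: ("HALT", None), 7: 41}   # hypothetical memory image
    PC, AC = 0, 1

    while True:
        MAR = PC                  # 1. [PC]  -> MAR
        MBR = M[MAR]              # 2. [M]   -> MBR
        PC = PC + 1               # 3. [PC]+1 -> PC
        IR = MBR                  # 4. [MBR] -> IR
        op, addr = IR             # 5. decode
        if op == "HALT":          # 6. execute this instruction
            break
        if op == "ADD":
            AC = AC + M[addr]
                                  # 7. repeat from step 1
    print(AC)                     # 42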

54

More on the Fetch-Execute Cycle
Often the CPU actually has several tasks to perform.
All modern computers use multitasking to perform several tasks at once, devoting a fraction of a second to each task in turn.
This application of multitasking is called timesharing. For example, the user might be typing a document while a clock is continuously displaying the time and a file is being downloaded from the internet.
Each individual task of a CPU is called a thread.
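
A hedged illustration of timesharing between threads, using Python threads purely as a model (the task names are invented):

    import threading, time

    def task(name):
        for _ in range(3):
            print(name, "gets a slice of time")
            time.sleep(0.01)       # give the other tasks a turn

    threads = [threading.Thread(target=task, args=(name,))
               for name in ("typing", "clock", "download")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()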
55

To Include the Store Operation
Some books on Architecture prefer the term Fetch-Execute Cycle, while others talk about a Fetch-Execute-Store Cycle.

Here are a few slides to show the Store feature, plus one more showing ADD, because the ADD operation is the most common in the CPU.
56

Memory Operations
Two basic operations occur within this subcomponent: a fetch operation, and a store.
The fetch operation:
A cell address is loaded into the Memory Address Register (MAR).
The address is decoded, which means that through circuitry, a specific cell is located.
The data contents contained within that cell are copied into another special register, called a Machine (or Memory) Data Register (MDR).
Note that this operation is non-destructive; that is, the data contents are copied, not destroyed.
57

Memory Operations (2)
The second memory operation is called a store.
The fetch is like a read operation; the store is like a write operation.
In the store, the address of the cell into which data is going to be stored is moved to the MAR and decoded.
Contents from yet another special register, called an accumulator, are copied into the cell location (whose address is held in the MAR).
This operation is destructive, meaning that whatever data was originally contained at that memory location is overwritten by the value copied from the accumulator.
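
A hedged sketch of the two operations side by side (addresses and values invented): the fetch copies a cell's contents into the MDR and leaves the cell untouched, while the store overwrites the cell with the accumulator's value.

    memory = {12: 99}

    # Fetch: the MAR holds the address; the cell's contents are copied to the MDR.
    MAR = 12
    MDR = memory[MAR]
    assert memory[12] == 99    # non-destructive: the original contents survive

    # Store: the accumulator's value replaces whatever the cell held before.
    AC = 7
    MAR = 12
    memory[MAR] = AC
    assert memory[12] == 7     # destructive: the old value 99 is overwritten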
58

Load Fetch-Execute Cycle
59
PC -> MAR: transfer the address from the PC to the MAR
MDR -> IR: transfer the instruction to the IR
IR(address) -> MAR: address portion of the instruction loaded into the MAR
MDR -> AC: actual data copied into the accumulator
PC + 1 -> PC: Program Counter incremented

Store Fetch-Execute Cycle
60
PC -> MAR: transfer the address from the PC to the MAR
MDR -> IR: transfer the instruction to the IR
IR(address) -> MAR: address portion of the instruction loaded into the MAR
AC -> MDR: accumulator copies data into the MDR
PC + 1 -> PC: Program Counter incremented

Notice how Step 4 differs for LOAD and STORE

ADD Fetch-Execute Cycle
61
PC -> MAR: transfer the address from the PC to the MAR
MDR -> IR: transfer the instruction to the IR
IR(address) -> MAR: address portion of the instruction loaded into the MAR
AC + MDR -> AC: contents of the MDR added to the contents of the accumulator
PC + 1 -> PC: Program Counter incremented
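
A hedged sketch combining all three cycles (the memory contents are invented), to show that only step 4 differs between LOAD, STORE and ADD:

    def step(M, PC, AC):
        MAR = PC                      # PC -> MAR
        MDR = M[MAR]; IR = MDR        # instruction into the IR (via the MDR)
        opcode, MAR = IR              # IR(address) -> MAR
        MDR = M.get(MAR)              # operand read, used by LOAD and ADD
        if opcode == "LOAD":
            AC = MDR                  # step 4 for LOAD:  MDR -> AC
        elif opcode == "STORE":
            M[MAR] = AC               # step 4 for STORE: AC -> MDR, written to memory
        elif opcode == "ADD":
            AC = AC + MDR             # step 4 for ADD:   AC + MDR -> AC
        return PC + 1, AC             # PC + 1 -> PC

    M = {0: ("LOAD", 8), 1: ("ADD", 9), 2: ("STORE", 10), 8: 2, 9: 3, 10: 0}
    PC, AC = 0, 0
    for _ in range(3):
        PC, AC = step(M, PC, AC)
    print(M[10])   # 5: the machine loaded 2, added 3 and stored the result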

62

(Flowchart of the cycle: START; Instruction awaiting execution? (Yes/No); Fetch next instruction (Fetch Cycle); Execute next instruction (Execute Cycle); Interrupt requiring servicing? (Yes/No); Transfer control to interrupt-handling program (Program Transfer).)

Overview of the Fetch-Execute Cycle
RAM holds machine language programs and data.
The CPU fetches machine language instructions from cache or RAM (or possibly from a 32-bit register), one after another, and executes them without understanding what it does or how it does it.
So the program it executes must be perfect, complete and unambiguous, because the CPU can do nothing but execute instructions exactly as they are written.
63

Instruction and Data Movement
The Fetch-Execute Cycle means that the processor must interact with registers, cache memory, RAM and then a long list of other hardware input-output and storage devices.
How do these strings of binary numbers move from device to device?

They take the bus! (Buses, actually)

64

Conclusion
The architecture of the modern computer has changed very little (at a fundamental level, at least) in the seventy or so years since it was first established.
The improvements and innovations that have extended from the classic architecture have been in memory structures and capacities, and in the way that the arithmetic/logic and control units function.
Innovation in computer architecture continues as time goes on.
65

End of von Neumann Architecture
That describes a broad view of computer architecture in terms of the fundamental design known as von Neumann Architecture.
We went into detail on some of the most fundamental features of the architecture, and have seen how some of the subcomponents of the hardware system operate to support the overall computer system.
The working principles of the central processing unit and internal memory have shown how instructions and data can be processed.

Are there ANY QUESTIONS?
66

Where to Next?
NEXT WEEK:
The theme of the next lecture:
Interface Types
How do instructions and data travel on buses? Why are there different hardware interface types? We can look at these things next.
67

Thanks for your attentiveness.

See you here next time. Be safe and well in the meantime.
68

