INTRODUCTION TO COMPUTER ORGANISATION AND ARCHITECTURE
To fully understand a computer system, it is important to consider both the hardware and the software design of its components. In other words, every part of the computer has to be studied in order to improve the computer's performance.
Computer organization and architecture focuses on the various parts of the computer, with the aim of reducing the execution time of programs and improving the performance of each part. The terms computer organization and computer architecture are often used interchangeably, but there is a subtle difference.
- Computer Architecture is the study of the system from the programmer's or user's point of view: it describes the attributes visible to the programmer, such as the instruction set, without going into implementation detail.
- Computer Organization is the study of the system from the hardware point of view: it emphasizes how the architecture is actually implemented. In short, it reflects the designer's perspective.
- By analogy, a chef prepares a recipe and serves it to the customers. The chef knows how the dish is prepared, whereas the customer cares only about its quality and taste. In the same way, the "chef" corresponds to computer organization (how the system is built) and the "customer" to computer architecture (what the system presents to its users).
- For a given instruction set, it is enough for the programmer or user to know what instructions are available; this is the architectural view. The system designer, on the other hand, must worry about how those instructions are implemented; the implementation algorithm is the concern of organization.
A computer consists of various functional blocks: input, output, memory, arithmetic and logic, and control units.
INPUT UNIT: Input devices such as the keyboard provide input to the computer. Whenever a key is pressed, the corresponding character is automatically translated into a binary code and transmitted to either the memory or the processor. The information is stored in memory for further use.
MEMORY UNIT: The main function of the memory unit is to store data and programs. A program must be in memory while it is being executed, so memory plays a vital role in the execution of instructions. Memory can be further classified into:
PRIMARY MEMORY: Data and instructions are stored in primary storage before processing, and from there they are transferred to the ALU where further processing is done. Primary memory is expensive and is also known as main memory.
SECONDARY MEMORY: Data and instructions are stored permanently, so the user can retrieve them whenever required in the future. Secondary memory is cheaper than primary memory.
ARITHMETIC LOGIC UNIT (ALU): All arithmetic and logical operations, such as addition and multiplication, are carried out by the ALU. For instance, to multiply two numbers located in memory, they are first transferred to the processor, where the ALU performs the required operation. The product remains in the processor if it is needed immediately; otherwise it is stored back in memory.
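The memory-to-ALU-to-memory flow described above can be sketched in a few lines of Python. This is a toy model, not a real machine: the addresses, the `alu` helper, and the two-operation repertoire are all invented for illustration.

```python
# Toy sketch of the ALU flow: two operands live in "memory", are brought
# into processor "registers", the ALU multiplies them, and the product is
# written back to memory. Addresses and helper names are hypothetical.

memory = {0x10: 6, 0x11: 7, 0x12: None}

def alu(op, a, b):
    """A tiny ALU supporting just two operations."""
    if op == "MUL":
        return a * b
    if op == "ADD":
        return a + b
    raise ValueError(f"unsupported operation: {op}")

# Fetch the operands from memory into processor registers.
r1, r2 = memory[0x10], memory[0x11]
# The ALU performs the arithmetic; the result stays in a register...
r3 = alu("MUL", r1, r2)
# ...and is stored back in memory when it is not needed immediately.
memory[0x12] = r3
print(memory[0x12])  # 42
```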
CONTROL UNIT (CU): The other functional units, i.e. the ALU, the I/O devices, and the memory, have to be coordinated in some way. Data transfers between the processor and the memory are controlled by this unit through timing signals, which determine which action takes place and when.
OUTPUT UNIT: It delivers the processed results of the operations performed. Devices such as printers and monitors provide the desired output.
- The input device provides information, in the form of a program, to the computer and stores it in memory.
- The information is then fetched from memory into the processor.
- Inside the processor, it is processed by the ALU.
- The processed output is passed on to the output devices.
- All these activities are coordinated by the control unit.
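The five steps above can be sketched as a minimal fetch-execute loop. The instruction set here (LOAD/ADD/OUT/HALT) is invented purely for this example and does not correspond to any real ISA.

```python
# Minimal fetch-execute sketch of the flow above: the "control unit"
# fetches instructions from memory, the "ALU" processes them, and results
# go to an "output" list. The opcodes are hypothetical, not a real ISA.

memory = [
    ("LOAD", 5),    # put 5 in the accumulator
    ("ADD", 3),     # ALU adds 3 to the accumulator
    ("OUT", None),  # send the result to the output unit
    ("HALT", None),
]

def run(program):
    acc, pc, output = 0, 0, []
    while True:
        opcode, operand = program[pc]  # fetch the next instruction
        pc += 1
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":          # processed by the ALU
            acc += operand
        elif opcode == "OUT":          # passed to the output device
            output.append(acc)
        elif opcode == "HALT":
            return output

print(run(memory))  # [8]
```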
How efficient a computer is depends on how quickly it executes tasks. Performance depends mainly on a few factors:
- Programs are written in a high-level language, and the compiler translates them into machine language; the quality of this translation strongly affects performance.
- The speed of the computer depends on the hardware design and the machine instruction set.
Therefore, for optimum results the compiler, the hardware, and the machine instruction set must be designed in a coordinated way.
The hardware comprises a processor and memory, usually connected by a bus. Total execution time depends on the computer system as a whole, while processor time depends on the hardware. Cache memory is part of the processor.
The flow of program instructions and data between processor and memory:
- The program and data are read from an input device and stored in main memory.
- Instructions are fetched one by one over the bus from memory into the processor, and a copy is placed in the cache memory for future use.
- The processor and a small cache memory are fabricated on a single integrated circuit chip, which makes processing very fast.
- Minimizing instruction movement between main memory and the processor makes the program execute faster, and this is exactly what the cache achieves.
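The benefit of the cache can be sketched with a small dictionary in front of a "slow" main memory: repeated fetches of the same instructions are served from the cache instead of going over the bus. The access pattern and counts are illustrative only; a real cache has limited size and a replacement policy, which this sketch omits.

```python
# Sketch of why a cache helps: a dictionary in front of main memory keeps
# copies of recently fetched instructions, so repeated fetches avoid the
# slow path over the bus. Sizes and the access pattern are made up.

main_memory = {addr: f"instr@{addr}" for addr in range(100)}
cache = {}
hits = misses = 0

def fetch(addr):
    global hits, misses
    if addr in cache:           # fast path: already copied into the cache
        hits += 1
        return cache[addr]
    misses += 1                 # slow path: go to main memory over the bus
    value = main_memory[addr]
    cache[addr] = value         # keep a copy for future use
    return value

# A loop re-executes the same few instructions, so most fetches hit.
for _ in range(10):
    for addr in (0, 1, 2):
        fetch(addr)
print(hits, misses)  # 27 3
```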
NUMBERS AND ARITHMETIC OPERATIONS
Computers are built from logic circuits that operate on two-valued electrical signals, denoted 0 and 1. The amount of information carried by one such signal is called a bit.
A number represented in a computer by a string of bits is called a binary number.
A text character represented by a string of bits is called a character code. Characters include letters, decimal digits, punctuation marks, and so on, and are typically represented by codes 8 bits long.
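One widely used 8-bit character code is ASCII, which Python exposes through the built-in `ord` and `chr` functions. The snippet below prints the 8-bit pattern for a few characters:

```python
# 8-bit character codes: each character maps to an 8-bit pattern.
# ASCII (via Python's ord/chr) is one such code.
for ch in "A9!":
    print(ch, format(ord(ch), "08b"))
# A 01000001
# 9 00111001
# ! 00100001
```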
Since we need to represent both positive and negative numbers, there are three common representations, in all of which the leftmost bit is 0 for positive numbers and 1 for negative numbers:
- Sign and magnitude
- 1’s complement
- 2’s complement
Positive values have identical representations in all three systems, whereas negative values are represented differently.
- In sign-and-magnitude representation:
Negative values are represented by changing the most significant (sign) bit from 0 to 1. It is the most natural representation and the easiest to compute manually.
Example: +5 = 0101 whereas -5 = 1101 (most significant bit changed from 0 to 1).
- In 1’s complement representation:
A negative number is obtained by complementing each bit of the corresponding positive number.
Example: +3 = 0011 whereas -3 = 1100.
- In 2’s complement representation:
The 2’s complement is obtained by adding 1 to the 1’s complement of the number.
Example: +3 = 0011 whereas -3 = 1101.
2’s complement has only one representation of 0, whereas sign-and-magnitude and 1’s complement have distinct representations for +0 and -0. Moreover, 2’s complement is the most efficient for carrying out addition and subtraction.
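The three 4-bit representations above can be checked with short helper functions. The helper names are mine, not standard library calls; the widths and examples follow the 4-bit patterns used in the text.

```python
# Sketch of the three 4-bit signed representations, encoding -5 each way.
# Helper names are invented for this example.

def sign_magnitude(n, bits=4):
    """Sign bit (1 for negative) followed by the magnitude of |n|."""
    mag = format(abs(n), f"0{bits - 1}b")
    return ("1" if n < 0 else "0") + mag

def ones_complement(n, bits=4):
    """For negatives, complement every bit of the positive form."""
    if n >= 0:
        return format(n, f"0{bits}b")
    pos = format(-n, f"0{bits}b")
    return "".join("1" if b == "0" else "0" for b in pos)

def twos_complement(n, bits=4):
    """1's complement plus one; Python's bit masking gives this directly."""
    return format(n & (2**bits - 1), f"0{bits}b")

print(sign_magnitude(-5))   # 1101
print(ones_complement(-5))  # 1010
print(twos_complement(-5))  # 1011

# 2's complement addition works with ordinary binary addition modulo 2**4:
print(twos_complement(-5 + 3))  # 1110, i.e. -2
```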