Computer Organization and Architecture Sample Papers MST 2
Syllabus
Chapter 2.1 Design of control unit: Hardwired control unit, Micro-Programmed control unit, and comparative study.
Chapter 2.2 Memory organization: Memory hierarchy, Cache Memory, Associative Memory, cache size vs block size, mapping functions, replacement algorithms, write policy, basic optimization techniques in cache memory, Cache memory with associative memory, Virtual Memory: Paging, Segmentation.
Chapter 2.3 Input-output organization: Asynchronous Data transfer: Source Initiated, Destination Initiated, Handshaking, Programmed I/O, Interrupts DMA, and IOP.
SAMPLE PAPER 1 of 3
Academic Year: 2022-2023 | Semester: 4th |
Time: 1 hour | Maximum Marks: 20 |
Instructions: Attempt all questions
SECTION-A
Each question carries 2 marks.
Question 1: Explain the difference between hardwired and microprogrammed control units.
Answer:
- Hardwired control units are built using hardware components like logic gates, flip-flops, and registers. They are designed to execute a specific set of instructions that are hardwired into the control unit.
- Microprogrammed control units, on the other hand, use microcode to control the execution of instructions. The microcode is stored in a control memory and is responsible for generating the control signals required to execute each instruction.
Question 2: Define the terms control signals, control variables, control word, and control memory.
Answer:
- Control signals are electrical signals that are used to control the operation of a computer’s components. For example, a control signal might be used to indicate that an arithmetic operation should be performed.
- Control variables are values that are used to control the operation of a computer’s components. For example, a control variable might be used to specify the size of an instruction.
- Control word is a group of control signals that are sent to a computer’s components to execute a specific instruction.
- Control memory is a type of memory that stores microcode, which is used to generate the control signals needed to execute instructions.
Question 3: What is a microinstruction?
Answer:
- A microinstruction is the unit of control information stored in a microprogrammed control unit; it specifies the control signals for one step in the execution of a machine instruction.
- Microinstructions are stored in a control memory and are responsible for generating the control signals required to execute each instruction.
Question 4: Compare the advantages and disadvantages of microprogrammed control units.
Answer:
Advantages of microprogrammed control units:
- More flexible and easier to modify compared to hardwired control units.
- Allow for the use of high-level languages to design control logic.
- Easier to test and debug than hardwired control units.
Disadvantages of microprogrammed control units:
- Slower than hardwired control units due to the additional time required to fetch and execute microinstructions.
- Require more memory and hardware components to store the microcode.
Question 5: What is the purpose of the input logic for the microprogram sequencer?
Answer:
- The input logic for the microprogram sequencer is responsible for decoding the current instruction and generating the address of the next microinstruction to be executed.
- It does this by examining the current instruction and generating a set of control signals that are used to select the next microinstruction from the control memory.
SECTION-B
Each question carries 5 marks.
Question 6: Draw a diagram of a control memory and explain its working in detail.
Answer:
Diagram:
Address           Control Memory
           +--------------------+
     0 --> |  Microinstruction  |
           +--------------------+
     1 --> |  Microinstruction  |
           +--------------------+
     2 --> |  Microinstruction  |
           +--------------------+
           |        ...         |
           +--------------------+
   n-1 --> |  Microinstruction  |
           +--------------------+
Working of Control Memory:
1. A control memory is a type of memory that stores microcode, which is used to generate the control signals needed to execute instructions. The microcode consists of a sequence of microinstructions, each of which contains control signals that are specific to a particular instruction.
2. The control memory is typically organized as an array of cells, each holding one microinstruction. When the control unit fetches a microinstruction from a cell, the fields of that microinstruction supply the control signals for one step of instruction execution.
3. To execute an instruction, the control unit fetches the microinstruction from the control memory and decodes it to generate the control signals needed to execute the instruction. The control signals are then sent to the appropriate components of the computer to perform the required operation.
4. The address of the next microinstruction to be executed is generated by the input logic for the microprogram sequencer. The input logic examines the current instruction and generates a set of control signals that are used to select the next microinstruction from the control memory. This address is then sent to the address bus, which selects the appropriate memory cell from the control memory.
5. Once the appropriate memory cell is selected, the microinstruction is fetched and executed. The execution of the microinstruction generates the control signals required to execute the current instruction. The address of the next microinstruction to be executed is generated by the input logic, and the process repeats until the instruction has been completed.
In summary, control memory is a type of memory that stores microcode, which is used to generate the control signals required to execute instructions. The microcode is organized into a matrix of memory cells, with each cell containing a single microinstruction. The control signals are generated by the microinstruction in the memory cell, and the address of the next microinstruction is generated by the input logic for the microprogram sequencer.
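The fetch-and-sequence cycle in steps 1-5 above can be sketched in a few lines of Python. The microinstruction format (a control signal plus a next address) and the control memory contents below are illustrative assumptions, not any real machine's microcode:

```python
# Toy sketch of the control-memory fetch cycle: each microinstruction
# pairs a control signal with the address of the next microinstruction.
# Contents and signal names are invented for illustration.
CONTROL_MEMORY = {
    0: ("FETCH", 1),          # fetch the instruction from main memory
    1: ("DECODE", 2),         # decode it into an opcode
    2: ("ALU_ADD", 3),        # execute (here: an ALU add)
    3: ("WRITE_BACK", None),  # store the result; None ends the routine
}

def run_microprogram(start_address):
    """Fetch microinstructions until the routine ends, returning the
    sequence of control signals generated along the way."""
    car = start_address       # control address register (CAR)
    signals = []
    while car is not None:
        signal, next_addr = CONTROL_MEMORY[car]  # fetch and decode
        signals.append(signal)                   # "execute" the step
        car = next_addr       # input logic supplies the next address
    return signals

print(run_microprogram(0))
# ['FETCH', 'DECODE', 'ALU_ADD', 'WRITE_BACK']
```

Starting the loop at a different address models jumping into the middle of a microprogram routine.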
Question 7: Explain the difference between FIFO and LRU replacement algorithms for cache memory.
Overview:
Cache memory is a type of memory that is used to store frequently accessed data in a computer. The cache memory is much faster than the main memory of the computer, which allows for faster access to data. However, cache memory is typically much smaller than the main memory, which means that the cache memory must be carefully managed to ensure that the most frequently accessed data is stored in the cache.
One of the most important tasks in managing cache memory is replacing data that is no longer needed with new data that is likely to be needed soon. There are many different algorithms for replacing data in cache memory, but two of the most common are FIFO and LRU.
FIFO:
FIFO (First-In, First-Out) is a simple algorithm that replaces the oldest data in the cache with new data when the cache is full. When data is added to the cache, it is added to the end of a queue. When the cache is full and new data needs to be added, the data at the front of the queue is replaced with the new data.
Advantages of FIFO:
- Simple to implement
- Requires very little overhead
Disadvantages of FIFO:
- Does not take into account how recently or how often data has been accessed.
- As a result, frequently accessed data can be evicted simply because it entered the cache first, while rarely used data remains.
LRU:
LRU (Least Recently Used) is an algorithm that replaces the least recently used data in the cache with new data when the cache is full. The LRU algorithm keeps track of the order in which data is accessed, and when new data needs to be added to the cache, it replaces the data that has not been accessed for the longest time.
Advantages of LRU:
- Takes the recency of access into account.
- Keeps recently used data in the cache and evicts the data that has gone longest without being accessed, which tends to retain the most useful data.
Disadvantages of LRU:
- Requires more overhead to keep track of the order in which data is accessed
- Can be more complex to implement than FIFO.
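The two policies can be contrasted with a short Python sketch built on `collections.OrderedDict`; the access sequence and cache capacity below are made up for illustration:

```python
from collections import OrderedDict

def simulate(policy, accesses, capacity):
    """Return the blocks left in a cache of `capacity` lines after the
    given access sequence, under FIFO or LRU replacement."""
    cache = OrderedDict()  # insertion order doubles as queue/recency order
    for block in accesses:
        if block in cache:
            if policy == "LRU":            # LRU: a hit refreshes recency
                cache.move_to_end(block)
            continue                       # FIFO: hits do not reorder
        if len(cache) == capacity:
            cache.popitem(last=False)      # evict oldest / least recent
        cache[block] = True
    return list(cache)

accesses = ["A", "B", "C", "A", "D"]       # D forces one eviction
print(simulate("FIFO", accesses, 3))       # ['B', 'C', 'D']
print(simulate("LRU", accesses, 3))        # ['C', 'A', 'D']
```

With the same accesses, FIFO evicts A because it entered the cache first, while LRU evicts B because the hit on A refreshed A's recency.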
SAMPLE PAPER 2 of 3
Academic Year: 2022-2023 | Semester: 4th |
Time: 1 hour | Maximum Marks: 20 |
Instructions: Attempt all questions
SECTION-A
Each question carries 2 marks
Question 1. Define the term memory hierarchy and draw a diagram to illustrate the different levels.
Answer:
Memory hierarchy refers to the organization of memory devices in a system, arranged in a hierarchy based on access speed. The memory hierarchy typically consists of several levels, each with different characteristics, capacities, and access times.
Here’s a diagram to illustrate the different levels of the memory hierarchy:
+-----------------------+
| CPU Registers |
+-----------------------+
| Cache |
+-----------------------+
| RAM |
+-----------------------+
| Hard Disk |
+-----------------------+
| External Storage |
+-----------------------+
Question 2. Explain the need for virtual memory and its advantages and disadvantages.
Answer:
Virtual memory is a technique used by modern operating systems to extend the available memory beyond the physical memory installed on a system. The need for virtual memory arises due to the following reasons:
- Programs require more memory than the physical memory available.
- Multiple programs are executed simultaneously, and each program requires a significant amount of memory.
- Memory fragmentation causes inefficiencies in memory utilization.
Advantages of virtual memory:
- Allows programs to use more memory than physically available, improving system performance.
- Improves memory utilization efficiency by reducing fragmentation and optimizing memory allocation.
- Provides a layer of protection to prevent one program from accessing the memory space of another program.
Disadvantages of virtual memory:
- Slower access on a page fault, since pages must be swapped between physical memory and disk.
- Increased system overhead due to the need for managing the virtual memory mapping.
- May cause thrashing, a condition where the system spends more time swapping pages than executing programs, leading to degraded performance.
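The paging side of virtual memory can be illustrated with a minimal address-translation sketch; the 4 KB page size is a typical choice, but the page table contents here are invented:

```python
# Minimal sketch of paging address translation. Assumes 4 KB pages;
# the page table below is a toy mapping, not real OS state.
PAGE_SIZE = 4096  # 2**12 bytes -> low 12 bits are the page offset

# Hypothetical page table: virtual page number -> physical frame number
PAGE_TABLE = {0: 5, 1: 2, 2: 7}

def translate(virtual_address):
    """Split a virtual address into (page, offset) and map the page to
    its frame; a missing entry models a page fault."""
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in PAGE_TABLE:
        raise KeyError(f"page fault on page {page}")
    return PAGE_TABLE[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # page 1 -> frame 2: prints 0x2abc
```

A real MMU performs this lookup in hardware (with a TLB caching recent translations), and the page-fault path is what triggers the swapping described above.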
Question 3. What is synchronous data transfer? Explain the types of synchronous data transfer.
Answer:
Synchronous data transfer refers to a data transfer method where data is transferred in a synchronized manner, using a common clock signal, between the sender and the receiver. The clock signal is used to synchronize the data transfer and ensure that the data is received correctly.
Types of synchronous data transfer:
- Synchronous serial transfer: Data is transferred bit by bit, using a single data line, with the clock signal controlling the timing of each bit transfer. Examples include SPI and I2C.
- Synchronous parallel transfer: Data is transferred simultaneously across multiple data lines, with the clock signal controlling the timing of each data transfer. Examples include PCI and DDR memory.
Question 4. Define the term programmed I/O and explain its use.
Definition:
- Programmed I/O refers to a data transfer method where the CPU performs the data transfer between an I/O device and memory.
Explanation of use:
- In programmed I/O, the CPU initiates the data transfer by writing data to or reading data from an I/O device and then waits for the data transfer to complete before resuming normal execution.
- Programmed I/O is used for simple I/O operations, such as transferring small amounts of data or controlling device status.
Advantages:
- Simple and easy to implement.
- Minimal hardware requirements.
Disadvantages:
- The CPU is tied up during the data transfer, reducing system performance.
- Inefficient for large data transfers or high-speed devices.
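The defining busy-wait of programmed I/O looks roughly like the following Python sketch; `DummyDevice` is a made-up stand-in for a device's status and data registers:

```python
class DummyDevice:
    """Pretend device that reports not-ready for a few status reads,
    modeling the latency the CPU must poll through."""
    def __init__(self, data):
        self._data = list(data)
        self._polls_until_ready = 3

    def ready(self):            # status-register check
        if self._polls_until_ready > 0:
            self._polls_until_ready -= 1
            return False
        return bool(self._data)

    def read(self):             # data-register read
        return self._data.pop(0)

def programmed_io_read(device, count):
    """CPU-driven transfer: poll the status flag until ready, copy one
    item, and repeat until `count` items have been moved."""
    buffer = []
    while len(buffer) < count:
        while not device.ready():   # the CPU is tied up busy-waiting
            pass
        buffer.append(device.read())
    return buffer

print(programmed_io_read(DummyDevice(b"hi"), 2))  # [104, 105]
```

The inner polling loop is exactly the wasted CPU time that interrupt-driven I/O and DMA are designed to eliminate.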
Question 5. Define the term interrupt and explain the use of interrupts in operating systems.
Definition:
- An interrupt is a signal generated by a device or software, indicating that it requires the attention of the CPU.
Explanation of use:
- Interrupts are used in operating systems to allow devices to communicate with the CPU without requiring constant monitoring by the CPU.
- When an interrupt is generated, the CPU temporarily suspends its current execution and executes a specific routine, known as the interrupt service routine (ISR), to handle the interrupt.
- Interrupts can be classified as hardware interrupts or software interrupts.
Use of Interrupts in Operating Systems:
- Interrupts are used to handle events in real-time, such as input/output operations, hardware errors, and timer events.
- Interrupts are used to manage multiple processes and provide multi-tasking capability by allowing the CPU to switch between processes when an interrupt is generated.
- Interrupts are used for communication between the operating system and device drivers, allowing the operating system to manage devices without requiring detailed knowledge of device operations.
SECTION-B
Each question carries 5 marks.
Question 6. Draw a diagram to illustrate the hardware organization for associative memory and explain the functions of the argument register, key register, associative memory array, and match register.
Answer:
The hardware organization for associative memory includes:
- Argument register: Holds the word to be searched for in the memory.
- Key register: Holds a mask that chooses which bits of the argument take part in the comparison; bit positions set to 1 are compared, positions set to 0 are ignored.
- Associative memory array: Stores the words and compares every stored word with the masked argument in parallel.
- Match register: Contains one bit per stored word; a bit is set when the corresponding word matches the masked argument.
The search operation for associative memory involves the following steps:
- The argument register value, masked by the key register, is compared in parallel with every word stored in the associative memory array.
- For each word that matches, the corresponding bit in the match register is set; the matched words can then be read out.
Associative memory can be implemented using various technologies, such as content-addressable memory (CAM), which allows searching for a key-value pair in a single clock cycle.
Associative memory is used in applications where fast and efficient search operations are required, such as database management, cache memory, and pattern recognition.
Here’s a diagram to illustrate the hardware organization for associative memory:
+---------------+     +---------------+
| Argument Reg. |     |   Key Reg.    |
+---------------+     +---------------+
        |                     |
        +----------+----------+
                   v
        +---------------------+
        |   Assoc. Memory     |
        |       Array         |
        +---------------------+
                   |
                   v
          +---------------+
          |  Match Reg.   |
          +---------------+
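The parallel masked comparison can be modeled in a few lines of Python; the word width, stored words, and mask below are illustrative:

```python
def cam_search(words, argument, key):
    """Return the match register as a list of booleans: match[i] is
    True when word i agrees with the argument in every bit position
    selected by the key mask."""
    return [(word & key) == (argument & key) for word in words]

words = [0b1010, 0b1001, 0b0010, 0b1011]
# Search for words whose upper two bits are '10'; the key mask 0b1100
# makes the lower two bits "don't care".
matches = cam_search(words, argument=0b1000, key=0b1100)
print(matches)  # [True, True, False, True]
```

In real CAM hardware every word is compared simultaneously, so the whole list comprehension above corresponds to a single clock cycle.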
Question 7. Explain the process of direct memory access (DMA) and the purpose of the DMA register.
Answer:
Direct Memory Access (DMA) is a method of transferring data between memory and input/output (I/O) devices without the intervention of the CPU. In DMA, the device controller transfers data directly to or from memory without the need for CPU involvement. This reduces the burden on the CPU and improves system performance by freeing up the CPU.
The DMA process involves the following steps:
- The device controller initiates the DMA transfer by sending a DMA request signal to the DMA controller.
- The DMA controller requests access to the system bus and coordinates with the device controller to transfer data to or from memory.
- Once the DMA transfer is complete, the DMA controller releases the system bus and signals the device controller that the transfer is complete.
The DMA register is a hardware component used to manage DMA transfers. It stores the memory address, transfer count, and transfer direction for a DMA transfer.
Here’s a diagram to illustrate the DMA transfer process:
+-----------+
| I/O Device|
+-----------+
|
DMA Request Signal
|
+-----------+
| DMA |
| Controller|
+-----------+
|
System Bus Access
|
+-----------+
| Memory |
+-----------+
In summary, the DMA process allows for efficient data transfer between memory and I/O devices without the need for CPU intervention, and the DMA register is used to manage DMA transfers.
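The role of the DMA registers (starting address, word count, transfer direction) can be sketched as follows; the memory array and device buffer are toy stand-ins for real hardware:

```python
def dma_transfer(memory, device_buffer, address, count, direction):
    """Copy `count` words starting at `address`. The direction is
    'to_memory' (device -> memory) or 'from_memory' (memory -> device).
    The controller, not the CPU, steps the address and count."""
    for i in range(count):
        if direction == "to_memory":
            memory[address + i] = device_buffer[i]
        else:
            device_buffer[i] = memory[address + i]
    return count  # words transferred; completion is then signaled

ram = [0] * 8
dma_transfer(ram, [7, 8, 9], address=2, count=3, direction="to_memory")
print(ram)  # [0, 0, 7, 8, 9, 0, 0, 0]
```

The CPU's only involvement is programming the registers before the loop and handling the completion interrupt after it, which is what frees it for other work during the transfer.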
SAMPLE PAPER 3 of 3
Academic Year: 2022-2023 | Semester: 4th |
Time: 1 hour | Maximum Marks: 20 |
Instructions: Attempt all questions
SECTION-A
Each question carries 2 marks.
Question 1. Define the term control unit and explain the difference between hardwired and microprogrammed control units.
Answer:
The control unit is a component of a CPU that manages the execution of instructions. It retrieves instructions from memory and decodes them, then generates the necessary control signals to execute them.
Hardwired control units use a fixed wiring scheme to control the CPU’s operations. The control signals are generated by combinational logic circuits that are hardwired to specific instruction codes. Hardwired control units are simple and fast, but they are inflexible and difficult to modify.
Microprogrammed control units use microcode to generate control signals. Microcode is a set of instructions that tell the control unit how to generate the necessary control signals for each instruction. Microcode is stored in a ROM and can be easily modified to support new instructions or fix errors. Microprogrammed control units are more flexible and easier to modify, but they are slower and more complex than hardwired control units.
Question 2. What is a microprogram? Explain the concept of microprogram routines.
Answer:
A microprogram is a set of microinstructions stored in a control ROM that tells the control unit how to generate the control signals for each machine instruction. Microprogram routines are sequences of microinstructions that perform specific tasks, such as instruction fetch or an effective-address calculation.
Each microinstruction in a routine specifies the set of control signals the control unit issues during one step. For example, a routine might contain microinstructions that load data from memory, perform arithmetic operations, or store results back in memory.
Question 3. Define the term cache memory and explain its importance in computer organization and architecture.
Answer:
Cache memory is a small, high-speed memory that stores frequently accessed data and instructions. It is located between the CPU and main memory and is used to reduce the average time required to access data from the main memory.
Cache memory is important in computer organization and architecture because it helps to improve the overall performance of the CPU. By storing frequently accessed data and instructions in cache memory, the CPU can access them more quickly and reduce the number of times it needs to access main memory. This helps to reduce the memory access time and improve the CPU’s overall speed.
Question 4. Explain the process of associative mapping and its different types.
Answer:
Associative mapping is a cache mapping technique that allows a block of data to be stored in any cache location, instead of being assigned to a specific location.
The different types of associative mapping include:
- Fully associative mapping: In this technique, any block of data can be stored in any cache location. This provides the most flexibility but also requires the most hardware.
- Set-associative mapping: In this technique, each block of data is assigned to a specific set of cache locations. Within each set, the block can be stored in any location. This reduces the hardware requirements while still providing flexibility.
- Direct-mapped mapping: In this technique, each block of data is assigned to a specific cache location. This is the simplest mapping technique, but also the least flexible.
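A small sketch shows how the same address splits into tag, index (set), and offset fields under each scheme; the block size, number of cache lines, and address below are illustrative, and all sizes are assumed to be powers of two:

```python
BLOCK_SIZE = 16   # bytes per block -> 4 offset bits (illustrative)
NUM_LINES = 64    # lines in the cache (illustrative)

def split_address(addr, ways):
    """Return (tag, index, offset) for a `ways`-way set-associative
    cache. ways=1 is direct-mapped; ways=NUM_LINES is fully
    associative (one set, so the index field disappears)."""
    offset = addr % BLOCK_SIZE
    block = addr // BLOCK_SIZE
    num_sets = NUM_LINES // ways
    index = block % num_sets
    tag = block // num_sets
    return tag, index, offset

addr = 0x1234
print(split_address(addr, ways=1))          # direct-mapped
print(split_address(addr, ways=4))          # 4-way set-associative
print(split_address(addr, ways=NUM_LINES))  # fully associative: index 0
```

Moving from direct-mapped toward fully associative shrinks the index field and grows the tag, which is exactly the hardware-flexibility trade-off described above: more comparators, fewer placement restrictions.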
Question 5. What is a pipeline register? Explain the function of the control data register (CDR) in pipeline processing.
Answer:
Pipeline Register:
- A pipeline register is a storage element used in a pipelined processing system.
- It provides a buffering mechanism that helps to improve the overall performance of the system by allowing each stage of the pipeline to operate independently.
- It temporarily holds intermediate results and data as they are passed from one stage of the pipeline to the next.
The function of CDR in Pipeline Processing:
- In a pipelined processing system, the execution of each instruction is divided into multiple stages, each of which performs a specific task.
- By using pipeline registers to temporarily store intermediate results, the system can overlap the execution of multiple instructions, leading to faster overall processing times.
- The use of pipeline registers also helps to minimize the impact of hazards, such as data hazards and control hazards, that can occur when instructions are executed sequentially.
- The CDR plays a crucial role in pipeline processing by storing and passing control signals between pipeline stages.
- It ensures that the correct control signals are applied to each stage of the pipeline, thereby ensuring that instructions are executed in the correct order and that hazards are avoided.
- Without the CDR, it would be difficult to coordinate the execution of instructions in a pipelined processing system.
SECTION-B
Each question carries 5 marks.
Question 6. Draw a diagram of the microprogrammed control unit and explain the function of the control address register (CAR) and the next address generator.
Answer:
Microprogrammed Control Unit Diagram:
        +---------------------+
  +---->|  Control Address    |
  |     |  Register (CAR)     |
  |     +---------------------+
  |                |
  |                v
  |     +---------------------+
  |     |   Control Memory    |
  |     | (Microprogram ROM)  |
  |     +---------------------+
  |                |
  |                v
  |     +---------------------+
  |     |    Control Word     |
  |     |   Register (CWR)    |
  |     +---------------------+
  |          |           |
  |          |           +--> control signals to ALU,
  |          v                registers, and memory
  |     +---------------------+
  +-----|  Next Address Gen.  |
        |       (NAG)         |
        +---------------------+
Function of the Control Address Register (CAR) and Next Address Generator (NAG):
- A microprogrammed control unit is a type of control unit that uses microcode to control the operation of a computer’s hardware.
- The control store contains a set of microinstructions, which are stored in a ROM.
- The microinstructions control the operation of the computer’s hardware, such as the ALU, registers, and memory.
- The control store is accessed using a control address register (CAR).
- The CAR holds the address of the next microinstruction to be executed.
- The microinstruction stored at this address is loaded into a control word register (CWR), which contains the microinstruction’s control signals.
- The control signals are then used to control the computer’s hardware.
The next address generator (NAG) is responsible for generating the next address for the control store. It does this by examining the current microinstruction and determining the next address based on the microinstruction’s opcode and address fields. The NAG uses a set of control logic circuits to generate the next address, which is then loaded into the CAR.
Question 7. Explain the process of synchronous and asynchronous data transfer and the purpose of CPU handshaking.
Answer:
Synchronous Data Transfer:
- Synchronous data transfer is a type of data transfer in which data is transferred between two devices using a common clock signal.
- The sender device sends data to the receiver device only when the clock signal is active.
- The receiver device latches the data on the rising or falling edge of the clock signal, depending on the configuration of the system.
- Synchronous data transfer is commonly used in high-speed communication systems and in digital signal processing applications.
Asynchronous Data Transfer:
- Asynchronous data transfer is a type of data transfer in which data is transferred between two devices without using a common clock signal.
- The sender device sends data to the receiver device without waiting for any clock signal.
- The receiver device detects the start and end of each data byte using a control signal, such as a start bit and stop bit, and latches the data bits accordingly.
- Asynchronous data transfer is commonly used in low-speed communication systems and in computer peripheral devices.
Purpose of CPU Handshaking:
CPU handshaking is a process used in both synchronous and asynchronous data transfer to ensure that data is being transferred correctly between devices. It involves a series of signals sent between the sender and the receiver to confirm that data has been received and to synchronize the transfer of data.
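The request/acknowledge exchange can be modeled as a short Python sketch of a source-initiated transfer; the event names and per-item trace below are illustrative, not a real bus protocol:

```python
def handshake_transfer(items):
    """Model one request/acknowledge cycle per item and return the data
    the destination latched, plus a trace of the signal events."""
    latched, trace = [], []
    for item in items:
        trace.append("source: data valid, REQ high")  # 1. request
        latched.append(item)                          # 2. dest latches data
        trace.append("dest: data taken, ACK high")    # 3. acknowledge
        trace.append("source: REQ low")               # 4. drop request
        trace.append("dest: ACK low")                 # 5. ready for next item
    return latched, trace

data, trace = handshake_transfer([0x41, 0x42])
print(data)        # [65, 66]
print(len(trace))  # 8 events: four per item
```

Because each side waits for the other's signal before proceeding, the transfer works even when the two devices run at different speeds, which is the whole point of handshaking.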