
Midterm Examination 1

 

Directions:

Read and follow the directions.

  • Write on three of the four questions. If you write more than three questions, only the first three will be graded. You do not get extra credit for doing extra problems.
  • Read the entire exam. If there is anything you do not understand about a question, please ask at once.
  • If you find a question ambiguous, begin your answer by stating clearly what interpretation of the question you intend to answer.
  • Begin your answer to each question at the top of a fresh sheet of paper. [Or -5]
  • Be sure I can read what you write.
  • Write only on one side of a sheet.
  • Be sure your name is on each sheet of paper you hand in. [Or -5]

 

Score Distribution:

 

 

Problem 1 Definitions

Give careful definitions for each term (in the context of computer operating systems):

  1. Symmetric multiprocessing (SMP)
  2. Operating system calls
  3. Protection
  4. Virtual machine
  5. Shared memory systems
  6. Message passing systems
  7. IPC
  8. Sockets
  9. Pipes
  10. RPC

Warning: A definition that starts "Z is when ...," or "Z is where ..." earns -5 points.

  1. A multiprocessing system in which each processor performs all tasks within the operating system (p. 14). Does your "definition" also hold for asymmetric multiprocessing?
  2. An interface to operating system services made available to applications software by an operating system (Section 2.3, p. 55)
  3. Mechanisms for controlling access to the resources provided by a computer system (Sect. 1.9, p. 26; p. 51; Sect. 2.4.6, p. 66; and others). Protection is not the same as Security.
  4. Abstract the hardware of a single computer into several different execution environments, creating the illusion that each separate execution environment is running its own private computer (Sect. 2.8, p. 76)
  5. A region of memory residing in the address space of two or more cooperating processes (Sect. 3.4.1, p. 117). "Shared memory is memory that is shared" is circular.
    Alternative: Separate CPUs that address a common pool of memory
  6. A mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. A message-passing facility provides at least two operations: send(message) and receive(message). (Sect. 3.4.2, p. 119)
  7. Interprocess Communication: shared memory and message passing. (Sect. 3.4 & 3.5) It is important to know at least some of the TLAs (Three Letter Abbreviations)
  8. An endpoint for communication. A pair of processes communicating over a network employs a pair of sockets. A socket is identified by an IP address and a port number. (Sect. 3.6.1, p. 128)
  9. A pipe acts as a conduit allowing two processes to communicate. A pipe is usually treated as a special type of file, accessed by ordinary read() and write() system calls. (Sect. 3.6.3, p. 134) (See the sketch after this list.)
  10. Remote Procedure Calls abstract the procedure-call mechanism for use between systems with network connections using message-based communication. (Sect. 3.6.2, p. 131)
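
An example is not a definition, but to make the pipe entry concrete, here is a minimal sketch (mine, not from the exam or the textbook) of two processes communicating through an ordinary POSIX pipe: the parent writes a short message, and the child reads it with the same read() call it would use on a file.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
  int fd[2];                     /* fd[0] = read end, fd[1] = write end */
  if (pipe(fd) == -1) {
    perror("pipe");
    return 1;
  }

  pid_t pid = fork();
  if (0 == pid) {                /* child: reads from the pipe */
    char buf[64];
    close(fd[1]);                /* child does not write */
    ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
    if (n > 0) {
      buf[n] = '\0';
      printf("CHILD read: %s\n", buf);
    }
    close(fd[0]);
    return 0;
  }

  /* parent: writes into the pipe, then waits for the child */
  const char *msg = "hello through a pipe";
  close(fd[0]);                  /* parent does not read */
  write(fd[1], msg, strlen(msg));
  close(fd[1]);                  /* reader now sees end-of-file */
  wait(NULL);
  return 0;
}

Closing the unused ends matters: the child sees end-of-file only after every write end of the pipe has been closed.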

Except for part 1, each term is the title of a section or subsection heading. These are not obscure terms.

An example is not a definition.

As you prepare for our next exam, I encourage you to maintain a vocabulary list.

Problem 2 How is an interrupt handled?

Suppose we have a system with 50 active processes, P0, P1, ..., P49, some user processes and some kernel processes. For simplicity, we are not concerned with threads in this question. Ready processes are scheduled to Run by some scheduler. The scheduling algorithm does not matter here. Suppose process P13 makes a disk_read() operating system call. Assume that completion of disk transfer is signaled by an interrupt from the disk controller. Trace as accurately as you can what happens in the CPU (not the disk) until process P13 has received its requested information from the disk.

Hints:

  • The question is about interrupt handling
  • I am looking for a trace of what processes run, why, and what they do.
  • I am not looking for instruction-level explanations.
  • I am not looking for a discussion of disk access, operation, or transfer.
  1. At time t = 0, which process is running? What is it doing?
  2. Who runs next? Why? What does it do?
  3. Who runs next? Why? What does it do?
  4. . . . What happens in the meantime?
  5. When does P13 receive its requested information from the disk?
  6. What must happen first? Who does that? Why?
  7. What role does the scheduler play?
Answers:

  1. P13 is running. It makes a disk_read() operating system call.
  2. The portion of the operating system in which the function disk_read() resides.
    It is an interesting design choice whether the disk_read() code executes as part of process P13 in its address space or in a kernel mode FileSystem process.
    In either case, disk_read() passes appropriate parameters to the disk controller and blocks process P13. That is a little easier to think of if the disk_read() code is executing in process P13, because then it is blocking itself.
  3. Scheduler. We have blocked P13, and we need the next process to run. The Scheduler selects some process from the Ready list.
  4. Disk access takes a relatively long time. The disk controller is copying contents of the disk into some memory buffer. Meanwhile, other processes are doing their thing, with the Scheduler working as necessary.
  5. P13 must be running to receive its requested information from the disk. (Note that I did not ask how disk access works. That is a question for later chapters.) P13 cannot receive while it is blocked.
  6. At the completion of the disk transfer, the disk controller raises an interrupt.
    Hardware transfers control to the/an interrupt handler.
    The interrupt handler MAY disable additional interrupts.
    The interrupt handler saves the state of the currently running process.
    The interrupt handler determines which interrupt has occurred.
    The interrupt handler moves the PCB for process P13 from the list of processes waiting for disk I/O into the Ready queue. (A toy sketch of this step appears after this list.)
    If it disabled interrupts, the interrupt handler re-enables interrupts.
    The interrupt handler may either restore the previously running process, or it may call the Scheduler.
    Eventually, process P13 is selected by the Scheduler to Run.
  7. See above
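
To make part 6 concrete, here is a toy, user-space model; the names pcb, ready_queue, and disk_wait_queue are invented for illustration, and real kernel code looks nothing like this. It shows only the bookkeeping step the handler must do: move P13's PCB from the disk-wait list to the Ready queue so the Scheduler can later pick it to Run.

#include <stdio.h>

/* Toy model of part 6: not real kernel code; pcb, ready_queue, and
   disk_wait_queue are invented names for illustration only. */

enum pstate { READY, RUNNING, WAITING };

struct pcb {
  int pid;
  enum pstate state;
  struct pcb *next;
};

static struct pcb *ready_queue = NULL;     /* processes eligible to Run     */
static struct pcb *disk_wait_queue = NULL; /* processes blocked on disk I/O */

static void enqueue(struct pcb **q, struct pcb *p) {
  p->next = NULL;
  while (*q) q = &(*q)->next;   /* walk to the tail */
  *q = p;
}

static struct pcb *dequeue(struct pcb **q) {
  struct pcb *p = *q;
  if (p) *q = p->next;
  return p;
}

/* What the disk interrupt handler does at this level of abstraction:
   find the waiting process, mark it Ready, hand it to the Scheduler. */
static void disk_interrupt_handler(void) {
  struct pcb *p = dequeue(&disk_wait_queue);  /* P13 was blocked here */
  if (p) {
    p->state = READY;
    enqueue(&ready_queue, p);                 /* Waiting --> Ready */
  }
}

int main(void) {
  struct pcb p13 = { 13, WAITING, NULL };
  enqueue(&disk_wait_queue, &p13);   /* disk_read() blocked P13 earlier */

  disk_interrupt_handler();          /* transfer complete: interrupt fires */

  struct pcb *next = dequeue(&ready_queue);  /* the Scheduler picks P13 later */
  if (next) printf("P%d is Ready and will Run\n", next->pid);
  return 0;
}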

This answer requires integration of process Running --> Waiting --> Ready --> Running; of the operating system call for disk I/O; and of interrupt handling.

The problem asked "How is an interrupt handled?" I'd put the answer to that in part 6. You might put it elsewhere, but if that explanation is missing, you miss 30 of 100 points.

Problem 3 Context swap

Describe the actions taken by the kernel to context-swap (or switch) between processes.

See Problem 3.7, p. 142. See Dr. Brylow's Feb. 11 lecture and homework Assignment #4.

One purpose of this question is to reward students who started to work on Assignment #4 before the last minute.

See Sect. 3.2.3, p. 110.

There was a lot of fuzzy thinking about the exact meaning of "running." In a uni-processor system, only one process can be running (holding the CPU) at a time.

Especially, how is the Program Counter handled? How does the CPU get to start executing the new process?

The Process Control Block (PCB) is where everything is saved; you do not save the PCB itself. The PCB is in memory.

A context swap does not save "memory." What information about the use of memory by a process does it save?

"Saves the state of the process"? What is the "state of the process"?

The order in which things are saved matters. You must save the Program Counter first because subsequent activity changes it. You must restore the Program Counter last because the very next instruction is loaded from wherever the PC points. Many of you did not seem to understand that the context swapping itself was changing the Program Counter.
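One way to see the save-one-context, restore-another idea at user level is the POSIX ucontext interface (obsolescent, but still available on Linux/glibc). The sketch below is cooperative and lives entirely in user space, so it is not what the kernel does on a real context swap, but swapcontext() really does save the current registers and program counter into one structure and restore them from another.

#include <stdio.h>
#include <ucontext.h>

/* User-level illustration only: swapcontext() saves the current registers
   and program counter into one ucontext_t and restores another.  This is
   cooperative and in user space; it is not a kernel context switch. */

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];          /* stack for the second context */

static void task(void) {
  printf("task: running on task_ctx's saved registers and PC\n");
  swapcontext(&task_ctx, &main_ctx);        /* save task, restore main */
  printf("task: resumed exactly where it left off\n");
}                                           /* returning runs uc_link (main) */

int main(void) {
  getcontext(&task_ctx);                    /* start from a valid context */
  task_ctx.uc_stack.ss_sp = task_stack;
  task_ctx.uc_stack.ss_size = sizeof(task_stack);
  task_ctx.uc_link = &main_ctx;             /* where to go when task returns */
  makecontext(&task_ctx, task, 0);

  printf("main: swapping to task\n");
  swapcontext(&main_ctx, &task_ctx);        /* save main, restore task */
  printf("main: back in main, swapping again\n");
  swapcontext(&main_ctx, &task_ctx);        /* task resumes after its swap */
  printf("main: done\n");
  return 0;
}

Conceptually, the ordering issue above is hidden inside swapcontext(): the saved program counter must point just past the call, and restoring the other context's program counter is the very last thing that happens.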

Yes, this question overlaps with Problem 2.

Problem 4 Threads

This program uses the Pthreads API:

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h> /* wait() */
#include <unistd.h>   /* fork() */

int value = 0;
void *runner(void *param); /* the thread */

int main(int argc, char *argv[]) {
  int pid;
  pthread_t tid; /* the thread identifier */
  pthread_attr_t attr; /* set of attributes for the thread */

  pid = fork();
  if (0 == pid) {  /* Child process */
      pthread_attr_init(&attr);
      pthread_create(&tid, &attr, runner, NULL);
      pthread_join(tid, NULL);
      printf("CHILD : value = %d\n", value);  /* LINE C */
  }
  else if (0 < pid) {  /* Parent process */
      wait(NULL);
      printf("PARENT: value = %d\n", value);  /* LINE P */
  }
}

/**
 * The thread will begin control in this function
 */
void *runner(void *param) {
  value = 5;
  pthread_exit(0);
}
  1. What does it print at LINE C?
  2. What does it print at LINE P?
  3. Explain

See Problem 4.13, p. 175, and Figure 4.14

When I run this program, I get:

CHILD : value = 5
PARENT: value = 0
make: *** [run] Error 18

fork() gives the child process a COPY of the parent's address space.
For discussion, we distinguish parent.value and child.value
They are copied, not shared.
When the child is created, parent.value = child.value = 0.

In the child process, we create a new thread. The child process and its thread share child.value.

The thread runs runner, which sets child.value <-- 5.
Runner exits, and the child process's thread is destroyed.

The child process prints its value of child.value, which is 5, and drops out the bottom of its code without returning the int promised by the declaration int main().

The parent process has been waiting for its child process to finish.
The parent process prints its parent.value, which is still zero, and drops out the bottom of its code without returning the int promised by the declaration int main().

 

 