
COSC 3250 / COEN 4820 Operating Systems
Midterm Examination 1
Wednesday, February 24, 2016

 

Your first midterm examination will appear here. It covers Chapters 1 - 5, omitting Section 3.6 (we'll cover that with networking) and Sections 5.9 and 5.10.

It will be a 50 minute, closed book exam.

Expectations:

Section and page number references in old exams refer to the edition of the text in use when that exam was given. You may have to do a little looking to map to the edition currently in use. Also, which chapters were covered on each exam varies a little from year to year.

Some questions may be similar to homework questions from the book. You will be given a choice of questions. I try to ask questions I think you might hear in a job interview. "I see you took a class on Operating Systems. What can you tell me about ...?"

Preparation hints:

Read the assigned chapters in the text. Chapter summaries are especially helpful. Chapters 1 and 2 include excellent summaries of each of the later chapters.

Read and consider Practice Exercises and Exercises at the end of each chapter.

Review authors' slides and study guides from the textbook web site.

Review previous exams listed above.

See also Dr. Barnard's notes. He was using a different edition of our text, so chapter and section numbers do not match, but his chapter summaries are excellent.

Read and follow the directions:

  • Write on four of the five questions. If you write more than four questions, only the first four will be graded. You do not get extra credit for doing extra problems.
  • In the event this exam is interrupted (e.g., fire alarm or bomb threat), students will leave their papers on their desks and evacuate as instructed. The exam will not resume. Papers will be graded based on their current state.
  • Read the entire exam. If there is anything you do not understand about a question, please ask at once.
  • If you find a question ambiguous, begin your answer by stating clearly what interpretation of the question you intend to answer.
  • Begin your answer to each question at the top of a fresh sheet of paper [or -5].
  • Be sure your name is on each sheet of paper you hand in [or -5]. That is because I often separate your papers by problem number for grading and re-assemble to record and return.
  • Write only on one side of a sheet [or -5]. That is because I scan your exam papers, and the backs do not get scanned.
  • No electronic devices of any kind are permitted.
  • Be sure I can read what you write.
  • If I ask questions with parts
    1. . . .
    2. . . .
    your answer should show parts in order
    1. . . .
    2. . . .

The university suggests exam rules:

  1. Silence all electronics (including cell phones, watches, and tablets) and place in your backpack.
  2. No electronic devices of any kind are permitted.
  3. No hoods, hats, or earbuds allowed.
  4. Be sure to visit the rest room prior to the exam.

 

In addition, you will be asked to sign the honor pledge at the top of the exam:

"I recognize the importance of personal integrity in all aspects of life and work. I commit myself to truthfulness, honor and responsibility, by which I earn the respect of others. I support the development of good character and commit myself to uphold the highest standards of academic integrity as an important aspect of personal integrity. My commitment obliges me to conduct myself according to the Marquette University Honor Code."

Name _____________________________   Date ____________

 

 

Score Distribution:

[Histogram of scores]

 

Median: 73.9 ; Mean: 77; Standard Deviation: 13.8

These shaded blocks are intended to suggest solutions; they are not intended to be complete solutions.

I have updated these solutions after grading your papers.

Problem 1 Definitions
Problem 2 Tricky C
Problem 3 Processes vs. Threads
Problem 4 Message Passing
Problem 5 Semaphores

Problem 1 Definitions

Give careful definitions for each term (in the context of computer operating systems):

  1. Batch system
  2. Embedded system
  3. Operating system calls
  4. POSIX
  5. Privileged instructions
  6. Kernel mode vs. user mode
  7. Busy waiting
  8. Deadlock vs. starvation
  9. Priority inversion
  10. Race condition

Vocabulary list. I ask mostly about terms that appear as section, subsection, or subsubsection headings in the text, but any phrase printed in blue in the text is fair game. Here are some of the terms I might ask you to define. This list is not exhaustive:

  1. Batch system: Typically large, no user present. E.g., overnight accounting runs, daily update of Checkmarq records. Batch mode is an old business data processing mode, still VERY widely used in mainframe business data processing. Batch processes are often scheduled. TAbot is a batch process.
  2. Blocking receive vs. Non-blocking receive: [Section 3.4.2.2.]
  3. Blocking send vs. Non-blocking send: [Section 3.4.2.2.] Spell "receive" correctly.
  4. Busy waiting [p. 213]. Waiting while holding the CPU, typically waiting by loop, doing nothing. For example
        while (TestAndSet(&lock)) ;  // Does nothing
    A process in a Waiting state (in the sense of the Ready-Running-Waiting state diagram) is not busy waiting because it is not holding the CPU.
  5. Client-Server computing and Peer-to-peer computing: [Section 1.11.4 & 5.] Client-server is a master-slave relationship. Client requests, server answers. Which is the master?
  6. Clustered systems: [Section 1.3.3] Two or more individual (computer) systems gathered together to accomplish computational work.
  7. Interactive system: Typically, a user is present.
  8. Embedded system: A dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today. -- Wikipedia. Embedded systems usually have real time constraints. In an embedded system, the computer is not the purpose of the device. A car or a cell phone is an embedded system; a Raspberry Pi can be USED in an embedded system, but it is not an embedded system by itself. Some embedded systems are small and relatively simple (e.g., hearing aids), but other are large and state-of-the-art (e.g., medical imaging equipment).
  9. Deadlock vs. starvation [p. 217]. Both concern indefinite wait. "A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused only by another process in the set." Starvation is also an indefinite wait not satisfying the definition of deadlock, often due to an accidental sequence of events. It is starvation if an infinite string of processes keep cutting in ahead of yours, for example, if they have higher priorities.
  10. IPC: [Sect. 3.4 & 3.5] Interprocess Communication: shared memory and message passing. It is important to know at least some of the TLAs (Three Letter Abbreviations).
  11. Kernel: [P. 6] Part of the OS that is running all the time, core OS processes.
  12. Kernel mode vs. user mode: States of the processor. Normally, the processor is in user mode, executing instructions on behalf of the user. The processor must be in kernel mode to execute privileged instructions. Most operating system instructions are executed in user mode; even the OS switches to kernel mode only when it is necessary.
  13. Kernel thread: [Section 4.3] Created by OS processes.
  14. Kernighan's Law: [P. 87] "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." You should recognize the name Kernighan as one of the authors of our C textbook.
  15. Message passing systems: [Sect. 3.3.4.2] A mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. A message-passing facility provides at least two operations: send(message) and receive(message). Be careful to avoid circular definitions: "Message passing systems pass messages" counts zero points.
  16. Non-blocking receive: [Section 3.4.2.2] Receive a message. If no message is there, return from the receive() function call immediately.
  17. Operating system: [P. 3] A program that manages the computer hardware, provides a basis for application programs, and acts as an intermediary between the computer user and the computer hardware.
  18. Operating system calls: [Section 2.3] An interface to operating system services made available to applications software by an operating system.
  19. POSIX: Portable Operating System Interface. We looked at POSIX examples for shared memory, message passing, and Pthreads.
  20. Pipes: [Sect. 3.6.3] A pipe acts as a conduit allowing two processes to communicate. A pipe is usually treated as a special type of file, accessed by ordinary read() and write() system calls.
  21. Priority inversion [p. 217]: A high priority process is forced to wait for a lower priority process to release some resource(s). For example, suppose a high priority process is running and blocks to wait for some resource, say a disk read. Since the high priority process is waiting, the scheduler runs a lower priority process. As the lower priority process is running, the disk read completes, and the high priority process is moved to the Ready list, but it does not run until the lower priority process relinquishes the CPU. To counter this form of priority inversion, when the disk read completes, the scheduler may interrupt the low priority process to let the high priority process run. As another example, a high priority process may request a service from a lower priority process. Then the high priority process must wait until the lower priority process gets to run. Priority inversion does not involve changing the priority of a process.
  22. Privileged instructions: [P. 22] Some hardware instructions that may cause harm to the intended operation of the operating system. "Harm" does not mean physical damage to the host computer. May be executed only in kernel mode. Examples: Instruction to switch to kernel mode, I/O control, timer management, interrupt management.
  23. Process: [P. 105] Program in execution, a unit of work in a modern time-sharing system.
  24. Process Control Block (PCB): [Section 3.1.3] Representation of a process in the operating system.
  25. Process scheduling: [Section 3.2] Switch between processes so frequently that users can interact with each process. Select a process from the Ready list for execution in the CPU.
  26. Process State: [Sect. 3.1.2] Defined by the current activity of the process. Each process may be in one of: New, Running, Waiting, Ready, Terminated.
  27. Process vs. Thread: [P. 105-106, 163] A process is a program in execution. A process may include several threads. A thread is a basic unit of CPU use. It includes a thread ID, a program counter, a register set, and a memory stack.
  28. Protection vs. Security: [Section 1.9.] Protection - Mechanisms for controlling access to the resources provided by a computer system. Protection controls access to computer system resources. Security defends a system from internal and external attacks. Protection <> Security.
  29. Race condition [p. 205]: Several processes access and manipulate the same data concurrently, and the outcome depends on what happens to be the order of execution. Race condition is not about one process being fast and one being slow.
  30. Real-time system: [Section 1.11.8] Typically time-sensitive interaction with external devices.
  31. RPC (in context of inter-process communication): [Sect. 3.6.2] Remote Procedure Calls abstract the procedure-call mechanism for use between systems with network connections using message-based communication.
  32. Shared memory systems: [Sect. 3.4.1] A region of memory residing in the address space of two or more cooperating processes. "Shared memory is memory that is shared" is circular. Alternative: Separate CPUs that address a common pool of memory.
  33. Shared memory vs. Message passing: [Section 3.4.] Be careful to avoid circular definitions: "Shared memory is memory that is shared" counts zero points.
  34. Sockets: [Sect. 3.6.1] An endpoint for communication. A pair of processes communicating over a network employs a pair of sockets. A socket is identified by an IP address and a port number.
  35. Symmetric multiprocessing (SMP): [P. 15] Multiprocessing system in which each processor performs all tasks within the operating system. Does your "definition" hold also for asymmetric?
  36. Thread cancelation: [Sect. 4.6.3] Task of terminating a thread before it has completed.
  37. Thread library: [Section 4.4] Provides a programmer with an API for creating and managing threads.
  38. Thread-Local Storage: [ 4.6.4] Data of which each thread must keep its own copy. Do not confuse with local variables.
  39. Thread pool: [Sect. 4.5.1] Create a number of threads at process startup and place them in a pool, where they sit and wait for work.
  40. Time sharing: [P. 20] The CPU executes multiple jobs by switching among them, but the switches happen so frequently that the users can interact with each program while it is running.
  41. User mode vs. Kernel mode: [Section 1.5.1] User mode is a state of the operating system in which most instructions of user applications are executed. Kernel mode is a state of the operating system in which some critical portions of the operating system are executed. What is it called when a user process makes an operating system call for service? Most operating system code is executed in user mode. An appreciation for user vs. kernel mode is necessary to understand why virtual machines are magic.
  42. User thread: [Section 4.3] Created by user processes.
  43. Virtual machine: [Section 1.11.6] Abstract the hardware of a single computer into several different execution environments, creating the illusion that each computing environment is running its own private computer.
Variants: Compare and contrast; Give an example of.

 

Problem 2. Tricky C

Straight from last week's lecture on the homework.

Consider the C code below for adding a process ID onto the tail of a doubly-linked queue in a static table of structs, with sentinel head and tail entries.

Does this work? Why or why not?

#include <xinu.h>
#define NQENT   (NPROC + 2)

struct qentry
{                       /* one for each process plus two for each list */
    pid_typ next;       /* index of next process or tail               */
    pid_typ prev;       /* index of previous process or head           */
};


struct qentry queuetab[NQENT];  /* global queue table                  */

/**
 * Insert a process at the tail of a queue
 * @param  pid process ID to enqueue
 * @param  q   queue to which the process should be added
 * @return process id of enqueued process
 */
pid_typ enqueue(pid_typ pid, qid_typ q)
{
    int tail = 0;

    queuetab[pid].next = tail;
    queuetab[pid].prev = queuetab[tail].prev;
    queuetab[queuetab[tail].prev].next = pid;
    return pid;
}
No, this does not work:
  1. The queue parameter 'q' is not referenced,
  2. The tail doesn't seem to be set to the tail of the queue, and
  3. Any doubly-linked list insertion requires four assignments in the table, so at least one is missing here. (The tail node's prev link is not updated.)
Hint: Draw the right pictures.

Scoring: Zero points if you said it works. If you were making a completely uninformed guess, the name of the question should bias the guess toward "No." 13 points if you said it did not work. Full credit for two of the errors. Bonus points for three. 10 points if you drew an appropriate picture and did nothing else.

 

Problem 3. Processes vs. Threads

Context: One operating system (Linux) treats processes and threads in the same way, allowing a task to be more akin to a process or a thread (as described in our text) depending on flags passed to the system call that creates it. Another operating system (Windows) treats processes and threads quite differently. The Process Control Block contains pointers to the separate threads belonging to the process.

Question: Contrast these two approaches for modeling processes and threads within the kernel. What differences do the two approaches make? What are some consequences?

Hint: Answer the question asked. This is not a question about Linux or Windows. It tests your ability to reason about differences in the treatments of processes and threads.

See Problem 4.16, p. 193.

Define what you understand "process" to mean.

Define what you understand "thread" to mean.

The Windows interpretation is quite close to that of our text.

Explain how you think the Linux version must work.

Explain what differences you think it makes. For example, the Linux model probably makes it easier for you to flood your own computer with many threads competing for CPU time, slowing down non-participating processes.

 

Problem 4. Message Passing

Suppose that our operating system provides two message passing primitives for direct interprocess communication:

  • send(P, message) - Send a message to process P. Non-blocking.
  • receive(Q, message) - Receive a message from process Q. Blocking.

Processes P (Producer) and C (Consumer) are written as shown:

Process P // Producer

while (TRUE) {
    item = produce();
    send(M, item);
} 
Process M  // Middle

while (TRUE) {


} 
Process C // Consumer

while (TRUE) {
    receive(M, item);
    consume(item);
} 

This pseudo-code intends to implement a Producer-Consumer system as described in our text and in class lectures.

We assume that the operating system maintains one FIFO queue for messages from process P to process M and a separate FIFO queue for messages from process M to process C. We assume message queues are as large as necessary.

  1. (3 points) What does it mean that send() is non-blocking?
  2. (3 points) What does it mean that receive() is blocking?
  3. (8 points) Write pseudo code for the body of the while (TRUE) loop in process M.
  4. (8 points) Explain how your process M works. Does it allow several items to be produced before any are consumed?
  5. (3 points) How is mutual exclusion enforced?

Hint: One answer for part c) is two lines of code. Other correct answers may be longer.

Dennis: "P.S. The answer for exam question #4 is "42"."
  1. Control returns from the function send() at once. send() does not wait to see if its message is received.
  2. If there is no message waiting, receive() waits to return until a message is received.
  3. One solution:
    Process P // Producer
    
    while (TRUE) {
        item = produce();
        send(M, item);
    } 
    Process M  // Middle
    
    while (TRUE) {
        receive(P, item);
        send(C, item);
    } 
    Process C // Consumer
    
    while (TRUE) {
        receive(M, item);
        consume(item);
    } 
    Whatever else Process M does, it must receive() from P and send() to C, or nothing can possibly work. Of course, this solution cannot send an item to C before it receives the item from P, but it does not force strict alternation in the sense that P could produce 10s of items before C consumes any.

     

  4. We need no buffer; the message queues form the buffer. M can receive MANY items before C receives them; they sit in the M -> C message queue. In fact, we have no need of process M; P could send directly to C, and C receive directly from P. Hence, an even better solution is
    Process P // Producer
    
    while (TRUE) {
        item = produce();
        send(C, item);
    } 
    Process C // Consumer
    
    while (TRUE) {
        receive(P, item);
        consume(item);
    } 
    If you attempted to implement a circular buffer inside process M, you have no need for mutual exclusion because your buffer variables are local to one process (somewhat monitor-like). However, unless you assume a more flexible variant of receive(), you probably enforce strict alternation of P and C.

     

  5. There is no need for mutual exclusion at the application level because there are no shared resources. Process M needs no protection from itself, and Processes P and C appear to share no memory with anyone. You also could say the mutual exclusion is enforced by M preventing P and C from communicating.

    Mutual exclusion (no two processes in their critical sections simultaneously, but there are no critical sections) is a different issue from synchronization (C cannot consume something until P produces it).

    Actually, there are shared memory locations within the operating system's message queueing system, but that is the problem of the OS writer (you in Assignment 6).

    This problem illustrates that application developers often can avoid synchronization considerations by using well-considered message passing among well-designed processes. The effect is that the burdens of synchronization and mutual exclusion are relegated (or promoted) to the operating system and its developers. That's us.

Spell "receive" correctly [or -2]. You did not have to know how to spell "receive," it appears four times in the question.

 

Problem 5 Semaphores

Semaphore S is an integer variable, accessed only through two atomic hardware operations defined as

 

SHARED:
   int S;
 
wait(S) { /* "P" - proberen */
   while (S <= 0) ;  /* wait */
   S --;
}
signal(S) { /* "V" - verhogen */
   S ++;
}

 

Suppose we have three cooperating processes sharing Semaphore S, with initial value 1:

 

SHARED:
   Semaphore S = 1;
    
    
Process 0:
    . . .
   14:   while (TRUE) {
   15:      signal(S);
   16:      /* CRITICAL SECTION */
   17:      wait(S);
   18:      /* REMAINDER SECTION */
   19:   }
   (The same line numbers apply in Processes 1 and 2.)
Process 1:
    . . .
   while (TRUE) {
      signal(S);
      /* CRITICAL SECTION */
      wait(S);
      /* REMAINDER SECTION */
   }
Process 2:
    . . .
   while (TRUE) {
      signal(S);
      /* CRITICAL SECTION */
      wait(S);
      /* REMAINDER SECTION */
   }

 

  1. [5 points] What does "mutual exclusion" mean in this example?
  2. [20 points] Explain how the semaphore enforces mutual exclusion. Use a careful stepping through the pseudo-code to make your case.

Do not be confused by three processes, rather than two. The logic is identical.

Do not be confused by one processor vs. several. What matters is that they share one semaphore.

  1. [p. 206] If one process P1 is executing in its critical section (line 16:), no other processes can be executing in their critical sections.

     

  2. To show mutual exclusion is enforced requires a thorough analysis (which cannot succeed here, because it is not enforced). To show mutual exclusion is not enforced requires only one example of a scenario in which it fails. It is similar in mathematics: To show a statement is true, you must prove it; to show the statement is false, you need only give one counter-example.

    This code does not enforce mutual exclusion, as the following example shows.

    Fundamental: We can make no assumptions about the order or the relative speed of processes executing in parallel. Suppose we have the following sequence of events:

     

    Process 0:
        . . .
       while (TRUE) {
          signal(S);  // S++ -> 2
          /* CRITICAL SECTION */
             INTERRUPT ...

    Process 1:
        . . .
       while (TRUE) {
          signal(S);  // S++ -> 3
          /* CRITICAL SECTION */
             // TWO processes are in their critical
             // sections at the same time.
             // Mutual exclusion has FAILED

     

    A key insight in these mutual exclusion discussions is to see when is the worst time for an interrupt to occur.

    Compared to the code I showed in class for the semaphore example, wait(S) and signal(S) are reversed.

Scoring: If you recognized it does not work, but had little idea what is wrong, you received about 8 of the 20 points for part b. If you said it does work, and you gave a reasonable argument (although wrong), you might have received up to 10 of the 20 points.

You did not have to recognize what is wrong (signal() and wait() are reversed) to receive full credit, but you may have received extra credit if you did.

 

 