
Midterm Examination 2

 

Your second midterm examination will appear here. It covers through Chapters 5 - 11.

It will be a period-long, closed book exam.

Expectations:

Questions will be similar to homework questions from the book. You will be given a choice of questions. I try to ask questions I think you might hear in a job interview. "I see you took a class on Operating Systems. What can you tell me about ...?"

See also Dr. Barnard's notes. He was using a different edition of our text, so chapter and section numbers do not match, but his chapter summaries are excellent.

Directions:

Read and follow the directions.

  • Write on four of the five questions. If you write more than four questions, only the first four will be graded. You do not get extra credit for doing extra problems.
  • Read the entire exam. If there is anything you do not understand about a question, please ask at once.
  • If you find a question ambiguous, begin your answer by stating clearly what interpretation of the question you intend to answer.
  • Begin your answer to each question at the top of a fresh sheet of paper. [Or -5]
  • Be sure I can read what you write.
  • Write only on one side of a sheet.
  • Be sure your name is on each sheet of paper you hand in. [Or -5]

 

Score Distribution:

 

 

Median score: 72; Mean score: 71

 

 

Problem 1 Cooperative Scheduling

One reason for using a quantum to interrupt a running process after a "reasonable" period of time is to allow the operating system to regain the processor and dispatch the next process. Suppose a system does not have an interrupting clock, and that the only way a process can lose the processor is to relinquish it voluntarily. Suppose that no dispatching (scheduling) mechanism is provided in the operating system.

  1. Describe how a group of user processes could cooperate among themselves to effect a user-controlled dispatching mechanism.
  2. What potential dangers would be inherent in this scheme?
  3. What are the advantages of this scheme to the users over a system-controlled dispatching scheme?

Taken from 2005 midterm exam 1

This is exactly how many real-time systems are programmed. It works well for embedded systems with only one or a small number of processes.

In standard, quantum-driven time-sharing systems, few processes use their allotted quantum. Most processes leave the Running state when they block themselves by requesting some resource. Hence, the design proposed here does not make a BIG difference in typical operation.

a) How can processes cooperate?

  • Any resource request blocks the process making the request
  • Process voluntarily gives up the CPU at convenient points in its processing, not necessarily waiting until it is done; it may never be done
  • Blocking process selects next process to run
  • Something like:
    while (1) {
         work a while;
         blockMe();   // or blockMe(<ID of process to run next>);
    }

Describe how this CAN work. Who manages which process runs next?

Do NOT make each process keep track of time or operation counts, or we may as well be timer interrupt driven.

In practice, real-time processes do what they must do, then block themselves to let the next process run or explicitly pass control to another process.

There is no reason for each process to get the same time, or even to run round-robin. As the designer of the system, I might prefer to have one process run for a short time in every other turn, for example.
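The hand-off idea above can be made concrete with a toy model, a minimal sketch in plain C. The "processes" here are ordinary functions, blockMe() is the invented yield call from the loop above, and run() stands in for the hardware; none of these names are real XINU calls.

```c
#include <assert.h>

#define NPROCS 3

/* Toy model of cooperative dispatch: each "process" is a function that
 * does one slice of work and then names its successor.  blockMe() and
 * worker() are invented for this sketch; they are not XINU calls. */

static int work_done[NPROCS];   /* units of work completed per process */
static int current = 0;         /* index of the "running" process      */

/* Cooperative yield: the blocking process selects the next to run. */
static void blockMe(int next)
{
    current = next;
}

static void worker(int id)
{
    work_done[id]++;                /* work a while           */
    blockMe((id + 1) % NPROCS);     /* hand the CPU to a peer */
}

/* Stand-in for the hardware: keep running whichever process is current. */
static void run(int steps)
{
    int i;
    for (i = 0; i < steps; i++)
        worker(current);
}
```

Note that the dispatching policy lives entirely in blockMe()'s argument: replace the round-robin successor with any choice the designer likes, which is exactly the flexibility claimed in part c).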

b) Dangers?

A process does NOT give up the CPU

c) Advantages?

  • Efficient - low overhead
  • Precise control - not interrupted in the middle of critical processing
  • Precise control - very flexible selection of next process
  • Mutual exclusion is easy: You do your critical section before you call blockMe()
  • Simpler operating system

Grading Criteria

C -- Give at least one advantage and one danger

B -- Describe a method for cooperation

 

We had considered a question on the state machine diagram for the current XINU:

  1. Show the relationship between all of the current process states - PRFREE, PRCURR, PRSUSP, PRREADY, PRWAIT, PRSLEEP.
  2. There are a variety of process queues in the system - show which states they are associated with, and how those queues are ordered.
  3. Assume a new blocking interprocess communication mechanism is added to the system that allows processes to send() and receive() messages to each other. How does this impact the system you've described?
However, the cooperative scheduling question is somewhat related already.

 

Problem 2 Starvation and Deadlock

Consider the sleep queue you are constructing for your current assignment.

  1. Define deadlock.
  2. List the four necessary conditions for deadlock.
  3. Can a process in your sleep queue be subject to starvation (indefinite blocking, indefinite postponement)?
    If it can, explain how, and suggest a mitigation strategy.
    If it cannot, explain why not.
  4. Can a process in your sleep queue be subject to deadlock?
    If it can, explain how, and suggest a mitigation strategy.
    If it cannot, explain why not.
English lesson:
  • A servant waits on. A peer waits for. Usually, you wait for your friends. A process waits for another process or a resource.
  • Never use "ZZZ is when ..." in a definition, unless ZZZ refers to a time. Deadlock is a state, not a time.
Your use of language says a lot about you. Take pride in using it correctly.
  1. P. 245: "A process requests resources, and if the resources are not available at that time, the process enters a wait state. Sometimes, a waiting process is never again able to change state because the resources it has requested are held by another waiting process."
  2. Section 7.2.1:
    1. Mutual exclusion
    2. Hold and wait
    3. No preemption
    4. Circular wait
  3. In this case, a process would starve by being held in the wait queue indefinitely.

    Yes, that could happen if the clock interrupt is disabled indefinitely. Starvation could also result from a programming error on your part.

    Otherwise, no, a process cannot starve by being held in the sleep queue indefinitely. Its time remaining is decremented at each clock tick, and it can never be increased. Even if many new processes are inserted ahead of it in the queue because they have shorter waiting times, they do not delay the re-entry of your process into the Ready list.

    A process is in the sleep queue for a specified length of time. It is not waiting for a resource (unless you consider the clock interrupt as a resource).

     

  4. Consider two possible causes:
    • As a result of its participation in your sleep queue?
      No. Deadlock is impossible if any one of the four necessary conditions is denied. Since a waiting process in your queue is guaranteed to exit your queue (part c), there is no Hold and wait. One could also argue there is no Circular wait.
    • As a result of waiting not related to your queue?
      It would seem that a process in your queue might be participating in a deadlock not related to your queue at all. However, if the process is blocked waiting for other resources, it would not have been allowed to run to place itself in your sleep queue.
      What would be the consequences if one process could place another process in your sleep queue?
Many of you used correct words and phrases to discuss starvation and deadlock but did not interpret them correctly in this context.
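The "cannot starve" argument can be made concrete with a toy delta-keyed sleep queue, in the style XINU uses: each node stores the ticks remaining after its predecessor wakes, so inserting a shorter sleeper ahead of a process never postpones that process's wakeup. All names below are invented for this sketch, and this simplified version wakes at most one process per tick.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy delta list: each node's key is the ticks remaining *after* its
 * predecessor wakes, so only the head's key is decremented per tick. */

typedef struct node {
    int id;
    int delta;              /* ticks after predecessor's wakeup */
    struct node *next;
} node;

static node *head = NULL;

/* Insert a process that should sleep for 'ticks' total. */
void sleep_insert(int id, int ticks)
{
    node **pp = &head;
    while (*pp && (*pp)->delta <= ticks) {
        ticks -= (*pp)->delta;      /* delta-encode the key */
        pp = &(*pp)->next;
    }
    node *n = malloc(sizeof *n);
    n->id = id;
    n->delta = ticks;
    n->next = *pp;
    if (n->next)
        n->next->delta -= ticks;    /* successors' totals are unchanged */
    *pp = n;
}

/* One clock tick: decrement the head's key; return the id of a process
 * that wakes this tick, or -1 if none does. */
int clock_tick(void)
{
    if (head == NULL)
        return -1;
    head->delta--;
    if (head->delta <= 0) {
        node *n = head;
        int id = n->id;
        head = n->next;
        free(n);
        return id;
    }
    return -1;
}
```

In the sketch, a process sleeping 5 ticks still wakes at tick 5 even when a 2-tick sleeper is inserted ahead of it, which is the no-starvation point of part 3.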

 

Problem 3 Page Replacement Strategies

In a virtual memory system implemented by paging, when a page fault occurs, we must bring in a new page. However, usually the operating system must choose a page to be removed from memory to make room for the newly-referenced page.

  1. (5 points) Describe each of the following page replacement strategies:
    1. First-in, first-out (FIFO) page replacement
    2. Optimal page replacement
    3. Least recently used (LRU) page replacement
    4. Least frequently used (LFU) page replacement
    1. FIFO: The page selected for replacement is the one which has been in memory the longest
    2. Optimal: Select the victim so that the run has the fewest page faults. Not necessarily the page next used furthest in the future
    3. LRU: Victim is the page with longest time since last access
    4. LFU: Victim is the page used the least number of times

    Beware of circular definitions, such as "Least recently used is least recently used."

  2. (5 points) Give at least one advantage (+) and one disadvantage (-) of each of the following page replacement strategies:
    1. First-in, first-out (FIFO) page replacement
    2. Optimal page replacement
    3. Least recently used (LRU) page replacement
    4. Least frequently used (LFU) page replacement
    1. FIFO: +: Simple, fair
      -: Can be very bad
    2. Optimal: +: Best performance
      -: Impossible to implement
    3. LRU: +: Good performance
      -: Expensive to implement
    4. LFU: +: Intuitive?
      -: Probably remove the page you just brought in
  3. (10 points) Now, work out a simple example.
    Suppose our process is allocated four (0 - 3) page frames in physical memory.
    Suppose our process consists of nine (1 - 9) pages in its virtual memory.
    The table shows the order in which pages are referenced in the execution of the program.
    Complete the table by showing the number of each page residing in each frame after each page reference shown under each of the page replacement strategies.
    Mark each page fault with an asterisk (*).
    How many page faults were generated, including the three page faults already shown (denoted by *)?

     

    Least recently used (LRU) page replacement
      Page references: 1   2   3   4   5   3   4   1   6   7   8   7   8   9   7   8   9   5   4   5   4   2  
      Frame 0: 1* 1 1 1 5* 5 5 5 6* 6 6 6 6 6 6 6 6 5* 5 5+ 5 5
      Frame 1:   2* 2 2 2 2 2 1* 1 1 1 1 1 9* 9 9 9+ 9 9 9 9 9
      Frame 2:     3* 3 3 3+ 3 3 3 7* 7 7+ 7 7 7+ 7 7 7 4* 4 4+ 4
      Frame 3:       4* 4 4 4+ 4 4 4 8* 8 8+ 8 8 8+ 8 8 8 8 8 2*
      Total number of page faults: 13

     

  4. (5 points) Now I tell you that pages 1 - 4 are code pages (never written), pages 5 - 7 are read-only data pages, and pages 8 and 9 are very frequently written data pages.
    • Is "number of page faults" still the best metric for evaluation of page replacement strategies? Explain.
    • Suggest a modification of LRU that improves its performance for this example

The metric we really care about is time. In this scenario, selecting page 8 or 9 as a victim is roughly twice as expensive as selecting one of pages 1 - 7 because the victim page must first be written back to the disk before it can be replaced.

One modification to LRU is to consider the time since the page was last accessed for pages 8 and 9 to be half its true value. That biases the selection away from those pages. To be selected as a victim, page 8 or 9 must go unaccessed for twice as long as the next candidate.

You could also argue that no modification is necessary. Since I said pages 8 and 9 are accessed "very frequently," they will never be the least recently used.
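The 13-fault total in the table above can be checked mechanically. Here is a small, self-contained LRU simulator in plain C (not tied to any operating system), which counts faults over a reference string:

```c
#include <assert.h>

/* Simulate LRU page replacement over a reference string and return the
 * number of page faults.  frames[] holds the resident pages, stamp[] the
 * time of each frame's last access.  Supports up to 16 frames. */
int lru_faults(const int *refs, int nrefs, int nframes)
{
    int frames[16], stamp[16], used = 0, faults = 0;

    for (int t = 0; t < nrefs; t++) {
        int hit = -1;
        for (int f = 0; f < used; f++)
            if (frames[f] == refs[t])
                hit = f;
        if (hit >= 0) {
            stamp[hit] = t;             /* refresh recency on a hit */
            continue;
        }
        faults++;
        if (used < nframes) {           /* a free frame is available */
            frames[used] = refs[t];
            stamp[used++] = t;
        } else {                        /* evict least recently used */
            int victim = 0;
            for (int f = 1; f < used; f++)
                if (stamp[f] < stamp[victim])
                    victim = f;
            frames[victim] = refs[t];
            stamp[victim] = t;
        }
    }
    return faults;
}
```

Running it on the reference string from the table with four frames yields 13 faults, matching the total above.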

 

Problem 4 File System Implementation

Suppose you lead a team of designers and programmers responsible for adding files to the XINU operating system. You may assume create() and open() are done. Your team is working on read() to support sequential file access.

  1. What do you understand read() should do?
  2. What are the steps that need to be implemented for read()?
    Hint: You might express your answer as pseudo code for read(). I'm expecting 5 - 10 steps, depending on the detail you put into each.
  3. How does the function read() differ when supporting direct file access?
  4. Now suppose you know that a certain application will access the file data in a sequential manner. How could you exploit this information to improve performance?

Hint: The difference between part b) and part d) is speed. Your solution in part b) should be correct, simple, and clear. Your solution in part d) tells how to make the simple solution faster.

  1. At its simplest, read() is responsible for transferring a portion of the contents of a file from where it resides on a disk to a specified location in memory. I know you have used functional equivalents of read() in your programs to read a file.

    See p. 375: Specify the handle of the file and the block in memory where the next (or a specified) block should be read. For the file designated with that handle, transfer the contents of the specified disk block to the designated memory address.

    Be clear whether your interpretation reads a specified number of bytes, reads to a newline character, or reads to EOF.

    A major task of read() is to send disk addresses to the disk controller. The disk controller probably returns a disk block, probably by Direct Memory Access (DMA) or some variant. Read() puts some (or all) of that disk block where the calling program expects to receive it.

     

  2. See p. 375:
    • Pass in a file handle, a result from open();
    • Keep a read pointer/cursor to the next location from which to read, probably in the File Control Block pointed to by the file handle;
    • Find disk address of next block;
    • Initiate transfer from disk block to memory address (DMA?);
    • Wait for transfer;
    • Update read pointer;
  3. Read must specify desired disk address
    Read pointer is unnecessary
  4. (P. 408, Problem 10.6) Probably a read-ahead cache.
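The steps listed in part 2 can be sketched as toy, in-memory C code. All names here are invented for illustration; the "disk" is an array, disk_read() stands in for the controller/DMA transfer, and the File Control Block holds the read cursor that makes sequential access work.

```c
#include <assert.h>
#include <string.h>

#define BLOCKSIZE 16

static char disk[8][BLOCKSIZE];         /* the toy "disk" */

typedef struct {
    int blocks[8];      /* block map: file block index -> disk block no. */
    int nbytes;         /* file length in bytes                          */
    int cursor;         /* next byte to read (the read pointer)          */
} fcb;                  /* toy File Control Block                        */

/* Copy one disk block into a buffer (models the controller/DMA step). */
static void disk_read(int blkno, char *buf)
{
    memcpy(buf, disk[blkno], BLOCKSIZE);
}

/* Sequential read: transfer up to 'count' bytes starting at the cursor.
 * Returns the number of bytes actually read; 0 signals EOF. */
int file_read(fcb *fp, char *dst, int count)
{
    int done = 0;
    char buf[BLOCKSIZE];

    while (done < count && fp->cursor < fp->nbytes) {
        int blk = fp->cursor / BLOCKSIZE;   /* translate to disk address */
        int off = fp->cursor % BLOCKSIZE;
        int n = BLOCKSIZE - off;            /* bytes left in this block  */
        if (n > count - done)
            n = count - done;
        if (n > fp->nbytes - fp->cursor)
            n = fp->nbytes - fp->cursor;    /* stop at EOF               */
        disk_read(fp->blocks[blk], buf);
        memcpy(dst + done, buf + off, n);   /* place data for the caller */
        done += n;
        fp->cursor += n;                    /* update the read pointer   */
    }
    return done;
}
```

Note that a read never returns the whole file unless asked to: the cursor advances only by the bytes delivered, which is the behavior several students missed.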

If you got the idea that read() brings data from a file to a calling program, and that sequential is different from direct, you got 13 points (half credit).

Your score improved as you discussed a cursor pointing to the current location to support sequential calls to read for the same file, handling EOF, translating to disk addresses, how/where data is placed, caching, etc.

Several students assumed read() reads the entire file. How would that work if you have asked to read only one character? You have much experience reading files in a program. Do you usually get the entire file?

Before the final, you might review the distinctions between sequential, direct, and indexed file access.

 

Problem 5 Tricky C program

The code below assumes it is being run in XINU. It models the classic Dining Philosophers problem with MAXFORKS forks (represented by semaphores) and MAXFORKS philosophers (represented by processes). Each philosopher process has an assigned ID (0 <= i < MAXFORKS). This solution assigns to each philosopher process a different priority, equal to the ID value.

void philosopher(int id, semaphore forks[])
{
        int myLeft, myRight;

        myLeft  = id;
        myRight = (id + 1) % MAXFORKS;

        while (1)
        {

                printf("Philosopher %d thinks\n", id);
                sleep(100);

                wait(forks[myLeft]);
                wait(forks[myRight]);

                printf("Philosopher %d eats\n", id);
                sleep(100);

                signal(forks[myRight]);
                signal(forks[myLeft]);

        }
}


int main(int argc, char **argv)
{
        semaphore forks[MAXFORKS];
        int i;

        for (i = 0; i < MAXFORKS; i++)
        {
                forks[i] = newsem(1);
                ready(create((void *)philosopher, INITSTK, i, "PHIL",
                                         2, i, forks), RESCHED_NO);
        }

        resched();
        ...
}
  1. Does this instance of the Dining Philosophers potentially deadlock?
  2. If not, make a rigorous argument for why not.
  3. If so, change the philosopher process to prevent deadlock. Hint: You are welcome to rip this page off the exam packet, put your name on it, make your changes on this copy, and hand it in with your exam solutions.

As I said in class, I do not consider it sexist to assume these philosophers are male. Women have better sense than to get involved in such a contrived dinner.

Dr. Brylow writes:

The exam version has most of the output statements removed to make the code shorter, but if you put a printf() after each wait() and signal(), you get something like:

     Philosopher 4 has acquired left fork
     Philosopher 3 has acquired left fork
     Philosopher 2 has acquired left fork
     Philosopher 1 has acquired left fork
     Philosopher 0 has acquired left fork
and then deadlock.

Interestingly, as I was capturing the output for you above, I realized that this question may be even subtler than I thought. I cleaned the output statements out to make the code shorter for the exam, but that also eliminates the deadlock. Without a potentially blocking I/O call between the two wait() calls, nothing can stop the highest priority process from acquiring both his forks if the scheduler is simple priority-based. Even if he is preempted, priority scheduling indicates that the highest numbered philosopher will be ready to run again before any lower priority philosophers.

If the scheduler uses aging, or if there is any blockable call between the two wait() calls, then deadlock becomes possible again.

But that's really just a quirky, non-circular property of this particular deterministic implementation of the Dining Philosophers. If you add random wait times into the mix, it is still possible for the highest priority philosopher to get his first fork, but not the other, and then we have potential circular wait again.

Ahhh, concurrency.
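For part 3, one standard repair is to break Circular wait by resource ordering: every philosopher picks up the lower-numbered fork first, so the last philosopher reaches for his forks in the opposite order from the others. The sketch below demonstrates the idea using pthread mutexes in place of XINU semaphores and a fixed number of meals per philosopher; it is an illustration of the fix, not the exam's XINU code.

```c
#include <assert.h>
#include <pthread.h>

#define MAXFORKS 5
#define MEALS    3

/* Deadlock-free Dining Philosophers via resource ordering: each
 * philosopher always locks the lower-numbered fork first, which
 * denies Circular wait.  pthread mutexes model the fork semaphores. */

static pthread_mutex_t forks[MAXFORKS];
static int meals[MAXFORKS];

static void *philosopher(void *arg)
{
    int id     = *(int *)arg;
    int left   = id;
    int right  = (id + 1) % MAXFORKS;
    int first  = left < right ? left : right;   /* lower-numbered fork */
    int second = left < right ? right : left;

    for (int m = 0; m < MEALS; m++) {
        pthread_mutex_lock(&forks[first]);
        pthread_mutex_lock(&forks[second]);
        meals[id]++;                            /* eat */
        pthread_mutex_unlock(&forks[second]);
        pthread_mutex_unlock(&forks[first]);
    }
    return NULL;
}

/* Run all philosophers to completion; return the total meals eaten.
 * With fork ordering, this always terminates. */
int dine(void)
{
    pthread_t t[MAXFORKS];
    int ids[MAXFORKS];

    for (int i = 0; i < MAXFORKS; i++) {
        pthread_mutex_init(&forks[i], NULL);
        ids[i] = i;
        pthread_create(&t[i], NULL, philosopher, &ids[i]);
    }
    for (int i = 0; i < MAXFORKS; i++)
        pthread_join(t[i], NULL);

    int total = 0;
    for (int i = 0; i < MAXFORKS; i++)
        total += meals[i];
    return total;
}
```

With everyone grabbing the left fork first, the all-left-forks state in the output above is reachable; with the ordering fix, philosopher MAXFORKS-1 competes for fork 0 first, so at least one philosopher can always complete both acquisitions and eventually release his forks.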

 

 

 