Marquette University

Midterm Examination 1
COEN 183, February 24, 2005

 

Expectations

Your first midterm examination will appear here.

It will be a 75-minute, closed-book exam.

It covers through Chapter 7.

Expectations:

Questions will be similar to homework questions from the book. You will be given a choice of questions, e.g., "Write on four out of five questions." I try to ask questions I think you might hear in a job interview. "I see you took a class on Operating Systems. What can you tell me about ...?"

If it matters, I will be grading the exam, not the TA. I expect proper English.

See also Dr. Barnard's notes. He was using a different edition of our text, so chapter and section numbers do not match, but his chapter summaries are excellent.

Directions:

Read and follow the directions.

  • Write on four of the six questions. If you write more than four questions, only the first four will be graded. You do not get extra credit for doing extra problems.
  • Read the entire exam. If there is anything you do not understand about a question, please ask at once.
  • If you find a question ambiguous, begin your answer by stating clearly what interpretation of the question you intend to answer.
  • Begin your answer to each question at the top of a fresh sheet of paper. [Or -3]
  • Write only on ONE side of a sheet.
  • Be sure your name is on each sheet of paper you hand in. [Or -3]

The shaded, italic boxes represent outlines of solutions or comments. They suggest what a complete solution might be, but they are not complete solutions.

These questions are all taken from H. M. Deitel, Operating Systems, Second Edition, Addison Wesley, 1990.

Grades:

Average score: 71
Range: [42, 100]
Distribution:

Question:

Were you trained somewhere that you must write in paragraphs? Yes, that is often the preferred means, but in many technical circumstances, including this exam, outlines, enumerations, bullets, and pictures may communicate more effectively. At the very least, you should answer by question parts:

  1.  
  2.  
  3.  

Problem 1. Operating system kernel or nucleus

11 of 18 students selected. Average score: 14 of 25
  1. What is the kernel or nucleus of an operating system?
  2. Why is the kernel usually maintained in primary storage?
  3. What functions are normally performed by the kernel?
  1. What is the nucleus?
    • Small portion
    • Most frequently used
    • Direct interface with the hardware
    • Minimum needed for proper operation
    • Includes code to manage processes. NOT "IS the code to manage processes," since the kernel also contains code for other things, as you described in part 3
    • Remains resident in main memory
    • NOT what is loaded first. The bootstrap loader is loaded first, and it is overwritten when its job is done
    • NOT in ROM or equivalent in large OS's such as Windows or Linux
  2. Hint: What is the alternative? Disk? Speed: if the kernel were kept only on disk, executing it would take roughly 1000 times as long.
  3. See Section 2.7.
    The kernel is NOT the Operating System. What parts of the OS do NOT belong to the kernel? Parts that are brought into memory as needed. In many OS's, that is MOST of the OS.

 


Problem 2. "Cooperative" Scheduling

9 of 18 students selected. Average score: 16 of 25

One reason for using a quantum to interrupt a running process after a "reasonable" period of time is to allow the operating system to regain the processor and dispatch the next process. Suppose a system does not have an interrupting clock, and that the only way a process can lose the processor is to relinquish it voluntarily. Suppose that no dispatching (scheduling) mechanism is provided in the operating system.

  1. Describe how a group of user processes could cooperate among themselves to effect a user-controlled dispatching mechanism.
  2. What potential dangers would be inherent in this scheme?
  3. What are the advantages to the users over a system-controlled dispatching scheme?

This is exactly how many real-time systems are programmed. It works well for embedded systems with only one or a small number of processes.

In standard, quantum-driven time-sharing systems, few processes use their allotted quantum. Most processes leave the Running state when they block themselves by requesting some resource. Hence, the design proposed here does not make a BIG difference in typical operation.

Many of you were thinking at too granular a level, some thinking of Windows' windows.

a) How can processes cooperate?

  • Any resource request blocks requesting process
  • Process voluntarily gives up CPU at convenient points in its processing
  • Blocking process selects next process to run
  • Something like
    for (;;) {
        do_work();      /* work a while                      */
        blockMe();      /* voluntarily give up the CPU;
                           blockMe() selects and dispatches
                           the next process to run           */
    }

Describe how this CAN work.

Do NOT make each process keep track of time or operation counts, or we may as well be timer interrupt driven.

In practice, real-time processes do what they must do, then block themselves to let the next process run.
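The scheme above can be sketched in a few lines of Python (an illustration only; the generator-based dispatcher and the names process and dispatch are invented for this sketch, with yield playing the role of blockMe()):

```python
from collections import deque

def process(name, steps):
    """A cooperative process: it works a while, then voluntarily
    gives up the CPU -- here, yield plays the role of blockMe()."""
    for i in range(steps):
        yield f"{name} step {i}"        # "work a while", then block

def dispatch(processes):
    """User-level round-robin dispatcher built entirely from the
    processes' voluntary blocking; no interrupting clock is needed."""
    ready = deque(processes)
    trace = []
    while ready:
        p = ready.popleft()
        try:
            trace.append(next(p))       # run p until it blocks itself
            ready.append(p)             # back of the ready queue
        except StopIteration:
            pass                        # p finished; drop it
    return trace

trace = dispatch([process("A", 2), process("B", 2)])
# A and B alternate only because each yields voluntarily.
```

Note the danger of part b) in miniature: if one process looped without yielding, dispatch() would never regain control.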

b) Dangers?

A process does NOT give up the CPU. Nothing can preempt it, so the whole system hangs.

c) Advantages?

  • Efficient - low overhead
  • Precise control - not interrupted in the middle of critical processing
  • Precise control - very flexible selection of next process
  • Mutual exclusion is easy: You do your critical section before you call blockMe()
  • Simpler operating system

Grading Criteria

C -- Give at least one advantage and one danger

B -- Describe a method for cooperation

 


Problem 3. Blocking Send and Receive Calls

7 of 18 students selected. Average score: 16 of 25

Suppose a system implements interprocess communications with blocking send and receive calls.

  1. Show several (at least three) ways in which such a system could deadlock.
  2. Can such deadlocks occur with non-blocking calls? Why or why not?
  3. Propose a modification to blocking calls that avoids such deadlocks.

"Blocking send" means that on sending a message, control does not return from the function send() until the message is received.

"Blocking receive" means that the function receive() does not return until a message has arrived.

In your homework assignment 3, if you used message passing, you probably used a non-blocking send() and a blocking receive().

A full mailbox is not the issue here.

a) Several ways deadlock can occur with blocking send() and receive(). For example:

  1. Process A sends to Process B at the same time Process B sends to Process A.
  2. Similarly, both Processes A and B receive from each other at the same time.
  3. Process A sends to Process B at the same time Process B sends to Process C at the same time Process C sends to Process A.
  4. Sent message is lost.
  5. Acknowledging message is lost.
  6. Process A sends to or receives from Process A.

What if a message is lost? Process A sends to Process B, but the message is lost. Process B executes a receive. A is waiting for an acknowledgment from B; B is waiting for the message from A, which will never arrive. This is deadlock, NOT indefinite postponement, as I noted on many of your papers: indefinite postponement occurs when a process could proceed but is prevented from doing so by a particular sequence of events. Yes, a lost message causes deadlock, too.

b) Deadlock cannot occur with nonblocking calls. It is not enough to simply assert that. You should show how nonblocking calls prevent deadlock in the cases you gave as your answers to part a). Basically, if you don't block, you are not waiting, so you cannot have a circular wait.

c) Modify blocking calls to avoid deadlock. I suppose you could argue that the nonblocking calls you discussed in part b) are modifications of blocking calls. However, it is better to describe how a time-out feature added to blocking calls would work. Several people proposed allowing a process only to send messages to processes with higher (or lower) process ids. Is that a feasible restriction?
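A time-out modification can be sketched as follows (an illustration only; the helper names send_blocking and receive_blocking and the 0.2-second time-out are invented): the sender still waits until its message is taken, but gives up after a deadline instead of waiting forever, so the symmetric-send deadlock of part a) resolves itself.

```python
import queue
import threading

def send_blocking(mailbox, msg, timeout):
    """Blocking send with a time-out: wait until the receiver takes
    the message, or report failure instead of deadlocking."""
    taken = threading.Event()
    mailbox.put((msg, taken))
    return "delivered" if taken.wait(timeout) else "timed out"

def receive_blocking(mailbox, timeout):
    """Blocking receive: wait for a message, then unblock its sender."""
    msg, taken = mailbox.get(timeout=timeout)
    taken.set()
    return msg

# Processes A and B each send to the other and would only then
# receive: with plain blocking sends, this is the circular wait
# of part a).
box_a, box_b = queue.Queue(), queue.Queue()
results = {}

def proc(me, peer_box):
    results[me] = send_blocking(peer_box, f"hello from {me}", timeout=0.2)

ta = threading.Thread(target=proc, args=("A", box_b))
tb = threading.Thread(target=proc, args=("B", box_a))
ta.start(); tb.start(); ta.join(); tb.join()
# Both sends time out instead of blocking forever.

# When a receiver IS present, delivery still works normally.
box_c, ok = queue.Queue(), {}
ts = threading.Thread(
    target=lambda: ok.update(s=send_blocking(box_c, "hi", timeout=1.0)))
ts.start()
msg = receive_blocking(box_c, timeout=1.0)
ts.join()
```

After a time-out, the sender can retry, pick another partner, or report an error; the point is that it regains control at all.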

 


Problem 4. Havender's Method for Denying "Wait For"

17 of 18 students selected. Average score: 20 of 25
  1. List the four necessary conditions for deadlock.
  2. We can prevent deadlock by denying any one of those four necessary conditions. One method of denying the "wait for" condition suggested by Havender requires that a process request all of the resources it will need before the system lets it proceed. The system grants resources on an "all or none" basis. Explain how this solution prevents deadlocks.
  3. Discuss its advantages and disadvantages.
  1. Necessary Conditions:
    • Mutual exclusion
    • No preemption
    • Hold and wait (or wait for)
    • Circular wait
  2. Holding everything implies no waiting: a process never holds some resources while requesting more, so the "wait for" condition cannot arise.
  3. The advantages of Havender's method for denying the "wait for" condition include
    • Deadlock is prevented
    • Relatively easy to implement
    • Relatively efficient in execution. Once a process is initiated, it does not have to wait for more resources.

    You should explain each one.

    The disadvantages of Havender's method for denying the "wait for" condition include

    • Processes must wait for all their resources to become available
    • Resources requested but not yet assigned are wasted
    • Resources being accumulated for a request, but not yet assigned, are wasted
    • A process may not know its resource needs accurately in advance
    • Indefinite postponement is possible

    You should explain each one.
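Havender's all-or-none rule can be sketched as follows (an illustration only; the class name, the tape/printer resources, and the unit counts are invented). The key property is that a refused process holds nothing, so hold-and-wait, and therefore circular wait, cannot arise:

```python
class AllOrNoneAllocator:
    """Deny "wait for": a process requests everything it will need
    up front and is granted all of it or none of it."""

    def __init__(self, totals):
        self.free = dict(totals)     # resource name -> units available
        self.held = {}               # process -> its full allocation

    def request_all(self, proc, needs):
        """Grant every requested resource, or grant nothing at all."""
        if all(self.free[r] >= n for r, n in needs.items()):
            for r, n in needs.items():
                self.free[r] -= n
            self.held[proc] = needs
            return True              # proc runs and never waits again
        return False                 # proc must wait, holding NOTHING

    def release_all(self, proc):
        for r, n in self.held.pop(proc).items():
            self.free[r] += n

alloc = AllOrNoneAllocator({"tape": 2, "printer": 1})
granted_p = alloc.request_all("P", {"tape": 1, "printer": 1})
granted_q = alloc.request_all("Q", {"tape": 1, "printer": 1})   # refused: printer busy
alloc.release_all("P")
granted_q2 = alloc.request_all("Q", {"tape": 1, "printer": 1})  # now granted
```

Q is refused outright rather than being allowed to hold the tape drive while waiting for the printer, which is exactly the situation that leads to circular wait.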

 


Problem 5. Banker's Algorithm

12 of 18 students selected. Average score: 15 of 25

In the context of deadlock avoidance, discuss whether each of the following states is safe or unsafe. If the state is safe, show how it is possible to guarantee that all processes may complete. If a state is unsafe, show how it is possible for a deadlock to occur.

  1. Define "safe" and "unsafe" states
  2. State A:

                  Currently using   Maximum need
     User 1             1                4
     User 2             4                6
     User 3             5                8
     User 4             0                2
     Available          1

  3. State B:

                  Currently using   Maximum need
     User 1             4                8
     User 2             3                8
     User 3             5                8
     Available          1

Safe: There is an execution sequence whereby all processes can get their maximum needed number of resources and complete execution successfully.

State A is unsafe.

This sequence of events results in a deadlock:

  1. User 1 requests and is granted 1 resource. The system is NOT yet deadlocked; it might be that each (or even only one) process can finish with its current holdings.
  2. User 2 requests 1 resource, and is forced to wait. The system is NOT yet deadlocked; only one process is waiting.
  3. User 3 requests 1 resource, and is forced to wait. User 4 requests 1 resource, and is forced to wait. The system is STILL not deadlocked; there is no circular wait. It COULD happen that User 1 is able to finish and release all its resources, allowing the rest to finish.
  4. User 1 requests another resource. NOW the system is deadlocked!

The "Maximum Need" is not necessarily used. Waiting is not necessarily deadlock.

State B is also unsafe, by similar reasoning.

I expect you to give an explicit sequence of events that results in deadlock.
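The safety test behind this reasoning can be sketched for a single resource type (an illustration only; the function name and argument layout are invented; the numbers are States A and B from the problem, using turnaround of the whole process set as the criterion):

```python
def is_safe(using, maximum, available):
    """Is there SOME order in which every process can obtain its
    maximum need, run to completion, and release all it holds?"""
    need = [m - u for u, m in zip(using, maximum)]
    finished = [False] * len(using)
    free = available
    progress = True
    while progress:
        progress = False
        for i in range(len(using)):
            if not finished[i] and need[i] <= free:
                free += using[i]     # process i completes, releases all
                finished[i] = True
                progress = True
    return all(finished)

state_a = is_safe(using=[1, 4, 5, 0], maximum=[4, 6, 8, 2], available=1)
state_b = is_safe(using=[4, 3, 5], maximum=[8, 8, 8], available=1)
# Both are unsafe: with one unit free, no user's remaining need can
# be met, so no completion order exists.
state_a_if_2 = is_safe(using=[1, 4, 5, 0], maximum=[4, 6, 8, 2], available=2)
# With one MORE free unit, User 2 (need 2) could finish first and
# release its holdings, and a safe sequence exists.
```

Note that "unsafe" does not mean "deadlocked": it means the system can no longer guarantee completion, as the explicit sequences above show.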

 


Problem 6. Process Scheduling

16 of 18 students selected. Average score: 21 of 25

Explain why each of the following is false. It may help to describe the algorithms mentioned and define the terms you use.

  1. Shortest Time Remaining (preemptive) always has a lower average response time than Shortest Job First (non-preemptive).
  2. Shortest Job First (non-preemptive) is fair.
  3. The shorter the job, the better service it should receive.
  4. Because Shortest Job First (non-preemptive) gives preference to short jobs, it is useful in timesharing.

Be careful to answer the question asked. State why each of the following is incorrect.

a) SRT always has a lower average response time than SJF.
Points you might make:

  1. Be careful with your definition of "average". Yes, SRT tends to increase the response time of longer jobs. How does that affect the "average?"
  2. If every request arriving is longer than the time remaining for the currently running process, then SRT and SJF give identical results.
  3. They are different if a long job is running when a shorter one arrives.
  4. Overhead. SRT requires time for record-keeping and for preempting a running process. Outline a situation for which the overhead is greater than the time saved. This requires some care and some detail.
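Points 2 and 3 can be checked with a small simulation (an illustration only; the function names and the workloads, given as (arrival, burst) pairs, are invented; overhead is ignored and turnaround time is used as the response measure):

```python
def sjf(jobs):
    """Non-preemptive Shortest Job First.
    jobs: list of (arrival, burst); returns mean turnaround time."""
    pending = sorted(jobs)               # by arrival time
    ready, t, total = [], 0, 0
    while pending or ready:
        while pending and pending[0][0] <= t:
            ready.append(pending.pop(0))
        if not ready:                    # CPU idle until next arrival
            t = pending[0][0]
            continue
        ready.sort(key=lambda j: j[1])   # shortest burst first
        arrival, burst = ready.pop(0)
        t += burst                       # runs to completion, unpreempted
        total += t - arrival
    return total / len(jobs)

def srt(jobs):
    """Preemptive Shortest Remaining Time, one time unit at a time."""
    remaining = {i: burst for i, (arrival, burst) in enumerate(jobs)}
    t, total = 0, 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:                    # CPU idle
            t += 1
            continue
        i = min(ready, key=lambda k: remaining[k])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            del remaining[i]
            total += t - jobs[i][0]
    return total / len(jobs)

# Each later arrival is longer than the running job's remaining time,
# so SRT never preempts and the two policies behave identically.
same = [(0, 3), (1, 5), (2, 7)]
# A short job arriving under a long one: SRT preempts and does better.
differ = [(0, 8), (1, 1)]
```

On the first workload the two averages are equal, which alone refutes "always lower"; on the second, SRT wins by preempting the long job, at a record-keeping cost this sketch does not charge for.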

b) SJF is fair.
Points you might make:

  1. Define "fair".
  2. If "fair" means "equal treatment", SJF favors short jobs over long ones.
  3. If "fair" means "eventually runs", long jobs under SJF can suffer indefinite postponement.
  4. Be careful what you say about starvation. It is a potential problem. However, it can only occur when you do not have enough CPU power to accomplish all the work you are given, in which case, your problems are really more fundamental than starvation.

c) The shorter the job, the better the service it should receive.
Points you might make:

  1. Depends on your criteria. True, if the objective is to minimize average response or average waiting times.
  2. False for long jobs with high priority.

d) Because SJF gives preference to short jobs, it is useful in timesharing.
Points you might make:

  1. SJF is non-preemptive. Hence, a relatively long process may prevent a later-arriving short process from receiving service.
  2. In a timesharing system, you want to provide good response in the sense that you get a chunk of CPU every second or less, although your task may not finish in that time.
  3. Beware assuming Round-Robin is the answer, just because the book says so.
  4. What is the effect if I want to run a long process interactively?
  5. In an interactive session, how do I know how long my process will take to execute? How can I ask the user?
  6. Be careful talking about "guaranteed" service. Is eng-feta a decent time-sharing system? Does it guarantee service? What can be guaranteed about service?

A different, but also interesting, question is "Why are each of these statements valid?" Can they be both valid and incorrect?

 

 

 