Name   ______________________            

COSC 3250 / COEN 4820 Operating Systems
Midterm Examination 1
Wednesday, February 21, 2018

 

Your first midterm examination will appear here. It covers Chapters 1 - 5, omitting Section 3.6 (we'll cover that with networking) and Sections 5.9 & 5.10.

It will be a 50 minute, closed book exam.

Expectations:

Section and page number references in old exams refer to the edition of the text in use when that exam was given. You may have to do a little looking to map to the edition currently in use. Also, which chapters were covered on each exam varies a little from year to year.

Some questions may be similar to homework questions from the book. You will be given a choice of questions. I try to ask questions I think you might hear in a job interview. "I see you took a class on Operating Systems. What can you tell me about ...?"

Preparation hints:

Read the assigned chapters in the text. Chapter summaries are especially helpful. Chapters 1 and 2 include excellent summaries of each of the later chapters.

Read and consider Practice Exercises and Exercises at the end of each chapter.

Review authors' slides and study guides from the textbook web site.

Review previous exams listed above.

See also Dr. Barnard's notes. He was using a different edition of our text, so chapter and section numbers do not match, but his chapter summaries are excellent.

Read and follow the directions:

  • Write on four of the five questions. If you write more than four questions, only the first four will be graded. You do not get extra credit for doing extra problems.
  • Each problem is worth 25 points.
  • In the event this exam is interrupted (e.g., fire alarm or bomb threat), students will leave their papers on their desks and evacuate as instructed. The exam will not resume. Papers will be graded based on their current state.
  • Read the entire exam. If there is anything you do not understand about a question, please ask at once.
  • If you find a question ambiguous, begin your answer by stating clearly what interpretation of the question you intend to answer.
  • Begin your answer to each question at the top of a fresh sheet of paper [or -5].
  • Be sure your name is on each sheet of paper you hand in [or -5]. That is because I often separate your papers by problem number for grading and re-assemble to record and return.
  • Write only on one side of a sheet [or -5]. That is because I scan your exam papers, and the backs do not get scanned.
  • No electronic devices of any kind are permitted.
  • Be sure I can read what you write.
  • If I ask questions with parts
    1. . . .
    2. . . .
    your answer should show parts in order
    1. . . .
    2. . . .
  • The instructors reserve the right to assign bonus points beyond the stated value of the problem for exceptionally insightful answers. Conversely, we reserve the right to assign negative scores to answers that show less understanding than a blank sheet of paper. Both of these are rare events.

The university suggests exam rules:

  1. Silence all electronics (including cell phones, watches, and tablets) and place in your backpack.
  2. No electronic devices of any kind are permitted.
  3. No hoods, hats, or earbuds allowed.
  4. Be sure to visit the rest room prior to the exam.

 

In addition, you will be asked to sign the honor pledge at the top of the exam:

"I recognize the importance of personal integrity in all aspects of life and work. I commit myself to truthfulness, honor and responsibility, by which I earn the respect of others. I support the development of good character and commit myself to uphold the highest standards of academic integrity as an important aspect of personal integrity. My commitment obliges me to conduct myself according to the Marquette University Honor Code."

Name _____________________________   Date ____________

 

 

Score Distribution:

Histogram of scores

 

Median: 73; Mean: 72.1; Standard Deviation: 16

These shaded blocks are intended to suggest solutions; they are not intended to be complete solutions.

Problem 1 Compare and Contrast
Problem 2 Tricky C
Problem 3 Process Context Switch
Problem 4 Threads
Problem 5 Synchronization

 

Problem 1 Compare and Contrast each pair of terms

(in the context of computer operating systems):

  1. Operating system call and User function call
  2. Shared memory and Message passing
  3. Blocking send and Non-blocking send
  4. Thread Control Block and Process Control Block

Hint: "Compare and contrast A and B" questions evoke a standard pattern for an answer:

    1. Define A
    2. Define B
    3. Tell how A and B are similar (that's the "compare" part)
    4. Tell how A and B are different (that's the "contrast" part)

That's what your answers to each part should look like.

Warning: A definition that starts "Z is when ...," or "Z is where ..." earns -5 points.

Similar to 2015.

  1. Operating system calls: [Section 2.3] An interface to operating system services made available to applications software by an operating system. Operating system calls must be executed in a protected mode only if they manipulate sensitive internal operating system data structures. An operating system call often interacts with hardware, but that is not necessary. A user program/process can call an operating system function, e.g., fork() or fileopen().
    User function call: Customary call, compute, and return.
    Similar: Both appear in application code as function calls. Both call, compute, and return, with a few exceptions. Most function calls of either kind execute in the process space of the caller, not in their own process.
    Different: Operating system calls may execute in kernel/privileged mode
    What is the role of the operating system in each?
    In the context of operating systems, "process" is a technical term. A function call is rarely a process. -1 if you used "process" in the English sense.

     

  2. Shared memory: [Sect. 3.4.1] A region of memory residing in the address space of two or more cooperating processes. "Shared memory is memory that is shared" is circular. Shared memory has other uses beyond buffers.
    Alternative: Separate CPUs that address a common pool of memory
    Message passing: [Sect. 3.3.4.2] A mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. A message-passing facility provides at least two operations: send(message) and receive(message). Be careful to avoid circular definitions: "Message passing systems pass messages" counts zero points. "recieve" earns -2 points.
    Similar: Mechanisms for Interprocess Communication (IPC)
    Different: Once the shared memory mechanism is set up, the operating system plays (little) further role. The operating system is an active participant in sending and receiving messages.
    What is the role of the operating system in each?
    We have discussed IPC using producer/consumer examples, but IPC is helpful in many other circumstances.

     

  3. See Section 3.4.2.2
    Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox. The function send() does not return until the message is receive()ed - the current message, not a previous one. A message must be sent before it can be receive()ed.
    Non-blocking send: The sending process sends the message and resumes execution. This is the usual mechanism.
    Similar: Both send messages.
    Different: Non-blocking send returns immediately with no assurance that the message was received. Blocking send waits, perhaps forever, and returns when its message has been received.

     

  4. [P. 105-106, 163] A process is a program in execution. A process may include several threads. A thread is a basic unit of CPU use. It includes a thread ID, a program counter, a register set, and a memory stack.
    Thread Control Block: Representation of a thread in the operating system. Data attributes include memory stack parameters, space to store registers, and other information unique to a thread.
    Process Control Block: [Section 3.1.3] Representation of a process in the operating system. Attributes include state, CPU register storage space, memory management information.
    Similar: Both are operating systems structures for bookkeeping. Some of the same types of information are stored in both.
    Different: Logically, thread control blocks "belong" to the process control block of their parents, and they are smaller.

     

 

Vocabulary list. I ask mostly about terms that appear as section, subsection, or subsubsection headings in the text, but any phrase printed in blue in the text is fair game. Here are some of the terms I might ask you to define. This list is not exhaustive:

  1. Batch system: Typically large, run at off-peak times, with no user present. E.g., overnight accounting runs, daily update of Checkmarq records. Batch mode is an old business data processing mode, still VERY widely used in mainframe business data processing. TABot is a batch system.
  2. Blocking receive vs. Non-blocking receive: [Section 3.4.2.2.]
  3. Blocking send vs. Non-blocking send: [Section 3.4.2.2.] Spell "receive" correctly.
  4. Busy waiting [p. 213]. Waiting while holding the CPU, typically by looping, doing nothing. For example
          while (TestAndSet(&lock)) ;  // Does nothing
    A process in a Waiting state (in the sense of the Ready-Running-Waiting state diagram) is not busy waiting because it is not holding the CPU.
  5. Client-Server computing and Peer-to-peer computing: [Section 1.11.4 & 5.] Client-server is a master-slave relationship. Client requests, server answers. Which is the master?
  6. Clustered systems: [Section 1.3.3] Two or more individual (computer) systems gathered together to accomplish computational work.
  7. Interactive system: Typically, a user is present.
  8. Embedded system: A dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today. -- Wikipedia.
  9. Deadlock vs. starvation [p. 217]. Both concern indefinite wait. "A set of processes is in a deadlocked state when every process in the set is waiting for an event that can be caused only by another process in the set." Starvation is also an indefinite wait not satisfying the definition of deadlock, often due to an accidental sequence of events.
  10. IPC: [Sect. 3.4 & 3.5] Interprocess Communication: shared memory and message passing. It is important to know at least some of the TLAs (Three Letter Abbreviations).
  11. Kernel: [P. 6] Part of the OS that is running all the time, core OS processes.
  12. Kernel mode vs. user mode: States of the processor. Normally, the processor is in user mode, executing instructions on behalf of the user. The processor must be in kernel mode to execute privileged instructions.
  13. Kernel thread: [Section 4.3] Created by OS processes.
  14. Kernighan's Law: [P. 87] "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." You should recognize the name Kernighan as one of the authors of our C textbook.
  15. Message passing systems: [Sect. 3.3.4.2] A mechanism to allow processes to communicate and to synchronize their actions without sharing the same address space. A message-passing facility provides at least two operations: send(message) and receive(message). Be careful to avoid circular definitions: "Message passing systems pass messages" counts zero points. "recieve" earns -2 points.
  16. Non-blocking receive: [Section 3.4.2.2] Receive a message. If no message is there, return from the receive() function call immediately.
  17. Operating system: [P. 3] A program that manages the computer hardware, provides a basis for application programs, and acts as an intermediary between the computer user and the computer hardware.
  18. Operating system calls: [Section 2.3] An interface to operating system services made available to applications software by an operating system.
  19. Priority inversion [p. 217]: A high priority process is forced to wait for a lower priority process to release some resource(s).
  20. Race condition [p. 205]: Several processes access and manipulate the same data concurrently, and the outcome depends on what happens to be the order of execution.
  21. POSIX: Portable Operating System Interface. We looked at POSIX examples for shared memory, message passing, and Pthreads.
  22. Pipes: [Sect. 3.6.3] A pipe acts as a conduit allowing two processes to communicate. A pipe is usually treated as a special type of file, accessed by ordinary read() and write() system calls.
  23. Privileged instructions: [P. 22] Some hardware instructions that may cause harm. May be executed only in kernel mode. Examples: Instruction to switch to kernel mode, I/O control, timer management, interrupt management
  24. Process: [P. 105] Program in execution, a unit of work in a modern time-sharing system.
  25. Process Control Block (PCB): [Section 3.1.3] Representation of a process in the operating system.
  26. Process scheduling: [Section 3.2] Switch between processes so frequently that users can interact with each process. Select a process from the Ready list for execution in the CPU.
  27. Process State: [Sect. 3.1.2] Defined by the current activity of the process. Each process may be in one of: New, Running, Waiting, Ready, Terminated.
  28. Process vs. Thread: [P. 105-106, 163] A process is a program in execution. A process may include several threads. A thread is a basic unit of CPU use. It includes a thread ID, a program counter, a register set, and a memory stack.
  29. Protection vs. Security: [Section 1.9.] Protection - Mechanisms for controlling access to the resources provided by a computer system. Protection controls access to computer system resources. Security defends a system from internal and external attacks. Protection ≠ Security.
  30. Real-time system: [Section 1.11.8] Typically time-sensitive interaction with external devices.
  31. RPC (in context of inter-process communication): [Sect. 3.6.2] Remote Procedure Calls abstract the procedure-call mechanism for use between systems with network connections using message-based communication.
  32. Shared memory systems: [Sect. 3.4.1] A region of memory residing in the address space of two or more cooperating processes. "Shared memory is memory that is shared" is circular. Alternative: Separate CPUs that address a common pool of memory
  33. Shared memory vs. Message passing: [Section 3.4.] Be careful to avoid circular definitions: "Shared memory is memory that is shared" counts zero points.
  34. Sockets: [Sect. 3.6.1] An endpoint for communication. A pair of processes communicating over a network employs a pair of sockets. A socket is identified by an IP address and a port number.
  35. Symmetric multiprocessing (SMP): [P. 15] Multiprocessing system in which each processor performs all tasks within the operating system. Does your "definition" hold also for asymmetric?
  36. Thread cancelation: [Sect. 4.6.3] Task of terminating a thread before it has completed.
  37. Thread library: [Section 4.4] Provides a programmer with an API for creating and managing threads.
  38. Thread-Local Storage: [Sect. 4.6.4] Data of which each thread must keep its own copy. Do not confuse with local variables.
  39. Thread pool: [Sect. 4.5.1] Create a number of threads at process startup and place them in a pool, where they sit and wait for work.
  40. Time sharing: [P. 20] The CPU executes multiple jobs by switching among them, but the switches happen so frequently that the users can interact with each program while it is running.
  41. User mode vs. Kernel mode: [Section 1.5.1] User mode is a state of the operating system in which most instructions of user applications are executed. Kernel mode is a state of the operating system in which some critical portions of the operating system are executed. What is it called when a user process makes an operating system call for service? Most operating system code is executed in user mode.
  42. User thread: [Section 4.3] Created by user processes.
  43. Virtual machine: [Section 1.11.6] Abstract the hardware of a single computer into several different execution environments, creating the illusion that each computing environment is running its own private computer.

 

Problem 2. Tricky C

Here is a tricky C program:

#include <stdlib.h>
#include <stdio.h>

#define MAX 127
int i = 23456;

int getNumber(void) {
	printf ("In getNumber, i = %d\n", i);
	return ++i;
}

void main(int argc, char** argv) {
    int *array;
    int index = 0;
    int current = 0;
    int j = 0;

    while ( EOF != (current = getNumber()) ) { /* EOF has the value -1 */
    	printf ("In while, index, current = %d, %d\n", index, current);
    	if (! index) { array = (int *)malloc(MAX); }
    	array[index++] = current;
    	index &= MAX;
    }
    for (; j < index; j++) {
    	printf("%10d: %10d\n", j, array[j]);
    }
}

What does it print? Explain.

Repeated from 2008 exam.

When I run it with a particular compiler, I get

In getNumber, i = 23456
In while, index, current = 0, 23457
In getNumber, i = 23457
In while, index, current = 1, 23458
 . . .
In getNumber, i = 23583
In while, index, current = 127, 23584
In getNumber, i = 23584
In while, index, current = 0, 23585
*** [a] Error -1073741819
Dr. Brylow writes:
Looks like it ought to read and store an arbitrary list of numbers, but it has three major problems:
  • the malloc size is only one fourth of what is intended,
  • the return value from malloc() is not checked against NULL, and most importantly,
  • each successive dynamically allocated block is lost because the pointers aren't being stored anywhere else.
Also, it is just badly written, with the last line depending on a fragile property of the MAX constant being one less than a power of 2.

Having a macro named "EOF" does not imply there is a file anywhere. That is bad, confusing programming practice. It violates the Principle of Least Astonishment, but it runs, and you will see worse in the wild.

Several students suggested one of the counters might overflow. Try it. What happens?

 

Problem 3. Process Context Switch

In the context of scheduling and dispatching of processes on a single-core processor,

  1. (5 points) What is the purpose of a context switch?
  2. (20 points) Describe the steps, in order, taken by a kernel to context-switch between processes.

Repeated from 2014 and 2015 exams. See Section 3.2.3. This question reflects what you did for Project 4; it should be a give-away.

A context switch changes which process holds the CPU. It does not interrupt a running process, nor does it do the work of the scheduler to determine which is the next process to run.

At a high level, the assignment saves state in the PCB. To paraphrase the comments in the TODO section of the code:

  • save callee-save ("non-volatile") registers to the stack. "Save to memory" is correct, but not sufficiently precise. In a different design, they could be saved to the PCB.
  • save outgoing stack pointer to Process Control Block. Many students missed this step, without which we have no idea where to find the stack for the incoming process.
  • load incoming stack pointer from PCB
  • restore callee-save ("non-volatile") registers
  • restore argument registers (for first time process is run)
  • jump or return to destination address

Save and restore the "state"? What constitutes the state of a process?

How is control handed over to the incoming process? (or -5)

Think for a moment. Can you write code to save or to restore the Program Counter? No. Why not? In our design, the last instruction executed by the context switch is RETURN, as if from a function call.

 

Problem 4. Threads

  1. (9 points) What are threads?
  2. (8 points) Why might we use threads? List four of their (potential) benefits.
  3. (8 points) List four challenges of writing multi-threaded applications.
New question for 2018.
  1. Threads: [p. 163] A thread is a basic unit of CPU use. It comprises a thread ID, a program counter, a register set, and a stack. It shares with other threads belonging to the same process its code section, global data section, and other operating system resources.
    An excellent vocabulary for describing "What is a thread?" is the language of object orientation. If you were to design class Thread, what would be its attributes and its services?
  2. Benefits: See Sections 4.1.1 & 4.1.2. Benefits of threads may be grouped into four categories:
    1. Responsiveness,
    2. Resource sharing,
    3. Economy, and
    4. Scalability.
    To receive full credit, you should say a sentence about each.
    Might add: Problem decomposition.
  3. Challenges: See Section 4.2.1. Challenges of coding for multi-threaded systems include:
    1. Identifying tasks,
    2. Balance,
    3. Data splitting,
    4. Data dependency, and
    5. Testing and debugging.
    To receive full credit, you should say a sentence about each.

This question did not require you to say anything about processes, although comparisons and contrasts may help explain what threads are.

This is an example of what I consider high-level concepts I hope you learn in this class: What is it?, What is it good for?, Why is it hard? Your answer does not need to replicate the book, but philosophical ramblings received little credit.

 

Problem 5. Synchronization

You may choose to work this problem on the exam paper and hand in the exam pages. Be sure your name is on each page.

Here is C-like pseudocode for a bounded buffer solution to the producer-consumer problem for a system in which running processes are interrupted by a timer and swapped out:

 

SHARED:
   double buffer[BUFFER_SIZE];
   int    counter = 0;
 
Producer:
     int in = ___;
     while (true) {
        // Produce item into nextProduced
        while (counter == BUFFER_SIZE) {};
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter ++;
     }
Consumer:
     int out = ___;
     while (true) {
        while (counter == 0) {};
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter --;
        // Consume item in nextConsumed
    }

 

  1. (2 points) With what values should int in = ___ and int out = ___ be initialized in order that the logic of this solution is correct?
  2. (5 points) Give an example sequence of execution in which the solution fails to work correctly.
  3. (8 points) Assume that you have available both

     

    1. An atomic hardware instruction TestAndSet, defined as

      boolean TestAndSet (boolean *target) {
          boolean returnValue = *target;
          *target = TRUE;
          return returnValue;
      }

       

    2. and an integer Semaphore accessed only through two atomic hardware operations:
      wait(S) { 
         while (S <= 0) ;  /* wait */
         S --;
      }
      signal(S) { 
         S ++;
      }

     

    Fix the incorrect producer-consumer (non-)solution shown above.
    (A solution involving messages or interrupt settings will receive at most half credit.)

     

  4. (10 points) Convince me that your solution works.

New question for 2018.

  1. int in = 0; and int out = 0;
    Producer should put the first item it produces into buffer[0]. When Consumer runs, it should take its first item to be consumed from buffer[0].

    Actually, it works to initialize both to the same integer value between 0 and BUFFER_SIZE - 1, inclusive. The student who made that observation received double credit for part a).

     

  2. Problem: Either Producer or Consumer is interrupted during its update of counter such that the value of counter does not match the number of items in buffer. Interrupting is not enough; you must show that the value of counter is wrong. For example,

     

    SHARED:
       Suppose  counter = 2;
       buffer holds 2 elements
       We add one, and we remove one
     
    Producer:
            . . .
            buffer[in] = nextProduced;
            in = (in + 1) % BUFFER_SIZE;
            // counter ++;
            LOAD counter  // 2
            ADD 1         // 3
            INTERRUPT   >>>>>>>>
         
    Consumer:
            resume execution
            // counter --;
            LOAD counter  // 2
            SUB 1         // 1
            STORE counter  // counter gets 1
    <<<<    INTERRUPT
         
    Producer:
            resume execution
            STORE counter  // counter gets 3
            . . .

     

    counter started at 2. Producer added an item. Consumer removed an item. counter is 3. WRONG. What will happen about BUFFER_SIZE iterations later?

    If the Consumer had run first and been interrupted, counter would have been 1. WRONG.

    It does no harm if Consumer runs first; it will properly be forced to wait at while (counter == 0) ;, and eventually, Producer will run. -3 of 5 if you gave this as the answer to part b) because you missed what is the real critical section.

    It does not hurt if Producer is interrupted between buffer[in] = nextProduced; and counter ++; or if Consumer is interrupted between nextConsumed = buffer[out]; and counter --;. The other process might be delayed unnecessarily, but we do not get incorrect results. -2 of 5 if you gave this as the answer to part b) because you missed what is the real critical section.

    In either process, in the body of the while (true) loop, only manipulation of counter is critical. It is not wrong to guard a larger critical section than necessary, but it may slow processing by forcing unnecessary waiting. -1 of 8 if you protected correctly a larger critical section than necessary.

    Variables in and out cannot participate in a race condition or a critical section since they are not shared.

    In part b), you should determine that the critical regions are counter ++; and counter --;. Those become the focus of protection in part c).

     

  3. Your answer must be in the form of (pseudo-)code. There is no adequate brief answer in English.

    Use either TestAndSet() or semaphore. Your choice. Not both.

    Using TestAndSet:

     

    SHARED:
       boolean lock = FALSE;  // Be sure you initialize (or -2 points)
       double buffer[BUFFER_SIZE];
       int    counter = 0;
     
    Producer:
         int in = 0;
         while (true) {
            // Produce item into nextProduced
            while (counter == BUFFER_SIZE) {};
            buffer[in] = nextProduced ;
            in = (in + 1) % BUFFER_SIZE;
            while (TestAndSet(&lock)) {};
            counter ++;  // CRITICAL SECTION
            lock = FALSE;
         }
    Consumer:
         int out = 0;
         while (true) {
            while (counter == 0) ;
            nextConsumed = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            while (TestAndSet(&lock)) {};
            counter --;    // CRITICAL SECTION
            lock = FALSE;
            // Consume item in nextConsumed
       }

    while (TestAndSet(&lock)) {}; and lock = FALSE; must appear in pairs, before and after the critical region, in each process.

     

    Using Semaphore:

     

    SHARED:
       Semaphore S = 1;  // Be sure you initialize (or -2 points)
       double buffer[BUFFER_SIZE];
       int    counter = 0;
     
    Producer:
         int in = 0;
         while (true) {
            // Produce item into nextProduced
            while (counter == BUFFER_SIZE) {};
            buffer[in] = nextProduced ;
            in = (in + 1) % BUFFER_SIZE;
            wait(S);
            counter ++;  // CRITICAL SECTION
            signal(S);
         }
    Consumer:
         int out = 0;
         while (true) {
            while (counter == 0) {};
            nextConsumed = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            wait(S);
            counter --;  // CRITICAL SECTION
            signal(S);
            // Consume item in nextConsumed
        }

    wait() and signal() must appear in pairs, before and after the critical region, in each process.

    Binary semaphores as used here typically use wait(S); ... signal(S); in that order in each cooperating process. Counting semaphores typically signal() (increment) in one process and wait() (decrement) in the other. Typically.

    Another correct solution uses a pair of counting semaphores, e.g., countOfAvailableSpaces and countOfAvailableItems. That solution needs no shared counter. Properly speaking, that solution no longer has a critical section.

     

  4. Convince me that if one process is interrupted while inside its critical region, and the other process tries to get into its critical region, your use of synchronization primitives prevents the second process from entering its critical region until the first process exits its.

    I am not convinced by "This code works," or "This code works because the book (or you) says it does." I expect a line-by-line code walkthrough. Faith is not a code testing strategy; I experience too many counter-examples.

    To receive full credit, you must recognize where the danger lies:

    1. One process is inside its critical region,
    2. It is interrupted, and
    3. The other process attempts to get into its critical region.

    To receive full credit, you must demonstrate that

    1. the second process is prevented from entering its critical region, and
    2. eventually, both processes can complete their critical sections.

    If your solution to part c) is wrong, you might get up to 5 of 10 points for part d) for sound (but flawed) logic.

You will not complete Project 6 without coming to understand semaphores.

A few of you might find that my addition/subtraction is wrong. If you do, let me know, and I'll correct it.

 

 
  Marquette University. Be The Difference. Marquette | Corliss |