A Comparison of 5 Uniprocessor OS Scheduling Policies

In my recent post on Uniprocessor OS Scheduling Policies, I covered the algorithms for five short-term operating system scheduling policies:

  • First-come-first-served
  • Round robin
  • Shortest process next
  • Shortest remaining time
  • Highest response ratio next

But I didn’t compare, analyze, or go over the use cases for each policy. I would like to do that in this post. Note that the concepts covered in my previous post are required to understand this one.

Scheduling Criteria

When designing a system, we often want to optimize for some specific system behavior(s). We can divide potential requirements into two categories: user-oriented and system-oriented. Each category can be further divided into performance criteria that are quantitative and easily measured, and criteria that are more qualitative and harder to measure. This is not an exhaustive list; these are just the criteria I think are most important to examine.

User-oriented Criteria

User-oriented criteria relate to the behavior of the system as perceived by a single user.


  • Turnaround Time – The time from when a process is first added to the group of ready executable processes to the time when it finishes executing and exits. Discussed in much greater detail below.
  • Response Time – The time from when the request is first made to the time when the response starts to be received. From a user’s point of view, response time is a better measure than turnaround time because a process can produce feedback to the user while the process is still executing.


  • Predictability – For any given job, it should run in approximately the same amount of time regardless of the load on the system.


System-oriented Criteria

System-oriented criteria focus on effective and efficient utilization of the processor.


  • Throughput – The number of processes completed per unit of time. Depends a lot on the average length of the processes, but is still affected by the scheduling policy.
  • Processor Utilization – The percentage of time that the processor is busy. More important in multi-user systems than single user systems.


  • Fairness – Processes should be treated the same, and no process should suffer starvation.

Criteria Tradeoffs

It is impossible to optimize for all criteria concurrently. There is a tradeoff between user-oriented and system-oriented criteria: when you maximize performance for one, you must give up some performance in the other.

For example, optimizing for response time would require a scheduling algorithm that switches between processes frequently. This increases overhead and reduces throughput.

Normalized Turnaround Time

In order to compare and contrast the different scheduling policies, a useful metric to consider is the normalized turnaround time (NTAT) for an individual process.

Plain, non-normalized Turnaround Time (TAT) is the total time a process spends in the system; this includes the time spent waiting for the processor and the time spent executing on the processor (aka service time):

Turnaround Time (TAT) = Waiting Time + Service Time

We normalize it by taking the ratio of the turnaround time to the service time:

Normalized Turnaround Time (NTAT) = TAT / Service Time

So, why is this important?

Well, the normalized turnaround time indicates the relative delay experienced by a process. For longer processes that require more service time, more delay is acceptable. A low NTAT value indicates less delay and a high quality level of service for a process; however, a high NTAT value indicates more delay and a poor quality level of service.

The lowest and best possible NTAT value is 1.0. This happens when a process doesn’t experience any waiting time. To illustrate using the formula: NTAT = (0 + Service Time) / Service Time = 1.0.
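The two formulas can be sketched as a couple of small Python helpers (the waiting and service times below are arbitrary numbers, just for illustration):

```python
def turnaround_time(waiting_time, service_time):
    # TAT = time spent waiting + time spent executing (service time)
    return waiting_time + service_time

def normalized_tat(waiting_time, service_time):
    # NTAT = TAT / service time; the best possible value is 1.0
    return turnaround_time(waiting_time, service_time) / service_time

print(normalized_tat(0, 4))    # no waiting: (0 + 4) / 4 = 1.0
print(normalized_tat(12, 4))   # heavy waiting: (12 + 4) / 4 = 4.0
```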

A Helpful Analogy

Let’s imagine we’re at a restaurant. Inside, we notice that there’s only one waiter. This waiter serves tables of varying sizes. Some tables have 2 customers, while others have 12.

The waiter is the single-core processor. Each table of customers is a ready process. The number of customers at the table indicates the amount of service time that the process needs to complete.

Here’s an example scenario: a family of 5 walks into the restaurant. The waiter notices the family and ignores them momentarily, because he’s serving other people. After making the family wait for a period of time, he finally serves the aforementioned table.

Whether he waits on them the entire time until they get up and leave, or waits on them in various time-chunk configurations, depends on the algorithm (e.g., Round Robin).

The total time that the waiter makes the customers wait, including the beginning and intermediary parts, added to the time he spends serving them until they leave the restaurant, is the turnaround time.

So given multiple, differently-sized tables, how can the waiter finish his job quickly while making sure no customers starve? Examining the turnaround time provides good insight when solving this problem.

Scheduling Policy Comparison

Now that we have some tools and terminology, we can begin our analysis of each policy. Let’s jump in.

For each policy we are going to aim to answer 4 questions.

  1. Are all types of processes (long, short, I/O bound, processor bound) treated fairly?
  2. What are the effects on response time?
  3. Does the policy have high or low throughput?
  4. Is starvation possible, and if so, in what case?

I am using the same example and numbers from my first post. The diagrams from that initial post may still be a helpful reference here.

First-come-first-served (FCFS)

FCFS is a non-preemptive policy that selects processes in FIFO order and runs each one until completion.

Process        A     B     C     D     E     Mean
Arrival Time   0     2     4     6     8
Service Time   3     6     4     5     2
Finish Time    3     9     13    18    20
TAT            3     7     9     12    12    8.60
NTAT           1.00  1.17  2.25  2.40  6.00  2.56
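The finish times and NTAT values in the table can be reproduced with a short simulation; this is a sketch using the example’s arrival and service times:

```python
# Processes from the example: (name, arrival time, service time)
procs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

clock, finish, ntat = 0, {}, {}
for name, arrival, service in procs:        # FCFS: FIFO arrival order
    clock = max(clock, arrival) + service   # run to completion
    finish[name] = clock
    tat = clock - arrival                   # turnaround time
    ntat[name] = round(tat / service, 2)    # normalized turnaround time
    print(name, finish[name], tat, ntat[name])
```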

Are all types of processes treated fairly?

No, they are not. Longer processes are going to fare better than shorter processes, especially in the case when a short process arrives just after a long process. This can be observed in the NTAT data. A short process, process E, arrives just after a long process, process D. Process E’s NTAT is 6.00, which is more than twice that of any other process.

Processor-bound processes are favored over I/O-bound processes.** Suppose we have a collection of processor-bound and I/O-bound processes. When a processor-bound process is running, all I/O-bound processes will have to wait. Some of these I/O-bound processes will be in a blocked state, but could move to the ready state even while the processor-bound process is running once they receive their required input or finish their output operation. When the processor-bound process finally finishes executing, the I/O-bound process will execute only to become quickly blocked by I/O operations again.

**I despise this terminology, but you will encounter it everywhere, so I’m sticking to convention. An I/O-bound process is one that has to wait on I/O operations (e.g., a user inserting a disc or entering a password) and is therefore limited by the speed at which its I/O requests are handled. A processor-bound process is limited only by the speed of the processor: the process would go faster if the processor could go faster.

What are the effects on response time?

Response times can be high when there is a large variance in process execution times. In particular, when long processes are followed by short processes, the shorter processes suffer significant delays.

Does the policy have high or low throughput?

In FCFS, throughput is not emphasized as it is in other policies. As a result, FCFS throughput is dependent only on the lengths of the processes in the system.

Is starvation possible, and if so, in what case?

No, starvation is not possible and every process will eventually get run.

Round Robin (RR)

Round Robin is a policy that uses preemption based on a clock interrupt at periodic intervals.

Process        A     B     C     D     E     Mean
Arrival Time   0     2     4     6     8
Service Time   3     6     4     5     2
RR (q = 1)
Finish Time    4     18    17    20    15
TAT            4     16    13    14    7     10.80
NTAT           1.33  2.67  3.25  2.80  3.50  2.71
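A minimal RR simulation reproduces these numbers. One assumption to flag: when a new process arrives at the same instant a quantum expires, this sketch queues the newcomer ahead of the preempted process, a common textbook convention that matches the table:

```python
from collections import deque

# (arrival time, service time) for the example processes
procs = {"A": (0, 3), "B": (2, 6), "C": (4, 4), "D": (6, 5), "E": (8, 2)}
quantum = 1

order = sorted(procs, key=lambda p: procs[p][0])   # arrival order
remaining = {p: s for p, (a, s) in procs.items()}
queue, finish, clock, i = deque(), {}, 0, 0

while len(finish) < len(procs):
    # admit everything that has arrived by now
    while i < len(order) and procs[order[i]][0] <= clock:
        queue.append(order[i])
        i += 1
    if not queue:                       # idle until the next arrival
        clock = procs[order[i]][0]
        continue
    p = queue.popleft()
    run = min(quantum, remaining[p])    # run for at most one quantum
    clock += run
    remaining[p] -= run
    # arrivals during the slice are queued before the preempted process
    while i < len(order) and procs[order[i]][0] <= clock:
        queue.append(order[i])
        i += 1
    if remaining[p] > 0:
        queue.append(p)                 # preempted: back of the queue
    else:
        finish[p] = clock
print(finish)
```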

Are all types of processes treated fairly?

Yes, all are treated fairly. However, there is an edge case that is fun to consider. It presents itself when the average running time of I/O-bound processes before they block for I/O is less than the time quantum. This results in the I/O-bound processes never using their entire time slice, while processor-bound processes end up getting more execution time.

The solution? I’m not going to go into all the detail, but it’s called Virtual Round Robin (VRR). The main difference is that there are two queues instead of one: an auxiliary queue for processes that were blocked by I/O and a main queue for processes that were preempted. The auxiliary queue gets priority over the main queue when a process needs to be selected. However, a process from the auxiliary queue doesn’t run for longer than the time quantum minus the time it already used during its last run.

Studies have shown that VRR is superior to general RR in terms of fairness.

What are the effects on response time?

The closer a process’s service time is to the time quantum, the fewer round robin cycles and delays it is going to experience. As a result, round robin provides good response time for short processes.

Does the policy have high or low throughput?

The most important thing with round robin is choosing the length of the time quantum. The smaller the time quantum, the greater amount of overhead required in order to switch between the processes. The more time we spend switching between processes, the less time we spend running processes. So throughput (the number of processes that are 100% completed per unit time) may be low if the selected time quantum is too small.

Rule of thumb: your time quantum should be slightly greater than the average needed running time; otherwise, most processes will require two time quanta. If your time quantum is too long, round robin reduces to FCFS.
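To see why a too-small quantum hurts, assume each context switch has a fixed cost s: with quantum q, at most q / (q + s) of processor time goes to useful work. The specific numbers below are made up purely for illustration:

```python
def useful_fraction(quantum, switch_cost):
    # fraction of processor time spent running processes
    # instead of context switching
    return quantum / (quantum + switch_cost)

print(useful_fraction(10.0, 0.1))   # generous quantum: ~99% useful work
print(useful_fraction(0.2, 0.1))    # tiny quantum: only ~67% useful work
```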

Is starvation possible, and if so, in what case?

No, starvation is not possible, because all processes are serviced and rotated between equally.

Shortest Process Next (SPN)

Process        A     B     C     D     E     Mean
Arrival Time   0     2     4     6     8
Service Time   3     6     4     5     2
Finish Time    3     9     15    20    11
TAT            3     7     11    14    3     7.60
NTAT           1.00  1.17  2.75  2.80  1.50  1.84
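The SPN numbers can be reproduced by repeatedly picking the shortest ready job; here is a sketch using the example data:

```python
# Processes from the example: (name, arrival time, service time)
procs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

clock, finish, pending = 0, {}, list(procs)
while pending:
    ready = [p for p in pending if p[1] <= clock]
    if not ready:                        # idle until the next arrival
        clock = min(p[1] for p in pending)
        continue
    # SPN: shortest service time among the ready processes
    name, arrival, service = min(ready, key=lambda p: p[2])
    pending.remove((name, arrival, service))
    clock += service                     # non-preemptive: run to completion
    finish[name] = clock
print(finish)
```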

Are all types of processes treated fairly?

As the name suggests, SPN penalizes long processes: short processes are able to cut in line ahead of longer ones.

What are the effects on response time?

Response time for short processes is phenomenal. Compare the NTAT value of the shortest process in our data set, process E: under SPN it is 1.50, whereas under FCFS and RR it is 6.00 and 3.50, respectively.

SPN has the drawback of increasing variability of response times, especially for longer processes. As a result, predictability is reduced.

Does the policy have high or low throughput?

SPN has high throughput because it does the shortest processes first, so we are going to be able to complete more processes. Kind of like how you could eat over 100 peanuts, but only 1 pizza. Small peanuts = small processes. Large pizza = large process. SPN chooses to run short processes, not large ones… get it?

Is starvation possible, and if so, in what case?

Yes, it is possible. If there is a constant supply of short processes, longer processes will be starved.

An interesting thing with SPN is the need to estimate how long a process is going to run, but math is hard and life is short.
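One common estimation technique is exponential averaging of a process’s past CPU bursts: S[n+1] = a * T[n] + (1 - a) * S[n], where T[n] is the most recent observed burst and S[n] the previous estimate. The initial guess and burst lengths below are made up for illustration:

```python
def next_estimate(prev_estimate, observed_burst, alpha=0.5):
    # exponential averaging: S[n+1] = alpha * T[n] + (1 - alpha) * S[n]
    # a larger alpha weights recent behavior more heavily
    return alpha * observed_burst + (1 - alpha) * prev_estimate

estimate = 10.0                 # hypothetical initial guess
for burst in [6, 4, 6, 4]:      # made-up observed CPU bursts
    estimate = next_estimate(estimate, burst)
    print(estimate)             # drifts toward the recent burst lengths
```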

Shortest Remaining Time (SRT)

Process        A     B     C     D     E     Mean
Arrival Time   0     2     4     6     8
Service Time   3     6     4     5     2
Finish Time    3     15    8     20    10
TAT            3     13    4     14    2     7.20
NTAT           1.00  2.17  1.00  2.80  1.00  1.59
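SRT can be simulated one time unit at a time, preempting whenever a newcomer has less remaining time. One assumption to flag: the tie at t = 10 between B and D (both with 5 units remaining) is broken by arrival order, which matches the table:

```python
# Processes from the example: (name, arrival time, service time)
procs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

remaining = {name: service for name, _, service in procs}
arrival = {name: a for name, a, _ in procs}
clock, finish = 0, {}
while remaining:
    ready = [p for p in remaining if arrival[p] <= clock]
    if not ready:                  # idle until something arrives
        clock += 1
        continue
    # SRT: least remaining time; ties broken by arrival order (assumption)
    p = min(ready, key=lambda q: (remaining[q], arrival[q]))
    remaining[p] -= 1              # run one time unit, then re-evaluate
    clock += 1
    if remaining[p] == 0:
        del remaining[p]
        finish[p] = clock
print(finish)
```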

Are all types of processes treated fairly?

Penalizes long processes for the same reason as SPN.

What are the effects on response time?

SRT is really similar to SPN, but it has better, shorter response times. If you look at the example above, you will notice that the NTAT values for the three shortest processes, A, C, and E, are all 1.00, indicating zero delay.

Does the policy have high or low throughput?

High throughput for the same reason as SPN. You can eat 100 peanuts or 1 pizza.

Is starvation possible, and if so, in what case?

It is possible for the same reason as SPN. A steady supply of short processes would starve a longer process.

Highest Response Ratio Next (HRRN)

In my first post, I didn’t go into much detail with HRRN, because I was avoiding explaining turnaround time. HRRN is neat, because it tries to minimize the turnaround time average over all of the processes. After all, in a perfect system, all processes would have zero delay and a NTAT of 1.0.

Previously all I shared with you was that HRRN is nonpreemptive and that its selection function is max[(w + s) / s]. Look familiar? I hope so, because (w + s) / s is our formula for normalized turnaround time! So HRRN just selects the ready process that currently has the highest NTAT.

Process        A     B     C     D     E     Mean
Arrival Time   0     2     4     6     8
Service Time   3     6     4     5     2
Finish Time    3     9     13    20    15
TAT            3     7     9     14    7     8.00
NTAT           1.00  1.17  2.25  2.80  3.50  2.14
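The selection rule is easy to sketch: among ready processes, pick the one with the largest response ratio (w + s) / s. Using the example data:

```python
# Processes from the example: (name, arrival time, service time)
procs = [("A", 0, 3), ("B", 2, 6), ("C", 4, 4), ("D", 6, 5), ("E", 8, 2)]

clock, finish, pending = 0, {}, list(procs)
while pending:
    ready = [p for p in pending if p[1] <= clock]
    if not ready:                        # idle until the next arrival
        clock = min(p[1] for p in pending)
        continue
    # response ratio = (waiting + service) / service, i.e. the current NTAT
    name, arrival, service = max(
        ready, key=lambda p: ((clock - p[1]) + p[2]) / p[2])
    pending.remove((name, arrival, service))
    clock += service                     # non-preemptive: run to completion
    finish[name] = clock
print(finish)
```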

Are all types of processes treated fairly?

There is a good balance, though it is not 100% fair. Short jobs are slightly favored, because a smaller denominator yields a bigger ratio. However, as long processes wait, their response ratios grow, so they eventually get selected ahead of newer short processes.

What are the effects on response time?

It provides good response time to all processes; as you can see, all of the NTAT values are relatively low.

Does the policy have high or low throughput?

High throughput.

Is starvation possible, and if so, in what case?

No, it is not possible, because the more delay a process experiences the higher its NTAT value and the more likely it is to be selected.


Here is a summary of everything just discussed. In the notation below, w is the time spent waiting so far, s is the total service time, and e is the time spent executing so far.

  • FCFS – Selection function: max[w]; decision mode: non-preemptive. Throughput is not emphasized. Response time may be high, especially if there is a large variance in process execution times. Overhead is minimal. Penalizes short processes and I/O-bound processes. No starvation.
  • RR – Selection function: constant; decision mode: preemptive (at time quantum q). Throughput depends on the quantum and could be low if the quantum is too small. Provides good response time for short processes. Overhead is minimal. Fair treatment. No starvation.
  • SPN – Selection function: min[s]; decision mode: non-preemptive. High throughput. Provides good response time for short processes. Overhead can be high. Penalizes long processes. Starvation is possible.
  • SRT – Selection function: min[s - e]; decision mode: preemptive (at arrival). High throughput. Provides good response time to all. Overhead can be high. Penalizes long processes. Starvation is possible.
  • HRRN – Selection function: max[(w + s) / s]; decision mode: non-preemptive. High throughput. Provides good response time to all. Overhead can be high. Good balance. No starvation.

I hope that after reading the two blog posts in this series, you can describe the algorithm for each of the five scheduling policies and can think of some of the practical pros/cons of each policy.

Again, I would like to give credit where credit is due and thank the authors. I found this book invaluable while writing this post:

Stallings, William. Operating Systems: Internals and Design Principles. New Jersey: Prentice Hall, 2012. Print.
