Friday, 18 January 2013

Unit 1



UNIT 1: INTRODUCTION AND SCHEDULING
v Introduction to Operating System, Functions of OS
v Different types of Operating Systems: Real time, Multi – user, Time sharing
v OS Structure – Monolithic, Layered, Virtual Machine, Client – Server
v CPU Scheduling: Introduction to process, process control block, process scheduling
v FCFS Scheduling, SJF scheduling, Priority scheduling, Round Robin scheduling
                                                                                                                                    
v INTRODUCTION TO OPERATING SYSTEM

ü  An operating system is a program that acts as an intermediary between a user of a computer and the computer hardware.
ü  The purpose of an operating system is to provide an environment in which a user can execute programs.
ü  The primary goal of an operating system is to make the computer system convenient to use; the secondary goal is to use the computer hardware in an efficient manner.
Abstract view of the components of a computer system
ü  An operating system is a program that controls the execution of application programs and acts as an interface between applications and computer hardware. It can be thought of as having three objectives:
§  Convenience: An operating system makes a computer more convenient to use.
§  Efficiency: An operating system allows the computer system resources to be used in an efficient manner.
§  Ability to evolve: An operating system should be constructed in such a way as to permit the effective development, testing and introduction of new system function without interfering with service. 
ü  An operating system is an important part of almost every computer system. A computer system can be divided into four components:
i.     Hardware,
ii.   Operating system,
iii. Application programs,
iv. Users.

i.        Hardware: The hardware contains the CPU, memory and input/output devices, and provides the basic computing resources.
ii.      Operating system: The operating system controls and co-ordinates the use of the hardware among the various application programs for the various users.
iii.    Application programs: Application programs such as compilers, database systems, games and business programs define the ways in which these resources are used to solve the computing problems of the users.
iv.    Users: We have people, machines and other computers trying to solve different problems.

v FUNCTIONS OF OPERATING SYSTEM

1.     OS as resource manager:
ü  Operating system acts as a resource manager and performs the following functions:
§  It assigns processors to different tasks performed by the computer system.
§  It allocates the main memory and other storage areas to system programs as well as user programs and data.
§  It carries out the I/O management and co-ordinates and assigns different input and output devices while one or more programs are being executed.
§  It manages files on various storage devices and transfers of these files from one storage devices to another.
§  It establishes and enforces job priorities, and automatically moves from job to job as directed by special control statements.
§  It is able to interpret commands and instructions.
§  It co-ordinates and assigns compiler, assembler, utility programs and other software packages to various users working on the computer system.
§  It establishes data security and integrity.
§  It also produces dumps, traces, error messages and other debugging and error detecting code.
§  It facilitates easy communication between the computer system and computer operator.

2.     OS as processor manager:
ü  In a large system, a number of jobs are done on the computer every day. When these jobs are handled individually, a lot of time is lost between jobs waiting for operator action. Hence, in modern systems, a number of jobs are batched and kept ready, and the job control program of the OS is given charge of executing these jobs one after another. Most medium and large computers use multiprogramming and multitasking systems.



3.     Device management:
ü  Another important function of the OS is to coordinate the operation of the central processing unit with its peripheral equipment. One task is to identify the various peripheral devices attached to the CPU.

4.     Memory management:
ü  An operating system is also responsible for managing the main memory so that it can be used efficiently. Three tasks along these lines are:
§  Protecting programs, including itself, from accidental erasure.
§  Handling programs that exceed the physical limitations of the main memory.
§  Protecting users from each other.

5.     OS as information manager:
ü  One of the major functions of the OS is to keep track of the data on the disk. The application program does not know where the data is actually stored or how to get it. This knowledge is contained in the OS’s access methods or device driver routines. When a program is ready to accept data, it signals the OS with a coded message. The OS finds the data and delivers it to the program. Conversely, when the program is ready to output, the OS transfers the data from the program onto the next available space on the disk.

v DEFINITION

1.      File: A file is a collection of related information defined by its creator.

2.      Job: A job is a particular task performed by a particular process.

3.      Process: A process is basically a program in execution. Associated with each process is its address space, a list of memory locations from some minimum to some maximum which the process can read and write. The address space contains the executable program, the program’s data and its stack.

4.      Kernel: The kernel is the hidden part of the operating system. It is a program which accepts commands given by the shell, executes them and returns the output to the shell.

5.      System procedure call: It is an interface between an application program and the operating system; the actual mechanism of issuing system calls is highly machine dependent.


v TYPES OF OPERATING SYSTEM

ü  There are three types of operating system which are listed below:
1.      Real time system.
2.      Multiuser system.
3.      Time sharing system.

1.     Real time system:
ü A real time system is used when there are rigid time requirements on the operation of a processor or the flow of the data.
ü  Embedded systems almost always run real time operating systems.
ü  An example of an embedded system is an entire house that is computerized, so that a central computer – either a general purpose computer or an embedded system – can control heating and lighting, alarm systems and even coffee makers.
ü  Some examples: automobile engines, fuel injection systems, home appliance controllers and weapon systems.
ü  A real time system is considered to function correctly only if it returns the correct result within its time constraints.
ü  A hard real time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time that it takes the operating system to finish any request made of it.
ü  A less restrictive type of real time system is a soft real time system, where a critical real time task gets priority over other tasks, and retains that priority until it completes.
ü  As in hard real time system, kernel delays need to be bounded. Soft real time is an achievable goal that is amenable to mixing with other types of system.
2.     Multiuser operating system:
ü  Provides regulated access for a number of users by maintaining a database of known users.
a.      Refers to computer systems that support two or more simultaneous users.
b.      Another term for multi-user is time sharing.
c.       Ex: All mainframes are multi-user systems.
d.      Example: Unix
ü  The operating system is responsible for managing memory and processing for the applications and programs being run; it also recognizes and uses the hardware connected to the system, and properly handles user interaction and data requests. So the operating system must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources, so that a problem with one user does not affect the entire community of users.
ü  Multi-user systems have traditionally been the most common type of bar code system. Such a system uses serial ports to connect a single PC or other computer system to multiple bar code readers, terminals, or both. Each terminal runs a single session on the multi-user operating system. Cheaper PC prices and the availability of very basic network PCs will undoubtedly sway some users away from multi-user systems; clearly the multi-user system is on its way out.
ü  This type of operating system is very rarely used. Important concerns in a multi-user operating system are preserving the environment (each task is independent and needs to be protected from other tasks) and resource sharing (which should be performed in such a way that deadlock is avoided and resources are managed effectively, so that time is saved).
3.     Time – sharing system:
ü  Time – sharing (or multitasking) is a logical extension of multiprogramming.
ü  In this CPU executes multiple jobs by switching among them, but this switching occurs so frequently that the users can interact with each program while it is running. 
ü  Time-sharing systems were developed to provide interactive use of a computer system at a reasonable cost. A time shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time – shared computer.
ü  A time – shared operating system allows many users to share the computer simultaneously. Since main memory is generally too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool.
ü  Since each action or command in a time – shared system tends to be short, only a little CPU time is needed for each user.
Time – sharing operating systems are even more complex than are multiprogrammed operating system. Time sharing systems provide a mechanism for concurrent execution, which requires sophisticated CPU scheduling schemes.

v OPERATING SYSTEM STRUCTURE

ü  The four designs are monolithic systems, layered systems, virtual machines and client – server systems.
1.     Monolithic System:
ü  To construct the actual object program of the operating system when this approach is used, one first compiles all the individual procedures, or files containing the procedures, and then binds them all together into a single object file using the system linker.
How a system call can be made: (1) User program traps to the kernel. (2) Operating system determines service number required. (3) Operating system calls service procedure. (4) Control is returned to user program.
ü  The services provided by the operating system are requested by putting the parameters in well-defined places, such as in registers or on the stack, and then executing a special trap instruction known as a kernel call or supervisor call.
ü  The instruction switches the machine from user mode to kernel mode and transfers control to the operating system, shown as event (1) in figure.
ü  The operating system then examines the parameters of the call to determine which system call is to be carried out, shown as event (2) in figure.
ü  Next, the operating system indexes into a table that contains in slot k a pointer to the procedure that carries out system call k. This operation, shown as event (3) in figure, identifies the service procedure, which is then called.
ü  When the work has been completed and the system call is finished, control is given back to the user program event (4), so it can continue execution with the statement following the system call.
ü  This organization suggests a basic structure for the operating system:
1.      A main program that invokes the requested service procedure.
2.      A set of service procedures that carry out the system calls.
3.      A set of utility procedures that help the service procedure.
A simple structuring model for a monolithic system
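ü  The table-driven dispatch described above can be sketched in a few lines of Python. This is only an illustration of the slot-k indexing idea; the two service procedures and their names are hypothetical, not any real kernel’s system calls.

# Illustrative sketch of a monolithic kernel's system call dispatch.
# Slot k of the table holds a pointer to the procedure for system call k.

def sys_read(fd, count):                  # hypothetical service procedure 0
    return "read %d bytes from fd %d" % (count, fd)

def sys_write(fd, data):                  # hypothetical service procedure 1
    return "wrote %d bytes to fd %d" % (len(data), fd)

SYSCALL_TABLE = [sys_read, sys_write]     # index k -> service procedure k

def trap(call_number, *params):
    # (1) the user program traps to the kernel with its parameters;
    # (2) the kernel examines the call number;
    service = SYSCALL_TABLE[call_number]  # (3) index into the table and
    result = service(*params)             #     call the service procedure;
    return result                         # (4) control returns to the user program

print(trap(0, 3, 128))                    # prints: read 128 bytes from fd 3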
2.     Layered System:
ü  Layer 0 dealt with allocation of the processor, switching between processes when interrupts occurred or time expired.
ü  Above layer 0, the system consisted of sequential processes, each of which could be programmed without having to worry about the fact that multiple processes were running on a single processor. In other words, layer 0 provided the basic multiprogramming of the CPU.




Layer        Function
5               The operator
4               User programs
3               Input/output management
2               Operator – process communication
1               Memory and drum management
0               Processor allocation and multiprogramming

ü  Layer 1 did the memory management. It allocated space for processes in main memory and on a 512K word drum used for holding parts of processes for which there was no room in main memory.
ü  Layer 2 handled communication between each process and the operator console. Above this layer each process effectively had its own operator console.
ü  Layer 3 took care of managing the I/O devices and buffering the information streams to and from them. Above layer 3 each process could deal with abstract I/O devices with nice properties, instead of real devices with many peculiarities.
ü  Layer 4 was where the user programs were found. They did not have to worry about process, memory, console, or I/O management. The system operator process was located in layer 5.

3.     Virtual Machines:
The structure of VM/370 with CMS.
ü  This system was based on an astute observation: a time sharing system provides (1) multiprogramming and (2) an extended machine with a more convenient interface than the bare hardware.
ü  The heart of the system, known as the virtual machine monitor, runs on the bare hardware and does the multiprogramming, providing not one but several virtual machines to the next layer up, as shown in the figure. However, unlike all other operating systems, these virtual machines are not extended machines with files and other nice features; they are exact copies of the bare hardware.
ü  When a CMS program executes a system call, the call is trapped to the operating system in its own virtual machine, not to VM/370.
ü  CMS then issues the normal hardware I/O instruction for reading its virtual disk or whatever is needed to carry out the call.
ü  These I/O instructions are trapped by VM/370, which then performs them as part of its simulation of the real hardware.
ü  Two variants on this design are possible. In the first one, MS-DOS itself is loaded into the virtual address space. In the other variant, the virtual machine monitor just catches the first trap and does the I/O itself, since it knows what all the MS-DOS system calls are and thus knows what each trap is supposed to do.
ü  Example: the idea of the virtual machine can be seen in running old MS-DOS programs on a Pentium (or other 32-bit Intel CPU). For this, Intel provided a virtual 8086 mode on the Pentium. In this mode the machine acts like an 8086, which is identical to an 8088 from a software point of view, including 16-bit addressing with a 1 MB limit.
4.     Client – server model:
ü  A trend in modern operating system is to take this idea of moving code up into higher layers even further and remove as much as possible from the operating system, leaving a minimal kernel.
ü  To request a service, such as reading a block of a file, a user process sends the request to a server process, which then does the work and sends back the answer.
The client – server model
ü  All the kernel does is handle the communication between clients and servers. By splitting the operating system up into parts, each of which only handles one facet of the system, such as file service, process service, terminal service or memory service, each part becomes small and manageable.
The client – server model in distributed system.
ü  Another advantage of the client-server model is its adaptability to use in distributed systems. If a client communicates with a server by sending it messages, the client need not know whether the message is handled locally in its own machine, or whether it was sent across a network to a server on a remote machine. As far as the client is concerned, the same thing happens in both cases: a request was sent and a reply came back.

v PROCESS CONCEPT

ü  A process is a program in execution. The execution of a process must progress in a sequential fashion.
ü  A process is more than the program code. It also includes the current activity, as represented by the value of the program counter and the contents of the processor’s registers.
ü  A process generally also includes the process stack, containing temporary data, and a data section containing global variables.
ü  A program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity, with a program counter.

v PROCESS STATE

ü  As a process executes it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:
Diagram of process state.
§  New: The process is being created.
§  Running: Instructions are being executed.
§  Waiting: The process is waiting for some event to occur.
§  Ready: The process is waiting to be assigned to a processor.
§  Terminated: The process has finished execution.
ü  It is important to realize that only one process can be running on any processor at any instant, although many processes may be in the ready and waiting states.

v PROCESS CONTROL BLOCK

ü  Each process is represented in the operating system by a process control block (PCB) – also called a task control block.
ü  It contains many pieces of information associated with a specific process, including these:
Process control block.
ü  Process state: The state may be new, ready, running, waiting, halted and so on.
ü  Program counter: The counter indicates the address of the next instruction to be executed for this process.
ü  CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers and general purpose registers, plus any condition-code information.
ü  CPU scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.
ü  Memory management information: This information may include such information as the value of the base and limit registers, the page tables, or the segment tables depending on the memory system used by the operating system.
ü  Accounting information: This information includes the amounts of CPU and real time used, time limits, account numbers, job or process numbers and so on.
ü  I/O status information: The information includes the list of I/O devices allocated to this process, a list of open files and so on.
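ü  Conceptually, the PCB is just a record with one field per item above. The following Python sketch shows a simplified version; the field names and types are illustrative, not any particular operating system’s layout.

from dataclasses import dataclass, field

@dataclass
class PCB:
    # Simplified process control block with one field per item above.
    pid: int                                         # process identifier
    state: str = "new"                               # new/ready/running/waiting/terminated
    program_counter: int = 0                         # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0                                # CPU scheduling information
    base: int = 0                                    # memory management: base register
    limit: int = 0                                   # memory management: limit register
    cpu_time_used: int = 0                           # accounting information
    open_files: list = field(default_factory=list)   # I/O status information

pcb = PCB(pid=7)
pcb.state = "ready"                                  # process moves to the ready queue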


v CONTEXT SWITCH

ü  Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a context switch.

v DISPATCHER

ü  The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. This function involves:
§  Switching context.
§  Switching to user mode.
§  Jumping to the proper location in the user program to restart that program.
ü  The dispatcher should be as fast as possible given that it is invoked during every process switch.
ü  The time it takes for the dispatcher to stop one process and start another running is known as the dispatch latency.

v CPU SCHEDULING CONCEPT     OR
v PROCESS SCHEDULING

ü  The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. For a uniprocessor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.
ü  A process is executed until it must wait, typically for the completion of some I/O request. In a simple computer system, the CPU would then just sit idle. All this waiting time is wasted; no useful work is accomplished.
ü  With multiprogramming, several processes are kept in memory at one time. When one process has to wait, the operating system takes the CPU away from that process and gives the CPU to another process. This pattern continues. Every time one process has to wait, another process may take over the use of the CPU.
ü  Scheduling is a fundamental operating system function. Almost all computer resources are scheduled before use. The CPU is, of course, one of the many computer resources. Thus, its scheduling is central to operating system design.
ü  Process scheduling can be represented by a queueing diagram, as shown below.
Queueing diagram of process scheduling.
ü  Two types of queues are present: a ready queue and a set of device queues. The circles represent the resources and the arrows indicate the flow of processes.
ü  As processes enter the system, they are put into a job queue. This queue consists of all processes in the system. The ready queue contains the list of processes that are residing in main memory and are ready and waiting to execute.
ü  When a process is allocated the CPU, it executes until it releases the CPU, either through an interrupt or some I/O operation. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.

v TYPES OF SCHEDULING

ü  The aim of processor scheduling is to assign processes in a way that meets system objectives, such as response time, throughput and processor efficiency.
ü  Scheduling affects the performance of the system because it determines which processes will wait and which will progress.
ü  Basically, there are three types of scheduling
1.      Long – term scheduling
2.      Medium – term  scheduling
3.      Short – term scheduling
1.     Long – term scheduling:
ü  The long – term scheduler selects processes from the job pool and loads them into memory for execution. It is also known as the job scheduler.
ü  The long – term scheduler determines which programs are admitted to the system for processing. Thus, it controls the degree of multiprogramming.
ü  Once admitted, a job or user program becomes a process and is added to the queue for the short – term scheduler.
ü  In some systems, a newly created process begins in a swapped – out condition, in which case it is added to a queue for the medium term scheduler.

2.     Medium-term scheduling:
ü  Medium – term scheduling is part of the swapping function. The swapping – in decision is based on the need to manage the degree of multiprogramming; thus, it considers the memory requirements of the swapped – out processes.
3.     Short-term scheduling:
ü  Short – term scheduling is the selection of one process from the ready queue. The short – term scheduler selects a process that is ready to execute and allocates the CPU to it. It is also known as the CPU scheduler.
ü  The short – term scheduling executes most frequently and makes the fine – grained decision of which process to execute next.
ü  The short – term scheduler is invoked whenever an event occurs that may lead to the blocking of the current process or that may provide an opportunity to preempt a currently running process in favor of another.

*      The primary difference between the long – term and short – term schedulers is the frequency of their execution. The short – term scheduler must be very fast because of the short time between CPU bursts; the long – term scheduler, on the other hand, can afford to be slow because new processes are created relatively infrequently.

v SCHEDULING CRITERIA

ü  Many criteria have been suggested for comparing CPU scheduling algorithms. Which characteristics are used for comparison can make a substantial difference in the determination of the best algorithm. Criteria that are used include the following:
§  CPU utilization: We want to keep the CPU as busy as possible. CPU utilization may range from 0 to 100 percent. In a real system, it should range from 40 percent to 90 percent.
§  Throughput: If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.
§  Turnaround time: From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
§  Waiting time: The CPU scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
§  Response time: Response time is the time from the submission of a request until the first response is produced, that is, the time it takes to start responding, not the time it takes to output that response. The turnaround time is generally limited by the speed of the output device.

v NON – PREEMPTIVE

ü  Once a process is in running state, it continues to execute until
a.    It terminates.
b.    It blocks itself to wait for I/O or to request some operating system service.

v PREEMPTIVE

ü  The currently running process may be interrupted and moved to Ready state by the operating system. The decision to preempt may be performed when a new process arrives; when an interrupt occurs that places a blocked process in the Ready state; or periodically based on a clock interrupt.

v FIRST COME FIRST SERVED SCHEDULING ALGORITHM

ü  The simplest CPU scheduling algorithm is the first come first served scheduling algorithm.
ü  With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue.
ü  When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue. The running process is then removed from the queue.
ü  The average waiting time under the FCFS policy is often quite long. Consider the following set of processes that arrive at time 0, with length of the CPU burst time given in milliseconds.
Process        Burst time
P1                 24
P2                 3
P3                 3
ü  If the processes arrive in the order P1, P2, P3 and are served in FCFS order, we get the result shown in the following Gantt chart:
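| P1 (0 – 24) | P2 (24 – 27) | P3 (27 – 30) |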
ü  The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2, and 27 milliseconds for process P3. Thus, the average waiting time is (0 + 24 + 27) / 3 = 17 milliseconds.
ü  If the processes arrive in the order P2, P3, P1, the result will be as shown in the following Gantt chart:
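| P2 (0 – 3) | P3 (3 – 6) | P1 (6 – 30) |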
ü  The average waiting time is now (6+0+3) / 3 = 3 milliseconds. Thus, the average waiting time under a FCFS policy is generally not minimal.
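ü  Both averages can be checked mechanically with a minimal Python sketch, assuming, as in the example, that all processes arrive at time 0 and are served in the order given.

def fcfs_waits(bursts):
    # Under FCFS each process waits for everyone ahead of it to finish.
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)
        clock += burst
    return waits

for order in ([24, 3, 3], [3, 3, 24]):      # P1,P2,P3 versus P2,P3,P1
    w = fcfs_waits(order)
    print(w, sum(w) / len(w))               # [0, 24, 27] 17.0 and [0, 3, 6] 3.0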
ü  In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume we have one CPU bound process and many I/O bound processes.
ü  The CPU bound process will get the CPU and hold it. During this time, all the other processes will finish their I/O and move into the ready queue, waiting for the CPU.
ü  While the processes wait in the ready queue, the I/O devices are idle. Eventually, the CPU bound process finishes its CPU burst and moves to an I/O device.
ü  All the I/O bound processes, which have very short CPU bursts, execute quickly and move back to the I/O queues.
ü  The CPU bound process will then move back to the ready queue and be allocated the CPU. Again, all the I/O processes end up waiting in the ready queue until the CPU bound process is done.
ü  The FCFS scheduling algorithm is non – preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.
ü  Advantages:
1.      It is a simple scheduling algorithm and is easy to understand.
2.      It is easy to implement.
ü  Disadvantages:
1.      Convoy effect: There is a convoy effect as all the other processes wait for one big process to get off the CPU. This effect results in lower CPU utilization.
2.      This is not suitable for time – sharing systems, where it is important that each user get a share of the CPU at regular intervals.   

v SHORTEST JOB FIRST SCHEDULING ALGORITHM

ü  A different approach to CPU scheduling is the shortest job first algorithm. This algorithm associates with each process the length of the latter’s next CPU burst.
ü  When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If two processes have the same length next CPU burst, FCFS scheduling is used to break the tie.
ü  A more appropriate term would be the shortest next CPU burst algorithm, because scheduling is done by examining the length of the next CPU burst of a process, rather than its total length.






ü  Consider the following set of processes, with the length of the CPU burst time given in milliseconds.
Process        Burst time
P1                 6
P2                 8
P3                 7
P4                 3
ü  Using SJF scheduling, we would schedule these processes according to the following Gantt chart:
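| P4 (0 – 3) | P1 (3 – 9) | P3 (9 – 16) | P2 (16 – 24) |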
ü  The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3 and 0 milliseconds for process P4. Thus, the average waiting time is ( 3 + 16 + 9 + 0 ) / 4 = 7 milliseconds.
ü  The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes.
ü  By moving a short process before a long one the waiting time of the short process decreases more than it increases the waiting time of the long process.
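ü  When all processes arrive at time 0, non-preemptive SJF is simply FCFS applied to the bursts in sorted order, so the 7 millisecond average above is easy to verify with a small sketch:

def sjf_average_wait(bursts):
    # Non-preemptive SJF with all arrivals at time 0: shortest burst first.
    waits, clock = [], 0
    for burst in sorted(bursts):
        waits.append(clock)
        clock += burst
    return sum(waits) / len(waits)

print(sjf_average_wait([6, 8, 7, 3]))       # 7.0, matching the example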
ü  The real difficulty with the SJF algorithm is knowing the length of the next CPU request. Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU scheduling, because there is no way to know the length of the next CPU burst in advance.
ü  The SJF algorithm may be either preemptive or non preemptive. The choice arises when a new process arrives at the ready queue while a previous process is executing. The new process may have a shorter next CPU burst than what is left of the currently executing process.
ü  A preemptive SJF algorithm will preempt the currently executing process, whereas a non preemptive SJF algorithm will allow the currently running process to finish its CPU burst. Preemptive SJF scheduling is sometimes called shortest remaining time first scheduling.
ü  Consider the following four processes with the length of the CPU burst time given in milliseconds:
Process        Arrival time        Burst time
P1                 0                        8
P2                 1                        4
P3                 2                        9
P4                 3                        5
ü  If the processes arrive at the ready queue at the times shown and need the indicated burst times, then the resulting preemptive SJF schedule is as depicted in the following Gantt chart:
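| P1 (0 – 1) | P2 (1 – 5) | P4 (5 – 10) | P1 (10 – 17) | P3 (17 – 26) |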
ü  Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 ( 7 ms ) is larger than the time required by process P2 ( 4 ms ), so process P1 is preempted and process P2 is scheduled.
ü  The average waiting time for this example is
((10 – 1) + (1 – 1) + (17 – 2) + (5 – 3)) / 4 = 26 / 4 = 6.5 ms.
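ü  The preemptive schedule can be reproduced by simulating one millisecond at a time, always running the arrived process with the least remaining time. A minimal sketch follows, assuming millisecond ticks and ties broken by arrival order:

def srtf_average_wait(procs):
    # procs: list of (arrival, burst); simulate shortest remaining time first.
    remaining = [burst for _, burst in procs]
    finish = [0] * len(procs)
    clock = done = 0
    while done < len(procs):
        # pick the arrived, unfinished process with least remaining time
        ready = [i for i, (arrival, _) in enumerate(procs)
                 if arrival <= clock and remaining[i] > 0]
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
            done += 1
    # waiting time = completion time - arrival time - burst time
    waits = [finish[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(waits)

print(srtf_average_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))   # 6.5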
ü  Advantages:
1.      The SJF scheduling algorithm is optimal, i.e. it gives the minimum average waiting time for a given set of processes. By moving a short process before a long one, the waiting time of the short process decreases more than the waiting time of the long process increases.
ü  Disadvantages:
1.      It is difficult to know length of the CPU burst time in advance.

v PRIORITY SCHEDULING ALGORITHM

ü  The SJF algorithm is a special case of the general priority scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal priority processes are scheduled in FCFS order.
ü  An SJF algorithm is simply a priority algorithm where the priority ( P ) is the inverse of the next CPU burst. The larger the CPU burst, the lower the priority and vice-versa.
ü  Priorities are generally some fixed range of numbers. Some systems use low numbers to represent low priority; others use low numbers for high priority.
ü  Consider the following set of processes, assumed to have arrived at time 0, in the order P1, P2, ……….., P5, with the length of the CPU burst time given in milliseconds (here, low numbers represent high priority):
Process        Priority        Burst time
P1                 3                 10
P2                 1                 1
P3                 3                 2
P4                 4                 1
P5                 2                 5


ü  Using priority scheduling, we would schedule these processes according to the Gantt chart:
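| P2 (0 – 1) | P5 (1 – 6) | P1 (6 – 16) | P3 (16 – 18) | P4 (18 – 19) |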
ü  The average waiting time is 8.2 milliseconds.
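ü  With every process arriving at time 0, non preemptive priority scheduling amounts to a stable sort on priority, with equal priorities keeping FCFS order. A small sketch, assuming low numbers mean high priority as in the example:

def priority_average_wait(procs):
    # procs: list of (priority, burst); lower number = higher priority.
    # sorted() is stable, so equal priorities stay in FCFS order.
    waits, clock = [], 0
    for _, burst in sorted(procs, key=lambda p: p[0]):
        waits.append(clock)
        clock += burst
    return sum(waits) / len(waits)

# P1..P5 from the example as (priority, burst) pairs:
print(priority_average_wait([(3, 10), (1, 1), (3, 2), (4, 1), (2, 5)]))  # 8.2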
ü  Priority scheduling can be either preemptive or non preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
ü  A preemptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process. A non preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
ü  A major problem with priority scheduling algorithms is indefinite blocking or starvation. A process that is ready to run but lacking the CPU can be considered blocked, waiting for the CPU.
ü  A solution to the problem of indefinite blockage of low – priority processes is aging. Aging is a technique of gradually increasing the priority of processes that wait in the system for a long time.

v ROUND – ROBIN PROCESS SCHEDULING ALGORITHM

ü  The Round – Robin algorithm is designed especially for time – sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes.
ü  A small unit of time, called a time quantum or time slice, is defined. A time quantum is generally from 10 to 100 milliseconds.
ü  The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
ü  To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum and dispatches the process.
ü  One of two things will then happen. The process may have a CPU burst of less than 1 time quantum; in this case, the process itself will release the CPU voluntarily. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and will cause an interrupt to the operating system.
ü  A context switch will be executed, and the scheduler will then select the next process in the ready queue.
ü  Consider the following set of processes that arrive at time 0, with the length of CPU-burst time given in milliseconds:
Process        Burst time
P1                 24
P2                 3
P3                 3
ü  If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum. The resulting RR schedule is
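| P1 (0 – 4) | P2 (4 – 7) | P3 (7 – 10) | P1 (10 – 14) | P1 (14 – 18) | P1 (18 – 22) | P1 (22 – 26) | P1 (26 – 30) |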
ü The average waiting time is 17 / 3 = 5.66 milliseconds.
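ü The schedule above is easy to simulate with a FIFO ready queue; the sketch below assumes all processes arrive at time 0 and ignores context switch overhead.

from collections import deque

def rr_average_wait(bursts, quantum):
    # Simulate round robin: each process runs for at most one quantum,
    # then either finishes or goes back to the tail of the ready queue.
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = clock
        else:
            queue.append(i)
    waits = [finish[i] - bursts[i] for i in range(len(bursts))]
    return sum(waits) / len(waits)

print(rr_average_wait([24, 3, 3], quantum=4))   # 5.666..., i.e. 17/3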
ü If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. Each process must wait no longer than (n – 1) x q time units until its next time quantum.
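ü For example, with n = 5 processes and a time quantum of q = 20 milliseconds, each process gets up to 20 milliseconds out of every 100, and no process waits more than (5 – 1) x 20 = 80 milliseconds for its next quantum.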
ü If the time quantum is very small, the RR approach is called processor sharing, and appears to the users as though each of n processes has its own processor running at 1/n the speed of the real processor.
Showing how a smaller time quantum increases context switches.
ü Suppose we have only one process of 10 time units. If the quantum is 12 time units, the process finishes in less than 1 time quantum, with no overhead. If the quantum is 6 time units, the process requires 2 quanta, resulting in one context switch. If the time quantum is 1 time unit, then nine context switches will occur, slowing the execution of the process accordingly.

=*=*=*=*=
