
While the image above is mainly for processes, similar logic of course exists for threads as well, as they go through similar conceptual lifecycle phases. For the sake of simplicity, in this chapter we assume a system with only a single processor with a single CPU core. However, the concepts introduced here also (largely) hold for multi-core systems.

When working with a single core, only a single task can be active at the same time. Say that the scheduler starts with the execution of the first task. It then has two options to determine when the next job is dispatched:

- non-preemptive / cooperative scheduling: let the current task run until it is finished or until it (explicitly) yields (i.e., gives up or releases) its time on the processor. One way of yielding is, for example, calling sleep(), but pthread_join() or sem_wait() can also indicate that a task can be paused for the time being (see the sketch after this list).
- preemptive scheduling: interrupt the current task at any time. This is possible because the PCB and the equivalent TCBs track fine-grained per-task state.
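To make the cooperative option concrete, here is a minimal sketch (not from the original text) of two POSIX threads that each do a chunk of work and then voluntarily yield by blocking in sleep(); pthread_join() is itself a yielding point for the main thread. The task names and chunk counts are purely illustrative, and note that a general-purpose OS will still preempt these threads; the sketch only shows where a task voluntarily gives up the processor. Compile with -pthread.

```c
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

/* A task that cooperates: after each chunk of work it blocks in sleep(),
 * voluntarily handing the processor to whichever task is ready to run. */
static void *task(void *arg) {
    const char *name = arg;
    for (int chunk = 0; chunk < 3; chunk++) {
        printf("%s: working on chunk %d\n", name, chunk);
        sleep(1); /* yielding point: nothing forces this task off the CPU */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "task 1");
    pthread_create(&t2, NULL, task, "task 2");
    pthread_join(t1, NULL); /* pthread_join() is itself a yielding point */
    pthread_join(t2, NULL);
    return 0;
}
```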

While the cooperative scheduling approach is the simplest, it also has some severe downsides. If a given task takes a long time to complete or doesn't properly yield at appropriate intervals, it can end up hogging the CPU for a long time, causing other tasks to stall. As such, most modern OSes will employ a form of preemptive scheduling.

If the scheduler needs to preempt jobs after a certain amount of time (or execution ticks), it requires hardware assistance. CPUs contain a specialized timer component that can trigger an "interrupt" to the CPU after a certain amount of time has passed. This interrupt then triggers a pre-defined OS function call that handles the interrupt. In this case, the handling code will pause the current process and schedule a new one. As such, the hardware timer is a crucial piece necessary for implementing preemptive scheduling!
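As a rough user-space analogue (a sketch, not how a kernel actually implements preemption), the POSIX setitimer() call can arm a periodic timer whose SIGALRM signal plays the role of the hardware timer interrupt; the signal handler stands in for the OS routine that would pause the current task and invoke the scheduler. The 10 ms tick period is an arbitrary choice.

```c
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

/* Stand-in for the OS interrupt handler: on a real system this is where
 * the kernel would save the current task's state and pick the next one. */
static void on_tick(int sig) {
    (void)sig;
    ticks++;
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_tick;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    /* Arm a periodic "hardware" timer that fires every 10 ms. */
    struct itimerval tv;
    memset(&tv, 0, sizeof tv);
    tv.it_interval.tv_usec = 10000;
    tv.it_value.tv_usec = 10000;
    setitimer(ITIMER_REAL, &tv, NULL);

    /* The "current task": keeps running until it has been interrupted
     * 100 times (roughly one second at a 10 ms tick). */
    while (ticks < 100)
        pause(); /* sleep until the next timer interrupt arrives */

    printf("received %d timer ticks\n", (int)ticks);
    return 0;
}
```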
Scheduler algorithms

Independent of whether cooperative or preemptive scheduling is used, there exist many algorithms the scheduler may use to determine which job is to be scheduled next. A (very select) number of algorithms are given here.

To be able to reason about different scheduling algorithms, we need some sort of metric to determine which approach is best. When studying schedulers, the following metrics are typically used:

- Average Throughput: the number of tasks that are executed in one second.
- Average Job Wait Time (AJWT): the average time that a job needs to wait before it gets scheduled for the first time (first scheduling time - creation time).
- Average Job Completion Time (AJCT): the average time that a job needs before it has fully finished (finish time - creation time).
- CPU efficiency (η_CPU): the percentage of time that the processor performs useful work. Remember that every time the CPU switches between tasks, there is a certain amount of overhead due to context switching. As such, the more transitions between tasks there are, the less efficiently the CPU is being used. This metric is primarily used in Section 7.3, as we will assume the overhead is 0 in this chapter.

A simple algorithm that a scheduler can follow is First Come, First Served (FCFS): the order in which the jobs arrive (are started) is the same as the order in which the jobs are allowed on the processor. The image below shows three tasks that arrive very close to each other. The result of the cooperative scheduler is shown in the image "FCFS with cooperative scheduling".

Applying the first three metrics on the example above gives the following results:

- Task 1 arrives at 0 sec and can immediately start running => wait time: 0s.
- Task 2 arrives at 0.1 sec and can start after 1s => wait time: 0.9s => 1s (rounded).
- Task 3 arrives at 0.2 sec and can start after 11s => wait time: 10.8s => 11s (rounded).

Average Job Wait Time = (0s + 1s + 11s) / 3 = 12s / 3 = 4s.

- Task 1 arrives at 0 sec, waits 0s and takes 1s => duration: 1s.
- Task 2 arrives at 0.1 sec, waits 0.9s and takes 10s => duration: 11s (rounded).
- Task 3 arrives at 0.2 sec, waits 10.8s and takes 1s => duration: 12s (rounded).

Average Job Completion Time = (1s + 11s + 12s) / 3 = 24s / 3 = 8s.

3 jobs over 12 seconds => Throughput = 3 jobs / 12s = 0.25 jobs/s.
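The short program below (an added sketch, not part of the original example) recomputes these metrics from the arrival and execution times. Note that the text rounds each job's wait and completion time up before averaging, which gives 4s and 8s; without that per-job rounding the exact averages are 3.9s and 7.9s, and the throughput is 0.25 jobs/s either way.

```c
#include <stdio.h>

#define N 3 /* three jobs, as in the example above */

int main(void) {
    double arrival[N] = {0.0, 0.1, 0.2};  /* creation times (s)       */
    double burst[N]   = {1.0, 10.0, 1.0}; /* pure execution times (s) */

    double cpu_free = 0.0; /* moment the CPU becomes idle again */
    double wait_sum = 0.0, completion_sum = 0.0, last_finish = 0.0;

    for (int i = 0; i < N; i++) {
        /* FCFS: a job starts once it has arrived AND the CPU is free. */
        double start  = cpu_free > arrival[i] ? cpu_free : arrival[i];
        double finish = start + burst[i];
        wait_sum       += start  - arrival[i]; /* time before first run */
        completion_sum += finish - arrival[i]; /* total time in system  */
        cpu_free    = finish;
        last_finish = finish;
    }

    printf("AJWT       = %.1f s\n", wait_sum / N);         /* 3.9 (text: 4) */
    printf("AJCT       = %.1f s\n", completion_sum / N);   /* 7.9 (text: 8) */
    printf("Throughput = %.2f jobs/s\n", N / last_finish); /* 0.25          */
    return 0;
}
```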
