
Task Scheduling

Recap: Each process has a separate virtual memory space
[Figure: four processes, each with its own virtual memory containing static data and a heap]
They are isolated from one another. Each of them is not supposed to know what happens to another one.
Virtually, every process seems to have a processor, but only a few of them are physically executing.

Recap: Threads
[Figure: two tasks, each with Thread #1 and Thread #2 sharing one virtual memory (static data, heap) while keeping separate CPU contexts (PC, registers)]
Recap: Why Threads?
Process is an abstraction of a computer
When you create a process, you duplicate everything
However, you only need to duplicate the CPU abstraction to parallelize computation tasks
Threads as lightweight processes
Thread is an abstraction of a CPU in a computer
Maintain separate execution context
Share other resources (e.g. memory)

Recap: Synchronization mechanisms
Mutual exclusion
Progress
Fairness
Accommodation of nondeterminism/multiple processors
Waitlist to reduce CPU load
Used intensively in Linux kernels
All of the above need to be implemented through atomic instructions
Semaphores
Allow multiple concurrent processes/threads working together
Wait-free/Lock-free when reading
Wait-free synchronization
Wait-free regardless of reads/writes

Mechanisms of changing processes
Basic scheduling policies
Linux Scheduling
An experimental time-sharing system: The Multi-Level Scheduling Algorithm

The mechanisms of changing the running processes
Cooperative Multitasking (non-preemptive multitasking)
Preemptive Multitasking

Preemptive Multitasking
The OS controls the scheduling and can change the running process even though the process does not give up the resource. But how?

Three ways to invoke OS handlers
System calls / trap instructions raised by applications (e.g., display images, play sounds): return-from-trap
Exceptions raised by the processor itself (e.g., divide by zero, unknown memory addresses): return from exception handler
Interrupts raised by hardware (e.g., keystrokes, network packets): return from interrupt handler
[Figure: user-program instructions execute until a trap instruction transfers control to kernel/privileged-mode code, which later returns to the user program]

How preemptive multitasking works
Set up a timer (a hardware feature provided by the processor) before the process starts running
After a certain period of time, the timer generates an interrupt to force the running process to transfer control to the OS kernel
The OS kernel code decides if the system wants to continue the current process
If not, context switch
If yes, return to the process
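The timer idea can be mimicked at user level. A minimal sketch, assuming a POSIX system and Python's `signal` module; the handler stands in for the OS timer-interrupt handler, and all names here are made up:

```python
import signal

preempted = False  # set by the "interrupt handler" below

def on_timer(signum, frame):
    # Plays the role of the OS timer-interrupt handler: it gains control
    # even though the loop below never yields voluntarily.
    global preempted
    preempted = True

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.05)  # one-shot timer: fire after 50 ms

# A compute loop that never gives up the CPU on its own;
# it only stops because the timer forces control away from it.
while not preempted:
    pass
```

In a real OS the hardware timer interrupt plays this role, and the kernel (not the interrupted program) decides what runs next.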

Scheduling Policies from Undergraduate OS classes

Virtualizing the processor
Multiple processes need to share a single processor
Create an illusion that the processor is serving my task by rapidly switching the running process
Determine which process gets the processor for how long
CPU Scheduling

Scheduling Metrics
CPU utilization: how busy we keep the CPU
Throughput: the number of tasks/processes/threads that we can finish within a given amount of time
Turnaround time: the time between submission/arrival and completion
Response time: the time between submission and the first time the job is scheduled
Wait time: the time between when the job is ready (not including the overhead of queuing and command processing) and the first time the job is scheduled
Fairness: every process should get a fair chance to make progress
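These metrics can be computed for a concrete schedule. A toy sketch for a non-preemptive FIFO run on one CPU; the job list is made up, and note that for non-preemptive FIFO, wait time and response time coincide:

```python
jobs = [(0, 10), (2, 4), (4, 2)]  # (arrival_time, run_time), in FIFO order

t = 0
metrics = []
for arrival, run in jobs:
    start = max(t, arrival)             # the job may have to wait for the CPU
    finish = start + run
    metrics.append({
        "response": start - arrival,    # submission -> first scheduled
        "wait": start - arrival,        # equals response here (non-preemptive)
        "turnaround": finish - arrival, # submission -> completion
    })
    t = finish

avg_turnaround = sum(m["turnaround"] for m in metrics) / len(metrics)
```

The second job arrives at time 2 but cannot start until time 10, so it waits 8 time units; small changes in arrival order change these numbers substantially.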

What you learned before
Non-preemptive/cooperative: the task runs until it finishes
FIFO/FCFS: First In First Out / First Come First Serve
SJF: Shortest Job First
Preemptive: the OS periodically checks the status of processes and can potentially change the running process
STCF: Shortest Time-to-Completion First
RR: Round robin
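To see why SJF looks attractive on paper, a small sketch comparing average turnaround time under FIFO and SJF for jobs that all arrive at time 0 (the workload numbers are made up):

```python
def avg_turnaround(run_times):
    # Non-preemptive, one CPU, all jobs arrive at time 0 and run back
    # to back, so each job's turnaround time is its finish time.
    t, total = 0, 0
    for run in run_times:
        t += run
        total += t
    return total / len(run_times)

workload = [10, 1, 2]
fifo = avg_turnaround(workload)          # finishes at 10, 11, 13
sjf = avg_turnaround(sorted(workload))   # shortest first: 1, 3, 13
```

Sorting by run time assumes the run times are known before execution, which is exactly the knowledge the poll that follows asks about.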

Parameters for policies
How many of the following scheduling policies require knowledge of process run times before execution?
1. FIFO/FCFS: First In First Out / First Come First Serve
2. SJF: Shortest Job First
3. STCF: Shortest Time-to-Completion First
4. RR: Round robin
A. 0 B. 1 C. 2 D. 3 E. 4

Parameters for policies
How many of the following scheduling policies require knowledge of process run times before execution?
1. FIFO/FCFS: First In First Out / First Come First Serve
2. SJF: Shortest Job First
3. STCF: Shortest Time-to-Completion First
4. RR: Round robin
You can never know the execution time before executing them! These policies are not realistic.
The best ones you learned in undergraduate OS do not even work in reality; forget about them in real implementations.

An experimental time-sharing system
F. J. Corbató, M. Merwin-Daggett, and R. C. Daley, Massachusetts Institute of Technology, Cambridge, Massachusetts

Why Multi-level scheduling algorithm
Why does MIT's experimental time-sharing system propose the multi-level scheduling algorithm? How many of the following is it trying to optimize?
1. Turn-around time
2. Wait time
3. Fairness
4. Response time
A. 0 B. 1 C. 2 D. 3 E. 4


Why Multi-level scheduling algorithm?
What happens to round robin when the system is saturated?
System saturation: the demand for computing is larger than the physical processor resources available
Service level degrades
Lots of program swap-ins-and-outs (known as context switches in our current terminology)
User interface response time is bad: you have to wait until your turn
Long-running tasks cannot make good progress: frequent swap in-and-out

What happens during a context switch?
How many of the following must occur during a context switch?
1. Save the current process's PC/registers to its PCB
2. Flush/invalidate the cache content of the current process
3. Restore the upcoming process's PC/registers from its PCB
4. Load memory pages of the upcoming processes
A. 0


Context Switch Overhead
You think round robin should act like this:
[Figure: ideal timeline from 0 to 10 where P1, P2, P3 run back to back with no gaps]
But the fact is:
[Figure: timeline where every switch (P1->P2, P2->P3, P3->P1, ...) inserts context-switch overhead between quanta]
Your processor utilization can be very low if you switch frequently
No process can make a sufficient amount of progress within a given period of time
It also takes a while to reach your turn
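The utilization loss is easy to quantify. A sketch assuming every quantum is followed by one context switch of fixed cost (the numbers are illustrative, not measured):

```python
def utilization(quantum, switch_cost):
    # Fraction of time spent on useful work when each quantum of
    # computation pays for one context switch of `switch_cost`.
    return quantum / (quantum + switch_cost)

coarse = utilization(quantum=10, switch_cost=1)  # rare switches: ~91% useful work
fine = utilization(quantum=1, switch_cost=1)     # frequent switches: only 50%
```

Shrinking the quantum improves responsiveness but pushes the overhead term toward dominating the total time.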

The Multilevel Scheduling Algorithm
Place a new process in one of the N queues, depending on the program size
Why? wp is the program memory size; smaller ones are assigned to lower-numbered queues
Schedule processes in one of N queues
Start in the initially assigned queue n
Run for 2^n quanta (where n is the current queue depth)
If not complete, move to a higher-numbered queue (e.g. n+1); a larger process will execute longer before a switch
Level m is run only when levels 0 to m-1 are empty
Smaller tasks are given higher priority in the beginning; smaller and newer processes are given higher priority
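The policy above can be sketched as a toy simulation: queue n grants 2^n quanta and unfinished jobs demote to the next queue. The job names and sizes are made up, and for simplicity every job starts in queue 0 (the paper chooses the initial queue by program size):

```python
from collections import deque

def multilevel(remaining, levels=4):
    # remaining: {job: quanta still needed}
    queues = [deque() for _ in range(levels)]
    for job in remaining:
        queues[0].append(job)
    trace = []  # (job, queue level, quanta used)
    for n in range(levels):
        # With no new arrivals, draining levels in order matches
        # "level m runs only when levels 0..m-1 are empty".
        while queues[n]:
            job = queues[n].popleft()
            used = min(2 ** n, remaining[job])  # queue n grants 2^n quanta
            remaining[job] -= used
            trace.append((job, n, used))
            if remaining[job] > 0:
                queues[min(n + 1, levels - 1)].append(job)  # demote
    return trace

trace = multilevel({"A": 1, "B": 5})
# The short job A finishes in its very first quantum, while the larger
# job B demotes twice and gets progressively longer slices.
```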

The Multilevel Scheduling Algorithm
Not optimized for anything: it is never possible to have an optimized scheduling algorithm without prior knowledge regarding all running processes
It is practical: many scheduling algorithms used in modern OSes still follow the same idea

The Linux Scheduler: a Decade of
Wasted Cores
J-P. Lozi, B. Lepers, J. Funston, F. Gaud, V. Quema, and A. Fedorova

Linux's Completely Fair Scheduler (CFS)
Real-time process classes always run first (rare)
Interactive processes: usually blocked, low total run time, high priority
Other processes:
Red-black BST of processes, organized by the CPU time they've received
Pick the ready process that has run for the shortest (normalized) time thus far
Run it, update its CPU usage time, add it back to the tree
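The pick-the-minimum loop can be sketched with a binary heap standing in for CFS's red-black tree (both give pick-minimum behavior; the task names and the 1-unit quantum here are made up):

```python
import heapq

ready = [(0, "A"), (0, "B")]  # (CPU time received so far, task)
heapq.heapify(ready)

trace = []
for _ in range(4):
    runtime, task = heapq.heappop(ready)  # like the leftmost node in the tree
    trace.append(task)
    # "Run" the task for one time unit, then reinsert with updated usage.
    heapq.heappush(ready, (runtime + 1, task))
# The two tasks alternate, each accumulating an equal share of CPU time.
```

Real CFS keys the tree on a weighted virtual runtime so that higher-priority tasks accumulate "time" more slowly, but the selection rule is the same.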

CFS on multicore systems
Each processor has a run-queue: the load within each local queue may not be balanced
Run load balancing algorithms
Cannot be invoked often: expensive computation-wise and communication-wise
Emergency load-balancing if any core is idle

User-level v.s. kernel threads
[Figure: user-level threads are managed by a runtime library inside the process (the process is a virtual processor), with the thread list above the privilege boundary; kernel threads are managed in kernel mode, with thread lists and process lists in the kernel]
User-level threads:
The OS kernel is unaware of user-level threads
Switching threads does not require kernel mode operations
A thread can block other threads within the same process
Kernel threads:
The kernel can control threads directly
Thread switch requires kernel/user mode switch and system calls
Threads work individually

Load balancing
task load = weight x % of CPU use
[Figure: tasks distributed across Processor Core #1 and Processor Core #2]

Load balancing if we have more cores?
Scheduling group #0
Processor #0 #1
Scheduling group #1
Processor #2 #3

Load balancing if we have more cores?
avg. load = 2000!
avg. load = 2000!
Scheduling group #0
Processor Core #0
Processor Core #1
Scheduling group #1
Processor #2 #3

Load balancing grouping?
avg. load = 2000!
balanced! Scheduling group #2
avg. load = 2000!
Scheduling group #0
Processor Core #0
Processor Core #1
Processor Core #2
Scheduling group #1
Processor Core #3

Bugs in Linux CFS
The Group Imbalance bug
Thread loads are divided; work stealing is based on average load: use minimum load instead
The Scheduling Group Construction bug
Linux spawns threads on the same core as their parent thread
The Missing Scheduling Domains bug
When a core is disabled and then re-enabled using the /proc interface, load balancing between any NUMA nodes is no longer performed
The Overload-on-Wakeup bug
A thread that was asleep may wake up on an overloaded core while other cores in the system are idle (the wakeup code promotes cache reuse)
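The "minimum load instead of average" fix can be illustrated with made-up per-core loads: averaging can hide an idle core inside a group, while the minimum cannot.

```python
def should_steal(local_cores, remote_cores, use_min=False):
    # Should a core in the local group pull work from the remote group?
    # Compare group loads by average (the buggy way) or minimum (the fix).
    key = min if use_min else (lambda g: sum(g) / len(g))
    return key(remote_cores) > key(local_cores)

local = [4000, 0]      # one overloaded core, one idle core
remote = [2000, 2000]  # an evenly loaded group with work to spare

by_average = should_steal(local, remote)                # both groups average 2000
by_minimum = should_steal(local, remote, use_min=True)  # 2000 > 0: imbalance found
```

With averages the groups look balanced and the idle core stays idle; comparing minimum loads exposes the imbalance so work migrates.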

Lottery Scheduling: Flexible Proportional-Share Resource Management
Carl A. Waldspurger and William E. Weihl

Why Lottery
Most approaches are not flexible or responsive
We want Quality of Service
The overhead of running those algorithms is high!
Nobody knows how they work

Solution: Lottery and Tickets

What does ticket abstraction promote?
How many of the following can the ticket abstraction in the lottery paper promote?
1. Proportional fairness
2. Machine-independent implementation of the scheduling policy
3. Generic scheduling policy across different devices
4. Starvation free

What lottery proposes
Each process holds a certain number of lottery tickets
Randomize to generate a lottery
If a process wants to have higher priority: obtain more tickets!
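A minimal sketch of the draw (the ticket counts and the seed are made up): over many lotteries, each process wins in proportion to the tickets it holds.

```python
import random

def lottery_pick(tickets, rng):
    # tickets: {process: ticket count}; draw one winning ticket uniformly
    # and walk the processes to find which one holds it.
    winner = rng.randrange(sum(tickets.values()))
    for proc in sorted(tickets):
        if winner < tickets[proc]:
            return proc
        winner -= tickets[proc]

rng = random.Random(202)        # seeded only to make the sketch repeatable
tickets = {"A": 75, "B": 25}    # A holds 75% of the tickets
wins = {"A": 0, "B": 0}
for _ in range(10_000):
    wins[lottery_pick(tickets, rng)] += 1
# A wins roughly 75% of the draws; B still wins sometimes, so neither starves.
```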

What does ticket abstraction promote?
How many of the following can the ticket abstraction in the lottery paper promote?
A. 0 B. 1 C. 2 D. 3
1. Proportional fairness: tickets represent the share a process should receive from a resource
2. Machine-independent implementation of the scheduling policy: the ticket abstraction can be independent of machine speeds or details
3. Generic scheduling policy across different devices: you may use tickets on anything you would like to share
4. Starvation free: eventually every process with a ticket gets to run
It is also state-free, reducing the overhead

Ticket transfers
Ticket economics
Ticket inflation
Ticket currencies
Compensation tickets

How good is lottery?
Flexibility: allows a Monte-Carlo algorithm to dynamically inflate its tickets
The overhead is not too bad: 1000 instructions, less than ~500 ns on a 2 GHz processor
Figure 5: the average ratio stays in proportion to the ticket allocation
Ticket transfer: client-server setup

The impact of lottery
Data center scheduling
You buy time
Lottery scheduling of your virtual machine

Will you use lottery for your system?
Will it be good for:
Event-driven applications
Real-time applications
GUI-based systems
Is randomization a good idea?
The authors later developed a deterministic stride-scheduling algorithm

Setup Poll Everywhere https://pollev.com/hungweitseng
Download the Poll Everywhere app
Login through the app using
Join PollEv.com/hungweitseng

Announcement
Regarding in-person instructions starting from February:
We will have live sessions in Material Science Building 103 starting next Tuesday 2p!
Online options remain open: Zoom or YouTube
Please set up Poll Everywhere before attending lectures, both in-person and online, by next Tuesday as well
Will release on 2/10/2021 0:00am and is due on 2/11/2021 11:59:00pm
You will have to find a consecutive, non-stop 80-minute slot within this period
One time, cannot reinitiate: please make sure you have a stable system and network. No late submission is allowed
Project released: https://github.com/hungweitseng/CS202-MMA
Groups of 2-3 are acceptable, but not recommended
Pull the latest version: it has some changes for later kernel versions
Install an Ubuntu Linux 20.04 VM as soon as you can!
Please do not use a real machine: you may not be able to reboot it again
Need help? Check for office hours https://calendar.google.com/calendar/u/0/r?
Reading quizzes due next Tuesday


