
OSTEP-notes-virtualization-chap9-to-14

蚌埠住了捏
2023-03-23 / 0 评论 / 0 点赞 / 245 阅读 / 1,437 字

proportional-share scheduling

lottery scheduling

A proportional-share scheduler, also sometimes referred to as a fair-share scheduler, is based around a simple concept: instead of optimizing for turnaround or response time, a scheduler might instead try to guarantee that each job obtains a certain percentage of CPU time. An excellent early example of proportional-share scheduling is lottery scheduling.

The basic concept is tickets, which are used to represent the share of a resource that a process (or user, or whatever) should receive. One of the most beautiful aspects of lottery scheduling is its use of randomness.

stride scheduling

While randomness gets us a simple (and approximately correct) scheduler, it occasionally will not deliver the exact right proportions, especially over short time scales. For this reason, Waldspurger invented stride scheduling, a deterministic fair-share scheduler.

With jobs A, B, and C, with 100, 50, and 250 tickets, respectively, we can compute the stride of each by dividing some large number by the number of tickets each process has been assigned. For example, if we divide 10,000 by each of those ticket values, we obtain the following stride values for A, B, and C: 100, 200, and 40. We call this value the stride of each process; every time a process runs, we will increment a counter for it (called its pass value) by its stride to track its global progress. The basic idea is simple: at any given time, pick the process to run that has the lowest pass value so far; when you run a process, increment its pass counter by its stride.

The Linux Completely Fair Scheduler (CFS)

Its goal is simple: to divide a CPU evenly among all competing processes. It does so through a simple counting-based technique known as virtual runtime (vruntime). When a scheduling decision occurs, CFS will pick the process with the lowest vruntime to run next.

Weighting (Niceness). CFS also enables control over process priority, enabling users or administrators to give some processes a higher share of the CPU. It does this not with tickets, but through a classic UNIX mechanism known as the nice level of a process. The nice parameter can be set anywhere from -20 to +19 for a process, with a default of 0. Positive nice values imply lower priority and negative values imply higher priority; when you’re too nice, you just don’t get as much (scheduling) attention, alas.

When the scheduler has to find the next job to run, it should do so as quickly as possible. Simple data structures like lists don’t scale; CFS addresses this by keeping processes in a red-black tree.

CFS does not keep all processes in this structure; rather, only running (or runnable) processes are kept therein. If a process goes to sleep (say, waiting on an I/O to complete, or for a network packet to arrive), it is removed from the tree and kept track of elsewhere.

Multiprocessor Scheduling

issues

cache coherence, synchronization (locking), cache affinity

multiprocessor scheduling

The most basic approach is to simply reuse the basic framework for single-processor scheduling, by putting all jobs that need to be scheduled into a single queue; we call this single-queue multiprocessor scheduling or SQMS for short. This approach has the advantage of simplicity; most SQMS schedulers include some kind of cache affinity mechanism to try to make it more likely that a process will continue to run on the same CPU if possible.

multi-queue multiprocessor scheduling (or MQMS). In MQMS, our basic scheduling framework consists of multiple scheduling queues. Each queue will likely follow a particular scheduling discipline, such as round robin, though of course any algorithm can be used. When a job enters the system, it is placed on exactly one scheduling queue, according to some heuristic (e.g., random, or picking one with fewer jobs than others).

How should a multi-queue multiprocessor scheduler handle load imbalance?

The obvious answer to this query is to move jobs around, a technique which we (once again) refer to as migration. By migrating a job from one CPU to another, true load balance can be achieved. One basic approach is to use a technique known as work stealing. With a work-stealing approach, a (source) queue that is low on jobs will occasionally peek at another (target) queue, to see how full it is. If the target queue is (notably) more full than the source queue, the source will “steal” one or more jobs from the target to help balance load.

Linux Multiprocessor Schedulers

the O(1) scheduler, the Completely Fair Scheduler (CFS), and the BF Scheduler (BFS)

Both O(1) and CFS use multiple queues, whereas BFS uses a single queue, showing that both approaches can be successful.

summary

The single-queue approach (SQMS) is rather straightforward to build and balances load well but inherently has difficulty with scaling to many processors and cache affinity.

The multiple-queue approach (MQMS) scales better and handles cache affinity well, but has trouble with load imbalance and is more complicated.

Whichever approach you take, there is no simple answer: building a general purpose scheduler remains a daunting task, as small code changes can lead to large behavioral differences. Only undertake such an exercise if you know exactly what you are doing, or, at least, are getting paid a large amount of money to do so.

Address Spaces

OS creates an easy to use abstraction of physical memory. We call this abstraction the address space, and it is the running program’s view of memory in the system. The address space of a process contains all of the memory state of the running program, e.g., code, stack, and heap.

Major goals of a virtual memory (VM) system

transparency. The OS should implement virtual memory in a way that is invisible to the running program.

efficiency. The OS should strive to make the virtualization as efficient as possible, both in terms of time (i.e., not making programs run much more slowly) and space (i.e., not using too much memory for structures needed to support virtualization).

protection. The OS should make sure to protect processes from one another as well as the OS itself. Protection thus enables us to deliver the property of isolation among processes; each process should be running in its own isolated cocoon, safe from the ravages of other faulty or even malicious processes.

EVERY ADDRESS YOU SEE IS VIRTUAL

never forget: if you print out an address in a program, it’s a virtual one, an illusion of how things are laid out in memory; only the OS (and the hardware) knows the real truth.

Memory API

Types of Memory

In running a C program, there are two types of memory that are allocated. The first is called stack memory; allocations and deallocations of it are managed implicitly by the compiler for you, the programmer, and for this reason it is sometimes called automatic memory. The second type is called heap memory, where all allocations and deallocations are explicitly handled by you, the programmer.

malloc()

The malloc() call is quite simple: you pass it a size asking for some room on the heap, and it either succeeds and gives you back a pointer to the newly-allocated space, or fails and returns NULL.

#include <stdlib.h>
void *malloc(size_t size);

The single parameter malloc() takes is of type size_t, which simply describes how many bytes you need. For example, to allocate space for a double-precision floating point value, you simply do this:

double *d = (double *) malloc(sizeof(double));

Casting doesn’t really accomplish anything, other than telling the compiler and other programmers who might be reading your code: “yeah, I know what I’m doing.” By casting the result of malloc(), the programmer is just giving some reassurance; the cast is not needed for correctness.

free()

To free heap memory that is no longer in use, programmers simply call free(). The routine takes one argument, a pointer returned by malloc().

common errors

  • Forgetting To Allocate Memory
  • Not Allocating Enough Memory
  • Forgetting to Initialize Allocated Memory: uninitialized read
  • Forgetting To Free Memory: memory leak
  • Freeing Memory Before You Are Done With It: dangling pointer
  • Freeing Memory Repeatedly: double free
  • Calling free() Incorrectly: invalid frees

Because memory errors are so frequent, a whole ecosystem of tools has developed to help find such problems in your code. Check out both Purify and Valgrind; both are excellent at helping you locate the source of your memory-related problems.
