Chapter 9 Main Memory

Chapter 9: Memory Management
■ Background
■ Contiguous Memory Allocation
■ Paging
■ Structure of the Page Table
■ Swapping

Background – Basic Hardware
■ Main memory and the registers built into each processing core are the only general-purpose storage that the CPU can access directly.
● Registers that are built into each CPU core are generally accessible within one cycle of the CPU clock.
● Completing a memory access may take many cycles of the CPU clock.
■ Therefore, any instructions in execution and any data being used by the instructions must be in one of these direct-access storage devices.
■ If the instructions/data are not in memory, they must be moved there before the CPU can operate on them.

Background – Basic Hardware
■ For proper system operation, we must protect the operating system from access by user processes, as well as protect user processes from access by one another.
■ One possible solution is to ensure that each process has a separate memory space.
● Separate per-process memory space protects the processes from each other and is fundamental to having multiple processes loaded in memory for concurrent execution.
● To separate memory spaces, we need the ability to determine the range of legal addresses that the process may access and to ensure that the process can access only these legal addresses.
● We can provide this protection by using two registers, usually a base register and a limit register.

Background – Basic Hardware
■ The base register holds the smallest legal physical memory address; the limit register specifies the size of the range.
● Only the OS can modify the contents of these two registers, in kernel mode.
■ For example, if the base register holds 300040 and the limit register holds 120900, then the program can legally access all addresses from 300040 through 420939 (inclusive).
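The base/limit comparison described above can be sketched in a few lines of Python (an illustration of the hardware check, not of how it is actually wired in silicon):

```python
def is_legal_access(address, base, limit):
    # Hardware-style check: an address generated in user mode is
    # legal only if base <= address < base + limit.
    return base <= address < base + limit

# Using the slide's numbers: base = 300040, limit = 120900,
# so legal addresses run from 300040 through 420939 inclusive.
print(is_legal_access(300040, 300040, 120900))  # True  (lowest legal address)
print(is_legal_access(420939, 300040, 120900))  # True  (highest legal address)
print(is_legal_access(420940, 300040, 120900))  # False (out of range -> trap)
```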

Background – Basic Hardware
■ With these registers, protection of memory space is accomplished by having the CPU hardware compare every address generated in user mode with the registers.
■ Any attempt by a program executing in user mode to access operating-system memory or other users’ memory results in a trap to the operating system, which treats the attempt as an error.

Background – Address Binding
■ Usually, a program resides on a disk as a binary executable file.
■ To run, the program must be brought into memory.
■ Most systems allow a user process to reside in any part of the physical memory.
■ In most cases, a user program goes through several steps before being executed.
● Some of the steps could be optional.

Background – Address Binding
■ Addresses may be represented in different ways during these steps.
● Addresses in the source program are generally symbolic (such as the variable count).
● A compiler typically binds these symbolic addresses to relocatable addresses (such as “14 bytes from the beginning of this module”).
● The linker or loader in turn binds the relocatable addresses to absolute addresses (i.e. real addresses).

Background – Logical vs. Physical Address
■ Logical Address: The address from the perspective of the user program.
● It is also known as virtual address.
● The user program only uses logical addresses and thinks that the corresponding process runs in memory locations from 0 to max.
■ Physical Address: The real address for each storage unit in main memory.
● The logical addresses must be mapped to physical addresses before they are used.
● Suppose that we use a relocation register (which is equivalent to the base register mentioned previously) to accomplish the mapping.

Background – Logical vs. Physical Address
■ With dynamic relocation, the value in the relocation register is added to every logical address generated by a user process at the time the address is sent to memory.
■ For example, if the base is at 14000, then an attempt by the user to address location 0 is dynamically relocated to location 14000; an access to location 346 is mapped to location 14346.
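The relocation performed by the MMU is just an addition; a minimal Python sketch using the slide's example values:

```python
def relocate(logical_address, relocation_register):
    # The MMU adds the relocation (base) register to every logical
    # address at the time the address is sent to memory.
    return relocation_register + logical_address

print(relocate(0, 14000))    # 14000
print(relocate(346, 14000))  # 14346
```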

Background – Dynamic Loading
■ In our discussion so far, it has been necessary for the entire program to be in physical memory for the corresponding process to execute.
● The size of a process has thus been limited to the size of physical memory.
■ To obtain better memory-space utilization, we can use dynamic loading.
● With dynamic loading, a function is not loaded until it is called. All functions are kept on disk in a relocatable load format. The main program is loaded into memory and is executed. When a function needs to call another function, the calling function first checks to see whether the other function has been loaded. If it has not, it will be loaded.

Contiguous Memory Allocation
■ The main memory must accommodate both the operating system and the various user processes.
■ We first discuss one early method to allocate main memory, contiguous memory allocation.
● The memory is usually divided into two partitions: one for the operating system and one for the user processes.
● We can place the operating system in either low memory addresses or high memory addresses.
● However, many operating systems (including Linux and Windows) place the operating system in high memory, and therefore we discuss only that situation.
● With contiguous memory allocation, each process is contained in a single section of memory that is contiguous to the section containing the next process.

Contiguous Memory Allocation
■ One of the simplest methods of allocating memory is to assign processes to variably sized partitions in memory, where each partition may contain exactly one process.
■ In this variable-partition scheme, the operating system keeps a table indicating which parts of memory are available and which are occupied.
■ Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
■ Eventually, as you will see, memory contains a set of holes of various sizes.

Contiguous Memory Allocation
■ Here is one example:
● Initially, the memory is fully utilized, containing processes 5, 8, and 2.
● After process 8 leaves, there is one contiguous hole.
● Later on, process 9 arrives and is allocated memory.
● Then process 5 departs, resulting in two noncontiguous holes.

Contiguous Memory Allocation
■ When a process is loaded into the memory, the OS needs to find a proper hole to allocate enough memory to the process.
■ This procedure is a particular instance of the general dynamic storage-allocation problem, which concerns how to satisfy a request of size n from a list of free holes. Here are three commonly used methods:
● First fit: Allocate the first hole that is big enough.
● Best fit: Allocate the smallest hole that is big enough.
● Worst fit: Allocate the largest hole.
■ Simulations have shown that:
● Both first fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
● Neither first fit nor best fit is clearly better than the other in terms of storage utilization, but first fit is generally faster.
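The three placement strategies can be sketched as follows (a minimal Python illustration; the hole sizes are hypothetical, and each function returns the index of the chosen hole, or None if no hole is large enough):

```python
def first_fit(holes, n):
    # Allocate the first hole that is big enough.
    for i, size in enumerate(holes):
        if size >= n:
            return i
    return None

def best_fit(holes, n):
    # Allocate the smallest hole that is big enough.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, n):
    # Allocate the largest hole.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    return max(candidates)[1] if candidates else None

holes = [100, 500, 200, 300, 600]   # hypothetical free-hole sizes, in bytes
print(first_fit(holes, 212))  # 1: 500 is the first hole >= 212
print(best_fit(holes, 212))   # 3: 300 is the smallest hole >= 212
print(worst_fit(holes, 212))  # 4: 600 is the largest hole
```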

Contiguous Memory Allocation
■ Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation.
● As processes are loaded and removed from memory, the free memory space is broken into little pieces.
● External fragmentation exists when there is enough total memory space to satisfy a request but the available spaces are not contiguous: storage is fragmented into a large number of small holes.
● Depending on the total amount of memory storage and the average process size, external fragmentation may be a minor or a major problem.
● Statistical analysis of first fit, for instance, reveals that, even with some optimization, given N allocated blocks, another 0.5N blocks will be lost to fragmentation. This property is known as the 50-percent rule.

Contiguous Memory Allocation
■ Memory allocation can also suffer from internal fragmentation.
■ Consider a multiple-partition allocation scheme with a hole of 18,464 bytes.
■ Suppose that the next process requests 18,462 bytes. If we allocate exactly the requested block, we are left with a hole of 2 bytes.
■ The overhead to keep track of this hole will be substantially larger than the hole itself.
■ The general approach to avoiding this problem is to break the physical memory into fixed-sized blocks and allocate memory in units of these blocks.
■ With this approach, the memory allocated to a process may be slightly larger than the requested memory. The difference between these two numbers is internal fragmentation—unused memory that is internal to a partition.

Paging
■ Memory management discussed thus far has required the physical address space of a process to be contiguous, which inherently suffers from external fragmentation.
■ We now introduce paging, a memory-management scheme that permits a process’s physical address space to be noncontiguous.
● Paging avoids the external fragmentation problem.
■ Because it offers numerous advantages, paging in its various forms is used in most operating systems, from those for large servers through those for mobile devices.

Paging – Basic Method
■ The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages.
■ When a process is to be executed, its pages are loaded into any available memory frames.
■ Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d):

Paging – Basic Method
■ The page number is used as an index into a per-process page table.


Paging – Basic Method
■ The page table contains the base address of each frame in physical memory, and the offset is the location in the frame being referenced.
■ Thus, the base address of the frame is combined with the page offset to define the physical memory address.

Paging – Basic Method
■ Theoretically, here are the steps to translate a logical address generated by the CPU to a physical address:
Step 1. Extract the page number p and use it as an index into the page table.
Step 2. Extract the corresponding frame number f from the page table.
Step 3. Replace the page number p in the logical address with the frame number f.
■ As the offset d does not change, it is not replaced, and the frame number and offset now comprise the physical address.

Paging – Basic Method
■ The page size (note that it is the same as frame size) is defined by the hardware.
● The size of a page is a power of 2, typically varying between 4 KB and 1 GB per page, depending on the computer architecture.
● The selection of a power of 2 as a page size makes the translation of a logical address into a page number and page offset particularly easy.
■ If the size of the logical address space is 2^m, and a page size is 2^n bytes, then the high-order (m − n) bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:
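Because the page size is a power of 2, this split needs only a shift and a mask. A minimal Python sketch, using the small example system from the slides (m = 4, n = 2):

```python
def split_logical_address(addr, m, n):
    # High-order (m - n) bits give the page number;
    # the n low-order bits give the page offset.
    assert 0 <= addr < 2 ** m, "address must fit in m bits"
    page_number = addr >> n
    offset = addr & ((1 << n) - 1)
    return page_number, offset

# With m = 4 and n = 2, logical address 13 = 0b1101:
print(split_logical_address(13, m=4, n=2))  # (3, 1): page 3, offset 1
```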

[Figure: a four-page logical memory (pages 0–3) mapped through a page table into an eight-frame physical memory (frames 0–7); in this example system, n = 2 and m = 4.]

Paging – Basic Method
■ Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show how the programmer’s view of logical memory can be mapped into physical memory.
● Logical address 0 is page 0, offset 0. Indexing into the page table, we find that page 0 is in frame 5. Thus,
logical address 0 maps to physical address 20 [= (5 × 4) + 0].
● Logical address 3 (page 0, offset 3) maps to physical address 23 [= (5 × 4) + 3].
● Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 [= (6 × 4) + 0].
● Similarly, logical address 13 maps to physical address 9.
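These translations can be checked mechanically. A small Python sketch; the page table below follows the mappings the slide gives (page 0 → frame 5, page 1 → frame 6, and page 3 → frame 2, implied by address 13 → 9), while the page-2 entry is an assumed value added only for completeness:

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}  # page -> frame (page 2's entry is assumed)

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))   # 20 = (5 * 4) + 0
print(translate(3))   # 23 = (5 * 4) + 3
print(translate(4))   # 24 = (6 * 4) + 0
print(translate(13))  # 9  = (2 * 4) + 1
```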

Paging – Basic Method
■ When we use a paging scheme, we have no external fragmentation: any free frame can be allocated to a process that needs it.
■ However, we may have some internal fragmentation. Notice that frames are allocated as units. If the memory requirements of a process do not happen to coincide with page boundaries, the last frame allocated may not be completely full.
■ For example:
● If the page size is 2,048 bytes, a process of 72,766 bytes will need 35 pages plus 1,086 bytes.
● It will be allocated 36 frames, resulting in internal fragmentation of 2,048 − 1,086 = 962 bytes.
■ In the worst case, a process would need n pages plus 1 byte. It would be allocated n + 1 frames, resulting in internal fragmentation of almost an entire frame.
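The arithmetic in this example generalizes directly; a small Python sketch:

```python
import math

def frames_and_internal_fragmentation(process_size, page_size):
    # A process is allocated whole frames, so the last frame may be only
    # partially used; the unused remainder is internal fragmentation.
    frames = math.ceil(process_size / page_size)
    wasted = frames * page_size - process_size
    return frames, wasted

print(frames_and_internal_fragmentation(72766, 2048))  # (36, 962)
```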

Paging – Basic Method
■ If process size is independent of page size, we expect internal fragmentation to lead to a half-page waste per process on average.
■ This consideration suggests that small page sizes are desirable.
■ However, overhead is involved in each page-table entry, and this overhead is reduced as the size of the pages increases.
● Also, disk I/O is more efficient when the amount of data being transferred is larger.
■ Generally, page sizes have grown over time as processes, data sets, and main memory have become larger.
● Today, pages are typically either 4 KB or 8 KB in size.

Paging – Basic Method
■ In terms of process execution, when a process arrives in the system to be executed, its size, expressed in pages, is examined. Each page of the process needs one frame.
■ Thus, if the process requires n pages, at least n frames must be available in memory.
(a) A New Process (4 pages) Arrives. (b) Four Frames are Allocated to New Process.

Paging – Basic Method
■ Since the operating system is managing physical memory, it must be aware of the allocation details of physical memory:
● Which frames are allocated
● Which frames are available
● How many total frames there are
● etc.
■ This information is generally kept in a single, system-wide data structure called a frame table.
■ The frame table has one entry for each physical page frame, indicating whether the latter is free or allocated and, if it is allocated, to which page of which process (or processes).

Structure of the Page Table
■ Most modern computer systems support a large logical address space (2^32 to 2^64).
■ In such an environment, the page table itself becomes very large.
■ For example, consider a system with a 32-bit logical address space.
● If the page size in such a system is 4 KB (2^12), then the page table may consist of over 1 million entries (2^20 = 2^32/2^12).
● Assuming that each entry consists of 4 bytes, each process may need up to 4 MB of physical address space for the page table alone.
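These numbers can be reproduced directly:

```python
logical_address_bits = 32
page_size = 4 * 1024      # 4 KB = 2**12 bytes per page
entry_size = 4            # 4 bytes per page-table entry, as assumed in the slide

entries = 2 ** logical_address_bits // page_size
table_bytes = entries * entry_size

print(entries)                       # 1048576 entries, i.e. 2**20
print(table_bytes // (1024 * 1024))  # 4 -> 4 MB for the page table alone
```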

Structure of the Page Table – Hierarchical Paging
■ Clearly, we would not want to allocate the page table contiguously in main memory.
■ One simple solution to this problem is to divide the page table into smaller pieces.
■ We can accomplish this division in several ways.
■ One way is to use a two-level paging algorithm, in which the page table itself is also paged. The details of two-level paging can be found in the textbook.

Swapping
■ Process instructions and the data they operate on must be in memory to be executed.
■ However, a process, or a portion of a process, can be swapped temporarily out of memory to a backing store (i.e. the secondary storage area for process swapping) and then brought back into memory for continued execution.
■ Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.

Swapping – Standard Swapping
■ Standard swapping involves moving entire processes between main memory and a backing store.
■ The backing store is commonly fast secondary storage.
● It must be large enough to accommodate whatever parts of processes need to be stored and retrieved, and it must provide direct access to these memory images.
■ When a process is swapped to the backing store, the data structures associated with the process must be written to the backing store.
● For a multithreaded process, all per-thread data structures must be swapped as well.
● The operating system must also maintain metadata for processes that have been swapped out, so they can be restored when they are swapped back in to memory.

Swapping – Swapping with Paging
■ Standard swapping was used in traditional UNIX systems, but it is generally no longer used in contemporary operating systems, because the amount of time required to move entire processes between memory and the backing store is prohibitive.
■ Most systems, including Linux and Windows, now use a variation of swapping in which pages of a process (rather than an entire process) can be swapped.
■ This strategy still allows physical memory to be oversubscribed, but does not incur the cost of swapping entire processes.
■ In fact, the term swapping now generally refers to standard swapping, and paging refers to swapping with paging.

Swapping – Swapping with Paging
■ A page out operation moves a page from memory to the backing store; the reverse process is known as a page in.
■ Swapping with paging is illustrated in the following figure, where a subset of pages for processes A and B are being paged-out and paged-in respectively.

Swapping – Swapping on Mobile Systems
■ Most operating systems for PCs and servers support swapping pages.
■ In contrast, mobile systems typically do not support swapping in any form.
● Mobile devices generally use flash memory rather than more spacious hard disks for nonvolatile storage.
● The resulting space constraint is one reason why mobile operating-system designers avoid swapping.
● Other reasons include:
▪ The limited number of writes that flash memory can tolerate before it becomes unreliable.
▪ The poor throughput between main memory and flash memory in these devices.


Swapping – Swapping on Mobile Systems
■ Instead of using swapping, when free memory falls below a certain threshold, Apple’s iOS asks applications to voluntarily give up allocated memory.
● However, any applications that fail to free up sufficient memory may be terminated by the operating system.
■ Android adopts a strategy similar to that used by iOS.
● It may terminate a process if insufficient free memory is available.
● However, before terminating a process, Android writes its application state to flash memory so that it can be quickly restarted.

Chapter 10 Virtual Memory

Chapter 10: Virtual Memory
■ Background
■ Demand Paging
■ Page Replacement
■ Allocation of Frames

Background
■ We have seen that the introduction of virtual memory (i.e. logical memory) makes memory management easier.
■ In our discussion so far, the size of a process (i.e. the size of virtual memory) has been limited to the size of physical memory.

Background
■ However, the use of virtual memory actually allows the size of a process to be greater than the size of physical memory.
■ This could be achieved using demand paging.

Demand Paging
■ When a program is loaded into the memory, two methods could be used.
■ Method #1: Load the entire program in physical memory at program execution time.
● However, we may not initially need the entire program in memory.
● Suppose a program starts with a list of available options from which the user is to select. Loading the entire program into memory results in loading the executable code for all options, regardless of whether or not an option is ultimately selected by the user.
■ Method #2: Load pages only when they are needed.
● This technique is known as demand paging and is commonly used in virtual memory systems. With demand-paged virtual memory, pages are loaded only when they are demanded during program execution.

Demand Paging
■ Demand paging is very similar to swapping with paging, which was mentioned previously. In this course, to distinguish these two terms, we assume that:
● With swapping with paging, the entire program is initially loaded into physical memory.
● With demand paging, only part of the program is initially loaded into physical memory.
■ With demand paging, while a process is executing, some pages will be in memory, and some will be in secondary storage.
■ Thus, we need some form of hardware support to distinguish these two cases.

Demand Paging
■ We can add a valid–invalid bit to the page table to implement demand paging.
■ When the bit is set to “valid” (e.g. 0), the corresponding page is in memory.
■ When the bit is set to “invalid” (e.g. 1), the page is currently in secondary storage.

Demand Paging
■ When a process tries to access a page that was not brought into memory, it will cause a page fault, resulting in a trap to the OS.
■ Here is the procedure used to access physical memory when a page fault is generated:
1. Extract the address from the current instruction.
2. Use the page table to check whether the corresponding page has been loaded. If the valid–invalid bit is “invalid”, a trap is generated.
3. Find a free frame in physical memory.
4. Move the desired page into the newly allocated frame.
5. Modify the page table to indicate that the page is now in memory.
6. Restart the instruction that was interrupted by the trap. The process can now access the page as if it had always been in memory.


Demand Paging – Effective Access Time
■ Demand paging can significantly affect the performance of a computer system.
■ To see why, let’s compute the effective access time for a demand-paged memory.
● Assume the memory-access time, denoted ma, is 200 nanoseconds.
● As long as we have no page faults, the effective access time is equal to the memory access time.
● If, however, a page fault occurs, we must first move the relevant page from secondary storage to physical memory and then access the desired word.

Demand Paging – Effective Access Time
■ Let p be the probability of a page fault (0 ≤ p ≤ 1).
■ We would expect p to be close to zero; that is, we would expect to have only a few page faults.
■ The effective access time can be calculated using the following equation:
effective access time = (1−p)×ma + p×page fault time

Demand Paging – Effective Access Time
■ Assume that:
● The average page-fault service time is 8 milliseconds.
● The memory-access time is 200 nanoseconds.
■ Then the effective access time in nanoseconds is:
effective access time = (1 − p) × 200 + p × (8 milliseconds)
= (1 − p) × 200 + p × 8,000,000
= 200 + 7,999,800 × p
■ Obviously, the page-fault probability determines the overall memory-access performance.
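The formula can be evaluated for a few fault probabilities (a minimal Python sketch using the slide's 200 ns memory-access time and 8 ms fault-service time):

```python
def effective_access_time(p, ma_ns=200, fault_ns=8_000_000):
    # effective access time = (1 - p) * ma + p * page-fault time
    return (1 - p) * ma_ns + p * fault_ns

print(effective_access_time(0))      # 200: no page faults at all
print(effective_access_time(0.001))  # ~8199.8 ns: one fault per 1000 accesses
```

Even a fault rate of one access in a thousand slows effective memory access by a factor of about 40, which is why keeping p very small matters so much.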

Page Replacement
■ Previously, when a page fault occurred, a free frame in physical memory had to be found to hold the desired page.
■ However, physical memory could be full (i.e. no free frame exists) when a free frame is requested.

Page Replacement
■ In the case that a page fault occurs and there is no free frame, the OS has to use one of the “page replacement” algorithms to:
● Find a victim frame
● Move the victim frame to secondary storage
● Move the page that caused page fault into the freed frame
■ Note that in this case, two page transfers (one for the page-out and one for the page-in) are required.
● This situation effectively doubles the page-fault service time and increases the effective access time accordingly.

Page Replacement
■ There are many different page-replacement algorithms.
■ In general, we should adopt the algorithm with the lowest
page-fault rate.
■ We can evaluate an algorithm by running it with a particular series of memory references and computing the number of page faults.
● This series of memory references is called a reference string.
● We can generate reference strings artificially (by using a random-number generator, for example), or we can trace a given system and record the address of each memory reference.

Page Replacement
■ When we use a reference string to evaluate a page replacement algorithm, we should be aware of these facts:
● First, for a given page size (and the page size is generally fixed by the hardware or system), we only need to consider the page number, rather than the detailed addresses.
● Second, if we have a reference to a page p, then any references to page p that immediately follow will never cause a page fault. Page p will be in memory after the first reference, so the immediately following references will not fault.

Page Replacement
■ For example, if we trace a particular process, we might record the following address sequence:
0100, 0432, 0101, 0612, 0102, 0103, 0104, 0101, 0611, 0102, 0103, 0104, 0101, 0610, 0102, 0103, 0104, 0101, 0609, 0102, 0105
■ At 100 bytes per page, this sequence is reduced to the following reference string:
1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1
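This reduction (divide each address by the page size, then drop immediately repeated pages) can be sketched as:

```python
def to_reference_string(addresses, page_size=100):
    pages = [a // page_size for a in addresses]
    # Drop immediately repeated pages: back-to-back references to the
    # same page cannot fault after the first one.
    return [p for i, p in enumerate(pages) if i == 0 or p != pages[i - 1]]

trace = [100, 432, 101, 612, 102, 103, 104, 101, 611, 102, 103, 104,
         101, 610, 102, 103, 104, 101, 609, 102, 105]
print(to_reference_string(trace))  # [1, 4, 1, 6, 1, 6, 1, 6, 1, 6, 1]
```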

Page Replacement
■ To determine the number of page faults for a particular reference string and page-replacement algorithm, we also need to know the number of page frames available.
■ Obviously, as the number of frames available increases, the number of page faults decreases.
■ For the reference string considered previously, for example:
● If we had three or more frames, we would have only three faults—one fault for the first reference to each page.
● In contrast, with only one frame available, we would have a replacement with every reference, resulting in eleven faults.

Page Replacement
■ In general, we expect a curve such as that in the following figure.
■ As the number of frames increases, the number of page faults drops to some minimal level.

Page Replacement – FIFO
■ We will evaluate a few page-replacement algorithms.
■ To do so, we use the following parameters:
● Reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1
● Number of frames in physical memory: 3
■ FIFO Page Replacement: The simplest page-replacement algorithm is a first-in, first-out (FIFO) algorithm.
● A FIFO replacement algorithm associates with each page the time when that page was brought into memory.
● When a page must be replaced, the oldest page is chosen.

Page Replacement – FIFO
■ For our example reference string, our three frames are initially empty.
● The first three references (7, 0, 1) cause page faults and are brought into these empty frames.
● The next reference (2) replaces page 7, because page 7 was brought in first.
● Since 0 is the next reference and 0 is already in memory, we have no fault for this reference.
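A minimal FIFO simulator over this reference string (a Python sketch; it counts faults only, without recording the frame contents at each step):

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    frames = deque()          # oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the oldest page
            frames.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref, 3))  # 15
```

On this reference string with three frames, FIFO incurs fifteen page faults.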

Page Replacement – OPT
■ OPT Page Replacement: FIFO is a simple algorithm. However, the optimal page replacement algorithm (in short, OPT) should replace the page that will not be used for the longest period of time.
■ Use of this page-replacement algorithm guarantees the lowest possible page-fault rate for a fixed number of frames.
■ For example, on our sample reference string, the optimal page replacement algorithm would yield nine page faults, as shown in the following figure. Note that FIFO results in fifteen faults.

Page Replacement – OPT
■ Unfortunately, the optimal page replacement algorithm is difficult to implement, because it requires future knowledge of the reference string.
● As a result, the optimal algorithm is used mainly for comparison studies.
■ If the optimal algorithm is not feasible, perhaps an approximation of the optimal algorithm is possible.
■ The key distinction between the FIFO and OPT algorithms is that the FIFO algorithm uses the time when a page was brought into memory, whereas the OPT algorithm uses the time when a page is to be used.

Page Replacement – LRU
■ LRU Page Replacement: LRU stands for least recently used.
● LRU replacement associates with each page the time of that page’s last use.
● When a page must be replaced, LRU chooses the page that has not been used for the longest period of time.
● The result of applying LRU replacement to our example reference string is shown below.

Allocation of Frames
■ With demand paging, initially, we do not load the entire program into physical memory.
■ How many frames should be allocated to each program initially?
■ Equal Allocation: The easiest way to split m frames among n processes is to give everyone an equal share, m/n frames.
● For instance, if there are 93 frames and 5 processes, each process will get 18 frames. The 3 leftover frames can be used as a free-frame buffer pool.
■ Proportional Allocation: We allocate available memory to each process according to its size.
● Let the size of the virtual memory for process p_i be s_i, and define S = ∑ s_i.
● If the total number of available frames is m, we allocate a_i frames to process p_i, where a_i is approximately s_i/S × m.
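Both allocation schemes reduce to simple integer arithmetic (a Python sketch; the process sizes in the proportional example, 10 and 127 pages with 62 free frames, are illustrative values):

```python
def proportional_allocation(sizes, total_frames):
    # a_i ~ s_i / S * m, truncated to an integer; any leftover frames
    # can go into a free-frame pool.
    S = sum(sizes)
    return [size * total_frames // S for size in sizes]

# Equal allocation: 93 frames among 5 processes
m, n = 93, 5
print(m // n, m % n)  # 18 3 -> 18 frames each, 3 left for a free-frame pool

# Proportional allocation: processes of 10 and 127 pages, 62 free frames
print(proportional_allocation([10, 127], 62))  # [4, 57]
```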
