CS307 Fall 2019-2020
Memory Management API
Phase 2

1 Introduction

The part of the operating system that manages (part of) the memory hierarchy is called the memory manager. Its job is to manage memory efficiently: keep track of which parts of memory are in use, allocate memory to processes when they need it, and deallocate it when they are done.

In general terms, there are two ways to keep track of memory usage: bitmaps and free lists. With a bitmap, memory is divided into allocation units as small as a few words and as large as several kilobytes. Corresponding to each allocation unit is a bit in the bitmap, which is 0 if the unit is free and 1 if it is occupied (or vice versa). Another way of keeping track of memory is to maintain a linked list of allocated and free memory segments, where a segment either contains a process or is an empty hole between two processes.

In this homework you are expected to implement a memory manager which maintains a linked list for organizing the memory allocation.

2 Program Flow

You are expected to use POSIX threads which will work on the shared data structures. You should use POSIX threads, semaphores, one or more mutexes, linked list and queue data structures, and a main memory array.

2.1 Data Structures

In the general case, memory is divided into fixed-size chunks. In this homework we assume one chunk is 1 byte.

2.1.1 Memory Array

The memory array used in this homework will be a char array of size M. Since each chunk is 1 byte, the total memory will be M bytes. If a memory address is free, there should be an X at the corresponding memory index. If it is allocated by a thread, the thread ID should be written at that index.

2.1.2 Linked List

You need to maintain a linked list of free and allocated memory indexes. The purpose of the linked list is to track the memory state. The linked list should be updated with each memory allocation or deallocation.

Figure 1: A linked list node (fields: ID, SIZE, INDEX)

A linked list node should contain ID, SIZE and INDEX variables. A hole (free memory) or an allocated thread's information can be represented by a linked list node.

For a thread: ID denotes the unique integer given to that thread, SIZE denotes the random integer created during the execution, and INDEX denotes the starting memory index of the thread.

For a hole: ID equals -1, SIZE denotes the size of the free memory, and INDEX denotes the starting memory index of the hole.

The linked list should be formed using the above node. You can use the C++ Standard Library linked list or implement your own linked list class. You are also allowed to use any type of linked list on which the first fit algorithm can work, as long as it works properly during memory allocation.

2.1.3 Request Queue

You need to maintain a queue for thread memory requests. The purpose of the queue is to order the thread requests so that the memory server can process them one by one.

Figure 2: A queue node (fields: ID, SIZE)

A queue node should contain ID and SIZE variables. ID denotes the unique integer given to that thread, and SIZE denotes the random integer created during the execution. You can use the C++ Standard Library queue or implement your own queue class.

2.2 Main Thread

You need to initialize any related data structures before starting the memory server thread. The memory server thread will run in a function called server_function. After the initialization, you are expected to start all of the ordinary threads, which will run in a function called thread_function.
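The exact types are up to you; the sketch below is only a minimal illustration of how the shared structures described in Section 2.1 might be declared, assuming C++ Standard Library containers and POSIX synchronization primitives. All names (Node, Request, MEM_SIZE, NUM_THREADS, mem_lock, thread_sems, init_structures) are illustrative placeholders, not part of any required interface.

```cpp
// Sketch only: every name below is an illustrative placeholder.
#include <pthread.h>
#include <semaphore.h>
#include <list>
#include <queue>

const int MEM_SIZE    = 20;    // M: total memory in bytes (1 chunk = 1 byte)
const int NUM_THREADS = 10;    // number of ordinary threads

// Linked-list node: a hole (id == -1) or an allocated block (id == thread ID).
struct Node {
    int id;      // thread ID, or -1 for a hole
    int size;    // size of the block in bytes
    int index;   // starting index in the memory array
};

// Queue node: one pending allocation request.
struct Request {
    int id;      // requesting thread's ID
    int size;    // requested size in bytes
};

char                memory[MEM_SIZE];   // the memory array ('X' = free)
std::list<Node>     mem_list;           // linked list tracking holes/allocations
std::queue<Request> req_queue;          // requests waiting for the memory server

pthread_mutex_t mem_lock = PTHREAD_MUTEX_INITIALIZER;  // protects the shared structures
sem_t           thread_sems[NUM_THREADS];              // one semaphore per ordinary thread

void init_structures() {
    for (int i = 0; i < MEM_SIZE; ++i) memory[i] = 'X';  // all memory starts free
    mem_list.push_back(Node{-1, MEM_SIZE, 0});           // a single hole covering everything
    for (int i = 0; i < NUM_THREADS; ++i) sem_init(&thread_sems[i], 0, 0);
}
```

With a layout like this, a thread's ID can be written into the char array as the character '0' + id, which is consistent with the single-digit IDs shown in the sample runs.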
After the threads are started, the main thread should sleep for 10 seconds and then call release_function. In other words, after the creation of the threads, your program should terminate at the end of 10 seconds.

2.2.1 release_function

After 10 seconds, the main thread should call the release function to terminate the program. Inside release_function, the linked lists should be deallocated and the memory should be returned to its initial state. After releasing the allocated memory, your program should terminate.

2.3 Ordinary Threads

After the creation of these threads, they will run in thread_function. A thread will follow the cycle denoted in Figure 3: it generates a random size between 1 and 1/3 of the total memory size and calls the my_malloc function. If there is enough space in the memory for the thread, the use_mem and free_mem functions will be called.

Figure 3: Thread pipeline.
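As a rough illustration of the Figure 3 cycle, thread_function could look like the sketch below. It reuses the placeholder globals from the earlier sketch; the parameter lists of my_malloc, use_mem and free_mem are assumptions, and the subsections that follow describe what each call must do.

```cpp
// Rough sketch of the Figure 3 cycle; reuses the placeholder globals above.
#include <cstdlib>   // rand()

int  my_malloc(int id, int size);              // detailed in Section 2.3.1
void use_mem();                                // detailed in Section 2.3.2
void free_mem(int id, int index, int size);    // detailed in Section 2.3.3

void *thread_function(void *arg) {
    int id = (int)(long)arg;                       // unique ID handed over by main
    while (true) {                                 // cycle repeats until main terminates
        int size  = rand() % (MEM_SIZE / 3) + 1;   // request between 1 and M/3 bytes
        int index = my_malloc(id, size);           // blocks until the server answers
        if (index != -1) {                         // only use memory if it was granted
            use_mem();                             // "work" for 1-5 seconds
            free_mem(id, index, size);             // return the memory, merge holes
        }
    }
    return NULL;
}
```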
2.3.1 my_malloc

The thread should add its ID and size to the queue and block on its semaphore. After the memory server thread increments the semaphore of the blocked thread, the thread should continue and check the linked list to see whether it has been allocated or not. The thread will either use the memory or not, based on the return value of the my_malloc function.

2.3.2 use_mem

Since threads need to complete their jobs, they need to spend some time in the memory if they are allocated. To simulate this behaviour, threads will sleep inside this function. A thread needs to wait between 1 and 5 seconds.

2.3.3 free_mem

After a thread is done with using the memory, the memory that the thread used would normally be freed by the memory manager. For the sake of simplicity, in this homework threads are able to do it themselves. When a thread is done, it will update the linked list and the memory array.
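One possible shape for these three functions is sketched below, again using the placeholder names from the earlier sketches. The hole merging at the end of free_mem anticipates the Figure 4 discussion that follows; how you split this work between free_mem and helper functions is up to you.

```cpp
// Possible bodies for the three calls above (placeholder names as before).
#include <unistd.h>    // sleep()
#include <iterator>    // std::next

int my_malloc(int id, int size) {
    pthread_mutex_lock(&mem_lock);
    req_queue.push(Request{id, size});          // publish the request for the server
    pthread_mutex_unlock(&mem_lock);

    sem_wait(&thread_sems[id]);                 // block until the server has decided

    int index = -1;
    pthread_mutex_lock(&mem_lock);
    for (const Node &n : mem_list)              // did the server allocate a block for us?
        if (n.id == id) { index = n.index; break; }
    pthread_mutex_unlock(&mem_lock);
    return index;                               // -1 means the request was denied
}

void use_mem() {
    sleep(rand() % 5 + 1);                      // simulate work for 1-5 seconds
}

void free_mem(int id, int index, int size) {
    pthread_mutex_lock(&mem_lock);
    for (Node &n : mem_list)
        if (n.id == id) { n.id = -1; break; }   // the allocated block becomes a hole
    for (int i = index; i < index + size; ++i)
        memory[i] = 'X';                        // mark the chunks free again
    // Merge consecutive holes (the situations illustrated in Figure 4 below).
    for (std::list<Node>::iterator it = mem_list.begin(); it != mem_list.end(); ) {
        std::list<Node>::iterator next = std::next(it);
        if (next != mem_list.end() && it->id == -1 && next->id == -1) {
            it->size += next->size;             // absorb the right-hand hole
            mem_list.erase(next);
        } else {
            ++it;
        }
    }
    pthread_mutex_unlock(&mem_lock);
}
```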
Figure 4: Possible states of the linked list after a thread terminates.

Assuming a thread is allocated between two other threads, its termination can result in four possible cases of memory alignment, as shown in Figure 4. If you do not update the linked list holes properly, it will contain consecutive small holes. Even though there is enough space in the memory for a new thread, the memory manager cannot allocate memory because of these small holes. Thus, after each thread termination you need to check the linked list and merge the holes to avoid such a case.

2.4 Memory Server Thread

After the creation of the memory server thread, it will run in server_function. The memory server thread will continuously check the request queue to see if a thread wants to allocate memory. If there is an element in the queue, the memory server thread will process the element and pop it from the queue afterwards. This processing is done by checking the linked list. If there is a hole which is big enough for the requested size, the memory server thread will grant access by updating the linked list, adding the ID, size and starting index. It should also update the memory array using the ID of the thread. Afterwards, it will increment the semaphore of the thread to allow its execution and pop the element from the queue. If there is not enough space for the requested size, the memory server thread should increment the semaphore of the thread and pop the element from the queue. In order to find a suitable hole, you are expected to use the first fit algorithm on the linked list.

If memory is granted to the thread, the memory server thread will call the dump_memory function to print out the current state of the linked list and the current state of the memory array.

2.4.1 dump_memory

This function will be used for printing the current state of the memory and the state of the linked list. First you need to print the linked list, and then you need to print the memory array. In order to print the linked list and memory array, you are expected to use the following convention.

For representing a single node:
[ID][SIZE][INDEX]

An example linked list of nodes:
[ID][SIZE][INDEX][ID][SIZE][INDEX][ID][SIZE][INDEX]

An example memory array with memory size 5 and 3 threads:
[ID2] [ID2] [ID1] [X] [ID0]

where IDi denotes the thread ID and X is used for free memory.
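Putting the pieces together, the first-fit processing inside server_function might look like the sketch below (placeholder names as before; thread IDs are assumed to be single digits so they can be stored directly in the char array). Note that dump_memory is only called when memory is actually granted, as required above.

```cpp
// Possible first-fit handling inside server_function (placeholder names as before).
void dump_memory();   // prints the linked list and the memory array (Section 2.4.1)

void *server_function(void *) {
    while (true) {                                // continuously poll the request queue
        pthread_mutex_lock(&mem_lock);
        if (!req_queue.empty()) {
            Request r = req_queue.front();
            // First fit: walk the list and take the first hole that is large enough.
            for (std::list<Node>::iterator it = mem_list.begin(); it != mem_list.end(); ++it) {
                if (it->id == -1 && it->size >= r.size) {
                    int start = it->index;
                    mem_list.insert(it, Node{r.id, r.size, start});  // allocated block goes before the hole
                    it->index += r.size;                             // shrink the hole
                    it->size  -= r.size;
                    if (it->size == 0) mem_list.erase(it);           // hole completely consumed
                    for (int i = start; i < start + r.size; ++i)
                        memory[i] = (char)('0' + r.id);              // write the owner's ID
                    dump_memory();                                   // granted, so print list + memory
                    break;
                }
            }
            req_queue.pop();                       // request handled, granted or not
            sem_post(&thread_sems[r.id]);          // wake the requesting thread
        }
        pthread_mutex_unlock(&mem_lock);
    }
    return NULL;
}
```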
3 Sample Runs

Sample Run 1

The example below is a test run for 3 threads (t0, t1, t2) working on a 10-byte memory array. In this example, t0 allocated memory first, t1 allocated memory second and t2 allocated memory third. After that, t1 and t2 finished their jobs and deallocated their memory. Since these threads are finished, they create new requests in the queue, and as seen from the output, t2 sent its request before t1 and allocated memory before t1.

After t1 allocated its memory, t0 finished its job and sent another request to the queue with a size of 3 bytes. Since the first hole is not big enough for t0's requested size, the second hole is checked and memory index 5 is granted to t0 as the starting index.

Then t0 and t2 finished their jobs and added another request to the queue. Since t2 was faster in this step, t2 pushed its request to the queue before t0. Thus, it is allocated before t0.

List:
[0][1][0][-1][9][1]
Memory Dump:
0XXXXXXXXX
*********************************
List:
[0][1][0][1][2][1][-1][7][3]
Memory Dump:
011XXXXXXX
*********************************
List:
[0][1][0][1][2][1][2][2][3][-1][5][5]
Memory Dump:
01122XXXXX
*********************************
List:
[0][1][0][2][2][1][-1][7][3]
Memory Dump:
022XXXXXXX
*********************************
List:
[0][1][0][2][2][1][1][2][3][-1][5][5]
Memory Dump:
02211XXXXX
*********************************
List:
[-1][1][0][2][2][1][1][2][3][0][3][5][-1][2][8]
Memory Dump:
X2211000XX
*********************************
List:
[2][3][0][1][2][3][-1][5][5]
Memory Dump:
22211XXXXX
*********************************
List:
[2][3][0][1][2][3][0][2][5][-1][3][7]
Memory Dump:
2221100XXX
*********************************
…(Print until main terminates)

Sample Run 2

The example below is a test run for 10 threads working on a 20-byte memory array.

List:
[0][3][0][-1][17][3]
Memory Dump:
000XXXXXXXXXXXXXXXXX
*********************************
List:
[0][3][0][3][5][3][-1][12][8]
Memory Dump:
00033333XXXXXXXXXXXX
*********************************
List:
[0][3][0][3][5][3][1][1][8][-1][11][9]
Memory Dump:
000333331XXXXXXXXXXX
*********************************
List:
[0][3][0][3][5][3][1][1][8][4][4][9][-1][7][13]
Memory Dump:
0003333314444XXXXXXX
*********************************
List:
[0][3][0][3][5][3][1][1][8][4][4][9][5][2][13][-1][5][15]
Memory Dump:
000333331444455XXXXX
*********************************
List:
[0][3][0][3][5][3][1][1][8][4][4][9][5][2][13][2][1][15][-1][4][16]
Memory Dump:
0003333314444552XXXX
*********************************
List:
[0][3][0][3][5][3][1][1][8][4][4][9][5][2][13][2][1][15][8][1][16][-1][3][17]
Memory Dump:
00033333144445528XXX
*********************************
List:
[0][3][0][3][5][3][1][1][8][4][4][9][5][2][13][2][1][15][8][1][16][6][3][17]
Memory Dump:
00033333144445528666
*********************************
List:
[-1][3][0][3][5][3][-1][1][8][4][4][9][-1][2][13][2][1][15][-1][1][16][6][3][17]
Memory Dump:
XXX33333X4444XX2X666
*********************************
List:
[7][3][0][3][5][3][-1][1][8][4][4][9][-1][2][13][2][1][15][-1][1][16][6][3][17]
Memory Dump:
77733333X4444XX2X666
*********************************
…(Print until main terminates)

4 Testing

In order to verify that the threads are working as intended, you need to compare the linked list and the memory array. The two data structures have to be consistent with each other. Your program should work with different numbers of threads and different memory sizes.

You also need to use semaphores and mutexes properly to get the full grade. You should not use unnecessary semaphores/mutexes, or use mutexes to lock entire actions. You should definitely avoid using other means of aligning your threads. Producing a correct-looking output without using semaphores and mutexes will be considered an invalid solution. Thus, you need to be careful with mutual exclusion and race conditions.
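As an illustration of the consistency requirement above, a small helper like the following (not required by the handout; it reuses the placeholder names from the earlier sketches and assumes single-digit thread IDs) could be called from dump_memory or a test build to confirm that the two structures agree:

```cpp
// Optional testing helper (not part of the handout): checks that the linked list
// and the memory array describe the same state. Call it while holding mem_lock.
bool consistent() {
    for (const Node &n : mem_list) {
        char expected = (n.id == -1) ? 'X' : (char)('0' + n.id);  // single-digit IDs assumed
        for (int i = n.index; i < n.index + n.size; ++i)
            if (memory[i] != expected) return false;              // structures disagree
    }
    return true;
}
```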