Introduction
In this final project you will implement a cache simulator. Your simulator will be configurable and will be able to handle caches with varying capacities, block sizes, levels of associativity, replacement policies, and write policies. The simulator will operate on trace files that indicate memory access properties. All input files to your simulator will follow a specific structure so that you can parse the contents and use the information to set the properties of your simulator.
After execution is finished, your simulator will generate an output file containing information on the number of cache misses, hits, and miss evictions (i.e. the number of block replacements). In addition, the file will also record the total number of (simulated) clock cycles used during the simulation. Lastly, the file will indicate how many read and write operations were requested by the CPU.
It is important to note that your simulator is required to make several significant assumptions for the sake of simplicity.
•You do not have to simulate the actual data contents. We simply pretend that we copied data from main memory and keep track of the hypothetical time that would have elapsed.
•When a block is modified in a cache or in main memory, we always assume that the entire block is read or written. This means that you don’t have to deal with the situation where only part of a block needs to be updated in main memory.
•Assume that all memory accesses occur only within a single block at a time. In other words, we don’t worry about the effects of a memory access overlapping two blocks, we just pretend the second block was not affected.
Additional Resources
Sample trace files
Students are required to simulate the instructor-provided trace files (although you are welcome to simulate your own files in addition).
Trace files are available on Flip in the following directory:
/nfs/farm/classes/eecs/fall2019/cs472/public/tracefiles
You should test your code with all three tracefiles in that directory (gcc, netpath, and openssl).
Starter Code
In order to help you focus on the implementation of the cache simulator, starter code will be provided (written in C++) to parse the input files and handle some of the file I/O involved in this assignment. You are not required to use the provided code (it’s up to you).
Further details will be discussed in class.

Basic-Mode Usage (472 & 572 students)
L1 Cache Simulator
All students are expected to implement the L1 cache simulator. Students who are enrolled in 472 can ignore the sections that are written in brown text. Graduate students will be simulating a multiple-level cache (an L1 cache, an L2 cache, and even an L3 cache).
Input Information
Your cache simulator will accept two arguments on the command line: the file path of a configuration file and the file path of a trace file containing a sequence of memory operations. The cache simulator will generate an output file containing the simulation results. The output filename will have “.out” appended to the input filename. Additional details are provided below.
# example invocation of cache simulator
cache_sim ./resources/testconfig ./resources/simpletracefile
Output file written to ./resources/simpletracefile.out
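The output path is derived mechanically from the trace path. A minimal sketch (the helper name is illustrative, not part of the starter code):

```cpp
#include <string>

// Derive the simulator's output path: the trace path with ".out" appended.
// (Hypothetical helper name; the starter code may do this differently.)
std::string outputPath(const std::string& tracePath) {
    return tracePath + ".out";
}
```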
The first command line argument will be the path to the configuration file. This file contains information about the cache design. The file will contain only numeric values, each of which is on a separate line.
Example contents of a configuration file:
1 <-- this line will always contain a “1” for 472 students
230 <-- number of cycles required to write or read a block from main memory
8 <-- number of sets in cache (will be a non-negative power of 2)
16 <-- block size in bytes (will be a non-negative power of 2)
3 <-- level of associativity (number of blocks per set)
1 <-- replacement policy (will be 0 for random replacement, 1 for LRU)
1 <-- write policy (will be 0 for write-through, 1 for write-back)
13 <-- number of cycles required to read or write a block from the cache (consider this to be the access time per block)

Here is another example configuration file specifying a direct-mapped cache with 64 entries, a 32 byte block size, associativity level of 1 (direct-mapped), least recently used (LRU) replacement policy, write-through operation, 26 cycles to read or write data to the cache, and 1402 cycles to read or write data to the main memory. CS/ECE 472 projects can safely ignore the first line.

1
1402
64
32
1
1
0
26

The second command line argument indicates the path to a trace file. This trace file will follow the format used by Valgrind (a memory debugging tool). The file consists of comments and memory access information. Any line beginning with the ‘=’ character should be treated as a comment and ignored.

==This is a comment and can safely be ignored.
==An example snippet of a Valgrind trace file
I04010173,3
I04010176,6
 S 04222cac,1
I0401017c,7
 L 04222caf,8
I04010186,6
I040101fd,7
 L 1ffefffd78,8
 M 04222ca8,4
I04010204,4

Memory access entries will use the following format in the trace file:

[space]operation address,size

•Lines beginning with an ‘I’ character represent an instruction load. For this assignment, you can ignore instruction read requests and assume that they are handled by a separate instruction cache.
•Lines with a space followed by an ‘S’ indicate a data store operation. This means that data needs to be written from the CPU into the cache or main memory (possibly both) depending on the write policy.
•Lines with a space followed by an ‘L’ indicate a data load operation. Data is loaded from the cache into the CPU.
•Lines with a space followed by an ‘M’ indicate a data modify operation (which implies a special case of a data load, followed immediately by a data store).

The address is a 64 bit hexadecimal number representing the address of the first byte that is being requested. Note that leading 0’s are not necessarily shown in the file. The size of the memory operation is indicated in bytes (as a decimal number). In this project you will use a simplification and ignore the size of the request (essentially treating each operation as if it only affects 1 byte).

If you are curious about the trace file, you may generate your own trace file by running Valgrind on arbitrary executable files:

valgrind --log-fd=1 --log-file=./tracefile.txt --tool=lackey --trace-mem=yes name_of_executable_to_trace

Cache Simulator Output
Your simulator will write output to a text file. The output filename will be derived from the trace filename with “.out” appended to the original filename. E.g. if your program was called using the invocation “cache_sim ./dm_config ./memtrace” then the output file would be written to “./memtrace.out”
(S)tore, (L)oad, and (M)odify operations will each be printed to the output file (in the exact order that they were read from the Valgrind trace file). Lines beginning with “I” should not appear in the output since they do not affect the operation of your simulator.
Each line will have a copy of the original trace file instruction. There will then be a space, followed by the number of cycles used to complete the operation. Lastly, each line will have one or more statements indicating the impact on the cache. This could be one or more of the following: miss, hit, or eviction.
Note that an eviction is what happens when a cache block needs to be removed in order to make space in the cache for another block. It is simply a way of indicating that a block was replaced. In our simulation, an eviction means that the next instruction cannot be executed until after the existing cache block is written to main memory. An eviction is an expensive cache operation.
It is possible that a single memory access has multiple impacts on the cache. For example, if a particular cache index is already full, a (M)odify operation might miss the cache, evict an existing block, and then hit the cache when the result is written to the cache.
The general format of each output line (for 472 students) is as follows (and will contain one or more cache impacts):

operation address,size L1 <…>
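The trace format described above can be parsed line by line. Here is a hedged sketch in C++ (the struct and function names are illustrative, not starter-code APIs); it skips comment and instruction lines and extracts the operation, 64-bit hexadecimal address, and decimal size:

```cpp
#include <cstdint>
#include <sstream>
#include <string>

struct TraceEntry {
    char op;            // 'S', 'L', or 'M'
    uint64_t address;   // 64-bit byte address
    int size;           // request size in bytes (ignored by this simulator)
};

// Returns false for comment lines ('=') and instruction loads ('I').
bool parseTraceLine(const std::string& line, TraceEntry& out) {
    if (line.empty() || line[0] == '=' || line[0] == 'I') return false;
    std::istringstream iss(line);   // leading space is skipped automatically
    char comma;
    iss >> out.op >> std::hex >> out.address >> comma >> std::dec >> out.size;
    return bool(iss) && comma == ',';
}
```

Note that `std::istringstream` skips the leading space before the operation character automatically, and hexadecimal extraction stops at the comma.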
The final lines of the output file are special.  They will indicate the total number of hits, misses, and evictions. The last line will indicate the total number of simulated cycles that were necessary to simulate the trace file, as well as the total number of read and write operations that were directly requested by the CPU.
These lines should exactly match the following format (with values given in decimal):
L1 Cache: Hits:<# of hits> Misses:<# of misses> Evictions:<# of evictions>
Cycles:<# of cycles> Reads:<# of CPU read requests> Writes:<# of CPU write requests>
In order to illustrate the output file format let’s look at an example. Suppose we are simulating a direct-mapped cache operating in write-through mode. Note that the replacement policy does not have any effect on the operation of a direct-mapped cache. Assume that the configuration file told us that it takes 13 cycles to access the cache and 230 cycles to access main memory. Keep in mind that a hit during a load operation only accesses the cache while a miss must access both the cache and the main memory. For this scenario, assume that memory access is aligned to a single block and does not straddle multiple cache blocks.
In this example the cache is operating in write-through mode so a standalone (S)tore operation takes 243 cycles, even if it is a hit, because we always write the block into both the cache and into main memory. If this particular cache was operating in write-back mode, a (S)tore operation would take only 13 cycles if it was a hit (since the block would not be written into main memory until it was evicted).
The exact details of whether an access is a hit or a miss is entirely dependent on the specific cache design (block size, level of associativity, number of sets, etc). Your program will implement the code to see if each access is a hit, miss, eviction, or some combination.
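As one possible sketch of that hit/miss/eviction check (not the required design; all names here are illustrative): split the block address into tag and index using the set count and block size from the configuration file, search the chosen set, and on a miss fill an invalid block or replace the true-LRU victim:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical single-level cache model; not the starter-code API.
struct Block { bool valid = false; uint64_t tag = 0; uint64_t lastUsed = 0; };

class Cache {
public:
    Cache(int numSets, int blockSize, int assoc)
        : sets_(numSets, std::vector<Block>(assoc)),
          numSets_(numSets), blockSize_(blockSize) {}

    // Returns true on a hit; on a miss, *evicted reports whether a valid block was replaced.
    bool access(uint64_t address, bool* evicted) {
        uint64_t blockAddr = address / blockSize_;   // drop the byte offset
        uint64_t index = blockAddr % numSets_;
        uint64_t tag = blockAddr / numSets_;
        auto& set = sets_[index];
        ++clock_;
        for (auto& b : set)                          // search the set for the tag
            if (b.valid && b.tag == tag) { b.lastUsed = clock_; *evicted = false; return true; }
        // Miss: prefer an invalid block, otherwise evict the true-LRU block.
        Block* victim = &set[0];
        for (auto& b : set) {
            if (!b.valid) { victim = &b; break; }
            if (b.lastUsed < victim->lastUsed) victim = &b;
        }
        *evicted = victim->valid;
        victim->valid = true; victim->tag = tag; victim->lastUsed = clock_;
        return false;
    }
private:
    std::vector<std::vector<Block>> sets_;
    uint64_t numSets_, blockSize_, clock_ = 0;
};
```

With the sample configuration (8 sets, 16-byte blocks), addresses 04222cac and 04222caf land in the same block at index 2, matching the comments in the example output below.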
Since the (M)odify operation involves a Load operation (immediately followed by a Store operation), it is recorded twice in the output file. The first instance represents the load operation and the next line will represent the store operation. See the example below…
==For this example we assume that addresses 04222cac, 04222caf, and 04222ca8 are all in the same block at index 2
==Assume that addresses 047ef249 and 047ef24d share a block that also falls at index 2.
==Since the cache is direct-mapped, only one of those blocks can be in the cache at a time.
==Fortunately, address 1ffefffd78 happens to fall in a different block index (in our hypothetical example).
==The output file for our hypothetical example:
S 04222cac,1 243 L1 miss
L 04222caf,8 13 L1 hit
M 1ffefffd78,8 243 L1 miss <-- notice that this (M)odify has a miss for the load and a hit for the store
M 1ffefffd78,8 243 L1 hit <-- this line represents the Store of the modify operation
M 04222ca8,4 13 L1 hit <-- notice that this (M)odify has two hits (one for the load, one for the store)
M 04222ca8,4 243 L1 hit <-- this line represents the Store of the modify operation
S 047ef249,4 243 L1 miss eviction <-- 243 cycles for miss, no eviction penalty for write-through cache
L 04222caf,8 243 L1 miss eviction
M 047ef24d,2 243 L1 miss eviction <-- notice that this (M)odify initially misses, evicts the block, and then hits
M 047ef24d,2 243 L1 hit <-- this line represents the Store of the modify operation
L 1ffefffd78,8 13 L1 hit
M 047ef249,4 13 L1 hit
M 047ef249,4 243 L1 hit
L1 Cache: Hits:8 Misses:5 Evictions:3
Cycles:2239 Reads:7 Writes:6 <-- total sum of simulated cycles (from above), as well as the number of reads and writes

Implementation Details
You may use either the C or the C++ programming language. Graduate students will have an additional component to this project.
In our simplified simulator, increasing the level of associativity has no impact on the cache access time. Furthermore, you may assume that it does not take any additional clock cycles to access non-data bits such as Valid bits, Tags, Dirty Bits, LRU counters, etc.
Your code must support the LRU replacement scheme and the random replacement scheme. For the LRU behavior, a block is considered to be the Least Recently Used if every other block in the cache has been read or written after the block in question. In other words, your simulator must implement a true LRU scheme, not an approximation.
You must implement the write-through cache mode. You will receive extra credit if your code correctly supports the write-back cache mode (specified in the configuration file).

Acceptable Compiler Versions
The flip server provides GCC 4.8.5 for compiling your work. Unfortunately, this version is from 2015 and may not support newer C and C++ features (especially related to parallel programming). If you call the program using “gcc” (or “g++”) this is the version you will be using by default.
If you wish to use a newer compiler version, I have compiled a copy of GCC 9.2 (released August 12, 2019). You may write your code using this compiler and you’re allowed to use any of the compiler features that are available.
The compiler binaries are available in the path:

/nfs/farm/classes/eecs/fall2019/cs472/public/gcc/bin

For example, in order to compile a multithreaded C++ program with GCC 9.2, you could use the following command (on a single terminal line):

/nfs/farm/classes/eecs/fall2019/cs472/public/gcc/bin/g++ -ocache_sim -lpthread -Wl,-rpath,/nfs/farm/classes/eecs/fall2019/cs472/public/gcc/lib64 my_source_code.cpp

If you use the Makefile that is provided in the starter code, it is already configured to use GCC 9.2.

L2/L3 Cache Implementation (required for CS/ECE 572 students)
Implement your cache simulator so that it can support up to 3 layers of cache. You can imagine that these caches are connected in a sequence. The CPU will first request information from the L1 cache. If the data is not available, the request will be forwarded to the L2 cache. If the L2 cache cannot fulfill the request, it will be passed to the L3 cache. If the L3 cache cannot fulfill the request, it will be fulfilled by main memory.
There are specific implementation requirements (see the Code Implementation section).
It is important that the properties of each cache are read from the provided configuration file. As an example, it is possible to have a direct-mapped L1 cache that operates in concert with an associative L2 cache. All of these details will be read from the configuration file. As with any programming project, you should be sure to test your code across a wide variety of scenarios to minimize the probability of an undiscovered bug.

Cache Operation
When multiple layers of cache are implemented, the L1 cache will no longer directly access main memory. Instead, the L1 cache will interact with the L2 cache. During the design process, you need to consider the various interactions that can occur.
For example, if you are working with three write-through caches, then a single write request from the CPU will update the contents of L1, L2, L3, and main memory!

++++++++++++        ++++++++++++        ++++++++++++        ++++++++++++        +++++++++++++++
|          |        |          |        |          |        |          |        |             |
|   CPU    | <----> | L1 Cache | <----> | L2 Cache | <----> | L3 Cache | <----> | Main Memory |
|          |        |          |        |          |        |          |        |             |
++++++++++++        ++++++++++++        ++++++++++++        ++++++++++++        +++++++++++++++
Note that your program should still handle a configuration file that specifies an L1 cache (without any L2 or L3 present). In other words, you can think of your project as a more advanced version of the 472 implementation.
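The forwarding behavior described above can be sketched as a simple chain that accumulates access times. This is illustrative only (the `lookup` stand-in replaces a real tag/index search); the access times in the usage below are taken from the sample configuration (13/40/110 cycles, 230 for main memory):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative sketch only: each level reports its access time and whether it hit.
// A real simulator would consult actual cache state, as in the 472 portion.
struct Level {
    std::string name;
    int accessCycles;
    bool (*lookup)(uint64_t);   // stand-in for a real tag/index lookup
};

// Walk the hierarchy until some level hits; fall through to main memory otherwise.
int simulateLoad(uint64_t addr, const std::vector<Level>& levels, int memoryCycles) {
    int cycles = 0;
    for (const auto& lvl : levels) {
        cycles += lvl.accessCycles;           // we always pay to check a level
        if (lvl.lookup(addr)) return cycles;  // hit: stop forwarding the request
    }
    return cycles + memoryCycles;             // every level missed: go to main memory
}
```

A load that misses every level then costs 13 + 40 + 110 + 230 = 393 cycles, and an L1 miss that hits in L2 costs 13 + 40 = 53 cycles, matching the example output lines shown later.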

572 Extra Credit
By default, your code is only expected to function with write-through caches. If you want to earn extra credit, also implement support for write-back caches.
In this situation, you will need to track dirty cache blocks and properly handle the consequences of evictions. You will earn extra credit if your write-back design works with simple L1 implementations. You will receive additional extra credit if your code correctly handles multiple layers of write-back caches (e.g. the L1 and L2 caches are write-back, but L3 is write-through).
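A hedged sketch of the dirty-eviction rule for stacked write-back caches (a one-slot toy model; names are hypothetical): when a dirty block leaves the upper level it is installed in the lower level and marked dirty there, and main memory is written only if the lower level must itself displace a dirty block:

```cpp
#include <cstdint>

// Hypothetical minimal model: one block slot per level, write-back everywhere.
struct Slot { bool valid = false; bool dirty = false; uint64_t tag = 0; };

// Install a dirty block evicted from the upper level into the lower level.
// If the lower slot already holds a dirty block, its old contents would be
// written to memory; return true so the caller can charge the memory cycles.
bool installDirtyVictim(Slot& lower, uint64_t tag) {
    bool lowerWritesMemory = lower.valid && lower.dirty;
    lower.valid = true;
    lower.dirty = true;   // the incoming block is dirty, so it stays dirty here
    lower.tag = tag;
    return lowerWritesMemory;
}
```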
Simulator Operation
Your cache simulator will use a similar implementation as the single-level version but will parse the configuration file to determine if multiple caches are present.
Input Information
The input configuration file is as shown below. Note that it is backwards compatible with the 472 format.
The exact length of the input configuration file will depend on the number of caches that are specified.
3 <-- this line indicates the number of caches in the simulation (this can be set to a maximum of 3)
230 <-- number of cycles required to write or read a block from main memory
8 <-- number of sets in L1 cache (will be a non-negative power of 2)
16 <-- L1 block size in bytes (will be a non-negative power of 2)
4 <-- L1 level of associativity (number of blocks per set)
1 <-- L1 replacement policy (will be 0 for random replacement, 1 for LRU)
0 <-- L1 write policy (will be 0 for write-through, 1 for write-back)
13 <-- number of cycles required to read or write a block from the L1 cache (consider this to be the access time)
8 <-- number of sets in L2 cache (will be a non-negative power of 2)
32 <-- L2 block size in bytes (will be a non-negative power of 2)
4 <-- L2 level of associativity (number of blocks per set)
1 <-- L2 replacement policy (will be 0 for random replacement, 1 for LRU)
0 <-- L2 write policy (will be 0 for write-through, 1 for write-back)
40 <-- number of cycles required to read or write a block from the L2 cache (consider this to be the access time)
64 <-- number of sets in L3 cache (will be a non-negative power of 2)
32 <-- L3 block size in bytes (will be a non-negative power of 2)
8 <-- L3 level of associativity (number of blocks per set)
0 <-- L3 replacement policy (will be 0 for random replacement, 1 for LRU)
1 <-- L3 write policy (will be 0 for write-through, 1 for write-back)
110 <-- number of cycles required to read or write a block from the L3 cache (consider this to be the access time)

Cache Simulator Output
The output file will contain nearly the same information as in the single-level version (see the general description provided in the black text). However, the format is expanded to contain information about each level of the cache.
The general format of each output line is as follows (and can list up to 2 cache impacts for each level of the cache):

operation address,size L1 <…> L2 <…> L3 <…>
The exact length of each line will vary, depending how many caches are in the simulation (as well as their interaction). For example, imagine a system that utilizes an L1 and L2 cache.
If the L1 cache misses and the L2 cache hits, we might see something such as the following:
L 04222caf,8 53 L1 miss L2 hit
In this scenario, if the L1 cache hits, then the L2 cache will not be accessed and does not appear in the output.
L 04222caf,8 13 L1 hit
Suppose L1, L2, and L3 all miss (implying that we had to access main memory):
L 04222caf,8 393 L1 miss L2 miss L3 miss
(M)odify operations are the most complex since they involve two sub-operations… a (L)oad immediately followed by a (S)tore.
M 1ffefffd78,8 163 L1 miss eviction L2 miss L3 hit <-- notice that the Load portion of this (M)odify operation caused an L1 miss, L2 miss, and L3 hit
M 1ffefffd78,8 13 L1 hit <-- this line belongs to the store portion of the (M)odify operation

The final lines of the output file are special. They will indicate the total number of hits, misses, and evictions for each specific cache. The very last line will indicate the total number of simulated cycles that were necessary to simulate the trace file, as well as the total number of read and write operations that were directly requested by the CPU.
These lines should exactly match the following format (with values given in decimal):

L1 Cache: Hits:<# of hits> Misses:<# of misses> Evictions:<# of evictions>
L2 Cache: Hits:<# of hits> Misses:<# of misses> Evictions:<# of evictions>
L3 Cache: Hits:<# of hits> Misses:<# of misses> Evictions:<# of evictions>
Cycles:<# of cycles> Reads:<# of CPU read requests> Writes:<# of CPU write requests>
Code Implementation
Since your cache simulator is modeling multiple caches, it’s a natural extension to implement this software using principles of parallel programming in C or C++. Your program must split its tasks into threads. Each hypothetical cache must operate on a separate thread. You may use additional threads to coordinate the simulator’s operation.
Flip supports 24 virtual cores; feel free to use them!
You should plan to research parallel programming techniques. Depending on your design, you will need to utilize mutexes, semaphores, or some sort of locking mechanism to coordinate the cache simulator.
If you are programming in C, you may use pthreads or other libraries (as long as you can include the libraries in your project directory and it compiles on flip.engr.oregonstate.edu).
If you are programming in C++, I suggest using the threads support that is natively incorporated into C++14. Again, you may use alternate libraries if you are able to incorporate all dependencies into the project directory (and the code compiles on flip.engr.oregonstate.edu).
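One possible (not prescribed) shape for the one-thread-per-cache requirement, using standard C++ threading: each cache level runs a worker thread that blocks on a condition variable until the coordinator hands it a request, services it, and signals completion. The lookup itself is stubbed out here; only the handshake is shown:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <thread>

// Minimal request/response handshake between a coordinator and one cache
// thread. A full simulator would chain one of these per level (L1, L2, L3).
class CacheWorker {
public:
    explicit CacheWorker(int accessCycles)
        : accessCycles_(accessCycles), thread_([this] { run(); }) {}
    ~CacheWorker() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        thread_.join();
    }
    // Called from the coordinator: hand over an address, wait for the cycle count.
    int access(uint64_t addr) {
        std::unique_lock<std::mutex> lk(m_);
        pendingAddr_ = addr;
        hasRequest_ = true;
        cv_.notify_all();
        cv_.wait(lk, [this] { return hasResult_; });
        hasResult_ = false;
        return resultCycles_;
    }
private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        for (;;) {
            cv_.wait(lk, [this] { return hasRequest_ || stop_; });
            if (stop_) return;
            hasRequest_ = false;
            // A real worker would perform the tag/index lookup on pendingAddr_
            // here; we simply charge the configured access time.
            resultCycles_ = accessCycles_;
            hasResult_ = true;
            cv_.notify_all();
        }
    }
    int accessCycles_;
    std::mutex m_;
    std::condition_variable cv_;
    bool hasRequest_ = false, hasResult_ = false, stop_ = false;
    uint64_t pendingAddr_ = 0;
    int resultCycles_ = 0;
    std::thread thread_;   // declared last so it starts after all other members
};
```

The predicate-based `cv_.wait` calls guard against spurious wakeups, and the destructor shuts the worker down cleanly before joining.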
Project Write-Up
Note: Any chart or graphs in your written report must have labels for both the vertical and horizontal axis.
Undergraduates (CS/ECE 472)
Part 1: Summarize your work in a well-written report. The report should be formatted in a professional format. Use images, charts, diagrams or other visual techniques to help convey your information to the reader.
Explain how you implemented your cache simulator. You should provide enough information that a knowledgeable programmer would be able to draw a reasonably accurate block diagram of your program.
•What data structures did you use to implement your design?
•What were the primary challenges that you encountered while working on the project?
•Is there anything you would implement differently if you were to re-implement this project?
•How do you track the number of clock cycles needed to execute memory access instructions?
Part 2: There is a general rule of thumb that a direct-mapped cache of size N has about the same miss rate as a 2-way set associative cache of size N/2.
Your task is to use your cache simulator to conclude whether this rule of thumb is actually worth using. You may test your simulator using instructor-provided trace files (see the sample trace files section) or you may generate your own trace files from Linux executables (“wget oregonstate.edu”, “ls”, “hostid”, “cat /etc/motd”, etc). Simulate at least three trace files and compare the miss rates for a direct-mapped cache versus a 2-way set associative cache of size N/2. For these cache simulations, choose a block size and number of indices so that the direct-mapped cache contains 32KiB of data. The 2-way set associative cache (for comparison) should then contain 16KiB of data. You are welcome to experiment with different block sizes/number of indices to see how your simulation results are affected. You could also simulate additional cache sizes to provide more comparison data. After you have obtained sufficient data to support your position, put your simulation results into a graphical plot and explain whether you agree with the aforementioned rule of thumb. Include this information in your written report.
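The set counts for these simulations follow directly from sets = capacity / (block size × associativity). A worked example of the arithmetic, assuming a 32-byte block (one valid choice):

```cpp
// Number of sets needed for a given capacity, block size, and associativity.
long numSets(long capacityBytes, long blockSize, long assoc) {
    return capacityBytes / (blockSize * assoc);
}
```

So the configuration files would specify 1024 sets for the 32 KiB direct-mapped cache and 256 sets for the 16 KiB 2-way cache.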
Part 3: If you chose to implement any extra credit tasks, be sure to include a thorough description of this work in the report.
Graduate Students (CS/ECE 572)
Part 1: Summarize your work in a well-written report. The report should be formatted in a professional format. Use images, charts, diagrams or other visual techniques to help convey your information to the reader.
Explain how you implemented your cache simulator. You should provide enough information that a knowledgeable programmer would be able to draw a reasonably accurate block diagram of your program.
•What data structures did you use to implement your multi-level cache simulator?
•How did you implement the multi-thread programming aspect of the project?
•What were the primary challenges that you encountered while working on the project?
•Is there anything you would implement differently if you were to re-implement this project?
•How do you track the number of clock cycles needed to execute memory access instructions?
Part 2: Using trace files provided by the instructor (see the sample trace files section), how does the miss rate and average memory access time (in cycles) vary when you simulate a machine with various levels of cache? Note that you can compute the average memory access time by considering the total number of read and write operations (requested by the CPU), along with the total number of simulated cycles that it took to fulfill the requests.
Research a real-life CPU (it must contain at least an L2 cache) and simulate the performance with L1, L2, (and L3 caches if present). You can choose the specific model of CPU (be sure to describe your selection in your project documentation). This could be an Intel CPU, an AMD processor, or some other modern product. What is the difference in performance when you remove all caches except the L1 cache?  Be sure to run this comparison with each of the three instructor-provided trace files. Provide written analysis to explain any differences in performance. Also be sure to provide graphs or charts to visually compare the difference in performance.
Part 3: If you chose to implement any extra credit tasks, be sure to include a thorough description of this work in the report.
Submission Guidelines
You will submit both your source code and a PDF file containing the typed report.
Any chart or graphs in your written report must have labels for both the vertical and horizontal axis!
For the source code, you must organize your source code/header files into a logical folder structure and create a tar file that contains the directory structure. Your code must be able to compile on flip.engr.oregonstate.edu. If your code does not compile on the engineering servers you should expect to receive a 0 grade for all implementation portions of the grade.
Your submission must include a Makefile that can be used to compile your project from source code. It is acceptable to adapt the example Makefile from the starter code. If you need a refresher, please see this helpful page (links to an external site). If the Makefile is written correctly, the grader should be able to download your TAR file, extract it, and run the “make” command to compile your program. The resulting executable file should be named: “cache_sim”.
Grading and Evaluation
CS/ECE 472 students can complete the 572 project if they prefer (and must complete the 572 write-up, rather than the undergraduate version). Extra credit will be awarded to 472 students who choose to complete this task.
Your source code and the final project report will both be graded. Your code will be tested for proper functionality. All aspects of the code (cleanliness, correctness) and report (quality of writing, clarity, supporting evidence) will be considered in the grade. In short, you should be submitting professional quality work.
You will lose points if your code causes a segmentation fault or terminates unexpectedly.
The project is worth 200 points (100 points for the written report and 100 points for the C/C++ implementation).
Extra Credit Explanation
The extra credit is as follows. Note that in order to earn full extra credit, the work must be well documented in your written report.
ECE/CS 472 Extra Credit Opportunities
10 points – Implement and document write-back cache support.
30 points – Implement and document the 572 project instead of the 472 project. All 572 expectations must be met.
ECE/CS 572 Extra Credit Opportunities
10 points – Implement and document write-back cache support for a system that contains only an L1 cache.
10 points (additional) – Extend your implementation so that it works with multiple layers of write-back caches. E.g. if a dirty L1 block is evicted, it should be written to the L2 cache and the corresponding L2 block should be marked as dirty. Assuming that the L2 cache has sufficient space, the main memory would not be updated (yet).
Errata
This section of the assignment will be updated as changes and clarifications are made. Each entry here should have a date and brief description of the change so that you can look over the errata and easily see if any updates have been made since your last review.
Nov 2nd – Released the project guidelines.
