
Vector Processing
(aka, Single Instruction Multiple Data, or SIMD)

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License


Computer Graphics
simd.vector.pptx
mjb – March 15, 2022

What is Vectorization/SIMD and Why do We Care?
Performance!
Many hardware architectures today, both CPU and GPU, allow you to perform arithmetic operations on multiple array elements simultaneously.
(Thus the label, “Single Instruction Multiple Data”.)
We care about this because many problems, especially in science and engineering, can be cast this way. Examples include convolution, the Fourier transform, power spectra, autocorrelation, etc.
(Figures: sine and cosine values; Fourier products.)

SIMD in Intel Chips

Instruction Set       Year Released    Width (bits)    Width (FP words)
MMX                   1996             64              2
SSE                   1999             128             4
AVX                   2011             256             8
AVX-512 (Xeon Phi)    2016             512             16

Note: 512 bits is one complete cache line!
Note: 16 floats is also a 4×4 transformation matrix!
If you care:
• MMX stands for “MultiMedia Extensions”
• SSE stands for “Streaming SIMD Extensions”
• AVX stands for “Advanced Vector Extensions”
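As a side note (a sketch, not from the slides): gcc and clang predefine macros such as __SSE__, __AVX__, and __AVX512F__ according to which instruction-set flags (-msse, -mavx, -mavx512f) the compiler was given, so a program can report the widest SIMD register it was compiled for. The helper name simd_width_floats is invented here.

```c
/* Sketch: report the SIMD register width, in float words, that the
   compiler was allowed to target. The macros below are predefined
   by gcc/clang based on the -m architecture flags. */
int simd_width_floats( void )
{
#if defined(__AVX512F__)
	return 16;	/* 512 bits */
#elif defined(__AVX__)
	return 8;	/* 256 bits */
#elif defined(__SSE__)
	return 4;	/* 128 bits */
#else
	return 1;	/* scalar only */
#endif
}
```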

Intel and AMD CPU architectures support vectorization. The most well-known form is Streaming SIMD Extensions, or SSE, which allows four floating-point operations to happen simultaneously.
Normally a scalar floating point multiplication instruction happens like this:
mulss r1, r0
“ATT form”: mulss src, dst

The SSE version of the multiplication instruction happens like this:
mulps xmm1, xmm0
“ATT form”: mulps src, dst

Array * Array
SIMD Multiplication

SimdMul( float *a, float *b, float *c, int len )
{
	c[0:len] = a[0:len] * b[0:len];
}

Note that the construct:

	a[ 0 : ArraySize ]

is meant to be read as:

	"The set of elements in the array a starting at index 0 and going for ArraySize elements"

and not as:

	"The set of elements in the array a starting at index 0 and going through index ArraySize".
Array * Array
SIMD Multiplication

SimdMul( float *a, float *b, float *c, int len )
{
	#pragma omp simd
	for( int i = 0; i < len; i++ )
		c[i] = a[i]*b[i];
}

Array * Scalar
SIMD Multiplication

SimdMul( float *a, float b, float *c, int len )
{
	c[0:len] = a[0:len] * b;
}

SimdMul( float *a, float b, float *c, int len )
{
	#pragma omp simd
	for( int i = 0; i < len; i++ )
		c[i] = a[i]*b;
}

Array*Array Multiplication Speed
(Figure: Speed (MFLOPS) vs. Array Size (M).)

Array*Array Multiplication Speedup
(Figure: Speedup of SIMD over Non-SIMD vs. Array Size (M).)
You would think it would always be 4.0 ± noise effects, but it's not. Why?

SIMD in OpenMP 4.0

#pragma omp simd
for( int i = 0; i < ArraySize; i++ )
{
	c[ i ] = a[ i ] * b[ i ];
}

Requirements for a For-Loop to be Vectorized

• If there are nested loops, the one to vectorize must be the inner one.
• There can be no jumps or branches. "Masked assignments" (an if-statement-controlled assignment) are OK, e.g.,

	if( A[ i ] > 0. )
		B[ i ] = 1.;

• The total number of iterations must be known at runtime when the loop starts.
• There can be no inter-loop data dependencies such as:

	a[ i ] = a[ i-1 ] + 1.;

For example:

	a[100] = a[99] + 1.;	// 101st element from the 100th element: this crosses an SSE boundary, so it is OK
	a[101] = a[100] + 1.;	// 102nd element from the 101st element: this is within one SSE operation, so it is not OK
• It helps performance if the elements have contiguous memory addresses.
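To make the masked-assignment and data-dependency rules above concrete, here is a small sketch (the function names flag_positives and prefix_add are invented for illustration). The first loop vectorizes because the if only masks which value is stored; the second cannot, because iteration i reads the result of iteration i-1.

```c
/* Vectorizable: the if-statement is a masked assignment -- every
   iteration is independent of every other iteration. */
void flag_positives( const float *a, float *b, int len )
{
	#pragma omp simd
	for( int i = 0; i < len; i++ )
	{
		if( a[i] > 0. )
			b[i] = 1.;
		else
			b[i] = 0.;
	}
}

/* Not vectorizable: an inter-loop data dependency -- iteration i
   needs the value that iteration i-1 just computed. */
void prefix_add( float *a, int len )
{
	for( int i = 1; i < len; i++ )
		a[i] = a[i-1] + 1.;
}
```

The #pragma omp simd line is harmlessly ignored by compilers built without OpenMP support.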

Prefetching
Prefetching is used to bring a cache line into the cache before it is needed, thus hiding the latency of fetching from off-chip memory.
There are two key issues here:
1. Issuing the prefetch at the right time
2. Issuing the prefetch at the right distance
The right time:
If the prefetch is issued too late, then the memory values won’t be back when the program wants to use them, and the processor has to wait anyway.
If the prefetch is issued too early, then there is a chance that the prefetched values could be evicted from cache by another need before they can be used.
The right distance:
The “prefetch distance” is how far ahead of the memory we are using right now the prefetched memory is.
Too far, and the values sit in cache for too long, and possibly get evicted.
Too near, and the program is ready for the values before they have arrived.
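As a sketch of what issuing a prefetch looks like in code (not from the slides): gcc and clang provide the __builtin_prefetch builtin, and the PF_DISTANCE value below is a made-up tuning parameter — the right distance depends on the machine, exactly as described above.

```c
/* Sketch: software prefetching in a multiply-and-sum loop.
   __builtin_prefetch( addr, rw, locality ) is a gcc/clang builtin;
   rw = 0 means "will read", locality = 3 means "keep in cache".
   PF_DISTANCE (in floats) is an assumed value, to be tuned. */
#define PF_DISTANCE	64

float MulSumPrefetch( const float *a, const float *b, int len )
{
	float sum = 0.;
	for( int i = 0; i < len; i++ )
	{
		if( i + PF_DISTANCE < len )
		{
			__builtin_prefetch( &a[ i + PF_DISTANCE ], 0, 3 );
			__builtin_prefetch( &b[ i + PF_DISTANCE ], 0, 3 );
		}
		sum += a[i] * b[i];
	}
	return sum;
}
```

The prefetch changes timing only, never the result, so a too-near or too-far distance costs performance but not correctness.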

The Effects of Prefetching on SIMD Computations
Array Multiplication
Length of Arrays (NUM): 1,000,000
Length per SIMD call (ONETIME): 256

for( int i = 0; i < NUM; i += ONETIME )
{
	SimdMul( &a[i], &b[i], &c[i], ONETIME );
}
#define SSE_WIDTH	4

SimdMul( float *a, float *b, float *c, int len )
{
	int limit = ( len/SSE_WIDTH ) * SSE_WIDTH;
	register float *pa = a;
	register float *pb = b;
	register float *pc = c;

	for( int i = 0; i < limit; i += SSE_WIDTH )
	{
		_mm_storeu_ps( pc, _mm_mul_ps( _mm_loadu_ps( pa ), _mm_loadu_ps( pb ) ) );
		pa += SSE_WIDTH;
		pb += SSE_WIDTH;
		pc += SSE_WIDTH;
	}

	for( int i = limit; i < len; i++ )
	{
		c[i] = a[i] * b[i];
	}
}

SimdMulSum using Intel Intrinsics

SimdMulSum( float *a, float *b, int len )
{
	float sum[4] = { 0., 0., 0., 0. };
	int limit = ( len/SSE_WIDTH ) * SSE_WIDTH;
	register float *pa = a;
	register float *pb = b;

	__m128 ss = _mm_loadu_ps( &sum[0] );
	for( int i = 0; i < limit; i += SSE_WIDTH )
	{
		ss = _mm_add_ps( ss, _mm_mul_ps( _mm_loadu_ps( pa ), _mm_loadu_ps( pb ) ) );
		pa += SSE_WIDTH;
		pb += SSE_WIDTH;
	}
	_mm_storeu_ps( &sum[0], ss );

	for( int i = limit; i < len; i++ )
	{
		sum[0] += a[ i ] * b[ i ];
	}

	return sum[0] + sum[1] + sum[2] + sum[3];
}

Intel Intrinsics
(Figure: speedup vs. Array Size.)

Why do the Intrinsics do so well with a small dataset size?

It's not due to the code in the inner loop:

C/C++:

	for( int i = 0; i < len; i++ )
	{
		c[ i ] = a[ i ] * b[ i ];
	}

Assembly (compiler-vectorized C/C++):

	movups (%r8), %xmm0
	movups (%rcx), %xmm1
	mulps  %xmm1, %xmm0
	movups %xmm0, (%rdx)
	addq   $16, %r8
	addq   $16, %rcx
	addq   $16, %rdx
	addl   $4, -4(%rbp)

Assembly (intrinsics):

	movups (%r10), %xmm0
	movups (%r9), %xmm1
	mulps  %xmm1, %xmm0
	movups %xmm0, (%r11)
	addq   $16, %r9
	addq   $16, %r10
	addq   $16, %r11
	addl   $4, %r8d

It's actually due to the setup time. The intrinsics have a tighter coupling to the setting up of the registers. A smaller setup time makes the small-dataset-size speedup look better.

A preview of things to come: OpenCL and CUDA have SIMD Data Types

When we get to OpenCL, we could compute projectile physics like this:

	float4 pp;				// p'
	pp.x = p.x + v.x*DT;
	pp.y = p.y + v.y*DT + .5*DT*DT*G.y;
	pp.z = p.z + v.z*DT;

But, instead, we will do it like this:

	float4 pp = p + v*DT + .5*DT*DT*G;	// p'

We do it this way for two reasons:
1. Convenience and clean coding
2. Some hardware can do multiple arithmetic operations simultaneously

The whole thing will look like this:

	constant float4 G  = (float4) ( 0., -9.8, 0., 0. );
	constant float  DT = 0.1;

	Particle( global float4 * dPobj, global float4 * dVel, global float4 * dCobj )
	{
		int gid  = get_global_id( 0 );		// particle #
		float4 p = dPobj[gid];			// particle #gid's position
		float4 v = dVel[gid];			// particle #gid's velocity

		float4 pp = p + v*DT + .5*DT*DT*G;	// p'
		float4 vp = v + G*DT;			// v'

		dPobj[gid] = pp;
		dVel[gid]  = vp;
	}

Summary

• SIMD is an important way to achieve speed-ups on a CPU.
• For now, you might have to write in assembly language or use Intel intrinsics to get to all of it.
• I suspect that #pragma omp simd will eventually catch up.
• Prefetching can really help SIMD.
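In the spirit of the summary's point that #pragma omp simd may eventually catch up: OpenMP 4.0 already has a reduction clause that expresses the SimdMulSum pattern portably, without hand-written intrinsics. A sketch (the name OmpMulSum is invented here), assuming a compiler with OpenMP 4.0 support:

```c
/* Sketch: multiply-and-sum written with the OpenMP 4.0 simd
   reduction clause instead of SSE intrinsics. The compiler keeps
   partial sums in vector lanes and combines them at the end,
   just like the explicit sum[0..3] array in SimdMulSum.
   Without OpenMP the pragma is ignored and the loop runs scalar. */
float OmpMulSum( const float *a, const float *b, int len )
{
	float sum = 0.;
	#pragma omp simd reduction(+:sum)
	for( int i = 0; i < len; i++ )
		sum += a[i] * b[i];
	return sum;
}
```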
