COMS30115 Coursework 2
Rasterisation

Carl Henrik Ek & Magnus Burenius

January 22, 2017

Raytracing is the simplest way of computing a 2D image of a 3D scene.
It can be used to simulate all kinds of phenomena, many more than we
implemented in the second lab. Its main disadvantage is speed, and it is
therefore typically not used for real-time visualization. For those scenarios
another method called rasterization is often used. Although rasterization is
typically faster than raytracing, it is not as easy to implement and it cannot
be used to simulate all illumination phenomena. It is for instance very
difficult to simulate multiple bounces of light with it. In this lab you will
implement a rasterizer. The lab consists of three parts. In the first part
you will explore:

Perspective projection of 3D points by a pinhole camera.

Drawing 3D objects modeled by triangular surfaces by first projecting
the vertices of the triangles to the 2D image plane.

Drawing the edges of each triangle as lines in 2D, using linear interpo-
lation.

The result of this can be seen in the top image in Table 1. You will then
add:

Drawing of filled triangles.

A depth buffer to check visibility for each pixel.

The result of this is very similar to the first part of lab 2, as can be seen
in the middle image of Table 1, but computed much faster. In the last part
you will extend the program by also implementing:

Light sources.


Interpolation of arbitrary quantities across the triangle, although you
will focus on light.

Per vertex and per pixel illumination, implemented by vertex and pixel
shaders.

The final result can be seen in the bottom image of Table 1.

Table 1: The output from this lab.

1 Transformations

1.1 Perspective Projection of Points from 3D to 2D

In the first lab you made a starfield by simulating a pinhole camera and its
projection of 3D points to an image. In the second lab you did a raytracer
by also simulating a pinhole camera, but you did not project points then.
Instead you sent out rays for each pixel of the camera image. These are the
two main approaches that can be used to make a 2D image of a 3D world.
In this lab we will go back to the first approach and use projection instead
of raytracing.

Assume we have a pinhole camera positioned at the origin looking along
the z-axis of a 3D coordinate system. Let the x-axis point to the right and
let the y-axis point downwards. Let f be the focal length of the camera,
i.e. the distance from the camera position to the image plane, measured in
pixel units. Let the x- and y-axis of the 2D camera image be the same as
for the 3D world, i.e. the origin will be in the middle of the image. Then
the projection of an arbitrary 3D point (X,Y, Z) to a 2D point (x, y) in the


camera image can be written as:

x = f·X/Z    (1)

y = f·Y/Z    (2)

This relation can be derived by looking at two triangles that have the same
angles. However, when working with computers it is common to have the
origin of the image in the top left corner. If the image has width W and
height H the projection can then be written:

x = f·X/Z + W/2    (3)

y = f·Y/Z + H/2    (4)
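In code this is a one-liner per coordinate. Here is a small sketch for a point (X, Y, Z) already expressed in the camera's coordinate system, using the focalLength, SCREEN_WIDTH and SCREEN_HEIGHT variables that appear later in this lab:

// Sketch of equations 3 and 4 for a camera-space point (X, Y, Z).
// focalLength, SCREEN_WIDTH and SCREEN_HEIGHT are the variables used
// later in the lab skeleton.
int x = int( focalLength * X / Z ) + SCREEN_WIDTH  / 2;
int y = int( focalLength * Y / Z ) + SCREEN_HEIGHT / 2;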

1.2 Translation

Assume the camera is not fixed at the origin but can move around freely. Let
C be the position of the camera. Then to project a 3D point using equation
3 and 4 we first need to transform the point P from the world coordinate
system to the system where the camera is at the origin. We can do this by
subtracting the position of the camera:

P' = P - C    (5)

P' is then the position of the point in a coordinate system centered at the
camera.

1.3 Rotation

Now we assume that the camera position is fixed at the origin again but that
it can rotate around it. We then have a fixed world coordinate system and
another coordinate system that rotates with the camera. The transformation
of a position vector P from the world coordinate system to the system of
the camera can then be expressed using a 3×3 matrix R and vector-matrix
multiplication:

P'_{1×3} = P_{1×3} R_{3×3}    (6)

where P and P' are row vectors. We say that the matrix R that can be used
to perform this transformation is the rotation matrix of the camera.


Here we have used row vectors to represent 3D positions. We then mul-
tiply them on the left side of the matrix. It is of course also possible to
use column vectors. Then the transformation from world system to camera
system would have been:

P'_{3×1} = R_{3×3} P_{3×1}    (7)

i.e. we would have multiplied the column vector on the right side of the
matrix instead. Note that the rotation matrices are slightly different in those
two cases, although they describe the same rotation. They are each other's
transposes. There is no theoretical difference between the two approaches,
but in practice it can be confusing, especially if they are mixed. In computer
graphics it is most common to use row vectors that are multiplied on the
left side of the matrix, as in equation 6, but in many other fields the other
approach is more common. It is therefore good if you are familiar with both
approaches and understand that there is nothing strange about them.
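As a quick sanity check of the two conventions, here is a hypothetical snippet using GLM (where vec3 * mat3 treats the vector as a row vector), given some point P and the row-vector-convention rotation matrix R:

// Hypothetical check: the row-vector and column-vector conventions give
// the same camera-space point when the rotation matrices are each
// other's transposes.
vec3 rowResult = P * R;                   // equation 6: row vector times matrix
vec3 colResult = glm::transpose(R) * P;   // equation 7: matrix times column vector
// rowResult and colResult are equal (up to floating-point rounding).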

1.4 Translation & Rotation

Finally we assume that the camera can both rotate and translate. Then we
can perform the transformation of a point from world coordinate system to
camera coordinate system in two steps:

Transform it from world space to a coordinate system that is centered
at the camera using equation 5.

Transform it from the coordinate system that is centered at the camera
to a system that is also rotated like the camera, using equation 6.

The total transformation then becomes:

P' = (P - C) R    (8)
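In GLM this combined transformation is compact. A minimal sketch, assuming the cameraPos and R variables that are introduced later in this lab (GLM's vec3 * mat3 operator implements the row-vector convention of equation 6):

// Sketch of equation 8: transform a world-space point P to camera space,
// assuming cameraPos and the rotation matrix R used later in the lab.
vec3 pCamera = (P - cameraPos) * R;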

2 Drawing Points

First just try to project and plot the vertices of the scene, similar to how you
did for the starfield in the first lab. The Draw-function in the given skeleton
program looks like:

void Draw()
{
	SDL_FillRect( screen, 0, 0 );

	if( SDL_MUSTLOCK(screen) )
		SDL_LockSurface(screen);

	for( int i=0; i<triangles.size(); ++i )
	{
		vector<vec3> vertices(3);
		vertices[0] = triangles[i].v0;
		vertices[1] = triangles[i].v1;
		vertices[2] = triangles[i].v2;

		for( int v=0; v<3; ++v )
		{
			ivec2 projPos;
			VertexShader( vertices[v], projPos );
			vec3 color(1,1,1);
			PutPixelSDL( screen, projPos.x, projPos.y, color );
		}
	}

	if( SDL_MUSTLOCK(screen) )
		SDL_UnlockSurface(screen);

	SDL_UpdateRect( screen, 0, 0, 0, 0 );
}

The function loops through all triangles and all vertices of the triangles
and calls the function VertexShader on each. You have to implement this
function:

void VertexShader( const vec3& v, ivec2& p );

It should take the 3D position of a vertex v, compute its 2D image position
and store it in the given 2D integer vector p. glm::ivec2 is a data type for
2D integer vectors, i.e. a pixel position will be represented by two
integers. The function should thus handle the translation and rotation of the
camera as well as the perspective projection. If you want, you can wait with
implementing the rotation and the motion of the camera in the
Update-function. These things will be easier to debug later, when you see
something more than just some sparse points on the screen. You can start by
just having a fixed position of the camera, represented by the variable:

vec3 cameraPos( 0, 0, -3.001 );

As you might remember from the second lab, our 3D scene consists of a cubic
room placed at the origin and having a side length of 2. If we set the focal
length, width and height of the camera to the same number and place the
camera at (0, 0, -3.001) it will see the whole room. Why is that? Make sure
that you understand this by drawing a figure. If you get this working you
should see something similar to figure 1, i.e. points at the corners/vertices
of the room and the two boxes.

Figure 1: Drawing points projected by a pinhole camera.

3 Drawing Edges

To make the visualization slightly more interesting we will now also draw
the edges of the triangles, instead of just the vertices. We can do this by
drawing lines in 2D between the projected vertices of each triangle. To draw
lines in 2D you can use a function that does linear interpolation, similar to
what you wrote for the first lab. Instead of interpolating pixel colors
represented by glm::vec3 we will now interpolate pixel positions represented
by glm::ivec2. Later in this lab you will extend the function to interpolate
lots of different values, so it is convenient if the interpolation is
implemented in a simple but efficient way. As you might remember from the
first lab, this is a good way to do it:

void Interpolate( ivec2 a, ivec2 b, vector<ivec2>& result )
{
	int N = result.size();
	vec2 step = vec2(b-a) / float(max(N-1,1));
	vec2 current( a );
	for( int i=0; i<N; ++i )
	{
		result[i] = current;
		current += step;
	}
}

You can test that it works by calling it on some simple input:

vector<ivec2> result(4);
ivec2 a(4,2);
ivec2 b(1,8);
Interpolate( a, b, result );

Make sure that you understand how the interpolation code works as you will
use it a lot and also extend it to work for other quantities than 2D positions.

It might be good to know that although this is a simple way to perform
interpolation of integers, it is not as fast as Bresenham's line algorithm
(http://en.wikipedia.org/wiki/Bresenhams_line_algorithm). However, that
algorithm is less intuitive and nothing you need to worry about for this lab.
Doing linear interpolation in this simple way is good enough for us.

To draw a line in 2D we first need to know how many pixels the line
should consist of. We do not want any holes in the line. Depending on
whether the line is mostly horizontal or vertical we will use one pixel per
column or one pixel per row. If a and b are the start and end of the line
segment we can then compute the number of pixels to draw as:

ivec2 delta = glm::abs( a - b );
int pixels = glm::max( delta.x, delta.y ) + 1;

You can then get the pixel positions of the line by calling the Interpolation
function:


vector<ivec2> line( pixels );
Interpolate( a, b, line );

When we have these we can loop through all of them and call PutPixelSDL
for these pixel positions to draw the line. Write a function that does this:

void DrawLineSDL( SDL_Surface* surface, ivec2 a, ivec2 b, vec3 color );
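One possible way to structure this function, reusing Interpolate and the pixel count computed above (a sketch, not the only correct layout):

void DrawLineSDL( SDL_Surface* surface, ivec2 a, ivec2 b, vec3 color )
{
	// Number of pixels needed so that the line has no holes.
	ivec2 delta = glm::abs( a - b );
	int pixels = glm::max( delta.x, delta.y ) + 1;

	// Interpolate the pixel positions along the line and draw each of them.
	vector<ivec2> line( pixels );
	Interpolate( a, b, line );
	for( int i=0; i<pixels; ++i )
		PutPixelSDL( surface, line[i].x, line[i].y, color );
}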

Before applying it to draw the edges of the projected triangles, try to draw
lines between some arbitrary image points. When you have got that working,
add another function that draws the edges of a triangle:

void DrawPolygonEdges( const vector<vec3>& vertices )
{
	int V = vertices.size();

	// Transform each vertex from 3D world position to 2D image position:
	vector<ivec2> projectedVertices( V );
	for( int i=0; i<V; ++i )
		VertexShader( vertices[i], projectedVertices[i] );

	// Loop over all vertices and draw the edge from it to the next vertex:
	for( int i=0; i<V; ++i )
	{
		int j = (i+1)%V; // The next vertex
		vec3 color( 1, 1, 1 );
		DrawLineSDL( screen, projectedVertices[i], projectedVertices[j], color );
	}
}

Call it for every triangle in the Draw-function, which now becomes:

void Draw()
{
	SDL_FillRect( screen, 0, 0 );

	if( SDL_MUSTLOCK(screen) )
		SDL_LockSurface(screen);

	for( int i=0; i<triangles.size(); ++i )
	{
		vector<vec3> vertices(3);
		vertices[0] = triangles[i].v0;
		vertices[1] = triangles[i].v1;
		vertices[2] = triangles[i].v2;

		DrawPolygonEdges( vertices );
	}

	if( SDL_MUSTLOCK(screen) )
		SDL_UnlockSurface(screen);

	SDL_UpdateRect( screen, 0, 0, 0, 0 );
}

This should give you an image of a wireframe scene like in figure 2. Now
that you have a better visualization of the scene, you should also implement
motion of the camera if you have not done so yet. Have variables for the
position of the camera and its rotation:


vec3 cameraPos( 0, 0, -3.001 );
mat3 R;
float yaw = 0; // Yaw angle controlling camera rotation around y-axis

Update these in the Update-function when the user presses the arrow keys
just as you did in lab 2 and make sure that the VertexShader-function handles
both the translation and rotation of the camera before it projects a point
from 3D to 2D. It should be possible to rotate the camera around the y-axis.
This type of rotation is called yaw.
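One way to build R from the yaw angle is sketched below. The sign convention is an assumption; if the camera turns the wrong way when you press the keys, negate the angle.

// Sketch: rotation of the camera around the y-axis by the angle yaw.
// GLM's mat3 constructor takes its values column by column.
void UpdateRotation()
{
	float c = cos( yaw );
	float s = sin( yaw );
	R = mat3( c, 0, -s,   // first column
	          0, 1,  0,   // second column
	          s, 0,  c ); // third column
}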

If you want you can also add the possibility to rotate the camera up and
down (pitch rotation), but that is not mandatory. Another thing that you
can add if you want is control of the camera rotation by the mouse instead
of the keyboard. This is also not mandatory. You can access the relative
motion of the mouse by calling SDL_GetRelativeMouseState:

int dx;
int dy;
SDL_GetRelativeMouseState( &dx, &dy );

This function can also be used to see which mouse buttons are pressed.
You can read more about it in the SDL documentation.

When you move around with the camera you might notice that there are
problems if the vertices are behind the camera. This is actually a bit tricky
to fix and nothing you need to do for this lab. The solution to the problem
is to perform clipping of the triangles before they are drawn. Clipping is a
topic that could be studied in the optional project.

4 Filled Triangles

By just drawing the edges of the triangles we get some feeling for the 3D
structure of the scene which might be good for some engineering visualiza-
tions. Nevertheless, it would be nice if we could also draw solid surfaces,
i.e. fill the triangles with color. This turns out to be a bit more involved.
The main idea is to draw the triangle row by row. We have an array that
stores the start position and another array that stores the end position of
the triangle for each row:

vector<ivec2> leftPixels( ROWS );
vector<ivec2> rightPixels( ROWS );



Figure 2: Drawing edges.

Assume we have somehow filled these arrays with values, then it is simple to
loop through them and draw each row from the start to the end. The tricky
part is to compute the arrays in the first place.

As an example, consider a triangle with the projected vertices
(10, 20), (30, 10), (20, 40). It should be drawn as 40 - 10 + 1 = 31 rows.
The arrays for the left and right positions should then have 31 elements
each. Corresponding elements should have the same y-coordinate but different
x-coordinates: the left and right end of that row.

One way to fill these arrays with values representing the polygon is to
first initialize the x-coordinates in the left array with really big values
and in the right array with really small values:

for( int i=0; i<ROWS; ++i )
{
	leftPixels[i].x  = +numeric_limits<int>::max();
	rightPixels[i].x = -numeric_limits<int>::max();
}

We then loop through the edges of the polygon and compute the pixels
corresponding to its line. Then for each y-coordinate of this line we check
the corresponding elements in the left and right arrays. If the current x-value
in the left array at that place is larger than the x-value of the line we replace
it. If the current x-value in the right array at that place is smaller than the
x-value of the line we replace it.
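As a sketch of this loop (one possible approach, assuming the projected vertices are stored in a vector<ivec2> called vertexPixels, as in the function declared below, and that minY is the smallest y-value of the polygon):

// Sketch: walk every edge of the polygon and update leftPixels/rightPixels.
// minY is assumed to be the smallest y-value of the projected polygon.
int V = vertexPixels.size();
for( int i=0; i<V; ++i )
{
	ivec2 a = vertexPixels[i];
	ivec2 b = vertexPixels[(i+1)%V]; // next vertex, wrapping around

	// One pixel per row or column, so that the edge has no holes.
	ivec2 delta = glm::abs( a - b );
	int pixels = glm::max( delta.x, delta.y ) + 1;
	vector<ivec2> edge( pixels );
	Interpolate( a, b, edge );

	for( int k=0; k<pixels; ++k )
	{
		int row = edge[k].y - minY;       // index into the row arrays
		leftPixels[row].y  = edge[k].y;
		rightPixels[row].y = edge[k].y;
		if( edge[k].x < leftPixels[row].x )
			leftPixels[row].x = edge[k].x;
		if( edge[k].x > rightPixels[row].x )
			rightPixels[row].x = edge[k].x;
	}
}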


After we have looped through all edges we then have the smallest x-value
for each row in the left array and the largest value for each row in the right
array. Write a function that does this:

void ComputePolygonRows(
	const vector<ivec2>& vertexPixels,
	vector<ivec2>& leftPixels,
	vector<ivec2>& rightPixels )

{
// 1. Find max and min y-value of the polygon
// and compute the number of rows it occupies.

// 2. Resize leftPixels and rightPixels
// so that they have an element for each row.

// 3. Initialize the x-coordinates in leftPixels
// to some really large value and the x-coordinates
// in rightPixels to some really small value.

// 4. Loop through all edges of the polygon and use
// linear interpolation to find the x-coordinate for
// each row it occupies. Update the corresponding
// values in rightPixels and leftPixels.
}

It should take a vector with the projected position of the three vertices
and compute the start and end image position of each row of the triangle.
In fact, we can use this function not only to draw triangles but any convex
polygon. You can test that your function produces sensible output by writing
something like:

vector<ivec2> vertexPixels(3);
vertexPixels[0] = ivec2(10, 5);
vertexPixels[1] = ivec2( 5,10);
vertexPixels[2] = ivec2(15,15);
vector<ivec2> leftPixels;
vector<ivec2> rightPixels;
ComputePolygonRows( vertexPixels, leftPixels, rightPixels );
for( int row=0; row<leftPixels.size(); ++row )
{
	cout << "Start: ("
	     << leftPixels[row].x << ","
	     << leftPixels[row].y << "). "
	     << "End: ("
	     << rightPixels[row].x << ","
	     << rightPixels[row].y << "). " << endl;
}

Check that the printed start and end positions look reasonable. When this
works, write a function that draws the computed rows:

void DrawPolygonRows(
	const vector<ivec2>& leftPixels,
	const vector<ivec2>& rightPixels );

This function should call PutPixelSDL for each pixel between the start and
end for each row. Finally write a function that projects the vertices and calls
the other two functions:

void DrawPolygon( const vector<vec3>& vertices )
{
	int V = vertices.size();

	vector<ivec2> vertexPixels( V );
	for( int i=0; i<V; ++i )
		VertexShader( vertices[i], vertexPixels[i] );

	vector<ivec2> leftPixels;
	vector<ivec2> rightPixels;
	ComputePolygonRows( vertexPixels, leftPixels, rightPixels );
	DrawPolygonRows( leftPixels, rightPixels );
}

Then call DrawPolygon in the Draw-function instead of DrawPolygonEdges.
To signal what color to draw you can create a new variable:

vec3 currentColor;

which you use in DrawPolygonRows when you call PutPixelSDL. We set this
color to the color of the current triangle when we loop over all triangles and
call DrawPolygon in the Draw-function:

void Draw()
{
	SDL_FillRect( screen, 0, 0 );

	if( SDL_MUSTLOCK(screen) )
		SDL_LockSurface(screen);

	for( int i=0; i<triangles.size(); ++i )
	{
		currentColor = triangles[i].color;

		vector<vec3> vertices(3);
		vertices[0] = triangles[i].v0;
		vertices[1] = triangles[i].v1;
		vertices[2] = triangles[i].v2;

		DrawPolygon( vertices );
	}

	if( SDL_MUSTLOCK(screen) )
		SDL_UnlockSurface(screen);

	SDL_UpdateRect( screen, 0, 0, 0, 0 );
}
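For completeness, a sketch of what DrawPolygonRows could then look like, drawing each row from its left to its right pixel with currentColor (again just one way to write it):

void DrawPolygonRows( const vector<ivec2>& leftPixels,
                      const vector<ivec2>& rightPixels )
{
	for( int row=0; row<leftPixels.size(); ++row )
	{
		int y = leftPixels[row].y;
		for( int x=leftPixels[row].x; x<=rightPixels[row].x; ++x )
			PutPixelSDL( screen, x, y, currentColor );
	}
}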

You should then get the result seen in figure 3. When you manage to get
that you are done with most of the coding for this lab. The remaining stuff
does not require as much coding but greatly improves the image. Notice
how the blue box is drawn on top of the red since we have not added any
mechanism to deal with occlusion yet. Compare the speed of the program
with the raytracer. How much faster is it for this scene?


Figure 3: Filled triangles. Notice how the blue box is drawn on top of the
red.

5 Depth Buffer

You have now implemented the core of a rasterizer. However, a problem
with the current implementation is that the triangles might be drawn on top
of each other, in arbitrary order. If multiple triangles occupy the same pixel
we would like the one closest to the camera to be visible. In the raytracer we
managed this by treating all pixels independently. For each pixel we followed
a ray and looked at the closest intersection for that ray.

In a rasterizer we try to speed things up by not treating every pixel
independently. We just treat every triangle independently. The standard
way to handle this occlusion problem in a rasterizer is to use a so called
depth buffer. That is an image storing a depth value for each pixel instead
of a color value. For each pixel we thus store the depth of the closest surface
point that has been drawn at that position so far. When we draw a new
pixel we can then check if its depth is smaller than the one currently stored
in the depth buffer. Only then do we draw it and also replace the value in
the depth buffer.

To do this we need to keep track of the depth of each pixel in the image.
We can do this by interpolating the depth over the triangle when we draw
it. First we compute the depth of the vertices, in the coordinate system of
the camera. Then we interpolate the depth over each edge of the triangle.
Finally, we interpolate the depth over each row when we draw. This is similar
to the bilinear interpolation we used in the first lab. First we interpolate the


left and right edge from top to bottom. Secondly, we interpolate each row
from left to right.

However, the perspective projection makes the interpolation a bit more
involved. Assume z1 is the depth of one of the vertices and z2 is the depth
of another vertex. Then in the 3D world the depth along the edge will vary
linearly between z1 and z2, since the surface is planar. However, in the image
the depth along the edge will not vary linearly! Instead it turns out that 1/z
will vary linearly, due to the perspective projection of the camera. We can
thus compute this quantity for each vertex and then interpolate it linearly
across the projected 2D triangle.

After interpolation we could take the inverse to get back to the depth
z. However, we do not really need the depth z. The inverse 1/z which we
already have is good enough. We can then store 1/z in the depth buffer for
each pixel that is drawn. A new pixel should then be drawn if its 1/z is
larger than the one already stored in the depth buffer. Then its depth z will
be smaller and it is closer to the camera. To implement a depth buffer we
create a 2D array with the same size as the camera image:

float depthBuffer[SCREEN_HEIGHT][SCREEN_WIDTH];

In it we will store the inverse depth 1/z for each pixel. Since we now also
need to interpolate the depth over the triangle, not just 2D position, we
create a new struct to hold the information needed for each pixel:

struct Pixel
{

int x;
int y;
float zinv;

};

Previously we just interpolated the position between the vertices when we drew
a triangle. Now we also want to interpolate the inverse depth. Thus, you need
to implement an Interpolate function that interpolates our Pixel-struct
linearly instead of glm::ivec2:

void Interpolate( Pixel a, Pixel b, vector<Pixel>& result );
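A minimal sketch of such a function, stepping x, y and zinv with floats so that rounding errors do not accumulate (zinv is interpolated linearly, as motivated above):

// Sketch: linear interpolation of the Pixel struct (x, y and zinv).
void Interpolate( Pixel a, Pixel b, vector<Pixel>& result )
{
	int N = result.size();
	float steps = float( max( N-1, 1 ) );
	vec3 begin( a.x, a.y, a.zinv );
	vec3 end  ( b.x, b.y, b.zinv );
	vec3 step = ( end - begin ) / steps;
	vec3 current = begin;
	for( int i=0; i<N; ++i )
	{
		result[i].x    = int( current.x );
		result[i].y    = int( current.y );
		result[i].zinv = current.z;
		current += step;
	}
}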

After you have done this you also need to change ComputePolygonRows,
DrawPolygonRows, VertexShader and DrawPolygon to handle Pixel instead
of glm::ivec2:


void ComputePolygonRows(
	const vector<Pixel>& vertexPixels,
	vector<Pixel>& leftPixels,
	vector<Pixel>& rightPixels );

void DrawPolygonRows(
	const vector<Pixel>& leftPixels,
	const vector<Pixel>& rightPixels );

void VertexShader( const vec3& v, Pixel& p );

Thus, in VertexShader you should now compute the inverse depth zinv for
the vertex in the coordinate system of the camera, before you compute its
projected image position (x, y).

If you do all this you will have access to zinv for every pixel that you draw
in DrawPolygonRows. Before drawing a new pixel you can then check if it is
in front of the one that is currently drawn there by comparing with the value
of depthBuffer at that pixel. If it is in front you can draw it and update the
value in the depth buffer. Remember that you also need to clear the depth
buffer in the beginning of the Draw-function before you start drawing. You
do this by setting all pixels to represent infinite depths, i.e. zero inverse
depths:

for( int y=0; y<SCREEN_HEIGHT; ++y )
	for( int x=0; x<SCREEN_WIDTH; ++x )
		depthBuffer[y][x] = 0;

When you are about to draw an interpolated pixel f in DrawPolygonRows you can
then perform the depth test and update the buffer:

if( f.zinv > depthBuffer[y][x] )
{
	depthBuffer[y][x] = f.zinv;
	PutPixelSDL( screen, x, y, currentColor );
}

Currently VertexShader takes a glm::vec3 and computes a Pixel. You have
probably written something like:


void VertexShader( const vec3& v, Pixel& p )
{

vec3 pos = (v - cameraPos)*R;
p.zinv = 1/pos.z;
p.x = int(focalLength * pos.x * p.zinv) + SCREEN_WIDTH/2;
p.y = int(focalLength * pos.y * p.zinv) + SCREEN_HEIGHT/2;

}

To make our rasterizer a bit more general and abstract we can create a new
type which describes a general vertex:

struct Vertex
{

vec3 position;
};

For now we just store the position of the vertex since that is all we use at the
moment. However, in general we could also assign many other quantities to
the vertex. We will soon do this to handle illumination but first make your
program handle this simple Vertex-type by updating: Draw, DrawPolygon
and VertexShader.

The flow of your general rasterizer can now be described as:

VertexShader is called for each Vertex and computes a Pixel.

These Pixels are interpolated in the image between the vertices. First
vertically along the edges and then horizontally across the polygon.

Each such interpolated Pixel is sent to PixelShader which determines
the final color of the image at that position.

Most of the code you have written is to perform the second step. There is
not that much code in VertexShader and PixelShader. The good news is
that to render more sophisticated images, e.g. with illumination and texture
mapping, you do not need to add things to the second step. You just need
to alter VertexShader, PixelShader and the interpolation function. In fact,
if you were to use a rasterization library like OpenGL or Microsoft's DirectX,
you would more or less only need to write the first and third step. These
libraries completely handle the cumbersome second step for you.

6.1 Per Vertex Illumination

We will first try to implement the illumination computations for every vertex
and then interpolate these values across the polygon, similar to how we


interpolated zinv. In lab 2 you learned that the direct illumination from an
omni light source to a surface point can be written as:

D = P max(r̂ · n̂, 0) / (4πr²)    (10)

where D is the power of the incoming direct light per surface area, P is the
power of the light source, r̂ is the unit vector from the surface point to the
light source, r is the distance to the light source and n̂ is the unit normal
of the surface. For ideally diffuse surfaces with reflectance ρ the total
reflected light is then:

R = ρ (D + N)    (11)

where N is the incoming indirect illumination reflected from other surfaces.
We approximated N as a constant term. To implement this illumination
model we will store the parameters of the light source as variables just as in
lab 2:

vec3 lightPos(0,-0.5,-0.7);
vec3 lightPower = 1.1f*vec3( 1, 1, 1 );
vec3 indirectLightPowerPerArea = 0.5f*vec3( 1, 1, 1 );

We would initially like to compute the illumination for every vertex in
VertexShader. We then need to evaluate equations 10 and 11 for every vertex.
Besides the light variables we then need to know the position, normal and
reflectance at the vertex. It is therefore convenient to add this information
to our Vertex-struct:

struct Vertex
{

vec3 position;
vec3 normal;
vec3 reflectance;

};

Make sure to set this information for every vertex in the Draw-function.
After you have done this you will have access to this in VertexShader. You
can then compute the illumination, but you also need to store it for the
resulting output Pixel of VertexShader. You therefore need to add another
quantity to your Pixel-struct to store this:

struct Pixel
{


int x;
int y;
float zinv;
vec3 illumination;

};

Now the illumination gets computed for every Vertex in VertexShader. We
then want to interpolate this value over the polygon before we use it in
PixelShader. To do this you need to modify the Interpolate function so
that it also interpolates the illumination quantity in Pixel. After you have
done this you can simply access the illumination in PixelShader and use it
when you draw the pixel:

void PixelShader( const Pixel& p )
{

int x = p.x;
int y = p.y;
if( p.zinv > depthBuffer[y][x] )
{

depthBuffer[y][x] = p.zinv;
PutPixelSDL( screen, x, y, p.illumination );

}
}
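As a rough sketch of how VertexShader could now evaluate equations 10 and 11 per vertex (assuming the Vertex and Pixel structs above, vec3 reflectance, and the light variables introduced earlier; constants as in lab 2):

// Sketch: per-vertex illumination, evaluating equations 10 and 11.
void VertexShader( const Vertex& v, Pixel& p )
{
	// Transform to camera space and project, as before.
	vec3 pos = (v.position - cameraPos) * R;
	p.zinv = 1 / pos.z;
	p.x = int( focalLength * pos.x * p.zinv ) + SCREEN_WIDTH / 2;
	p.y = int( focalLength * pos.y * p.zinv ) + SCREEN_HEIGHT / 2;

	// Equation 10: direct light per area from the omni light source.
	vec3 r = lightPos - v.position;
	float dist = glm::length( r );
	vec3 rhat = r / dist;
	float projection = glm::max( glm::dot( rhat, v.normal ), 0.f );
	vec3 D = lightPower * projection / ( 4.f * 3.1415926f * dist * dist );

	// Equation 11: reflected light = reflectance * (direct + indirect).
	p.illumination = v.reflectance * ( D + indirectLightPowerPerArea );
}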

This should give you the result seen in figure 5. As you can see the result
is similar to the raytracer with illumination but not exactly the same. Since
we are interpolating the illumination over the surfaces we get less detail, but
gain speed.

6.2 Per Pixel Illumination

