
COMS30115 – Coursework 1
Raytracing

Carl Henrik Ek & Magnus Burenius

January 22, 2017

In this lab you will implement a Raytracer, which draws images of 3D
scenes by tracing the light rays reaching the simulated camera. The lab is
divided into several steps. To get something on the screen you first need to
learn how to:

• Represent 3D scenes using triangular surfaces.

• Trace the ray of each pixel in the camera image into the scene.

• Compute ray-triangle intersections, to find out which surface a ray hit.

You will then be able to write a program that draws the image seen to
the left in the figure. Next, you will extend this program by adding:

• Camera motion.

• Simulation of light sources.

• Reflection of light for diffuse surfaces.

1 Representing Surfaces

In the first lab you coded a star-field by simulating a pinhole camera and
its projection of moving 3D points. In this lab you will learn how to represent
and draw surfaces, instead of points. The simplest surface to describe
mathematically is a triangular surface. It is simple because it is planar and
its geometrical properties can be described by only three 3D points: its
vertices. For instance, from the three vertices we can compute the normal of
the surface, i.e. a vector pointing out from the surface, orthogonal to
the surface itself. Although one can use other representations for surfaces,
triangles are the most commonly used in computer graphics due to their simplicity.
See figure 1 for an example of a 3D model represented by triangular surfaces.

Table 1: The output from this lab.

Figure 1: A 3D model represented by triangular surfaces. This bunny
is a common test model in computer graphics. It can be found at:
http://graphics.stanford.edu/data/3Dscanrep

When describing a surface for computer graphics it is not only the
geometrical properties of the surface that are of interest. We are also interested
in its appearance. The simplest way to describe the appearance of a triangle
is with a single color. We will therefore use the following data structure to
represent a triangle:

struct Triangle
{


vec3 v0;
vec3 v1;
vec3 v2;
vec3 normal;
vec3 color;

};
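The normal stored in the struct can be computed from the vertices with a cross product of the two edge vectors. A minimal sketch, using a bare-bones vec3 stand-in (the lab itself uses glm::vec3) so the snippet is self-contained:

```cpp
#include <cassert>
#include <cmath>

// Bare-bones stand-in for glm::vec3, just for this sketch.
struct vec3 { float x, y, z; };

vec3 sub( vec3 a, vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Cross product of two 3D vectors; the result is orthogonal to both.
vec3 cross( vec3 a, vec3 b )
{
	return { a.y*b.z - a.z*b.y,
	         a.z*b.x - a.x*b.z,
	         a.x*b.y - a.y*b.x };
}

vec3 normalize( vec3 a )
{
	float len = std::sqrt( a.x*a.x + a.y*a.y + a.z*a.z );
	return { a.x/len, a.y/len, a.z/len };
}

// The normal is orthogonal to both edges v1-v0 and v2-v0.
vec3 TriangleNormal( vec3 v0, vec3 v1, vec3 v2 )
{
	return normalize( cross( sub( v1, v0 ), sub( v2, v0 ) ) );
}
```

With GLM this is simply glm::normalize( glm::cross( v1 - v0, v2 - v0 ) ).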

Although the normal can be computed from the vertices, it can be
convenient to have it in the data structure as well. The normals can then be
computed once and stored, so that we can easily access them later whenever
we want, without any computations. We can now represent a 3D model
by a set of triangles. To store all these triangles we will use the vector:

vector<Triangle> triangles;

To fill this vector with some triangles representing a 3D model you can use
the function:

LoadTestModel( vector<Triangle>& triangles );

The function and the Triangle struct are defined in the header file TestModel.h,
which you should include at the beginning of your code. This function fills
the vector it takes as an argument with the triangles describing the model
seen in the figure. This is a famous test model in computer graphics. You can
read more about it at http://www.graphics.cornell.edu/online/box/.
It is a cubic room with dimensions:

−1 ≤ x ≤ 1 (1)
−1 ≤ y ≤ 1 (2)
−1 ≤ z ≤ 1 (3)

2 Intersection of Ray and Triangle

In our raytracer we want to trace rays, for each pixel of the image, to see
where they come from. Since we use triangular surfaces to represent our
model, this corresponds to finding the first intersection between a triangle
and a ray. We will now derive equations for this computation.

2.1 Triangles in 3D

Assume v0, v1, v2 are real 3D vectors representing the vertices of the triangle,
i.e. they are elements of R3. To describe a point in the triangle we construct


a coordinate system that is aligned with the triangle. We let v0 be the origin
of the coordinate system and we let the axis be parallel to two of the edges
of the triangle:

e1 = v1 − v0 (4)
e2 = v2 − v0 (5)

e1 ∈ R3 is then a vector that is parallel to the edge of the triangle between
v0 and v1. Similarly, e2 ∈ R3 is parallel to the edge between v0 and v2. Any
point r ∈ R3 in the plane of the triangle can then be described by two scalar
coordinates u ∈ R and v ∈ R, such that its position in 3D is:

r = v0 + ue1 + ve2 (6)

This holds for all points r that are in the plane of the triangle. For points
that are not only in the same plane, but actually within the triangle we also
have that:

0 < u (7)
0 < v (8)
u + v < 1 (9)

2.2 Rays in 3D

Next, we need a way to describe points that are on a ray. By a ray we mean
a segment of a line in 3D. We define a 3D line by a vector representing its
start position s ∈ R3 and a vector representing its direction d ∈ R3. All
points r lying on the line can then be written:

r = s + td (10)

where t ∈ R is a scalar coordinate describing the position on the line. It is
the signed distance, in units of ‖d‖, from the start position s to the position
r. If we do not put any constraint on t we are describing a line that extends
infinitely in both directions. However, by a ray we mean only the points that
come after the start position. For the ray we thus require:

0 ≤ t (11)

2.3 Intersection

To get the intersection between the plane of the triangle and the line of the
ray we insert the equation of the plane 6 into the equation of the line 10
and solve for the coordinates t, u, v:

v0 + ue1 + ve2 = s + td (12)

This is an equation of 3D vectors. We thus have one equation for each vector
component, i.e. three. Since we also have three unknowns (t, u, v) it can be
solved. To find the solution we re-formulate the equation using matrices, as
one typically does when solving linear systems of equations:

v0 + ue1 + ve2 = s + td (13)
−td + ue1 + ve2 = s − v0 (14)
(−d e1 e2) (t u v)ᵀ = s − v0 (15)

To express the linear equation on the standard form we introduce the 3 × 3
matrix:

A = (−d e1 e2) (16)

where the vectors −d, e1, e2 are the columns of the matrix. We also give a
new name to our right hand side vector:

b = s − v0 (17)

And we put our unknowns in the vector:

x = (t u v)ᵀ (18)

The linear equation 15 can then be written more concisely as:

Ax = b (19)

And the solution x can be found by multiplying both sides with the inverse
of the matrix A:

Ax = b (20)
A⁻¹Ax = A⁻¹b (21)
Ix = A⁻¹b (22)
x = A⁻¹b (23)

Thus, if we can compute the inverse of the matrix A we can easily find
the intersection point x.
Luckily, GLM has both a class to represent 3 × 3 matrices (glm::mat3) and
a function to compute matrix inverses. GLM also uses the ability of C++ to
overload operators: it defines the operators +, -, *, / so that they can be used
directly with glm::vec3 and glm::mat3. We can therefore do the intersection
computations by writing:

using glm::vec3;
using glm::mat3;

vec3 v0 = triangle.v0;
vec3 v1 = triangle.v1;
vec3 v2 = triangle.v2;
vec3 e1 = v1 - v0;
vec3 e2 = v2 - v0;
vec3 b = s - v0;
mat3 A( -d, e1, e2 );
vec3 x = glm::inverse( A ) * b;

This concise syntax is one of the reasons why we are using C++ for these
labs, rather than e.g. Java. Another reason is that it is often faster than Java,
and especially Python. However, C++ is of course not good for everything;
there are many applications for which Java or Python is a better choice.

Now we know the coordinates t, u, v of the intersection point. We can
then check the inequalities 7, 8, 9 and 11 to see whether the intersection
occurred within the triangle and after the beginning of the ray. You should
write a function that does this:

bool ClosestIntersection(
	vec3 start,
	vec3 dir,
	const vector<Triangle>& triangles,
Intersection& closestIntersection );

It takes the start position of the ray, its direction, and a std::vector
of triangles as input. It should check for intersection against all these
triangles. If an intersection occurred it should return true; otherwise false.
In the case of an intersection it should also return some information about
the closest intersection. Define the following data structure to store this
information:


struct Intersection
{

vec3 position;
float distance;
int triangleIndex;

};

The argument closestIntersection sent to the function should be updated
with the 3D position of the closest intersection, its distance from the start
of the ray, and the index of the triangle that was intersected. When writing
the function ClosestIntersection you might need to know the largest value a
float can take. You can get this by including the header file "limits" at the
beginning of your code and then writing:

float m = std::numeric_limits<float>::max();
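To make the interface concrete, here is one possible shape for ClosestIntersection. This is a sketch under two assumptions: a hand-rolled vec3 replaces glm::vec3 so that the snippet is self-contained, and the linear system is solved with Cramer's rule via scalar triple products rather than glm::inverse; either approach yields the same t, u, v.

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Bare-bones stand-ins for glm types, just for this sketch.
struct vec3 { float x, y, z; };
vec3 operator-( vec3 a, vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
vec3 operator+( vec3 a, vec3 b ) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
vec3 operator*( float s, vec3 a ) { return { s * a.x, s * a.y, s * a.z }; }
float dot( vec3 a, vec3 b ) { return a.x*b.x + a.y*b.y + a.z*b.z; }
vec3 cross( vec3 a, vec3 b )
{
	return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

struct Triangle { vec3 v0, v1, v2; };
struct Intersection { vec3 position; float distance; int triangleIndex; };

bool ClosestIntersection( vec3 start, vec3 dir,
                          const std::vector<Triangle>& triangles,
                          Intersection& closest )
{
	closest.distance = std::numeric_limits<float>::max();
	bool found = false;
	for( int i = 0; i < int(triangles.size()); ++i )
	{
		vec3 e1 = triangles[i].v1 - triangles[i].v0;
		vec3 e2 = triangles[i].v2 - triangles[i].v0;
		vec3 b  = start - triangles[i].v0;
		// Solve (-d e1 e2)(t u v)^T = b by Cramer's rule:
		// each determinant is a scalar triple product.
		float det = dot( -1.f * dir, cross( e1, e2 ) );
		if( std::fabs( det ) < 1e-12f ) continue; // ray parallel to triangle plane
		float t = dot( b, cross( e1, e2 ) ) / det;
		float u = dot( -1.f * dir, cross( b, e2 ) ) / det;
		float v = dot( -1.f * dir, cross( e1, b ) ) / det;
		// Inequalities 7, 8, 9 and 11, keeping only the nearest hit.
		if( t >= 0.f && u > 0.f && v > 0.f && u + v < 1.f && t < closest.distance )
		{
			closest = { start + t * dir, t, i };
			found = true;
		}
	}
	return found;
}
```

Note that the distance stored here is t, i.e. measured in units of ‖dir‖, which is enough for comparing intersections along the same ray.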

If you are into linear algebra, a good optional exercise is to calculate a
closed-form solution for the inverse of the matrix, instead of just using
glm::inverse. This can be done with Cramer's rule. We need this result
if we would like to increase the speed of the intersection computations,
which will be the bottleneck of our program. With a closed-form solution
we can postpone the computation of some of the matrix elements until they
are needed: we start by computing only the row needed to obtain the
distance t. If t is negative there is no need to continue analyzing that
intersection, as it will not occur. However, doing this is not necessary to
pass the lab.

3 Tracing Rays

Now that you have a function that takes a ray and finds the closest
intersecting geometry, you have all you need to start rendering an image with
raytracing. The idea is that you trace a ray for each pixel of the image. You
then need to compute, for each pixel, which ray it corresponds to.

Initially we assume that the camera is aligned with the coordinate system
of the model, with the x-axis pointing to the right, the y-axis pointing down
and the z-axis pointing forward into the image. The start position for all rays
corresponding to pixels is the camera position. A vector d ∈ R3 pointing in
the direction where the light reaching pixel (x, y) comes from can then be
computed as:

d = (x − W/2, y − H/2, f) (24)


https://en.wikipedia.org/wiki/Cramer’s_rule

where W and H are the width and height of the image and f is the focal length
of the camera, measured in units of pixels. We strongly encourage you to
draw a figure of the pinhole camera and make sure that you understand this
equation. The focal length can be thought of as the distance from the hole
in the pinhole camera to the image plane. Add the following variables to
store the camera parameters:

float focalLength = …;
vec3 cameraPos( …, …, … );

Fill in . . . with good values for these variables. Make sure that the model
can be seen from the camera position you choose. What will the horizontal
and vertical field of view be for the focal length you have chosen?

You are now ready to write the Draw function. In it you should loop
through all pixels and compute the corresponding ray direction. Then call
the function ClosestIntersection to get the closest intersection in that
direction. If there was an intersection, the color of the pixel should be set
to the color of that triangle; otherwise it should be black. The result should
be similar to figure 2.
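As a concrete sketch of the per-pixel ray, here is equation 24 in code. The resolution and focal length values are illustrative choices, not prescribed by the lab; a focal length of half the width happens to give a 90-degree horizontal field of view.

```cpp
#include <cassert>

struct vec3 { float x, y, z; };  // stand-in for glm::vec3

const int SCREEN_WIDTH  = 100;   // low resolution keeps raytracing interactive
const int SCREEN_HEIGHT = 100;
const float focalLength = SCREEN_WIDTH / 2.f;

// Equation 24: direction of the ray through pixel (x, y),
// for a camera looking down the positive z-axis.
vec3 RayDirection( int x, int y )
{
	return { x - SCREEN_WIDTH / 2.f,
	         y - SCREEN_HEIGHT / 2.f,
	         focalLength };
}
```

In Draw you would then loop over all pixels, call ClosestIntersection with cameraPos and RayDirection(x, y), and write the intersected triangle's color (or black) to the pixel.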

Figure 2: Tracing a light ray for each pixel to see which surface it first
intersects.


4 Moving the Camera

We now have a simple visualization of our 3D scene. However, it would be
nice if we could move the camera around. First try to implement translation
of the camera by updating the variable cameraPos if the user presses the
arrow keys. You can use SDL to check if a key is pressed in this way:

void Update()
{

// Compute frame time:
int t2 = SDL_GetTicks();
float dt = float(t2-time);
time = t2;

cout << "Render time: " << dt << " ms." << endl;

Uint8* keystate = SDL_GetKeyState( 0 );

if( keystate[SDLK_UP] )
{
// Move camera forward
}
if( keystate[SDLK_DOWN] )
{
// Move camera backward
}
if( keystate[SDLK_LEFT] )
{
// Move camera to the left
}
if( keystate[SDLK_RIGHT] )
{
// Move camera to the right
}
}

Add code that translates the camera. Since raytracing is slow you will need
to set a low width and height of the rendered image to get real-time
performance, e.g. 100 × 100 pixels.

When you have got that working you should add a variable that controls the
rotation of the camera. Use a glm::mat3 to represent the rotation matrix of
the camera:

mat3 R;

Then make the left and right arrow keys rotate the camera around the y-axis.
It might be convenient to add another variable that stores the angle by which
the camera should be rotated around the y-axis:

float yaw;

When the left and right arrow keys are pressed this angle should be updated,
and then the rotation matrix R should be updated to represent a rotation
around the y-axis with this angle. If you do not remember how such a matrix
looks you can have a look in your linear algebra book or at Wikipedia.

If the camera is rotated by the matrix R, then vectors representing the
right (x-axis), down (y-axis) and forward (z-axis) directions can be retrieved
as:

vec3 right( R[0][0], R[0][1], R[0][2] );
vec3 down( R[1][0], R[1][1], R[1][2] );
vec3 forward( R[2][0], R[2][1], R[2][2] );

To model a rotating camera you need to use these directions both when you
move the camera and when you cast rays. Implement this.

As mentioned previously, you probably need to set a low width and height
of the image for the algorithm to be fast enough for real-time motion. Since
raytracing is slow it is typically not used for games and other interactive
real-time applications. It is mainly used to render single images and special
effects for movies. In lab 3 we will implement another algorithm, rasterization,
which is faster and thus more suitable for interactive applications.
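For reference, the rotation matrix around the y-axis can be sketched as below. A hand-rolled column-major mat3 stands in for glm::mat3 (same [column][row] indexing); with GLM you would assign the same values to the columns of R.

```cpp
#include <cassert>
#include <cmath>

// Stand-in for glm::mat3: m[column][row], matching glm's indexing.
struct mat3 { float m[3][3]; };

// Rotation around the y-axis by the angle yaw (in radians).
mat3 RotationY( float yaw )
{
	float c = std::cos( yaw );
	float s = std::sin( yaw );
	mat3 R;
	R.m[0][0] = c;   R.m[0][1] = 0.f; R.m[0][2] = -s;  // right
	R.m[1][0] = 0.f; R.m[1][1] = 1.f; R.m[1][2] = 0.f; // down
	R.m[2][0] = s;   R.m[2][1] = 0.f; R.m[2][2] = c;   // forward
	return R;
}
```

With yaw = 0 this is the identity; as yaw grows, the forward direction swings from the z-axis towards the x-axis.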
After you have got the camera motion working you can change back to a
higher image resolution when doing the rest of the lab.

5 Illumination

You should now have a raytracer in which you can move the camera around.
However, so far all points of a surface have exactly the same color. To add
some more realism to the image we will add a light source. We will model an
omni light. Such a light spreads light equally in all directions from a single
point in space. The light can be defined by two variables: its position and
its power for each color component:

vec3 lightPos( 0, -0.5, -0.7 );
vec3 lightColor = 14.f * vec3( 1, 1, 1 );

The color vector describes the power P, i.e. the energy per time unit E/t, of
the emitted light for each color component. Each component thus has the
physical unit W = J/s. The light will spread uniformly around the point
light source, like an expanding sphere. Therefore, if a surface is further away
from the light source it will receive less light per surface area. To derive a
formula for the received light at a distance r from the light source we can
think about a sphere with radius r centered at the light source. The emitted
light will be spread uniformly over the whole surface area of the sphere,
which is:

A = 4πr² (25)

The power per area B reaching any point on the sphere is therefore:

B = P/A = P/(4πr²) (26)

When working with illumination this physical quantity is often used. Since
it describes power per area it has the unit W/m² = Js⁻¹m⁻². Equation
26 describes the power per area reaching any surface point directly facing
the light source. However, most of the time we will have surfaces that do
not directly face the light source. Let n̂ be a unit vector describing the
normal pointing out from the surface and let r̂ be a unit vector describing
the direction from the surface point to the light source.
Then, if B is the power of light reaching a virtual surface area directly facing
the light source at the surface point, the power per real surface area D will
just be a fraction of this:

D = B max(r̂ · n̂, 0) = P max(r̂ · n̂, 0) / (4πr²) (27)

We get the fraction between D and B by the projection of r̂ on n̂, i.e. a scalar
product of two unit vectors. Since we do not allow negative light we take
the max of the projected value and zero. If the angle between the surface
normal and the direction to the light source is larger than 90 degrees, the
surface does not receive any direct illumination.

One way to think about this projection factor is to consider light as a
stream of particles. If the particles hit the surface at an angle, instead of
straight ahead, they will spread out more over the surface and fewer particles
will hit a fixed area over a fixed time. If it is raining and you place a bucket
outside, it will fill more quickly if it is raining straight from above than if the
wind is blowing and the rain comes from the side.

Now we know how much light reaches any surface point directly
from the light source, i.e. without reflection via another surface. You should
then implement the function:

vec3 DirectLight( const Intersection& i );

It takes an intersection, which gives us the position where we want to know
the direct illumination, as well as the index of the triangle that gets
illuminated; from this we also know the normal of the surface. The function
should then return the resulting direct illumination described by equation
27.

Figure 3: The incoming light.

Modify the Draw function such that it uses the function DirectLight to
compute the color for every pixel, after you have computed the intersection it
corresponds to. You should then get the result of figure 3. You should
also make it possible to move the light source, using the keys W and S to
translate it forward and backward, A and D to translate it left and right,
and Q and E to translate it up and down.
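As a sketch of the computation in equation 27 (shadows are added later, in section 5.1): the lab's DirectLight takes a const Intersection& and looks up the normal via triangleIndex, but here a simplified signature takes the position and normal directly, with a bare-bones vec3, so the snippet runs on its own.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct vec3 { float x, y, z; };  // stand-in for glm::vec3
vec3 operator-( vec3 a, vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
vec3 operator*( float s, vec3 a ) { return { s * a.x, s * a.y, s * a.z }; }
float dot( vec3 a, vec3 b ) { return a.x*b.x + a.y*b.y + a.z*b.z; }

const float PI = 3.14159265358979f;
const vec3 lightPos{ 0.f, -0.5f, -0.7f };
const vec3 lightColor{ 14.f, 14.f, 14.f };  // the power P for each color component

// Equation 27: D = P max(r̂·n̂, 0) / (4πr²), ignoring shadows for now.
// normal is assumed to be a unit vector.
vec3 DirectLight( vec3 position, vec3 normal )
{
	vec3 toLight = lightPos - position;
	float r = std::sqrt( dot( toLight, toLight ) );
	vec3 rHat = ( 1.f / r ) * toLight;   // unit direction towards the light
	float factor = std::max( dot( rHat, normal ), 0.f ) / ( 4.f * PI * r * r );
	return factor * lightColor;
}
```

A surface point one unit below the light and facing it receives P/(4π) per color component; a point facing away receives nothing.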
Do this in the Update function by writing something like:

if( keystate[SDLK_w] )
	lightPos += forward;

However, this result does not correspond to how we would perceive the
surface in the real world. It just tells us how much light reaches a surface
point. What we actually see is the fraction of this that gets reflected in the
direction of the ray. We thus need a model for this.

Figure 4: The left object is ideally diffuse. The right object is ideally
specular. The middle object is somewhere in between.

We will model all surfaces as ideally diffuse (see figure 4). Such surfaces
are simple to simulate because they reflect light equally in all directions. The
viewing angle does not influence the perceived color of such a surface point.
It will look the same from all directions, having no specular highlights. In
that sense it is the opposite of a mirror, which will look completely different
depending on the viewing angle. A mirror has an ideally specular surface
(see figure 4). Let ρ = (ρr, ρg, ρb) describe the fraction of the incoming light
that gets reflected by the diffuse surface for each color component. Then the
light that gets reflected is:

R = ρ ∗ D = ρ ∗ P max(r̂ · n̂, 0) / (4πr²) (28)

where the operator ∗ denotes element-wise multiplication between two
vectors. If we use glm::vec3 for 3D vectors we can compute this element-wise
multiplication by just writing:

vec3 A(1,2,3);
vec3 B(4,5,6);
vec3 C = A*B; // C will be (4,10,18)

since GLM has overloaded the operator * for two glm::vec3 to represent
element-wise multiplication. Extend the Draw function such that only the
reflected light gets drawn. The color stored in each triangle is its ρ. This
should give the result seen in figure 5.

Figure 5: Direct illumination, without shadows.

5.1 Direct Shadows

So far a surface gets no direct illumination if it does not face the light source.
However, we would also like to simulate shadows when another surface
intersects the ray from the light source to the surface.
To simulate this we can use the ClosestIntersection function that we use to
cast rays. When illuminating a surface point we cast another ray from it to
the light source. Then we check the distance to the closest intersecting
surface. If that is closer than the light source, the surface is in shadow and
does not receive any direct illumination from the light source. Add this to
your DirectLight function. If it works you should see shadows like in figure 6.

Figure 6: Direct illumination with shadows.

5.2 Indirect Illumination

Just implementing the direct illumination will make some surface points
appear pitch black. It only simulates one bounce of light, from the light
source via a surface point to the camera. In the real world light keeps
reflecting multiple times and therefore reaches all surfaces at some point. At
each reflection some of the light is absorbed by the surface, as specified by
its ρ. This indirect illumination results in the smooth shadows that we see
in the real world.

It is very computationally expensive to simulate the multiple bounces
of light required for an accurate computation of indirect illumination. For the
optional lab 4 you have the opportunity to do this, but for this lab it is not
necessary. Instead, we will use a very simple approximation: we just assume
that the indirect illumination is the same for every surface of the scene. Let
N be the power per surface area of this constant indirect illumination. Then
the total incident illumination of a surface point is:

T = D + N (29)

And the reflected light is:

R = ρ ∗ T = ρ ∗ (D + N) (30)

First create a new variable to store the constant approximation of the
indirect illumination:

vec3 indirectLight = 0.5f * vec3( 1, 1, 1 );

Then implement this illumination model, equation 30, in your Draw function.
You should not alter the function DirectLight, as it should only return the
direct illumination, which it already does.
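Equation 30 is then a one-line combination once the element-wise product is available. A self-contained sketch (with GLM, operator* on two glm::vec3 already is the element-wise product):

```cpp
#include <cassert>

struct vec3 { float x, y, z; };  // stand-in for glm::vec3
vec3 operator+( vec3 a, vec3 b ) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
vec3 operator*( vec3 a, vec3 b )  // element-wise product, like glm's vec3 *
{ return { a.x * b.x, a.y * b.y, a.z * b.z }; }

// Equation 30: R = ρ ∗ (D + N), where D is the direct and N the constant
// indirect illumination, and rho is the triangle's color.
vec3 ReflectedLight( vec3 rho, vec3 D, vec3 N )
{
	return rho * ( D + N );
}
```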
The result is an image where no surface is completely black, like in figure 7.

Figure 7: Direct illumination and a constant approximation of indirect
illumination.

6 Extensions

You have now written a raytracer that handles direct illumination and
shadows, and approximates indirect illumination with a constant term. Now
you have the opportunity to extend the raytracer in a manner of your choice.
Try to think of something that you enjoy doing yourself and then extend the
raytracer in that direction. You could for example add things like:

• Soft edges (anti-aliasing), by sending multiple slightly perturbed rays
for each pixel.

• Depth of field, by modeling the camera lens. Includes soft edges.

• Indirect illumination by multiple bounces of light.

• Textures. Can be used for color/reflectance and/or normal maps, and/or
parallax mapping.

• Loading of general models.

• Storing the geometry in a hierarchical spatial structure for faster
intersection queries. This speeds up the rendering, which is good for more
complex/interesting scenes.

• Specular materials. For example, you could simulate water with normal
maps.

• Using fractals to automatically generate geometry and textures for
mountains, clouds and water.

• Optimisations: a raytracer is incredibly parallel, and you can exploit
this to improve speed.

• Including elements from global illumination.

• etc.

The grade that you get for your extension depends on two things: the
challenge involved in understanding and implementing the extension, and
also its execution. I am really keen to see what your thoughts are and what
you are planning to do, so come and have a chat with me and I will be able
to give you an indication of the credits that different things will give you.
Dare to be innovative and follow your own interest; I am sure we can
somehow make computer graphics out of it.

Good Luck and Happy Hacking
