We have studied discrete time processes to model a stock price, but many of the most important results in financial mathematics will require us to consider continuous time processes.
The most important example of a continuous time stochastic process is the Wiener process, named after Norbert Wiener. This process is also called Brownian motion.
Brownian motion is named after the Scottish botanist Robert Brown, who described the random motion of pollen particles immersed in water that he observed under a microscope. This random motion is strong evidence for the atomic theory: it suggests that the motion of tiny pollen particles can be explained by assuming they are being hit at random by water molecules, and hence that at a small scale water is not a continuous medium.
Einstein pursued this idea further and developed a quantitative theory, which was published in 1905. It is widely agreed that the numerical tests of this theory, and the resulting determination of Avogadro's constant, led to the general acceptance of the atomic theory.


Einstein published four ground-breaking papers in 1905, including this paper, his theory of special relativity, a paper on the equivalence of mass and energy (E = mc²) and a paper on the photoelectric effect, for which he won the Nobel Prize in Physics.
The mathematical theory of Brownian motion as we understand it today was developed by Norbert Wiener in 1923. We will review his construction of Brownian motion this week.
Robert Brown (1773-1858)
Botanist
Reported observations of pollen moving erratically in water, which can be explained using the atomic theory.

Albert Einstein (1879-1955)
Developed a quantitative theory of Brownian motion that allowed the atomic theory to be tested.
Key Paper: Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen (English: On the movement of small particles suspended in a stationary liquid demanded by the molecular-kinetic theory of heat), 1905

Norbert Wiener (1894-1964)
Defined the first interesting continuous time stochastic process: continuous time Brownian motion.
Key Paper: Differential Space, Journal of Mathematics and Physics (1923)

Once we can define the Wiener process, we can write down stochastic differential equations that involve this process.
Studying these equations is a central technique in mathematical finance, rather as classical calculus is a central technique in classical physics.
According to Newton's theory, a planet moving under an inverse square force moves in a perfect ellipse.
This would not be true of a discrete time approximation to Newton's theory. This is an example of how working in continuous time can make calculations easier than working in discrete time.
On the other hand, the theory of calculus used by Newton wasn't rigorous.
It took until the 19th century to develop a rigorous theory of calculus. Hopefully most of you have studied rigorous proofs of results such as Taylor's theorem and the fundamental theorem of calculus. They aren't easy, even though calculus itself is quite easy to use.
The same applies to stochastic calculus. The rigorous proofs of results from stochastic calculus are quite demanding. You will see some of them if you choose to study FM04.
Unit tests
I've focused so far on the mathematical subjects we'll learn this week. The key programming skill this week is testing. The key messages I wish to get across are:
You should write code using lots of small functions. You should write a test for every function you write.
If you don't follow this advice, you will find it difficult to debug your code and you will never be confident that your code is correct.
Learning Outcomes
This week you will be able to:
Simulate a Wiener process in arbitrary detail
Approximate the solution to an SDE using a discrete time simulation
Test your code by writing unit tests
1 Wiener's Construction
We have studied a discrete time model for stock prices and have seen that a particularly elegant model is given by discrete time geometric Brownian motion. Equivalently, we could say that a particularly elegant model for the log of the stock price is given by discrete time Brownian motion. We will study Brownian motion in a more abstract setting today and study its mathematical properties separately from its applications in finance. For this reason we will focus on the simpler process of Brownian motion rather than the slightly more complex process of geometric Brownian motion.
As last week, T = {0, δt, 2δt, 3δt, ..., nδt = T} is a discrete time grid. Here n is the number of steps in our grid.

Definition: Z_t follows discrete time Brownian motion with initial value Z_0, drift μ and volatility σ > 0 if it satisfies

\[
Z_{t+\delta t} = Z_t + \mu\,\delta t + \sigma\sqrt{\delta t}\,\epsilon_t
\]

where the (ε_t)_{t∈T} are independent, standard normal random variables.

In fact, we can learn all we need by studying a slightly simpler process.

Definition: W_t is a discrete time Wiener process on T if it satisfies

\[
W_{t+\delta t} = W_t + \sqrt{\delta t}\,\epsilon_t
\]

where the ε_t are independent standard normal random variables for t ∈ T.

Lemma: W_t is a discrete time Wiener process if and only if Z_0 + μt + σW_t is a Brownian motion with drift μ and volatility σ. If Z_t is a discrete time Brownian motion, then (1/σ)(Z_t − Z_0 − μt) is a discrete time Wiener process.

The proof of this is a trivial exercise. It shows that the only interesting aspects of Brownian motion are determined by the corresponding Wiener process. We can therefore safely specialise to studying the Wiener process.
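The second part of the lemma also tells us how to convert a simulated Wiener path into a Brownian motion path numerically. As a minimal sketch (the function name and the parameters z0, mu and sigma are my own, not from the notes):

import numpy as np

def brownian_from_wiener(times, W, z0, mu, sigma):
    # Apply Z_t = Z_0 + mu*t + sigma*W_t pointwise along a simulated path.
    return z0 + mu * np.asarray(times) + sigma * np.asarray(W)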
import numpy as np
import matplotlib.pyplot as plt

def simulate_wiener_paths(T, n_paths, n):
    """Simulate discrete time Wiener process paths on [0, T] with n steps."""
    dt = T / n
    W = np.zeros([n_paths, n + 1])
    times = np.linspace(0, T, n + 1)
    epsilon = np.random.randn(n_paths, n)
    for i in range(0, n):
        W[:, i + 1] = W[:, i] + np.sqrt(dt) * epsilon[:, i]
    return W, times

def plot_wiener(T, n):
    W, times = simulate_wiener_paths(T, 1, n)
    plt.plot(times, W[0, :], label='{} steps'.format(n))

for n in [50, 100, 500, 1000, 5000, 10000]:
    plot_wiener(1, n)
plt.legend();

Does a discrete time Wiener process with 5000 steps look like a discrete time Wiener process with 10000 steps?
If we let n tend to infinity, will our graphs converge to a limit?
The answer to the first question is somewhat subjective, but it would appear to me to be yes: I cannot tell without looking at the legend which line has 5000 steps and which has 10000 steps.
The answer to the second question is clear: the graphs will not converge as n tends to infinity. Each time we change n and generate a new plot, we get an essentially different path which may be a long way from the original path. The issue is that as we change n, we are also changing the random numbers used to generate the graph.
However, you may feel that there is some sort of convergence taking place as n tends to infinity. Is there some way that we can change n but in some sense fix the random numbers used, so that we obtain convergence as n tends to infinity? Wiener found a way to do this, and this gives his construction of continuous time Brownian motion.
Wiener's idea was to change the order of the simulation. Instead of simulating at consecutive time points, we will simulate at time points in the following order (shown as fractions of T):

1, 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, 1/16, 3/16, 5/16, 7/16, 9/16, 11/16, 13/16, 15/16, ...

In words, we simulate at the ends, then the halfway points, then the new halfway points, and so on ad infinitum.
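As a small illustration of this ordering (the helper below is my own, not part of the notes), here is one way to list the simulation times up to level n:

def wiener_simulation_order(T, n):
    # Level 0 gives the right endpoint T; level m >= 1 gives the odd multiples of T/2**m.
    order = [T]
    for m in range(1, n + 1):
        order += [i * T / 2**m for i in range(1, 2**m, 2)]
    return order

print(wiener_simulation_order(1, 4))
# e.g. [1, 0.5, 0.25, 0.75, 0.125, 0.375, 0.625, 0.875, 0.0625, 0.1875, ...]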
When simulating discrete time Brownian motion on {0, δt/2, δt} consecutively we write

\[
W_0 = 0, \qquad
W_{\delta t/2} = \sqrt{\tfrac{\delta t}{2}}\,\epsilon_{\delta t/2}, \qquad
W_{\delta t} = W_{\delta t/2} + \sqrt{\tfrac{\delta t}{2}}\,\epsilon_{\delta t}
= \sqrt{\tfrac{\delta t}{2}}\,\epsilon_{\delta t/2} + \sqrt{\tfrac{\delta t}{2}}\,\epsilon_{\delta t}
\]

for independent standard normals ε_{δt/2} and ε_{δt}.

We have already studied linear transformations of independent normal random variables. We know that this gives a multivariate normal distribution.

We see that (W_{δt/2}, W_{δt}) has a multivariate normal distribution with covariance matrix

\[
\begin{pmatrix} \sqrt{\delta t/2} & 0 \\ \sqrt{\delta t/2} & \sqrt{\delta t/2} \end{pmatrix}
\begin{pmatrix} \sqrt{\delta t/2} & \sqrt{\delta t/2} \\ 0 & \sqrt{\delta t/2} \end{pmatrix}
=
\begin{pmatrix} \delta t/2 & \delta t/2 \\ \delta t/2 & \delta t \end{pmatrix}.
\]
If Σ is a positive definite symmetric matrix then a matrix A satisfying AAᵀ = Σ is called a p- s- r- of Σ. The C- d- of Σ is the unique p- s- r- of Σ with positive diagonal.
It follows that (W_{δt}, W_{δt/2}) follows a multivariate normal distribution with covariance matrix

\[
\Sigma = \begin{pmatrix} \delta t & \delta t/2 \\ \delta t/2 & \delta t/2 \end{pmatrix}.
\]

The Cholesky decomposition of this matrix is given by solving

\[
\begin{pmatrix} \alpha & 0 \\ \beta & \gamma \end{pmatrix}
\begin{pmatrix} \alpha & \beta \\ 0 & \gamma \end{pmatrix}
=
\begin{pmatrix} \delta t & \delta t/2 \\ \delta t/2 & \delta t/2 \end{pmatrix}
\]

with α > 0 and γ > 0. In components this reads

\[
\alpha^2 = \delta t, \qquad \alpha\beta = \tfrac{\delta t}{2}, \qquad \beta^2 + \gamma^2 = \tfrac{\delta t}{2}.
\]

So since α > 0, α = √δt, β = ½√δt, and hence γ² = δt/4. Since γ > 0, γ = ½√δt. So if we write

\[
A = \begin{pmatrix} \sqrt{\delta t} & 0 \\ \tfrac{\sqrt{\delta t}}{2} & \tfrac{\sqrt{\delta t}}{2} \end{pmatrix}
\]

then AAᵀ = Σ, where Σ is the covariance matrix of (W_{δt}, W_{δt/2}).
So by the properties of the Cholesky decomposition, if we define

\[
\begin{pmatrix} \tilde\epsilon_{\delta t} \\ \tilde\epsilon_{\delta t/2} \end{pmatrix}
:= A^{-1} \begin{pmatrix} W_{\delta t} \\ W_{\delta t/2} \end{pmatrix}
\]

then ε̃_{δt} and ε̃_{δt/2} are independent standard normal random variables. Equivalently

\[
\begin{pmatrix} W_{\delta t} \\ W_{\delta t/2} \end{pmatrix}
=
\begin{pmatrix} \sqrt{\delta t} & 0 \\ \tfrac{\sqrt{\delta t}}{2} & \tfrac{\sqrt{\delta t}}{2} \end{pmatrix}
\begin{pmatrix} \tilde\epsilon_{\delta t} \\ \tilde\epsilon_{\delta t/2} \end{pmatrix},
\]

or in components

\[
W_{\delta t} = \sqrt{\delta t}\,\tilde\epsilon_{\delta t},
\]
\[
W_{\delta t/2} = \tfrac{\sqrt{\delta t}}{2}\,\tilde\epsilon_{\delta t} + \tfrac{\sqrt{\delta t}}{2}\,\tilde\epsilon_{\delta t/2}
= \tfrac{1}{2}(W_0 + W_{\delta t}) + \tfrac{\sqrt{\delta t}}{2}\,\tilde\epsilon_{\delta t/2}.
\]
We have proved:

Lemma: If ε̃_{δt} and ε̃_{δt/2} are independent standard normals and we define

\[
W_0 = 0, \qquad
W_{\delta t} = \sqrt{\delta t}\,\tilde\epsilon_{\delta t}, \qquad
W_{\delta t/2} = \tfrac{1}{2}(W_0 + W_{\delta t}) + \tfrac{\sqrt{\delta t}}{2}\,\tilde\epsilon_{\delta t/2},
\]

then W_0, W_{δt/2}, W_{δt} define a discrete time Wiener process on {0, δt/2, δt}.
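As a quick numerical sanity check of this lemma (this snippet is my own sketch, not from the notes), we can simulate many pairs (W_{δt/2}, W_{δt}) with the formulas above and compare the sample covariance matrix with the one computed earlier:

import numpy as np

n_samples = 100_000
dt = 1.0
eps_dt = np.random.randn(n_samples)    # epsilon_tilde_{dt}
eps_half = np.random.randn(n_samples)  # epsilon_tilde_{dt/2}

W_dt = np.sqrt(dt) * eps_dt
W_half = 0.5 * (0 + W_dt) + 0.5 * np.sqrt(dt) * eps_half

# The sample covariance of (W_{dt/2}, W_{dt}) should be close to
# [[dt/2, dt/2], [dt/2, dt]].
print(np.cov(np.vstack([W_half, W_dt])))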
Suppose that for some m and an odd integer i ∈ {1, 2, 3, ..., 2^m − 1} we wish to simulate W_{iT/2^m}, and that we have simulated W_{(i−1)T/2^m} and W_{(i+1)T/2^m} already. Then write

\[
\tilde W_t = W_{\frac{(i-1)T}{2^m} + t} - W_{\frac{(i-1)T}{2^m}}
\]

so that W̃_t should be a discrete time Wiener process on {0, T/2^m, 2T/2^m}. Our Lemma shows that we can simulate W̃_{T/2^m} as

\[
\tilde W_{T/2^m} = \tfrac{1}{2}\left(\tilde W_0 + \tilde W_{2T/2^m}\right) + \tfrac{1}{2}\sqrt{\tfrac{2T}{2^m}}\,\tilde\epsilon
\]

for some standard normal ε̃ independent of the random variables used previously. Hence we can simulate W_{iT/2^m} using

\[
W_{\frac{iT}{2^m}} = \tilde W_{T/2^m} + W_{\frac{(i-1)T}{2^m}}
= \tfrac{1}{2}\left(W_{\frac{(i-1)T}{2^m}} + W_{\frac{(i+1)T}{2^m}}\right) + \tfrac{1}{2}\sqrt{\tfrac{2T}{2^m}}\,\tilde\epsilon
\]

for some standard normal ε̃ which is independent of the normals used previously.
Theorem: If ε_{iT/2^n} are independent standard normals for i ∈ {1, 2, ..., 2^n} and we define W_0 = 0,

\[
W_T = \sqrt{T}\,\epsilon_T,
\]

and, for any m ≤ n and odd i ∈ {1, 2, 3, ..., 2^m − 1},

\[
W_{\frac{iT}{2^m}} = \tfrac{1}{2}\left(W_{\frac{(i-1)T}{2^m}} + W_{\frac{(i+1)T}{2^m}}\right) + \sqrt{\tfrac{T}{2^{m+1}}}\,\epsilon_{\frac{iT}{2^m}},
\]

then W_t will be a discrete time Wiener process on

\[
T = \left\{0, \tfrac{T}{2^n}, \tfrac{2T}{2^n}, \tfrac{3T}{2^n}, \ldots, T\right\}.
\]

(Note that √(T/2^{m+1}) = ½√(2T/2^m), so this is exactly the midpoint formula from the Lemma applied to neighbouring points a distance 2T/2^m apart.)
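A possible implementation of one refinement step suggested by this theorem is sketched below; the function name and interface are my own, and Exercise 2.5 asks you to implement (and test) the construction properly yourself:

import numpy as np

def refine_once(times, W):
    # Insert midpoints into an existing path: for neighbours a distance h apart,
    # W_mid = (W_left + W_right)/2 + 0.5*sqrt(h)*eps with a fresh independent normal eps.
    n = len(times) - 1
    new_times = np.empty(2 * n + 1)
    new_W = np.empty(2 * n + 1)
    new_times[0::2] = times
    new_W[0::2] = W
    h = np.diff(times)
    eps = np.random.randn(n)
    new_times[1::2] = 0.5 * (times[:-1] + times[1:])
    new_W[1::2] = 0.5 * (W[:-1] + W[1:]) + 0.5 * np.sqrt(h) * eps
    return new_times, new_W

Starting from times = np.array([0.0, T]) and W = np.array([0.0, np.sqrt(T) * np.random.randn()]) and calling refine_once repeatedly produces the grids {0, T/2, T}, {0, T/4, T/2, 3T/4, T} and so on.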

2 Exercises
2.1 Exercise
Fill in the blanks.
If Σ is a positive definite symmetric matrix then a matrix A satisfying AAᵀ = Σ is called a p- s- r- of Σ. The C- d- of Σ is the unique p- s- r- of Σ with positive diagonal.
2.2 Exercise
Prove the following:
Lemma: W_t is a discrete time Wiener process if and only if Z_0 + μt + σW_t is a Brownian motion with drift μ and volatility σ. If Z_t is a discrete time Brownian motion, then (1/σ)(Z_t − Z_0 − μt) is a discrete time Wiener process.
2.3 Exercise
The following claim is made in the notes. Prove it:
So by the properties of the Cholesky decomposition, if we define

\[
\begin{pmatrix} \tilde\epsilon_{\delta t} \\ \tilde\epsilon_{\delta t/2} \end{pmatrix}
:= A^{-1} \begin{pmatrix} W_{\delta t} \\ W_{\delta t/2} \end{pmatrix}
\]

then ε̃_{δt} and ε̃_{δt/2} are independent standard normal random variables.
2.4 Exercise
Suppose that W_t is a discrete time Wiener process defined on T = {0, 1/4, 1/2, 3/4, 1}. Compute the covariance matrix of the vector (W_1, W_{1/2}, W_{1/4}, W_{3/4}).
Store the answer in a matrix called cov_matrix. What is the Cholesky decomposition of this matrix? Store the answer in a matrix called chol_matrix.
2.5 Exercise
Let n = 2. Write a Python function simulate_wiener_4_points that takes as input an N × 4 matrix epsilon of independent, standard normal random variables and returns a matrix with each row representing a different path of the Wiener process and the columns representing the time points (0, 1/4, 1/2, 3/4, 1) in sequence.
You must follow Wiener's construction, so that you simulate the values in the order (W_0, W_1, W_{1/2}, W_{1/4}, W_{3/4}).
Check numerically that the variances of the differences δW_i := W_{i/2^n} − W_{(i−1)/2^n} take the values you expect.
Check numerically that the differences δW_{i+1} and δW_{j+1} are uncorrelated if i ≠ j.
3 Continuous Time Wiener Processes
Definition: A stochastic process W_t, t ∈ ℝ_{≥0}, is called a Wiener process if it satisfies the following:
W has independent increments: for every t > 0, the increments W_{t+u} − W_t for u > 0 are independent of the past values W_s for s ≤ t.
The increments W_{t+u} − W_t are normally distributed with mean 0 and standard deviation √u.
W_t is almost surely continuous in t.

Many courses on stochastic analysis use this definition of a Wiener process, but never actually prove that such a process exists.
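One standard consequence of this definition, not stated explicitly in the notes but useful when thinking about Exercise 2.4, is the covariance formula: for s ≤ t,

\[
\operatorname{Cov}(W_s, W_t) = \operatorname{Cov}\bigl(W_s,\, W_s + (W_t - W_s)\bigr) = \operatorname{Var}(W_s) = s = \min(s, t),
\]

using the fact that the increment W_t − W_s is independent of W_s and that Var(W_s) = s.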
Wiener's construction shows how to simulate W at points of the form iT/2^n for positive integers i and n. If we assume continuity, we can then define W at all points t ∈ ℝ_{≥0}, since the set of points of the form iT/2^n is dense.
It is possible to prove that if we follow Wiener's construction then W_t will almost surely be continuous. Proving this isn't too difficult, but it probably requires a bit more probability theory than you currently know.

A property holds almost surely if the probability of it holding is 1. This isn't the same thing as saying that it is certain. For example, if you pick a uniformly distributed random number on [0, 1], the probability of it being irrational is 1. So a uniformly distributed random number is almost surely irrational.
Most definitions of a Wiener process state that W_t should always be continuous rather than just almost surely continuous. If you wish to insist on this, you can just throw away any samples of W_t that happen not to be continuous, and in this way one can prove formally that a Wiener process exists. If you have a good grasp of probability theory, you will understand that the difference between something always happening and something almost surely happening is not important.
The point of Wiener's construction is that it allows us to indefinitely refine a sample of Brownian motion so that we can see arbitrarily fine detail. We can use this to write code that zooms in on a particular part of a Brownian motion in arbitrary detail.
You can see an animation of this here.
The horizontal axis is time. The vertical axis is Wt.
As we zoom in, the time axis and the vertical axis are scaled in different proportions. You can see this as the rectangles are tall and thin before we zoom in but become fatter as we zoom in.
We are zooming in in such a way as to ensure that the roughness stays the same as we zoom in. This means we need to scale according to the square root of time.
To see the square root structure of the scaling, look at the corners of the rectangles in the animation. They all lie on parabolic curves with the equation y = ±√|t|, which is what the following code plots:
t = np.linspace(-2,2,1001)
y = np.sqrt(np.abs(t))
plt.plot(t,y)
plt.plot(t,-y);
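The square root rescaling used here reflects the standard scaling property of the Wiener process, which is stated below for reference but not proved in these notes:

\[
\text{for any fixed } \lambda > 0, \quad \bigl(\lambda^{-1/2} W_{\lambda t}\bigr)_{t \ge 0} \text{ is again a Wiener process.}
\]

So zooming the time axis by a factor λ and the vertical axis by √λ leaves the statistical behaviour of the path unchanged, which is why the rescaled plots look alike.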

As you can see, if we rescale in this way, the Wiener process looks broadly self-similar at all scales.
It is an example of a fractal. Other famous examples of fractals include snowflakes, cauliflowers, ferns and coastlines.

The word fractal comes from 'fractional dimension'. The graph of a Wiener process looks a bit fatter than just a line; it is somewhere between 1- and 2-dimensional. In fact it has a fractal dimension of 3/2. This is closely related to the fact that the dimensions of volatility are year^{-1/2}.
4 Creating the Animation and Unit Tests
Showing the full code used to create the animation of zooming into the Wiener process would require spending some time discussing the general issue of creating animations using Python, and would involve some code that would obscure the mathematics. Instead we'll look at a simplified problem, which is to write a single function, which we will call zoom, that you can call repeatedly to zoom into a Brownian motion in arbitrary detail. If you have time, you could call this function infinitely often!
To avoid wasting memory, we will only maintain two variables between calls to our zoom function. W will be a vector containing the values of the Brownian motion that were plotted in the last frame, and t will be a vector consisting of the times plotted. We'll only ever plot about 1000 points, so even though you can keep zooming in forever, our program won't need much memory.

Each time we zoom in, we will simulate W at all times halfway between the times in the previous value of t. Suppose for example that the current value of t is (0, 1/4, 1/2, 3/4, 1); then the halfway times would be (1/8, 3/8, 5/8, 7/8). Here's a function that computes the halfway times.
def compute_halfway_times(t):
    all_but_last = t[0:-1]
    all_but_first = t[1:]
    return 0.5 * (all_but_last + all_but_first)

Here is some code that checks that compute_halfway_times works:

t = np.array([0, 1/4, 1/2, 3/4, 1])
print(compute_halfway_times(t))

[0.125 0.375 0.625 0.875]
But there is a problem. We have to manually check this by looking at the numbers and thinking about it.
In real software there are millions of lines of code, any of which may need to be changed if someone finds a bug. This means that you constantly need to retest millions of lines of software. You cannot afford to manually retest everything.
To help you test code, Python contains a special command assert which checks that a statement is true. If the statement is not true, Python stops with an error. If the statement is true, Python does nothing at all.
# This does nothing at all
assert 2+2==4
Whereas if you run the next line, you get an error.
assert 2+2==5

AssertionError Traceback (most recent call last)
in
-> 1 assert 2+2==5
AssertionError:
Remember that Python only stores numbers to a certain accuracy, so you should never test that floating point numbers are equal using ==; instead, test that their difference is small.
For example, this code produces an error
assert (np.sin(np.pi) == 0)
Whereas this code does not
assert (abs(np.sin(np.pi) - 0) < 0.00000001)

To write an automated test for your code you should write a function that takes no parameters and which will do nothing if your code works. Such a function is called a unit test. If your code doesn't work, your test function should fail with an error. Here's an example:

def test_compute_halfway_times():
    t = np.array([0, 1/4, 1/2, 3/4, 1])
    halfway_times = compute_halfway_times(t)
    expected = np.array([1/8, 3/8, 5/8, 7/8])
    for i in range(0, len(expected)):
        assert abs(halfway_times[i] - expected[i]) < 0.001

To test that our code works, we see if the unit test does nothing at all!

test_compute_halfway_times()

Most novice programmers think that their tests should print something out to show that they have worked. But suppose you have 1000 tests all printing things out. To see if anything has failed, you'll have to read through 1000 lines and spot any problems. If your tests do nothing when they work, you can quickly tell if 1000 tests have all passed: did they all do nothing at all?

One of the biggest improvements in computer programming occurred in the early 2000s when people realised you should unit test all of your code. I personally spent a year with my main focus being to get programming teams to unit test their code.

When you do your coursework or write an MSc dissertation you will naturally want to ask the question: are my answers correct? It is your job, not my job, to answer this question. If you cannot be confident that your code works then, in my view, your code is essentially useless. It follows from this that if you cannot test your own code, you cannot program a computer.

The next task we would like to perform is to merge together the vector t of times and the new vector of halfway times. We will write a function riffle to perform this task. The word riffle refers to a particular way of shuffling cards.

The riffle function should take two vectors v1 and v2 as parameters and should alternate their entries. For example if we riffle [0, 1, 2] and [0.5, 1.5] we should get [0, 0.5, 1, 1.5, 2].
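A minimal sketch of such a riffle function, assuming v1 has exactly one more entry than v2 (as when merging the times with the halfway times), might look like this; the details below are an illustration rather than the author's implementation:

def riffle(v1, v2):
    # Alternate the entries of v1 and v2, starting with v1.
    # Assumes len(v1) == len(v2) + 1.
    out = np.empty(len(v1) + len(v2))
    out[0::2] = v1
    out[1::2] = v2
    return out

def test_riffle():
    result = riffle(np.array([0.0, 1.0, 2.0]), np.array([0.5, 1.5]))
    expected = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    for i in range(0, len(expected)):
        assert abs(result[i] - expected[i]) < 0.001

test_riffle()

Following the conventions above, test_riffle does nothing at all if riffle behaves as described.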
