
[SOLVED] Stats 202 homework 1 to 4 solutions

$25

File Name: Stats_202_homework_1_to_4_solutions.zip
File Size: 329.7 KB


Problem 1 (4 points)
Chapter 2, Exercise 2 (p. 52).
Problem 2 (4 points)
Chapter 2, Exercise 3 (p. 52).
Problem 3 (4 points)
Chapter 2, Exercise 7 (p. 53).
Problem 4 (4 points)
Chapter 10, Exercise 1 (p. 413).
Problem 5 (4 points)
Chapter 10, Exercise 2 (p. 413).
Problem 6 (4 points)
Chapter 10, Exercise 4 (p. 414).
Problem 7 (4 points)
Chapter 10, Exercise 9 (p. 416).
Problem 8 (4 points)
Chapter 3, Exercise 4 (p. 120).
Problem 9 (4 points)
Chapter 3, Exercise 9 (p. 122). In parts (e) and (f), you need only try a few interactions and transformations.
Problem 10 (4 points)
Chapter 3, Exercise 14 (p. 125).
Problem 11 (5 points)
Let $x_1, \ldots, x_n$ be a fixed set of input points and $y_i = f(x_i) + \epsilon_i$, where $\epsilon_i \overset{\text{iid}}{\sim} P$ with $E(\epsilon_i) = 0$ and $\mathrm{Var}(\epsilon_i) < \infty$. Prove that the MSE of a regression estimate $\hat{f}$ fit to $(x_1, y_1), \ldots, (x_n, y_n)$ at a random test point $x_0$, i.e. $E\big[(y_0 - \hat{f}(x_0))^2\big]$, decomposes into variance, squared bias, and irreducible error components.
Hint: You can apply the bias-variance decomposition proved in class.
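For reference, a sketch of the target identity (the notation here is ours, with the expectation taken over both the training sample and the test pair $(x_0, y_0)$):

$$E\big[(y_0 - \hat{f}(x_0))^2\big] = \mathrm{Var}\big(\hat{f}(x_0)\big) + \big[E\,\hat{f}(x_0) - f(x_0)\big]^2 + \mathrm{Var}(\epsilon)$$

where the first term is the variance, the second the squared bias, and $\mathrm{Var}(\epsilon)$ the irreducible error.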
Problem 12 (5 points)
Consider the regression through the origin model (i.e. with no intercept):
$$y_i = \beta x_i + \epsilon_i \tag{1}$$
(a) (1 point) Find the least squares estimate for $\beta$.
(b) (2 points) Assume $\epsilon_i \overset{\text{iid}}{\sim} P$ such that $E(\epsilon_i) = 0$ and $\mathrm{Var}(\epsilon_i) = \sigma^2 < \infty$. Find the standard error of the estimate.
(c) (2 points) Find conditions that guarantee that the estimator is consistent. N.B. an estimator $\hat{\beta}_n$ of a parameter $\beta$ is consistent if $\hat{\beta}_n \overset{p}{\to} \beta$, i.e. if the estimator converges to the parameter value in probability.
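As a sanity check on parts (a) and (b), here is a minimal Python sketch assuming the standard closed forms $\hat{\beta} = \sum_i x_i y_i / \sum_i x_i^2$ and $\mathrm{se}(\hat{\beta}) = \sigma / \sqrt{\sum_i x_i^2}$ (our assumptions here; deriving them is the point of the problem):

import numpy as np

rng = np.random.default_rng(0)
n, beta, sigma = 200, 2.5, 1.0
x = rng.uniform(1.0, 3.0, size=n)              # fixed input points x_1, ..., x_n
y = beta * x + rng.normal(0.0, sigma, size=n)  # y_i = beta * x_i + eps_i

# Assumed closed-form least squares estimate for regression through the origin.
beta_hat = np.sum(x * y) / np.sum(x ** 2)

# Cross-check against numpy's least squares solver (no intercept column).
beta_lstsq = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
print(beta_hat, beta_lstsq)  # the two estimates should agree to machine precision

# Assumed standard error: sigma / sqrt(sum of x_i^2).
print(sigma / np.sqrt(np.sum(x ** 2)))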

Introduction
Homework problems are selected from the course textbook: An Introduction to Statistical Learning.
Problem 1 (5 points)
Chapter 4, Exercise 1 (p. 168).
Problem 2 (5 points)
Chapter 4, Exercise 4 (p. 168).
Problem 3 (5 points)
Chapter 4, Exercise 6 (p. 170).
Problem 4 (5 points)
Chapter 4, Exercise 8 (p. 170).
Problem 5 (5 points)
Chapter 4, Exercise 10, parts (a)-(h) (p. 171).
Problem 6 (5 points)
Chapter 5, Exercise 2 (p. 197).
Problem 7 (5 points)
Chapter 5, Exercise 5 (p. 198).
Problem 8 (5 points)
Chapter 5, Exercise 6 (p. 199).
Problem 9 (5 points)
Chapter 5, Exercise 8 (p. 200).
Problem 10 (5 points)
Chapter 5, Exercise 9 (p. 201).

Introduction
Homework problems are selected from the course textbook: An Introduction to Statistical Learning.
Problem 1 (7 points)
Chapter 6, Exercise 3 (p. 260).
Problem 2 (7 points)
Chapter 6, Exercise 4 (p. 260).
Problem 3 (7 points)
Chapter 6, Exercise 9 (p. 263). Don’t do parts (e), (f), and (g).
Problem 4 (7 points)
Chapter 7, Exercise 1 (p. 297).
Problem 5 (7 points)
Chapter 7, Exercise 8 (p. 299). Find at least one non-linear estimate that does better than linear regression, and justify this using a t-test or by showing an improvement in cross-validation error relative to a linear model. You must also produce a plot of the predictor $X$ vs. the non-linear estimate $\hat{f}(X)$.
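A minimal Python sketch of the requested comparison, using scikit-learn and a synthetic single-predictor data set in place of the textbook's (the data and the degree-4 polynomial fit are illustrative assumptions, not the required answer):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic stand-in for the data set: one predictor X, response y.
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.3, size=300)

linear = LinearRegression()
poly4 = make_pipeline(PolynomialFeatures(degree=4), LinearRegression())

# 10-fold cross-validated MSE for each model (sklearn returns negated MSE).
mse_lin = -cross_val_score(linear, X, y, cv=10, scoring="neg_mean_squared_error").mean()
mse_poly = -cross_val_score(poly4, X, y, cv=10, scoring="neg_mean_squared_error").mean()
print(f"linear CV MSE: {mse_lin:.3f}, degree-4 CV MSE: {mse_poly:.3f}")

# Plot of the predictor X vs. the fitted non-linear estimate f_hat(X).
import matplotlib.pyplot as plt
poly4.fit(X, y)
grid = np.linspace(-3.0, 3.0, 200)[:, None]
plt.scatter(X[:, 0], y, s=8, alpha=0.4)
plt.plot(grid[:, 0], poly4.predict(grid), color="red")
plt.xlabel("X")
plt.ylabel("f_hat(X)")
plt.show()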
Problem 6 (7 points)
Chapter 9, Exercise 1 (p. 368).
Problem 7 (8 points)
Chapter 9, Exercise 8 (p. 371).

Introduction
Homework problems are selected from the course textbook: An Introduction to Statistical Learning.
Problem 1 (10 points)
Chapter 8, Exercise 4 (p. 332).
Problem 2 (10 points)
Chapter 8, Exercise 8 (p. 333).
Problem 3 (10 points)
Chapter 8, Exercise 10 (p. 334).
Problem 4 (10 points)
Chapter 8, Exercise 11 (p. 335).
Problem 5 (10 points)
Let $x_i : i = 1, \ldots, p$ be the input predictor values and $a_k^{(2s)} : k = 1, \ldots, K$ be the $K$-dimensional output from a 2-layer, $M$-hidden-unit neural network with sigmoid activation $\sigma(a) = \{1 + e^{-a}\}^{-1}$, such that
$$a_j^{(1s)} = w_{j0}^{(1s)} + \sum_{i=1}^{p} w_{ji}^{(1s)} x_i, \quad j = 1, \ldots, M$$
$$a_k^{(2s)} = w_{k0}^{(2s)} + \sum_{j=1}^{M} w_{kj}^{(2s)} \, \sigma\big(a_j^{(1s)}\big)$$
Show that there exists an equivalent network that computes exactly the same output values, but with hidden unit activation function given by $\tanh(a) = \frac{e^a - e^{-a}}{e^a + e^{-a}}$, i.e.
$$a_j^{(1t)} = w_{j0}^{(1t)} + \sum_{i=1}^{p} w_{ji}^{(1t)} x_i, \quad j = 1, \ldots, M$$
$$a_k^{(2t)} = w_{k0}^{(2t)} + \sum_{j=1}^{M} w_{kj}^{(2t)} \tanh\big(a_j^{(1t)}\big)$$
Hint: first derive the relation between $\sigma(a)$ and $\tanh(a)$. Then show that the parameters of the two networks differ by linear transformations.
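For reference, the hinted relation is a standard identity that follows directly from the two definitions:

$$\tanh(a) = \frac{e^a - e^{-a}}{e^a + e^{-a}} = \frac{1 - e^{-2a}}{1 + e^{-2a}} = 2\sigma(2a) - 1, \qquad \text{equivalently} \qquad \sigma(a) = \frac{1 + \tanh(a/2)}{2}.$$

Substituting $\sigma\big(a_j^{(1s)}\big) = \frac{1}{2}\big(1 + \tanh(a_j^{(1s)}/2)\big)$ into the output-layer sum then indicates how the weights of the two networks must be linearly rescaled.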
