1. (20 pts.) Consider a two-category classification problem with a one-dimensional feature x.
Assume that the priors are P(ω1) = 1/3 and P(ω2) = 2/3, and that the class-conditional
distributions have normal densities p(x|ω1) ∼ N(1, 1) and p(x|ω2) ∼ N(3, 1), where
N(µ, σ²) = (1/√(2πσ²)) · exp(−(1/2)·((x − µ)/σ)²).

(a) Derive the Bayes decision rule for minimum-error-rate classification.
(b) Let λij = λ(αi|ωj) be the loss incurred for deciding ωi when the true category is ωj.
Assume λ11 = 0, λ12 = 2, λ21 = 1, λ22 = 0. Derive the Bayes decision rule for minimum-risk classification.
2. (20 pts.) Consider the Binary Independence Model for text documents. Given a vocabulary
V : (w1, . . . , wd) of all English words (and tokens), assume that a text document is represented as a vector of binary features ~x = (x1, . . . , xd)^t such that xi is 1 if the word wi
appears in the document, and xi is 0 otherwise. We want to classify text documents into c
categories. Let P(ωj) be the prior probability of class ωj for j = 1, . . . , c. Assume that
the components of ~x are statistically independent given the category (Naive Bayes model), i.e.,

P(~x|ωj) = ∏_{i=1}^{d} P(xi|ωj).

Assume we can estimate the following probabilities from the training data:

pij = P(xi = 1|ωj), i = 1, . . . , d, j = 1, . . . , c.

Show that the minimum probability of error is achieved by the following decision rule: decide ωk if gk(~x) ≥ gj(~x) for all j ≠ k, where the discriminant function has the form

gj(~x) = Σ_{i=1}^{d} cij xi + bj.

Give the expressions for cij and bj.
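For intuition, here is a minimal Python sketch with made-up pij values (d = 3, c = 2, chosen only for illustration): it computes log[P(ωj) ∏i P(xi|ωj)] directly and compares it with one way of grouping the same terms into a linear function of the binary features. The grouping used in the code is an illustration, not the requested derivation.

# Minimal sketch of the Binary Independence (naive Bayes) model on toy numbers:
# it compares the log of P(wj) * prod_i P(xi|wj) with a discriminant written as
# sum_i c_ij * x_i + b_j.  The pij values below are made up for illustration.
import math

priors = [0.5, 0.5]                      # P(w1), P(w2)
p = [[0.8, 0.1, 0.3],                    # p[j][i] = P(xi = 1 | wj); here d = 3, c = 2
     [0.2, 0.6, 0.4]]

def log_joint(x, j):
    """log P(wj) + sum_i log P(xi | wj), computed directly from the model."""
    s = math.log(priors[j])
    for i, xi in enumerate(x):
        s += math.log(p[j][i]) if xi else math.log(1 - p[j][i])
    return s

def linear_discriminant(x, j):
    """The same quantity, grouped as sum_i c_ij * x_i + b_j (one possible grouping)."""
    c = [math.log(p[j][i] / (1 - p[j][i])) for i in range(len(x))]
    b = math.log(priors[j]) + sum(math.log(1 - p[j][i]) for i in range(len(x)))
    return sum(ci * xi for ci, xi in zip(c, x)) + b

for x in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    for j in range(2):
        assert abs(log_joint(x, j) - linear_discriminant(x, j)) < 1e-12
print("linear form matches the direct naive-Bayes computation")

Taking the logarithm is what turns the product of per-word Bernoulli terms into a sum that is linear in the binary features.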