The Learning Problem
- Which of the following problems is suited for machine learning, assuming there is enough associated data? Choose the correct answer; explain how you could use machine learning to solve it.
- predicting the winning number of the next invoice lottery
- calculating the average score of 500 students
- identifying the exact minimal spanning tree of a graph
- ranking mango images by the quality of the mangoes
- none of the other choices
- Which of the following describes a machine learning approach to building a system for spam detection? Choose the correct answer; explain briefly why you think the other choices are not machine learning. (A rough sketch of the data-driven flavor of the last two choices appears after this list.)
- flip 3 fair coins; classify the email as spam iff at least 2 of them are heads
- forward the email to 3 humans; classify the email as spam iff at least 2 of them believe it is
- have 3 humans produce a list of spam words; classify the email as spam iff the email contains more than 10 words from the list
- get a data set that contains spams and non-spams; for all words in the data set, let the machine calculate the ratio of spams per word; produce a list of the words that appear more than 5 times and are within the highest 20% of ratios; classify the email as spam iff the email contains more than 10 words from the list
- get a data set that contains spams and non-spams; for all words in the data set, let the machine decide its spam score; sum the scores up for each email; let the machine optimize a threshold that achieves the best precision of spam detection; classify the email as spam iff the email's score is more than the threshold
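For intuition only, here is a minimal sketch of the data-driven flavor of the last two choices (Python; the `emails` data set of (word-list, label) pairs and all constants are hypothetical, introduced purely for illustration):

```python
from collections import Counter

def build_spam_word_list(emails, min_count=5, top_fraction=0.20):
    """emails: list of (words, is_spam) pairs -- a hypothetical labeled data set.
    Keep words appearing more than min_count times whose spam ratio is in the top fraction."""
    total, spam = Counter(), Counter()
    for words, is_spam in emails:
        for w in set(words):
            total[w] += 1
            if is_spam:
                spam[w] += 1
    frequent = [w for w in total if total[w] > min_count]
    frequent.sort(key=lambda w: spam[w] / total[w], reverse=True)
    return set(frequent[: max(1, int(top_fraction * len(frequent)))])

def is_spam(words, spam_words, threshold=10):
    # classify as spam iff the email contains more than `threshold` words from the list
    return sum(1 for w in set(words) if w in spam_words) > threshold
```

The sketch only mirrors the wording of the choices; deciding which of the procedures actually lets the data determine the final decision rule is the question itself.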
Perceptron Learning Algorithm
Next, we will play with multiple variations of the Perceptron Learning Algorithm (PLA).
- Dr. Short scales down all xn (including the x0 within) linearly by a factor of 4 before running PLA. How does the worst-case speed of PLA (in terms of the bound on page 16 of lecture 2, restated after the choices below) change after scaling? Choose the correct answer; explain your answer.
- 4 times smaller (i.e. faster)
- 2 times smaller
- 2 times smaller
- unchanged
- 2 times larger (i.e. slower)
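For reference, the bound referred to above (using the standard PLA convergence bound; the exact notation on page 16 of lecture 2 may differ slightly) states that the number of updates $T$ satisfies
$$T \;\le\; \frac{R^2}{\rho^2}, \qquad R = \max_n \lVert x_n \rVert, \qquad \rho = \min_n \frac{y_n\, w_f^T x_n}{\lVert w_f \rVert},$$
where $w_f$ is any perfect separator. Note how scaling every $x_n$ affects both $R$ and $\rho$.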
- The scaling in the previous problem is equivalent to inserting a learning rate $\eta$ into the PLA update rule
$$w_{t+1} \leftarrow w_t + \eta\, y_{n(t)}\, x_{n(t)}$$
with $\eta = \frac{1}{4}$. In fact, we do not need to use a fixed $\eta$. Let $\eta_t$ denote the learning rate in the $t$-th iteration; that is, let PLA update $w_t$ by
$$w_{t+1} \leftarrow w_t + \eta_t\, y_{n(t)}\, x_{n(t)}$$
whenever $(x_{n(t)}, y_{n(t)})$ is not correctly classified by $w_t$. Dr. Adaptive decides to set $\eta_t = \frac{1}{\lVert x_{n(t)} \rVert}$ so that a longer $x_{n(t)}$ will not affect $w_t$ too much. Let
$$\hat{\rho} = \min_n \frac{y_n\, w_f^T x_n}{\lVert w_f \rVert\, \lVert x_n \rVert}\,,$$
which can be viewed as a normalized version of the $\rho$ on page 16 of lecture 2. The bound on the same page then becomes $1/\hat{\rho}^{\,p}$ after using this adaptive $\eta_t$. What is $p$? Choose the correct answer; explain your answer.
[a] 0
[b] 1
[c] 2
[d] 4
[e] 8
- Another possibility for setting $\eta_t$ is to consider how negative $y_{n(t)}\, w_t^T x_{n(t)}$ is, and try to make $y_{n(t)}\, w_{t+1}^T x_{n(t)} > 0$; that is, let $w_{t+1}$ correctly classify $(x_{n(t)}, y_{n(t)})$. Which of the following update rules makes $y_{n(t)}\, w_{t+1}^T x_{n(t)} > 0$? Choose the correct answer; explain your answer. (A small numerical sanity check of this condition is sketched after the next problem's choices.)
[a] $w_{t+1} \leftarrow w_t + 2\, y_{n(t)}\, x_{n(t)}$
- Dr. Separate decides to use one of the update rules in the previous problem for PLA. When the data set is linearly separable, how many of the choices in the previous problem ensure halting with a perfect line? Choose the correct answer; explain the reason behind each halting case.
[a] 1
[b] 2
[c] 3
[d] 4
[e] 5
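As a sanity check for the two problems above, one can verify numerically whether a candidate update rule makes $y_{n(t)}\, w_{t+1}^T x_{n(t)} > 0$ on a misclassified example. The sketch below (Python, with a made-up example; the helper name and numbers are purely illustrative) only demonstrates the check itself, using choice [a] above as the candidate rule:

```python
import numpy as np

def update_rule_a(w, x, y):
    # candidate rule [a]: w_{t+1} = w_t + 2 * y * x
    return w + 2 * y * x

# a made-up example that the current w misclassifies: y * w^T x <= 0
w = np.array([1.0, -2.0])
x = np.array([0.5, 1.0])
y = +1
assert y * w.dot(x) <= 0             # confirm it is currently misclassified

w_next = update_rule_a(w, x, y)
print(y * w_next.dot(x) > 0)         # True iff the rule corrects this particular example
```

A single example like this can only refute a rule, not prove that it always works; the "explain your answer" part still requires the algebra.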
Types of Learning
- One technique shared by the famous AlphaGo, AlphaGo Zero, and AlphaStar is called self-practicing: learning to play the game by practicing with itself and getting feedback from the judging environment. What best describes the learning problem behind self-practicing? Choose the correct answer; explain your answer.
- human learning
- unsupervised learning
- semi-supervised learning
- supervised learning
- reinforcement learning
- Consider formulating a learning problem for building a self-driving car. First, we gather a training data set that consists of 100 hours of video that contains the view in front of a car, and records about how the human behind the wheel acted with physically constrained choices like steering, braking, and signaling-before-turning. We also gather another 100 hours of videos from 1126 more cars without the human records. The learning algorithm is expected to learn from all the videos to obtain a hypothesis that imitates the human actions well. What learning problem best matches the description above? Choose the correct answer; explain your answer.
- regression, unsupervised learning, active learning, concrete features
- structured learning, semi-supervised learning, batch learning, raw features
- structured learning, supervised learning, batch learning, concrete features
- regression, reinforcement learning, batch learning, concrete features
- structured learning, supervised learning, online learning, concrete features
(We are definitely not hinting that you should build a self-driving car this way. 😀 )
Off-Training-Set Error
As discussed on page 5 of lecture 4, what we really care about is whether g ≈ f outside D. For a set of universe examples U with D ⊂ U, the error outside D is typically called the Off-Training-Set (OTS) error
$$E_{\text{ots}}(h) = \frac{1}{|\,U \setminus D\,|} \sum_{(x,y)\,\in\, U \setminus D} [\![\, h(x) \neq y \,]\!].$$
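As a small illustration of this definition, a direct computation of the OTS error (a minimal sketch; the hypothesis `h` and the lists of labeled examples `universe` and `training` are hypothetical, not part of the problem) could look like:

```python
def e_ots(h, universe, training):
    """universe, training: lists of (x, y) pairs, with training a subset of universe.
    Returns the fraction of examples outside the training set that h labels wrongly."""
    outside = [(x, y) for (x, y) in universe if (x, y) not in training]
    return sum(1 for (x, y) in outside if h(x) != y) / len(outside)
```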
- Consider U with 6 examples
Run the process of choosing any three examples from U as D, and learn a perceptron hypothesis (say, with PLA, or any of your human learning algorithms) to achieve Ein(g) = 0 on D. Then, evaluate g outside D. What are the smallest and largest possible Eots(g)? Choose the correct answer; explain your answer.
[e] (0,1)
Hoeffding Inequality
- Suppose you are given a biased coin with one side coming up with probability 1/2 + ε. How many times do you need to toss the coin to find out the more probable side with probability at least 1 − δ using the Hoeffding's Inequality mentioned on page 10 of lecture 4? Choose the correct answer; explain your answer. (Hint: There are multiple versions of Hoeffding's inequality. Please use the version in the lecture, albeit slightly loose, for answering this question. The log here is log_e, the natural logarithm.)
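For reference, the version of Hoeffding's Inequality referred to (assuming the standard two-sided form used in the lecture; page 10 of lecture 4 may phrase it with slightly different symbols) bounds how far the sample mean $\nu$ over $N$ tosses can be from the true mean $\mu$:
$$\mathbb{P}\big[\,|\nu - \mu| > \epsilon\,\big] \;\le\; 2\exp\!\left(-2\epsilon^2 N\right),$$
and the problem asks you to relate $N$, $\epsilon$, and $\delta$ through this bound.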
Bad Data
- Consider x = [x1, x2]^T ∈ R², a target function f(x) = sign(x1), a hypothesis h1(x) = sign(2x1 − x2), and another hypothesis h2(x) = sign(x2). When drawing 5 examples independently and uniformly within [−1, +1] × [−1, +1] as D, what is the probability that we get 5 examples (xn, f(xn)) such that Ein(h2) = 0? Choose the correct answer; explain your answer. (Note: This is one of the BAD-data cases for h2, where Ein(h2) is far from Eout(h2).)
[a] 0
[b]
[c]
[d]
[e] 1
- Following the setting of the previous problem, what is the probability that we get 5 examples such that Ein(h2) = Ein(h1), including both the zero and non-zero Ein cases? Choose the correct answer; explain your answer. (Note: This is one of the BAD-data cases where we cannot distinguish the better-Eout hypothesis h1 from the worse hypothesis h2.)
[a]
[b]
[c]
[d]
[e]
- According to page 22 of lecture 4, for a hypothesis set H,
$$\text{BAD } D \text{ for } H \;\Longleftrightarrow\; \exists\, h \in H \text{ s.t. BAD } D \text{ for } h.$$
Let x = [x1, x2, ..., xd]^T ∈ R^d with d > 1. Consider a binary classification target with Y = {+1, −1} and a hypothesis set H with 2d hypotheses h1, ..., h_{2d}.
- For i = 1, ..., d, h_i(x) = sign(x_i).
- For i = d + 1, ..., 2d, h_i(x) = sign(−x_{i−d}).
Extend the Hoeffding's Inequality mentioned on page 10 of lecture 4 with a proper union bound. Then, for any given N and ε, what is the smallest C that makes this inequality true?
$$\mathbb{P}[\text{BAD } D] \;\le\; C \cdot 2\exp\!\left(-2\epsilon^2 N\right).$$
Choose the correct answer; explain your answer. (The general form of such a union-bound extension is restated after the choices below.)
[a] C = 1
[b] C = d
[c] C = 2d
[d] C = 4d
[e] C =
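For reference, the union-bound extension referred to above (a standard step; the notation here assumes a finite hypothesis set, with M denoting its size) has the form
$$\mathbb{P}[\text{BAD } D] \;=\; \mathbb{P}\Big[\bigcup_{h \in H} \text{BAD } D \text{ for } h\Big] \;\le\; \sum_{h \in H} \mathbb{P}[\text{BAD } D \text{ for } h] \;\le\; M \cdot 2\exp\!\left(-2\epsilon^2 N\right).$$
The question is whether the naive count M = 2d is really the smallest constant that works for this particular H.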
Multiple-Bin Sampling
- We then illustrate what happens with multiple-bin sampling with an experiment that uses dice (instead of marbles) to bind the six faces together. Please note that the dice are not meant to be thrown for random experiments; the probability below only refers to drawing the dice from the bag. Try to view each number as a hypothesis, and each die as an example in our multiple-bin scenario. You can see that no single number is always green; that is, Eout of each hypothesis is always non-zero. In the next two problems, we are essentially asking you to calculate the probability of getting Ein(h3) = 0, and the probability of the minimum Ein(hi) = 0.
Consider four kinds of dice in a bag, with the same (super large) quantity for each kind.
- A: all even numbers are colored green, all odd numbers are colored orange
- B: (2,3,4) are colored green, others are colored orange
- C: the number 6 is colored green, all other numbers are colored orange
- D: all primes are colored green, others are colored orange
If we draw 5 dice independently from the bag, which of the following outcomes has the same probability as getting five green 3s? Choose the correct answer; explain your answer.
[a] five green 1s
[b] five orange 2s
[c] five green 2s
[d] five green 4s
[e] five green 5s
- Following the previous problem, if we draw 5 dice independently from the bag, what is the probability that we get some number that is purely green? Choose the correct answer; explain your answer.
[a]
[b]
[c]
[d]
[e]
Experiments with Perceptron Learning Algorithm
Next, we use an artificial data set to study PLA. The data set with N = 100 examples is in
http://www.csie.ntu.edu.tw/~htlin/course/ml20fall/hw1/hw1_train.dat
Each line of the data set contains one (xn, yn) with xn ∈ R^10. The first 10 numbers of the line contain the components of xn in order; the last number is yn. Please initialize your algorithm with w = 0 and take sign(0) as −1.
- (*) Please first follow page 4 of lecture 2, and add x0 = 1 to every xn. Implement a version of PLA that randomly picks an example (xn, yn) in every iteration, and updates wt if and only if wt is incorrect on the example. Note that the random picking can be simply implemented with replacement; that is, the same example can be picked multiple times, even consecutively. Stop updating and return wt as wPLA if wt is correct consecutively after checking 5N randomly-picked examples. (One possible implementation is sketched after this problem's choices.)
Hint: (1) The update procedure described above is equivalent to the procedure of gathering all the incorrect examples first and then randomly picking an example among the incorrect ones. But the description above is usually much easier to implement. (2) The stopping criterion above is a randomized, more efficient implementation of checking whether wt makes no mistakes on the data set.
Repeat your experiment 1000 times, each with a different random seed. What is the median number of updates before the algorithm returns wPLA? Choose the closest value.
- 8
- 11
- 14
- 17
- 20
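One possible implementation, as a minimal sketch only (the function and variable names, the local file name `hw1_train.dat`, and the `x0`/`scale` parameters that anticipate the later variations are all assumptions for illustration, not prescribed by the problem):

```python
import numpy as np

def sign(v):
    return 1 if v > 0 else -1                  # take sign(0) as -1, per the problem statement

def pla_random(X, y, rng):
    """Randomly pick examples with replacement and update w on mistakes.
    Stop once 5N consecutive randomly-picked examples are all classified correctly.
    Returns (w_PLA, number_of_updates)."""
    N = len(X)
    w = np.zeros(X.shape[1])
    updates, consecutive_correct = 0, 0
    while consecutive_correct < 5 * N:
        n = rng.integers(N)                    # uniform pick, with replacement
        if sign(w.dot(X[n])) != y[n]:
            w = w + y[n] * X[n]                # PLA update on a mistake
            updates += 1
            consecutive_correct = 0
        else:
            consecutive_correct += 1
    return w, updates

def load_data(path, x0=1.0, scale=1.0):
    raw = np.loadtxt(path)                     # each line: 10 components of x_n, then y_n
    X, y = raw[:, :-1] * scale, raw[:, -1].astype(int)
    if x0 != 0.0:                              # x0 = 0 means "do not add any x0"
        X = np.hstack([np.full((len(X), 1), x0 * scale), X])
    return X, y

def median_updates(path, n_runs=1000, **kwargs):
    X, y = load_data(path, **kwargs)
    counts = [pla_random(X, y, np.random.default_rng(seed))[1] for seed in range(n_runs)]
    return np.median(counts)                   # median number of updates over the runs

print(median_updates("hw1_train.dat"))         # the experiment with x0 = 1
```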
- (*) Among all the w0 (the zero-th component of wPLA) obtained from the 1000 experiments above, what is the median? Choose the closest value.
- -10
- -5 [c] 0
- 5
- 10
- (*) Set x0 = 10 to every xn instead of x0 = 1, and repeat the 1000 experiments above. What is the median number of updates before the algorithm returns wPLA? Choose the closest value.
- 8
- 11
- 14
- 17
- 20
- (*) Set x0 = 0 to every xn instead of x0 = 1. This equivalently means not adding any x0, and you will get a separating hyperplane that passes through the origin. Repeat the 1000 experiments above. What is the median number of updates before the algorithm returns wPLA? Choose the closest value.
- 8
- 11
- 14
- 17
- 20
- (*) Now, in addition to setting x0 = 0 to every xn, scale down each xn by a factor of 4. Repeat the 1000 experiments above. What is the median number of updates before the algorithm returns wPLA? Choose the closest value. (The parameter variations of the last few problems are illustrated in the usage sketch at the end of this section.)
- 8
- 11
- 14
- 17
- 20
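Under the same assumed helpers, the later variations correspond to calls such as the following (again only a sketch of how the parameters map to the problems):

```python
print(median_updates("hw1_train.dat", x0=10.0))              # x0 = 10 instead of 1
print(median_updates("hw1_train.dat", x0=0.0))               # no x0: hyperplane through the origin
print(median_updates("hw1_train.dat", x0=0.0, scale=0.25))   # x0 = 0 and every x_n scaled down by 4
```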