
[SOLVED] Isye6740-homework 4

$25

File Name: Isye6740-homework_4.zip
File Size: 178.98 KB


In lectures, we learned about different classifiers. This question is to compare them on two datasets. Python users, please feel free to use scikit-learn, a commonly used and powerful Python library with various machine learning tools; you may also use similar libraries in other languages of your choice to perform the tasks.

Part One. This dataset is about participants who completed a personal information form and a divorce predictors scale. The data is a modified version of the dataset publicly available at https://archive.ics.uci.edu/ml/datasets/Divorce+Predictors+data+set (noise has been injected, so you will not get exactly the same results as on the UCI website). The dataset marriage.csv is contained in the homework folder. There are 170 participants and 54 attributes (predictor variables), all real-valued. The last column of the CSV file is the label y (1 means "divorce", 0 means "no divorce"). Each column is one feature (predictor variable), and each row is one sample (participant). A detailed explanation of each feature can be found at the website link above. Our goal is to build a classifier using training data such that, given a test sample, we can classify (essentially, predict) whether its label is 0 ("no divorce") or 1 ("divorce"). We are going to compare the following classifiers: Naive Bayes, Logistic Regression, and KNN. Use the first 80% of the data for training and the remaining 20% for testing. If you use scikit-learn, you can use train_test_split to split the dataset. A code sketch for this comparison appears below, after Part Two.

Remark: Please note that, here, for Naive Bayes, this means we have to estimate the variance of each individual feature from the training data. If an estimated variance is zero or close to zero (meaning that there is very little variability in that feature), set it to a small positive number instead; we do not want to include zero or near-zero variances in Naive Bayes. This tip holds for both Part One and Part Two of this question.

Part Two. This question is to compare different classifiers and their performance on multi-class classification on the complete MNIST dataset at http://yann.lecun.com/exdb/mnist/. You can find the data file mnist_10digits.mat in the homework folder. The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. We will compare KNN, logistic regression, SVM, kernel SVM, and neural networks. Train the classifiers on the training dataset and evaluate them on the test dataset.
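A minimal scikit-learn sketch of the Part One comparison, assuming marriage.csv has no header row and the 0/1 label in the last column; the var_smoothing value and k = 5 are illustrative choices, not prescribed by the assignment.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Assumed layout: no header row, label (0/1) in the last column.
data = pd.read_csv("marriage.csv", header=None)
X, y = data.iloc[:, :-1].values, data.iloc[:, -1].values

# First 80% for training, last 20% for testing; shuffle=False keeps
# the original row order, matching the "first 80%" wording.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

models = {
    # var_smoothing adds a fraction of the largest feature variance to
    # every per-feature variance estimate, which sidesteps the
    # zero/near-zero variance issue from the remark (value illustrative).
    "Naive Bayes": GaussianNB(var_smoothing=1e-3),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))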
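For Part Two, a similar sketch under assumed key names in mnist_10digits.mat (xtrain, ytrain, xtest, ytest, with samples stored as rows); inspect mat.keys() and the array shapes before relying on this. The subsample size and hyperparameters are illustrative, chosen mainly to keep KNN and kernel SVM tractable.

import numpy as np
from scipy.io import loadmat
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Assumed keys and row-per-sample layout; check mat.keys() and shapes.
mat = loadmat("mnist_10digits.mat")
Xtr, ytr = mat["xtrain"] / 255.0, mat["ytrain"].ravel()
Xte, yte = mat["xtest"] / 255.0, mat["ytest"].ravel()

# Subsample the 60,000 training points so KNN and kernel SVM finish in
# reasonable time; train on the full set if resources allow.
idx = np.random.RandomState(0).choice(len(ytr), 5000, replace=False)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Linear SVM": LinearSVC(),
    "Kernel SVM": SVC(kernel="rbf"),
    "Neural Network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=100),
}
for name, clf in models.items():
    clf.fit(Xtr[idx], ytr[idx])
    print(name, accuracy_score(yte, clf.predict(Xte)))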
In this problem, we will use the Naive Bayes algorithm to fit a spam filter by hand. This will enhance your understanding of the Bayes classifier and build intuition. This question does not involve any programming, only derivation and hand calculation. Spam filters are used in all email services to classify received emails as "Spam" or "Not Spam". A simple approach involves maintaining a vocabulary of words that commonly occur in "Spam" emails and classifying an email as "Spam" if the number of words from the dictionary present in the email exceeds a certain threshold. We are given a vocabulary consisting of 15 words, V = {secret, offer, low, price, valued, customer, today, dollar, million, sports, is, for, play, healthy, pizza}. We will use $V_i$ to represent the $i$-th word in V. As our training dataset, we are also given 3 example spam messages.

Recall that the Naive Bayes classifier assumes the probability of an input depends on its input features. The feature vector for the $i$-th sample is $x^{(i)} = [x_1^{(i)}, x_2^{(i)}, \ldots, x_d^{(i)}]^T$, and the class of the $i$-th sample is $y^{(i)}$. In our case the length of the input vector is $d = 15$, equal to the number of words in the vocabulary V, and each entry $x_k^{(i)}$ is the number of times word $V_k$ occurs in the $i$-th message. The model is

$$P(x \mid y = c) = \prod_{k=1}^{d} \theta_{c,k}^{x_k}, \qquad c \in \{0, 1\},$$

where $0 \le \theta_{c,k} \le 1$ is the probability of word $k$ appearing in class $c$, which satisfies $\sum_{k=1}^{d} \theta_{c,k} = 1$ for each $c$. Given this, the complete log-likelihood function for our training data of $m$ messages is

$$\ell(\theta_{0,1}, \ldots, \theta_{0,d}, \theta_{1,1}, \ldots, \theta_{1,d}) = \sum_{i=1}^{m} \sum_{k=1}^{d} x_k^{(i)} \log \theta_{y^{(i)},k}.$$

Calculate the maximum likelihood estimates of $\theta_{0,1}$, $\theta_{0,7}$, $\theta_{1,1}$, $\theta_{1,15}$ by maximizing the log-likelihood function above. (Hint: we are solving a constrained maximization problem, so you will need to introduce Lagrange multipliers and consider the Lagrangian function; a sketch appears below, after the next question.)

Consider the simple two-layer network in the lecture slides. Given $n$ training data $(x_i, y_i)$, $i = 1, \ldots, n$, the cost function used to train the neural network is

$$\ell(w, \alpha, \beta) = \sum_{i=1}^{n} \big(y_i - \sigma(w^T z_i)\big)^2,$$

where $\sigma(x) = 1/(1 + e^{-x})$ is the sigmoid function and $z_i$ is a two-dimensional vector such that $z_{i1} = \sigma(\alpha^T x_i)$ and $z_{i2} = \sigma(\beta^T x_i)$. Show that the gradient is given by

$$\frac{\partial \ell}{\partial w} = -\sum_{i=1}^{n} 2\big(y_i - \sigma(u_i)\big)\,\sigma(u_i)\big(1 - \sigma(u_i)\big)\, z_i,$$

where $u_i = w^T z_i$. Also derive the gradients of $\ell(w, \alpha, \beta)$ with respect to $\alpha$ and $\beta$ and write down their expressions.
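For the spam-filter question above, a sketch of the constrained maximization, assuming the log-likelihood stated there; the numerical values of $\hat\theta_{0,1}$, $\hat\theta_{0,7}$, $\hat\theta_{1,1}$, $\hat\theta_{1,15}$ then follow by plugging in word counts from the given messages. With one multiplier $\lambda_c$ per class constraint:

$$L = \sum_{i=1}^{m} \sum_{k=1}^{d} x_k^{(i)} \log \theta_{y^{(i)},k} + \sum_{c \in \{0,1\}} \lambda_c \Big(1 - \sum_{k=1}^{d} \theta_{c,k}\Big),$$

$$\frac{\partial L}{\partial \theta_{c,k}} = \frac{1}{\theta_{c,k}} \sum_{i:\, y^{(i)} = c} x_k^{(i)} - \lambda_c = 0 \;\Longrightarrow\; \theta_{c,k} = \frac{1}{\lambda_c} \sum_{i:\, y^{(i)} = c} x_k^{(i)}.$$

Enforcing $\sum_k \theta_{c,k} = 1$ gives $\lambda_c = \sum_{i:\, y^{(i)} = c} \sum_{j=1}^{d} x_j^{(i)}$, so

$$\hat\theta_{c,k} = \frac{\sum_{i:\, y^{(i)} = c} x_k^{(i)}}{\sum_{i:\, y^{(i)} = c} \sum_{j=1}^{d} x_j^{(i)}},$$

i.e., the fraction of all word occurrences in class-$c$ messages that are occurrences of word $k$.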
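For the two-layer network question, a chain-rule sketch of the requested gradients, using $\sigma'(t) = \sigma(t)(1 - \sigma(t))$ and $u_i = w^T z_i$:

$$\frac{\partial \ell}{\partial w} = \sum_{i=1}^{n} 2\big(y_i - \sigma(u_i)\big)\big({-\sigma'(u_i)}\big)\, z_i = -\sum_{i=1}^{n} 2\big(y_i - \sigma(u_i)\big)\,\sigma(u_i)\big(1 - \sigma(u_i)\big)\, z_i.$$

Since $u_i$ depends on $\alpha$ only through $z_{i1} = \sigma(\alpha^T x_i)$, with $\partial z_{i1} / \partial \alpha = z_{i1}(1 - z_{i1})\, x_i$,

$$\frac{\partial \ell}{\partial \alpha} = -\sum_{i=1}^{n} 2\big(y_i - \sigma(u_i)\big)\,\sigma(u_i)\big(1 - \sigma(u_i)\big)\, w_1\, z_{i1}(1 - z_{i1})\, x_i,$$

and symmetrically $\partial \ell / \partial \beta$ replaces $w_1$, $z_{i1}$ with $w_2$, $z_{i2}$.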
