
CS6035 Project: Machine Learning (Fall 2024)


Task 1

For the first task, let's get familiar with some pandas basics. pandas is a Python library that deals with DataFrames, which you can think of as a Python class that handles tabular data. In the real world, you would create graphics and other visuals to better understand the dataset you are working with, using plotting tools like Power BI, Tableau, Data Studio, and Matplotlib. This step is generally known as Exploratory Data Analysis. Since we are using an autograder for this class, we will skip the plotting for this project.

For this task, we have released a local test suite. If you are struggling to understand the expected inputs and outputs for a function, please set up the test suite and use it to debug your function. It is critical that you pass all tests locally before you submit to Gradescope for credit. Do not use Gradescope for debugging.

Theory

In this task, we're not yet getting into theory. It's more nuts and bolts: you will learn the basics of pandas. pandas DataFrames are something of a glorified list of lists, mixed in with a dictionary. You get a table of values with rows and columns, and you can modify the column names and index values for the rows. There are numerous functions built into pandas to let you manipulate the data in the DataFrame.

To be clear, pandas is not part of Python, so when you look up docs, you'll specifically want the official PyData pandas docs. Note that we linked to the API docs here; this is the core of the docs you'll be looking at.

You can always get started trying to solve a problem by looking at Stack Overflow posts in Google search results. There you'll find ideas about how to use the pandas library. In the end, however, you should find yourself in the habit of looking directly at the docs for whichever library you are using, pandas in this case.

For those who might need a concrete example to get started, here's how you would take a pandas DataFrame column and return the average of its values:

import pandas as pd

# create a dataframe from a Python dict
df = pd.DataFrame({"color": ["yellow", "green", "purple", "red"], "weight": [124, 4.56, 384, -2]})
df  # shows the dataframe

Note that the column names are [color, weight] while the index is [0, 1, 2, 3], where the brackets [] denote a list.

Now that we have created a DataFrame, we can find the average weight by summing the values under weight and dividing by the number of rows, for example:

df["weight"].mean()

Note: In the example above, we're not paying attention to rounding; you will need to round your answers to the precision asked for in each task.

Also note, we are using slightly older versions of pandas, Python and the other libraries, so be sure to look at the docs for the appropriate library version. Often there's a drop-down at the top of docs sites to select the older version.

Refer to the Submissions page for details about submitting your work.

Useful Links:

Deliverables:

Instructions:
The Task1.py file has function skeletons that you will complete with Python code, mostly using the pandas library. The goal of each of these functions is to give you familiarity with the pandas library and some general Python concepts like classes, which you may not have seen before. See information about the functions' inputs, outputs, and skeletons below.
find_data_type

In this function you will take a dataset and the name of a column in it. You will return the column's data type.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dtypes.html

INPUTS

OUTPUTS
np.dtype: data type of the column

Function Skeleton
def find_data_type(dataset: pd.DataFrame, column_name: str) -> np.dtype:
    return np.dtype()

set_index_col

In this function you will take a dataset and a series and set the index of the dataset to be the series.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.html

INPUTS

OUTPUTS
a pandas DataFrame indexed by the given index series

Function Skeleton
def set_index_col(dataset: pd.DataFrame, index: pd.Series) -> pd.DataFrame:
    return pd.DataFrame()

reset_index_col

In this function you will take a dataset with an index already set and reindex the dataset from 0 to n-1, dropping the old index.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html

INPUTS

OUTPUTS
a pandas DataFrame indexed from 0 to n-1

Function Skeleton
def reset_index_col(dataset: pd.DataFrame) -> pd.DataFrame:
    return pd.DataFrame()

set_col_type

In this function you will be given a DataFrame, a column name and a column type. You will edit the dataset to take the column you are given and set it to be the type given in the input variable.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html

INPUTS

OUTPUTS
a pandas DataFrame with the column in column_name changed to the type in new_col_type

Function Skeleton
# Set astype (string, int, datetime)
def set_col_type(dataset: pd.DataFrame, column_name: str, new_col_type: type) -> pd.DataFrame:
    return pd.DataFrame()

make_DF_from_2d_array

In this function you will take data in an array as well as column and row labels and use that information to create a pandas DataFrame.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html

INPUTS

OUTPUTS
a pandas DataFrame with columns set from column_name_list, row index set from index and data set from array_2d

Function Skeleton
# Take a matrix of numbers and make it into a DataFrame with column names and index numbering
def make_DF_from_2d_array(array_2d: np.array, column_name_list: list[str], index: pd.Series) -> pd.DataFrame:
    return pd.DataFrame()

sort_DF_by_column

In this function, you are given a dataset and a column name. You will return a sorted dataset (sorting rows by the value of the specified column) either in descending or ascending order, depending on the value in the descending variable.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html

INPUTS

OUTPUTS
a pandas DataFrame sorted by the given column name and in descending or ascending order depending on the value of the descending variable

Function Skeleton
# Sort DataFrame by values
def sort_DF_by_column(dataset: pd.DataFrame, column_name: str, descending: bool) -> pd.DataFrame:
    return pd.DataFrame()
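The functions above are thin wrappers around single pandas calls. As an illustration only, here is a minimal sketch of one possible approach for a few of them, assuming the inputs described above; treat it as a hint, not the reference solution:

import numpy as np
import pandas as pd

def find_data_type(dataset: pd.DataFrame, column_name: str) -> np.dtype:
    # DataFrame.dtypes is a Series keyed by column name
    return dataset.dtypes[column_name]

def set_index_col(dataset: pd.DataFrame, index: pd.Series) -> pd.DataFrame:
    # set_index accepts a Series and returns a new DataFrame
    return dataset.set_index(index)

def sort_DF_by_column(dataset: pd.DataFrame, column_name: str, descending: bool) -> pd.DataFrame:
    # sort_values sorts rows; ascending is the inverse of the descending flag
    return dataset.sort_values(by=column_name, ascending=not descending)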
In this function you are given a DataFrame. You will return a DataFrame with any columns containing NA values dropped.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html

INPUTS

OUTPUTS
a pandas DataFrame with any columns that contain an NA value dropped

Function Skeleton

drop_NA_rows

In this function you are given a DataFrame. You will return a DataFrame with any rows containing NA values dropped.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html

INPUTS

OUTPUTS
a pandas DataFrame with any rows that contain an NA value dropped

Function Skeleton
def drop_NA_rows(dataset: pd.DataFrame) -> pd.DataFrame:
    return pd.DataFrame()

In this function you are given a dataset, a new column name and a string value to fill in the new column. Add the new column to the dataset and return the dataset.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/getting_started/intro_tutorials/05_add_columns.html

INPUTS

OUTPUTS
a pandas DataFrame with the new column created named new_column_name and filled with the value in new_column_value

Function Skeleton

left_merge_DFs_by_column

In this function you are given two datasets and the name of a column on which you will left join them using the pandas merge method. For example purposes, the left dataset is dataset1 and the right dataset is dataset2.

Useful Resources
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html
https://stackoverflow.com/questions/53645882/pandas-merging-101

INPUTS

OUTPUTS
a pandas DataFrame containing the two datasets left joined together on the given column name

Function Skeleton
def left_merge_DFs_by_column(left_dataset: pd.DataFrame, right_dataset: pd.DataFrame, join_col_name: str) -> pd.DataFrame:
    return pd.DataFrame()

simpleClass

This project will require you to work with Python classes. If you are not familiar with them, we suggest learning a bit more about them. You will take the inputs into the class initialization and set them as instance variables (of the same name) in the Python class.

Useful Resources
https://www.w3schools.com/python/python_classes.asp

INPUTS

OUTPUTS
None, just set up the __init__ method in the class.

Function Skeleton
class simpleClass():
    def __init__(self, length: int, width: int, height: int):
        pass

Now that you have learned a bit about pandas DataFrames, we will use them to generate some simple summary statistics for a DataFrame. You will be given the dataset as an input variable, as well as a column name for a column in the dataset that serves as a label column. This label column contains binary values (0 and 1) that you also summarize; it is also the variable to predict. In this context:

This type of binary classification is common in machine learning tasks where we want to be able to predict the field. An example of where this could be useful would be if we were looking at network data, and the label column was IsVirus. We could then analyze the network data of Georgia Tech services and predict if incoming files look like a virus (and if we should alert the security team).

Useful Resources

INPUTS

OUTPUTS

Function Skeleton
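The NA-dropping and summary-statistics functions above also reduce to a handful of pandas calls. Below is a small, generic illustration, with a made-up DataFrame and column names, of dropna along both axes and of summarizing a binary label column; it is not the graded skeleton:

import pandas as pd

df = pd.DataFrame({"bytes": [120, None, 340], "is_virus": [0, 1, 0]})

# axis=1 drops columns that contain any NA; axis=0 (the default) drops rows
cols_dropped = df.dropna(axis=1)
rows_dropped = df.dropna(axis=0)

# simple summary statistics for a binary label column
label_counts = df["is_virus"].value_counts()  # counts of 0s and 1s
label_mean = df["is_virus"].mean()            # fraction of rows labeled 1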
Task 2 (25 points)

Now that you have a basic understanding of pandas and the dataset, it is time to dive into some more complex data processing tasks.

Theory

In machine learning, a common goal is to train a model on one set of data and then validate the model on a similarly structured but different set of data. You could, for example, train the model on data you have collected historically and then validate the model against real-time data as it comes in, seeing how well it predicts the new data.

If we're looking at a past dataset, as we are in these tasks, we need to treat different parts of the data differently to be able to develop and test models. We segregate the data into test and training portions. We train the model on the training data and test the developed model on the test data to see how well it predicts the results. You should never train your models on test data, only on training data.

At a high level, it is important to hold out a subset of your data when you train a model so you can see what the expected performance is on an unseen sample. That way, you can determine whether the resulting model is overfit (performs much better on training data than on test data).

Preprocessing data is essential because most models only take in numerical values. Therefore, categorical features need to be encoded to numerical values so that models can use them. A machine learning model may not be able to make sense of green, blue and red. In preprocessing, we'll convert those to integer values 1, 2 and 3, for example. It's an interesting question as to what happens when you have training data that has green, red and blue, but your testing data says yellow.

Numerical scaling can be more or less useful depending on the type of model used, but it is especially important in linear models. Numerical scaling typically takes positive values and compresses them into a range between 0 and 1 (inclusive) that retains the relationships among the original data.

These preprocessing techniques will provide you with options to augment your dataset and improve model performance.

Useful Links:

Deliverables:

Instructions:
The Task2.py file has function skeletons that you will complete with Python code (mostly using the pandas and scikit-learn libraries). The goal of each of these functions is to give you familiarity with the applied concepts of splitting and preprocessing data. See information about the functions' inputs, outputs and skeletons below.

tts

In this function, you will take:

You will return features and labels for the training and test sets.

At a high level, you can separate the task into two subtasks. The first is splitting your dataset into both features and labels (by columns), and the second is splitting your dataset into training and test sets (by rows). You should use the scikit-learn train_test_split function but will have to write wrapper code around it based on the input values we give you; a minimal sketch of such a wrapper appears after this section.

Useful Resources

INPUTS

OUTPUTS

Function Skeleton
def tts(
    dataset: pd.DataFrame,
    label_col: str,
    test_size: float,
    stratify: bool,
    random_state: int,
) -> tuple[pd.DataFrame, pd.DataFrame, pd.Series, pd.Series]:
    # TODO
    return train_features, test_features, train_labels, test_labels
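Here is a minimal sketch of the kind of wrapper described above, assuming the inputs listed in the skeleton; treat it as one possible approach rather than the reference solution:

import pandas as pd
from sklearn.model_selection import train_test_split

def tts(
    dataset: pd.DataFrame,
    label_col: str,
    test_size: float,
    stratify: bool,
    random_state: int,
) -> tuple[pd.DataFrame, pd.DataFrame, pd.Series, pd.Series]:
    # split by columns: everything except the label column is a feature
    features = dataset.drop(columns=[label_col])
    labels = dataset[label_col]
    # split by rows: stratify on the labels only when requested
    train_features, test_features, train_labels, test_labels = train_test_split(
        features,
        labels,
        test_size=test_size,
        stratify=labels if stratify else None,
        random_state=random_state,
    )
    return train_features, test_features, train_labels, test_labels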
PreprocessDataset

The PreprocessDataset class contains a code skeleton with nine methods for you to implement. Most methods are split into two parts: one that will be run on the training dataset and one that will be run on the test dataset. In Data Science/Machine Learning, this is done to avoid something called data leakage. For this assignment, we don't expect you to understand the nuances of the concept, but we will have you follow principles that minimize the chances of it occurring. You will accomplish this by splitting data into training and test datasets and processing those datasets in slightly different ways.

Generally, for everything you do in this project, and if you do any ML or Data Science work in the future, you should train/fit on the training data first, then predict/transform on the training and test data. That holds for basic preprocessing steps like Task 2 and for complex models like you will see in Tasks 3 and 4.

For the purposes of this project (and more generally in any ML project), you should never train or fit on the test data, because your test data is expected to give you an understanding of how your model/predictions will perform on unseen data. If you fit even a preprocessing step to your test data, then you are either giving the model information about the test set it wouldn't have about unseen data (if you combine train and test and fit to both), or you are providing a different preprocessing than the model is expecting (if you fit a different preprocessor to the test data), and your model would not be expected to perform well.

Note: You should train/fit using the train dataset; then, once you have a fit encoder/scaler/pca/model instance, you can transform/predict on the training and test data.

You will also notice that we are only preprocessing the features and not the labels. There are a few cases where preprocessing steps on labels may be helpful in modeling, but they are definitely more advanced and out of the scope of this introduction. Generally, you will not need to do any preprocessing to your labels beyond potentially encoding a string value (i.e., Malware or Benign) into an integer value (0 or 1), which is called Label Encoding.

PreprocessDataset: __init__

Similar to the Task 1 simpleClass subtask you previously completed, you will initialize the class by adding instance variables (add all the inputs to the class).

Useful Resources

INPUTS

Example of feature_engineering_functions:

Don't worry about copying it; we also have examples in the local test cases. This is just provided as an illustration of what to expect in your function.

OUTPUTS
None, just assign all the input parameters to class variables.

Also, per the instructions below, you'll return here and create another instance variable: a scikit-learn OneHotEncoder with any parameters you may need later.

Function Skeleton
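As a loose illustration of the points above, the __init__ might simply store its inputs and create the encoder it will need later. The parameter list below is inferred from the column and function names used elsewhere on this page; treat it as a sketch of one possible approach, not the official skeleton:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

class PreprocessDataset:
    # parameter names are assumptions inferred from the rest of this page
    def __init__(
        self,
        one_hot_encode_cols: list[str],
        min_max_scale_cols: list[str],
        n_components: int,
        feature_engineering_functions: dict,
    ):
        # store every input as an instance variable of the same name
        self.one_hot_encode_cols = one_hot_encode_cols
        self.min_max_scale_cols = min_max_scale_cols
        self.n_components = n_components
        self.feature_engineering_functions = feature_engineering_functions
        # encoder is created once here and fit later on the training data only;
        # handle_unknown="ignore" encodes unseen test categories as all zeros
        self.one_hot_encoder = OneHotEncoder(handle_unknown="ignore")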
PreprocessDataset: one_hot_encode_columns_train and one_hot_encode_columns_test

One Hot Encoding is the process of taking a column and returning a binary vector representing the various values within it. There is a separate function for the training and test datasets since they should be handled separately to avoid data leakage (see the 3rd link in Useful Resources for a little more info on how to handle them).

Pseudocode
one_hot_encode_columns_train()
… OneHotEncoder that can help you with this) and the same index that train_features had.
… OneHotEncoder that can help you with this) and the same index that test_features had.

Example Walkthrough (from the local testing suite):

INPUTS:
one_hot_encode_cols: [color, version]

TRAIN DATAFRAMES AT EACH STEP (example tables not reproduced here; see the local test suite):
1. DataFrame with columns to encode
2. DataFrame with other columns
3. One Hot Encoded 2d array
4. One Hot Encoded DataFrame with index and column names
5. Final DataFrame with passthrough/other columns joined back

TEST DATAFRAMES AT EACH STEP (example tables not reproduced here; see the local test suite):
1. DataFrame with columns to encode
2. DataFrame with other columns
3. One Hot Encoded 2d array
4. One Hot Encoded DataFrame with index and column names
5. Final DataFrame with passthrough columns joined back

Note: For the local tests and autograder, use the column naming scheme of joining the previous column name and the column value with an underscore (similar to above, where Type -> Type_Fruit and Type_Vegetable).

Note 2: Since you should only be fitting your encoder on the training data, if there are values in your test set that are different from those in the training set, you will denote that with 0s. In the example above, let's say we have a row in the test set with pizza, which is neither a fruit nor a vegetable; it should result in a 0 for both Type_Fruit and Type_Vegetable. If you don't handle these properly, you may get errors like "Test Failed: Found unknown categories".

Note 3: You may be tempted to use the pandas function get_dummies to solve this task, but it's a trap. It seems easier, but you will have to do a lot more work to make it handle a train/test split. So, we suggest you use scikit-learn's OneHotEncoder.

Useful Resources

INPUTS

OUTPUTS
a pandas DataFrame with the columns listed in one_hot_encode_cols one hot encoded and all other columns in the DataFrame unchanged

Function Skeleton
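A minimal sketch of the fit-on-train, transform-on-both pattern with scikit-learn's OneHotEncoder follows. The column names, values and helper function are illustrative only, and the same pattern applies to the MinMaxScaler and PCA steps later in this task:

import pandas as pd
from sklearn.preprocessing import OneHotEncoder

train_features = pd.DataFrame({"Type": ["Apple", "Carrot"], "Price": [1.0, 0.5]})
test_features = pd.DataFrame({"Type": ["Pizza"], "Price": [8.0]}, index=[7])
one_hot_encode_cols = ["Type"]

# fit on the training data only; unknown test categories become all-zero rows
encoder = OneHotEncoder(handle_unknown="ignore")
encoder.fit(train_features[one_hot_encode_cols])

def encode(features: pd.DataFrame) -> pd.DataFrame:
    encoded = pd.DataFrame(
        encoder.transform(features[one_hot_encode_cols]).toarray(),
        columns=encoder.get_feature_names_out(one_hot_encode_cols),  # e.g. Type_Apple, Type_Carrot
        index=features.index,  # keep the original index
    )
    # join the encoded columns back onto the passthrough columns
    return features.drop(columns=one_hot_encode_cols).join(encoded)

train_encoded = encode(train_features)
test_encoded = encode(test_features)  # Type_Apple and Type_Carrot are both 0 for the unseen "Pizza"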
PreprocessDataset: min_max_scaled_columns_train and min_max_scaled_columns_test

Min/Max Scaling is a process to transform numerical features to a specific range, typically [0, 1], to ensure that input values are comparable (similar to how you may have heard of normalizing data), and it is a crucial preprocessing step for many machine learning algorithms. In particular, this standardization is essential for algorithms like linear regression, logistic regression, k-means, and neural networks, which can be sensitive to the scale of input features, whereas some algorithms like decision trees are less impacted.

By applying Min/Max Scaling, we prevent feature dominance, which ideally improves the performance and accuracy of these algorithms and improves training convergence. It is a recommended step to ensure your models are trained on consistent and standardized data.

For this assignment you should use the scikit-learn MinMaxScaler (linked in the resources below) rather than attempting to implement your own scaling function. The rough implementation of the scikit-learn function is provided below for educational purposes:

X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
X_scaled = X_std * (max - min) + min

Note: There are separate functions for the training and test datasets to help avoid data leakage between the test/train datasets. Please refer to the 3rd link in Useful Resources for more information on how to handle this, namely that we should still scale the test data based on our knowledge of the train dataset.

Example Dataframe:

Note: For the autograder, use the same column name as the original column (ex: Price -> Price).

Useful Resources

INPUTS

OUTPUTS
a pandas DataFrame with the columns listed in min_max_scale_cols min/max scaled and all other columns in the DataFrame unchanged

Function Skeleton

PreprocessDataset: pca_train and pca_test

Principal Component Analysis is a dimensionality reduction technique (column reduction). It aims to take the variance in your input columns and map the columns into N columns that contain as much of the variance as they can. This technique can be useful if you are trying to train a model faster, and it has some more advanced uses, especially when training models on data which has many columns but few rows. There is a separate function for the training and test datasets because they should be handled separately to avoid data leakage (see the 3rd link in Useful Resources for a little more info on how to handle them).

Note 1: For the local tests and autograder, use the column naming scheme component_1, component_2 ... component_n for the n_components passed into the __init__ method.

Note 2: For your PCA outputs to match the local tests and autograder, make sure you set the seed using a random state of 0 when you initialize the PCA function.

Note 3: Since PCA does not work with NA values, make sure you drop any columns that have NA values before running PCA.

Useful Resources

INPUTS

OUTPUTS
a pandas DataFrame with the generated pca values and using column names component_1, component_2 ... component_n

Function Skeleton
def pca_train(self, train_features: pd.DataFrame) -> pd.DataFrame:
    # TODO: Read the function description in https://github.gatech.edu/pages/cs6035-tools/cs6035-to
    pca_dataset = pd.DataFrame()
    return pca_dataset

def pca_test(self, test_features: pd.DataFrame) -> pd.DataFrame:
    # TODO: Read the function description in https://github.gatech.edu/pages/cs6035-tools/cs6035-to
    pca_dataset = pd.DataFrame()
    return pca_dataset

PreprocessDataset: feature_engineering_train, feature_engineering_test

Feature Engineering is a process of using domain knowledge (physics, geometry, sports statistics, business metrics, etc.) to create new features (columns) out of the existing data. This could mean creating an area feature when given the length and width of a triangle, or extracting the major and minor version numbers from a software version, or more complex logic depending on the scenario.

For this method, you will be taking in a dictionary with a column name and a function that takes in a DataFrame and returns a column. You'll be using that to create a new column with the name in the dictionary key.

For example, given functions that double and halve a height column (see the hedged sketch after this section), you would create two new columns named double_height and half_height in your DataFrame.

Useful Resources

INPUTS

OUTPUTS
a pandas DataFrame with the features described in feature_engineering_train and feature_engineering_test added as new columns and all other columns in the DataFrame unchanged

Function Skeleton
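The example feature_engineering_functions dictionary referenced above is not reproduced in this copy of the page; the sketch below is a hypothetical reconstruction of what such a dictionary looks like, using the double_height and half_height columns named in the text:

import pandas as pd

# hypothetical example: each key is a new column name, each value is a
# function that takes the full DataFrame and returns the new column
def double_height(dataframe: pd.DataFrame) -> pd.Series:
    return dataframe["height"] * 2

def half_height(dataframe: pd.DataFrame) -> pd.Series:
    return dataframe["height"] / 2

feature_engineering_functions = {
    "double_height": double_height,
    "half_height": half_height,
}

# applying the dictionary adds the new columns and leaves the rest unchanged
df = pd.DataFrame({"height": [1.0, 2.0, 3.0]})
for new_col, func in feature_engineering_functions.items():
    df[new_col] = func(df)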
PreprocessDataset: preprocess_train, preprocess_test

Now, we will put three of the above methods together into a preprocess function. This function will take in a dataset and perform encoding, scaling, and feature engineering using the above methods and their respective columns. You should not perform PCA for this function.

Useful Resources
See resources for one hot encoding, min/max scaling and feature engineering above.

INPUTS

OUTPUTS
a pandas DataFrame for both test and train features with the columns in one_hot_encode_cols encoded, the columns in min_max_scale_cols scaled and the columns described in feature_engineering_functions engineered. You do not need to use PCA here.

Function Skeleton

Task 3

In Task 2 you learned how to split a dataset into training and testing components. Now it's time to learn about using a K-means model. We will run a basic model on the data to cluster files (rows) with similar attributes together, using an unsupervised model.

Theory

An unsupervised model has no label column. By contrast, in supervised learning (which you'll see in Task 4) the data has features and targets/labels. These labels are effectively an answer key to the data in the feature columns. You don't have this answer key in unsupervised learning; instead, you're working on data without labels, so you'll need to choose algorithms that can learn from the data alone, without the benefit of labels.

We start with K-means because its algorithm is simple to understand. For the mathematics people, you can look at the underlying data structure, a Voronoi diagram. Based on squared Euclidean distances, K-means creates clusters of similar datapoints. Each cluster has a centroid, and the idea is that each sample is associated/clustered with the centroid that is closest to it.

"Closest" is an interesting concept in higher dimensions. You can think of each feature in a dataset as a dimension in the data. If it's 2D or 3D, we can visualize it easily, and concepts of distance are clear there; they work similarly in 4+ dimensions.

If you read the Wikipedia article on K-means, you'll see a discussion of the use of squared Euclidean distances in K-means. This is compared with simple Euclidean distances in the Weber problem, and the better approaches resulting from k-medians and k-medoids are discussed.

Please use scikit-learn to create the model and Yellowbrick to determine the optimal value of k for the dataset.

So far, we have functions to split the data and preprocess it. Now, we will run a basic model on the data to cluster files (rows) with similar attributes together. We will use an unsupervised model (a model with no label column), K-means. Again, use scikit-learn to create the model and Yellowbrick to determine the optimal value of k for the dataset (a minimal sketch of this pattern follows the instructions below).

Refer to the Submissions page for details about submitting your work.

Useful Links:

Deliverables:

Instructions:
The Task3.py file has function skeletons that you will complete with Python code. You will mostly be using the pandas, Yellowbrick and scikit-learn libraries. The goal of each of these functions is to give you familiarity with the applied concepts of unsupervised learning. See information about the functions' inputs, outputs and skeletons below.
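As a rough illustration of the scikit-learn plus Yellowbrick workflow named above, the elbow method can pick k on the training data and the fitted model can then assign cluster ids to both splits. The k range and random_state below are assumptions for illustration, not requirements from the assignment:

import pandas as pd
from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer

def fit_kmeans(train_features: pd.DataFrame, random_state: int = 0) -> KMeans:
    # KElbowVisualizer fits KMeans for each k in the range and picks an "elbow"
    visualizer = KElbowVisualizer(KMeans(random_state=random_state), k=(2, 10))
    visualizer.fit(train_features)
    best_k = visualizer.elbow_value_
    # refit a single KMeans model on the training data with the chosen k
    return KMeans(n_clusters=best_k, random_state=random_state).fit(train_features)

# model = fit_kmeans(train_features)
# train_cluster_ids = model.predict(train_features)  # cluster id per training row
# test_cluster_ids = model.predict(test_features)    # cluster id per test row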
KmeansClustering

The KmeansClustering class contains a code skeleton with 4 methods for you to implement.

Note: You should train/fit using the train dataset; then, once you have a Yellowbrick/K-means model instance, you can transform/predict on the training and test data.

KmeansClustering: __init__

Similar to Task 1, you will initialize the class by adding instance variables as needed.

Useful Resources

INPUTS

OUTPUTS
None

Function Skeleton

KmeansClustering: kmeans_train

K-means clustering is a process of grouping similar rows together and assigning them to a cluster. For this method you will use the training data to fit an optimal K-means clustering of the data.

To help you get started, we have provided a list of subtasks to complete for this task:

Useful Resources

INPUTS

OUTPUTS
a list of cluster ids that the K-means model has assigned for each row in the train dataset

Function Skeleton

KmeansClustering: kmeans_test

K-means clustering is a process of grouping similar rows together and assigning them to a cluster. For this method you will use the optimal K-means model fit on the training data to assign cluster ids to the test data.

To help you get started, we have provided a list of subtasks to complete for this task:

Useful Resources

INPUTS

OUTPUTS
a list of cluster ids that the K-means model has assigned for each row in the test dataset

Function Skeleton

KmeansClustering: train_add_kmeans_cluster_id_feature, test_add_kmeans_cluster_id_feature

Using the two methods you completed above (kmeans_train and kmeans_test), you will add a new feature (column) to the training and test DataFrames. This is similar to the feature engineering method in Task 2, where you appended new columns onto an existing DataFrame.

To do this, use the output of the methods (the list of cluster ids you return) from the corresponding train or test method and add it as a new column named kmeans_cluster_id in the input DataFrame, then return the full DataFrame.

Useful Resources

INPUTS
Use the needed instance variables you set in the __init__ method and the kmeans_train and kmeans_test methods you wrote above to produce the needed output.

OUTPUTS
a pandas DataFrame with kmeans_cluster_id added as a feature and all other input columns unchanged, for each of the two methods train_add_kmeans_cluster_id_feature and test_add_kmeans_cluster_id_feature

Function Skeleton
output_df = pd.DataFrame()
return output_df

Task 4 (25 points)

Now let's try a few supervised classification models:

We have chosen a few commonly used models for you to use here, but there are many options. In the real world, specific algorithms may fit a specific dataset better than other algorithms.

You won't be doing any hyperparameter tuning yet, so you can better focus on writing the basic code. You will:

(Note on feature importance: You should use RFE for determining feature importance of your Logistic Regression model, but do NOT use RFE for your Random Forest or Gradient Boosting models to determine feature importance. Please use their built-in values for this. A hedged sketch of both approaches follows this note.)
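A minimal sketch of the two feature-importance approaches mentioned in the note, assuming feature DataFrames and labels like those produced by tts; the column names of the returned DataFrames are illustrative, not the exact layout the autograder expects:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

def logistic_regression_importance(train_features: pd.DataFrame, train_labels: pd.Series) -> pd.DataFrame:
    # RFE ranks features for the logistic regression model (rank 1 = most important)
    rfe = RFE(LogisticRegression())
    rfe.fit(train_features, train_labels)
    return pd.DataFrame({"Feature": train_features.columns, "Rank": rfe.ranking_})

def random_forest_importance(train_features: pd.DataFrame, train_labels: pd.Series) -> pd.DataFrame:
    # tree ensembles expose built-in importances; no RFE needed
    model = RandomForestClassifier().fit(train_features, train_labels)
    return pd.DataFrame({"Feature": train_features.columns, "Importance": model.feature_importances_})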
Useful Links:
An Introduction to Classification in Machine Learning (builtin)
Classification in Machine Learning: An Introduction (DataCamp)

Deliverables:

Instructions:
The Task4.py file has function skeletons that you will complete with Python code (mostly using the pandas and scikit-learn libraries). The goal of each of these functions is to give you familiarity with the applied concepts of training a model, using it to score records and calculating performance metrics for it. See information about the function inputs, outputs and skeletons below.

ModelMetrics

You do not need to return a feature importance DataFrame in the ModelMetrics value for the naive model you will create; just return None in that position of the return statement, as the given code does.

calculate_naive_metrics

A naive model is a very simple model/prediction that can help to frame how well a more sophisticated model is doing. At best, such a model has random competence at predicting things. At worst, it's wrong all the time.

Since a naive model is incredibly basic (often a constant or randomly selected result), we can expect that any more sophisticated model that we train should outperform it. If the naive model beats our trained model, it can mean that additional data (rows or columns) is needed in the dataset to improve our model. It can also mean that the dataset doesn't have a strong enough signal for the target we want to predict.

In this function you will use the approach of a single, constant-output naive model. You will use a given constant integer (passed into the function as the variable naive_assumption) as your naive prediction. You will then calculate four metrics (accuracy, recall, precision and fscore) for the training and test datasets. As noted above, you do not return a feature importance object here; just return None in that place of the ModelMetrics object you return. A hedged sketch of these metric calculations appears after this section.

Useful Resources

INPUTS
train_features: a dataset split by a function similar to the tts function you created in Task 2

OUTPUTS
A completed ModelMetrics object with a training and test metrics dictionary with each one of the metrics rounded to 4 decimal places

Function Skeleton
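As an illustration of the constant-prediction idea, here is a sketch using scikit-learn's metric functions, under the assumption that those functions are acceptable and with zero_division handled defensively; it is not the graded ModelMetrics structure:

import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def naive_metrics(labels: pd.Series, naive_assumption: int) -> dict:
    # every row gets the same constant prediction
    preds = np.full(len(labels), naive_assumption)
    return {
        "accuracy": round(accuracy_score(labels, preds), 4),
        "recall": round(recall_score(labels, preds, zero_division=0), 4),
        "precision": round(precision_score(labels, preds, zero_division=0), 4),
        "fscore": round(f1_score(labels, preds, zero_division=0), 4),
    }

# train_metrics = naive_metrics(train_labels, naive_assumption)
# test_metrics = naive_metrics(test_labels, naive_assumption)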
calculate_logistic_regression_metrics

A logistic regression model is a simpler and more explainable statistical model that can be used to estimate the probability of an event (log-odds). At a high level, a logistic regression model uses data in the training set to estimate a column's weight in a linear approximation function. Conceptually, this is similar to estimating m for each column in the line formula you probably know well from geometry: y = m*x + b. If you are interested in learning more, you can read up on the math behind how this works. For this project, we are more focused on showing you how to apply these models, so you can simply use a scikit-learn LogisticRegression model in your code.

For this task, use scikit-learn's LogisticRegression class and complete the following subtasks:

NOTE: Make sure you use the predicted probabilities for roc auc.

Useful Resources
…/08%3A_Multiple_and_Logistic_Regression/8.04%3A_Introduction_to_Logistic_Regression

INPUTS
The first 4 are similar to the tts function you created in Task 2:

OUTPUTS

Function Skeleton
def calculate_logistic_regression_metrics(train_features: pd.DataFrame, test_features: pd.DataFrame, t…
    model = LogisticRegression()
    train_metrics = {
        "accuracy": 0,
        "recall": 0,
        "precision": 0,
        …

A Random Forest model is a more complex model than the naive and logistic regression models you have trained so far. It can still be used to estimate the probability of an event, but it achieves this using a different underlying structure: a tree-based model.

Conceptually, this looks a lot like many if/else statements chained together into a tree. A Random Forest expands on this and trains different trees with different subsets of the data and starting conditions, to get a better estimate than a single tree would give. For this project, we are more focused on showing you how to apply these models, so you can simply use the scikit-learn Random Forest model in your code.

For this task, use scikit-learn's RandomForestClassifier class and complete the following subtasks:

NOTE: Make sure you use the predicted probabilities for roc auc.

Useful Resources

INPUTS

OUTPUTS

Function Skeleton

A Gradient Boosted model is more complex than the naive and logistic regression models and similar in structure to the Random Forest model you just trained. A Gradient Boosted model expands on the tree-based model by using its additional trees to predict the errors from the previous tree. For this project, we are more focused on showing you how to apply these models, so you can simply use the scikit-learn Gradient Boosted model in your code.

For this task, use scikit-learn's GradientBoostingClassifier class and complete the following subtasks:

NOTE: Make sure you use the predicted probabilities for roc auc.

Refer to the Submissions page for details about submitting your work.

Useful Resources

INPUTS

OUTPUTS

Function Skeleton

Example of Feature Importance DataFrame

Task 5

Now that you have written functions for different steps of the model-building process, you will put it all together. You will write code that trains a model with hyperparameters you determine (you should do any tuning locally or in a notebook; i.e., don't tune your model in Gradescope, since the autograder will likely time out).

Refer to the Submissions page for details about submitting your work.

For Task 5 you should write your own local tests to pass before submitting to Gradescope. Do not share these tests with other students.

train_model_return_scores

Instructions (10 points):

Your function will:

Our autograder will compare your predictions with the correct answers; to get credit, you will need a roc auc score of .9 or higher on the test set (this should not require much hyperparameter tuning for this dataset). This is basically a simulation of how your model would perform in a production system using batch inference.

Sample Submission:
def train_model_return_scores(train_df_path, test_df_path) -> pd.DataFrame:
    # TODO: Read the function description in https://github.gatech.edu/pages/cs6035-tools/cs6035-too
    test_scores = pd.DataFrame()
    return test_scores
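A minimal end-to-end sketch of one way to fill in the sample submission above. The choice of GradientBoostingClassifier, the hypothetical label column name, and the use of every remaining column as a feature are assumptions for illustration, not requirements:

import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def train_model_return_scores(train_df_path, test_df_path) -> pd.DataFrame:
    label_col = "class"  # hypothetical name; substitute the dataset's actual label column
    train_df = pd.read_csv(train_df_path)
    test_df = pd.read_csv(test_df_path)

    features = train_df.drop(columns=[label_col])
    labels = train_df[label_col]

    model = GradientBoostingClassifier(random_state=0)
    model.fit(features, labels)

    # roc auc must be computed from predicted probabilities, not hard 0/1 predictions,
    # e.g. sklearn.metrics.roc_auc_score(val_labels, model.predict_proba(val_features)[:, 1])
    # on a held-out split when tuning locally
    prob_class_1 = model.predict_proba(test_df[features.columns])[:, 1]
    return pd.DataFrame({"index": test_df.index, "prob_class_1": prob_class_1})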
train_model_unsw_return_scores

Instructions (10 points):

Your function will:

Our autograder will compare your predictions with the correct answers. A roc auc score of .55 or higher on the test set gives 1/4 credit (2.5 points) and should not require much hyperparameter tuning for this dataset. A threshold of .75 gives half credit (5 points) and will likely require parameter tuning. The full-credit (10 points) threshold is .76 and will also likely require parameter tuning.

This is basically a simulation of how your model would perform on unseen data in a production system using batch inference.

Use any of the techniques we covered in this project to train models and return predicted probabilities for each row of the test set as a DataFrame with columns index (same as the index from the input test df) and prob_class_1 (predicted probabilities).

Sample Submission:

Deliverables:

More Information

Visit the public GitHub repository for more general information. For this project, we use the 55-feature ClaMP_Raw-5184.csv file referenced in the Dataset Files section of the repository. There are also papers that have been written using the dataset, such as "Investigation and preprocessing of CLaMP malware dataset for machine learning models".

You can visit the project's website for more general information. Of particular use may be the dataset description and feature descriptions provided by the creators at UNSW Canberra. Please note, we do not use all features or classes in the project.

Disclaimer: You are responsible for the information on this website. The content is subject to change at any time.
