
[SOLVED] COMP9414 Assignment 1 - Artificial Neural Networks


1. Problem Overview

The Amazon rainforest is the largest tropical rainforest on Earth, renowned for its unparalleled biodiversity and its key role in regulating global climate patterns [1,2]. Covering over 5.5 million square kilometers, it acts as a vital carbon sink by absorbing significant amounts of atmospheric CO₂, thereby mitigating global climate change [2]. However, in recent decades, temperatures in the Amazon basin have shown a worrying upward trend, largely attributed to climate change [3].

Within this vast rainforest (shown in green in Figure 1), temperatures are rising particularly rapidly in certain areas, leading to more frequent and intense “heat events” [4]. These heat events threaten local ecosystems by increasing the risk of forest fires and have broader implications for global climate stability.
The occurrence and intensity of heat events in the Amazon are influenced by large-scale climate drivers that regulate global weather patterns [5]. In particular, four important ocean climate patterns—the El Niño-Southern Oscillation (ENSO), the Tropical South Atlantic (TSA), the Tropical North Atlantic (TNA), and the North Atlantic Oscillation (NAO)—play a key role in determining temperature variability in the Amazon. These climate patterns occur in different ocean regions surrounding the Amazon, as shown in Figure 1.
2. Data provided

The temperature time series and the values of the climate pattern indices (ENSO, NAO, TSA, TNA) are provided in Amazon temperature student.csv; the indices are summarized in Table 1. Temperature thresholds for each month of each year are provided in threshold.csv.

Table 1. Climate pattern indices.

Index | Measurement | Range | Interpretation
ENSO | Sea surface temperature anomalies in the Niño 3.4 region | -3 to 3 °C | + indicates El Niño; - indicates La Niña
NAO | Normalized sea level pressure difference (Azores-Iceland) | -4 to 4 | + means stronger westerly winds and milder winters; - means the opposite
TSA | Sea surface temperature anomalies in the tropical South Atlantic | -1 to 1 °C | + indicates warmer South Atlantic waters
TNA | Sea surface temperature anomalies in the tropical North Atlantic | -1 to 1 °C | + indicates warmer North Atlantic waters
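As a starting point, the two files can be loaded with pandas. This is a minimal sketch; the file names are taken from the hand-out, but the column names used in the later sketches (Year, Month, Temperature, the four index columns, Threshold) are assumptions that should be checked against the actual CSV headers.

```python
import pandas as pd

# Load the monthly temperature/climate-index series and the monthly thresholds.
# Column names are assumptions; check them with .head() before going further.
data = pd.read_csv("Amazon temperature student.csv")
thresholds = pd.read_csv("threshold.csv")

print(data.head())        # expected: Year, Month, ENSO, NAO, TSA, TNA, Temperature
print(thresholds.head())  # expected: one threshold value per (Year, Month)
```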
3. Goals

Your goal is to build neural network models for two different tasks:

Task A (Classification): Predict the occurrence of thermal events.
Task B (Regression): Predict temperature.

4. Task A: Classification (Thermal Event Detection)

Data Preparation

(a) Define a binary variable Hot: if the temperature of a month exceeds the threshold given for that particular month, the month is classified as hot.
(i) If the monthly temperature exceeds the monthly threshold, then Hot = 1.
(ii) Otherwise, Hot = 0.
(b) Create a bar graph summarizing the number of hot months per year.
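A minimal sketch of steps (a) and (b), continuing from the loading sketch above; the merge keys and the Temperature/Threshold column names are assumptions.

```python
import matplotlib.pyplot as plt

# Step (a): attach the monthly threshold to each row and flag hot months.
df = data.merge(thresholds, on=["Year", "Month"])
df["Hot"] = (df["Temperature"] > df["Threshold"]).astype(int)

# Step (b): bar graph of the number of hot months per year.
df.groupby("Year")["Hot"].sum().plot(kind="bar")
plt.xlabel("Year")
plt.ylabel("Number of hot months")
plt.title("Hot months per year")
plt.tight_layout()
plt.show()
```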
Model Development

(c) Randomly divide the dataset into training, validation, and test sets.
(d) Preprocessing: Apply any necessary transformations to the training set, then apply the same transformations to the validation set. Keep a record of all applied transformations.
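One possible realisation of steps (c) and (d), assuming a 70/15/15 split and standardisation of the four index features; both choices are assumptions, not requirements.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

features = ["ENSO", "NAO", "TSA", "TNA"]
X, y = df[features].values, df["Hot"].values

# Step (c): 70% training, 15% validation, 15% test (proportions are an assumption).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.3, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=42)

# Step (d): fit the scaler on the training set only, then reuse it unchanged.
scaler = StandardScaler().fit(X_train)
X_train_s = scaler.transform(X_train)
X_val_s = scaler.transform(X_val)
```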
(e) Build a neural network classifier to predict thermal events: define the architecture and hyperparameters (loss function, optimizer, batch size, learning rate, number of epochs). It is recommended that the total number of trainable parameters satisfy
\[ N_{\mathrm{params}} < \frac{N_{\mathrm{samples}}}{10}, \]
i.e. the number of parameters should be less than one-tenth of the number of samples.
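A sketch of one admissible classifier in Keras (any framework is acceptable); the layer sizes, optimizer, learning rate, batch size and epoch count are assumptions, and the assert simply encodes the recommended parameter budget.

```python
import tensorflow as tf

# Step (e): a small fully connected binary classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(len(features),)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Recommended budget: fewer parameters than one tenth of the sample size.
# Shrink the hidden layer if this check fails for your dataset.
assert model.count_params() < len(X_train) / 10

history = model.fit(X_train_s, y_train,
                    validation_data=(X_val_s, y_val),
                    epochs=100, batch_size=32, verbose=0)
```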
(f) Create a plot showing the accuracy on the training and validation sets (y-axis) against the number of epochs (x-axis).
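For step (f), the Keras History object from the sketch above already holds the per-epoch accuracies:

```python
# Step (f): training vs. validation accuracy over epochs.
plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
```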
Model Evaluation

(g) Apply the same transformations to the test set as to the training and validation sets.
(h) Use the model to predict the Hot class for the test set.
(i) Evaluate the performance by plotting the confusion matrix. Note that the positive class is 1.
(j) Compute the balanced accuracy, the true negative rate (specificity), and the true positive rate (sensitivity).
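Steps (g) to (j) can be covered with scikit-learn, continuing from the sketches above; the 0.5 decision threshold is an assumption.

```python
from sklearn.metrics import (confusion_matrix, ConfusionMatrixDisplay,
                             balanced_accuracy_score)

# Step (g): reuse the scaler fitted on the training set.
X_test_s = scaler.transform(X_test)

# Step (h): predicted Hot class for the test set.
y_pred = (model.predict(X_test_s) >= 0.5).astype(int).ravel()

# Step (i): confusion matrix with 1 as the positive class.
cm = confusion_matrix(y_test, y_pred, labels=[0, 1])
ConfusionMatrixDisplay(cm, display_labels=[0, 1]).plot()

# Step (j): balanced accuracy, specificity and sensitivity.
tn, fp, fn, tp = cm.ravel()
sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
balanced_acc = balanced_accuracy_score(y_test, y_pred)
print(sensitivity, specificity, balanced_acc)
```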
5. Task B: Regression (Temperature Prediction)

For this task, the temperature value is predicted directly, without using the binary Hot variable.
Model Development

(k) Randomly split the dataset into training, validation, and test sets.
(l) Preprocessing: Apply any necessary transformations to the input features of the training set, and apply the exact same transformations to the validation set. Do not scale, normalize, or otherwise transform the ground-truth targets. Explicitly document any transformations applied.
(m) Build the model by defining the architecture and training hyperparameters (loss function, optimizer, batch size, learning rate, and number of epochs). It is recommended that the total number of trainable parameters be less than one-tenth of the number of samples.
(n) Create a plot showing the loss function value (y-axis) against the number of epochs (x-axis) for the training and validation sets.
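A sketch of steps (k) to (n), reusing the split/scaling pattern from Task A but predicting the raw temperature with a linear output unit; the architecture and hyperparameters are again assumptions.

```python
# Step (k): random split on the same features, with temperature as the target.
Xr, yr = df[features].values, df["Temperature"].values
Xr_train, Xr_tmp, yr_train, yr_tmp = train_test_split(Xr, yr, test_size=0.3, random_state=42)
Xr_val, Xr_test, yr_val, yr_test = train_test_split(Xr_tmp, yr_tmp, test_size=0.5, random_state=42)

# Step (l): transform the input features only; the targets stay untouched.
reg_scaler = StandardScaler().fit(Xr_train)

# Step (m): small regression network with a linear output.
reg_model = tf.keras.Sequential([
    tf.keras.Input(shape=(len(features),)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])
reg_model.compile(optimizer="adam", loss="mse")
reg_history = reg_model.fit(reg_scaler.transform(Xr_train), yr_train,
                            validation_data=(reg_scaler.transform(Xr_val), yr_val),
                            epochs=100, batch_size=32, verbose=0)

# Step (n): training vs. validation loss over epochs.
plt.plot(reg_history.history["loss"], label="training loss")
plt.plot(reg_history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("MSE loss")
plt.legend()
plt.show()
```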
Model Evaluation

(o) Evaluate the regression model by comparing the true and predicted temperature values on the test set. Use the Pearson correlation coefficient (r) and the mean absolute error (MAE) as evaluation metrics. These metrics are defined as:
\[
r = \frac{\sum_{i=1}^{n}\left(y_{i}-\overline{y}\right)\left(\hat{y}_{i}-\overline{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{n}\left(y_{i}-\overline{y}\right)^{2}}\;\sqrt{\sum_{i=1}^{n}\left(\hat{y}_{i}-\overline{\hat{y}}\right)^{2}}}
\]

\[
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_{i}-\hat{y}_{i}\right|
\]

where \(y_{i}\) is the true temperature, \(\hat{y}_{i}\) is the predicted temperature, and \(n\) is the number of test samples.
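Both metrics are available off the shelf; a minimal sketch for step (o), continuing from the regression sketch above:

```python
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error

# Step (o): compare true and predicted temperatures on the test set.
yr_pred = reg_model.predict(reg_scaler.transform(Xr_test)).ravel()
r, _ = pearsonr(yr_test, yr_pred)            # Pearson correlation coefficient
mae = mean_absolute_error(yr_test, yr_pred)  # mean absolute error
print(f"r = {r:.3f}, MAE = {mae:.3f}")
```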
Model Development (by Year and Target Normalization) [6]

(p) Partition the data by full calendar year: each year must appear in only one subset; use the same train/validation/test proportions as in part (k). Fit a separate scaler for the temperature target on the training set and apply it to the targets of the validation and test sets; do not reuse the feature scalers. Keep the feature transformations specified in part (l) unchanged.
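One way to realise step (p): split whole calendar years (the 70/15/15 proportions mirror part (k) and are an assumption), then fit a scaler for the temperature target on the training years only.

```python
import numpy as np

# Step (p): each year appears in exactly one subset.
rng = np.random.default_rng(42)
years = rng.permutation(df["Year"].unique())
n_years = len(years)
train_years = years[: int(0.7 * n_years)]
val_years = years[int(0.7 * n_years): int(0.85 * n_years)]
test_years = years[int(0.85 * n_years):]

train_df = df[df["Year"].isin(train_years)]
val_df = df[df["Year"].isin(val_years)]
test_df = df[df["Year"].isin(test_years)]

# Separate scaler for the temperature target, fitted on the training years only.
target_scaler = StandardScaler().fit(train_df[["Temperature"]])
yb_train = target_scaler.transform(train_df[["Temperature"]]).ravel()
yb_val = target_scaler.transform(val_df[["Temperature"]]).ravel()
yb_test = target_scaler.transform(test_df[["Temperature"]]).ravel()
```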
(q) Using the train/validation/test partitions and the target scaler established in part (p), apply all the remaining feature-level preprocessing steps specified in part (l) to these same splits.
(r) Retrain the regression network with the architecture and hyperparameters defined in (m) on the year-partitioned data (do not use a new architecture).
(s) Create a plot showing the loss function value (y-axis) against the number of epochs (x-axis) for the training and validation sets.
Model Evaluation by Year

(t) Evaluate the regression model by comparing the true and predicted temperature values on the test set, again using the Pearson correlation coefficient (r) and the mean absolute error (MAE) as evaluation metrics.
6. Additional Notes (applicable to both tasks)

You must set a random seed to ensure reproducible results. You must serialize two versions of each trained model, along with the corresponding feature scaler objects.
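A minimal sketch of these notes, assuming the Keras models and scalers from the sketches above; the file names are arbitrary, and the saving lines should be repeated for every model version you train (e.g. the randomly split and year-split regressors).

```python
import random
import joblib

# Reproducibility: seed Python, NumPy and TensorFlow before building/training.
SEED = 42
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)

# Serialization: save each trained model together with the scalers used with it.
model.save("task_a_classifier.keras")        # use .h5 with older TensorFlow versions
reg_model.save("task_b_regressor.keras")
joblib.dump(scaler, "task_a_feature_scaler.joblib")
joblib.dump(reg_scaler, "task_b_feature_scaler.joblib")
joblib.dump(target_scaler, "task_b_target_scaler.joblib")
```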
7. Code testing and discussion

Your notebook will be demonstrated live in a tutorial-based discussion (25 points total). Attendance is mandatory; submissions without participation in the discussion will receive zero points.
8. Assignment Submission

The assignment must be completed independently. You must submit your solution on Moodle. The submission must include:

A Jupyter Notebook (.ipynb).
The trained Sequential models for the classification and regression tasks.
All relevant scaler objects (the feature scalers for each model and the separate target scaler used in the year-wise regression).

The first line of your Jupyter notebook should contain your full name and zID as a comment. Your notebook should contain all necessary code for reading the files, preprocessing the data, constructing the networks, and evaluating the results.
