
[SOLVED] Ai6126 project 1  celebamask face parsing solution

$25

File Name: Ai6126_project_1____celebamask_face_parsing_solution.zip
File Size: 489.84 KB


Project 1 Specification

Challenge Description

Face parsing assigns a pixel-wise label to each semantic component, e.g., eyes, nose, and mouth. The goal of this mini challenge is to design and train a face parsing network. We will use data from the CelebAMask-HQ Dataset [1] (see Figure 1). For this challenge, we have prepared a mini-dataset consisting of 1000 training and 100 validation pairs of images, where both images and annotations have a resolution of 512 x 512.

The performance of the network will be evaluated based on the F-measure between the predicted masks and the ground truth of the test set (the ground truth of the test set will not be released).
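For intuition about the metric, a mean per-class F-measure between a predicted and a ground-truth label map can be sketched as below. This is only an illustrative NumPy sketch, not the official scoring script; the class count of 19 and the per-class averaging are assumptions about how the evaluation is set up.

```python
import numpy as np

def f_measure(pred, gt, num_classes=19, eps=1e-8):
    """Mean per-class F1 between predicted and ground-truth label maps."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        g = (gt == c)
        if not p.any() and not g.any():
            continue  # class absent from both maps; skip it
        tp = np.logical_and(p, g).sum()
        precision = tp / (p.sum() + eps)
        recall = tp / (g.sum() + eps)
        scores.append(2 * precision * recall / (precision + recall + eps))
    return float(np.mean(scores))

# A perfect prediction scores close to 1.0:
gt = np.random.default_rng(0).integers(0, 19, size=(512, 512))
print(f_measure(gt.copy(), gt))
```

A sanity check like this is useful on the provided validation pairs before spending CodaBench submissions.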

Figure 1. Sample images in CelebAMask-HQ

Assessment Criteria

We will evaluate and rank the performance of your network model on our given 100 unseen test images based on the F-measure.

The higher the rank of your solution, the higher the score you will receive. In general, scores will be awarded based on the Table below.

Percentile in ranking | ≤ 5% | ≤ 15% | ≤ 30% | ≤ 50% | ≤ 75% | ≤ 100% | *
Scores                |  20  |  18   |  16   |  14   |  12   |  10    | 0

Notes:

We will award bonus marks (up to 2 marks) if the solution is interesting or novel.
Marks will be deducted for incomplete submissions, such as missing key code components, inconsistencies between predictions and code, significantly poorer results than the baseline, or failure to submit a short report.

Submission Guideline
Download dataset: this link
Train and test your network using our provided training set.
[Optional] Evaluate your model on an unseen CodaBench validation set during development, with up to 5 submissions per day and 60 in total.
[Required] During test phase, submit your (1) test set predictions, (2) source code, and (3) pretrained models to CodaBench. The test set will be released one week before the deadline, following standard vision challenge practices. You are allowed up to 5 submissions per day, with a total limit of 5.

Restrictions
To maintain fairness, your model must contain fewer than 1,821,085 trainable parameters, which is 120% of the trainable parameters in SRResNet [2] (your baseline network). You can use

sum(p.numel() for p in model.parameters() if p.requires_grad)

to compute the number of trainable parameters in your network (the `requires_grad` filter excludes any frozen parameters).

No external data and pretrained models are allowed in this mini challenge. You are only allowed to train your models from scratch using the 1000 image pairs in our given training dataset.
You should not use an ensemble of models.
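The budget check above is a PyTorch one-liner; its logic can be sketched without PyTorch using stand-in parameter objects (the `Param` class below is a hypothetical mock that mimics the `numel()`/`requires_grad` interface of `torch.nn.Parameter`):

```python
import math
from dataclasses import dataclass

PARAM_LIMIT = 1_821_085  # 120% of SRResNet's trainable parameter count

@dataclass
class Param:
    """Stand-in for a torch parameter tensor: a shape plus a trainable flag."""
    shape: tuple
    requires_grad: bool = True

    def numel(self):
        return math.prod(self.shape)

def count_trainable(params):
    # Mirrors: sum(p.numel() for p in model.parameters() if p.requires_grad)
    return sum(p.numel() for p in params if p.requires_grad)

# e.g. one 3x3 conv with 64 input and 64 output channels, plus its bias:
params = [Param((64, 64, 3, 3)), Param((64,))]
n = count_trainable(params)
print(n, n < PARAM_LIMIT)
```

Running this check once before training (and again before submission) avoids discovering a budget violation after the leaderboard closes.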

Step-by-step Submission Procedure

We host the validation and test sets on CodaBench. Please follow the guidelines below to ensure that your results are recorded.

The website of the competition is https://www.codabench.org/competitions/5726

Register a CodaBench account with your NTU email (ending with @e.ntu.edu.sg), using your matric number as your username.
Register for this competition and wait for approval.

Submit a file with your prediction results as described below. Source code and pretrained models must be included in the test phase; they are not required for the dev phase.

IMPORTANT NOTE: Please refer to “Get Started → Submission” on the CodaBench page for the file structure of your submission, and adhere to it. Submissions that do not follow the required structure cannot be properly evaluated, which may affect your final marks.

If your submission status is “failed”, check the error logs to identify the issue. The evaluation process may take a few minutes.

Submit the following (zipped) files to NTU Learn before the deadline.
A short PDF report (max five A4 pages, Arial 10 pt font) detailing your model, loss functions, and any processing steps used to obtain your results. Include the F-measure on the test set and the total number of model parameters. Name your report: [YOUR_NAME]_[MATRIC_NO]_[project_1].pdf
A screenshot of the CodaBench leaderboard showing your username and best score. We will use the score recorded on CodaBench for marking; the screenshot is kept for cross-checking.
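A minimal way to package the two files for NTU Learn is sketched below with the standard-library zipfile module. The file names in the example are hypothetical and only follow the naming pattern given above; defer to the CodaBench/NTU Learn instructions for the exact structure.

```python
import zipfile
from pathlib import Path

def make_submission(report_pdf, screenshot_png, out_zip):
    """Zip the report and the leaderboard screenshot for NTU Learn."""
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in (report_pdf, screenshot_png):
            zf.write(f, arcname=Path(f).name)  # store flat, without directories

# Example (hypothetical file names):
# make_submission("JOHN_DOE_U1234567A_project_1.pdf", "leaderboard.png",
#                 "submission.zip")
```

Keeping the archive flat (no nested directories) makes it easy for graders to find both files.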

Computational Resource

You can use the computational resources assigned by the MSAI course. Alternatively, you can use Google Colab.

References
[1] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, Ping Luo. MaskGAN: Towards Diverse and Interactive Facial Image Manipulation. CVPR 2020.
[2] Christian Ledig et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. CVPR 2017.
