CelebAMask Face Parsing
Project 1 Specification
Important Dates
Development Phase: 21 Feb 2025 12:00 AM – 21 Mar 2025 12:00 AM (UTC+8)
Test Phase: 21 Mar 2025 12:00 AM – 28 Mar 2025 12:00 AM (UTC+8)
Group Policy
This is an individual project.
Challenge Description
Face parsing assigns a pixel-wise label to each semantic component of a face, e.g., eyes, nose, and mouth. The goal of this mini challenge is to design and train a face parsing network. We will use data from the CelebAMask-HQ Dataset [1] (see Figure 1). For this challenge, we have prepared a mini-dataset consisting of 1000 training and 100 validation pairs of images, where both the images and the annotations have a resolution of 512 × 512.
The performance of the network will be evaluated based on the F-measure between the predicted masks and the ground truth of the test set (the ground truth of the test set will not be released).
Figure 1. Sample images in CelebAMask-HQ
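As a sketch of how the F-measure between a predicted mask and a ground-truth mask might be computed for a single binary class (this assumes pixel-level precision and recall; the organizers' exact protocol, e.g. how scores are averaged across classes, is not specified here):

```python
import numpy as np

def f_measure(pred, gt, beta=1.0):
    """F-measure between two binary masks (nonzero = foreground pixel)."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()          # true-positive pixels
    precision = tp / max(pred.sum(), 1)          # fraction of predicted fg that is correct
    recall = tp / max(gt.sum(), 1)               # fraction of true fg that is recovered
    if precision + recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Toy 2x2 example: 2 of 3 foreground pixels agree, so P = R = F1 = 2/3
pred = np.array([[1, 0], [1, 1]])
gt = np.array([[1, 1], [0, 1]])
print(round(f_measure(pred, gt), 4))  # -> 0.6667
```

For a multi-class parsing map, one common convention is to binarize each class in turn and average the per-class scores; check the CodaBench scoring script for the exact definition used in this challenge.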
Assessment Criteria
We will evaluate and rank the performance of your network model on our given 100 unseen test images based on the F-measure.
The higher the rank of your solution, the higher the score you will receive. In general, scores will be awarded based on the Table below.
Percentile in ranking | Score
≤ 5%                  | 20
≤ 15%                 | 18
≤ 30%                 | 16
≤ 50%                 | 14
≤ 75%                 | 12
≤ 100%                | 10
*                     | 0
Notes:
Submission Guideline
● Download dataset: this link
● Train and test your network using our provided training set.
● [Optional] Evaluate your model on an unseen CodaBench validation set during development, with up to 5 submissions per day and 60 in total.
Restrictions
● To maintain fairness, your model must contain fewer than 1,821,085 trainable parameters, which is 120% of the number of trainable parameters in SRResNet [2] (your baseline network). You can use
sum(p.numel() for p in model.parameters() if p.requires_grad)
to count the trainable parameters in your network.
● No external data or pretrained models are allowed in this mini challenge. You may only train your models from scratch using the 1000 image pairs in the given training dataset.
● You should not use an ensemble of models.
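The parameter budget can be verified before submission. A minimal sketch using PyTorch (the tiny model below is a placeholder for illustration only, not the SRResNet baseline; the 19-class output is an assumption based on the usual CelebAMask-HQ label set):

```python
import torch.nn as nn

PARAM_LIMIT = 1_821_085  # 120% of SRResNet's trainable parameter count

def count_trainable_params(model: nn.Module) -> int:
    """Count only parameters that will be updated by the optimizer."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Placeholder model, not the baseline: 3 -> 64 -> 19 channels
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),   # 3*64*9 + 64 = 1,792 params
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 19, kernel_size=1),             # 64*19 + 19 = 1,235 params
)
n = count_trainable_params(model)
print(n, n < PARAM_LIMIT)  # -> 3027 True
```

Note that filtering on `p.requires_grad` matters if you freeze any layers; otherwise frozen weights would still be counted against your budget.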
Step-by-step Submission Procedure
We host the validation and test sets on CodaBench. Please follow the guidelines below to ensure that your results are recorded.
The website of the competition is https://www.codabench.org/competitions/5726
1. Register a CodaBench account with your NTU email (ending with @e.ntu.edu.sg), using your matric number as your username.
2. Register for this competition and wait for approval.
3. Submit a file containing your prediction results as follows. In the test phase, also include your source code and trained model weights; these are not required in the development phase.
1) A short PDF report (max five A4 pages, Arial 10 font) detailing your model, loss functions, and any processing steps used to obtain your results. Include the F-measure on the test set and the total number of model parameters. Name your report: [YOUR_NAME]_[MATRIC_NO]_[project_1].pdf
2) A screenshot of the CodaBench leaderboard showing your username and best score. We will use the leaderboard score for marking and keep your screenshot for cross-checking.
Computational Resource
References
[1] Cheng-Han Lee, Ziwei Liu, Lingyun Wu, Ping Luo. MaskGAN: Towards Diverse and Interactive Facial Image Manipulation. In CVPR, 2020.
[2] Christian Ledig et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In CVPR, 2017.