This project is dedicated to understanding how different anonymization techniques, particularly face blurring, affect the performance of detection models. By systematically blurring faces in training datasets and evaluating the resulting model accuracies, we aim to identify anonymization strategies that preserve the utility of the data while ensuring privacy.
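As a rough illustration of the kind of anonymization studied here, the sketch below applies a simple box blur to face bounding boxes before an image enters the training set. The helper `blur_faces` and its box format are hypothetical; the project may instead use Gaussian blur, pixelation, or a learned face detector to locate the regions.

```python
import numpy as np

def blur_faces(image: np.ndarray, boxes, k: int = 15) -> np.ndarray:
    """Box-blur each face region in an H x W x C uint8 image.

    boxes: iterable of (x, y, w, h) face rectangles, e.g. produced by
    any face detector. k is the (odd) blur kernel size; larger k gives
    stronger anonymization. Illustrative sketch only.
    """
    out = image.astype(np.float32).copy()
    pad = k // 2
    for (x, y, w, h) in boxes:
        region = out[y:y + h, x:x + w]
        # Pad with edge values so the blur is defined at region borders.
        padded = np.pad(region, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        blurred = np.zeros_like(region)
        for dy in range(k):
            for dx in range(k):
                blurred += padded[dy:dy + h, dx:dx + w]
        out[y:y + h, x:x + w] = blurred / (k * k)
    return out.astype(np.uint8)
```

Running the same training pipeline on the original and the blurred copies of a dataset, and comparing detection metrics, is one way to quantify the utility cost of a given blur strength.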
Project Goals
The primary objectives of this project are:
- To assess the impact of various face blurring techniques on the accuracy of detection models.
- To develop an anonymization strategy that minimizes bias and maintains high model performance.
Prerequisites
Participants in this project should have:
- Python programming experience
- Prior experience with machine learning and computer vision (PyTorch)
- Familiarity with camera models is recommended
Contact
Please send an email to [email protected] if you are interested in this project.