Improving Identity-Robustness for Face Models (2304.03838v2)
Abstract: Despite the success of deep-learning models on many tasks, there are concerns that such models learn shortcuts and lack robustness to irrelevant confounders. For models trained directly on human faces, a sensitive confounder is human identity: many face-related tasks should ideally be identity-independent and perform uniformly across individuals (i.e., be fair). One way to measure and enforce such robustness and uniformity is to impose it during training, which assumes identity-related information is available at scale. Due to privacy concerns and the cost of collecting such information, however, this is often not the case; most face datasets contain only input images and their task-related labels. Improving identity-related robustness without such annotations is therefore of great importance. Here, we explore using face-recognition embedding vectors as proxies for identities to enforce such robustness. We propose to use the structure of the face-recognition embedding space to implicitly emphasize rare samples within each class, by weighting samples according to their conditional inverse density (CID) in the proxy embedding space. Our experiments suggest that this simple sample-weighting scheme not only improves training robustness but often improves overall performance as a result. We also show that employing such constraints during training yields models that are significantly less sensitive to different levels of bias in the dataset.
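The abstract's core idea, weighting each sample by its conditional inverse density in a proxy embedding space, can be illustrated with a minimal sketch. The function name `cid_weights`, the use of mean k-nearest-neighbor distance as the inverse-density estimate, and the final normalization are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np

def cid_weights(embeddings, labels, k=5):
    """Conditional inverse-density (CID) sample weights (illustrative sketch).

    For each sample, density is estimated within its own class (the
    "conditional" part) as the mean distance to its k nearest same-class
    neighbours in the face-recognition embedding space; the weight is
    proportional to that distance (an inverse-density proxy), so isolated
    samples -- e.g. rare identities -- are up-weighted.
    """
    weights = np.zeros(len(labels), dtype=float)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        if len(idx) < 2:
            continue                               # singleton class: weight stays 0
        emb = embeddings[idx]                      # (n_c, d)
        # pairwise Euclidean distances within the class
        d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)                # exclude self-distance
        kk = min(k, len(idx) - 1)
        knn = np.sort(d, axis=1)[:, :kk]           # k nearest same-class neighbours
        weights[idx] = knn.mean(axis=1)            # large = low density = rare
    return weights / weights.sum()                 # normalise to sum to 1
```

In training, these weights would multiply the per-sample loss, so that within each task class the loss is dominated by samples from sparsely populated regions of the identity-proxy space.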