Reflections on Diversity: A Real-time Virtual Mirror for Inclusive 3D Face Transformations

IEEE EEITE 2025
Paraskevi Valergaki1,2, Antonis Argyros1,2, Giorgos Giannakakis2,3*, Anastasios Roussos2*
1Computer Science Department, University of Crete, Greece
2Institute of Computer Science (ICS), Foundation for Research & Technology – Hellas (FORTH), Greece
3Department of Electronic Engineering, Hellenic Mediterranean University, Chania, Greece
*Joint last authorship

Abstract

Real-time 3D face manipulation has significant applications in virtual reality, social media and human-computer interaction. This paper introduces a novel system, which we call Mirror of Diversity (MOD), that combines Generative Adversarial Networks (GANs) for texture manipulation and 3D Morphable Models (3DMMs) for facial geometry to achieve realistic face transformations that reflect various demographic characteristics, emphasizing the beauty of diversity and the universality of human features.
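To make the geometry/texture split concrete, the following is a minimal sketch (not the authors' implementation): a linear 3DMM expresses the face mesh as a mean shape plus identity and expression offsets, while a separate GAN-based step would edit the texture. All names, dimensions, and the placeholder transform_texture function are illustrative assumptions.

import numpy as np

N_VERTICES = 5000        # hypothetical mesh resolution
N_ID, N_EXPR = 80, 64    # hypothetical identity/expression coefficient counts

# Linear 3DMM: vertices = mean shape + identity offsets + expression offsets.
mean_shape = np.zeros(3 * N_VERTICES)
id_basis = np.random.randn(3 * N_VERTICES, N_ID) * 0.01      # placeholder basis
expr_basis = np.random.randn(3 * N_VERTICES, N_EXPR) * 0.01  # placeholder basis

def reconstruct_shape(id_coeffs, expr_coeffs):
    """Return an (N_VERTICES, 3) mesh from 3DMM coefficients."""
    shape = mean_shape + id_basis @ id_coeffs + expr_basis @ expr_coeffs
    return shape.reshape(N_VERTICES, 3)

def transform_texture(texture, attribute, strength):
    """Placeholder for a GAN-based texture edit: a real system would encode the
    texture, shift it along a learned attribute direction, and decode.
    Here the texture is returned unchanged."""
    return texture

# Example: reconstruct a neutral face from zero coefficients.
mesh = reconstruct_shape(np.zeros(N_ID), np.zeros(N_EXPR))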

As participants sit in front of a computer monitor with a camera positioned above, their facial characteristics are captured in real time. Our system provides a dynamic, responsive “mirror” effect, allowing the digital 3D model to follow the participant’s motions and offering an immersive virtual reflection. Participants can further alter their digital face reconstruction with transformations that reflect different demographic characteristics, such as gender and ethnicity (e.g., African, Asian, or European appearance).
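The real-time “mirror” behaviour can be pictured with the sketch below, assuming an OpenCV webcam feed; fit_face_model and render_mesh are hypothetical stand-ins for the system's tracking and rendering components, which are not described at code level here.

import cv2

def fit_face_model(frame):
    """Hypothetical per-frame 3D face fit; returns None when no face is detected."""
    return None

def render_mesh(fit, frame):
    """Hypothetical renderer overlaying the reconstructed 3D face on the frame."""
    return frame

cap = cv2.VideoCapture(0)                  # camera positioned above the monitor
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)             # horizontal flip, like a real mirror
    fit = fit_face_model(frame)
    output = render_mesh(fit, frame) if fit is not None else frame
    cv2.imshow("Mirror of Diversity (sketch)", output)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()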

Another feature of our system, which we call “Collective Face”, generates an averaged face representation from multiple participants’ facial data. A comprehensive evaluation protocol is implemented to assess the realism and demographic accuracy of the transformations. Qualitative feedback is gathered through participant questionnaires, which include comparisons of MOD transformations with similar filters on platforms like Snapchat and TikTok, focusing on realism, feature preservation, and faithfulness to demographic representation. Additionally, quantitative analysis is conducted using a pretrained Convolutional Neural Network that predicts gender and ethnicity, to validate the accuracy of demographic transformations.
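As an illustration of the “Collective Face” idea, the sketch below averages facial data across participants. Whether MOD averages 3DMM coefficients, texture maps, or both is an assumption here; the function name and array sizes are hypothetical.

import numpy as np

def collective_face(id_coeff_list, texture_list):
    """Average identity coefficients and aligned UV texture maps across participants."""
    mean_id_coeffs = np.mean(np.stack(id_coeff_list), axis=0)
    mean_texture = np.mean(np.stack(texture_list).astype(np.float64), axis=0)
    return mean_id_coeffs, mean_texture.astype(np.uint8)

# Example with three hypothetical participants: 80 identity coefficients each
# and 256x256 RGB UV texture maps.
rng = np.random.default_rng(0)
coeffs = [rng.normal(size=80) for _ in range(3)]
textures = [rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8) for _ in range(3)]
avg_coeffs, avg_texture = collective_face(coeffs, textures)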

[Figure: MOD transformation examples]
Examples of input and transformed faces using the MOD software. The transformations include face reconstruction, gender-based transformations (male and female), and ethnicity-based transformations (Asian and African). Each transformation is visualized in three formats: the input face, 3D shape with texture, and 3D shape only.

Video

For a detailed presentation, please watch our full demo video.

BibTeX Citation

@misc{valergaki2025reflectionsdiversityrealtimevirtual,
  title={Reflections on Diversity: A Real-time Virtual Mirror for Inclusive 3D Face Transformations},
  author={Paraskevi Valergaki and Antonis Argyros and Giorgos Giannakakis and Anastasios Roussos},
  year={2025},
  eprint={2503.20819},
  archivePrefix={arXiv},
  primaryClass={cs.GR},
  url={https://arxiv.org/abs/2503.20819}
}