Reverse Personalization
Abstract
Recent text-to-image diffusion models have demonstrated a remarkable ability to generate realistic facial images conditioned on textual prompts and human identities, enabling the creation of personalized facial imagery. However, existing prompt-based methods for removing or modifying identity-specific features either rely on the subject being well-represented in the distribution of the pre-trained model or require model fine-tuning for each specific identity. In this work, we analyze the identity generation process in diffusion models and introduce a reverse personalization framework for effective face anonymization. Our approach leverages conditional diffusion inversion, allowing direct manipulation of images without relying on text prompts. To generalize beyond subjects present in the model's training data, we incorporate an identity-guided conditioning branch. Unlike prior anonymization methods, which cannot control facial attributes, our framework supports flexible, attribute-controllable anonymization. We demonstrate that our method achieves state-of-the-art performance in identity removal, attribute preservation, and image quality, offering a practical and scalable solution for privacy-preserving face generation.