Feb 11, 2025
Match Group Collaborates With Reality Defender to Detect AI-Generated Photos Across the Platform
Written by Yoel Roth

AI detection platform’s analysis reveals deepfakes are rare on Match Group’s apps 

This research will guide the development of new tools to help combat AI-generated photos

Over the last few years, AI has brought about new and exciting shifts in how people connect online, from helping you pick your best profile pic to making even better profile recommendations. At Match Group, we believe machine learning and AI can be game-changers for our users and our business. We also know that AI can be used to mislead or mimic real people in very convincing ways, using AI-generated content known as deepfakes. That's why we're focused on staying ahead of those risks and ensuring AI is used to improve, rather than erode, the experiences our members have.

Our strategy for helping ensure the authenticity of our apps starts with building a deep understanding of the risks posed by AI and other emerging technologies. To help us combat new forms of fraud more effectively, we first need to understand the prevalence of AI-generated images and how they are showing up on our apps. 

Last year, we asked Reality Defender, an RSAC Award-winning AI deepfake detection platform, for an independent analysis of a representative sample of profile images drawn from Tinder and Hinge. Their findings revealed that AI-generated or AI-manipulated content likely accounts for only a small fraction of content on our platforms, with 99.4% of images showing no signs of concerning AI manipulation. Notably, even among the images that did show signs of manipulation, we found that in over 88% of cases the images were not malicious deepfakes but rather photos of authentic users edited with face-tuning apps or other image filters.
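Taken together, those two figures imply a rough upper bound on how rare malicious deepfakes were in the sample. The short sketch below simply restates that arithmetic for illustration; the sample size, detection thresholds, and exact percentages beyond those quoted above are Reality Defender's and are not published here.

```python
# Back-of-the-envelope restatement of the figures quoted above (illustrative only).

clean_share = 0.994                      # images with no signs of AI manipulation
flagged_share = 1 - clean_share          # ~0.6% showed some signs of manipulation
benign_share_of_flagged = 0.88           # "over 88%" of those were face-tuning/filters

# Upper bound on the share of sampled images that could be malicious deepfakes
possible_deepfake_share = flagged_share * (1 - benign_share_of_flagged)

print(f"Images showing signs of manipulation: {flagged_share:.1%}")      # ~0.6%
print(f"Potentially malicious deepfakes:      {possible_deepfake_share:.3%}")  # ~0.072%
```

In other words, under these assumptions fewer than one image in a thousand in the sample would even be a candidate for a malicious deepfake.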

While AI-generated photos currently make up a small percentage of profile photos on our platforms, we're working to stay ahead of the risks these technologies could create in the future. In 2025, we'll be leveraging this research to develop new educational materials that help users recognize AI-generated images, and to build these learnings into our existing spam education resources, including tips to help users spot online financial scams or crypto investment schemes.

The resources we develop for our members will include visuals with examples of AI-generated photos, face-tuning edits, and authentic images to help users identify the hallmarks of manipulated imagery, so they can remain vigilant. 

We also recognize that the use of technologies like generative AI is raising important questions about norms and behavior. Long-term, we believe this work will help us create an even better experience as we enhance our ability to effectively detect and block malicious AI-generated photos, ensuring a safer and more transparent user experience.

Ben Colman, Co-Founder and CEO of Reality Defender, highlighted the importance of this proactive approach: “Deepfakes might be rare on Match Group’s platform, but their potential to disrupt trust makes them worth addressing proactively. Match Group’s leadership in tackling this emerging issue helps to set a standard for the industry. Together, we’re building tools to flag and mitigate risks while preserving user privacy and fostering a safer environment for authentic connections.”

By staying ahead of the risks of emerging technologies like AI and collaborating with experts, we are raising the bar for safety and authenticity in online dating. Together, we’re building a trusted platform where users can connect with confidence.
