
New tools use AI fingerprints to detect altered photos and videos

A research team at Binghamton University created thousands of images using common generative AI tools and then analyzed them using signal processing techniques. Image credit: Binghamton University, State University of New York

As artificial intelligence networks become more sophisticated and accessible, digitally manipulated "deepfake" photos and videos are becoming increasingly difficult to detect. A new study led by Binghamton University, State University of New York analyzes images using frequency domain analysis techniques, looking for anomalies that might indicate they were generated by AI.

In an article published in Disruptive Technologies in Information Science VIII, graduate students Nihal Poredi and Deeraj Nagothu and Professor Yu Chen from Binghamton's Department of Electrical and Computer Engineering compared real and fake images without relying on telltale signs of image manipulation such as elongated fingers or unintelligible background text. Master's student Monica Sudarsan and Professor Enoch Solomon from Virginia State University also collaborated on the work.

The team created thousands of images using popular generative AI tools such as Adobe Firefly, PIXLR, DALL-E and Google Deep Dream, then analyzed them with signal processing techniques to characterize their frequency-domain features. The differences between the frequency-domain features of AI-generated and natural images are the basis for telling them apart with a machine learning model.
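
As a rough illustration of this kind of analysis (a sketch under assumptions, not the paper's actual pipeline), one could summarize an image's frequency content as a radially averaged log power spectrum and feed such vectors to any standard classifier. The function names and the classifier choice below are hypothetical.

```python
# Illustrative sketch only -- not the paper's pipeline. Summarizes an
# image's frequency content as a radially averaged log power spectrum,
# a fixed-length feature vector suitable for a standard classifier.
import numpy as np

def radial_spectrum_features(gray, n_bins=64):
    """Radially averaged log power spectrum of a 2D grayscale image."""
    power = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2)
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(y - h / 2, x - w / 2)
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

# Hypothetical usage, assuming `images` and `labels` (1 = AI-generated,
# 0 = natural) have already been collected:
#   from sklearn.linear_model import LogisticRegression
#   X = np.stack([radial_spectrum_features(img) for img in images])
#   clf = LogisticRegression(max_iter=1000).fit(X, labels)
```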

By comparing images with a tool called Generative Adversarial Networks Image Authentication (GANIA), the researchers can detect anomalies (called artifacts) left behind by the way the AI creates the fakes. The most common method used to create AI images is upsampling, which clones pixels to enlarge the image but leaves fingerprints in the frequency domain.
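
To see why pixel cloning leaves such a fingerprint, consider the minimal sketch below (an assumption-laden illustration, not the GANIA tool): nearest-neighbor upsampling replicates the source spectrum and applies an implicit box filter, which suppresses high-frequency energy in a characteristic pattern that a detector can measure.

```python
# Minimal sketch (not GANIA): nearest-neighbor upsampling clones pixels,
# which reshapes the image's spectrum in a measurable way.
import numpy as np

rng = np.random.default_rng(0)
pristine = rng.standard_normal((256, 256))      # stand-in for a camera image
base = rng.standard_normal((128, 128))
upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)  # clone pixels 2x2

def high_freq_fraction(img):
    """Fraction of spectral energy beyond a quarter of the band."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(y - h / 2, x - w / 2)
    return power[r > min(h, w) / 4].sum() / power.sum()

print(f"pristine : {high_freq_fraction(pristine):.3f}")   # near the uniform value
print(f"upsampled: {high_freq_fraction(upsampled):.3f}")  # visibly suppressed
```

Real detectors learn far subtler versions of this cue, but the principle is the same: the generation pipeline, not the photographed scene, dictates the shape of the spectrum.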

“When you take a picture with a real camera, you get information from all over the world – not just the person or the flower or the animal or the thing you want to photograph, but all kinds of environmental information is embedded in it,” Chen said.

“With generative AI, the images focus on what you want to generate, no matter how detailed you are. For example, you can't describe what the air quality is like, how the wind is blowing, or all the little things that are background elements.”

Nagothu added: “Although there are many new AI models, the basic architecture of these models remains largely the same. This allows us to exploit the predictive nature of content manipulation and use unique and reliable fingerprints to detect it.”

The research also explores ways in which GANIA could be used to identify the AI origin of a photo, thereby limiting the spread of misinformation through deepfake images.

“We want to be able to identify the 'fingerprints' of different AI image generators,” Poredi said. “This way, we could build platforms to authenticate visual content and prevent unwanted events related to disinformation campaigns.”

In addition to deepfake images, the team has developed a technique for detecting fake AI-based audio-video recordings. The tool, called "DeFakePro," relies on an environmental fingerprint known as the electric network frequency (ENF) signal, which is created by slight electrical fluctuations in the power grid. Like a subtle background hum, this signal is naturally embedded in media files when they are recorded.

By analyzing this signal, which is unique to the time and place of a recording, DeFakePro can verify whether the recording is authentic or has been manipulated. The technique is highly effective against deepfakes, and the team is exploring how it could secure large-scale smart surveillance networks against such AI-based forgery attacks. The approach could be effective in the fight against misinformation and digital fraud in our increasingly connected world.
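
As a rough sketch of how an ENF trace can be pulled from a recording (an illustration, not the DeFakePro implementation; the 60 Hz nominal frequency, sampling rate, and window length are assumptions), one can band-pass the audio around the mains frequency and track the dominant spectral peak per window. A genuine recording yields a smooth drift around the nominal value; splices or synthetic audio show jumps or no coherent track at all.

```python
# Rough ENF-extraction sketch (not the DeFakePro implementation).
# Assumes a 60 Hz grid; many regions use 50 Hz.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def enf_track(audio, fs, nominal=60.0, window_s=2.0):
    """Estimate the electric network frequency in consecutive windows."""
    # Narrow band-pass around the nominal mains frequency.
    sos = butter(4, [nominal - 1.0, nominal + 1.0],
                 btype="bandpass", fs=fs, output="sos")
    narrow = sosfiltfilt(sos, audio)
    win = int(window_s * fs)
    track = []
    for start in range(0, len(narrow) - win + 1, win):
        seg = narrow[start:start + win] * np.hanning(win)
        spec = np.abs(np.fft.rfft(seg, n=8 * win))   # zero-pad for finer bins
        freqs = np.fft.rfftfreq(8 * win, d=1.0 / fs)
        track.append(freqs[np.argmax(spec)])
    return np.array(track)

# Simulated demo: a faint hum drifting around 60 Hz, buried in noise.
fs = 8000
t = np.arange(0, 20, 1.0 / fs)
hum = 0.01 * np.sin(2 * np.pi * 60 * t + 0.5 * np.sin(0.1 * t))
noise = 0.1 * np.random.default_rng(1).standard_normal(t.size)
print(enf_track(hum + noise, fs))   # values hover near 60 Hz
```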

“Disinformation is one of the biggest challenges facing the global community today,” Poredi said. “The widespread use of generative AI in many fields has led to its misuse. Combined with our reliance on social media, this has created a flashpoint for a disinformation disaster. This is particularly evident in countries where restrictions on social media and freedom of expression are minimal. Therefore, it is imperative to ensure the integrity of data shared online, especially audiovisual data.”

Although generative AI models are being misused, they are also contributing significantly to the advancement of imaging technology. Researchers want to help the public distinguish between fake and real content – but keeping up with the latest innovations can be challenging.

“AI is evolving so quickly that once you develop a deepfake detector, the next generation of that AI tool will take those anomalies into account and fix them,” Chen said. “Our work is trying to do something unconventional.”

Further information:
Nihal Poredi et al., Generative Adversarial Networks-based AI-generated image authentication using frequency domain analysis, Disruptive Technologies in Information Science VIII (2024). DOI: 10.1117/12.3013240

Provided by Binghamton University
