New tools use AI fingerprints to detect altered photos and videos

As artificial intelligence networks become more sophisticated and accessible, digitally manipulated “deepfake” photos and videos are becoming increasingly difficult to detect.

A new study led by Binghamton University researchers applies frequency-domain analysis to images, looking for anomalies that can indicate they were generated by AI.

In a paper published in the proceedings of Disruptive Technologies in Information Sciences, graduate student Nihal Poredi; Deeraj Nagothu, MS '16, PhD '23; and Professor Yu Chen of the Department of Electrical and Computer Engineering in the Thomas J. Watson College of Engineering and Applied Science compared real and fake images, setting aside telltale signs of image manipulation such as elongated fingers or unintelligible background text. Also collaborating on the work were master's student Monica Sudarsan and Professor Enoch Solomon of Virginia State University.

The team created thousands of images with popular generative AI tools such as Adobe Firefly, PIXLR, DALL-E and Google Deep Dream, then analyzed them with signal-processing techniques to characterize their frequency-domain features. The differences between the frequency-domain features of AI-generated images and natural ones are the basis for telling them apart with a machine learning model.
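To make that concrete, here is a minimal sketch of the kind of frequency-domain feature a detector could use. It illustrates the general approach, not the team's actual pipeline: the radial-spectrum feature and the scikit-learn classifier named in the comments are assumptions.

```python
# Illustrative sketch, not the GANIA pipeline: summarize an image's frequency
# content as an azimuthally averaged spectrum (a common forensic feature),
# then train any standard classifier on those fixed-length vectors.
import numpy as np
from numpy.fft import fft2, fftshift

def radial_spectrum(image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Azimuthally averaged log-magnitude spectrum of a grayscale image."""
    spectrum = np.log1p(np.abs(fftshift(fft2(image))))
    h, w = spectrum.shape
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - h // 2, x - w // 2)                    # distance from DC
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=spectrum.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)                     # mean energy per band

# A classifier such as scikit-learn's SVC could then separate natural from
# AI-generated images on these features (hypothetical training data):
#   from sklearn.svm import SVC
#   clf = SVC().fit([radial_spectrum(im) for im in train_images], train_labels)
```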

When comparing images with a tool called Generative Adversarial Networks Image Authentication (GANIA), researchers can detect anomalies, known as artifacts, because AI generators create these fakes in characteristic ways. The most common method for producing AI images is upsampling, which clones pixels to enlarge an image but leaves fingerprints behind in the frequency domain.
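The upsampling fingerprint can be seen in a toy experiment. The sketch below assumes nearest-neighbor pixel cloning as the upsampling method and uses white noise as a stand-in for image content; the point is only that cloning pixels suppresses spectral energy near the Nyquist frequency, a dip a detector can measure.

```python
# Toy demonstration of the upsampling fingerprint, not the researchers'
# detector: 2x pixel cloning multiplies the spectrum by a box-filter response
# that collapses near the Nyquist frequency, which natural content lacks.
import numpy as np
from numpy.fft import fft2, fftshift

def nyquist_to_low_ratio(img: np.ndarray) -> float:
    """Mean spectral magnitude near the horizontal Nyquist frequency,
    relative to the low-frequency columns around DC."""
    spec = np.abs(fftshift(fft2(img - img.mean())))
    w = spec.shape[1]
    near_nyquist = spec[:, :4].mean()                  # leftmost columns = Nyquist
    near_dc = spec[:, w // 2 - 4 : w // 2 + 4].mean()  # columns around DC
    return near_nyquist / near_dc

rng = np.random.default_rng(0)
plain = rng.normal(size=(128, 128))                           # stand-in "natural" image
cloned = np.kron(rng.normal(size=(64, 64)), np.ones((2, 2)))  # pixel-cloned 2x upscale

print(f"plain:  {nyquist_to_low_ratio(plain):.3f}")   # close to 1 (flat spectrum)
print(f"cloned: {nyquist_to_low_ratio(cloned):.3f}")  # far below 1 (Nyquist dip)
```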

“When you take a picture with a real camera, you get information from the whole world, not just the person or the flower or the animal or the object you want to photograph; all kinds of environmental information is embedded in there,” Chen said. “Generative AI focuses on what you ask it to generate, and no matter how detailed you are, there's no way to describe the air quality, how the wind is blowing, or all the little things that are background elements.”

Nagothu added: “Although many new AI models are emerging, their underlying architecture remains largely the same. This allows us to exploit the predictable nature of their content manipulation and leverage unique, reliable fingerprints to detect it.”

The research also explores ways GANIA could be used to identify the AI platform that generated a photo, thereby limiting the spread of misinformation through deepfake images.

“We want to be able to identify the 'fingerprints' of different AI image generators,” Poredi said. “This way, we could build platforms to authenticate visual content and prevent unwanted events related to disinformation campaigns.”

In addition to deepfake images, the team has developed a technique for detecting fake AI-generated audio and video recordings. The tool, called “DeFakePro,” uses an environmental fingerprint known as the electric network frequency (ENF) signal, which is created by slight electrical fluctuations in the power grid. Like a subtle background hum, this signal is naturally embedded in media files at the time of recording.

By analyzing this signal, which is unique to the time and place of recording, DeFakePro can verify whether a recording is authentic or has been manipulated. The technique is highly effective against deepfakes, and the team is investigating how it could harden large-scale smart surveillance networks against such AI-based forgery attacks. The approach could prove valuable in the fight against misinformation and digital fraud in our increasingly connected world.
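As a rough sketch of how such a fingerprint can be read out, the snippet below estimates an ENF trace from an audio signal. It assumes a 60 Hz grid (50 Hz in much of the world) and uses SciPy's spectrogram; DeFakePro's actual authentication mechanisms go well beyond this.

```python
# Minimal ENF-trace extraction sketch under stated assumptions; not
# DeFakePro itself. The mains hum embedded in a recording drifts slightly
# over time, and that drift pattern is unique to when/where it was recorded.
import numpy as np
from scipy.signal import spectrogram

def extract_enf(audio: np.ndarray, fs: int, mains_hz: float = 60.0) -> np.ndarray:
    """Dominant frequency near the mains frequency in each analysis window."""
    f, t, Sxx = spectrogram(audio, fs=fs, nperseg=2 * fs, noverlap=fs)
    band = (f >= mains_hz - 1.0) & (f <= mains_hz + 1.0)  # +/-1 Hz search band
    return f[band][np.argmax(Sxx[band, :], axis=0)]       # one estimate per window

# Matching this trace against a reference ENF log for the claimed time and
# place, or across the audio and video tracks of a single file, exposes
# splices and re-recordings as discontinuities or mismatches.
```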

“Disinformation is one of the biggest challenges facing the global community today,” Poredi said. “The widespread use of generative AI in many fields has led to its misuse. Combined with our dependence on social media, this has created a flashpoint for a disinformation disaster. This is particularly evident in countries where restrictions on social media and speech are minimal. Therefore, it is imperative to ensure the integrity of data shared online, especially audiovisual data.”

Although generative AI models are being misused, they are also making a significant contribution to the advancement of imaging technology. Researchers want to help the public distinguish between fake and real content—but keeping up with the latest innovations can be challenging.

“AI is evolving so quickly that once you develop a deepfake detector, the next generation of that AI tool will take those anomalies into account and fix them,” Chen said. “Our work is trying to do something unconventional.”