IAS (Image Authentication Software)
“Pics or it didn’t happen” is a thing of the past. Can you tell if an image is AI-generated?
With rapidly advancing AI-based technologies for altering or generating images, the question of an image’s authenticity becomes increasingly relevant. So far, there are few reliable ways to distinguish ‘artificial’ images from ‘real’ ones. IAS is a fictional software tool designed to address this issue by analyzing an image’s metadata, including AI-related data, in order to assess its authenticity.

Speculative Software;
Image Authentication;
Prototyping;
Contributors:
Julia Hann von Weyhern
Tools:
Figma
In spring 2023, social media feeds were flooded with eerily realistic images: Vladimir Putin kneeling before Xi Jinping, an explosion at the Pentagon, Joe Biden playing in the rain. None of it had actually happened – these were AI-generated fakes.
With powerful image-generation tools now widely accessible, distinguishing real from artificial visuals is becoming increasingly difficult.
The problem? There is currently no widely adopted standard, regulation, or technical safeguard that allows us to reliably distinguish AI-generated images from real ones.
This is especially concerning because images have long been perceived as trustworthy sources of truth. Photographs, in particular, are regarded as factual representations of reality, hence the popular phrase, “pics or it didn’t happen.” Research has shown that people are more likely to believe a piece of information when it’s accompanied by an image. This deep-rooted trust in visual media has historically been exploited to manipulate public perception, such as when Stalin famously doctored photos of Lenin addressing a crowd, erasing individuals who had fallen out of political favor.
History has shown how easily visual media can be manipulated; AI now makes that process faster, more convincing, and more accessible than ever. As images continue to play a central role in how we understand the world, the question of authenticity is no longer optional. It is essential.

Authenticity through Metadata
AI-generated images, like real ones, contain Exif metadata. The software reads and visualizes that data and uses AI to fill in missing details, helping assess whether an image has been altered or artificially created, even when the information is incomplete.
Designed as a native Mac app, the tool mimics a professional utility for verifying visual content before sharing it, especially in public contexts. It highlights the growing need for careful handling of visual misinformation.
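IAS exists only as a Figma prototype, but the metadata-reading step it imagines is easy to sketch. The snippet below uses Python and the Pillow library (an assumption for illustration; the project specifies no implementation) to extract Exif data and make it readable:

```python
# A minimal sketch of the metadata-reading step in Python with Pillow.
# IAS itself is a Figma prototype; this is an illustration, not its code.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    """Return Exif metadata as a {tag_name: value} dictionary."""
    with Image.open(path) as img:
        exif = img.getexif()
        # Top-level (IFD0) tags such as DateTime and Software ...
        data = {TAGS.get(tag, str(tag)): value for tag, value in exif.items()}
        # ... plus the Exif sub-IFD (pointer tag 0x8769), which holds
        # e.g. DateTimeOriginal, the original capture time.
        for tag, value in exif.get_ifd(0x8769).items():
            data[TAGS.get(tag, str(tag))] = value
        return data

if __name__ == "__main__":
    for name, value in read_exif("example.jpg").items():
        print(f"{name}: {value}")
```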
Until recently, no regulations required AI-generated images to be labeled. Users relied on visual clues or context, but these methods become less reliable as AI evolves. Inspired by the EU’s AI Act, the software applies its own metadata standard, covering the following fields (sketched as a data record after the list):
- Creation and modification time
- Software and ownership info
- Risk level (based on the AI Act)
- The original prompt used for generation
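The project does not define a concrete schema, but the listed fields could be captured in a record like this hypothetical sketch (all names are illustrative, not part of any real standard):

```python
# A hypothetical record for the fields listed above; field names are
# invented for this sketch, not taken from the project or any standard.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AuthenticityRecord:
    created: datetime | None    # creation time
    modified: datetime | None   # last modification time
    software: str | None        # generating or editing software
    owner: str | None           # ownership information
    risk_level: str | None      # AI Act category: "minimal", "limited", "high", "unacceptable"
    prompt: str | None          # original generation prompt, if any
```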
The software analyzes the available metadata for completeness, cross-checks individual data points, and identifies potential inconsistencies. In addition, users can conduct their own research directly within the tool, for example by opening map services or comparing GPS coordinates, to verify the image’s content independently.
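As a rough illustration of what such completeness and cross-checks might look like, here is a simplified sketch operating on the dictionary returned by read_exif() above; the rules are stand-ins for the richer checks the concept describes:

```python
# Simplified consistency checks over the dictionary returned by read_exif()
# above; the rules are illustrative stand-ins, not the project's actual logic.
from datetime import datetime

EXIF_TIME = "%Y:%m:%d %H:%M:%S"  # the timestamp format Exif prescribes

def check_consistency(exif: dict) -> list[str]:
    """Return human-readable warnings; an empty list means nothing was flagged."""
    warnings = []
    created = exif.get("DateTimeOriginal")
    modified = exif.get("DateTime")
    if created is None:
        warnings.append("No creation time recorded.")
    if created and modified:
        try:
            if datetime.strptime(modified, EXIF_TIME) < datetime.strptime(created, EXIF_TIME):
                warnings.append("Modification time precedes creation time.")
        except ValueError:
            warnings.append("Timestamps deviate from the standard Exif format.")
    if not exif.get("Software"):
        warnings.append("No software field present; the image's origin is unverifiable.")
    return warnings
```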
Data Reconstruction
The software also offers AI-supported data reconstruction. When information is missing or inconsistent, this feature helps analyze and interpret the available data. For example, it can recreate possible prompts or compare how different AI models respond to the same input.
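One prompt-recovery path can be grounded in real tooling: some generators embed their settings directly in the file. The AUTOMATIC1111 Stable Diffusion web UI, for instance, writes them into a PNG text chunk; the key name "parameters" is specific to that tool and assumed in the sketch below:

```python
# One concrete recovery path: the AUTOMATIC1111 Stable Diffusion web UI stores
# its generation settings in a PNG text chunk. The key name "parameters" is
# specific to that tool and assumed here; other generators use other schemes.
from PIL import Image

def recover_embedded_prompt(path: str) -> str | None:
    with Image.open(path) as img:
        return img.info.get("parameters")  # None if no such chunk exists
```

When no embedded prompt exists, the concept envisions reconstructing candidate prompts with generative models and comparing their outputs, a step with no equally simple code equivalent.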
Additionally, the software can analyze the visual content of an image to detect subtle signs of AI generation: details that may go unnoticed by the human eye. Together, these tools provide a well-founded understanding of how trustworthy an image is, how it was created, and where it may have originated.
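As a closing illustration: the project leaves the visual-analysis method open, but one family of techniques explored in detection research is frequency-domain analysis of generator artifacts. The toy sketch below measures how much of an image’s spectral energy sits outside the low frequencies; the cut-off is arbitrary and this is not the fictional software’s actual method:

```python
# A toy illustration of one family of detection techniques from research:
# frequency-domain analysis. Real detectors are far more sophisticated; the
# "central quarter = low frequency" cut-off is arbitrary, for demonstration.
import numpy as np
from PIL import Image

def high_frequency_ratio(path: str) -> float:
    """Share of spectral energy outside the central low-frequency region."""
    with Image.open(path) as img:
        gray = np.asarray(img.convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    low = spectrum[cy - h // 8 : cy + h // 8, cx - w // 8 : cx + w // 8].sum()
    return 1.0 - low / spectrum.sum()
```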