Is this real?
Substantial Evidence of Manipulation
TrueMedia.org verdict: substantial evidence of manipulation (verified by a human analyst).
Analysis
Results
Generative AI: Substantial Evidence
Visual Noise: Little Evidence
Disclaimer: TrueMedia.org uses both leading vendors and state-of-the-art academic AI methods. However, errors can occur.
AI-Generated Insights
The image shows an unusual scene that would likely have been widely reported if real. Additionally, the account name "Buttcrack Sports" suggests humorous or satirical intent. The visuals themselves appear edited or staged, typical of images digitally manipulated or AI-generated for satire or commentary.
Generative AI (5 detectors)
Detects signatures of GenAI tools
Substantial Evidence
Multi-Modal AI
Analyzes the image for indications it was generated by AI or otherwise digitally manipulated.
Substantial Evidence
99% confidence
AI Generated Image Detection
Detects AI-generated photo-realistic images created, for example, by generative adversarial networks and diffusion models such as Stable Diffusion, MidJourney, DALL·E 2, and others.
Substantial Evidence
91% confidence
Universal Fake Detector Analysis
Using the feature space of a large, pretrained vision-language model, this model analyzes images to determine if they were generated by a variety of popular generative and autoregressive models.
Little Evidence
3% confidence
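The idea behind this detector, separating real from generated images in the feature space of a large pretrained encoder, can be illustrated with a minimal nearest-neighbor sketch. Everything here is a stand-in, not TrueMedia.org's implementation: `extract_features` uses a fixed random projection in place of a real vision-language encoder such as CLIP, and the labeled image banks are synthetic noise rather than real and generated photos.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for a pretrained vision-language encoder (e.g. CLIP).
    A fixed random projection plays the role of the frozen feature space."""
    rng = np.random.default_rng(0)               # fixed seed: same "encoder" every call
    proj = rng.standard_normal((image.size, 64))
    feat = image.ravel() @ proj
    return feat / np.linalg.norm(feat)           # unit-normalize for cosine similarity

def nearest_neighbor_verdict(query_feat, real_feats, fake_feats):
    """Label the query by whichever labeled bank holds its nearest neighbor
    (cosine similarity, since all features are unit vectors)."""
    sim_real = float(np.max(real_feats @ query_feat))
    sim_fake = float(np.max(fake_feats @ query_feat))
    return "fake" if sim_fake > sim_real else "real"

# Synthetic stand-ins for banks of labeled real photos and generated images.
rng = np.random.default_rng(1)
real_images = [rng.normal(size=(8, 8)) for _ in range(5)]
fake_images = [rng.normal(size=(8, 8)) for _ in range(5)]
real_feats = np.stack([extract_features(im) for im in real_images])
fake_feats = np.stack([extract_features(im) for im in fake_images])

# A query drawn from the generated bank lands nearest its own bank.
print(nearest_neighbor_verdict(extract_features(fake_images[0]), real_feats, fake_feats))
```

A production system would embed labeled images with the actual pretrained model and typically train a light classifier (nearest neighbor or linear probe) on top of the frozen features.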
Image Generator Analysis
Analyzes the image for indications that it was generated by popular AI image generators such as MidJourney, DALL·E, Stable Diffusion, and thispersondoesnotexist.com.
Little Evidence
AI Generated Image Detection
The model was trained on a large dataset comprising millions of artificially generated images and human-created images such as photographs, digital and traditional art, and memes sourced from across the web.
Visual Noise (1 detector)
Variations in pixels and color
Little Evidence
10% confidence
Diffusion-Generated Image Detection
Evaluates the discrepancy between the original image and the version reconstructed by pre-trained diffusion models, which are known to capture the visual noise commonly associated with the diffusion process.
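The reconstruction-discrepancy idea can be sketched in a few lines. This is a toy illustration, not the deployed detector: a 3×3 box blur stands in for the expensive step of inverting and regenerating the image with a pretrained diffusion model, and the 0.05 threshold is arbitrary. The premise carried over from the description is that reconstruction error separates images the reconstructing model can reproduce from those it cannot.

```python
import numpy as np

def reconstruct(image: np.ndarray) -> np.ndarray:
    """Stand-in for reconstruction by a pretrained diffusion model.
    A 3x3 box blur reproduces smooth images well and noisy images poorly."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def reconstruction_error(image: np.ndarray) -> float:
    """Mean squared discrepancy between the image and its reconstruction."""
    return float(np.mean((image - reconstruct(image)) ** 2))

def verdict(image: np.ndarray, threshold: float = 0.05) -> str:
    """Toy decision rule: low error means the reconstructing model can
    reproduce the image, which this detector treats as evidence of generation."""
    if reconstruction_error(image) < threshold:
        return "substantial evidence"
    return "little evidence"

smooth = np.linspace(0.0, 1.0, 64).reshape(8, 8)      # easily reconstructed
noisy = np.random.default_rng(2).normal(size=(8, 8))  # poorly reconstructed
print(verdict(smooth), "/", verdict(noisy))
```

In the real method the reconstruction comes from inverting the image to noise and denoising it again with the pretrained diffusion model, so diffusion-generated inputs reconstruct with noticeably lower error than camera photos.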
Details
File type and size: .jpg, 74KB
Processing time (time to analyze media): > 90m
Analyzed on: Fri, Mar 1, 2024