
A.I. Videos Have Never Been Better. Can You Tell What’s Real?

Post by: Anis Farhan

Blurred lines emerge

It used to be easy to tell when a video was fake. But not anymore. AI-generated videos have improved at a breakneck pace in recent months, thanks to tools like OpenAI’s Sora, Runway Gen-3, and Pika Labs. These platforms can now generate hyper-realistic scenes—complete with human-like movements, facial expressions, and dynamic lighting—that are almost indistinguishable from real footage. The result is a growing wave of content that looks authentic but is entirely artificial.


How the tech got here

The leap in realism comes from major advances in video diffusion models—machine learning systems that generate visuals frame-by-frame using prompts or source images. Early AI videos looked dreamy, glitchy, and distorted. But now, platforms like Sora can produce detailed cinematic shots, smooth transitions, and complex physics simulations, often in 1080p or higher resolution.

Crucially, these tools are now accessible to the public. Anyone with a decent prompt and a few minutes of processing time can create fake interviews, fake protests, or fake natural disasters that feel disturbingly real.


Why AI videos feel “off”

Even as these videos dazzle, many viewers report an eerie sensation while watching them—a kind of “uncanny valley” effect. Experts say that’s because AI-generated humans often lack the micro-details of real life. Their blinks are slightly too rhythmic, their gestures too fluid, their expressions just a bit too polished. This perfection, ironically, is what makes the content feel subtly wrong.

Still, for viewers scrolling fast or watching on mobile screens, these small flaws are easy to miss. And once they go viral, AI clips can be mistaken for genuine news or firsthand footage.


Deepfakes and the disinformation threat

The most alarming side of this trend is its use in deepfakes—AI videos that impersonate real people, often without consent. Political deepfakes have already appeared in elections from the U.S. to India. In early 2024, an AI-generated robocall mimicked President Joe Biden’s voice, urging New Hampshire voters to skip the state’s primary election, a move condemned as voter manipulation.

Celebrities and influencers are regular targets, too, with their faces and voices cloned into fake endorsements, interviews, or worse. Beyond defamation, these tools have been used for financial fraud, blackmail, and the spread of conspiracy theories—posing a global risk to digital trust.


Why people fall for fakes

Researchers say humans are surprisingly bad at detecting AI-generated content. A 2024 study by the University of Zurich and RAND Corporation found that participants were more likely to believe AI-created social media posts—both true and false—than ones written by actual humans. When it comes to video, the illusion is even stronger. The combination of visuals, voice, and narrative tricks the brain into assuming what it’s seeing must be real.

Even after a clip is debunked, the initial impression often sticks. Psychologists call this the “continued influence effect,” and it’s one of the reasons disinformation spreads so easily.


Can you still spot a fake?

Spotting a deepfake or AI-generated video isn’t easy—but there are still clues. Look for unnatural blinking, mismatched shadows, poorly rendered hands, or jerky lip-syncing. Check if the background glitches, if clothing logos are warped, or if text within the scene doesn’t make sense.

Sound can be another giveaway. AI-generated voices often sound too smooth or lack background noise. Some tools still struggle with consistent accents, intonation, or emotional depth.

But as models improve, even these tells are fading, which means relying on gut instinct is no longer enough.


What platforms and regulators are doing

Social media platforms are under pressure to address AI content. Some, like Meta, now label AI-generated images and videos using invisible watermarks or metadata. YouTube and TikTok have added disclosure requirements for synthetic content. But enforcement is inconsistent, and bad actors can still post fakes that go undetected for hours—or even days.

Meanwhile, governments around the world are drafting legislation. The EU’s AI Act mandates disclosure of synthetic media, while the U.S. and India have both proposed new regulations targeting AI misuse in elections and public safety contexts.

Still, laws are often reactive, and AI technology is evolving faster than any legal framework can keep up.


How to stay alert

Experts recommend a mindset shift. Rather than assuming what you see is true, approach sensational videos with skepticism. Ask: Who posted this? Is it verified? Does it appear on trusted news sites? Use reverse image and video search tools. And remember—if something seems perfectly staged or too outrageous to be real, it might be AI.

In the future, digital literacy will be as essential as reading and writing. Knowing how to identify false content, understand context, and question sources may be our best defense in a world where video evidence can be easily faked.


Disclaimer:

This article has been prepared by Newsible Asia purely for informational and editorial purposes. The information is based on publicly available sources as of June 2025 and does not constitute financial, medical, or professional advice.

June 30, 2025, 2:24 p.m.
