
Snow Walls in Kamchatka: But the Viral Video Is Fake (and Fakes Are Very Hard to Spot)

On social media, and even in the news media, real footage is mixed with clips created by artificial intelligence: fact-checking is becoming more and more difficult.

Here we go again. In recent days, fake videos and photos of snowfall in Russia's Kamchatka region, mixed in with real footage, have gone viral on social media. The snow is real: the area was hit by a storm that buried cars and houses under three meters of snow, shops and schools stayed closed, and residents dug tunnels to get out of their homes. But some of the videos are fake. The problem is that even news outlets fall for them, because it is sometimes genuinely difficult to tell whether what a video shows is real or invented.

Forget the "romantic winter" myth. ❄️

It's not AI: no filters, no effects. Only real snow, raw nature.

Kamchatka, Russia: days of extreme storms buried buildings up to the fourth floor. 🏢🌨

Postcard view → frozen cities, closed roads, daily life on hold.

Nature is not always... pic.twitter.com/PnPPGqvlki

— Nofoud (@Nofoud) January 19, 2026

The videos are fake

The viral fake videos are the ones showing wind-blown snowdrifts piled up on roofs and balconies, or people sledding down fresh-snow slopes that start from the rooftops of buildings. How could drifts that high form? Are we in a new ice age? Could the buildings withstand that weight? And how could sleds move so fast in fresh, loose snow? With a little thought and attention, many of these images can be recognized as AI-generated. But that takes time, and doubt lingers, especially when the images claim to show unfamiliar events or places. And while traditional fact-checking takes hours or days, AI-powered disinformation takes only a few minutes to produce. Recently, fake images have even made it into news outlets, presented as real.

People fall from the 4th floor due to snowfall in Kamchatka. pic.twitter.com/BMGqsNj5wn

– Black Hole (@constructivism) January 19, 2026

Clues to watch out for

The problem is that fake images are looking more and more real, and anyone using programs like Midjourney, Veo, Sora, or Kling can create fake clips with increasing accuracy. These images are not simple manipulations: they are generated from scratch by algorithms that "learn" from enormous datasets of real images and videos, so the boundary between real and synthetic keeps shrinking.

Not sure if this is real life or AI, but Russian kids are having fun #Kamchatka pic.twitter.com/EIDJwazaug

— Russian Market (@runews) January 19, 2026

Generative algorithms use complex models that reproduce textures, light, shadows, and shapes in ways that are very difficult for the eye to detect. However, these programs only know what they have been trained on and do not operate according to logic, so a careful eye can spot flaws in the details. In deepfake videos, for example, people rarely blink, because the AI is typically trained on photos of people with their eyes open.

Experts recommend looking at the background, which is often unnaturally blurred; at people whose appearance is too perfect; at illegible text and signs; and at contradictory details (for example, in the fake photo of Maduro's arrest, the plane he was supposedly traveling on had a double row of windows that does not exist in reality).

The aspect ratio and the duration of a video can also be clues (eight seconds is a typical length for AI-generated clips), and if a clip looks blurry, it may be a sign that whoever posted it wants to hide the telltale marks of AI.

Details like the mouth or hands (especially the number of fingers) were often imperfect a few months ago, but new programs are becoming more sophisticated and anatomical details are becoming more accurate.
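Of the clues above, duration and aspect ratio are the only ones that can be checked mechanically rather than by eye. As a purely illustrative sketch (not a tool the article describes), the following Python snippet uses OpenCV to read a clip's duration and aspect ratio; the file name and the eight-second threshold are assumptions, and a match is only a weak hint, never proof.

    # Illustrative sketch: read duration and aspect ratio of a local video file
    # with OpenCV. The file name and the 8-second heuristic are assumptions.
    import cv2

    def clip_stats(path):
        cap = cv2.VideoCapture(path)
        if not cap.isOpened():
            raise IOError(f"cannot open {path}")
        fps = cap.get(cv2.CAP_PROP_FPS)
        frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
        width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
        height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
        cap.release()
        duration = frames / fps if fps else 0.0
        ratio = width / height if height else 0.0
        return duration, ratio

    duration, ratio = clip_stats("suspect_clip.mp4")  # hypothetical file
    print(f"duration: {duration:.1f} s, aspect ratio: {ratio:.2f}")
    if abs(duration - 8.0) < 0.5:
        # Many AI video generators default to roughly 8-second clips,
        # so this is a hint to look closer, not a verdict.
        print("Duration matches a common AI-generator default.")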

Kamchatka: A city covered in several meters of snow

The unreliability of the human eye

One of the biggest problems is the low reliability of the human eye in distinguishing deepfakes from real images. Multiple studies have shown that humans fail to correctly identify AI-generated deepfakes 70-80% of the time (especially when it comes to faces), even when they have been warned. Performance improves with a little practice, but the task remains difficult.

What can I do?

There are many software tools (such as the Italian platform IdentifAI) that promise to identify images created by artificial intelligence, but it must be said that they are not always reliable. Some organizations (such as NewsGuard) monitor the reliability of online information, including images. The subreddit community RealOrAI analyzes suspicious images posted by users and delivers a verdict based on careful, collaborative analysis in the comments. Jeremy Carrasco, a video production specialist with the keen eye of someone who spots mistakes in films, publishes content on social networks explaining how to tell truth from fakery, an increasingly difficult task that also risks inadvertently casting doubt on genuine information.
