Maple Sport Daily
Facebook video of CN Tower on fire should have AI-generated content label, experts say

Posted on September 25, 2025

A fake video of the CN Tower on fire went viral on Facebook this week and experts say it’s another example of why social media platforms need to clearly mark AI-generated content.

One expert went so far as to say the creator should be arrested and charged with knowingly conveying false information with the intent to cause panic.

The media relations team at the CN Tower has made it clear that no fire broke out at the landmark structure in Toronto. The video, however, has been viewed more than 20 million times.

In the 24-second Facebook reel, posted on Monday, people on the waterfront appear to film the CN Tower in the distance as a plume of grey smoke emanates from its upper levels, which house a restaurant and observation decks.

“What the hell is that?” one person asks.

Then the point of view shifts and other people film the tower from a downtown street as the fire burns more intensely. Flames and black smoke are visible. Motorists honk their horns.

“Oh my god!” one person says.

In the final clip, the top of the CN Tower, ablaze and in ruins, is shown in an aerial view. 

An image from a fake Facebook video shows the CN Tower on fire. In a statement on Wednesday, the media team at the CN Tower said the video ‘is a deepfake and entirely fictional.’ (Adrian Gee/Facebook)

In a statement on Wednesday, the media team at the CN Tower said: “This video is a deepfake and entirely fictional. There was no fire, and the CN Tower remains safe, secure, and fully operational.

“Unfortunately, this is not the first time AI-generated content or visual effects have been used to create misleading depictions of the CN Tower,” the statement reads.

Technology advancing faster than legislation: expert

Francis Syms, associate dean at the faculty of applied sciences and technology at Humber Polytechnic, said in an “era of misinformation,” the video could cause harm or alarm if people do not instantly realize that it is fake.

“I think what we need to do is ensure that, when these videos are created, that the providers are putting the AI-generated label on it. That’s an easy thing to do,” Syms said.

The label would alert social media users that the video is fake, and the federal government could easily pass a law making it a requirement, Syms added. The problem, he said, is that technology is advancing faster than the legislation governing new tech tools.

“We’re still in sort of that grey zone. If somebody posts a fake video to social media, the first thing you should do is you should go to a news source to validate it. But the problem is people share those videos and are sometimes getting their news from social media directly,” he added.

“It’s very difficult for law enforcement to know how to address that type of situation. There’s very little they can do other than talk to the social media platform and ask them to put the AI-generated label on top of it.”

CBC Toronto has reached out to the office of the federal minister of artificial intelligence for comment.

Francis Syms, associate dean at the faculty of applied sciences and technology at Humber Polytechnic, says: ‘I think what we need to do is ensure that, when these videos are created, that the providers are putting the AI-generated label on it. That’s an easy thing to do.’ (Jason Trout/CBC)

CBC Toronto has also reached out to Facebook to ask if it is considering such a label on its content.

Meta, the U.S. company that owns Facebook, says on its website: “We will begin adding ‘AI info’ labels to a wider range of video, audio and image content when we detect industry standard AI image indicators or when people disclose that they’re uploading AI-generated content.”

Toronto police not investigating

An AI label, however, is not enough to protect people from the spread of misinformation, Syms said.

One good rule of thumb: assume videos or images you see on social media might not be real and go to a trusted news source to verify the information, Syms said.

Jeffrey Dvorkin, senior fellow at the University of Toronto’s Massey College, said the video is “so inauthentic” that the RCMP should lay a charge under Section 372 of the Criminal Code of Canada, which makes it an offence to convey false information with the intent to injure or alarm.

Doing so would deter people from creating fake videos that spread misinformation and send a message that such videos are damaging and irresponsible, he said.

Dvorkin said the creation of the video is “completely” illegal.

“A person who is deliberately spreading misinformation, which has the purpose of creating panic, is liable for a sentence of two years in jail and the RCMP should be looking after who has been doing this and charge them with that offence. This is a bit outrageous. It’s more than a bit.”

Toronto police said they are not investigating the creation of the video.

“There is no protocol. It’s an artificial video,” Const. Shannon Eames, spokesperson for the Toronto Police Service, said in an email on Wednesday.

Poster calls himself ‘creator of viral moments’

The video was posted by Adrian Gee, who calls himself “creator of viral moments since 2014.” On his Facebook profile, he says he is “now teaching the future: AI-generated art & content.” 

Gee doesn’t say he used AI to generate the video, but there are signs that suggest AI was used, such as the unnatural drift of the smoke, the lack of licence plate numbers on vehicles and the overall look of the video.

CBC Toronto reached out to Gee on Facebook and Instagram for comment, but he has not yet responded.

Philip Mai, co-director of Toronto Metropolitan University’s social media lab, said it’s clear from the comments on the video that people weren’t fooled by it. 

Philip Mai, co-director of Toronto Metropolitan University’s social media lab, says: ‘The only thing that we as a society can do right now to make sure that this stuff doesn’t spread is slow down.’ (CBC)

But he said social media users shouldn’t share fake videos, that creators who refuse to label their content as AI-generated should be banned, and that media literacy should be improved so individuals can recognize for themselves when something is fake.

“The only thing that we as a society can do right now to make sure that this stuff doesn’t spread is slow down. It’s all in our hands, sadly, where we have to be the one to stop ourselves from sharing things,” he said.