« on: January 31, 2018, 02:55:13 PM »
Recently something called 'deepfakes' has been in the news. It is a piece of software that some particularly disgusting people use to make fake pornography of celebrities and of people they know, as well as, this being the internet, to insert Nicolas Cage into anything you care to imagine.
Here is Nicolas Cage in Raiders of the Lost Ark:

And that is substantially lower quality than many out there.

The idea behind the software is simple. You take a primitive AI and 'train' it on footage and images of a face; it then learns to insert that face into other footage, editing and replacing frame by frame.
The app was put up online fairly recently with the idea that any amateur could use it. You heard that correctly: any amateur can create fake video footage that most people could not easily recognize as fake. Many have speculated on the potentially disastrous effects of this, on topics ranging from revenge porn to political scandal.
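To make the 'training' idea concrete, here is a deliberately toy sketch of the architecture these face-swap tools are commonly described as using: one shared encoder plus a separate decoder per identity, each trained to reconstruct its own person's face, with the 'swap' being to encode one person's face and decode it with the other person's decoder. This is my own illustration in Python/numpy, not the app's actual code; faces here are just random vectors and the 'networks' are single linear maps, so nothing about real images carries over.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT, N, LR = 64, 8, 200, 1e-2

# Stand-ins for cropped face images of person A and person B.
faces_a = rng.normal(size=(N, DIM))
faces_b = rng.normal(size=(N, DIM))

enc = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for identity A
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for identity B

def train_step(x, enc, dec):
    """One gradient step on the reconstruction loss ||x - x @ enc @ dec||^2."""
    z = x @ enc              # encode
    err = z @ dec - x        # reconstruction error
    loss = float((err ** 2).mean())
    g_dec = z.T @ err / len(x)
    g_enc = x.T @ (err @ dec.T) / len(x)
    return loss, g_enc, g_dec

losses_a = []
for _ in range(300):
    # Alternate batches: both identities share the encoder, so it learns
    # a face representation common to A and B.
    la, ge, gd = train_step(faces_a, enc, dec_a)
    enc -= LR * ge
    dec_a -= LR * gd
    lb, ge, gd = train_step(faces_b, enc, dec_b)
    enc -= LR * ge
    dec_b -= LR * gd
    losses_a.append(la)

# The 'swap': encode B's faces, decode with A's decoder. In the real
# technique this yields B's pose and expression wearing A's face.
swapped = faces_b @ enc @ dec_a
```

The point of the shared encoder is that it is forced to capture pose and expression in a way that works for both faces, which is what lets a decoder trained on one identity re-render the other.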

I want to suggest you look at this another way.

In the present, software, knowledge and technology have advanced to the point where an untrained amateur could, in a day, create convincing fake footage of Nicolas Cage as Lois Lane. This is publicly available, basically amateur-level stuff.
Before this point, there are necessary precursors that must have been reached.

One, obviously, is that people with the technical expertise of the app's developer have been able to fake such footage the long way round.
Next is more sinister. Move beyond individuals: corporations (IT corporations most of all) make a living off being ahead of the market. If a random person can develop such software, a larger company absolutely could have done so before. They could also have dedicated the time to creating fakes without an AI algorithm.

Look past the obvious implications.

A look at the world and natural evolution tells us about the relationship between predator and prey. One cannot develop if the other does not as well. Anything else is self-destructive.
So, logically, as the technology to fake develops, the software to detect fakes must as well.
One of the old ways to ascertain whether footage was genuine was simply that it moved rather than being a still image. Plainly that is no longer a factor. With a well-'trained' AI (to use their term) you will notice no clear blurring. I ran a frame through the online tools for detecting edited images, as you can too, and the result is far from conclusive. And, to emphasise, this is free, publicly available, hobby-designed software that someone threw together.
If that doesn't scare you, we are not living in the same world.
But, more than that, how are there not counters already? Why is it better known how to fake an image than how to detect a fake? Yes, law enforcement use a few bits of software, but nothing nearly as efficient as this, which basically does all the work for you once you supply the images via equally free video-to-frame software. To detect a fake you need both specialized software and someone trained in its use; to create a fake you need one program. The software to detect fake still images is substantially less well-designed than the software to create fake videos, and yet the latter is the incomparably more complex task.
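For a sense of what such counter-software could look for, here is one crude heuristic, again my own toy illustration in Python/numpy rather than anything a real forensic tool is confirmed to use: a pasted and blended region often has different local noise statistics than the rest of the frame, so you can flag blocks whose variance is far below the image's typical level.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 'image': uniform sensor-like noise everywhere, then a pasted patch
# that is much smoother, since heavy blending suppresses fine-grained noise.
img = rng.normal(scale=1.0, size=(64, 64))
img[16:32, 16:32] = rng.normal(scale=0.2, size=(16, 16))  # the pasted region

def block_variances(image, bs=8):
    """Variance of each non-overlapping bs x bs block."""
    h, w = image.shape
    return np.array([[image[r:r + bs, c:c + bs].var()
                      for c in range(0, w, bs)]
                     for r in range(0, h, bs)])

v = block_variances(img)
# Flag blocks far quieter than the image's typical noise level.
suspect = v < 0.5 * np.median(v)
print(int(suspect.sum()), "suspicious blocks flagged")
```

Real detectors are far more involved than this, but the point stands: the statistical traces are there to be found, and nothing about looking for them is beyond the people who built the faking tools in the first place.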

The software to forge and fake is years ahead of its counterpart. Tech companies have had access to this faking ability for some time now.
Each of these is a simple, logical fact.
Consider why.
On the sister site if you want to talk.