Deepfake Future

We need to differentiate artificial from human, fake from authentic, wrong from right, truth from lie, real from unreal – even and especially when the boundaries are blurred.

The impact of generative AI on how we live, work and vote is huge.

If you think this picture is creepy:

Or this picture:

Please look at these pictures, which might remind you of one or two beautiful people:

And these pictures are not even photorealistic. Nor does this series show living people in high, responsible positions, such as politicians or CEOs, doing drugs or kinky or forbidden things. Nor is any genAI-faked information about these pictures added to enhance their creepiness.

Nevertheless, the images in this series are creepy. It is the similarity, the possibility that somebody or something in these pictures could somehow be real. We witness an uncanny valley here. Producing and sharing effects like those in the four pictures right above can be called ethically questionable for many reasons.

Yet producing them is possible by applying image generation models such as DALL-E 2, Stable Diffusion, Google Imagen or Midjourney. It is up to the human co-creator to choose what to co-create and which of her co-creations to share. You can show ugliness and lies, but also beauty and (the aspiration to) truth. This power is potentially dangerous.

Via genAI, anybody can re-create pictures of any person of whom a sufficiently large number of pictures can be found on the Internet. Besides pictures, you can also easily co- and re-create artificial (some say fake) texts, voices and videos. There are more than enough examples of deepfakes on the Internet.

Deepfakes are used as propaganda tools in geopolitical, political, economic, or personal wars.

Instead of writing a long article, I share these images hoping they warn more than a thousand words could: We have to make sure that our future does not look like deepfakes. Much greater efforts than before – technical, ethical, legal and economic – need to be made to differentiate artificial from human, fake from authentic, wrong from right, truth from lie, real from unreal – even and especially when the boundaries are blurred.

Societies need common ground. People need to trust in something true and real to keep them together. Otherwise, fear and chaos begin to destabilize a society, which could boost destructive, anti-democratic forces.

To protect democratic systems – based on agreements about what truth is – we have to start by being even more attentive and sceptical than before when confronted with online content, and then take (technical, ethical, legal and economic) action so that genAI tools strengthen rather than weaken democracies.

Praying and laughing are not enough.

This is serious. GenAI tools pose gigantic challenges to democracies.

© Gisela Schmalz, 2023