The Good, the Evil, the Ugly: AR and AI design in the post-truth era

Pixetic
7 min read · Mar 17, 2021

Artificial Intelligence is exciting for some people and scary for others. In the post-truth era, AI is poisoning the information environment, yet at the same time it is the most effective remedy for its destructive impact. Moreover, AI design is turning into a mighty weapon in this battle. Augmented Reality design also comes to the rescue, encouraging critical thinking and often winning back the attention of audiences disinformed by deep fakes. So let's dig deeper into this rivalry of AI vs AI and find out who is going to win.

Post-truth era: how we ended up here

We are living in an era when communication reaches further than ever before: information is just a click away, and a great part of our lives has shifted online. This environment is a perfect incubator for disinformation and deep fakes, and it is why our generation is facing a post-truth era.

The social and psychological reasons behind the post-truth crisis were well described by Professor Nick Enfield: "Human thought processes have some glitches that can be exploited." He explains that people tend to fall under the spell of deep fakes and disinformation because they either want the lie to be true, don't want to let go of things they already believe, or find the fake story so good that they would rather buy it than the real facts.

Moreover, we have transferred many of our everyday activities, such as shopping, communication, and entertainment, to the digital space. This greatly influences our perception of reality: we digest information at such speed that we rarely spend enough time processing and analyzing it, and we struggle to distinguish a real object from its online representation. These glitches in human nature are what make us so vulnerable to deep fakes, especially well-made ones.

The dark side of AI design

This leads us to the technological side of the post-truth crisis and, finally, to the role of AI in this mess. The term "deep fake" originates from deep learning. To create a deep fake photo or video, you train a neural network on videos of the person you are targeting. The network needs to learn how that person looks and moves across different environments, lighting conditions, and camera angles. You then combine it with computer-graphics techniques and voilà, here's your deep fake video. Anyone can create one, but making it believable requires a certain set of skills and advanced knowledge of AI.
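To make this concrete, here is a rough sketch of the shared-encoder, two-decoder idea behind classic face-swap deep fakes, written in PyTorch with toy data and illustrative names. One encoder learns features common to both faces, each decoder learns to reconstruct one specific person, and the "swap" happens when a face is decoded by the other person's decoder. Real pipelines add face detection and alignment, adversarial losses, and vastly more data and training.

```python
# Minimal sketch of the classic face-swap deep fake idea (illustrative only).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # learns to reconstruct person A
decoder_b = Decoder()  # learns to reconstruct person B

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

# Dummy batches standing in for aligned face crops of the two people.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(10):  # a real model trains for many thousands of steps
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The swapped frames would then be blended back into the original footage with the computer-graphics techniques mentioned above.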

Some might think that the threat of AI-generated deep fakes is overrated. In reality, they deserve even closer attention. They can cause great damage not only to individuals but also to institutions, businesses, and democracy. A deep fake video of a famous politician or religious leader can damage society's relationship with the government and democratic institutions; worse, the very existence of deep fake technology, which makes such convincing disinformation possible, undermines the trust that society is built on.

This environment, where Internet users are in a constant search for the truth, resembles the Wild West: they can't trust anybody, and the peril of deep fakes and misinformation is everywhere. As a result, it becomes harder for UX designers to create trustworthy, reliable design. The sphere of UX depends heavily on the designer-user relationship and suffers when that relationship is undermined. Creators now have to put much more effort into their visual content to earn users' trust, especially if it is AI-generated.

AI and AR are also the main protagonists

With this pessimistic picture, it seems that we are doomed. But there is hope: there's a new sheriff in town, and it is also AI. Given the speed and scale at which AI can generate deep fakes, it is only logical to fight AI with AI. In essence, data scientists are training models to detect deep fakes. However, there is no guarantee that malicious AI won't learn how to hide from that detection. This is when we need to think outside the box.
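For the detection side, here is a hedged sketch of what "training a model to detect deep fakes" might look like, assuming PyTorch and a recent torchvision: an ImageNet-pretrained backbone is fine-tuned as a binary real-vs-fake classifier on face crops. The data here is a random placeholder; production detectors rely on large, carefully curated datasets of real and manipulated footage.

```python
# Sketch of a real-vs-fake frame classifier (assumes torchvision >= 0.13).
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone (downloads weights on first run);
# replace the final layer with a single logit: "is this frame a deep fake?"
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.BCEWithLogitsLoss()

# Dummy batch standing in for preprocessed face crops and their labels
# (1.0 = fake, 0.0 = real).
frames = torch.rand(16, 3, 224, 224)
labels = torch.randint(0, 2, (16, 1)).float()

model.train()
for step in range(5):  # real training loops over a large labelled dataset
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()

# At inference time, per-frame scores are usually averaged over a whole video.
model.eval()
with torch.no_grad():
    fake_probability = torch.sigmoid(model(frames)).mean().item()
print(f"estimated probability of manipulation: {fake_probability:.2f}")
```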

Deep fakes for good: AI design to make use of them

Despite the usually malicious intentions behind deep fake creation, the technology can actually be used for good. The ability to create a realistic video replica of a person, without a previous recording of it, opens numerous possibilities. The difference here is the consent of the people "starring" in those deep fakes. By creating a digital twin, people can ensure their presence when needed. For example, parents or relatives who work remotely and are busy can create videos with bedtime stories for their kids or with everyday messages. Deep fake technology can also be used in education, for example by replicating historical figures or famous scientists who engagingly explain their accomplishments.

Deep fakes, if used with good intentions, also open new horizons in communication. If you want to deliver a message globally in multiple languages, AI design can assist you. For example, the technology was used in a campaign against malaria in which David Beckham could "speak" nine languages. Deep fake technology can also become widespread in entertainment, notably in the film industry. One Internet user tried to outdo the "Flux" 3D technology used in Martin Scorsese's The Irishman with a free deep fake app and came up with arguably better results.

Designing to earn the trust

These positive implications of deep fake technology also raise a question: how can people trust a technology that can be so dangerous and harmful, even when it is used for good? One way to form a more positive perception of AI is to show that it can make people's lives easier. For example, it is quite possible that in the near future the use of AI in photo and video editing will be widespread. Adobe is already developing an AI tool built around an algorithm that detects whether a photo has been manipulated.
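Adobe has not disclosed its exact algorithm, so as an illustration only, here is a minimal Python/Pillow sketch of error level analysis (ELA), a classic photo-forensics heuristic: when a JPEG is re-saved at a known quality, regions that were edited separately often recompress differently from the rest of the image and stand out in the difference map. The file path below is a placeholder.

```python
# Error level analysis (ELA): a simple manipulation-detection heuristic.
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    # Re-save in memory at a fixed JPEG quality, then diff against the original.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer).convert("RGB")
    diff = ImageChops.difference(original, resaved)
    # Amplify the (usually faint) differences so they are visible to the eye.
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda px: min(255, px * 255 // max_diff))

# ela_image = error_level_analysis("photo.jpg")  # placeholder path
# ela_image.show()  # bright, blocky regions hint at local manipulation
```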

Another way to fight deep fakes is to promote critical thinking through design. Designers now need to foster a culture of critical thinking by promoting transparency and raising awareness both within companies and among users. They need to teach users how deep fakes and disinformation are created and what dangers they pose. And finally, where possible, they should equip users with tools to check and filter out fake content and build those tools into their design systems.

AR design to encourage critical thinking

Let's admit that despite the universal dislike of deep fakes and fake news, they catch people's attention very easily. That is what they are designed for: to distract users from the truth. In this clickbait culture, designers therefore need to create something just as engaging, so that users want to explore the truth rather than a fake. This is where Augmented Reality comes on stage.

AR design seems perfect for engaging users: unlike Virtual Reality, it is not detached from the real world, yet it still offers an immersive experience. This makes it well suited to educating people about deep fakes and encouraging them to get to the truth. For example, the Escape Fake app promotes media literacy by teaching its users to detect fake news during escape-room-style games. Game-like apps with AR design are a great means to this end.

AR design and immersive journalism

Immersive journalism can become another way to develop critical thinking in modern Internet users. With the help of AR and VR, immersive journalism puts users at the heart of an event. This helps them not only build empathy but also analyze the facts better, as if they had seen what happened with their own eyes.

However, the power of this technology can just as easily be used for disinformation. It should therefore be regulated and protected like other media channels. In the US, there are already legislative attempts to battle deep fakes, and if immersive journalism becomes popular, it will hopefully be protected as well.

Who should you bet on in this battle?

Adequate means to battle deep fakes and disinformation appeared only recently, but so did deep fakes themselves. Both are evolving at the same speed, and it feels like an arms race. In this seemingly even battle, the best weapon is our critical thinking and careful information consumption. A great responsibility therefore lies on designers' shoulders: to promote these values, implement them in companies' cultures, and supply proper tools powered by AR and AI design.

Originally published at https://pixetic.com.
