Aug 8, 2019

Deepfake — a portmanteau of “deep learning” and “fake” — refers to synthesized media (commonly video) that appears authentic but has in fact been generated by a machine-learning model, typically a Generative Adversarial Network (GAN). In the age of disinformation and “fake news”, deepfakes pose a threat to online discourse — particularly political discourse — when clicks and views are considered more important than truth or verification.
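To make the adversarial idea concrete, here is a minimal sketch of GAN training, assuming nothing beyond NumPy. Everything in it is an illustrative toy: the “real” data is a one-dimensional Gaussian standing in for genuine media, and the generator and discriminator are single-parameter-pair models rather than the deep convolutional networks real deepfake tools use. The point is only the tug-of-war: the discriminator learns to separate real from fake, and the generator learns to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" data: samples from N(4, 1), standing in for genuine media.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=n)

# Generator: g(z) = a*z + b, with z ~ N(0, 1)  (two scalar parameters).
a, b = 1.0, 0.0
# Discriminator: d(x) = sigmoid(w*x + c)  (probability that x is real).
w, c = 0.1, 0.0

lr, steps, n = 0.01, 2000, 64
for _ in range(steps):
    # --- Discriminator step: push d(real) toward 1 and d(fake) toward 0 ---
    x = real_batch(n)
    z = rng.normal(size=n)
    y = a * z + b                       # fake samples
    p_real = sigmoid(w * x + c)
    p_fake = sigmoid(w * y + c)
    # Hand-derived gradients of -log d(x) - log(1 - d(g(z))) w.r.t. w, c.
    grad_w = np.mean(-(1 - p_real) * x + p_fake * y)
    grad_c = np.mean(-(1 - p_real) + p_fake)
    w -= lr * grad_w
    c -= lr * grad_c
    # --- Generator step: push d(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(size=n)
    y = a * z + b
    p_fake = sigmoid(w * y + c)
    # Gradients of -log d(g(z)) w.r.t. the generator parameters a, b.
    grad_a = np.mean(-(1 - p_fake) * w * z)
    grad_b = np.mean(-(1 - p_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, generated samples should drift toward the real distribution.
samples = a * rng.normal(size=1000) + b
print(round(float(np.mean(samples)), 2))
```

Scaled up — images instead of scalars, deep networks instead of two parameters — the same loop is what makes deepfakes convincing, and it is also why detection is hard: any signal a detector keys on can, in principle, be trained away by the generator.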

Misinformation is by no means a new concept, but the prevalence of social media as a way of consuming news has vastly extended the reach of potential “fake news”. The more people a fabricated story reaches, the more people it can fool. In parallel, this kind of information sharing lends itself to confirmation bias, where a deepfake reinforces what the viewer already believes. More cynically, wilfully ignorant sharing (i.e. where the user knows the media to be false but shares it anyway) could become a major issue too.

Given what we now know about the power of disinformation during the 2016 US presidential election, notably at the hands of malicious Russian actors, it is plain to see how targeted manipulation of information online can disrupt the democratic process. The “bot” accounts used for propagation showed a level of sophistication that helped the culprits avoid detection, and deepfakes and machine learning are only getting more powerful and convincing. It is already possible to produce high-resolution deepfake stills that could be paired with a fake social media account to support and promote lies on Facebook, Twitter et al. In a similar vein, one of the world’s largest banks has “hired” a machine-learning tool because it writes better marketing copy than its human counterparts.

Although it is potentially harder to produce than images, deepfaked audio could also pose serious problems for privacy and security professionals, as well as for the average user online. Given the rapid advances in artificial intelligence (AI) and machine learning, and the incidents already reported, it is conceivable that high-quality audiovisual deepfakes will be commonplace within months or years. The impact of such an epidemic is hard to fathom: should a hole be torn in the fabric of real, true information, democracy could be undermined, stock markets manipulated, classified information stolen, and so on.

In light of the rise of deepfakes, there has naturally been some resistance to the deceptive technology, and a number of researchers are determined to create countermeasures. But even with earnest efforts to counteract the dangers, it is inherently more difficult to train AI to reliably identify a deepfake than it is to use an open-source GAN to create one. This asymmetry is likely to cause lengthy debates between governments and the tech giants who run the platforms where such media circulates before proper legislation and safeguards are introduced. Conversations to that end have begun, but without direct and proportionate action, the public discourse of the immediate future risks being compromised.

Welsh privacy professional writing short digests for the discerning tech user