PR Stunts
Japan’s ruling Liberal Democratic Party recently unveiled a new campaign poster featuring a slogan written entirely by AI. As the Japan Times reported: “Generative AI tools, including ChatGPT, studied [Japanese Prime Minister Fumio] Kishida’s remarks and party policy documents over the past three years to draw up drafts, according to people familiar with the matter. The AI-crafted slogan was chosen after LDP executives screened more than 500 candidate phrases, including ones proposed by copywriters.”
The AI-generated catchphrase reportedly translates as “Economic revitalisation: Providing tangible results.” (Hopefully something was lost in translation there.) Some 150,000 copies of the poster are to be printed and distributed around the country.
Japan’s PM isn’t the first politician to turn to AI to produce such material (and a decent PR stunt). Last year Tom Giffard, a Conservative member of the Welsh Assembly, gave a speech congratulating Wales on winning the World Cup of Darts, revealing at the end of his oration that the text had been written by ChatGPT. This, he said, accounted for the speech’s odd crescendo: “Long live darts!” Giffard said he made the speech “to show just how advanced the technology is becoming” (an excuse you may want to use yourself if caught out relying too much on ChatGPT at work).
These two PR gimmicks (for that is fundamentally what they are) seem harmless enough, since at least the AI use here was out in the open. However, there are darker trends where AI and politics meet.
Deepfake Politics
Last year an analysis by Bloomberg argued that “AI holds the potential to supercharge the dissemination of misinformation in political campaigns,” citing the use of AI-generated deepfake imagery in materials disseminated to American voters by political campaign groups, including an attack ad featuring AI-generated images of a (thankfully fictional) Chinese invasion of Taiwan which you can see here. (Note the small disclaimer in the top left-hand corner: ‘Built entirely with AI imagery’).
At the start of this year, OpenAI, the company behind ChatGPT, said that it would block politicians and lobbyists from using ChatGPT for official business during campaign season, citing “potential abuse and cybersecurity risks”.
Make Politics Banal Again
While cybersecurity risks and the spread of misinformation are serious issues, one possibly more mundane worry has received less attention, and it is one that particularly strikes us as public relations people: AI-generated banality. Should politicians increasingly outsource speechwriting and related tasks to AI, we are in for an era of boring and lazy political rhetoric.
ChatGPT and similar ‘large language models’ are incredible, yet they have a habit of producing bland and formulaic writing. And bland and formulaic writing is also produced – sometimes prodigiously – by political campaigns. Official statements, remarks made at a ribbon-cutting ceremony, posts from a politician’s official social media channels: all are vulnerable to the call of AI. And how about – as in Japan now – election campaign materials, or even whole manifestos and pieces of legislation?
The Bloomberg analysts noted that, “Politicians…increasingly use AI to hasten mundane but critical tasks like analyzing voter rolls, assembling mailing lists and even writing speeches.” This seems likely only to increase as the current crop of young people reared on ChatGPT (and seemingly incapable of completing homework assignments without it) come of age and begin working as speechwriters and consultants on political campaigns.
What if, in order to produce an important speech at short notice, an intern gets AI to write a first draft and then ‘tidies it up’? In that instance, the AI has set the terms of the piece and defined its scope, in addition to setting the tone. This could be avoided with a large staff of speechwriters, but how could a politician justify such expense when AI “can do it all for you”? The central problem is not simply aesthetic: most people don’t read the manifesto pledges of political parties, and those who do certainly don’t read them for their sparkling literary qualities. Rather, the question is one of originality.
Rinse and Repeat
Generative AI bases its responses on an enormous bank of existing written content. Which is to say, all it can do is regurgitate and repackage what has already been thought up and written down by someone else (and now that AI-generated content is everywhere, AI will presumably start producing writing based on other AI-generated writing). Remember how the Japanese PR team came up with their AI slogan: they fed in speeches and other materials produced by the Prime Minister and used the tool to distill the overall message.
So where does originality of thought come in? Would an original, daring, or genuinely bold proposal ever emerge from this process? It seems doubtful. Political AI gimmicks like these are surely a function of the technology’s novelty, and will likely become less common as time goes on. But with so much ‘knowledge economy’ work being outsourced to AI, we risk politics becoming increasingly beholden to cliché-ridden and unoriginal ideas at precisely the time we need bold and creative thinking.