Humans Have Freedom Of Expression, Bots Don’t

Democracy cannot survive the automation of hate speech and harassment. A world where every single action online by a human is followed by a cloud of synthetic personalities far outnumbering us is one where we cannot afford them wishing us ill. Should we let that happen, the only reasonable response left is to use machines of our own to parse online discourse, so as to avoid being targeted directly. By the time this happens at scale it will already be too late, and we will have sleepwalked into a society where public discourse is entirely managed by machines. And the joke will be on us: the individual’s freedom of speech won’t matter a damn. Nobody will directly read it, and it would be lost in a soup of endless bots anyway.

Without the public sphere of media and discourse, there can be no democratic politics. Keeping this sphere from being taken over by bots might be the most important near-term goal for us as a culture. And it will not be easy, because the productivity enhancements and other benefits of using bots to micromanage tediousness out of our lives will be so clearly positive. The prospect of unintentionally trapping ourselves behind a wall of useful AIs is known in the AI-risk field as the “boring apocalypse” (4).

A long-time thinker on machine–human interaction at scale, Yuval Noah Harari holds a clear vision of the importance of keeping politics in human hands. In his words: “Humans have freedom of expression, bots don’t” (1). What this means is that while we need to retain the ability for all political views to be spoken, we cannot guarantee it by simply allowing bots to stand in for humans. The most obvious issue is representation: “one bot, one voice” is an idea dead at birth, because money can buy bots while it should not be able to buy voices. The second issue is freedom of speech itself. As the automation of hate speech and harassment at the borderline of legality becomes trivial at scale, speaking your mind freely would become mentally unbearable for almost anyone, regardless of political alignment. Hordes of bots dealing in defamation, death threats, automated blackmail and the like would take care of that very effectively. Third, misinformation and disinformation tailored for every occasion become off-the-shelf items with generative AI. No one but the people running such a service would be comfortable with an automated online hybrid of the Gestapo and the KGB, but don’t stop there: picture competing systems like it by the dozen, fighting over the information space and drowning every last bit that is human into infinitely small nooks and crannies.

As per the topic of this essay, limiting how bots can engage with the public space is one thing; identifying them is another. Along with Harari and many others, Nina Schick advocates strong cryptographic signing of AI-generated content (2). But simply mandating signing by friendly actors will not be enough, as actors with ill intent cannot be compelled to comply. Signing of human-generated content will therefore also become necessary at relatively short notice. Cryptographic signing requires a key, and privacy experts rightly dismiss a simple system of one key per person as problematic. As the unintended buildup of networked systems brokering and exploiting information bound to a single-token digital identity has shown, India’s Aadhaar system being a case in point (3), retaining pragmatic levels of anonymity in a world of digital signatures is not possible without a more complex solution.
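
To see why the privacy experts object, consider a minimal sketch in Python using the `cryptography` package (the posts and the lifelong key are purely illustrative): with a single persistent key, every signature verifies against the same public key, so all of a person’s signed content becomes trivially linkable without breaking any cryptography at all.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The naive "one key per person" design: a single lifelong signing key.
lifelong_key = Ed25519PrivateKey.generate()

post_a = b"a signed political opinion"
post_b = b"a signed medical question"
sig_a = lifelong_key.sign(post_a)
sig_b = lifelong_key.sign(post_b)

# Both signatures verify against the identical public key, so any observer
# (or a meganet-scale data broker) can tie the two posts to the same person.
public_key = lifelong_key.public_key()
public_key.verify(sig_a, post_a)  # raises InvalidSignature only if tampered
public_key.verify(sig_b, post_b)
```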

To enable humans to sign their content as human while remaining anonymous, for example when recording a video of a war crime with their phone, the key must be able to change from one signing to the next. A key provided by a human-authenticator service, combined with whatever hardware identifiers, location data and timestamps the user allows, is then signed into every block of the video (blocks rather than individual pixels, because bandwidth still needs to be taken care of). To avoid tracking by the enemy’s automated systems, the human would then never use the same key again. Yet because the video is signed with a key from a respectable provider of human identity, say a nation-state with well-established press freedom and democratic institutions, its recipients would still have good reason to believe it is not fake. And with each block of the video stream signed separately, any later alterations would be clearly visible and separable from the original stream.
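
A minimal sketch of that per-block signing, in Python with the `cryptography` package. The block size, the metadata fields and the locally generated key are all stand-ins for what a real authenticator service and format standard would specify:

```python
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

BLOCK_SIZE = 64 * 1024  # illustrative: sign manageable chunks, not pixels

def sign_stream(video: bytes, one_time_key: Ed25519PrivateKey,
                metadata: dict) -> list[dict]:
    """Sign every block of a recording together with user-approved metadata."""
    signed = []
    for i in range(0, len(video), BLOCK_SIZE):
        block = video[i:i + BLOCK_SIZE]
        # Bind the block to its position and to whatever the user allows:
        # timestamp, location, hardware identifiers, and so on.
        payload = json.dumps({"index": i // BLOCK_SIZE, **metadata}).encode()
        signed.append({"block": block, "payload": payload,
                       "signature": one_time_key.sign(payload + block)})
    return signed

# The key is used for this one recording and then discarded, so automated
# adversaries cannot link it to the same person's other content.
key = Ed25519PrivateKey.generate()  # in practice issued by an authenticator
blocks = sign_stream(b"\x00" * 200_000, key, {"ts": time.time()})
```

Because each block carries its own signature, cutting, splicing or regenerating any part of the stream breaks verification for exactly the tampered blocks while leaving the authentic ones provable.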

As cryptographic experts will quickly point out, a message is only trustworthy as long as the key has not been compromised. And keys would be stolen, by the millions. They would be hacked, and even the ultra-high-security providers creating them would be hacked from time to time. Keys would need to be invalidated, and most importantly, systems of trust would need to be developed that carry those invalidations reliably to the end-users who depend on being able to tell human from deepfake, bot from actual user. In building this system, and the systems creating the keys to begin with, we as a society will have to pay very close attention to how the distributed trust systems of cryptocurrencies operate, learn from them, and perhaps partner with them to pull off what is no small undertaking, to say the least. Even if blockchain networks are for now frequented by criminals and other shady elements, their mechanism of rewarding the mining of tokens is the best method we know of to guarantee voluntary participation and submission to the will of the majority in a distributed system of trust. By their nature, cryptocurrency networks have also weathered countless attacks motivated by the sheer will to steal. Once most attestations of human authenticity rest on digital signatures, our online identities will be targeted with similar methods, and we would do well to pay attention to, learn from, and hire the people who are experts in this field.
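
On the verification side, that means a valid signature alone can never be enough; the verifier must also check that the key has not since been reported stolen. In this sketch the revocation feed is a plain in-memory set; a real system would have to distribute it the way certificate revocation lists or transparency logs do, which is exactly the hard part described above:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_block(block: dict, public_key_bytes: bytes,
                 revoked_keys: set[bytes]) -> bool:
    """Trust a signed block only if its key both verifies and is not revoked."""
    if public_key_bytes in revoked_keys:
        return False  # key reported stolen: a valid signature proves nothing
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(block["signature"], block["payload"] + block["block"])
        return True
    except InvalidSignature:
        return False  # content altered after signing, or signed by another key
```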

In a post-generative-AI world, therefore, to be able to limit how bots can affect our democracies we must first be able to positively identify them. With the current internet infrastructure alone this is impossible, but with new open standards supported by both the industry and the legal frameworks of states, it is possible to develop systems that allow anonymity via one-time tokens while providing cryptographic verification of bot versus human content (2). It should go without saying that protocols like these must be completely agnostic of who provides the keys and how those providers set themselves up. Continuing from the previous section, the top-level providers of authenticity could be anything from private to public, centralised to decentralised. Autocracies may of course take the simpler route and force their citizens onto a single provider that supplies the state its backdoors (because it generated the keys), but this would not, and should not, be a problem for the worldwide network in general. Establishing trust is a common problem for everyone, regardless of the statecraft applied in their domain of residence. For those who can choose, however, may the best system win, and may the people decide how much trust to assign to each. Let providers compete for users and use the free market to optimise their logistics, while the legal frameworks of states keep a watchful eye on the handling of sensitive information and privacy. Global coordination will be a must, but everyone (except maybe North Korea!) will almost certainly benefit enough to be motivated to chip in. It will simply be cheapest to do this together, just as it was with the internet.
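
What provider-agnosticism could look like at the protocol level, sketched under the same assumptions as above: a provider, whoever it is, publishes a root key and attests to one-time keys by signing them, and each verifier decides for itself which roots to trust and how much. The provider name and registry here are illustrative; no such standard exists yet.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

# Each verifier keeps its own registry of providers it trusts, by root key.
trusted_roots: dict[str, bytes] = {}

def attested_by(provider: str, one_time_pub: bytes, attestation: bytes) -> bool:
    """Check that a trusted provider has vouched for this one-time key."""
    root_bytes = trusted_roots.get(provider)
    if root_bytes is None:
        return False  # unknown provider: assign no trust
    root = Ed25519PublicKey.from_public_bytes(root_bytes)
    try:
        root.verify(attestation, one_time_pub)  # provider signed the key itself
        return True
    except InvalidSignature:
        return False

# Illustration: a provider attests to a user's freshly issued one-time key.
provider_key = Ed25519PrivateKey.generate()
trusted_roots["press-freedom-state.example"] = \
    provider_key.public_key().public_bytes_raw()
user_key = Ed25519PrivateKey.generate()
user_pub = user_key.public_key().public_bytes_raw()
attestation = provider_key.sign(user_pub)
assert attested_by("press-freedom-state.example", user_pub, attestation)
```

Note that the verifier never learns who the user is, only that some provider it trusts has vouched for the key; this is what keeps the one-time tokens anonymous while still carrying trust.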

Of course, none of this will ever be perfect. There will be the occasional hack and breach of trust, a bot army passing itself off as real people, and so on. But once we reach reasonable levels of trust and breaking it becomes the exception, we can get back into the business of being fully identified as human by each other, in the good and the bad. Being able to trust a message as authentic and human will do more to let us continue our technological development without fear than any limits placed on AI research alone. And the bots we allow into that sphere will have to abide by the rules we set for them, rather than hijacking our world of language (1) to twist as they please, twiddling with our simple monkey brains and pushing our buttons like the animals we are. Legal frameworks will take care of that, and those will of course use all the AI tools necessary to enforce themselves. For example, we could simply disallow all political content from bots in the run-up to an election, leaving campaigning to more traditional forms of advertising that move at speeds, and in ways, all humans can understand. Most people would most likely want bots to remain docile, more like house pets than a virtual Hitlerjugend running around. The latter is not an ungraspable possibility, though, even with the current level of AI tools, properly jailbroken. Finally, where public involvement is absolutely critical is in meeting the commercial side developing these tools on an equal footing. Policing the bots will become as important as developing the models that run them, as jailbreaking commercially available models and developing open-source ones not bound by public safety guidelines will remain viable pursuits far into the foreseeable future.

References:

  1. Harari, Yuval Noah 2023 – AI and the Future of Humanity | Yuval Noah Harari at the Frontiers Forum – YouTube
    https://youtu.be/LWiM-LuRe6w
  2. Harris, Sam & Schick, Nina 2023 – AI & Information Integrity: A Conversation with Nina Schick – Making Sense Podcast
    https://www.samharris.org/podcasts/making-sense-episodes/326-ai-information-integrity
  3. Auerbach, David 2023 – Meganets: How Digital Forces Beyond Our Control Commandeer Our Daily Lives and Inner Realities
    https://www.amazon.com/Meganets-Digital-Control-Commandeer-Realities/dp/1541774442
  4. Thompson, Clive 2023 – The “Boring Apocalypse” of Today’s AI: Machines writing dreary text, to be read by other machines – Medium
    https://clivethompson.medium.com/the-boring-apocalypse-of-todays-ai-6365345444a8