Let us be blunt: with regard to the long-term project of defending freedom of speech online, the recent Elon Musk vs. Twitter controversy has effectively derailed the conversation from solutions to problems. Not only are we not discussing what is relevant, the event has quickly become a pawn in the ever-raging culture war between US liberals and conservatives. With Musk himself seemingly unaware of his outsized political status, and likely unable to see past a deadlocked, bipolar political system, any grand hopes of spontaneous course-correction are unlikely to be fulfilled. There are, of course, glimmers of hope: Tristan Harris at the Center for Humane Technology, the Facebook whistleblower Frances Haugen, or Slate, which has been a particularly refreshing source of articles that look at the big picture rather than the eye-catching small details. My sincere hope is that articles like theirs, and this one, will eventually grow from drips into a rainfall and steer the public discourse in a healthier direction.
With Twitter and Facebook in particular, we have entered a new period of social and political discussion online. So far, we have relied on private social media companies to carry us over the threshold into this brave new world, but are we actually prepared for a world shaped to suit their needs as businesses? In my opinion, not nearly enough hard and honest analysis has been made, or popularized, of what these companies would require of us and our societies in order to remain maximally profitable, and of how ready they are to demand it. And unfortunately, the least has been said about how we can make sure we are aware of when and how they are doing it to us. We have good, established laws and processes for handling the responsibilities of media actors in the real world, but so far only a few have seen or raised the issue that the core operating principles of social media platforms are so different from traditional media that our problems lie far more in what we don’t know than in what we do.
But let’s get to the matter at hand, shall we? Even though internet-based networks have long been available to the technically savvy, it is only within the last ten years that digital social media has become the primary distribution and discussion platform for news for everyone. The way this transformation keeps changing news itself is readily observable: standard ‘clickbait’ headlines are already yesterday’s topic. We are now entering a more dangerous phase in which ill-intentioned culture-war content has become commonplace, and an even more sinister wave of machine-learning-generated misinformation and scams masquerading as news is clearly visible on the horizon.
Why have the private social media companies become so successful, though? One might think that building a network that simply lets people communicate would be easy: just ask people what and whom they want to hear news from, and run with it. It turns out this assumption is completely false: the breadth of information released by a global networked community is simply too wide for any human actor to sort through. Hence the secret sauce for providing a useful service has become the filtering algorithm. These models started out as simple heuristics, but have grown into huge and extremely complex systems that model the psychological characteristics of each individual user. Companies like Meta, Twitter and Alphabet have become experts at knowing what one wants, how, and at what time one wants it.
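To make this concrete, below is a deliberately minimal Python sketch of engagement-driven feed ranking. Every name and weight in it is a hypothetical illustration of mine, not any platform’s actual system; real ranking pipelines are vastly more complex and learned from behavioral data.

```python
# A deliberately simplified sketch of engagement-driven feed ranking.
# All fields and weights are hypothetical illustrations, not any
# platform's actual model.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    author_affinity: float  # how closely the user follows this author (0..1)
    topic_match: float      # predicted interest in the topic (0..1)

def engagement_score(post: Post) -> float:
    """Predict how likely the user is to engage with this post."""
    popularity = post.likes + 2.0 * post.shares  # shares weighted higher
    return popularity * (0.5 * post.author_affinity + 0.5 * post.topic_match)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed so the most 'engaging' posts surface first."""
    return sorted(posts, key=engagement_score, reverse=True)
```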
Taken on benign terms alone, these machine-learning filters are simply amazing tools. Imagine taking a walk across a busy city square, observing the signs and the people gathering around them, and using that to work out which spots get the most attention and are therefore probably the most interesting. These tools take that insight people would naturally gather themselves and automate it for the pleasure of the user. And we don’t simply observe which spots are most popular; we are also very keen to spot which of the gathering groups are most like us. This, too, is something the algorithms have become very adept at modelling, delivering the “culturally relevant” content to our digital doorsteps without difficulty.
Let’s dig a bit deeper and consider what our experience on social media would be like if it modelled our normal interactions on the street, but presented from a bird’s-eye view, which is presumably how most of us would like to feel when we log on to our social media accounts. As has been popular lately, let’s indeed compare this to the hypothetical town square: if social media were like a town square, we would not only register the number of likes a speaker gets, we would register the details of their crowd as well. Most importantly, we would register how many people have seen the message but are not part of the crowd. We would also notice if crowds that usually stayed apart were suddenly intermingling. And we would detect all of this well in time, before we even hear what the actual message is.
Note the emphasis on crowds in the previous paragraph. What the social media algorithm is effectively trying to do is to replace our sense for finding the “right crowd”. This, however, is one of the primary ways in which humans naturally make decisions. Gravitating to the right crowd is an essential part of how we decide which leaders we want to follow, and which ones we want to avoid. On the street, a conspiracy fanatic can put out any number of signs and have as many books on sale as he likes, but as long as the kind of people you identify with keep passing this person by, while only the people you find unpleasant interact with them, you will not take their message to heart. As time goes on, people from all walks of life will see how many people simply pass this person by, and even though there is no censorship, the message does not catch on.
On current social media, however, the algorithms not only fail to support these models our unconscious processing systems build naturally; through multiple fake accounts, bad actors can suddenly conjure up a very large crowd, fooling us into thinking a message was not the work of a lone wolf. We also have absolutely no idea how many people saw a message and didn’t react, we don’t know what a person saw before and after seeing it, and we don’t know the route they took before they decided to interact. It turns out there is a lot to be learned from the context in which any person interacted. In natural settings we would be privy to this information. On the dominant social media platforms of today, we are left in the dark.
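As a thought experiment, here is what a single record of that missing context might look like, sketched as a hypothetical Python data structure. The schema and field names are my own invention for the sake of argument, not any platform’s real data model.

```python
# A hypothetical 'impression record': the context data the paragraph above
# argues is hidden from us today. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ImpressionRecord:
    post_id: str
    viewer_id: str         # pseudonymous viewer identifier
    reacted: bool          # did the viewer like/share/reply, or just pass by?
    previous_post_id: str  # what the viewer saw immediately before
    next_post_id: str      # what the viewer saw immediately after
    seconds_viewed: float  # dwell time on the post

def pass_by_rate(records: list[ImpressionRecord], post_id: str) -> float:
    """Fraction of viewers who saw the post but did not react: the
    'people just walking past' signal we can read on a real street."""
    seen = [r for r in records if r.post_id == post_id]
    if not seen:
        return 0.0
    return sum(not r.reacted for r in seen) / len(seen)
```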
Another important psychological trait that the social media companies have modelled is the need for safety and comfort. In the real world, our sense of safety is based on observations of harmony and conflict, and to some extent on the individual’s own inclination toward seeking either of those poles. To maximize engagement, psychological profiling can be used to limit the exposure to threats for those who naturally seek comfort, while maximizing it for those who actively seek out signs of conflict. Again, we face the same issue: the public is left in the dark about how, and in what ratios, these different views of the world are rendered for us online.
As for the supposed need for censorship, the social media giants themselves have certainly not taken an active role in arguing otherwise. In fact, for most of the time they have assured us that the question to be solved is one of moderation: who gets to be on the platform and who gets banned. However, from their own research they simply must know that harmful messaging is more engaging than civil messaging. It often looks more like they are scoring points at the right time with the right people, with those getting banned not always worse than those still on the platform, rather than trying to move the platform in a more humane direction. And their changes are often temporary: research shows that after the press has quieted down, the algorithms tend to go back to promoting inflammatory content.
However, in all of their communication to date, almost never has any social media company mentioned data. This is a topic neither they nor most other parties readily bring up. Any social media executive worth their salt knows that if the public demanded open data access to their platforms, their wiggle room would shrink dramatically. Keeping the conversation about who gets to speak and what they get to say, and pouring great resources into appearing to work hard on it, may even be a convenient distraction from tackling the problem at its root: if the public had access to the data, and the results could be compared word for word against what the companies claim to be doing, accountability would come knocking at their door very quickly. Their current position, in which they are not scientifically accountable for any changes in our social fabric they might be helping to foster, is a very advantageous one for a business, don’t you think? Moreover, should that data ever be buried along with the companies (say, in a takedown), without open access we would never even know what was lost. Could we afford to lose the data that essentially contains the breadcrumbs of how our societies changed when we, as a species, made the leap from analog print media to online digital presences? I think not.
As noted at the beginning of this article, the wonder of social media lies in the algorithm. Its magic is the effective emulation, and partial replacement, of the limbic and cognitive grouping and filtering functions of the human brain. For individual humans, however, we have law and order to keep people accountable for how their brains perform: even when actions are subconscious, we still put people in jail when they hurt others, even if they didn’t understand what they were doing. With the current social media companies, we are effectively cutting what the platforms do to cybernetically augment us out of that loop of law and order. We are allowing them to replace a large chunk of what guides our actions as people, and in return we ask almost no accountability for how this augmentation performs its work on a day-to-day, minute-to-minute, millisecond-to-millisecond basis.
We humans are extremely complex creatures, and the structures we build are more complex still. In some ways we are much like ants: the sum of our actions is stupendously greater than the actions of any single specimen. In our progress through modernity, however, we have come up with something probably no other creature has stumbled upon yet: the power of inter-subjective verification. It means that only facts that can be verified between individuals, as much as within their own thoughts, can be claimed valid. Science is the best-known product of this marvel of human capability. Holding on to the requirement of inter-subjective verification guarantees that we cannot become trapped by ideas popular only within certain circles, or by facts that apply only to others. Yet with the current social media giants, it is as if the call for inter-subjective verification does not apply at all. I have personally failed to understand why they are constantly taken at their word instead of being judged by their actions: why aren’t we calling on them to allow inter-subjective verification of how their algorithms work?
This finally brings us to the main topic of this post: censorship and the opposition to it. I believe it is simply a red herring, wielded in front of us by parties who benefit socially, monetarily or politically from keeping it at the forefront of discourse. The real point is that the algorithms could be adjusted to be transparent, fair and better suited to building interactions that benefit the democratic process. The social media companies could very likely (and quickly) train models that look for robust critique, civilized argumentation and balanced debate. They could tune them to find unlikely consensus, and stories where differences were settled and people admitted to being wrong. With open data, we could mine all the posts in the world to lay down the building blocks of a civil and functioning democracy. In fact, if all of that were done, censorship of any kind might well become practically unnecessary. Just as nobody pays attention to the conspiracy fanatic’s shed on the public square, with humane user experiences and humane algorithms, close to nobody would eventually pay attention to such content on a humane social media either. The parties that benefit keep pointing us toward censorship, keeping us arguing, while the actual low-hanging fruit of open data is kept out of the spotlight, where it rightfully belongs as a sustainable long-term solution for striking a balance between our needs as societies and the needs of business and political entities.
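To sketch what such a re-tuned objective could look like, here is a minimal Python illustration. The `civility_score` function is a toy stand-in of my own (it merely penalizes shouting); the trained classifier the paragraph imagines would be a learned model, not this heuristic.

```python
# A sketch of blending a civility signal into ranking instead of
# optimizing engagement alone. `civility_score` is a toy placeholder.
from dataclasses import dataclass

@dataclass
class ScoredPost:
    text: str
    engagement: float  # predicted engagement, as platforms optimize today

def civility_score(text: str) -> float:
    """Toy stand-in for a trained civility classifier (0..1): here we
    simply penalize all-caps shouting and runs of exclamation marks."""
    words = text.split()
    if not words:
        return 1.0
    shouting = sum(w.isupper() and len(w) > 1 for w in words) / len(words)
    exclaiming = min(text.count("!") / 5.0, 1.0)
    return max(0.0, 1.0 - 0.5 * shouting - 0.5 * exclaiming)

def humane_rank(posts: list[ScoredPost], civility_weight: float = 0.7) -> list[ScoredPost]:
    """Blend predicted engagement with civility when ordering the feed."""
    def score(p: ScoredPost) -> float:
        return (1 - civility_weight) * p.engagement + civility_weight * civility_score(p.text)
    return sorted(posts, key=score, reverse=True)

posts = [ScoredPost("THIS IS AN OUTRAGE!!!", engagement=0.9),
         ScoredPost("A fair point, I was wrong about the data.", engagement=0.4)]
print([p.text for p in humane_rank(posts)])  # the civil post ranks first
```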
So, is there a problem here somewhere? If the data on post history and reposting patterns became open, some of the tricks social media companies use to boost their profitability would be questioned not only on the basis of engagement but also on their morality. The companies would probably have to stop implementing some of the most radical ones, and as a result they might become a little less profitable. Facebook might have to scale down a bit, but it would very likely remain hugely profitable even with public research institutions keeping its algorithmic exploits in check. The data would also have to be strictly anonymized to keep users’ privacy safe, and public spending would have to increase somewhat to police those implementing the anonymization against exploits and attempted hacks aimed at stealing the data in unanonymized form. And for research access to the database to run smoothly and fairly, efficient bureaucratic operations would be required, which are sometimes difficult to set up.
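As for the anonymization itself, one standard building block is keyed pseudonymization of user identifiers, sketched below in Python. This is illustrative only: a real data release would need further layers on top (such as k-anonymity or differential privacy), which is exactly what the policing described above would have to audit.

```python
# A minimal sketch of pseudonymizing user identifiers before a data release.
# A keyed hash (HMAC) gives each user a stable pseudonym that cannot be
# reversed to the raw ID without the secret key held by the data custodian.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-guard-this-key"  # kept by the custodian, never released

def pseudonymize(user_id: str) -> str:
    """Map a real user ID to a stable pseudonym via HMAC-SHA256."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# The same user always maps to the same pseudonym, so reposting patterns
# remain analyzable while the identity behind them stays hidden.
record = {"user": pseudonymize("alice@example.com"), "action": "repost", "post": "p123"}
print(record["user"][:16])
```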
An efficiently anonymized or pseudonymized database would also be a treasure trove for the advancement of psychology, behavioral science, cognitive science, the social sciences, anthropology, and almost any non-mathematical branch of science. The data could be mined with machine-learning algorithms for preventative medicine, urban and social planning, social justice, advanced social-science goals like preventing segregation between social classes, and so on. This research could also yield new tools for managing our coming global challenges, climate change especially: the methods for managing huge populations facing very particular problems tied to their particular patterns simply aren’t there yet, and as humans we cannot adapt fast enough. For these kinds of complex problems rooted in our limitations as a species, machine-learning tools are perhaps the only solutions that can offer the god’s-eye view we need to lift the veil of our own particular biases and blind spots.
The thing is, freedom of speech is one of the design principles we need, but it is far from the only one. In addition, we need the civil and the humane to connect clearly with the technology and the business. With this brave new world of instant online networking, we actually want to arrive somewhere better than the limitations we face now. On a more humane social media, freedom of speech can lead to both good and bad, but any extremism it might breed can die out naturally, as it does on the street in civilized and free societies everywhere they exist, instead of being artificially boosted for a profit motive. Basically, we as societies have a pretty good understanding of how free speech with all its caveats works; the proof is in the pudding of our functioning democracies. Leaving the social media companies out of the loop of responsibility for how they handle our free speech is the actual core problem in this debate, even if it has received remarkably little attention of late.
A public demand for accountability from social media platforms via open data access is the best, simplest and most viable way to guarantee a fair chance of keeping the companies’ algorithms in check, making it possible to demand that their ideas of the future of human behaviour online stay aligned with ours. We need it far more than we need anyone taking over Twitter. It is truly a low-hanging fruit that people keep casually passing by. Why on earth would that be, and why aren’t more people demanding it right now? You tell me.
Links and interesting podcasts for digging deeper. I recommend subscribing!
Center for Humane Technology
Your Undivided Attention podcast’s special bonus episode on the matter
https://www.humanetech.com/podcast/bigger-picture-elon-twitter
Conspirituality podcast episode 100