Platforms are not neutral
Online debate and the rules of interaction
Critique of social media tends to focus on the content of online discourse, particularly the impacts of fake news and hate speech. But how do social media platforms themselves determine interaction, and how can users adapt to default functionalities in the interests of constructive debate?
Discussion of social media generally focuses on content: what people are saying and how they are saying it. There are certain topics – immigration or Islam, for example – where debate tends to deteriorate: people write the first thing that occurs to them, and hatred, racism and sexism predominate. There is, however, one important dimension that is rarely examined in its own right, or only in very general terms: the platforms where exchanges take place.
The platform itself is by no means ‘neutral’. Taking part in an online forum devoted to a specific subject, discussing topics on Twitter with complete strangers or debating on Facebook with ‘friends’ are not the same thing. These three types of platform offer users differing options and functionalities, and these have a major influence on how the debate unfolds. Of course, this is not to suggest that the online exchanges in which we habitually engage are entirely determined by the platforms on which they take place, but it is important to be aware of the way in which these platforms set their parameters.
The platform’s ‘affordances’
On Facebook, most exchanges take place on someone’s page (or ‘wall’) and thus between people who, by definition, are already in contact in one way or another (because they belong to a ‘group’, follow one or more common Facebook pages or share ‘friends’). When an exchange takes place, it does so as commentary on an initial post. Users can reply to one another – up to a point: Facebook ‘allows’ only two ‘levels’ of reply (responding to the initial post, and responding to a first-level response). If you are commenting on a photo of your friend’s child in the school play, that is all you need, but things become far more complicated in a longer or more polemical exchange with many participants, who end up replying to each other within the main thread of comments. This obliges participants to choose to respond to what concerns them and to ignore what does not. That inevitably causes misunderstandings, for example when new participants refer to exchanges that took place ‘further back’ in the thread. These earlier exchanges may not be directly visible (users have to scroll back to them manually, or they are ranked by popularity), or the user may not look at them for lack of time or interest. Nevertheless, Facebook has one advantage: it maintains a chronological hierarchy that is identical for all participants.
This is often not the case with Twitter. Contributions there are necessarily shorter (until recently 140 characters, now 280), or are ‘sliced’ into several successive tweets that are more or less spaced out in time; moreover, the tweets a user sees in an exchange will vary from one user to another, depending on their contacts, on the contacts and ‘hashtags’ mentioned in any contribution, and on the moment when the user joins the discussion. It is now possible to create threads, each tweet linked to the next, allowing a more developed narrative line, but it is still possible to respond to any one of these tweets and to enter the discussion at the point where you notice one of them. Consequently, the discussion is not necessarily structured as a chronology common to all participants.
This brief comparison of some of the features of these two very widely used platforms shows that interaction will not necessarily take place in the same way on each, since the options available to users are not the same. These options are known as the platforms’ ‘affordances’1 – in other words, the totality of possibilities these environments offer. They are not ‘neutral’. Whether one can respond to a particular contribution or has to address one’s remarks to the group at large; whether the platform allows images or links to be posted; whether all participants have the same view of a given exchange and its dynamics – all these aspects play a part in determining the exchange and its outcome.
Shared or non-existent functionalities
The creativity of a platform’s users is also a factor. If a community deems necessary a functionality that is not built into the platform, it is not unusual for a creative workaround to emerge. In the forums of the 1990s and early 2000s, for example, it was impossible to address comments to one particular person, since contributions were generally posted one after another in chronological order, without any hierarchy. This was not necessarily practical in an exchange between several people, in which several interlinked dialogues were generated (A responds to B while C speaks to B and D, who, in turn, responds to A, and so on). Consequently, numerous platforms emerged on which users created their own codes and procedures to make exchanges easier to follow: conventions for addressing a specific user (‘@Baptiste: I don’t agree with …’), or for quoting other users by indenting their contributions or setting them in italics. The same applies to posting links: ‘please see http://www…’
Nowadays, most online interactive platforms include these and other options: free text (which may be limited in length, as in the case of Twitter); identification of the contributor (via their account or their profile, which makes it possible to view other contributions by them, their network of relationships, the way they present themselves, etc.); the ability to post images or links; the option to personally address comments to other users (by ‘tagging’ them or explicitly specifying that they are special addressees of a particular contribution). The flow of exchanges is therefore safeguarded: everyone can make their contribution and do so more easily than was the case for the previous generation of internet users.
But do these facilities yield more ‘productive’ conversations? Probably not. For one thing, ease of publication is likely to reinforce an ever more powerful illusion of ‘real time’. Whilst this offers interesting possibilities – such as the impression of interacting in writing with the same spontaneity as one might orally – it can also lead to parallel monologues: each user simply proceeds with their own contribution at the same time as everyone else. This is the typical dynamic of comments on press articles and, even more so, of Facebook posts involving a large number of participants. Writing in real time makes it impossible to ‘take turns’ at speaking in a way that would require contributors to consider what their interlocutor has just said before making their own contribution, let alone before responding to it.
Although these platforms are making great strides in developing ease of access and publication, they have far less to offer in terms of regulation and synthesis. Leaving aside the difficulties raised by the automated regulation of exchanges (that is, through algorithms and not through human intervention, using charters, arbitration and sometimes sanctions such as exclusion or blocking the user), let us focus on synthesis. When does an online exchange come to a conclusion?
The question may seem simple, but the answer is not so obvious: exchanges conclude when nobody is contributing any more. Some platforms generate a de facto conclusion (for example, by preventing anyone from contributing beyond a certain deadline), whilst others impose no limit, leaving threads to die away for lack of protagonists. But the absence of new contributions does not mean that the exchange actually got anywhere. Did everyone get a chance to have their say? Did every point of view get a hearing? Did participants find points on which they could agree or accept a minimum consensus?
We don’t know. It would be reasonable to hope that the many online debates in which thousands of internet users engage every day might affect their views on the subjects discussed. But platforms do not make conclusions possible; indeed, at times they actually prevent them. When anyone can intervene at any time to reopen a question that previous participants had already resolved, the exchange goes round in circles, even though this may not be the users’ intention.
This possibility of reopening the debate at any time affects exchanges between opposing points of view about climate change, for example. Such exchanges are characterized by the absence of shared premises: ‘climate-change sceptics’ reject all scientific evidence – or almost all – making any kind of exchange with them very difficult. And yet, by interacting online, in some cases daily, internet users do contrive to agree on certain points, such as what it is they disagree on and how these disagreements should be approached. But all it takes is for one new user to arrive and ask a question that had been a stumbling-block some weeks earlier for the process to start up again. The shared premise for discussion has to be redefined yet again.
Whilst such impasses can result from a wish to sabotage the discussion, trolling is not always the cause. Instead, it is a relatively predictable consequence of the way that interactive platforms function. After all, you cannot reproach a newcomer for not having read the hundreds (sometimes thousands) of posts or tweets previously exchanged on the subject, especially when their content has not resulted in any synthesis and is anyway inaccessible to novices. Consequently, anyone can challenge everything and anything at any time. In general, platforms provide no adequate tool for referring to earlier agreements or any sort of ‘precedent’.
It is true that more and more platforms include ‘voting systems’ that make it possible to highlight the messages or contributors with the highest levels of support. But as things currently stand, these tools have a limited effect, not least because it is often unclear how internet users actually employ them. Does a Like mean ‘You are right’, ‘I agree with you’, ‘I really liked what you said’ or something else entirely? Above all, platforms rank such messages or contributors purely on the basis of popularity and not, for example, on their contribution to any synthesis.
There is no ideal platform
It makes little sense to criticize Facebook or Twitter for not letting constructive discussions develop, since they were not designed for that purpose. Yet we ought to be realistic: more often than not, choosing a platform is a default choice rather than a reasoned one. One settles for what is simple, where one’s network is to be found and where one finds the people one wants to debate with. Not to take account of such criteria is to risk finding oneself on a site that is little used, or not used at all. When the choice is between structurally flawed discussions and no discussions at all, it is clear that the ‘digital agora’ is going to be difficult to achieve.
In this context, rather than waiting for public sites that are both popular and (better) suited to debate, it would be useful for every internet user to ask themselves about their online behaviour and the ways that they use and/or avoid the facilities offered by their favourite platforms. This will not change those platforms. But taking a critical step back could be one way to overcome their intrinsic limitations.
James J. Gibson, ‘The Theory of Affordances’, in Robert Shaw and John D. Bransford (eds), Perceiving, Acting, and Knowing: Toward an Ecological Psychology, New York 1977.
Published 6 August 2018
Original in French
Translated by Mike Routledge
First published by Eurozine (English version)
Contributed by La Revue nouvelle © Baptiste Campion / La Revue nouvelle / Eurozine