Take a look at this graph. It shows rising rates of depression among American teenagers, with a notable uptick, especially among girls, since around 2010. It, and studies like it, are at the centre of a debate about social media and mental health.
Findings like this have been replicated in many countries: in many cases, reports of mental health problems – depression, anxiety, self-harm, suicide, and so on – have almost tripled.
Psychologist Jonathan Haidt argues that the timing is clear: the cause is social media. Others have pointed to the 2008 financial crash, climate change, worries about the future. But Haidt asks why any of these would affect teenage girls in particular.
He points to Facebook’s own research, leaked by the whistleblower Frances Haugen, which found: ‘Teens themselves blame Instagram for increases in the rate of anxiety and depression… this reaction was unprompted and consistent across all groups’.
In 2011, around one in three teenage girls surveyed reported experiencing persistent ‘sadness or hopelessness’. Today, the American CDC’s Youth Risk Behavior Survey reports that 57% do. In some studies, shockingly, 30% of young people say they’ve considered suicide, up from 19%.
At least 55 studies have found a significant correlation between social media use and mood disorders. A study of 19,000 British children found that the prevalence of depression was strongly associated with time spent on social media.
Many studies have found that time spent watching television or Netflix is not the problem: it’s specifically social media.
Of course, causation rather than correlation is difficult to prove. Social media has become ubiquitous over a period in which the world has changed in many other ways. Who’s to say it’s not fear of existential threats from climate change, inequality, or global politics, or even a more acute focus on mental health more broadly?
But Haidt points out that the correlation between social media use and mental health problems is stronger than that between childhood lead exposure and impaired brain development, and stronger than that between binge drinking and poor overall health. And both of those are things we address.
He argues all of these studies – those 55 at least, and many, many more that are related – are not just ‘random noise’. He says a ‘consistent story is emerging from these hundreds of correlational studies’.
Instagram was founded in 2010, just before that uptick. And the iPhone 4 was released at the same time – the first iPhone with a front-facing camera. I remember when it was ‘cringe’ to take a selfie.
It also makes sense qualitatively. School-age children are particularly sensitive to social dynamics, bullying, and self-worth. And now they’re suddenly bombarded with celebrity images, idealised body shapes and beauty standards, and endless images and videos to compare themselves to on demand. On top of this, social networks like Instagram display the size of your social group for everyone to see: how many people like you, how many like your next post and your comments – and, more importantly, as a result, how many people don’t.
Social media is popularity quantified for everyone in the schoolyard to see.
One study, using an app designed to imitate Instagram, found that participants exposed to images manipulated to look extra attractive reported poorer body image in the period afterwards.
Another study looked at the rollout of Facebook across university campuses in its early years and compared those periods with surveys of mental health. It found that when Facebook was introduced to an area, symptoms of poor mental health, especially depression, increased.
Another study looked at areas where high-speed internet was being introduced – making social media more accessible – and then examined hospital data. The researchers concluded: ‘We find a positive and significant impact on girls but not on boys. Exploring the mechanism behind these effects, we show that HSI increases addictive Internet use and significantly decreases time spent sleeping, doing homework, and socializing with family and friends. Girls again power all these effects’.
Young girls, then, seem to be especially affected. The reasons why are difficult to establish – although idealised beauty standards are one obvious answer.
One researcher, the epidemiologist Yvonne Kelly, said: ‘One of the big challenges with using information about the amount of time spent on social media is that it isn’t possible to know what is going on for young people, and what they are encountering whilst online’.
In 2017, here in the UK, a 14-year-old girl, Molly Russell, took her own life after looking at posts about self-harm and suicide.
The Guardian reported: ‘In a groundbreaking verdict, the coroner ruled that the “negative effects of online content” contributed to Molly’s death’.
The report said that, ‘Of 16,300 pieces of content that Molly interacted with on Instagram in the six months before she died, 2,100 were related to suicide, self-harm and depression. It also emerged that Pinterest, the image-sharing platform, had sent her content recommendation emails with titles such as “10 depression pins you might like”’.
Studies have found millions of self-harm posts on Instagram; the hashtag ‘#cutting’ alone had around 50,000 posts each month.
A Swansea University study, which included respondents with a history of self-harm and those without, found that 83% of them had been recommended self-harm content on Instagram and TikTok without searching for it. And three-quarters of those who self-harmed had harmed themselves more severely as a result of seeing self-harm content.
One researcher said, ‘I jumped on Instagram yesterday and wanted to see how fast I could get to a graphic image with blood, obvious self-harm or a weapon involved… It took me about a minute and a half’.
According to an EU study, 7% of 11- to 16-year-olds have visited self-harm websites. These are websites, forums, and groups that encourage and often explicitly admire cutting. One Tumblr blog posts suicide notes and footage of suicides. Many of these communities have their own language – codes and slang.
Another study found that, to no one’s surprise, those who had visited pro-self-harm, eating disorder, or suicide websites reported lower overall levels of self-esteem, happiness, and trust.
Harm Principles
Okay, but anything can be harmful. Crossing the road carries risks. So do many other technologies – driving, air travel, factories, medicines. But with other technologies we identify those risks – the harmful effects or side effects – and try to ameliorate them.
These problems are bound up with, and often come into conflict with, other values that we hold dear – free speech, the freedom of parents to raise children as they wish, the liberal live-and-let-live attitude.
But we usually tolerate intervention when there is a clear risk of harm.
Our framework for thinking about liberal interventionism comes from the British philosopher J.S. Mill’s harm principle. That my freedom to swing my fist ends at your face. That we are free to do what we wish as long as it doesn’t harm others.
Mill wrote: ‘The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others’.
We – usually the police or government – prevent violence before it happens, investigate threats, make sure food and medicine and other consumer products aren’t harmful or poisonous. We regulate and have safety codes to make sure technology, transport, and buildings are safe.
So to make sense of all of this, I want to start from cases where social media has actually harmed in some way, and work back from there. One of the problems, as we’ll see, is that it’s not always clear where to draw the line or how to draw it.
Harmful Posts
First, is there even any such thing as a harmful post? After all, a post is not the same as violence. It might encourage, endorse, promote, lead to, or raise the likelihood of harm. But it’s not harm itself. As Jonathan Rauch says, ‘If words are just as violent as bullets, then bullets are only as violent as words’.
But in other contexts, we do intervene before the harm is done. False advertising or leaving ingredients that might be harmful off labels. Libel laws. We arrest and prosecute for planning violence, even though it hasn’t been carried out. For threats.
These are cases where words and violence collide. I call it ‘edge speech’ – they’re right at the edge of where an abstract word signals that something physical is about to be done in the world.
During the Syrian Civil War, which started in 2011, at least 570 British citizens travelled to Syria to fight, many of them for ISIS.
The leader – Abu Bakr Al-Baghdadi – called for Sunni youths around the world to come and fight in the war, saying, ‘I appeal to the youths and men of Islam around the globe and invoke them to mobilise and join us to consolidate the pillar of the state of Islam and wage jihad’.
ISIS had a pretty powerful social media presence. One recruitment video was called, ‘There’s no life without Jihad’. They engaged in a ‘One Billion’ social media campaign to try and raise one billion fighters. They had a free app to keep up with ISIS news, ‘The Dawn of Glad Tidings’, and used Twitter to post pictures, including those of beheadings.
The Billion campaign, with its hashtags, led to 22,000 tweets on Twitter within four days. The hashtag ‘#alleyesonISIS’ had 30,000 tweets.
One Twitter account had almost 180,000 followers, and its tweets were viewed over 2 million times a month, with two-thirds of foreign ISIS fighters following it.
Ultimately, the British Government alone requested the removal of 15,000 ‘Jihadist propaganda’ posts.
Or take another example: what’s been called ‘Gang Banging’.
Homicides in the UK involving 16- to 24-year-olds have risen by more than 60% in the past five years. There are an increasing number of stories of provocation through platforms like Snapchat. In one instance in the UK, a 13-year-old was stabbed to death by two other boys, aged 13 and 14, after an escalation that began on Snapchat with bragging about knives. In another, a 16-year-old was filmed dying on Snapchat after being stabbed.
One London youth worker told Vice that ‘Snapchat is the root of a lot of problems. I hate it’, and that it’s ‘full of young people calling each other out, boasting of killings and stabbings, winding up rivals, disrespecting others’.
Another said, ‘Some parts of Snapchat are 24/7 gang culture. It’s like watching a TV show on social media with both sides going at it, to see who can be more extreme, who can be hardest’.
Vice reports that much gang violence now plays out on Snapchat in some way, with posts tied to reputation – impressing, threatening, humiliating, boasting, and, of course, eventually escalating.
Youth worker and author Ciaran Thapar said, ‘When someone gets beaten up on a Snapchat video, to sustain their position in the ecosystem they have to counter that evidence with something more extreme, and social media provides space to do that. It is that phenomenon that’s happening en masse’.
The head of policing in the UK has also warned that social media is driving children to increasing levels of violence (https://www.bbc.co.uk/news/uk-43603080).
Hate Speech
Or take another example: controversial hate speech laws.
The UN says: ‘In common language, “hate speech” refers to offensive discourse targeting a group or an individual based on inherent characteristics (such as race, religion or gender) and that may threaten social peace’.
This latter part is often forgotten. That the point of hate speech laws – rightly or wrongly, as we’ll see – is to address threats of harm before they happen.
The Universal Declaration of Human Rights – which many countries have adopted, and most in Europe at least have similar laws to – declares that, ‘In the exercise of their rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society’.
But, in an exception, the International Covenant on Civil and Political Rights states: ‘Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law’.
These laws were developed after the Nazi atrocities during the Second World War, and it was argued that laws of this type – now called hate speech laws – were necessary because the threat of harm from things like genocide was so great.
The UN’s David Kaye writes that, ‘the point of Article 20(2) was to prohibit expression where the speaker intended for his or her speech to cause hate in listeners who would agree with the hateful message and therefore engage in harmful acts towards the targeted group’.
It wasn’t meant to ban speech that caused offence, but to prevent speech that would lead to violence.
Of course, the problem is that this is very difficult to define, but by many metrics speech loosely defined as ‘hate speech’ has increased over the past few years.
In ‘Western’ countries including New Zealand, Germany, and Finland, 14-19% of people report being harassed online.
A 2020 study in Germany found that the share of people observing hate speech online had almost doubled over five years, from 34% to 67%.
Just under 50% of people in the UK, US, Germany and Finland (aged 15 to 35) answered yes to: ‘In the past three months, have you seen hateful or degrading writings or speech online, which inappropriately attacked certain groups of people or individuals?’
And some research has found that rates of youth suicide attempts are lower in US states that have hate crime laws.
The Edges of Harm
Okay, so when should social media companies and the government step in? Should social media companies host self-injury groups? Should the Ku Klux Klan have a Facebook page? What’s the difference between a joke and harassment? Should the police ever be involved? There’s also a variety of ways of addressing issues: from banning, to hiding posts, shadow banning and demonetising, and age restriction, to civil suits, fines, and prosecution.
Then there’s the problem of overreach. I’ve had this channel demonetised, and videos age-restricted and demonetised – videos I spend months working on and get nothing back from, and which never recover in the algorithm. This video is on a sensitive topic, and I wouldn’t be surprised if it has problems with reach and demonetisation, so if you’d like to support content like this you can do so through Patreon below.
Okay, so how can we approach this? One answer I think we can rule out pretty quickly is the libertarian one. Let everyone do what they want, users and platforms alike, and social media platforms and pages will thrive or fail as a result.
First, no libertarian society in history has worked. And second, even in the early days of the internet, when in effect the libertarian approach did thrive, social media companies slowly realised that if they let their platforms fill up with self-harm, pornography, violence, and more, advertisers and users tend to leave quickly. So they started to self-regulate – and, some say, over-regulate.
As a result, free speech has become a subject of fierce debate. This is, I think, for three reasons: first, that free speech is, correctly, considered one of our most important values. Second, that with the internet, we now have more speech than ever before. And third, because in some cases, speech clearly can lead to harm.
We’ve seen this: suicide, mental health issues, self-harm, eating disorders, depression, promotion of terrorism and gang violence, the promotion of hate speech that openly calls for fascism or genocide.
So should we restrict speech of this type?
First, we should acknowledge that there is no such thing as free speech absolutism. We already limit the fringes of speech in many ways: threats, harassment, blackmail, libel, slander, copyright, incitement to violence, advertising standards, drug and food labelling standards, broadcasting standards.
Furthermore, we restrict many freedoms based on the likelihood of causing harm: health and safety and sanitation in restaurants, building codes, speeding and drink driving laws, wider infrastructure requirements, air travel regulation, laws on weapons, knives, etc… the list goes on.
So if we regulate these things, why do we not regulate social media companies when there is a significant risk of harm? I think we should be careful, focusing only on those very ‘edge cases’. If there’s a substantial risk of harm, and regulation can effectively reduce it while minimising the curtailing of freedoms, then it should be done by whichever institutions, companies, or governments can do so effectively. Importantly, how this is done should be subject to democratic debate.
Policy analyst and author David Bromell writes: ‘Rules that restrict the right to freedom of expression should, therefore, be made by democratically elected legislators, through regulatory processes that enable citizens’ voices’.
He continues: ‘Given the global dominance of a relatively small number of big tech companies, it is especially important that decisions about deplatforming are made within regulatory frameworks determined by democratically elected legislators and not by private companies’.
And none of this is to deny that it’s difficult, and finding that line, striking the right balance, is complex. But in the cases I’ve mentioned, with the statistics as they are, to do nothing would be irresponsible. In all of them, I think the potential for harm is often as clear as the potential for harm from, for example, libel.
Does this mean posts, forums, and speech of this type should be banned outright? Not always. Human rights expert Frank La Rue has argued that we should make clear distinctions between: ‘(a) Expression that constitutes an offence under international law and can be prosecuted criminally; (b) expression that is not criminally punishable but may justify a restriction and a civil suit; and (c) expression that does not give rise to criminal or civil sanctions, but still raises concerns in terms of tolerance, civility and respect for others’.
In other words, context, proportionality, and tiered responses all matter. There are many policies that can be put in place before banning or prosecution: not amplifying certain topics, age restrictions, removing posts, or demonetising.
Haidt argues that big tech should be compelled by law to allow academics access to their data. One example is the Platform Transparency and Accountability Act, proposed by the Stanford University researcher Nate Persily. We could raise the age above which children can use social media, or force stricter rules. We could ban phones in schools.
Finally, democratic means transparent, so we can all have a debate about where the line is. YouTube is terrible at this. I don’t mind if a video gets demonetised or age-restricted because I’ve broken a reasonable rule. But more often I haven’t, and they do it anyway.
The UK has just introduced the Online Safety Bill, which addresses much of this, and I agree with the spirit of it. The Guardian reports: ‘The bill imposes a duty of care on tech platforms such as Facebook, Instagram and Twitter to prevent illegal content – which will now include material encouraging self-harm – being exposed to users. The communications regulator Ofcom will have the power to levy fines of up to 10% of a company’s revenues’.
It makes encouraging suicide illegal, prevents young people from seeing porn, and provides stronger protections against bullying and harassment, content encouraging self-harm, deep-fake pornography, and so on.
However, it also tries to force social media companies to scan private messages, which is an abhorrent breach of privacy, and a reminder that giving politicians the power to decide can carry as much risk as letting a single tech billionaire decide.
But ultimately, through gritted teeth, I remind myself that the principle remains: the more democratically decided, the better. And democratically elected politicians are one step better than non-democratically elected big tech companies.