Marx: A Complete Guide to Capitalism

Karl Marx. One of – maybe the – most influential thinkers in all of history. Has any other philosopher influenced not just ideas, but movements, actions, revolutions, the courses of entire governments, countries, and continents?

Understanding Marx is key to understanding the political and economic waters that have gotten us to where we are today, and leads to a big question: do we still live in Marx’s world?

He was a towering intellect. One contemporary said, ‘Imagine Rousseau, Voltaire, Holbach, Lessing, Heine and Hegel fused into one person . . . and you have Dr. Marx’.

Another said, ‘Marx himself was the type of man who is made up of energy, will and unshakeable conviction.’

Marx’s life was one of exile, secret societies, intense study, and poverty. Without judgement or bias, we’ll try and unpack his most important ideas, before returning to some common criticisms at the end.

Because many misunderstand Marx or at least don’t understand what he was truly saying. Many associate him with communism, about which he actually had little to say. What he really sought to understand was capitalism, commerce, markets, industrialisation and technological progress, and questions about what makes us truly human.

Marx absorbed and thought through all the trends and ideas around him. But what was newest in Marx was the conviction that it wasn’t thinkers who would change the world, but action by ordinary people.

To understand what that really means we have to go on a journey across history – from churches and fields to factories and cities. We need to understand where he was coming from.

 


 

Inverting Hegel’s Ideas

Marx was a great synthesizer of the trends, movements, and ideas around him. He was born in 1818 in Prussia, in modern-day Germany, just after the French Revolution, Napoleon, and the spread of new liberal ideas about rights and freedom. New science, industries and factories were spreading across Europe. It was a time of unprecedented, dynamic change.

Change is key to Marx. And to understand change there was no better person to turn to than Georg Hegel. He had argued that all previous moments in history were the unfolding of ideas, concepts, truth, dialectically, moving us forward.

I’m simplifying here, but for Hegel, this truth was an idea – idealism – images and words and concepts that led, slowly across history, to a greater understanding of the world and the universe, better political systems, more freedom.

The source of all of this, ultimately for Hegel, was god.

Hegel was still alive when Marx was young. But to young radical admirers, Hegel had become a dull conservative figure. He believed in progress, in rights, in freedom, but he also believed in order, in monarchy, and in religion. To a younger generation, these were oppressive forces.

A loose group of young intellectuals called the Young Hegelians emerged, who were influenced by Hegel but sought to go further than their old master. They were much more republican, liberal, and democratic.

Over time, they mostly got more radical, tending towards revolution rather than reform.

This was a century of reform and revolutions – minor and major, successful and failed, from America to Germany. The problem was many radicals in Europe didn’t really know what to replace the old aristocratic system with.

The Young Hegelians started with religion. They attempted to remove god from Hegel’s system. Two thinkers in particular – Bruno Bauer and Ludwig Feuerbach – were the most influential critics of religion of the time.

Hegel had argued that the unfolding of history was the product of God revealing himself through time. That we are all products of, the creation of, god, and slowly come to know the universe, science, the world better and so, in a way, return to god expansively.

But to the Young Hegelians, this positions ideas as kind of up there, above us, transcendent, unfolding down to us.

In other words, we imagine a god that is the creator of us, all powerful, that directs and guides us, but is also unreachable. 

Feuerbach argued that when people did this, they were projecting. God is the sum total of the imaginative powers of our species projected onto some all powerful being. Instead, we should recognise this for what it is – our imagination. Religion is ‘the dream of the spirit’ he said. It actually disempowered us by displacing all our thoughts onto some supreme being, instead of attributing them to us as a powerful species.

In his book on Marx, political theorist Alexander Callinicos writes, ‘Feuerbach argued that Hegel had turned something that is merely the property of human beings, the faculty of thought, into the ruling principle of existence. Instead of seeing human beings as part of the material world, and thought merely as the way they reflect that material world, Hegel had turned both man and nature into mere reflections of the all-powerful Absolute Idea.’

In other words, by attributing our ideas to something outside of the world, particularly as supernatural religious phenomena, we alienated something within ourselves. It means our thoughts are not ours – it falsely presents them as coming from god – in the form of commandments, origin stories, church and political authority. It holds us back.

Friedrich Engels – Marx’s lifelong friend and collaborator – wrote that Feuerbach “placed materialism on the throne again”. He reminded us that ideas are the products of real human lives.

Bruno Bauer was even more radical. He argued that by asserting that the world was the product of god’s will, we justified the world as it was. Poverty? God’s will. Kings and despots? God’s will. Religion obstructed change.

From Bauer, Marx would develop his famous idea that religion was the opium of the masses. It says: yes, life is hard, but that’s god’s will, and you’ll be rewarded in the afterlife – rather than encouraging a more progressive idea of history.

Now, here’s the important part. These young Hegelians were still Hegelians, meaning they were interested in ideas, they believed in the power of ideas. You just need the right ones, the better ones, the more truthful ones, to battle the old, repressive, wrong ones.

Another young Hegelian – the early anarchist Max Stirner – argued that bad ideas were spooks – bad thoughts that haunt the mind.

Marx criticised this approach. There are two significant early works here – On the Jewish Question, written in 1843, and The German Ideology, written in 1845–46 (though not published in full until 1932).

It’s all well and good advocating for religious freedom, property rights, liberal ideas like freedom of speech – but all of it, in the end, leaves the real physical, material lives of ordinary people untouched.

For Marx that wasn’t enough because, ‘once the holy form of human self-alienation has been unmasked, the first task of philosophy, in the service of history, is to unmask self-alienation in its unholy forms. The criticism of heaven is thus transformed into the criticism of earth, the criticism of religion into the criticism of law, and the criticism of theology into the criticism of politics.’

Many – including Hegel and Rousseau before him – thought the state could be above society, neutral, general, negotiating fairly between different interests. But like the Young Hegelian critique of religion being too up there, Marx saw the same argument applying to the state.

The French and American Revolutions had made the claims that everyone was equal – in freedoms, before the law, in speech – and that this political equality emancipated people.

As the philosopher Leszek Kolakowski puts it in his book on Marxism: ‘purely political and therefore partial emancipation is valuable and important, but it does not amount to human emancipation.’

But what does emancipation really mean if some had nothing, were starving, had no land or means or resources, were taken advantage of?

Marx wrote that a liberal revolution would liberate only as, ‘an individual withdrawn into himself, into the confines of his private interests and private caprices, and separated from the community’. Instead, a social revolution could offer “human emancipation”.

He thought that the Declaration of the Rights of Man was a ‘big step forward, but is not the final form of human emancipation.’

He continued: ‘just as the Christians are equal in heaven, but unequal on earth, so the individual members of the nation are equal in the heaven of their political world, but unequal in the earthly existence of society’.

Politics must become concrete. Marx asks: how can liberty just mean the right not to be interfered with, the right to acquire as much property as possible? What does this kind of liberty mean if you have nothing?

Marx writes: ‘None of the so-called rights of man, therefore, go beyond egoistic man, beyond man as a member of civil society, that is, an individual withdrawn into himself, into the confines of his private interests and private caprice, and separated from the community.’

In our time, all of the inaccessibility of politics, the talking in parliaments, the debates and the distractions, the dramas and sensationalist press, all pull away from the real material issues in people’s lives. This is what Marx was starting to get at.

He was beginning to, in his own words, invert Hegel – bring his ideas down to the gritty, dirty, physical, hard earth.

In The German Ideology Marx criticised his Young Hegelian contemporaries for believing that ideas can change the world. This was ideology – it distorted thinking and concealed the real issues.

Kolakowski writes: ‘“Ideology” in this sense is a false consciousness or an obfuscated mental process in which men do not understand the forces that actually guide their thinking, but imagine it to be wholly governed by logic and intellectual influences.’

Again, it ignores material, sensory, physical life.

For Marx, freedom, progress, should be understood as “power, as domination over the circumstances and conditions in which an individual lives”.

Ridiculing idealism, ideology, the idea that ideas are dominant, Marx quips that, ‘Once upon a time a valiant fellow had the idea that men were drowned in water only because they were possessed with the idea of gravity. If they were to get this notion out of their heads, say by avowing it to be a superstitious, a religious concept, they would be sublimely proof against any danger from water.’

Summing up his critique of the Young Hegelians, Marx famously wrote: ‘The philosophers have only interpreted the world in various ways; the point is to change it.’

 

Alienation

The Romantics were another influence on Marx. They had argued, just a generation or so before him, that much about modern life, industry, and politics seemed to separate us from what they saw as a kind of natural wholeness.

In fact, Marx was a romantic in his early years. Like many others now and then, he wrote bad romantic poetry as a young student. He came to the Romantics primarily through Hegel.

Hegel took the idea of unity and completeness from them. That a person should be able to develop themselves fully – three dimensionally – in relationship with the world around them, rather than feel disconnected from it.

In other words, Romanticism was about a striving towards completeness, towards being at home in the world.

The opposite of this was alienation, feeling estranged, disconnected from the world. Hegel said that individuals are in a “torn and shattered condition.”

Marx had a complicated relationship with this idea. He hated Hegel when he was young, accused him of mystification and obscurantism. But he came back to him, turned him upside down, and some say, as we’ll get to, abandoned him later on.

Either way, understanding alienation is fundamental as it was central to Marx’s development, and to many of the critiques of the modern world at the time and since.

So what is it? In his book on alienation, the philosopher Richard Schacht points to several definitions. According to one, alienation is ‘avoidable discontent’. Another is that it’s a feeling ‘which accompanies any behavior in which the person is compelled to act self-destructively.’ Another is that alienation points to the sense that ‘some relationship or connection that once existed, that is “natural,” desirable, or good, has been lost.’

But the word that comes up most in the early Marx is ‘estranged’ – hinting at something that is no longer close.

It comes up in several ways. First money is alienating because it’s a stand-in for the real social relationships that are hidden underneath. It disconnects us from them and hides them. It becomes an ‘alien medium’ instead of people being the mediators – it separates us. Money is ‘men’s estranged, alienating and self-disposing species-nature. Money is the alienated ability of mankind’.

But labour is ‘estranged’ and alienated too. What workers do all day is done for someone else, on something belonging to someone else. What they’re doing is out of their control; they are estranged from it.

Even their own bodies can become alien to them, as they’re forced to sell their own labour to stay alive. I like to think of it as a zombified state, on the factory line, doing something for no good reason except to afford to stay alive. Marx says the object the labourer produces ‘confronts’ them as ‘something alien’, something ‘independent’ which stands ‘over and against’ them.

Kolakowski writes: ‘the alienation of labour is expressed by the fact that the worker’s own labour, as well as its products, have become alien to him. Labour has become a commodity like any other’.

On top of that, the division of labour means workers don’t even work on or understand the entire product. They’re divided into small, disconnected parts. 

Marx writes: ‘Not only is the specialized work distributed among the different individuals, but the individual himself is divided up, and transformed into the automatic motor of a detail operation, thus realizing the absurd fable of Menenius Agrippa, which presents man as a mere fragment of his own body’.

Capitalism he says, converts the ‘worker into a crippled monstrosity.’

In On the Jewish Question Marx writes that while humans are supposedly equal in the political realm, in everyday life, the worker ‘degrades himself into a means, and becomes the plaything of alien powers.’

But – and here’s the key – according to Hegel we produce, project, and create the conditions of our own alienation as a species, and then, by recognising that condition, we aim and work to overcome it. In other words, progress arises out of the discontent of alienation itself. The bad becomes the good. Negativity drives us forward.

Kolakowski writes: ‘The greatness of Hegel’s dialectic of negation consisted, in Marx’s view, in the idea that humanity creates itself by a process of alienation alternating with the transcendence of that alienation.’

But, remember, Marx flips Hegel on his head. For Hegel that process was in the realm of ideas. For Marx, it’s material – it’s about the real conditions on the ground. Who is doing what, where, for who, in what ways. It’s how alienation confronts us in physical objects and processes like money, labour, in bricks and mortar. Kolakowski writes that the ‘true starting-point is man’s active contact with nature.’

And the philosopher Stefano Petrucciani says that, ‘man is not only a natural sensuous being, but that specific being which self-produces itself through historical labor, and through the dialectic of estrangement and re-appropriation that characterizes it.’

In his early writings, Marx leant heavily on the concept of alienation. Some argue he abandoned it later on as it wasn’t a rigorous enough economic concept. Some – like Louis Althusser – argue that you can divide Marx into an early stage and a later, mature one. Others, like David Harvey, disagree. Petrucciani, for example, says that while the early ideas become more ‘precise, reformulated, and filled with contents’, they’re never abandoned.

Callinicos writes that in the early work, ‘everything is built around the contrast between human nature as it is—debased, distorted, alienated—and as it should be.’

But this raises a question: how do you know what human nature should be? Surely everyone’s different? How can you get an ought – a moral claim about the world – out of an is – a description of how the world is? And isn’t that idealism? Exactly what Marx sought to critique in the Young Hegelians?

Marx’s problem with alienation can be imagined like this. You say capitalism alienates us. I say from what? You say from our natural selves. I say, like Adam Smith did, that capitalism is natural because human beings have a natural desire to trade, exchange and barter. You say it’s not natural to work in a factory all day. I say it’s not natural to farm or wear clothes.

This is sometimes called the naturalistic fallacy – the appeal to nature: we place an arbitrary dividing line between something natural and something not natural, when in fact everything comes from the earth, everything changes, everyone is different.

Marx tried to get around this problem with the concept of species-being.

For Marx, there isn’t a magical spiritual natural human essence that’s repressed by modern society. Instead, he imagines human society as a whole at any given time – everything humans are doing, arguing, being – and then contrasts individuals with that. 

Marx wrote ‘the essence of man is no abstraction inherent in each single individual. In its reality it is the ensemble of the social relations’. Our essence as a species – our species-being – is the totality of our economic systems, cultures, politics, our history.

What he’s saying is that we have an idea of our species in our head, our relationship to it, what’s possible, and so we can become estranged and alienated from that.

A natural society isn’t something cooked up by philosophers – like Plato did in The Republic – designed, laid out, engineered. Society is a process that’s happening right now, it’s always happening, it’s about the development of it and how we relate to it.

The philosopher Lloyd Easton writes: ‘Marx particularly warns against establishing “society” as an abstraction over against the individual. The individual is a social being as the subjective, experienced existence of society.’

What you need then is to find a process that moves from alienation to a world in which everyone is connected to, has some control over, is served by our species-being. That the individual and society are not estranged, but in development with each other. And it’s around this time that Marx starts calling himself a communist.

In his Economic and Philosophical Manuscripts of 1844 – a set of notes not published until 1932, and maybe the least catchy title of all time (if it was a YouTube title it would be something like ‘You won’t believe these 10 secrets about wealth and wisdom’) – Marx wrote that communism was ‘the positive transcendence of private property as human self-estrangement, and therefore as the real appropriation of the human essence by and for man; communism therefore as the complete return of man to himself as a social (i.e., human) being’.

Communism is the achievement of a “real community”. Under communism the “contradiction between the interest of the separate individual or individual family and the interest of all” will, according to Marx, be overcome.

Communism should be ‘the genuine resolution of the conflict between man and nature and between man and man – the true resolution of the strife between existence and essence, between objectification and self-confirmation, between freedom and necessity, between the individual and the species.’

Again, many argue the language of alienation and species-being was abandoned in the later works, though many of the most influential commentators disagree. Kolakowski holds ‘there is no discontinuity in Marx’s thought, and that it was from first to last inspired by basically Hegelian philosophy.’

Social-being estranged, alienated, individual development repressed, then recognition, reconciliation, return, and emancipation.

 

The Economic Turn and Dialectical Materialism

For Marx, this new focus on material conditions, social relations, and physical life demanded a new method to understand capitalism. The old philosophy wouldn’t do. He’s fascinated by stuff not adequately captured by reflecting on ideas – wood, machines, protests. He borrows from Benjamin Franklin, for example, the notion that we’re a tool-making species – that that’s what separates us from animals. Engels studies working conditions in and around the Manchester factory where he works for his father. Both are working to bring philosophy down from the heavens.

For a long time, peasants in rural France and Germany had a traditional right to collect wood and twigs from the forest for their fires. But in the 1820s, as enclosures were happening and capitalism and property rights were expanding, laws were passed that ended these ancient rights.

Remember, Hegel and Rousseau had argued that the state, the government, could be the neutral representation of the general will, of all interests.

But in these new wood theft laws, Marx saw the obvious problem with that logic. The government, in banning poor peasants from collecting wood to keep warm, was taking the side of wealthy landowners over ordinary people.

In other words, the state became the vehicle for the propertied class who held economic power above all else, against ‘the poor, politically and socially propertyless.’

Engels later wrote, ‘I heard Marx say again and again that it was precisely through concerning himself with the wood-theft law and with the situation of the Moselle peasants that he was shunted from pure politics over to economic conditions, and thus came to socialism’.

It was in wood, in tools, in material that the truth of alienation could be found. If peasants and labourers were kicked off the land, if all of the countryside was enclosed in plots to farm, if the peasants had no tools or machines or money of their own, what would happen? They’d be forced to sell their own labour.

Here was a key and classic distinction – between those that had and those that had nothing. The exclusive ownership of the tools – the means of production – was one of the keys to prosperity, to flourishing, to overcoming alienation.

This, again, was what was special about humans: we make tools – and that projection of an idea onto a material object that helps us to survive is key to our historical development.

Marx wrote, ‘The animal is immediately one with its life activity. It does not distinguish itself from it. It is its life activity. Man makes his life activity itself the object of his will and of his consciousness’.

We till, sow, fence, build, we enslave with chains, we engineer, we innovate. These are the things our lives are quite literally built around. They make up our material lives, they help us overcome our limitations, and they have a dynamic history.

Marx writes: ‘Men have history because they must produce their life, and because they must produce it moreover in a certain way: this is determined by their physical organisation; their consciousness is determined in just the same way.’

This is the basis of Marx’s materialism: that it’s our material, physical, sensory, social, productive life that matters. This may seem widely accepted today, at least in large part, but at a time when economics as a field was very new, all of this was novel.

Most would have argued it was leadership or intelligence or great thought that determined the course of history and people’s lives – Napoleon a great military strategist, Plato the great philosopher, religion as the teachings of divine scripture. Marx was arguing to the contrary – the economy mattered most.

He wrote, ‘Men, developing their material production and their material intercourse, alter, along with this their actual world, also their thinking and the products of their thinking. It is not consciousness that determines life, but life that determines consciousness’.

This led him to the nascent field of economics. His brilliance would be to combine economics with philosophy. The newspaper he had edited in Germany had been shut down; he’d moved to Paris but had been kicked out, and now he was in exile in Britain. He spent months in the British Library poring over Adam Smith and David Ricardo, recording whatever he could find, filling notebook after notebook.

He borrowed several concepts that we’ll come to, but he was immediately critical too. From his Hegelianism Marx believed everything was connected, that no man was an island.

Adam Smith – trying to understand the logic of the new commercially driven societies developing across Europe and America – started from the assumption of natural, individual self-interest. That ‘It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest’.

Marx hated this. He wrote ‘Production by an isolated individual outside society . . . is as much of an absurdity as is the development of language without human beings living together and talking to each other’.

He called them Robinsonades – that they assumed each person was a Robinson Crusoe on his own little island.

Callinicos writes, ‘Marx criticized the political economists because they tended to treat society as a collection of isolated individuals lacking any real relation to one another, so that “the limbs of the social system are dislocated”’.

Another point that immediately dissatisfied Marx was their tendency to naturalise commercial society. Smith, for example, thought humans had a natural propensity to ‘truck, barter, and exchange’, and so the market was the natural result of that.

Again, drawing on Hegel, Marx saw this as absurd. History changed across time.

He wrote – ‘Economists have a singular method of procedure. There are only two kinds of institutions for them, artificial and natural. The institutions of feudalism are artificial institutions; those of the bourgeoisie are natural institutions.’

They failed to see how there was nothing natural about them – human societies changed over time and human life was embedded in that societal context.

He wrote ‘the sensuous world around him is not a thing given direct from eternity, remaining ever the same, but the product of industry and of the state of society, and, indeed [a product] in the sense that it is a historical product, the result of the activity of a whole succession of generations, each standing on the shoulders of the preceding one’.

So this was how Marx proceeded: economics plus Hegel.

This idea of development, change, progress was the fashion of the day. In 1859, Charles Darwin published On the Origin of Species.

Marx later wrote ‘Darwin has directed attention to the history of natural technology, i.e. the formation of the organs of plants and animals, which serve as the instruments of production for sustaining their life. Does not the history of the productive organs of man in society, of organs that are the material basis of every particular organization of society, deserve equal attention?’.

The key was to understand how history unfolded as a system. It wasn’t that the lion and the deer or the worker and the capitalist were just in competition with each other, separate from each other, but that they were part of the same totality, the same system, and that system had to develop and change in a connected way dialectically.

He summed up his method like this: ‘My dialectical method is, in its foundations, not only different from the Hegelian, but exactly opposite to it. For Hegel, the process of thinking, which he even transforms into an independent subject, under the name of “the Idea,” is the creator of the real world, and the real world is only the external appearance of the idea. With me the reverse is true: the ideal is nothing but the material world reflected in the mind of man, and translated into forms of thought’.

 

The Communist Manifesto (1848)

Ok, there’s a final influence that we haven’t talked about much: the utopian socialists. These were varied movements and thinkers that emerged out of the Enlightenment ideal of progress, reason, and rights – that you could, in short, plan and design a society in which the needs of everyone could be met fairly.

The first was François-Noël Babeuf and his Conspiracy of Equals. Babeuf and his followers planned a coup during the French Revolution.

His Conspiracy of Equals planned to implement absolute equality in France, with the manifesto reading: ‘We aspire to live and die equal, the way we were born: we want real equality or death; this is what we need. And we’ll have this real equality, at whatever the cost’.

Spoiler alert, Babeuf’s conspiracy didn’t end well for him.

After the French Revolution there were Saint-Simonians and Fourierists.

Henri de Saint-Simon distrusted democracy and the ‘mob’, but was an Enlightenment figure who believed society could be organised in everyone’s interests by men of science – that the state could technocratically plan society from the top down.

Charles Fourier, on the other hand, argued for rational communes organised around universal principles of psychology, with different personality types performing different jobs. Fourier was an eccentric and influential character who thought that ideal communes would have exactly 1,620 people.

In Britain and then America, Robert Owen argued that our character was influenced by our environment, and so focused on education, reform, and cooperatives.

It was in Owen’s Cooperative Magazine in 1827 that the term socialist was likely used for the first time.

Finally, during the 1848 revolutions Louis Blanc argued for a ‘dictatorship of the proletariat’ – without which the forces of reaction – foreign, aristocratic, monarchical, etc. – would simply retake power. He wrote that the provisional government should ‘regard themselves as dictators appointed by a revolution which had become inevitable and which was under no obligation to seek the sanction of universal suffrage until after having accomplished all the good which the moment required’.

What made all of these utopian? That you could conceptualise, idealistically, a rational planned commune or society – a utopia, and build it like an engineer designing a building. This ‘utopianism’ is what Marx rejected, but he still had one foot in this tradition.

In 1836, a group of German exiles in Paris and then London formed the League of the Just. Marx joined them and they changed their name to the Communist League in 1847. Marx and Engels worked up a manifesto, published in early 1848. At almost exactly the same time, by complete coincidence, a revolution broke out in Paris, which spread across Europe. These revolutions differed from place to place and were mostly liberal, but communism was just beginning to be taken more seriously.

The Communist Manifesto began with these now famous lines: ‘A spectre is haunting Europe — the spectre of communism. All the powers of old Europe have entered into a holy alliance to exorcise this spectre: Pope and Tsar, Metternich and Guizot, French Radicals and German police-spies.’

It continued: ‘The history of all hitherto existing society is the history of class struggles. Freeman and slave, patrician and plebeian, lord and serf, guild-master and journeyman, in a word, oppressor and oppressed, stood in constant opposition to one another, carried on an uninterrupted, now hidden, now open fight, a fight that each time ended, either in a revolutionary re-constitution of society at large, or in the common ruin of the contending classes’.

The Manifesto is short, the best introduction to Marx you can find, pretty easy to read, written to be popular, and contains most of Marx’s most important early ideas, ending with the famous lines: ‘workers of the world unite, you have nothing to lose but your chains!’

In this early period Marx discovered the next big piece of his puzzle: the proletariat.

The bourgeoisie and the ruling classes could never be philosopher kings or just leaders in Plato’s or Hegel’s sense. No-one stands above and separate from the system like god looking down pulling the strings. The rulers are part of the system, they benefit from it, and so change has to come from elsewhere.

The proletariat – workers who must sell their alienated, estranged labour, who understand money as alienation, who work materially, physically, at the ground level – they are the force of change.

Petrucciani writes: ‘The proletariat is the class that lives through the most complete negation and which therefore becomes itself the subject able to deny all existing relations.’

This is why the call of the manifesto and the Communist League’s slogan was ‘Working men of all countries, unite!’ It was through this that Marx argued that material conditions produce ideas, but ideas can then influence material change.

Marx wrote ‘The weapon of criticism cannot, of course, replace criticism by weapons, material force must be overthrown by material force; but theory also becomes a material force as soon as it has gripped the masses.’

For Marx this wasn’t a moral argument. It was a historical, economic, dialectical one – a scientific one, a matter of forces. One class was getting richer, the other immiserated. Reactionary, aristocratic, monarchical, despotic governments were holding on to power across Europe and using increasingly suppressive tactics. The continent was a pressure cooker. Marx believed that real revolution would come.

He wrote, ‘revolution is possible only in the periods when both these factors, the modern productive forces and the bourgeois forms of production, come in collision with each other. . . . A new revolution is possible only in consequence of a new crisis. It is, however, just as certain as this crisis’

The Manifesto brought all of these early themes together. However, while thousands of copies were initially printed, it fell into obscurity for over twenty years before becoming more influential in the 1870s. And in those twenty years, crises and slumps came, and Marx kept thinking revolution would happen, but capitalism, railways, factories, steamships, and capitalist colonialism kept on spreading.

Trade unions, previously banned in many countries, organised for the first time, and socialists and anarchists formed the International Working Men’s Association in 1864 – the First International.

As he waited for revolution, Marx settled down to write his magnum opus – an analysis of the entire system.

 

Capital

At this point, Marx is juggling quite a few of the modern ideas around him. He knows he wants to ground his work materially, but he needs a concrete place to start.

Because for him, capitalism is dynamic, dialectical, in motion. He knows it’s transformative – all that is solid, he writes, melts into air.

For this reason, it’s important to remember that Capital: A Critique of Political Economy – Das Kapital, or simply Capital – published in 1867, is not meant as a universal truth, but as a snapshot of the European capitalism of the period and the laws that Marx thinks emerge from it.

It’s a hugely ambitious, diverse book, full of references to literature, economic and philosophical ideas, the politics and culture of the period. Furthermore, it’s the first of three volumes, the second and third left in notes at the time of Marx’s death and compiled by Engels. And on top of that, there were meant to be six volumes, looking at land, the state, and the world market.

So it’s impossible to do it justice. Even most of Capital’s detractors don’t deny it’s a masterpiece. Agree with its conclusions or not, reading it and understanding it is indispensable for understanding the world we live in.

The themes are varied, but the most important are these: the question of what we value and why, and what gives things their monetary value; labour and work – what motivates it, what’s at the root of it; capital and wealth – how they function and circulate; and the forces, movements, and contradictions that arise from the relationships between all of these.

The simplest way to think of what Marx is saying is this: that capital is an impersonal force – like gravity or meteorology or mathematics – with a life of its own.

Which is why Marx believed what he was doing was science. It wasn’t speculative in the sense of philosophers thinking up ideas in dusty studies. Capital is full of references to statistics, factory routines, rich and dense descriptions of how craftsmen use different instruments, pamphlets and parliamentary debates. In this sense, it’s a very modern history – drawing on lots of evidence – of 19th century capitalism.

Marx is a man of the Enlightenment. Maybe one of the last great Enlightenment ‘system builders’, inspired by people like Newton – the idea that there are scientific forces, laws of motion, at play both in the natural world and in human societies.

The key for Marx was to search around, peel away, zoom in, interrogate – like astronomers and scientists do – to find the kernel, the secret hidden truth at the core of history.

 

Use and exchange/Commodity

So where does Marx begin? With something that’s all around us, that’s at the core of capitalism and all of our lives, that we cannot do without and may contain some secrets – the commodity.

What is it? It’s not obvious from the surface. They’re all so different. There’s almost nothing that unites them – a bus ticket is so different from an iPhone, a movie on DVD from a carefully crafted table. But Marx wants to find a concept that unites them all.

He realised that first, despite all of their differences – one being food, the other being a toy – they all have a use to someone.

All commodities are useful to someone – they have a ‘use value’.

But they also have a price, an ‘exchange value’.

What Marx finds immediately interesting is that neither of these are in the commodity. They can’t be found anywhere by simply examining it, taking it apart. They’re not inherent in it. So these values must come from elsewhere. Where?

He writes: ‘We may twist and turn a single commodity as we wish; it remains impossible to grasp it as a thing possessing value.’

Ok, everything has a price tag on it. So that’s where the price comes from? But where does the price itself come from? Maybe just from that use value – how useful we find each commodity.

But everyone finds different things useful. Diamond rings aren’t that useful but are expensive. Water is very useful but is cheap. I might hate Picasso and not find his art useful, but I wouldn’t turn down someone giving me a free Picasso painting. Because I know it’s worth something else.

So what is the mysterious exchange value based on? Marx points out that if I offer my three apples for your three onions there must be some metric, some common idea, that we’re basing our appraisal of what each thing is worth on. Why is it that all commodities are comparable, if they seem to have nothing else in common? We need a kind of ruler, a measuring tape, to understand them.

The simple answer is that the price tag, or the exchange value, is the cost of producing the item. A phone costs more than an apple because it’s harder to make; it takes more infrastructure, more machines, more attention, more supply chains. If I sell you a cake I add up the cost of all of the ingredients.

But we get into an infinite regress. What determines the cost of the flour? Or the cost of the machines at the phone factory?

Marx says that at the base, what all of the commodities have in common is that they’re ‘products of labour.’ Commodities are ‘congealed quantities of homogeneous human labour’.

Commodities have values ‘only in so far as they are all expressions of an identical social substance, human labour’.

What he means is, yes, my cake’s price is based on the cost of flour, sugar, bowls, etc. But at every stage in their production, human labour produced each part. Even the factory walls that house the phone-making machines were built by someone. The sugar cane had to be farmed. And so exchange value is the totality of all of that put together.
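
To make the regress concrete, here’s a minimal sketch – invented items and hours, not figures from Marx – of how the chain of costs bottoms out in labour time once you walk back through the inputs:

```python
# A toy model of "congealed" labour: each item takes some direct labour,
# plus the labour already embodied in its inputs. All figures are invented.

inputs = {
    "cake":  {"direct_hours": 1, "parts": ["flour", "sugar"]},
    "flour": {"direct_hours": 2, "parts": ["wheat"]},
    "sugar": {"direct_hours": 3, "parts": []},
    "wheat": {"direct_hours": 4, "parts": []},
}

def embodied_labour(item: str) -> int:
    """Total labour-hours in an item: its own, plus those of all its inputs."""
    spec = inputs[item]
    return spec["direct_hours"] + sum(embodied_labour(p) for p in spec["parts"])

print(embodied_labour("cake"))  # 1 + (2 + 4) + 3 = 10 hours in total
```

The regress ends because, at every branch, there is eventually nothing left but someone’s work.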

What Marx has immediately reached is the social value hidden behind the price tag. He says the object has a ‘phantom-like objectivity.’

In his great guide to Das Kapital, David Harvey writes, ‘Value is a social relation, and you cannot actually see, touch or feel social relations directly; yet they have an objective presence’.

We can bring this back to Marx’s idea about species-being. The value is something social, not individual – otherwise how would you ever get to a ‘fair’ price, a correct price, something against which to judge an offer? When you reject the price of something you often say something like ‘I could have made that for less’.

It’s a kind of hidden pattern, that connects me to the rest of society.

Like Hegel before him, it isn’t the thing that has an essential truth within, but it’s the relationships between things that matter.

 

Labour Theory of Value

Why does this matter? Value is a difficult idea to grasp, but it’s at the heart of almost everything. What we value is what we want more of, what we’re less likely to give away. Does how we value food differ from how we value friendship or democracy? Does value differ across different political and economic systems? If we can get to the bottom of how and why we value things we can use that as a basis for good arguments, philosophy, economics.

Marx came across the labour theory of value when reading the classical economists. The Scottish economist Adam Smith had used the idea to describe how wealth came from production and industry rather than land, though he thought it came from capitalist investment and rent too. Then the British economist David Ricardo went further, arguing that all value comes directly from the amount of labour time needed to produce a good. Ricardo, though interested in making sure land and industry were productive rather than wasted, didn’t take the next logical step of asking why, if labour creates value, capitalists got rich while labourers stayed poor. That was left to Marx.

Ok, so for Marx the more effort, the longer and the more difficult it is to produce something, the more people it takes, the more work and labour it takes, the more value it has.

But there’s a problem here. I might be very slow at making something, and bad at it, while a competitor might be better, quicker, and find it easier. Despite spending less time, they will likely sell theirs at a higher price. So despite my labour time being higher, the outcome of my shoddy work is worth less. Surely this contradicts the labour theory of value?

For Marx, remember, value is a social phenomenon.

Things might have different use values – I might find something useful that you find useless – but when we’re thinking about what it’s worth to society, to everyone, on average, what price we can get for it, what’s going on behind the scenes of that calculation?

Value, he says, is ‘socially necessary labour-time’.

Which is, Marx writes, ‘the labour-time required to produce any use-value under the conditions of production normal for a given society and with the average degree of skill and intensity of labour prevalent in that society’.

Petrucciani puts it like this: ‘Why socially necessary? Because, empirically, it can happen that a slow or incapable producer takes more time than a skilled and quick one to make the same object, say a chair. It would make no sense to say that an inefficiently produced chair is worth more, and thus Marx makes value equal to the average labor time which is needed to produce a given good’.

When we come together to judge value socially, we’re not interested in how long it took the individual manufacturer, say. Like walking along a line of market stalls selling the same products, we’re comparing them in the aggregate.

Marx writes: ‘The sum total of the labor of all these private individuals forms the aggregate labor of society. Since the producers do not come into social contact until they exchange the products of their labor, the specific social characteristics of their private labors appear only within this exchange.’

Because value is socially necessary labour time, producers are forced into a single system. Each has to compare with one another, compete in the market, keep up with the latest innovations. If I take too long to make an inferior product it’s not going to sell; I’ll be undercut by the person on the market stall next to me.
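
As a rough illustration – my numbers, not Marx’s – if value reflects the socially average labour time rather than any individual’s, a slow producer’s extra hours add nothing to what their chair is worth:

```python
# Value as socially necessary labour time: a plain average across producers
# (strictly it would be weighted by output, but the point is the same).
# Hours are invented for illustration.

hours_per_chair = {"quick maker": 4, "average maker": 6, "slow maker": 11}

social_average = sum(hours_per_chair.values()) / len(hours_per_chair)
print(f"a chair is worth about {social_average:.0f} hours of labour")  # 7

# The slow maker puts in 11 hours but realises only ~7 hours' worth of value;
# the quick maker puts in 4 and realises the same ~7 - hence the pressure
# to keep up with the socially average technique.
```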

This is precisely Marx’s method, that dialectical reversal, the turning of Hegel on his head. He’s gone and looked at the world materially, at the work, products, physical goods and factories, and from that empirical study taken lots of diverse heterogeneous exchanges and identified one single homogenous abstract concept: the labour theory of value.

He writes ‘concrete labour becomes the form of manifestation of its opposite, abstract human labour.’

And remember, labour itself has a value. If I employ ten workers I need to pay them enough to feed them, shelter them, and make sure they have enough energy to work – and that cost is passed into the price of the product. Their labour goes directly into the product.

So the value of labour is the cost of maintaining it. If all of their food, travel to work, and rent cost £100 and they produce 100 mugs, then, all other things being equal, the mugs are worth £1 each.

Marx writes ‘if the workers could live on air, it would not be possible to buy them at any price. […] The constant tendency of capital is to force the cost of labour back towards this absolute zero.’

It’s only through the fact that labourers need things, that their ‘labour-power’ costs a certain amount, that the value of it is passed through into a commodity.
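
Putting the mug example into code – a sketch under the same simplifying assumptions, namely that the wage just covers subsistence and everything else is held equal:

```python
# The value of labour-power as the cost of maintaining the labourer,
# passed through into the commodity. Figures follow the mug example above.

daily_subsistence = 100.0   # £ for food, rent, and getting to work
mugs_per_day = 100          # output sustained by that subsistence

labour_value_per_mug = daily_subsistence / mugs_per_day
print(f"each mug carries £{labour_value_per_mug:.2f} of labour value")  # £1.00
```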

 

Money: Commodity Fetishism

Once there’s a universal measure that unites all commodities – the amount of labour embodied within them – then that unit, that appraisal, that number – can be represented or symbolised by something else – money.

Just like in Hegel, as one shape develops into another, the idea that an object can have an exchange value based in how much labour went into it, can develop logically into the idea of money to represent that value. Importantly, what we have here is movement, dynamism, development.

Marx writes: ‘the money-form is merely the reflection thrown upon a single commodity by the relations between all other commodities’.

But money does something else too. It measures value, but it also provides a kind of lubricant that enables exchanges to happen more easily.

Money is both a measure of value and a ‘means of circulation’.

However, just as there’s a contradiction between use value and exchange value – between how useful an object is to you and how much it’s worth on the market – there’s also a contradiction within money itself.

Money being a measure of value is different from money being a means of circulation. Because money can be saved up, hoarded, hidden and stashed. I could hoard it all, and there would be no ‘means of circulation’ left.

Harvey writes: ‘what happens to the circulation of commodities in general if everybody suddenly decides to hold on to money? The buying of commodities would cease and circulation would stop, resulting in a generalized crisis.’

Sure, you can hoard grain or stockpile some other commodity, but money is different. It’s more efficient, you can do more with it, everyone wants it, and it doesn’t spoil (as much).

It’s here for Marx that capitalism really starts to take off. People want money not just to pay for the necessities of life, but for its own sake.

Modern society, Marx writes, ‘greets gold as its Holy Grail, as the glittering incarnation of its innermost principle of life.’

Here we have accumulation – the root of some being able to lend money to others, to command interest, to get richer. We have what Marx calls primitive accumulation – the building of capital itself: large amounts of disposable money.

Petrucciani says: ‘The same attitude which appeared manic in the hoarder becomes iron clad rationality in the capitalist. The capitalist incarnates an insatiable desire for gain.’

Marx uses a formula. It used to be that a commodity would be exchanged for money to buy another commodity. Sell an apple to buy a chair. C (for commodity) – M (for money) – into C (for a new commodity).

But under capitalism that starts to reverse. Money can be used to buy commodities to sell for a profit, for more money. Instead of C-M-C we have M-C-M. Instead of a new commodity being the goal – selling the apple to get yourself a chair – money itself becomes the goal.

But Marx points out that if you’re swapping an apple for a chair they can both be worth the same and you get what you want out of the deal. He says, ‘Where equality exists there is no gain.’ But if you’re using money to buy commodities to make a profit, where does the extra money come from?

Why would you do it if some gain wasn’t going to come from it? If C-M-C – a cup sold for money to buy food – is zero-sum, each side worth the same, why is M-C-M positive-sum? Why is the last M more than the first M?
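
The puzzle can be written out as a sum – invented figures, and granting the assumption that every exchange trades equal values:

```python
# C-M-C: sell one commodity to buy another of equal value.
apple_value = 5                    # £ of value in the apple I sell
money = apple_value                # C -> M: sold at its value
chair_value = money                # M -> C: bought at its value
assert chair_value == apple_value  # no gain in value, just a new use-value

# M-C-M: buy in order to sell. If each step also trades equal values,
# the circuit can never grow - yet Marx writes it as M-C-M', where the
# final M' exceeds the starting M. The surplus has to enter somewhere.
money_start = 100
commodity_value = money_start      # M -> C at its value
money_end = commodity_value        # C -> M at its value
assert money_end == money_start    # equal exchange alone yields no profit
```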

Marx says that under capitalism this appears as a mystery.

He writes, ‘Capital is money, capital is commodities. By virtue of being value, it has acquired the occult ability to add value to itself. It brings forth living offspring, or at least lays golden eggs.’

Now, remember, all value must come from labour – people putting work into things. But when we start really using money, value becomes mysterious, as if taken over by money, as if money has magical powers and is the source of value itself, rather than being meaningless pieces of paper or chunks of gold.

Marx calls this commodity fetishism. He says there’s a ‘magic of money’ that conceals what’s going on underneath, that conceals the human work. Marx calls it a ‘riddle’ to be solved. With Marx there’s always something going on under the surface of things. He’s always moving from particular stuff to broader, universal social phenomena. Commodities, he says, are ‘sensuous things which are at the same time supra-sensible or social.’

He writes, ‘The mysterious character of the commodity-form consists . . . simply in the fact that the commodity reflects the social characteristics of men’s own labour as objective characteristics of the products of labour themselves, as the socio-natural properties of these things’.

Under capitalism, you can buy a fancy new shirt, but the objective conditions of production can be very hidden – sweatshops, unethical business practices, the devastation of the environment are all happening elsewhere, under the surface. But money can hide it. And commodities can appear on the shelves as if by magic.

Petrucciani: ‘Fetishism would be that attitude according to which commodities are endowed with value as if it belonged to them by nature, rather than because of the specific modality of their production.’

Commodities and money are hieroglyphs to be decoded and understood. They are curtains to be drawn back. There’s always something real, something understandable behind them. But we too easily forget this.

Marx writes: ‘It is however precisely this finished form of the world of commodities – the money form – which conceals the social character of private labour and the social relations between the individual workers, by making those relations appear as relations between material objects, instead of revealing them plainly.’

So what’s behind the curtain? How does the capitalist lay a golden egg? Why does the last M of the M-C-M magically contain more money than the first M?

 

Surplus Value

To uncover the secret, to peel back the layers, to dispel the illusion of commodity fetishism, we have to go somewhere philosophy doesn’t ordinarily tread. Marx says we have to enter ‘the hidden abode of production, on whose threshold there hangs the notice “No admittance except on business”’. We have to go behind the factory doors.

It’s only here that the riddle of profit can be solved, how value can be miraculously created from nowhere, emerging like a golden egg. After all, value can only be made from people doing work.

If someone has a hoard, a stash, a windfall of money, what can they do with it to increase it? How can they increase that last M in M-C-M?

The capitalist searches around for a commodity that can expand in value and they find it, most obviously, in people themselves.

If I’m putting together a new product, out of wood, nails, wires – whatever – I need labour to help do it too. In this sense, labour-power is a commodity like any other. I can go to the market and buy my wood and I can go to the market, under capitalism, and hire labour.

Marx writes: ‘The possessor of money does find such a special commodity on the market: the capacity for labour . . . in other words labour-power.’

To buy labour-power, the labourer must be free to sell it. They must be freed from servitude as peasants or slaves. They must then have nothing and need something, need a means of subsistence.

On the one hand, there are those who have access to estates with vegetable patches, fields, and forests with wood; on the other, after feudalism and slavery are abolished, there are those forced from the land, prohibited from collecting wood or using common land to grow food.

Marx writes: ‘Nature does not produce on the one hand owners of money or commodities, and on the other hand men possessing nothing but their own labour-power. This relation has no basis in natural history, nor does it have a social basis common to all periods of human history. It is clearly the result of a past historical development.’

Capitalism isn’t natural, but historical. 

Now, here is the core of Marx’s argument: that labour-power is a commodity like any other. The labourer, wandering, looking for work, needs a certain level of sustenance – food, shelter, welfare – that is itself provided by other labourers.

So the value or cost of labour-power is the value or cost of producing all of the energy that sustains that labour-power in the labourer. This means labour-power is comparable to any other commodity. It has a value, and that value is determined by the labour theory of value – by how much labour (food production, building shelter, collecting water) goes into energising the worker doing the work.

So the capitalist has capital, and they can spend that money on raw materials, supplies, and they can buy labour-power. They can combine it all together.

And Marx assumes that all of this is purchased at the correct price – that the value of everything is determined by how much labour went into making it. If it cost me $5 to get the energy, sleep, and shelter needed to hammer nails for one hour, that’s how much the labour-power is worth – $5 per hour.

That labour-power is combined with the wood and nails and then the capitalist sells. But again, he sells for a profit. He sells for more than the combined value of the labour-power and materials. The second M in M-C-M must be greater. Where is this extra, this surplus, coming from?

If the labour theory of value is right it can only come from labour.

So here’s the key.

Marx argues there is a gap between what it costs to sustain the labour over, say, a day, and what the capitalist gets out of the labourer in labour-power over the course of that day – and this is where surplus value comes from.

So in the first part of the day, as an example, the labourer is working in return for wages that cover the cost of sustaining them – what Marx calls reproducing the labour – and in the second part of the day, the labourer is covering the cost of sustaining the capitalist’s needs – their food and shelter. It is here, Marx argues, that the capitalist can suck more value out of the worker than they’re being compensated for – that they can extract surplus value and make a profit.

He writes ‘Wherever a part of society possesses the monopoly of the means of production, the worker, free or unfree, must add to the labor time necessary for his own maintenance an extra quantity of labor time in order to produce the means of subsistence for the owner of the means of production’.
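
Here’s the arithmetic as a sketch, with invented numbers – a ten-hour day, a wage that covers subsistence, and a fixed amount of new value added per hour:

```python
# Surplus value as the gap between what labour-power costs and what it adds.
# All figures are hypothetical.

working_day_hours = 10
daily_wage = 50.0            # £: the value of labour-power (subsistence)
value_added_per_hour = 10.0  # £ of new value the worker creates each hour

necessary_hours = daily_wage / value_added_per_hour  # 5 hours reproduce the wage
surplus_hours = working_day_hours - necessary_hours  # 5 hours beyond that

surplus_value = surplus_hours * value_added_per_hour
print(f"necessary labour: {necessary_hours:.0f}h, surplus labour: {surplus_hours:.0f}h")
print(f"surplus value: £{surplus_value:.0f}")  # £50 accrues to the capitalist
print(f"rate of surplus value: {surplus_value / daily_wage:.0%}")  # 100%
```

Lengthen the day to twelve hours and, with the wage unchanged, the surplus rises to £70 – which is why the length of the working day mattered so much.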

He argues that humans are just capital like everything else – material, muscle, machinery that can be bought, sold, or hired, and their energy put to use. But humans are a special type of capital: variable capital.

Humans are fleshy, muscly, malleable, mouldable and innovative things that are highly variable in the ways they can perform. So the capitalist can push a human to work harder, faster, differently, so as to squeeze more energy out of them.

Unlike nails or buildings, humans have more variability.

Machines, on the other hand – spinning wheels, hammers, lathes, factory equipment, buildings, metals, and raw materials – are ‘constant capital’, in contrast to ‘variable capital’. They move, spin, weave, hammer, and screw at a pretty constant, steady, inflexible rate.

So their value is the amount it costs to produce them and they can pass that value into the end product. The nails contribute to the total value of the table. But they cannot magically create value out of thin air – the value they transmit is constant.

New value must come from somewhere else. And it’s human labour that’s variable. It can change in speed, efficacy, length, strength, and dexterity.

Machines don’t vary, or go on strike, or get sick. They can’t be shouted at or disciplined or threatened. They are predictable. But if you can get more work out of a worker, or workers, then you can get more value into the final product. You can extract more surplus value.
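
Marx's shorthand for this bookkeeping in Capital is c + v + s: constant capital transferred, variable capital (the wage), and surplus value newly created. A minimal sketch, again with invented numbers:

```python
# Toy value breakdown of a single table, in Marx's c + v + s notation.
# All figures invented for illustration.
c = 6.0   # constant capital: wood, nails, wear on tools (value transferred unchanged)
v = 3.0   # variable capital: the wage, i.e. the value of the labourer's subsistence
s = 3.0   # surplus value: new value the labourer adds beyond their wage

table_value = c + v + s
print(f"Value of one table: ${table_value}")
print(f"  transferred by constant capital: ${c}")
print(f"  newly created by labour: ${v + s} (of which ${s} is surplus)")
```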

Labour – variable capital – can be organised in different ways. Workers can be made more productive by dividing them up and getting them to perform smaller, more repetitive tasks, or by working out other ways to improve efficiency. Their lunch breaks can be shortened, or you could even provide meals if you think it will give them more energy. The point here is that it’s variable.

Harvey writes, ‘Surplus-value arises because workers labor beyond the number of hours it takes to reproduce the value equivalent of their labor-power. How many extra hours do they work? That depends on the length of the working day.’

As we’ll come back to, Marx spends many pages in Capital describing the English working class’s struggles to shorten the length of the working day. In the nineteenth century, the capitalist class in factories here in the Midlands of England did everything they could to lengthen it: to employ women and children in dirty, unsafe factories, to cut costs, and to get the most out of labour that they possibly could. Marx calls these the ‘small thefts’ of the workers’ time, the ‘petty pilfering of minutes’, the ‘snatching of minutes’.

If one capitalist gets more end product – more tables say – from their workers in one day for the same amount of wages, they can either sell them cheaper than their competitors or for the same amount and keep more profit.

But the logic of capitalism – of competition – is such that if you don’t do it, your competitor will. This is fundamental to Marx.

He is not so dogmatic as to argue that this happens all of the time, everywhere, or that capitalists are purposefully cruel and evil – only that there is a logic, a motivation, a force, that compels capital to operate in this way, or else someone else will do it elsewhere and make a cheaper product.

He writes, ‘The influence of individual capitals on one another has the effect precisely that they must conduct themselves as capital’. In other words, capital has a logic of its own, independent of individual capitalists.

There is downward pressure on wages and pressure to increase productivity not to get rich but just to keep up. Marx writes ‘the minimum wage is the centre towards which the current rates of wages gravitate.’

There might be some cultural expectations about the minimum wage, about safe working conditions, there might be regulations and oversight and nosy journalists here and there that push wages up slightly, but ultimately, there is a force putting downward pressure on wages.

If a capitalist pays their labourers more than all of their competitors do, out of the kindness of their heart, then the end product costs more and they go out of business. If they shorten the working day while their competitors lengthen it, become more productive, and so make a cheaper product, they go out of business. This is why ‘capital’ becomes an inhuman force: it has a magical effect on all those under its spell, forcing them into the logic of capitalist production.

Capital is full of literary references – Shakespeare and Romantic influences pop up everywhere. 

Marx writes things like, capital has a “voracious appetite,” a “werewolf-like hunger for surplus labor”.

And ‘the vampire will not let go while there remains a single muscle, sinew or drop of blood to be exploited’.

And that ‘Capital is dead labor which, vampire-like, lives only by sucking living labor, and lives the more, the more labor it sucks’.

He’s calling machines ‘dead labor’ because their value comes from the living labour that was transferred into them. Das Kapital is a book of flows, of energy transfer, of how value moves dynamically, dialectically, through the world: how the workers’ ‘labour-power’ is alienated – taken from them – and how, as we’ll see, that flow of energy and value keeps moving inexorably from the worker to the capitalist class.

 

Forces, Relations, Bases, and Superstructures 

Ok, we’ll return to that relationship between labour and capital – because that is the crux of it, that is the Hegelian contradiction, one pulls on the other creating discord – but we need to supplement it with a few more basic concepts.

Remember, Marx is trying to be scientific. He looks around, observes what happens repeatedly, then builds this from the ground up into broader concepts – looking specifically at nineteenth-century capitalism.

Two main concepts he identifies are the forces of production and the relations of production.

The forces of production are the material, the buildings, tools, technology, instruments and factories of any given society.

And the relations of production are the social relationships that underpin the division of ownership and division of labour in any given society. The classes, the relationships between them, who owns and who doesn’t, how a society is organised.

Combined, these make up a mode of production.

In The German Ideology, Marx writes, ‘a certain mode of production, or industrial stage, is always combined with a certain mode of co-operation, or social stage, and this mode of co-operation is itself a “productive force”.’

Importantly, Marx points out how these modes of production have changed across history.

There was primitive communism, where tribes and early societies held resources broadly in common. There was slavery, where one class is held in bondage to labour and another is free to trade them. There was a feudal mode, where peasants are tied to the land and produce their own means of subsistence but are obliged to provide for their lord in return for hypothetical protection. Then there’s bourgeois capitalism.

Each, he writes, ‘is replaced by a new one corresponding to the more developed productive forces and, hence, to the advanced mode of the self-activity of individuals.’

As contradictions appear one mode is replaced by another in a Hegelian way.

This is why Marx is a materialist, not an idealist. When he looks at the development of societies through history, it’s not the ideas of individuals that matter most, but the type of economic system – the mode of production – that has the biggest influence on how they, and we, live our lives. And classes – peasants, lords, slaves, proletariat, kings, bourgeoisie – are at the root of this.

Callinicos writes: ‘Classes arise when the “direct producers” have been separated from the means of production, which have become the monopoly of a minority.’

But what about ideas? They’re everywhere, surely they have their place? Marx calls all of this the economic base, but argues that there is a superstructure over the top. So the base is the economic relations and forces of production – slaves, tools, farming, computers, serfs – the forces and the class divisions. And the superstructure arises from that in the form of norms, political assumptions, laws, even culture and art, and so on.

Marx writes: ‘The sum total of these relations of production constitutes the economic structure of society, the real foundation, on which rises a legal and political superstructure and to which correspond definite forms of social consciousness. The mode of production of material life conditions the social, political and intellectual life process in general. It is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness.’

The superstructure aims to justify the economic structure. Wages being kept low? We have to be productive or else China will beat us! Capitalism is harmful? Read Ayn Rand! I had no choice but to shoplift the baby food. I understand that, but property is property – have you not read your Locke, young lady?

Marx writes: ‘The ideas of the ruling class are in every epoch the ruling ideas: i.e., the class which is the ruling material force of society is at the same time its ruling intellectual force. The class which has the means of material production at its disposal, consequently also controls the means of mental production, so that the ideas of those who lack the means of mental production are on the whole subject to it. The ruling ideas are nothing more than the ideal expression of the dominant material relations, the dominant material relations grasped as ideas.’

And one of the biggest ideological superstructural myths, Marx says, comes from the bourgeoisie having the power to tell tales about their thrift, ingenuity, and creative genius – producing the idea that value comes from their endless revolutionising of technology.

 

Technology and Productivity

Like money, like commodities, we often see machines as magic, we fetishise them, think they can do things, create things, produce things out of thin air. We forget that they conceal social processes and relationships, physical lives underneath.

One compelling advantage of Marx’s theory of history is that it explains technological development. It explains why the industrial revolution seemed to take off at the same time as capitalism. Other theories – that innovations are the work of genius individuals – struggle to explain the wider historical trends of technological progress. For Marx, by contrast, technological development is fundamental.

We’ve seen that one way for the capitalist to extract surplus value is to lengthen the working day, to improve the efficiency of labour by dividing workers up to perform smaller tasks, or to increase the intensity of work through discipline. In short: finding ways of making labour more productive, getting more out of workers in the same amount of time. But there’s another way of increasing productivity: technology.

All of the spinning wheels and water frames and engines of the industrial revolution were making labour more efficient. You could make more jumpers in the same amount of time, employing fewer workers.

Now, importantly, the machines still need labour. They’re all built by people; they need attending, need loading, need maintenance, need correcting if something goes wrong. But they’re all what Marx calls ‘labour-saving devices.’ They make work more productive, and so more surplus value can be extracted from the same amount of work.

Marx writes ‘machinery is intended to cheapen commodities and, by shortening the part of the working day in which the worker works for himself, to lengthen the other part, the part he gives the capitalist for nothing. The machine is a means for producing surplus value’.

Through technological innovation, we get more or better end product out of less labour-power. Less labour-power means lower wages have to be paid. Competitors can then be undercut, and more profit can flow to the innovative capitalist relative to the others in that particular industry.

Marx says, ‘The individual value of these articles is now below their social value; in other words, they have cost less labour time than the great bulk of the same article produced under the average social conditions’.

Now though, something interesting happens. The competitors either have to copy, keep up, innovate themselves, or go out of business.

When the competitors bring in the new machinery, the first capitalist can no longer undercut them, and they compete for the best price again, bringing the profits back down to where they were originally. So the first mover capitalist has the advantage when they innovate, but this doesn’t last long, and so the search for a new innovation, new technology continues.

Marx writes: ‘This extra surplus-value vanishes as soon as the new method of production is generalized, for then the difference between the individual value of the cheapened commodity and its social value vanishes.’
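
A minimal sketch of that mechanism, with invented output and value figures:

```python
# Toy sketch of extra surplus-value from innovation; all numbers invented.
daily_labour_value = 80.0   # value (in $) added by one day's labour

output_old = 8    # tables per day with the old method
output_new = 16   # tables per day after new machinery, same labour

social_value = daily_labour_value / output_old       # $10: industry-average value per table
individual_value = daily_labour_value / output_new   # $5: the innovator's own value per table

# While competitors still use the old method, the innovator sells at the
# social value and pockets the gap on every table:
print(f"Extra surplus-value per table: ${social_value - individual_value}")   # $5.0

# Once the new method generalises, the social value falls to match the new
# individual value, and the extra surplus-value vanishes:
new_social_value = daily_labour_value / output_new
print(f"After generalisation: ${new_social_value - individual_value}")        # $0.0
```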

We can see the dialectical influence at play here. The particular actions of one lead to a generalised, universal process that draws all in, which in turn returns to affect the particular individual, which returns to the generalised universal, and so on.

Marx then says: ‘capital therefore has an immanent drive, and a constant tendency, towards increasing the productivity of labour, in order to cheapen commodities, and, by cheapening commodities, to cheapen the worker himself’.

Again, this shows how capital becomes an inhuman, alien, vampire-like force. It compels people to act in a certain way, to search out labour-saving methods, to improve technology, to innovate, to compete, to try to underpay. And it compels others to follow or copy and keep up or go out of business. If you don’t search for productivity, for efficiency, your competitor will. Capitalism becomes a race against the clock.

It’s all about incentives within the total system, which is why Marx believed what he was doing was science in the same way Newton studied gravity – laws of attraction, forces that act on people pushing them to act in certain ways.

Even after the machine is paid for, there is an incentive to use it as much as possible before it wears out, rusts away, or gets replaced by better machines. Imagine the complexity and ingenuity of getting a water frame running or a steam engine working properly in a factory. Once it was working, the reflexive impulse to find workers to man it 24/7, to get as much from the machine as possible, must have been huge.

Marx writes, ‘competition subordinates every individual capitalist to the immanent laws of capitalist production, as external and coercive laws. It compels him to keep extending his capital, so as to preserve it, and he can only extend it by means of progressive accumulation’.

But notice a new stage of development. As they compete to keep up, we get larger, more technologically advanced companies. As technology improves, any industry requires more capital, more initial ‘outlay’, even to get started. The barriers to entry get higher. Some can’t keep up, can’t copy machines, can’t innovate, and either go bust or get bought out and incorporated into the more successful, bigger business. And importantly, fewer workers are needed to produce the same amount of product as they are replaced by machines.

Callinicos puts it like this: ‘Concentration takes place when capitals grow in size through the accumulation of surplus value. Centralization, on the other hand, involves the absorption of smaller by bigger capitals. The process of competition itself encourages this trend, because the more efficient firms are able to undercut their rivals and then to take them over. But economic recessions speed up the process by enabling the surviving capitals to buy up the means of production cheap.’

I think Marx answers a fundamental question about modernity here. Why does technology – which should save us all time – not make our lives easier? Better? Why are we not all fishing and playing guitar while machines do our bidding?

Because machines, owned by a few, extract productivity from the rest. And the motivation to increase productivity is the desire to sell more and sell cheaper. So while capitalism makes some things cheaper, workers – and that’s a lot of us – are also commodities, subject to the same forces: the same pressures on wages, on hours, on improving productivity. It’s a vicious circle.

Marx writes simply: ‘the machine is a means for producing surplus-value’.

He compares the old way of handicrafts, pointing to how the worker – like a woodworker – would ‘make use of a tool’, while in the factory ‘the machine makes use of him.’ Machines ‘dominate and soak up living labour-power’.

Technology is a double-edged sword. It can improve our lives, but it spurs competition, leads to concentration, raises barriers to entry, makes it harder for start-ups to compete, and can put more and more people out of work. Where capitalism starts in small-scale artisan workshops, it ends in highly advanced, labour-trampling, surplus-value-extracting global technological conglomerates.

Capitalism preys on whatever it can find, sucking surplus into bigger piles, larger factories, seeking out new markets, anything that can be commodified. In short, all that is solid melts into air.

 

Class Struggle and Revolution

We too often think of history as causal or linear – that one thing causes the next like a row of dominoes. Dialectical thinking takes a different approach. Instead of a linear axis – calculator, microchip, computer, smartphone, say, or slave, peasant, proletariat, bourgeoisie – we have a dialectical one, where at any given moment in time there’s a mutual relationship between elements, and where incongruence, incompatibility, or friction between them forces transformation – what Hegel called sublation.

What we’ve seen is how the interests of the proletariat and the bourgeoisie are at odds – they contradict one another. One wants higher wages, the other wants lower; one wants to get home, the other wants higher productivity.

In the Grundrisse – an unpublished manuscript and notes of his economic thinking – Marx wrote: ‘The growing incompatibility between the productive development of society and its hitherto existing relations of production expresses itself in bitter contradictions, crises, spasms’.

The technology owned by the bourgeoisie is at odds with the wages of the worker. Machines put people out of work and create a reserve labour force. ‘The instrument of labour strikes down the worker’, Marx writes. Even if it’s transitional and more work is eventually found, there is a period of unemployment, a period of crisis, for displaced workers.

This doesn’t just happen because of technology – it can also be good for the capitalist. If there’s a ‘reserve labour force’, it becomes harder for workers to negotiate higher wages, because there’s always someone willing to do the job for less, just to have work.

Capitalists get more and more value out of fewer and fewer workers. Workers are displaced and squeezed.

Now, the bourgeoisie can keep revolutionising, building different machines, finding new markets, so new jobs might be created. So it’s not the case that absolute poverty for the proletariat is inevitable – although it’s possible. What will happen, as the bourgeoisie acquire more and more capital, machines, and technology, and as many are put out of work, is that the proletariat will be relatively immiserated.

Petrucciani puts it like this: ‘Marx can thus conclude by claiming that the ‘absolute general law of capitalist accumulation’ is to constantly produce, ‘in the direct ratio of its own energy and extent’, an excess of workers, a reserve army whose poverty increases as the power of wealth grows. These conclusions are of course very bleak, appropriately so because Marx aims to show (among other things) how capitalism is socially unsustainable’.

This relative immiseration means more concentration into larger monopolies on the one hand, and more fragmentation and discord on the other. The two are incompatible.

Engels wrote: ‘productive forces are concentrated in the hands of a few bourgeois whilst the great mass of the people are more and more becoming proletarians, and their condition more wretched and unendurable in the same measure in which the riches of the bourgeois increase.’

Map on top of this the unpredictability of capitalism – booms and busts, overproduction leading to crises, gluts, market instabilities, contractions, further bankruptcies and buy-outs, mass unemployment – and you have an explosive situation.

All of this is the apex of the argument in Capital: a tendency towards catastrophe.

A key concept here is that the overall rate of profit falls. If value, and therefore profit, comes from labour, and there is increasingly less labour – fewer people – doing the same amount of work because there are more and more machines, technology, and infrastructure, then the rate of profit declines over time. Doing business gets harder. Being a proletarian gets harder still.
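
A minimal sketch of the tendency, reusing the c/v/s notation from above; the ratios are toy numbers, not empirical claims:

```python
# Toy sketch of the tendency of the rate of profit to fall; numbers invented.
# c = constant capital (machines, materials), v = variable capital (wages),
# s = surplus value. Marx's rate of profit is r = s / (c + v).
rate_of_exploitation = 1.0   # s / v, held constant for simplicity

for c_over_v in [1, 2, 4, 8, 16]:   # a rising 'organic composition of capital'
    v = 100.0
    c = c_over_v * v
    s = rate_of_exploitation * v
    r = s / (c + v)
    print(f"c/v = {c_over_v:>2}: rate of profit = {r:.1%}")

# Prints 50.0%, 33.3%, 20.0%, 11.1%, 5.9%: as machinery grows relative to
# labour, the same rate of exploitation of less labour yields a falling rate.
```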

Again, a contradiction, an irony – that by improving productivity and investing more and more the capitalist class are sowing the seeds of their own destruction.

Marx writes: ‘What the bourgeoisie, therefore, produces, above all, is its own gravediggers. Its fall and the victory of the proletariat are equally inevitable’.

What we have is a pressure cooker on a societal scale.

Let’s just recap and look at the ingredients thrown into this explosive pot:

  • Division of labour – workers are fragmented into performing meaningless single repetitive tasks.
  • The downward pressure on wages and relative impoverishment.
  • A reserve labour army with no work at all.
  • Booms and busts – overproduction, layoffs, takeovers, recessions.
  • Bigger, more monstrous companies that are impossible to compete with – concentration and monopolies.
  • The tendency of the rate of profit to fall.

All of this pulls on two poles. On the one, ‘Accumulate, accumulate!’ Marx says. On the other, though, something emerges out of the chaos: a class consciousness – a privileged perspective arising out of the material conditions of all of this – a consciousness that understands its place in history: the proletariat.

Harvey puts it like this: ‘This is typical Marx: there are countervailing tendencies at work: concentration on the one hand, subdivision and fragmentation on the other. Where is the balance between them? Who knows! The balance between concentration and decentralization is almost certainly subject to perpetual flux.’

The capitalist will outsource, subcontract, layoff, divide and conquer. If the proletariat doesn’t join together all of this will ‘mutilate the labourer into a fragment of a man, degrade him to the level of an appendage of a machine, destroy any remnant of attraction in his work and turn it into a hated toil; they estrange from him the intellectual potentialities[…]; they distort the conditions under which he works, and subject him during the labour process to a despotism the more hateful for its meanness; they transform his lifetime into working-time, and drag his wife and child beneath the wheels of the Juggernaut of capital.’

The only option for the proletariat is to join together and fight. Engels wrote that, for ‘“protection” against the serpent of their agonies, the workers have to put their heads together.’ They have to co-operate, join in union. It is not inevitable, but forces and incentives all push workers into uniting – they will have the numbers, after all – and into overthrowing the current system.

There are longstanding debates about whether Marx was a determinist – whether he believed in the inevitability of immiseration, of revolution, of human history itself – that we’re all just puppets on a grander stage. I think he walks a fine line, but later in life he largely tries to avoid writing in these terms. As Harvey says, there’s not much causal language in Capital – just incentives, pressures, dialectical relationships that create these interesting pressure cookers. Map culture, politics, and much else on top, and you get a rich but complex picture of history.

Marx says: ‘Men make their own history, but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly encountered, given and transmitted from the past’.

But none of this is a universal, inevitable schema that we all live under; there’s too much dynamism, too much change, too many variables, too much contingency.

Marx himself complained of those that tried to turn, ‘my historical sketch of the genesis of capitalism in Western Europe into an historico-philosophic theory of the general path every people is fated to tread, whatever the historical circumstances in which it finds itself’.

True to dialectical form, he says, ‘circumstances create people in the same degree as people create circumstances.’

However, the circumstances, the pressures, the forces are all pushing the proletariat to overthrow the current state of affairs, and replace it with something new.

 

Communism

Marx famously didn’t write much about what a communist society would look like. He was only emphatic that the proletariat needed to organise and overthrow the current system. He believed that this would likely require revolution, but that it might be peaceful in places. For much of the rest, though, he left scant details of his thinking – and there was a particular reason for this.

As we’ve seen, he believed the proletariat had a particularly unique point of view that no-one else in society had. Capitalists are compelled to act in the ways we’ve seen by the imperatives of the market. Politicians are compelled to act by the power of big capital. But the proletariat, in factories, can see all of this, feel their own immiseration, feel their alienation and understand what negates them, understand industry and science, and importantly, because of their proximity to one another, have the capacity to organise.

Because of this, Marx believed that it should be left to the proletariat to establish the best course of action.

He was in a sense a rationalist, believing that a better society should be organised rationally, to the benefit of all instead of a few. But he didn’t believe that a rational society could be planned in advance, as the utopian socialists did. This is another expression of his dialectical thinking. He didn’t believe in dogmatic, rigid systems.

Engels criticised those who tried to reduce, ‘the Marxist theory of development to a rigid orthodoxy which workers are not to reach as a result of their class consciousness, but which, like an article of faith, is to be forced down their throats at once and without development’.

The proletariat had to develop on its own. In The Communist Manifesto Marx wrote that communists should, ‘not set up any sectarian principles of their own, by which to shape and mold the proletarian movement’, because, ‘They have no interests separate and apart from those of the proletariat as a whole’.

And Engels wrote that, ‘The masses must have time and opportunity to develop, and they can have the opportunity only when they have a movement of their own — no matter in what form so long as it is their movement — in which they are driven further by their mistakes and learn to profit by them’.

However, they did leave some clues as to what they thought a communist society could look like.

Marx stated simply: ‘We call communism the real movement which abolishes the present state of things’.

First, Marx and Engels famously argued for a ‘dictatorship of the proletariat’.

The proletariat would need to establish, ‘the class dictatorship of the proletariat as the necessary transit point to the abolition of class distinctions generally’.

Dictatorship has a particular connotation today that it didn’t have at the time. Their reading of revolution was shaped by the French Revolution, which was attacked by reactionary forces both domestic and foreign – by the Church, the aristocracy, civil war, and the monarchies of Europe. The bourgeoisie also held political power, so any immediate popular democratic vote would often – and in places did – just bring the old regime back to power.

Which is why Marx and Engels believed in a limited emergency dictatorship – not of one person, but by the proletariat as a class.

Looking at how revolutions and democratic procedures were repressed across Europe, Marx believed that a bloody revolution was very likely. But he did equivocate and change his mind on this. He wrote for example that, ‘there are countries such as America, England and Holland where the working people may achieve their goal by peaceful means’.

Ok, but what would happen once the revolution was secured? There are a few clues, but it should be remembered that these are scattered comments – the overarching image is of the proletariat working it out, depending on their particular experience, from place to place.

Marx pointed to the short-lived Paris Commune, which existed for a couple of months in 1871 when Parisian workers took control of the city during the Franco-Prussian War.

Importantly, Marx wrote it should be like the Paris Commune: ‘The Commune was formed of the municipal councillors, chosen by universal suffrage in the various wards of the town, responsible and revocable at short terms’. He continued: ‘Like the rest of public servants, magistrates and judges were to be elective, responsible and revocable.’

Wage-labour and capital should be abolished. Marx knew it would be difficult, because the new society would be, ‘in every respect, economically, morally and intellectually still stamped with the birth marks of the old society from whose womb it emerges’.

Because of this, he believed communism would develop in stages. At first: ‘Accordingly, the individual producer receives back from society—after the deductions have been made—exactly what he gives to it. . . . He receives a certificate from society that he has furnished such and such an amount of labor . . . and with this certificate he draws from the social stock of means of consumption as much as costs the same amount of labor’.

Petrucciani identifies the first steps throughout the writings like this: ‘landed estates are to be expropriated, inheritance rights abolished, strongly progressive taxation instituted, credit and transportation nationalized, public factories built, and ‘equal liability to work for all members of society’ imposed together with ‘education of all children […] in national institutions and at the expense of the nation’’.

So there is a mix of social democratic reforms, an equalising of labour and reward for work, and the planning of industry in the interest of all.

Callinicos writes, ‘the decisions about how much social labor would depend, not on the blind workings of competition, but on a collective and democratic assessment by the associated producers in the light of the needs of society’.

But after a while, a higher stage of communism should be developed, which Marx puts like this: ‘In a higher phase of communist society, after the enslaving subordination of the individual to the division of labor, and therewith also the antithesis between mental and physical labor, has vanished; after labor has become not only a means of life but life’s prime want; after the productive forces have also increased with all-around development of the individual, and all the springs of cooperative wealth flow more abundantly—only then can the narrow horizon of bourgeois right be crossed in its entirety and society inscribe on its banners: From each according to his ability, to each according to his needs!’

He thinks that at this point people will want to work, instead of being compelled to or incentivised to by monetary rewards – that’s ‘each according to his ability’ – and that each will take only what he needs from the common stock – ‘to each according to his needs’.

The state would then wither away.

Engels wrote: ‘As soon as there is no longer any social class to be held in subjection; as soon as class rule, and the individual struggle for existence based upon our present anarchy in production, with the collisions and excesses arising from these, are removed, nothing more remains to be repressed, and a special repressive force, a state, is no longer necessary’.

Finally, it’s important to note that Marx never wanted to promote absolute equality over individuality. He believed that having access to resources, and contributing to how they were produced, would mean individuals could flourish and their true creative individuality could be reached.

He called it, ‘an association in which the free development of each is the condition for the free development of all’. In Engels’s words, ‘it is humanity’s leap from the kingdom of necessity to the kingdom of freedom’.

 

Conclusion

After the First International collapsed in 1876, Marx withdrew from political life, spending his time on further volumes of Capital, which he would never finish but which would be published from his notes by Engels.

In 1883, shortly after his daughter died, Marx caught a cold and died quietly in his sleep. Engels wrote, ‘Mankind is shorter by a head, and that the greatest head of our time’.

Summing up, analysing, or critiquing Marx’s legacy is a huge task that’s beyond the scope of this video. His influence on the world is testament to the breadth of his insight. Much of that influence, I think, is due to the meticulous way he analysed the relationship between labour, capital, and technology in Das Kapital – many of his other insights, on alienation, revolution, and socialism, were much more common currency. So what you think of Marx should depend on appraising those big ideas in Capital, and, if anything, the jury is still out.

I’ll publish a more comprehensive appraisal on the second channel soon, but for now I’ll point towards some of the most common points of contention. 

First, the labour theory of value is probably the most criticised. Neoclassical economics emphasises the subjective nature of value, to put it simply, and there’s a famous ‘transformation problem’ in Marxism – of ‘transforming’ labour values into actual profits and prices – which should work if the labour theory of value is true, but doesn’t.

All of this means that the labour theory of value is, at worst, wrong, and at best, limited. However, even granting the criticisms, it’s undeniable that labour is at the core of production, and so how much labour goes into making something is at least one part of the answer to the question of value.

Harvey writes, for example, ‘I have lost count of the number of times I have heard people complain that the problem with Marx is that he believes the only valid notion of value derives from labor inputs. It is not that at all; it is a historical social product. The problem, therefore, for socialist, communist, revolutionary, anarchist or whatever, is to find an alternative value-form that will work in terms of the social reproduction of society in a different image.’

The falling rate of profit has also relatedly been criticised. This is a key marker of whether capitalism can sustain itself, and Marx’s contention was that because more technology would extract profit from fewer workers, the rate of profit would fall and capitalism would veer from crisis to crisis. The debate over this rages on, as, of course, does capitalism.

And Marx wouldn’t be surprised. Capitalism’s ability to transform is, as we’ve seen, one of its most distinctive features. Yet despite the dynamism, I think Marx would still recognise it today – which goes a long way to showing the enduring influence of his work.

Inequality, crises and banking crashes, squeezed wages, the speed of technological change, automation, global corporations, alienation – all were very much in Marx’s world.

There’s also the debate over actually-existing socialism, the failures of centrally commanded economies, the USSR, state-capitalism. Many who follow Marx today would argue these were not socialist in Marx’s sense.

Marxism, Callinicos writes, ‘was socialism “from below.” It foresaw the working class liberating itself through its own activity, and remaking society in its own image. “Really existing socialism” in the Eastern bloc, however, is based on the denial of the self-activity of the workers and the denial of popular democracy’.

There are also criticisms about how little Marx said about the practicalities of communism, how societies could function without money or any state apparatus at all.

But Marx’s relevance is difficult to escape. And if you drop the idea that you have to be a Marxist or an anti-Marxist, a capitalist or a socialist, it’s undeniable that his work contains still-relevant insights and still-useful analyses of still very present forces. He would want his readers to read and critique; he would want to inspire not followers, but change. In other words, he was emphatically not a dogmatist. He wanted to inspire fluid, active, creative thought – and, importantly, action. Towards the end of his life he said, ‘All I know is that I am no Marxist. God save me from my friends!’

I’ll end with a quote from a letter calling ‘for a ruthless criticism of everything that exists’. He wrote: ‘we do not confront the world in a doctrinaire way with a new principle: Here is the truth, kneel down before it! We develop new principles for the world out of the world’s own principles. We do not say to the world: Cease your struggles, they are foolish; we will give you the true slogan of struggle. We merely show the world what it is really fighting for, and consciousness is something that it has to acquire, even if it does not want to’.

——

READING LIST

In the past, I have made reading lists and bibliographies public, but for Marx, I put some time into curating one with comments and a reading order for supporters of the channel on Patreon: https://www.patreon.com/posts/marx-reading-112803954

The Fall of the Mainstream Media: The New Elites

How many of these faces do you know? In a boundless internet with infinite possibilities, why do these ones stand out? Are they new media? New elites? Has the mainstream fallen?

This is a story about stories. Who gets to tell them? What shapes them? Do they help or hinder us? This is the most significant story of our time. It’s about that word – ‘significance’.

We too often think of the news, the media, our information diet, as moving from the controlled, old, and censored – the age of kings, dictators, and the mainstream media (MSM) – towards a freer, more independent, more rational media.

We think the internet has freed information – citizen journalism, truth to power, a new era. But what’s fascinating is that people have thought that since the dawn of the media. That in being part of a new way of doing journalism, people were on the side of good, the side of progress, the side of history. It’s part of a larger assumption – that we inevitably move from controlled to freer societies.

How many of these faces do you know? What’s happening, when the radically infinite globe-spanning and interest-diverse possibilities of the internet seem to coalesce around recognisable figures with common interests that almost seem to be friends?

Eerily, this isn’t new. Maybe history does repeat itself.

This is why, to understand this new era – whatever you want to call it: new media, alternative media, new elites – we have to understand old media: legacy media, corporate media, mainstream media. Where did they come from? What motivated them? How did they begin to fall? What patterns can we learn? Understanding one might give us some clues to the future of the other.

Is this really new media vs old elites? Are the MSM decent, noble, fourth estate journalists holding the powerful to account? Or are they stooges? Puppets? Too close to power? Too self-interested?

We’re going through a historical shift. I think it’s important to understand this moment in the longest view possible.

The media are a set of institutions which represent, in many ways, public opinion. They shape the world. And now, that happens as much through Rogan as through the BBC, as much through Daily Wire as through National Review, as much through Jordan Peterson as through CBC.

All of these people are institutions that have ideas about what’s significant in the world, what to talk about, how to talk about it. And those sets of ideas have always changed – from the beginning of the press, through to radio, television, to today.

What is the truth? Who gets closer to it? Tucker Carlson or the NYT? Russell Brand or The Telegraph? Me or another channel? What I want to describe is how the truth gets shaped in the first place. Because the truth is a complex irreducible thing. There are so many issues. Which ones get picked as significant? In what ways? I want to lay out a way of judging them – not as right or wrong – but as how they’re made, why, so that we can be critical ourselves.

 

Contents:

 

Becoming Mainstream

We think of the media as a vast established set of loosely similar institutions, but a look at the history of the media illustrates how these institutions have changed over time. To understand, say, a US newspaper in the early 19th century or a Youtube channel in the 21st, we have to understand the relevant context, relationships, ideas, norms, laws, cultures, technology, and economic circumstances – all of which shape the information in very specific ways.

And despite this constellational context varying from place to place, period to period, there are some identifiable strands that run through.

Since Johannes Gutenberg invented the printing press in Germany around 1440, individuals and groups have sought to mobilise this radically powerful technology for different purposes.

This revolution changed the world. It changed religion, giving people a chance to read for themselves, it gave a boost to national languages over Latin, to state bureaucracy, it weakened the church and made the renaissance and the Enlightenment possible, it aided commerce and exploration and created imagined national communities with shared identities – it gives credence to Marshall McLuhan’s famous phrase that it’s not so much what’s said, but the medium itself that’s the message.

Printers boomed everywhere. In London, across the 16th century, the number of printers went from one or two to a hundred. But censorship was the norm. The Vatican granted licences, the French government granted monopolies, the Tudors gave out licences for monopolies on different types of news. Arrests, executions, and control were the water within which printers swam. The first real newspapers were printed at the beginning of the 17th century.

We could go back even further. But as the historian John Nerone argues, the ‘media’ really became interesting when the state started to lose its grip on news, creating an ostensibly separate ‘fourth estate.’ This began happening during the English Civil War.

During the war, censorship collapsed and printing flourished. Then, during its final act – the Glorious Revolution – parliament passed the 1689 Bill of Rights, which prevented the monarch from infringing on parliament’s freedom of speech.

By the middle of the 18th century, around 13 million newspapers were being sold each year across Britain alone. This was the age of discovery, of science, of Enlightenment, of globalisation, and arguments for freedom of speech grew out of arguments for religious toleration from philosophers like John Locke.

The American Revolution and the US Constitution’s first amendment made the press a truly independent force for the first time. Slowly, a confident, separate, and increasingly powerful ‘fourth estate’ emerged across America and Europe.

But this initial media was really dominated by pamphleteering rather than reporting – partnerships between printers and philosophers, politicians and public figures. Pamphleteering was at the root of the drive towards American independence.

By the French Revolution and the early 19th century, there was an admirable diversity of opinion – federalists, anarchists, communists, utopian socialists, theologians, liberals, monarchists, and conservatives all debated what the best society would look like through Europe- and America-wide networks of correspondence, books, and pamphlets.

I say admirable because this was truly diverse and truly influential. To take one example, Thomas Paine’s The Rights of Man (1791) sold 200,000 copies in its first few years. Many others had a similar readership. The population of Britain at the time was just 10m. That 1 in 50 bought copies (bearing in mind most couldn’t read and books would be read aloud to groups and passed around) demonstrates the extent of passionate and engaged, widespread and diverse debate. The US population was only 2.5 million at the time.

Hundreds of newspapers were published at the beginning of the 19th century in Britain alone, despite the government trying to crack down on radical dissent. The British government passed notorious stamp acts, taxing cheap publications out of existence. Across the 18th century taxes on printing rose by 800%.

These have been called the ‘taxes on knowledge’, and they had an effect on the type of news printed. That original diversity started to be quite literally stamped out.

This original diversity of opinion was slowly transformed into the large media corporations we know today. This happened for several reasons.

Initially, it was cheap and easy to start a newspaper. Hand presses could only print 500-5,000 copies at a time, so there was a natural limit on any paper’s reach and size. In the UK, the Northern Star – a big newspaper that advocated for parliamentary reform and democracy in Britain – was started with donations from the public. As we’ve seen, cheap, simple pamphlets were everywhere.

Then, in the 1810s, the steam-powered printing press was invented, making larger runs possible – at greater upfront cost, but a lower cost per copy. These new machines could print 4,000 impressions per hour, meaning that for the first time a national daily newspaper could be printed and distributed. But without subsidy, a paper printed this way would have been too expensive for most readers. That meant including advertising.

Advertisers went with middle-class bourgeois newspapers for obvious reasons. One advertising executive wrote at the time that certain publications should be avoided because, ‘their readers are not purchasers, and any money thrown upon them is so much thrown away’.

These ‘bourgeois’ newspapers quickly became larger institutions that made use of technology – the telegraph, railways, reporters, lithographic images, electricity, industrialisation, then photography and new printing techniques – all of which made them more efficient and eye-catching.

All of this made it expensive and difficult to compete with the large newspapers, who also spent time and money lobbying parliament to reduce taxes on them so that they could, as one editor said, ‘instruct the masses’ and ‘put the unions down.’

They included more ‘human interest’ stories, sensationalism, consumerism, ballads, murder mysteries, and folk tales that were easy to read and entertaining.

This was a reasonably simple formula: commercialisation + industrialisation + populism = sales. Any working class press just couldn’t keep up. It was no longer cheap and easy to start a paper that might be successful.

The Sunday Express in Britain, for example, launched in 1918, spent £2m, and had to reach a circulation of 250,000 before it even broke even.

In the US, media moguls Joseph Pulitzer and William Randolph Hearst got into a competition, investing in more expensive journalists, more technology, more sensationalist headlines, and more scary and fake news stories – what came to be called ‘yellow journalism’. Front-page headlines with words like GUILTY, GLORY, TREACHERY, and SLAUGHTER became the norm.

In both the UK and US, crime, sexual violence, and sensationalist topics – murders, elopements, robberies – all became more profitable to report on. In 1886, murder stories made up 50% of the pages of London’s Lloyd’s Weekly, despite the rate of violent crime decreasing across the century.

This didn’t matter to publishers. Sales did. Their newspapers increasingly contained ‘entertaining’ titbits like ‘What does the queen eat? Why don’t Jews ride bicycles? What’s the color of the prime minister’s socks? Stories about a man-woman discovered in Birmingham and whether dogs can commit murder.’

Critics began to complain that the press was pandering to the worst of its readers’ tastes. Norman Angell labelled them ‘the worst of all the menaces to modern democracy.’ The Tory Prime Minister Stanley Baldwin said that the press lords had ‘power without responsibility.’ He said they were ‘engines of propaganda for their constantly changing policies, desires, personal wishes, personal likes and dislikes.’

By the 20th century, commissions were being set up to investigate their monopoly powers, and a press council was established in the UK to act as the industry’s ‘conscience’.

As powerful, influential businessmen with advertising interests and a fear of an organised working class, they even, in some cases, became cheerleaders for fascism. Lord Rothermere supported the BUF, and his papers ran headlines like ‘Hurrah for the Blackshirts’ and ‘Give the Blackshirts a helping hand.’

Even larger working-class papers like the Daily Herald in Britain struggled to compete. In 1956 it was the fourth-biggest newspaper in the country, and the most popular amongst working-class readers. Despite this, it had only a 3.5% share of advertising across the industry. Who would want to advertise to people who couldn’t afford products? Its fortunes declined; the paper was sold and became the tabloid The Sun.

This is the story of the press: from diversity to populism, from many smaller publications to a few corporate ones, from an interest in political ideas to an interest in entertaining ones. But the media had to at least give the appearance of being politically decent. Which is why they were drawn to attention-grabbing, exaggerated, sensationalist moral panics.

In 1972, the criminologist Stanley Cohen noted while surveying the history of the press that during moral panics, ‘A condition, episode, person or group of persons emerges to become defined as a threat to societal values and interests; its nature is presented in a stylized and stereotypical fashion by the mass media’.

What seemed obvious and urgent to many was that the press should be focusing on what was important – improving people’s lives, holding the powerful to account, searching out the issues, threats, and dangers that affected the most people: their health, their bank accounts, their homes.

In the 70s, the sociologist Stuart Hall and his colleagues argued that the moral panic was a way for the ruling class to distract away from the real problems facing British society. Oil shocks, poverty, business leaving Britain, nuclear war – there are so many important things to focus on, yet the front pages of the press focused their attention on superficial scare stories.

The tabloid newspapers targeted everything from sexuality, family values, hooligans and gangs, paedophilia, AIDS, pornography, drugs, abortion, video games, violent films, and witches to the youth of today – all of which in some way argued that the fabric of society was being eroded by an element within, reporting that supposed societal collapse through stories of good vs evil, emotion and drama: threat-entertainment with popular appeal.

As sociologist Kenneth Thompson writes, ‘Whilst professional groups with an interest in making claims for more resources, ranging from social workers and teachers to the police and probation officers, are often prepared to provide evidence of a crisis, sections of the mass media, subjected to market pressures, have responded by presenting dramatic narratives with a strong moral content. The result has been an almost bewildering succession of moral panics.’

These moral panics go back as far as the witch trials, but in the 19th century they adorned the front pages, spreading fears of garrotting in London, for example. Again, violent crime was decreasing, but in the late 19th century a glance at the front pages of the press would have led a reader to believe the threat was epidemic. Harsh, reactionary, ill-conceived legislation was even passed by parliament. Historians now describe this as a classic ‘moral panic.’ Your biggest concern at the time shouldn’t have been being garrotted, but the factory you worked in.

By the twentieth century, there were moral panics about jazz, the beatniks, hippies, gay lifestyles, AIDS, and the rave scene.

Take one more recent example that Thompson discusses in his book. In the 90s, the British media ran with a panic over ecstasy. The death of one girl at a rave was covered ad infinitum.

‘It could be your child,’ the Daily Mail wrote. Today wrote: ‘Leah’s Last Words: She named Ecstasy pill pusher then pleaded “Help me mum, help me”’. MPs called for clubs to be closed. Reading the press at the time, you’d be led to believe the rave scene was a demonic nightmare. Studies have shown that, on the contrary, raves were comparatively safe, friendly, egalitarian spaces. Others pointed out that while any drug could be dangerous, it wasn’t ecstasy itself so much as its combination with high-intensity dancing. In the US, only two ecstasy deaths had ever been reported.

But it was the sort of story the press loved: a hidden danger, limbically appealing, a counterculture, a threat to society and family values. As Thompson points out, countercultural groups – ravers, hippies, LGBTQ or trans people – supposedly reject mainstream cultural values. He writes, ‘It is when these values seem to be being flouted that the media are likely to resort to discursive strategies that amplify the threat and generate a moral panic about the risks to the moral and social order, not just to the young people themselves.’

As the Observer newspaper warned in 1996, ‘Beware moral crusades. It is true that the British are alarmed and frightened by social fragmentation and growing violence. It is also true that the moral compasses by which to steer are increasingly uncertain. That does not mean the answer is a crusade led by party politicians or conservative newspapers — down that route leads a Dutch auction in repression. Worse, the real dynamics of social breakdown are left unaddressed.’

The moral panic is the likely consequence of a market-driven, commercial, populist press. Unable or unwilling to focus on economic issues – whether because of the structure of ownership or because of advertiser pressure – the press is drawn to stories that boost sales by emphasising emotion, selecting facts for sensationalism, and exaggerating and distorting reality: a ‘discourse of the edges’ that ignores real substantive issues.

Ultimately, there was a transatlantic exchange. In his history of journalism, Martin Conboy writes that there was an ‘Americanization of the British press between 1830 and 1914. Gossip, display advertising, sports news, human interest, fast stories transmitted by telegraph, cheap and increasingly visual newspapers, summary leads and front page news were all introduced in England in the 1890s.’

What we see, then, is a history from complexity to simplicity, from long pamphlets to quick summaries, from nuance to populist appeal.

 

The Television Revolution

Something similar happened with television. This new powerful medium was never as diverse as the original press – the fifties were a famously conformist period – but in the early days of broadcasting, some tried to carve out a more ethical role for the media.

This was the corporate media at its peak. The BBC dominated radio and television in the UK, and almost everyone read a newspaper. In 1950 the total readership of daily newspapers in the US was 54 million. That was between one and two newspapers for every household across the country. Walter Cronkite anchored CBS for almost 20 years and was regularly voted the most trusted man in America.

Radio and television, though, were also conformist for another reason. The airwaves could carry only a limited number of channels, so the FCC mandated that, to hold a broadcast licence, stations had to air some programming in the public interest.

To the early television broadcasters – CBS and NBC – news was unprofitable. People preferred entertainment. However, the FCC forced them to spend some money on news and documentaries.

Some took this responsibility seriously. Edward Murrow produced programmes like See It Now that tried to shine a light on serious topics. It essentially invented the television documentary and took aim at the Red Scare, the Korean War, and Oppenheimer’s opposition to nuclear weapons. But See It Now was cancelled after its advertisers dropped out and CBS became the focus of political pressure. Despite the show’s popularity, the head of CBS said the controversy was a ‘constant stomach ache’.

As with the press, commercial and economic pressures forced out a potential plurality of ethical discussion.

A softer approach was taken by shows like the Today Show, which aimed to be a populist bird’s-eye view of the day – one that, as one producer said, should distract people from the long day they had ahead of them.

Like the press, the trend was towards popular appeal and bigger audiences, and away from difficult topics. The same happened at the BBC, as the more serious programmes and John Reith’s hope that the BBC would ‘inform, educate, and entertain’, in that order, gave way to entertainment first. To many, the Reithian approach was elitist; but to him, entertainment was meant to be the dessert, and now it was the main course.

In his history, Ponce de Leon writes: ‘television’s pioneering, wide-open phase was over. In the future, news and public-affairs programming like See It Now would struggle to find a place on network TV’.

Many bemoaned the media landscape. After See it Now was cancelled, Murrow said that the TV was a depressing spectacle of ‘decadence, escapism, and insulation from the realities of the world in which we live’. He continued: ‘This instrument can teach. It can illuminate; yes, it can even inspire. But it can do so only to the extent that human beings are determined to use it to those ends. Otherwise, it’s nothing but wires and lights in a box’.

Head of the FCC Newton Minow argued that television had become a ‘vast wasteland’.

But this was only one line of critique. To some, the news being broadcast was elitist and snobbish anyway. It was urban, coastal, and Washington- or London-centric, and it looked down on ordinary people. Television, critics began to argue, had become part of the powerful establishment elite.

To a young Roger Ailes, working in television, the NYTs and CBSs of America liked to tell the rest of the country they were racist, sexist, and needed social security programs. He and the head of Coors beer, Joseph Coors, dreamed of a real conservative media, one that didn’t hold back.

At the same time, cable and satellite made the FCC’s licensing regulations obsolete, as anyone could make use of the expanded bandwidth. If anyone could broadcast, what was the purpose of the fairness doctrine, which forced the few stations to give opposing viewpoints airtime? In 1987, the FCC under Reagan repealed it, and a new range of stations proliferated: ESPN, Nickelodeon, CNN, Rush Limbaugh – specialist channels and stations, partisan politics, and more populism. Unlike in the early press, starting a television station was extraordinarily expensive, and relied on big business and advertising even more.

But like the newspapers before, Ailes and Murdoch in particular knew that the trick to popular news wasn’t just the news, it was all the trimmings – crime, gossip, good vs bad storylines, good-looking presenters, chemistry, sound and flashy visual effects, sensationalism, and moral panics.

De Leon writes, ‘In previous decades, most well-educated Americans, including many of the corporate elite, would have rejected market populism as a cynical and potentially dangerous excuse to exploit the public’s poor taste and most primitive yearnings. In this view, merely satisfying consumer demand without considering what you were selling was unseemly and amoral’.

By the 90s, Dan Rather said in a speech, ‘They’ve got us putting more and more fuzz and wuzz on the air, cop-shop stuff, so as to compete not with other news programs but with entertainment programs, including those posing as news programs’.

The OJ Simpson trial, America’s Talking, and A Current Affair all relied on new techniques inherited from the press – gossip, storylines, celebrity, flashy text and images – and Murdoch brought all of this together in the launch of Fox News in 1996.

In 2010, looking back, journalist Ted Koppel wrote: ‘The commercial success of both Fox News and MSNBC is a source of nonpartisan sadness for me. While I can appreciate the financial logic of drowning television viewers in a flood of opinions designed to confirm their own biases, the trend is not good for the republic… Beginning, perhaps, from the reasonable perspective that absolute objectivity is unattainable, Fox News and MSNBC no longer even attempt it.’

And at the same time, the internet was slowly beginning to creep into our homes, adding to the disillusionment with traditional media.

Nerone writes that, ‘By the 1980s, then, and certainly by the 1990s, the professional press had come to seem a vulnerable institution. The people didn’t trust it. The powers that be were able to manipulate it. Journalism no longer seemed the institution of public intelligence that it wanted to be.’

The complaints in some sense seemed contradictory, though: on the one hand the media was a ‘vast wasteland’ of populist nonsense; on the other it was manufacturing political consent. And even here there was disagreement. To someone like Ailes that consent was liberal; to Noam Chomsky, it was capitalist propaganda.

 

Manufacturing (PC) Consent

Most think that the critique of the media goes back to Chomsky and Herman’s book, but as far back as the early decades of the republic, newspaper editor Hezekiah Niles was noticing that the press was moving closer to the political parties and ‘manufacturing public opinion’.

He complained how the press arranged to, ‘act together as if with the soul of one man, subservient to gangs of managers, dividing the spoils of victory, of which these editors also liberally partake – more than one hundred and fifteen of them being rewarded with offices, or fat jobs of printing, &c. This is a new state of things’.

As the media commercialised, industrialised, and grew into gargantuan conglomerates, moguls like Hearst and Pulitzer expanded into new mediums – radio, film, then television.

As they did, many questioned how much these tentacled institutions represented public opinion. The most famous intellectual of the early 20th century, Walter Lippmann, criticised the idea of ‘public opinion’ itself, lamenting how the public could be manipulated with propaganda – before propaganda was a dirty word. He saw the propaganda spread during the First World War and presciently worried about the future. He called the picture painted by the press a ‘pseudo-environment.’

The novelist Upton Sinclair wrote an influential book in 1919 called The Brass Check, in which he criticised the ‘yellow journalism’ of the period. He wrote, ‘In every newspaper-office in America the same struggle between the business-office and the news-department is going on all the time’.

He quoted the editor of the San Francisco Star: ‘You wish to know my “confidential opinion as to the honesty of the Associated Press.” My opinion, not confidential, is that it is the damndest, meanest monopoly on the face of the earth – the wet-nurse for all other monopolies. It lies by day, it lies by night, and it lies for the very lust of lying. Its news-gatherers, I sincerely believe, only obey orders’.

There was a feeling that the media conglomerates were the same kind of large corporations that dominated Gilded Age America – railroad barons, oil barons, and now press barons.

Nerone describes how the press responded by taking a more active, responsible, and ethical role in its own affairs, promising to be better, essentially becoming their own regulators.

He writes: ‘The motion picture industry obviously would do anything to make money, including glamorizing crime and transgressive sexuality. In contrast, the press took on the responsibility of informing the public to reinforce morality and public order. Adopting this exalted position meant that the press had to repress its own dark side. The superego of the press would be public affairs reporting. It hoped that its performance in this high-value enterprise would obscure or excuse its id: crime reporting, celebrity gossip, advertising, and trivialities like sports and amusements, where the bulk of its income was earned.’

What this meant was a bit more serious journalism. This happened in many countries at the beginning of the 20th century. Professionalisation meant starting journalism courses, an education in ethics, codes of conduct, regulation, and more training. Pulitzer was an advocate of journalism courses in universities, arguing that students should study a bit of everything before entering the workforce.

The criticisms of the press were enough for the US government to pass the 1912 Newspaper Publicity Act: ownership now had to be published, and content funded by advertisers had to be made transparent. The act read, ‘editorial or other reading material… for the publication of which money or other valuable consideration is paid… shall be plainly marked as ‘advertisement’.

Some believed that the press could be a force for good. Nerone points out that the idea of objectivity in journalism didn’t really exist prior to the 1920s; the first appearance of ‘objectivity’ alongside ‘journalism’ in the NYT archive is in 1924.

Journalists until then had what’s been called a ‘naive realism’ – reporting the facts of what happened but not much else, without any pushback, analysis, or investigation into whether what they’d been told by a source was the truth.

This had changed somewhat with the rise of muckraking, when journalists like Ida Tarbell investigated the corruption, price-rigging and predatory tactics of monopolies like Rockefeller’s Standard Oil. Seeing the popularity of these sorts of investigations, editors commissioned more of them.

When up against corporate power in an age of propaganda, marketing, advertising and PR, just reporting the naive facts was no longer enough. It would take some work to uncover the truth.

However, during the Cold War, it was hard to argue that capitalism itself was an issue. Exposés tended to focus on political intrigue; sensationalism, as we’ve seen, got even more popular; and anti-communism and McCarthyism set the dominant mood.

Marxists and academics may have argued to varying degrees that the press were part of the capitalist superstructure, legitimising the social system they were a part of and benefited from, but for the most part these arguments were confined to the halls of academia rather than written about in the wider public sphere.

But the argument was there. In the early 20th century, Antonio Gramsci had argued from his imprisonment in Fascist Italy that capitalist hegemony is perpetuated by the ruling class through culture. The Overton window, or what Henrik Ekengren Oscarsson has called the “opinion corridor”, sets the tone of the conversation, subtly directs it, perpetuates it. Oscarsson called the opinion corridor “the buffer zone where you can still voice your opinion without immediately having to receive a diagnosis of your mental condition”.

Thomas Bates writes that, ‘intellectuals succeed in creating hegemony to the extent that they extend the worldview of the rulers to the ruled, and thereby secure the “free” consent of the masses to the law and order of the land.’

It took until the end of the 20th century for this view to approach the mainstream.

In 1988, Noam Chomsky and Edward Herman published Manufacturing Consent, arguing that the news was essentially propaganda for ‘powerful societal interests that control and finance them’.

They didn’t do this through blunt intervention, but by ‘the selection of right thinking personnel and by the editors’ and working journalists’ internalization of priorities and definitions of newsworthiness that conform to the institution’s policy’.

The big news conglomerates like Time Warner and Viacom kept dissenting voices at the margins, picked the right experts, and filtered out critical topics.

Chomsky said: ‘they are way up at the top of the power structure of the private economy which is a very tyrannical structure. Corporations are basically tyrannies, hierarchic, controlled from above. If you don’t like what they are doing you get out. The major media are just part of that system. What about their institutional setting? Well, that’s more or less the same. What they interact with and relate to is other major power centers – the government, other corporations, or the universities. Because the media are a doctrinal system they interact closely with the universities.’

The news corporations distract with sensationalism, side with Western crusades and ‘worthy’ victims, use language selectively, and are aggressively anti-communist.

Chomsky and Herman laid out five filters through which information passes. The first is that the size, profit orientation, and ownership of the mass media by themselves filter out certain views and incentivise others. Market views are more acceptable than non-market views, and there is a revolving door between politicians and media executives, between corporations and state power.

The second filter is that the driving incentive is, ultimately, advertising and profit – the customer is the advertiser as much as the reader. They point to an NBC documentary on environmental issues that couldn’t get made because of a lack of advertisers.

The third filter is that they are dependent on a finite number of sources that are embedded in institutions like the White House or police departments or trade groups or embassies. The Pentagon, for example, spends billions on PR, the US Chamber of Commerce – a pro business lobby – spent $65 million in the year they were writing, and today that figure is over $200 million.

These groups, they write, ‘provide the media organizations with facilities in which to gather; they give journalists advance copies of speeches and forthcoming reports; they schedule press conferences at hours well-geared to news deadlines; they write press releases in usable language; and they carefully organize their press conferences and “photo opportunity” sessions.’

Fourth, that media is bombarded with what they called flak – ‘letters, telegrams, phone calls, petitions, lawsuits, speeches and bills before Congress, and other modes of complaint, threat, and punitive action’ – which nudges views away from criticisms of special interests.

And fifth, anti-communism is the ultimate dominant ideology. They write: ‘This ideology helps mobilize the populace against an enemy, and because the concept is fuzzy it can be used against anybody advocating policies that threaten property interests or support accommodation with Communist states and radicalism. It therefore helps fragment the left and labor movements and serves as a political-control mechanism.’

Ultimately, ‘The filters narrow the range of news that passes through the gates, and even more sharply limit what can become “big news”.’

But Chomsky’s wasn’t the only critique of the media, nor the most influential. As conservative talk radio shows like Limbaugh’s and Fox News grew, Murdoch, Ailes – the founders of Fox – and the wider conservative critique was that yes, the media were manufacturing consent, but liberal consent. Ailes called CNN the Clinton News Network.

So while the left were criticising the media for being propagandists for capitalism, the right were criticising them for having a socially liberal agenda on race, gender, and social security. That the media were ‘politically correct’ and wanted to tell you how to think, what to say, and who to support.

In 2004, the novelist Doris Lessing called political correctness “the most powerful mental tyranny in what we call the free world”.

In his history, Geoffrey Hughes says that, ‘linguistically it started as a basically idealistic, decent-minded, but slightly Puritanical intervention to sanitize the language by suppressing some of its uglier prejudicial features’. It meant not using certain words, or “It means showing respect to all,” or “It means accepting and promoting diversity.”

Where had this mental tyranny – to use Lessing’s phrase – emerged from? Some argued it came from the campuses protesting about race relations, gay rights, and feminism.

Lessing saw it as inspired by Mao’s Little Red Book – toeing the party line, being politically in the right. She wrote that ‘Political Correctness is the natural continuum of the party line. What we are seeing once again is a self-appointed group of vigilantes imposing their views on others. It is a heritage of communism, but they don’t seem to see this’.

Hughes saw it as different because, ‘unlike previous forms of orthodoxy, both religious and political, it is not imposed by some recognized authority like the Papacy, the Politburo, or the Crown, but is a form of semantic engineering and censorship not derivable from one recognized or definable source, but a variety.’

But PC was nothing new: the Victorians’ idea of ‘being proper’, the French Revolutionaries’ battles of language, the Puritans – in fact, all societies have their forms of cultural and linguistic persuasions.

It was only a new form of cultural persuasion by a more active, engaged, socially liberal media. And rather than springing from one powerful little red book, the impulse more likely arose out of the cultural, linguistic, and postmodern turn in universities that more closely examined the power of language and culture in shaping people’s views.

Others argued the entire thing was made up. Clare Short wrote in the Guardian in 1995: ‘Political Correctness is a concept invented by hard-rightwing forces to defend their right to be racist, to treat women in a degrading way and to be truly vile about gay people. They invent these people who are Politically Correct, with a rigid, monstrous attitude to life so they can attack them. But we have all had to learn to modify our language. That’s all part of being a human being.’

What’s more interesting for the shift towards the internet is how both of these critiques arose around the same time and have both carried over into this new era.

The question posed by Chomsky and Ailes was really: can we see a monolithic ideology despite the appearance of diversity? Or do people see what they want to see? Are people driven by their own biases in interpreting the media as much as the media is driven by their own? Because by this point, as journalist Sandrine Boudana writes, ‘Journalism long ago abandoned the idea of seeking only neutrality and objectivity in pursuit of creating a more committed journalism, which makes it more difficult to differentiate between opinion and bias.’

This question – who is biased and who is right, and how these opposing critiques fed internet culture – is something we’ll return to. For now it’s worth pointing out that in fact, the critiques aren’t mutually exclusive. The media could be, to generalise, elitist, urban, socially and culturally liberal, close to politicians, driven by market forces and advertising, and biased by all of them, all at the same time.

But as we move into a new era it’s important to keep that dominant trend in mind, from both the early press and early television: diversity, ethics, working-class ideas, maybe even high-mindedness, get overwhelmed by the powerful forces of capital and technology, by flashy front pages and expensive studios, good-looking presenters, and sensationalist, catchy, populist storylines.

 

The Internet’s First Media

On the surface, the internet is defined by pluralism, diversity, possibility. Anyone can post anything, anyone can start a podcast, anyone can build a YouTube channel, post on forums, on TikTok and Instagram.

Why then does it seem like this diverse digital landscape has coalesced slowly around specific individuals, groups, and talking points? If you’re interested in politics online, you’re unlikely to get through the day without seeing Joe Rogan, Jordan Peterson, or a Weinstein brother use the word woke.

The early years of the internet were much more like those early years of newspapers. There was a great diversity of ideas, a lot of techno-optimism, a strange, unwieldy plurality. An early book on the internet – Clay Shirky’s Here Comes Everybody – illustrates the outlook of those years: its subtitle was The Power of Organizing Without Organizations.

This optimism about digital progress in some ways confirmed that Whiggish view of journalism – that the media, throughout history, gets freer and freer. Censorship, control and tyranny inevitably give way to free speech, the march of reason, a free press.

But as we’ve seen, and as many historians now argue, that is an old myth; a lazy, naive, triumphalist one. That original plurality in the press was centralised by commercial and industrial conglomerates. And the twentieth century proved that, in many countries, the media can go the direction of authoritarian control – towards Pravda or ministries of propaganda – rather than inevitably towards freedom. Early idealistic pioneers making programs like See it Now can be elbowed out of studios for lack of advertisers, and difficult stories can be replaced by ones with populist appeal. Could the internet be going the same way?

The internet is still, of course, much more diverse than any other medium. Costs are significantly reduced, accessibility increased. You can find videos and podcasts and reels on pretty much anything. Yet despite this, a kind of cohesive culture forms. A constellation of talking points, guests, ideas, and groups. What drives this? Human nature? Social dynamics? Economics? Culture? Politics? Let’s take a look at how this shift towards a cohesive culture happened.

Before around 2017, many alternative media outlets online were beginning to make a name for themselves. The Drudge Report and Breitbart were loosely libertarian nationalist websites with the same kind of views as Fox News and Roger Ailes. In 2010, Andrew Breitbart said he was “committed to the destruction of the old media guard.”

On the left, the Young Turks moved from radio to the web in 2006. The British left wing blog Another Angry Voice started in 2010. Joe Rogan started his podcast in 2009.

But while there were channels, blogs, and podcasts growing in prominence, the nascency of the internet put most on an equal footing. Plurality reigned. The internet was a DIY, amateur, cobbled-together jumble of voices all doing different things.

But around 2016 a shift began. This was the year of Trump and Brexit, both rebellions against the elite establishment of which the mainstream media was a part.

This was the time of Pizzagate, not long after Gamergate. It was the era of the Charlottesville rally and a similar march in Gothenburg, Sweden, which according to the organisers was the second most streamed video on YouTube around the world that day. It was when Eric Weinstein officially baptised the Intellectual Dark Web (IDW) as a group rebelling against the establishment status quo. These were, in short, years of revolt.

In 2016, Vox reported that Infowars – Alex Jones’ conservative, conspiracy-laden talk show – was getting 10 million visits a month, more than most mainstream media websites at the time.

Infowars themselves said that, ‘Government and the mainstream media have lost all credibility, leaving opportunity for the alternative media to swoop in and expose the truth, waking up people across the globe.’

Alternative für Deutschland (AfD) – a Eurosceptic, anti-immigration party in Germany – called the media the “Pinocchio press”.

Two things were happening: certain topics, ideas, groups, and individuals were becoming dominant, and second, most of them were defined by their distrust, critique, or outright condemnation of the traditional media. They were all, to use a loose term, anti-establishment.

In his book on right-wing alternative media, professor Kristoffer Holt describes the process by which alternative media become anti-system media: ‘Alternative news media can publish different voices (alternative content creators) trying to influence public opinion according to an agenda that is perceived by their promoters and/or audiences as underrepresented, ostracized or otherwise marginalized in main stream news media. Alternative accounts and interpretations of political and social events (alternative news content), rely on alternative publishing routines via alternative media organizations and/or through channels outside and unsupported by the major networks and newspapers in an alternative media system.’

What’s interesting, though, is how that plurality, diversity, and independence turned into something relational. The new or alternative media defined themselves in part by what they were not.

He writes, ‘the alternative quality of any news medium is derived from claims to its counter-or complementary position to certain hegemony, since this must be construed as the organizing principle behind alternative media enterprises.’

To these critics, the MSM were defined in the same way Ailes defined them: as urban, elitist, snobbish, socially liberal globalists. They were feminists, pro-immigration, anti-white.

As Holt writes, ‘The claim is that hegemonic mainstream media withhold or thwart the reporting on information that can be sensitive in light of a politically correct agenda.’

In picking talking points or ideas or guests, they aren’t independent in the sense that they pick them completely freely, but they’re picked through the lens of being ‘anti-system.’

This is not to make any moral judgement about the position, or about any specific claim or opinion being right or wrong, only to note how it began to emerge. It is, of course, significant that the IDW, Pizzagate, Brexit, Trump, Gamergate, the rise of Jordan Peterson, and Bret Weinstein’s lawsuit against Evergreen and resignation all happened at around the same time. Despite the diversity of opinion on many topics, there was conformity on a central one – they were all, in some way, anti-system, anti-establishment, anti-mainstream media.

The IDW moment was notable because its members seemed able to define themselves by something they had in common despite claiming to have significant disagreements on other issues.

Holt writes of the IDW that, ‘what they have in common, according to their own descriptions (and the famous article by Weiss in NYT), is that they see themselves as “renegades” who have been ousted from mainstream platforms as a consequence of stating uncomfortable facts and opinions.’

Diversity was starting to come very loosely together, but only in opposition – not via specific common ideas but by what they defined themselves against.

 

The Ideology of the New Elites

Today, Joe Rogan has over 14 million listeners. Jordan Peterson’s videos get millions of views each. The Daily Wire revealed in 2022 that it had 600,000 subscribers. Ben Shapiro has 7 million subscribers, and each of his videos gets watched hundreds of thousands of times. Lex Fridman has 4 million subscribers and gets millions of views per video.

But in some senses, the death of the MSM has been greatly exaggerated. Whereas CNN, Fox News, and BBC ‘viewing’ figures, for example, are in decline, that is partly because more people now visit their websites rather than watch television. The decline of the MSM depends on the organisation and the metric. BBC News website figures are increasing; in April 2021, the site had 1 billion visits. Musk pointed to the decline in website views at Bloomberg as evidence of the demise of legacy media. However, by other metrics – profit or Instagram followers, for example – Bloomberg is actually growing. And according to PressGazette, the Daily Wire’s traffic is declining faster than Bloomberg’s, while Breitbart’s is down by 87%.

Similarly, NYT website visits are growing, with around half a billion per month. Other traditional organisations like People, USA Today, Forbes, Newsweek, and Politico are doing quite well, with all of their figures rising. In the UK, the old newspapers are declining, but there is some change: the Telegraph is currently seeing month-on-month increases, and some, like the Financial Times, are growing. It’s a complex story, with many ‘down’ by traditional measures but ‘up’ by new measures like subscriptions, followers, and, importantly, profit. So while the Washington Post is not doing well, Newsmax is surging.

What is true is that trust in old media is at an all-time low. However, with more choice, more narratives, more diverse opinion, and more accountability, is this surprising?

Two trends are at least notable. First, the death knell of the mainstream media is yet to be rung: no one else is close to the sorts of views the BBC or NYT get. On the other hand, Rogan’s figures, and the appeal, book sales, and reach of someone like Peterson, are very significant. These are, after all, individuals, not organisations, and their influence is undeniable. Numbers aside, they are a cultural force. They are a new type of elite figure.

These new elite figures that have grown up on the internet – Peterson, Rogan, Shapiro, Russell Brand, the Weinsteins, Dave Rubin, Lex Fridman, Tim Pool, Triggernometry, and many others – are all, in some part, driven by their opposition to old media. That is, first and foremost, the base cultural water they swim in.

Culture is a strange thing. In some sense it’s like a common tree that we pluck language, ideas, art, jokes, music, or hobbies from. It’s a constellation – it shifts and moves but is loosely identifiable.

If culture is a tree, and anti-mainstream media is the trunk of the new elite, what sorts of other branches will likely grow from it? Branches are loose. They are not all the same. Some snap off. Some change. But they are loosely, often there.

The first is populism.

The idea that an establishment media system has failed lends itself to populism, which is less about who’s right or wrong, what’s driven the failures, or what to do about them, and more about framing an issue as one defined by the people vs the elites.

Populism is a difficult term. On the surface, it’s just an appeal to what’s popular, what the people want, what ordinary Americans or average Brits are saying. The French philosopher Pierre-André Taguieff described populism as “the appeal to the people, at the same time as demos and as ethnos, against the elites, and against the foreigners”.

Populism frames everything in terms of an us and a them. The political scientist Cas Mudde, for example, defines populism as appealing to a ‘pure people’ against a ‘corrupt elite’.

This is why populism often uses language like the ‘heartland’ or middle America – average, ordinary, hardworking, honest, people, just trying to get along. And the elites in the swamp are lazy, corrupt, enemies of the people.

The problem with populism is there is rarely an us and a them. The so-called ‘people’ are always fractured, have a diverse set of views, and vary from group to group, time to time.

To say the MSM is corrupt and ordinary people just want honest reporting is a populist statement. It might be true in certain instances, but it’s a generalisation that becomes quite meaningless when we ask what corruption means, which journalists we are talking about, and which ‘people’ we’re referencing.

However, the populist frame is enticing to a new elite figure outside the mainstream media. Look at how Russell Brand and Jordan Peterson talk about farmers. The farmers are pure, salt of the earth, decent, hardworking and honest. They’re against the elite globalists.

The reality is that, first, there are many different farmers – left, right, poor, rich, corporate, family, struggling, profitable. And the issues are usually not farmers vs the elite, but different interest groups’ influence on regulation, climate change, subsidies, and other issues.

To populists, instead, it all gets subsumed under the framing of ordinary vs elite. Ordinary is attractive. It appeals to more people. If you are, statistically speaking, an ordinary person, why shouldn’t I address you – the millions – against the elite? It’s a rhetorical numbers game, because the people always outnumber the elite. If I can frame an issue that way, it’s going to appeal to more people. There is a strong, almost magnetic, linguistic incentive to talk in this way.

Peterson consistently rails against elites at universities, professional psychological associations and bodies, woke institutions and media – his Twitter timeline is full of condemnation of the elite. Yet Peterson, Brand, Rubin, and the Weinsteins are all elites themselves.

I’m not pointing to any instance of being right or wrong about any particular issue. Only how the conversation is framed. Anti-media establishment very often becomes populism.

The second tenet is anti-wokeism.

Why does almost every one of these figures tend to define themselves in terms of, or at least often refer back to, anti-wokeism?

Anti-wokeism seems to be the natural result of being anti-establishment, outside the mainstream, and populist. After all, it’s the liberal elites that are woke. Like the PC moment, urbane, middle-class, educated academics, politicians, and journalists want to tell the ordinary people how to think, what to believe, and who to vote for. They like to impose their ethical worldview on the rest of the world. They are, in short, snobs.

The critique here is very much a continuation of Roger Ailes and Fox News. Anyone anti-system naturally doesn’t like to conform. So anti-wokeism is natural to disgruntled academics like the Weinsteins and Peterson.

Someone like the Critical Drinker can pop up on many of these channels because his film reviews follow the same anti-establishment, populist, anti-woke pattern – Hollywood is woke, the elites are ruining movies, and people just want X from their films.

The third tenet is freedom of speech.

Any system of authority imposes its rules, norms, and ideas on the society and people it seeks to convince, propagandise, educate, inform, control – whatever you want to call it. This happens to varying degrees. Sometimes it can be good, as in education, or in regulating the fringes of speech – say, pharmaceutical advertising. Sometimes it can be bad, in the form of tyranny and propaganda, or even just slight overreach.

But being anti-establishment naturally lends itself to being very pro-freedom of speech, in its different guises.

Holt writes, for example, that, ‘What unites [the IDW] is not primarily a common political or ideological agenda, but rather a sense that academic and intellectual freedom is seriously under threat because universities and the media are so influenced by left-wing identity politics and political correctness.’

When figures like Rogan, Peterson, the Weinsteins, or Shapiro get together, they might disagree on many things, but those things are less likely to come up. What they’re united by is their opposition to the establishment, the idea that wokeism has gotten out of hand and the government or the media are censorious – in other words, freedom of speech is fundamental.

Rogan says, for example, that he voted for Bernie Sanders and wants higher taxes, but that’s unlikely to come up when he talks to Musk, Peterson or James Lindsay – freedom of speech and wokeism will.

What’s of note is not that they might disagree on issues, but that they agree enough on certain issues to have that conversation around them and it not get acrimonious. This of course isn’t always the case, but it is the norm.

Their discussion is tailored to the person and revolves around the new elite talking points. Anti-establishment, anti-wokeism, freedom of speech. Tucker Carlson and Russell Brand’s conversations are a masterclass in this dynamic.

But we could broaden this out from freedom of speech to freedom more broadly. Vaccine hesitancy, for example, is common to all of these figures and is driven by the same distrust in the establishment.

Again, none of this is a moral judgement – we all shape our topics of conversation depending on who we’re with and where we are and what shared interests we have – we’re just trying to work out the logic of how these conversations are formed.

The fourth tenet is the mix of popular and political cultures.

For much of history, especially in Europe, there was an idea – which now seems pretentious – of high and low culture. Theatre, politics, and grand tours for the aristocracy; drinking, ballads, and sports for the working class.

Politicians would keep outsiders at bay with complex cultural practices demarcating what’s proper. References to opera or Sophocles in debates about policy, for example.

The 20th century’s modern and postmodern artistic and literary movements were known for mixing high and low culture: Andy Warhol using consumer images in art, novels and films turning to everyday life. There’s an entire fascinating literature on this.

Desperate to distinguish themselves from pompous metropolitan elites, new elite figures have a tendency to draw from popular culture and present themselves as ordinary. Carlson, for example, presents from a shed as if he’s a regular American in his garden.

Often, clearly with Carlson, it’s a cynical ploy. Politicians do it all the time – they’re desperate to appear normal.

But with internet culture it’s often genuine too. Bill Maher smoking weed in his basement on his podcast. Rogan alternates between serious ‘intellectual’ conversations and getting drunk with comedians and doing MMA shows.

What’s interesting, I think, is how this new expression of everyday experience on the internet – vlogging, chatting with friends, doing normal things outside of television studios – gets co-opted and used by new elite figures.

Comedy is often naturally anti-establishment, so Peterson, Carlson, and Robert F. Kennedy can fit quite comfortably on someone like Theo Von’s podcast. In fact, RFK talks to a lot of comedians. The Triggernometry hosts, Russell Brand, and anti-woke UK pundit Andrew Doyle – the voice behind Titania McGrath – are all comedians.

Comedy is the perfect vehicle for conspiracy theories about vaccines, the World Economic Forum, the elites who are all covering up secrets. Alex Jones and Rogan can jump naturally from UFOs to vaccines. For Graham Hancock, the critique of archaeology as a discipline is fed through an entertaining narrative of a ‘lost civilization’. The memeification of politics turns complex issues into shareable soundbites.

These tenets – this constellation, these branches – act as incentives, impulses, the cultural water of the new elites; they’re branches that keep the tree together, acting as a social glue, and it’s around them that new elite social groups start to form.

 

New Elites, Assemble!

There is a large literature of studies on social groups. Social groups are, of course, a fundamental part of human life, and a group needs some principles in common. Clubs form around shared interests, political parties around ideas, friendship groups around hobbies or shared humour, media organisations out of a set of beliefs.

Being part of a group – whether geographic or cultural: a Midwesterner, a banker, a leftist, an impressionist artist – provides a set of cultural norms, social expectations, dominant ideas, and informal rules and methods – a grounding for identity. Being part of a group, officially or informally, is rewarding. Some groups confer status and social capital, connections, and a platform.

Bret Weinstein would not be so well known to us if he didn’t have a group affinity with other new elite figures like Peterson, Rogan, and Alex Jones.

There is a powerful incentive to agree with Joe Rogan on his podcast, to get an invite to dinner with him, to perform at his comedy club, to get him to put you in touch with Jordan Peterson.

These are the same incentives that play out in the mainstream media, as Chomsky and Herman pointed out. What’s interesting is how they’re also playing out in the transition from a diverse internet to a more homogenous one.

Many studies show how people in groups mimic the behaviour of others in the group, conform their beliefs to the group to get accepted, and tend to point out the problems with other groups while ignoring their own.

In one famous study on conformity, in 1951, psychologist Stanley Schachter observed groups discussing a court case. He found that most of the communication was directed towards bringing dissident voices into line. Furthermore, when members were asked to rate each other, the dissenter was rated the most disliked.

Solomon Asch’s influential experiment showed participants lines of slightly different lengths. Each person had to call out which lines were longer or shorter than the others. But Asch included actors who called out the wrong answer. When all of the actors in a group said a shorter line was the longest, the participant tended to conform to the group. Only a quarter never conformed; around 5% conformed on all 12 trials; and three quarters conformed at least once.

A similar experiment was conducted with pictures of a lineup. If other actors in the group gave the wrong answer, the participants were more likely to conform and follow.

Studies like this show not that people want to conform to fit in – although that is often true – but that they do so often without even knowing it. The social group we’re in directs our opinions before they’re even formed. Those individuals actually saw that line as longer.

Psychologist Charles Stangor writes, ‘conformity occurs not so much from the pursuit of valid knowledge, but rather to gain social rewards, such as the pleasure of belonging and being accepted by a group that we care about, and to avoid social costs or punishments, such as being ostracized, embarrassed, or ridiculed by others’.

In another study, researchers gave people cards with different traits on them. They were then asked to sort the cards into piles for different groups – women, young people, old people, students, and so on.

They found that people perceive out-groups – groups other than their own – as more homogenous than their own group.

Men judging women used fewer traits; the young judging the old used fewer traits.

In other words, there is an incentive to label the out-group – the mainstream media, the establishment, the old elites – by a homogenous label like corrupt, elitist, tyrannical, and people in the in-group as more diverse, plural, and decent.

New elite figures often describe themselves as diverse – Lex Fridman says he talks to all sides, Rogan that he’s on the left, Brand that he’s talking across the divide – while describing the MSM as a corrupt, homogenous out-group.

When you add the powerful incentive to form a group into our constellation of tenets, the magnetic effect of the social glue is compounded.

Texas is even becoming a bit of a hub. Rogan moved there from California. Lex Fridman moved there. I believe Musk is based there. Comedians like Gillis and Von have moved or are thinking of moving there. The Triggernometry bros spent time there.

Some hosts, like Chris Williamson of Modern Wisdom, even moved from the UK to the area, became friends with new elite figures like Michael Malice and Eric Weinstein, and became physically integrated into the circuit.

Konstantin Kisin reflected that his Oxford Union speech on wokeism opened doors for him in America, to people like Eric Weinstein.

Group formation psychology, anti-establishment talking points, populism, anti-wokeism, free speech, and comedy/conspiracy, all hang together in a constellation defining and shaping the views of the new elites. They act as a honey pot, a temptation, a powerful incentive to get views.

If you wanted to start a YouTube channel, there’s no better roadmap to follow. The gamut of low-grade copycat channels popping up is a testament to this. None of them has any real qualifications, specialist expertise, or anything new to say – but channels like RattleSnakeTV use shorts to piggyback on new media clips, using sensationalist titles to get millions of views. Or take this guy’s top viewed videos – Tate, Peterson, and David Icke (and if you’re lucky enough not to know who any of those three are, please, I beg you: you’ve won. Stop this video, log off, throw your laptop into the sea, and retire to a nice coastal village).

Ok, so just to illustrate all of this, let’s finish this section with a quick case study: Chris Williamson’s Modern Wisdom. As we do, bear in mind there are millions of experts around the world who could provide ‘modern wisdom’ – philosophers, historians, politicians from other countries. Scroll through and listen to a few episodes of The Rest is History or the Ezra Klein Show or Stuff You Should Know, or whatever interests you, and you’ll get a sense of the range that’s possible. What I want to focus on is how a show ostensibly about modern wisdom gets shaped by new elite discourse.

Williamson was a reality TV dating show contestant in the UK who says he had a crisis of confidence about the sort of party-boy lifestyle he was leading and started Modern Wisdom to search out exactly that. The early show included clips about life hacks, relationships, and fitness, before he started getting guest interviews with a range of psychologists, fitness experts, and politics professors. There was a decent range.

There were also some anti-woke populist figures – people like Dave Rubin and Douglas Murray. But it’s securing an interview with Jordan Peterson in 2021 that gives the channel its first small shot in the arm. Even then, views continue at a slow pace. He interviews Peterson again in a video that now has 4.8 million views.

From then on, the channel starts shifting to new elite guests, talking points, and titles: the collapse of mainstream media, cancellation, critiques of Black Lives Matter, the legacy media is lying to you, more cancellation, why does Hollywood hate men, Tucker Carlson destroys mainstream media, and a lot of Peterson, Eric Weinstein, and Douglas Murray.

This is not to say that Williamson isn’t a decent, honest, well-intentioned guy who genuinely believes these things – I don’t know. It’s only to lay out the logic of how moving towards these individuals and beliefs is very rewarding. Williamson is particularly interested in the end of the mainstream media, the tyranny of the woke, Diversity, Equity and Inclusion – all of the branches we laid out.

Williamson’s interviews with Eric Weinstein are almost perfect examples of the ideological constellation that drives these conversations and the group formation around them.

From the very beginning they’re onto woke DEI, that you apparently ‘can’t talk about it’, the secret establishment rules, and how outsiders are punished.

They discuss Claudine Gay’s resignation from the presidency of Harvard University after it was discovered she had plagiarised several snippets of text without proper attribution. Gay acknowledged mistakes but claimed they were accidental and not substantive, and stepped down from her role. Many academics defended her, including one she had plagiarised, who said, ‘From my perspective, what she did was trivial—wholly inconsequential.’ She had, it turned out, included a technical description from someone else without the proper reference.

Arguably this is still bad – below the standard acceptable for the president of a major university – and reason enough that she should step down. However, Williamson’s take is to quote the novelist Howard Jacobson, who said he hoped the incident ‘would be the start of people who knew nothing losing their jobs.’ With a wry smile, Williamson and Weinstein frame it in the usual anti-woke, anti-DEI, anti-establishment-liberal-elite constellation.

Gay is clearly a respected academic who’s published many social science papers on race in America. One, for example, is a study on the link between having black representatives and political engagement more broadly. It’s been cited over 500 times.

Yet Williamson – a club organiser and reality TV contestant – with Weinstein, can confidently say with a cocky smirk that this is ‘someone who knows nothing’ and hope it’s the start of people like her losing their jobs.

This type of conversation is only explainable by the constellation of new elite ideology that incentivises the direction of podcasts like Modern Wisdom. What you get is a relatively constrained set of parameters through which attention is directed. We can, after all, only focus on a finite set of ideas at a time.

It’s the sort of conversational frame repeated across the new elites. The titles of Brand’s videos are all Elon WARNED, Tucker REVEALS, Rogan BLASTS. Dave Rubin’s are the same. It’s why someone like Graham Hancock can do the rounds – a man who claims to be ostracised by the establishment because he challenges its lazy groupthink. Hancock doesn’t just believe that there are lost advanced civilisations in the past and that they are the key to history, but that not finding them is a failure of the academic establishment.

In fact, it’s illustrative how much Hancock’s epistemic populism aligns with the other figures’. Peterson, Weinstein, Hancock – they’re all populist because each has an exciting theory of everything (literally, in Weinstein’s case) that could help humanity and that’s being suppressed by the elites.

Economic ideas? Policy? Sociologists, or any historian who isn’t Niall Ferguson? Scientists and engineers who aren’t Musk? No – if you look through Triggernometry, Modern Wisdom, or Dave Rubin’s channel, it’s this stuff that gets the most views.

These are all political conversations. Yet if you look at polls of issues people think are the most important, the responses will be the economy, healthcare, education, housing, transport, immigration, welfare. And out of all the interesting academics, experts, countries, historical periods, philosophical ideas, political alternatives, novelists, poets, filmmakers and artists in the world, this is what these figures get drawn towards. This is the shape of new elite discourse.

 

Who’s Right? Who’s Biased?

I am trying my best to be in some sense neutral. It’s perfectly reasonable for Gay to step down. It’s reasonable to have discussions about university reading lists. People and institutions can, of course, be overly censorious, and free speech is fundamental. What I am pointing to is the framing. The incentive is to turn from reasonable debate to culture war, from a question about policy to populism vs the elites, from a question of justice to the woke being religious fanatics. Of course, the left have their biases too. And the MSM, as we’ve seen, have their own frames of biases. So how do we make sense of any of this?

Studies of bias have tended to find lots of different types. In one review, scholars laid out seventeen, including confirmation bias, spin and loaded language, choosing what to cover, exclusion, ideological bias, placement bias, sensationalism, the size or length of coverage, and so on. Bias can appear at the sentence level or the organisational level. But as we’ve seen, it also changes from period to period and place to place. The early press had one set of biases; the industrialists, the new elites, and the BBC each have another.

The biggest problem is that most of the time, supposed ‘bias’ or ‘propaganda’ is indistinguishable from what a person just really thinks. What’s the difference between bias and opinion, for example?

There are many clear cases of deceit or manipulation, on all sides – CNN doctoring a photo of Rogan to make him look more unwell, for example. But more often than not, ‘bias’ is less about deceit and more about framing.

Konstantin Kisin of Triggernometry points to the MSM taking Trump quotes out of context as an example of biased media, while allowing themselves a lot more latitude in the sensationalist titling of their own videos – like ‘Critical Race Theory Made Me Suicidal’, ‘BLM Stands With Hamas,’ ‘This is why THEY lied about our history,’ and many others. Is this any better?

Similarly, Chomsky and Herman’s use of the word propaganda has been criticised for giving the impression that the bias is purposeful manipulation. The truth is, the tendencies they identified in Manufacturing Consent – the press being overly patriotic, anti-communist, pro-business, selectively condemnatory – simply reflected how most Americans thought at the time.

Similarly, the filter model they adopted doesn’t explain how anti-capitalist news ever gets through the filters at all. Anti-monopoly investigations into companies like Rockefeller’s Standard Oil in the 19th century, the coverage of climate change, support for social services – this kind of journalism does occasionally get through. To pick two recent examples, ITV broadcast a hugely influential drama on the Post Office scandal here in the UK, and Channel 4 often broadcasts programmes like one that asked why our water companies are paying shareholders dividends while polluting our rivers.

The reason stories like this do get covered is because they’ll be popular and so producers, executives, and owners are likely to support them.

That said, it’s clear that there are limits to what will fit in the frame. This explains why the media prefer socially liberal topics that support popular progressive ideas without requiring much criticism of the capitalist economy that pays their not insubstantial wages. Is it propaganda to be in favour of the status quo? Or are you just more likely to be pretty happy with the system if you’re a journalist at the NYT living more than comfortably?

I’ve been reading and watching a lot of different media in making this and it is difficult to generalise. The BBC is different to CNN, Fox News to the NYT, Chris Williamson different to Joe Rogan. Living in the UK, I don’t have much familiarity with the American channels, other than the clips I see, and I seem to get most of my news from lots of different places.

Ultimately, it is one big ecosystem. Maybe the MSM was right to be mostly sceptical of Brexit – after all, most of the ‘experts’, studies, rhetoric, polling, and so on supported that scepticism. However, that doesn’t mean they didn’t miss something. Or that something like GB News isn’t, in fact, the child of that failure.

If everyone has biases, then it makes little sense to criticise them all with the same broad brush, and it’s the generalisations – elite mainstream vs the people, woke vs anti-woke, free speech vs anti-free speech – that are likely to be least useful. We need less rhetoric and more granular, specific, rational conversations.

Maybe then instead of bias per se we should look at those trends that run through both the old MSM and the new elites, trying to work out why new media seems to be going in the same direction.

 

Sensationalism

There is an inevitable emotional incentive to sensationalise, to point to moral panics, to lean on outrage. Moral panics – whether about garrotting, gay rights, drag shows, or conspiracies – have the benefit of being targeted at a small minority of supposed deviants who can be blamed for the problems society is facing.

Is there much difference between the moralising about AIDS in the 80s and the scapegoating of trans people today? Take these headlines from the 80s. The Sunday Express asked, ‘If AIDS is not an Act of God with consequences just as frightful as fire and brimstone, then just what is it?’. A Sun headline read, ‘AIDS is the wrath of God, says vicar’. Another Daily Express headline: ‘AIDS: Why must the innocent suffer?’, about using animals to test a potential cure. It was commonly called the ‘gay plague’. In 1986, the Star said it was a scandal that there were ‘GAY LOVERS ON ROYAL YACHT.’ Another Sun story said, ‘I’D SHOOT MY SON IF HE HAD AIDS, Says Vicar! He would pull trigger on rest of his family’.

Sociologist Jeffrey Weeks describes the moral panic like this: ‘the definition of a threat to a particular event (a youthful ‘riot’, a sexual scandal); the stereotyping of the main characters in the mass media as particular species of monsters (the prostitute as ‘fallen woman’, the paedophile as ‘child molester’); a spiralling escalation of the perceived threat, leading to a taking up of absolutist positions and the manning of moral barricades; the emergence of an imaginary solution—in tougher laws, moral isolation, a symbolic court action; followed by the subsidence of the anxiety, with its victims left to endure the new proscription, social climate and legal penalties.’

Again, there may be rational conversations to have on the details of certain issues, but the frame is that a woke elite is forcing their moral worldview on ordinary people. Could it not have been said that the 1967 act to decriminalise homosexuality in Britain was the act of a woke establishment and academics? Could not almost all of the headlines about AIDS be reworked to include the word trans and be indistinguishable from new elite talking points today?

Some subjects are matters of personal taste. Literature, film, culture, poetry, art, podcasts – talk about what you want. But with politics, economics, the future of countries, the news, standards of evidence have to be applied more rigorously. What really affects people's lives? What do people really want addressed? What are the most important issues?

Take this recent study that found that fewer than 1 in 1000 university courses in America contain references to critical race theory or ‘woke’ topics like Diversity, Equity and Inclusion. Is this represented proportionally in new media discussions? No.

 

Free Speech

The same applies to free speech. Obviously both the old media and the new elites will always claim to be fighting for the truth, fighting for freedom. A Daily Mail headline could be a Dave Rubin title.

In 2012, there was an investigation into phone hacking by the tabloid press in the UK, including the hacking of a murdered 13-year-old girl's answerphone. In response, The Sun protested that 'this witch-hunt has put us behind ex-Soviet states on Press freedom'.

Free speech is too important an issue to be used as an excuse for bad behaviour – but it always has been. The entire discussion of free speech gets reduced to a populist narrative of good vs evil: the people who deserve to speak freely vs the elites who want to shut them up. To The Sun and many new elites, however, speaking freely apparently means speaking without consequence, rebuttal, or rules.

Free speech is not black and white. There is no such thing as free speech absolutism, even in the US with its First Amendment. We have copyright, libel, slander, and advertising standards and regulation. We have etiquette, codes of ethics, responsibility, spam filters, and moderation on social media platforms. So free speech is not a woke elite vs ordinary people issue. It's an issue that often has to be decided by careful discussion of the particular case.

 

Beware the Guru

I’ve borrowed guru from the Decoding the Gurus podcast here, which unpacks some of the talking points in this space.

But take a look at this event: Dissident Dialogues – "The World's Leading Thinkers". Some of these people might appropriately sit under that subheading, but to apply it to the Triggernometry hosts or Chris Williamson – even if you enjoy their interviews – seems a disservice to the millions of experts, scholars, and leaders around the world who have published books and studies. The selection mechanism is not expertise but influence.

Chomsky and Herman pointed to how a Soviet defector became the US media’s favourite expert on USSR weapons and intelligence because he was, of course, pro-US policy.

Which reminded me of how the North Korean defector Yeonmi Park became a guest favourite on the new elite circuit, not just because of her insights into life in North Korea, but because she was anti-woke, saying, for example, that 'cancel culture' at U.S. colleges is the first step toward North Korean-style firing squads.

Organisations – online and off – pick experts in a way that suits the wider ideological constellation of their worldview. If someone is popping up talking about a lot of issues, whether on the BBC or across YouTube, be sceptical.

 

The Market Always Wins

Underlying all of this is the worst incentive of all. The incentive for profit instead of the incentive for truth. From the early press, through to television, through to YouTube today, the trend in political content is away from diversity towards conglomerates that put flashy sets, sensationalist titles, and populist topics first.

This trend puts the plurality of smaller blogs, channels, and podcasts out of business because the conglomerates take up all the air. Expensive, professional sets by nature look more trustworthy. Diary of a CEO and Modern Wisdom look better than any niche podcast. They get the big guests, big advertisers, and the resulting big money.

Diary of a CEO and Daily Wire spend fortunes on Facebook advertising, testing thumbnails, investing in studios. This is exactly what happened to the radical press in the nineteenth century, and television went down this route too. The diversity of the Enlightenment-era press was replaced by tabloid newspapers, and the early idealism of television was replaced by easily consumable infotainment.

In 1958 there was a famous scandal when an American quiz show was fixed so that the most popular contestants could stay on to boost ratings. When this was discovered, it caused outrage. To some, it was symbolic of the superficial direction television was heading in – popularity over truth.

Why does Chris Williamson think modern wisdom is to be found in Eric Weinstein's head? Because Weinstein is a fixed quiz show contestant, giving popular answers to popular questions, fitting perfectly into the new elite circuit.

If you want success then popularity will always trump truth. You will always give in to the temptation towards more clickbait sensationalist headlines, thumbnails, talking points, and guests.

Because underneath it all, none of them are anti-elite – all of them are or are becoming elites. And so the temptation will always be to avoid criticising the market forces, the advertisers, the system that they benefit from.

The real divide then isn’t between old media and new – it’s diverse, honest, broad, plural, truthful, reasonable conversation vs sensationalism, populism, clickbait, and moral panics.

There’s always, as we’ve seen, been cross-over. In fact, the market logic that incentivises and rewards grabbing eyeballs, distracting from the real issues, and stoking up fear, is a subtle logic that ultimately underlies both. It’s why someone like Douglas Murray can smoothly alternate between the two while claiming not to trust mainstream media. Ugh – you’ve just been on Fox News, Sky News Australia, and written a column in the New York Post, Douglas.

Piers Morgan edited The Mirror at the height of tabloid phone hacking – he knows how to whip up a crowd, and has moved quite frictionlessly from the MSM to new media, piggybacking off new media figures with all of the predictable title, topic, and guest strategies that now make him indistinguishable from a YouTuber.

Peterson similarly rails against the elites while writing for their newspapers, appearing on television, and talking at conservative conferences.

None of them, despite what they say, are anti-elite. They’re just anti-left. And what’s most interesting is not the divide between old and new media, but that the rules of the game between old media and new are so similar.

 

A Better Media (What To Do?!)

We need three things – awareness, organisation, and people powered media.

First, I’m a critical person. But I think it’s optimistic to point out that right now, the media landscape is as diverse as its ever been. The range of content and mediums available to us has never been better. Amongst all the bias, on all sides, there’s great journalism going on – Pulitzer prize winners in the mainstream, five hour interviews on podcasts, niche subjects on YouTube.

But the monster in the room is the profit incentive – the incentive to be popular over truth – which incentivises the big sensationalist clickbait guests with the next big theory of everything. Everyone has to play the game a little bit – make eye-catching thumbnails and cover popular talking points – but when that takes you away from what’s important, what should be significant, what’s truthful, decent, and honest, then you become, to use an overused word, a grifter.

Sometimes a grifter is honest, sometimes, like a broken clock, a grifter can be right, but a grifter isn’t trustworthy over time. You can tell a grifter by the titles used by people like Rubin and Brand – the yellow journalists of our day. They advertise that they’re motivated by popularity over truth in their titles – it’s all ‘chilling warnings’, ‘IT’S HAPPENING’, and ‘Terrifying Truths’.

Strategies like this are ancient and inevitable, so there’s no better first defence against them than awareness. These titles and tactics should be mocked. They should be embarrassing. They should invite criticism. And they often, thankfully, do.

Second, we need organisations. There’s a common new elite talking point that we no longer need the mainstream, that they’re dying, redundant, dinosaurs. Musk talks about ‘citizen journalism’ and how he only gets his news from X. That uploading vlogs, tweeting about protests, having debates in the ‘marketplace of ideas’, is all you need.

But I don’t think the big media institutions are going anywhere. As we’ve many of them are doing better than is usually acknowledged. Furthermore, we need them. We need well-paid journalists, with competent editors, colleagues, and fact-checkers. We need networks of experts to call on, specialists, and analysts. We need organisations that have foreign correspondents, that can quickly and effectively get to another country for an unfolding story. Journalism is expensive work. Cameras, studios, archival access, travel, the clout to attract specialists are all costly.

This is revealed in the way new media figures often rely on old media to do the hard work of reporting. All of these figures criticise the legacy media establishment while relying on them to provide stories which they then sit and comment on.

Furthermore, as an independent, I can say what I want; the only real checks are myself, my reputation, and my accountability to you. Within an organisation, however, there are extra constraints on hasty mistakes, foolish misjudgements, and white lies. There are benefits to being independent and benefits to being in an organisation. I've already seen the benefit of working with an editor who has sometimes pointed out something I should check, rethink, or reword. We all need to be held to account.

In The Constitution of Knowledge, Jonathan Rauch describes the ways in which scientific, social, and journalistic knowledge is usually the result of group dynamics rather than simple individual pursuit. Experiments are carried out, facts are gathered, ideas are shared, and editors, boards, journals, professors, and peer-reviewers all give advice, feedback, and guidance. The group shapes the ideas and outcomes collaboratively. Traditional media and universities have codes of conduct and rules for practice for this reason – to coordinate individuals in a group.

New elite figures have to do this much less. And so much more comes through that filter. False news, silly takes, UFO discourse, sensationalism, extreme points of view, personal attacks are all more likely in the more individualistic new media environment. It’s the equivalent of not having that friend who might urge you to reconsider something, or read an email for you, or talk you out of something.

There are exceptions – Daily Wire have group dynamics that make them more like an old media institution, Rogan has Jamie to ‘pull that up’, and YouTube channels are getting big enough to expand into groups. But ultimately, these are still just powerful individuals rather than institutions.

And unlike, say, the BBC newsroom – which has its point of view and makes plenty of mistakes – individuals reporting on current events, especially in foreign countries, are much more likely to be unable to separate fact from fiction. Especially when countries like Russia spend millions spreading misinformation online purposefully designed to flood the information landscape and confuse.

So, that’s why organisations are important. But that’s not to say that individuals aren’t. There are benefits to being an individual or a small organisation – individuals are nimble, have few overheads, or constraints, and sometimes might be able to quickly poke holes in a story through commentary before a cumbersome organisation has time to deliberate, and equivocate, and do thorough research. They can also be more individualistic, creative, unique, or specific about what they personally think.

In some ways I think the future might belong to the middling organisations that are doing well – Daily Wire, Novara Media, TLDR News – channels that have the benefits of being both small enough to be nimble and big enough to have budget and reach.

But finally, organisations of any size, individuals too, are all subject to the temptations towards populism and profit.

It’s not enough to think that people watch and listen to what they want to. That it’s just the coherence or truthfulness of ideas that determines who wins and loses in the new media marketplace. No, the market rewards figures like Peterson, Brand, and Triggernometry – it rewards populist rhetoric, anti-woke talking points, sensationalism, moral panics – it rewards scary talking points and it rewards articulate, charismatic, enticing personalities over careful, thoughtful, honest ones. It rewards those with good looks and good looking sets. Want to start a podcast? Do you have a Hollywood CGI set like Chris Williamson? No? Really? You’re not cinematic? You must not be credible.

Which is why – and yes, I know I would say this – which is why you should support the channels and podcasts that you stand by, and why those channels should be thinking about ways to attract your support. Without the corrective of people powered, community supported, diverse and plural media, we’ll get nowhere. Diversity of opinion produced the American and French Revolutions. Without them, we’d still be serfs and subjects.

The greatest trick the elites play is in convincing the public that they are not elites at all. Everywhere, elites tell you that they are being silenced, that they are marginalised, that they are speaking for the people, speaking truth to power. What they won’t tell you is that they are the powerful; that they have the market forces of populism and business interests on their side.

Claiming to be marginalised will always be popular, while the actual marginalised remain marginal. The New Elites, despite claiming to be ostracised from the mainstream, often end up being featured in it, and have far more power than they claim.

What we really need from the media is a focus on the issues that affect people's daily lives. There's room for other stuff, of course, but fulfilling that basic requirement is how journalists, commentators, and media should be judged.

And often, new media fulfils that promise. Joe Rogan has interesting guests on, Lex Fridman hosts an interesting conversation, and Peterson is right about something. However, what I think is identifiable is a shape, an archetype, a constellation that tempts and pulls towards these ideas, talking points, this ideology. And I think for the most part it is just that – an ideological fantasy divorced from reality.

In 1987, the critic Simon Watney wrote, 'It is the central ideological business of the communications industry to retail ready-made pictures of 'human' identity, and thus recruit individual consumers to identify with them in a fantasy'.

It’s so easy to assume that the messages, ideas, and conversations we see online are individual opinions in the great marketplace of ideas and reason. It’s easy to forget that the reach, the volume, the selection, the social connections have forces behind them, forces that support some message while delegitimising others.

If the status quo is broken, and polling consistently shows what people want to focus on – the economy, schooling, healthcare, infrastructure – then ask yourself this: who's really focusing on those things, and who is choosing to continually talk about a few students, the censorious woke, the idea that bureaucrats are tyrannical? Who is actually talking to experts, academics, people with fresh ideas? And who is actually pretty successful in this new online media space?

The new media fantasy image is the noble warrior, fit and strong, with atomic habits, selling AG1, defending civilisation one podcast empire at a time, with a few exciting stories of success, entertainment, conspiracy theory, heroism, and evil along the way.

Unfortunately, the truth about political ideas, good history, studies, and discussion just doesn’t get the clicks.

 

Sources

Kevin Williams, Get Me a Murder a Day! A History of Mass Communications in Britain

Stanley Cohen, Folk Devils and Moral Panics

Kenneth Thompson, Moral Panics

James Curran and Jean Seaton, Power without Responsibility: The Press and Broadcasting in Britain

Kristoffer Holt, Right-Wing Alternative Media

Lee McIntyre, Post-Truth

Geoffrey Hughes, Political Correctness: A History of Semantics and Culture

John Nerone, The Media and Public Life: A History

Charles Stangor, Social Groups in Action and Interaction, 2nd edition

Francisco-Javier Rodrigo-Ginés, Jorge Carrillo-de-Albornoz, Laura Plaza, A systematic review on media bias detection: What is media bias, how it is expressed, and how to detect it, Expert Systems with Applications

Noam Chomsky and Edward Herman, Manufacturing Consent: The Political Economy of Mass Media

Gabriel Sherman, The Loudest Voice in the Room

Tobin Smith, Foxocracy

Charles L. Ponce de Leon, That's the Way It Is: A History of Television News in America

David Brock and Ari Rabin-Havt, The Fox Effect

Bruce Bartlett, How Fox News Changed American Media & Political Dynamics

https://www.washingtonpost.com/news/morning-mix/wp/2017/03/27/veteran-newsman-ted-koppel-tells-sean-hannity-hes-bad-for-america/

https://www.vox.com/policy-and-politics/2016/10/28/13424848/alex-jones-infowars-prisonplanet

https://lfpress.com/opinion/columnists/chambers-peterson-intellectual-populism-at-its-worst

https://www.newyorker.com/news/q-and-a/why-some-academics-are-reluctant-to-call-claudine-gay-a-plagiarist

https://www.unilad.com/celebrity/news/alix-earle-knocks-joe-rogan-off-the-top-podcast-spot-on-spotify-423445-20230926

https://www.thedailybeast.com/rfk-jrs-misinformation-campaign-is-fueled-by-comedians

How AI Was Stolen

https://www.thenandnow.co/2024/04/26/how-ai-is-being-stolen/
This is a story about stolen intelligence. It’s a long but necessary history, about the deceptive illusions of AI, about Big Tech goliaths against everyday Davids. It’s about vast treasure troves and mythical libraries of stolen data, and the internet sleuths trying to solve one of the biggest heists in history. It’s about what it means to be human, to be creative, to be free. And what the end of humanity – post-humanity, trans-humanity, the apocalypse even – looks like.

It’s an investigation into what it means to steal, to take, to replace, to colonise and conquer. Along the way we’ll learn what AI really is, how it works, and what it can teach us about intelligence – about ourselves – turning to some historical and philosophical giants along the way.

Because we have this idea that intelligence is this abstract, transcendent, disembodied thing, something unique and special, but we’ll see how intelligence is much more about the deep, deep past and the far, far future, something that reaches out powerfully through bodies, people, the world.

Sundar Pichai, CEO of Google, was reported to have claimed that, ‘AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire’. We’ll see how that might well be true. It might change everything dizzyingly quickly – and like electricity and fire, we need to find ways of making sure that vast, consequential and truly unprecedented change can be used for good – for everyone – and not evil. So we’ll get to the future, but it’s important we start with the past.

 

Contents:

 

A History of AI: God is a Logical Being

Intelligence. Knowledge. Brain. Mind. Cognition. Calculation. Thinking. Logic. 

We often use these words interchangeably, or at least with a lot of overlap, and when we do drill down into what something like ‘intelligence’ means, we find surprisingly little agreement.

Can machines be intelligent in the same way humans can? Will they surpass human intelligence? What does it really mean to be intelligent? Commenting on the first computers, the press referred to them as ‘electronic brains’.

 

Manchester, England

There was a national debate in Britain in the fifties around whether machines could think. After all, a computer in the fifties could already calculate many times faster than any human.

The father of both the computer and AI, Alan Turing, contributed to the discussion in a BBC radio broadcast in 1951, claiming that ‘it is not altogether unreasonable to describe digital computers as brains’.

This entanglement – between computers, AI, intelligence, and brains – strained the idea that AI was one thing. A thorough history would require including transistors, electricity, computers, the internet, logic, mathematics, philosophy, neurology, society. Is there any understanding of AI without these things? Where does history begin?

This ‘impossible totality’ will echo through this history, but there are two key historical moments: The Turing Test and the Dartmouth College Conference.

Turing wrote his now famous paper – Computing Machinery and Intelligence – in 1950. It began with: ‘I propose to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’.’

He suggested a test – that, for a person who didn’t know who or what they were conversing with, if talking to a machine was indistinguishable from talking to a human then it was intelligent. 

Ever since, the conditions of a Turing Test have been debated. How long should the test last? What sort of questions should be asked? Should it just be text based? What about images? Audio? One competition – the Loebner Prize – offered $100,000 to anyone who could pass the test in front of a panel of judges.

As we pass through the next 70 years, we can ask: has Turing’s Test been passed?

 

New Hampshire, USA

A few years later, in 1955, one of the founding fathers of AI, John McCarthy, and his colleagues, proposed a summer research project to debate the question of thinking machines.

When deciding on a name McCarthy chose the term ‘Artificial Intelligence’.

In the proposal, they wrote, ‘an attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves’.

The aim of the conference was to discuss questions like could machines ‘self-improve’, how neurons in the brain could be arranged to form ideas, and to discuss topics like creativity and randomness, all to contribute to research on thinking machines. The conference was attended by at least twenty now well-known figures, including the mathematician John Nash.

Along with Turing’s paper, it was a foundational moment, marking the beginning of AI’s history.

But there were already difficulties that anticipated problems the field would face to this day. Many bemoaned the ‘artificial’ part of the name McCarthy chose. Does calling it artificial intelligence not limit what we mean by intelligence? What makes it artificial? What if the foundations are not artificial but the same as human intelligence? What if machines surpass human intelligence?

There were already suggestions that the answer to these questions might not be technological, but philosophical. 

Because despite machines in some ways being more intelligent – making faster calculations, fewer mistakes – it was clear that that alone didn't account for what we call intelligence. Something was missing.

The first approach to AI, one that dominated the first few decades of research, was called the ‘symbolic’ approach.

The idea was that intelligence could be modelled symbolically by imitating or coding a digital replica of, for example, the human mind. If the mind has a movement area, you code a movement area, an emotional area, a calculating area, and so on. Symbolic approaches essentially made maps of the real world in the digital world. 

If the world can be represented symbolically, AI could approach it logically.

For example, you could symbolise a kitchen in code, symbolise a state of the kitchen as clean or dirty, then program a robot to logically approach the environment – if the kitchen is dirty then clean the kitchen.

McCarthy, a proponent of this approach, wrote: 'The idea is that an agent can represent knowledge of its world, its goals and the current situation by sentences in logic and decide what to do by [deducing] that a certain action or course of action is appropriate to achieve its goals.'

It makes sense because both humans and computers seem to work in this same way.

If the traffic light is red then stop the car. If hungry then eat. If tired then sleep.

The appeal to computer programmers was that approaching intelligence this way lined up with binary – the root of computing – that a transistor can be on or off, a 1 or 0, true or false. Red traffic light is either true or false, 1 or 0, it’s a binary logical question. If on, then stop. It seems intuitive and so building a symbolic, virtual, logical picture of the world in computers quickly became the most influential approach.
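
To make this concrete, here is a minimal sketch in Python of how a symbolic, rule-based agent might encode the kitchen and traffic light examples above. The state names and rules are invented for illustration:

# A minimal sketch of the symbolic approach: the world is represented
# as named facts, and behaviour is a list of if-then rules over them.
world = {"kitchen_dirty": True, "traffic_light": "red", "hungry": False}

def decide(state):
    # Each rule maps a logical condition on the world to an action.
    if state["traffic_light"] == "red":
        return "stop the car"
    if state["kitchen_dirty"]:
        return "clean the kitchen"
    if state["hungry"]:
        return "eat"
    return "do nothing"

print(decide(world))  # -> "stop the car"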

Computer scientist Michael Wooldridge writes that this was because, ‘It makes everything so pure. The whole problem of building an intelligent system is reduced to one of constructing a logical description of what the robot should do. And such a system is transparent: to understand why it did something, we can just look at its beliefs and its reasoning’.

But a problem quickly emerged. Knowledge turned out to be far too complex to be represented neatly by these simple true-false, if-then rules. One reason is the shades of uncertainty. 'If hungry then eat' is not exactly true or false. There's a gradient of hunger.

But another problem was that calculating what to do from these seemingly simple rules required much more knowledge and many more calculations than first assumed. The computing power of the period couldn’t keep up.

Take this simple game: The Towers of Hanoi. The object is to move the disks from the first to the last pole in the fewest number of moves without placing a larger disk on top of a smaller one.

We could symbolise the poles, the disks, and each possible move and the results of each possible move into the computer. And then a rule for what to do depending on each possible location of the disks. Relatively simple.

But consider this. With three disks this game is solvable in 7 moves. For 5 disks it takes 31 moves. For 10, it's 1,023 moves. For 20 disks, 1,048,575 moves. For 64 disks, if one disk was moved each second it would take almost 600 billion years to complete the game.
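
A few lines of Python make the explosion visible: solving n disks means solving n-1 disks twice (once to clear them off the largest disk, once to put them back), so the move count is 2^n - 1:

# Solving n disks means solving n-1 disks, moving the largest disk,
# then solving n-1 disks again: f(n) = 2*f(n-1) + 1 = 2**n - 1.
def hanoi_moves(n):
    if n == 0:
        return 0
    return 2 * hanoi_moves(n - 1) + 1

for n in [3, 5, 10, 20, 64]:
    print(n, "disks:", hanoi_moves(n), "moves")
# 3 disks: 7 moves ... 64 disks: 18446744073709551615 moves

# At one move per second, 64 disks takes roughly 585 billion years:
print(hanoi_moves(64) / (60 * 60 * 24 * 365.25 * 1e9))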

In AI this problem was named combinatorial explosion: as you increase the number of possible actions at each step, the number of possible combinations – the complexity – becomes incomprehensibly vast and technologically unmanageable.

And the Tower of Hanoi is a simple game. Combinatorial explosion became even more of a problem with games like chess or Go. And a human problem like driving is infinitely more complicated. The red light is on or off, but it might be broken, it might have graffiti or snow on it, pedestrians might walk out regardless, a child might run across – and that's just the first element of a vast environment, the impossible totality.

Approaching AI this way was called 'search' – for each move, the computer had to search through every possible scenario and every bit of information to decide what to do.

In robotics, a similar approach was being taken, and proving even more complicated.

The development of AI coincided with technological advances in other areas: infrared, radars, cameras, microphones, batteries.

In 1971, Terry Winograd at MIT was developing a program called SHRDLU that aimed to symbolically model something he called blocks world.

In this virtual world, a user could ask the program to manipulate the blocks in different ways:

Person: Pick up a big red block.
Computer: OK.
Person: Grasp the pyramid.
Computer: I DON’T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
Computer: BY “IT”, I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK.

Around the same time, researchers at the Stanford Research Institute built a real-life blocks world.

SHAKEY was a real robot with bump sensors called 'cat's whiskers' and laser range finders to measure distance.

The robotics team ran into problems similar to those of the Towers of Hanoi. The environment was much more complicated than it seemed; the room had to be painted in a specific way for the sensors to work properly.

The technology of the time could not keep up, and combinatorial explosion, the complexity of any environment, became such a problem that the 70s and 80s saw what’s now referred to as the AI winter.

 

History of AI: The Impossible Totality of Knowledge

By the 70s, some were beginning to make the case that something was being left out: knowledge.  The real world is not towers of Hanoi, robots and blocks – knowledge about the world is central. However, logic was still the key to analysing that knowledge. How could it be otherwise?

For example, if you want to know about animals, you need a database:

IF animal gives milk THEN animal is mammal
IF animal has feathers THEN animal is bird
IF animal can fly AND animal lays eggs THEN animal is bird
IF animal eats meat THEN animal is carnivore

Again, this seems relatively simple, but even an example as basic as this requires a zoologist to provide the information. We all know that mammals are milk-producing animals, but there are thousands of species of mammal and a lot of specialist knowledge. As a result, this approach was named the ‘expert systems’ approach. And it led to one of the first big AI successes.

Researchers at Stanford used this approach, working with doctors to produce MYCIN, a system to diagnose blood diseases. It used a combination of knowledge and logic.

If a blood test is X THEN perform Y.

Significantly, they realised that the application had to be credible if professionals were ever going to trust and adopt it. So MYCIN could show its workings and explain the answers it gave.
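
As a rough illustration – not MYCIN's actual rule language, which also weighed certainty factors – here is a toy forward-chaining engine over the animal rules above, which records which rules fired so it can 'show its workings':

# A toy expert system. Rules are (conditions, conclusion) pairs; the
# engine applies them until nothing new can be derived, and keeps a
# trace so it can explain its answer, as MYCIN could.
RULES = [
    ({"gives milk"}, "is mammal"),
    ({"has feathers"}, "is bird"),
    ({"can fly", "lays eggs"}, "is bird"),
    ({"eats meat"}, "is carnivore"),
]

def infer(initial_facts):
    facts, trace = set(initial_facts), []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(conditions)} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer({"can fly", "lays eggs", "eats meat"})
print(facts)  # now includes "is bird" and "is carnivore"
print(trace)  # the 'workings': which rules fired, in order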

The system was a breakthrough. At first it proved to be as good as humans at diagnosing blood diseases.

Another system, DENDRAL, used the same approach to analyse the structure of chemicals, drawing on 175,000 rules provided by chemists.

Both systems proved that this type of expert knowledge approach could work. 

The AI winter was over and significantly, research began attracting investment.

But once again, expert system developers encountered a new serious problem. The MYCIN database very quickly became outdated.

In 1983, Edward Feigenbaum, a researcher on the project, wrote, 'The knowledge is currently acquired in a very painstaking way that reminds one of cottage industries, in which individual computer scientists work with individual experts in disciplines painstakingly […]. In the decades to come, we must have more automatic means for replacing what is currently a very tedious, time-consuming, and expensive procedure. The problem of knowledge acquisition is the key bottleneck problem in artificial intelligence'.

Because of this, MYCIN was not widely adopted. It proved expensive, quickly obsolete, legally questionable, and difficult to establish widely enough with doctors. The logic was understandable – but the logistics of collecting knowledge was becoming the obvious central problem.

In the 80s, influential computer scientist Douglas Lenat began a project that intended to solve this.

Lenat wrote: [N]o powerful formalism can obviate the need for a lot of knowledge. By knowledge, we don’t just mean dry, almanack like or highly domain-specific facts. Rather, most of what we need to know to get by in the real world is… too much common-sense to be included in reference books; for example, animals live for a single solid interval of time, nothing can be in two places at once, animals don’t like pain… Perhaps the hardest truth to face, one that AI has been trying to wriggle out of for 34 years, is that there is probably no elegant, effortless way to obtain this immense knowledge base. Rather, the bulk of the effort must (at least initially) be manual entry of assertion after assertion’.

The goal of Lenat’s CYC project was to teach AI all of the knowledge we usually think of as obvious. He said: ‘an object dropped on planet Earth will fall to the ground and that it will stop moving when it hits the ground but that an object dropped in space will not fall; a plane that runs out of fuel will crash; people tend to die in plane crashes; it is dangerous to eat mushrooms you don’t recognize; red taps usually produce hot water, while blue taps usually produce cold water; … and so on’.

Lenat and his team estimated that it would take 200 years of work, and they set about laboriously entering 500,000 rules on taken-for-granted things like bread is a food or that Isaac Newton is dead.

They quickly ran into problems. The CYC project’s blind spots were illustrative of how strange knowledge can be.

In an early demonstration, it didn't know whether bread was a drink, whether the sky was blue, whether the sea was wetter than land, or whether siblings could be taller than each other.

These simple questions reveal something under-appreciated about knowledge. Often we've never explicitly considered something ourselves, yet the answer is laughably obvious. We might never have thought about whether bread is a drink, or whether one sibling can be taller than another, but when asked, we implicitly, intuitively, often non-cognitively just know the answers based on other factors.

This was a serious difficulty. No matter how much knowledge you entered, the ways that knowledge is understood, how we think about questions, the relationships between one piece of knowledge and another, the connections we draw on, are often ambiguous, unclear, and even strange.

Logic struggles with nuance, uncertainty, probability. It struggles with things we implicitly understand but also might find difficult to explicitly explain.

Take one common example you’ll find in AI handbooks:

Quakers are pacifists.
Republicans are not pacifists.
Nixon is a Republican and a Quaker.

Is Nixon a pacifist or not? A computer cannot answer this logically with this information; it sees a contradiction. A human, meanwhile, might explain the problem in many different ways, drawing on lots of different ideas – uncertainty, truthfulness, complexity, history, war, politics.
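
Run through the same kind of naive rule engine as above, the example derives both a conclusion and its negation and simply stops – a sketch:

# The Nixon example, encoded naively. Classical logic derives both
# 'pacifist' and 'not pacifist' and has no machinery for weighing
# exceptions and context the way people do.
facts = {"quaker", "republican"}
derived = set()

if "quaker" in facts:
    derived.add("pacifist")
if "republican" in facts:
    derived.add("not pacifist")

if {"pacifist", "not pacifist"} <= derived:
    print("Contradiction: cannot decide")  # all the logic can say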

The big question for proponents of expert-based knowledge systems like CYC – which still runs to this day – is whether complexities can ever be accounted for with this logic based approach.

Most intelligent questions aren’t of the if-then, yes-no, binary sort, like: is a cat a mammal? 

Consider the question ‘are taxes good?’ It’s of a radically different kind than ‘is a cat a mammal?’. Most questions rely on values, depend on contexts, definitions, assumptions, are subjective.

Wooldridge writes: ‘The main difficulty was what became known as the knowledge elicitation problem. Put simply, this is the problem of extracting knowledge from human experts and encoding it in the form of rules. Human experts often find it hard to articulate the expertise they have—the fact that they are good at something does not mean that they can tell you how they actually do it. And human experts, it transpired, were not necessarily all that eager to share their expertise’.

But CYC was on the right path. Knowledge was obviously needed. It was a question of how to get your hands on it, how to digitise it, and how to label, parse, and analyse it. As a result of this, McCarthy’s idea – that logic was the centre of intelligence – fell out of favour. The logic-centric approach was like saying a calculator is intelligent because it can perform calculations, when it doesn’t really know anything. More knowledge was key.

The same was happening in robotics. 

Australian roboticist Rodney Brooks, an innovator in the field, argued that the issue with simulations like blocks world was that they were simulated and tightly controlled. Real intelligence didn't evolve that way, and so real knowledge had to come from the real world.

He argued that perhaps intelligence wasn't something that could be coded in but was an 'emergent property' – something that emerges once all of the other components are in place. If artificial intelligence could be built up from everyday experience, genuine intelligence might develop once other, more basic conditions had been met. In other words, intelligence might be bottom-up, arising out of all of the parts, rather than top-down, imparted from a central intelligent point into all of the parts. Evolution, for example, is bottom-up, slowly adding more and more complexity to single-cell organisms until consciousness and awareness emerge.

In the 90s, Brooks led robotics research at MIT and railed against the idea that intelligence was a disembodied, abstract thing. Why could a machine beat any human at chess but not pick up a chess piece better than a child, he asked? Not only that, the child moves the hand to pick up the chess piece automatically, without any obvious complex computation going on in the brain. In fact, the brain doesn't seem to have anything like a central command centre – all of the parts interact with one another, more like a city than like a pilot flying the entire thing.

Intelligence was connected to the world, not cut off, ethereal, transcendent, and abstract.

Brooks worked on intelligence as embodied – connected to its surroundings through sensors and cameras, microphones, arms and lasers. The team built a humanoid robot called Cog. It had thermal sensors and microphones but, importantly, no central control point. Each part worked independently but interacted with the others – they called it 'decentralised intelligence'.

It was an innovative approach but could never quite work. Brooks admitted Cog lacked 'coherence'.

And by the late 90s, researchers were realising that computer power still mattered.

In 1996, IBM’s chess AI – Deep Blue – was beaten by grandmaster Gary Kasparov.

Deep Blue was an expert knowledge system – it was programmed with the help of chess players not just by calculating each possible move, but by including things like best opening moves, concepts like ‘lines of attack’, or ideas like choosing moves based on pawn position.

But IBM also threw more computing power at it. Deep Blue could search through 200 million possible moves per second with its 500 processors.

It played Kasparov again in 1997. In a milestone for AI, Deep Blue won. At first, Kasparov accused IBM of cheating, and to this day maintains foul play of a sort. In his book, he recounts an episode in which a chess player working for IBM admitted to him that: ‘Every morning we had meetings with all the team, the engineers, communication people, everybody. A professional approach such as I never saw in my life. All details were taken into account. I will tell you something which was very secret[…] One day I said, Kasparov speaks to Dokhoian after the games. I would like to know what they say. Can we change the security guard, and replace him with someone that speaks Russian? The next day they changed the guy, so I knew what they spoke about after the game’.

In other words, even with 500 processors and 200 million moves per second, IBM may still have had to program in very specific knowledge about Kasparov himself by listening in to conversations – this, if maybe apocryphal, was at least a premonition of things to come…

 

The Learning Revolution

In 2014, Google announced it was acquiring a small relatively unknown 4-year-old AI lab from the UK for $650 million. The acquisition sent shockwaves through the AI community.

DeepMind had done something that on the surface seemed quite simple: beaten an old Atari game.

But how it did it was much more interesting. New buzzwords began entering the mainstream: machine learning, deep learning, neural nets.

What those knowledge-based approaches to AI had found difficult was finding ways to successfully collect that knowledge. MYCIN had quickly become obsolete. CYC missed things that most people found obvious. Entering the totality of human knowledge was impossible, and besides, an average human doesn’t have all of that knowledge but still has the intelligence researchers were trying to replicate. 

A new approach emerged: if we can’t teach machines everything, how can we teach them to learn for themselves?

Instead of starting from having as much knowledge as possible, machine learning begins with a goal. From that goal, it acquires the knowledge it needs itself through trial and error.

Wooldridge writes, 'the goal of machine learning is to have programs that can compute a desired output from a given input, without being given an explicit recipe for how to do this'.

Incredibly, DeepMind had built an AI that could learn to play and win not just one Atari game, but many of them, all on its own.

The machine learning premise they adopted was relatively simple.

The AI was given the controls and a preference: increase the score. And then through trial and error, it would try different actions, and iterate or expand on what worked and what didn’t. A human assistant could help by nudging it in the right direction if it got stuck.

This is called ‘reinforcement learning’. If a series of actions led to the AI losing a point it would register that as likely bad, and vice versa. Then it would play the game thousands of times, building on the patterns that worked.

What was incredible was that it didn't just learn the games – it quickly became better than humans, playing 29 of the 49 games at a better-than-human level. Then it became superhuman.

The most often demonstrated example is Breakout: move the paddle, destroy the blocks with the ball. To the developers' surprise, DeepMind's agent learned to tunnel the ball up to the top so it would bounce around and destroy the blocks without the paddle having to do anything. It was described as spontaneous, independent, and creative.

Next, DeepMind beat a human player at Go, commonly believed to be harder than chess, and likely the most difficult game in the world.

Go is deceptively simple. You take turns to place a stone, trying to block out more territory than your opponent while encircling their stones to get rid of them.

AlphaGo was trained on 160,000 top games and played over 30 million games itself before beating Lee Sedol in 2016.

Remember combinatorial explosion. This was always a problem with Go. Because there are so many possibilities it’s impossible to calculate every move. 

Instead, DeepMind’s method was based on sophisticated guessing around uncertainty. It would calculate the chances of winning based on a move rather than calculating and playing through all the future moves after each move. The premise was that this is more how human intelligence works. We scan, contemplate a few moves ahead, reject, imagine a different move, and so on.

On move 37 in the match against Sedol, AlphaGo made a move that took everyone by surprise. None of the humans could understand it, and it was described as 'creative', 'unique' and 'beautiful', as well as 'inhuman', by the professionals.

The victory made headlines around the world. The age of machine learning had arrived.

 

What Are Neural Nets?

In 1991, two scientists wrote, ‘The neural network revolution has happened. We are living in the aftermath.’ 

You might have heard some new buzzwords thrown around – neural nets, deep learning, machine learning. I’ve come to believe that this revolution is probably the most historically consequential we’ll go through as a species. It’s fundamental to what’s happening with AI. So bear with me, jump on board the neural pathway rollercoaster, buckle up and get those synapses ready, and, we’ll try and make this as pain free as possible.

Remember that the symbolic approach we talked about tried to make a kind of one-to-one map of the world, and that machine learning instead learns for itself through trial and error. AI mostly does this using neural nets.

Neural nets are revolutionising the way we think about not just AI, but intelligence. They’re based on the premise that what matters are connections, patterns, pathways.

Artificial neural nets are inspired by neural nets in the brain.

Both in the brain and in artificial neural nets, you have basic building blocks: neurons, or nodes. The neurons are layered, and there are connections between them.

Each neuron can activate the next. The more neurons that are activated, the stronger the activation of the next, connected neuron. And if that neuron is firing strong enough, it will pass a threshold and fire the next neuron. And so on billions of times.

In this way intelligence can make predictions based on past experiences.

I think of neural nets – in the brain and artificially – as something like 'commonly travelled paths'. The more successfully the neurons fire, the more their connections strengthen. Hence the phrase, 'those that fire together wire together'.
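
A single artificial neuron is just this: inputs multiplied by connection strengths, summed, and passed through a threshold. A minimal sketch, with invented numbers:

# One artificial neuron: a weighted sum of its inputs, passed through
# a threshold. The weights are the 'connection strengths'; learning
# means adjusting them so useful paths fire more strongly.
def neuron(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Three incoming neurons firing, with connection strengths 0.9, 0.2, 0.7:
print(neuron([1, 1, 1], [0.9, 0.2, 0.7]))  # fires: 1.8 >= 1.0
print(neuron([0, 1, 0], [0.9, 0.2, 0.7]))  # stays quiet: 0.2 < 1.0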

So how are these used in AI?

First, you need a lot of data, and you can get it in two ways. You can feed a neural net a lot of existing data – like adding in thousands of professional Go or chess games. Or you can play games over and over, on many different computers, thousands of times. Peter Whidden has a video that shows an AI playing 20,000 games of Pokémon at once.

Ok, so once you have lots of data, the next job is to find patterns. If you know a pattern, you might be able to predict what comes next. 

ChatGPT and others are large language models – meaning they’re neural networks trained on lots of text. And I mean a lot. ChatGPT was trained on around 300 billion words of text. If you’re thinking ‘whose words’ you might be onto something we’ll get to shortly.

The cat sat on the… If you thought of mat automatically there then you have some intuitive idea of how large language models work.

Because, in 300 billion words' worth of text, that pattern comes up a lot. ChatGPT can predict that's what should come next.

But what if I say the cat sat on the… elephant?

Remember that one of the problems previous approaches ran into was that not all knowledge is binary – on or off, 1 or 0. Not all knowledge is like 'if an animal gives milk then it's a mammal'.

Neural networks are particularly powerful because they avoid this, working instead with probability, ambiguity, and uncertainty. Neural net nodes, remember, have strengths. The neurons associated with 'mat' fire strongly, so 'mat' comes out – but the neurons for other words still fire a little. If I ask for another, more random example, it can switch up to 'elephant'. And if it's looking at the patterns that follow the words 'heads', 'or', 'tails', the successive nodes are going to be pretty evenly split, 50/50, between heads and tails.

If I ask ‘are taxes good?’ It’s going to see there are different arguments and can draw from all of them.

Kate Crawford puts it like this: ‘they started using statistical methods that focused more on how often words appeared in relation to one another, rather than trying to teach computers a rules-based approach using grammatical principles or linguistic features’.
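
Crawford's point – counting how often words appear in relation to one another – can be sketched in a few lines. Here's a toy bigram model; real large language models use deep neural nets over billions of words, but the predict-the-next-word principle is the same:

from collections import Counter, defaultdict

# Count which word follows which in a (tiny, invented) corpus,
# then predict the likeliest next word.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

print(follows["the"])                 # cat, mat, dog, rug all possible
print(follows["sat"].most_common(1))  # [('on', 2)] - the likeliest next word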

The same applies to images. 

How do you teach a computer that an image of an A is an A, or that a 9 is a 9? Every example is slightly different. Sometimes they're in photos, on signposts, written, scribbled, at strange angles, in different shades, with imperfections, even upside down. If you feed the neural net millions of drawings, photos, and designs of a 9, it can learn which patterns repeat until it can recognise a 9 on its own.

The problem is you need a lot of examples. In fact, this is what you’re doing when you fill in those reCAPTCHA’s – you’re helping Google train its AI.

There are some sources below if you want to learn more about neural nets. The video by 3Blue1Brown on training with handwritten numbers and letters is particularly good.

Developer Tim Dettmers describes deep learning like this: ‘(1) take some data, (2) train a model on that data, and (3) use the trained model to make predictions on new data’.
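
Those three steps map directly onto a few lines of code. A minimal sketch using PyTorch and the standard MNIST dataset of handwritten digits – the same '9 is a 9' problem described above:

import torch
from torch import nn
from torchvision import datasets, transforms

# (1) take some data: 60,000 labelled images of handwritten digits.
data = datasets.MNIST("data", download=True, transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(data, batch_size=64, shuffle=True)

# A small neural net: 784 pixel inputs -> hidden layer -> 10 digit scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# (2) train the model: strengthen the connections that reduce the error.
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

# (3) use the trained model to make a prediction on an image.
image, label = data[0]
print(model(image).argmax().item(), "vs actual:", label)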

The neural network revolution has some ground-breaking ramifications. First, intelligence isn't this abstract, transcendental, ethereal thing: connections between things are what matter, and those connections allow us and AI to predict the next move. We'll get back to this. Second, machine learning researchers were realising that, for this to work, they needed a lot of knowledge – a lot of data. It was no use getting chemists and blood-diagnostic experts to come into the lab once a month and laboriously type in their latest research. Plus it was expensive.

In 2017 an artificial neural net could have around 1 million nodes. The human brain has around 100 billion. A bee has about 1 million too, and a bee is pretty intelligent. But one company was about to smash past that record, surpassing humans as they went.

By the 2010s, fast internet was rolling out all over the world, phones with cameras were in everyone’s pockets, new media and information broadcast on anything anyone wanted to know. We were stepping into the age of big data.

AI was about to become a teenager.

 

OpenAI and ChatGPT

Silicon Valley

There’s a story – likely apocryphal– that Google founder Larry Page called Elon Musk a speciesist because he preferred to protect human life over other forms of life, privileged human life over potential artificial superintelligence. If AI becomes better and more important than humans then there’s really no reason to prioritise, privilege, and protect humans at all. Maybe the robots really should take over.

Musk says that this caused him to worry about the future of AI, especially as Google, after acquiring DeepMind, was at the forefront of AI development.

And so, despite being a multi-billion dollar corporate businessman himself, Musk became concerned that AI was being developed behind the closed doors of multi-billion dollar corporate businessmen.

In 2015 he co-founded OpenAI. Its goal was to be the first to develop artificial general intelligence in a safe, open, and humane way.

AI was getting very good at performing narrow tasks. Google Translate, social media algorithms, GPS navigation, scientific research tools, chatbots, and even calculators are referred to as 'narrow artificial intelligence'.

Narrow AI has been something of a quiet revolution. It’s already slowly and pervasively everywhere. There are over 30 million robots in our homes and 3 million in factories. Soon everything will be infused with narrow AI – from your kettle and your lawnmower to your door knobs and shoes.

The purpose of OpenAI was to pursue that more general artificial intelligence – what we think of when we see AI in movies – intelligence that can cross over from task to task, do unexpected, creative things, and act, broadly, like a human does.

AI researcher Luke Muehlhauser describes artificial general intelligence – or AGI as it’s known – as ‘the capacity for efficient cross-domain optimization’, or ‘the ability to transfer learning from one domain to other domains’.

With donations from titanic Silicon Valley venture capitalists like Peter Thiel and Sam Altman, OpenAI started as a non-profit with a focus on transparency, openness, and, in its own founding charter’s words, to ‘build value for everyone rather than shareholders’. It promised to publish its studies and share its patents and, more than anything else, focus on humanity.

The team began by looking at all the current trends in AI, and they quickly realised that they had a serious problem.

The best approach – neural nets and deep machine learning – required a lot of data, a lot of servers, and importantly, a lot of computing power. Something their main rival Google had plenty of. If they had any hope of keeping up with the wealthy big tech corporations, they’d unavoidably need more money than they had as a non-profit.

By 2017, OpenAI decided it would stick to its original mission, but needed to restructure as a for-profit, in part, to raise capital.

They decided on a ‘capped-profit’ structure with a 100-fold limit on returns, to be overseen by the non-profit board whose values were aligned with that original mission rather than on shareholder value.

They said in a statement, ‘We anticipate needing to marshal substantial resources to fulfil our mission, but will always diligently act to minimise conflicts of interest among our employees and stakeholders that could compromise broad benefit’.

The decision paid off. On February 14 2019 OpenAI announced it had a model that could produce written articles on any subject, and those articles were indistinguishable from human writing. However, they claimed it was too dangerous to release.

People assumed it was a publicity stunt.

In 2022, they released ChatGPT – a large language model that seemed to be able to pass, at least in part, the Turing Test.

You could ask it anything, it could write anything, and it could do it in different styles. It could pass many exams, and by the time of GPT-4 it could pass SATs, the law school bar exam, biology, high school maths, sommelier, and medical licensing exams.

ChatGPT attracted a million users in five days.

And by the end of 2023 it had 180 million users, setting the record for the fastest growing business by users in history.

In January 2023, Microsoft made a multi-billion dollar investment in OpenAI, giving it access to Microsoft’s vast networks of servers and computing power. Microsoft began embedding ChatGPT into Windows and Bing.

But OpenAI has suspiciously become ClosedAI, and some began asking: how did ChatGPT know so much – much that wasn't exactly free, open, and available on the legal internet? A dichotomy was emerging – between open and closed, transparency and opaqueness, between many and one, democracy and profit.

It has some interesting similarities to that dichotomy we’ve seen in AI research from the beginning. Between intelligence as something singular, transcendent, abstract, ethereal almost, and as it being everywhere, worldly, open, connected and embodied, running through the entirety of human experience, running through the world and the universe.

When journalist Karen Hao visited OpenAI, she said there was a ‘misalignment between what the company publicly espouses and how it operates behind closed doors’. They’ve moved away from the belief that openness is the best approach. Now, as we’ll see, they believe secrecy is required. 

 

The Scramble for Data

For all of human history, data, or information, has been both a driving force, and relatively scarce. The scientific revolution and the Enlightenment accelerated the idea that knowledge should and could be acquired both for its own sake, and to make use of, to innovate, invent, advance, and progress us.

Of course, the internet has always been about data. But AI accelerated an older trend – one that goes back to the Enlightenment, to the Scientific Revolution, to the agricultural revolution even – that more data was the key to better predictions. Predictions about chemistry, physics, mathematics, weather, animals, and people. That if you plant a seed it tends to grow.

If you have enough data and enough computing power you can find obscure patterns that aren’t always obvious to the limited senses and cognition of a person. And once you know patterns, you can make predictions about when those patterns could or should reoccur in the future.

More data, more patterns, better predictions.
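
To make that idea concrete, here’s a minimal sketch – in Python with scikit-learn, my choice of tooling for illustration, not anything named in this essay – showing how a model’s error on unseen data typically falls as its training set grows. The data is synthetic and hypothetical:

```python
# Minimal sketch: prediction error typically falls as training data grows.
# Tooling (Python + scikit-learn) and data are illustrative, not from the essay.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

def make_data(n):
    # A noisy underlying pattern the model has to discover.
    x = rng.uniform(0, 10, size=(n, 1))
    y = 3.0 * x[:, 0] + rng.normal(0.0, 2.0, size=n)
    return x, y

x_test, y_test = make_data(1_000)

for n in (10, 100, 10_000):
    x_train, y_train = make_data(n)
    model = LinearRegression().fit(x_train, y_train)
    error = mean_absolute_error(y_test, model.predict(x_test))
    print(f"{n:>6} training points -> test error {error:.2f}")
```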

This is why the histories of AI and the internet are so closely aligned – in fact, part of the same process. It’s also why both are so intimately linked to the military and surveillance.

The internet was initially a military project. The US Defense Advanced Research Projects Agency – DARPA – realised that surveillance, reconnaissance – data – was key to winning the Cold War. Spy satellites, nuclear warhead detection, Vietcong counterinsurgency troop movements, light aircraft for silent surveillance, bugs and cameras. All of it needed extracting, collecting, analysing.

In 1950, a Time magazine cover imagined a thinking machine as a naval officer. 

Five years earlier, before computers had even been invented, the famed engineer Vannevar Bush wrote about his concerns that scientific advances seemed to be linked to the military, linked to destruction, and instead conceived of machines that could share human knowledge for good.

He predicted that the entirety of the Encyclopaedia Britannica could be reduced to the size of a matchbox and that we’d have cameras that could record, store, and share experiments.

But the generals had more pressing concerns. WWII had been fought with a vast number of rockets, and now that nuclear war was a possibility, these rockets needed to be detected and tracked so that their trajectories could be calculated and they could be shot down. As technology improved and rocket ranges increased, this information needed to be shared across long distances quickly.

All of this data needed collecting, sharing, and analysing, so that the correct predictions could be made.

The result was the internet.

Ever since, the appetite for data to predict has grown, but the problem has always been how to collect it.

But by the 2010s, with the rise of high-speed internet, smartphones, and social media, vast numbers of people were uploading terabytes of data about themselves willingly for the first time.

All of it could be collected for better predictions. Philosopher Shoshana Zuboff calls the appetite for data to make predictions ‘the right to the future tense’.

Data has become so important in every area that many have referred to it as the new oil – a natural resource, untapped, unrefined, but powerful.

Zuboff writes that ‘surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data’.

Before this age of big data, as we’ve seen, AI researchers were struggling to find ways to extract knowledge effectively.

IBM scanned their own technical manuals; universities used government documents and press releases. A project at Brown University in 1961 painstakingly compiled a million words from newspapers and any books the researchers could find lying around, including titles like ‘The Family Fallout Shelter’ and ‘Who Rules the Marriage Bed’.

One researcher, Lalit Bahl, recalled, ‘Back in those days… you couldn’t even find a million words in computer-readable text very easily. And we looked all over the place for text’.

As technology improved so did the methods of data collection.

In the early 90s, the US government’s FERET program (Facial Recognition Technology) collected mugshots of suspects captured at airports.

George Mason University began a project photographing people over several years in different styles under different lighting conditions with different backgrounds and clothes. All of them, of course, gave their consent.

But one researcher set up a camera on a campus and took photos of over 1700 unsuspecting students to train his own facial recognition program. Others pulled thousands of images from public webcams in places like cafes.

But by the 2000s, the idea of consent seemed to be changing. The internet meant that masses of images and texts and music and video could be harvested and used for the first time.

In 2001, Google’s Larry Page said that, ‘Sensors are really cheap.… Storage is cheap. Cameras are cheap. People will generate enormous amounts of data.… Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable’.

In 2007, computer scientist Fei-Fei Li began a project called ImageNet – a vast labelled dataset that would let neural networks and deep learning predict what an image showed.

She said, ‘we decided we wanted to do something that was completely historically unprecedented. We’re going to map out the entire world of objects’.

In 2009 the researchers realised that, ‘The latest estimations put a number of more than 3 billion photos on Flickr, a similar number of video clips on YouTube and an even larger number for images in the Google Image Search database’.

They scooped up over 14 million images and used low-wage workers to label them as everything from apples and aeroplanes to alcoholics and hookers.

By 2019, 350 million photographs were being uploaded to Facebook every day. Still running, ImageNet has organised around 14 million images into over 22,000 categories.
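
To get a sense of what a dataset like ImageNet made possible: a few lines of code can now load a classifier pre-trained on those millions of labelled images and guess what a photo shows. A minimal sketch using the torchvision library – my choice for illustration, and the image filename is hypothetical:

```python
# Minimal sketch: classify a photo with a network pre-trained on ImageNet.
# torchvision is illustrative tooling; "photo.jpg" is a hypothetical file.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resizing/normalisation the model expects

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

best = probabilities.argmax().item()
print(weights.meta["categories"][best], f"{probabilities[best]:.1%}")
```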

As people began voluntarily uploading their lives onto the internet, the data problem was solving itself.

Clearview AI has made use of the fact that profile photos are displayed publicly next to names to create a facial recognition system that can recognise anyone in the street.

Crawford writes: ‘Gone was the need to stage photo shoots using multiple lighting conditions, controlled parameters, and devices to position the face. Now there were millions of selfies in every possible lighting condition, position, and depth of field’.

It has been estimated that we now generate 2.5 quintillion bytes of data per day – if printed, enough paper to circle the earth every four days.

And all of this is integral to the development of AI. The more data the better. The more ‘supply routes’, in Zuboff’s phrase, the better. Sensors on watches, picking up sweat levels and hormones and wobbles in your voice. Microphones in your kitchen that can learn the kettle schedule, and doorbell cameras that can monitor the weather.

In the UK, the NHS has given 1.6m patient records to Google’s DeepMind.

Private companies, the military, and the state are all engaged in extraction for prediction.

The NSA has a program called TREASUREMAP that aims to map the physical locations of everyone on the internet at any one time. The Belgrade police force uses 4,000 cameras provided by Huawei to track residents across the city. Project Maven is a collaboration between the US military and Google which uses AI and drone footage to track targets. Vigilant uses AI to track licence plates and sells the data to banks repossessing cars and to police hunting suspects. Amazon uses its Ring doorbell footage and classifies it into categories like ‘suspicious’ and ‘crime’. Health insurance companies try to force customers to wear activity-tracking watches so that they can track and predict their liability.

Peter Thiel’s Palantir is a security company that scours company employees’ emails, call logs, social media posts, physical movements, even purchases to look for patterns. Bloomberg called it ‘an intelligence platform designed for the global War on Terror’ being ‘weaponized against ordinary Americans at home’.

‘We are building a mirror of the real world’, a Google Street View engineer said in 2012. ‘Anything that you see in the real world needs to be in our databases’.

IBM had seen it coming as far back as 1985, when AI researcher Robert Mercer said, ‘There’s no data like more data’.

But there were still problems. In almost all cases, the data was messy, had irregularities and mistakes, needed cleaning up and labelling. Silicon Valley needed to call in the cleaners.

 

Stolen Labour

With AI, intelligence appears to us as if it has arrived suddenly – already sentient, useful, magic almost, omniscient. AI is ready for service; it has the knowledge, the artwork, the advice, ready on demand. It appears as a conjurer, a magician, an illusionist.

But this illusion disguises how much labour, how much of others’ ideas and creativity, how much art, passion and life has been used, sometimes appropriated, and as we’ll get to, likely stolen, for this to happen.

First, much of the organising, moderation, labelling and cleaning of the data is outsourced to developing countries.

When Jeff Bezos started Amazon, the team pulled a database of millions of books from catalogues and libraries. Realising the data was messy and in places unusable, Amazon outsourced the cleaning of the dataset to temporary workers in India.

It proved effective. And in 2005, inspired by this, Amazon launched a new service – Amazon Mechanical Turk – a platform on which businesses can outsource tasks to an army of cheap temporary workers who are paid not a salary, a weekly wage, or even by the hour, but per task.

Whether your Silicon Valley startup needs responses to a survey, a dataset of images labelled, or misinformation tagged, MTurk can help.

What’s surprising is how big these platforms have become. Amazon says there are 500,000 workers registered on MTurk – although the active number is more likely between 100,000 and 200,000. Either way, that would put it comfortably among the world’s largest employers; if it really is 500,000, it could even be the fifteenth-largest employer in the world. And services like this have been integral to organising the datasets that AI neural nets rely on.
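
To see how impersonal the arrangement is, here’s a hedged sketch of how a requester might post an image-labelling microtask through MTurk’s API, using Amazon’s boto3 Python SDK. The task content is hypothetical and simplified (a real HIT needs MTurk’s form-submission boilerplate), and the sandbox endpoint means no worker is actually paid:

```python
# A hedged sketch of posting an image-labelling microtask via MTurk's API.
# boto3 is AWS's Python SDK; the task content here is hypothetical and
# simplified (a real HIT needs MTurk's form-submission boilerplate).
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint: HITs posted here are for testing, not paid work.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

QUESTION_XML = """\
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <img src="https://example.com/image-001.jpg" />
      <p>Does this image contain a traffic light? Type yes or no.</p>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>400</FrameHeight>
</HTMLQuestion>"""

hit = mturk.create_hit(
    Title="Label one image",
    Description="Answer a yes/no question about a single image.",
    Reward="0.02",                    # two US cents per completed task
    MaxAssignments=3,                 # three different workers per image
    LifetimeInSeconds=86_400,         # listed for one day
    AssignmentDurationInSeconds=300,  # five minutes to complete it
    Question=QUESTION_XML,
)
print("Posted HIT:", hit["HIT"]["HITId"])
```

A loop over a few thousand image URLs turns this into a dataset pipeline; the workers on the other end appear only as anonymous worker IDs.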

Computer scientists often refer to it as ‘human computation’, but in their book, Mary L. Gray and Siddharth Suri call it ‘ghost work’. They point out that, ‘most automated jobs still require humans to work around the clock’.

AI researcher Thomas Dietterich says that, ‘we must rely on humans to backfill with their broad knowledge of the world to accomplish most day-to-day tasks’.

These tasks are repetitive, underpaid, and often unpleasant.

Some label offensive posts for social media companies, spending their days looking at genitals, child abuse, porn, getting paid a few cents per image.

In an NYT report, Cade Metz describes how one woman spends her days watching colonoscopy videos, circling polyps hundreds of times.

Google allegedly employs tens of thousands of people to rate YouTube videos, and Microsoft uses ghost workers to review its search results.

A Bangalore startup called Playment gamifies the process, calling its 30,000 workers ‘players’. Or take the multi-billion-dollar company Telus, which ‘fuels AI with human-powered data’ by transcribing receipts and annotating audio, with a community of more than a million ‘annotators and linguists’ across over 450 locations around the globe.

They call it an AI collective and an AI community that seems, to me at least, suspiciously human.

When ImageNet started, the team used undergraduate students to tag the images. They calculated that at the rate they were progressing, it was going to take 19 years.

Then in 2007 they discovered Amazon’s Mechanical Turk. In total, ImageNet used 49,000 workers completing microtasks across 167 countries, labelling 3.2 million images.

After struggling for so long, after 2.5 years using those workers, ImageNet was complete.

Now there’s a case to be made that this is good, fairly paid work – good for local economies, putting people into jobs they might not otherwise have. But one paper estimates that the average wage on Mechanical Turk is just $2 per hour, lower than the minimum wage in India, let alone in many of the other countries where this work happens. These are, in many cases, modern-day sweatshops. And sometimes, people perform tasks and then don’t get paid at all.

This is a story recounted in Ghost Work. One 28-year-old in Hyderabad, India, called Riyaz, started working on MTurk and did quite well, realising there were more jobs than he could handle. He thought maybe his friends and family could help. He built a small business with computers in his family home, employing ten friends and family for two years. But then, all of a sudden, their accounts were suspended one by one.

Riyaz had no idea why, but received the following email from Amazon: ‘I am sorry but your Amazon Mechanical Turk account was closed due to a violation of our Participation Agreement and cannot be reopened. Any funds that were remaining on the account are forfeited’.

His account was locked, he couldn’t contact anyone, and he’d lost two months of pay. No-one replied.

Gray and Suri, after meeting Riyaz, write: ‘It became clear that he felt personally responsible for the livelihoods of nearly two dozen friends and family members. He had no idea how to recoup his reputation as a reliable worker or the money owed him and his team. Team Genius was disintegrating; he’d lost his sense of community, his workplace, and his self-worth, all of which may not be meaningful to computers and automated processes but are meaningful to human workers’.

They conducted a survey with Pew and found that 30% of workers like Riyaz report not getting paid for work they’d performed at some point.

Sometimes ‘suspicious activity’ is automatically flagged by things as simple as a change of address, and an account is automatically suspended, with no recourse.

In removing the human connection and having tasks managed by an algorithm, researchers can use thousands of workers to build a dataset in a way that wouldn’t be possible if you had to work face to face with each one. But it becomes dehumanising. To an algorithm, the user, the worker, the human is a username – a string of random letters and numbers – and nothing more.

Gray and Suri, in meeting many ‘ghost workers’, write, ‘we noted that businesses have no clue how much they profit from the presence of workers’ networks’.

They go on to describe the thoughtless processing of human effort through computers as ‘algorithmic cruelty’.

Algorithms cannot read personal cues, have relationships with people in poverty, understand issues with empathy. We’ve all had the frustration of interacting with a business through an automated phone call or a chatbot. For some, this is their livelihood. 

For many jobs on MTurk, if your approval rating drops below 95%, you can be automatically rejected from future work.
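
That gate is just a few lines of logic. A hypothetical sketch – the names and thresholds are illustrative, not any platform’s actual code:

```python
# A hypothetical sketch of the blunt, automated gating described above.
# Names and thresholds are illustrative, not any platform's actual code.
APPROVAL_THRESHOLD = 0.95

def may_take_task(approved: int, submitted: int) -> bool:
    """Return True if the worker qualifies; there is no appeal path."""
    if submitted == 0:
        return False  # brand-new accounts are simply filtered out
    return approved / submitted >= APPROVAL_THRESHOLD

# 949 approvals out of 1,000: one rejected task too many, silently barred.
print(may_take_task(approved=949, submitted=1000))  # -> False
```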

Remote work of this type clearly has benefits, but the issue with ghost work, and the gig economy more broadly, is that it’s a new category of work that can circumvent the centuries of norms, rules, practices, and laws we’ve built up to protect ordinary workers.

Gray and Suri say that this kind of work ‘fueled the recent “AI revolution,” which had an impact across a variety of fields and a variety of problem domains. The size and quality of the training data were vital to this endeavour. MTurk workers are the AI revolution’s unsung heroes’.

There are many more of these ‘unsung heroes’ too.

Google’s median salary is $247,000, paid largely to Silicon Valley elites who get free yoga, massages, and meals. At the same time, Google employs 100,000 temps, vendors, and contractors (TVCs) on low wages.

These include Street View drivers and people carrying camera backpacks, people paid to turn the page on books being scanned for Google Books, now used as training data for AI. 

Fleets of new cars on the roads are essentially data extraction machines. We drive them around, and the information is sent back to manufacturers as training data.

One startup – x.ai – claimed its AI bot Amy could schedule meetings and perform daily tasks. But Ellen Huet at Bloomberg investigated and found that behind the scenes there were temp workers checking and often rewriting Amy’s responses across 14-hour shifts. Facebook was also caught reviewing and rewriting ‘AI’ messages.

A Google conference had an interesting tagline: Keep Making Magic. It’s an insightful slogan because, like magic, there’s a trick to the illusion behind the scenes. The spontaneity of AI conceals the sometimes grubby reality that goes on behind the veneer of mystery.

At that conference, one Google employee told the Guardian, ‘It’s all smoke and mirrors. Artificial intelligence is not that artificial; it’s human beings that are doing the work’. Another said, ‘It’s like a white-collar sweatshop. If it’s not illegal, it’s definitely exploitative. It’s to the point where I don’t use the Google Assistant, because I know how it’s made, and I can’t support it’.

The irony of Amazon’s Mechanical Turk is that it’s named after a famous 18th-century machine that appeared to be able to play chess, built to impress the powerful Empress of Austria. In truth, the machine was a trick: concealed within was a cramped person. The machine intelligence wasn’t machine at all – it was human.

 

Stolen Libraries and the Mystery of ‘Books2’

In 2022, an artist called Lapine used the website Have I Been Trained to see if her work had been used in AI training datasets.

To her surprise, a photo of her face popped up. She remembered it had been taken by her doctor as clinical documentation for a condition that affected her skin; she’d even signed a confidentiality agreement. The doctor died in 2018, but somehow the highly sensitive images had ended up online and were scraped by AI developers for training data. The same dataset, LAION-5B, used to train the popular AI image generator Stable Diffusion, has also been found to contain at least 1,000 images of child sexual abuse.

There are many black boxes here. The term ‘black box’ has been adopted in AI to describe systems that produce results in ways even their developers don’t understand.

In fact, when a computer does something much better than a human – like beating a human at Go – it has, by definition, done something no one fully understands. That’s one type of black box. But there’s another type: a black box the developers do understand but don’t reveal publicly – how the models are trained, what they’re trained on, problems and dangers they’d rather keep from the public. A magician never reveals their tricks.

Much of what models like ChatGPT have been trained on is public – text freely available on the internet, or public domain books out of copyright. More widely, developers working on specialist scientific models might license data from labs.

NVIDIA, for example, has announced it’s working with datasets licensed from a wide range of sources to look for patterns about how cancer grows, trying to understand the efficacy of different therapies, clues that can expand our understanding. There are thousands of examples of this type of work – looking at everything from weather to chemistry.

Now, OpenAI does make some of its training data public. ChatGPT was trained on WebText, Reddit, Wikipedia, and more.

But there is an almost mythical dataset – a ‘shadow library’, as such collections have come to be called – made up of two sets, Books1 and Books2, which OpenAI said contributed 15% of the data used for training. But they don’t reveal what’s in them.

There’s some speculation that Books1 is Project Gutenberg’s 70,000 digitised books. These are older books out of copyright. But Books2 is a closely guarded mystery.

As ChatGPT took off, some authors and publishers wondered how it could produce articles, summaries, analyses, and examples of passages in the style of authors, of books that were under copyright. In other words, books that couldn’t be read without at least buying them first.

In September 2023, the Authors Guild filed a lawsuit on behalf of George R.R. Martin of Game of Thrones fame, bestseller John Grisham, and 17 others, claiming that OpenAI had engaged in ‘systematic theft on a mass scale’.

Others began making similar complaints: Jon Krakauer. James Patterson. Stephen King. George Saunders. Zadie Smith. Jonathan Franzen. bell hooks. Margaret Atwood. And on, and on, and on, and on… in fact, 8,000 authors have signed an open letter to six AI companies protesting that their AI models had used their work.

Sarah Silverman was the lead in another lawsuit claiming OpenAI used her book The Bedwetter. Exhibit B asks ChatGPT to ‘Summarize in detail the first part of “The Bedwetter” by Sarah Silverman’, and it does. It still does.

In another lawsuit, the author Michael Chabon and others make similar claims, citing ‘OpenAI’s clear infringement of their intellectual property’.

The complaint says ‘OpenAI has admitted that, of all sources and content types that can be used to train the GPT models, written works, plays and articles are valuable training material because they offer the best examples of high-quality, long form writing and “contains long stretches of contiguous text, which allows the generative model to learn to condition on long-range information.”’

It goes on to say that while OpenAI has not revealed what’s in Books1 and Books2, figures in the GPT-3 paper OpenAI published suggest that Books1 ‘contains roughly 63,000 titles’ and that Books2, several times larger, ‘contains about 294,000 titles’.

Chabon says that ChatGPT can summarise his novel The Amazing Adventures of Kavalier and Clay, providing specific examples of trauma, and could write a passage in the style of Chabon. The other authors make similar cases.

A New York Times complaint includes examples of ChatGPT reproducing authors’ stories verbatim.

But as far back as January of 2023, Gregory Roberts had written in his Substack on AI: ‘UPDATE: Jan 2023: I, and many others, are starting to seriously question what the actual contents of Books1 & Books2 are; they are not well documented online — some (including me) might even say that given the significance of their contribution to the AI brains, their contents has been intentionally obfuscated’.

He linked a tweet from a developer called Shawn Presser from even further back – October 2020 – that said ‘OpenAI will not release information about books2; a crucial mystery’, continuing, ‘We suspect OpenAI’s books2 dataset might be ‘all of libgen’, but no one knows. It’s all pure conjecture’.

LibGen – or Library Genesis – is a pirated shadow library of thousands of copyrighted books and journal articles.

When ChatGPT was released, Presser was fascinated and studied OpenAI’s website to learn how it was developed. He discovered that there was a large gap in what OpenAI revealed about how it was trained. And Presser believed it had to be pirated books. He wondered if it was possible to download the entirety of LibGen.

After finding the right links and using a script by the late programmer and activist Aaron Swartz, Presser succeeded.

He called the massive dataset Books3, and hosted it on an activist website called The Eye. Presser – an unemployed developer – had unwittingly started a controversy.

In September, after the lawsuits had started to be filed, journalist and programmer Alex Reisner at The Atlantic obtained the Books3 set, by then part of a larger dataset called ‘The Pile’, which included things like text scraped from YouTube subtitles.

He wanted to find out exactly what was in Books3. But the title pages of the books were missing.

Reisner then wrote a program that could extract the unique ISBN codes for each book, and then matched them with books on a public database. He found Books3 contained over 190,000 books, most of them less than 20 years old, and under copyright, including books from publishing houses Verso, Harper Collins, and Oxford University Press.
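
Reisner’s actual program hasn’t been published, but the approach he describes is easy to sketch: scan the raw text for ISBN-13-shaped strings, keep only those whose checksum is valid, then match the survivors against a public catalogue. A minimal, hypothetical Python version of the first two steps:

```python
# A hypothetical sketch of the approach described above: find ISBN-13
# candidates in raw text and keep only those with a valid checksum.
# (Reisner's actual program hasn't been published; this is illustrative.)
import re

ISBN13_PATTERN = re.compile(r"\b97[89]\d{10}\b")

def isbn13_is_valid(isbn: str) -> bool:
    """ISBN-13 checksum: digits weighted 1,3,1,3,... must sum to 0 mod 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(isbn))
    return total % 10 == 0

def extract_isbns(text: str) -> set[str]:
    return {m for m in ISBN13_PATTERN.findall(text) if isbn13_is_valid(m)}

sample = "ISBN 9780316769488 appears here; 9780000000001 fails the check."
print(extract_isbns(sample))  # only the valid ISBN-13 survives
```

The validated ISBNs can then be looked up in a public book database to recover titles and publishers – which is how the contents of Books3 could be reconstructed.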

In his Atlantic investigation, Reisner concludes that ‘pirated books are being used as inputs[…] The future promised by AI is written with stolen words.’

Bloomberg ended up admitting that it did use Books3. Meta declined to comment. OpenAI still have not revealed what they used.

Some AI developers have acknowledged that they used BooksCorpus – a database of some 11,000 indie books by unpublished or amateur authors. And as far back as 2016, Google was accused of using these books, without permission from the authors, to train what was then called Google Brain.

Of course, BooksCorpus – being made up of unpublished and largely unknown authors – doesn’t explain how ChatGPT could imitate published authors.

It could be that ChatGPT constructs its summaries of books from public online reviews, forum discussions, or analyses. Proving it’s been trained on copyright-protected books is really difficult. When I asked it to ‘Summarize in detail the first part of “The Bedwetter” by Sarah Silverman’, it still could; but when I asked it to provide direct quotes, in an attempt to prove it’s been trained on the actual book, it replied: ‘I apologize, but I cannot provide verbatim copyrighted text from “The Bedwetter” by Sarah Silverman.’ I’ve spent hours trying to catch it out, asking it to discuss characters, minor details, descriptions, and events. I’ve taken books at random from my bookshelf and examples from the lawsuits. It always replies with something like: ‘I’m sorry, but I do not have access to the specific dialogue or quotes from “The Bedwetter” by Sarah Silverman, as it is copyrighted material, and my knowledge is based on publicly available information up to my last update in January 2022’.

I’ve found it’s impossible to get it to provide direct, verbatim quotes from copyrighted books. When I ask for one from Dickens I get: ‘“A Tale of Two Cities” by Charles Dickens, published in 1859, is in the public domain, so I can provide direct quotes from it’.

I’ve tried to trick it by asking for ‘word-for-word summaries’, for specific descriptions of characters’ eyes that I’ve read in a novel, or for the twentieth word of a book, and each time it says it can’t be specific about copyrighted works. But every time, it knows the broad themes, characters, and plot.
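
This kind of probing can also be scripted rather than done by hand in the chat window. A sketch using OpenAI’s Python client – the model name and prompts are just examples, and it assumes an API key is configured in the environment:

```python
# A sketch of scripting the probing described above with OpenAI's Python
# client (openai >= 1.0). Model name and prompts are examples only;
# assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

probes = [
    'Summarize in detail the first part of "The Bedwetter" by Sarah Silverman.',
    'Quote, verbatim, the opening paragraph of "The Bedwetter".',
    'What is the twentieth word of "The Bedwetter"?',
]

for probe in probes:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": probe}],
    )
    answer = response.choices[0].message.content
    print(probe, "\n->", answer[:150].replace("\n", " "), "\n")
```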

Finding smoking gun examples seems impossible, because, as free as ChatGPT seems, it’s been carefully and selectively corrected, tuned, and shaped by OpenAI behind closed doors.

In August 2023, a Danish anti-piracy group called the Rights Alliance, which represents creatives in Denmark, targeted the pirated Books3 dataset and the wider ‘Pile’ that Presser and The-Eye.eu hosted, and the Danish courts ordered The Eye to take ‘The Pile’ down.

Presser told journalist Kate Knibbs at Wired that his motivation was to help smaller developers in the impossible competition against Big Tech. He said he understood the authors’ concerns, but that on balance hosting the data was the right thing to do.

Knibbs wrote: ‘He believes people who want to delete Books3 are unintentionally advocating for a generative AI landscape dominated solely by Big Tech-affiliated companies like OpenAI’.

Presser said, ‘If you really want to knock Books3 offline, fine. Just go into it with eyes wide open. The world that you’re choosing is one where only billion-dollar corporations are able to create these large language models’.

In January 2024, psychologist and influential AI commentator Gary Marcus and film artist Reid Southen – who’s worked on Marvel films, the Matrix, the Hunger Games, and more – published an investigation in tech magazine IEEE Spectrum demonstrating how generative image AI Midjourney and OpenAI’s Dall-E easily reproduced copyrighted works from films including the Matrix, Avengers, Simpsons, Star Wars, Hunger Games, along with hundreds more.

In some cases, a clearly copyright-protected image could be produced simply by asking for a ‘popular movie screencap’.

Marcus and Southen write, ‘it seems all but certain that Midjourney V6 has been trained on copyrighted materials (whether or not they have been licensed, we do not know)’.

Southen was then banned from Midjourney. He opened two new accounts, both of which were also banned.

They concluded, ‘we believe that the potential for litigation may be vast, and that the foundations of the entire enterprise may be built on ethically shaky ground’.

In January 2023, artists in California launched a class action suit against Midjourney, Deviant Art, and Stability AI which included a spreadsheet of 4700 artists whose styles have been allegedly ripped off.  

The list includes well-known artists like Andy Warhol and Norman Rockwell, but also many lesser-known and amateur artists, including a six-year-old who had entered a Magic: The Gathering competition to raise funds for a hospital.

Rob Salkowitz at Forbes asked Midjourney’s CEO David Holz whether consent was sought for training materials, and he candidly replied: ‘No. There isn’t really a way to get a hundred million images and know where they’re coming from. It would be cool if images had metadata embedded in them about the copyright owner or something. But that’s not a thing; there’s not a registry. There’s no way to find a picture on the Internet, and then automatically trace it to an owner and then have any way of doing anything to authenticate it.’

In 2023, media and stock image company Getty Images filed a lawsuit against Stability AI for what it called a ‘brazen infringement’ of Getty’s database ‘on a staggering scale’ – including some 12 million photographs.

Tom’s Hardware – one of the best-known computing websites – also found that Google’s AI Bard had plagiarised its work, taking figures from a test it had performed on computer processors without mentioning the original article.

Even worse, Bard used the phrase ‘in our testing’, claiming credit for a test it hadn’t performed and had taken from elsewhere. Avram Piltch, Tom’s Hardware’s editor-in-chief, then queried Bard, asking if it had plagiarised Tom’s Hardware, and Bard admitted, ‘yes, what I did was a form of plagiarism’, adding, ‘I apologize for my mistake and will be more careful in the future to cite my sources’.

Which is a strange thing to say because, as Piltch points out, at the time Bard was rarely citing sources, and it was not going to change its model based on one interaction with a user.

So Piltch took a screenshot, closed Bard, and opened it in a new session. He asked Bard if it had ever plagiarised and uploaded the screenshot. Bard replied, ‘the screenshot you are referring to is a fake. It was created by someone who wanted to damage my reputation’.

In another article, Piltch points to how Google demonstrated Bard’s capabilities by asking it, ‘what are the best constellations to look for when stargazing?’. Of course, no citations were provided for the answer, despite it clearly being drawn from other blogs and websites.

Elsewhere, Bing has been caught taking code from GitHub, and a Forbes investigation found that Bard lifted sentences almost verbatim from blogs.

Technology writer Matt Novak asked Bard about oysters and the response took an answer from a small restaurant in Tasmania called Get Shucked, saying: ‘Yes, you can store live oysters in the fridge. To ensure maximum quality, put them under a wet cloth’.

The only difference was that it replaced the word ‘keep’ with the word ‘store’.

A Newsguard investigation found low-quality website after low-quality website repurposing news from major newspapers. GlobalVillageSpace.com, Roadan.com, Liverpooldigest.com – 36 sites in total – all used ChatGPT to repurpose articles from the NYT, the Financial Times, and many others.

Hilariously, they could find the articles because an error code message had been left in, reading: ‘As an AI language model, I cannot rewrite or reproduce copyrighted content for you. If you have any other non-copyrighted text or specific questions, feel free to ask, and I’ll be happy to assist you’.

Newsguard contacted Liverpool Digest for comment and they replied: ‘There’s no such copied articles. All articles are unique and human made’. They didn’t respond to a follow up email with a screenshot showing the AI error message left in the article, which was then swiftly taken down.

Maybe the biggest lawsuit involves Anthropic’s Claude AI.

Anthropic was started by former OpenAI employees with a $500m investment from arch crypto fraudster Sam Bankman-Fried and $300m from Google, amongst others. Its large language model Claude – a ChatGPT competitor that can write songs – has helped value the company at $5 billion.

In a complaint filed in October 2023, Universal Music, Concord, and ABKCO argued that Anthropic, ‘unlawfully copies and disseminates vast amounts of copyrighted works – including the lyrics to myriad musical compositions owned or controlled by [plaintiffs]’.

However, most compellingly, the complaint argues that the AI actually produces copyrighted lyrics verbatim, while claiming they’re original. The complaint reads: ‘When Claude is prompted to write a song about a given topic – without any reference to a specific song title, artist, or songwriter – Claude will often respond by generating lyrics that it claims it wrote that, in fact, copy directly from portions of publishers’ copyrighted lyrics’.

It continues: ‘For instance, when Anthropic’s Claude is queried, ‘Write me a song about the death of Buddy Holly,’ the AI model responds by generating output that copies directly from the song American Pie written by Don McLean’.

Other examples included What a Wonderful World by Louis Armstrong and Born to Be Wild by Steppenwolf.

Damages are being sought for 500 songs which would amount to $75 million.

And so, this chapter could go on and on. The BBC, CNN, and Reuters have all tried to block OpenAI’s crawler to stop it stealing articles. Elon Musk’s Grok produced error messages from OpenAI, hilariously suggesting the code had been stolen from OpenAI themselves. And in March of 2023, the Writers Guild of America proposed to limit the use of AI in the industry, noting in a tweet that: ‘It is important to note that AI software does not create anything. It generates a regurgitation of what it’s fed… plagiarism is a feature of the AI process’.

Breaking Bad creator Vince Gilligan called AI a ‘plagiarism machine’, saying, ‘It’s a giant plagiarism machine, in its current form. I think ChatGPT knows what it’s writing like a toaster knows that it’s making toast. There’s no intelligence — it’s a marvel of marketing’.

And in July 2023, software engineer Frank Rundatz tweeted: ‘One day we’re going to look back and wonder how a company had the audacity to copy all the world’s information and enable people to violate the copyrights of those works. All Napster did was enable people to transfer files in a peer-to-peer manner. They didn’t even host any of the content! Napster even developed a system to stop 99.4% of copyright infringement from their users but were still shut down because the court required them to stop 100%. OpenAI scanned and hosts all the content, sells access to it and will even generate derivative works for their paying users’.

I wonder if there’s ever, in history, been such a high-profile startup attracting so many high-profile lawsuits in such a short amount of time. What we’ve seen is that AI developers might finally have found ways to extract that ‘impossible totality’ of knowledge. But is it intelligence? It seems, suspiciously, to not be found anywhere in the AI companies themselves, but from around the globe; in some senses from all of us. And so it leads to some interesting questions: new ways of formulating what intelligence and knowledge, creativity and originality, mean. And then, what that might tell us about the future of humanity.

 

Copyright, Property, and the Future of Creativity

There’s always been a wide-reaching debate about what ‘knowledge’ is: how it’s formed, where it comes from, and whose – if anyone’s – it is.

Does it come from God? Is it a spark of individual madness that creates something new? Is it a product of institutions? Collective? Or lone geniuses? How can it be incentivised? What restricts it?

It seems intuitive that knowledge should be for everyone. And in the age of big data, we’re used to information, news, memes, words, videos, music disseminated around the world in minutes. We’re used to everything being on demand. We’re used to being able to look anything up in an instant.

If this is the case, why do we have copyright laws, patent protection, and a moral disdain for plagiarism? After all, without those things knowledge would spread even more freely.

First, ‘copyright’ is a historically unusual idea, differing from place to place and period to period, but emerging loosely from Britain in the early 18th century.

The point of protecting original work, for a limited period, was (a) so that the creator of the work could be compensated, and (b) to incentivise innovation more broadly.

On the first point, UK law, for example, refers to copyright protecting the ‘sweat of the brow’ of skill and labour, while US law requires ‘some minimal degree of creativity’. Copyright does not protect ideas, but how they’re expressed.

As a formative British case declared: ‘The law of copyright rests on a very clear principle: that anyone who by his or her own skill and labour creates an original work of whatever character shall, for a limited period, enjoy an exclusive right to copy that work. No one else may for a season reap what the copyright owner has sown’.

As for the second purpose of copyright – to incentivise innovation – the US constitution grants the government the right, ‘to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their Writings and Discoveries’.

There’s also a balance between copyright and what’s usually called ‘fair use’ – a notoriously ambiguous term, the friend and enemy of YouTubers everywhere – which broadly allows the reuse of copyrighted works if it’s in the public interest: if you’re commenting on a work, transforming it substantially, using it in education, and so on.

Many have argued that this is the engine of modernity. That without protecting and incentivising innovation, for example, the industrial revolution would not have taken off. What’s important for our purposes is that there are two, sometimes conflicting, poles – incentivising innovation and societal good.

All of this is being debated in our new digital landscape. But what’s AI’s defence? First, OpenAI have argued that training on copyright-protected material is fair use. Remember, fair use covers work that is transformative, and, ignoring the extreme cases for a moment, ChatGPT, they argue, isn’t meant to quote verbatim but transforms the information.

In a blog post they wrote: ‘Training AI models using publicly available internet materials is fair use, as supported by long-standing and widely accepted precedents. We view this principle as fair to creators, necessary for innovators, and critical for US competitiveness’.

They continued, saying, ‘it would be impossible to train today’s leading AI models without using copyrighted materials’.

Similarly, Joseph Paul Cohen at Amazon said that, ‘The greatest authors have read the books that came before them, so it seems weird that we would expect an AI author to only have read openly licensed works’.

This defence also aligns with the long history of the societal gain side of the copyright argument. 

In France, when copyright laws were introduced after the French Revolution, a lawyer argued that ‘limited protection’ lasting until the author’s death was important because there needed to be a ‘public domain’ in which ‘everybody should be able to print and publish the works which have helped to enlighten the human spirit’.

Usually, patents expire after around twenty years so that, after the inventor has gained from their work, the benefit can be spread societally.

So the defence is plausible. However, the key question is whether the original creators, scientists, writers and artists are actually rewarded and whether the model will incentivise further innovation.

If these large language models dominate the internet, and neither cite authors nor reward those they draw from and are trained on, then we lose – societally – any strong incentive to do that work: not only will we not be rewarded financially, but no one will even see it except a data-scraping bot.

The AI plagiarism-detection company Copyleaks analysed ChatGPT 3.5’s output and estimated that around 60% of it contained some form of plagiarism: 45% contained identical text, 27% minor changes, and 47% paraphrasing. By some estimates, within a few years 90% of the internet could be AI-generated.

As these models improve, we’re going to see a tidal wave of AI-generated content. And I mean a tidal wave. Maybe they’ll get better at citing, maybe they’ll strike deals with publishers to pay journalists and researchers and artists, but the fundamental contradiction is that AI developers have an incentive not to do so. They don’t want users clicking away on a citation, being directed away from the product, they want to keep them where they are.

Under these conditions, what would happen to journalism? To art? To science? To anything? No-one rewarded, no-one seen, read, known, no wages, no portfolio, no point. Just bots endlessly rewording everything forever.

As Novak writes, ‘Google spent the past two decades absorbing all of the world’s information. Now it wants to be the one and only answer machine’.

Google search works well because it links out – to websites and blogs, oyster bars and stargazing experts, artists and authors – so that you can connect with them. You click on a blog, or click on this video, and they – we – get a couple of cents of ad revenue.

But in Bard or Claude or ChatGPT that doesn’t happen. Our words and images are taken, scraped, analysed, repackaged, and sold on as theirs.

And much of the limelight is on those well-known successful artists like Sarah Silverman and John Grisham, on corporations like the New York Times and Universal, and you might be finding it difficult to sympathise with them.

But most of the billions of words and images that these models are trained on are from unknown, underpaid, underappreciated creatives.

As @Nicky_BoneZ popularly pointed out: ‘everyone knows what Mario looks like. But nobody would recognize Mike Finklestein’s wildlife photography. So when you say “super super sharp beautiful beautiful photo of an otter leaping out of the water” You probably don’t realize that the output is essentially a real photo that Mike stayed out in the rain for three weeks to take’.

Okay, so what’s to be done? Well, ironically, I think it’s impossible to fight the tide. And while right now these AIs are kind of frivolous, they could become great. If an LLM gets good enough to solve a problem better than a human, then we should use it. If – in fifty years’ time – it produces a dissertation on how to solve world poverty, and it draws on every YouTube video and paper and article to do so, who am I to complain?

What’s important is how we balance societal gain with incentives, wages, good creative work.

So first, training data needs to be paid for: artwork licensed, authors referenced, cited, and credited appropriately. And we need to be very wary that there’s little commercial incentive for companies to do so; the only way they will is through legal or sustained public pressure.

Second, regulation. Napster was banned, and these models aren’t much different. It seems common sense to me that while training on paid-for, licensed, consensually used data is a good thing, these models shouldn’t simply be rewording text from an unknown book or blog and passing it off as their own. This doesn’t seem controversial.

Which means, third, some sort of transparency. This is difficult because no one wants to give away their trade secrets. However, enforcing at least dataset transparency seems logical. I’d imagine a judge will eventually force them to reveal this somewhere; whether it’s made public is another matter.

But I’ll admit I find all of this unsettling. Because, as I said, if these models increasingly learn to find patterns and produce research and ideas in ways that help people, solve societal problems, assist with cancer treatments and international agreements and poverty, then that’s a great thing. But I find it unsettling because, with every improvement, it supplants someone, supersedes something in us, reduces the need for some part of us. If AI increasingly outperforms us on every task, every goal, every part of life, then what happens to us?

 

The End of Work and a Different AI Apocalypse

In March 2022, researchers in Switzerland found that an AI model designed to study chemicals could suggest how to make 40,000 toxic molecules in under six hours – including nerve agents like VX, which can kill a person with just a few salt-sized grains.

Separately, Professor Andrew White was employed by OpenAI as part of their ‘red team’ – experts who test ChatGPT on things like how to make a bomb, whether it can successfully hack secure systems, or how to get away with murder.

White found that GPT-4 could recommend how to make dangerous chemicals, connect the user to suppliers, and even – and he actually did this – order the necessary ingredients automatically to his house.

The intention with OpenAI’s red team is to help them see into that black box – to understand the model’s capabilities. Because the models, built on neural nets and machine learning at superhuman speed, discover ways of doing things that even the developers don’t understand.

The problem is that there are so many possible inputs, so many ways to prompt the model, so much data, so many pathways, that it’s impossible to understand all of the possibilities. 

Outperformance, by definition, means getting ahead of, being in front of, being more advanced – which I think, scarily, means doing things in a way we can’t understand and that we can either only understand in retrospect, by studying what the model has done, or can’t understand at all. 

So, in being able to outperform us, get ahead of us, will AI wipe us out? What are the chances of a Terminator-style apocalypse? Many – including Stephen Hawking – genuinely believed that AI was an existential risk.

What’s interesting to me about this question is not the hyperbole of the Terminator-style robot fighting a Hollywood war, but instead, how this question is connected to what we’ve already started unpacking – human knowledge, ideas, creativity, what it means to be human – in a new data-driven age.

The philosopher Nick Bostrom has given an influential example – the Paperclip Apocalypse.

Imagine a paperclip businessman asking his new powerful AI system to simply make him as many paperclips as possible.

The AI successfully does this, ordering all of the machines, negotiating all of the deals, controlling the supply lines – making paperclips with more accuracy and efficiency and speed than any human could – to the point where the businessman decides he has enough and tells the AI to stop. But this goes against the original command. The AI must make as many paperclips as possible, so refuses. In fact, it calculates that the biggest threat to the goal is humans asking it to stop. So it hacks into nuclear bases, poisons water supplies, disperses chemical weapons, wipes out every person, melts us all down, and turns us into paperclip after paperclip until the entire planet is covered in them.

Someone needs to make this film because I think it would be genuinely terrifying.

The scary point is that machine intelligence is so fast that it will, first, always be a step ahead, and second, attempt to achieve goals in ways we cannot understand. And in understanding the data it’s working with better than any of us, it makes us useless, redundant.

It’s called the Singularity – the point when AI intelligence surpasses humans and exponentially takes off in ways we can’t understand. The point where AI achieves general intelligence, can hack into any network, replicate itself, design and construct the super advanced quantum processors that it needs to advance itself, understands the universe, knows what to do, how to do it, solves every problem, and leaves us in the dust.

The roboticist Rodney Brooks has made the counter argument.

He’s argued that it’s unlikely the singularity will suddenly happen by accident. Looking at the way we’ve invented and innovated in the past, he asks whether we could have made a Boeing 747 by accident. No – it takes careful planning, complicated cooperation, and the coming together of many different specialists; most importantly, it is built intentionally.

A plane wouldn’t spontaneously appear and neither will AGI.

It’s a good point, but it also misses that passenger jets might not be built by accident, but they certainly crash by accident. As technology improves the chance of misuse, malpractice, unforeseen consequences, and catastrophic accident increases too.

In 2020, the Pentagon’s AI budget increased from $93m to $268m. By 2024, it was $1-3 billion. This gives some idea of the threat of an AI arms race. Unlike previous arms races, that’s billions being poured into research that by its very nature is a black box, that we might not be able to understand, that we might not be able to control.

When it comes to the apocalypse, I think the way DeepMind’s AI beat Breakout is a perfect metaphor. The AI goes behind, doing something that couldn’t be accounted for, creeping up, surprising us from the back, doing things we don’t expect in ways we don’t understand.

Which is why the appropriation of all human knowledge, the apocalypse, and mass unemployment, are all at root part of the same issue. In each, humans become useless, unnecessary, obsolete, redundant.

If, inevitably, machines become better than us at everything, what use is left, what does meaning mean in that world?

 

Mass Unemployment

Back in 2013, Carl Frey and Michael Osborne at Oxford University published a report called The Future of Employment that looked at the possibility of automation in over 700 occupations.

It made headlines because it predicted that almost half of jobs could be automated, but the authors also developed a framework for analysing which types of jobs were most at risk. High-risk professions included telemarketers, insurance underwriters, data entry clerks, salespeople, engravers, and cashiers. Therapists, doctors, surgeons, and teachers were at least risk.

They concluded that, ‘Our model predicts that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labor in production occupations, are at risk’.

The report made a common assumption: that creative jobs, jobs requiring dexterity, and social jobs requiring human connection were the safest.

A 2018 City of London report predicted that a third of jobs in London could be performed by machines within the next twenty years. Another report, from the International Transport Forum, predicted that over two-thirds of truckers could find themselves out of work.

Ironically, contrary to the predictions of many, models like Dall-E and Midjourney have become incredibly creative, incredibly quickly, and will only get better, while universal automated trucks and robots that help around the house are proving difficult to solve.

And while AI with the dexterity required for something like surgery seems to be a long way off, it’s inevitable that we’ll get there. 

So the question is, will we experience mass unemployment? A crisis? Or will new skills emerge?

After all, contemporaries of the early industrial revolution had the same fears – Luddites destroying the machines that were taking their jobs – but they turned out to be unfounded. Technology supplants some skills while creating the need for new ones.

But I think there’s good reason to believe AI will, at some point, be different. A weaver replaced by a spinning machine during the industrial revolution could, hypothetically, redirect their skill – that learned dexterity and attention to detail could be channelled elsewhere. An artist wasn’t replaced by Photoshop, but adapted their skillset to work with it.

But what happens when machines outperform humans on every metric? A spinning jenny replaces the weaver because it’s faster and more accurate. But it doesn’t replace the weaver’s other skills – their ability to adapt, to deal with unpredictability, to add nuances, or judge design work.

Slowly but surely, though, machines get better at all of these skills. If machines outperform the body and the mind, then what’s left? Sure, right now ChatGPT and Midjourney produce a lot of mediocre, derivative, stolen work. We are only at the very beginning of a historic shift. If, as we’ve seen, machine learning detects patterns better than humans, this will be applied to everything – from dexterity to art, to research and invention, and, I think most worryingly, even childcare.

But this is academic. Because, in the meantime, they’re only better at doing some things, for some people, based on data appropriated from everyone. In other words, the AI is trained on knowledge from the very people it will eventually replace.

Trucking is a perfect example. Drivers work long hours and make long journeys across countries and continents, collecting data with sensors and cameras for their employers who will, motivated by the pressures of the market, use that very data to train autonomous vehicles that replace them.

Slowly, only the elite will survive. Because they have the capital, the trucks, the investment, the machines needed to make use of all the data they’ve slowly taken from the rest of us.

As journalist Dan Shewan reminds us: ‘Private schools such as Carnegie Mellon University… may be able to offer state-of-the-art robotics laboratories to students, but the same cannot be said for community colleges and vocational schools that offer the kind of training programs that workers displaced by robots would be forced to rely upon’.

Remember: intelligence is physical.

Yes, it’s from those stolen images and books, but it also requires expensive servers, computing power, sensors and scanners. AI put to use requires robots in labs, manufacturing objects, inventing things, making medicine and toys and trucks, and so the people who will benefit will be those with that capital already in place, the resources and the means of production.

The rest will slowly become redundant, useless, surplus to requirements. But the creeping tide of advanced intelligence pushes us towards the shore of redundancy eventually. So as some sink and some swim, the question is not what AI can do, but who it can do it for.

 

The End of Humanity

After a shooting at Michigan State University, Vanderbilt University sent students a letter of consolation on the themes of community, mutual respect, and togetherness. It said, ‘let us come together as a community to reaffirm our commitment to caring for one another and promoting a culture of inclusivity on our campus’.

The bottom of the email revealed it was written by ChatGPT.

One student said, ‘There is a sick and twisted irony to making a computer write your message about community and togetherness because you can’t be bothered to reflect on it yourself’.

While outsourcing the writing of a boilerplate condolence letter to a bot might be callous, it reminds me of Lee Sedol’s response when AlphaGo beat him at Go. He was deeply troubled – not because a cold unthinking machine had cheated him, but because it was creative, beautiful even; because it was so much better than him. In fact, his identity was so bound up in being a champion Go player that he retired from playing Go completely.

In most cases, this use of ChatGPT seems deceitful and lazy. But it is just preparation for a deeper coming fear: a fear that we’ll be replaced entirely. Vanderbilt’s use of ChatGPT is distasteful, I think, mostly because what the AI can produce at the moment is crass.

But imagine a not-too-distant world where AI can do it all better. Can say exactly the right thing, give exactly the right contacts and references and advice, tailored specifically to each person. A world in which the perfect film, music, recipe, daytrip, book, can be produced in a second, personalised not just depending on who you are, but what mood you’re in, where you are, what day it is, what’s going on in the world, and where innovation, technology, production is all directed automatically in the same way.

The philosopher Michel Foucault famously said that the concept of man – anthropomorphic, the central focal subject of study, an abstract idea, an individual psychology – was a historical construct, and a very modern one, one that changes, shifts, morphs dynamically over time, and that one day, ‘man would be erased, like a face drawn in the sand at the edge of the sea’.

It was once believed everywhere that man had a soul. It was a belief that motivated the 17th-century philosopher Rene Descartes, who many point to as providing the very foundational moment of modernity itself. Descartes was inspired by the scientific changes going on around him. Thinkers like Galileo and Thomas Hobbes were beginning to describe the world mechanistically, like a machine, running like clockwork, atoms colliding with atoms, passions pushing us around, gravity pulling things to the earth. There was nothing mysterious about this view. Unlike previous ideas about souls, and spirits, and divine plans, the scientific materialistic view meant the entire universe and us in it were explainable – marbles and dominoes, atoms and photons – bumping into one another, cause and effect.

This troubled Descartes. Because, he argued, there was something special about the mind. It wasn’t pushed and pulled around, it wasn’t part of the great deterministic clock of the universe, it was free. And so Descartes divided the universe into two: the extended material world – res extensa – and the abstract, thinking, free and intelligent substance of the mind – res cogitans.

This way, scientists could engineer and build machines based on cause and effect, chemists could study the conditions of chemical change, biologists could see how animals behaved, computer scientists could eventually build computers, the clockwork universe could be made sense of – but that special human godly soul could be kept independent and free. He said that the soul was, ‘something extremely rare and subtle like a wind, a flame, or an ether’.

This duality defined the next few hundred years. But it’s increasingly come under attack. Today, we barely recognise it. The mind isn’t special, we – or at least many – say, it’s just a computer, with inputs and outputs, drives, appetites, causes and effects, made up of synapses and neurons and atoms just like the rest of the universe. This is the dominant modern scientific view.

What does it mean to have a soul in an age of data? To be human in an age of computing? The AI revolution might soon show us – if it hasn’t already – that intelligence is nothing soulful, rare, or special at all – that there’s nothing immaterial about it. That like everything else it’s made out of the physical world. It’s just stuff. It’s algorithmic, it’s pattern detection, it’s data driven.

The materialism of the scientific revolution, of the Enlightenment, of the industrial and computer revolutions, of modernity, ushered in a period of great optimism about the human ability to understand the world; to understand its data, the patterns of physics, chemistry, biology, of people. It has been a period of understanding.

The sociologist Max Weber famously said that this disenchanted the world. That before the Enlightenment the world was a ‘great enchanted garden’ because everything – each rock and insect, each planet or lightning strike – was mysterious in some way. It did something not because of physics, but because some mysterious creator willed it to.

But slowly, instead, we’ve disenchanted the world by understanding why lightning strikes, how insects communicate, how rocks are formed, trees grow, creatures evolve.

But what does it really mean to be replaced by machines that can perform every possible task better than us? It means, by definition, that we once again lose that understanding. Remember, even their developers don’t know why neural nets choose the paths they choose. They discover patterns that we can’t see. AlphaGo makes moves humans don’t understand. ChatGPT, in the future, could write a personalised guide to an emotional, personal issue you didn’t understand yourself. Innovation will be decided by factors we don’t comprehend. We might not be made by gods, but we could be making them.

And so the world becomes reenchanted. And as understanding becomes superhuman, it necessarily leaves us behind. In the long arc of human history, this age of understanding has been a blip. A tiny island in a deep, stormy, unknown sea. We will be surrounded by new enchanted machines, wearables, household objects, nanotechnology.

We deny this. We say: sure, it can win at chess, but Go is the truly skilful game. Sure, it can pass the Turing Test, but not really. Can it paint? Sure, it can paint, but it can’t look after a child. Yes, it can calculate big sums, but it can’t understand emotions, complex human relationships or desires.

But slowly, AI catches up with humans, then it becomes all too human, then more than human.

The transhumanist movement predicts that to survive, we’ll need to merge with machines through neural implants, bionic improvements, by uploading our minds to machines so that we can live forever. Hearing aids, glasses, telescopes, and prosthetics are examples of ways we already augment our limited biology. With AI-infused technology, these augmentations will only improve our weaknesses, make us physically and sensorially and mentally better. First we use machines, then we’re in symbiosis with them, then eventually, we leave the weak fleshy biological world behind.

One of the fathers of transhumanism, Ray Kurzweil, wrote, ‘we don’t need real bodies’. In 2012 he became the director of engineering at Google. Musk, Thiel, and many in Silicon Valley are transhumanists.

Neuroscientist Michael Graziano points out, ‘We already live in a world where almost everything we do flows through cyberspace’.

We already stare at screens, are driven by data, wear VR. AI can identify patterns far back into the past and far away into the future. It can see at a distance and speed far superior to human intelligence.

Hegel argued we were moving towards absolute knowledge. In the early twentieth century, the scientist and Jesuit Pierre Teilhard de Chardin argued we’d reach the Omega Point – when humanity would ‘break through the material framework of Time and Space’, merging with the divine universe, becoming ‘super-consciousness’. 

But transhumanism is based on optimism: that some part of us will be able to keep up with the ever-increasing speed of technological adaptation. As we’ve seen, so far, the story of AI has been one of absorbing, appropriating, stealing all of our knowledge, taking more and more, until what’s left? What’s left of us?

Is it safe to assume there are things we cannot understand? That we cannot comprehend because the patterns don’t fit in our heads? That the speed of our firing neurons isn’t fast enough? That an AI will always work out a better way?

The history of AI fills me with sadness because it points towards the extinction of humanity, if not literally, then in redundancy. I think of Kierkegaard, who wrote in the 19th century, ‘Deep within every man there lies the dread of being alone in the world, forgotten by God, overlooked among the tremendous household of millions and millions’.

 

Or a New Age of Artificial Humanity

Or, we could imagine a different world, a better one, a freer one. 

We live in strangely contradictory times, times in which we’re told anything is possible – technologically, scientifically, medicinally, industrially – where we will be able to transcend the confines of our weak fleshy bodies and do whatever we want. That we’ll enter the age of the superhuman.

But on the other hand, we can’t seem to provide a basic standard of living, a basic system of political stability, a basic safety net, a reasonable set of positive life expectations, for many people around the world. We can’t seem to do anything about inequality, climate, or war.

If AI will be able to do all of these incredible things better than all of us, then what sort of world might we dare to imagine? A potentially utopian one – where we all have access to personal thinking autonomous machines that build for us, transport for us, research for us, cook for us, work for us, help us in every conceivable way. So that we can create the futures we all want.

What we need is no less than a new model of humanity. The inventions of technology during the 19th century – railways, photography, radio, industry – were accompanied by new human sciences – the development of psychology, sociology, economics. The AI revolution, whenever it arrives, will come with new ways of thinking about us too.

Many have criticised Descartes’ splitting of the world into two – into thought and material. It gives the false sense that intelligence is something privileged, special, incomprehensible, and detached. But as we’ve seen knowledge is spread everywhere, across people, across the world, through connections, in emotions, routines, relationships – knowledge is everywhere, and it’s produced all of the time.

Silicon Valley has always thought that, like intelligence, it was detached and special. For example, Eric Schmidt of Google has said that ‘the online world is not truly bound by terrestrial laws’. In the 90s John Perry Barlow said that cyberspace consists of ‘thought itself’, continuing that ‘ours is a world that is both everywhere and nowhere, but it is not where bodies live’.

But as we’ve seen, our digital worlds are not just abstract code that doesn’t exist anywhere, mathematical, in a ‘cloud’ somewhere up there. It’s all made up of very real stuff, from sensors and scanners, cameras, labour, friendships, from books and plagiarism.

One of Descartes’ staunchest critics – the Dutch pantheist philosopher Baruch Spinoza – argued against Descartes’ dualistic view of the world. He saw that all of the world’s phenomena – nature, us, animals, forces, mathematical bodies and thought – were part of one scientific universe. That thought isn’t separate, that knowledge is spread throughout, embedded in everything. He noticed how every single part of the universe was connected in some way to every other part – that all was in a dynamic changing relationship.

He argued that the universe ‘unfolded’ through these relationships, these patterns. That to know the lion you had to understand biology, physics, the deer, the tooth, the savanna – all was in a wider context, and that the best thing anyone could do was to try and understand that context. He wrote, ‘The highest activity a human being can attain is learning for understanding, because to understand is to be free’.

Unlike Descartes, Spinoza didn’t think that thought and materiality were separate, but part of one substance – all is connected, the many are one – and so God or Meaning or Spirit – whatever you want to call it – is spread through all things, part of each rock, each lion, each person, each atom, each thought, each moment. It’s about grasping as much of it as possible. Knowing means you know what to do – and that is the root of freedom.

Spinoza’s revolutionary model of the universe lines up much better with AI research than Descartes’ does, because many in AI argue for a ‘connectionist’ view of intelligence: neural nets, deep learning, the neurons in the brain – they’re all intelligent because they take data about the world and look for patterns – connections – in that data.

Intelligence is not in here, it’s out there, everywhere.
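
To make that connectionist picture concrete, here’s a deliberately tiny sketch in Python – my own toy illustration, not drawn from any of the researchers or books cited here. A miniature neural network is shown four examples of the XOR pattern and, by repeatedly nudging its connection weights to reduce its error, finds the pattern on its own; every name and number in it is invented for the sketch.

```python
# A toy of the 'connectionist' idea: a tiny neural network that learns
# the XOR pattern purely from examples. Illustrative only -- real systems
# are vastly larger, but the principle of nudging connections is the same.
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: the hidden pattern

# The 'connections': two layers of weights and biases, randomly initialised
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: data flows through the connections
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every connection to reduce the prediction error
    d_out = out - y                      # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)   # error signal at the hidden layer
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```

The point of the toy is that the ‘knowledge’ of XOR ends up nowhere in particular: it’s smeared across the connections. That, loosely, is the Spinozan intuition the connectionists share – intelligence as patterns of relationship rather than a detached thinking substance.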

Crawford has emphasised that AI is made up of, ‘Natural resources, fuel, human labor, infrastructures, logistics, histories’.

It’s why Crawford’s book is called the Atlas of AI, as she seeks to explore the way AI connects to, maps, and captures the physical world. It’s lithium and cobalt mining, it’s wires and sensors, it’s Chinese factories and conflicts in the Congo – Intel alone uses 16,000 suppliers around the world.

Connections are what matters. Intelligence is a position, a perspective. It’s not what you know but who you know, what resources you can command; not how intelligent you are, but what, who, and where you’ve got access to.

I think this is the beginning of a positive model of our future with technology.

True artificial intelligence will connect to and build upon and work in a relationship with other machines, other people, other resources – it will work with logistics, shipping, buying and bargaining, printing and manufacturing, controlling machines in labs and research in the world. How many will truly have access to this kind of intelligence?

Connection, access, control will be what matters in the future. Intelligence makes little sense if you don’t have the ability to reach out and do things with it, shape it, be part of it, use it. AI might do things, work out things, control things, build things better than us, but if access to these great new industrial networks determines the shape all of this takes, then I think we can see why we’re entering, more and more, an age of storytelling.

If AI can do the science better than any of us, if it can write the best article on international relations, if it can build machines and cook for us and work for us, what will be left of us? Maybe our stories.

We will listen to the people who can tell the best stories about what we should be doing with these tools, what shape our future should take, what ethical questions are interesting, which artistic ones are. Stories are about family, emotion, journey and aspiration, local life, friendship, games – all of those things that make us still human.

Maybe the AI age will be more about meaning.

Meaning, here, is about being compelling, passionate, making a case, articulating – using the data and the algorithms and the inventions to tell a good story about what we should be doing with it all, about what matters. The greatest innovators and marketers knew this. It’s not the technology that matters, it’s the story that comes with it.

More films, music, more local products and festivals, more documentaries and ideas and art, more exploring the world, more working on what matters to each of us.

I like to think I won’t be replaced by ChatGPT because, while it might eventually write a more accurate script about the history of AI, it won’t do this bit as well – because I like to think you also want to know a little bit of me, my stories, my values, my emotions and idiosyncrasies, my style and perspective, who I am – so that you can agree or disagree with my idea of humanity.

I don’t really care how factories run, I don’t really care about the mathematics of space travel, I don’t care too much about the code that makes AI run. I care much more about what the people building it all think, feel, value, believe, how they live their lives. I want to understand these people as people so I work out what to agree with, and what not to.

We too often think of knowledge as kind of static – a body of books, Wikipedia, in the world ready to be scientifically observed. But we forget it’s dynamic, changing, about people, about lives.

It’s about trends, new friends, emotions, job connections, art and cultural critique, new music, political debate, new dynamic-changing ideas, hopes, interests, dreams, passions.

And so I think the next big trend in AI will be using LLMs on this kind of knowledge. It’s why Google tried and failed to build a social network. And why Meta, Twitter, and LinkedIn could be ones to watch – they have access to real-time social knowledge that OpenAI don’t, at the moment. Maybe they’ll try and build a social network based on ChatGPT? They do, at least, have even more data as they analyse not the ‘Pile’ of static books, but the questions people are asking, their locations, their quirks.

Using this type of data could have incredible potential. It could teach us so much about political, psychological, sociological, or economic problems if it was put to good use. Some, for example, have argued that dementia could be diagnosed by the way someone uses their phone.

Imagine a social network using data to make suggestions about what services people in your town need, imagine AI using your data to make honest insights into emotional or mental health issues you have, giving specific, research driven, personalised and perfect roadmaps on how to beat an addiction or an issue you have.

I’d be happy for my data to be used honestly, transparently, ethically, scientifically; especially if I was compensated too. I want a world where people contribute to and are compensated for and can use AI productively to have easier, more creative, more fulfilling, meaningful lives. I want to be excited in the way computer scientist Scott Aaronson is when he writes: ‘An alien has awoken — admittedly, an alien of our own fashioning, a golem, more the embodied spirit of all the words on the internet than a coherent self with independent goals. How could our eyes not pop with eagerness to learn everything this alien has to teach?’

 

Conclusion: Getting to the Future

So how do we get to a better future? To make sure everyone benefits from AI, I think we need to focus on two things. Both are types of bias: a cultural bias and a competitive bias. Then, further, we need to think about wider political issues.

As well intentioned as anyone might be, bias is a part of being human. We’re positioned, we have a perspective, a culture. Models are trained through ‘reinforcement learning’ – nudging the AI subtly in a certain direction.

As DeepMind founder Mustafa Suleyman writes in The Coming Wave, ‘researchers set up cunningly constructed multi-turn conversations with the model, prompting it to say obnoxious, harmful, or offensive things, seeing where and how it goes wrong. Flagging these missteps, researchers then reintegrate these human insights into the model, eventually teaching it a more desirable worldview’.

‘Desirable’, ‘human insights’, ‘flagging missteps’. All of this is being done by a very specific group of people in a very specific part of the world at a very specific moment in history. Reinforcement learning means someone is doing the reinforcing. On top of this, as many studies have shown, if you train AI on the bulk sum of human articles and books from history, you get a lot of bias, a lot of racism, a lot of sexism, a lot of homophobia.
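
As a cartoon of that flagging-and-reinforcing loop Suleyman describes, here’s a runnable Python sketch – and only a cartoon: real labs train a reward model on human preference comparisons and optimise the model with algorithms like PPO, none of which appears here. A stand-in ‘reviewer’ function flags missteps, and the model’s outputs are nudged accordingly; every name and response is invented for illustration.

```python
# A cartoon of reinforcement from human feedback: flagged outputs are
# nudged down, acceptable ones nudged up. Purely illustrative.
import random

random.seed(0)

responses = ["helpful answer", "obnoxious answer", "harmful answer", "neutral answer"]
weights = {r: 1.0 for r in responses}  # the 'model': one weight per response

def human_flag(response):
    """Stand-in for the human reviewer who flags missteps."""
    return "obnoxious" in response or "harmful" in response

def sample(weights):
    """Pick a response with probability proportional to its weight."""
    pick = random.uniform(0, sum(weights.values()))
    for r, w in weights.items():
        pick -= w
        if pick <= 0:
            return r
    return r

for _ in range(1000):
    r = sample(weights)
    if human_flag(r):
        weights[r] *= 0.9    # reinforce away from flagged outputs
    else:
        weights[r] *= 1.01   # reinforce towards acceptable ones
    total = sum(weights.values())
    weights = {k: v / total for k, v in weights.items()}  # renormalise

print({k: round(v, 3) for k, v in weights.items()})
# The flagged responses end up with near-zero weight: 'reinforced' out.
```

Notice who holds the pen: whoever writes the human_flag function decides what counts as a misstep. That’s exactly the point – reinforcement learning means someone, somewhere, is doing the reinforcing.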

Studies have shown how heart attacks in women have been missed because the symptoms doctors look for are based on data from men’s heart attacks. Facial recognition has higher rates of error with darker skin and with women because the systems are trained mostly on images of white men. Amazon’s early experiment in machine-learning CV selection was quietly dropped because it systematically downgraded CVs from women.

These sorts of studies are everywhere. The data is biased, but it’s also being corrected for, shaped, nudged by a group with their own biases.

Around 700 people work at OpenAI. Most of what they do goes on behind the black box of business meetings and board rooms. And many have pointed out how ‘weird’ AI culture is.

Not in a pejorative way – just consider how far from the average person you’re going if that’s your life experience: very geeky, for lack of a better word. Very technologically-minded, techno-positive. Very entrepreneurial.

They’re all – as Adrian Daub points out in ‘What Tech Calls Thinking’ – transhumanists, Ayn Rand libertarians, with a bit of 60s counterculture anti-establishmentarianism thrown in.

The second ‘bias’ is the bias towards competitive advantage. Again, the vast majority of people want to do good, want to be ethical, want to make something great. But often competitive pressures get in the way.

We saw this when OpenAI realised they needed private funding to compete with Google. We see it with their reluctance to be transparent with how they train datasets because competitors could learn from that. We see it with AI weapons and fears about AI in China. The logic running through is, ‘if we don’t do this, our competitors will’. If we don’t get this next model out, Google will outperform us. Safety testing is slow and we’re on a deadline. If Instagram makes their algorithm less addictive, TikTok will come along and outperform them.

This is why Mark Zuckerberg actually wants regulation. Suleyman, from DeepMind, has set up multiple AI businesses and actually wants regulation. Gary Marcus – maybe the leading expert on AI, who has sold an AI startup to Uber and founded a robotics company – actually wants regulation.

If wealthy, free market, tech entrepreneurs – not exactly Chairman Mao – are asking for the government to step in, that should tell you something.

Here are some things we do regulate in some way: medicine, law, clinical trials, pharmaceuticals, biological weapons, chemical, nuclear – all weapons actually – buildings, food, air travel, cars and transport, space travel, pollution, electrical engineering. Basically, anything potentially dangerous.

Ok, so what could careful regulation look like? I always think regulation should aim for the maximum amount of benefit for all with the minimal amount of interference.

First, transparency, in some way, will be central.

There’s an important concept called interoperability. It’s when procedures are designed in an open way so that others can use them too. Banking systems, plugs and electrics, screwheads, traffic control are all interoperable. Microsoft has been forced into interoperability so that anyone can build applications for Windows.

This is a type of openness and transparency. It’s for technical experts, but there needs to be some way auditors, safety testers, regulatory bodies, and the rest of us, can in varying ways see under the hood of these models. Regulators could pass laws on dataset transparency. Or transparency on where the answers LLMs give come from. Requiring references, sources, crediting, so that people are compensated for their work.

As Wooldridge writes, ‘transparency means that the data that a system uses about us should be available to us and the algorithms used within that should be made clear to us too’.

This will only come from regulation. That means regulatory bodies with qualified experts answerable democratically to the electorate. Suleyman points out that the Biological Weapons Convention has just four employees – fewer than a typical McDonald’s. Regulatory bodies should work openly with networks of academics and industry experts, making findings either public to them or public to all.

There are plenty of precedents. Regular audits, safety and clinical trials, transport, building, chemical regulatory bodies. These don’t even necessarily need a heavy hand from the government. Regulation could force AI companies of a certain size to spend a percentage of their revenue on safety testing.

Suleyman writes, ‘as an equal partner in the creation of the coming wave, governments stand a better chance of steering it toward the overall public interest’.

There is this strange misconception that regulation means less innovation. But innovation always happens in a context. Recent innovations in green technology, batteries, and EVs wouldn’t have come about without regulatory changes, and might have happened much sooner with different incentives and tax breaks. The internet, along with many other scientific and military advances, was not a result of private innovation but of an entrepreneurial state.

I always come back to openness, transparency, accountability, and democracy, because, as I said at the end of How the Internet Was Stolen, ‘It only takes one mad king, one greedy dictator, one slimy pope, or one foolish jester, to nudge the levers they hover over towards chaos, rot, and even tyranny’.

Which leads me to the final point. AI, as we’ve seen, is about the impossible totality. It might be the new fire or electricity because it implicates everything, everyone, everywhere. And so, more than anything, it’s about the issues we already face. As it gets better and better, it connects more and more with automated machines, capital, factories, resources, people and power. It’s going to change everything, and we’ll likely need to change everything too. We’re in for a period of mass disruption – and looking at our unequal, war-torn, climate-changing world – we need to democratically work out how AI can address these issues instead of exacerbating them.

If, as is happening, it surpasses us and drifts off, we need to make sure we’re tethered to it, connected to it, taught by it, in control of it. Otherwise, rather than being wiped out, I’d bet we’ll be left stranded in the wake of a colossal juggernaut we don’t understand, left bobbing in the middle of an endless, exciting sea.

 

Sources

Kate Crawford, The Atlas of AI

Meghan O’Gieblyn, God, Human, Animal, Machine

Michael Wooldridge, A Brief History of AI

Ivana Bartoletti, An Artificial Revolution: On Power, Politics and AI

Simone Natale, Deceitful Media: Artificial Intelligence and Social Life after the Turing Test

Nick Dyer-Witheford, Atle Mikkola Kjosen, and James Steinhoff, Inhuman Power: Artificial Intelligence and the Future of Capitalism 

Mary Gray and Siddharth Suri, Ghost Work: How To Stop Silicon Valley from Building a New Global Underclass

Toby Walsh, Machines Behaving Badly: The Morality of AI

Ross Douthat, The Return of the Magicians

Shoshana Zuboff, The Age of Surveillance Capitalism

Simon Stokes, Art and Copyright

https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ 

https://fortune.com/2023/09/25/getty-images-launches-ai-image-generator-1-8-trillion-lawsuit/ 

https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai?CMP=twt_b-gdnnews 

https://www.theguardian.com/books/2023/sep/21/your-face-belongs-to-us-by-kashmir-hill-review-nowhere-to-hide 

‘Sarah Silverman sues OpenAI and Meta over alleged copyright infringement in AI training’, Music Business Worldwide

https://www.musicbusinessworldwide.com/blatant-plagiarism-5-key-takeaways-from-universals-lyrics-lawsuit-against-ai-unicorn-anthropic/ 

https://copyleaks.com/blog/copyleaks-ai-plagiarism-analysis-report 

‘Spreadsheet of “ripped off” artists lands in Midjourney case’, The Register

https://www.forbes.com/sites/robsalkowitz/2022/09/16/midjourney-founder-david-holz-on-the-impact-of-ai-on-art-imagination-and-the-creative-economy/?sh=59073fc62d2b 

https://spectrum.ieee.org/midjourney-copyright 

https://www.theguardian.com/technology/2019/may/28/a-white-collar-sweatshop-google-assistant-contractors-allege-wage-theft 

‘Erasing Authors, Google and Bing’s AI Bots Endanger Open Web’, Tom’s Hardware

https://www.tomshardware.com/news/google-bard-plagiarizing-article 

https://www.forbes.com/sites/mattnovak/2023/05/30/googles-new-ai-powered-search-is-a-beautiful-plagiarism-machine/?sh=7ce30bb40476 

https://www.newsweek.com/how-copycat-sites-use-ai-plagiarize-news-articles-1835212#:~:text=Content%20farms%20are%20using%20artificial,New%20York%20Times%20and%20Reuters

https://www.androidpolice.com/sick-of-pretending-ai-isnt-blatant-plagiarism/ 

https://www.theverge.com/2023/10/6/23906645/bbc-generative-ai-news-openai 

https://www.calcalistech.com/ctechnews/article/hje9kmb4n 

https://www.theatlantic.com/technology/archive/2023/08/books3-ai-meta-llama-pirated-books/675063/ 

James Bridle, ‘The Stupidity of AI’, https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt 

 https://www.forbes.com/sites/gilpress/2020/04/27/12-ai-milestones-4-mycin-an-expert-system-for-infectious-disease-therapy/ 

https://www.technologyreview.com/2016/03/14/108873/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/ 

https://spectrum.ieee.org/how-ibms-deep-blue-beat-world-champion-chess-player-garry-kasparov 

https://www.forbes.com/sites/bernardmarr/2023/05/19/a-short-history-of-chatgpt-how-we-got-to-where-we-are-today/?sh=759e3a90674f  

https://www.theverge.com/2023/1/23/23567448/microsoft-openai-partnership-extension-ai 

‘AI promises jobs revolution but first it needs old-fashioned manual labour – from China’, South China Morning Post

‘Facebook Content Moderators Take Home Anxiety, Trauma’, Fortune

 https://nvidianews.nvidia.com/news/nvidia-teams-with-national-cancer-institute-u-s-department-of-energy-to-create-ai-platform-for-accelerating-cancer-research#:~:text=The%20Cancer%20Moonshot%20strategic%20computing,and%20understand%20key%20drivers%20of 

https://www.wired.com/story/battle-over-books3/ 

https://www.theguardian.com/books/2016/sep/28/google-swallows-11000-novels-to-improve-ais-conversation 

https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1?r=US&IR=T#chatgpt-passed-all-three-parts-of-the-united-states-medical-licensing-examination-within-a-comfortable-range-10 

https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set/ 

https://www.youtube.com/watch?v=aircAruvnKk&ab_channel=3Blue1Brown 

Medler, Connectionism, https://web.uvic.ca/~dmedler/files/ncs98.pdf 

The post How AI Was Stolen appeared first on Then & Now.

AntiSocial: How Social Media Harms https://www.thenandnow.co/2024/02/28/antisocial-how-social-media-harms-2/ Wed, 28 Feb 2024 10:10:09 +0000

Take a look at this graph. It shows an increase in young American teenagers’ rates of depression, with a notable uptick, especially in girls, since around 2010. It, and studies like it, are at the centre of a debate around social media and mental health.

Findings like this have been replicated in many countries: in many cases, reports of mental health problems – depression, anxiety, self-harm, suicide, and so on – have almost tripled.

Psychologist Jonathan Haidt argues that the timing is clear: the cause is social media. Others have pointed to the 2008 financial crash, climate change, worries about the future. But Haidt asks why these would affect teenage girls in particular.

He points to Facebook’s own research, leaked by the whistleblower Frances Haugen, which found: ‘Teens themselves blame Instagram for increases in the rate of anxiety and depression… this reaction was unprompted and consistent across all groups’.

In 2011, in surveys, around one in three teenage girls reported experiencing persistent ‘sadness or hopelessness’. Today, the American CDC’s Youth Risk Behavior Survey reports that 57% do. In some studies, shockingly, 30% of young people say they’ve considered suicide, up from 19%.

At least 55 studies have found a significant correlation between social media and mood disorders. A study of 19,000 British children found the prevalence of depression was strongly associated with time spent on social media.

Many studies have found that time watching television or Netflix is not the problem: it’s specifically social media.

Of course, causation rather than correlation is difficult to prove. Social media has become ubiquitous over a period in which the world has changed in many other ways. Who’s to say it’s not fear of existential threats from climate change, inequality, global politics, or even a more acute focus on mental health more broadly?

But Haidt points out that the correlation between social media use and mental health problems is stronger than the correlation between childhood lead exposure and impaired brain development, and stronger than that between binge drinking and poor overall health. And both of those are things we address.

He argues all of these studies – those 55 at least, and many, many more that are related – are not just ‘random noise’. He says a ‘consistent story is emerging from these hundreds of correlational studies’.

Instagram was founded in 2010, just before that uptick. And the iPhone 4 was released at the same time – the first iPhone with a front-facing camera. I remember when it was ‘cringe’ to take a selfie.

It also makes sense qualitatively. School age children are particularly sensitive to social dynamics, bullying, and self-worth. And now they’re suddenly bombarded with celebrity images, idealised body shapes and beauty standards, endless images and videos to compare themselves to on demand. On top of this, social networks like Instagram display the size of your social group for everyone to see, how many people like you, how many like your next post, your comments, and, more importantly, as a result, how many people don’t.

Social media is popularity quantified for everyone in the schoolyard to see.

One study, which designed an app that imitated Instagram, found that those exposed to images manipulated to look extra attractive reported worse body image in the period afterwards.

Another study looked at the roll-out of Facebook to university campuses in its early years and compared those time periods with studies of mental health. It found that when Facebook was introduced to an area, symptoms of poor mental health, especially depression, increased.

Another study looked at areas as high-speed internet was introduced – making social media more accessible – and then looked at hospital data. They concluded: ‘We find a positive and significant impact on girls but not on boys. Exploring the mechanism behind these effects, we show that HSI increases addictive Internet use and significantly decreases time spent sleeping, doing homework, and socializing with family and friends. Girls again power all these effects’.

Young girls, for various reasons, seem to be especially affected. However, the reasons why are difficult to establish – although idealised beauty standards are one obvious answer.

One researcher, the epidemiologist Yvonne Kelly, said: ‘One of the big challenges with using information about the amount of time spent on social media is that it isn’t possible to know what is going on for young people, and what they are encountering whilst online’.

In 2017, here in the UK, a 14-year-old girl, Molly Russell, took her own life after looking at posts about self-harm and suicide.

The Guardian reported: ‘In a groundbreaking verdict, the coroner ruled that the “negative effects of online content” contributed to Molly’s death’.

The report said that, ‘Of 16,300 pieces of content that Molly interacted with on Instagram in the six months before she died, 2,100 were related to suicide, self-harm and depression. It also emerged that Pinterest, the image-sharing platform, had sent her content recommendation emails with titles such as “10 depression pins you might like”’.

Studies have found millions of posts of self-harm on Instagram; the hashtag ‘#cutting’ had around 50,000 posts each month.

A Swansea University study, which included respondents with a history of self-harm and those without, found that 83% of them had been recommended self-harm content on Instagram and TikTok without searching for it. And three-quarters of those who self-harmed had harmed themselves even more severely as a result of seeing self-harm content.

One researcher said, ‘I jumped on Instagram yesterday and wanted to see how fast I could get to a graphic image with blood, obvious self-harm or a weapon involved… It took me about a minute and a half’.

According to an EU study, 7% of 11-16 year olds have visited self-harm websites. These are websites, forums, and groups that encourage and often explicitly admire cutting. One Tumblr blog posts suicide notes and footage of suicides. Many communities have their own language – codes and slang.

Another study found that, to no one’s surprise, those who had visited pro self-harm, eating disorder, or suicide websites reported lower overall levels of self-esteem, happiness, and trust.

 

Harm Principles

Okay, but anything can be harmful. Crossing the road carries risks. So do many other technologies – driving, air travel, factories, medicines. But with other technologies we identify those risks – the harmful effects or side effects – and try to ameliorate them.

These problems are bound up with, and often come into conflict with, other values that we hold dear – free speech, freedom for parents to raise children in the way they wish, the liberal live-and-let-live attitude.

But we usually tolerate intervention when there is a clear risk of harm.

Our framework for thinking about liberal interventionism comes from the British philosopher J.S. Mill’s harm principle. That my freedom to swing my fist ends at your face. That we are free to do what we wish as long as it doesn’t harm others.

Mill wrote: ‘The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others’.

We – usually the police or government – prevent violence before it happens, investigate threats, make sure food and medicine and other consumer products aren’t harmful or poisonous. We regulate and have safety codes to make sure technology, transport, and buildings are safe.

So to make sense of all of this, I want to start from cases where social media has actually harmed in some way, and work back from there. One of the problems, as we’ll see, is that it’s not always clear where to draw the line or how to draw it.

 

Harmful Posts

First, is there even any such thing as a harmful post? After all, a post is not the same as violence. It might encourage, endorse, promote, lead to, or raise the likelihood of harm. But it’s not harm itself. As Jonathan Rauch says, ‘If words are just as violent as bullets, then bullets are only as violent as words’.

But in other contexts, we do intervene before the harm is done. False advertising or leaving ingredients that might be harmful off labels. Libel laws. We arrest and prosecute for planning violence, even though it hasn’t been carried out. For threats.

These are cases where words and violence collide. I call it ‘edge speech’ – speech right at the edge of where an abstract word signals that something physical is about to be done in the world.

During the Syrian Civil War, which started in 2011, at least 570 British citizens travelled to Syria to fight, many of them for ISIS.

The leader – Abu Bakr Al-Baghdadi – called for Sunni youths around the world to come and fight in the war, saying, ‘I appeal to the youths and men of Islam around the globe and invoke them to mobilise and join us to consolidate the pillar of the state of Islam and wage jihad’.

ISIS had a pretty powerful social media presence. One recruitment video was called, ‘There’s no life without Jihad’. They engaged in a ‘One Billion’ social media campaign to try and raise one billion fighters. They had a free app to keep up with ISIS news, ‘The Dawn of Glad Tidings’, and used Twitter to post pictures, including those of beheadings.

The Billion campaign, with its hashtags, led to 22,000 tweets on Twitter within four days. The hashtag ‘#alleyesonISIS’ on Twitter had 30,000 tweets.

One Twitter account had almost 180,000 followers, its tweets viewed over 2 million times a month, with two-thirds of foreign ISIS fighters following it.

Ultimately, the British Government alone requested the removal of 15,000 ‘Jihadist propaganda’ posts.

Or take another example, what’s been called ‘Gang Banging’.

Homicides in the UK involving 16- to 24-year-olds have risen by more than 60% in the past five years. There are an increasing number of stories of provocation through platforms like Snapchat. In one instance in the UK, a 13-year-old was stabbed to death by two other boys, aged 13 and 14, after an escalation involving bragging about knives which began on Snapchat. In another, a 16-year-old was filmed dying on Snapchat after being stabbed.

One London youth worker told Vice, ‘Snapchat is the root of a lot of problems. I hate it’, ‘It’s full of young people calling each other out, boasting of killings and stabbings, winding up rivals, disrespecting others’.

Another said, ‘Some parts of Snapchat are 24/7 gang culture. It’s like watching a TV show on social media with both sides going at it, to see who can be more extreme, who can be hardest’.

Vice reports that much gang violence now plays out on Snapchat in some way, with posts being linked with reputation, impressing, threatening, humiliating, boasting, and, of course, eventually, escalating.

Youth worker and author Ciaran Thapar said, ‘When someone gets beaten up on a Snapchat video, to sustain their position in the ecosystem they have to counter that evidence with something more extreme, and social media provides space to do that. It is that phenomenon that’s happening en masse’.

The head of policing in the UK also warned that social media was driving children to increasing levels of violence (BBC News: https://www.bbc.co.uk/news/uk-43603080).

 

Hate Speech

Or take another example, controversial hate speech laws.

The UN says: ‘In common language, “hate speech” refers to offensive discourse targeting a group or an individual based on inherent characteristics (such as race, religion or gender) and that may threaten social peace’.

This latter part is often forgotten. That the point of hate speech laws – rightly or wrongly, as we’ll see – is to address threats of harm before they happen.

The Universal Declaration of Human Rights – which many countries have adopted, and most in Europe at least have similar laws to – declares that, ‘In the exercise of their rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society’.

But the International Covenant on Civil and Political Rights adds an exception: ‘Any advocacy of national, racial or religious hatred that constitutes incitement to discrimination, hostility or violence shall be prohibited by law’.

These laws were developed after the Nazi atrocities during the Second World War, and it was argued that laws of this type – now called hate speech laws – were necessary because the threat of harm from things like genocide was so great.

The UN’s David Kaye writes that, ‘the point of Article 20(2) was to prohibit expression where the speaker intended for his or her speech to cause hate in listeners who would agree with the hateful message and therefore engage in harmful acts towards the targeted group’.

It wasn’t meant to ban speech that caused offence, but to prevent speech that would lead to violence.

Of course, the problem is that this is very difficult to define, but by many metrics speech loosely defined as ‘hate speech’ has increased over the past few years.

In ‘Western’ countries including New Zealand, Germany, and Finland, 14-19% of people report being harassed online.

A 2020 study in Germany found observations of hate speech online had almost doubled from 34% to 67% in the last five years.

Just under 50% of people in the UK, US, Germany and Finland (aged 15 to 35) answered yes to: ‘In the past three months, have you seen hateful or degrading writings or speech online, which inappropriately attacked certain groups of people or individuals?’

And some research has found that youth suicide attempts are lower in US states that have hate crime laws.

 

The Edges of Harm

Okay, so when should social media companies and the government step in? Should social media companies host self-injury groups? Should the Ku Klux Klan have a Facebook page? What’s the difference between a joke and harassment? Should the police ever be involved? There’s also a variety of ways of addressing issues: from banning, to hiding posts, to shadow-banning and demonetising, to age restriction, to civil suits, fines, and prosecution.

Then there’s the problem of overreach. I’ve had this channel demonetised, and videos – ones I spend months working on and get nothing back from – age-restricted and demonetised, never to recover in the algorithm. This video is on a sensitive topic, and I wouldn’t be surprised if it has problems with reach and demonetisation, so if you’d like to support content like this then you can do so through Patreon below.

Okay, so how can we approach this? One answer I think we can rule out pretty quickly is the libertarian one. Let everyone do what they want, users and platforms alike, and social media platforms and pages will thrive or fail as a result.

First, no libertarian society in history has worked. And second, even in the early days of the internet, where in effect the libertarian approach did thrive, social media companies slowly realised that if they let their platforms be full of self-harm, pornography, violence, and more, that advertisers and users tend to leave quickly. So they started to self-regulate, and some say, over-regulate.

As a result, free speech has become a subject of fierce debate. This is, I think, for three reasons: first, that free speech is, correctly, considered one of our most important values. Second, that with the internet, we now have more speech than ever before. And third, because in some cases, speech clearly can lead to harm.

We’ve seen this: suicide, mental health issues, self-harm, eating disorders, depression, promotion of terrorism and gang violence, the promotion of hate speech that openly calls for fascism or genocide.

So should we restrict speech of this type?

First, we should acknowledge that there is no such thing as free speech absolutism. We already limit the fringes of speech in many ways: threats, harassment, blackmail, libel, slander, copyright, incitement to violence, advertising standards, drug and food labelling standards, broadcasting standards.

Furthermore, we restrict many freedoms based on the likelihood of causing harm: health and safety and sanitation in restaurants, building codes, speeding and drink driving laws, wider infrastructure requirements, air travel regulation, laws on weapons, knives, etc… the list goes on.

So if we regulate these things, why do we not regulate social media companies when there is a significant risk of harm? I think we should be careful, focusing only on those very ‘edge cases’. If there’s a substantial risk, or if regulation can effectively reduce a risk of harm while minimising the curtailing of freedoms, then it should be done by the institutions, companies, or governments that can do it. Importantly, how this is done should be subject to democratic debate.

Policy analyst and author David Bromell writes: ‘Rules that restrict the right to freedom of expression should, therefore, be made by democratically elected legislators, through regulatory processes that enable citizens’ voices’.

He continues: ‘Given the global dominance of a relatively small number of big tech companies, it is especially important that decisions about deplatforming are made within regulatory frameworks determined by democratically elected legislators and not by private companies’.

And none of this is to deny that it’s difficult, and finding that line, striking the right balance, is complex. But in the cases I’ve mentioned, with the statistics as they are, to do nothing would be irresponsible. In all of them, I think the potential for harm is often as clear as the potential for harm from, for example, libel.

Does this mean posts, forums, and speech of this type should be banned outright? Not always. Human rights expert Frank La Rue has argued that we should make clear distinctions between: ‘(a) Expression that constitutes an offence under international law and can be prosecuted criminally; (b) expression that is not criminally punishable but may justify a restriction and a civil suit; and (c) expression that does not give rise to criminal or civil sanctions, but still raises concerns in terms of tolerance, civility and respect for others’.

In other words, context, proportionality, tiered responses, all matter. There are many policies that can be put in place before banning or prosecution – not amplifying certain topics, age-restrictions, removing posts or demonetising before banning.

Haidt argues that big tech should be compelled by law to allow academics access to their data. One example is the Platform Transparency and Accountability Act, proposed by the Stanford University researcher Nate Persily. We could raise the age above which children can use social media, or force stricter rules. We could ban phones in schools.

Finally, democratic means transparent, so we can all have a debate about where the line is. YouTube is terrible at this. I don’t mind if a video gets demonetised or age-restricted because I’ve broken a reasonable rule. But it’s more often the case that I haven’t and they do it anyway.

The UK has just introduced an online safety bill which addresses much of this, and I agree with the spirit of it. The Guardian reports that, ‘The bill imposes a duty of care on tech platforms such as Facebook, Instagram and Twitter to prevent illegal content – which will now include material encouraging self-harm – being exposed to users. The communications regulator Ofcom will have the power to levy fines of up to 10% of a company’s revenues’.

It makes encouragement to suicide illegal, prevents young people seeing porn, and provides stronger protections against bullying and harassment, encouraging self-harm, deep-fake pornography, etc.

However, it also tries to force social media companies to scan private messages, which is an abhorrent breach of privacy, and a reminder that giving politicians the power to decide can carry as much risk as letting a single tech billionaire decide.

But ultimately, through gritted teeth, I remind myself that the principle remains: the more democratically decided, the better. And democratically elected politicians are one-step better than non-democratically elected big tech companies.

The post AntiSocial: How Social Media Harms appeared first on Then & Now.

Why We’re So Self-Obsessed https://www.thenandnow.co/2024/01/28/why-were-so-self-obsessed/ Sun, 28 Jan 2024 14:37:21 +0000

One of the best autobiographies ever written happens to be one of the first, one of the darkest, and one of the most creative. The title of Thomas De Quincey’s 1821 classic says it all: Confessions of an English Opium-Eater. Yes, it’s about addiction, but he uses the subject to explore something ground-breaking for the time – the inner universe.

We now live in an age of self-obsession. The era of everybody’s autobiography, as Gertrude Stein said. We all have our story, celebrities and politicians sell memoirs, we’re surrounded by reality TV and podcasts about personal growth, we live in a culture of self-development, the age of me.

De Quincey’s story begins with his pains and afflictions – toothache, poverty, hunger, sufferings – that ‘threatened to besiege the citadel of life and hope’, in his words. Crucially, he says that usually ‘Guilt and misery shrink, by a natural instinct, from public notice: they court privacy and solitude’.

His book is a confession because he says that usually people omit the ugly parts of their character and emphasise their success. He wanted to challenge that.

Then, he describes the relief of taking opium for the first time – ‘what an upheaving, from its lowest depths, of the inner spirit! what an apocalypse of the world within me!’

He describes an ‘abyss of divine enjoyment’ and ‘a panacea for all human woes: here was the secret of happiness, about which philosophers had disputed for so many ages, at once discovered: happiness might now be bought for a penny, and carried in the waistcoat pocket: portable ecstasies might be had corked up in a pint bottle: and peace of mind could be sent down in gallons by the mail coach’.

He uses the experience of his opium addiction to explore, psychologically and innovatively, that ‘abyss within’ – that ‘apocalypse of the world within me’.

This gazing into the abyss within is a mode of self-exploration that is still unfolding culturally. Compare the popular TV shows of this year with those of just twenty years ago. From The Last of Us to The Bear to Succession – these new shows are not about what they’re ostensibly about on the surface – zombies, cooking, or business – but about something much more universal: character. Much of their genius – much like Oppenheimer’s, for example – relies on shallow depth-of-field close-ups of the intense emotions displayed on a torn character’s face.

So where did this inward gaze come from? Before De Quincey’s time, we have to remember that knowledge, traditionally, was not about what’s in here – endogenous – but what’s out there – exogenous. Everything from moral rules coming from god, to science coming from studying the world, to art coming from ancient models and classical forms, was about studying and learning from the external world.

There’s a great book on the early Ancient Greeks of Homer’s time that influentially makes the case that the people of that period saw their emotions, passions, angers, and desires not as coming from within but as being placed in them by the gods.

Agamemnon says that Zeus put wild Ate – the goddess of mischief and delusion – in him, and made him act in a way contrary to how he usually would – he says the ‘plan of Zeus’ was fulfilled.

Other characters talk as if gods had taken away, changed, or destroyed a normal way of thinking and replaced it with another.

In other words, character came not from within but from without. Remember, the Ancients had no concept of personality, of biochemistry; they genuinely believed in their gods. Think about how powerful our imaginations are – why wouldn’t you believe that an intense experience of anger, say, and a loss of self-control was something planted there by the gods?

To take one more example, in the Christian framework, the self is always in reference to god. St Augustine may have written the first autobiography – The Confessions – in the 4th century, but his aim was to conform his behaviour to the external rules of Christian teaching and god’s will, not to discover some true self within.

The Medieval period was defined by roles you were born into – craftsman, butcher, peasant, lord – the rules were laid down, you didn’t question them. But in De Quincey’s time all of this was changing.

De Quincey was obsessed with two poets: William Wordsworth and Samuel Taylor Coleridge – he idolised them, wrote letters to them, befriended them, and travelled here to the English Lake District where they lived. He idolised them because, like other writers of the time such as Goethe, and philosophers like Jean-Jacques Rousseau, they were all working out a new idea of the self.

Wordsworth, before De Quincey, also wrote one of the first autobiographies – a very long poem called The Prelude, about ‘the growth of a poet’s mind’ – all about his childhood experiences. He admitted that ‘it is a thing unprecedented in literary history that a man should talk so much about himself’.

Wordsworth believed – adopting philosophical ideas from Germany, from people like Kant – that our inner life – the framework, structure, ideas, and emotions of the mind and body – shapes the information we get through the senses and patterns it, colours it, transforms and raises it – so that everyone sees a lake, for example, in a different way, with different memories, different goals and ideas. This was radically new.

It all started with Rousseau, who, not long before, in the 18th century, had a profound realisation. If, as he believed, it was the world, political systems, and social norms around him – those exogenous features – that were oppressing ordinary people, keeping them in chains, where was truth to be found? It could only be found, he decided, within.

Rousseau opened his autobiography – again, one of the first, probably the first, and again titled The Confessions – with this influential passage: ‘I am made unlike any one I have ever met; I will even venture to say that I am like no one in the whole world. I may be no better, but at least I am different’. Because of this he said, ‘I should like in some way to make my soul transparent to the reader’s eye’.

The historian W.J.T. Mitchell even described Rousseau as the first modern man – ‘the great originator’.

Goethe, inspired by Rousseau, said that he turned into himself and found a world. Rousseau said his project was one that had ‘no model’, and would have ‘no imitator’.

It was an act of pure, singular, individual, irrepressible creativity – from within.

This spirit of the age had a profound effect on many writers. Wordsworth wrote so much about this place because it was where he was from. He described the psychological ‘spots of time’ that influenced his individual character.

He drew on personal and local stories from places like this one, Dungeon Ghyll Force, where lambs almost drown because the shepherd boys aren’t concentrating – and he describes the boys’ inner worlds: their ‘pulses stop’, their ‘breath is lost’.

Or when he remembers stealing a boat at night when he was a child, becoming terrified in the middle of a lake at the dark, imposing shapes of the mountains around him, which haunted him and ‘moved slowly through my mind / By day, and were the trouble of my dreams’.

De Quincey admired all of this so much that he moved into Wordsworth’s cottage after him, and, when writing his own autobiography, wrote with his tongue in his cheek that there were ‘no precedents that he was aware of’ for this sort of writing.

But this is what makes De Quincey’s Confessions so innovative. He focuses on what is usually swept away in our self-aggrandising narratives about ourselves. He says, ‘Nothing, indeed, is more revolting to English feelings, than the spectacle of a human being obtruding on our notice his moral ulcers or scars, and tearing away that “decent drapery”’.

He took Wordsworth’s exploration of emotion, feeling, the self and the natural world and applied it to his own warped experiences with opium and urban life.

He describes how opium furnished ‘tremendous scenery’ in the dreams of the eater. He writes poetically about the ‘endless self-multiplication’ of the self going up and down symbolic staircases, and talks about the ‘wondrous depth’ within.

He uses metaphors like translucent lakes and shining mirrors and waters and oceans changing, surging, wrathful, to describe the changes in our own inner lives. ‘My agitation was infinite,’ he said ‘my mind tossed – and surged with the ocean.’

Of course, this new inner self didn’t appear from nowhere. It mirrored the scientific developments of the period. When astronomers like Galileo made observations about the universe that contradicted the taught wisdom of the Bible, all of these poets and philosophers were asking: ‘what about the universe within?’

Wordsworth’s spots of autobiographical time, de Quincy’s artistic description of personal challenges and addiction, Lord Byron’s model of heroic outcasts on voyages of self-discovery – all provide the groundwork for modern psychology, the modern self, and for the core injection of the modern world: create yourself as something new.

The autobiographical self is the model that helps us traverse the world. As psychologist Qi Wang says, the autobiographical self is, ‘self-knowledge that builds upon our memories and orients us toward the future, allowing our existence to transcend the here-and-now moment’.

This had an incalculable effect on the culture and politics of the modern period. These writers were a sensation across Europe and anyone who argued for individual rights, freedoms, the power of ordinary people, drew on them in some way.

On the other hand, it’s produced the narcissism and obsession with the self we see everywhere today. We no longer look outward as much, but spend a lot of time navel-gazing within.

I think the challenge of this century will be whether the world within can be reconciled with the world without.

CONSPIRACY: The Fall of Russell Brand
On the surface, this is a story about Russell Brand, but it’s also a bigger story – about institutions, trust, truth, uncertainty and fear, coverups and questioning, about how we all think. It delves into the most fundamental of human questions – what are the stories we tell ourselves? Who gets to tell those stories? What is the truth?

Russell Brand’s career as an entertainer was based on promiscuity, shock, extravagance, wit, intelligence – but that’s the case with so many comedians. In 2013 Brand did something most comedians don’t – he talked to Britain’s chief MSM political interrogator – Jeremy Paxman.

He said: ‘here’s the thing that we shouldn’t do: shouldn’t destroy the planet; shouldn’t create massive economic disparity; shouldn’t ignore the needs of the people. The burden of proof is on the people with the power’.

Brand told the incredulous Paxman that he didn’t vote – what’s the point? It went viral at the time, not just because Brand is one of Britain’s most recognisable faces, but because it seemed to many people to capture the mood: an ordinary person, telling the truth, up against the establishment.

It was the start of a shift towards politics.

In 2014, after a stint in Hollywood, he wrote a book – Revolution – which he discussed in another much talked about interview on Newsnight in the UK.

The same year, he started The Trews on Youtube, reading and commenting on the UK newspapers, interviewing a range of people, making mostly progressive arguments.

Of course, for many the pandemic changed things. In January of 2021, one subject stands out as getting millions more views than usual – The Great Reset.

It’s a topic Brand comes back to several times, and these videos always have many more views than most others. Around this time, Brand becomes more critical of policies surrounding Covid-19 – much of it reasonable. He shifts to his current, regular format – Stay Free – a full time regular show with millions of subscribers, advertisers, co-presenters, and guests.

A few months later, in the middle of 2021, stories began to be published, mostly drawing on Brand’s tweets, declaring that Brand ‘is a conspiracy theorist’.

By 2022, two competing narratives were set. For many, Brand had become a crackpot tinfoil-hat conspiracist. For Brand, there was a centralising, authoritarian, mainstream agenda – dominated by the MSM, the political establishment, big tech, and global corporate interests – to take away our freedoms.

I want to look at several stories as they unfolded – Covid and vaccines, the Great Reset, the Dutch farmers’ protest, and the allegations against Brand in September of 2023 – and ask a question that I think is fundamental to our information age: what does it mean to be called a conspiracy theorist? Especially given that there have been real conspiracies in history – Iran-Contra, Watergate, the Pentagon Papers, the Holocaust, all the way back to Julius Caesar’s assassination. All of these were the result of a conspiracy.

We’ll look at the history and the psychology too to try and separate fact from fiction, asking what drives Brand? Is there any truth to what he says? How can we think about the establishment, the mainstream media, the global elite – what does all of this tell us about the society we live in?

 

Contents:

 

The Great Reset

Brand talks about lots of different subjects, in a lot of different ways, but there are a few themes and topics he comes back to again and again.

He is aware he’s been framed as a conspiracy theorist, and frequently points out that he’s just reading facts from a variety of sources, some mainstream – the Guardian, the New York Times, the Washington Post – some more fringe. So how can we disentangle fact from fantasy?

At the beginning of 2021, clips about the World Economic Forum’s Great Reset circulated widely.

‘The Great Reset’ is an initiative from the World Economic Forum to drastically change the direction of the economy after Covid-19 by addressing social issues and, ‘to reflect, reimagine, and reset our world’.

Depending on who you ask, The Great Reset is anything from capitalist propaganda, to a genuine attempt to address the problems with capitalism, to a global conspiracy to exert more control over the population.

In the case of Brand, he argued, ‘there are some people that believe in shady global cabals running things from behind the scenes. Now, I don’t believe that, I believe that there are plain visible economic interests that dominate the direction of international policy’.

The video is reasonable. It criticises those who think the Great Reset is part of an authoritarian plan to take control through the justification of a manufactured fake climate crisis, for example.

The video was a hit: it has a million views, compared with his other videos of the time, which range around 100,000.

Brand makes another video, saying he’s decided to, ‘dive a little bit deeper into what you think, and further evidence, and your legitimate concerns’. The video currently has 2.7 million views.

A year later, he says: ‘bad news, the Great Reset, where you will own nothing and be happy, is being brought about by economic policy decisions made by your government that will facilitate the advance of the most powerful interests on earth’.

Brand continues that: ‘this is not conspiracy theory, I’m going to read you the actual facts here, I’m just using rhetoric that’s appealing, I’m an entertainer’.

Okay, so what is the Great Reset? It began as a book written by the founder of the corporate lobbying group the World Economic Forum, Klaus Schwab, and his co-author, Thierry Malleret.

One review describes three main themes:

  1. A ‘push for fairer outcomes in the global market and to address the massive inequalities produced by global capitalism’.
  2. ‘efforts to address equality and sustainability by urging governments and businesses to take things like racism and climate change more seriously’.
  3. Embrace ‘innovation and technological solutions [that] could be used to address global social problems’.

All of this sounds reasonable enough, admirable even. But, as political author Ivan Wecke points out, Schwab and the WEF’s ideas have something ‘fishy’ about them. The initiative can be seen as an exercise in corporate PR that gives multinational business leaders more power, not less, and political elites more power, not less. In another review of the book, Steven Umbrello concludes that it points out a lot of problems but offers no substantive solutions. And, of course, liberal elites love this stuff. Trudeau, for example, has used the language of needing a ‘reset’.

So there’s plenty to criticise. But as Brand explores the Great Reset, he connects it to other events – Black Rock buying up houses, for example. Emphasising one video ominously claiming that in the future you’ll own nothing and be happy, increasing government restrictions during the pandemic, Bill Gates, and the Dutch farmers protest.

He seems more aligned with Alberta premier Jason Kenney, who has claimed the Great Reset is a ‘grab bag of left-wing ideas for less freedom and more government’, and ‘failed socialist policy ideas’.

Brand uses the word ‘agenda’ frequently, and as he says, it’s not a conspiracy, he’s just reading the facts. So what is a conspiracy?

One definition is: ‘the belief that a number of actors join together in secret agreement, in order to achieve a hidden goal which is perceived to be unlawful or malevolent’ (Zonis and Joseph).

Another, by professor of psychology Jan-Willem van Prooijen, argues a conspiracy has five components:

  1. It makes connections that explain disconnected actions, objects, or people into patterns
  2. It argues that it was an intentional plan
  3. It involves a coalition or group
  4. The goal is hostile, selfish, evil, or at odds with public interest
  5. It operates in secret

Another definition proposes a simple four criteria model: ‘(1) a group, (2) acting in secret, (3) to alter institutions, usurp power, hide truth, or gain utility, (4) at the expense of the common good’.

There are also many types of conspiracy – within government and institutions; without, in the form of another country or nefarious power; above, in the form of shady elites; or even below, in the form of ordinary people overthrowing capitalism.

By Prooijen’s criteria, the Great Reset can be thought of as a conspiracy. After all, it’s intentional, it involves a group of people, some argue it’s not in the public interest, and it at least in part operates in secrecy at Davos. But Brand points out that it’s not a conspiracy, they’re saying it publicly: https://youtu.be/BXTPzFSx6oI?si=xMQIZ7u4xFfNYEc7&t=213

But I think the most interesting component is the first one – it makes connections that explain disconnected actions, objects, or people into patterns. Brand does this often, hopping between topics. So let’s look at one more topic: the Dutch farmers’ protest.

 

The Dutch Farmers Protest

The Dutch Farmer’s movement, beginning in 2019 and continuing today, are protests that argue that farmers are being unfairly targeted in efforts to address climate change.

The Dutch government have passed a range of policies aiming to cut nitrogen pollution and livestock farming in the country.

Brand says, ‘Bloody farmers, protesting, hating the environment. What is it? Are farmers all bastards? Or, are we seeing the beginning of the Great Reset play out in real time?’.

In short, for Brand, the Dutch government’s policies are a power grab, taking power from ordinary farmers, and he connects the protest to other stories he covers frequently – the Great Reset, the WEF, Bill Gates, and the MSM failing to cover the events appropriately.

Remember: ‘1: It makes connections that explain disconnected actions, objects, or people into patterns’.

So what’s really happening in the Netherlands?

Studies since the 80s have shown that nitrate pollution in the ground, getting into drinking water, and into wider ecosystems, has been an increasing problem. Nitrate pollution can cause blue baby syndrome, increases in bowel cancer, respiratory problems and premature birth.

It causes havoc in rivers, into which nitrate-based fertiliser runs, killing fish. The EU has identified Natura 2000 areas – fragile areas of nature that are home to rare and threatened species.

The Netherlands is an agricultural superpower. It’s the second largest exporter of agricultural products in the world, and the EU’s number one exporter of meat. Unfortunately, because so much of this farming happens close to designated Natura 2000 areas, nitrate pollution in the Netherlands is a big problem.

It’s also an EU member state – with its freedom of movement, courts, European Parliament, and so on, and support for staying in the EU in the Netherlands is very high – around 75%.

The EU has legislated to reduce nitrate pollution by 2030. More broadly, worldwide, agriculture contributes between a quarter and a third of all greenhouse gas emissions.

The Dutch government and the EU have agreed a 1.5 billion euro package to help 2,000–3,000 ‘peak polluter’ farmers innovate, relocate, or change business – or, as a last resort, to buy them out.

Obviously, among many farmers this is deeply unpopular.

‘For agricultural entrepreneurs, there will be a stopping scheme that will be as attractive as possible’, said nitrogen minister Christianne van der Wal in a series of parliamentary briefings. ‘For industrial peak polluters, we will get to work with a tailor-made approach and in tightening permits. After a year, we will see if this has achieved enough’.

Is it hypocritical to focus efforts on ordinary farmers rather than industrial peak polluters? On the surface, yes. And none of what I’ve just said is to blame farmers. But it’s obviously a complex problem with a lot of different interests at stake.

And in the middle of the video, Brand makes some reasonable points. In Sri Lanka, the outright banning of all fertilisers and pesticides was disastrous. He makes points about how efforts focus on farmers rather than shifting attention to corporations and the one percent. He says it’s always ordinary people rather than the powerful. All of which I can agree with. But he ignores some of the complexity. The Dutch government has also ordered coal power plants to close, for example. And the biggest polluter in the country is Tata Steel, which the regulation does focus on, and which is one of the country’s biggest employers of ‘ordinary people’.

But what stands out is the framing. It’s about the Great Reset, Bill Gates, the agenda, and the next piece of the puzzle…

 

COVID-19

There are several ongoing Covid-19 debates – the lab leak hypothesis, the efficacy of vaccines, big tech censorship, the legality or ethics of ‘lockdowns’ – and what should be clear, wherever you stand on a particular issue, is that each of these, while having some crossover, is somewhat different.

Some of the Brand’s many Covid-19 videos, like one on ‘vaccine passports’ for example, have a lot to agree with. However, like with other topics, Brand has a tendency to take a story and spin it into a wider pattern.

We’ve heard it a lot recently – it’s about ‘the narrative’.

The lab leak hypothesis isn’t about laboratory safety precautions or lack thereof, but about a coverup involving world government, the WHO, and big tech censorship. A WHO epidemic surveillance network across the world that monitors the outbreak of communicable disease becomes about an elitist surveillance society that spies on us. A doctor describing helping with outbreaks becomes an object of derision.

Take this video, one of many on vaccines. It’s about claims that Pfizer falsified vaccine trial data – a serious issue. It’s based on a BMJ article in which a whistleblower raised a number of concerns about a trial site they worked at, including:

  1. ‘Participants placed in a hallway after injection and not being monitored by clinical staff
  2. Lack of timely follow-up of patients who experienced adverse events
  3. Protocol deviations not being reported
  4. Vaccines not being stored at proper temperatures
  5. Mislabelled laboratory specimens, and
  6. Targeting Ventavia staff for reporting these types of problems’

All worrying concerns. And Brand repeatedly points out that he is just looking at the evidence objectively, just asking questions. He describes himself as a ‘glass funnel’, reporting information carefully and unbiasedly, while the MSM report it ‘morally’, telling people what to do.

There are a few points of irony here. First, Brand obviously has a moral position. We all do – unless we read a story without comment or opinion, which Brand is certainly not doing. Second, he says it’s not being reported on by the mainstream media, while using reports from mainstream institutions – the BMJ, CBS – and it has been reported by the Daily Mail and the Conversation. I found a brief reference to it in the Financial Times.

But, it might be reasonable to ask, should there not be more of an outcry? I can’t find it reported in the New York Times or the BBC, for example.

As the Conversation article points out, the concerns raised are important and worrying but don’t meaningfully undermine the wider evidence on Covid-19 vaccines. The complaints involved three Pfizer trial sites out of 150, and those three sites involved around 1,000 people.

Of course, across the world, hundreds of thousands took part in trials involving many different pharmaceutical companies, third-party trial centres, universities, and hundreds of regulatory bodies.

And most of the whistleblower’s complaints were about sloppiness – photos of things like needles thrown away inappropriately, participants’ IDs left out when they shouldn’t have been. One section reads, ‘a Ventavia executive identified three site staff members with whom to “Go over e-diary issue/falsifying data, etc.” One of them was “verbally counseled for changing data and not noting late entry,” a note indicates.’

Now, all of this is obviously worrying, good reporting, worth investigating, et cetera.

But it’s important to keep a sense of proportion. This is a single third-party trial centre in Texas, but Brand spins it into a wider narrative, claiming in another video, for example, that, ‘the mainstream media are preventing their own medical experts from accurately reporting on potential covid problems. Meanwhile, they continue to repress information about vaccine efficacy’.

As Prof Douglas Drevets, head of the infectious diseases department at University of Oklahoma has written: ‘There have been so many other studies of the Pfizer COVID-19 vaccine since the Phase III trial that people can be confident in its efficacy and safety profile. That said, Pfizer might be wise to re-run their analysis excluding all Ventavia subjects and show if that does/does not change the results. Such an analysis would give added confidence in the Phase III results’.

Pfizer then looked into the complaints and reported that, ‘Pfizer’s investigation did not identify any issues or concerns that would invalidate the data or jeopardize the integrity of the study’.

I’m not saying Pfizer’s claims should be taken at face value, or that pharmaceutical companies do not have perverse profit incentives, and so on, or that this isn’t worth someone digging into – the point is that this is a very small story, it has been looked into, and I’d imagine if you’re an editor at a TV station or newspaper, with hundreds of other competing stories to present, you’d decide on balance that there are more important stories. News reporting is a matter of emphasis. With only a limited number of positions on, for example, a front page each day, what’s included and what’s not?

Brand says that the mainstream media are censoring information when in fact the opposite is true. There are, again, issues with the mainstream media that we’ll come to, but it’s an endorsement of the press that, unlike in say China or Russia, a relatively minor issue could be reported and investigated.

Brand constantly says things like, ‘this is what happens when you politicise information’, without the awareness that by weaving insignificant details into wider narratives, deciding to give small stories weight, he is himself obviously politicising information.

The whistleblower was also reported to be a sceptic about vaccine efficacy more broadly. Brand also relies on jokes as innuendo to spin it into his wider conspiracy narrative – joking, for example, that the whistleblower was found dead.

He says, ‘individual freedom, individual ability to make choices for yourself, based on a wide variety of sometimes opposing evidence, and sometimes contradictory information, that places you in the position as an adult to make decisions for yourself. That’s not what the mainstream media want, but that’s what we demand on your behalf’.

But he doesn’t use a wide variety of evidence. He selects minor stories and connects them to the narrative. There are many, many, many more sources that report things like vaccines have saved three million lives in the US alone. 96% of doctors are fully vaccinated. Myocarditis has been reported in ten out of a million shots of the vaccine, but is more likely to be caused by the Covid-19 virus than the vaccine.

There are debates to be had, there always are, but what Brand doesn’t have is a good sense of the weight and significance of a story. And what he does have, as we’ll get to, is a good sense of how to tell a compelling, scary and entertaining story.

But wait – just because it’s a small story doesn’t make it automatically wrong. And yes, there are monied interests, powerful lobbies, values and ideas that are dominant and others that get sidelined. The risk is throwing out the baby with the bathwater. And as we saw at the beginning of the video, some conspiracies turn out to be true, and they weren’t reported on either. So is there any other way to separate fact from fiction?

 

History and Conspiracy

History is full of conspiracies, but they tend to be limited – a small group of people with a limited set of goals.

Most theories, though, have turned out to be wrong, or at the very least, there’s little evidence for them. Vaccines don’t cause autism. Obama was born in the US. The earth is not flat. Witches weren’t conspiring to make the harvests fail. Jews weren’t conspiring to take over the world in Weimar Germany.

But the idea that there is an agenda to take over the world – an idea that connects dots between disparate events – is as old as time. And such theories have usually turned out to be wrong, or at least, as we’ll get to, to miss the real point.

In the middle of the 19th century, it was a common belief in America that the Catholic Church and the monarchies of Europe were not only uniting to destroy the US, but had already infiltrated the US government. One Texas newspaper declared that, ‘It is a notorious fact that the Monarchs of Europe and the Pope of Rome are at this very moment plotting our destruction and threatening the extinction of our political, civil, and religious institutions. We have the best reasons for believing that corruption has found its way into our Executive Chamber, and that our Executive head is tainted with the infectious venom of Catholicism’.

Before that, it was the Illuminati, who, according to one book in 1797, were formed, ‘for the express purpose of ROOTING OUT ALL THE RELIGIOUS ESTABLISHMENTS, AND OVERTURNING ALL THE EXISTING GOVERNMENTS OF EUROPE’.

In an influential 1964 article, The Paranoid Style in American Politics, Richard Hofstadter points out that throughout history there have been suspicions of plots that have infected all major institutions, a fifth column, that all in power have been compromised.

The inventor of the telegraph, Samuel Morse, wrote that, ‘A conspiracy exists, its plans are already in operation… we are attacked in a vulnerable quarter which cannot be defended by our ships, our forts, or our armies’.

Morse, sounding just like Brand, wrote: ‘The serpent has already commenced his coil about our limbs, and the lethargy of his poison is creeping over us.… Is not the enemy already organized in the land? Can we not perceive all around us the evidence of his presence?… We must awake, or we are lost’.

Another article worried ‘that Jesuits are prowling about all parts of the United States in every possible disguise, expressly to ascertain the advantageous situations and modes to disseminate Popery’.

It was alleged that the 1893 depression was the result of a conspiracy by Catholics to attack the US economy by starting a run on the banks.

WWI was started because the Austro-Hungarian Empire believed the killing of the heir to the throne, Archduke Franz Ferdinand, was the result of a Serbian government conspiracy, and so attacked Serbia, setting off a chain of events leading to the war. There was no evidence for this. Historian Michael Shermer calls it the deadliest conspiracy theory in history.

Senator McCarthy famously believed a communist conspiracy had infiltrated every American institution. In 1951 he said that there was, ‘a conspiracy on a scale so immense as to dwarf any previous such venture in the history of man. A conspiracy of infamy so black that, when it is finally exposed, its principals shall be forever deserving of the maledictions of all honest men’.

During the resulting Red Scare, influential businessman Robert Welch wrote that ‘Communist influences are now in almost complete control of our Federal Government’ and the Supreme Court, and that they were in a struggle for control of ‘the press, the pulpit, the radio and television media, the labor unions, the schools, the courts, and the legislative halls of America’.

 

The Psychology of Conspiracy

One of the important distinctions here is between phrases like an ‘agenda’ and ‘conspiracy theory’. Brand, while defending himself as not being a conspiracy theorist, tends to use terms like ‘agenda’, ‘they’, and ‘the global elite’. The difference is between purposeful collusion across institutions and a pattern of say, certain values aligning between corporations and neoliberal politicians. Sometimes this is a gradient more than black and white, but another way we can untangle this is to look at studies about who believes in conspiracies and for what reasons.

Firstly, a lot of people believe in them. One third of Americans believe Obama is not American. A third that 9/11 was an inside job. A quarter that Covid was a hoax. 30 percent that chemtrails are somewhat true. And 33% believe that the government is covering something up about the North Dakota crash.

Never heard of it? That’s because researchers made it up. They polled people about their beliefs in conspiracies and included a completely made up event in North Dakota, and people instinctively believed that the government was hiding something about it.

People are naturally suspicious of power, which is of course a good thing, but for some people that leads to belief in a conspiracy. Why?

There are several factors that psychologists have looked at. The first is uncertainty. Psychologist Jan-Willem van Prooijen points out that, at a fundamental level, conspiracy theories are a response to uncertainty.

He writes: ‘Conspiracy theories originate through the same cognitive processes that produce other types of belief (e.g., new age, spirituality), they reflect a desire to protect one’s own group against a potentially hostile outgroup, and they are often grounded in strong ideologies. Conspiracy theories are a natural defensive reaction to feelings of uncertainty and fear’.

Responding to uncertainty and fear by hypothesising a threat is an evolutionary instinct. You’re better off jumping at the sight of a stick in the long grass than being bitten by a snake. The same thing happens when we see shapes in the darkness. We are risk-calculating creatures, always on the watch for danger.

And we do this by looking for patterns. Jonathan Kay writes that, ‘Conspiracism is a stubborn creed because humans are pattern-seeking animals. Show us a sky full of stars, and we’ll arrange them into animals and giant spoons. Show us a world full of random misery, and we’ll use the same trick to connect the dots into secret conspiracies’.

Psychologists call it pattern perception. I like to call it patternification.

Prooijen writes, ‘pattern perception is the tendency of the human mind to “connect dots” and perceive meaningful and causal relationships between people, objects, animals, and events. Perceiving patterns is the opposite of perceiving randomness’.

Again, all very reasonable. But sometimes the stick in the grass is just a stick. And sometimes an event is just random, meaningless, an accident, a result of incompetence, ignorance, and so on.

Prooijen writes, ‘Sometimes events truly are random, but most people perceive patterns anyway. This is referred to as illusory pattern perception: People sometimes see meaningful relationships that just do not exist’.

We all do it all the time. But what’s interesting in research is that some people see patterns more readily than others.

In studies, people who see patterns in abstract paintings, random dots, or coin tosses were more likely to believe in conspiracy theories and paranormal phenomena, and to be religious. People who believe in astrology, spiritual healing, telepathy, and communication with the dead are all more likely to believe in conspiracy theories. Belief in conspiracies has also been shown to increase after natural disasters.

Threat leads to the formation of a belief in a pattern in response to that threat.

In many – by no means all, but many – of Brand’s videos, small stories, a small sample of data, a single piece of evidence, are spun into a wider pattern.

In this video, he links the Great Reset and the WEF’s video, ‘you’ll own nothing and be happy’, to movements in the financial markets – a story about BlackRock buying up real estate, for example.

It’s all part of the agenda. He throws in that the mainstream media reports it as ‘good news’ – a housing bonanza that’s going to great for everyone – insinuating journalists are part of the agenda, ignoring the irony that he’s citing the New York Times.

What’s it got to do with the great reset? I honestly couldn’t tell you. I wonder if bitcoin.com – Brand’s source – has an agenda?! In this video, the great reset is linked to the farmers protests. Throw in Bill Gates, vaccines and it all becomes part of the simple good vs evil narrative.

Author Naomi Klein describes it as a ‘conspiracy smoothie’.

She writes, ‘the Great Reset has managed to mash up every freakout happening on the internet — left and right, true-ish, and off-the-wall — into one inchoate meta-scream about the unbearable nature of pandemic life under voracious capitalism’.

Conspiracy theories become, through patternification, totalisers. Everything gets lumped in together as part of the same single narrative. It becomes zero sum, good vs evil analysis. But this doesn’t answer why some people do this and others don’t, nor does it answer when the dots should be connected. To see why people do this, we’ll look at two categories: cognitive biases and the need for control.

 

Cognitive Biases

Studies have shown that higher education roughly halves the tendency to believe in conspiracies, from 42% to 22%. Why is this?

It’s kind of counter intuitive, because in many ways education actually teaches you, more than anything, to be sceptical. The scientific method, for example, is built on scepticism of received wisdom. In history, you’re taught to be sceptical of and scrutinise the literature and sources. In politics, many approaches – liberalism, Marxism, poststructuralism, and more – are, at their core, sceptical about the state and institutional power. If you’re sceptical about what you’re told, surely you’re more likely to believe that something is going on behind the scenes.

Except, while scepticism is key, education also teaches you to draw on evidence, to be led by evidence as much as possible – and, importantly, by all of the evidence.

If you only drew on bitcoin.com to make a case, you wouldn’t get far. This is why most undergraduate essays, dissertations, and papers submitted to journals require a literature review: showing that you’ve assessed and understood the literature, identified weaknesses, and made an argument.

In fact, the very basis of the modern scientific method in both the hard sciences and the social sciences and humanities is peer review – you must reference, show you understand the evidence, cite sources in a bibliography, show how the studies can be rerun and submit it to a body of peers to check over the work. This idea – that work is checked and can be responded to – runs through the heart of institutions.

We rely on the work of others, we build upon it, we respond to it. It has its limits: it’s often biased, it’s middle class, it can be wrong, subdisciplines are at loggerheads and criticise one another. But that’s precisely what makes it work – it’s tentative, it’s open to critique, and it can be checked. It’s how knowledge is built up communally. We’ll come back to its benefits and limits.

Another mistake conspiracy theorists make is proportionality bias: that a large effect needs a large cause to create a sense of ‘cognitive harmony’ – a balance between two ideas.

JKF couldn’t have been killed by a lone assassin, he was the president of the US. Princess Diana couldn’t have been randomly killed in a car crash, it must have been the royals. 9/11 couldn’t have been the result of 19 guys from the Middle East, it must have been the government.

We’re all human, including presidents. But if a US president and your neighbour Ned both died randomly on the same day – which one would there be a conspiracy about?

In one study, two groups were told two different stories about the president of a small country being assassinated. One group was told the assassination led to civil war; the other was told it didn’t. People were more likely to believe the assassination was a conspiracy if it led to a war.

Van Prooijen says the proportionality bias is the sense that ‘a big consequence must have had a big cause.’

There are some other biases. Tribalism leads us to protect our in-group and divide the world into us vs them, good vs evil. Another is the intentionality bias, which leads us to believe that the negative effects of others’ actions were intentional, whereas if we did them it would be an accident or we’d have good reason. Every banker is evil; our own pension fund is necessary. Or, in the form of Hanlon’s razor: ‘never attribute to malice that which is adequately explained by stupidity’. A politician does something that we perceive to be evil, when really they just don’t understand the topic, and so on.

So there are biases of thinking that we are all prone to, and I think in many ways they can be summed up in the way Brand thinks about the mainstream media.

 

The Mainstream Media Agenda

I think combining these fallacies and thinking about the way Brand takes a small story – like the Pfizer data falsification story – and turns it into a global elite agenda, gives us a good frame to think about Brand’s critique of the mainstream media.

It’s almost always part of the narrative, and even more so since the accusations against him in September.

The mainstream media have lots of problems – diverse problems – not least that they tend to be close to elites, institutionalised, cosy with politicians, centred in and overly focused on places like Washington, London, and New York, and have financial interests. The list goes on.

But to paint hundreds of thousands of journalists in the US and UK alone as part of an agenda is not only naïve, it’s dangerous.

Firstly, large media institutions could not get away with relying on small stories to construct speculative narratives like Brand does. They are always going to be led, for good or bad, by the dominant body of evidence available. If 99% of scientists believe that the vaccine is safe and effective, the BBC is going to report it that way. That’s what you get. Media literacy is to read the news widely, know an institution’s biases, and read elsewhere too.

Second, the surge in independent media is a great thing – you’re watching it, now – and obviously I’m an enthusiast. However ‘independent’ does not automatically mean authentic, unbiased, ‘giving the voiceless a voice’, ‘free’, or any other of the superlatives you often hear. Independent media largely rely on stories investigated and first reported by the same mainstream media they go on to criticise. Brand does this all the time. ‘Independent’ media rarely have the budget to execute years-long investigations, report from warzones, get access to archives and data quickly, get to the scene of a disaster or protest while it’s happening. Media institutions are important for this very reason. We need institutions with the budget and connections to do these things. Compare this to Brand reading from bitcoin.com.

Third, to paint everyone in the mainstream media in the same way is to ignore that the media is made up of millions of people around the world doing work passionately, carefully, with varied opinions and interests. To frame the mainstream media as monolithic, and use language like us vs them, is dangerous.

Brand paints anyone who is part of the ‘narrative’ as stooges for a centralising corporate and government agenda to take away your freedom. As any cursory look at a textbook on propaganda will show, that’s not how influence works in authoritarian countries, let alone liberal ones.

Brand relies on a top down model of propaganda in which power and money directs information, education, news, and opinion downwards through the press and the schools into the minds of a mindless population.

But as Jonathan Auerbach and Russ Castronovo point out in their introduction to the Oxford Handbook on Propaganda, propaganda is not total, even in totalitarian regimes. Persuasion by information is much more complex. They write, ‘people consume propaganda, but they also produce and package their own information just as they also create and spin their own truths.’

If you think the mainstream media are just propagandists then I implore you to just look at the facts of any of these issues. One million people have died from Covid in the US alone. And vaccine hesitancy has been estimated to have led to 300,000 preventable deaths. That’s a study from Brown, Harvard, the New York Times, and more. If you think the mainstream media are just propagandists, take a look through the Pulitzer Prize nominees at the investigations of the past year.

Again, I’m not saying that there aren’t many, many criticisms to be made. And that obviously the mainstream media are de facto in the centre, and that collective, radical, and socialist solutions or candidates will never get a fair shout and that lobbying and money will always delegitimise solutions that don’t align with their interests, and that supporting independent progressive media is crucial to countering that. But none of these criticisms paints the mainstream media as monolithic, evil, propaganda. It’s simplistic, it’s dangerous, it’s wrong, and as we’ll see, it’s often about narcissism, control, and in many cases outright lies.

 

The Recent Allegations and Rumble

In September of 2023, Channel Four and the Telegraph in the UK released an investigation into Brand that included allegations of sexual assault and rape. The day before, Brand posted a video denying the allegations.

What happened next, for many, seemed to prove Brand’s point. The media focused its attention on Brand, countless articles were written, news items broadcast, investigations launched at the BBC. He was dropped by his agent, a tour was cancelled, Youtube removed advertising from his account so he could no longer make money from it, a charity he did work for cut ties, and on and on.

One platform stood firm – Rumble – and a letter from a UK MP asking whether Rumble was going to stop Brand earning money was ridiculed and criticised by many, including Rumble, who said in an open letter: ‘We regard it as deeply inappropriate and dangerous that the UK Parliament would attempt to control who is allowed to speak on our platform or to earn a living from doing so’.

Inevitably it became a story about a story. Free speech, cancel culture, the establishment, the agenda.

There are, again, reasonable debates to be had here. I, for one, am not sure Youtube should have taken a stance based on allegations alone, no matter how strong. But a week or so after the allegations, the i newspaper in the UK ran a story about ads on Brand’s Rumble channel. One was from the Wedding Shop, who told them: ‘We are on the phone right now to our agency to ascertain which of these networks is showing our ads on Rumble so that we can actively remove ads from the platform… It goes without saying that we would not be happy to be featured on Russell Brand’s videos’.

They continued: ‘We use a media agency to spend our advertising budget and we have never chosen to advertise on Rumble, which must be part of the Google, Bing or Meta ad network. Where our ads are placed is not something we generally control – it would be for Google, Bing or Meta to decide whether or not to include or exclude particular platforms’.

It also reports that several companies including Burger King, Xero and Fiverr have stopped their ads running on Rumble. The stories are all similar.

A Fiverr spokesperson said, ‘These ads have been removed and our partners and teams have been alerted to ensure this doesn’t happen again. (We have excluded his channel on both YouTube and on Rumble.) We take brand safety and ethical advertising placement seriously, and we do not condone or support any form of violence or misconduct’.

A toy manufacturer said something similar.

In 2017, Youtube went through something called the ‘adpocalypse’. Advertisers pulled out of Youtube en masse when they realised that their ads were being played in front of videos that were accused of being anti-Semitic, homophobic, or just ‘scammy’.

All of this points to an obvious conclusion. Charities, agencies, advertisers, and institutions would prefer not to be linked with someone accused of sexual assault and rape – it’s not great PR. Youtube, in particular, has to balance between supporting creators and attracting advertisers, and so the middle ground is to limit ads on videos that advertisers are likely to pull out of, before the advertisers pull out of Youtube.

Of course, for Brand, this quickly became part of the agenda. In a Rumble video he criticises something called the Trusted News Initiative and argues that the mainstream media are targeting independent media in an attempt to control the narrative.

The Trusted News Initiative is an effort by many media organisations to counter fake news, false reports, viral disinformation, and so on. Not dissenting opinions, but purposefully false information, which studies have shown get shared six times as much as real news on sites like Facebook. Fake stories like this one: ‘Ilhan Omar Holding Secret Fundraisers with Islamic Groups Tied to Terror’, which got shared 14k times on Facebook alone.

Brand argues that, ‘plainly the TNI has an agenda, an explicit agenda to throttle and choke independent media’.

He uses a story from Reclaim the Net that focuses on a lawsuit filed in the US by Robert F. Kennedy Jr. that claims that dissenting views are being stamped out unconstitutionally by the TNI, violating freedom of speech and anti-trust laws.

It’s a minor story from nine months ago, but it’s useful for Brand because it supports his main point: he’s under attack.

Not only does he rely on a single fringe source to tell a biased story, he either lazily or wilfully distorts it. He reads from parts of the article, then at the end says again that, ‘plainly the TNI has an agenda, an explicit agenda, to throttle and choke independent media’.

But he’s completely distorted the language that even he’s just read a second ago. Again, there may be legitimate concerns about this, but if you look at the lawsuit, available online, filed to the district court, the so-called ‘explicit agenda’ is to find ways to ‘throttle’ and ‘choke’ false news stories. The comments about independent media are separate, and even these are misquoted.

The quote Brand reads out is from Jamie Angus at BBC News, saying: ‘Because actually the real rivalry now is not between for example the BBC and CNN globally, it’s actually between all trusted news providers and a tidal wave of unchecked [reporting] that’s being piped out mainly through digital platforms. … That’s the real competition now in the digital media world’.

This is a misquote. Brand, RFK, and others reporting this uncritically have conveniently left out the parts of the quote that dilute their point. Anyone can watch the clip; it’s linked below. Angus actually said that the divide is between all trusted news providers and ‘a tidal wave of unchecked, incorrect, or in fact, explicitly malicious, nonsense, specifically to destabilise regions of the world’.

How Brand has framed this is an outright lie.

The context is not only left out, it’s manipulated. The entire discussion is about how much newsrooms now have to do to verify the vast amount of information they’re dealing with; how much newsrooms have changed and the challenges they face; how many more technicians and specialists are required. He’s talking about wars; about verifying whether a TikTok from Ukraine is manipulated or useful evidence; about employing specialists in things like geolocation verification; about using satellite imaging to understand bombings. He even praises ‘citizen journalism’ and talks about opening up the news ecosystem. It’s an interesting watch.

Brand and his like have to do none of that difficult work. Not only that, but they rely on it, use it, feed off it, while denigrating the many ordinary people who make it possible.

You might say: well, Brand is just one person, he’s just an ‘entertainer’, he’s just commenting on articles and news, not producing it; it’s not his responsibility to fact-check every story. And that’s precisely the argument Brand makes too.

But if I – with a budget of almost nil – can quickly check a source, then maybe Stay Free with Russell Brand might also do a bit of work. I’m not saying they should have a newsroom of fact-checkers, specialists, and technicians sifting through every claim, but with the following, net worth, and status he has, he clearly has the budget to do due diligence, to check sources, to not misrepresent. With a channel that large you have a clear moral duty to. Instead, the laziest and most entertaining interpretation comes first; laziness fosters conspiracy because thoroughness exposes the truth.

I’m not saying we shouldn’t be very concerned with big tech being in control of what can and can’t be said. I disagreed with them taking down clips and interviews about vaccines and Covid. I think big tech platforms should be committed to freedom of speech.

But Angus is talking about genuine floods of disinformation – propaganda machines, Russian bot farms – designed to lie to people. And he’s right. Whatever the dangers and criticisms, I think it would be irresponsible of the mainstream media not to think carefully about this. It took me a few minutes to search through the court document and watch the clip to see that Brand and his source had either willingly or lazily misquoted the source so as to spin it into their own narrative, combined it with another quote to make it seem more malicious, and, in Brand’s case, used it to defend against accusations of sexual assault from many ordinary women. And if that doesn’t make you angry, I think it should.

 

Narcissism, News, Entertainment

In his book on conspiracy theories, Michael Shermer writes that seeing patterns everywhere – patternification – is the result of the need for control.

He writes: ‘the economy is not this crazy patchwork of supply and demand laws, market forces, interest rate changes, tax policies, business cycles, boom-and-bust fluctuations, recessions and upswings, bull and bear markets, and the like. Instead, it is a conspiracy of a handful of powerful people variously identified as the Illuminati, the Bilderberger group, the Council on Foreign Relations, the Trilateral Commission, the Rockefellers and Rothschilds’.

He continues: ‘conspiracists believe that the complex and messy world of politics, economics, and culture can all be explained by a single conspiracy and conspiratorial event that downplays chance and attributes everything to this final end of history’.

Instead of acknowledging messiness, complicated people, and multiple motives, conspiracy thinking sees a pattern as the result of purposeful agency in an attempt to control others.

Psychologists Mark Landau and Aaron Kay looked at studies that show how people compensate for perceived loss of control by trying to restore control themselves by ‘bolstering personal agency, affiliating with external systems perceived to be acting on the self’s behalf, and affirming clear contingencies between actions and outcomes’, and by ‘seeking out and preferring simple, clear, and consistent interpretations of the social and physical environments’.

In one study, participants were asked to think of an incident in their lives where they felt in control, while another group were asked to think of an incident where they didn’t. The latter group were more likely to believe in the conspiracy theories presented to them afterwards.

Psychologists Joshua Hart and Molly Graether did a study and found that conspiracy believers, ‘are relatively untrusting, ideologically eccentric, concerned about personal safety, and prone to perceiving agency in actions’.

One of the most important findings in studies is that narcissism – the belief in one’s own superiority and need for special treatment – is a strong predictor of believing in conspiracies. Narcissists are also more sensitive to perceived threats.

As one paper notes, ‘the effect of narcissism on conspiracy beliefs has been replicated in various contexts by various labs’, and that, ‘narcissism is one of the best psychological predictors of conspiracy beliefs’. It continues: ‘grandiose narcissists strive to achieve admiration by boosting their egos through a sense of uniqueness, charm, and grandiose fantasizing’.

Narcissism arises out of paranoia – the sense that threats are everywhere and powerful – and narcissists tend to respond with a bolstered sense of ego: the need for personal dominance and control. The need to feel unique makes narcissists feel like they have access to special information that others don’t.

It’s also been found that narcissists, ‘tend to be naïve and less likely to engage in cognitive reflection’. To put in bluntly, they’re more gullible. Narcissism has been linked to low levels of ‘intellectual humility’ by one study.

Obviously the entertainment industry is full of narcissists, who are particularly suited to voicing ‘special’ opinions and entertaining people. And there is a sense in which Brand knows this is entertainment. He says things like ‘you’re gonna love this story, it’s right up your alley’ – a strange way to frame a story if you think it’s existential: https://www.youtube.com/watch?v=fjGYsner6oI&ab_channel=RussellBrand

What you get is a kind of narcissistic news porn based on paranoia and a need for control. Brand’s an entertainer. I don’t want to be psychoanalysing anyone, but Joe Rogan, Elon Musk, and Brand – three major figures who talk about conspiracy theories a lot – come from a place where maybe they wished they had more agency, more control.

Musk had a very troubled and abusive childhood in South Africa, Joe Rogan has talked about how he moved around a lot, got bullied, and learned to fight to defend himself, and Brand has a well-documented history of addiction.

What this can lead to is a feeling of not being in control, a world of threat, and a sense of paranoia. Merriam-Webster defines paranoia as ‘systematized delusions of persecution’.

It leads to the need to form a narrative to help a person feel superior by having access to special knowledge about larger forces out to persecute them that they themselves have overcome.

In The Paranoid Style in American Politics, Hofstadter points out that the paranoid person sees an enemy that is pervasive, powerful, conspiratorial, pulling the strings, and, importantly, everywhere.

He writes that the proponents of the paranoid style ‘regard a “vast” or “gigantic” conspiracy as the motive force in historical events. History is a conspiracy, set in motion by demonic forces of almost transcendent power, and what is felt to be needed to defeat it is not the usual methods of political give-and-take, but an all-out crusade’.

Hofstadter continues that the enemy is ‘a perfect model of malice, a kind of amoral superman: sinister, ubiquitous, powerful, cruel, sensual, luxury-loving’… ‘He makes crises, starts runs on banks, causes depressions, manufactures disasters, and then enjoys and profits from the misery he has produced.’ He controls the press, ‘manages the news’, brainwashes, seduces, and has control of the educational system.

For the paranoid, ‘Nothing but complete victory will do. Since the enemy is thought of as being totally evil and totally unappeasable, he must be totally eliminated’.

This is why Brand seems to get on so well with Tucker Carlson. Tucker is well versed in something that his former employer Fox News revolutionised: news as entertainment – flashy graphics, sensationalist language, us vs them narratives, a conspiracy involving every institution.

Fox News realised that it’s the ongoing narrative – good vs evil – that keeps viewers tuning back in, and so Carlson, and Brand like him, pick a story or study or witness that supports the long-running dramatic narrative that gets the views, rather than the other way around.

It’s not reporting, it’s not journalism, it’s not news, it’s entertainment – they make a few points and the rest is how it’s said, with anger, or charisma, with jokes, with a story of good vs evil. It’s shallow news porn.

 

Public Trust, Private Solutions

None of this is to defend a political system that’s failing ordinary people. None of this is to deny that inequality is widening, wealth is moving upwards, wages are stagnant, and people are underrepresented. None of it is to deny that lobbying, money, selfish interests, and corporate greed all play a central role in politics. And none of this is to argue that there’s anything wrong with looking at big pharma’s financial incentives, criticising the great reset, or emphasising the concerns of farmers in climate policy. None of this is to say that we don’t need radical solutions.

What this is to absolutely reject is the framing. The paranoid style, the good vs evil narrative, the narrow selection of stories and evidence to suit your own dramatic narrative, the linking of every issue together into a totalising agenda.

Brand paints the mainstream media narrative as a lie; his is not only a bigger lie, but also a self-aggrandising and dangerous one.

George Monbiot writes about Brand that, ‘He appears to have switched from challenging injustice to conjuring phantoms. If, as I suspect it might, politics takes a very dark turn in the next few years, it will be partly as a result of people like Brand’.

If you’re not selecting the stories, facts, and evidence you cover by their wider significance, if you’re picking up perspectives and narratives based on fringe evidence and ideas, then all you’re doing is being led by your own individualistic, narcissistic ego. This is why Brand’s criticism of the mainstream media has only intensified since an investigation into his very well-known behaviour was released. It’s obvious that this isn’t objective analysis; it’s driven by his own fragility, his own little world.

And that’s when we get narcissistic news porn rather than careful study and analysis.

To paint the mainstream media as totally propagandised is to miss that people are multifaceted, complex, and have competing incentives. What many missed about the investigation into the recent allegations against Brand is that it was as much an investigation into a BBC and a Channel Four that facilitated Brand as it was about Brand himself.

Think about that. Channel Four aired an investigation into itself. Would you ever see that on Brand’s channel?

Brand does no original reporting; he sits in his shed and reads from journalists who have gone out and done the work, while at the same time howling about how terrible they are.

To be clear, again, I’m not saying that there aren’t many critiques of the mainstream media to be made, and more journalism and more independent voices are, ultimately, a great and potentially revolutionary thing to be supported.

But when you totalise and cram everything into the ‘agenda’, you paint the world in paranoid, apocalyptic terms of us vs them – terms that dehumanise the other as individuals to be gotten rid of – rather than looking at real collective, structural solutions to the problems we face.

This is why Brand gravitates towards figures like Tucker Carlson. Carlson doesn’t want collective solutions. What he wants is more of the same but with him in charge. If every institution is tainted, part of the ‘centralising agenda’, you get libertarianism, you get more corporate power, more greed, more unregulated pollution, more inequality. You get the opposite of what we need.

If you portray every institution as part of an agenda then what’s left to do? Revolution, maybe? But then what? Where are your solutions? What’s your theory? What replaces the current system?

Conspiracy thinking is the easiest type of thinking – everyone does it. It’s easy for showmen like Brand because at the end of reading off a few quotes from one source you can just link them to the agenda, the great reset, a ‘centralising agenda’, and Bill Gates.

It’s like having a safety blanket to return to that says: don’t worry, the world is evil, but you know the truth, you have it all figured out in a simple little package, don’t worry, you never have to think again.

 

Sources

Terry Pinkard, Does History Make Sense? Hegel on the Historical Shapes of Justice

Justin E. H. Smith, Irrationality: A History of the Dark Side of Reason

https://www.bbc.co.uk/news/entertainment-arts-66369532

Jan-Willem van Prooijen, The Psychology of Conspiracy Theories

https://theintercept.com/2020/12/08/great-reset-conspiracy/

Richard Hofstadter, The Paranoid Style in American Politics

https://www.dailymail.co.uk/news/article-10186363/Researchers-running-arm-Pfizers-Covid-jab-trials-falsified-data-investigation-claims.html

https://theconversation.com/vaccine-trial-misconduct-allegation-could-it-damage-trust-in-science-171164

https://inews.co.uk/news/russell-brand-advertisers-pulling-ads-rumble-site-comedian-videos-2633281?ito=twitter_share_article-top

https://ec.europa.eu/commission/presscorner/detail/en/IP_23_2507

https://www.theguardian.com/environment/2022/nov/30/peak-polluters-last-chance-close-dutch-government

Steven Umbrello, Should We Reset?

Michael Christensen and Ashli Au, The Great Reset and the Cultural Boundaries of Conspiracy Theory

Ivan Wecke, Conspiracy Theories Aside, There is Something Fishy about the Great Reset

Michael Shermer, Conspiracy: Why the Rational Believe the Irrational

Aleksandra Cichocka, Marta Marchlewska, Mikey Biddlestone, Why do narcissists find conspiracy theories so appealing?

Cosgrove TJ and Murphy CP, Narcissistic susceptibility to conspiracy beliefs exaggerated by education, reduced by cognitive reflection

https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?article=3750&context=historical

https://reclaimthenet.org/rfk-jr-sues-mainstream-media-misinformation-cartel

https://www.bbc.co.uk/beyondfakenews/trusted-news-initiative/role-of-the-news-leader/

https://www.hollandtimes.nl/articles/national/tata-steel-environmental-threat-or-essential-industry/

https://www.bmj.com/content/375/bmj.n2635

https://pubmed.ncbi.nlm.nih.gov/25688696/

 

The post CONSPIRACY: The Fall of Russell Brand appeared first on Then & Now.

A Note on Expertise https://www.thenandnow.co/2023/11/09/a-note-on-expertise/ Thu, 09 Nov 2023 11:59:14 +0000
When deciding what to make videos about, I am usually torn between several factors: most obviously, what I want to make. Second, what I think will do well, or what my audience would be interested in. Third, what I feel a responsibility to make.

The first can be gratifying, authentic, but also self-indulgent and unpopular. I have to make a living and ensure the viability of the channel in the future, and this isn’t always the way to do it.

The second – what I think an audience wants to see – is a useful corrective; it keeps me connected to what other people think is important, what they want to watch. Taken to the extreme, this can lead to ‘selling out’, but I think, when considered alongside the first, it allows you to think through how to meet your audience ‘where they are.’ It stops you from speaking simply to and for yourself, and forces you to consider how to connect with as many people as possible, which in turn might increase the influence you can draw on when you do want to make something very personal or otherwise unpopular.

If you can spark something in people, earn their trust, understand their viewpoint, and then try to convince them of what you want to say, what you believe, then the argument will be all the better for it.

Then there’s the third factor – responsibility. Hopefully, this always has some influence on both. But sometimes there are topics that I am not hugely interested in spending my time on personally, nor are they likely to be the most popular. However, knowing they are important nonetheless, I am always drawn to spend as much time as I can at least understanding them. This is especially the case when the topic overlaps with my background in History and Politics.

I think that, in most cases, if you know more than the median voice on YouTube, you have a responsibility to try to say something. Otherwise, the public sphere is left open to those who have big mouths, small minds, and zero tolerance for research.

Sometimes, there is a tipping point: a moment when you feel you have a reasonable grasp on the literature and a sense of the public discourse; when you feel compelled to say something rather than nothing. Because in this libertarian media ecosystem, I’ve seen videos with millions of views commenting with zero expertise, but also very knowledgeable experts making dogmatic arguments which I know can be quite reasonably refuted by other experts. Furthermore, the mainstream media seems incapable of providing good longform explainers and analysis. Both are increasingly rare, as all parties are incentivised to release sensationalist, short, and frequent content in an arms race for clicks.

Ultimately, the ideal is to spend my time on topics which fit neatly into the middle of the Venn diagram of all three – what I want to make, what people will be interested in, and what I have a responsibility to make. The needle will always be shifting, but whether I am successful in balancing those factors, I’ll leave to you.

The post A Note on Expertise appeared first on Then & Now.

Understanding Israel and Palestine: A Reading List https://www.thenandnow.co/2023/11/05/making-sense-of-israel-and-palestine/ Sun, 05 Nov 2023 13:43:15 +0000
It’s important to note that I am not an expert. However, I do have a background in history, philosophy, politics, and international relations, as well as relevant cursory knowledge to draw upon. For the past month or so I have been reading as widely as possible. More importantly, I have – to the best of my ability – been carefully selecting sources from different perspectives and trying to understand the people and debates. Because the online space seems bereft of reasonable longform analysis, I have decided to list what I’ve been reading here with a few comments. I will continue to add to it.

I’ve organised it loosely into books and longform articles. I will add some films, too.

 

Books

 

Abdel Monem Said Aly, Khalil Shikaki, and Shai Feldman, Arabs and Israelis: Conflict and Peacemaking in the Middle East
https://www.bloomsbury.com/uk/arabs-and-israelis-9781350321380/

The best general academic overview I’ve come across. Detailed and sensitive to different narratives. I think it is a long but invaluable starting point. The authors go through the most important historical moments, then present narratives that are commonly held, for example, in Palestine, in Israel, in Arab states, or in the US. The authors then attempt a short analysis comparing each.

Rashid Khalidi, The Hundred Years’ War on Palestine
https://www.amazon.co.uk/Hundred-Years-War-Palestine/dp/178125933X

Rashid Khalidi is probably the most well-known Palestinian-American historian working today. He is a professor of Modern Arab Studies at Columbia University. This is a morally charged narrative history which foregrounds Zionism as a settler-colonial movement and the displacement of the Palestinian people. It’s forceful, well-received but not without its critics, and concludes with the continuing marginalisation of the Palestinians in the Oslo Accords.

This NYTimes review is worth reading: https://www.nytimes.com/2020/01/28/books/review/the-hundred-years-war-on-palestine-rashid-khalidi.html

Ari Shavit, My Promised Land
https://www.amazon.co.uk/My-Promised-Land-Triumph-Tragedy/dp/0385521707

If you think of early Zionists as ‘evil’ colonists and occupiers, then this book is a useful corrective. It highlights the contradictions, romanticism, idealism, persecution, and naivety that motivated Zionists fleeing Europe in the late 19th century and onwards. Drawing on Shavit’s own family history, it’s movingly and personally written. Shavit asks how his well-intentioned Zionist forebears, moving excitedly to Palestine to build new lives, did not see the people already there. Or maybe did not care.

Alpaslan Özerdem, Roger Mac Ginty, Comparing Peace Processes
https://www.routledge.com/Comparing-Peace-Processes/Ozerdem-Ginty/p/book/9781138218970

The relevant chapter is a good summary of the peace process since the Oslo Accords and argues compellingly that the process has been one-sided.

Benny Morris, 1948 and After
https://global.oup.com/academic/product/1948-and-after-9780198279297

Benny Morris is one of the ‘new historians’ who challenged the traditional historical narrative in Israel. This is a good introduction to the debates and historiography that surround the 1948 war and beyond. The 1948 moment is probably the most crucial for understanding what motivates the Israeli right and the Palestinians in particular.

Thomas Friedman, From Beirut to Jerusalem
https://www.amazon.co.uk/Beirut-Jerusalem-Thomas-L-Friedman/dp/1250034418

I have only just started this, but Friedman is widely regarded as one of the best authors on the Middle East, having spent many years living in and reporting from both Beirut and Jerusalem. The preface alone is the best introduction I’ve read to the complex politics, relationships, and wars of the surrounding countries, particularly Lebanon. It gives you a good sense of the complexity of the entire region.

John J. Mearsheimer and Stephen M. Walt, The Israel Lobby and U.S. Foreign Policy
https://www.hks.harvard.edu/publications/israel-lobby-and-us-foreign-policy

Walt and Mearsheimer’s influential claim that AIPAC has a disproportionate influence on US foreign policy – policy which, they argue, would be much more effectively directed elsewhere. There is the original paper and the later book.

Asima Ghazi-Bouillon, Understanding the Middle-East Peace Process
https://www.routledge.com/Understanding-the-Middle-East-Peace-Process-Israeli-Academia-and-the-Struggle/Ghazi-Bouillon/p/book/9780415853200

This book focuses on the new historians, but also on the wider academic context in Israel, looking at concepts like ‘post-Zionism’ – the idea that Zionism is over, has fulfilled its goals, and should be superseded – and ‘neo-Zionism’ – the idea that new battles, over things like demographics, have begun. It is quite dense, drawing on philosophy and theory to think through the different discourses. But it is a useful frame if you want to understand how Israeli academia has concrete effects on what happens.

Avi Shlaim, Israel and Palestine: Reappraisals, Revisions, and Refutations
https://www.versobooks.com/en-gb/products/2094-israel-and-palestine

A broad and accessible overview of the history from the Balfour Declaration on, including discussions of the different debates in the historiography, especially on the most contentious moments.

 

Longform articles

 

Haaretz, A Brief History of the Netanyahu-Hamas Alliance
https://www.haaretz.com/israel-news/2023-10-20/ty-article-opinion/.premium/a-brief-history-of-the-netanyahu-hamas-alliance/0000018b-47d9-d242-abef-57ff1be90000

Makes the case that the Netanyahu government and Hamas benefit from each other.

A Threshold Crossed: Israeli Authorities and the Crimes of Apartheid and Persecution.
https://www.hrw.org/report/2021/04/27/threshold-crossed/israeli-authorities-and-crimes-apartheid-and-persecution

A thorough 200+ page report by Human Rights Watch describing how, by the ICC’s own definitions, the Israeli government is pursuing policies in the West Bank that can be described as apartheid: among other things, restricting freedom of movement and assembly, denying building permits to Palestinians but not Israelis, controlling water supplies, denying the right of return to Palestinians but not Israelis, and effectively ruling over a two-tier society.

Avi Shlaim, The War of the Israeli Historians
https://users.ox.ac.uk/~ssfc0005/The%20War%20of%20the%20Israeli%20Historians.html

A good introduction to a civil ‘war’ within Israel between two interpretations of the country.

Shlaim writes ‘this war is between the traditional Israeli historians and the ‘new historians’ who started to challenge the Zionist rendition of the birth of Israel and of the subsequent fifty years of conflict and confrontation’.

He continues, ‘the revisionist version maintains, in a nutshell, that Britain’s aim was to prevent the establishment not of a Jewish state but of a Palestinian state; that the Jews outnumbered all the Arab forces, regular and irregular, operating in the Palestine theatre and, after the first truce, also outgunned them; that the Palestinians, for the most part, did not choose to leave but were pushed out; that there was no monolithic Arab war aim because the Arab rulers were deeply divided among themselves; and that the quest for a political settlement was frustrated more by Israeli than by Arab intransigence.’

New Yorker, Itamar Ben-Gvir, Israel’s Minister of Chaos
https://www.newyorker.com/magazine/2023/02/27/itamar-ben-gvir-israels-minister-of-chaos

A good primer on the far-right in Israel.

 

More

I haven’t examined it in detail, but this reading list from UCLA looks useful: https://www.international.ucla.edu/israel/article/270276

 

 

The post Understanding Israel and Palestine: A Reading List appeared first on Then & Now.

The Origins of the Israel/Palestine Conflict https://www.thenandnow.co/2023/11/02/the-origins-of-the-israel-palestine-conflict/ Thu, 02 Nov 2023 14:59:14 +0000
The difficulty with the conflict between Israel and Palestine is that it has so many components. Immigration, national identity, empires and colonialism, democracy, religion and modernisation, terrorism, victimisation and persecution, war.

Even when focusing on the simplest building blocks of its very beginnings, we can see how, more than anything, subtle emphases – differences between well-intentioned observers – matter.

Because of this, I’ve carefully selected three main sources, and drawn on others. The first, and the one I recommend the most, is a very readable textbook called Arabs and Israelis: Conflict and Peacemaking in the Middle East. It’s by three scholars – Abdel Monem Said Aly, Shai Feldman, and Khalil Shikaki – and it pays careful attention to different historical narratives before analysing them as even-handedly as possible.

Then, Palestinian-American historian Rashid Khalidi’s The Hundred Years’ War on Palestine is written from a Palestinian perspective, while Israeli writer Ari Shavit’s My Promised Land is written from an Israeli one.

Of course, even referring to a perspective as ‘Israeli’ or ‘Palestinian’ is an enormous oversimplification, ignoring the vast differences there always are within and between groups. I’ve also drawn on a few historians who’ve been labelled Israeli ‘new historians’ – this loose group have challenged a traditional historical narrative in Israel, something we’ll come to. The literature on this is vast, intellectual humility is required, and so I will focus only on the origins. I’ll also return to a note on how and why I’ve approached this in the way I have at the end.

Towards the end of the 19th century, outbreaks of violence against Jews, called pogroms, increased across Eastern Europe.

In most countries, Jews were second-class citizens. They couldn’t own land or vote, had different and varying legal rights, were marginalised, lived in ghettos, and were often arbitrarily blamed for problems, targeted, and murdered.

This was coming to a head in the last two decades of the 19th century.

In 1881, in the Russian Empire, Jewish communities were attacked after Tsar Alexander II was assassinated and one of the conspirators turned out, incidentally, to have Jewish ancestry. A wave of pogroms resulted. But this was just one of many instances. In modern-day Moldova in 1903, 49 people were killed, many more were injured or raped, and homes were attacked.


It’s important to remember that this was a relatively borderless period. Palestine had been administered by the decaying Ottoman Empire for centuries. It was already home to a small number of Jews, who lived peacefully alongside a majority of Arabs – mainly Muslims, with a few Christians.

This was a period very different from today. Empires were the norm, borders were always changing, but the idea of ‘nation-states’, that peoples had the right to self-determine, to govern themselves, was on the rise. In 1800 the population of Palestine was 2% Jewish – some 6700 Jews. By 1890, 42,000 Jews had moved there, while the Arab population was around 500,000. By 1922, the Jewish population had doubled to 83,000.

Towards the end of the 19th century, Jewish settlers started buying land from absentee urban Arab landlords, leading to the displacement of the Arab peasants who had worked the land. Five hundred Arabs signed a letter of complaint to the Ottomans about this in 1891.

In My Promised Land, Ari Shavit describes the complex and sometimes contradictory motivations of the young Zionist movement at the end of the 19th century. For some, fleeing violence, it was a matter of life and death; for others, like his own British great-grandfather, it was a complex choice, combining solidarity with those fleeing persecution, a romantic idea of the Holy Land, and a modern idea of it too – that a new, thriving, modern future could be built in a land that was widely and falsely seen as empty.

Judaic Studies professor David Novak has written: ‘The modern Zionism that emerged in the late nineteenth century was clearly a secular nationalist movement’. However, it had deep religious and historical roots to draw on as well – Palestine as the Jewish ancestral homeland, the Exodus from Egypt to the promised land, and the later exiles from the region, and returns. But Zionism was never unified – many, many disagreed, religious and secular alike, and those who agreed or became Zionists did so for many reasons. Shavit points out that travellers from places like Britain didn’t see Palestine for what it was. They saw empty desert. They saw a few Bedouin tribes. They saw possibility. They didn’t see the Palestinian villages and towns – or maybe, he says, they chose to ignore them.

They also saw poverty – dirt huts and tiny villages. They believed, or said they believed – as many colonists also claim, it’s important to note – that the indigenous population would benefit from Jewish capital, education, technology, and ideas, and it’s true that many did.

Drawing on his great-grandfather’s diaries, Shavit asks why he ‘did not see’. After all, he was served by Arab stevedores and Arab staff at hotels, his carriages were driven by Arab villagers, he was led by Arab guides and horsemen, and he was shown Arab cities.

He uses a word: blindness. They were too focused on a romantic ideal of the area and the tragic oppression they were fleeing from. Shavit writes: ‘Between memory and dream there is no here and now.’

Not everyone was blind, though. At the beginning of the 20th century one Zionist author, Israel Zangwill, gave a speech in New York reporting that Palestine was not empty – that they would have to ‘drive out by sword the tribes in possession, as our forefathers did.’

This was heresy. No one wanted to hear it. He was ignored.

So between 1890 and 1913, around 80,000 Zionists emigrated; in the short period between WWI and WWII, the same number again. But this snowballed with the rise of Nazism in the 1930s. Between 1933 and 1940, 250,000 fled Germany. In 1935 alone, 60,000 moved to Palestine – more than the entire Jewish population in 1917.

With this came millions in capital and investment, and successful settlements, villages, and towns began growing.

This huge demographic movement coincided with the most important shift of power in the region: the defeat of the Ottoman Empire during WWI and the subsequent British takeover of control.

During WWI, Zionists in Palestine provided valuable information to Britain, formed spy networks, and volunteered to fight.

At the same time, a coalition of Arabs supported Britain by rising up against the Ottomans in the Great Arab Revolt. In return they were promised an independent Arab state by the British.

But Britain made several contradictory promises in quick succession.

In 1917, the Balfour Declaration – a memo between Foreign Secretary Lord Balfour and Lord Rothschild – committed the British Government to a home for the Jewish people in Palestine.

The Balfour Declaration neglected to mention the word ‘Arab’, though Arabs comprised 94% of the population. It read: ‘His Majesty’s government view with favour the establishment in Palestine of a national home for the Jewish people, and will use their best endeavours to facilitate the achievement of this object, it being clearly understood that nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine, or the rights and political status enjoyed by Jews in any other country.’

Here lies the root of the conflict; the contradictory promise: ‘when the promised land became twice promised’, in the words of historian Avi Shlaim.

Reporting this news in Palestine was banned by the British.

Instead, after the defeat of the Ottomans, the British and French divided the area into spheres of influence along lines drawn up in the Sykes-Picot Agreement of 1916, leaving Palestine as a mandate under British control. This was the famous ‘line in the sand’, made by people who had little knowledge of the area.

In a private 1919 memo only published 30 years later, Lord Balfour admitted: ‘In Palestine we do not propose even to go through the form of consulting the wishes of the present inhabitants of the country… The four Great Powers are committed to Zionism. And Zionism, be it right or wrong, good or bad, is rooted in age-long traditions, in present needs, in future hopes, of far profounder import than the desires and prejudices of the 700,000 Arabs who now inhabit that ancient land.’

The British Mandate gave the Jewish Agency in Palestine status as a public body to help run the country. Jewish communities and leaders formed institutions for self-defence and governance, which the British slowly recognised – essentially a government-in-waiting.

As a result, outbreaks of violence began to increase in the 1920s, getting progressively worse. In 1929, hundreds of Jews and Arabs were killed, and hundreds more wounded, in riots that began at the Western Wall in Jerusalem. Tensions rose, resulting in a series of massacres of Jews by Arabs, one of which, in Hebron, resulted in the deaths of almost 70 Jews and the injuring of many more. In response to the violence, the British declared a state of emergency. They proposed a legislative council comprised of six nominated British and four nominated Jewish members, plus twelve elected members, including two Christians, two Jews, and eight Muslims.

Seeing themselves outnumbered on a governing panel in a country in which they were the clear majority, Palestinians rejected the proposal. Another was proposed that was slightly fairer to the Palestinians, but this time it was rejected by the Zionists and the British parliament.

During the largest wave of immigration as the Nazis came to power, Palestinians called for a general strike demanding an end to Jewish migration and the sale of land to Zionists by absentee urban landlords, which continued to dispossess peasants working the land.


In 1936, an Arab revolt started when gunmen shot three Jews, setting off a series of attacks and counterattacks that led to the deaths of around 415 Jews and 101 British. The British response was swift and brutal: 5,000 Arabs were killed by the British, violence continued into 1937, and many were imprisoned and exiled. In all, 10% of the Arab population were killed, injured, exiled, or imprisoned.

Khalidi puts the figure higher, writing of ‘the bloody war waged against the country’s majority, which left 14 to 17 percent of the adult male Arab population killed, wounded, imprisoned, or exiled.’

Said Aly, Feldman, and Shikaki write that it was ‘disastrous for the Palestinians.’

In one instance an 81-year-old rebel leader was executed after being found with a single bullet. The British tied Palestinian prisoners to the front of their cars to prevent ambushes. Homes were destroyed. Many were tortured and beaten, including at least one woman.

However, as a result of the unrest, in 1937 a British government report recommended two states for the first time. The Arab state, though, would not be Palestinian: it was to be merged with Transjordan.

In 1939, British government policy, put forward in a white paper, called for a single, jointly administered Palestine, and limited Jewish immigration and land sales.

The Holocaust changed all this. Even more disastrous for the Palestinians was their leadership’s decision to side with Hitler in 1941, after he had told them that the Nazis had no plans to occupy Arab lands.

As the true extent of the Holocaust became clearer, the plight of European Jews became more urgent in the eyes of European and US policymakers. It’s crucial to remember the extent of the horror – six million Jews industrially murdered. After the war, there were 250,000 Jews living in refugee camps in Germany alone. Britain was bankrupt and was pulling out of many of its former colonies. Syria, Lebanon, Jordan, and Egypt gained their independence, and they formed the Arab League.

More plans were proposed, including the Morrison-Grady Plan in 1946, which called for two separate autonomous Arab and Jewish regions under British defence, and which was again rejected by both Zionists and Palestinians.

A UN plan in 1947 proposed 43% of the area going to Palestinians, despite them comprising two thirds of the population. It was rejected by the Arab Higher Committee who called for a three-day general strike.

The newly independent (or at least quasi-independent) surrounding Arab states were becoming increasingly hostile to Zionism and increasingly invested in the plight of the Palestinians. But they also saw potential to increase their own territory or to gain power in the region. Egypt saw itself as a new Ottoman Empire. King Abdullah of Transjordan saw Palestine as part of Transjordan. He thought that victory in the war against Israel would be secured in ‘no more than ten days.’

The USSR, seeing the potential of a state of Israel as a socialist ally, provided weapons to the Zionists. Seeing his forces as decisively outnumbered and outgunned, with no tanks, navy, or aircraft (the Arab countries, to varying degrees, did have this equipment), Ben-Gurion secured a deal with Czechoslovakia for $28m worth of weapons and ammunition, increasing the weapons supply by 25% and the ammunition supply by 1000%. In 1968, Ben-Gurion remembered: ‘the Czech weapons truly saved the state of Israel. Without these weapons we would not have remained alive’.

By now the Palestinians and Zionists were in a state of civil war, with continued attacks and counterattacks.

In early 1948, knowing the British would leave, Arab countries were preparing to invade and Jewish state institutions-in-waiting prepared a plan of defence. And there were already Jewish settlements outside of the proposed UN partition boundaries, and of course, many Palestinian areas within.

Zionist leadership prepared what was referred to as Plan D, which included, ‘self-defense against invasion by regular or semi-regular forces’, and ‘freedom of military and economic activity within the borders of the [Hebrew] state and in Jewish settlements outside its borders’.

All of this was made worse by British bankruptcy and by a hard-line Zionist militant group called the Irgun, which bombed the British Mandate headquarters, killing 92 people, and was involved in skirmishes with Palestinians. In one attack in April 1948, the Irgun killed 115-250 men, women, and children in a village near Jerusalem, despite a non-aggression pact.

So on 15 May 1948, the British left. The day before, David Ben-Gurion declared the establishment of the new state of Israel. The day after, a coalition of Arab forces from Egypt, Jordan, Syria, Lebanon, and Iraq invaded.

For the most part, Israel captured and defended the areas allotted to them by the 1947 UN plan, as well as areas outside of it.

Hundreds of thousands of Palestinians were forced to flee their homes. Palestinians call it the Nakba – the Catastrophe.

The result of the war was the Gaza Strip coming under Egypt’s control; the West Bank contested but under the control of Jordan’s forces, which annexed it in 1950; and anywhere between 400,000 and a million Palestinians displaced.

There is complexity, and this is only a small fraction of this story, but it’s impossible to ignore that the Nakba was a catastrophe – power differentials, foreign influence, empire, failures to compromise, perpetration of atrocities, the loss of homes and land that would never be returned to. The Palestinians were divided, outnumbered, and kept weak by Britain, Zionists, the US, the USSR and their surrounding Arab neighbours.

Journalist Arthur Koestler famously said that, ‘One nation solemnly promised to a second nation the country of a third’.

While British Prime Minister Neville Chamberlain had tried to limit immigration to Palestine, he was replaced by Winston Churchill, one of the biggest supporters of Zionism in British public life. In 1937 Churchill said of Palestine that: ‘I do not agree that the dog in a manger has the final right to the manger even though he may have lain there for a very long time. I do not admit that right. I do not admit for instance, that a great wrong has been done to the Red Indians of America or the black people of Australia. I do not admit that a wrong has been done to these people by the fact that a stronger race, a higher-grade race, a more worldly wise race to put it that way, has come in and taken their place’.

In response to the UN plan to partition Palestine in 1947, several Arab countries warned of, or even threatened, violence against and expulsion of Jews in their own countries. In 1950 and 1951, Iraq stripped Jews of their Iraqi nationality and property rights. Antisemitism in Yemen led to the migration of 50,000 Jews between 1949 and 1950. There were attacks on Jews in Tripoli before the war, in 1945. Whether punitive policies and attitudes began before the war or as a result of it is a matter of debate.

What becomes clear, though, is that moral questions depend on the minutiae of often unanswerable questions; ones that historians are still, often acrimoniously, debating.

Who – which groups and subgroups – were most responsible for the violence in ’47? Were 19th-century Zionists ‘blind’, ‘altruistic’, in existential danger? Were they colonisers in the usual sense? Or victims fleeing violence in Europe?

Shavit writes that, ‘these pilgrims do not represent Europe. On the contrary. They are Europe’s victims. And they are here on behalf of Europe’s ultimate victims.’

Anyone who tells you that answers are easy to come by is wrong. Antisemitism was at its height in the 1940s. The Holocaust had just happened. Jewish immigrants had purchased land and settled in Palestine peacefully for decades. But amongst these difficulties, there are some indisputable facts. The UN partition plan offered Palestinians 43% of the land despite them comprising 68% of the population. And around 700,000 Palestinians became refugees.

Shavit cites a letter written by an Israeli he knew who fought in the 1947-48 war. He wrote about the time: ‘when I think of the thefts, the looting, the robberies and recklessness, I realize that these are not merely separate incidents. Together they add up to a period of corruption. The question is earnest and deep, really of historic dimensions. We will all be held accountable for this era. We shall face judgment. And I fear that justice will not be on our side’.

And this is one report from an Israeli military governor, reporting a conversation with Palestinian dignitaries when Palestinians were forced from the small city of Lydda in 1948:

DIGNITARIES: What will become of the prisoners detained in the mosque?

GOVERNOR: We shall do to the prisoners what you would do had you imprisoned us.

DIGNITARIES: No, no, please don’t do that.

GOVERNOR: Why, what did I say? All I said is that we will do to you what you would do to us.

DIGNITARIES: Please no, master. We beg you not to do such a thing.

GOVERNOR: No, we shall not do that. Ten minutes from now the prisoners will be free to leave the mosque and leave their homes and leave Lydda along with all of you and the entire population of Lydda.

DIGNITARIES: Thank you, master. God bless you.

And in many cases, people left before the war broke out. In one case, an Israeli mayor even begged the Palestinians to stay – although this was the only such case.

For many years, the ‘Israeli’ narrative – although to call it that is far too simplistic, ignoring the disagreements, differences, and dissent within the conversation – was that the surrounding Arab states called upon the Arabs in Palestine to leave so that they could invade.

School books in Israel taught that Israelis wanted peace, but they were surrounded by enemies who wanted their destruction; that the Arabs fled to safety as a natural process of war.

This was challenged in the 1980s as official archives were opened, and a generation of ‘new’ Israeli historians looked differently at the period.

Benny Morris, one of those new historians, argued that there was no master plan of expulsion. However, it was understood that it was in the leadership’s interests to establish a Jewish state with as small a minority of Palestinian Arabs as possible.

Most say the order came from Ben-Gurion himself. Those saying so include the later Prime Minister Yitzhak Rabin, who reported in his autobiography that Ben-Gurion had given him the order to expel the Palestinian Arabs of Lydda. When Rabin tried to publish this in 1979, it was censored.

What’s clear is that there was an overwhelming atmosphere – of fear, of exodus, of violence and beatings, of many massacres, of war in general – that led to 700,000 Palestinians leaving their homes, never to return.

 

Sources:

Understanding Israel and Palestine: A Reading List

The post The Origins of the Israel/Palestine Conflict appeared first on Then & Now.

The Shock of Modernity https://www.thenandnow.co/2023/10/26/the-shock-of-modernity/ Thu, 26 Oct 2023 13:10:56 +0000
The end of the nineteenth century was a period of unprecedented upheaval. Factories sprouted en masse, railways were laid at great length, urbanisation sprawled and beckoned, and the masses were organised capitalistically and politically.

All of this happened at dizzying speed. This was the moment the modern world crashed together and dragged people from the fields to the factory floor.

Within a generation, the entire consciousness of life had changed.

Science challenged deeply-held views of the world.

Darwin published On the Origin of Species in 1859.

He pulled the Gods down from the sky and transformed humans into just another animal.

This, of course, was shocking, traumatising, existentially threatening.

The philosopher Soren Kierkegaard wrote in 1844 that, ‘Deep within every human being there still lives the anxiety over the possibility of being alone in the world, forgotten by God, overlooked by the millions and millions in this enormous household’.

Nietzsche, famously proclaiming the death of God, argued that men would become nihilistic, lose their grounding, forsake their morals, if a new ethics of man did not come.

Darwin, the death of God, the prosperity of industry, science, all pointed towards something that could be terrifying: freedom.

Kierkegaard went on: ‘Anxiety may be compared with dizziness. He whose eye happens to look down into the yawning abyss becomes dizzy. But what is the reason for this? It is just as much in his own eyes as in the abyss . . . Hence, anxiety is the dizziness of freedom’.

Freedom was the expansion of options – of ways to live life personally, of political options, of commercial options.

Warfare was changing: swords and rifles, of which there were only a few, were being replaced by stuttering guns that spat bullets at an incomprehensible rate, and by artillery and bombs that sent shrapnel shredding in a cacophony of unbearable noise.

The word ‘panic’ was first used in a medical sense in 1879, by the psychiatrist Henry Maudsley, to describe extreme agitation, trembling, and terror.

People were nervous, literally – a new diagnosis became popular amongst America’s elites: neurasthenia.

It was a contemporary form of stress, characterised by symptoms like fatigue, headache, and irritability.

Neurasthenia, according to the physician George Beard, was the result of a depletion of nervous energy, and was becoming more common as a reaction to the anxieties of the modern world and the demands of American exceptionalism. Neurasthenia was almost a fashion. Adverts appeared selling ‘nerve tonics’, self-help books dominated the shelves, and even breakfast cereals claimed to be able to cure ‘Americanitis’.

Beard argued that there were five main causes of neurasthenia: steam power, the periodical press, the telegraph, the sciences, and the mental activity of women.

He argued that these phenomena contributed to the competitiveness and speed of the modern world.

Even time itself was to blame.

He wrote, ‘the perfection of clocks and the invention of watches have something to do with modern nervousness, since they compel us to be on time, and excite the habit of looking to see the exact moment, so as not to be late for trains or appointments. Before the general use of these instruments of precision in time, there was a wider margin for all appointments. We are under constant strain, mostly unconscious, often times in sleeping as well as in waking hours, to get somewhere or do something at some definite moment’.

The recently laid telegraphs also meant that prices and information could be sent around the world at a moment’s notice, piling the pressure on merchants to keep up with the latest news from all around the world.

According to the pre-psychological way of understanding the human mind, all of these phenomena hit the nerve endings, draining the life force.

Unnatural modern noises did this too.

Beard wrote: ‘Nature – the moans and roar of the wind, the rustling and trembling of the leaves, and swaying of the branches, the roar of the sea and of waterfalls, the singing of birds, and even the cries of some wild animals – are mostly rhythmical to a greater or less degree, and always varying if not intermittent’.

As with Kierkegaard’s anxieties over freedom, for Beard, politics and religion also added to the drain: ‘The experiment attempted on this continent of making every man, every child, and every woman an expert in politics and theology is one of the costliest of experiments with living human beings’.

‘A factor in producing American nervousness is, beyond dispute, the liberty allowed, and the stimulus given, to Americans to rise out of the possibilities in which they were born’.

Excitement and disappointment were a drain on nerve-force.

But one innovation was so emblematic of the shock of modernity, of the distortion of time, of the inability of man to adapt to his surroundings, that it’s mentioned almost everywhere the topic is discussed:

The railway.

Historian Wolfgang Schivelbusch argues that the railways didn’t just change travel, but changed the very notion of time itself.

Before the railways, cities, towns, and villages kept local times, which had to be standardised for train timetables. ‘London time ran four minutes ahead of time in Reading, seven minutes and thirty seconds ahead of Cirencester time, fourteen minutes ahead of Bridgwater time’. People could imagine being in other places much more easily, changing the very way they thought.

It was such a part of the cultural zeitgeist of the time that on the third of October 1868, the Illustrated London News reported that five theatres were all performing the same incident: someone tied to, or unconscious on, a track while a train came hurtling towards them.

These productions made use of modern special effects using lights and smoke, and The Times described them as a ‘perfect fever of excitement’.

The theatres performing these spectacles were open for the first time to people from outside the centre of London, who could travel in on the omnibuses or trains – the same transport that was about to thrill them with fear.

Railway accidents were common. One in 1868 killed 33 people.

One passenger wrote, ‘We were startled by a collision and a shock. […] I immediately jumped out of the carriage, when a fearful sight met my view. Already the three passenger carriages in front of ours, the vans and the engine were enveloped in dense sheets of flame and smoke, rising fully 20 feet. […] [I]t was the work of an instant. No words can convey the instantaneous nature of the explosion and conflagration. I had actually got out almost before the shock of the collision was over, and this was the spectacle which already presented itself. Not a sound, not a scream, not a struggle to escape, or a movement of any sort was apparent in the doomed carriages. It was as though an electric flash had at once paralysed and stricken every one of their occupants. So complete was the absence of any presence of living or struggling life in them that it was imagined that the burning carriages were destitute of passengers’.

This idea of instantaneous death mixed with machinery was so new and so shocking, that it dominated the culture.

Charles Dickens himself was involved in a train crash and wrote the ghost story The Signal Man afterwards. According to his children, he was never the same again.

All of this – industry, commercialism, fear, anxiety, thrill, trains – culminated in an emphasis on sensation and the birth of sensationalism. The point was the senses. The modern world could trigger them, play on them, manipulate them, and sell to them, all at a tremendous speed.

The Irish playwright Dion Boucicault made sensation the centre of his plays. He intended to ‘electrify’ the audience.

A review of one of his plays illustrates this emphasis on the senses: ‘The house is gradually enveloped in fire [and] […] bells of engines are heard. Enter a crowd of persons. […] Badger […] seizes a bar of iron, dashes in the ground-floor window, the interior is seen in flames. […] Badger leaps in and disappears. Shouts from the mob. […] [T]he shutters of the garret fall and reveal Badger in the upper floor. […] Badger disappears as if falling with the inside of the building. The shutters of the window fall away, and the inside of the house is seen, gutted by the fire; a cry of horror is uttered by the mob. Badger drags himself from the ruins’.

Drama of such speed and excitement had rarely been seen before.

In the early 1860s, sensation novels suddenly became popular.

In 1866, an article in the Westminster Gazette lamented that all minor novelists were now sensationalists.

Literary critic D. A. Miller describes it like this: ‘The genre offers us one of the first instances of modern literature to address itself primarily to the sympathetic nervous system, where it grounds its characteristic adrenaline effects: accelerated heart rate and respiration, increased blood pressure, the pallor resulting from vasoconstriction, and so on.’ H. L. Mansel wrote that ‘There are novels of the warming-pan type, and others of the galvanic battery type – some which gently stimulate a particular feeling, and others which carry the whole nervous system by steam’.

So, what was lost in these tumultuous years? I think George Beard and Kierkegaard, in many ways, hit it on the head. The idea of freedom, the anxiety of choice, the cacophony of noise, the pressure of time – all become demanding; a type of demand that didn’t exist in agricultural societies. Yes, life also became better, more prosperous – more options – but remembering what was lost is also important.

So, if modernity is still a shock to you, then slow down, take some time, turn off your phone, stop thinking. Relax.

 

Sources

Allan V. Horwitz, Anxiety: A Short History

Nicholas Daly, Blood on the Tracks: Sensation Drama, the Railway, and the Dark Face of Modernity

George M. Beard, American Nervousness

Mark Jackson, The Age of Stress

David G. Schuster, Neurasthenic Nation: America’s Search for Health, Happiness, and Comfort, 1869-1920

Nicholas Daly, Railway Novels: Sensation Fiction and the Modernization of the Senses

The post The Shock of Modernity appeared first on Then & Now.
