How the Internet was Stolen
Then & Now, 13 April 2023
https://www.thenandnow.co/2023/04/13/how-the-internet-was-stolen-2/


This is a story of deceit, manipulation, lots of money, political lobbying, lawsuits, and, ultimately, covert theft. The history of the internet has been one of big capital pilfering from public investment and national infrastructure, the expropriation of academic research, of democratic open-source alternatives being forced illegally from the market, of devices to steal our privacy for profit being quietly snuck into our homes, our cars, our watches, glasses, and phones. It’s a story of unethical business practices, of monopoly power, a story that could have been different, and, in the end, a story of competing dreams – hopes – of the future.

It’s a big history – one that needs to be told properly – and one in which that verb – to steal – will be returned to. Stealing can happen in many ways; through force, through dispossession, through tricks, power, and cash. And ideas, attention and privacy can be stolen as much as hardware and physical goods. We’ll see how this story has some striking historical parallels – this is the story of the ideologies, the battles, the court cases, the innovations, the lost hopes – from Microsoft to Uber, from eBay to Google, from the US Department of Defence to dimly lit university laboratories, from Napster to 9/11 – the story of the internet.

The internet is a great ocean: we know that we only swim – paddle – on its surface, never diving beyond the first few pages of search results, rarely interacting outside of our own social network, swimming out occasionally to find a few new journalists or creators to follow, a few new products to buy – but its depths – what it’s made of, how it was made, the history that gave shape to it, what lurks down there, what might be possible – remain, like the ocean’s, surprisingly unknown to us.

The internet was meant to be a new utopia – a tool of liberty, fraternity, equality – of radical democracy and freedom. And it might have had its moments, but has a tool celebrated for its openness turned into a cage? To understand this, we have to look at the values of those who built it – ask why and how it was designed – what motivated its pioneers, and look at what they came up against. The history of the internet, maybe more than any other history of our present moment, is the history of our future.

 

ARPANET

The internet is a complicated system. Even when Charles Babbage designed his Difference Engine in the 1820s – a mechanical calculator, and a forerunner of the computer – it was too complicated for Babbage to fund on his own. Babbage received a £17,000 grant – a fortune at the time – from the British government to fund his project, which still failed. Throughout the industrial revolution, single innovators could design and build, alone, machines that were the most complicated devices ever envisaged. But with computers, this was no longer true.

The first two modern computers were built at the University of Illinois in 1951, funded by the US Department of Defence and the US Army. Just 18 years later, in 1969, at the height of the counterculture revolution, four host computers at universities across the US, from California to Utah, were connected remotely for the first time.

The project was the result of continued Department of Defence financing through ARPA – the Advanced Research Projects Agency, later renamed DARPA – investment that would cost the US taxpayer 124 million dollars, almost a billion in today’s money.

The result was a network called ARPANET – The Advanced Research Projects Agency Network.

Why was so much money spent?

From its inception, ARPANET was the culmination of two visions. The first was to find a way for academics to exchange data across institutions, and, importantly, to share computing power, which at the time was expensive, slow, and valuable to researchers at the universities.

ARPA director Charles M. Herzfeld remembered that, ‘ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them’.

It was, as historian Brian McCullough puts it, a ‘researcher’s dream of a scholarly utopia’.

But it was also a product of the Cold War.

And the Deputy Director, Stephen Lukasik, has challenged how Herzfeld remembers its development. He said, ‘the goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making’.

As the Cold War developed, the US spent more and more on its growing ‘military-industrial complex’, working not just with industry, but with universities too.

Many academics wanted ARPANET to be open to all. All researchers – funded by the taxpayer or by universities – should be able to access it.

Because of this, they believed, there had to be a common, universal language that would underpin how the new communication network operated.

Steve Crocker, the inventor of some of the early protocols the internet runs on, said that they were looking for ways to make the procedures open, so that they could be added to, changed, and updated democratically – importantly, no single institution should be in charge.

When administrators at MIT started locking doors, or putting passwords on computers that until then anyone could book a slot to use, students and researchers fought back. They started calling themselves hackers.

A young enthusiast called Richard Stallman recalls, ‘anyone who dared to lock a terminal in his office, say because he was a professor and thought he was more important than other people, would likely find his door left open the next morning. I would just climb over the ceiling or under the floor, move the terminal out, or leave the door open with a note saying what a big inconvenience it is to have to go under the floor, “so please do not inconvenience people by locking the door any longer.” … There is a big wrench at the AI Lab entitled “the seventh-floor master key,” to be used in case anyone dares to lock up one of the more fancy terminals’.

As ARPANET proved useful, other government agencies wanted in. The NSF – the National Science Foundation – created NSFNET in 1985, eventually linking more universities across the country.

In 1977, the first transmissions were made wirelessly, around the world, under the sea, into space, and back to where they started.

And by the late 80s, the NSF was investing heavily in infrastructure. It built a ‘backbone’ across the US that could connect universities with other research institutions.

The NSF website states that, ‘throughout its existence, NSFNET carried, at no cost to institutions, any U.S. research and education traffic that could reach it. At the same time, the number of Internet-connected computers grew from 2,000 in 1985 to more than 2 million in 1993. To handle the increasing data traffic, the NSFNET backbone became the first national 45-megabits-per-second Internet network in 1991’.

Any school could apply for a grant from NSF to connect to the network.

But its popularity was to become its undoing. By the 90s, the infrastructure was becoming overburdened. And there wasn’t much appetite to increase funding at the height of the neoliberal period. The NSF continued to upgrade but couldn’t keep up with demand.

Data was measured in packets – in 1988 a million packets of traffic were sent across the network, and by 1992 this had increased to 150 billion.

Commercial use was banned – the network was to be used for research and education only.

A 1982 MIT handbook states that, ‘personal messages to other ARPANet subscribers (for example, to arrange a get-together or check and say a friendly hello) are generally not considered harmful… Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people’.


At the same time, small commercial operators began to provide alternatives, using the same protocols as NSFNET and often hiring DARPA researchers.

Pressure to change how the network operated was increasing as more and more people wanted to join.

In 1991, Democratic senator Al Gore’s High-Performance Computing and Communications Act was passed. Gore argued that, of course, the market should have access to this new ‘information superhighway’, but that a middle road was important. His bill recommended that ‘further research and development, expanded educational programs, improved computer research networks, and more effective technology transfer from government to industry are necessary for the United States to reap fully the benefits of high-performance computing’.

Senator Daniel Inouye argued that legislation should preserve 20% of the infrastructure for public use – libraries, education, non-profits, government agencies, and museums to provide ‘educational, informational, cultural, civic, or charitable services directly to the public without charge for such services’. This would be modelled on the Public Broadcasting Act to complement private media with publicly funded advertisement-free programs for television and radio. Like roads, the information superhighway was, after all, public property.

A group was formed called the Telecommunications Policy Roundtable. Its co-founder, Jeffrey Chester, told the NYT in 1993, ‘there should be a national debate about what kind of media system we should have. The debate has been framed so far by a handful of communications giants who have been working overtime to convince the American people that the data highway will be little more than a virtual electronic shopping mall’.

It looked to most like a public-private partnership was inevitable. And Gore, with his support for a public internet, became Bill Clinton’s running mate for president.

But when Clinton took office in 1993, chairman of the Federal Reserve Alan Greenspan visited him and convinced him that the new computer network, in the hands of industry and banking, could revolutionise the economy: by using vast troves of data and enormous computing power to hedge investments, the US could enter a new era of prosperity, in which the economy would be perfectly balanced, everyone would prosper, and boom and bust would be left behind.

Meanwhile, Clinton, Gore, and the DNC received $120,000 in contributions from a consortium of telecommunications giants. Suddenly, Gore changed his mind about the public-private partnership. Telecoms should be deregulated, the NSFNET infrastructure should be sold off and left to the market.

As this was happening, the NSF had subcontracted the running of the network to a consortium called MERIT.

The consortium was made up of representatives from universities, research institutions, and important computing businesses like IBM.

In 1991, MERIT began selling access to the network to more businesses.

NSF was accused of backroom dealing and profiting from the network. So many were outraged that congressional hearings were called in 1992. Businessman William Schrader testified that it was like ‘giving a federal park to Kmart’.

But it was too late. It was decided that NSFNET would be carved up and transferred to the corporations. With the infrastructure handed over, it was officially decommissioned in 1995 with hardly a whimper of protest.

A couple of years later, in 1998, the Washington Post reported that, ‘the nation’s local, long-distance and wireless phone companies have spent $166 million on legislative and regulatory lobbying since 1996 – more than the tobacco, aerospace and gambling lobbies combined’.

Top donors during the 1997-98 political season were AT&T ($1.9 million), Bell Atlantic ($1.6 million), BellSouth ($1.6 million), SBC ($1.2 million) and MCI ($964,348).

The money was donated to candidates across both major parties. Democratic Representative Edward J. Markey, who was heavily involved in telecommunications, received $124,800 in donations.

But Clinton himself was the top recipient, receiving $169,000.

At least a billion dollars of public investment, in today’s money, had been transferred – with little fanfare – to the private sector.

The internet is part of a long history of computing that has relied on public money: from Babbage’s government grant, to the father of modern computing, Alan Turing, working on British government projects during WWII, to the first IBM digital computer, the result of a Department of Defence contract during the Korean War.

Moreover, California’s economy more broadly rests on government defence investment in planes, missiles, military electronics, as well as irrigation systems, highways, and universities.

The Pentagon has had a big hand in helping corporations invest in R&D in everything from jet engines to transistors, circuits, lasers and fibreoptics. Today, the Pentagon works with and funds at least 600 laboratories in the US.

As Scientific American explains, ‘in truth, no private company would have been capable of developing a project like the Internet, which required years of R&D efforts spread out over scores of far-flung agencies, and which began to take off only after decades of investment’.

 

The Web: The Browser Wars

Meanwhile, a pair of young computer hobbyists had been working on an implementation of BASIC, an easy-to-use programming language, for the new microcomputers.

Bill Gates and Paul Allen agreed a deal with the electronics company MITS to distribute their new BASIC interpreter with the Altair 8800 in 1975. The Altair was one of the first affordable microcomputers, mostly advertised to enthusiasts, tinkerers, and hobbyists through electronics magazines and sold by mail order – and those hobbyists were sparking a new home computing revolution.

They joined clubs and shared ideas and software that they’d received with their mail order hardware.

Gates was irritated by this. He believed that the only way to further popularise and innovate in computing was if programmers were paid for their software.

He wrote a widely cited letter, an ‘Open Letter to Hobbyists’ in 1976.

In it, Gates reported that fewer than 10% of users had actually paid for BASIC, and that, added up, this reduced his and Allen’s hourly compensation to less than $2. He likened sharing to stealing.

One respondent to the letter countered that this $2-per-hour figure was really the result of poor negotiating with MITS. There was nothing wrong with selling software with computers, but there was nothing wrong with sharing it either. Despite that $2 hourly wage, Microsoft quickly became one of the largest companies in the world. By 2000, it had a 97% share of the personal computer operating system market.

And despite Gates’s argument for strong intellectual property rights, by the 80s Microsoft, Apple and Xerox were in a race to build the first graphical user interface for personal computers, and they were borrowing and stealing ideas from each other with such frequency that they all became embroiled in lawsuits and counterclaims. Apple argued that Microsoft had stolen the ‘look and feel’ of its OS, and Xerox argued that Apple had copied its own work – the mouse-driven graphical interface Steve Jobs had seen on a visit to Xerox’s research centre.

In the end, the judge found that of the 189 interface elements Apple claimed had been copied, 179 were covered by an existing licensing agreement, and the remaining 10 could not be protected by copyright.

With a deal to install MS-DOS and then Windows on IBM computers, a graphical user interface taken from Apple, and a strong dislike of free, open-source, or shareware software, Microsoft quickly dominated the market.

In Geneva, a computer scientist called Tim Berners-Lee came up with an idea that combined the infrastructure of the internet developed by ARPA – which focused on sharing files and computing power – with a simple text-based delivery system.

He wrote an announcement post that said, ‘the WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!’

The WWW was launched with a few dozen servers communicating with each other around the globe. One of those servers was at the University of Illinois’ National Center for Supercomputing Applications (NCSA), a research department funded by the US government.

The teams running the WWW servers discussed how their project would function. They needed a markup language, which became HTML, the code of the web, and an application to translate that code, which would become known as a ‘web browser’. Berners-Lee wrote the very first basic browser, but a 21-year-old graduate at NCSA – Marc Andreessen – decided to build a browser that would be easy to use.

Andreessen recalls that among academics, there was ‘a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out’. He and his colleague Eric Bina thought that the WWW should be simple to use and able to include graphics too. Berners-Lee had disagreed, thinking it should only include text and maybe some diagrams – all that would be needed to share things like scientific papers.

But at the university, Andreessen and his colleagues worked hard on a browser that they would release in 1993: X Mosaic.

Berners-Lee posted, ‘an exciting new World-Wide Web browser has come out, written by Marc Andreessen of NCSA’.

When Mosaic was released, there were a few hundred websites up and running around the world. A year later, in 1994, there were already tens of thousands. In its first 18 months, 3 million people installed the new browser.

Fortune magazine included Mosaic as a product of the year, writing, ‘this software is transforming the Internet into a workable web . . . instead of an intimidating domain of nerds’.

NCSA began taking notice of what had until then been a small project. And in 1994, Andreessen left to work on a commercial version of the Mosaic browser.

He took the same team with him, and with the help of investors started on what they called ‘Mozilla’ – the ‘mosaic killer’ – the world’s very first internet start-up.

People magazine named Andreessen one of the year’s ‘most intriguing people’, and Fortune included the company in its 25 coolest companies.

The company they eventually named Netscape was private, but it adopted the open practices that were dominant in computer science at the time – instead of patenting features of its browser, it wanted others to be able to copy and use those features, in the hope that they would become the standard. SSL, for example – the encryption protocol that still secures web connections – was a Netscape innovation, but anyone was welcome to use it. It’s now used everywhere. They wanted users to be able to add features – plugins – and, like Microsoft, they wanted to be the platform that other developers would treat as the standard and build their own software for.

For these reasons they decided on a new commercial model: the browser would be free for public use – in the hope of achieving dominance and because it aligned with the existing culture of share and share-a-like.

Technology writer Glynn Moody writes that this new logic ‘introduced the idea of capturing market share by giving away free software, and then generating profits in other ways from the resulting installed base. In other words, the Mosaic Netscape release signaled the first instance of the new Internet economics that have since come to dominate the software world and beyond’.

It worked. The Mozilla browser was downloaded 6 million times in a few months. Netscape quickly achieved a 90% market share, while also selling a corporate version with technical support for $39.

When the University of Illinois found out about Mozilla, they threatened to sue – claiming that Andreessen and his team had stolen the university’s code.

Jon Mittelhauser, one of the developers, replied, ‘we didn’t want to take any of [the old Mosaic] code, that’s the thing! We wanted to start from scratch. We wanted to do it right’.

However, the company ended up paying the university $2.2 million in damages, and agreed to change its name from Mozilla to Netscape.

When Netscape IPO’ed in 1995 its share value quickly tripled. Andreessen featured on the cover of Time Magazine. And the WSJ wrote, ‘it took General Dynamics Corp. 43 years to become a corporation worth $2.7 billion. . . . It took Netscape Communications Corp. about a minute’.

Within 18 months it had 38 million users and hit $533 million in revenue in 1997.

Meanwhile, Bill Gates was sceptical about the internet. He knew that eventually, technology would be advanced enough to play video and interact between households, but he didn’t think anyone would want to sit around their computers at a desk, thinking instead that the television set would be the medium at the centre of the shift.

When he saw how popular Netscape was, he quickly U-turned.

He licensed the original Mosaic source code – via Spyglass, the company the University of Illinois had chosen to commercialise it – copied it, and quickly rolled out the first version of Microsoft’s own browser, Internet Explorer, in 1995.

The difference: it was completely free.

Gates said, ‘one thing to remember about Microsoft, we don’t need to make any revenue from Internet software’.

The launch of Windows 95 was huge – the Empire State Building was lit up in Windows logo colours and the commercial featured the Rolling Stones. And the new operating system came pre-loaded with Internet Explorer.

Steve Jobs said that if Microsoft’s competitors couldn’t keep up ‘in the next two years, Microsoft will own the Web. And that will be the end of it’.

Microsoft increased its efforts: its browser team grew from 6 to 1,000 employees by 1999. It required manufacturers to install Internet Explorer if it wasn’t already included, and wrote a line into its contracts banning ‘modifying or deleting any part of Windows 95, including Internet Explorer, prior to shipment’.

Netscape, in a desperate bid for survival, tried to deal directly with computer manufacturers like Compaq in an attempt to replace Internet Explorer with Netscape before the units were shipped.

Microsoft responded by threatening Compaq with a lawsuit and a ban from using Windows. Compaq immediately backed down.

In 1995, 90% of Netscape’s revenue came from licensing its browser – by 1997, squeezed out by Microsoft’s monopoly position, that figure was below 20%.

Netscape wrote a letter to the US Department of Justice (DoJ) complaining that Microsoft was using its position to push Netscape out of the market by preventing them from dealing with manufacturers.

In the first quarter of ’98, Netscape reported a loss for the first time and Internet Explorer took the top spot in user share.

In May, 20 states and the DoJ filed antitrust lawsuits against Microsoft.

The trial went on for nearly two years, and in April 2000 Microsoft was found guilty of monopolistic practices.

Judge Thomas Jackson said Microsoft ‘maintained its monopoly power by anticompetitive means and attempted to monopolize the Web browser market’.

An allergy to monopoly had always been central to the philosophical culture of the United States. Early settlers had been fleeing the twin monopolies of church and state. In England, government-granted monopolies were the norm. Thomas Paine said that England was ‘cut up into monopolies’.

Historian Christopher Hill writes that a 17th-century Englishman was ‘living in a house built with monopoly bricks, with windows (if any) of monopoly glass; heated by monopoly coal (in Ireland monopoly timber), burning in a grate made of monopoly iron’. ‘He slept on monopoly feathers, did his hair with monopoly brushes and monopoly combs. He washed himself with monopoly soap, his clothes in monopoly starch. He dressed in monopoly lace, monopoly linen, monopoly leather, monopoly gold thread’, held up by ‘monopoly belts, monopoly buttons, monopoly pins’. Food was ‘seasoned with monopoly salt, monopoly pepper, monopoly vinegar’, and mice ‘were caught in monopoly mousetraps’.

As far back as 1641, Massachusetts law declared that ‘no monopolies shall be granted or allowed amongst us, but of such new Inventions that are profitable to the Countrie, and that for a short time’.

The American Revolution had begun as a revolt against the monopoly power of the East India Company, with protestors throwing their tea into Boston harbour.

Maryland’s first constitution in 1776 said, ‘that monopolies are odious, contrary to the spirit of a free government, and the principles of commerce; and ought not to be suffered’.

The anti-monopolistic culture continued into the 19th century when the robber barons were attempting to monopolise the oil, railroad, and telegraph industries while repressing strikes and unions, reducing wages, and artificially inflating prices.

In response, the Sherman Act was introduced in 1890.

The first section prohibits ‘every contract, combination …, or conspiracy, in restraint of trade or commerce…’.

The second section makes it illegal for any person or firm to ‘monopolize … any part of the trade or commerce among the several States, or with foreign nations…’.

Judge Jackson decided that Microsoft had violated the first section by forcing manufacturers to include IE, and the second by using its market dominance to force a monopoly in browsers.

The judge concluded that Microsoft ‘used incentives and threats’ to push manufacturers to adopt ‘distributional, promotional and technical efforts’ that would favour Internet Explorer at the expense of Netscape Navigator.

He suggested that Microsoft be broken up into two companies: one that would run its operating systems and another its software.

Of course, this never happened. When the Bush administration took office in 2001, the ruling was appealed and the breakup overturned. Instead, Microsoft made a deal that required it to open its APIs – application programming interfaces – and protocols, and to refrain from monopolistic practices. Opening its APIs meant that third-party developers could more effectively design software to work with the nuts and bolts of Windows – effectively opening the bonnet and allowing applications to interface with all the components – with the intention of ‘opening up’ Microsoft and allowing greater competition and innovation.

They also had to agree to ‘consent decrees’ that prohibited them from retaliating against any manufacturer who ‘develops, distributes, promotes, uses, sells, or licenses any non-Microsoft’ software.

On the one hand, this was a win for Microsoft. On the other, Microsoft was distracted while new and unexpected competitors grew to challenge their dominance. And the appeal still meant Microsoft had to ‘play nice’, giving ammunition to its rivals that could be used against the company. It meant Microsoft had to consider legal repercussions before they acted in the future.

Without the trial, the internet could have gone a different way. Dominated by Microsoft, it could have developed into a kind of walled AOL-style network, built into the architecture of Windows itself, run on some kind of grotesque Microsoft software. And forcing Microsoft to open those APIs – what’s called interoperability – is something that could be useful for some of the challenges we’ll come to today.

While Microsoft was distracted, AOL was staging a coup. It had found success offering a service that dialed into the privatised internet infrastructure. You entered a web portal that acted as a directory of websites, news, chat, email, weather, and other categories.

Just before their trial, Gates had approached AOL CEO Steve Case. He said ‘I can buy 20 percent of you or I can buy all of you. Or I can go into this business myself and bury you’.

AOL chose to fight. Like Netscape, it knew that market saturation was key. The company spent a quarter of a million dollars on AOL trial discs to give away with magazines. It worked. The conversion rate was an unheard-of 10%, and suddenly AOL was everywhere.

Marketing executive Jan Brandt, who ran the campaign, remembered watching new users ‘taking the disc, putting it into the computer, signing up, and giving us a credit card. When I saw that, honestly, it was better than sex’.

In his history, McCullough writes, ‘AOL discs began arriving in Americans’ mailboxes seemingly daily. Almost every computer maker shipped an AOL disc with a new computer. There were AOL discs given away with movie rentals at Blockbuster. There were AOL discs left on seats at football games. At one point, Brandt even tested whether or not discs could survive flash freezing so that she could give away AOL discs with Omaha Steaks’.

Incredibly, half of CDs manufactured at the time were for AOL. AOL’s customers tripled in a year.

Microsoft countered with its own web portal called MSN.

But AOL dominated. It was the internet.

Like Microsoft, AOL used its dominance to squeeze out or buy out its competitors. It was so lucrative to be in AOL’s web portal that companies handed over millions of dollars.

Health website Dr Koop IPO’ed and raised $85 million, spending all of it on a four-year contract with AOL to provide health content to AOL’s users.

Tel-Save paid AOL $100 million, Barnes & Noble $40 million, Amazon $19 million, eBay $75 million, all to be on the home portal – the most coveted position on the net high street – a shop in the digital Times Square.

One businessperson recalled that AOL demanded 30% of her company, and then, ‘for good measure they tell us, “These are our terms. You have 24 hours to respond, and if you don’t, screw you, we’ll go to your competitor”’.

AOL’s share price rocketed by 80,000% through the 90s, but even at its height, cable companies were starting to develop faster and cheaper ways of connecting to the internet. AOL and Microsoft imagined themselves as walled gardens, in control of and masters over their own corner of cyberspace, desperate to keep the hordes of invaders and competitors out. It wasn’t to last.

 

The Californian Ideology

‘The online world is not truly bound by terrestrial laws… it’s the world’s largest ungoverned space’ – Eric Schmidt and Jared Cohen, Google.

Like the settlers arriving in America, many imagined cyberspace as empty, a blank canvas, a state of nature, an ethereal region where civilization could start again, free of corrosive power, corporate donations, and corrupt politics, where a new type of democracy and freedom could flourish. As a blank slate, it was imagined as lawless, but, like precolonial America, that was theorised as a good thing: new, better ideas could be written onto it.

In 1995, two media scholars, Richard Barbrook and Andy Cameron, wrote an influential article about what they saw emerging in Silicon Valley. They called it the Californian Ideology.

Barbrook and Cameron argued that this new ideology was the product of ‘a loose alliance of writers, hackers, capitalists and artists from the West Coast of the USA’.

It was a ‘bizarre fusion’ of cultural bohemianism with ‘high tech industry’, a combination of ‘the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies’.

The Californian ideologists imagined a new world, made possible through computers, where everyone would be both hip and rich, and also free.

This best-of-all-worlds ideology arose from two places. The first was liberal and progressive, rejecting the narrow confines of conformist post-war corporate American values – its adherents celebrated difference in all things: gay rights, feminism, literature, music, and recreational drug use. They despised the old forms of power and repression that kept the inner self locked away.

They believed that the internet could bring about a new radical democracy, where people were free to organise and express themselves in new ways, directly voting on the issues that affected them. They shared software and computer parts in home-brew computer clubs – they were a new class – a ‘virtual class’.

But as they built businesses and joined companies, they came into contact with traditional businessmen, with venture capitalists and financiers. Through them, they adopted the idea that they should be left alone, free from all regulation and from interfering governments, to impress their vision on the blank canvas of cyberspace, regulated only by the invisible hand of the market.

Barbrook and Cameron ask, ‘will the advent of hypermedia realise the utopias of either the New Left or the New Right? As a hybrid faith, the Californian Ideology happily answers this conundrum by believing in both visions at the same time – and by not criticising either of them’.

Left and right ideology combined into support for a new Jeffersonian democracy – small direct democracies where everyone owns property and everyone votes and each is involved in the running of their own lives.

The problem – and this was a fundamental line from their 1995 article – was that the hippies ‘cannot challenge the primacy of the marketplace over their lives’. ‘By mixing New Left and New Right, the Californian Ideology provides a mystical resolution of the contradictory attitudes held by members of the ‘virtual class’’.

But, they said, hidden away from Silicon Valley – in Chinese factories, in the mining of materials in developing countries, in right-wing politics hostile to unions, in the weakening of welfare – a new underclass was emerging. They said, ‘the deprived only participate in the information age by providing cheap non-unionised labour for the unhealthy factories of the Silicon Valley chip manufacturers’.

They warned that, ‘the technologies of freedom are turning into the machines of dominance’.

One of the believers in this new digital utopia was Pierre Omidyar. In the early years of the commercial internet, Omidyar co-founded a startup that was sold to Microsoft for $50 million in 1996.

Omidyar started working on a new libertarian platform that he believed would revolutionise commerce – allowing people to find exactly what they wanted and letting the invisible hand of the market perfectly calibrate the price, all without the need for third party interference.

He believed that if you removed any interference, you could create the perfect marketplace online.

His idea was an auction website. He said, ‘if there’s more than one person interested [in an item], let them fight it out. The seller would by definition get the market price for the item, whatever that might be on a particular day’.

He launched AuctionWeb, which he later renamed eBay, and initially it was completely free from interference: there was nothing – no payment function, no ratings – no regulatory infrastructure. Just buyers connected to sellers, who would complete the details of the transaction on their own.

From the beginning the site was a success, but investors were wary. What did AuctionWeb do? It had no goods and provided no real service. One investor said, ‘they don’t own anything’, ‘they don’t have any buildings, they don’t have any trucks’.

Omidyar knew that self-regulation from the community was key. He encouraged its users to talk to one another, share advice, and, eventually, leave reviews.

Many of the new companies that were soon to go bust in the dot-com crash of the late 90s were traditional brick and mortar businesses. Pets.com. Sofa companies. Even early grocery delivery services. At the turn of the new millennium, hundreds of these companies went bankrupt, but eBay continued to grow.

It was a new type of service – a pure middleman – a simple ‘platform’, laying the groundwork for a community of buyers and sellers, without any content of its own – no stock, no warehouses.

McCullough writes that the market turned out to be imperfect – ‘disputes broke out between buyers and sellers, and Omidyar was frequently called upon to adjudicate. He didn’t want to have to play referee, so he came up with a way to help users work it out themselves: a forum. People would leave feedback on one another, creating a kind of scoring system’. ‘Give praise where it is due’, Omidyar said in a letter posted to the site, ‘make complaints where appropriate’.
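The feedback mechanism described above can be sketched as a simple reputation tally – praise and complaints summed into a single public score. This is a hypothetical illustration of the idea, not eBay’s actual code; the class and method names are invented.

```python
# A minimal sketch of a feedback-based reputation system like the one
# Omidyar introduced on AuctionWeb. Names and scoring are illustrative,
# not eBay's actual implementation.
from collections import defaultdict

class FeedbackForum:
    def __init__(self):
        # user -> list of (+1 praise / -1 complaint, comment) entries
        self.feedback = defaultdict(list)

    def leave_feedback(self, about_user, score, comment):
        if score not in (+1, -1):
            raise ValueError("score must be +1 (praise) or -1 (complaint)")
        self.feedback[about_user].append((score, comment))

    def reputation(self, user):
        # Net score: praise minus complaints, shown as one public number.
        return sum(score for score, _ in self.feedback[user])

forum = FeedbackForum()
forum.leave_feedback("seller42", +1, "Fast shipping")
forum.leave_feedback("seller42", +1, "Item as described")
forum.leave_feedback("seller42", -1, "Slow to respond")
print(forum.reputation("seller42"))  # prints 1
```

Even this toy version shows the dynamic the article describes: the platform isn’t neutral – by choosing how feedback is scored and displayed, it writes the rules of the marketplace.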

Omidyar had come across a strange new dynamic: eBay was only possible because of the contributions of its users, but without some kind of adjudication, the platform would collapse.

In his book Internet for the People, technology writer Ben Tarnoff writes, ‘AuctionWeb was not only a middleman. It was also a legislator and an architect, writing the rules for how people could interact and designing the spaces where they did so. This wasn’t in Omidyar’s plan. He initially wanted a market run by its members, an ideal formed by his libertarian beliefs’.

Anyone taking notice of eBay might have predicted both that being a platform was the future, and that the Californian Ideology was founded on faulty logic. Google’s Schmidt and Cohen’s words – ‘the online world is not truly bound by terrestrial laws… it’s the world’s largest ungoverned space’ – turned out to be a misconception. There was no blank canvas. The problems of the offline world were being shifted online, and new issues were emerging with them.

Political scientist James Muldoon writes that, ‘platform owners claim their products are neutral spaces and that they merely provide an intermediary service to connect parties. This is only half true. Through the design and architecture of the platform, software developers play an active role not only in connecting parties but in shaping the conditions in which they operate’.

 

The Virtual Mall

The first advert on the internet may have been a 1994 banner on hotwired.com for AT&T. It had an enormous click-through rate. People clicked it just to see where it went. Ads that you could click on were a novelty.

AOL and eBay had proven something. Online business was not about selling, but connecting, and it was Yahoo that first realised that capturing more audiences, and keeping their attention so that you could serve them more ads, was key to growing revenue.

Yahoo started as two Stanford students’ project – ‘Jerry and David’s Guide to the World Wide Web’ – a directory of their favourite websites that, as it grew, became the most popular homepage on the internet. But they had no way of making money.

Surely no one would accept ads polluting the directory? With no other way of making money, they decided to tentatively try.

When the first ads went live, Chief Product Officer Tim Brady recalled, ‘the email box was immediately flooded with people badmouthing us and telling us to take it off. “What are you doing? You’re ruining the net!”’

But traffic to the site remained stable, the ads were begrudgingly accepted, and Yahoo started adding sections to attract more interests: stocks, horoscopes, film, television, travel, weather – the key was to match subgroups of users with groups of advertisers. One executive described it as a land grab.

Jerry Yang said ‘we began with simple searching, and that’s still a big hit—our Seinfeld if you will —but we’ve also tried to develop a must-see-TV lineup: Yahoo Finance, Yahoo Chat, Yahoo Mail. We think of ourselves as a media network these days’.

One Wall Street analyst told BusinessWeek, ‘you have to look at it [Yahoo] as the new media company of the 21st century’.

Meanwhile, looking at Yahoo’s increasingly complicated directory, two computer scientists – Larry Page and Sergey Brin – were realising that there had to be a better way to organise the internet.

Instead of humans deciding which websites should sit in which categories and which order – as AOL and Yahoo were doing – an algorithm should do it.

The algorithm was deceptively simple – any website that linked to another website was counted as a vote for that site, and the more links to it a website had, the more popular it was, and the higher up the search rankings it was placed.
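The link-counting idea above can be sketched in a few lines. This toy uses invented site names and shows only the basic ‘links as votes’ step; Google’s actual PageRank goes further, recursively weighting each vote by the rank of the page casting it.

```python
# A toy illustration of the ranking idea described above: each hyperlink
# counts as a "vote" for the page it points to, and pages are ranked by
# how many votes they receive. (Real PageRank also weights each vote by
# the rank of the linking page; this sketch shows only the basic idea.)
from collections import Counter

# page -> list of pages it links to (hypothetical data)
links = {
    "blog.example":  ["news.example", "shop.example"],
    "forum.example": ["news.example"],
    "shop.example":  ["news.example"],
    "news.example":  ["blog.example"],
}

votes = Counter()
for source, targets in links.items():
    for target in targets:
        votes[target] += 1  # one inbound link = one vote

ranking = [page for page, _ in votes.most_common()]
print(ranking)  # news.example ranks first, with 3 inbound links
```

The elegance of the approach was that no human editor was needed: the web’s own structure supplied the judgement that Yahoo paid people to make.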

It was straightforward but innovative. Journalist David Kirkpatrick wrote an article in Fortune magazine in 1999. He typed, ‘New York Yankees 1999 playoffs’ into both Google and Alta Vista. He said that, ‘the first listing at Google took me directly to data about that night’s game. The first two at Alta Vista linked to info about the 1998 World Series’. On Alta Vista, he had to click on the third link down, then click another link to find the results of the game. He wrote, ‘Google really works’.

But like Yahoo, Google struggled to devise a business plan. Page and Brin hated advertising and instead decided on a licensing model. They signed deals with Yahoo and AOL to be ‘powered by Google’.

While Yahoo saw the power of Google’s innovation, they insisted it remain a small part of their service. The main directory would always be human-curated, as humans were more trustworthy than algorithms.

When they realised that Google was the future, Yahoo tried to buy them out for $3 billion. Google rejected the offer, Yahoo cancelled their licence, and Google went to work on a new model, realising they had to make their search engine work on its own.

But Page and Brin were academics. They still believed in the ‘online world not truly bound by terrestrial rules’ and they thought their search engine should be scientific, not influenced by offline money.

At a conference in 1998, Brin and Page delivered a paper that noted that, ‘we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. This type of bias is very difficult to detect but could still have a significant effect on the market… we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm’.

Page and Brin were beginning to realise that markets could distort the utopian vision.

Around the same time Google was taking off, Shawn Fanning – a young student programmer and hacker – became fascinated by a new type of sound file – small, compressed, and quick to download, it was called an MP3. And college students were sharing them on usenet newsgroups across campuses.

A 19-year-old student – Justin Frankel – had designed a program called Winamp that let you organise and play your MP3s, but Fanning wanted an easier way to share them between friends.

He came up with a file-sharing application called Napster. When he released it in 1999 it was downloaded 10 million times in less than a year. 73% of college students were using it.

Napster was on the cover of Rolling Stone and Time, but some, including Metallica’s Lars Ulrich were outraged that their music could be shared for free. Ulrich hand-delivered a list of 300,000 usernames of people who had ‘pirated’ their music to Napster’s office. Napster organised a counterprotest for the same day, and protestors shouted ‘Fuck you, Lars, it’s our music too!’

Fanning argued that Napster was just a middleman, like eBay, but the Recording Industry Association of America filed a lawsuit. The publicity led to a surge of interest in Napster.

The courts decided that Napster had to block copyrighted material or shut down. Fanning tried to implement a filtering system, but it failed, and the company went bankrupt in 2002, after just a few years.

In reality, Fanning claimed he never wanted to give away music for free. He believed artists needed to be paid. He wanted Napster to grow quickly, prove the concept, and then make a deal with the record companies.

McCullough writes that, ‘in retrospect, there is no shortage of people, even inside the music industry, who imagine how different the world would be if it had worked out that way—if the music companies had partnered with Napster and accepted the inevitability of technology’.

Sean Parker – Napster’s cofounder – predicted at the time that, ‘music will be ubiquitous and we believe you’ll be able to get it on your cell phone, you’ll be able to get it on your stereo, you’ll be able to get it on whatever the device of the future is. And . . . I think people are willing to pay for convenience’.


LimeWire, Bearshare, and others tried to take Napster’s place, but forced underground, none were as successful.

The music industry were at the peak of their power, having spent decades reissuing old vinyl albums on expensive CDs and making use of new outlets like MTV to promote new artists.

Early eBay, Google and Napster had something in common. They all thought that it was possible to create a platform that was self-balancing – where supply of information perfectly met demand – a self-sustaining rational system that importantly was free from the interference of offline influences. But they all came up against pressure from the physical world.

Back at the Googleplex, investors were starting to demand profits. After the dot-com crash, no one was thought to be safe. It seemed like you could have a winning model one day and disappear the next. People wondered whether Google could turn its innovation into a profitable company.

And so Page and Brin acquiesced to the advertising model. The results were unheard of.

In 2003, Google made $500 million in revenue. Within ten years, its revenues were over $50 billion.

With Google AdWords, McCullough writes, ‘Google was able to achieve something amazing: it made the Internet profitable at scale and for the first time’.

Between 2002 and 2006, the amount US advertisers spent online tripled.

 

The Theft of Privacy

‘Half the money I spend on advertising is wasted; the trouble is I don’t know which half’ – John Wanamaker.

‘maps created empire’ – J. B. Harley.

The idea of privacy has had a chequered history, but in the modern period it has often been thought of as sacrosanct. An Englishman’s home was his castle; the idea of rights grew from the notion that some parts of us were inviolable, off-limits, private.

Some theorists have argued that having a private place, off-limits to infringement, is crucial to thinking-through, formulating, and debating the ideas necessary in a deliberative democracy.

Some early radio stations shunned the idea not only of advertising but even of talking on air – surely no one would accept that naked invasion of privacy, a stranger talking in their own home – people would prefer to talk amongst themselves anyway.

In 2000, a group of scientists at the Georgia Institute of Technology worked on a project called ‘Aware Home’. It aimed to study something they called ‘ubiquitous computing’ – a home full of different types of sensors designed to predict the needs of the inhabitants and make their lives easier. But, the researchers naturally assumed, the data would belong only to the people who lived in the house – the purpose of the data was to make their lives easier.

Meanwhile, Google was also realising it was in the data industry. If their job was to predict what people wanted, the more data they had about people the more effective they’d be.

Google’s mission statement was, ‘organize the world’s information and make it universally accessible and useful’.

This applied to organic search and advertising. To match companies to consumers, to make advertising more efficient and relevant, Google needed to expand how they captured data.

Google’s chief economist Hal Varian explained that the Google model was based on what they called ‘data extraction and analysis’ through better monitoring, personalisation, customisation, and continuous experiments – this was the key to big data and better prediction.

Larry Page had said that the problem with Google was that you had to ask it questions. It should know.

Varian said that each user left behind them a trail of breadcrumbs that Google euphemistically called ‘data exhaust’. He wrote that, ‘every action a user performs is considered a signal to be analyzed and fed back into the system’.

In 2001, Page said that ‘sensors are really cheap.… Storage is cheap. Cameras are cheap. People will generate enormous amounts of data.… Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable’.

Search, Google’s primary product, they realised, was about predicting what a person wanted at any given moment, before they searched for it, and the clues to what the user wanted were often found away from what they typed in a search box.

Google wanted to know what users wanted before they did.

What they wanted might vary between geographic locations, with the weather, depending on who they were talking to, what their past searches were, what mood they were in, what they were doing at the time, and so Google began investing in collecting all of this ‘data exhaust’.

Google invested in email, mapping, street-view cameras, built free word-processors and spreadsheets, calendars, travel, and photo storage, home speakers and thermostats  – anything that could tell them something about their users’ interactions with the world.

They started placing cookies on users’ computers that could track them across third-party websites. In 2015, one study found that if you visit the 100 most popular websites on the internet you would collect 6000 cookies tracking your online journey. Google had cookies on 92 of those 100 sites, and 923 on the top 1000 sites. The study concluded that, ‘Google’s ability to track users on popular websites is unparalleled, and it approaches the level of surveillance that only an Internet Service Provider can achieve’.

They use hundreds of signals to predict what a user wants to see at any moment.

One study found that a single Google Nest device connected with so many other services and products that a user would have to read through almost 1,000 terms-and-conditions and privacy policies. Another found it would take 76 days to read every policy that affects us.

Google purchased a satellite-imagery company called Skybox whose images were so detailed that, if you were outside, it could see exactly what was on your desk. They started investing in street-view camera technology on backpacks and snowmobiles and boats to capture places that couldn’t be reached by car. They started a project called Ground Truth that tried to analyse, through ‘deep mapping’, what they called a ‘logic of places’, tracking paths people would take, ponds people would sit at, what the traffic was like, which parks people used, which buildings were entered at which times, or which areas were busier.

The street view cars collected data from open wifi networks as they went around the world, including data from people’s homes.

A long list of countries, from the UK to Japan, complained. In Germany, residents could require their homes be blurred out.

Philosopher Shoshana Zuboff argues that we live in a new economic order that, ‘claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales’. This is the foundation of a surveillance economy in which our privacy is progressively invaded through a flood of new and smaller sensors, cameras, microphones, devices attached to dishwashers and ovens, smart speakers and doorbells, in shopping malls and shops, restaurants and stadiums.

Tarnoff compares data in the 21st century to coal in the 19th – it powers a revolutionary change in the economy.

Data becomes the raw material that can be transformed into an asset to sell to advertisers. Smart beds that track sleep position and sell us different bed sheets, new running shoes after a slow run, vacations after a stressful day, bad news when we’re angry and celebrity gossip when we’re bored; click patterns are measured, microphones listen to miniscule details, and food choices are used to predict health patterns.

Zuboff argues that surveillance capitalism is always searching for new ‘supply routes’ – new devices and new ways of collecting new types of data about our lives.

Zuboff says that this manifests in a logic of extension – outwards into as many areas of life as possible – your fridge, blender, dishwasher, bloodstream – and a logic of depth – downwards into richer, more accurate and more detailed measurements – about your emotions, personality, sweat levels, temperature, hormone levels, through ‘digestible sensors’, or correlations between things like mood and music or film choices, or the inflection and decibel levels of your voice while having certain conversations.

All of this is to measure in order to predict how we might act in the future, and when combined with advertising or the delivery of news or information to click on, is about subtly nudging us into action.

Our present state is unceasingly measured so that future attention can be grabbed. Surveillance assets act as the raw material for digital fortune telling.

According to Zuboff, the tentacles of surveillance capitalism ‘nudge, coax, tune, and herd behaviour’.

Targeting our desires is like casting a net, leaving a hook in the right place, with just enough bait, with the right wording, the right photo, at the right time, in the right location, while under the influence of the right emotion, to nudge us towards the highest bidder for our attention. Zuboff says that, ‘users were no longer ends in themselves but rather became the means to others’ ends’.

 

Facebook

When Mark Zuckerberg started Facebook, he quickly realised that the social graph was a trove of useful data. He became obsessed with measuring it.

He saw something that Myspace hadn’t – and that Friendster hadn’t been able to capitalise on – that we were fascinated by those closest to us.

Zuckerberg carefully studied what students were calling the ‘Facebook trance’ – the addictive state of endlessly clicking through your friends’ profiles. Students were obsessed with finding out what was new, but had to click through each profile to find out.


Facebook designed an innovative solution: the news feed – instead of clicking on each profile, the changes or updates would be delivered in one long feed on the home page.

The task was complex. They had to design elaborate code to pull a unique news feed for each person, depending on their list of contacts.

And Facebook quickly realised that ordering the feed chronologically wasn’t optimal. Instead, a person should be shown the updates they’d be most interested in – the friends they actually interacted with, the ones that were most popular, or the ones they didn’t look at but might be interested in.
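The shift from a chronological feed to an interest-ranked one can be sketched as a scoring function. The weighting below – interaction count multiplied by recency – is invented for illustration; Facebook’s real ranking (later known as EdgeRank) used many more signals.

```python
# A hedged sketch of interest-based feed ranking: score each update by
# how often the viewer interacts with its author, discounted by age.
# The formula and data are hypothetical, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Update:
    author: str
    text: str
    age_hours: float

def rank_feed(updates, interaction_counts):
    def score(update):
        affinity = interaction_counts.get(update.author, 0)
        recency = 1.0 / (1.0 + update.age_hours)  # newer scores higher
        return affinity * recency
    return sorted(updates, key=score, reverse=True)

# The viewer interacts most with "alice", rarely with "carol".
interactions = {"alice": 12, "bob": 3, "carol": 1}
feed = [
    Update("carol", "New photo album", age_hours=1),
    Update("alice", "Changed relationship status", age_hours=5),
    Update("bob", "Joined a group", age_hours=2),
]
for update in rank_feed(feed, interactions):
    print(update.author, "-", update.text)
```

Note what even this toy version implies: the platform, not the user, decides what counts as ‘interesting’ – exactly the kind of rule-writing role Tarnoff and Muldoon describe.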

In just a few years the news feed would be so ubiquitous that in retrospect it looks obvious and inevitable, but at the time of release it was hated. As McCullough recounts, Facebook employees were ‘deluged with messages of pure outrage’, with only 1 in 100 posts about it being positive, and most complaining about privacy.

A group called Students Against Facebook News Feed quickly attracted 700,000 members.

One complaint read, ‘very few of us want everyone automatically knowing what we update, news feed is just too creepy, too stalkereque [sic], and a feature that has to go’.

College newspapers ran headlines like ‘Facebook is watching you’, ‘Furious with Facebook’ and ‘Facebook fumbles with changes’.

An online petition quickly attracted thousands of signatures.

Facebook was worried. Zuckerberg replied personally, saying ‘Calm Down. Breathe. We Hear You’. Moments like this had killed sites that seemed invincible: Digg lost its users to Reddit and soon disappeared after an unpopular algorithm change. Facebook almost took the feature down, but Zuckerberg realised something was happening.

While people were publicly outraged, in private they were still using the feed, and they were using it more and more.

Zuckerberg decided to keep it. The feed gave Facebook new data to analyse, but the company still needed to expand its ‘supply routes’.

In 2010, Zuckerberg added a simple method to harvest more data: the first ever like button.

Almost immediately, millions of people were sending millions of data points about what they liked – and, by omission, what they disliked – to Silicon Valley.

Soon, Facebook users were liking over 4 million posts each minute. Zuckerberg bragged about what Facebook could predict about its users.

In a 2005 interview, Zuckerberg said that he and his friends could figure out from the data who would be in a relationship within a week, and get it right 1 in 3 times (46.00 – James W. Breyer and Mark E. Zuckerberg Interview, Oct. 26, 2005, Stanford University – YouTube).

Like Google, they began buying and investing in anything that could harvest data – through the logic of extension and the logic of depth – and began using it to sell advertising. Facebook bought Instagram, Whatsapp, and now, the Oculus headset can track your facial expressions.

Amazon, Microsoft, and many others all followed Google and Facebook’s lead.

Microsoft picked up Skype for $8.5 billion and LinkedIn for $26 billion. They started tracking more data in Windows.

Microsoft’s Satori program started collecting 28,000 DVDs’ worth of data every day.

The project’s senior director said, ‘it’s mind-blowing how much data we have captured over the last couple of years. The line would extend to Venus and you would still have 7 trillion pixels left over’.

Amazon started carefully analysing all the businesses that listed on their site, recording data on sales, shipping, and marketing, and monitoring which products did well – and then they’d copy the best sellers and make their own cheaper duplicates.

FTC chair Lina Khan said Amazon is a ‘petri dish’ through which ‘independent firms undertake the initial risks of bringing products to market and Amazon gets to reap from their insights, often at their expense’.

Former Amazon exec James Thomson said, ‘they happen to sell products but they are a data company’.

Google, Microsoft, and Amazon’s move to cloud storage – where users could upload any information they needed – was the next natural choice for companies looking for more and more data.

Other smaller surveillance capitalists arrived on the scene too.

Score Assured in the UK offers a service that scans your social media accounts so landlords can predict risky tenants, and start-ups like LendUp look at your social media to determine your creditworthiness. HiQ looks at job candidates’ social media profiles ‘to pinpoint with laser-like accuracy the employees that are highest risk’.

As data everywhere was hoovered up in pursuit of profit, it was a race to the bottom… of data.

 

How the Public Space was Stolen

This race to uncover and harvest more lines of surveillance data, the search for ever more lucrative ‘supply routes’ of personal information – extending outwards into more areas of our lives and deeper into richer measurements of data, isn’t just a natural progression of the technology. It happened in a particular context.

In the early 90s, the internet was still largely thought of as a space for information, one that had grown out of academia. But in 1993, the activist Jeffrey Chester was warning that it was becoming a ‘privately owned public space’, a ‘virtual electronic shopping mall’ – as corporate interests circled.

The old NSFNET infrastructure had been decommissioned, and through the 90s internet access – first through telephone lines, then through cable – was becoming dominated by the telecommunications giants: AT&T, Sprint, and Verizon.

Western Union had dominated telegraph communications in the late 19th century, buying up over 500 competitors to gain a near monopoly. It then began abusing that dominance, keeping competitors out of the market and prioritising its own associates over others.

As the anti-trust movement grew, those fighting the monopolies weren’t just concerned about anti-competitive practices, but about what it meant to have a single man like robber baron Jay Gould in control of almost the totality of the infrastructure essential to the security of the country.

In response, Congress passed non-discrimination rules. No single company should have a say over the entire network, and telecommunications and railways had to abide by what was called ‘common carriage’ – they had to treat anyone wanting to access the network equally.

Fast forward to the 1990s, and the same rules were applied to cable providers. Networks had to grant access and couldn’t treat their own partners and affiliates with favouritism.

One judge said that, ‘assuring that the public has access to a multiplicity of information sources is a governmental purpose of the highest order, for it promotes values central to the First Amendment’.

But throughout the 80s and 90s this changed. As the Soviet Union collapsed, market liberalism triumphed, neoliberalism was at its peak, and government interference of any kind was challenged.

In 2002, the Bush administration repealed the ‘must carry’ rules, and cable giants began refusing access to smaller internet providers across the country, who quickly went out of business.

And across the US, telecommunications companies lobbied local government to prevent new networks. Municipal broadband is banned in 18 states, and big telecoms firms sign agreements with local authorities that often prohibit the use of other new ISPs. Net neutrality – the idea that data sent across networks should be treated equally – was being challenged too, and corporations began paying for preferential treatment to send their own data faster than competitors’. Big tech giants like Facebook, Google and Microsoft do deals with telecoms giants to send their own traffic at quicker speeds – essentially ‘fast lanes’ for the rich.

This shift towards digital laissez-faire coincided with another shift. After 9/11, security became more important than privacy. The Bush administration passed the Patriot Act, radically expanding the powers of the state to monitor citizens and gather data around the world.

Using 9/11 as justification, the surveillance state grew at the same time as Google was starting to expand its own surveillance practices.

Peter Swire, an expert in privacy in the Obama administration, said that, ‘with the attacks of September 11, 2001, everything changed. The new focus was overwhelmingly on security rather than privacy’.

Former national security advisor John Poindexter proposed a program called Total Information Awareness (TIA) that could pick out signals to predict and stop future attacks.

With that change, politicians seemed to lose interest in regulating big tech companies. Google and the NSA, for example, announced a partnership for Google to provide a ‘search appliance capable of searching 15 million documents in twenty-four languages’.

The NSA’s director wrote that ‘an effective partnership with the private sector must be formed so information can move quickly back and forth from public to private and classified to unclassified… to protect the nation’s critical infrastructure’.

A new techno-industrial-military-complex was born.

Google spent more time in Washington, spending more money on lobbying. The Washington Post called Google the ‘master of Washington influence’. The New York Times ran a story saying that, ‘Google is very aggressive in throwing its money around Washington and Brussels, and then pulling strings. People are so afraid of Google now’.

The neoliberal-techno-industrial-military-complex provided the context for corporations to spend vast sums of money expanding their operations and influencing both politicians and the public through PR that claimed that what they were doing was good for the ‘community’.

As the social fabric was shredded, as Margaret Thatcher made her famous claim that there was no such thing as society, increasing numbers became isolated and atomised. Memberships to churches, unions, and even bowling clubs declined, and big tech claimed that they were rebuilding that sacred lost space – the public, the communal, the common.

Brian Chesky, CEO of Airbnb, said that, ‘Airbnb started really as a community, probably even more than a business. It became a business to scale the community. But the point is that when it became a business it never stopped becoming a community’.

Its slogan was, ‘the world’s largest community-driven hospitality company’. Its Global Head of Community Douglas Atkin declared that, ‘Airbnb and its community wants to create a world where Anyone can Belong Anywhere’.

But across the world, Airbnb were up against real community regulations, often aimed at keeping rents low for locals and keeping out property investors who might leave apartments empty as appreciating financial assets, or lease them out to wealthy travellers.

In many places in the UK – including some of the most beautiful parts of the country like Cornwall and Wales – locals are forced out of the property market as homes are bought up as second homes or to list on sites like Airbnb.

Airbnb knew that to fight this in so many places, they had to mobilise their own ‘community’. They couldn’t do it alone.

They began posting jobs like one for a 'Community Organiser' (velvetjobs.com/job-posting/community-organiser-fixed-termcontract-399585), looking for candidates with experience in organising community, political, and government campaigns.

The premise was simple: Airbnb would make sure that their own members fought for their own right to do what they wanted with their own property.

They wanted to maintain the image that this wasn’t landlordism or rentierism, but local communities standing up for their local areas.

Despite the image, what was being listed on the site was changing. It wasn't just individuals or families renting out a spare room, or their place while they were away: by 2020, 'hosts' with more than one property made up 63% of the total; between 2017 and 2020, the number of hosts with 101–1,000 properties grew by 50%; and 14% of hosts had more than 21 listings.

What on the surface seemed like a project for ‘democratising’ the vacation market was fast becoming a means for landlords to consolidate and extract rents from their capital and assets.

Several reports, including one from the centrist Economic Policy Institute, found that Airbnb has a harmful effect on the economy. With a fight on their hands, Airbnb mobilised a war chest.

In one campaign in San Francisco, Airbnb spent $8 million on TV ads, billboards, and canvassing to lobby against regulation that restricted the rental market.

Its Head of Community Atkin said, ‘mobilisation was absolutely instrumental in changing the law … we got about 250 of our hosts to show up at land use hearings and to give up a day of work’.

Meanwhile, in 2018, Amsterdam, Barcelona, Berlin, Bordeaux, Brussels, Krakow, Munich, Paris, Valencia and Vienna wrote a joint letter to the EU, asking it to intervene in what they saw as a growing problem.

Airbnb have been involved in 11 lawsuits against authorities to try to avoid regulations, while many cities have fought back with measures like caps on the number of days a property can be let.

Muldoon calls this ‘community-washing’. He writes, ‘according to the spin of tech CEOs, making billions of dollars is almost incidental to – or a welcome but unexpected by-product of – their social mission of connecting the world and giving people a sense of belonging in community’.

But ‘there is a deep irony in one of the world’s most successful entrepreneurs portraying his company as a champion of grassroots community’.

At a 2017 Facebook Communities Summit Zuckerberg took to the stage by saying, ‘it’s not enough to simply connect the world, we must also work to bring the world closer together… Communities give us that sense that we are part of something bigger than ourselves, that we are not alone, that we have something better ahead to work for’.

With one hand, big tech companies support and lobby for deregulation that allows big capital to encroach upon community spaces; with the other, they co-opt the language and sentiment of community to give the impression that they're on its side. As Orwell wrote, 'political language is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind'. Power twists commonly-accepted values to make them sound appealing – 'thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness'.

Like Airbnb, Uber spends millions lobbying local governments to repeal regulation that has often been the result of long-fought battles by local taxi drivers. Certain regulations, like limiting the supply of taxis, were about risk-reduction, so that during a downturn drivers could still earn enough money.

Uber systematically tried to circumvent and replace laws like these.

Meanwhile, one study found that half of Uber drivers in Washington DC were living below the poverty line. Uber have fiercely resisted classifying drivers as employees, insisting they're self-employed, so the company doesn't have to pay sick leave, a minimum wage, or paid vacation.

The incursion into our political space, the theft of the community, at a period in history that prioritised laissez-faire and surveillance security, reminds us that the shape and direction of any change is not natural – there is no such thing as ‘empty cyberspace’ that can be moulded with new utopian rules. The offline world, real power, history, always intrudes.

‘As the legal scholar Veena Dubal has argued, and as Doug Schifter observed in his columns, politics has been pivotal, in particular the pressure the company has brought to bear on regulators and policymakers’.

 

Encroachment

Utopia comes from the Greek for 'no place'. It is an apt description of the Californian ideology that saw cyberspace as exactly that – an infinite blank sheet upon which its own rules could be written, in some realm out there, disconnected, metaphysical.

But, of course, there is no such place as no place. Big tech platforms and ISPs are made up of wires, servers, and code written by people who thought in particular ways, in a particular historical context, with particular rules and laws.

Despite this, big tech platforms often try to absolve themselves of any real world responsibility.

While having a large hand in shaping what users see, what gets amplified, and how they see it, platforms like Facebook aren't treated like real-world publishers. They aren't subject to the libel, copyright, and other regulations that their users are.

The 1996 Communications Decency Act decided that 'no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider'.

Yet all of these platforms use mountains of data collected on you and your neighbourhood to decide what you'll buy, what you'll watch, and what you'll read. Facebook knows what mood you're in – whether you're lonely, lovesick, happy, just paid, politically disenfranchised – and can show you not just what you want – because not even we know what we really want – but what you'll most likely click on.

The shift towards surveillance, in a political context that justified diminishing protections and shrinking privacy, was like a stranger entering your home bearing sweets while secretly noting your likes and dislikes, who you live with, your pets' names, and what you were cooking – all for the sake of 'personalisation' and subtly modifying behaviour.

And ironically, from their utopian no place, they've become ubiquitous. They shape how we travel, eat, and watch, which apps we use, and what we listen to; two-thirds of us get news through social media; Facebook has acquired almost 100 companies, Amazon almost 100, Google more than 200, and Microsoft more than 200.

Zuboff calls it dispossession – an ‘incursion into undefended space’: ‘your laptop, your phone, a web page, the street where you live, an e-mail to your friend, your walk in the park, browsing online for a birthday gift, sharing photos of your kids, your interests and tastes, your digestion, your tears, your attention, your feelings, your face’.

They extract more and more information, and they suck it like vampires, from the corpse of history, from no place.

That corpse of history is a long tradition of public investment that’s cheaply sold off or surrendered, while pretending that only capitalism is capable of innovation.

The internet, the touchscreen, GPS, MRI, aviation projects, nanotechnology, the first computer, the first digital computer, radar, and much more exist only because of public investment in risky research that individual companies could not have pursued without grants.

Nathan Newman has pointed to how the development of the internet is sometimes compared to the development of highway systems. But, he says, the internet is much more complex, and the comparison doesn’t make sense unless the government ‘had first imagined the possibility of cars, subsidized the invention of the auto industry, funded the technology of concrete and tar, and built the whole initial system’.

We have a hangover. Drunk on neoliberalism, we strip regulation that protects local residents and taxi drivers, let bucketloads of cash into our political lobbies, hand over treasure chests full of data, and pass off the infrastructure to telecoms giants who have no incentive to reinvest in faster broadband and instead pay larger dividends to shareholders. They don't invest in poorer neighbourhoods or rural areas, which put up with insufferably slow speeds, and the US has some of the most expensive internet fees. In 2018, according to Microsoft, almost half the country could not get or did not use broadband – overwhelmingly low-income and rural households. In Detroit, 70% of school children have no internet access.

Tarnoff writes that 'the big ISPs are essentially slumlords. Their principal function is to fleece their customers and funnel the money upward'. On top of that, 'the broadband cartel regularly receives large infusions of public cash'.

For all this talk of state investment, and despite the damage done by deregulation and laissez-faire markets, I'm not suggesting that Facebook or Airbnb or Twitter should be handed over to national governments. I think choosing between capitalists and politicians as the owners of platforms is a false choice.

Solutions are difficult. Trade-offs are necessary. Real solutions should be multifaceted. I think some combination of regulation, cultural awareness, and democratic alternatives should all be pursued.

Muldoon points out that before the mid-20th century, the idea of what constituted the public interest was much wider. We often bail out banks or energy companies at great cost, and regulation has long protected the weak from the strong.

In one 1877 case, Munn v. Illinois, the court concluded that a business storing large quantities of grain was of public interest and so could be regulated.

There are lots of reasons something might be in the public interest, but the central ones are that it affects all of us, that it tends towards monopoly, and that there are clear harms – and so at least part, if not all, of how it is run should be decided democratically.

As legal scholar Frank Pasquale has said, ‘the decisions at the Googleplex are made behind closed doors… the power to include, exclude, and rank is the power to ensure which public impressions become permanent and which remain fleeting.… Despite their claims of objectivity and neutrality, they are constantly making value-laden, controversial decisions. They help create the world they claim to merely ‘show’ us’.

First, let's look at the benefits of being open, democratic, transparent, and non-commercial; then we'll look at political policy and how to build alternative spaces.

In the 70s, a group of computer scientists at MIT were working with an operating system called Unix, which was owned by the telecommunications giant AT&T.

One programmer and hacker, Richard Stallman, wanted an alternative: something that wasn't restricted and was free to work on, add to, and share. Stallman published the GNU Manifesto and became a pioneer of what came to be called free – and later open-source – software. GNU was a recursive acronym for 'GNU's Not Unix', and he proposed a legal device called copyleft: licensing terms under which anyone could use, add to, or modify software, provided they shared it under the same conditions. Stallman began work on several pieces of code that would contribute to an open-source operating system anyone could use, and through which innovation could be pursued collaboratively. Stallman believed that access to the source code of software was a right.

In the early 90s, a Finnish software engineer, Linus Torvalds, contributed to the project a new open-source operating-system kernel. He called it Linux, and anyone could download it, use it, and contribute to its development.

Both Stallman and Torvalds opposed the position that Bill Gates had outlined in his Open Letter to Hobbyists years earlier.

The type of software they developed became known as FLOSS – Free/Libre/Open Source Software – and it grew into such an influential movement that we all rely on FLOSS every day. Every one of the world's 500 fastest supercomputers runs on Linux, including machines at the US Department of Energy, one on the ISS, and the systems aboard the USS Zumwalt – the most technologically advanced ship in the world.

The reason they use Linux is that, because it's open, it can be adapted to highly specialised needs. The servers that host most of the world's websites run on Linux, as do the computers at many government agencies. Google's phone operating system, Android, is Linux-based, making it the most-used operating system in the world.

LibreOffice – a free Microsoft Office alternative; VLC – a great media player; Firefox – a web browser; Audacity – which I'm using right now; and WordPress – used by more than 60 million websites – are all FLOSS.

Open-source practices are a challenge to Bill Gates's biggest complaint in his Open Letter to Hobbyists: why would anyone build anything if it wasn't commercially profitable?

Studies have found that developers contribute to FLOSS for various reasons that don't fit the traditional economic/psychological model of humans as self-serving, profit-maximising agents. Contributing to a community or to humanity more broadly, the desire to challenge themselves, the status that comes from contributing to a project, and needing the very software they're contributing to have all been cited as motivations beyond profit.

And FLOSS is often worked on outside of conventional worker-boss hierarchies.

Legal scholar Yochai Benkler writes that FLOSS is, ‘radically decentralized, collaborative, and non-proprietary; based on sharing resources and outputs among widely distributed, loosely connected individuals who cooperate with each other without relying on either market signals or managerial commands’.

Media scholar Benjamin Birkinbine argues that FLOSS is an example of an alternative value system: a type of digital commons – public, open, non-private property that everyone has a right to use.

Enter Microsoft.

As FLOSS has proven itself as a viable alternative to proprietary software, corporations have slowly encroached on open source projects in an attempt to, in Birkinbine’s words, ‘capture the value being produced by FLOSS communities’.

After their antitrust case, Microsoft made a dramatic U-turn in their position on sharing software.

Microsoft were forced to adopt ‘interoperability’ – the opening of their ‘APIs’ – so that third party software could more easily interact with the Windows ecosystem.

And it was at that time that FLOSS was taking off as a viable way of producing software.

In 1998, a supporter of open source, author Eric Raymond, received leaked internal Microsoft documents that became known as ‘the Halloween Documents’.

In them, Microsoft outlined their new approach to open source: they would need to make use of it while continuing to make the case that software – Microsoft software in particular – was worth the price tag. To do this, the Halloween Documents revealed, Microsoft would use what they referred to as 'FUD tactics' – the sowing of Fear, Uncertainty, and Doubt.

They broadcast an advert in the UK, for example, that claimed that, while initially free, Linux was found to be over 10 times more expensive than Windows Server 2003 in the long term. The Advertising Standards Authority found the claim misleading and banned the ad.

Microsoft were also found to be funnelling money into a company that owned the rights to the original Unix and was fighting a legal battle against Linux over intellectual-property infringement.

But the Halloween Documents also described internal surveys that had found there was a lot of support for FLOSS within Microsoft.

So Microsoft had to balance between fighting open source and incorporating it into Windows, both for their own benefit and to avoid more legal problems.

Today, interoperability is an important feature of everyday life that often goes unnoticed and unappreciated. Simple examples are railways and airports, which have to share the same track gauge, signalling, air-traffic-control protocols, and so on. And if you've ever used the Microsoft Word alternative LibreOffice, you'll know it can open and save in Microsoft file formats.

Think about screws, plugs, banking systems, and internet protocols – interoperability ensures, in part, that dominant corporations can't entrench their dominance by pushing their own interfaces and squeezing out others.
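In software, this often comes down to openly documented formats that no single vendor controls. As a minimal, hypothetical illustration in Python (the file name and figures are made up for the example), writing data as plain CSV means any spreadsheet – Excel, LibreOffice Calc, or otherwise – can open it:

```python
import csv

# CSV (RFC 4180) is an open, vendor-neutral format: any spreadsheet
# application can read it, so no single program locks in the data.
rows = [
    ["city", "airbnb_listings"],   # illustrative figures only
    ["Barcelona", 15000],
    ["Amsterdam", 7000],
]

with open("listings.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Reading it back requires no proprietary software either.
with open("listings.csv", newline="") as f:
    data = list(csv.reader(f))

print(data[1])  # ['Barcelona', '15000']
```

The point isn't the code itself but the design choice it embodies: because the format belongs to no one, the data can move between competing programs freely.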

And while the open-source community remains sceptical of Microsoft, Birkinbine writes that Microsoft had to embrace open source because it had proven that it worked, that collaboration was the direction the industry was heading in, and that, ‘the company’s turn to open source may also be viewed as a humble recognition that the commons-based peer production taking place within the FLOSS community was an efficient and effective model of industrial software production that could supplement its own business practices’.

Wikipedia is another example of a similar non-commercial alternative. The first Wikipedia article, published on January 15th 2001, was on the letter 'U'.

McCullough writes that, ‘it was comprehensive, it was well written, and it was—to the surprise of Wales and his team of editors — accurate. The few thousand users who had shown up to test out Wikipedia had, through their collective input and edits, gotten the article polished to near-authoritative quality’.

Like FLOSS, Wikipedia’s contributions are voluntary, it is free to access, there are no advertisements, it doesn’t track its users or sell their data, and is based on the premise that each can participate equally; it’s deliberative rather than hierarchical – administrators and stewards are voted for, for the most part, democratically. Administrators are nominated and voted for based on familiarity with rules and processes, edits are debated over and voted on.

 

Building Alternative Spaces

What these examples prove is that open, collaborative and democratic alternatives to Big Tech are possible.

Influenced by the sociologist Erik Olin Wright, Muldoon argues that we should be building radical democratic institutions within the ‘cracks’ of the capitalist system – what Wright called ‘real utopias’.

Muldoon argues for what he calls platform socialism – which would involve 'the organisation of the digital economy through the social ownership of digital assets and democratic control over the infrastructure and systems that govern our digital lives'.

He continues: ‘a broad ecology of social ownership acknowledges the multiple and overlapping associations to which individuals belong and promotes the flourishing of different communities from mutual societies to platform co-operatives, data trusts and international social networks’.

And there are plenty of examples of small platforms trying to challenge Big Tech's dominance: Up & Go is a cleaning cooperative in NYC; FairBnb is an alternative to Airbnb; Taxiapp an alternative to Uber; Mastodon an alternative to Twitter; and Diaspora and Friendica alternatives to Facebook. But if there's one thing they have in common, it's their very clear failure to make much progress against incumbents, who are protected by, among other things, what's been described as the 'network effect': the more people who use a platform, the bigger it is, and the more valuable it becomes. This means there's a cost to leaving Twitter for Mastodon, because everyone you want to follow is on Twitter.
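The scale of that network-effect advantage can be sketched with Metcalfe's law – a common, if contested, rule of thumb that values a network by its number of possible connections, which grows with the square of its user count. A rough illustration in Python (the user counts are purely hypothetical):

```python
# Metcalfe's law: n users can form n*(n-1)/2 pairwise connections,
# so a network's "value" is often modelled as roughly proportional to n^2.
def metcalfe_value(users: int) -> int:
    """Number of possible pairwise connections between users."""
    return users * (users - 1) // 2

incumbent = metcalfe_value(100_000_000)  # an incumbent platform
challenger = metcalfe_value(1_000_000)   # a small rival

# 100x the users yields roughly 10,000x the possible connections -
# one way of seeing why leaving the incumbent carries such a high cost.
print(f"value ratio: {incumbent / challenger:,.0f}")
```

On this (admittedly crude) model, a challenger with one per cent of the incumbent's users offers around a ten-thousandth of its connective value – which is the uphill slope every alternative platform starts on.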

This is a problem, and a reason Microsoft is so dominant, and how they managed to squeeze Netscape and other competitors out of the market. But it’s also why interoperability may hold the key to how to challenge Big Tech monopolies politically.

Sometimes, as the anti-monopoly battle against the robber barons showed, alternative services aren’t enough, only political power can fight the power of corporate incumbents.

I think many people see regulation too simply. Regulation and policy choices come in many forms, and often we can design policy that empowers alternatives instead of just restricting or breaking up Big Tech.

Barcelona City Council, for example, forces Vodafone to make its data open for public use on the council's website. Bernie Sanders has suggested billions in grants for local municipalities to 'build publicly owned and democratically controlled, co-operative, or open access broadband networks'.

Public media outlets like PBS and NPR – established under Lyndon B. Johnson's Public Broadcasting Act of 1967 – and the BBC – funded by licence-fee payers – provide a quasi-political but ideally independent space motivated by values that sit outside market mechanisms, and routinely rank in polling among the most trusted organisations for news.

Legislation could be designed to encourage the creation of national or local alternatives to Big Tech that are independent of government, or run through existing library networks or local councils. A government department could build open-source code and provide it to councils to run alternatives to Uber or Airbnb that give local drivers and citizens a voice in how the apps are run.

Personally, I’d like to see an alternative to Twitter that’s financed by subscription rather than advertising revenue. For a few dollars a month you’d get access to a network that is completely open and run by its users, who can all vote, elect, and decide on algorithm choices, research directions, privacy policy, and so on, in a similar way to Wikipedia.

And I think this is where interoperability is a smart policy. Regulation could force Facebook, Uber, Twitter and so on to allow third party applications to plug in to them.

Interoperability would mean you could use an alternative to Facebook that still works with Facebook, allowing you to keep access to your social network, to post to Facebook or Instagram or Twitter while using new platforms at the same time. Senator Mark Warner suggests legislation that would force platforms with revenues over $100 million to comply with ‘portability’ rules.

 

Conclusion

The deep contradiction of our age, Zygmunt Bauman wrote in 1999, is ‘the yawning gap between the right of self-assertion and the capacity to control the social settings which render such self-assertion feasible. It is from that abysmal gap that the most poisonous effluvia contaminating the lives of contemporary individuals emanate’.

The question is: who has the legitimate power to control those social settings? And how do we identify where that poisonous effluvia oozes from? In coaxing, nudging, watching, analysing, and sculpting our digital choices? In using monopoly positions to subtly influence our politics? Should we really rely only on the word of Elon Musk and Mark Zuckerberg that they’re not skewing news feeds? Should we really abandon the concept of community to Uber and Airbnb?

So it is imperative that we experiment more in alternatives, and essential that we have a politics that supports those experiments. As Tarnoff says, ‘while working to disassemble the online malls, we must also be assembling a constellation of alternatives that can lay claim to the space they currently occupy’.

Because unless the platforms we use every day are transparent, democratic, and open, then it’s not us controlling our own social settings. And while, yes, sometimes the decisions Big Tech makes can be the right ones, it’s also the case that, however benevolent a ruler may be, there are always differences in value systems from person to person, place to place, group to group.

And over the long arc, history has proven time and again that moral monopolies of all types tend towards stagnation, decay, blunder, or corruption. It only takes one mad king, one greedy dictator, one slimy pope, or one foolish jester, to nudge the levers they hover over towards chaos, rot, and even tyranny.

 

Sources

Martin Campbell-Kelly et al., Computer: A History of the Information Machine

Thomas Rid, Rise of the Machines: A Cybernetic History

Amy Klobuchar, Antitrust

Ben Tarnoff, Internet for the People

James Ball, The System

Richard Barbrook and Andy Cameron, The Californian Ideology

Shoshana Zuboff, The Age of Surveillance Capitalism

James Muldoon, Platform Socialism: How to Reclaim our Digital Future from Big Tech

Yes, Government Researchers Really Did Invent the Internet, https://blogs.scientificamerican.com/observations/yes-government-researchers-really-did-invent-the-internet/

A Brief History of NSF and the Internet, https://www.nsf.gov/od/lpa/news/03/fsnsf_internet.htm

Brian McCullough, How The Internet Happened: From Netscape to the iPhone

Sabeel Rahman and Zephyr Teachout, From Private Bads to Public Goods

Ethan Zuckerman, The Case for Digital Public Infrastructure

Where is the Digital Highway Really Heading? https://www.wired.com/1993/03/kapor-on-nii/

Benjamin J. Birkinbine, Incorporating the Commons: Corporate Involvement in Free and Open Source Software

Erik Olin Wright, Envisioning Real Utopias

Maureen Webb, Coding Democracy

Adam Curtis, All Watched Over By Machines of Loving Grace

Robert David Steele, The Open Source Everything Manifesto

Mark Andrejevic, Automated Media

Bill Gates, An Open Letter to Hobbyists

Clay Shirky, Here Comes Everybody

Hal Varian, Beyond Big Data, https://people.ischool.berkeley.edu/~hal/Papers/2013/BeyondBigDataPaperFINAL.pdf

50 years ago today, the Internet was born. Sort of. https://arstechnica.com/information-technology/2019/10/50-years-ago-today-the-internet-was-born-sort-of/

Seeding Networks: the Federal Role, http://som.csudh.edu/fac/lpress/articles/govt.htm

Getting Started Computing at the AI Lab, https://dspace.mit.edu/handle/1721.1/41180

NSFNET, The first national, then global backbone, http://som.csudh.edu/cis/471/hout/nsfnet.htm

A Brief History of NSF & the Internet, https://www.nsf.gov/news/news_summ.jsp?cntn_id=103050

High-Performance Computing Act of 1991, https://www.govinfo.gov/content/pkg/STATUTE-105/pdf/STATUTE-105-Pg1594.pdf

Telecom’s Lavish Spending on Lobbying, https://www.washingtonpost.com/archive/business/1998/12/06/telecoms-lavish-spending-on-lobbying/b3f35aec-aab0-4a9d-8f48-3212572df8ec/

Ben Tarnoff, ‘Wallets and eyeballs’: how eBay turned the internet into a marketplace https://www.theguardian.com/technology/2022/jun/16/wallets-and-eyeballs-how-ebay-turned-the-internet-into-a-marketplace

Inside the Backlash Against Facebook, http://content.time.com/time/nation/article/0,8599,1532225,00.html

How Your Attention is Stolen (Tue, 14 Mar 2023)
Likes, shares, friend requests, emails, notifications, updates, alerts, tweets, advert, advert, advert, adverts in schools, adverts on police cars, adverts on every screen, endless scrolling, emotional headline, CAPITALS – stop – must detach – must go for a run, must run further and faster than yesterday, must relax, watch a film, a tv show – which one – endless scroll – email from work, email from Auntie Linda – STOP – meant to be relaxing – play a game – play Tetris – STOP – Uber Eats 50% off? Let’s watch this short first, will only take 30 seconds – STOP.

Welcome to the attention economy. It’s a world where our attention is currency, where our capacity for concentration is competed for, bid on, captured, and grabbed, where corporations spend billions on the latest tricks to draw our eyeballs and pique our interest. The attention economy is everywhere – stress, anxiety, and frustration are on the rise worldwide – is it the attention economy that’s draining us of our energy to resist? Are attention merchants draining us of our will power? Is the constant noise detrimental to our mental health, shrinking us down into insignificant nubs in a chaotic world?

Maybe it’s time to resist. I want to show you how the attention economy developed, the tricks that it uses, what privacy used to look like, why it’s important to carve out space, and how to do that.

Because privacy was once sacred – the sanctity of the self once important.

Now, companies compete in a race to the bottom, deploying tricks to grab our attention – targeting emotions, feelings, words, and ideas – triggers that are increasingly difficult to resist.

You might think you’re immune, but are you? A study of 100 million articles on Facebook found that the most common headlines included phrases like “this will make you cry”, “people are freaking out”, and “you’ll be shocked to see”.

When radio became popular at the beginning of the 20th century, people were SHOCKED at the very thought of advertisers’ voices in their homes.

In 1922, Herbert Hoover – then Secretary of Commerce, later President – said, “It is inconceivable that we should allow so great a possibility for service, for news, for entertainment, for education, and for vital commercial purposes to be drowned in advertising chatter”.

Printer’s Ink – a magazine about advertising for advertisers – recommended that “the family circle is not a public place, and advertising has no business intruding there unless invited”.

But radio and television execs found it difficult to resist the profits. NBC executive Frank Arnold once said, “here you have the advertiser’s ideal—the family group in its moments of relaxation awaiting your message. Nothing equal to this has ever been dreamed of by the advertising man”.

Arnold was a man in the middle of a great shift in history – between newspapers and the nascent radio and television revolution – and onwards to computing and the internet. It was a revolution in technologies to capture our attention.

First, what is attention?

This guy – William James – was the father of psychology in America. He wrote about the importance of attention in 1890. Attention, he pointed out, is central to everything.

He said attention is a “taking possession of the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence”.

It also implies, he said, ‘withdrawal’ from some things to concentrate on others.

Attention, in short, is focus.

Today, psychologists have expanded on this.

We have attention resources – the capacity we have to pay attention, how we pay attention, what we pay attention to, the strength we have to pay attention, and so on.

And there’s a concept called attentional capture – how we shift attention, why we shift attention, what captures our attention, at what times, and for what reasons.

What’s interesting about attention is it gets to the core of a lot of issues – free will, emotion, persuasion, experience, concentration.

Think about fear – if you’re walking in the woods and see a bear – your attention will be captured – by emotion, by your nervous system ramping up, by memories of what to do.

Some attention is conscious and purposive – we engage our prefrontal cortex in thinking and concentrating. Other attention is ‘passive, reflexive, non-voluntary, effortless’ – it’s drawn.

What does all of this mean? There are some important takeaways:

  1. First, attention is limited, quite strictly – we can only really focus on one or possibly two things at once, and only a finite number of things through the day, and even through our lives. We ‘pay’ attention – suggesting ‘paying’ a cost for our time.
  2. And second, we can both control our attention and it can be captu-

*Phone rings*

Harold Innis summed the question up neatly: “Why do we attend to the things to which we attend?”.

The concentrating or capturing of attention can, of course, take many forms. We might listen to someone we want to learn from, watch something with good reviews, log on to social media to connect with friends, and choose to attend to something present, but the idea that attention can be captured using specific tricks, words, ideas, methods, is relatively new.

Tabloids – notably the New York Sun, founded in 1833, and its rival, the Herald – were some of the first commercial enterprises to feature stories designed to capture attention through the emotions – they featured murder, scandal and sensationalism.

One report said that in the Herald’s first two weeks the paper featured “three suicides, three murders, a fire that killed five persons, an accident in which a man blew off his head, descriptions of a guillotine execution in France, a riot in Philadelphia, and the execution of Major John André half a century earlier”.

A few decades later, during the 1860s, giant colourful posters began appearing all over Paris. Some were seven feet high, and they used new printing methods that could produce sharp lines and bold colours. One journalist wrote that they were “luminous, brilliant, even blinding”, using “vivid sensations and intense emotions”.

Their designer, Jules Chéret, was called “A master of blazing modernities”. He was awarded the Legion of Honor and had imitators across Europe, leading to a continent-wide ‘poster craze’.

One writer said that Paris was “hardly more than an immense wall of posters scattered from the chimneys down to sidewalks with clusters of squares of paper of all colors and formats, not to mention simple inscriptions”.

A group called the Society for the Protection of the Landscape and Aesthetics of France was formed. They waged war on what they described as the ugly posters that you couldn’t get away from.

The authorities banned many billboards, regulations came in, and the posters were taxed and curtailed.

Back in the newspapers of America, those new headlines and cheap newspapers were used to sell a craze as commonplace as the posters in Paris: ‘patent medicines’ – treatments to cure you of all of your ills.

Clark Stanley was the most famous. He started selling a popular snake oil liniment – made from, well, snake oil. He described it as a “wonderful, pain-destroying compound” and the “strongest and best liniment for cure of all pain and lameness”.

And these medicines were everywhere. “Elixirs of life” – made up of secret ingredients by doctors and scientists.

In 1905 Collier’s published a now famous report on the snake oil salesmen called “The Great American Fraud”.

It said:

“Gullible America will spend this year some seventy-five millions of dollars in the purchase of patent medicines. In consideration of this sum it will swallow huge quantities of alcohol, an appalling amount of opiates and narcotics, a wide assortment of varied drugs ranging from powerful and dangerous heart depressants to insidious liver stimulants; and, far in excess of all other ingredients, undiluted fraud. For fraud, exploited by the skilfulness of advertising bunco men, is the basis of the trade”.

The Collier’s reporting led to the Pure Food and Drug Act of 1906, banning misleading claims and cracking down on the con men.

But admen had learned that targeting ailments and illness and existential fears, using bright poster colours and emotive language, was key to grabbing attention.

And the advertising industry flourished.

James Rorty, once an adman himself, wrote a bestseller on how those trying to grab attention had to degrade themselves in doing so. He wrote that the adman, “inevitably empties himself of human qualities. His daily traffic in half-truths and outright deceptions is subtly and cumulatively degrading. No man can give his days to barbarous frivolity and live. And ad-men don’t live. They become dull, resigned, hopeless. Or they become daemonic fantasists and sadists”.

Another bestseller, Vance Packard’s The Hidden Persuaders (1957), warned of “large-scale efforts being made, often with impressive success, to channel our unthinking habits, our purchasing decisions, and our thought processes by the use of insights gleaned from psychiatry and the social sciences. Typically these efforts take place beneath our level of awareness; so that the appeals which move us are often, in a sense, ‘hidden.’ The result is that many of us are being influenced and manipulated, far more than we realize, in the patterns of our everyday lives”.

Richard Stolley, founder of People magazine, had a method for capturing attention. He said the faces on the cover of People had to be recognizable to 80% of the public. After that, “Young is better than old. Pretty is better than ugly. Rich is better than poor. TV is better than music. Music is better than movies. Movies are better than sports. And anything is better than politics”.

Further into the century, cable television made grabbing the attention of more subgroups possible – ESPN, MTV, Fox News, the History Channel.

The instruments of attention capture – the medium, the technology, the paint, lights, sounds – all contributed to an ever-improving hyperreality, one that made the immediate world less appealing than the dream-like world claiming to be able to take the consumer elsewhere – to a sale, to an article, to a video of an exotic land. Attention capture idealises the world – idealises the good and the bad.

But with the internet, data and personalisation fundamentally changed how our attention can be captured.

What was new? Instant feedback. Attention grabbers could use data vacuumed up from as many sources as possible – keystrokes, searches, your location, what your friends were doing, your purchases, where you’ve been, what you’ve watched – to finetune precise ways to grab your attention, to suggest the most ‘relevant’ articles, adverts, takeaways, tweets, posts, and videos – personalised attention capture.

You might be thinking so what? The more personalisation, the better. The more relevant the offer, the better. We live in an abundant modern landscape – surrounded by good things – products, information, social networks – so why is this a problem? Choice, after all, is a good thing. How can we think through the downsides?

First, most obviously, our attention is sometimes captured by things that we want – a good film or article – but often it’s captured by things that we know are bad for us. And second, can our own attentional resources – our capacity to attend to the things we want to attend to – be drained or captured or stolen with such ubiquity, with such all-encompassing frenzy, that we become lost to what our true needs are? And to what our own sense of self is, our own personhood?

Imagine that you live in a polygon, surrounded by doors and windows. Behind one door are our friends; another is full of windows into different parts of the planet, even space; another is full of people talking about the news; another, shops; another, movies. What a wonderful thing – a good thing. But we’re evolutionarily wired to want more of a good thing. And like doors full of sugar, these doors knock, calling us, enticing us.

Corporations spend billions of dollars trying to improve and decorate those doors and windows. They try and make them seem like theirs are better, more urgent to attend to, than the others.

Reed Hastings, CEO of Netflix, said “at Netflix, we are competing for our customers’ time, so our competitors include Snapchat, YouTube, sleep, etc”.

One obvious problem with living in the polygon is that you’re surrounded by constant knocks, stimulation, notifications, emails, requests for your time.

And distractions don’t only distract in the moment. Studies have shown that your brain trains itself to predict distractions, and so will distract itself in anticipation, before the distraction happens – like knowing a friend is about to knock on the door, or predicting that this is about the time you get an email from your boss.

So the more you surround yourself with distractions, the more your brain will interrupt itself from what it’s doing, and the harder it becomes to concentrate on the things that matter. It takes 10-15 minutes to get into a state of deep focus – a flow state – and only a second for your attention to be grabbed away from it.

One study found that being in a distracted environment lowers IQ scores by about 10 points.

When we’re trying to concentrate in the polygon – whether that’s reading, working, watching a film, spending time with family – the temptation to look to the windows is enormous.

The average person looks at their phone 150 times a day and touches it – that includes picking it up, touching it in a pocket, and swiping it – around 2,500 times a day.

Our attentional resources are drained. We are pulled this way and that by powerful magnets, temptations, siren songs. By sugars, by drugs, by the use of red, by BOLD emotional language, by the need to keep up, by the need to be liked – the attention economy is an overdetermination, a dizziness, a promise of freedom to pursue desires that ends up as a new type of slavery to our deepest desires.

So distraction, in itself, is corrosive.

And the methods of attention grabbing are only getting better, more persuasive and enticing – the game is rigged against us. I want to talk about how the psychological tricks attention grabbers use are linked to modern addictions in the next video – but for now, why is it important to escape the constant competition for our attention and the pathways it leads us down? And how can we do that?

There are two solutions – leave the polygon or attend to the polygon.

The centre of the polygon should be where your personhood thrives. Instead, essayist William Deresiewicz points out, “you are marinating yourself in the conventional wisdom. In other people’s reality: for others, not for yourself. You are creating a cacophony in which it is impossible to hear your own voice, whether it’s yourself you’re thinking about or anything else”.

The Global Emotions Survey – taken every year – reports that levels of sadness, worry, anger, and stress, particularly stress, are increasing year on year – by almost 50% since 2009. Of course, a lot of things have likely contributed to this. But is the increasing prevalence of smart devices, of being always online, causing overstimulation that leads to a new type of stress?

The term stress, after all, comes from physics – the amount of pressure on an object, the stretching of an elastic band that might be about to break.

Is the attention economy the cause of ‘quiet quitting’ – not doing all the extras a job demands of you? The dropping out of the rat race? The moving out of the cities? A response to power lunches, to always being available, to responding to emails and doing one more bit of work at 10pm, of tweeting about the latest news at 1am, of constantly keeping up with every friend?

Ethicist James Williams writes:

“We experience the externalities of the attention economy in little drips, so we tend to describe them with words of mild bemusement like “annoying” or “distracting.” But this is a grave misreading of their nature. In the short term, distractions can keep us from doing the things we want to do. In the longer term, however, they can accumulate and keep us from living the lives we want to live, or, even worse, undermine our capacities for reflection and self-regulation, making it harder, in the words of Harry Frankfurt, to “want what we want to want.” Thus there are deep ethical implications lurking here for freedom, wellbeing, and even the integrity of the self”.

As James Williams points out in his book, Stand Out of Our Light, technology should function like a GPS for our goals – guiding us, showing us the latest information on our interests, keeping us updated with what’s important. But imagine if our real GPS took us down wrong paths instead of to our destination – to a sale at a department store rather than to Grandma’s.

Should we just turn off the satnav?

The painter Agnes Martin – talking about the hippy lifestyle – disagrees. She said “A lot of people withdraw from society, as an experiment… So I thought I would withdraw and see how enlightening it would be. But I found out that it’s not enlightening. I think that what you’re supposed to do is stay in the midst of life”.

Instead, in her book How to Do Nothing, Jenny Odell talks about why we should be protecting spaces for non-commercial, non-instrumental, independent self and communal thought – places to maintain ourselves, to care, to think about the local.

She writes “What is needed, then, is not a “once-and-for-all” type of quitting but ongoing training: the ability not just to withdraw attention, but to invest it somewhere else, to enlarge and proliferate it, to improve its acuity”.

It’s not just about switching off, but finding the time free from distraction, to attend to things that we value. It’s about using technology in the right way. If the attention economy draws us away, the correct response is to return to the direct moment.

One of the most transformative features of modern life is the separation of space and time. We live in multiple time zones simultaneously – from one space. We have clocks, maps, calendars, timetables – all modern inventions – that take us out of the moment. They allow us to plan with one another across different futures, different geographies, different spaces. Pre-moderns were focused on the plough; modernity dis-embeds us from local experience.

This, of course, has been revolutionary. But when we’re too distracted, too dis-embedded from what’s directly around us, the pre-moderns might have some good suggestions about how to detach.

Philosopher David Abram writes that, “direct sensuous reality, in all its more-than-human mystery, remains the sole solid touchstone for an experiential world now inundated with electronically generated vistas and engineered pleasures; only in regular contact with the tangible ground and sky can we learn how to orient and to navigate in the multiple dimensions that now claim us”.

Paying attention to the immediate informs things in surprising ways.

And paying attention is an act of discipline. We must train ourselves to switch off when we need to – creating our own polygons of bookshelves, habits, and turned-off notifications – and instead listening, watching, and thinking.

I think one of the issues with the attention economy is that it leaves us no time to truly think for ourselves – to not be drowned out by conventional wisdom.

To attend carefully to what’s around us – to think problems through and let the mind run on its own – is lost when our privacy is encroached on so much we become part of the bigger machine.

Paying attention to the moment can lead to surprising insights. William Blake said of attending to the moment that it can lead you to “see a world in a grain of sand and a heaven in a wild flower, hold infinity in the palm of your hand, and eternity in an hour”.

This isn’t just romantic – it’s scientific. In looking to understand, to study, to find the causes of a grain of sand, a wild flower, your hand, an hour – you extrapolate outwards. Knowledge is not just endless information, it is learning how to think – and that requires disconnecting. Pay attention to how you’re paying attention.

 

Sources

Jenny Odell, How to Do Nothing: Resisting the Attention Economy

James Williams, Stand Out of Our Light: Freedom and Resistance in the Attention Economy

Tim Wu, The Attention Merchants: The Epic Scramble to Get Inside Our Heads

Adam Alter, Irresistible: The Rise of Addictive Technology

Shoshana Zuboff, The Age of Surveillance Capitalism

Mark Andrejevic, Automated Media

Brian McCullough, How The Internet Happened: From Netscape to the iPhone

The post How Your Attention is Stolen appeared first on Then & Now.

How the Internet Was Stolen https://www.thenandnow.co/2023/02/05/how-the-internet-was-stolen/ Sun, 05 Feb 2023 12:12:12 +0000

This is a story of deceit, manipulation, lots of money, political lobbying, lawsuits, and, ultimately, covert theft. The history of the internet has been one of big capital pilfering from public investment and national infrastructure, the expropriation of academic research, of democratic open-source alternatives being forced illegally from the market, of devices to steal our privacy for profit being quickly snuck into our homes, our cars, our watches, glasses, and phones. It’s a story of unethical business practices, of monopoly power, a story that could have been different, and, in the end, a story of competing dreams – hopes – of the future.

It’s a big history – one that needs to be told properly – and one in which that verb – to steal – will be returned to. Stealing can happen in many ways: through force, through dispossession, through tricks, power, and cash. And physical infrastructure, ideas, attention and privacy can be stolen as much as hardware and physical goods. We’ll see how this story has some striking parallels through history – this is the story of the ideologies, the battles, the court cases, the innovations, the lost hopes – from Microsoft to Uber, from eBay to Google, from the US Department of Defense to dimly lit university laboratories, from Napster to 9/11 – the story of the internet.

The internet is a great ocean: we know that we only swim – paddle – on its surface, never diving beyond the first few pages of search results, rarely interacting outside of our own social network, swimming out occasionally to find a few new journalists or creators to follow, a few new products to buy – but its depths – what it’s made of, how it was made, the history that gave shape to it, what lurks down there, what might be possible – remain, like the ocean, surprisingly unknown to us.

The internet was meant to be a new utopia – a tool of liberty, fraternity, equality – of radical democracy and freedom. And it might have had its moments, but has a tool celebrated for its openness been turned into a cage? To understand this, we have to look at the values of those who built it – ask why and how it was designed, what motivated its pioneers – and look at what they came up against. The history of the internet, maybe more than any other history of our present moment, is the history of our future.

The internet is a complicated system. Even when Charles Babbage designed the first computer – a mechanical calculator – in the 1820s, it was too complicated for him to fund on his own. Babbage received a £17,000 grant – a fortune at the time – from the British government to fund his project, which still failed. Throughout the industrial revolution, single innovators could design and build machines that were the most complicated devices ever envisaged – alone. But with computers, this was no longer true.

The first two modern computers were built at the University of Illinois in 1951, funded by the Department of Defense and the US Army. Just 18 years later, in 1969, at the height of the counterculture revolution, four computer terminals were connected remotely for the first time, at universities across the US from California to Utah.

The project was the result of continued Department of Defense financing through ARPA – later DARPA, the Defense Advanced Research Projects Agency – investment that would cost the US taxpayer 124 million dollars, almost a billion in today’s money.

The result was a network called ARPANET – the Advanced Research Projects Agency Network.

Why was so much money spent?

From its inception, ARPANET was the culmination of two visions. The first was to find a way for academics to exchange data across institutions and, importantly, to share computing power, which at the time was expensive, slow, and valuable to researchers at the universities.

ARPA director Charles M. Herzfeld remembered that ‘ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.’

It was, as historian Brian McCullough puts it, a ‘researcher’s dream of a scholarly utopia.’

But it was also a product of the Cold War.

And the Deputy Director, Stephen Lukasik, has challenged how Herzfeld remembers its development. He said ‘the goal was to exploit new computer technologies to meet the needs of military command and control against nuclear threats, achieve survivable control of US nuclear forces, and improve military tactical and management decision making.’

As the Cold War developed, the US spent more and more on its growing ‘military-industrial complex,’ working not just with industry, but with universities too.

Many academics wanted ARPANET to be open to all. All researchers – funded by the taxpayer or by universities – should be able to access it.

Because of this, they believed, there had to be a common, universal language that would underpin how the new communication network operated.

Steve Crocker, the inventor of some of the early protocols the internet runs on, said that they were looking for ways to make the procedures open, so that they could be added to, changed, and updated democratically – importantly, no single institution should be in charge.

When administrators at MIT started locking doors, or putting passwords on computers that until then anyone could book a slot to use, students and researchers fought back. They started calling themselves hackers.

A young enthusiast called Richard Stallman recalls:

‘Anyone who dared to lock a terminal in his office, say because he was a professor and thought he was more important than other people, would likely find his door left open the next morning. I would just climb over the ceiling or under the floor, move the terminal out, or leave the door open with a note saying what a big inconvenience it is to have to go under the floor, “so please do not inconvenience people by locking the door any longer.” … There is a big wrench at the AI Lab entitled “the seventh-floor master key,” to be used in case anyone dares to lock up one of the more fancy terminals.’

As ARPANET proved useful, other government agencies wanted in. The NSF – the National Science Foundation – created NSFNET in 1985 and eventually linked more universities across the country.

In 1977, the first transmissions were made wirelessly, around the world, under the sea, into space, and back to where they started.

And by the early 80s, the NSF was investing heavily in infrastructure. They built a ‘backbone’ across the US that could connect universities with other research institutions.

The NSF website states that ‘Throughout its existence, NSFNET carried, at no cost to institutions, any U.S. research and education traffic that could reach it. At the same time, the number of Internet-connected computers grew from 2,000 in 1985 to more than 2 million in 1993. To handle the increasing data traffic, the NSFNET backbone became the first national 45-megabits-per-second Internet network in 1991.’

Any school could apply for a grant from NSF to connect to the network.

But its popularity was to become its undoing. By the 90s, the infrastructure was becoming overburdened. And there wasn’t much appetite to increase funding at the height of the neoliberal period. The NSF continued to upgrade but couldn’t keep up with demand.

Data was measured in packets – in 1988, a million packets of traffic were sent across the network; by 1992, this had increased to 150 billion.

Commercial use was banned – the network was to be used for research and education only.

A 1982 MIT handbook states that ‘personal messages to other ARPANet subscribers (for example, to arrange a get-together or check and say a friendly hello) are generally not considered harmful … Sending electronic mail over the ARPANet for commercial profit or political purposes is both anti-social and illegal. By sending such messages, you can offend many people’.

At the same time, small commercial operators began to provide alternatives, using the same protocols as NSFNET and often hiring DARPA researchers.

Pressure to change how the network operated was increasing as more and more people wanted to join.

In 1991, Democrat senator Al Gore began working on a High-Performance Computing and Communications Act, arguing that, of course, the market should have access to this new ‘information superhighway’, but a middle road was important. Gore’s bill recommended that ‘Further research and development, expanded educational programs, improved computer research networks, and more effective technology transfer from government to industry are necessary for the United States to reap fully the benefits of high-performance computing.’[1]

Senator Daniel Inouye argued that legislation should preserve 20% of the infrastructure for public use – libraries, education, non-profits, government agencies, and museums – to provide ‘educational, informational, cultural, civic, or charitable services directly to the public without charge for such services’. This would be modelled on the Public Broadcasting Act, which complements private media with publicly funded, advertisement-free programming for television and radio. Like roads, the information superhighway was, after all, public property.

A group was formed called the Telecommunications Policy Roundtable. Its co-founder, Jeffrey Chester, told the NYT in 1993: ‘There should be a national debate about what kind of media system we should have. The debate has been framed so far by a handful of communications giants who have been working overtime to convince the American people that the data highway will be little more than a virtual electronic shopping mall.’

It looked to most like a public-private partnership was inevitable. And Gore, with his support for a public internet, became Bill Clinton’s running mate in the 1992 presidential election.

But when Clinton took office in 1993, chairman of the Federal Reserve Alan Greenspan visited him and convinced him that the new computer network, in industry’s and banking’s hands, could revolutionise the economy – by using vast troves of data and super calculation to hedge investments, the US could enter a new era of prosperity, in which the economy would be perfectly balanced and everyone prospered, leaving behind the cycle of boom and bust.[2]

Meanwhile, Clinton, Gore, and the DNC received $120,000 in contributions from a consortium of telecommunications giants. Suddenly, Gore changed his mind about the public-private partnership. Telecoms should be deregulated, the NSFNET infrastructure should be sold off, and left to the market.

As this was happening, the NSF had subcontracted the running of the network to a consortium called MERIT.

The consortium was made up of representatives from universities, research institutions, and important computing businesses like IBM.

In 1991, MERIT began selling access to the network to more businesses.

The NSF was accused of backroom dealing and profiting from the network. So many were outraged that congressional hearings were called in 1992. Businessman William Schrader testified that it was like ‘giving a federal park to Kmart’.

But it was too late. It was decided that NSFNET would be carved up and transferred to the corporations. With the infrastructure handed over, it was officially decommissioned in 1995 with hardly a whimper of protest.

A couple of years later, in 1998, the Washington Post reported that ‘The nation’s local, long-distance and wireless phone companies have spent $166 million on legislative and regulatory lobbying since 1996 — more than the tobacco, aerospace and gambling lobbies combined’.

Top donors during the 1997-98 political season were AT&T ($1.9 million), Bell Atlantic ($1.6 million), BellSouth ($1.6 million), SBC ($1.2 million) and MCI ($964,348).

The money was donated to candidates across both parties. Democratic Representative Edward J. Markey, who was heavily involved in telecommunications, received $124,800 in donations.

But Clinton himself was the top recipient, receiving $169,000.

In all, at least a billion dollars (in today’s money) of public investment had been transferred, with little fanfare, to the private sector.

The internet is part of a long history of computing that has relied on public money – from that first computer, to the father of modern computing, Alan Turing, working on government projects during WWII, to the first IBM digital computer, the result of a Department of Defense contract during the Korean War.

Furthermore, California’s economy more broadly rests on government defence investment – in planes, missiles, and military electronics, as well as irrigation systems, highways, and universities.

The Pentagon has had a big hand in helping corporations invest in R&D in everything from jet engines to transistors, circuits, lasers and fibre optics. Today, the Pentagon works with and funds at least 600 laboratories in the US.

As Scientific American explains, ‘In truth, no private company would have been capable of developing a project like the Internet, which required years of R&D efforts spread out over scores of far-flung agencies, and which began to take off only after decades of investment.’

Meanwhile, in California, a pair of young computer hobbyists had been working on an easy-to-use programming language for the new microcomputers.

Bill Gates and Paul Allen agreed a deal with the electronics company MITS to distribute their new BASIC language with the Altair 8800 in 1975. The Altair was one of the first affordable microcomputers, advertised mostly to enthusiasts, tinkerers, and hobbyists through mail order in electronics magazines – and those hobbyists were sparking a new home computing revolution.

They joined clubs and shared ideas and software that they'd received with their mail-order hardware.

Gates was irritated by this. He believed that the only way to further popularise and innovate in computing was if programmers were paid for their software.

He wrote a widely cited letter, an ‘Open Letter to Hobbyists’ in 1976.

In it, Gates reported that less than 10% of users had actually purchased BASIC themselves, and that, added up, this reduced his and Allen's hourly compensation to less than $2 an hour. He likened sharing to stealing.

One respondent to the letter made the counterargument that this $2-an-hour figure was actually the result of poor negotiating with MITS. There was nothing wrong with selling software with computers, but there was nothing wrong with sharing it either. Despite that $2 hourly wage, Microsoft quickly became the largest company in the world. By 2000, it had a 97% share of the personal computing market.

And despite Gates's argument for strong intellectual property rights, by the 80s Microsoft, Apple and Xerox were in a race to build the first graphical interface for computers, borrowing and stealing ideas from each other with such frequency that they all became embroiled in lawsuits and counterclaims against one another. Apple argued that Microsoft had stolen the 'look and feel' of its OS, and Xerox argued that Jobs had copied its work – including the design for the mouse – on a visit to its HQ.

The judge decided that, of the 189 claims Microsoft, Apple and Xerox were making, only 10 could be upheld.

With a deal to install MS-DOS and then Windows on IBM computers, a graphical user interface taken from Apple, and a strong dislike of free, open-source, and shareware software, Microsoft quickly dominated the market.

In Geneva, a computer scientist called Tim Berners-Lee came up with an idea that combined the infrastructure of the internet developed by ARPA – which focused on sharing files and computing power – with a simple text-based delivery system.

He wrote an announcement post that said:

‘The WWW project was started to allow high energy physicists to share data, news, and documentation. We are very interested in spreading the web to other areas, and having gateway servers for other data. Collaborators welcome!’

The WWW was launched with a few dozen servers communicating with each other around the globe. One of those servers was at the University of Illinois' National Center for Supercomputing Applications (NCSA), a research department funded by the US government.

The people running the WWW servers discussed how the project would function. They needed a markup language, which became HTML, the code of the web, and an application to translate that code – what would become known as a 'web browser'. Berners-Lee worked on the very first basic browser, but a 21-year-old student at the NCSA – Marc Andreessen – decided to work on a browser that would be easy to use.

Andreessen recalls that among academics there was 'a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out'. He and his colleague Eric Bina thought that the WWW should be simple to use and able to include graphics too. Berners-Lee had disagreed, thinking it should only include text and maybe some diagrams – all that would be needed to share things like scientific papers.

But at the university, Andreessen and his colleagues worked hard on a browser that they’d release in 1993: X Mosaic.

Berners-Lee posted: 'An exciting new World-Wide Web browser has come out, written by Marc Andreessen of NCSA.'

When Mosaic was released there were a few hundred websites up and running around the world. By 1994 there were already tens of thousands. In its first 18 months, 3 million people installed the new browser.

Fortune magazine included Mosaic as a product of the year, writing ‘This software is transforming the Internet into a workable web . . . instead of an intimidating domain of nerds.’

The NCSA began taking notice of what had until then been a small project. And in 1994, Andreessen left to work on a commercial version of the Mosaic browser.

He took the same team with him, and with the help of investors started work on what they called 'Mozilla' – the 'Mosaic killer' – the world's very first internet start-up.

People magazine named Andreessen one of the year's 'most intriguing people'; Fortune included the company among its 25 coolest companies.

The company they eventually named Netscape was private, but it adopted the open practices then dominant in computer science: instead of patenting features of their browser, they wanted others to be able to copy and use those features, in the hope that they would become the standard. SSL, for example – the protocol that secures information between browser and server – was a Netscape innovation, but anyone was welcome to use it. It's now used everywhere. They wanted users to be able to add features – plugins – and, like Microsoft, they wanted to be the platform that other developers would treat as standard and build their own software for.

For these reasons they decided on a new commercial model: the browser would be free for public use – in the hope of achieving dominance, and because it aligned with the existing share-and-share-alike culture.

Technology writer Glyn Moody writes that this new logic 'introduced the idea of capturing market share by giving away free software, and then generating profits in other ways from the resulting installed base. In other words, the Mosaic Netscape release signaled the first instance of the new Internet economics that have since come to dominate the software world and beyond.'

It worked. The Mozilla browser was downloaded 6 million times in a few months. They quickly achieved a 90% market share while also selling a corporate version with technical support for $39.

When the University of Illinois found out about Mozilla, it threatened to sue, claiming that Andreessen and his team had stolen the university's code.

Jon Mittelhauser, one of the developers, replied: 'We didn't want to take any of [the old Mosaic] code, that's the thing! We wanted to start from scratch. We wanted to do it right.'

However, Mozilla ended up paying the university $2.2 million in damages, and agreed to change the name from Mozilla to Netscape.

When Netscape IPO'd in 1995 its share value quickly tripled. Andreessen featured on the cover of Time magazine, and the Wall Street Journal wrote: 'It took General Dynamics Corp. 43 years to become a corporation worth $2.7 billion... It took Netscape Communications Corp. about a minute.'

Within 18 months it had 38 million users and hit $533 million in revenue in 1997.

Meanwhile, Bill Gates was sceptical about the internet. He knew that eventually technology would be advanced enough to play video and interact between households, but he didn't think anyone would want to sit around a computer at a desk, believing instead that the television set would be the medium at the centre of the shift.

When he saw how popular Netscape was, he quickly U-turned.

He licensed the original Mosaic source code from the University of Illinois, copied it, and quickly rolled out the first version of Microsoft's own browser, Internet Explorer, in 1995.

The difference: it was completely free.

Gates said: 'One thing to remember about Microsoft: we don't need to make any revenue from Internet software.'

The launch of Windows 95 was huge – the Empire State Building was lit up in Windows colours and the launch commercial featured the Rolling Stones. And the new operating system came pre-loaded with Internet Explorer.

Steve Jobs said that if Microsoft's competitors couldn't keep up, 'in the next two years, Microsoft will own the Web. And that will be the end of it.'

Microsoft increased its efforts: its browser department grew from 6 to 1,000 employees by 1999. It required manufacturers to install Internet Explorer if it wasn't already installed, and included a line in its contracts banning 'modifying or deleting any part of Windows 95, including Internet Explorer, prior to shipment'.

Netscape, in a desperate bid for survival, tried to deal directly with computer manufacturers like Compaq in an attempt to replace Internet Explorer with Netscape before the units were shipped.

Microsoft responded by threatening Compaq with a lawsuit and a ban from using Windows. Compaq immediately backed down.

In 1995, 90% of Netscape's revenue came from licensing its browser; by 1997, squeezed out by Microsoft's monopoly position, that figure was below 20%.

Netscape wrote a letter to the US DoJ complaining that Microsoft was using its position to push Netscape out of the market by preventing them from dealing with manufacturers.

In the first quarter of 1998, Netscape reported a loss for the first time and Internet Explorer took the top spot in user share.

In May, 20 states and the DoJ filed antitrust lawsuits against Microsoft.

The trial went on for several years and in June 2001 Microsoft was found guilty of monopolistic practices.

Judge Thomas Jackson ruled that Microsoft had 'maintained its monopoly power by anticompetitive means and attempted to monopolize the Web browser market.'

An allergy towards monopoly had always been central to the philosophical culture of the United States. Early settlers had been fleeing the twin monopolies of church and state. In England, government-granted monopolies were the norm. Thomas Paine said that England was 'cut up into monopolies.'

Historian Christopher Hill writes that a 17th-century Englishman was 'living in a house built with monopoly bricks, with windows (if any) of monopoly glass; heated by monopoly coal (in Ireland monopoly timber), burning in a grate made of monopoly iron... He slept on monopoly feathers, did his hair with monopoly brushes and monopoly combs. He washed himself with monopoly soap, his clothes in monopoly starch. He dressed in monopoly lace, monopoly linen, monopoly leather, monopoly gold thread.' His clothes 'were held up by monopoly belts, monopoly buttons, monopoly pins'; his food was 'seasoned with monopoly salt, monopoly pepper, monopoly vinegar.' And mice 'were caught in monopoly mousetraps.'

As far back as 1641, Massachusetts law declared that: “No monopolies shall be granted or allowed amongst us, but of such new Inventions that are profitable to the Countrie, and that for a short time.”

The American Revolution had begun as a revolt against the monopoly power of the East India Company, with protestors throwing their tea into Boston harbour.

Maryland's first constitution, in 1776, said: 'That monopolies are odious, contrary to the spirit of a free government, and the principles of commerce; and ought not to be suffered.' The Carolinas' constitutions said much the same.

The anti-monopolistic culture continued into the 19th century, when the Robber Barons were attempting to monopolise the oil, railroad, and telegraph industries while repressing strikes and unions, reducing wages, and artificially inflating prices.

In response, the Sherman Act was introduced in 1890.

The first section prohibits ‘every contract, combination …, or conspiracy, in restraint of trade or commerce…’

The second section makes it illegal for any person or firm to 'monopolize … any part of the trade or commerce among the several States, or with foreign nations.'

Judge Jackson decided that Microsoft had violated the first section by forcing manufacturers to include Internet Explorer, and the second by using its market dominance to force a monopoly in browsers.

The judge concluded that Microsoft ‘used incentives and threats’ to push manufacturers to adopt ‘distributional, promotional and technical efforts’ that would favour Internet Explorer at the expense of Netscape Navigator.

He suggested that Microsoft be broken up into two companies: one that would run its operating systems and another that would run its software.

Of course, this never happened. When the Bush administration took office in 2001, the verdict was appealed and reversed. Instead, Microsoft made a deal that required it to open its APIs – application programming interfaces – and protocols, and to refrain from monopolistic practices. Opening its APIs meant that third-party developers could more effectively design software to work with the nuts and bolts of Windows – effectively opening the bonnet and allowing applications to interface with all the components, with the intention of 'opening up' Microsoft and allowing greater competition and innovation.

It also had to agree to 'consent decrees' that prohibited it from retaliating against any manufacturer who 'develops, distributes, promotes, uses, sells, or licenses any non-Microsoft' software.

On the one hand, this was a win for Microsoft. On the other, Microsoft was distracted while new and unexpected competitors grew to challenge their dominance. And the appeal still meant Microsoft had to ‘play nice’, giving ammunition to its rivals that could be used against the company. It meant Microsoft had to consider legal repercussions before they acted in the future.

Without the trial, the internet could have gone a different way. Dominated by Microsoft, it could have developed into a kind of walled garden like AOL: an internet built into the architecture of Windows itself, run on some kind of grotesque Microsoft software. And forcing Microsoft to open those APIs – what's called interoperability – is something that could be useful for some of the challenges we'll come to later.

While Microsoft was distracted, AOL was staging a coup. It had found success offering a service that dialed into the privatised internet infrastructure. You entered a web portal that acted as a directory of websites, news, chat, email, weather, and other categories.

Just before the trial, Gates had approached AOL CEO Steve Case. He said: 'I can buy 20 percent of you or I can buy all of you. Or I can go into this business myself and bury you.'

AOL chose to fight. Like Netscape, it knew that market saturation was key. The company spent a quarter of a million dollars on AOL trial discs to give away with magazines. It worked. The conversion rate was an unheard-of 10%, and suddenly AOL was everywhere.

Jan Brandt, the marketing executive who ran the campaign, recalled watching users 'taking the disc, putting it into the computer, signing up, and giving us a credit card. When I saw that, honestly, it was better than sex.'

In his history of the era, McCullough writes: 'AOL discs began arriving in Americans' mailboxes seemingly daily. Almost every computer maker shipped an AOL disc with a new computer. There were AOL discs given away with movie rentals at Blockbuster. There were AOL discs left on seats at football games. At one point, Brandt even tested whether or not discs could survive flash freezing so that she could give away AOL discs with Omaha Steaks.'

Incredibly, at one point half of all the CDs being manufactured were for AOL. AOL's customer base tripled in a year.

Microsoft countered with its own web portal called MSN.

But AOL dominated. It was the internet.

Like Microsoft, AOL used its dominance to squeeze out or buy out its competitors. It was so lucrative to be in AOL’s web portal that companies handed over millions of dollars.

The health website Dr Koop IPO'd and raised $85 million, spending all of it on a four-year contract with AOL to provide its users with health content.

Tel-Save paid AOL $100 million, Barnes & Noble $40 million, Amazon $19 million, eBay $75 million – all to be on the home portal, the most coveted position on the net's high street, a shop in the digital Times Square.

One businessperson recalled that AOL demanded 30% of her company, 'and then for good measure they tell us, "These are our terms. You have 24 hours to respond, and if you don't, screw you, we'll go to your competitor."'

AOL's share price rocketed by 80,000% through the 90s, but even at its height, cable companies were starting to develop faster and cheaper ways of connecting to the internet. AOL and Microsoft imagined themselves as walled gardens, in control of and masters over their own cyberspace, desperate to keep the hordes of invaders and competitors out. It wasn't to last.

“The online world is not truly bound by terrestrial laws… it’s the world’s largest ungoverned space.” – Eric Schmidt and Jared Cohen, Google

Like the settlers arriving in America, many imagined cyberspace as empty: a blank canvas, a state of nature, an ethereal region where civilisation could start again, free of corrosive power, corporate donations, and corrupt politics – where a new type of democracy and freedom could flourish. As a blank slate it was imagined as lawless, but, like precolonial America, that was theorised as a good thing, because new, better ideas could be written into it.

In 1995, two media scholars, Richard Barbrook and Andy Cameron, wrote an influential article about what they saw emerging in Silicon Valley. They called it the Californian Ideology.

Barbrook and Cameron argued that this new ideology was the product of ‘a loose alliance of writers, hackers, capitalists and artists from the West Coast of the USA’

It was a ‘bizarre fusion’ of cultural bohemianism with ‘high tech industry’, a combination of ‘the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies.’

The Californian Ideologists imagined a new world, made possible through computers, where everyone would be hip, rich, and free.

This best-of-all-worlds ideology arose from two places. The first was a liberal, progressive one that rejected the narrow confines of conformist postwar corporate American values: they celebrated difference in everything – gay rights, feminism, literature, music, and recreational drug use – and despised the old forms of power and repression that kept the inner self locked away.

They believed that the internet could bring about a new radical democracy, where people were free to organise and express themselves in new ways, directly voting on the issues that affected them. They shared software and computer parts in home-brew computer clubs. They were a new class – a 'virtual class'.

But as they built businesses and joined companies, they came into contact with traditional businessmen, with venture capitalists and financiers. Through them, they adopted the idea that they should be left alone – free from all regulation, free from interfering governments – to impress their vision on the blank canvas of cyberspace, regulated only by the invisible hand of the market.

Barbrook and Cameron ask: 'will the advent of hypermedia realise the utopias of either the New Left or the New Right? As a hybrid faith, the Californian Ideology happily answers this conundrum by believing in both visions at the same time – and by not criticising either of them.'

Left and right ideology combined into support for a new Jeffersonian Democracy – small direct democracies where everyone owns property and everyone votes and each is involved in the running of their own lives.

The problem – and this was a fundamental line from their 1995 article – was that the hippies 'cannot challenge the primacy of the marketplace over their lives'.

‘By mixing New Left and New Right, the Californian Ideology provides a mystical resolution of the contradictory attitudes held by members of the ‘virtual class’’

But, they said, hidden away from Silicon Valley – in Chinese factories, in the mining of materials in developing countries, in right-wing politics hostile to unions, in the weakening of welfare – a new underclass was emerging: 'The deprived only participate in the information age by providing cheap non-unionised labour for the unhealthy factories of the Silicon Valley chip manufacturers.'

They warned that 'the technologies of freedom are turning into the machines of dominance.'

One of the believers in this new digital utopia was Pierre Omidyar. In the early years of Silicon Valley, Omidyar had co-founded a pen-computing startup that was sold to Microsoft for $50 million in 1996.

Omidyar started working on a new libertarian platform that he believed would revolutionise commerce – allowing people to find exactly what they wanted and letting the invisible hand of the market perfectly calibrate the price, all without the need for third party interference.

He believed that if you removed any interference, you could create the perfect marketplace online.

His idea was an auction website. He said: 'If there's more than one person interested [in an item], let them fight it out. The seller would by definition get the market price for the item, whatever that might be on a particular day.'

He launched AuctionWeb, which he later renamed eBay, and initially it was completely free from interference – there was nothing: no payment function, no ratings, no regulatory infrastructure. Just buyers connected to sellers, who would complete the details of the transaction on their own.

From the beginning the site was a success, but investors were wary. What did AuctionWeb do? It had no goods and provided no real service. One investor said: 'They don't own anything. They don't have any buildings, they don't have any trucks.'

Omidyar knew that self-regulation from the community was key.

He encouraged its users to talk to one another, share advice, and, eventually, leave reviews.

Many of the new companies that would soon go bust in the dot-com crash of the late 90s were essentially traditional bricks-and-mortar businesses moved online: Pets.com, sofa companies, even early grocery delivery services. At the turn of the new millennium, hundreds of these companies went bankrupt, but eBay continued to grow.

It was a new type of service – a pure middleman, a simple 'platform' – groundwork for a community of buyers and sellers, without any content of its own: no stock, no warehouses.

McCullough writes that the market turned out to be imperfect – ‘disputes broke out between buyers and sellers, and Omidyar was frequently called upon to adjudicate. He didn’t want to have to play referee, so he came up with a way to help users work it out themselves: a forum. People would leave feedback on one another, creating a kind of scoring system. “Give praise where it is due,” he said in a letter posted to the site, “make complaints where appropriate.”’
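The feedback system McCullough describes can be pictured as a simple running tally: each piece of praise adds to a user's score, each complaint subtracts from it. Here's a minimal sketch of that idea – the class and names are hypothetical illustrations, not eBay's actual implementation.

```python
from collections import defaultdict

class FeedbackForum:
    """Toy version of an eBay-style feedback score: praise adds a
    point, a complaint subtracts one, and the running total becomes
    a rough signal of a trader's reputation."""

    def __init__(self):
        self.scores = defaultdict(int)  # unknown users start at 0

    def leave_feedback(self, user, positive):
        self.scores[user] += 1 if positive else -1

    def score(self, user):
        return self.scores[user]

forum = FeedbackForum()
forum.leave_feedback("seller42", positive=True)
forum.leave_feedback("seller42", positive=True)
forum.leave_feedback("seller42", positive=False)
print(forum.score("seller42"))  # 1
```

Even this crude mechanism illustrates Omidyar's discovery: the 'self-regulating' market still needed rules and record-keeping, written into the platform itself.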

Omidyar had come across a strange new dynamic: ebay was only possible because of the contributions of its users, but without some kind of adjudication, the platform would collapse.

In his book Internet for the People, technology writer Ben Tarnoff writes: 'AuctionWeb was not only a middleman. It was also a legislator and an architect, writing the rules for how people could interact and designing the spaces where they did so. This wasn't in Omidyar's plan. He initially wanted a market run by its members, an ideal formed by his libertarian beliefs.'

Anyone taking notice of eBay might have predicted both that being a platform was the future, and that the Californian Ideology was founded on faulty logic. Schmidt and Cohen's words – 'The online world is not truly bound by terrestrial laws… it's the world's largest ungoverned space' – turned out to be a misconception. There was no blank canvas. The problems of the offline world were being shifted online, and new issues were emerging with them.

Political scientist James Muldoon writes that 'platform owners claim their products are neutral spaces and that they merely provide an intermediary service to connect parties. This is only half true. Through the design and architecture of the platform, software developers play an active role not only in connecting parties but in shaping the conditions in which they operate.'

The virtual Mall/Big Data/Middlemen/Rise of the Platform (Napster, Yahoo, Google)

The first advert on the internet may have been a 1994 banner on hotwired.com for AT&T. It had an enormous click-through rate – people clicked it just to see where it went. Ads that you could click on were a novelty.

AOL and eBay had proven something: online business was not about selling, but connecting. And it was Yahoo that first realised that capturing audiences and keeping their attention – so that you could serve them more ads – was the key to growing revenue.

Yahoo started as a project by two Stanford students – 'Jerry and David's Guide to the World Wide Web'. They listed their favourite websites, and as the directory caught on it became the most popular homepage on the internet. But they had no way of making money.

Surely no one would accept ads polluting the directory? But with no other way of making money, they decided tentatively to try.

When the first ads went live, Chief Product Officer Tim Brady recalls, 'The email box was immediately flooded with people badmouthing us and telling us to take it off. "What are you doing? You're ruining the net!"'

But traffic to the site remained stable, the ads were begrudgingly accepted, and Yahoo started adding sections to attract more interests: stocks, horoscopes, film, television, travel, weather. The key was to match subgroups with ad groups. One executive described it as a land grab.

Jerry Yang said “We began with simple searching, and that’s still a big hit—our Seinfeld if you will —but we’ve also tried to develop a must-see-TV lineup: Yahoo Finance, Yahoo Chat, Yahoo Mail. We think of ourselves as a media network these days.”

One Wall Street analyst told BusinessWeek: 'You have to look at it [Yahoo] as the new media company of the 21st century.'

Meanwhile, looking at Yahoo’s increasingly complicated directory, two computer scientists – Larry Page and Sergey Brin – were realising that there had to be a better way to organise the internet.

Instead of humans deciding which websites should sit in which categories and in which order – as AOL and Yahoo were doing – an algorithm should do it.

The algorithm was deceptively simple: any link from one website to another was counted as a vote for the linked site. The more links a website had pointing to it, the more popular it was judged to be, and the higher up the search rankings it was placed.
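The link-as-vote idea can be sketched in a few lines. This is a deliberately simplified illustration of the intuition described above – the full PageRank algorithm also weights each vote by the rank of the page casting it, computed iteratively – and the site names are made up for the example.

```python
from collections import defaultdict

def rank_by_inbound_links(links):
    """Rank pages by counting each inbound link as a 'vote'.
    `links` maps each page to the set of pages it links to."""
    votes = defaultdict(int)
    for source, targets in links.items():
        votes[source] += 0          # make sure every page appears
        for target in targets:
            votes[target] += 1      # one inbound link = one vote
    # Most-voted-for pages come first in the rankings
    return sorted(votes, key=votes.get, reverse=True)

web = {
    "a.com": {"c.com"},
    "b.com": {"c.com", "a.com"},
    "c.com": set(),
}
print(rank_by_inbound_links(web))  # ['c.com', 'a.com', 'b.com']
```

Here c.com ranks first because two sites link to it; nobody links to b.com, so it comes last. Google's insight was that the web's own structure, rather than human editors, could supply this ranking signal.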

It was straightforward but innovative. The journalist David Kirkpatrick wrote an article in Fortune magazine in 1999 in which he typed 'New York Yankees 1999 playoffs' into both Google and AltaVista.

He reported that 'The first listing at Google took me directly to data about that night's game. The first two at AltaVista linked to info about the 1998 World Series.'

On AltaVista, he had to click on the third link down, then click another link, to find the results of the game. He wrote: 'Google really works.'

But like Yahoo, Google struggled to devise a business plan. Page and Brin hated advertising and instead decided on a licensing model. They signed deals with Yahoo and AOL to be ‘powered by Google’.

While Yahoo saw the power of Google’s innovation, they insisted it was a small part of their service. The main directory would always be human curated, as humans were more trustworthy than algorithms.

When Yahoo realised that Google was the future, it tried to buy the company for $3 billion. Google rejected the offer, Yahoo cancelled the licence, and Google went to work on a new model, realising the search engine had to make money on its own.

But Page and Brin were academics. They still believed in an 'online world not truly bound by terrestrial laws', and they thought their search engine should be scientific – not influenced by offline money.

At a conference in 1998, Brin and Page delivered a paper that noted: 'We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers. This type of bias is very difficult to detect but could still have a significant effect on the market… we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.'

Page and Brin were beginning to realise that markets could distort the utopian vision.

Around the same time Google was taking off, Shawn Fanning – a young student programmer and hacker – became fascinated by a new type of sound file: small, compressed, and quick to download, it was called the MP3. College students were sharing MP3s on Usenet newsgroups across campuses.

A 19-year-old student, Justin Frankel, had designed a program called Winamp that let you organise and play your MP3s, but Fanning wanted an easier way to share them between friends.

He came up with a file-sharing application called Napster. When he released it in 1999, it was downloaded 10 million times in less than a year. 73% of college students were using it.

Napster was on the cover of Rolling Stone and Time, but some, including Metallica's Lars Ulrich, were outraged that their music could be shared for free. Ulrich hand-delivered a list of 300,000 usernames of people who had 'pirated' Metallica's music to Napster's office. Napster organised a counterprotest for the same day, and protestors shouted 'Fuck you, Lars, it's our music too!'

Fanning argued that Napster was just a middleman, like eBay, but the Recording Industry Association of America filed a lawsuit. The publicity led to a surge of interest in Napster.

The courts decided that Napster had to block copyrighted material or shut down. Fanning tried to implement a filtering system, but it failed, and after just a few years the company went bankrupt, in 2002.

In reality, Fanning claimed he never wanted to give away music for free. He believed artists needed to be paid. He wanted Napster to grow quickly, prove the concept, and then make a deal with the record companies.

McCullough writes that ‘in retrospect, there is no shortage of people, even inside the music industry, who imagine how different the world would be if it had worked out that way—if the music companies had partnered with Napster and accepted the inevitability of technology’

Sean Parker – Napster’s cofounder – predicted at the time that “Music will be ubiquitous and we believe you’ll be able to get it on your cell phone, you’ll be able to get it on your stereo, you’ll be able to get it on whatever the device of the future is. And . . . I think people are willing to pay for convenience.”

LimeWire, BearShare, and others tried to take Napster's place, but, forced underground, none were as successful.

The music industry were at the peak of their power, having spent decades reissuing old vinyl albums on expensive CDs and making use of new outlets like MTV to promote new artists.

Early eBay, Google and Napster had something in common: they all thought it was possible to create a platform that was self-balancing – where the supply of information perfectly met demand – a self-sustaining, rational system, crucially free from the interference of offline influences. But they all came up against pressure from the physical world.

Back at the Googleplex, investors were starting to demand profits. After the dot-com crash, no one was thought to be safe. It seemed you could have a winning model one day and disappear the next. People wondered whether Google could turn its innovation into a profitable company.

And so Page and Brin acquiesced to the advertising model. The results were unheard of.

In 2003, Google made $500 million in revenue. Within ten years, it had revenues of over $50 billion.

With Google AdWords, McCullough writes, 'Google was able to achieve something amazing: it made the Internet profitable at scale and for the first time.'

Between 2002 and 2006, the amount US advertisers spent online tripled.

The Theft of Privacy (Surveillance Capitalism)

As John Wanamaker famously put it: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.”

“Maps created Empire.” – John B. Hartley

The idea of privacy has had a chequered history. But in the modern period, it has often been thought of as sacrosanct. That an Englishman’s home was his castle – the idea of rights grew from the idea that some parts of us were inviolable, off-limits, private.

Some theorists have argued that having a private place, off-limits to infringement, is crucial to thinking-through, formulating, and debating the ideas necessary in a deliberative democracy.

Some early radio stations shunned not only advertising but even talking on air – surely no one would accept that naked invasion of privacy, a stranger talking in their own home; people would prefer to talk amongst themselves anyway.

In 2000, a group of scientists at the Georgia Institute of Technology worked on a project called “Aware Home”. It aimed to study something they called ‘ubiquitous computing’: a home full of different types of sensors designed to predict the needs of the inhabitants and make their lives easier. And, the researchers naturally assumed, the data would belong only to the people who lived in the house – its sole purpose was to make their lives easier.

Meanwhile, Google was also realising that it was in the data business. If its job was to predict what people wanted, the more data it had about people, the more effective it would be.

Google’s mission statement was to “organize the world’s information and make it universally accessible and useful.”

This applied to both organic search and advertising. To match companies to consumers, and to make advertising more efficient and relevant, Google needed to expand how it captured data.

Google’s chief economist Hal Varian explained that the Google model was based on what they called ‘data extraction and analysis’ through better monitoring, personalisation, customisation, and continuous experiments – this was the key to big data and better prediction.

Larry Page had said that the problem with Google was that you had to ask it questions. It should know.

Varian said that each user left behind them a trail of breadcrumbs that Google euphemistically called ‘data exhaust.’ He wrote that “Every action a user performs is considered a signal to be analyzed and fed back into the system.”

In 2001, Page said that “Sensors are really cheap.… Storage is cheap. Cameras are cheap. People will generate enormous amounts of data.… Everything you’ve ever heard or seen or experienced will become searchable. Your whole life will be searchable.”

Search, Google’s primary product, they realised, was about predicting what a person wanted at any given moment, before they searched for it, and the clues to what the user wanted were often found away from what they typed in a search box.

Google wanted to know what users wanted before they did.

What they wanted might vary between geographic locations, with the weather, depending on who they were talking to, what their past searches were, what mood they were in, what they were doing at the time, and so Google began investing in collecting all of this ‘data exhaust.’

Google invested in email, mapping, and street-view cameras; it built free word processors, spreadsheets, calendars, travel tools, photo storage, home speakers and thermostats – anything that could tell it something about its users’ interactions with the world.

They started placing cookies on users’ computers that could track them across third-party websites. In 2015, one study found that if you visited the 100 most popular websites on the internet you would collect 6,000 cookies tracking your online journey. Google had cookies on 92 of those 100 sites, and on 923 of the top 1,000 sites. The study concluded that “Google’s ability to track users on popular websites is unparalleled, and it approaches the level of surveillance that only an Internet Service Provider can achieve.”
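The mechanics behind that finding can be sketched in a few lines: a tracker embedded on many sites can read its own cookie on each of them, and so stitch together a visitor’s path. This is a toy illustration with hypothetical domain names, not the study’s actual methodology:

```python
from collections import defaultdict

# Hypothetical data: which third-party trackers each site embeds.
sites = {
    "news.example":  {"tracker-a.example", "tracker-b.example"},
    "shop.example":  {"tracker-a.example"},
    "video.example": {"tracker-a.example", "tracker-c.example"},
}

# A tracker embedded on a site can set and read its own cookie there,
# so its "coverage" is the set of sites on which it can observe you.
coverage = defaultdict(set)
for site, trackers in sites.items():
    for tracker in trackers:
        coverage[tracker].add(site)

# tracker-a.example appears on all three sites, so it alone can
# reconstruct a visitor's journey across them.
for tracker in sorted(coverage):
    print(tracker, "sees", len(coverage[tracker]), "of", len(sites), "sites")
```

A tracker present on 92 of the top 100 sites, as the study found of Google, would have near-complete coverage of an ordinary browsing session.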

They use hundreds of signals to predict what a user wants to see at any moment.

One study found that a single Google Nest device connects to so many other services and products that a user would have to read through almost 1,000 terms-and-conditions and privacy-policy documents. Another found it would take 76 days to read all the policies that affect us.

Google purchased a satellite imagery company called Skybox, whose images were so detailed that, if you were outside, it could see exactly what was on your desk. Google also put street-view cameras on backpacks, snowmobiles and boats to capture places that couldn’t be reached by car, and started a project called Ground Truth that used ‘deep mapping’ to analyse what it called a ‘logic of places’: the paths people took, the ponds they sat at, what the traffic was like, which parks people used, which buildings were entered at which times, which areas were busier.

The Street View cars collected data from open wifi networks as they went around the world, including data from people’s homes.

A long list of countries, from the UK to Japan, complained. In Germany, residents could require that their homes be blurred out.

Philosopher Shoshana Zuboff argues that we live in a new economic order that ‘claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales.’ This is the foundation of a surveillance economy in which our privacy is progressively invaded through a flood of new and smaller sensors, cameras, microphones, devices attached to dishwashers and ovens, smart speakers and doorbells, in shopping malls and shops, restaurants and stadiums.

Tarnoff compares data in the 21st century to coal in the 19th – it powers a revolutionary change in the economy.

Data becomes the raw material that can be transformed into an asset to sell to advertisers: smart beds that track sleep position so we can be sold different bed sheets, new running shoes after a slow run, vacations after a stressful day, bad news when we’re angry and celebrity gossip when we’re bored; click patterns are measured, microphones listen for minuscule details, and food choices are used to predict health patterns.

Zuboff argues that surveillance capitalism is always searching for new ‘supply routes’ – new devices and new ways of collecting new types of data about our lives.

Zuboff says that this manifests in a logic of extension – outwards into as many areas of life as possible: your fridge, blender, dishwasher, bloodstream – and a logic of depth – downwards into richer, more accurate and more detailed measurements of your emotions, personality, sweat levels, temperature and hormone levels, through ‘digestible sensors’, or correlations between things like mood and music or film choices, or the inflection and decibel levels of your voice during certain conversations.

All of this measurement is in order to predict how we might act in the future, and, when combined with advertising or the delivery of news or information to click on, it is about subtly nudging us into action.

Our present state is unceasingly measured so that future attention can be grabbed. Surveillance assets act as the raw material for digital fortune telling.

According to Zuboff, the tentacles of surveillance capitalism ‘nudge, coax, tune, and herd behaviour’

Targeting our desires is like casting a net, leaving a hook in the right place, with just enough bait, with the right wording, the right photo, at the right time, in the right location, while under the influence of the right emotion, to nudge us towards the highest bidder for our attention. Zuboff says that ‘users were no longer ends in themselves but rather became the means to others’ ends.’

When Mark Zuckerberg started Facebook, he quickly realised that the social graph was a trove of useful data. He became obsessed with measuring it.

He saw something that Myspace hadn’t – and that Friendster hadn’t been able to capitalise on – that we were fascinated by those closest to us.

Zuckerberg carefully studied what students were calling the ‘facebook trance’ – the addictive state of endlessly clicking through your friends’ profiles. Students were obsessed with finding out what was new, but had to click through each profile to find out.

Facebook designed an innovative solution: the news feed – instead of clicking on each profile, the changes or updates would be delivered in one long feed on the home page.

The task was complex. They had to design elaborate code to pull a unique news feed for each person, depending on their list of contacts.

And Facebook quickly realised that ordering the feed chronologically wasn’t optimal. Instead, a person should be shown the updates they’d be most interested in – the friends they actually interacted with, the ones who were most popular, or the ones they didn’t look at but might be interested in.
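The shift from a chronological feed to an interest-ranked one amounts to swapping the sort key. A minimal sketch of the idea – the field names and scoring rule here are hypothetical illustrations, not Facebook’s actual code:

```python
# Toy ranked feed: score each update by how often the viewer interacts
# with its author, breaking ties by recency (newest first).
# Purely illustrative - not Facebook's real ranking algorithm.

def rank_feed(updates, interactions):
    """Sort updates so the most-interacted-with authors come first,
    newest first among equals."""
    return sorted(
        updates,
        key=lambda u: (interactions.get(u["author"], 0), u["time"]),
        reverse=True,
    )

updates = [
    {"author": "alice", "time": 3, "text": "posted a photo"},
    {"author": "bob",   "time": 2, "text": "updated their status"},
    {"author": "carol", "time": 1, "text": "joined a group"},
]
interactions = {"bob": 12, "carol": 5}  # viewer's past clicks per friend

# A chronological feed would show alice first; the ranked feed leads with bob.
for update in rank_feed(updates, interactions):
    print(update["author"], "-", update["text"])
```

Replace the simple interaction count with hundreds of signals and you have the flavour of the personalised feed described above.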

In just a few years the news feed would be so ubiquitous that in retrospect it looks obvious and inevitable, but at the time of release it was hated. As McCullough recounts, Facebook employees were ‘deluged with messages of pure outrage’, with only 1 in 100 posts about it being positive, and most complaining about privacy.

A group called Students Against Facebook News Feed quickly attracted 700,000 members.

One complaint read: ‘Very few of us want everyone automatically knowing what we update, news feed is just too creepy, too stalkereque [sic], and a feature that has to go.’

College newspapers had headlines like “Facebook is watching you,” “Furious with Facebook” and “Facebook fumbles with changes.”

An online petition quickly attracted thousands of signatures.

Facebook was worried. Zuckerberg replied personally, saying “Calm Down. Breathe. We Hear You.” Moments like this had killed multiple sites that seemed invincible: Digg lost its users to Reddit and soon disappeared after it made an unpopular algorithm change. Facebook almost took the feature down, but Zuckerberg realised something was happening.

While people were publicly outraged, in private they were still using the feed, and they were using it more and more.

He decided to keep it, and it gave Facebook new data to analyse – but the company still needed to expand its ‘supply routes’.

In 2010, Zuckerberg added a simple new method of harvesting data: the like button.

Almost immediately, millions of people were sending millions of data points about what they liked – and, by omission, what they disliked – to Silicon Valley.

Soon, Facebook users were liking over 4 million posts each minute. Zuckerberg bragged about what Facebook could predict about its users.

In one interview, Zuckerberg said that he and his friends could figure out from the data who would be in a relationship within a week, and get it right one in three times (46:00 – James W. Breyer and Mark E. Zuckerberg Interview, Oct. 26, 2005, Stanford University – YouTube).

Like Google, Facebook began buying and investing in anything that could harvest data – through the logic of extension and the logic of depth – and began using it to sell advertising. Facebook bought Instagram and WhatsApp, and now the Oculus headset can track your facial expressions.

Amazon, Microsoft, and many others all followed Google and Facebook’s lead.

Microsoft picked up Skype for $8.5 billion and LinkedIn for $26 billion, and started tracking more data in Windows.

Microsoft’s Satori program started collecting 28,000 DVDs’ worth of data every day.

The project’s senior director said: ‘It’s mind-blowing how much data we have captured over the last couple of years. The line would extend to Venus and you would still have 7 trillion pixels left over.’

Amazon started carefully analysing all the businesses that listed on its site, recording data on sales, shipping and marketing, and monitoring which products did well – then it would copy the best sellers and make its own cheaper duplicates.

FTC chair Lina Khan said Amazon is a ‘petri dish’ through which “independent firms undertake the initial risks of bringing products to market and Amazon gets to reap from their insights, often at their expense.”

Former Amazon Exec James Thomson said: “They happen to sell products but they are a data company.”

Google, Microsoft and Amazon’s move into cloud storage – where users could upload any information they needed – was the natural next step for companies looking for more and more data.

Other smaller surveillance capitalists arrived on the scene too.

Score Assured in the UK offers a service that scans your social media accounts so landlords can predict risky tenants; start-ups like LendUp look at your social media to determine your creditworthiness; and hiQ looks at job candidates’ social media profiles “to pinpoint with laser-like accuracy the employees that are highest risk.…”

As data everywhere was hoovered up in pursuit of profit, it was a race to the bottom… of data.

The Theft of the political community/ public space

This race to uncover and harvest more lines of surveillance data, the search for ever more lucrative ‘supply routes’ of personal information – extending outwards into more areas of our lives and deeper into richer measurements of data, isn’t just a natural progression of the technology. It happened in a context.

In the early 90s, the internet was still largely thought of as a space for information, one that had grown out of academia. But in 1993, the activist Jeffrey Chester was warning that it would become a ‘privately owned public space’, a ‘virtual electronic shopping mall’, as corporate interests circled.

The old NSFNET infrastructure had been decommissioned, and throughout the 90s internet access – first through telephone lines, then through cable – was becoming dominated by the telecommunications giants: AT&T, Sprint and Verizon.

Western Union had dominated telegraph communications in the late 19th century, buying up over 500 competitors to gain a near monopoly. It then began abusing that position, keeping competitors out of the market and prioritizing its own associates over others.

As the anti-trust movement grew, those fighting the monopolies weren’t just concerned about anti-competitive practices, but what it meant to have single men like Robber Baron Jay Gould in control of almost the totality of infrastructure that was essential to the security of the country.

In response, Congress passed non-discrimination rules. No single company should have a say over the entire network, and telecommunications and railways had to abide by what was called ‘common carriage’: they had to treat anyone wanting to access the network equally.

Fast forward to the 1990s, and the same rules were applied to cable providers. Networks had to grant access and couldn’t treat their own partners and affiliates with favouritism.

One judge said that “assuring that the public has access to a multiplicity of information sources is a governmental purpose of the highest order, for it promotes values central to the First Amendment.”

But throughout the 80s and 90s this changed. As the Soviet Union collapsed, market liberalism triumphed, neoliberalism was at its peak, and government interference of any kind was challenged.

In 2002, the Bush administration repealed the ‘must carry’ rules, and the cable giants began refusing access to smaller internet providers across the country, who quickly went out of business.

And across the US, telecommunications companies lobby local governments to prevent new networks. Municipal broadband is banned in 18 states, and big telecoms sign agreements with local authorities that often prohibit using new ISPs. Net neutrality – the idea that data sent across networks should be treated equally – is being challenged too, with corporations paying for preferential treatment so their own data travels faster than competitors’. Big tech giants like Facebook, Google and Microsoft do deals with the telecoms giants to send their own traffic at quicker speeds – essentially ‘fast lanes’ for the rich.

This shift towards digital laissez-faire coincided with another shift. After 9/11, security became more important than privacy. The Bush administration passed the Patriot Act, radically expanding the powers of the state to monitor citizens and gather data around the world.

Using 9/11 as justification, the surveillance state grew at the same time as Google was starting to expand its own surveillance practices.

Peter Swire, a privacy expert in the Obama administration, said that “With the attacks of September 11, 2001, everything changed. The new focus was overwhelmingly on security rather than privacy.”

John Poindexter, who led the Pentagon’s Information Awareness Office, proposed a program called Total Information Awareness (TIA) that could pick out signals to predict and stop future attacks.

With that change, politicians seemed to lose interest in regulating big tech companies. Google and the NSA, for example, announced a partnership under which Google would provide a “search appliance capable of searching 15 million documents in twenty-four languages.”

The NSA director wrote that ‘An effective partnership with the private sector must be formed so information can move quickly back and forth from public to private and classified to unclassified… to protect the nation’s critical infrastructure.’

A new techno-industrial-military-complex was born.

Google spent more time in Washington, spending more money on lobbying.

The Washington Post called Google the “master of Washington influence”

While The New York Times ran a story saying that “Google is very aggressive in throwing its money around Washington and Brussels, and then pulling strings. People are so afraid of Google now.”

The neoliberal-techno-industrial-military-complex provided the context for corporations to spend vast sums of money expanding their operations and influencing both politicians and the public through PR that claimed that what they were doing was good for the ‘community.’

As the social fabric was shredded, as Margaret Thatcher made her famous claim that there was no such thing as society, increasing numbers became isolated and atomised. Memberships to churches, unions, and even bowling clubs declined, and big tech claimed that they were rebuilding that sacred lost space – the public, the communal, the common.

Brian Chesky, CEO of AirBnB, said that ‘Airbnb started really as a community, probably even more than a business. It became a business to scale the community. But the point is that when it became a business it never stopped becoming a community.’

Its slogan was ‘the world’s largest community-driven hospitality company’.

Global Head of Community Douglas Atkin declared that ‘Airbnb and its community wants to create a world where Anyone can Belong Anywhere.’

But across the world, Airbnb was up against real community regulations, often aimed at keeping rents low for locals and keeping out property investors who might leave apartments empty as appreciating financial assets, or lease them out to wealthy travellers.

In many places in the UK – including some of the most beautiful parts of the country, like Cornwall and Wales – locals are forced out of the property market as homes are bought up as second homes or to list on sites like Airbnb.

Airbnb knew that to fight this in so many places it couldn’t act alone – it had to mobilise its own ‘community.’

They began posting job listings like this one (Airbnb, ‘Community Organiser (Fixed Term Contract)’, Velvet Jobs. velvetjobs.com/job-posting/community-organiser-fixed-termcontract-399585), looking for candidates with experience in organising community, political and government campaigns.

The premise was simple: Airbnb would make sure that their own members fought for their own right to do what they wanted with their own property.

They wanted to maintain the image that this wasn’t landlordism or rentierism, but local communities standing up for their local areas.

Despite the image, what was being listed on the site was changing. It wasn’t just individuals or families renting out a room in their home, or their place while they were away: by 2020, ‘hosts’ with more than one property made up 63% of listings; between 2017 and 2020, the number of hosts with between 101 and 1,000 properties went up by 50%; and 14% had more than 21 listings.

What on the surface seemed like a project for ‘democratising’ the vacation market was fast becoming a means for landlords to consolidate and extract rents from their capital and assets.

Several reports – including one from the centrist Economic Policy Institute – found that Airbnb has a harmful effect on the economy.

With a fight on its hands, Airbnb mobilised a war chest.

In one campaign in San Francisco, Airbnb spent $8 million on TV ads, billboards and canvassing to lobby against regulation that restricted the rental market.

Head of Community Atkin said: ‘Mobilisation was absolutely instrumental in changing the law … we got about 250 of our hosts to show up at land use hearings and to give up a day of work.’

Meanwhile, in 2018, Amsterdam, Barcelona, Berlin, Bordeaux, Brussels, Krakow, Munich, Paris, Valencia and Vienna wrote a joint letter to the EU, asking them to intervene in what they saw was a growing problem.

Airbnb has been involved in 11 lawsuits against authorities to try to avoid regulations, while many cities have fought back with measures like caps on the number of days a property can be let.

Muldoon calls this ‘community-washing’.

He writes: ‘According to the spin of tech CEOs, making billions of dollars is almost incidental to – or a welcome but unexpected by-product of – their social mission of connecting the world and giving people a sense of belonging in community.’

But ‘there is a deep irony in one of the world’s most successful entrepreneurs portraying his company as a champion of grassroots community.’

At a 2017 Facebook Communities Summit, Zuckerberg took to the stage saying: ‘It’s not enough to simply connect the world, we must also work to bring the world closer together. … Communities give us that sense that we are part of something bigger than ourselves, that we are not alone, that we have something better ahead to work for.’

With one hand, big tech companies support and lobby for deregulation that allows big capital to encroach upon community spaces, while with the other they co-opt the language and sentiment of community to give the impression that they’re on its side. As Orwell wrote, “Political language is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.” Power twists commonly accepted values to make them sound appealing – “Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness.”

Like Airbnb, Uber spends millions lobbying local governments to repeal regulation that was often the result of long-fought battles by local taxi drivers. Certain regulations, like limiting the supply of taxis, were about risk reduction, so that during a downturn drivers could still earn enough money.

Uber systematically tried to circumvent and replace laws like these.

Meanwhile, one study found that half of Uber drivers in Washington DC were living below the poverty line. Uber has fiercely resisted classifying drivers as employees, insisting they’re self-employed, so that it doesn’t have to pay sick leave, a minimum wage, or paid vacation.

The incursion into our political space, the theft of the community, at a period in history that prioritised laissez-faire and surveillance security, reminds us that the shape and direction of any change is not natural – there is no such thing as ‘empty cyberspace’ that can be moulded with new utopian rules. The offline world, real power, history, always intrudes.

‘As the legal scholar Veena Dubal has argued, and as Doug Schifter observed in his columns, politics has been pivotal, in particular the pressure the company has brought to bear on regulators and policymakers.’

Utopia comes from the Greek for ‘no place’. It is an apt description of the Californian ideology that saw cyberspace as a no-place: an infinite blank sheet upon which its own rules could be written, in some realm out there, disconnected, metaphysical.

But, of course, there is no such place as no place. Big tech platforms and ISPs are made up of wires, servers, and code written by people who thought in particular ways, in a particular historical context, with particular rules and laws.

Despite this, big tech platforms often try to absolve themselves of any real world responsibility.

While having a large hand in shaping what users see, what gets amplified, and how they see it, platforms like Facebook aren’t treated like real-world publishers. They aren’t subject to libel, copyright, or the other regulations that their users are.

The 1996 Communications Decency Act decided that

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Yet all of these platforms use mountains of data collected on you and your neighbourhood to decide what you’ll buy and what you’ll watch and what you’ll read. Facebook knows what mood you’re in – whether you’re lonely, lovesick, happy, just paid, politically disenfranchised – and can show you not just what you want – because not even we know what we really want – but what you’ll most likely click on.

The shift towards surveillance, in a political context that justified diminishing protections and shrinking privacy, was like a stranger entering your home with sweets to tempt you while secretly noting your likes and dislikes, who you live with, your pets’ names, and what you were cooking – all for the sake of personalisation and subtly modifying behaviour.

And ironically, from their utopian no-place, they’ve become ubiquitous: how we travel, eat and watch, which apps we use, what we listen to. Two-thirds of us get news through social media. Facebook has acquired almost 100 companies, Amazon almost 100, Google more than 200, Microsoft more than 200.

Zuboff calls it dispossession – an ‘incursion into undefended space’: ‘your laptop, your phone, a web page, the street where you live, an e-mail to your friend, your walk in the park, browsing online for a birthday gift, sharing photos of your kids, your interests and tastes, your digestion, your tears, your attention, your feelings, your face.’

They extract more and more information, sucking it like vampires from the corpse of history, from no place.

That corpse of history is a long tradition of public investment that has been cheaply sold off or surrendered, while pretending that only capitalism is capable of innovation.

The internet, touchscreen, GPS, MRI, aviation projects and nanotechnology, the first computer, the first digital computer, radar, and much more only exist because of public investment in things that are risky and that individual companies could not pursue without grants.

Nathan Newman has pointed out that the internet’s development is sometimes compared to the development of highway systems. But, he says, the internet is much more complex, and the comparison doesn’t make sense unless the government “had first imagined the possibility of cars, subsidized the invention of the auto industry, funded the technology of concrete and tar, and built the whole initial system.”

We have a hangover. Drunk on neoliberalism, we strip regulation that protects local residents and taxi drivers, let bucketloads of cash into our political lobbies, hand over treasure chests full of data, and pass off the infrastructure to telecoms giants who have no incentive to reinvest in faster broadband and instead pay larger dividends to shareholders. They don’t invest in poorer neighbourhoods or rural areas, which put up with insufferably slow speeds, and the US has some of the most expensive internet fees and terrible speeds. In 2018, according to Microsoft, almost half the country could not get or did not use broadband – disproportionately low-income and rural Americans. In Detroit, 70% of school children have no internet.

Tarnoff writes that ‘the big ISPs are essentially slumlords. Their principal function is to fleece their customers and funnel the money upward.’ On top of that, ‘the broadband cartel regularly receives large infusions of public cash.’

For all this talk of state investment, the rolling back of regulation and the embrace of laissez-faire markets, I’m not suggesting that Facebook or Airbnb or Twitter should be handed over to national governments. Choosing who should decide how platforms are run – capitalists or politicians – is a false choice.

Solutions are difficult. Trade-offs are necessary. Real solutions should be multifaceted. I think some combination of regulation, cultural awareness, and democratic alternatives should all be pursued.

Muldoon points out that before the mid-20th century, the idea of what counted as the public interest was much wider. We often bail out banks or energy companies at great cost, and regulation has long protected the weak from the strong.

In one 1877 case, Munn v. Illinois, the court concluded that a business dealing in large quantities of grain was of public interest and so could be regulated.

There are lots of reasons something might be in the public interest, but the central ones are these: it is public – it is in all of our interests – it tends towards monopoly, and there are clear harms. And so at least part, if not all, of how such businesses are run should be decided democratically.

As legal scholar Frank Pasquale has said: “The decisions at the Googleplex are made behind closed doors… the power to include, exclude, and rank is the power to ensure which public impressions become permanent and which remain fleeting.… Despite their claims of objectivity and neutrality, they are constantly making value-laden, controversial decisions. They help create the world they claim to merely ‘show’ us.”

First of all, let’s look at the benefits of being open, democratic, transparent, and not being run commercially; then we’ll look at political policy and at building alternative spaces.

Open Source/Free Software/Microsoft vs Open Source (Microsoft, Again…)

In the 70s, a group of computer scientists at MIT were working with an operating system called Unix, which was owned by the telecommunications giant AT&T.

One programmer and hacker, Richard Stallman, wanted an alternative: something that wasn’t restricted and was free to work on, add to, and share. Stallman published the GNU Manifesto and became a pioneer of what came to be called open source. GNU was a recursive acronym for ‘GNU’s Not Unix’. He proposed a legal mechanism called copyleft: a licence under which anyone could use, add to, or modify software, provided they did so under the same conditions. Stallman began work on several pieces of code that would contribute towards an open-source operating system that anyone could use and through which innovation could be pursued collaboratively. Stallman believed that access to the source code of software was a right.

In the early 90s, a Finnish software engineer, Linus Torvalds, contributed to the project a new open-source operating system kernel. He called it Linux, and anyone could download it, use it, and contribute to its development.

Both Stallman and Torvalds opposed the position that Bill Gates had outlined in his Open Letter to Hobbyists years earlier.

The type of software they developed became known as FLOSS – Free/Libre/Open Source Software – and it grew into such an influential movement that we all rely on FLOSS every day. Every supercomputer in existence runs on Linux, including one at the US Department of Energy, one on the ISS, and one on the USS Zumwalt – the most technologically advanced ship in the world.

The reason they use Linux is that, because it's open, it can be tailored to highly specialised needs. The servers that host most of the world's websites run on Linux, as do the computers at many government agencies, and Google's phone operating system, Android, is Linux-based, making it the most-used operating system in the world.

LibreOffice – a free Microsoft Office alternative, VLC – a great media player, Firefox – a web browser, Audacity – which I'm using right now – and WordPress, used by more than 60 million websites – are all FLOSS.

Open-source practices are a challenge to Bill Gates's biggest complaint in his Open Letter to Hobbyists: why would anyone build anything if it wasn't commercially profitable?

Studies have found that developers contribute to FLOSS for various reasons that don't fit the traditional economic/psychological model of humans as self-serving, profit-maximising agents. Contributing to a community, or to humanity more broadly, the desire to challenge themselves, the status that comes from contributing to a project, and the fact that they themselves need the software they're contributing to have all been cited as motivations outside of the profit motive.

And FLOSS is often worked on outside of conventional worker-boss hierarchies.

Legal Scholar Yochai Benkler writes that FLOSS is ‘‘radically decentralized, collaborative, and nonproprietary; based on sharing resources and outputs among widely distributed, loosely connected individuals who cooperate with each other without relying on either market signals or managerial commands’’

Media scholar Benjamin Birkinbine argues that FLOSS is an example of an alternative value system, a type of digital commons – public, open, non-private property which everyone has a right to use.

Enter Microsoft.

As FLOSS has proven itself a viable alternative to proprietary software, corporations have slowly encroached on open-source projects in an attempt to, in Birkinbine's words, 'capture the value being produced by FLOSS communities.'

Since its antitrust court case, Microsoft has made a dramatic U-turn in its position on sharing software.

Microsoft were forced to adopt 'interoperability' – the opening of their APIs – so that third-party software could more easily interact with the Windows ecosystem.

And it was at that time that FLOSS was taking off as a viable way of producing software.

In 1998, a supporter of open source, author Eric Raymond, received leaked internal Microsoft documents that became known as ‘the Halloween Documents.’

In them, Microsoft outlined their new approach to open source – they would need to make use of it while continuing to make the case that software – Microsoft software in particular – was worth the price tag. To do this, the Halloween Documents revealed, Microsoft would use what they referred to as 'FUD tactics' – the sowing of Fear, Uncertainty, and Doubt.

They broadcast an advert in the UK, for example, claiming that, while initially free, Linux worked out more than ten times more expensive than Windows Server 2003 in the long term – but the Advertising Standards Authority found the claim misleading and banned the ad.

Microsoft were also found to be funnelling money into the company that owned the rights to the original UNIX, which was fighting a legal battle against Linux over intellectual-property infringement.

But the Halloween Documents also described internal surveys that had found there was a lot of support for FLOSS within Microsoft.

So Microsoft had to balance between fighting open source and incorporating it into Windows, both for their own benefit and to avoid more legal problems.

Today, interoperability is an important feature of everyday life that often goes unnoticed and unappreciated. A simple example is railways and airports, which have to share the same track gauge, signalling, air-traffic-control protocols, and so on. If you've ever used the Microsoft Word alternative LibreOffice, it can open and save in Microsoft file formats.

Think about screws, plugs, banking systems, and internet protocols – interoperability ensures, in part, that dominant corporations can't dominate the market further by forcing their own interfaces on everyone and squeezing out others.
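The point about interoperability can be sketched in a few lines: two programs written independently can exchange data so long as they agree on an open, documented format. The two "apps" here are hypothetical stand-ins, and JSON stands in for any open standard:

```python
import json

def app_a_export(contacts):
    # "App A" serialises its address book to the shared open format.
    return json.dumps({"contacts": contacts})

def app_b_import(payload):
    # "App B", written by someone else entirely, can still read it,
    # because the format is documented rather than proprietary.
    return json.loads(payload)["contacts"]

exported = app_a_export([{"name": "Ada"}, {"name": "Linus"}])
print(app_b_import(exported))  # [{'name': 'Ada'}, {'name': 'Linus'}]
```

Replace the open format with a secret proprietary one and App B can no longer participate – which is exactly the lock-in that interoperability rules are meant to prevent.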

And while the open-source community remains sceptical of Microsoft, Birkinbine writes that Microsoft had to embrace open source because it had proven that it worked, that collaboration was the direction the industry was heading in, and that 'the company's turn to open source may also be viewed as a humble recognition that the commons-based peer production taking place within the FLOSS community was an efficient and effective model of industrial software production that could supplement its own business practices.'

Wikipedia is another example of a similar non-commercial alternative. The first Wikipedia article ever published, on January 15th 2001, was on the letter 'U'.

McCullough writes that ‘It was comprehensive, it was well written, and it was—to the surprise of Wales and his team of editors—accurate. The few thousand users who had shown up to test out Wikipedia had, through their collective input and edits, gotten the article polished to near-authoritative quality’

Like FLOSS, Wikipedia's contributions are voluntary. It's free to access, there are no advertisements, it doesn't track its users or sell their data, and it's based on the premise that everyone can participate equally. It's deliberative rather than hierarchical – administrators and stewards are, for the most part, elected democratically: administrators are nominated and voted for based on familiarity with rules and processes, and edits are debated over and voted on.

Building Alternative Spaces

What these examples prove is that open, collaborative and democratic alternatives to Big Tech are possible.

Influenced by the sociologist Erik Olin Wright, Muldoon argues that we should be building radical democratic institutions within the ‘cracks’ of the capitalist system – what Wright called ‘real utopias.’

Muldoon argues for what he calls platform socialism – which would involve 'the organisation of the digital economy through the social ownership of digital assets and democratic control over the infrastructure and systems that govern our digital lives.'

He continues:

‘A broad ecology of social ownership acknowledges the multiple and overlapping associations to which individuals belong and promotes the flourishing of different communities from mutual societies to platform co-operatives, data trusts and international social networks.’

And there are plenty of examples of small platforms trying to challenge Big Tech's dominance: Up & Go is a cleaning cooperative in NYC, FairBnb is an alternative to Airbnb, Taxiapp to Uber, Mastodon to Twitter, and Diaspora and Friendica to Facebook. But if there's one thing they have in common, it's their very clear failure to make much progress against incumbents, who are protected by, among other things, what's been described as the 'network effect' – the more people who use a platform, the bigger it is, the more valuable it becomes. This means there's a cost to leaving Twitter to join Mastodon, because everyone you want to follow is on Twitter.
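The network effect described above is often formalised as Metcalfe's law: a network's potential value scales with the number of possible connections between its users, n(n-1)/2, so it grows roughly with the square of the user count. A minimal sketch:

```python
def possible_connections(users):
    """Number of distinct pairwise links between `users` people
    (Metcalfe's law: value scales roughly with n * (n - 1) / 2)."""
    return users * (users - 1) // 2

# Doubling the user base roughly quadruples the possible links,
# which is why a small rival starts at such a steep disadvantage.
for n in (10, 20, 40):
    print(n, possible_connections(n))  # 10→45, 20→190, 40→780
```

On this logic a platform with a tenth of the users offers closer to a hundredth of the connective value – one way of quantifying the cost of leaving an incumbent.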

This is a problem, and a reason Microsoft is so dominant, and how they managed to squeeze Netscape and other competitors out of the market. But it's also why interoperability may hold the key to challenging Big Tech monopolies politically.

Sometimes, as the anti-monopoly battle against the Robber Barons showed, alternative services aren’t enough, only political power can fight the power of corporate incumbents.

I think many people see regulation too simply. Regulation and policy choices come in many forms, and often we can design policy that empowers alternatives instead of just restricting or breaking up Big Tech.

Barcelona City Council, for example, forces Vodafone to make its data open for public use on the council's website. Bernie Sanders has suggested billions in grants for local municipalities 'to build publicly owned and democratically controlled, co-operative, or open access broadband networks.'

Public media outlets like PBS and NPR, established under Lyndon B. Johnson's public broadcasting legislation of 1967, and the BBC – funded by licence-fee payers – provide a quasi-political but ideally independent space motivated by values that sit outside of market mechanisms, and they are routinely ranked in polling among the most trusted organisations for news.

Legislation could be designed to encourage the creation of national or local alternatives to Big Tech that are independent of government, or run through existing library networks or local councils. A government department could build open-source code and provide it to councils to run alternatives to Uber or Airbnb that give local drivers and citizens a voice in how the apps are run.

Personally, I'd like to see an alternative to Twitter that's financed by subscription rather than advertising revenue. For a few dollars a month you'd get access to a network which is completely open, run by its users, who can all vote, elect, and decide on algorithm choices, research directions, privacy policy, and so on, in a similar way to Wikipedia.

And I think this is where interoperability is a smart policy. Regulation could force Facebook, Uber, Twitter and so on to allow third party applications to plug into them.

Interoperability would mean you could use an alternative to Facebook that still works with Facebook, allowing you to keep access to your social network, to post to Facebook or Instagram or Twitter while using new platforms at the same time. Senator Mark Warner suggests legislation that would force platforms with revenues over $100 million to comply with ‘portability’ rules.

The deep contradiction of our age, Zygmunt Bauman wrote in 1999, is "the yawning gap between the right of self-assertion and the capacity to control the social settings which render such self-assertion feasible. It is from that abysmal gap that the most poisonous effluvia contaminating the lives of contemporary individuals emanate."

The question is: who has the legitimate power to control those social settings? And how do we identify where that poisonous effluvia oozes from? In coaxing, nudging, watching, analysing, and sculpting our digital choices? In using monopoly positions to subtly influence our politics? Should we really rely only on the word of Elon Musk and Mark Zuckerberg that they're not skewing news feeds; should we really abandon the concept of the community to Uber and Airbnb?

So it's imperative that we have more experiments in alternatives, and a politics that supports those experiments. As Tarnoff says, 'while working to disassemble the online malls, we must also be assembling a constellation of alternatives that can lay claim to the space they currently occupy.'

Because unless the platforms we use every day are transparent, democratic, and open, it's not us controlling our own social settings. And while, yes, sometimes the decisions Big Tech makes can be the right ones, it's also the case that, however benevolent a ruler may be, there are always going to be differences in value systems from person to person, place to place, group to group. And over the long arc, history has proven time and again that moral monopolies of all types tend towards stagnation, decay, blunder, or corruption. It only takes one mad king, one greedy dictator, one slimy pope, or one foolish jester to nudge the levers they hover over towards chaos, rot, and even tyranny.

Bibliography:

Martin Campbell-Kelly et al., Computer: A History of the Information Machine

Thomas Rid, Rise of the Machines: A Cybernetic History

Amy Klobuchar, Antitrust

Ben Tarnoff, Internet for the People

James Ball, The System

Richard Barbrook and Andy Cameron, The Californian Ideology

Shoshana Zuboff, The Age of Surveillance Capitalism

James Muldoon, Platform Socialism: How to Reclaim our Digital Future from Big Tech

Yes, Government Researchers Really Did Invent the Internet, https://blogs.scientificamerican.com/observations/yes-government-researchers-really-did-invent-the-internet/

A Brief History of NSF and the Internet, https://www.nsf.gov/od/lpa/news/03/fsnsf_internet.htm

Brian McCullough, How The Internet Happened: From Netscape to the iPhone

Sabeel Rahman and Zephyr Teachout, From Private Bads to Public Goods

Ethan Zuckerman, The Case for Digital Public Infrastructure

Where is the Digital Highway Really Heading? https://www.wired.com/1993/03/kapor-on-nii/

Benjamin J. Birkinbine, Incorporating the Commons: Corporate Involvement in Free and Open Source Software

Erik Olin Wright, Envisioning Real Utopias

Maureen Webb, Coding Democracy

Adam Curtis, All Watched Over By Machines of Loving Grace

Robert David Steele, The Open Source Everything Manifesto

Mark Andrejevic, Automated Media

Bill Gates, An Open Letter to Hobbyists

Clay Shirky, Here Comes Everybody

https://knightcolumbia.org/content/from-private-bads-to-public-goods-adapting-public-utility-regulation-for-informational-infrastructure

Hal Varian, Beyond Big Data, https://people.ischool.berkeley.edu/~hal/Papers/2013/BeyondBigDataPaperFINAL.pdf

50 years ago today, the Internet was born. Sort of., https://arstechnica.com/information-technology/2019/10/50-years-ago-today-the-internet-was-born-sort-of/

Seeding Networks: the Federal Role, http://som.csudh.edu/fac/lpress/articles/govt.htm

Getting Started Computing at the AI Lab, https://dspace.mit.edu/handle/1721.1/41180

NSFNET, The first national, then global backbone, http://som.csudh.edu/cis/471/hout/nsfnet.htm

A Brief History of NSF & the Internet, https://www.nsf.gov/news/news_summ.jsp?cntn_id=103050

High-Performance Computing Act of 1991, https://www.govinfo.gov/content/pkg/STATUTE-105/pdf/STATUTE-105-Pg1594.pdf

TELECOM’S LAVISH SPENDING ON LOBBYING, https://www.washingtonpost.com/archive/business/1998/12/06/telecoms-lavish-spending-on-lobbying/b3f35aec-aab0-4a9d-8f48-3212572df8ec/

Ben Tarnoff, ‘Wallets and eyeballs’: how eBay turned the internet into a marketplace https://www.theguardian.com/technology/2022/jun/16/wallets-and-eyeballs-how-ebay-turned-the-internet-into-a-marketplace

Inside the Backlash Against Facebook, http://content.time.com/time/nation/article/0,8599,1532225,00.html
