Possibilities, Past
I used to be excited about the possibilities of online communities. I remember feeling optimistic when I was a kid and got the modem working on the family computer so I could call into a local bulletin board system and find people to trade music with. Or I remember as a teenager getting an issue of Wired magazine and reading on the cover that “cypherpunks were plotting world liberation” and that the internet was going to bring us “25 years of prosperity and freedom.” I remember that one month in 2013 when I had a pretty good run of Instagram posts. At each of those moments I felt like I was about to be part of a new world, and that new world was going to be pretty amazing.
These days I have trouble feeling so enthusiastic. The disinformation and advertisements, the trolls and corporate user agreements, the hate groups and online ennui spoil the fun. This isn’t what I’d hoped for.
Maybe I’m just not very good at social media, and I do keep forgetting my LinkedIn password. But I don’t think it’s just me. It seems like the online utopia has gone off the rails.
I want to know what happened. And I want something better to look forward to.
So, I want to look back at how we got to where we are today. Maybe not to see where we went off the rails, but to look at the rails themselves. Where did we think we were heading? Who made what assumptions and who built what structures? Were there other paths we could have taken?
This will take a few episodes, so in this episode I just want to lay out the questions and sketch how I want to approach the issues.
To give away the end, I think we can get some good insights by bringing in critical history and giving attention to the political philosophies hidden in the structure of online communities. In other words, a little bit of history can help diagnose the problems we face and can suggest some radically different solutions. Ultimately, I’m hoping to find something to be optimistic about.
So let’s get started.
Promises and Problems
If you follow news about social media, you’ll see a few dramatically different kinds of stories. Sometimes social media looks as if it might open up new kinds of free and open social interaction, and other times it looks like something poised to destroy lives and bring the downfall of western democracy. It brings new information to lots of people, or it undermines the existence of agreed-upon facts. It’s where people organize resistance, but it’s also where demagogues seem to have free rein. It’s a haven for the misunderstood, but also a breeding ground for intolerance.
Depending on who’s reporting and the events of the particular day, it looks like the Best Thing or the Worst Thing.
How do we make sense of that dual nature? It might have something to do with the gap between the promises of the system builders and people’s actual experience.
Let’s start with the promises.
In 2012, Mark Zuckerberg, the founder and CEO of Facebook, wrote a letter to go along with the company’s initial public offering. In the letter he made some pretty grand claims about what Facebook was all about. He said that Facebook wasn’t originally created to be a company, but was created to “accomplish a social mission– to make the world more open and connected.” He went on to compare Facebook with the printing press and television– as the latest in a long line of technologies that transform society. He claimed that Facebook (not computer networks in general) would change the world as much as the printing press. A pretty big claim.
A few months ago, in early 2017, Zuckerberg wrote a new letter. The new letter reiterates and expands the vision of Facebook as a transformative force. Now he talks about Facebook as a social infrastructure that will create a global community. This “social infrastructure” would overcome nationalism, prejudice, global warming, and terrorism, and would even combat individual depression. There’s a lot in that letter and it’d be worth doing a close reading of the whole thing, but the main point is that Zuckerberg pitches Facebook as something meant to change the world.
As grand as this is, it doesn’t seem too out of place in how a lot of tech companies talk about their business. So many product announcements, start-up pitches, kickstarter videos, and web company IPOs make claims about bringing people together and changing the world.
But here is the thing I keep noticing– Most of these pitches stop short of explicitly saying that they’re going to change the world for the better. They just sort of assume that these new systems, new communications methods– these “disruptions,” to use a favorite industry word– are going to improve things.
As familiar as it is, this sort of tech-boosterism feels at odds with a lot of what is actually happening with these systems.
I have to admit that for me it’s hard to hear someone worth 70 billion dollars say they aren’t interested in money. Right off the bat it seems disingenuous or un-self-reflective. But beyond that obvious goofiness, it’s hard to square the grand claims of improving the world with all of the problems that seem to go along with these social network sites.
Here’s where we get into the actual experience.
It’s not hard to make a long list of problems.
We could list the growth of white nationalist organizing and propaganda and the increasing reach of hate groups, which creates a very real threat of violence for the people these groups target.
We could list the proliferation of baseless conspiracy theories and misinformation campaigns that threaten the democratic process and are bringing a tide of reactionary so-called populists into the spotlight around the world.
At an individual level, we could list widespread harassment and threats, especially against women and people of color. The bullying extends from celebrities, to writers, to private individuals, and even to teens. Beyond the chilling effect this has as people give up and delete their accounts, such terrible behavior damages and sometimes literally destroys people’s lives.
And, since I’m recording this in the US in 2017, toward the top of my mind is also the rise of a regime that, in addition to being steeped in racism, sexism, anti-intellectualism, and authoritarian and fascistic impulses, also seems to embody and employ every one of the worst tendencies of the internet.
All of these effects are a long way from the optimistic promises of Zuckerberg and others.
Of course, there’s good stuff about the internet, too. Lots of people keep in touch with friends, family, and colleagues, learn interesting and empowering things, and organize for social justice, protest, and resistance. Which is great.
On balance, I have no idea if we come out ahead or behind. If you add up all the good stuff and all the bad stuff, how would it look? I don’t know.
Actually, I don’t like that formulation. To understand the impact, it’s important to look at the experiences of particular individuals and groups of people. How this affects you has a lot to do with who you are. It’s nothing like a level playing field out there. Some people are more targeted, more vulnerable than others.
Whether the internet is making things better or worse overall, the fact that it can be so damaging to some people means it needs to be improved.
And even if you could excuse all these terrible experiences and effects– and I don’t think we should excuse them– even then, I don’t think the list of good internet things fully redeems the current online platforms. Just because people can create meaningful connections doesn’t mean the system is working. People can create connections even in the most broken situations.
The most inspiring stories seem to happen in spite of (rather than because of) the structure of the internet.
Which is all to say, there’s a lot of room for improvement.
Human Nature or Bad Design
Ok, so let’s fix the internet.
Of course that’s easier said than done. And the folks who run these sites are trying to fix these problems before the toxic atmosphere drives everyone away. Hopefully there are some good technical fixes in the works.
But I think it’s worth thinking about the underlying causes and nature of the problems we experience. Where do these behaviors and effects come from?
How you understand the problem shapes what solutions you can imagine.
A lot of the tech companies seem to share the same understanding of the problem– we could call it the “people are people” explanation. With this interpretation, the problem is just the result of human nature. The assumption here is that at some basic level people simply are aggressive, sexist, gullible, and full of prejudices and their behavior online is just an expression of those basic tendencies.
There are variations on this theme. Another version would say that instead of human nature, it’s something unhealthy in modern western society that is finding expression. Or maybe the anonymity of the computer lifts some restraints that people usually put on themselves. All of which again ends up at the same point– people are people, and bad behavior online is just a reflection of the dark side of human behavior in general.
This seems to be an especially popular interpretation among people who create online systems. You can see why this would be attractive. It lets the system off the hook. If all the bad behavior is just a symptom of deeper social problems, then it can’t be the platform’s fault. It would seem unreasonable to ask a platform to solve deeply embedded problems like sexism or racism. Many responses from the industry come down to: “we’re just programming computers, you can’t expect us to solve problems that have always existed!”
But here’s the thing: This “people are people” explanation isn’t just an excuse, it also limits what solutions seem possible. If the source of the behavior is completely outside technology, then the solution would also be outside technology. Which means it’s really someone else’s problem. From this perspective the only real role of the technology is cleaning up around the edges– limiting the worst behavior of the worst people and keeping it from ruining the experience of too many other people. In practical terms, you get anti-harassment policies, blocking tools, bans of the worst offenders, and attempts to close the loopholes that let trolls derail conversations.
And that’s pretty much what we see from Facebook, Twitter, Reddit, and the other online systems. And it’s better than nothing. It probably helps.
But what if there are other ways of thinking about the underlying problems that would lead to completely different kinds of solutions? What if the mechanics of platforms are actually complicit in the bad behavior that happens online? What if there are things baked right into the technology that encourage and enable the negative behaviors? Could there be assumptions in the basic design of the system that limit what kinds of online communities we can imagine?
In Need of Social Theory
I have a guess about where things have gone wrong and why no one seems to be able to fix it. Here’s my guess. The mainstream systems are built on a pretty simple understanding of human interaction. In other words, the technology is built on an idea of how people can and will act that simply doesn’t hold up to heavy use.
It doesn’t account for in-groups and out-groups or mob dynamics. It ignores the existence of power and how power is wrapped up with gender and race. There is pretty limited thought put into power dynamics and the conflicting interests of platform owners and users. In other words, these systems ignore their own politics.
That might seem like a strange thing to say. After all, so many social media posts are about politics. But most of that is about a kind of capital P Politics– elected officials, policies, court rulings, political parties, and so on. What I’m talking about is small p politics– the systems of power, privilege, interaction, coordination, and resistance that underlie every human activity.
That kind of small p politics should be at the heart of all of these system designs. After all, the project of improving the world (with or without technology) is fundamentally political.
Going back to the Zuckerberg letter: He is definitely claiming positive political results– what else could you call ending nationalism and racism, and rebuilding local social connections?– but there is very little political thinking in how to get to those results. What happens when people disagree about what a better world looks like? What happens when some people have more power than others? What happens when people band together to exercise power over others? What happens when the goals of company profits don’t align with the public good?
Even though those questions aren’t really technical, they should inform the technical approaches. What’s needed is a more sophisticated social and political theory. In other words, we need some philosophy to solve the problem.
At least that’s my guess, my hypothesis. There’s some work to do to fill this out, though. My guess does suggest a way to look at the history: rather than tell a technical story, I want to look at the political philosophies that are implied by the technology. So that’s my project. I want to look at the tacit political theory that’s been built into these systems. I also want to look for other ideas that have been left by the wayside, that haven’t made it into the present-day mainstream.
The Tangled Social Web
Before we get to that, let’s take a step back. I want to define what I’m talking about a little bit better. We’re not really talking about the “internet” as a whole, as an infrastructure, as cables and connections (that’s a topic for another day). I’m really talking about online communities and the social elements of the world wide web. People used to call it Web 2.0.
In the simplest terms, the social web is all the services built around users sharing things with each other. So Facebook, Twitter, Reddit, Youtube, etc. At the core of all of these services are conversations, messages, comments– what people in the business call “user generated content.” Sites like Facebook are (or at least were originally) pretty exclusively social websites, but social elements have crept into a lot of other kinds of sites, like comments on newspaper sites or reviews on shopping sites.
This social web has become so ubiquitous, such a big part of so many people’s online life, that it’s hard to see. It just looks like what people do with computers. But the social web as we know it in 2017 is a very specific thing. It’s been created through a specific process, it has a specific set of features and has been built with a certain set of assumptions.
You could see the social web technology as a bundle of common elements. There are shared technical features, like dynamic webpages that can perform actions without reloading. There are architectural similarities, like the importance of connections between users. There are similarities in how content is organized and controlled, with most using some sort of automatic system for ranking and moderating content. And maybe most important, every one of these systems has the same ownership structure: They are all controlled by private companies, built with venture capital and supported by advertising.
I think ultimately the negative effects of the social web have to do with the interaction among all of these elements. It’s this bundle of assumptions that isn’t working. So I set out to learn more about where this bundle developed.
It’s a complicated tangle. And every element has its own logic and history. Let’s unpick the knot. I want to dig into the earlier history of online communities and computer-mediated social interaction to find the sources of these various elements. Looking at earlier instantiations can highlight tendencies and dangers of the current system. I think it can also point toward roads not taken and unrealized potentials. And maybe suggest some new solutions.
Follow the Winding Path
There’s a lot in this topic, so I’m going to tackle it as a series. It’ll be a loose series. I’m not sure how many parts it’ll end up being, and I’ll probably take breaks to talk about other things. But I’m still going to call it a series.
First up, in the next episode, I’m going to talk about one of the earliest online communities, and now the oldest– the Whole Earth ‘Lectronic Link, or Well. It’s a story that looks at connections between utopian communes and the beginning of the very idea of online communities. Then later I want to talk about a time when cities across the country ran community-owned computer services, and I want to talk about a network of electronic bulletin boards called “Afro-net.” I’m hoping to dig into the source of the idea that society is a network of person-to-person links. I’m sure I’ll find other topics as we go.
The internet as we know it today is not the end product of a natural evolution. Its history is not a story of ever-improving features. Its development isn’t even a single story. The history is actually a collection of stories. Stories of competing ideas and conflicting visions, of arguments and uneasy compromises, of small choices that ended up having big consequences, and of promising ideas that never reached their full potential. We’ve followed a winding path to get to where we are today.
I think there are some hopeful possibilities hidden in this tangled history. On the one hand, this can be a reminder that we aren’t at the end of the story. The social web is constantly being changed and the things that seem natural today, the things we take for granted, and the things that seem intractable or impossible to us today may soon look very different. We shouldn’t feel stuck in place.
On the other hand, it can also be a warning. It’s a warning that the online world won’t fix itself. It won’t naturally evolve into a just, equitable, or humane place, and I have serious doubts that any of the current operators are ever really going to help. I actually find a lot of optimism in that perspective. We’ll get the online community that we fight for. We’ll get the one we build. And I think if we understand these histories and if we understand the politics hidden in these systems, we’ll have a better chance of building something worth feeling optimistic about.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.