The Dangers of Cynical Sci-Fi Disaster Stories – Slate

When I moved to California from Toronto (by way of London), I was shocked by the prevalence of gun stores and, by their implication, that so many of my reasonable-seeming neighbors were doubtless in possession of lethal weapons. Gradually the shock wore off—until the plague struck. When the lockdown went into effect, the mysterious gun stores on the main street near my house sprouted around-the-block lines of poorly distanced people waiting to buy handguns. I used to joke that they were planning to shoot the virus and that their marksmanship was not likely to be up to the task, but I knew what it was all about. They were buying guns because they’d told themselves a story: As soon as things went wrong, order would collapse, and their neighbors would turn on them.

Somehow, I couldn’t help but feel responsible. I’m a science-fiction writer, and I write a lot of disaster stories. Made-up stories, even stories of impossible things, are ways for us to mentally rehearse our responses to different social outcomes. Philosopher Daniel Dennett’s conception of an intuition pump—“a thought experiment structured to allow the thinker to use their intuition to develop an answer to a problem”—suggests that fiction (which is, after all, an elaborate thought experiment) isn’t merely entertainment.*

That’s true. And it’s a problem.

Like all pulp writers, I put the plot front and center. That’s not to say that character and theme don’t get a look-in, but as William Gibson says, “I can do fucking plot. I can feel my links to Dashiell Hammett. … I’ve still got wheels on my tractor.” And my plots follow sci-fi’s dominant motif, which is “problems and their solutions.”

There’s a prototypical kind of sci-fi story whose throughline is “Technology caused and/or resolved this dire problem.” And when you’re into plot and you need problems, you don’t even need to choose between “human vs. human” and “human vs. nature”; you can opt for both: human vs. nature vs. human. The central crisis—a nuclear meltdown, a viral pandemic, a breakdown of our networks or computers—is turned into a catastrophe when the other people around your characters turn out to have been beasts all along, their vicious true natures barely kept in check all these years by the fragile veneer of civilization. Your character might be part of a team, but they’re still a small band of heroes fighting against a brutal and vicious world.

This is the thought experiment of a thousand sci-fi stories: When the chips are down, will your neighbors be your enemies or your saviors? When the ship sinks, should you take the lifeboat and row and row and row, because if you stop to fill the empty seats, someone’s gonna put a gun to your head, throw you in the sea, and give your seat to their pals? I’ve committed this sin myself. Right at the start of the first novel in my Little Brother series, a character gets stabbed in a crowded subway by someone who is apparently just knifing people at random in a crowd. That’s never explained, and no one has ever asked me about it. It’s just people being awful.

But according to Dennett, this isn’t just fiction—it is the stuff we’ve fueled our intuition pumps with. The problem is, it’s wrong. It makes for good stories, but those stories don’t reflect the truth of the world as I see it. Humanity is, on balance, good. We have done remarkable things. The fact that we remain here today, after so many disasters in our species’ history, is a reminder that we are a species of self-rescuing princesses—characters who save one another in crisis, rather than turning on one another.

The historical evidence supports this as well. As Rebecca Solnit’s essential 2009 book, A Paradise Built in Hell, lavishly demonstrates, crises are when our species shines: moments of great personal and group sacrifice, marred not by barbaric opportunism but by the expectation of barbaric opportunism. “Elite panic” is the sociological term for this phenomenon, in which wealthy people become convinced that the peasants will dissolve into bestiality and preemptively start shooting anyone who wanders into their neighborhoods during a crisis.

I think that our pulp fiction has done us a disservice, creating a commonsense assumption that we are one power failure away from Mad Max: Fury Road. The reality is ever so much messier, full of people trying to do the right thing—which still causes high-stakes, serious conflicts, but they’re conflicts of good faith and sincere disagreement.

Not only does the red-of-tooth-and-claw storyline misprime our intuition pumps, it’s also lazy storytelling that squanders the opportunity to get more plot into the tale, as the gnarly, complicated stories of irreconcilable, good-faith conflicts are so much more fascinating than merely staving off the ravening hordes of bestial proles who show up as soon as the lights go out.

In my Little Brother novels, I’ve worked to dig into those more complicated conflicts with increasing fervor. In the first book, Little Brother (2008), a group of teens whose hometown is rocked by a terrorist attack wages war on the Department of Homeland Security, which swiftly converts San Francisco into a police state. In Homeland (2013), our heroes become custodians of a huge trove of leaked U.S. government secrets that they attempt to release in a responsible and measured way, caught between private military contractors who are trying to recover the leaked docs on the one hand and shadowy, radical hacktivists who just want to dump it all on the other.

I wrote these stories in the era of mass surveillance, as my fears for networked computers were coming true: that their liberatory power would be sidelined and they would be turned instead into instruments of mass surveillance, control, and manipulation. Both stories trace the characters’ dawning realization that there is no individual solution to their problems—that the kind of systemic change they want is a team sport and has to include people usually left on the sidelines in tech fights.

Right after Homeland was published, I read Solnit’s Paradise Built in Hell. That book confirmed the intuition that led me to steer Little Brother (after that initial convenient stabbing) and Homeland away from cheap narrative convenience and toward complicated stories about irreconcilable, good-faith disagreements.

On Tuesday, I published Attack Surface, a stand-alone novel for adults and the third Little Brother book. (The first two were young adult novels.) It’s the story of Masha Maximow, who appears in the first two books as an antagonist who sometimes saves the day. Masha has convinced herself that building cyberweapons is the right thing to do, even when they’re used by dictators. She’s spied on some pretty bad people, and she’s seen her tools used to disrupt violent, psychopathic militias in Iraq: revenge-killing Baathist ex-soldiers engaged in punitive rape and torture. She knows that not everyone who uses her tools is on the side of the angels, but she’s talked herself into a self-conception as a moral actor.

Attack Surface is the story of Masha’s moral reckoning when her cyberweapons have been trained on her childhood best friend and the Black Lives Matter successor group she helps organize. Masha’s journey is a retraining of her intuition pump, a painful process that involves confronting the compromises she made on the way and the stories she told herself about them. I wrote it as tens of thousands of tech workers were walking off the job in protest of workplace harassment, algorithmic racism, collaboration with Immigration and Customs Enforcement and local police, and the deployment of facial recognition and other surveillance tools. It seemed to me that a reckoning was at hand.

The Little Brother series inspired dozens of technologists, activists, cyberlawyers, and cryptographers to get into the field and agitate for a more humane future. For example, if you watch Laura Poitras’ Academy Award–winning documentary Citizenfour closely, you can see that Edward Snowden has a copy of Homeland with him as he leaves his room in Hong Kong.

But in the years since Homeland and Citizenfour, tech has relentlessly marched toward control and oppression. For every app that Hong Kong protesters used to coordinate their protests and evade rampaging cops, there were five shadowy databases keeping tabs on their public transit use, or their mobile phones’ identifiers, or their online search habits. Across the world, tech users are struggling to defend themselves against tech, and within the tech giants, tech workers are struggling to define their moral responsibility to the world.

With Attack Surface, I’m targeting the techies whose hearts may have been in the right place, but whose cheap fictions (inspired by the cheap fictions of my field) about the need to build oppressive systems of control led them to build our world of digital surveillance, manipulation, and control. The workers who rationalized their way into building something they knew to be a little wrong, and then something else that was a little more wrong, and again and again, a thousand compromises made in the name of “fighting crime” or “fighting extremism,” until one day they looked at their careers and their reflection in the mirror and couldn’t recognize either.

We once had stories of technology’s power to liberate us: not merely to build a separate, cryptographically secured demimonde where the illegitimate power of a corrupt state could not penetrate, but a temporary autonomous zone where we could organize for meaningful, structural, political change. But today, it’s easy to think that technology has no role to play in our liberation, to cede the destiny of the digital world to the forces of oppression.

New stories will help us understand the importance of seizing the means of computation and using it to build movements that break up monopolies, fight oligarchy, and demand pluralistic, shared power for a pluralistic, shared world.

Changing our intuition pumps is not easy, but it’s urgent—and overdue.

Correction, Oct. 13, 2020: This article originally misidentified Daniel Dennett as Daniel Denning.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.