To anyone reading this on a phone, tablet, laptop, or desktop (so, you know, basically all of you): We need to talk about how we talk about screen use.
For too long the conversation’s been stuck on how much time we spend on our devices, and the effect that time has on our well-being. The more salient question for a society in which people’s lives increasingly revolve around screens is how we spend that time. But to answer that question, we need better data.
First off, I know what you’re thinking: The point that screen time is about quality, not quantity, sounds stupidly obvious. And you’re right. It is stupidly obvious. And yet! It’s a point many people, a lot of them smart and well intentioned, have nevertheless overlooked or brushed aside these past few years in the face of mounting public concern that we are all hopelessly, problematically, or involuntarily attached—addicted, even—to our digital devices. In social science research today, it doesn’t matter if a survey respondent uses YouTube to practice conjugating irregular Spanish verbs or to binge on politically extremist rants. It all gets lumped under the unhelpfully broad umbrella of “screen time.”
The trouble is, a whole motherloving lot of that public concern has been driven by lackluster, and often contradictory, scientific results. Earlier this month, researchers from the Oxford Internet Institute published a study in the journal Nature Human Behaviour that plainly illustrates how that happened: The gigantic surveys underlying many tech-use studies can be interpreted in so many defensible ways that two different researchers looking at the exact same data set can—and have!—reached opposite conclusions about the association between screen time and well-being.
And those associations? They're tiny. By the Oxford team's estimate, screen time accounts for well under 1 percent of the variation in well-being, far too little to warrant the claims you've read that we're all addicted to our devices, that excessive screen time is the new smoking, or that smartphones have led large swaths of society to the brink of the greatest mental health crisis in decades.
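To see how the same numbers can point in opposite directions, consider a toy sketch in Python. Everything in it is hypothetical: the survey data is simulated, the variable names are invented, and the loop is a stripped-down stand-in for the specification-curve approach the Oxford researchers used. The point it illustrates is real, though: a big survey offers many defensible analytic paths, and they do not all lead to the same answer.

```python
# Toy illustration of analytic flexibility in survey research.
# All data and variable names are synthetic; this is not the Oxford
# team's method or data, just a sketch of the underlying problem.
import itertools
import random
import statistics

random.seed(42)

# Fake survey: 5,000 respondents, screen time in hours, plus two of the
# many well-being measures a real survey might offer.
N = 5000
rows = []
for _ in range(N):
    screen = random.uniform(0, 8)
    ses = random.gauss(0, 1)  # socioeconomic-status proxy
    # Two "well-being" outcomes that relate to screen time in opposite,
    # equally tiny ways: the crux of the flexibility problem.
    mood = 5 + 0.03 * screen + 0.5 * ses + random.gauss(0, 2)
    anxiety_free = 5 - 0.03 * screen + 0.5 * ses + random.gauss(0, 2)
    rows.append({"screen": screen, "ses": ses,
                 "mood": mood, "anxiety_free": anxiety_free})

def correlation(xs, ys):
    """Plain Pearson correlation, no external libraries needed."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Each "specification" is one defensible set of analytic choices:
# which outcome to measure, and which respondents to include.
outcomes = ["mood", "anxiety_free"]
subsets = {"everyone": lambda r: True,
           "heavy users only": lambda r: r["screen"] > 4}

for outcome, (label, keep) in itertools.product(outcomes, subsets.items()):
    sub = [r for r in rows if keep(r)]
    r_val = correlation([r["screen"] for r in sub],
                        [r[outcome] for r in sub])
    print(f"outcome={outcome:<12} sample={label:<16} r={r_val:+.3f}")
```

Run it and each specification prints a correlation of a few hundredths at most, some positive, some negative. Pick the outcome and subsample that suit your thesis and you can report a "benefit" or a "harm" from the very same respondents; report all of them and the honest headline is "tiny and ambiguous."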
Note that saying “the alarmist claims you’ve read were unwarranted” is materially different from saying “our devices aren’t affecting us.” They obviously are. So much of our lives is mediated through the supercomputers in our pockets: How we eat and sleep, how we socialize and close ourselves off, how we bully and comfort, how we communicate and obfuscate, how we lie, hurt, and heal.
So how do we identify the things that are actually worth worrying over? By making bigger demands of the companies that are blocking us from getting answers.
The cruel irony, from a social scientist's perspective, is that much of the data we seek (more, in fact, than has existed at any point in history) already sits on the servers of Facebook, Google, and several of the other most powerful companies on earth. Those corporations are the gatekeepers holding researchers back from asking more urgent and incisive questions. For example: When college freshmen with depressive symptoms open YouTube, what do they watch? For how long? What does YouTube recommend to them when they're done, and what do they watch next?
When people battling anorexia tap through to Instagram, what profiles do they visit? What kinds of images do they linger on? What tags do they follow?
When middle-schoolers struggling with bullying in class pick up their phones, only to find that their tormentors have followed them onto Messenger or Instagram or Snapchat, what do they do with the abusive DMs? Whom do they reach out to for support? What online resources, if any, do they seek out?
Researchers would give almost anything to make these observations, because doing so would let them begin untangling the web of causes and correlations that binds our thoughts, behaviors, and development to our increasingly connected ways of being in the world.
The data that would answer those questions is protected—for business reasons, first and foremost, but also, increasingly, through regulations like GDPR, in the interest of public privacy. And while it’s true that all of these companies have hired researchers, including psychologists, to help them make sense of and leverage that data, its full potential will never be realized unless it’s made available to independent scientists.
Impossible, you say. Tech giants' user data—like the algorithms that data is fed into—is among this century's most precious and closely guarded trade secrets. The companies will never part with it. And even if they were open to sharing, what company in a post-Cambridge Analytica world would risk the privacy fiasco of having that data fall into the wrong hands?
Maybe you’re right. Maybe scientists will have to find another way. Then again, you might be wrong: Less than a year ago, political scientist Gary King, director of Harvard University's Institute for Quantitative Social Science, launched Social Science One—an independent research commission that will give social scientists unprecedented access to data inside Facebook and allow them to publish their findings without Facebook's prior approval.
Make no mistake: Getting SSO off the ground was—and continues to be—a royal pain, what with all the legal paperwork, privacy concerns, and ethical considerations at play. Details of the industry-academic partnership are too complex to relate here (though I've written about them previously), but suffice it to say that King and his SSO cofounder, Stanford law professor Nathan Persily, earlier this month published a 2,300-word update on the status of their initiative, more than half of which is devoted to the ongoing challenges they face in bringing it to fruition. "Complicating matters," they write, "is the obvious fact that almost every part of our project has never even been attempted before."
The good news is that the first studies to receive funding through Social Science One should be announced any day now. They will all focus on Facebook’s impact on democracy and elections.
But if all goes well, SSO could have a more lasting impact by setting up a framework for secure, ethical, independent research inside the tech giants. There's no reason future investigations, funded and overseen by SSO or a similar outfit, can't grapple with big questions about well-being. They should also involve companies other than Facebook: We want to know not only what a vulnerable individual watches on YouTube but also what happens when they go to Reddit, what questions they ask their Alexa or Google Home, and how they feel when they post on Instagram. We need these companies to open their doors, and their data streams, in a prescribed way that respects every participant in the process.
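What might that prescribed, participant-respecting access look like in practice? One widely studied candidate, sketched below in Python, is differential privacy: researchers ask aggregate questions, and the company adds calibrated noise to each answer so that no individual can be reverse-engineered from the result. The query, the cohort, and the numbers here are all invented for illustration; nothing in this sketch reflects what Facebook or SSO has actually built.

```python
# A hedged sketch of privacy-preserving data release via the Laplace
# mechanism. The query and figures are hypothetical.
import random

random.seed(7)

def laplace_noise(scale: float) -> float:
    # Laplace(0, scale) drawn as the difference of two exponential draws.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count: int, epsilon: float) -> float:
    # A counting query changes by at most 1 when any one person is added
    # or removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    # yields epsilon-differential privacy for the released answer.
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical aggregate a researcher might request: how many accounts in
# some cohort watched a given category of video this week. The researcher
# sees only the noisy answer; no individual's history leaves the servers.
true_answer = 1284
for eps in (0.1, 0.5, 1.0):
    print(f"epsilon={eps}: released count = {private_count(true_answer, eps):.1f}")
```

The epsilon knob trades privacy for accuracy: smaller values bury any one person deeper in noise but hand researchers blurrier answers. Negotiating that tradeoff, query by query, is precisely the kind of refereeing an outfit like SSO could do.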
We’ve let companies like Google, Facebook, and Amazon build vast empires off our data. It’s time they start giving that data back to us.