Fifteen Questions: Jonathan Zittrain on Social Media, AI Litigation, and CompuServe

The law professor sat down with Fifteen Minutes to discuss AI regulation, moderating online communities, and the Applied Social Media Lab. “I’m very interested in ways to see how people can gather with a sense of shared ownership rather than a corporate patron overseeing the conversation,” Zittrain says.

Jonathan L. Zittrain is a professor at Harvard Law School and faculty director of the Berkman Klein Center for Internet & Society. His work focuses on internet governance, privacy, and artificial intelligence.

This interview has been edited for length and clarity.

FM: You co-founded the Berkman Klein Center over 25 years ago and have been writing about the internet and the law for even longer. In that time, what is the biggest change you’ve seen in how people think about the online world and how it should be governed?

JLZ: I think the biggest change in mainstream thinking has been a shift from a rights-based way of looking at the internet to a public health framework. The rights-based way focused on perceptions of a stifling late 20th-century media environment that was like three network news desks, with the same kinds of anchors, and one newspaper in each town. And a sense that the internet offered a microphone to everyone. And the biggest thing that could mess it up would be government coming in to trample on people’s newfound freedoms.

I think by 2010 that was starting to look pretty frayed.

Both because there were hints that social media in particular were taking people to places that, at any given moment, they wanted to be online. But when you added it all up, they were feeling less happy, less motivated, and, ironically, more lonely. And I think there was a realization that there are many instances in which one person speaking can genuinely intimidate or quiet someone else, and that government censorship is only one form of flattening discourse.

And so that greatly complicates the story of the 1990s.

Today, I think it’s a really tricky question to reconcile rights and public health thinking and vernacular. People just don’t even use the same words to describe the same things.

FM: The BKC recently launched the Initiative on Artificial Intelligence and the Law, for which you serve as an adviser. And this spring, you’re teaching a course entitled “Case Studies in Public and Private Policy Challenges of Artificial Intelligence.” As AI comes to play a bigger role in our lives, and given the Biden Administration’s recent executive order on AI, how do you think governments should go about regulating it? What concrete steps can companies, politicians, and judges take?

JLZ: It’s such a great question. And it has echoes of the puzzles of regulating the internet over the past 25 years.

If we’re talking about the most recent advances in AI that people are really taken by — so-called generative AI incorporated into large language models — I think it’s moving very quickly. The internet 25 years ago was thought of as developing quickly, but this is faster.

It just remains amazing to me that in the autumn of 2023, experts — those closest to the building of these models — can’t agree on whether, if they shove a few more tens of millions of texts into them, the models will more or less level off from where they are, or make a further leap in capability or even cognition.

We kind of have to wait for the timer to ding on the Easy-Bake Oven before we take GPT-5 out of it and see what it’s like.

What extraordinary times to live in and how difficult to regulate.

I’m very mindful that the Europeans have regulated cookies on browsers, and that took about 25 years.

This becomes another example of what I now think of as the three-and-a-half problems of digital governance. One, we don’t know what we want; we can’t agree on what we want. Two, we don’t trust anybody to give it to us. Three, we want it now. And the half: thanks to AI, we can scale it everywhere. That’s a weird combination of demands.

FM: Recently, many leading AI companies have stopped open-sourcing their models and begun obfuscating the details of their research, citing safety concerns. As someone who has worked on increasing public access to technical knowledge — including open-sourcing your own torts textbook — how do you feel about this turn away from the ideals of open access? Do you think this is a reasonable safety measure or a self-interested business strategy?

JLZ: I don’t know, and I am bemused by people who think the answer is obvious. There are several things that can be true. One is, for a really powerful technology capable of great harm, it might be a bad idea if everybody has access to it. Call that a munitions view.

Another is, for a really powerful technology that can do good things but also really bad things, it can be really bad if just a few actors have concentrated power over it, and the rest of us can’t see it or understand it, but are simply subject to it.

Both of those are problems.

I wrote and open-sourced an entire book, not just that torts book, called “The Future of the Internet and How to Stop It,” which really was a paean to generative technologies that anybody, however unaccredited, could build upon and take in whatever direction they wanted: technologies of code that then facilitated generative content, projects like Wikipedia.

So I’m kind of all in on generativity. And at the same time, even in writing that book, I recognized there were technologies where generativity might not be so good, such as ones that would facilitate somebody building a nuclear reactor in their backyard.

I tend to think of AI a little bit like asbestos. It’s extremely useful. It’s getting bolted, wholesale and retail, into all sorts of things, where it’s not even obvious that it’s inside. I don’t know, when I’m chatting with an airline agent online, exactly what combination of human and past human, put into a blender and processed out as GPT, I’m actually talking to.

And like asbestos, if it turns out there’s some big regret or problem, it’s going to be awfully hard to retrofit. And I say this as an optimist. Then again, I’m writing a sequel to “The Future of the Internet and How to Stop It” right now called “Well, we tried.”

FM: An ongoing class-action lawsuit against GitHub and OpenAI alleges that their language models violate copyright laws and open-source licenses by reproducing code from their training data without proper attribution to its original authors. How do you think AI models will fare under current copyright law? How should AI companies compensate the authors of the data they use, if at all?

JLZ: I think the “should” question probably should trump the “is” question.

I think it does make sense to start with questions like, “What kind of society are we trying to build? What do we find really valuable when people sit down and write, or sing, or compose?”

If there are ways to automate that, are we turning lots of people into artists who can’t hold a paintbrush, but do know how to prompt DALL-E? Or are they something different? Are they coaches rather than players?

These are very basic questions that illustrate we don’t know what we want.

Is this an economic question about making sure artists have livelihoods? Is it a moral rights question about being able to credit them as inspiration?

What a time to be alive and what a time to not be alive. If you’re an AI, it’s just an extraordinary moment.

FM: How often do you use ChatGPT?

JLZ: Not that often.

FM: What will being a lawyer look like in 10 years? Do you think future advances in AI will radically reshape the profession?

JLZ: I think it’s entirely possible.

I asked one of the legal startup services that has GPT underneath to write a memo.

So it wrote me a memo on the Third Amendment implications of a red light traffic camera ticket. For those unfamiliar (which includes many students here), the Third Amendment is the proscription against the quartering of soldiers in people’s houses. And it wrote a decent memo, including its own skepticism that the amendment really fit.

That got me thinking: initial well-publicized misfires aside, it’s now possible, for the price of a filing fee, say $19 or so, for anybody to be in a position to bring a suit.

And I think that could be both a cause for celebration and a cause for concern. Under the frame of access to justice, people wouldn’t have to find and, usually, pay lawyers in order to get a square hearing in the system.

So from an access-to-process standpoint, it could be great. From a systemic standpoint, it could be a distributed denial-of-justice attack, where there’s now so much coming in.

It even seems like, in a “turtles all the way down” sense, as the next step, the system will start to employ AI to read, summarize, and even offer an opinion on the outcome of the case. Then once the system can do that, why can’t the litigants do that before they even file?

I think all of this shows just how little we’ve even looked around the corner yet.

FM: Do you have any offline hobbies?

JLZ: I have maintained an aquarium for many years — a saltwater aquarium.

I’ve been fond of small appliance repair, which has gotten harder and harder to do over time. It has become more the joyless task of plugging in components until it works again, rather than untangling wires and re-soldering them.

FM: In high school, you joined a pre-internet online community known as CompuServe and eventually became a moderator for it. What online platform do you spend the most time on right now? What’s your favorite?

JLZ: I like to think of the internet as a post-CompuServe online community.

CompuServe had some interesting differences from nearly any section of the internet today. For various reasons, people were charged by the hour, metered by the minute, to use it. So in the back of one’s mind was always the question of whether this extra minute was worth it.

So I think it tended to make people more conscious of how online they were, even if they found it compelling.

I don’t mean to portray this as a better world. I’m just saying it was a different one.

I’m very interested in ways to see how people can gather with a sense of shared ownership rather than a corporate patron overseeing the conversation.

FM: You’ve argued that social media platforms would be better if they followed a model of community governance, and the BKC recently launched an Applied Social Media Lab, which aims in part to build prototypes of alternative social media platforms. What might those actually look like? Given the overwhelming popularity and profitability of Facebook, Instagram, and TikTok, do you think such a platform could actually succeed?

JLZ: I think we won’t know until we find out. Until we try, that is. And I think “succeed” is a word worth unpacking.

If by “succeed” we mean being maximally compelling to be on at all times, irrespective of one’s blood pressure, then building something more in the spirit of the public interest, or of individual interests, may not be as successful.

But there may also be a little bit of collective exhaustion.

It is awfully hard to find people just saying they’re having a great time online, like “I hope that 50 years from now, my kids and grandkids are still clicking Like and Subscribe.”

I see the Applied Social Media Lab that I’m working on with Professor Mickens and a number of others as a way of exploring — among groups of people already interested in taking the plunge — what existing bits can be cobbled together along with new practices to have a very different and enriching online experience. Even if it isn’t as compelling as Angry Birds or Minecraft.

FM: What is your preferred meme format?

JLZ: I’ve been a fan of the galaxy brain sequence.

FM: What project are you most excited about right now?

JLZ: I’m actually most excited about taking some of the grand theories and thinking around how people can and should relate online and how it should all be regulated and trying that out in our teaching and learning environments on campus. There is absolutely something to be said for just a rectangular table, some chairs, and a two-hour conversation. That’s great. There’s even something to be said, in a law school way, for 75 people in an amphitheater and a Socratic dialogue. But there is still so much untried that is promising.

FM: You’re also the director of the Harvard Law School Library. What is the role of the physical library in the 21st century?

JLZ: I think it is, in part, a place for a state change in the course of our day.

Having spaces that you can step inside of and come to associate with shifting into a more contemplative, reflective mode (in a micro sense, geography is destiny) seems more valuable than ever, especially when our lives are otherwise so saturated with a fast-moving environment clamoring for our attention.

I also see great value in the library as a crossroads: a place for people to encounter one another.

FM: You’ve also called for more lawyers and public policy experts to learn computer science. How do you think students and professionals should do this? How should college students engage with these issues?

JLZ: Truly knowing enough about a given technology for it to inflect your thinking about the policies around it doesn’t mean you need to spend multiple years getting a master’s or a Ph.D. in it.

It means understanding that a given technology you’re trying to scrutinize could be different: a particular intervention might kill the technology as it exists, but it could exist in any number of other ways.

It’s learning enough of the technology to understand the art of the possible. For some technologies, a three-day boot camp might get you a much better sense. For others, it may really be that you need to sign up for GSAS.

FM: What do you think of the current antitrust cases against Google and Amazon? Can the government win either of them?

JLZ: I’m amazed, as I was at the time of the Microsoft case 23 years ago, at how much these cases boil down to classically boring, bullying, and monopoly-maintenance behavior.

It really is a case where it’s not about sophisticated technology.

But just like, “Oh, I see, you’re raising the prices when you can and lowering them when you have to, and you’re keeping your third-party sellers just close enough so they can’t go elsewhere, or can’t offer a lower price somewhere.” It is, from an Econ perspective, textbook.

FM: You also hold a professorship in Computer Science, and you have experience coding in LISP and BASIC. Do you still code at all?

JLZ: No, I haven’t, in a true sense, in quite a while. I’d love to get back into that.

— Associate Magazine Editor Hewson Duffy can be reached at hewson.duffy@thecrimson.com.