
How Black communities shaped the internet

An interview with Charlton McIlwain, author of Black Software


Photo Illustration by Grayson Blackmon / The Verge

As we’ve gone through the pandemic, I’ve been thinking about how much of our lives happen on screens and through software. We use software to connect with our friends, to play games, to move markets, and to catch criminals. Software isn’t eating the world; it’s eaten it.

Today on Decoder, I’m talking to Charlton McIlwain, a professor of media, culture, and communication at NYU and the author of Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter. The book takes a hard look at the long relationship between the Black community in America and software — including the early pioneers who built online communities in the dial-up era; the relationship between software, the civil rights movement, and policing; and today’s social platforms, which amplify and distribute everything from TikTok dances to the Black Lives Matter movement.

I spend a lot of time thinking about software and culture, so I was excited to talk to McIlwain about how he sees the feedback loop between Black communities using software and what software gets made — and how it gets made. And I have always been curious about why it seems like Black culture is so quickly amplified by social platforms and how those platforms might return some of that value to creators, which is not something that’s happened historically.

One thing to pay attention to in this conversation — and it feels like a lesson I keep learning over and over again — is that while modern internet culture always feels new and novel, it really isn’t. There are long patterns that stretch back across decades of people using computers and software to build communities and talk to each other, and we don’t often stop and think about what we can learn from them. Maybe it’s time we start. 

Okay. Charlton McIlwain, author of Black Software.

Here we go. 

This transcript has been lightly edited for clarity.

Charlton McIlwain, you’re a professor of media, culture, and communication at NYU. Welcome to Decoder.

Thank you. Thank you for having me.

I’m really excited to talk to you. You wrote a book a few months ago called Black Software: The Internet and Racial Justice, which really tells a pretty big story about how Black people in the United States have actually been a part of network communication from the beginning. They’ve been part of the tech industry. There’s a pretty deep historical dive for the first half of the book, and the second half of the book connects all of that to what we’ve seen in this country over the past, I would say, year, year and a half with the Black Lives Matter movement, with the movement for racial justice, and how that has been largely expressed online. Give me a sense of how you began this project, because that is two very different stories. They feel connected, but each half of this book could be a book unto itself, really.

Indeed. I think that represents kind of my journey in writing this book. Anyone out there who has written a book knows the strange anxiety of thinking about and starting with a story that you think you know, pitching that to a publisher, getting a contract and a deadline, and then discovering somewhere along the way that it’s a completely different story than you imagined. That’s kind of how this book went. It started off really narrow. I wanted to understand where Black Lives Matter came from. Here was a movement that seemed to come out of nowhere, that had an impact, that was something that we hadn’t really seen since literally the late ‘60s, early ‘70s, meaning their ability to produce sustained attention to issues of racism, racial justice, race and the criminal justice system, and have that atop the public agenda and to stay there for a period of time.

So this all happened, and it seemed to come out of nowhere. It seemed to very much be pushed by what people were doing online, in social media spaces and other connected technologies, network environments that they were working in to propel this movement. So I wanted to understand where it came from. I had at least enough sense to know that even though Black Lives Matter as a hashtag started in 2012, there had to be something before our current moment that was a precursor, that was a push, that was a kind of foundation for Black Lives Matter to launch itself from. So that’s where I really started this book. Where do we have that genealogy, that digital genealogy, network computing genealogy, that would lead to this current moment?

So that’s where the book started. From that beginning, as I started to find people, talk to people, this story started to move further and further back in time. It started kind of in the ‘90s, where I thought, this was my go-to. This is where we have the start of the internet as we know it, in the early ‘90s. I got there, and, number one, I found an amazing group of people that I’d never known about, and that was a group of Black creators. Some of them were simply hobbyists. Some were engineers. Some were lawyers, some teachers, etc., all of whom were in this digital space, creating content, networking across the country and the world through new computing tools and devices and so forth.

So what they built through the ‘90s, I started to get a picture of that and had a sense of, “Wow, this is something I’ve never seen in the history of the internet, the history of civil rights, the history of Black invention. Nowhere have I seen these people.” So that started to become a story that needed to be told.

Somewhere through there, I met a man named William Murrell — and all my conversations began the same way. I started off with a simple question that I thought I knew the answer to. “When did you first get online?” I thought I knew the answer, right? It had to be ‘94, ‘93, maybe ‘92 for some. I talked to William, and out the gate, I asked the question, and he [said], “I don’t know. Well, let me think.” I’m thinking, “What does this take? This can’t be that hard.” Finally he said, “I’d say I first went online in ‘78.” I remember having that moment of, “What the hell do you mean by this? What does it mean to you to be online in 1978?” So eventually, that story came out.

But that was the push for me to say, there’s a story here that goes well beyond our current moment, well before the dawn of the internet as we know it, way back into the very earliest parts of computer networking, where Black people were very much connected and a part of that story, and yet that story had been largely untold.

I remember a very particular moment after [interviewing] William, just sort of saying in my mind, “If I’m already back to ‘78 in this question and this part of history, I have a sense that if I go back even further, to that tumultuous moment of the 1960s, that I’m probably going to find a story that connects all these people.” Sure enough, I did, and then ultimately, I felt the need to connect all of this, really, that moment of the ‘60s and what happened at the dawn of the computer revolution, at the height of the civil rights revolution, to what’s happening today in terms of the fight for racial justice — and somehow fit in everything in between.

It’s a lot. The book is a lot to take in. The sweep is incredible. One thing that I am always focused on as I think about communication technologies broadly is that there’s what the people who architected YouTube thought it would be — that it would be used in a certain way and for positive/good outcomes… and then there’s what it is because of the people who use it, which can be really negative, but also is evolving. Those are radically divergent ideas, and often the conflict is expressed in things like moderation policies or free speech debates, which is sort of a negative expression. There’s also a positive expression, which is that if you get it right, the people building the thing can see what the users are doing and they can build tools for them. That feedback loop accelerates in really healthy and interesting ways.

One of the things I caught from the first part of your book, the part that goes back in time, is that that feedback loop almost didn’t exist for the people you’re describing: the graduate students who set up the first website for the Black Students’ Union, the people building the first Afrocentric message boards. They almost weren’t seen in a way that would have led the people building the software to accelerate their development. Why do you think that didn’t happen? Because if you talk to somebody who runs a social service or a social platform today, they are hyperaware that they need to be paying a lot of attention to their users, even if all I ask them about is moderation decisions. But they know the positive side of their equation, too. Why do you think that wasn’t there at the beginning?

Well, I think there’s a very peculiar story just about the invisibility of users generally, but Black folks specifically at that particular time, where simply we were not on the map in terms of computing development, network development. Certainly in that ‘70s, ‘80s build-up to the commercial web, Black folks were not really a part of that story, not part of the sort of invention story in terms of being embedded either in government research development centers, or private enterprise, or science and engineering institutions that were producing and making and building the actual hardware, software, etc.

But the undercurrent of Black folks using these tools when they started to roll out, I think it was easy to just ... Those folks were so under the radar and not on the map of anyone’s thinking about, “Hey, we should pay attention to what these folks are doing, even if all they’re doing is sort of tinkering around with things and playing and so forth.” There wasn’t a sense of Black people’s role or importance as a site of innovation, or as a potential market later on, as we move into the more commercial ends.

So I think there was just a sense of invisibility that led to the fact that there was no real reason and no sense of a needed feedback loop in the development of the internet and computing. The interesting point, though, is that there is a moment where that doesn’t end up being the case, meaning there’s a moment towards the early days of the web where people did look back and say, “All right. Now that we know this thing called the web, and now that we realize that it probably needs a lot of users, most of whom don’t have a clue what the hell this technology could or should be used for,” they did have a group of people to look back to, just two or three years before. That’s where I think the folks on AfroNet, who had built that bulletin board system of Black users, came into play: all the people that were playing in that world of the internet of ’88 and ’89 and [the] ’90s.

What they saw was an ability for Black folks to, at the very least, produce community in and through those networks. It was interesting, certainly for me, to then see that as folks in ’94, ’95, ’96 began to say, “All right. How do we use this new tool? How do we bring people online?” they looked to the AfroNets of the world and tried to mimic that pattern of creating a community in an online space and appealing to Black users, trying to model that.

One of the major questions for the dominant social platforms, and even some of the ones that are coming up, is: you have Black communities on your platforms. They are creating an enormous amount of culture. That culture becomes everyone’s culture in the way that Black culture in America tends to become everyone’s culture. The value exchange still isn’t there. I’m thinking specifically of Black Twitter, right? The language of Black Twitter quickly becomes the language of the United States Senate in a way that makes no sense to me, right? The phrase “cancel culture” came out of Black Twitter, and now white Republicans in the United States Congress say cancel culture is the biggest threat to America. I don’t know how that happened. I couldn’t trace that pathway.

At the same time, you have other smaller social networks, like Clubhouse, which are trying not to make the same mistakes, to their credit. They’ve started. They’ve given a lot of invites to prominent Black people and other people of color. They call them creators. They’re going to try to pay them. But it still doesn’t feel equitable. When the culture of the Black community is distributed that easily and that quickly without any sense of value exchange, the platforms themselves tend to just pull the value out, distribute it, collect that value, and not bring it back to the communities it came from. How does that play? Has that gotten better? Am I describing a thing that has gotten progressively better or a thing that has stayed flat?

I think it’s something that’s stayed flat. There’s a very specific trajectory here, I think. Some of this is chronicled in the book up to a point ... I used to have a chapter of the book, in an earlier version, that was titled “Remember When the Internet Was Black?” It had everything to do with this moment in time from, let’s say, about 1993 to roughly ’98, when, if you looked out on the internet landscape of that day, whether through your large internet service providers, like AOL or CompuServe and so forth, what you saw were businesses, properties, creators, many of whom were Black or part of the African diaspora and had built profitable businesses in and through the new internet and that platform.

You could see not only what was being produced culturally, but you could see who was benefiting and profiting, and the ownership structure was very different for a very short period of time. So to see that explosion of Black culture and Black ownership and value all at the same time was certainly, I think, a moment to be recognized and celebrated. But then come ’98, ’99, 2000, pretty much all of that is gone. What you see from that point on, the story that is flat in my opinion, is the continued recognition of the value of Black culture, of Black cultural products, but without the critical elements of ownership and value or profit that come back to Black creators, entrepreneurs, etc.

So that’s the story that I think remains flat: everywhere, as you mentioned, you see Black culture. You see the celebration of that culture. You see the ways in which Black culture powers social media platforms and everything else. But I don’t think we’ve found a way to create real value, in the sense of Black folks largely standing to benefit from the profits of that. I go back and forth about whether I’m optimistic or pessimistic about whether that’ll ever happen. All I know is that it takes and will take an outsize level of capital and investment to make sure that happens. We’ll wait and see if that actually materializes.

One of the stories in your book is about the early days of AOL and Ted Leonsis, a major executive there, who’s actually sort of a minor player in the story of The Verge off to the side. He says, “AOL is going to be built on communities.” He goes and spends money on a bunch of Black entrepreneurs to build communities to integrate with AOL. Later on, Ted Leonsis actually mentored the CEO of Vox Media, Jim Bankoff. So I can see it. Okay. Here’s somebody who believes in communities, he mentored this guy, and now I work at the company that guy runs.

What I don’t see on the flip side is, that other group of executives — I’m not ascribing anything to Ted, I think this is a structural problem — the other group of executives who were given the money, who were given the opportunity, they haven’t started the next chain of businesses, or platforms, or internet companies. I’m wondering why you think that is, because from the jump, the opportunities look the same, right? Here’s some money, build some community on this dominant networked platform. He was obviously trying to do it. He saw the opportunity, and that didn’t leverage itself into the next thing.

I think in part, that’s a critical recognition. There was something about that moment, and I do think that there was something about being able to see, in that moment, the value of Black cultural products as a sort of leader for this new wave of what this thing called the internet would become. And for someone like Ted to recognize folks like David Ellington, and Malcolm CasSelle, and the team that built NetNoir and the things that came after, I think was a recognition of, if we want to jump-start this, here’s a market that we know is solid, and we know the value of Black culture. And I think that was the sort of key recognition from Ted when responding to folks like David who say, “Look, we all know this, we know the world loves Black culture.”

That first investment made sense when you had that configuration of users, markets, etc., at that moment in time. I do think that part of what happened as the ’90s came to a close is that the internet started to open up, and we started to get a little bit more traction on the commercial potential and possibilities for this new thing. And my sense is simply that the sense of an existing market that would include and revolve around Black cultural products and producers evaporated when you started to see Black folks becoming much more of a minority part of that growing internet market. I think part of it was just a course correction, in a sense: we know how markets are built, we know how capital is built in terms of who powers it, in terms of labor and so forth, and who generally stands to benefit.

And I think that pattern then sort of corrected and played itself out as the internet expanded, expanded in terms of users, expanded in terms of types of commercial enterprises that began to connect. And I think it simply became much easier to say, we can do this, and essentially exploit Black labor and production and profit in the ways that we always have. That was the easy thing to do. The harder thing to do, which would have taken much more thought, deliberation, and effort was, how do we maintain this? And I think that’s simply the question that apparently, too few people ask themselves.

Do you think the platforms are doing a good job of reckoning with that now?

No, I don’t think so.

Why not? What would you like to see?

I think a lot of it really, for me, comes back to this sense of value, right? I still don’t think that the platforms have figured out a way to say, “We recognize and identify the ways in which Black cultural production is a unique added value to the platforms and how they work” (and Black Twitter could be one of those examples), and then to plan for it in a way that makes it a part of the business structure — a business structure that puts a monetary value, in some ways, on that, and then says, here’s a way that we’re going to take that monetary value and return it to those who have produced it.

I think some of the platforms have done things on the outskirts of that: trying to create an environment that is free of trolls and the like, and that is still an inviting place for Black folks and others to come and hang out in. But I don’t think we’ve turned the corner on how to flip this notion of value, such that those who are producing dividends for the platform also get seen in some way, or have an ownership stake in that. And I think that’s the big thing for platforms to wrestle with, at least until we have some platforms that are Black-owned or minority-owned; that might look different.

A big part of Black Software is focused on police software: where it started, where it came from, the networks that the police were using in the ’70s and ’80s. That obviously ladders into the current moment. There’s, I think, a national reckoning about surveillance and facial recognition taking place. We’re seeing it with Capitol protesters who didn’t realize how deeply they were being surveilled, or didn’t realize that their videos on Parler could be scraped. Give me a sense of that arc, because I don’t think enough people have looked into the past to figure out what we’ve already done with computers and policing in order to determine what we should do in the future.

Yeah. I think about this often, and my mind goes back to... I forget what the outlet was. It might’ve been ProPublica. It may have been a different source... but about two years ago, that outlet broke the story about the NYPD sharing its video surveillance system with IBM, and IBM using it to try to build an AI system to power facial recognition and identify criminal suspects based on the color of their skin. I remember reading that report, and the bombshell was, A, this is happening, and, B, this had been under wraps for the five years that the NYPD and IBM had been colluding to do this. I remember just having a chuckle and thinking, “Wow, you guys really missed it. This was not a five-year arc. It was a 50-year one, in very specific detail.” And it is, in terms of the relationship between the NYPD and IBM.

But more broadly for me, that arc signals something, which was a realization that, for all the conversation we’re having right now about the devastation of surveillance technologies and others utilized in criminal justice, this is not a new question; we have been here for a very long time. That, to me, said two things. Number one, it should give us pause. What should give us pause is that origin story of computing technology, where the first uses of the computer were to essentially devastate Black people and Black communities. And, [second,] the realization that — unlike today, where tech companies are running and saying, “save me from the bad PR” of the impact this is having on these communities — that was a moment in history where everybody was very clear and explicit: we have a threat, that threat is in our urban areas, it’s people who are Black, brown, generally poor; they are protesting in the streets, they are fighting; we see them as the face of crime and violent behavior; we need to use our technological powers to curtail their ability to thwart the nation’s order, economic order, racial order, power, etc. So [seeing] that realization very explicitly powering the first uses of our computational systems, I think, gives us pause about—

Wait. I just want to be clear. You’re talking about in the ‘60s and the ‘70s, the end of the civil rights era. The Watts riots figure prominently in your book.

Absolutely. Those moments where, at the height of the civil rights movement, you’re also beginning this crest of computing technology development and the widespread thinking about the future of computing, what is this going to mean for us now and in the future. It’s always hit me that that was a moment where there was a decision point. Here, we have this great power that comes with computing. We could have very well said, “How could we use the power of computing to help spearhead economic equality,” at a moment in time when that was clearly a thing of great concern, but our minds went to what our preeminent problem was at that time, which had everything to do with race, that had everything to do with Blackness and Black communities, and then we built our computing systems to fit the problem as it looked then.

I think what has persisted is that problem has not changed, or our framing of that problem has not changed. From the ‘60s, ‘70s, ‘80s, up to our current point, we still frame the problem of crime and criminality in terms of Blackness and brownness, and therefore, our computing systems have followed that same developmental pattern and trajectory where now we’re seeing, really, the end stages, if you will, or the full fruition of things that were really just germinating in 1966, ‘67, and ‘68 when these first systems began to be built.

Give me an example of something that’s come to fruition.

I mean, facial recognition has been an outgrowth of that, an outgrowth of the ability and desire to profile criminal suspects. This has everything to do with “prevention of crime,” as it was framed in 1965, as much as it does now. How do we prevent crime? Well, we try to understand who’s most likely to commit those crimes, and where, and what they look like, and what their MO is. The more data we have to give us a profile of that person, the more we can use police resources, whether that’s manpower or technological powers, to identify, police, and then constrain that person. So I think that impulse to be able to predict — and therefore solve a problem because you’re able to predict — the causes and antecedents of that criminal behavior shows that facial recognition [is the] fruition of an initial impulse that was really just about tracking people.

Of course, in 1968, what we had was a physical description as part of these systems: who you are, what your race is, the color of your eyes, your hair, your skin, etc., in as great detail as possible. Facial recognition becomes another layer that is even better than having a description of someone. Now I have the image of that person along with a description to match.

That’s why I say, in many ways, that facial recognition technology is the full fruition of an impulse that starts in the early-to-mid ’60s to say, “How do we identify and visualize and locate people who are predisposed to commit crime, and then how do we mobilize police resources to thwart them?” That, I think, has prepared or framed the evolution of policing technology ever since, and what has persisted during that whole arc as well, of course, is the framing of Black and brown people as a perpetual source of crime and criminality.

The predictive policing system is a self-fulfilling prophecy. The data says Black and brown people are going to do crimes. The cops go there.

And lo and behold, I find some crime. 
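[Editor’s note: the self-fulfilling loop being described here — biased historical records direct patrols, and patrols generate new records only where they are looking — can be sketched as a toy simulation. The two-district setup and every number below are invented purely for illustration and are not drawn from any real system McIlwain discusses.]

```python
import random

def simulate_predictive_policing(rounds=50, seed=0):
    """Toy model: two districts with IDENTICAL true crime rates.

    Each round, patrols are allocated in proportion to previously
    *recorded* crime. More patrols in a district means more of its
    crime gets observed and recorded, so an initial skew in the data
    sustains itself: a self-fulfilling prophecy.
    """
    rng = random.Random(seed)
    true_incidents = 10        # same underlying crime in both districts
    recorded = [12.0, 8.0]     # historical record starts biased toward district 0

    for _ in range(rounds):
        total = sum(recorded)
        patrol_share = [r / total for r in recorded]
        for d in range(2):
            # an incident enters the record only if a patrol is there to see it
            observed = sum(1 for _ in range(true_incidents)
                           if rng.random() < patrol_share[d])
            recorded[d] += observed
    return recorded

final = simulate_predictive_policing()
# Despite equal true crime, the district that started with more recorded
# crime keeps drawing more patrols and accumulates more records.
print(final)
```

The point of the sketch is that nothing in the loop ever consults the true crime rate; the data measures only where the police were looking, which is exactly the dynamic described in this exchange.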

“We found some crimes, and on and on we go.” The hard question there, and I’ve really struggled with this, is that we do want the cops to be effective, and the cops want computers. They say that computers are what’s going to help them be more effective and more targeted and maybe make better use of limited resources. There’s a natural argument here. “I want to do a better job at my homework. Mom, buy me a computer. I’m going to do my homework,” but it’s the same argument. “I want to be more productive at my job. Automation and computing technology can help me be more productive and more accurate.” What is it about the police so specifically where it’s just that feedback loop? What breaks that feedback loop?

One of the things that goes wrong often, and I think maybe even peculiarly in criminal justice and in policing, is there’s a recognition that this is hard. Policing is hard, the stakes are high, but then there’s this jump to what seems to be an easy solution. 

I think the connection and the idea and the dream that technology is always more efficient and easier and allows us to do our jobs better has, in terms of police, somehow become ingrained in that thought, to where it’s the technology that drives this story and not policing per se. Or not policing that says, “Look, everything that we want to do is done in the service of safety, ideally in the service of justice.” And we know how complicated that is, right? So just imagine a world where we’re not even thinking about technology and just thinking about policing, a kind of policing where you have two ideals, safety and justice, and good people who are in their roles to do good things. And that is: keep people safe, and do so in a just way, meaning there’s no disparate impact on certain people more than others. We’re not targeting certain people more than others. We’re trying to keep people safe generally, equally across the board.

And then of course we can’t extract those two things [that] I think have, again, over a long arc of time, become conjoined. Number one, that Black and brown folks are overly represented as criminals. And so it becomes a self-fulfilling prophecy that then gets mired into and really directs police use of computing systems. But the [second] is this overreliance on the technology and the belief that just because it’s a technological solution, it is better. A colleague of mine named Meredith Broussard coined this term she calls techno-chauvinism. And it’s really about that. The idea that technology, a technological system, is in and of itself useful, important, and has value.

But I always go back to that same moment in ‘64, ‘65, ‘66, and so forth, where if you went into a police station and said, “Hey, we’ve got this new technological tool,” you are more likely than not to have officers say, “To hell with that. I don’t need that. I do a perfectly good job knowing who it is I see out on the street, the relationships I have with people, the connections that I have, and sure, I may still do certain things wrong. I still may have certain prejudices that lead to different kinds of outcomes. But you know what? I don’t need your technology.”

And to see that story flip from that point up to now, where the predilection is to say, “This database is more effective in telling me whether this individual person or this set of people is likely to have committed this crime than my own good sense, or my own skills as a police officer being part of a community,” and being able to put two [and two] together and say, “Maybe this person seems like a suspect, but it’s not likely.” I’m much more predisposed to just trusting the system that says, “Hey, these five people are in this dragnet because a combination of their location data, their facial recognition matches, and other characteristics tells me I should show up at their houses to arrest them.”

So I think there’s something about that loop that has not been interrupted, and that has now become so ingrained in policing that says the technology is always better. I don’t need to question it. I don’t need to question its outcomes. I don’t even need to question the motives for which it was made. And the fact of the matter is, I think if you ask a lot of police officers to go in and explain to you the technologies they use and rely on, many of them won’t be able to tell you or explain to you how those technologies work. It’s a simple, “I push some buttons. It gives me a suspect. My job is easier. I go and do what I got to do.” And many of the disparate outcomes that make these things so controversial and unjust, I think come a lot from just that simple overreliance on technology and a technological solution for problems that are much more complex.

Let me put that idea in tension with what we were talking about with communities online. One of the good things about platforms like Twitter and Facebook and TikTok and YouTube is that Black people are just more visible. They’re able to communicate more broadly. There are fewer gatekeepers, fewer intermediaries. Black culture just quickly becomes the dominant culture. That is very human. It’s very humanizing, right? We’re just able to see each other more, no matter what community you’re in. If you have something to say and people are into it, you can be found, and that is a remarkable quality of the modern platforms, regardless of all their other ills. On the flip side, you’re saying the police are getting more and more dehumanized toward the communities they serve because of technology. That seems like a really core tension of the modern era.

Absolutely. I mean, it’s that technology produces the distance. And just thinking about the things that you can deduce from being out on a street corner versus staring at a screen with a variety of streams of data that tell you what’s purporting to be going on on that same street corner.

But aren’t those same cops on TikTok and Twitter and marinating in this larger culture? Why doesn’t that get itself resolved?

I mean, they are, or may be, as private citizens. But again, the question becomes whether I’m using this platform as a place of, “This is where I live, this is where I play,” etc. The moment it becomes, “This is where I do my work of policing and crime,” there’s a problem motive that permeates my engagement with that space, and now I’m seeing everything as basically potential suspects. And this is what has happened with the encroachment of police and policing into the platform arena, where the disconnect is between folks who are there saying, “Look, I’m just living life, and I’m doing things that give me enjoyment or having fun. I’m not policing myself in terms of who I’m talking to.” And so the same things might play out in that space as may play out if I were out on the street, but simply as an ordinary way of doing life.

But then when I’ve got cops that are coming into the space and saying, “I’m here to find people who have broken the law, or may have broken the law in some way,” and enforcing a surveillance lens around that, then all of a sudden, everyone becomes in some way a suspect the moment there’s a precipitating problem. All of a sudden that innocent thing [you] said that probably sounds crass or crude, maybe even violent, becomes evidence to use in a criminal investigation against you, whether or not you actually participated in a crime. And so what plays out on social media becomes a point of criminality, in some ways, differently than how you would make sense of things or how you would do police work if most of your work was out on the street, embedded in the networks of the people you serve. That gives you a different kind of discretion to check yourself.

There are different things that you know if I’m out on the street: the people I see and how things work, how people have conversations, who interacts with whom. There’s a different kind of sensibility that allows my decision-making to operate very differently than if I’m removed from those people. Now there’s distance, and I don’t know these people out on the street; I know them through their Facebook persona or their Twitter handle or what have you. And again, this distance just opens up further possibilities of thinking abstractly about human beings, of being disconnected from the humanity of folks who are on those platforms.

Here’s a hard question that I struggle with all the time; it’s very simple. Why do we all work for Twitter for free? I do it. You do it. I looked at your Twitter account before I came on; you work for free a couple of times a day. I work for free way too many times a day. Is it just that we’re being seen, and that is the value, and that’s the incentive? Is there something that can break that cycle where we’re all creating value for free? Because that to me is the heart of it, right? It can get as inequitable as you want, but at the end of the day, I still wake up and I’m like, got to send some tweets for free.

Right. It’s such a hard thing, and it’s hard for me to know whether this was specifically a deliberate forethought, a plan, but the truth of the matter is that the platforms have done well at creating a sense of value for all of us, such that they have now become something we are so connected to that [it] becomes part of our personal infrastructure, if you will. If I’ve got something to say, I’ve got nobody to talk to, particularly in a moment like this, and nobody’s going to listen to me. But I’ve got Twitter, so what am I going to do? I’m going to try to say what I have to say, and I know I have an audience, and this thing has created the possibility for me to have that audience, right?

So I work for free because, in some ways, I see some value that trickles back to me. It may not be monetary in nature, and I may not be able to calculate it in terms of dollars and cents, but I do see some personal value. That’s me; I’m not saying most users see it that way. Probably most just sort of go and do their thing because this is the place to do their thing. But I think that’s the kind of grip, and what you really need to do if you’re going to shut something like that down is to flip the switch and say, “I’m done, I’m off. I’m not playing the game anymore.” And the risks of that are sort of two-fold. Number one, I don’t get that small sense of value that existed there for me, all right? Where’s my audience now? I’ve got to go and try to build that on my own, or give up having a voice in a way.

The second part is, Twitter might just say, “Who the hell cares? We will go on without you. Thank you very much for your service, your labor, everything you provided for us. Sorry you’re going, but we’ll find people like you elsewhere in the future and continue to make money.” I think we all work for free because we find some value in what the platforms have created. I think if we are to get to some place where there’s not this constant tension about how this plays out on particular platforms and the direction that things take, it will take a mutual and equitable recognition of, all right, I do things that produce value for the platform, and I also gain value from something the platform has produced for me: an infrastructure for me to work, and play, and do all these other kinds of things in.

Some kind of reckoning that makes sure the labor and the value exchange there has some equity built into it. And I’m not so sure that’s the case, when we look at the large-scale financial value that a platform gets out of drawing on our mass labor and what we get as individuals from playing in that sandbox.

Well, the reason I ask is, one, I just want somebody to tell me to stop working for free, honestly. But more broadly, to connect to your thesis and your book: you’re describing a moment early on in the history of computing when big companies like IBM and big universities like Clemson would build pipelines for minorities to come join the workforce, get skills, train up, do all those things. Then in the ’90s, there’s a point where you describe them as the vanguard, where they actually stand to own the products, where they stand to make a lot of money because they built communities, they were early. That moment goes away as the platforms and the broader internet dominate, and that’s kind of where we are now.

But the other half of your book and your thesis and where you began was, well, there’s Black Lives Matter, there’s the ability for the community to harness the tools and say, “Look at this injustice.” And I think maybe for the individual, there’s no obvious equitable value there. I’m not getting paid to be outraged.

But for the community, there’s a massive amount of value that’s being generated, a massive amount of attention, and potentially political and cultural change. How should we balance that?

I think we have to do what Black folks, and people of color, and marginalized folks have always done in those situations, which is extract value from what we’ve been given. Number one, we don’t own those platforms, we didn’t build them, but we can utilize them, and we can figure out how to utilize them, both for our individual pleasure but also thinking about communal interests and ways that would serve those interests. And I think that’s the story you saw playing out with Black Lives Matter and many other instances. I think we have to do that on the one hand, but at the same time, I think we have to still fight for what we so long have not had, which is that sense of ownership, our ability to build our own thing, our own way, for our own purposes, and to our own ends. And I think that’s really what’s been the missing piece of the puzzle. A question I often get asked is, is it viable to have the next Twitter that is Black-owned, or minority-owned, and so forth? Is it viable to have a Black Twitter that is actually a Black-owned Twitter?

And I think it’s an interesting question, and I think it needs to happen, because the circumstances and the pitfalls with the platforms as they exist, which we’ve seen so many times, even with something as powerful and impactful as Black Lives Matter, are that everything is still contingent, right? Even if this is my place where I’m organizing and pushing for these things and having great visibility and outcomes, I don’t know that it’s going to exist continuously. I don’t know if the owners of the platform are someday going to decide that particular features of the platform get shut down or cut off, and then it doesn’t afford me the same opportunities to gain visibility and engagement in those same ways. I cannot assume that the powers that be, law enforcement, etc., won’t stake these places out for surveillance in the ways that they have over the last few years.

And then essentially turn this into a platform that’s really not a place I can go to and count on to do this kind of work, that pushes forward in the interest of me and my community and all those things. I think we have to play the game in the spaces that we are able to, and gain a foothold there, transform them in the ways that we can, but I think we still have to push for that dream that we’ve seen unrealized all too often, which is the ability to own, shape, develop something that truly is molded, not only in terms of our culture, but in terms of our interest in both the immediate and long-term outcomes that it delivers.

Let me give you a small example of a creator economy story that I think is really interesting. There’s Verzuz, from Swizz Beatz and Timbaland; they started it in the pandemic, just live streams of all of their famous friends talking about their music, playing their music. It’s supposed to be a battle, but it’s not really a battle; it’s just like a cool hang. And they have leveraged this from a thing they were doing on Instagram Live all the way to a business that is now going to run the Pro Bowl for the NFL.

And my understanding of Swizz Beatz and Timbaland is, they are not bad at business. I assume they’re being paid handsomely for this franchise that exists now, that they can take in all these places, monetize in all these different ways. Why isn’t that a more common story, right? Is it just because they already happen to be famous, and they already happen to have the business infrastructure that connects? Jay-Z is the guy in charge of culture for the NFL, so … Is it just, those are natural, built-in advantages, or is it that systematically, we don’t want those things to happen? Because there are lots of people online who build cool products, that have a big audience on the platforms, and the second they try to take them off the platform, or monetize directly, or sell it again, all kinds of bad things happen.

Yeah, I do think there’s a sense in which people have different starting points. Verzuz has a starting point that is much higher than your average entrepreneur who has a good idea, maybe even a good prototype of a product, and so forth. And that missing piece is, in my mind, that connection to a network of contacts, resources, capital, etc., that’s already embedded for someone who is a celebrity, who’s already producing in one sphere and translating that into another. I think that’s what we’re losing out on, because that group of people is relatively small, right?

We have a much [more] expansive base when you look at people of color who are entrepreneurs and creators and so forth and have great ideas, but what they don’t have is the access to those areas of capital and power, the network that says, “All right, for me to go from here to here, I need this, this, and this,” and be able to know, “All right, I get that from this person,” and get an introduction to that person, or have three or four people that can say, “Oh, I know what you need here. I can make sure you get it from these folks over here.”

I think that’s where we’re losing out because our eyes are always pitched at those who already have access, so it’s easy for these folks to just say, “Hey, here’s my folks I go to for money, for ideas. When someone recognizes what I’m doing, they come to me to ask me to do something else.” That’s already there. We don’t have too many folks that are looking below and looking at all the talent that’s out there in the world and saying, “Oh, I recognize what could be possible if I see what you’re doing and elevate what it is that you’re doing in all these different ways, whether that’s capital or connections or a wider network and so forth.”

I think, to me, sometimes I like to pitch it differently than just thinking about the nefarious forces that say, “We don’t want you playing here,” and I certainly think that’s obviously real in a sense of it, but I think so much more of it is simply our laziness in almost some ways to say, “Let me do something different. Let me look in a different direction and see if I find different people and ideas worth investing in and backing and pushing because the ideas are great and better, because the people who are pushing and producing them are different and are liable to bring something very different to the table.”

We tend to just simply say, “Look, we know this works, so let me look for that same thing, in a very different way, but the same thing over and over and over again,” and we tend to have this sense about the same thing, being able to count on it, being reliable, and we miss out on so much that’s out there and untapped in the world of Black and brown folks who are ready to move into this territory who do have ideas, who have produced, who do have experience. Often, we simply say, “Ah, these people don’t exist out there. None of them have crossed my desk. No one’s shown up in my office somehow,” but it’s because we haven’t looked, and that’s really the reason.

This episode isn’t out yet, and it may not be out yet by the time people are listening to us, but we did an episode of Decoder with a woman named Arlan Hamilton, who is a VC who runs a [venture capital] company called Backstage Capital. They’re focused on investing in underrepresented founders. I asked her, “Are you afraid of the tech giants?” She said, “Well, no. They’re not looking where I’m looking, and every other tech executive in VC is terrified of the tech giants because they are all looking in the same place.” It’s interesting to connect those two answers and to say, “Well, there’s actually a world of opportunity. We just have to build the network to create value out of it.”

I guess what I would ask you is... I’m like a capitalist at heart. Isn’t that a better solution, [to] focus on building that network and elevating the people and giving them access to tools than, I don’t know, worrying that Jack Dorsey is going to hire the right three Black people to build the next features of Twitter? Because that is a solution I do hear about all the time, that inside the big tech companies there’s not enough representation. I don’t think there is, but that feels like a narrow, timeboxed solution, and honestly, you’re still just betting on three individuals to get it right, no matter who they are. To me, it needs to be connected to that much bigger idea, and I’m wondering if you see that big idea taking form.

Yeah. I think in some ways there’s this disconnect, particularly in a capitalist landscape where there’s supposed to be so much faith in the market, in not going to look at the actual market to see that there’s a whole set of possibilities out there, and ignoring them. I don’t know whether it’s simple obstinacy or risk aversion, but I do think there is a sense that if we’re going to have real innovation, and not just continue to replicate the same thing over and over with a new twist, then you’ve got to do a 180 and completely look and play in a different pool. There’s always this talk about representation when we talk about tech companies, as you mentioned, and the number of Black and brown folks that are at Facebook or Twitter or what have you.

That is important. I think it’s important to have critical masses of those folks across the board at those companies, and particularly in leadership, but I don’t think that’s the solution, because you’re still within an umbrella that says, “This is what we do, and everything that you do is in the service of what we do.” We’ve got to have a lot more people thinking about what it is we do and can do out there. I think the more we invest in that wide diversity of possibilities... I’m blown away, and was blown away in writing Black Software, looking at the power and ingenuity of Black folks at that particular moment. I work every day with students, and you just see the amazing possibilities that are there, the kinds of ideas, the kinds of people that come through. It’s like, I can’t even write the story of what could be, other than knowing that if people look around and look elsewhere and invest in those possibilities, I mean, something amazing is bound to come out of that.

I think that kind of spreading of the wealth, as it were, in terms of investing in new ideas, new people, new networks, I think that’s the next big thing. The person who realizes and figures out how to harness that new area where nobody’s looking is going to build something special and probably profit a good deal as a result.

So let me end here because this is, I think, the biggest question of our moment in the pandemic. You and I are talking to each other through software. Our lives are more and more mediated by screens, but ideally halfway through the year, a lot of people will be vaccinated. We’re going to get away from our computers. We’re going to go back to life, regular life in some way. What do you think our relationship with software will be like? Not just for Black folks, which is what your book is about, but it feels like all of our relationships to software and screens and platforms have changed in this pandemic year. We’re all thinking about it more, we’re all demanding more. What is the change you hope to see when we go back and I can interview you in person again?

I mean, I think at least one thing that will come out of it is a much deeper reflection on and understanding of what screens and mediated contexts are good for and what they’re not. Above all, I hope we come out of this with a renewed sense of the value of face-to-face communication. And again, I think we’re finding that it’s maybe not completely necessary for everything, but we’re starting to see when it’s missing and the implications of when we cannot have that kind of contact. I think to be human is to be together physically, to be able to interact and engage and see, and to engage all of our senses in a way that can’t be done on a screen. And so I hope part of what comes out of this is a renewed commitment to thinking about what makes us human and how we build our lives around maximizing our ability to connect as human beings and understand the deep humanity that is at the root of all of us.

What do you think is going to happen to our relationship with software?

I don’t think it’ll change—

I want to throw my phone out the window. I’m done with it.

I don’t think it’ll change much. I think software has a grip on everything that we do that is not going to be undone in this moment. In fact, it’s probably going to work its way into more facets of our lives. Probably you, certainly I, and many others saw the evolution of email years ago and how much it bound us to longer working hours and so forth: the productivity of screens. I think we’re starting to see that, and the encroachment of the screen on our space, on work, on home, on life. There will be companies that say, “You know what? I don’t think you have to come back into work anymore. That screen is just fine, and it’s much cheaper.”

And so more and more of our lives will be lived on the screen, mediated through software that constricts what we can do, whether you’re on Zoom or Microsoft Teams, where the software has dictated how you can interact and with how many people you can comfortably interact. And so I don’t know, but I don’t think it’s going to go in the direction of freeing us from software. I don’t think it’s going to go in the direction of less reliance on technological tools that drive everything that we do.

Decoder with Nilay Patel

A new podcast from The Verge about big ideas and other problems.
