
John Scalzi on machine learning and remembering our favorite pets

A Q&A with the author of “A Model Dog” from Better Worlds, our sci-fi project about hope

John Scalzi is a familiar name to most science fiction readers. He’s best known for his long-running blog Whatever as well as books like Old Man’s War, Redshirts, The Collapsing Empire, and, most recently, The Consuming Fire. In “A Model Dog,” Scalzi looks at a tech billionaire’s project that first appears frivolous but turns into a heartbreaking effort to preserve the memory of a cherished family pet.

The Verge spoke with Scalzi about his story, the implications of big data, and remembering our favorite pets.

This interview has been lightly edited for clarity.

What was your initial inspiration for the story?

Well, I think part of it is that I was thinking about the whole concept of machine learning and the idea that AI is something different than we expected it to be and that it is actually doing a pretty good job of understanding humans. So I wanted to talk about a somewhat optimistic view of that particular technology, and I also thought that a great way to do that would involve pets because everyone loves pets.

It seems like a fine way to introduce the idea that machine learning can be used not just to screen our calls or answer our emails by predicting what we’ll say next, but to actually improve our lives with something that’s cute and fuzzy.

Photo by Amelia Holowaty Krales / The Verge

“A Model Dog” is about a tech billionaire’s request. What is it about the incredible wealth these billionaires or major tech companies have that makes these sorts of what appear to be frivolous projects a reality?

Honestly, if you have the money and the resources and the people to do that sort of thing, why wouldn’t you do it? I mean, quite honestly, if I had $12 billion and the ability to make people do something that was initially frivolous but made me happy, there’s a pretty good chance that I would go ahead and do that. I think part of it is that the more money you get and the more power you amass, the less tethered you are to reality. So “why not spend $100 million to re-create a cat?” doesn’t sound like a stupid idea.

Now, the thing about these particular cases is that even things that start frivolously can have a beneficial side effect. So you can argue that this is just the sort of blue sky research and technology that people should be doing, that not everything has to be directed research. There can be things that are frivolous, and you might get something out of it. Ultimately, the guy going, “Uh, I should have a machine learning pet” and making someone do that, that’s an expression of extraordinary privilege. I wanted to capture that, too. If you think about all of the billionaires who have pet space projects and things like that, compared to someone shooting their car into space, making a machine learning pet looks fairly responsible.

It does seem like this type of experimentation would have a downstream effect. I know Neil deGrasse Tyson is fond of saying that going to space brought with it a number of other things you wouldn’t expect.

Absolutely. It’s the whole Velcro effect. You go into space, so you have to invent Velcro. It’s weird when you think about it. I’m not necessarily a proponent of the idea that you do a big thing because you get a few small, ancillary things out of it because it’s not guaranteed that you’ll get anything out of it. But it’s certainly not wrong. Anything you do is going to have failures and spinoffs and dead ends. But those failures, spinoffs, and dead ends aren’t necessarily things that are going to be bad or useless. It might be an unexpected thing. You do see this. A guy wanting to make a more powerful adhesive ended up creating the sticky note at 3M. Even if something doesn’t work the way you expect it to, you still get something beneficial out of it. And, to some extent, that’s what this story also nets: they aimed for one thing, and they ended up getting another.

Unfettered data collection plays a big role here, and there’s been a lot of spilled ink lately about how social media companies use data. How do you see this mindset in the context of your story and the sort of optimistic future-ish world?

I think, in this particular case, it was founded in the sense of “We’re doing this thing, and if we were to do it in a large-scale way, it would absolutely be an invasion of privacy. But since we’re doing it to this one guy, it’s not a problem. But I think the thing we’re doing after this one guy is my problem.” I do think there is a constant reevaluation of what privacy means. What are we willing to let our tech lords know about us in order to get things out of them?

As an example, I have the new Pixel 3, right? I pretty much have most of the permissions set to allow Google to mine my information. On one hand, it’s absolutely true that Google knows where I am and what I’m doing and all these sorts of things. But it’s also true that, except in an extraordinarily abstract way, Google doesn’t actually care. There’s not someone at Google who has been assigned to John Scalzi, going, “Oh, I see John Scalzi is at an airport today!” It is a vast, unconscious processor that goes, “Oh, he’s at the Raleigh airport, so let’s point to where he can get a burger,” or something like that. Google doesn’t care, in a concrete sense, about us as individuals. But it cares about us as a stream of data. When you think about it that way, you go, “Well, as it happens, I do want a burger, and I do want someone to tell me where one is.”

So is the balance worth it? For some people, the answer is absolutely not. They turn off all the permissions on their phone. For some people, it’s like, “Go ahead, Google.” And if it’s not Google, it’s Facebook or Apple or Microsoft or Amazon. You choose essentially which data company is going to know excessive amounts about you. And it is that constant reevaluation. I think that even people who are privacy advocates end up making that deal because, at this point, that’s how we function.

That said, at every step of the way, the person who’s giving up the information has to be made aware of what information they’re giving up and acknowledge that they’re doing it in exchange for something else. And that’s been the problem over and over again: so many of these data services are not playing quite on that level, which we’ve seen recently with Facebook and their new Portal home assistant. They say, “This is going to be locked down and private,” but it’ll turn out that Facebook is going to mine that shit for as much data as they can, and that’s the thing that burns people. I think if you had transparency, people would be more willing to make some trades. But the thing that pisses everyone off is the idea that they are being manipulated or used without even the courtesy of a “thank you” for it.

So, with all of that in mind, would you spring for one of these machine learning dogs?

[Laughs] No, I wouldn’t, and I’ll say why. Since we’ve moved into our current house, we’ve had six different cats. Each of the cats has been wonderful and a pain in the ass with all of the emotions around the cat. But each of the cats has been their own individual “person,” as it were.

I think that’s part of the joy of pets: you get different variations on the theme of the cat or the theme of the dog. You have your time with that one cat or that one pet, and then you move on to another one that you can care for, who will hopefully care for you, and who you can experience life with. In that sense, it’s always sad when a pet dies. It’s always sad when you’re confronted with the fact that you’ll always outlive your pets. But on the other hand, you have the joy of getting and seeing a new pet who explores the world and will become part of your family. We just got a new cat in the last couple of months, Smudge the cat. And Smudge has two modes: he’s adorable or he’s an asshole, either one or the other. It’s been a delight to see this kitten we found in a field, desperately hungry and in need of rescue, become this ridiculous creature whom we’re all incredibly fond of. I wouldn’t exchange the opportunity to get to know a new pet for a machine-learned pet that I’ve already experienced. I don’t think going backward with your pets is a good idea.

But I also do think that there’s an immense market. People clone their dogs, and certainly machine learning your pet would be cheaper. So for some people who wanted to have the same pet forever and ever, it would be great. I’m not that person.

I’ve been thinking about this a lot lately: I’ve had a number of pets over the years, and ever since cellphones became ubiquitous, I have far more pictures of Tiki, Merlin, and Arthur, but I don’t have that record for Fionna, Buck, or Tilly. Trying to parse out how you remember them is interesting.

Yeah, absolutely. That’s certainly the case. My cats have a Twitter feed. They have their own social media life outside of me, and I find that absolutely fascinating. Not only do I have a relationship with my cats — this started with me taping bacon to my cat Glaghghee — and not only do I have these pets, but because of social media and the internet and the availability of being able to take so many pictures, they have their own presence beyond the traditional pet-and-owner relationship. And I find that sociologically fascinating.