Adobe’s Scott Belsky on how NFTs will change creativity

‘Prepare as NFT’ coming to Photoshop

Adobe is one of those companies that I don’t think we pay enough attention to — it’s been around since 1982, and the entire creative economy runs through its software. You don’t just edit a photo, you Photoshop it. Premiere Pro and After Effects are industry-standard video production tools. Pro photographers all depend on Lightroom. We spend a lot of time on Decoder talking about the creator economy, but creators themselves spend all their time working in Adobe’s tools.

Adobe is in the middle of announcing new features for all those tools this week — at its annual conference, Adobe Max. On this episode, I’m talking to Scott Belsky, chief product officer at Adobe, about the new features coming to Adobe’s products, many of which focus on collaboration, and about creativity broadly — who gets to be a creative, where they might work, and how they get paid.

Scott is a big proponent of NFTs — non-fungible tokens. You’ve probably heard about NFTs, but the quick version is that they allow people to buy and sell digital artwork and keep records of that ownership in a public blockchain. The idea is to create scarcity for digital goods, just like physical products — to definitively say you own a digital piece of art, just like you own a physical piece of art. Of course, the internet is a giant copy machine, so it’s a little more complicated than that — but a lot of people, including Scott, think it’s a revolution. In fact, Photoshop itself will be able to prepare an image to be an NFT very soon. I’m a little more skeptical — so we got into it.

Scott and I talk about all that. And file formats. And the future of local processing vs. cloud computing. And we squeezed it into just about an hour.

This transcript has been lightly edited for clarity.

Scott Belsky, you’re the chief product officer at Adobe and the executive vice president of Creative Cloud. Welcome to Decoder.

Thanks for having me.

It’s been a while since we’ve talked, I’ve always enjoyed our conversations. We have a lot to talk about. This episode of the podcast is coming out alongside Adobe Max, your big conference, and you’re announcing a ton of new products there, including big features for Creative Cloud on the web. There’s news about the Content Authenticity Initiative.

Yes.

You’re very bullish on NFTs, which I really want to talk to you about, and I have some big questions about the future of computing. I was looking at these topics and I was like, “Man, I need like two hours.” But we’re going to try to get it all in.

Let’s do it, a power hour.

Yeah, exactly. But I want to start with what I have come to think of as the Decoder questions; the basics of how Adobe as a company works. I think Adobe, as a company, we take for granted in the best way. The products are ubiquitous, they’re famous, entire industries depend on them. But I feel like it’s a company we don’t know a lot about. So just start with the basics: you’re the chief product officer, how many people work on products at Adobe?

Probably somewhere around 7,000 people fall within the creative organization of engineering, product, and design that I oversee. And then there is of course a separate product organization on the digital experience side of the business, which I don’t directly oversee. And the Document Cloud, which is the PDF, Acrobat business. So I don’t know the exact numbers, but we have quite a large product, engineering, and design organization at Adobe.

When you talk about the difference between things you have to do in the future, trying to find the next turn, and then Photoshop — because when I say entire industries are organized around some of your software, entire industries are organized around Photoshop. How do you manage the split between making sure that product does what it needs to for its existing customers versus pivoting to what the next generation of customers might want?

You’re getting into my everyday drama right now that I have to deal with.

This is the heart of Decoder. I find it and I push the button.

It’s a great, great question. And there are various ways we go about this. So look at a product like Lightroom. Lightroom now has two variants. There’s Lightroom Classic, and there’s Lightroom for Creative Cloud, which is a more cloud-native photography organization and editing solution. Why did we do that? Because there was actually a legacy, incredible world-leading photography base that just couldn’t imagine ever doing anything on the cloud, and we really wanted to make sure that that product took the path in its evolution that was more towards local, on-premise photography management. And we had to honor that base. 

But at the same time, we didn’t want to constrain the next-generation photographer that wants everything at her fingertips on mobile, desktop, web, and doesn’t even think about where the images are actually stored. And so sometimes we have to go that far and actually splinter and create two products. But we also recognize what is truly empowering for our customers is when everything works together. And so having Photoshop come to iPad, and now as we’ve just announced, coming to web. But Photoshop is Photoshop is Photoshop. You open it anywhere, it is full fidelity, truly interoperable across surfaces without any lossiness. That’s an important thing to deliver to our customer, and that’s kind of a promise that we’re uncompromising on. In fact, the reason why we can’t port 30 years of features to a new surface like the iPad or the web on day one is because we just have to focus on the file format and the fidelity and trustworthiness of the file itself. Because the PSD is kind of like an iconic format, to your point, that industries standardize on, to some extent.

So I ask every executive that comes on the show, and maybe we can use the Lightroom example, how do you make decisions? How do you decide, “All right, we’ve got to actually split this product into two”?

“Photoshop is Photoshop is Photoshop”

It’s really about, it will sound cliche, but it’s very much anchoring [ourselves] in the customer and understanding where they are going. So, look back to the days when Photoshop was used to make every website. There was a subset of customers that were focusing so much on websites and only using a few specific tools in Photoshop. They struggled, amidst all the rest of the power of Photoshop, to be able to do that in a smooth and efficient way. And then products like Sketch came around that basically took those specific tools out of Photoshop and kind of flanked the product with a new product that was dedicated to screen design. Now we have Adobe XD, and of course others have emerged in the space as well. That’s an example of being very customer-centric.

Now, we could have just said, “Oh, let’s make Photoshop better and better and better and better for the screen designer.” But actually, the customer was saying that they don’t want all the other stuff in there. They want a vector-centric editing capability that vertically integrates prototyping. At the time people were using third-party services to prototype the things that they made in Photoshop and places like Sketch. So we had to listen and say, “Okay, we need a vertically integrated screen design solution.” And now that is collaborative by default. That’s the playbook. And whenever we’re sitting in a meeting and we’re pontificating as people around a table, I’m like, all right, we got to end this meeting right now. What are customers struggling with and how are their behaviors going to impact our roadmap?

If you ask customers, they would invent a faster horse, right? There’s what customers want, which is pretty narrow problem-solving for their needs right in the moment, and then there’s the next turn. I very much doubt a lot of your customers are saying, “I need to mint NFTs to multiple blockchains.” But I know that’s on your mind. How do you balance those two?

It’s another great question, and it’s something that I think every leader of a company like Adobe needs to be very paranoid about. Because it’s very easy for us to build everything in the image of what we’ve done before. And I’ll ask myself this tough question since you’re not asking it, should XD have been on the web by default in the beginning? Hey, listen, we have a customer base that was never willing to trade performance and precision for ease of collaboration. And when we went to customers and said, “Well, if we bring some of this stuff to the web, you’re going to have a bit of a laggy experience and sometimes bandwidth is going to get in the way of what you want to do, and you’re going to feel constrained by whatever happens to be the case at Verizon today.” They would’ve said, “Heck no, give me the power and precision. I want faster and faster and faster. And when Apple comes out with their new chips, I want it to be even faster.”

So the idea of going to the web was actually sort of crazy at first. And kudos to the companies that took the risk and also managed the years of frustration of customers because they weren’t delivering on the performance side that was required to be in business. But now bandwidth obviously is better and browsers are much more sophisticated. And partnering with our friends at the Chrome team and at Microsoft, at the Edge team, we were able to start to say, “Okay, what’s the future of web apps? And how can we actually take a product like Photoshop to the web and have that ease of collaboration coupled with performance?”

So in some cases it really does mean having some “burn the ship” moments where you’re like, okay, we are going to go all-in on the web right now, and we are going to make sure we nail this. But then again, it’s back to the customer. It’s knowing these are people who grew up in the age of Google Docs, they expect to be able to just share by clicking an icon. They don’t want to have to send an email and have a version control issue from day one.

There’s an elephant in the room. You keep talking about the other companies that have gotten there first. Obviously Figma is right there, they’re a very successful company. They’re a startup, they were web-based from the start, they now have a $10 billion valuation. Adobe’s a big company, do you wait for the small company to come and prove out the idea? Was that, oh man, we got to get there? Was that a competitive pressure for you? Or was it, man, I had this idea, but we had to serve the customers first, and now we can get there because customers are using Figma and they’re saying, “Why aren’t you doing this?”

Yeah. Well, listen, Dylan’s a friend. I met [Figma co-founder] Dylan [Field] when I was still an independent entrepreneur running Behance back in probably 2010, when he was actually first cracking imaging on the web, which was not doable. And that’s where they kind of pivoted to screen design and vector-based creation. I think that when you are a market leader it is really helpful to make sure that, yes, you have to anchor on what the majority of your customers need, which is never something at the edge, it’s always what is at the center. And the folks that were willing to withstand frictions of web creation three to five years ago were a very small group of people.

And so I try to have small teams exploring some of those things on the edge that may become the center someday. And do we always wish that we had started some of those things earlier? In some cases, yes. In some cases, no, because the technology’s changed and pivoted so many times that it’s almost easier sometimes to build on the modern stack today than to have started something three years ago, that now you have to re-platform. So there’s some advantage. I actually believe that by beginning our web journeys more recently, we’re going to be able to capitalize on some very fundamental new technology. But the market signals itself, right? Everyone sort of elevates, hopefully, and this all serves the customers at the end of the day. Everyone elevates everyone else’s game. What I like about the creative space right now is that there are so many new technologies coming in. So many smart people thinking, at the end of the day, this is going to serve customers regardless.

So let’s talk about the ultimate “moving from the edge to the center,” which is putting Photoshop on the web.

Yes.

That’s part of the announcement at Max, it’s Creative Cloud Web. Tell me what the thinking is there. Photoshop is a classically heavy app. When The Verge reviews team does performance testing on laptops, we open Photoshop, we open Premiere. Bringing Photoshop to the web seems like a big deal, Illustrator is coming along for the ride. How does that work?

Well, first of all, what we know is that every Photoshop workflow is a collaborative one, to some extent. Whether you’re doing it for a client, you’re doing a project with a friend, whatever the case may be, you’re sharing it with somebody. When you’re sharing it with somebody, what do you want them to be able to do? Well, you want them to be able to review and comment on it, which we wanted to do first out of the gate.

Step number two is, if they want to jump in and make a slight tweak or change, make a copy edit, whatever — do they really have to go back to you and ask you and then have more back and forth? Can they just click in it and just start editing right away? And so the first phase that we’re also launching now is sort of a light level of editing, very nondestructive, full fidelity, real PSD in the cloud. We’re not bringing all the features on day one, but we really want to unlock all those basic edits that are just best done now in the browser with whoever you’re working with. We just want to knock that out of the park on day one.

Do you have a button where the boss can just make something bigger?

Oh yeah, the scale functionality is shipping in Photoshop on the web. So, oh my goodness, we now are entering a generation of bigger logos as a result.

So you bring it to the web in this way to enable collaboration. You were talking about files and folders and PSDs and radically different expectations the different generations of consumers have. Once you get the PSD in the cloud, do you get to change that file format? 

Having a PSD in the cloud is a Pandora’s box of opportunity

It’s a Pandora’s box of opportunity. When you have a PSD in the cloud, you can allow people to access it across any device. You can allow anyone to collaborate on it with you. You can approach the world of co-editing, where people can be in the same document at once. You can also do all kinds of fun integrations with third parties. I mean, imagine any image in the world that you’re working on, on any website, being able to right click, edit in Photoshop on web, jump into a new tab, make a change, click save, and go back to where you were. What sorts of creativity will that unlock as people are sharing content on social media websites all day, or publishing an article to your blog, and I need to comp out something, or add a layer somewhere, add a little watermark? I mean, these things, it’s just such an unlock for so many more people to enter the funnel and to be able to be outfitted with creative capability.

When you think about the architecture of Photoshop historically, you’ve got an app on your laptop that’s running on a local Intel, or AMD, or now an Apple chip. You’ve got a file, you’re operating a file, you send the file places. What is the architecture of Photoshop on the web? Are you running that on your data centers? Is it the same sort of x86 app that we’re used to with the web front end? How does that work?

No. So it’s all native in the browser. I mean, the client side work and the technologies we’ve leveraged are the latest for web apps, as you would see anywhere else. And the team can give you more specifics, but we have really tried to make sure that as much is done on the client side as possible, because performance is crucial. This is one reason why I’ve been trying to get the loading times down with the team — really the initial load, because there’s just so much. The first time you ever use Photoshop on the browser, we’ve got to bring a lot local for you to be successful and be nimble. But then beyond that, a lot of that stuff is happening locally, and then the cloud’s doing its job keeping things in sync.

And eventually [the cloud will bring] a lot of the power of AI at your fingertips as well. I mean already, when we have masking and stuff like that, a lot of that is algorithmically or AI-driven from the server side. And I think that’s part of the future of creativity, is allowing people to spend more of their time being creative in the exploratory process, as opposed to the mundane, repetitive stuff that we do, like applying marching ants around hair all day.

I’m curious about that. When I think of a web app like Photoshop or anything else, really, I think, okay, I’m looking at the front of an app, but all of the heavy lifting is being done on a computer somewhere else. And that lets you bring that app to many more kinds of devices than you would otherwise have. So I know a lot of designers use Figma, they literally work on Chromebooks, pretty midrange Chromebooks all day because it’s just a web app and they don’t need a ton of local processing power. Are you saying that Photoshop will still need a bunch of local processing power?

Well, Photoshop is being made to be able to take advantage of local processing power, right? But I think “need” is a good question. We want to bring Photoshop and some of these capabilities to everyone. We have tested on some Chromebooks, certainly the higher-end Chromebooks, and are pretty satisfied with some of the initial results. But there is the kind of headless Photoshop approach that we could have taken, which is basically streaming Photoshop from us running it in the server, which we did not do because we think that people need to be able to have agile, local operations.

There’s a big split in computing architectures going on. There is the traditional Intel and AMD x86, there’s Apple’s new approach with its chips, and then there’s a huge push to just move it all to the cloud. Maybe most notably in the video game industry, where game streaming feels like the future, and people still have really big, heavy consoles sitting in their living rooms because that future’s not ready yet. When you think about architecting your apps, that’s three very different paths to go down. How do you pick between them?

We are very much back to the edge and center here. So at the center, we want to make sure that we bring the most powerful and creatively capable tools in the world to market. And so when it comes to the future of Photoshop on desktop, and Premiere Pro, and After Effects, and all the Substance 3D and immersive products, etc., we are in lockstep with Apple and with Microsoft on the absolute latest Apple Silicon and Arm chips, because we need to make sure that we’re always pushing that edge. I mean, think about the types of things that people are rendering and creating these days.

I think we get really excited about all the collaboration stuff, which I’m about to talk about, but there is this need, and that’s why it’s a cross-surface experience. There are things you want to do that are more collaboration-driven, and then there are things where you just want rock-solid performance. If I can wait five seconds for this to be done instead of three minutes, all day, every day, I’m desktop, desktop, desktop. And to the metal. I want to make sure I get all the juice.

But again, this is the insight on the web side, is that there is a new generation of people that are achieving better productivity as much through collaboration as they are through performance. And the web is just ruling the world there. I do think that the companies that win will be able to bridge both, and that’s very much our strategy. Sometimes you just want Photoshop, Photoshop, Photoshop on desktop. And you just want to have all the optimized capabilities for your chipset as possible. And then you may want to open that on the web and do something with somebody else.

Apple makes different GPU decisions than the Intel side. Famously, they do not support the very popular Nvidia GPUs. They have a different framework for GPUs called Metal. Now they’ve got an entirely new framework on their Pro machines in their new chips. When you’re trying to address both of those customer bases, how do you make sure that you’re optimizing as close to what the hardware will allow as possible?

Well, it kind of goes back to that principle of being platform-agnostic. We do want to meet customers where they are. That being said, we partner with these hardware partners of ours to make sure that we’re fully utilizing and embracing what they’re bringing to market. And sometimes one platform leapfrogs another for a period of time, then the other one leapfrogs them, and people keep kind of doing that. But it’s the customer’s choice. We really want them to be outfitted with the very best power possible. And obviously M1s have really exceeded our expectations when we first saw our products lit up on them. And we did all the work to optimize them for the M1, so that was super exciting. Our customers on those devices are thrilled.

So in the new machines with Apple’s new GPU structure, are you ready for that on day one, or is that going to take some time?

We’ve worked with them, and we are going to have performance improvements, for sure, right out of the gate. And then there will be more coming.

You’ve spent a lot of time talking about collaboration and what businesses need as they grow. There’s a widening gap between consumer creative tools and what I would now call enterprise creative tools. The most popular consumer tools are integrated directly into the distribution platforms: TikTok is a very powerful video editor, but it’s also a huge distribution platform. The same for Instagram. Photoshop with web collaboration, it’s expensive, it’s rapidly becoming a kind of enterprise tool. And the tool sets are not evolving together. They’re going in different directions because they have different constraints and different audiences. The way I would most simply phrase that is: being great at TikTok does not make you great at Premiere. It will teach you a lot of the language of video editing and give you instincts, but it will not actually make you good at the app. How do you bridge that gap for the younger generation of creators?

Well, our big bet is that the industry is moving, and has always moved, towards the side of being able to stand out creatively. And I actually shouldn’t even use the word industry, I should just say society. People want to stand out. And especially as we get replaced by algorithms when it comes to productivity stuff, we’re all going to want job security through our creative stuff. And whether it’s hosting podcasts or writing, or telling a narrative with data in a visually compelling way, or whatever the case may be, every small business needs to produce content.

There’s a new movement of template-based creativity tools out there where people just take a template, edit it a little bit, and then post it on social media. I actually see the early signs of people starting to feel like they’re being generic now. And again, people want to go further. They want to add more creativity. So our bet is that everyone’s going to ultimately want to stand out. By the way, a huge amount of Premiere Pro customers export for use in TikTok. Why are they doing that if they have a local video editor? Because they want to do something that people look at on TikTok and they’re like, “Oh, how did they do that? You can’t do that on TikTok.”

Maybe it’s part of the human condition. Maybe it’s because it’s the only thing we, uniquely humans, can do, is create and transcend what’s been done before, and so we all have this innate desire to do that, to sell our products, to sell our ideas, to sell everything. And so that’s where we need to meet the customers. Now, I think your point is right though, we can’t just make enterprise tools, in a sense, we can’t make creative tools that require huge learning curves. We have to make our products more accessible to more people. So that’s been a huge effort. In Creative Cloud what we’ve been doing is trying to make it easier to onboard, make it easier to learn tutorials, all that kind of stuff. But also, we are developing some stuff that we’re going to talk about in the next few months that’s going to reach a much broader audience, and in a more reimagined way that doesn’t require any learning curve at all. And I think that’s part of our mission for creativity for all.

Do you think that is just breaking out the tools? Like I think about content-aware fill, which is a revolutionary feature of Photoshop. It has just changed the landscape of photo editing in a real way. But to use it, you’ve got to know how to use Photoshop. And that’s the sort of tool where you could democratize it and bring it out to other places. Do you think about it that way? There are certain powerful features that you should just understand how to use, and that will ladder you into the bigger app and maybe a career?

Listen, if I’m one of the billions of people that aren’t Creative Cloud customers and I saw things like content-aware fill and neural filters, where with a little slider, you can change someone’s frown, or you can change the landscape into summer from fall to winter, if I saw any of that, I would want to have access to that without having to learn that tool. And so that is kind of my charge to the teams, to say, “Hey, port some of this incredible — we call it Adobe magic internally — in a very easy to use, revolutionary interface form that everyone can access.” And that’s part of the challenge that I’m alluding to, that we’re going to start to make a dent in going into the new year.

But we already are trying to do it in our products that you see today. And shame on us, right? I mean, we should have this access for everyone. Everyone needs it now, is the point. Whereas 10 years ago, not everyone was creating content on social to make their business stand out. But now the creator economy is kind of the theme for this need.

There was a tweet about a guy who bought a business selling ramps for dogs to go up sofas. Dog ramps. I’m going to get him on the show. And he was like, “I bought this business, it wasn’t doing any social marketing. I just made some great videos and bought the ads, and now my business is like 300x.” And that’s all marketing. The classic “the marketing made the business” is right in there. Is that your lane? That huge market of people with small businesses who see the marketing opportunity with social platforms? And if they want to make great content, it’s worth it for them to pay for the tools? In a way that, I don’t know, teenagers might not think it’s worth it to pay for the tools?

Well, as I imagine the Adobe of tomorrow, I think that every student who’s making a history report, it’s not going to be a printed Word doc anymore, it’s going to be a visually compelling, animated or narrated video type of experience. And millions and millions of small businesses were started during the pandemic as people left their old day jobs and said, “Okay, I want to pursue my passion now.” From day one, it’s all about the content you’re representing across all these different platforms and different formats. And you want to test things, you want to make it creative and different than your competitors. Where are you going to go?

And then I think that holds for the big company too. I think about that moment when the lights went out during the Super Bowl and Oreo said, “You can still dunk in the dark.” That was a marketing moment that happened within 30 seconds. And whoever did that, it wasn’t a design team, it was a social media marketer who was empowered to just do it. They needed to have the brand assets, the fonts, all that stuff at their fingertips and just be able to execute and post it. That’s going to be the case across every brand in the world, big and small. A company needs to accommodate those workflows.

So I look at Adobe, I’m like, well, we’ve got all the professional tools. We’ve got all that Adobe magic. We’ve got the collaboration services, like Creative Cloud libraries that make those fonts and assets available at your fingertips across mobile, web, and desktop. And then we’re going to have all these more consumer-focused creativity applications that make things more accessible to more people. But it’s all a unified system. To me, that’s the creative operating system of the future that people will need. And I just feel like, in that perspective, Adobe’s in its early days.

Do you think about making some of these features just filters in other people’s apps? Have you thought about making a set of Snap filters? I don’t know if TikTok lets third parties play in their zone, but have you thought about going into the other apps and saying, “We’re bringing some of our technology here”?

Well, it’s interesting. On the augmented reality side, we are making a lot of the tools. We’re always doing the picks and shovels of these mediums. And we’ve done a lot of work there, and we have approached partners who say to us, “Hey, we want your creators to create for our new mediums. Because otherwise our mediums are going to fall flat.” I mean, AR is never going to be interesting until it’s richly filled with interactive, amazing, engaging, entertaining content. And how are you going to do that unless you have millions of people who are the best creators in the world producing for that medium?

So we have had some conversations there, but I feel like, as a platform-agnostic player, our role is to say, “Hey, you’re a small business. You want to make your ad or your engaging content. You want it to be on TikTok and Instagram and Snapchat and YouTube and Facebook and Pinterest. You shouldn’t have to do it all over in each place. You should just be able to save it in all those formats. We should do the AI magic to just make that happen for you, and then you should just be able to publish directly from our product.” I think that would be the holy grail.

So there’s a split there that I think is really interesting. You’re describing the small business owner themself or a creator themself. At the same time, right, you came to Adobe because you were the CEO of Behance. Adobe still runs Behance, it’s a networking platform for creatives. You sell subscriptions on it in a Patreon kind of way. You’re announcing some updates here at Max to make it easier to find jobs. What’s the split there between doing it yourself and then going on a platform like Behance, looking at a bunch of creatives, and hiring them?

What we’re seeing increasingly is both. People go and they commission or they get UI kits, or they commission people to do original work for them, and then they use those as templates and starters for other derivations and evolutions of that content over time. I think that creativity will always be a collaborative discipline. And one of the things I love about Behance is just how many people in the far corners of the world have expertise in certain areas that just are superpowers for you wherever you are. Some of the best motion graphics designers I’ve ever found were in Central and Eastern Europe, in small little towns. And I don’t know how they became so great, but they are such a resource. And typically they would work for a headhunter, who’d work for an agency, who would work for a bigger agency, who worked for a brand, but now the brand can find that person directly and have them on retainer to do all kinds of cool stuff. So we’re seeing that happen all the time. I think you’ll always see a mixture of both.

Do you foresee a world in which these specialized creators become an independent army of freelancers? Do you see creative moving out of the agency or the companies themselves?

A hundred percent. Why? Because the natural inclination of all of us is to work for ourselves to some extent, and especially for a creative, it’s like, “I want to choose my own work. I want to choose my own clients and work on my own terms.” And so the better and better you are, the more likely it is that you should have that future. In the old days, when no one could find you and you couldn’t get attribution for your work, you had to work for an agency. You always had to be in that chain. But now, if you can get attribution directly for your work and the spotlight that you deserve, you can work directly for whoever, on your own terms. And by the way, I know we’re about to get into some of the NFT stuff, but it’s interesting to see the digital artist go from being in some ways at the mercy of circumstance, always at the end of that chain, to suddenly monetizing their work directly, both through relationships like we’re describing, as well as by minting their work and having it collected by others.

I want to make sure we spend some real time on NFTs; that’s where I was headed. But before I do that, this has a big implication for Adobe’s business, right? Adobe’s business right now is expensive Creative Cloud subscriptions; I’m assuming CIOs at big companies are some of your biggest customers, and they’re buying corporate enterprise licenses. As all those people move and become freelance, or start doing it themselves at smaller businesses, how are you thinking about Adobe’s model changing?

It’s funny. I mean, I always think about our business this way: our customers are creative professionals, and the IT department will buy whatever tools they want to use.

That’s an optimistic read on the relationship between creative professionals and IT, but I buy it.

But they do. No one’s going to tell their designer, “We’re not going to pay for the tool you want to use” — and that hurts us and helps us. It depends on what industry or what segment of the market we’re talking about. But the truth is that we ultimately need to empower creative people and teams to work together. While you were saying that there will be more independent professionals and fewer people in design organizations or agencies, I think it’s more on the agency side. I actually think companies are realizing design is a competitive advantage, so they’re bringing people in-house. But nevertheless, they’re also working as teams. So a lot of the individuals of the world working on these incredible animations or editing projects with a product like Frame.io might be distributed freelancers, but they are working as a team. So they need enterprise-level collaboration capabilities, even if they are in fact individuals.

All right, let’s talk about NFTs, I’ve made everybody wait long enough. We’ve been hinting at this conversation. You are very bullish on NFTs, non-fungible tokens. I have a quote here from a Medium post that you wrote: “The NFT world is likely the greatest unlock of artist opportunity in a hundred-plus years. This isn’t a suboptimal or fringe version of the real-world art economy. It is a vastly improved one.” I would say I’m maybe less bullish on NFTs, but tell me why you think they’re so revolutionary.

And let me be clear: I’m bullish on the technology of NFTs. I am not suggesting that the current boom of people trading them and buying them and selling them and these series and all that stuff is here to stay. In fact, my opinion would be that there are going to be more crashes before more booms. However, I have just never seen a more empowering and better-aligned system for creativity than NFTs. You make an NFT and you not only get the primary sale revenue from it, but you also, based on the contract you’re using, can get a percentage of every secondary sale forever. That blows any other form of art sales out of the water, in galleries or anywhere else for that matter — the attribution is always there for you. You always have a connection to your collectors.

Again, that doesn’t exist in the real world with artists. You’re very lucky if you can ever even meet the artist that made your work. When you go down the line, it’s just better, better, better. And what it’s incentivizing is creativity. Artists are realizing, “Oh my goodness, I should make these NFTs that have this nature to them, and I can airdrop new versions of this NFT to my collectors, just surprising them, delighting them, and I can have a relationship with them. They can even influence the future of my collection.” There’s a large rabbit hole that we won’t have time to go down, but suffice to say, NFTs represent a way of distributing creativity, and of collectors owning it, as a form of cultural flex, as a form of membership, as a form of patronage, and I think it’s early days.

So let me offer you the pushback on that, because I buy it, and the secondary sale thing in particular has never been possible before. So that is all very interesting. The pushback I would give you is that NFTs aren’t actually the work, right? They are a pointer on the blockchain to someone else’s website where the work lives. It is very amusing to me that people are angry about right click and save as. If you take a step back, the fact that that is the problem in the NFT world is deeply funny.

But, it’s still not the work, right? We’re still creating all the value around the work itself and not reasserting the value of the work, in a way that a painting is inherently worth something, or even a CD is inherently worth something, because the media and the art have merged — how do you solve that problem? Because I think that’s the thing that’s always going to be confusing for people. It’s always going to be the blocker.

Well, two comments on that. First, one is a philosophical one. And then one is very specifically what Adobe is doing to help solve this problem. But on the first side, I would just say that NFTs are really about identity. You are defined by the stuff you collect, by the art on your walls, by the clothes you wear. Any pair of shoes you buy is probably $3 in materials and $97 in virtual goods that are ascribed to the brand. Why are you paying that ridiculous premium? It’s because you’re buying a virtual good that helps define your identity. But the thing that we really want in our identity, everyone really wants, is authenticity. And so the idea of knowing and being able to demonstrate that whatever you have, those digital shoes you’re wearing in Fortnite or whatever, are authentic, is absolutely hard-coded into identity, which is an age-old thing.

People have always wanted to be authentic and everyone’s always been afraid of being a fraud. So that’s what we’re capitalizing on in the NFT space. I think it helps address your comment. But from Adobe’s perspective, we’re seeing this right click and save and mint thing and saying, “Wow, if so many of these NFTs are made within our products, and if we can match the person who makes and actually pushes the pixels to the person who mints it, then we can actually solve that attribution gap.” You know who minted it forever on the blockchain, but you don’t know who created it.

It’s this crazy thing: for the last few years we’ve been working on something called the Content Authenticity Initiative, which was originally intended to help people know whether a piece of media you saw, for example, on your guys’ website, was actually edited by someone on your staff or by some unattributed person. And that helps me determine whether I can trust the video or image. And so we’re going to use that same technology, but we are basically embedding it into our products when you’re minting the NFTs. And then we are putting it onto the blockchain in an open-source way that is by no means DRM or anything like that. Anyone, including competitors, can do this, and then we’re working with the OpenSeas of the world to surface that information with the NFT forevermore — wherever it’s powered on the blockchain. So in other words, you will be able to see an NFT and not only see who minted it, but also see some attribution for who created it. And I think that solves that problem.

So I make my own CryptoPunk in Photoshop. I hit a button, and the technology you developed, I believe it was with Twitter and The New York Times, to authenticate that a picture was a picture from The New York Times, comes into play, adds some verification to my punk, and then I mint the CryptoPunk. I sell it somewhere, and as someone right clicks it and saves it and tries to remix it as a different NFT, my attribution comes along for the ride?

Well, actually what happens is if someone copies it, they will have been the minter of it, but they won’t have the cryptographic signature of being the creator of it. So what it will do is validate that you were the creator. It won’t validate that they weren’t the creator, but they can’t actually show that they are the creator. So imagine a world where you favor buying NFTs from artists with a cryptographic signature, where you know that they actually made it, as opposed to ones who don’t have that cryptographic signature. It’s like believing news from a Twitter account that has a verification badge versus not. Anyone can go on Twitter and repaste anything from anywhere, but if it doesn’t have a verification badge, you start to be skeptical of it. I think similarly about the NFT space. If I just get a CryptoPunk, but it doesn’t have a cryptographic signature from the creator, Larva Labs, I’m going to be like, “Well, wait a second. How do I really know that they made this?”

So is that the combination of NFTs and content authenticity? You’re trying to create a new set of customer norms in the art world, right? You’re checking for validation; that matches the traditional art world, where you have people who come in and say, “This painting is real.” You’re trying to create that more digitally.

Actually, in my own career, my quest since 2005 has been to help foster attribution in the creative world. I simply believe that when people get credit for their work, they get opportunity, and it’s the best thing for creative meritocracy. So fast forward to now: NFT boom, tons of people taking other people’s work and minting it and just trying to get away with it. And I’m saying, “Wow, this blockchain thing is great, but you can only track back to the original minter, not the creator.” If we can cryptographically sign the artist and the actual provenance of the object, like what layers, what pixels, where the sources came from and everything, that fills a massive gap in this new digital collectibles world. I think that could be very empowering to artists and could make sure that we flip the model and sort of say, “Hey, I only want NFTs that I know were created by the original artist.” And to your point, that’s what galleries and art authenticators are there for, but they’re not even able to do it with 100 percent precision. I think we can.

I buy it. But again, here’s the pushback that I see: that is a huge amount of control, and art is usually at its best when it’s pushing back on control and pushing back on the norms.

I agree.

I’m going to tape a banana to the wall and then someone’s going to eat it. If you’re listening to this, that is a real thing that happened at Art Basel in Miami.

I remember.

And that is breaking a norm and then another person breaking a norm. And that created a moment in the art world. When you put computers in charge of everything, they don’t allow for norm breaking in that way. Right? They tend to enforce the rules very strictly. How do you see that dynamic? Because that’s the part that’s scary to me: basically we are creating a massive distributed DRM system that limits what people can do.

What’s important here is that it is open source. And by the way, you can attribute it to anything, so any name, any pseudonym, can be your attribution point. So I think that you’ll actually continue to see creatives be very creative with how attribution is used. I don’t think Banksy would suddenly attribute it to his name and his Adobe ID. Right? I mean, it’s going to be like—

Do you think that Banksy has an Adobe ID? Do you know who Banksy is?

(Pause) No. But—

This is a radio show. Scott thought about it for a minute.

I was like, “Do I know Banksy?”

It was very telling.

There was that time in a British pub… but anyways, no, I think that it’s about making sure that people can get attribution if they want it, in a way that is very consistent with the decentralized notion of no single player, no single source of truth. People just should have this form of attribution, and that’s why we made it open source. And we’ve tried to get as many other folks involved as we can. And listen, it’s early days, but I do believe it’s a problem that needs to be solved. And I haven’t seen a better solution yet, but this is why we’re working on it.

So do you think eventually Photoshop is going to have in its listed export options, like TIFF, JPEG, NFT?

We are going to have a “prepare as NFT” option by the end of this month.

And what does that look like in practice? “Prepare as NFT” and then it goes where?

It will be able to take whatever you’re working on and assist you in packaging it and preparing it, along with the attribution capabilities that we just discussed, for some of the popular minting platforms and blockchains out there. And again, this is in preview; it’s not something that we’re gold-standard on yet in terms of readiness. We are just trying to respond to our customers’ desires. So a lot of our customers are like, “Listen, I make stuff in your tools, I mint it and I’m proud of that. But then other people mint the same stuff. I want the ability to show that I was the one who did it.” And we’re like, “Great. We’ll give you the ability to prepare as NFT, and we’ll cryptographically sign it in an open-source way for you. We’ll work with the open marketplaces of NFTs to surface that information alongside an NFT with any cryptographic signature around the actual creator. And hopefully that will help solve your problem.”

Have the marketplaces bought onto this? Are they going to support it right away?

They’re very excited about it, because this is one of their biggest problems: everyone’s right-clicking and minting other people’s cool stuff. The blockchain starts from the moment of minting, so there’s just no way of knowing whether something was right clicked and saved or created in a product, down to the pixels. So that’s something they want help with themselves.

I feel like we’re right back to where we started with file formats and PSDs. I feel like every time I talk to you we end up in the weeds of the PSD file format. How does this change that file format? Is it that once it’s signed, you can’t change it again? Do you have to change the format? Where does it live on the blockchain?

Yeah, no changes to the format, and the cryptographic signature points to an IPFS (InterPlanetary File System)-powered system that shows you the attribution data. But again, it’s a decentralized storage source and an open-source framework, so anyone can cryptographically sign anything from within the tool that’s used to create it and leverage the same system. And that’s great, because we don’t want this to be anything that is proprietary to Adobe or part of one of our formats; that would negate the purpose.

What happens if I want to take one of these signed things that’s minted and remix it and post it to Instagram as a commentary on the art? Is there any place here where that stuff is prevented from happening? Because that’s just been happening with digital art in a variety of ways over time.

Listen, we’re so early days in this wild, wild West, I have no idea. I mean, I don’t think Instagram is surfacing attribution data—

But this is what I mean about computers being really good at this, right? Instagram has a copyright law obligation and people fight over it and they file the claims.

Yeah. DMCA stuff or whatever else.

I mean, I think just this week, Emily Ratajkowski and Dua Lipa got sued by paparazzi for posting photos that paparazzi had taken — which is a whole other conversation. But there’s already a set of laws that control what you can post or what might get taken down from Instagram. Once you start creating this kind of parallel digitized system, that stuff gets automated real fast. So I’m wondering if you see that line yet, or are you saying it’s so early, we can’t know?

Yeah, I think it’s too early to know. But I always go back to the primal motivations here. We just know that creatives’ opportunities are at the mercy of circumstance, unless they get attribution for their work. And we know that any form of monetization typically is taking advantage of the creative, and now we’re trying to shift the power into the creators’ hands. And I think that obviously the blockchain does that in all the ways we just discussed. I feel like there are also problems to be solved around attribution, which we’re trying to solve.

And then as the desires come to remix and leverage and use, we want to play a role in that. I mean, we have a huge business with Adobe Stock, which is people selling content that they make or shoot for other people to use, with various levels of licensing. I think that blockchain is a great area to explore on that front as well. And we’re seeing multimedia types of creations come out now that require more types of stock from more sources, with various ways of compensating the artists. So it’s exciting. I mean, it’s really like a wild, wild West. I’m so happy we’re now beyond this traditional world of either a gallery is selling it on their wall, or it’s not valid. We’ve moved beyond this, and I think that’s very exciting.

Let me ask you about the Content Authenticity Initiative (CAI) in its original form. It was you and the Times and Twitter saying we want to make sure that what you see is real; a noble goal. You’re announcing some updates to that here at Max. How is it going? Is it working? Is it taking off? Are you ready to deploy it widely?

The progress we’ve made is really around the partnership and the consortium of companies that are focused on this — trying to figure out what the standards should be and how to reveal that information in a very open-source, accessible way. What we have also been doing in tandem is using Photoshop as a reference app, to some degree, for how this should be done. We haven’t really launched that yet; it’s going to come out in preview at Max this month, and then we’ll start to see how that works.

Now, of course, it’s also a chicken-and-egg problem. You need enough people using CAI for there to be enough reason for a network, like Twitter or The New York Times or Behance, to surface the attribution data from an asset that was made in one of our products using content credentials. And so what we’re also going to use as a reference app is Behance. So we’ll have our own two reference apps: people can publish with content credentials in Photoshop, people can surface content credentials in Behance, and then we can leverage that to show all these other players out there who want to solve this problem, hey, we’ve got the APIs and it’s ready to go. And so if you’re another creator tool in the market, leverage what we’re doing in Photoshop as a reference app. And if you’re Twitter or Facebook or someone else, leverage what we’re doing in Behance.

So let me give you a really dumb example that I was thinking about last night. I’m a Packers fan. Yesterday Aaron Rodgers played the Bears, and the cameras on the sideline caught him screaming, “I own you” at Bears fans, which is very funny. And I encourage everyone to, A, root for the Packers, and B, watch that clip. Fox caught the audio. They would’ve put out that clip and marked it with CAI. They would’ve said, “We’re the creators of this clip, it’s real.” But the ones that went viral were people pointing their iPhones at the screen, and there’s definitely one where people went and tweaked the audio so you could hear it more clearly. Do they get to play in this world, where someone says, “All I did was tweak the audio,” or “I just pointed my iPhone at the TV,” but it’s the real thing coming from Fox? Or do those kinds of clips get secondary status here, and the one from Fox itself gets the tag?

A reminder that the purpose of this is to help the viewer determine whether they can trust what they’re watching. In a perfect world, the piece that is captured by Fox actually says that: whatever Canon camera registered to Fox Incorporated captured this with this lens on this date at this location, and you’re seeing that footage. And it was edited a little bit, and it was cut down using Premiere Pro, and there was a filter added, and the color correction was done using DaVinci or whatever. And so you can sort of see the lineage of that asset. And you know, as a news agency, as a viewer, that this is trustworthy content. Going with that logic, it would also say the same for me pointing my phone at the TV. It would say, “Scott Belsky captured this with whatever phone, with whatever tool, and edited it or didn’t edit it and posted it.” And again, this information is just helpful for the viewer to determine whether they can trust it or not.

Imagine if that video also goes around, but someone dubs in something else that’s being yelled, right? And you’re like, “Whoa, can that be true?” And then you click on it and there’s no attribution data for it. And you’re like, “Hmm, well, everyone else, all the news outlets who posted this clip had attribution data with the content verification, and this person didn’t. I’m going to have some skepticism on this one.” I think that that’s where we’re trying to get. And I don’t see a better solution, unfortunately, than that. Obviously I’d love to have a true-and-false meter for every piece of content published in the world, but that’s not going to happen. And technology just keeps getting better. So people are going to have to start discerning based on whether they know the provenance of the asset or not.

I’m just going to ask this very simply: does that require a network connection? You’re saying this is happening on IPFS, it’s happening somewhere on a blockchain. Can I see that data if I don’t have Wi-Fi?

Well, any of the content I assume you’re consuming on any of these social platforms is being delivered to you through a network connection. So yes, any information around the authenticity of that media does require an internet connection.

I’m just curious, because we’re completely disintermediating the file from the device at some point. I don’t know where the last step is, but this feels like closer to the last step.

Yeah. I mean, there are other technologies that are doing other kinds of things. They hide little things in the image and whatever else. But that becomes a cat-and-mouse game.

I used the Aaron Rodgers example because it is very low-stakes, right? It doesn’t really matter what he screamed at Bears fans. It happened. There are much higher-stakes examples I could use. There’s an election coming up. Do you think this stuff is going to be ready for the next election?

That’s a good question. What I do think will happen, unfortunately, is that there will be some specific things that happen that really diminish trust. I think we’ve seen a few of these examples before, but fortunately, countries haven’t gone to war yet. People haven’t been really traumatically affected yet by fake media on a large scale. And I just fear that it’s a when, not if, sort of scenario. And when that stuff happens, I think that everyone’s going to be grasping for ways to distinguish between true and false. And I think there’s going to be a need for it.

This all started because our leadership team said, “Hey, we have an opportunity to be part of the solution.” Or a responsibility, rather, not just an opportunity. Take the opportunity of creating things like content-aware fill. I remember when I announced content-aware fill in After Effects three years ago on stage. I showed a video where you could literally remove an object or a person from a video, and even remove the footsteps and the dust in their walk, from an entire piece of video with AI. It sort of raises the question of, oh goodness, what are the implications of this?

Now, that could have always been done. People could have done it the painstaking way. We were just trying to save creatives days and days of work by being able to do these sorts of things. And there are obviously many legitimate uses. I remember when the coffee cup was left in a Game of Thrones episode and there was this moment of, how do you remove that? And I was like, “Hey, we have a feature for that. You can just remove the coffee cup from the entire scene and no one is ever going to notice the Starbucks cup.” So I think that there is a really great, legitimate use case for that. But with that opportunity comes the responsibility of helping people know what’s real. And that’s the type of stuff that this is meant to solve.

Do you think that that affects your roadmap? When the next version of content-aware fill comes up, do you, as the decision-maker, say, “That’s too far until we build the trust tools”?

Everyone is working on these things. It’s not like we’re the only company trying to figure out how to remove stuff from video or imagery. This is a popular capability and we have to do it for our customers, just like our peers are doing it for their customers in the industry. I think that it’s a statement about Silicon Valley as a whole, that we tend to have teams that are very creative about what can go right in the future, and don’t spend time being creative about what could go wrong in the future. I think that if the early groups of product leaders and designers at Facebook, for example, sat around trying to brainstorm what could go wrong with their technology, years if not decades from then, maybe they would’ve built the platform differently.

I’m hoping that we have those conversations now. And I’m hoping that we spend the effort and cycles to innovate in ways that may or may not catch on in the industry. I think it’s important to do so in an open-source way, because then we can help the whole industry do this. And of course we benefit by being leaders in this. And I hope it’s something that the networks and other partners in this space start to prioritize as well. Because we can build it, but if companies like Twitter or Pinterest or whoever are not surfacing this information, it doesn’t really work either.

I think that’s a pretty excellent place to stop. Last question; it’s a softball everybody gets at the end. We’re obviously talking around Adobe Max, and you’ve announced a lot of stuff, but what’s next for Adobe? Where do you see the next turn for the stuff you’re working on?

I’m very committed to making Creative Cloud as much about collaboration as it is about creativity. I feel like anyone working alone these days is working at a massive disadvantage. You want to leverage other people’s assets, and you want to be able to frictionlessly collaborate with others. That’s why we are bringing our products to the web. That’s why we are building all these services like libraries, but also, as we just announced at Max, Creative Cloud Spaces and Canvas, as new forms of collaboration for creative teams. I think we have an opportunity to make creativity a collaborative discipline that is far more inclusive and really transcends what we’ve ever seen before, which is the ultimate measure of the work that we do. So that’s what gets me excited every day these days.

Scott, it is always a pleasure to talk to you. I feel like I could definitely spend another hour on either NFTs or the future of computing, but we’ve got to wrap it up. Thank you so much for being on Decoder.

Thank you for having me.
