Big players, including Microsoft with its Bing AI (and Copilot), Google with Bard, and OpenAI with its GPT-4-powered ChatGPT, are making AI chatbot technology that was previously restricted to test labs more accessible to the general public.
How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”
Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
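To make Vincent's "vast autocomplete systems" description concrete, here is a deliberately tiny sketch of the underlying statistical idea: count which word tends to follow which, then predict the likeliest next word. This is an illustration only; real LLMs use neural networks over subword tokens and billions of parameters, not raw word counts.

```python
from collections import Counter, defaultdict

# A toy corpus stands in for the web-scale text LLMs are trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Tally which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" — the statistically likeliest continuation
```

Note that `predict_next` has no database of facts to consult: it only knows what text tends to look like, which is exactly why plausible-sounding output can still be false.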
But many more pieces of the AI landscape are coming into play — and there are going to be problems — and you can be sure to see it all unfold here on The Verge.
- Microsoft CTO Kevin Scott thinks Sydney might make a comeback
- Microsoft’s Bing Chat AI is now open to everyone, with plug-ins coming soon
- Google announces AI features in Gmail, Docs, and more to rival Microsoft
- AI chatbots compared: Bard vs. Bing vs. ChatGPT
- Microsoft announces Copilot: the AI-powered future of Office documents
- OpenAI announces GPT-4 — the next generation of its AI language model
- 7 problems facing Bing, Bard, and the future of AI search
There have been a handful of before-and-after moments in the modern technology era. Everything was one way, and then just like that, it was suddenly obvious it would never be like that again. Netscape showed the world the internet; Facebook made that internet personal; the iPhone made plain how the mobile era would take over. There are others — there’s a dating-app moment in there somewhere, and Netflix starting to stream movies might qualify, too — but not many.
ChatGPT, which OpenAI launched a year ago today, might have been the lowest-key game-changer ever. Nobody took a stage and announced that they’d invented the future, and nobody thought they were launching the thing that would make them rich. If we’ve learned one thing in the last 12 months, it’s that no one — not OpenAI’s competitors, not the tech-using public, not even the platform’s creators — thought ChatGPT would become the fastest-growing consumer technology in history. And in retrospect, the fact that nobody saw ChatGPT coming might be exactly why it has seemingly changed everything.
Sam Altman is officially OpenAI’s CEO again.
Just before Thanksgiving, the company said it had reached a deal in principle for him to return, and now it’s done. Microsoft is getting a non-voting observer seat on the nonprofit board that controls OpenAI as well, the company announced on Wednesday.
Nov 23: A troll worthy of Clippy themself.
This Wall Street Journal article about the recent drama at OpenAI contains an amazing anecdote. Apparently an employee at AI rival Anthropic thought it’d be funny to send “thousands of paper clips in the shape of OpenAI’s logo” as a prank, in reference to the infamous paperclip maximizer thought experiment.
Weirdly, I think OpenAI’s logo makes for a great paperclip design. Should we be worried?
Nov 22: Sony finished its second round of tests of its in-camera authenticity tech.
The company tested baking a cryptographic “digital signature” into photos taken by its cameras to set them apart from AI-generated or otherwise faked images. Sony says the feature will come to cameras like the Alpha 9 III via a firmware update in Spring 2024.
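Sony hasn't published the implementation details, but the general shape of a tamper-evident photo signature is easy to sketch. The snippet below is a hypothetical illustration: a real camera would use an asymmetric (public-key) signature generated in secure hardware, so here a symmetric HMAC merely stands in for the signing step, and the key name is invented.

```python
import hashlib
import hmac

# Hypothetical stand-in for a key provisioned in the camera's secure hardware.
CAMERA_KEY = b"secret-key-burned-into-camera"

def sign_photo(image_bytes: bytes) -> str:
    """Compute a tamper-evident digest over the raw image bytes at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, signature: str) -> bool:
    """Re-compute the digest; editing even one byte of the image breaks it."""
    return hmac.compare_digest(sign_photo(image_bytes), signature)

photo = b"...raw sensor data..."
sig = sign_photo(photo)
assert verify_photo(photo, sig)              # untouched photo verifies
assert not verify_photo(photo + b"x", sig)   # any modification fails
```

The point of such a scheme is that an AI-generated or doctored image simply has no valid signature to present, so verification fails by default.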
ChatGPT’s voice feature is now available to all users for free. In a post on X (formerly Twitter), OpenAI announced users can now tap the headphones icon to use their voice to talk with ChatGPT in the mobile app, as well as get an audible response.
OpenAI first rolled out the ability to prompt ChatGPT with your voice and images in September, but it only made the feature available to paying users.
While OpenAI is in the middle of an existential crisis, there’s a new chatbot update from Anthropic, the Google-backed AI startup founded by former OpenAI engineers who left over disagreements with the company’s increasingly commercial direction as its Microsoft partnership deepened.
Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.
According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more.
Nov 17: Some of Bing’s search results now have AI-generated descriptions.
Microsoft says it’s using GPT-4 to garner the “most pertinent insights” from webpages and write summaries beneath Bing search results. You won’t see these summaries beneath every search result, but you can check which ones are AI-generated by clicking the little arrow next to the result’s URL. If the description is written by AI, it’ll say “AI-Generated Caption.”
Nov 17: Google’s next-generation ‘Gemini’ AI model is reportedly delayed.
Google has publicly talked up the model: “And we’re already feeling, even a couple of months in, the benefits and the strengths of that with projects like Gemini that you may have heard of, which is our next-generation multimodal large models — very, very exciting work going on there, combining all the best ideas from across both world-class research groups. It’s pretty impressive to see.”
Now, The Information cites two sources saying its launch is expected in Q1 of 2024, not this month as they were previously told. It also reports Google co-founder Sergey Brin has been spending “four to five days a week” with the developers.
Google is testing new generative AI features for YouTube that’ll let people create music tracks using just a text prompt or a simple hummed tune. The first, Dream Track, already seeded to a few creators on the platform, is designed to auto-generate 30-second music tracks in the style of famous artists. The feature can imitate nine different artists, who’ve chosen to collaborate with YouTube on its development. YouTube is also showing off new tools that can generate music tracks from a hum.
The announcement comes as YouTube attempts to navigate the emerging norms and copyright rules around AI-generated music while also protecting its relationship with major music labels. The issue was brought into sharp relief when an AI-generated “Drake” song went viral earlier this year, and YouTube subsequently announced a deal to work with Universal Music as it establishes rules around AI-generated music on its platform.
Microsoft announced a plethora of changes for its Microsoft Copilot AI during Microsoft Ignite today that make the chatbot more interactive and participatory, particularly in Teams meetings. The updates expand Copilot’s role as an enterprise helper in Office apps like Teams, PowerPoint, and Outlook.
Microsoft has added some flexibility to the chatbot’s output, so users can tweak it with instructions to make its formatting and tone more to their personal liking. Word and PowerPoint will get the new personalization features to start, but the company says other Microsoft 365 apps will gain support in time.
Nov 14: Airbnb just bought its very own AI company, led by a Siri co-founder.
CNBC reports Airbnb bought the “stealth mode” Gameplanner.AI for almost $200 million, and notes its plans to use generative AI to help with trip planning.
Nov 8: Barack Obama on AI, free speech, and the future of the internet.
In a sitdown with Verge EIC Nilay Patel on Decoder, the 44th president discussed Joe Biden’s recently signed executive order about AI, why Obama disagrees with the idea that social networks are a “common carrier,” and which iPhone apps he uses the most, now that he’s no longer president and can use an iPhone.
Moments after OpenAI’s big keynote wrapped in San Francisco on Monday, the reporters in attendance made our way down to a private room to chat with CEO Sam Altman and CTO Mira Murati. During the Q&A, they elaborated on the big news that had just been shared onstage: OpenAI is launching a platform for creating and discovering custom versions of ChatGPT.
There are natural parallels to draw between OpenAI’s GPT Store, which is set to go live in a few weeks, and the debut of the iPhone’s App Store in 2008. Like Apple way back then, OpenAI is inviting developers who are excited about this new wave of technology to hopefully help create a new, enduring platform.
Google is rolling out new generative AI tools for creating ads, from writing the headlines and descriptions that appear along with searches to creating and editing the accompanying images. It’s pitching the tool for use by both advertising agencies and businesses without in-house creative staff. Using text prompts, advertisers can iterate on the text and images they generate until they find something they like.
Google also promises that it will never generate two identical images, which avoids the awkward possibility that two competing businesses end up with the exact same photo elements.
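Google hasn't said how it enforces that uniqueness guarantee, but one naive way a service might do it is to fingerprint every image it serves and regenerate on a collision. The sketch below is purely hypothetical; a production system would more likely rely on per-generation random seeds and perceptual hashing rather than exact byte hashes.

```python
import hashlib

# Fingerprints of every image already handed out (hypothetical).
served_fingerprints: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Exact-match fingerprint of an image's bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def register_if_new(image_bytes: bytes) -> bool:
    """Record the image and return True, or return False if it's an exact repeat."""
    fp = fingerprint(image_bytes)
    if fp in served_fingerprints:
        return False  # caller would regenerate with a fresh seed
    served_fingerprints.add(fp)
    return True

assert register_if_new(b"generated-ad-image-1")        # first serving is new
assert not register_if_new(b"generated-ad-image-1")    # exact duplicate rejected
assert register_if_new(b"generated-ad-image-2")        # different image is fine
```

The weakness of exact hashing, and a reason real systems would go further, is that two images differing by a single pixel count as "different" here while looking identical to an advertiser.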
OpenAI announced more improvements to its large language models, GPT-4 and GPT-3.5, including updated knowledge bases and a much longer context window. The company says it will also follow Google and Microsoft’s lead and begin protecting customers against copyright lawsuits.
GPT-4 Turbo, currently available via an API preview, has been trained on information up to April 2023, the company announced Monday at its first-ever developer conference. The earlier version of GPT-4, released in March, only learned from data up to September 2021. OpenAI plans to release a production-ready Turbo model in the next few weeks but did not give an exact date.
With the release of ChatGPT one year ago, OpenAI introduced the world to the idea of an AI chatbot that can seemingly do anything. Now, the company is releasing a platform for making custom versions of ChatGPT for specific use cases — no coding required.
In the coming weeks, these AI agents, which OpenAI is calling GPTs, will be accessible through the GPT Store. Details about how the store will look and work are scarce for now, though OpenAI is promising to eventually pay creators an unspecified amount based on how much their GPTs are used. GPTs will be available to paying ChatGPT Plus subscribers and OpenAI enterprise customers, who can make internal-only GPTs for their employees.
One hundred million people are using ChatGPT on a weekly basis, OpenAI CEO Sam Altman announced at its first-ever developer conference on Monday. Since releasing API access to its ChatGPT and Whisper models in March, the company now boasts over two million developers, including over 92 percent of Fortune 500 companies.
OpenAI announced the figures as it detailed a range of new features, including a platform for building custom versions of ChatGPT to help with specific tasks and GPT-4 Turbo, a new model that has knowledge of world events up to April 2023 and which can fit the equivalent of over 300 pages of text in a single prompt.
Update November 6th, 10AM ET: We have all the confirmed details about the future of ChatGPT right here in our coverage of OpenAI’s developer event. The original article continues below.
Just as OpenAI is preparing for its first-ever developer conference, a significant ChatGPT update has leaked.
- A look at why the Leica M11-P’s Content Credentials matter.
Over on MKBHD’s The Studio, David Imel talked about the Leica M11-P. Or, more accurately, he used it to talk about Content Credentials, which the $9,000-plus camera attaches to photos as they’re taken so they can be verified through Adobe’s Content Authenticity Initiative (CAI).
It’s a good look at CAI and its potential benefits to the media using the first-ever camera to participate in an initiative intended to help onlookers identify real-world images in a sea of AI-generated ones.
- The good kind of SEO.
Nilay, David, and Alex had a little aside on Friday’s Vergecast episode about how the combo of SEO and AI means Google thinks you can melt an egg. This morning, a listener showed what’s clearly the best way to experience this silly SEO flub.
I immediately tried it, and this was also my Nest Hub’s response to the question, “Can you melt an egg?”
Go try it before Google fixes it!
The US Copyright Office is taking public comment on potential new rules around generative AI’s use of copyrighted materials, and the biggest AI companies in the world had plenty to say. We’ve collected the arguments from Meta, Google, Microsoft, Adobe, Hugging Face, StabilityAI, and Anthropic below, as well as a response from Apple that focused on copyrighting AI-written code.
There are some differences in their approaches, but the overall message for most is the same: They don’t think they should have to pay to train AI models on copyrighted work.
- Elon Musk says xAI’s chatbot will be an X subscriber exclusive.
He says users will need a $16-a-month X Premium Plus subscription to access “Grok,” and that it will get real-time information from posts on X.
Musk was a co-founder of OpenAI but left in 2018 over the company’s for-profit shift, and has called ChatGPT “WokeGPT.” He launched xAI earlier this year. Musk’s posts come a few days ahead of OpenAI’s first developer conference on Monday.
Nov 3: OpenAI won’t say how many artists have opted out of training AI.
Bloomberg’s report on how artists are finding ways to fight back against AI scraping highlights how the process of excluding their content from OpenAI’s training datasets “feels like a charade” that would take months to even attempt.
The system asks that artists upload images they’d like excluded from future training to OpenAI, along with a description of each piece.
OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won’t have to select modes like Browse with Bing from the GPT-4 dropdown — it will instead guess what they want based on context.
The new features bring a dash of the office features offered by its ChatGPT Enterprise plan to the standalone individual chatbot subscription. I don’t seem to have the multimodal update on my own Plus plan, but I was able to test out the Advanced Data Analysis feature, which seems to work about as expected. Once a file is uploaded, ChatGPT takes a few moments to digest it, and then the chatbot can do things like summarize data, answer questions, or generate data visualizations based on prompts.