Bing, Bard, and ChatGPT: How AI is rewriting the internet

Big players — Microsoft with its Bing AI (and Copilot), Google with Bard, and OpenAI with ChatGPT and GPT-4 — are bringing AI chatbot technology once restricted to test labs to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
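
Vincent’s description (vast autocomplete systems predicting the next word from statistical patterns) can be illustrated with a toy bigram model. This is a deliberate simplification: real LLMs use neural networks over subword tokens, and the corpus and function names here are invented for the sketch.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: the "statistical properties of the language."
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` — an educated guess
    based on prior words, not a fact looked up in any database."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" — the most common word after "the" in this corpus
```

The model has no notion of truth, only frequency, which is exactly why plausible-sounding output can be wrong.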

But many more pieces of the AI landscape are coming into play — and there are going to be problems — and you can be sure to see it all unfold here on The Verge.

  • ChatGPT is winning the future — but what future is that?

    A trippy graphic displaying a collection of items like paintbrushes, books, phone messages, and a notepad to represent generative AI. A large pair of eyes and hands can be seen at the center of the image.
    Illustration by Haein Jeong / The Verge

    There have been a handful of before-and-after moments in the modern technology era. Everything was one way, and then just like that, it was suddenly obvious it would never be like that again. Netscape showed the world the internet; Facebook made that internet personal; the iPhone made plain how the mobile era would take over. There are others — there’s a dating-app moment in there somewhere, and Netflix starting to stream movies might qualify, too — but not many.

    ChatGPT, which OpenAI launched a year ago today, might have been the lowest-key game-changer ever. Nobody took a stage and announced that they’d invented the future, and nobody thought they were launching the thing that would make them rich. If we’ve learned one thing in the last 12 months, it’s that no one — not OpenAI’s competitors, not the tech-using public, not even the platform’s creators — thought ChatGPT would become the fastest-growing consumer technology in history. And in retrospect, the fact that nobody saw ChatGPT coming might be exactly why it has seemingly changed everything.

    Read Article >
  • Microsoft joins OpenAI’s board with Sam Altman officially back as CEO

    Sam Altman at OpenAI’s developer conference.
    Sam Altman.

    Sam Altman is officially OpenAI’s CEO again.

    Just before Thanksgiving, the company said it had reached a deal in principle for him to return, and now it’s done. Microsoft is getting a non-voting observer seat on the nonprofit board that controls OpenAI as well, the company announced on Wednesday.

    Read Article >
  • A troll worthy of Clippy themself.

    This Wall Street Journal article about the recent drama at OpenAI contains an amazing anecdote. Apparently an employee at AI rival Anthropic thought it’d be funny to send “thousands of paper clips in the shape of OpenAI’s logo” as a prank, in reference to the infamous paperclip maximizer thought experiment.

    Weirdly, I think OpenAI’s logo makes for a great paperclip design. Should we be worried?

  • Wes Davis

    Nov 22

    Sony finished its second round of tests of its in-camera authenticity tech.

    The company tested baking a cryptographic “digital signature” into photos taken by its cameras to set them apart from AI-generated or otherwise faked images. Sony says the feature will come to cameras like the Alpha 9 III via a firmware update in Spring 2024.
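
The idea behind such a scheme can be sketched roughly like this. Note this is an illustrative stand-in only: it uses a symmetric HMAC where Sony’s system presumably uses asymmetric signatures, and the key and byte strings are invented.

```python
import hashlib
import hmac

# Hypothetical secret standing in for a key sealed inside the camera hardware.
CAMERA_KEY = b"device-secret"

def sign_photo(image_bytes: bytes) -> str:
    """Produce a signature over the exact image bytes at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, signature: str) -> bool:
    """Any change to the bytes — an edit, or an AI fake — breaks verification."""
    return hmac.compare_digest(sign_photo(image_bytes), signature)

photo = b"...raw sensor data..."
sig = sign_photo(photo)
print(verify_photo(photo, sig))         # True: image is untouched
print(verify_photo(photo + b"x", sig))  # False: image was tampered with
```

The point is that the signature binds to the pixels at capture, so later manipulation (or a wholesale AI fabrication) can’t carry a valid one.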

    Picture of a Sony digital camera
    Image: Sony
  • Emma Roth

    Nov 21

    OpenAI drops a big new ChatGPT feature with a joke about its CEO drama

    A rendition of OpenAI’s logo, which looks like a stylized whirlpool.
    Illustration: The Verge

    ChatGPT’s voice feature is now available to all users for free. In a post on X (formerly Twitter), OpenAI announced users can now tap the headphones icon to use their voice to talk with ChatGPT in the mobile app, as well as get an audible response.

    OpenAI first rolled out the ability to prompt ChatGPT with your voice and images in September, but it only made the feature available to paying users.

    Read Article >
  • Wes Davis

    Nov 21

    OpenAI rival Anthropic makes its Claude chatbot even more useful

    Anthropic gives Claude more abilities.
    Image: Anthropic

    While OpenAI is in the middle of an existential crisis, there’s a new chatbot update from Anthropic, the Google-backed AI startup founded by former OpenAI engineers who left over disagreements about the company’s increasingly commercial direction as its Microsoft partnership went on.

    Anthropic has announced that the latest update of its chatbot, Claude 2.1, can digest up to 200,000 tokens at once for Pro tier users, which it says equals over 500 pages of material.
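
The 200,000-tokens-to-500-pages conversion checks out under common rules of thumb. The ratios below are assumptions for illustration, not Anthropic’s published math:

```python
# Back-of-the-envelope check of the 200,000 tokens ≈ 500 pages claim.
TOKENS = 200_000
WORDS_PER_TOKEN = 0.75   # English prose averages roughly 3/4 of a word per token
WORDS_PER_PAGE = 300     # a typical manuscript page

words = TOKENS * WORDS_PER_TOKEN
pages = words / WORDS_PER_PAGE
print(f"{words:,.0f} words ≈ {pages:,.0f} pages")  # 150,000 words ≈ 500 pages
```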

    Read Article >
  • Wes Davis

    Nov 18

    Meta disbanded its Responsible AI team

    Image of Meta’s logo with a red and blue background.
    Illustration by Nick Barclay / The Verge

    Meta has reportedly broken up its Responsible AI (RAI) team as it puts more of its resources into generative artificial intelligence. The Information broke the news today, citing an internal post it had seen.

    According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more.

    Read Article >
  • Emma Roth

    Nov 17

    Some of Bing’s search results now have AI-generated descriptions.

    Microsoft says it’s using GPT-4 to garner the “most pertinent insights” from webpages and write summaries beneath Bing search results. You won’t see these summaries beneath every search result, but you can check which ones are AI-generated by clicking the little arrow next to the result’s URL. If the description is written by AI, it’ll say “AI-Generated Caption.”

    A screenshot of an AI-generated search description on Bing
    Screenshot by Emma Roth / The Verge
  • Google’s next-generation ‘Gemini’ AI model is reportedly delayed.

    Earlier this year, Google combined two AI teams into a single group working on a new model to compete with OpenAI’s GPT-4. Its leader, Demis Hassabis, discussed the combo on Decoder:

    And we’re already feeling, even a couple of months in, the benefits and the strengths of that with projects like Gemini that you may have heard of, which is our next-generation multimodal large models — very, very exciting work going on there, combining all the best ideas from across both world-class research groups. It’s pretty impressive to see.

    Now, The Information cites two sources saying Gemini’s launch is expected in the first quarter of 2024, not this month as the sources had previously been told. It also reports Google co-founder Sergey Brin has been spending “four to five days a week” with the developers.

  • YouTube previews AI tool that clones famous singers — with their permission

    Google is testing new generative AI features for YouTube that’ll let people create music tracks using just a text prompt or a simple hummed tune. The first, Dream Track, already seeded to a few creators on the platform, is designed to auto-generate short 30-second music tracks in the style of famous artists. The feature can imitate nine different artists, who’ve chosen to collaborate with YouTube on its development. YouTube is also showing off new tools that can generate music tracks from a hum.

    The announcement comes as YouTube attempts to navigate the emerging norms and copyright rules around AI-generated music while also protecting its relationship with major music labels. The issue was brought into sharp relief when an AI-generated “Drake” song went viral earlier this year, and YouTube subsequently announced a deal to work with Universal Music as it establishes rules around AI-generated music on its platform.

    Read Article >
  • Wes Davis

    Nov 15

    Microsoft’s Copilot AI gets more personalized in its first update since launch

    Illustration of Microsoft’s new AI-powered Copilot for Office apps
    Image: Microsoft

    Microsoft announced a plethora of changes for its Microsoft Copilot AI during Microsoft Ignite today that make the chatbot more interactive and participatory, particularly in Teams meetings. The updates expand Copilot’s role as an enterprise helper in Office apps like Teams, PowerPoint, and Outlook.

    Microsoft has added some flexibility to the chatbot’s output, so users can tweak it with instructions to make its formatting and tone more to their personal liking. Word and PowerPoint will get the new personalization features to start, but the company says other Microsoft 365 apps will gain support in time.

    Read Article >
  • Wes Davis

    Nov 14

    Airbnb just bought its very own AI company, led by a Siri co-founder.

    CNBC reports Airbnb bought the “stealth mode” Gameplanner.AI for almost $200 million, and notes its plans to use generative AI for trip planning help.

    The article says Gameplanner.AI was founded in 2020 by former Siri co-founder Adam Cheyer, whose later work included co-founding Viv Labs, which Samsung bought to make Bixby.

  • Barack Obama on AI, free speech, and the future of the internet.

    In a sitdown with Verge EIC Nilay Patel on Decoder, the 44th president discussed Joe Biden’s recently signed executive order on AI, why Obama disagrees with the idea that social networks are a “common carrier,” and which iPhone apps he uses the most now that he’s no longer president and can use an iPhone.

  • OpenAI wants to be the App Store of AI

    Sam Altman.
    OpenAI CEO Sam Altman.
    Photo by Andrew Caballero-Reynolds

    Moments after OpenAI’s big keynote wrapped in San Francisco on Monday, the reporters in attendance made our way down to a private room to chat with CEO Sam Altman and CTO Mira Murati. During the Q&A, they elaborated on the big news that had just been shared onstage: OpenAI is launching a platform for creating and discovering custom versions of ChatGPT.

    There are natural parallels to draw between OpenAI’s GPT Store, which is set to go live in a few weeks, and the debut of the iPhone’s App Store in 2008. Like Apple way back then, OpenAI is inviting developers who are excited about this new wave of technology to hopefully help create a new, enduring platform. 

    Read Article >
  • Google is bringing generative AI to advertisers

    three dogs in colorful backgrounds, a prompt that says “elegant dogs on color backdrop” with a button that says generate assets.
    Very clickable good dogs.
    Image: Google

    Google is rolling out new generative AI tools for creating ads, from writing the headlines and descriptions that appear alongside searches to creating and editing the accompanying images. It’s pitching the tool to both advertising agencies and businesses without in-house creative staff. Using text prompts, advertisers can iterate on the text and images they generate until they find something they like.

    Google also promises that it will never generate two identical images, avoiding the awkward possibility that two competing businesses end up with the exact same photo elements.

    Read Article >
  • OpenAI turbocharges GPT-4 and makes it cheaper

    Illustration of the OpenAI logo on an orange background with purple lines
    Illustration: The Verge

    OpenAI announced more improvements to its large language models, GPT-4 and GPT-3.5, including updated knowledge bases and a much longer context window. The company says it will also follow Google and Microsoft’s lead and begin protecting customers against copyright lawsuits.

    GPT-4 Turbo, currently available via an API preview, has been trained on information dating up to April 2023, the company announced Monday at its first-ever developer conference. The earlier version of GPT-4, released in March, only learned from data dated up to September 2021. OpenAI plans to release a production-ready Turbo model in the coming weeks but did not give an exact date.

    Read Article >
  • OpenAI is letting anyone create their own version of ChatGPT

    ChatGPT logo in mint green and black colors.
    Illustration: The Verge

    With the release of ChatGPT one year ago, OpenAI introduced the world to the idea of an AI chatbot that can seemingly do anything. Now, the company is releasing a platform for making custom versions of ChatGPT for specific use cases — no coding required.

    In the coming weeks, these AI agents, which OpenAI is calling GPTs, will be accessible through the GPT Store. Details about how the store will look and work are scarce for now, though OpenAI is promising to eventually pay creators an unspecified amount based on how much their GPTs are used. GPTs will be available to paying ChatGPT Plus subscribers and OpenAI enterprise customers, who can make internal-only GPTs for their employees.

    Read Article >
  • ChatGPT continues to be one of the fastest-growing services ever

    Stock image of computer chip on an illustration of the human brain.
    Illustration by Alex Castro / The Verge

    One hundred million people are using ChatGPT on a weekly basis, OpenAI CEO Sam Altman announced at its first-ever developer conference on Monday. Since releasing its ChatGPT and Whisper models via API in March, the company also now boasts over two million developers, including over 92 percent of Fortune 500 companies.

    OpenAI announced the figures as it detailed a range of new features, including a platform for building custom versions of ChatGPT to help with specific tasks and GPT-4 Turbo, a new model that has knowledge of world events up to April 2023 and which can fit the equivalent of over 300 pages of text in a single prompt.

    Read Article >
  • ChatGPT subscribers may get a ‘GPT builder’ option soon

    A rendition of OpenAI’s logo, which looks like a stylized whirlpool.
    Illustration: The Verge

    Update November 6th, 10AM ET: We have all the confirmed details about the future of ChatGPT right here in our coverage of OpenAI’s developer event. The original article continues below.

    Just as OpenAI is preparing for its first-ever developer conference, a significant ChatGPT update has leaked.

    Read Article >
  • A look at why the Leica M11-P’s Content Credentials matter.

    Over on MKBHD’s The Studio, David Imel talked about the Leica M11-P. Or, more accurately, he used it to talk about Content Credentials, which the $9,000-plus camera attaches to photos as they’re taken so they can be verified through Adobe’s Content Authenticity Initiative (CAI).

    It’s a good look at CAI and its potential benefits to the media, using the first-ever camera to participate in an initiative intended to help onlookers identify real-world images in a sea of AI-generated ones.

  • The good kind of SEO.

    Nilay, David, and Alex had a little aside on Friday’s Vergecast episode about how the combo of SEO and AI means Google thinks you can melt an egg. This morning, a listener showed what’s clearly the best way to experience this silly SEO flub.

    I immediately tried it, and this was also my Nest Hub’s response to the question, “Can you melt an egg?”

    Go try it before Google fixes it!

  • AI companies have all kinds of arguments against paying for copyrighted content

    An image showing a graphic of a brain on a black background
    Illustration by Alex Castro / The Verge

    The US Copyright Office is taking public comment on potential new rules around generative AI’s use of copyrighted materials, and the biggest AI companies in the world had plenty to say. We’ve collected the arguments from Meta, Google, Microsoft, Adobe, Hugging Face, StabilityAI, and Anthropic below, as well as a response from Apple that focused on copyrighting AI-written code.

    There are some differences in their approaches, but the overall message for most is the same: They don’t think they should have to pay to train AI models on copyrighted work.

    Read Article >
  • Elon Musk says xAI’s chatbot will be an X subscriber exclusive.

    He says users will need a $16-a-month X Premium Plus subscription to access “Grok,” and that it will get real-time information from posts on X.

    Musk was a co-founder of OpenAI but left in 2018 over the company’s for-profit shift, and has called ChatGPT “WokeGPT.” He launched xAI earlier this year. Musk’s posts come a few days ahead of OpenAI’s first developer conference on Monday.

  • OpenAI won’t say how many artists have opted out of training AI.

    Bloomberg’s report on how artists are finding ways to fight back against AI scraping highlights how the process of excluding their content from OpenAI’s training datasets “feels like a charade” that would take months to even attempt.

    The system asks that artists upload images they’d like excluded from future training to OpenAI, along with a description of each piece.

    OpenAI says it’s collecting feedback to improve the experience amid the rise of new tools like Glaze and Nightshade, which are designed to disrupt AI image generators.

  • Wes Davis

    Oct 29

    ChatGPT Plus members can upload and analyze files in the latest beta

    ChatGPT logo in mint green and black colors.
    Illustration: The Verge

    OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won’t have to select modes like Browse with Bing from the GPT-4 dropdown — it will instead guess what they want based on context.

    The new features bring a dash of the office functionality offered by the ChatGPT Enterprise plan to the standalone individual chatbot subscription. I don’t seem to have the multimodal update on my own Plus plan, but I was able to test out the Advanced Data Analysis feature, which seems to work about as expected. Once a file is fed to ChatGPT, the bot takes a few moments to digest it, and then it can do things like summarize data, answer questions, or generate data visualizations based on prompts.

    Read Article >