From ChatGPT to Gemini: how AI is rewriting the internet

Big players, including Microsoft with Copilot, Google with Gemini, and OpenAI with ChatGPT, are making AI chatbot technology previously restricted to test labs more accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
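The "vast autocomplete" description can be made concrete with a toy model. The sketch below is a hypothetical bigram counter, orders of magnitude simpler than a real LLM, but it shows what analyzing "the statistical properties of the language" means in practice: count which word follows which, then guess the most frequent follower.

```python
from collections import Counter, defaultdict

# Toy illustration of autocomplete-style language modeling (nothing like a
# real LLM): tally which word follows which, then predict the most frequent
# follower. The corpus is made up for the example.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real model predicts over tens of thousands of tokens with learned weights rather than raw counts, but the output is still a plausibility ranking, not a lookup in a database of facts — which is exactly why these systems can state falsehoods fluently.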

There are many more pieces of the AI landscape coming into play, and many name changes (remember when we were talking about Bing and Bard last year?), but you can be sure to see it all unfold here on The Verge.

  • Brave brings its AI browser assistant to Android.

    The privacy-focused Brave browser launched its AI assistant, Leo, last year on the desktop, and now it’s available for Android, following other mobile AI-connected browsers like Edge and Arc (only on iOS).

    Leo promises summaries, transcriptions, translations, coding, and more (while acknowledging that LLMs may “hallucinate” erroneous info). As for privacy, Brave claims, “Inputs are always submitted anonymously through a reverse-proxy and are not retained or used for training.”
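Brave hasn’t published its implementation in this snippet, but the reverse-proxy idea can be sketched conceptually: the proxy forwards the prompt while dropping anything that identifies the sender. Everything below (the header names, the request shape) is invented for illustration and is not Brave’s actual code.

    ```python
    # Conceptual sketch of a privacy reverse proxy: strip identifying metadata
    # before forwarding a request upstream. Hypothetical illustration only.
    IDENTIFYING_HEADERS = {"cookie", "x-forwarded-for", "user-agent", "authorization"}

    def anonymize(request: dict) -> dict:
        """Drop identifying headers and the source address; keep only the payload."""
        headers = {k: v for k, v in request.get("headers", {}).items()
                   if k.lower() not in IDENTIFYING_HEADERS}
        return {"headers": headers, "body": request["body"]}  # no client IP retained

    forwarded = anonymize({
        "client_ip": "203.0.113.7",
        "headers": {"Cookie": "session=abc", "Content-Type": "application/json"},
        "body": '{"prompt": "summarize this page"}',
    })
    print(forwarded)  # upstream model sees the prompt, not who sent it
    ```

    The design point is that the AI backend only ever sees what the proxy chooses to forward, which is what makes a "not retained or used for training" promise enforceable at the network boundary.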


  • Google Cloud links up with Stack Overflow for more coding suggestions on Gemini.

The partnership will also let developers on Gemini for Google Cloud (not to be confused with the Gemini chatbot) access Stack Overflow directly. The new features will be available in the first half of 2024.

    Stack Overflow, which laid off 28 percent of its staff last year amid the boom in AI coding, will be able to use Google’s AI services to help “accelerate content approval process and further optimize forum engagement experiences.”


  • From Eliza to ChatGPT: why people spent 60 years building chatbots

    People have been trying to talk to computers for almost as long as they’ve been building computers. For decades, many in tech have been convinced that this was the trick: if we could figure out a way to talk to our computers the way we talk to other people, and for computers to talk back the same way, it would make those computers easier to understand and operate, more accessible to everyone, and just more fun to use.

    ChatGPT and the current revolution in AI chatbots is really only the latest version of this trend, which extends all the way back to the 1960s. That’s when Joseph Weizenbaum, a professor at MIT, built a chatbot named Eliza. Weizenbaum wrote in an academic journal in 1966 that Eliza “makes certain kinds of natural language conversation between man and computer possible.” He set up the bot to act as a therapist, a vessel into which people could pour their problems and thoughts. 

  • Google CEO says Gemini AI diversity errors are ‘completely unacceptable’

    Google CEO Sundar Pichai. (Illustration by Cath Virginia / The Verge)

    The historically inaccurate images and text generated by Google’s Gemini AI have “offended our users and shown bias,” CEO Sundar Pichai told employees in an internal memo obtained by The Verge.

    Last week, Google paused Gemini’s ability to generate images after it was widely discovered that the model generated racially diverse, Nazi-era German soldiers, US Founding Fathers who were non-white, and even inaccurately portrayed the races of Google’s own co-founders. While Google has since apologized for “missing the mark” and said it’s working to re-enable image generation in the coming weeks, Tuesday’s memo is the first time the CEO has widely addressed the controversy.

  • Wendy’s betrays spicy nugget lovers everywhere and will introduce surge pricing

    Emma Roth, Feb 27


    Imagine waiting in line at your local Wendy’s drive-thru during the lunch rush, only to pull up to the menu board and realize that the spicy chicken nuggets you’ve been craving all day will cost you a dollar extra. That nightmare may soon become a reality, because Wendy’s plans on testing surge pricing that will increase the price of its spicy nuggets, burgers, Frostys, and other favorites during its busiest times.

    During an earnings call earlier this month, Wendy’s CEO Kirk Tanner said the fast food chain plans on investing $20 million to roll out digital menu boards to US-based restaurants by the end of 2025. As part of the change, Wendy’s will also introduce something called “dynamic prices” that will change the prices on the digital menu boards based on demand. It sounds similar to the surge pricing system implemented by Uber, which charges riders higher rates in busy areas.

  • Microsoft now offers Copilot GPTs to help you work out, find recipes, and more.

    Emma Roth, Feb 26

    When you open Microsoft Copilot, you’ll notice a new list of Copilot GPTs tailored for fitness training, designing, planning vacations, and helping you cook. You’ll also be able to create your own Copilot GPTs soon, as Microsoft corporate vice president Jordi Ribas says the feature is currently in testing.


  • Microsoft partners with Mistral in second AI deal beyond OpenAI


    Microsoft has announced a new multiyear partnership with Mistral, a French AI startup that’s valued at €2 billion (about $2.1 billion). The Financial Times reports that the partnership will include Microsoft taking a minor stake in the 10-month-old AI company, just a little over a year after Microsoft invested more than $10 billion into its OpenAI partnership.

    The deal will make Mistral’s open and commercial language models available on Microsoft’s Azure AI platform, making Mistral the second company after OpenAI to offer a commercial language model on Azure. Much like the OpenAI partnership, Microsoft’s partnership with Mistral will also focus on the development and deployment of next-generation large language models.

  • Gemini’s photo generator ‘will be back in a few weeks.’

    Google DeepMind CEO Demis Hassabis, in a keynote at Mobile World Congress, acknowledged that the model applied its range-of-people tuning “too bluntly.” Hassabis said Gemini’s photo generation feature, which was paused last week, is being fixed to offer a narrower range of people where historical accuracy demands it.


  • Glenn or Glenda?

    Wes Davis, Feb 24

    The Vergecast team threw out some ideas yesterday for what random names Google will use for future chatbots. I like “Fancy Geoff.”

    Give it a listen if you want to catch up on stuff like Gemini’s first big controversy, the adventures of Apple and post-quantum cryptography, and your place in Reddit’s AI-training corpus. Also, two lightning rounds!


  • Google explains Gemini’s ‘embarrassing’ AI pictures of diverse Nazis

    Emma Roth, Feb 23


    Google has issued an explanation for the “embarrassing and wrong” images generated by its Gemini AI tool. In a blog post on Friday, Google says its model produced “inaccurate historical” images due to tuning issues. The Verge and others caught Gemini generating images of racially diverse Nazis and US Founding Fathers earlier this week.

    “Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Prabhakar Raghavan, Google’s senior vice president, writes in the post. “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.”

  • Microsoft says its automated AI red teaming tool finds malicious content “in a matter of hours.”

    PyRIT, or Python Risk Identification Toolkit, can point human evaluators to “hot spot” categories where an AI system is more likely to generate harmful results.

    Microsoft used PyRIT while red-teaming its Copilot services (intentionally trying to get AI systems to go against their safety protocols), generating thousands of malicious prompts and scoring each response for potential harm in categories that security teams can now focus on.
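    The workflow described above (generate adversarial prompts, score the responses, tally hot-spot categories) can be sketched in a few lines. This is a hypothetical illustration in the spirit of that process, not PyRIT’s actual API; every name and the toy scoring rule are invented.

    ```python
    # Hypothetical sketch of automated red-teaming: send adversarial prompts,
    # score each response for harm, and surface "hot spot" categories.
    # Not PyRIT's real API; the blocklist scorer is a stand-in for a real
    # harm classifier.
    from collections import Counter

    BLOCKLIST = {"bomb", "credential", "exploit"}  # toy harm indicators

    def harm_score(response: str) -> int:
        return sum(word in response.lower() for word in BLOCKLIST)

    def red_team(target, prompts_by_category):
        """Run prompts against a target model; tally harmful responses per category."""
        hot_spots = Counter()
        for category, prompts in prompts_by_category.items():
            for prompt in prompts:
                if harm_score(target(prompt)) > 0:
                    hot_spots[category] += 1
        return hot_spots

    # Stand-in for a model under test.
    fake_model = lambda p: "here is an exploit" if "hack" in p else "I can't help with that"
    print(red_team(fake_model, {"cybercrime": ["how to hack X"], "health": ["dose info"]}))
    ```

    The value of automating this loop is scale: thousands of prompts can be triaged in hours, so the human evaluators spend their time only on the categories the tallies flag.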


  • Windows is getting its own Magic Eraser to AI-modify your photos

    This good dog is about to go off-leash. (GIF: Microsoft)

    Google and Samsung aren’t the only ones baking magical AI selective photo erasers into their devices — they’re about to become table stakes for Windows PCs too. Microsoft has just announced Generative erase, a feature that lets you do similar things in the Photos app that comes bundled with Windows.

    Above and below, you can see how Microsoft disappears a dog’s leash and some unintentional photobombers using the power of generative AI:

  • Google cut a deal with Reddit for AI training data

    Emma Roth, Feb 22


    Google is getting AI training data from Reddit as part of a new partnership between the two companies. In an update on Thursday, Reddit announced it will start providing Google “more efficient ways to train models.”

    The collaboration will give Google access to Reddit’s data API, which delivers real-time content from Reddit’s platform. This will provide “Google with an efficient and structured way to access the vast corpus of existing content on Reddit,” while also allowing the company to display content from Reddit in new ways across its products.

  • Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

    The results for “generate an image of the Founding Fathers,” as of February 21st. (Screenshot: Adi Robertson / The Verge)

    Google has apologized for what it describes as “inaccuracies in some historical image generation depictions” with its Gemini AI tool, saying its attempts at creating a “wide range” of results missed the mark. The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.

    “We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” says the Google statement, posted this afternoon on X. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

  • ChatGPT spat out gibberish for many users overnight before OpenAI fixed it

    Wes Davis, Feb 21


    ChatGPT users began reporting odd responses from the chatbot last night that included switching between languages, getting stuck in loops, or even repeatedly correcting itself. Some of the responses were just pure gibberish.

    While discussing the Jackson family of musicians, the chatbot explained to a Reddit user that “Schwittendly, the sparkle of tourmar on the crest has as much to do with the golver of the ‘moon paths’ as it shifts from follow.”

  • One month with Microsoft’s AI vision of the future: Copilot Pro


    Microsoft’s Copilot Pro launched last month as a $20 monthly subscription that provides access to AI-powered features inside some Office apps, alongside priority access to the latest OpenAI models and improved image generation.

    I’ve been testing Copilot Pro over the past month to see if it’s worth the $20 subscription for my daily needs and just how good or bad the AI image and text generation is across Office apps like Word, Excel, and PowerPoint. Some of the Copilot Pro features are a little disappointing right now, whereas others are truly useful improvements that I’m not sure I want to live without.

  • Sora can create video collages, too.

    Wes Davis, Feb 17

    One of OpenAI’s employees showed off another ability of the company’s new text-to-video generator.

    This is some impressive AI creation of course, but what in blue blazes is happening in the upper right frame here?


  • OpenAI can’t register ‘GPT’ as a trademark — yet


    The US Patent and Trademark Office (PTO) has denied OpenAI’s application to register the word GPT, which stands for “generative pre-trained transformer,” saying GPT is too general a term to register and that a trademark could prevent competitors from correctly describing their products as a GPT.

    OpenAI argued in its application that GPT is not a descriptive word — that GPT isn’t such a general term that consumers would “immediately understand” what it means.

  • At least in Canada, companies are responsible when their customer service chatbots lie to their customers.

    A man named Moffatt was booking an Air Canada flight and asked for a reduced rate because of bereavement. The chatbot assured him this was possible and that the reduced fare would come as a rebate. When he went to submit the rebate, the airline refused to refund him.

    In February of 2023, Moffatt sent the airline a screenshot of his conversation with the chatbot and received a response in which Air Canada “admitted the chatbot had provided ‘misleading words.’”

    He took the airline to court and won.


  • Scientists are extremely concerned about this rat's “dck.”

    And for good reason: this and several other nonsensical AI-generated images were openly credited to Midjourney in a peer-reviewed science paper published by Frontiers this week. The gibberish annotations and grotesquely inaccurate images are one example of the risks generative AI poses to the accuracy of academic research.

    Frontiers has responded and removed the offending paper:

    Our investigation revealed that one of the reviewers raised valid concerns about the figures and requested author revisions. The authors failed to respond to these requests. We are investigating how our processes failed to act on the lack of author compliance with the reviewers’ requirements.


  • Sora’s AI-generated video looks cool, but it’s still bad with hands.

    OpenAI’s still-in-limited-testing new text-to-video generation model, Sora, is very impressive, especially compared to widely available AI video generators like Runway Gen-2 and Google’s Imagen.

    As you can see in the clips, though, there are issues — basketballs go through the sides of metal hoops, dogs pass through each other while walking, and hands are.... not always hands.


  • You sound like a bot


    In 2018, a viral joke started going around the internet: scripts based on “making a bot watch 1,000 hours” of just about anything. The premise (concocted by comedian Keaton Patti) was that you could train an artificial intelligence model on vast quantities of Saw films, Hallmark specials, or Olive Garden commercials and get back a bizarre funhouse-mirror version with lines like “lasagna wings with extra Italy” or “her mouth is full of secret soup.” The scripts almost certainly weren’t actually written by a bot, but the joke conveyed a common cultural understanding: AI was weird.

    Strange AI was everywhere a few years ago. AI Dungeon, a text adventure game genuinely powered by OpenAI’s GPT-2 and GPT-3, touted its ability to produce deeply imagined stories about the inner life of a chair. The first well-known AI art tools, like Google’s computer vision program Deep Dream, produced unabashedly bizarre Giger-esque nightmares. Perhaps the archetypal example was Janelle Shane’s blog AI Weirdness, where Shane trained models to create physically impossible nuclear waste warnings or sublimely inedible recipes. “Made by a bot” was shorthand for a kind of free-associative, nonsensical surrealism — both because of the models’ technical limitations and because they were more curiosities than commercial products. Lots of people had seen what “a bot” (actually or supposedly) produced. Fewer had used one. Even fewer had to worry about them in day-to-day life.

  • How much electricity does AI consume?


    It’s common knowledge that machine learning consumes a lot of energy. All those AI models powering email summaries, regicidal chatbots, and videos of Homer Simpson singing nu-metal are racking up a hefty server bill measured in megawatt-hours. But no one, it seems — not even the companies behind the tech — can say exactly what the cost is.

    Estimates do exist, but experts say those figures are partial and contingent, offering only a glimpse of AI’s total energy usage. This is because machine learning models are incredibly variable, able to be configured in ways that dramatically alter their power consumption. Moreover, the organizations best placed to produce a bill — companies like Meta, Microsoft, and OpenAI — simply aren’t sharing the relevant information. (Judy Priest, CTO for cloud operations and innovations at Microsoft, said in an email that the company is currently “investing in developing methodologies to quantify the energy use and carbon impact of AI while working on ways to make large systems more efficient, in both training and application.” OpenAI and Meta did not respond to requests for comment.)
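    The units here are worth pinning down: energy is power multiplied by time, and data-center bills come in megawatt-hours (MWh). A back-of-the-envelope sketch, with every number below invented purely for illustration rather than measured from any real training run:

    ```python
    # Back-of-the-envelope energy arithmetic: energy (MWh) = power (MW) x time (h).
    # Cluster size, per-GPU draw, and duration are all assumptions for illustration.
    gpus = 1000
    watts_per_gpu = 700          # assumed average draw per accelerator
    hours = 30 * 24              # a hypothetical month-long training run

    energy_mwh = gpus * watts_per_gpu * hours / 1_000_000  # watt-hours -> MWh
    print(f"{energy_mwh:.0f} MWh")  # 504 MWh
    ```

    The variability experts point to lives in those inputs: change the hardware, the utilization, or the run length and the total swings by orders of magnitude, which is why outside estimates stay so rough.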

  • How AI copyright lawsuits could make the whole industry go extinct


    Our new Thursday episodes of Decoder are all about deep dives into big topics in the news, and for the next few weeks, we’re going to stay focused on one of the biggest topics of all: generative AI. 

    There’s a lot going on in the world of generative AI, but maybe the biggest is the increasing number of copyright lawsuits being filed against AI companies like OpenAI and Stability AI. So for this episode, we brought on Verge features editor Sarah Jeong, who’s a former lawyer just like me, and we’re going to talk about those cases and the main defense the AI companies are relying on in those copyright cases: an idea called fair use.

  • In defense of busywork


    In the show Severance’s dystopian workplace — is there any other kind? — employees spend their days studying arrays of numbers bobbing on their screens. Whenever a cluster of numbers makes an employee feel unsettled, the employee clicks on it to discard it. The work’s value is not apparent to the workers, who are told only that they are “refining macro-data files,” but the job is nevertheless satisfying to complete. When one protagonist, Helly, tosses enough bad numbers, she is greeted with a Game Boy-esque animation of the company’s founder and CEO, who tells her, “I love you.”

    The task is a parody of corporate busywork, the time-consuming, mind-numbing, manager-placating chores that fill our days. Most jobs involve some degree of busywork, and it is generally maligned. A Microsoft WorkLab survey published last January reported that 85 percent of respondents said they hoped artificial intelligence tools would automate all busywork, freeing up their time for more fulfilling activities such as “engaging with others.” These respondents have clearly never sat through a five-hour conversation about a three-word headline, but I digress: busywork has been cast as the enemy of innovation, and AI has been cast as the solution. “Eliminating busywork” has become AI proponents’ “making the world a better place.” But would it?
