Big players, including Microsoft with its Bing AI (and Copilot), Google with Bard, and OpenAI with ChatGPT, are making AI chatbot technology previously restricted to test labs more accessible to the general public.
How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.”
Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
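The "vast autocomplete" framing can be made concrete with a toy model. The sketch below builds a bigram table from a tiny invented corpus and predicts the statistically most frequent next word; real LLMs use neural networks over billions of subword tokens, so this only illustrates the statistical idea, not any vendor's implementation.

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = (
    "the cat sat on the mat "
    "the cat saw the dog "
    "the cat chased the dog"
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Note that the model happily continues any prompt it has statistics for, with no notion of whether the continuation is true — the tendency toward plausible-but-unfactual output that Vincent describes.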
But there are so many more pieces to the AI landscape that are coming into play — and there are going to be problems — but you can be sure to see it all unfold here on The Verge.
Microsoft says listing the Ottawa Food Bank as a tourist destination wasn’t the result of ‘unsupervised AI’
A Microsoft travel guide for Ottawa, Canada, prominently recommended tourists visit the Ottawa Food Bank, as spotted by Paris Marx; the recommendation was removed after this article was originally published. (You can see the article in full here.) The food bank was the No. 3 recommendation on the list, sitting behind the National War Memorial and above going to an Ottawa Senators hockey game.
We reported in 2020 about Microsoft laying off journalists at Microsoft News and MSN to replace them with artificial intelligence. However, the company says its content is not generated by the AI we’re now used to in the form of large language models powering tools like the Bing chatbot or ChatGPT. Instead, the content in Microsoft’s story was generated through “a combination of algorithmic techniques with human review,” according to the company. As explained in a statement to The Verge from Jeff Jones, a senior director at Microsoft:
Google’s AI-powered Search Generative Experience (SGE) is getting a major new feature: it will be able to summarize articles you’re reading on the web, according to a Google blog post. SGE can already summarize search results for you so that you don’t have to scroll forever to find what you’re looking for, and this new feature is designed to take that further by helping you out after you’ve actually clicked a link.
You probably won’t see this feature, which Google is calling “SGE while browsing,” right away.
The most sought-after resource in the tech industry right now isn’t a specific type of engineer. It’s not even money. It’s an AI chip made by Nvidia called the H100.
These GPUs are “considerably harder to get than drugs,” Elon Musk has said. “Who’s getting how many H100s and when is top gossip of the valley rn,” OpenAI’s Andrej Karpathy posted last week.
Zoom has updated its terms of service and reworded a blog post explaining recent terms of service changes referencing its generative AI tools. The company now explicitly states that “communications-like” customer data isn’t being used to train artificial intelligence models for Zoom or third parties. What counts as “communications-like”? Basically, the content of your videoconferencing on Zoom.
Here’s the key passage from the newly revised terms:
Technology news outlet CNET has deleted thousands of older articles from its site, telling staff the deletions will improve its Google Search ranking, according to an internal memo. The news was first reported by Gizmodo.
Gizmodo reports that, since July, thousands of articles have been removed from CNET. In the memo, CNET says that so-called content pruning “sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results.” Stories slated to be “deprecated” are archived using the Internet Archive’s Wayback Machine, and authors are alerted at least 10 days in advance, according to the memo.
Newegg’s new AI-generated review summaries could make it easier to sift through user feedback while you search for PC parts and other tech. In addition to providing a short summary of what people are saying about a product, the AI also picks out pros and cons based on user reviews.
The feature leverages the technology behind OpenAI’s ChatGPT and lives within the “Reviews” tab toward the bottom of a product’s page. There, you’ll see a list of pros and cons that you can click on, allowing you to filter reviews by specific keywords and see where the AI got its information from. Below that, you’ll also see an AI-generated summary that combines all the key pieces of feedback into a short paragraph.
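The click-to-filter behavior described above reduces to simple keyword matching once the AI has surfaced its pro/con keywords. The sketch below is a hypothetical illustration with invented reviews and function names, not Newegg's actual implementation:

```python
# Invented sample reviews for illustration.
reviews = [
    "Fast shipping and the card runs cool under load.",
    "Runs cool, but the fans are loud at full speed.",
    "Loud fans, though performance is excellent.",
]

def filter_by_keyword(reviews, keyword):
    """Return only the reviews that mention the given pro/con keyword."""
    return [r for r in reviews if keyword.lower() in r.lower()]

# Clicking the "loud" con would show the two reviews that mention it.
print(filter_by_keyword(reviews, "loud"))
```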
In the world of generative AI, it is the big names that get the most airtime. Big Tech players like Microsoft and lavishly funded startups like OpenAI have earned invitations to the White House and the earliest of what will likely be many, many congressional hearings. They’re the ones that get big profile pieces to discuss how their technology will end humanity. As politicians in the US and beyond grapple with how to regulate AI, this handful of companies has played an outsize role in setting the terms of the conversation. And smaller AI players, both commercial and noncommercial, are feeling left out — while facing a more uncertain future.
Big AI — a term that’s long overdue for adoption — has been actively guiding potential AI policies. Last month, OpenAI, Meta, Microsoft, Google, Anthropic, and Amazon signed an agreement with the White House promising to invest in responsible AI and develop watermarking features to flag AI-generated content. Soon after, OpenAI, Microsoft, Anthropic, and Google formed the Frontier Model Forum, an industry coalition that aims to “promote the safe and responsible use of frontier AI systems.” It was set up to advance AI research, find best practices, and share information with policymakers and the rest of the AI ecosystem.
Aug 5: Apple’s job listings reveal possible paths for using generative AI.
Apple says it’s been working on AI research for years, and recent job listings show its current focus, reports the Financial Times.
Over the last few months, the company has posted dozens of AI jobs in the US, France, and China, looking to fill roles that could help build generative AI tools that use local processing on mobile devices, like this one:
We are seeking a candidate with a proven track record in applied ML research. Responsibilities in the role will include training large scale language and multimodal models on distributed backends, deployment of compact neural architectures such as transformers efficiently on device, and learning policies that can be personalized to the user in a privacy preserving manner.
Google’s AI-powered Search Generative Experience is getting a big new feature: images and video. If you’ve enabled the AI-based SGE feature in Search Labs, you’ll now start to see more multimedia in the colorful summary box at the top of your search results. Google’s also working on making that summary box appear faster and adding more context to the links it puts in the box.
SGE may still be in the “experiment” phase, but it’s very clearly the future of Google Search. “It really gives us a chance to, now, not always be constrained in the way search was working before,” CEO Sundar Pichai said on Alphabet’s most recent earnings call. “It allows us to think outside the box.” He then said that “over time, this will just be how search works.”
Google is planning to update Assistant with features powered by generative AI, according to a report from Axios. In an email obtained by the outlet, Google tells staff members that it has already started exploring a “supercharged” Assistant powered by the newest large language models (LLMs), similar to the technology behind ChatGPT and Google’s own Bard chatbot. According to the email, “A portion of the team has already started working on this, beginning with mobile.”
As part of this change, Google says it’s condensing the team that works on Assistant. The email obtained by Axios states that the company is “eliminating a small number of roles,” although it’s unclear how many employees are affected. According to Axios, Google laid off “dozens” of workers. The Verge reached out to Google to confirm this, and we’ll update this article if we get more information.
The ChatGPT for Android app is now available in the Google Play Store, launching a few months after the free iOS app brought the chatbot to iPhones and iPads. According to a company tweet, it’s available first in the US, India, Bangladesh, and Brazil, with other countries set to follow later, mimicking the staged rollout we saw for the iOS version.
On July 27th, OpenAI announced additional availability, saying the Android ChatGPT app is now available in Argentina, Canada, France, Germany, Indonesia, Ireland, Japan, Mexico, Nigeria, the Philippines, the UK, and South Korea.
Jul 27: Meta keeps calling its new AI model open source when it’s not.
On Meta’s Q2 earnings call Wednesday, Mark Zuckerberg called Llama 2, the company’s latest generative AI model, an “open source project.”
Except it’s not actually open source, since its license has usage restrictions. Here’s Stefano Maffulli, the executive director for the Open Source Initiative:
‘Open Source’ means software under a license with specific characteristics, defined by the Open Source Definition (OSD). Among other requirements, for a license to be Open Source, it may not discriminate against persons or groups or fields of endeavor (OSD points 5 and 6). Meta’s license for the LLaMa models and code does not meet this standard; specifically, it puts restrictions on commercial use for some users (paragraph 2) and also restricts the use of the model and software for certain purposes (the Acceptable Use Policy).
From “Meta’s Llama 2 license is not open source” [Voices of Open Source]
During the AWS Summit in New York on Wednesday, Amazon launched Agents for Bedrock, which will let companies build AI apps that can automatically do tasks on their own, like booking a flight for users instead of just telling them about it. AI agents are assistants that actually get a restaurant reservation instead of just giving suggestions on where to eat.
“I believe this will supercharge developers who wanted an easier way to build agents and at the same time customize the data the models read,” Swami Sivasubramanian, vice president of data and machine learning at AWS, tells The Verge. “Building agents took so much time even with how advanced generative AI is now, but we’re making it so developers can access exactly the models they need.”
OpenAI shuttered a tool that was supposed to tell human writing from AI-generated text, citing its low accuracy rate. In an (updated) blog post, OpenAI said it decided to end its AI classifier as of July 20th. “We are working to incorporate feedback and are currently researching more effective provenance techniques for text,” the company said.
As it shuts down the tool to catch AI-generated writing, OpenAI said it plans to “develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.” There’s no word yet on what those mechanisms might be, though.
Jul 25: Reporting from inside the AI factory.
The Verge’s Josh Dzieza appeared on Vox’s Today, Explained podcast to discuss his recent report on the human element involved in creating the AI tools everyone is talking about lately.
As he wrote then, “a vast tasker underclass is emerging — and not going anywhere.”
Watch out, Bard — Bing’s AI chatbot is rolling out on Google Chrome and Safari. As first spotted by Windows Latest (via 9to5Google), Microsoft is testing letting users on both browsers access the tool.
“We are flighting access to Bing Chat in Safari and Chrome to select users as part of our testing on other browsers,” Caitlin Roulston, Microsoft’s director of communications, says in a statement to The Verge. “We are excited to expand access to even more users once our standard testing procedures are complete.”
Apple is using an internal chatbot to help its employees “prototype future features, summarize text and answer questions based on data it has been trained with,” says Bloomberg’s Mark Gurman in Power On today.
Apple hasn’t been sure what it wants to do with its Apple GPT chatbot project on the customer-facing side yet, but Gurman’s report shed some light on at least its internal chatbot uses. According to the newsletter, Apple is looking at ways to expand the use of generative AI within its organization, with one possibility being giving the tool to its AppleCare support staff to better help customers dealing with issues.
Since launching in November, OpenAI’s ChatGPT tool has reached a number of users at a rate that’s astounding for anything outside of Threads — now the company says it’s ready to release an app for Android.
The pace of AI development is moving at breakneck speed. And as Meta showed this week with the commercial release of its second-generation, open-source-ish Llama model, the competitive landscape is being constantly redrawn.
I’ve spent the past few days reading reactions to the news and talking to people in the AI field. Many believe that Llama 2 is the industry’s most important release since ChatGPT last November, though it obviously won’t generate as much press buzz as a developer-facing release. Companies will now be able to more easily and cheaply build bespoke bots with proprietary data that would never be accessible externally, like the internal AI bot that Stripe recently rolled out for its employees. This will make AI chatbots of all kinds more useful and personalized, which is an exciting step in the right direction.
Jul 21: OpenAI updates its API.
Following user complaints of GPT-4 becoming “slower and dumber,” OpenAI said it has made improvements to the API for its latest model.
The company asked users to send in evaluations so it can continue to improve its models.
We are working hard to ensure that new versions result in improvements across a comprehensive range of tasks. That said, our evaluation methodology isn’t perfect, and we’re constantly improving it.
Jul 21: Sergey Brin, back at work.
Months after we heard that Google cofounder Sergey Brin had reappeared at Google amid the rise of ChatGPT, a report from The Wall Street Journal suggests his return might be more permanent than we thought.
Sources tell the WSJ that Brin has been visiting Google’s offices three to four days per week to help build Google’s Gemini AI model. Brin has not only “convened weekly discussions of new AI research with Google employees,” but has also taken point in “the hiring of sought-after researchers,” the WSJ reports.
The White House is bringing in AI’s top seven companies Friday to make a series of voluntary promises to protect users.
The companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI — have all agreed to a series of asks from the White House to address many of the risks posed by artificial intelligence. The promises consist of investments in cybersecurity, discrimination research, and a new watermarking system informing users when content is AI-generated.
The newest feature of ChatGPT is designed to help you type a little less. It’s called “custom instructions,” and it gives you a place to tell your chatbot the things it should always know about you and how you’d like it to respond to your questions. The feature is in beta, works everywhere ChatGPT does — it should be particularly helpful on mobile devices — and is available today on an opt-in basis to ChatGPT Plus subscribers everywhere but the UK and EU. (Those are hopefully coming soon.)
“Right now, if you open up ChatGPT,” says Joanne Jang, who works on model behaviors and product at OpenAI, “it doesn’t know much about you. If you start a new thread, it forgets everything you’ve talked about in the past. But there are things that might apply across all conversations.”
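One common way to implement this kind of feature is to prepend the stored instructions to every new conversation as a standing system message. The sketch below shows that general pattern; the function name and message shapes are illustrative assumptions, not OpenAI's internal design.

```python
def build_messages(custom_instructions, history, user_message):
    """Start every thread with the user's standing instructions so the
    model 'remembers' them, then append prior turns and the new prompt.
    Illustrative pattern only, not OpenAI's actual implementation."""
    messages = [{
        "role": "system",
        "content": f"The user's custom instructions:\n{custom_instructions}",
    }]
    messages.extend(history)  # earlier turns in this thread, if any
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_messages(
    "Answer concisely. I am a Python developer.",
    [],
    "How do I read a file?",
)
print(msgs[0]["role"])  # the system message carries the custom instructions
```

Because the instructions are injected at the start of every thread, the model appears to remember your preferences even though each conversation still starts from a blank slate.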
A few months back, everyone wondered who would win the AI arms race. Microsoft aligned itself with OpenAI. Google launched Bard. Meta began working on its own large language model, LLaMA. Other companies began thinking of launching AI platforms, and curious users pitted the models against each other.
But a recent deal suggests we may also see a growing number of partnerships, not just head-to-head competition. Earlier this week, Meta offered its Llama 2 large language model for free under a relatively permissive (though not open source) license and brought it to Microsoft’s Azure platform. The decision highlighted the benefits of interoperability in AI — and as more companies join the field, it probably won’t be the last of its kind.
The New York Times cites anonymous sources in a report saying Google demonstrated Genesis for media execs from the Times, Washington Post, and Wall Street Journal owner News Corp, presenting “responsible” technology that takes in facts and spits out news copy. Two execs mentioned in the article “said it seemed to take for granted the effort that went into producing accurate and artful news stories,” while another saw it as more of a personal assistant / helper.
Asked about the report, Google spokesperson Jenn Crider provided the following statement to The Verge: