The Financial Times reports that Google, OpenAI, Microsoft, and Meta are pressing the UK's AI Safety Institute for clarity on how long AI model testing will take and on what happens if risks are found. The institute has begun testing existing models and has access to unreleased ones, such as Google's Gemini Ultra.
The companies volunteered to have their AI models tested after the AI Safety Summit in November, but no policy currently prevents them from releasing a model that testing finds risky.