Google is updating its Bard AI chatbot to help developers write and debug code. Rivals like ChatGPT and Bing AI already support code generation, and Google says it has been “one of the top requests” since the company opened up access to Bard last month.
You can ask Bard to explain code snippets or code within GitHub repos, much as Microsoft-owned GitHub is doing with its ChatGPT-like Copilot assistant. Bard will also debug code you supply, or even its own code if it made errors or the output wasn’t what you were looking for.
Speaking of errors, Bailey admits that Bard “may sometimes provide inaccurate, misleading or false information while presenting it confidently,” much like many AI-powered chatbots. “When it comes to coding, Bard may give you working code that doesn’t produce the expected output, or provide you with code that is not optimal or incomplete,” says Bailey. “Always double-check Bard’s responses and carefully test and review code for errors, bugs and vulnerabilities before relying on it.” Bard will also cite the source of its code recommendations if it quotes them “at length.”
Google is pushing ahead with Bard despite reports that employees repeatedly criticized the chatbot and labeled it “a pathological liar.” Google has reportedly sidelined ethical concerns to keep up with rivals like OpenAI and Microsoft. In our tests comparing Bard, Bing, and ChatGPT, we found Bard to be less accurate than its rivals.