Google warns its Staff about using Chatbots, including its very own Bard


According to reports, Google is cautioning staff members against sharing sensitive material with AI chatbots, including ChatGPT and its own Bard. The warning, according to Reuters, is intended to protect confidential data that large language models (LLMs) such as Bard and ChatGPT may use to train themselves and reveal later. Human reviewers who act as moderators can also examine sensitive information. According to the report, Google developers have also been cautioned against directly using code produced by AI chatbots.

Leveraging the Data

Google Bard’s FAQ page states that whenever a user interacts with the chatbot, the company records usage data, feedback, and conversation history. According to the page, “That data helps us provide, improve, and develop Google products, services, and machine-learning technologies.” The report claims that Google staff members can still use Bard for other tasks. The warning marks a considerable shift from Google’s earlier stance on Bard: when the company released the chatbot earlier this year to compete with ChatGPT, employees were asked to test it rigorously.

The caution Google issued to its staff mirrors a security policy that many businesses are adopting. Some companies prohibit the use of publicly accessible AI chatbots altogether; Samsung is said to have banned ChatGPT after discovering that certain employees had discussed sensitive information with it. Google said in a statement that it wished to be “transparent” about Bard’s limits, noting that “Bard can make undesirable code suggestions, but it still aids programmers.” The AI chatbot can also quickly create pictures, analyse code, edit lengthy documents, and even write emails.

Advertising and Developing Platforms

Cloudflare CEO Matthew Prince, discussing the security risks of free-to-use AI chatbots, compared sharing private information with them to “letting a bunch of PhD students loose in all of your private records.” Cloudflare, which provides cybersecurity services to organisations, promotes a feature that allows companies to tag and block specific data from flowing externally. Microsoft is also developing a private version of the ChatGPT chatbot, under the same name, for business clients.

Watch Google’s Deep Dive Into Bard AI Chatbot

Through their agreement, Microsoft and OpenAI are able to advertise and develop platforms under the ChatGPT brand. Microsoft is said to have built the proprietary ChatGPT chatbot on its own cloud infrastructure. It is not yet known whether Microsoft has placed the same restrictions on the use of Bing Chat that Google has placed on Bard. Yusuf Mehdi, chief marketing officer for Microsoft’s consumer division, is quoted in the report as saying that “companies are taking a duly conservative standpoint,” referring to the company’s work on exclusive ChatGPT services for corporate clients.

