Google Warns Employees About Chatbot Usage, Here’s Why

K.C. Sabreena Basheer Last Updated : 28 Jun, 2023
3 min read

Alphabet Inc., the parent company of Google, is cautioning its employees about using chatbots, including ChatGPT and its own creation, Bard. The warning comes as the company expands the reach of its chatbot program globally. Let’s delve into the details of this development and understand the underlying concerns.

Also Read: Samsung Bans Employees From Using Generative AI Due to Security Concerns

Safeguarding Confidential Information

According to sources familiar with the matter, Alphabet has instructed its staff to refrain from entering confidential materials into AI chatbots, including Bard. This directive aligns with the company’s longstanding policy on information protection. While chatbots such as Bard and ChatGPT are designed to converse with users and provide responses, there is a potential risk of data leakage, as the AI models can reproduce data absorbed during training.

Also Read: AI Is Stealing Your Data – Say Experts


Cautions for Engineers and Programmers

In addition to advising against entering confidential information, Alphabet has alerted its engineers to avoid directly using computer code generated by chatbots. While Bard may offer coding suggestions, programmers should review them carefully before use. Google aims to maintain transparency by acknowledging the limitations of its technology and ensuring it does not cause unintended consequences.

Also Read: Apple Follows its Rival Samsung, Bans ChatGPT Over Privacy Fears

A Competitive Landscape and Business Implications

Google’s wariness about chatbot usage stems from its competition with ChatGPT, which is backed by OpenAI and Microsoft Corp. The stakes are high, with billions of dollars in investments, potential advertising revenue, and cloud revenue riding on the success of these AI programs. Google’s precautions are part of an industry-wide trend, with other companies such as Samsung, Amazon.com, and Deutsche Bank also implementing guardrails for AI chatbots.

Also Read: Microsoft and OpenAI Clash Over AI Integration

Employees’ Use of AI Tools and Security Standards

A survey of professionals at top US-based companies found that approximately 43% of respondents used AI tools like ChatGPT, often without informing their superiors. To mitigate potential risks, companies worldwide, reportedly including Apple, have adopted security standards that warn employees against using publicly available chat programs.


Privacy Concerns and Regulatory Dialogue

Google has engaged in detailed discussions with Ireland’s Data Protection Commission to address concerns related to privacy and comply with regulatory requirements. A recent Politico report stated that the launch of Bard in the European Union was postponed pending additional information on its privacy implications. Google’s updated privacy notice advises users not to include confidential or sensitive information in their Bard conversations.

Also Read: Europe’s Data Protection Board Forms ChatGPT Privacy Task Force

Mitigating Risks with Innovative Solutions

Companies are actively developing software solutions to address these concerns. For instance, Cloudflare, a leading provider of cybersecurity and cloud services, lets businesses tag certain data and restrict it from being transmitted externally. Google and Microsoft also offer conversational tools to business customers that preserve data privacy by not incorporating customer data into public AI models. Although Bard and ChatGPT save users’ conversation history by default, users can delete it.
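The article does not describe how such tagging and restriction works under the hood, but a minimal sketch of the general idea is a pre-flight filter that screens prompts for tagged patterns before they leave the corporate network. The pattern list and the screen_prompt helper below are illustrative assumptions for this sketch, not Cloudflare’s, Google’s, or any vendor’s actual API.

```python
import re

# Hypothetical patterns a company might tag as confidential (illustrative only).
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\bAPI[_ ]?KEY\s*[:=]\s*\S+", re.IGNORECASE),  # credentials
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                      # SSN-like identifiers
    re.compile(r"\bPROJECT[_ ]CODENAME\b", re.IGNORECASE),     # internal codenames
]

def screen_prompt(prompt: str) -> str:
    """Raise an error if the prompt contains tagged data; otherwise pass it through."""
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt contains tagged confidential data; not sent externally.")
    return prompt

if __name__ == "__main__":
    # A harmless prompt passes the filter.
    print(screen_prompt("Summarize this public press release for me."))
    # A prompt containing tagged data is blocked before reaching any external chatbot.
    try:
        screen_prompt("Here is our API_KEY=sk-12345, please debug the call.")
    except ValueError as err:
        print("Blocked:", err)
```

In practice, such guardrails typically run at the network edge or in a proxy, so the check happens regardless of which chatbot an employee tries to use.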


Our Say

Google’s warning to its staff regarding chatbot usage, including Bard, reflects the company’s commitment to data privacy and security. As AI technologies continue to evolve, it is crucial for organizations to implement safeguards and promote responsible usage. The dynamic landscape of AI chatbots demands a delicate balance between innovation and risk mitigation. By addressing these concerns, companies like Google are working towards a future where AI technologies can be harnessed safely and ethically.

Sabreena Basheer is an architect-turned-writer who's passionate about documenting anything that interests her. She's currently exploring the world of AI and Data Science as a Content Manager at Analytics Vidhya.
