Google seems to fear Bard can leak confidential info, reportedly tells employees to be wary
During Google's I/O conference, most of the talks were about AI. The company launched Google Bard as an experimental answer to OpenAI's ChatGPT (both are often called LLM chatbots, short for Large Language Model chatbots). Behind the scenes, though, Google doesn't seem quite as enthusiastic about AI chatbots.
Reuters now reports that Google has allegedly warned employees to be careful when using LLM chatbots, including Google's own Bard.
Other companies such as Samsung and Amazon also reportedly have guardrails when it comes to AI.
Google is worried about privacy and security, allegedly advises employees to be extra careful when using Google Bard
It seems Google has identified privacy and security issues that could arise when employees use Bard. For one, the company has reportedly told its developers not to use code generated by chatbots (Bard's code-generation feature was showcased at the Google I/O conference).
Mainly, the problem seems to be company secrets. If you've been following the mobile tech world (or the tech world in general), you've seen how many substantial leaks have spoiled big product reveals in the past few years.
Basically, if employees enter confidential info into Bard or ChatGPT, that info could become public. The same applies to code: pasted snippets could expose security-sensitive details to potential hackers who could take advantage of them.
In a comment to Reuters, Google said that it strives to be transparent about Bard's limitations, adding that when it comes to code, Bard can be a helpful tool, although it may sometimes make undesired suggestions.
Meanwhile, Google is reportedly in talks with Ireland's Data Protection Commission after delaying Bard's launch in the EU, again over privacy concerns raised by the Irish regulator.