Alphabet Inc. is cautioning its employees about how they use chatbots, including its own Bard, even as it promotes the program globally, Reuters reports, citing people familiar with the matter.

According to those sources, Google’s parent company has advised employees not to enter confidential materials into AI chatbots. The company confirmed the guidance, citing its long-standing policy on safeguarding information.

Alphabet has also warned its engineers to avoid directly using computer code that chatbots can generate.

Asked for comment, the company said that while Bard can make unwanted code suggestions, it still helps programmers. Google added that it aims to be transparent about the limitations of its technology.

The concerns show how Google wants to avoid business harm from software it launched to compete with ChatGPT. At stake in Google’s race against ChatGPT backers OpenAI and Microsoft Corp. are billions of dollars in investment, as well as still-untold advertising and cloud revenue from new AI applications.

Google’s caution also reflects what is becoming a corporate security standard: warning staff against using publicly available chat programs.

As of January, 43% of professionals were using ChatGPT or other AI tools, often without telling their bosses, according to a survey of nearly 12,000 respondents, including employees of leading US companies, conducted by the networking site Fishbowl.

It was previously reported that the United Kingdom will host the first major global AI safety summit. The summit is expected to examine the risks posed by artificial intelligence and discuss how they can be mitigated through internationally coordinated action.