Corporate users of Slack, the messaging service owned by Salesforce, are unhappy that the service trains its Slack AI on user messages, Ars Technica reports.
The crux of the dispute is that when asked directly, Slack representatives say the company does not use its customers' data to train AI, while the service's privacy policy says otherwise.
Slack’s privacy policy states, in part, that “Machine learning (ML) and artificial intelligence (AI) are useful tools that we use in a limited way to enhance our product mission. To develop AI/ML models, our systems analyze customer data (such as messages, content, and files) submitted to Slack, as well as other information (including usage information) as defined in our privacy policy and your customer agreement.”
Meanwhile, the Slack AI page says: “Work without worry. Your data is your data. We don’t use it to train Slack AI.”
Because of this discrepancy, users called on Slack to update its privacy guidelines to make it clear how data is used for Slack AI or any future AI features. According to a Salesforce representative, the company agreed that an update was needed.
A company spokesperson said that the policy update clarifies that Slack does not “develop LLMs or other generative models using customer data.”
The update also clarifies that “Slack AI uses off-the-shelf LLMs in which the models do not store customer data,” ensuring that “customer data never leaves Slack’s trust boundary and LLM providers never have any access to customer data.”
However, these changes do not address users’ core complaint: they never explicitly consented to Slack using their information for AI training. The conflict between users and the service is not over, and the company continues to receive angry feedback and lose corporate customers.