
One in 20 workers feeding sensitive data into ChatGPT

MyCena

Almost one in every 20 employees has submitted sensitive company information to ChatGPT, according to a report. The growing use of large language models such as ChatGPT in the workplace raises concerns that confidential business data will end up incorporated into the models. Data security service Cyberhaven detected and blocked attempts by 4.2% of workers at its client companies to input data into ChatGPT, mainly because of the risk of leaking confidential information. Examples include an executive who pasted a confidential strategy document into ChatGPT to create a presentation and a doctor who entered patient health information. As the use of ChatGPT and similar AI-based tools grows, the risk of such data breaches is likely to increase.