Samsung Prohibits Employees from Using ChatGPT Following Data Leaks
Samsung has reportedly banned its employees from using generative AI tools like ChatGPT and Google Bard after staff accidentally leaked sensitive company data through them. The temporary ban is a response to concerns over the security of these AI systems and how they handle the data submitted to them.
AI Tools Causing Unintended Data Leaks
Samsung employees unwittingly leaked company secrets by using AI tools to fix coding errors and to convert meeting notes into presentations. Data entered into ChatGPT can become part of OpenAI’s training data, which OpenAI retains to improve its systems. Although the leaked data has not surfaced publicly, Samsung is taking steps to prevent similar incidents in the future.
Security Measures Under Review
Samsung’s internal memo indicates that the company is reviewing security measures to create a safe environment for using generative AI to improve employee productivity and efficiency. Until those measures are in place, employees are temporarily barred from using generative AI tools, and failure to comply may result in disciplinary action, up to and including termination of employment.
Other Companies Also Banning ChatGPT
Samsung is not the only company restricting ChatGPT use among its staff. Other companies have voiced similar doubts about the security of AI-driven systems like ChatGPT and are imposing comparable restrictions to protect sensitive company information.