Samsung Data Breach Highlights ChatGPT Concerns In The Workplace

Mark DiPietro
Technology Section Editor

The release of ChatGPT in November 2022 by OpenAI has already changed the way we work and learn. Along with this revolutionary technology have come many articles and testimonials from a wide array of publications (including The Stillman Exchange) decrying, glorifying, and dissecting the uses and morality of artificial intelligence in our future. As if there were not enough dangers associated with AI, the topic of data security was recently added to the already hotly contested debate.

Earlier this week, Bloomberg reported on a recent breach of security within the South Korean phone manufacturer Samsung. The issue arose when Samsung employees, apparently without the knowledge of their supervisors, used ChatGPT for their job responsibilities. The two main cases involved engineers who uploaded proprietary source code to the site to check for errors and managers who uploaded daily meeting notes, internal communications, and task lists to generate memos.

Issues arose when some of that source code for a new Samsung program leaked to the public because those employees had not opted out of data sharing, meaning their queries were collected and stored. In response, Samsung banned the use of ChatGPT and other generative AI chatbots for completing work tasks.

This is an example of the options users are given on whether they would like their data to be shared with OpenAI’s servers (Photo courtesy of Stamus Networks)

This situation underscores the fact that ChatGPT is offered for free with the intention that user queries will improve the learning model. This means that the data provided is stored on OpenAI’s servers and could be vulnerable. Any user who is uncomfortable with this policy should know that OpenAI does allow users to opt out of such data collection.

Samsung is far from the first company affected by a situation like this. Employees at JPMorgan Chase, Amazon, Verizon, Bank of America, Accenture, Citigroup, and countless other workplaces have been barred from using public AI tools like this over similar concerns. In fact, Italy temporarily banned ChatGPT until OpenAI made changes to make the option to opt out of data collection clearer to users.

Does this mean the adoption of AI will slow? The answer is maybe, but probably not. The issue is not the use of AI itself. If anything has become certain about AI over the past few months, it is that it is an incredibly powerful tool that allows professionals to work more efficiently, decreasing costs and increasing production, at least in the short term. It has been demonstrated to be a game changer for companies, and those with an eye on the future will not shy away from that.

The concern lies in this transition period, as AI tools still depend heavily on user data to improve and must therefore risk being open to the public. If anything, this concern over data security will slow the growth of AI tools. At the same time, it will greatly increase the demand for a packaged AI system that is closed and whose data can be managed. This will motivate those developing these technologies to push out a finished product for eager customers who, having seen the capabilities of AI in the workplace thus far, will be willing to pay top dollar for it.

As we all try to navigate this world of generative AI, we must keep an eye on the present and stay vigilant about whether we are comfortable with the possibility that data stored in ChatGPT could be compromised. But successful businesspeople will also keep an eye on the future and see the opportunities that this incredible new technology will open for the industries of tomorrow.


Contact Mark at
