By now, everyone is aware of the rapid development of artificial intelligence (AI) and its integration into various aspects of our daily lives. With the proliferation of AI, we have seen significant gains in efficiency and, across many market segments, improvements in the quality of products and services. However, in today's data-driven world, the use of AI by corporations has only intensified the ever-growing concerns regarding privacy and data protection.
Considering this new reality, it is worth noting the measures recently taken by data protection authorities in Korea and Brazil to regulate the AI systems used by big data corporations, as these measures offer a model for governing the otherwise largely unregulated advancement of AI technologies.
Korea's Personal Information Protection Commission (PIPC) recently made headlines by imposing a $1.4 million fine on AliExpress for privacy and data protection breaches. The PIPC found that AliExpress had transferred Korean customers' data to foreign sellers without the proper consent required by Korean privacy laws governing international data transfers. Furthermore, the PIPC is also scrutinizing another Chinese e-commerce giant, Temu, over similar privacy concerns. The PIPC's actions send a clear message: regardless of their origin, companies that process personal data must comply with the nation's stringent data protection regulations.
On the other side of the globe, Brazil's National Data Protection Authority (ANPD) has recently taken significant steps against Big Tech players like Meta and X, formerly Twitter, over the improper use of users' personal data to train their respective proprietary AI tools. The ANPD suspended Meta's new privacy policy, which effectively allowed the company to use users' public posts to train its AI models. This decision was driven by concerns over the potential risks to users' privacy, especially that of minors, and by Meta's failure to adequately inform users about these changes and their right to opt out, all of which are requirements under Brazilian data protection law. Meta's situation in Brazil is not isolated: the ANPD has also opened an investigation into X over the collection and processing of user data to train its new AI, known as Grok, without properly notifying users. The ANPD sent an official notice to the company requesting clarifications after finding that a new data collection option for AI training had been quietly added to the platform's settings without clear information or instructions on how users could opt out of this form of processing. This conduct violates Brazilian data protection law, particularly its requirements of transparency and information and the users' right to decide on and oppose the collection and processing of their personal data.
So how does all of this recent news regarding measures taken by Korean and Brazilian data authorities relate to AI?
We believe that the actions taken by both the PIPC and the ANPD weave a common thread of leveraging existing data protection laws to regulate the use of AI by big data companies. This approach, whether intentional or not, is particularly significant in the current AI landscape, where comprehensive AI-specific regulations remain embryonic or absent.
Data protection laws in both Korea and Brazil require companies to adopt measures guaranteeing that data subjects are informed about and given control over their personal data, in particular by ensuring transparency about data usage and providing mechanisms for users to exercise their rights. By enforcing these requirements, data protection authorities can effectively curb the misuse of personal data for AI training, which could otherwise proceed unchecked.
The implications of these regulatory strategies may extend beyond the borders of Korea and Brazil, as they can be replicated in other countries grappling with similar challenges posed by the rapid growth of AI. As AI technology evolves and its applications become more pervasive, it is crucial to have robust safeguards in place to protect individuals' privacy and data rights.
Moreover, these actions by the PIPC and the ANPD demonstrate that data protection authorities can play a pivotal role in the broader governance of AI. By holding companies accountable for their data practices, they can help ensure that AI technologies are developed and exploited along ethical and responsible lines.
While the use of data protection laws to regulate AI is a sensible interim measure, it should not substitute for comprehensive AI-specific regulations, since the issues raised by AI are not limited to data protection. Current data protection laws are simply insufficient to address the full range of concerns that AI technology poses.
As we move forward, it is crucial that more robust regulatory frameworks be established to safeguard individuals' rights and promote ethical AI development.
The lessons from Korea and Brazil serve as a powerful reminder that data protection is not just about privacy; it is about protecting the fundamental rights and freedoms of individuals in an increasingly digital world.
Chyung Eun-ju (ejchyung@snu.ac.kr) is a marketing analyst at Career Step. She received a bachelor's degree in business from Seoul National University and a master's degree in marketing from Seoul National University. Joel Cho (joelywcho@gmail.com) is a practicing lawyer specializing in intellectual property and digital law.