2024-12-16
Writing the Rules Is Not the End: How We Continually Update Privacy Rules Based on the Ideas of Agile Governance
The technological revolution started by AI/LLMs is impacting the world in massive ways. Here at Mercari, we recognize the significance of AI’s potential and are not only implementing it in our product but also exploring various ways of leveraging it to increase productivity at work.
In the last few years, we’ve stepped up our efforts to build AI technologies into our product. In October 2023, we launched the Mercari AI Assistant feature, which serves as a personal assistant to each of our users; in September 2024, we released AI Listing Support, which generates listing information from an image and the user’s selected category. Currently, we’re also working on internal applications of AI beyond the product itself, such as tools to improve how we work.
We expect that implementing AI across the entire company will improve our work efficiency and productivity. However, with AI expanding explosively around the world, the rapid pace of change could expose our work to risks that have never surfaced before: legal risks, intellectual property (IP) risks, reputation risks, security risks, privacy risks, and ethical risks.
We recently published an article on building AI governance for Mercari Group; for this feature, we spoke with Privacy Office member Naofumi Hayakawa (@early) to gain insight into how we have moved forward with AI usage from a privacy perspective.
Featured in this article
- Naofumi Hayakawa
Naofumi works for Mercari’s Privacy Office as a Privacy Specialist. After graduating from law school, he gained legal experience at a consignment business, a recipe app service, and a holding company for multiple consumer-oriented lifestyle services. Through his work on business management teams, he also built experience in fields such as contract law, corporate law, security governance, and establishing personal information protection systems. In addition to being a Doctor of Law, a registered information security specialist, and a certified business law professional, he’s someone who relishes trying out new trends. His lifestyle color of choice is neon yellow.
Discussions of generative AI from the perspective of privacy were everywhere
Allow me to open with a brief introduction: My name is @early, and I work for the Privacy Office here at Mercari.
Day to day, I coordinate with people in our business divisions to ensure that we handle our users’ data appropriately from a privacy standpoint, with the goal of realizing the best outcome for both Mercari and our users.
When I joined the company two years ago, immediately after the Privacy Office team was established, my team and I took stock of the privacy issues and discussion points that surfaced at the product development stage. We saw a need to build a structure that could field privacy questions from various teams within the company. So I started by pouring my efforts into raising awareness of our team, and of myself. As part of this, I began dressing in neon yellow from head to toe at work, to make an unforgettable impression on anyone who saw me. My goal was to spread awareness of privacy issues and the Act on the Protection of Personal Information.
You might wonder how I am involved in AI usage from a privacy viewpoint. Before I get into this topic, I’d like to talk about how I personally feel about generative AI.
Not too long ago, the world was jolted by the disruption that generative AI caused, and I instinctively understood that it was only a matter of time before it left its mark on privacy matters as well. Experts were in fact already debating questions such as how the Act on the Protection of Personal Information would apply. That said, I wasn’t one of the early adopters who dove headfirst into using ChatGPT.
I tend to lag slightly behind those who are first out of the gate, trailing in the second or third pack of users and thinking I’m already too late. (laughs) However, I don’t think starting where I did made me late at all. The earliest anyone can start using a new technology is the moment they decide they want to use it.
Looking at my parents’ generation, I see people who mastered computers and had no problems switching to smartphones, but also people who don’t understand the digital world at all. I grew up, my parents grew older, and at some point I ended up being the one teaching them about anything digital. When I first set eyes on generative AI, my impression was that if I didn’t learn to use this technology, I would end up right beside my parents, needing help myself.
While I felt a vague sense of fear and unease about the technology, what really pushed me past procrastination and into action was the late-night radio program “Audrey’s All-Night Nippon.” What made me a believer was hearing Masayasu Wakabayashi, one half of the comedy duo Audrey, talk about how he uses ChatGPT as a sounding board. It jolted me awake in a strange way and spurred me into action; I figured that if a comedian like Wakabayashi was using generative AI, I had to harness it too.
From there, things kicked off for me when we launched our internal AI/LLM team. I was in charge of writing the privacy policy section of our new internal guidelines for generative AI.
Realizing the simple truth: Rules are not something you make once and never change again
With @Tago from R4D at the helm, we ran a project to create internal guidelines. The project involved multiple divisions and was extremely exciting.
We created the guidelines this way because the risks related to generative AI cover a broad range of areas: legal risks, intellectual property (IP) risks, reputation risks, security risks, privacy risks, and ethical risks. Even just compiling all of these considerations was a headache. I was in charge of the security risk and privacy risk parts of the document, and we had to create something for which no existing criteria existed. In the end, every project member sprang into action and pulled in the people around them to get the guidelines written.
After we were done, having been involved in creating the guidelines, I handled privacy-related questions and inquiries about generative AI. I also took part in the HR hackathon on leveraging generative AI and was periodically involved in internal work pertaining to the technology.
As I experienced what it was like working with engineers to create a product prototype, I started to think that the current guidelines might conflict with the product we were trying to build. I began to question whether the guidelines reflected the needs of people working on the front lines of the company.
At the same time, much to my disappointment, I also saw that awareness of the guidelines within the company was not very high. The hard work doesn’t end just because you’ve written rules to follow; that simple fact really hit home for me.
It was around this time that @gomichan joined the company as the AI Implementation Officer. I was concerned that the guidelines in their current form could become blockers to our work promoting generative AI internally.
With that in mind, I didn’t waste any time connecting with @gomichan. I told her that I wanted the guidelines we had created to encourage AI usage, and I also shared the background behind the guidelines and the issues we were facing.
Creating rules with an awareness of privacy issues
Merely having rules can stand in the way of genuine, shared understanding. This is an idea that struck a chord with me quite some time ago.
One approach to changing existing rules is for someone to discover that an action they’re about to take conflicts with the rules and then propose a revision. However, that takes a great deal of energy. What we should be doing instead is making sure the people who created the rules keep their ear to the ground, listening to the needs of the people actually using the technology at our company, and constantly updating the rules to fit the current situation. Rules that don’t fit the current situation tend to be ignored and end up obsolete.
I think it’s important to look at the rules we created with fresh eyes, without treating them as absolute. We have to picture the risks if they are broken and revisit why we created them in the first place. That stance is all the more important when it comes to rule creation and governance for generative AI. Why? Quite simply, when a technology develops this fast, how the world perceives it, and the rules and guidelines surrounding it, change almost daily.
From a privacy perspective, the following things can easily become blockers, so it’s necessary to remain aware of them when making rules.
- If the review process is very demanding (e.g. the process takes so long that implementing new generative AI tools is virtually impossible), it can become a serious blocker.
- If too many complicated rules are established about what information can be used, the guidelines stop serving our goal of getting the most out of generative AI in our work.
- On the other hand, prioritizing the unfettered use of generative AI and taking a soft approach to security should be avoided.
- Currently, discussions on legislation and the interpretation of laws for generative AI are still ongoing, and as a result, there is a tendency to take a conservative approach and establish restrictive rules.
In a sense, creating rules with a conservative approach and internally promoting the idea of observing them might actually be rather simple. However, when it comes to generative AI, in order for us to call what we do “agile governance,” it’s important for us to update the rules we’ve already created with a trial-and-error approach.
As a concrete example of our updates, we expanded the range of information that can be input into generative AI tools that meet certain internal criteria. We implemented the expansion after confirming that appropriate security and privacy measures were in place, allowing for a broader scope compared to our previous guidelines. In doing so, I think we enabled people to follow the rules naturally while using generative AI, without having to think about constraints on their work.
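To make this kind of update concrete, here is a minimal sketch of how such a rule might be expressed in code rather than buried in a document. This is purely illustrative: the tool tiers, data classifications, and function names below are hypothetical, not Mercari’s actual criteria or implementation.

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Hypothetical data-sensitivity levels, from least to most sensitive."""
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    PERSONAL = 4  # personal data, subject to the strictest privacy review

# Hypothetical tool tiers: each maps to the most sensitive data class
# that a tool vetted at that tier may accept as input.
TOOL_TIER_LIMITS: dict[str, DataClass] = {
    "unvetted": DataClass.PUBLIC,
    "security_reviewed": DataClass.INTERNAL,
    "security_and_privacy_reviewed": DataClass.CONFIDENTIAL,
}

def input_allowed(tool_tier: str, data: DataClass) -> bool:
    """Return True if data of this class may be fed into a tool of this tier."""
    limit = TOOL_TIER_LIMITS.get(tool_tier, DataClass.PUBLIC)  # unknown tier: strictest
    return data <= limit

# A guideline update like the one described above then becomes a small,
# reviewable change: once security and privacy measures are confirmed,
# a tier's limit is raised instead of the whole policy being rewritten.
assert input_allowed("security_and_privacy_reviewed", DataClass.INTERNAL)
assert not input_allowed("unvetted", DataClass.PERSONAL)
```

Expressing a rule as data like this, in whatever form an organization actually uses, makes each relaxation or tightening of the guidelines a small, auditable change, which fits the trial-and-error spirit of agile governance described above.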
We receive questions and inquiries from every corner of the company where generative AI is used, and of course I use it myself as well. Using the technology to create things has taught me something about the mindset of a product creator. I also feel strongly the passion of @gomichan and the other AI-promoting members to spread generative AI throughout the company. In this context, when I thought about how I could best contribute, I decided my role was probably to break the rules proactively.
What I mean is that it’s my job to make the rules, and then to keep questioning, together with the members who are supposed to observe them, why we have them, so that I can keep making updates. CTO @kimuras has also said that it’s hard for members working on the company’s first line (the business side) to say that the rules should be relaxed, and that he wants those of us in Privacy to take the lead on this.
I’ve long believed that this is precisely what we should be doing, and being able to make these incremental updates to the guidelines was a positive step for the project. Personally, I found the work well worth doing.