
2025-2-6
Reassuring Our Employees That Using AI Is OK: The Ideal AI Governance Mercari Group Is Aiming For
Mercari Group has been an enthusiastic adopter of AI since before generative AI technology such as ChatGPT became prevalent.
Over the last few years, we’ve put even more effort into incorporating AI into our products by adopting generative AI technology and large language models (LLMs). In October 2023, we released the Mercari AI Assistant feature, which serves as a personal assistant to each of our users, and in September 2024, we released AI Listing Support, which generates listing information based on an image and the user’s selection of a category. We have also implemented AI internally for purposes other than product development, such as applications that improve the efficiency of our work.
We expect that implementing AI across the entire company will improve work efficiency and productivity. However, with AI expanding explosively around the world, the rapid pace of change could also bring risks to our work that we have never seen before. We talked to Yuka Ochiai (@Yuka) of the New Reg, Tech & Regions Team, which leads the creation of AI governance at Mercari, and Tomomi Matsuhashi (@Mattsun) of the Public Policy Team about the countermeasures Mercari has in place to deal with these potential risks.
Profiles
-
Yuka Ochiai
Yuka worked at NEC Corporation before joining SoftBank Group, where she oversaw the acquisitions of Vodafone Japan and the Fukuoka Daiei Hawks and led numerous overseas investment and financing projects. After pivoting to a startup to gain experience launching a new business, Yuka moved to OpenDoor Inc., where she served as head of the legal department. Yuka joined Mercari in January 2020, first working on the Legal Team and then establishing a new team overseeing new legislation and AI governance. Registered as an attorney in New York State. Penguin fanatic with a dream to visit the South Pole.
-
Tomomi Matsuhashi
Tomomi worked at Tokio Marine Nichido and then founded a financial business company within the KDDI group. She joined Mercari in January 2019. After gaining experience in compliance at Merpay, Tomomi joined the Public Policy Team in 2021 and began working on stakeholder communication and rule-making. She also contributed to the book AI and Data Ethics Textbook (authored by Shinnosuke Fukuoka of Nishimura & Asahi, published by Kobundo, 2022). Tomomi is currently enrolled as a working graduate student in the Information Law Program of the Master’s Program in Business Law at Hitotsubashi University Graduate School of Law. In her free time, she organizes and holds exhibitions related to digital art and writes educational materials.
Devising guidelines for a market in which information is ever changing
──First, could you tell us about your roles in creating AI governance?
@Yuka:To broadly summarize, @Mattsun works on external governance, and I work on internal governance.
@Mattsun:I communicate with government agencies, external experts, and employees at other companies to gather information and exchange opinions about AI governance. It’s also my job to share the AI governance initiatives that Mercari is currently working on with the wider public, in order to raise awareness of them.
@Yuka:I lead the internal projects that determine what kind of AI governance Mercari needs to build. More specifically, I lead the creation and revision of Mercari’s guidelines on AI usage in collaboration with a variety of teams, including teams in the public policy division, R4D, the Eliza Team (a team dedicated to generative AI/LLMs), the Machine Learning Team, the Privacy Office, and the Security, Legal, and IP Teams. My team only started full operation in April 2024, so we’re currently in the process of recruiting members from related divisions, holding team-building sessions to get to know each other, and designating key people on the team. We also hold open-door sessions to explain our initiatives to other members of the company.
──Before we get into those initiatives, could you give us an overview of how AI is currently being used at Mercari?
@Mattsun:When OpenAI launched ChatGPT, generative AI technology came into widespread use. While it made a lot of tasks easier, 2023 also saw a global discussion about the risks involved in developing and using AI. The Japanese government launched the Hiroshima AI Process, an initiative that produced an international code of conduct for organizations developing AI, at the G7 Hiroshima Summit; China introduced generative AI regulations; and the US government issued the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In 2024, Japan published the AI Guidelines for Business, and the EU finally passed the AI Act.
At Mercari as well, we’re conscious of keeping up with societal changes and have started to devote more time and effort to developing AI ethics initiatives and AI governance. We’re currently conducting research with the Osaka University Research Center on Ethical, Legal and Social Issues to formulate internal guidelines and checklists in line with worldwide trends.
──How do you gather information?
@Mattsun:We reference use cases at large Japanese and foreign corporations that actively use AI, participate in overseas conferences, and always keep an eye out for information on the internet. I’m also currently in graduate school to research the latest developments in information law. Gaining knowledge about information law and AI is more like my life’s work than my job. (laughs)

──@Yuka, once you have the information gathered by @Mattsun, how do you share it with the rest of the company?
@Yuka:I first started working on AI governance in April 2024, so I haven’t had the chance to create all that many initiatives yet. What I have been working on is collecting information about AI and relaying that information to the execs, and organizing activities that aim to get all members on the same page about AI governance, such as holding open-door information sessions. I think this has resulted in more people becoming interested in AI governance at the company. We’re currently at the stage of updating our guidelines for the internal use of AI, which we enacted last year.

An example of how we distill the latest domestic and international AI trends and share that information within the company
──You already need to update the guidelines? But you only made them last year!
@Yuka:The guidelines are based on the technical information and use cases available at the time of writing. As new technology comes out and new use cases are published, we have to revise the guidelines accordingly. For AI in particular, the landscape changes constantly, and information is often outdated after a few months. We have to keep on top of these changes and update the guidelines as quickly as we can.
To give a specific example, the guidelines we wrote last year define such things as the types of data that can be input into an AI engine. However, because there is a risk that the AI engine could output misinformation, we also have to define how to handle the data the engine outputs. In addition, last year’s guidelines were written with OpenAI’s GPT in mind, but we now also use other AI technology, such as Google’s Gemini, so the guidelines need to be updated to reflect this.
Promoting “responsible AI” for the whole company
──Mercari has been leveraging AI since before ChatGPT became well-known. Would you say that having prior experience with AI made you aware of the need to speed up initiatives related to AI ethics and creating AI governance?
@Yuka:Yes, everyone was aware that we had to do something. But the landscape changes quickly, and the relevant information is scattered across different sources around the world. For a while, we found it difficult to fully commit to the effort because organizing and systematizing all that information posed a challenge.
@Mattsun:Actually, the emergence of generative AI technology such as ChatGPT probably served as the push we needed to improve AI literacy at the company. At Mercari, before generative AI was a known thing, only data engineers and data scientists used AI. In other words, AI was a specialized tool. When generative AI came along, members in other teams—more specifically, members in non-engineering roles—also started to use AI. But that means that it’s harder to track when and where AI is being used throughout the company.
──What kind of risks does that pose?
@Yuka:One risk is that someone might make an advertisement using a character created with an image-generation AI, and that character might resemble a character that already exists. In that case, the company could be sued for copyright infringement. That’s just a simple example. There could also be a case where a member inputs personal or confidential information into an AI engine and that data ends up being used as training data. Or someone might use AI to make a chatbot for users, only for the chatbot to output misinformation.
@Mattsun:There’s also the risk of using LLM tools that have not undergone security checks. These tools could expose us to information leaks or viruses.
──It sounds like there are a great number of risks out there.
@Mattsun:That’s right. Having said that, leveraging generative AI has also improved productivity at the company by a big margin. I think everyone is now more aware that, in order to continue using AI, we have to establish rules to help prevent any incidents.
──What do you feel are your current challenges now that all members, including those with no engineering experience, are leveraging AI?
@Mattsun:Right now at Mercari, members in different divisions and teams have different levels of AI knowledge. However, as I mentioned earlier, AI usage is having an increasingly large social impact around the world. Because of this, more and more companies are finding ways to leverage AI while addressing the associated risks. This means that we also have to move quickly to improve AI literacy throughout the company and establish AI governance that promotes the concept of “responsible AI.” To do this, we have to keep gathering information about the current state of affairs and positive use cases at other companies to provide hints about where to take AI usage in the future.
We often share the information we’ve gathered among members, but I think it’s important to share this information outside of the company as well. For example, we could give advice on AI-related policies and rule-making if we need to. I’d like for us to be an “information hub” that disseminates information both internally and externally.
@Yuka:If you’re not familiar with AI, you might be apprehensive about using it. It can be difficult to know what you can use it for, or whether inputting company information into the AI engine would result in an information leak. I think these worries often stop people from trying it out. Our guidelines are a powerful tool that supports the use of AI while removing any psychological barriers. We aim to create and share guidelines that are tailored to the current situation within the company to clarify the appropriate ways to use AI, so that all members can use AI with confidence.
When all members feel comfortable using AI, productivity and AI literacy will improve, and this will strengthen the AI capability of the entire organization. Every day, I’m motivated by the desire to make Mercari a place where members in any role, engineers and non-engineers, have the ability to use AI for lots of different things—wouldn’t that be great?
──I know that you’re working on not only the guidelines but also other initiatives to improve AI literacy throughout the company. For instance, you held a Lunch & Learn (in-house study session) to share information about Mercari Group’s AI initiatives the other day. What kind of feedback do you get from the participants of these sessions?
@Yuka:We took a survey after that event, and a lot of people said that they were happy for the opportunity to learn more general information about AI. For me, that reinforced the fact that we need a more generalized source of information.
──With AI becoming more prevalent throughout the company, I suppose quite a few members might get left behind.
@Mattsun:Yes, I think so. Mercari is made up of an amazing group of people who think for themselves, do their own research, and act of their own accord. Those qualities are great to have, but they do mean that information is likely to become scattered all over the place. We have to consolidate that information to prevent anyone from being left behind.
I think consolidating information is best done by establishing places where we can disseminate information to all members, such as open-door sessions and Lunch & Learn sessions. Information naturally finds its way to those who share it.
@Yuka:Many people don’t know where the information they need is located, or don’t know who to ask when they have a question. AI is particularly tricky because it spans so many divisions. That reminds me—thanks to one Lunch & Learn session, we created a list of people who members can contact when they have a question.
Creating guidelines accessible to all members, regardless of profession or previous experience
──AI usage is set to ramp up across the company going forward. We need rules for using AI to mitigate risks, but having too many rules will slow us down; it must be hard to strike the right balance. What kind of policy will you use to work on the guidelines going forward?
@Yuka:We’ll refrain from using difficult technical terms and write in a way that’s easy for everyone to understand. I don’t think many people like reading guidelines or actively seek them out. (laughs) We want all employees at Mercari to leverage AI, so we have to make the guidelines accessible to all members, regardless of the kind of work they do or their previous experience. If we don’t, nobody will use them. While there will be a detailed set of guidelines, we also plan to write and update summaries that give employees an overview of internal AI usage at the company. They’ll just need to read those to get a general idea.
@Mattsun:We’re always thinking about what kind of wording will make things easy to read and generate interest, and we try to write guidelines with the input of the people who will actually use them.
@Yuka:We’re also conscious of aligning the content with Mercari’s corporate culture and values. For example, one of Mercari’s values is “Go Bold,” and we also want to apply that to leveraging AI, placing importance on trying new things and not being afraid of risks. So, in the guidelines as well, we try not to focus too much on risks and instead try to write in a way that will help maximize the potential of AI.
The never-ending journey of AI governance, and the ideal that Mercari is aiming for
──Lastly, could you tell us about any initiatives you’re excited to work on in the future?
@Yuka:The world of AI is constantly changing and new needs are being discovered at the company all the time, so ideally we’ll keep on top of those changes and tweak the guidelines about once a week. I think an agile approach is needed when writing guidelines; creating AI governance is a never-ending journey.
I’d also like to make use of e-learning courses in addition to our regular open-door sessions and Lunch & Learn sessions to improve AI literacy at the company from the bottom up.
At Mercari, we aim to create an environment where all employees can leverage AI in a safe way through the establishment of AI governance. We aim for a future where, as AI evolves, we approach new risks and challenges with an open mind so that we can provide value to more and more people.
@Mattsun:It would be great to hold study sessions on different topics that serve as a gateway to get employees interested in AI. For example, it might be interesting to invite outside experts, such as lawyers who specialize in AI and intellectual property or ethics researchers, to hold panel discussions.
We want to send out stronger messaging about Mercari’s AI ethics and AI governance functions to both our users and our stakeholders, which include government agencies and affiliated companies. Ensuring transparency is an important part of our role, so that everyone who interacts with Mercari on a daily basis can feel at ease using our service.