
With recent advances in AI/LLM technology, companies worldwide are exploring how best to incorporate AI and LLMs into their products and operations to improve productivity and deliver more value to customers.
To leverage this growing trend, Mercari launched its own AI Native direction this year; according to the financial results materials published for FY2025.6, over 96% of employees use AI, and coding assistants contributed to over 70% of new code generated in the last quarter. However, with the current wave of AI/LLM technology and this level of general uptake still in a nascent state, rapid adoption brings new risks that need to be addressed.
To address these risks, Mercari formed a specialized virtual AI Security team in May 2025, composed of members from each of the security team’s core functions.
In this Mercan article, we interview members of the team to dive into how Mercari is tackling the novel risks introduced by AI and LLM technology, including challenges, solutions, and plans for the future.

Featured in this article
-
Allan Wirth
Allan has been working in Cyber Security for 10 years after graduating with his bachelor’s and master’s degrees in Computer Science from Boston University. He has been at Mercari for two years in the Platform Security team and became the manager of the team last year. In early 2025, he established the AI Security team for managing AI-related security risks at Mercari.
-
Hiroki Akamatsu
Hiroki received his bachelor’s and master’s degrees in Information Science and Technology from Osaka University. He joined Mercari in 2024 as a new graduate, starting on the Platform Security Team before moving to the AI Security Team. His work focuses on strengthening cloud and platform security, ensuring the safe use of AI across the company, and contributing to AI-powered products.
-
Evgenii Borovkov
Evgenii graduated with a bachelor’s degree in Information Technology and Systems, and also spent a few years in Computer Security and Cryptography. He started his career as a penetration tester, then switched to the Application Security field. In his free time he likes to watch movies, play board games, and pet his two cats.
-
Anna Simon
Anna got her bachelor’s degree in applied mathematics and her master’s degree in security & privacy. She started her career as a penetration tester at a Big4 company, then moved to the blue team side of things, working at a cloud company. She joined Mercari in 2021 and is part of the Threat Detection and Response team. Outside of working hours, she likes to tinker with home automation, run half-marathons, and knit socks.
-
Danny Hazaki
Danny is a bilingual security professional from Northern Ireland with a background spanning security management and engineering. After earning a master’s degree in Computer Science from the University of Bristol, he began his career as a security engineer at a hedge fund in London before moving into security management at a Legal AI firm. He joined Mercari in 2022 and now leads the Security & Privacy Planning Team, with additional focus on AI Security and AI Governance.
Q. Allan, you were the one who originally came up with the idea of creating an AI Security team. Can you explain how this team was started, what triggered the creation of the team, and what kind of team members were needed?
@allan: I proposed the creation of the AI Security team in April of this year to holistically manage the security risks of the rapid introduction of AI at Mercari. We brought together experts from each of the Security and Privacy division’s teams to ensure that we could make fast, effective decisions to allow the company to adopt new AI solutions without taking on undue levels of security risk.
Q. How are you categorizing AI-related security risks? Are these new security issues? Where are they coming from?
@allan: The AI Security team looks to industry frameworks such as the OWASP 2025 Top 10 Risks for Generative AI as reference points to guide our work and categorize the risks we identify. As such, we focus very much on risks such as sensitive information disclosure, excessive agency, and supply chain risk. Although the risks outlined there are not new, their potential impact has grown as adoption of new AI tools and development patterns has accelerated. We closely follow the rapidly evolving industry best practices and work closely with system owners to apply them to the Mercari environment.

Q. Can you share some examples of issues related to AI that the team had to face already?
@h1120ki: In particular, more recent AI agents possess significant autonomy and provide various forms of assistance and automation. For example, at Mercari, products such as the coding agent Cursor and a proprietary data analysis AI agent called Socrates are being utilized. While these solutions provide a high level of convenience, we need to address the concern that AI’s high level of autonomy can sometimes lead to unintended actions.
Furthermore, to support this autonomy, AI agents are integrated with various services, and multiple identities are leveraged in the process. For this reason, it is necessary to establish a management framework not only at the AI level but also at the platform level.
Finally, such integrations often rely on the Model Context Protocol (MCP), and “how effectively MCP can be utilized” has become one of the benchmarks for AI adoption. However, MCP, much like Integrated Development Environment (IDE, like VS Code or Cursor) extensions, raises security concerns in the supply chain, making it necessary to carefully examine how safe MCP implementations truly are.
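As a concrete illustration of that supply chain concern, here is a minimal sketch that audits a Cursor-style MCP configuration (a JSON file with an “mcpServers” map) for two simple red flags: servers that aren’t on an internally reviewed list, and npx-launched packages that resolve to @latest at run time. The config path, the vetted-server list, and the checks themselves are hypothetical examples, not Mercari’s actual tooling.

```python
# Minimal sketch: audit a Cursor-style MCP config (JSON with an "mcpServers"
# map) for supply chain red flags. The config path, the vetted-server list,
# and the checks are hypothetical examples, not Mercari's actual tooling.
import json
import pathlib

CONFIG_PATH = pathlib.Path.home() / ".cursor" / "mcp.json"  # assumed location
VETTED_SERVERS = {"github", "slack"}  # hypothetical internally reviewed list

def audit(config_path: pathlib.Path) -> list[str]:
    findings = []
    if not config_path.exists():
        return findings
    config = json.loads(config_path.read_text())
    for name, spec in config.get("mcpServers", {}).items():
        if name not in VETTED_SERVERS:
            findings.append(f"{name}: not on the vetted server list")
        # npx packages resolving to @latest pull whatever is newest on the
        # registry at run time, a classic supply chain risk.
        if spec.get("command") == "npx" and any(
            arg.endswith("@latest") for arg in spec.get("args", [])
        ):
            findings.append(f"{name}: package resolves to @latest, not a pinned version")
    return findings

if __name__ == "__main__":
    for finding in audit(CONFIG_PATH):
        print("WARNING:", finding)
```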

Q. Mercari has a massive production infrastructure and is trying to integrate AI into its product. AI remains a new technology, and its adoption doesn’t always follow established standards. How are you keeping track of all this and preventing accidents?
@Evgenii: We use our CSPM (cloud security posture management) solution, Wiz, to track AI tools in our infrastructure. To monitor new AI services, we have created several detection rules in Wiz. When a new AI-enabled service is deployed to our infrastructure, we receive automatic notifications. Additionally, Wiz not only alerts us to these deployments but also provides a comprehensive visual graph of all AI tools currently deployed across our infrastructure. For areas where it is challenging to get sufficient information from Wiz, or where we want to go deeper and build out a more fully fledged inventory, we have also begun working on joint efforts with engineering teams so we have the visibility we need to manage risk.
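The detection rules themselves live inside Wiz, but the underlying pattern is straightforward to sketch. The Python below is a hypothetical, simplified illustration of that pattern rather than Wiz’s actual rule syntax: the inventory fetch is stubbed out, the AI service patterns are examples, and the Slack webhook URL is a placeholder.

```python
# Hypothetical sketch of the detection pattern (not Wiz's rule syntax):
# compare cloud resources against known AI service identifiers and notify
# a Slack channel. fetch_inventory() stands in for a CSPM or cloud asset
# export; the webhook URL is a placeholder.
import fnmatch
import json
import urllib.request

AI_SERVICE_PATTERNS = [
    "*aiplatform.googleapis.com*",  # e.g. Vertex AI resources
    "*bedrock*",
    "*openai*",
]
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def fetch_inventory() -> list[dict]:
    """Stand-in for an asset export from the CSPM or cloud provider."""
    return [
        {
            "name": "projects/demo/locations/us-central1/endpoints/123",
            "type": "aiplatform.googleapis.com/Endpoint",
        },
    ]

def find_ai_resources(assets: list[dict]) -> list[dict]:
    return [
        asset for asset in assets
        if any(fnmatch.fnmatch(asset["type"].lower(), p) for p in AI_SERVICE_PATTERNS)
    ]

def notify(resources: list[dict]) -> None:
    text = "New AI-enabled resources detected:\n" + "\n".join(r["name"] for r in resources)
    request = urllib.request.Request(
        SLACK_WEBHOOK,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

if __name__ == "__main__":
    hits = find_ai_resources(fetch_inventory())
    if hits:
        notify(hits)
```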

Q. Are you also keeping track of AI tools being used by employees? How are you preventing incidents?
@anna: Since the start of AI Native, there are a lot of tools that employees are testing and using. We constantly monitor new PoCs to stay aware of what tools are being used, and we have onboarded multiple AI tools so far that are now widely used. We also have tools that can alert us if servers (like MCP servers made in-house) are accidentally exposed to the internet. This monitoring-based approach lets us support rapid uptake while mitigating risk.
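As a simplified illustration of that kind of exposure check, the sketch below probes hosts that should only be reachable internally and flags anything that answers. The hostnames and ports are hypothetical; a real check would run from outside the corporate network and feed its findings into an alerting pipeline.

```python
# Simplified sketch of an exposure check: probe hosts that should only be
# reachable internally and flag anything answering from the public internet.
# The hostnames and ports are hypothetical examples.
import socket

INTERNAL_ONLY_HOSTS = [
    ("internal-mcp.example.com", 8080),    # hypothetical in-house MCP server
    ("analytics-agent.example.com", 443),
]

def is_publicly_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in INTERNAL_ONLY_HOSTS:
        if is_publicly_reachable(host, port):
            print(f"ALERT: {host}:{port} is reachable from the public internet")
```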

Q. Are you doing any awareness or training initiatives to build broader understanding of these risks at a company level? Are there any tools for technical and less technical employees to use AI safely that you provide?
@danny: At Mercari, we recognise that securing AI isn’t just about strengthening our systems; it’s also about empowering all of our employees, whether they’re engineers or non-technical team members, to use AI tools responsibly and securely. We’re not just reacting to challenges as they arise; we’re being proactive by providing training, resources, and clear guidelines. Here’s how we’re tackling it:
To start, we’ve created a comprehensive set of AI Security Guidelines that complement our existing Information Security policies and guidelines. These guidelines are designed for everyone, technical or not, and include everything from secure AI deployments to managing credentials, authorisations, and permissions effectively. We’re updating these almost daily to keep up with the rapid evolution of AI-related risks and best practices. Staying ahead of the curve is critical, and we want to ensure employees always have access to the most up-to-date advice.
For those integrating AI tools with company systems like Slack, GitHub, or Google Workspace, we’ve developed an AI Tool Integration Matrix. This acts as a go-to resource highlighting which integrations are approved and which aren’t, plus details on how to securely configure those that are in use. It’s a clear, actionable way to make sure AI technology is working with us safely and efficiently.
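To make that concrete, below is a purely illustrative Python sketch of what such a matrix boils down to conceptually: a lookup from a tool-and-system pair to an approval status and configuration notes. The real matrix is an internal document, and every tool, system, and note here is an invented example rather than our actual policy.

```python
# Purely illustrative: the real AI Tool Integration Matrix is an internal
# document. The tools, systems, statuses, and notes below are invented
# examples, not Mercari's actual policy.
INTEGRATION_MATRIX = {
    ("cursor", "github"): {
        "status": "approved",
        "note": "use a fine-grained token scoped to the repositories you need",
    },
    ("example-agent", "google-workspace"): {
        "status": "not approved",
        "note": "pending security review",
    },
}

def check_integration(tool: str, system: str) -> str:
    """Look up whether an AI tool may be connected to a company system."""
    entry = INTEGRATION_MATRIX.get((tool, system))
    if entry is None:
        return "unknown integration: request a security review before connecting"
    return f"{entry['status']}: {entry['note']}"

if __name__ == "__main__":
    print(check_integration("cursor", "github"))
```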
On top of these resources, we curate a biweekly AI Security Newsletter. Think of it as a snapshot of essential tips and updates: condensed and actionable. It includes newly released guidelines, lessons learned from recent AI-related security incidents (both within and outside the company), and precautionary advice to help prevent similar challenges in the future. It’s a great way to keep everyone informed without overwhelming them with too much information at once.
For our engineers, we’re weaving AI security into our upcoming annual Secure Coding Training. This ensures that we directly address the specific risks engineers face when applying AI in their workflows. We focus not only on the challenges but also on the best ways to mitigate risks: combining technical controls and automated defenses with heightened awareness and better practices at the individual level.
Finally, we make sure vital security updates and learnings reach everyone through regular Slack announcements and presentations at our All Hands events. Security is everyone’s responsibility, and by sharing information openly and broadly, we’re helping foster a culture where that mindset thrives.
In short, we’re not just building guardrails behind the scenes; we’re equipping our teams with the tools, guidance, and training they need to navigate AI securely and confidently. It’s all part of our mission to innovate responsibly while protecting our people, systems, and data.

Thank you very much for your time. Is there anything you would like to share for readers who are facing similar issues in their workplace?
@allan: Based on our initial experience, I would offer three recommendations.
First, AI security is a cross-functional area, so it’s important to have cross-functional experts involved in tackling it. Assembling a virtual team with representatives of different domains is one good way to ensure complete risk visibility.
Second, as with many other parts of security, it’s more productive to focus on enablement instead of restriction. Prohibitive policies can lead to the use of unsanctioned and less secure tools. Instead, provide clear guidelines, training, and pre-vetted tools to create a secure path for AI adoption.
Finally, iterate on your controls and leverage existing tools, instead of trying to tackle all AI security risks at once. The threat landscape is constantly changing, and AI Security teams will need to be flexible and adaptive to mirror that. Policies, tooling, and procedures will need to be constantly reevaluated as the underlying technology and business needs evolve.
This remains a complex and rapidly progressing domain. It is a significant challenge, but one we are excited to tackle to enable the secure adoption of AI across Mercari.

Related job positions
Here are some of our open positions!
-
Security Management Specialist – Mercari
Office: Roppongi Office, Tokyo
Company/Business: Mercari
-
CISO(Chief Information Security Officer) – Merpay / Mercoin
Office: Roppongi Office, Tokyo
Company/Business: Merpay