Interview Of The Week: Anna Zeiter, Chief Privacy Officer, eBay  

Dr. Anna Zeiter is Chief Privacy Officer and Vice President for Privacy, AI & Data Responsibility at eBay and a board member of eBay Marketplaces GmbH, the holding company of eBay’s international business. She has served as a board member of the world’s largest privacy organization, the International Association of Privacy Professionals, since 2020. Zeiter joined the Digital Transformations Working Group of the World Economic Forum in 2020, co-leading the Business of Data & Data Valuation workstream until 2022. In 2022 she became a member of the privacy & security advisory board of Flo Health, a global female health tracking app. Before joining eBay in 2014, Zeiter worked as a lawyer for two international law firms in Germany. She has a PhD on free speech and the general right of privacy from the University of Hamburg and earned a master’s degree in law, science and technology from Stanford University and a certificate in AI Leadership from Harvard Kennedy School. Zeiter regularly gives keynotes at international privacy and AI conferences and teaches privacy and responsible AI at several universities in Europe and the U.S., including Bern, Zurich, St. Gallen, Göttingen and Stanford. She was a speaker at the consultancy bluegain’s CxO Luncheon 2025 during the World Economic Forum’s Annual Meeting in Davos, which was moderated by The Innovator’s Editor-in-Chief. Zeiter separately spoke to The Innovator about how to leverage the latest advances in AI responsibly.

Q: How is eBay leveraging AI?

AZ: Privacy and security in AI aren’t something new that we need to grapple with; we’ve been thinking about and implementing them for years. Our history and the size of our team separate us from the pack; we have been using and developing AI for more than 20 years. It’s also unique among our peer companies that we have such a big team. Other ecommerce sites tend to buy or rent AI as a service as opposed to developing their own. We do leverage third-party AI from providers like OpenAI and Microsoft, but we also develop our own. We invested heavily early on, and that’s paying dividends now. We have, for example, multiple AI-driven tools and functionalities on the website. One of them is Shop the Look. If I am searching for items that fit trends such as bohemian or minimalism, AI will suggest outfits with AI-generated visual models in those styles for me to consider. In the same way, when I no longer love something, GenAI can also help me sell the item on eBay. The feature, called Magical Listing, makes selling way easier. I can take a picture of me wearing something I want to sell, like a pair of jeans, and AI will likely identify the item as a Levi’s 501 model that is three years old.

People might think from the outside that eBay is only a marketplace, but we’re much more than that. We work in advertising, first- and third-party integration, marketing, financial services, all kinds of things. And so much of it incorporates AI to some degree; most of the organization utilizes AI.

It is one of the many things that makes eBay unique. We are a small big tech company and a big small tech company, plus we have a lot of touchpoints with smaller companies, our sellers. We have a very good company culture and strong values, and we take European regulation on privacy and AI very seriously.

Q: Most U.S. tech companies complain that European regulations inhibit innovation. Do you disagree?

AZ: We adopted binding corporate rules to implement European privacy rules (GDPR) very early and globally because we wanted to do the right thing and gain the trust of our customers. We are doing the same with the EU AI Act. In February we signed an AI pact with the EU AI Office. We went to Brussels with our CTO Mazen Rawashdeh and signed a deal to adopt parts of the AI Act ahead of the deadline – and we plan to implement the AI Act globally. I have worked at eBay for 10 years and can say that I can’t think of any situation where we couldn’t innovate or couldn’t do what we wanted to do because of these two EU laws. If you adopt privacy and responsible AI rules very early on in the design process and involve the relevant teams from the start, new product technology can be developed in a compliant way. It is not just good practice. The younger generation especially focuses a lot on AI, privacy and trust, and these users are very agile. If they can’t trust one marketplace, they will go to another one. It takes a lot of time and money to gain and maintain trust, but you can lose it in a heartbeat.

We think privacy and AI compliance is a business driver. Companies that fail to ensure their AI systems are fair, transparent, and free from bias risk damaging their reputation, facing regulatory scrutiny, and losing customer loyalty. Trusted companies, on the other hand, enjoy numerous benefits. They have more loyal customers, lower employee turnover, higher revenue, and ultimately, a greater market value. In short, trust is a cornerstone of business success.

Q: You talked about some of AI’s risks at the CXO lunch in Davos. Can you share some of those thoughts here?

AZ: Although AI offers immense potential, it also comes with significant risks, as demonstrated by several real-world examples where things have gone wrong.

One such instance involved a supermarket chain in New Zealand that used an AI-driven product recommendation tool. The tool, designed to suggest recipes based on a budget, recommended adding washing detergent and glue to a family soup recipe. This incident underscores the critical importance of ensuring AI outputs are safe, accurate, and aligned with user expectations.

Another example is the controversy surrounding a recent third-party video call feature, which can analyze meeting participants’ facial expressions, filler words, and question quality. Our decision to reject the feature was driven by concerns about privacy and the potential misuse of emotional data. This case highlights the ethical dilemmas companies face when deploying AI tools that intersect with employee privacy and trust.

AI’s potential for discrimination was starkly illustrated by a recent incident involving an MIT student of Asian descent. She used an AI photo app to transform a casual picture into a professional headshot, only to have the tool lighten her skin to appear European. This incident serves as a stark reminder of the biases that can be embedded in AI systems, leading to discriminatory outcomes that can harm individuals and damage a company’s reputation.

Emerging risks in AI further complicate the landscape. Recent advancements have shown that AI can predict sexual orientation from a single photo with over 90% accuracy or detect diabetes from just 10 seconds of a person’s voice. These capabilities raise profound privacy and ethical concerns, particularly in regions where such information could be used against individuals, such as in countries where homosexuality is criminalized. As AI continues to evolve, its ability to make highly sensitive inferences from minimal data will force individuals and organizations to rethink how they share information.

For these reasons and more companies must prioritize ethical considerations, accuracy, privacy, and transparency to avoid unintended consequences and build trust in their AI systems.

Q: What kind of safeguards has eBay put in place?

AZ: We developed a responsible AI policy with principles long ago, but when generative AI came along, we issued specific guidelines and standards and, very early on, training for the entire workforce and upskilling for certain teams to ensure that we develop and deploy AI in a responsible way. Our principles are: build trustworthy AI systems that are reliable, safe and secure; enable equitable and fair AI experiences; and ensure accountability and lawfulness, privacy by design and transparency. These principles are not just something we publish on our website. It is very important for us to communicate them to the workforce and operationalize these policies in practice. We have established a global intake process. If, for example, someone wants to create a new AI-powered listing tool, they need to submit it along with details such as which large language model they used and which specific data sets. We have a dedicated cross-functional team with technical, ethical and legal expertise to review the use cases. Each new AI application is reviewed through many lenses: legal, ethics, IP, privacy, safety, security, and DEI (diversity, equity and inclusion).

Q: Does agentic AI raise special challenges?

AZ: There is a need for transparency and to make sure there is a human in the loop. By transparency we mean, for example, ensuring that a customer knows if they are talking to a human and how their data is used. What I kept hearing in Davos is that quality data is the new oil. Everyone wants to train their models with new and better high-quality data, but it is very difficult to get. You either need to get consent from users (but that consent can be withdrawn at any second) or rely on legitimate interest, but that is not 100% watertight in all situations either. What we try to do before AI data training is to filter out personal data. We are also training our models with synthetic data, but that doesn’t solve all the problems and creates others, as it sometimes treats real data as the outlier.

Q: Are hallucinations an issue?

AZ: Yes, hallucination is always a challenge. There is, for example, the danger that AI might hallucinate and add a logo to an item, or give a wrong description of a product. It might also remove imperfections from old items and turn an old item into a new one. We certainly don’t want to misrepresent what we are selling. There is a lot of pressure at tech companies right now to release new functionalities, and sometimes they are just not ready yet. For things that have very real impact on users, like the accuracy of item descriptions, we have very low tolerance for hallucinations. We are super tough before we release something.

Q: How often do you audit your systems?

AZ: It depends on the sensitivity of the data and the use cases. Some models will be reviewed on a monthly basis, others less frequently.

Q: What advice do you have for companies that want to adopt GenAI responsibly?

AZ: Consider the EU AI Act the gold standard. We still assume a Brussels effect will kick in, which means countries around the world will probably adopt similar laws, as they did with the EU’s GDPR. Many companies still use GDPR as a high-water mark for privacy, and we assume a similar thing will happen with the EU AI Act. Already in the U.S. there are several AI bills at the state level, in California and Colorado, focused on AI risk and safety. Globally, the EU AI Act is a good baseline. If you follow its guidance on AI governance globally and put a robust intake process for new AI use cases in place, you will be in a well-defensible position. At eBay we take that seriously. No company wants to find itself in the press due to an AI-related drama such as discrimination, harassment or product safety issues. Taking a responsible approach to AI is not only a good thing, it is a business driver. It creates trust, which is especially important in times of global political instability. AI is being used around the world and data flows don’t stop at the border, so it is important that we agree on global standards and global best practices. Otherwise, we will see an even deeper tech divide between the U.S., Europe and China, and that will not be great for anyone.

About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time at various points in her career for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.