Interview Of The Week: Lior Zalmanson, Future Of Work Expert

Lior Zalmanson, PhD, is a senior lecturer (assistant professor) at the Coller School of Management, Tel Aviv University, where he leads AIMLAB (Artificial Intelligence in Management, Labor, and Business). The lab, which is supported by the European Research Council, is actively seeking corporate partnerships to explore AI’s transformative effects on management practices.

Zalmanson’s research examines how AI shapes workplace dynamics and perceptions of authority, trust, competence, and creativity. He was recognized among “40 Under 40” by Poets and Quants Magazine (2022), and his insights have appeared in leading publications including Harvard Business Review, MIT Technology Review, and the Wall Street Journal.

Beyond academia, Zalmanson is an award-winning curator, writer, and artist whose technology-focused installations exploring digital culture and the information society have been featured at the Tribeca Film Festival, Cannes Film Market, Albright Knox Museum, Tel Aviv Museum and Secession Vienna.

He participated in a panel on AI’s impact on industry moderated by The Innovator’s Editor-in-Chief during the Sparks Innovation Summit in Tel Aviv on March 26. Zalmanson separately agreed to be interviewed by The Innovator about AI’s impact on management.

Q: What does your work focus on?

LZ: I research human-AI interaction, with a particular focus on the future of work—and more specifically, the future of managers in the age of AI. Much of the conversation around AI and labor fixates on workers, yet surprisingly little attention is given to managers. It’s as if we assume that management will remain untouched. But in reality, managerial roles may be even more vulnerable to AI-driven transformation.

If you examine core management functions—assigning tasks, monitoring performance, evaluating outcomes—many of these are already being emulated, simulated, or automated by AI systems. The shift didn’t start yesterday; we began seeing managerial practices replaced by algorithmic systems nearly a decade ago. One moment that crystallized this for me was in 2016, when Uber was still relatively new.

Together with my colleague Mareike Möhlmann, I studied Uber’s business model and its implications for the future of work. What struck us wasn’t just the level of surveillance or control—these have existed in different forms throughout labor history—but the near-total absence of human contact. When we asked drivers to name someone at Uber they reported to, they couldn’t. Not a single one knew who their supervisor was. Their only human contact was with tech support. It was a black box of management—opaque, impersonal, and deeply unfamiliar.

This, I believe, signals something important about where management may be heading. Along with Ola Henfridsson and Robert Wayne Gregory from the University of Miami, we published a 2021 paper analyzing this dynamic. We described how Uber drivers operate under a dual model: they are positioned as independent contractors with the freedom to choose their hours, yet they are simultaneously managed through a rigid algorithmic system that enforces discipline and control.

This paradox of freedom and strong algorithmic control generates significant anxiety among drivers. They may be ‘free’ to work when they want, but they have little recourse when problems arise, no ability to negotiate, and no sense of who holds authority over them. That same logic, once confined to platform labor, is now creeping into traditional workplaces.

What we’re witnessing is not just a shift in how work is done—but a fundamental reimagining of what it means to be managed, and who (or what) does the managing.

Q: In your opinion is this a good or a bad thing?

LZ: In many cases, even within traditional employment structures, AI is poised to take over the ‘monitoring’ function of managers and play a more prominent role in the day-to-day supervision of workers. Many companies believe this shift could improve outcomes such as fairness, consistency, and efficiency. And to some extent, they’re right—AI has the potential to benefit line workers, too. But the reality is more complex. This transition is difficult to navigate and carries far-reaching organizational impacts.

Q: Can you go into detail on organizational impact?

LZ: For starters, AI flattens organizations. Algorithms are increasingly taking over many core functions of middle management—assigning tasks, monitoring performance, even enforcing discipline. I don’t believe middle managers will disappear entirely, but their numbers and responsibilities will likely shrink. In the future, we may see one human supervisor overseeing hundreds of workers, with algorithms handling the rest.

Secondly, when your supervisor is an algorithm—as in the case of Uber—the primary human contact for most workers becomes tech support. But tech support is no longer just fixing bugs; they’re becoming brokers of algorithmic authority. They help interpret decisions made by the system and, occasionally, override them. Yet these roles were never designed with management in mind. Most tech support staff lack training in conflict resolution, empathy, or crisis management. Companies are now realizing that they can’t just staff these positions with part-time students; they need full-time professionals who are emotionally mature and equipped to deal with distressed employees. In essence, tech support is evolving into middle management by default—but without the structure or preparation that role demands.

A third, subtler shift is the rise of AI-mediated communication. It sounds abstract, but it’s everywhere: emails drafted by AI, presentations assembled by AI tools, customer interactions handled by AI agents, and internal communication increasingly shaped by algorithmic suggestions. In this new ecosystem, even interpersonal exchanges are being filtered. In the age of agentic AI, how does a manager know if they’re truly talking to a human? And when a poor decision is made, who is accountable—the employee or the AI they delegated to?

This creates a new kind of supervisory challenge. As employees hand off more and more tasks to AI, the sense of individual accountability blurs. Managers now face the difficult task of overseeing not just people—but people overseeing machines. And we don’t yet have clear norms or rules for this.

As an assistant professor, I increasingly ask myself: did the student write this essay, or did an AI? Managers face the same dilemma in corporate settings. AI can and does make mistakes. Workers are encouraged to rely on AI agents, but what happens when they fail to supervise those agents? Consider a fleet of AI-managed vehicles in a ride-sharing service. Drivers are told to rely on the AI for navigation and dispatch. But if the AI is wrong 10% of the time, how do drivers know when to override it? Exercising human judgment in the face of machine authority is difficult—especially when the task involves processing massive data sets that humans can’t easily evaluate.

This leads to a loss of agency. In a series of experiments with my PhD student, Yotam Liel, we found that a growing number of people accept nonsensical AI outputs without question. They’re learning to trust AI reflexively. And while AI often performs better than humans on specific tasks, that only deepens the dilemma: when is trust justified?

The more we rely on AI, the more we lose the ability to perform tasks independently. At the same time, managers expect employees to become “superhuman”—faster, smarter, more productive—because they now have GenAI at their disposal. This creates a vicious cycle. The more we’re expected to do with AI, the more we delegate. The more we delegate, the less we exercise human judgment. Eventually, we risk rendering ourselves not just less capable—but dispensable.

Q: So how do you advise companies to adjust management for the age of AI?

LZ: That’s the $100 million question—and a moving target. We’re all grappling with how to write the new management guide for the age of AI. When you look at AI-focused management books written a decade ago, many now seem outdated, even absurd. Still, one idea is beginning to emerge clearly through our fieldwork: every worker is becoming, in effect, a manager of AI assistants.

We’re entering an era where managing AI agents will become a core competency, and we’ll need to learn how to use them wisely. In many ways, managing AI is like managing a team: you delegate, evaluate, and make judgment calls about the quality of the output. The challenge is that companies are asking employees to manage these agents—without giving them any real management training.

Everyone in the organization will need to learn how to lead hybrid workforces of humans and non-humans. And that shift will upend much of what we thought we knew about management.

Many of us sense that this transformation is coming, but we still don’t fully understand its implications. We don’t yet have the right tools—but we’re beginning to build them. First, companies need to recognize that all employees are becoming managers. Then, they must clarify the lines of responsibility: who is accountable for decisions made by AI, by humans, or by both? And how should this accountability be communicated up the chain?

Meanwhile, many managers still see themselves as irreplaceable. They’re often comfortable with AI replacing workers but overlook the possibility that AI could challenge their own roles. But if machines can analyze, delegate, evaluate and even empathize, what’s left that only a human supervisor can do? And more urgently: are we ready to find out?

The more I think about this, the more it seems that we’re witnessing not just the introduction of new tools for work and management, but the emergence of a new organizational infrastructure—embedded in code and distributed across systems. What’s at stake isn’t just how managers manage, but how we preserve agency, accountability, and meaning in a world where the very act of managing is becoming increasingly invisible.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time at various points in her career for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.