Interview Of The Week: Andrew Wyckoff, OECD

Andrew Wyckoff is the Director of the OECD’s Directorate for Science, Technology and Innovation. He oversees the OECD’s work on innovation, business dynamics, science and technology, the digital economy, and ICT policy, as well as the statistical work associated with each of these areas. Before joining the OECD, he was a program manager of the Information, Telecommunications and Commerce program of the U.S. Congressional Office of Technology Assessment, an economist at the U.S. National Science Foundation, and a programmer at The Brookings Institution.

Wyckoff has served as an expert on various advisory groups and panels, including MIT’s AI Policy Forum Steering Committee, the Lancet/FT Governing Health Futures 2030 Commission (as a Commissioner), and the Board of Trustees of the Digital Future Society, and has headed the OECD’s delegation at G20 and G7 meetings on ICT and the digital economy.

Wyckoff holds a BA in Economics from the University of Vermont and a Master of Public Policy from Harvard University’s JFK School of Government. He recently spoke to The Innovator about the OECD’s work on artificial intelligence.

Q: The OECD is working with governments and businesses around the world to try to measure and analyze the impact of AI. What are you focusing on?

AW: While the OECD does not represent all nations, the organization is in a good position to use its multidisciplinary breadth to look at complex problems like AI. The OECD’s Science, Technology and Innovation Directorate, which I head, has been partnering with the Directorate for Employment, Labour and Social Affairs and our Education Directorate on this topic since 2021, with support from the German Ministry of Labour and Social Affairs, and work is planned into 2024. We have so far produced a dozen working papers, including one in January of last year on the impact of AI on the labor market.

Q: What were some of the key takeaways from that study?

AW: The empirical evidence based on AI adopted in the last 10 years does not support the idea of an overall decline in employment and wages in occupations exposed to AI. AI, in fact, has the potential to complement and augment human capabilities, leading to higher productivity, greater demand for human labor and improved job quality. That said, AI is likely to reshape the work environment of many people, by changing the content and design of their jobs, the way workers interact with each other and with machines, and how work effort and efficiency are monitored.

Q: What does your research say about what types of skills will be needed?

AW: In March and September of last year we produced a report entitled Demand for AI Skills in Jobs: Evidence from Online Job Postings, in partnership with Burning Glass Technologies, which delivers job market analytics based on publicly available information on the Web, third-party resume databases and job boards, the recruiting industry, opt-in data from employers and applicant tracking systems, sales and marketing CRM databases, and various consumer/identity databases. This report, which covered four countries – Canada, Singapore, the United Kingdom and the United States – offered first-time evidence about the growing number of jobs requiring multiple AI-related skills. In the countries considered, the average share of AI software-related skills (such as Python, machine learning, data mining, cluster analysis and natural language processing) out of total AI-related skills sought in jobs advertised online increased between 2012 and 2018, reaching about 30% in 2018. Over the same period, skills related to communication, problem solving, creativity, writing and teamwork gained relative importance.
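To make that share calculation concrete, here is a minimal sketch of how such a measure could be computed from job-posting data. The skill taxonomy and postings below are invented stand-ins, not the Burning Glass dataset or methodology.

```python
# Illustrative sketch: share of AI software-related skills out of all
# AI-related skills mentioned in online job postings, by year.
# Taxonomy and postings are hypothetical examples, not the study's data.
from collections import Counter

AI_SOFTWARE_SKILLS = {"python", "machine learning", "data mining",
                      "cluster analysis", "natural language processing"}
ALL_AI_SKILLS = AI_SOFTWARE_SKILLS | {"ai strategy", "ai ethics", "chatbots"}

# Hypothetical postings: (year, skills requested in the advert)
postings = [
    (2012, ["python", "chatbots"]),
    (2018, ["machine learning", "python", "ai ethics"]),
    (2018, ["natural language processing", "ai strategy"]),
]

software_counts, total_counts = Counter(), Counter()
for year, skills in postings:
    for skill in skills:
        if skill in ALL_AI_SKILLS:
            total_counts[year] += 1          # any AI-related skill
            if skill in AI_SOFTWARE_SKILLS:
                software_counts[year] += 1   # software-related subset

for year in sorted(total_counts):
    share = software_counts[year] / total_counts[year]
    print(f"{year}: software-related share of AI skills = {share:.0%}")
```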

Q: How are we going to fulfill the demand for AI skills? Many traditional companies – large companies and SMEs alike – struggle to find the talent they need to successfully implement and scale AI.

AW: It’s a challenge. The OECD focuses on productivity, and we find that average productivity growth is pretty lackluster worldwide. There are two offsetting trends. The frontier firms – including big tech – are very well advanced, but the large majority of the world’s businesses – including many SMEs – are laggards, and we worry about that because it may be a contributing factor to inequality. So how do those laggards catch up in an AI-powered economy, when only about 7% to 10% of firms have adopted AI? There are some pathways. AI is getting easier to use. There is a lot of plug and play, and a lot of cloud platforms offer AI applications. To me that is a great advantage, because now there is no reason to build a whole IT department. This is good news for SMEs. Some large firms, like John Deere, have pivoted from a product company to a data-driven company and have attracted high-quality talent, so it can happen. Still, I worry that the companies that are standard bearers in large parts of the economy are sitting on a lot of data, don’t realize it, and aren’t very good at exploiting it.

Q: Is the OECD also covering the main ethical risks associated with using AI in the workplace?

AW: We have published a report on this topic as well. It found that, like all technologies, AI is a double-edged sword. AI systems can multiply and systematize existing human biases, but they can also help correct our biases. The collection and curation of high-quality data is a key element in assessing and potentially mitigating biases, but presents challenges for the respect of privacy. What’s more, systematically relying on AI-informed decision-making in the workplace can reduce workers’ autonomy and agency. This may reduce creativity and innovation, especially if AI-based hiring also leads to a standardization of worker profiles. On the other hand, the use of AI systems at work could free up time for more creative and interesting tasks and leave the dirty, dangerous, dull and demeaning jobs to machines. AI systems present many opportunities to strengthen the physical safety and well-being of workers, but they also present some risks, including heightened digital security risks and excessive pressure on workers caused by constant surveillance. If I look away from my computer screen to read a report, the AI might conclude that I am not working, for example. It could also draw wrong conclusions about a pregnant woman taking frequent bathroom breaks. Another issue is that it can be more difficult to anticipate the actions of AI-based robots, due to their increased mobility and decision-making autonomy.

The report also looked at how enhanced transparency and explainability in workplace AI systems have the potential to provide more helpful explanations to workers than traditional systems. Yet understandable explanations about employment decisions that affect workers and employers are too often unavailable with workplace AI systems. Improved technical tools for transparency and explainability will help, although many system providers are reluctant to make proprietary source code or algorithms available.

Finally, deciding who should be held accountable in case of system harm is not straightforward. Having a human ‘in the loop’ may help with accountability, but it may be unclear which employment decisions require this level of oversight. Audits of workplace AI systems can improve accountability if done carefully; possible prerequisites for audits include auditor independence.

Q: The OECD has brought together policy, industry, and technical experts to discuss AI approaches, risk, tools, and accountability, but hundreds of organizations – the World Economic Forum, academic organizations, and others – are attempting to do the same. How can businesses best ensure that they don’t run afoul of new, and sometimes conflicting, rules regarding ethical AI?

AW: I understand that this has become a bit of a crowded space, but I would like to think that we were earlier than others. We started in 2016 at the G7 meeting in Japan, which led to the publication of the OECD’s AI Principles in 2019. The OECD’s principles have served as the foundation of the G7-created Global Partnership on Artificial Intelligence, which now has 25 members; informed the EU AI Act; and are referenced many times in the U.S., including in the forthcoming National Institute of Standards and Technology work on AI. The principles have set lines on the road that reduce differences and point toward common objectives at the outset of policy making. Some 36 OECD member countries and eight others have adopted them. The G20 – which includes China, Brazil, and Russia – used our principles as the basis for its own AI principles, and our Expert Advisory Group included UNESCO, the EC, the IEEE, the U.S. National Institute of Standards and Technology, and others. We are hoping for interoperability. The point is to get out in front of an area that is moving very quickly.

Many existing laws are still applicable but need to be enforced and aligned for the AI environment. Some new regulations are emerging, such as the EU AI Act, which deems some AI systems used in employment unacceptable or high-risk, as well as laws in individual U.S. states and cities that require consent for the use of facial recognition in hiring or audits of automated employment decisions.

Q: What’s next on the OECD’s AI agenda and how can businesses get involved?

AW: We are currently refining and rolling out our definition and classification of AI based on an AI lifecycle. This is essential because AI is so broad: we need to gain sophistication in our understanding and treatment, and begin to differentiate AI applications based on their learning data, the type of AI model, and the application. We should not treat an AI movie recommendation the same as an AI criminal sentencing recommendation. This is also essential for building a common international understanding. We want to do this in a way that does not stifle innovation, and we hope it can serve as a bridge for interoperability between policies, because inevitably each national policy will have its own orientation.
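As a rough illustration of why such a classification matters, here is a minimal sketch in which systems are triaged differently depending on their learning data, model type, and application context. The field names and risk logic are illustrative assumptions, not the OECD’s official classification framework.

```python
# Illustrative sketch: differentiating AI applications along the
# dimensions mentioned above (learning data, model type, application).
# The schema and triage rule are hypothetical, not the OECD framework.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    name: str
    learning_data: str    # e.g. "viewing history", "criminal records"
    model_type: str       # e.g. "collaborative filtering", "ML classifier"
    application: str      # e.g. "media recommendation", "sentencing"
    affects_rights: bool  # does the output affect people's legal rights?

def risk_tier(system: AISystemProfile) -> str:
    """Toy triage: similar models warrant different scrutiny
    depending on the application context."""
    if system.affects_rights:
        return "high-risk: human oversight needed"
    return "low-risk: standard review"

movie_rec = AISystemProfile("movie recommender", "viewing history",
                            "collaborative filtering",
                            "media recommendation", False)
sentencing = AISystemProfile("sentencing aid", "criminal records",
                             "ML classifier", "criminal sentencing", True)

for s in (movie_rec, sentencing):
    print(f"{s.name}: {risk_tier(s)}")
```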

Separately, we are also working on the development of a ‘Global AI Incidents Tracker’ to aggregate, in real time, information on AI incidents from published news globally. Its aim is to build an evidence base for categorizing AI risks, ensure consistency in reporting, and facilitate an understanding of which types of AI systems materialize into actual incidents. We want to use this information to calibrate AI policies based on risk management. It is one thing to talk hypothetically about the risks posed by AI, and another to document what is happening out in the field.
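Here is a minimal sketch of what an incident record and a simple aggregation might look like, assuming incidents have already been extracted from news reports. The schema and categories are hypothetical assumptions, not the tracker’s actual data model.

```python
# Illustrative sketch: an AI incident record plus aggregation by system
# type and harm category, to build the kind of evidence base described.
from dataclasses import dataclass
from collections import Counter

@dataclass
class AIIncident:
    date: str           # ISO date of the news report
    system_type: str    # e.g. "facial recognition", "hiring screener"
    harm_category: str  # e.g. "privacy", "discrimination", "safety"
    source_url: str     # link to the published news item

# Hypothetical incidents extracted from news coverage
incidents = [
    AIIncident("2022-03-01", "facial recognition", "privacy",
               "https://example.org/a"),
    AIIncident("2022-03-04", "hiring screener", "discrimination",
               "https://example.org/b"),
    AIIncident("2022-03-09", "facial recognition", "privacy",
               "https://example.org/c"),
]

# Which system types and harm categories show up most often?
by_system = Counter(i.system_type for i in incidents)
by_harm = Counter(i.harm_category for i in incidents)
print(by_system.most_common())
print(by_harm.most_common())
```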

We are also doing some early work on the environmental impact of AI ahead of COP27, to ensure that the use of AI to reduce emissions will not be offset by energy-intensive data center and computational demands.

Many businesses are already involved in the OECD’s work, along with governments, unions, and civil society. If your business is not already engaged, please attend our annual AI conference in February, which this year attracted over 4,000 registrants and more than 80 speakers over four days. We need input from business, which is where the vast majority of AI development and applications are generated.


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time, at various points in her career, for the Wall Street Journal Europe, Time Magazine, the International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.