The need to rethink the way new technologies are designed was a recurring theme at DLD, an annual conference in Munich that gathers some of the greatest talents in technology, science, art, and music.
“AI should be guided by concern for its impact on human society,” said speaker James Landay, a professor at Stanford University and Vice-Director and Faculty Director of Research at the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
He and other speakers at the conference discussed the need for interdisciplinary input and guardrails to ensure technology is being used in the service of humanity.
In a panel on robotics and the industrial metaverse moderated by The Innovator’s Editor-in-Chief, Sami Haddadin, Executive Director of the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich (TUM) and holder of the Chair of Robotics and Systems Intelligence, talked about the university’s new Geriatronics program.
With partners from the nursing care and healthcare fields, TUM is building a collaborative campus where care, education and research are combined to develop AI/robotics technologies and create applications ready for everyday use. It will include a 25,000 square meter care center operated by the Caritas charity that will house a social services station, assisted living facilities and a broad range of nursing care services. The adjacent educational center will utilize technical developments in robotics and AI and provide training for nursing care specialists. The TUM Geriatronics Innovation Space will incubate startups to convert technological developments into products suitable for the real world.
Haddadin told the audience that he hopes that this systemic approach to designing new technologies will be emulated for different applications of AI and other technologies.
Ensuring that technology is used not only to solve real-world problems but also to achieve positive outcomes needs to be embedded at the start of the design process, said some of the speakers.
Speaker Mehran Sahami, the James and Ellenor Chesebrough Professor in the Computer Science department at Stanford University, said Stanford has launched an Embedded Ethics program to help students consistently think through some of the issues that arise in computing. (To learn more, read The Innovator’s interview with Sahami.)
“We need to ensure the next generation of AI is able to judge itself and how it operates and what it is doing so that it can reflect on the validity and origin of its claims,” said John Clippinger, a founding Director of the Open Earth Foundation, a non-profit focused on harnessing emerging tech and radical collaboration for a more resilient planet, and a research affiliate at MIT’s City Science Group.
Clippinger, who is scheduled to speak on a panel focused on the challenges of generative AI, said it is imperative to ensure there are economic incentives for using technology for good.
But some of the speakers said they do not believe we can rely on the tech sector to police itself, or on market forces. They argued that building regulatory guardrails will also be necessary.
Judith Gerlach, Bavaria’s State Minister of Digital Affairs, predicted that 2023 will be the year of smart AI legislation. “The more sophisticated AI becomes the more we will need transparency on where, when and how it is being used,” she told the DLD audience. New technologies should not be created and developed behind closed doors, she said, but rather “in an open exchange with policy makers and the public.”