Nearly 400 women from 50 countries convened in Paris May 22 and 23 for the Women In Tech Global Summit, an annual conference that aims to create a more inclusive and innovative science, technology, engineering and mathematics (STEM) ecosystem.
Topics covered at the conference included cybersecurity (The Innovator’s editor-in-chief moderated a panel on the topic that is pictured here), the metaverse, AI ethics and policy making, industry 5.0 and tech for good.
The presence of speakers such as Christyl Johnson, Deputy Director for Technology and Research Investments at NASA's Goddard Space Flight Center (see The Innovator's Interview of The Week); Sheikha Bodour Al-Qasimi, President of the United Arab Emirates' Sharjah Research, Technology and Innovation Park; and Judith Wunschik, Chief Cybersecurity Officer at Siemens Energy, as well as dozens of accomplished young women from developing countries, served as a reminder of women's progress in STEM.
But significant barriers remain. A panel on entrepreneurship and investment discussed how women remain underrepresented in digital sectors and how startups led by women are still in the minority and raise less money.
Speaker Sara El Hairy, France's State Secretary for Youth at the Ministry of National Education, stressed that women are underrepresented not only in tech but in all leadership areas. As rapid advances in AI bring us to "the crossroads of the very future of humanity," female voices are needed more than ever, she said.
Al-Qasimi said that offering leadership opportunities to women and including their perspectives in decision-making and policymaking would have a positive effect. "Things need to change and change fast. We need to take the reins," she said. "When you consider that even today, government decision-making, policy frameworks, and regulatory parameters are still dominated by men, we need a call to action." Al-Qasimi said that if empowered to be more influential in decision-making, women would help deliver policies that unlock the benefits of new technologies while managing their potential harms to communities. "We need to ensure that gender parity is guaranteed and not just a target," she said.
Here are some of the other key takeaways from the conference:
Cybersecurity: The cybersecurity panel focused on the escalation of threats and how innovation can be used to fight back. James Apathurai, NATO's Deputy Assistant Secretary General for Emerging Security Challenges, talked about how Ukraine is using new technologies and innovation to fend off the Russians (see The Innovator's story about what companies can learn from Ukraine), while Judith Wunschik, Chief Cybersecurity Officer at Siemens Energy, and Alina Matyukhina, CSO/Global Head of Cybersecurity at Siemens' infrastructure branch, spoke about how innovation can be used to secure critical infrastructure. The panelists, along with Daleen Pretorious, Head of Platforms (Cloud) at Absa Bank in South Africa, also spoke about the cybersecurity skills gap and the need to attract more women into the sector.
The Metaverse: Can the metaverse be a catalyst for gender equality? "We have to do better than what we have done everywhere else," remarked one speaker on a panel tackling that topic. "We need to be there in force, otherwise the social construct will be defined by gamers." Ayumi Moore Aoki, the conference's founder, announced that Women In Tech will build a headquarters in Sandbox, a decentralized virtual gaming world in the metaverse built on blockchain technology, with the help of Marie Franville, Co-Founder and CEO of Nabiya Studio.
AI Ethics: During the discussion on AI ethics and policymaking, Yam Atir, VP Strategy & Policy at Israel's Start-up Nation Policy Institute, stressed that having only government or only tech companies come up with rules to govern AI is not good enough. "We need to get everyone around the table and get ahead of it," she said. Another panelist compared AI to "a flowing river that can't be stopped but can and must be directed." The first thing that must be tackled is "a shared agreement on the social values we share," said one of the panelists. "That is the most challenging and the most important." Other issues that must be determined are: What happens to the data? Is it secure and grounded in truth? Are the algorithms compliant and transparent?
Industry 5.0: Speaker Saman Sarbazvatan, COO & Vice Dean of Ecole des Ponts Business School of ENPC – ParisTech and Chair of Harvard European Chapter of Microeconomics of Competitiveness, talked about the convergence of two forces: the digital economy transition and the responsible economy transition, which emphasizes sustainability and values. “The shift of value systems is giving rise to a myriad of innovation opportunities on all fronts,” he said. “Industry 5.0 is not only a competitive edge for front runners but also a strategic hedge against the volatility of global supply chains and increasing socioeconomic turmoil.”
Tech For Good: During the conference Sarbazvatan announced that Ecole des Ponts ParisTech is opening a new Tech for Good Center in Paris and has concluded a strategic partnership with Women In Tech. The plan is for the new tech center to launch several joint initiatives in 2023 and “expand the horizons of opportunities with this brilliant network of international women in technology,” he said.
IN OTHER NEWS THIS WEEK
Microsoft Says State-Sponsored Group Hacked Critical Infrastructure

Microsoft is warning that a state-sponsored Chinese hacking group has compromised critical infrastructure in the U.S. in order to disrupt communications between the country and Asia in the event of a crisis.
Volt Typhoon has been active since mid-2021 and has targeted critical infrastructure organizations in Guam and elsewhere in the United States, according to a May 24 Microsoft blog posting. "In this campaign, the affected organizations span the communications, manufacturing, utility, transportation, construction, maritime, government, information technology, and education sectors. Observed behavior suggests that the threat actor intends to perform espionage and maintain access without being detected for as long as possible," said Microsoft.
To achieve its objective, the threat actor puts strong emphasis on stealth in this campaign, relying almost exclusively on living-off-the-land techniques and hands-on-keyboard activity, says Microsoft. The blog posting says the hackers issue commands via the command line to (1) collect data, including credentials from local and network systems, (2) put the data into an archive file to stage it for exfiltration, and then (3) use the stolen valid credentials to maintain persistence. In addition, Volt Typhoon tries to blend into normal network activity by routing traffic through compromised small office and home office (SOHO) network equipment, including routers, firewalls, and VPN hardware. The group has also been observed using custom versions of open-source tools to establish a command-and-control channel over proxy to further stay under the radar.
On May 25, The Financial Times reported that the Chinese foreign ministry hit back at the allegations, saying the U.S. "lacked evidence" and accusing it of being a "hacker empire." The ministry added that "the involvement of certain companies" in the warning "shows that the U.S. is expanding channels for disseminating false information." Microsoft said it had notified targeted or compromised customers and urged them to close or secure their accounts. The U.S. and international cybersecurity authorities issued a joint advisory notice about Volt Typhoon on May 24 that also warned of Chinese state-sponsored cyber threats.
Governments, Tech Companies Grapple With Ways To Govern AI
Group of Seven (G7) nation officials will meet next week to consider problems posed by generative artificial intelligence (AI) tools like ChatGPT, Japan said on May 26. Leaders of the G7, which includes the United States, European Union and Japan, last week agreed to create an intergovernmental forum called the “Hiroshima AI process” to debate issues around fast-growing AI tools. G7 government officials will hold the first working-level AI meeting on May 30 and consider issues such as intellectual property protection, disinformation and how the technology should be governed, Japan’s communications minister, Takeaki Matsumoto, said.
Meanwhile, OpenAI chief Sam Altman warned that Brussels' efforts to regulate artificial intelligence could lead the maker of ChatGPT to pull its services from the EU. The Financial Times reported that, speaking to reporters during a visit to London this week, Altman said he had "many concerns" about the EU's planned AI Act, which is due to be finalized next year. In particular, he pointed to the European Parliament's move this month to expand its proposed regulations to include the latest wave of general-purpose AI technology, including large language models such as OpenAI's GPT-4. "The details really matter," Altman said. "We will try to comply, but if we can't comply we will cease operating." Altman later backed away from that statement. "We are excited to continue to operate here and of course have no plans to leave," he said in a tweet. Google's chief executive Sundar Pichai also toured European capitals this week, seeking to influence policymakers as they develop "guardrails" to regulate AI.
OpenAI's leadership said this week that it believes the world needs an international regulatory body, akin to the one governing nuclear power, to oversee artificial intelligence. In a post to the company's blog, OpenAI founder Altman, President Greg Brockman and Chief Scientist Ilya Sutskever said there will eventually be the need for an agency like the International Atomic Energy Agency (IAEA). "Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.," says the blog post. "Tracking compute and energy usage could go a long way and give us some hope this idea could actually be implementable. As a first step, companies could voluntarily agree to begin implementing elements of what such an agency might one day require, and as a second, individual countries could implement it. It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say."
Separately, Microsoft President Brad Smith on May 25 called for the people behind artificial intelligence to be held accountable for shortcomings and urged lawmakers to ensure that safety brakes be put on AI used to control the electric grid, water supply and other critical infrastructure. "This is the fundamental need to ensure that machines remain subject to effective oversight by people and the people who design and operate machines remain accountable to everyone else. In short, we must always ensure that AI remains under human control," he wrote. As part of a five-point blueprint for public governance of AI, Smith urged that special attention be paid to the electric grid, water systems and other critical infrastructure. "New laws would require operators of these systems to build safety brakes into high-risk AI systems by design," he wrote in the blog.
AI Based Digital Bridge Enables Paraplegic Patient To Walk
A "digital bridge" that uses artificial intelligence to decode brain signals has enabled a paraplegic patient to walk just by thinking about moving his legs, boosting hopes that the neurotechnology could eventually help millions of people overcome disabilities. Swiss researchers implanted an electronic device in the patient's skull on top of the region of the brain responsible for controlling leg movements. Using algorithms based on adaptive AI methods, "movement intentions are decoded in real time from brain recordings," said Guillaume Charvet, head of brain-computer interface research at French public research body CEA. These signals are then transmitted wirelessly to a neurostimulator connected to an electrode array over the part of the spinal cord that controls leg movement below the injury site, said Jocelyne Bloch, the project's neurosurgeon. Researchers at École Polytechnique Fédérale de Lausanne (EPFL) and Swiss hospitals published their findings in Nature on May 24. The breakthrough will enable doctors to bypass damaged nerves and improve the treatment of a range of neurological disorders, including strokes, the researchers said, though they caution that much research and development will be required to miniaturise and enhance the technology, cut production costs and carry out extensive clinical trials.
Elon Musk’s Brain Implant Company Announces FDA Approval of In-Human Clinical Study
Neuralink, the neurotech startup co-founded by Elon Musk, announced May 25 that it has received approval from the U.S. Food and Drug Administration to conduct its first in-human clinical study. Neuralink is building a brain implant called the Link, which aims to help patients with severe paralysis control external technologies using only neural signals. This means patients with severe degenerative diseases like ALS could eventually regain their ability to communicate with loved ones by moving cursors and typing with their minds.
For more of The Innovator’s News In Context stories click here.