Debra Danielson is a Distinguished Engineer and Senior Vice President, Merger and Acquisition Strategy at CA Technologies, which makes agile, automation, analytics and cybersecurity software. She also serves on the board for CA’s innovation incubator (CA Accelerator), and is an executive advisor for private equity firm Strattam Capital. A recognized expert in acquisition-focused technology evaluation and technical due diligence, her 25+ year career has spanned technical, strategic, operational and managerial roles. Danielson holds more than eleven patents in IT management and security disciplines. She recently spoke to The Innovator about the risks the introduction of AI poses to corporates.
Q: What risks does the introduction of AI pose to corporates?
DD: The most important things senior executives should consider when adding AI to their organization or products are the less obvious elements. It is easy to see and understand how deep learning and AI can help you become more efficient, but it is not always clear where the risks and concerns lie. It is important for an organization to understand how its algorithms are actually making decisions, because if you don’t, then you don’t know whether your algorithms are aligned with the law and/or your values. In addition, there is an opportunity for black hats or nefarious players to attack your algorithms with techniques like data injection. It is bias in, bias out, or in the case of data injection, direct manipulation of the algorithms. Either way you end up with something that is not in the best interest of the organization.
Q: Can you expand on the dangers of data injection, which is a relatively new phenomenon?
DD: The black hats of the world have looked to game the system through every evolution in technology, and it is no different in the era of AI. Data injection is just a different form of attack. In the past the big risk was black hats getting access to your data and taking it out of the enterprise. Now you also have to prevent gamed data from getting into your system. I can give a couple of examples of what this means in the machine vision domain. There are specially constructed images that can cause an algorithm to become computationally “mesmerized,” honing in on that image and ignoring everything else in its field of vision. You could put a specific type of sticker on a stop sign, and machine vision could be tricked into seeing the stop sign as a banana. Or you can game facial recognition by wearing sunglasses with similarly constructed images on the lenses, again mesmerizing the algorithm into seeing whatever is on the sticker and thus providing the wearer invisibility.
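The attacks Danielson describes are known in the research literature as adversarial examples: small, deliberately crafted changes to an input that flip a model’s decision. The sketch below is purely illustrative — it uses an invented toy linear classifier rather than a real vision model — to show the fast-gradient-sign mechanism behind such perturbations:

```python
import numpy as np

# Toy illustration of an adversarial (FGSM-style) perturbation against a
# hypothetical linear classifier. Real attacks target deep vision models,
# but the mechanism is the same: a small, crafted change to the input
# flips the model's decision.

rng = np.random.default_rng(0)

w = rng.normal(size=64)           # invented weights for a 64-"pixel" input
b = 0.0

def predict(x):
    """Label = sign of the linear score w.x + b."""
    return 1 if w @ x + b > 0 else -1

# An input the model labels +1 (say, "stop sign").
x = w / np.linalg.norm(w) * 0.2
assert predict(x) == 1

# Fast-gradient-sign step: nudge every pixel against the score's gradient
# (for a linear model the gradient is just w), bounded per pixel by epsilon.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))   # the crafted copy is misclassified
```

Against a deep network the attacker estimates the gradient instead of reading it off the weights, but the principle — and the defensive need to validate inputs — is the same.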
Q: In 2016 Microsoft released Tay, an AI chatbot, via Twitter. The bot caused controversy when it began posting inflammatory and offensive tweets, forcing Microsoft to shut down the service only 16 hours after launch. Could that be considered another way of distorting algorithms and damaging a brand?
DD: I don’t know if that chatbot was just reflecting the bias of its dataset or was being gamed. Both are risks. There are examples of organizations that have identified bias in their algorithms, and in some of these cases the bias was learned from real but biased input data. Companies have to be very careful that the inferences being drawn rest on valid underlying factors and not untested assumptions. I recently read an article about how Xerox was using data to determine who in its workforce was likely to churn, so it could identify those candidates early in the interview process and not hire them in the first place. It found that people who had long commutes tended to leave. But people with long commutes also tended to come from poorer neighborhoods, so that correlation turned out to be a poverty correlation as well as a location correlation. Unless you are explicitly looking for bias in the results, you can end up institutionalizing this kind of bias, which would go against the core values of your organization. In the case of Xerox, the company decided the churn element [screening out poor people who live far from the office] didn’t reflect its values and removed it from the algorithm.
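The Xerox commute example is essentially a proxy-variable problem: a seemingly neutral feature encodes a sensitive attribute. One simple audit, sketched below with invented data and an arbitrary threshold, is to check each candidate model feature’s correlation against sensitive attributes before using it:

```python
import numpy as np

# Hypothetical bias audit sketch: before using "commute time" as a churn
# predictor (as in the Xerox example), check whether it acts as a proxy
# for a sensitive attribute such as neighborhood income. The data and the
# threshold here are invented for illustration.

rng = np.random.default_rng(1)
n = 1000

# Simulated candidates: lower-income neighborhoods tend to be farther out.
income = rng.normal(50_000, 15_000, n)
commute_minutes = 80 - income / 2_000 + rng.normal(0, 8, n)

def proxy_check(feature, sensitive, threshold=0.3):
    """Flag a model feature whose Pearson correlation with a sensitive
    attribute exceeds the threshold -- a sign it may encode that attribute."""
    r = np.corrcoef(feature, sensitive)[0, 1]
    return r, abs(r) > threshold

r, flagged = proxy_check(commute_minutes, income)
print(f"corr(commute, income) = {r:.2f}, flagged = {flagged}")
```

A correlation check like this only catches linear proxies; a fuller audit would also compare model outcomes across groups, which is why Danielson argues this work needs dedicated people.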
Q: AI bias can potentially do severe damage to a company’s brand or put the company in legal hot water, so a number of companies are considering hiring chief ethics officers. Do you see that as a trend?
DD: Yes, but it will probably require a team, not just a person. Companies are going to need someone to interpret results and compare those results against the company’s core values or ethics. Understanding what their algorithms’ biases are is quite a technical undertaking. Achieving explainability in algorithms and ethical results will require input from data scientists as well as from a company’s ethics organization.
I do think you will see specialists focused on ethical explainability enter the field. But they won’t be alone. The engineers and builders of technology care deeply about how their tech is being used and will absolutely want a governance framework in place. For even if 99.99% of the engineering organization is engaged and highly ethical, you only need one malicious insider. You expect all your employees to be ethical, but you verify it.
Q: Isn’t there more risk from an overworked and rushed developer than from a malicious insider?
DD: There are risks from malicious insiders and there are risks from incompetent insiders — people without bad intentions who are unaware of the damage they could cause. The higher risk comes from incompetent insiders.
Q: What is your best advice to big corporates?
DD: Most corporates’ security systems have been all about avoiding data exfiltration. Now they also have to look for data injection and algorithm gaming. The first thing an organization needs is an awareness that this is a risk; then it must understand where the potential vulnerabilities lie, and analyze and address them.