Hirundo, which splits its headquarters between Tel Aviv and London, is building a Machine Unlearning Platform to ensure that AI models only know what they should. The company is a pioneer in a nascent field that focuses on making AI forget.
Until now, training AI models has been a one-way street: the only way to remove unwanted data that has entered a model is to retrain it, a lengthy and expensive process.
Hirundo removes the unwanted data from the model at less than 5% of the time and cost of retraining, says CEO Ben Luria, a keynote speaker at the April 8 R.AI.SE conference in Paris, which focused on Generative AI. “We are like a precision knife that does surgery,” he says.
Every company using AI will inevitably find inaccurate data, noncompliant or poisoned data, or bias caused by factors that should not be considered, such as gender, says Luria, an experienced entrepreneur and one of Israel’s first Rhodes scholars. Copyright laws and privacy regulations that give people the “right to be forgotten” are also driving interest in techniques that can remove traces of data from algorithms without interfering with the model’s performance.
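Hirundo's own method is proprietary, but the general idea of machine unlearning can be illustrated with a common research approach: instead of retraining from scratch, the model takes gradient *ascent* steps on the data to be forgotten while continuing gradient descent on the data to be retained. The sketch below applies this to a toy logistic-regression model on synthetic data; all names and parameters are illustrative assumptions, not Hirundo's implementation.

```python
import numpy as np

# Toy sketch of approximate machine unlearning (illustrative only,
# NOT Hirundo's proprietary method): train a logistic-regression model,
# then "forget" a subset of the training data by ascending the loss
# gradient on that subset while descending on the retained data.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the mean cross-entropy loss for logistic regression.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# Synthetic data: two Gaussian clusters, labels 0 and 1.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100, dtype=float)

# Train the original model with plain gradient descent.
w = np.zeros(2)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# Suppose the first 20 rows turn out to be "unwanted data".
forget_X, forget_y = X[:20], y[:20]
retain_X, retain_y = X[20:], y[20:]

# Approximate unlearning: ascend the loss on the forget set while
# descending on the retain set, so overall performance is preserved.
w_unlearned = w.copy()
for _ in range(100):
    w_unlearned += 0.1 * grad(w_unlearned, forget_X, forget_y)  # forget
    w_unlearned -= 0.1 * grad(w_unlearned, retain_X, retain_y)  # retain

acc = ((sigmoid(retain_X @ w_unlearned) > 0.5) == retain_y).mean()
print(f"retain-set accuracy after unlearning: {acc:.2f}")
```

The appeal of this family of techniques is exactly the trade-off the article describes: a handful of targeted update steps costs a small fraction of a full retraining run, while the descent steps on retained data keep the model's accuracy from degrading.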