Machine Unlearning Update

At the end of July, I wrote an article about the problem of machine unlearning: how you would get a model such as ChatGPT to "forget" information that it was trained on but that has later been proven false. My focus was on scientific journal articles that had been revealed to be based on faked data and the like.

Imagine my surprise when my latest UCR Alumni newsletter included an article about a recent paper from UCR engineers proposing a solution to the very problem I discussed. Of course, they target a problem of much greater concern to the purveyors of AI models: how to remove copyrighted material from a trained model without retraining it from scratch. That is of paramount importance due to pending court cases that could rule the vast majority of AI models illegal because of copyright violations in their training material.

Ask and you shall receive. Amazing. 🙂