Artificial intelligence algorithms are quickly becoming part of everyday life. Many systems that demand strong security are either already underpinned by machine learning or soon will be: facial recognition, banking, military targeting, and robots and autonomous vehicles, to name a few.
This raises an important question: how secure are these machine learning algorithms against malicious attacks?
In an article published today in Nature Machine Intelligence, my colleagues at the University of Melbourne and I discuss a potential solution to the vulnerability of machine learning models.
We propose that integrating quantum computing into these models could yield new algorithms with strong resilience against adversarial attacks.
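To give a sense of what an adversarial attack involves, the sketch below shows the fast gradient sign method (FGSM) of Goodfellow et al. (2015), a classic attack that nudges each input pixel slightly in the direction that most increases a classifier's loss. This is a generic illustration of the attack itself, not the quantum defence proposed in the article; the model, inputs, labels, and perturbation budget epsilon are all placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast gradient sign method (Goodfellow et al., 2015).

    model   -- any PyTorch classifier returning logits (placeholder)
    x       -- batch of inputs with pixel values in [0, 1] (placeholder)
    y       -- true class labels for x (placeholder)
    epsilon -- maximum per-pixel perturbation (illustrative value)
    """
    x = x.clone().detach().requires_grad_(True)

    # Compute the classification loss and its gradient with respect to the input.
    loss = F.cross_entropy(model(x), y)
    loss.backward()

    # Step each pixel by +/- epsilon in the direction that increases the loss,
    # then clip back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Perturbations this small are typically imperceptible to a human viewer, yet they can flip the prediction of an otherwise accurate classifier, which is precisely the vulnerability at issue here.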