The opaque nature of many ML models makes securing them difficult. The NCSC wants to help.

The UK’s National Cyber Security Centre (NCSC) has published a set of security principles for developers and companies implementing machine learning models. An ML specialist who spoke to Tech Monitor said the principles represent a positive direction of travel but are “vague” when it comes to details.

The NCSC developed its security principles as machine learning and artificial intelligence take on a growing role in industry and wider society, from AI assistants in smartphones to machine learning in healthcare. The most recent IBM Global AI Adoption Index found that 35% of companies reported using AI in their business, and an additional 42% reported they are exploring it.

The NCSC says that as the use of machine learning grows it is important for users to know it is being deployed securely and not putting personal safety or data at risk. “It turns out this is really hard,” the agency said in a blog post. “It was these challenges, many of which don’t have simple solutions, that motivated us to develop actionable guidance in the form of our principles.”

Doing so involved not only examining techniques and defences against potential security flaws, but also taking a more pragmatic approach: finding actionable ways of protecting machine learning systems from exploitation in real-world environments.

Read the full article here: https://techmonitor.ai/technology/cybersecurity/machine-learning-models-security-ncsc
