
Neural Networks: The Threat That Can Hide Malware

With millions, even billions, of parameters, deep learning models can do many things: detect objects in images, recognize speech, generate text, and hide malware. A study involving researchers at the University of California found that neural networks can embed malicious payloads without triggering anti-malware tools.

This malware-cloaking technique raises serious security concerns and has become a hot topic of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in the applications we use every day, the security community needs to think about ways to protect users against this coming class of threats.

A neural network can hide malware


1. The Concept of a Neural Network

A neural network is a series of algorithms that attempts to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. In this sense, a neural network refers to a system of neurons, which can be organic or artificial in nature.

Neural networks can adapt to changing inputs, so the network produces the best possible result without the output criteria having to be redesigned. The concept, which has its roots in artificial intelligence (AI), is rapidly gaining popularity in the development of trading systems.

In that setting, a neural network evaluates price data and finds opportunities to make trading decisions based on its analysis. Such networks can distinguish subtle nonlinear interdependencies and patterns that other methods of technical analysis cannot.

2. Hiding Malware in a Deep Learning Model

Every deep learning model consists of multiple layers of artificial neurons. Depending on the architecture, each neuron is connected to all or some of the neurons in the preceding and following layers. The strength of these connections is captured by numerical parameters tuned during training, as the model learns the task it was designed for. Large neural networks can include hundreds of millions or even billions of parameters.
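To make the scale concrete, here is a minimal sketch (assuming PyTorch is installed) that builds a small fully connected network and counts its trainable parameters; production models push this figure into the millions or billions:

    import torch.nn as nn

    # A small fully connected network: 784 inputs -> 256 hidden -> 10 outputs
    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    # Every weight and bias is one trainable parameter (a 32-bit float by default)
    total = sum(p.numel() for p in model.parameters())
    print("Trainable parameters:", total)  # 784*256 + 256 + 256*10 + 10 = 203,530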

The main idea behind EvilModel is to embed malware in the parameters of a neural network in a way that makes it invisible to malware scanners. This is a form of steganography: the practice of hiding one piece of information inside another.

At the same time, the infected model must perform its main task (e.g., image classification) as well as, or almost as well as, a clean model, so that it neither arouses suspicion nor becomes useless to its victims.

Finally, the attacker needs a mechanism to deliver the infected model to the target devices and to extract the malware from the network's parameters.

3. Changing Parameter Values in a Deep Learning Model

Most deep learning models use 32-bit (4-byte) floating-point numbers to store parameter values. According to the researchers' tests, an attacker can store up to 3 bytes of malware in each parameter without significantly affecting its value.
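To see why this is possible, recall that the low-order bytes of a 32-bit float mostly affect the fine detail of its mantissa. The following sketch (an illustration, not necessarily the researchers' exact encoding) overwrites the three least significant bytes of a parameter while keeping the top byte, which holds the sign and the high exponent bits, so the perturbed value stays in the same order of magnitude:

    import struct

    def embed3(value: float, payload: bytes) -> float:
        """Overwrite the 3 least significant bytes of a float32 with payload bytes."""
        assert len(payload) == 3
        b = bytearray(struct.pack("<f", value))  # little-endian float32
        b[0:3] = payload                         # keep byte 3: sign + high exponent bits
        return struct.unpack("<f", bytes(b))[0]

    w = 0.0123
    w_infected = embed3(w, b"\x41\x42\x43")  # three arbitrary payload bytes
    print(w, "->", w_infected)               # sign and rough magnitude are preserved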

Changing parameter values in a deep learning model

To infect a deep learning model, the attacker splits the malware into 3-byte chunks and embeds the chunks in the model's parameters. To deliver the malware to a target, the attacker can then publish the infected neural network on one of the many online venues that host deep learning models, such as GitHub or PyTorch Hub.

Alternatively, an attacker can mount a more sophisticated supply chain attack, in which the infected model is delivered through automatic updates to software already installed on the target device. Once the infected model reaches the victim, a piece of software extracts the payload and executes it.
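Putting the pieces together, here is a hypothetical end-to-end sketch of the embed-and-extract cycle, with a NumPy array standing in for a real model's weight tensor. The payload is split into 3-byte chunks, one chunk is written into each successive parameter, and the extractor (which must learn the payload length through some side channel) reads the bytes back out:

    import struct
    import numpy as np

    def embed(weights: np.ndarray, payload: bytes) -> np.ndarray:
        """Split the payload into 3-byte chunks and hide one chunk per float32 weight."""
        chunks = [payload[i:i + 3].ljust(3, b"\x00") for i in range(0, len(payload), 3)]
        assert len(chunks) <= weights.size, "model too small for this payload"
        flat = weights.astype(np.float32).ravel().copy()
        for i, chunk in enumerate(chunks):
            b = bytearray(struct.pack("<f", flat[i]))
            b[0:3] = chunk                   # keep the top byte (sign + exponent)
            flat[i] = struct.unpack("<f", bytes(b))[0]
        return flat.reshape(weights.shape)

    def extract(weights: np.ndarray, n_bytes: int) -> bytes:
        """Read the hidden bytes back out of the first ceil(n/3) parameters."""
        flat = weights.astype(np.float32).ravel()
        out = bytearray()
        for i in range((n_bytes + 2) // 3):
            out += struct.pack("<f", flat[i])[0:3]
        return bytes(out[:n_bytes])

    weights = np.random.randn(1000).astype(np.float32) * 0.01
    payload = b"example-payload-bytes"       # a placeholder, not real malware
    infected = embed(weights, payload)
    assert extract(infected, len(payload)) == payload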

Hiding Malware in a Convolutional Neural Network

To verify the feasibility of EvilModel, the researchers tested it on several convolutional neural networks (CNNs). CNNs make a good testbed: they are quite large, often containing dozens of layers and millions of parameters, and their architectures are diverse, combining different layer types (fully connected, convolutional) and different generalization techniques (batch normalization, skip connections, pooling layers, etc.). This variety makes it possible to assess how embedding malware affects different configurations.

Besides, CNNs are widely used in computer vision applications, which makes them prime targets for attackers. Many pre-trained CNNs are available to be integrated into applications without any changes, and many businesses use them without necessarily having a deep understanding of how they work.

A neural network can hide malware without being detected

The researchers first tried embedding malware into AlexNet, a popular CNN that helped renew interest in deep learning in 2012. They embedded 26.8 megabytes of malware into the model while keeping its accuracy within 1 percent of the clean version. Increasing the malware payload beyond that caused the accuracy to drop significantly.
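For a sense of scale, a back-of-envelope estimate (assuming AlexNet's commonly cited figure of roughly 61 million parameters) shows why 26.8 MB sits well below the theoretical ceiling; accuracy loss, not raw capacity, is the binding constraint:

    ALEXNET_PARAMS = 61_000_000   # approximate parameter count for AlexNet
    BYTES_PER_PARAM = 3           # per the researchers' embedding scheme

    capacity_mb = ALEXNET_PARAMS * BYTES_PER_PARAM / 1e6
    print(f"Theoretical capacity: ~{capacity_mb:.0f} MB")  # ~183 MB
    # The study reached 26.8 MB before accuracy began to degrade noticeably.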

4. Securing the Machine Learning Pipeline

Since malware scanners cannot detect malicious payloads embedded in deep learning models, the only countermeasure against them is to destroy the embedded malware itself.

The payload keeps its integrity only as long as its bytes remain intact. Therefore, if the recipient of the neural network retrains it without freezing the infected layers, the parameter values will change and the malware data will be destroyed. Even a single epoch of training may be enough to wipe out any malware embedded in the model.
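Continuing the toy sketch from earlier, even a tiny parameter update corrupts the hidden bytes; here a single step of fine-tuning is simulated as small random noise, which almost certainly destroys the payload:

    import numpy as np

    # 'infected', 'payload', and extract() come from the earlier sketch
    update = np.random.randn(*infected.shape).astype(np.float32) * 1e-4
    retrained = infected + update      # stands in for one fine-tuning step

    recovered = extract(retrained, len(payload))
    print(recovered == payload)        # False: the low-order bytes are gone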

However, most developers use pre-trained models as they are, unless they want to adapt them for a different application. That means that, alongside data poisoning and other known security issues, malware-infected neural networks are a real threat to the future of deep learning.

Enterprises need to establish chains of trust in their machine learning pipelines

Knowing that malware scanners and static analysis tools cannot detect infected models, enterprises need to establish a chain of trust in their machine learning pipelines: make sure models come from reliable sources, and verify that neither the training data nor the learned parameters have been compromised.
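At a minimum, teams can pin and verify every model artifact they download. A minimal sketch, assuming the publisher distributes a SHA-256 digest over a trusted channel (the file name and hash below are placeholders):

    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(1 << 20), b""):
                h.update(block)
        return h.hexdigest()

    TRUSTED_HASH = "..."  # published by the model's vendor out of band
    if sha256_of("pretrained_model.pt") != TRUSTED_HASH:
        raise RuntimeError("Model file does not match the trusted checksum")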

This article has shown how a neural network can be used to hide malware, a threat worth taking seriously. Mat Bao specializes in covering new technology news; if you want to strengthen your business's security, contact Mat Bao right away.
