Practical Attack and Defense Methods for Integrity of Deep Neural Networks in Digital Pathology Image Analysis Systems
Aukeala, Markus (2024)
All rights reserved. This publication is copyrighted. You may download, display, and print it for your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:NBN:fi:amk-2024051110934
Abstract
Digital pathology has advanced rapidly over the past decade. The introduction of new technology brings great potential gains in efficiency, accuracy, and cost, but also new risks. From a cyber-security perspective, in addition to traditional software, hardware, and network security, a new class of risk is attacks against artificial intelligence models and the systems that run them.
The purpose of the thesis was to respond to the commissioner's need to investigate practical options for detecting and preventing vulnerabilities in deep neural network models, and to assess their feasibility. In addition to the quality of the detection models, considerations included performance, practical feasibility, and the estimated cost/benefit ratio.
The thesis used design science as its research method, which aims to produce an artifact that solves the research problem with a practical and innovative solution. Scientific publications served as source material, covering both the vulnerabilities of neural networks in digital pathology and the vulnerabilities of deep learning networks in image analysis more generally. As the artifact, a deep learning neural network (a convolutional autoencoder) was produced, whose purpose is to detect deviations in the input data.
The results show that convolutional autoencoders can detect perturbations of even a few pixels in the analyzed images, and thereby reveal a possible attack that attempts to influence the image-analysis result by manipulating the input data. Implementing standard convolutional autoencoders is a straightforward way to begin detecting such deviations, but a detection model trained on image data generalizes poorly: the neural network must be retrained whenever hardware or software changes cause significant shifts in the image quality or color scheme of the digital pathology image data.
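The detection approach described above can be sketched as follows. This is a minimal illustrative example in PyTorch, not the thesis's actual model: the architecture, image size, and thresholding rule (flagging images whose reconstruction error exceeds a threshold calibrated on clean data) are assumptions made for the sketch.

```python
# Illustrative sketch: perturbation detection with a convolutional autoencoder.
# The autoencoder is trained only on clean pathology tiles; at inference time,
# images it reconstructs poorly (high per-image error) are flagged as anomalous.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder for RGB image tiles (assumed layout)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def reconstruction_error(model, batch):
    """Mean squared reconstruction error per image (shape: [batch_size])."""
    with torch.no_grad():
        return ((model(batch) - batch) ** 2).mean(dim=(1, 2, 3))


def flag_anomalies(errors, threshold):
    """Flag images whose error exceeds a threshold calibrated on clean data,
    e.g. mean + 3 standard deviations of validation-set errors (assumption)."""
    return errors > threshold
```

In use, the model would first be trained to reconstruct clean images; the threshold is then derived from the distribution of reconstruction errors on a clean validation set, so that adversarially perturbed inputs, which the model has never learned to reconstruct, stand out.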