TY - JOUR
T1 - Secure Convolutional Neural Network-Based Internet-of-Healthcare Applications
AU - Khriji, Lazhar
AU - Bouaafia, Soulef
AU - Messaoud, Seifeddine
AU - Chiheb Ammari, Ahmed
AU - Machhout, Mohsen
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/4/12
Y1 - 2023/4/12
N2 - Convolutional neural networks (CNNs) have gained popularity for Internet-of-Healthcare (IoH) applications such as medical diagnostics. However, recent research shows that adversarial attacks with slight, imperceptible changes can undermine deep neural network techniques in healthcare. This raises questions regarding the safety of deploying these IoH devices in clinical settings. In this paper, we review the techniques used to defend against cyber-attacks. Then, we study the robustness of several well-known CNN architectures belonging to the sequential, parallel, and residual families, such as LeNet5, MobileNetV1, VGG16, ResNet50, and InceptionV3, against fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks, in the context of chest radiograph (X-ray) classification for an IoH application. Finally, we propose to improve the security of these CNN structures by studying standard and adversarial training. The results show that, among these models, smaller models with lower computational complexity are more secure against hostile threats than the larger models frequently used in IoH applications. In contrast, we reveal that when these networks are trained adversarially, they can outperform standard-trained networks. The experimental results demonstrate that the model performance breakpoint is γ = 0.3, with a maximum tolerated accuracy loss of 2%.
AB - Convolutional neural networks (CNNs) have gained popularity for Internet-of-Healthcare (IoH) applications such as medical diagnostics. However, recent research shows that adversarial attacks with slight, imperceptible changes can undermine deep neural network techniques in healthcare. This raises questions regarding the safety of deploying these IoH devices in clinical settings. In this paper, we review the techniques used to defend against cyber-attacks. Then, we study the robustness of several well-known CNN architectures belonging to the sequential, parallel, and residual families, such as LeNet5, MobileNetV1, VGG16, ResNet50, and InceptionV3, against fast gradient sign method (FGSM) and projected gradient descent (PGD) attacks, in the context of chest radiograph (X-ray) classification for an IoH application. Finally, we propose to improve the security of these CNN structures by studying standard and adversarial training. The results show that, among these models, smaller models with lower computational complexity are more secure against hostile threats than the larger models frequently used in IoH applications. In contrast, we reveal that when these networks are trained adversarially, they can outperform standard-trained networks. The experimental results demonstrate that the model performance breakpoint is γ = 0.3, with a maximum tolerated accuracy loss of 2%.
KW - Convolutional neural networks
KW - adversarial attacks
KW - internet of healthcare
KW - medical data
KW - security and privacy
UR - http://www.scopus.com/inward/record.url?scp=85153395276&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85153395276&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3266586
DO - 10.1109/ACCESS.2023.3266586
M3 - Article
AN - SCOPUS:85153395276
SN - 2169-3536
VL - 11
SP - 36787
EP - 36804
JO - IEEE Access
JF - IEEE Access
ER -