

Did you know that some of the most commonly used social media channels are affected by the creation of fake accounts that aim to bias the opinion of large groups of people by publishing fake news?
Have you ever wondered how it might be possible to detect such accounts?
One of CARAMEL’s partners has just released a very interesting article, “Exploring Adversarial Attacks and Defences for Fake Twitter Account Detection”.
We encourage you to read this interesting, just-released paper.
- Papandreou, Andreas, Andreas Kloukiniotis, Aris Lalos, and Konstantinos Moustakas. “Deep Multi-Modal Data Analysis and Fusion for Robust Scene Understanding in CAVs”. IEEE MMSP 2021 (October 2021).

Deep learning (DL) tends to be an integral part of Autonomous Vehicles (AVs). Therefore, the development of scene analysis modules that are robust to various vulnerabilities, such as adversarial inputs or cyber-attacks, is becoming an imperative need for future AV perception systems. In this paper, we deal with this issue by exploring recent progress in Artificial Intelligence (AI) and Machine Learning (ML) to provide holistic situational awareness and eliminate the effect of such attacks on the scene analysis modules. We propose novel multi-modal approaches that achieve robustness to adversarial attacks by appropriately modifying the analysis neural networks and by utilizing late fusion methods. More specifically, we propose a holistic approach by adding new layers to a 2D segmentation DL model, enhancing its robustness to adversarial noise. Then, a novel late fusion technique is applied by extracting direct features from the 3D space and projecting them into the 2D segmented space to identify inconsistencies. Extensive evaluation studies using the KITTI odometry dataset provide promising performance results under various types of noise.
@article{papandreou2021multimodal,
abstract = {Deep learning (DL) tends to be an integral part of Autonomous Vehicles (AVs). Therefore, the development of scene analysis modules that are robust to various vulnerabilities, such as adversarial inputs or cyber-attacks, is becoming an imperative need for future AV perception systems. In this paper, we deal with this issue by exploring recent progress in Artificial Intelligence (AI) and Machine Learning (ML) to provide holistic situational awareness and eliminate the effect of such attacks on the scene analysis modules. We propose novel multi-modal approaches that achieve robustness to adversarial attacks by appropriately modifying the analysis neural networks and by utilizing late fusion methods. More specifically, we propose a holistic approach by adding new layers to a 2D segmentation DL model, enhancing its robustness to adversarial noise. Then, a novel late fusion technique is applied by extracting direct features from the 3D space and projecting them into the 2D segmented space to identify inconsistencies. Extensive evaluation studies using the KITTI odometry dataset provide promising performance results under various types of noise.},
author = {Papandreou, Andreas and Kloukiniotis, Andreas and Lalos, Aris and Moustakas, Konstantinos},
journal = {IEEE MMSP 2021},
keywords = {cybersecurity},
month = {October},
title = {Deep multi-modal data analysis and fusion for robust scene understanding in CAVs},
year = 2021
}
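The late-fusion consistency check described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the camera intrinsics, the segmentation grid, and the 3D point labels are all invented for the demo. Labelled 3D points are projected into the 2D segmented image, and pixels where the projected class disagrees with the 2D segmentation are flagged as inconsistencies:

```python
# Sketch of a 3D-to-2D late-fusion consistency check (all values hypothetical).

fx = fy = 100.0      # assumed focal lengths, in pixels
cx, cy = 32.0, 32.0  # assumed principal point

def project(point):
    """Pinhole projection of a 3D camera-frame point to pixel coordinates."""
    x, y, z = point
    return int(round(fx * x / z + cx)), int(round(fy * y / z + cy))

# Toy 2D segmentation: a 64x64 grid with class 1 ("car") in a central
# block and class 0 ("road") everywhere else.
seg = [[1 if 20 <= u < 44 and 20 <= v < 44 else 0 for u in range(64)]
       for v in range(64)]

# Labelled 3D points (x, y, z, class): two consistent with the 2D
# segmentation, one whose label conflicts with the pixel it lands on.
points = [(0.0, 0.0, 2.0, 1),     # lands in the "car" block, labelled car
          (-0.5, -0.5, 2.0, 0),   # lands on "road", labelled road
          (0.05, 0.05, 2.0, 0)]   # lands in the "car" block, labelled road

# Flag every projected point whose 3D class disagrees with the 2D label.
inconsistent = []
for x, y, z, cls in points:
    u, v = project((x, y, z))
    if 0 <= u < 64 and 0 <= v < 64 and seg[v][u] != cls:
        inconsistent.append((u, v))

print(inconsistent)  # pixels where the two modalities disagree
```

In the paper the 2D labels come from a DL segmentation model and the 3D features from point-cloud analysis; here both are hard-coded so the sketch stays self-contained.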
- Kantartopoulos, Panagiotis, Nikolaos Pitropakis, Alexios Mylonas, and Nicolas Kylilis. “Exploring Adversarial Attacks and Defences for Fake Twitter Account Detection”. Technologies 8, no. 4 (2020): 64. doi:10.3390/technologies8040064.

Social media has become very popular and important in people’s lives, as personal ideas, beliefs and opinions are expressed and shared through them. Unfortunately, social networks, and specifically Twitter, suffer from massive existence and perpetual creation of fake users. Their goal is to deceive other users employing various methods, or even create a stream of fake news and opinions in order to influence an idea upon a specific subject, thus impairing the platform’s integrity.
As such, machine learning techniques have been widely used in social networks to address this type of threat by automatically identifying fake accounts. Nonetheless, threat actors update their arsenal and launch a range of sophisticated attacks to undermine this detection procedure, either during the training or test phase, rendering machine learning algorithms vulnerable to adversarial attacks. Our work examines the propagation of adversarial attacks in machine learning based detection for fake Twitter accounts, which is based on AdaBoost. Moreover, we propose and evaluate the use of k-NN as a countermeasure to remedy the effects of the adversarial attacks that we have implemented.
@article{kylilis2020exploring,
abstract = {Social media has become very popular and important in people’s lives, as personal ideas, beliefs and opinions are expressed and shared through them. Unfortunately, social networks, and specifically Twitter, suffer from massive existence and perpetual creation of fake users. Their goal is to deceive other users employing various methods, or even create a stream of fake news and opinions in order to influence an idea upon a specific subject, thus impairing the platform’s integrity. As such, machine learning techniques have been widely used in social networks to address this type of threat by automatically identifying fake accounts. Nonetheless, threat actors update their arsenal and launch a range of sophisticated attacks to undermine this detection procedure, either during the training or test phase, rendering machine learning algorithms vulnerable to adversarial attacks. Our work examines the propagation of adversarial attacks in machine learning based detection for fake Twitter accounts, which is based on AdaBoost. Moreover, we propose and evaluate the use of k-NN as a countermeasure to remedy the effects of the adversarial attacks that we have implemented.},
author = {Kantartopoulos, Panagiotis and Pitropakis, Nikolaos and Mylonas, Alexios and Kylilis, Nicolas},
journal = {Technologies},
keywords = {cybersecurity},
number = 4,
pages = 64,
doi = {10.3390/technologies8040064},
url = {https://www.mdpi.com/2227-7080/8/4/64},
publisher = {Multidisciplinary Digital Publishing Institute},
title = {Exploring Adversarial Attacks and Defences for Fake Twitter Account Detection},
volume = 8,
year = 2020
}
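The label-sanitisation idea behind the k-NN countermeasure can be sketched roughly as follows. This is a minimal demo on synthetic data, not the paper's implementation (which attacks an AdaBoost-based detector on real Twitter features): the feature vectors, cluster positions, and flipped indices below are all invented. A label-flipping poisoning attack corrupts a few training labels, and each point's label is then replaced by the majority vote among its k nearest neighbours, which repairs the flips:

```python
import math
import random

random.seed(0)

# Synthetic "account feature" vectors: two well-separated clusters,
# genuine accounts (label 0) and fake accounts (label 1).
genuine = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(20)]
fake = [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(20)]
X = genuine + fake
true_y = [0] * 20 + [1] * 20

# Label-flipping poisoning attack: the adversary corrupts a few training
# labels to degrade whatever classifier is trained downstream.
poisoned_y = list(true_y)
for i in (3, 25):  # indices chosen arbitrarily for the demo
    poisoned_y[i] = 1 - poisoned_y[i]

def knn_relabel(X, y, k=7):
    """Sanitise training labels: each point takes the majority label
    of its k nearest neighbours (itself excluded)."""
    cleaned = []
    for i, xi in enumerate(X):
        neighbours = sorted(
            (math.dist(xi, xj), yj)
            for j, (xj, yj) in enumerate(zip(X, y))
            if j != i
        )
        votes = [label for _, label in neighbours[:k]]
        cleaned.append(max(set(votes), key=votes.count))
    return cleaned

cleaned_y = knn_relabel(X, poisoned_y)
print(sum(a != b for a, b in zip(poisoned_y, cleaned_y)))  # labels repaired
```

Because each flipped point sits deep inside a cluster of correctly labelled neighbours, the majority vote restores its original label; the sanitised labels can then be fed to the detector's training step.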