Armed with learning capability, AI can stealthily bypass traditional cybersecurity measures. This blog post lists key research available in the public domain that shows how AI, in its adversarial form, can be used to fool or bypass those measures.

Such research, by and large, gives all the more reason why current security measures need to be hardened against the bigger and more cunning threats lurking around.

  • Adversarial Examples in Image Classification: Researchers at Google showed that adversarial examples could fool deep learning models used for image classification (Szegedy et al., 2013). https://arxiv.org/abs/1312.6199
  • Bypassing Speech Recognition Systems: Researchers at UC Berkeley showed that adversarial audio examples could be used to fool speech-to-text systems (Carlini and Wagner, 2018). https://arxiv.org/abs/1801.01944
  • Bypassing Text Classifiers: Researchers at Stanford showed that adversarial examples could fool natural language processing models (Jia and Liang, 2017). https://arxiv.org/abs/1707.07328
  • Bypassing Firewalls: Researchers showed that adversarial examples could be used to bypass firewalls (Hemmati and Hadavi, 2021). https://ieeexplore.ieee.org/abstract/document/9720473
  • Breaking Encryption: Researchers showed that deep learning techniques can be used to extract the key used in AES encryption:
    • (Maghrebi et al., 2016) https://link.springer.com/chapter/10.1007/978-3-319-49445-6_1
    • (Das et al., 2019) https://dl.acm.org/doi/abs/10.1145/3316781.3317934
  • Bypassing Malware Detection Systems: Researchers showed that adversarial examples could be used to bypass malware detection systems (Grosse et al., 2017). https://link.springer.com/chapter/10.1007/978-3-319-66399-9_4
  • Bypassing the Secure Boot Process: Researchers developed AI-powered hardware Trojans that evade both conventional and ML-based hardware Trojan detectors and can potentially compromise the secure boot process (Pan and Mishra, 2022). https://ieeexplore.ieee.org/abstract/document/9774654
  • Bypassing User Authentication: Researchers showed that adversarial examples could be used to bypass user authentication systems, such as those based on mouse dynamics (Tan et al., 2019). https://ieeexplore.ieee.org/abstract/document/8852414
  • Cracking Passwords: Researchers showed that adversarial generative techniques could be used to crack passwords more efficiently (Nam et al., 2020). https://link.springer.com/chapter/10.1007/978-3-030-39303-8_19
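To give a feel for how the attacks above work, here is a minimal, self-contained sketch of the Fast Gradient Sign Method (FGSM), the classic technique for crafting adversarial examples. The model (a toy logistic-regression classifier on synthetic data) and all names are illustrative assumptions, not drawn from any of the papers cited above; real attacks target deep networks, but the core idea is the same: nudge the input in the direction of the loss gradient's sign.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a toy logistic-regression "classifier" on synthetic data
# (stand-in for the victim model; purely illustrative).
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

w = np.zeros(10)
for _ in range(500):                 # plain gradient descent on logistic loss
    p = sigmoid(X @ w)
    w -= 0.1 * X.T @ (p - y) / len(y)

def fgsm(x, label, eps=0.5):
    """Perturb x by eps * sign(grad of the loss w.r.t. x)."""
    p = sigmoid(x @ w)
    grad_x = (p - label) * w         # d(logistic loss)/dx for this linear model
    return x + eps * np.sign(grad_x)

x = X[0]
x_adv = fgsm(x, y[0])
print("clean score:", sigmoid(x @ w))
print("adversarial score:", sigmoid(x_adv @ w))
```

The perturbation is small per feature (bounded by eps) yet systematically pushes the model's score away from the true label; against deep models, the same first-order trick yields misclassified images, audio, and malware feature vectors.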
