Intern Program AI and Cybersecurity

Job description

As an intern with the NavInfo Europe Cybersecurity team, you will research and write your master thesis on innovations in cybersecurity technologies for Connected cars, IoT, and Artificial Intelligence solutions.

You will have the freedom and support of NavInfo Europe to develop your research in one of the fastest-growing business areas of the world. We will give you the opportunity to gain knowledge and skills on a wide range of the most innovative cybersecurity tools, technologies, and methodologies.


You are free to write your own research proposal or choose one of the following suggested Internship projects to research:

  1. Automated Man-in-the-middle attacks for 2G/3G/4G/5G mobile networks with SDR solutions.
  2. Penetration testing and reverse engineering of Connected cars and IoT devices.
  3. Coverage-guided penetration testing of DNN models
  4. Adversarial Attacks on Deep Neural Networks in the Real World
  5. Applying domain-agnostic adversarial defenses to improve model robustness
  6. Single metric for model robustness evaluation
  7. Model robustness explainability


Open internship/M.Sc. thesis project descriptions:


1. Automated Man-in-the-middle attacks for 2G/3G/4G/5G mobile networks with SDR solutions.

Mobile communications are used by more than two-thirds of the world's population, who expect security and privacy guarantees. Even though privacy is a design requirement, numerous man-in-the-middle attacks and network-traffic interception techniques have been demonstrated at the biggest cybersecurity conferences.

In this research, we will automate mobile traffic interception for different mobile network standards and discover new vulnerabilities in communication protocols. Finally, we will conduct a security analysis of the discovered vulnerabilities and propose countermeasures against our attacks.


2. Penetration testing and reverse engineering of Connected cars and IoT devices.

Connected cars are likely the most complex connected devices in use today. The attack surface is immense: the Internet, mobile networks, Bluetooth, custom RF protocols, DAB, media files imported over USB, remote diagnostics, telematics, mobile apps, and more. As an intern, you will get hands-on experience with penetration testing and software and hardware reverse engineering of modern electric cars and IoT devices:

  • Software fuzzing
  • Exploit development
  • Key fob communication security analysis
  • NFC communication security analysis
  • CAN bus communication security analysis
  • Reverse engineering firmware
  • Reverse engineering mobile applications


3. Internship research project: Coverage-guided penetration testing of DNN models

Duration: 30-60 ECTS


It is well known that DNN models can be easily fooled by adversarial attacks. A DNN model is unsafe if even one adversarial example (an input mislabeled after a minor perturbation) can be found. To prevent enormous financial and reputational risks, a penetration test is required. Such a test can provide a meaningful indication of model performance and robustness, as well as support bug finding and structural analysis. Applying various adversarial attacks using available toolboxes (e.g., advbox, foolbox) in an unconstrained way is insufficient.

Testing guided by coverage metrics is much more promising: a coverage-guided penetration test avoids exhaustive (endless) testing and gives an indication of test completeness. Several recent papers (for example 2018, 2020, 2020, toolbox) show potential and progress in this direction. During testing, random mutation enhanced with coverage knowledge, i.e., targeted mutation, is used to generate test cases. There is also a strong link to AD scenario simulation, in which completeness or coverage has to be estimated as well. However, the attacks generated by random mutation do not correlate with those published in the available toolboxes, which significantly reduces the tested space as well as its representativeness of actual attacks on DNNs. Therefore, further research is required in this direction. Since the topic is large and new, and there is little material on it, the research would take a long time and require a degree of freedom that is hardly available in a commercial organization.
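The coverage-guided loop described above can be sketched in a few lines. Everything below is an illustrative assumption, not the project's actual tooling: the "DNN" is a toy random-weight network, the coverage criterion is a simple neuron-activation coverage, and targeted mutation is reduced to small random noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN: a single hidden ReLU layer with random weights.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(2, 8))

def hidden(x):
    return np.maximum(0.0, W1 @ x)

def predict(x):
    return int(np.argmax(W2 @ hidden(x)))

# Neuron coverage: which hidden units have ever fired above a threshold.
covered = np.zeros(8, dtype=bool)

def update_coverage(x, threshold=0.5):
    """Mark newly activated neurons; return True if coverage grew."""
    new = (hidden(x) > threshold) & ~covered
    covered[new] = True
    return bool(new.any())

def mutate(x, scale=0.2):
    """Placeholder mutation operator: small random noise."""
    return x + rng.normal(scale=scale, size=x.shape)

# Coverage-guided loop: mutants that increase coverage become new seeds;
# label flips relative to the parent are candidate adversarial examples.
seeds = [rng.normal(size=4)]
update_coverage(seeds[0])
flips = 0
for _ in range(500):
    parent = seeds[rng.integers(len(seeds))]
    child = mutate(parent)
    if update_coverage(child):
        seeds.append(child)
    if predict(child) != predict(parent):
        flips += 1

print(f"neuron coverage: {covered.mean():.0%}, "
      f"corpus size: {len(seeds)}, label flips: {flips}")
```

The design point this illustrates is the feedback loop itself: coverage progress both steers the mutation toward unexplored behavior and gives the stopping/completeness signal that unconstrained attack toolboxes lack.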

That is why we propose to start the exploration of this topic with a master thesis or PhD project (depending on the scope). This project requires strong knowledge of mathematics and statistics. Basic knowledge of deep learning is a plus.


4. Internship project: Adversarial Attacks on Deep Neural Networks in the Real World

Duration: 30-60 ECTS with research component, shorter without


Testing model robustness to adversarial attacks is often done in unconstrained or unrealistic scenarios, where the attacker is assumed to have either complete access to a model or the ability to execute many queries to carefully craft an adversarial example. When such adversarial examples succeed in fooling the respective models, they carry significant reputational and financial risks.
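To make the white-box setting above concrete, here is a minimal sketch of one such attack, the fast gradient sign method, against a toy linear model. The weights and input are made-up placeholders; a printable patch attack would add physical-world constraints on top of this basic idea.

```python
import numpy as np

# Toy logistic classifier with hand-picked weights (an assumption for
# illustration; a real test would target an actual trained model).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, y, eps):
    """Fast gradient sign method: one step in the input direction that
    increases the cross-entropy loss for label y."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # d(loss)/dx for this linear model
    return x + eps * np.sign(grad_x)

x = np.array([0.4, -0.3, 0.2])
y = predict(x)                    # use the clean prediction as the label
x_adv = fgsm(x, y, eps=0.5)
print(predict(x), "->", predict(x_adv))   # the prediction flips: 1 -> 0
```

This is the unconstrained baseline the project contrasts with: full gradient access and no physical realizability requirement on the perturbation.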

A small subset of research in the domain of adversarial machine learning (for example 2018, 2019, 2020, 2021) has focused on providing more realistic attacks through printable adversarial patch techniques. However, the transferability of such attacks across models or tasks has not been thoroughly examined, as they were usually performed in controlled environments with limited model/domain choices.

It is unclear what is necessary to craft realistic, domain-agnostic adversarial examples, to what extent such adversarial patches can be reused to test model robustness, or whether practical adversarial examples can be crafted through domain-transfer techniques. Therefore, further research is required in this direction.

Since this topic is fairly large, research work is also possible for candidates wishing to do their master thesis on the subject. This project is suitable for a candidate with strong programming and algorithms skills. Knowledge of deep learning is a plus, but not strictly necessary prior to the start of this project.


5. Internship project: Applying domain-agnostic adversarial defenses to improve model robustness

Duration: 30-60 ECTS with research component, shorter without


As Deep Neural Networks (DNNs) become more commonplace, they are increasingly deployed in safety-critical environments. However, it has been shown that DNN models can be easily fooled by adversarial attackers who utilize optimization algorithms to modify the input and cause a misclassification in the target model. DNN models vulnerable to adversarial attacks often require (attack-enhanced) adversarial (re-)training or some other form of adversarial defense.

However, research into adversarial defenses (for example various, 2020a, 2020b) indicates that the majority of these techniques either rely on attack-specific intrinsic behavior to improve the robustness of the model against said attacks, or expand the model through additional layers, which is not necessarily useful in every practical setting (e.g., denoising autoencoders or generative adversarial training). It is unclear how effective, sensitive, or costly such techniques are in a generalized setting across multiple tasks and potentially domains; as such, further research is required in this direction.
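The adversarial (re-)training mentioned above can be sketched on a toy problem to show the loop such defenses share: generate attacks against the current model, then fit on the attacked inputs. The data, model, attack (a one-step gradient sign method), and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data: Gaussian blobs around (+1, +1) and (-1, -1).
y = (rng.random(200) < 0.5).astype(float)
X = rng.normal(scale=0.5, size=(200, 2)) + np.where(y[:, None] == 1, 1.0, -1.0)

w = np.zeros(2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attack(X, y, eps):
    """One-step gradient sign attack against the current linear model."""
    p = sigmoid(X @ w + b)
    return X + eps * np.sign((p - y)[:, None] * w)

# Adversarial training: each step fits the model on freshly attacked inputs.
for _ in range(300):
    X_adv = attack(X, y, eps=0.2)
    p = sigmoid(X_adv @ w + b)
    w -= 0.1 * (X_adv.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

acc_clean = np.mean((sigmoid(X @ w + b) > 0.5) == (y == 1))
acc_adv = np.mean((sigmoid(attack(X, y, 0.2) @ w + b) > 0.5) == (y == 1))
print(f"clean acc: {acc_clean:.2f}, adversarial acc: {acc_adv:.2f}")
```

Note how the defense is tied to the attack used during training; the project's question is precisely how well such a recipe generalizes to other attacks, tasks, and domains.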

Since this topic is fairly large, research work is also possible for candidates wishing to do their master thesis on the subject. This project is suitable for a candidate with strong programming and algorithms skills. Knowledge of deep learning is a plus, but not strictly necessary prior to the start of this project.


6. Short internship project: Single metric for model robustness evaluation

Duration: 15-30 ECTS


Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial attacks. However, measuring the robustness of DNN models to adversarial attacks is not as straightforward as calculating a single value, as the robustness metric tends to be relative to the chosen performance metrics, which often combine a measure of the model's performance itself with the (im)perceptibility of the corresponding attack.

A single-value metric is useful for quickly tracking and comparing different settings and options when optimizing and retraining models in adversarial settings (e.g., YouTube). Utilizing the harmonic mean of the chosen metrics is a suitable, simplified approach to measuring model robustness. However, it does not necessarily tell the whole story of the model, and it is unclear whether it is sufficient by itself to monitor the progress of training a model against adversarial perturbations.
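The harmonic-mean combination can be sketched in a few lines; the metric names and values below are hypothetical, chosen only to show how the weakest component dominates the combined score.

```python
from statistics import harmonic_mean

def robustness_score(metrics):
    """Collapse several [0, 1] metrics into one value via the harmonic
    mean, which is pulled down by the weakest component."""
    return harmonic_mean(metrics.values())

# Hypothetical measurements: clean accuracy, accuracy under attack, and
# an imperceptibility score for the attack used (names are placeholders).
metrics = {"clean_acc": 0.95, "adv_acc": 0.60, "imperceptibility": 0.80}
print(round(robustness_score(metrics), 3))   # -> 0.756
```

The score of 0.756 sits well below the arithmetic mean (0.783), reflecting the harmonic mean's sensitivity to the low adversarial accuracy; whether that single number carries enough information is exactly the research question of this project.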

Therefore, further research is required in examining whether such a metric provides all the necessary information as a single (relative) value for various metric combinations, or if additional information or methods of combining these metrics are necessary.

This project is suitable for a candidate with an understanding of statistics and image processing techniques.


7. Internship project: Model robustness explainability

Duration: 30-60 ECTS with research component, shorter without


DNN models have been shown to be vulnerable to carefully crafted adversarial examples, which ideally (from the attacker's perspective) utilize imperceptible perturbations to fool the target model. A robust model, by contrast, ideally bases its decisions on perceptible components. Existing tools for measuring adversarial robustness contain few, if any, explainability features (e.g., advbox, foolbox).

Explainability tools and methods (for example, as detailed here) can be used to improve the trustworthiness and accountability of models trained against adversarial perturbations. It is therefore necessary to develop tools and solutions that identify and diminish the risk posed by imperceptible components, whilst simultaneously providing explanations of the risks and of the decision-making process of such models. Additionally, those who employ such models should be aware of the risks and biases associated with relying on them, and of the effects of different model intrinsics on the overall robustness to adversarial perturbations. Therefore, further research is required in this direction. Since the topic is large and new, and there is little material on it, the research would take a long time and require a degree of freedom that is hardly available in a commercial organization.

That is why we propose to start the exploration of this topic with an internship project or a master thesis (depending on the scope). Since this topic is broad, multiple opportunities related to this project are possible for candidates wishing to explore specific relevant directions or to complete a master thesis on this topic.

This project is suitable for a candidate with strong programming and data science skills. Knowledge of explainability in the context of machine learning is a strong plus, but not strictly necessary prior to the start of this project.

Requirements

Education:

  • Background in a relevant domain (e.g., Cybersecurity, AI, Data Analytics, Data Science, BI, or a related field)
  • Finishing a Bachelor's or Master's degree at a research university (WO) or a university of applied sciences (HBO) in the Netherlands, Belgium, or Germany.


Technical skills:

  • Ability to read, interpret, and analyze research publications
  • Writing and presentation skills; knowledge of information security (focus on people/process)
  • Able to communicate complex matters (verbally and in writing)


Soft skills:
Ability to work well in an international team environment


Interested?

Does this profile describe you and are you interested in this internship program? Please apply!

You can expect a challenging paid internship with professional guidance. For more information, contact the HR department via hrm@navinfo.eu.