PhD Position F/M Mechanistic Interpretability and Problem-Space Adversarial Attacks for LLM-based Software Vulnerability Detection

Posted March 21, 2026 on the Inria recruitment portal (jobs.inria.fr)


The job description below is in English.

Contract type: Fixed-term contract (CDD)

Required qualification: Master's degree or equivalent (Bac + 5)

Role: PhD student

About the centre or functional department

The Inria Centre at Rennes University is one of Inria's eight centres and hosts more than thirty research teams. It is a major, recognized player in the field of digital sciences, at the heart of a rich R&D and innovation ecosystem: highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher-education institutions, laboratories of excellence, a technological research institute, and more.

Context and assets of the position

This position is funded within the framework of the ANR PRCI project "SecLLM4SVD" (Secured Large Language Models in Reliable Software Vulnerability Detection), whose Principal Investigator is Dr. Yufei Han.

Assigned mission

Context and Motivation:

Large Language Models (LLMs) have demonstrated remarkable capabilities in automating software vulnerability detection (SVD), thanks to their ability to process both natural and programming languages. However, a critical reliability concern with state-of-the-art LLMs is their susceptibility to adversarial attacks. Subtle, problem-space modifications to source code, such as variable renaming or dead code insertion, can mislead the model without changing the code's functionality or its underlying vulnerabilities. Furthermore, the opaque, "black-box" nature of LLMs makes it difficult to tell whether they truly grasp code semantics or merely recognize superficial statistical artifacts.
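To make the threat model concrete, the following is a minimal sketch of one such problem-space perturbation, semantics-preserving variable renaming, using Python's `ast` module. The snippet of "vulnerable" code and the renaming map are purely illustrative:

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Rename variables according to a mapping, preserving program semantics."""
    def __init__(self, mapping):
        self.mapping = mapping  # e.g. {"buf": "tmp0"}

    def visit_Name(self, node):
        # Both reads and assignment targets are Name nodes, so every
        # occurrence of a mapped identifier is renamed consistently.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

def rename(source, mapping):
    tree = ast.parse(source)
    tree = RenameVariables(mapping).visit(tree)
    return ast.unparse(tree)  # Python 3.9+

original = "buf = input_data[:size]\nprocess(buf)"
perturbed = rename(original, {"buf": "tmp0"})
# `perturbed` computes exactly the same thing under a different surface form,
# yet such renamings alone can already flip the verdict of a brittle detector.
```

Dead code insertion can be sketched analogously by appending never-executed statements to the AST; both transformations live in the problem space (actual source code) rather than the feature space.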


Collaboration:

The recruited person will collaborate with Dr. Yuejun Guo at the Luxembourg Institute of Science and Technology.

Responsibilities:
The person recruited is responsible for conducting full-time research activities centered on the topic of the thesis.

Steering/Management:
The person recruited will be supervised by Dr. Yufei Han.

Main activities

  • Thesis Objectives

    This 36-month PhD position aims to bridge the gap between LLM transparency and adversarial robustness. The PhD candidate will spearhead research in two dedicated work packages: WP2 (Mechanistic Interpretability of LLM-based SVD) and WP3 (Problem-space Adversarial Attacks against LLM-based SVD).

    Goal 1: Unveiling LLM Decision-Making 

    The first phase of the thesis will focus on a systematic analysis of how LLMs detect software vulnerabilities. The candidate will:

    • Investigate the causal relationships encoded in LLMs' vulnerability detection mechanisms.
    • Analyse how specific code properties (e.g., syntactic patterns, data flow structures) trigger vulnerability flags.
    • Explore how the attention mechanisms in LLMs encode correlations between code properties and detection outputs, providing human-understandable insights into the LLM logic.
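    As a toy illustration of the attention analysis envisaged above, the sketch below computes single-head scaled dot-product attention weights over a few code tokens and ranks the tokens by attention mass. The token names and embeddings are fabricated for illustration; real analyses would operate on the attention tensors of an actual LLM:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(query, keys):
    """Scaled dot-product attention of one query position over all keys."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    return softmax(scores)

# Hypothetical 2-d embeddings for three code tokens.
tokens = ["strcpy", "buf", "len"]
keys = [[1.0, 0.2], [0.9, 0.1], [0.0, 1.0]]
query = [1.0, 0.0]  # e.g. the representation at the classification position

weights = attention_weights(query, keys)
ranked = sorted(zip(tokens, weights), key=lambda t: -t[1])
# Which tokens receive the most attention mass is a first, coarse proxy
# for which code properties drive the detection output; mechanistic
# interpretability aims to go beyond this toward causal explanations.
```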

    Goal 2: Assessing and Exploiting Vulnerabilities via Adversarial Attacks 

    Building upon the mechanistic understanding from WP2, the candidate will generate adversarially manipulated source code to systematically mislead LLM-based SVD systems. The candidate will:

    • Design and propose advanced problem-space adversarial attacks that preserve code functionality and mimic real-world developer practices.
    • Leverage heuristic optimization methods, such as multi-armed bandit algorithms and reinforcement learning, to craft these attacks.
    • Develop innovative in-context learning techniques to overcome the limited input windows of LLMs, ensuring efficient and comprehensive evaluations of model robustness.
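    To illustrate the bandit formulation mentioned above, the sketch below uses the UCB1 algorithm to learn which of several candidate transformations most often evades a detector, under a fixed query budget. The transformation names and the simulated detector (`evade_prob`) are hypothetical stand-ins for queries to a real LLM-based SVD system:

```python
import math
import random

# Candidate problem-space transformations (illustrative names).
ARMS = ["rename_variables", "insert_dead_code", "reorder_statements"]

def ucb1_attack(evade_prob, budget=500, seed=0):
    """UCB1 bandit that learns which transformation best evades the detector.

    `evade_prob` maps each arm to its success probability, unknown to the
    attacker; each pull simulates one query to the target detector.
    """
    rng = random.Random(seed)
    counts = {a: 0 for a in ARMS}
    rewards = {a: 0.0 for a in ARMS}
    for t in range(1, budget + 1):
        untried = [a for a in ARMS if counts[a] == 0]
        if untried:
            arm = untried[0]  # play every arm once before using the bound
        else:
            # Exploit the arm with the highest upper confidence bound.
            arm = max(ARMS, key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < evade_prob[arm] else 0.0
        counts[arm] += 1
        rewards[arm] += reward
    return max(ARMS, key=lambda a: counts[a])

# With a simulated detector most fragile to dead-code insertion, the bandit
# concentrates its query budget on that transformation.
best = ucb1_attack({"rename_variables": 0.2,
                    "insert_dead_code": 0.6,
                    "reorder_statements": 0.1})
```

The same skeleton extends naturally to contextual bandits or reinforcement learning, where the choice of transformation can depend on properties of the code sample being perturbed.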

     

Skills

Candidate Profile and Requirements

To successfully carry out the research objectives of WP2 and WP3, the ideal candidate should possess a strong foundational background in both artificial intelligence and software security. We are looking for candidates who meet the following requirements:

  • Educational Background: A Master’s degree or equivalent engineering degree in Computer Science, Artificial Intelligence, Cybersecurity, or a closely related discipline.
  • Deep Learning Expertise: Solid knowledge and proven project experience in designing, training, and evaluating Deep Neural Network (DNN)-based classification models.
  • Program Analysis Proficiency: Demonstrated understanding and practical experience in program analysis. Specifically, the candidate must be familiar with the static analysis of source code using semantic representations, such as Control Flow Graphs (CFG) and Data Flow Graphs (DFG).
  • Programming Skills: Strong programming skills in Python and proficiency with standard deep learning frameworks (e.g., PyTorch, TensorFlow). Experience with code parsing and analysis tools (e.g., Tree-sitter, Joern) is highly desirable.
  • Additional Assets: Prior exposure to Large Language Models (LLMs), Natural Language Processing (NLP), or Adversarial Machine Learning will be considered a significant plus.
  • Soft Skills: Excellent analytical and problem-solving skills, an autonomous and rigorous work ethic, and good communication skills in English for scientific writing and presentation within an international consortium.

 

 

Benefits

  • Subsidized meals
  • Partial reimbursement of public transport costs
  • Leave: 7 weeks of annual leave + 10 extra days off due to RTT (statutory reduction in working hours) + possibility of exceptional leave (sick children, moving home, etc.)
  • Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
  • Professional equipment available (videoconferencing, loan of computer equipment, etc.)
  • Social, cultural and sports events and activities
  • Access to vocational training
  • Social security coverage

Remuneration

Monthly gross salary: €2,300


