Pietro Liguori
Researcher in software reliability, cybersecurity, and AI code generation.
University of Naples Federico II
Member of DESSERT Lab
My research sits at the intersection of artificial intelligence and software security. I study how large language models can be used — and how they can be attacked — to generate, analyze, and secure software systems. Before moving to AI-based code generation, I worked extensively on fault injection and failure analysis in large-scale cloud infrastructures.
I earned my Ph.D. at the University of Naples Federico II under the supervision of Prof. Domenico Cotroneo, and I spent nearly a year as a visiting research scholar at the University of North Carolina at Charlotte, working with Prof. Bojan Cukic on AI-based code generation for offensive security.
What I work on
AI Code Generation
Evaluating correctness, robustness, and security of code produced by LLMs and neural code generators. Benchmarks, metrics, and testing methodologies.
Offensive Security via AI
Generating exploits, shellcode, and PowerShell attacks from natural language descriptions. Understanding what AI can — and should not — produce.
Software Reliability
Fault injection, failure mode analysis, and runtime detection in large-scale cloud platforms such as OpenStack.
Vulnerability Detection
Static analysis and prompt-based methods for detecting and patching vulnerabilities in both human-written and AI-generated code.
Code, datasets, and models are released on GitHub (DESSERT Lab) and Hugging Face (OSS-forge).
A brief origin story
Legend has it I am the result of a long-running laboratory experiment conducted by Prof. Domenico Cotroneo and Prof. Roberto Natella, somewhere in the basement of the DESSERT Lab.
For years they applied every fault injection technique at their disposal — deadlines, rejections, conference reviews at 3 AM, paper #4 of the month, PhD defenses, missing GPUs, broken pipelines, and the occasional Reviewer 2. Against all odds, the subject exhibited remarkable resilience, and the prototype was eventually deployed in production at DIETI - University of Naples Federico II.
Residual failure mode: a deep, possibly unhealthy, passion for finding ways to break systems — especially AI.
Scientific output
Selected recent publications
- Reading between the Lines: Context-Aware AI-based Generation of Software Exploits. Empirical Software Engineering, Springer, 2026
- CGP-Tuning: Structure-Aware Soft Prompt Tuning for Code Vulnerability Detection. IEEE Transactions on Software Engineering, 2025
- Human-written vs. AI-generated Code: A Large-Scale Study of Defects, Vulnerabilities, and Complexity. IEEE 36th International Symposium on Software Reliability Engineering (ISSRE 2025)
Recent highlights
1-Star MBDA Innovation Award
Recognition for the Artificial Firmware Designer Assistant project — applying generative models to firmware design in high-criticality industrial contexts.
Selected to present DeVAIC at innovIT — San Francisco
Selected by the Italian Innovation and Culture Hub (innovIT) to present DeVAIC — a tool for security assessment of AI-generated code — in San Francisco.
Invited talk at CRIT SRL
"Vibe Coding and Cybersecurity: balancing productivity and the risks of LLM-generated code."
Program Chair — DSML 2026
Chairing the 9th Workshop on Dependable and Secure Machine Learning, co-located with DSN 2026.
Invited member, IFIP WG 10.4
Invited to the IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance (meetings in USA, Australia, Brazil).