Mahmoud Nazzal, Secure, Robust, and Interpretable AI: Integrating Graph and Language Models

When: Thursday, 2/27, 11:00 am
Where: Tyler Hall 055

Abstract:
Artificial intelligence (AI) has achieved remarkable performance across various domains. In most real-world applications, data takes relational forms, such as graphs and networks, or sequential forms, such as text and time series. As AI evolves, specialized models have emerged to handle these structures: Graph Neural Networks (GNNs) for relational mining and Large Language Models (LLMs) for sequential understanding. Despite their success, these models face challenges in security, robustness, and interpretability. GNNs excel at relational reasoning but are vulnerable to adversarial manipulation and lack interpretability, while LLMs are strong in linguistic reasoning and generalization yet struggle with relational data and carry inherent security risks. This talk introduces a unified framework that integrates GNNs and LLMs to address security-critical challenges by combining their complementary strengths. The integration assumes a frozen LLM, eliminating the need for expensive fine-tuning or exposure of internal model parameters and thereby allowing the use of state-of-the-art LLMs. The framework is designed to accommodate diverse data modalities across a wide range of AI applications. The talk will explore contributions within this integration through real-world case studies, including adversarial robustness in GNNs and LLMs, secure source code generation, code security analysis, hardware design automation, deepfake detection, and large-scale predictive modeling, with insights from industrial collaborations. The talk will conclude with a research vision for building trustworthy AI systems that bridge theoretical insights with real-world interdisciplinary applications.

Bio:
Mahmoud Nazzal is a Ph.D. candidate in Computer Engineering at the New Jersey Institute of Technology (NJIT), specializing in the security, robustness, and applicability of Graph Neural Networks (GNNs) and Large Language Models (LLMs) in security-critical domains. His expertise and contributions span cross-disciplinary applications, including adversarial machine learning, secure source code generation and testing, hardware design automation, deepfake detection, and large-scale transportation system analytics. He has contributed to more than 30 peer-reviewed papers in venues including IEEE S&P and ACM CCS, holds several patents, and has received Best Paper Awards and fellowships. Mahmoud has taught at institutions across the USA, the UAE, Turkey, and Cyprus, covering computer and electrical engineering courses and supervising senior design students. He has also volunteered as a mentor for undergraduate summer research interns at NJIT. Additionally, he has collaborated with NJIT's industry partners, TelAI Inc. and Hvantage Inc., on industry-level LLM-based document analysis projects. He is a member of IEEE and ACM and serves as a reviewer for multiple journals and conferences. His technical skills span deep learning frameworks (PyTorch, TensorFlow, PyG), LLM prompt optimization and Retrieval-Augmented Generation (RAG), programming languages (Python, MATLAB, Java, and C), and high-performance computing platforms.