Secure and Private System Design for Generative & Agentic AI

Time: 9:00 - 17:00 (PDT), July 26, 2026

Location: Meeting Room 101A, Long Beach (co-located with DAC 2026)

Generative and agentic AI are reshaping computing at every layer, from research and infrastructure to the design of the systems themselves. As these capabilities are embedded into how silicon, software, and full-stack platforms are built and verified, both the threat surface and the means to defend it are being rewritten. The SPGAI 2026 Workshop on Secure and Private System Design for Generative and Agentic AI, co-hosted with the Design Automation Conference (DAC), confronts this dual frontier: hardening AI agents and the artifacts they generate against new and largely unmapped attack surfaces, while harnessing those same agents as a force multiplier for vulnerability discovery, secure-by-construction generation, automated compliance, and trustworthy verification. The program is organized around three intertwined themes: (1) Secure, Private, and Trustworthy AI and Agents; (2) Securing the Artifacts of AI and Agents; and (3) Generative and Agentic AI for Security, Privacy, and Trustworthiness. It welcomes contributions across EDA, chip and SoC security, quantum and neuromorphic computing, electro-optic co-design, and embedded and cyber-physical systems.

Important Dates:

Paper Submission Deadline: May 25, 2026 (AoE)
Notification of Acceptance: June 10, 2026
Early Registration Deadline: June 11, 2026
Workshop Date: July 26, 2026

More details are available in the Call for Submissions below.

Call for Submissions

Overview

Generative and agentic AI are reshaping computing at every layer, from research and infrastructure to the design of the systems themselves. As these capabilities are embedded into how silicon, software, and full-stack platforms are built and verified, both the threat surface and the means to defend it are being rewritten. The SPGAI 2026 Workshop, co-hosted with DAC, confronts this dual frontier: hardening AI agents and the artifacts they generate against new and largely unmapped attack surfaces, while harnessing those same agents as a force multiplier for vulnerability discovery, secure-by-construction generation, automated compliance, and trustworthy verification. The program is organized around the three intertwined themes below and welcomes contributions across EDA, chip and SoC security, quantum and neuromorphic computing, electro-optic co-design, and embedded and cyber-physical systems.

Topics of Interest

We solicit original contributions on topics including, but not limited to:

Theme 1 — Secure, Private, and Trustworthy AI and Agents

  • Privacy-preserving training and fine-tuning of foundation models for design (federated, differentially private, and confidential computing approaches)
  • Defenses against prompt injection, jailbreaks, and tool misuse in agentic design workflows
  • Secure multi-agent orchestration, sandboxing, and least-privilege tool use for design automation
  • Trust, alignment, and reliability of LLMs and agents in safety- and security-critical design loops
  • Confidentiality of proprietary IP, netlists, and source code during LLM-aided generation
  • Inference-time defenses, guardrails, and verifiable reasoning for design agents
  • Robustness against data poisoning, model extraction, and supply-chain attacks on generative design pipelines

Theme 2 — Securing the Artifacts of AI and Agents

  • Detection, attribution, and watermarking of LLM- and agent-generated RTL, HLS, layouts, and software
  • Hardware Trojans, backdoors, and malicious patterns introduced (intentionally or unintentionally) by generative AI
  • Verification, formal methods, and testing tailored to AI-generated designs
  • Provenance, traceability, and reproducibility of AI-generated design artifacts
  • Copyright, licensing, and IP-leakage analysis of generated code and designs
  • Secure release and open-sourcing practices for AI-generated artifacts and benchmarks
  • Privacy leakage and memorization analysis in LLM-generated designs

Theme 3 — Generative and Agentic AI for Security, Privacy, and Trustworthiness

  • LLM- and agent-driven vulnerability discovery, fuzzing, and exploit generation for hardware and software
  • Generative AI for secure-by-construction RTL, HLS, EDA scripting, and physical design
  • Agentic workflows for security verification, side-channel analysis, and threat modeling
  • LLMs for cryptographic protocol design, post-quantum migration, and confidential computing
  • AI-aided regulatory compliance, audit, and policy enforcement for design and IT automation
  • Generative and agentic AI for privacy-enhancing technologies (PETs)
  • LLM-aided security of emerging technologies: quantum, neuromorphic, and electro-optic systems
  • Datasets, benchmarks, and competitions for evaluating secure and private generative/agentic design

Important Dates

Paper Submission: May 25, 2026 (AoE)
Notification of Acceptance: June 10, 2026 (AoE)

Submission Instructions

The workshop invites regular papers of 2 to 6 pages in the IEEE conference format (IEEE templates), with the page limit excluding references. Submissions are not anonymized: author names, affiliations, and acknowledgments should be included in the paper. We encourage submissions that commit to open and reproducible research, including datasets, models, and methods; papers with open-source implementations will be highlighted at the workshop. Submissions must be made via EasyChair by May 25, 2026 (AoE).
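
For authors starting from the IEEE templates, the sketch below shows one way to set up a submission, assuming the standard IEEEtran conference class; the title, author block, and bibliography file name are illustrative placeholders, and the official IEEE template remains the authoritative reference.

  % A minimal sketch, assuming the standard IEEEtran conference class.
  % Title, authors, and the bibliography file are placeholders.
  \documentclass[conference]{IEEEtran}

  \title{Your SPGAI 2026 Submission Title}
  \author{\IEEEauthorblockN{First Author}
  \IEEEauthorblockA{University, Country \\ author@example.org}}

  \begin{document}
  \maketitle

  \begin{abstract}
  Two to six pages in total; references do not count toward the limit.
  \end{abstract}

  \section{Introduction}
  Submissions are not anonymized, so names and affiliations appear above.

  % References sit outside the page limit.
  \bibliographystyle{IEEEtran}
  % \bibliography{refs}   % placeholder .bib file
  \end{document}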

Policy on Submissions

SPGAI 2026 primarily seeks previously unpublished papers describing original research, but concurrent submissions to other venues and already-published papers are also welcome. Accepted papers will not appear in formal proceedings.

Authors of accepted papers will have the opportunity to give an oral presentation or present a poster at the workshop. Selected authors may also be invited to co-author a Systematization of Knowledge (SoK) paper on the topic together with the workshop organizers.

Keynote Speakers

To be announced.

Schedule

To be announced.

Workshop Organization

Organizing Team

Qian Lou

Assistant Professor

University of Central Florida, USA

Hongyi Michael Wu

Professor

University of Arizona, USA

Mengxin Zheng

Assistant Professor

University of Central Florida, USA