How to train your Llama

May 17, 2025 · 0 min read
Abstract

Large language models are reshaping both the attacker and defender landscape. But how do you actually build, harden, and break them? This hands-on workshop covers the full lifecycle of working with LLMs from a security practitioner’s perspective.

The first half, “How to Train Your Llama,” walks through practical AI implementation: running models locally, enhancing them with Retrieval-Augmented Generation (RAG), using embedding models for classification, and fine-tuning modern BERT models on custom datasets. These techniques let defenders build context-aware detection and triage tools without relying on external APIs.

The second half, “How to Fleece Your Llama,” flips to the offensive side: exploiting RAG pipelines through injection and poisoning attacks, and automated red-teaming with Promptfoo to generate vulnerability reports against LLM-backed applications.

All demos use open-source tooling and Jupyter notebooks. Attendees leave with a working understanding of how to deploy, customize, and stress-test LLMs for security use cases.

Date
May 17, 2025 5:30 PM — May 19, 2025 7:30 PM
Event
Location

Raleigh, NC