
Breakout session at the inaugural Chainguard Assemble customer conference in San Francisco. The talk runs through the strange shape of attack vectors in deep learning, such as data poisoning, input manipulation, and model inversion, and contrasts them with what will actually move the needle for securing an AI deployment in 2025. The interesting, fun-to-talk-about attacks are probably not the most critical to deal with first: you still have to secure your boring software layer and supply chain. But it's fun to talk about the more exotic attacks (dirty pickle, sponge, model inversion). :)
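The "dirty pickle" attack mentioned above boils down to a mundane property of Python's `pickle` format: deserializing a file can invoke an arbitrary callable chosen by whoever wrote the file. A minimal sketch (the class name and printed message are illustrative, not from the talk):

```python
import pickle

# "Dirty pickle" sketch: pickle's __reduce__ hook lets a serialized object
# name any callable to invoke at load time, so a malicious model file runs
# attacker-chosen code the moment it is deserialized.
class MaliciousPayload:
    def __reduce__(self):
        # Harmless stand-in: a real payload would call os.system or
        # similar with an attacker-controlled command.
        return (print, ("code executed during unpickling!",))

blob = pickle.dumps(MaliciousPayload())

# Merely loading the "model" fires the payload; nothing is called explicitly.
pickle.loads(blob)
```

This is why downloading a pickled model checkpoint from an untrusted source is a supply-chain problem first and an ML problem second.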