
Ep 34: Security at Scale in a Probabilistic World with Ankur Chakraborty

In this episode, Ankur Chakraborty, Senior Director of Platform Security at Box, joins us to examine what security looks like when systems no longer behave the same way twice. Drawing from his experience across Google, Twitter, and Box, Ankur argues that while core security principles haven’t changed, the scale, speed, and uncertainty introduced by AI systems demand a fundamentally different approach.

For decades, security has relied on a comforting assumption: systems are predictable, and control flows are deterministic. Generative AI breaks that assumption. It introduces non-determinism and dramatically increases the speed and volume of change, leaving security teams with a scaling problem that traditional workflows can’t keep up with.

We explore how AI can act as a force multiplier for defenders, boosting individual productivity and automating high-toil workflows, while also forcing a hard rethink of “human in the loop” models that add friction without real control.

The conversation goes deep into context engineering, decision traces, and explainability, and why understanding why a system acted is becoming as important as knowing what it did. We close by exploring how security leaders should evaluate tools in this new era: moving away from process-driven checklists toward outcome-based measures, and preparing for an industry on the brink of meaningful structural change.

00:00–02:49 — Introduction to AI security and Ankur’s platform-security journey

02:49–05:27 — What changes (and what doesn’t) in AI security fundamentals

05:27–09:18 — Scaling security in a probabilistic, AI-generated code world

09:18–10:30 — Embracing AI as defenders

10:30–13:46 — Productivity gains from LLMs for security engineers

13:46–20:06 — Human-in-the-loop vs autonomous agents in security workflows

20:06–22:25 — Context graphs, observability, and decision traces

22:25–32:01 — Explainability, mechanistic interpretability, and security trust

32:01–35:36 — How security teams evaluate tools, platforms, and outcomes

35:36–42:42 — Measuring security outcomes, velocity, and cost trade-offs

42:42–46:46 — False positives, false negatives, and revealed preferences

46:46–50:16 — LLMs as triage engines and force multipliers for security

50:16–52:51 — Underlying fears in the security industry

52:51–55:05 — Context engineering, platforms, and the future of security teams

Tune in for a deep dive!

Connect with Ankur Chakraborty:

LinkedIn: https://www.linkedin.com/in/ankurchakraborty/

Substack: https://machinesagainsthumanity.substack.com/

Connect with Anshuman:

LinkedIn: anshumanbhartiya

X: https://x.com/anshuman_bh

Website: https://anshumanbhartiya.com/

Instagram: anshuman.bhartiya

Connect with Sandesh:

LinkedIn: anandsandesh

X: https://x.com/JubbaOnJeans

