
Ep 36: Discussing AI's Current State of Affairs

In this episode, we examine what is shifting in AI, AppSec, and product security and what remains fundamentally the same.

For years, application security has operated on a familiar model: siloed reviews, tool-driven findings, and periodic assessments that struggle to keep pace with modern development. AI doesn't eliminate those pressures; it amplifies them. Code is generated faster, systems are more interconnected, and the surface area of change expands weekly.

The conversation explores agent-based workflows through tools like OpenClaw, not as a novelty but as a signal of a broader shift: from manually operating tools to orchestrating fleets of agents. As AI interfaces move from chat windows to terminals to messaging environments, security teams must reconsider where workflows live and how context is preserved across them.

For decades, AppSec has struggled to build a reliable understanding of what systems exist and how they connect. Large language models may finally make it possible to construct living maps of components, data flows, and trust boundaries, enabling assessments that talk to each other instead of existing in isolation.

The discussion also revisits threat modeling, not as a compliance artifact but as a foundation for system-wide reasoning. If AI can automate baseline coverage and reduce repetitive toil, security teams may return to their original purpose: high-leverage risk judgment on critical systems. This leads to a broader debate: whether AppSec as a distinct function evolves, shrinks, or dissolves into engineering itself, and what the enduring "maker–checker" model of risk management demands in an AI-native world.

Finally, the episode reflects on the role of large AI labs in security: the gap between ambitious claims and shipped products, and what that means for founders and security leaders navigating change.

00:00–02:15 — Why this is a no-guest episode & what’s changed since last year

02:15–06:30 — AI co-authoring, productivity gains, and writing workflows

06:30–10:20 — OpenClaw architecture, agent risks, and prompt injection realities

10:20–14:00 — The shifting UI of AI: chat → terminal → messaging agents

14:00–18:30 — Agent orchestration vs siloed security tooling

18:30–23:00 — Context graphs and assessments that “talk” to each other

23:00–27:30 — Threat modeling’s evolution and system-wide visibility

27:30–31:00 — Why inventory is still AppSec’s hardest problem

31:00–34:30 — Personal AI stacks: Obsidian, memory layers, and query tools

34:30–37:30 — Open source in the age of AI-generated PR spam

37:30–40:00 — AI labs: what they ship vs what they say

40:00–44:00 — Will AppSec disappear? A serious debate

44:00–48:00 — Maker–checker risk models in an AI-driven org

48:00–51:00 — Where AI replaces toil — and where humans stay critical

51:00–End — 2026 predictions for AI security and product security

Tune in for a deep dive!

Connect with Anshuman:

LinkedIn: anshumanbhartiya

X: https://x.com/anshuman_bh

Website: https://anshumanbhartiya.com/

Instagram: anshuman.bhartiya

Connect with Sandesh:

LinkedIn: anandsandesh

X: https://x.com/JubbaOnJeans

