Discussion about this post

Rainbow Roxy

This piece really made me think. Your insight that BigCo's AppSec tools are mostly demoware, despite the AI hype, resonates deeply. It's a pattern you've highlighted before: progress is often slower than it seems. As an AI enthusiast, I truly appreciate your clear, critical perspective here.

Neural Foundry

The demoware framing is spot-on. Frontier labs shipping AppSec tools isn't about competing with Snyk or Semgrep—it's about controlling the narrative around code generation risks before regulation forces their hand. OpenAI and Anthropic understand that if their models generate insecure code at scale and there's no viable review mechanism, that becomes an existential product liability issue. The strategic value here isn't the AppSec revenue; it's maintaining deployment velocity for their core business.

What's interesting is the resource allocation signal: even with limited product focus, these teams can still outpace dedicated vendors on certain dimensions (integration depth, model access, workflow positioning). That asymmetry won't disappear in 2026, but it will force AppSec startups to pick very specific wedges where platform distribution doesn't matter.

