Discussion about this post

User's avatar
Felipe A. Zubia's avatar

I like this framing; I haven’t seen it put quite this way before, and I think it’s a helpful perspective. That said, we can’t fully absolve the model itself of responsibility.

We accept fallibility everywhere, but always relative to risk: duct tape on a leaky faucet is fine, but it isn’t tolerated in a spacecraft. AI may not be the spacecraft (yet), but it’s well above the “faucet repair” tier, which means there’s a limit to how much agents, workflows, and post-hoc constraints can compensate.

At some point, duct tape stops working, and reliability has to be addressed at the foundational architecture and training level, where the problem truly resides.
