Well-Architected in the Age of AI
You can’t use 2022 architecture thinking to manage 2025 AI risk.
Yet most teams still are.
Salesforce killed its Well-Architected program earlier this year. The backlash was immediate and loud.
I’ve seen what happens when software isn’t thoughtfully designed. When critical roles get cut or consolidated to save $$$. When RAI principles aren’t considered (too often, sadly). When a half-baked pilot gets pushed to production to show progress or chase a “quick win.” But the real issue runs deeper:
We’re building AI systems that evolve, adapt, and make autonomous decisions. But we’re still using architecture principles designed for predictable, static, human-operated software.
How do we build responsibly when the AI tools we’re using now are rewriting the rules as fast as we make them?
Old rules (pre-2022 🤯): “well-architected” meant:
--> Scalable
--> Secure
--> Performant
--> Maintainable
--> Governable
Clean. Predictable. Static. Not anymore.
Don’t get me wrong, the previous Well-Architected team did groundbreaking and highly valuable work. But it largely assumed human-driven processes, static business logic, and predictable inputs/outputs. It did not account for systems that evolve, make autonomous decisions, or interpret human context.
Agentic AI has changed the game. We now have software that doesn’t just execute instructions. It observes, infers, adapts, and acts. It can take autonomous action, learn from interaction, collaborate with other agents, and continuously evolve based on context and feedback.
So what does “well-architected” mean now? My 2 cents on where we need to go:
--> Ethical Design & Failsafes: Because agents interpret, and interpretation is messy, biased, and unpredictable.
--> Dynamic Guardrails: Fixed logic won’t cut it anymore. We need policies that flex based on context, history, and behavior. AI looks for patterns, not rules. This requires data scientists and ML engineers working hand-in-hand with RAI, compliance, and solution architects to build feedback loops that adapt without going rogue.
--> Explainability & Observability: If your AI made a call, can anyone explain it? Can a customer challenge it?
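To make the last two points concrete, here’s a minimal sketch of what a dynamic guardrail with a built-in audit trail could look like. Everything here is hypothetical (the `DynamicGuardrail` class, the refund-limit policy, the `recent_denials` signal are all illustrative, not any vendor’s API); the point is that the policy threshold adapts to behavior, and every decision is recorded with a reason so someone can explain or challenge it later.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GuardrailDecision:
    """One auditable decision: what was attempted, the outcome, and why."""
    action: str
    allowed: bool
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DynamicGuardrail:
    """Hypothetical guardrail whose policy flexes with recent behavior,
    rather than enforcing one fixed rule."""

    def __init__(self, base_limit: float = 1000.0):
        self.base_limit = base_limit
        self.audit_log: list[GuardrailDecision] = []  # observability trail

    def check(self, action: str, amount: float, recent_denials: int) -> GuardrailDecision:
        # Context-sensitive policy: the effective limit tightens as
        # recent denials accumulate, instead of staying static.
        effective_limit = self.base_limit / (1 + recent_denials)
        allowed = amount <= effective_limit
        reason = (
            f"{amount} within adjusted limit {effective_limit:.0f}"
            if allowed
            else f"{amount} exceeds adjusted limit {effective_limit:.0f}"
        )
        decision = GuardrailDecision(action, allowed, reason)
        self.audit_log.append(decision)  # every call is explainable after the fact
        return decision

guardrail = DynamicGuardrail(base_limit=1000.0)
d1 = guardrail.check("refund", 800.0, recent_denials=0)  # allowed: limit is 1000
d2 = guardrail.check("refund", 800.0, recent_denials=1)  # denied: limit halved to 500
```

A real system would pull context from far richer signals than a denial count, but even this toy version shows the shape: the policy is a function of history, not a constant, and the audit log is a first-class output, not an afterthought.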
This isn’t just DevOps + Compliance. It’s Data + ML + Legal + CX + RAI strategy all building together.
I’m happy to hear SF is bringing back the Well-Architected program. But in 2025, if it’s not intentional, ethical, and cross-functional, it’s just architecture theater.
Thoughts? Feedback? What does “well-architected” mean to you in 2025?