[{"data":1,"prerenderedAt":156},["ShallowReactive",2],{"blog-/blog/2026-02-26-explainability-is-architecture-decision":3},{"id":4,"title":5,"body":6,"date":142,"description":143,"draft":144,"extension":145,"meta":146,"navigation":147,"path":148,"seo":149,"stem":150,"tags":151,"__hash__":155},"blog/blog/2026-02-26-explainability-is-architecture-decision.md","Explainability Is an Architecture Decision, Not a Feature",{"type":7,"value":8,"toc":131},"minimark",[9,14,18,21,24,28,31,34,37,40,42,46,49,56,62,65,68,70,74,77,83,98,101,103,107,110,113,116,118,122,125,128],[10,11,13],"h2",{"id":12},"you-cant-bolt-on-transparency-after-the-fact","You can't bolt on transparency after the fact",[15,16,17],"p",{},"There's a growing assumption in enterprise AI that explainability is a feature you add. That you build the system first, then add an \"explain\" button. That there's a library, an API, or a wrapper that makes any AI system transparent.",[15,19,20],{},"This assumption is architecturally wrong — and the regulatory landscape is making the consequences real.",[22,23],"hr",{},[10,25,27],{"id":26},"what-the-law-actually-requires","What the law actually requires",[15,29,30],{},"The EU AI Act entered into force on 1 August 2024 and is being phased in: prohibitions applied from 2 February 2025, general-purpose AI obligations from 2 August 2025, most high-risk obligations apply from 2 August 2026, and some from 2 August 2027. Article 86 applies where a deployer takes a decision based on the output of an Annex III high-risk AI system (with specified exceptions) and that decision produces legal effects or similarly significantly affects the person's health, safety, or fundamental rights. In such cases, the affected individual has the right to obtain \"clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.\"",[15,32,33],{},"Separately, the GDPR already requires \"meaningful information about the logic involved\" in automated decision-making (Article 15(1)(h)). 
The Court of Justice of the European Union strengthened this on 27 February 2025 (Case C-203/22, Dun & Bradstreet Austria), establishing that simply communicating \"a complex mathematical formula, such as an algorithm\" is insufficient — organizations must explain \"the procedure and principles actually applied\" in a way the affected person can understand.",[15,35,36],{},"In the US, the Equal Credit Opportunity Act (Regulation B) has long required creditors to provide specific reasons for denial. California's AB 2013 (effective January 1, 2026) requires developers of public-use generative AI systems to publish a high-level summary of training datasets. California's SB 942 applies to covered providers (those with over 1,000,000 monthly users) and requires a free AI-detection tool along with manifest and latent disclosures for content created or altered by their generative AI systems. These aren't abstract policy discussions — they're enforceable requirements with compliance deadlines.",[15,38,39],{},"The practical question for any system architect is: when a regulator, auditor, or affected individual asks \"why did the system make this decision?\" — what does your architecture allow you to answer?",[22,41],{},[10,43,45],{"id":44},"two-kinds-of-explainability-and-why-it-matters","Two kinds of explainability — and why it matters",[15,47,48],{},"XAI literature commonly distinguishes two approaches to explainability. A paper in the proceedings of MultiMedia Modeling (MMM 2025) and a TechPolicy.Press analysis both examine this distinction in the context of EU regulation:",[15,50,51,55],{},[52,53,54],"strong",{},"Intrinsic explainability"," is possible when the model or decision system is simple enough that the relationship between inputs and outputs can be directly traced. A decision tree, a rule-based system, a decision table — these are intrinsically explainable. 
You can point to a specific rule, a specific condition, and say: \"This input matched this condition, which triggered this outcome.\" The explanation is exact, not approximate.",[15,57,58,61],{},[52,59,60],{},"Post-hoc explainability"," uses external methods (such as SHAP, which is built on Shapley values, or LIME) to approximate why a complex model produced a particular output. These methods are applied after the fact to models that are too complex to trace internally — which includes virtually all large language models. The TechPolicy.Press analysis states it plainly: \"Any insight gained is only an approximation of the model's actual reasoning path. There's no guarantee of the accuracy or consistency of post-hoc explanations. And the more complex the model, the less reliable the approximations.\"",[15,63,64],{},"The same analysis concludes: \"Post-hoc explanations fall short of providing the kind of protections that are possible when human decisions are contested.\"",[15,66,67],{},"This is not a philosophical distinction. It's an architectural one. If you delegate decisions to an LLM, explainability typically becomes post-hoc — approximate, potentially inconsistent, and vulnerable to regulatory challenge. If an LLM only extracts structured data and explicit rules decide, the decision logic is intrinsically explainable — though the extraction step itself still requires validation and logging.",[22,69],{},[10,71,73],{"id":72},"what-this-looks-like-in-practice","What this looks like in practice",[15,75,76],{},"Consider a common pattern: processing an incoming request (a support ticket, an application, a claim — the domain doesn't matter). The system needs to understand the request, classify it, and route it to the appropriate handler based on business rules.",[15,78,79,82],{},[52,80,81],{},"Architecture A: LLM decides.","\nThe LLM receives the request, determines the category, assesses urgency, and selects the routing destination. 
When asked \"why was this routed to Team X?\", the answer is: \"The model determined this was the best routing.\" To explain further, you'd need post-hoc analysis tools, and the explanation would be an approximation.",[15,84,85,88,89,93,94,97],{},[52,86,87],{},"Architecture B: LLM extracts, rules decide.","\nThe LLM receives the request and extracts structured data: category, urgency indicators, key entities. This structured data is then evaluated by explicit rules — a decision table that maps combinations of category, urgency, and entity type to routing destinations. When asked \"why was this routed to Team X?\", the answer is: \"The LLM classified this as category Y with urgency Z. Rule 7 in the routing table (version 3.2, last modified by ",[90,91,92],"span",{},"person"," on ",[90,95,96],{},"date",") specifies that category Y + urgency Z routes to Team X.\"",[15,99,100],{},"Same outcome. Same use of AI. Fundamentally different explainability. And the difference was determined at architecture time, not after deployment.",[22,102],{},[10,104,106],{"id":105},"the-cost-of-getting-this-wrong","The cost of getting this wrong",[15,108,109],{},"Italy's Garante fined OpenAI €15 million over GDPR breaches tied to ChatGPT's processing of personal data, including lack of an adequate legal basis and transparency/information failures. The FTC's \"Operation AI Comply\" targeted deceptive AI marketing practices. These enforcement actions establish a clear pattern: regulators expect documented controls, technical safeguards, and evidence of compliance.",[15,111,112],{},"EM360Tech's analysis of enterprise AI strategy captures the shift: \"AI auditability is now a design requirement. Inspection-readiness becomes the default posture. Enterprises will be expected to demonstrate AI accountability in a way that holds up under external scrutiny. 
That includes documentation, decision logs, model and data governance, and clarity on who is responsible for what.\"",[15,114,115],{},"An MDPI-published framework for engineering explainable AI systems explicitly argues for making \"transparency and compliance intrinsic to both development and operation\" rather than treating explainability as \"an isolated post-hoc output.\"",[22,117],{},[10,119,121],{"id":120},"the-architecture-decision","The architecture decision",[15,123,124],{},"Explainability isn't a compliance checkbox. It's a design constraint that shapes the entire system architecture.",[15,126,127],{},"If you're building a system where decisions need to be explained — to regulators, to auditors, to affected individuals, or even to your own team debugging a production issue — the question isn't \"which XAI library should we use?\" The question is: \"Where in my architecture do decisions happen, and can I trace them?\"",[15,129,130],{},"The answer to that question determines whether you can explain your system's decisions exactly — or only approximately. And that distinction is becoming the difference between compliant and non-compliant, auditable and unauditable, trustworthy and not.",{"title":132,"searchDepth":133,"depth":133,"links":134},"",3,[135,137,138,139,140,141],{"id":12,"depth":136,"text":13},2,{"id":26,"depth":136,"text":27},{"id":44,"depth":136,"text":45},{"id":72,"depth":136,"text":73},{"id":105,"depth":136,"text":106},{"id":120,"depth":136,"text":121},"2026-02-26","Intrinsic vs. post-hoc explainability is determined at design time. EU AI Act Article 86, GDPR, and CJEU rulings are making this an engineering constraint, not a policy discussion.",false,"md",{},true,"/blog/2026-02-26-explainability-is-architecture-decision",{"title":5,"description":143},"blog/2026-02-26-explainability-is-architecture-decision",[152,153,154],"explainability","compliance","architecture","f0b3uFw5gylzRMrEZTK9_O6utlWzZEGtlQK3vUKvPL0",1772500485126]