Testing Decision Tables: A Practical Guide
You wouldn't deploy code without tests. Why deploy business rules without them?
Decision tables are deceptively simple. A table with 15 rules and 4 input columns has thousands of possible input combinations. You can't test them all — but you can test the ones that matter.
The Testing Pyramid for Rules
        /        Edge cases         \
       /      (boundary values)      \
      /───────────────────────────────\
     /        Golden path tests        \
    /   (expected business scenarios)   \
   /─────────────────────────────────────\
  /          Completeness checks          \
 /  (every rule matches at least one test) \
/───────────────────────────────────────────\
Start from the bottom. Work your way up.
Level 1: Rule Coverage
Every rule in the table should have at least one test case that triggers it.
table: loan_eligibility
hit_policy: FIRST
rules:
  - rule_1: { credit_score: "< 500" } → reject
  - rule_2: { credit_score: "[500..650]", income: "< 30000" } → reject
  - rule_3: { credit_score: "[500..650]", income: "[30000..60000]" } → manual_review
  - rule_4: { credit_score: "[500..650]", income: "> 60000" } → approve
  - rule_5: { credit_score: "[651..750]" } → approve
  - rule_6: { credit_score: "> 750", amount: "<= 500000" } → approve
  - rule_7: { credit_score: "> 750", amount: "> 500000" } → manual_review
Minimum test suite — one test per rule:
[
  {
    "name": "Rule 1: Very low credit",
    "input": { "credit_score": 400, "income": 50000, "amount": 10000 },
    "expected": { "decision": "reject" }
  },
  {
    "name": "Rule 2: Low credit + low income",
    "input": { "credit_score": 580, "income": 25000, "amount": 30000 },
    "expected": { "decision": "reject" }
  },
  {
    "name": "Rule 3: Low credit + moderate income",
    "input": { "credit_score": 600, "income": 45000, "amount": 50000 },
    "expected": { "decision": "manual_review" }
  },
  {
    "name": "Rule 4: Low credit + high income",
    "input": { "credit_score": 550, "income": 80000, "amount": 75000 },
    "expected": { "decision": "approve" }
  },
  {
    "name": "Rule 5: Good credit",
    "input": { "credit_score": 700, "income": 40000, "amount": 150000 },
    "expected": { "decision": "approve" }
  },
  {
    "name": "Rule 6: Excellent credit, standard loan",
    "input": { "credit_score": 780, "income": 100000, "amount": 300000 },
    "expected": { "decision": "approve" }
  },
  {
    "name": "Rule 7: Excellent credit, jumbo loan",
    "input": { "credit_score": 800, "income": 200000, "amount": 750000 },
    "expected": { "decision": "manual_review" }
  }
]
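
If your rules engine does not report coverage for you, a short script can. The sketch below is a minimal stand-in, not TIATON internals: it re-encodes the table in a RULES list, parses the cell notation by hand, and assumes the suite above is saved as loan_eligibility_tests.json (a placeholder name). It runs every test under FIRST hit policy and lists any rule that no test ever triggers.

import json
import re

# Hypothetical re-encoding of the loan_eligibility table as (conditions, output) pairs.
# Cells use the same notation the table does: "< 500", "[500..650]", "> 60000".
RULES = [
    ({"credit_score": "< 500"}, "reject"),
    ({"credit_score": "[500..650]", "income": "< 30000"}, "reject"),
    ({"credit_score": "[500..650]", "income": "[30000..60000]"}, "manual_review"),
    ({"credit_score": "[500..650]", "income": "> 60000"}, "approve"),
    ({"credit_score": "[651..750]"}, "approve"),
    ({"credit_score": "> 750", "amount": "<= 500000"}, "approve"),
    ({"credit_score": "> 750", "amount": "> 500000"}, "manual_review"),
]

def matches(condition, value):
    """Evaluate one cell ("< 500", "[500..650]", "<= 500000") against a numeric value."""
    bounds = re.fullmatch(r"\[(\d+)\.\.(\d+)\]", condition)
    if bounds:
        low, high = map(int, bounds.groups())
        return low <= value <= high  # ranges like "[500..650]" are inclusive
    op, threshold = condition.split()
    threshold = float(threshold)
    return {"<": value < threshold, "<=": value <= threshold,
            ">": value > threshold, ">=": value >= threshold}[op]

def evaluate(inputs):
    """FIRST hit policy: return (rule_number, output) for the first matching rule."""
    for number, (conditions, output) in enumerate(RULES, start=1):
        if all(matches(cell, inputs[field]) for field, cell in conditions.items()):
            return number, output
    return None  # no rule matched: a gap

# Run the suite above (saved to a JSON file; the name is a placeholder) and report coverage.
with open("loan_eligibility_tests.json") as f:
    tests = json.load(f)

hit = set()
for test in tests:
    result = evaluate(test["input"])
    assert result and result[1] == test["expected"]["decision"], test["name"]
    hit.add(result[0])

missed = set(range(1, len(RULES) + 1)) - hit
print("Rules never triggered:", sorted(missed) or "none")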
Level 2: Boundary Testing
Boundaries are where bugs hide. For every condition threshold, test both sides:
[
  {
    "name": "Boundary: credit_score = 499 (below 500)",
    "input": { "credit_score": 499, "income": 100000, "amount": 10000 },
    "expected": { "decision": "reject" }
  },
  {
    "name": "Boundary: credit_score = 500 (at threshold)",
    "input": { "credit_score": 500, "income": 100000, "amount": 10000 },
    "expected": { "decision": "approve" }
  },
  {
    "name": "Boundary: credit_score = 650 (top of low range)",
    "input": { "credit_score": 650, "income": 25000, "amount": 10000 },
    "expected": { "decision": "reject" }
  },
  {
    "name": "Boundary: credit_score = 651 (bottom of good range)",
    "input": { "credit_score": 651, "income": 25000, "amount": 10000 },
    "expected": { "decision": "approve" }
  },
  {
    "name": "Boundary: amount = 500000 (at jumbo threshold)",
    "input": { "credit_score": 800, "income": 200000, "amount": 500000 },
    "expected": { "decision": "approve" }
  },
  {
    "name": "Boundary: amount = 500001 (above jumbo threshold)",
    "input": { "credit_score": 800, "income": 200000, "amount": 500001 },
    "expected": { "decision": "manual_review" }
  }
]
Rule of thumb: for every numeric threshold N, test N-1, N, and N+1.
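
If you keep the thresholds in one place, the probes can be generated mechanically. A small sketch; the threshold values are copied from the table above, and the helper names are mine, not part of TIATON:

# Thresholds copied from the loan_eligibility table; each N expands to N-1, N, N+1.
THRESHOLDS = {
    "credit_score": [500, 650, 651, 750],
    "income": [30000, 60000],
    "amount": [500000],
}

def boundary_values(thresholds):
    """For every threshold N, produce the probe values N-1, N, and N+1."""
    return {field: sorted({n + d for n in ns for d in (-1, 0, 1)})
            for field, ns in thresholds.items()}

for field, probes in boundary_values(THRESHOLDS).items():
    print(field, probes)
# Each probe still needs an expected decision; that comes from reading the table,
# not from the generator.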
Level 3: Business Scenarios
These are the real-world cases your business team cares about:
[
  {
    "name": "Scenario: First-time homebuyer, good profile",
    "input": {
      "credit_score": 720,
      "income": 65000,
      "amount": 250000,
      "first_time_buyer": true
    },
    "expected": { "decision": "approve" },
    "note": "Standard approval for good credit first-time buyers"
  },
  {
    "name": "Scenario: Refinance with borderline credit",
    "input": {
      "credit_score": 630,
      "income": 55000,
      "amount": 180000,
      "loan_type": "refinance"
    },
    "expected": { "decision": "manual_review" },
    "note": "Should go to review — credit is borderline for this amount"
  },
  {
    "name": "Scenario: High-income executive, large loan",
    "input": {
      "credit_score": 810,
      "income": 350000,
      "amount": 1200000,
      "employment_type": "executive"
    },
    "expected": { "decision": "manual_review" },
    "note": "Jumbo loan — always requires review regardless of credit"
  }
]
Business scenarios serve double duty: they validate the rules AND document the expected behavior. When a new team member asks "what happens when a first-time buyer applies with 720 credit?", the test case is the answer.
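
One cheap way to exploit that double duty is to render the scenario file itself as documentation. A sketch, assuming the scenarios above are saved as loan_eligibility_scenarios.json (a placeholder name):

import json

# Render the business-scenario suite as a quick plain-text reference for the
# business team: one line per scenario, note included when present.
with open("loan_eligibility_scenarios.json") as f:
    scenarios = json.load(f)

for case in scenarios:
    given = ", ".join(f"{k}={v}" for k, v in case["input"].items())
    print(f"- {case['name']}: {given} -> {case['expected']['decision']}")
    if "note" in case:
        print(f"    {case['note']}")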
Gap Detection
A gap is an input combination that matches no rule. With FIRST hit policy, a gap returns empty output — which usually means a bug.
TIATON detects gaps automatically during test evaluation. Suppose rule 3 had been written with an exclusive upper bound (income < 60000 instead of income in [30000..60000]):

⚠ Gap detected:
  Input: { credit_score: 650, income: 60000, amount: 75000 }
  No rule matched.
  Nearest rules:
    - Rule 3: credit_score [500..650] AND income [30000..60000) — income boundary miss
    - Rule 4: credit_score [500..650] AND income > 60000 — income boundary miss
  Suggestion: Check boundary at income = 60000
The fix is usually a boundary adjustment (change < 60000 to <= 60000) or adding a catch-all default rule.
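
If your engine does not flag gaps for you, a brute-force sweep over the boundary probes from Level 2 catches most of them. Continuing the earlier sketches (it assumes the evaluate, boundary_values, and THRESHOLDS helpers defined above are in scope; none of this is a TIATON API):

from itertools import product

# Brute-force gap sweep: try every combination of boundary probe values and
# report any input that matches no rule under FIRST hit policy.
probes = boundary_values(THRESHOLDS)
fields = list(probes)
for combo in product(*(probes[f] for f in fields)):
    candidate = dict(zip(fields, combo))
    if evaluate(candidate) is None:
        print("Gap detected:", candidate)
# For the complete table above this prints nothing; reintroduce the "< 60000"
# bug in rule 3 and it flags every combination with income = 60000 and a
# credit score in the 500..650 range.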
Regression Testing
When you change a rule, you want to know how production behavior will change. TIATON runs your test suite against both the old and new versions and shows the diff:
Rule change: risk_scoring rule 3
  Before: credit_score [600..650] → score 20
  After:  credit_score [600..680] → score 20

Impact on test suite:
  ✅ 42 tests: same result
  ⚠️ 3 tests: result changed
    - "Good credit moderate income": score 15 → 20 (credit 660 now in range)
    - "Standard approval": score 0 → 20 (credit 670 now in range)
    - "Borderline refinance": score 25 → 45 (credit 665 now in range)

Production impact estimate:
  ~8% of recent applications would have scored differently
This is the power of versioned, testable rules. You see the impact before deploying.
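
You can approximate the same diff locally by running one suite against two versions of a table. A minimal sketch; the two scoring functions are placeholders that model only the changed rule, not the full risk_scoring table:

# Before/after stand-ins for the rule 3 change above: the credit band widens
# from [600..650] to [600..680], the score it awards stays at 20.
def score_before(inputs):
    return 20 if 600 <= inputs["credit_score"] <= 650 else 0

def score_after(inputs):
    return 20 if 600 <= inputs["credit_score"] <= 680 else 0

def diff_suite(tests, old, new):
    """Run one test suite against two versions and report every result that changed."""
    for test in tests:
        before, after = old(test["input"]), new(test["input"])
        if before != after:
            print(f'{test["name"]}: {before} -> {after}')

tests = [
    {"name": "Good credit moderate income", "input": {"credit_score": 660}},
    {"name": "Standard approval", "input": {"credit_score": 670}},
    {"name": "Low credit", "input": {"credit_score": 580}},
]
diff_suite(tests, score_before, score_after)
# Prints the first two names (660 and 670 are newly in range); 580 is unchanged.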
Automation
In TIATON, tests run automatically at two points:
- Before publish — A release with failing tests cannot be published
- On every edit — The playground evaluates test cases in real time as you modify rules
# Run tests via API
curl -X POST /v1/admin/domains/lending/tables/loan_eligibility/test \
  -d '{"tag": "v1.1.0-draft"}'

# Response
{
  "total": 18,
  "passed": 17,
  "failed": 1,
  "failures": [
    {
      "test": "Boundary: credit_score = 650",
      "expected": { "decision": "reject" },
      "actual": { "decision": "manual_review" },
      "matched_rule": 3
    }
  ]
}
One failing test. Clear diff. Fix the rule or fix the test. Either way, you know before it hits production.
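
If you want the same gate in CI, a thin wrapper around the test endpoint is enough. A sketch, assuming the path, payload, and response shape shown above; the host is a placeholder:

import sys
import requests  # third-party HTTP client: pip install requests

# Call the test endpoint and fail the build if any test fails, mirroring the
# publish gate. BASE_URL is a placeholder, not a real TIATON host.
BASE_URL = "https://tiaton.example.com"
response = requests.post(
    f"{BASE_URL}/v1/admin/domains/lending/tables/loan_eligibility/test",
    json={"tag": "v1.1.0-draft"},
    timeout=30,
)
response.raise_for_status()
report = response.json()

print(f'{report["passed"]}/{report["total"]} tests passed')
for failure in report.get("failures", []):
    print(f'  FAIL {failure["test"]}: expected {failure["expected"]}, got {failure["actual"]}')

sys.exit(0 if report["failed"] == 0 else 1)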