Why AI Is a Good Fit for Tax Guidance
This Was Not an Obvious Decision
Using AI for anything related to taxes naturally raises eyebrows.
People associate tax filing with accuracy, compliance, and consequences. None of those feel compatible with something often described as “probabilistic.”
So before committing to AI, we had to answer a more fundamental question:
Is AI actually a good fit for this problem at all?
Taxes Are Rule-Based, Not Creative
Despite how intimidating tax systems feel, they are not subjective.
They are built on:
- Clearly defined rules
- Thresholds and conditions
- Decision trees that lead to deterministic outcomes
Given the same inputs, the expected result does not change.
The complexity comes from volume and language, not ambiguity.
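The determinism claim above can be made concrete in a few lines of code. This is a minimal sketch of a bracketed tax calculation; the brackets and rates are invented for illustration and are not real tax law:

```python
def tax_owed(income: float) -> float:
    # Hypothetical (upper bound, rate) brackets, purely illustrative.
    brackets = [
        (10_000, 0.00),
        (40_000, 0.20),
        (float("inf"), 0.40),
    ]
    owed = 0.0
    lower = 0.0
    for upper, rate in brackets:
        if income > lower:
            # Tax only the slice of income that falls inside this bracket.
            owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# Same input, same output, every time.
print(tax_owed(50_000.0))  # 10000.0
```

The function is pure: no randomness, no hidden state. That is what makes tax logic a good foundation to build an AI interface on top of.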
This distinction matters.
AI struggles when the rules are unclear. It performs well when rules are clear but poorly explained.
Tax systems fall squarely into the second category.
The Real Problem Is Translation
Most people don’t struggle with calculations.
They struggle with:
- Understanding which rules apply to their situation
- Interpreting legal or bureaucratic language
- Knowing what question to ask next
This is where AI excels.
Modern language models are good at:
- Explaining complex concepts in plain language
- Maintaining context across a conversation
- Breaking down multi-step processes
In other words, AI is well suited to act as a guide.
AI as an Interface, Not an Authority
A key design decision we made early on was defining what AI is not allowed to do.
AI does not:
- Make legal determinations
- Submit filings
- Override explicit rules
Instead, it:
- Explains options
- Highlights tradeoffs
- Helps users understand consequences
The final decisions always remain with the user.
This boundary is essential for trust.
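A boundary like this is most trustworthy when it is enforced in code rather than only in a prompt. The sketch below uses invented action names to show the idea; it is an assumption about how such a gate could look, not our actual implementation:

```python
# Hypothetical action taxonomy, purely illustrative.
ALLOWED_AI_ACTIONS = {"explain_option", "highlight_tradeoff", "clarify_consequence"}
FORBIDDEN_AI_ACTIONS = {"legal_determination", "submit_filing", "override_rule"}

def gate_action(action: str) -> bool:
    """Return True only if the AI is explicitly permitted to perform this action."""
    if action in FORBIDDEN_AI_ACTIONS:
        return False
    # Default-deny: anything not on the allowlist is refused.
    return action in ALLOWED_AI_ACTIONS

print(gate_action("explain_option"))  # True
print(gate_action("submit_filing"))   # False
```

The important property is the default-deny stance: the model can only do what the system explicitly allows, never the reverse.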
Why Traditional Software Falls Short
Most tax software is built around static forms.
It expects users to already know:
- Which forms they need
- Which sections apply to them
- What each field actually means
AI enables a different approach.
Instead of forcing users to adapt to the system, the system adapts to the user.
A conversation can branch dynamically based on context, just like a human advisor would.
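To make the contrast with static forms concrete, here is a minimal sketch of context-driven branching. The context fields and questions are invented for illustration:

```python
def next_question(ctx: dict) -> str:
    """Pick the next question based on what we already know about the user."""
    if "employment" not in ctx:
        return "Are you employed, self-employed, or both?"
    if ctx["employment"] == "self-employed" and "expenses" not in ctx:
        return "Did you have business expenses this year?"
    return "Let's review what we have so far."

print(next_question({}))
print(next_question({"employment": "self-employed"}))
```

A static form shows every field to every user; this approach only asks what the current context makes relevant, which is exactly what a human advisor does.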
Where We Do Not Use AI
Equally important is where AI is intentionally not used.
We avoid AI for:
- Final compliance decisions
- Irreversible actions
- Ambiguous edge cases without clear rules
In those situations, AI is better used to surface uncertainty than to hide it.
Knowing when not to automate is part of responsible system design.
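Surfacing uncertainty can itself be a simple mechanism. This sketch assumes a hypothetical confidence score and cutoff; both are illustrative, not values from our system:

```python
def respond(answer: str, confidence: float) -> str:
    """Escalate to a human instead of guessing when confidence is low."""
    if confidence < 0.8:  # hypothetical cutoff
        return ("I'm not certain this applies to your situation. "
                "This may be an edge case worth checking with a professional.")
    return answer

print(respond("This deduction likely applies.", 0.95))
print(respond("This deduction likely applies.", 0.40))
```

The point is not the specific threshold; it is that the system has an explicit path for saying "I don't know" instead of producing a confident-sounding wrong answer.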
Confidence Comes From Constraints
Trust in AI does not come from claiming high accuracy.
It comes from:
- Clear boundaries
- Transparent behavior
- Predictable failure modes
By constraining AI to the role it performs best (explanation and guidance), we can deliver real value without overreach.
What Came Next
Once we were confident that AI was the right interface for tax guidance, the next question became how to build it responsibly.
That led us to reconsider where models run, how data is handled, and what tradeoffs we were willing to accept.
In the next post, I’ll explain why relying entirely on proprietary AI models was not the right long term choice for this kind of product.
This is not about rejecting AI.
It’s about using it carefully, deliberately, and with respect for the problem it is trying to solve.
