When AI Meets Accounting: My Experience as a Keen Amateur in Automation
- Sinéad Pratschke
- Jun 21
- 3 min read
How I tested AI to streamline accounting workflows for my tax practice - and why it didn’t quite work (yet).

The Context: Creativity and Chaos in the Ledger
We’re a boutique tax and accountancy firm working with internationally recognised musicians and music organisations. Our clients tour globally, with wildly varied spending habits that make standardisation… challenging, to say the least.
We use Xero for bookkeeping and basic accounts prep. While Xero’s Rules feature is helpful in more predictable scenarios, our client base defies uniformity. You can’t apply a one-size-fits-all rule to a saxophonist’s spending while on tour in Berlin and then expect it to work for a composer living between Tokyo and Dublin.
So we set out to test whether AI and automation could help us pre-code bank transactions at scale.
The AI Dream: Hands-Off Reconciliation
With the help of a specialist AI/automation company, we built an automation using:
- Make.com (for workflows)
- Airtable (for data structuring)
- Perplexity AI (for classification and decision support)
The idea: pull unreconciled transactions from Xero → use AI to determine the country of origin, expense account type (from our bespoke Xero chart), and correct VAT code → then push that structured data back into Xero for reconciliation.
Simple, right?
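Well, here is roughly what the loop looked like. Our actual build ran through Make.com and Airtable rather than hand-written code, so treat this as a minimal Python sketch of the same flow: the Xero endpoint and `IsReconciled` filter come from the public Accounting API, while the Perplexity call, model name, prompt wording, and response handling are illustrative assumptions, not our production setup.

```python
import json
import requests

# Placeholder credentials -- supply your own OAuth tokens.
XERO_HEADERS = {
    "Authorization": "Bearer <xero-access-token>",
    "Xero-tenant-id": "<tenant-id>",
    "Accept": "application/json",
}
PPLX_HEADERS = {"Authorization": "Bearer <perplexity-api-key>"}

def fetch_unreconciled():
    """Pull unreconciled bank transactions from the Xero Accounting API."""
    resp = requests.get(
        "https://api.xero.com/api.xro/2.0/BankTransactions",
        headers=XERO_HEADERS,
        params={"where": "IsReconciled==false"},
    )
    resp.raise_for_status()
    return resp.json()["BankTransactions"]

def classify(txn):
    """Ask the model for country, account code, and VAT code in one go.
    (This is the composite-prompt version we started with.)"""
    prompt = (
        "Classify this bank transaction for a touring musician.\n"
        f"Payee: {txn.get('Contact', {}).get('Name')}\n"
        f"Reference: {txn.get('Reference')}\n"
        f"Amount: {txn.get('Total')} {txn.get('CurrencyCode')}\n"
        "Return JSON with keys: country, account_code, vat_code."
    )
    resp = requests.post(
        "https://api.perplexity.ai/chat/completions",  # OpenAI-style endpoint
        headers=PPLX_HEADERS,
        json={"model": "sonar",  # illustrative model name
              "messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return json.loads(resp.json()["choices"][0]["message"]["content"])

for txn in fetch_unreconciled():
    suggestion = classify(txn)
    # In the real build, suggestions were staged in Airtable for review
    # before being pushed back into Xero; here we just print them.
    print(txn["BankTransactionID"], suggestion)
```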
Prompt Engineering: The Devil’s in the Details
What followed was a crash course in prompt design and machine logic. We started by testing a single composite prompt, but eventually had to split it into three tailored prompts:
- Country analysis
- Account code classification
- VAT categorisation
We trained the model using our bespoke chart of accounts, example transactions, and context around tax rules.
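To give a feel for the shape of that split, here is a sketch of the three templates. The wording and field names are illustrative rather than our production prompts; the point is that each prompt asks for exactly one decision, and the answers can chain forward (country into account coding, account code into VAT).

```python
# Illustrative prompt templates -- each asks for exactly one decision,
# which is what made the split outperform the composite prompt for us.
# Fill the {placeholders} with str.format() before sending to the model.

COUNTRY_PROMPT = """You are classifying a bank transaction for a touring musician.
Payee: {payee}
Description: {description}
Currency: {currency}
Answer with only the two-letter ISO country code where the expense was
incurred, or UNKNOWN if it cannot be determined."""

ACCOUNT_PROMPT = """You are coding a bank transaction to a bespoke chart of accounts.
Valid account codes (code: name), one per line:
{chart_of_accounts}
Transaction: {description}, amount {amount} {currency}, country {country}.
Answer with only one account code from the list above."""

VAT_PROMPT = """You are assigning a VAT treatment to a coded transaction.
Valid VAT codes: {vat_codes}
Transaction: {description}, account {account_code}, country {country}.
Answer with only one VAT code from the list above."""
```

In principle, splitting this way also makes failures easier to localise: a wrong VAT code can be traced to one short prompt rather than untangled from a composite one.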
The AI got close. Painfully close. In fact, with each iteration, it improved. But…
Close ≠ Good Enough
We needed accuracy that met or exceeded a human’s judgment across all three criteria.
Instead:
- We ended up spending more time checking and correcting AI output than if we’d manually coded the transactions ourselves.
- The errors were often small but significant - “random” in a way only AI can be.
- Some nuance - like an artist’s specific touring habits or historical expenses - was impossible for the model to infer.
Eventually, we made the tough call: the time and resource investment outweighed the benefit.
Key Takeaways (AKA, What I Learned the Hard Way)
Machines Aren’t Intuitive (But They Can Be Logical)
The exercise forced me to think like a machine - define parameters ruthlessly, clarify logic, and leave no room for interpretation. I gained a better understanding of why AI fails, not just how.
Human Intuition Still Wins
My analysis is shaped by everything I know about our clients - habits, context, long-term planning, personal quirks. That 360-degree view is what makes good tax and accountancy advice irreplaceable (for now!).
Experimentation Is Never a Waste
Although the project didn’t reach full implementation, it was worth it. I now know exactly what to look for next time. And I will be trying again - maybe even in a few months, as the models evolve.
Final Thoughts: Why This Matters
For anyone working in professional services, especially in high-context, high-variance fields like ours, AI is not a silver bullet. But it is a powerful tool for learning, testing, and systemising - if you’re willing to get under the hood.
If you’re experimenting too, I’d love to hear how it’s going.