Robotics and Tax Compliance

25 May 2018
Author: Sabina Manea

According to the government website, the UK is the best-prepared country in Europe for the implementation of artificial intelligence (AI) and attracts the most venture capital investment on the continent. It will come as no surprise that the enthusiasm for AI and machine learning has percolated to public authorities, with HMRC leading the way.

For years HMRC has been particularly receptive to automation – and for entirely practical reasons. From introducing digital tax accounts to creating a Digital Strategy, the goal is to help customers get their tax calculations right and to make tax easier. Automation is intended to make both compliance and enforcement more time- and cost-efficient. To this end, HMRC has created a new so-called Collaboration Zone to explore how AI and machine learning can improve its decision-making processes, as well as the customer experience. In HMRC’s own words, AI is about ‘automating repetitive tasks and freeing up our people for more satisfying work helping our customers – this is people and machines working powerfully together’. Since opening its Automation Delivery Centre in 2016, HMRC has set itself the ambitious goal of automating 10 million processes by the end of 2018. It is no surprise that AI is considered the logical next step towards meeting this goal.

How will AI be used? Robotics tools currently in use range from the simplest, namely social media engagement and the use of a virtual assistant called Rita, to harnessing the power of machine learning to assist with compliance and complex tax investigations. It is the latter end of the spectrum that provides more food for thought. KPMG’s 2017 report, ‘Technology in Tax’, reveals that, according to surveys by the World Economic Forum, a significant number of experts believe that by 2021, 30% of corporate audits will be performed by AI, and tax will be collected for the first time by a government via a blockchain.
For the sceptics, an interesting instance of machine failure appears to have occurred in a fairly run-of-the-mill tax case, Richter v HMRC [2017] UKFTT 0339 (TC), an appeal against penalties for late filing of an income tax return. Speaking obiter, the judge remarked that ‘not even the most sophisticated computers can (yet) form beliefs, and certainly not those operated by HMRC’. In Richter, a human being needed to form a belief about the appropriate level of penalty to be charged. Specifically, the legislation provided for a penalty for late filing (beyond six or 12 months) which was the higher of 5% of the tax liability on the return and £300. This wording called for a determination ‘to the best of HMRC’s information and belief’ as to whether the taxpayer’s past history of returns and payments justified a penalty higher than £300. In practice, however, HMRC’s self-assessment computer would trawl its database for cases where a return had been issued but not received by the six-month point, and was programmed to issue a £300 penalty in every such case. It was not programmed to interrogate any data it held about past liabilities. Consequently, the judge cancelled the automatic assessment of the payable penalty. The real-life scenario of the Richter case underlines some potentially serious challenges and glitches that lie ahead in the use of AI in the tax context.
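The gap the judge identified can be illustrated in a few lines of code. The sketch below is purely illustrative (the function names and figures other than the statutory £300 floor and 5% rate are hypothetical): the statutory rule takes the higher of 5% of the liability and £300, whereas the automated system described in the judgment returned £300 regardless of the data it held.

```python
def statutory_penalty(tax_liability: float) -> float:
    """The rule as described in Richter: the higher of 5% of the
    tax liability on the return and £300."""
    return max(0.05 * tax_liability, 300.0)

def automated_penalty(tax_liability: float) -> float:
    """What HMRC's computer was programmed to do: issue £300 in all
    cases, without interrogating liability data."""
    return 300.0

# The two diverge once the liability on the return exceeds £6,000,
# since 5% of £6,000 is £300.
print(statutory_penalty(10_000))  # 500.0
print(automated_penalty(10_000))  # 300.0
```

The example makes the judge's point concrete: below the crossover the flat £300 happens to match the statutory answer, so the defect only surfaces for larger liabilities – precisely the cases the legislation asked HMRC to consider on its information and belief.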
