In 2021, Rules as Code (RaC) is truly hitting its stride. More and more governments are exploring the concept of machine-consumable legislation, regulation and policy, and looking at how to approach creating and delivering better, machine-consumable rules. Research institutes have been established, papers and reports are being published, tools and platforms are being built, and multi-disciplinary teams are trying new things — learning new ways to draft and implement rules by getting their hands dirty.
Given that it’s still an emerging practice, a lot of the current discussion about RaC is centred on introductory questions such as why and how we should code rules (and I’ve tried to answer those questions here and here). The continued focus on the fundamentals is very welcome, but it can make it hard to understand the true potential of RaC — for that, we have to take a longer view. We have to ask ourselves what kind of world we want to build with coded rules.
Better and trustworthy automated decisions
The first reaction that RaC practitioners are often faced with is the fear of the killer robot. What happens if the automated system makes a wrong decision? What if that decision hurts someone? This is not an unfounded fear — automated systems will inevitably get it wrong at some point. We have seen poorly implemented and poorly used automated systems raise debts that are not owed and lead to the arrest of innocent people. Humans make mistakes, so the rules that we code may include errors, and systems we build on those rules may deliver incorrect outcomes. This is not unique to RaC — all systems have flaws.
As a former administrative lawyer and someone who grapples with the ethical uses of technology on a daily basis, I find the use of RaC to help people understand what decisions are being made and how they’re being made — that is, to enable trustworthy automated decisions — particularly compelling.
Administrative law is the body of law that regulates how governments make decisions. In common law countries, this generally includes requirements that only relevant matters should be taken into account, irrelevant matters should not be, reasons should be given for decisions, and there should be workable avenues for merits reviews of decisions.
What admin law gives us is a tried and tested framework for trustworthy and accountable decision-making. One vision of the future is that we fully and consistently integrate those principles into RaC-enabled decision-making systems. In this future, we have more automated decisions, but we don’t need to trust that those decisions are correct because they are demonstrably trustworthy.
For me, the true potential of RaC is not faster decision-making through automation, but better decision-making. Businesses and organisations are strongly motivated to automate decision-making processes — the potential cost savings and efficiency benefits are just too great to ignore. However, if we base an automated decision-making process on an optimal RaC implementation, we can ensure that the decision is completely transparent and traceable. Specifically, we can build a decision-making system on an open and inspectable ruleset, so it’s clear what rules are being applied. We can also deliver the decision together with information about which rules were applied to which facts or evidence — fully traceable reasons for the decision.

An excellent example of this approach is AustLII’s DataLex platform, which automatically generates a report with each decision that explains how the decision was arrived at, based on the rules and the user’s input (see, for example, this consultation that tells you whether you are eligible to run for Federal office in Australia). This offers the possibility of automated decisions that are actually more trustworthy than decisions currently made by humans — we’ll be able to prove, conclusively, that only relevant matters were considered and that only the correct rules were applied to the correct evidence. In that respect, optimal RaC implementations offer us a possible tool to combat issues like implicit bias in human-made decisions (though, it must be said, nothing like a ‘silver bullet’).
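To make the idea of traceable reasons concrete, here is a minimal sketch of a decision function that applies an open ruleset and logs every rule it applied and the fact it was applied to. The rules, thresholds and field names are invented for illustration — this is not DataLex’s actual implementation or any real legislative ruleset.

```python
from dataclasses import dataclass, field

# Hypothetical ruleset and facts, invented for illustration only.

@dataclass
class Decision:
    outcome: bool
    reasons: list = field(default_factory=list)  # (rule, evidence, result) triples

def decide_eligibility(facts: dict) -> Decision:
    """Apply an open, inspectable ruleset and record every rule applied."""
    decision = Decision(outcome=True)
    rules = [
        ("r1: applicant must be at least 18", facts["age"] >= 18, f"age={facts['age']}"),
        ("r2: applicant must be a citizen", facts["citizen"], f"citizen={facts['citizen']}"),
    ]
    # Each rule is checked explicitly and logged against the fact it used,
    # producing fully traceable reasons alongside the outcome.
    for name, satisfied, evidence in rules:
        decision.reasons.append((name, evidence, "satisfied" if satisfied else "not satisfied"))
        if not satisfied:
            decision.outcome = False
    return decision

result = decide_eligibility({"age": 17, "citizen": True})
print(result.outcome)          # False
for reason in result.reasons:  # a statement of reasons, for free
    print(reason)
```

Because the reasons are emitted with the decision, a person who disagrees can see exactly which rule, applied to which piece of evidence, produced the outcome — which is the property an appeal needs.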
Such a system would also make any appeals much easier to make and to determine — either there is an error in the rules, or in the evidence to which the rules have been applied. Either way, with fully transparent and traceable decision-making, a person who disagrees with the decision already has everything they need to appeal, and the appeal should be much easier for the decision-maker to determine.
That’s why my optimistic future vision of RaC includes clear, foundational regulatory requirements for automated decision-making, explicitly requiring that automated or partially-automated decisions are transparent, traceable, accountable and appealable. That is, regulation should impose a baseline for automated decisions that reflects the core principles of administrative decision-making. We’re already seeing some movement towards this — the EU’s General Data Protection Regulation restricts the use of automated decision-making without a ‘human in the loop’ (Art. 22), and jurisdictions all over the world are implementing AI ethics frameworks and policies. In the optimistic future, transparency and traceability of automated decisions are widely accepted and routinely implemented, and governments and companies take it as a given that ‘black box’ systems are not suitable for decisions that affect people’s wellbeing. People are aware when a decision affecting them has been automated, can easily understand the decision that’s being made and how it’s being made, and it’s simple and easy for them to appeal if they think there’s an error.
This is likely to be an easier proposition in government, which generally has an expectation of transparency and appealability, but companies might be concerned about revealing proprietary information via open rulesets — for example, creditworthiness calculations, which are a closely guarded secret in the financial services industry. Still, RaC approaches may be adopted in corporate environments to enable internal transparency and continuity of decision-making; that is, to ensure that companies themselves understand how they made their decisions and are better able to explain them to their customers.
Either way, automated decisions will be more trustworthy, citizens will gain more transparency and understanding of how decisions are made and better options to address problematic decisions, and decision-makers will spend less time on complaints and appeals — a win/win scenario.
Not just cheaper compliance, better compliance
As system capabilities grow, business operations become more complex — and regulation becomes more complex to compensate. In corporate circles, compliance has become a major cost, and monitoring regulatory change is a major component of that cost. In 2020, the compliance teams of most regulated entities were spending between 22% and 35% of their time just tracking and analysing regulatory developments. Another major time and cost sink is manually coding regulatory developments into business systems to enable compliance.
In the RaC-enabled optimistic future, regulated entities have long since linked their business systems to APIs (Application Programming Interfaces) published by regulators. Their business systems consume new rules via the relevant API and automatically update when they come into effect, ensuring immediate compliance. The systems automatically log the change and notify the compliance staff of the update. The APIs, being a conduit to representations of the law, are open and available to the public, enabling enterprising vendors and service providers to build them into their own solutions or create new products.
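No regulator publishes such an API today, so the following is a hedged sketch of what the client side of that future might look like: selecting the ruleset version in force on a given day and flagging a change so compliance staff can be notified. The endpoint, JSON shape, field names and rates are all invented assumptions.

```python
from datetime import date

# Hypothetical shape of a regulator's published rulesets (invented for illustration).
published_rulesets = [
    {"version": "2023.1", "effective_from": date(2023, 1, 1), "levy_rate": 0.10},
    {"version": "2024.1", "effective_from": date(2024, 7, 1), "levy_rate": 0.12},
]

def ruleset_in_force(rulesets, on_day):
    """Pick the most recent ruleset whose effective date has passed."""
    in_force = [r for r in rulesets if r["effective_from"] <= on_day]
    return max(in_force, key=lambda r: r["effective_from"])

def sync(local_version, rulesets, today):
    """Return (current ruleset, changed?) so the system can log and notify on update."""
    current = ruleset_in_force(rulesets, today)
    return current, current["version"] != local_version

current, changed = sync("2023.1", published_rulesets, date(2024, 8, 1))
print(current["version"], changed)  # 2024.1 True
```

In a real deployment the `published_rulesets` list would be fetched from the regulator’s API rather than hard-coded, and the `changed` flag would drive the audit log and staff notification described above.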
This future would require an increased initial investment by governments and regulators to code rules, but with the rapid development of ‘low code’ tools like Blawx and DataLex, even this could become a much easier and cheaper task.
Gone are the days when compliance teams spent a quarter of their work week identifying new regulations. Businesses spend less on manually coding new requirements and implementing updates, and become more profitable — they thus pay more tax, which the government can spend on public services. The productivity dividend easily outstrips the initial investment required to draft the rules in a machine-consumable format. Further, the code is warranted by the government or regulator, which minimises the risk of mistranslation and inadvertent non-compliance. Many companies have not cut their compliance staffing — they have chosen to pursue better outcomes rather than just cheaper ones, freeing their compliance staff to focus on the more difficult issues that cannot easily be automated.
Democratising law by making it easier to understand and apply
RaC practitioners often argue that, in many ways, laws are already coded. Like code, laws are often written in language that is difficult to understand or sometimes utterly unintelligible to non-lawyers. Like legislation, code relies on defined terms and on applying them consistently and strictly, and refers to precedents. As fellow adherents of formal logic and semantics, lawyers and coders are at least cousins, if not siblings.
However, while machine-readable code itself may not be any more understandable to the general public, having coded rules offers the opportunity to build tools that can better explain how the rules apply, or that apply the rules directly. For example, the Mes Aides set of tools, originally built by the French Government and now a non-profit civilian project, helps French citizens understand how different tax and social benefits laws apply to them and to their society. Amongst other things, it provides tax calculators that help users experiment with and simulate different tax settings and outcomes. Similarly, the NSW Department of Planning and Environment and Code for Australia have recently released a RaC-enabled set of tools that help users understand and comply with a complex environmental protection and energy efficiency program.
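A tax calculator of the kind described above is, at its core, a coded ruleset a user can probe. The sketch below implements a generic progressive bracket schedule; the brackets and rates are invented for illustration and are not France’s (or anyone’s) actual tax schedule.

```python
# Hypothetical progressive tax schedule: (lower bound, marginal rate) pairs.
# These figures are invented and do not reflect any real tax law.
BRACKETS = [(0, 0.0), (10_000, 0.11), (26_000, 0.30), (75_000, 0.41)]

def income_tax(income, brackets=BRACKETS):
    """Progressive tax: each slice of income is taxed at its bracket's rate."""
    uppers = [b[0] for b in brackets[1:]] + [float("inf")]
    tax = 0.0
    for (lower, rate), upper in zip(brackets, uppers):
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

# A user (or journalist) can simulate different settings by swapping schedules:
print(income_tax(30_000))
```

Because the schedule is just data, simulating a policy change — say, lowering a threshold or raising a rate — is a one-line edit, which is exactly the kind of experimentation these public calculators enable.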
In the optimistic future, governments have recognised and adapted to the fact that machines are significant users of their rulesets. As such, they’ve redeveloped the way they create laws, simultaneously co-drafting human and machine-readable versions of prescriptive rules, and allowing the disciplines of law and code to influence the drafting process. Governments still release exposure drafts and consult with the community, but now it’s routine to release coded versions of draft legislation. Stakeholders can, for example, easily integrate the proposed rules into their business systems to test how the proposed rules will impact their operations and use this information to inform their submissions. If they identify contradictions or flaws in the ruleset, they make a pull request and suggest corrections directly, instead of or in addition to a written submission. Data journalists use the rules to create visualisations or tools to explain to readers how the rules work (as is often currently done with budget analyses and tax calculators — generally with a lot of painstaking manual work), or the governments themselves create interfaces to help the public understand and to build social support. Once passed, governments use the coded rules to make the laws easier to understand and apply through purpose-built tools and interfaces.
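Testing a draft coded rule against your own operations could be as simple as a regression test: run the current and draft rulesets over historical business data and diff the outcomes. Everything below — both rulesets, the permit threshold, the shipment records — is an invented stand-in, not any real regulation.

```python
# Hypothetical current and draft rulesets, invented for illustration.
def current_rules(shipment):
    """Current rule: a permit is needed above a 100 kg threshold."""
    return shipment["weight_kg"] > 100

def draft_rules(shipment):
    """Exposure draft: the threshold drops to 50 kg."""
    return shipment["weight_kg"] > 50

shipments = [
    {"id": 1, "weight_kg": 30},
    {"id": 2, "weight_kg": 80},
    {"id": 3, "weight_kg": 120},
]

# Run both rulesets over historical data to quantify the draft's impact --
# evidence a stakeholder could attach to a submission (or a pull request).
newly_regulated = [s["id"] for s in shipments
                   if draft_rules(s) and not current_rules(s)]
print(newly_regulated)  # shipments needing a permit only under the draft
```

The diff itself (“shipment 2 would newly require a permit”) is precisely the kind of concrete, data-backed feedback that coded exposure drafts would make routine.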
This may not be a far-flung future. With apologies to William Gibson, the future is here — it’s just not evenly distributed. For example, Wellington City Council (WCC) in New Zealand is investing in creating a rules-enabled planning applications process. Planning law is often very detailed and difficult for lay people to understand. WCC has developed a resource consent checker — that is, a web interface that helps you work out when you need consent from the council for your planned residential development. Residents can get a quick and definitive answer instead of poring over zoning maps and then struggling to read complex development control legislation. The code will soon be open, enabling transparency and the development of new tools by others.
Similarly, the Canadian Government is working on developing a Policy Difference Engine: a set of tools that will, amongst other things, track how policies evolve and iterate over time, and whether they are meeting their stated objectives.
Further, some governments are already using digital tools to explain policy — for example, these interactive budget visualisations from the NSW Government, which enable users to explore how public money has been allocated over time, or by location and project.
If we want to make it to the optimistic future, we need to deliberately work to build it. It’s not enough to merely iterate and achieve incremental change. We don’t need a faster horse, or even a faster car — we need a spaceship. We need to transform our ways of doing things, with intention, empathy and energy.
“Anything one man can imagine, other men can make real.”
Jules Verne ― Around the World in Eighty Days (1873)