Insights

Break it down: Lessons from an algorithmic two-step

The online forum instructables.com is a community for people who like to share how they make and do things. It features recipes, craft projects, a workshop and classes. My favourite lesson is the nine-step diagram that explains how to lead when dancing the Two-step. It suggests that, upon completion of step four, the dance continues with steps five to nine. However, it never says how the dance stops: the instruction at step nine is simply to repeat steps five to eight. The dancers are thereby propelled into an infinite loop of quick, quick, slow. In programming parlance, this recursive function is said to be missing a termination condition. This is just the first of a number of useful insights to be gained from close study of an algorithmic expression of how to do the Two-step.
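To make the missing termination condition concrete, here is a minimal sketch in Python. The function names, the generic step labels and the music_playing check are assumptions made purely for illustration; only the "repeat steps five to eight" behaviour is taken from the diagram.

    # A minimal sketch of the looping behaviour described above.
    # Step labels and function names are illustrative assumptions.

    OPENING = ["step one", "step two", "step three", "step four"]    # danced once
    REPEATED = ["step five", "step six", "step seven", "step eight"] # then repeated

    def dance_as_written():
        for step in OPENING:
            print(step)
        while True:                    # step nine: "repeat steps five to eight"
            for step in REPEATED:      # no exit condition, so the dance never ends
                print(step)

    def dance_with_termination(music_playing):
        """One possible fix: stop when the music stops (the missing termination condition)."""
        for step in OPENING:
            print(step)
        while music_playing():         # the loop now has a way to end
            for step in REPEATED:
                print(step)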

Between steps five and six, there is an asterisked note that “your partner will be doing the exact opposite of your movements during steps five to nine”. Setting aside the known issue with the recursive function identified above, this instruction is vague and arguably ambiguous. For example, “exact opposite” could be interpreted to mean that when the leader starts, their partner should stop. Alternatively, if the leader steps quick, quick, slow, should the partner step slow, slow, quick? And what is the partner to do while the leader is dancing steps one to four? Do they do nothing, or do they do exactly what the leader does? Neither of these scenarios makes any sense, and both would be inconsistent with the accompanying photos and foot-placement diagram.
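One way to see why the note is underspecified is to try to implement it. The sketch below is entirely hypothetical: the step data, the MIRROR table and both interpretations are assumptions for illustration, not the content of the Instructables page.

    # Two hypothetical readings of "the exact opposite of your movements".
    # The step values and the mirroring rule are illustrative assumptions only.

    LEADER_STEPS = [
        ("left", "forward", "quick"),
        ("right", "forward", "quick"),
        ("left", "forward", "slow"),
        ("right", "forward", "slow"),
    ]

    MIRROR = {"left": "right", "right": "left", "forward": "back", "back": "forward"}

    def partner_mirrors(leader_steps):
        """Reading consistent with the photos: opposite foot and direction, same timing."""
        return [(MIRROR[foot], MIRROR[direction], timing)
                for foot, direction, timing in leader_steps]

    def partner_inverts_everything(leader_steps):
        """A literal 'exact opposite' reading: even the timing is reversed."""
        return [(MIRROR[foot], MIRROR[direction],
                 "slow" if timing == "quick" else "quick")
                for foot, direction, timing in leader_steps]

The two functions return different routines for the same input, which is exactly the kind of divergence that neither a dancer nor a compiler can resolve from the note alone.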

The Two-step instructions also suggest that the person who leads the dance is traditionally male and that their partner would likely be female. This assumption imports bias into the algorithm. The bias exists before the algorithm is ever written: bias in, bias out. If these processes are going to be relevant, they need to produce outputs that are useful and timely. Furthermore, whether they fall short of the letter and spirit of any relevant anti-discrimination laws may depend upon context. Under Australia’s AI Ethics Principles, where an algorithm produces unintended, detrimental or offensive outcomes, the individuals and organisations that created and implemented it may be held liable for any resulting loss or damage. The European Union’s General Data Protection Regulation requires a human review component in algorithmic decision-making where decisions have legal or similarly significant consequences for a person.

In an ideal scenario, individuals who design automated systems would check for potential infinite loops, bias, safety hazards and other errors at the design phase. Waiting until a program has been implemented and is generating results is too late. These issues become all the more important as more machine-learning algorithms are programmed to optimise complex mathematical functions over large data sets and Internet-of-Things sensor streams.
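A design-phase check can be as simple as a unit test that the routine terminates within a sensible budget. The sketch below is hypothetical: the step budget, the eight-bar music and the function names are assumptions, not part of the original instructions.

    # A hypothetical design-phase check: confirm the routine terminates within
    # a step budget before anyone relies on it.

    def run_routine(music_playing, max_steps=1000):
        """Dance until the music stops, guarded against a runaway loop."""
        steps_taken = 0
        while music_playing():
            for _ in ("step five", "step six", "step seven", "step eight"):
                steps_taken += 1
                if steps_taken >= max_steps:
                    raise RuntimeError("step budget exceeded: possible infinite loop")
        return steps_taken

    def test_routine_terminates():
        bars = iter(range(8))                                 # the music stops after eight bars
        music_playing = lambda: next(bars, None) is not None
        assert run_routine(music_playing) == 32               # 8 bars x 4 steps, then it stops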

Quite aside from issues arising from unintended bias, fear of misuse of technology is at its most acute when there is a perception that machines are developing cognitive (or self-learning) abilities. At this stage in the development of machine learning, “synthetic intellect” manifests as autonomous machines, self-driving cars, killer robots and smart contracts. However, before deploying any of these systems, it is necessary to test them in their intended operational domains and for reliability at various levels of autonomy. For example, killer robots (or autonomous weapons) can operate in five different domains: at sea, on land, in the air, from outer space and across cyberspace. Meanwhile, the International Organization for Standardization identifies six levels of autonomy, ranging from zero (human-operated) to five (fully autonomous). Understanding the level of interaction with humans in each of these domains, at different levels of autonomy, plays an important role in deciding whether and how to deploy such systems. Schleiger and Hajkowicz (2019) suggest that when using artificial intelligence, it is important to test outcomes in different scenarios and to identify where human decision-making should override any automated processes.
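As a rough illustration of how those levels might inform a deployment decision, consider the sketch below. The level names and the override rule are my own assumptions; they are not the wording of the standard or of Schleiger and Hajkowicz (2019).

    from enum import IntEnum

    # Illustrative encoding of six levels of autonomy, from zero (human-operated)
    # to five (fully autonomous). The intermediate names are assumptions.
    class AutonomyLevel(IntEnum):
        HUMAN_OPERATED = 0
        ASSISTED = 1
        PARTIAL = 2
        CONDITIONAL = 3
        HIGH = 4
        FULLY_AUTONOMOUS = 5

    def requires_human_override(level: AutonomyLevel, significant_consequences: bool) -> bool:
        """One possible (assumed) rule: keep a human in the loop whenever a decision
        has significant consequences and the system is not directly human-operated."""
        return significant_consequences and level > AutonomyLevel.HUMAN_OPERATED

    # Example: a conditionally autonomous system making a significant decision
    # would be flagged for human review under this rule.
    assert requires_human_override(AutonomyLevel.CONDITIONAL, significant_consequences=True)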

Lawyers play a crucial role when it comes to navigating the use of algorithms, AI and machine learning in business, government, national security, commerce, financial transactions, telecommunications, social relationships, and the practice of law itself. A lawyer’s armoury has always included the ability to read and analyse complex documents and to cross-examine doctors, scientists, auditors, accountants and other (non-legal) experts. While lawyers do not necessarily need to learn how to code, they do need to understand the mechanics of new technologies in order to assess the risks and benefits of their use in different contexts. You do not need to know how to dance in order to identify flaws in an algorithm.

Thank you to guest author Dr Philippa Ryan for contributing this article.

All information on this site is of a general nature only and is not intended to be relied upon as, nor to be a substitute for, specific legal professional advice. No responsibility for the loss occasioned to any person acting on or refraining from action as a result of any material published can be accepted.
