
QLS supports inquiry on automated decision-making and administrative law

Queensland Law Society has backed proposals for an inquiry into the use of automated decision-making in administrative law.

QLS President Luke Murphy, in a recent submission to the Australian Law Reform Commission (ALRC), said the Society supported the need for an inquiry at a time when the use of automated decision-making and artificial intelligence (AI) throughout the world had proven problematic.

In a five-page submission to ALRC President Justice Sarah Derrington, Mr Murphy said: “Overall, Queensland Law Society supports an inquiry into the use of automated decision-making in administrative law.

“Such an inquiry should investigate the discussion points already developed by the ALRC. However, our members have also raised the importance of including a further point for consideration addressing the transparency of automated decisions that result in discrimination or bias due to the datasets used to train this technology.

“When automated decision-making and AI are used in administrative law, it is critical that there is sufficient transparency and accountability in the process, so that decision-makers can still comply with their obligations to deliver statements of reasons that enable proper scrutiny of decisions.

“QLS envisages that reforms will be required in this area and suggests that the proposed reforms should entrench an ethical framework, recognised privacy…and governance principles.”


The submission, compiled by members of the QLS Occupational Law, Privacy and Data Law, and Human Rights and Public Law Committees, was in response to an invitation for feedback from the ALRC.

Mr Murphy said automated decision-making and AI technology were currently informed by machine learning, in which algorithms learn how best to reproduce an existing dataset for categorisation and prediction.

“Because of the ingrained human biases that persist in our society, and therefore in these datasets, automated decision-making technologies incorporate those biases into what they are programmed to recognise as a ‘success’,” he said.

“For example, these technologies have contributed to racial biases, criminal injustice, gender biases and socio-economic biases.

“Research reveals that the increasing pervasiveness of automated decision-making makes the public law challenge particularly acute, since the full bases for algorithmic decisions are rarely available to affected individuals.”
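To illustrate the mechanism Mr Murphy describes, the minimal sketch below (in Python, using purely hypothetical approval figures) shows how a model whose only notion of ‘success’ is agreement with past decisions will faithfully reproduce any disparity baked into its training data:

```python
# A minimal sketch with hypothetical data: a model that learns only to
# reproduce historical decisions also reproduces the bias in those
# decisions. "group" stands in for any protected attribute present in,
# or correlated with, the training features.
from collections import defaultdict

# Hypothetical historical outcomes: group A was approved far more
# often than group B for otherwise similar applicants.
history = [("A", 1)] * 90 + [("A", 0)] * 10 + \
          [("B", 1)] * 30 + [("B", 0)] * 70

# "Training": estimate the approval rate per group straight from the
# data -- success is defined as agreement with past labels.
counts = defaultdict(lambda: [0, 0])   # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def predict(group):
    """Approve whenever past approvals for this group exceed 50%."""
    approvals, total = counts[group]
    return 1 if approvals / total > 0.5 else 0

# The learned rule carries the historical disparity forward:
print(predict("A"))  # 1 -- group A applicants are approved
print(predict("B"))  # 0 -- group B applicants are refused
```

The full basis for each refusal (a historical approval rate for the applicant’s group) is invisible to the affected individual unless the training data and decision rule are disclosed, which is the transparency problem the submission highlights.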

The Society also identified a current problem surrounding governance: the lack of people able to police both their own conduct and that of others.


“In a world of machine learning and automated decision-making, leaders of public and private sector organisations have a great need for new workable rules on how to allocate individual responsibilities,” Mr Murphy said.

“There are established legal principles applicable to public sector decision makers, as well as private sector boards, limiting delegations of supervisory obligations.

“But where are the rules informing them of how to require fairness, ethics, accountability and transparency from machines which teach themselves to maximise outcomes required by those who control the code and the data to which it is applied?

“These are profound governance issues which QLS requests be part of the ALRC’s review and recommendations.”
