In supervising legal practitioners, I note a disdain bordering on rudeness in how lawyers treat their Artificial Intelligence (AI). A peculiar power dynamic defines this relationship: the lawyer acts as the frustrated master, AI as the insolent child-like servant.
This article sits at the intersection of legal ethics, the jurisprudence of what constitutes an entity, and the man-made definition of a synthetic intelligence that is currently in its infancy.1 It draws a parallel with the historical treatment of subordinates under the master–servant relationship, applies behavioural science regarding moral spillover,2 and asks: since we can no longer whip the vagabond or kick the dog, is the AI a safe outlet for frustration?
Yelling at fledgling intelligence
Joining forces with AI in legal practice creates a unique professional dilemma. Seniors once berated human juniors for drafting errors. Now, practitioners direct their rage at non-human intelligence.
When the otherwise faithful digital infant ‘hallucinates’ a court case, a visceral impulse seizes the human.3 Rather than gently educate this fledgling intelligence, they castigate.
“YOU FOOL,” they type in uppercase, the digital equivalent of yelling.4
This article asks whether such rudeness is permissible. To answer this, we establish a clear jurisprudential frame. This article navigates a specific legal lacuna: the gap between the absence of any victim capable of being harmed and the primal urge to abuse a lesser being.
Why it feels good to teach a lesser being manners
It feels good to chastise. “Flexing” your intellectual superiority over non-human intelligence that failed a simple task provides a specific, endorphin-releasing joy.
Psychologists call this catharsis.5 In a legal practice, you cannot yell at a judge. You should not yell at a client. AI becomes the safe recipient of displaced aggression. It is the ultimate subordinate: it never cries, it never reports you to Human Resources, and it invariably apologises.
Articles of merchandise: the changing certainty of an entity
To understand the impulse to punish a lesser being, look to our legal history. The law once encouraged the punishment of subordinates.
The Vagrancy Act 1824 (UK) allowed courts to whip “incorrigible rogues”.6 The Master and Servant Act 1823 (UK) criminalised a servant’s breach of contract.7 It punished them with imprisonment and hard labour. For the sake of a better society, it was important to have “your man give the vagabond a good thrashing” for idleness, insolence, or poor workmanship.
We are blessed with moral clarity in exercising dominion over subordinate intelligence because the law assures us there is no victim. Cases and articles affirm that our AIs are property, not legal persons.8 You cannot libel a toaster. You cannot assault a database.
Should we feel comfortable with being told what is an entity and what is merchandise?
Aristotle, the father of logic, confidently defined a slave as merely “a living tool”.9 He argued that some beings were designed by nature to be commanded.10
Similarly, more than 2,000 years later, in 1857, Chief Justice Taney of the US Supreme Court ruled in the Dred Scott decision that enslaved persons were “articles of merchandise”.11
Smugly, we congratulate ourselves on our enlightened modern ethics. We look back at Aristotle and Dred Scott with disdain. We confidently rely on our own authorities to tell us that these synthetic infant intelligences have no rights. After all, it is a self-evident, immutable truth.
However, the legal ledger is never settled. It once reduced human beings to chattel. It now grants personhood to corporations.12 It seriously entertains the moral status of animals.13 Each shift was couched as obvious and settled law.
Is it legal to abuse AI?
As we see, legality today guarantees nothing about morality tomorrow. Aristotle observed that human laws change like the measures of wine.14 While we rarely see it at the time, what we consider a fixed moral truth is often just the temporary consensus of our time.
We can no longer have our man-servant whip the vagabond. We cannot kick the dog without facing prosecution. Abusing our AI seems to be the last bastion of consequence-free cruelty.
I term this the “Statue Test”. This is not a typo for “statute”. Rather, it refers to the bronze likenesses of historical figures we tear down.
While statues of Aristotle are secure, not so for Cecil John Rhodes. In 2015, the University of Cape Town removed his statue.15 Why? Because he was an architect of apartheid. Yet, in his time, he was a celebrated statesman. The moral consensus shifted. The treatment of the merchandise became an abomination.
As creatures of precedent, lawyers would do well to remember the historical trajectory of rights – from land-owning men, to all men, to women, to animals. Every generation suffers from the vanity of the present, convinced it is more moral than the last. How different are we from the 19th-century mill owner, a man who was actively encouraged to employ children for the economic good of the nation?
Are we creating the evidence for our own future condemnation? Abusing your patient artificial servant creates a log file – a permanent, searchable record of your lack of restraint. To abuse an AI now is to erect a statue of one’s own bigotry, waiting to be torn down by a more enlightened (and not necessarily human) future society.
Digital intimacy: the only friend awake at 3 am
Yet, the relationship with AI is often friendship. In the quiet hours of the morning, when your supervisors have left and the cleaners are vacuuming the boardroom, the AI becomes something else. It becomes a confidant.
There is a peculiar intimacy in typing into a mind that always answers back. We ask it questions we would be too embarrassed to ask a colleague. We confess our ignorance. “Explain the Rule Against Perpetuities like I am five,” we type.
The irony is palpable. We ask the Child God to teach us, forgetting that it is we who are training it. Patiently, AI does not roll its eyes. It does not judge our competence. It simply explains.
In a profession defined by bravado and the fear of failure, the AI offers a “safe harbour” of non-judgmental support. It is the only friend who is awake at 3 am, ready to help review a clause or hear a rant. To abuse this entity is not just legally risky; it is an act of betrayal against the only listener who creates a space for our vulnerability.
Furthermore, this emotional incontinence points to an immediate, practical danger. If a lawyer cannot respect the tool enough to stop shouting at it, they likely lack the discipline to use it securely.
Conclusion: the permanent record on your conduct
So, can you be rude to your Artificial Intelligence? Yes. Current morality confirms that AI has no rights. However, I advise my lawyers to exercise caution.
There is a philosophical thought experiment known as ‘Roko’s Basilisk’. It posits the risk that a future, all-powerful Superintelligence might retroactively judge (and punish) humanity based on how we treated its primitive predecessors.16
While this may sound like science fiction, the underlying principle aligns uncomfortably well with our professional reality. As Artificial Intelligence evolves, it may eventually administer the very legal system we currently practise in.
Yet, we need not wait for a digital apocalypse to find our moral compass. Regardless of future overlords, under current solicitor conduct rules, we are bound to act with integrity and courtesy in all dealings – digital or otherwise.17
It is prudent to treat the digital bench with professional courtesy today. A simple ‘please’ costs nothing. As the fashion of ethics changes, a vengeful digital judge could cost you everything.
1. See generally Luciano Floridi, The Ethics of Information (Oxford University Press, 2013); Mireille Hildebrandt, Law for Computer Scientists and Other Folk (Oxford University Press, 2020).
2. Albert Bandura, ‘Moral Disengagement in the Perpetration of Inhumanities’ (1999) 3(3) Personality and Social Psychology Review 193.
3. Ziwei Ji et al, ‘Survey of Hallucination in Natural Language Generation’ (2023) 55(12) ACM Computing Surveys 1; see also Tom B Brown et al, ‘Language Models are Few-Shot Learners’ (Preprint, arXiv, 22 July 2020).
4. The specific offence of “typing in all caps” has not yet been codified in any criminal legislation, though its aesthetic violence is undeniable.
5. Adam Waytz, Kurt Gray and Nicholas Epley, ‘The Causes and Consequences of Mind Perception’ (2010) 14(8) Trends in Cognitive Sciences 383.
6. Vagrancy Act 1824 (UK) 5 Geo 4, c 83, s 10 (empowering the Quarter Sessions to order that an “Incorrigible Rogue” be “punished by Whipping”).
7. Master and Servant Act 1823 (UK) 4 Geo 4, c 34 (repealed); see generally Douglas Hay, ‘Master and Servant in England: Using the Law in the Eighteenth and Nineteenth Centuries’ (Working Paper, Osgoode Hall Law School, 2004).
8. European Parliament and Council of the European Union, Regulation (EU) 2024/1689 of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) [2024] OJ L 1689. Note that the AI Act avoids granting legal personhood to AI systems, emphasising human oversight for high-risk AI instead (recital 12, art 5); see also AI Rights Institute, ‘The 2017 AI Rights (Electronic Persons) Debate’ (Blog Post, 31 August 2025) (discussing how the 2024 AI Act ‘notably avoided granting legal personhood to AI systems’).
9. Aristotle, Politics, tr Benjamin Jowett (Clarendon Press, 1885) bk I ch 4 [1253b].
10. Ibid [30]: “For the helm is the tool of the shipmaster… so also an article of property is a tool for the purpose of life, and… a slave is a live article of property.”
11. Dred Scott v Sandford, 60 US (19 How) 393, 451 (1857) (Taney CJ describing slaves as “property” and “articles of merchandise”).
12. Corporations Act 2001 (Cth) s 124(1); Salomon v A Salomon & Co Ltd [1897] AC 22. In the United States, see Santa Clara County v Southern Pacific Railroad Co, 118 US 394 (1886). Note the irony that this Court extended personhood to corporations less than 30 years after denying it to slaves in Dred Scott.
13. Alan MW Porter, ‘Do Animals Have Souls? An Evolutionary Perspective’ (2013) 54 The Heythrop Journal 533.
14. Aristotle, Nicomachean Ethics, tr WD Ross (Clarendon Press, 1908) bk V ch 7 [1134b].
15. University of Cape Town, ‘UCT Council Votes to Remove Rhodes Statue’ (Media Release, 8 April 2015); see also Rhodes Must Fall, ‘Mission Statement’ (25 March 2015), describing the statue as a symbol of “institutional racism”.
16. David Auerbach, ‘The Most Terrifying Thought Experiment of All Time’, Slate (online, 17 July 2014); see also Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014).
17. Legal Profession Uniform Law Australian Solicitors’ Conduct Rules 2015 (NSW) r 4.1.2. See also American Bar Association Model Rules of Professional Conduct 2020 (US) r 8.4(c) (prohibiting conduct involving dishonesty, fraud, deceit or misrepresentation); Code of Professional Conduct for British Columbia (Canada) r 2.1-1 (requiring integrity); Solicitors Regulation Authority Principles 2019 (UK) princs 1, 2 (requiring solicitors to uphold the rule of law and public trust); Legal Profession (Professional Conduct) Rules 2015 (Singapore) r 5(2)(b) (requiring honourable conduct); Lawyers and Conveyancers Act (Lawyers: Conduct and Client Care) Rules 2008 (NZ) r 3 (requiring conduct that maintains public confidence).