Ask an AI: what makes lawyers “professional”?
"By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones."
Daniel Kahneman
Transforming our professions
Only 15 years ago, the possibility of replacing professionals in medical diagnosis or legal representation was science fiction. Not anymore.
Machine learning is particularly well suited to automating those tasks that rely heavily on experience-based "tacit knowledge". This will drastically change the way we practise law, among other professions. And that might be a good thing.
Computer systems have the potential to augment our ability to deliver affordable, quality professional services. Do they also have the potential to leave you out of a job? Absolutely.
So far the debate over the desirability of ultimately replacing particular professions with computer systems has mostly been framed in straightforward terms: if automation can maximise the affordability, accountability and quality of our services, we should not allow short-sighted (or self-interested) objections to hamper its deployment.
Yes, whatever professional work is left may become duller as a result (live with it). No, we do not necessarily need to preserve face-to-face interaction to deliver quality professional services that deserve the public's trust. Is there any difference, in this respect, between the services provided by mountain guides, say, and professionals? If we let this outcome-focused logic run its course, there soon won't be.
Drawing a line: professional responsibility as a constraint on wholesale AI replacement
Thinking long and hard about what, if anything, distinguishes us as "professionals" from other expert service providers is not something that we lawyers (or, for that matter, doctors or even philosophers) tend to be very keen on.
This is in part because the notion of professionalism is historically entangled with rather vague notions of "public interest" (who wants to argue that the services of mountain guides are not in the public interest?), or with social engineering ambitions that are hardly palatable today.
Most people will intuitively associate a professional's particular status with a concomitant responsibility, but the grounds of that responsibility remain elusive. I argue that the specific responsibility of professionals lies in the distinct nature of the lay-professional relationship.
At the heart of this relationship is a vulnerability that is different in kind from the one at play when our life is at stake on the side of a mountain. The mountain guide's role, after all, does not affect the development of those interests or concerns that are closest to our sense of self.
In most cases, the role of the professional does: whether we are struggling to preserve our health or our social standing and recognition (which divorce, sudden poverty or prosecution can all endanger), our sense of owning the way we project ourselves, both socially and physically, is typically weakened.
Because, and to the extent that, in our society, educators, bankers, lawyers and doctors are all in a position to significantly alter our sense of self, they are endowed with a particular type of responsibility. That responsibility simply cannot be met by a computer system. Hence a line needs to be drawn between those occupations that may eventually lend themselves to wholesale AI replacement on consequentialist grounds and those that should not.
Now let's imagine I've convinced you: as a professional body, lawyers successfully constrain the deployment of professional AI applications, so that computers become our indispensable partners rather than our replacements. To play that role, they'll have to be able to take into account the whole range of moral values and concerns that permeate professional practice. How do they do that? Or, more precisely, how do we do that?
Computer-enabled moral holidays? Beware what you wish for
Computer scientists talk of the so-called value-alignment problem, and they do so in a way that worries me, because they tend to think of moral values as given: provided we can somehow identify them, all we have to do is include these values as constraints within the operation of the system.
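To see why that framing worries me, consider a deliberately simplistic sketch of what "values as given constraints" looks like in practice. This is purely illustrative: the rule names, thresholds and the Python code itself are my invention, not a description of any real system.

```python
# Purely illustrative sketch of the "values as fixed constraints" framing.
# All rule names and thresholds are invented for the sake of the example.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    discloses_client_data: bool
    risk_of_harm: float  # 0.0 (none) to 1.0 (severe)

# Moral values treated as a static, hard-coded list of constraints,
# fixed at design time and identical for every case that follows.
STATIC_CONSTRAINTS = [
    ("confidentiality", lambda a: not a.discloses_client_data),
    ("non_maleficence", lambda a: a.risk_of_harm < 0.3),
]

def is_permissible(action: ProposedAction) -> bool:
    """Approve an action only if it satisfies every hard-coded constraint."""
    return all(check(action) for _, check in STATIC_CONSTRAINTS)

advice = ProposedAction("share case file with third party", True, 0.1)
print(is_permissible(advice))  # False: breaches the 'confidentiality' rule
```

Notice that everything morally interesting (deciding which values count, where the thresholds sit, and how either might be revised) happens before a single line of this code runs, and is frozen once it does.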
This assumption is both naïve and dangerous. It is naïve because even in the most harmonious societies, values will always be the subject of controversy and disagreement. It is also dangerous, because ethics cannot but be a work in progress.
If we start thinking of moral values as static, lending themselves to neat inclusion in systems designed to simplify our practical reasoning, the danger is that we'll not only stop being the authors of those values but also stop being capable of "ethical effort": the critical engagement that is at the root of the messy but nevertheless precious value system we share today.
My worry is that computers may become so very good at simplifying our practical reasoning that we may find ourselves in never-ending 'moral holidays'. These might look attractive at first, until we find ourselves incapable of mobilising atrophied moral muscles.
You don't need to learn to code to contribute to design choices
Our quest to develop artificial intelligence has already taught us much about our own, eminently fallible intelligence. Now that AI applications are poised to revolutionise the way we professionals operate, we stand to learn something important both about the nature of our work and about the nature of our responsibility as professionals.
The latter could be bolstered (rather than hampered) by the deployment of professional AI on one condition: that we actively engage as a profession with the strategic choices that are being made today, both in terms of policy and in terms of system design.
For a fuller version of this argument that Dr Delacroix gave at our London Tech Week event, read Drawing a Non-Consequentialist Line: Augmenting v. Replacing the Professions with Computer Systems.
Dr Sylvie Delacroix was one of four speakers at our free 2017 London Tech Week event: Does your machine mind? Ethics and potential bias in the law of algorithms.