How Can a Morally Bankrupt Nation Be a Leader in Ethical AI?

Yesterday, The Verge reported that the British government is eager to position itself as a leader in the artificial intelligence space.

Great Britain might be the world’s third-largest investor in AI technologies, but it cannot match the level of investment currently seen in China and the United States, let alone outspend those countries. So what can Britain do to secure its place at the table?

Become a leader in the ethical applications of AI, apparently.

That’s the grand idea that’s being put forward by a report published earlier this week by the House of Lords, titled “AI in the UK: Ready, Willing, and Able?”

Its hilariously tone-deaf, self-deprecating title notwithstanding, the report reveals the profound disconnect between how Britain sees itself in a changing global economy and the realities of Britain’s draconian approach to digital policy.

The report explores several important points, including AI’s less-than-favorable media coverage. But rather than acknowledging that AI and machine learning are inherently disruptive and that many public fears are justified — particularly within the context of existing economic challenges — the report chooses to paint the majority of the British public as ignorant. While it’s true that most people probably couldn’t tell you how machine learning algorithms work, plenty of people are rightly concerned with how the implementation of AI in the workplace will threaten their already precarious livelihoods.

“Many AI researchers and witnesses connected with AI development told us that the public have an unduly negative view of AI and its implications, which in their view had largely been created by Hollywood depictions and sensationalist, inaccurate media reporting.”

Then there’s the not-inconsequential matter of how Britain proposes to actually enforce whatever ethical guidelines the Tories come up with. Had Britain chosen to remain in the EU, the country could have looked forward to more robust data protection laws as set forth in the GDPR. Now, the idea of Britain setting any real agenda regarding ethical applications of AI is as laughable as it is disheartening.

The report’s proposed “AI Code,” a set of pathetically broad guidelines that many AI researchers already follow, reveals precisely how poorly equipped Great Britain is to even follow its own recommendations:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

There’s little arguing with the first point, even if expecting tech conglomerates to act in anything but their own interest seems almost hopelessly naive.

The second point is where things start to become problematic, however. As I’m sure even a few peers in the House of Lords realize, AI constructs can only act within the parameters we set for them. In this regard, any expectation that AI systems will operate on principles of fairness depends entirely on the definition of fairness given to them by their human overseers. It’s the same problem with the flawed assumption that technology is inherently neutral. As sociologist Donna Haraway noted in an interview with WIRED in 1997:

“Technology is not neutral. We’re inside of what we make, and it’s inside of us. We’re living in a world of connections — and it matters which ones get made and unmade.”

It’s the report’s third point, however, that should infuriate any technologist who’s been paying attention: “Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.”

Again, there should be no argument here — yet Theresa May’s Home Office has demonstrated that it is willing to continually and systematically violate the human rights of individuals, families, and communities, never mind their data rights. It has blatantly and repeatedly demonstrated its contempt for the rights of the vulnerable: hacking the devices of asylum seekers and refugees, seeking sweeping new surveillance powers under the guise of “immigration control,” illegally sharing the personal data of millions of British citizens with foreign intelligence agencies, and exempting members of parliament from the vast data collection methods authorized by the Investigatory Powers Act — the sole amendment to the most comprehensive domestic surveillance program ever conceived.

In light of the morally reprehensible actions the British government has taken against its own citizens, it’s laughable that the U.K. could ever assert itself as the moral arbiter of anything, let alone something as crucial and far-reaching as a governing code of ethics for artificial intelligence.

Of course, the real motivation for the report’s publication was as transparently obvious as Theresa May’s disgust for the poor — a thinly veiled attempt at economic relevance. With a disastrous, chaotic Brexit looming (the true impact of which the government refuses to disclose), May’s Conservative government has been exploring a range of economic measures in an attempt to mitigate the inevitable financial ruin that will accompany Britain’s withdrawal from the EU — chlorinated chicken imports, arms sales to brutal Middle Eastern petrostates, and now a role as watchdog of ethical propriety in artificial intelligence.

When technologists and scientists speak of the potential dangers of AI, it’s often framed in the context of popular culture and science fiction. Naysayers warn us of autonomous weapons systems that could launch preemptive strikes against human targets. However, while we cannot rule out the risks of placing control of increasingly powerful weapon systems into the hands of algorithms and neural networks, it’s the human potential for abuse and discrimination that should worry us.

“AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

The quote above, from late physicist Stephen Hawking, has been used widely to support arguments for a more cautious approach to AI. However, it’s Hawking’s observation that AI could bring “new ways for the few to oppress the many” that concerns me. Today’s society is already fractured to the point of irreparable damage. In light of the resurgence of authoritarianism around the world, and Theresa May’s apparent eagerness to serve her new master in the United States, it is abundantly clear that Britain cannot be entrusted with such a grave responsibility.

The British government refuses to recognize the humanity of refugees, immigrants, families, and even its own citizens. It has treated the weak and the vulnerable with callous indifference at best, and vicious cruelty at worst. We cannot — and should not — entertain the idea of allowing the British government a seat at this particular table.

The U.K. has forfeited any right to tell the rest of the world what to do. Given that Britain’s feeble attempt to ensure its economic relevance in a rapidly changing global economy will inevitably be ignored by China and the U.S., it seems only fitting that this report be ignored as well.