
Can an Ethical Framework be Designed by a Computer?

AI is taking over many repeatable human tasks, thanks to its ability to learn from large amounts of data. Are ethical frameworks next?

Source: screen capture from https://www.youtube.com/watch?v=lXUQ-DdSDoE&t=1s (Google I/O demo)

It's a challenge today to have a conversation about technology without invoking big data, machine learning (ML), or artificial intelligence (AI). The pace of advancement is incredibly fast, and the applications are broad, reaching well beyond those typically associated with mathematics. The future, and the true value of AI, lies in computers assuming roles once thought reserved for humans.

We can teach a computer right from wrong, but we'd better do it quickly

We're at the stage now where we've moved from 'there's no way a computer could do that' to 'well, if someone has the idea, they just need to execute it.' Computers (including algorithms, ML, and AI) now let us move from idea to hypothesis much more quickly. That leads to a faster time to finding the prototype (of whatever it is) that works, all while processing incredible amounts of readily accessible data. The next big advancement will be to move from huge, isolated silos of data to farms of interconnected data. Before we talk about computers designing ethical frameworks, it's helpful to briefly review some recent advancements in AI.

From 0s and 1s to “To Be and Not To Be”

It's clear that machine learning and artificial intelligence can aid in scaling repeatable tasks steeped in mathematics. After all, that's the core of what computers and programs are: binary decisions, off or on, zero or one. Machine learning's strength and weakness is that it's only as good as what goes into it (colloquially known as 'garbage in, garbage out'). We've advanced beyond that, for the most part.

If a computer evolves its learning and thinking from a corpus of very good work, then a better end result is more likely. This goes beyond sophisticated mathematics. In March 2016, a Japanese AI program wrote a short novel, and it almost won a literary prize. Key to that near-award? “Hitoshi Matsubara and his team at Future University Hakodate in Japan selected words and sentences, and set parameters for construction before letting the AI 'write' the novel autonomously.” Humans selected the prose on which the code modeled its writing.

A computer becoming a true virtual assistant

In May 2018, TechInsider posted a short video from Google's annual I/O conference. One of the many things revealed there was Google Duplex, an update coming to Google Home and Assistant. Duplex can use natural language processing (NLP) to make phone calls for you and talk to the person on the other end to schedule appointments and make reservations.

This is the proof of concept. This is code learning responses in real time, based on a live interaction with a human. With sophisticated AI, even more is possible. Instead of booking a hair salon appointment, imagine asking Siri, Alexa, Cortana, or Google Home to “renew my driver's license.”

The nuances of NLP, the variations from US state to US state needed to enable that request … the list is endless. We've seen that a virtual assistant can respond to a few queries and still book an appointment. We're not far off from AI helping us efficiently deliver government services, like renewing a driver's license by voice, or applying for a building permit for a home renovation project.
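To make that state-to-state variation concrete, here is a toy sketch of the intent-plus-jurisdiction problem hiding behind a request like “renew my driver's license.” It is not any vendor's actual implementation; every intent, state rule, and function name below is invented for illustration.

```python
# A toy sketch: map a spoken request to an intent, then to state-specific rules.
# The intents and state rules below are invented for illustration only.
STATE_RULES = {
    "CA": {"renew_license": "Eligible online if your last renewal was in person."},
    "NY": {"renew_license": "Requires a current eye test report on file."},
}

def handle_request(utterance: str, state: str) -> str:
    # Crude keyword matching stands in for real NLP intent detection.
    text = utterance.lower()
    if "renew" in text and "license" in text:
        rule = STATE_RULES.get(state, {}).get("renew_license")
        return rule or f"Renewal rules for {state} are not supported yet."
    return "Sorry, I didn't understand that request."

print(handle_request("Renew my driver's license", "CA"))
```

Multiply those two dictionary entries by fifty states and dozens of services, and the appeal of letting an AI learn the mapping, rather than hand-coding it, becomes obvious.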

These are becoming reality today as governments work to deliver more personal services through these technologies.

Is computer intelligence artificial? Or: teaching a (robot) dog new tricks

Reading, writing, arithmetic: check. On deck: teaching a computer morality. According to Dr. Vyacheslav Polonski, “Before giving machines a sense of morality, humans have to first define morality in a way computers can process. A difficult but not impossible task.”

Imagine our state of mind if we could create an app that only fed us good news. While that might be excellent for reducing our stress levels, surely we'd be missing out on a lot. There's immense subjectivity in who decides what a good story is. Our own definition of good is inherently factored into that, whether we like it (or know it) or not.

A machine cannot be taught what is fair unless the engineers designing the AI system have a precise conception of what fairness is.

Dr. Vyacheslav Polonski

The reality is that we already have. When the Facebook wall went away and Zuckerberg + co. introduced the concept of a newsfeed around 2011, it would have been overwhelming to show a Facebook member the unfiltered view of posts, likes, ads, games, and so on. So an algorithm was created to show only the items deemed most important to a particular Facebook member.
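As a purely illustrative sketch (not Facebook's actual algorithm), consider how few lines it takes to encode a definition of 'important.' The features, weights, and function names below are hypothetical; the point is that somebody had to choose them.

```python
# A minimal sketch of the kind of relevance scoring a newsfeed might use.
# Every feature and weight here is hypothetical, chosen only to illustrate
# that "importance" is whatever the engineers encode it to be.
from dataclasses import dataclass

@dataclass
class Post:
    author_affinity: float   # how often the viewer interacts with the author (0-1)
    engagement: float        # likes/comments/shares, normalized (0-1)
    age_hours: float         # how old the post is

def importance(post: Post) -> float:
    # Hand-picked weights: someone decided affinity matters twice as much as
    # engagement, and that older posts decay. That decision is where the ethics live.
    recency = 1.0 / (1.0 + post.age_hours / 24.0)
    return 2.0 * post.author_affinity + 1.0 * post.engagement + 0.5 * recency

def build_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    # Show only the top-scoring items; everything else is quietly filtered out.
    return sorted(posts, key=importance, reverse=True)[:limit]
```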

But what was deemed important? And by whom? Over time, the outrage grew. Add in the cries of 'fake news,' citing examples of stories told by (subjectively) less-than-truthful sources. In the age of rapid sharing and expiring online content, it's difficult to know what is real and what is fake. And it starts to raise doubts about who (or what) that arbiter is.

Soon, we could use AI to identify the truth of any given story. Part of that task becomes identifying not just a corpus of work, but a good corpus of work. And in the age of news, 'good' becomes 'truthful,' 'trustworthy,' or 'correct.' That arbiter of good plays a primary role here. In 11 or so years, intelligent machines will be able to outsmart human beings, according to futurist Ray Kurzweil. We will need to have firmly established what an unbiased corpus of work looks like for that training, and be able to put trust in the trust.

We need to be able to put trust in the trust.

Sudo, Make me a Privacy Policy

Yes, we can teach a computer morality, but we'd better do it quickly. If we can teach a computer to write near-award-winning prose, and we can teach a computer morality, then it should be able to write us a code of ethics. Soon, but not tomorrow. Here's something more tangible, though:

Envision that we need to update our privacy policy in the wake of May 25, 2018, the go-live date of the EU's General Data Protection Regulation (GDPR). We could write it on our own, or we could ask a computer to do so, drawing from several other good ones. What we're looking for is that prototype, because the likelihood that the AI produces a new GDPR privacy policy we are 100% comfortable with is very slim.

Instead, it can give us a very good first draft, building on the experience of others. In turn, it would let us, as humans, audit and edit the work of these rapid prototypes to then create a final product.
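A minimal sketch of that 'first draft, drawing from good examples' idea might look like the following. It simply retrieves, for each GDPR topic we need to cover, the closest-matching paragraph from a small corpus of policies we already trust; the topics and paragraphs shown are invented, and a real system would use far more sophisticated generation than retrieval alone.

```python
# A rough sketch of "draft a policy by drawing from several good ones":
# for each GDPR topic, pull the closest-matching paragraph from a corpus of
# trusted policies, then hand the stitched draft to a human to audit and edit.
# The topics and paragraphs below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

trusted_paragraphs = [
    "We collect only the personal data needed to provide the service...",
    "You may request erasure of your personal data at any time...",
    "Personal data is retained for no longer than twelve months...",
]

gdpr_topics = ["lawful basis for processing", "right to erasure", "data retention"]

vectorizer = TfidfVectorizer().fit(trusted_paragraphs + gdpr_topics)
paragraph_vecs = vectorizer.transform(trusted_paragraphs)

draft_sections = []
for topic in gdpr_topics:
    scores = cosine_similarity(vectorizer.transform([topic]), paragraph_vecs)[0]
    best = trusted_paragraphs[scores.argmax()]
    draft_sections.append(f"{topic.title()}\n{best}")

# A first draft only: a human still audits and edits before anything ships.
print("\n\n".join(draft_sections))
```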

Or imagine this scenario:

Countless cyberattacks happen every day at organizations big and small. Some of the attacks are the same as, or slight variations of, one another. This data could be collected and analyzed for things like origin, company size, threat vector, and so on, so that an AI could make pre-emptive recommendations to others, warning them of the likelihood of such an attack. Humans can't do this at scale. But an AI-assisted tool can help humans think ahead, reducing the risk of cybersecurity threats to organizations and individuals.
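A hedged sketch of how such a recommendation engine might start out is shown below. The incident records, field names, and outcomes are entirely made up, and a real system would train on thousands of shared incident reports rather than four.

```python
# A sketch of the cyberattack scenario: given records of past attacks (all
# fields invented), estimate how likely a given organization profile is to be
# breached, so a human analyst can prioritize warnings. Not a production model.
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

past_incidents = [
    {"origin": "phishing", "company_size": "small", "sector": "retail"},
    {"origin": "credential-stuffing", "company_size": "large", "sector": "finance"},
    {"origin": "phishing", "company_size": "small", "sector": "healthcare"},
    {"origin": "ransomware", "company_size": "medium", "sector": "manufacturing"},
]
was_breached = [1, 0, 1, 1]  # hypothetical outcomes for illustration

vec = DictVectorizer()
X = vec.fit_transform(past_incidents)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, was_breached)

# Estimate risk for a new organization profile; a human decides what to do with it.
profile = {"origin": "phishing", "company_size": "small", "sector": "finance"}
risk = model.predict_proba(vec.transform([profile]))[0][1]
print(f"Estimated breach likelihood: {risk:.0%}")
```

The output is a probability for a human analyst to weigh, not a decision the model makes on its own.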

Or, this scenario:

An investor wishes to make smarter choices about where to place her money. But she cannot review the vast amounts of available data in any meaningful amount of time, so she would make very few tweaks to her investment strategy. Instead, she can leverage AI to review publicly transcribed earnings calls and reports to spot word clusters, trends, avoidances, and so on. The AI leverages NLP and turns a formerly human-focused task into a cogent output, complete with a recommendation based on the corpus of text it was fed. Now she can make a wiser, lower-risk investment decision, based on a much greater consumption of relevant data.
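In the simplest possible terms, the word-cluster and avoidance idea can be sketched like this. The two one-line 'transcripts' are invented, and a real pipeline would work on full transcripts with proper NLP tooling rather than raw word counts.

```python
# A simplified sketch of the earnings-call idea: compare term frequencies across
# two quarters of (hypothetical) transcript text to surface trends and avoidances
# an investor might want to investigate further.
from collections import Counter
import re

def term_counts(transcript: str) -> Counter:
    # Lowercase and split into words; a stand-in for real tokenization.
    return Counter(re.findall(r"[a-z']+", transcript.lower()))

q1 = "growth in cloud revenue, strong cloud margins, supply chain stable"
q2 = "supply chain pressure, supply constraints, cautious guidance on margins"

prev, curr = term_counts(q1), term_counts(q2)
terms = set(prev) | set(curr)

# Words the company suddenly leans on (trends) or stops saying (avoidances).
trending = sorted(terms, key=lambda t: curr[t] - prev[t], reverse=True)[:5]
avoided = sorted(terms, key=lambda t: prev[t] - curr[t], reverse=True)[:5]
print("Trending terms:", trending)
print("Avoided terms:", avoided)
```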

And the great thing with this scenario: it's real.

The end game here isn't for AI to solely create a GDPR privacy policy, or to alert my organization of the next Cloudbleed, or to tell me where to invest. We can't have the algorithm or model sitting in a locked black box. We need to be able to trust the AI with assistance, not sole execution. We must always maintain some level of human oversight to ensure we're avoiding a garbage-in, garbage-out model, in favor of one where we as humans define what garbage is and isn't, and what good is and isn't, and use the AI to get us there faster.

AI as a faster way to MVP

We can use AI as an accelerant to a minimum viable product. We can more quickly create a prototype, then edit it to suit our needs. But it's merely a faster way, not the only way. Humans simply cannot be eliminated from this equation. There will still be a need for humans as knowledge workers. Even in something like automotive manufacturing, there's still some level of human involvement or oversight required. There needs to be confirmation to humans, by a human, that something is safe and trustworthy.

AI is here to save the day, but it is not a panacea and we can't put it in a black box. It is powerful and has helped humans accomplish much, and we've only scratched the surface. However, we need to be mindful of asking, 'For what value? For what reason?' We need to avoid 'just because,' as we'll be faced with a precarious situation if we don't.