AI and the ethics conundrum

What are some of the necessary checks and balances to make AI a force for good?

By Michael Shepherd, distinguished engineer at Dell Technologies

“Google it” is synonymous with “ask someone who knows” and, if you’re like me, it’s your first stop for just about any question. But have you ever paused to wonder if the answers on that first page (the one I never really get past) are the most accurate ones? Similarly, is your view of the world becoming narrower the more your social media algorithms offer up content that’s based on how you responded to clickbait articles?

While I believe we are uniquely independent decision-makers, without the right checks and balances, technology could erode our diversity of thought and powers of discernment by treating us as a homogeneous group and serving up only curated content.

For some, this is a small price to pay for access to an infinite vault of information that resides beyond the algorithms. But technology companies also need to consider the potential short- and long-term ethical scenarios and impacts. We need to address this question: How can technology serve humanity rather than replace it or constrain it?

This inquiry should start with understanding the risks of artificial intelligence (AI) and then move to how technology can be designed to prevent or sidestep those risks. Whatever the necessary action, AI requires ongoing, thoughtful interventions and mitigations to ensure it remains a force for good. As we progress towards technologies that increasingly embed “intelligence” at the edge, the industry is delegating ever more complex decision-making to AI models that must respond to nuance and context. In doing so, we need to ensure the AI is ethical and incorruptible, and that people can understand how its decisions are made.

Enigmatic virtual assistants

It’s well established that intelligent technology will change our lives through speed (contextualizing enormous amounts of data far faster than any human could) and intelligence (recognizing trends and patterns in disparate data sets). In a Dell Technologies study, 43% of business leaders said they look forward to a time (circa 2030) when smart machines become admins in their lives, connecting their needs to highly personalized goods and services.

However, a similar proportion are also concerned about the wider implications of AI becoming more pervasive in their lives: 45% agree that computers must be able to distinguish between good and bad commands, and 50% argue that clear lines of responsibility and protocols will need to be established in the event that autonomous machines fail.

In our 2019 study with the Institute for the Future, we acknowledged that AI algorithms are already helping organizations decide whom to hire, whom to loan money to, and what appears in people’s newsfeeds. While these applications can be genuinely useful, we need to consider the weight being placed on fallible technology created by fallible and potentially biased humans. AI algorithms aren’t developed in a vacuum. They’re programmed by people with a subjective view of the world (informed by their upbringing, social status, belief system, and so on). Sometimes a developer may purposely or unconsciously bake their own preconceptions into the AI code. Meanwhile, other biases may be the result of technical limitations, skewed data, or simply flawed design.
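As a toy illustration of how skewed data alone can tilt a model, consider the following hypothetical Python sketch (using scikit-learn and entirely synthetic data, not any real hiring system): a classifier trained on historical decisions that held one group to a harsher bar simply learns to repeat that pattern.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.uniform(0, 1, n)   # the only legitimate signal
group = rng.integers(0, 2, n)  # an attribute that should be irrelevant

# Historical decisions: group 1 was held to a harsher bar (0.7 vs 0.5).
hired = (skill > np.where(group == 1, 0.7, 0.5)).astype(int)

# Train on the biased history.
model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Identical skill, different group -> different predicted odds.
for g in (0, 1):
    p = model.predict_proba([[0.6, g]])[0, 1]
    print(f"group {g}: P(hired) = {p:.2f}")

Even though the group attribute carries no legitimate signal, the model assigns the same candidate different odds depending on group membership. No one had to write a biased rule; the skewed history was enough.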

These subtleties are exacerbated by the enigmatic nature of the AI black box. Many of the algorithms we live by are “trade secrets” that companies count on for their competitive advantage. While the inputs and outputs are understood, the internal workings are often a mystery to users and sometimes even to the developers. This lack of transparency may aid cybercrime: it’s easier to hide deceptive, coercive, and malicious practices behind inscrutable code. It also obscures the reasons why certain decisions are made.

These issues may seem trivial to some who assume they won’t affect their business. Left unresolved, however, they could become sources of scandal and insurmountable barriers to wider adoption and human progress. Thankfully, over the last few years “explainable AI” has become a key focal area and is bringing transparency to black boxes by incorporating human reason into intelligent systems. It’s important that the industry doesn’t lose momentum here, as there are still hurdles to clear.
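To make this concrete, here is a minimal sketch of one widely used explainability technique, permutation importance, which probes a black-box model from the outside by shuffling each input and measuring how much the predictions degrade. This is a hypothetical Python example using scikit-learn; the synthetic data stands in for real loan or hiring records.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real decision dataset (e.g., loan approvals).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": accurate, but its internals are hard to read directly.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")

Techniques like this don’t open the box, but they do reveal which inputs a decision actually rests on, which is a starting point for accountability.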

Building trust with empathetic AI

Once the bias question is resolved, we need to win back trust in AI. Given that human decision-making is so often grounded in empathy, AI may need to replicate these emotive connections. People generally tackle problem-solving by considering the potential impact on fellow human beings (i.e., how will a particular decision affect another person?).

To integrate ethical considerations into a machine learning model, we need to ensure that the AI makes decisions informed by learned empathy and experience data. To do so, we must feed the AI extra stimulus while screening out unintended bias. That might include inputting biofeedback so the AI can recognize the affected person’s stress level, drawing on their heart rate, respiration rate, and/or perspiration. These measurements then inform the decision-making model, so that the AI learns to make humane decisions that engender trust rather than endanger it.
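As a rough sketch of what that might look like (purely hypothetical: the signal names, baselines, and weights below are illustrative assumptions, not a description of any real system), biofeedback readings could be collapsed into a stress estimate and appended to the features a decision model already consumes.

from dataclasses import dataclass

@dataclass
class Biofeedback:
    heart_rate_bpm: float      # beats per minute
    respiration_rate: float    # breaths per minute
    perspiration_level: float  # normalized 0..1 skin-conductance proxy

def stress_score(bio: Biofeedback) -> float:
    """Collapse raw biofeedback into a single 0..1 stress estimate.
    The baselines and weights are placeholders; a real system would
    learn them from data rather than hard-code them."""
    hr = min(max((bio.heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    rr = min(max((bio.respiration_rate - 12.0) / 12.0, 0.0), 1.0)
    return 0.4 * hr + 0.3 * rr + 0.3 * bio.perspiration_level

def decision_features(base_features: list, bio: Biofeedback) -> list:
    """Append the empathy signal to the features the model already uses."""
    return base_features + [stress_score(bio)]

# Example: the same case, now carrying a measured stress signal.
bio = Biofeedback(heart_rate_bpm=95, respiration_rate=20,
                  perspiration_level=0.6)
print(decision_features([0.7, 0.2], bio))

Any such signal would, of course, need the same bias screening discussed earlier before it informs real decisions.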

Civilizing the “most important technology on the planet”

AI has been described as the most important technology under development on the planet today. Done right, it has the potential to remove unnecessary friction from our lives and improve our decision-making. That is certainly the ultimate goal. However, without proper oversight, it will be difficult to ensure that every AI is safe and ethical.

Right now, there are many distractions and risks, and too few organizations are engaged in the difficult job of de-risking AI against potential misuse while they work to improve efficiency and profit. Many are undoubtedly intimidated by the size and complexity of the task (there aren’t any quick fixes), and explainable AI is still in its infancy. De-risking AI will take time and painstaking, continual effort. But when it comes to a technology that drives human progress, the effort is worth it.