By Stephanie Walden, Contributor
In 2016, location-based analytics startup Geofeedia found itself under scrutiny. The company, a social media intelligence platform, had been sharing the whereabouts of protesters with local police—a practice the American Civil Liberties Union (ACLU) condemned as reckless and as having the potential to facilitate racial profiling.
The rebuke was the start of a series of public backlash campaigns against companies over questionable data practices. In 2018, online outrage culminated in thousands of users clamoring to #DeleteFacebook in the wake of the Cambridge Analytica scandal. Amid the PR nightmare, Facebook went on to suffer the largest single-day loss of market value in the history of the U.S. stock market.
As the volume of collected data stretches to hundreds of zettabytes in the not-so-distant future, companies are learning—sometimes the hard way—that complex data systems and algorithms require equally intricate ethical considerations.
Today, companies are faced with the question: what is “right” and “wrong” when it comes to collecting, using, analyzing, and sharing data? And, whose job is it to make this call?
Meet the Data Ethicist
One emerging job title aims to clarify some of these ambiguities. Much as the term “data scientist” was relatively obscure 15 years ago, “data ethicist” has yet to hit zeitgeist status. But the role, designed to help companies consider the ethical implications of their practices, is slowly gaining traction. In some cases, it’s already taken off.
Reid Blackman, Ph.D., is the founder of Virtue Consultants, a firm of more than 60 ethicists spread across the globe. With expertise in data and AI, the Virtue network advises corporations in areas such as privacy, fairness (including bias), trust, and respect. “I want companies to do better,” Blackman explains, “and now, they have a financial incentive, because consumers and employees are demanding it.”
Virtue’s client services go beyond assessing simple cases of data misuse. For instance, the company is currently in talks with a facial-recognition startup to determine the ethical ramifications of each function of its API, including how data is stored, disclosed, and shared.
“Leaders should be educated about what the issues are so that they feel empowered to make decisions in a responsible way.”—Reid Blackman, Ph.D., Founder of Virtue Consultants
One of Virtue’s objectives is to instill confidence in corporate decision-makers who are responsible for data missteps. “Companies are intimidated by the topic. They think it’s going to require a whole bunch of technical chops—but making certain kinds of decisions doesn’t require deep technological knowledge,” Blackman says. “Leaders should be educated about what the issues are so that they feel empowered to make decisions in a responsible way.”
As organizations continue to make data central to their operations, experts like Blackman predict that such moral crossroads will only become more common. What's more, efforts to clearly define who answers these questions often fail to keep pace with the rate of innovation.
Lisa Spelman, vice president, Data Center Group, and general manager, Intel Xeon Processors and Data Center Marketing, cautions against assigning ethical oversight solely to the data science team. “A data scientist is a mathematician, a deeply technical resource—not necessarily an ethicist,” says Spelman. “So, if you are putting all of that responsibility on your data science team, it’s too big of a burden, and can slow down the path to success.”
Instead, Blackman suggests that every employee who touches data should have a basic grasp of its ethical implications. A chief privacy officer or chief data officer, for instance, must not only work with a development team to enact privacy policies and best practices, but also educate other executives to ensure their buy-in.
The Intersection of AI and Data Ethics
Moral quandaries associated with AI open a whole new can of worms, explains Blackman. “There was data well before there was artificial intelligence. But because AI is all about data, data ethics is almost—not quite, but almost—a subset of the discipline,” he notes.
As such, the ethics of training AI is becoming increasingly salient. AI algorithms present a twofold challenge: Companies must consider both input (are algorithms biased in any way?) and output (where and with whom is the resulting data shared, and is it leading to a valuable outcome?).
Spelman suggests this is where a data ethicist can facilitate honest conversations as companies build out AI capabilities—from the very earliest stages of algorithm development. “This will give you the capability to stand back, look at what you’re delivering, and determine whether it’s leading to the right result,” she says.
“…We have a responsibility not to collect as much data as possible, but to collect as little data as possible to drive good results.”—Lisa Spelman, Vice President of Data Center Group and General Manager of Intel Xeon Processors and Data Center Marketing
She also urges companies to consider just how much data they really need to collect to be effective. “You don’t need as much data as people think you need to be valuable in the AI space. We have a responsibility not to collect as much data as possible, but to collect as little data as possible to drive good results.”
Beyond the Checked Box
Still, if a primary ethical concern is how businesses use data, another equally important issue is how to disclose data usage to customers beyond pages of legal jargon followed by a consent checkbox.
While a truly ethical lens goes beyond regulatory compliance, initiatives like the EU’s General Data Protection Regulation (GDPR) loom large. Similar, imminent legislation in the United States may provide extra incentive for companies to proactively develop appropriate ethical standards and communications.
Daryl Crockett, CEO of ValidDatum, a data consulting company, points out that although consumer outrage around data misuse has gained traction, many people still have no idea the extent to which companies collect and use their information.
“Data in general is a very abstract topic,” she asserts. “People cannot convert technical terms and written words into examples that they can then make judgments on. When disclosing to consumers, companies need to be specific in their examples.” She proposes visuals like pictures or animations as a way to ensure company messages sink in.
Blackman concurs that transparency is crucial, pointing out that the proliferation of technologies like location tracking, biometrics, and chatbots makes clear messaging all the more critical. “Companies may say, ‘Oh, we’re just collecting your metadata.’ But number one, you can do some powerful things with metadata, and number two, the average consumer doesn’t have a clue what metadata means,” says Blackman. “It’s hard to consent to something when you don’t even know what it is.”
Ultimately, Blackman believes that his uptick in clients is a sign that more companies are turning introspective, considering the ethics of their products, services, and related data. A primary motivator for these businesses, he notes, is building and maintaining public trust.
“Millennials in particular want to purchase from ethically upright businesses, and they want to work for them as well,” he says. “People of all ages are raising concerns and spreading them in a way that we haven’t seen before.”