A Brief History of AI, From French Philosophy to Self-Driving Cars

Artificial intelligence and machines’ ability to reason have come a long way since French philosopher René Descartes contemplated the concept nearly 400 years ago. Learn how decades of research, government funding, and increasing computational power have advanced human-machine partnerships to where they are today, and where they are headed next.

By John Gorman, Dell Technologies

When we talk of self-driving cars, robot vacuums, and voice assistants, we’re probably not thinking much about French philosophy. And yet, the automated task-mastery of today’s AI was first posited by the French philosopher René Descartes almost 400 years ago. Descartes, who famously declared, “I think, therefore I am,” pondered the ability of machines to reason. His theory? While machines may be able to “do some things as well, or better, than humans, they would inevitably fail in others,” whereas human reason can universally adapt to any task. Though Descartes’ idea of machines differs from today’s reality, some say he threw down the gauntlet for what we now refer to as general AI—or machines that can think like humans.

It would be another 300 years before Alan Turing also explored the question, “Can machines think?” His test, or “imitation game,” challenged a human evaluator to pose questions to two contestants—a computer and a human—and distinguish between them based on their responses. If the evaluator could not pick out the human at least half the time, the machine passed the test that bears Turing’s name.

To bring this scenario up to date: If you’re having a conversation with a chatbot and are unaware that you’re not actually speaking to a human—congratulations, that chatbot has theoretically passed the Turing Test. In 2014, one such chatbot did exactly that: a program called Eugene Goostman convinced 33 percent of the judges at an AI competition at the Royal Society in London that it was a 13-year-old Ukrainian boy. While both this win and the merits of Turing’s test are hotly debated, Turing is nonetheless credited with paving the way for AI, though he does not get the credit for naming it.
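
In code, the pass criterion is a simple comparison: the machine succeeds when the evaluator identifies the human no better than a coin flip would. Here is a hypothetical Python sketch of that scoring logic, with the numbers invented purely for illustration:

```python
# Hypothetical sketch of the Turing Test's pass criterion: the machine
# "passes" when the evaluator picks out the human less than half the time.

def passes_turing_test(correct_identifications: int, rounds: int) -> bool:
    """True if the evaluator did worse than chance (50 percent)."""
    return correct_identifications / rounds < 0.5

# Invented numbers for illustration:
print(passes_turing_test(45, 100))  # True: judges were below chance
print(passes_turing_test(67, 100))  # False: judges reliably spotted the human
```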

“Artificial intelligence” didn’t come along until the summer of 1956, when John McCarthy coined the term at a Dartmouth conference of top researchers in the field. Finally, the loose and not-yet-tangible vision of computers thinking like humans had a snappy name to rally continued efforts in the space.

There were significant hopes—and significant government funding—coming out of McCarthy’s conference. And the following years did bring some early successes, including ELIZA, the world’s first chatbot and an early implementation of natural language processing. MIT professor Joseph Weizenbaum designed ELIZA to imitate a therapist who could ask open-ended questions and respond with follow-ups via text.
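
ELIZA’s conversational trick was largely pattern matching: find a keyword pattern in the user’s statement, flip the pronouns, and slot the fragment into a canned follow-up question. Below is a minimal, hypothetical Python sketch of that idea; Weizenbaum’s original was written in MAD-SLIP with a far richer keyword-and-decomposition scheme, and the rules here are invented for illustration:

```python
import random
import re

# A minimal, hypothetical sketch of ELIZA-style pattern matching.
# These few invented rules only illustrate the technique.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r"(.*)", ["Please, go on.", "How does that make you feel?"]),  # catch-all
]

# Flip first and second person so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, statement.lower())
        if match:
            echoes = [reflect(group) for group in match.groups()]
            return random.choice(responses).format(*echoes)

if __name__ == "__main__":
    print(respond("I am worried about my job"))
    # e.g. "How long have you been worried about your job?"
```

Shallow as such rules are, they were famously enough to convince some early users that the program understood them.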

But despite the high expectations and proclamations (in 1970, Marvin Minsky told Life magazine that “we will have a machine with the general intelligence of an average human being” in three to eight years), the lack of computational power to bring AI to life ultimately led to the first AI winter. Progress resumed in the 1980s, only to stall again in the early ’90s. But in 1997, when the IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov, the moment marked a seismic shift in the way the public perceived the intelligence of machines.

Deep Blue’s sheer volume and computing force—processing some 200 million positions per second—captivated an audience around the globe that saw potential in its thinking power. That perception was furthered in 2011 when IBM’s Watson defeated champions of the game show Jeopardy!, winning the $1 million grand prize. And in 2016, Google DeepMind’s AlphaGo beat one of the world’s top players of the board game Go in front of more than 200 million viewers worldwide. But we’ve already seen the potential of machine intelligence well beyond fun and games.
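
Under the hood, machines like Deep Blue win not by intuition but by game-tree search: looking ahead through possible moves and counter-moves and scoring where each line ends up. The toy Python sketch below shows that minimax idea on the simple stone-taking game of Nim; it bears no resemblance to Deep Blue’s specialized chess hardware and handcrafted evaluation function, but the look-ahead principle is the same:

```python
# Toy minimax search on Nim: players alternate taking 1-3 stones,
# and whoever takes the last stone wins. Chess engines like Deep Blue
# pair this kind of look-ahead with a handcrafted evaluation function
# and, in Deep Blue's case, specialized hardware.

def minimax(stones: int, maximizing: bool) -> int:
    """Score the position for the maximizing player: +1 win, -1 loss."""
    if stones == 0:
        # The previous player took the last stone and won,
        # so the side to move now has lost.
        return -1 if maximizing else 1
    scores = [
        minimax(stones - take, not maximizing)
        for take in (1, 2, 3)
        if take <= stones
    ]
    return max(scores) if maximizing else min(scores)

if __name__ == "__main__":
    print(minimax(4, True))   # -1: four stones is a lost position for the mover
    print(minimax(5, True))   # +1: take one stone and leave the opponent at four
```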

The AI trailblazers of this century have celebrated several milestones in the development and application of the still-emerging technology. In 2000, MIT researcher Cynthia Breazeal went beyond thinking machines to feeling machines when she unveiled Kismet, a robot that, according to Breazeal’s dissertation, could recognize and simulate emotion. And while they may not feel, the current generation of voice assistants—including Siri, Google Assistant, Cortana, and Alexa—have become part of the family in homes around the world, and since 2011 have grown exponentially in power, agility, and ubiquity. With rapid progress in machine learning, deep learning, and neural networks, we see algorithms helping to spot disease, detect fraud, thwart animal poachers, optimize supply chains, improve customer experience, predict buying patterns, generate music, and the list goes on.

The road ahead for AI will, in all likelihood, involve travel down actual roads. Autonomous cars have long been the stuff of fantasy and hype. Yet, advancements in edge computing and 5G are pushing them closer to reality. Volvo has deployed self-driving trucks to take over dangerous mining tasks in Norway, and Alphabet subsidiary Waymo put autonomous taxis on Phoenix-area streets in 2018.

“I think, therefore I am.” We’re long past the Turing Test. Where we go from here in the age of human-machine partnerships will be fascinating to witness.