AI: What’s Hype? What’s Reality?

Host Walter Isaacson and guests trace the origins of AI and its milestones to date, and reveal how it’s evolving at lightning speed.

Stanley Kubrick is no one’s idea of an optimist (in film, anyway). Yet, in his landmark 1968 film “2001: A Space Odyssey,” Kubrick projected a vision of the future that humans still haven’t been able to shake: an intelligent machine, gone rogue, rising up against those it had been tasked to serve. The vivid horror shaped the way some of us view AI, and, to this day, scientists, technologists, businesses, and policymakers still debate this possibility. We may have a long way to go before we find out the answer, but we’ve already come a long way just to get here.

The thought machines

When we talk of artificial intelligence, the conversation doesn’t often steer toward French philosophy. Yet AI’s roots date back almost 400 years, to when French philosopher René Descartes, who famously declared “I think, therefore I am,” pondered the ability of machines to reason. It would be another 300 years before Alan Turing explored the question “can machines think?” His test, or “imitation game,” challenged a human evaluator to pose questions to two contestants—a computer and a human—and distinguish which was which based on their responses. Passing the Turing test became the first benchmark for AI. The term “artificial intelligence” came along in the summer of 1956, when John McCarthy coined it at a Dartmouth conference of top researchers in the field.

Deep Blue. Deep Learning.

In 1997, the supercomputer Deep Blue called checkmate against world chess champion Garry Kasparov. In 2011, IBM’s Watson defeated champions of the game show “Jeopardy!,” and in 2016, Google DeepMind’s AlphaGo beat one of the world’s top players of the board game Go. But while these winning machines can play the game they were programmed to play very well, they can’t do much else. The key to unlocking the true potential of artificial intelligence, it seems, lies in neural networks. Deep learning, combined with the compute power to crunch enormous amounts of data, is moving artificial intelligence beyond games and into the transformation of industries, job markets and quality of life.

Autonomy and empathy

So will AI rebel against humans? Although some are terrified, a school of thought holds that AI simply wouldn’t have any interest in human affairs. Yet, with algorithms and analytics helping diagnose and treat diseases like cancer, humans are very interested in AI’s potential. And, more viscerally still, interested in whether AI will automate them out of a job. Will millions of people be replaced by robots? Realistically, AI will both disrupt and create jobs, and in fields such as medicine, give us greater appreciation for the kind of work that machines could never duplicate. Humans make mistakes, and machines trained on human data will make them too, so we can’t simply assume AI will be objective and perfect. Of chief concern: combating implicit bias in AI. Technologists are refining algorithms to ensure non-discriminatory objectivity in decision-making. AI may not replace us, but if deployed well, it may be the last invention humans ever need.

“The development of superintelligence would be the most important invention in all of human history. And, in fact, the last invention humans will ever need to make.”

– Nick Bostrom, Author, “Superintelligence: Paths, Dangers, Strategies”

What you’ll hear in this episode:

  • A Space Odyssey
  • Can machines really think?
  • A Deep Blue checkmate
  • The neural networks in devices that mimic brains
  • How machine learning happens
  • Will autonomous computers rebel?
  • The life-saving magic of predictive medical analytics
  • Who will be automated?
  • Combating implicit bias in machines

Guest List

  • Nick Bostrom is the Director of the Future of Humanity Institute at Oxford. He’s considered one of the leading thinkers on the risks of superintelligence.
  • Oren Etzioni is the CEO of the Allen Institute for Artificial Intelligence in Seattle.
  • Matt Sanchez is the founder and CTO of CognitiveScale, a company that creates augmented intelligence software that “uses AI to help humans do what they’re trying to do better.”
  • Hilary Mason is the General Manager for Machine Learning at Cloudera, the Data Scientist in Residence at Accel Partners, and is on the board of the Anita Borg Institute.
  • Gary Marcus is a cognitive scientist at NYU and the author of the forthcoming book Reboot: Getting to AI We Can Trust.
  • Jerry Kaplan teaches social and economic impact of artificial intelligence at Stanford University and is the author of Artificial Intelligence: What Everyone Needs to Know.
  • Geoffrey Hinton is an engineering fellow at Google, an emeritus professor at the University of Toronto and the chief scientific adviser of the Vector Institute.

Walter Isaacson: It’s one of the most memorable scenes in movie history. And for most viewers, it was a startling and even frightening introduction to the world of artificial intelligence. The scene occurs in Stanley Kubrick’s groundbreaking 1968 film, 2001: A Space Odyssey. Two astronauts are on a spacewalk outside a spacecraft bound for Jupiter, but the HAL 9000 computer that’s in charge of the ship refuses to let them back in.

Speaker 1: I’m sorry, Dave, I’m afraid I can’t do that.

Walter Isaacson: Turns out that HAL was a machine with a very strong survival instinct. It had discovered that the astronauts were planning to disconnect it after they learned it might have improperly reported a fault in the spacecraft’s communication antenna. HAL’s response was driven by the most human of instincts: survival. And that’s what made it so terrifying. Machines were supposed to do what they were told. After one of the astronauts managed to get back into the spacecraft, he pulled the plug on HAL, thus saving the mission.

While moviegoers marveled at HAL’s cognitive skills, they may have also taken another message home with them from the theater. Be afraid. Be very afraid of super-smart computers that think and behave like humans. Of course, in 1968 there were no computers that could perform even a fraction of the functions that HAL was capable of. Computer-generated speech and facial recognition, natural language processing, automated reasoning, and playing chess at a very high level were all still in the realm of science fiction.

Today those functions are at our fingertips. Our pocket-size digital devices deliver that and much more thanks to incredible advances in a field of research that few people at the time had ever heard of: artificial intelligence, or AI. And now we’re once again wondering what could happen when machines achieve what’s known as artificial general intelligence, or superintelligence, when they become smarter than us. While it’s likely still decades away, most experts agree it will eventually happen. What they don’t agree on is what it’ll mean for mankind. Will it mark the beginning of an exciting new world where environmental destruction can be reversed and deadly diseases finally conquered? Or, as the great physicist Stephen Hawking predicted, will it mark the end of civilization as we know it?

I’m Walter Isaacson and you’re listening to Trailblazers, an original podcast from Dell Technologies.

Speaker 1: I’m sorry, Walter. I’m afraid I can’t do that.

Speaker 2: This computer is on the job around the clock.

Speaker 3: The computer can be called a kind of brain.

Speaker 2: Efficient, computerized, with a sleek beauty all its own.

Speaker 4: The amazing machines and gadgets that almost seem to think for themselves.

Speaker 5: It’s a new breed of computers that will test the ingenuity of both man and machine.

Walter Isaacson: One of the people Stanley Kubrick turned to for advice when writing the script for 2001: A Space Odyssey was an American cognitive scientist named Marvin Minsky. At the time, Minsky was a professor at the MIT artificial intelligence lab, which he had helped to found a decade earlier along with AI pioneer John McCarthy. In fact, it was McCarthy who first coined the term artificial intelligence. Marvin Minsky was also part of a small group of scientists that McCarthy brought together at Dartmouth College in 1956 to discuss this exciting new field of computer research for the first time.

Jerry Kaplan teaches the social and economic impact of AI at Stanford University and he’s the author of the book Artificial Intelligence: What Everyone Needs to Know.

Jerry Kaplan: The interesting thing about the meeting was that nobody really came in with much of a preconceived notion about how one might actually perform the tasks that they were concerned about. But John McCarthy himself was a mathematician who was very strong in mathematical logic, and his hypothesis was that mathematical logic and reasoning was the basis for human intelligence. And so they began to explore programs that contained, for example, if-then rules that performed certain kinds of logical inference and game playing: if this happens, then I should do that. And their thesis was that if they could just do this well enough, they would be able to recreate many of the higher-level cognitive functions of the human mind.

Walter Isaacson: Hovering in the background with McCarthy and the others at the Dartmouth meeting was a challenge posed six years earlier by the brilliant British mathematician and computer scientist Alan Turing. In a paper on computer intelligence, Turing had proposed a test of a machine’s ability to exhibit intelligent behavior that was indistinguishable from that of a human. Could a machine engage in natural language conversation so sophisticated that a neutral third party wouldn’t be able to tell whether it was a computer or a human speaking? In other words, could computers think? Turing’s prediction was that by the year 2000, the answer would be yes. Today millions of people interact with AI devices with friendly names like Alexa.

Alexa: Receiving, over.

Walter Isaacson: AI can do simultaneous translation, guide our cars, and recognize images, but does all of that add up to real intelligence?

Oren Etzioni: What we’ve seen is two things. A, no computer program has really passed the Turing test and also I think we’ve realized over the last 50, 60 years that the Turing test, more than anything, is a test of human gullibility.

Walter Isaacson: Oren Etzioni is the CEO of the Allen Institute for Artificial Intelligence in Seattle and a professor of computer science at the University of Washington.

Oren Etzioni: If I were to administer the Turing test, the first thing I would do with a putative intelligence is I would give it the SATs. I would ask it to write an essay. I would give it an AP exam in biology. And before even having a chat, I would grade those exams to make sure that I’m actually dealing with an intelligent entity, and then we could have a chat about wines, and about my family, and the kinds of things that computers have shown themselves to sometimes hoodwink us into thinking that they’re human-like.

Walter Isaacson: The idea of building a machine with human-like qualities dates back long before Alan Turing. In 1637, the great French philosopher René Descartes speculated that it might be possible to build an automaton that could imitate an animal, but he felt that humans possess two critical features that machines could never duplicate: language and reason. A machine might be able to perform a particular task better than a human, but it could not take what it learned from that experience and apply it to other problems. Nearly 400 years later, it seems that Descartes was remarkably prescient. Natural language processing and non-logical reasoning remain two of the biggest challenges confronting AI researchers as they try to build human-like qualities into their machines.

Of course, there’s been significant progress. In 1997, IBM’s Deep Blue beat world chess champion Garry Kasparov.

Deep Blue: Checkmate Garry.

Walter Isaacson: Though some argue that its victory was more a result of brute-force computing than artificial intelligence. In 2011, another IBM computer, Watson, won the TV game show “Jeopardy!,” and then in 2016, more than 200 million people worldwide watched as a machine named AlphaGo defeated the world champion of the board game Go in four out of five games.

Those computers can play the game they were programmed to play very well, but they can’t do anything else and that is why Oren Etzioni says, “We shouldn’t read too much into the success of these machines.”

Oren Etzioni: There’s an inherent paradox in AI, which a lot of people don’t get. Things that are super hard for people like playing championship level Go turn out to be quite easy for the machine. Right? And then on the other hand, things that are quite simple for a person like crossing the street, understanding a simple children’s story are things that are actually very difficult for the machine. So people see these remarkable successes in these very narrow tasks like chess or Go, and they extrapolate from that, that AI can do amazing things. And AI can do amazing things, but in very narrow arenas.

Walter Isaacson: Like Mr. Spock in the Star Trek TV series, those early AI scientists thought mathematical logic was the best way to recreate human intelligence, but that approach quickly proved to be far too limited. Human intelligence involves making judgments and finding creative solutions to problems that are not always strictly logical. So at the same time as scientists were meeting at Dartmouth, another researcher was working on a radically different approach, one that in recent years has taken us a lot closer to the goal of artificial general intelligence. It involves recreating the brain’s neural networks inside of a computer to imitate the human thought process. Today, these machine learning networks are where most of the buzz around AI is coming from. The approach was first proposed in the 1950s by a Cornell psychologist named Frank Rosenblatt.

Jerry Kaplan…

Jerry Kaplan: Rosenblatt’s approach was a rival approach, and it simply wasn’t regarded that well by the rest of the community, partially because of the outlandish claims that Rosenblatt had made for his invention, and partially because the computers at the time simply weren’t powerful enough to perform the kinds of functions that modern machine learning programs can do with the proliferation of digital data that we have available in the world today.

So basically, there was a mismatch between the technology and the tools that were available. Now we have tools that are capable of using Rosenblatt’s machine learning, neural network approach to solve all of these problems where the original approach taken by the founders of the field in 1956 simply wasn’t adequate.

Walter Isaacson: Before we continue, I’d like to take a moment to tell you about a new podcast series from Dell Technologies that focuses entirely on artificial intelligence. The show is called AI: Hype vs. Reality, and the first episode is available right now. AI: Hype vs. Reality is a fun series that takes a deep dive into all the hype surrounding artificial intelligence, and then goes out into the real world to test the technology, to see if it actually lives up to its promise. The series is hosted by Jessica Chobot, and you can listen to the first episode right now by searching AI: Hype vs. Reality on your favorite podcast app. We think you’ll really love it, and now back to the show.

It took a long time for the potential of machine learning to be realized. There wasn’t enough meaningful data to feed into the computers’ neural networks, and those computers weren’t powerful enough to process the data that was available. Most researchers abandoned the field, but not Geoffrey Hinton. As a graduate student at the University of Edinburgh in the 1970s, and later as a professor at Carnegie Mellon and the University of Toronto, he remained convinced that neural networks were the key to unlocking the mystery of artificial intelligence, and over the past 10 years, he’s been proven right.

Computers now have the processing power to crunch the enormous amount of high-quality data that is being generated, mostly by our digital selves. And researchers have taken advantage of breakthroughs in neuroscience to build increasingly sophisticated artificial neural networks that aim to mimic how the brain processes data. Now a research fellow at Google, Hinton is considered the godfather of a subset of machine learning called deep learning, which has been the foundation for most of the breakthroughs in AI over the past few years.

Geoffrey Hinton: If you have a smartphone, it’s recognizing speech using neural networks for sure, and it’s working really well. It’s working much better than it used to before they used neural networks for that. If you have a photo collection on a computer, and you want to know if you’ve got a photo of a dog, or if you’ve got a photo of people hugging, neural nets are used to find that photo. If you want to translate Chinese to English, the best system out there on the web is Google Translate, which uses neural nets for doing that. And for things like reading medical images, right now, in a few domains, neural nets are as good as people, and over the next few years, they will get better than people.

Walter Isaacson: The easiest way of thinking about deep learning is that it can recognize patterns in large sets of data, and it determines probabilities based on those patterns. The more data the neural nets can process, the more patterns they see, and the more accurate they can be.

Take, as a very basic example, how your computer determines whether an email belongs in your inbox or your spam folder. Hilary Mason is the general manager of machine learning at Cloudera, a software platform company based in Silicon Valley.

Hilary Mason: So you get examples of emails that are spam, and you get examples of emails that are not spam. Then you think about the significant features of those emails that indicate it might fall in one category or another. So maybe if it uses certain words, like “Nigerian prince,” perhaps it’s more likely to be spam. And then what you do is train a model, so you learn from that historical data. The model will learn things like, if the email is too long, or over 2,000 characters say, it’s 90% more likely to be spam than not.

Then for every new email, where you don’t know if it’s spam or not, it’s just popped up in your inbox, the system makes a calculation, based on that learned set of features and those probabilities, as to the probability that this new message is spam or not. And then there’s some threshold above which we put it in your spam folder. We say if it’s more than 85% likely to be spam based on this calculation, it goes into your spam folder. This is how the system works. These probabilities become incredibly powerful at predicting what is likely to be true.
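To make that workflow concrete, here is a minimal sketch in Python using scikit-learn’s Naive Bayes classifier. The tiny example emails, the word-count features, and the 85% threshold are illustrative assumptions, not the actual system Mason describes.

```python
# A minimal sketch of the spam-filtering workflow described above: learn
# feature probabilities from labeled examples, then score new email against
# a probability threshold. The tiny dataset and the 0.85 cutoff are
# illustrative assumptions, not a production filter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Historical examples with known labels (1 = spam, 0 = not spam).
emails = [
    "Nigerian prince needs your help transferring funds",
    "You have won a free prize, claim now",
    "Lunch tomorrow to review the project plan?",
    "Here are the meeting notes from Tuesday",
]
labels = [1, 1, 0, 0]

# Turn each email into word-count features, then fit a Naive Bayes model
# that learns how strongly each word is associated with spam.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB()
model.fit(features, labels)

# Score a new, unlabeled message and apply a threshold, as in the episode:
# above the cutoff it goes to the spam folder, otherwise to the inbox.
new_email = ["Claim your free prize from the prince"]
spam_probability = model.predict_proba(vectorizer.transform(new_email))[0][1]
folder = "spam" if spam_probability > 0.85 else "inbox"
print(f"P(spam) = {spam_probability:.2f} -> {folder}")
```

With more historical email, the learned probabilities become more reliable, which is exactly the point Mason makes about data volume.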

Walter Isaacson: According to Geoffrey Hinton, one of the reasons deep learning can be so effective at making predictions is that it can simulate the way humans often think: illogically, intuitively, and unpredictably.

Geoffrey Hinton: The key was getting neural nets that had intuition. It wasn’t logical reasoning at all. If you wanted to look at an image, and produce a caption for the image, we’re able to do that, but we don’t know how we do that. We can’t write a whole bunch of rules for doing that. So, it’s very hard to program a computer directly to do that, but what we can do, is show the computer a lot of examples, and have it just kind of get it, and that’s a new way of getting computers to do things.

Walter Isaacson: Having a machine that just kind of gets it, without anyone really understanding how it gets there, is very far removed from the hard logic of the early AI pioneers, but it does conjure up images of a super-intelligent HAL 9000 computer taking matters into its own hands. But how close are we to that being a legitimate concern?

Gary Marcus: Hi, I’m Gary Marcus. I’m a cognitive scientist at New York University, and author of the forthcoming book, Reboot: Getting to AI We Can Trust.

Walter Isaacson: Gary Marcus describes himself as a short-term skeptic and long-term optimist when it comes to AI. There are lots of things he’s concerned about, but autonomous computers running amok is not one of them.

Gary Marcus: I think lay people are terrified of it, because they get misled by the media all the time, and what typically happens is that some tiny little advance in the lab gets reported as if it’s a profound change, so the machine does a tiny bit of text retrieval, and it gets written up as if machines could now read, and change the world. Most of us in the field, are not so worried about these kinds of machines taking over for two reasons. One, is they’re nowhere near competent enough at this point, to do it even if they wanted to, and number two, is they’ve shown no sign whatsoever of being interested in our affairs. They don’t have these kinds of motivations.

If you look, for example, at Go, it’s the closest thing to taking territory in the AI world. What you find is that machines have not gotten any more interested in human beings over the last 50 years, even though they’ve gotten much better at Go. Machines ultimately calculate things. They’re not interested in human affairs.

Walter Isaacson: Machines might not be interested in human affairs, but more and more these days, humans are very interested in turning to deep learning algorithms for decision making, and in some areas, such as the early detection of skin cancer, those algorithms are presenting enormous potential to save lives.

Geoffrey Hinton…

Geoffrey Hinton: They show this patch of your skin to a neural network, and then the neural network tells you what kind of cancer it is, or if it’s not cancer. And the neural network is as good as a dermatologist now, and it’s only been trained on 130,000 examples. With time you could easily train it on 10 million examples, and then it’ll be better than a dermatologist.

That’s something where it’s very easy to see how you use it, because you can have an app on your cell phone, you can point it at some funny patch of skin you have, and you don’t need the embarrassment of going to the doctor and saying, “Is this skin cancer?” and the doctor laughing at you, or the disappointment of going to the doctor and saying, “Is this skin cancer?” and the doctor saying, “Why didn’t you come before?”
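For readers curious what training such an image classifier looks like in practice, here is a minimal sketch in Python using PyTorch and torchvision: it fine-tunes a pretrained network on labeled skin images. The directory layout, class names, and training settings are illustrative assumptions, not the system Hinton describes.

```python
# A minimal sketch of fine-tuning a pretrained network to label skin images
# as benign or malignant. The "skin_images/train" folder layout (one
# subfolder per class) and the hyperparameters are illustrative assumptions,
# not a clinical system.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard preprocessing for a network pretrained on ImageNet.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Labeled example images, organized one folder per class
# (e.g. skin_images/train/benign, skin_images/train/malignant).
train_data = datasets.ImageFolder("skin_images/train", transform=preprocess)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Start from a pretrained model and replace its final layer
# with a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# One pass over the labeled examples; a real system would train far longer
# on far more images, as the episode notes.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```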

Walter Isaacson: The scenario that Geoffrey Hinton just described is potentially very good news for people concerned about skin cancer, but possibly not so good for medical professionals who fear being replaced by an app on a smartphone. The introduction of new technology invariably leads to job disruption, and AI may be the most disruptive technology we’ve ever seen.

The doomsday scenario is that millions of people, including professionals and white-collar workers who until now have largely escaped the impact of automation, will be out of work, replaced by robots that can do their jobs faster, cheaper, and better. The more likely outcome is that, while some jobs will be disrupted by AI, many new ones will be created, and in fields such as medicine, we will gain a greater understanding and appreciation for the kind of work that humans can do, which machines can never duplicate.

Hilary Mason…

Hilary Mason: There’s been a fair bit of hype around AI in healthcare and replacing doctors, and that is generally not something I see coming anytime soon. People were saying five years ago that it should be trivial to do cancer diagnosis and radiology work in an entirely automated way. It turns out that in fact it is possible to do quite a lot of good work in that domain, but we have not yet replaced radiologists. And it turns out they do a lot of work that isn’t just looking at pictures and identifying things, work that’s actually quite important for patient outcomes. So perhaps we actually undervalued the work that they do in putting together treatment programs and helping patients understand how to navigate through a traumatic situation, and we’ve overvalued the part of their job that’s just looking at the pictures and coming up with a label, which is something where computational systems actually can be helpful. So that’s an area where the reality has not matched up to the hype, but the technical truth remains the same, which is that there is tremendous progress to be made here. We just shouldn’t put all our hopes in completely automating away some of these professions.

Walter Isaacson: Machines are being used to replace, or at least assist in, decisions that were once the exclusive domain of humans. Fill out a job application and chances are that it will be scanned by an algorithm before it’s ever seen by a human. The idea is that the algorithm can make an accurate evaluation of the applicant’s suitability for the job more quickly, and perhaps more objectively, than human resources professionals can by sifting through the data. But that’s only true if the data that has been put into the algorithm is itself accurate and unbiased, and that depends largely on the people who are doing the inputting.

Matt Sanchez: Humans can make mistakes. Those mistakes, if translated into the data sets that then encode these systems, can definitely have an impact. The question is not whether that’s going to happen or not. That’s going to happen. Humans are flawed, we always will be, and we will always make mistakes.

Walter Isaacson: Matt Sanchez is the founder and CTO of CognitiveScale, an artificial intelligence software company based in Austin, Texas. He’s one of the founders of an organization called AI Global that works to expose bias and promote transparency in AI systems.

Matt Sanchez: We heard about a case of AI applied to an optical sensor, put into a soap dispenser, that only recognizes fair-skinned people. So when a dark-skinned hand goes underneath the soap dispenser, nothing comes out, and clearly that’s a bias, and that one is a fairly straightforward one. It’s a training bias. They only trained the system with images of light skin, so it could only detect light skin. But these are important problems to understand. They’re illustrative examples of what can go wrong very quickly, and that could have massive consequences if we don’t understand how to put the right controls in place and test them. So clearly the soap dispenser wasn’t tested very well, and nobody really looked at the data to figure out whether it had the right kind of information in it so that it wasn’t biased. And I think without doing that, it’s irresponsible and, quite frankly, dangerous to just deploy these systems without thinking through and having a plan for these types of tests and controls in the system.

Walter Isaacson: From soap dispensers to smartphones to cars, artificial intelligence is a growing part of our everyday lives, and testing for bias is top of mind as AI researchers begin to move from solving one particular problem toward developing machines that more closely resemble the full range of human intelligence. In other words, superintelligence. Nick Bostrom directs the Future of Humanity Institute at Oxford University and he’s the author of the book “Superintelligence: Paths, Dangers, Strategies.”

Nick Bostrom: The development of superintelligence would be the most important invention in all of human history, and in fact the last invention humans will ever need to make. And therefore, as we’re moving forward in this direction, it would behoove us to think very, very carefully about what this would mean for the world and whether there are particular actions we should take in advance to make sure that we minimize the risks of this transition to the machine superintelligence era.

Walter Isaacson: There’s been a quantum leap forward in the field of artificial intelligence since Alan Turing first proposed the Turing test. Yet the gap between human and machine intelligence is still very wide. Fifty-one years after the release of Kubrick’s masterpiece, 2001: A Space Odyssey, there is still no single computer that can perform all the functions of the HAL 9000. Hopefully, with responsible and thoughtful action, by the time there is, we’ll be ready for it. I’m Walter Isaacson and you’ve been listening to Trailblazers, an original podcast from Dell Technologies. This is the last episode of the season, but we’re already hard at work on the next season of Trailblazers, where we’ll be bringing you all new stories about the digital pioneers behind some of the biggest disruptions of our time. While we’re busy working on the next season, we have a new podcast series to help tide you over that we really think you’re gonna love. It’s called AI: Hype vs. Reality. Host Jessica Chobot actually gets behind the wheel of an experimental self-driving car to see if autonomous vehicle technology lives up to its hype. Here’s a clip from that first episode.

Jessica Chobot: You got a lot of stuff going on in this car.

Speaker 8: Yeah we do.

Jessica Chobot: There are lots of things. What am I looking at on this monitor?

Speaker 8: On the bottom left we’ve got some boxes put around pedestrians so we can detect, so you can see that guy that’s far away.

Jessica Chobot: Oh yeah. Okay.

Speaker 8: And the top left hand side we’ve got a top down visualization of that guy that’s also walking towards us. And on the right is actually this 3D cloud of points that’s being produced by a rotating laser sensor that we’ve got over the top of the car.

Jessica Chobot: Got It. And so then the car takes all this information and processes it and then just that’s how it drives.

Speaker 8: Yeah, exactly.

Jessica Chobot: Oh okay. Wow. That seems simple enough.

Speaker 8: Yeah.

Jessica Chobot: All right. So is it my turn?

Speaker 8: It is, yeah.

Jessica Chobot: Alright. Which program are we going to do?

Speaker 8: So we’re going to do the pedestrian detection and the stone.

Jessica Chobot: Good. That’s the one I wanted.

Speaker 8: All right.

Jessica Chobot: Awesome. All right, I’ll switch out here.

Speaker 9: Big switcheroo.

Jessica Chobot: All right, well buckle up because we don’t know how this is going to work.

Speaker 8: Tighten your restraints. What we can start with is flipping the car into autonomous mode.

Jessica Chobot: Okay.

Speaker 8: So Jessica, are you ready?

Jessica Chobot: I am ready. Oh gosh, I’m a little nervous. I think I’m more scared of disappointing you guys and ruining the car than I am of actually getting us hurt.

Speaker 8: Final check, the vehicle will enter autonomous mode.

Speaker 9: On three. One, two, three.

Jessica Chobot: I’m so excited.

Speaker 9: And we’re rolling.

Jessica Chobot: That is weird.

Walter Isaacson: If you want to hear more about Jessica’s drive in the self-driving car, you can listen to the whole episode right now by looking up AI: Hype vs. Reality, wherever you get your podcasts.