Physicist And Neuroscientist Garrett Kenyon Says There’s No Artificial Intelligence

LANL physicist and neuroscientist Garrett Kenyon, left, chats with Bradbury Science Museum Educator Mel Strong at the July 15 Science on Tap event. Photo by Maire O’Neill/losalamosreporter.com

A large crowd gathers at projectY cowork July 15 for a talk by Garrett Kenyon as part of the Science on Tap series. Photo by Maire O’Neill/losalamosreporter.com

Garrett Kenyon answers questions following his talk on artificial intelligence July 15 at projectY cowork. Photo by Maire O’Neill/losalamosreporter.com

BY MAIRE O’NEILL
maire@losalamosreporter.com

Just days before Microsoft agreed to invest $1 billion in and partner with the research company OpenAI, which was cofounded by Elon Musk to develop artificial general intelligence, Garrett Kenyon told a Science on Tap audience that artificial intelligence (AI) doesn’t exist.

Kenyon is a physicist and neuroscientist in the Information Sciences Division at Los Alamos National Laboratory who specializes in neurally-inspired computing.

“The topic is so interesting. What is AI? What does it mean for our society? I’ve been in this field for more than 30 years now and I bring a different perspective to the topic. It will be somewhat shocking to some people because I have a somewhat different opinion of where we’re at,” he said.

“AI doesn’t exist,” Kenyon told the Los Alamos Reporter prior to his talk. “It’s kind of a myth. It’s this word everyone uses. It’s utterly abused.”

He referred to a baseball commercial for Amazon Web Services (AWS) AI that claims it can predict the probability of a stolen base.

“That’s not AI, that’s statistics,” he said. Another example he gave was Google Maps and similar apps, which have become all but indispensable.

“There are some fabulous apps out there. Humans are very smart. They write great code. Software engineers are amazing but none of this is AI. Google Maps is not AI – it’s simply engineering. Every step in Google Maps is well posed. You know what you’re doing. You just have to engineer that system correctly. You have GPS, you have street maps. It’s not AI. We don’t have AI,” he exclaimed.
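Kenyon’s stolen-base example boils down to ordinary statistical modeling. Here is a minimal sketch of that idea; the features, numbers, and labels below are all hypothetical illustrations and have no connection to the actual AWS product:

```python
# A plain statistical model -- logistic regression -- "predicting" a stolen
# base, illustrating Kenyon's point that this is statistics, not AI.
# All features and data below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per situation: [runner sprint speed (ft/s),
# pitcher delivery time (s), catcher pop time (s)]
X = np.array([
    [30.1, 1.35, 1.95],
    [27.4, 1.20, 1.90],
    [29.0, 1.45, 2.05],
    [26.8, 1.30, 1.88],
    [28.5, 1.50, 2.10],
    [27.0, 1.25, 1.85],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = successful steal, 0 = not

# Fit the textbook model and report a probability for a new situation
model = LogisticRegression().fit(X, y)
print(model.predict_proba([[29.5, 1.40, 2.00]])[0, 1])
```

Nothing in that pipeline reasons or understands; it simply fits a curve to past outcomes, which is the distinction Kenyon is drawing.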

Kenyon said that was the argument he would make in his Science on Tap talk, and that he thought he could show some examples of what AI would be, how “we’re not there,” and where what computers do today falls short.

“If we dig a little deeper in there, there’s no such thing. We don’t have it. There is no such thing as an intelligent computer,” he said, which was actually a relief to the Los Alamos Reporter!

Kenyon said there are computers that can take over some jobs, but he thinks the hype is way overblown right now. That doesn’t mean someone won’t figure it out, he said, or that there won’t be a great breakthrough, but we’re not there yet.

“What I would say is that if we are going to develop AI, we have to go back to biology and understand how biological systems learn, how they work, how we learn, how we work, and take lessons from that. We have no AI and our understanding of the brain is about medieval,” he said, adding that there are basic things neuroscientists don’t agree on, such as whether neurons convey information through their average firing rate or whether individual spike times matter.

“They argue about things like what does the hippocampus do? Is it a short-term memory buffer? Is it a spatial navigation device? These are basic parts of the brain. What does the cerebellum do? People argue about what’s happening in the primary visual cortex,” Kenyon said. “So really for me, inside the field for so long, I’m struck with just how little we know. Just how profound our ignorance is. That’s my message. If we dig deep into what AI purports to be today you find that it’s almost completely empty. It doesn’t mean that computers aren’t very powerful and they don’t do very powerful things. Our phones do incredible things for us, amazing things, but it’s not intelligent – useful but not intelligent.”

Kenyon thinks Google is more likely to figure out how the brain works than neuroscientists are.

“We’re going to figure out how the brain works by using brain-like algorithms to solve things we don’t know how to solve right now, by using brain-like algorithms to actually achieve something like AI. That’s how I think we’re really going to start to understand how brains work. That’s what I’ve bet my career on at this point. That’s how I do neuroscience now. I make brain-like algorithms do things that our current algorithms can’t,” he said.

Kenyon said he is opinionated and not shy about where he’s coming from. He said he has impassioned arguments with many of his colleagues who are equally convinced that AI is here, that it’s just a matter of more training data. He mentioned Elon Musk and his right-hand man, Tesla’s director of AI Andrej Karpathy, whom he called a very smart guy.

“They’re going to make self-driving cars. They’re coming and they’re going to get there. There’s so much training data, that’s his argument. He’s got so many Teslas out there with so much data to train with. Everything that could happen is in their training data. They just have to make sure they’ve covered every possible contingency in training data and it’s going to be okay,” Kenyon said. “I respectfully disagree. I admire Elon Musk – he’s a visionary guy – but I think he’s wrong on this one. I think the world will create more variety than will be in his training data. There will be more things out there in the world than he will dream of in his Tesla database.”

Kenyon spoke about some of his work at LANL. He is also affiliated with the New Mexico Consortium.

“We have a big project now to look at altered media. We believe that some of these brain-like algorithms, algorithms that are closer to neuroscience than the usual machine-learning applications are, might be particularly helpful for, say, detecting altered media, detecting fake video or synthetically generated images of some kind. It’s a very hard problem but it’s something that many people are very concerned about, including the U.S. government, because of the potential for disruption to society caused by a malicious agent,” he said. “Seeing is believing. When I see Godzilla come out of the Sea of Japan and destroy Tokyo, I believe it. It’s very hard not to believe what you see, and people can make fake media that looks and sounds realistic. It’s very difficult to disabuse people of the notion that that really happened, that that person really said that. It’s a fascinating topic right now that I and others have been working on very hard.”

Kenyon said he is always trying to further develop his understanding of neural algorithms, the actual algorithms that neural systems use, by applying them to situations and seeing where his concepts of neural algorithms fail and what he needs to do to improve them. He said that to survive at LANL you have to be useful, and that he has to continue to show he adds value somewhere.

“I keep my eyes open for where the opportunities are. The main area where I’ve found that I have a niche is in unsupervised learning. The way humans learn, the way other animals learn, is in what we call an unsupervised way. It turns out that cognitive psychologists tell us you probably learned to see by the time you were nine months old. They have amazing ways of assessing these things; they can tell when infants are surprised,” he said. “When you do something that is kind of shocking to them, you can tell that they’ve learned some principle. But before a certain age, if an object just slides off the table and suspends in air, they don’t see anything wrong with that – it’s just stuff that happens. At some point you can see that they can see that was not right, that it doesn’t match their expectation.”

Kenyon said those sorts of probes can be used to figure out when infants have developed certain capabilities.

“So we know that infants generally understand the visual world something like the way adults do by the time they’re nine months old. They know the world is three-dimensional; they understand that when objects pass behind other objects they still exist. They understand basic intuitive physics. They learn this without us spending every minute of every day teaching them. They just figure this out. Their brains work this out somehow. They don’t get labeled data. We don’t tell them how far away pixels are so they can compare – they just learn it,” he said.

Kenyon said the algorithms he looks at have that character – they can learn a lot more from just data, by taking in the data and modeling it.

“Most of our machine-learning algorithms today are supervised. You have to give the machine a task. You have to say, label these images, and I have to give you a bunch of images and tell you these are cats and these are dogs. I have to give you a bunch of cats and a bunch of dogs that are labeled, and the machine has to see a bunch of images that are labeled. Yet somehow infants just look at the world and figure it out, and you can kind of show them one example – this is a zebra – and they’ve got it,” he said. “Most of our learning is unsupervised – just learning to understand the world – so that’s very interesting to me, and there are applications at Los Alamos where that’s very relevant.”
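The contrast Kenyon draws can be made concrete with a toy example. Below is a hedged sketch (the data, features, and cluster count are all hypothetical) of a supervised classifier that needs every example labeled, next to an unsupervised algorithm that discovers the same two groups from the raw data alone:

```python
# Toy contrast between supervised and unsupervised learning.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two hypothetical "animal" feature clusters (e.g., size, ear length)
cats = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
dogs = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))
X = np.vstack([cats, dogs])

# Supervised: the machine only learns "cat vs. dog" because we hand it
# a label for every single training example
labels = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, labels)

# Unsupervised: no labels at all -- the algorithm recovers the two groups
# on its own, loosely analogous to how infants find structure in raw input
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(clf.predict(X[:3]), km.labels_[:3])
```

The k-means step here stands in only loosely for the brain-like algorithms Kenyon works with; the point is simply that structure can be recovered without labeled data.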

Kenyon said the Lab has been a godsend to him and has been by far the greatest opportunity of his life.

“I am extraordinarily grateful to have had this opportunity to just be a scientist. If I had to do it again I would be at Los Alamos. It has just been phenomenal for me. It just hit the right spot for me to be a practicing scientist. I really enjoy working with students and working with colleagues but I need to do science too. I need to do technical work and I really appreciate being able to do science every day,” he said.

The Bradbury Science Museum hosts the Science on Tap discussions downtown each month. Registration is not required and admission is free. For more information, go to www.lanl.gov/museum.