
1 - Language Meaning with Tom Mitchell



Monte Zweben


Hi, I’m Monte Zweben, CEO of Splice Machine. You’re listening to ML Minutes, where we solve big problems in little time. Every two weeks, we invite a different thought leader to talk about a problem they’re solving with Machine Learning, with an exciting twist: our guest has only one minute to answer each question. Let’s get started!


This episode, our guest is my friend Tom Mitchell. Tom is a legend in Machine Learning, having founded the world’s first Machine Learning Department at Carnegie Mellon University in 2006, and led it as Department Head for 10 years. Tom’s research bridges Machine Learning, Artificial Intelligence, and Cognitive Neuroscience to explore some of the most interesting and groundbreaking questions of our time. Outside of work, Tom plays guitar and a little bit of banjo. Welcome, Tom. So happy to have you here.


1:07 Tom Mitchell


It's great to be here.


1:10 Monte


Tom, you've worked on a lot of fascinating projects. But today, we want to talk to you about your work in language processing. Could you give our listeners an overview on how the brain represents meaning, in one minute or less?


1:31 Tom


Sure. If I tell you, “Think about tomato”: I just did it. Your brain just activated a pattern of neural activity that's unique to “tomato”. Where is that neural activity? Part of it is in a part of the brain called the gustatory cortex, the part that's sensitive to taste; another part of the neural activity is in the premotor cortex, the part of the brain that would be active if you were to grab a tomato. Another part is in the parietal regions, where the size of the tomato is represented. And it doesn't have to be tomato; if I tell you to think about love, you'll get a pattern in different parts of your brain. But what we found is that there is a spatial distribution across particular regions of the brain, often regions that have to do with sensing and controlling the world, that will automatically activate when you think about a concept.


2:33 Monte


That's really cool. So what you're saying is that a single word or concept activates the same part of the brain every time you repeat it. I'm curious, though: why is this representation of meaning important? How can it eventually be applied?


2:54 Tom


Well, there are all kinds of clinical uses, as you can imagine: people who are locked in, for example, who are unable to speak. Or people who are in a coma, or coma-like conditions, where you want to know whether they're hearing even though they might not be able to move or speak. If we put somebody in a brain imaging machine and speak to them, even if they're in a state where they can't move, we can see whether they're comprehending the words that we're saying, for example.


3:30 Monte


Well, that's amazing. So what you're saying, from a medical perspective, is that we may be able to understand what's happening to a patient who has serious brain injuries, and be able to see what's really going on. So you're doing great research on something that can really impact people. What tools are you using to study this?


4:03 Tom


We're using a combination of brain imaging and machine learning. So for example, we collect our own data from human subjects, some of it fMRI. We put people in an fMRI scanner, and we show them pictures, or we show them written words, or we say words; one way or another, we get them to think about a particular concept. That gives us a pattern of brain activity. With fMRI, we get very good spatial resolution, about two millimeters. With MEG, another kind of imaging, we get very good time resolution, about a millisecond. And what we do is train machine learning classifiers to decode, from that neural activity, which word you're thinking about.
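For readers who want a concrete picture of that decoding step, here is a minimal sketch in Python. Everything in it is a stand-in: the data is synthetic noise shaped like voxel activations, the two-word vocabulary is invented, and a real neuroimaging pipeline involves far more preprocessing. It only illustrates the core idea Tom describes: a linear classifier trained to recover the stimulus word from a pattern of brain activity.

```python
# Minimal sketch of fMRI word decoding. All data here is synthetic;
# a real pipeline adds registration, detrending, and voxel selection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500            # trials = word presentations

# Hypothetical two-word experiment: 0 = "tomato", 1 = "hammer".
words = rng.integers(0, 2, n_trials)

# Each word evokes its own spatial pattern, buried in heavy noise.
pattern = rng.normal(size=(2, n_voxels))
X = pattern[words] + rng.normal(scale=3.0, size=(n_trials, n_voxels))

# Train a linear classifier to decode which word was presented,
# cross-validated so we never score on training trials.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, words, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```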


4:55 Monte


Excellent. So I think I understand: you've taken imagery, classic medical imagery, and used machine learning to classify those images and predict what part of the brain may be activated given a particular stimulus, like a word. What's one challenge you faced with this research?


5:24 Tom


Well, let me give you a challenge that we think we resolved. One challenge is that there are 100,000 words in English, and people aren't patient enough to sit in the scanner while we test all hundred thousand words on them. So we'd like to understand the systematicity of these neural patterns of activity across the whole vocabulary. And if we just had to get a pattern per word, obviously, that would be too tedious. But what we were able to do is train a model that is a little different from a classifier; instead, we trained a model where the input is a word embedding, a vector representation of the statistics of how a word is used, and the output is the predicted brain activity. We trained this on 60 nouns, and we found it generalized to many other nouns.
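The forward model Tom describes inverts the decoding setup: instead of classifying a fixed word list, it maps a word embedding to predicted brain activity, so it can extrapolate to nouns that were never scanned. Here is a minimal sketch under the same caveats as above; the embeddings and brain responses are random stand-ins, and the 25-dimensional embedding is just a nod to the hand-picked semantic features used in the original 60-noun study.

```python
# Minimal sketch of an embedding-to-brain forward model. Synthetic data only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_nouns, emb_dim, n_voxels = 60, 25, 500   # 60 training nouns, as in the interview

E = rng.normal(size=(n_nouns, emb_dim))        # word embeddings (corpus statistics)
W_true = rng.normal(size=(emb_dim, n_voxels))  # hidden semantic-to-voxel mapping
Y = E @ W_true + rng.normal(scale=0.5, size=(n_nouns, n_voxels))  # observed activity

# One regularized linear map from embedding space to voxel space.
model = Ridge(alpha=1.0).fit(E, Y)

# Generalization: predict the brain image for a noun that was never scanned.
new_word_embedding = rng.normal(size=(1, emb_dim))
predicted_activity = model.predict(new_word_embedding)
print(predicted_activity.shape)                # (1, 500): one predicted image
```

Because the map is linear, a new word's predicted image is just a weighted combination of the patterns learned from the training nouns, which is what lets 60 scanned words stand in for a much larger vocabulary.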


6:26 Monte


Oh, that's very interesting. I’m curious though, what's one challenge that you face that you haven't overcome yet?


6:34 Tom


Brain imaging is a terrible technology in terms of the noise-to-signal ratio. I'll just point out MEG, the method that we use when we want to see with better-than-one-second resolution. MEG stands for MagnetoEncephaloGraphy. It's a passive technique that literally listens, with super-cooled sensors around your head, to the magnetic fields that your head is producing. Those magnetic fields are so weak, they're weaker than the Earth's magnetic field, the one that makes your compass work. They're so weak that if a big bus drives by outside, we see it on the scanner. So it's a crazy situation. However, it only takes you 400 milliseconds to understand a word. So if we want to see what's going on, we need to use that kind of sensing.


7:37 Monte


Excellent, I think I see. So the better the imaging technology, the more precisely we'll be able to see how meaning emerges over time. So Tom, what's next for language processing?


7:56 Tom


Well, the way I think of it is that we've spent a lot of time looking at the question of where in the brain there is neural activity involved in comprehending a word, and when; we now know something about the timing. But we still haven't confronted very much the question of how: how is it that your brain processes language? What is the algorithm? What is the process that is causing us to see these different patterns of activity in the sequence that we're seeing? And so I think the next really big challenge is to go beyond where and when, and get to the how.


8:37 Monte


I think I understand what you're saying here: deep learning uses neural networks, and many people feel that neural networks may be emulating what's going on in the brain. But really what you're saying is that your use of neural networks only predicted where in the brain activity would occur and when it would occur; it doesn't really describe how the brain actually translates words into meaning.


Tom, thank you very much; we've really enjoyed hearing about your research on how the brain processes language, and I really appreciate the time you gave us today.


9:00 Tom


It's great to be here. Thanks, Monte.


9:08 Monte


If you want to hear Tom's thoughts on GPT-3 and the future of machine learning, check out our bonus minutes! They're linked in the show notes below, and on our website, MLMinutes.com. Next episode, we'll be discussing how Machine Learning is being used to fight COVID-19. To stay up-to-date on our upcoming guests and giveaways, you can follow our Twitter and Instagram, @MLMinutes. Our intro music is Funkin' It by the Jazzual Suspects, and our outro music is Last Call by Shiny Objects, both on the Om Records label. ML Minutes is produced and edited by Morgan Sweeney. I'm your host, Monte Zweben, and this was an ML Minute.
