
4 - Fighting COVID-19 with Oren Etzioni


Hi, I’m Monte Zweben, CEO of Splice Machine. You’re listening to ML Minutes, where we solve big problems in little time. Every two weeks, we invite a different thought leader to talk about a problem they’re solving with Machine Learning, with an exciting twist: our guest has only one minute to answer each question. Let’s get started!


This episode, our guest is my friend Dr. Oren Etzioni. Oren is Chief Executive Officer at the Allen Institute for AI, or AI2 to most people. Oren has been a Professor at the University of Washington’s Computer Science department since 1991, and his work has helped to pioneer meta-search, online comparison shopping, machine reading, and Open Information Extraction. Oren has founded or co-founded several companies, including Farecast (acquired by Microsoft). He has written over 200 technical papers, and has been featured in The New York Times, Wired, and Nature. Outside of work, Oren plays way too much Bughouse online!


1:08 Oren


Thank you, Monte, really kind introduction. I feel like I don't want to say anything, though, ‘cause it's gonna be downhill from there.

1:15 Monte Zweben


Well, that's funny, because I'm gonna ask you to talk more about yourself. Can you tell me a little bit about your journey to how you got to where you are now?

1:24 Oren


Well, let's start at the beginning. Like a lot of people, in high school I got my hands on a TRS-80, a simple personal computer that some people dismissively refer to as the Trash-80. But I loved it. I started programming in BASIC, and it was just so much fun. I feel like I had a balanced palette: I played basketball, I was very interested in girls. But then there was the TRS-80 and programming. And then later, at the end of high school, I read the book Gödel, Escher, Bach, which connected me with the fundamental questions of intelligence. How do we build human-level intelligence into a machine? That combination is what got me going: big questions, superb technology.

2:11 Monte Zweben


That is a great answer. And I have to admit, I had a TRS-80 too. I even programmed it in Assembler, and I read Gödel, Escher, Bach at that time as well. Okay, now I'd like to take this to your research. I'd like you to talk a little bit about the work you're doing on COVID-19. Of course, there is a great deal of research in the community trying to find a vaccine, but I'd love to hear what you're doing in this area. What's the problem that CORD-19 is trying to solve?

2:54 Oren


So we're a nonprofit research institute, and we developed a free search engine for scientific information called Semantic Scholar. One day, early in March, we got a call from the White House, from the CTO of the United States, saying: you need to help us take all the research on COVID-19 and the coronavirus, put it together, and make it available for researchers to build AI and information retrieval search systems on top of. We said, "How much time do we have?" They said, "We need this yesterday." I'm really proud of our team; we had some relevant infrastructure, which is why they contacted us. Within five days we had the first version, and within ten days it was out and available to the public. We had a corpus of research papers that was machine readable, and people ranging from Amazon to the Chan Zuckerberg Initiative, from Korea and elsewhere, built search engines and question answering systems on top of it to help biologists and virologists tackle COVID-19.
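
For readers who want to poke at the corpus themselves, here is a minimal sketch of loading a CORD-19-style release and running a naive keyword search over it. It assumes a local metadata file (commonly named metadata.csv, with columns such as title and abstract); the path and column names are assumptions to adapt to your copy, and this is not the Semantic Scholar pipeline itself.

```python
# Minimal sketch: load a CORD-19-style metadata file and filter by keyword.
# Assumptions: a local metadata.csv with "title" and "abstract" columns.
import pandas as pd

def load_papers(metadata_path: str) -> pd.DataFrame:
    """Load paper metadata and drop rows without an abstract."""
    df = pd.read_csv(metadata_path, low_memory=False)
    return df.dropna(subset=["abstract"])

def keyword_search(df: pd.DataFrame, query: str, top_n: int = 10) -> pd.DataFrame:
    """Naive keyword filter over titles and abstracts."""
    mask = (
        df["title"].str.contains(query, case=False, na=False)
        | df["abstract"].str.contains(query, case=False, na=False)
    )
    return df.loc[mask, ["title", "abstract"]].head(top_n)

if __name__ == "__main__":
    papers = load_papers("metadata.csv")
    print(keyword_search(papers, "convalescent plasma"))
```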

4:01 Monte


That's fantastic. So you had a very short period of time to build on your original research in information extraction and search, and to focus it on COVID-19. What are some of the ways that the community used the research?

4:23 Oren


Well, first of all, we partnered with Kaggle, out of Google, and they launched a competition to answer key questions about drugs, about vaccines, about how long the virus lives on various surfaces. It became the most popular competition ever: there were more than 2 million downloads of our dataset. So we felt we were really in the thick of it. A whole bunch of specific answers came out of that, about masking, about convalescent plasma, about the key issues that affect us every day, and they've been published in medical journals and have been informing policy groups ever since. Nowadays, CORD-19 is updated daily, has more than 200,000 papers in it, and people are continuing to work on it and use it to, hopefully, find a vaccine.

5:12 Monte


That's fantastic. I love the fact that you're able to keep this corpus up to date with all of the new research that's being published, and to get it into the hands of the people who need it. So, moving on from the importance of the research, let's look at how you did it. How are you actually solving this problem?

5:34 Oren


Well, if I had to give you a phrase, it would be machine learning. No surprise there; machine learning is the tide that's lifting all boats. Natural language processing, or NLP for short, which is what we do, is based on modern machine learning techniques. On top of that, we use what are called embedding representations: basically, projecting the context of different words into a vector space that allows us to understand the meanings of different words. Then we build up from the meaning of words to the meaning of sentences, from the meaning of sentences to the meaning of documents, and from that to the ability to do things like answer questions, summarize documents quickly, and, more importantly, extract findings and medical results from, say, 200,000 research papers.
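
As a rough illustration of the words-to-sentences-to-documents idea, here is a toy sketch: each word gets a vector, sentence vectors average their words, document vectors average their sentences, and cosine similarity compares meanings. The tiny random vocabulary is made up for illustration; real systems learn these embeddings from large corpora.

```python
# Toy sketch of composing word embeddings into sentence and document vectors.
# The random word vectors are stand-ins; real embeddings are learned from data.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["virus", "surface", "vaccine", "mask", "plasma", "persists"]
word_vecs = {w: rng.normal(size=8) for w in vocab}  # stand-in embeddings

def embed_sentence(sentence: str) -> np.ndarray:
    """Average the vectors of known words in a sentence."""
    vecs = [word_vecs[w] for w in sentence.lower().split() if w in word_vecs]
    return np.mean(vecs, axis=0)

def embed_document(sentences: list[str]) -> np.ndarray:
    """Average the sentence vectors to get a document vector."""
    return np.mean([embed_sentence(s) for s in sentences], axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_a = embed_document(["virus persists surface", "mask vaccine"])
doc_b = embed_document(["plasma vaccine"])
print(f"document similarity: {cosine(doc_a, doc_b):.3f}")
```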

6:26 Monte


Yes, and from what I've read of your research, you took the original NLP work, which used machine learning to extract meaning from individual sentences, and with your team you were able to extract the meaning of whole documents. I think that's what led to a unique capability. You talked about word embeddings; these capture the likelihood of one word appearing close to another word, or in the same sentence or document, and let you get at meaning through those relationships. With respect to the research, though, what was one really interesting or significant challenge that you faced along the way?

7:19 Oren


Well, modern natural language processing research has worked, as you said, at the sentence level. But if you think of a document, like a scientific paper, or even a Shakespearean play, or a memo at a corporation, reading it one sentence at a time is like trying to see a movie through a keyhole; it's a very, very limited view. Often you have to put pieces together across sentences, across different sections, to make sense of the entire document. So we're really scaling up natural language processing from the sentence level to the document level. We look at things like hierarchy: how the document breaks into sections, what's salient, what's the most important sentence or set of sentences in this document. And then we go beyond the document level. If I have 20,000 recent papers about COVID-19's persistence on surfaces and I ask a question, how do I put the pieces together across thousands of documents that a person wouldn't even have time to read?
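
One simple way to make the idea of salience concrete is the classic extractive baseline below: embed each sentence with TF-IDF and rank sentences by their similarity to the document centroid. This is only an illustrative proxy, not the CORD-19 or Semantic Scholar pipeline.

```python
# Rough sketch of sentence salience: rank sentences by similarity to the
# document's TF-IDF centroid. A classic extractive-summarization baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def rank_salient_sentences(sentences: list[str], top_k: int = 2) -> list[str]:
    tfidf = TfidfVectorizer().fit_transform(sentences)   # one row per sentence
    centroid = np.asarray(tfidf.mean(axis=0))            # "whole document" vector
    scores = cosine_similarity(tfidf, centroid).ravel()  # closeness to the document
    top = scores.argsort()[::-1][:top_k]
    return [sentences[i] for i in sorted(top)]           # keep original order

doc = [
    "The virus persists on steel surfaces for up to 72 hours.",
    "The authors thank the funding agency.",
    "Persistence on plastic surfaces was similar to steel.",
]
print(rank_salient_sentences(doc))
```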

8:32 Monte


That's fantastic. So what you're saying is that instead of just looking at micro-features of sentences at the word level, you're able to take semantic constructs that describe documents, and even concepts about the domain beyond individual documents, and create a representational structure that machine learning can leverage to learn some of those concepts. That's really interesting. Were there any specific difficulties in trying to incorporate higher-level concepts into the deep learning methods that you used?

9:16 Oren


Absolutely. Let me get a little bit more technical here, in a minute or less. You talked about how all these models try to figure out how likely the next word is, and the word after that. If I say "once upon a," you would say "time," right? So we constantly have expectations, based on our experience, of what the next word is going to be, and the word after that. But in standard models, the window of context that's used to compute that is very short. We had to scale that by two orders of magnitude, which required technical innovation up and down the stack, to go from sentences to contexts that really have the whole document inside them. That's an example of a major challenge that's familiar to everybody: scaling algorithms.
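
To make the next-word objective concrete, here is a toy trigram model whose context window is just the two previous words; the example sentence is invented. Long-document models, such as AI2's Longformer, widen that context window dramatically; the sketch only shows the basic objective, not how the scaling is done.

```python
# Toy trigram model: predict the next word from a two-word context window.
from collections import Counter, defaultdict

corpus = "once upon a time there was a virus that lived upon a surface".split()

# Count how often each word follows each two-word context.
counts: dict[tuple[str, str], Counter] = defaultdict(Counter)
for w1, w2, w3 in zip(corpus, corpus[1:], corpus[2:]):
    counts[(w1, w2)][w3] += 1

def next_word_distribution(context: tuple[str, str]) -> dict[str, float]:
    """Probability of each possible next word given the two-word context."""
    c = counts[context]
    total = sum(c.values())
    return {w: n / total for w, n in c.items()}

print(next_word_distribution(("once", "upon")))  # {'a': 1.0}
print(next_word_distribution(("upon", "a")))     # {'time': 0.5, 'surface': 0.5}
```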

10:07 Monte


Thanks, Oren. That really explains some of the complexity in the research. I've seen some of the work on natural language that uses LSTMs to string together words in a sentence, and you're right, it's usually just a few words apart. It sounds really interesting how you scaled it to look at a much larger semantic context. What's next in the research? What are you going to tackle next in this really interesting area of NLP and information extraction?

10:41 Oren


So a huge problem is information overload, right? We're all inundated with tweets, Facebook posts, email messages, Slack messages. An academic researcher, a doctor, a virologist, somebody we count on to help fight COVID-19, has all of that, plus all these papers they have to read and all these reports of clinical findings, and so on. So we really want to go from sentences, to documents, to sets of documents, and ultimately to tools that help these scientists do their research, do their literature search, find supporting evidence, and understand where the field is going. For example, we're using information visualization techniques to build graphs of how the different genes, the different proteins, and the different viruses all relate to each other. That could be a substitute for reading maybe 20 papers; you just look at one graph. They say a picture's worth a thousand words. Well, a graph could be worth ten thousand words.
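
As a small sketch of the graph idea, the snippet below links entities that are co-mentioned in the same paper and counts how often each pair appears together. The entity lists are invented for illustration; a real pipeline would extract genes, proteins, and viruses from the text with a named-entity recognizer.

```python
# Sketch: build a co-mention graph of biomedical entities across papers.
# The papers and entity lists are hypothetical, for illustration only.
import itertools
import networkx as nx

papers = {
    "paper_1": ["ACE2", "spike protein", "SARS-CoV-2"],
    "paper_2": ["ACE2", "SARS-CoV-2"],
    "paper_3": ["spike protein", "antibody"],
}

graph = nx.Graph()
for paper_id, entities in papers.items():
    for a, b in itertools.combinations(sorted(set(entities)), 2):
        # Edge weight counts how many papers mention the pair together.
        if graph.has_edge(a, b):
            graph[a][b]["weight"] += 1
        else:
            graph.add_edge(a, b, weight=1)

for a, b, data in graph.edges(data=True):
    print(f"{a} -- {b}  (co-mentioned in {data['weight']} paper(s))")
```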

11:49 Monte


Well, that's interesting. So I guess what's going to happen is that these tools are going to help people do very specific jobs, from summarizing lots of research documents into common areas, all the way to reformulating the information in those documents into visualizations and other forms that people can consume.

12:17 Oren


Exactly.

12:20 Monte


And I guess I’ll pop the stack one more level and ask a broader question, looking way into the future: 10 years from now, where do you see AI?

12:29 Oren


I see AI poised to help humanity in ways that we haven't thought about, or, if we have thought about them, we don't fully grasp. Right now, 40,000 people are killed on our highways each year, and there are more than a million accidents in the US alone. People are fascinated with self-driving cars; I'm fascinated with the lives that we're going to save in transportation. The third leading cause of death in American hospitals is physician error. Doctors are exhausted, they're overworked, they're under pressure. Well, information systems, AI-based medical systems, could help them: look over their shoulder, catch errors, catch mistakes, make suggestions. In 10 years, I see lots and lots of lives being saved by AI systems in America.

13:18 Monte


Thank you, Oren. That is a fantastic way to close. It's been a pleasure, and thank you so much.

13:26 Oren


Pleasure is all mine, Monte.


13:35 Monte

If you want to hear Oren's thoughts on Hollywood misconceptions of AI and AI2's newest computer vision program, check out our bonus minutes! They’re linked in the show notes below, and on our website, MLMinutes.com. To stay up-to-date on our upcoming guests and giveaways, you can follow our Twitter and Instagram, @MLMinutes. Our intro music is Funkin' It by the Jazzual Suspects, and our outro music is Last Call by Shiny Objects, both on the Om Records Label. ML Minutes is produced and edited by Morgan Sweeney. I’m your host, Monte Zweben, and this was an ML Minute.
