
Actual Knowledge and Artificial Intelligence

By Jane Neumayr Nemcova (’98)

 


Note: Jane Neumayr Nemcova (’98) served as Managing Director of AI at Lionbridge until May 2020. She recently completed a course at the Massachusetts Institute of Technology on cryptocurrency and blockchain, and she is planning to work on new projects in the area of natural language processing. The following article is adapted from remarks she made to the Thomas Aquinas College Board of Governors at its meeting on November 16, 2019.

 

Back when I was in high school, when people would ask me about my college plans, they would say things like, “Do you want to go play basketball or tennis?” Or, “Do you want to study law?” And I would say, “No, I think I probably want to study philosophy.” And they would respond, “Why would you ever do that? That’s, well, kind of silly and impractical, isn’t it? What are you going to do with that?”

And so I thought, “Well, OK, maybe it is silly,” but somehow I knew that I needed to learn, and deep down I knew that, while maybe everyone wants to learn in some sense, I kind of wanted it more. I knew that I needed to learn how to learn, and that, if I did that, then I could pursue any profession that I wanted. If I decided to go into law later, that would be great, and if I decided to go into some other area, that would be fine. I would have the necessary foundation.

From TAC to AI

I was always interested in language, but when I graduated from Thomas Aquinas College and I started thinking about what I wanted to do next, there weren’t many options in that field. I had studied French extensively and I had even lived in France for a while, so I thought maybe I would go back to France, continue with French, and see what I could do with that.

As I went to graduate school and later into the business world, language remained my focus. Back then the work was really about translation: using translation services to take the products or software technologies that companies build in English, translate them into other languages, and then deploy them in other countries.

But as I was working I realized that technology was changing rapidly, and about seven or eight years ago, just as Artificial Intelligence (AI) was beginning to catch on, I started thinking about the role that language played in the development of technology. I saw an opportunity, and so I started an AI division within the language company where I was working.

What my team did, and what I have done, is structure an organization around supporting AI companies with data services. We developed the human side of the human-data input for AI. In language and speech, which are the most difficult parts of the process, we provided data services for developing language models, natural language processing, computational linguistics — all aspects of speech development for products — among others. We covered more locales than any other company. We specialized in finding people, even speakers of languages you’ve never heard of, and in developing language technology across the world.

What’s funny, given the opposition I ran into in high school when I told people I wanted to study philosophy, is how philosophy proved to be the avenue that brought me to AI. And these days, many of my colleagues in the AI industry — very accomplished individuals who are creating the products and technologies that we all use day in and day out — often remark about my college education. They say, “It’s really the most interesting thing about you, that you studied Descartes, or Aristotle, or Kant.”

AI and Liberal Education

What’s more, they are beginning to see that the sort of education that I had is something like what they want for their own children. I have been involved in countless conferences and summits with different folks in the AI community over the years, and I have often heard industry leaders asked the question, “What should my child study in school to survive in this AI world?” What I find pleasing, but also ironic, is that these professionals who have spent so much of their lives — 20 or even 30 years — working on different areas of AI often see the perils of over-exposing children to technology.

One of the people I respect the most in AI is Andrew Ng, who was one of the founders of Google Brain; he was later a key figure at Baidu, and he started Coursera, one of the most successful online education companies. He said at an EmTech conference, in answer to a question along those lines, “You know, for my children, if I could pick what I wanted, I would want them to learn how to learn.”

What pleased me, of course, was that I had essentially made that choice as a teenager — and now Andrew Ng was validating it.

Steve Jobs famously prohibited his children from using an iPad, and one of the reasons he did so is that these devices can be a huge distraction from focusing on the right things. Technology, in and of itself, might not be a problem; it helps in many practical aspects of life. But, as far as education is concerned, distractions from the focus on actual knowledge and learning can be a very big problem.

The emergence of AI is pushing everyone to understand what education ultimately means, what learning is, and what knowledge is. And I do see, in Silicon Valley in particular, that more people are trying to teach their kids languages; they are trying to get their children to read more, to grasp what knowledge is. The people I have often encountered at big tech companies see that learning how to learn is really the most important part of education. The ability to think is essential to the smooth operation of business, and that grows ever more apparent the more technical an area becomes. We are in a technology revolution of sorts right now, and we don’t have a choice about that. It is happening, and how we navigate and educate ourselves in and around it is absolutely crucial.

One of the ways in which I think this trend will evolve is that AI is going to force more true learning. It is going to heighten the value that society places on creativity, broad thinking, and the liberal arts. People with a liberal arts background typically end up being very good in a business environment because they are used to thinking about things from different angles, in different frameworks, and figuring out how to discuss complex topics. In business and technology, a liberal arts background is a kind of natural advantage. That will be even more true in an increasingly AI-driven economy. It will push people to figure out what makes humans different from machines — what ultimately makes humans valuable — and, as a result, knowledge itself is going to become a commodity worth purchasing.

Premium on Philosophy

A couple of years ago I spoke to students at the College and shared with them a story about a friend of mine who developed “Magic: The Gathering,” the famous game that was later acquired by Hasbro. The story has to do with a discussion we had about the hiring practices at his company, which was worth something like $300 million at the time. “What are you looking for in students coming out of college?” I asked him. “Are you looking only for candidates with degrees in gaming?” And he said, “Well, we’ve got PhDs and master’s students from gaming programs, but they have not been our best hires. What we have come to figure out is that we really need to hire philosophy majors. Those are the guys and gals who are creating next-level games and characters and storylines — all the exciting, interesting things that lead to success in this industry.”

That story is, I think, representative of what is happening in the marketplace right now. Philosophy is no longer an impractical piece of your education; it actually may be the most important piece.

An enormous amount of human data is required to make AI and related technologies work, and one obstacle to using that data properly is labeling and categorizing it. Now, TAC students know well that Aristotle spent a lot of time going through all kinds of data empirically, labeling and categorizing the natural world around us. In a sense he is the number-one thinker in AI, and many of the great AI thinkers reference him and talk about him as an important part of building any kind of machine-learning model. He was also one of the first data collectors, observing nature and everything about the world that he could in order to use empirical means as a form of validation.
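For readers curious what “labeling and categorizing” looks like in practice, here is a minimal, hypothetical sketch in Python (using the scikit-learn library): human annotators attach category labels to utterances, and those labeled pairs train a model that can then categorize new text. The example sentences, the “travel” and “weather” labels, and the choice of classifier are illustrative assumptions, not a description of any actual company’s pipeline.

```python
# A minimal, hypothetical sketch of data labeling for natural language
# processing, assuming scikit-learn is installed. The utterances, labels,
# and model below are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Step 1: human annotators assign a category label to each utterance.
texts = [
    "Book me a flight to Paris",
    "What's the weather tomorrow?",
    "Cancel my hotel reservation",
    "Will it rain this weekend?",
]
labels = ["travel", "weather", "travel", "weather"]

# Step 2: the labeled pairs become training data for a simple classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)   # bag-of-words features
model = MultinomialNB().fit(X, labels)

# Step 3: the trained model categorizes new, unlabeled text.
query = vectorizer.transform(["Book me a hotel in Rome"])
print(model.predict(query))  # -> ['travel']
```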

What I tried to communicate to the College’s students was their value as philosophy students, a background now recognized as important by people looking for the next generation of professionals — especially in areas such as management and marketing, and particularly in AI. Thomas Aquinas College graduates are well positioned for these sorts of positions and can interview very effectively. The ability to discern and navigate complex matters — in other words, critical thinking — is the most crucial trait that our economy needs right now.

What Thomas Aquinas College is doing in the lives of its students is invaluable, not only for the good it is achieving in American higher education, but also for preparing the next generation to navigate the AI world. And the folks in AI are looking for candidates exactly like those coming from Thomas Aquinas College to help them, not only in developing their products, but in figuring out how those products should function and how they should be applied.

Having spent much time in AI with accomplished engineers, I have come to realize how precious my own education in philosophy is — and the folks I work with in Big Tech everywhere have recognized it as well. So thank you for your support of Thomas Aquinas College. It’s been amazing.