
    Duolingo’s CEO admits where he got AI wrong

    When Luis von Ahn, Duolingo’s CEO, sent an internal memo about AI last year, he didn’t expect it to go viral—or to ignite a firestorm about the future of work. Now he unpacks what he got right, what he got wrong, and what the backlash taught him about the real limitations of AI. It’s a candid reckoning with hype, growth, and the surprisingly complicated promise of technology in education.

    This is an abridged transcript of an interview from Rapid Response, hosted by the former editor-in-chief of Fast Company, Bob Safian. From the team behind the Masters of Scale podcast, Rapid Response features candid conversations with today’s top business leaders navigating real-time challenges. Subscribe to Rapid Response wherever you get your podcasts to ensure you never miss an episode.

    Right now, a lot of the learning that businesspeople are being forced to do is technological: it's about AI. I know that's not been the focus of the learning on Duolingo, but are there things about the way we should be approaching learning this new technology that you would take away from what you do with Duolingo? It certainly isn't often framed to us as being fun.

    I think the most important thing I would say for learning anything: It doesn't have to be fun. It just has to keep people motivated. There are multiple ways to keep people motivated. With Duolingo, we've chosen mainly fun. That's the main thing we've chosen, but you don't have to do that. For example, seeing results keeps people motivated. In the case of learning AI, I would say that's probably the better motivator: I'm going to learn AI, but the first thing I'm going to do is make myself a dashboard or a mini-dashboard or something. I think if you find the right motivation, that helps a lot.

    I have to ask you, because last year at this time you sent out this all-hands email about AI that rattled things a bit. No new hires unless teams showed that AI couldn’t do the job and existing employees [to be] assessed on their AI use. It really sparked this blowback on social and the stock price. You’re not unfamiliar with taking risks. Was this a bigger risk than you realized at the time?

    Absolutely. I did not think this was going to be controversial, because internally, inside Duolingo, this was not controversial. We started as a technology company. I used to be a computer science professor who actually taught the AI class at Carnegie Mellon University. We've always used AI inasmuch as we can. So internally, this was not controversial at all. Externally, I think I was not very clear, and given how I wrote it and without giving it more context, I opened it up to people thinking that what I was trying to do was to fire a lot of our employees. But that was never the intention. We've never done a layoff. We still have never done a layoff. In fact, last year when I sent that memo, we increased our number of employees, not decreased it. There was that misunderstanding because I think there's a lot of fear that AI is going to substitute jobs, et cetera. The way I see it is our employees are just way more productive if they use AI. And so, I actually want to hire more people because they can do more.

    It also seemed a little bit like you were sort of forcing people. You weren’t making it playful to learn how to use AI. It was almost like a bludgeon, which I guess wasn’t the intent necessarily either. But I do think it’s something a lot of folks are struggling with. How do you get the people on your team who are more resistant to this new technology to get on board?

    Yeah. The good news here is, at Duolingo, we hire a lot of people who are pretty young. We hire a lot of people who are straight out of college. We have just not found a lot of resistance here. Internally, we have this thing we call the golden rule of AI usage, which is you have to use AI for the benefit of our learners. Everything we do with AI should be for the benefit of our learners. For example, if it helps you be more efficient at putting out more features, that’s for the benefit of our learners. If we put out a feature that helps our learners learn better because they’re now, for example, interacting with an AI to practice conversation, that’s for the benefit of our learners. Sometimes when we use AI, we’re able to save costs, but that is not the goal of our usage of AI. That is an okay thing, but it is not the primary goal. It has to be about helping our learners versus we’re going to use AI to save $10 million. That’s just not all that motivating.

    There’ve been some reports about you kind of walking back some of the things you said in that memo. But you’re clearly still a believer in AI. There’s no doubt from you that sort of this is the direction the business has to continue to go.

    I’m a big believer in AI, but it definitely comes with asterisks and learnings. For example, my initial memo said that we were going to evaluate every employee on their usage of AI. I don’t think that was right. Many people came to me and said, “Look, for the job that I’m doing, I’m finding that I’m just using AI for AI’s sake because you’re going to evaluate me on that, but it’s not because I actually think that for this particular thing, we can do it better.” I think, ultimately, in the case of performance reviews, what you should evaluate is how much that particular employee is contributing to the company. It turns out for most of our employees, using AI helps them contribute more to the company. That is true.

    However, there may be cases, projects, or particular roles where it may just not help all that much. And so making the blanket statement, "We're going to evaluate you on the usage of AI," was not needed, and so we've removed that. The other thing that's important to mention when it comes to AI is that we're trying to use it as much as possible, but we really don't want to decrease quality. For some things, AI is quite ready to do high-quality stuff. For some things it's just not. And so, we're not going to decrease quality just for the sake of using AI.

    So where are you seeing AI not able to deliver that kind of quality?

    In a lot of places. For example, we hire a lot of artists and designers, and our app is very high craft on design, et cetera. We're just not seeing AI get to the level of creativity or the level of polish that our top people have, by any means. The other place where I think it's just not the highest quality: one of the biggest problems with AI is that it demos really well. What do I mean by that? It's just like, "Look, it can write a story," and if you see one story, it probably wrote a really good story, like the one story they showed you. And my God, it wrote a story.

    But … in our case, we may need to write 1,000 different stories for people to learn a language. Then, you'll find that, I don't know, 20% of the things were just pure slop. Whenever we scale a lot of things with AI, we have to really be careful that slop doesn't get through. And if the quality's just not high enough, even though AI is really nice in that it can do it pretty fast, we just don't go for it.
