CNBC

CNBC Exclusive: CNBC Transcript: OpenAI COO Brad Lightcap Speaks with CNBC’s David Faber on “Squawk on the Street” Today

CNBC

WHEN: Today, Monday, October 7, 2024

WHERE: CNBC’s “Squawk on the Street”  

Following is the unofficial transcript of a CNBC exclusive interview with OpenAI COO Brad Lightcap on CNBC’s “Squawk on the Street” (M-F, 9AM-11AM ET) today, Monday, October 7. A link to video on CNBC.com: https://www.cnbc.com/video/2024/10/07/openai-coo-breaks-down-apple-partnership-new-ai-models.html.

All references must be sourced to CNBC.

DAVID FABER: Joining us now, it’s a CNBC Exclusive, OpenAI’s Chief Operating Officer, Brad Lightcap. Brad, good to see you.

BRAD LIGHTCAP: Good to see you, too.

FABER: And nice to learn that you’re a Mets fan as well.

LIGHTCAP: That I am.

FABER: I could spend the entire interview on that, but I think the rest of our viewers might be a bit disappointed. You have a fairly large remit as COO, but certainly partnerships are one of the key things I know you’ve focused on. So I’d love to start with Apple. A lot of questions there, exactly what the relationship is in terms of your help with Apple Intelligence. What can you tell us about what the partnership means for OpenAI and what Apple conceivably will get out of it?

LIGHTCAP: Yeah. Well, hey, it’s great to be here. Good to see you again. Apple’s obviously an amazing company. We’re really thrilled to be part of Apple Intelligence. We think it’s a big step forward both for Apple. It’s a first step, but it’s a big step. It’s the first time that we’re going to have AI really start to integrate into the device. You know, if you think about kind of how people have used AI to date, it’s mostly been accessing it through, like, apps and, you know, really, really simple kind of services. But for the first time now, you can do really, really interesting things on the device. So you can actually take a picture, for example, through the camera and actually have ChatGPT understand what’s going on in that image. And that’s, you know, we think that’s going to create tremendous value for consumers. And for the 250 million people that use ChatGPT every week, we think it’s just going to be another way for them to experience it.

FABER: Yeah. So, you know, from a perspective, obviously, revenues of your company continue to go up sharply. But investors, your private investors certainly are also focused on margins over time. So how do you benefit economically here? This is not a relationship like, for example, the Google search relationship, where Google is paying Apple an enormous amount of money every year. That’s not the case here, correct?

LIGHTCAP: Correct. Yeah, we’re, look, we the way we think about it is really around product. We don’t yet know, I think, what the great product experiences with AI are going to look like. I think those are yet to be created. I said to our team the other day, I think it’s the most dynamic time in technology and the history of technology that I can think of, to be building product. It’s a phase shift, right? We’re at the beginning of an entirely new era for how software works. And so I think exploring that frontier and what it means to actually be able to interact with these devices in this very human-like way. All of that is yet to be discovered and that’s where we’re focused.

FABER: Right. And I understand that. But I’m trying to understand. I pay you guys $20 a month now.

LIGHTCAP: Thank you.

FABER: You’re welcome. And I’m enjoying it. But if I have Apple Intelligence on this phone, am I still going to want to pay you the 20 bucks a month?

LIGHTCAP: I think so. I think there’s going to be a range of workloads that you’re going to want AI to be able to take off your plate. Some of those are going to be very simple. They’re going to be on device. But a lot of them are going to be really big, complex workflows that you’re going to want to ship off to more powerful models. Good example, we recently released a new model called o1. It’s a different type of model. So in contrast to GPT-3, GPT-4, o1 is the first model that can actually reason. And so what that means is it actually can think about a problem. It can break down a complex problem into different steps. And so if you think about, like, even giving your AI a simple task like, you know, help me plan a trip. That’s a series of actual, you know, underlying tasks that you actually want more complex, higher sophistication intelligence to be able to handle. So that’s the type of experience we’re going to – for users.

FABER: So that would be more for the, for the pay model for GPT as opposed to potentially for Apple Intelligence. But you’re still going to be using it there. You know, I wonder when you think about when so many people who own Apple, for example, stock, think about, well, how is this going to change the nature of the interaction with the device with the use of apps? How do you think about it? Is it going to change significantly?

LIGHTCAP: I think it will. I think interfaces are going to change. I think we’re going to have a much more human-like interface, a much more interactive interface. One of the really cool things you’ve probably seen ChatGPT, you can now access our advanced voice mode. So it’s for the first time you actually have a system that you can talk to fluidly like you and I are talking right now. You can interrupt it. You can have it tell you jokes. You can make it sound more serious, more funny. Those interfaces will become the norm. I think, you know, a few years from now, it will be unusual that you’d walk up to a screen and not expect to be able to talk to it like a person and ask it to do arbitrarily complex things. And so we’re just at the beginning of this, but that’s what we’re really excited about.

FABER: Right. But again, I’m trying to understand then the differential because I’ll talk to Siri potentially at some point with that kind of an experience. But you think it won’t take me away from wanting to talk as well to your, to your ChatGPT.

LIGHTCAP: I think these will be everywhere.

FABER: You do?

LIGHTCAP: Yes.

FABER: Alright. Let’s talk about the Microsoft partnership as well, because they are obviously very instrumental, incredibly important to the company. As you transition from a not-for-profit to an equity ownership structure, I would assume the negotiations with Microsoft are important ones. What are your expectations there in terms of how that’s going to go on and how Microsoft ends up in terms of what their ownership structure or anything else is at the new OpenAI, which is a for-profit company?

LIGHTCAP: Yeah, well, look, Microsoft’s been an amazing partner. Our history with them dates back a long time. We’ve been through multiple versions of our partnership. We’re going to continue to evolve our partnership with Microsoft. I don’t see a future, you know, of AI being what it could be without Microsoft. And so bringing them, you know, kind of into that process and along for the journey is paramount for us. Satya is an amazing leader. And that company, by the way, has done amazing things to advance the field, you know, from the infrastructure they build to the types of products that they serve. There’s a — there’s a tremendously important place for them. OpenAI is growing, right. We have a business that we’ve got to grow. We’ve got now, you know, much more complex infrastructure, much more complex systems serving 250 million people every week. You know, many, many thousands of businesses, a million business users. So a lot of our thinking is really around how do we make sure, A, we can continue to set ourselves up to push the frontier on research? And then also really thinking a lot about –

FABER: Alright, how does Microsoft play into that? I know my question was about their rolling in, so to speak. Can you give us any details or not?

LIGHTCAP: Yeah, look, they’re going to be an amazing partner on both sides. We think that they’re critical to the research input, scaling infrastructure, scaling systems. We think there’s also some really interesting ways to collaborate around how we build product and go to market.

FABER: Do they have enough compute still for you, I mean, you know, you did that deal which included Oracle earlier in the summer. Is that because you guys are running out of Microsoft compute to use here?

LIGHTCAP: Well, certainly we think we need a lot of compute. And so we really look at, you know, our infrastructure is where can we find, you know, the best systems that are going to help us scale the models that we, that we want to scale? And, you know, we’ve got plans for that. We think Microsoft is a critical input to that.

FABER: Is the compute now, you know, the questions, is it being used to train the new models or is the inference taking up more compute than you guys thought? I guess I’m trying to understand because Sam Altman seems to be coming at it. You know, sometimes he’s talking about we need, my God, trillions and then other times perhaps not as much. Where do you guys sit right now in terms of how much compute is going to be needed and for what?

LIGHTCAP: So this is actually a keen observation. So the o1 models are really an entirely different regime for how AI models work. You mentioned something that was interesting, which is inference compute being now a critical input to how these models work. In effect, o1 models can think. So for the first time, you’ve got models now that can use compute when you actually ask them a question. And the more they can think, the more they can use compute, the better their answer gets. And so much like a human, the more time you can take to think about something, generally, the better the output is going to be. We think that paradigm is completely different. It has completely different implications for our infrastructure, completely different implications for how we scale compute. And I think we’re only just beginning to understand that.

FABER: Alright, give me a little more here then. I mean, what does, what does that mean for compute? Tell our viewers, does it mean you need more or less?

LIGHTCAP: We think we’re going to end up needing a lot more. We think that, you know, the types of workloads that you can do with these types of models are now these kind of arbitrarily complex and sophisticated workflows. I’ll give you an example. In manufacturing, a field that historically has not generally been well-penetrated by AI, you now have models that can do cross-referencing across technical manuals. They have deep technical understanding of infrastructure and critical, you know, critical pieces of manufacturing. And you can ask some complex questions like, I have a broken, you know, I have a broken machine. I have a broken, you know, piece of my, you know, my machinery. How do I go fix it, right? And that used to be a process, you know, in field operations that would take hours, days, weeks. You can now boil that down to minutes, right? And so it’s all because o1 can cross-reference what it’s seeing, you know, in a manual, reason about how to actually solve complex mechanical problems, and then can bring an answer to people in the field in real time right there. So we think this is going to be a transformation. And there’s really no way to kind of estimate what that upper bound is of use.

FABER: Right. But everybody who owns Nvidia just heard you say that, and they’re probably thinking, I might even want to own more. I mean, ultimately, it’s like boundless need for compute, it seems.

LIGHTCAP: Well, certainly we’re optimistic.

FABER: You’re optimistic. And even with, as you say, with the new models, even though they’re more efficient, they still consume even more.

LIGHTCAP: Again, you can throw arbitrary amounts of compute at these models, and they just get smarter. And they can think harder, they can solve harder problems, and we’re just at the beginning of that.

FABER: And then we’re going to get to AGI, aren’t we? I mean, where are — last time we talked at Milken back in May, I think I asked you, obviously, the company as a not-for-profit was to develop the most beneficial AI for humanity. I still hope you guys are thinking about that sometimes.

LIGHTCAP: Very much so.

FABER: When do we get to AGI, do you think, given what you just told me about the new models?

LIGHTCAP: I swore off making predictions in this field a long time ago, but we’re very excited.

FABER: Alright, $6.6 billion comes in, what do you use it for?

LIGHTCAP: For growth. You know, one of the amazing things about AI is we can invest very predictably in our research and in our business. We know now we can prove scientifically how these models will improve as you scale them up. So we know with every system we build, every model we train, we have some predictable ability to understand what that model can deliver in terms of capability and impact. And that gives us the confidence to continue to invest.

FABER: Is it mostly team now, or, you know, is it people?

LIGHTCAP: It’s going to be, it’s going to be primarily compute. That’s going to be the biggest.

FABER: It is going to still be compute—

LIGHTCAP: The big input, yep.

FABER: Yeah.

LIGHTCAP: Compute is the input to these models working. Of course, we are continuing to grow our team very responsibly, but really for us, it’s just knowing that you can continue to predictively invest and you’re going to get something out on the other end that is going to be demonstrably better than what you had.

FABER: And opening an office here in New York as well, I guess. That goes to the team part of this, right?

LIGHTCAP: That’s what they say.

FABER: Why?

LIGHTCAP: You know, we have critical mass here in New York of publisher partners, financial services, healthcare. New York is still this epicenter of industry we think is going to be heavily impacted by AI. And we want to be where our customers are. So, for us, working with customers, working with enterprises is a very hands-on process. It helps us to actually be close to customers to understand the business problems we’re solving. We work with, you know, a variety of companies, financial services in particular, very interesting. And for us, we want to make sure that we’re understanding our customers’ problems the way they understand them.

FABER: And finally, Brad, the transition, I think you guys have given yourselves as much as two years to transition from a not-for-profit to an equity ownership structure for profit. My understanding is that’s going to happen a lot more quickly than two years. Is that going to be the case?

LIGHTCAP: Yeah, so we’re looking at that. You know, as the company evolves, as the technology evolves, I expect our structure will evolve, too. The cool thing about OpenAI is it’s like we try and keep it this super dynamic company. I think we don’t see it as we just did one thing and then, you know, from here on out, it’s just kind of incremental growth. I think we see it fundamentally as our mission is we’ve got to usher in this technology that’s coming. The structure’s got to evolve. The company’s got to evolve. We’re ultimately going to evolve our products, too, and so more to come.

FABER: Let’s talk about dynamic. I mean, you’re there. You’ve been there for a long time. But there’s a lot of people who’ve left, and that’s raised some concerns. You know, what about this exodus of significant talent, which continues sort of every week?

LIGHTCAP: Well, companies evolve. You know, I’ve been around a long time in Silicon Valley, and I’ve seen teams turn over, and it’s natural. And so you expect people, you know, OpenAI is a hard place to work. We all work super, super hard, and, you know, it’s a natural evolution of the company that as we grow and as we hit different milestones and scale of the business and the research, we’re going to want new leaders. We’re going to want fresh ideas, people that can look at problems differently, product problems, business problems, technical problems. So we feel as good as we’ve ever felt. We’ve got an amazing team, you know, but we’ve got way more to come. And so we’re just focused on the future.

FABER: Alright, Brad, always appreciate you taking some time. Thank you.

LIGHTCAP: Thank you.

FABER: Brad Lightcap, COO of OpenAI.

For more information contact:

Jennifer Dauble

CNBC

t: 201.735.4721

m: 201.615.2787

e: jennifer.dauble@nbcuni.com

Stephanie Hirlemann

CNBC

m: 201.397.2838

e: Steph.Hirlemann@nbcuni.com