March 31, 2023
Google CEO Sundar Pichai on the AI Moment: "You Will See Us Be Bold"

Here are some other highlights of Mr. Pichai’s comments:

On the initial, lukewarm reception of Google’s Bard chatbot:

We knew when we released Bard we wanted to be careful… So it’s not surprising to me that that’s the reaction. But somehow it feels like we took a souped-up Civic and kind of put it in a race with more powerful cars. And what surprised me is how well it works on many, many, many classes of questions. But we will iterate quickly. We clearly have more capable models. Pretty soon, maybe when this goes live, we’ll upgrade Bard to some of our more capable PaLM models, which will bring more features, whether it’s in reasoning, coding, it can answer math questions better. So you will see progress over the course of the next week.

On whether ChatGPT’s success came as a surprise:

With OpenAI we had a lot of context. There are some incredibly good people, some of whom had been at Google before, so we knew the caliber of the team. So I think OpenAI’s progress didn’t surprise us. I think with ChatGPT … you know, kudos to them for finding product-market fit. The reception from users I think was a pleasant surprise, maybe even for them, and for many of us.

On his concerns about tech companies racing against AI advances:

Sometimes I get worried when people use the words “race” and “being first.” I’ve thought about AI for a long time, and we’re definitely working on technology that will be incredibly beneficial, but clearly has the potential to cause harm in a profound way. And so I think it’s very important that we all take responsibility for how we deal with it.

On consulting with Google co-founders Larry Page and Sergey Brin:

I have had a few meetings with them. Sergey has been hanging out with our engineers for a while now. He is a deep mathematician and computer scientist. So for him, the underlying technology, I think if I were to use his words, he’d say it’s the most exciting thing he’s seen in his lifetime. So there’s all that excitement. And I’m happy. They have always said, “Call us whenever you need us.” And I call them.

On the open letter, signed by nearly 2,000 AI researchers and tech luminaries including Elon Musk, which called on companies to pause development of powerful AI systems for at least six months:

In this area, I think it is important to hear concerns. There are many caring people behind it, including people who have been thinking about AI for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. I think he’s always been worried. And I think there is reason to worry about it. While I may not agree with everything there, or with the details of how you would go about it, I believe the spirit of [the letter] is worth being out there.

If he’s worried about the danger of creating artificial general intelligence, or AGI, an AI that surpasses human intelligence:

When is AGI? What is it? How do you define it? When will we get there? These are all good questions. But to me it almost doesn’t matter, because it’s so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you’ve reached AGI or not; you will have systems that can provide benefits on a scale never seen before, and potentially cause real harm. Could we have an AI system that can cause disinformation at large scale? Yes. Is it AGI? It really doesn’t matter.

On why climate change gives him hope about AI:

One of the things that gives me hope about AI, like climate change, is that it affects everyone. We all live on one planet, and these are both issues that have similar characteristics, in the sense that you cannot unilaterally get safety in AI. By definition, it affects everyone. So that tells me that the collective will will eventually deal with all of this responsibly.

