I really enjoyed this interview with Sundar Pichai by John Collison and Elad Gil of Stripe.
Here are the five most interesting things I learned.
1. Search Will Still Exist In The Future, But Much Of It Will Be Agentic
Sundar was asked if agents would replace Search. He said:
“If I fast forward, a lot of what are just information-seeking queries will be agentic in Search. You’ll be completing tasks. You’ll have many threads running.”
He also said Search will evolve so that we think of it as an agent manager.
“It keeps evolving. Search will be an agent manager in which you’re doing a lot of things. I think, in some ways, you know, I use Antigravity today, and you know, you have a bunch of agents doing stuff, and I can see search doing versions of those things, and you’re getting a bunch of stuff done.”
He said that people do deep research in AI Mode, and it will soon be the norm to do long-running tasks. He also said that the form factor of devices will change.
2. Google Uses Antigravity Internally
Boy, do I love Google’s IDE and agent manager, Antigravity. I have built so many things with it, including my own RSS feed reader, a screenshot and annotation tool, workflows to publish things I write in a Google Doc to my WordPress site, and a bunch of tools to do agentic things with Google Search Console and Google Analytics 4 data. While I think Claude Cowork and Claude Code are incredible, I truly do prefer using Antigravity.
It turns out that Google makes good use of Antigravity internally. Except they don’t call it Antigravity. They call it “Jet Ski.”
Sundar said that Google DeepMind and Google's software engineers use it:
“I can see groups, and in particular I would say GDM and some of the SWE groups really change their workflows. They are using, we call this for some strange reason, we have a different name internally than externally of the same product, but it’s Jet Ski internally which is Antigravity. You’re living on it, you’re living in an agent manager world. You have workflows, and you’re working in this new way.”
He also uses it himself.
“I would query in Antigravity, in our internal version of Antigravity. “Hey, we launched this thing. What did people think about this? Tell me the worst five things people are talking about?” and I type that. Now that brings it back. Has my life gotten easier? Yes. In the past I would have to spend a lot more time trying to get a sense for it. Now an AI agent is helping me in that journey.”
Also, just last week, the Google Search team started using Antigravity.
“Just last week we rolled it [Antigravity] out to the Search team. We’re constantly pushing that. In a large organization, I think change management is a hard aspect of this technology diffusing, which may be easy for a small company. You can quickly switch over.”
If you want to learn how to use Antigravity, I’ve created a full guide teaching you how it works, and how I use it not only to code, but to create full agentic workflows that I actually use in my day-to-day work. It’s available in the paid part of my community, The Search Bar. And next Thursday, the Search Bar Pro crew is having an event where we’re going to split into two teams, Team Claude Code and Team Antigravity, and see who can build the better SEO tool.
I know it’s a bit of a pain to bring something new into your workflows. But I firmly believe that those who learn how to use Antigravity today will have a big advantage as AI improves and things really start to take off.
3. Robotics Is Growing Fast
Sundar admitted that Google was previously too early to robotics. AI has become the missing ingredient for ideas conceived 10 to 15 years ago. The Gemini Robotics models have reached state-of-the-art status for spatial reasoning, and Google has partnered again with Boston Dynamics, Agile, and a few other companies.
Most interesting to me was the discussion on Wing for drone delivery.
“I think we are scaling up Wing where in some reasonable time period, 40 million Americans will have access to a Wing delivery service. I’m not talking years out or something like that.”
When asked if Google was going to do more to build hardware, Sundar said having first-party hardware for robotics and AI would be important.
“I think we’d keep a very open mind. My lesson from Waymo and on the AI side with TPUs, et cetera, I need to really push the curve well, particularly in areas where you have safety, regulatory, everything. You want the first hand experience of the product feedback cycle. I think having first party hardware will end up being very important.”
4. Agentic OpenClaw-Like Systems Are The Future
There’s a reason why OpenClaw (initially Clawdbot) went crazy viral a few weeks ago. I still haven’t set up an OpenClaw system because I don’t feel I know enough about security to make this system safe.
When Sundar was asked if something OpenClaw-like was coming from Google, he said he thought it was the future.
“I think you want to give users capability where you have persistent long-running tasks in a reliable, secure way. You have to think through things like identity, access, et cetera. But I think that’s the future. That’s the agentic future. And bringing that for consumers is a bit of an exciting frontier we are looking at. This is one of mine too.
I think effectively the consumer interfaces are going to have full coding models underneath, and the right harnesses and the right skills and the ability to persist and run somewhere securely in the cloud, locally and in the cloud. All those primitives are coming together.
Today I feel like there’s 1% of the world, maybe not 1%, 0.1% of the world who’s living this future. They are building stuff for themselves, but bringing that to mass adoption. Yes. It is a very exciting frontier I think.”
As I am writing this, Google DeepMind has just tweeted out instructions for using their new local open model Gemma 4 with OpenClaw. A new way of communicating with our machines is starting to unfold!
5. AI And AI Agents Are Going To Improve Dramatically In 2027
Sundar was asked when agentic systems would be able to work fully without a human in the loop. He said twice that 2027 was likely to be a big year.
“I definitely expect in some of these areas ’27 to be an important inflection point for certain things. Even the people doing it, that is the workflow through which they would produce it. Maybe for a while you would check it in the conventional way, but you switch over, a crossover. But I expect ’27 to be a big year in which some of those shifts happen pretty profoundly.”
The interview finished with Sundar talking about what he was most excited about. He did mention that putting data centers in space was very exciting, but this last bit was super interesting.
“I literally spent time yesterday with someone who was explaining some improvement in post-training, which is one person talking through the improvement they are doing. Listening to it, I’m like, “Oh, it’s going to really show up as a nice jump.” That’s the constant power of this moment. All of that, I don’t want to be specific about the second one, but we’ll publish it one day I’m sure.”
It sounds to me like he is talking about agentic self-improvement.
We are currently learning how to have AI build and do things for us. I recall first learning to code with ChatGPT as a partner. It would give me code to paste into VS Code. Then I’d run it and paste the errors back into ChatGPT. We went back and forth until something actually worked. I felt like the unnecessary part of the process: a copy-and-paste robot. Sure enough, today’s systems like Antigravity, Claude Code, and ChatGPT Codex run the code, check the errors, and fix things without much need for human involvement.
It makes sense to me that the next step in this process is to have AI systems learn to improve their usefulness without us having to prompt them specifically. I expect that when this happens, we will see even faster progression of AI capabilities and usefulness!
More Resources:
Read Marie’s newsletter, AI News You Can Use. Subscribe now.
Featured Image: isasoulart/Shutterstock