The other day, I came across a post on X about AI's future. Unfortunately, it offered a lot of AI hype without a lot of substance.
OpenAI CEO Sam Altman did not offer a fully fleshed-out timeline for AI when he testified at the Senate hearing on AI competitiveness earlier this month.
There was no roadmap given. That wasn’t the purpose of this bipartisan hearing anyway.
But what he did say tells us a lot about where AI is heading.
I watched the entire hearing, and here are my most important takeaways…
Sam Altman In His Own Words
The hearing on May 8 was the Senate’s largest since President Trump returned to office.
It was called “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation.” And it reflected the Trump administration’s push to roll back Biden-era rules and eliminate regulatory barriers to AI innovation.
Altman was joined by Microsoft President Brad Smith, AMD CEO Dr. Lisa Su and CoreWeave CEO Michael Intrator in what was a mostly positive hearing.
But it was also clear early on that Altman has changed his stance on regulation.
Two years ago, he said he was open to regulation. Today, he is apparently more concerned with legal clarity than with the imposition of more rules.
Altman testified: [Editor’s Note: All testimony is slightly edited for clarity and punctuation.]
We need to make sure that companies like OpenAI and others have legal clarity on how we’re going to operate.
Of course there will be rules. Of course there need to be some guardrails. This is a very impactful technology, but we need to be able to be competitive globally. We need to be able to train, we need to be able to understand how we’re going to offer services and sort of where the rules of the road are going to be.
So clarity there, and I think an approach like [with] the internet, which did lead to [the] flourishing of this country in a very big way. We need that again.
This was a common refrain from all of the witnesses, who urged lawmakers to take a hands-off approach to AI.
Republicans, including Sen. Ted Cruz, echoed this view, warning against European-style rules that could hinder U.S. competitiveness with China.
Cruz was also particularly curious about the impact of China’s DeepSeek, asking: “How big a deal was DeepSeek? Is it a major seismic, shocking development from China? Is it not that big a deal? Is it somewhere in between…?”
Altman replied:
Not a huge deal.
There are two things about DeepSeek. One is that they made a good open-source model and the other is that they made a consumer app that for the first time briefly surpassed ChatGPT as the most downloaded AI tool, maybe the most downloaded app.
Overall, there are going to be a lot of good open source models and clearly there are incredibly talented people working at DeepSeek doing great research, so I’d expect more great models to come. Hopefully.
Also us and some of our colleagues will put out great models too on the consumer app. I think if the DeepSeek consumer app looked like it was going to beat ChatGPT and our American colleagues’ apps — [to become] the default AI systems that people use — that would be bad. But that does not currently look to us like what’s happening.
Does that mean Altman believes the U.S. is leading in AI development? He testified:
I believe we are leading the world right now. I believe we’ll continue to do so. We want to make AI in the United States and we want the whole world… to benefit from that. I think that is the strongest thing for the United States.
But he and the rest of the tech leaders called for greater investment in AI infrastructure and workforce training, two things I’ve been talking about a lot in the Daily Disruptor.
I believe the only way we beat China in the race to artificial superintelligence (ASI) is if we establish an infrastructure that supports our growing need for more power and compute.
Altman seems to agree.
When asked by Sen. Dan Sullivan: “What would the key things be that you would need from the U.S. government to help us maintain that lead and dominate this space?” Altman replied:
We’ve talked a little bit about infrastructure, but I think we cannot overstate how important that is and the ability to have that whole supply chain or as much of it as possible in the United States. The previous technological revolutions have also been about infrastructure and the supply chain, but AI is different in terms of the magnitude of resources that we need.
So projects like Stargate that we’re doing in the U.S., things like bringing chip manufacturing, certainly chip design to the U.S., permitting power quickly, like these are critical. If we don’t get this right, I don’t think anything else we do can help.
When Sen. Gary Peters pivoted the discussion to AI’s impact on jobs, Altman brought up the importance of workforce training, saying:
The most important thing or one of the most important things I think we can do is to put tools in the hands of people early.
We have a principle that we call iterative deployment. We want people to be getting used to this technology as it’s developed.
We’ve been doing this now for almost five years, since our first product launch, as society and this technology co-evolve: putting great, capable tools in the hands of a lot of people and letting them figure out the new things that they’re going to do and create for each other and come up with, and provide sort of value back to the world…
As for the future of work?
Altman focused on how AI is already changing software development, something we also talked about recently.
I don’t think we can imagine the jobs on the other side of this, but even if you look today at what’s happening with programming, which I’ll pick because it’s sort of my background and near and dear to my heart.
What it means to be a programmer and an effective programmer in May of 2025 is very different than what it meant last time I was here in May of 2023.
These tools have really changed what a programmer is capable of [and] the amount of code and software that the world is going to get. And it’s not like people don’t hire software engineers anymore. They work in a different way and they’re way more [productive].
Here’s My Take
I don’t agree with everything Sam Altman has ever said or done, but I do find him to be a reasonable voice about where we are with AI today, and where we’re headed in the future.
When Sen. John Fetterman asked Altman about the singularity — what I call ASI — here’s what he said:
I am incredibly excited about the rate of progress, but I also am cautious and I would say, I dunno, I feel small next to it or something.
I think this is beyond something that we all fully yet understand where it’s going to go…
I do think things are going to change quite substantially. I think humans have a wonderful ability to adapt and things that seem amazing will become the new normal very quickly.
We’ll figure [out] how to use these tools to just do things we could never do before and I think it will be quite extraordinary. But these are going to be tools that are capable of things that we can’t quite wrap our heads around…
It feels like a sort of new era of human history, and I think it’s tremendously exciting that we get to live through that and we can make it a wonderful thing, but we’ve got to approach it with humility and some caution.
I’m not sure I could have said it better.
Regards,
Ian King
Chief Strategist, Banyan Hill Publishing
Editor’s Note: We’d love to hear from you!
If you want to share your thoughts or suggestions about the Daily Disruptor, or if there are any specific topics you’d like us to cover, just send an email to [email protected].
Don’t worry, we won’t reveal your full name in the event we publish a response. So feel free to comment away!