“The culture of Silicon Valley is authoritarian”
Journalist Karen Hao
Photo: Tony Luong
Interview by Julia Stanton
Ms. Hao, you have been covering Silicon Valley as a journalist for a long time. OpenAI, the company that launched ChatGPT and which you focus on in your book Empire of AI, is based there. OpenAI started out as a non-profit organization with a mission to benefit humanity. Now, almost ten years after its founding, it is an ultra-capitalist company that primarily pursues its own business interests. What has changed?
There are two ways to answer this question. One is that everything has changed. It was originally meant to be a fundamental research lab, and now it’s focused on aggressively commercializing products. Not only has it become increasingly closed off, but it has also set off a wider trend in the AI field — one toward greater secrecy about the true capabilities and limitations of these technologies.
On the other hand, you could argue that nothing has changed at all. From the very beginning, OpenAI was an egotistical project. When Altman and Musk founded it, their reaction to Google’s dominance was essentially: why should they be the ones to control a technology that will reshape the world? Why shouldn’t we be the ones to shape AI in our own image? That same mentality has persisted: the industry and the so-called AI race remain driven by billionaires, each determined to make AI an extension of themselves.
How would you describe the current culture in Silicon Valley, especially around AI?
When you go to parties with people who work in the field, the way they talk feels like it’s on an entirely different plane of reality. They use their own language, discussing the potential for AI to either devastate humanity — the so-called “doom probability” — or to usher in utopia. They make sweeping claims that aren’t substantiated by any evidence, scientific or otherwise. This has become the norm in how they think, how they view their work, and what motivates their day-to-day decisions about building these technologies. It’s an extremely insular, unscientific, and ultimately authoritarian culture — one that sees itself as uniquely entitled to shape a technology that will affect everyone on the planet.
In April 2025, several authors, including Daniel Kokotajlo, who worked at OpenAI until 2024, published the AI-2027 report, which states that AI could soon surpass human intelligence. How realistic do you think that prediction is?
The problem with these predictions is that we don’t actually know what human intelligence is. There’s no reliable way to quantify it. In fact, efforts to rank and classify people’s “intelligence” have often been used to justify oppression or even extermination. So if we can’t meaningfully quantify human intelligence, how can we ever know when AI has surpassed it? Some people already claim that moment has come. The whole debate is unmoored, because there’s no clarity about definitions or benchmarks. That’s why I describe these prophecies about AI surpassing human intelligence as quasi-religious.
The other problem is that “artificial intelligence” was originally a marketing term. We never used to think that calculators were more intelligent than humans just because they are better at math. Part of it is a kind of linguistic obfuscation: using the word “intelligence” has led people to make false comparisons that shouldn’t have been made in the first place.
"AI companies claim resources that are not their own"
When you finished writing "Empire of AI," Donald Trump hadn't yet been elected president. Now he has rolled back all regulation in the AI sector. What consequences will this have?
The Trump administration has uniquely supercharged Silicon Valley’s recklessness and imperial ambitions. They now have an ally in him—the most powerful man on Earth—who enables them to access as much capital, land, and energy as physically possible. They’re pouring trillions of dollars into building computational infrastructure across the globe. In four years, many of those projects will at least be partially completed. Then what? Even if a new president takes office, it will be difficult to reverse that momentum.
You compare OpenAI to a modern day empire. Why do you draw that connection?
The tactics these companies use are identical to the tactics of empires of old: they claim resources that are not their own, and they redefine the rules to suggest that those resources were always their own, for example by taking IP from artists and writers and declaring it completely fair use under copyright law. They exploit an extraordinary amount of labor the way that empires of old did. They are also causing a monopolization of knowledge production. These "AI empires" have essentially snapped up most of the AI researchers in the world. The very production of AI science, the science of the capabilities and limitations of these technologies, is being distorted based on what companies want the public to know. We are no longer getting a clear picture of what these systems really are and what they are capable of. Lastly, these empires push a narrative that there are "evil empires" in the world, and that they must therefore be a "good empire" to beat the "evil empire," a kind of civilizing mission for the benefit of humanity. The religious overtones of the AI world are also extremely colonial: in the same way that religion once justified imperial expansion, belief in “artificial general intelligence” is now being used to convince the world of the “salvation-bringing” effects of this technology and to portray all harm as a necessary sacrifice on the way to heaven.
"Everything an AI model will ever be capable of doing has to be taught to it by a person"
For your reporting you went to Kenya and Venezuela. What does the outsourcing of AI work look like in practice, and what consequences does it have for the communities living in those places?
Everything an AI model will ever be capable of doing has to be taught to it by a person. And yet the AI industry pays the people who do this work abysmally little. Many of these workers have no contracts, no guaranteed minimum pay, and no health care. Their livelihoods depend on whether tasks come in from these platforms or not. One woman I spoke to told me she could hardly live a normal life because she was tethered to her computer, waiting for tasks to appear. The platform pitted workers against each other: whoever clicked first got the job and the pay.
In the era of generative AI there’s an added layer of exploitation: the work itself has become deeply toxic. Content moderation, now essential because companies have chosen to train these models on vast amounts of unfiltered internet data, exposes workers to horrific material and leads to psychological trauma. In countries such as Kenya and Venezuela, tens of thousands of young, well-educated people are being pushed into these jobs. Their mental and physical health suffers because of it.
AI technology is not only affecting humans, but also the environment. What are the ecological costs, and what will the consequences be in five, ten, or twenty years?
The environmental consequences are manifold and deeply connected to public health consequences. The amount of data center infrastructure being built to support the training and deployment of these AI models is unlike anything we've ever seen before. It's leading to a historic increase in energy demand in countries where demand had been flat or even declining. A McKinsey report projected that the AI industry will require as much energy in the next five years as the UK currently consumes in total. Most of that will be serviced by fossil fuels.
On top of that, the data centers have to be cooled with fresh water, because other types of water lead to the corrosion of equipment. According to a Bloomberg investigation, two thirds of data centers are being built in areas where fresh water is already scarce. This will lead to more climate disasters, air pollution, and health problems. Another investigation, by Business Insider, found that in the US alone, public health costs from air pollution caused by the AI industry could amount to around ten billion dollars. Then, of course, in Chile I reported on the excavation of minerals, which is devastating extremely unique ecologies. It is shocking: habitats that could potentially enable new medical discoveries or scientific findings are being irreversibly destroyed.
In January 2025, the Chinese company DeepSeek released an AI-based chatbot that required much less energy. Doesn't that show it is possible to drive development forward without consuming huge amounts of energy?
DeepSeek showed the world what many researchers already knew. Before OpenAI shifted the entire paradigm in AI toward ever larger models with ever higher energy consumption, AI research was actually trending in the opposite direction. From 2018 to 2020, researchers were trying to build smaller, more efficient AI models for use on mobile devices. Within only four years, the field's conventional wisdom flipped 180 degrees. It was like collective amnesia. This is why I'm so critical of OpenAI's and Silicon Valley's current approach: it's wholly unnecessary from a technical and scientific perspective.
If this was possible, why did OpenAI decide not to pursue that route further?
Fundamental research is unpredictable. Science is nonlinear and can't be retrofitted to commercial cycles. OpenAI started out wanting to beat Google, so they picked an approach that was more predictable to them: if we want to beat Google, we need more data and bigger supercomputers. This was an intellectually cheap approach, but one with a guaranteed payoff. As soon as OpenAI did that, every other company followed. The race was on. Instead of unpredictable fundamental research, they focused on mass production: "pump more in." This is how they locked in their competitive advantage and secured a monopoly. If they built smaller models, there would be more competitors, because the entry barriers would be lower.
Do you think these large companies will change?
I think it would be unrealistic to expect them to change without external pressure. If that pressure is applied, change becomes possible. When I criticize these companies as empires, I'm not advocating their complete dissolution, but rather the elimination of their imperial characteristics. We need them to revert from empires back to normal companies. Traditional companies can only survive if they serve their customers and users effectively and if there is a fair exchange of value. To achieve this, people inside and outside the companies must work together. It's about building up enough pressure that change becomes an existential task for the company.
What policy measures could help make that happen?
One thing would be requiring companies to be fully transparent about what data they use to train their models. That would stop them from taking large amounts of intellectual property without compensating or crediting the creators of that IP. That would force a necessary exchange of value. If they have to pay, it is at least not simply appropriation, but a trade.
It is equally important to be transparent about how much energy is actually used to train and deploy these models. This has been extremely opaque. Researchers outside these companies are attempting to estimate the ecological footprint, but companies are refusing to provide the necessary information.
Another interesting development in the US has been the growing concern around chatbots and their impact on mental health, especially among teenagers. There have been cases of teens who have taken their own lives after extended conversations with chatbots. A current legislative proposal would make companies liable for such damages. That could be a way of forcing them to be more cautious when designing and introducing their systems.