AI will be the greatest centralizing technology we’ve ever seen, and we aren’t ready for it. By centralization, I mean that AI is more likely than not to concentrate wealth, power, and technical talent in the hands of a few companies and individuals rather than spreading them across society as a whole.
The first reason for this is that AI relies on big data. So far, all of America’s top models (GPT-3/4, Llama, Gemini) have relied on more and more parameters and more and more compute to get top-tier results. I don’t know whether the capabilities of LLMs scale like that forever (probably not), but I do see it hindering anyone working on AI who has limited access to training data. Collecting large amounts of data is something major tech companies have been doing for years, instrumenting their software products to hoover it up at every turn. I would argue they are uniquely good at it, too.
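For a sense of why "more parameters, more data" has kept winning, the empirical scaling-law literature models loss as a power law in parameter count and training tokens. Here is a rough sketch of that functional form; the constants below roughly follow the published Chinchilla fit, but treat them as illustrative, not something you should plan a training run around:

```python
# Sketch of a Chinchilla-style scaling law: loss falls as a power law in
# parameters (N) and training tokens (D). Constants are illustrative values
# in the ballpark of the published fit, not authoritative.

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Irreducible loss E plus power-law penalties for finite model/data size."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up both model and data lowers loss, but with diminishing returns:
# each 10x buys less improvement than the last one did.
small = predicted_loss(7e9, 1e12)     # ~7B params, ~1T tokens
large = predicted_loss(70e9, 10e12)   # ~70B params, ~10T tokens
assert large < small
```

The point for centralization: the loss only keeps falling if you can keep multiplying both axes, and data and compute at that scale are exactly what the incumbents already have.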
Big data, of course, is the second most centralizing technology we’ve ever seen. Meta, Google, and Microsoft have all used big data to stunning effect. They use it to sell targeted ads, to show you more relevant search results, to keep you ever more addicted to their platforms, and even just to make their user experience better than the competition’s. Because new and old data grow in value when you connect them together, being able to collect and store as much of it as possible has become incredibly lucrative. It’s a hoarder’s paradise.
Besides big data, the other component I mentioned, big compute, plays a massive role here. I imagine the investment meeting between Satya Nadella and Sam Altman going like this: Microsoft’s CEO walks into the boardroom and flops his giant FLOPs down onto the table. Sam Altman bows down and says “whatever you say, sir”.1 OpenAI’s ability to make mind-numbingly good machine learning models has ridden on the back of Azure’s equally mind-numbingly massive amount of compute. Huge tech companies, the so-called “hyperscalers” with the ability to run and maintain their own data centers, are not playing the same game as everybody else. For most companies, you either rent your compute from AWS at inflated costs, or you build it yourself. And building it yourself at scale is not so easy.
Even if you can build it, there’s no guarantee the GPUs will come. Nvidia’s and AMD’s valuations have shot up over the past year because they can charge massive margins while the market is so supply-strained. GPU makers can’t make these things fast enough to satiate the massive demand for AI-juicing GPUs. Looked at one way, the next step in human progress is currently bottlenecked by silicon production. Production will catch up eventually, but in a resource-constrained environment it’s hard for new players to front the capital the way the old dogs can. If one wanted to spin their own silicon instead, the only ones equipped for that are the tech companies with huge cash stacks. And this is very much something they are looking into. I’m talking Amazon’s Trainium, Google’s TPUs, and Microsoft’s mystery-meat silicon codenamed Maia.
All the prerequisite techs, all the ingredients that go into making AI, are things big tech has been using for years to consolidate wealth into the few major players we see today. Adding the AI seasoning on top only stands to accentuate the flavor of centralized capitalism we’ve already been tasting. Last year, the top 7 tech companies grew 74% compared to just 12% growth by all other companies. That’s the rest of the world’s companies, by the way, not just companies in the US. This trend isn’t something that popped up overnight. US tech has held economic hegemony for a long time, and so far AI doesn’t look prepped to reverse it.
Old vs New
At this point I’ve just been talking about existing companies and how they’re going to use this technology to strangle everybody else. But there are some new and small companies using this technology to make really exciting innovations. Recently, Mistral AI released their Mixtral model, a mixture-of-experts design that can match GPT 3.5 performance while activating only a fraction of its parameters for each token. Stability AI has also released models like Stable Diffusion, one of the first major open source models. Their open models have added some jazzy tech-enthusiast experimentation into the mix.
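The mixture-of-experts trick Mixtral popularized is simple at its core: a router sends each token to only the top-k experts, so per-token compute stays small even though total parameter count is large. Here’s a toy, weights-free sketch of that routing step (everything below is illustrative, not Mixtral’s actual code):

```python
# Toy sketch of top-k mixture-of-experts routing: each token runs through
# only k of the experts, weighted by a renormalized softmax over the gate
# scores. No real model weights involved; purely illustrative.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(gate_logits, k=2):
    """Pick the top-k experts for one token and renormalize their weights."""
    topk = sorted(range(len(gate_logits)),
                  key=lambda i: gate_logits[i], reverse=True)[:k]
    weights = softmax([gate_logits[i] for i in topk])
    return list(zip(topk, weights))

# 8 experts, but only 2 run per token -> roughly 2/8 of the FFN compute.
chosen = route_token([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3], k=2)
assert [i for i, _ in chosen] == [1, 4]  # experts with the two highest scores
```

That "pay for 2 experts, own 8" economics is what lets a small team punch above its compute budget at inference time, even if training the thing still takes serious hardware.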
The situation these companies find themselves in, however, is not so disco. New companies are forced to open source their tech just to attract the attention and funding they need to survive. This is great for tech enthusiasts, but it puts the companies in the sour position of trying to monetize their free and open tech. Anyone can use their models for free, but if they close-source later they risk losing all that goodwill in one fell swoop. OpenAI gets an exception here because they’re the top dog in the indussy and can basically do whatever they want. The real open source efforts are grinding away to create a GPT 3.5-capable model while sitting only a year behind. That’s a great accomplishment, but it might not matter if the closed source companies stay far enough ahead to offer AI that is meaningfully better for most customers.
Another problem for new companies is finding a platform to deploy their models on. If you’re making a Large Language Model, well, get in line. Making a new ChatGPT clone that’s not state of the art doesn’t really differentiate you. Regular old Chatty G is so last year. Where there’s still opportunity for AI is in niche applications of deep learning (not LLMs), and in great LLM integrations into existing products. New companies have no existing products to integrate with. Existing big tech companies, however, have massive ecosystems they can plug AI into. AI will be in your Word doc, it will help you sift through iOS photos, it will be in Google search to, ya know, search. New companies don’t have this existing infrastructure to juice up with their AI innovations.
That said, I do think some new companies will find success off the back of AI, but it probably won’t be with a mission of free access for everyone. Midjourney is a good example of a successful new AI company. They’ve managed to create state of the art image models since the beginning with a tiny team, and they’ve remained profitable throughout. So far they seem focused on image generation, but who knows. Unfortunately, the fact that their image bot’s messages still land in the same Discord inbox as invites to my cousin’s Minecraft server and dumb memes from the server I forgot to mute might speak to the limits of their current ambitions.
Self-Service
AI will be centralizing because people with access to it, and skill with it, will be so much more productive than those without that the gap becomes uncatchupable. One of the first and best mainstream applications for AI has been coding assistants. I don’t think I need to remind you who already employs large swaths of software developers. Engineers using AI are hugely more empowered than those without.
Consider this: a company like Microsoft develops an AI tool that is vastly better than the competition, or maybe just incrementally better, and simply doesn’t release it. They keep the tool internal, let their engineers run free with it, and reap the benefits without ever letting consumers or the competition touch it. Because AI is useful for everything (info retrieval, mistake checking, creative prompting), Microsoft will just have better engineers by definition, and this will trickle through to all their existing products. If the impact of AI is meaningful, and I think it will be, whoever develops the best AI tooling for their engineers first is going to take off like a rocket ship.
Now, do I think Microsoft is going to withhold GPT-5 from consumers just for their own use? Absolutely not. But they could use their direct access to the technology to build internal tooling at lower cost than their competitors. Take ChipNeMo as an example. ChipNeMo is an LLM that Nvidia fine-tuned to answer questions about their chip-design documentation. Nvidia’s Bill Dally has said the tool is primarily for helping newbie chip designers answer questions, “saving senior [engineer] time”. This might not sound like a big deal, but it is. Fear the company whose senior engineers have time on their hands.
With tools like ChipNeMo, Nvidia is well placed to use AI to make better GPUs to train better AI to make better GPUs to train better AI to make better GPUs and, I think I’m gonna throw up. They’ve gone recursive on our asses. And remember, Nvidia is using their own GPUs to create this for themselves. They’ve got GPUs to own, while many others are stuck renting. The playing field is not level. The self-service opportunities with AI are numerous and rich for those already with the resources.
The Sad Part
Okay, now here’s the sad part. AI is going to be a transformative technology, and whether the consequences land extremely good or extremely bad seems like a coin toss. Setting aside the utopia/apocalypse pendulum much of the discussion keeps swinging on, I think the most immediate effects of AI will come from massive disruption to the economy, and the resulting uncertainty. Sam Altman, someone with possibly the best vantage point on this situation, tried to address it years ago with Worldcoin. The idea behind Worldcoin is that a magical orb scans your retina (to verify that you are a human) and gives you free money that you will somehow be able to use at a later date. This is a great idea and was always destined to work (my eyes have rolled all the way to the ceiling and I think they’re stuck. Help). Maybe if the Worldcoin orb gave you shares in OpenAI it would have actually meant something, but now that Microsoft has OpenAI by the orbs I think that idea is a world away. Anyways, what Worldcoin was really reaching for was some variation on Universal Basic Income.
Okay, now here’s the actual sad part. UBI is nowhere close to socially or politically viable in the US. That’s true for a lot of reasons, but that’s another humorous tale. What it means, though, is that if half the economy is devoured by AI, there’s no fucking safety net. Artificial Intelligence is going to split the Earth and swallow us whole. So who captures all the value that AI produces/swallows if it isn’t redistributed to the people? Why, the shareholders, of course. That’s right, AI is going to supercharge the economy with C4 explosive (or the energy drink, if you prefer) and capitalism is going to have a front row seat the whole way. The keepers of the AI will become the unelected stewards of Earth, so far as the money is concerned.
What To Do
If AI really does do unspeakable things to our economy, what the hell are we supposed to do about it? Well, apart from hoping AI doesn’t perfectly replicate what you do for work, the next best thing is to bolt yourself to the rocket ship before it leaves us behind. Capital is drooling at the mouth, ready to exploit the opportunities this technology provides, and I think it may carve the only viable path to future financial stability for most people. For me, this means investing as early as possible in companies well placed to take advantage of AI. Those companies are the aforementioned “centralizing” US tech corps. Done right, it should look something like this:

As with all predictions, none of this is a sure thing. AI might also be the wackiest technology we’ve ever seen. Up-and-coming AI companies might still have an opportunity to make an impact with staying power in the market. AI development could slow down long enough that the playing field has time to level. Our social systems could get ahead of this and craft thoughtful solutions to bulwark us against the worst outcomes. Trust busting could come in and scatter engineering talent across the fertile soils of our tech industry. Some opportunities still exist to change the likely course of events.
To be clear: free, open, equal, and safe AI is exactly the type of thing I want. And again, that outcome is not out of the question. We could reverse course and overturn the immense pressure of capital to centralize all this stuff into the hands of a few technologists. It could instead be distributed equally, empowering individuals to be the most productive and creative we’ve ever been. We should keep pressuring these companies to grant equal access to these powerful models, and ideally their source code, as much as possible. Let’s not forget that 5 years ago you could not prompt the digital Oracle at Delphi to do meaningful work, and now you can. Whatever happens next will be something we’ve never seen before.
FLOPs stands for floating-point operations per second and is a measure of GPU performance. The joke only works if you know this.