The boom in AI startups is now creating enormous tailwinds for chipmakers and hardware producers.
While the first wave of venture capital largely flowed to generative AI startups and companies building large language models, it's now hardware manufacturers, chipmakers and microprocessor makers that are seeing an influx of capital.
Despite many uncertainties around artificial intelligence as a whole, one thing has become abundantly clear: there is significantly more infrastructure required to build the future of AI.
Who’s making moves:
- Arm, one of the world's leading chip designers, is set to IPO in one of the biggest initial public offerings in history, with an expected valuation somewhere between $60 billion and $70 billion
- Hugging Face, an open-source startup accelerating machine learning, recently raised $235 million in a Series D financing led by Salesforce, with participation from Nvidia, Google, Amazon, Intel, AMD and Sound Ventures
- Modular, a startup aiming to build the go-to platform for optimizing AI systems and reducing the cost of training AI models, just raised $100 million in a funding round led by General Catalyst, with other notable investment from Google Ventures, SV Angel, Greylock and Factory
- NeoLogic, an Israel-based startup, raised $8 million to build processors that offer higher computing power and better energy efficiency for machine learning and AI workloads
- With the exponential boom in AI startups seeking GPUs and cloud computing, there is a massive supply-and-demand imbalance that needs solving. As a consequence, new capital is being steered toward the infrastructure plays needed to power those startups.
- There are only two ways to get computing power for AI: owning your own GPUs or accessing them through cloud computing providers. Currently, both approaches have problems: the most popular GPUs are on massive backorder, and cloud computing companies are struggling to keep up with new demand.
- This is spurring all types of innovation, from next-generation GPUs to purpose-built chips for machine learning and quantum hardware designed to process large datasets more efficiently.
Why it matters:
- Any meaningful technology breakthrough that makes AI computing and machine learning more efficient will massively reduce the upfront capital investment required by startups and rapidly accelerate the pace of AI progress
- As more infrastructure becomes available, the ability to build startups and new AI technology without massive venture funding will lower the barrier to entry and enable more entrepreneurial ventures, allowing the best ideas to win regardless of capital constraints
The fine print:
- It’s unclear how long it will take for new cloud computing data centres to become operational given the backlog of demand for GPUs, or whether new chip technology will deliver the breakthroughs being promised, so many issues still underpin the current AI startup ecosystem
- Further, given that these are big technological challenges, vastly more venture funding may be required to bring new infrastructure to the scale needed, not just for today but for the future
- Currently, Nvidia (NASDAQ: NVDA) has a deep-rooted near-monopoly on GPUs, and until that changes, the supply and demand of AI compute is largely controlled by a single company. Given Nvidia's recent interest in funding startups itself, this creates a challenging paradigm for competing companies