“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” – John McCarthy, 1956

Artificial intelligence (AI) aims to create machines that can perform tasks typically requiring human intelligence, such as learning, reasoning, problem-solving, perception, and natural language processing. The goal is systems that can analyze and interpret vast amounts of data, recognize patterns, and make decisions based on that analysis without human intervention.


Iterative learning allows machines to improve their performance over time, yielding more capable and effective systems. By automating complex tasks, these systems can enhance human productivity, efficiency, and quality of life, and help us solve some of the world's most pressing problems. In short, AI is critical to human advancement.

The world around us is changing rapidly, and the pace of technological advancement is accelerating. Every day we witness breakthroughs in fields ranging from medicine to entertainment, made possible by the power of artificial intelligence. One such breakthrough is the advent of ChatGPT, a chatbot built on a cutting-edge large language model that has the potential to revolutionize the way we create and consume media.

However, as we move into this new age of possibility, we must also acknowledge the challenges that come with progress in AI, chief among them its enormous appetite for computing power. Generative AI will demand computational capacity orders of magnitude beyond what all of the world's data centers combined can supply today. The only way to access that much computational power is to tap into the hundreds of millions of latent GPUs that consumers have around the world. Scaling true intelligence therefore requires a network that allows complex GPU-based jobs to be split up, distributed, and processed across a large peer-to-peer network.
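To make that idea concrete, here is a minimal sketch of how such a network might carve one large GPU job into independent chunks and assign them to peers in proportion to their advertised throughput. All names here (PeerNode, split_job, dispatch) are hypothetical illustrations for this post, not the API of any existing network.

```python
import random
from dataclasses import dataclass

@dataclass
class PeerNode:
    """A consumer machine offering its idle GPU to the network (hypothetical)."""
    peer_id: str
    tflops: float  # advertised GPU throughput

def split_job(total_work_units: int, chunk_size: int) -> list[range]:
    """Carve one large GPU job into independent chunks peers can run in parallel."""
    return [range(i, min(i + chunk_size, total_work_units))
            for i in range(0, total_work_units, chunk_size)]

def dispatch(chunks: list[range], peers: list[PeerNode]) -> dict[str, list[range]]:
    """Assign chunks to peers, giving faster GPUs proportionally more work."""
    weights = [p.tflops for p in peers]
    assignments: dict[str, list[range]] = {p.peer_id: [] for p in peers}
    for chunk in chunks:
        # Weighted choice: a peer with twice the TFLOPS gets ~twice the chunks.
        pick = random.choices(peers, weights=weights)[0]
        assignments[pick.peer_id].append(chunk)
    return assignments

peers = [PeerNode("alice", 35.6), PeerNode("bob", 13.4), PeerNode("carol", 40.0)]
plan = dispatch(split_job(total_work_units=10_000, chunk_size=500), peers)
print({pid: len(chs) for pid, chs in plan.items()})
```

A production network would also need result verification and fault tolerance for peers that drop offline, but the core pattern is this simple split-and-dispatch loop.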

As an investor, I’m always on the lookout for technologies that have the potential to disrupt and transform industries, and decentralized computing in AI and ML is one such area with huge potential.

The three trends underlying the investment case for decentralized computing in AI and ML

1. Political tensions driving GPU supply constraints

Semiconductor production rests on a stack of interlocking complexities: mechanical, physical, chemical, logistical, and commercial. While US policymakers have committed to addressing supply by supporting the build-out of domestic fabrication capacity, geopolitical tensions still threaten longer-term supply chain security. Semiconductors are an essential component of modern technology, so Taiwan's dominance of the semiconductor foundry market is a source of political and economic leverage in the great power rivalry between the US and China.

At the country level, Taiwan accounts for 63% of the semiconductor foundry market. In addition to producing the most chips, Taiwan's foundries (including TSMC) produce roughly 92% of the world's most advanced chips. Taiwan's single-source dominance of the semiconductor business, maintained under the constant threat of cross-strait military confrontation with Beijing, represents a choke point in the global supply chain of the entire computer industry.

2. Rapidly increasing GPU-compute requirements

The majority of the world's top supercomputers are powered by GPUs, which also drive enterprise use cases such as deep learning, analytics, computational finance, manufacturing, construction, and business process optimization. ML models are becoming more complex and harder to train at scale. The advent of the transformer architecture and its application to language modeling has driven computational requirements to double roughly every 3-6 months in recent years, underscoring the urgency of improvements in model efficiency. Because the compute required to build ever larger parameter models is growing far faster than Moore's Law, ever more GPUs are needed.
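Some back-of-envelope arithmetic shows how stark that gap is. Using the doubling periods cited above (a 3-6 month doubling for training compute versus roughly 24 months for Moore's Law):

```python
# Annualized growth implied by a given doubling period (all figures approximate).
def annual_growth(doubling_months: float) -> float:
    """Growth factor over 12 months given a doubling period in months."""
    return 2 ** (12 / doubling_months)

print(f"Compute doubling every 3 months  -> {annual_growth(3):.0f}x per year")   # 16x
print(f"Compute doubling every 6 months  -> {annual_growth(6):.0f}x per year")   # 4x
print(f"Moore's Law, ~24-month doubling  -> {annual_growth(24):.1f}x per year")  # ~1.4x
```

Even at the slow end of that range, demand for compute grows several times faster than transistor density, and the difference compounds every year.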

Current infrastructure can't keep up with demand. Based on market research it conducted in 2017, the RNDR team found that the number of GPUs in the public cloud was a fraction of the number of GPUs in circulation. Public cloud providers are unable to expand fast enough and offer more competitive prices due to high upfront capital expenditure, rapid GPU hardware depreciation, and licensing terms that bar them from deploying consumer-grade hardware. Building a large GPU data center is already a strenuous and expensive task, and the global semiconductor chip shortage makes it even more onerous.

3. Market incumbency business model arbitrage

It is widely known that renting GPUs from any of the major cloud providers, such as AWS, GCP, and Azure, is expensive, and it's difficult to rent them in large quantities. Despite burgeoning market demand, cloud providers have been unable to respond with lower pricing at scale because Nvidia changed its end-user license agreement to prohibit its flagship consumer graphics cards (e.g., the RTX 2080 or RTX 3090 Ti) from being deployed in data centers, effectively forcing data center owners to buy its much more expensive enterprise GPUs, designed specifically for data center applications.

Nvidia's enterprise-grade GPUs, sold under the Tesla and Quadro brands, can cost 10-15x as much while providing only 20-25% more computational power. That gives consumer GPUs at least 5x better performance per dollar than their very similar but massively marked-up enterprise variants. This creates a value arbitrage opportunity: unlocking hundreds of millions of otherwise latent consumer-grade GPUs could dramatically reduce the cost of GPU-based computation.
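Working through the figures above makes the arbitrage explicit; the 5x figure is in fact the conservative end of what these price and performance ratios imply:

```python
# Implied performance-per-dollar gap, using the ranges cited above.
# price_ratio = enterprise price / consumer price
# perf_ratio  = enterprise performance / consumer performance
for price_ratio, perf_ratio in [(10, 1.25), (15, 1.20)]:
    consumer_advantage = price_ratio / perf_ratio
    print(f"{price_ratio}x the price for {perf_ratio:.2f}x the performance "
          f"-> consumer GPU delivers ~{consumer_advantage:.0f}x more perf per dollar")
```

At 10x the price for 25% more performance, the consumer card delivers roughly 8x the performance per dollar; at 15x the price, roughly 12x.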

Decentralized computing has the potential to revolutionize the way we approach artificial intelligence and machine learning. By distributing computing tasks across multiple nodes, we can harness the power of the collective to train larger, more accurate models and perform complex calculations more efficiently. This opens up new possibilities for innovation in fields such as natural language processing, computer vision, and autonomous systems.
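One generic pattern for distributing training across nodes is data parallelism: each node computes gradients on its own shard of the data, and the gradients are averaged before every update. The toy example below sketches the idea on a linear-regression problem; it illustrates the general technique, not the protocol of any particular network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover w_true from noisy observations y = X @ w_true + noise.
w_true = np.array([2.0, -3.0])
X = rng.normal(size=(1_000, 2))
y = X @ w_true + 0.1 * rng.normal(size=1_000)

# Shard the dataset across 4 "nodes" (modeled here as equal array slices).
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

w = np.zeros(2)
for step in range(200):
    # Each node computes the mean-squared-error gradient on its own shard...
    grads = [2 * Xs.T @ (Xs @ w - ys) / len(ys) for Xs, ys in shards]
    # ...and the averaged gradient is applied, matching full-dataset training.
    w -= 0.1 * np.mean(grads, axis=0)

print(w)  # converges to approximately [2.0, -3.0]
```

Real distributed training frameworks perform the same averaging step (typically via all-reduce), with added engineering for compression, fault tolerance, and result verification when the nodes are untrusted peers.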

An idealized business model is a network that pools and coordinates decentralized GPU supply and demand, providing far greater GPU cloud computational capacity than the centralized GPU cloud can offer. By running the network, participants create the economic efficiencies that make generative AI production viable and scalable for the first time.
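In miniature, such coordination is a matching problem: fill each compute request from the cheapest available offers. The sketch below is a hypothetical illustration of that mechanism (the Offer fields and greedy match are assumptions for the example, not any real marketplace's design):

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """A peer offering idle GPU time on the network (hypothetical)."""
    peer_id: str
    tflops: float
    price_per_hour: float  # asking price in some unit of account

def match(offers: list[Offer], tflops_needed: float) -> list[Offer]:
    """Greedily fill a compute request from the cheapest offers per TFLOP."""
    selected, acquired = [], 0.0
    for offer in sorted(offers, key=lambda o: o.price_per_hour / o.tflops):
        if acquired >= tflops_needed:
            break
        selected.append(offer)
        acquired += offer.tflops
    return selected

offers = [Offer("a", 35.6, 0.90), Offer("b", 13.4, 0.20), Offer("c", 82.6, 1.50)]
for o in match(offers, tflops_needed=60):
    print(o.peer_id, f"{o.price_per_hour / o.tflops:.4f} per TFLOP-hour")
```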

Moreover, the decentralized nature of this approach can make it more secure and resilient than traditional, centralized systems. With no single point of failure, AI and ML applications can remain operational even in the face of unexpected disruptions. By investing in decentralized computing in AI and ML, we have the opportunity to shape the future of these fields and create new opportunities for growth and impact.

Overall, the growth in the market for “GPUs as a service” and cloud computing is driven by a combination of cost savings, improved accessibility, and the potential for disruptive innovation. This makes it an attractive opportunity for investors looking for exposure to growth in the technology sector.