things that confuse me about the current AI market

Since the release of ChatGPT, at least 17 companies, according to the Chatbot Arena Leaderboard, have developed AI models that outperform it. Since GPT-4’s revolutionary launch in March 2023, 15 different companies have created AI models that are smarter than it. These include both established AI companies (Meta, Google, Anthropic) and lesser-known ones (Reka AI, AI21 Labs, DeepSeek AI, Alibaba, Zhipu, Cohere, Nvidia, 01 AI, NexusFlow, Mistral, and xAI).

Twitter AI (now xAI), a company with seemingly no prior history of strong AI engineering, a small team, and limited resources, has somehow built the third-smartest AI in the world, apparently on par with the very best from OpenAI.

The top AI image generator, Flux AI, which is considered superior to the offerings from OpenAI and Google, has no Wikipedia page, barely any information available online, and seemingly almost no employees. The next best in class, Midjourney and Stable Diffusion, also operate with surprisingly small teams and limited resources.

I have to admit, I find this all quite confusing.

I expected companies with significant experience and investment in AI to be miles ahead of the competition. I also assumed that any new competitors would be well-funded and dedicated to catching up with the established leaders. In short, I thought intelligent large language models would be hard to build, and companies like OpenAI and Google would have a moat.

Understanding these dynamics seems important because they bear on the merits of things like a potential pause in AI development, or on whether China could outcompete the USA in AI. Moreover, as someone with a general interest in markets, I find the valuations of some of these companies potentially quite off.

So here are my questions, each pointing at a potential explanation for the dynamics we are seeing:

  1. Are the historically leading AI organizations—OpenAI, Anthropic, and Google—holding back their best models, making it appear as though there’s more parity in the market than there actually is?
  2. Is this apparent parity due to a mass exodus of employees from OpenAI, Anthropic, and Google to other companies, resulting in the diffusion of “secret sauce” ideas across the industry?
  3. Does this parity exist because other companies are simply piggybacking on Meta’s open-source AI model, which was made possible by Meta’s massive compute resources? That is, by fine-tuning that model, can other companies quickly create models comparable to the best?
  4. Is it plausible that once LLMs were validated and the core idea spread, it became surprisingly simple to build, allowing any company to quickly reach the frontier?
  5. Are AI image generators just really simple to develop but lack substantial economic reward, leading large companies to invest minimal resources into them?
  6. Could it be that legal challenges in building AI are so significant that big companies are hesitant to fully invest, making it appear as if smaller companies are outperforming them?
  7. And finally, why is OpenAI so valuable if it’s apparently so easy for other companies to build comparable tech? Conversely, why aren’t the lesser-known companies making leading LLMs valued higher?

Update:
I’ve now spoken with a number of knowledgeable people, as well as reviewed these threads on /r/slatestarcodex and LessWrong, and it seems that none of the above are meaningful explanations. Instead, it really does seem to be the case that building very intelligent AI is easier and more accessible than I thought.

As I’ve reflected on these questions, I’ve reached a new conclusion that challenges my earlier beliefs: very intelligent LLMs might actually be quite easy to build. Since ChatGPT’s release in November 2022 (based on a 2021 model), there seem to have been no meaningful proprietary improvements or innovations in LLMs. Building capable AI appears to require only a belief that it’s possible, knowledge of a simple architecture design, and access to compute and data.

This new understanding comes with the following implications:

  1. In the current paradigm, AI pauses aren’t possible without significantly impairing the commercial market.
  2. The US lead in AI isn’t a given, and not just China, but MOST countries could take the lead on AI development if they sufficiently cared to.
  3. Compute + energy is even more important than I thought.
  4. AI company valuations seem significantly less compelling, given the lack of a moat and the ease of entry.

I acknowledge that OpenAI claims to be working on a more advanced model with a brand-new architecture. Until there’s tangible evidence of this, however, I remain skeptical. Pouring more resources into AI development doesn’t guarantee that a meaningful algorithmic improvement is on the horizon, or that it must come from OpenAI.