The Ongoing Race for Open-source Artificial Intelligence Innovation – Recent Developments
This week we have lots to cover: everything from Apple developing a small on-device LLM to Google researchers nudging academia away from LLMs, complete with a nifty $1 million prize. I'll try to summarize some observations on these developments.
Turning Tables in AI Research and Development
In an attempt to make non-industry, non-commercial standards relevant, Mike Knoop and François Chollet have unveiled a new initiative, the ARC Prize: a $1 million competition (which appears to be self-financed via a non-profit) centered on promoting the creation of an AI that can beat the ARC-AGI benchmark introduced in 2019.
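For readers unfamiliar with the benchmark, here is a rough sketch of what an ARC-style task looks like. This is a toy illustration only: it assumes the publicly documented JSON layout of "train" and "test" grid pairs, and the mirror rule and solve function are invented for this example, not taken from the actual benchmark.

```python
# A minimal, hypothetical sketch of an ARC-style task.
# Real ARC-AGI tasks are JSON files with "train" and "test" lists of
# input/output grid pairs (small 2D arrays of integers 0-9, one per color).
# The toy rule below (mirror each row left-to-right) is invented for illustration.

from typing import List

Grid = List[List[int]]

toy_task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 0, 0], [0, 4, 0]], "output": [[0, 0, 3], [0, 4, 0]]},
    ],
    "test": [
        {"input": [[5, 0], [0, 6]]},  # the solver must predict this output grid
    ],
}

def solve(grid: Grid) -> Grid:
    """Hand-written solver for the toy rule: mirror each row left-to-right."""
    return [list(reversed(row)) for row in grid]

# Check the candidate rule against the demonstration pairs, then apply it to the test input.
assert all(solve(pair["input"]) == pair["output"] for pair in toy_task["train"])
print(solve(toy_task["test"][0]["input"]))  # -> [[0, 5], [6, 0]]
```

The point of the benchmark is that each task demonstrates a novel rule from only a handful of examples, so a solver cannot simply memorize its way through.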
This initiative intends to expedite open-source research toward Artificial General Intelligence (AGI). In our view, this is a clever way to draw some attention away from the OpenAI-styled solutions of the world; some researchers want the field to be less about NVIDIA-powered brute force paired with clever stochastic parrots and more about clever engineering that is interpretable to them.
We should note that, prior to the exponential growth in public interest in AI, measuring model performance and progress was largely an academic-corporate affair in which researchers paid to play. This fairly narrow group, with the help of their companies, prior educational access, and corporate sponsorship, could set the pace and define advances in the field.
Now, for better or worse, OpenAI has redefined what is relevant to the general public, companies, and incoming researchers. They set the pace, but they have also democratized access to some clever tools and platforms. Priorities are now driven by fun and relevance rather than engineering specs, which are undoubtedly still important.
In our view, the Chomsky-style argument that better and better simulations of linguistic behavior (or any phenomenon) are not science would apply to both factions of the industry/academia complex around AI. Thus, I am rather impartial as to who gains the industry and media advantage; I just want a platform that lets me build stuff easily. In this sense, OpenAI has been very helpful, as has the Google Cloud Platform.
Google’s Researchers Throw Shade at OpenAI
Episode with @fchollet and @mikeknoop is out!
We debate whether LLMs are on track to AGI.
And talk about the $1 million ARC-AGI Prize they’re launching today.
Had a ton of fun chatting through the cruxes.
Links to episode and prize below.
Enjoy! pic.twitter.com/XdF8zPVHUy
— Dwarkesh Patel (@dwarkesh_sp) June 11, 2024
As implied, not everyone views OpenAI in a favorable light. Some critics argue that OpenAI has inflicted severe damage on open-source AI research. Despite the criticisms, the importance of open source in democratizing AI research and development is widely acknowledged. There is also a technical argument that LLMs are essentially scaled-up databases that mimic intelligent behavior. Nonetheless, whatever the cause, the fact that Large Language Models perform so well on useful tasks is difficult to negate as a virtue.