A Call for Open Source, Not Overreach in the Age of Artificial General Intelligence

By Turing
[Image: a clenched artificial fist]

When it comes to the rapidly developing field of artificial general intelligence (AGI), one thing is certain: the stakes are high. The alarm bells, rung by figures such as OpenAI’s CEO Sam Altman, are resounding with a clear message: regulate AGI or face dire consequences. Altman, who recently testified before a U.S. Senate subcommittee on privacy, technology, and the law, is pressing for AI companies to be ‘licensed’ and ‘registered’ and for the development of “regulations that incentivise AI safety”.

Altman’s concerns are not entirely misplaced. The misuse of AGI is a pressing concern, particularly its capacity to influence democratic processes, as is the concentration of AI power in the hands of a few tech behemoths. However, there is a counter-argument to be made. Could the very regulations intended to protect us inadvertently stifle innovation, limit the democratisation of AGI, and paradoxically enhance the power of the very tech giants they seek to rein in?

The recent buzz around open-sourcing AGI seems to offer a compelling alternative to strict governmental regulation. Advocates for this approach argue that by making AGI models openly available, we encourage a collaborative effort to mitigate risks, promote innovation, and democratise access to this powerful technology. By making AGI an open book, we also ensure greater transparency and allow more eyes to look out for potential misuse or malevolent behaviour.

The call for open-source AGI is not an argument against all forms of oversight, but rather a pushback against undue governmental overreach. The fear is that heavy-handed regulation could stifle the very innovation that makes AGI a transformative force. By putting too many constraints on AI companies, we run the risk of hampering their ability to innovate, while giving an unfair advantage to the large tech companies that have the resources to navigate complex regulatory landscapes.

Moreover, the idea of an international regulatory body or agency, as proposed by Altman, while noble in theory, may prove challenging in practice. It presupposes a level of international cooperation that has been notably absent in the tech realm. It is one thing to agree that regulation is necessary; it is quite another to agree on the specifics of that regulation across different nations with varying interests and norms.

Open-source AGI, on the other hand, levels the playing field. It enables a wider range of stakeholders to participate in the development, use, and oversight of AGI. By making the technology available to all, we not only democratise access but also enhance the collective ability to detect and mitigate risks. A global community of developers, ethicists, and users can provide a more robust and flexible system of checks and balances than a single, centralised regulatory body.

In an ideal world, we would have both effective regulation and open-source AGI. But as the EU’s AI Act shows, attempting to ‘boil the ocean’ with ambitious regulation can often lead to cautious, slow-moving policy that lags behind the pace of technological advancement. As the race for AGI intensifies, we need agile, forward-looking strategies to manage the risks. Open-source AGI offers such a strategy.

The message, then, is clear: in the face of advancing AGI, let’s open the doors, not close them. Let’s foster a global community engaged in the development, use, and oversight of AGI. And let’s remember that the most effective safeguards often come from collective vigilance, not top-down regulation. Only then can we truly harness the potential of AGI while minimising its risks.