MARIANNE WANG: Democratizing AI Development Cannot Be Our Priority Now

As someone relatively new to AI, I am surprised by how fast the industry is moving ahead, given how little the world (and the industry itself) knows about where exactly we are headed. On one hand, we have the “x-risk” school of thought clearly articulating the apocalyptic potential of autonomous AI systems, should they evolve beyond our control and pursue goals harmful to society. In stark opposition, the proponents of continued progress in the field shrug off this concern as akin to worrying about “overpopulation on Mars”.

This divergence in opinion is no elite-mass divide. Both camps are represented by prominent, well-informed leaders of the field, from deep learning pioneers to AI firm CEOs. In other words, even the field’s brightest minds are unsure of the exact risks involved with AI. There is insufficient research to make a convincing assessment of its implications.

The issues we have chosen to tackle as a society are thus incredibly scattered. As with all emerging technologies, problems arise on multiple levels, from short-term distributive justice to long-term upheavals, and tackling some of them can mean undercutting progress on others. For instance, imposing more safety restrictions for the sake of long-term security could undercut innovation in the present. More immediately, channeling more resources into esoteric research on the mechanisms of AI evolution means time and capital directed away from the training and scaling of models. Yet limiting or decelerating AI development does not irreversibly hurt anyone in the present: the short-term trade-off is simply a slower, more cautious tread toward reaping the benefits of AI, unlike fields such as climate change, where the measures some endorse impose direct harms on present communities. The issues of inequality around intelligence accumulation are legitimate, but they pale in comparison to the fundamental security of human society and even, perhaps, all life.

The case for decentralized access and contributions to AI models often rests on ensuring that more people can apply AI to benefit their own unique societies and communities. However, even these short-term gains are debatable, and could well be offset by elevated cybersecurity uncertainties in the medium term as societies and agencies grow more reliant on autonomous systems they do not understand.

I say we need a much more assertive commitment to short-term restrictions on AI, a period during which we sharpen our understanding of what AI growth truly entails, before turning to all other desirable but secondary aims. Hence, we must temporarily sideline goals like the democratization of AI development, which make AI governance exponentially harder and introduce greater volatility into the growth trajectories of AI models as they are trained on unconstrained sets of data, at incredible scale. Reaching a consensus on such a firm stance is the first step away from our current non-committal, lackluster approach to managing the risks of AI. Our penchant for mandates is merely screaming into the void; repeated instances of “unauthorized” usage have proven them toothless.

How, then, should AI be researched in a more constrained setting? Here, the national security paradigm complicates the trajectory of AI development: if we allow governments to nationalize the field, AI could truly become a second nuclear weapon rather than a new technology for the common good. Still, we cannot leave such a technology in the hands of a few rich tech elites, allowing the logic of the private market to run rampant in so consequential an area.

The solution, then, is to work toward a neutral, global centralization under a non-governmental body, one that alters our current framing of AI governance as a problem between nations. Rather than a guarantee of absolute power, AI models should be viewed as a scientific phenomenon, and the corresponding community should be curated as such: an academic, scientific coalition of people invested in the discovery of truth, working to uncover the mechanisms of this potential bomb we are fiddling with. Unlike the way we approached the atomic bomb, research and progress should not be tied to the military.


Earlier today, I attended a panel session featuring Steve Omohundro, Dan Faggella, and Preston Epton, where this sentiment of a scientific approach to AI management was very much echoed. In that room, I saw the power of a rational, empirical, and scientific discussion on AI. The UN AI Advisory Body’s effort to bring together individuals with diverse AI-related profiles to craft a governance recommendations report (published just last month) is a good place to start, and it should be our predominant focus. It should not take actual cyber disasters (which Steve reckons will be the only way we begin to cooperate) for us to realize that this is an issue of collective security, not a national one. And it most definitely is not an issue of whether the masses are sufficiently involved in its training and scaling.