Leading AI Experts Advocate for Openness Amidst Growing Industry Debate
On the day leaders from various sectors converged at Bletchley Park for the AI Safety Summit, more than 70 influential figures in AI, including Meta's Chief AI Scientist Yann LeCun, called for a more open approach to AI development in a letter published by Mozilla. The letter argued that AI governance stands at a critical juncture and that openness, transparency, and broad access are necessary to counter the potential hazards of AI systems.
The ongoing debate over open versus proprietary AI echoes the broader software world's discourse of the past decades. Over the weekend, LeCun took to Twitter to criticize companies such as OpenAI and Google's DeepMind for attempting to secure a "regulatory capture of the AI industry" by lobbying against open AI R&D. LeCun warned that if such campaigns succeed, the outcome would be catastrophic: monopolistic control of AI by a handful of companies.
This debate is not confined to the industry; it is also reflected in governance efforts such as President Biden's executive order on AI and the discussions at the AI Safety Summit. While some argue that open-source AI could be misused by malicious actors, others counter that such claims serve mainly to concentrate control in the hands of a few protectionist companies.
The open letter, backed by dozens of notable figures, acknowledges that openly available models carry risks and vulnerabilities. It counters, however, that proprietary technologies have shown similar risks, with the added disadvantage of limiting public scrutiny and collaboration. The letter asserts that the idea that tight, proprietary control is the only path to safety is naive at best and dangerous at worst.
Among the signatories are other prominent figures in the AI field, including Google Brain and Coursera co-founder Andrew Ng, Hugging Face co-founder and CTO Julien Chaumond, and renowned technologist Brian Behlendorf of the Linux Foundation. They identify three primary ways openness can contribute to safe AI development: fostering greater independent research and collaboration, enhancing public scrutiny and accountability, and lowering barriers to entry for newcomers to the field.
The letter also warns against hastily crafted regulation that could inadvertently concentrate power, harming competition and innovation. It advocates for open models to inform open debate and improve policymaking, underscoring that if safety, security, and accountability are the goals, openness and transparency are essential ingredients for achieving them.