Artificial intelligence (AI) technology is advancing at an unprecedented pace, fueling calls for regulation. In a recent announcement, Microsoft (MS) strongly advocated the establishment of a government agency dedicated to AI regulation.
According to sources including Bloomberg, MS’s Chief Legal Officer, Brad Smith, emphasized the urgency of creating a federal agency to oversee AI development in his latest blog post. “The installation of a government-level agency to monitor AI development is the most rational course of action,” Smith stated. He further asserted, “We must always ensure that AI remains under human control, making it a top priority for both technology-driven companies and governments.”
Smith highlighted the necessity of measures to prevent legitimate content from being deceptively altered using AI. He also emphasized the need for licenses for the most critical forms of AI, including those affecting physical security, cybersecurity, and national security.
MS’s call for a regulatory agency aligns with the stance presented by Sam Altman, CEO of OpenAI, the company behind the generative AI model ‘ChatGPT.’ Altman recently testified before Congress, emphasizing the importance of government involvement in mitigating AI risks. OpenAI has also stressed the necessity of an international AI regulatory organization akin to the International Atomic Energy Agency (IAEA).
In its blog post, MS outlined five key principles essential for effective AI regulation. The company proposed mandatory inclusion of “safety brakes” in AI systems used for critical infrastructure. Similar to emergency brakes in trains, these safety measures would allow for the complete shutdown or deceleration of AI operations in critical infrastructure facilities. MS explained, “As AI becomes increasingly powerful, concerns arise regarding our ability to control it. The need for AI control in critical infrastructure, such as electricity, water, and transportation, is being widely recognized.”
MS further highlighted that safety brakes have already been integrated into various technologies, including elevators, school buses, and high-speed trains. The company also suggested the establishment of “guardrails” for government-led AI technology usage, proposing the involvement of organizations like the National Institute of Standards and Technology (NIST) in their development.
Additionally, MS emphasized the importance of creating a legal framework for AI applications, advanced foundational models, and AI infrastructure. The company also called for increased funding to support academic and nonprofit AI research.
To address the societal impact of AI, MS proposed collaborative partnerships between governmental and non-governmental organizations. The company asserted that organizations involved in developing and deploying advanced AI systems should establish their own governance frameworks, and pointed to its own six-year effort to build such a framework.
By investing over $10 billion in OpenAI, MS has propelled the global AI technology race. Its incorporation of AI chatbots across its products, from search engines to document tools, has positioned the company at the forefront of this competition.
While the European Union has taken initial steps, momentum for AI regulation is also building within the United States. The Biden administration recently commenced a review of measures to regulate interactive AI, including ChatGPT.
According to Reuters, the National Telecommunications and Information Administration (NTIA), a division of the U.S. Department of Commerce, has initiated discussions on regulatory measures for AI systems. The focus is on ensuring that AI systems operate as intended without causing harm, in response to growing public interest and concerns about AI accountability.
During an AI-related meeting, President Biden emphasized the responsibility of tech companies to ensure the safety of their products before making them available to the public. Notably, former Google CEO Eric Schmidt warned of the real risks associated with AI, including the potential for “zero-day attacks” and applications in life sciences. Schmidt’s cautionary statements reinforce the need to prepare and safeguard against potential misuse of AI technology.
The call for AI regulations is gaining momentum, with industry leaders like Microsoft taking a proactive stance. As the impact of AI continues to shape society, it is crucial to strike a balance between innovation and responsible governance.