Image: Aidan Granberry (unsplash)
A septet of U.S. artificial intelligence (AI) powerhouses – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – have made a public pledge to “facilitate the movement towards a secure, safe, and transparent evolution of AI technology.”
The administration of President Biden and Vice President Harris emphasized that enterprises developing such novel technologies bear a duty to guarantee the safety of their products. To fully exploit AI’s potential, the administration is urging the industry to adhere to the strictest standards, ensuring that innovation does not jeopardize the rights and safety of Americans.
Simultaneously, the administration is formulating an executive order that will establish legal responsibilities for firms in the AI sector. In the meantime, these seven enterprises are pledging to:
Perform security assessments of their AI systems prior to launching (the assessments will be conducted both internally and by independent experts)
Exchange knowledge concerning best practices in AI risk management amongst themselves and the government
Defend proprietary and unreleased model weights, described as the “most essential component of an AI system,” through investments in cybersecurity and measures to counter insider threats
Facilitate third-party detection and reporting of vulnerabilities within their AI systems
Ensure that users can unambiguously identify when content (audio, video) has been generated by AI, e.g., through watermarks
Reveal the capabilities and limitations of their AI systems, including both their appropriate and inappropriate uses, and the security and societal risks they entail
Continually investigate potential societal risks (discrimination, bias) associated with AI use and safeguard privacy
Develop advanced AI systems to address society’s most crucial challenges such as preventing diseases like cancer, battling climate change, and confronting cyber threats
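One of the pledges above is to let users unambiguously identify AI-generated content, for example through watermarks. Real watermarking schemes embed signals in the media itself and are not publicly specified by these companies; purely as a conceptual illustration, the hypothetical sketch below shows how a cryptographic provenance tag could let a verifier with a shared key confirm that a piece of content carries a valid AI-generation label. All names and the key here are invented for the example.

```python
import hmac
import hashlib

# Hypothetical shared secret held by the content generator and verifier.
# Real deployments would use proper key management, not a hardcoded key.
SECRET_KEY = b"example-provider-key"

def tag_content(content: bytes) -> str:
    """Produce a provenance tag for AI-generated content (HMAC-SHA256)."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_tag(content: bytes, tag: str) -> bool:
    """Check whether the tag matches the content, using a constant-time compare."""
    expected = tag_content(content)
    return hmac.compare_digest(expected, tag)

generated = b"This paragraph was produced by a language model."
tag = tag_content(generated)

assert verify_tag(generated, tag)          # untouched content verifies
assert not verify_tag(b"Edited text.", tag)  # any edit invalidates the tag
```

Note that such a tag only travels alongside the content and breaks if the content is edited; robust media watermarks, by contrast, aim to survive transformations like cropping or re-encoding.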
Contending with the risks AI poses
Within the official document outlining the eight pledges, it is stated that they “apply only to generative models that exceed the current industry frontier (for instance, models that are overall more potent than any presently released models, including GPT-4, Claude 2, PaLM 2, Titan and, for image generation, DALL-E 2).”
Non-profit research and advocacy organization AlgorithmWatch expressed its discontent with this provision, noting that the AI systems already on the market are causing harm right now. If the companies agree these precautions are wise, it argued, they should apply them to the products they are currently selling worldwide.
James Campbell, CEO of Cado Security, commented on the need for public awareness around validating information obtained from the Internet. He highlighted the dangers of Large Language Models (LLMs), which often deliver incorrect information in an authoritative tone, and stressed that users should treat their output with skepticism and seek validation from alternative sources.
Campbell added that “testing” in this context likely involves the internal security of AI developers and the broader societal impact of the technologies. He flagged potential privacy issues related to AI technologies, particularly around LLMs like ChatGPT. In his view, more generally, companies might be required to conduct a risk assessment from a societal impact perspective before releasing AI-enabled technologies.
However, according to Mike Britton, CISO of Abnormal Security, the most consequential regulation, when finally enacted, will revolve around ethics, transparency, and assurances about how the AI operates.
The administration previously released a Blueprint for an AI Bill of Rights, aiming to safeguard American citizens from potential AI system risks such as bias and discrimination. This blueprint identified five guiding principles for the design, use, and deployment of such systems:
Protection from unsafe or ineffective systems
Protection from algorithmic discrimination
Data privacy protection
Clarity and explanation about the systems used
Availability of a human alternative to automated systems

President Biden signed an Executive Order to urge federal agencies to fight bias in the development and implementation of emerging technologies, such as AI, thereby protecting the public from algorithmic discrimination.
To boost AI research and development, the administration also invested $140 million in the National Science Foundation to launch seven new National AI Research Institutes, in addition to the existing 18.