18 Countries Sign an Agreement on Making AI Safety

Karimi & Associates Law Firm reports, according to Reuters:

Governments around the world are increasingly concerned about the safety of generative AI. This concern has grown since the launch of OpenAI’s ChatGPT in November last year, which was followed by other prominent generative AI chatbots such as Google’s Bard and Anthropic’s Claude. Unlike traditional chatbots, these systems can perform a wide range of tasks from a single prompt, from writing a novel to solving complex physics problems.

The United States and Britain, along with 16 other countries, signed a 20-page agreement aimed at promoting the safe development of artificial intelligence systems and preventing them from falling into the hands of rogue actors. The agreement sets out general recommendations and guidelines for the ethical and secure development of AI: monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers to ensure they adhere to ethical practices. It also recommends that AI models be released only after the necessary security testing has been conducted.

In addition to the US and Britain, the signatories include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore. These countries have committed to working together to develop AI technologies that are safe and secure. The agreement’s main focus is preventing the hijacking of AI technology by hackers. With AI systems increasingly deployed in fields such as healthcare, finance, and transportation, securing them is of utmost importance, and the agreement aims both to protect individuals and organizations from cyber threats and to promote the responsible use of AI.

However, the agreement does not address questions about the appropriate use of AI, or how the data used to train major AI models like ChatGPT and Bard is collected. These issues are still being debated and will likely be taken up in future discussions.

Recently, France, Germany, and Italy signed an agreement on regulating AI that stresses the importance of prioritizing security during the design phase. It emphasizes that AI development should not focus solely on attractive features, speed to market, or cost reduction; instead, security should be treated as the most critical aspect of AI design, built into systems from the very beginning.

The UK also recently concluded an AI summit focused on similar issues. Speaking about the agreement, US Cybersecurity and Infrastructure Security Agency Director Jen Easterly noted that it is the first agreement of its kind, affirming that security should be a top priority in the design of AI systems rather than an afterthought.
