Governments around the world have introduced AI safety regulatory measures, focusing mainly on six aspects.

Table of contents

Countries are strengthening artificial intelligence safety supervision

Regulatory recommendations focus on six aspects

Advancing artificial intelligence security through technology


The rapid development of artificial intelligence technology worldwide is profoundly shaping economic and social development and the progress of human civilization, bringing enormous opportunities to the world. At the same time, it also brings unpredictable risks and complex challenges.

Because the technical logic and application processes of artificial intelligence (AI) are often opaque, it can give rise to a range of risks: data risks, algorithm risks, ethical risks, technology-misuse risks, and cyberattack risks. These risks threaten not only personal privacy and corporate interests but also the fairness and stability of society as a whole.


First, algorithmic risk. Because AI systems built on big data, deep learning, and complex algorithms behave as "black boxes," their decision logic and rationale are often hard to explain, creating uncertainty. In applications with little tolerance for error, this lack of explainability can itself become a safety risk.
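Explainability tooling can partially open such black boxes. Below is a minimal sketch, assuming a scikit-learn environment, that uses permutation importance on a synthetic dataset to measure how much a model's held-out accuracy depends on each input feature; the dataset and model are illustrative placeholders, not any specific system discussed in this article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a production model's training data.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat the trained model as a black box: we only call its predict().
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much held-out accuracy drops; bigger drops mean heavier reliance.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Model-agnostic probes like this do not fully explain a decision, but they give auditors a first quantitative handle on what an opaque model relies on.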

Second, data risk. Mining the value of data is key to improving AI capabilities, but circulating sensitive information such as personal data carries risks of leakage and misuse; yet if data is walled into silos for security reasons, the value of data as a production factor and the growth of the AI industry are constrained. In addition, training AI models on copyrighted material can provoke copyright disputes, and feeding in information about identified or identifiable natural persons can lead to problems such as the leakage of trade secrets.
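One routine mitigation is scrubbing personal information from text before it enters a training corpus. The sketch below is a deliberately simplified illustration: the regular expressions and the `redact` helper are hypothetical and far from exhaustive, and a production pipeline would need locale-specific rules and human review.

```python
import re

# Hypothetical, non-exhaustive PII patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-\s]?\d{3,4}[-\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

sample = "Contact Zhang San at zhang.san@example.com or 138-0013-8000."
print(redact(sample))
# -> Contact Zhang San at [EMAIL] or [PHONE].
```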

Third, social and ethical risks cannot be ignored. Price discrimination driven by big data, along with gender, racial, and regional discrimination, can create social equity problems. Although AI is trained on massive amounts of data, that data is often biased, so decisions based on it may exacerbate social injustice. Moreover, algorithm design can absorb the value leanings of its developers, making the fairness of automated decisions hard to guarantee.
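Bias of this kind can at least be measured. Here is a minimal sketch, on synthetic data, of a demographic parity check: comparing positive-outcome rates across groups defined by a protected attribute. A real fairness audit would use the model's actual predictions and far richer metrics.

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=10_000)  # synthetic protected attribute
# Synthetic decisions with a built-in 10-point gap between the groups.
approved = rng.random(10_000) < np.where(group == 1, 0.55, 0.45)

rate_g0 = approved[group == 0].mean()
rate_g1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate_g0:.3f}")
print(f"approval rate, group 1: {rate_g1:.3f}")
print(f"demographic parity difference: {abs(rate_g1 - rate_g0):.3f}")
```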

Fourth, the risk of technology misuse is worrying. Using AI to fabricate fake news, fake accounts, fake voices, fake images, and the like has an increasingly serious impact on society. Such behavior can harm social and economic security, corporate reputations, and the safety of personal property. As deep synthesis technology matures, crimes that exploit it, such as fraud, blackmail, framing, and defamation, have become common.

Finally, there is the serious risk of cyberattack. Attackers may exploit vulnerabilities in AI systems, for example hijacking models or blocking and interfering with their learning and prediction. Attackers can also wield AI themselves, for instance using machine learning algorithms to infer users' passwords or encryption keys.
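The classic example of attacking a model's predictions is the adversarial perturbation. The sketch below, using an untrained toy PyTorch classifier purely to show the mechanics, implements the fast gradient sign method (FGSM): it takes the gradient of the loss with respect to the input and nudges the input in the direction that increases the loss, which against trained models often flips the prediction with a change imperceptible to humans.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 2))  # toy stand-in for a real classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # a "clean" input
y = torch.tensor([0])                       # its label

# Gradient of the loss with respect to the INPUT, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM step: move each input dimension slightly uphill on the loss.
epsilon = 0.5
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```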



Countries are strengthening artificial intelligence safety supervision

Artificial intelligence governance bears on the destiny of all humankind and is a challenge shared by every country. This year, many countries and organizations around the world have introduced initiatives or regulations calling in unison for stronger AI safety supervision. AI is leaving its era of unconstrained growth and entering a stage in which security and development advance in step.


On November 1, at the first global AI Safety Summit, 28 countries jointly signed the "Bletchley Declaration" on the international governance of artificial intelligence, the world's first international declaration on this rapidly emerging technology. The Declaration encourages relevant actors to take appropriate measures, such as safety testing and evaluation, to measure, monitor, and mitigate AI's potentially harmful capabilities and their possible impacts, and to provide transparency and accountability. It calls on countries to formulate risk-based policies, including developing appropriate evaluation metrics and safety testing tools and building public-sector capacity and scientific research. It also resolves to support an internationally inclusive network of frontier AI safety research, complementing existing and new multilateral and bilateral cooperation mechanisms, to provide the best available science for policymaking and the public good through existing international forums and other relevant initiatives.

On October 30, the Group of Seven (G7) released the "International Code of Conduct for Organizations Developing Advanced Artificial Intelligence Systems." The code contains 11 items emphasizing measures to be taken during development to ensure trustworthiness, safety, and security. Among other things, developers must identify and mitigate risks through red-team testing, evaluation, and mitigation measures, and must identify and address vulnerabilities, incidents, and patterns of misuse after deployment, including by monitoring for vulnerabilities and incidents and by making it easy for third parties and users to discover and report issues. The code also emphasizes developing and deploying reliable content authentication and provenance mechanisms, such as watermarks. These measures help ensure the safety and reliability of AI systems and increase user trust in them.

Also on October 30, US President Biden issued the executive order on "Safe, Secure, and Trustworthy Artificial Intelligence," the White House's first set of regulations targeting generative AI. The order directs multiple U.S. government agencies to develop standards, test AI products, identify the best methods for content verification such as watermarking, develop cybersecurity programs, and attract technical talent, with the goals of protecting privacy, advancing fairness and civil rights, protecting consumers and workers, promoting innovation and competition, and strengthening US leadership. It also states that US users will be protected from AI-enabled fraud and deception through standards for detecting AI-generated content and authenticating official content.

On October 18, the Cyberspace Administration of China released the Global Artificial Intelligence Governance Initiative. Its measures include establishing a risk-level testing and evaluation system and implementing agile governance with categorized, tiered management and rapid, effective response. R&D entities should improve the explainability and predictability of AI, improve the authenticity and accuracy of data, ensure that AI always remains under human control, and build AI technology that is auditable, supervisable, traceable, and trustworthy. The initiative also encourages developing and applying technologies for AI governance itself, supporting the use of AI to prevent risks and improve governance capabilities. In addition, it calls for gradually establishing and improving laws and regulations that safeguard personal privacy and data security in AI research, development, and application, and opposes the illegal collection, theft, tampering, and leaking of personal information.

On July 13, the Cyberspace Administration of China and other national departments announced the "Interim Measures for the Management of Generative Artificial Intelligence Services." Providers of generative AI services with public-opinion attributes or social-mobilization capabilities must conduct security assessments in accordance with relevant national regulations and complete algorithm filing, modification, and cancellation procedures under the "Regulations on the Management of Algorithm Recommendations for Internet Information Services."

In June this year, the European Parliament passed its negotiating mandate on the EU Artificial Intelligence Act; if formally approved, it will become the world's first comprehensive AI regulation. The act sorts AI systems into four risk tiers, from minimal to unacceptable. Its "technical robustness and safety" requirement obliges AI systems to minimize unintended harm during development and use and to be robust against unexpected problems and against attempts by malicious third parties to alter the system's use or behavior. The act also prohibits creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, and prohibits placing AI systems built that way on the market, putting them into service, or using them. Generative AI systems built on foundation models must comply with transparency requirements: they must disclose that content was AI-generated and be designed to prevent the generation of illegal content, and detailed summaries of any copyrighted training data must be made public.

In addition, in late October, a fierce debate over the control of artificial intelligence broke out among scholars, including Turing Award winners among the "Big Three" of deep learning. Twenty-four Chinese and foreign AI scientists signed a statement calling for tighter controls on AI technology: establish an international regulatory body, subject advanced AI systems to mandatory registration and review, introduce instant "shutdown" procedures, and require development companies to devote 30% of their research budgets to AI safety.



Regulatory recommendations focus on six aspects

Although regulatory priorities differ and debate between AI academia and industry remains fierce, governments around the world have reached a basic consensus on strengthening AI supervision. Current AI regulation focuses on six aspects: security testing and evaluation, content authentication and watermarking, facial data protection, risk identification and security assurance, forced shutdown mechanisms, and independent regulatory bodies.

Security testing and evaluation: Require security testing and evaluation of AI systems to measure, monitor, and mitigate potentially harmful capabilities and to provide transparency and accountability. Developers must share safety test results and other key information with governments to ensure systems are safe and reliable before release.

Content authentication and watermarking: Establish standards for detecting AI-generated content and authenticating official content to protect users from AI-enabled fraud and deception, emphasizing the development and deployment of reliable content authentication and provenance mechanisms such as watermarks (see the sketch after this list).

Facial data protection: Facial images are both sensitive personal data consumed by AI systems and an important output of their applications, so preventing abuse is critical. Building or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage is prohibited, and generative AI systems must meet transparency requirements, disclosing how content was generated and preventing the generation of illegal content.

Risk identification and security assurance: AI systems must be robust and safe, minimizing unintended harm and withstanding both accidents and malicious use. Developers must identify and mitigate vulnerabilities, incidents, and patterns of misuse after deployment, including by monitoring for vulnerabilities and incidents and by making it easy for users and third parties to discover and report issues.

Forced shutdown: Introduce an instant "one-click shutdown" mechanism to halt AI programs that behave unexpectedly or are used maliciously (also illustrated in the sketch after this list).

Independent regulatory bodies: Promote an internationally inclusive network of frontier AI safety research, establish an international regulatory body, and subject advanced AI systems to mandatory registration and review.
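Two of these measures translate directly into code. The sketch below illustrates, under assumed interfaces, both a provenance tag (an HMAC signature attached to generated content so recipients can verify its origin) and a process-wide kill switch checked before every generation; `SECRET_KEY` and `generate` are hypothetical placeholders, not any real service's API.

```python
import hashlib
import hmac
import threading

SECRET_KEY = b"server-side-secret"  # hypothetical key from key management
KILL_SWITCH = threading.Event()     # set() == instant "one-click shutdown"

def sign_content(content: bytes) -> str:
    """Attach a provenance tag so recipients can verify the origin."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign_content(content), tag)

def generate(prompt: str) -> bytes:
    return f"model output for: {prompt}".encode()  # stand-in for a model call

def serve(prompt: str) -> tuple[bytes, str]:
    if KILL_SWITCH.is_set():
        raise RuntimeError("service halted by kill switch")
    output = generate(prompt)
    return output, sign_content(output)

out, tag = serve("hello")
assert verify_content(out, tag)   # provenance check passes
KILL_SWITCH.set()                 # operator triggers emergency shutdown
```

A signature of this kind authenticates official content, but it is not a robust watermark for arbitrary AI output; watermarking the model outputs themselves remains an active research area.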



Advancing artificial intelligence security through technology

In response to current AI risks and the security regulatory requirements of various countries, Dingxiang provides four security capabilities: AI system security testing, AI threat intelligence, full-process security defense, and facial application security.

AI system security testing: Conduct comprehensive security testing of AI applications, products, and apps to identify potential security vulnerabilities and provide timely remediation advice. This detection mechanism helps prevent attackers from exploiting vulnerabilities to mount malicious attacks.

AI threat intelligence: Dingxiang Defense Cloud business security intelligence delivers multi-faceted, multi-angle intelligence on AI attacks, combining technology with expert experience to anticipate attackers' methods, helping enterprises respond promptly and apply precise controls that protect their AI systems from potential threats.

Full-process security defense: Dingxiang Defense Cloud hardens and code-obfuscates AI applications, apps, and devices to improve their security, while obfuscating and encrypting AI data in transit to prevent eavesdropping, tampering, and fraudulent use during transmission. Dingxiang's Dinsigh risk-control decision engine comprehensively inspects the device environment and surfaces risks and abnormal operations in real time, and Dingxiang's Xintell modeling platform provides strategy support for AI security, uncovering potential risks and unknown threats in time.
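As a generic illustration of the transport-protection idea (not Dingxiang's actual implementation), authenticated symmetric encryption ensures an eavesdropper sees only ciphertext and any tampering makes decryption fail. A minimal sketch with the `cryptography` library:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice provisioned via key management
cipher = Fernet(key)

payload = b'{"user_id": 42, "feature_vector": [0.1, 0.9]}'
token = cipher.encrypt(payload)  # only this ciphertext crosses the network

# Receiver side: decrypt() authenticates before returning plaintext and
# raises InvalidToken if the ciphertext was modified in transit.
assert cipher.decrypt(token) == payload
```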


Facial application security: The Dingxiang business security perception and defense platform, built on technologies such as threat probes, stream computing, and machine learning, integrates device risk analysis, operational attack identification, abnormal behavior detection, early warning, and protective handling into a single active-defense platform. It detects malicious behavior such as camera hijacking and device forgery in real time and effectively controls the full range of facial application risks, featuring threat visualization, threat traceability, device correlation analysis, multi-account management, cross-platform support, active defense, open data access, customizable defenses, and full-process prevention and control.

Through these four security capabilities, enterprises can better protect their AI systems from security risks and attacks, improve the security of AI applications, and meet the security regulatory requirements of various countries.
