Tim O'Reilly, Who Coined "Web 2.0": Why AI Regulation Should Start with Mandatory Disclosure


You can't police what you don't understand.

On November 30, 2022, the world changed, just as it did on August 12, 1908, when the first Model T rolled off Ford's assembly line. That was the day OpenAI released ChatGPT, the day AI moved out of the research lab and into an unsuspecting world. Within two months, ChatGPT had more than 100 million users, the fastest adoption of any technology in history.

Arguments erupted immediately. Most notably, the Future of Life Institute published an open letter calling for a pause on advanced AI development, asking: "Should we allow machines to flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, and replace us? Should we risk losing control of our civilization?"

In response, the Association for the Advancement of Artificial Intelligence released its own letter, citing the many positive ways AI is already improving our lives and pointing to existing efforts to make AI safer and to understand its impact. Important discussions of AI regulation are in fact already underway, such as the Partnership on AI's conference on responsible generative AI last week. The UK has announced its intention to regulate AI, albeit in a light-touch, "pro-innovation" way. In the US, Senate Majority Leader Charles Schumer has announced plans to introduce "a framework outlining a new regulatory regime" for AI. And the EU is sure to follow as well, which in the worst case could produce a patchwork of conflicting regulations.

All of these efforts reflect a general consensus that regulation should address issues such as data privacy and ownership, bias and fairness, transparency, accountability, and standards. OpenAI's own AI safety and responsibility guidelines cite the same goals, but also raise what many consider the central and most general question: how do we align AI-based decisions with human values? As OpenAI writes:

"AI systems are becoming part of everyday life. The key is to ensure that these machines align with human intentions and values".

But whose values should AI align with? The benevolent, idealistic values to which most AI critics aspire? The values of public companies bound to put shareholder returns ahead of customers, suppliers, and society at large? Those of criminals or rogue states bent on harming others? Or of someone well-meaning who, like Aladdin, expresses an ill-considered wish to an all-powerful AI genie?

There is no simple way to solve the alignment problem. But alignment will be impossible without robust institutions for disclosure and auditing. If we want prosocial outcomes, we need to design and report on metrics that explicitly aim at those outcomes and measure the extent to which they are achieved. That is a crucial first step, and we should take it now. These systems are still very much under human control. For now, at least, they do what they are told, and when their results don't match expectations, their training is quickly improved. What we need to know is what they are being told.

So what should be disclosed? There is an important lesson for both companies and regulators in the rules governing corporations, which science-fiction writer Charlie Stross calls "slow AIs." One way we hold companies accountable is by requiring them to share their financial results in accordance with Generally Accepted Accounting Principles (GAAP) or International Financial Reporting Standards (IFRS). If every company reported its finances in a different way, it would be impossible to regulate them.

Today, dozens of organizations have published AI principles, but they provide little detailed guidance. They all say things like "maintain user privacy" and "avoid unfair bias," but they don't say exactly under what circumstances companies may collect facial images from surveillance cameras, or what to do when a system's accuracy differs by skin color. Today, when disclosures happen, they are haphazard and inconsistent: sometimes in research papers, sometimes on earnings calls, sometimes from whistleblowers. It is almost impossible to compare what is being done now with what was done in the past or what might be done in the future. Companies limit disclosure, citing user privacy concerns, trade secrets, system complexity, and a variety of other reasons. Instead, they offer only general assurances of their commitment to safe and responsible AI. This is unacceptable.

Imagine what would happen if the standards guiding financial reporting simply said that companies must accurately reflect their true financial position, without spelling out what a report must include and what "true financial position" means. Instead, independent standards bodies such as the Financial Accounting Standards Board, which created and oversees GAAP, specify these things in great detail. Regulators such as the Securities and Exchange Commission then require public companies to report in accordance with GAAP, and audit firms are hired to review and attest to the accuracy of those reports.

The same should be true for AI safety. What we need is the equivalent of GAAP for AI and algorithmic systems more generally; call it something like Generally Accepted AI Principles. We need an independent standards body to oversee those standards, regulatory bodies equivalent to the SEC and its European counterparts to enforce them, and an ecosystem of auditors empowered to verify that companies and their products are making accurate disclosures.

But if we are going to create a GAAP for AI, we should learn from the evolution of GAAP itself. The accounting systems we take for granted today, and use to hold companies accountable, were originally developed by medieval merchants for their own use. They were not imposed from outside but were adopted because they let merchants track and manage their own trading ventures. They are universally used by businesses today for the same reason.

So, what better starting point for developing AI regulations than the governance and control frameworks used by companies that develop and deploy advanced AI systems?

Creators of generative AI systems and large language models already have tools for monitoring, modifying, and optimizing them. Techniques such as RLHF (reinforcement learning from human feedback) are used to train models to avoid bias, hate speech, and other forms of bad behavior. These companies are collecting vast amounts of data on how people use their systems. They are stress-testing and "red teaming" them to find vulnerabilities, post-processing their output, building layers of safety, and beginning to harden them against "adversarial prompting" and other attempts to subvert the controls they have put in place. But exactly how this stress testing, post-processing, and hardening works, or whether it works at all, is mostly invisible to regulators.
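As a purely illustrative sketch, not a description of any provider's actual pipeline, the kind of post-processing layer described above might look something like the following: raw model output is scored against policy categories and every decision is logged. The category names, scoring rule, and threshold here are invented for the example; real systems use learned classifiers rather than keyword lookups.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical policy categories and trigger terms. A simple keyword lookup
# stands in for the learned classifiers real systems would use.
POLICY_TERMS = {
    "violence": ["attack plan", "build a weapon"],
    "self_harm": ["hurt myself"],
}

@dataclass
class ModerationRecord:
    timestamp: str
    category_scores: dict[str, float]
    action: str  # "allow" or "filter"

def post_process(model_output: str, block_threshold: float = 0.5) -> ModerationRecord:
    """Score raw model output against each policy category and decide an action."""
    text = model_output.lower()
    scores = {
        category: min(1.0, sum(term in text for term in terms) / len(terms))
        for category, terms in POLICY_TERMS.items()
    }
    action = "filter" if max(scores.values()) >= block_threshold else "allow"
    # Every decision is logged; aggregated logs like these are the raw material
    # a standardized disclosure regime could draw on.
    return ModerationRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        category_scores=scores,
        action=action,
    )

if __name__ == "__main__":
    print(post_process("Here is a harmless, ordinary reply."))
```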

Regulators should first formalize and require detailed disclosure of the measurement and control methods already used by those developing and operating advanced AI systems.

Absent the operational detail from those who actually build and manage advanced AI systems, we run the risk that regulators and advocacy groups will "hallucinate" much as large language models do, filling the gaps in their knowledge with ideas that sound plausible but are wrong.

Companies creating advanced AI should collectively develop a comprehensive set of operational metrics that can be regularly and consistently reported to regulators and the public, as well as a process for updating these metrics as new best practices emerge.

What is needed is an ongoing process by which the creators of AI models fully, regularly, and consistently disclose the metrics they themselves use to manage and improve their services and to prevent misuse. Then, as best practices develop, regulators should formalize and require them, just as accounting regulations formalized the tools companies were already using to manage, control, and improve their finances. Disclosure isn't always comfortable, but mandatory disclosure has proven to be a powerful tool for ensuring that companies actually follow best practices.
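To make the idea concrete, here is a hypothetical sketch of what one entry in such a standardized, regularly reported set of operational metrics might look like. The field names and values are invented for illustration and are not drawn from any existing standard or any company's actual reporting.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIDisclosureReport:
    """One reporting period's operational metrics, loosely analogous to a GAAP filing."""
    system_name: str
    reporting_period: str            # e.g. "2023-Q2"
    training_data_summary: str       # provenance and licensing of training data
    red_team_findings_open: int      # vulnerabilities found but not yet mitigated
    red_team_findings_resolved: int
    harmful_output_rate: float       # fraction of sampled outputs flagged in audit
    user_reports_received: int
    user_reports_resolved: int
    alignment_eval_score: float      # score on a published, versioned benchmark

report = AIDisclosureReport(
    system_name="ExampleLM-1",
    reporting_period="2023-Q2",
    training_data_summary="Licensed and public web text; details in an appendix",
    red_team_findings_open=4,
    red_team_findings_resolved=37,
    harmful_output_rate=0.0021,
    user_reports_received=1520,
    user_reports_resolved=1498,
    alignment_eval_score=0.87,
)

# A consistent, machine-readable format is what would let regulators and auditors
# compare one quarter with the next, and one company with another.
print(json.dumps(asdict(report), indent=2))
```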

It is in the interest of companies developing advanced AI to disclose the methods by which they control it and the metrics they use to measure success, and to work with their peers on standards for such disclosure. Like the regular financial reporting required of companies, this reporting must be regular and consistent. But unlike financial disclosures, which are generally required only of publicly traded companies, AI disclosure requirements may need to apply to much smaller companies as well.

Disclosures should not be limited to the quarterly and annual reports required in finance. For example, AI safety researcher Heather Frase argues that "a public ledger should be established to report incidents arising from large language models, similar to cybersecurity or consumer fraud reporting systems." There should also be dynamic information sharing, of the kind found in anti-spam systems.
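A public incident ledger of the kind Frase describes would be most useful if entries shared a common format, so they could be compared across vendors. The following minimal sketch is hypothetical, with fields loosely modeled on cybersecurity incident reports rather than on any existing ledger.

```python
from dataclasses import dataclass

@dataclass
class LLMIncidentRecord:
    """A single entry in a hypothetical public ledger of model-generated incidents."""
    incident_id: str
    reported_at: str        # ISO 8601 timestamp
    system_name: str
    incident_type: str      # e.g. "fabricated citation", "privacy leak", "fraud assist"
    severity: str           # "low", "medium", or "high"
    description: str
    mitigation_status: str  # "open", "mitigated", or "wont_fix"

example = LLMIncidentRecord(
    incident_id="2023-0042",
    reported_at="2023-06-14T09:30:00Z",
    system_name="ExampleLM-1",
    incident_type="fabricated citation",
    severity="medium",
    description="Model produced a nonexistent court case in response to a legal query.",
    mitigation_status="mitigated",
)

# Shared in a common schema, records like this could be aggregated and
# cross-referenced across providers, much like CVE entries in security.
print(example)
```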

It may also be worth having testing performed by an outside lab to confirm that best practices are being met, and defining what happens when they are not. One interesting historical parallel for product testing is the certification of fire safety and electrical devices by Underwriters Laboratories, an outside nonprofit auditor. UL certification is not required, but it is widely adopted because it increases consumer trust.

None of this is to say that cutting-edge AI technologies will never need regulatory requirements that go beyond the frameworks their developers already use. Some systems and use cases are riskier than others. National security considerations are a good example. Especially with small LLMs that can run on a laptop, there is a risk of irreversible and uncontrollable proliferation of technologies that are still poorly understood. This is what Jeff Bezos calls a "one-way door": a decision that, once made, is very hard to undo. One-way decisions deserve much deeper consideration and may call for regulation imposed from outside that runs ahead of existing industry practice.

Furthermore, as Peter Norvig of Stanford's Institute for Human-Centered AI noted in reviewing a draft of this article, "We think of 'human-centered AI' as having three spheres: the user (e.g., for a bail-recommendation system, the user is the judge); the stakeholders (e.g., the accused and their families, plus the victims and families of past or potential future crimes); and society at large (e.g., as affected by mass incarceration)."

Arvind Narayanan, a professor of computer science at Princeton University, points out that these systemic harms to society go beyond the harms to individuals, and require a longer-term view and broader schemes of measurement than those typically carried out inside companies. But despite the warnings of groups such as the Future of Life Institute, whose letter called for a "pause" on AI, such harms are usually difficult to foresee in advance. Would an "assembly line pause" in 1908 have led us to anticipate the massive social changes that 20th-century industrial production was about to bring to the world? Would such a pause have made us better or worse off?

Given the enormous uncertainty surrounding the progress and impact of AI, we are better off mandating transparency and establishing institutions to enforce accountability than trying to prevent every specific harm imaginable.

We shouldn't wait until these systems are out of control to regulate them. But neither should regulators overreact to the AI scare in the media. Regulation should focus first on disclosure of current monitoring and best practices. In doing so, companies, regulators, and defenders of the public interest can learn together how these systems work, how best to manage them, and what the systemic risks really are.

