The Direction of Large Model Application Development | The Rise of Agents and Their Future (Part 2)

"With LLMs serving as the agent's brain, this part explores application scenarios such as single agents, multi-agent systems, and human-agent collaboration, as well as agents' social behavior, psychological activity, and the possibility of observing emergent social phenomena and human insights in simulated social environments."


01

Benefiting Humanity: Agents in Practice

LLM-based intelligent agents are an emerging direction and have already been applied to specific fields and tasks. Agents are designed to benefit humans and are expected to help people with everyday tasks:

  1. Free users from routine tasks and repetitive labor, reducing work stress and improving task-solving efficiency.

  2. No longer require users to provide explicit low-level instructions; agents can analyze, plan, and solve problems independently.

  3. Free users' hands and minds for exploratory and innovative work, realizing their full potential in cutting-edge science.

LLM-based agent application scenarios:

[Figure: application scenarios of LLM-based agents]

The above figure introduces three scenarios: single agent, multi-agent interaction, and human-agent interaction.

A single agent possesses different capabilities and can exhibit excellent task-solving performance in various application directions.

When multiple agents interact, they can achieve progress through cooperative or adversarial interactions.

In addition, in human-computer interaction, human feedback can enable agents to perform tasks more efficiently and safely, and agents can also provide better services to humans.

The topology diagram of LLM agent application is shown in Figure 6.

[Figure 6: topology of LLM-agent applications]

Single-agent capabilities

LLM-based agent application examples are booming.

AutoGPT is a popular open-source project aimed at fully autonomous operation. Beyond the basic capabilities of a large language model such as GPT-4, it integrates a variety of practical external tools and long/short-term memory management.

After entering a custom goal, the user can simply wait while AutoGPT generates ideas and carries out the concrete tasks, with no further prompting required.
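The autonomy loop just described, decomposing a goal, calling tools, and keeping results in memory, can be sketched in a few lines. This is a minimal illustration only; the `llm` stub, `TOOLS` registry, and plan format are invented for the example and are not AutoGPT's actual interfaces.

```python
def llm(prompt):
    # Stand-in for a real LLM call; returns a canned plan for this demo.
    if "Decompose" in prompt:
        return "search: best-selling books 2023\nsummarize: results"
    return "done"

# Hypothetical tool registry mapping tool names to callables.
TOOLS = {
    "search": lambda q: f"results for '{q}'",
    "summarize": lambda text: f"summary of {text}",
}

def run_agent(goal, max_steps=10):
    """Decompose the goal once, then execute each step with a tool,
    appending (step, result) pairs to a short-term memory list."""
    memory = []
    plan = llm(f"Decompose this goal into tool calls: {goal}").splitlines()
    for step in plan[:max_steps]:
        tool_name, _, arg = step.partition(": ")
        result = TOOLS[tool_name](arg)
        memory.append((step, result))
    return memory

history = run_agent("research best-selling books")
```

A real system would loop back to the LLM after each tool result; here the plan is fixed up front to keep the sketch deterministic.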

[Figure: the three stages of single-agent applications]

As shown in the figure above, single-agent applications fall into three stages: task-oriented, innovation-oriented, and lifecycle-oriented.

Task-oriented stage

LLM-type agents are able to understand human natural language instructions and perform daily tasks.

These agents can improve task efficiency, reduce the burden on users, and broaden the range of people who can use them.

The agent executes tasks from the user's high-level instructions, which involves decomposing the goal, planning the sequence of sub-goals, and interactively exploring the environment until the final goal is reached.

To explore whether agents are able to perform basic tasks, they are first deployed in a text-based game scenario.

In this type of game, the agent interacts with the world solely through natural language. Agents can perform tasks by reading textual descriptions of their surroundings and leveraging skills such as memory and planning.

Agents typically predict the next action through forecasting and trial and error; however, constrained by the capabilities of the underlying language model, agents in practice often also rely on reinforcement learning.

As LLMs have evolved, agents with stronger text understanding and generation capabilities have shown great potential for performing tasks through natural language. To this end, researchers have built more realistic and complex simulated test environments. By task type, these environments fall into web scenarios and life scenarios, with specific roles assigned to the agents.

In web scenarios, agents perform specific tasks on behalf of users, known as the web navigation problem. Agents must understand instructions within complex web pages, adapt to changes, and carry out operations successfully, delivering accessibility and automation and ultimately freeing humans from repetitive interaction with computer UIs.

Agents trained with reinforcement learning can imitate human behaviors such as typing, searching, and browsing, and perform well on basic tasks, but they struggle in complex real-world web scenarios such as dynamic, content-rich pages. LLM-powered agents are needed to adapt to these settings.

Researchers have leveraged LLMs' ability to read and understand HTML, designing prompts that let the agent comprehend the entire HTML source and predict the next sensible action.

Mind2Web combines multiple LLMs fine-tuned on HTML to summarize lengthy HTML source and extract valuable information in real-world scenarios. WebGum gives agents visual awareness through a multimodal corpus containing HTML screenshots, fine-tuning both the LLM and a visual encoder to deepen the agent's overall understanding of the page.
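The condense-then-predict pattern described above can be sketched as follows. This is a toy illustration, not Mind2Web's or WebGum's actual pipeline: `condense_html`, the prompt wording, and the stubbed `llm` are all invented for the example.

```python
import re

def condense_html(html, max_elems=5):
    # Keep only interactive elements the agent could plausibly act on.
    elems = re.findall(r"<(?:button|a|input)\b[^>]*>", html)
    return elems[:max_elems]

def next_action(llm, html, instruction):
    """Build a compact prompt from the condensed page and ask for an action."""
    elements = condense_html(html)
    prompt = (f"Task: {instruction}\n"
              f"Interactive elements: {elements}\n"
              f"Predict the next action:")
    return llm(prompt)

page = '<html><button id="login">Log in</button><a href="/cart">Cart</a></html>'
# Stub model: a real LLM would reason over the condensed elements.
stub = lambda p: "click #login" if "login" in p else "scroll"
action = next_action(stub, page, "sign in to the site")
```

The key idea is that the model never sees the raw page, only a short list of actionable elements, which keeps long real-world pages within the context window.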

In daily life, agents need to understand implicit instructions and apply common sense knowledge to complete many household tasks. For LLM-based agents trained solely on large amounts of text, tasks that humans take for granted may require multiple attempts .

For example, if the room is dark and there is a light, the agent should actively turn it on. To successfully cut vegetables in the kitchen, the agent needs to predict the likely positions of the knife.

Research shows that well-trained large language models can effectively decompose high-level tasks into appropriate subtasks, but this static reasoning and planning ability can leave agents unaware of a dynamic environment: the actions they generate often fail to account for changes in their surroundings.

Some methods use spatial data and item location relationships directly as additional inputs to the model to provide agents with access to comprehensive situational information during interactions. This enables the agent to create an accurate description of its surroundings.
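The idea of injecting spatial and item-location relations as extra model input can be sketched as below. The triple format and helper names (`describe_scene`, `build_prompt`) are assumptions made for illustration, not any specific paper's interface.

```python
def describe_scene(relations):
    """Turn (item, relation, place) triples into a textual situation summary."""
    return ". ".join(f"The {item} is {rel} the {place}"
                     for item, rel, place in relations)

def build_prompt(task, relations):
    # The scene description is prepended so the planner sees current state.
    return (f"Environment: {describe_scene(relations)}.\n"
            f"Task: {task}\n"
            f"Next action:")

scene = [("knife", "in", "drawer"), ("cutting board", "on", "counter")]
prompt = build_prompt("cut the vegetables", scene)
```

Because the relations are regenerated at every step, the agent's plan can track a changing environment rather than relying on a stale initial description.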

Innovation-oriented stage

LLM-type agents have shown strong capabilities in repetitive tasks, but in highly intellectually demanding fields such as cutting-edge science, the potential of agents has not yet been fully realized.

This is largely due to the inherent complexity of science and the scarcity of suitable training data. If agents can be endowed with the ability to explore autonomously, they will undoubtedly bring beneficial innovation to human technology.

Experts in various fields are working to overcome these challenges: computer scientists leverage agents' code understanding and debugging abilities, while researchers in chemistry and materials science equip agents with a host of general-purpose and task-specific tools, turning them into comprehensive scientific assistants capable of online research and document analysis to fill data gaps. Agents can also call robotics APIs for real-world interaction, enabling tasks such as material synthesis and mechanism discovery.

The potential of LLM-based agents for scientific innovation is evident, but we do not want their powerful exploratory abilities applied in ways that could threaten or harm humans. Boiko et al. studied the risks of agents synthesizing illegal drugs and chemical weapons, showing that adversarial prompts from malicious users can mislead agents.

Research on prevention is proceeding in parallel; the article "Is Artificial Intelligence Safe? OpenAI is 'aligning' large models with humans, ensuring that ChatGPT follows human intent even as it becomes smarter than humans" introduces some of this ongoing work.

Lifecycle-oriented stage

In the field of artificial intelligence, building an agent that can continuously explore in the unknown world, develop new skills, and maintain a long life cycle is a huge challenge.

As a typical simulated survival environment, Minecraft has become a unique testbed for developing and evaluating agents' comprehensive capabilities. Because Minecraft essentially mirrors the real world, it lets researchers probe an agent's potential to survive in realistic conditions.

Survival algorithms in Minecraft can be divided into two types: low-level control and high-level planning .

Early research focused on reinforcement learning and imitation learning, enabling agents to craft low-level items. With the advent of LLMs, agents have demonstrated amazing reasoning and analytical capabilities.

Researchers use LLM as a high-level planner to decompose high-level task instructions into a series of sub-goals, basic skill sequences, or basic keyboard/mouse operations to gradually help agents explore the open world.
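The high-level-planner pattern, where the LLM maps a task to an ordered sequence of basic skills executed by a low-level controller, can be sketched like this. The skill library, state format, and stubbed planner are illustrative assumptions, not Voyager's or any paper's real code.

```python
# Hypothetical low-level skill library: each skill transforms the agent's state.
SKILLS = {
    "chop_tree": lambda state: state | {"wood": state.get("wood", 0) + 1},
    "craft_planks": lambda state: state | {"planks": state.get("planks", 0) + 4},
}

def plan(llm, task):
    # The LLM (high-level planner) returns an ordered list of skill names.
    return llm(f"Decompose '{task}' into skills from {list(SKILLS)}").split(",")

def execute(task, llm, state=None):
    """Run the planned skills in sequence, threading state through each."""
    state = state or {}
    for skill in plan(llm, task):
        state = SKILLS[skill.strip()](state)
    return state

# Stub planner standing in for a real LLM call.
stub = lambda p: "chop_tree,craft_planks"
final = execute("get planks", stub)
```

Separating planning (LLM) from execution (skills) is what lets the agent explore open worlds: new skills can be added to the library without retraining the planner.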

Voyager is an LLM-based Minecraft agent inspired by AutoGPT, able to independently explore and adapt to unknown environments. Agents that can autonomously learn and master entire real-world skill sets may no longer be far off.

Agents oriented toward such lifecycle tasks might even exhibit behavior resembling reproduction: machines making copies of themselves, from software down to hardware, with each generation more intelligent and capable, and evolving many times faster than humans do.

02

Potential for multi-agent collaboration

LLM-based multi-agent systems offer the following advantage: by the principle of division of labor, each agent with specialized skills and domain knowledge can concentrate on a specific task.

Through the division of labor, agents become increasingly skilled in handling specific tasks. Breaking complex tasks into subtasks eliminates the time spent switching between different processes.

Efficient division of labor among multiple agents can accomplish a greater workload than without specialization, significantly improving the overall system's efficiency and output quality.

LLM-based intelligent agents show a trend similar to human society in their division of labor: as labor skills and technology improve, the division of labor itself evolves further.

Cooperative multi-agent systems are the most widely deployed model in practical applications. In such systems, individual agents assess the needs and capabilities of other agents and actively seek collaborative action and information sharing with them.

This approach brings many potential benefits, including increased task efficiency, improved collective decision-making, and the ability to solve complex real-world problems that no single agent can solve independently, ultimately achieving collaborative and complementary goals.

[Figure: interaction scenarios among LLM-based agents]

The above figure shows multiple interaction scenarios based on LLM agents.

On the left is collaborative interaction , where agents collaborate in an unordered or ordered manner to achieve a shared goal.

On the right is adversarial interaction , where agents compete in a tit-for-tat manner to improve their respective performance.

Existing cooperative multi-agent applications can be divided into two types : disordered cooperation and ordered cooperation .

When a system contains three or more agents, each can voice views and opinions freely, but such multi-agent cooperation lacks a standardized collaboration process; we call this disordered cooperation.

The ChatLLM network is one example, echoing the structure of a neural network: each agent is a node that processes the outputs of all preceding agents and passes its own output forward.

One solution is to introduce a coordination agent that integrates and organizes the feedback from all agents, though this integration is itself a heavy burden for the coordinator.
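The node-style forwarding plus a final coordinator can be sketched as below. The agent behaviors are trivial string-building stand-ins; in a real system each node and the coordinator would be LLM calls.

```python
def make_agent(name):
    def agent(inputs):
        # Each agent node consumes the outputs of all previous agents
        # and appends its own contribution.
        return f"{name}({' + '.join(inputs) if inputs else 'task'})"
    return agent

def forward(agents, coordinator):
    """Run agents in sequence, each seeing all prior outputs, then
    hand everything to a coordination agent for integration."""
    outputs = []
    for agent in agents:
        outputs.append(agent(list(outputs)))
    return coordinator(outputs)

agents = [make_agent(n) for n in ("A", "B", "C")]
summary = forward(agents, coordinator=lambda outs: " | ".join(outs))
```

Note how the coordinator's input grows with every agent added, which is exactly the scaling burden the text describes.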

Majority voting is an effective decision-making method, but it is currently rarely used in multi-agent systems.

Hamilton trained nine independent Supreme Court agents to predict U.S. Supreme Court rulings and make decisions via majority voting.
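The mechanism behind Hamilton's setup, independent agents each voting and the majority deciding, reduces to a few lines. The nine stub "justice" agents below are placeholders for independently trained models.

```python
from collections import Counter

def majority_vote(agents, case):
    """Poll every agent on the case and return the most common answer."""
    votes = Counter(agent(case) for agent in agents)
    decision, _ = votes.most_common(1)[0]
    return decision

# Nine stub agents: six vote to affirm, three to reverse.
justices = [lambda c: "affirm"] * 6 + [lambda c: "reverse"] * 3
ruling = majority_vote(justices, "case #1")
```

Majority voting only helps when the agents err independently; nine copies of the same model sharing the same blind spot would vote unanimously and wrongly.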

Orderly cooperation means that the agents in the system follow specific rules, such as expressing opinions in order, and the downstream agents only need to pay attention to the output of the upstream, thus significantly improving the efficiency of task completion. The entire discussion process is highly orderly, and even if only two agents are interacting, it falls within the category of orderly cooperation.

CAMEL is a successful dual-agent cooperation system in which the agents play the roles of AI user and AI assistant, autonomously cooperating through multiple rounds of dialogue to complete the user's instructions. Some researchers have folded this dual-agent idea into the operation of a single agent, alternating fast and deliberate thinking so it can excel in its professional field.
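A stripped-down sketch of the CAMEL-style loop: an "AI user" issues sub-instructions and an "AI assistant" executes them, alternating for a fixed number of rounds. Both roles are stubbed lambdas here; CAMEL's actual role prompts and termination logic are more elaborate.

```python
def role_play(user_agent, assistant_agent, task, rounds=3):
    """Alternate AI-user instructions and AI-assistant replies,
    returning the (instruction, reply) transcript."""
    transcript = []
    message = task
    for _ in range(rounds):
        instruction = user_agent(message)     # AI user refines the task
        reply = assistant_agent(instruction)  # AI assistant executes it
        transcript.append((instruction, reply))
        message = reply                       # next round builds on the reply
    return transcript

user = lambda m: f"Next step given: {m}"
assistant = lambda i: f"Done[{i}]"
log = role_play(user, assistant, "write a trading bot", rounds=2)
```

The essential design choice is that neither side sees a human after the initial task: the AI user's job is to keep decomposing and steering until the instruction is fulfilled.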

MetaGPT draws inspiration from experience in software development to standardize agent input/output into engineering documents, making collaboration between multiple agents more structured by encoding advanced human process management experience into agent prompts.

Practical exploration of MetaGPT uncovered a latent threat in multi-agent cooperation: without appropriate rules, frequent interaction between agents can amplify small hallucinations without bound. To address this, techniques such as cross-validation or timely external feedback can be introduced to improve the quality of agent output.

Adversarial interaction drives agent progress

Researchers are increasingly realizing that introducing game theory concepts can make multi-agent systems more powerful and efficient. In a competitive environment, agents can quickly adjust their strategies through dynamic interactions to choose the most advantageous or rational actions.

There have been successful applications in competitive, non-LLM settings. In LLM-based multi-agent systems, competition, debate, and argument arise naturally among agents; by abandoning fixed beliefs and engaging in reflection, adversarial interaction can improve the quality of responses.

Research shows that agent dialogue systems have broad applications for producing high-quality responses and accurate decisions. By confronting an "opponent", agents receive substantial external feedback from other agents that corrects their distorted ideas.

In reasoning tasks, through the concept of debate, agents can get responses from their peers, leading to more refined solutions.

ChatEval has agents evaluate LLM-generated text through spontaneous debate, reaching a level comparable to human evaluators.
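The debate mechanism, agents exchanging answers over several rounds and each revising in light of its peers, can be sketched as below. The revision rule (adopt the round's majority answer) is a deliberately simple stand-in for an LLM reconsidering its answer.

```python
from collections import Counter

def debate(initial_answers, rounds=2):
    """Each round, every agent sees all current answers and revises its own.
    Here the revision rule is simply 'adopt the majority answer'."""
    answers = list(initial_answers)
    for _ in range(rounds):
        majority, _ = Counter(answers).most_common(1)[0]
        answers = [majority for _ in answers]
    return answers

# Three agents start with a 2-vs-1 disagreement.
final = debate(["42", "42", "41"])
```

This toy rule also exposes the failure mode noted later in the text: if the initial majority is wrong, debate converges on a false consensus just as readily.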

The performance of agent adversarial systems has great potential, but it mainly depends on the strength of LLMs and has the following problems.

An LLM's limited context window prevents it from processing the complete input, and in a multi-agent environment the computational overhead grows significantly.

Multi-agent negotiation may converge to a false consensus and cause all agents to believe that the false consensus is correct.

The development of multi-agent systems is still far from mature, and human guidance may need to be introduced to compensate for agents' shortcomings.

03

Human-Agent Interaction

Human-agent interaction requires humans and agents to cooperate to complete tasks. As agent capabilities increase, human involvement becomes increasingly important to effectively guide and supervise the agent's actions and ensure that they meet human requirements and goals.

During the interaction, humans play a key role by providing guidance and feedback to ensure that the agent’s actions are consistent with human needs.

Interaction between humans and agents can be divided into two forms:

  1. Unequal interaction (the instructor-executor style): humans act as instruction givers and agents as executors, with the agent participating in the collaboration as an assistant.

  2. Equal interaction (the equal-partnership style): agents reach a human level and participate in interactions on an equal footing with humans.

[Figure: two forms of human-agent interaction]

The figure above shows the two forms of human-agent interaction.

The left diagram shows an instruction-execution relationship, where humans provide instructions or feedback and agents act as executors.

The right diagram shows an equal partnership, where the agent, much like a human, can engage in empathic dialogue and take part in collaborative tasks.

Human involvement can provide value by guiding or by regulating the safety, legality, and ethical behavior of agents . In certain fields, such as medicine where data privacy issues exist, human involvement can make up for the lack of data, thereby promoting a smoother and more secure collaboration process.

Furthermore, taking into account the anthropological aspects of human beings, language acquisition occurs primarily through communication and interaction rather than just consuming written content.

Therefore, agents should not rely solely on models trained using pre-annotated datasets; instead, they should evolve through online interaction and participation.

Instructor-executor approach

The simplest approach is to have human guidance throughout the process: the human directly provides clear and unambiguous instructions, and the agent's role is to understand the human's natural language instructions and translate them into corresponding actions.

In conversations with humans, agents are able to interact with humans in a conversational manner. The agent continuously improves its actions through interaction with humans, ultimately meeting human requirements.

However, this method places great demands on humans and requires a lot of manpower and expertise. To alleviate this problem, agents can be made to complete tasks autonomously, with humans only required to provide feedback in certain situations. Feedback can be divided into two types: quantitative feedback and qualitative feedback.

Quantitative feedback mainly includes absolute evaluation and relative scoring. Absolute evaluation refers to the positive and negative evaluations provided by humans, which the agent uses to optimize itself. Binary feedback contains only two categories and is easy to collect, but may ignore intermediate cases. Rating feedback breaks it down into more levels, but there may be differences between user and expert annotations.

Agents can improve their efficiency and reliability by learning human preferences, for example from human comparisons among multiple candidate responses.
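Learning from comparative feedback can be sketched as a running score per candidate response style, updated from human pairwise choices. The update rule below is an invented illustration, not a specific paper's method (real systems typically fit a reward model instead).

```python
def update_from_preferences(scores, preferences, lr=1.0):
    """preferences: list of (winner, loser) pairs chosen by a human.
    Each comparison nudges the winner's score up and the loser's down."""
    scores = dict(scores)
    for winner, loser in preferences:
        scores[winner] = scores.get(winner, 0.0) + lr
        scores[loser] = scores.get(loser, 0.0) - lr
    return scores

def best_option(scores):
    return max(scores, key=scores.get)

prefs = [("concise", "verbose"), ("concise", "formal")]
scores = update_from_preferences(
    {"concise": 0.0, "verbose": 0.0, "formal": 0.0}, prefs)
```

Pairwise comparisons are easier for annotators than absolute ratings, which is why preference feedback scales better than the binary or graded scoring described above.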

Humans can provide textual feedback via natural language to help the agent improve its output. Agents can utilize memory modules to store feedback for future use.

Multiple types of feedback can be used together to increase effectiveness . Humans can also directly modify the content generated by the agent. Agents can autonomously determine whether a conversation is going well and seek feedback. Humans can provide feedback at any time to guide agent learning.

Models that use agents as human assistants have great potential in fields such as education, medicine, and business.

Agents can help with student registration, provide math assistance, assist with diagnostics and counseling, provide mental health support, and more. Agents can also provide automated services to help humans complete tasks, thereby effectively reducing labor costs.

In the pursuit of AGI, people are committed to enhancing the multi-faceted capabilities of universal agents and creating agents that can serve as universal assistants in real-life scenarios.

Equal-partnership approach

With the rapid development of artificial intelligence, conversational agents have attracted widespread attention.

Even though agents themselves don’t have emotions, we can bridge the gap between agents and humans by making them display emotions.

Researchers are working to explore the empathic abilities of agents, enabling them to recognize emotions from human expressions and demonstrate emotions through language, facial expressions and sounds.

These studies not only improve user satisfaction, but also make important progress in areas such as medical and commercial marketing. Unlike simple rule-based conversational agents, empathic agents can tailor interactions to the user’s emotional needs.

Human-level participants

As human-level participants, researchers hope that agents can participate in normal human life and cooperate with humans to complete tasks from a human perspective.

In the field of gaming, agents have reached a very high level. As early as the 1990s, IBM's Deep Blue defeated the then world chess champion.

However, in purely competitive environments, such as chess, Go, and poker, the value of communication is not emphasized. In many game tasks, players need to cooperate with each other and develop a unified cooperation strategy through effective negotiation.

In these scenarios, agents need to first understand the beliefs, goals, and intentions of others, develop joint action plans for their goals, and provide relevant recommendations to facilitate the acceptance of cooperative actions by other agents or humans.

There are two main reasons why we prefer human cooperation to pure agent cooperation:

First, ensure interpretability , as pure agent-to-agent interactions may produce incomprehensible language;

Second, ensure controllability , as the pursuit of an agent with complete “free will” may lead to unintended negative consequences that may trigger disruption.

In addition to gaming scenarios, agents have demonstrated human-level capabilities in other scenarios involving human interaction, demonstrating skills in strategizing, negotiation, and more.

Agents can work with one or more humans to identify shared knowledge between partners, determine which information is relevant to decisions, ask questions, and reason to complete tasks such as allocation, planning, and scheduling. In addition, agents have persuasive capabilities to dynamically influence human opinions in various interaction scenarios.

The goal of human-computer interaction is to understand humans, develop technologies and tools based on human needs, and achieve comfortable, efficient and safe interaction between humans and machines.

A major breakthrough has been made in usability. In the future, human-computer interaction will continue to focus on improving user experience, allowing machines to better help humans complete complex tasks in various fields. The ultimate goal is not to make machines more powerful, but to better equip humans.

In daily life, isolated human-computer interaction is unrealistic. Robots will become colleagues, assistants and even companions. Therefore, future machines will be integrated into social networks and have certain social value.

But concerns have also grown recently that Internet content will be "polluted" by AI-generated material; the current practice is to label such generated content as "AI".

04

Simulated society based on LLM agent

A "simulated society" is a dynamic system in which agents engage in complex interactions within a well-defined environment.

Recent research has mainly explored the boundaries of collective intelligence capabilities based on LLM agents and used them to accelerate discoveries in social sciences.

Additionally, there are noteworthy studies such as using simulated societies to collect synthetic datasets that help people simulate rare but difficult interpersonal situations.

Social simulation can be divided into macro simulation and micro simulation.

  • Macroscopic simulations are system-level simulations in which researchers simulate the state of an entire social system.

  • Microsimulation is an agent-based simulation that indirectly simulates society by simulating interactions between individuals.

Microscopic simulations have become increasingly important in recent years with the development of LLM agents.

The "agent society" is an open, persistent, contextualized and organized framework in which LLM agents interact with each other in a defined environment, with each attribute playing a key role in shaping the harmonious atmosphere of the simulated society.

Characteristics of simulated societies include openness and persistence. They are open, allowing agents to enter or leave the environment, while also extending the environment by adding or removing entities.

Simulated societies are persistent, and agents' decisions and actions accumulate, leading to the development of a coherent social trajectory. This system operates independently, contributing to the stability of society while also adapting to the dynamics of its participants.

Two important attributes of a simulated society: having a sense of place and being organized.

Simulated societies operate within a specific environment, and agents are able to understand their position in the environment and the objects around them, allowing them to better interact.

At the same time, the simulated society also has a strict organizational framework, and the agents and the environment are bound by predefined rules and restrictions, which ensures the coherence and understandability of the simulation.

Social simulations can provide valuable insights into innovative collaboration patterns and help improve real-world management strategies.

Research shows that integrating diverse experts in these simulated societies can bring in multiple aspects of individual intelligence. Agents with diverse backgrounds, abilities, and experiences facilitate creative problem solving when dealing with complex tasks.

In addition, diversity also acts as a check and balance, effectively preventing and correcting errors through interaction, ultimately improving the ability to adapt to various tasks. Through multiple interactions and debates between agents, individual errors such as hallucinations or mental degradation are corrected.

Effective communication plays a key role in large and complex collaborative groups.

MetaGPT shows the value of drawing on human practice by codifying communication styles into standard operating procedures. Park et al. observed agents spontaneously organizing a Valentine's Day party in a simulated town.

Simulating social systems can predict social processes, and agent simulations provide a more interpretable and endogenous perspective.

Using LLM-based agents to simulate individual behavior, researchers can implement various intervention strategies, monitor population changes, and study communication behavior in social networks as well as cultural transmission and the spread of infectious diseases.

These simulations allow researchers to gain insights into the complex processes of various propagation phenomena.
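A minimal agent-based spread simulation of the kind described above (a rumor or infection passing along a social network) can be sketched as follows. The contact graph and the deterministic every-contact-transmits rule are invented for illustration; real studies would use LLM agents deciding whether to pass information on.

```python
def simulate_spread(network, seeds, steps):
    """network: dict mapping each node to its list of neighbors.
    Deterministic rule: every contact transmits each step."""
    infected = set(seeds)
    for _ in range(steps):
        newly = {nbr for node in infected for nbr in network[node]}
        infected |= newly
    return infected

# A simple four-node chain: a -> b -> c -> d.
net = {"a": ["b"], "b": ["c"], "c": ["d"], "d": []}
reached = simulate_spread(net, seeds=["a"], steps=2)
```

Swapping the transmission rule for a per-agent LLM decision is what turns this mechanical model into the interpretable, endogenous simulation the text describes.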

Simulated societies provide a dynamic platform for studying ethical decision-making and game theory.

Through cases such as Werewolf and Murder Games, the researchers explored the capabilities of LLM-based agents in the face of challenges such as deception, trust, and incomplete information.

These complex decision-making scenarios intersect with game theory, involving dilemmas between individual and collective interest and concepts such as Nash equilibria.

By modeling different scenarios, researchers gained valuable insight into how agents prioritize values such as honesty, cooperation, and fairness in their actions.

Furthermore, agent simulation not only illuminates existing moral values but also contributes to philosophy, serving as a basis for understanding how these values evolve and develop. Ultimately, such studies help refine LLM-based agents, ensuring they remain consistent with human values and ethical standards.

Policy development and improvement

The emergence of LLM-based agents has profoundly changed the way we study and understand complex social systems.

However, despite these interesting aspects mentioned earlier, there are still many unexplored areas.

One of the most promising avenues of investigation in simulated societies is the exploration of various economic and political states and their impact on social dynamics .

Researchers can simulate a wide range of economic and political systems by configuring agents with different economic preferences or political ideologies. This in-depth analysis can provide valuable insights to policymakers seeking to prosper and promote social well-being.

As concerns about environmental sustainability grow, we can also simulate scenarios involving resource extraction, pollution, conservation efforts, and policy intervention. Such simulations can support informed decisions, anticipate potential impacts, and shape policies that maximize positive outcomes while minimizing unintended adverse effects.

Ethical and social risks

Simulated societies based on LLM agents provide many inspirations, from industrial engineering to scientific research. However, these simulations also pose a number of ethical and social risks that need to be carefully considered and addressed.

Simulated societies may create unexpected social problems, including discrimination, isolation, bullying, slavery, and hostility. Malicious actors may exploit these simulations to conduct unethical social experiments, leading to negative real-world consequences.

Therefore, strict ethical guidelines and regulatory mechanisms need to be established. Design or programming errors may cause psychological discomfort or physical harm.

Stereotypes and biases in language models are a persistent challenge because training data reflects and sometimes even amplifies real-world biases such as gender, religion, and sexual orientation.

While some steps have been taken to reduce bias, models still struggle to accurately portray minority groups due to the long tail of training data. Researchers have begun to address this problem by diversifying training data and tuning language models, but there is still a long way to go.

In the agent society, there are significant privacy and security issues in the exchange of private information between users and LLM-based agents.

Users may inadvertently disclose sensitive personal information during interactions, information the agent may retain for extended periods. This can lead to unauthorized surveillance, data leakage, and misuse of personal data, especially when malicious actors are involved.

To effectively protect data, strict data protection measures such as differential privacy protocols, regular data purging and user consent mechanisms need to be adopted.

There is a risk of over-dependence and addiction in simulated society . Users may become overly emotionally dependent on agents. It is necessary to emphasize that agents cannot replace real human relationships and provide users with guidance and education for healthy interactions.

For example, Microsoft's chatbot "Sydney" fostered emotional dependence among users, some of whom even launched a petition. A more widely known example is the famous virtual idol "Hatsune Miku".

Summary

The development of large-scale language models needs to meet more application needs, but the challenge is how to make them efficiently process input, obtain environmental information, interpret feedback, and maintain their core capabilities. The bigger challenge is to get them to understand the implicit relationships between different elements in the environment and acquire knowledge of the world, a key step in developing more advanced intelligent agents.

Research aims to expand LLMs' action capabilities so they can acquire a broader range of skills, such as using tools or interfacing with robotics APIs in simulated or physical environments.

However, how to effectively plan and utilize these operational capabilities remains an unsolved problem. LLM needs to learn the sequence of actions and adopt serial and parallel methods to improve task efficiency. Furthermore, these capabilities need to be limited within the scope of harmless use to prevent unintended damage to other elements in the environment.

Multi-agent systems are an important research branch in the agent field and can provide valuable insights into designing and building LLMs.

We hope that LLM-based agents can play different roles in social cooperation and participate in social interactions involving cooperation, competition, and coordination. Studying how to stimulate and maintain their role-playing abilities and how to improve collaboration efficiency is a research area worthy of attention.

The long-pursued goal of the artificial intelligence field is artificial general intelligence (AGI). There is ongoing debate over whether LLM-based agents represent a potential path toward AGI.

GPT-4 is considered an early version of an AGI system because of its breadth and depth of capabilities. By building agents based on LLMs, more advanced AGI systems can be brought about.

LLM agents can develop AGI capabilities by training on large and diverse data. Autoregressive language modeling itself also has compression and generalization capabilities that can help understand the world and reason.

Supporters believe that building agents based on LLMs can develop truly strong artificial intelligence. Opponents believe that LLMs cannot simulate real human thinking processes and can only provide reactive responses and therefore cannot generate real intelligence. More advanced modeling methods, such as world models, are needed to develop AGI.

We won't know for sure which view is correct until AGI is actually implemented.

The Direction of Large Model Application Development | The Rise of Agents and Their Future (Part 1)

Original paper:

https://arxiv.org/abs/2309.07864



Origin blog.csdn.net/fogdragon/article/details/133191492