The Role of Semantic Models in Smart Industrial Operations

Introduction

This paper examines the application of semantic model design and technology to the integration of industrial operations, and the evolving role of semantic computing. It compares semantic (data) modeling, as a core component of the application architecture, with more familiar architectural integration patterns. Functional capabilities describing the operational basis of the semantic model are presented, and the value of semantic models is illustrated through a series of examples with which the reader should be familiar.

Competition rewards differentiated performance, so every company must apply leading practices wisely in order to compete. Semantic computing now has enough industrial applications to be considered a leading practice. This paper therefore proposes ways in which semantic computing can supplement traditional techniques, along with possible long-term enhancements, to promote security and integrity.

In discussions of smarter factory solutions, three key elements are often described, underpinned by the concept of semantics: "Instrumented," "Intelligent," and "Interconnected." These elements support the view that the vast amount of data collected from the world around us, if federated using the three "I"s, will yield operational intelligence that drives timely communication, response, and optimization around key business tasks. In this approach, it is important to be able to interpret data for timely analysis and to gain understanding from diverse sources in varied formats and contexts. With semantic computing, calculations and analyses generalize to all instances of the similar objects to which they apply.

Data in the real world is constantly changing, so structures need to be adaptive rather than strictly predefined; this distinction is known as the "open world" versus "closed world" assumption. Semantic modeling and its techniques identify changes in underlying data and the potential interactions of those changes. In most cases, however, the role of semantics is to surface these changes promptly, in the appropriate context, so that responses to them can be recognized and acted upon appropriately.

An implemented semantic model can federate data from any connected data store into a flexible, adaptive, fit-for-purpose model that leverages and extends industry standards and ontologies. When semantic models are combined with applications whose analytics, logic, reporting, views, and so on can be applied and tuned easily and consistently, global manufacturers truly evolve toward an environment in which business and operations people have direct control over their data, their business and operational rules, and their business and operational processes. Some refer to this evolution as the "evolving ubiquitous computing model," as computing power has become highly distributed and pervasive.

Industrial architectures must be designed to handle changing, disparate data and the actual relationships implied between the data. Data sources include structured and unstructured data, sensor data (current and historical), images, audio, and video. In addition, the interactions of proposed data changes must be identified so that coordinated changes can be rolled out and disruption minimized. Not only does current data fit poorly into standard relational persistence structures, but there is also the challenge of understanding this data in context and accommodating the additions, deletions, and changes encountered without introducing unnecessary complexity.

Another major use of semantic computing is to monitor the overall mechanical integrity of the myriad different systems and services deployed in industrial facilities. Developing a common ontology, a prerequisite for developing a semantic model, is one thing; regularly verifying the integrity of a compliance program covering the regulations, standards, policies, procedures, etc., applicable to an operating industrial facility is an entirely different matter. Verification requires confirmation that plans are being followed, that system configurations have been approved, and that required critical equipment is operating; where such confirmation cannot be obtained, special efforts must be made to monitor the increased risk in an appropriate manner. If an incident continues, the system should escalate an alarm in such a way that humans cannot suppress the messages sent to higher levels.

In our world, interdependence and interaction have become too interconnected not to compel the use of technology to help us verify adherence to stated intentions. Perhaps it is time to use semantic technologies to monitor and verify security and general compliance, supported by strong, agile governance.

Consider a smarter urban traffic management application. Real-time traffic data is provided by traffic light sensors, Department of Transportation (DOT) speed sensors, and cameras. Other data critical to accurate traffic flow forecasts can come from a variety of sources, including weather reports; accident reports; tweets; traffic disruptions; calendar events, such as holidays; seasonal trends, such as beach traffic; special events, such as parades, festivals, or major sporting races; emergency dispatch events; and major news events. Each of these many data sources describes events in its own terms, which may differ from those of other sources. All of this data must be assimilated in a usable form and understood in context, and the relationships between events must be explained in order to infer intelligence.

Roads, signals, sensors, and so on represent the base model of the transportation network. Current readings, sensor conditions, event declarations, vehicles, and the like represent the condition of the road system and the activity level of each road. This more transient data must be interpreted against the model, which changes far less frequently, can be validated independently in the background, and is thus subject to an appropriate governance and development model. In fact, it is even possible to anticipate model changes by monitoring budgets, construction progress, and so on. Current readings and news feeds (whatever their origin) are more dynamic and more challenging to verify over short periods. By separating these two types of information (the model and the more transient data), additional logic can be derived to enhance validation of the transient data, thereby reducing response time.

To communicate effectively, a common understanding of events in operational workflows and their context must be drawn from these various sources. For example, a basic term such as "vehicle" may be ambiguous across data sources and providers, while distinctions such as car, light truck, semi-trailer, bus, or motorcycle may be significant. Some features, such as the number of axles or the number of occupants, can be important differentiators. An ontology allows the appropriate definition of "vehicle" to be applied according to the situation. Of course, the relevant data being collected is constantly changing. Fortunately, model data changes much less frequently than live data feeds, and a semantic computing inference engine helps make sense of both model changes and transient data changes.

Semantic modeling defines data and the relationships among entities in that data. Industrial or operational information models provide the ability to abstract different types of data and an understanding of how data elements are related. A semantic model is an information model that supports both fixed and ad hoc modeling of data entities and their relationships. The total set of entities in our semantic model includes a taxonomy of classes that can be used to represent the real world. Together, these ideas are represented by an ontology, a vocabulary of the semantic model that provides the basis for forming user-defined model queries. The model supports the representation of entities and their relationships, supports constraints on those relationships and entities, and supports query-based aggregate terms, such as the definition of "vehicle". This provides the semantic composition of the information model.
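As a minimal sketch of the taxonomy idea above, the following Python fragment asserts a small class hierarchy and classifies individuals against it. The class names, individuals, and dictionary representation are illustrative assumptions, not the API of any particular semantic toolkit:

```python
# A tiny class taxonomy with subclass inference.
# All names (Vehicle, Car, Bus, unit_17, ...) are invented for illustration.

SUBCLASS_OF = {          # child class -> parent class
    "Car": "Vehicle",
    "LightTruck": "Vehicle",
    "Bus": "Vehicle",
    "Motorcycle": "Vehicle",
}

INSTANCE_OF = {          # individual -> asserted class
    "unit_17": "Car",
    "unit_42": "Bus",
    "unit_99": "Motorcycle",
}

def ancestors(cls):
    """Walk the taxonomy upward to collect all superclasses."""
    seen = []
    while cls in SUBCLASS_OF:
        cls = SUBCLASS_OF[cls]
        seen.append(cls)
    return seen

def instances_of(cls):
    """All individuals whose asserted class is cls or a subclass of cls."""
    return sorted(i for i, c in INSTANCE_OF.items()
                  if c == cls or cls in ancestors(c))

print(instances_of("Vehicle"))   # all three units, regardless of subclass
print(instances_of("Bus"))       # ['unit_42']
```

A query against the aggregate term "Vehicle" thus returns every instance of every subclass, which is the sense in which calculations generalize across similar objects.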

In the World Wide Web Consortium (W3C) formulation of semantic computing, access to data through individual elements, the ontological alignment rules applied to data validation and computation, and the derived inferences are all separated from data storage. This separation is a basic concept underpinning the productivity and mechanical integrity of a system and its overall architecture.

Current semantic computing technologies enable:

  • Federate data from multiple data stores into a normalized form without moving it from its primary data store or system of record (SOR).
  • Create an agnostic data model, dynamically built by engineers, that focuses on current needs.
  • Provide data to consumers in any form or schema and with any tool required.
  • Integrate business and operational rules in an information model independent of any single application.
  • Develop algorithms, etc., that run on all existing and new instances of the desired object without special effort from the user.
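The first capability in the list, federating data without moving it out of its SOR, can be sketched as follows. The SOR contents, tag names, and the alias-to-source mapping are all invented for illustration:

```python
# Hedged sketch: two systems of record exposed through one normalized
# naming layer. The data never leaves its SOR; the model only maps
# semantic aliases to native lookups.

erp = {"WO-1001": {"product": "SKU-7", "qty": 500}}   # hypothetical ERP work orders
historian = {"TT-101": 78.4, "TT-102": 80.1}          # hypothetical historian tags

# Mapping layer: semantic alias -> a resolver into the owning SOR.
MODEL = {
    "order.product":       lambda: erp["WO-1001"]["product"],
    "reactor.temperature": lambda: historian["TT-101"],
}

def resolve(alias):
    """Fetch a value through the semantic alias; data stays in place."""
    return MODEL[alias]()

print(resolve("order.product"))        # 'SKU-7'
print(resolve("reactor.temperature"))  # 78.4
```

Consumers see a consistent vocabulary ("order.product") rather than SOR-specific keys ("WO-1001", "TT-101"), which is what lets views and rules be defined independently of any single application.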

These are major shifts from typical approaches to information management and business processes. They require repositioning and restructuring IT teams to produce information differently than business and operations teams traditionally have.

The long-term goal of semantic computing is to enable an agile, adaptive environment where:

  • data + model = information
  • information + rules = knowledge
  • knowledge + action = result

In our traffic management example, the semantic model allows the user to understand things such as (1) the relationship between a traffic light sensor and the intersection being monitored, (2) the association of any given traffic light sensor with other sensors on the same road, or (3) the relationship of the road to other intersecting roads and major highways. A semantic model can also generate similar information about bus lines or subway lines to further describe the types of services available within a service location. Inferred relationships between stations and street addresses, service lines, and surface road alignments also provide a basis for understanding the impact of specific disruptions in public transport service on road traffic.
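The three relationships enumerated above can be illustrated with an ad hoc triple list; the sensor, road, and intersection names below are hypothetical:

```python
# Illustrative triples for the traffic example: (1) sensor-to-intersection,
# (2) sensors sharing a road, (3) road-to-road relationships.

triples = [
    ("Sensor S12", "monitors",   "Intersection 5th&Main"),
    ("Sensor S12", "on road",    "Main St"),
    ("Sensor S14", "on road",    "Main St"),
    ("Main St",    "intersects", "5th Ave"),
    ("Main St",    "feeds into", "Highway 9"),
]

def related(subject, predicate):
    """Objects reachable from subject via predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def peers_on_same_road(sensor):
    """Relationship (2): other sensors sharing a road with this sensor."""
    roads = set(related(sensor, "on road"))
    return [s for s, p, o in triples
            if p == "on road" and o in roads and s != sensor]

print(related("Sensor S12", "monitors"))  # which intersection S12 watches
print(peers_on_same_road("Sensor S12"))   # other sensors on the same road
print(related("Main St", "intersects"))   # intersecting roads
```

Note that `peers_on_same_road` is an inferred relationship: no triple directly links the two sensors, yet the graph yields the association by traversal.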

As an added complication, a single application must interact with multiple domain models or domain ontologies. One way to achieve this interaction is to merge the existing ontologies into a new ontology. It is not necessary to merge all information from each original ontology, since a full integration may not be logically satisfiable. Furthermore, the new ontology may introduce new terms and relations for linking related items in the source ontologies. In the examples in later sections, this paper examines more closely how semantic models are best constructed.

Manufacturing Complexity Issues

In today's business environment, the challenge of effective decision-making is exacerbated by an influx of data and an increasing need for actionable information. Line-of-business managers look to the information infrastructure to alert them early to deteriorating situations so they can mitigate them through contextual visibility and decision support in business intelligence (BI) and operational intelligence (OI) solutions. In a manufacturing enterprise, information exists in various islands that must connect and communicate with each other to coordinate the execution of operational workflows. Each island is a system of record (SOR) for an operating sector or region, in which the information system is the authoritative source for a data element or piece of information. SORs include ERP, MES, LIMS, QMS, WMS, SCADA, and I/O equipment. The information structure in each SOR changes with the number of product stock keeping units (SKUs), the production mix, line throughput rates, and continuous improvement measures and process technology improvements. SORs use very specific data models designed to support the particular application functional requirements of real-time workflows. In today's rapidly changing world, the static nature and long adoption time frames of MOM and process control applications are major inhibitors of continuous improvement and business competitiveness, and major cost factors in implementing and maintaining a comprehensive MOM and process control solution.

Typically, SORs have limited capacity to adapt to changes in processes and data at reasonable cost and within a short period of time. This is the main reason why 60-90% of IT budgets are spent on maintenance rather than enhancements. By exposing the data of SORs to semantic computing applications, additional functionality can be integrated more easily; more importantly, formal compliance with all regulations, standards, policies, procedures, etc., can be validated within each SOR and between SORs, independently of each SOR.

Operations within a manufacturing enterprise are driven by ever-changing structures and business conditions. Operational processes (activities) utilize information and capabilities from multiple SORs. Each SOR participates in the work process and executes it with equipment, materials, and personnel, forming a dynamic, ad hoc operational process and information ontology. Since 80-90% of the information used by knowledge workers already exists, semantic modeling techniques can effectively centralize actionable data from multiple SORs, allowing knowledge workers to use each SOR efficiently to perform accurate and timely actions. Semantic technology can in fact update the underlying data stores, but for the near term, knowledge workers may be better served by updating data through the appropriate SORs.

The ISA-95 Part 3 standard "Activity Models for Manufacturing Operations Management (MOM)" identifies four activity models involving the various SORs, data and events that must be exchanged for MOM:

  • Production operations management.
  • Quality Operations Management.
  • Maintenance operations management.
  • Inventory operations management.

Part 3 of ISA-95 states that MOM activities are the activities by which a manufacturing plant coordinates personnel, equipment, materials, and energy in the process of transforming raw materials or components into products. Each MOM activity model is composed of tasks within functions and data exchanges between tasks and functions. According to the concepts of the Purdue Enterprise Reference Architecture (PERA) (www.pera.net), the original reference model for ISA-95, functions and tasks can be performed by physical equipment, humans, or information systems. For each operation in a work order's operational route, some subset of the four MOM activities, functions, tasks, and exchanges is invoked to execute the workflow for that operation, using various equipment, SORs, and people to complete the work order. Under this model, operating resources (materials [raw materials, intermediate products, consumables, and finished products], equipment and tools, MOM and process control systems, and people) are treated as peer participants, alongside the SORs, in the execution of the operational process.

Operating resources are sources of data and targets of state-changing function calls, similar to a database. In other words, a given task or data exchange can be performed only by a person, a device, or a system of record, that is, an information system that is the authoritative source for a given data element or piece of information. Operating resources are applied to each operation in a route as a set of tasks for each activity specified in the operation definitions and rules. For a given product and production line, the operational definitions and rules for scheduling and executing the operational plan usually reside in the various SORs. A coordinated approach across all operations of a production route must validate changes in operation definitions during order execution and must react to changes while maintaining consistency of information between SORs, so as to produce the expected results of the tasks and data exchanges for each activity.

Data from the various SORs are used or changed as Layer 4 business processes and Layer 3 operational processes are executed. Events from different sources are triggered to advance process execution or to initiate new business and operational processes. Traditionally (i.e., before "semantic" computing), some of these processes were defined and executed outside of the SORs, leaving little history of their execution within those systems. In other words, many of these systems do not store the history of information changes needed to understand causality, which in turn is needed to understand "what, when, and why." This information is critical for the analytics that drive continuous process improvement within manufacturing companies. With semantic computing, this information is easily captured and becomes part of the formal record.

Applications such as process historians, if used to capture information, typically include only temporal context, rather than the complex, transactional data held in SORs such as ERP and MOM applications. Likewise, ERP systems often lack detailed execution information or high-speed transactional capabilities, such as identifying which production request was processed first following maintenance on a particular piece of equipment. To get this information from legacy systems, someone has to know how to look up records related to equipment, operations orders, work orders, and operations from the operations execution management systems (production, quality, inventory, and maintenance). The queried records must then be matched and reconciled by the timestamps of records, production orders, and work orders to find the full genealogy of production requests processed before and after the equipment was last returned to service. Time, as a consistent data element across SORs, is a relevant context that can be used to link information between SORs; of course, this assumes that time is properly synchronized between the systems. With semantic computing, these queries can be structured and reused in an easily maintainable way, with an audit trail.
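The genealogy query just described amounts to a timestamp join across two SORs. The sketch below uses invented record layouts, equipment IDs, and timestamps, and assumes clocks are synchronized between the systems:

```python
# Which production request was processed first after the last
# maintenance on a given piece of equipment? Two hypothetical SOR
# extracts are joined on equipment ID and timestamp.
from datetime import datetime

maintenance = [  # from the maintenance management SOR
    {"equipment": "MIX-3", "completed": datetime(2023, 5, 1, 14, 0)},
    {"equipment": "MIX-3", "completed": datetime(2023, 5, 9, 9, 30)},
]
production = [   # from the production execution SOR
    {"order": "PR-201", "equipment": "MIX-3", "start": datetime(2023, 5, 8, 7, 0)},
    {"order": "PR-202", "equipment": "MIX-3", "start": datetime(2023, 5, 9, 11, 0)},
    {"order": "PR-203", "equipment": "MIX-3", "start": datetime(2023, 5, 10, 6, 0)},
]

def first_order_after_maintenance(equipment):
    # Most recent maintenance completion for this equipment...
    last_pm = max(m["completed"] for m in maintenance
                  if m["equipment"] == equipment)
    # ...then the earliest production start after that point.
    after = [p for p in production
             if p["equipment"] == equipment and p["start"] > last_pm]
    return min(after, key=lambda p: p["start"])["order"]

print(first_order_after_maintenance("MIX-3"))  # 'PR-202'
```

Structuring the query once, with time as the linking context, is exactly what a semantic layer lets an organization reuse and audit instead of reconstructing it manually for every investigation.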

Manufacturing companies have a wealth of standard operating procedures (SOPs) that define how operational processes should be performed. Building on the example above, a manufacturing company might have detailed SOPs for equipment maintenance procedures, qualification of equipment into production, and routing and scheduling of products through the manufacturing process.

Currently and historically, SOPs exist in paper form and are referenced as needed. Since SOPs are manually executed workflows (outside the SORs), building the genealogy and the detailed understanding of what happened, when it happened, and why requires, in the best case, the collection and correlation of paper records; in the worst case, it requires meetings where people reconstruct the record by discussing what they remember happening. With semantic computing and automated business and operational processes this will change, but such worthwhile change meets considerable resistance. Manufacturing companies today have a strong, inherent resistance to change, and the reality is that people must understand the risks involved and the benefits available before the rewards can be realized.

Compounding the problem, the information in SORs is often structured around a specific task or workflow rather than around a domain expert's view of the broader challenge. In most cases, modifying, replacing, or extending SORs to meet new requirements is cost-prohibitive, even when it is an option. The pragmatic solution is a less intrusive approach that meets the requirements of a dynamic manufacturing environment and provides multiple forms of access to relevant, timely information for the various departments of the enterprise and plant. In fact, domain experts across businesses and industries need a structured information environment that alerts them to significant events, provides contextual views for their analysis, and thereby informs their decisions. With semantic computing, semantic models (with their ability to decouple data from data storage and to act on that data) play a key role in this pragmatic form of MOM architecture.

Applying the inference and understanding provided by semantic models through semantic computing, whether for urban traffic management or refinery management, is critical to correctly deriving insights from monitored SORs and process instruments. This near real-time analysis ultimately leads to optimized, agile business and operational processes through timely, responsive decision-making. Semantic computing greatly enhances the federation of data from various SORs by using its reasoning capabilities to alert the right people in the right business process to take the right action in a timely manner, and to escalate if it is not resolved in time.

Why a Semantic Model?

What exactly is a semantic model, and how does it help with this type of operational system integration? First, for clarity, consider the difference between models in the Unified Modeling Language (UML) and the Web Ontology Language (OWL):

UML is a modeling language used in software engineering, primarily for object-oriented system design artifacts. When explaining information-oriented architecture (IOA)-based operational system integration in this UML context, the semantic model serves as the functional core of the application, providing a navigable data model and association relationships that collectively represent our target domain knowledge.

The Web Ontology Language is a family of knowledge representation languages for writing ontologies. These languages feature formal semantics and XML-based serialization for the Resource Description Framework (RDF)/RDF Schema (RDFS)/Semantic Web. OWL is endorsed by the World Wide Web Consortium (W3C) and has attracted academic, medical, and commercial interest. Data described by an ontology in the OWL family is interpreted as a set of "individuals" and a set of "property assertions" that relate these individuals to each other. An ontology consists of a set of axioms that place constraints on sets of individuals (called "classes") and on the types of relationships allowed between them. These axioms provide semantics that allow a system to infer additional information from the explicitly provided data. A complete introduction to OWL's expressive capabilities is provided in the W3C's OWL Guide.

Ontologies define the terms used to describe and represent a domain of knowledge. Ontologies are used by people, databases, and applications that need to share information about a domain (a domain is simply a specific subject area or area of knowledge, such as medicine, toolmaking, real estate, car repair, or financial management). Ontologies include computer-usable definitions of fundamental concepts in a domain and the relationships between them (note that here and throughout this paper, "definition" is not used in the technical sense understood by logicians). They encode knowledge within one domain as well as knowledge that spans domains. In this way, they make this knowledge reusable.

The term ontology has been used to describe artifacts with varying degrees of structure, ranging from simple taxonomies, such as tree hierarchies, to metadata schemes, to logical theories. The Semantic Web requires ontologies with a considerable degree of structure, specifying descriptions for the following kinds of concepts:

  • Classes (general things) in the many domains of interest.
  • Possible relationships between things.
  • The properties (or attributes) that these things may have.

Ontologies are usually expressed in a logic-based language so that detailed, precise, consistent, reasonable, and meaningful distinctions can be made between classes, attributes, and relationships. Some ontology tools can use ontologies for automatic reasoning to provide advanced services for intelligent applications such as: conceptual/semantic search and retrieval, software agents, decision support, speech and natural language understanding, knowledge management, intelligent databases, and e-commerce.

Ontologies play an important role in the emerging Semantic Web, as a way of representing the semantics of documents and enabling them to be used by web applications and intelligent agents. Ontologies are very useful to a community as a way of structuring and defining metadata terms that are currently being collected and standardized. Using ontologies, future applications can be "intelligent" in the sense that they work more accurately on a human-understandable conceptual level.

Semantic models allow users to ask questions about what is happening in the modeled system in a more natural way, via structured queries, transactions, interfaces, and reports. For example, an oil production enterprise might consist of five geographic regions, each containing three to five rigs, with each rig monitored by several control systems, each serving a different purpose. One control system might monitor the temperature of the oil produced, while another might monitor the vibration of the pump. The semantic model allows the user to ask a question such as "What is the temperature of the oil being produced on Platform 3?" without having to know details such as which specific control system monitors this information, or which physical sensor (often represented by an OPC tag) reports the oil temperature on that platform.

Therefore, a semantic model is used to link the physical world (in this example) as known to control systems engineers to the real world as known to line-of-business leaders and decision makers. In the physical world, a control point, such as a valve or a temperature sensor, is known by its identifier in a particular control system, perhaps by a tag name. This may be one of thousands of identifiers in any given control system, and there may be many similar control systems throughout the enterprise. To further complicate the problem of information referencing and aggregation, other data points of interest may be managed through database, file, application or component services, each with its own interface methods and naming conventions for data access.

Therefore, a key value of a semantic model is to provide access to information in a real-world context in a consistent manner. In a semantic model implementation, this information is identified using "triples" of the form "subject-predicate-object". For example:

  • Tank 1 <has temperature sensor> Sensor 7.
  • Tank 1 <is> part of platform 4.
  • Platform 4 <is> part of Zone 1.

Together these triples form an ontology of regions that can be stored in a model server. This information can be easily traversed using the model query language to answer questions such as "What is the temperature of tank 1 on platform 4?" This is much easier than without a semantic model that relates engineering information to the real world.
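A minimal sketch of this traversal in Python, using an in-memory triple list; the sensor reading and the wiring of "has temperature sensor" are illustrative assumptions:

```python
# The three triples from the text, plus a hypothetical latest reading
# (e.g. from a historian), and a traversal answering the question
# "What is the temperature of tank 1 on platform 4?"

triples = [
    ("Tank 1",     "has temperature sensor", "Sensor 7"),
    ("Tank 1",     "part of",                "Platform 4"),
    ("Platform 4", "part of",                "Zone 1"),
]
readings = {"Sensor 7": 61.5}   # invented sensor value

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is asserted."""
    return [o for s, p, o in triples if s == subject and p == predicate]

def tank_temperature(tank, platform):
    # Confirm the containment relationship, then follow the sensor link.
    if platform not in objects(tank, "part of"):
        raise LookupError(f"{tank} is not on {platform}")
    sensor = objects(tank, "has temperature sensor")[0]
    return readings[sensor]

print(tank_temperature("Tank 1", "Platform 4"))  # 61.5
```

The caller never names Sensor 7 or its control system; the model resolves the real-world question to the engineering identifier.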

Another advantage of a semantic model for such applications is maintenance. Refer to Figure 2-1.

Figure 2-1: Information Model Structure

The real-world model described here can be implemented with any of the types of models shown in Figure 2-1. Relationships between entities in a relational model are established through explicit keys (primary and foreign keys) and association entities for many-to-many relationships. Changing a relationship in this case is cumbersome because it requires changing the underlying model structure itself, which is difficult with a populated database. Queries against data in a relational model can also be cumbersome, as they can result in very complicated WHERE conditions or extensive table joins.

Tree models (middle part of Figure 2-1) have similar limitations when it comes to real-world updates, and are not very flexible when trying to traverse the model "laterally".

The graph model (right side of Figure 2-1) is an implementation of the semantic model to make it easier to query and maintain the model after deployment. For example, a new relationship must be represented that was not anticipated at design time. With the triple storage notation, additional notations are easily maintained. A new triple is simply added to the data store. This is a key point. Relationships are part of the data, not part of the database structure, nor part of a particular SOR. Likewise, the model allows for traversal from many different angles to answer questions that were not considered at design time. In contrast, other types of database designs may require structural changes to answer new questions that arise after the initial implementation.
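The key point, that relationships are data rather than schema, can be illustrated as follows; the equipment names and predicates are hypothetical:

```python
# In a graph/triple model, an unanticipated relationship is just
# another row of data: no schema change is needed to record it or
# to answer new questions about it.

triples = [
    ("Pump 2",   "part of",  "Platform 4"),
    ("Sensor 7", "monitors", "Pump 2"),
]

# A relationship nobody modeled at design time: append it as data.
triples.append(("Pump 2", "backup for", "Pump 1"))

def subjects(predicate, obj):
    """Lateral traversal: which subjects stand in this relation to obj?"""
    return [s for s, p, o in triples if p == predicate and o == obj]

print(subjects("backup for", "Pump 1"))   # ['Pump 2']
print(subjects("part of", "Platform 4"))  # ['Pump 2']
```

The equivalent change in a relational design would typically require a new column or join table and a migration of the populated database.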

Semantic models (based on graphs) allow easy inference in a non-linear fashion. Example: Consider an online service to buy books or music. Such an app should be very good at making additional purchase suggestions based on your buying patterns. This is very common for eCommerce sites that offer recommendations such as "Because you liked this movie, you might also like these movies" or "Because you liked this music, you might also like the following".

One way to achieve this is to use a semantic model, and add a relationship like this:

A <similar to> B

In addition, an ontology is built where both A and B belong to a music genre named "New Age". Once these relationships are established in the model, these types of recommendations can be easily provided when needed. 
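A sketch of this recommendation inference over an in-memory triple list; the album names and the "New Age" genre memberships are invented for illustration:

```python
# Recommendations from two kinds of triples: explicit "similar to"
# links, and shared membership in a genre class.

triples = [
    ("Album A", "similar to", "Album B"),
    ("Album A", "in genre",   "New Age"),
    ("Album B", "in genre",   "New Age"),
    ("Album C", "in genre",   "New Age"),
]

def recommend(item):
    """Suggest explicit 'similar to' targets plus genre-mates."""
    suggestions = {o for s, p, o in triples
                   if s == item and p == "similar to"}
    genres = {o for s, p, o in triples if s == item and p == "in genre"}
    suggestions |= {s for s, p, o in triples
                    if p == "in genre" and o in genres and s != item}
    return sorted(suggestions)

print(recommend("Album A"))  # ['Album B', 'Album C']
```

Album C is recommended even though no direct "similar to" link exists; the shared genre class supplies the inferred association.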
