Data Integration through Service-based Mediation for Web-enabled Information Systems
Yaoling Zhu, Dublin City University, School of Computing, Dublin 9, Ireland, Phone: ++353 +1 7005620, Fax: ++353 +1 700 5442, Email: [email protected]
Claus Pahl, School of Computing, Dublin City University, Dublin 9, Ireland, Phone: ++353 +1 7005620, Fax: ++353 +1 700 5442, Email: [email protected]
Abstract
The Web and its underlying platform technologies have often been used to integrate existing software and information systems. Traditional techniques for data representation and transformations between documents are not sufficient to support a flexible and maintainable data integration solution that meets the requirements of modern complex Web-enabled software and information systems. The difficulty arises from the high degree of complexity of data structures, for example in business and technology applications, and from the constant change of data and its representation. In the Web context, where the Web platform is used to integrate different organisations or software systems, the problem of heterogeneity additionally arises. We introduce a specific data integration solution for Web applications such as Web-enabled information systems. Our contribution is an integration technology framework for Web-enabled information systems comprising, firstly, a data integration technique based on the declarative specification of transformation rules and the construction of connectors that handle the integration and, secondly, a mediator architecture based on information services and the constructed connectors to handle the integration process.
Keywords
Web Applications, Data Integration, Software Architecture, Data Models, Information System Design
INTRODUCTION
The Web and its underlying platform technologies have often been used to integrate existing software and information systems. Information and data integration is a central issue in this context. Basic techniques based on XML for data representation and XSLT for transformations between XML documents are not sufficient to support a flexible and maintainable data integration solution that meets the requirements of modern complex Web-enabled software and information systems. The difficulty arises from the high degree of complexity of data structures, for example in business
and technology applications, and from the constant change of data and its representation. In the Web context, where the Web platform is used to integrate different organisations or software systems, the problem of heterogeneity additionally arises. This calls for a specific data integration solution for Web applications such as Web-enabled information systems.

[Note: This chapter appears in "Software Engineering for Modern Web Applications: Methodologies and Technologies", edited by Daniel M. Brandon, Copyright 2008, IGI Global, www.igi-global.com. Posted by permission of the publisher.]
The advent of Web services and service-oriented architecture (SOA) has provided a unified way to expose the data and functionality of an information system. Web services are provided as-is at a certain location and can be discovered and invoked using Web languages and protocols. SOA is a service-based approach to software application integration. The use of standard technologies reduces heterogeneity and is therefore central to facilitating application integration. The Web services platform is considered an ideal infrastructure to solve problems in the data integration domain such as heterogeneity and interoperability (Orriens et al., 2003; Haller et al., 2005; Zhu et al., 2004). We propose a two-pronged approach to address this aim: firstly, data integration and adaptivity through declarative, rule-based service adaptor definition and construction, and, secondly, a mediator architecture that enables adaptive information service integration based on the adaptive service connectors. Abstraction has been used successfully to address flexibility problems in data processing - database query languages are a good example here.
XML as a markup language for document and data structuring has been the basis of many Web technologies. With XML-based transformation languages like XSLT, the XML Stylesheet Transformation Language, XML-based data can be translated between formats. With recent advances in abstract, declarative XML-based data query and transformation languages beyond the procedural XSLT, this technology is ready to be utilised in the Web application context. The combination of declarative abstract specification and automated support of the architecture implementation achieves the necessary flexibility to deal with complexity and the maintainability of constantly changing data and system specifications.
Our objective is to explore and illustrate solutions to compose a set of data integration services. The data integration services deliver a unified data model built on top of individual data models in dynamic, heterogeneous and open environments. The presentation of this technology framework aims to investigate the practical implications of current research findings in Web information systems technology.
A lightweight mediated architecture for Web services composition shall be at the centre of our solution. Data integration is a central architectural composition aspect. The flexibility of the architecture to enable information integration is essential in order to separate the business process rules from the rest of the application logic. Therefore, the data transformation rules are best expressed at the abstract model level. We apply our solution to the Web Services platform in the context of information technology services management in the Application Service Provider (ASP), or on-demand, business area. We focus on this context to illustrate problems and solutions. Portals provided by ASPs, where data might come from different sources, are classical examples that motivate our research. In order to consume the information, the data models and representations need to be understood by all participants. The ASP maintains the application, the associated infrastructure, and the customer's data. The ASP also ensures that systems and data are available when needed.
The chosen area demonstrates the need to support deployment of Web service technology beyond toy examples (Stern & Davies, 2004). It is a specific, but important area due to the need to find solutions to accommodate constant structural changes in data representations. Two central themes shall be investigated:
- the identification of data model transformation rules and ways to express these rules in a formal, but also accessible and maintainable way, which is central to the data integration problem and its automation,
- service composition to enable interoperability through connector and relationship modelling based on workflows and business processes.
Our contribution based on these themes is an integration technology framework for Web-enabled information systems comprising
- a data integration technique based on the declarative specification of transformation rules and the construction of connectors that handle the integration in a software system,
- a mediator architecture based on information services and the constructed connectors to handle the integration process.
We start our investigation by providing some data integration background. We then present the principles of our declarative data integration technique. The mediator architecture that realises the data integration technique for Web services is subsequently presented. A larger application scenario will then be discussed. We end with some conclusions.
BACKGROUND
Data Integration Context
The Application Service Provider or ASP business model, which has been embraced by many companies, promotes the use of software as a service. Information Systems (IS) outsourcing is defined as the handing over to a third party of the management of IT and IS infrastructure, resources and/or activities (Willcocks & Lacity, 1998). The ASP takes primary responsibility for managing the software application on its infrastructure, using the Internet as the delivery channel between each customer and the primary software application. The ASP maintains the application and ensures that systems and data are available when needed. Handing over the management of corporate information systems to third-party application service providers in order to improve the availability of the systems and reduce costs is changing the ways that we manage information and information systems.
Information integration aims at bringing together various types of data from multiple sources such that it can be accessed, queried, processed and analysed in an integrated and uniform manner. In a large modern enterprise, it is inevitable that different parts of the organization will use different systems to produce, store, and search their critical data.
Recently, service-based platforms have been used to provide integration solutions for ASP applications. Data integration in these types of collaborating systems is necessary. This problem has been widely addressed in component-based software
development through adaptor and connector approaches (Crnkovic & Larsson, 2000; Szyperski, 2002). In the service-based Web application context, the data in XML representation retrieved from the individual Web services needs to be merged and transformed to meet the integration requirements. The XML query and transformation rules that govern the integration may change; therefore, the programs for building up the connectors that facilitate the connection between integrated Web services and data service providers need to be adjusted or rewritten. As with schema integration, the schema-mapping task cannot be fully automated since the syntactic representation of schemas and data does not completely convey the semantics of different data sources. As a result, for both schema mapping and schema integration, we must rely on an outside source to provide some information about how different schemas (and data) correspond. For instance, a customer can be identified in the configuration management repository by a unique customer identifier; or, the same customer may be identified in the problem management repository by a combination of a service support identifier and its geographical location. In this case, a transformation might be necessary; see Fig. 1 for a visualisation of the customer identifier example.
Fig. 1. Example of Data Integration in Adaptive Service Architectures - two Data Schemas that need to be transformed into one another.
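Since Fig. 1 is reproduced here only as a graphic, the following sketch spells out, in plain XML, the kind of source and target documents it refers to. The element names are taken from the transformation specification in Fig. 2 below; the element contents are left as placeholders.

```
<!-- Source document (provider-side schema, element names as in Fig. 2) -->
<arrayOfCustomer>
  <item>
    <orgName>...</orgName>
    <companyId>...</companyId>
    <gcdbOrgId>...</gcdbOrgId>
    <countryCode>...</countryCode>
    <csiNumber>...</csiNumber>
  </item>
  <!-- further item elements -->
</arrayOfCustomer>

<!-- Target document (global schema, element names as in Fig. 2) -->
<CustomerArray>
  <Customer>
    <nameAsContracted>...</nameAsContracted>
    <companyId>...</companyId>
    <serviceOrganizationIdentifier>...</serviceOrganizationIdentifier>
    <supportIdentifier>
      <CustomerSupportIdentifier>...</CustomerSupportIdentifier>
      <ISOCountryCode>...</ISOCountryCode>
    </supportIdentifier>
  </Customer>
  <!-- further Customer elements -->
</CustomerArray>
```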
Data Integration Principles
Information integration is the problem of combining heterogeneous data residing at different sources, and providing the user with a unified view (Lenzerini, 2002). This view is central in any attempt to adapt services and their underlying data sources to specific client and provider needs. One of the main tasks in information integration is to define the mappings between the individual data sources and the unified view of these sources and vice versa to enable this required adaptation, as the example in Fig. 1 illustrates. The data integration itself is defined using transformation languages.
There are two major architectural approaches to the data integration problem that provide the infrastructure for the execution of transformations (Widom, 1995).
- Data warehousing is an eager or in-advance approach that gathers data from the appropriate data sources to populate the entities in the global view. A data warehousing approach to integration is suitable for data consumers who want access to local copies of data so that it can be modified and processed to suit their business needs.
- In contrast, the mediated approach extracts only the export schemas in advance; the data itself is retrieved on demand. A mediated approach to integration is suitable for information that changes rapidly, for service environments that change, for clients that need tailored data, for queries that operate over large amounts of data from numerous information sources and, most importantly, for clients that need the most recent state of the data.
XSLT Shortcomings
XSLT is the most widely used language for XML data integration, but XSLT transformations are difficult to write and maintain for large-scale information integration. It is difficult to separate the source and target parts of the rules as well as the filtering constraints. The verbosity of XML makes manual specification of data and transformations difficult in any case. With this difficulty in mind, we propose a declarative query and transformation approach yielding more expressive power and the ability to automatically generate query programs as connectors to improve the development of service-based data integration in Web-based information systems.

XSLT works well for transforming data output from one Web service to another in an ad-hoc manner. XSLT code is, however, difficult to write and almost impossible to reuse in a large enterprise integration solution. The syntactic interleaving of the query part and the construction part of an XSLT transformation program is hard to read, and often new programs are needed even when a small portion of the data representation changes. XSLT also does not support the join of XML documents. In our context, we would need to merge several source XML documents into one document before it can be transformed into another document according to an over-arching general schema.
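To make this concrete, the following hand-written sketch shows how the mapping of Fig. 2 below might be expressed in plain XSLT. It is an assumed illustration, not code from the original system, and it mirrors the variable bindings of Fig. 2; note how the querying (match and select expressions) and the construction of the target elements are interleaved, so that renaming a single source or target element forces edits throughout the template.

```
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <!-- query part (match/select) and construction part (literal result
       elements) are mixed within one template -->
  <xsl:template match="/arrayOfCustomer">
    <CustomerArray>
      <xsl:for-each select="item">
        <Customer>
          <nameAsContracted><xsl:value-of select="orgName"/></nameAsContracted>
          <companyId><xsl:value-of select="companyId"/></companyId>
          <serviceOrganizationIdentifier>
            <xsl:value-of select="gcdbOrgId"/>
          </serviceOrganizationIdentifier>
          <supportIdentifier>
            <CustomerSupportIdentifier>
              <xsl:value-of select="countryCode"/>
            </CustomerSupportIdentifier>
            <ISOCountryCode><xsl:value-of select="csiNumber"/></ISOCountryCode>
          </supportIdentifier>
        </Customer>
      </xsl:for-each>
    </CustomerArray>
  </xsl:template>
</xsl:stylesheet>
```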
A DECLARATIVE DATA INTEGRATION AND TRANSFORMATION TECHNIQUE
A declarative, rule-based approach can be applied to the data transformation problem (Orriens et al., 2003). A study by Peltier et al. (2001) introduces the MTRANS language, which is placed on top of XSLT to describe data model transformations. XSLT is generated from an MTRANS specification: the transformation rules are expressed in MTRANS and then parsed using a generator. Peltier et al. argue that the data transformation rules are best expressed declaratively at the abstract model level rather than at the concrete operational level in order to reduce the complexity of the transformation rules.
A data integration engine for the Web services context can be built in the Web service business process execution language WS-BPEL, which is another example of the benefits of abstraction in transformation and integration. A common over-arching information model governs what types of services are involved in the composition. In (Rosenberg & Dustdar, 2005), a business rule engine-based approach has been introduced to separate the business logic from the executable WS-BPEL process.
These two examples illustrate current work in this context. Now, a detailed discussion shall elicit the specific requirements for service-based information integration.
Requirements for Mediated Integration
The flexibility of the architecture in which information integration is to be realised is essential in order to separate the business logic from the rest of the application logic. Therefore, the data transformation rules are best expressed at an abstract business model level. These rules, stored in a repository, can be used to dynamically create XSLT-based transformations using a connector or integration service as the mediator. These integration services are the cornerstones of a mediator architecture that processes composite client queries that possibly involve different data sources provided by different Web services. We start our investigation by discussing the properties of suitable integration and transformation languages.
XML data might be provided without an accompanying schema and is sometimes not well-formed; XML data often contains nested structures. Therefore, transformation techniques need more expressive power than traditional database languages such as relational algebra or SQL. The characteristics of an XML query language have been studied extensively (Jhingran et al., 2002; Lenzerini, 2002; Peltier et al., 2002). However, these investigations often focus on the features to query an XML or semi-structured data repository in the spirit of database query languages rather than on constructing a new XML document in the context of data integration. The following principles, which are inspired by the data integration literature such as (Lenzerini, 2002), aim to provide a comprehensive requirements list.
- The language should support both querying and restructuring XML Data.
- The language must enable the generation of query programs by other programs.
- The language should be capable of expressing the following operations in addition to the ones existing in database languages (such as projection, selection, and joins): restructuring (constructing a new set of element instances based on variable bindings and the global schema), combination (merging two or more element instances into one), and reduction (to express transformation rules that exclude parts of the data from the result).
- Compositionality is an essential feature for an XML query and transformation language to support query composition.
A rule-based, declarative language enables developers to concentrate on the integration logic rather than on implementation details and enables the required compositionality and expressiveness. Most XML and semi-structured data query languages have been proposed to extract XML data from XML databases or the Web. A comparative analysis of existing languages has been done by Reynaud et al. (2001). A language is generally designed to suit the needs of a limited application domain such as database querying or data integration; some languages are designed only for semi-structured data formats that predate XML. A query language should be able to query data sources using
complex predicates, joins and even document restructuring. We add the following criteria specifically for the context of Web-based data integration:
- **Join.** The language must support joins of multiple XML data sources. A join condition is necessary to compare attributes or elements in any number of XML documents. In data integration systems, data is most likely to come from more than one source.
- **Data Model.** The queries and their answers are the instances of a data model. Sometimes, a rich data model is needed to support the functionality of some query languages. The underlying framework plays a major role in determining a data model for a query language.
- **Incomplete Query Specification.** XML and semi-structured data is not as rigid as relational data in terms of schema definitions and data structure. Therefore, it is important that a query language is capable of expressing queries in incomplete form, such as by using wildcards and regular expressions - also called partially specified path expressions.
- **Halt on Cyclic Query Terms.** If a language supports incomplete query specifications through wildcards and regular expressions, termination problems might arise. Therefore, features to detect cyclic conditions are required.
- **Building New Elements.** The ability to construct a new node added to the answering tree is an important feature for data integration systems.
- **Grouping.** Grouping XML nodes by some condition, for instance by querying for distinct values, is another important feature in data integration. Some languages use nested queries to perform grouping operations; in contrast, some more powerful languages have built-in constructors.
- **Nested Queries.** Nested queries are common in relational database languages for joining different data elements by their values. In logic-based languages, the construction part and the selection part are separated.
- **Query Reduction.** Query reduction allows users to specify what part of the elements or what nodes in the query conditions will be removed from the resulting XML tree.
A number of potential candidates shall briefly be discussed in the context of these requirements:
- **XQuery** is a W3C-supported query language that aims at XML-based database systems. XQuery is an extension of XPath 2.0 adding functionalities needed by a full query language. The most notable of these functionalities are support of sequences, the construction of nodes and variables, and user-defined functions.
- **UnQL** - the Unstructured Query Language - is a query language originally developed for querying semi-structured data and nested relational databases with cyclic structures. It was later adapted to query XML documents and data. Its syntax uses query patterns and construction patterns, and a query consists of a single select or traverse rule that separates construction from querying. Queries may be nested, in which case the separation of querying and construction is abandoned. UnQL was one of the first languages to propose pattern-based querying (albeit with subqueries instead of rule chaining).
- **XML-QL** uses query patterns and path expressions to select data from XML sources. These patterns can be augmented by variables for selecting data.
XML-QL uses query patterns containing multiple variables that may select several data items at a time instead of path selections that may only select one data item at a time. Furthermore, variables are similar to the variables of logic programming, i.e. joins can be evaluated over variable name equality. Since XML-QL does not allow one to use more than one separate rule, it is often necessary to employ subqueries to perform complex queries.
The shortcomings of these widely known and used languages in the context of the given requirements and the language comparisons have led us to choose a fully declarative language called Xcerpt (Bry & Schaffert, 2002) that satisfies all criteria that we have listed earlier on. However, other recently developed and well-supported transformation languages such as ATL and QVT are similarly suitable candidates. While QVT satisfies the criteria, it is currently not as well supported through tools and accessible tutorial material.
Xcerpt is a query language designed for querying and transforming both data on the standard Web (e.g. XML and HTML data) and data on the Semantic Web (e.g. RDF data). Like XQuery, Xcerpt allows one to construct answers in the same data format as the queried data, but it also allows further processing of the data generated by the same query program. One of its design principles is to strictly separate the matching part and the construction part of a query. Xcerpt follows a pattern-based approach to querying XML data. A similar approach has been proposed in the languages UnQL and XML-QL. However, Xcerpt has extended the pattern-based approach in the following aspects. Firstly, query patterns can be incomplete in three dimensions: in depth, which allows XML data to be selected at any arbitrary depth; in breadth, which allows querying neighbouring nodes by using wildcards; and in order. Incomplete query specifications allow patterns to be specified in a more flexible manner without losing accuracy. Secondly, simulation unification computes answer substitutions for the variables in the query pattern against the underlying XML terms - similar to UnQL, which, however, uses strict unification.
Declarative Transformation Rules
We have adapted Xcerpt to support the construction of the service connectors, which is our central objective:
- From the technical point of view, in order to promote code reuse, the individual integration rules should not be designed to perform the transformation tasks alone. The composition of rules and rule chaining demand that the query part of a service connector be built ahead of its construction part.
- From the business point of view, the data representation of the global data model changes as element names change or elements are removed. Such changes should not affect the query and integration part of the logic; only an additional construction part is needed to enable versioning of the global data model.
Grouping and incomplete query specifications turn out to be essential features.
Xcerpt is a document-centric language designed to query and transform XML and semi-structured documents. Therefore the ground rules, which read data from the document resources, are tied to at least one resource identifier. This is a bottom-up approach in terms of data population: data is assigned from the bottom level of the rules upward until the rule application reaches the ultimate goal of a complex, hierarchically structured rule. These rules are defined through an integration goal at the top level and structured into sub-rules down to ground rules, which address individual data elements.
```
CONSTRUCT
CustomerArray {
all Customer[
nameAsContracted [var Name],
companyId [var CompanyId],
serviceOrganizationIdentifier [var OrgId],
all supportIdentifier [
CustomerSupportIdentifier [var Code],
ISOCountryCode [var CSI]
]
]
}
FROM
arrayOfCustomer[
item [
orgName [var Name],
companyId [var CompanyId],
gcdbOrgId [var OrgId],
countryCode [var Code],
csiNumber [var CSI]
]
]
```
Fig. 2. Declarative Query and Transformation Specification of Customer Array Element in Xcerpt.
Fig. 2 shows a transformation example for a customer array based on Fig. 1. Fig. 1 is a graphical illustration of XML-based data structures: the upper structure provides the data schema of the input document; the lower structure is the target data schema that a transformation needs to map onto. The graphical representation allows us to avoid the verbosity of XML-based data representations for this investigation. An output Customer in CustomerArray is constructed from the elements of an item in an arrayOfCustomer using a pattern matching approach, identifying relevant attributes in the source and referring to them in the constructed output through variables. For instance, the Name variable declares nameAsContracted and orgName as semantically equal elements in the two representations that are syntactically different.
This original Xcerpt approach is unfortunately not feasible in an information integration solution because the resource identifiers cannot be hard coded in the ground rules in our setting. A wrapper mechanism has been developed to pass the resource identifiers from the goal level all the way down to the ground rules. In addition to the original Xcerpt approach, we propose a mediator-based data integration architecture where the Xcerpt-based connectors are integrated with the client and provider Web services. WS-BPEL code is generated by a transformation generator within the mediator service (see Fig. 4 below, which is explained in a separate section).
Implementation of Connector Construction
The construction of Xcerpt-based connectors, which specify integration through declarative rules, can be automated using rule chaining. Ground rules are responsible for querying data from individual Web services. Intermediate composite rules are responsible for integrating the ground rules to render data types that are described in global schemas. The composite rules render, on demand, the data objects described in the interfaces of the mediator Web services. Therefore, exported data from a mediator service is the goal of the corresponding connector (i.e. a query program), see Fig. 3. Fig. 1 again defines the respective input and output data schemas. The CONSTRUCT ... FROM clauses in Fig. 3 define the individual rules. Here, information from ArrayOfCustomer and Customers is selected to construct the SupportIdentifier.
```
GOAL
  out { resource "file:SupportIdentifier_Customer.xml",
        SupportIdentifier [ all var SupportIdentifier ] }
FROM
  var SupportIdentifier -> SupportIdentifier {{ }}
END

CONSTRUCT
  SupportIdentifier [ var Code, optional var CName, var CSI ]
FROM
  in { resource "file:customer1.xml",
       ArrayOfCustomer [[
         customer [[ optional countryName [ var CName ],
                     countryCode [ var Code ],
                     csiNumber [ var CSI ] ]] ]] }
END

CONSTRUCT
  SupportIdentifier [ optional var Code, var CName, var CSI ]
FROM
  in { resource "file:customer2.xml",
       Customers [[
         customer [[ countryName [ var CName ],
                     optional countryCode [ var Code ],
                     csiNumber [ var CSI ] ]] ]] }
END
```
Fig. 3. Transformation Specification in Xcerpt based on Goal Chaining.
We apply backward goal-based rule chaining in this adapted implementation to execute complex queries based on composite rules. Fig. 3 shows an example of this pattern matching-based approach, which separates the possibly partial, resource-based query parts from the construction parts. This transformation rule maps the supportIdentifier element of the customer example from Fig. 1. The goal in Fig. 3 is a composite rule based on the lower-level SupportIdentifier construction rules.
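For orientation, the two resources referenced in Fig. 3, customer1.xml and customer2.xml, could look roughly as follows. The documents are invented for illustration and only echo the element names used in the query patterns of Fig. 3; the sample values are placeholders.

```
<!-- file: customer1.xml (assumed provider output) -->
<ArrayOfCustomer>
  <customer>
    <countryName>Ireland</countryName>   <!-- optional in the first rule -->
    <countryCode>IE</countryCode>
    <csiNumber>12345</csiNumber>
  </customer>
</ArrayOfCustomer>

<!-- file: customer2.xml (assumed provider output) -->
<Customers>
  <customer>
    <countryName>Ireland</countryName>
    <!-- countryCode is optional in the second rule and omitted here -->
    <csiNumber>67890</csiNumber>
  </customer>
</Customers>
```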
These rules are saved in a repository. When needed, a rule will be picked and the backward rule chaining enables data objects to be populated to answer transformation requests. This architecture will be detailed in the subsequent section.
MEDIATOR ARCHITECTURE
Motivation
Zhu et al. (2004) argue that traditional data integration approaches such as federated schema systems and data warehouses fail to meet the requirements of constantly changing and adaptive environments. We propose, based on (Haller et al., 2005; Wiederhold, 1992; Sheth & Larson, 1990; Zhu et al., 2004), a service-oriented data integration architecture to provide a unified view of data on demand from various data sources. A service-oriented data integration architecture is different from business process integration, as the latter is concerned with integrating the business process rather than data. The proposed integration architecture uses Web services to enable the provision of data on demand whilst keeping the underlying data sources autonomous.
There is consequently a need for mediators in an architecture that harmonise and present the information available in heterogeneous data sources (Stern & Davies, 2003). This harmonisation comes in the form of identification of semantic similarities in data while masking their syntactic differences; see Fig. 1. Relevant and related data is then integrated and presented to a higher layer of applications. The sourcing, integration, and presentation of information can be seen as logically separated mediator rules for integration, implemented by mediator services - which shall form the basis for the presented mediator architecture.
Garcia-Molina et al. (1997) identify the following requirements as essential in order to build a mediator architecture. Firstly, it must be based on a common data model that is more flexible than the models commonly used for database management systems. Secondly, it must be supported by a common query language. Finally, there must be a tool to make the creation of new mediators and mediator systems more cost-effective than building them from scratch.
Architecture Definition
The mediator architecture transforms local XML documents into documents based on a global schema. Fig. 4 illustrates this architecture with a few sample information services - Customer Data, E-business System, Request Logging and Analysis Service - that a client might access. The data integration engine is built based on a composition of individual services using WS-BPEL, where component invocation orders are predefined in the integration schemas. These service orchestrations are defined by specifying the order in which operations should be invoked.
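As an illustration of such a predefined orchestration, the following WS-BPEL sketch shows a minimal flow in which a client request is received, a provider information service is invoked, the result is passed to a connector (transformation) service, and the unified result is returned. All partner link, operation and variable names are hypothetical, and the WSDL artefacts and declarations a real deployment would need are omitted.

```
<process name="CustomerMediation"
         targetNamespace="http://example.org/mediator"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <!-- partnerLinks, variables and correlation declarations omitted for brevity -->
  <sequence>
    <!-- 1. accept the client query against the unified data model -->
    <receive partnerLink="client" operation="getCustomers"
             variable="clientRequest" createInstance="yes"/>
    <!-- 2. invoke a provider information service (e.g. Customer Data) -->
    <invoke partnerLink="customerDataService" operation="getArrayOfCustomer"
            inputVariable="providerRequest" outputVariable="providerData"/>
    <!-- 3. invoke the generated connector to transform into the global schema -->
    <invoke partnerLink="connectorService" operation="transform"
            inputVariable="providerData" outputVariable="unifiedData"/>
    <!-- 4. return the integrated result to the client -->
    <reply partnerLink="client" operation="getCustomers" variable="unifiedData"/>
  </sequence>
</process>
```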
The proposed Web services-based mediator architecture, Fig. 4, contains the following components:
- **Schema Repository**: The repository holds the unified (global) data model. Each object within the model is a logical representation of an entity and will often be populated with data sourced from more than one repository. The advantage of having a unified view of data is to ensure that customers have a consistent view of the data and to avoid duplication.
- **Information Services**: These provide source data retrieved from the underlying data repositories to clients and other services. The signature of the Web service interfaces such as input parameters and data output is agreed in advance by business domain experts from both client and provider sides. The benefit of asking the data sources to provide a Web service interface is to delegate the responsibility and cut down the effort spent on developing data access code and understanding the business logic.
- **Data Integration and Mediation Services**: A common data model can be implemented as an XML schema (a sketch of such a schema follows this list). Two basic approaches have been proposed for the mappings between the export schemas and the federated schema - called global-as-view and local-as-view in (Lenzerini, 2002). The former approach defines the entities in the global data model as views over the export schemas, whereas the latter approach defines the export schemas as views over the global data model. In this work, a data integration service is treated as a mediator in the mediator architecture. We introduce a novel approach to ease and improve the development of the mediators. There are two quite different styles of transformation: procedural, with explicit source model traversal and target object creation and update, and declarative, with implicit source model traversal and implicit target object creation. Therefore, an approach based on a declarative rule markup language to express the data transformation rules, together with a rule engine, has been chosen. The mapping should be conducted at the abstract syntax level, leaving the rendering of the result to a separate step performed at runtime by the BPEL engine.
- **Query Component**: The query service is designed to handle inbound requests from the application consumer side. The application developers build their applications and processes around common objects and make successive calls to the mediated Web services. The interfaces of the individual Web service providers are therefore transparent to the application customers; they may send any combination of input parameters to the query service. In order to facilitate these unpredicted needs, the query service has to decompose the input messages into a set of pre-defined WS-BPEL flows. Normally a BPEL flow belongs to a mediator that delivers a single common object. Occasionally, two or more mediators need to be bundled together to deliver a single object.
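As an illustration of the common data model mentioned for the Data Integration and Mediation Services, the following XML Schema fragment sketches how the global Customer structure implied by Fig. 2 could be declared. It is an assumed sketch: the element names follow Fig. 2, but the type choices (xs:string throughout) and cardinalities are illustrative rather than taken from the original schema repository.

```
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="CustomerArray">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="Customer" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="nameAsContracted" type="xs:string"/>
              <xs:element name="companyId" type="xs:string"/>
              <xs:element name="serviceOrganizationIdentifier" type="xs:string"/>
              <xs:element name="supportIdentifier" maxOccurs="unbounded">
                <xs:complexType>
                  <xs:sequence>
                    <xs:element name="CustomerSupportIdentifier" type="xs:string"/>
                    <xs:element name="ISOCountryCode" type="xs:string"/>
                  </xs:sequence>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```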
Each of the components can in principle be offered as a service by a (potentially different) provider. Within the composite mediator service, both transformation and connector generation services are separated and only loosely coupled.
Developer Activities
The architecture in Fig. 4 explains the runtime view from the client and user perspective. In order to complete the picture, the development perspective shall also be addressed. Fig. 5 illustrates development activities, looking at the developers of architecture, rules, and services - and their respective activities. A number of actors are distinguished, including service provider engineers, application software engineers, integration business analysts, integration software architects, and integration software engineers. These are associated with the activities they are involved in. In particular, the integration team is involved with Xcerpt-based rule definition and application. Activities are also related among themselves. The participation of different roles from possibly different organisations (application customer, service provider, integration team) demonstrates the need for a common understanding and maintainability of the integration problem, which can be achieved through abstract and declarative rule specifications (here in Xcerpt format) shared by service provider developers, integration business analysts, and integration software developers.
APPLICATION SCENARIO AND DISCUSSION
The presented data integration technique and the mediated architecture are complemented by an incremental, evolutionary process model. Some pragmatic aspects of this process shall now be addressed. In the proposed architecture, the unified data model (over-arching schema) is maintained manually. The schema for large enterprise integration solutions might consist of a large number of data aspects. From the development point of view, it is only reasonable to deliver the data integration services on a phased basis, such as one data aspect per release cycle. A mediator consists of the following components: the individual provided Web services, a WS-BPEL workflow, and one or more service connectors, as illustrated in Fig. 4. Mediators in our solution are used to deliver these data aspects according to the unified schema. This schema is available to the customers so that they can decide which mediator to call based on the definition of the unified schema.
The focus of this investigation is not on the automatic composition of Web services, but rather on how the data output from multiple Web services can be automatically integrated according to a global data model and sent back to users. Therefore, in terms of the WS-BPEL process flow, a static approach with respect to the orchestration of the involved Web services can be taken. These services can be orchestrated together in the form of a WS-BPEL flow built in advance.
During the development phase, the mappings between the global model and the local models will be expressed at the abstract model level, for instance in the widely used
MOF (Meta Object Facility) framework for modelling language definition. Model transformation between different metamodels can then be automatically carried out. The inputs are the source XML schema definitions and the transformation rules. The output is an XSLT transformation file.
In the proposed process model illustrated in Fig. 5, the unified data model and the creation of rules are the responsibility of the business solution analysts, not necessarily the software architect. The rules are merely mappings from the elements exposed by Web service providers to the elements in the unified data model. We assume here that semantic similarity is determined manually. In the literature on data model transformation, the automation of the mapping is often limited to transforming a source model into a destination model rather than integrating more than one data model into a unified data model. Even in the case of source-to-destination model mapping, the user's intervention is needed to select one of several generated sets of mappings. In our proposed architecture, the service connectors can be generated on the fly by rule composition. The sacrifice is that semantic similarity is not taken into consideration.
The data integration rules are created at a higher level than the Xcerpt ground query programs themselves, as the following schematic example demonstrates (Fig. 3 shows an example of a composite rule like A below):
- Rule A: \( A(a, b) := B(a, b), C(b) \)
- Rule B: \( B(a, b) := D(a), E(b) \)
- Rule C: \( C(b) := E(b), F(b) \)
Each of the above rules would be implemented in the Xcerpt language. In the above example, rule \( A \) is a composite rule, based on \( B \) and \( C \). It could be used to answer a user's query directly, while internally referring to subordinate rules dealing with the extraction and transformation of specific data aspects. The resource identifiers in the form of variables and the interfaces for the data representation, such as the version number of the unified data model, are supplied to the transformation generator. The rule mappings in the transformation generator serve as an index to find the correct Xcerpt queries for execution. As a result, a query program including both query part and construction part is executed to generate the XML output, which is sent back to the transformation generator.
In terms of examples, we have so far only addressed complex transformations based on compositional rules within data provided by one Web service - the customer information service. Queries could of course demand integrating data from different services. For instance, retrieving all service requests by a particular customer would target two services, based on several composite integration and transformation rules.
FUTURE TRENDS
Adaptivity in service-based software systems is emerging as a crucial aspect beyond the discussed area of service-based ASP infrastructures and on-demand information systems. Adaptability of services and their infrastructure is necessary to reconcile integration problems that arise in particular in dynamic and changing environments.
We have excluded the problem of semantic interoperability from our investigation. Syntactically different schemas might still represent the same semantic information. The recently widely investigated field of semantic Web services, with ontology-based domain and service models, can provide input for some planned extensions in this direction (Haller et al., 2005).
Re-engineering and the integration of legacy systems is another aspect that we have not addressed. The introduction of data transformation techniques for re-engineering activities can improve the process of re-engineering legacy systems and adopting service-oriented architecture to manage information technology services (Zhang & Yang, 2004). Business rules often change rapidly, requiring the integration of legacy systems to deliver a new service. How to handle information integration in the context of service management has not yet been explored in sufficient detail in the context of transformation and re-engineering.
CONCLUSIONS
The benefit of information systems on demand must be supported by corresponding information service management systems. Many application service providers are currently modifying their technical infrastructures to manage information using a Web services-based approach. However, how to handle information integration in the context of service-based information systems has not yet been fully explored.
The presented framework utilises information integration technologies for service-oriented software architectures. The crucial solutions for the information integration problem are drawn from mediated architectures and data model transformation, allowing the data from local schemas to be transformed, merged and adapted according to declarative, rule-based integration schemas for dynamic and heterogeneous environments. We have proposed a declarative style of transformation, with implicit source model traversal and implicit target object creation. The development of a flexible mediator service is crucial for the success of the service-based information systems architecture from the deployment point of view.
Abstract
Using a case study approach, this paper introduces and outlines the Unified Modeling Language (UML) as it applies to modeling a site on the World Wide Web. The authors include an introduction to the concept of modeling, in general, as well as how modeling relates to the design of a Web site. A simple, fictitious university Web site serves as an illustrative tool throughout the paper. This site is reflected in several UML-based diagrams, as well as the discussion of some of the issues, considerations and techniques when using UML to model a Web site. The paper concludes with a list of ‘best practices’ when modeling Web sites using UML.
1. Introduction
This paper introduces and outlines the Unified Modeling Language (UML) as it applies to modeling a site on the World Wide Web. We focus our attention on Internet-based systems; essentially all of the UML-related design implications included in our discussion hold for Intranet-based systems as well. UML is more than able to model complex, Web-based applications, including transaction processing (such as a book/CD ordering system) and document management (such as an academic conference manager). However, for purposes of simplicity, we will be discussing some of the major considerations of modeling a relatively simple Web site using UML, largely from the user's perspective (client-side). Our examples will be drawn from the Web site of a fictitious University (see Appendix A), and we include several UML-based diagrams, which are intended to illustrate various points of modeling with UML. We used Rational Software Corporation's Rational Rose to generate our UML diagrams, which, combined with a narrative of the site, help to illustrate the concepts being proposed and the site being developed.
Booch, Rumbaugh and Jacobson [1999] define UML as a “standard language for writing software blueprints”, including the capability to “visualize, specify, construct and document the artifacts” of the system to be modeled through the use of numerous diagrams [Booch, et al, 1999]. UML offers consistent notations and numerous tools across processes and projects. Jim Conallen, Web Modeling Evangelist at Rational Software Corporation, suggests that UML is the “language of choice for modeling software-intensive systems” [Conallen, 1999]. Web site development falls into this category of ‘software-intensive systems’.
In illuminating the specific merits of UML when modeling Web sites and applications, Conallen pointed out the following [Conallen, 1999]:
- Web applications are a type of software-intensive system that are not only becoming increasingly complex, but are being implemented in more critical situations.
- A software system typically has multiple models, each representing a different viewpoint, level of abstraction and detail.
- The proper level of abstraction and detail depend on the artifacts and worker activities in the development process.
- UML can “express the execution of the system’s business logic [if any] in those Web-specific elements and technologies” [Conallen, 1999].
UML can model specific representations at various levels of abstraction. These different levels are comprised of several diagrams, which taken together, allow us to view the design of the system with as much, or as little, detail as needed. Modeling at differing levels of abstraction (between high-level/generalized and low-level/detailed) will depend on exactly what information needs to be conveyed through the completed model.
When modeling any system, Conallen [1999] suggests the importance of concentrating on the elements that will be of value to those who will be using the model (system developers). This entails “model [ing] the artifacts of the system – those ‘real life’ entities that will be constructed and manipulated to produce the final product”. Of course, the artifacts of a particular system will depend on the system being modeled. Artifacts of a Web site may include, but are not limited to, the following:
- Web pages,
- Multimedia-based elements, such as images, animation, video and audio clips,
- Inter- and intra-page hyperlinks (“navigational paths” through a Web site),
- Dynamic Web page content, both on the client and server-side,
- Various end-users of the system.
Depending on the particular site, these elements, or a subset of them, are of direct concern to the designers and creators of a Web site. In general, modeling the internal workings of the Web server or Web browser will not lend any significant insight to the designers and creators (programmers) of the site, and as such, would not be included in a typical UML diagram. Given the characteristics of our sample University Web site, we felt that modeling the navigational links and paths of the site was a priority. Among software-based systems, a site map is unique to a Web-based system and UML’s corresponding tool for this map is a Component Diagram, which is discussed later in this paper.
The structure of this paper is as follows. We begin with a general introduction to
modeling, including why we model and what UML represents. This section also includes a general architecture of a Web site, as well as an overview of a fictitious sample University Web site. We then present more traditional approaches to modeling a Web site, including the development of a cognitive walkthrough and a storyboard. Our next major section presents some of the issues, considerations and techniques when using UML to model a Web site. This section makes extensive use of various UML-based diagrams. Lastly, we present our conclusions, which include our list of ‘best practices’ when modeling Web sites using UML.
1.3 Scope of our analysis: simplified university Web site
Regardless of the modeling method(s) and tools employed, there are several critical aspects to designing an effective, easy-to-navigate and informative Web site. It is critical for a Web site to provide the information and support the functions that its users need. These aspects are related to who is expected to use the site and what tasks these specific users need to accomplish through the site. Specifically, an initial analysis needs to be completed of at least the following:
- Determine the overall purpose of the site.
- Identify the intended users of the site.
- Frame the scope of information contained within the site.
The overall purpose of our sample University Web site is twofold. The first is to provide information related to programs, people and admissions. The second is to facilitate contact through phone, mail and e-mail between users of the site and representatives of the University (faculty and administration). Brannan [2000] suggested that part of the design process is to attempt to identify groups of users based on their common informational needs, and then essentially generate a navigation and manipulation model from the information gathered. This would result in applications that are more tailored to users. The intended users of the site are as follows:
- Potential students and their parents/guardians.
- Current students and their parents/guardians.
- Faculty and administrators.
- Industry representatives.
- Alumni.
There may be inherent variety among these users in terms of their expectations, goals and technical constraints (including the speed of their modem and Internet hookup) [Baresi, 2000].
The information contained in the site pertains to the following:
- Academic programs (undergraduate, graduate and continuing education).
- People associated with the University (faculty, administration and alumni).
- Admission information (undergraduate, graduate and continuing education).
1.4 Traditional methods of Web site modeling
There are a few traditional methods for modeling a Web site. They include the following:
- Text-based description of the general contents and navigational requirements of the site.
- Cognitive walkthroughs for each user task.
- Storyboard of the site.
The following section includes a text-based description of the general contents and navigational requirements of the site. It should be noted that, for brevity, this description includes only the home page, the major sections of the site and the subsequent 'first-level' sub-pages. The site contains approximately 50 pages, which, with the exception of the home page, are arranged into the following categories: Programs, People and Admissions. Links appearing on each page are indicated by their page names (e.g. PageName). The Programs section of the site includes pages for Undergraduate (Overview, Majors/Minors, Class List), Graduate (Overview, Majors/Minors, Class List) and Continuing Education (Overview, Class List). The People section of the site includes pages for Faculty (Overview, Faculty List), Administration (Overview, Administrator List) and Alumni (Overview, Alumni List). The Admissions section of the site includes pages for Undergraduate (Overview, Apply), Graduate (Overview, Apply) and Continuing Education (Overview, Apply). Each of these pages links to its respective Overview and Apply pages. Every page in the site contains a link back to the Home Page, as well as a link to the appropriate section of the site (Programs, People or Admissions).
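As a purely illustrative rendering of these navigational requirements, a page in the Programs section might carry an XHTML navigation fragment along the following lines. All file names are hypothetical; only the links described above (back to the Home Page, to the enclosing section, and to the first-level sub-pages) are shown.

```
<!-- hypothetical navigation fragment for an Undergraduate Programs page -->
<div class="navigation">
  <a href="index.html">Home Page</a>
  <a href="programs.html">Programs</a>
  <ul>
    <li><a href="programs-undergraduate-overview.html">Overview</a></li>
    <li><a href="programs-undergraduate-majors.html">Majors/Minors</a></li>
    <li><a href="programs-undergraduate-classlist.html">Class List</a></li>
  </ul>
</div>
```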
A cognitive walkthrough provides step-by-step instructions, combined with prototyped screens, to test the completeness of the site in executing a given user task. Similar walkthroughs would need to be developed and documented for other common user tasks. The specific tasks would need to be determined through user studies, perhaps in the form of interviews and/or surveys. A storyboard is used to illustrate the navigational hierarchy and paths within a Web site. The direction of each arrow indicates the destination page of a particular hyperlink.
The following sections relate to UML-specific modeling, with some general discussions that are supplemented by diagrams and documentation that is specific to our University Web site.
2. Implementing UML in Web site modeling
We have selected those diagrams that we deemed to be most relevant to modeling a Web site, particularly those with extensive navigation path analysis. We include discussions of the following elements and diagrams of UML, arranged by the following general and specific categories:
- **System Analysis:**
- 2.1 Problem Statement
- 2.2 Use Case Diagrams
- 2.3 Analysis-Level (high-level) Class Diagrams
- **System Design:**
- 3.1 Sequence Diagrams
- 3.2 State Diagrams
- 3.3 Activity Diagrams
- 3.4 Design-Level (low-level) Class Diagrams
- **Physical Design:**
- 4.1 Component Diagrams
- 4.2 Deployment Diagrams
- **Applications Design:**
- 5.1 Interface Diagrams
This section will investigate and demonstrate how UML is used to model the design of a Web site, with appropriate levels of abstraction. We will be developing a primary Use Case to serve as a basis when utilizing UML and producing our UML-based examples. We will not attempt to model, in detail, the ‘backend’ aspects of the Web’s client/server architecture. For simplicity, we will also not attempt to model any animation or real-time graphic techniques, such as ‘mouse-over’ help.
2. SYSTEM ANALYSIS
2.1 Problem Statement
Our assignment is to develop a Web site for a University that will provide pertinent information to a wide variety of users. These users include potential students, students, parents and guardians, faculty/administrators, alumni and representatives from industry. The included information relates to information on Programs, People and Admissions. The site should also assist users in contacting various representatives of the University (including faculty, administration, admissions and alumni) through e-mail links and contact information (address, telephone and fax numbers). The site must also provide for password-restricted access by enrolled students to pages containing course grades and assignments.
2.2 Use Case Diagrams
Before discussing Use Cases, we present a discussion of the concept of mapping user groups to UML’s Actor object. We have previously identified our user groups and they can be directly mapped to Actors in UML. Actors are identified based on their distinctive interactive role with the system being modeled. For the purpose of modeling, therefore, Actors are considered external to the system. The operational and navigational needs of various user groups are “associated with the actors [that] they are specific to” [Baresi, et al, 2000]. The identified Actors (users) of our sample University Web site are: prospective students, students, parents and guardians, faculty and administration, alumni, and industry representatives.
Rosenberg [1999] defines a use case as “a sequence of actions that an Actor performs within a system to achieve a particular goal”. (For clarity, we would reword this to read, “…that an Actor performs through a system…” We believe that this wording would more accurately represent the role of the Actor(s)). We have identified two “analysis-level (or business process)” Use Cases related to our University Web site [Rosenberg, 1999]:
1. Access various information regarding the Programs, People and Admissions of the University. Relevant Actors for this Use Case include potential students, current students, students’ parents/guardians, faculty, administrators, alumni and industry representatives.
2. Contact various representatives of the University, including faculty and administrators, primarily through e-mail. Relevant Actors for this Use Case include potential students, current students, students’ parents/guardians, faculty, administrators, alumni and industry representatives.
The specific detailed, or design-level, Use Case that we will use for purposes of illustration is the following. A potential student of the University is interested in accessing an overview of the “Introduction To Computing”, an undergraduate course offered through the College of Information Technology. There are several available forms of Use Case documentation. We have used a template developed by Dr. Il-Yeol Song (College of Information Science & Technology, Drexel University USA) to present formal, structured, high-level descriptions of our Use Cases. Figure 1 includes these Use Case Descriptions.
**Figure 1: Use Case Descriptions**
| Level | Access Information Use Case | Contact Representative Use Case |
|---|---|---|
| Primary (1) | 1a. Access Information | 1b. Contact Representative (not detailed) |
| Secondary (2) | 2a.1. Access Restricted Information | 2b.1. Contact Faculty (not detailed) |
| | 2a.2. Access General Information | 2b.2. Contact Administration (not detailed) |
| | | 2b.3. Contact Admissions (not detailed) |
| Ternary (3) | 3a.1. Access Overview for Introduction To Computing Course (not detailed) | 3b.1a. Contact Dr. Smith (not detailed) |
| Use Case Reference # | Use Case Name | Actor | Purpose |
|---|---|---|---|
| 1a | Access Information | Currently enrolled students. | To access grades and assignments of a particular course. |

| Overview and scope | Students use the various pages and navigational links to access information on the grades and assignments of the courses they are currently enrolled in. |
|---|---|
| Level | Primary |
| Preconditions | Connection to the World Wide Web through an HTTP connection. Access to the University’s Web site. |
| Post conditions in words | Desired information is displayed on the student’s screen. |
| Trigger | A student wants to access their course grade(s) or assignment(s). |
| Included Use Cases | None. |
| Extension Use Cases | Access Restricted Information. Access General Information. |
| Frequency | Unknown. |
| Other Comments | This is a high-level Use Case description, as the level of detail shows. ‘Primary’ Level refers to a top-level Use Case. Access Information and Contact Representatives are each primary-level Use Cases. |
| Use Case Reference # | Use Case Name | Actor | Purpose |
|---|---|---|---|
| 2a.2 | Access General Information | Potential students, students, parents & guardians, faculty & administrators, alumni and industry reps. | To access information regarding the University. |

| Overview and scope | Actors use the various pages and navigational links to access information on the University’s Programs (Undergraduate, Graduate and Continuing Education), People (Faculty, Administration and Alumni) and Admissions (Undergraduate, Graduate and Continuing Education). |
|---|---|
| Level | Secondary |
| Preconditions | Connection to the World Wide Web through an HTTP connection. Access to the University’s Web site. |
| Post conditions in words | Desired information is displayed on the Actor’s screen. |
| Trigger | An actor wants to access information about the University. |
| Included Use Cases | None. |
| Extension Use Cases | None. |
| Frequency | Unknown. |
| Other Comments | This is a secondary Use Case description, as the level of detail shows. ‘Secondary’ Level refers to a second-level Use Case, specifically Access Information: Access General Information. |
A set of Use Case Diagrams provides a high-level view of the operational and navigational aspects of the site [Baresi, et al, 2000]. Our University example is relatively straightforward in its structure, navigation and operation. Therefore, we can model all of our access paths in one diagram. Figure 2 depicts the Use Case Diagram for our Web site. This high-level Use Case Diagram does not differentiate between client- and server-side functions. It includes the following:
- Student Actors can be specialized into:
- *Students In Introduction To Computing*: This refers to those students that are currently enrolled in the course. These students will have password-enabled access to the Grades & Assignment pages of Introduction To Computing.
- *Students Not In Introduction To Computing*: This refers to all other students. These students will only be able to access those pages that are not password protected.
- The Access Information Use Case is extended into:
- *Access Restricted Information Use Case*: This refers to those pages that are restricted to people with the appropriate password. An example is the Grades & Assignment page of the Introduction To Computing page.
- *Access General Information Use Case*: This refers to those pages that can be accessed by any of the identified Actors.
Baresi and his colleagues [2000] suggested that requirements of Web sites that can be represented by Use Case Diagrams fall into two classes: operational and navigational. Operational requirements include those functions that “modify the state of the applications”. Transaction processing would be an example of an operational requirement. In our University site, we consider using the site to e-mail representatives of the University as an operational requirement. Navigational requirements refer to the various interactions between the Actors and the site. Operational and navigational requirements can both be represented through the use of Use Case Diagram(s) by one of two methods:
a. Separate models for navigational and operational requirements,
b. A single, combined model that includes color-coding of the two classes within one diagram.
The choice between the two methods depends on “the degree of intertwining between operations and navigations”. Due to its relative simplicity, our University site can be modeled using one color-coded diagram. We specialized Student Actors into “Students In Intro To Computing” and “Students Not In Intro To Computing”. However, the remaining Actors also do not have access to the restricted Introduction To Computing pages. Therefore, based on restricted access to the pages related to this course, we can create two distinct groups of users for this particular Use Case. Essentially, the two groups are those that have access to the restricted course pages associated with the Introduction To Computing course and those that do not. It should also be noted that we have chosen to use the <<extends>> notation to depict the Access Restricted Information and Access General Information secondary Use Cases of Access Information, the primary Use Case. We chose not to use <<includes>> because the two secondary Use Cases are not invoked by any other Use Cases.
Grouping various Actors together based on common functionality relates to a broader UML modeling concept called ‘Packaging’. This concept relates to the grouping of common objects within Packages. This tool allows the system developer to group various objects that conceptually ‘fit’ together. Such a grouping is intended to increase the clarity of various diagrams. In our University site and specified Use Case, the Actors can be grouped according to their access to the Introduction To Computing course’s restricted Grades and Assignments pages. As mentioned, these two pages are restricted to those students currently taking the course. Other Actor-oriented packages could conceivably be developed, depending on the specified criteria of the package (a common need to access differing information or a common need to communicate with differing representatives of the University, for example), as determined by the specific Use Case being analyzed. See Figure 3, which details our Actor-based Packages.
**Figure 2: 2nd Level Use Case Diagram**
**Figure 3: Packages of Actors based on access to password-protected course pages**
In fact, all of the other Actors, aside from those students currently enrolled in the Introduction To Computing course, do not have access to that course’s password protected pages. Given this criterion, there is no need to detail Parents/Guardians, Faculty/Administrators and Alumni as individual Actors because all of these groups share common access (or lack thereof) to the pages related to the Introduction To Computing course. Specifically, no Actor other than those students currently enrolled in the class can access the pages related to grades and assignments.
In the absence of Packages, any differences related to operational or navigational constraints, if any, among the different Actors (users) would be noted with a comment on the navigational link to that particular Actor(s) [Baresi, et al, 2000].
2.3 Analysis-Level (high-level) Class Diagrams:
UML maps various components and entities of the project at hand to objects. Class Diagrams depict the “structures, navigations and operations” of the identified objects that users of the system utilize in order to “accomplish their tasks”. Baresi and his colleagues [2000], suggested that they should be modeled from the perspective of the various users (Actors), as opposed to an implementation (physical) view. Baresi, et al, suggest that this approach might result in class diagrams that are not ‘typical’ of the classes that are derived from “traditional object oriented design”. Further, they suggest that since these classes are modeled from the Actor’s perspective, several class diagrams may be necessary in order to fully capture the context and essence of the viewpoint of the various users of the site. For purposes of illustration, we have developed a Class Diagram only of our design-level Use Case.
Baresi, et al, [2000] also suggested a need for ‘navigational nodes’ of a Web site to be modeled as a class. These nodes could be identified as the start and end points for users to navigate through the system. Basically, the nodes are individual Web pages, or “well identifiable logical blocks in a page” (or intra-page links) [Baresi, et al, 2000].
Figure 4 is our version of an analysis-level (high-level) Class Diagram for our University Web site. It includes both client and server collaborations.

3. SYSTEM DESIGN
3.1 Sequence Diagrams
A Sequence Diagram, or a set of Sequence Diagrams, charts the steps, in order, that are necessary to complete a specific Use Case, including “all alternative courses” of action within the Use Case [Baresi, et al, 2000]. The particular Diagram being constructed determines the relative level of detail of the steps. Sequence diagrams include boundary, control, and entity objects, as well as the narrative steps from a particular Use Case description. Our sample University site’s objects can be mapped to the following:
- **Boundary Object**: various pages of the site (example: home page)
- **Control Object**: various hyperlinks (example: Programs, People, Admissions)
- **Entity Object**: various text of hyperlinks (example: Programs, People, and Admissions)
Although not part of our stated design-level Use Case (namely, accessing the Overview of the Introduction To Computing course), we wanted to include an outline of the steps necessary to gain access to one of the password-protected pages (namely, Grades). Figure 5 is a partial depiction of the Sequence Diagram that includes password-enabled access to the Grades page.

**Figure 5: Partial Sequence Diagram depicting password-enabled access**
**Notes:**
Steps 1 through 6: Access Web site home page, access Programs page, access Undergraduate page, access Class List page, access Information Technology Classes page and access Introduction To Computing page.
The various client-side browser windows are 'boundary' objects in this Use Case. The Web server object is a ‘control’ object. The format of the messaging resembles a ‘stair’ format, whereby there is a delegation of authority among the object ‘lifelines’.
UML includes a conceptual view, called the Implementation View, that consists of several diagrams. We include the following diagrams as part of our University Web site analysis: static-based views (Component Diagrams) and dynamic-based views (State Diagram and Interface Design Diagram).
3.4 Design-Level (low-level) Class Diagrams
We have added operations and attributes to our University Web site in Figure 6, as we further refine our model for the site in this System Design phase of modeling.
Figure 6: Design-Level Class Diagram
The operations included in our design-level Class Diagram were originally identified from the Sequence Diagram. We conclude our discussion of Class Diagrams with a list of the various UML elements as they map to the artifacts of a Web site.
| Web Artifact | UML Element |
|---|---|
| User | Actor |
| HTML hyperlink* | Association element / control object |
| Hyperlink text | Entity object |
| Site Map | Component Diagram |
| Storyboard | Component Diagram |
| Server page | <<server page>> |
| Client page | <<client page>> / boundary object |
| Java Script | <<java script>> |
| HTML form | <<form>> |
| HTML target of a frame | <<target>> |
| HTML frameset | <<frameset>> |
| Various groups of related elements | Packages of these related elements |

* The <<link>> association has a list of parameter names that are sent along with the link request. The server then processes this link request along with any parameters [Conallen, 1999]. Hyperlinks request a specific page, either within the site or a site stored on another computer that is accessible through the Web (i.e. through HTTP).
4. PHYSICAL DESIGN
4.1 Component Diagrams
A UML-based component diagram is essentially a site map, which provides an overview of client-side navigation through high-level abstraction of the various pages of the Web site. The components of a Component Diagram, as they relate to a Web site, include each of the pages and the hyperlinks (navigational links) among, and between, the pages [Conallen, 1999]. (Due to space limitations, we have not included a component diagram.) Components, however, only represent the “physical packaging” [Conallen, 1999]. As such, they provide no value when modeling any of the workings of the component, which are conceptually internal to the page [Conallen, 1999]. When considering a Web site, these internal workings could include inter-page links, scripts, Java applets and Active Server Pages (ASP). As we have attempted to demonstrate through this paper, other charts can be used to fill in these details.
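Because a Component Diagram used this way is essentially a site map of pages and hyperlinks, the same information can also be captured in a simple data structure. The C sketch below is our own illustration, not part of the paper: it stores a few of the University site's top-level pages from Appendix A in a static adjacency list and prints each page's outgoing links, which is roughly the textual equivalent of a storyboard or component diagram.

```c
#include <stdio.h>

/* Minimal sketch (our own assumption, not from the paper):
 * a site map as a list of pages with their outgoing hyperlinks. */
#define MAX_LINKS 8

struct page {
    const char *name;
    const char *links[MAX_LINKS];   /* outgoing hyperlinks, NULL-terminated */
};

static const struct page site_map[] = {
    { "Home",       { "Programs", "People", "Admissions", NULL } },
    { "Programs",   { "Undergraduate", "Graduate", "Continuing Education", "Home", NULL } },
    { "People",     { "Faculty", "Administration", "Alumni", "Home", NULL } },
    { "Admissions", { "Undergraduate", "Graduate", "Continuing Education", "Home", NULL } },
};

int main(void)
{
    /* Print a storyboard-style overview: each page and where it links to. */
    for (size_t i = 0; i < sizeof site_map / sizeof site_map[0]; i++) {
        printf("%s ->", site_map[i].name);
        for (size_t j = 0; site_map[i].links[j] != NULL; j++)
            printf(" %s", site_map[i].links[j]);
        printf("\n");
    }
    return 0;
}
```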
4.2 Deployment Diagrams
Deployment Diagrams provide a modeling mechanism for illustrating the physical components of a system. Figure 7 represents our conceptual representation of the physical components of a typical Web-based application.
It should be noted that the Application Server component is not necessary for our University Web site. It is included in the above figure only to illustrate a Web system that includes database functionality, which might be located on an Application Server. While a Deployment Diagram provides concise modeling for the physical structure of a Web site, it lends little, if any, insight into the development of our University site. As developers, we are primarily concerned with the workings of the site that directly interact with its end-users. We have presented several UML-based components that address these aspects of content presentation and navigational links.
5. APPLICATIONS DESIGN
5.1 Interface Diagram
An Interface Diagram illustrates the navigation paths, similar to a Component Diagram. Directions of the various arrows indicate the navigational flow of control among, and between, the various pages. For clarity, we have deviated from software engineering notation in our Interface Diagram. Whereas standard UML notation includes the ability to depict bi-directional navigational flow (to and from a given hyperlink) through use of a single vertical line, each of our vertical lines is intended to depict a particular one-directional navigational flow. Figure 8 shows a partial Interface Diagram.

Figure 8: Partial Interface Diagram
6. **Recommended Best Practices**
We have compiled a list of some of the ‘best practices’ for UML-based modeling of Web sites.
**General comments:**
- Don’t necessarily think that you need to develop every available UML diagram for every design effort you undertake.
- We have found that, as with any software development project, the level of detail of the various UML diagrams is determined by the relative sophistication of the particular Web site, as well as by the particular focus of the developers of the site. You should attempt to model only at the level of abstraction that will be of the most value to you. This involves considerable judgment, and as such, requires experience.
- Don’t waste too much time and effort on modeling the server side of a Web site, unless your particular site is intended to support transaction processing or other back-end processing functions for which the site will act as a front-end.
- When short on time, consider developing Use Case Descriptions, Class Diagrams and Sequence Diagrams; as Grady Booch suggested, 80% of the design effort can be accomplished through the development of these three tools.
- When severely short on time, develop at least an analysis-level (high-level) Class Diagram.
- Provide a numbering scheme for the Use Case Descriptions, Sequence Diagrams and Activity Diagrams (1, 1.x, 2, 2.x, etc.) and be consistent with this numbering scheme across each of the three diagrams. This will assist you and your end-users/clients in following the flow of processes and objects, which in turn assists in determining any oversights in your design.
- Consider developing the appropriate diagrams in the following order of priority:
1. Problem Statement
2. Use Case Diagrams
3. Analysis-Level (high-level) Class Diagrams
4. Sequence Diagrams
5. State Diagrams
6. Activity Diagrams
7. Design-Level (low-level) Class Diagrams
8. Component Diagrams
9. Deployment Diagrams
10. Interface Diagrams
**System Analysis:**
- Consider using Packages to simplify your concepts, particularly when dealing with several groups of Actors. Grouping them by function or some other common denominator will simplify your diagrams and your process modeling. (Figure 3)
**System Design:**
- Include the text from your Use Case Description down the left side of your Sequence Diagram. This will serve as an organizational tool as you develop your Diagram. Because Rational Rose does not ‘link’ the steps of your Use Case Description to individual objects on your diagram, you will find yourself doing additional moving (vertically) of objects within your diagram if you add any additional objects/procedures. Therefore, in an effort to minimize the number and complexity of your edits, we suggest that if you are using Rose as your design tool, you draw a sketch, or two, of your Sequence Diagram on an oversized piece of paper prior to developing the Diagram in Rose. (Figure 5)
- Develop additional Sequence Diagrams for each alternative course of action that may need to be modeled as part of the Use Case. This will help maintain the clarity of your original Diagram.
- We believe you will gain the most benefit from the development of a State Diagram if the Web site you are modeling is intended to serve as the front-end for some type of transaction processing system. Otherwise, as was the case with our sample University Web site, a State Diagram will not add much value to the design process.
- If you plan to develop an Activity Diagram, consider developing one based on physical swimlanes (client browser and Web server, as well as any other application server), particularly for a site that supports transaction processing or other back-end processing functions for which the site will act as a front-end.
**Physical Design:**
- Use a Component Diagram to develop a model of the navigational links of your site. This essentially represents a storyboard of the site.
**Applications Design:**
- Consider color-coding the various rows of an Interface Diagram in order to provide additional visual structure for the navigation paths (Figure 8).
- Consider using our suggested notation when constructing an Interface Diagram (Figure 8), depending on the level of complexity of the site you are modeling. As noted, our notation results in a wider Diagram, but we feel it is more readable than the standard notation.
- Consider supplementing the Interface Diagram with a text-based outline of the navigational links.
- Don’t waste too much time and effort on a Deployment Diagram, as it adds little value to the design effort of a Web site, particularly a site that doesn’t support transaction processing or other back-end processing functions for which the site will act as a front-end. (Figure 7)
7. **Conclusions**
There is clearly a need for tools that are robust enough to assist developers in capturing the various views, constructs and capabilities of Web sites of varying complexity. Baresi and his colleagues [2000] referred to some of these complexities when they pointed out the various interrelationships between hypermedia design (“information structures and navigational paths”) and functional design (“operations and applications behavior”).
Through our personal experiences and research for this paper, as well as our development of the prototype University Web site, we found that the UML-based tools we have discussed combined to form a more robust and capable tool set than traditional methods of modeling sites (text-based descriptions, cognitive walkthroughs and storyboards). With some experience in utilizing these tools, designers can be reasonably assured of developing a complete model of even complicated sites.
Lastly, we present the following thoughts regarding possible future research. We would be interested in investigating whether there is any evidence to indicate that using UML to model and develop a Web site increases the likelihood of creating a site with a more useful, efficient and informative design than a site developed using traditional modeling techniques (text-based documentation, cognitive walkthroughs and storyboards). We believe it would also be both interesting and useful to investigate the relative usefulness and clarity of the various modifications we have introduced to some of our UML-based models. This could be accomplished through surveys and/or interviews with Web developers who have had an opportunity to work with both standard UML diagrams and our modified diagrams.
Appendix A: Scope of University site (prototyped Web pages)
**University Home Page**
- Programs
- People
- Admissions
**PROGRAMS Page**
- Undergraduate
- Graduate
- Continuing Education
**PROGRAMS: Undergraduate Page**
- Overview
- Majors/Minors
- Class List
**PROGRAMS: Undergraduate: Class List Page**
- Business Classes
- Education Classes
- Information Technology Classes
- Science & Math Classes
**PROGRAMS: Undergraduate: Info Tech Class List Page**
- Introduction To Computing
- Introduction To HTML
- Introduction To Java
- Introduction To Networking
- Introduction To Programming
**PROGRAMS: Undergraduate: Info Tech/Intro To Computing: General Information Page**
- Overview
- Schedule
**PROGRAMS: Undergraduate: Info Tech/Intro To Computing: General Information: Overview Page**
---
References
Chapter 4 Macro Processors
-- Basic Macro Processor Functions
Introduction
- A macro instruction (macro) is a notational convenience for the programmer
- It allows the programmer to write a shorthand version of a program (module programming)
- The macro processor replaces each macro instruction with the corresponding group of source language statements (expanding)
- Normally, it performs no analysis of the text it handles.
- It is not concerned with the meaning of the involved statements during macro expansion.
- The design of a macro processor generally is machine independent!
Basic macro processor functions
- Two new assembler directives are used in macro definition
- MACRO: identify the beginning of a macro definition
- MEND: identify the end of a macro definition
- Prototype for the macro
- Each parameter begins with ‘&’
```
name MACRO parameters
:
body
:
MEND
```
- Body: the statements that will be generated as the expansion of the macro.
## Macro Expansion
| Source | Expanded source |
|---|---|
| M1 MACRO &D1, &D2 | |
| STA &D1 | |
| STB &D2 | |
| MEND | |
| M1 DATA1, DATA2 | STA DATA1 |
| | STB DATA2 |
| M1 DATA4, DATA3 | STA DATA4 |
| | STB DATA3 |
Example of macro definition
Figure 4.1, pp. 178
5 COPY START 0 COPY FILE FROM INPUT TO OUTPUT
10 RDBUFF MACRO &INDEV,&BUFADR,&RECLTH
15 .
20 . MACRO TO READ RECORD INTO BUFFER
25 .
30 CLEAR X CLEAR LOOP COUNTER
35 CLEAR A
40 CLEAR S
45 +LDT #4096 SET MAXIMUM RECORD LENGTH
50 TD =X’&INDEV’ TEST INPUT DEVICE
55 JEQ *-3 LOOP UNTIL READY
60 RD =X’&INDEV’ READ CHARACTER INTO REG A
65 COMPR A, S TEST FOR END OF RECORD
70 JEQ *+11 EXIT LOOP IF EOR
75 STCH &BUFADR, X STORE CHARACTER IN BUFFER
80 TIXR T LOOP UNLESS MAXIMUM LENGTH
85 JLT *-19 HAS BEEN REACHED
90 STX &RECLTH SAVE RECORD LENGTH
95 MEND
Macro invocation
- A macro invocation statement (a macro call) gives the name of the macro instruction being invoked and the arguments to be used in expanding the macro.
- `macro_name p1, p2, ...`
- Difference between macro call and procedure call
- Macro call: statements of the macro body are expanded each time the macro is invoked.
- Procedure call: statements of the subroutine appear only once, regardless of how many times the subroutine is called.
- Question
- How does a programmer decide to use macro calls or procedure calls?
- From the viewpoint of a programmer
- From the viewpoint of the CPU
Exchange the values of two variables
```c
void exchange(int a, int b) {
int temp;
temp = a;
a = b;
b = temp;
}
main() {
int i=1, j=3;
printf("BEFORE - %d %d\n", i, j);
exchange(i, j);
printf("AFTER - %d %d\n", i, j);
}
```
What’s the result? Since C passes arguments by value, exchange() swaps only its local copies of a and b; after the call, i and j in main() are still 1 and 3, so both lines print “1 3”.
Pass by Reference
```c
void exchange(int *p1, int *p2) {
int temp;
temp = *p1;
*p1 = *p2;
*p2 = temp;
}
main() {
int i=1, j=3;
printf("BEFORE - %d %d\n", i, j);
exchange(&i, &j);
printf("AFTER - %d %d\n", i, j);
}
```
# 12 Lines of Assembly Code
## . Subroutine EXCH
| Label | Opcode | Operand |
|---|---|---|
| EXCH | LDA | @P1 |
| | STA | TEMP |
| | LDA | @P2 |
| | STA | @P1 |
| | LDA | TEMP |
| | STA | @P2 |
| | RSUB | |
- P1: RESW 1
- P2: RESW 1
- TEMP: RESW 1
## MAIN
| Label | Opcode | Operand |
|---|---|---|
| MAIN | LDA | #1 |
| | STA | I |
| | LDA | #3 |
| | STA | J |
. Call a subroutine
| Label | Opcode | Operand |
|---|---|---|
| | LDA | #I |
| | STA | P1 |
| | LDA | #J |
| | STA | P2 |
| | JSUB | EXCH |
- I: RESW 1
- J: RESW 1
-END-
Swap two variables by macro
```c
#define swap(i,j) { int temp; temp=i; i=j; j=temp; }
main() {
int i=1, j=3;
printf("BEFORE - %d %d\n", i, j);
swap(i,j);
printf("AFTER - %d %d\n", i, j);
}
```
6 Lines of Assembly Code
MAIN LDA #1
STA I
LDA #3
STA J
. Invoke a macro
LDA I
STA TEMP
LDA J
STA I
LDA TEMP
STA J
I RESW 1
J RESW 1
TEMP RESW 1
END MAIN
Macro expansion
- Each macro invocation statement will be expanded into the statements that form the body of the macro.
- Arguments from the macro invocation are substituted for the parameters in the macro prototype (according to their positions); a short sketch of this substitution is given after this list.
- In the definition of macro: parameter
- In the macro invocation: argument
- Comment lines within the macro body will be deleted.
- Macro invocation statement itself has been included as a comment line.
- The label on the macro invocation statement has been retained as a label on the first statement generated in the macro expansion.
- We can use a macro instruction in exactly the same way as an assembler language mnemonic.
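To make the argument-for-parameter substitution above concrete, here is a minimal C sketch. It assumes (our simplification, not the textbook's data structures) that parameters in a stored body line have already been converted to positional markers ?1, ?2, ..., and it simply replaces each marker with the corresponding argument of the invocation.

```c
#include <stdio.h>
#include <string.h>

/* Minimal sketch: expand one macro body line that uses positional
 * parameter notation ?1, ?2, ... by substituting the invocation's
 * arguments for the corresponding positions. */
static void expand_line(const char *body, char *out, size_t outsz,
                        const char *args[], int nargs)
{
    size_t o = 0;
    for (const char *p = body; *p != '\0' && o + 1 < outsz; p++) {
        if (*p == '?' && p[1] >= '1' && p[1] <= '9' && (p[1] - '1') < nargs) {
            const char *arg = args[p[1] - '1'];
            size_t len = strlen(arg);
            if (o + len < outsz) {          /* copy the argument text        */
                memcpy(out + o, arg, len);
                o += len;
            }
            p++;                            /* skip the position digit       */
        } else {
            out[o++] = *p;                  /* ordinary character: copy as-is */
        }
    }
    out[o] = '\0';
}

int main(void)
{
    const char *args[] = { "DATA1", "DATA2" };   /* as if invoked: M1 DATA1,DATA2 */
    char line[128];

    expand_line("        STA    ?1", line, sizeof line, args, 2);
    puts(line);                                  /* ...STA    DATA1 */
    expand_line("        STB    ?2", line, sizeof line, args, 2);
    puts(line);                                  /* ...STB    DATA2 */
    return 0;
}
```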
Example of macro invocation
Figure 4.1, pp. 178
170 . MAIN PROGRAM
175 .
180 FIRST STL RETADR SAVE RETURN ADDRESS
190 CLOOP RDBUFF F1,BUFFER,LENGTH READ RECORD INTO BUFFER
195 LDA LENGTH TEST FOR END OF FILE
200 COMP #0
205 JEQ ENDFIL EXIT IF EOF FOUND
210 WRBUFF 05,BUFFER,LENGTH WRITE OUTPUT RECORD
215 J CLOOP LOOP
220 ENDFIL WRBUFF 05,EOF,THREE INSERT EOF MARKER
225 J @RETADR
230 EOF BYTE C’EOF’
235 THREE WORD 3
240 RETADR RESW 1
245 LENGTH RESW 1 LENGTH OF RECORD
250 BUFFER RESB 4096 4096-BYTE BUFFER AREA
255 END FIRST
Example of macro expansion
Figure 4.2, pp. 179
| Line | Instruction | Description |
|---|---|---|
| 5 | COPY START 0 | COPY FILE FROM INPUT TO OUTPUT |
| 180 | FIRST STL RETADR | SAVE RETURN ADDRESS |
| 190 | .CLOOP RDBUFF F1,BUFFER,LENGTH | READ RECORD INTO BUFFER |
| 190a | CLOOP CLEAR X | CLEAR LOOP COUNTER |
| 190b | CLEAR A | |
| 190c | CLEAR S | |
| 190d | +LDT #4096 | SET MAXIMUM RECORD LENGTH |
| 190e | TD =X’F1’ | TEST INPUT DEVICE |
| 190f | JEQ *-3 | LOOP UNTIL READY |
| 190g | RD =X’F1’ | READ CHARACTER INTO REG A |
| 190h | COMPR A,S | TEST FOR END OF RECORD |
| 190i | JEQ *+11 | EXIT LOOP IF EOR |
| 190j | STCH BUFFER,X | STORE CHARACTER IN BUFFER |
| 190k | TIXR T | LOOP UNLESS MAXIMUM LENGTH |
| 190l | JLT *-19 | HAS BEEN REACHED |
| 190m | STX LENGTH | SAVE RECORD LENGTH |
| 195 | LDA LENGTH | TEST FOR END OF FILE |
| 200 | COMP #0 | |
| 205 | JEQ ENDFIL | EXIT IF EOF FOUND |
| 210 | . WRBUFF 05,BUFFER,LENGTH | WRITE OUTPUT RECORD |
| 210a | CLEAR X | CLEAR LOOP COUNTER |
| 210b | LDT LENGTH | |
| 210c | LDCH BUFFER,X | GET CHARACTER FROM BUFFER |
| 210d | TD =X’05’ | TEST OUTPUT DEVICE |
| 210e | JEQ *-3 | LOOP UNTIL READY |
| 210f | WD =X’05’ | WRITE CHARACTER |
| 210g | TIXR T | LOOP UNTIL ALL CHARACTERS |
| 210h | JLT *-14 | HAVE BEEN WRITTEN |
| 215 | J CLOOP | LOOP |
| 220 | .ENDFIL WRBUFF 05,EOF,THREE | INSERT EOF MARKER |
| 220a | ENDFIL CLEAR X | CLEAR LOOP COUNTER |
| 220b | LDT THREE | |
| 220c | LDCH EOF,X | GET CHARACTER FROM BUFFER |
| 220d | TD =X’05’ | TEST OUTPUT DEVICE |
| 220e | JEQ *-3 | LOOP UNTIL READY |
| 220f | WD =X’05’ | WRITE CHARACTER |
| 220g | TIXR T | LOOP UNTIL ALL CHARACTERS |
| 220h | JLT *-14 | HAVE BEEN WRITTEN |
| 225 | J @RETADR | |
| 230 | EOF BYTE C’EOF’ | |
| 235 | THREE WORD 3 | |
| 240 | RETADR RESW 1 | |
| 245 | LENGTH RESW 1 | |
| 250 | BUFFER RESB 4096 | |
| 255 | END FIRST | |
No label in the macro body
The problem with labels in the body of a macro:
- If the same macro is expanded multiple times at different places in the program …
- There will be *duplicate labels*, which will be treated as errors by the assembler.
Solutions:
- Do not use labels in the body of macro.
- Explicitly use PC-relative addressing instead.
- Ex, in RDBUFF and WRBUFF macros,
- JEQ *+11
- JLT *-14
- It is inconvenient and error-prone.
- A way of avoiding this error-prone method will be discussed in Section 4.2.2.
Two-pass macro processor
- You may design a two-pass macro processor
- Pass 1:
- Process all macro definitions
- Pass 2:
- Expand all macro invocation statements
- However, one pass may be enough
- In a two-pass design, all macros would have to be defined during the first pass before any macro invocations were expanded, so the body of one macro could not contain definitions of other macros.
- If we require instead that the definition of a macro appears before any statements that invoke that macro, a single pass suffices.
- Moreover, with a one-pass design the body of one macro can contain definitions of other macros.
Example of recursive macro definition
Figure 4.3, pp.182
MACROS (for SIC)
Contains the definitions of RDBUFF and WRBUFF written in SIC instructions.
1 MACROS MACRO {Defines SIC standard version macros}
2 RDBUFF MACRO &INDEV,&BUFADR,&RECLTH
3 MEND
4 WRBUFF MACRO &OUTDEV,&BUFADR,&RECLTH
5 MEND {End of WRBUFF}
6 MEND {End of MACROS}
Example of recursive macro definition
Figure 4.3, pp.182
- **MACROX (for SIC/XE)**
- Contains the definitions of RDBUFF and WRBUFF written in SIC/XE instructions.
| Line | Statement | Comment |
|---|---|---|
| 1 | MACROX MACRO | {Defines SIC/XE macros} |
| 2 | RDBUFF MACRO &INDEV,&BUFADR,&RECLTH | {SIC/XE version} |
| 3 | MEND | {End of RDBUFF} |
| 4 | WRBUFF MACRO &OUTDEV,&BUFADR,&RECLTH | {SIC/XE version} |
| 5 | MEND | {End of WRBUFF} |
Example of macro definitions
- A program that is to be run on a standard SIC system could invoke MACROS, whereas a program to be run on SIC/XE can invoke MACROX.
- However, defining MACROS or MACROX does not define RDBUFF and WRBUFF. These definitions are processed only when an invocation of MACROS or MACROX is expanded.
One-pass macro processor
- A one-pass macro processor that alternates between *macro definition* and *macro expansion* in a recursive way is able to handle recursive macro definitions.
- Restriction
- The definition of a macro must appear in the source program before any statements that invoke that macro.
- This restriction does not create any real inconvenience.
Data structures for one-pass macro processor
- **DEFTAB** (definition table)
- Stores the macro definition including *macro prototype* and *macro body*
- Comment lines are omitted.
- References to the macro instruction parameters are converted to a positional notation for efficiency in substituting arguments.
- **NAMTAB**
- Stores macro names
- Serves as an index to DEFTAB
- Pointers to the *beginning* and the *end* of the macro definition (DEFTAB)
- **ARGTAB**
- Stores the arguments of macro invocation according to their positions in the argument list
- As the macro is expanded, arguments from ARGTAB are substituted for the corresponding parameters in the macro body. (A C sketch of these three tables is given after this list.)
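The sketch below shows one possible C layout for the three tables just listed. The array sizes, field names and the choice to store DEFTAB lines as plain strings (with positional notation ?1, ?2, ... already substituted) are our own assumptions for illustration; they are not taken from the textbook figure.

```c
#include <stdio.h>
#include <string.h>

#define MAX_DEF_LINES 1000   /* total lines stored in DEFTAB            */
#define MAX_MACROS    100    /* entries in NAMTAB                       */
#define MAX_ARGS      10     /* arguments per invocation (ARGTAB)       */
#define LINE_LEN      81

/* DEFTAB: macro prototypes and bodies, with parameters already converted
 * to positional notation (?1, ?2, ...). Comment lines are omitted.       */
static char deftab[MAX_DEF_LINES][LINE_LEN];
static int  deftab_used;

/* NAMTAB: macro names plus indices of the beginning and end of each
 * definition in DEFTAB.                                                  */
struct namtab_entry {
    char name[9];        /* macro name                          */
    int  def_begin;      /* index of the prototype line in DEFTAB */
    int  def_end;        /* index of the matching MEND line       */
};
static struct namtab_entry namtab[MAX_MACROS];
static int namtab_used;

/* ARGTAB: arguments of the current invocation, stored by position so that
 * argument i can be substituted for ?i during expansion.                 */
static char argtab[MAX_ARGS][LINE_LEN];

int main(void)
{
    /* Tiny demonstration: record a macro called RDBUFF whose definition
     * occupies DEFTAB entries 0..1 (prototype and MEND only, for brevity). */
    strcpy(deftab[deftab_used++], "RDBUFF  MACRO  ?1,?2,?3");
    strcpy(deftab[deftab_used++], "        MEND");

    strcpy(namtab[namtab_used].name, "RDBUFF");
    namtab[namtab_used].def_begin = 0;
    namtab[namtab_used].def_end   = 1;
    namtab_used++;

    argtab[0][0] = '\0';   /* ARGTAB is filled only when a macro is invoked */

    printf("%s: DEFTAB[%d..%d]\n", namtab[0].name,
           namtab[0].def_begin, namtab[0].def_end);
    for (int i = namtab[0].def_begin; i <= namtab[0].def_end; i++)
        printf("  %s\n", deftab[i]);
    return 0;
}
```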
Data structures
Algorithm
**Procedure GETLINE**
If EXPANDING then
get the next line to be processed from DEFTAB
Else
read next line from input file
**MAIN program**
- Iterations of
- GETLINE
- PROCESSLINE
**Procedure EXPAND**
Set up the argument values in ARGTAB
Expand a macro invocation statement (like in MAIN procedure)
- Iterations of
- GETLINE
- PROCESSLINE
**Procedure PROCESSLINE**
- DEFINE
- EXPAND
- Output source line
**Procedure DEFINE**
Make appropriate entries in DEFTAB and NAMTAB
Algorithm
Figure 4.5, pp. 184
begin {macro processor}
    EXPANDING := FALSE
    while OPCODE ≠ ‘END’ do
        begin
            GETLINE
            PROCESSLINE
        end {while}
end {macro processor}

Procedure PROCESSLINE
begin
    search NAMTAB for OPCODE
    if found then
        EXPAND
    else if OPCODE = ‘MACRO’ then
        DEFINE
    else
        write source line to expanded file
end {PROCESSLINE}
Algorithm
Figure 4.5, pp. 185
Procedure DEFINE
begin
    enter macro name into NAMTAB
    enter macro prototype into DEFTAB
    LEVEL := 1
    while LEVEL > 0 do
        begin
            GETLINE
            if this is not a comment line then
                begin
                    substitute positional notation for parameters
                    enter line into DEFTAB
                    if OPCODE = ‘MACRO’ then
                        LEVEL := LEVEL + 1
                    else if OPCODE = ‘MEND’ then
                        LEVEL := LEVEL - 1
                end {if not comment}
        end {while}
    store in NAMTAB pointers to beginning and end of definition
end {DEFINE}
Algorithm
Figure 4.5, pp. 185
Procedure EXPAND
begin
    EXPANDING := TRUE
    get first line of macro definition {prototype} from DEFTAB
    set up arguments from macro invocation in ARGTAB
    write macro invocation to expanded file as a comment
    while not end of macro definition do
        begin
            GETLINE
            PROCESSLINE
        end {while}
    EXPANDING := FALSE
end {EXPAND}

Procedure GETLINE
begin
    if EXPANDING then
        begin
            get next line of macro definition from DEFTAB
            substitute arguments from ARGTAB for positional notation
        end {if}
    else
        read next line from input file
end {GETLINE}
Handling nested macro definition
- In DEFINE procedure
- When a macro definition is being entered into DEFTAB, the normal approach is to continue until an MEND directive is reached.
- This would not work for nested macro definition because the first MEND encountered in the inner macro will terminate the whole macro definition process.
- To solve this problem, a counter LEVEL is used to keep track of the level of macro definitions.
- Increase LEVEL by 1 each time a MACRO directive is read.
- Decrease LEVEL by 1 each time a MEND directive is read.
- A MEND terminates the whole macro definition process when LEVEL reaches 0.
- This process is very much like matching left and right parentheses when scanning an arithmetic expression. (A short sketch of this counting follows.)
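As a small, self-contained illustration of the LEVEL bookkeeping described above, the C sketch below walks over a hard-coded stream of opcodes that mirrors the MACROS example of Figure 4.3. The hard-coded array stands in for GETLINE; everything except the counting logic is our own simplification.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Opcodes of the lines read after the outer "MACROS MACRO" header. */
    const char *opcodes[] = {
        "MACRO",   /* RDBUFF MACRO ...  -> nested definition starts */
        "MEND",    /* end of RDBUFF                                  */
        "MACRO",   /* WRBUFF MACRO ...                               */
        "MEND",    /* end of WRBUFF                                  */
        "MEND",    /* end of MACROS -> LEVEL reaches 0, stop         */
    };
    int level = 1;             /* the enclosing MACRO has already been read */

    for (size_t i = 0; i < sizeof opcodes / sizeof opcodes[0] && level > 0; i++) {
        if (strcmp(opcodes[i], "MACRO") == 0)
            level++;           /* a nested definition begins                */
        else if (strcmp(opcodes[i], "MEND") == 0)
            level--;           /* one definition (inner or outer) ends      */
        printf("line %zu: %-5s  LEVEL = %d\n", i + 1, opcodes[i], level);
    }
    return 0;
}
```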
Comparison of Macro Processors Design
- One-pass algorithm
- Every macro must be defined before it is called
- One-pass processor can alternate between macro definition and macro expansion
- Nested macro definitions are allowed but nested calls are not
- Two-pass algorithm
- Pass1: Recognize macro definitions
- Pass2: Recognize macro calls
- Nested macro definitions are not allowed
Information Retrieval: Improving Question Answering Systems by Query Reformulation and Answer Validation
Mohammad Reza Kangavari, Samira Ghandchi, Manak Golpour
Abstract — Question answering (QA) aims at retrieving precise information from a large collection of documents. Most Question Answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems in reformulating questions. The answer processing module is an emerging topic in QA systems, where systems are often required to rank and validate candidate answers. These techniques, which aim at finding short and precise answers, are often based on semantic relations and keyword co-occurrence.
This paper discusses a new model for question answering that improves two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation.
Two important components form the basis of question processing. The first is question classification, which specifies the types of question and answer. The second is reformulation, which converts the user's question into a question that the QA system can understand in a specific domain. The objective of an Answer Validation task is to judge the correctness of an answer returned by a QA system, according to the text snippet given to support it. For validating answers we apply candidate answer filtering and candidate answer ranking, followed by a final validation step based on user voting.
This paper also describes the new architecture of the question and answer processing modules, together with modeling, implementing and evaluating the system. The system differs from most question answering systems in its answer validation model, which makes it more suitable for finding the exact answer.
Results show that, over a total of 50 asked questions, evaluation of the model indicates a 92% improvement in the system's decisions.
Keywords — Answer Processing, Answer validation, Classification, Question Answering and Query Reformulation.
I. INTRODUCTION
Much research has been done on QA systems in recent years. QA systems have been extended to answer simple questions correctly; research now focuses on methods for answering complex questions correctly. Those methods analyze and parse a complex question into multiple simple questions and use existing techniques to answer them [1].
Recent research shows that increasing system performance depends on the number of probable answers in the documents.
Finding the exact answer is one of the most important problems in QA systems. For this purpose, the designed model uses syntactic and semantic relations together with previously asked questions and dynamic patterns to find the exact answer in the least time.
This model works in the aerology domain, forecasting weather information based on patterns in a closed-domain question answering system. If there is no suitable default pattern in the database, the user can create appropriate patterns based on English grammar. The designed QA system answers only questions whose answers are factoids or a single sentence.
The aim of this paper is to design and implement a new model for classification, reformulation and answer validation in a QA system. The methodology used to find the correct answer in the 'weather forecasting' domain combines NLP techniques, syntactic and semantic relations among words, dynamic patterns and previous information about the defined domain.
The main reason for providing the system with an answer validation component is the difficulty of picking the "exact answer" out of a document.
Our approach to automatic answer validation relies on discovering relations between a question and the answer candidates by mining the documents or a domain text corpus for their co-occurrence tendency [11].
In this model, questions are first parsed using the semantic and syntactic information in the question. Second, answer patterns are specified based on the question type. The search engine then finds candidate answer documents and sends them to the answer processing module to extract correct answers. The system filters the collection of candidate answers based on co-occurrence patterns and assigns a priority number to each candidate. Finally, the system ranks the answers and sends them to the user for final validation in order to extract the exact answer.
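As a rough, self-contained illustration of the filtering and ranking idea, the C sketch below scores each candidate snippet by counting how many of the question's keywords it contains. The hard-coded question, keywords and snippets are hypothetical, and plain substring matching is our own simplification; the paper's model additionally uses semantic relations, dynamic patterns and final user voting.

```c
#include <stdio.h>
#include <string.h>

/* Count how many of the question keywords occur in a candidate snippet.
 * This crude co-occurrence score stands in for the paper's richer
 * filtering/ranking step. */
static int cooccurrence_score(const char *snippet, const char *keywords[], int n)
{
    int score = 0;
    for (int i = 0; i < n; i++)
        if (strstr(snippet, keywords[i]) != NULL)
            score++;
    return score;
}

int main(void)
{
    /* Hypothetical question: "What is the temperature in Tehran tomorrow?" */
    const char *keywords[] = { "temperature", "Tehran", "tomorrow" };
    const char *candidates[] = {
        "The temperature in Tehran tomorrow will reach 28 degrees.",
        "Tehran is the capital of Iran.",
        "Heavy rain is expected tomorrow in the north.",
    };
    const int nk = 3, nc = 3;

    int best = 0, best_score = -1;
    for (int c = 0; c < nc; c++) {
        int s = cooccurrence_score(candidates[c], keywords, nk);
        printf("score %d: %s\n", s, candidates[c]);
        if (s > best_score) {      /* keep the highest-scoring candidate */
            best_score = s;
            best = c;
        }
    }
    printf("best candidate: %s\n", candidates[best]);
    return 0;
}
```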
Considered patterns in this program are based on English language grammar and tried to include all probable patterns. If no proper pattern find, user can make a new pattern. This paper tries to use syntax, semantic relations and existing information of pervious asked questions by users which were saved in system. Our system modeled in aerology domain but it can easily works in both close and open domain in QA systems. In Section II, we considered QA systems, section III consist of question processing part and section IV present answer processing part. Section V include the architecture of the new model and section VI discussed evaluation. Final section include conclusion of the designed model.
II. QUESTION ANSWERING SYSTEMS (QA)
QA is a type of information retrieval. Given a collection of documents (such as the World Wide Web or a local collection), the system should be able to retrieve answers to questions posed in natural language. QA is regarded as requiring more complex natural language processing (NLP) techniques than other types of information retrieval such as document retrieval, and it is sometimes regarded as the next step beyond search engines [1][2].
QA research attempts to deal with a wide range of question types including fact, list, definition, how, why, hypothetical, semantically constrained and cross-lingual questions. Search collections vary from small local document collections to internal organization documents, compiled newswire reports and the World Wide Web. QA systems are classified into two main kinds [12]: open-domain QA systems and closed-domain QA systems.
Open-domain question answering deals with questions about nearly everything and can only rely on general ontologies [4] and world knowledge. On the other hand, these systems usually have much more data available from which to extract the answer.
Closed-domain question answering deals with questions under a specific domain (for example medicine or weather forecasting) and can be seen as an easier task because NLP systems can exploit domain-specific knowledge frequently formalized in an ontology.
Alternatively, open-domain might refer to a situation where an unlimited range of question types is accepted, such as questions asking for descriptions [1][2]. Much research has been done on expanding English-language QA systems; some work has also been done on Chinese, Arabic, Spanish and other QA systems [3].
The aim of a QA system is to find the exact and correct answer to the user's question. In addition to user interaction, most QA systems contain at least the following three parts:
1- Question processing
2- Document processing
3- Answer processing
III. QUESTION PROCESSING
As mentioned before, question, document and answer processing are the three main parts of a QA system.
The important components of question processing are question classification and reformulation.
A. Classification component
For answer extraction in a large collection of documents and texts, the system first needs to know what it is looking for. Therefore, questions should be classified according to their types [4].
Question classification is done before reformulation, in order to find the types of question and answer. The system first needs to know the type of question; this also helps the system to omit the question words from the final format of the answer.
Table 1 shows question words and the corresponding types of questions and answers. In general, questions can be divided as follows:
- Questions with 'WH' question words such as what, where, when, who, whom, which, how and why.
- Questions with 'modal' or 'auxiliary' verbs, whose answers are Yes/No.
It is obvious that specifying the type of question is not enough to find the correct answer. For example, for the question 'Who was the first aerologist in the USA?' the answer type will be 'a person'. But if a question is asked with 'What', the exact answer type is not specified, because the answer may be a definition, a number or a title [6].
For correct answer extraction, patterns should be defined so that the system can find the exact answer type, which is then sent to document processing [4][5].
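To make the mapping in Table 1 concrete, the following is a minimal sketch of how such a classification step could look in code. It is illustrative only: the function name, the rule table and the small hint-word list are our assumptions, not the classifier used in the described system.

```
# A minimal sketch of the classification step described above (illustrative
# names, not the authors' implementation). It maps a question word, plus an
# optional co-occurring hint word, to an expected answer type as in Table 1.

WH_WORDS = {"what", "where", "when", "who", "whom", "which", "how", "why"}

# (question word, hint word) -> answer type, following Table 1.
ANSWER_TYPE_RULES = {
    ("when", None): "DATE",
    ("which", "who"): "PERSON",
    ("which", "where"): "LOCATION",
    ("which", "when"): "DATE",
    ("why", None): "REASON",
    ("who", None): "PERSON",
    ("what", "who"): "PERSON",
    ("what", "when"): "DATE",
    ("what", "where"): "LOCATION",
    ("what", None): "NUMBER/DEFINITION/TITLE",  # ambiguous, resolved later by patterns
}

def classify_question(question: str) -> str:
    """Return the expected answer type, or 'YES/NO' for modal/auxiliary questions."""
    tokens = question.lower().rstrip("?").split()
    if not tokens:
        return "UNKNOWN"
    head = tokens[0]
    if head not in WH_WORDS:
        # No WH word: question starts with a modal or auxiliary verb -> yes/no.
        return "YES/NO"
    # Look for a secondary hint (e.g. 'which person', 'what year'); hypothetical sample.
    hints = {"person": "who", "city": "where", "capital": "where",
             "month": "when", "year": "when", "day": "when"}
    hint = next((hints[t] for t in tokens[1:] if t in hints), None)
    return ANSWER_TYPE_RULES.get((head, hint),
                                 ANSWER_TYPE_RULES.get((head, None), "UNKNOWN"))

# Example: classify_question("Which city has the min temperature?") -> 'LOCATION'
```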
B. Reformulation component
Question reformulation (also called surface patterns, paraphrasing or answer patterns) tries to identify various ways of expressing an answer given a natural language question. Reformulation is often used in question answering systems to retrieve answers in a large document collection [7].
The query reformulation component converts the question into a set of keyword queries that are sent to the search engine for parallel evaluation.
The following items are important in reformulation:
1- Use of syntactic relations among the words of the asked question.
2- Use of semantic relations among the words of the asked question.
3- Use of existing information about previously asked questions and answers that are partly or fully identical to the user's question. In this case, the system can reuse the answer type of the previous question for the new one, which shortens the process of finding the proper pattern and answer type and reduces the time needed to return the correct answer [8][9].
This is only possible if the system can save information in the 'Usage knowledge' database. If all of the options above work together at the same time, the flexibility of the system increases. As mentioned before, the flexibility of the designed system rests on the 'Usage knowledge' part. This part acts like an FAQ1 and can also answer new questions that are not exactly the same as previous questions but differ in adverbs or verbs.
When a user asks a question, the sentence is first parsed into its syntactic components and its keywords are then selected for use in reformulation.
Table 1 Classification of question and answer
<table>
<thead>
<tr>
<th>Question Classification</th>
<th>Sub classification</th>
<th>Type of Answer</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>When</td>
<td></td>
<td>DATE</td>
<td>When did rain come yesterday?</td>
</tr>
<tr>
<td>Which</td>
<td>Which-Who</td>
<td>PERSON</td>
<td>Which person did invent the instrument of aerology?</td>
</tr>
<tr>
<td>Which</td>
<td>Which-Where</td>
<td>LOCATION</td>
<td>Which city has the min temperature?</td>
</tr>
<tr>
<td>Which</td>
<td>Which-When</td>
<td>DATE</td>
<td>Which month has max rain?</td>
</tr>
<tr>
<td>Why</td>
<td></td>
<td>REASON</td>
<td>Why don't we have enough rain this year?</td>
</tr>
<tr>
<td>What</td>
<td></td>
<td>MONEY / NUMBER</td>
<td>What is the temperature of Tehran?</td>
</tr>
<tr>
<td>What</td>
<td></td>
<td>DEFINITION / TITLE</td>
<td></td>
</tr>
<tr>
<td>What</td>
<td>What-Who</td>
<td>PERSON</td>
<td>What is the best meteorologist in Iran?</td>
</tr>
<tr>
<td>What</td>
<td>What-When</td>
<td>DATE</td>
<td>What year do we have max rain?</td>
</tr>
<tr>
<td>What</td>
<td>What-Where</td>
<td>LOCATION</td>
<td>What is the capital of Iran?</td>
</tr>
<tr>
<td>Who</td>
<td></td>
<td>PERSON</td>
<td>Who is the first meteorologist in world?</td>
</tr>
</tbody>
</table>
There is an important question: what are the keywords in the question sentence? Keywords are selected from the question sentence as follows:
1- All words that are in 'quotations' or "double quotations".
2- All words that are names.
3- All words that are adverbs (time, location, status).
4- All words that are main verbs or modal verbs.
5- All words that are the subject.
6- All words that are the object.
The next important question is how the keywords can be used to build the answer. For this purpose, the system uses patterns to find the correct format of the answer. These patterns are built according to English grammar [4].
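As a rough illustration of the keyword-selection rules listed above, the sketch below assumes that the query analyzer has already tagged each word with a grammatical role; the role names and the select_keywords function are hypothetical stand-ins, not part of the described system.

```
# A minimal sketch of keyword selection following the six rules above
# (illustrative only). It assumes the query analyzer produced (word, role)
# pairs, where role is one of 'subject', 'object', 'name', 'adverb',
# 'main_verb', 'modal_verb', 'quoted', or some other tag.

KEYWORD_ROLES = {"quoted", "name", "adverb", "main_verb", "modal_verb",
                 "subject", "object"}

def select_keywords(tagged_question):
    """Return the keywords of a parsed question, preserving their order."""
    return [word for word, role in tagged_question if role in KEYWORD_ROLES]

# Example (hypothetical parse of "What is the temperature of Tehran?"):
tagged = [("what", "wh_word"), ("is", "main_verb"), ("the", "det"),
          ("temperature", "subject"), ("of", "prep"), ("Tehran", "name")]
print(select_keywords(tagged))   # ['is', 'temperature', 'Tehran']
```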
1) Rules for extracting answer patterns. The first step in finding the proper pattern is to find the verb in the sentence. In the defined patterns, verbs are divided into three groups:
1- Main verbs such as 'to be' (am, is, are, was, were, been) or 'to have' (have, has, had)
2- Auxiliary verbs (do, does, did)
3- Modal verbs (can, could, shall, should, may, might, ...)
Main verbs are never deleted in the answer, but depending on the answer type their location in the sentence may change. Sometimes these verbs (am, is, are, ...) appear together with another verb in the '-ing' form.
Auxiliary verbs, however, are deleted in answers. Note that 'do' may also appear in a sentence as a main verb, which can be detected through semantic relations [9][10].
If the question has no WH question word and is asked with a modal verb or 'to be', then the answer is yes/no. If the sentence has no question word at all (neither WH nor modal), the system asks the user which question word forms his question, after which the usual process is performed.
IV. ANSWER PROCESSING
The answer processing module consists of two main components: answer extraction and answer validation. First, candidate answers are extracted from the documents retrieved by the search engine in the answer extraction component. The answers are then validated by filtering and ranking the candidate answers, and the system's final suggested answers are validated by user voting.
Our approach to automatic answer validation relies on discovering relations between the asked question and the candidate answers by mining the documents or a domain text corpus for their co-occurrence tendency [10], [11]. The underlying hypothesis is that the number of these co-occurrences can be considered a significant clue to the validity of the answer.
As a consequence, this information can be used effectively to rank the large number of candidate answers that our QA system is often required to deal with. We can also exploit domain knowledge and answer patterns to create new answers based on co-occurring keywords and semantic relations [5].
1 Frequently asked questions
A. FILTERING COMPONENT
The candidate answer collection sent by answer extraction is fed into the filtering component. This collection consists of snippets that may include the exact answer. Using the answer keywords, the system finds co-occurring words [9] and semantic relations [12] existing in the database ontology, as well as related sentences from the domain knowledge. By analyzing the candidate answers using the answer type and keywords, some snippets are eliminated from the collection. The best candidate answers are then sent for ranking.
B. RANKING COMPONENT
This component receives the list of answers filtered in the previous step. The list contains the answers that, from the system's point of view, are most related to the question. The ranking component classifies the answers and assigns priorities to them. A priority number is assigned to each answer based on the number of occurrences of the answer type in the snippets and the distance between answer keywords (with respect to a threshold). The answer with the highest priority is placed at the top of the list, and this is repeated for all answers. After that, the data are fetched from the domain knowledge database and the answers are sent to the user for validation.
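The following sketch illustrates one possible reading of the filtering and ranking steps just described, using simple token matching. The function names, the related_terms stand-in for the ontology lookup, the scoring formula and the default threshold are our assumptions, not the authors' implementation.

```
# A rough sketch of filtering and ranking (assumed data structures). Each
# candidate is a text snippet; the priority mixes how often expected
# answer-type terms appear with how close the question keywords are to each
# other in the snippet.

def filter_candidates(snippets, keywords, related_terms):
    """Keep snippets that contain at least one keyword or related term."""
    terms = {t.lower() for t in keywords} | {t.lower() for t in related_terms}
    return [s for s in snippets if any(t in s.lower() for t in terms)]

def keyword_distance(snippet, keywords):
    """Largest gap (in tokens) between question keywords found in the snippet."""
    lowered = {k.lower() for k in keywords}
    tokens = snippet.lower().split()
    positions = [i for i, tok in enumerate(tokens) if tok in lowered]
    if len(positions) < 2:
        return len(tokens)          # worst case: keywords absent or isolated
    return max(b - a for a, b in zip(positions, positions[1:]))

def rank_candidates(snippets, keywords, answer_type_terms, threshold=10):
    """Higher priority = more answer-type mentions and closer keywords."""
    scored = []
    for s in snippets:
        type_hits = sum(s.lower().count(t.lower()) for t in answer_type_terms)
        priority = type_hits + (1 if keyword_distance(s, keywords) <= threshold else 0)
        scored.append((priority, s))
    return [s for _, s in sorted(scored, key=lambda x: -x[0])]
```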
C. USER VOTING (VALIDATING)
In this step, the answers are shown to the user for validation. If the top answer is the exact answer, the system increases the validation grade in usage knowledge for the [q, a] pair, and that answer is stored in the database to answer the next similar question. Otherwise, the other candidates are shown to the user to certify. This process continues until there are no more answers; the system then asks the user for additional information and sends that information, or the new question, to the question processing module.
V. ARCHITECTURE
To increase the reliability and capability of the designed QA system in finding correct and exact answers, we use dynamic patterns together with semantic relations among words, verbs and keywords, as well as co-occurring keywords.
In the question processing module, the question is first classified according to linguistic theories and the principles of answering questions. The question's structure and keywords, determined by classification, are then sent to the document processing module to retrieve documents that may contain the proper answer.
In the answer processing module, the candidate answers received from the search engine are first filtered by co-occurrence patterns and ordered based on the system's analysis. The answers are then sent to the user to validate the candidates. Finally, the system presents the exact answer.
A. SYSTEM COMPONENTS
The designed architecture has the following parts (see Fig. 1):
1- Question interface: the user writes his question through an interface. If no proper answer is given, the user can rephrase the question.
2- Query analyzer: the question is parsed into its constituents, such as subject, object, verb, noun, adjective, adverb, etc.
3- Lexicon: used as a vocabulary (dictionary); it contains all the words in the related domains. The type of each word (subject, object, verb, noun, adjective, adverb, etc.) is also specified here.
4- Database ontology: questions and answers are examined semantically in this part. Semantic relations among keywords are saved in this database.
5- Domain knowledge: domain information is stored as a database in this part and supplies the user's answer when a web service connects to the internet.
6- Question classification: question classification is one of the important functions of most QA systems. Most research on this subject is based on regular expressions, hand-written grammar rules and other advanced natural language techniques for question parsing and answer finding. In this part, all questions are classified according to WH question words (such as what, where, when, who, etc.) or other question words with yes/no answers.
7- Reformulation: the original question (Q) is transformed, using rules, into a question with a new format (Q'). Question words and punctuation that make no difference between question and answer are deleted and the roots of the words are determined. Proper patterns and information are then looked up using the words of the new question.
8- Usage knowledge: one of the most useful ways to find correct related answers is to use a library of previous questions and answers. If a new user question is similar to a previously submitted question, the answer of the old question is reused as the answer to the new one. If the new question differs from the old questions in the database, it is sent on to the other steps. Note that this database is a new addition for answer validation.
9- Candidate answer filtering: candidate answers are filtered based on the answer type and the co-occurrence patterns created in the system. Some answers are also generated dynamically from domain knowledge and co-occurring keywords.
10- Candidate answer ranking: answers are ranked based on the distance of keywords in the snippets, the answer type and answer repetition. Using the ranking part of our model, the filtered candidate answer collection is ranked by validation value.
11- User voting: this part plays the human assessment role; it checks the correctness of the answer and fills in the validation grade in usage knowledge for subsequent validations, which affects the total system response time.
12- Pattern: a database used for answer patterns, updated with the dynamic patterns created in the system.
B. ALGORITHM
As mentioned before, for each question written by the user in natural language, some words of the question are used as keywords in the answer. These keywords can serve as subject, object, verb, adverbs, etc.
The algorithm designed for this QA system is as follows:
1- The user asks a question through the query interface. If the question is similar to one of the previous questions saved in the usage knowledge database, the answer to the previous question is chosen for the user's question and the system returns it. Otherwise, the next step is performed.
2- The query analyzer parses the question into subject, verb, object, adverb, etc. Note that the types of words and their synonyms (if any) are defined dynamically in the Lexicon database. If the system cannot find a word or its type in the question, it announces this and the user can enter the new word and its type. In this way the Lexicon database is completed and updated.
Finally, a tree-view result is used in the classification part.
In the classification part, the type of question and then the type of answer are specified.
3- The question may have a WH question word whose answer is proportionate to that question.
3-1 The asked question has no WH question word and just has a modal or auxiliary verb with a yes/no answer.
3-2 The user may ask his question with a sentence that has no verb or question word, such as: 'Temperature of Tehran'.
4- After these steps, the most important part of finding the answer, query reformulation based on the proportionate pattern, is performed.
5- We assume that, in the document processing part, the search engine retrieves documents within the scope of the domain, based on the answer patterns and the important keywords.
6- The search engine sends the candidate answer collection to the answer processing module. The answer extraction part extracts candidate answers from the retrieved documents. These candidate answers are then passed to the filtering unit.
7- Based on co-occurring words and semantic relations existing in the database ontology, the answer type and the keywords extracted in the question processing module, the system filters the candidate answer collection. Some answers that are not related to the asked question are therefore eliminated.
Fig. 1 Question answering system architecture
8- The remaining answers are ranked by keyword distance and the frequency of answer keywords in the snippets. The filtered answers thereby obtain priorities and are placed in an ordered list.
9- The answers with the highest priority are shown to the user for validation. The answers then receive a validation grade, which is saved in usage knowledge. If the user accepts the suggested answer that the system presented as the exact answer, the algorithm finishes.
10- If not, the algorithm sends the next set of candidate answers from the list, in priority order, to the user. This task is performed recursively.
11- Finally, if the user does not accept any answer from the candidate list, the system asks for a new question, requests additional information from the user and sends them to the question processing module. The pattern database is also updated to eliminate inefficient patterns.
VI. PROCESS OF MODEL
To increase efficiency and find the exact answer, the system uses the database ontology. This database includes co-occurring words, such as rain and umbrella [6], as well as words that are close in meaning, such as 'temperature' and 'degree' in the weather forecasting domain.
The flexibility of the designed system is based on usage knowledge: if a new question is totally or nearly similar to a question asked before, the system can reuse the generated answer. We also use a field named 'answer validation grade' to decrease the system's response time.
If an answer is valid, the system automatically adds one to this grade. Then, if a user asks a repeated question, the answer with the highest grade is selected as the valid answer.
A newly asked question is parsed in the query analyzer into its components. All of these components are then checked against the data in usage knowledge to find a probable similarity with previous questions. If, during checking, the structure of the asked question is exactly the same as data in usage knowledge, the answer to the new question is certainly the same as the answer to the previous question. But if some differences are found between the new question and the stored data (such as the question word, a preposition, an adjective, a name or an adverb), the system uses the word ontology to find synonyms of the differing parsed words in order to establish similarity between the new and previous questions. Finally, if an answer exists for the synonymous word in a previous question, the system uses it as the answer to the new question.
For example, if during checking the word 'temperature' in the new question differs from 'degree' in a previous question, and these two words are saved as synonyms in the word ontology, and a previous question with 'degree' was asked, then the system treats the two questions, and the types of their answers, as the same even if they have different adverbs.
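A small sketch of how such a usage-knowledge lookup with a validation grade could be organised is shown below; the data layout, the SYNONYMS stand-in for the word ontology and the function names are illustrative assumptions rather than the authors' design.

```
# Illustrative sketch only. Stored entries are (question_keywords, answer,
# grade); 'SYNONYMS' stands in for the word ontology (e.g. temperature/degree).

from typing import Optional

SYNONYMS = {"temperature": "degree", "degree": "temperature"}  # hypothetical sample

def normalise(word: str) -> str:
    """Map a word to a canonical form using the word ontology."""
    w = word.lower()
    return min(w, SYNONYMS.get(w, w))   # pick a stable representative of the pair

def lookup(usage_knowledge, new_keywords) -> Optional[str]:
    """Return the highest-graded stored answer whose keywords match."""
    target = {normalise(w) for w in new_keywords}
    matches = [(grade, answer)
               for keywords, answer, grade in usage_knowledge
               if {normalise(w) for w in keywords} == target]
    if not matches:
        return None
    return max(matches)[1]              # answer with the highest validation grade

def record_validation(usage_knowledge, keywords, answer):
    """Increase the validation grade after the user accepts an answer."""
    for i, (k, a, grade) in enumerate(usage_knowledge):
        if k == keywords and a == answer:
            usage_knowledge[i] = (k, a, grade + 1)
            return
    usage_knowledge.append((keywords, answer, 1))
```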
Note that different adverbs in two otherwise identical questions have no effect on the type of question. This is important for question words that have more than one possible answer type (such as 'what', whose answer type may be 'number', 'title' or 'definition').
If the structure of the question is totally different, meaning no similar question exists in usage knowledge, the system uses the other defined patterns to find the answer.
By using the co-occurrence technique, the suggested model can increase the validity of the candidate answers, and exploiting the validation value also improves the efficiency of the system. In addition, in the final step the user can check the validity of the answers and select the best validated answer or answers. The system assigns a validation value to the selected answer and stores it in usage knowledge. Likewise, the system receives a score between 0 and 100 from the user, reflecting the user's satisfaction with the system's operation. This score is used for the evaluation of the system. Because this evaluation measure is based on the user's viewpoint, the reported validation percentage is high.
If some words in a candidate answer sentence are co-occurring, there is a strong probability that the answer is valid. The ontology for weather events consists of event concepts, which are similar to synsets in WordNet [15]. For example, rain and umbrella are in the same event concept in the domain ontology for weather events, because questions about using an umbrella are usually asking about rain (e.g. 'Will I need to bring an umbrella tomorrow?' and 'Will it be raining tomorrow?').
VII. EVALUATION
The model was implemented based on dynamic patterns, syntactic and semantic relations among words, co-occurring keywords, the answer validation value, and the use of previous information (questions and answers) saved in usage knowledge.
Chart No. 1 shows the type and quantity of the questions (such as questions asking for quality/status, quantity/amount, location, time/date, person, and defined/descriptive questions).
The model is able to check the words of a question semantically and also uses co-occurring word relations to validate the answer. Another advantage of this model is its ability to define new answers from domain knowledge, keywords and answer patterns. The model also uses an answer validation value that affects the system's response time. Because the domain is restricted to aerology, as the number of frequent questions in the system increases, the number of validated answers also increases; this means that the validity of the answers increases.
Moreover, the model works with dynamic patterns. It is also able to answer a sentence containing more than one question word individually, a sentence without a question word, or a multi-sentence text that contains a question.
For the evaluation of the implemented system, 50 questions were asked by 20 persons of different ages and backgrounds, in different locations and at different times.
For these 50 questions, the evaluation shows that the model improved the decisions of the system in 92% of cases.
Chart No. 2 shows the evaluation of the model based on user voting.
As noted before, this model improves the precision of question answering systems by working on the query reformulation and answer validation modules. The model exploits syntactic and semantic relations to build dynamic patterns for reformulating the asked question, and it extends the answer validation part by using word co-occurrence techniques and applying user assessments to validate answers. The system's performance is thereby increased.
Finally, the model uses usage knowledge to reduce the total time needed to answer a question.
VIII. CONCLUSIONS AND FUTURE WORKS
Restricted-domain QA (RDQA), working on small document collections and restricted subjects, seems to be no less difficult a task than open-domain QA. Due to candidate scarcity, the precision of an RDQA system, and in particular that of its IR module, becomes a problematic issue. It seriously affects the overall success of the system, because if most of the retrieved candidates are incorrect, it is meaningless to apply further QA techniques to refine the answers.
The simplest approach to improving the accuracy of a question answering system is to restrict the domain it covers. By restricting the question domain, the size of the candidate answer collection becomes smaller.
Reformulation is an important part of understanding the interplay with information retrieval. There are three aspects to user reformulation patterns: format, content and source.
The main goal of rewriting a question is to ask it in a new format so that less time and fewer sources are needed for the search. Reformulating questions is one of the most difficult activities in the user's domain, even on the web, where learning from and using documents seems simple. Understanding question behaviour, and designing software to support this behaviour, are important problems for QA systems based on reformulation.
So, to increase performance and obtain the exact correct answer, this component should use syntactic and semantic relations together at the same time, and also use the existing information about previously asked questions saved in the program.
One important property of the reformulation component is that it can be used separately in other QA systems: since it is not tied to a special domain, it can be used in both open and closed domains.
The designed system takes the following steps to increase its proficiency:
- Semantic analysis
- Syntactic analysis
- Dynamic patterns
- Use of previous data (usage knowledge)
- Use of questions with related subjects
- Addition of web services as a rich source in the domain ontology
In addition to the question processing module, improving the answer processing module completes the question processing task and increases the efficiency of the QA system, because the system must return the correct answer. By improving the answer processing module, the system is able to present exact answers, because it works on a closed domain, deals with frequent questions and uses validation patterns. Another reason is that the exact answers are obtained by filtering the candidate answer collection in several steps, so the answers are selected from a restricted collection, which makes the algorithm more efficient. By using a validation grade, this model is more effective than other models in total response time. This grade is null at the beginning, but as the QA system is used the field increases and improves the response time.
Future research should consider the factors that lead users to reformulate their questions. New research should also gather more information at various levels of understanding, effectiveness and situations, using multiple information-gathering methods such as documents, interviews, reports, etc. In addition, the answer processing module should be improved by identifying new kinds of patterns and by further reducing the time needed to find the exact answer, which is achieved here by using the validation grade and the usage knowledge database.
Dynamic Programming
Cormen et al., IV.15
Motivating Example: Fibonacci numbers
\[ F(1) = F(2) = 1 \]
\[ F(n) = F(n-1) + F(n-2) \quad n>2 \]
Simple recursive solution:
```
def fib(n):
if n<=2: return 1
else: return fib(n-1) + fib(n-2)
```
What is the size of the call tree?
Problem: exponential call tree
Can we avoid it?
Efficient computation using a memo table
```
def fib(n, table):
    # pre: n > 0, table[i] is either 0 or already contains fib(i)
    if n <= 2:
        return 1
    if table[n] > 0:
        return table[n]
    result = fib(n-1, table) + fib(n-2, table)
    table[n] = result
    return result
```
We use a memo table, never computing the same value twice. How many calls now? \(O(n)\)
Can we do better?
Look ma, no table
```
def fib(n):
    if n <= 2:
        return 1
    a = b = 1          # the previous two values, fib(n-2) and fib(n-1)
    c = 0
    for i in range(3, n+1):
        c = a + b
        a = b
        b = c
    return c
```
Compute the values "bottom up"
Avoid the table, only store the previous two
same \(O(n)\) time complexity, constant space.
Only keeping the values we need.
Optimization Problems
In optimization problems, a set of choices must be made to arrive at an optimum, and sub-problems are encountered.
This often leads to a recursive definition of a solution. However, the recursive algorithm is often inefficient in that it solves the same sub-problem many times.
Dynamic programming avoids this repetition by solving the problem bottom up and storing the sub-solutions that are (still) needed.
Dynamic vs Greedy, Dynamic vs Div&Co
Compared to Greedy, there is no predetermined local choice of a sub solution, but a solution is chosen by computing a set of alternatives and picking the best.
Another way of saying this is: Greedy only needs ONE best solution.
Dynamic Programming builds on the recursive definition of a divide and conquer solution, but avoids re-computation by storing earlier computed values, thereby often saving orders of magnitude of time.
Fibonacci: from exponential to linear
Dynamic Programming
Dynamic Programming has the following steps
- Characterize the structure of the problem, i.e., show how a larger problem can be solved using solutions to sub-problems
- Recursively define the optimum
- Compute the optimum bottom up, storing values of sub solutions
- Construct the optimum from the stored data
Optimal substructure
Dynamic programming works when a problem has optimal substructure: we can construct the optimum of a larger problem from the optima of a "small set" of smaller problems.
- small: polynomial
Not all problems have optimal substructure. Travelling Salesman Problem (TSP) does not have optimal substructure.
Weighted Interval Scheduling
We studied a greedy solution for the interval scheduling problem, where we searched for the maximum number of compatible intervals.
If each interval has a weight and we search for the set of compatible intervals with the maximum sum of weights, no greedy solution is known.
Weighted Interval Scheduling
Weighted interval scheduling problem.
- Job $j$ starts at $s_j$, finishes at $f_j$, and has value $v_j$.
- Two jobs compatible if they don't overlap.
- Goal: find maximum value subset of compatible jobs.
Weighted Interval Scheduling
Assume jobs sorted by finish time: \( f_1 \leq f_2 \leq \ldots \leq f_n \).
\( p(j) \) = largest index \( i < j \) such that job \( i \) is compatible with \( j \),
in other words: \( p(j) \) is \( j \)'s latest predecessor; \( p(j) = 0 \) if \( j \) has no predecessors. Example: \( p(8) = 5, p(7) = 3, p(2) = 0 \).
Using \( p(j) \) can you think of a recursive solution?
Recursive (either / or) Solution
Notation. \( \text{OPT}(j) \): optimal value to the problem consisting of job requests 1, 2, ..., \( j \).
- Case 1: \( \text{OPT}(j) \) includes job \( j \).
- add \( v_j \) to total value
- can’t use incompatible jobs \{ \( p(j) + 1, p(j) + 2, \ldots, j - 1 \) \}
- must include optimal solution to problem consisting of remaining compatible jobs \( 1, 2, \ldots, p(j) \)
- Case 2: \( \text{OPT}(j) \) does not include job \( j \).
- must include optimal solution to problem consisting of remaining compatible jobs \( 1, 2, \ldots, j - 1 \)
\[
\text{OPT}(j) = \begin{cases}
0 & \text{if } j = 0 \\
\max \left\{ v_j + \text{OPT}(p(j)), \text{OPT}(j - 1) \right\} & \text{otherwise}
\end{cases}
\]
Either / or recursion
This is very often a first recursive solution method:
- either some item is in and then there is some consequence
- or it is not, and then there is another consequence, e.g. knapsack, see later slides:
Here: for each job \( j \)
- either \( j \) is chosen
- add \( v_j \) to the total value
- consider \( p_j \) next
- or it is not
- total value does not change
- consider \( j-1 \) next
---
Weighted Interval Scheduling: Recursive Solution
**input:** \( s_1, \ldots, s_n, f_1, \ldots, f_n, v_1, \ldots, v_n \)
sort jobs by finish times so that \( f_1 \leq f_2 \leq \ldots \leq f_n \).
**compute** \( p(1), p(2), \ldots, p(n) \)
Compute-Opt(j) {
    if (j == 0)
        return 0
    else
        return max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1))
}
What is the size of the call tree here?
How can you make it big, e.g. exponential?
Analysis of the recursive solution
**Observation.** Recursive algorithm considers exponential number of (redundant) sub-problems.
Number of recursive calls for family of "layered" instances grows like Fibonacci sequence.
(Figure: recursion tree of the layered instance, growing like the Fibonacci sequence.)
\[
p(1) = 0, p(j) = j-2
\]
Code on previous slide becomes Fibonacci: \( \text{opt}(j) \) calls \( \text{opt}(j-1) \) and \( \text{opt}(j-2) \)
---
Weighted Interval Scheduling: Memoization
**Memoization.** Store results of each sub-problem in a cache; look up as needed.
**input:** \( n, s_1, \ldots, s_n, f_1, \ldots, f_n, v_1, \ldots, v_n \)
**sort** jobs by finish times so that \( f_1 \leq f_2 \leq \ldots \leq f_n \).
**compute** \( p(1), p(2), \ldots, p(n) \)
for j = 1 to n
    M[j] = empty
M[0] = 0

M-Compute-Opt(j) {
    if (M[j] is empty)
        M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
    return M[j]
}
Weighted Interval Scheduling: Running Time
**Claim.** Memoized version of $M$-$Compute$-$Opt(n)$ takes $O(n \log n)$ time.
- $M$-$Compute$-$Opt(n)$ fills in all entries of $M$ ONCE in constant time
- Since $M$ has $n+1$ entries, this takes $O(n)$
- But we have sorted the jobs
- So Overall running time is $O(n \log n)$.
---
Weighted Interval Scheduling: Finding a Solution
**Question.** Dynamic programming computes optimal value. What if we want the choice vector determining which intervals are chosen.
**Answer.** Do some post-processing, walking BACK through the dynamic programming table.
```plaintext
Run Dynpro-Opt(n)
Run Find-Solution(n)
Find-Solution(j) {
if (j = 0)
output nothing
else if ($v_j + M[p(j)] > M[j-1]$)
print j
Find-Solution(p(j))
else
Find-Solution(j-1)
}
```
Weighted Interval Scheduling: Bottom-Up
**Bottom-up dynamic programming**, build a table.
**input:** $n, s_1, \ldots, s_n, f_1, \ldots, f_n, v_1, \ldots, v_n$
sort jobs by finish times so that $f_1 \leq f_2 \leq \ldots \leq f_n$.
compute $p(1), p(2), \ldots, p(n)$
Dynpro-Opt {
$M[0] = 0$
for $j = 1$ to $n$
$M[j] = \max(v_j + M[p(j)], M[j-1])$
}
By going in bottom-up order, M[p(j)] and M[j-1] are already available when M[j] is computed. This takes O(n log n) for sorting and O(n) for the table fill, so O(n log n) overall.
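For reference, here is a runnable version of the bottom-up table fill together with the walk-back, following the pseudocode above; the function and variable names are ours, and the example call uses a small hypothetical instance, not one from the slides.

```
import bisect

def weighted_interval_scheduling(jobs):
    """jobs: list of (start, finish, value). Returns (optimal value, chosen jobs)."""
    jobs = sorted(jobs, key=lambda j: j[1])          # sort by finish time
    n = len(jobs)

    # p[j] = largest i < j (1-based) with finish_i <= start_j, else 0
    finishes = [f for _, f, _ in jobs]
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        s = jobs[j - 1][0]
        p[j] = bisect.bisect_right(finishes, s, 0, j - 1)

    # Bottom-up table: M[j] = best value using jobs 1..j
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        v = jobs[j - 1][2]
        M[j] = max(v + M[p[j]], M[j - 1])

    # Walk back through the table to recover the chosen jobs
    chosen, j = [], n
    while j > 0:
        v = jobs[j - 1][2]
        if v + M[p[j]] > M[j - 1]:
            chosen.append(jobs[j - 1])
            j = p[j]
        else:
            j -= 1
    return M[n], list(reversed(chosen))

# Hypothetical instance:
# weighted_interval_scheduling([(1, 4, 5), (3, 5, 1), (0, 6, 8), (4, 7, 4)])
# -> (9, [(1, 4, 5), (4, 7, 4)])
```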
---
**Do it, do it: Recursive**
<table>
<thead>
<tr>
<th>S</th>
<th>F</th>
<th>V</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>B</td>
<td>2</td>
<td>8</td>
</tr>
<tr>
<td>C</td>
<td>4</td>
<td>3</td>
</tr>
<tr>
<td>D</td>
<td>6</td>
<td>5</td>
</tr>
<tr>
<td>E</td>
<td>9</td>
<td>10</td>
</tr>
<tr>
<td>F</td>
<td>11</td>
<td>1</td>
</tr>
</tbody>
</table>
Sort in F order: $A, B, C, D, E, F$
Determine $p$ array:
<table>
<thead>
<tr>
<th>j</th>
<th>Job</th>
<th>p(j)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>A</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>B</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>E</td>
<td>2</td>
</tr>
<tr>
<td>4</td>
<td>D</td>
<td>1</td>
</tr>
<tr>
<td>5</td>
<td>C</td>
<td>0</td>
</tr>
<tr>
<td>6</td>
<td>F</td>
<td>3</td>
</tr>
</tbody>
</table>
Optimal value: take jobs 6 (F), 3 (E) and 2 (B), giving 1 + 10 + 8 = 19.
Do the recursive algorithm.
Left branch: take job j (add v_j, recurse on p(j)). Right branch: don't take it (add 0, recurse on j-1).
Going back up: add along the edges and take the max at each node.
Do it, do it: Dynamic Programming
<table>
<thead>
<tr>
<th>S</th>
<th>F</th>
<th>V</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>B</td>
<td>2</td>
<td>8</td>
</tr>
<tr>
<td>C</td>
<td>4</td>
<td>3</td>
</tr>
<tr>
<td>D</td>
<td>6</td>
<td>5</td>
</tr>
<tr>
<td>E</td>
<td>9</td>
<td>10</td>
</tr>
<tr>
<td>F</td>
<td>11</td>
<td>1</td>
</tr>
</tbody>
</table>
M[0] = 0
for j = 1 to n
M[j] = max(v_j + M[p(j)], M[j-1])
Draw Intervals
Sort in F order
Determine p array
<table>
<thead>
<tr>
<th>Sort in F order</th>
<th>Determine p array</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 A</td>
<td>1A: 0</td>
</tr>
<tr>
<td>2 B</td>
<td>2B: 0</td>
</tr>
<tr>
<td>3 E</td>
<td>3E: 2B</td>
</tr>
<tr>
<td>4 D</td>
<td>4D: 1A</td>
</tr>
<tr>
<td>5 C</td>
<td>5C: 0</td>
</tr>
<tr>
<td>6 F</td>
<td>6F: 3E</td>
</tr>
</tbody>
</table>
Create M table
Walk back to determine choices
6,F: take gets you 19, don't gets you 18, so take F
3,E: take gets you 18, don't gets you 8, so take E
2,B: take gets you 8, don't gets you 0, so take B
Computing the latest-predecessor array
Visually, it is "easy" to determine \( p(i) \), the largest index \( i < j \) such that job \( i \) is compatible with \( j \). For the example below:
\[ p[1...8] = [0, 0, 0, 1, 0, 2, 3, 5] \]
How about an algorithm? Or even as a human, try it without the visual aid (give it 5 minutes)
<table>
<thead>
<tr>
<th>Activity</th>
<th>A1</th>
<th>A2</th>
<th>A3</th>
<th>A4</th>
<th>A5</th>
<th>A6</th>
<th>A7</th>
<th>A8</th>
</tr>
</thead>
<tbody>
<tr>
<td>Start (s)</td>
<td>1</td>
<td>3</td>
<td>0</td>
<td>4</td>
<td>3</td>
<td>5</td>
<td>6</td>
<td>8</td>
</tr>
<tr>
<td>Finish (f)</td>
<td>4</td>
<td>5</td>
<td>6</td>
<td>7</td>
<td>8</td>
<td>9</td>
<td>10</td>
<td>11</td>
</tr>
<tr>
<td>( p )</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Computing the latest-predecessor array
Spoiler alert:
1. Treat all the start and finish times as "events" and sort them in increasing order (resolving ties so that finish events come before start events).
2. Keep two global variables, LFSF and IFLSF ("Latest_Finish_So_Far" and "Index_of_LFSF").
3. Process the events in order as follows:
a. If it is a finish event f_i, update LFSF and IFLSF.
b. If it is a start event s_i, set p(i) to IFLSF.
<table>
<thead>
<tr>
<th>Event</th>
<th>LFSF</th>
<th>IFLSF</th>
<th>p(x)=y</th>
</tr>
</thead>
<tbody>
<tr>
<td>s3</td>
<td>0</td>
<td>0</td>
<td>p(3)=0</td>
</tr>
<tr>
<td>s1</td>
<td>0</td>
<td>0</td>
<td>p(1)=0</td>
</tr>
<tr>
<td>s2</td>
<td>0</td>
<td>0</td>
<td>p(2)=0</td>
</tr>
<tr>
<td>s5</td>
<td>0</td>
<td>0</td>
<td>p(5)=0</td>
</tr>
<tr>
<td>f1</td>
<td>4</td>
<td>1</td>
<td></td>
</tr>
<tr>
<td>s4</td>
<td>4</td>
<td>1</td>
<td>p(4)=1</td>
</tr>
<tr>
<td>f2</td>
<td>5</td>
<td>2</td>
<td></td>
</tr>
<tr>
<td>s6</td>
<td>5</td>
<td>2</td>
<td>p(6)=2</td>
</tr>
<tr>
<td>f3</td>
<td>6</td>
<td>3</td>
<td></td>
</tr>
<tr>
<td>s7</td>
<td>6</td>
<td>3</td>
<td>p(7)=3</td>
</tr>
<tr>
<td>f4</td>
<td>7</td>
<td>4</td>
<td></td>
</tr>
<tr>
<td>f5</td>
<td>8</td>
<td>5</td>
<td></td>
</tr>
<tr>
<td>s8</td>
<td>8</td>
<td>5</td>
<td>p(8)=5</td>
</tr>
<tr>
<td>f6</td>
<td>9</td>
<td>6</td>
<td></td>
</tr>
<tr>
<td>f7</td>
<td>10</td>
<td>7</td>
<td></td>
</tr>
<tr>
<td>f8</td>
<td>11</td>
<td>8</td>
<td></td>
</tr>
</tbody>
</table>
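A Python sketch of this event sweep is given below (our naming, not from the slides); on the example activities A1..A8 it reproduces the p array p[1..8] = [0, 0, 0, 1, 0, 2, 3, 5].

```
def latest_predecessors(starts, finishes):
    """starts[i], finishes[i] for jobs 1..n (index 0 unused); returns p[1..n]."""
    n = len(starts) - 1
    events = []
    for i in range(1, n + 1):
        events.append((finishes[i], 0, i))   # finish events sort before starts on ties
        events.append((starts[i], 1, i))
    events.sort()

    p = [0] * (n + 1)
    lfsf, iflsf = float("-inf"), 0           # Latest_Finish_So_Far and its index
    for time, kind, i in events:
        if kind == 0:                        # finish event: update LFSF / IFLSF
            if time > lfsf:
                lfsf, iflsf = time, i
        else:                                # start event: record the predecessor
            p[i] = iflsf
    return p[1:]

# The example above (A1..A8):
starts   = [None, 1, 3, 0, 4, 3, 5, 6, 8]
finishes = [None, 4, 5, 6, 7, 8, 9, 10, 11]
print(latest_predecessors(starts, finishes))   # [0, 0, 0, 1, 0, 2, 3, 5]
```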
Discrete Optimization Problems
**Discrete Optimization Problem (S,f)**
- **S:**
- Set of solutions of a problem, satisfying some constraint
- **f : S → R**
- Cost function associated with feasible solutions
- **Objective:** find an optimal solution $x_{opt}$ such that
- $f(x_{opt}) \leq f(x)$ for all $x$ in $S$ (minimization)
- $f(x_{opt}) \geq f(x)$ for all $x$ in $S$ (maximization)
- Ubiquitous in many application domains
- planning and scheduling
- VLSI layout
- pattern recognition
- bio-informatics
Knapsack Problem
- Given $n$ objects and a "knapsack" of capacity $W$
- Item $i$ has a weight $w_i > 0$ and value or profit $v_i > 0$.
- Goal: fill knapsack so as to maximize total value.
What would be a Greedy solution?
- repeatedly add item with maximum $v_i / w_i$ ratio ...
Does Greedy work?
Capacity M = 7, number of objects n = 3
w = [5, 4, 3]
v = [10, 7, 5] (ordered by v_i / w_i ratio)
Greedy picks item 1 (value 10), after which neither remaining item fits (5 + 4 > 7 and 5 + 3 > 7). The optimum is items 2 and 3, with total weight 7 and value 12, so the greedy choice is not optimal.
Either / or Recursion for Knapsack Problem
Notation: $OPT(i, w) =$ optimal value of max weight subset that uses items 1, ..., $i$ with weight limit $w$.
- Case 1: item $i$ is not included:
- $OPT$ includes best of \{ 1, 2, ..., $i-1$ \} using weight limit $w$
- Case 2: item $i$ is included, if it can be included: $w_i <= w$
- new weight limit = $w - w_i$
- $OPT$ includes best of \{ 1, 2, ..., $i-1$ \} using weight limit $w - w_i$
\[
OPT(i, w) = \begin{cases}
0 & \text{if } i = 0 \\
OPT(i-1, w) & \text{if } w_i > w \\
\max \{ OPT(i-1, w), \ v_i + OPT(i-1, w - w_i) \} & \text{otherwise}
\end{cases}
\]
**Knapsack Problem: Dynamic Programming**
**Knapsack.** Fill an n+1 by W+1 array.
**Input:** n, W, weights $w_1, \ldots, w_n$, values $v_1, \ldots, v_n$
for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    for w = 0 to W
        if w_i > w
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max(M[i-1, w], v_i + M[i-1, w - w_i])
return M[n, W]
---
**Knapsack Algorithm**
<table>
<thead>
<tr>
<th>Item</th>
<th>Value</th>
<th>Weight</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>6</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>18</td>
<td>5</td>
</tr>
<tr>
<td>4</td>
<td>22</td>
<td>6</td>
</tr>
<tr>
<td>5</td>
<td>28</td>
<td>7</td>
</tr>
</tbody>
</table>
$W = 11$
### Knapsack Algorithm
**W = 11**
At 1,1 we can fit item 1 and from then on, all we have is item 1.
---
**At 2,2** we can either not take item 2 (value 1 (previous row[2])) or we can take item 2 (value 6 previous row[0]+ 6).
**At 2,3** we can either not take item 2 (value 1) or we can take item 2 and item 1 (value 7). From then on we can fit both items 1 and 2 (value 7).
Knapsack Algorithm
OPT: 40
How do we find the objects in the optimum solution?
Walk back through the table!!
Knapsack Algorithm
OPT: 40
W = 11
Walk back, step n = 5: don't take object 5 (7 + 28 < 40).
(The DP-table figure, with rows for the item prefixes {1}, {1, 2}, {1, 2, 3}, {1, 2, 3, 4}, {1, 2, 3, 4, 5}, is not reproduced here; the item list is the same as above.)
### Knapsack Algorithm
OPT: 40
n=5 Don't take object 5
n=4 Take object 4
n=3 Take object 3
and now we cannot take anymore, so choice set is {3,4}, choice vector is [0,0,1,1,0]
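A compact Python version of the table fill and this walk-back is sketched below (our code, not from the slides); on the running example it returns the value 40 and the choice vector [0, 0, 1, 1, 0].

```
def knapsack(values, weights, W):
    """0/1 knapsack. Returns (optimal value, 0/1 choice vector per item)."""
    n = len(values)
    # M[i][w] = best value using items 1..i with capacity w
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:
                M[i][w] = M[i - 1][w]
            else:
                M[i][w] = max(M[i - 1][w],
                              values[i - 1] + M[i - 1][w - weights[i - 1]])

    # Walk back through the table to recover the choice vector
    choice = [0] * n
    w = W
    for i in range(n, 0, -1):
        if weights[i - 1] <= w and \
           M[i][w] == values[i - 1] + M[i - 1][w - weights[i - 1]]:
            choice[i - 1] = 1
            w -= weights[i - 1]
    return M[n][W], choice

# The running example: values 1, 6, 18, 22, 28; weights 1, 2, 5, 6, 7; W = 11.
print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))   # (40, [0, 0, 1, 1, 0])
```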
### Knapsack Problem: Running Time
Running time. $\Theta(nW)$.
- Not polynomial in input size!
- $W$ can be exponential in $n$
- Decision version of Knapsack is NP-complete.
[Chapter 34 CLRS]
**Knapsack approximation algorithm.**
- There exists a poly-time algorithm that produces a feasible solution that has value within 0.01% of optimum.
Abstract—The key problem in the successful development of a software intensive system (SIS) is adequate conceptual interaction among designers during the early stages of development. The chance of success can be increased by using a project ontology whose creation is embedded into the processes of conceptually solving the project tasks and specifying the project solutions. The essence of conceptual design is the specification of a conceptualization. The main suggestion of this paper is to create the project ontology in the form of a specialized SIS that supports the conceptual activity of designers. For creating a project ontology of this type, an instrumental shell was developed; to build the project ontology, designers fill this shell with the adequate information. The basic reasons for evolving the content of the ontology are negative results of testing the text units used for conformity to the ontology. Such a shell (in any state of its use) includes the created ontology and its working version (working dictionary), which helps to manage the information flows, to register the life cycles of the conceptual units and to ensure the representativity of their usages.
Index terms—Project ontology, system development, software engineering, task solving.
I. INTRODUCTION
NOWADAYS one of the most challenging areas of computer applications is the development of software intensive systems, in which the collaborative work of developers and other stakeholders is carried out in corporate networks. The rate of success in this area, which the Standish Group [18] has estimated regularly for the last 16 years, is extremely low (a little more than 30%). Failures in the development of an SIS can be related to any part of the SIS definition [15]: “A software intensive system is a system where software represents a significant segment in any of the following points: system functionality, system cost, system development risk, development time.”
A very important cause of failures is semantic mistakes in the collective intellectual activity of the developers and other persons involved in the development of the SIS. A necessary condition for success is the developers' mutual understanding in collaborative actions, based on reasoning over textual information that includes the statements of tasks and the definitions of project solutions. Developers of the SIS should therefore be supplied with useful and effective techniques for preventing and correcting semantic mistakes.
At the beginning of SIS development the necessary understanding is usually absent. Adequate understanding forms only gradually, step by step, through interaction in working groups. The evolution of understanding follows the design of the SIS in the collaborative development environment (CDE), and the current state of understanding positively influences the management of the development process.
The important role of understanding (personal and mutual) in the development of SISs is well known. To exploit this phenomenon, special techniques for "interacting" with understanding have been created and are in use. One such technique is the glossary. A specialized version of the glossary is applied, for example, in the widely used Rational Unified Process (RUP) methodology and technology [14]. Note that in RUP this artifact is normatively defined, although it lacks collaborative techniques for filling it with information in real design time. The problems of dynamically extracting, defining, modeling, registering, keeping and visualizing the units of understanding while designing SISs still lack a satisfactory solution.
In this paper, for explicit work with the understanding of SIS designers, a specialized project-ontology system is proposed, created as a subsystem embedded into the SIS being developed. Moreover, it is suggested that the project ontology itself be created as an interactive system of the SIS type. Such a system, denoted below as SIS\textsuperscript{ONT}, is implemented on the basis of an ontology shell that supports the collaborative extraction and checking of ontology units from statements of project tasks and definitions of project solutions.
The implemented ontology shell is included in the instrumental system WIQA [16], which is aimed at designing complex systems of the SIS type. WIQA is based on question-answer reasoning and on models of the flow of reasoning units, from which the concept usages embedded into the project ontology are extracted.
II. RELATED WORKS
A set of typical kinds of ontologies, classified according to their level of dependence on a particular task or point of view, includes top-level ontologies, domain ontologies, task ontologies and applied ontologies. All these types are defined in [9] and [10] as techniques used in different systems.
For SISs, the most adequate type is the applied ontology, a type that usually has to be extended by means of the other ontology types. According to [11], the theory and practice of applied ontologies "will require many more experiences yet to be made".
It should be noted that the project ontology, as a subtype of applied ontologies, is essentially important for SISs. Project ontologies are aimed mainly at the design process, but after refinement they can be embedded into the implemented SISs.
The specificity of project ontologies is indicated in a number of publications. The technical report [5] concentrates on "people, process and product" and on collaborative understanding in interactions. The possibility of ontology-based project management is discussed in [1].
The use of the ontology potential in developing program systems, and the ontological problems of program products, are investigated in [4], which describes experience in developing task ontologies with particular attention to the role of different kinds of knowledge. The introduction of knowledge into task ontologies is discussed in [2]. The role of knowledge connected with problem-solving models is presented in [12].
All the publications mentioned contain many useful ideas, but the approach to the ontology as a specialized SIS\textsuperscript{ONT} – for extracting, defining and assembling concepts into the ontology in the process of designing the SIS – is not considered. An Internet search for publications combining key phrases such as "project ontology" and "software intensive system" returned no results comparable with those suggested in this paper.
Recall that the main goal of using the project ontology is to provide the necessary understanding in collaborative design, which is impossible without human-computer interaction. Therefore the theory and experience of human-computer interaction presented in [13] were taken into account in this paper.
III. SPECIFICITY OF SUGGESTED ONTOLOGY
Viewing the project ontology from the perspective of creating a specialized SIS\textsuperscript{ONT} leads to questions about its architecture, life cycle and models, which must be coordinated with the evolution of the project ontology. Below we answer these questions.
The architecture of any SIS\textsuperscript{ONT} for a definite SIS is problem-oriented; its materialization begins its life cycle from the ontology shell, whose architectural solutions are inherited and kept by the SIS\textsuperscript{ONT} without change. The principal architecture of the shell (and hence of any SIS\textsuperscript{ONT}) is presented in Fig. 1.
For every dictionary entry of the ontology there is a corresponding analog in the working dictionary. This analog is used, first of all, as a representative set of samples registering the variants of concept usage extracted from statements of project tasks and definitions of project solutions (shortly, from text units). The samples are gathered naturally in the interactions of designers who test (implicitly or explicitly, at different working places) the used concepts for conformity to the ontology.

Filling the ontology with content is a specialized project task appointed to an administrator of the ontology. The work of the administrator is managed:
- by events, each of which is generated when the result of comparing a used concept with the ontology is negative;
- in accordance with a sequence of actions that maintains the normative state of the project ontology (the current levels of adequacy and systematization).
The necessary informational material for the administrator of the ontology is supplied by the designers with the help of predicative analysis. Designers must test and confirm the authenticity of the concepts used in statements of tasks and definitions of project solutions. To achieve this they first extract the concept usages from the text units and then compare them with the ontology. The differences revealed by the comparisons (new concepts, additional parts of existing concepts, or additional questions that require answers) serve as informational material for evolving the ontology. Note that any extracted concept usage includes its expression as a simple predicate, but not only this (the full expression is presented below).
The used concepts are the main part of the project ontology, which should be expanded by systematizations and axiomatic relations. Techniques of systematization are embedded into the ontology component, while axiomatic relations are created with the help of the logic processor.
The logic processor is intended to build the axiomatic relations as formulas of predicate logic. This work is carried out within the appropriate article (entry) of the working dictionary, where the necessary simple predicates are accumulated. Ontology axioms express materialized units of the SIS, first of all those that correspond to UML diagrams. Every built axiom is registered in the corresponding entry (article) of the ontology.
The main architectural view presents the project ontology in terms of its components and informational content, which defines the dynamics of the life cycle of the SIS\textsuperscript{ONT}. In a typical case this life cycle is realized in the real-time work of several dozen designers who have solved, and are solving, several thousand tasks. The models used in the ontology life cycle are presented below.
IV. LINGUISTIC PROCESSOR
The life cycle of the SIS\textsuperscript{ONT} is embedded into the life cycle of the SIS being designed, from which all the text units named above are introduced into the linguistic processor. Another possibility is to apply a term-extraction technique, for example the one described in [8].
To test a text unit, it is transformed into a set of simple sentences; in this transformation a pseudo-physics model of the compound sentence (or of a complex sentence of another type) is applied. In the pseudo-physics model of the sentence all used words are interpreted as objects that take part in a "force interaction", which is visualized on the monitor screen. The formal expressions of the pseudo-physics laws are similar to the corresponding laws of classical physics.
In accordance with the acting forces (the forces of "gravitation" $F_g$, "electricity" $F_q$, "elasticity" $F_e$ and "friction" $F_f$) and the attributes assigned to the word-objects, the objects move and are grouped in definite places of the interaction area. A possible picture of the force interaction for one word of the investigated sentence is shown in Fig. 2.
In the stable state (Fig. 3), after the dynamic process on the screen has finished, each group of word-objects presents one extracted simple sentence.
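The paper does not give the formal expressions of the pseudo-physics laws, so the following Python sketch is only an illustration of the general idea under assumed force laws and invented attribute values: word-objects attract each other along one axis, friction damps the motion, and the groups left after the motion settles are read off as candidate simple sentences (the "elasticity" force $F_e$ is omitted for brevity).

```python
import random
from dataclasses import dataclass

@dataclass
class WordObject:
    text: str
    mass: float      # hypothetical value derived from the part of speech
    charge: float    # hypothetical value derived from the part of speech
    x: float = 0.0
    vx: float = 0.0

def simulate(words, steps=2000, dt=0.01, friction=0.8):
    """Move the word-objects along one axis under pseudo-physics forces until
    they settle into groups; each group stands for one simple sentence."""
    for w in words:
        w.x = random.uniform(0.0, 10.0)
    for _ in range(steps):
        for w in words:
            force = 0.0
            for other in words:
                if other is w:
                    continue
                d = other.x - w.x
                r = abs(d) + 1e-3
                direction = d / r
                force += direction * (w.mass * other.mass) / r**2      # "gravitation"
                force += direction * (w.charge * other.charge) / r**2  # "electricity"
            force -= friction * w.vx                                   # "friction"
            w.vx += dt * force / w.mass
        for w in words:
            w.x += dt * w.vx
    return words

def group(words, gap=1.0):
    """Words whose final positions lie close together form one simple sentence."""
    ordered = sorted(words, key=lambda w: w.x)
    groups, current = [], [ordered[0]]
    for prev, w in zip(ordered, ordered[1:]):
        if w.x - prev.x > gap:
            groups.append(current)
            current = []
        current.append(w)
    groups.append(current)
    return [" ".join(w.text for w in g) for g in groups]
```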
The screenshot in Fig. 3, like the other screenshots in this paper, is labelled so as to give a generalized demonstration of the visual forms and objects with which the designers work. The language of these screenshots is Russian.
Note that in the assignment of the attributes (the values of $m_i$, $q_i$ and other values and parameters) two mechanisms are applied – automatic morphological analysis and automated tuning of the object parameters. Values are assigned in accordance with the part of speech; the suitable normative values were chosen experimentally. For a description of the morphological analysis see [6], [7].
After the extraction of simple sentences the designer begins their semantic analysis, aimed at testing the correctness of each simple sentence (SS\textsubscript{i}). In this work the designer uses the model of SS\textsubscript{i} and its relations with its surroundings, as presented in Fig. 4. The figure shows the type of SS\textsubscript{i} used for registering the assignment of a property to an object; the other type of model, registering a relation between two objects, has a similar scheme.
The scheme of relations was used for defining and implementing the techniques of their semantic testing. First of all, an expression of the semantics of SS\textsubscript{i} was chosen. The structure of the semantic value, as a set of semantic components (S\textsubscript{0} $\cup \Delta S_i$), is presented in general form in Fig. 4, where the component $S_0$ indicates the conformity of the sentence SS\textsubscript{i} to reality.
Defining and testing any other semantic component $\Delta S_i$ helps to make the semantic value of SS\textsubscript{i} more precise, which can be useful for the design of the SIS. Additionally, the work with any semantic component increases confidence in the correctness of the testable simple sentence (and of the embedded simple predicate) and can lead to useful questions. In the work with the additional semantic components, conditional access to appropriate precedents is used.
The elements of the typical set of semantic components are estimated, applied and tested in a definite sequence. The work begins with the component $S_0$, which is compared with elements of the ontology. The result of the comparison can be positive or can lead to questions, which should be registered.
A positive result does not exclude subsequent work with the additional semantic components.
The semantics of subjectivity and understanding (part $\Delta S_1$) are estimated and tested in relation to the designers. A fact of non-understanding leads to questioning or even to interruption of the work with the testable sentence.
Actual or future material existence of the sentence semantics is a reason for testing the semantic relation of the SS to the design process (part $\Delta S_i$). This type of relation is used in the ontology for its systematization.
The greater part of the semantic relations of the modality type (parts $\Delta S_{i+1} \ldots \Delta S_j$) is aimed at defining and testing uncertainties of the measurable, probabilistic and/or fuzzy types. The semantic relations with normative values (parts $\Delta S_{j+1} \ldots \Delta S_M$) indicate the potential inclusion of the SS, or its parts, into useful informational sources, for example into the ontology.
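Only the overall structure of the semantic value is fixed by the text (the component $S_0$ plus the additional $\Delta S$ components, tested in a definite order); the following Python sketch of that structure is purely illustrative, and the component names, result values and helper functions in it are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class TestResult(Enum):
    POSITIVE = auto()
    QUESTION = auto()     # the check raised questions that must be registered
    NEGATIVE = auto()     # e.g. the sentence was not understood

@dataclass
class SimpleSentence:
    text: str
    predicate: tuple                               # e.g. ("has_velocity", "vessel", "V1")
    s0: Optional[TestResult] = None                # conformity to the ontology / reality
    extra: dict = field(default_factory=dict)      # results for the additional components
    questions: list = field(default_factory=list)

def test_sentence(ss, ontology_check, component_checks):
    # S0 is tested first, against the ontology.
    ss.s0 = ontology_check(ss.predicate)
    if ss.s0 is TestResult.QUESTION:
        ss.questions.append("S0: the usage does not conform to the ontology")
    # The additional components are then tested in a fixed order,
    # e.g. "subjectivity", "relation to design", "modality", "normative value".
    for name, check in component_checks.items():
        result = check(ss)
        ss.extra[name] = result
        if name == "subjectivity" and result is TestResult.NEGATIVE:
            break                                  # non-understanding interrupts the work
    return ss
```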
V. SOURCES OF TEXT UNITS
As shown in Fig. 1, the primary information for filling the project ontology is extracted by the designers, in real time, from the life cycle of the SIS being designed.
For the designers' interaction with the life cycle of the SIS, the specialized instrumental system WIQA (Working In Questions and Answers) was created. The main interface of WIQA is presented in Fig. 5 (with commentary labels).
WIQA is intended for registering the current state of the design in the form of a dynamic set of project tasks combined into an interactive task tree. Each task of this tree is defined with the help of the question-answer protocol of its solution. Any QA-protocol opens access to the question-answer model (QA-model) of the corresponding task.
The screenshot shows that for the chosen task $Z_i$ from the task tree, its QA-model is opened through the QA-protocol of the registered question-answer reasoning (QA-reasoning). Note that any unit of reasoning (a question $Q_j$ or an answer $A_j$) has a textual expression, possibly with the necessary pictures (for example, UML diagrams and/or block-and-line schemes). Any task, with its statement, and any unit of QA-reasoning has a unique name $Z.I$, $Q.J$ or $A.J$, where $I$ or $J$ is a compound index expressing the subordination of the corresponding unit. So any text unit is visualized and has a unique index that can be used as its address.
More specifically, any unit of the $Z$, $Q$ or $A$ type is an interactive object whose properties are opened when special plug-ins are used. One of these plug-ins registers and indicates responsibility (the assignment of tasks) within the designer group.
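The paper fixes only the $Z.I$ / $Q.J$ / $A.J$ naming pattern, so the following Python sketch merely illustrates how a compound index can serve as the address of a task or reasoning unit in such a tree; the class and method names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class QAUnit:
    kind: str                    # "Z" (task), "Q" (question) or "A" (answer)
    index: tuple                 # compound index, e.g. (1, 2, 3) -> "Z.1.2.3"
    text: str
    children: list = field(default_factory=list)

    @property
    def name(self) -> str:
        return f"{self.kind}.{'.'.join(map(str, self.index))}"

    def add(self, kind: str, text: str) -> "QAUnit":
        child = QAUnit(kind, self.index + (len(self.children) + 1,), text)
        self.children.append(child)
        return child

    def find(self, name: str):
        """Resolve a unit by its unique name, i.e. use the name as an address."""
        if self.name == name:
            return self
        for c in self.children:
            hit = c.find(name)
            if hit:
                return hit
        return None

# usage sketch with invented content
root = QAUnit("Z", (1,), "Design the collision-avoidance knowledge base")
q = root.add("Q", "Which COLREGS rules must be formalized?")
q.add("A", "Rules 11-18 for vessels in sight of one another")
print(root.find("A.1.1.1").text)
```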
WIQA is built on the basis of the QA-model and uses the following architectural styles: repository, MVC, client-server and interpreter. So, for the current state of the design of a definite SIS, WIQA can open to the designers the statement of any task from the task tree and the definition of any project solution, accessible as the definite answer in the corresponding QA-protocol.
Note that the usage of WIQA as the source of text units is a solution proposed by the author, but the suggested ideas can also be used to create the project ontology with other instrumental systems that can supply the designers with statements of project tasks and definitions of project solutions.
VI. WORKING DICTIONARY
The role of the working dictionary in creating the project ontology is very important. This component, as the preliminary version of the ontology, accumulates all necessary information and distributes informational units between dictionary articles. While transporting information, the working dictionary also relates the text units to their sources; the index name of the unit, the number of its sentence and the number of the corresponding simple sentence are used for such referencing.
After a simple sentence has been extracted with the help of the linguistic processor, the predicate model of this sentence is included in the virtual article of the working dictionary (the article with zero index). The zero article is a temporary store in the working dictionary that keeps predicates until their testing for conformity to the ontology is finished. The zero article, whose interface is presented in Fig. 6, can be interpreted as a queue of predicates awaiting service.
After extracting a simple sentence and transforming it into a simple predicate, the designer has to start the test of the predicate (as a definite usage of a definite concept). The test usually begins without knowing the "normative usage of the concept" for this predicate in the ontology; moreover, such a usage may be absent from the ontology, or the result of the comparison with the appropriate concept may be negative. That is why every tested sentence and the corresponding predicate start their life cycles in the working dictionary from the zero article.
The "normative usage of the concept" for any tested predicate is located in the corresponding ontology article. If the result of the comparison is negative but the designer is convinced that the predicate is true, then either a new ontology article has to be created or a new variant of the concept usage has to be added to the existing ontology article. The first of these outcomes also requires creating a new article in the working dictionary and transporting the tested predicate from the virtual article into this new article (Fig. 7).
The type of the new article in the working dictionary is chosen by the designers in accordance with the type of the ontological unit of the SIS being designed that the transported predicate will be used to represent.
Processing the second outcome also includes transporting the tested predicate, but into an existing article (Fig. 7) of the working dictionary. In the general case such a predicate is transported into several articles of the working dictionary, each of which materializes the tested predicate in a definite form.
If the test of the predicate for conformity to the ontology is positive, the predicate should still be transported into the article of the working dictionary for the corresponding concept, in order to improve the representativeness of that concept. So, step by step, predicates (and their parent sentences) are accumulated in the corresponding articles of the working dictionary.
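The routing of tested predicates between the zero article, the ontology and the articles of the working dictionary can be summarized in a small Python sketch; the representation of the ontology as a mapping from concept names to sets of normative word sets is an assumption made only for illustration.

```python
from collections import defaultdict, deque

class WorkingDictionary:
    """A minimal sketch of the zero-article queue and of the routing of tested
    predicates into dictionary articles."""

    def __init__(self, ontology):
        self.ontology = ontology                  # {concept: set of frozensets of words}
        self.zero_article = deque()               # predicates waiting to be tested
        self.articles = defaultdict(list)         # concept -> accumulated predicates

    def enqueue(self, predicate):
        # predicate: (concept, frozenset_of_characteristic_words, source_reference)
        self.zero_article.append(predicate)

    def process_next(self, designer_confirms=True):
        concept, words, source = self.zero_article.popleft()
        usages = self.ontology.get(concept)
        if usages and any(words <= usage for usage in usages):
            self.articles[concept].append((words, source))       # positive result
            return "conforms"
        if designer_confirms:                                     # predicate judged true
            self.ontology.setdefault(concept, set()).add(frozenset(words))
            self.articles[concept].append((words, source))        # new article or new variant
            return "ontology extended"
        return "question registered"                              # left to the administrator
```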
There is a set of types of materialized SIS units that are reflected in the project ontology. The set includes concepts about the "parts" of reality embedded in the SIS and materialized in its software (in the form of variables, classes, functions, procedures, modules, components and program constructs of other types), as well as axioms that combine concepts. Each such unit first appears as a textual expression in statements of project tasks or in definitions of project solutions, but when it is included in an ontology article it is usually rewritten, redefined and reformulated. All informational material for this work is accumulated in the corresponding article of the working dictionary; after the adequate textual expressions and formulas have been created, they are rewritten from the working dictionary into the corresponding articles of the project ontology.
VII. LOGIC PROCESSOR
The logic processor is intended to build a formal description of a text unit from the simple predicates accumulated in the relevant article of the working dictionary. This work is carried out by the designer in the operational space presented in Fig. 8, where the designer assembles simple predicates into a formula while watching them in the graphical window. The necessary predicates are chosen by the designer from the processed article of the working dictionary.
To assemble the predicates, the designer can use the patterns of two bound predicates, set the typical relations between predicates by editing the "picture" (using drag and drop and lexical information), and register the final result as a formula of first-order predicate logic.
The patterns for two bound predicates were extracted by the author from the grammars of Russian (46 patterns) and English (32 patterns). These patterns are formalized as typical formulas of predicate logic.
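The patterns themselves are not listed in the paper; purely as an illustration of what a "typical formula for two bound predicates" may look like, a pattern that binds two predicates through a shared object, and a pattern based on implication, could be written as
\[
\exists x \, \big( P_1(x, y) \land P_2(x, z) \big) \qquad \text{and} \qquad \forall x \, \big( P_1(x) \rightarrow P_2(x) \big),
\]
where $P_1$ and $P_2$ stand for the two simple predicates, and the choice of quantifier and connective is dictated by the grammatical construction that binds them.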
The mechanisms for assembling the formulas were developed experimentally as a complex of instrumental procedures that provides (for statements of tasks) the creation of Prolog-like descriptions. The transformation of the formalized statement of a task into a Prolog-like description is implemented as an automated translation of the formula registered in the appropriate article of the working dictionary. At present the translation method exists in a preliminary version, which will be refined by the author.
VIII. SYSTEMATIZATION OF ONTOLOGY
The most important feature of any ontology, and of the project ontology in particular, is its systematization. In the suggested case the project ontology is defined initially as a software intensive system whose integrity is provided by a system of architectural views; some of these views are reflected implicitly in the screenshots used in this paper. But this version of systematization is only one possibility.
Let us present another way of systematization. First of all there is the classification of concepts in accordance with the structures of the SIS and the process of its design. Such system features of the ontology are formed implicitly through the definitions of concepts and the corresponding axioms.
The next classification level of the ontology concerns classifying the variants of concept usage. In this case, an article in the project ontology is formed for each concept; it includes the ordered group of concept usage variants and the textual definition of the concept.
The group of usage variants is a list of sub-lists, each of which includes a main word (or phrase) as the name of the concept ($C_i$) and subordinated words (or phrases) as the names of characteristics ($w_{i1}, w_{i2}, \ldots, w_{iN}$) of this concept. A definite sub-list $w'_{i1}, w'_{i2}, \ldots, w'_{iN}, C_i$ is an example of the "normative usage of the concept", which can be used in testing an investigated predicate for conformity to the ontology.
The basic operation of testing is a comparison of the normative (ontological) sub-list of words with the words extracted from the investigated predicate. Two such sub-lists of words can be extracted from a simple predicate when it expresses a feature, and three sub-lists when the predicate registers a relation.
After testing the chosen sub-list of words, which expresses a definite variant of the concept usage, the following results of the comparison are possible (a small sketch of the comparison is given after this list):
- a positive result, when the chosen sub-list ($w'_{i1}, w'_{i2}, \ldots, w'_{iN}, C_i$) is included in the normative sub-list;
- an interrogative result, when the chosen sub-list only crosses the normative sub-list, or the tested sub-list lies outside all norms (the role of questions was explained above).
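A minimal Python sketch of this comparison could look as follows; the word lists in the usage example are invented, and the string labels for the outcomes are an assumption.

```python
def compare_usage(tested, normative_usages):
    """Compare the words of one tested concept usage against the normative
    sub-lists stored in the ontology article of the same concept.
    `tested` and every normative usage are sets of words."""
    tested = set(tested)
    if any(tested <= set(u) for u in normative_usages):
        return "positive"                          # fully covered by one normative usage
    if any(tested & set(u) for u in normative_usages):
        return "interrogative (partial overlap)"
    return "interrogative (outside all norms)"

# usage sketch with invented word lists
article_for_vessel = [{"vessel", "velocity", "course"},
                      {"vessel", "bearing"}]
print(compare_usage({"vessel", "velocity"}, article_for_vessel))   # positive
print(compare_usage({"vessel", "draft"}, article_for_vessel))      # interrogative
```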
The next direction of systematization is related to binding concepts together. For uniting the ontology concepts into a system, the following relations are used: basic relations (part-whole, inheritance, type of materialization), associative relations (similarity, sequence, common time and common space) and causality relations.
This view onto the ontology (onto the system of concepts) is formed by the administrator of the ontology at the screen shown in Fig. 9. Any unit of any such form is open for interactive actions of the designer.
To use the concept relations the designer chooses the necessary concept by its name in the "keys of entry" area and can then switch among groups (nodes of the relations system) up to the necessary relation. For the group of relations presented in Fig. 9 the designer may navigate in the directions "part of", "whole for", "has attribute", "attribute of", "descendant of", "parent for", "has type" and "materialized as". Similar navigation schemes are used for the other classes of relations.
In any state of the navigation the description of any visualized unit can be opened. Note that all forms of the ontology systematization are inherited by the working dictionary, which opens the possibility of useful switching between its articles.
IX. COLLISION AVOIDANCE OF SEA VESSELS
The proposed version of the project ontology was created and used in the development of the "Expert system for the collision avoidance of sea vessels", which is implemented using the WIQA capabilities [17].
One of the important components of this expert system is a knowledge base that includes the normative rules for vessel movement. Each unit of these rules was formalized as a precedent with a conditional part and a behavioral part. The precedents were extracted from the textual descriptions of the normative rules and then formalized and coded in the expert system with the WIQA capabilities.
At the first stage of the expert system development, about 150 textual expressions describing precedents were extracted from 37 rules of the International Regulations for Preventing Collisions at Sea, 1972 (COLREGS-72), presented in [3].
Each textual expression was processed with the techniques described above. As a result, about 300 concepts with their variants of usage and about 500 precedents were extracted from the textual information. One way of accessing the extracted concepts is presented in Fig. 9. Each typical usage of any concept was embedded into the project ontology together with its declaration in C#. After the development of the expert system the project ontology was refined and included into the created system as its ontology.

As stated above, all necessary and useful axioms are also included in the project ontology. The formal expression of any precedent is an axiom binding a definite group of variables that indicate definite concepts.
Each precedent in the project ontology has five forms of expression: the textual expression, the predicate formula, the question-answer form, the source code in C# and the executable code. The chosen set of precedent materializations is suitable not only for automated access by the sailor on duty but also for automatic access by program agents modeling the vessels in the current situation at sea.
One of these precedents, which corresponds to the fifteenth rule of COLREGS-72, has the following predicate expression:
\[
\begin{aligned}
\text{if Condition} = {} & (\text{Velocity } V_1,\ \text{"keep out of the way"}) \\
& \land\ (|\text{Bear}_1 - \text{Bear}_2| > 11.5^{\circ}) \\
& \land\ (\text{CPA} - \text{DDA} - \Delta D \leq 0) \\
\text{then Reaction} = {} & \text{Maneuver } M_i.
\end{aligned}
\]
In this precedent, CPA is the closest point of approach, DDA is the normative distance between vessels and $\Delta D$ is the error of the distance measurement. The precedent is included in its article as a demonstration, without fully explaining the variables and expressions. The expression of this precedent (as an axiom) is included into the ontology of the expert system for the collision avoidance of sea vessels.
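Read as executable logic, the conditional part of this precedent could be checked as in the following Python sketch; the threshold, the sign conventions and the returned manoeuvre label follow the reconstructed formula above and are assumptions of this sketch.

```python
def rule15_precedent(velocity_kn, duty, bear1_deg, bear2_deg, cpa_nm, dda_nm, delta_d_nm):
    """Evaluate the conditional part of the precedent and return the reaction.
    velocity_kn is carried only to mirror the first term of the condition."""
    give_way = duty == "keep out of the way"
    crossing = abs(bear1_deg - bear2_deg) > 11.5          # bearings differ enough
    too_close = cpa_nm - dda_nm - delta_d_nm <= 0         # CPA below the normative distance
    if give_way and crossing and too_close:
        return "Maneuver M_i"                             # behavioral part of the precedent
    return "stand on"

# usage sketch with invented values
print(rule15_precedent(12.0, "keep out of the way", 45.0, 80.0, 0.3, 1.0, 0.1))
```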
Note that the set of articles of the project ontology created during the development of the expert system includes not only units for the named variables and precedents. The total number of project ontology articles (still being refined) was about two thousand.
X. CONCLUSION
This paper has presented a system of techniques for the creation and usage of the project ontology in the development of an SIS, when an enormous number of project tasks is being solved by a team of designers in a corporate instrumental network. The success of such activity essentially depends on the mutual understanding of the designers in their step-by-step conceptualization while solving project tasks and making project decisions. Therefore the project ontology should be created as a dynamic subsystem included into the life cycle of the SIS being created.
The main suggestion of the paper is the creation of the project ontology as a problem-oriented SIS\textsuperscript{ONT} intended to support the evolution of the understanding and mutual understanding of the designers in their step-by-step conceptual activity.
Another important specificity of the suggested techniques is the usage of the working dictionary as the preliminary version of the ontology, which helps to manage the informational flows and to register the life cycles of the informational units and their representativeness.
Special attention is given to the basic informational units, whose roles are fulfilled by simple sentences and the simple predicates extracted from them. For working with the basic informational units, the linguistic processor serves the testing of the statements of project tasks and the specifications of project solutions (including requirements and restrictions) for conformity to the ontology and to reality. The questions that arise are used for evolving the project ontology.
The logic processor helps to build ontology axioms as predicate formulas. Experimental research shows that this processor can be (and will be) evolved towards the automated creation of Prolog-like descriptions of project tasks.
All interfaces of the suggested techniques are adjusted to Russian, but only the morphological analyzer and the library of patterns for two bound predicates depend on the specific natural language. A library of patterns for English has also been created.
Various useful techniques of systematization are embedded into the project ontology for the real-time work of the designers. These techniques are accessible both in the ontology component and in the working dictionary.
As the source of primary information for the creation of the ontology, the specialized instrumental system WIQA is used, which supports the usage of question-answer reasoning in the work with project tasks and project solutions. Nevertheless, the suggested and developed techniques can be adjusted to other sources that supply the created ontology with primary information.
REFERENCES
1 INTRODUCTION
A database Web service consists of a Web service interface with operations that provide access to a backend database. When a client sends a query to a database Web service, the backend engine submits the query to the backend database, collects the results, and delivers them to the client. The export schema describes the subset of the backend database schema that the database Web service makes visible to the clients (Sheth & Larson, 1990). Typically, a Web service announces its interfaces, including the export schema, using the Web Services Description Language (WSDL), a W3C standard.
A mediator is a software component that facilitates access to a set of data sources (Wiederhold, 1992). A mediator offers a mediated or global schema that represents an integrated view of the export schemas of the data sources.
In this paper, we focus on the design of a mediator that constructs the mediated schema adaptively from evidence elicited from user query responses. This precludes the a priori definition of the mediated schema and of the mappings between the export schemas and the mediated schema. The mediator assumes that the data sources are encapsulated by Web services, that is, that they are database Web services. This assumption avoids the burden of interpreting HTML pages containing query results. Indeed, the mediator communicates with the database Web services by exchanging SOAP messages conforming to their WSDL descriptions.
The schema matching approach proposed in this paper is based on the following assumptions:
1. The database Web services accept keyword queries, that is, lists of terms as in a Web search engine.
2. The mediator accepts only keyword queries.
3. The database Web services return query results as flat XML documents. That is, each export schema consists of a single flat relational table (encoded as an XML Schema data type in the WSDL document of the service).
4. Attributes with similar domains are semantically equivalent.
This last assumption is rather strong, but the case studies described in the paper indicate that it is warranted in a number of application domains.
The remainder of this paper is organized as follows. Section 2 summarizes related works. Section 3 describes the instance-based schema matching approach. Section 4 presents the mediator architecture. Section 5 contains two case studies that illustrate the approach. Finally, section 6 contains the conclusions and directions for future work.
2 RELATED WORK
Schema matching is a fundamental issue in many database application domains, such as query mediation and Web-oriented data integration (Casanova et al., 2007; Rahm & Bernstein, 2001). By query mediation, we mean the problem of designing a mediator, a software service that is able to translate user queries, formulated in terms of a mediated schema, into queries that can be handled by local databases. The mediator must therefore match each export schema with the mediated schema. The problem of query mediation becomes a challenge in the context of the Web, where the number of local databases may be enormous and, moreover, the mediator does not have much control over the local databases, which may join or leave the mediated environment at will.
In general, the match operation takes two schemas as input and produces a mapping between elements of the two schemas that correspond to each other. Many techniques for schema and ontology matching have been proposed to automate the match operation. For a survey of several schema matching approaches, we refer the reader to (Rahm & Bernstein, 2001).
Schema matching approaches may be classified as syntactic vs. semantic and, orthogonally, as a priori vs. a posteriori (Casanova et al., 2007). The syntactic approach consists of matching two schemas based on syntactical hints, such as attribute data types and naming similarities. The semantic approach uses semantic clues to generate hypotheses about schema matching. It generally tries to detect how the real world objects are represented in different databases and leverages the information obtained to match the schemas. Both the syntactic and the semantic approaches work a posteriori, in the sense that they start with pre-existing databases and try to match their schemas. The a priori approach emphasizes that, whenever specifying databases that will interact with each other, the designer should start by selecting an appropriate standard (a common schema), if one exists, to guide the design of the export schemas.
An implementation of a mediator for heterogeneous gazetteers is presented in (Gazola et al., 2007). Gazetteers are catalogues of geographic objects, typically classified using terms taken from a thesaurus. Mediated access to several gazetteers requires a technique for dealing with the heterogeneity of the different thesauri. The mediator incorporates an instance-based technique to align thesauri that uses the results of user queries as evidence (Brauner et al., 2006).
An instance-based schema matching technique, based on domain-specific query probing and applied to Web databases, is proposed in (Wang et al., 2004). A Web database is a backend database available on the Web and accessible through a Web site query interface; the interface exports query results as HTML pages. A Web database has two different schemas, the interface schema (IS) and the result schema (RS). The interface schema of an individual Web database consists of the data attributes over which users can query, while the result schema consists of the data attributes that describe the query results that users receive.
The instance-based schema matching technique described in (Wang et al., 2004) is based on three observations about Web databases:
1. Improper queries often cause search failure, that is, they return no results. For the authors, improperness means that the query keywords submitted to a particular interface schema element are not applicable values of the database attribute with which the element is associated. For instance, submitting a string to a search element that is defined as an integer produces an error; consider, as an example, submitting a title value to the search element pages number.
2. The keywords of proper queries that return results very likely reappear in the returned result pages.
3. There is a global schema (GS) for Web databases of the same domain (He & Chang, 2003). The global schema consists of the representative attributes of the data objects in a specific domain.
The keyword probing technique consists of exhaustively sending keyword queries to the query interface of different Web databases, and collecting their results for further analysis. Based on the third observation, they assume, for a specific domain, the existence of a pre-defined global schema and a number of sample data objects under the global schema, called global instances. For Web databases, they deal with two kinds of schema matching: intra-site schema matching (that is, matching global with interface schemas, global with result schemas, and interface with result schemas) and inter-site schema matching (that is, matching two interface schemas or two result schemas).
The data analysis is based on the second observation. Given a proper query, the results will probably contain reoccurrences of the submitted values (referring to the values of the attributes of the global instances). The results are collected in the HTML sent to the Web browser. Thus, the reoccurrence of the query keywords in the returned results can be used as an indicator of which query submission is appropriate (i.e., to discover associated elements in the interface schema). In addition, the position of the submitted query keywords in the result pages can be used to identify the associated attributes in the result schema.
Note that, differently from (Wang et al., 2004), we work with Web services that encapsulate databases. In particular, we assume that the service interface is specified by a WSDL document that describes the input attributes (interface schema) and the output attributes (export schema). This means that both the query definitions and query answers are encoded as SOAP messages. Therefore, we avoid the interpretation of query results encoded in HTML, which introduces a complication that distracts from the central problem of mediating access to databases.
Also, differently from (Brauner et al., 2007) and (Wang et al., 2004), we avoid the use of a global schema and a set of global instances. Defining a global schema and collecting a good set of global instances are hard tasks. Instead, the technique presented here uses an instance-based approach to align the export schemas, using the results of user queries as evidence for the mappings, as in (Brauner et al., 2006).
3 THE SCHEMA MATCHING APPROACH
The schema matching approach proposed in this paper is based on the matching process illustrated in Figure 1. The matching process is the basis of a mediator that facilitates access to a collection of database Web services, assumed to cover the same application domain. In section 4, we discuss the complete mediator architecture.
The matching process starts with a client query submitted to the mediator and with the WSDL descriptions of the database Web services accessed through the mediator engine (step 1 in Figure 1). Note that, through the WSDL documents, the mediator obtains the necessary information to encode the queries to be sent to the database Web services, as well as to interpret the results sent back.
The current schema matching approach works under the assumptions already listed in the introduction: the database Web services and the mediator accept only keyword queries, the services return query results as flat XML documents (each export schema is a single flat relational table, encoded as an XML Schema data type in the WSDL document of the service), and attributes with similar domains are taken to be semantically equivalent.
When a client submits a query, the mediator forwards it to the registered database Web services through the Query Manager Module (QMM) (step 2). The result sets are delivered to the user and, simultaneously, cached in a local cache database (step 3). Then the mediator analyzes the cached instances by counting the instance values that reoccur in both result sets (step 4). This task is performed by the Mapping Rate Estimator Module (MREM).
The MREM analyzes the result sets by a probing technique, which consists of counting the reoccurrences of instance values from one set in the other.
After this analysis, the MREM generates the occurrence matrix (step 5). To enable the mediator to accumulate evidences, the MREM stores the occurrence matrix in the Mappings local database (step 6). If there is a previously stored occurrence matrix, the MREM must sum the existing matrix with the new matrix, generating an accumulated occurrence matrix. If new attributes are matched, the MREM must add rows and columns to the accumulated occurrence matrix. Finally, given an accumulated occurrence matrix, the MREM also generates the Estimated Mutual Information (EMI) matrix (step 7). The generation of the EMI is explained in detail at the end of this section.
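As a small illustration of steps 5–7, the following Python sketch shows how a newly computed occurrence matrix could be summed into the accumulated one, with previously unseen attributes adding new rows and columns; the dict-based representation and the example counts are assumptions of this sketch, not the mediator's actual code.

```python
def accumulate(accumulated, new_counts):
    """Sum a newly computed occurrence matrix into the accumulated one.
    Both matrices are kept as dicts keyed by (attribute_of_S_A, attribute_of_S_B),
    so previously unseen attributes simply add new keys (new rows/columns)."""
    for pair, count in new_counts.items():
        accumulated[pair] = accumulated.get(pair, 0) + count
    return accumulated

# usage sketch with invented counts
acc = {("title", "name"): 118, ("isbn", "isbn"): 40}
acc = accumulate(acc, {("title", "name"): 2, ("ean", "isbn"): 57})   # "ean" adds a new row
```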
For instance, suppose that the mediator provides access to databases $D_A$ and $D_B$ with export schemas $S_A$ and $S_B$, respectively. Suppose that the mediator receives a keyword query $Q$ from the client application, which it resends to the database Web services. Let $R_A$ and $R_B$ be the results received from $D_A$ and $D_B$, respectively. These results are analyzed to detect if there are XML elements $a_v$ in $R_A$ and $b_v$ in $R_B$ that have the same value $v$. If this is indeed the case, then $v$ establishes some evidence that $a_v$ and $b_v$ map to each other. This analysis generates an occurrence matrix $M$ between attributes from the export schemas of both databases.
As in (Wang et al., 2004), we assume that the attributes of $S_A$ and $S_B$ induce a partition of the result sets $R_A$ and $R_B$ returned by the database Web services. Suppose the attributes of $S_A$ partition $R_A$ into sets $A_1, A_2, \ldots, A_m$ and the attributes of $S_B$ partition $R_B$ into sets $B_1, B_2, \ldots, B_n$. The element $M_{ij}$ of the occurrence matrix $M$ indicates the content overlap between partitions $A_i$ and $B_j$ with respect to the reoccurrences of values in the two partitions. The schema matching problem now becomes that of finding pairs of partitions from different schemas with the best match. In what follows, we formalize what we mean by best match.
To address this point, we introduce the concept of mutual information (Wang et al., 2004), which interprets the overlap between two partitions $X$ and $Y$ of a random event set as the “information about $X$ contained in $Y$” or the “information about $Y$ contained in $X$” (Papoulis, 1984). In other words, the concept of mutual information aims at detecting the attributes that have similar domains, i.e., similar domain value sets.
Given an occurrence matrix $M$ for two export schemas $S_A$ and $S_B$, the estimated mutual information (EMI) between the $r^{th}$ attribute of $S_A$ (say $a_r$) and the $s^{th}$ attribute of $S_B$ (say $b_s$) is:
$$EMI(a_r, b_s) = \frac{m_{rs}}{\sum_{i,j} m_{ij}} \log \frac{m_{rs} \sum_{i,j} m_{ij}}{\left(\sum_{j} m_{rj}\right)\left(\sum_{i} m_{is}\right)}$$
Note that, if $m_{rs}$ is equal to 0, EMI is assumed to be 0 as well.
So, given an EMI matrix, the MREM derives the mappings between the export schema elements (step 8 in Figure 1). Given an EMI matrix $[e_{ij}]$, the $r^{th}$ attribute of $S_A$ matches the $s^{th}$ attribute of $S_B$ if $e_{rs} \geq e_{rj}$ for all $j \in [1,n]$ with $j \neq s$, and $e_{rs} \geq e_{is}$ for all $i \in [1,m]$ with $i \neq r$. Note that, by this definition, there might be no best match for an attribute of $S_A$.
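Under the formula and the matching rule just given, the EMI computation and the derivation of best matches can be sketched in a few lines of Python; the occurrence counts in the toy example are invented, and the positivity guard on $e_{rs}$ is an added assumption used to skip empty cells.

```python
import math

def emi_matrix(occurrence):
    """Compute the estimated mutual information matrix from an occurrence
    matrix given as a list of lists, following the formula above."""
    total = sum(sum(row) for row in occurrence)
    row_sums = [sum(row) for row in occurrence]
    col_sums = [sum(col) for col in zip(*occurrence)]
    emi = [[0.0] * len(col_sums) for _ in row_sums]
    for r, row in enumerate(occurrence):
        for s, m_rs in enumerate(row):
            if m_rs > 0:
                emi[r][s] = (m_rs / total) * math.log(
                    m_rs * total / (row_sums[r] * col_sums[s]))
    return emi

def best_matches(emi):
    """A pair (r, s) matches when e_rs is maximal both in its row and in its column."""
    matches = []
    for r, row in enumerate(emi):
        for s, e in enumerate(row):
            if e > 0 and e == max(row) and e == max(other[s] for other in emi):
                matches.append((r, s))
    return matches

# toy example (rows: attributes of S_A, columns: attributes of S_B)
occ = [[120, 3, 0],
       [2, 85, 1],
       [0, 4, 60]]
print(best_matches(emi_matrix(occ)))   # expected: [(0, 0), (1, 1), (2, 2)]
```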
4 THE MEDIATOR ARCHITECTURE
Figure 2 illustrates the architecture for a mediator implementing the schema matching approach.
The User Interface Module (UIM) is responsible for the communication between the users (clients) and the mediator. The UIM accepts two kinds of user interaction: query and registration. Through the UIM, a user can register a new data source by providing the WSDL description of the database Web service. The Registration Module (RM) is responsible for accessing the WSDL, registering the new service and creating a new wrapper to access the remote source. The UIM also accepts keyword queries. Every time the UIM receives a new query, it forwards it to the Query Manager Module (QMM). The QMM is responsible for submitting user queries to the database Web services. The QMM communicates with the Local Sources Module (LSM) and the Wrappers Module (WM) to access local and remote sources, respectively. Moreover, the QMM communicates with the LSM to store query responses in the local cache database. The Mapping Rate Estimator Module (MREM) is an autonomous module that is responsible for accessing the local cache database to compute the occurrence matrix and the estimated mutual information matrix between export schemas and to generate the mappings. The MREM communicates with the LSM to store the mappings in the local mappings database. The Mediated Schema Module (MSM) is responsible for accessing the local databases and for inducing the elements of the mediated schema (see the examples in Section 5).
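As a rough illustration of how these responsibilities could be split in code, the following Python skeleton mirrors the module names above; the method names, data structures and in-memory "databases" are assumptions of this sketch, not the authors' implementation.

```python
class Wrapper:
    """One wrapper per registered database Web service (created by the RM)."""
    def __init__(self, wsdl_url):
        self.wsdl_url = wsdl_url              # a real wrapper would parse the WSDL here

    def query(self, keywords):
        # a real wrapper would send a SOAP request and decode the flat XML result
        return []

class Mediator:
    def __init__(self):
        self.wrappers = {}                    # WM: one wrapper per remote source
        self.cache = []                       # LSM: local cache database (simplified)
        self.mappings = {}                    # LSM: local mappings database (simplified)

    def register(self, name, wsdl_url):       # RM, reached through the UIM
        self.wrappers[name] = Wrapper(wsdl_url)

    def query(self, keywords):                # QMM
        results = {name: w.query(keywords) for name, w in self.wrappers.items()}
        self.cache.append(results)            # kept for later analysis by the MREM
        return results

    def update_mappings(self):                # MREM (runs autonomously over the cache)
        pass                                  # build occurrence and EMI matrices here
```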
5 CASE STUDIES
5.1 Bookstore Databases Experiment
The first experiment we describe uses two bookstore databases, Amazon.com ($D_A$) and Barnes and Noble ($D_B$).
$D_A$ is available as a database Web service. Figure 3 shows a fragment of the Amazon.com WSDL. We selected the operation ItemSearch. Referring to Figure 3, this operation accepts an ItemSearchRequestMsg message as input and returns an ItemSearchResponseMsg message as output. We suppressed some elements from Figure 3 to preserve readability. Table 1 reproduces the parameter list returned by the ItemSearch operation, representing the export schema $S_A$ of $D_A$.
The second data source, Barnes and Noble ($D_B$), does not provide a Web service interface, which forced us to create one to run the experiment. Table 2 shows $D_B$ export schema $S_B$.
For the keyword query “Age”, $D_A$ returned the result set $R_A$, with 23,338 entries, and $D_B$ returned the result set $R_B$, with 6,168 entries. With both result sets in cache, the mediator applied the technique described in Section 3, generating the occurrence matrix between attributes of $D_A$ and $D_B$. Figure 4 shows the occurrence matrix and Figure 5 the estimated mutual information matrix.
Table 1: Amazon.com ($D_A$) Export Schema ($S_A$).
<table>
<thead>
<tr>
<th>Attribute name</th>
<th>Description</th>
<th>Data type</th>
</tr>
</thead>
<tbody>
<tr>
<td>title (a1)</td>
<td>title</td>
<td>String</td>
</tr>
<tr>
<td>edition (a2)</td>
<td>edition</td>
<td>String</td>
</tr>
<tr>
<td>author (a3)</td>
<td>author name</td>
<td>String</td>
</tr>
<tr>
<td>publisher (a4)</td>
<td>publisher</td>
<td>String</td>
</tr>
<tr>
<td>isbn (a5)</td>
<td>International Standard Book Number (10 digit)</td>
<td>String</td>
</tr>
<tr>
<td>ean (a6)</td>
<td>European Article Number - a barcoding standard – book id</td>
<td>String</td>
</tr>
<tr>
<td>pages (a7)</td>
<td>number of pages</td>
<td>Number</td>
</tr>
<tr>
<td>date (a8)</td>
<td>publication date</td>
<td>String</td>
</tr>
</tbody>
</table>
Table 2: Barnes and Noble ($D_B$) Export Schema ($S_B$).
<table>
<thead>
<tr>
<th>Attribute name</th>
<th>Description</th>
<th>Data Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>name (b1)</td>
<td>title</td>
<td>String</td>
</tr>
<tr>
<td>by (b2)</td>
<td>author name</td>
<td>String</td>
</tr>
<tr>
<td>isbn (b3)</td>
<td>International Standard Book Number – book id</td>
<td>String</td>
</tr>
<tr>
<td>pub_date (b4)</td>
<td>publication date</td>
<td>String</td>
</tr>
<tr>
<td>sales_rank (b5)</td>
<td>number of times that other titles sold more than this book title</td>
<td>Number</td>
</tr>
<tr>
<td>number_of_pages (b6)</td>
<td>number of pages</td>
<td>Number</td>
</tr>
<tr>
<td>publ (b7)</td>
<td>publisher</td>
<td>String</td>
</tr>
</tbody>
</table>
Figure 3: Amazon.com WSDL fragment.
Figure 4: Occurrence matrix between $S_A$ and $S_B$.
Figure 5: Estimated Mutual Information matrix between $S_A$ and $S_B$.
<?xml version="1.0" encoding="UTF-8" ?>
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/
xmlns:xs="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://webservices.amazon.com/AWSECommerceService/2007-10-29"
>
<types>
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:tns="http://webservices.amazon.com/AWSECommerceService/2007-10-29" elementFormDefault="qualified">
+<xs:element name="ItemSearch">
+<xs:complexType name="ItemSearchRequest">
+<xs:element name="ItemSearchResponse">
+<xs:complexType name="ItemSearchResponse">
+<xs:element name="ItemSearchRequest">
+<xs:element name="ItemSearchResponse">
</ts:complexType>
</xs:schema>
</types>
<message name="ItemSearchRequestMsg">
<part name="body" element="tns:ItemSearch" />
</message>
<message name="ItemSearchResponseMsg">
<part name="body" element="tns:ItemSearchResponse" />
</message>
<portType name="AWSECommerceServicePortType">
+<operation name="Help">
+<input message="tns:ItemSearchRequest" />
+<output message="tns:ItemSearchResponse" />
</operation>
</portType>
<binding name="AWSCECommerceServicePortType">
+<service name="AWSCECommerceService">
+<port name="AWSCECommerceServicePort" binding="tns:AWSCECommerceServicePortType">
+<soap:address location="http://soap.amazon.com/onca/soa?Service=AWSCECommerceService" />
</port>
</service>
</binding>
</definitions>
In this first experiment, we used simple comparison operators to identify reoccurring values: for textual attributes, the SQL LIKE operator, and for numerical attributes, the "=" operator.
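A possible shape of this counting step, assuming the cached result sets live in local SQL tables (the table and column names here are hypothetical), is sketched below; each value of one attribute is probed against the other attribute with LIKE for text and = for numbers.

```python
import sqlite3

def occurrence_count(conn, table_a, attr_a, table_b, attr_b):
    """Count how many values of attr_a in the cached table table_a reoccur
    in attr_b of table_b. Identifiers are assumed to come from the mediator's
    own cache layout, not from user input."""
    cur = conn.execute(f"SELECT {attr_a} FROM {table_a}")
    count = 0
    for (value,) in cur.fetchall():
        if isinstance(value, (int, float)):
            q = f"SELECT COUNT(*) FROM {table_b} WHERE {attr_b} = ?"
            hits = conn.execute(q, (value,)).fetchone()[0]
        else:
            q = f"SELECT COUNT(*) FROM {table_b} WHERE {attr_b} LIKE ?"
            hits = conn.execute(q, ("%" + str(value) + "%",)).fetchone()[0]
        count += 1 if hits else 0
    return count
```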
Table 3 shows the alignments found between the attributes of \( S_A \) and \( S_B \). The only wrong alignment was between \( edition \) from \( S_A \) and \( sales\_rank \) from \( S_B \). These attributes are not semantically similar; the false match was caused by the reoccurrence of 4,974 values from \( D_A \) in \( D_B \). For instance, in the Amazon.com database many first editions have the \( edition \) value equal to "1" and, in the Barnes and Noble database, several books have \( sales\_rank \) also equal to "1".
An interesting observation can be made regarding book identifiers. Starting in 2007, the 13-digit ISBN began to replace the 10-digit ISBN. \( D_A \) stores both numbers, with the attribute ISBN holding the old 10-digit ISBN and the attribute EAN the new 13-digit ISBN, whereas \( D_B \) stores only the new 13-digit ISBN. Differently from a syntactical approach, which would wrongly match attribute ISBN from \( S_A \) with attribute ISBN from \( S_B \), our instance-based technique correctly matched attribute EAN from \( S_A \) with attribute ISBN from \( S_B \).
The date attributes would never reoccur, due to format differences: the Amazon.com database stores dates in the format "YYYY-MM-DD", while the Barnes and Noble database stores the publication date as "Month, YEAR". To solve this problem, the algorithm would have to be enhanced with a type-based filter. In our experiments, attribute \( date \) was modelled as a string in both sources.
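Such a type-based filter could, for example, normalize both date formats to a common granularity before counting reoccurrences; the sketch below only illustrates the idea and is not part of the reported experiments.

```python
from datetime import datetime

def normalize_date(value):
    """Bring both date formats mentioned above to a common "YYYY-MM" form."""
    for fmt in ("%Y-%m-%d", "%B, %Y"):            # "2007-10-29" / "October, 2007"
        try:
            d = datetime.strptime(value.strip(), fmt)
            return d.strftime("%Y-%m")
        except ValueError:
            continue
    return value                                   # leave unrecognized values untouched

print(normalize_date("2007-10-29"))                # -> 2007-10
print(normalize_date("October, 2007"))             # -> 2007-10
```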
Under the assumption that attributes with similar domains are semantically equivalent, the values of the estimated mutual information matrix give the mediator some evidence of which attributes should be part of a mediated schema. For instance, by observing Table 3, we note that \( title \) from the Amazon.com database aligns with \( name \) from the Barnes and Noble database, and so on. Table 4 shows the complete mediated schema, where the attribute names were chosen from the first export schema, in our case that of Amazon.com.
### 5.2 Gazetteers Experiment
Our second experiment used two geographical gazetteers available as database Web services: the Alexandria Digital Library (ADL) Gazetteer (\( D_C \)) and Geonames (\( D_D \)). In our experiments, we accessed both gazetteers through their search-by-place-name operations.
The ADL Gazetteer contains worldwide geographic place names. The ADL Gazetteer can be accessed through XML- and HTTP-based requests (Janée & Hill, 2004). Table 5 contains the ADL export schema (\( S_C \)) and Figure 6 shows a fragment of the XML response of this service.
Geonames is a gazetteer that contains over six million features, each categorized into one of nine classes and further subcategorized into one of 645 feature codes. Geonames was created using data from the National Geospatial-Intelligence Agency (NGA) and the U.S. Geological Survey Geographic Names Information System (GNIS). Geonames services are available through Web services. Table 6 presents the Geonames export schema (\( S_D \)) and Figure 7 shows a fragment of the XML response of this service.
Table 6: Geonames \((D_D)\) Database Web Service Export Schema \((S_D)\).
<table>
<thead>
<tr>
<th>Attribute name</th>
<th>Description</th>
<th>Data type</th>
</tr>
</thead>
<tbody>
<tr>
<td>name (d1)</td>
<td>primary name</td>
<td>String</td>
</tr>
<tr>
<td>lat (d2)</td>
<td>latitude</td>
<td>Number</td>
</tr>
<tr>
<td>lng (d3)</td>
<td>longitude</td>
<td>Number</td>
</tr>
<tr>
<td>geonameId (d4)</td>
<td>identifier</td>
<td>String</td>
</tr>
<tr>
<td>countryCode (d5)</td>
<td>country code (ISO-3166 2-letter code)</td>
<td>String</td>
</tr>
<tr>
<td>countryName (d6)</td>
<td>country name</td>
<td>String</td>
</tr>
<tr>
<td>fcl (d7)</td>
<td>Feature type super class code</td>
<td>String</td>
</tr>
<tr>
<td>fcode (d8)</td>
<td>Feature type classification code</td>
<td>String</td>
</tr>
<tr>
<td>fclName (d9)</td>
<td>Feature type super class name</td>
<td>String</td>
</tr>
<tr>
<td>fcodeName (d10)</td>
<td>Feature type classification name</td>
<td>String</td>
</tr>
<tr>
<td>population (d11)</td>
<td>population</td>
<td>Number</td>
</tr>
<tr>
<td>alternateNames (d12)</td>
<td>alternative names</td>
<td>String</td>
</tr>
<tr>
<td>elevation (d13)</td>
<td>elevation, in meters</td>
<td>Number</td>
</tr>
<tr>
<td>adminCode1 (d14)</td>
<td>Code for 1st adm. division</td>
<td>String</td>
</tr>
<tr>
<td>adminName1 (d15)</td>
<td>Name for 1st adm. division</td>
<td>String</td>
</tr>
<tr>
<td>adminCode2 (d16)</td>
<td>Code for 2nd adm. division</td>
<td>String</td>
</tr>
<tr>
<td>adminName2 (d17)</td>
<td>Name for 2nd adm. division</td>
<td>String</td>
</tr>
<tr>
<td>timezone (d18)</td>
<td>Timezone description</td>
<td>String</td>
</tr>
</tbody>
</table>
In this experiment, we submitted the keyword query “alps”. \(D_C\) returned 71 entries \((R_C)\) and \(D_D\) returned 77 entries \((R_D)\). With both result sets in cache, the mediator generated the occurrence matrix between attributes of the schemas of \(D_C\) and \(D_D\). Figure 8 shows the occurrence matrix and Figure 9 shows the EMI matrix between \(S_C\) and \(S_D\).
Based on the estimated mutual information matrix, Table 7 shows the alignments between attributes of $S_C$ and $S_D$.
By analyzing the values of the estimated mutual information matrix, the mediator has some evidence of which attributes could be part of a mediated schema. For instance, by observing Table 7, we note that attribute names from ADL aligns with name from Geonames, and so on for the other attributes in Table 7. Based on these observations, the mediator suggested the mediated schema in Table 8.
Table 7: The aligned attributes ($S_C$ - $S_D$).
<table>
<thead>
<tr>
<th>SC attributes</th>
<th>SD attributes</th>
</tr>
</thead>
<tbody>
<tr>
<td>names (c5)</td>
<td>name (d1)</td>
</tr>
<tr>
<td>bounding-box_X (c6)</td>
<td>lng (d3)</td>
</tr>
<tr>
<td>bounding-box_Y (c7)</td>
<td>lat (d2)</td>
</tr>
<tr>
<td>ftt_class (c8)</td>
<td>fcodeName (d10)</td>
</tr>
<tr>
<td>gnis_class (c9)</td>
<td>fcode (d8)</td>
</tr>
</tbody>
</table>
Table 8: The mediated schema ($S_C$ - $S_D$).
<table>
<thead>
<tr>
<th>Attribute name</th>
<th>Description</th>
<th>Data type</th>
</tr>
</thead>
<tbody>
<tr>
<td>names</td>
<td>entry name</td>
<td>String</td>
</tr>
<tr>
<td>bounding-box_X (c6)</td>
<td>entry longitude</td>
<td>Number</td>
</tr>
<tr>
<td>bounding-box_Y (c7)</td>
<td>entry latitude</td>
<td>Number</td>
</tr>
<tr>
<td>ftt_class (c8)</td>
<td>entry class of FTT</td>
<td>String</td>
</tr>
<tr>
<td>gnis_class (c9)</td>
<td>entry class of GNIS</td>
<td>String</td>
</tr>
</tbody>
</table>
6 CONCLUSIONS AND FUTURE WORK
In this paper, we proposed an instance-based approach for matching export schemas of databases available through Web services. We also described a technique to construct the mediated schema and to discover schema mappings on the fly, based on matching query results. To validate the approach, we discussed an experiment using bookstore databases and gazetteers.
As future work, we intend to extend the case studies to include several data sources. In this context, we plan to investigate the alignment of the included export schemas with the mediated schema and its implications, such as the association of the mediated schema with a global instance set derived from the existing sources.
ACKNOWLEDGEMENTS
This work was partly supported by CNPq under grants 550250/05-0, 301497/06-0, and 140417/05-2, and FAPERJ under grant E-26/100.128/2007.
ALIGNMENT IN ENTERPRISE SYSTEMS IMPLEMENTATIONS:
THE ROLE OF ONTOLOGICAL DISTANCE
Michael Rosemann
Centre for IT Innovation
Queensland University of Technology
Brisbane, Queensland, Australia
[email protected]
Iris Vessey
Kelley School of Business
Indiana University
Bloomington, IN U.S.A.
[email protected]
Ron A. G. Weber
Faculty of Information Technology
Monash University
Caulfield East, Victoria, Australia
[email protected]
Abstract
The development, implementation, operation, support, maintenance, and upgrade of enterprise systems (ES) have given rise to a multibillion dollar industry. Nonetheless, this industry has been perceived as having difficulties in achieving an adequate return on investment. We believe that a major reason why organizations encounter problems is that they fail to understand properly how well an ES package aligns with or fits their needs. Misfits are external manifestations of the differences between two worlds: that of the organization’s needs on the one hand and the system’s capabilities on the other.
To address issues of fit in ES package implementation, we need a way to describe and evaluate fit. To achieve this end, we use the concept of ontological distance. Specifically, we provide an approach to evaluate the significance (size) of a gap between the world of organizational requirements and the world of system capabilities. The greater the distance between the organizational world and system world, the more likely we believe that organizations will encounter difficulties in their engagement with an ES package. Distances arising from the joint evolution of organizational requirements and system capabilities occur at a number of stages during the process of implementing an ES.
From a theoretical perspective, our conceptual development shows how measures of ontological distance can be used to predict the likely deficiencies an organization will have on implementing an enterprise system. From a practical perspective, our research helps organizations to avoid problems when implementing an enterprise system and to select the best package for the organization’s needs.
Keywords: Enterprise systems, ontology, alignment
Introduction
Over the past decade, ready-to-install software in the form of enterprise systems (ES) packages has resulted in a fundamental shift in how information systems are developed in user organizations (Sawyer 2001). These systems seek to provide their users with comprehensive, integrated support for their information system needs (Shang and Seddon 2002). They include enterprise resource planning (ERP) software (Klaus et al. 2000), customer relationship management (CRM) software, and supply-chain management (SCM) software. Embedded within ES packages are business models that their designers believe represent best practice in certain contexts. ES packages require parameterized input to allow their generic capabilities to be configured to better support their users’ particular business needs.
The development, implementation, operation, support, maintenance, and upgrade of enterprise systems have given rise to a multibillion dollar industry. This industry is replete, however, with stories of high-cost problems. Many ES implementation projects run late, and some even fail (Scott and Vessey 2002). Moreover, as organizations’ use of ES matures, the high costs of operating and maintaining them have become apparent (Nolan Norton Institute 2000). Many organizations now report that achieving an adequate return on investment in ES has become a major priority (Accenture 2002; Stein and Hawking 2002).
Research that seeks to understand the reasons for these problems is relatively recent (e.g., Markus et al. 2000). Based on our prior research and experience with ES packages, we believe that a major reason why organizations encounter problems when they purchase and use them is that they fail to understand properly how well an ES package aligns with or fits their needs. Furthermore, in a large-scale survey of mid-size European organizations, van Everdingen et al. (2000) found that alignment or fit of an ES package with an organization’s needs was by far the most important criterion used in selecting a package.
The alignment issue has been studied by examining misfits identified following ES implementations (Sia and Soh 2000; Soh et al. 2003; Soh et al. 2000). Although certain misfits can be regarded as exogenous alignment issues, such as those specific to a particular country, sector, or industry, others are organization-specific. To a large extent, these latter types of alignment issues are under the control of organizations via the choices they make about tailoring the ES during implementation. In this regard, Brehm et al. (2001) identify nine different ways in which an organization can improve the fit between organizational needs and package capabilities. These choices range from configuration (setting parameters in tables to specify how the package should function in certain circumstances), to customization (e.g., employing user exits to extend the functionality of the package via published software interfaces), to modification (changing source code). Each way of addressing misfits potentially leads to its own set of problems.
Misfits manifest differences between “two worlds.” One world reflects the organization’s needs. The other reflects the package’s capabilities. To improve ES implementations, this gap needs to be minimized. Alternatively, the overlap between the two worlds needs to be maximized.
In this paper, we elucidate the idea of alignment between an ES and the needs of the user organization using the concept of ontological distance. We use the term ontological distance as a label for the extent of the difference between the capabilities embedded within an ES package and the capabilities that an organization needs to be able to operate effectively and efficiently. Ontology is the branch of philosophy that seeks to articulate models of the real world in general (Bunge 1977). Each ES package is developed to support certain types of “worlds.” If organizations are to survive and prosper, however, they need to operate in particular types of worlds that are appropriate to their mission and goals. The greater the distance between the former worlds and the latter worlds, the more likely we believe that organizations will encounter difficulties in their engagement with an ES package.
From a theoretical perspective, our conceptual development shows how measures of ontological distance can be used to predict the likely deficiencies an organization will have on implementing an enterprise system. From a practical perspective, our research helps organizations to avoid problems when implementing an enterprise system and to select the best package for their needs. Over the lifetime of an ES implementation, our notions can also be used to aid in making the decision to upgrade or further customize the system.
The next section defines what we mean by ontological distance and presents the concepts required for its measurement. We then present a model that shows the way in which misfits arise during the ES implementation process. Finally, we discuss the findings of our study and present the implications for future research.
**Concept of Ontological Distance**
In this section, we seek to characterize ontological distance. Specifically, we provide an approach for evaluating the significance (size) of a gap between the world of organizational requirements and the world of system capabilities. We characterize the gap from the viewpoint of a selected ontology. We first discuss the nature of ontological distances, in general. Next, we examine different situations that can occur in identifying those distances. Finally, we show how an ontology can be used to measure these distances.
Figure 1. Classification of Distances
Classification of Distances
Figure 1 presents the constructs involved in our examination of gaps between the user requirements and package capabilities. Circle A on the left includes all organizational requirements that are deemed relevant at some stage in the system selection and implementation process. Conceptually, it includes any requirement that some stakeholder might deem necessary at some time during the selection of an ES package and the implementation of the chosen package.
Circle B includes those requirements perceived as relevant at some point in time during the system selection or implementation process (i.e., requirements 1 through 7). It represents the first pass or initial specification of organizational requirements at a particular stage of the system selection and implementation process. For example, at an early stage of the system selection and implementation process, it might include only those key requirements that organizational stakeholders use to evaluate an ES package to determine whether it is worthwhile investigating the package in more detail.
Circle C includes the requirements that are actually relevant at the same point in time that circle B is identified. Such a selection could result from a dialogue between experts who specialize in the implementation of ES packages and representatives of the organization who are familiar with the companies’ strategies, objectives, and expectations. The difference between these two circles, B and C, highlights the potential impact of selecting nonrelevant requirements (see requirements 1 and 2 in Figure 1). It also emphasizes the fact that certain relevant requirements could be excluded from the analysis (see requirements 8 and 9 in Figure 1).
These requirements are then mapped to the capabilities of the ES. The capabilities of the system can be viewed from three perspectives. They could be the actual capabilities (circle D), the perceived capabilities (circle E), or the appropriated capabilities (circle F) (Orlikowski and Robey 1991). Appropriated capabilities are the capabilities of the system as viewed by its users; they reflect the way in which users actually use the system.
The distances identified can now be further classified using this figure.
- First, a decision has to be made regarding the relevant organizational requirements. Thus, distances may be relevant or nonrelevant. Nonrelevant distances result from the analysis of nonrelevant requirements (B \ C). Analyzing nonrelevant requirements is not only a time-consuming and non-value-adding activity; it also distorts the calculation of the ontological distance.
- Second, distances can be differentiated in terms of actual and perceived distances. Once the relevant requirements have been identified, a decision has to be made about the extent to which the actual system capabilities should be explored. Perceived distances map the actual organizational requirements against the perceived system capabilities (i.e., they are not based on a detailed analysis of the system, but rather on assumptions about the system capabilities). It is important to understand that insufficient analysis of the system capabilities can provide a wrong impression of the fit of the system with the organizational requirements and can negatively impact the calculation of the ontological distance.
- Third, a decision has to be made regarding the scope of the system capabilities. The focus could be on the “vanilla” system capabilities only (actual: circle D), or it could include appropriated system use (circle F). The differentiation between actual and appropriated capabilities is important for a clear definition of the unit of analysis.
If, in seeking to reduce complexity, we focus only on the relevant requirements (circle C in Figure 1), the following situations can be differentiated when mapping those requirements to the (actual) system capabilities (Figure 2).
First, **system completeness** refers to the situation in which there is a one-to-one mapping between an organizational requirement and a system capability. Second, **system excess** reflects system capabilities that do not correspond to any organizational requirements. This situation can be further differentiated: (1) these capabilities might not be relevant at the time that system capabilities are analyzed, but might become relevant later in the system lifecycle; (2) the capabilities may never be relevant to the organization. Third, **system deficit** includes those organizational requirements that cannot be mapped to any system capability. Again, there are two possibilities: (1) the requirements may not be meaningful, justified requirements; (2) they may be justified requirements, but the system may be unable to support them.
Note that we address issues of one-to-many and many-to-one mapping as aspects of system completeness. For example, one organizational requirement may be supported by more than one system capability, and many organizational requirements may be supported by the same system capability. The ontological distance of one enterprise system consists therefore of an evaluation of the extent of system completeness and an evaluation of the combined system excess and system deficit, which could lead to potential misfits.
The Nature of Ontological Distance
The previous section described what should be measured when it comes to the distance between the world of organizational requirements and system capabilities. We now address how these distances can be measured by using the concept of ontological distance. That is, we can gain a greater understanding of these distances by evaluating them in terms of an ontological model. For the purpose of our research, we selected the Bunge-Wand-Weber (BWW) ontological model (Wand and Weber 1989, 1990a, 1990b, 1993, 1995; Weber 1997). The core of this ontology is formed by the representation model, which defines constructs such as thing (e.g., employee), property (e.g., the attribute age of an employee), and system (e.g., an organizational unit), as well as their relationships.
Characterizing gaps in ES requirements using ontological distance has three major benefits over characterizing them as misfits. First, it is possible to cluster the identified mappings into ontological constructs such as thing, property, and system. Different distances can then be discussed in light of the underlying constructs. Second, relationships between the distances can be identified with reference to the relationships between the ontological constructs. An example would be a gap (distance) related to a property, which can be linked to the referenced thing(s). Further, this approach potentially provides the opportunity to identify cause-effect relationships among the misfits of an ES. Third, these ontological classifications can be used to derive weights for the groups based on the significance of the ontological construct. Such a weighting can help to identify the more critical gaps.
The following criteria provide some insights into the possible design of such weights.
- A distance that refers to a thing should receive a higher weighting than a distance related to a property. Things are perceived as more important (of higher weight) than properties because the real-world is made up of things, which are further characterized by properties.
- A mutual property (e.g., “is supervisor of”) describes a property that derives its meaning only in terms of other things, while an intrinsic property (e.g., “age of an employee”) is a property of a thing that does not require any other thing. A mutual property is perceived as more important than an intrinsic property because it is related to two or more things and therefore has impact beyond a single thing.
- Within mutual properties, we perceive binding mutual properties, which make a difference to the involved things (e.g., “is project manager of”), as being of higher weight than non-binding mutual properties (e.g., “is older than”).
- In an ontology, it is typically possible to identify super-type–subtype relationships. An example in the BWW ontological model is the super-type event, which can be further differentiated into the subtypes internal/external event, or poorly defined/well-defined event. A distance that relates to a super-type should receive a higher weight than a distance that refers to a subtype because a missing subtype is less significant.
- Laws that traverse two or more things should be assigned a higher weight than laws that apply only to one thing. The former constrain more state spaces and more event spaces than the latter.
- In addition to weights that can be assigned to different ontological constructs based on their nature, weights will also be influenced by the number of occurrences of a construct. In the evaluation of the MRP II solution within an enterprise system, for example, it will be possible to identify a number of things such as material, asset, tool, and employee. The more instances of an ontological construct, the higher should be its weight.
Such an ontologically based weighting of organizational requirements or system capabilities can be used to derive default values for system selection criteria. These values can be over-written based on individual organizational needs. However, they provide, at least, a theoretically based starting point to determine such weights.
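As a minimal sketch of how such weights might be combined into a single number, the fragment below adds up weighted gap counts over the system deficits and system excess identified earlier. The construct categories, the default weights, and the additive aggregation are illustrative assumptions only; they are not a measure prescribed by this paper.

```c
#include <stdio.h>

/* Ontological constructs distinguished by the weighting criteria above. */
enum construct { THING, MUTUAL_PROPERTY, INTRINSIC_PROPERTY, SUBTYPE, LAW };

/* Illustrative default weights: things and laws spanning several things
 * weigh more than intrinsic properties or subtypes. */
static const double default_weight[] = {
    [THING]              = 1.0,
    [MUTUAL_PROPERTY]    = 0.8,
    [INTRINSIC_PROPERTY] = 0.5,
    [SUBTYPE]            = 0.3,
    [LAW]                = 0.9
};

/* One unmatched requirement (system deficit) or unmatched capability
 * (system excess), classified ontologically. */
struct gap {
    enum construct kind;
    int occurrences;     /* number of instances of the construct */
};

/* Weighted additive distance; system completeness contributes nothing. */
static double ontological_distance(const struct gap *gaps, int n) {
    double d = 0.0;
    for (int i = 0; i < n; i++)
        d += default_weight[gaps[i].kind] * gaps[i].occurrences;
    return d;
}

int main(void) {
    struct gap gaps[] = {
        { THING, 2 },            /* e.g., two unsupported things        */
        { MUTUAL_PROPERTY, 3 },  /* three unsupported mutual properties */
        { SUBTYPE, 1 }
    };
    printf("ontological distance = %.2f\n",
           ontological_distance(gaps, (int)(sizeof gaps / sizeof gaps[0])));
    return 0;
}
```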
Reference Frame for Assessing Ontological Distance
Having provided a framework for measuring the alignment between the ES and the user organization, we now need to understand the context in which ontological distances arise. To do so, we introduce the notion of a reference frame (Bunge 1977).
Our analyses suggest that the implementation process is evolutionary in nature, resulting from a continual adaptation of perceived user requirements in the context of what is offered by the software system(s) under consideration. The model shown in Figure 3, therefore, covers the entire process of selecting a new ES solution from the first specification of the organizational requirements to the final and revised organizational requirements based on the system as actually implemented. The left-hand side of the figure captures the development of the organizational requirements for such large systems, and thus forms the demand side. The right-hand side of the figure essentially describes the system lifecycle. As such, it captures the supply side (i.e., the models, theories, and systems that are available when implementing an ES). Various interrelationships exist between the elements of the organizational requirements side and those of the systems side. Each relationship shown represents a distance between an organizational requirement at a point in time and a system capability.
Evolution of Organizational Requirements in the ES Context
Next, we characterize the different stages on both sides of the model, as well as the identified distances between related elements.
Problem Domain
ES are applications that provide integrated solutions for the most common functional areas and business processes of an organization. As such, they typically rely on a plethora (n) of existing models and theories (S1n). For example, such models could be based on certain accounting standards, approaches for cost management (e.g., activity-based costing), purchasing models (e.g., dynamic lot-size models), or specific approaches for operations management (e.g., optimized production technology, OPT). These models describe a selected problem domain that is independent of any system implementation. They form the core body of knowledge of the business community and academia.
The development of ES depends to a large extent on the existence and quality of such models and theories. ES may be differentiated based on the selected business models and theories used to conceptualize their domains. During the process of developing an integrated software solution, these models and theories are often modified, extended (S2m), or eliminated.
From an organizational viewpoint, the system selection process starts by deriving an initial list of organizational requirements (O1). To a large extent, these requirements are based on current practice (i.e., the relevant part of the real world that impacts the organization). Furthermore, existing business models and theories influence these requirements. A deep understanding and critical evaluation of those models and theories, which goes beyond the current practice of an organization, can significantly increase the quality of the organizational requirements. However, lack of appreciation of these models and theories means that, most often, these aspects are investigated only cursorily.
Perceived distances between O1 and S1n exist for various reasons. First, the organizational requirements may not have been derived based on all relevant models. Second, inappropriate models may have been included in the organizational requirements. Third, selected models may have been applied incorrectly. Actual distances between O1 and S1n exist if the current status of models and theories does not provide the required solutions for the organization.
System Selection Domain
Organizational requirements (O1) for ES selection typically are summarized in criteria catalogues. These initial criteria are often clustered and weighted. They are also influenced by available models and theories (S1n), which leads to more complete organizational requirements (O2). In this form, they represent the starting point for evaluating a selected number of available systems (S2m) in the system selection domain. The arrow between O2 and S2m describes this relationship.
Typically, a detailed study of the current functionality of available systems (S2m) would impact the organizational requirements (O2). In addition to available models and theories, the functionality of available systems will be another valuable source for the further improvement of organizational requirements. Complete organizational requirements (O2) consider innovative and relevant solutions in available systems.
Again, multiple reasons may lead to actual and perceived distances between O2 and S2m. Actual distances exist when initial organizational requirements cannot be supported by a selected system. The sum of all (weighted) actual distances between O2 and S2i forms one important indicator to be taken into account during the overall evaluation of the appropriateness of ESi. Perceived distances may be more critical in this stage, however. It will often be difficult to analyze the support for all organizational requirements because the functionality of the selected systems may be not only overwhelming but also difficult to comprehend without detailed systems know-how.
An actual distance between O2 and S2m may exist in two situations. First, it arises when at least one organizational requirement is not supported by any of the selected systems (system deficit). Second, it arises when each and every organizational requirement is supported by at least one system, but no one system supports all organizational requirements. Perceived distances can occur at this stage if existing system solutions are not translated correctly into organizational requirements.
The core of the system selection domain is the evaluation of each selected (short-listed) system in light of the complete organizational requirements. Ideally, the system with the most comprehensive support for the complete organizational requirements will be the system selected (S3). In other words, the system with the minimal distance measured as a weighted sum over all distances between the analyzed items will be selected. The actual distance between O2 and S3 is the first indication of potential misfit between the organizational requirements and the best possible ES solution. The perceived distance between O2 and S3 is an indicator of the risk of the decision being suboptimal.
The capabilities of the system selected (S3) represent a constraint on the complete organizational requirements (O2) because the system typically will not be able to support all organizational requirements. This situation leads to a subset of the complete organizational requirements (O2) that we term specific organizational requirements (O3). These specific organizational requirements are derived based on the functionality of the selected system. They represent requirements that can be implemented. An actual distance between S3 and O3 will exist if O3 still includes requirements that can be realized only by system configurations or additional solutions external to S3. In this stage, a danger of unknown actual distances exists. The danger is that the selected system may appear to support core requirements when it does not.
Implementation Domain
The specific organizational requirements (O3) are a major input to the specification of the implemented system (S4). The level of detail in the analyses of distances will be higher in the implementation domain because more criteria will be analyzed. An actual distance between O3 and S4 can result from known or unknown actual distances between O3 and S3. In particular, many unknown actual distances will be converted into known actual distances by the complete, exact analyses demanded when developing implementation requirements.
The identified distances between O3 and S4 will again impact the specific organizational requirements. The final organizational requirements (O4) describe those requirements that we might consider to be the shortcomings of the implemented system. Distances between O4 and S4 can now be characterized as those misfits that remain after the go-live date. Differentiation between actual and perceived distances is important at this stage to make decisions about how these distances should be approached. Unknown actual distances should be minimized as they pose a critical risk. Such distances could, for example, be problems in processes that occur a few months after the go-live date (e.g., during the consolidation process in financial accounting).
Second Wave: Extending the System
The distance between O4 and S4 represents ongoing requirements that will be considered during the second wave (i.e., in extending the system capabilities). This distance could be addressed by continuous benefit realization activities, by system upgrades, or by modifying the organizational requirements. Any remaining distance between O4 and the extended system (S5), as well as organizational changes, will lead to a new set of final organizational requirements (O5: termed revised organizational requirements).
Implication of the Reference Frame for Measuring Ontological Distance
The model of information requirements evolution shows, therefore, that a number of steps or transformations contribute to the misfits that can be identified after an ES has been implemented. Note also that the number of analyzed requirements and system capabilities will increase during the implementation process (i.e., the scope of the analysis will increase). In other words, in Figure 2, circles B and C will become synonymous with circle A.
The value and the innovation of the proposed model are particularly important in a number of areas. Although the model differentiates between different stages of organizational requirements for these systems, in practice we observe that many organizations develop one comprehensive set of requirements, evaluate a number of short-listed systems based on these (weighted) requirements, and then select a system. Our proposed four-stage process of requirements engineering in the context of ES selection, therefore, acknowledges the fact that ES are based on theoretical, system-independent models that can be evaluated separately from the systems themselves.
Discussion and Implications
Our research addresses the alignment between an ES package and an organization’s information requirements as a way of mitigating problems associated with the acquisition, implementation, operation, maintenance, and upgrade of an ES package. Packages are aligned to organizational needs via a tailoring process that includes activities such as configuration, customization, and modification of source code. Alignment problems, which are often characterized as misfits in tailoring an ES to meet users’ needs, frequently become apparent to the user organization only after implementation.
We propose the notion of ontological distance as a way of evaluating differences between the world of organizational requirements and the world of system capabilities. Further, we provide an approach for evaluating the significance (size) of a gap between these two worlds by characterizing such distances from the viewpoint of a selected ontology. The greater the distance between the organizational and system worlds, the more likely organizations are to encounter difficulties in their engagement with an ES package. Further, when evaluating ontological distance, our analysis reveals that it is important to differentiate between actual, perceived, and appropriated distances.
Different types of ontological distance are apparent throughout the requirements determination process. Knowledge of both organizational requirements and systems capabilities evolve over time in a series of stages that largely result from the interaction between the two. Each of the many interactions between the organization and system sides of the model reflects changing distances. The existence of large distances helps stakeholders to pinpoint where more-careful evaluations of organizational requirements and ES capabilities are needed.
From a theoretical perspective, our conceptual development shows how measures of ontological distance can be used to predict the likely deficiencies an organization will have on implementing an ES. Our goals now are to refine our measures of ontological distance and to empirically test the predictions we make about the seriousness of misfits based on the distance measures.
Practically, our research helps organizations select the best package to meet their needs and to avoid problems when developing, implementing, operating, supporting, and maintaining an ES. Our notions can also be used as an aid in structuring the evaluation process and in making the decision to upgrade or further customize the system.
Our goals are to develop an ES package evaluation methodology based on ontological distance and to create software tools to support use of our methodology. For example, such tools might facilitate clustering the selection criteria used during an ES selection project and provide default values for each criterion based on the ontological classification. We are also currently conducting an ontological evaluation of parts of a human resources management solution of an enterprise system. The software tool we will develop to calculate ontological distances will use the outcomes of this evaluation.
INT02-C. Understand integer conversion rules
Conversions can occur explicitly as the result of a cast or implicitly as required by an operation. Although conversions are generally required for the correct execution of a program, they can also lead to lost or misinterpreted data. Conversion of an operand value to a compatible type causes no change to the value or the representation.
The C integer conversion rules define how C compilers handle conversions. These rules include integer promotions, integer conversion rank, and the usual arithmetic conversions. The intent of the rules is to ensure that the conversions result in the same numerical values and that these values minimize surprises in the rest of the computation. Prestandard C implementations generally used unsigned-preserving rules, that is, they preserved the unsignedness of the promoted type.
Integer Promotions
Integer types smaller than int are promoted when an operation is performed on them. If all values of the original type can be represented as an int, the value of the smaller type is converted to an int; otherwise, it is converted to an unsigned int. Integer promotions are applied as part of the usual arithmetic conversions to certain argument expressions; operands of the unary +, -, and ~ operators; and operands of the shift operators. The following code fragment shows the application of integer promotions:
```c
char c1, c2;
c1 = c1 + c2;
```
Integer promotions require the promotion of each variable (c1 and c2) to int size. The two int values are added, and the sum is truncated to fit into the char type. Integer promotions are performed to avoid arithmetic errors resulting from the overflow of intermediate values:
```c
signed char cresult, c1, c2, c3;
c1 = 100;
c2 = 3;
c3 = 4;
cresult = c1 * c2 / c3;
```
In this example, the value of c1 is multiplied by c2. The product of these values is then divided by the value of c3 (according to operator precedence rules). Assuming that signed char is represented as an 8-bit value, the product of c1 and c2 (300) cannot be represented. Because of integer promotions, however, c1, c2, and c3 are each converted to int, and the overall expression is successfully evaluated. The resulting value is truncated and stored in cresult. Because the final result (75) is in the range of the signed char type, the conversion from int back to signed char does not result in lost data.
Integer Conversion Rank
Every integer type has an integer conversion rank that determines how conversions are performed. The ranking is based on the concept that each integer type contains at least as many bits as the types ranked below it. The following rules for determining integer conversion rank are defined in the C Standard, subclause 6.3.1.1 [ISO/IEC 9899:2011]:
- No two signed integer types shall have the same rank, even if they have the same representation.
- The rank of a signed integer type shall be greater than the rank of any signed integer type with less precision.
- The rank of long long int shall be greater than the rank of long int, which shall be greater than the rank of int, which shall be greater than the rank of short int, which shall be greater than the rank of signed char.
- The rank of any unsigned integer type shall equal the rank of the corresponding signed integer type, if any.
- The rank of any standard integer type shall be greater than the rank of any extended integer type with the same width.
- The rank of char shall equal the rank of signed char and unsigned char.
- The rank of _Bool shall be less than the rank of any of the other standard integer types.
- The rank of any enumerated type shall equal the rank of the compatible integer type.
- The rank of any extended signed integer type relative to another extended signed integer type with the same precision is implementation-defined but still subject to the other rules for determining the integer conversion rank.
- For all integer types T1, T2, and T3, if T1 has greater rank than T2 and T2 has greater rank than T3, then T1 has greater rank than T3.
The integer conversion rank is used in the usual arithmetic conversions to determine what conversions need to take place to support an operation on mixed integer types.
Usual Arithmetic Conversions
The usual arithmetic conversions are rules that provide a mechanism to yield a common type when both operands of a binary operator are balanced to a common type or the second and third operands of the conditional operator (?) are balanced to a common type. Conversions involve two operands of different types, and one or both operands may be converted. Many operators that accept arithmetic operands perform conversions using the usual arithmetic conversions. After integer promotions are performed on both operands, the following rules are applied to the promoted operands:
1. If both operands have the same type, no further conversion is needed.
2. If both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.
3. If the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type is converted to the type of the operand with unsigned integer type.
4. If the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type is converted to the type of the operand with signed integer type.
5. Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type.
Example
In the following example, assume the code is compiled using an implementation with 8-bit char, 32-bit int, and 64-bit long long:
```c
#include <limits.h>  /* SCHAR_MAX, UCHAR_MAX */

signed char sc = SCHAR_MAX;
unsigned char uc = UCHAR_MAX;
signed long long sll = sc + uc;  /* sc and uc are promoted to int before the addition */
```
Both the signed char `sc` and the unsigned char `uc` are subject to integer promotions in this example. Because all values of the original types can be represented as int, both values are automatically converted to int as part of the integer promotions. Further conversions are possible if the types of these variables are not equivalent as a result of the usual arithmetic conversions. The actual addition operation, in this case, takes place between the two 32-bit int values. This operation is not influenced by the resulting value being stored in a signed long long integer. The 32-bit value resulting from the addition is simply sign-extended to 64 bits after the addition operation has concluded.
Assuming that the precision of signed char is 7 bits, and the precision of unsigned char is 8 bits, this operation is perfectly safe. However, if the compiler represents the signed char and unsigned char types using 31- and 32-bit precision (respectively), the variable `uc` would need to be converted to unsigned int instead of signed int. As a result of the usual arithmetic conversions, the signed int is converted to unsigned, and the addition takes place between the two unsigned int values. Also, because `uc` is equal to UCHAR_MAX, which is equal to UINT_MAX, the addition results in an overflow in this example. The resulting value is then zero-extended to fit into the 64-bit storage allocated by `sll`.
Noncompliant Code Example (Comparison)
The programmer must be careful when performing operations on mixed types. This noncompliant code example shows an idiosyncrasy of integer promotions:
```c
int si = -1;
unsigned int ui = 1;
printf("%d\n", si < ui);
```
In this example, the comparison operator operates on a signed int and an unsigned int. By the conversion rules, `si` is converted to an unsigned int. Because -1 cannot be represented as an unsigned int value, the -1 is converted to UINT_MAX in accordance with the C Standard, subclause 6.3.1.3, paragraph 2 [ISO/IEC 9899:2011]:
> Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
Consequently, the program prints 0 because UINT_MAX is not less than 1.
Compliant Solution
The noncompliant code example can be modified to produce the intuitive result by forcing the comparison to be performed using signed int values:
```c
int si = -1;
unsigned ui = 1;
printf("%d\n", si < (int)ui);
```
This program prints 1 as expected. Note that `(int)ui` is correct in this case only because the value of `ui` is known to be representable as an int. If it were not known, the compliant solution would need to be written as
```c
int si = /* Some signed value */;
unsigned ui = /* Some unsigned value */;
printf("%d\n", (si < 0 || (unsigned)si < ui));
```
Noncompliant Code Example
This noncompliant code example demonstrates how performing bitwise operations on integer types smaller than int may have unexpected results:
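The snippet in question is reconstructed below from the description that follows; the variable names are taken from the compliant solution shown later.

```c
uint8_t port = 0x5a;
uint8_t result_8 = (~port) >> 4;  /* ~port is evaluated as a (signed) int, not as uint8_t */
```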
In this example, a bitwise complement of \( \text{port} \) is first computed and then shifted 4 bits to the right. If both of these operations are performed on an 8-bit unsigned integer, then \( \text{result\_8} \) will have the value 0x0a. However, \( \text{port} \) is first promoted to a signed int, with the following results (on a typical architecture where type int is 32 bits wide):
<table>
<thead>
<tr>
<th>Expression</th>
<th>Type</th>
<th>Value</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>( \text{port} )</td>
<td>\text{uint8_t}</td>
<td>0x5a</td>
<td></td>
</tr>
<tr>
<td>( \sim \text{port} )</td>
<td>\text{int}</td>
<td>0xffffffa5</td>
<td></td>
</tr>
<tr>
<td>( \sim \text{port} \gg 4 )</td>
<td>\text{int}</td>
<td>0x0ffffffa</td>
<td>Whether or not value is negative is \text{implementation-defined}</td>
</tr>
<tr>
<td>( \text{result_8} )</td>
<td>\text{uint8_t}</td>
<td>0xfa</td>
<td></td>
</tr>
</tbody>
</table>
**Compliant Solution**
In this compliant solution, the bitwise complement of \( \text{port} \) is converted back to 8 bits. Consequently, \( \text{result\_8} \) is assigned the expected value of 0x0aU.
```c
uint8_t port = 0x5a;
uint8_t result_8 = (uint8_t)(~port) >> 4;
```
**Noncompliant Code Example**
This noncompliant code example, adapted from the Cryptography Services blog, demonstrates how signed overflow can occur even when it seems that only unsigned types are in use:
```c
unsigned short x = 45000, y = 50000;
unsigned int z = x * y;
```
On implementations where short is 16 bits wide and int is 32 bits wide, the program results in undefined behavior due to signed overflow. This is because the unsigned short values become signed int when they are promoted, and their mathematical product (2250000000) is greater than the largest signed 32-bit integer (2^31 - 1, which is 2147483647).
**Compliant Solution**
In this compliant solution, by manually casting one of the operands to \text{unsigned int}, the multiplication will be unsigned and so will not result in undefined behavior:
```c
unsigned short x = 45000, y = 50000;
unsigned int z = x * (unsigned int)y;
```
**Risk Assessment**
Misunderstanding integer conversion rules can lead to errors, which in turn can lead to exploitable vulnerabilities. The major risks occur when narrowing the type (which requires a specific cast or assignment), converting from unsigned to signed, or converting from negative to unsigned.
<table>
<thead>
<tr>
<th>Recommendation</th>
<th>Severity</th>
<th>Likelihood</th>
<th>Remediation Cost</th>
<th>Priority</th>
<th>Level</th>
</tr>
</thead>
<tbody>
<tr>
<td>INT02-C</td>
<td>Medium</td>
<td>Probable</td>
<td>Medium</td>
<td>P8</td>
<td>L2</td>
</tr>
</tbody>
</table>
**Automated Detection**
<table>
<thead>
<tr>
<th>Tool</th>
<th>Version</th>
<th>Checker</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Astrée</td>
<td>22.04</td>
<td></td>
<td>Supported</td>
</tr>
<tr>
<td>CodeSonar</td>
<td>7.2p0</td>
<td>ALLOC.SIZE.TRUNC</td>
<td>Truncation of Allocation Size</td>
</tr>
<tr>
<td></td>
<td></td>
<td>LANG.CAST.COERC</td>
<td>Coercion Alters Value</td>
</tr>
<tr>
<td></td>
<td></td>
<td>LANG.CAST.VALUE</td>
<td>Cast Alters Value</td>
</tr>
<tr>
<td></td>
<td></td>
<td>MISC.MEM.SIZE.TRUNC</td>
<td>Truncation of Size</td>
</tr>
<tr>
<td>ECLAIR</td>
<td>1.2</td>
<td>CC2.INT02</td>
<td>Fully implemented</td>
</tr>
<tr>
<td>Helix QAC</td>
<td>202.3</td>
<td>C1250, C1251, C1252, C1253, C1256, C1257, C1260, C1263, C1266, C1274, C1290, C1291, C1292, C1293, C1294, C1295, C1296, C1297, C1298, C1299, C1800, C1802, C1803, C1804, C1810, C1811, C1812, C1813, C1821, C1822, C1823, C1824, C1830, C1831, C1832, C1833, C1834, C1841, C1842, C1843, C1844, C1850, C1851, C1852, C1853, C1854, C1860, C1861, C1862, C1863, C1864, C1880, C1881, C1882, C2100, C2101, C2102, C2103, C2104, C2105, C2106, C2107, C2109, C2110, C2111, C2112, C2113, C2114, C2115, C2116, C2117, C2118, C2119, C2120, C2122, C2124, C2130, C2132, C2134, C4401, C4402, C4403, C4404, C4405, C4410, C4412, C4413, C4414, C4415, C4420, C4421, C4422, C4423, C4424, C4425, C4430, C4431, C4432, C4434, C4435, C4436, C4437, C4440, C4441, C4442, C4443, C4445, C4446, C4447, C4460, C4461, C4463, C4464, C4470, C4471, C4480, C4481</td>
<td></td>
</tr>
<tr>
<td>Klocwork</td>
<td>2022.3</td>
<td>MISRA.CAST.INT</td>
<td>Fully implemented</td>
</tr>
<tr>
<td></td>
<td></td>
<td>MISRA.CAST.UNSIGNED_BITS</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>MISRA.Conv.INT.SIGN</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>MISRA.CVALUE.IMPL.CAST</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>MISRA.MINUS.UNSIGNED</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td>PRECISION.LOSS</td>
<td></td>
</tr>
<tr>
<td>Parasoft C/C++test</td>
<td>2022.1</td>
<td>CERT_C-INT02-a</td>
<td>Implicit conversions from wider to narrower integral type which may result in a loss of information shall not be used</td>
</tr>
<tr>
<td></td>
<td></td>
<td>CERT_C-INT02-b</td>
<td>Avoid mixing arithmetic of different precisions in the same expression</td>
</tr>
<tr>
<td>PC-lint Plus</td>
<td>1.4</td>
<td>501, 502, 569, 570, 573, 574, 701, 702, 732, 734, 737</td>
<td>Partially supported</td>
</tr>
<tr>
<td>Polyspace Bug Finder</td>
<td>R2022b</td>
<td>CERT C: Rec. INT02-C</td>
<td>Checks for sign change integer conversion overflow (rec. fully supported)</td>
</tr>
<tr>
<td>PRQA QA-C</td>
<td>9.7</td>
<td>1250, 1251, 1252, 1253, 1256, 1257, 1260, 1263, 1266, 1274, 1290, 1291, 1292, 1293, 1294, 1295, 1296, 1297, 1298, 1299, 1800, 1802, 1803, 1804, 1810, 1811, 1812, 1813, 1820, 1821, 1822, 1823, 1824, 1830, 1831, 1832, 1833, 1834, 1840, 1841, 1842, 1843, 1844, 1850, 1851, 1852, 1853, 1854, 1860, 1861, 1862, 1863, 1864, 1880, 1881, 1882, 2100, 2101, 2102, 2103, 2104, 2105, 2106, 2107, 2109, 2111, 2112, 2113, 2114, 2115, 2116, 2117, 2118, 2119, 2120, 2122, 2124, 2130, 2132, 2134, 4401, 4402, 4403, 4404, 4405, 4410, 4412, 4413, 4414, 4415, 4420, 4421, 4422, 4423, 4424, 4425, 4426, 4430, 4431, 4432, 4434, 4435, 4436, 4437, 4440, 4441, 4442, 4443, 4445, 4446, 4447, 4448, 4449, 4450, 4451, 4452, 4453, 4454, 4455, 4456, 4457, 4458, 4459, 4460, 4461, 4462, 4463, 4464, 4465, 4466, 4467, 4468, 4469, 4470, 4471, 4472, 4473, 4474, 4475, 4476, 4477, 4478, 4479, 4480, 4481</td>
<td>Fully implemented</td>
</tr>
<tr>
<td>PVS-Studio</td>
<td>7.22</td>
<td>V555, V605, V673, V5006</td>
<td></td>
</tr>
</tbody>
</table>
**Related Vulnerabilities**
This vulnerability in Adobe Flash arises because Flash passes a signed integer to `calloc()`. An attacker has control over this integer and can send negative numbers. Because `calloc()` takes `size_t`, which is unsigned, the negative number is converted to a very large number, which is generally too big to allocate, and as a result, `calloc()` returns `NULL`, causing the vulnerability to exist.
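The pattern can be sketched in a few lines of C; `alloc_records()` and its 16-byte element size are hypothetical stand-ins for illustration, not code taken from Flash:
```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: 'len' stands in for the attacker-controlled,
   signed length that was passed to calloc(). */
static void *alloc_records(int len) {
  if (len < 0) {
    return NULL; /* reject negative lengths before any conversion happens */
  }
  /* Without the check above, a negative 'len' converted to size_t would
     become a very large value; calloc() would then typically fail and
     return NULL, which callers must be prepared to handle. */
  return calloc((size_t)len, 16);
}

int main(void) {
  void *p = alloc_records(-1);
  printf("%s\n", (p == NULL) ? "rejected or allocation failed" : "allocated");
  free(p); /* free(NULL) is a no-op */
  return 0;
}
```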
Search for vulnerabilities resulting from the violation of this rule on the [CERT website](https://cert.org).
**Related Guidelines**
<table>
<thead>
<tr>
<th>SEI CERT C++ Coding Standard</th>
<th>VOID INT02-CPP. Understand integer conversion rules</th>
</tr>
</thead>
<tbody>
<tr>
<td>ISO/IEC TR 24772:2013</td>
<td>Numeric Conversion Errors [FLC]</td>
</tr>
<tr>
<td>MISRA C:2012</td>
<td>Rule 10.1 (required)</td>
</tr>
<tr>
<td></td>
<td>Rule 10.3 (required)</td>
</tr>
<tr>
<td></td>
<td>Rule 10.4 (required)</td>
</tr>
<tr>
<td></td>
<td>Rule 10.6 (required)</td>
</tr>
<tr>
<td></td>
<td>Rule 10.7 (required)</td>
</tr>
<tr>
<td></td>
<td>Rule 10.8 (required)</td>
</tr>
<tr>
<td>MITRE CWE</td>
<td>CWE-192, Integer coercion error</td>
</tr>
<tr>
<td></td>
<td>CWE-197, Numeric truncation error</td>
</tr>
</tbody>
</table>
**Bibliography**
[Seacord 2013] Chapter 5, "Integer Security"
---
ITERATIONS
THE DW METHODOLOGY
ITERATIONS™ - The Data Warehouse Methodology
by Prism Solutions
Acknowledge the Challenges
Data warehouse projects pose a unique set of analysis, design, technology and management challenges. These challenges are unlike those found during traditional development projects to build operational systems. Successful data warehouse implementations require a methodology that identifies and addresses these differences to efficiently guide project managers and project teams. More than applying a specific set of technologies, successful data warehouse implementations are the result of an effective and repeatable development process.
Evaluating traditional system development life cycle (SDLC) methodologies, business process re-engineering (BPR) methodologies, or rapid application design (RAD) methodologies for approaches to solve the challenges of data warehousing is an exercise in futility. Regardless of how many boxes or arrows or levels they may have, they are primarily geared toward developing operational systems that run the business, rather than informational systems that analyze the business. Not only are the analysis and design characteristics unique to data warehouse development absent among them, but worse, concepts that appear useful in a traditional methodology may be contrary to what an experienced data warehouse architect would recommend. The arbitrary use of common methodologies and accepted techniques focused on building operational systems may be largely to blame for the less than optimal data warehouses within many organizations.
ITERATIONS removes the guesswork from data warehouse planning, analysis, design and management by providing development teams with a clearly defined set of Tasks, timeframes, deliverables, and Roles that can be tailored to your data warehouse initiative.
The Evolution of ITERATIONS
Prism Consulting has pursued the best data warehouse talent to staff its consulting practice with most of the consultants bringing experience from some of the “early adopters” of data warehousing throughout the US. Leveraging the years of experience related to the specifics and intricacies of delivering successful data warehouse solutions, Prism has developed a flexible, iterative project roadmap that can be adapted to various data warehouse initiatives across all industries. ITERATIONS is based on many of the tenets articulated by Bill Inmon and is also focused on ensuring successful implementations for a wide range and scope of data warehouse initiatives including data marts, operational data stores, and enterprise data warehouses. This enhanced methodology addresses all facets of a data warehouse initiative:
- Project management
- Data warehouse analysis
- Data warehouse modeling
- Data warehouse design
- End user access design
- Meta data management
- Technical environment design and preparation
- Construction of the data warehouse environment
- Testing of the data warehouse environment
- End user acceptance
- Project reviews
ITERATIONS was not developed in a laboratory or derived by adapting traditional systems development methodologies. Rather, Prism capitalized on its five-year history of delivering “real” data warehouse solutions, the experience of our consulting staff, and the validated data warehousing theories of Bill Inmon, as the sole sources of input for the methodology. The Prism methodology has continually evolved into what is now a practical, adaptable, comprehensive approach for enabling successful data warehouse implementations.
A review of ITERATIONS is a good starting point for discussing Prism’s approach to any type of data warehouse consulting engagement. A brief review of the methodology should provide an appreciation for how Prism Solutions’ experience and knowledge can help to ensure the success of your data warehouse initiative.
**ITERATIONS Framework**
ITERATIONS was designed in a highly structured manner. This approach yields several benefits, including ease of understanding and a common vernacular within the data warehouse project team, and its modular design permits it to be straightforwardly integrated with existing project methodologies.
ITERATIONS is clearly organized into Modules, and all of the Activities required to complete the Module are identified along with their corresponding Deliverable(s). Additional detail at the Task level is provided in the Generic Project Plan. Modules are grouped two ways - within a Track, and within a Phase. The Tracks represent distinct sets of Modules which should occur in parallel. Phases represent a progressive grouping of Modules. Typically, these Modules are completed prior to initiating the next Phase. Projects are managed and monitored by Phase. In this way, clients may enlist ITERATIONS-certified consultants to assist with a particular Track or Phase of Modules, or even specific Modules themselves. Further, these data warehouse development Modules may be easily inserted into or integrated with existing methodologies.
ITERATIONS Tracks
Five parallel Tracks of work efforts help coordinate various Activities, optimize resource utilization, and maximize data warehouse project efficiencies resulting in more timely implementations.
<table>
<thead>
<tr>
<th>TRACK</th>
<th>DESCRIPTION</th>
</tr>
</thead>
<tbody>
<tr>
<td>Project</td>
<td>Modules focused on orientation, commitment, management, administration, training, marketing and strategy</td>
</tr>
<tr>
<td>User</td>
<td>Modules focused on business requirements, departmental/individual designs, end user access, and end user acceptance</td>
</tr>
<tr>
<td>Data</td>
<td>Modules focused on data model analysis and design, atomic level design, source system analysis, data extraction design, data warehouse processing design, construction, and data warehouse population</td>
</tr>
<tr>
<td>Technical</td>
<td>Modules focused on technical component assessment, selection, integration, technical environment sizing, environment preparation, and environment testing</td>
</tr>
<tr>
<td>Meta Data</td>
<td>Modules focused on business/technical meta data integration, and meta data access design & development</td>
</tr>
</tbody>
</table>
ITERATIONS Phases
The Phases of ITERATIONS represent groupings of Modules that are completed in concert and often have many dependencies among Modules within the Phase and from previous Phases. The conclusion of a Phase should result in a checkpoint that reviews and ensures successful completion of expected project deliverables and ongoing progress of the data warehouse initiative.
<table>
<thead>
<tr>
<th>PHASE</th>
<th>DESCRIPTION</th>
</tr>
</thead>
<tbody>
<tr>
<td>Startup</td>
<td>Ensures the organization is prepared for a data warehouse project in terms of awareness and commitment, and establishes that a data warehouse strategy is in place</td>
</tr>
<tr>
<td>DW Management</td>
<td>Ensures and plans for training, support, project management, change management, data warehouse marketing, and ongoing administration of the data warehouse</td>
</tr>
<tr>
<td>Analysis</td>
<td>Assesses, scopes and models potential data warehouse solutions, source system solutions, data availability, cleanliness, and completeness, along with high level technical recommendations</td>
</tr>
<tr>
<td>Design</td>
<td>Designs the data environment, data access environment, data extraction environment, maintenance processing environment, and the detailed technical environment</td>
</tr>
<tr>
<td>Construction</td>
<td>Builds and unit tests the data extraction solutions, data access solutions, maintenance processing solutions, the technical environment and develops end user training</td>
</tr>
<tr>
<td>Testing</td>
<td>Performs various levels of integrated data warehouse testing and user acceptance</td>
</tr>
<tr>
<td>Implementation</td>
<td>Makes the data warehouse accessible to the end users and begins monitoring and optimization efforts</td>
</tr>
</tbody>
</table>
ITERATIONS Roles
ITERATIONS defines and describes the data warehouse project team Roles and responsibilities and cross references these Roles to the Deliverables Matrix, the Module and Activity Narratives, and the Generic Project Plan.
Data Warehouse Roles
![Data Warehouse Roles Diagram]
ITERATIONS Modules
ITERATIONS is comprised of 35 Modules, each with associated, specific Activities. Each Module is reflected in the comprehensive Generic Data Warehouse Project Plan. Both the Modules and their corresponding Activities have significant Narratives describing them, identifying responsibilities, deliverables, and techniques that can be used to complete them (See Example Section to follow). The following high level diagram illustrates the ITERATIONS process.
The ITERATIONS process
ITERATIONS Product Materials
The ITERATIONS product, available in a set of binders and hyper-linked CD-ROM, includes over 1000 pages of integrated data warehousing best-practices techniques and consists of the following materials:
- **Process Narratives** – detailed descriptions of the nearly 200 high-level and mid-level work items. Each process Narrative consists of 8 sections:
- Purpose
- Key Participants
- Description
- Effort
- Deliverables
- Techniques
- Responsibility
- Success Indicators
- **Role Definitions** – detailed skill definitions defining the Core and Extended team resources’ responsibilities.
- **Documentation Templates** – defining each of the suggested nearly 200 project deliverables for creating a solid project audit trail and reference for your future data warehouse releases. The templates are provided both on-line in MS Office and in printed format and include sample completed templates.
- **Deliverables Checklist** – a cross-reference of Documentation Templates to each of the Modules & Activities for project managers to use as a completion checklist and reference. The checklist is provided in MS Excel format with tabs for each Phase.
- **Generic Data Warehouse Workplan** – a detailed hierarchical, flexible data warehouse project plan that lays out each of the process steps, team member responsibilities, and deliverable milestones. The project plan consists of approximately 200 work items at the Activity level, and 700 work items at the more granular Task level. The workplan is provided both on-line in MS Project and in printed format.
- **Tech Topics** – a growing series of about 30 whitepapers on various data warehousing topics written by Prism experts. These Tech Topics are cross-referenced throughout the process Narratives.
- **Training and Case Study** – workbooks and exercise materials from the formal three-day ITERATIONS education class.
**ITERATIONS Licensing**
For approximately the cost of one person-month of consulting, you can now significantly increase the likelihood of building data warehouses that:
- Solve a specific business challenge
- Deliver value within a reasonable timeframe
- Achieve high return on investment
- Meet or exceed expectations
- Meet user requirements
- Deliver a data warehouse solution on schedule, within budget, and effectively utilizing the resources available
- Minimize the impact on operational systems
- Maximize information availability and analytical capabilities
- Design toward flexibility to ensure future decision support needs can be accommodated
Depending upon the scope of your implementation, you can select a “Mart License”, “Departmental License”, or “Enterprise License”.
Activity Definition Example
Following is an excerpt from just one of the nearly 200 ITERATIONS Narratives:
**ACTIVITY A3.2**
**Assess Technical Environment Characteristics of Candidate Source Systems**
- **Phase:** Analysis
- **Module:** Source System Analysis
- **Track:** Data
**Purpose**
Perform analyses on the technical environments of candidate systems of record.
**Description**
In most organizations, there are multiple candidate source systems with the same or comparable data. Candidate source systems are the ones whose data is most likely to be loaded into the data warehouse. The focus of this Activity is to identify the technical system characteristics of similar or identical data within the same or different candidate systems that indicate which is a preferable or optimal source for populating the data warehouse. At this point, all potential source systems have been identified. Most of this effort is spent on the most likely candidate systems. These candidate sources are evaluated based on the following characteristics of their data (data quality analysis is performed in A3.3, Evaluate Quality of Legacy Data):
- Timeliness
- Nearness to the source
- Degrees of granularity
- Batch windows
Methods used to gather this information are:
- Observation of data entry into legacy system
- System staff interviews
- Queries against source systems
- Source code analysis
- Network path analysis
- Source system capacity analysis
- Source system support considerations
- Future implementation considerations
Systems of record that are serious contenders as sources to the data warehouse should be evaluated in depth. Comprehensive technical environment evaluation of the candidate source systems provides a means for selecting between systems that may be able to provide similar or identical information to the data warehouse. For selected candidate systems, the Data Quality Administrator needs to evaluate the quality of source system data by completing the next Activity, A3.3.
Regardless of the best operational data for the data warehouse, technical limitations of operational systems may prohibit them from being acceptable or usable sources of data. This Activity also assesses the feasibility of the current computing environment of a candidate source providing data to the data warehouse environment. For example, limitations that may prohibit a system from optimally supporting the data needs of the data warehouse include, but are not limited to:
- Accessibility
- Capacity
- Availability
- Support personnel
In addition, for the initial implementation of the data warehouse, this Activity serves to increase understanding of the organization's overall computing environment.
**Deliverables**
The Data Warehouse Data Architect completes the following template during this Source System Analysis Activity:
<table>
<thead>
<tr>
<th>ACTIVITY</th>
<th>Filename</th>
<th>Deliverable</th>
</tr>
</thead>
<tbody>
<tr>
<td>A3.2</td>
<td>A3_2t.doc</td>
<td>Candidate Source Systems Assessment</td>
</tr>
</tbody>
</table>
**Responsibility**
Data Warehouse Data Architect
**Key Participants**
- Source System Expert
- Source System DBA
- Data Acquisition Developer
**Effort**
The number of source systems, their geographic dispersion, and their complexity can affect the time needed to complete this Activity. Additionally, availability of source system support personnel who understand the system and can discuss source system characteristics will have a significant impact on duration. As a guideline, this Activity will require up to one week per major system (such as a billing system or a financial system) and one to three days per minor system (such as a system populated from a major system with a minimal number of tables).
**Techniques**
Using the information obtained in the previous Activity, gather additional information to assist during analysis of the most likely candidates, to include:
- *Processing Narratives*
- *Existing Reports* supporting the business requirements identified during Business Requirements Analysis
Meet with the appropriate System Managers, Subject Matter Experts, Key System Users and Application Support Analysts to discuss system specifics, such as:
- *Required fields*
- *Options for field extraction*
Identify the point in the system processes in which the fields exist in the nearest state to those required in the data warehouse. For example, in a billing system, the data may be required after it has been rated versus
before. However, should usage information be required, extracting the data prior to rating may be advantageous.
- Estimated data volumes
- Extractions already developed against the data
It must be determined why these extracts exist, what current production systems they support, what their stability and level of flexibility are (i.e., can they be changed within a reasonable timeframe), support levels, and so forth.
- Timing of extractions, including batch windows
Identify the specific point in batch processing in which the extraction should occur.
Using tools currently available, obtain sample data to verify the data quality, stability, definition, structure, integrity and usability.
**General Guidelines for Selecting the Optimal Source System**
After selecting the optimal source system, it is a good idea to update the template, A3_2t.doc, Candidate Source System Assessment. The optimal source system should be determined based on the following criteria:
**Timeliness**
Accuracy and completeness of data is often relative to the point in time the data is extracted from the current operational environment. Extraction during specific times in a process may prove one source more accurate and complete than another, yet this may not hold true of the same source at another time in the process. Identify and understand the different time implications of the data in the existing systems environment, then identify the best source of data to satisfy the business requirements of the data warehouse.
Examples of considerations for timeliness include:
- How often the source data is created, updated and deleted
- Relative timeliness of sources to be combined in the data warehouse. For example, if the customer master file is updated weekly while the sales master file is updated daily, this can result in sales with no apparent customer when a new customer places an order.
**Structural Compatibility**
While accuracy, completeness and timeliness are very important in deciding on the best data, how well the source data structurally conforms to the data warehouse data model is also an issue. Structural compatibility or conformance occurs when the source system data can be extracted and mapped into the data warehouse table structure with minimal creation of new keys or redesign of the data warehouse data model. If there is a gross mismatch between the data model and the optimal current system data, some compromises are usually made. In the next Activity, A3.3, the Data Quality Administrator performs this evaluation in more detail.
**Nearness to the Source**
Based on the theory that data becomes increasingly corrupt as it passes through an organization's computing environment, consider how far removed the data is from its originating source. Often, the further from the source, the more modifications have been made to the data, and the more difficult it will be to maintain as changes occur to the source system environment. On the other hand, sometimes the best system of record is the output of a downstream process. For example, an output file of an operational system that is carefully controlled, audited and loaded into the General Ledger may be a more appropriate source of data for the data warehouse than creating a separate extract process from the original source that mimics the same results.
**Other Techniques**
Following are techniques and considerations that aid the source system selection:
**Degrees of Granularity**
Similar source data tables may consist of differing degrees of granularity. For example, one sales table may contain line item detail while another contains order summary data. Identifying the differences and business rules applied to both candidate sources will help facilitate the selection of the best system of record.
**Observation of Entry into System**
If time allows, observing the Activities of users of the operational systems at the point of entry is an excellent method for identifying the system's data characteristics.
**System Staff Interviews**
Developing a set of questions concerning source system data and characteristics that can be used when interviewing data creators, data users, data stewards, and data base administrators is often effective.
Network Path Analysis
Accessibility of various source platforms (via local area or wide area networks) often differs significantly because of different network connectivity solutions. Identifying the network paths to be used, and testing the network performance for moving data from the candidate source system platform to the data warehouse platform can be advantageous during the assessment process. In addition, future plans for network upgrades should be identified.
Source System Capacity Analysis
Source system capacity (including processing, storage, and batch windows) is often a major limitation that will affect decisions surrounding the optimal source. Often, preferred source systems have the lowest available capacity because the data resident in the system is in high demand throughout the organization. In an attempt to ensure the capacity is sufficient, time and effort should be expended:
- Evaluating and testing the capacities for compiling and executing data extraction programs
- Investigating whether batch queues and priorities are available to the data warehouse project
- Identifying whether acceptable data staging disk space is available
- Assessing plans for hardware upgrades or retirements
Batch Windows
Typically, source extract applications process during the late evening or early morning when operational activity is lowest. For the likely source systems, identify the available batch windows and ascertain whether the extractions can complete processing during these windows. At this time also identify the scheduling anomalies (e.g., scheduled down-time, developmental freeze periods, or scheduled purges/archival).
Source System Support Considerations
The availability and commitment of source system staff to work with the Data Warehouse Project Team can also have an effect on the selection of data sources. Since the source system staff will be relied upon to manage and monitor job queues, assist in coding job control programs, establish system access priorities, and other environment-specific Activities, their commitment is critical.
Future Implementation Considerations
As the data warehouse continues to incorporate new data elements in successive implementations, consider future requirements when selecting appropriate data sources. Although two data sources may satisfy the target data requirements in a relatively equal manner, one may also include data that will likely be needed for the next anticipated subject area.
Success Indicators
Success is achieved upon completing and documenting a comprehensive assessment of the candidate source systems. Success will also be demonstrated, when based on this assessment, minimal reconsideration of additional data sources in this (or a future) implementation is required.
Upon implementation of the data warehouse, this Activity will have been successful when extract performance is satisfactory, the data extracted from the systems is of sufficient quality, and additional data can be included rapidly.
Deliverable Example
Following is an excerpt from the deliverable template corresponding to the Narrative above.
<table>
<thead>
<tr>
<th>Candidate Source Systems Assessment</th>
<th>A3_2t.doc Analysis Phase Source System Analysis</th>
</tr>
</thead>
<tbody>
<tr>
<td>Author:</td>
<td>Create Date:</td>
</tr>
<tr>
<td> </td>
<td> </td>
</tr>
<tr>
<td>Review Date:</td>
<td> </td>
</tr>
<tr>
<td>Complete Date:</td>
<td> </td>
</tr>
</tbody>
</table>
**Overview**
| Individual Expected to Complete Template: | Data Warehouse Data Architect |
**Intended Use:** Identify and document the characteristics of the potential source systems. These characteristics will then be used to select the “best” source of data for the data warehouse.
---
**Potential Source System Identification**
<table>
<thead>
<tr>
<th>Potential Source System:</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Data Structure:</td>
<td></td>
</tr>
<tr>
<td>Primary Function:</td>
<td></td>
</tr>
<tr>
<td>Contact Names:</td>
<td></td>
</tr>
</tbody>
</table>
---
**System Profile**
<table>
<thead>
<tr>
<th>Operating System:</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Platform:</td>
<td></td>
</tr>
<tr>
<td>Typical Programming Language Used:</td>
<td></td>
</tr>
</tbody>
</table>
**System Accessibility and Availability**
<table>
<thead>
<tr>
<th>Batch Window Availability:</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Batch Processing Capacity:</td>
<td></td>
</tr>
<tr>
<td>Physical Proximity to Expected Data Warehouse Platform:</td>
<td></td>
</tr>
<tr>
<td>Data Storage Capacity:</td>
<td>(temporary files)</td>
</tr>
<tr>
<td>Communications Protocol:</td>
<td></td>
</tr>
<tr>
<td>Communications Availability:</td>
<td></td>
</tr>
<tr>
<td>Support Resources:</td>
<td></td>
</tr>
<tr>
<td>System Stability:</td>
<td></td>
</tr>
<tr>
<td>Security:</td>
<td></td>
</tr>
<tr>
<td>Enhancement & Upgrade Plans:</td>
<td></td>
</tr>
<tr>
<td>Ease of Access:</td>
<td></td>
</tr>
</tbody>
</table>
---
**Data by Source System**
Develop a rating scheme that reflects the priorities of the organization to evaluate the data within the potential source system according to major considerations identified below.
"Acceptable to meet data warehouse requirements?"
**Project Plan Example**
The Iterations Data Warehouse Workplan contains four distinct levels of detail: Phase, Module, Activity and Task – each level expanding upon the previous one. The plan also incorporates the complete set of suggested deliverables (as milestones), and provides Role assignments at both the Module and Activity level. Finally, the plan includes dependencies (links) at both the Module and Activity level.
A small portion of the entire detailed ITERATIONS project plan is illustrated on the following page:
**ITERATIONS Detailed Project Plan (partial)**
Items in blue italics represent suggested Deliverables and their completion dates (documentation templates) for this Module. In practice, project managers often elect to modify the plan to manage projects at the
Activity level rather than the detailed Task level and/or prefer to Track deliverables via the Deliverables Matrix only.
**ITERATIONS Deliverables Matrix Example**
Project managers and other team members may also make free-form use of the on-line Deliverables Matrix that cross-references Deliverables to their associated Activities and Modules. There is one matrix for each Phase. A partial matrix is illustrated below:
Sample Deliverables Matrix (partial)
**ITERATIONS Education**
**ITERATIONS Data Warehouse Methodology (4 day course)**
The ITERATIONS Data Warehouse Development Methodology provides the data warehouse project team a set of guidelines and techniques to enable the development of a successful data warehouse. ITERATIONS addresses the unique project management, architectures, design approaches, technologies, and analytical techniques necessary to develop a successful data warehouse within a reasonable timeframe. Using these methods, data warehouse development teams can greatly reduce the risks of failing to meet user requirements and management expectations. Based on Prism Consulting’s years of experience, the ITERATIONS education class packages and presents these "best practices" to the class participant.
The ITERATIONS training course applies a case study approach to ensure the class participant understands the practical application of the methodology in a realistic environment. It involves discussions of topics such as:
- Iterative and parallel project management
- Negotiated data warehouse design approaches
- Cooperative analysis efforts
- Data warehouse implementation and feedback mechanisms
- Data warehouse orientation, commitment and expectation management
- Project budgeting
- Data warehouse marketing and support
- Meta data collection/integration/management
- End user access development
The course presents the materials in an easy-to-follow fashion, consistently correlating the Module or Activity with the associated Phase in the iterative process.
Also, to ensure maximum value, the course is highly participatory and discussion-oriented, and class participants have ample opportunity to frame their own organization’s culture, standards and technologies within the ITERATIONS methodology.
**ITERATIONS Adaptation (2 consulting days - Optional)**
Because ITERATIONS will be utilized in a very specific cultural, technical, and organizational environment, some aspects of the methodology may need to be tailored to the specific environment. In this two day (minimum) session, an ITERATIONS-certified consultant will work with your project manager or project
team to define how ITERATIONS will be implemented in your organization and on your project. Activities include, but may not be limited to, review of the ITERATIONS deliverables, review of the Roles and responsibilities and correlating them with the skills and capabilities of your project team, and high level development of a project plan.
A lengthier but similar effort may include integrating ITERATIONS with your existing methodology. In this situation, the consultant will work to either extend your current methodology with the applicable Modules or Activities within ITERATIONS, or incorporate necessary components of your methodology into ITERATIONS.
ITERATIONS-Based Consulting Services
Accomplishing the objectives of a data warehousing initiative and moving beyond may require an experienced hand throughout the project, or in particular Phases. Prism Solutions Consulting brings a wealth of experience and expertise to the initiative, offering both a data warehouse specific methodology and industry-recognized experts in data warehousing.
In general, data warehouse projects pose a unique set of analysis, design and management challenges, which are very unlike traditional development projects. Combine these data warehousing challenges with the information architecture challenges that an organization faces, and the advantages of enlisting the services of consultants who possess a significant breadth and depth of data warehouse development skills become clear.
Prism Solutions’ consulting staff is comprised of internationally recognized data warehouse professionals with several levels of experience. Clients will be assured the appropriate consultant will be matched to the project demands. Consulting services are available for all types of data warehouse design, development and implementation projects, with specific expertise available in the following areas.
Data Warehouse Readiness Assessment
Moving forward to build a data warehouse requires a great degree of technical and organizational preparedness. In this limited engagement, Prism will draw upon its experience to evaluate a client’s readiness to proceed with a data warehouse initiative. Each major component of the ITERATIONS data warehouse development methodology will be addressed (based on the client’s business requirements and objectives.) A Senior Consultant will review project plans, system documentation and project requirements and incorporate their findings in the formal Readiness Assessment Report.
Data Warehouse Enterprise Strategic Planning
Companies are often anxious to analyze how the data warehouse can benefit their entire organization. Typically, they first implement an initial data warehouse to evaluate its broader potential. Prism’s experience in delivering data warehouse solutions bridging departmental boundaries across industries can be applied to the strategic planning process. Prism will perform a detailed data warehouse implementation review and work with your management to generate a feasible, refined enterprise-wide data warehouse strategy and implementation plan.
Data Warehouse Project Management
Managing the construction of a data warehouse can demand a level of expertise and attention not currently available in many organizations. For this reason, customers may wish to entrust the project management to an outside resource who is familiar with the various development Phases and issues related to data warehouse implementation. Prism will manage all or part of the data warehouse project including: work planning, work supervision, status report preparation, leading design reviews, end user training, and other related Activities as specified by the client.
Data Warehouse Source System Analysis
Organizations often have several similar or identical data sources for populating the data warehouse. One of the first steps in building a data warehouse is to identify the legacy data most appropriate for populating the data warehouse. By assessing the integrity, volatility and accessibility of a client’s data sources, Prism will
produce a recommendation of the types of transformation, filtering, integration, summarization and retention needed to construct the data warehouse according to specifications.
**Data Warehouse Modeling**
The point of departure for building the data warehouse is the data model. The data model serves as the blueprint for organizing the structure and content of data and metadata in the warehouse. Often existing corporate data models must be transformed and extended for use as the data warehouse data model. Prism’s expertise in data modeling techniques and tools can be applied to creating corporate data models, designing data warehouse data models, or adapting Prism’s Inmon Generic Data Models to specific requirements. This Activity involves working closely with the end user and Information Systems communities to ensure the data model is designed to meet their needs.
**Data Warehouse Business Subject Area Analysis**
The overall success of the data warehouse project can be contingent on the selection of the initial business subject areas. Experienced Prism consultants work with clients to identify the subject areas to be populated in the data warehouse and the method in which they should be implemented. The output from this Activity is an understanding of the scope of the effort required for each subject area, so the development stages of the data warehouse are properly estimated and planned for. For each selected subject area, the analysis involves choosing the triggering events for data capture, identifying data relationships, determining naming conventions and planning the frequency of data transfer into the data warehouse.
**Data Warehouse Systems Planning**
Constructing a data warehouse involves much more than the movement of data from one platform to another. Information system planning encompasses several Activities needed for building effective decision support systems. This process typically includes building a data model for the data warehouse, determining the optimal computing environment for the data warehouse, selecting the triggering events for data capture, identifying data relationships, resolving naming conventions and planning the frequency of data transfer into the data warehouse.
**Data Warehouse Construction**
Creating the physical data warehouse involves extracting, transforming, transporting, and loading the data. The development of these processes demands a specialized understanding of data warehouse tools and techniques across hardware platforms, operating systems, DBMSs, and networks. Prism data warehouse architects and specialists specify the data mappings and transformations, and develop, test, and maintain the programs.
**Data Warehouse Capacity Planning**
A significant cost of the data warehouse is the computing platform. Accurate capacity planning can help to mitigate this expense. Further, the related financial decisions and capital expenditures should be made as early as possible in the data warehouse project to ensure adequate time for installation. The capacity planning engagement systematically evaluates a client’s data warehouse requirements and preliminary designs to determine the amount of disk storage, processing resources and telecommunication capacity required for the data warehouse environment. Prism’s experience in data warehousing can be drawn upon to overcome the challenges of capacity planning for decision support systems in which the computing workload fluctuates and is difficult to predict. Also, since the data warehouse typically grows in stages over a period of several years, it is critical to project the impact of that growth on corporate resources. Capacity planning is also beneficial for companies who may wish to transfer the data warehouse over time from a mainframe to a distributed environment.
**Data Warehouse Design Review**
Only properly designed and implemented data warehouses will meet end users’ informational needs. A Prism design review will verify that the business requirements and organizational objectives can be achieved via the data warehouse as designed. After each major subject area has been designed, it is studied thoroughly in relation to the corporate data model, the data warehouse data model, the data delivery mechanisms, and the business requirements. This ensures design completeness, accuracy and feasibility. Having an experienced
consultancy such as Prism lead the design review offers the advantages of a broad, external perspective and qualified, constructive input.
**Meta Data Management**
With the abundance of information available to business analysts and other data warehouse users, it is paramount to consider the meaning behind the data. It’s easy to interpret the same information in different ways or to question how the data was derived; this can be a dangerous byproduct of improved information access. Meta data, or data about data, is the answer. In helping construct a usable data warehouse solution, Prism will design and deliver a corresponding meta data management solution. This will integrate both the business and technical meta data and provide an access mechanism to search and view the meta data. In addition, this solution will even allow users to launch queries into the data warehouse via one of many popular data access tools.
**Data Warehouse Access Development**
Data warehouse projects driven by business analysis or user access requirements demand complementary desktop solutions. Different user communities may require unique views of the data warehouse or they may need sophisticated analysis and reporting functionality. Prism Solutions’ consulting experience extends into the end user access design and development Activities that precede a successful data warehouse rollout. We will consider not only the informational requirements, but also the data presentation requirements in helping evaluate data access tools, design the internal indexing structures, develop specific reports and queries, train users, and install the software.
Web: http://www.prismsolutions.com/
---
## SPEC CPU®2017 Integer Speed Result
**Cisco Systems**
Cisco UCS B200 M5 (Intel Xeon Platinum 8276L, 2.20GHz)
<table>
<thead>
<tr>
<th>Software</th>
<th>OS: SUSE Linux Enterprise Server 15 (x86_64) 4.12.14-23-default</th>
</tr>
</thead>
<tbody>
<tr>
<td>Compiler:</td>
<td>C/C++: Version 19.0.4.227 of Intel C/C++ Compiler for Linux; Fortran: Version 19.0.4.227 of Intel Fortran Compiler for Linux</td>
</tr>
<tr>
<td>Parallel:</td>
<td>Yes</td>
</tr>
<tr>
<td>Firmware:</td>
<td>Version 4.0.4b released Apr-2019</td>
</tr>
<tr>
<td>File System:</td>
<td>btrfs</td>
</tr>
<tr>
<td>System State:</td>
<td>Run level 3 (multi-user)</td>
</tr>
<tr>
<td>Base Pointers:</td>
<td>64-bit</td>
</tr>
<tr>
<td>Peak Pointers:</td>
<td>Not Applicable</td>
</tr>
</tbody>
</table>
## Hardware
| CPU Name: | Intel Xeon Platinum 8276L |
| Max MHz: | 4000 |
| Nominal: | 2200 |
| Enabled: | 56 cores, 2 chips |
| Orderable: | 1, 2 chips |
| Cache L1: | 32 KB I + 32 KB D on chip per core |
| L2: | 1 MB I+D on chip per core |
| L3: | 38.5 MB I+D on chip per chip |
| Other: | None |
| Memory: | 768 GB (24 x 32 GB 2Rx4 PC4-2933V-R) |
| Storage: | 1 x 240G SSD SATA |
| Other: | None |
### SPECspeed®2017_int_base = 10.6
### SPECspeed®2017_int_peak = Not Run
---
**CPU2017 License:** 9019
**Test Sponsor:** Cisco Systems
**Tested by:** Cisco Systems
**Hardware Availability:** Apr-2019
**Software Availability:** May-2019
**Test Date:** Jul-2019
[Figure: bar chart of the base result ratio for each benchmark (all run with 56 threads); the numeric values appear in the Results Table below.]
---
**Results Table**
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Threads (Base)</th>
<th>Seconds</th>
<th>Ratio</th>
<th>Seconds</th>
<th>Ratio</th>
<th>Seconds</th>
<th>Ratio</th>
<th>Threads (Peak)</th>
<th>Seconds</th>
<th>Ratio</th>
</tr>
</thead>
<tbody>
<tr>
<td>600.perlbench_s</td>
<td>56</td>
<td>256</td>
<td>6.94</td>
<td>251</td>
<td>7.07</td>
<td>253</td>
<td>7.03</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>602.gcc_s</td>
<td>56</td>
<td>391</td>
<td>10.2</td>
<td>390</td>
<td>10.2</td>
<td>393</td>
<td>10.1</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>605.mcf_s</td>
<td>56</td>
<td>368</td>
<td>12.8</td>
<td>367</td>
<td>12.9</td>
<td>366</td>
<td>12.9</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>620.omnetpp_s</td>
<td>56</td>
<td>172</td>
<td>9.51</td>
<td>177</td>
<td>9.23</td>
<td>176</td>
<td>9.25</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>623.xalancbmk_s</td>
<td>56</td>
<td>111</td>
<td>12.8</td>
<td>111</td>
<td>12.7</td>
<td>110</td>
<td>12.8</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>625.x264_s</td>
<td>56</td>
<td>119</td>
<td>14.8</td>
<td>120</td>
<td>14.7</td>
<td>120</td>
<td>14.7</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>631.deepsjeng_s</td>
<td>56</td>
<td>257</td>
<td>5.59</td>
<td>257</td>
<td>5.57</td>
<td>257</td>
<td>5.57</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>641.leela_s</td>
<td>56</td>
<td>349</td>
<td>4.90</td>
<td>348</td>
<td>4.91</td>
<td>348</td>
<td>4.90</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>648.exchange2_s</td>
<td>56</td>
<td>172</td>
<td>17.1</td>
<td>172</td>
<td>17.1</td>
<td>171</td>
<td>17.2</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>657.xz_s</td>
<td>56</td>
<td>254</td>
<td>24.3</td>
<td>255</td>
<td>24.3</td>
<td>254</td>
<td>24.3</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
**Operating System Notes**
Stack size set to unlimited using "ulimit -s unlimited"
**General Notes**
Environment variables set by runcpu before the start of the run:
KMP_AFFINITY = "granularity=fine,scatter"
LD_LIBRARY_PATH = "/home/cpu2017/lib/intel64:/home/cpu2017/je5.0.1-64"
OMP_STACKSIZE = "192M"
Binaries compiled on a system with 1x Intel Core i9-7900X CPU + 32GB RAM
memory using Redhat Enterprise Linux 7.5
Transparent Huge Pages enabled by default
Prior to runcpu invocation
Filesystem page cache synced and cleared with:
sync; echo 3> /proc/sys/vm/drop_caches
NA: The test sponsor attests, as of date of publication, that CVE-2017-5754 (Meltdown) is mitigated in the system as tested and documented.
Yes: The test sponsor attests, as of date of publication, that CVE-2017-5753 (Spectre variant 1) is mitigated in the system as tested and documented.
Yes: The test sponsor attests, as of date of publication, that CVE-2017-5715 (Spectre variant 2) is mitigated in the system as tested and documented.
jemalloc, a general purpose malloc implementation
built with the RedHat Enterprise 7.5, and the system compiler gcc 4.8.5
**Platform Notes**
BIOS Settings:
- Intel HyperThreading Technology set to Disabled
- CPU performance set to Enterprise
- Power Performance Tuning set to OS Controls
- SNC set to Disabled
- IMC Interleaving set to Auto
Sysinfo program /home/cpu2017/bin/sysinfo
Rev: r5797 of 2017-06-14 96c45e4568ad54c135fd618bccc091c0f
running on linux-bo6o Fri Sep 6 11:18:14 2019
SUT (System Under Test) info as seen by some common utilities.
For more information on this section, see
https://www.spec.org/cpu2017/Docs/config.html#sysinfo
From /proc/cpuinfo
- model name: Intel(R) Xeon(R) Platinum 8276L CPU @ 2.20GHz
- 2 "physical id"s (chips)
- 56 "processors"
- cores, siblings (Caution: counting these is hw and system dependent. The following excerpts from /proc/cpuinfo might not be reliable. Use with caution.)
- cpu cores: 28
- siblings: 28
- physical 0: cores 0 1 2 3 4 5 6 8 9 10 11 12 13 14 16 17 18 19 20 21 22 24 25 26 27 28 29 30
- physical 1: cores 0 1 2 3 4 5 6 8 9 10 11 12 13 14 16 17 18 19 20 21 22 24 25 26 27 28 29 30
From lscpu:
- Architecture: x86_64
- CPU op-mode(s): 32-bit, 64-bit
- Byte Order: Little Endian
- CPU(s): 56
- On-line CPU(s) list: 0-55
- Thread(s) per core: 1
- Core(s) per socket: 28
- Socket(s): 2
- NUMA node(s): 2
- Vendor ID: GenuineIntel
- CPU family: 6
- Model: 85
- Model name: Intel(R) Xeon(R) Platinum 8276L CPU @ 2.20GHz
- Stepping: 7
- CPU MHz: 2200.000
- CPU max MHz: 4000.0000
- CPU min MHz: 1000.0000
- BogoMIPS: 4400.00
- Virtualization: VT-x
- L1d cache: 32K
- L1i cache: 32K
- L2 cache: 1024K
- L3 cache: 39424K
- NUMA node0 CPU(s): 0-27
- NUMA node1 CPU(s): 28-55
- Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin mba tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local ibpb ibrs stibp dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni arch_capabilities ssbd
/proc/cpuinfo cache data
```
cache size : 39424 KB
```
From numactl --hardware WARNING: a numactl 'node' might or might not correspond to a physical chip.
```
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
node 0 size: 385426 MB
node 0 free: 384673 MB
node 1 cpus: 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52
node 1 size: 387043 MB
node 1 free: 382926 MB
node distances:
node 0 1
0: 10 21
1: 21 10
```
From /proc/meminfo
```
MemTotal: 791009524 KB
HugePages_Total: 0
Hugepagesize: 2048 KB
```
From /etc/*release* /etc/*version*
```
os-release:
NAME="SLES"
VERSION="15"
VERSION_ID="15"
PRETTY_NAME="SUSE Linux Enterprise Server 15"
ID="sles"
ID_LIKE="suse"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:15"
```
uname -a:
Linux linux-bo6o 4.12.14-23-default #1 SMP Tue May 29 21:04:44 UTC 2018 (cd0437b)
x86_64 x86_64 x86_64 GNU/Linux
run-level 3 Sep 6 08:36
SPEC is set to: /home/cpu2017
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc1 btrfs 224G 18G 205G 8% /home
Additional information from dmidecode follows. WARNING: Use caution when you interpret this section. The 'dmidecode' program reads system data which is "intended to allow hardware to be accurately determined", but the intent may not be met, as there are frequent changes to hardware, firmware, and the "DMTF SMBIOS" standard.
BIOS Cisco Systems, Inc. B200M5.4.0.4b.0.0407191258 04/07/2019
Memory:
24x 0xCE00 M393A4K40CB2-CVF 32 GB 2 rank 2933, configured at 2934
(End of data from sysinfo program)
Compiler Version Notes
==============================================================================
C | 600.perlbench_s(base) 602.gcc_s(base) 605.mcf_s(base) 625.x264_s(base) 657.xz_s(base)
------------------------------------------------------------------------------
Intel(R) C Intel(R) 64 Compiler for applications running on Intel(R) 64,
Version 19.0.4.227 Build 20190416
Copyright (C) 1985-2019 Intel Corporation. All rights reserved.
------------------------------------------------------------------------------
C++ | 620.omnetpp_s(base) 623.xalancbmk_s(base) 631.deepsjeng_s(base)
| 641.leela_s(base)
------------------------------------------------------------------------------
Intel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64,
Version 19.0.4.227 Build 20190416
Copyright (C) 1985-2019 Intel Corporation. All rights reserved.
------------------------------------------------------------------------------
Fortran | 648.exchange2_s(base)
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.4.227 Build 20190416
Copyright (C) 1985-2019 Intel Corporation. All rights reserved.
Base Compiler Invocation
C benchmarks:
icc -m64 -std=c11
C++ benchmarks:
icpc -m64
Fortran benchmarks:
ifort -m64
Base Portability Flags
600.perlbench_s: -DSPEC_LP64 -DSPEC_LINUX_X64
602.gcc_s: -DSPEC_LP64
605.mcf_s: -DSPEC_LP64
620.omnetpp_s: -DSPEC_LP64
623.xalancbmk_s: -DSPEC_LP64 -DSPEC_LINUX
625.x264_s: -DSPEC_LP64
631.deepsjeng_s: -DSPEC_LP64
641.leela_s: -DSPEC_LP64
648.exchange2_s: -DSPEC_LP64
657.xz_s: -DSPEC_LP64
Base Optimization Flags
C benchmarks:
-Wl,-z,muldefs -xCORE-AVX512 -ipo -O3 -no-prec-div
-qopt-mem-layout-trans=4 -qopenmp -DSPEC_OPENMP
-L/usr/local/je5.0.1-64/lib -ljemalloc
C++ benchmarks:
-Wl,-z,muldefs -xCORE-AVX512 -ipo -O3 -no-prec-div
-qopt-mem-layout-trans=4
-L/usr/local/IntelCompiler19/compilers_and_libraries_2019.4.227/linux/compiler/lib/intel64
-lqkmalloc
Fortran benchmarks:
-xCORE-AVX512 -ipo -O3 -no-prec-div -qopt-mem-layout-trans=4
-nostandard-realloc-lhs
The flags files that were used to format this result can be browsed at
You can also download the XML flags sources by saving the following links:
Final Project
Due Midnight at the end of Wednesday, March 13, 2002
The Big Picture
Your last laboratory assignment of the quarter will be to design, document, and demonstrate a digital design project that exercises the skills you have been developing this quarter in EE121.
This handout describes four possible projects: the BreakOut video game, the Worm video game, a MIDI sound player, and an RC4 encryption cipher cracking program. You may choose a project other than the ones listed here, but you do so at your own risk. The TAs will be better able to help you if you choose one of the listed designs, and they will probably not be able to devote the same amount of effort to supporting individually designed projects. The projects are not necessarily of equal difficulty. All of the project descriptions include some basic functionality as well as many optional features that you can add as time permits. The difficulty will vary with the number of features you choose to implement. We will keep this in mind when assigning grades. You should choose a project that you will enjoy designing and that you feel confident you will finish.
You will achieve success through three easy steps.
😊 Before you design anything in Xilinx Foundation, think carefully about your design, what modules it will have, and how these modules will interact with one another. Drawing block diagrams, state diagrams, and transition/output tables will be especially useful here. Feel free to describe your plans to the TAs and ask for feedback.
😊 Before you come to the lab to test your hardware, simulate your design thoroughly. Debugging is much easier with a simulator.
😊 Break the design process up into steps, and test each step in hardware before you proceed to the next step. Do not even think of adding advanced features before all your basic features work.
A simple but deadly mistake is to design everything in Foundation, then come in to the lab near the end of the quarter and expect your project to work right away. It won’t! We will be using several external interfaces in this project (VGA monitor, SDRAM, audio codec, game controller, etc.) and you do not know whether your design works with these interfaces until you’ve seen it work with the real hardware. Debugging and working with real hardware are important components of digital design, and a design that works “in theory” but not in practice is simply a design that does not work. A simple design without advanced features that works well is much better than a fancy design that does not work at all. If you don’t think you’ll be able to finish your project by the deadline, you should talk to the TAs in advance. They can usually suggest ways to reduce the scope of the design so you’ll have something to show.
The final project is due Midnight at the end of Wednesday, March 13. You will demo the project the following morning to your classmates and the teaching staff. We will not accept any late projects; you must demonstrate whatever you have working by midnight at the end of March 13th. Think of the project as a “proof-of-concept” demonstration to some venture capitalists you want to fund your company, or some wealthy donors you want to fund your research group. You don’t have infinite time or resources, so your visitors (the teaching staff) do not expect elaborate features or a flawless, commercial product. They simply want tangible proof that you understand the design problem and can build a system to solve the problem. Functionality, a good demonstration, and clear explanations are important.
**Bureaucratic Details**
*Legal Issues*
You should work in groups of two on this project. You may also work on the project individually, but you may not work in groups of more than two students (unless this is pre-authorized by the instructor). You should submit only one copy of everything except the documentation. Each team member must submit his/her own project documentation.
*Open Lab Hours*
You should come to the lab during your regularly scheduled lab sessions to test the designs you create and simulate in Foundation. You may also work on your project during the other group’s lab session, but students in that session will have priority in the use of equipment. The TAs will also open the lab at other times (to be posted on the project web page) for project work. Anyone can come in during any of these open lab hours. However, since we have many more groups than lab stations, we might start taking reservations closer to the due date. These open lab hours replace all previous TA office hours.
*Web Page*
Your faithful companion in this project will be [http://www.stanford.edu/class/ee121/project.html](http://www.stanford.edu/class/ee121/project.html). (A link to this page is on the class web site.) We’ll post helpful documentation on the interfaces, answers to frequently asked questions (FAQs), software tools, and useful links as the project progresses, so be sure to visit this page regularly.
*Demos*
Unless specially arranged, all the demos will be conducted on the same set of hardware in lab. All the boards in the lab are the same, and there will be 10 identical gamepads for you to prototype with during the final project development phase. The instructor will download the *.bit file from your submit directory and any necessary SDRAM files (for the MIDI and RC4 projects). If you need to do anything special to set up your demo, please talk to the instructor as soon as possible.
Documentation
The project documentation requirements are similar to those given in the final assignment of E102E. You may use the same report you write in E102E for your EE121 documentation. Your documentation should have two distinct parts (taken from the E102E assignment):
1. A circuit description and “theory of operation.” This part should include a description of all of the inputs, outputs, functional blocks, and algorithms in your particular design, written for experts, engineers who will want to understand what’s special about what you’ve designed.
2. A “user manual,” which enables non-specialist users to play the game. Together, both parts of your document should total about 2-3 pages, typed, single-spaced, exclusive of diagrams.
You should write your report clearly and concisely. In addition to the hardcopy, which you can turn in at the computer cluster in Packard 128, place a PDF copy in your submit directory or e-mail it following the usual submission procedure.
Deadlines
We will hold mid-project and final demonstrations in Packard 127. You may submit all supporting files electronically (via Z:\submit or e-mail) or in the drop box in Packard 128. No late demonstrations or submissions are accepted for any of these deadlines; you should show us what you have by the due date. The purpose of the two “milestone” deadlines is to ensure that you are on schedule to finish your project on time – and, if you are not, to provide a time for you to discuss your problems with the TAs.
Friday, February 22nd at 6pm: By this time, you should have chosen a project and e-mailed your choice to your regular lab section TA. Be sure to tell us whom you are working with on the project. If you wish to do your own project, you must make an appointment with the Instructor to gain approval of your project and establish appropriate milestones. After this date, you may not change your project choice.
Thursday, February 28th, 2002, 22:00:00: Milestone # 1, described separately for each project.
Thursday, March 7th, 2002, 22:00:00: Milestone # 2, described separately for each project.
Wednesday, end of day March 13th, 2002, midnight: Final project submission. Archive and submit your final project.
Thursday, March 14th, starting at 9am: Final Project Demonstrations. Each team will have approximately 7 minutes to demonstrate their project to the class and teaching staff.
Tuesday, end of the day March 19th, 2002, midnight: Documentation due, both electronically (in PDF form) via e-mail or Z:\submit, and in hardcopy in the drop box in Packard 128.
Breakout!
http://www.stanford.edu/class/ee121/breakout.html
Project Description
In this project, you will design a device that allows a user to play the video game Breakout using the XSA-100 board, Xstend board, gamepad and VGA monitor. Breakout involves a paddle on the bottom of the screen, which can scroll back and forth to hit a ball toward a block of bricks. As the ball strikes a brick, the brick will disappear. When the player has eliminated all of the bricks, the game is over. The player will control the motion of the paddle using the gamepad. The video game’s array of grid pieces will be displayed on the VGA monitor. By using characters stored in memory for each array element on the screen, we can dramatically simplify control of the game output on the monitor. For an example of Breakout, go to the link http://www.geocities.com/SiliconValley/Bay/6879/Breakout.html.
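To make the character-array idea concrete, here is a minimal Python sketch of the game array as a grid of tile codes. It is only a conceptual model (the real design keeps these codes in on-chip memory and a display process translates them into characters on the monitor); the tile names, codes, and grid dimensions are made up for illustration.

```python
# Conceptual model only: a grid of tile codes that a display process
# would translate into characters on the monitor.
EMPTY, BRICK, PADDLE, BALL = 0, 1, 2, 3   # hypothetical tile codes

ROWS, COLS = 30, 40                        # assumed grid dimensions
grid = [[EMPTY] * COLS for _ in range(ROWS)]

# The top fraction of the array holds bricks; the bottom row is reserved
# for the paddle, as the specifications below require.
for r in range(6):
    for c in range(COLS):
        grid[r][c] = BRICK
grid[ROWS - 1][COLS // 2] = PADDLE

def tile_at(row, col):
    """The display side only ever asks 'which character sits at (row, col)?',
    which is what makes a character-mapped display simple to drive."""
    return grid[row][col]
```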
Specifications
Required:
1. Set the Breakout game array to grid pieces that are 16 bits by 16 bits in size. The top fraction of the array will be bricks. The rest will be blank, and the bottom row of the array should be reserved for the movement of the paddle.
2. Initialize the game with the paddle in the center of the bottom row. The ball can start out in the same position each time.
3. There should be a RESET signal that can stop any currently running game, and reset the screen to the initialization state.
4. If a player misses the ball, it should disappear and the game should be reset.
5. The paddle should be 3 grid pieces wide. When the ball strikes the center piece in the paddle, it reflects straight back (90 degrees). If the ball hits either grid piece to the right or left of the center piece, then the ball must reflect at 45 degrees in the direction opposite from where it came (see the sketch after this list for one way to express these rules).
6. The sidewalls of the game area must reflect the ball toward the middle at a 45 degree angle (this is the simplest boundary condition we can implement). Similarly, as the ball strikes a brick, the ball must be reflected at the opposite angle from its approach. If a ball goes vertically, it should reflect vertically.
7. The paddle movement can be at a resolution of “paddle” sized blocks. This means that if the paddle is three blocks wide, it only has to move in three block increments.
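The following Python sketch expresses one reading of the paddle, wall, and brick reflection rules above. It is a software illustration only: the (dx, dy) direction encoding, the hit_offset convention, and the function names are assumptions, not part of the assignment.

```python
def reflect_off_paddle(hit_offset, dx, dy):
    """hit_offset is -1, 0, or +1 for the left, center, or right paddle piece.
    The center piece sends the ball straight back up (90 degrees); the outer
    pieces send it back up at 45 degrees, opposite to the direction it came from."""
    if hit_offset == 0:
        return 0, -abs(dy)                                  # straight back up
    return (-dx if dx != 0 else -hit_offset), -abs(dy)      # 45 degrees, back the other way

def reflect_off_side_wall(dx, dy):
    """Side walls reflect the ball toward the middle at 45 degrees."""
    return -dx, dy

def reflect_off_brick(dx, dy):
    """A brick reflects the ball at the opposite angle from its approach;
    a vertically moving ball simply reflects vertically."""
    return dx, -dy
```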
Recommended:
(1) Implement a score in the top row of the game array. Assign some number of points for each brick that is hit, and keep a running total.
(2) Use multiple colors for the blocks. It could look nice to make each row of bricks a different color.
(3) Increase the resolution of the paddle movement to allow finer adjustment of the paddle position (i.e. have the paddle move one game array piece at a time).
(4) Allow the player to have multiple balls per game play. So the state of the block of bricks (which ones have been eliminated and which are still present) must stay the same until all of the balls have been used. It would be a good idea to display the number of balls left on the top row of the screen (you can even use the ball character that you create).
Optional:
The following is a partial list of extensions that would make your design more impressive. Feel free to implement any other ideas that you come up with. These are optional, and we do not expect you to implement all (or even most) of them. You should finish the required functionality before attempting to add extra features.
(1) Set up different levels of the video game so that after you “beat round one” (knock out all the bricks) you go on to the next level which is not identical to the first. The next round also can have a different configuration of bricks and possibly an increase in ball speed.
(2) Implement a more complex algorithm for the reflection of the ball. Add different angles (other than 45, 90) at which the ball can be reflected. These angles could be a function of current angle of the ball and the location of the piece in the paddle that the ball comes in contact with. For example, if the ball is currently at 45 degrees and it hits the outer edge of the paddle, it could be reflected now at 30 degrees.
(3) Randomize the position of the ball upon initialization and reset of the game.
(4) Add sound effects. There could be a sound for the reflection of the ball as it hits the paddle or the sides of the screen. When the ball comes in contact with a brick, there could be a different sound. If a player misses the ball, there could also be a sound effect.
External Interfaces
Your design will interact with the VGA monitor and the gamepad. You can find datasheets for both these devices on the project web page.
Milestones
Milestone # 1: You should submit a block diagram that includes all the major components of your design and a fully labeled state diagram or transition table. The diagram must specify precisely the outputs and next-state behavior for every state and input combination. In the lab, you should demonstrate the ability to initialize the VGA monitor and display the game board.
Milestone # 2: You should be able to present the video game’s grid with blocks and paddle. You do not have to show the ball being reflected back and forth across the screen yet, but you should at least get the paddle to move. It would be good to have the ball move vertically between the paddle and bricks.
Worm (against computer)
http://www.stanford.edu/class/ee121/worm.html
Project Description
In this project, you will design a device that allows a user to play the video game Worm using the XSA-100 board, Xstend board, gamepad and VGA monitor. Worm involves two players (worms) that move in all four directions in a grid. The players will control the direction of the worms’ heads and their bodies will follow. The two worms get longer and longer as the game progresses. The object of the game is to outlast the other worm. The game ends when one of the worms “crashes” into one of the four sides, into the other worm, or into its own tail. The two players will control the motion of their respective worms, each using a gamepad. The video game’s array of grid pieces will be displayed on the VGA monitor. By using characters stored in memory for each array element on the screen, we can dramatically simplify control of the game output on the monitor.
Specifications
Required:
1. Set the Worm game array to grid pieces that are 16 bits by 16 bits in size. The array will have a border, but the rest will be blank except for the two worms. They can start out in the same position each time.
2. There should be a RESET signal that can stop any currently running game, and reset the screen to the initialization state.
3. If a player “crashes”, that worm should disappear and the winning worm should flash.
4. If both players “crash” simultaneously, it is a draw and neither player wins. Both worms should disappear at this time.
5. Initialize the worms to be 3 grid pieces in length. Once the game begins, the worms should gradually get longer until the game ends.
6. Once again, a “crash” occurs when one of the worms’ heads attempts to occupy the same bit as a wall, a part of the other worm, or its own tail (see the sketch after this list).
7. Use different colors for the two worms.
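As an aid to planning the state machine, here is a small Python sketch of the crash rule from item 6. The (row, col) cell tuples and Python sets are illustrative data structures, not what the FPGA design would use.

```python
def crashed(new_head, walls, own_body, other_body):
    """A worm crashes when its head would occupy a wall cell, any cell of
    the other worm, or any cell of its own body (including the tail)."""
    return new_head in walls or new_head in other_body or new_head in own_body

def step_outcome(head_a, head_b, walls, body_a, body_b):
    """Apply the crash rule to both worms for one update step.
    A simultaneous crash (including a head-on collision) is a draw."""
    a_crashes = crashed(head_a, walls, body_a, body_b) or head_a == head_b
    b_crashes = crashed(head_b, walls, body_b, body_a) or head_a == head_b
    if a_crashes and b_crashes:
        return "draw"
    if a_crashes:
        return "worm B wins"
    if b_crashes:
        return "worm A wins"
    return "continue"
```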
Recommended:
1. Implement a counter in the top row of the game array. The counter will simply be the grid length of the worms. Initialize this counter to 3 because the worm is initialized to three bits in length.
2. Initialize the screen with random blocks where the worms cannot go. If a worm crashes into one of these blocks, that worm loses and the other worm flashes.
3. Allow the game to be played against the computer. One player will control one worm, while the other worm moves randomly throughout the grid.
Optional:
The following is a partial list of extensions that would make your design more impressive. Feel free to implement any other ideas that you come up with. *These are optional, and we do not expect you to implement all (or even most) of them.* You should finish the required functionality before attempting to add extra features.
1. Set up the game so you play best two out of three and keep score of how many games each player has won.
2. If you decide to implement option #1, try making games 2 and 3 faster. Not only do the worms move faster, but their size increases faster as well.
3. When you play one player, instead of having the computer’s worm totally random, have some sort of algorithm where it tries to trap the head of the other worm.
4. Allow the game to be played by three players using three controllers.
5. Add sound effects. There could be a sound for each time the worms grow and a sound when one worm wins.
External Interfaces
Your design will interact with the VGA monitor and the gamepad. You can find datasheets for both these devices on the project web page.
Milestones
*Milestone #1:* You should submit a block diagram that includes all the major components of your design and a fully labeled state diagram or transition table. The diagram must specify precisely the outputs and next-state behavior for every state and input combination. In the lab, you should demonstrate the ability to initialize the VGA monitor and display the game board.
*Milestone #2:* You should be able to present the video game’s grid with the worms initialized to their correct lengths and different colors. You do not have to show the worms growing in size yet or even what happens when they crash, but you should at least get the worms to move.
Worm (one player)
http://www.stanford.edu/class/ee121/worm1.html
Project Description
In this project, you will design a device that allows a user to play the video game Worm using the XSA-100 board, Xstend board, gamepad and VGA monitor. Worm involves one player (worm) that moves in all four directions in a grid. The player will control the direction of the worm’s head and its body will follow. The worm will get longer each time the worm eats a piece of “food.” There will always be one and only one piece of food that the worm is trying to acquire. Once the worm eats that piece of food, a new piece will randomly appear somewhere else on the game board. The object of the game is to eat as many pieces of food as possible. The game ends when the worm “crashes” into one of the four sides or into its own tail. The player will control the motion of the worm using a gamepad. The video game’s array of grid pieces will be displayed on the VGA monitor. By using characters stored in memory for each array element on the screen, we can dramatically simplify control of the game output on the monitor.
Specifications
Required:
(1) Set the Worm game array to grid pieces that are 16 bits by 16 bits in size. The array will have a border, but the rest will be blank except for the worm and a piece of food. The worm can start out in the same position each time.
(2) There should be a RESET signal that can stop any currently running game, and reset the screen to the initialization state.
(3) If the player “crashes”, that worm should disappear and the score should flash.
(4) Each time the worm eats a piece of food, another piece should randomly appear somewhere else in the game array.
(5) Initialize the worms to be 3 grid pieces in length. Each time the worm eats a piece of food it should get longer.
(6) Once again, a “crash” occurs when the worm’s head attempts to occupy the same bit as a wall or its own tail.
(7) Use different colors for the worm and the food.
Recommended:
(1) Implement a score in the top row of the game array. The score will simply be a multiple of how many pieces of food the worm has eaten.
(2) Initialize the screen with random blocks where the worms cannot go. If a worm crashes into one of these blocks, the game is over.
(3) You could keep track of the high score.
Optional:
The following is a partial list of extensions that would make your design more impressive. Feel free to implement any other ideas that you come up with. *These are optional, and we do not expect you to implement all (or even most) of them.* You should finish the required functionality before attempting to add extra features.
1. When the worm eats a piece of food have the worm speed up as well as get longer.
2. Have the worm not only move faster, but also have its size increase faster.
3. Add sound effects. There could be a sound for each time the worms grow and a sound when the game is over.
External Interfaces
Your design will interact with the VGA monitor and the gamepad. You can find datasheets for both these devices on the project web page.
Milestones
*Milestone # 1:* You should submit a block diagram that includes all the major components of your design and a fully labeled state diagram or transition table. The diagram must specify precisely the outputs and next-state behavior for every state and input combination. In the lab, you should demonstrate the ability to initialize the VGA monitor and display the game board.
*Milestone # 2:* You should be able to present the video game’s grid with the worm initialized to its correct length, and it should be a different color than the pebble of food. You do not have to show the worm growing in size yet or even what happens when it crashes, but you should at least get the worm to move.
MIDI Player
http://www.stanford.edu/class/ee121/labs/project/midi/index.html
*Project Description*
In this project, you will design a device that plays MIDI files using the XSA-100 board, XStend board, and an auxiliary pair of speakers. After you download one or more MIDI files to the SDRAM on the XSA-100 board, your player will read the file sequentially from memory and play back the music according to the file’s specifications. You will generate notes by storing sinusoid samples in the FPGA’s BlockRAM, sampling them at a rate proportional to the pitch of each note, and sending them to the XStend board’s stereo audio codec.
Background Information
The Musical Instrument Digital Interface (MIDI) is a protocol for connecting electronic musical instruments. It is both an asynchronous serial communication protocol and a data format. Like other serial communication protocols, MIDI describes how bytes of data should be assembled together and transmitted over a cable between two devices. But whereas the bytes traveling over most serial communication protocols might mean anything, bytes that travel over MIDI channels encode a description of a musical performance.
An auxiliary part of the MIDI specification is a file format called the Standard MIDI format (SMF) that describes a musical performance using a format that is very similar, and in many cases identical, to the format of the data that passes over MIDI communication channels. You will use this file format in the project.
MIDI is not an audio format like WAV, MP3, or the raw audio samples you used in Lab 6; it describes how music is to be played, not what the music sounds like. It contains information like what notes to play, the time at which the notes occur, the pressure with which the notes are struck, the tempo of the musical score, and the type of instrument to use. Many implementation details are left to the playback device, and the same MIDI data might sound different on different devices.
The MIDI file describes much of the same information as a musical score. It is not identical to a score, but the analogy will suffice for this project. Underlying the MIDI file’s description of a musical score are two fundamental time periods, a division period and a quarter note period. The MIDI file sets the tempo by specifying the number of microseconds per quarter note period. It then defines a division period as a fraction of a quarter note period.
The body of the MIDI file is a series of event descriptions, each of which is associated with a timestamp. The timestamp specifies how many division periods should elapse before the event occurs. The two most common events are Note On and Note Off, which behave exactly the way you would imagine. The Note On event turns on one of 128 specific notes, and the Note Off turns a note off. Of course, more than one note might be on at the same time.
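To make the timing arithmetic explicit, here is a short Python sketch of how a tempo (microseconds per quarter note) and a division setting (division periods per quarter note) turn an event's delta time into seconds. The function names and the example values are illustrative.

```python
def seconds_per_division(us_per_quarter, divisions_per_quarter):
    """Length of one division period in seconds."""
    return (us_per_quarter / 1_000_000) / divisions_per_quarter

def delay_before_event(delta_divisions, us_per_quarter, divisions_per_quarter):
    """Time to wait before the next event, given its delta time in divisions."""
    return delta_divisions * seconds_per_division(us_per_quarter, divisions_per_quarter)

# Example: 500000 us per quarter note (120 beats per minute) with 96 divisions
# per quarter note; a delta time of 96 divisions is exactly one quarter note.
print(delay_before_event(96, 500_000, 96))   # prints 0.5 (seconds)
```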
The MIDI file also might specify many other properties of the score, like what instrument should be playing and how fast the notes should be pressed. It might include multiple voices. We will either ignore these features or use MIDI files that do not include them. The project web page will contain more detailed information on the MIDI file format.
Rumors
The MIDI project is challenging. Some of you might have heard troubling rumors from last quarter’s students about the difficulty of creating a working MIDI player. Rest assured that we have significantly simplified your task this quarter by providing you with
a codec interface, giving you experience with reading files from memory in Lab 6, and switching to a chip that has block RAM components. But building a MIDI player is not easy, and you should make sure that you construct the basic functionality before attempting to add advanced features.
Specifications
**Required:**
1. Download and play single-track, single-program (i.e., single instrument), chordless MIDI files of at least five minutes in length. We will provide sample MIDI files of varying degrees of complexity that you should be able to play.
2. The MIDI player should have a basic user interface. You should have at least a play and a stop button, either from the on-board pushbuttons we have used in the labs or from a joystick. The MIDI player should play in response to one pushbutton, and reset in response to the other. It should display a “P” on the XSA-100 board seven-segment display when playing, an “S” when stopped, and an “E” if it encounters an unsupported feature in a MIDI file. Feel free to develop an alternative user interface if you wish.
3. Play sinusoidal notes with a range of at least two octaves. You may choose the range, but one that includes middle C would make sense. (One common way to generate such notes is sketched after this list.)
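One common way to generate a sinusoid at a selectable pitch from a stored table of samples is a phase accumulator (direct digital synthesis): step through the table with an increment proportional to the note frequency. The Python sketch below shows the idea only; the table size and names are assumptions, and the sample rate is the codec rate quoted under External Interfaces below.

```python
import math

SAMPLE_RATE = 97656.25   # codec sample rate mentioned under External Interfaces
TABLE_SIZE = 256         # assumed size of the stored sine table
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def note_samples(freq_hz, n_samples):
    """Step through the sine table with a phase increment proportional to the
    note frequency; a larger increment produces a higher pitch."""
    phase = 0.0
    increment = freq_hz * TABLE_SIZE / SAMPLE_RATE
    samples = []
    for _ in range(n_samples):
        samples.append(SINE_TABLE[int(phase) % TABLE_SIZE])
        phase += increment
    return samples

middle_c = note_samples(261.63, 1024)   # roughly middle C, for illustration
```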
**Recommended:**
Your design would be much more impressive with these features and you should try to include them. However, they are not easy to implement and you should be sure to complete the required specifications first.
1. Set the tempo as directed by the MIDI file before starting to play. Set the tempo to a constant value while you work on the required features, then make the tempo programmable after you make the other components work.
2. Play chords of at least three notes. You should make your design work with single notes before you try to implement chords.
**Optional**
The following is a partial list of extensions that would make your design more impressive. Feel free to implement any other ideas that you come up with. *These are optional, and we do not expect you to implement all (or even most) of them.* You should finish the required and recommended functionality before attempting to add extra features.
1. Set the tempo several times while playing a score, as directed by the MIDI file.
2. The Note On and Note Off events include information on the pressure with which the note is hit and the speed with which it is released. Incorporate these into your design by adjusting the note’s volume as a function of its pressure.
3. Real instruments do not generate purely sinusoidal notes. Make the note sound better by selectively adding harmonics or an attack and decay.
4. Add support for two voices. An extra voice could be something as plain as a square or sawtooth wave vs. a sine wave, or you might use some better waveforms from extension 3. Play one voice on the left stereo channel and the other voice on the right stereo channel.
5. Provide helpful information on a monitor with the VGA interface. You might display the status information (“P,” “S,” or “E”) or you might display more complicated information like which notes are being played.
6. Allow the user to pause, advance, or rewind play.
7. Add support for multiple scores, and allow the user to select which one s/he wants to play.
8. Gracefully ignore standard MIDI features that your design does not support. This will allow you to download a wider range of MIDI files.
External Interfaces
To build the MIDI player, you will need to use some external interfaces.
1. The SDRAM, using the same interface macro you used in Lab 6.
2. The stereo codec on the XStend board, using a slightly modified interface macro that we will provide for you. It will behave in exactly the same way as the one in Lab 6, but its sample rate will be 97656.25 Hz rather than 8138 Hz. If you wish to change the sample rate to some other value, we can show you how to do this.
3. (optional) The joypad
4. (optional) The VGA interface macro that you used in Lab 5 to display user-friendly information to a monitor.
Milestones
*Milestone # 1:* You should submit a block diagram that includes all the major components of your design and fully labeled state diagrams or transition tables. The diagrams should specify precisely the outputs and next-state behavior for every state and input combination. Be sure to do this carefully, since a little planning can save you hours of debugging time. In the lab, you should demonstrate the ability to play notes independently of the contents of the MIDI file.
*Milestone # 2:* You should be able to download and play CMajor.mid (a C Major scale, not surprisingly) from the project web page. If you’re a melancholy person, you may demonstrate CSharpMinor.mid instead. You do not need to demonstrate chords, the ability to set the tempo, or the ability to play more complicated files.
**RC4 Encryption Cracker**
*Project Description*
In cryptographic systems, a key (K) is used by an encryption algorithm (in this case, RC4) to encrypt a plaintext message (PT) thereby creating a ciphertext (CT). Algebraically this is expressed as CT = RC4(K, PT). The aim of this project is to determine the inverse. Namely, given a ciphertext (CT) and an encryption algorithm (RC4), determine the Key (K) and the plaintext (PT). This is in effect “cracking” the
encryption algorithm. This project is modeled after the Electronic Frontier Foundation’s DES Cracker project (http://www.eff.org/descracker/), but using RC4 instead of DES.
Background Information
RC4 is an important algorithm underlying the SSL protocol that protects information transmission on the Internet. Using Internet Explorer, go to https://banking.wellsfargo.com/. Click on File->Properties and you will see that the connection is protected with 128bit RC4 encryption.
RC4 is an algorithm that is specified in many places. A very good one is at http://www.cs.berkeley.edu/~iang/isaac/hardware/main.html. In fact, the authors of this paper created an RC4 cracker very similar to this project.
The basic outline of the project is the following. A cipher text will be given to you in the SDRAM. You will load the CT into the FPGA and then successively try keys until you get a plaintext that “looks” like a message. That is the message only has ASCII characters in it—the letters, numbers, and spaces. “How to Recognize Plaintext” http://www.counterpane.com/crypto-gram-9812.html#plaintext has more info but this is the basic concept.
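Purely as a software reference for what the hardware has to do, here is a hedged Python sketch of the search: run RC4 over the ciphertext for every candidate 16-bit key and keep the key whose output contains only ASCII letters, digits, spaces, and common punctuation. The plaintext test and the names are illustrative choices, not project requirements.

```python
def rc4(key, data):
    """Textbook RC4: key scheduling, then the keystream XORed with the data."""
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(byte ^ s[(s[i] + s[j]) % 256])
    return bytes(out)

def looks_like_plaintext(pt):
    """Accept only printable ASCII plus tab/newline/carriage return."""
    return all(32 <= b <= 126 or b in (9, 10, 13) for b in pt)

def crack(ciphertext):
    """Try every 16-bit key (two bytes of key material)."""
    for k in range(1 << 16):
        key = bytes([k >> 8, k & 0xFF])
        pt = rc4(key, ciphertext)
        if looks_like_plaintext(pt):
            return key, pt
    return None, None
```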
In order to demo the project, you must successfully decrypt a CT provided and display the resulting PT and K on the VGA display. The K used will only be 16 bits in key material (2 ASCII characters) but will expand internally to the full 256 bytes for the algorithm. The CT will be generated by either of the following equivalent means (“12” is the 16 bit key for this example):
“cat rc4.in | ./rc4.pl 12 >! rc4_perl.txt”
“openssl rc4 -K 12121212121212121212121212121212 -iv 0 -out rc4.txt -nosalt -p -in rc4.in”
The rc4.pl script can be found on many webpages and is pretty cool 😊:
#!/usr/local/bin/perl -- -export-a-crypto-system-sig -RC4-in-3-lines-of-PERL
@k=unpack('C*',pack('H*',shift));sub S{@s[$x,$y]=@s[$y,$x]}for(@t=@s=0..255)
{$y=($k[$_%@k]+$s[$x=$_]+$y)%256;&S}$x=$y=0;for(unpack('C*',<>)){$x++;$y=($s[$x%=256]+$y)%256;&S;print pack('C',$_^=$s[($s[$x]+$s[$y])%256])}
Specifications
Required:
1. Read a 128 Byte Cipher Text message from the SDRAM that was encrypted with a 16 bit RC4 key and display the Plain Text.
2. Display the final results when they are available on the VGA monitor.
3. Play a sound via the codec when the cipher text is cracked.
Recommended:
Your design would be much more impressive with these features and you should try to include them. However, they are not easy to implement and you should be sure to complete the required specifications first.
1. Create a faster RC4 implementation to speed up the cracking (consider using a dual-port memory).
2. Indicate progress (number of keys searched and wall clock time) on the VGA monitor as the algorithm proceeds.
3. Play audio cues that indicate the progress of the algorithm.
4. Indicate on the VGA monitor the number of keys tried and the time to complete the search.
**Optional**
The following is a partial list of extensions that would make your design more impressive. Feel free to implement any other ideas that you come up with. *These are optional, and we do not expect you to implement all (or even most) of them.* You should finish the required and recommended functionality before attempting to add extra features.
1. Implement multiple RC4 crackers in the FPGA and coordinate their operation so as to crack the CT faster. The FPGA is much, much bigger than a single RC4 module.
2. Implement a user interface with the gamepad.
**External Interfaces**
To build the RC4 cracker, you will need to use some external interfaces.
1. The SDRAM, using the same interface macro you used in Lab 6.
2. The stereo codec on the XStend board
3. The VGA interface macro that you used in Lab 5.
4. (optional) the gamepad.
**Milestones**
*Milestone # 1:* You should submit a block diagram that includes all the major components of your design and fully labeled state diagrams or transition tables. The diagrams should specify precisely the outputs and next-state behavior for every state and input combination. Be sure to do this carefully, since a little planning can save you hours of debugging time. In the lab, you should demonstrate the ability of your core RC4 algorithm with a known PT, CT, and K.
*Milestone # 2:* You should be able to decrypt a hard-coded (i.e., not read from the SDRAM) CT and display it to the VGA monitor.
HomalgToCAS
A window to the outer world
Version 2019.12.08
September 2015
Mohamed Barakat
Thomas Breuer
Simon Görtzen
Frank Lübeck
Vinay Wagh
(this manual is still under construction)
This manual is best viewed as an HTML document. The latest version is available ONLINE at:
http://homalg.math.rwth-aachen.de/~barakat/homalg-project/HomalgToCAS/chap0.html
An OFFLINE version should be included in the documentation subfolder of the package.
This package is part of the homalg-project:
http://homalg.math.rwth-aachen.de/index.php/core-packages/homalgtocas
Mohamed Barakat
Email: [email protected]
Homepage: https://mohamed-barakat.github.io
Address: Department of Mathematics,
University of Siegen,
57072 Siegen,
Germany
Thomas Breuer
Email: [email protected]
Homepage: http://www.math.rwth-aachen.de/~Thomas.Breuer/
Address: Lehrstuhl D für Mathematik, RWTH-Aachen,
Templergraben 64
52062 Aachen
Germany
Simon Görtzen
Email: [email protected]
Homepage: http://wwwb.math.rwth-aachen.de/goertzen/
Address: Lehrstuhl B für Mathematik, RWTH-Aachen,
Templergraben 64
52062 Aachen
Germany
Frank Lübeck
Email: [email protected]
Homepage: http://www.math.rwth-aachen.de/~Frank.Luebeck/
Address: Lehrstuhl D für Mathematik, RWTH-Aachen,
Templergraben 64
52062 Aachen
Germany
Vinay Wagh
Email: [email protected]
Homepage: http://www.iitg.ernet.in/vinay.wagh/
Address: E-102, Department of Mathematics,
Indian Institute of Technology Guwahati,
Guwahati, Assam, India.
PIN: 781 039.
Copyright
This package may be distributed under the terms and conditions of the GNU Public License Version 2 or (at your option) any later version.
Acknowledgements
We are very much indebted to Max Neunhöffer who provided the first piece of code around which the package IO_ForHomalg was built. The package HomalgToCAS provides a further abstraction layer preparing the communication.
Contents
1 Introduction
  1.1 HomalgToCAS provides ...
2 Installation of the HomalgToCAS Package
3 Watch and Influence the Communication
  3.1 Functions
  3.2 The Pictograms
4 External Rings
  4.1 External Rings: Representation
  4.2 Rings: Constructors
  4.3 External Rings: Operations and Functions
A Overview of the homalg Package Source Code
References
Index
Chapter 1
Introduction
HomalgToCAS is one of the core packages of the homalg project [hpa10]. But as one of the rather technical packages, this manual is probably not of interest for the average user. The average user will usually not get in direct contact with the operations provided by this package.
Quoting from the Appendix (homalg: The Core Packages and the Idea behind their Splitting) of the homalg package manual (→ homalg: HomalgToCAS):
“The package HomalgToCAS (which needs the homalg package) includes all what is needed to let the black boxes used by homalg reside in external computer algebra systems. So as mentioned above, HomalgToCAS is the right place to declare the three GAP representations external rings, external ring elements, and external matrices. Still, HomalgToCAS is independent from the external computer algebra system with which GAP will communicate and independent of how this communication physically looks like.”
1.1 HomalgToCAS provides ...
• Declaration and construction of
– external objects (which are pointers to data (rings, ring elements, matrices, ...) residing in external systems)
– external rings (as a new representation for the GAP4-category of homalg rings)
– external ring elements (as a new representation for the GAP4-category of homalg ring elements)
– external matrices (as a new representation for the GAP4-category of homalg matrices)
• LaunchCAS: the standard interface used by homalg to launch external systems
• TerminateCAS: the standard interface used by homalg to terminate external systems
• homalgSendBlocking: the standard interface used by homalg to send commands to external systems
• External garbage collection: delete the data in the external systems that became obsolete for homalg
• homalgIOMode: decide how much of the communication you want to see
Chapter 2
Installation of the HomalgToCAS Package
To install this package just extract the package’s archive file to the GAP pkg directory.
By default the HomalgToCAS package is not automatically loaded by GAP when it is installed.
You must load the package with
LoadPackage( "HomalgToCAS" );
before its functions become available.
Please, send me an e-mail if you have any questions, remarks, suggestions, etc. concerning this package. Also, we would be pleased to hear about applications of this package.
Mohamed Barakat, Thomas Breuer, Simon Görtzen, and Frank Lübeck
Chapter 3
Watch and Influence the Communication
3.1 Functions
3.1.1 homalgIOMode
homalgIOMode( str, str2, str3 )  (function)
This function sets different modes which influence how much of the communication becomes visible. Handling of the string str is not case-sensitive. homalgIOMode invokes the global function homalgMode defined in the homalg package with an “appropriate” argument (see the code below). Alternatively, if a second or more strings are given, then homalgMode is invoked with the remaining strings str2, str3, ... at the end. In particular, you can use homalgIOMode( str, "" ) to reset the effect of invoking homalgMode. (A short usage example follows the caution notes below.)
<table>
<thead>
<tr>
<th>( \text{str} )</th>
<th>( \text{str} ) (long form)</th>
<th>mode description</th>
</tr>
</thead>
<tbody>
<tr>
<td>""</td>
<td>""</td>
<td>the default mode, i.e. the communication protocol won't be visible (\text{homalgIOMode}() is a short form for \text{homalgIOMode} (""))</td>
</tr>
<tr>
<td>"a"</td>
<td>"all"</td>
<td>combine the modes "debug" and "file"</td>
</tr>
<tr>
<td>"b"</td>
<td>"basic"</td>
<td>the same as "picto" + \text{homalgMode}( "basic")</td>
</tr>
<tr>
<td>"d"</td>
<td>"debug"</td>
<td>view the complete communication protocol</td>
</tr>
<tr>
<td>"f"</td>
<td>"file"</td>
<td>dump the communication protocol into a file with the name Concatenation( "commands_file_of", CAS, "_with_PID", PID )</td>
</tr>
<tr>
<td>"p"</td>
<td>"picto"</td>
<td>view the abbreviated communication protocol using the preassigned pictograms</td>
</tr>
</tbody>
</table>
All modes other than the "default"-mode only set their specific values and leave the other values untouched, which allows combining them to some extent. This also means that in order to get from one mode to a new mode (without the aim to combine them) one needs to reset to the "default"-mode first.
Caution:
• In case you choose one of the modes "file" or "all" you might want to set the global variable HOMALG_IO.DoNotDeleteTemporaryFiles := true; this is only important if during the computations some matrices get converted via files (using ConvertHomalgMatrixViaFile), as reading these files will be part of the protocol!
• It makes sense for the dumped communication protocol to be (re)executed with the respective external system, only in case the latter is deterministic (i.e. same-input-same-output).
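A short, illustrative GAP session (assuming the homalg packages are installed); the calls below only use what is documented above, and any output is omitted.

```
gap> LoadPackage( "HomalgToCAS" );;
gap> homalgIOMode( "debug" );      # view the complete communication protocol
gap> homalgIOMode( "debug", "" );  # same mode, but reset the effect of homalgMode
gap> homalgIOMode( );              # back to the default mode
```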
Code
```
InstallGlobalFunction( homalgIOMode,
function( arg )
local nargs, mode, s;
nargs := Length( arg );
if nargs = 0 or ( IsString( arg[1] ) and arg[1] = "" ) then
mode := "default";
elif IsString( arg[1] ) then ## now we know, the string is not empty
s := arg[1];
if LowercaseString( s{[1]} ) = "a" then
mode := "all";
elif LowercaseString( s{[1]} ) = "b" then
mode := "basic";
elif LowercaseString( s{[1]} ) = "d" then
mode := "debug";
elif LowercaseString( s{[1]} ) = "f" then
mode := "file";
elif LowercaseString( s{[1]} ) = "p" then
mode := "picto";
else
Error( "the first argument must be a string\n" );
fi;
else
Error( "the first argument must be a string\n" );
fi;
if mode = "default" then
## reset to the default values
HOMALG_IO.color_display := false;
HOMALG_IO.show_banners := true;
HOMALG_IO.save_CAS_commands_to_file := false;
HOMALG_IO.DoNotDeleteTemporaryFiles := false;
HOMALG_IO.SaveHomalgMaximumBackStream := false;
HOMALG_IO.InformAboutCASystemsWithoutActiveRings := true;
SetInfoLevel( InfoHomalgToCAS, 1 );
homalgMode( );
elif mode = "all" then
homalgIOMode( "debug" );
homalgIOMode( "file" );
elif mode = "basic" then
HOMALG_IO.color_display := true;
HOMALG_IO.show_banners := true;
SetInfoLevel( InfoHomalgToCAS, 4 );
if mode = "basic" then
## use homalgIOMode( "basic", "" ) to reset
homalgMode( "basic" );
elif mode = "debug" then
HOMALG_IO.color_display := true;
HOMALG_IO.show_banners := true;
SetInfoLevel( InfoHomalgToCAS, 8 );
homalgMode( "debug" );
## use homalgIOMode( "debug", "" ) to reset
elif mode = "file" then
HOMALG_IO.save_CAS_commands_to_file := true;
elif mode = "picto" then
HOMALG_IO.color_display := true;
HOMALG_IO.show_banners := true;
SetInfoLevel( InfoHomalgToCAS, 4 );
homalgMode( "logic" );
## use homalgIOMode( "picto", "" ) to reset
fi;
if nargs > 1 and IsString( arg[2] ) then
CallFuncList( homalgMode, arg{[ 2 .. nargs ]} );
fi;
end );
```
This is the part of the global function homalgSendBlocking that controls the visibility of the communication.
```
io_info_level := InfoLevel( InfoHomalgToCAS );
if not IsBound( pictogram ) then
pictogram := HOMALG_IO.Pictograms.unknown;
picto := pictogram;
elif io_info_level >= 3 then
picto := pictogram;
## add colors to the pictograms
if pictogram = HOMALG_IO.Pictograms.ReducedEchelonForm and
IsBound( HOMALG_MATRICES.color_BOE ) then
pictogram := Concatenation( HOMALG_MATRICES.color_BOE, pictogram, "\033[0m" );
elif pictogram = HOMALG_IO.Pictograms.BasisOfModule and
IsBound( HOMALG_MATRICES.color_BOB ) then
pictogram := Concatenation( HOMALG_MATRICES.color_BOB, pictogram, "\033[0m" );
elif pictogram = HOMALG_IO.Pictograms.DecideZero and
IsBound( HOMALG_MATRICES.color_BOD ) then
pictogram := Concatenation( HOMALG_MATRICES.color_BOD, pictogram, "\033[0m" );
elif pictogram = HOMALG_IO.Pictograms.SyzygiesGenerators and
IsBound( HOMALG_MATRICES.color_BOH ) then
pictogram := Concatenation( HOMALG_MATRICES.color_BOH, pictogram, "\033[0m" );
elif pictogram = HOMALG_IO.Pictograms.BasisCoeff and
IsBound( HOMALG_MATRICES.color_BOC ) then
pictogram := Concatenation( HOMALG_MATRICES.color_BOC, pictogram, "\033[0m" );
elif pictogram = HOMALG_IO.Pictograms.DecideZeroEffectively and
IsBound( HOMALG_MATRICES.color_BOP ) then
pictogram := Concatenation( HOMALG_MATRICES.color_BOP, pictogram, "\033[0m" );
elif need_output or need_display then
```
3.2 The Pictograms
3.2.1 HOMALG_IO.Pictograms
The record of pictograms is a component of the record HOMALG_IO.
```
Pictograms := rec(
##
## colors:
##
## pictogram color of a "need_command" or assignment operation:
color_need_command := "\033[1;33;44m",
## pictogram color of a "need_output" or "need_display" operation:
color_need_output := "\033[1;34;43m",
##
## good morning computer algebra system:
##
## initialize:
initialize := "ini",
## define macros:
define := "def",
## get time:
time := "ms",
## memory usage:
memory := "mem",
## unknown:
unknown := "??",
##
## external garbage collection:
##
## delete a variable:
delete := "xxx",
## delete several variables:
multiple_delete := "XXX",
## trigger the garbage collector:
garbage_collector := "grb",
##
## create lists:
##
## define a list:
CreateList := "lst",
##
## create rings:
##
## define a ring:
CreateHomalgRing := "R:=",
## get the names of the "variables" defining the ring:
variables := "var",
## define zero:
Zero := "0:=",
## define one:
One := "1:=",
## define minus one:
MinusOne := "-:=",
## mandatory ring operations:
## get the name of an element:
## (important if the CAS pretty-prints ring elements,
## we need names that can be used as input!)
## (install a method instead of a homalgTable entry)
homalgSetName := "a",
## a = 0 ?
IsZero := "a=0",
## a = 1 ?
IsOne := "a=1",
## subtract two ring elements
## (needed by SimplerEquivalentMatrix in case
## CopyRow/ColumnToIdentityMatrix are not defined):
Minus := "a-b",
## divide the element a by the unit u
## (needed by SimplerEquivalentMatrix in case
## DivideEntryByUnit is not defined):
DivideByUnit := "a/u",
## important ring operations:
## (important for performance since existing
## fallback methods cause a lot of traffic):
## is u a unit?
## (mainly needed by the fallback methods for matrices, see below):
IsUnit := "?/u",
## optional ring operations:
## copy an element:
CopyElement := "a>a",
## add two ring elements:
Sum := "a+b",
## multiply two ring elements:
Product := "a*b",
## the (greatest) common divisor:
Gcd := "gcd",
## cancel the (greatest) common divisor:
CancelGcd := "ccd",
### random polynomial:
RandomPol := "rpl",
### numerator:
Numerator := "num",
### denominator:
Denominator := "den",
### evaluate polynomial:
Evaluate := "evl",
### degree of a multivariate polynomial
DegreeOfRingElement := "deg",
### maximal degree part of a polynomial
MaximalDegreePart := "mdp",
### is irreducible:
IsIrreducible := "irr",
### create matrices:
### define a matrix:
HomalgMatrix := "A:=",
### copy a matrix:
CopyMatrix := "A>A",
### load a matrix from file:
LoadHomalgMatrixFromFile := "A<<",
### save a matrix to file:
SaveHomalgMatrixToFile := "A>>",
### get a matrix entry as a string:
MatElm := "<ij",
### set a matrix entry from a string:
SetMatElm := ">ij",
### add to a matrix entry from a string:
AddToMatElm := "+ij",
### get a list of the matrix entries as a string:
GetListOfHomalgMatrixAsString := "\"A\"",
GetSparseListOfHomalgMatrixAsString := ".A.",
## assign a "sparse" list of matrix entries to a variable:
sparse := "spr",
## list of assumed inequalities:
Inequalities := "<>0",
## maximal independent set:
MaximalIndependentSet := "idp",
## mandatory matrix operations:
## test if a matrix is the zero matrix:
## CAUTION: the external system must be able to check
## if the matrix is zero modulo possible ring relations
## only known to the external system!
IsZeroMatrix := "A=0",
## number of rows:
NrRows := "#==",
## number of columns:
NrColumns := "#||",
## determinant of a matrix over a (commutative) ring:
Determinant := "det",
## create a zero matrix:
ZeroMatrix := "(0)",
## create an initial zero matrix:
InitialMatrix := "[0]",
## create an identity matrix:
IdentityMatrix := "(1)",
## create an initial identity matrix:
InitialIdentityMatrix := "[1]",
## "transpose" a matrix (with "the" involution of the ring):
Involution := "A^-*",
## transpose a matrix
TransposedMatrix := "A^-t",
## get certain rows of a matrix:
CertainRows := "===",
## get certain columns of a matrix:
CertainColumns := "|||",
## stack two matrices vertically:
UnionOfRows := "A_B",
## glue two matrices horizontally:
UnionOfColumns := "A|B",
## create a block diagonal matrix:
DiagMat := "A\B",
## the Kronecker (tensor) product of two matrices:
KroneckerMat := "AoB",
## multiply a ring element with a matrix:
MulMat := "a*A",
## multiply a matrix with a ring element:
MulMatRight := "A*a",
## add two matrices:
AddMat := "A+B",
## subtract two matrices:
SubMat := "A-B",
## multiply two matrices:
Compose := "A*B",
## pullback a matrix by a ring map:
Pullback := "pbk",
## important matrix operations:
## (important for performance since existing
## fallback methods cause a lot of traffic):
##
## test if two matrices are equal:
## CAUTION: the external system must be able to check
## equality of the two matrices modulo possible ring relations
## only known to the external system!
AreEqualMatrices := "A=B",
## test if a matrix is the identity matrix:
IsIdentityMatrix := "A=1",
## test if a matrix is diagonal (needed by the display method):
IsDiagonalMatrix := "A=\",
## get the positions of the zero rows:
ZeroRows := "0==",
## get the positions of the zero columns:
ZeroColumns := "0||",
## get "column-independent" unit positions
## (needed by ReducedBasisOfModule):
GetColumnIndependentUnitPositions := "ciu",
## get "row-independent" unit positions
## (needed by ReducedBasisOfModule):
GetRowIndependentUnitPositions := "riu",
## get the position of the "first" unit in the matrix
## (needed by SimplerEquivalentMatrix):
GetUnitPosition := "gup",
## position of the first non-zero entry per row
PositionOfFirstNonZeroEntryPerRow := "fnr",
## position of the first non-zero entry per column
PositionOfFirstNonZeroEntryPerColumn := "fnc",
## indicator matrix of non-zero entries
IndicatorMatrixOfNonZeroEntries := "<>0",
## transposed matrix:
TransposedMatrix := "^tr",
## divide an entry of a matrix by a unit
## (needed by SimplerEquivalentMatrix in case
## DivideRow/ColumnByUnit are not defined):
DivideEntryByUnit := "ij/",
## divide a row by a unit
## (needed by SimplerEquivalentMatrix):
DivideRowByUnit := "-/u",
## divide a column by a unit
## (needed by SimplerEquivalentMatrix):
DivideColumnByUnit := "|/u",
## copy a row into an identity matrix
## (needed by SimplerEquivalentMatrix):
CopyRowToIdentityMatrix := "->-",
## copy a column into an identity matrix
## (needed by SimplerEquivalentMatrix):
CopyColumnToIdentityMatrix := "|>|",
## set a column (except a certain row) to zero
## (needed by SimplerEquivalentMatrix):
SetColumnToZero := "|=0",
## get the positions of the rows with a single one
## (needed by SimplerEquivalentMatrix):
GetCleanRowsPositions := "crp",
## convert a single row matrix into a matrix
## with specified number of rows/columns
## (needed by the display methods for homomorphisms):
ConvertRowToMatrix := "-%A",
## convert a single column matrix into a matrix
## with specified number of rows/columns
## (needed by the display methods for homomorphisms):
ConvertColumnToMatrix := "|%A",
## convert a matrix into a single row matrix:
ConvertMatrixToRow := "#A-",
## convert a matrix into a single column matrix:
ConvertMatrixToColumn := "#A|",
## basic matrix operations:
##
## compute a (r)educed (e)chelon (f)orm:
ReducedEchelonForm := "ref",
## compute a "(bas)is" of a given set of module elements:
BasisOfModule := "bas",
## compute a reduced "(Bas)is" of a given set of module elements:
ReducedBasisOfModule := "Bas",
## (d)e(c)ide the ideal/submodule membership problem,
## i.e. if an element is (0) modulo the ideal/submodule:
DecideZero := "dc0",
## compute a generating set of (syz)ygies:
SyzygiesGenerators := "syz",
## compute a generating set of reduced (Syz)ygies:
ReducedSyzygiesGenerators := "Syz",
## compute a (R)educed (E)chelon (F)orm
## together with the matrix of coefficients:
ReducedEchelonFormC := "REF",
## compute a "(BAS)is" of a given set of module elements
## together with the matrix of coefficients:
BasisCoeff := "BAS",
## (D)e(C)ide the ideal/submodule membership problem,
## i.e. write an element effectively as (0) modulo the ideal/submodule:
DecideZeroEffectively := "DC0",
## optional matrix operations:
## Hilbert-Poincare series of a module:
HilbertPoincareSeries := "HPs",
## Hilbert polynomial of a module:
HilbertPolynomial := "Hil",
## affine dimension of a module:
AffineDimension := "dim",
## affine degree of a module:
AffineDegree := "adg",
## the constant term of the hilbert polynomial:
ConstantTermOfHilbertPolynomial := "P_0",
## differentiate a matrix M w.r.t. a matrix D
Diff := "dif",
## maximal dimensional radical subobject:
MaxDimensionalRadicalSubobject := "V_d",
## radical subobject:
RadicalSubobject := "rad",
## radical decomposition:
RadicalDecomposition := "VxU",
## maximal dimensional subobject:
MaxDimensionalSubobject := "X_d",
## equi-dimensional decomposition:
EquiDimensionalDecomposition := "XxY",
## primary decomposition:
PrimaryDecomposition := "YxZ",
## eliminate variables:
Eliminate := "eli",
## leading module:
LeadingModule := "led",
## the i-th monomial matrix
MonomialMatrix := "mon",
## matrix of symbols:
MatrixOfSymbols := "smb",
## coefficients:
Coefficients := "cfs",
##
## optional module operations:
##
## compute a better equivalent matrix
## (field -> row+col Gauss, PIR -> Smith, Dedekind domain -> Krull, etc ...):
BestBasis := "(\)",
## compute elementary divisors:
ElementaryDivisors := "div",
##
## for the eye:
##
## display objects:
Display := "dsp",
## the LaTeX code of the mathematical entity:
homalgLaTeX := "TeX",
)
```
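A minimal sketch (assuming the package is loaded) of looking up individual pictograms in the record above:
```
gap> LoadPackage( "HomalgToCAS" );;
gap> HOMALG_IO.Pictograms.BasisOfModule;
"bas"
gap> HOMALG_IO.Pictograms.unknown;
"??"
```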
Chapter 4
External Rings
4.1 External Rings: Representation
4.1.1 IsHomalgExternalRingRep
IsHomalgExternalRingRep( R ) (representation)
**Returns:** true or false
The internal representation of homalg rings.
(It is a representation of the GAP category IsHomalgRing.)
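As a hypothetical illustration (here R stands for an external homalg ring created elsewhere, e.g. via CreateHomalgExternalRing; the exact constructor call depends on the external CAS):
```
gap> IsHomalgExternalRingRep( R );
true
gap> IsHomalgRing( R );
true
```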
4.2 Rings: Constructors
4.3 External Rings: Operations and Functions
Appendix A
Overview of the homalg Package Source Code
The package HomalgToCAS is split into several files.
<table>
<thead>
<tr>
<th>Filename .gd/.gi</th>
<th>Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>HomalgToCAS</td>
<td>the global variable HOMALG_IO and the global function homalgIOMode</td>
</tr>
<tr>
<td>homalgExternalObject</td>
<td>homalg external objects, homalgPointer, homalgExternalCASystem, homalgStream,...</td>
</tr>
<tr>
<td>HomalgExternalRing</td>
<td>CreateHomalgExternalRing, HomalgExternalRingElement</td>
</tr>
<tr>
<td>HomalgExternalMatrix</td>
<td>ConvertHomalgMatrix, ConvertHomalgMatrixViaFile</td>
</tr>
<tr>
<td>homalgSendBlocking</td>
<td>homalgFlush, homalgSendBlocking</td>
</tr>
<tr>
<td>IO</td>
<td>LaunchCAS, TerminateCAS</td>
</tr>
</tbody>
</table>
Table: The HomalgToCAS package files
Measure Authoring Tool Release Notes
Version 5.5.0
May 29, 2018
# Table of Contents
## MEASURE AUTHORING TOOL RELEASE NOTES
### ENHANCEMENTS
1. **Updated the MAT to Version 5.5**
### CQL WORKSPACE
1.1 **Replacing included libraries**
1.2 **Program and release options on value set retrieval**
1.3 **Entering value sets and codes with the same name**
1.4 **Applied codes table updates**
1.5 **Applied value sets table updates**
1.6 **Ability to include a code system version on the codes tab**
1.7 **Additions to the shortcut keys and insert icon (attribute builder)**
1.8 **CQL error report**
### MISCELLANEOUS
1.9 **Removal of measure notes**
1.10 **Changes to MAT emails**
1.11 **Added search bar to measure & library sharing pages**
### POPULATION WORKSPACE
1.12 **Population workspace redesign**
1.13 **Populations tabs**
1.14 **Measure observations tab**
1.15 **Stratification tab**
1.16 **View populations tab**
### MAT EXPORTS
1.17 **Removed “using” from human readable terminology section**
1.18 **Codesystem declaration changes on the human readable and HQMF exports**
### CQL-TO-ELM TRANSLATOR
1.19 **CQL-to-ELM translator version 1.2.20**
### SYSTEM FIXES
2.1 **Added the progress bar to additional MAT screens**
2.2 **Fixed “page jumping” issue due to the appearance of the loading bar**
2.3 **Changed the filtering process upon packaging**
Enhancements
The enhancements included in the Measure Authoring Tool (MAT) version 5.5.0 are described below. For further information on the enhancements and processes of the MAT tool, including the use of CQL, please visit the MAT public website and read the 5.5 User Guide found on the Training and Resources tab: https://www.emeasuretool.cms.gov/.
1.0 Updated the MAT to Version 5.5
The MAT 5.5 release to production will deploy on May 29, 2018.
CQL Workspace
1.1 Replacing included libraries
The CQL Workspace and CQL Library Workspace now allow users to replace an included library with a newer version of that library, via the Includes tab. To replace an already included library, users must double click the library they wish to replace and will be brought to the included library page within the Includes tab. An “Edit” icon has been added to the top of the page that gives users the ability to select from a filtered list of available library versions and then select “Apply” to complete the replacement.
Users will be able to replace their library with another, provided the library meets the below criteria:
- For measures, the Replace Library table filters out and does not display any libraries that contain another included library, i.e. a library within a library.
- For standalone libraries, the Replace Library table filters out and does not display any libraries that contain another included library, i.e. a library within a library.
- Libraries containing previous versions of the Quality Data Model (QDM) will not be available for selection from the Replace Library table.
- The Replace Library table will only display CQL libraries that have been versioned.
1.2 Program and Release options on Value Set retrieval
The MAT now allows users to select a specific program and release when retrieving value sets from the Value Set Authority Center (VSAC). If no program and release selection is made, the MAT will continue to retrieve the most current value set. There are two new drop-down menus on the Value Set search box that allow users to select a program and release value prior to retrieving their value set from VSAC. If a program is chosen, a release must also be chosen. Also, if a program/release combination is chosen, version may not be chosen. Release information will be displayed on the Applied Value Sets table for all applied value sets where a program/release was selected.
1.3 Entering Value Sets and Codes with the same name
The MAT now allows users to enter codes and value sets that share the same name in their measures or stand-alone CQL libraries. This requires users to create a suffix when applying an already applied code or value set, to distinguish the new entry from the original. There is no limit on the number of codes or value sets that may be applied to a measure or library. Suffixes applied to value sets and codes will be shown in the MAT exports as well.
When a repeat code or value set is applied, the user will see separate entries on the applied codes or value sets tables. To differentiate between the repeated applied codes and value sets, the suffix chosen by the user will display as part of the Code descriptor or Value Set name on the table. These codes and value sets will also be available for selection through the Insert icon in the attribute builder. The default birthdate and dead codes will not allow for multiple applications and a validation has been built in to prevent adding either of these codes with a user-created suffix.
1.4 Applied Codes table updates
The Applied Codes table has had some additions in functionality and additional columns added:
- The Version Included column is a new addition and will display as blank for each row unless a user has designated they would like to include code system version for that code; the column will then display a green checkmark.
- The Edit column is a new addition to the Applied Codes table and allows for a user to modify an already applied code by selecting “Edit” (pencil icon) on each applied code row.
- The Copy column is a new addition to the Applied Codes table and contains checkboxes that allow users to select applied codes they would like to then paste into another measure or library.
- A new set of icons now displays above the table:
- The copy icon allows users to copy codes to their clipboard that they have selected using the checkboxes in the copy column.
- The paste icon allows users to paste codes from their clipboard to another measure or library.
- The clear icon allows users to clear any selections they have made on the copy column.
### 1.5 Applied Value Sets table updates

The Applied Value Sets table has had some additions in functionality and additional columns added:
- The Release column is a new addition that will display the release information for any applied value set where a user has designated a release, otherwise, it will display blank.
- The Edit column is a new addition to the Applied Value Sets table and allows for a user to modify an already applied value set by selecting “Edit” (pencil icon) on each applied value set row.
- The Copy column is a new addition to the Applied Value Sets table and contains checkboxes that allow users to select applied value sets they would like to then paste into another measure or library.
- A new set of icons now displays above the table:
- The copy icon allows users to copy value sets to their clipboard that they have selected using the checkboxes in the copy column.
- The paste icon allows users to paste value sets from their clipboard to another measure or library.
- The clear icon allows users to clear any selections they have made on the copy column.
### 1.6 Ability to include a Code system version on the Codes tab
When applying retrieved codes to the Applied Codes table, users now can decide if they would like to include the code system version for each applied code. The Codes search box now contains an “Include Code System Version” checkbox. When selected, the code being applied will contain the code system version and this will be shown on the Applied codes table. The code and code system declarations in the View CQL section of the CQL Workspace will reflect the choice made when the code is applied.
Example Code System declarations:
- **Version included**: codesystem "LOINC:2.46": 'urn:oid:2.16.840.1.113883.6.1' version 'urn:hl7:version:2.46'
- **Version not included**: codesystem "LOINC": 'urn:oid:2.16.840.1.113883.6.1'
Example Code declarations:
- **Version included**: code "Anesthesia for procedures on salivary glands, including biopsy": '00100' from "CPT:2018" display 'Anesthesia for procedures on salivary glands, including biopsy'
- **Version not included**: code "Anesthesia for procedures on salivary glands, including biopsy": '00100' from "CPT" display 'Anesthesia for procedures on salivary glands, including biopsy'
The default Birthdate and Dead codes will no longer display code system version when a new measure or library is created. For all existing measures and libraries, the default Birthdate and Dead code system versions will be removed from the CQL on the drafting and cloning processes.
### 1.7 Additions to the shortcut keys and Insert icon (attribute builder)
There have been a few additions to the shortcut keys and insert icon in the MAT:
The below functions have been added to the Ctrl-Alt-F shortcut key, and can also be found under the Pre-Defined Function item type under the attribute builder:
- AgeInWeeks()
- AgeInWeeksAt()
- CalculateAgeInWeeks()
- CalculateAgeInWeeksAt()
The unit for BPM (beats per minute) will now display as "{beats}/min" in the CQL, instead of "{H.B.}/min", when bpm is chosen by using either the Ctrl-Alt-U shortcut key or the attribute builder.
When a code is inserted via the attribute builder, the “~” symbol will now be used going forward in place of the "=" symbol.
Example: .result ~ "Infectious disease, hcv, six biochemical assays (alt, a2-macroglobulin, apolipoprotein a-1, total bilirubin, ggt, and haptoglobin) utilizing serum, prognostic algorithm reported as scores for fibrosis and necroinflammatory activity in liver"
1.8 CQL Error Report
Users are now able to download an error report of their measure or library CQL on the View CQL tab of the CQL Workspace. The button to download the error report is located on the View CQL section and is labeled as "Export Error File". Selecting this icon will generate a .txt file that contains the entirety of the CQL file for that measure or library as it currently stands. At the bottom of the .txt file is a section that will display any errors contained in the CQL, organized in ascending order by line number.
Figure 3: Export Error File icon location
Miscellaneous
1.9 Removal of Measure Notes
The Measure Notes tab has been removed from the MAT as it was determined that they were not being used and are a legacy of prior MAT versions. The measure notes tab was a sub-tab of the Measure Composer.
1.10 Changes to MAT emails
There were three changes to the emails generated out of MAT to improve the clarity of the message for users who may be accessing the MAT in a testing environment:
- Addition of the MAT environment the email is generated from in the subject line of the email.
- Addition of a User ID field in the body of each email letting users know which User ID this email is being generated for.
- A direct link to the environment the email was generated from in the body of each email.
1.11 Added search bar to Measure & Library Sharing pages
Users will now see a search box when sharing their measure and/or their stand-alone CQL Library. This search box allows users to narrow the list of available developers when they wish to share their measure or library with another user. Users will receive a confirmation message after selecting Save and Continue and be redirected to the Measure Library or CQL Library page.
Population Workspace
The most noticeable change in version 5.5 of the MAT is the population workspace redesign. The MAT team considered user feedback on how to make the population workspace more intuitive and, as a result, has rebuilt it.
1.12 Population Workspace redesign
The new Population Workspace follows a left-navigation structure and was built similarly to the CQL Workspace. There are a few noteworthy features of the new Population Workspace to highlight:
- As in previous versions of the MAT, populations will display dynamically based on the Measure Scoring type chosen. If the measure scoring type is changed, the populations shown in the left-hand navigation on the Population Workspace will be added and removed as necessary to match the new scoring type chosen.
- The right-click menu structure has been removed, and all the functionality of the right-click menu has been built into each screen.
- Definitions, Functions and Aggregate Functions may be selected in easy to use drop-down controls on each population, measure observation and stratification tab.
- Users must save their changes by selecting the Save icon on each population tab before moving to another population tab. There is a validation built in that will alert the user if they attempt to move to another tab with unsaved changes left on the page.
1.13 Populations tabs
The Populations screens listed below all share a common layout and functionality. These screens allow for addition or deletion of any of the below populations and allow users to associate a definition to each population.
- Initial Populations
- Denominators
- Denominator Exceptions
- Denominator Exclusions
- Numerators
- Numerator Exclusions
- Measure Populations
- Measure Population Exclusions
Each Population tab contains an “+Add New” link, a Save icon, a single population row, and each row contains delete and View Human Readable icons. All of the populations sections allow for a user to enter an unlimited number of populations on each tab. Each tab must contain at least one population. If a user attempts to navigate away from a tab with unsaved changes, they will be shown a warning message.
Figure 5: Unsaved changes message – Population Workspace
⚠️ You have unsaved changes that will be discarded if you continue. Do you want to continue without saving? (Yes / No)
Figure 6: Population Workspace – Initial Populations tab
1.14 Measure Observations tab
The Measure Observations tab on the new Population Workspace is very similar to the rest of the populations tabs. Instead of a right-click menu structure, users will now make their selections using the Aggregate Function and Function drop-down menus and add additional measure observations by using the “+Add New” link. Each measure observation row contains a delete icon and a View Human Readable icon that will open a new window with a Human Readable view of that measure observation.
There has been one change to the way measure observations are handled in the new Population Workspace. Both an Aggregate Function AND a user-defined Function are required in order to save a measure observation row. If a user attempts to save without valid selections for both fields, they will receive a warning message that prevents the save.
Figure 7: Population Workspace – Measure Observations tab
1.15 Stratification tab
The Stratification tab has been updated to align with the design of the other population tabs. Instead of using the right-click menu structure, Stratification and Stratum now have their own rows. Users are able to select “Add New Stratification”, which will add a new Stratification line as well as a Stratum line.
To add a new Stratum, users will select the “+Add Stratum” link on each Stratification row, allowing them to add an unlimited number of Stratum rows to each Stratification.
Each Stratification and Stratum row contains a delete icon. Each Stratification must contain at least one Stratum and the delete icon will be enabled only if there is more than one Stratum associated to a
Stratification row. Stratification rows may be deleted at any time, provided there is at least one stratification remaining.
Each Stratum row contains a “View Human Readable” icon that will open a new window showing the Human Readable view of each associated Stratum.
**Figure 8: Population Workspace – Stratification tab**
1.16 **View Populations tab**
The View Populations tab is the default view when a user navigates to the Population Workspace. The tree structure is meant to display the entirety of all populations, similar to the old population workspace. The tree order follows the same order as the chosen measure scoring type and will change if the measure scoring type is changed. The View Populations tab is read only and only allows for users to expand and collapse the tree. The View Populations tab is the only available tab on the Population Workspace when the measure is in a read-only state.
**MAT Exports**
1.17 **Removed “using” from Human Readable Terminology section**
The Human Readable export was updated to remove the word “using” in each row of the Terminology section.
1.18 **Codesystem declaration changes on the Human Readable and HQMF exports**
Human Readable Terminology section:
Codesystem declarations will no longer display in the Terminology section of the Human Readable. This was a cosmetic change only.
The second change to the Terminology section is to conditionally display the code system version information for a referenced code system, based on the user's decision to either include or exclude the code system version for the applicable code.
Example (user chose NOT to include code system version for the applied code):
- code "appointment" ("ActMood Code (APT)"")
Example (user chose TO include code system version for the applied code):
- code "appointment" ("ActMood version HL7V3.0_2015-07 Code (APT)"")
**HQMF:**
The HQMF export went through a similar change to support the user’s choice for either inclusion or exclusion of code system version.
Example (user chose NOT to include code system version for the applied code):
- <cql-ext:code codeSystem="2.16.840.1.113883.5.1001" code="APT" codeSystemName="ActMood">
Example (user chose TO include code system version for the applied code):
- <cql-ext:code codeSystem="2.16.840.1.113883.5.1001" code="APT" codeSystemVersion="HL7V3.0_2015-07" codeSystemName="ActMood">
**CQL-to-ELM Translator**
**1.19 CQL-to-ELM Translator version 1.2.20**
The CQL-to-ELM Translator has been upgraded to version 1.2.20. This version of the CQL-to-ELM Translator includes validation for UCUM units used in CQL. It also includes the updated CQL-to-ELM formatter. The MAT will continue to use tab indents in the CQL editor for the 5.5 release.
System Fixes
2.1 Added the progress bar to additional MAT screens
The progress bar used in the MAT has been added to additional MAT screens that were not previously using the progress bar. The progress bar has been added to the following areas in the MAT version 5.5:
- Population Workspace
- CQL Workspace
- Measure Packager tab
2.2 Fixed “page jumping” issue due to the appearance of the loading bar
In previous versions of the MAT, users noted they observed a "page jumping" scenario where the page would shift up once the progress bar reached 100%. The MAT development team resolved this by reserving a designated space at the top of the page for the loading bar, which eliminates the observed page jumping.
2.3 Changed the filtering process upon packaging
When the exports are created, the MAT is required to filter out expressions that exist within a measure but are not used, either directly or indirectly, by the expressions paired to the measure's populations. We have changed the process by which the MAT determines which expressions are used in this fashion. This modification ensures that all expressions used, either directly or indirectly, by the expressions paired to the populations are identified and shown in the exports as expected.
9. Thesis Conclusions
This chapter presents the conclusions of this thesis. It begins with the answers to the questions presented at the beginning of the thesis. This is followed by a discussion of the chief contributions of this work to the field of software engineering. The chapter ends with a short discussion of future research directions and open questions.
9.1 Research Questions and Answers
In section 1, the research questions of this thesis were presented. Each of these questions is reiterated here, along with their answers. We begin with the specific research questions. The answers to these contribute to the answer to the overall research question, which is given afterwards.
The first specific question is as follows:
*RQ-1: How common are architecture patterns in software architectures? In particular, which patterns are commonly found individually and in pairs in certain application domains?*
Most software systems use between 1 and 4 architecture patterns. The most commonly used architecture patterns are, in descending order of frequency, Layers, Shared Repository, Pipes and Filters, Client-Server, Broker, Model View Controller, and Presentation Abstraction Control.
The most common pair of architecture patterns used together was Layers-Broker, followed by the following pairs: Layers-Shared Repository, Pipes and Filters-Blackboard, Client Server-Presentation Abstraction Control, Layers-Presentation Abstraction Control, and Layers-Model View Controller.
Among domains studied, the following patterns were the most common:
<table>
<thead>
<tr>
<th>Domain</th>
<th>Most Common Pattern</th>
</tr>
</thead>
<tbody>
<tr>
<td>Embedded Systems</td>
<td>Pipes and Filters</td>
</tr>
<tr>
<td>Dataflow and Production</td>
<td>Layers</td>
</tr>
<tr>
<td>Information and Enterprise</td>
<td>Shared Repository</td>
</tr>
<tr>
<td>Web-Based Systems</td>
<td>Broker</td>
</tr>
<tr>
<td>CASE and Related Tools</td>
<td>Layers</td>
</tr>
<tr>
<td>Games</td>
<td>Model View Controller</td>
</tr>
<tr>
<td>Scientific Applications</td>
<td>Pipes and Filters</td>
</tr>
</tbody>
</table>
Table 9.1: Patterns Found in Domains
The second question concerned how patterns fit in the big picture of architectural decisions:
*RQ-2: What is the relationship between architecture patterns and architectural decisions, particularly those concerned with quality attributes?*
Architecture patterns embody major architectural decisions about which architectural structure and behavior to employ. In other words, key decisions about the high level structure (and to a lesser extent, the behavior) of the system are very often decisions to use particular architecture patterns. With respect to quality attributes, these decisions are made to put architectural structure and associated behavior in place that the architects believe will satisfy the quality attributes. Architects often base these decisions on characteristics of candidate architecture patterns. Because architecture patterns are well understood and documented, use of architecture patterns helps solve the difficult problem of documenting architectural decisions, the rationale behind them, and their consequences.
A particularly important class of consequences is the impact of the pattern on quality attributes. The use of a particular pattern may have positive or negative impact on a given quality attribute, based on the characteristics of the pattern, thus certain patterns are good or bad fits for certain quality attributes. Among the most common patterns and quality attributes, the following patterns are particularly good fits for these quality attributes:
- Usability: Model View Controller; also Presentation Abstraction Control and Broker
- Security: Layers; also Broker
- Maintainability: Layers; also Broker and Pipes and Filters
- Efficiency: Pipes and Filters
- Reliability: Layers
- Portability: Pipes and Filters and Broker; also Layers and Presentation Abstraction Control
- Implementability: Broker
The next question explores the types of impact they can have on each other.
*RQ-3: What model describes the interaction between patterns and the tactics one uses to implement quality attributes?*
A brief summary of the model is as follows: An architecture pattern affects architectural concerns of a system architecture, among which are the quality attributes. The way this happens is that quality attributes are satisfied through the implementation of specific measures called tactics. The implementation of the
tactics must be done within the context of the architecture patterns. In particular, the tactic must be implemented within components and behavior of the pattern (designated as “pattern participants.”)
Implementation of tactics within a pattern entails some changes to the pattern participants. The types of changes to components and connectors are as follows:
- **Implemented in:** little or no change is required in order to implement the tactic.
- **Replicates:** components and/or connectors are replicated, but their structure remains the same.
- **Add, in the pattern:** new components and/or connectors are added, keeping the structure of the pattern the same (e.g., an additional layer is added in the Layers pattern.)
- **Add, out of the pattern:** new components and/or connectors are added which change the shape of the pattern.
- **Modify:** the structure of components and/or connectors must change.
- **Delete:** a component and/or connector is removed (postulated; not observed.)
It is possible to use a simple annotation scheme to show these changes on a typical component-and-connector architecture diagram.
For a given quality attribute, there are certain tactics implemented to satisfy the quality attribute. A tactic’s impact on a pattern is characterized by the types of changes, as outlined above, and the overall expected difficulty of implementing a given tactic in a given pattern can be gauged; I used a five-level scale.
The model leads to a sub-question:
*RQ-3a: What do we learn about patterns and satisfying quality attributes through the application of this model?*
A study of tactics associated with software reliability showed how the implementation of tactics in various architecture patterns fits in the various types of changes. The impact of the changes was assessed on the aforementioned five-level scale. A controlled experiment showed that this information can be useful in the assessment of work required to implement tactics.
The next question concerns how this model extends to complex systems where multiple patterns and quality attributes are in play:
RQ-4: In a complex system requiring the use of multiple patterns and multiple tactics, what characteristics of the patterns, the tactics, and even the system itself influence where and how tactics are most effectively implemented?
Three important factors influence the implementation of tactics. They are as follows:
1. The nature of the tactic itself indicates whether the tactic influences all components of the architecture, or just a subset of them.
2. Previous decisions about the system become constraints to which the system must conform. In particular, the selection of architecture patterns constrains how tactics are implemented, making it easier or harder to implement a tactic.
3. The requirements of the system with respect to the quality attributes constrain where a tactic is to be implemented. These requirements may cause a tactic to be implemented in the structure of a pattern that is a good fit for it, or in one that is a poor fit for it.
These factors combine to facilitate or hinder the accomplishment of desired quality attributes.
These complex relationships have been explored for a set of the most common tactics associated with reliability.
We now wish to find practical application for this understanding. In the next question, we consider how to use it to help form the architecture.
RQ-5: How does one incorporate patterns into the architectural analysis and synthesis phases of architectural design in order to help the architect consider how the structure impacts the satisfaction of quality attributes?
The use of patterns, or a pattern-driven approach to architecture is entirely compatible with common architectural analysis and synthesis methods. The key steps to using patterns are as follows:
1. Identify the most prominent architectural drivers of the system. These include both functional requirements and quality attributes.
2. Select candidate architecture patterns that address the needs of the architectural drivers.
3. Partition the system by applying a combination of the candidate patterns.
4. Evaluate whether the partitioning satisfies the architectural drivers. This may include:
a. Examine the forces of the pattern
b. Examine the consequences of the pattern
c. Examine the interaction among the patterns selected
5. Perform tradeoffs with respect to the different architecture drivers. These tradeoffs include exploring candidate tactics, and considering their interaction with the candidate architecture patterns.
We continue exploring the practical use of knowledge of the interaction of patterns and quality attributes in the following, where we concern ourselves with how we can learn whether the patterns selected are a good fit for the quality attributes, at the time the architecture is being formed.
*RQ-6: How can one gain insight into the impact of the architecture patterns used on quality attributes early in the development cycle – while the architecture can still be readily modified?*
One can use an architectural evaluation method that is based on traditional architectural evaluations, but is tailored to focus on the critical quality attributes and their interaction with architecture patterns. The evaluation process is designed to be very lightweight so that projects that cannot afford lengthy and expensive architecture reviews can obtain some of the important benefits of architecture reviews.
The main part of a pattern-based architecture process is the review meeting. The main activities of the review meeting are as follows:
1. Review the quality attribute requirements
2. Present and discuss the architecture
3. Identify the patterns in the architecture
4. Examine the interactions between the patterns and quality attributes; consider tactics which are being used or considered, as well as tactics which might be important to use.
5. Identify issues
These activities may be performed iteratively.
We have used this review process and found that it can identify potentially significant architectural issues with respect to the systems’ quality attributes. We have found that the process requires little time and effort, and can be done for small projects.
The answers to the above research sub-questions lead to the answer to the main research question. The main research question is as follows:
*How can architects leverage patterns to create architectures that meet quality attribute requirements, during analysis, synthesis, and evaluation?*
The interaction among architecture patterns and quality attributes is indeed rich. Thus, there are numerous ways in which an architect can take advantage of this interrelationship to create improved software architectures. I have identified the following ways:
1. At the most basic level, an architect can examine the available architecture patterns to understand existing solutions to architectural problems. The architect can focus on the patterns that are most commonly used, especially in particular problem domains. As most systems employ more than one architecture pattern, this also includes being able to focus on the pairs of patterns most commonly used in a domain.
2. A second use is that an architect can use patterns to help capture architectural decisions. That is, if an architect uses a pattern, documenting the use of the pattern leverages the existing pattern documentation; the architect does not need to rewrite it. The documentation of the consequences of the pattern is especially helpful, particularly as many of the consequences impact quality attributes.
3. With the additional insight about architecture patterns’ impact on quality attributes, an architect can make well informed choices about the architecture of the system under design. In particular, the architect can understand the impact of an architecture pattern on the system’s quality attributes, and can use this understanding to make tradeoff decisions concerning the system’s architecture.
4. The detailed information about the interaction of the patterns and tactics can help the architect outline how the quality attribute’s tactics are to be implemented. This includes the very common case where more than one architecture pattern is used; it can help the architect understand in which pattern(s) a tactic will be implemented, and its impact on those patterns.
In addition, the detailed interaction information helps the architect make more accurate estimates of the work needed to implement the system.
5. Through a simple method of annotating architecture diagrams, architects can show where architectural changes are needed to implement quality attributes tactics.
6. By following a simple process, the architect can explicitly use patterns during the design of the architecture, thus gaining the benefits listed here. In particular, an architect can seek out and select patterns that fit the needs of the system being designed. Common architectural design methods can easily be adapted to include the pattern-based approach to architectural design.
7. Architects can learn about potential issues concerning the architecture of the system and the important quality attributes by employing a pattern-based architecture review process.
Some of these pattern-based activities that an architect can do are based on others; some are required, while others are helpful. The most notable relationships are illustrated in the following diagram. Figure 9.1 illustrates the pattern-based architecture activities that help software architects make informed decisions about patterns and tactics to be used, understand how quality attribute tactics can be implemented in the framework of architecture patterns, and show where architectural changes are needed for quality attributes.
Figure 9.1: Pattern-Based Architecture Activities
9.2 Contributions
The answers to the questions presented in the previous section lead to several contributions that this thesis makes to the field of software engineering. In general, this thesis helps software architects design software architectures that better satisfy quality attributes. This is achieved through additional insight and through advanced architectural design and review techniques.
Specific contributions are as follows:
- **Understanding of the use of Architecture Patterns in Practice**: Chapter 2 stated that virtually all software systems employ one or more architecture patterns, and listed the most commonly used architecture patterns, based on a study of 45 software architectures.
- **Understanding of the relationship between Architecture Patterns and Architectural Decisions:** Chapter 3 describes the special role that architecture patterns play in architectural decisions. This highlights the importance of architecture patterns in software architecture. It also shows the common architecture patterns and their impact on common quality attributes, and the reasons for this impact.
- **A Model of How Architecture Patterns and Tactics Interact:** Chapter 4 gives insight into how patterns and tactics interact, models this interaction, and gives architects an easy way to capture where tactics are implemented in the architecture and show it in architecture diagrams.
- **A Method for Annotating Architecture Diagrams with Tactic Information:** Chapter 5 shows how information about the interaction of patterns and tactics can be used in a practical setting to improve the ability of the software to meet quality attribute requirements. This becomes a reference for designers working on high reliability systems.
- **Categories of impact of tactics on complex architectures:** The material in chapter 6 describes that tactics fall into three categories of impact on the components of a system. These categories help one understand how much of an impact the implementation of a tactic will have on the system components. This helps architects decide among alternate tactics, as well as understand which components in an architecture are likely to be affected.
- **PDAP:** The Pattern-Directed Architecture Process described in chapter 7 gives architects a way to use patterns to guide software architecture design. It helps architects take advantage of the characteristics of different architecture patterns. It is a lightweight process that can be easily incorporated into other, more heavyweight software architecture processes.
- **PBAR:** The Pattern-Based Architecture Review process described in chapter 8 gives the architect a way to use patterns and their interactions with quality attributes to evaluate architectures for potential difficulties in satisfying quality attributes.
### 9.3 Limitations
These contributions are necessarily limited in their application. For each of the contributions, I highlight the key limitations.
• **Understanding of the use of Architecture Patterns in Practice:** The state of software products continues to advance, particularly as new software applications, technologies and platforms are developed. Software architectures are evolving as software becomes ever more distributed. The architectures that were studied to learn the dominant architecture patterns may not reflect the common architectures of the future. In a few years, different architecture patterns may be the ones in vogue.
• **Understanding of the relationship between Architecture Patterns and Architectural Decisions:** The key limitation of this contribution is that it depends on the architect – architects are constrained by their knowledge of the patterns. The best informed decisions rest upon the foundation of in-depth knowledge of architecture patterns.
• **A Model of How Architecture Patterns and Tactics Interact:** This model, like any other model, is an approximate representation of some real phenomenon. In other words, there may be real-world cases where the patterns and tactics interact in ways not captured in this model. The model should be taken as a guide, and not used blindly in every situation.
• **A Method for Annotating Architecture Diagrams with Tactic Information:** This notation system, like all notations, has two significant constraints. First, one must learn the notation sufficiently well that the message imparted by the notations is clear at a glance. This takes experience, and is a barrier to those who wish to adopt it. Second, a notation like this needs wide exposure before it becomes widely used, and there is little mechanism for it to be shared with developers. Frankly, it stands a good chance of being used only rarely.
• **Categories of impact of tactics on complex architectures:** The categories of impact are well defined, but a tactic may be used in ways that are different from what was envisioned when the tactic was defined or analyzed. The fluid nature of software development allows wide flexibility of use. This means that the impact depends in part on the particular architectural decisions made in the context of the domain. This means that information on the impact of tactics on multiple patterns must be taken as general guidelines only.
• **PDAP:** The key limitation of PDAP is the amount of knowledge of patterns that the architect has. The less knowledge the architect has about patterns, the less PDAP can be used effectively.
• **PBAR:** The key limitations of PBAR are that it may have difficulty scaling, and that it is a high-level architecture evaluation approach, trading detailed analysis for low cost. Thus it is not appropriate in all situations.
### 9.4 Open Research Questions and Future Work
As with any research, the answering of research questions leads to other open questions, and opportunities for future work. In particular, the following questions are fertile areas for further research:
*How well does the model apply to tactics other than those associated with reliability?*
Reliability tactics appear to have been researched more comprehensively than other tactics. There are some tactics associated with security, which can be readily studied. Studies of the model with tactics beyond those will require identification and characterization of the tactics themselves.
*How can the interaction of patterns and quality attributes be most likely to be of practical use?*
The key is that the pattern-tactic interaction information must be readily available to architects. To this end, each of the well-known architecture patterns must be analyzed, and a complete catalog of the patterns with their non-functional impacts must be created. This work will be most effective when it is complete and available in one place. Cross-referencing capabilities, e.g., searching the catalog by quality attribute, would be extremely useful.
Some patterns and reliability tactics fit particularly well together, and may indeed be commonly used. We would like to investigate architectures to see whether some combinations of patterns and reliability tactics are common. These may form a set of "reliable architecture patterns": variants of architecture patterns especially suited for highly reliable systems. Such information would be an essential part of such a catalog.
*How well does the PBAR process scale up to large industrial software projects?*
Additional studies with different reviewers should be done to strengthen the results and to give greater insights into the qualifications required of the reviewers. We recommend studying PBAR as part of a traditional architecture review in large projects. In other words, the identification of the architecture patterns and their impact on quality attributes can be used as one of the investigative tools within, for example, ATAM. PBAR might complement such existing approaches.
*The model concerns only the so-called runtime tactics, yet there are also design time tactics. Do they fit into a model as well?*
We find that the other tactics described there, the design time tactics, tend to cut across all design partitions, and are implemented implicitly in the code. Therefore, they are not as good a fit for the model as are the run time tactics, nor do they have as well-defined effects on patterns. The model and the design time tactics should be studied in more detail to determine how the design time tactics can be represented in the model and annotation. Besides refining the model and annotation, this should lead to greater insights about the nature of design time tactics versus run time tactics.
*What other relationships among architecture patterns are there?*
One very interesting consequence of implementing the tactics is that since it involves changing the architecture, some changes may actually change the pattern composition of the architecture. An architecture pattern may be added. In certain cases, an existing pattern may even change to a different pattern. Obviously, the transformation of one pattern to another can happen only where patterns are similar. We have seen two examples of this type of transformation in architectures we have evaluated. We intend to study this further in order to understand its architectural implications.
In addition, the following are topics for further research:
Existing pattern variants should be studied both at a structural level and for their impact on quality attributes. In addition, undocumented pattern variants should be studied and common undocumented variants should be documented.
We noted that the architecture diagrams represented different views of the systems, and most incorporated elements of more than one of the 4+1 views. It is possible that architecture patterns are more readily apparent in different views. We intend to study which views match certain patterns better in the sense that the patterns become more visible and explicit. Ideally one specialized view per pattern would solve this problem, but the combination of patterns can complicate things.
Our studies have shown that architects consider the annotation of architecture diagrams with tactic implementation information useful. But longer term, how is it used? How useful is it for maintainers who are learning the system architecture? This is an area for long-term study.
The interaction of tactics with each other should be studied. In particular, it appears that the really interesting interactions occur between tactics from different quality attributes; for example, tactics of fault tolerance and tactics of performance. Studies of reliability tactic interaction can provide specific information as input to such tradeoff analyses.
Each of the tactics introduces some additional behavior into the system. In this respect, one can consider a tactic to be a fault tolerance feature: behavior of the system with the goal of improving fault tolerance.
We noted that timing of actions is particularly important for certain fault tolerance tactics; for example:
- In Ping/Echo and Heartbeat, messages must be sent within a certain time period, or else the component is considered to be in failure.
- With Active and Passive Redundancy, messages must be sent to the redundant components within a certain timeframe; otherwise synchronization can be lost.
We did not study timing in detail, but it appears that timing may be an important issue with Pipes and Filters, where processing of data can be in large units. In addition, sequencing of actions is part of behavior. Both these aspects of behavior should be studied in more detail to understand how they affect the patterns.
The use of architecture patterns in conjunction with common architectural methods can be further explored. In particular, we are interested to explore the interaction of patterns with quality attribute reasoning frameworks. This should include analysis of the interaction of multiple quality attributes within each pattern, in order to understand how to focus on a single critical quality attribute. After the impact of patterns on quality attributes is cataloged, it might be used as some of the knowledge in the reasoning frameworks.
Native SeND kernel API for *BSD
Ana Kukec
University of Zagreb
[email protected]
Bjoern A. Zeeb
The FreeBSD Project
[email protected]
Abstract
In the legacy world of Internet Protocol Version 4 (IPv4), the link layer protocol, the Address Resolution Protocol (ARP), is known to be vulnerable to spoofing attacks, but has nevertheless been in use entirely unsecured. The Neighbor Discovery Protocol (NDP), which in the IPv6 world roughly corresponds to IPv4 ARP, is vulnerable to a similar set of threats if not secured. The Secure Neighbor Discovery (SeND) extensions counter security threats to NDP by offering proof of address ownership, message protection, and router authorization. The current lack of robust support for SeND within the BSD operating system family and drawbacks in the existing reference SeND implementation limit its deployment. We illustrate the protocol enhancements and their implementation by rehashing the known problem scenarios with unsecured NDP and giving a short overview of SeND. We then describe the design and implementation of a new, BSD-licensed, kernel-userspace API for SeND, which mitigates the overhead associated with the reference implementation in FreeBSD, and which aims to improve portability to other BSD-derived operating systems.
1 Introduction
IP version 6 (IPv6) [7] has been designed as the successor to IP version 4 (IPv4). Contrary to the common opinion that IPv6 is primarily a solution to the shortage of public IPv4 addresses, there are many other changes from IPv4 to IPv6, such as the header format simplification, improved support for extensions and options, flow labeling capability, and authentication and privacy capabilities. However, the most significant changes are not in the IP protocol itself, but in the supporting protocols and mechanisms that were developed along with it, for example the ones related to the communication between link local devices.
The communication between IPv4 link local devices is supported by two protocols:
1. Address Resolution Protocol (ARP) that determines a host’s link layer address [17], and
2. Internet Control Message Protocol version 4 (ICMP) that is a messaging system and an error reporting protocol for the IPv4 network layer [18].
ICMP provides various functionalities through the use of ICMP messages; two functionalities important for link local communication are ICMP Router Discovery and ICMP Redirect. ICMP Router Discovery messages [6] deal with the configuration of IP hosts with the IP addresses of neighboring routers, using ICMP Router Advertisement messages and ICMP Router Solicitation messages. Since Router Advertisements are used by routers only to advertise their existence and not their location, there is a separate mechanism that uses ICMP Redirect messages to enable routers to convey information about the optimal, alternate route to hosts. There is also a certain number of ICMP-based algorithms that support the IPv4 communication between link local hosts and are recommended for IPv4, but they are neither required nor widely adopted. [4] defines some possible approaches to solve Dead Gateway Detection, a scenario in which the IP layer must detect the next-hop gateway failure and choose an alternate gateway, but there is no widely accepted IPv4 suite protocol for it.
Even though the previously mentioned features work properly in IPv4, they were developed in an ad hoc manner. They consist of a great number of different protocols, mechanisms, algorithms, and Internet Standards. Both today's Internet use case scenarios and security threat analyses point out their various limitations and the need for enhancements.
IPv6 Neighbor Discovery Protocol (NDP) [15] is a single protocol that corresponds to the combination of all previously mentioned protocols (ARP, ICMP Router
Discovery, ICMP Redirect, and various recommended ICMP mechanisms). Most of the Neighbor Discovery Protocol functionalities are based on the five ICMPv6 control messages (Router Solicitation and Advertisement, Neighbor Solicitation and Advertisement, and Redirect). Router Solicitation is sent by hosts as the request for Router Advertisement. Router Advertisement is sent by routers periodically or as a response to Router Solicitation, to advertise the link local prefix and other options. Neighbor Solicitation is sent by IPv6 hosts to find out a neighbor’s link layer address or to verify that a node is still reachable. Neighbor Advertisement is sent by IPv6 hosts as a response to Neighbor Solicitation or to propagate the link layer address change. Redirect is sent by routers to inform hosts of the better first-hop destination.
Neighbor Discovery Protocol functionalities are classified into two groups: host-host functionalities and host-router functionalities. Host-router functionalities enable the host to locate routers on the link local network (router discovery), to differentiate between the link local network and distant networks (prefix discovery), to find out the parameters of the link local network and neighboring routers (parameter discovery), and to autoconfigure its IPv6 address based on the information provided by a router. Host-host functionalities include the address resolution (ARP functionality in IPv4), the next-hop determination based on the datagram's IP destination address, the determination whether a neighbor is still reachable (neighbor unreachability detection), and the determination of whether the chosen address already exists in the link local network (duplicate address detection). The NDP function that does not belong to either of the two previously mentioned groups is the Redirect function. The Renumbering functionality is a mechanism that takes care of renumbering based on Router Advertisement messages containing the prefix, sent in a timely manner; it is derived from the combined use of neighbor discovery and address autoconfiguration. The Neighbor Discovery Protocol combines all functionalities of the IPv4 supporting protocols for the communication between link local devices, but it also provides many enhancements and improvements over the mentioned set of protocols. A typical example of such an enhancement is Neighbor Unreachability Detection [15] (NUD), one of the fundamental Neighbor Discovery Protocol parts. IPv4 Dead Gateway Detection [17] (DGD) is similar to Neighbor Unreachability Detection in IPv6, but addresses just a subset of the problems that Neighbor Unreachability Detection deals with. IPv4 Dead Gateway Detection is a simple failover mechanism that changes the host's default gateway to the next configured default gateway. There is no possibility to distinguish whether the link local or a remote gateway has failed, or to get any detailed reachability information. Thus there is no possibility for a fail-back to the previous router. Neighbor Unreachability Detection is an enhanced mechanism that allows the node to track detailed reachability information about its neighbor, either a link local host or a router. Based on the use of ICMPv6 messages, it enables the host to fail back to the previous router, to make use of inbound load balancing in case of replicated interfaces, and to inform the neighbors about a change of its link layer address.
Both the IPv4 protocols supporting the link local communication and the Neighbor Discovery Protocol, if not secured, are vulnerable and affected by a similar set of threats. The initial Neighbor Discovery Protocol specification proposed the use of IPsec, specifically the IP Authentication Header (AH) [9] and the IP Encapsulating Security Payload [10], for protection, by authenticating the packets exchanged, to overcome these shortcomings. Unlike the Neighbor Discovery Protocol, which can be secured with Secure Neighbor Discovery (SeND), one of the significant shortcomings of the IPv4 protocols supporting the link local communication, such as the Address Resolution Protocol and other ICMP-based mechanisms, is that no comparable security extension exists for them and they remain in use unsecured.
The next section discusses the main threats associated to the Neighbor Discovery Protocol, illustrating the real world attacks that have never been solved for IPv4, but are solved for IPv6. It will further explain why the initial proposal for the Neighbor Discovery Protocol protection with IPsec was abandoned in favour of SeND.
2 Background
2.1 Neighbor Discovery Protocol (NDP) threats
The Neighbor Discovery Protocol trust models and threats are well known and clearly described in [16]. It illustrates the following attacks:
- Attack on Address Resolution (Figure 1),
- Redirect Attack (Figure 2),
- Duplicate Address Detection (DAD) Attack (Figure 3),
- First-Hop Router Spoofing Attack (Figure 4),
- Address Configuration Attack (Figure 5).
The Neighbor Discovery Protocol [15] offers some basic protection mechanisms. For example, it restricts the IPv6 source address to be either the unspecified address (::/128) or a link-local address, and it requires the hop limit to be set to 255, trying to limit source address spoofing by making sure that packets come from a host on a directly connected network. However, the protection offered by the Neighbor Discovery Protocol itself is not enough to counter most of the known threats. This is due to the fact that the Neighbor Discovery Protocol, as it is, is not able to offer any authentication, message protection or router authorization capabilities.
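As a rough illustration only (a hypothetical helper with simplified logic, not code from any actual ND stack), these built-in restrictions amount to checks of the following kind on every received ND message:

```c
/*
 * Simplified sketch: the kind of sanity checks NDP itself mandates --
 * hop limit 255 and a link-local or unspecified source address.
 */
#include <stdbool.h>
#include <netinet/in.h>

bool
nd_basic_checks(const struct in6_addr *src, int hop_limit)
{
	if (hop_limit != 255)			/* traversed a router: reject */
		return (false);
	if (!IN6_IS_ADDR_LINKLOCAL(src) &&
	    !IN6_IS_ADDR_UNSPECIFIED(src))	/* not an on-link source: reject */
		return (false);
	return (true);
}
```

Checks of this kind only guarantee that the packet originated on the local link; they say nothing about who sent it, which is exactly the gap SeND fills.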
2.2 Neighbor Discovery Protocol and IPsec
The initial Neighbor Discovery Protocol specification proposed the use of the IPsec Authentication Header (AH) to counter the known threats. This approach turned out to be problematic. Theoretically, in the IPv6 architecture, it is possible to secure all IP packets, including ICMPv6 and Neighbor Discovery messages, even the ones sent to multicast addresses. Packets that are supposed to be secured are filtered based on the Security Policy Database, and then protected based on Security Associations maintained automatically by the Internet Key Exchange protocol (IKE). But here we end up with a chicken-and-egg bootstrapping problem [1]. IKE is not able to establish a Security Association between the local hosts because, in order to send the IKE UDP message, it would have to send a Neighbor Solicitation message, which would in turn require a Security Association that does not yet exist. Even if we decide to use a manual configuration for Security Associations, which solves the bootstrapping problem, we would be faced with the problem of maintaining an enormous number of Security Associations, especially when considering multicast links (Neighbor Discovery and Address Autoconfiguration use a few fixed multicast addresses plus a range of 16 million "solicited node" multicast addresses). Even in scenarios with only a small fraction of the theoretically maximum number of addresses, which appear to be very common in case of local communication, statically preconfigured Security Associations make the use of IPsec impractical.
2.3 Secure Neighbor Discovery (SeND)
Neighbor Discovery needed a different approach to counter these threats: a cryptographic extension to the basic protocol that would not require excessive manual keying. To solve the problem, the IETF SeND working group, chartered in 2002, defined the initial SeND specification.
The important thing to notice is that Secure Neighbor Discovery is not a new protocol, but just a set of enhancements to the Neighbor Discovery Protocol. It is based on four new Neighbor Discovery options, prepended to the normal Neighbor Discovery message options, and two new messages.
Secure Neighbor Discovery enhances the Neighbor Discovery Protocol with the following three additional features:
1. address ownership proof,
2. message protection,
3. router authorization.
The address ownership proof prevents the attacker from stealing the IPv6 address, which is a fundamental problem for the router discovery, duplicate address detection and address resolution mechanisms. This feature is based on IPv6 addresses known as Cryptographically Generated Addresses (CGAs). CGA is a mechanism that binds the public component of a public-private key pair to an IPv6 address. It is generated as a one-way hash of the four input values: a 64-bit subnet prefix, the public key of the address owner, the security parameter (sec) and a random nonce (modifier).
\[
\mathrm{CGA}(128) = \mathrm{Prefix}(64) \mid \mathrm{IID}(64)
\]
\[
\mathrm{IID}(64) = \mathrm{hash}(\mathrm{prefix}, \mathrm{pubkey}, \mathrm{sec}, \mathrm{modifier})
\]
The detailed description of the CGA generation procedure is described in RFC3972 [2].
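As a rough sketch of the formula above (simplified and hypothetical: it uses OpenSSL's SHA1() and omits RFC 3972 details such as the Sec-dependent Hash2 extension loop, the u/g bit handling and the DER encoding of the public key), the interface identifier could be derived like this:

```c
/*
 * Simplified CGA interface-identifier sketch (not RFC 3972-complete):
 * hash the concatenation modifier | prefix | collision count | pubkey
 * and take the leftmost 64 bits, encoding Sec in the top three bits.
 */
#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

int
cga_iid_sketch(const uint8_t modifier[16], const uint8_t prefix[8],
    const uint8_t *pubkey, size_t pubkey_len, int sec, uint8_t iid[8])
{
	uint8_t buf[16 + 8 + 1 + 1024];
	uint8_t digest[SHA_DIGEST_LENGTH];
	size_t off = 0;

	if (pubkey_len > 1024)
		return (-1);			/* sketch only: key too large */

	memcpy(buf + off, modifier, 16);	off += 16;
	memcpy(buf + off, prefix, 8);		off += 8;
	buf[off++] = 0;				/* collision count */
	memcpy(buf + off, pubkey, pubkey_len);	off += pubkey_len;

	SHA1(buf, off, digest);

	memcpy(iid, digest, 8);			/* leftmost 64 bits become the IID */
	iid[0] = (uint8_t)((iid[0] & 0x1f) | (sec << 5));	/* encode Sec */
	return (0);
}
```

Verification is then the mirror image: the receiver recomputes the identifier from the received CGA Parameters and compares it with the interface identifier of the claimed address.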
The owner of the CGA address sends all the CGA Parameters, including all required input data for the CGA generation, together with the CGA address to the verifier.
The CGA verification consists of the re-computation and comparison of the received CGA value based on the received CGA parameters, including the public key. However, the hash of the public key itself offers no protection at all if it is not used in combination with a digital signature produced using the corresponding private key. When using CGAs in Secure Neighbor Discovery, the sender signs the message with the private key that only he possesses, which is the key related to the public key used in the generation of the CGA's interface identifier. This prevents an attacker from spoofing a cryptographically generated address. All the information about the CGA parameters, such as the public key used for the CGA verification, is exchanged within the new Neighbor Discovery Protocol option - the CGA option. The impact of collision attacks on CGAs is described in RFC4982 [3]. Attacks against the collision-free property of hashes are known, but they are relevant only to non-repudiation features. The attacker would be able to create two different messages that result in the same hash, and then use them interchangeably. The important thing to notice is that both messages must be produced by the attacker. Since the usage of CGAs in SeND does not include the provision of non-repudiation capabilities, it is not affected by the hash collision attacks.
SeND offers message protection in terms of message integrity protection of all messages relating to neighbor and router discovery, using a new Neighbor Discovery option called the RSA Signature option. It contains a public key digital signature calculated over the message, and thus protects the integrity of the message and authenticates the identity of the sender. The Secure Neighbor Discovery message that the sender signs with its private key includes the link layer information, which creates a secure binding between the IP address and the link layer anchor. In such a way, Secure Neighbor Discovery allows verifying with the signer's public key that the host's IP address is bound to a trustworthy lower layer anchor. The public key trust is achieved either through the CGA address ownership proof (in the neighbor discovery procedure), or through an X.509 certificate chain (in the router discovery procedure), or both. SeND also defines the Timestamp and Nonce options to protect messages from replay attacks, and to ensure the request/response correlation.
The router authorization feature introduces two novelties to Neighbor Discovery:
1. it authorizes routers to act as default gateways for a certain local network, and
2. specifies prefixes that an authorized router may advertise on this certain link.
A new host on the link can easily configure itself using the information learned from the router, while at the same time there is no way a host can tell from the Neighbor Discovery information that the router is actually an
authorized router. If the link is unsecured, the router might be a rogue router. At the moment when the host should verify whether the router is a valid one, the host is not able to do so, since it is not able to communicate with off-link hosts. To solve this situation, SeND introduces two new messages: the Certification Path Solicitation message and the Certification Path Advertisement message. The first one is sent by the newly connected host to the router. The second one is the response sent by the router, and contains the certificate chain forming the certification path that the host uses to validate the router. The certification path consists of the Router Authorization Certificate that authorizes a specific IPv6 node to act as a router, followed by intermediate certificates that lead to a trust anchor trusted both by the router and the host. Trust between the router and the hosts is achieved through a third party - the trust anchor (an X.509 Certification Authority) [5]. The Router Authorization Certificate contains the information about the prefix that the router is authorized to advertise.
3 Implementation
The Neighbor Discovery Protocol is widely supported by many modern operating systems, since NDP support is mandatory for IPv6 network stacks. The code resides mainly in the kernel. However, there are very few Secure Neighbor Discovery implementations. None of the contemporary open source operating systems ships with built-in support for SeND.
3.1 Existing SeND implementations
The open-source SeND reference implementation (send-0.2), originally developed by NTT DoCoMo, works on Linux and FreeBSD. On FreeBSD, this implementation uses a Berkeley Packet Filter (BPF) interface embedded in a netgraph node (ng_bpf) to divert SeND traffic from the kernel to a userland daemon, and vice versa. This approach has two major drawbacks. First, all network traffic (both SeND and non-SeND) has to traverse the ng_bpf filtering node (and the netgraph subsystem in general), which introduces significant processing overhead, effectively prohibiting production deployment of SeND in high-speed networking environments. Second, the current send-0.2 implementation depends on the netgraph subsystem, which is available only in FreeBSD and DragonFlyBSD, making the send-0.2 implementation unusable on other BSD-derived operating systems, such as NetBSD, OpenBSD or Mac OS X.
Figure 6 illustrates the design of DoCoMo’s SeND implementation for FreeBSD. The communication between the Neighbor Discovery stack implemented in kernel and the Secure Neighbor Discovery daemon flows through the chain of netgraph nodes: ng_ether, ng_bpf and ng_socket. Packets that are incoming from the interface’s point of view are protected with Secure Neighbor Discovery options (CGA option, RSA Signature option, Timestamp and Nonce option). Before the kernel will be able to process them in its Neighbor Discovery stack the packets must be validated and the Secure Neighbor Discovery options which are all unknown to kernel must be stripped off. Initially, all incoming packets arrive to the ng_ether “lower” hook, which is a connection to the raw Ethernet device and from there on to the ng_bpf “tolower” hook. That netgraph node will filter out the incoming packets that are protected by SeND options and pass these packets through an ng_socket “out” hook to the SeND daemon in user space, rather than passing them on inside the kernel for normal upper layer processing.
In userland, the Secure Neighbor Discovery options are checked. Upon successful validation, all Secure Neighbor Discovery options are removed, and what is injected back into the kernel, through the ng_socket “out” hook, the ng_bpf “toupper” hook and the ng_ether “upper” hook, is a pure Neighbor Discovery message. The kernel will then pass the packets on through the normal input path to the upper layers and process the Neighbor Discovery information. In case the daemon cannot successfully validate the SeND options, it will silently drop the packet.
Packets that are outgoing from the interface’s point of view must be sent to Secure Neighbor Discovery daemon just before they are supposed to exit the outgoing interface. After the kernel upper layer processing, which includes the Neighbor Discovery stack processing, all outgoing packets are forwarded through the ng_ether “upper”
hook to the ng_bpf node. They are injected into userland, where the Secure Neighbor Discovery daemon adds additional options to protect the packet. On that way, packets flow through the ng_bpf "out" hook and the ng_socket "in" hook to userland. The Secure Neighbor Discovery daemon prepends the CGA option, the RSA Signature option, and the Timestamp and Nonce options to the normal Neighbor Discovery options in the packet, and sends the packet back to the kernel through the ng_socket "in" hook and the ng_bpf "tolower" hook to ng_ether. Packets then leave the interface through the ng_ether "lower" hook, which is the direct connection to the lower device link layer.
As mentioned previously, Secure Neighbor Discovery also enhances the Neighbor Discovery Protocol with two new messages that participate in the process of router authorization. Neither the Certification Path Solicitation message nor the Certification Path Advertisement message is processed in the Neighbor Discovery kernel stack, since they are not part of the basic Neighbor Discovery Protocol. Thus both new messages are not exchanged through netgraph nodes, but through a separate socket.
While the NTT DoCoMo implementation had the advantage that it was written to be distributed independently of the operating system, not needing any operating system changes, it had the drawbacks of using the netgraph subsystem as well as hitting the Berkeley Packet Filter for every packet. To address those problems the operating system itself has to be extended, and the following sections will discuss those changes.
3.2 Initial design decisions
- Avoid the use of netgraph.
Netgraph itself introduces a significant processing overhead. Secondly, as the netgraph subsystem is not available throughout the entire BSD operating system family, it was not considered to be an option for a portable implementation. Further, avoiding the need for netgraph could make an implementation even more portable to other Unices as well.
- Avoid the use of BPF.
Using the Berkeley Packet Filter meant that all packets, whether forwarded, destined for the local system or locally originated, would be affected and that this would reduce the performance of a lot of systems, especially if connected to high speed networks and processing lots of packets per second.
- Only defer processing of packets that might be affected by Secure Neighbor Discovery.
As only a few ICMPv6 Neighbor Discovery packets are actually affected by SeND, it was clear that we should defer processing of only those few packets, rather than all. We would also never be interested in packets that were invalid at a certain (lower) layer. Letting the already existing kernel code do those checks and the handling for us would mitigate the risk of possible exploits through crafted packets outside the core problem domain of Secure Neighbor Discovery.
- Trigger only on the Secure Neighbor Discovery in case SeND code was loaded.
Using kernel hooks that will not fire unless the send.ko kernel module was loaded would ensure that normal Neighbor Discovery processing would not be affected in the default case. In case the kernel module was loaded, it would guarantee that all messages traverse properly through the Neighbor Discovery stack, as if there were no SeND daemon involved in the processing.
- Use routing control sockets.
The routing control sockets have been chosen for their simplicity to exchange messages between kernel and userland, as they are easy to extend beyond the scope of pure routing messages. Actually this had been done before by the net80211 stack. Alternatives would have been to introduce a new, private interface or extend another existing one, like the PF_KEY Key Management API [14], which would have been way more complex.
- Add as little new code to the kernel as possible.
It was clear that changes to the kernel should be kept to a minimum to ease portability and review, as well as reducing the risk of introducing problems complicating normal processing paths.
- Keep the separate socket to exchange Certification Path Solicitations and Certification Path Advertisements.
Since those messages are exchanged end-to-end between Secure Neighbor Discovery daemons without the use of the Neighbor Discovery kernel code, there is no need to modify the kernel for them; their processing is kept entirely in user space.
- Keep the user space implementation.
If possible and to not re-invent the wheel of handling the configuration and the actual processing of the SeND payload, the NTT DoCoMo SeND daemon should be kept but modified for the new kernel-userland API. This would further allow already existing users to update without the need for changes in their deployment (apart from kernel and daemon updates).
The goal of the changes was to design and implement a new kernel-userspace API for SeND that mitigates the overhead associated with netgraph and BPF and is easily portable.
In order to accomplish the implementation of such an API, we separated the kernel changes into three main parts:
1. Processing hooks to the existing Neighbor Discovery (ND) input and output code.
2. The SeND kernel module for the dispatching logic.
3. Extensions to the routing control sockets for the SeND kernel-userland interface.
The basic code flow is as follows: incoming Secure Neighbor Discovery packets or outgoing Neighbor Discovery packets are sent to userland through the send input hook. The packets are then passed through the routing socket to the Secure Neighbor Discovery daemon, either for validation of their protection (incoming packets) or for adding the protection (outgoing packets). On the way back to the kernel, packets traverse the routing socket again, but then pass through the send output hook. While the incoming packets are sent back to the Neighbor Discovery stack in the kernel, outgoing packets are sent from the output hook to the if_output() routines.
In the following sections we will describe the individual changes for each part in more detail.
The changes to the IPv6 part of the network stack can be separated into Neighbor Discovery input and output path.
For the input path, the changes were mainly to the `icmp6_input()` function. There we have to divert the ND packet for the following ICMPv6 types: ND_ROUTER_SOLICIT, ND_ROUTER_ADVERT, ND_NEIGHBOR_SOLICIT, ND_NEIGHBOR_ADVERT and ND_REDIRECT. Instead of directly calling the respective function for direct processing of those ND types, we first check whether the send_input_hook, and with that SeND processing, is enabled. If it is, we pass the packet to the send.ko kernel module for dispatching to user space. If SeND processing is not enabled, the packet will follow the standard code path to the normal ND handler function.
Pseudo-Code:
```c
	...
	case ND_??????:
		...
		/*
		 * Send incoming SeND-protected/ND
		 * packet to user space.
		 */
		if (send_input_hook != NULL) {
			send_input_hook(m, ifp, SND_IN, ip6len);
			return (IPPROTO_DONE);
		}
		nd6_??_input(m, off, icmp6len);
		...
		break;
	...
```
For the output paths the changes are a bit more diverse and complicated. This is because there are three different ways in which outgoing Neighbor Discovery packets can be sent:
1. via nd6_na_input() when flushing the "hold queue" (a list of packets that could not be sent out because of the formerly missing link layer information of the next-hop) in response to the newly learned link layer information.
2. by `icmp6_redirect_output()` function, `nd6_ns_output()`, or `nd6_na_output()`,
3. or from user space applications like `rtsol(8)` or `rtadvd(8)` via `rip6_output()`.
None of those functions directly outputs the packet, and as we need to know the IPv6 header for the address, we have to postpone SeND processing to a later point in the output path. To be able to identify the packets later though, we add an attribute, a "tag", to the `mbuf` in the formerly mentioned functions, if SeND processing is enabled. We also save the type as meta-information along the way, though this is only used for assertions.
Pseudo-Code:
```c
	struct m_tag *mtag;

	if (send_input_hook != NULL) {
		mtag = m_tag_get(PACKET_TAG_ND_OUTGOING,
		    sizeof(unsigned short), M_NOWAIT);
		if (mtag == NULL)
			goto fail;
		*(unsigned short *)(mtag + 1) = nd->nd_type;
		m_tag_prepend(m, mtag);
	}
```
As you might notice, there is a slight difference in processing the outgoing Neighbor Solicitation, Neighbor Advertisement and Redirect messages compared to the processing of Router Solicitation and Router Advertisement messages.
Neighbor Solicitations, Neighbor Advertisements and Redirects are handled fully in the Neighbor Discovery kernel stack. Generated messages are tagged with the `m_tag` `PACKET_TAG_ND_OUTGOING` right after they are recognized in the Neighbor Discovery kernel stack as output messages. This happens in `sys/netinet6/nd6_nbr.c` in the `nd6_ns_output()` and `nd6_na_output()` functions, as well as in `icmp6.c` in `icmp6_redirect_output()`.
The difference with outgoing Router Solicitation and Router Advertisement messages is that they are generated by the rtsol and rtadvd daemons and not by the kernel itself. Because of that, we cannot easily tag the packet there. We solved this problem by using the already available socket, packet type and ICMPv6 information in `rip6_output()` in `sys/netinet6/raw_ip6.c` and conditionally tagging those packets there as well.
Pseudo-Code:
```c
	if (send_input_hook != NULL &&
	    so->so_proto->pr_protocol == IPPROTO_ICMPV6) {
		switch (icmpv6_type) {
		case ND_ROUTER_ADVERT:
		case ND_ROUTER_SOLICIT:
			mtag = m_tag_get(PACKET_TAG_ND_OUTGOING,
			    sizeof(unsigned short), M_NOWAIT);
			if (mtag == NULL)
				goto fail;
			m_tag_prepend(m, mtag);
		}
	}
```
Our tests showed that neither rtadvd nor rtsol, nor any other third-party user space application sending RA or RS messages, needs to be modified for SeND processing, as that is handled transparently for them, with only minimal changes to the kernel.
Depending on the code path, packets will be passed on to `ip6_output()`, which will amongst other things add the IPv6 header, and to `nd6_output_lle()`, which would pass the packet to the interface's output queue. Prior to this step, we check if the packet was previously tagged by us and defer it for output path SeND processing (Figure 8).
Pseudo-Code:
```c
	if (send_input_hook != NULL) {
		mtag = m_tag_find(m, PACKET_TAG_ND_OUTGOING, NULL);
		if (mtag != NULL) {
			/* Tagged ND packet: defer to the SeND daemon. */
			send_input_hook(m, ifp, SND_OUT, ip6len);
			return;
		}
	}
```
The send.ko kernel module consists of three things: the `send_input_hook` and the `send_output_hook`, as well as the module handling logic that also takes care of enabling or disabling the hooks upon load and unload; a minimal sketch of this module handling logic is given after the description of the hooks below.
The input and output hooks are named after the direction between kernel and userland. It should not be confused with the incoming and outgoing direction of the Neighbor Discovery packets.
- The `send_input_hook` takes packets from the IPv6 network stack’s input and output paths and passes them on to the kernel-userland interface for processing by the Secure Neighbor Discovery daemon.
- The `send_output_hook` gets packets from the userland-kernel interface after processing by the Secure Neighbor Discovery daemon to re-inject the packets back into the IPv6 network stack.
In addition both hooks take an argument that describes the direction of the packet:
- `SND_IN` is used for packets originated in the IPv6 input path. These packets are usually protected by Secure Neighbor Discovery options and are sent to userland first via the `send_input_hook` to be validated and all additional options to be stripped off. When the packets are sent back again to kernel for further Neighbor Discovery kernel stack processing they are still tagged with `SND_IN` even though they pass the `send_output_hook` (Figure 7).
- `SND_OUT` describes both reply and locally originated outgoing packets. These pure Neighbor Discovery packets are sent to userland to be protected with the Secure Neighbor Discovery options, after the normal processing in the Neighbor Discovery kernel stack, via the `send_input_hook`. Once userspace is done, they are sent back to the kernel via the `send_output_hook` to be sent out of the interface using the standard output routines (Figure 8).
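The following is a hypothetical sketch, not the actual send.ko source: hook names and signatures follow the paper's pseudo-code and are assumptions, and the hook pointers are assumed to be defined in the ND code as shown above. It illustrates module handling logic that enables the hooks on load and disables them on unload.

```c
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/errno.h>

struct mbuf;
struct ifnet;

/* Hook pointers consulted by the ND input/output paths (see the pseudo-code above). */
extern int (*send_input_hook)(struct mbuf *, struct ifnet *, int, int);
extern int (*send_output_hook)(struct mbuf *, struct ifnet *, int, int);

/* Stub bodies: the real functions wrap packets into routing-socket messages. */
static int
send_input(struct mbuf *m, struct ifnet *ifp, int direction, int ip6len)
{
	/* Sketch only: dispatch the packet to the SeND daemon here. */
	return (0);
}

static int
send_output(struct mbuf *m, struct ifnet *ifp, int direction, int ip6len)
{
	/* Sketch only: re-inject the packet into the network stack here. */
	return (0);
}

static int
send_modevent(module_t mod, int type, void *arg)
{
	switch (type) {
	case MOD_LOAD:
		/* From now on ND packets are dispatched via the hooks. */
		send_input_hook = send_input;
		send_output_hook = send_output;
		return (0);
	case MOD_UNLOAD:
		/* Restore unmodified Neighbor Discovery processing. */
		send_input_hook = NULL;
		send_output_hook = NULL;
		return (0);
	default:
		return (EOPNOTSUPP);
	}
}

static moduledata_t send_mod = {
	"send",
	send_modevent,
	NULL
};

DECLARE_MODULE(send, send_mod, SI_SUB_PROTO_DOMAIN, SI_ORDER_ANY);
```

The important property is visible in the sketch: when the module is not loaded, both pointers are NULL and the hooks in the ND paths never fire, so plain Neighbor Discovery processing is unchanged.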
The last changes needed in the kernel were those required to interact with userspace. The routing control socket interface was chosen for its simplicity and its flexibility to be extended.
Messages between the Neighbor Discovery kernel stack and send.ko module and the Secure Neighbor Discovery daemon are exchanged through the routing socket.
The routing message type in the `rt_msghdr` structure of a routing message indicating a Secure Neighbor Discovery event is `RTM_SND`, defined in `sys/net/route.h`. The `rtm_so` field of the routing message, which is used by the sender to identify the action, is set to either `RTM_SND_IN` or `RTM_SND_OUT`. This is done in parallel to `SND_IN` or `SND_OUT`, indicating either the incoming or outgoing direction of messages that are passing through the routing socket. Again, the direction is independent from the send.ko module input or output hook naming.
The `rt_securendmsg()` function in `sys/net/rtsock.c` handles the generation of the routing socket message indicating the Secure Neighbor Discovery event, and it preserves all the existing functions, i.e. for appending the Neighbor Discovery or Secure Neighbor Discovery data to the routing message header. The same had been done before for the net80211 stack with `rt_ieee80211msg()`.
Input from userland back to the kernel is handled by extending the `route_output()` function. The `rt_msghdr` is stripped off, and the packet is passed to the send.ko `send_output_hook` again.
4 Future work
The decision to use the routing control socket for the interaction with userspace was made to overcome complexities that would appear with the alternative approaches. However, this design decision has a drawback, due to the inability of the routing socket to provide better control of the related daemon. A first step to improve our solution is to replace the routing socket in order to provide appropriate control over the active daemon and a default policy in case there is no active daemon in user space.
Along with the development of the native kernel API for SeND, we have continued the development of a Secure Neighbor Discovery userspace application. The current implementation is still based on NTT DoCoMo's initial send-0.2 version. See the availability section for where to find our version of the SeND userspace implementation. Future steps in the development of the user space application will include the implementation of the new Secure Neighbor Discovery specifications that have been developed in the IETF CGA and SeND maIntenance (CSI) working group. They are related to the DHCPv6 and CGA interaction [8], the support of hash agility in Secure Neighbor Discovery [13], the support of proxy Neighbor Discovery for Secure Neighbor Discovery [12] and the certificate management in the authorization delegation discovery process [11].
5 Conclusion
This paper motivates the need for the Secure Neighbor Discovery extension to counter threats to the Neighbor Discovery Protocol, by illustrating the set of security threats, the protocol enhancements that counter those threats, and their implementation. It also describes our implementation of a native kernel-userspace SeND API for *BSD.
Our prototype is compliant with the Secure Neighbor Discovery specification, both in host-host scenarios and in router-host scenarios. In case the send.ko module is not loaded, the kernel operates just as if there were no additional Secure Neighbor Discovery code involved. We successfully overcame the major drawbacks of the existing SeND implementation for FreeBSD by eliminating the use of netgraph and the Berkeley Packet Filter. Our code does not affect other ICMPv6 or IPv6 packets in any way. We developed an effective and portable solution for Secure Neighbor Discovery, while introducing as little new code into the kernel stack as possible.
As the send.ko kernel module acts as a gateway between the network stack and the userland interface, it will be easy to adapt the user space interface to something more fitting without the need to change the kernel network stack again.
6 Acknowledgements
I would like to thank Google for supporting the implementation work and AsiaBSDCon for supporting this publication. Also, I would like to thank Marko Zec for the review and inspiring suggestions on both the implementation and the paper.
References
Data-flow analysis
Data-flow analysis is a global analysis framework that can be used to compute – or, more precisely, approximate – various properties of programs. The results of those analyses can be used to perform several optimisations, for example:
• common sub-expression elimination,
• dead-code elimination,
• constant propagation,
• register allocation,
• etc.
Example: liveness
A variable is said to be live at a given point if its value will be read later. While liveness is clearly undecidable, a conservative approximation can be computed using data-flow analysis. This approximation can then be used, for example, to allocate registers: a set of variables that are never live at the same time can share a single register.
Requirements
Data-flow analysis requires the program to be represented as a control flow graph (CFG).
To compute properties about the program, it assigns values to the nodes of the CFG. Those values must be related to each other by a special kind of partial order called a lattice.
We therefore start by introducing control flow graphs and lattice theory.
Control-flow graphs
A control flow graph (CFG) is a graphical representation of a program. The nodes of the CFG are the statements of that program. The edges of the CFG represent the flow of control: there is an edge from $n_1$ to $n_2$ if and only if control can flow immediately from $n_1$ to $n_2$. That is, if the statements of $n_1$ and $n_2$ can be executed in direct succession.
In the CFG, the set of the immediate predecessors of a node $n$ is written $\text{pred}(n)$. Similarly, the set of the immediate successors of a node $n$ is written $\text{succ}(n)$.
A basic block is a maximal sequence of statements for which control flow is purely linear. That is, control always enters a basic block from the top – its first instruction – and leaves from the bottom – its last instruction. Basic blocks are often used as the nodes of a CFG, in order to reduce its size.
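As an illustration only (the instruction representation below is my own assumption, not something defined in these notes), basic blocks can be formed by starting a new block at every label and ending one after every jump or branch:

```python
def basic_blocks(instructions):
    """Split a linear instruction list into basic blocks.

    Each instruction is assumed to be a (kind, text) pair, where kind is one
    of "label", "jump", "branch" or "plain". A new block starts at every
    label; a block ends after every jump or branch.
    """
    blocks, current = [], []
    for kind, text in instructions:
        if kind == "label" and current:      # a label starts a new block
            blocks.append(current)
            current = []
        current.append((kind, text))
        if kind in ("jump", "branch"):       # control leaves after a jump/branch
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks
```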
Lattices
A lattice is a partially ordered set in which any two elements have a unique supremum (also called a least upper bound or join) and infimum (also called a greatest lower bound or meet).
Partial order
A partial order is a mathematical structure \((S,\leq)\) composed of a set \(S\) and a binary relation \(\leq\) on \(S\), satisfying the following conditions:
1. reflexivity: \(\forall x \in S, x \leq x\)
2. transitivity: \(\forall x,y,z \in S, x \leq y \land y \leq z \Rightarrow x \leq z\)
3. anti-symmetry: \(\forall x,y \in S, x \leq y \land y \leq x \Rightarrow x = y\)
Partial order example
In Java, the set of types along with the subtyping relation form a partial order.
According to that order, the type `String` is smaller, i.e., a subtype of the type `Object`.
The type `String` and `Integer` are not comparable: none of them is a subtype of the other.
Upper bound
Given a partial order \((S,\leq)\) and a set \(X \subseteq S\), \(y \in S\) is an upper bound for \(X\), written \(X \leq y\), if
\[
\forall x \in X, x \leq y
\]
A least upper bound (lub) for \(X\), written \(\bigsqcup X\), is defined by:
\[
X \leq \bigsqcup X \land \forall y \in S, X \leq y \Rightarrow \bigsqcup X \leq y
\]
Notice that a least upper bound does not always exist.
Lower bound
Given a partial order \((S,\leq)\) and a set \(X \subseteq S\), \(y \in S\) is a lower bound for \(X\), written \(y \leq X\), if
\[
\forall x \in X, y \leq x
\]
A greatest lower bound (glb) for \(X\), written \(\bigsqcap X\), is defined by:
\[
\bigsqcap X \leq X \land \forall y \in S, y \leq X \Rightarrow y \leq \bigsqcap X
\]
Notice that a greatest lower bound does not always exist.
Lattice
A lattice is a partial order \((S,\leq)\) for which \(\bigsqcup X\) and \(\bigsqcap X\) exist for all \(X \subseteq S\).
A lattice has a unique greatest element, written \(\top\) and pronounced "top", defined as \(\top = \bigsqcup S\).
It also has a unique smallest element, written \(\bot\) and pronounced "bottom", defined as \(\bot = \bigsqcap S\).
The height of a lattice is the length of the longest path from \(\bot\) to \(\top\).
Finite partial orders
A partial order \((S,\leq)\) is finite if the set \(S\) contains a finite number of elements.
For such partial orders, the lattice requirements reduce to the following:
- \(\top\) and \(\bot\) exist,
- every pair of elements \(x,y\) in \(S\) has a least upper bound – written \(x \sqcup y\) – as well as a greatest lower bound – written \(x \sqcap y\).
Cover relation
In a partial order \((S, \leq)\), we say that an element \(y\) covers another element \(x\) if:
\[(x < y) \land (\forall z \in S,\ x < z \leq y \Rightarrow z = y)\]
where \(x < y \equiv x \leq y \land x \neq y\).
Intuitively, \(y\) covers \(x\) if \(y\) is the smallest element greater than \(x\).
Hasse diagram
A partial order can be represented graphically by a Hasse diagram.
In such a diagram, the elements of the set are represented by dots.
If an element \(y\) covers an element \(x\), then the dot of \(y\) is placed above the dot of \(x\), and a line is drawn to connect the two dots.
Hasse diagram example
Hasse diagram for the partial order \((S, \preceq)\) where \(S = \{0, 1, \ldots, 7\}\) and \(x \preceq y \equiv (x \mathbin{\&} y) = x\), where \(\&\) denotes bitwise and.
Partial order examples
Which of the following partial orders are lattices?
(Six Hasse diagrams, numbered 1 to 6.)
Monotone function
A function \(f : L \rightarrow L\) is monotone if and only if:
\[\forall x, y \in L, x \leq y \Rightarrow f(x) \leq f(y)\]
This does not imply that \(f\) is increasing, as constant functions are also monotone.
Viewed as functions, \(\sqcup\) and \(\sqcap\) are monotone in both arguments.
Fixed points
Fixed point theorem
Definition: a value $v$ is a fixed point of a function $f$ if and only if $f(v) = v$.
Fixed point theorem: In a lattice $L$ with finite height, every monotone function $f$ has a unique least fixed point $\text{fix}(f)$, and it is given by:
$$\text{fix}(f) = \bot \sqcup f(\bot) \sqcup f^2(\bot) \sqcup f^3(\bot) \sqcup \ldots$$
Fixed points and equations
Fixed points are interesting as they enable us to solve systems of equations of the following form:
$$x_1 = F_1(x_1, \ldots, x_n)$$
$$x_2 = F_2(x_1, \ldots, x_n)$$
$$\vdots$$
$$x_n = F_n(x_1, \ldots, x_n)$$
where $x_1, \ldots, x_n$ are variables, and $F_1, \ldots, F_n : L^n \to L$ are monotone functions.
Such a system has a unique least solution that is the least fixed point of the composite function $F : L^n \to L^n$ defined as:
$$F(x_1, \ldots, x_n) = (F_1(x_1, \ldots, x_n), \ldots, F_n(x_1, \ldots, x_n))$$
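As a minimal illustration (not part of the original notes), this iteration can be sketched in Python on a powerset lattice, where the join is set union and the bottom element is the empty set; the two example functions are invented for the demonstration:

```python
def fixed_point(fs):
    """Least solution of x_i = F_i(x_1, ..., x_n) by naive iteration.

    fs is a list of monotone functions; each takes the current tuple of
    values and returns the new value of its variable.  The values here are
    frozensets, i.e. elements of a powerset lattice whose join is union and
    whose bottom is the empty set.
    """
    xs = tuple(frozenset() for _ in fs)      # start at bottom
    while True:
        new = tuple(f(xs) for f in fs)       # one application of the composite F
        if new == xs:                        # fixed point reached
            return xs
        xs = new

# Example system: x1 = {a} union x2,  x2 = x1 intersect {a, b}
fs = [
    lambda xs: frozenset({"a"}) | xs[1],
    lambda xs: xs[0] & frozenset({"a", "b"}),
]
print(fixed_point(fs))   # (frozenset({'a'}), frozenset({'a'}))
```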
Data-flow analysis
Data-flow analysis works on a control-flow graph and a lattice $L$. The lattice can either be fixed for all programs, or depend on the analysed one.
A variable $v_n$ ranging over the values of $L$ is attached to every node $n$ of the CFG.
A set of inequalities for these variables are then extracted from the CFG – according to the analysis being performed – and solved using the fixed point technique.
Example: liveness
As we have seen, liveness is a property that can be approximated using data-flow analysis.
The lattice to use in that case is $L = (\mathcal{P}(V), \subseteq)$ where $V$ is the set of variables appearing in the analysed program, and $\mathcal{P}$ is the power set operator (set of all subsets).
Example: liveness
For a program containing three variables \(x, y\) and \(z\), the lattice for liveness is the following:
\[
\begin{array}{c}
\{x, y, z\} \\
\{x, y\} \quad \{x, z\} \quad \{y, z\} \\
\{x\} \quad \{y\} \quad \{z\} \\
\{\}
\end{array}
\]
that is, the powerset of \(\{x, y, z\}\) ordered by inclusion.
Example: liveness
To every node \(n\) in the CFG, we attach a variable \(v_n\) giving the set of variables live before that node.
The value of that variable is given by:
\[v_n = \bigl((v_{s_1} \cup v_{s_2} \cup \ldots) \setminus \text{written}(n)\bigr) \cup \text{read}(n)\]
where \(s_1, s_2, \ldots\) are the successors of \(n\), \(\text{read}(n)\) is the set of program variables read by \(n\), and \(\text{written}(n)\) is the set of variables written by \(n\).
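For instance (an illustrative case, not taken from the notes), for a node \(n\) containing the assignment \(x = y + z\) with a single successor \(s\):
\[
\text{read}(n) = \{y, z\}, \qquad \text{written}(n) = \{x\}, \qquad v_n = (v_s \setminus \{x\}) \cup \{y, z\}
\]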
Fixed point algorithm
To solve the data-flow constraints, we construct the composite function \(F\) and compute its least fixed point by iteration:
\[F(x_1, \ldots, x_n) = (F_1(x_1, \ldots, x_n), \ldots, F_n(x_1, \ldots, x_n))\]
where each \(F_i\) is the right-hand side of the liveness constraint attached to node \(i\), and the iteration starts from \((\bot, \ldots, \bot)\).
Work-list algorithm
Computing the fixed point by simple iteration as we did works, but is wasteful as the information for all nodes is re-computed at every iteration.
It is possible to do better by remembering, for every variable \(v\), the set \(\text{dep}(v)\) of the variables whose value depends on the value of \(v\) itself.
Then, whenever the value of some variable \(v\) changes, we only re-compute the value of the variables that belong to \(\text{dep}(v)\).
Example: liveness
For a small example program – node 1: \(x = \text{read-int}\), node 2: \(y = \text{read-int}\), node 3: \(\text{if } (x \leq y)\), nodes 4 and 5: assignments involving \(z\) on the two branches, node 6: \(\text{print-int } z\) – one constraint of the above form is attached to every node (for example \(v_1 = v_2 \setminus \{x\}\)), and the least solution of the system gives the variables live before each node.
Work-list algorithm
\[
\begin{align*}
& x_1 = x_2 = \ldots = x_n = \bot \\
& q = [v_1, \ldots, v_n] \\
& \textbf{while } q \neq [\,]: \quad \text{(assume } q = [v_i, \ldots]\text{)} \\
& \quad y = F_i(x_1, \ldots, x_n) \\
& \quad q = q.\text{tail} \\
& \quad \textbf{if } (y \neq x_i): \\
& \quad \quad \textbf{for } (v_j \in \text{dep}(v_i)): \\
& \quad \quad \quad \textbf{if } (v_j \notin q):\ q.\text{append}(v_j) \\
& \quad \quad x_i = y
\end{align*}
\]
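A compact executable sketch of this work-list scheme for the liveness constraints (the data representation and names below are my own, chosen only for illustration):

```python
def liveness(succ, read, written):
    """Work-list liveness analysis.

    succ[n]    : successor node ids of node n
    read[n]    : variables read by node n
    written[n] : variables written by node n
    Returns live[n], the variables live before node n.
    """
    live = {n: set() for n in succ}          # start every node at bottom
    dep = {n: [] for n in succ}              # dep[n] = predecessors of n:
    for n in succ:                           # their constraints mention live[n]
        for s in succ[n]:
            dep[s].append(n)
    worklist = list(succ)
    while worklist:
        n = worklist.pop()
        out = set().union(*(live[s] for s in succ[n]))   # union over successors
        new = (out - written[n]) | read[n]
        if new != live[n]:
            live[n] = new
            for m in dep[n]:                 # re-examine affected nodes
                if m not in worklist:
                    worklist.append(m)
    return live

# Tiny example: 1: x = read(); 2: y = x + 1; 3: print(y)
succ    = {1: [2], 2: [3], 3: []}
read    = {1: set(), 2: {"x"}, 3: {"y"}}
written = {1: {"x"}, 2: {"y"}, 3: set()}
print(liveness(succ, read, written))   # {1: set(), 2: {'x'}, 3: {'y'}}
```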
Analysis example #2: available expressions
Working with basic blocks
Until now, we considered that the CFG nodes were single instructions. In practice, basic blocks tend to be used as nodes, to reduce the size of the CFG.
When data-flow analysis is performed on a CFG composed of basic blocks, a variable is attached to every block, not to every instruction. Computing the result of the analysis for individual instructions is however trivial.
Work-list example: liveness
| q | x1 | x2 | x3 | x4 | x5 | x6 |
|---|----|----|----|----|----|----|
| [1,2] | [] | [] | [] | [] | [] | [] |
| [2,1] | [] | [x] | [] | [] | [] | [] |
| [3,2] | [] | [x,y] | [x] | [] | [] | [] |
| [4,3] | [] | [x,y] | [x] | [y] | [x] | [y] |
| [5,4] | [] | [x,y] | [x] | [y] | [z] | [x] |
| [6,5] | [] | [x,y] | [x] | [y] | [z] | [x] |
Available expressions
A non-trivial expression in a program is available at some point if its value has already been computed earlier.
Data-flow analysis can be used to approximate the set of expressions available at all program points. The result from that analysis can then be used to eliminate common sub-expressions, for example.
Very busy expressions
An expression is very busy at some program point if it will definitely be evaluated before its value changes.
Data-flow analysis can approximate the set of very busy expressions for all program points. The result of that analysis can then be used to perform code hoisting: the computation of a very busy expression \( e \) can be performed at the earliest point where it is busy.
Intuitions
We will compute the set of very busy expressions before every node of the CFG.
Intuitively, an expression \( e \) is very busy before node \( n \) if it is evaluated by \( n \) or if it is very busy in all successors of \( n \), and it is not killed by \( n \).
Example
| CFG | constraints | solution |
|-----|-------------|----------|
| 1: if (a < b) | v_1 = {a < b} | v_2 = {a < b} |
| 2: x = a + b | v_3 = {x} ∪ {a, b} | v_4 = {a, b} |
| 3: x = d + e | v_3 = {d + e} ∪ {a, b} | v_4 = {d, e, a, b} |
| 4: y = x + 1 | v_5 = {x + 1} ∪ {a, b} | v_6 = {a, b} |
| 5: z = x + b | v_5 = {x + 1} ∪ {a, b} | v_6 = {x + 1} ∪ {a, b} |
| 6: t = x + 1 | v_5 = {x + 1} ∪ {a, b} | v_6 = {x + 1} ∪ {a, b} |
| 7: t = x + 1 | v_5 = {x + 1} ∪ {a, b} | v_6 = {x + 1} ∪ {a, b} |
Analysis example #3: very busy expressions
Intuitions
We will compute the set of available expressions after every node of the CFG.
Intuitively, an expression \( e \) is available after node \( n \) if it is available before \( n \) (that is, after all predecessors of \( n \)) or computed by \( n \) itself, and it is not killed by \( n \).
A node \( n \) kills an expression \( e \) if it gives a new value to a variable used by \( e \). For example, the assignment \( x \leftarrow y \) kills all expressions that use \( x \), like \( x + 1 \).
Equations
To approximate available expressions, we attach to every node \( n \) of the CFG a variable \( v_n \) containing the set of expressions available after node \( n \). The constraints derived from the CFG nodes have the form:
$$v_n = ((v_{p_1} \cap v_{p_2} \cap \ldots) \setminus \text{kill}(n)) \cup \text{gen}(n)$$
where $p_1, p_2, \ldots$ are the predecessors of $n$, $\text{gen}(n)$ is the set of expressions computed by $n$, and $\text{kill}(n)$ the set of expressions killed by $n$.
To approximate very busy expressions, we attach to each node $n$ of the CFG a variable $v_n$ containing the set of expressions that are very busy before it. Then we derive constraints from the CFG nodes, which have the form:
$$v_n = ((v_{s_1} \cap v_{s_2} \cap \ldots) \setminus \text{kill}(n)) \cup \text{gen}(n)$$
where $s_1, s_2, \ldots$ are the successors of $n$, $\text{gen}(n)$ is the set of expressions computed by $n$, and $\text{kill}(n)$ the set of expressions killed by $n$.
### Analysis example #4: reaching definitions
#### Reaching definitions
The **reaching definitions** for a program point are the assignments that may have defined the values of variables at that point. Data-flow analysis can approximate the set of reaching definitions for all program points. These sets can then be used to perform constant propagation, for example.
#### Equations
To approximate reaching definitions, we attach to node $n$ of the CFG a variable $v_n$ containing the set of definitions (CFG nodes) that can reach $n$.
For a node $n$ that is not an assignment, the reaching definitions are simply those of its predecessors:
$$v_n = v_{p_1} \cup v_{p_2} \cup \ldots$$
For a node $n$ that is an assignment, the equation is more complicated:
$$v_n = ((v_{p_1} \cup v_{p_2} \cup \ldots) \setminus \text{kill}(n)) \cup \{ n \}$$
where $\text{kill}(n)$ are the definitions killed by $n$, i.e. those which define the same variable as $n$ itself. For example, a definition like $x \leftarrow y$ kills all definitions of the form $x \leftarrow \ldots$
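As an illustrative sketch (the representation and names are my own), these constraints can be solved by plain iteration to their least fixed point:

```python
def reaching_definitions(pred, defines):
    """Reaching definitions by naive fixed-point iteration.

    pred[n]    : predecessor node ids of node n
    defines[n] : the variable assigned by node n, or None for non-assignments
    Returns reach[n], the definitions (node ids) reaching the exit of node n.
    """
    reach = {n: set() for n in pred}
    changed = True
    while changed:
        changed = False
        for n in pred:
            incoming = set().union(*(reach[p] for p in pred[n]))
            if defines[n] is None:
                new = incoming                      # non-assignment: pass through
            else:
                # kill every other definition of the same variable, add this one
                kill = {m for m in pred if defines[m] == defines[n]}
                new = (incoming - kill) | {n}
            if new != reach[n]:
                reach[n] = new
                changed = True
    return reach
```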
Example
| CFG | constraints | solution |
|-----|-------------|----------|
| 1: x = 100 | v_1 = {1} | v_1 = {1} |
| 2: z = 0 | v_2 = (v_1 # z) ∪ {2} | v_2 = {1, 2} |
| 3: z = z + 3 | v_3 = ((v_2 ∪ v_5) # z) ∪ {3} | v_3 = {1, 3, 4} |
| 4: x = x - 1 | v_4 = (v_3 # x) ∪ {4} | v_4 = {3, 4} |
| 5: if (x > 0) | v_5 = v_4 | v_5 = {3, 4} |
| 6: print z | v_6 = v_5 | v_6 = {3, 4} |
Notation:
- \( S \mathbin{\#} x \) denotes \( S \) with all nodes defining variable \( x \) removed.
- \( v_n \) is the set of reaching definitions after node \( n \).
Putting data-flow analyses to work
Using data-flow analysis
Once a particular data-flow analysis has been conducted, its result can be used to optimise the analysed program. We will quickly examine some transformations that can be performed using the data-flow analysis presented before.
Dead-code elimination
Useless assignments can be eliminated using liveness analysis, as follows:
Whenever a CFG node \( n \) is of the form \( x = e \), and \( x \) is not live after \( n \), then the assignment is useless and node \( n \) can be removed.
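A hedged sketch of this transformation, reusing the representation from the earlier liveness sketch (and assuming right-hand sides have no side effects):

```python
def dead_assignments(defines, succ, live):
    """Node ids whose assignment x = e can be removed.

    defines[n] : variable assigned by node n, or None if n is not an assignment
    succ[n]    : successor node ids of n
    live[n]    : variables live before n (result of the liveness analysis)
    Assumes the right-hand sides have no side effects.
    """
    dead = []
    for n, x in defines.items():
        if x is None:
            continue
        live_after = set().union(*(live[s] for s in succ[n]))
        if x not in live_after:              # x is not live after n
            dead.append(n)
    return dead
```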
CSE
Common sub-expressions can be eliminated using availability information, as follows:
Whenever a CFG node \( n \) computes an expression of the form \( x \ op \ y \) and \( x \ op \ y \) is available before \( n \), then the computation within \( n \) can be replaced by a reference to the previously-computed value.
Constant propagation
Constant propagation can be performed using the result of reaching definitions analysis, as follows:
When a CFG node \( n \) uses a value \( x \) and the only definition of \( x \) reaching \( n \) has the form \( x = c \) where \( c \) is a constant, then the use of \( x \) in \( n \) can be replaced by \( c \).
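A sketch under the same assumptions (the maps below are hypothetical inputs derived from the CFG and from the reaching-definitions result):

```python
def constant_uses(uses, defines, const_value, reach_before):
    """Uses that can be replaced by a constant.

    uses[n]         : variables read by node n
    defines[d]      : variable assigned by definition d (a node id)
    const_value[d]  : the constant assigned by d, or None if d is not "x = c"
    reach_before[n] : definitions reaching the entry of node n
    Returns {(n, x): c} for every use of x in n that can be replaced by c.
    """
    result = {}
    for n, variables in uses.items():
        for x in variables:
            defs_of_x = [d for d in reach_before[n] if defines.get(d) == x]
            # the only definition of x reaching n has the form x = c
            if len(defs_of_x) == 1 and const_value.get(defs_of_x[0]) is not None:
                result[(n, x)] = const_value[defs_of_x[0]]
    return result
```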
Copy propagation – very similar to constant propagation – can be performed using the result of reaching definitions analysis, as follows:
When a CFG node \( n \) uses a value \( x \), and the only definition of \( x \) reaching \( n \) has the form \( x = y \) where \( y \) is a variable, and \( y \) is not redefined on any path leading to \( n \), then the use of \( x \) in \( n \) can be replaced by \( y \).
django-contact-form provides customizable contact-form functionality for Django-powered Web sites.
Basic functionality (collecting a name, email address and message) can be achieved out of the box by setting up a few templates and adding one line to your site’s root URLconf:
```python
url(r'^contact/', include('contact_form.urls')),
```
For notes on getting started quickly, and on how to customize django-contact-form’s behavior, read through the full documentation below.
The 1.6 release of django-contact-form supports Django 1.11, 2.0 and 2.1 on the following Python versions:
- Django 1.11 supports Python 2.7, 3.4, 3.5 and 3.6.
- Django 2.0 supports Python 3.4, 3.5, 3.6 and 3.7.
- Django 2.1 supports Python 3.5, 3.6 and 3.7.
1.1 Normal installation
The preferred method of installing django-contact-form is via pip, the standard Python package-installation tool. If you don’t have pip, instructions are available for how to obtain and install it. If you’re using a supported version of Python, pip should have come bundled with your installation of Python.
Once you have pip, type:
```
pip install django-contact-form
```
If you plan to use the included spam-filtering contact form class, AkismetContactForm, you will also need the Python akismet module. You can manually install it via `pip install akismet`, or tell django-contact-form to install it for you, by running:
```
pip install django-contact-form[akismet]
```
If you don’t have a copy of a compatible version of Django, installing django-contact-form will also automatically install one for you.
**Warning: Python 2**
If you are using Python 2, you should install the latest Django 1.11 release *before* installing django-contact-form. Later versions of Django no longer support Python 2, and installation will fail. To install a compatible version of Django for Python 2, run `pip install "Django>=1.11,<2.0"`.
1.2 Installing from a source checkout
If you want to work on django-contact-form, you can obtain a source checkout.
The development repository for django-contact-form is at <https://github.com/ubernostrum/django-contact-form>. If you have git installed, you can obtain a copy of the repository by typing:
```
git clone https://github.com/ubernostrum/django-contact-form.git
```
From there, you can use git commands to check out the specific revision you want, and perform an “editable” install (allowing you to change code as you work on it) by typing:
```
pip install -e .
```
1.3 Next steps
To get up and running quickly, check out the quick start guide. For full documentation, see the documentation index.
Quick start guide
First you’ll need to have Django and django-contact-form installed; for details on that, see the installation guide. Once that’s done, you can start setting up django-contact-form. First, add it to your INSTALLED_APPS setting. Then, you can begin configuring.
2.1 URL configuration
The quickest way to set up the views in django-contact-form is to use the provided URLconf, found at contact_form.urls. You can include it wherever you like in your site’s URL configuration; for example, to have it live at the URL /contact/:
```python
from django.conf.urls import include, url
urlpatterns = [
# ... other URL patterns for your site ...
url(r'^contact/', include('contact_form.urls')),
]
```
If you’ll be using a custom form class, you’ll need to manually set up your URLs so you can tell django-contact-form about your form class. For example:
```python
from django.conf.urls import include, url

from django.views.generic import TemplateView

from contact_form.views import ContactFormView

from yourapp.forms import YourCustomFormClass


urlpatterns = [
    # ... other URL patterns for your site ...

    url(r'^contact/$',
        ContactFormView.as_view(
            form_class=YourCustomFormClass
        ),
        name='contact_form'),
    url(r'^contact/sent/$',
        TemplateView.as_view(
            template_name='contact_form/contact_form_sent.html'
        ),
        name='contact_form_sent'),
]
```
Important: Where to put custom forms and views
When writing a custom form class (or custom ContactFormView subclass), don’t put your custom code inside django-contact-form. Instead, put your custom code in the appropriate place (a forms.py or views.py file) in an application you’ve written.
2.2 Required templates
The two views above will need several templates to be created.
2.2.1 contact_form/contact_form.html
This is used to display the contact form. It has a RequestContext (so any context processors will be applied), and also provides the form instance as the context variable form.
2.2.2 contact_form/contact_form_sent.html
This is used after a successful form submission, to let the user know their message has been sent. It has a RequestContext, but provides no additional context variables of its own.
2.2.3 contact_form/contact_form.txt
Used to render the body of the email. Will receive a RequestContext with the following additional variables:
- **body** The message the user typed.
- **email** The email address the user supplied.
- **name** The name the user supplied.
- **site** The current site. Either a Site or RequestSite instance, depending on whether Django’s sites framework is installed.
2.2.4 contact_form/contact_form_subject.txt
Used to render the subject of the email. Will receive a RequestContext with the following additional variables:
**body** The message the user typed.
**email** The email address the user supplied.
**name** The name the user supplied.
**site** The current site. Either a `Site` or `RequestSite` instance, depending on whether Django’s sites framework is installed.
---
Warning: Subject must be a single line
In order to prevent header injection attacks, the subject must be only a single line of text, and Django’s email framework will reject any attempt to send an email with a multi-line subject. So it’s a good idea to ensure your `contact_form_subject.txt` template only produces a single line of output when rendered; as a precaution, however, django-contact-form will, by default, condense the output of this template to a single line.
### 2.3 Using a spam-filtering contact form
Spam filtering is a common desire for contact forms, due to the large amount of spam they can attract. There is a spam-filtering contact form class included in django-contact-form: `AkismetContactForm`, which uses the Wordpress Akismet spam-detection service.
To use this form, you will need to do the following things:
1. Install the Python `akismet` module to allow django-contact-form to communicate with the Akismet service. You can do this via `pip install akismet`, or as you install django-contact-form via `pip install django-contact-form[akismet]`.
2. Obtain an Akismet API key from <https://akismet.com/>, and associate it with the URL of your site.
3. Supply the API key and URL for django-contact-form to use. You can either place them in the Django settings `AKISMET_API_KEY` and `AKISMET_BLOG_URL`, or in the environment variables `PYTHON_AKISMET_API_KEY` and `PYTHON_AKISMET_BLOG_URL`.
Then you can replace the suggested URLconf above with the following:
```python
from django.conf.urls import include, url
urlpatterns = [
# ... other URL patterns for your site ...
url(r'^contact/', include('contact_form.akismet_urls')),
]
```
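For reference, a minimal settings sketch; the key and URL values below are placeholders, and only the setting names come from the documentation above:

```python
# settings.py -- placeholder values; use your own Akismet key and site URL
AKISMET_API_KEY = "your-akismet-api-key"
AKISMET_BLOG_URL = "https://www.example.com/"
```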
Contact form classes
There are two contact-form classes included in django-contact-form; one provides all the infrastructure for a contact form, and will usually be the base class for subclasses which want to extend or modify functionality. The other is a subclass which adds spam filtering to the contact form.
3.1 The ContactForm class
class contact_form.forms.ContactForm
The base contact form class from which all contact form classes should inherit.
If you don’t need any customization, you can use this form to provide basic contact-form functionality; it will collect name, email address and message.
The ContactFormView included in this application knows how to work with this form and can handle many types of subclasses as well (see below for a discussion of the important points), so in many cases it will be all that you need. If you’d like to use this form or a subclass of it from one of your own views, here’s how:
1. When you instantiate the form, pass the current HttpRequest object as the keyword argument request; this is used internally by the base implementation, and also made available so that subclasses can add functionality which relies on inspecting the request (such as spam filtering).
2. To send the message, call the form’s save() method, which accepts the keyword argument fail_silently and defaults it to False. This argument is passed directly to Django’s send_mail() function, and allows you to suppress or raise exceptions as needed for debugging. The save() method has no return value.
Other than that, treat it like any other form; validity checks and validated data are handled normally, through the is_valid() method and the cleaned_data dictionary.
Under the hood, this form uses a somewhat abstracted interface in order to make it easier to subclass and add functionality.
The following attributes play a role in determining behavior, and any of them can be implemented as an attribute or as a method (for example, if you wish to have from_email be dynamic, you can implement a method named from_email() instead of setting the attribute from_email).
**from_email**
The email address (str) to use in the From: header of the message. By default, this is the value of the Django setting `DEFAULT_FROM_EMAIL`.
**recipient_list**
A list of recipients for the message. By default, this is the email addresses specified in the setting `MANAGERS`.
**subject_template_name**
A str, the name of the template to use when rendering the subject line of the message. By default, this is `contact_form/contact_form_subject.txt`.
**template_name**
A str, the name of the template to use when rendering the body of the message. By default, this is `contact_form/contact_form.txt`.
And two methods are involved in producing the contents of the message to send:
**message**()
Returns the body of the message to send. By default, this is accomplished by rendering the template name specified in `template_name`.
*Return type* str
**subject**()
Returns the subject line of the message to send. By default, this is accomplished by rendering the template name specified in `subject_template_name`.
*Return type* str
---
**Warning: Subject must be a single line**
The subject of an email is sent in a header (named Subject). Because email uses newlines as a separator between headers, newlines in the subject can cause it to be interpreted as multiple headers; this is the header injection attack. To prevent this, `subject()` will always force the subject to a single line of text, stripping all newline characters. If you override `subject()`, be sure to either do this manually, or use `super()` to call the parent implementation.
---
Finally, the message itself is generated by the following two methods:
**get_message_dict**()
This method loops through `from_email`, `recipient_list`, `message()` and `subject()`, collecting those parts into a dictionary with keys corresponding to the arguments to Django’s `send_mail` function, then returns the dictionary. Overriding this allows essentially unlimited customization of how the message is generated. Note that for compatibility, implementations which override this should support callables for the values of `from_email` and `recipient_list`.
*Return type* dict
**get_context**()
For methods which render portions of the message using templates (by default, `message()` and `subject()`), generates the context used by those templates. The default context will be a `RequestContext` (using the current HTTP request, so user information is available), plus the contents of the form’s `cleaned_data` dictionary, and one additional variable:
**site**
If `django.contrib.sites` is installed, the currently-active `Site` object. Otherwise, a `RequestSite` object generated from the request.
*Return type* dict
Meanwhile, the following attributes/methods generally should not be overridden; doing so may interfere with functionality, may not accomplish what you want, and generally any desired customization can be accomplished in a more straightforward way through overriding one of the attributes/methods listed above.
**request**
The HttpRequest object representing the current request. This is set automatically in `__init__()`, and is used both to generate a RequestContext for the templates and to allow subclasses to engage in request-specific behavior.
**save()**
If the form has data and is valid, will send the email, by calling `get_message_dict()` and passing the result to Django's `send_mail()` function.
Note that subclasses which override `__init__` or `save()` need to accept `*args` and `**kwargs`, and pass them via `super()`, in order to preserve behavior (each of those methods accepts at least one additional argument, and this application expects and requires them to do so).
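To illustrate the attributes and methods described above, here is a hedged sketch of a custom subclass; the extra field, the recipient address and the subject prefix are invented for the example and are not part of the package:

```python
# yourapp/forms.py -- an illustrative subclass; the field, address and
# subject prefix are examples, not part of django-contact-form itself
from django import forms

from contact_form.forms import ContactForm


class SupportContactForm(ContactForm):
    # an extra field; its cleaned value is exposed to the body template
    # through get_context() / cleaned_data
    phone = forms.CharField(max_length=30, required=False)

    # attributes can be plain values...
    recipient_list = ["[email protected]"]

    # ...or methods, when the value has to be computed
    def subject(self):
        # the parent implementation renders the subject template and
        # strips newlines; we only add a prefix to the result
        return "[support] " + super(SupportContactForm, self).subject()
```

Because the extra field ends up in cleaned_data, it is available to the body template via get_context(); the template must reference it explicitly for it to appear in the message.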
### 3.2 The Akismet (spam-filtering) contact form class
class contact_form.forms.AkismetContactForm
A subclass of `ContactForm` which adds spam filtering, via the Wordpress Akismet spam-detection service.
Use of this class requires you to provide configuration for the Akismet web service; you’ll need to obtain an Akismet API key, and you’ll need to associate it with the site you’ll use the contact form on. You can do this at [https://akismet.com/](https://akismet.com/). Once you have, you can configure in either of two ways:
1. Put your Akismet API key in the Django setting `AKISMET_API_KEY`, and the URL it’s associated with in the setting `AKISMET_BLOG_URL`, or
2. Put your Akismet API key in the environment variable `PYTHON_AKISMET_API_KEY`, and the URL it’s associated with in the environment variable `PYTHON_AKISMET_BLOG_URL`.
You will also need the Python Akismet module to communicate with the Akismet web service. You can install it by running `pip install akismet`, or django-contact-form can install it automatically for you if you run `pip install django-contact-form[akismet]`.
Once you have an Akismet API key and URL configured, and the akismet module installed, you can drop in `AkismetContactForm` anywhere you would have used `ContactForm`. A URLconf is provided in django-contact-form, at `contact_form.akismet_urls`, which will correctly configure `AkismetContactForm` for you.
---
Contact form views
class contact_form.views.ContactFormView
The base view class from which most custom contact-form views should inherit. If you don’t need any custom functionality, and are content with the default ContactForm class, you can also use it as-is (and the provided URLConf, contact_form.urls, does exactly this).
This is a subclass of Django’s FormView, so refer to the Django documentation for a list of attributes/methods which can be overridden to customize behavior.
One non-standard attribute is defined here:
recipient_list
The list of email addresses to send mail to. If not specified, defaults to the recipient_list of the form.
Additionally, the following standard (from FormView) methods and attributes are commonly useful to override (all attributes below can also be passed to as_view() in the URLconf, permitting customization without the need to write a full custom subclass of ContactFormView):
form_class
The form class to use. By default, will be ContactForm. This can also be overridden as a method named form_class(); this permits, for example, per-request customization (by inspecting attributes of self.request).
template_name
A str, the template to use when rendering the form. By default, will be contact_form/contact_form.html.
get_success_url()
The URL to redirect to after successful form submission. Can be a hard-coded string, the string resulting from calling Django’s reverse() helper, or the lazy object produced by Django’s reverse_lazy() helper. Default value is the result of calling reverse_lazy() with the URL name ‘contact_form_sent’.
Return type str
get_form_kwargs()
Returns additional keyword arguments (as a dictionary) to pass to the form class on initialization.
By default, this will return a dictionary containing the current HttpRequest (as the key request) and, if recipient_list was defined, its value (as the key recipient_list).
**Warning:** If you override get_form_kwargs(), you must ensure that, at the very least, the keyword argument request is still provided, or ContactForm initialization will raise TypeError. The easiest approach is to use super() to call the base implementation in ContactFormView, and modify the dictionary it returns.
**Return type** dict
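A hedged sketch of such an override; the extra "priority" argument is invented for illustration:

```python
# yourapp/views.py -- illustrative override; the "priority" argument is an example
from contact_form.views import ContactFormView


class PriorityContactFormView(ContactFormView):
    def get_form_kwargs(self):
        # keep the base kwargs (including the required "request" key)...
        kwargs = super(PriorityContactFormView, self).get_form_kwargs()
        # ...then add our own
        kwargs["priority"] = "high"
        return kwargs
```

The corresponding form class would then need to accept and remove the extra argument in its __init__() before calling super(), since the base ContactForm does not expect it.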
CHAPTER 5
Frequently asked questions
The following notes answer some common questions, and may be useful to you when installing, configuring or using django-contact-form.
5.1 What versions of Django and Python are supported?
As of django-contact-form 1.6, Django 1.11, 2.0 and 2.1 are supported, on Python 2.7, (Django 1.11 only), 3.4 (Django 1.11 and 2.0 only), 3.5, 3.6 and 3.7 (Django 2.0 and 2.1 only).
5.2 What license is django-contact-form under?
django-contact-form is offered under a three-clause BSD-style license; this is an OSI-approved open-source license, and allows you a large degree of freedom in modifying and redistributing the code. For the full terms, see the file LICENSE which came with your copy of django-contact-form; if you did not receive a copy of this file, you can view it online at <https://github.com/ubernostrum/django-contact-form/blob/master/LICENSE>.
5.3 Why aren’t there any default templates I can use?
Usable default templates, for an application designed to be widely reused, are essentially impossible to produce; variations in site design, block structure, etc. cannot be reliably accounted for. As such, django-contact-form provides bare-bones (i.e., containing no HTML structure whatsoever) templates in its source distribution to enable running tests, and otherwise just provides good documentation of all required templates and the context made available to them.
5.4 Why am I getting a bunch of BadHeaderError exceptions?
Most likely, you have an error in your ContactForm subclass. Specifically, one or more of from_email, recipient_list or subject() are returning values which contain newlines.
As a security precaution against email header injection attacks (which allow spammers and other malicious users to manipulate email and potentially cause automated systems to send mail to unintended recipients), Django’s email-sending framework does not permit newlines in message headers. BadHeaderError is the exception Django raises when a newline is detected in a header. By default, `contact_form.forms.ContactForm.subject()` will forcibly condense the subject to a single line.
Note that this only applies to the headers of an email message; the message body can (and usually does) contain newlines.
### 5.5 I found a bug or want to make an improvement!
The canonical development repository for `django-contact-form` is online at [https://github.com/ubernostrum/django-contact-form](https://github.com/ubernostrum/django-contact-form). Issues and pull requests can both be filed there.
If you’d like to contribute to `django-contact-form`, that’s great! Just please remember that pull requests should include tests and documentation for any changes made, and that following PEP 8 is mandatory. Pull requests without documentation won’t be merged, and PEP 8 style violations or test coverage below 100% are both configured to break the build.
### 5.6 I’m getting errors about “akismet” when trying to run tests?
The full test suite of `django-contact-form` exercises all of its functionality, including the spam-filtering `AkismetContactForm`. That class uses the Wordpress Akismet spam-detection service to perform spam filtering, and so requires the Python `akismet` module to communicate with the Akismet service, and some additional configuration (in the form of a valid Akismet API key and associated URL).
By default, the tests for `AkismetContactForm` will be skipped unless the required configuration (in the form of either a pair of Django settings, or a pair of environment variables) is detected. However, if you have supplied Akismet configuration but do not have the Python `akismet` module, you will see test errors from attempts to import `akismet`. You can resolve this by running:
```
$ pip install akismet
```
or (if you do not intend to use `AkismetContactForm`) by no longer configuring the Django settings/environment variables used by Akismet.
Additionally, if the `AkismetContactForm` tests are skipped, the default code-coverage report will fail due to the relevant code not being exercised during the test run.
DISTRIBUTED EPISODIC EXPLORATORY PLANNING (DEEP)
Rome Research Corporation
APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.
STINFO COPY
AIR FORCE RESEARCH LABORATORY
INFORMATION DIRECTORATE
ROME RESEARCH SITE
ROME, NEW YORK
NOTICE AND SIGNATURE PAGE
Using Government drawings, specifications, or other data included in this document for any purpose other than Government procurement does not in any way oblige the U.S. Government. The fact that the Government formulated or supplied the drawings, specifications, or other data does not license the holder or any other person or corporation; or convey any rights or permission to manufacture, use, or sell any patented invention that may relate to them.
This report was cleared for public release by the 88th ABW, Wright-Patterson AFB Public Affairs Office and is available to the general public, including foreign nationals. Copies may be obtained from the Defense Technical Information Center (DTIC) (http://www.dtic.mil).
AFRL-RI-RS-TR-2008-322 HAS BEEN REVIEWED AND IS APPROVED FOR PUBLICATION IN ACCORDANCE WITH ASSIGNED DISTRIBUTION STATEMENT.
FOR THE DIRECTOR:
/signatures/
DALE W. RICHARDS
Work Unit Manager
JAMES W. CUSACK, Chief
Information Systems Division
Information Directorate
This report is published in the interest of scientific and technical information exchange, and its publication does not constitute the Government’s approval or disapproval of its ideas or findings.
DEEP is a mixed-initiative decision support system that utilizes past experiences to suggest courses of action (COAs) for new situations. It was designed as a distributed multi-agent system, using agents to maintain and exploit the experiences of individual commanders, as well as to transform suggested past plans into potential solutions for new problems. The commander, through the agent, can view and modify the contents of the shared repository. Agents interact through a common knowledge repository, represented by a blackboard, selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of development and integration with other projects. Research also included investigations into various scalability software suites and frameworks, as well as different database management systems. Hibernate, an object/relational persistence and query service, was chosen for interaction with the database. Comprehensive testing revealed the Java Distributed Blackboard was limited only by system resources and network bandwidth. Thus, its architecture is well suited for dealing with ill-defined, complex situations such as military planning.
Table of Contents
1. EXECUTIVE SUMMARY
2. INTRODUCTION
3. METHODS
   3.1 Statement of Work Tasks
   3.2 DEEP Architecture Overview
   3.3 Blackboard
   3.3.1 Java Distributed Blackboard Architecture Overview
   3.3.2 Blackboard Components
   3.3.3 Knowledge Sources
   3.3.4 Core data structure
   3.3.5 Control
   3.3.6 Additional Components
   3.3.7 Proxy
   3.3.8 Blackboard objects
   3.3.9 Blackboard utilities
   3.4 Research
   3.4.1 Terracotta
   3.4.2 Hibernate
   3.4.3 Database Management Systems
   3.5 Development
   3.6 Other Tasks
4. RESULTS
5. CONCLUSIONS
   5.1 Discussion of Findings
   5.2 Issues and Concerns
6. REFERENCES
7. LIST OF ACRONYMS
1. EXECUTIVE SUMMARY
The initiation of a Global War on Terrorism has brought to the forefront unique challenges in identifying adversaries, their plans, motivations, intentions and resources, and opportunities for successfully engaging them. Technologies enabling the right people to discover the right information at the right time, anywhere in the world, are of vital importance. The September 11, 2001 attacks and subsequent violence around the world prove that failure to develop such technology can contribute to unacceptable loss of life, degraded military readiness, and unacceptable risk to the U.S. population.
The Air Force Research Laboratory has initiated research within its Commander’s Predictive Environment (CPE) program, and the associated Distributed Episodic Exploratory Planning (DEEP) project has taken on the challenge of developing technology to enable commanders to rapidly develop better and more robust plans, and to collaborate with other Air Operations Centers and other command centers. These efforts seek to improve the understanding of the operational picture (past, present and future), learn from past failures and successes, and be able to characterize and predict likely future events within the planning and execution process.
Advanced decision support systems provide a core capability required to align resources and increase effectiveness on the battlefield of today and prepare for the future. By effectively integrating and coordinating proactive measures and dynamic responses, assisted by advanced technology, commanders can meet the demands of the many roles they will face and win the war on terrorism. The United States defense strategy requires continuing information superiority to secure an advantage over adversaries. In order to support the goal of information dominance and the focus on network-centric warfare, it is necessary to develop tools that can assist in predicting combat environments and building plans in response. Such a predictive environment must provide an ability to develop proactive Courses of Action (red, blue and gray) tailored to potential or pending crises, and be able to adjust them over time.
The objective of this effort was to develop a software capability to assist a Joint Force Commander (JFC) and/or Joint Force Air Component Commander (JFACC) to dynamically build and adjust combat plans and execution decisions based on a predictive battlespace environment, drawing from past experiences and applying that knowledge to present situations. This robust decision support environment will enable the commander to develop better, more robust plans to meet strategic objectives.
The JFC/JFACC must have an understanding of the operational picture (past, present and future), be able to characterize and predict likely future events within the planning and execution process, generate options, comprehend the impact of decisions made today on the battlespace of tomorrow; and do this reliably within the pace of modern combat. The JFC/JFACC must be able to anticipate plausible end states (based on “red” and “blue” actions), evaluate possible courses of action, have rapid, easy access to shared battlespace information, and interact with planning tools in an intuitive mixed-initiative manner.
This effort designed and implemented a prototype decision support capability. This includes technology to: (a) gain greater understanding and awareness of the battlespace through
reflection, imitation and experience; (b) develop proactive courses of action tailored to potential or pending crises; (c) develop and adjust projections temporally, spatially and across friendly (blue), enemy (red) and neutral (grey) forces; (d) provide mixed-initiative planning; and (e) support distributed and collaborative planning.
This report provides an overview of the work performed by Rome Research Corporation (RRC) in support of the DEEP project at AFRL’s Rome Research Site. DEEP is a mixed-initiative decision support system that utilizes past experiences to suggest courses of action (COAs) for new situations. It was designed as a distributed multi-agent system, using agents to maintain and exploit the experiences of individual commanders, as well as to transform suggested past plans into potential solutions for new problems. The commander, through the agent, can view and modify the contents of the shared repository. Agents interact through a common knowledge repository, represented by a blackboard in the initial architecture.
The blackboard design pattern was selected because of its opportunistic reasoning capabilities. The initial design called for an Open Source blackboard written in LISP. The intent was to extend the blackboard to be a distributed environment; however, detailed examination revealed difficulties integrating LISP with Java, and the lack of a networking standard for LISP – each LISP implementation has its own networking implementation. Consequently, the combined RRC and AFRL team decided to develop a true distributed blackboard, using Java for platform independence. Java was chosen for ease of development and integration with other projects. A Java Distributed Blackboard (JDB) had the benefits of a generic shared memory space which could be molded and extended to fit the exact needs of the DEEP project. Research also included investigations into various scalability software suites and frameworks as well as different database management systems. The team decided to use Hibernate – an object/relational persistence and query service – to interact with the database.
Comprehensive testing revealed the JDB was limited only by system resources and network bandwidth. Thus, its architecture is well suited for dealing with ill-defined, complex situations such as warfare.
2. INTRODUCTION
The objective of this contract was to investigate, design, prototype, test, evaluate and demonstrate a capability for providing a Joint Force Commander (JFC) and/or Joint Force Air Component Commander (JFACC) with a set of tools to cooperate with other Air Operation Centers (AOC) and command centers. The end goal was to assist the JFC/JFACC in dynamically building and adjusting combat plans and execution decisions based on a predictive battlespace environment, drawing from past experiences and applying them to present situations. The result is a highly robust decision support environment, enabling the commander to develop better, more robust plans that meet strategic objectives by:
- Understanding the operational picture (past, present and future)
- Characterizing and predicting likely future events within the planning and execution process
- Generating options for the commander
- Comprehending the impact of decisions made today on the battlespace of tomorrow
- Blending Commander’s intent and situational awareness/understanding into a predictive environment with a veracity level that would allow operational domain plans to be developed within the pace of modern combat
- Developing an environment that provides the JFC/JFACC the ability to:
- Anticipate plausible end states (based on “red” and “blue” actions)
- Evaluate possible courses of action
- Provide rapid, easy access to shared battlespace information
- Allow for intuitive, mixed-initiative planning
A major element of DEEP was to support the cognitive domain within the strategic layer of the Office of the Secretary of Defense Command and Control (C2) Conceptual Framework. Many earlier research and development programs provided the technology underpinning for the DEEP project, such as:
- Computational behavioral modeling
- Intelligent agents/enhanced machine-to-machine collaboration
- Immersive interfaces
- Game theory
- Real-time learning
- Episodic memory
- Multi-agent systems
- Collaboration tools
- Blackboard systems
The scope of this effort included:
- Defining opportunities to enhance the DEEP system
- Designing software components to fulfill definitions
- Implementing designs
- Integrating interoperable software modules within existing CPE baseline
- Testing components
- Documenting test and experiment results
- Documenting integration steps, system configurations, and installation plans
- Providing software documentation, code listings, and user manuals for developed capabilities
The delivered software components included defined service components; graphical user interfaces, where applicable; and installation, configuration and User’s Guide documentation. Presentations and demonstrations on advanced technologies were conducted at the AFRL Rome Research Site and elsewhere.
3. METHODS
3.1 Statement of Work Tasks
For this effort, the Statement of Work tasks were to:
- Research and identify enhancements to CPE Technology Program; define and prototype DEEP capabilities
- Design, develop and prototype DEEP tools
- Design, develop and prototype software to support prediction and assessment of probable COAs using opportunistic reasoning
- Design and develop tools for knowledge bases using blackboard and multi-agent systems technologies
3.2 DEEP Architecture Overview
The scope of this effort was confined to building tools for DEEP. Development of the core components of DEEP itself was not part of this effort. While no discussion of the DEEP architecture is included in this report, Figure 3-1 is provided for reference in the following Java Distributed Blackboard discussion. Detailed descriptions of the DEEP architecture, as well as the underlying aspects of mixed-initiative and episodic planning, are documented elsewhere (Ford and Carozzoni, 2007; Carozzoni and Lawton, 2008; Ford and Lawton, 2008).
3.3 Blackboard
The DEEP system requires a method to communicate and interact. To achieve this, a blackboard mechanism was selected. The blackboard design offers opportunistic reasoning capabilities (Corkill, 1991). It serves as a repository easing communication between systems.
Due to DEEP requirements, the blackboard needed to be extended from a monolithic to a distributed environment; however, research revealed existing commercial blackboards were not distributed. The DEEP team designed and implemented the distributed environment using the design patterns described in *Parallel and Distributed Programming Using C++* (Hughes, C., & Hughes, T. (2003)).
The initial design called for using an Open Source blackboard written in LISP which would be extended to become a distributed environment. Issues soon became evident, including:
- LISP does not integrate well with Java
- LISP does not have a network standard – each LISP implementation has its own networking implementation
Consequently, the Team decided to develop a true distributed blackboard using Java. Java was chosen for ease of development and integration with other projects. A Java Distributed Blackboard had the benefits of a generic shared memory space which could be molded and extended to fit the exact needs of the DEEP project. The Java implementation included the ability to easily choose a port; as a consequence, the operator can run as many blackboards as desired on a single machine.
3.3.1 Java Distributed Blackboard Architecture Overview
The architecture is composed of a hierarchy of Java classes and is illustrated in Figure 3-2. A blackboard starts up as a ‘server’ on a machine. Multiple blackboards connect to this server as clients using Java Remote Method Invocation (RMI). When a client connects, the server propagates everything on the blackboard to the new client(s). When the server has something new on it, it updates its clients. Conversely, when a new object is placed on a client blackboard, the object is given to the server, which then updates all of the clients. This is done to avoid the synchronization issues inherent to a distributed environment.
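As a rough illustration of this contract, the following minimal sketch shows what an RMI-facing blackboard interface might look like. The interface and method names are hypothetical and are not taken from the actual JDB source.

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;

// Hypothetical remote contract for the blackboard 'server': clients call put()
// and the server is responsible for propagating the update to every client.
public interface RemoteBlackboard extends Remote {

    // Place a new object on the server blackboard; the server then updates all clients.
    void put(String partition, Serializable object) throws RemoteException;

    // Return the current contents of a partition so a newly connected client
    // can be brought up to date.
    List<Serializable> snapshot(String partition) throws RemoteException;
}
```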
Agents, or knowledge sources, connect to a blackboard through Java RMI as well. A BBProxy (BlackBoard Proxy) server exists for each blackboard application to facilitate these connections. This proxy provides the interface for the knowledge sources to interact with the blackboard.
3.3.2 Blackboard Components
A Blackboard architectural pattern is traditionally composed of three components: (1) knowledge sources, (2) a core data structure, and (3) a control component.
3.3.3 Knowledge Sources
By connecting to the blackboard, an application has the ability to become a knowledge source. Figure 3-2 shows example knowledge sources from the DEEP project. Also shown is how a knowledge source could contribute external information to the Blackboard. Further, there is a mechanism in place to notify the knowledge sources of Blackboard events.
3.3.4 Core data structure
The current Blackboard core data structure contains “partitions” to allow objects to be sorted and stored based on a unique string. These partitions store objects mapped by their unique identifier (UID). The Team designed and implemented the data structure in the Java Distributed Blackboard such that it could be extended to become a wrapper to a database or other high performance data store.
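A minimal sketch of such a partitioned store is shown below, assuming hypothetical names; the real JDB data structure may differ (for example, it was designed so that it could later wrap a database or other high-performance data store).

```java
import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Partitions are keyed by a string; each partition maps an object's UID to the object.
public class BlackboardStore {

    private final Map<String, Map<String, Serializable>> partitions = new ConcurrentHashMap<>();

    public void put(String partition, String uid, Serializable object) {
        partitions.computeIfAbsent(partition, p -> new ConcurrentHashMap<>()).put(uid, object);
    }

    public Serializable get(String partition, String uid) {
        Map<String, Serializable> objects = partitions.get(partition);
        return objects == null ? null : objects.get(uid);
    }
}
```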
3.3.5 Control
The Java Distributed Blackboard (JDB) contains a controller to handle the flow of information between knowledge sources on the same machine as well as on separate machines. With respect to control in the traditional blackboard sense, however, the paradigm used here gives the knowledge sources the responsibility of contributing to the solution. To facilitate this contribution, a knowledge source registers with the JDB, which ensures it is notified of updates.
3.3.6 Additional Components
In addition to the traditional blackboard components, the RRC Team developed components specific to the Java Distributed Blackboard. These additions include a proxy, blackboard objects, and blackboard utilities.
3.3.7 Proxy
The proxy is an interface used to connect a knowledge source to the blackboard over Java RMI. Through the proxy, a knowledge source can put objects on and retrieve objects from the client blackboard. Other actions supported by the proxy include retrieving an object by its UID and registering new blackboard listeners. Similar to the core data structure, the proxy can be extended to accommodate integration with other applications (new or existing) as needed.
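A hedged sketch of what such a proxy contract might look like is given below; the method and type names are illustrative assumptions and do not reflect the actual BBProxy API.

```java
import java.io.Serializable;
import java.rmi.RemoteException;

// Illustrative proxy contract: knowledge sources use it to put and retrieve
// objects, fetch an object by its UID, and register for update notifications.
public interface BlackboardProxy {

    /** Callback contract for knowledge sources interested in blackboard events. */
    interface Listener {
        void onUpdate(Serializable packet);
    }

    void putObject(Serializable object) throws RemoteException;

    Serializable getObject(String uid) throws RemoteException;

    void registerListener(Listener listener) throws RemoteException;
}
```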
3.3.8 Blackboard objects
For a Java object to be placed on the blackboard, it has to satisfy two requirements: first, it must have a universal identifier; second, it must be serializable. To comply with these requirements, an interface was created that forces the use of a UID; for convenience, this interface also extends `java.io.Serializable`, so serialization is automatic. Essentially, any Java object to be placed on the JDB must implement the JDB object interface.
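A minimal sketch of this object contract, assuming a hypothetical interface name, is:

```java
import java.io.Serializable;

// Anything placed on the blackboard must carry a unique identifier and be
// serializable so it can travel over RMI; extending Serializable makes the
// second requirement automatic for implementing classes.
public interface BlackboardObject extends Serializable {
    String getUID();
}
```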
3.3.9 Blackboard utilities
Several utilities were developed for the DEEP Java Distributed Blackboard, including the Packet, BBUID, log writer, and properties file parser. The main utility is the Packet, which is used by the control to send information. A `BlackboardListener` receives a Packet when it gets an update event from the blackboard; depending on its type, the Packet carries either control information or blackboard objects. The `BBUID` is an identifier which is unique across a network. The remaining utilities (log writer and properties file parser) provide convenience functions.
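The following sketch shows one plausible shape for these utilities; the class, field, and method names are assumptions, not the JDB's actual classes.

```java
import java.io.Serializable;
import java.util.UUID;

// Hypothetical sketches of the utility types described above.
public final class BlackboardUtilities {

    /** Network-unique identifier, here backed by a random UUID. */
    public static final class BBUID implements Serializable {
        private final String value = UUID.randomUUID().toString();
        public String value() { return value; }
    }

    /** Unit of information delivered to listeners on an update event. */
    public static final class Packet implements Serializable {
        public enum Type { CONTROL, OBJECT }
        private final Type type;
        private final Serializable payload;   // control info or a blackboard object
        public Packet(Type type, Serializable payload) {
            this.type = type;
            this.payload = payload;
        }
        public Type type() { return type; }
        public Serializable payload() { return payload; }
    }

    private BlackboardUtilities() { }
}
```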
3.4 Research
Research performed for this effort included investigations into various scalability software suites and frameworks, as well as databases. The following subsections provide a brief overview and the reasons why or why not an item was used.
3.4.1 Terracotta
Terracotta is an open-source infrastructure software product used to scale a Java application across as many computers as needed. Terracotta was considered briefly because of its usefulness for the distributed aspect of DEEP. Although a useful product, the Team decided against tying the project to a specific product early in the development process. Further, DEEP’s portability requirements called for platform independence, and Terracotta lacked support for the Macintosh platform.
3.4.2 Hibernate
Hibernate is an object/relational (O/R) persistence and query service. Queries can be stated in Hibernate’s portable SQL extension (HQL), in native Structured Query Language (SQL), or with an object-oriented criteria and example application program interface (API). For DEEP, Hibernate offered the following advantages:
- Abstracts SQL by utilizing HQL so any database with a Java Database Connectivity (JDBC) connector can be swapped out with ease
- Returns a collection of objects, eliminating the need to build these objects from a result set returned by using SQL+JDBC
- Increases performance by using ‘prepared’ query statements and caching
- Allows a mapping to a data store without modifying existing project code
- Externalizes object-relational mapping to an XML file
While not necessarily considered disadvantages, the Team realized that:
- A Data Access Object (DAO) is required for each object to be mapped to the database
- A JDBC driver has to be installed, as Hibernate uses it as the connection to the database
For DEEP, the main issue would be the notification system. This system could be built by one of two means: either build/use a notification system inside the database, or build a notification system as a wrapper to the database. To utilize Hibernate to persist an object, four steps were necessary (a hedged sketch follows the list below):
1. The to-be-persisted object had to be acceptable to Hibernate – Hibernate prefers that persistent objects follow the ‘Java-bean style’ and be serializable. Time was spent reviewing Hibernate’s documentation to find ways to minimize the changes that would need to be made to the DEEP code.
2. An XML mapping file for the object was necessary – the mappings needed to be simple yet functional. Significant time was spent working out how to map objects within objects.
3. The object had to be stored or retrieved using HQL.
4. The database had to be configured properly to accept these objects – the Team learned to incorporate certain tags into the XML mapping file to minimize or eliminate manual database configuration.
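The sketch below illustrates these four steps using standard Hibernate 3 APIs. The `Plan` class, its mapping file, and the HQL query are hypothetical examples, not code from the DEEP baseline.

```java
import java.io.Serializable;
import java.util.List;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class HibernateSketch {

    // Step 1: a Java-bean style, serializable object acceptable to Hibernate
    // (in a real setup this would be a top-level class matching the mapping).
    public static class Plan implements Serializable {
        private String uid;
        private String name;
        public String getUid() { return uid; }
        public void setUid(String uid) { this.uid = uid; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    /* Step 2: an XML mapping file (Plan.hbm.xml), for example:
       <hibernate-mapping>
         <class name="Plan" table="PLAN">
           <id name="uid" column="UID"><generator class="assigned"/></id>
           <property name="name" column="NAME"/>
         </class>
       </hibernate-mapping>
       Step 4: schema tags such as hbm2ddl.auto in hibernate.cfg.xml can create
       the table automatically, minimizing manual database configuration. */

    public static void main(String[] args) {
        SessionFactory factory = new Configuration().configure().buildSessionFactory();

        // Step 3a: store the object.
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        Plan plan = new Plan();
        plan.setUid("plan-001");
        plan.setName("example");
        session.save(plan);
        tx.commit();
        session.close();

        // Step 3b: retrieve it again with HQL.
        session = factory.openSession();
        List<?> plans = session.createQuery("from Plan p where p.name = :n")
                               .setParameter("n", "example")
                               .list();
        session.close();
        factory.close();
        System.out.println("retrieved " + plans.size() + " plan(s)");
    }
}
```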
Based on the analysis performed, the DEEP Team selected Hibernate.
3.4.3 Database Management Systems
The DEEP team compared and contrasted the differences between Object-Oriented (OO) and Relational (R) database management systems (DBMSs) as related to DEEP. OO DBMSs reviewed included: Versant, Objectivity, Matisse, EyeDB, Ozone, and Cache.
The Team researched O/R mapping, its relevance (it masks the task of mapping objects to tables from the programmer) and identified the following relational DBMSs as having O/R Features:
- DB2
- Greenplum
- Intersystems Cache
- OpenLink Virtuoso
- Valentina
- PostgreSQL
- Hibernate3
- GigaBASE
- Informix
- Oracle
- UniSQL
- VMDS
- Zope / ZODB
3.5 Development
The JDB was designed and modeled using the Unified Modeling Language (UML), and the code was developed in Java for platform independence. Support for agents was considered. On examination, the adaptation capabilities agent was decomposed into a capabilities critic and a capabilities adaptation agent. The capabilities adaptation agent creates an adapted plan out of an instantiated plan, taking into account the critic's scores. It was designed and developed using requirements provided by the DEEP team, then integrated and tested. A third agent, the execution selection agent, was researched, designed, implemented and integrated into the JDB by the RRC team.
The DEEP team tested put and retrieval latencies to and from the JDB with objects of various numbers and sizes. After the initial testing concluded, the DEEP team extended the JDB, implementing the following capabilities:
- Blackboard User Interface
- Agent Support
- Additional Proxies and Methods
- Messaging Support
- O/R Mapping to Oracle Database
- Hibernate
3.6 Other Tasks
Additional tasks performed under this effort included:
- Assisting in the evaluation of the potential for interfacing a commercial military simulation software package, Modern Air Power, to the DEEP system.
- Assisting with testing, debugging and demonstration of the DEEP system.
- Organizing, co-authoring the draft and submitting a paper to the 13th International Command and Control Research and Technology Symposium. (Destafano, 2008) (Note - final submission was undertaken by the author after leaving Rome Research Corporation and becoming an Air Force employee.)
4. RESULTS
This effort implemented a functional Java Distributed Blackboard for the AFRL DEEP project. After confirming the initial design and implementation, the DEEP team researched and implemented a major upgrade to the JDB by integrating Hibernate, the object/relational persistence and query service. Hibernate replaced the majority of the JDB’s database handling and its connection to the blackboard. The implementation was effective while maintaining complete functionality.
Formal testing of the JDB revealed the following:
- A local blackboard is not necessary; an agent (or any knowledge source) may connect to a remote blackboard as if it were on the local machine.
- Put and retrieval latencies to and from the JDB demonstrated that one hundred (100) million objects (500 MB) could be placed on the JDB without slowdown.
5. CONCLUSIONS
DEEP is a mixed-initiative decision support system that utilizes past experiences to suggest COAs for new situations. It was designed as a distributed multi-agent system, using agents to maintain and exploit the experiences of individual commanders as well as to transform suggested past plans into potential solutions for new problems.
The commander, through the agent, can view and modify the contents of the shared repository. Agents interact through a common knowledge repository, represented by a blackboard in the initial architecture. The Blackboard design pattern was selected because of its opportunistic reasoning capabilities. Comprehensive testing revealed the Blackboard was limited only by system resources and network bandwidth. Thus, its architecture is well suited for dealing with ill-defined, complex situations such as warfare.
5.1 Discussion of Findings
The Java Distributed Blackboard is a generic distributed data structure which may be utilized by any application requiring a distributed framework. The DEEP researchers concluded that the Blackboard was limited only by system resources and network bandwidth.
5.2 Issues and Concerns
Issues and concerns experienced during the course of the contract included:
- AFMC computer policy restrictions –
- Software/frameworks under consideration could not be installed until permitted by AFMC. One specific example: Oracle 10g was not, at the time, on the approved list for installation. This affected the choice of databases.
- Loss of local administrator privileges on development computers.
- Personnel changes –
- The initial researcher/developer changed employment status from contractor to Government civilian.
- The replacement researcher/developer changed employers and contracts.
- Technical –
- The team experienced setbacks after learning that two repositories cannot operate on the same machine.
- Machines on different subnets (rome-2k and jbi-dev) within the AFRL network could not be connected to the same repository.
6. REFERENCES
7. LIST OF ACRONYMS
<table>
<thead>
<tr>
<th>Acronym</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>AFMC</td>
<td>Air Force Materiel Command</td>
</tr>
<tr>
<td>AFRL</td>
<td>Air Force Research Laboratory</td>
</tr>
<tr>
<td>API</td>
<td>Application Programming Interface</td>
</tr>
<tr>
<td>AOC</td>
<td>Air Operation Center</td>
</tr>
<tr>
<td>BB</td>
<td>Blackboard</td>
</tr>
<tr>
<td>C2</td>
<td>Command and Control</td>
</tr>
<tr>
<td>COA</td>
<td>Course of Action</td>
</tr>
<tr>
<td>CPE</td>
<td>Commander’s Predictive Environment</td>
</tr>
<tr>
<td>DAO</td>
<td>Data Access Object</td>
</tr>
<tr>
<td>DBMS</td>
<td>Data Base Management System</td>
</tr>
<tr>
<td>DEEP</td>
<td>Distributed Episodic Exploratory Planning</td>
</tr>
<tr>
<td>FTR</td>
<td>Final Technical Report</td>
</tr>
<tr>
<td>HQL</td>
<td>Hibernate Query Language</td>
</tr>
<tr>
<td>ICCRTS</td>
<td>International Command and Control Research and Technology Symposium</td>
</tr>
<tr>
<td>JDB</td>
<td>Java Distributed Blackboard</td>
</tr>
<tr>
<td>JDBC</td>
<td>Java Database Connectivity</td>
</tr>
<tr>
<td>JFACC</td>
<td>Joint Force Air Component Commander</td>
</tr>
<tr>
<td>JFC</td>
<td>Joint Force Commander</td>
</tr>
<tr>
<td>LISP</td>
<td>LISt Processing</td>
</tr>
<tr>
<td>OO</td>
<td>Object-Oriented</td>
</tr>
<tr>
<td>O/R</td>
<td>Object-Relational</td>
</tr>
<tr>
<td>PM</td>
<td>Program Manager</td>
</tr>
<tr>
<td>R&D</td>
<td>Research And Development</td>
</tr>
<tr>
<td>RISB</td>
<td>Information Systems Research Branch</td>
</tr>
<tr>
<td>RMI</td>
<td>Remote Method Invocation</td>
</tr>
<tr>
<td>RRC</td>
<td>Rome Research Corporation</td>
</tr>
<tr>
<td>RRS</td>
<td>Rome Research Site</td>
</tr>
<tr>
<td>SQL</td>
<td>Structured Query Language</td>
</tr>
<tr>
<td>UI</td>
<td>User Interface</td>
</tr>
<tr>
<td>UID</td>
<td>Unique Identifier</td>
</tr>
<tr>
<td>UML</td>
<td>Unified Modeling Language</td>
</tr>
<tr>
<td>XML</td>
<td>eXtensible Markup Language</td>
</tr>
</tbody>
</table>
The Effects of Education on Students’ Perception of Modeling in Software Engineering
Omar Badreddin
Northern Arizona University
Flagstaff, U.S.A
[email protected]
Armon Sturm
Ben-Gurion University of the Negev
Beer Sheva, Israel
[email protected]
Abdelwahab Hamou-Lhadj
Concordia University
Montreal QC, Canada
[email protected]
Timothy Lethbridge
University of Ottawa
Ottawa ON, Canada
[email protected]
Waylon Dixon
Northern Arizona University
Flagstaff, U.S.A
[email protected]
Ryan Simmons
Northern Arizona University
Flagstaff, U.S.A
[email protected]
Abstract—Models in software engineering offer significant potential to improve the productivity of engineers and the quality of the artifacts they produce. Despite this potential, the adoption of modeling in practice remains rather low. Computer science and software engineering curricula may be one factor behind this low adoption.
In this study, we investigate the effects of education on students’ perception of modeling. We conducted a survey in three separate institutions, in Canada, Israel, and the U.S. The survey covers various aspects of modeling and addresses students ranging from the first year of undergraduate studies to the final years of graduate studies.
The survey’s findings suggest that the perception of undergraduate students towards modeling declines as they progress in their studies. While graduate students tend to be more favorable toward modeling, their perception also declines over the years. The results also suggest that students prefer more modeling content to be integrated earlier in the curriculum.
Index Terms—Survey of Perceptions, Pedagogy, Modeling in Software Engineering, UML, Education.
I. INTRODUCTION
Model Driven Engineering promotes the use of models, rather than code, for system development. Models can be easier to understand [1][2], improve communications amongst stakeholders [3], and help generate executable artifacts [4]. In addition, platform independent models can improve system portability, and can facilitate migrating systems from legacy platforms [5].
UML has emerged as the standard modeling language in software engineering. In an empirical assessment of MDE in industry, Hutchison et al. [10] reported that UML was used by 85 percent of the respondents. Petre [16] mentioned many studies indicating that UML is the de facto standard modeling language, or “lingua franca”. The standard is managed by the OMG and supports many aspects of the software development life cycle, from requirements and specification to development and deployment. However, the adoption of modeling in software engineering practice remains dismal. Studies point to significantly low adoption of modeling in practice [2] and show that open source projects remain code-centric by and large [5]. Petre [16] also provided evidence that the actual adoption of UML is quite low.
The reasons behind such low adoption may be attributed to many factors. These include complexity of modeling tools, the lack of compatibility within different tools, the lack of integration of modeling tools within existing environments, and the lack of education about the value of modeling tools and techniques [6].
Our goal in this study is to investigate the effect of education in software engineering and computer science on students’ perception of modeling. In particular, we want to understand what effect, if any, education has on how students perceive the value of models.
We designed and distributed a survey covering many aspects of modeling for students across the full academic spectrum, from undergraduate to postgraduate students. The survey was conducted in three separate universities to allow for different cultures and perspectives.
This paper is organized as follows. In Section II, we review the related work. In Section III, we introduce the study design. We then briefly introduce the modeling-related curriculum at the three participating institutions. The results of the survey are presented in Section V and further discussed and analyzed in Section VI. Threats to validity are discussed in Section VII, and Section VIII concludes and sets out plans for future research directions.
II. RELATED WORK
The perception of UML by professional software engineers has been investigated, with mixed results. Ariadi and Chaudron surveyed 80 professional software engineers about their perception of the value of UML in terms of productivity and quality [7]. Despite the low adoption of UML, its value is perceived very positively in design, analysis and implementation. Other such surveys have reported negative perceptions of UML due to its complexity and incompleteness, and find that UML is perceived to be difficult to learn [8].
UML, as the standard modeling language, is increasingly becoming integrated into academic curricula for undergraduate and graduate students. There have been a number of studies reporting on experiences in teaching UML [9], as well as studies of innovative tools for UML education [11]. In addition, a number of studies on the effectiveness of specific teaching techniques for software engineering students have been reported, such as case-study and problem-based approaches [8].
There have been a number of studies on the comprehension of specific modeling notations, such as the work of Glezer et al. [9] on the comprehension of UML interaction diagrams. However, very little is known about students’ perception of modeling, modeling tools, and the curriculum in terms of modeling coverage and depth.
III. STUDY DESIGN
A. Goal
The goal of this study is to uncover the perceived value and usefulness of models by undergraduate and graduate students in Computer Science and Software Engineering as they progress in their studies. The focus in this study is not on the specific modeling language (e.g., UML or BPMN) but rather on the applicability and usability of models in general.
The research questions we were interested in are the following:
• Do students perceive models as useful? In what contexts, and for what reasons?
• Do students think they need, or wish to have, more substantial modeling education?
• Does students’ perception of modeling evolve over their studies?
B. Intended Subjects
The intended subjects of this study are undergraduate and graduate students in system development related fields such as software engineering, computer science, and information systems engineering.
C. Administering the Survey
The survey was completed by students either in classrooms, labs, or online. Participation was both anonymous and voluntary. The survey was conducted in three institutions: Concordia University in Canada, Ben-Gurion University of the Negev in Israel, and Northern Arizona University in the U.S.
D. The Survey
The survey consisted of two parts: demographic data and a reflection on modeling.
The demographic data part included questions regarding the university, the study program, the academic year, age, work experience (with ranges: 0-3, 4-7, 8-12, 13+), and the average grade (with ranges: 65-75, 76-85, 86-90, 91+).
The reflection part included the following questions:
1. Applicability of Models (APP)
a. Models are very useful
b. Models are useful for documentation
c. Models are useful for communication
d. Models are useful for representing requirements
e. Models are useful for specification
f. Models are useful for implementation and/or code generation
g. Models are useful for testing
h. Models are useful for maintenance
2. Modeling Characteristics (CHR)
a. Models are normally used just as drawings
b. Code is just a type of model
c. Models are precise (i.e., unambiguous)
d. Models can be easily checked to find opportunities for improvement
e. Models are more comprehensible than code
f. In general, models are easy to understand
g. Models facilitate abstractions and comprehension
h. Textual models are easier to understand than graphical models
i. Textual models are easier to construct than graphical models
j. Models are implementation independent
k. Models help provide flexibility during the development process
l. Modeling is counterproductive since the models need to be changed all the time
m. Models are usually abandoned after the code is written
3. Implementation (IMP)
a. Modeling tools are not mature enough
b. Modeling tools are too complex and are difficult to learn
c. It is not easy for developers to obtain modeling tools that meet their needs
4. Modeling Education (EDU)
a. Modeling should be taught before programming
b. Modeling and programming should be taught at the same time
c. Modeling is not being taught sufficiently
d. Modeling should be integrated in most software engineering and computer science courses
These questions had Likert scale: Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree, and NA; representing a scale from 5 to 1. The full list of questions as well as the raw data is included in the supplementary material.
IV. BACKGROUND ON THE INSTITUTIONS AND THEIR MODELING COURSES
In the following, we elaborate on the educational background of the participants by introducing the curriculum at each of the institutions.
A. Concordia University
Concordia University (CU) offers two related programs that are both managed by the Department of Computer Science and Software Engineering: Computer Science and Software Engineering. The Computer Science program focuses primarily on the study and design of computer systems, such as design algorithms, languages, hardware architecture, systems software, and applications software and tools. The Software Engineering program, in contrast, while built on the fundamentals of computer science, is focused more on the principles and practices of engineering to develop creative applications such as computer games, web services, information security, and avionics.
Tables 1 and 2 present the courses in which modeling education takes place, along with their modeling content, in both programs. Although the two programs are administratively separate, in this survey we unified the results of the two programs since the modeling content is largely similar.
### Table 1. Computer Science @ Concordia
<table>
<thead>
<tr>
<th>Sem</th>
<th>Course</th>
<th>Credit (120)</th>
<th>Modeling Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>Fundamentals of Programming</td>
<td>3</td>
<td>Includes basics of object-oriented programming, essentially UML aspects.</td>
</tr>
<tr>
<td>3</td>
<td>System Hardware</td>
<td>3</td>
<td>Abstraction and modeling of system architecture.</td>
</tr>
<tr>
<td>4</td>
<td>Object-Oriented Programming 1 & 2</td>
<td>7</td>
<td>Essentially, a modeling course related to all UML aspects.</td>
</tr>
<tr>
<td>5</td>
<td>Introduction to Database Applications</td>
<td>4</td>
<td>Modeling DB using ERD</td>
</tr>
<tr>
<td>6</td>
<td>Computer Architecture</td>
<td>3</td>
<td>Using modeling construct to teach CA content such as content/data flow, shared memory models, etc.</td>
</tr>
<tr>
<td>6-8</td>
<td>Databases</td>
<td>4</td>
<td>Modeling DB using ERD, OOD, and ODL.</td>
</tr>
<tr>
<td>6-8</td>
<td>Introduction to Software Engineering</td>
<td>4</td>
<td>Using modeling construct to teach other SE content such as design patterns and refactoring.</td>
</tr>
<tr>
<td>6-8</td>
<td>Database Design</td>
<td>4</td>
<td>Essentially, a modeling course related to all UML aspects.</td>
</tr>
<tr>
<td>7-8</td>
<td>Computer Science Project 1 & 2</td>
<td>6</td>
<td>Using models to implement and manage a whole project.</td>
</tr>
</tbody>
</table>
### Table 2. Software Engineering @ Concordia
<table>
<thead>
<tr>
<th>Sem</th>
<th>Course</th>
<th>Credit (120)</th>
<th>Modeling Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>6-8</td>
<td>Software Process</td>
<td>3</td>
<td>Basic principles of SE with activities in software notations and documentations.</td>
</tr>
<tr>
<td>6-8</td>
<td>Software Architecture and Design 1 & 2</td>
<td>6</td>
<td>Essentially, a modeling course related to all UML aspects.</td>
</tr>
<tr>
<td>6-8</td>
<td>User Interface Design</td>
<td>3</td>
<td>Using modeling construct to teach other UID content such as usability engineering, user models, and notations.</td>
</tr>
<tr>
<td>6-8</td>
<td>Control Systems and Applications</td>
<td>3</td>
<td>Using modeling construct to teach CSA content such as block diagrams.</td>
</tr>
<tr>
<td>7</td>
<td>Software Engineering Team Design Project</td>
<td>3.5</td>
<td>Using models to implement and manage a software project.</td>
</tr>
<tr>
<td>8</td>
<td>Capstone Software Engineering Design Project</td>
<td>4</td>
<td>Using models to implement and manage a whole project.</td>
</tr>
</tbody>
</table>
B. Ben-Gurion University of the Negev
At Ben-Gurion University of the Negev (BGU) there are two system engineering programs related to the goal of the survey. The first is the Information System Engineering program, in which the focus is on data analysis; yet graduates of that program are expected to perform development activities as well. The program is managed by the Department of Information System Engineering. Table 3 presents the courses that cover software modeling. Other courses refer to information systems management, such as production management, organizational culture, information retrieval and data mining, operational research, etc. The second program is Software Engineering, which is managed jointly by the departments of Information System Engineering and Computer Science. Graduates of that program serve mainly in development positions. Table 4 presents the courses in which modeling education takes place along with their modeling content. Other courses include computer science foundations such as principles of programming languages, automata, compilation, etc.
### Table 3. Information Systems Engineering @ BGU
<table>
<thead>
<tr>
<th>Sem</th>
<th>Course</th>
<th>Credit (160)</th>
<th>Modeling Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Introduction to Information Systems Engineering</td>
<td>3</td>
<td>Basics of UML, mainly class diagram</td>
</tr>
<tr>
<td>5</td>
<td>Database Systems</td>
<td>3.5</td>
<td>Modeling DB using ERD</td>
</tr>
<tr>
<td>5</td>
<td>Analysis and Design of Information Systems</td>
<td>5</td>
<td>Focus mainly of functional modeling</td>
</tr>
<tr>
<td>6</td>
<td>Object-Oriented Analysis and Design</td>
<td>3.5</td>
<td>Essentially, a modeling course related to all UML aspects.</td>
</tr>
<tr>
<td>7-8</td>
<td>Capstone Project</td>
<td>8</td>
<td>Using models to implement and manage a whole project.</td>
</tr>
</tbody>
</table>
### Table 4. Software Engineering @ BGU
<table>
<thead>
<tr>
<th>Sem</th>
<th>Course</th>
<th>Credit (160)</th>
<th>Modeling Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>Introduction to Software Engineering</td>
<td>2.5</td>
<td>Basics of UML, mainly class diagram</td>
</tr>
<tr>
<td>3</td>
<td>Database Systems</td>
<td>3.5</td>
<td>Modeling DB using ERD</td>
</tr>
<tr>
<td>4</td>
<td>Analysis and Design of Software System</td>
<td>5</td>
<td>Essentially, a modeling course related to all UML aspects as well as DFD.</td>
</tr>
<tr>
<td>5</td>
<td>Topics in Software Engineering</td>
<td>4.5</td>
<td>Using modeling construct to teach other SE content such as design patterns, and refactoring.</td>
</tr>
<tr>
<td>6</td>
<td>Software Implementation Workshop</td>
<td>3</td>
<td>Using models to implement iteratively a small scale application.</td>
</tr>
<tr>
<td>7-8</td>
<td>Capstone Project</td>
<td>8</td>
<td>Using models to implement and manage a whole project.</td>
</tr>
</tbody>
</table>
C. Northern Arizona University
At Northern Arizona University (NAU), there are two related programs. The first is Computer Science, which emphasizes the theoretical foundations of computer science (automata theory, algorithms, etc.); the second is Applied Computer Science, where students are given the option to replace theory courses with more applied courses (such as mobile and web development courses). The Computer Science program at NAU is accredited under ABET [14]. Table 5 presents the courses where modeling is covered along with their content.
### Table 5. Computer Science @ NAU
<table>
<thead>
<tr>
<th>Sem</th>
<th>Course</th>
<th>Credit (120)</th>
<th>Modeling Content</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Introduction to Computer Science 1 (+lab)</td>
<td>4</td>
<td>Basic Class Diagram</td>
</tr>
<tr>
<td>2</td>
<td>Introduction to Computer Science II (+lab)</td>
<td>4</td>
<td>Basic Class Diagrams</td>
</tr>
<tr>
<td>5</td>
<td>Data Base Systems</td>
<td>3</td>
<td>ER Diagrams</td>
</tr>
<tr>
<td>6</td>
<td>Software Engineering</td>
<td>3</td>
<td>Many UML notations are presented (class, state machines, use cases)</td>
</tr>
<tr>
<td>7</td>
<td>Requirements Engineering (Capstone I)</td>
<td>2</td>
<td>Project-specific UML</td>
</tr>
</tbody>
</table>
V. RESULTS
In this section, we present the summarized results for each institution. The complete raw data as well as summarized data are made publicly available\(^1\) to facilitate replication and validation of the results [17].
A. Demographics
In total, we received 195 completed questionnaires across the three universities. Analyzing the profiles of the participating students, as they appear in the following tables, we found that most of the participants have good grades and limited work experience (a fact that emphasizes that their perception is mainly established by their education).
Table 6. Profile of the participating students
(a) Number of Responses
<table>
<thead>
<tr>
<th>Institute</th>
<th>Y1</th>
<th>Y2</th>
<th>Y3</th>
<th>Y4</th>
<th>Grad</th>
</tr>
</thead>
<tbody>
<tr>
<td>NAU</td>
<td>5</td>
<td>10</td>
<td>26</td>
<td>8</td>
<td>2</td>
</tr>
<tr>
<td>BGU-SE</td>
<td>8</td>
<td>12</td>
<td>17</td>
<td>22</td>
<td>25</td>
</tr>
<tr>
<td>BGU-ISE</td>
<td>3</td>
<td>3</td>
<td>23</td>
<td>12</td>
<td></td>
</tr>
<tr>
<td>CU</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>19</td>
</tr>
</tbody>
</table>
(b) Years of experience
<table>
<thead>
<tr>
<th>Institute</th>
<th>Experience (years)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>0-3</td>
</tr>
<tr>
<td>NAU</td>
<td>46</td>
</tr>
<tr>
<td>BGU SE</td>
<td>47</td>
</tr>
<tr>
<td>BGU ISE</td>
<td>35</td>
</tr>
<tr>
<td>CU</td>
<td>12</td>
</tr>
</tbody>
</table>
(c) Average grades obtained in modeling-related courses
<table>
<thead>
<tr>
<th>Institute</th>
<th>Average Grade (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>65-75</td>
</tr>
<tr>
<td>NAU</td>
<td>0</td>
</tr>
<tr>
<td>BGU SE</td>
<td>6</td>
</tr>
<tr>
<td>BGU ISE</td>
<td>5</td>
</tr>
<tr>
<td>CU</td>
<td>0</td>
</tr>
</tbody>
</table>
The response rates for the questionnaire are as follows. For BGU-ISE and BGU-SE, where the survey was conducted online, we got a response rate of 13 percent. For BGU graduate students, NAU, and CU, we had a response rate above 90 percent, as the survey was conducted in class as a paper questionnaire.
B. Reflection on Modeling
Figures 1-3 summarize the average results and give an overview of the results per institution. In the following, we discuss these results.
In general, the perception of BGU’s students towards modeling is positive. In particular, they perceive modeling as a useful means mainly for documentation and communication. One of the reasons for that limited usefulness might be the students’ perception of modeling characteristics. For example, the students perceive models as drawings, find modeling counterproductive, and find textual models (like code) easier to deal with. As for the training they receive, the students think that more training on modeling is required.
Overall, the perception of NAU’s students of modeling is generally positive. Graduate students seem to appreciate modeling for documentation and communication, but they do not find models to be that useful for representing requirements or for specification. Their perception of models drops significantly when it comes to using models for testing and maintenance.
NAU undergraduate students seem to find models more comprehensible than code. This could be explained by the fact that undergraduates find code challenging and/or complex. Graduate students, on the other hand, do not find models to be more comprehensible.
CU students find modeling very useful and believe that it should be integrated into the curriculum earlier (as shown in Figure 3). They also believe that modeling is important for various software engineering tasks, and not just for drawing diagrams. Concordia students also believe that textual modeling is not easier to understand and construct than graphical modeling. We attribute Concordia students’ positive reaction to modeling mainly to the fact that they are graduate students; many of them had also taken graduate courses on modeling. It is also interesting to note that when asked whether modeling and programming should be taught at the same time, Concordia students seem less favorable to this idea (average score is 3.4/5). This might indicate that students wish to see more courses dedicated to using models as the main development artifacts. Courses that tightly combine both perspectives (code and models) seem to reinforce the traditional perception of modeling, which restricts models to the design phase only.
The survey results for the education questions show an upward trend that is more evident for graduate students. Students in general want more training in modeling and feel that modeling should be taught at the same time as coding. Interestingly, graduate students tend to agree more. This can be explained by the fact that graduates appreciate models more and have a greater appreciation of the role of modeling in software engineering, and are therefore more positive about increasing the modeling portions of the curriculum.
\(^1\) https://zenodo.org/record/20367?ln=en#.Veuv5dNViFl
Fig. 1. BGU Results
Fig. 2. NAU Results
Fig. 3. Concordia University Results
VI. CROSS-UNIVERSITY ANALYSIS
A. Perception Trends
Of particular interest to this study is whether curricula have a positive, negative, or neutral effect on how students perceive the value of modeling. We studied the perception trend of both undergraduates and graduates as follows. For undergraduates, we analyze the changes in perception from year to year, from year 1 to year 4. For example, if students’ average perception of “Models Usefulness” is 3.0/5.0 in year 1 and 4.0/5.0 in year 2, the perception has improved from year 1 to year 2. For graduates, we analyze the difference between the undergraduate and graduate averages: we take the average of the entire data set for undergraduates and subtract it from the average for graduates.
This analysis is performed using only a subset of questions that reflect models’ usefulness and value. The subset includes the following questions as listed in Section III.D: 1.a through 1.h, 2.c through 2.g, 2.j, 2.k, and 2.l. We also report on students’ perception of modeling education using the answers to questions 4.a through 4.d.
We use the sign analysis technique as reported in [18]: we count the number of positives (indicating perception improvement) and negatives (indicating perception decline). The results are summarized in the following table; a small illustrative sketch of the counting procedure follows the table.
Table 7. Sign analysis of the survey results
<table>
<thead>
<tr>
<th></th>
<th>Usefulness (+)</th>
<th>Usefulness (-)</th>
</tr>
</thead>
<tbody>
<tr>
<td>NAU UG</td>
<td>+ 0</td>
<td>- 14</td>
</tr>
<tr>
<td>NAU G</td>
<td>+ 5</td>
<td>- 13</td>
</tr>
<tr>
<td>BGU UG</td>
<td>+ 6</td>
<td>- 10</td>
</tr>
<tr>
<td>BGU G</td>
<td>+ 3</td>
<td>- 12</td>
</tr>
</tbody>
</table>
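As a concrete illustration of the sign analysis described above, the following small sketch (with invented numbers, not the study's data, and a simplified aggregation) counts year-over-year improvements and declines in the average score of each question:

```java
// Hedged sketch of the sign analysis: for each question, compare the
// year-over-year average scores and count improvements (+) and declines (-).
public class SignAnalysis {
    public static void main(String[] args) {
        // rows = questions, columns = average score per study year (Y1..Y4); illustrative data
        double[][] averages = {
            {3.8, 3.6, 3.4, 3.3},   // e.g. "models are very useful"
            {3.5, 3.6, 3.2, 3.1},   // e.g. "models are useful for documentation"
        };
        int positives = 0, negatives = 0;
        for (double[] question : averages) {
            for (int year = 1; year < question.length; year++) {
                double delta = question[year] - question[year - 1];
                if (delta > 0) positives++;
                else if (delta < 0) negatives++;
            }
        }
        System.out.println("+" + positives + " / -" + negatives);
    }
}
```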
NAU undergraduate students’ perception of modeling declines as they progress in their undergraduate education, as evidenced by 14 negatives and 0 positives. This result is also reflected in students’ perception of modeling education (0 positives and 5 negatives). For NAU graduate students, the data is more balanced, but remains overall negative. One possible explanation might be that students with a low perception of modeling do not enroll in graduate studies, or that perception of modeling is an indicator of academic success.
The results for BGU are more balanced, but remain overall negative. The perception of usefulness and education among undergraduates and graduates tends to decline over the years of their education.
B. Cross-University Trends
For the cross-university analysis, we are interested in answering the following high-level questions.
Q1. Do students perceive models to be more useful for documentation and communication, as opposed to software development activities (code generation, implementation and maintenance)?
Q2. Do students in general think that more modeling needs to be integrated into the curriculum?
Q3. How do students perceive modeling tools? For this question, we limit our data analysis to graduate students, as undergraduates may not have sufficient maturity to understand the distinction between the tools and the approach.
Q4. How does the students’ perception change from their undergrad education to their final years in graduate studies?
For Q4, we found a consistent pattern of declining perceptions in both NAU and BGU undergraduate and graduate students. In general, the decline was more prominent for undergraduate than for graduate students.
This can be interpreted in a number of ways: 1) the curriculum fails to highlight the value of modeling in software engineering; 2) students come to the program assuming an unrealistically high value of modeling, and during their education the curriculum does not improve on that initial perception; or 3) for large software projects, students fail to discover the value of modeling (usually UML) and may rely exclusively on code. As a result, students may conclude that modeling is not as useful as they initially thought. The lack of tools may also contribute to this. As they advance in their programs, students usually want to build interesting systems that run quickly so they can make modifications and improve functionality; the unavailability of good tools makes this difficult. This may be a factor that discourages students from adopting the modeling paradigm at advanced stages. This question requires further investigation to uncover exactly why the perception declines.
However, graduates tend to perceive modeling more favorably than undergraduates, especially for communication, documentation, and tool availability and readiness, with the exception of a few aspects of models’ suitability for software development and testing. This may be explained by the nature of the work performed by graduates versus undergraduates: graduates may be using models to abstract ideas and communicate early concepts, tasks for which models may be more suitable than code.
VII. THREATS TO VALIDITY
Threats to validity of this study are discussed in this section.
A. Question Bias
The majority of the questions in this study were presented in the positive sense (i.e., “models are useful”). It is possible that negatively phrased questions would have had a different impact on how participants responded.
B. Profile of the Respondents
The researchers in this study did not have control over selecting participants. It is possible that those participants who opted to complete the survey, or those who decided to complete the survey after it had started, may have had different views on modeling than the general population. Our study collected profiling data and, as discussed in this paper, we attempted to analyze the results while taking the collected profiling data into account.
C. Different Modeling Teaching Approaches
The three participating institutions deployed different curricula and different teaching styles. It is possible that the participating universities’ teaching of modeling may have influenced the views of the participants. This could in effect mean that the participating universities are not a good representation of the general population. This external validity threat was minimized by the fact that three different institutions participated, and that participants were not selected from a specific group or study year.
VIII. CONCLUSION
This paper reports on a survey that was conducted in three independent institutions. The goal of the survey is to uncover students’ perception of the value of modeling in software development. Participants included students from the first undergraduate year to the final year of the graduate program.
The results suggest that students’ perception of the value of modeling declines as they progress in their education. This was true for both undergraduate and graduate students. These results warrant further investigation into why this is the case. The decline in perception may be attributed to unrealistic initial perceptions of modeling among young students, inadequate coverage of modeling topics, a lack of adequate modeling tools, or the immaturity or unsuitability of the modeling techniques and approaches for students’ projects. It is also possible that UML is simply not appropriate for defining and/or implementing the problems and solutions they face.
The results suggest that graduate students on average appreciate modeling more than undergraduates. This could be attributed to the nature of the tasks that graduate students perform, which may be more suitable for modeling approaches.
The results of the study call for further investigation of the reasons for the relatively low perception of modeling usefulness by students. This can be done through further correlation analysis and by interviewing students about their perceptions and the reasons behind them. Furthermore, the results call for revisiting modeling curricula in order to introduce improvements and to better engage students with modeling.
REFERENCES
|
Uses and Benefits of Function Points
April 2001
© Total Metrics Pty. Ltd
USES AND BENEFITS OF FUNCTION POINTS
1 INTRODUCTION
2 MANAGING PROJECT DEVELOPMENT
2.1 FPA USES AND BENEFITS IN PROJECT PLANNING
2.1.1 Project Scoping
2.1.2 Assessing Replacement Impact
2.1.3 Assessing Replacement Cost
2.1.4 Negotiating Scope
2.1.5 Evaluating Requirements
2.1.6 Estimating Project Resource Requirements
2.1.7 Allocating Testing Resources
2.1.8 Risk Assessment
2.1.9 Phasing Development
2.2 FPA USES AND BENEFITS IN PROJECT CONSTRUCTION
2.2.1 Monitoring Functional Creep
2.2.2 Assessing and Prioritizing Rework
2.3 FPA USES AND BENEFITS AFTER SOFTWARE IMPLEMENTATION
2.3.1 Planning Support Resources and Budgets
2.3.2 Benchmarking
2.3.3 Identifying Best Practice
2.3.4 Planning New Releases
2.3.5 Software Asset Valuation
2.3.6 Outsourcing Software Production and Support
3 CUSTOMISING PACKAGED SOFTWARE
3.1 BACKGROUND
3.2 ESTIMATING PACKAGE IMPLEMENTATIONS
4 SUMMARY
USES AND BENEFITS OF FUNCTION POINTS
1 INTRODUCTION
Industry experience has shown that an emphasis on project management and control offsets much of the risk associated with software projects. One of the major components of better management and control of both in-house development and a package implementation is measurement.
This includes measurement of:
- the scope of the project e.g.
⇒ software units to be delivered
- performance indicators of efficiency and cost effectiveness e.g.
⇒ cost per unit of software delivered,
⇒ staff resources per unit of software delivered,
⇒ elapsed time to deliver a unit of software.
- quality indicators e.g.
⇒ number of defects found per unit of software delivered
The outcome of a Function Point count provides the metric ‘unit of software delivered’ and can be used to assist in the management and control of software development, customisation or major enhancements from early project planning phases through to the ongoing support of the application.
Knowing the software size facilitates the creation of more accurate estimates of project resources and delivery dates and facilitates project tracking to monitor any unforeseen increases in scope. The measurement of the performance indicators enables benchmarking against other development teams and facilitates better estimating of future projects. These are only some of the ways in which Function Point Analysis (FPA) can assist IT management. These and other lesser known ways in which FPA can assist IT to move towards ‘best practice’ in the management of their software products and processes, are discussed in the following sections.
The benefits of using measurement to support management decision-making, can only be achieved if the information supporting these decisions is relevant, accurate and timely. In order to ensure the quality of their measurement data, organisations need to implement a ‘measurement process’. The cost of implementing the activities, procedures and standards to support the function point counting process will depend on the size and structure of the organisation and their measurement needs. These considerations are discussed in the last section “Costs of Implementing Function Point Analysis”.
© Copyright Total Metrics (Australia)
2 MANAGING PROJECT DEVELOPMENT
2.1 FPA Uses and Benefits in Project Planning
2.1.1 Project Scoping
A recommended approach for developing function point counts is to first functionally decompose the software into its elementary functional components (base functional components). This decomposition may be illustrated graphically as a functional hierarchy. The hierarchy provides a pictorial 'table of contents' or 'map' of the functionality of the application to be delivered. This approach has the advantage of easily conveying the scope of the application to the user, not only by illustrating the number of functions delivered by each functional area, but also by showing the comparative size of each functional area measured in function points.
2.1.2 Assessing Replacement Impact
If the software to be developed is planned to replace existing production applications it is useful to assess if the business is going to be delivered more, less or the same functionality. The replacement system’s functionality can be mapped against the functionality in the existing system. A quantitative assessment of the difference can be measured in function points. Note, this comparison can only be done if the existing applications have already been sized in Function Points.
2.1.3 Assessing Replacement Cost
Multiplying the size of the application to be replaced by an estimate of the dollar cost per function point to develop, enables project sponsors to develop quick estimates of replacement costs. Industry derived costs are available and provide a ballpark figure for the likely cost. Industry figures are a particularly useful reference if the re-development is for a new software or hardware platform not previously experienced by the organisation. Ideally organisations should establish their own ‘cost per function point’ metrics for their own particular environment based on project history.
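As a purely illustrative calculation (the application size here is hypothetical, and the rate is the ISBSG median quoted in the footnote to this section): replacing an existing application sized at 2,000 function points at roughly $US716 per function point suggests a ballpark replacement cost of 2,000 × 716, or about $US1.4 million.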
If you are considering implementing a ‘customised off the shelf’ package solution then this provides a quick comparison of the estimated package implementation costs to compare with an in-house build. Package costs typically need to include the cost of re-engineering the business to adapt the current business processes to those delivered by the package. These costs are usually not a consideration for in-house developed software.
---
1 International Software Benchmarking Standards Group Release 6 Report April 2000 provides cost value for software projects in 1999 – median costs to develop a function point = $US716, average costs = $US849 per function point. Cost data is derived from 56 projects representing a broad cross section of the software industry. Industry sectors represented are banking, insurance, communications, government and financial services organizations. They include a mixture of languages, platforms, application types, development techniques, project types, size (50 – 3,000 function points) and effort (from under 1000 to 40,000 hours). Most were implemented between 1995 and 1997. All costs include overheads and the effort data was recorded for the development team and support services.
2.1.4 Negotiating Scope
Initial project estimates often exceed the sponsor's planned delivery date and budgeted cost. A reduction in the scope of the functionality to be delivered is often needed so that the project can be delivered within predetermined time or budget constraints. The functional hierarchy provides the 'sketch-pad' for scope negotiation; that is, it enables the project manager and the user to work together to identify and flag (label) those functions which are:
- **mandatory** for the first release of the application,
- **essential** but not mandatory,
- **optional** and could be held over to a subsequent release.
The scope of the different scenarios can then be quickly determined by measuring the functional size of the different scenarios. E.g.: the project size can be objectively measured to determine what the size (and cost and duration) would be if:
- all functions are implemented
- only mandatory functions are implemented
- only mandatory and essential functions are implemented.
This allows the user to make more informed decisions on which functions will be included in each release of the application based on their relative priority compared to what is possible given the time, cost and resource constraints of the project.
2.1.5 Evaluating Requirements
Functionally sizing the requirements for the application quantifies the different types of functionality delivered by an application. The function point count assigns function points to each of the function types: External Inputs, Outputs and Enquiries, and Internal and External Files.
Industry figures available from the ISBSG Repository² for projects measured with IFPUG function points indicate that 'complete' applications tend to have consistent and predictable ratios of each of the function types. The profile of functionality delivered by each of the function types in a planned application can be compared to the typical profile from implemented applications, to highlight areas where the specifications may be incomplete or there may be anomalies.
The following pie chart illustrates the function point count profile for a planned Accounts Receivable application compared to that from the ISBSG data. The reporting functions (outputs) are lower than predicted by industry comparisons. Incomplete specification of reporting functions is a common phenomenon early in a project's lifecycle and highlights the potential for substantial scope creep later in the project as the user identifies all their reporting needs.
The quantitative comparison below shows that the reporting requirements were lower than expected by about half (14% compared to the expected 23% of the total function points). The project manager in this case verified with the user that the first release of the software would require all reporting requirements, and the user indicated that more reports were likely to be specified. The project manager increased the original count to allow for the extra 9% and based his early project estimates on the higher figure that was more likely to reflect the size of the delivered product. The function point measurement activity enabled the project manager to quantify the potential missing functionality and justify his higher, more realistic estimate.
---
² International Software Benchmarking Standards Group (ISBSG) is an international group of representatives from international metrics organizations who collect project data from countries including Australia, Austria, Canada, Denmark, Germany, Hong Kong, India, Japan, New Zealand, Norway, Poland, the United Kingdom and the United States.

**Figure 1**
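As a worked illustration (the 500 function point base is hypothetical; the percentages are those quoted above): if the original count were 500 function points, reporting functions at 14% amount to about 70 points, whereas the expected 23% would be about 115 points; adding the missing 9% (about 45 points) gives an adjusted size of roughly 545 function points on which to base the early estimates.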
2.1.6 Estimating Project Resource Requirements
Once the scope of the project is agreed, the estimates for effort, staff resources, costs and schedules need to be developed. If productivity rates (*hours per function point, $cost per function point*) from previous projects are known, then the project manager can use the function point count to develop the appropriate estimates. If your organisation has only just begun collecting these metrics and does not have sufficient data to establish its own productivity rates, then the ISBSG industry data can be used in the interim.
2.1.7 Allocating Testing Resources
The functional hierarchy developed as part of the function point count during project development can assist the testing manager to identify high complexity functional areas which may need extra attention during the testing phase. Dividing the total function points for each functional area by the total number of functions allocated to that group of functions, enables the assessment of the relative complexity of each of the functional areas.
The effort to perform acceptance testing and the number of test cases required is related to the number and complexity of the user functions within a functional area. Quantifying the relative size of each functional area will enable the project manager to allocate appropriate testing staff and check relative number of test cases assigned.
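As a hypothetical illustration: a functional area of 120 function points spread across 10 functions (an average of 12 points per function) is relatively more complex than an area of 90 function points spread across 30 functions (3 points per function), and would normally justify proportionally more testing effort and more test cases per function.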
2.1.8 Risk Assessment
Many organisations have large legacy software applications that, due to their age, are unable to be quickly enhanced to respond to the needs of their rapidly changing business environments. Over time these applications have been patched and expanded until they have grown to monstrous proportions. Frustrated by long delays in implementing changes,
lack of support for their technical platform and expensive support costs, management will often decide to redevelop the entire application. For many organisations this strategy of rebuilding their super-large applications has proved to be a disaster resulting in cancellation of the project mid-development. Industry figures show that the risk of project failure rapidly increases with project size. Projects less than 500 function points have a risk of failure of less than 20% in comparison with projects over 5000 function points which have a probability of cancellation close to 40%. This level of risk is unacceptable for most organisations.
Assessing planned projects for their delivered size in function points enables management to make informed decisions about the risk involved in developing large highly integrated applications or adopting a lower risk phased approach described below.
2.1.9 Phasing Development
If the project manager decides on a phased approach to the project development then related modules may be relegated to different releases. This strategy may require temporary interfacing functionality to be built in the first release to be later decommissioned when the next module is integrated. The function point count allows project managers to develop ‘what if scenarios’ and quantify the project scope of each phase as a means of making objective decisions. Questions to which quantitative answers can be provided are:
- how much of the interfacing functionality can be avoided by implementing all of the related modules in release one?
- what is the best combination of potential modules to group within a release to minimise the development of temporary interfacing functions?
If it is decided to implement the application as a phased development then the size of each release can be optimised to that which is known to be manageable. This can be easily done by labelling functions with the appropriate Release and performing ‘what-if’ scenarios by including and excluding functions from the scope of the count for the release.
2.2 FPA Uses and Benefits in Project Construction
2.2.1 Monitoring Functional Creep
Function point analysis provides project management with an objective tool by which project size can be monitored for change, over the project’s lifecycle.
---
3 Data within the ISBSG Repository Release 6 supports the premise that smaller projects are successful. Over 65% of the projects in the repository are less than 500 function points and 93% of the projects are less than 2000 function points. The repository is populated by industry projects, voluntarily submitted by organizations that want to benchmark their project’s performance against industry projects with a similar profile. Consequently organizations tend to submit successfully completed projects which have better than average performance i.e. the ones which did not ‘fail’.
4 Software Productivity Research
5 At a median industry cost of $716/fp delivered, a 5000 function point project is risking $3.5 million dollars.
6 Industry experience suggests that the best managed projects, which deliver quality software on time and within budget, tend to be less than 700 function points and at most around 1500 function points.
As new functions are identified, or existing functions are removed or changed during the project, the function point count is updated and the impacted functions are appropriately flagged. The project scope can be easily tracked and reported at each of the major milestones.
If the project size exceeds the limits allowed in the initial estimates then this will provide an early warning that new estimates may be necessary or alternatively highlight a need to review the functionality to be delivered by this release.
2.2.2 Assessing and Prioritizing Rework
Function Point Analysis allows the project manager to objectively and quantitatively measure the scope of impact of a change request and estimate the resulting impact on project schedule and costs. This immediate feedback to the user on the impact of the rework allows them to evaluate and prioritise change requests.
The cost of rework is often hidden in the overall project costs and users and developers have no means to quantify its impact on the overall project productivity rates. Function point analysis enables the project manager to measure the functions that have been reworked due to user-initiated change requests. The results provide valuable feedback to the business on the potential cost savings of committing user resources early in the project to establish an agreed set of requirements and minimising change during the project lifecycle.
2.3 FPA Uses and Benefits after Software Implementation
2.3.1 Planning Support Resources and Budgets
The number of personnel required to maintain and support an application is strongly related to the application's size. Knowing the functional size of the application portfolio allows management to confidently budget for the deployment of support resources. The following figure demonstrates this relationship as observed within an Australian financial organisation. The average maintenance assignment scope (number of function points supported per person) for this organisation is 833 function points per person. The assignment scope has been found to be negatively influenced by the age of the application and the number of users, i.e. as both these parameters increase the assignment scope decreases. Capers Jones' figures show similar assignment scopes: for ageing, unstructured applications with high complexity an assignment scope of 500 function points per person is not unusual, whereas for newer, structured applications skilled staff can support around 1500 – 2000 function points per person.
---
7 The Victorian State Government in Australia has adopted a recommended policy for Government departments to manage and control government out-sourced development projects using Function Points. Suppliers tender for the development based on a fixed price in dollars per function point. Scope changes are automatically charged by the supplier at a pre-determined contracted charge-rate based on the number of function points impacted and the stage at the life cycle when the change was introduced. The government policy underpinning this approach is called ‘Southern Scope’. More information is available at: www.mmv.vic.gov.au/southernscope
8 Where maintenance and support includes defect repairs and very minor enhancements.
Figure 2 Relationship between the Size of an Application and the Number of Support staff (Source - Total Metrics 1999)
Once implemented, applications typically need constant enhancement in order to respond to changes in direction of an organisation’s business activities. Function points can be used to estimate the impact of these enhancements. The baseline function point count of the existing application will facilitate these estimates. As the application size grows with time the increasing assignment scope will provide the justification to assign more support staff.
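For example (the portfolio size is hypothetical; the rates are those quoted above): a portfolio of 10,000 function points at the organisation's average assignment scope of 833 function points per person suggests a support team of roughly 12 people, whereas at the 500 function points per person typical of ageing, complex applications the same portfolio would need around 20.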
2.3.2 Benchmarking
The function point count of delivered functionality provides input into productivity and quality performance indicators. These can then be compared to those of other in-house development teams and implementation environments. Benchmarking internally and externally with industry data enables identification of best practice. External benchmarking data is readily available in the ISBSG Repository.
2.3.3 Identifying Best Practice
Project managers seeking 'best practice' in their software development and support areas recognise the need to adopt new tools, techniques and technologies to improve the productivity of the process and the quality of the products they produce. Baselining current practice enables management to establish current status and set realistic targets for improvement. Ongoing measurement of productivity and quality key performance indicators enables management to assess the impact of their implemented changes and identify where further improvements can be made. Function points are the most universally accepted method to measure the output from the software process. They are a key metric within any process improvement program because of their ability to normalise data from various software development environments combined with their ability to measure output from a business perspective as compared to a technical perspective.
---
10 For information on how to access the ISBSG data visit: [www.ISBSG.org.au](http://www.ISBSG.org.au)
2.3.4 Planning New Releases
The functional hierarchy of the functionality delivered by an application can also assist the support manager in planning and grouping change requests for each new release of the application. The hierarchy illustrates closely related functions and their relative size. If the impact of change is focussed on a group of related functions then development effort will be reduced particularly in the design, testing and documentation stages of the project. This strategy of evaluating the scope of impact of a change request also reduces project risk by restricting projects to a manageable size and focussing change on a restricted set of related business functions.
2.3.5 Software Asset Valuation
Function Point Analysis is being used increasingly by organisations to support the valuation of their software assets. In the past, software has been considered an expense rather than a capital asset and as such was not included in an organisation's asset register. The most commonly used software valuation method is based on the 'deprival method'. This method values the software based on what it would cost to replace in today's technical environment rather than what it cost originally to build. The industry build rate (dollar cost per function point) is determined and the total replacement value is calculated based on the current functional size of the application.
Since FPA provides a means of reliably measuring software, some organisations have implemented accrual budgeting and accounting in their business units. Under this directive, all assets must be valued based on deprival value and brought to account, thus ensuring better accountability of the organisation's financial spending. Funding via budget allocation is based on assets listed in their financial accounts and their depreciation. In the past, the purchase price of software was recorded as an expense within an accounting year. These more recent accounting practices mean that it can now be valued as an asset and depreciated.
Publicly listed organisations have found that by using this accrual accounting method of measuring software as an asset rather than an expense they can amortise the depreciation over five years rather than artificially decrease the current year’s profit by the total cost of the software. This strategy has a dramatic effect on their share price since once their software is listed as a capital asset it contributes to the overall worth of the company and the total cost of that asset has a reduced impact on the current year’s reported profit.
2.3.6 Outsourcing Software Production and Support
The benefit of functional size measurement in outsourcing contracts is that it enables suppliers to measure the cost of a unit of output delivered from the IT process to the business, and to negotiate with their clients on agreed outcomes.
Specifically, these output-based metrics based on function point analysis have enabled suppliers to:
- quantitatively and objectively differentiate themselves from their competitors
- quantify the extent of annual improvement and the achievement of contractual targets
- negotiate price variations with clients based on an agreed metric
- measure the financial performance of the contract based on unit cost of output
- be in a stronger bargaining position at contract renewal, supported by an established set of metrics
Conversely, these output-based metrics have enabled clients to:
- objectively assess supplier performance based on the outputs delivered rather than concentrating on the inputs consumed
- establish quantitative performance targets and implement supplier penalties and bonuses based on achievement of these targets
- measure the difference between internal IT costs and the cost of outsourcing based on similar output
- quantitatively compare competing suppliers at the contract tender evaluation stage.
Most of the international outsourcing companies use function point based metrics as part of their client service level agreements. Whilst this method of contract management is relatively new, its proponents are strong supporters of the usefulness of the technique. In our experience, once an outsourcing contract has been based on Function Point metrics, subsequent contract renewals expand on their use.
Metrics initiatives have a high cost and need substantial investment, which is often overlooked at contract price negotiation. Both the supplier and the client typically incur costs. However, given the size of the penalties and bonuses associated with these contracts it soon becomes obvious that this investment is necessary.
3 Customising Packaged Software
3.1 Background
For selected MIS applications, implementing a packaged ‘off the shelf’ solution is the most cost effective and time efficient strategy to deliver necessary functionality to the business.
All of the benefits and uses of Function Point Analysis which applied to in-house development projects as described in the previous section can also apply to projects which tailor a vendor-supplied package to an organisation's specific business needs.
Experience shows that Function Point Counting of packages is not always as straightforward as sizing software developed in-house, for the following reasons:
- only the physical and technical functions are visible to the counter. The logical user view is often masked by the physical implementation of the original logical user requirements.
- in most cases the functional requirements, functional specifications, and logical design documentation are not delivered with the software. The counter may have to rely on the User Manual or online help to assist in interpreting the user view.
⇒ The modelling of the logical business transactions often requires the function point counter to work with the client to identify the logical transactions. They do this by investigating the user's functional requirements and interpreting the logical transactions from the package's physical implementation.
- in most cases the names of the logical files accessed by the application's transactions are not supplied by the package vendor.
⇒ The function point counter will need to develop the data model by analysing the data items processed by the application.
However, with sufficient care a reasonably accurate function point count of packaged applications can usually be obtained.
3.2 Estimating Package Implementations
The project estimates for a package solution need to be refined for each implementation depending on the percentage of the project functionality which is:
- native to the package and implemented without change
- functionality within the package which needs to be customised for this installation
- functionality contained within the organisation's existing applications which needs to be converted to adapt to the constraints of the package
- to be built as new functions in addition to the package functions
- to be built as new functions to enable interfacing to other in-house applications
- not to be delivered in this release.
The productivity rates for each of these different development activities (to implement, customise, enhance or build) are usually different. The complexity of assigning an appropriate productivity factor can be compounded when the package provides utilities which enable quick delivery based on changes to rule tables. Change requests which can be implemented by changing values in rule-based tables can be implemented very efficiently compared to a similar user change request that requires source code modification. It is recommended that these different types of activities are identified and that effort is collected against them accordingly, so that productivity rates for the different activity types can be determined.
The functions can be flagged for their development activity type and their relative contributions to the functional size calculated. This will enable fine-tuning of the project estimates.
Another area of concern when developing estimates for package integration is the need to determine the extent that the application module needs to interface with existing functionality. The function point count measures the external files accessed by transactions within this application. A high percentage of interface files (>10%) suggests a high degree of coupling between this application and existing applications. A high degree of interfacing tends to have a significant negative impact on productivity rates and needs to be considered when developing estimates.
4 SUMMARY
Function Point Analysis is a technique that until now has been restricted within many organisations to being used only for better estimating or as input into benchmarking productivity rates. The above examples illustrate a wider range of uses where it can contribute to the better management and control of the whole software production environment.
CS 240A : Examples with Cilk++
- Divide & Conquer Paradigm for Cilk++
- Solving recurrences
- Sorting: Quicksort and Mergesort
- Graph traversal: Breadth–First Search
Thanks to Charles E. Leiserson for some of these slides
Work and Span (Recap)
\[ T_p = \text{execution time on } P \text{ processors} \]
\[ T_1 = \text{work} \quad T_\infty = \text{span}^* \]
*Also called critical-path length or computational depth.*
Sorting
- Sorting is possibly the most frequently executed operation in computing!
- **Quicksort** is the fastest sorting algorithm in practice with an average running time of \(O(N \log N)\), *(but \(O(N^2)\) worst case performance)*
- **Mergesort** has worst case performance of \(O(N \log N)\) for sorting \(N\) elements
- Both based on the recursive **divide–and–conquer** paradigm
Basic Quicksort sorting an array $S$ works as follows:
- If the number of elements in $S$ is 0 or 1, then return.
- Pick any element $v$ in $S$. Call this pivot.
- Partition the set $S \setminus \{v\}$ into two disjoint groups:
- $S_1 = \{x \in S \setminus \{v\} \mid x \leq v\}$
- $S_2 = \{x \in S \setminus \{v\} \mid x \geq v\}$
- Return $\text{quicksort}(S_1)$ followed by $v$ followed by $\text{quicksort}(S_2)$
QUICKSORT
Partition around Pivot
[Slide figure: an example array (13, 45, 34, 14, 56, 32, 21, 31, 78) is partitioned around a pivot into a group of smaller elements and a group of larger elements.]
QUICKSORT
Quicksort recursively
[Slide figure: each group is sorted recursively, yielding the sorted sequence 13 14 21 31 32 45 56 78.]
Parallelizing Quicksort
- Serial Quicksort sorts an array $S$ as follows:
- If the number of elements in $S$ is 0 or 1, then return.
- Pick any element $v$ in $S$. Call this pivot.
- Partition the set $S - \{v\}$ into two disjoint groups:
- $S_1 = \{x \in S - \{v\} \mid x \leq v\}$
- $S_2 = \{x \in S - \{v\} \mid x \geq v\}$
- Return $\text{quicksort}(S_1)$ followed by $v$ followed by $\text{quicksort}(S_2)$
Not necessarily so!
Parallel Quicksort (Basic)
- The second recursive call to `qsort` does not depend on the results of the first recursive call.
- We have an opportunity to speed up the call by making both calls in parallel.
```cpp
// Requires <algorithm> (partition, max), <functional> (bind2nd, less) and
// <iterator> (iterator_traits); the slides assume "using namespace std;".
template <typename T>
void qsort(T begin, T end) {
    if (begin != end) {
        // Partition around the first element (the pivot).
        T middle = partition(
            begin, end,
            bind2nd(less<typename iterator_traits<T>::value_type>(), *begin));
        cilk_spawn qsort(begin, middle);     // sort the left part in parallel
        qsort(max(begin + 1, middle), end);  // sort the right part
        cilk_sync;                           // wait for the spawned call
    }
}
```
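A minimal driver sketch (my own addition, not from the slides; it assumes the Cilk++ convention of a `cilk_main` entry point used with these examples):

```cpp
#include <cstdlib>
#include <iostream>
#include <vector>

// Fill a vector with random integers and sort it with the qsort template above.
int cilk_main(int argc, char* argv[]) {
    int n = (argc > 1) ? std::atoi(argv[1]) : 1000000;
    std::vector<int> a(n);
    for (int i = 0; i < n; ++i)
        a[i] = std::rand();
    qsort(a.begin(), a.end());               // the parallel qsort defined above
    std::cout << "sorted " << n << " elements\n";
    return 0;
}
```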
Performance
```
./qsort 500000 -cilk_set_worker_count 1
>> 0.083 seconds
./qsort 500000 -cilk_set_worker_count 16
>> 0.014 seconds
Speedup = T_1/T_16 = 0.083/0.014 = 5.93

./qsort 50000000 -cilk_set_worker_count 1
>> 10.57 seconds
./qsort 50000000 -cilk_set_worker_count 16
>> 1.58 seconds
Speedup = T_1/T_16 = 10.57/1.58 = 6.67
```
Measure Work/Span Empirically
- `cilkscreen -w ./qsort 50000000`
\[
\begin{align*}
\text{Work} & = 21593799861 \\
\text{Span} & = 1261403043 \\
\text{Burdened span} & = 1261600249 \\
\text{Parallelism} & = 17.1189 \\
\text{Burdened parallelism} & = 17.1162 \\
\#\text{Spawn} & = 50000000 \\
\#\text{Atomic instructions} & = 14
\end{align*}
\]
- `cilkscreen -w ./qsort 5000000`
\[
\begin{align*}
\text{Work} & = 178835973 \\
\text{Span} & = 14378443 \\
\text{Burdened span} & = 14525767 \\
\text{Parallelism} & = 12.4378 \\
\text{Burdened parallelism} & = 12.3116 \\
\#\text{Spawn} & = 50000000 \\
\#\text{Atomic instructions} & = 8
\end{align*}
\]
```cpp
workspan ws;
ws.start();
sample_qsort(a, a + n);
ws.stop();
ws.report(std::cout);
```
Analyzing Quicksort
Assume we have a “great” partitioner that always generates two balanced sets.
Analyzing Quicksort
- Work:
\[ T_1(n) = 2\,T_1(n/2) + \Theta(n) \]
\[ 2\,T_1(n/2) = 4\,T_1(n/4) + 2\,\Theta(n/2) \]
\[ \vdots \]
\[ (n/2)\,T_1(2) = n\,T_1(1) + (n/2)\,\Theta(2) \]
\[ T_1(n) = \Theta(n \lg n) \]
- Span recurrence: \( T_\infty(n) = T_\infty(n/2) + \Theta(n) \)
Solves to \( T_\infty(n) = \Theta(n) \)
\[ \text{Partitioning not parallel!} \]
Analyzing Quicksort
Parallelism: \[ \frac{T_1(n)}{T_\infty(n)} = \Theta(lg \ n) \]
- Indeed, partitioning (i.e., constructing the array \( S_1 = \{ x \in S \setminus \{v\} \mid x \leq v \} \)) can be accomplished in parallel in time \( \Theta(lg \ n) \)
- Which gives a span \( T_\infty(n) = \Theta(lg^2 n) \)
- And parallelism \( \Theta(n/lg \ n) \)
- Basic parallel qsort can be found under $cilkpath/examples/qsort
Not much!
Way better!
The Master Method (Optional)
The **Master Method** for solving recurrences applies to recurrences of the form
\[ T(n) = a \ T(n/b) + f(n) \]
where \( a \geq 1 \), \( b > 1 \), and \( f \) is asymptotically positive.
IDEA: Compare \( n^{\log_b a} \) with \( f(n) \).
* The unstated base case is \( T(n) = \Theta(1) \) for sufficiently small \( n \).
Master Method — CASE 1
\[ T(n) = a \cdot T(n/b) + f(n) \]
Specifically, \( f(n) = O(n^{\log_b a - \varepsilon}) \) for some constant \( \varepsilon > 0 \).
**Solution:** \( T(n) = \Theta(n^{\log_b a}) \).
Master Method — CASE 2
\[ T(n) = a \cdot T(n/b) + f(n) \]
\[ n^{\log_b a} \approx f(n) \]
Specifically, \( f(n) = \Theta(n^{\log_b a} \lg^k n) \) for some constant \( k \geq 0 \).
**Solution:** \( T(n) = \Theta(n^{\log_b a} \lg^{k+1} n) \).
Ex(qsort): \( a = 2, \ b = 2, \ k = 0 \) \( \Rightarrow \) \( T_1(n) = \Theta(n \lg n) \)
Master Method — CASE 3
\[ T(n) = a \cdot T\left(\frac{n}{b}\right) + f(n) \]
Specifically, \( f(n) = \Omega(n^{\log_b a + \varepsilon}) \) for some constant \( \varepsilon > 0 \), and \( f(n) \) satisfies the regularity condition that \( a \cdot f\left(\frac{n}{b}\right) \leq c \cdot f(n) \) for some constant \( c < 1 \).
**Solution:** \( T(n) = \Theta(f(n)) \)
Example: Span of qsort
Master Method Summary
\[ T(n) = a \cdot T(n/b) + f(n) \]
**CASE 1**: \( f(n) = O(n^{\log_b a - \epsilon}) \), constant \( \epsilon > 0 \)
\[ \Rightarrow T(n) = \Theta(n^{\log_b a}) . \]
**CASE 2**: \( f(n) = \Theta(n^{\log_b a} \lg^{k} n) \), constant \( k \geq 0 \)
\[ \Rightarrow T(n) = \Theta(n^{\log_b a} \lg^{k+1} n) . \]
**CASE 3**: \( f(n) = \Omega(n^{\log_b a + \epsilon}) \), constant \( \epsilon > 0 \), and regularity condition
\[ \Rightarrow T(n) = \Theta(f(n)) . \]
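As a quick Case 1 check (my own example, not from the slides):
\[ T(n) = 4\,T(n/2) + n: \quad n^{\log_b a} = n^{\log_2 4} = n^2, \quad f(n) = n = O(n^{2-\epsilon}) \ (\epsilon = 1) \ \Rightarrow \ T(n) = \Theta(n^2). \]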
Mergesort is an example of a recursive sorting algorithm. It is based on the divide-and-conquer paradigm. It uses the merge operation as its fundamental component (which takes in two sorted sequences and produces a single sorted sequence).
Simulation of Mergesort
Drawback of mergesort: Not in-place (uses an extra temporary array)
```cpp
template <typename T>
void Merge(T *C, T *A, T *B, int na, int nb) {
    while (na > 0 && nb > 0) {
        if (*A <= *B) {
            *C++ = *A++; na--;
        } else {
            *C++ = *B++; nb--;
        }
    }
    while (na > 0) {
        *C++ = *A++; na--;
    }
    while (nb > 0) {
        *C++ = *B++; nb--;
    }
}
```
Time to merge $n$ elements = $\Theta(n)$.
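A tiny usage sketch (my own, not from the slides), merging two already-sorted arrays with the serial Merge above:

```cpp
#include <iostream>

int main() {
    int A[] = {1, 3, 5, 9};
    int B[] = {2, 4, 8};
    int C[7];
    Merge(C, A, B, 4, 3);                      // Merge<int> defined above
    for (int x : C) std::cout << x << ' ';     // prints: 1 2 3 4 5 8 9
    std::cout << '\n';
    return 0;
}
```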
```cpp
template <typename T>
void MergeSort(T *B, T *A, int n) {
    if (n==1) {
        B[0] = A[0];
    } else {
        T* C = new T[n];
        cilk_spawn MergeSort(C, A, n/2);
        MergeSort(C+n/2, A+n/2, n-n/2);
        cilk_sync;
        Merge(B, C, C+n/2, n/2, n-n/2);
        delete[] C;
    }
}
```
Parallel Merge Sort
A: input (unsorted)
B: output (sorted)
C: temporary
```cpp
template <typename T>
void MergeSort(T *B, T *A, int n) {
    if (n==1) {
        B[0] = A[0];
    } else {
        T* C = new T[n];
        cilk_spawn MergeSort(C, A, n/2);
        MergeSort(C+n/2, A+n/2, n-n/2);
        cilk_sync;
        Merge(B, C, C+n/2, n/2, n-n/2);
        delete[] C;
    }
}
```
Work: \[ T_1(n) = 2T_1(n/2) + \Theta(n) \]
\[ = \Theta(n \lg n) \]
Span of Merge Sort
\[ T_{∞}(n) = T_{∞}(\frac{n}{2}) + Θ(n) \]
\[ = Θ(n) \]
CASE 3:
\[ n^{\log_{b}a} = n^{\log_{2}1} = 1 \]
\[ f(n) = Θ(n) \]
Parallelism of Merge Sort
**Work:** \[ T_1(n) = \Theta(n \lg n) \]
**Span:** \[ T_\infty(n) = \Theta(n) \]
**Parallelism:** \[ \frac{T_1(n)}{T_\infty(n)} = \Theta(\lg n) \]
*We need to parallelize the merge!*
**Parallel Merge**
**Key Idea:** If the total number of elements to be merged in the two arrays is \( n = na + nb \), with \( na \geq nb \) (the code swaps the arrays otherwise), the total number of elements in the larger of the two recursive merges is at most \( (3/4)\,n \): each recursive merge receives at most half of the larger array \( A \) plus, in the worst case, all of the smaller array \( B \), i.e. at most \( na/2 + nb = n - na/2 \leq n - n/4 = (3/4)\,n \), since \( na \geq n/2 \).
Coarsen base cases for efficiency.
Span of Parallel Merge
```c
template <typename T>
void P_Merge(T *C, T *A, T *B, int na, int nb) {
    if (na < nb) {                  // ensure A is the larger array
        P_Merge(C, B, A, nb, na);
    } else if (na == 0) {           // na >= nb, so both arrays are empty
        return;
    } else {
        int ma = na/2;
        int mb = BinarySearch(A[ma], B, nb);  // # of elements of B below A[ma]
        C[ma+mb] = A[ma];
        cilk_spawn P_Merge(C, A, B, ma, mb);
        P_Merge(C+ma+mb+1, A+ma+1, B+mb, na-ma-1, nb-mb);
        cilk_sync;
    }
}
```
**CASE 2:**
\[ n^{\log_b a} = n^{\log_{4/3} 1} = 1 \]
\[ f(n) = \Theta(n^{\log_b a} \lg^{1} n) = \Theta(\lg n) \]
**Span:**
\[ T_\infty(n) = T_\infty(3n/4) + \Theta(\log n) \]
\[ = \Theta(\log^2 n) \]
```c
template <typename T>
void P_Merge(T *C, T *A, T *B, int na, int nb) {
    if (na < nb) {                  // ensure A is the larger array
        P_Merge(C, B, A, nb, na);
    } else if (na == 0) {           // na >= nb, so both arrays are empty
        return;
    } else {
        int ma = na/2;
        int mb = BinarySearch(A[ma], B, nb);  // # of elements of B below A[ma]
        C[ma+mb] = A[ma];
        cilk_spawn P_Merge(C, A, B, ma, mb);
        P_Merge(C+ma+mb+1, A+ma+1, B+mb, na-ma-1, nb-mb);
        cilk_sync;
    }
}
```
**Work:** \( T_1(n) = T_1(\alpha n) + T_1((1-\alpha)n) + \Theta(\log n) \), where \( 1/4 \leq \alpha \leq 3/4 \).
**Claim:** \( T_1(n) = \Theta(n) \).
Analysis of Work Recurrence
**Work:** \( T_1(n) = T_1(\alpha n) + T_1((1-\alpha)n) + \Theta(lg n), \) where \( 1/4 \leq \alpha \leq 3/4. \)
**Substitution method:** Inductive hypothesis is \( T_1(k) \leq c_1 k - c_2 lg k, \) where \( c_1, c_2 > 0. \) Prove that the relation holds, and solve for \( c_1 \) and \( c_2. \)
\[
T_1(n) = T_1(\alpha n) + T_1((1-\alpha)n) + \Theta(lg n) \\
\leq c_1(\alpha n) - c_2 lg(\alpha n) \\
+ c_1(1-\alpha)n - c_2 lg((1-\alpha)n) + \Theta(lg n)
\]
Analysis of Work Recurrence
**Work:** \( T_1(n) = T_1(\alpha n) + T_1((1-\alpha)n) + \Theta(lg n), \)
where \( 1/4 \leq \alpha \leq 3/4. \)
\[
T_1(n) = T_1(\alpha n) + T_1((1-\alpha)n) + \Theta(lg n)
\leq c_1(\alpha n) - c_2 lg(\alpha n)
+ c_1(1-\alpha)n - c_2 lg((1-\alpha)n) + \Theta(lg n)
\leq c_1 n - c_2 lg(\alpha n) - c_2 lg((1-\alpha)n) + \Theta(lg n)
\leq c_1 n - c_2 ( lg(\alpha(1-\alpha)) + 2 lg n ) + \Theta(lg n)
\leq c_1 n - c_2 lg n
- (c_2(lg n + lg(\alpha(1-\alpha))) - \Theta(lg n))
\leq c_1 n - c_2 lg n
\]
by choosing \( c_2 \) large enough. Choose \( c_1 \) large enough to handle the base case.
Parallelism of P_Merge
Work: \( T_1(n) = \Theta(n) \)
Span: \( T_\infty(n) = \Theta(\log^2 n) \)
Parallelism: \( \frac{T_1(n)}{T_\infty(n)} = \Theta(n/\log^2 n) \)
```cpp
template <typename T>
void P_MergeSort(T *B, T *A, int n) {
    if (n==1) {
        B[0] = A[0];
    } else {
        T C[n];
        cilk_spawn P_MergeSort(C, A, n/2);
        P_MergeSort(C+n/2, A+n/2, n-n/2);
        cilk_sync;
        P_Merge(B, C, C+n/2, n/2, n-n/2);
    }
}
```
**CASE 2:**
\[ n^{\log_b a} = n^{\log_2 2} = n \]
\[ f(n) = \Theta(n^{\log_b a} \lg^0 n) \]
**Work:**
\[ T_1(n) = 2T_1(n/2) + \Theta(n) \]
\[ = \Theta(n \lg n) \]
```cpp
template <typename T>
void P_MergeSort(T *B, T *A, int n) {
    if (n==1) {
        B[0] = A[0];
    } else {
        T C[n];
        cilk_spawn P_MergeSort(C, A, n/2);
        P_MergeSort(C+n/2, A+n/2, n-n/2);
        cilk_sync;
        P_Merge(B, C, C+n/2, n/2, n-n/2);
    }
}
```
CASE 2:
\[ n^{\log_b a} = n^{\log_2 1} = 1 \]
\[ f(n) = \Theta(n^{\log_b a} \lg^2 n) \]
**Span:** \( T_\infty(n) = T_\infty(n/2) + \Theta(\log^2 n) \)
\[ = \Theta(\log^3 n) \]
Parallelism of P_MergeSort
**Work:** \( T_1(n) = \Theta(n \lg n) \)
**Span:** \( T_\infty(n) = \Theta(\lg^3 n) \)
**Parallelism:** \( \frac{T_1(n)}{T_\infty(n)} = \Theta(n/\lg^2 n) \)
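As a rough sense of scale (my own numbers, not from the slides): for \( n = 10^6 \), \( \lg n \approx 20 \), so P_MergeSort's parallelism \( \Theta(n/\lg^2 n) \) is on the order of \( 10^6/400 = 2500 \), compared with only about \( \lg n \approx 20 \) for the version that uses a serial merge.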
Breadth First Search
- Level-by-level graph traversal
- Serial complexity: $\Theta(m+n)$
Graph: $G(E, V)$
- $E$: Set of edges (size m)
- $V$: Set of vertices (size n)
Breadth First Search
- Who is parent(19)?
- If we use a queue for expanding the frontier?
- Does it actually matter?
[Slide figure: a grid of numbered vertices (1–4, 5–8, 9–12, 16–19 shown) explored level by level, Levels 1 through 6.]
Parallel BFS
- Way #1: A custom reducer
```c
void BFS(Graph *G, Vertex root)
{
   Bag<Vertex> frontier(root);
   while ( ! frontier.isEmpty() )
   {
      cilk::hyperobject< Bag<Vertex> > succbag;   // reducer collecting the next frontier
      cilk_for (int i=0; i< frontier.size(); i++)
      {
         for( Vertex v in frontier[i].adjacency() )   // pseudocode: neighbours of frontier[i]
         {
            if( v.unvisited() )      // add only vertices not yet visited
               succbag() += v;
         }
      }
      frontier = succbag.getValue();
   }
}
```
Bag<T> has an associative reduce function that merges two sets
```
operator+=(Vertex & rhs)
also marks rhs “visited”
```
Parallel BFS
- Way #2: Concurrent writes + List reducer
```c
void BFS(Graph *G, Vertex root)
{
list<Vertex> frontier(root);
Vertex * parent = new Vertex[n];
while ( ! frontier.isEmpty() )
{
cilk_for (int i=0; i< frontier.size(); i++)
{
for( Vertex v in frontier[i].adjacency() )
{
if ( ! v.visited() )
parent[v] = frontier[i];
}
}
}
}
```
An intentional data race
How to generate the new frontier?
Parallel BFS
```c
void BFS(Graph *G, Vertex root)
{
   ...
   while ( ! frontier.isEmpty() )
   {
      ...
      hyperobject< reducer_list_append<Vertex> > succlist;
      cilk_for (int i=0; i< frontier.size(); i++)
      {
         for( Vertex v in frontier[i].adjacency() )   // pseudocode
         {
            if ( parent[v] == frontier[i] )   // this iteration won the race
            {
               succlist.push_back(v);
               v.visit();   // Mark "visited"
            }
         }
      }
      frontier = succlist.getValue();   // next frontier collected by the reducer
   }
}
```
Parallel BFS
- Each level is explored with $\Theta(1)$ span
- Graph $G$ has at most $d$, at least $d/2$ levels
- Depending on the location of root
- $d = \text{diameter}(G)$
**Work:** $T_1(n) = \Theta(m+n)$
**Span:** $T_\infty(n) = \Theta(d)$
**Parallelism:** $\frac{T_1(n)}{T_\infty(n)} = \Theta((m+n)/d)$
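As a rough illustration (hypothetical numbers, not from the slides): for a graph with \( n = 10^6 \) vertices, \( m = 10^7 \) edges and diameter \( d = 20 \), the parallelism is on the order of \( (m+n)/d \approx 1.1 \times 10^7 / 20 \approx 5.5 \times 10^5 \), so the traversal exposes ample parallelism when the diameter is small.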
Parallel BFS Caveats
- \( d \) is usually small
- \( d = \log(n) \) for scale-free graphs
- But the degrees are not bounded 😞
- Parallel scaling will be memory-bound
- Lots of burdened parallelism,
- Loops are skinny
- Especially to the root and leaves of BFS-tree
- You are not “expected” to parallelize BFS part of Homework #5
- You may do it for “extra credit” though 😊
Misc. Post-CSI310 ideas.
Tree and Expression Definitions Compared.
Variations of tree traversals.
Heap-ordered trees.
Decision, Search and other trees, Array search, Binary array and tree search.
First, containment or "has-a" relationship...
A glimpse at inheritance...
Inheritance expresses the "is-a" relationship. The objects of one class, called the base class, have some data and methods that have some general usefulness. Given the base class, programmers can derive from it, or create subclasses, for objects that share all the data and methods of the base, plus have additional data, methods or specific properties. Unlike containment, inheritance is used when the subclass objects are special cases of, and basically the same kind of thing as, the base class.
DSO (14.3) Example:

class Game {
public:
    enum who { human, computer };        // who plays; who wins
    void play()
    {
        // ...
        display_status();
        string move;
        cin >> move;
        make_move(move);
        // ...
    }
protected:
    virtual void make_move(string move) { }
    virtual void display_status() const = 0;
};
```cpp
// Deriving a specific game (Connect Four) from the Game base class:
class Connect4 : public Game {
public:
    virtual void make_move(string move)
    {
        // Code for making a connect 4 move goes here
        Game::make_move(move);   // call the base-class version ("super" in Java terms)
    }
    virtual void display_status() const;
private:
    who data[ROWS][COLUMNS];
    int many_used[COLUMNS];
    int most_recent_column;
};
```
Idea: The Game class models a general game; its methods reflect what operations every Game should support. To implement a specific game, the programmer derives from class Game a new class, like class Connect4 : Game. Virtual functions FROM class Game will be called when code in class Game calls their names, like make_move and display_status. (In Java, all methods are virtual.) Inherited classes with virtual methods provide an example of polymorphism: the same method calls made in base-class method bodies call out to different virtual functions depending on the class of the object.
Object-Oriented Graphics Application Programming Interfaces (Graphics::Window). The API defines, say:

class Window {
    ...;
public:
    virtual void draw();
    ...;
};

Programmers code:

class MyApplicationClass : Window {
    ...;
public:
    virtual void draw()
    {
        clear_canvas();
        draw_rectangle(10.0, 10.0, 30.0, 40.0);
        ...;
    }
};

MyApplicationClass* ptr = new MyApplicationClass();

When the windowing system needs to display the application's window the first time, or to redraw it, the API library calls Window::draw() on the object constructed by the new call above. The function that is actually called is the draw() function that the programmer wrote; it codes the commands to draw the application's window.

For more: Take CSI445 this summer. Read Head First Java. Read Head First Design Patterns.
Final exam in TC-19, Mon May 15, 3:30-5:30 (+15 min). A session; all recitation sections Wed and Fri. Final exam: closed book/computer, like the midterm, except one sheet of notes (an extra sheet) will be allowed.

Topics for review. Guidance: [double-bracketed] topics are "honors"/"optional"; some might be covered in low-point-value, simple but on-subject final exam questions.

Data structures and algorithms for Project 5! (15.3) Graph traversal, depth-first and breadth-first; depth-first and breadth-first traversal of TREES; templates and function parameters in tree and graph traversal codes (6 (a little), 10.4, code in 15.3). Chapter 10 and lectures: some basic tree lore, tree applications, linked trees and binary search trees; pre-, post-, and in-order tree traversals; prefix traversal of decision trees; representation of decision (animal guessing) and taxonomy trees. Adjacency array for representing a graph (in 15.2 and 15.3); difference from how the maze array represents a graph.
(1.1) Specification, Design, etc.
(1.2) Run Time Analysis
(2) Classes, Separate CXX/H Files, Build Scripts, RCS
(3) Containers
(4) Pointers, Dynamic Arrays, C-Strings; Structure/Class types, some fields being pointers, others not; function members
(5.1) Linked Lists
(5.2) More Linked Lists
(5.3) Linked List Bags
(5.4) Linked List Applications
(5.5) "Mix and Match" Reading
(6.1) Template Functions
(6.2) Template Classes
(6.6-6.8) [STL and Iterators]
(7.1) Stacks
(7.2) Balanced () and 2-Stacks Algorithms
(7.4) Evaluating Postfix [opt. precedence rules]
(8.1) Queue Intro.
(8.2) Queue Application: I/O Buffering
(9.1) Recursive Functions, Activation Records, Local Extent/Automatic
(9.2) Rectangles and Mazes
(9.3) Reasoning About Recursion
(10.1) Trees
(10.2) Tree Nodes
(10.4) Tree Traversal (in-, pre-, post-order)
(10.5) Binary Search Trees
    - What is a search tree?
    - Generalized binary search
    - B-trees
    - Binary tree size/depth analysis
(11.1, 11.3) Heap sorting
    - What is a heap?
    - How to sort by building and using a heap-ordered tree or a binary search tree
(12.1) Binary and Serial Search
(13.1, 13.2) Running time analysis of sorting algorithms
    - Selection Sort (Project 3)
    - Mergesort in arrays (Project 3)
(14.1) Inheritance
Tree Examples/Applications

Expression Trees: Express the structure of the computation expressed by an expression (string, web document, program, etc.).
Taxonomy Trees: See http://www.ncbi.nlm.nih.gov/Taxonomy/ and search for "human".
File Name Trees: Express a system to identify files using a sequence of directory names plus a file name.
Other Name Space Trees: E.g., the Domain Name System of the Internet. (It is like a telephone book.)
Binary Decision Trees: Each leaf is an answer; each non-leaf is a yes-no question. (M/S confuse this with taxonomy trees.)
Trees used for searching and sorting (different problems, different arrangements of data in a tree).

Binary search tree: a decision tree for answering whether or not a given number is in a given set, by using questions of the form "is it <, =, or > u?". For the tree and each subtree L, every number in the left subtree of L is less than the number in the root of L, and every number in the right subtree of L is greater than the number in the root of L. It expresses the structure of a search process to tell whether a given key is in the tree or not.

Heap: a heap is a tree (of numbers) with the heap property: for the tree and each subtree L, the root contains the largest of the numbers in L. Heap qualities: (1) The largest number is in the root. (2) The next-largest number can be moved to the root, with the heap remaining heap-ordered, using O(height) operations.
More conceptual kinds of trees:

Game trees: one node for EVERY legal combination of moves by the player(s) according to the rules of the game; this tree has one leaf for every game-play that ends in a win, loss or draw.

Sorting state tree of a sorting algorithm: one node for EVERY combination of outcomes of comparisons possibly made by that algorithm running on N elements. (This tree has at least N! leaves; it is used in graduate CS courses to prove theoretical results about sorting algorithms.)
In various applications, the trees occur in two ways:

Explicit, implemented by a data structure: each node of the tree is an object (objects ARE variables) with a C++ data type (often a structure or class type).

Implicit, a helpful way for people to understand the application: each node represents something more abstract, like (A) a subexpression, (B) the state of knowledge you got from answers to questions, (C) progress in a search process, or (D) a species, genus, family, group or other taxonomy category, etc.
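As a concrete illustration of the explicit case, a minimal linked binary-tree node might look like the sketch below; the field names are illustrative assumptions, not taken from the course materials.

```cpp
// A minimal explicit binary-tree node: the node is an object (a variable),
// and the arcs to its subtrees are pointers (nullptr when a subtree is absent).
struct TreeNode {
    int key;            // the data stored in this node
    TreeNode* left;     // root of the left subtree, or nullptr
    TreeNode* right;    // root of the right subtree, or nullptr
};

// Build a tiny tree:      20
//                        /  \
//                      10    30
int main() {
    TreeNode leftChild  {10, nullptr, nullptr};
    TreeNode rightChild {30, nullptr, nullptr};
    TreeNode root       {20, &leftChild, &rightChild};
    return 0;
}
```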
Think of one explicit example: one binary search tree containing the keys 10, 20, ...

search(binary search tree T, key k)
{
    if (k is in the root of T) return true;
    if (k < key(root of T))
    {
        if (T has no left subtree) return false;
        else return search(left subtree of T, k);
    }
    else
    {
        if (T has no right subtree) return false;
        else return search(right subtree of T, k);
    }
}
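A runnable C++ version of this search over an explicit node structure might look as follows; this is a sketch assuming the illustrative TreeNode layout shown earlier, not code from the course notes.

```cpp
// (Same illustrative node layout as in the earlier sketch.)
struct TreeNode { int key; TreeNode* left; TreeNode* right; };

// Recursive search in a binary search tree built from TreeNode objects.
// Returns true exactly when key k occurs somewhere in the tree rooted at t.
bool search(const TreeNode* t, int k) {
    if (t == nullptr) return false;      // empty (sub)tree: k cannot be here
    if (k == t->key)  return true;       // k is in the root
    if (k < t->key)                      // smaller keys live in the left subtree
        return search(t->left, k);
    return search(t->right, k);          // larger keys live in the right subtree
}
```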
count = "No..."
{ count = "Yes..." return
( if (A[i] == K) ++ )
for(int i=0; i < n; i++)
// that's a precondition, so this code snippet fail if
// precondition: A[0..n-1] must be sorted.
// Sequential Search (not best)
array elements and output K's index, if K is in the array.
increasing order, and if input key K, tell whether or not K is in one of the
The sorted array search problem: Given an array A[ ] of n keys sorted in
Binary search in a sorted array:

// precondition: A[0..n-1] must be sorted (as in 12.1). Note the pitfalls.
int lowBound = 0;
int highBound = n - 1;
while (lowBound <= highBound)
{
    int mid = (lowBound + highBound) / 2;
    if (A[mid] == K) { cout << "Yes, at index " << mid << "\n"; return; }
    if (K > A[mid])
        lowBound = mid + 1;
    else
        highBound = mid - 1;
}
cout << "No\n";
TRY it with \( n = 8 \) on this array: A = 10, 20, 30, 40, 50, 60, 70, 80.
If \( k < A[3] \), we restrict the search to A[0] ... A[2]; if \( k > A[3] \), we restrict the search to A[4] ... A[7].
Why binary array search is efficient: after each < or > comparison operation, the subarray that remains to be searched is reduced to half or less of its previous length! So the number of comparison steps is at most about \( \log_2(n + 1) \).
THINK: What if \( n \) is a G, about 1 billion, which is \( 2^{30} \)? Then binary search needs only about 30 comparisons, while sequential search may need a billion.
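A self-contained version of the bounded search sketched above, run on the eight-element TRY array; the function name and output format are illustrative choices.

```cpp
#include <iostream>

// Binary search over a sorted array A[0..n-1]; returns the index of K, or -1.
int binarySearch(const int A[], int n, int K) {
    int lowBound = 0, highBound = n - 1;
    while (lowBound <= highBound) {
        int mid = lowBound + (highBound - lowBound) / 2;  // avoids overflow
        if (A[mid] == K) return mid;
        if (K > A[mid])  lowBound  = mid + 1;   // K can only be to the right
        else             highBound = mid - 1;   // K can only be to the left
    }
    return -1;                                  // K is not in the array
}

int main() {
    const int A[8] = {10, 20, 30, 40, 50, 60, 70, 80};   // the n = 8 TRY array
    std::cout << binarySearch(A, 8, 30) << "\n";          // prints 2
    std::cout << binarySearch(A, 8, 35) << "\n";          // prints -1
    return 0;
}
```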
This binary search tree expresses the structure of the search done by the pseudo-code. The diagram shows a binary search tree with keys 40, 60, 70, 80, 50, 30, and 10. Each node represents a comparison in the search algorithm.
Some kinds of traversals (whole-tree exploring) of trees:
1. Pre-order (EQUIVALENT to depth-first)
2. In-order
3. Post-order
4. Breadth-first
Breadth-first traversal of a tree
Depth-first traversal of a tree
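For concreteness, the three depth-first orders differ only in where the "visit" happens relative to the recursive calls; the sketch below uses the same illustrative TreeNode layout assumed earlier, and is not the course's own code.

```cpp
#include <iostream>

// (Same illustrative node layout as in the earlier sketches.)
struct TreeNode { int key; TreeNode* left; TreeNode* right; };

void preorder(const TreeNode* t) {            // visit root, then subtrees
    if (t == nullptr) return;
    std::cout << t->key << ' ';
    preorder(t->left);
    preorder(t->right);
}

void inorder(const TreeNode* t) {             // left subtree, root, right subtree
    if (t == nullptr) return;
    inorder(t->left);
    std::cout << t->key << ' ';
    inorder(t->right);
}

void postorder(const TreeNode* t) {           // subtrees first, root last
    if (t == nullptr) return;
    postorder(t->left);
    postorder(t->right);
    std::cout << t->key << ' ';
}

// Breadth-first traversal instead visits nodes level by level,
// using a queue of pending nodes rather than recursion.
```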
Tree and expression definitions compared.

An expression is an identifier or constant, or has a top-level operator; (e) if it has an operator, it has one or more expressions as operands, and no operators or operands are overlapped (shared).

A non-empty tree has: (p) a root node; (q) zero or more rooted trees called its subtrees, with no nodes or arcs in common with each other or the root; (e) one arc from this tree's root to the root of each of the trees specified under (q).

Each arc expresses the structural relation between the root node and the subtrees.
Remember about expression trees:
1. The expression tree directly reveals the order of operations.
2. The top-level operation is in the root of the tree. Recursive evaluation easily finds and executes all the operations in the right order.
3. Solution to the problem of managing memory for the subexpression values: the stack of activation records stores the subexpression values until they are used by the recursive evaluator.
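A minimal recursive evaluator over such an expression tree could look like the sketch below; the node layout and operator set are illustrative assumptions, not the course's own code.

```cpp
#include <stdexcept>

// An expression-tree node: a leaf holds a number, an inner node holds an operator
// whose operands are its subtrees.
struct Expr {
    char op;            // '+', '-', '*', '/' for inner nodes; 0 for a leaf
    double value;       // used only when op == 0
    Expr* left;
    Expr* right;
};

// Recursive evaluation: the subexpression values live in the activation records
// of the recursive calls until the parent node's operator consumes them.
double eval(const Expr* e) {
    if (e->op == 0) return e->value;          // leaf: just the stored number
    double a = eval(e->left);                 // value of the left operand
    double b = eval(e->right);                // value of the right operand
    switch (e->op) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        case '/': return a / b;
    }
    throw std::logic_error("unknown operator");
}

int main() {
    // (3 + 4) * 2 : the '*' is the top-level operation, so it sits in the root.
    Expr three{0, 3, nullptr, nullptr}, four{0, 4, nullptr, nullptr}, two{0, 2, nullptr, nullptr};
    Expr plus{'+', 0, &three, &four};
    Expr times{'*', 0, &plus, &two};
    return eval(&times) == 14 ? 0 : 1;
}
```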
Main/Savitch's definition: a tree is a finite set of "nodes"; the set might be empty (no nodes), which is allowed. If the set is not empty, then it must follow these rules:
1. There is one special node, called the root.
2. Each node may be associated with zero or more different nodes, called that node's children. If a node c is the child of another node d, then we say that d is c's parent.
3. Each node except for the root has exactly one parent; the root has no parent. If you start at any node and move to the node's parent (provided there is one) and keep moving upward to each node's parent, you will eventually reach the root.
A more formal description of property A: the following algorithm, started at any node, always halts, eventually:

find-root(node u)
{
    node temp;
    temp = u;
    while (temp is NOT the root)
        temp = temp's parent;
    cout << temp << " is the root. " << endl;
}
Assorted closing remarks... (15.3) Graph Traversal; (15.4) Path Finding.

Four separate problems:
1. Given a maze, list/count ALL simple paths from start to goal.
2. Given a maze, find ONE simple path from start to goal, or report that there is none.
3. Given a maze, find ONE simple path THAT VISITS ALL THE VERTEXES (a Hamiltonian path problem).
4. Given a maze, find ONE simple path of shortest length, or report there isn't any.

How large can the count in #1 be?
Is it easier to solve #2 than by trying to solve #1?
A million dollar question: "Is #3 harder than #2?" (More formally, "Does P = NP?") If you can answer definitively, you win a Clay Institute of Mathematics Prize.
What about #4?
Let N be the number of down steps needed to go from the top row to the bottom row; the start and goal vertices are the upper left and lower right ones. In terms of Project 5, the figure shows the graph with N = 5.

Some, but not all, of the solutions are formed by N right steps and N down steps. We illustrate the first such solution: 5 down steps followed by 5 right steps. The following rule specifies how a length-5 (generally length-N) binary string such as 01010, written down the left side of the figure, determines one solution: at each row, take one step down if the bit is 0, and take one step right followed by one step down if the bit is 1; when we reach the bottom row, take right-only steps to reach the goal. This rule provides a 1-1 function from length-N binary strings to solutions, and the number of length-N binary strings is \( 2^N \). For our N = 5 example, the rule demonstrates that there are over \( 2^5 = 32 \) different solutions.

Consider N = 1000. The description of the maze can fit on a floppy disk. However, the number of solutions the computer must print for problem #1, more than \( 2^{1000} \), is way too big to be computed by mortals: it exceeds the number of protons that can be packed into the known universe, etc.

Let us compare problems #1 and #2. See 15.3 and 15.4, which detail graph traversal and search algorithms that can solve problem #2 (find one solution path if any, and print "none" if there are none) doing less work than the backtracking algorithm from the project. We sketched the operation of a "labeling" type search algorithm. The main idea is to put and retain a "mark" on each vertex as soon as we determine that it can be reached from the start vertex. This way, the algorithm to find one path, or determine that there is none, takes a number of steps proportional to the number of maze squares instead of visiting as many as \( 2^N \) paths.

Breadth-first labelling search; depth-first labelling search. Here are the squares in the order they are inserted in the queue: 0, 0, 1, 1, 0, 0, 3, 1, 4, 3, 0, 3, 2, 5, 0.
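A compact sketch of the breadth-first labelling idea on a grid maze follows; the grid encoding (0 = open, 1 = wall) and the names are assumptions made for illustration, not the project's actual interface.

```cpp
#include <queue>
#include <utility>
#include <vector>

// Breadth-first labelling search on an R x C grid maze.
// grid[r][c] == 0 means open, 1 means wall. Returns true if the goal is reachable.
bool reachable(const std::vector<std::vector<int>>& grid,
               std::pair<int,int> start, std::pair<int,int> goal) {
    int R = grid.size(), C = grid[0].size();
    std::vector<std::vector<bool>> marked(R, std::vector<bool>(C, false));
    std::queue<std::pair<int,int>> q;
    marked[start.first][start.second] = true;   // mark a square when first reached
    q.push(start);
    const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [r, c] = q.front(); q.pop();
        if (r == goal.first && c == goal.second) return true;
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < R && nc >= 0 && nc < C
                && grid[nr][nc] == 0 && !marked[nr][nc]) {
                marked[nr][nc] = true;          // retained mark: each square enters the queue once
                q.push({nr, nc});
            }
        }
    }
    return false;                               // queue emptied: goal not reachable
}
```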
13.3 Heapsort (ARRAY IMPLEMENTATION OF COMPLETE BINARY TREES). With the root at position 0, a node at position I has:
- left child at position 2I + 1
- right child at position 2I + 2
- parent (for a non-root node, I > 0) at position (I - 1)/2, rounded down
Heapsort is an \( O(n \log n) \), array-only, in-place sorting algorithm! (NO ADDITIONAL MEMORY NEEDED, except for a few control, swapping, etc. variables.)

(13.2) Quicksort, Mergesort in arrays; general and linked-list Mergesort.
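The index arithmetic above translates directly into helper functions; the sift-down below is a sketch of the core heap-ordering step under those 0-based conventions (the names are illustrative).

```cpp
#include <utility>

inline int leftChild(int i)  { return 2 * i + 1; }
inline int rightChild(int i) { return 2 * i + 2; }
inline int parentOf(int i)   { return (i - 1) / 2; }   // integer division rounds down

// Restore the heap property at index i, assuming both subtrees are already heaps.
// A[0..n-1] is the array view of a complete binary tree.
void siftDown(int A[], int n, int i) {
    while (true) {
        int largest = i;
        int l = leftChild(i), r = rightChild(i);
        if (l < n && A[l] > A[largest]) largest = l;
        if (r < n && A[r] > A[largest]) largest = r;
        if (largest == i) return;          // heap property already holds here
        std::swap(A[i], A[largest]);       // move the larger child up
        i = largest;                       // continue sifting down that subtree
    }
}
```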
9.3 Reasoning About Recursion

My reasoning about recursion: whenever it is run on a list of keys, the mergeSort function will WORK.

Principle of Mathematical Induction: let \( p(u) \) be a statement about the positive integer \( u \). If you can prove (1) that \( p(1) \) is true, and (2) that \( p(n) \) is true whenever \( p(u) \) is assumed true for every \( u < n \), then the Principle of Induction says: \( p(u) \) is true for ALL \( u \).

Golden rules for recursive programming: FIRST check that the simplest (base) cases work. When you study whether it works in the other cases, check that the recursive calls get smaller (simpler) arguments, and BELIEVE (assume by induction) that the recursive calls will work. These rules apply to reading and writing inductive proofs in mathematics, and to understanding recursive definitions in computer science.
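As an illustration of the golden rules, a compact array mergesort is sketched below; inside mergeSort we simply believe, by induction, that the two recursive calls sort their halves. This sketch is illustrative, not the course's project code.

```cpp
#include <vector>

// Merge the two sorted halves A[lo..mid) and A[mid..hi) into sorted A[lo..hi).
static void merge(std::vector<int>& A, int lo, int mid, int hi) {
    std::vector<int> out;
    out.reserve(hi - lo);
    int i = lo, j = mid;
    while (i < mid && j < hi) out.push_back(A[i] <= A[j] ? A[i++] : A[j++]);
    while (i < mid) out.push_back(A[i++]);
    while (j < hi)  out.push_back(A[j++]);
    for (int k = 0; k < (int)out.size(); ++k) A[lo + k] = out[k];
}

void mergeSort(std::vector<int>& A, int lo, int hi) {
    if (hi - lo <= 1) return;        // base case checked FIRST: 0 or 1 key is already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(A, lo, mid);           // believe (by induction) this sorts the left half
    mergeSort(A, mid, hi);           // believe (by induction) this sorts the right half
    merge(A, lo, mid, hi);           // so merging the two sorted halves sorts A[lo..hi)
}
```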
Software Reuse Within the Earth Science Community
James J. Marshall, Steve Olding, Robert E. Wolfe
NASA Goddard Space Flight Center
Greenbelt, MD, USA
[email protected], [email protected], [email protected]
Abstract—Scientific missions in the Earth sciences frequently require cost-effective, highly reliable, and easy-to-use software, which can be a challenge for software developers to provide. The NASA Earth Science Enterprise (ESE) spends a significant amount of resources developing software components and other software development artifacts that may also be of value if reused in other projects requiring similar functionality. In general, software reuse is often defined as utilizing existing software artifacts. Software reuse can improve productivity and quality while decreasing the cost of software development, as documented by case studies in the literature. Since large software systems are often the results of the integration of many smaller and sometimes reusable components, ensuring reusability of such software components becomes a necessity. Indeed, designing software components with reusability as a requirement can increase the software reuse potential within a community such as the NASA ESE community.
The NASA Earth Science Data Systems (ESDS) Software Reuse Working Group is chartered to oversee the development of a process that will maximize the reuse potential of existing software components while recommending strategies for maximizing the reusability potential of yet-to-be-designed components. As part of this work, two surveys of the Earth science community were conducted. The first was performed in 2004 and distributed among government employees and contractors. A follow-up survey was performed in 2005 and distributed among a wider community, to include members of industry and academia. The surveys were designed to collect information on subjects such as the current software reuse practices of Earth science software developers, why they choose to reuse software, and what perceived barriers prevent them from reusing software.
In this paper, we compare the results of these surveys, summarize the observed trends, and discuss the findings. The results are very similar, with the second, larger survey confirming the basic results of the first, smaller survey. The results suggest that reuse of ESE software can drive down the cost and time of system development, increase flexibility and responsiveness of these systems to new technologies and requirements, and increase effective and accountable community participation.
Software reuse; Earth science; SEEDS; NASA
I. INTRODUCTION
Software reuse is the reapplication of a variety of kinds of knowledge about one system to another system in order to reduce the effort of developing and maintaining that system. In principle, many different artifacts produced during the software development life cycle can be reused. Some typical examples of reusable artifacts include source code, analysis and design specification, plans, data, documentation, expertise and experience, and any information used to create software and software documentation. While all of these items are useful, the most often reused artifacts are software components.
Software reuse is used in order to realize a number of potential benefits. These include reduced cost and increased reliability [1, 2]. Productivity and quality improvements are also typical motivations for reuse [3]. Productivity is often measured in terms of cost and labor, and reuse has the potential to decrease both, thereby increasing productivity. Reusing software can also improve the reliability and quality of new products because the currently existing software components have already been tested and confirmed to perform according to their designs.
The NASA Earth Science Data System (ESDS) Software Reuse Working Group is chartered to oversee the development of a process that will maximize the reuse potential of existing software components while recommending strategies for maximizing the reusability potential of yet-to-be-designed components. As part of this work, we conducted two surveys of members of the Earth science community in order to get a measure of their reuse practices. Here, we describe these surveys, compare their results, and discuss the findings.
II. SURVEY DESCRIPTION
We conducted two surveys, the first in 2004 and the second in 2005. They were identical with the exception of one question that was added to the 2005 survey, making it 54 questions long. The survey questions were grouped into four major categories – background information, recent reuse experiences, reusability / developing for reuse, and community needs. The background section included questions on the respondent’s role in software development, organization, operating systems used or planned for use, and programming languages used. The questions on recent reuse experience asked if respondents did or did not reuse artifacts from outside their project or group within the last five years, then followed up with questions including why they did or did not reuse components, what they reused, the factors influencing their decision to reuse, and where they found reusable components. The reusability section asked if respondents made any software components available for reuse by others, then followed up with questions including what factors prevented them from making components available for reuse, the types of components made available for reuse, and how often they...
believe the components are reused by others. The community needs section included questions on what factors would help increase the level of software reuse within the Earth science community, what artifacts they would reuse if made available, and allowed space for additional comments and questions.
The 2004 survey was sent to members of the Software Reuse Working Group and other government employees; 25 responses were received. Office of Management and Budget (OMB) approval was obtained for the survey on Jan. 4, 2005 (Approval Number 2700-0117), and the survey was distributed to a wider audience, including members of academia; 100 responses were received from about 3000 invitations to participate (approximately a 3.3% return rate).
III. SURVEY RESULTS
The results of the 2005 survey confirm the results of the 2004 survey. There were some shifts in the answers to some questions, but these were typically small. The general results are the same in both surveys, and the same conclusions can be drawn from either one. Therefore, we will focus our discussion on the 2005 survey which is more recent and received a larger number of responses. The majority of the questions in the survey asked the respondents to rate various answer choices on a 1-5 scale representing the importance or frequency of the choice, as appropriate for the question. The importance scale was typically: (1) not important at all, (2) not very important, (3) somewhat important, (4) important, and (5) very important. The frequency scale was typically: (1) never, (2) rarely, (3) sometimes, (4) often, and (5) very often. Weighted averages were calculated for each choice and used to rank them within each question. These are used as a measure of the overall, general opinion of the survey respondents.
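As a sketch of the ranking computation described above, a weighted average over 1-5 Likert responses can be computed as follows; the response counts are made-up illustrative numbers, not survey data.

```cpp
#include <iostream>

// Weighted average of Likert-scale responses: each rating r (1..5) is weighted
// by the number of respondents who chose it.
double weightedAverage(const int counts[5]) {
    int total = 0;
    double sum = 0.0;
    for (int r = 1; r <= 5; ++r) {
        sum += r * counts[r - 1];
        total += counts[r - 1];
    }
    return total == 0 ? 0.0 : sum / total;
}

int main() {
    // Hypothetical counts for one answer choice: 2 x "never" ... 12 x "very often".
    const int counts[5] = {2, 5, 20, 30, 12};
    std::cout << weightedAverage(counts) << "\n";   // about 3.65 on the 1-5 scale
    return 0;
}
```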
A. Components Reused vs. Components Made Available
One of the most interesting results of the survey regards the types of software and software development artifacts that respondents reused and how they developed the ones they made available for reuse by others. We asked six questions relating to this subject in order to determine if current practices may present a barrier to reuse. In particular, if respondents desire to use a certain type of component, but tend to develop and make available a different type of component, this would point towards a barrier that needs to be broken down in order to increase the systematic reuse of software components.
There were three questions related to reuse of existing software and software development artifacts and three related to developing software and artifacts for easier reuse by others. The first in each set simply asked if respondents had reused artifacts outside of their project/group or made artifacts available outside of their project/group within the last five years. The following two questions in each set were asked only of the people who answered “yes” to the first question, which was 79% of the respondents in the case of reusing existing software and 74% of the respondents in the case of making artifacts available for reuse. These questions regarded the frequency with which certain types of artifacts or software were reused or developed for reuse. One question asked about software development artifacts: algorithms and techniques, designs and architectures, source code and scripts, executables and binaries, and other. The other question asked about types of software: complete systems or applications, subsystems or components, code libraries, code fragments, and other.
In terms of what was reused, there was a clear preference for the smaller-sized components. Algorithms, techniques, source code, and scripts were reused more often than designs, architectures, executables, and binaries. Code libraries and code fragments were reused more often than subsystems or components and complete systems or applications. A difference appeared in what was developed for reuse. For software development artifacts, the results similarly favored smaller-sized components (source code, scripts, algorithms, and techniques). However, for the types of software made available for reuse, larger-sized components were offered – subsystems or components and complete solutions or applications were more frequently made available than code fragments or code libraries.
These results point to a potential problem and thus a barrier to reuse. Complete solutions or applications and subsystems or components are being made available for reuse, but code libraries and code fragments are most desired for reuse purposes. The fact that there is a tendency to provide larger-sized components when smaller-sized ones are desired suggests that it will be difficult for software developers to find the types of software they want. If they are unable to find what they are looking for, they will not be able to reuse existing components, and may end up rewriting components when it is not necessary. Making smaller-sized components for reuse by others should encourage and increase the amount of reuse.
B. Reasons for not Reusing or Making Available Components
We were also interested in knowing the reasons why people did not reuse existing software or make their software available for reuse by others. We asked four questions related to this topic in order to determine what barriers to reuse existed in the experience of our survey respondents. In order to increase the amount of reuse, it is important to know what factors are currently preventing people from practicing reuse.
Two questions are the same as ones described in the previous section – the ones asking if respondents did reuse artifacts or make artifacts available for reuse by others. The other two questions were asked of the people who answered “no” to those questions, which was 21% of the respondents in the case of not reusing software from outside of their project/group and 26% of the respondents in the case of not making artifacts available for reuse outside of their project/group. Ten choices were provided as possible reasons for not reusing existing artifacts. Eight choices were provided as possible reasons for not making artifacts available for reuse. Each question also offered an “other” option to account for reasons not listed explicitly. In addition, the same question about reasons for not making artifacts available was also asked of the 74% of the respondents that did make some artifacts available. This was to determine if the two groups of respondents faced similar barriers. One difference to note between the 2004 and 2005 surveys is that the question regarding reasons for not reusing existing software received
significantly more responses in the 2005 survey; the results from the 2004 survey are too limited to provide any useful information.
The primary reasons respondents did not reuse software from outside their project/group were that they did not know where to look for reusable artifacts and they did not know suitable reusable artifacts existed at the time. The reasons given by respondents who did not make any software development artifacts available for reuse outside of their project/group tended to be more varied. The main reasons included not knowing if it would be useful, support and maintenance concerns, the cost of developing for reuse, no standard method for distribution, and not knowing how. Among the respondents who did make at least some artifacts available for reuse, the reasons for not making artifacts available were also varied, but were very similar. The main reasons here were support and maintenance concerns, the cost of developing for reuse, no standard method for distribution, limitations in the organization's software release policy, and not knowing if it would be useful.
These are important results because they indicate where additional work needs to be done in order to increase the level of reuse within the Earth science community. The main barrier to reusing existing artifacts is lack of knowledge about what suitable reusable artifacts exist and where to find them. Therefore, reusable artifacts need to be more readily available and more easily locatable. If software developers know where to look for artifacts and can easily find suitable artifacts, they will be more likely to reuse them. They can then help others reuse existing artifacts by passing on their knowledge about the location and availability of such products. In a separate question to respondents who did reuse artifacts, personal knowledge from past projects and word-of-mouth or networking were the primary ways of locating and acquiring software development artifacts. (Web searches were of average importance while serendipity and reuse catalogs or repositories were rated the lowest.) The larger variety of reasons provided for not making artifacts available for reuse by others makes it more difficult to determine how to break down the barriers here. However, it also points to a variety of areas where improvements can be made.
C. Modifying Artifacts and Licenses Used
Another pair of questions asked only to the respondents who indicated that they had reused software development artifacts dealt with modifications to the artifacts and the licenses under which the artifacts were reused. The first question asked about the frequency with which (a) artifacts were modified and (b) the frequency with which those changes were communicated back to the original developer(s) of the artifact. The second question asked respondents to indicate the frequency with which the following licensing methods or agreements were used: open source, shareware or public domain, formal license agreement with the developer, semi-formal license agreement with the developer, no formal license agreement.
The results showed that artifacts were modified with moderate/average frequency (coded as “sometimes” on the scale used in the survey). However, the changes were communicated back to the developers with a somewhat lower frequency. In terms of licensing methods, open source software was clearly preferred over the other options. Use of shareware or public domain also rated somewhat above average frequency. All of the licensing options (formal, semi-formal, none) rated below average frequency.
The average frequency for modification of artifacts suggests that there is a relatively equal mix of artifacts that can be reused as-is, without modification, and ones that require some degree of modification to meet the requirements of a new environment. We did not ask about the amount of modification done though, so we do not have a measure of whether it is typically high or low. The preference for open source software is expected given the open nature of these licenses, which typically allow free redistribution of the software, provide access to source code, and allow modifications and derived works [4]. An interesting point was that formal license, semi-formal license, and no formal license all rated below average frequency. Theoretically, these should cover the range of possibilities in a mutually exclusive way; e.g., if you are not using a formal license or a semi-formal license, you must be using no formal license. Thus we expected to see one of these choices rate above average, one near average, and one below average. Since this is not the case, perhaps respondents did not view these three options in the same way as we did when creating them.
D. Factors to Increase Reuse
The final section of the survey included a few questions for all respondents. One of these was how important different specified factors would be in helping increase the level of reuse within the Earth science community. Clearly, this is an important piece of information to have, as it indicates more explicitly where work can be done to increase reuse.
Six factors were provided, plus one “other” choice to account for any not listed. The top three factors were having an Earth science catalog/repository for reusable artifacts, use of open source licensing, and education and guidance on reuse. The three lower ranking factors were a standardized support policy for reused software, changes to NASA external release policy, and a standardized license agreement for the Earth science community. As with other questions, the optional “other” choice was rated by fewer people and ranked the lowest (was considered the least important).
These results provide direction for future work, and are consistent with the opinions of the respondents as expressed in the answers to other questions. The primary reasons respondents did not reuse artifacts were that they did not know where to look and they did not know that suitable reusable artifacts existed at the time. Having a catalog/repository dedicated to Earth science software would eliminate these problems. It would give them a place to look for reusable artifacts while also increasing their knowledge about the existence of currently available reusable artifacts. Also, although the use of reuse catalogs and repositories was rated the least important method of locating reusable artifacts, it was rated the most important method of increasing the level of
reuse within the community. This suggests that respondents would use such an Earth science catalog/repository for reusable artifacts if it existed, indicating that it would be a worthwhile endeavor to create such a system. Since a system like this would eliminate the two main reasons people did not reuse artifacts (not knowing where to look and not knowing suitable reusable artifacts existed), this is an understandable result. It also touches on another reason respondents did not make software available for reuse – no standard method of distribution. An Earth science catalog or repository for reusable artifacts could serve as a standard way of making software available to others in the community.
Open source software is already the primary choice of licensing for most respondents, so it is logical that they would also recommend greater use of open source licensing as a way to increase reuse. The general freedom of open source licensing as compared to more closed licensing mechanisms makes it an attractive and useful option for software developers. Encouraging greater use of open source licensing is another area of work where successful results can produce greater levels of reuse.
More education and guidance on reuse will also help break down existing barriers. For example, making people aware that smaller components are most desired for reuse, but larger components are more frequently made available (the issue raised in section A), may help create a shift in the type of software that developers make available for reuse. It can also help people understand the usefulness of reuse, taking away a possible reason people would not make their artifacts available. One question asked respondents who did reuse artifacts about their reasons for doing so – saving time and ensuring reliability were the primary reasons, with saving money not far behind. Education can make more people aware of these points and the benefits they gain by practicing reuse, thus encouraging greater reuse. Guidance on reuse can help address issues where people do not know how to make artifacts available. Education and guidance should naturally lead to improvements as more people understand the benefits of reuse and work to break down the existing barriers to reuse.
IV. CONCLUSIONS
Our surveys provide support for the common view that software reuse saves time and money while also ensuring the reliability of the product. These were the top three reasons respondents chose to reuse existing software. We also discovered some potential barriers to reuse. Smaller-sized components such as code libraries and code fragments are the most desired for reuse purposes, but larger-sized components such as complete applications and subsystems are more frequently made available. Another barrier is that people did not know that suitable reusable artifacts existed and did not know where to locate reusable artifacts. In order to help increase the amount of reuse, these barriers need to be broken down.
The responses we received also provided some information on how to increase the level of reuse. The top three suggestions were having an Earth science catalog/repository for reusable artifacts, greater use of open source licensing, and more education and guidance on reuse. Even though personal knowledge and word-of-mouth networking were the most commonly used methods of locating reusable artifacts, and catalogs/repositories were the least often used methods, an Earth science catalog/repository of reusable assets was seen to be the most important method of increasing reuse. This suggests that people would use such a catalog/repository if a suitable one existed. Such a system would also help break down the barriers of not knowing what reusable artifacts existed or where to look for them. The recommendation for greater use of open source licensing is reasonable considering the benefits it provides and that it is already the most commonly used licensing method within the Earth science community. Education and guidance on reuse should also help break down barriers by teaching people about the benefits of reuse, how to reuse existing components and how to make their own components available for reuse by others, and what barriers to reuse need to be broken down.
Software reuse is being practiced within the Earth science community, but there are still barriers to reuse and progress to be made. Further education and guidance should encourage a greater number of people to participate in reuse and help break down existing barriers. Removing barriers would further increase the level of reuse, and help make systematic software reuse a more regular and routine part of the software development process.
ACKNOWLEDGMENT
We would like to thank all of the survey respondents for taking the time to complete our survey and providing us with valuable information about reuse practices within the Earth science community.
REFERENCES
Selection of Composable Web Services Driven by User Requirements
Zeina Azmeh, Maha Driss, Fady Hamoui, Marianne Huchard, Naouel Moha, Chouki Tibermacine
HAL Id: lirmm-00596346
https://hal-lirmm.ccsd.cnrs.fr/lirmm-00596346
Submitted on 27 May 2011
Selection of Composable Web Services Driven by User Requirements
Zeina Azmeh*, Maha Driss†, Fady Hamoui*, Marianne Huchard*, Naouel Moha‡ and Chouki Tibermacine*
*LIRMM, CNRS, University of Montpellier, France
{azmeh, hamoui, huchard, tibermacin}@lirmm.fr
†IRISA, INRIA, University of Rennes I, France
{mdriss}@irisa.fr
‡Université du Québec à Montréal, Canada
{moha.naouel}@uqam.ca
Abstract—Building a composite application based on Web services has become a real challenge regarding the large and diverse service space nowadays, especially when considering the various functional and non-functional capabilities that Web services may afford and users may require.
In this paper, we propose an approach for facilitating Web service selection according to user requirements. These requirements specify the needed functionality and expected QoS, as well as the composability between each pair of services. The originality of our approach is embodied in the use of Relational Concept Analysis (RCA), an extension of Formal Concept Analysis (FCA). Using RCA, we classify services by their calculated QoS levels and composability modes. We use a real case study of 901 services to show how to accomplish an efficient selection of services satisfying a specified set of functional and non-functional requirements.
Keywords-Web service selection; Web service composition; Formal Concept Analysis (FCA); Relational Concept Analysis (RCA); User requirements; QoS.
I. INTRODUCTION
Service-Oriented Computing (SOC) is an emerging paradigm for developing low-cost, flexible, and scalable distributed applications based on services [1]. Web services represent a realization of SOC. They are autonomous, reusable, and independent software units that can be accessed through the internet. SOC is becoming broadly adopted and in particular by organizations, which are more and more willing to open their information systems to their clients and partners over the Internet. The reason comes from the fact that SOC offers the ability to build efficiently and effectively added-value service-based applications by composing ready-made services. Web service composition addresses the situation when the functionality required by users (developers) cannot be satisfied by any available Web service, but by assembling suitably existing services [2]. When building Web service-based applications, we have to face several issues such as:
- Web service retrieval from the large number of existing services, and the lack of efficient indexing mechanisms. Current solutions are embodied in search engines, like: Seekda [3] and Service-Finder [4];
- a service’s ability to meet the user’s functional and non-functional requirements;
- a service’s composability, or the degree to which a service can be composed with another, considering the needed adaptations;
- finally, how to achieve a compromise for service selection in the light of the previous issues.
Discovering and selecting services that closely fit users’ functional and non-functional requirements is an important issue highly studied in the literature, as in [5], [6], [7]. Functional requirements define functionalities provided by Web services and non-functional requirements define Quality of Service (QoS) criteria such as availability, response time, and throughput [8]. However, discovering and selecting relevant composable services is another important issue that still needs to be investigated, since few approaches have focused on the service composability problem [9], [10], [11]. Relevant composable services are services that minimize the amount of adaptation among them while best fitting the requirements.
In this paper, we propose an approach for the identification of services that best fit QoS and composition requirements. This approach is based on Relational Concept Analysis (RCA) [12], a variant of Formal Concept Analysis (FCA) [13], [14]. FCA has been successfully used as a formal framework for service substitution [15], [16], [17], [18], [19]. The approach also integrates a query mechanism that allows users to specify their required QoS and composability levels. Thereafter, the generated lattices help in identifying the services that match the specified queries.
The paper is organized as follows: Section II gives an overview of the FCA and RCA classification techniques. Section III describes our approach. Section IV describes the experiments performed on a real case study for validating our approach. The paper ends with the related work in Section V and the conclusion in Section VI.
II. BACKGROUND
In this section, we give the basic definitions of FCA and RCA, along with a simple example of exploiting them.
A. Formal Concept Analysis (FCA)
We base our approach on FCA [13], [14] which is a classification method that permits the identification of groups
of objects having common attributes. It takes a data set represented as an \( n \times m \) table (formal context) with objects as rows and attributes as columns. A cross "x" in this table means that the corresponding object has the corresponding attribute. An example of a formal context is shown in Table I, for a set of objects \( O = \{1,2,3,4,5,6,7,8,9,10\} \) and a set of attributes \( \mathcal{A} = \{\text{odd, even, prime, composite, square}\} \).
Table I: A formal context for objects \( O \) and attributes \( \mathcal{A} \).
<table>
<thead>
<tr>
<th></th>
<th>odd</th>
<th>even</th>
<th>prime</th>
<th>composite</th>
<th>square</th>
</tr>
</thead>
<tbody>
<tr><td>1</td><td>x</td><td></td><td></td><td></td><td>x</td></tr>
<tr><td>2</td><td></td><td>x</td><td>x</td><td></td><td></td></tr>
<tr><td>3</td><td>x</td><td></td><td>x</td><td></td><td></td></tr>
<tr><td>4</td><td></td><td>x</td><td></td><td>x</td><td>x</td></tr>
<tr><td>5</td><td>x</td><td></td><td>x</td><td></td><td></td></tr>
<tr><td>6</td><td></td><td>x</td><td></td><td>x</td><td></td></tr>
<tr><td>7</td><td>x</td><td></td><td>x</td><td></td><td></td></tr>
<tr><td>8</td><td></td><td>x</td><td></td><td>x</td><td></td></tr>
<tr><td>9</td><td>x</td><td></td><td></td><td>x</td><td>x</td></tr>
<tr><td>10</td><td></td><td>x</td><td></td><td>x</td><td></td></tr>
</tbody>
</table>
From a formal context, FCA extracts the set of all the formal concepts. A formal concept is a maximal set of objects (called extent) sharing a maximal set of attributes (called intent). For example, in Table I, \( a = (\{4,6,8,10\}, \{\text{even, composite}\}) \) is a formal concept because the objects 4, 6, 8, and 10 share exactly the attributes even and composite (and vice-versa). On the other hand, \( (\{6\}, \{\text{even, composite}\}) \) is not a formal concept because the extent \( \{6\} \) is not maximal: other objects share the same set of attributes.
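To make the derivation behind this definition concrete, the following minimal Python sketch (illustrative only, not part of the paper's tooling) encodes the context of Table I and checks the two examples above.

```python
# Minimal FCA sketch (illustrative only, not the authors' implementation).
# Formal context of Table I: objects 1..10 with their attributes.
context = {
    1: {"odd", "square"},              2: {"even", "prime"},
    3: {"odd", "prime"},               4: {"even", "composite", "square"},
    5: {"odd", "prime"},               6: {"even", "composite"},
    7: {"odd", "prime"},               8: {"even", "composite"},
    9: {"odd", "composite", "square"}, 10: {"even", "composite"},
}

def intent(objects):
    """Attributes shared by all given objects."""
    return set.intersection(*(context[o] for o in objects)) if objects else set()

def extent(attributes):
    """Objects that have all given attributes."""
    return {o for o, attrs in context.items() if attributes <= attrs}

def is_formal_concept(objs, attrs):
    """A pair is a formal concept iff each part determines the other (maximality)."""
    return intent(objs) == attrs and extent(attrs) == objs

print(is_formal_concept({4, 6, 8, 10}, {"even", "composite"}))  # True  -> concept a
print(is_formal_concept({6}, {"even", "composite"}))            # False -> extent not maximal
```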
FCA reveals the inheritance relations (super-concept and sub-concept) between the extracted concepts and organizes them into a partially ordered structure known as Galois lattice or concept lattice. The resulting concept lattice is illustrated in Figure 1(L).
Figure 1: Formal concept lattice for the context in Table I (L); focus on the concept b (R). Lattices are built with Concept Explorer (ConExp) tool [20].
This lattice reveals phenomena that may not be recognized intuitively. For example, in Figure 1(R) appears the concept \( b = (\{4\}, \{\text{composite, even, square}\}) \) as a sub-concept of the concept \( a \). It inherits \( a \)'s attributes composite and even, and extends them with the attribute square.
B. Relational Concept Analysis (RCA)
RCA [12] is an extension of FCA that takes into consideration the relations between the objects. Thus, it takes as input two types of contexts: (non-relational) ones that are previously used with FCA to classify objects by attributes, and inter-context (relational) ones that represent the relations between the objects. RCA generates lattices similar to the ones generated by FCA, but enriched with the information about the relation between the objects. We take as an example two sets of numbers, \( \{1,2,3,4,5\} \) and \( \{11,12,13,14,15,16,17,18,19,20\} \). We build two non-relational contexts similar to the one in Table I. We consider a relation called Divides between the first and second sets of numbers, and we build the relational context in Table II. RCA takes the two non-relational contexts (numbers × attributes) and the relational context Divides, then generates the two lattices in Figure 2.
Table II: The relational context \text{Divides}.
<table>
<thead>
<tr>
<th>Divides</th>
<th>11</th><th>12</th><th>13</th><th>14</th><th>15</th><th>16</th><th>17</th><th>18</th><th>19</th><th>20</th>
</tr>
</thead>
<tbody>
<tr><td>1</td><td>x</td><td>x</td><td>x</td><td>x</td><td>x</td><td>x</td><td>x</td><td>x</td><td>x</td><td>x</td></tr>
<tr><td>2</td><td></td><td>x</td><td></td><td>x</td><td></td><td>x</td><td></td><td>x</td><td></td><td>x</td></tr>
<tr><td>3</td><td></td><td>x</td><td></td><td></td><td>x</td><td></td><td></td><td>x</td><td></td><td></td></tr>
<tr><td>4</td><td></td><td>x</td><td></td><td></td><td></td><td>x</td><td></td><td></td><td></td><td>x</td></tr>
<tr><td>5</td><td></td><td></td><td></td><td></td><td>x</td><td></td><td></td><td></td><td></td><td>x</td></tr>
</tbody>
</table>
These lattices are similar to FCA lattices, but one of them is enriched with the relation Divides. For example, looking at the concept \( a = (\{2\}, \{\text{prime, even, Divides:c7, Divides:c3}\}) \) in lattice (L), we notice that the numbers of its extent can divide numbers of the extents of the concepts c7 and c3 in lattice (R). In the general case, where relations form directed cycles between objects, RCA applies iteratively. During this iteration, several scaling operators can be used. Here we use the existential one (see [12] for more details).
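As a small illustration of how the relational context and the existential scaling operator work, the following Python sketch (ours, not the paper's tooling; the concept extent used at the end is a hypothetical example, not an actual concept of Figure 2) builds the Divides relation of Table II and assigns a relational attribute existentially.

```python
# Sketch of the Divides relational context and of existential scaling
# (illustrative only; 'c_extent' is a hypothetical concept extent).
left = [1, 2, 3, 4, 5]
right = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20]

# Relational context: which left-hand numbers divide which right-hand numbers.
divides = {a: {b for b in right if b % a == 0} for a in left}

def has_relational_attribute(obj, c_extent):
    """Existential scaling: obj receives the attribute 'Divides:c' if it
    divides at least one object in the extent of concept c."""
    return bool(divides[obj] & c_extent)

c_extent = {16}   # hypothetical extent of some concept c in the right-hand lattice
print({a: has_relational_attribute(a, c_extent) for a in left})
# {1: True, 2: True, 3: False, 4: True, 5: False}
```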
III. APPROACH
Using our approach, a user can specify an abstract process described as a set of functional and non-functional requirements. The functional requirements describe the needed tasks, while the non-functional requirements describe the expected QoS and mode of composition for these tasks. The approach retrieves sets of Web services, filters and analyzes their data according to the user requirements. Then, it classifies them in concept lattices based on RCA classification.
A. User Requirements Analyzer:
The approach starts by analyzing the requirements specified by the user via a description file (see Figure 4), which is composed of the following elements:
1) Functional requirements: this part is described by a set of tasks. Each task is described by its input and output parameters via their names and types. For each parameter, a user specifies a set of relevant keywords for its name, and another set for its type. For example, Task1 has one input parameter, which is defined by {InParamNameKeys1, InParamTypeKeys1}. We enable the user to provide more than one keyword, in order to retrieve more relevant services. For example, if a user needs a "date" parameter, he/she may specify the possible keywords for its name as {date} and for its types (primitive or complex) as {string, date}.
2) Non-functional requirements: this part is composed of two other parts.
- QoS specification, which specifies the requested QoS level for each task according to the supported QoS attributes \{qosx, qosy, etc.\}.
- Composition specification, which specifies the mode of composition (links) between each two consecutive tasks, for example: Task1 → Task2. The mode of composition describes whether Task1 (source) covers all the input of Task2 (target) or not. This notion is better described below.
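The description file of Figure 4 is not reproduced in the paper; as a rough illustration of the information it carries, the following Python structure (hypothetical field names and format) captures the functional part (keyword sets per parameter), the QoS specification, and the composition specification for the first two tasks of the case study.

```python
# Hypothetical sketch of a user requirements description; field names and the
# exact format of the paper's description file (Figure 4) are assumptions.
requirements = {
    "tasks": {
        "Task1": {
            "inputs":  [{"name_keys": ["ip", "ipAddress"],  "type_keys": ["string"]}],
            "outputs": [{"name_keys": ["city", "cityName"], "type_keys": ["string"]}],
        },
        "Task2": {
            "inputs":  [{"name_keys": ["city", "cityName"], "type_keys": ["string"]}],
            "outputs": [{"name_keys": ["zip", "zipcode"],   "type_keys": ["string"]}],
        },
    },
    # Requested QoS level per task and per supported QoS attribute.
    "qos": {
        "Task1": {"availability": "Good", "response_time": "Good"},
        "Task2": {"availability": "Good", "response_time": "Good"},
    },
    # Requested mode of composition between consecutive tasks:
    # whether the source task must cover all inputs of the target task.
    "composition": {
        ("Task1", "Task2"): {"covers_all_target_inputs": True},
    },
}
```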
B. Web Service Retriever:
The requirements analyzer sends to this component the keywords provided for the parameter names input/output for each task \{InParamNameKeys, OutParamNameKeys\}. In our example, the retriever searches and retrieves a set of services: WS1.i, WS2.j, and WS3.k, corresponding respectively to each of the three tasks. It also gathers the QoS values (the supported attributes) for each retrieved service \{qosx.i, qosy.i, etc.\}.
C. WSDL Parser:
Each set of the retrieved services is passed to this component\(^1\), in order to extract for each service its operations with their input/output parameters.
Using the information extracted from the parsed services, we can check a service's compatibility.
D. Compatibility Checker:
This component checks whether a service provides an operation that can satisfy the corresponding task. An operation satisfies a task when it contains the requested input/output parameters names. We verified this by using the Jaro-Winkler string distance measure [21]. By doing so, we discovered three possible cases:
- compatible, there exists one operation at least that satisfies the corresponding task and has the same parameters types; or it may become:
- adaptable compatible, meaning that none of the satisfying operations has the same parameter types (either for input, or output, or both), thus type adaptations need to be done; otherwise:
- incompatible, the service does not satisfy the corresponding task.
The compatibility checker reduces the number of the retrieved services by omitting the incompatible ones (WS1.i becomes WS1.i’). Thus, it keeps a detailed list of the compatible services together with their satisfying operations.
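The paper does not spell out the matching procedure beyond naming the Jaro-Winkler measure [21]; the following Python sketch shows one plausible way to realize the three-way classification, assuming the jellyfish package for Jaro-Winkler similarity and a hypothetical threshold (the full check would be applied to both the input and the output parameters of an operation).

```python
# Plausible sketch of the compatibility check; the jellyfish package and the
# 0.9 threshold are assumptions, not details given in the paper.
import jellyfish

THRESHOLD = 0.9  # hypothetical similarity threshold

def name_matches(keywords, name):
    """A provided parameter name satisfies a requirement when it is close
    to one of the user's keywords under the Jaro-Winkler measure."""
    return any(jellyfish.jaro_winkler_similarity(k.lower(), name.lower()) >= THRESHOLD
               for k in keywords)

def classify(required, provided):
    """Classify one operation side (inputs or outputs) against one task's
    requirements. 'required' items carry 'name_keys'/'type_keys',
    'provided' items carry 'name'/'type' (hypothetical shapes)."""
    types_ok = True
    for req in required:
        hit = next((p for p in provided if name_matches(req["name_keys"], p["name"])), None)
        if hit is None:
            return "incompatible"               # a required parameter is missing
        types_ok = types_ok and hit["type"] in req["type_keys"]
    return "compatible" if types_ok else "adaptable compatible"

print(classify([{"name_keys": ["city", "cityName"], "type_keys": ["string"]}],
               [{"name": "city", "type": "string"}]))   # compatible
```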
Once we have identified the compatible services, we can measure the mode of composition and the QoS levels using the two following components.
E. Composability Evaluator:
Composing two Web services is finding two operations (one of each) that can be linked. Two operations can be linked when the output parameters of the first (source) can cover the input parameters of the second (target). We can define two composition modes according to the coverage of the input parameters of a target operation, in addition to two other modes according to the needed adaptations that are discovered by the compatibility checker. They are:
\(^1\)Available online: http://www.lirmm.fr/~azmeh/tools/WsdlParser.html
− **Fully-Composable (FC)**, when a source operation covers by its outputs all of the expected inputs of a target operation;
− **Partially-Composable (PC)**, when one or more input parameters of a target operation are not covered;
− **Adaptable-Fully-Composable (AFC)**, when the source and target operations have an FC mode, but need some type adaptations either for the output of the source or the input of the target, or both of them; and
− **Adaptable-Partially-Composable (APC)**, similar to AFC but when having a PC mode.
Thus, having the composition Task1 $\rightarrow$ Task2, the composability evaluator determines the mode of composition for all the services in $WS1.i'$ with all the services in $WS2.j'$. Then, it generates four relational contexts (Section II-B) between $WS1.i'$ and $WS2.j'$ according to the four previously defined modes of composition. These contexts are exploited by the RCA classifier to clarify which services can be composed and following which mode. This is illustrated in the experiments section IV.
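For illustration, the following Python sketch derives the four modes from the coverage of a target operation's inputs by a source operation's outputs; to keep it short, name matching is reduced to case-insensitive equality rather than the string-similarity measure mentioned in Section III-D, so it is a simplification, not the paper's exact procedure.

```python
# Simplified sketch of the composability evaluation: exact (case-insensitive)
# name matching instead of string similarity; illustrative only.
def composition_mode(source_outputs, target_inputs):
    """source_outputs / target_inputs: lists of (name, type) pairs.
    Returns 'FC', 'PC', 'AFC' or 'APC'."""
    provided = {name.lower(): typ for name, typ in source_outputs}
    covered, needs_adaptation = 0, False
    for name, typ in target_inputs:
        if name.lower() in provided:
            covered += 1
            if provided[name.lower()] != typ:
                needs_adaptation = True     # same parameter, different type
    fully = covered == len(target_inputs)
    if fully:
        return "AFC" if needs_adaptation else "FC"
    return "APC" if needs_adaptation else "PC"

print(composition_mode([("city", "string")], [("city", "string")]))    # FC
print(composition_mode([("city", "CityType")], [("city", "string")]))  # AFC
print(composition_mode([("city", "string")],
                       [("city", "string"), ("country", "string")]))   # PC
```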
**F. QoS Levels Calculator:**
This component takes into consideration the QoS values for all the sets of compatible services $(qos_{x}.i',qos_{x}.j',qos_{x}.k', etc.)$. It extracts these values from the ones returned by the service retriever, according to the list returned by the compatibility checker. Web services have many QoS attributes and different ranges of numerical values for each one of these attributes. In order to have a better overview of these values, we apply a statistical technique called BoxPlot++ [22] to cluster the convergent values together. The BoxPlot++ is an extension of the original boxplot [23] technique.
It takes as input a given set of numerical values and produces one to seven corresponding levels of values: \( L = \{\text{BadOutlier, VeryBad, Bad, Medium, Good, VeryGood, GoodOutlier}\} \)². The technique is applied to each QoS attribute. Then, it generates for each set of services a non-relational context (Section II-A) holding all of its QoS attribute levels. These contexts are exploited afterwards by the RCA classifier, in order to classify the services according to these different QoS levels.
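BoxPlot++ is the authors' own tool [22]; purely as a rough illustration of the underlying idea, the following Python sketch derives coarse levels from an ordinary boxplot's quartiles and 1.5·IQR whiskers (the real BoxPlot++ clustering is more refined and produces up to seven levels).

```python
# Rough illustration of mapping QoS values to levels with plain boxplot
# statistics; the actual BoxPlot++ technique [22] is more refined.
import statistics

def qos_levels(values):
    """Assign each value a coarse level based on quartiles and 1.5*IQR whiskers."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr

    def level(v):
        if v < low:   return "BadOutlier"
        if v < q1:    return "Bad"
        if v <= q3:   return "Medium"
        if v <= high: return "Good"
        return "GoodOutlier"

    return {v: level(v) for v in values}

# Example with availability percentages: 55 falls below the lower whisker
# and is flagged as a BadOutlier, the remaining values land in the box.
print(qos_levels([55, 85, 85, 90, 95, 100, 100]))
```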
**G. RCA Classifier:**
This component takes into consideration the relational contexts of composition modes and the non-relational contexts of QoS levels. It also uses the non-functional requirements provided in the user requirements file. These requirements are considered as queries that are added to the two types of contexts as follows:
The QoS specifications for each task are integrated into the corresponding non-relational context as a new row. The composition specifications are integrated into the corresponding relational context in the same way.
Finally, the RCA classifier [24] generates all the corresponding service lattices and passes them to the final component. This component is further detailed in [25].
**H. Lattice Interpreter:**
By integrating the non-functional queries into the contexts, they appear inside the concepts of the corresponding lattices. This enables this component to locate the services that satisfy the queries and to navigate between the different solutions. These services are present in the sub-concepts of the query concepts. This is better illustrated in Section IV.
---
2 Available online: http://www.lirmm.fr/~azmeh/tools/BoxPlot.html
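The essence of this query mechanism can be illustrated without building the full lattice: a service ends up in a sub-concept of the query's concept exactly when its attribute set is a superset of the query's attributes. The Python sketch below uses hypothetical QoS attributes for the Task2 services mentioned in Section IV; note that it ignores the ordinal relation between levels (e.g. that a VeryGood availability also satisfies a Good requirement), which the lattice captures through scaling.

```python
# Sketch of query-based selection without lattice construction (illustrative;
# the attribute sets below are hypothetical, except that WS2.8 is reported in
# the paper to have a VeryGood availability).
services = {
    "WS2.8":   {"A:VeryGood", "RT:Good"},
    "WS2.198": {"A:Good", "RT:Good"},
    "WS2.77":  {"A:Medium", "RT:Bad"},      # hypothetical weaker candidate
}
query = {"A:Good", "RT:Good"}               # non-functional requirement for Task2

matches = [name for name, attrs in services.items() if query <= attrs]
print(matches)  # ['WS2.198']; with ordinal scaling of the levels, WS2.8 would match too
```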
IV. EXPERIMENTS
We applied our approach to an abstract process that is supposed to provide the weather information for a given IP address. It is described by a user requirements file (Section III) using three tasks: Task1, Task2, and Task3. Task1 takes as input an IP address and provides Task2 with the corresponding city name. Task2 takes the city name and returns to Task3 the corresponding zipcode. Finally, Task3 returns the corresponding weather information. From this file, the experiments are conducted in four steps as follows:
1. Collecting Services: We use the set of keywords describing each task to search and retrieve sets of corresponding Web services. We make use of the Service-Finder Web service search engine [4] to collect a set of corresponding endpoints (WSDL addresses). This engine also provides us with values of two QoS attributes: availability (A) and response time (RT). We download the corresponding WSDL files after omitting repeated and invalid endpoints. We show in Table III each task with its keywords as well as the number of obtained endpoints, the number of retrieved WSDL files, and the set identifiers. In this step, we make use of the requirements analyzer component (III-A) and the service retriever (III-B).
Table III: Summary of the retrieved services.
<table>
<thead>
<tr>
<th>Task</th>
<th>Keywords</th>
<th>#Endpoints</th>
<th>#Services</th>
<th>Set ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>{ip,ipAddress}, {city,cityName}</td>
<td>94</td>
<td>94</td>
<td>WS1.i</td>
</tr>
<tr>
<td>2</td>
<td>{city,cityName}, {zip,zipcode,postal,postcode}</td>
<td>768</td>
<td>760</td>
<td>WS2.j</td>
</tr>
<tr>
<td>3</td>
<td>{zip,zipcode,postal,postalcode}, {weather,weatherInfo,forecast,weatherForecast,weatherReport}</td>
<td>39</td>
<td>37</td>
<td>WS3.k</td>
</tr>
</tbody>
</table>
2. Filtering the Services: In this step, we parse the WSDL files using the WSDL parser (III-C) and remove the invalid ones (Filter1). Then we calculate the compatible ones (Filter2) using the Compatibility checker (III-D). In Table IV, we can see the resulting number of filtered services for each set.
Table IV: The number of filtered services for each set.
<table>
<thead>
<tr>
<th>Filter</th>
<th>#Services</th>
</tr>
</thead>
<tbody>
<tr>
<td>Filter1 (Valid)</td>
<td>94</td>
</tr>
<tr>
<td>Filter2 (Compatible)</td>
<td>17</td>
</tr>
</tbody>
</table>
3. Composability and QoS: In this step, we calculate the composition modes for the compatible sets of services (Table V) as well as their QoS levels. We use the Composability evaluator (III-E) and the QoS level calculator (III-F).
The resulting composition modes and QoS levels are organized into non-relational and relational contexts (see [26]), and are used to classify the services in the next step.
4. RCA-Based Classification: During this step, the RCA classifier (III-G) takes the contexts formed in the previous step and integrates the QoS queries. The queries that we choose in this case study are specified to be Good A and Good RT levels for each task in the process. We also require an FC composition mode for both (Task1,Task2) and (Task2,Task3). The generated lattices are illustrated in Figure 5.
These lattices are finally interpreted by the lattice interpreter (III-H), considering two rules:
- in each lattice, the services satisfying the corresponding query (QoS and composition) appear in the sub-concepts of the concept where the query appears. Example: the services in c0 (WS1.i) satisfy Query1;
- the services located closer to the bottom of a lattice offer better QoS levels than the farther ones; for example, in the lattice (WS2.j), the service WS2.8 is better than service WS2.198 because it has a VeryGood A (an inherited attribute). On the other hand, services located in the same concept have convergent QoS levels.
Following the preceding rules, the lattice interpreter extracts the following services as the best choice regarding the specified requirements: {WS1.59, WS1.5, WS1.3} for Task1 because they all appear in the same concept (c0); {WS2.8} for Task2 because it is better than {WS2.198}; and {WS3.23} for Task3 because it is better than {WS3.1}. If we examine the actual services, we get the information in Table VI.
Table V: Number of services per composition mode.
<table>
<thead>
<tr>
<th>Composition Mode</th>
<th>WS1.59</th>
<th>WS1.5</th>
<th>WS1.3</th>
</tr>
</thead>
<tbody>
<tr><td># FC services</td><td>12</td><td>3</td><td>12</td></tr>
<tr><td># PC services</td><td>4</td><td>89</td><td>4</td></tr>
<tr><td># AFC services</td><td>2</td><td>1</td><td>11</td></tr>
<tr><td># APC services</td><td>0</td><td>3</td><td>2</td></tr>
</tbody>
</table>
Table VI: Information about the services satisfying the queries with the selected ones (highlighted).
<table>
<thead>
<tr>
<th>Service Name</th>
<th>Description</th>
<th>A(%)</th>
<th>RT(ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ip2LocationWebService</td>
<td>IP2Location</td>
<td>100</td>
<td>257</td>
</tr>
<tr>
<td>GeoCoder</td>
<td>IPAddressLookup</td>
<td>100</td>
<td>328</td>
</tr>
<tr>
<td>IP2Geo</td>
<td>IPAddress:string</td>
<td>100</td>
<td>798</td>
</tr>
<tr>
<td>MediCareSupplier</td>
<td>GetSupplierByCity</td>
<td>85</td>
<td>304</td>
</tr>
<tr>
<td>ZipcodeLookupService</td>
<td>CityToLatLong</td>
<td>100</td>
<td>439</td>
</tr>
<tr>
<td>USWeather</td>
<td>GetWeatherReport</td>
<td>85</td>
<td>384</td>
</tr>
<tr>
<td>Weather</td>
<td>GetCityForecastableZIP</td>
<td>100</td>
<td>237</td>
</tr>
</tbody>
</table>
These service lattices offer a browsing mechanism that facilitates service selection according to user requirements. In each lattice, services are classified by their QoS levels as well as by their composition modes with the services of the following lattice. A lattice reveals two relations between the services regarding QoS: hasSimilarQoS when services are located in the same concept, and hasBetterQoS when a service is a descendant of another service. Services having a hasSimilarQoS or hasBetterQoS relation with a selected service are considered to be its alternatives. User requirements (queries) can be expressed as new services to be classified in the corresponding lattices. This locates the part of the lattice that meets the user requirements and thus represents an efficient lattice navigation mechanism. In the case where no service can be found for the specified requirements, the query mechanism enables the user to identify the next best service selection. Such a service has lower QoS than requested and is the direct ascendant of the query concept. For example, in the third lattice of Figure 5, we could take WS3.2 or WS3.3 in concept c3 as the next best selection. They have a Medium RT and a Good A.
In this experiment, we had several functional and non-functional requirements needed to build a simple process of three tasks, as described previously. Service-Finder enabled us to find a total of 901 Web service addresses (Table III), among which we had to identify and retrieve the services meeting our requirements. Using our approach, we efficiently identified, out of these 901 endpoints, a set of five services that best match our requirements (Table VI). The total time required to extract these services is 103 seconds (calculated by NetBeans 6.9.1 on an Intel Core 2 Duo, 1.80 GHz, with 2.00 GB of RAM), starting from the WSDL Parser (component C) to the end.
V. RELATED WORK
We list the related work according to three categories:
**Web Service Composability:** The Web service composability problem has been addressed by many works [10], [27], [28], [9], [11]. Ernst et al. [10] present an approach based on syntactic descriptions of Web services. This approach detects the matching between Web service operations by analyzing the results obtained after multiple invocations of these services. The input and output parameter values are compared syntactically and matchings are deduced. Contrary to our approach, which focuses on the syntactic, semantic, and QoS descriptions of Web services, this work is based on the experimental usage of Web services, and is perfectly complementary to ours. In this research area, much work has also been done on semantic Web services [27], [28], [9], [11]. In [27], Sycara et al. present DAML-S, an ontology for semantic Web service description. They show the use of this ontology for service discovery, interaction, and composition. Two implementations are proposed: the DAML-S/UDDI Matchmaker that provides semantic capability matching and the DAML-S Virtual Machine that manages the interaction with Web services. Contrary to our approach, this work does not deal with QoS properties to compose Web services. In [28] and [9], Medjahed et al. present a model for checking the composability of semantic Web services at different levels: syntactic, semantic, and QoS. Two kinds of composability are defined: horizontal (normal composability) and vertical (substitutability). This work deals with three QoS properties: fees, security, and privacy. In our approach, we may consider any QoS property; we used the availability and response time provided by Service-Finder. In [11], Lécué et al. present a method which combines semantic and static analysis of messages in semantic Web services. Data types (XML schemas) and semantic descriptions (domain ontologies and SA-WSDL specifications) of parameters are used to deduce mappings. These mappings are then transformed into adapters (XSL documents). In this work, the focus is on the generation of adapters for Web service data flows, which greatly enhances the composability of Web services that are not directly composable. In our work, we concentrated on the selection of services that best fit the user's QoS and composability requirements. The two approaches are complementary.
**QoS-Based Web Service Selection:** Many approaches like [5], [6], [7] have been proposed to solve the problem of QoS-based Web service selection. In [5], Zeng et al. present a middleware platform that enables the quality-driven composition of Web services. In this platform, the QoS is evaluated by means of an extensible multidimensional model, and the selection of Web services is performed in such a way as to optimize the composite service's QoS. Aggarwal et al. [6] present a constraint-driven Web service composition tool that enables the selection and the composition of Web services considering QoS constraints. As in Zeng et al. [5], a linear integer programming approach is proposed for solving the optimization problem. In [7], Yu et al. propose heuristic algorithms to find a near-to-optimal solution more efficiently than exact solutions. They model the QoS-based service selection and composition problem in two different ways: a combinatorial model and a graph model. A heuristic algorithm is introduced for each model. The QoS-based service selection problem is viewed by all the works detailed above [5], [6], [7] as an optimization problem that aims to find the service that best fits the QoS requirements among the set of candidate services. The advantage of our approach compared to these works is that FCA provides equivalence classes of services (the extents of formal concepts provide classes of services that share the same characteristics). If a service in an application fails, one of the other services in its class can replace it. This provides a considerable enhancement for dynamic composition since it reduces the reaction time.
**Web Service Classification Using Concept Lattices:** Many works in the literature, such as [15], [16], [17], [18], [29], [19], [30], have addressed the classification of Web services using concept lattices. Peng et al. [15] present an approach to classify and select services. They build lattices upon contexts where individuals are Web services and properties represent the operations of these services. The approach allows clustering similar services by applying similarity search techniques that compare operation descriptions and input/output message data types. Aversano et al. [17] present a similar approach to classify and select services using FCA. They propose the WSPAB tool, which permits the discovery, automatic classification, and selection of Web services. Classification is accomplished by defining a binary relation between services and operation signatures. In [18], Azmeh et al. use FCA to classify Web services by keywords extracted from their WSDL description files, to identify relevant services and their substitutes. Fenza and Senatore [29] describe a system for supporting the user in the discovery of semantic Web services that best fit personal requirements and preferences. Through a concept-based navigation mechanism, the user discovers the conceptual terminology associated with the Web services and uses it to generate an appropriate service request which syntactically matches the names of input/output specifications. The approach exploits fuzzy FCA for modelling concepts and the relationships elicited from Web services. After the request formulation and submission, the system returns the list of semantic Web services that match the user query. Contrary to our approach, these works [15], [16], [17], [18], [29] do not deal with QoS properties to classify Web services. In [19], Chollet et al. propose an approach based on FCA to organize the service registry at runtime and to allow the best service selection among heterogeneous and secured services. The service registry is viewed as a formal context where the services are the individuals and the service types and the functional and non-functional characteristics (security characteristics) are the properties. In [30], Driss et al. propose a requirement-centric approach to Web service modelling, discovery, and selection. They consider formal contexts with services as individuals and QoS characteristics as properties. The obtained lattices are used to identify relevant (best fitting the functional requirements) and high-QoS Web services. All the works detailed above [15], [16], [17], [18], [29], [19], [30] are based on FCA, and not on RCA. In addition, they do not deal with service composition, since they propose to classify and select only individual services.
VI. CONCLUSION
In this paper, we presented an approach for facilitating Web service selection according to user functional and non-functional requirements. This approach is based on four principal steps: service collection, validity and compatibility filtering, QoS level calculation, and RCA classification. The resulting lattices group services that have common QoS and composition levels. User requirements are expressed as new services and are classified in the corresponding lattices. This locates the part of the lattice that meets the user requirements. We validated our approach on a set of 901 real-world Web services obtained by querying Service-Finder. Experimental results show that our approach allows an efficient selection of services satisfying the specified functional and non-functional requirements.
Future work includes: (i) enhancing the composability evaluator component by considering advanced semantic and syntactic similarity techniques; (ii) proposing keywords to the user to help her/him in specifying the needed tasks more efficiently; and (iii) performing experiments on more complex compositions.
References
Abstract
Since the very beginning of the Modelica development, ambitions for electronics simulation have existed. The electronic simulator SPICE, the SPICE models and the SPICE netlists have grown into a quasi standard in electronics simulation over the last 30 years. That is why the wish arose to have SPICE models available in Modelica. This paper deals with modeling the SPICE3 models in Modelica, directly extracted from the original SPICE3 source code. This raises the problem of transforming the sequential, simulator-internal model descriptions of SPICE to the declarative description of Modelica. To solve this problem, an approach was developed and tested for some SPICE3 semiconductor models. The current library is presented and further plans are shown.
Keywords: SPICE, Modelica, SPICE3 library for Modelica, Semiconductor models, Electronic circuit simulation
1 Motivation
With starting the development of Modelica, models for electrical circuits were taken into consideration [1]. Since SPICE and its derivatives grew to a quasi standard in electronics simulation the SPICE models should become available within Modelica.
Beyond the Modelica standard library (MSL), two SPICE libraries were developed [2]: the SPICELib and the BondLib. The SPICELib [3], which covers different complex MOSFET models, is a standalone library with its own connectors. The BondLib [4] is based on bond graphs. It offers different levels of models related to HSpice.
The reason for developing this SPICE3 library is to provide both the original Berkeley SPICE3 models and the SPICE netlist approach. Furthermore, some additions will be prepared to cover PSPICE models. Since the Berkeley SPICE3 simulator is the only known electric circuit simulation program with open source code it offers the opportunity to extract models for implementation in Modelica. The SPICE3 library uses that way for SPICE3 semiconductor models.
In this paper the modelling steps are described, starting from a C++ model library that was previously extracted from SPICE3. The SPICE3 library structure is presented as well as a circuit example.
2 SPICE3 models and netlists
The Berkeley SPICE3 (latest versions e5 or f4) is a general-purpose circuit simulation program which has built-in models both for general devices (resistors, capacitors, inductors, dependent and independent sources) and semiconductor devices (Diode, MOSFET, BJT, …). Some models are a collection of different single models (levels). Instead of adding new models the user is able to choose a large variety of parameters. Only sometimes a new model is added by the developer. The set of SPICE models is like a standard in circuit simulation.
Via a netlist the SPICE3 models are composed to a circuit to be simulated. The netlist contains the model instances, their actual parameters, and the connection nodes. In more detail SPICE3 netlists are described in the SPICE3 user’s manual [5]. For many electric and electronic devices SPICE3 netlists are available. For the following inverter circuit figure 2 shows the SPICE3 netlist.

Fig. 1 MOSFET inverter circuit
```
Simulation inverter circuit
MT1 4 2 0 0 Tran_NMOS L=2u W=5u
MT2 6 2 4 6 Tran_PMOS
VDD 6 0 5.0
VEIN1 2 0 dc=0 sin(0 5 0.5)
.model Tran_NMOS NMOS (VT0=0.7 tox=8n lambda=3e-2)
.model Tran_PMOS PMOS (VT0=-0.7)
.tran 0.001 10
.plot v(6) v(2) v(4)
.end
```
Fig. 2 SPICE3 netlist of the inverter circuit
Within the semiconductor devices SPICE3 differentiates between technology parameters and device parameters. Device parameters can be chosen for every single model instance, e.g. the channel length of a transistor. Technology parameters which are specified in a model card (.model) are adjustable for more than one element simultaneously, e.g. the type of transistors.
3 Model extraction out of SPICE3
The SPICE3 internal models were extracted from the SPICE3 source code, and stored in a (commercial) C++ library [6] [7]. This library was intensively tested by including it as external model code to SPICE3, so it was possible to test the C++ models and the original SPICE3 models in parallel.
The C++ library includes the whole model pool of the semiconductor elements of SPICE3. For each element both a C++ file (*.cpp) and a header file (*.h) exist. The header file of each semiconductor element contains classes with data (parameter and internal data) and declaration of methods. In the C++ file the methods are coded.
Due to the object-oriented principle, a class hierarchy of model components was created. Central base classes contain such values and their methods that, according to SPICE3, are needed in nearly every model, e.g. the nominal temperature. Via inheritance of the base classes their values are provided to other classes. Each functionality that is needed more than one time is coded in a separate function. Consequently, a strongly structured hierarchy of classes was developed.
To simulate a model with the C++ library a SPICE3 typical system of equations is generated (initialization phase) and for each solution step the current data are loaded (simulation phase). For each device of the circuit, model specific methods that are called according to different aspects are supplied. These methods can be disposed under functional aspects as follows:
- Methods to analyse the source code
- Methods to create the linear system of equations
- Methods for instantiation the models and parameters
- Methods to calculate values of the linear equation system
- Methods to insert values into the system of equations
For each model parameter a variable exists, that is called “parameterValue” which gets the value of the particular parameter. For some parameters it is important to know whether they were set by the user or their default values were used. Depending on which case comes into effect, different formulas are used in the further calculation. Even if the value set by the user is the same as the default value, the simulation results differ in some cases. The information if a parameter is set is stored in a Boolean value “IsGiven” (true, if the parameter is set). The “IsGiven” value is analysed by different methods.
The semiconductor devices are modelled by means of a substitute circuit. In this process the different physical effects are allocated at any one time to a component of this circuit. For each of this effects different methods exist, that insert the currents and conductances that are calculated for the actual voltages at the pins, into the linear system of equations. Also equations are arranged for the internal nodes of the substitute circuit. For the calculation also internal values of the integration method are used, e.g. the actual time step size and the history of the calculated values.
In summary the C++ library of the SPICE3 semiconductor elements can be characterized as follows:
- The complete library is according to the semiconductors structured in classes, which contain data and methods.
- For each device methods exist, that achieve the necessary calculations.
- For each element of the SPICE3 netlist, the according classes are instantiated. The needed methods are called for every instance.
- The aim of these calls is to create a linear system of equations to calculate the solution like in SPICE3.
- The parameter handling is special because of the calculations in the initial phase that use so-called “IsGiven” values.
- Internal values of the integration methods are used.
The C++ library which was thoroughly tested is the base for creating SPICE3 models in Modelica.
4 Modelling steps towards Modelica
The C++ functions are constructed to calculate the currents of an equivalent circuit starting with given node voltages. The currents are used inside SPICE3 for filling a linear system of equations.
Starting with the C++ equivalent circuit, a Modelica top level model (figure 3) is constructed with electrical pins for connecting. The components of the top level model represent special (e.g. semiconductor) effects (e.g. channel current). Using the pin voltages, the components typically call, in their algorithm part, a hierarchy of functions for the calculation of currents.
There are several steps of modelling semiconductor devices [8], which are described in the following:
1. Construction of top level model
In Modelica every semiconductor device gets a so called top level model which calls the semiconductor functions and can be connected to other models via its connectors. This top level model is the semiconductor device component which will be applied by the user. As in SPICE3 the top level model is adjustable by choosing parameters. Within the top level model the branch currents are calculated using the existing voltages and parameters with the help of functions.
The physical values that are calculated in the C++ semiconductor devices are prepared to be inserted into the linear system of equations like in the SPICE3 simulator. Such a system of equations usually cannot be addressed in Modelica. Only the relation between voltage and current at the interfaces of the model is of interest (terminal behaviour [9]). The voltages at the pins, which are the results of the simulation algorithm, are taken and given to functions that calculate currents and other values.
The top level model that can be connected and provided with parameters is extracted from the functionality in C++ (figure 3).
2. Parameter handling
The behaviour of a transistor is determined significantly by its parameters. Parameters are e.g. the physical dimension, the temperature or the oxide thickness. Before the simulation the Boolean value “IsGiven” is analysed, which gives the information whether a parameter was set by the user or whether its default value is used.
In the C++ library the parameters are handled as a string. If a parameter is needed when calling a method, the string is searched for the value of the parameter. This way is also possible in Modelica, but it is not usual. In Modelica all parameters are provided in a parameter list, where the user can adjust the parameters. It would be desirable if the Modelica concept offered a possibility to decide whether a parameter was set by the user (“IsGiven”) or not. Unfortunately such a possibility does not exist yet. That is why a temporary solution was chosen. The default value of parameters whose “IsGiven” value is of importance is set to a very large negative value (-1e40), because such a value makes no sense as a normal parameter value. Afterwards it is checked whether the value of a parameter is not equal to -1e40. In that case it is assumed that the parameter was given by the user and consequently “IsGiven” is true. Otherwise the parameter gets its default value. This solution is only preliminary and will be improved as soon as Modelica delivers the necessary possibilities.
As described in section 2, the parameters in SPICE3 are divided into two groups, device and technology parameters. In Modelica the device parameters are part of the semiconductor model. The technology parameters are collected in a record. This record is a parameter of all semiconductor devices. As a consequence, the technology parameters can also be adjusted separately in every single model, which is not intended in SPICE, but in some cases it could make sense.
Furthermore the record with the technology parameters is available in the highest level of the circuit. Every semiconductor device gets the record as a parameter. So the components of the record can be adjusted in a global way for each device in the circuit.
Another possibility to provide the technology record globally is to define a model at the circuit level that inherits the properties of the MOSFET, with the desired parameters unchangeably included. Both possibilities force the user to work within the source code. For untrained users it would be better to work in the graphical mode of Dymola and to give each single semiconductor device parameter its value by clicking on the device and inserting the parameter value into the prepared list.
3. Transformation of C++ library data structure
In the C++ source library the data are concentrated in classes and located in the according header file of the semiconductor elements. For each parameter a variable “parameterValue” exists that gets the particular value of the parameter. In Modelica the parameters are concentrated in records because these are the equivalent classes to the C++ classes with the parameters. Records were developed in Modelica to collect and administrate data and to instantiate data all at once. Inside the records the data get their default values. With a function call all data that are located inside a record can be accessed. Parameters that are needed for more than one model are collected in a higher level record which is inherited to the lower level records of the single models (figure 5).
4. Transformation of C++ library methods
The C++ library of the semiconductor elements of SPICE3 contains, beyond the parameters and variables that are concentrated in classes, also a huge number of methods that need to be transformed. Within the transformation it is important that the structure of the C++ library also remains in Modelica, with the aim of recognising the C++ code.
Each semiconductor element in the C++ library becomes to a top level model in Modelica. Inside the top level model functions are called, that calculate both the parameters and the currents at the pins of the model. These functions need to be extracted from C++ and transformed to Modelica. In the C++ library a hierarchy of classes exists where often more than one method calculate one physical effect. Like in a tree structure one method calls another method that itself also calls another method and so on.
The transformation starts with the transfer of the name of the C++ method to the corresponding Modelica function. That function has to be included into a package that has the name of the C++ class where the appropriate method came from. In the second step the parameters and values that are concentrated in classes in C++ are transformed into Modelica records. In the third step the function text that changes the values in the classes, respectively the records, has to be read directly from the C++ code and transformed to Modelica, where the original C++ names are used. Within that step the C++ code is included into the Modelica code as an annotation in order to recognise the C++ code (figure 6).
5. Code revision
After a SPICE3 model has been transformed into Modelica, the source code is checked again with the aim of making it more effective. One point is to include the Modelica operator “smooth”. With it, all conditions (if) are checked to find out whether they are continuous, also in higher derivatives. In that case “smooth” avoids unneeded interruptions of the analogue simulation algorithm. With this approach the performance of the simulation can be increased considerably.
It also has to be checked if methods were transformed to Modelica that are actually not needed, to simplify the Modelica code.
The system of equations that is built in SPICE3/C++ is not used in Modelica, and neither are the internal values of the integration method that are tied to the SPICE3 solution algorithm. The calculation of the Jacobians that is done in SPICE3/C++ is also not used in Modelica. Care was taken to transform only the functional aspects of the models to Modelica. In this way a mixture of model equations and numerical solution algorithms, as in SPICE3, is avoided.
5 Structure of SPICE3 library
The current SPICE3 library contains the packages Basic, Interfaces, Semiconductors, Sources, Examples, Repository and Additionals (as can be seen in figure 7).
The package Basic contains basic elements like resistor, capacitance, inductance and controlled sources. In the package Sources there are the voltage and current sources transformed from SPICE3. The package Examples includes some example circuits to help the user get a feeling for the behavior of the library and its elements.
Only the semiconductor models are written using the converted C++ library. The packages Semiconductors and Repository are related to each other very closely. In the package Repository the semiconductor devices and their model cards from SPICE3 are modelled. The necessary functions and records are also in this package. This package is not intended for user access. The Semiconductors package contains, clearly arranged, the offered semiconductor devices and their model card records for easy usage. The user should take the models out of this package. Via inheritance these models are connected to the Repository, so the user does not have to access the Repository directly.
The package Additionals contains the polynomial sources like they are available in SPICE2 or PSPICE. Other models that are not from SPICE3 can be collected here.
6 Example
In this section a Modelica model of the inverter circuit shown in figure 1 is developed. The following two approaches are important.
Graphical composition
The SPICE3 library models are composed and connected with the graphical possibilities of the simulator. Figure 9 shows such a circuit (Dymola).
Textual composition
Starting with the SPICE3 netlist (figure 2) the Modelica inverter model can be generated directly, without using graphical information. This feature is important because the SPICE3 netlists that exist for many circuits, modules and complex circuit elements should also be available in Modelica. In the following example of two inverters, a way of transforming is shown. First of all, the two source codes are placed side by side.
<table>
<thead>
<tr>
<th>SPICE3</th>
<th>Modelica</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<pre>
inverter
Mp1 11 1 13 11
+ MPmos
Mn1 13 1 0 0
+ MNmos
Vgate 1 0 PULSE
+ (0 5 2s 1s)
Vdrain 11 0
+ PULSE(0 5 0s 1s)
.model MPmos PMOS
+ (gamma=0.37)
.model MNmos NMOS
+ (gamma=0.37 lambda=0.02)
.tran 0.01 5
.end
</pre>
</td>
<td>
<pre>
model inverter
  Spice3.Basic.Ground g;
  Spice3..M Mn1(M(LAMBDA=0.02, GAMMA=0.37));
  Spice3..V_pulse vgate(V1=0, V2=5, TD=2, TR=1);
  Spice3..V_pulse vdrain(V1=0, V2=5, TD=0, TR=1);
  Spice3.Interfaces.Pin p_in, p_out;
protected
  Spice3.Interfaces.Pin n0, n1, n11, n13;
equation
  connect(p_in, n1);
  connect(p_out, n13);
  connect(g.p, n0);
  connect(vdrain.n, n0);
  connect(vdrain.p, n11);
  connect(Mp1.NB, n11);
  connect(Mp1.ND, n11);
  connect(Mp1.NG, n1);
  connect(Mp1.NS, n13);
  connect(Mn1.NB, n0);
  connect(Mn1.ND, n13);
  connect(Mn1.NG, n1);
  connect(Mn1.NS, n0);
end inverter;
</pre>
</td>
</tr>
</tbody>
</table>
The creation of the Modelica texts requires the following steps:
1. The obligatory name of the Modelica model can be derived from the first line of the SPICE3 netlist.
Figure 9: Graphically composed inverter circuit
2. It is necessary to create entities of each circuit element of the SPICE3 netlist and to provide them with parameters, e.g. the SPICE3 line
```
Vdrain 11 0 PULSE(0 5 0 1)
```
is in Modelica
```
V_pulse vdrain(V1=0, V2=5, TD=0, TR=1);
```
3. For each node number in SPICE an internal pin has to be created in Modelica, e.g. for the node number 2 in SPICE, the Modelica line would be:
```
protected Spice3.Interfaces.Pin n2;
```
The “n” is necessary because in Modelica a single number is not a name.
4. According to the netlist the internal pins have to be connected to the circuit element, e.g.
```
connect(Mp1.ND, n11);
```
5. In the last step the external connectors have to be created and connected to the according internal connectors, e.g.
```
Spice3.Interfaces.Pin p_in, p_out;
connect(p_in, n1); connect(p_out, n2);
```
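Such a translation could largely be automated; a translator is mentioned as future work at the end of this section. Purely as a hypothetical illustration of steps 2 to 4 above, the following Python fragment handles a simple two-terminal source. The Modelica type name passed in is a guess based on the package structure described in section 5, and a real translator would also have to handle continuation lines, model cards, deduplication of pin declarations and all other element types.

```python
# Hypothetical sketch of a tiny netlist-to-Modelica translation step for a
# two-terminal element (steps 2-4 above); not the authors' planned translator.
def translate_two_terminal(netlist_line, modelica_type):
    """E.g. 'Vdrain 11 0 PULSE(0 5 0 1)' -> declaration, pin declaration and
    connect statements. Mapping the PULSE arguments to V1/V2/TD/TR is omitted."""
    name, node_p, node_n, *_ = netlist_line.split()
    instance = name.lower()
    declaration = f"{modelica_type} {instance};"
    pins = f"protected Spice3.Interfaces.Pin n{node_p}, n{node_n};"
    connects = [f"connect({instance}.p, n{node_p});",
                f"connect({instance}.n, n{node_n});"]
    return declaration, pins, connects

# The type name below is a guess; the paper abbreviates it as "Spice3..V_pulse".
decl, pins, connects = translate_two_terminal("Vdrain 11 0 PULSE(0 5 0 1)",
                                              "Spice3.Sources.V_pulse")
print(decl)                      # Spice3.Sources.V_pulse vdrain;
print(pins)                      # protected Spice3.Interfaces.Pin n11, n0;
print("\n".join(connects))       # connect(vdrain.p, n11); / connect(vdrain.n, n0);
```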
Concerning the semiconductor elements the model cards have to be transformed to Modelica. Two ways seem to be possible.
**Separate record**
The records of the technology parameters MPmos and MNmos are instances of the record model card in the model inverter for each transistor (Mp1, Mp2,...).
```
model inverter
model MPmos
Spice3.Semiconductors.modelcardMOS M (GAMMA=0.37);
extends Spice3_MOS(final type=1,
modelcard=M);
end MPmos;
model MNmos
Spice3.Semiconductors.modelcardMOS M
(LAMBDA=0.02, GAMMA=0.37);
extends Spice3_MOS(final mtype=0,
modelcard=M);
end MNmos;
Spice3.Basic.Ground g;
MPmos Mp1;
MPmos Mp2;
MNmos Mn1;
MNmos Mn2;
...
end inverter;
```
With the help of these two possibilities the user can give many transistors the same technology parameters, as can be done in SPICE3.
The textual composition could be done automatically by a special translator. The aim is to have such a translator in the future, maybe in the Modelica language.

The result of the Dymola simulation of the inverter circuit is in accordance with the SPICE3 simulation result.
7 Test and Comparison
To verify the transformed models several different test steps were arranged. It is important that the Modelica library is in accordance with SPICE3. Since the C++ library was tested very intensively it can be assumed that it is correct. That is why SPICE3 as well as the C++ library are the base of the tests.
The C++ code was included to the Modelica code as comment. This allows the visual comparison of the source codes.
Single values of currents or other variables (e.g. capacitances) are compared between the Modelica simulation and the simulation of the C++ model library. This approach is very complex and time consuming. Therefore it is only done when the reason of known differences has to be found out.
The terminal behavior is compared between Modelica and SPICE3. Therefore single semiconductor devices are connected to voltage sources to calculate the current-voltage characteristics.
In the next step complex circuits are created with several semiconductor elements and the results are compared between SPICE3 and Modelica. Such circuits are the base for a collection of circuits for regression tests, which are maintained to ensure the correctness of the library in future.
A comparison between the Spice3 library for Modelica and the BondLib in Dymola showed that the two libraries have nearly the same results and performance. For the comparison three circuits were used (NAND, NOR, double Inverter). The following table 1 shows the results in detail:
<table>
<thead>
<tr>
<th></th>
<th>SPICE3lib</th>
<th>BONDlib</th>
<th>SPICE3lib</th>
<th>BONDlib</th>
<th>Spice3lib</th>
<th>BONDlib</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Before translating</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>scalar unknowns</td>
<td>873</td>
<td>10.677</td>
<td>873</td>
<td>10.673</td>
<td>870</td>
<td>10.860</td>
</tr>
<tr>
<td>variables</td>
<td>1.157</td>
<td>12.136</td>
<td>1.157</td>
<td>12.132</td>
<td>1.152</td>
<td>12.315</td>
</tr>
<tr>
<td><strong>After translating</strong></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>parameter depending</td>
<td>8</td>
<td>2.005</td>
<td>8</td>
<td>2.032</td>
<td>8</td>
<td>2.030</td>
</tr>
<tr>
<td>time-varying variables</td>
<td>826</td>
<td>688</td>
<td>826</td>
<td>687</td>
<td>824</td>
<td>687</td>
</tr>
</tbody>
</table>
As can be seen in Table 1, the Spice3 library has far fewer variables than the BondLib before translation of the model. After the model has been translated, the BondLib has slightly fewer variables than the Spice3 library. This shows that the simplification algorithms of Dymola work better for the BondLib.
For the double inverter circuit, the output voltage of the original SPICE3 simulator, the BondLib and the Spice3 library for Modelica is shown in Figures 12 and 13.


Each figure shows the output voltage of the second inverter. In figure 12 the result of the original SPICE3 simulator is shown. The results of the three simulators are nearly the same.
### 8 Conclusions
In this paper a concept was described to transform the procedurally implemented SPICE3 models, which are directly extracted from the original SPICE3 source code, into declaratively described models for Modelica. To this end, a list of modeling steps was elaborated and applied to transform several semiconductor devices from SPICE3 to Modelica, with a focus on parameter handling. The result is a SPICE3 library for Modelica which contains the general devices and first semiconductor devices.
A disadvantage of the Spice3 library compared with the BondLib is that the Spice3 library has no heatport. At the moment it is only possible to simulate with a fixed parameter “Temp”. How this parameter can be made variable and time-dependent has to be worked out in the future.
Further steps for the improvement of the SPICE3 library are:
- developing a method for automatically transforming SPICE3 netlists to Modelica
- increasing the performance of the Modelica models (e.g. application of the smooth operator)
- parameter treatment (“IsGiven”) has to be simplified
- adding SI units to the Modelica models
- performing a large number of tests
- testing large circuits (many devices)
- inclusion of further SPICE3 models
- intensive testing and comparison to SPICE3
- inclusion of some PSPICE model features
- comparison with existing electronic libraries
- adding a heatport
Acknowledgement
This research was funded by the European ITEA2 projects EUROSYSLIB and MODELISAR.
References
CoFra: Towards Structurally Selecting ICT Tools and Methods in Multidisciplinary Distributed Projects
Deepak Sahni, Jan Van den Bergh, Karin Coninx
Expertise Centre for Digital Media
Hasselt University - tUL - IBBT
Wetenschapspark 2
3590 Diepenbeek - Belgium
{deepak.sahni,jan.vandenbergh,karin.coninx}@uhasselt.be
ABSTRACT
ICT tools and working methods are important to effectively work together in cross-organizational, multi-disciplinary projects. At the moment, there is no or only very limited support for sharing knowledge about which ICT tools to use for collaboration and how to use them. In this paper we propose CoFra, a collaboration framework that facilitates stakeholders in sharing knowledge about ICT tools and work practices to improve collaboration in dispersed multidisciplinary teams. CoFra supports two main mechanisms: collaboration variables, which identify characteristics of ICT tools, and workflows, which express best practices. Two prototype applications were created based upon the framework. A first user evaluation of these prototype applications shows (1) that the defined collaboration variables are relevant and useful in the selection of ICT tool support and (2) that workflow depiction can improve knowledge sharing practices over traditional wiki usage.
KEYWORDS : Collaboration, Framework, ICT, Workflow, Tool Selection
1. INTRODUCTION
The realization of a user-centered software development project is a highly complex task. It involves many project activities, sub-activities and stakeholders with different backgrounds (software developers, usability engineers, project managers, business analysts, etc.).
Stakeholders always have to meet the collaboration and communication needs of their partners, employees, clients, and other stakeholders that are involved in the project development process. These needs, however, are beyond e-mail capabilities, as they typically involve document sharing, virtual meetings, and information and knowledge exchanges [7]. To fulfill these needs one has to achieve high quality coordination, communication and collaboration. Effective communication and collaboration highly depends on the usage of ICT tools [2]. By selecting and providing suitable ICT tools for the project activities, stakeholders are more likely to effectively collaborate, which is especially needed in dispersed multidisciplinary teams [2]. Collaboration efficiency could thus be increased by structurally sharing knowledge about ICT tools and their recommended use for different activities in the project.
ICT tool selection in multidisciplinary teams is recognized as a challenging decision-making activity that requires support. Several frameworks [8, 11, 20, 23] have been created that categorize ICT tools for collaboration. Although some of these provide categorizations that would help in identifying tools, none of them takes into account the process during which these tools are used; that is, they only consider the moments of collaboration, not the complete process during which the collaboration takes place. Despite these frameworks, there is still a lack of evidence on which factors, criteria or variables to consider when selecting ICT tools, since each multidisciplinary team has different requirements and prohibitions. One reason for this may be that multidisciplinary teams work with stakeholders from different backgrounds in different contexts, and it is difficult to measure and report individual preferences of stakeholders. In addition, it is difficult to measure tool-related variables (such as interoperability and notification support) and user-specific variables (such as ease of use) due to a lack of conceptual frameworks or tool support. The tool-related and user-specific variables are discussed in section 4.
Another challenge when working in multidisciplinary teams is to manage work processes effectively and efficiently in a distributed environment [17]. A few important tasks (such as sharing knowledge, requesting the approval or review of a document, and sending reminders or notifications) are still done using ad-hoc techniques (such as e-mail), or only very limited support is available. To improve the collaboration process in multidisciplinary teams it is important to define the method or workflow that facilitates stakeholders in performing these tasks according to a (loosely) defined process.
This paper addresses this problem by proposing CoFra, a collaboration framework that considers the whole process, including the selection of ICT tools and methods as well as their usage (see section 3), in multidisciplinary project teams that are distributed in both space and organizations. The framework is conceived as part of the CoCoNuT project, which studies the usage of ICT tools for collaboration, communication and coordination within this type of multidisciplinary projects. Besides the usage of workflow techniques to describe best practices (work methods), the framework provides (1) a checklist to facilitate decisions related to how to select ICT tools in a suitable way (i.e. what variables to measure) and (2) workflow support that applies best practices to improve the collaboration process in distributed teams.
To illustrate the use of CoFra we developed two applications, which are considered instantiations of CoFra. In this context, the framework instantiation applications describe how the framework can be used. The applications are based on the functionalities provided by CoFra. They address specific problems that occur in collaborative projects (for example, the selection of ICT tools) and how these problems are solved. The applications show example solutions and details of the CoFra design. CoFra is a generic solution to an existing problem; the instantiations transform the generic solution into real-world applications. To test these applications, we performed two user studies on these instantiations, which will be discussed and analyzed in section 5 and following. The details about the applications are mentioned in section 6.1 and section 7.1.
2. RELATED WORK
In this section we discuss the different frameworks that have been proposed in literature with the purpose to improve collaboration practices in multidisciplinary teams.
Nutt [20] proposes a model for workflow systems on the basis of: (1) the required level of compliance to the workflow specification, (2) the degree of detail of the description and (3) the operational character of the model. His observation that a workflow is not necessarily a rigid description of a work process is something we agree with and the usage of the term workflow should be understood as such in this paper.
Sarma et al. [23] propose a need-based collaboration framework adapted from Maslow’s [19] theory of needs. Their framework categorizes different collaboration tools based on collaboration needs of developers. Grudin [11] classifies collaboration tools based on time and space: tools are categorized based on whether time or place are the same, predictable or unpredictable. Malone et al. [18] designed a framework to study coordination. The framework is based on the dependencies (e.g. shared resources, task assignment, user etc.) and identification of the coordination processes (that can be) used to manage these dependencies. The dependencies used in their framework are analyzed and similarities across multi disciplines are identified. Their framework is applied when classifying collaboration tools based on a coordination process.
Bolstad and Embley [2] postulate a taxonomy of collaboration. In their taxonomy they categorize collaboration tools (e.g. face to face, audio, video, file transfer) and investigate collaboration characteristics (time, predictability, place and interaction), tool characteristics (recordable, identifiable, and structured), information type (verbal, textual, spatial, emotional, photographic and video) and processes (planning, scheduling, tracking, brainstorming, document creation, data gathering, data distribution and shared situation awareness). The taxonomy helps in identifying collaboration tools (tool category) and exchanging information depending on the situation. However, it does not identify which particular ICT tool is useful for a particular situation. We explain in the remainder of this paper how CoFra facilitates stakeholders to select ICT tools based on their own preferences and those of their colleagues.
3. COLLABORATION FRAMEWORK
This section presents our conceptual collaboration framework (CoFra). It aims to gather and promote the use of knowledge about appropriate technology and methods for collaboration based on experience, general information sources (e.g. about tool-related properties of ICT tools) and organizational policies.
CoFra, shown in Figure 1, extends our previous work [22]. Four of the central entities of the initial framework were inherited: Stakeholder, ICT Tool, Collaboration Variable and Activity, marked with a white background in Figure 1. Three entities extend the coverage of the framework so that it also includes preferences. Preferences can relate to ICT tools or can be described as a best practice, which can be expressed as a workflow. Additional entities give more detail to the framework and provide some links between the central entities. Stakeholders are individual persons or groups of people that are affected by or affect the outcome of a project. Examples of stakeholders are people working on the project, the project leader, a sponsor of the project or even the IT head of an involved organization. Stakeholders have preferences; a preference can also be mandated and as such become a requirement or a prohibition. For example, the sponsor of a project can mandate the use of certain tools to monitor progress of the project, but on the other hand they may prohibit certain tools because of the associated license requirements. Preferences can be described as best practices or can relate to properties of ICT tools (such as interoperability or whether they are open source or not). These properties are described using collaboration variables. When selecting any ICT tool(s) it is very important to know the environment and requirements of the project or organization [9], because the selection is influenced by a wide variety of reasons of different natures. Tool-related and user-specific variables allow us to validate which ICT tools are suitable for the project or organization depending on its size, limitations, etc. Table 1 shows a set of collaboration variables belonging to both categories. A literature review, together with workshops and interviews within multidisciplinary teams as well as our own experience, formed the basis for identifying the set of collaboration variables that we believe are most commonly considered when selecting ICT tools.
Table 1 shows the tool-related variables and their respective values. For example, the 'Type of tool' variable has two potential values, open source and commercial. For each ICT tool the relevant variables should get the appropriate value. These values can then later be used by applications that instantiate CoFra, such as those described in this paper. The user-specific variables 'ease of use' and 'most used tool' are quantified based on direct or indirect input (e.g. surveys, field trips) from stakeholders. A large scale survey was conducted to investigate the usage of ICT tools for coordination, communication and collaboration [14]. We only have survey results for two user-specific variables (ease of use, most used tool) and we want to use real data in our user study (section 6.1); therefore we focus only on them. However, it would be interesting to include other user-specific variables (e.g. ease of learning).
We discern three types of collaboration variables: activity-related, user-specific and tool-related variables. Activity-related collaboration variables describe for which kind of activity a certain category of tools can be used. Bolstad and Embley [2], as discussed in section 2, already give a set of activity-related variables (i.e. the collaboration characteristics and the process characteristics). The other two types of collaboration variables can be used to express tool-related characteristics, which can be generic for a type of tool (e.g. those given in [2]) or technical for specific tools. User-specific variables and stakeholder preferences cover the social perspective. The choice for the three types of collaboration variables is motivated by the fact that tools should not only be suitable for a specific task, but they also should fit the preferences, requirements and prohibitions (technical and social) of the stakeholders that are involved.
4. COLLABORATION VARIABLES
Collaboration variables are a key concept in CoFra. They allow stakeholders to validate which ICT tools are appropriate for particular project activities based on a simple checklist of relevant properties for these tools.
The use of CoFra is not limited to the selection of appropriate ICT tools. It also facilitates stakeholders to design the workflow based on project activities and to share best practices. Best practices are used in selection of tools and design of workflow to improve the communication and collaboration among multidisciplinary teams.
Table 1: Collaboration Variables
<table>
<thead>
<tr>
<th>Tool-Related Variables</th>
<th>Values</th>
<th>User-Specific Variables</th>
</tr>
</thead>
<tbody>
<tr>
<td>Type of tool</td>
<td>Commercial, Open source</td>
<td>Ease of use, Most used tool</td>
</tr>
<tr>
<td>Budget</td>
<td>Freeware, Pay, Pay and free trial</td>
<td></td>
</tr>
<tr>
<td>Language support</td>
<td>Multi-language, Native language</td>
<td></td>
</tr>
<tr>
<td>Mobile support</td>
<td>Not required, Yes, needed</td>
<td></td>
</tr>
<tr>
<td>Interoperability</td>
<td>Not required, Yes, needed</td>
<td></td>
</tr>
<tr>
<td>Notification</td>
<td>RSS, E-mail, Discussion forum</td>
<td></td>
</tr>
<tr>
<td>User interface</td>
<td>WYSIWYG, Command, Wizard</td>
<td></td>
</tr>
</tbody>
</table>
Regarding tool-related variables, we focus on a limited but important set. A limited set of variables improves comprehension: it ensures that stakeholders focus on the most important variables rather than on less important ones that may stand out more.
In the remainder of this section we highlight the tool-related variables we introduce and discuss some of their characteristics that can be found in literature.
Stakeholders perform an effective measurement of ICT costs and benefits, as this measurement is important in decision making regarding ICT tool selection [21]. Every day, people use their mobile, hand-held devices to coordinate their collaboration with one another [4]. Guerrero et al. [12] agree that mobile support is essential for collaboration tools. Mobile devices are low cost, small in size, and above all provide portability. These advantages make stakeholders consider mobile support an important criterion in the selection of ICT tools. Notification is an important feature of ICT tools: it provides an overview of the events that occurred and makes information easily accessible to stakeholders [24, 1], and it should be considered when selecting ICT tools. Learning a new tool is time consuming and involves cost. The user ratings, i.e. ease of use and most used ICT tool, provide insight into the experiences of users; stakeholders can use them when making an appropriate selection of ICT tools [15]. Similarly, the type of tool [3], the user interface [6] and support for multiple languages [13] are important variables that need to be evaluated.
5. APPLICATIONS BASED ON COFRA
We built two proof-of-concept applications based on the insights of CoFra. The first application, ITS (Selection of ICT Tools), is used to select ICT tools for activities in interdisciplinary multi-organization research projects based on collaboration variables (see section 6.1), and the second one, SBP (Sharing Best Practices), combines workflow with wikis to allow flexible sharing of work practices (see section 7.1). The choice of these applications was motivated by the fact that they emphasize two important parts of CoFra: enabling structured selection of ICT tools using collaboration variables and sharing knowledge through workflow descriptions and informal text.
To evaluate these applications, we decided to perform two user studies. We opted for an empirical approach because empirical studies have proved to be an optimal way of getting results [16]. We opted to do a user study because it is hard to do a comparative experiment mainly due to two reasons: 1) we are not aware of any tools using a similar approach and 2) completing the applications to a degree in which they could be used in a real project would require an excessive effort.
The user studies were carried out with the same participants in a closed setting over two sessions with approximately one week in between. Both user studies consisted of pre-test questionnaires, an introduction to the task, task itself, post-test questionnaires, and finally a concluding discussion regarding execution. The user studies did not take longer than 20 minutes. Before the user studies, an e-mail was sent to all participants where each user study was shortly presented and a motivation for participation was given. All participants were in the 21-35 age range. Due to our interest in multidisciplinary teams, the participants were from different educational backgrounds - Business Development (1), Social Science (1), Engineering (2), and Information Technology (9).
Before each session, the participants were given a pre-test questionnaire to get input for sampling. A five-point scale (very little, little, some, much, very much) was used in the pre-test of both user studies to get input from participants. None of the participants used the 'very little' or 'very much' options. The scale used in the pre-test questionnaires measures the knowledge and previous experience of the participants (relevant to the user studies).
After completion of the tasks (in both user studies), the participants were asked to fill out a post-test questionnaire. We used a five-point Likert scale (strongly disagree, disagree, neutral, agree, strongly agree) and a two-point scale (yes, no) in the post-test questionnaires of both user studies. The scale used in the post-test questionnaires helps us collect the participants' opinions regarding their experiences with the applications and the design of the user studies. The post-test questionnaire results are reported in section 6.3 and section 7.3. The results and analysis are based on descriptive statistics.
6. ITS: SELECTION OF ICT TOOLS
The goal of this application, ITS, is to evaluate the impact of collaboration variables on the selection of appropriate ICT tools in a scientific context (PhD students and researchers from multidisciplinary backgrounds), using CoFra for single or multiple project activities.
6.1. CoFra For ICT Tool Selection
ITS is a prototype of a web based application for the selection of ICT tools that are appropriate within the context of a multi-disciplinary, distributed project in which several organizations are involved.
Figure 2 shows a screenshot of the main page of ITS. The values for activity, sub-activity, work packages, partners and collaboration variables can be selected from a set of predefined values. Note that the user first has to pick an activity (step 1) and one or more sub-activities (step 2a).
Based on this information an appropriate set of variables (step 3) will be shown in the bottom part of the screen. The user can then select appropriate values for some or all of these variables, which will result in a list of recommended ICT tools for the selected project activity. Note that for this prototype, the users could only select their preferences based upon the tool-related collaboration variables shown in Table 1. This choice was made so that we could concentrate our evaluation on the variables we defined. As an optional step, users of the application can also select partner organizations whose preferences or rules regarding the usage of ICT tools they want to know. Partners can be selected based on their involvement in specific parts of the project, called work packages (Figure 2, step 2b).
The application database contains several possible ICT tools, project activities, collaboration variables and partners. Currently we use activities and partners as criteria to select appropriate tools, but activities can be combined with work packages etc. The sample of project activities, sub-activities, work packages and stakeholders in this study was based on other results of the CoCoNut project [14]. Most participants in the study were familiar with the specific terms used in the application, eliminating the need for specific instructions. The inventory of ICT tools and their corresponding tool-related variables and values (Table 1) are listed.

User-specific variable values (ease of use, most used tool) are taken from the surveys conducted during CoCoNut [14]. Only the stakeholders' preferences used in the study were fictitious. When the user has completed the previously mentioned steps, the recommended tools are presented. The recommended ICT tools are presented in a table that also contains specific information about each tool, such as the matched tool-specific variables and user-related variables. More generic information, e.g. about the preferences of the relevant stakeholders regarding the recommended ICT tools, is also shown.
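To make the matching step concrete, the following small Java sketch shows how an inventory of ICT tools could be filtered against the collaboration-variable values a user selects. It is purely illustrative: the class names, the example tools and the exact matching rule are assumptions, not part of the ITS implementation.
```java
import java.util.*;

// Illustrative sketch (hypothetical names): filter an ICT-tool inventory
// by the collaboration-variable values selected by the user.
public class ToolSelectionSketch {

    static class Tool {
        final String name;
        final Map<String, String> variables;   // e.g. "Budget" -> "Freeware"
        Tool(String name, Map<String, String> variables) {
            this.name = name;
            this.variables = variables;
        }
    }

    // Returns the tools whose variable values match every selected preference.
    static List<Tool> recommend(List<Tool> inventory, Map<String, String> selected) {
        List<Tool> result = new ArrayList<>();
        for (Tool tool : inventory) {
            boolean matches = true;
            for (Map.Entry<String, String> pref : selected.entrySet()) {
                if (!pref.getValue().equals(tool.variables.get(pref.getKey()))) {
                    matches = false;
                    break;
                }
            }
            if (matches) {
                result.add(tool);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Tool> inventory = List.of(
            new Tool("ExampleWiki", Map.of("Type of tool", "Open source", "Budget", "Freeware")),
            new Tool("ExampleSuite", Map.of("Type of tool", "Commercial", "Budget", "Pay")));
        // the user selects values for a subset of the tool-related variables of Table 1
        recommend(inventory, Map.of("Budget", "Freeware")).forEach(t -> System.out.println(t.name));
    }
}
```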
6.2. Evaluation Approach And Execution
13 participants were involved in this user study (7 females, 6 males). The objective of the pre-test questionnaire was to investigate how well the participants were acquainted with ICT tools and multidisciplinary teams. This study concerned the following research objectives.
Research Objective 1 ITS is an optimal way of selecting appropriate ICT tools.
Research Objective 2 Collaboration variables used in CoFra are relevant and useful for ICT tool selection.
The participants received the task to find ICT tools supporting some project activities using ITS. In the pre-test questionnaire, 11 out of 13 participants indicated that they know much about ICT tools. Furthermore, 6 participants have much, and 5 have some experience in working within multidisciplinary teams. The post-test questionnaire included the questions regarding collaboration variables, stakeholders’ preferences etc.
6.3. Results And Analysis
This section presents some of the results of the user study. First, the two research objectives are discussed, and then some other results are described.
Research objective 1 Among the 13 participants, 10 believe that the tested application provides a useful way of selecting ICT tools. 10 participants believe that it will benefit their work and 11 participants mentioned that CoFra will be useful for new colleagues if their organization/company implemented a tool similar to this one (see Figure 3). CoFra facilitates users not only to select tools based on their own preferences but also to consider the preferences of their partners involved in a project. 10 out of 13 participants agree that concept of using partner preferences is useful, it allows them to match their preferences and the preferences of their colleagues for better collaboration (see Figure 3). Selecting a partner’s preference was optional in the test application. However, 8 participants state that it will be more beneficial if the partner preferences could be integrated with their preferences. Furthermore, 12 participants answered in the post-test questionnaire that it is nice to get the list of tools that only match their preferences. This empirical study confirms our insights based on our understanding and a literature survey: ITS is a novel way of selecting ICT tools. 9 out of 13 participants answered that they have not used any application that allows them to select the tools based on their preferences and preferences of their colleagues.
One subject mentioned earlier use of an application for ICT tools selection, but with a limited scope compared to ITS. And 11 participants show interest in using such an application that allows them to select correct ICT tools that match their preferences.
Research objective 2 All 13 participants confirm that the concept of using tool-related variables for selection of ICT tools is useful. 12 participants believe that user-specific variables (ease of use, most used tool) will further add value in selection of appropriate ICT tools. 12 participants mentioned that it is useful that they only get the list of tools that most match the collaboration variables (see Figure 4). Post-test results shows that participants consider both tool-related and user-specific variables equally important in selection of ICT tools. The positive results in the post-test questionnaire clearly indicate that the collaboration variables used in CoFra are relevant.
The participants were asked to check which collaboration variables are, in their opinion, not important in the process of selecting ICT tools. 5 out of 13 participants answered that all the collaboration variables are important. The remaining 8 participants answered that a few collaboration variables are not important for their selection (Table 2), but no single variable was rated as not important by more than 3 participants. The results shown in Table 2 are mutually inclusive. The display of tool-related variables depends on the ICT tools. This dependency implies that even though a given variable can be less significant for the selection, it should still be considered and cannot be ignored. As shown in Table 2, 'notification support' and 'most used tool' are least often considered not important. One could derive that they are thus the most important.
Table 2: Number Of Participants That Rated Collaboration Variables As Not Important
<table>
<thead>
<tr>
<th>Collaboration Variables</th>
<th>Rated not important</th>
</tr>
</thead>
<tbody>
<tr>
<td>Notification support</td>
<td>0</td>
</tr>
<tr>
<td>Most used tool</td>
<td>0</td>
</tr>
<tr>
<td>Budget</td>
<td>1</td>
</tr>
<tr>
<td>Interoperability</td>
<td>1</td>
</tr>
<tr>
<td>Type of tool</td>
<td>2</td>
</tr>
<tr>
<td>User Interface</td>
<td>2</td>
</tr>
<tr>
<td>Ease of use</td>
<td>2</td>
</tr>
<tr>
<td>Language support</td>
<td>3</td>
</tr>
<tr>
<td>Mobile support</td>
<td>3</td>
</tr>
</tbody>
</table>
Other Results It is important that ITS is easy to use and supports a simple tool selection process. In the post-test questionnaire, 12 participants answered that the tool selection process was clear to them. The guidelines and scenario explained before the pre-test questionnaire were useful and helped them in developing an understanding of the selection process and tool usage. Another interesting finding from the study is that participants show their interest to use the application and they believe it is a useful way to select ICT tools. They are however not interested in adding new ICT tools to the collection. Only 5 participants show a motivation to contribute to the collection of information on ICT tools.
Furthermore, 6 out of 13 participants agreed that it would be nice if the list of collaboration variables could be extended. We can combine the list of current variables with the tool characteristics presented by Bolstad et al. [2] (see section 2) in their taxonomy of collaboration. However, their taxonomy is limited to only three tool characteristics. We can extend the list by adding collaboration variables that we encountered during the literature study, e.g. platform (web-based, desktop), database (Oracle, SQL Server), anti-spam, security, deployment and scalability as tool-related variables, while user rating, ease of learning and customization are user-specific variables. Table 2 reflects that the list of current collaboration variables is relevant.
7. SBP: SHARING BEST PRACTICES
The goal of the SBP prototype application is to analyze an instantiation of CoFra for sharing and promoting the reuse of best practices through workflows.
7.1. SBP
SBP (Sharing Best Practices) is designed to improve the exchange of best practices and to encourage knowledge sharing among stakeholders, i.e. they should be able to do their work as easily, efficiently and effectively as possible.
Figure 5 gives an overview of the different parts of the application. The database used in the application contains the possible values (i.e. activities, sub-activities, etc.) used in the workflow. The navigation menu on the left is used to navigate through the best practices available in the database. The organization of the application's menu is based on a taxonomy developed in the same project as our collaboration framework. Data obtained during field trips and surveys [14] as well as card sorting exercises with researchers from multiple disciplines were used to create the taxonomy. This approach was chosen because stakeholders with different backgrounds have different views and often find it difficult to identify ambiguous terms [10]. The center depicts a workflow, in this case on how to collaboratively write a report. This graphical depiction should allow each stakeholder to quickly identify all necessary steps to perform this activity, to navigate quickly through all related knowledge and even to start execution of a specific step.
When a step in the workflow is selected, all relevant information is shown in the lower part of the screen. It is spread over a maximum of three tabs: information, recommended tools and perform task. The information tab contains detailed information about this step in the workflow. The information is presented using a wiki, as this approach not only allows stakeholders to access content but also to contribute and change the content [5]. This is important to allow information to be updated to reflect best practices in different disciplines and stakeholders. The recommended tools tab is a shortcut to a variant of the ICT tool selection application discussed in the previous section, while the perform task tab allows to directly execute a step, whenever it is possible to do so. While in the current prototype the functionality in the perform task tab is not related to the information in the recommended tools tab, it is our belief that this should be the case.
Automating some activities using the workflow specification was not a goal of this application and would be a major challenge. Supporting inter-organizational processes
across research groups is a challenging task in workflow management [25] due to many technical and social issues. Therefore our application enables the execution of single steps in the workflow.
7.2. Evaluation Approach And Execution
One female participant with Information Technology background and a male subject from Business Development who participated in the first user study were not available for the second study. This means that 11 participants took part in the study (6 females, 5 males). The objective with the pre-test questionnaire was to investigate how familiar the participants were with the workflow concept and wiki applications. 8 out of the 11 participants have little knowledge regarding workflow and none of them have much experience with workflow applications. 3 participants have much and 6 have some experience of using wiki. Here, we aim to investigate following research objective.
Research Objective 3 A workflow depiction improves knowledge sharing practices over traditional wiki.
In this user study, the participants used SBP (see section 7.1). The participants were given a scenario in which they were asked to consider themselves as a new employee that wanted to know more about writing deliverables. This choice was motivated by the fact that the taxonomy used in application (see section 7.1) should be clear to new researchers with different backgrounds and that the topic would otherwise be considered trivial. While performing the tasks the participants were encouraged to try out all functionality related to the steps of the workflow for “writing a deliverable”. In the post-test questionnaire the participants answered questions related to workflow, wiki etc.
7.3. Results And Analysis
Ten participants indicated that the concept of using wiki to provide information (guidelines, best practices, other useful information i.e. web-links, videos etc) is useful (see Figure 6). Furthermore, 8 out of 11 participants agree that a wiki could be used to improve collaboration. This can be explained by the fact that a wiki facilitates users to contribute and share knowledge. 7 participants mention that a graphical description of the workflow adds value over a wiki-only description of best practices (see Figure 6). This can be explained by the fact that the graphical representation (used in the test) provides a better overview about steps that need to be performed to complete activity. When consulted during the execution of the activity, it helps in identifying which tasks are completed and what still needs to be done. Performing a task within workflow is another feature that adds value over a traditional wiki as 10 out of 11 participants agree that starting a task directly from the workflow is useful (see Figure 6). Based on the post-test questionnaire results we infer that participants prefer CoFra’s workflow depiction over traditional wiki.
Other Results 8 out of 11 participants believe that the workflow application will benefit their new colleagues, while only 4 participants answered that it will benefit their own work. We believe that the information provided in a scenario could be a possible reason that participants believe that the workflow application benefit their new colleagues rather than their own work. On the contrary, 10 participants stated that they would use this kind of application for their work. The workflow application is equally useful to experienced researchers. They can contribute and add more knowledge based on their experiences, literature and other useful information using the wiki. The workflow generated is static and predefined, all 11 participants believe that it will be nice if they can customize the workflow, based on their preferences. We agree that there is a need to customize the workflow. This could be achieved by letting users customize their profile to personalize the interaction. We will examine the possibility to add personalization in the workflow application in future.
10 out of 11 participants agreed that it is useful to start external applications from within this kind of application. The participants also provided positive feedback regarding the integration of ICT tools: all 11 participants believed that the concept of integrating ICT tool selection in the workflow adds more value than using it as a separate application. The empirical evidence strengthens our belief that the application described in section 7.1 is a suitable tool to share best practices and to promote their application through the integrated capabilities to start using the proposed tools. It thus also could help to improve collaboration.
8. DISCUSSION
Selection of the right ICT tool and sharing/promotion of best practices are very important activities in collaborative multidisciplinary projects because they lay the foundation for effective communication and collaboration between stakeholders. However, since these activities require detailed knowledge about the stakeholders and estimation skills in order to be successful, it is difficult to carry them out perfectly. The inability to estimate implementation effort and a lack of framework and tool support may be one of the reasons why organizations use ad-hoc methods when selecting ICT tools.
We proposed CoFra that acknowledges the importance of appropriate selection of ICT tools and methods by placing these activities at the same level as the actual usage of these methods. Despite the fact that we devoted much attention to two applications that focus on the sharing and selection of appropriate tools and methods, we believe that the framework can be useful even without these or other specific new tools by indicating important points of attention. The primary purpose of the discussed applications is to illustrate potential software support for usage of CoFra (in multi-disciplinary research projects). Two user studies using these applications were conducted to evaluate the instantiations of a collaboration framework as a technique for selecting ICT tools and sharing best practices. Although these user studies do not give us a basis to draw definitive conclusions in real world settings, we believe they give some indications of pitfalls and potential to introduce this kind of tool support in multi-organization multidisciplinary research projects.
The results of the first user study (see section 6.1) do not contradict that using a simple set of variables can be useful to select appropriate tools and that the additional collaboration variables we provide can be useful in selecting ICT tools. The results also give an indication of what is considered important when working in teams: notification support (i.e. support for awareness) and most frequently used tools (i.e. do other people use this tool?) are closely followed by the variables budget (can we afford this?) and interoperability (can we easily reuse the output of the tool). What this shows regarding CoFra is that the user-specific variables (potentially) are as important as the other considerations. Using a specific tool, such as ITS, to support the selection process would probably be difficult due to the fact that only a small number of people are inclined to enter the necessary data. This finding is similar to what is seen at large scale collaborative efforts such as wikipedia.
The results of the second user study, regarding SBP (see section 7.1), indicate that graphical workflow depiction of even informal activities can be useful in distributed projects. Most people would like to adapt these workflows, which is logical since many activities are rather informal and might involve no or very limited steps that are really required to be performed in a specific way. The potential for adoption is however relatively uncertain as only a minority answered that such a tool would benefit their work but a majority indicated that it would benefit new colleagues. Most participants in the user study indicated nonetheless they would use such a tool. Considering that many parts of activities in multi-organization, multidisciplinary research projects are unpredictable in both time and space, this latter statement is hopeful since Grudin [11] indicated workflows as the best collaboration tool for this category of collaborative work. Further investigation of user-adaptable lightweight workflow systems is thus encouraged.
9. CONCLUSIONS AND FUTURE WORK
This paper reports on a study about supporting ICT tool selection and use in multi-disciplinary collaborative projects. We presented a conceptual collaboration framework (CoFra). CoFra has four main components (Stakeholder, ICT Tools, Collaboration Variable and Activity) as well as two additional components (Workflow and Best Practice). Based on the discussion in this paper, CoFra can be considered a novel way to improve collaboration in multidisciplinary teams. It helps in selecting the appropriate ICT tools for project activities, covers a comprehensive set of collaboration variables, generates workflows and examines best practices (e.g. knowledge sharing, integration with external applications, stakeholders' preferences in the selection of ICT tools). We thus conclude that CoFra is a state-of-the-art technique for multidisciplinary collaborative projects.
We conducted two user studies to validate instantiations of CoFra. Based on the results from both studies, one can conclude that (1) CoFra provides a suitable way to select appropriate ICT tools for multiple project activities, (2) the collaboration variables used in CoFra are relevant and (3) CoFra's workflow element improves the sharing of best practices over a traditional wiki. The results presented in this paper indicate that our framework has a lot of potential. A future, more extensive study is however required to make definite recommendations regarding the use of the framework in real-world settings. Based on the experiences with these user studies, we believe there is a need to add personalization to improve the applications tested in this paper. It is, however, our belief that this would not result in changes to CoFra. The results of the second user study also taught us that there is a desire to adapt the workflow. An easy way to update a workflow specification or create variations of it, while ensuring correctness of the related information and ICT tools, is another area of future work.
ACKNOWLEDGMENTS
We would like to thank our colleagues and in particular Mieke Haesen for their valuable input.
References
Static Estimation of Test Coverage
IPA Herfstdagen - Session: Static analysis via code query technologies
Tiago Alves & Joost Visser
November 27th, 2008
Arent Janszoon Ernststraat 595-H
NL-1082 LD Amsterdam
[email protected]
www.sig.nl
Software Improvement Group
Characterization
- Based in Amsterdam, The Netherlands
- Spin-off from the Centre of Mathematics and Information Technology (CWI)
- Fact-based IT consulting
Services
- Software Risk Assessment
- Exhaustive research of software quality and risks
- Answers specific research questions
- One time execution
- Software Monitor
- Automated quality measurements executed frequently (daily / weekly)
- Information presented in a web-portal
- DocGen
- Automated generation of technical documentation
Our Customers
(Logo slide: customers from finance and insurance, government, logistics and IT, including KLM, ING, Rabobank, Delta Lloyd, Allianz, ZwitserLeven, InterBank, Friesland Bank, LeasePlan, DHL, ProRail, RDW, Politie, VROM, Getronics PinkRoccade, Centric, PricewaterhouseCoopers, IBM, Exact, Logica, CMG, Chess, Eneco, Gasunie, Euromax Terminal and Kadaster.)
Software testing
(Diagram relating production code, test code, unit tests, the method under test, test execution, test coverage and test failures.)
Measuring test coverage
Pros:
- Indicator for test quality
- Indicator for quality of the software under test
- Higher coverage => better software quality (in principle)
Cons:
- Full installation required (sources + libraries)
- Instrumentation of source/byte code
- Problematic in embedded systems
- Execution (hardware and time constraints)
- Not appropriate to compute in the context of software quality assessment!!
Motivation
- 13th Testdag, Delft, November 2007
- I. Heitlager, T. Kuipers, J. Visser “Observing unit test maturity in the wild”
Research questions
- Is it possible to estimate test coverage without running tests?
- What trade-offs can be made between sophistication and accuracy?
Requirements
- Use only static analysis
- Scale to large systems
- Robust against incomplete systems
Where do we stand?
- Solution sketch
- Sources of imprecision
- Dealing with imprecision
Solution sketch
1. **Extract**
- Extract structural and call information
- Determine set of test classes
2. **Slice (modified)**
- Slice graph starting from test methods
- Set of methods reached from test code
- Take into account class initializer calls
3. **Count (per class)**
- Determine number of defined methods
- Determine number of covered methods
4. **Estimate**
- Class coverage
- Package coverage
- System coverage
Modified slicing specification
(Relational specification of the modified slice: the relations call, def and init are combined into a derived invoke relation; the covered methods are those reachable from the test methods via the transitive closure invoke⁺.)
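A rough sketch of this slicing step in plain Java (hypothetical names; the actual implementation uses a code query language, as shown later): starting from the test methods, the invoke relation is followed transitively, and a call into a class also reaches that class's initializer.
```java
import java.util.*;

// Illustrative sketch of the modified slice: methods reachable from test code.
public class SliceSketch {
    // call graph: caller -> possible callees (assumed to be extracted beforehand)
    final Map<String, Set<String>> call = new HashMap<>();
    // definingClass: method -> class that defines it
    final Map<String, String> definingClass = new HashMap<>();
    // clinit: class -> its class initializer method, if any
    final Map<String, String> clinit = new HashMap<>();

    Set<String> reachableFrom(Collection<String> testMethods) {
        Set<String> reached = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(testMethods);
        while (!work.isEmpty()) {
            String m = work.pop();
            for (String callee : call.getOrDefault(m, Collections.emptySet())) {
                visit(callee, reached, work);
                // a call into a class also triggers its class initializer (<clinit>)
                String init = clinit.get(definingClass.get(callee));
                if (init != null) {
                    visit(init, reached, work);
                }
            }
        }
        return reached;   // covered methods are the reached methods defined in non-test classes
    }

    private void visit(String m, Set<String> reached, Deque<String> work) {
        if (reached.add(m)) {
            work.push(m);
        }
    }
}
```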
Code coverage formulas
Defined methods: \( DM : C \rightarrow \mathbb{N} \)
Covered methods: \( CM : C \rightarrow \mathbb{N} \)
\[
CC(c) = \frac{CM(c)}{DM(c)} \times 100\%
\]
\[
PC(p) = \frac{\sum_{c \in p} CM(c)}{\sum_{c \in p} DM(c)} \times 100\%
\]
\[
SC = \frac{\sum_{c \in G} CM(c)}{\sum_{c \in G} DM(c)} \times 100\%
\]
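A minimal sketch of how these formulas can be evaluated once DM(c) and CM(c) are known per class (illustrative only; the class and method names are not part of the original approach):
```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: class, package and system coverage from per-class method counts.
public class CoverageFormulas {
    private final Map<String, Integer> dm = new HashMap<>();  // DM(c): defined methods per class
    private final Map<String, Integer> cm = new HashMap<>();  // CM(c): covered methods per class

    public void add(String className, int defined, int covered) {
        dm.put(className, defined);
        cm.put(className, covered);
    }

    public double classCoverage(String c) {                   // CC(c)
        return 100.0 * cm.get(c) / dm.get(c);
    }

    public double packageCoverage(String pkg) {                // PC(p)
        int d = 0, k = 0;
        for (String c : dm.keySet()) {
            if (c.startsWith(pkg + ".")) {
                d += dm.get(c);
                k += cm.get(c);
            }
        }
        return 100.0 * k / d;
    }

    public double systemCoverage() {                            // SC
        int d = dm.values().stream().mapToInt(Integer::intValue).sum();
        int k = cm.values().stream().mapToInt(Integer::intValue).sum();
        return 100.0 * k / d;
    }
}
```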
What can go wrong? (Sources of imprecision)
Java language
- Control flow
- Dynamic dispatch (inheritance)
- Overloading
General issues
- Frameworks / Libraries call backs
- Identification of test code
- ///CLOVER:OFF flags
Sources of imprecision
Control flow
```java
class ControlFlow {
    ControlFlow(int value) {
        if (value > 0)
            method1();
        else
            method2();
    }
    void method1() {}
    void method2() {}
}
```
```java
import junit.framework.*;

class ControlFlowTest extends TestCase {
    void test() {
        ControlFlow cf = new ControlFlow(3);
    }
}
```
Sources of imprecision
Libraries
```java
class Pair {
    Integer x; Integer y;
    Pair(Integer x, Integer y) { ... }
    int hashCode() { ... }
    boolean equals(Object obj) { ... }
}

class Chart {
    Set pairs;
    Chart() { pairs = new HashSet(); }
    void addPair(Pair p) { pairs.add(p); }
    boolean checkForPair(Pair p) { return pairs.contains(p); }
}
```
```java
import junit.framework.*;

class LibrariesTest extends TestCase {
    void test() {
        Chart c = new Chart();
        Pair p1 = new Pair(3,5);
        c.addPair(p1);
        Pair p2 = new Pair(3,5);
        c.checkForPair(p2);
    }
}
```
Dealing with imprecision
**Pessimistic approach**
- Report only what can be determined to be true
- False negatives
- Estimates lower bound for coverage
**Optimistic approach**
- Report everything that might be true
- False positives
- Estimates upper bound for coverage
**Pessimistic vs. Optimistic (software assessment context)**
- Pessimistic will always report low coverage
- Optimistic will be sensitive to lack of coverage
- Optimistic will not take into account library calls
Where do we stand?
- Code query technologies
- Definition of abstractions
- Implementation of the method
- Querying the results
# Code query technologies
| Tool | Style/Paradigm | Type system | | | | Abstraction | Extendability |
|---|---|---|---|---|---|---|---|
| ReView | Procedural | - | x | - | - | - | - |
| Grok | Relational | x | x | x | - | - | - |
| Rscript | Relational & Comprehensions | x | x | - | x | Composite | x |
| JRelCal | API | x | x | x | x | java | x |
| SemmleCode | SQL-like + OO | x | x | x | x | Object | x (limited) |
| Crocopat | Imperative + FO logic | x | x | x | - | - | - |
| GReQL2 | SQL-like + path expr. | x | x | x | x | - | - |
| JTransformer | FO Logic | x | x | x | - | - | - |
SemmleCode
Commercial product developed by Semmle Ltd. (Oege de Moor et al.)
**Historical overview:**
- Started December 2006
- First tutorial July 2007
- Version 1.0 (Beginning 2009 - expected)
**Eclipse plug-in + headless version**
**Integrated Java + XML extractor**
**.QL as code query language**
- Based on relational calculus + object-oriented model.
Definition of abstractions
Class extension
```java
class AnalyzedClass extends Class {
AnalyzedClass() {
this.fromSource() and ...
}
}
class TestClass extends AnalyzedClass {
TestClass() { isJUnitClassTest(this) }
}
class CodeClass extends AnalyzedClass {
CodeClass() { not isJUnitClassTest(this) }
...
int numberOfDefinedMethods() {
result = count( NonAbstractCallable m
| this.containsCallable(m))
}
int numberOfCoveredMethods() {
result = count( NonAbstractCallable m
| this.containsCallable(m)
and m.isTestCovered())
}
}
```
Definition of abstractions
Method extension
```java
class NonAbstractCallable extends Callable {
    NonAbstractCallable() {
        this.fromSource() and
        not (this.getName() = "<clinit>") and
        not this.hasModifier("abstract") and
        not (this.getLocation().getNumberOfLines() = 0)
    }
    predicate isTestCovered() {
        exists( TestClass tc, Callable tm | tc.contains(tm) and invoke+(tm, this))
    }
}
```
Modified slicing implementation
Binary relational expression
\[
\begin{align*}
n \xrightarrow{\text{init}} m &\iff \exists\, c,\, m_i:\ n \xrightarrow{\text{call}} m_i \xrightarrow{\text{def}} c \ \wedge\ \langle\text{clinit}\rangle_c \xrightarrow{\text{call}} m\\
\text{invoke} &= \text{call} \cup \text{init}, \qquad \text{coverage is computed from } \text{invoke}^{+}
\end{align*}
\]

```java
predicate invoke(Callable m1, Callable m2) {
    myPolyCall(m1,m2)
    or
    exists(Class c, Callable mi, Callable mj |
        myPolyCall(m1,mi) and
        c.contains(mi) and
        c.contains(mj) and
        mj.getName() = "<clinit>" and
        myPolyCall(mj,m2)
    )
}
```
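The following is a minimal Java sketch of the same reachability step, working over already-extracted call and definition facts; it is only an illustration under simplifying assumptions (the maps, method names, and example facts are hypothetical), not the .QL implementation used in the study.

```java
import java.util.*;

// Sketch: methods reachable from test methods over the "invoke" relation
// (direct calls plus the calls made by the <clinit> of any class whose
// methods are called), i.e. invoke+ applied to the set of test methods.
class StaticCoverage {
    // Extracted facts (assumed to be provided by the fact extractor):
    Map<String, Set<String>> call = new HashMap<>();   // caller -> callees
    Map<String, String> definedIn = new HashMap<>();   // method -> defining class
    Map<String, String> clinitOf = new HashMap<>();    // class -> its <clinit>, if any
    Set<String> testMethods = new HashSet<>();

    Set<String> coveredMethods() {
        Deque<String> work = new ArrayDeque<>(testMethods);
        Set<String> reached = new HashSet<>();
        while (!work.isEmpty()) {
            String m = work.pop();
            for (String n : call.getOrDefault(m, Set.of())) {
                if (reached.add(n)) work.push(n);          // direct call edge
                // calling a method of class c also triggers c's <clinit>
                String clinit = clinitOf.get(definedIn.get(n));
                if (clinit != null && reached.add(clinit)) work.push(clinit);
            }
        }
        return reached;  // <clinit> itself is excluded later when counting methods
    }

    public static void main(String[] args) {
        StaticCoverage sc = new StaticCoverage();
        // hypothetical facts, for illustration only
        sc.testMethods.add("FooTest.test");
        sc.call.put("FooTest.test", Set.of("Foo.bar"));
        sc.call.put("Foo.<clinit>", Set.of("Foo.helper"));
        sc.definedIn.put("Foo.bar", "Foo");
        sc.definedIn.put("Foo.helper", "Foo");
        sc.clinitOf.put("Foo", "Foo.<clinit>");
        System.out.println(sc.coveredMethods()); // Foo.bar, Foo.<clinit>, Foo.helper
    }
}
```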
Querying the results
Class-level query
```
from CodeClass c
select
c.getQualifiedName() as ClassName,
c.numberOfCoveredMethods() as NumberOfCoveredMethods,
c.numberOfDefinedMethods() as NumberOfDefinedMethods
order by ClassName
```
Querying the results
Class-level query results for JPacMan
[Screenshot: the code query and its results, listing the total and covered number of methods per class]
Querying the results
Package-level query
```sql
from Package p
where p.fromSource()
select
p as PackageName,
sum(CodeClass c | p.contains(c) | c.numberOfCoveredMethods())
as NumberOfCoveredMethods,
sum(CodeClass c | p.contains(c) | c.numberOfDefinedMethods())
as NumberOfDefinedMethods
order by PackageName
```
Querying the results
Package-level query results for JPacMan
<table>
<thead>
<tr>
<th>PackageName</th>
<th>NumberOfCoveredMethods</th>
<th>NumberOfDefinedMethods</th>
</tr>
</thead>
<tbody>
<tr>
<td>jpacman</td>
<td>10</td>
<td>0</td>
</tr>
<tr>
<td>jpacman.controller</td>
<td>10</td>
<td>47</td>
</tr>
<tr>
<td>jpacman.model</td>
<td>10</td>
<td>130</td>
</tr>
</tbody>
</table>
Where do we stand?
- Experimental design
- Data set characterization
- Comparison of results
Experimental design
**Data set selection and characterization**
- Open-source and proprietary Java systems
- Different application domains
- Different sizes
**Execution of experiment**
- Clover execution (configuring clover + ant, running tests)
- XML Clover extraction (XSLT transformations for CSV generation)
- SemmleCode execution (text file export + scripts for CSV generation)
- Custom built java tool to read CSV files and generate Excel XLS
Statistical analysis
Distributions
- Histogram of the coverage estimation
- Histogram of the real (clover) coverage
Correlation
- Spearman (rank-correlation); a small illustrative sketch follows after this list
Estimation differences
- Histogram of the differences
Dispersion
- Inter-quartile ranges (dispersion)
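As an illustration of the correlation step, the following self-contained Java sketch computes Spearman rank correlation (average ranks for ties, then Pearson on the ranks); it is not the statistical package used in the experiment, and the input values are hypothetical.

```java
import java.util.*;

// Sketch: Spearman rank correlation between static and Clover coverage values.
class Spearman {
    // Replace each value by its rank; tied values get the average of their ranks.
    static double[] ranks(double[] v) {
        Integer[] idx = new Integer[v.length];
        for (int i = 0; i < v.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> v[i]));
        double[] r = new double[v.length];
        for (int i = 0; i < v.length; ) {
            int j = i;
            while (j < v.length && v[idx[j]] == v[idx[i]]) j++;
            double avg = (i + j + 1) / 2.0;            // average of ranks i+1 .. j
            for (int k = i; k < j; k++) r[idx[k]] = avg;
            i = j;
        }
        return r;
    }

    // Pearson correlation of the rank vectors = Spearman correlation.
    static double correlation(double[] x, double[] y) {
        double[] rx = ranks(x), ry = ranks(y);
        double mx = Arrays.stream(rx).average().orElse(0);
        double my = Arrays.stream(ry).average().orElse(0);
        double num = 0, dx = 0, dy = 0;
        for (int i = 0; i < rx.length; i++) {
            num += (rx[i] - mx) * (ry[i] - my);
            dx += (rx[i] - mx) * (rx[i] - mx);
            dy += (ry[i] - my) * (ry[i] - my);
        }
        return num / Math.sqrt(dx * dy);
    }

    public static void main(String[] args) {
        // hypothetical per-class coverage values, for illustration only
        double[] staticCov = {88, 60, 100, 40, 75};
        double[] cloverCov = {93, 55, 100, 39, 70};
        System.out.println(correlation(staticCov, cloverCov));
    }
}
```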
Data set characterization
<table>
<thead>
<tr>
<th>System</th>
<th>Version</th>
<th>Author</th>
<th>LOC</th>
<th># Packages</th>
<th># Classes</th>
<th># Methods</th>
</tr>
</thead>
<tbody>
<tr>
<td>JPacMan</td>
<td>3.0.4</td>
<td>Arie van Deursen</td>
<td>2.5k</td>
<td>3</td>
<td>46</td>
<td>335</td>
</tr>
<tr>
<td>Certification</td>
<td>20080731</td>
<td>SIG</td>
<td>3.8k</td>
<td>14</td>
<td>99</td>
<td>413</td>
</tr>
<tr>
<td>G System</td>
<td>20080214</td>
<td>C Company</td>
<td>6.4k</td>
<td>17</td>
<td>126</td>
<td>789</td>
</tr>
<tr>
<td>Dom4j</td>
<td>1.6.1</td>
<td>MetaStuff</td>
<td>24.3k</td>
<td>14</td>
<td>271</td>
<td>3,606</td>
</tr>
<tr>
<td>Utils</td>
<td>1.61</td>
<td>SIG</td>
<td>37.7k</td>
<td>37</td>
<td>506</td>
<td>4,533</td>
</tr>
<tr>
<td>JGap</td>
<td>3.3.3</td>
<td>Klaus Meffert</td>
<td>42.9k</td>
<td>27</td>
<td>451</td>
<td>4,995</td>
</tr>
<tr>
<td>Collections</td>
<td>3.2.1</td>
<td>Apache</td>
<td>55.4k</td>
<td>12</td>
<td>714</td>
<td>6,974</td>
</tr>
<tr>
<td>PMD</td>
<td>5.0b6340</td>
<td>Xavier Le Vourch</td>
<td>62.8k</td>
<td>110</td>
<td>894</td>
<td>6,856</td>
</tr>
<tr>
<td>R System</td>
<td>20080214</td>
<td>C Company</td>
<td>82.3k</td>
<td>66</td>
<td>976</td>
<td>11,095</td>
</tr>
<tr>
<td>JFreeChart</td>
<td>1.0.10</td>
<td>JFree</td>
<td>127.7k</td>
<td>60</td>
<td>875</td>
<td>10,680</td>
</tr>
<tr>
<td>DocGen</td>
<td>r40981</td>
<td>SIG</td>
<td>127.7k</td>
<td>112</td>
<td>1,786</td>
<td>14,909</td>
</tr>
<tr>
<td>Analysis</td>
<td>1.39</td>
<td>SIG</td>
<td>267.5k</td>
<td>284</td>
<td>3,199</td>
<td>22,315</td>
</tr>
</tbody>
</table>
Dom4j: detailed statistical analysis
Class coverage distributions comparison
[Figure: class coverage histograms for Dom4j, Clover vs. static estimation]
Dom4j: detailed statistical analysis
Class coverage comparison + differences
[Figure: static vs. Clover coverage per class, and histogram of the differences at class level]
Dom4j: detailed statistical analysis
Package coverage comparison + differences
### Statistical analysis
(Class and package coverage comparison)
| System name | Spearman (Class) | Spearman (Package) | Median (Class) |
|---|---|---|---|
| JPacMan | 0.467* | 1 | 0 |
| Certification | 0.368** | 0.520 | 0 |
| G System | 0.774** | 0.694** | 0 |
| Dom4j | 0.584** | 0.620** | 0.167 |
| Utils | 0.825** | 0.778** | 0 |
| JGap | 0.733** | 0.786** | 0 |
| Collections | 0.549** | 0.776** | 0 |
| PMD | 0.638** | 0.655** | 0 |
| R System | 0.727** | 0.723** | 0 |
| JFreeChart | 0.632** | 0.694** | 0 |
| DocGen | 0.397** | 0.459** | 0 |
| Analysis | 0.391** | 0.486** | 0 |
Statistical analysis
(System-level coverage comparison)
Correlation: 0.769
<table>
<thead>
<tr>
<th>System</th>
<th>Static</th>
<th>Clover</th>
<th>Diff</th>
</tr>
</thead>
<tbody>
<tr>
<td>JPacman</td>
<td>88.06%</td>
<td>93.53%</td>
<td>-5.47%</td>
</tr>
<tr>
<td>Certification</td>
<td>92.82%</td>
<td>90.09%</td>
<td>2.73%</td>
</tr>
<tr>
<td>G System</td>
<td>89.61%</td>
<td>94.81%</td>
<td>-5.19%</td>
</tr>
<tr>
<td>Dom4j</td>
<td>57.40%</td>
<td>39.37%</td>
<td>18.03%</td>
</tr>
<tr>
<td>Utils</td>
<td>74.95%</td>
<td>70.47%</td>
<td>4.48%</td>
</tr>
<tr>
<td>JGap</td>
<td>70.51%</td>
<td>50.99%</td>
<td>19.52%</td>
</tr>
<tr>
<td>Collections</td>
<td>82.62%</td>
<td>78.39%</td>
<td>4.23%</td>
</tr>
<tr>
<td>PMD</td>
<td>80.10%</td>
<td>70.76%</td>
<td>9.34%</td>
</tr>
<tr>
<td>R System</td>
<td>65.10%</td>
<td>72.65%</td>
<td>-7.55%</td>
</tr>
<tr>
<td>JFreeChart</td>
<td>69.88%</td>
<td>61.55%</td>
<td>8.33%</td>
</tr>
<tr>
<td>DocGen</td>
<td>79.92%</td>
<td>69.08%</td>
<td>10.84%</td>
</tr>
<tr>
<td>Analysis</td>
<td>71.74%</td>
<td>88.23%</td>
<td>-16.49%</td>
</tr>
</tbody>
</table>
Conclusion
**Is it possible to determine test coverage without running tests?**
- Yes we can!!!
- Spearman: high correlation between static and clover coverage
- In general, static coverage estimates values close to those measured by Clover
**What trade-offs can be made between sophistication and accuracy?**
- Average absolute difference: 9%
- Class- and package-level coverage estimates need further improvement
**Implementation**
- SemmleCode: 92 LOC
- Java SIG Analysis: 256 LOC
Thank you!
Questions?
Tiago Alves
[email protected]
Joost Visser
[email protected]
Collaboration behavior enhancement in co-development networks
Shadi, M.
Chapter 1
Introduction
1.1 Motivation and Problem Definition
Organizations can hardly benefit from available business opportunities in the market due to the increasing change and challenges that are outside of their control. They often face demands that are beyond their own capabilities and resources. Numerous technological developments and breakthroughs are among the main causes of the vast dynamism in the production and services industry, affecting both the supply side of products and services and the customer demands and generally expected standards of living. Confronted with constant fluctuations, Small and Medium Enterprises (SMEs) find their survival at risk and reconsider the way in which they structure, coordinate, and manage their businesses and related processes. Furthermore, advances in ICT (Information and Communication Technologies) have enhanced the mobility and flexibility of organizations and have facilitated collaboration among them regardless of their geographic location. New forms of co-working are emerging that bundle the interests and abilities of a number of organizations to better address market demands. These new forms link organizations not only at the local, regional, and national level, but also at the global level. Therefore, traditional organization structures are gradually shifting towards networked organizations.
An effective networked organization structure, benefiting from advanced ICT, constitutes the so-called Collaborative Network (CN). The CN is defined in [22] as follows:
"A collaborative network (CN) consists of a variety of entities (e.g. organizations and people) that are largely autonomous, geographically distributed, and heterogeneous in their operating environment, culture, social capital and goals, but that collaborate together to better achieve common or compatible goals, and whose interactions are supported by computer network."
One emerging form of CN, applied to organizations in business and science areas, is the Virtual Organization (VO). VOs involve heterogeneous, autonomous, and geographically distributed partners, mostly consisting of SMEs. A VO is typically short-term and goal-oriented, and needs to form dynamically and fluently in order to address a specific emerging opportunity. VOs usually compete against large organizations in attracting customers. The definition of a virtual organization, adopted in our research, is as follows [19]: "Virtual Organization (VO) is a dynamic and temporary form of collaborative networks, comprising a number of independent organizations that wish to share their resources and skills to achieve its common mission/goal."
Research and practice have shown that effective support of the VO's requirements necessitates the pre-existence of a strategic alliance among organizations in the sector. The role of this alliance would be to provide the common base infrastructure and conditions needed to prepare organizations for their effective involvement in the dynamic creation and successful operation of potential future VOs. This alliance, the so-called Virtual organizations Breeding Environment (VBE), is a long-term CN, and is already manifested in practice in many industry sectors. The definition of a VBE, adopted in our research, follows [2]: "VO Breeding environment (VBE) represents an association of organizations and their related supporting institutions, adhering to a base long term cooperation agreement, and adoption of common operating principles and infrastructures, with the main goal of increasing their preparedness towards rapid configuration of temporary alliances for collaboration in potential Virtual Organizations."
To acquire more business opportunities and to qualify themselves to participate in larger projects, organizations get involved in VBE networks. Co-working among the members of a VBE is very different from co-working among the partners of a VO, and consequently so is the role of the VBE's administrator compared to that of the VO's coordinator, as described below.
In VBEs, cooperation is practiced among the members [3]. As a base task, the VBE increases the preparedness of organizations for collaboration in potential future VOs. Achieving this primarily concerns exchanging some information about their competencies and resources, adjusting some of their activities, and adopting some new infrastructure and standards, all in order to enhance their compatibility with each other and increase their chance of effective future collaboration. Therefore, the division of some minor tasks among the VBE members requires their cooperation. In most cases, the adjustment plans for VBE members are specified individually for each organization, rather than jointly for all VBE members, and organizations' compliance with the suggested adjustments is coordinated by the VBE administrator. Furthermore, the VBE administration provides to its members the needed information and a set of ICT tools to support them with: identifying market opportunities, planning the formation of a VO in response to an emerged opportunity in the market, selecting the most suitable set of partners to configure the planned VO, reaching agreements and negotiating among the selected VO partners as required for the formation of an effective VO, as well as assisting the VO coordinator (once the VO is established) with monitoring and observation of the VO partners' performance on the one hand, and with any required reconfiguration of the VO's consortium during its life time on the other [2] and [3].
Some of the main roles played by the VBE administration for the benefit of its members include: creating a common infrastructure for information sharing and exchange, and a common taxonomy and ontology for the concepts shared among members [4]; providing information about the competencies of each member in order to enhance their acquaintance with other members [11]; as well as defining a set of measurable criteria for assessment of organizations' trust level and periodically measuring the base trust level of each organization in the VBE, thus serving as the means for trust establishment among the VBE members, a main determinant for effective collaboration in potential future VOs [69].
However, in VOs collaboration is practiced, which requires that partners effectively share their information and resources and fully comply with the VO plans and schedules, thus performing well both in their individual responsibilities and in the tasks that are assigned to them jointly [2] and [20]. All partners' activities need to be fulfilled to achieve the VO's common goal. In other words, in VOs the activities of a group of partners are tightly coupled with each other. Partners must work and co-develop products closely together, which in turn boosts each of their individual capabilities. Furthermore, partners share any possible risks or losses facing the VO, as well as the profits and rewards, among themselves, following the agreement achieved at the time of VO creation and reflected in their internal negotiations. As such, VO collaboration provides both a competitive advantage for the involved SMEs and an increase in their survival factor. It is therefore of extreme importance that VO partnerships are carefully planned and that all measures are taken to increase the possibility of their success. While achieving success for such a level of close collaboration among independent organizations requires strong effort and devotion from all VO partners, some innovative assisting mechanisms and systems, in automated or semi-automated form, can be designed to enhance partners' collaboration, as addressed in our research.
Based on the literature, in spite of the fact that the number of created networked organizations is gradually increasing, the rate of overall success in alliances of organizations is nearly 50% [42], and according to [96] a great number of VOs either end up in failure or operate under very high risks. Various reasons are stated in the literature for such a high number of failures in VOs [8]. These can be categorized into three classes:
- Internal risks at organization level including: strikes, machine failure, management failure, etc.
- External risks at market level including: competition, change in demand, political situation, social atmosphere, etc.
- Network-related risks at collaboration level including: lack of trust, insufficient information sharing, clash of work culture, work overload, etc.
Our research primarily narrows down on addressing the last category of risks and how to improve the success rate of networked collaborations.
Several research works have been performed aiming to shed light on the causes that lead to success or failure of partnerships. The research performed in [80] takes into account and analyzes a large number of documents available on the web that report on organizations' cooperation. It extracts a number of factors concerning the cooperation's success and failure status, as presented in Figure 1.1. The findings of this research, resulting from analyses of web reports and documents, reveal that a large number of failures in partnerships among organizations are caused by their behavior. The identified factors were partly related to the individual organizations and partly to the collaboration as a whole. Our research approach mainly focuses on addressing virtual organizations' success and failure in terms of their observed and monitored behavior in the network.
Figure 1.1: Factors of success and failure of partnerships obtained from partnership-related web documents [80].
A main literature review on this topic goes beyond the above. It attributes, for instance, some of the VO's risks and failures to the general lack of a common existing infrastructure and collaboration procedure, which are also identified in research that characterizes the role of VBEs, e.g. by establishing a common base for sharing among them, or better VO configuration/creation [2] and [3]. But it also identifies other causes related to deficiencies in partner organizations' performance in the VO. Thirteen specific sources of risks and flaws in VOs are specified in [9] that can lead to failure, for instance in delivery time or customer satisfaction in relation to the cost and quality of the aimed product. These thirteen sources of problems include: **lack of trust among partners**, inadequacy of the collaboration agreement, heterogeneity of involved partners, ontology/taxonomy differences, structure and configuration of the VO, **inefficiency in communication between partners**, cultural differences, **work overload of partners** (e.g. bidding for several Virtual Organizations at the same time), inadequacy in information sharing, lack of top management's commitment, deficiency in partner selection, geographical distance among partners and their location, and **unawareness of potential failures and their severity in the VO**.
Several of the above risks are already addressed in other research, for instance exemplified above in [2], [11], and [69]. However, four of the above-mentioned risk factors in this literature review (shown in bold) are related to the behavioral factors of partners, and are addressed in our research. We also address supporting the last key challenge above, on the need to identify and, when possible, predict the risks in the VO. Addressing this last source of risk and failure requires monitoring and analyzing the VO's detailed activities during its operational stage, in order to diagnose and forecast potential failures in running its plans and schedules. Making such failures transparent in the VO alarms its decision makers to potentially intervene and take remedial actions. In turn this will enter the VO into its evolution phase, where the original plan and schedule of activities may need to be replaced with new plans and schedules to be negotiated and agreed with the other partners. Due to their shorter life span and their being goal-oriented, it is very important to identify the weak or weakest points in the planning and scheduling of VO activities, and to measure potential risks of VO failures due to the monitored behavioral perspective of the partners and the VO as a whole, as addressed in our research.
We briefly mentioned above two specific stages of the life cycle of VOs, the operation and the evolution stages, which are addressed further below. The VO's complete life cycle consists of four stages: creation, operation, evolution and dissolution [20]. While there are challenges in all stages of the life cycle of VOs, our research focuses on parts of the challenges of its operation and evolution stages. If the VO mimics the operation of one large organization in the real market, then its partners need to collaborate closely and effectively, as if they represent different departments within a large organization. However, it shall be noted that the VO partners are heterogeneous, autonomous and geographically dispersed organizations, who agree to temporarily contribute a part of their resources and skills to the VO, as required for achieving their common VO goal. Also note that while these independent organizations themselves individually evolve in time, the VO is also dynamic in nature and shall evolve its goals as well as its activity plans and schedules, etc., in order to adapt itself to changes that occur either internally or externally to its environment.
In other words, VO evolves in time to cope with changes in the market and society, and as such, even some of its goals may need to evolve. Additionally, due
to the fact that VOs depend on a number of independent entities, constituting the VO partners, it is possible that even its configuration of partners may need to evolve in time. The customer is however typically not concerned with the needed VO dynamism. Usually, from the customer's point of view, the contract commits to deliver some final products with all the specificities of those products, including delivery date, location, quality, etc., and in that sense there is no difference between a VO producing these required products and a single company. Please note that the contract between a customer and a large company addresses neither the names of its internal departments which will perform different tasks and sub-tasks, nor the planning and scheduling of their activities. Similarly, in some VOs, the VO partners are not even distinguished within the contract. This is usually the case when one representative organization signs the contract with the customer, on behalf of the entire VO consortium. As such, even the names of the VO partners may not be mentioned in the contract signed during the VO's creation stage. Later on, during its operation stage, similar to any large organization that has a contract with a customer, the virtual organization acts as a dynamic entity. In other words, its daily activities are decided internal to the VO, specifying each partner's responsibilities, and are monitored by the VO coordinator.
Considering the dynamic complexities of VOs, in some other forms of VOs, besides the product specificities, the customer may wish to also know about the VO partners during its creation phase. Usually in such VO contracts, the main goals and sub-goals as well as the high level planning and scheduling of its work packages (sub-projects) together with a number of their coarse-grained tasks, can be predefined in the contract that the VO signs during its creation stage with the customer. It is however known to all parties that VO is dynamic and as long as it delivers its final agreed product, its daily activities are internal and the customer will not interfere. Therefore, similar to the case above, daily activities and other details of the VO are only gradually, systematically, and dynamically extended and defined during its operation phase, under the supervision and decision making of the VO coordinator. However, VO is a federation of independent organizations. So, daily activities are always advised by the coordinator and agreed by the partners that are suggested by the VO coordinator to take those tasks.
Capturing and supporting the required dynamism during the VO operation stage can benefit from a VO supervisory framework, to assist on the one hand with continuously formalizing and reflecting the detailed agreement on tasks to be executed by partners, as well as the evolved situations at the VO, and on the other hand with recording partners' compliance, successful completion and performance of their assigned, planned, and scheduled tasks. In other words, while the customer of the final product of the VO is not at all interested in knowing details of which organization did what, when, and how within the VO consortium, this information is quite critical to the VO's coordinator and the VO partners. As such, this information shall be carefully recorded, logged and preserved internal to the VO.
On the one hand, it shall provide a reference point on assigned/agreed responsibilities for the VO consortium partners. On the other hand, it shall be used as a main tool in the hand of the VO coordinator for monitoring, supervision, alignment, potential intervention and even decision making on needed evolution of the VO during its operation stage. Furthermore, the collection of past logs of each partner’s activities in the VO, combined with that partner’s past records stored within the VBE provides a good source for assessment of each organization and can be used for their accountability, measuring their trustworthiness, selecting suitable partners for task reassignment, and even for potential fair distribution of some profits and losses in the VOs, etc.
As a building block for establishing such a supervisory framework and using it to measure potential risks of VO failure due to the monitored behavioral perspectives of the partners and the VO as a whole, the regulatory role of the VO coordinator must be concisely specified. In this role, a set of working principles and regulations must be defined by the VO coordinator for partners' behavior, regarding both their responsibilities and their rights, which will then be used to supervise the VO. The principles limit and guide both the collective and the individual behavior of partners. There is a need for the formation of well-founded behavioral models in VOs, which can then also be applied to mechanisms for effective selection of VO partners, aiming to prevent future failures in VOs. Moreover, such a framework facilitates introducing new mechanisms, e.g. a reward system for good performance, which many studies show encourages behavior enhancement in collaborative networks. Motivation and rewarding models and mechanisms increase the sustainability of the networks, through fair and transparent distribution of some benefits, and tap into the partners' expectation and willingness to contribute more and collaborate more effectively.
1.2 Research Questions
The VO coordinator can play a vital role in supporting successful collaboration of the VO. We investigate the above addressed supervisory framework and aim to develop a supervisory assisting tool to support VO coordinators with increasing the resilience and success of their VOs, through monitoring and analysis of partners’ behavior and diagnosing and alarming about the potential points of failure in the VO.
The main related research on this topic is addressed in [69], [96], [8], [55] and [25]. However, joint responsibilities and the VO's dynamism, which are of great importance to building our targeted supervisory assisting tool, have not been sufficiently addressed. Moreover, while the related research in the area mainly focuses on the VO creation phase, the main focus of our research is to support the operation and evolution stages of VOs, while briefly addressing how some of our introduced functionality can also be applied to the creation stage, as exemplified in service-oriented VOs. To build our proposed system, we need to answer the following fundamental questions.
RQ1: How to model and assess the work-related collaborative behavior of VO partners?
This research question is primarily related to two specific measurements at each VO partner: (i) individual collaborative-behavior of the VO partner, as incrementally measured for it and recorded in the VBE, and (ii) current collaborative-behavior of the organization in the VO. It should be noticed that, based on the definition of VOs and VBEs, adopted from [19] and [2], the VO partners refer to organizations. This research question is addressed in Chapters 2, 3, 4 and 5 of the thesis, and it includes the following two sub-questions:
S1-RQ1: How to characterize and model past collaborative-behavior of members in the VBE?
We present a mechanism for identifying and comparing the individual collaborative-behavior of organizations, based on the causal relationships that we have defined among organizations' behavior traits and some known factors in the VBE. Chapter 2 explains how information related to organizations' behavior traits is collected from their involvement in past VOs. This information is used for measuring and comparing an organization's behavior against other organizations.
S2-RQ1: How to monitor and measure current collaborative-behavior of partners in a VO?
We introduce four specific kinds of behavioral norms, including: (i) Socio-regulatory norms, (ii) Co-working norms, (iii) Committing norms, and (iv) Controlling norms, as addressed in Chapter 3. We formalize these norms in order both to deal with the VO's dynamism and its evolution, and to deal with the relationships among partners that are involved in a joint responsibility. Moreover, we develop new mechanisms for monitoring the organizations' behaviors against these norms, to measure their degrees of norm obedience, as discussed in Chapter 4. A mechanism is proposed in Chapter 5 to evaluate the trust level of each VO partner during the VO operation phase, which has two advantages: (i) measuring the lack of trustworthiness of each VO partner, to predict the risk of failure in VO goals, and (ii) more effective service selection, as exemplified for service selection in Service Oriented Architecture (SOA)-based VOs.
RQ2: How does the partner’s work-related behavior influence the achievement of the VO goals?
This research question is related to the specification of behavior-related risk factors aiming at the risk prediction in VOs. This question is mainly addressed in Chapter 6, and includes the following two sub-questions:
S1-RQ2: What are the main behavior-related risk factors in VOs?
We introduce three risk factors related to the VO partners, i.e. lack of trustworthiness, work overload, and failure in communication. The lack of trustworthiness of a partner is measured based on the results of modeling and monitoring the partners' behavior, which are discussed in Chapters 4 and 5 of the thesis. The other factors are mostly addressed in Chapter 6.
S2-RQ2: How to predict risk of failure in achieving VO goals, considering behavior-related risk factors?
We introduce a new mechanism to predict the risk of failure in the VO goals and the VO tasks. This mechanism uses the probabilities of the three risk factors mentioned above under S1-RQ2, addressed in Chapter 6, as well as information related to the inter-dependencies among different partners' responsibilities, while these responsibilities are themselves dynamically and gradually specified during the VO operation phase.
RQ3: How to enhance collaboration success in VOs, in relation to partners’ work-related collaborative behavior?
This main question is primarily related to the prevention of failure risk, as well as promotion and enhancement of collaboration, which results in improving the success rate in VOs. It is mainly addressed in Chapter 6. This research question includes the following two sub-questions:
S1-RQ3: How to prevent potential work-related VO failure, through task reassignment?
Task reassignment is one of the solutions that the VO coordinator can consider to prevent a potential failure in fulfilling the VO goals. We introduce a new approach for selecting the best-fit member to which a risky task is reassigned, as addressed in Chapter 6. The proposed approach aims to bring new insights on how to increase the chances of partnership success.
S2-RQ3: How to promote work-related collaboration behavior among VO partners?
We introduce the concept of indirect rewards, which can be distributed among organizations based on their collaborative work-related behavior during the VO operation phase. It can encourage partners to perform better and more collaboratively, and it is aimed to enhance and affect their future behaviors. This is also discussed in Chapter 6.
Obtaining satisfactory answers to these questions has resulted in developing our VO supervisory assisting tool (VOSAT) for monitoring and controlling organizations' behavior, which enables the VO coordinator both to predict and prevent failures, and to promote collaborative behavior among the partners.
1.3 Research method
Our research method consists of the following phases:
(1) Establish the motivation. This phase aims at indicating the importance of modeling and monitoring the behavior of VO partners in increasing the resilience and success of VOs. Two specific measurements for each VO partner are identified: (a) the established past collaborative-behavior of the VO partner, measured in the VBE, and (b) the current collaborative-behavior of the VO partner in the VO.
(2) Review the related works. This phase aims at providing enough information for the approach pursued in this thesis to monitor the VO partners during the VO operation phase, and to diagnose and prevent the potential causes of failures, related to partners' collaborative behavior, in fulfilling the VO goals.
(3) Establish the research assumptions. The first assumption of this research is that the collaborative behavior of organizations involved in a VBE can be measured and compared, based on a number of organizational traits. The reason behind this assumption is that when organizations collaborate with each other to achieve common and compatible objectives, they may show different behaviors, which are usually repeated over time. As a result of the repetition of these behaviors, some behavioral patterns are formed, which are in turn related to a number of certain traits [79]. In other words, certain actions by an organization in collaboration situations are evidence of certain behavioral patterns. Therefore, we define the personality of an organization as being composed of a series of traits that highlight the organization's behavior. Applying the organization's personality, the organization's behavior can therefore be predicted, through our suggested mappings between the traits and behavior.
The second assumption is that, similarly to humans, the behaviors of organizations involved in VOs are also constrained by norms. Considering the VO's dynamism and its evolution during its operation stage, there are certain obligations, prohibitions and constraints that limit the behavior of the organizations involved in VOs, so it is assumed that we can monitor the organizations' behavior through checking the state of these norms.
The third assumption is that the failure risks of the VO goals and of the planned collective responsibilities in VOs depend primarily on the trust levels, communication rates and work overloads, which are measured in our approach for the involved partners, although some other risk sources are also identified in the literature [8]. The main reason behind this assumption is that collaboration risk assessment in our approach is performed using the concept of the organization's behavior. For example, the trust level of organizations, as a main risk factor, is evaluated from a behavioral perspective. This means that, for instance, if an organization's behavior is considered positive regarding the collaboration's norms, and its individual collaborative behavior is also high in comparison to other organizations, then its level of trust increases, and from this aspect this organization is not predicted to put the VO at risk.
(4) Design the assisting supervision framework. This phase aims at developing the assisting supervision framework to support VO coordinators with increasing the resilience and success of their VOs. In this framework, to measure the organizations' behavior within the VBE, four specific quality-behavioral dimensions are considered and modeled through a set of traits. A quantitative causal approach is then defined to inter-relate some known factors from the environment with the traits of these four behavioral dimensions. The results are then used to
measure each organization's level of Individual Collaborative Behavior (ICB) in the VBE, in comparison to all others. Formulas are derived from the causal relationships for computing the collaborative behavior degree of each organization. This measure constitutes one criterion in our proposed approach for evaluating the collaborative trustworthiness of the organization, as needed during the VO operation phase. To address the collaborative behavior of the VO partner in the current VO, four specific kinds of behavioral norms are introduced, i.e. (i) Socio-regulatory norms, (ii) Co-working norms, (iii) Committing norms, and (iv) Controlling norms; our proposed model is therefore called the S3C model. This model characterizes each of the four norms for organizations' behavior in the VO. Our approach to this model introduces new formalizations and mechanisms for organizations to make promises (for performing individual sub-tasks) and/or joint promises (for performing joint tasks), so that the VO partners commit themselves in a bottom-up manner to perform tasks, as opposed to the VO coordinator assigning tasks to them in a top-down manner. The bottom-up manner is more in line with the collaborative nature of VOs, which resembles a federated partnership among organizations. Furthermore, based on the results of monitoring the socio-regulatory norms, co-working norms, and committing norms, as well as the value of the partner's ICB, the trust level of the partner is specified. Using the controlling norms as well as the responsibility inter-dependencies, and applying a Bayesian network, the probability of failure of each planned sub-task, task and sub-goal, as well as of the general VO goal, can be concisely measured. Moreover, some decision-making suggestions are provided for intervention toward failure prevention and collaboration promotion in VOs, thus enhancing the success rate of VOs. A summary of our contributions in this thesis is presented in Table 1.1.
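As a rough illustration only (not the thesis' actual Bayesian network, the S3C model, or the VOSAT implementation), the following toy Java sketch shows how behavior-related factors of partners could be combined into a task failure probability under a simplifying independence (noisy-OR) assumption; all names and numbers are hypothetical.

```java
// Toy sketch: combine behaviour-related risk factors of the partners assigned
// to a joint task into a failure probability, assuming independent factors.
class TaskRiskSketch {
    // Probability that a single partner causes the task to fail, derived from
    // hypothetical lack-of-trustworthiness, work-overload and
    // communication-failure probabilities.
    static double partnerRisk(double lackOfTrust, double overload, double commFailure) {
        return 1.0 - (1.0 - lackOfTrust) * (1.0 - overload) * (1.0 - commFailure);
    }

    // A jointly performed task fails if any of its partners fails
    // (again a simplifying independence assumption).
    static double taskRisk(double[] partnerRisks) {
        double allOk = 1.0;
        for (double r : partnerRisks) allOk *= (1.0 - r);
        return 1.0 - allOk;
    }

    public static void main(String[] args) {
        double p1 = partnerRisk(0.10, 0.20, 0.05); // hypothetical values
        double p2 = partnerRisk(0.05, 0.40, 0.10);
        System.out.printf("joint task failure risk: %.2f%n", taskRisk(new double[]{p1, p2}));
    }
}
```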
(5) Tool Development. This phase aims at addressing the development of the VO assisting tool, the VOSAT. The development of this tool applied the behavior models and the mechanisms for behavior monitoring, diagnosing the risks, reassignment of risky tasks, and support of rewards distribution, which were designed in phase 4. Our VOSAT tool is developed using 2OPL (Organization Oriented Programming Language), while also extending its environment with the formalization of the notions of promises, joint promises and fuzzy norms.
(6) Model validation. Since applying the model to a sufficient number of cases needs more time, and since there is no similar competitive system introduced for VOs, statistical validation approaches cannot be applied here. Therefore, other approaches are introduced and used for this purpose. The proposed behavior model of organizations in VBEs, addressed in Chapter 2, is validated by more than fifty members of the SOCOLNET community of experts in collaborative networks (https://sites.google.com/a/uninova.pt/socolnet/). The mechanism for monitoring VO partners' behavior and their trust evaluation is applied to a Service Oriented Architecture (SOA)-based VO,
as addressed in Chapter 5. Furthermore, we have applied our developed tool to some examples, such as a project aiming at producing canned tomato paste, briefly addressed in Chapter 4, a designed case study within the framework of an EU-funded project, and a project aiming at investigating the impact of certain Scientific Research on Agriculture (SRA), as briefly addressed in Chapter 6.
Table 1.1: Summary of thesis contributions and their ranks (** novel in the area, * enhancing the state of the art).
1.4 Thesis Structure
This thesis addresses mechanisms that are used to support the supervision of VOs. The structure of the thesis is as follows.
In Chapter 2, we discuss the causal relationships among the organizations' personality, behavioral dimensions, and traits. We adopt some applicable ideas from existing personality models, such as [75], to evaluate the organizations' behavior. Several new measurable traits are then introduced, and their causal relationships with behavioral dimensions are defined. Furthermore, the system dynamics method is applied to formulate measurable organizations' behavior.
Chapter 3 introduces four specific behavioral norms, including: Socio-regulatory norms, Co-working norms, Committing norms, and Controlling norms. Then the formalizations of the related concepts are discussed, based on which a new approach is proposed to monitor organizations' behavior against these norms. Our developed supervisory tool, called VOSAT (VO Supervisory Assisting Tool), is briefly introduced in this chapter. It consists of the following five components:
- Norm Monitoring Component (NMC)
- Norm Abidance Component (NAC)
- Trust Evaluating Component (TEC)
- Risk Predicting Component (RPC)
- Partner Selecting Component (PSC)
Chapter 4 focuses on addressing the first two components. In other words, Chapter 4 addresses how to monitor the partners’ behavior against the four specific norms defined in Chapter 3. Moreover, the obedience degrees for socio-regulatory norms, co-working norms and committing norms are measured in Chapter 4.
Chapter 5 presents a mechanism for evaluating the trust level of VO partners, using the measures discussed in Chapter 2 and Chapter 4. A case study is introduced for partners’ behavior monitoring and trust evaluation, which applies our approach for most-fit partner selection to the software service industry, and a specific case of a Service Oriented Architecture (SOA)-based VO. We have also extended our approach here by introducing a new competency-model used for most-fit service selection in this environment. This model leads to an effective service discovery and consequently assists with integrated service composition in VOs. New meta-data to describe and model the services is introduced.
The main focus of Chapter 6 is on enhancing resilience of the VO, for which a set of mechanisms are proposed. At first, the risk of failure in fulfillment of assigned tasks is predicted. When risky tasks are specified then it may be needed to reassign them; therefore, the best potential partners to which the risky tasks can be delegated are selected, based on findings of our proposed approach. These task reassignments can prevent risk of failure in fulfilling the VO goals. Another important issue addressed in this chapter is how to promote stronger collaboration, which is proposed to be guided through transparent and fair rewards distribution.
Finally, in Chapter 7, we conclude the thesis, address how we have answered the defined research questions, and present our assessment and validation of our approach and the developed system. Some ongoing and future work is also mentioned in this chapter.
The material presented in Chapter 2 to Chapter 6 of this thesis has been published, as indicated below. Co-authorship and roles:
- Addressing behavior in collaborative networks [80],
- Presented in Chapter 2,
Chapter 1. Introduction
– Awarded as the best paper in *Adaptation and Value Creating Collaborative Networks, 12th IFIP WG 5.5 Working Conference on Virtual Enterprises*, 2011,
– Mahdieh Shadi: All aspects of the paper,
– Hamideh Afsarmanesh: Guidance and technical advice
• Behavioral norms in virtual organizations [91],
– Presented in Chapter 3,
– Published in *Collaborative Systems for Smart Networked Environments, 15th IFIP WG 5.5 Working Conference on Virtual Enterprises*, 2014,
– Mahdieh Shadi: All aspects of the paper,
– Hamideh Afsarmanesh: Guidance and technical advice
• Behavior modeling in virtual organizations [90],
– Presented in Chapter 3,
– Published in *Proc. Advanced Information Networking and Applications Workshops (WAINA)*, 2013,
– Mahdieh Shadi: All aspects of the paper,
– Hamideh Afsarmanesh: Guidance and technical advice
• Agent behavior monitoring in virtual organization [92],
– Presented in Chapter 4,
– Published in *Proc. Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE)*, 2013,
– Mahdieh Shadi: All aspects of the paper,
– Hamideh Afsarmanesh: Guidance and technical advice,
– Mehdi Dastani: Guidance and technical advice
• VO Supervisory Assisting Tool (VOSAT),
– Presented in Chapter 4,
– Submitted to *International Journal of Networking and Virtual Organisations*,
– Mahdieh Shadi: All aspects of the paper,
– Hamideh Afsarmanesh: Guidance and technical advice,
– Mehdi Dastani: Guidance and technical advice
• Semi-automated software service integration in virtual organisations [6],
– Presented in Chapter 5,
– Published in *Enterprise Information Systems*, 2015,
– Mahdieh Shadi: The proposed competency model and the approach for service selection (non-functional service discovery),
– Mahdi Sargolzaei: The functional service discovery and service integration,
– Hamideh Afsarmanesh: Guidance and technical advice
• A framework for automated service composition in collaborative networks [5],
– Presented in Chapter 5,
– Published in *Collaborative Networks in the Internet of Services, 13th IFIP WG 5.5 Working Conference on Virtual Enterprises*, 2012,
– Mahdieh Shadi: The proposed competency model and the approach for service selection (non-functional service discovery),
– Mahdi Sargolzaei: The functional service discovery and service integration,
– Hamideh Afsarmanesh: Guidance and technical advice
• Bayesian Network-Based Risk Prediction in Virtual Organizations [7],
– Presented in Chapter 6,
– Published in *Risks and Resilience of Collaborative Networks, 16th IFIP WG 5.5 Working Conference on Virtual Enterprises*, 2013,
– Mahdieh Shadi: All aspects of the paper,
– Hamideh Afsarmanesh: Guidance and technical advice
• Task Failure and Risk Analysis in Virtual Organizations,
– Presented in Chapter 6,
– Submitted to *International Journal of Cooperative Information Systems*,
– Mahdieh Shadi: All aspects of the paper,
– Hamideh Afsarmanesh: Guidance and technical advice
Generating Test Data for Software Structural Testing using Particle Swarm Optimization
Dinh Ngoc Thi*
VNU University of Engineering and Technology, 144 Xuan Thuy, Cau Giay, Hanoi, Vietnam
Abstract
Search-based test data generation is a very popular domain in the field of automatic test data generation. However, existing search-based test data generators suffer from some problems. By combining static program analysis and search-based testing, our proposed approach overcomes some of these problems. Considering the automatic ability and the path coverage as the test adequacy criterion, this paper proposes using Particle Swarm Optimization, an alternative search technique, for automating the generation of test data for evolutionary structural testing. Experimental results demonstrate that our test data generator can generate suitable test data with higher path coverage than the previous one.
Received 26 Jun 2017; Revised 28 Nov 2017; Accepted 20 Dec 2017
Keywords: Automatic test data generation, search-based software testing, Particle Swarm Optimization.
1. Introduction
Software is a mandatory part of today’s life and has become more and more important in the current information society. However, its failure may lead to significant economic loss or threats to safety. As a consequence, software quality has become a top concern today. Among the methods of software quality assurance, software testing has been proven to be one of the effective approaches to ensure and improve software quality over the past three decades. However, as most software testing is done manually, the workforce and cost required are accordingly high [1]. In general, about 50 percent of the workforce and cost in the software development process is spent on software testing [2]. For these reasons, automated software testing has been recognized as an efficient and necessary method to reduce these efforts and costs.
Automated structural test data generation has become a research topic attracting much interest in automated software testing because it enhances efficiency while considerably reducing the costs of software testing. In this paper, we focus on path coverage test data generation, considering that almost all structural test data generation problems can be transformed into the path coverage test data generation problem. Moreover, Kernighan and Plauger [3] pointed out that path coverage test data generation can reveal more than 65 percent of the bugs in a given program under test (PUT).
Although path coverage test data generation remains a major unsolved problem [20], various approaches have been proposed by researchers. These approaches can be classified into two types: constraint-based test data generation (CBTDG) and search-based test data generation (SBTDG).
Symbolic execution (SE) is the state of the art among CBTDG approaches [21]. Even though there have been significant achievements, SE still faces difficulties in handling infinite loops, arrays, procedure calls and pointer references in a PUT [22].
SBTDG approaches include random testing, local search [10], and evolutionary methods [23, 24, 25]. Because the values of the input variables are assigned while the program executes, the problems encountered in CBTDG approaches can be avoided in SBTDG.
Being an automated method for searching a predefined space, the genetic algorithm (GA) has been applied to test data generation since 1992 [26]. Michael et al. [22], Levin and Yehudai [25], and Joachim et al. [27] indicated that GA outperforms other SBTDG methods, e.g., local search or random testing. However, even though GA-based generators can produce test data with good fault-revealing ability [4, 5], they fail to produce it quickly due to their slow evolutionary speed. Recently, as a swarm intelligence technique, Particle Swarm Optimization (PSO) [6, 7, 8] has become a hot research topic in the area of intelligent computing. Its significant features are its simplicity and fast convergence speed.
Even so, there are still certain limitations in current research on PSO usage in test data generation. For example, consider one PUT which was used in Mao’s paper [9] as below:
```c
int getDayNum(int year, int month) {
    int maxDay = 0;
    if (month >= 1 && month <= 12) {                // bch1: branch 1
        if (month == 2) {                           // bch2: branch 2
            if (year % 400 == 0 ||
                (year % 4 == 0 && year % 100 != 0)) // bch3: branch 3
                maxDay = 29;
            else                                    // bch4: branch 4
                maxDay = 28;
        }
        else if (month == 4 || month == 6 ||
                 month == 9 || month == 11)         // bch5: branch 5
            maxDay = 30;
        else                                        // bch6: branch 6
            maxDay = 31;
    }
    else                                            // bch7: branch 7
        maxDay = -1;
    return maxDay;
}
```
Regarding this PUT, Mao [9] used PSO to generate test data by building one single fitness function that combined the Korel formula [10] with branch weights. This proposal has two weaknesses: the branch weight function is constructed entirely manually, and for some PUTs the generated test data cannot cover all test paths. To overcome these weaknesses, we still use PSO to generate test data for the given PUT. However, unlike Mao, our approach assigns one fitness function to each test path. We then run multiple PSO instances concurrently, one per fitness function, so that each instance searches for the test data that covers its assigned test path.
The rest of this paper is organized as follows: Section 2 gives some theoretical background on fitness function and particle swarm optimization algorithm. Section 3 summarizes some related works, and Section 4 presents the proposed approach in detail. Section 5 shows the experimental results and discussions. Section 6 concludes the paper.
2. Background
This section describes the theoretical background being used in our proposed approach.
2.1. Fitness function
When using PSO, path coverage test data generation is transformed into an optimization problem. To cover a test path during execution, we must find values for the input variables that satisfy the related branch predicates. The usual way is to use Korel’s branch distance function [10]. As a result, generating test data for a desired branch is transformed into searching for input values that optimize the return value of its Korel function. Table 1 gives some common formulas used in branch distance functions. To generate test data for a desired path P, we define a fitness function F(P) as the sum of all related branch distance functions. For this reason, generating path coverage test data can be converted into searching for input values that minimize the return value of the function F(P).
<table>
<thead>
<tr>
<th>Relational predicate</th>
<th>Branch distance function f(bch)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boolean ~a</td>
<td>if true then 0 else k</td>
</tr>
<tr>
<td>a = b</td>
<td>if abs(a − b) = 0 then 0 else abs(a − b) + k</td>
</tr>
<tr>
<td>a ≠ b</td>
<td>if abs(a − b) ≠ 0 then 0 else k</td>
</tr>
<tr>
<td>a ≥ b</td>
<td>if a − b ≥ 0 then 0 else abs(a − b) + k</td>
</tr>
<tr>
<td>a < b</td>
<td>if a − b < 0 then 0 else abs(a − b) + k</td>
</tr>
<tr>
<td>a or b</td>
<td>if a or b is true then 0 else min(f(a), f(b))</td>
</tr>
</tbody>
</table>
Similar to Mao [9], we set the punishment factor k = 0.1. Based on these formulas, we develop a function that calculates the branch distance at each branch predicate, which will be explained in the next section.
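To make the use of these formulas concrete, the following is a minimal, illustrative Java sketch (ours, not the authors’ code) of a Korel-style branch distance and a path fitness F(P) computed as the sum of the branch distances along a path; it assumes the standard Korel combination rules (sum for “and”, minimum for “or”) and k = 0.1, and all class and method names are our own.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.ToDoubleFunction;

// Illustrative sketch: Korel-style branch distances (Table 1) and the path
// fitness F(P) as the sum of the branch distances of the path's predicates.
public class PathFitness {
    static final double K = 0.1;                       // punishment factor

    // Branch distance for "a relOp b"; returns 0 when the desired relation holds.
    static double dist(double a, String relOp, double b) {
        double d = a - b;
        switch (relOp) {
            case "==": return d == 0 ? 0 : Math.abs(d) + K;
            case "!=": return d != 0 ? 0 : K;
            case "<":  return d <  0 ? 0 : Math.abs(d) + K;
            case "<=": return d <= 0 ? 0 : Math.abs(d) + K;
            case ">":  return d >  0 ? 0 : Math.abs(d) + K;
            case ">=": return d >= 0 ? 0 : Math.abs(d) + K;
            default: throw new IllegalArgumentException(relOp);
        }
    }

    // F(P): sum of the branch distances of all branch conditions on the path.
    static double pathFitness(double[] input, List<ToDoubleFunction<double[]>> branchDistances) {
        double f = 0;
        for (ToDoubleFunction<double[]> bd : branchDistances) {
            f += bd.applyAsDouble(input);
        }
        return f;
    }

    public static void main(String[] args) {
        // Example: the getDayNum path for February in a non-leap year,
        // with input = {year, month}.
        List<ToDoubleFunction<double[]>> path = Arrays.asList(
            in -> dist(in[1], ">=", 1) + dist(in[1], "<=", 12),     // (month in 1..12) taken TRUE
            in -> dist(in[1], "==", 2),                             // (month == 2) taken TRUE
            in -> dist(in[0] % 400, "!=", 0)                        // leap-year condition taken FALSE
                  + Math.min(dist(in[0] % 4, "!=", 0), dist(in[0] % 100, "==", 0)));
        System.out.println(pathFitness(new double[]{1900, 2}, path)); // prints 0.0: path covered
    }
}
```

With the input (year = 1900, month = 2) the fitness is 0, so this input covers the path; any other input yields a positive value that the search can then minimize.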
2.2. Particle Swarm Optimization
Particle Swarm Optimization (PSO) was first introduced in 1995 by Kennedy and Eberhart [11], and is now widely applied in optimization problems. Compared to other optimal search algorithms such as GA or SA, PSO has the strength of faster convergent speed and easier coding. PSO is initialized with a group of random particles (initial solutions) and then it searches for optima by updating generations. In every iteration, each particle is updated by the following two "best" values. The first one is the best solution (fitness) achieved so far (the fitness value is also stored). This value is called pbest. Another "best" value tracked by the particle swarm optimizer is the best value, obtained so far by any particle in the population. This best value is a global best and called gbest.
After finding the two best values, the particle updates its velocity and position with the following equations (1) and (2).
\[
v[\,] = v[\,] + c1 \times \text{rand()} \times (\text{pbest}[\,] - \text{present}[\,]) + c2 \times \text{rand()} \times (\text{gbest}[\,] - \text{present}[\,]) \quad (1)
\]
\[
\text{present}[\,] = \text{present}[\,] + v[\,] \quad (2)
\]
v[] is the particle velocity, present[] is the current particle (current solution). pbest[] and gbest[] are defined as stated before. rand() is a random number between (0, 1). c1, c2 are learning factors, usually c1 = c2 = 2.
The PSO algorithm is described by pseudo code as shown below:
Algorithm 1: Particle Swarm Optimization (PSO)
Input: F: Fitness function
Output: gBest: The best solution
1: for each particle
2: initialize particle
3: end for
4: do
5: for each particle
6: calculate fitness value
7: if the fitness value is better than the best fitness value (pBest) in history then
8: set current value as the new pBest
9: end if
10: end for
11: choose the particle with the best fitness value of all the particles as the gBest
12: for each particle
13: calculate particle velocity according equation (1)
14: update particle position according equation (2)
15: end for
16: while maximum iterations or minimum criteria is not attained
Particles' velocities on each dimension are clamped to a maximum velocity $V_{\text{max}}$, which is an input parameter specified by the user.
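For illustration only (this is not the authors’ implementation), the following Java sketch realizes Algorithm 1 for a minimization problem: it applies the velocity update of equation (1), the position update of equation (2), clamps velocities to Vmax, and stops early once a zero-fitness solution is found. All parameter names and bounds are our assumptions.

```java
import java.util.Random;
import java.util.function.ToDoubleFunction;

// Illustrative sketch of the basic PSO of Algorithm 1 (minimization).
public class BasicPso {
    public static double[] optimize(ToDoubleFunction<double[]> fitness,
                                    int dim, int swarmSize, int maxIter,
                                    double lo, double hi, double vMax) {
        Random rnd = new Random();
        double c1 = 2.0, c2 = 2.0;                    // learning factors
        double[][] x = new double[swarmSize][dim];    // present[]
        double[][] v = new double[swarmSize][dim];    // v[]
        double[][] pBest = new double[swarmSize][dim];
        double[] pBestFit = new double[swarmSize];
        double[] gBest = new double[dim];
        double gBestFit = Double.MAX_VALUE;

        for (int i = 0; i < swarmSize; i++) {         // random initialization
            for (int d = 0; d < dim; d++) x[i][d] = lo + rnd.nextDouble() * (hi - lo);
            pBest[i] = x[i].clone();
            pBestFit[i] = fitness.applyAsDouble(x[i]);
            if (pBestFit[i] < gBestFit) { gBestFit = pBestFit[i]; gBest = x[i].clone(); }
        }
        for (int it = 0; it < maxIter && gBestFit > 0; it++) {   // stop when fitness reaches 0
            for (int i = 0; i < swarmSize; i++) {
                for (int d = 0; d < dim; d++) {
                    v[i][d] += c1 * rnd.nextDouble() * (pBest[i][d] - x[i][d])   // equation (1)
                             + c2 * rnd.nextDouble() * (gBest[d] - x[i][d]);
                    v[i][d] = Math.max(-vMax, Math.min(vMax, v[i][d]));          // clamp to Vmax
                    x[i][d] += v[i][d];                                          // equation (2)
                }
                double f = fitness.applyAsDouble(x[i]);
                if (f < pBestFit[i]) { pBestFit[i] = f; pBest[i] = x[i].clone(); }
                if (f < gBestFit)    { gBestFit = f;    gBest = x[i].clone(); }
            }
        }
        return gBest;                                 // best solution found so far
    }
}
```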
3. Related work
From the 1990s, genetic algorithm (GA) has been adopted to generate test data. Jones et al. [13] presented a GA-based branch coverage test data generator. Their fitness function made use of weighted Hamming distance to branch predicate values. They used unrolled control flow graph of a test program such that it is acyclic. Six small programs were used to test the approach. In recent years, Harman and McMinn [14] performed empirical study on GA-based test data generation for large-scale programs, and validated its effectiveness over other meta-heuristic search algorithms.
Although GA is a classical search algorithm, its convergence speed is not very significant. PSO algorithm, which simulates to birds flocking around food sources, was invented by Kennedy and Eberhart [11] in 1995, and was originally just an algorithm used for optimization problems. However with the advantages of faster convergence speed and easier construction than other optimization algorithms, it was promptly adopted as a meta-heuristic search algorithm in the automatic test data generation problem.
The literature on automatic test data generation using PSO started with Windisch et al. [6] in 2007. They extended PSO into comprehensive learning particle swarm optimization (CL-PSO) to generate structural test data, but some experiments showed that the convergence speed of CL-PSO could be worse than that of the basic PSO.
Jia et al. [8] created an automatic test data generating tool named particle swarm optimization data generation tool (PSODGT). The PSODGT is characterized by two features. First, the PSODGT adopts the condition decision coverage as the criterion of software testing, aiming to build an efficient test data set that covers all conditions. Second, the PSODGT uses a particle swarm optimization (PSO) approach to generate test data set. In addition, a new position initialization technique is developed for PSO. Instead of initializing the test data randomly, the proposed technique uses the previously-found test data which can reach the target condition as the initial positions so that the search speed of PSODGT can be further accelerated. The PSODGT is tested on four practical programs.
Khushboo et al. [15] described the application of discrete quantum particle swarm optimization (QPSO) to the problem of automated test data generation. The discrete QPSO algorithm is proposed on the basis of concepts from quantum computing. They studied the role of the critical QPSO parameters in test data generation performance and, based on these observations, designed an adaptive version (AQPSO) whose performance was compared with QPSO. They used branch coverage as their test adequacy criterion.
Tiwari et al. [16] applied a variant of PSO to the creation of new test data for modified code in regression testing. The experimental results demonstrated that this method could cover more code in fewer iterations than the original PSO algorithm.
Zhu et al. [17] put forward an improved algorithm (APSO) and applied it to automatic test data generation, in which inertia weight was adjusted according to the particle fitness. The results showed that APSO had better performance than basic PSO.
Dahiya et al. [18] proposed a PSO-based hybrid testing technique and solved many of the structural testing problems such as dynamic variables, input dependent array index, abstract function calls, infeasible paths and loop handling.
Singla et al. [19] presented a technique based on a combination of the genetic algorithm and particle swarm optimization. It generates test data for data-flow coverage using the dominance concept between two nodes, and it is compared to both GA and PSO for the generation of automatic test cases to demonstrate its superiority.
Mao [9] and Zhang et al. [7] followed the same approach: they did not modify PSO itself but built a single fitness function by combining the branch distance functions for branch predicates with the branch weights of a PUT, and then applied PSO to find the solution for this fitness function. The experimental results on a benchmark of 8 programs under test showed that the PSO algorithm was more effective than GA in generating test data. However, a weakness remained in that the calculation of the branch weights for a PUT was still entirely manual work, which reduced the automatic nature of the proposal. In this paper, our proposal overcomes this limitation while preserving the efficiency of a PSO-based automatic test data generation method.
4. Proposed approach
Our proposed approach can be divided into two separate parts: performing static analysis and applying simultaneous multithreading of PSO to generate test data. This approach is presented in the Figure 1 below.

4.1. Perform static analysis to find all test paths
First, we perform static analysis to find all test paths of the given PUT. We call it static analysis because, without having to execute the program, we can generate the control flow graph (CFG) of the given program and then traverse this CFG to find all test paths. This is done in the following two steps:
1) Control flow graph generation: Generating test data directly from source code is more complicated and difficult than generating it from a control flow graph (CFG). A CFG is a directed graph visualizing the logical structure of a program [12] and is defined as follows:
**Definition 1 (CFG).** Given a program, a corresponding CFG is defined as a pair \( G = (V, E) \), where \( V = \{v_0, v_1, \ldots, v_n\} \) is a set of vertices representing statements and \( E \subseteq V \times V \) is a set of edges. Each edge \( (v_i, v_j) \) implies that the statement corresponding to \( v_j \) is executed after \( v_i \).
This paper uses the CFG generation algorithm from a given program which was presented in [28]. Before performing this algorithm, output graph is initialized as a global variable and contains only one vertex representing for the given program P.
**Algorithm 2: GenerateCFG**
**Input**: \( P \); given program
**Output**: graph: CFG
1: \( B \) = a set of blocks by dividing \( P \)
2: \( G \) = a graph by linking all blocks in \( B \) to each other
3: update graph by replacing \( P \) with \( G \)
4: if \( G \) contains return/break/continue statements then
5: update the destination of return/break/continue pointers in the graph
6: **end if**
7: for each block \( M \) in \( B \) do
8: if block \( M \) can be divided into smaller blocks then
9: GenerateCFG(M)
10: **end if**
11: **end for**
Applying this GenerateCFG algorithm to the above-mentioned PUT `getDayNum`, we obtain a CFG with 5 test paths (determined by the decision nodes), as shown in Figure 2.
2) Test paths generation: In order to generate test data, a set of feasible test paths is found by traversing the given CFG. Path and test path are defined as follows:
Definition 2 (Path). Given a CFG \(G = (V, E)\), a path is a sequence of vertices \(v_0, v_1, \ldots, v_k\) with \((v_i, v_{i+1}) \in E\) for \(0 \leq i < k\), where \(v_0\) and \(v_k\) correspond to the start vertex and end vertex of the CFG.
Definition 3 (Test path). Given a CFG \(G = (V, E)\), a test path is a path from the start vertex to the end vertex of the CFG that is feasible, i.e., there exists input data that drives the execution of the PUT along it.
This research also uses CFG traverse algorithm [28] to obtain feasible test paths from a CFG as below:

**Algorithm 3: TraverseCFG**
Input: \(v\): the initial vertex of the CFG
\(depth\): the maximum number of iterations for a loop
\(path\): a global variable used to store a discovered test path
Output: \(P\): a set of feasible test paths
1: if \(v =\) NULL or \(v\) is the end vertex then
2: add \(path\) to \(P\)
3: else if the number of occurrences of \(v\) in \(path\) \(\leq depth\) then
4: add \(v\) to the end of \(path\)
5: if \((v\) is not a decision node) or \((v\) is decision node and \(path\) is feasible) then
6: for each adjacent vertex \(u\) to \(v\) do
7: TraverseCFG\((u, \) depth, \(path)\)
8: end for
9: end if
10: remove the latest vertex added in \(path\) from it
11: end if
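As an illustration (our own simplified sketch, with assumed names), the following Java class enumerates paths of a CFG represented as an adjacency list over integer vertex identifiers, bounding the number of occurrences of each vertex by depth as in Algorithm 3; the feasibility check on decision nodes is omitted here.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Illustrative sketch of Algorithm 3: depth-bounded DFS over a CFG.
public class PathEnumerator {
    private final Map<Integer, List<Integer>> adj;   // CFG as an adjacency list
    private final int endVertex;
    private final int depth;                         // max occurrences of a vertex per path
    private final List<List<Integer>> paths = new ArrayList<>();

    public PathEnumerator(Map<Integer, List<Integer>> adj, int endVertex, int depth) {
        this.adj = adj;
        this.endVertex = endVertex;
        this.depth = depth;
    }

    public List<List<Integer>> traverse(int start) {
        dfs(start, new ArrayList<>());
        return paths;
    }

    private void dfs(int v, List<Integer> path) {
        if (v == endVertex) {                        // reached the end vertex: record the path
            path.add(v);
            paths.add(new ArrayList<>(path));
            path.remove(path.size() - 1);
            return;
        }
        if (Collections.frequency(path, v) >= depth) return;   // bound loop unrolling
        path.add(v);
        for (int u : adj.getOrDefault(v, Collections.emptyList())) {
            dfs(u, path);
        }
        path.remove(path.size() - 1);
    }
}
```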
In this paper, a test path is represented as a sequence of pairs of a branch predicate, e.g. \((\text{month} \geq 1 \,\&\&\, \text{month} \leq 12)\) for the first branch, and its decision (T or F for TRUE or FALSE, respectively). For example, one of the paths in the PUT `getDayNum` can be written as the sequence \{[(\text{month} \geq 1 \,\&\&\, \text{month} \leq 12), T], [(\text{month} == 2), T], [(\text{year} \,\%\, 400 == 0 \,||\, (\text{year} \,\%\, 4 == 0 \,\&\&\, \text{year} \,\%\, 100 \neq 0)), F]\}, which means the TRUE branch is taken at the predicate \((\text{month} \geq 1 \,\&\&\, \text{month} \leq 12)\), the TRUE branch at the predicate \((\text{month} == 2)\), and the FALSE branch at the predicate \((\text{year} \,\%\, 400 == 0 \,||\, (\text{year} \,\%\, 4 == 0 \,\&\&\, \text{year} \,\%\, 100 \neq 0))\).
This is the path taken for data representing the number of days of February in a non-leap year. Applying the TraverseCFG algorithm to the CFG of the PUT getDayNum, we obtain 5 test paths, each represented as a sequence of pairs of branch predicate and decision, as shown in Table 2 below:
Table 2. All test paths of PUT `getDayNum`
<table>
<thead>
<tr>
<th>PathID</th>
<th>Path’s branch predications and their decisions</th>
</tr>
</thead>
<tbody>
<tr>
<td>path1</td>
<td>{[(month ≥ 1 && month ≤ 12), T], [(month == 2), T], [(year % 400 == 0 || (year % 4 == 0 && year % 100 != 0)), T]}</td>
</tr>
<tr>
<td>path2</td>
<td>{[(month ≥ 1 && month ≤ 12), T], [(month == 2), T], [(year % 400 == 0 || (year % 4 == 0 && year % 100 != 0)), F]}</td>
</tr>
<tr>
<td>path3</td>
<td>{[(month ≥ 1 && month ≤ 12), T], [(month == 2), F], [(month == 4 || month == 6 || month == 9 || month == 11), T]}</td>
</tr>
<tr>
<td>path4</td>
<td>{[(month ≥ 1 && month ≤ 12), T], [(month == 2), F], [(month == 4 || month == 6 || month == 9 || month == 11), F]}</td>
</tr>
<tr>
<td>path5</td>
<td>{[(month ≥ 1 && month ≤ 12), F]}</td>
</tr>
</tbody>
</table>
4.2. Establish fitness function for each test path
From the branch distance calculation formulas in Table 1, we develop the function \( f_{BchDist} \) below (Algorithm 4) to calculate the branch distance at each branch predicate.
Since each test path is represented as a sequence of pairs of branch predicate and decision, we build the fitness function for a test path by first establishing a fitness function for each branch predicate and its decision. Each branch predicate has two possible decisions, TRUE (T) and FALSE (F), so there are two fitness functions for each predicate. These fitness functions are obtained by applying the branch distance formulas mentioned above, as shown in Table 3.
Table 3. Fitness function for each branch predicate and its decision of PUT getDayNum
<table>
<thead>
<tr>
<th>Decision node</th>
<th>Fitness function</th>
</tr>
</thead>
<tbody>
<tr>
<td>((month ≥ 1 && month ≤ 12), T)</td>
<td>f_BchDist(month, "≥", 1) + f_BchDist(month, "≤", 12)</td>
</tr>
<tr>
<td>((month ≥ 1 && month ≤ 12), F)</td>
<td>min(f_BchDist(month, "<", 1), f_BchDist(month, ">", 12))</td>
</tr>
<tr>
<td>((month == 2), T)</td>
<td>f_BchDist(month, "=", 2)</td>
</tr>
<tr>
<td>((month == 2), F)</td>
<td>f_BchDist(month, "≠", 2)</td>
</tr>
<tr>
<td>((year % 400 == 0 || (year % 4 == 0 && year % 100 != 0)), T)</td>
<td>min(f_BchDist(year % 400, "=", 0), f_BchDist(year % 4, "=", 0) + f_BchDist(year % 100, "≠", 0))</td>
</tr>
<tr>
<td>((year % 400 == 0 || (year % 4 == 0 && year % 100 != 0)), F)</td>
<td>f_BchDist(year % 400, "≠", 0) + min(f_BchDist(year % 4, "≠", 0), f_BchDist(year % 100, "=", 0))</td>
</tr>
<tr>
<td>((month == 4 || month == 6 || month == 9 || month == 11), T)</td>
<td>min(f_BchDist(month, "=", 4), f_BchDist(month, "=", 6), f_BchDist(month, "=", 9), f_BchDist(month, "=", 11))</td>
</tr>
<tr>
<td>((month == 4 || month == 6 || month == 9 || month == 11), F)</td>
<td>f_BchDist(month, "≠", 4) + f_BchDist(month, "≠", 6) + f_BchDist(month, "≠", 9) + f_BchDist(month, "≠", 11)</td>
</tr>
</tbody>
</table>
**Algorithm 4**: Branch distance function \( f_{BchDist} \)
**Input**: double \( a \), condition type, double \( b \)
**Output**: branch distance value
1. switch (condition type)
2. case “=”:
3. if \( \text{abs}(a - b) = 0 \) then return 0 else return \( \text{abs}(a - b) + k \)
4. case “\neq”:
5. if \( \text{abs}(a - b) \neq 0 \) then return 0 else return \( k \)
6. case “<”:
7. if \( a - b < 0 \) then return 0 else return \( \text{abs}(a - b) + k \)
8. case “\leq”:
9. if \( a - b \leq 0 \) then return 0 else return \( \text{abs}(a - b) + k \)
10. case “>”:
11. if \( b - a < 0 \) then return 0 else return \( \text{abs}(b - a) + k \)
12. case “\geq”:
13. if \( b - a \leq 0 \) then return 0 else return \( \text{abs}(b - a) + k \)
14. end switch
Based on these formulas for calculating the fitness value of each branch predicate, we generate the fitness function for each test path of the PUT getDayNum as below, where \(f_1, f_2, f_3, f_4\) denote the fitness functions (from Table 3) of the four branch predicates \((\text{month} \geq 1 \,\&\&\, \text{month} \leq 12)\), \((\text{month} == 2)\), the leap-year condition, and \((\text{month} == 4 \,||\, \text{month} == 6 \,||\, \text{month} == 9 \,||\, \text{month} == 11)\), respectively, evaluated for the decision (T or F) required by the path:
Table 4. Fitness functions for each test path of PUT getDayNum

| PathID | Test path fitness function |
|--------|----------------------------|
| path1  | \( F_1 = f_1(T) + f_2(T) + f_3(T) \) |
| path2  | \( F_2 = f_1(T) + f_2(T) + f_3(F) \) |
| path3  | \( F_3 = f_1(T) + f_2(F) + f_4(T) \) |
| path4  | \( F_4 = f_1(T) + f_2(F) + f_4(F) \) |
| path5  | \( F_5 = f_1(F) \) |
4.3. Apply multithreading of Particle Swarm Optimization
For the fitness function of each test path, we use one PSO instance to find its solution (here, the solution is the test data that covers the corresponding test path). To find the solutions for all fitness functions at the same time, we run the PSO algorithm in multiple threads by defining it as a class that extends the Java Thread class: public class PSOProcess extends Thread.
The multithreading of PSO can be executed through below algorithm:
Algorithm 5: Multithreading of Particle Swarm Optimization(MPSO)
Input: list of fitness functions
Output: the set of test data covering the corresponding test paths
1: for each fitness function Fi
2: initialize an object psoi of class PSOProcess
3: assign a fitness function Fi to object psoi
4: execute object psoi: psoi.start();
5: end for
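A minimal Java sketch of this multithreaded scheme is given below. It is our own illustration of the structure implied by Algorithm 5, not the authors’ published source, and it reuses the BasicPso sketch shown in Section 2.2; the PSO parameters are arbitrary assumptions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToDoubleFunction;

// Illustrative sketch: one PSO thread per test-path fitness function (Algorithm 5).
class PSOProcess extends Thread {
    private final ToDoubleFunction<double[]> fitness;   // F_i of the assigned test path
    private final int dim;                              // number of input arguments
    private volatile double[] solution;                 // test data covering the path

    PSOProcess(ToDoubleFunction<double[]> fitness, int dim) {
        this.fitness = fitness;
        this.dim = dim;
    }

    @Override
    public void run() {
        // BasicPso.optimize is the single-threaded PSO sketched in Section 2.2.
        solution = BasicPso.optimize(fitness, dim, 50, 1000, -5000, 5000, 100);
    }

    double[] getSolution() { return solution; }
}

public class Mpso {
    // Starts one PSOProcess per fitness function and waits for all of them.
    public static List<double[]> generate(List<ToDoubleFunction<double[]>> fitnessFunctions,
                                          int dim) throws InterruptedException {
        List<PSOProcess> threads = new ArrayList<>();
        for (ToDoubleFunction<double[]> f : fitnessFunctions) {
            PSOProcess pso = new PSOProcess(f, dim);
            threads.add(pso);
            pso.start();                                // execute object pso_i
        }
        List<double[]> testData = new ArrayList<>();
        for (PSOProcess pso : threads) {
            pso.join();                                 // wait for each PSO to finish
            testData.add(pso.getSolution());
        }
        return testData;
    }
}
```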
Running the above steps shows that our proposal generates test data covering all test paths of the PUT getDayNum:

Figure 4. Generated test data for the PUT getDayNum.
5. Experimental analysis
We compared our experimental results to Mao’s proposal [9] on two criteria: the degree of automation of test data generation and the coverage achieved by each proposal for each PUT of the given benchmark. We also show that our approach improves on the state-of-the-art constraint-based test data generator Symbolic PathFinder [21].
5.1. Automatic ability
When evaluating an automatic test data generation method, the actual degree of automation is one of the key criteria for judging the proposal’s effectiveness. Mao [9] used only one fitness function to generate test data for all test paths of a PUT; therefore, he had to combine the branch weight of each test path into the fitness function. Building the branch weight function (and hence the fitness function) is purely manual work, and for long and complex PUTs it is sometimes even harder than generating test data for the test paths, which affected the efficiency of his proposed approach.
On the other hand, taking advantage of the fast convergence of the PSO algorithm, we propose using a separate fitness function for each test path. This solution has clear benefits:
1. As there is no need to build a branch weight function, the degree of automation of this proposal is improved.
2. The fitness functions are built automatically from the pairs of branch predicate and decision of each test path, and these pairs can be generated entirely automatically from a PUT with the above-mentioned Algorithms 2 and 3. This further advances the automation of our proposal.
5.2. Path coverage ability
We also evaluated our proposed approach on the benchmark used in Mao’s paper [9]. The experiments were performed on MS Windows 7 Ultimate (32-bit), running on an Intel Core i3 at 2.4 GHz with 4 GB of memory. Our proposal was implemented in Java and run on JDK 1.8. We compared the coverage ability on all 8 programs in the benchmark, as shown in Table 5.
Table 5. The benchmark programs used for experimental analysis
<table>
<thead>
<tr>
<th>PUT name</th>
<th>LOC</th>
<th>TPs</th>
<th>Args</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>triangleType</td>
<td>31</td>
<td>5</td>
<td>3</td>
<td>Type classification for a triangle</td>
</tr>
<tr>
<td>calDay</td>
<td>72</td>
<td>11</td>
<td>3</td>
<td>Calculate the day of the week</td>
</tr>
<tr>
<td>cal</td>
<td>53</td>
<td>18</td>
<td>5</td>
<td>Compute the days between two dates</td>
</tr>
<tr>
<td>remainder</td>
<td>49</td>
<td>18</td>
<td>2</td>
<td>Calculate the remainder of an integer division</td>
</tr>
<tr>
<td>computeTax</td>
<td>61</td>
<td>11</td>
<td>2</td>
<td>Compute the federal personal income tax</td>
</tr>
<tr>
<td>bessj</td>
<td>245</td>
<td>21</td>
<td>2</td>
<td>Bessel J_n function</td>
</tr>
<tr>
<td>printCalendar</td>
<td>187</td>
<td>33</td>
<td>2</td>
<td>Print the calendar of a month in some year</td>
</tr>
<tr>
<td>line</td>
<td>92</td>
<td>36</td>
<td>8</td>
<td>Check if two rectangles overlap</td>
</tr>
</tbody>
</table>
* LOC: Lines of code TPs: Test paths Args: Input arguments
The two criteria to be compared with Mao’s result [9] are:
- Success rate (SR): the probability that the generated test data covers all test paths. To measure this criterion, we executed MPSO 1000 times and counted the number of runs in which the generated test data covered all test paths of the given PUT. The SR formula is calculated as follows:
\[
SR = \frac{\sum(\text{all test paths were covered})}{1000}
\]
- Average coverage (AC): the average coverage achieved by the generated test data over 1,000 runs. Similar to the above, we executed MPSO 1000 times and calculated the coverage for each run. The AC formula is calculated for each PUT as follows:
\[
AC = \frac{\sum(\text{coverage for each run})}{1000}
\]
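As a small illustration (our own helper, with an assumed data layout), SR and AC can be computed from the 1,000 runs as follows, where coveredPaths[r] holds the number of test paths covered in run r and totalPaths is the number of test paths of the PUT.

```java
// Illustrative sketch: computing SR and AC (both in percent) from recorded runs.
public class CoverageMetrics {
    public static double successRate(int[] coveredPaths, int totalPaths) {
        int fullCover = 0;
        for (int c : coveredPaths) if (c == totalPaths) fullCover++;
        return 100.0 * fullCover / coveredPaths.length;        // SR
    }

    public static double averageCoverage(int[] coveredPaths, int totalPaths) {
        double sum = 0;
        for (int c : coveredPaths) sum += 100.0 * c / totalPaths;
        return sum / coveredPaths.length;                      // AC
    }
}
```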
The detailed results of the comparison on the benchmark used by Mao [9], for both criteria, are shown in Table 6.
From Table 6 it can be seen that there are 4 PUTs (triangleType, computeTax, printCalendar, line) which Mao’s proposed approach cannot fully cover, while our method can. Because each test path is assigned to its own PSO instance, every time MPSO is run, each PSO can generate test data covering the test path it is assigned to. For the remaining 4 PUTs (calDay, cal, remainder, bessj), our experiments fully covered all test paths, matching Mao’s results [9].
5.3. Compare to constraint-based test data generation approaches
In this section we point out our advantage over constraint-based test data generation approaches when generating test data for a program that contains native function calls. We compare against Symbolic PathFinder (SPF) [21], the state of the art among constraint-based test data generation approaches. Consider the sample Java program below:
```java
int foo(double x, double y) {
int ret = 0;
if ((x + y + Math.sin(x + y)) == 10) {
ret = 1; // branch 1
}
return ret;
}
```
Due to the limitation of the constraint solver used in SPF, it cannot solve the condition \((x + y + \text{Math.sin}(x + y)) = 10\). Because this condition contains the native function \text{Math.sin}(x + y) of the Java language, SPF is unable to generate test data which can cover branch 1.
In contrast, by using the search-based test data generation approach, for the condition \((x + y + \text{Math.sin}(x + y)) == 10\) we applied Korel’s formula in Table 1 to create the fitness function \(f_T = \text{abs}((x + y + \text{Math.sin}(x + y)) - 10)\). Then, using PSO to generate test data that satisfies this condition, we obtained the following result:

6. Conclusion
This paper has introduced and evaluated a combination of static program analysis and PSO for evolutionary structural testing. We proposed a method that uses one fitness function for each test path of a PUT and then executes the corresponding PSO instances simultaneously in order to generate test data covering the test paths of the PUT. The experimental results show that our proposal is more effective than Mao’s [9] PSO-based test data generation method in terms of both automation and coverage ability for a PUT. Our approach also addresses a limitation of constraint-based test data generation approaches by generating test data for conditions that contain native function calls.
As future work, we will continue to extend our proposal to be applicable to more kinds of PUTs, such as PUTs that contain calls to other native functions or PUTs that handle string operations or complex data structures. In addition, further research is needed to apply this proposal to programs not only in academia but also in industry.
Table 6. Comparison between Mao’s approach and MPSO
<table>
<thead>
<tr>
<th>Program under test</th>
<th>Success rate (%)</th>
<th>Average coverage (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Mao[9]’s PSO</td>
<td>MPSO</td>
</tr>
<tr>
<td>triangleType</td>
<td>99.80</td>
<td>100.0</td>
</tr>
<tr>
<td>calDay</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>cal</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>remainder</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>computeTax</td>
<td>99.80</td>
<td>100.0</td>
</tr>
<tr>
<td>bessj</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>printCalendar</td>
<td>99.10</td>
<td>100.0</td>
</tr>
<tr>
<td>line</td>
<td>99.20</td>
<td>100.0</td>
</tr>
</tbody>
</table>
References
A conceptual Bayesian net model for integrated software quality prediction
Łukasz Radliński
1 Institute of Information Technology in Management, University of Szczecin
Mickiewicza 64, 71-101 Szczecin, Poland
Abstract – Software quality can be described by a set of features, such as functionality, reliability, usability, efficiency, maintainability, portability and others. There are various models for software quality prediction developed in the past. Unfortunately, they typically focus on a single quality feature. The main goal of this study is to develop a predictive model that integrates several features of software quality, including relationships between them. This model is an expert-driven Bayesian net, which can be used in diverse analyses and simulations. The paper discusses model structure, behaviour, calibration and enhancement options as well as possible use in fields other than software engineering.
1 Introduction
Software quality has been one of the most widely studied areas of software engineering. One of the aspects of quality assurance is quality prediction. Several predictive models have been proposed since the 1970s. A clear trade-off can be observed between a model’s analytical potential and the number of quality features used. Models that contain a wide range of quality features [1, 2, 3] typically have low analytical potential and are more like frameworks for building calibrated predictive models. On the other hand, models with higher analytical potential typically focus on a single or very few aspects of quality, for example on reliability [4, 5].
This trade-off has been the main motivation for research focused on building predictive models that both incorporate various aspects of software quality and have high analytical potential. The aim of this paper is to build such a predictive model as a
Bayesian net (BN). This model may be used to deliver information for decision-makers about managing software projects to achieve specific targets for software quality.
Bayesian nets have been selected for this study for several reasons. The most important is related to the ability to incorporate both expert knowledge and empirical data. Typically, predictive models for software engineering are built using data-driven techniques like multiple regression, neural networks, nearest neighbours or decision trees. For the current type of study, a dataset of past projects with sufficient volume and an appropriate level of detail is typically not available. Thus, the model has to be based more on expert knowledge and only partially on empirical data. Other advantages of BNs include the ability to incorporate causal relationships between variables, explicit incorporation of uncertainty through the probabilistic definition of variables, no fixed lists of independent and dependent variables, running the model with incomplete data, forward and backward inference, and graphical representation. More information on BN theory can be found in [6, 7], while recent applications in software engineering have been discussed in [8, 9, 10, 11, 12, 13, 14, 15, 16].
The rest of this paper is organized as follows: Section 2 brings closer the point of view on software quality that was the subject of the research. Section 3 discusses background knowledge used when building the predictive model. Section 4 provides the details on the structure of the proposed predictive model. Section 5 focuses on the behaviour of this model. Section 6 discusses possibilities for calibrating and extending the proposed model. Section 7 considers the use of such type of model in other areas. Section 8 summarizes this study.
2 Software Quality
Software quality is typically expressed in science and industry as a range of features rather than a single aggregated value. This study follows the ISO approach where software quality is defined as a “degree to which the software product satisfies stated and implied needs when used under specified conditions” [1]. This standard defines eleven characteristics, shown in Fig. 1 with dark backgrounds. The last three characteristics (on the left) refer to “quality in use” while others refer to internal and external metrics. Each characteristic is decomposed into the sub-characteristics, shown in Fig. 1 with white background. On the next level each sub-characteristic aggregates the values of metrics that describe the software product. The metrics are not shown here because they should be selected depending on the particular environment where such quality assessment would be used. Other quality models have been proposed in literature [17, 3], from which some concepts may be adapted when building a customized predictive model.
In our approach we follow the general taxonomy of software quality proposed by ISO. However, our approach is not limited to the ISO point of view and may be adjusted according to specific needs. For this reason our approach uses slightly different
terminology with “features” at the highest level, “sub-features” at the second level and “measures” at the lowest level.
3 Background knowledge
Our approach assumes that an industrial-scale model for integrated software quality prediction has to be calibrated for specific needs and a specific environment before it can be used in decision support. Normally such calibration should be performed with domain experts from the target environment, for example using a questionnaire survey. However, at this point such a survey has not yet been completed, so the current model has been built entirely from the available literature and the expert knowledge of the modellers. This is the reason why the model is currently at the “conceptual” stage. The literature used includes quality standards [1, 18, 2, 19, 20, 21], widely accepted results on software quality [22, 23, 24, 17, 3, 25, 26, 27, 28, 29], and experience from building models for similar areas of software engineering [8, 9, 10, 11, 12, 30, 13, 14, 15, 16].
Available literature provides useful information on the relationships among quality features. Fig. 2 illustrates the relationships encoded in the proposed predictive model. There are two types of relationships: positive (“+”) and negative (“−”). A positive relationship indicates a situation where an increased level of one feature causes a probable increase of the level of another feature. A negative relationship indicates a situation where an increased level of one feature causes a probable decrease of the level of another feature unless some compensation is provided. This compensation typically has the form of additional effort, an increase of development process quality, or the use of better tools or technologies.
Table 1 summarizes the relationships between the effort and the quality features.
Currently there are two groups of controllable factors in the model: effort and process quality – defined separately for three development phases. It is assumed that the increase of effort or process quality has a positive impact on the selected quality features. This impact is not deterministic though, i.e. the increased effort does not guarantee better quality but causes that this better quality is more probable.
It should be noted that the relationships in Fig. 2 and Table 1 may be defined differently in specific target environments.

**Table 1. Relationships between effort and the quality features.**
<table>
<thead>
<tr>
<th>Quality feature</th>
<th>Requirements effort</th>
<th>Implementation effort</th>
<th>Testing effort</th>
</tr>
</thead>
<tbody>
<tr>
<td>functional suitability</td>
<td>+</td>
<td>+</td>
<td></td>
</tr>
<tr>
<td>reliability</td>
<td>+</td>
<td></td>
<td></td>
</tr>
<tr>
<td>performance efficiency</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>operability</td>
<td>+</td>
<td></td>
<td></td>
</tr>
<tr>
<td>security</td>
<td>+</td>
<td></td>
<td></td>
</tr>
<tr>
<td>compatibility</td>
<td>+</td>
<td>+</td>
<td></td>
</tr>
<tr>
<td>maintainability</td>
<td>+</td>
<td>+</td>
<td></td>
</tr>
<tr>
<td>portability</td>
<td>+</td>
<td>+</td>
<td></td>
</tr>
<tr>
<td>usability</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>safety</td>
<td>+</td>
<td>+</td>
<td></td>
</tr>
<tr>
<td>flexibility</td>
<td>+</td>
<td>+</td>
<td></td>
</tr>
</tbody>
</table>
4 Model Structure
The proposed predictive model is a Bayesian net where the variables are defined as conditional probability distributions given their parents (i.e. immediate predecessors). It is beyond the scope of the paper to discuss the structure of the whole model because the full model contains over 100 variables. However, for full transparency and reproducibility of the results, the full model definition is available online [31].
Fig. 3 illustrates a part of the model structure by showing two quality features and relevant relationships. The whole model is a set of linked hierarchical naïve Bayesian classifiers where each quality feature is modelled by one classifier. Quality feature is the root of this classifier, sub-features are in the second level (children) and measures are the leaves.
To enable relatively easy model calibration and enhancement this model was built with the following assumptions:
- the links between various aspects of software quality may be defined only at the level of features;
- controllable factors are aggregated as the “effectiveness” variables, which, in turn, influence selected quality features.
Currently, all variables in the model, except measures, are expressed on a five-point ranked scale from ‘very low’ to ‘very high’. Two important concepts, implemented in the AgenaRisk tool [32], were used to simplify the definition of probability distributions. First, the whole scale of a ranked variable is internally treated as the numeric range (0, 1) with five intervals – i.e. for ‘very low’ an interval (0, 0.2), for ‘low’ an interval (0.2,
0.4), etc. This makes it possible to express a variable not only as a probability distribution but also using summary statistics, such as the mean (used in the next section). It also opens the door for the second concept – using expressions to define the probability distributions of variables. Instead of manually filling the probability table of each variable, which is time-consuming and prone to inconsistencies, it is sufficient to provide only a few parameters for expressions such as the Normal distribution (mean, variance), the TNormal distribution (mean, variance, lower bound, upper bound), or the weighted mean function – wmean(weight for parameter 1, parameter 1, weight for parameter 2, parameter 2, etc.). Table 2 provides the definitions for selected variables in different layers of the model.
Table 2. Definition of selected variables.
<table>
<thead>
<tr>
<th>Type</th>
<th>Variable</th>
<th>Definition</th>
</tr>
</thead>
<tbody>
<tr>
<td>feature</td>
<td>usability</td>
<td>TNormal(wmean(1, 0.5, 3, wmean(3, req_effect, 2, impl_effect, 1, test_effect, 1, funct_size)), 0.05, 0, 1)</td>
</tr>
<tr>
<td>sub-feature</td>
<td>effectiveness</td>
<td>TNormal(utility, 0.01, 0, 1)</td>
</tr>
<tr>
<td>measure</td>
<td>percentage of tasks accomplished</td>
<td>effectiveness = 'very high' → Normal(95, 10); effectiveness = 'high' → Normal(90, 40); effectiveness = 'medium' → Normal(75, 60); effectiveness = 'low' → Normal(65, 80); effectiveness = 'very low' → Normal(50, 100)</td>
</tr>
<tr>
<td>controllable</td>
<td>testing effort</td>
<td>TNormal(0.5, 0.05, 0, 1)</td>
</tr>
<tr>
<td>controllable</td>
<td>testing effectiveness</td>
<td>TNormal(wmean(3, test_effort, 4, test_procedure), 0.001, 0, 1)</td>
</tr>
</tbody>
</table>
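To illustrate the semantics of such expressions, the following Java snippet (our own sketch, not AgenaRisk code or its API) computes a weighted mean of parent values and discretizes a truncated Normal distribution over the five ranked states by numerical integration, mirroring a definition such as TNormal(wmean(3, test_effort, 4, test_procedure), 0.001, 0, 1); the parent values in main are assumed purely for illustration.

```java
// Illustrative sketch: a ranked variable on (0, 1) with five equal intervals,
// defined by a weighted mean of its parents and a truncated Normal distribution.
public class RankedNode {
    static final String[] STATES = {"very low", "low", "medium", "high", "very high"};

    // wmean(w1, x1, w2, x2, ...) as used in the model's expressions.
    static double wmean(double... wx) {
        double num = 0, den = 0;
        for (int i = 0; i < wx.length; i += 2) {
            num += wx[i] * wx[i + 1];
            den += wx[i];
        }
        return num / den;
    }

    // Probability of each of the five states under TNormal(mean, var, 0, 1).
    static double[] tnormal(double mean, double var) {
        double sd = Math.sqrt(var);
        double[] p = new double[5];
        double total = 0;
        for (int s = 0; s < 5; s++) {
            double a = s * 0.2, b = a + 0.2;
            int steps = 1000;                        // numerical integration of the Normal pdf
            double h = (b - a) / steps, area = 0;
            for (int i = 0; i < steps; i++) {
                double x = a + (i + 0.5) * h;
                area += h * Math.exp(-0.5 * Math.pow((x - mean) / sd, 2));
            }
            p[s] = area;
            total += area;
        }
        for (int s = 0; s < 5; s++) p[s] /= total;   // normalize, i.e. truncate to (0, 1)
        return p;
    }

    public static void main(String[] args) {
        double testEffort = 0.9, testProcedure = 0.5;    // assumed parent mean values
        double[] p = tnormal(wmean(3, testEffort, 4, testProcedure), 0.001);
        for (int s = 0; s < 5; s++) {
            System.out.printf("%-10s %.3f%n", STATES[s], p[s]);
        }
    }
}
```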
5 Model Behaviour
To demonstrate model behaviour four simulations were performed with the focus to analyse the impact of one group of variables on another.
Simulation 1 was focused on the sensitivity analysis of quality features in response to the level of controllable factors. An observation about the state for a single controllable factor was entered to the model and then the predictions for all quality features were analyzed. This procedure was repeated for each state of each controllable factor.
Fig. 4 illustrates the results for one of such runs by demonstrating the changes of predicted levels of maintainability and performance efficiency caused by different levels of implementation effort. These results have been compared with the background knowledge in Table 1 to validate if the relationships have been correctly defined, i.e. if the change of the level of the controllable factor causes the assumed direction of changed level of quality feature. In this case the obtained results confirm that the background knowledge was correctly incorporated into the model.
With these graphs, it is possible to analyze the strength of impact of controllable factors on quality features. The impact of implementation effort is larger on maintainability than on performance efficiency – predicted probability distributions are more ‘responsive’ to different states of implementation effort for maintainability than for performance efficiency. Such information may be used in decision support.

**Fig. 4.** Impact of implementation effort on the selected quality features.
**Simulation 2** is similar to simulation 1 because it also analyses the impact of controllable factors on quality features. However, this simulation involves the analysis of summary statistics (mean values) rather than full probability distributions. Here, an observation ‘very high’ was entered for each controllable factor (one at a time) and then the mean value of the predicted probability distribution for each quality feature was analyzed. Table 3 summarizes the results for effort at various phases. All of these mean values are above the default value of 0.5. These higher values indicate an increase in the predicted level of the specific quality features. These values correspond to the “+” signs in Table 1, which further confirms the correct incorporation of the relationships between the controllable factors and the quality features.
<table>
<thead>
<tr>
<th>Quality feature</th>
<th>Requirements effort</th>
<th>Implementation effort</th>
<th>Testing effort</th>
</tr>
</thead>
<tbody>
<tr>
<td>functional suitability</td>
<td>0.55</td>
<td>0.56</td>
<td></td>
</tr>
<tr>
<td>reliability</td>
<td></td>
<td></td>
<td>0.55</td>
</tr>
<tr>
<td>performance efficiency</td>
<td></td>
<td>0.54</td>
<td>0.53</td>
</tr>
<tr>
<td>operability</td>
<td>0.60</td>
<td></td>
<td></td>
</tr>
<tr>
<td>security</td>
<td></td>
<td></td>
<td>0.55</td>
</tr>
<tr>
<td>compatibility</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>maintainability</td>
<td>0.56</td>
<td>0.57</td>
<td></td>
</tr>
<tr>
<td>portability</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>usability</td>
<td>0.56</td>
<td>0.55</td>
<td>0.52</td>
</tr>
<tr>
<td>flexibility</td>
<td>0.57</td>
<td>0.56</td>
<td></td>
</tr>
<tr>
<td>safety</td>
<td></td>
<td></td>
<td>0.55</td>
</tr>
</tbody>
</table>
**Simulation 3** focused on the analysis of the relationships among the quality features. Like Simulation 2, it analysed the mean values of the predicted probability distributions. The results are presented in Table 4.
The predicted mean values are either lower or higher than the default value of 0.5. Values lower than 0.5 correspond to the "−" signs in Fig. 2, while values higher than 0.5 correspond to the "+" signs in Fig. 2. These results confirm that the model correctly incorporates the assumed relationships among the quality features.
Simulation 4 demonstrates more advanced model capabilities for delivering information for decision support through what-if and trade-off analysis. Although such an analysis may involve more variables, for simplicity four variables were investigated: implementation effort, testing effort, maintainability, and performance efficiency. Some input data on a hypothetical project under consideration were entered into the model. The model provides predictions for these four variables as shown in Fig. 5 (scenario: baseline).
Let us assume that a manager is not satisfied with the low level of maintainability. In addition to the previously entered input data, a constraint is entered into the model to analyse how a high level of maintainability can be achieved (maintainability = 'high' → mean(maintainability) = 0.7). As shown in Fig. 5, scenario: revision 1, the model predicts that such a target is achievable with increased implementation effort and testing effort (although the required increase in testing effort is very small). The model also predicts that the level of performance efficiency is expected to be lower. This is due to the negative relationship between maintainability and performance efficiency (Fig. 2).
Let us further assume that, due to limited resources, an increase in effort is not only impossible, but effort even has to be reduced to the level 'low' for implementation and testing. In that case the level of performance efficiency is expected to decrease further (scenario: revision 2).
Various other simulations similar to Simulation 4 can be performed to use the model for what-if, trade-off and goal-seeking analyses in decision support. Such simulations may involve more steps and more variables; they will be performed in future work to further validate the correctness and usefulness of the model.
6 Calibration and Enhancement Options
The proposed model has a structure that enables relatively easy calibration. As the variables are defined using expressions, calibration requires setting appropriate parameters in these expressions (a small sketch of these parameters follows the list):
- the values of the weights in \( \text{wmean} \) functions – a higher weight indicates a stronger impact of the particular variable on the aggregated value;
- the value of the variance in \( \text{TNormal} \) expressions (the second parameter) – a value closer to zero indicates a stronger relationship, while higher values indicate weaker relationships. Note that, since ranked variables are internally defined over the range \((0, 1)\), a variance of 0.001 typically indicates a very strong relationship and 0.01 a medium one.
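As a purely illustrative data layout (not part of the proposed model or any tool), the calibration parameters listed above could be kept together like this:

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical in-memory form of the calibration parameters described above.
// Each aggregated variable keeps the wmean weights of its parents and the
// variance of its TNormal link (smaller variance = stronger relationship).
struct CalibratedVariable {
    std::vector<std::pair<std::string, double>> parent_weights; // parent -> wmean weight
    double variance = 0.01;                                     // TNormal variance on (0, 1)
};

int main() {
    std::map<std::string, CalibratedVariable> calibration;

    // Values taken from Table 2: testing effectiveness depends on testing
    // effort (weight 3) and test procedure (weight 4), with variance 0.001.
    CalibratedVariable testing_effectiveness;
    testing_effectiveness.parent_weights = {{"testing effort", 3.0}, {"test procedure", 4.0}};
    testing_effectiveness.variance = 0.001;
    calibration["testing effectiveness"] = testing_effectiveness;

    // Calibrating the model for a particular organisation means adjusting
    // these numbers, e.g. strengthening the impact of testing effort:
    calibration["testing effectiveness"].parent_weights[0].second = 4.0;
}
```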
Apart from calibration, which focuses on defining parameters for the existing structure, the model may be enhanced to meet specific needs:
- by adding new sub-features to features or new measures to sub-features – such a change requires only the definition of the newly added variable; no change in the definitions of existing variables is necessary;
- by adding new controllable factors – such a change requires changing the definition of the "effectiveness" variable for the specific phase, typically by setting new weights in the wmean function;
- by adding a new quality feature – such a change requires the most work because it involves defining sub-features and measures, relationships among features, and relationships between the controllable factors and the new feature.
Currently the model does not contain many causal relationships, which may limit its analytical potential. Defining the model with more causal relationships may increase the analytical potential but may also make the model more difficult to calibrate. This issue therefore needs to be investigated carefully when building a tailored model.
The model enables static analysis, i.e. analysis at an assumed point in time. Because both the project and the development environment evolve over time, it may be useful to reflect such dynamics in the model. However, such an enhancement requires significantly more modelling time and makes calibration more difficult because more parameters need to be set.
7 Possible Use in Other Fields
The proposed predictive model is focused on the software quality area. Such an approach may also be used in other fields and domains because the general constraints on the model structure may apply there as well. Possible use outside the software quality area depends on the following conditions:
• the problem under investigation is complex but can be divided into a set of sub-problems,
• there is no, or not enough, empirical data from which to generate a reliable model,
• a domain expert (or a group of experts) is able to define, calibrate and enhance the model,
• the relationships are of a stochastic and non-linear nature,
• there is a need for high analytical potential.
However, even when these conditions are met, use in other fields may be difficult. This is the case when there is a large number of additional deterministic relationships that have to be reflected in the model with high precision. Possible use in other fields will be investigated in detail in future work.
8 Conclusions
This paper introduced a new model for integrated software quality prediction. Formally, the model is a Bayesian net. It contains a wide range of quality aspects (features, sub-features, measures) together with the relationships among them. To make the model useful in decision support, it also contains a set of controllable factors (current effort and process quality in different development phases).
The model encodes knowledge on software quality published in the literature as well as personal expert judgement. To prepare the model for use in a target environment it is necessary to calibrate it, for example using questionnaires. The model may also be enhanced to meet specific needs. The model was partially validated for correctness and for usefulness in providing information for decision support.
In future, such a model may become the heart of an intelligent system for analysing and managing software quality. Achieving this would require a higher level of automation, for example in calibration and enhancement through automated extraction of relevant data and knowledge. In addition, the model would have to reflect more details of the development process, the project or the software architecture.
The stages of building customized models will be formalized in a framework supporting the proposed approach. This framework may also be used to build models with a similar general structure in fields other than software quality.
Acknowledgement: This work has been supported by the research funds from the Ministry of Science and Higher Education as a research grant no. N N111 291738 for the years 2010-2012.
Chapter 18
Parameters
18.1 Introduction
Ever since Chapter 5, the first ML chapter, you have been reading examples and doing exercises that involve calling functions or methods and passing parameters to them. It is now time for a closer look at this familiar operation. Exactly how are parameters passed from caller to callee? This chapter will look at seven different methods and compare their costs and dangers.
First, some basic terminology. Here is a method definition and call in a Java-like language, with the key parts labeled:
```
int plus(int a, int b) {
return a + b;
}
int x = plus(1, 2);
```
It is important to distinguish between the actual parameters, the parameters passed at the point of call, and the formal parameters, the variables in the called method that correspond to the actual parameters.1
This chapter will use the word method instead of function and will use a Java-like syntax for most of the examples. Real Java cannot illustrate the many different parameter-passing mechanisms used in different languages. In fact, it implements only one of them. So be aware that most of the examples in this chapter are fictitious.
18.2 Correspondence
Before looking at the parameter-passing mechanisms, a preliminary question must be dealt with: how does a language decide which formal parameters go with which actual parameters? In the simplest case, as in ML, Java, and Prolog, the correspondence between actual parameters and formal ones is determined by their positions in the parameter list. Most programming languages use such positional parameters, but some offer additional parameter-passing features. Ada, for example, permits keyword parameters like this:
```
DIVIDE(DIVIDEND => X, DIVISOR => Y);
```
This call to an Ada procedure named DIVIDE passes two actual parameters, X and Y. It matches the actual parameter X to the formal parameter named DIVIDEND and the actual parameter Y to the formal parameter named DIVISOR, regardless of the order in which those formal parameters appear in the definition of DIVIDE. To call a procedure, the programmer does not have to remember the order in which it expects its parameters; instead, the programmer can use the names of the formal parameters to make the correspondence clear. (Of course, an Ada compiler would resolve the correspondence at compile time, so there is no extra runtime cost for using keyword parameters.) Other languages that support keyword parameters include Common Lisp, Dylan, Python, and recent dialects of Fortran. These languages also support positional parameters and allow the two styles to be mixed; the first parameters in a list can be positional, and the remainder can be keyword parameters.
1. Some authors use the word parameter to mean a formal parameter and use the word argument to mean an actual parameter. Many people also use the word parameter informally, referring to either a formal parameter or an actual parameter. This author will always say either formal parameter or actual parameter, thus avoiding any argument.
Another parameter-passing feature offered by some languages is a way to declare optional parameters with default values. The formal parameter list of a function can include default values to be used if the corresponding actual parameters are not given. This gives a very short way of writing certain kinds of overloaded function definitions. For example, consider a C++ definition like this one:
```
int f(int a=1, int b=2, int c=3) {
function body
}
```
With this definition, the caller can provide zero, one, two, or three actual parameters. The actual parameters that are provided are matched with the formal parameters in order. Any formal parameters that are not matched with an actual parameter are initialized with their default values instead. In effect, C++ treats the definition above like the following overloaded collection of four definitions:
```
int f() {return f(1,2,3);}
int f(int a) {return f(a,2,3);}
int f(int a, int b) {return f(a,b,3);}
int f(int a, int b, int c) {
function body
}
```
A few languages, including C, C++, and most of the scripting languages like JavaScript, Python, and Perl, allow actual parameter lists of any length. In C, for example, an ellipsis can appear as the last item in a formal parameter list. The printf library function for C, which takes a format string followed by any number of additional parameters, would be declared like this:
```
int printf(char *format, ...) {
function body
}
```
The function body must use C library routines to access the additional actual parameters. This is a weak spot in C’s static type checking, of course, since the types of the additional parameters cannot be checked statically.
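How the callee walks the extra arguments can be made concrete with the standard stdarg facility, shown here as a small C++ program (the mechanism is inherited from C). The count-prefix convention is chosen for this example only; printf instead infers the number of arguments from its format string.

```cpp
#include <cstdarg>
#include <iostream>

// Sums 'count' additional int arguments. The ellipsis makes the extra
// arguments invisible to static type checking, as noted above.
int sum_ints(int count, ...) {
    va_list args;
    va_start(args, count);           // begin walking the unnamed arguments
    int total = 0;
    for (int i = 0; i < count; ++i) {
        total += va_arg(args, int);  // the caller must really have passed ints
    }
    va_end(args);
    return total;
}

int main() {
    std::cout << sum_ints(3, 10, 20, 12) << "\n";  // prints 42
}
```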
18.3 By Value
The first parameter-passing mechanism this chapter will look at is the most common—passing parameters by value.
For by-value parameter passing, the formal parameter is just like a local variable in the activation record of the called method, with one important difference: it is initialized using the value of the corresponding actual parameter, before the called method begins executing.

The by-value mechanism is the simplest. It is the only one used in real Java. The actual parameter is used only to initialize the corresponding formal parameter. After that, the called method can do anything it wants to with the formal parameter, and the actual parameter is not affected. For example:
```java
int plus(int a, int b) {
    a += b;
    return a;
}

void f() {
    int x = 3;
    int y = 4;
    int z = plus(x, y);
}
```
In this Java code, when the method f calls the method plus, the values of f's variables x and y are used to initialize plus's variables a and b. When the plus method begins executing, the activation records look like this:
[Activation-record diagram: a: 3; b: 4; return address; previous activation record; result: ?]
plus's assignment a += b changes only its own formal parameter a, not the variable x in f that was the corresponding actual parameter on the call.

When parameters are passed by value, changes to the formal parameter do not affect the corresponding actual parameter. That does not mean that the called method is unable to make any changes that are visible to the caller. Consider the ConsCell class from the previous chapter with this method added:
```java
/**
 * Mutator for the head field of this ConsCell.
 * @param h the new int for our head
 */
public void setHead(int h) {
    head = h;
}
```
The method setHead is a mutator—a method that changes the value of a field. Now consider this method f:
```java
void f() {
    ConsCell x = new ConsCell(0, null);
    alter(3, x);
}

void alter(int newHead, ConsCell c) {
    c.setHead(newHead);
    c = null;
}
```
As you can see, the method f creates a ConsCell object and passes its reference to the method alter. When alter begins to execute, the activation records look like this:
[Activation-record diagram: alter's current record shows newHead: 3; c: a reference to the same ConsCell object as the caller's x; a return address; and a link to the previous activation record.]
As the illustration shows, the formal parameter c is a copy of the actual parameter x. This does not mean that there is a copy of the ConsCell object—only that the reference c is a copy of the reference x, so both refer to the same object. Now when the first statement of `alter` executes, it calls the `setHead` method of that object. The object's `head` becomes 3. That change is visible to the caller. When `alter` returns, the object to which `f`'s variable `x` refers will have a new `head` value. On the other hand, when the second statement of `alter` executes, it changes `c` to `null`. This has no effect on the object or on the actual parameter `x`. When `alter` is ready to return, this is the situation:
In general, when a Java method receives a parameter of a reference type, any change it makes to the `object` (like `c.setHead(3)`) is visible to the method's caller, while any change it makes to the `reference` (like `c = null`) is purely local. Another language that only has by-value parameters is C. C programmers often use the same kind of trick to get non-local effects. If a C function should be able to change a variable that is visible to the caller, a pointer to that variable is passed.
When a parameter is passed by value, the actual parameter can be any expression that yields a value suitable for initializing the corresponding formal parameter. It need not be a simple variable. It could be a constant (as in `c.setHead(3)`), an arithmetic expression (as in `c.setHead(1+2)`), or the value returned by another method call (as in `c.setHead(x.getHead())`). This may seem obvious, but it is pointed out here because it is not true of the next few parameter-passing methods.
### 18.4 By Result
A parameter that is passed `by result` is, in a way, the exact opposite of a parameter that is passed by value.
For by-result parameter passing, the formal parameter is just like a local variable in the activation record of the called method—it is uninitialized. After the called method finishes executing, the final value of the formal parameter is assigned to the corresponding actual parameter.
Notice that the actual parameter is not evaluated, only assigned to. No information is communicated from the caller to the called method. A by-result parameter works only in the opposite direction, to communicate information from the called method back to the caller. Here is an example, in a Java-like language but with a fictitious `by-result` keyword. (Parameters not otherwise declared are assumed to be passed by value, as in normal Java.)
```java
void plus(int a, int b, by-result int c) {
    c = a + b;
}

void f() {
    int x = 3;
    int y = 4;
    int z;
    plus(x, y, z);
}
```
In this example, the method `f` calls the method `plus`. The third parameter is passed by result. This means that the actual parameter `z` does not need to be initialized before the call, since its value is never called for. When `plus` starts, its formal parameter `c` is uninitialized.
When `plus` is ready to return, its formal parameter `c` has had a value assigned to it. This has had no immediate effect on the corresponding actual parameter `z`:
[Activation-record diagram: plus's record holds a: 3, b: 4, c: 7, its return address and a link to the previous record; f's record holds x: 3, y: 4, z: ? (still unassigned), its return address and previous-record link.]
Only when `plus` actually returns is the final value of `c` automatically copied to `z`:
[Activation-record diagram: plus's record holds a: 3, b: 4, c: 7, its return address and a link to the previous record; f's record holds x: 3, y: 4, z: 7, its return address and previous-record link.]
To use parameter passing by result, the actual parameter must be something that can have a value assigned to it; a variable, for example, and not a constant. In fact, the actual parameter must be an expression with an lvalue—something that could appear on the left-hand side of an assignment, as was discussed in Chapter 13.
By-result parameter passing is also sometimes called `copy-out`, for obvious reasons. Relatively few languages support pure by-result parameter passing. Algol W is one. Ada language systems also sometimes use the by-result mechanism.
### 18.5 By Value-Result
You have seen by-value parameters for communicating information from the caller to the called method, and you have seen by-result parameters for communicating information in the opposite direction. What about bidirectional communication? What if you want to pass a value into a method through a parameter, and get a different value out through that same parameter? One way to do this is to pass the parameter by `value-result`, which is a simple combination of by-value and by-result.
If you look at the descriptions of by-value and by-result, you will see that the first describes things that happen before the called method begins executing, while the second describes things that happen after it has finished. If you combine the two, you get value-result:
For passing parameters by value-result, the formal parameter is just like a local variable in the activation record of the called method. It is initialized using the value of the corresponding actual parameter, before the called method begins executing. Then, after the called method finishes executing, the final value of the formal parameter is assigned to the actual parameter.
This method behaves like by-value when the method is called and like by-result when the method returns. Because (like by-result) it assigns a value to the actual parameter, it needs the actual parameter to be an lvalue. For example:
```java
void plus(int a, by-value-result int b) {
    b += a;
}

void f() {
    int x = 3;
    plus(4, x);
}
```
When `plus` is called, but before it begins executing, the activation records look like this:
As the illustration shows, the formal parameter \( b \) has been initialized using the value of the actual parameter \( x \). When \( \text{plus} \) has finished, but not yet returned, its value for \( b \) has changed. But the value of the caller's \( x \) has not yet been changed:
Only when the method actually returns is the final value of the formal parameter copied back to the actual parameter, like this:
Value-result parameter passing is sometimes called \textit{copy-in/copy-out}, for obvious reasons. Ada language systems sometimes use the value-result mechanism.
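Standard C++ has no value-result parameters, but the copy-in/copy-out behaviour can be simulated by hand, which makes the mechanism explicit. This sketch is our own illustration rather than how Ada systems actually implement it:

```cpp
#include <iostream>

// Simulates 'void plus(int a, by-value-result int b) { b += a; }'.
// The formal parameter is modelled by the local variable 'b'; the
// copy-out step is the explicit assignment through 'b_actual'.
void plus_value_result(int a, int* b_actual) {
    int b = *b_actual;   // copy-in: initialize the formal from the actual
    b += a;              // the callee works only on its own copy
    *b_actual = b;       // copy-out: assign the final value back on return
}

int main() {
    int x = 3;
    plus_value_result(4, &x);
    std::cout << x << "\n";   // prints 7, as in the example above
}
```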
18.6 By Reference
The three methods of parameter passing seen so far require copying values into and out of the called method's activation record. This can be a problem for languages in which values that take up a lot of space in memory can be passed as parameters. Copying a whole array, string, record, object, or some other large value to pass it as a parameter can be seriously inefficient, both because it slows down the method call and because it fattens up the activation record. This is not a problem in Java, because no primitive-type or reference-type value takes up more than 64 bits of memory. Objects, including arrays, can be large, but they are not passed as parameters—only references are. But languages other than Java sometimes need another method of parameter passing to handle large parameters more efficiently.
One solution is to pass the parameter \textit{by reference}:
For passing parameters by reference, the lvalue of the actual parameter is computed before the called method executes. Inside the called method, that lvalue is used as the lvalue for the corresponding formal parameter. In effect, the formal parameter is an alias for the actual parameter—another name for the same memory location.
Here is an example. It is the same as the example used for value-result parameter passing, except for the (fictional) keyword by-reference to indicate the parameter-passing technique.
```c
void plus(int a, by-reference int b) {
    b += a;
}

void f() {
    int x = 3;
    plus(4, x);
}
```
As in the value-result example, the final value of `x` seen by the method `f` is 7. But the mechanism is quite different. When `plus` is called, but before it begins executing, the activation records look like this:
There is no separate memory location for the value of `b`. The lvalue for `b` (that is, the address where `b`'s value is stored) is the same as the lvalue for `x`, which is indicated with an arrow in the illustration. The effect is that `b` is an alias—just another name for `x`. So when the `plus` method executes the expression `b += a`, the effect is to add 4 to `x`. Unlike value-result, the caller's actual parameter is affected even before the called method returns.
No extra action needs to be taken when the method returns, since the change to `x` has already been made. Notice that no copying of the values of the parameters ever took place. This makes little, if any, difference in efficiency in this example, since copying the value of `x` is probably no more expensive than setting up `b` to be an alias for `x`. But if `x` were some large value, like an array, passing it by reference would be less expensive than the previous parameter-passing methods.
The discussion of by-value parameter passing mentioned a trick that C programmers often use: passing a pointer to a variable rather than the variable itself. By-reference parameter passing is that same trick, really, except that the language system hides most of the details. Although C only has by-value parameter passing, a C program can exhibit the same behavior as the previous example, like this:
```c
void plus(int a, int *b) {
    *b += a;
}

void f() {
    int x = 3;
    plus(4, &x);
}
```
The declaration `int *b` means that `b` is a pointer to an integer. The expression `*b` refers to the integer to which `b` points. The expression `&x` gives a pointer to the integer `x`. An implementation of by-reference parameter passing might well work exactly like this C example. Passing by reference can be implemented simply by passing the actual parameter’s address by value.
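C++ goes one step further than the C pointer trick: it has true reference parameters, so the fictional by-reference example can be written directly in a real language:

```cpp
#include <iostream>

// 'b' is a reference parameter: inside plus it is an alias for the
// caller's variable, so the update is visible immediately.
void plus(int a, int& b) {
    b += a;
}

int main() {
    int x = 3;
    plus(4, x);
    std::cout << x << "\n";   // prints 7
}
```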
By-reference parameter passing is the oldest parameter-passing technique in commercial high-level languages, since it was the only one implemented in early versions of Fortran.
The aliasing that can occur when parameters are passed by reference can be much more deceptive. Consider this example:
```c
void sigsum(by-reference int n, by-reference int ans) {
    ans = 0;
    int i = 1;
    while (i <= n) ans += i++;
}
```
This `sigsum` method takes two integer parameters, `n` and `ans`, by reference. When it is called properly, it stores the sum of the numbers 1 through `n` in `ans`. For example, this function uses it to compute the sum of the numbers 1 through 10:
```c
int f() {
    int x, y;
    x = 10;
    sigsum(x, y);
    return y;
}
```
The simple aliasing that occurs in this example is not a problem. `sigsum`'s variable `n` aliases `f`'s variable `x`, and `sigsum`'s variable `ans` aliases `f`'s variable `y`. Since the aliases have scopes that do not overlap, there is no danger from this. But consider what happens when `sigsum` is called this way:
```c
int g() {
    int x;
    x = 10;
    sigsum(x, x);
    return x;
}
```
You might expect this function `g` to return the same value as `f`, but it does not. Because the function `g` passes `x` for both parameters, `sigsum`'s variables `n` and `ans` are actually aliases for each other. (That's what makes this kind of aliasing so deceptive. The variables look innocent enough. It might not occur to you that, depending on how the function is called, they can actually have the same lvalue.) This is how the activation records look when `sigsum` is called, before it begins executing:
The first thing `sigsum` does is initialize `ans` to zero. Since `ans` and `n` are aliased, this also sets `n` to zero. That makes the loop guard for `sigsum`'s while loop, `(i <= n)`, immediately false, so `sigsum` returns to its caller. This is how the activation records look:
The function `g` returns the value 0, instead of the sum of the numbers 1 through 10.
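The deceptive aliasing is easy to reproduce with C++ reference parameters. The following direct translation of the example prints 55 for the ordinary call and 0 for the aliased one:

```cpp
#include <iostream>

// Both parameters are passed by reference, as in the example above.
void sigsum(int& n, int& ans) {
    ans = 0;             // if ans and n alias each other, this also zeroes n
    int i = 1;
    while (i <= n) ans += i++;
}

int main() {
    int x = 10, y = 0;
    sigsum(x, y);
    std::cout << y << "\n";   // prints 55: n and ans alias different variables

    int z = 10;
    sigsum(z, z);             // n and ans now alias the same variable
    std::cout << z << "\n";   // prints 0: the loop guard is immediately false
}
```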
### 18.7 By Macro Expansion

Another way to handle parameters is by macro expansion: the text of each actual parameter is substituted for the corresponding formal parameter, and the resulting macro body is then substituted into the caller's code in place of the call. Consider a C macro intended to swap two integer variables, defined as `#define intswap(x,y) {int temp=x; x=y; y=temp;}`, and this program that uses it:

```c
int main() {
    int temp=1, b=2;
    intswap(temp, b);
    printf("%d, %d\n", temp, b);
}
```
This program prints the string "1, 2", showing that it does not swap the two variables. (See if you can figure out why, before reading on!)
The macro expansion shows why. Before the compiler sees the program, the preprocessing step expands the macro this way:
```c
int main() {
    int temp=1, b=2;
    {int temp= temp ; temp = b ; b =temp;}
    printf("%d, %d\n", temp, b);
}
```
The actual parameter `temp` is evaluated in an environment that has a new, local definition of a variable called `temp`. This is a kind of behavior that programmers usually find surprising. They have the habit of thinking that the names of local variables in the called method are irrelevant to the caller. But macros are not methods, and the names of local variables in a macro body may be critically important to the caller.
This phenomenon has a name—**capture**. In any program fragment, an occurrence of a variable that is not statically bound within the fragment is **free**. For example, in this fragment, the occurrences of `temp` are bound while the occurrences of `a` and `b` are free:
```c
{int temp= a ; a = b ; b =temp;}
```
In the problematic use of the `intswap` macro, `intswap(temp, b)`, the two actual parameters are program fragments with free variables, `temp` and `b`. When these program fragments are substituted into the body of the macro, the free variable `temp` is "captured" by the local definition of `temp` in the macro body. Capture can also occur when the macro body is substituted into the body of the caller. If the macro contains occurrences of a global variable and if the caller has a local definition of the same name, the macro's occurrences will be captured by the caller's definition.
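The capture problem, and a common defence against it, can be demonstrated with the preprocessor. The renamed temporary in the second macro is our own workaround; production macro libraries typically also parenthesise their arguments:

```cpp
#include <iostream>

// The problematic macro from the text: its local 'temp' can capture a
// caller variable of the same name.
#define intswap(x, y) {int temp = x; x = y; y = temp;}

// A common defence: give the macro's local a name unlikely to collide.
#define intswap2(x, y) {int intswap2_tmp_ = x; x = y; y = intswap2_tmp_;}

int main() {
    int temp = 1, b = 2;
    intswap(temp, b);                       // 'temp' is captured: no swap happens
    std::cout << temp << ", " << b << "\n"; // prints 1, 2

    int c = 1, d = 2;
    intswap2(c, d);                         // no capture this time
    std::cout << c << ", " << d << "\n";    // prints 2, 1
}
```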
The other parameter-passing methods you have seen can be mixed, and are mixed in some languages. In Pascal, for example, a procedure can take some of its parameters by reference and others by value. But macro expansion is really an all-or-nothing affair. Just substituting the text of an actual parameter for the formal parameter would be nearly useless, since the actual parameter could not then refer to variables in the caller's context. To pass parameters by macro expansion, you must also substitute the text of the macro body back into the caller's code. When it is implemented by textual substitution like this, a macro does not have an activation record of its own, and generally cannot be recursive.
Macro expansion has been explained in terms of textual substitutions that happen before compilation and execution. This is the easiest way to think about it. But the trick of making textual substitution is just an implementation technique—one that is useful for compiled language systems like C, but not the only one that can be imagined. The important thing about macro expansion is not its implementation, it is its effect. The body of a macro must be evaluated in the caller's context, so that free variables in the macro body can be captured by the caller's definitions. Each actual parameter must be evaluated on every use of the corresponding formal parameter, in the context of that occurrence of the formal parameter, so that free variables in the actual parameter can be captured by the macro body's definitions.
Any implementation that achieves this can be said to pass parameters by macro expansion, even if it does not do textual substitution before compilation.
For passing parameters by macro expansion, the body of the macro is evaluated in the caller's context. Each actual parameter is evaluated on every use of the corresponding formal parameter, in the context of that occurrence of that formal parameter (which is itself in the caller's context).
### 18.8 By Name
The phenomenon of capture is a drawback for macro expansion. One way to eliminate it is to pass parameters by name. In this technique, each actual parameter is evaluated in the caller's context, on every use of the corresponding formal parameter. Macro expansion puts each actual parameter in the context of a use of the formal parameter, and then puts the whole macro body in the caller's context. In this way the actual parameters can get access to the caller's local definitions. But it happens in an indirect way that risks capture. Passing parameters by name skips the middle man; the actual parameter is evaluated directly in the caller's context.
For passing parameters by name, each actual parameter is evaluated in the caller's context, on every use of the corresponding formal parameter.
As with macro expansion, if a formal parameter passed by name is not used in the called method, the corresponding actual parameter is never evaluated.
Passing parameters by name is simple to describe in the abstract, but it is rather difficult to implement. It can be done by macro-style substitution, if the names used in the method body are changed to avoid capture. But this is not efficient enough for practical implementations. In practical implementations, the actual parameter is treated like a little anonymous function. Whenever the called method needs the value of the formal parameter (either its lvalue or its rvalue), it uses that little anonymous function to get it. Here is an example:
```c
void f(by-name int a, by-name int b) {
    b = 5;
    b = a;
}

int g() {
    int i = 3;
    f(i+1, i);
    return i;
}
```
This is what the activation records look like when g calls f, before f starts executing:
![Activation Records Diagram]
This illustration shows how the formal parameters a and b are bound to two anonymous functions. The function for a knows how to compute the value for i+1, and the function for b knows how to compute i. These little anonymous functions need the caller’s context to get the variable i from the caller. As was shown in Chapter 12 when passing functions as parameters, the thing that is passed from caller to callee has two parts: the code for the actual parameter and the nesting link to use with it.
When f executes the expression b=5, it calls for the lvalue of b. The anonymous function for b supplies i’s lvalue. The result is that the value 5 is stored in the caller’s variable i. So far this seems to be working like a by-reference parameter—the change to the formal parameter immediately changed the corresponding actual parameter. But there is more to the story. When f executes the expression b=a, it calls again for the lvalue of b (which is recomputed) and for the rvalue of a. The anonymous function for a computes the value i+1 in the caller’s context—using the current value for i—which produces the value 6. This 6 is the value stored back in i and is the value that the function g returns.
Notice the similarities between by-name parameter passing and the anonymous functions experimented with in ML. A method call like f(i+1,i), if it passes parameters by name, is like a shorthand notation for passing two little anonymous functions without parameters. In ML, this might be written as f(fn () => i+1, fn () => i);
As in ML, the functions are passed with nesting links that allow them to access variables in the caller’s context.
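The thunk view of by-name parameters can be imitated in C++ by passing a pair of little functions, one producing the rvalue and one storing through the lvalue. This is our own emulation, not a language feature; it reproduces the example's final value of 6:

```cpp
#include <functional>
#include <iostream>

// A by-name parameter is emulated by a thunk that recomputes the actual
// parameter's value, plus (for assignable actuals) access to its location.
struct ByNameInt {
    std::function<int()> get;     // recompute the rvalue in the caller's context
    std::function<void(int)> set; // store through the lvalue in the caller's context
};

// void f(by-name int a, by-name int b) { b = 5; b = a; }
void f(ByNameInt a, ByNameInt b) {
    b.set(5);        // b = 5;
    b.set(a.get());  // b = a;  (a is reevaluated here, after i changed)
}

int g() {
    int i = 3;
    ByNameInt a{[&] { return i + 1; }, nullptr};            // actual parameter i+1 (not assignable)
    ByNameInt b{[&] { return i; }, [&](int v) { i = v; }};  // actual parameter i
    f(a, b);
    return i;
}

int main() {
    std::cout << g() << "\n";   // prints 6, matching the walkthrough above
}
```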
By-name parameter passing was introduced in Algol 60 (which also introduced by-value parameter passing). It can be used to do some impressively tricky things, but on the whole it was not a successful invention. It is difficult to implement and can be inefficient. Moreover, most programmers prefer a parameter-passing technique that is easier to understand, even if it is not as flexible. By-name parameter passing is one of the few things introduced in Algol 60 that has not been widely copied in later languages. However, the variation shown next, by-need parameter passing, is used in many functional languages.
---
2. There is a customary name for the little-anonymous-function-and-nesting-link used to implement by-name parameter passing. It is called a thunk.
### 18.9 By Need

By-name parameter passing reevaluates an actual parameter on every use of the corresponding formal parameter. Later evaluations of an actual parameter usually produce the same value as the first evaluation (except for unusual examples like the one above, in which the evaluation of the actual parameter is affected by an intervening side effect). Passing parameters by need eliminates this unnecessary recomputation.
For passing parameters by need, each actual parameter is evaluated in the caller's context, on the first use of the corresponding formal parameter. The value of the actual parameter is then cached, so that subsequent uses of the corresponding formal parameter do not cause reevaluation.
The previous example—the one that demonstrates parameters passed by name—would produce the same result if the parameters were passed by need. Each actual parameter would be evaluated only once. But the parameter \(i+1\) would be evaluated after the value 5 was assigned to \(i\), so the outcome would be the same.
On the other hand, here is an example that shows the difference between by-name and by-need parameters:
```java
void f(by-need int a, by-need int b) {
    b = a;
    b = a;
}

int g() {
    int i = 3;
    f(i+1, i);
    return i;
}
```
When \(f\) is called, its first assignment expression \(b = a\) has the same effect as evaluating \(i = i+1\) in the caller's context; it changes \(g\)'s variable \(i\) to 4. But the second assignment does not reevaluate the actual parameters. It just sets \(i\) to 4 again. By-name parameters would be reevaluated for the second assignment, so \(i\) would end up as 5. As you can see, the difference depends on the side effects. Without side effects, the only way to detect the difference between by-name and by-need parameter passing is by the difference in cost. If a formal parameter is used frequently in the called method and if the corresponding actual parameter is an expensive expression to evaluate, by-name parameters can be much slower than by-need parameters.
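Turning the by-name emulation into by-need only requires caching the first value. The sketch below (again an emulation, with f's body inlined for brevity) shows the 4-versus-5 difference just described:

```cpp
#include <functional>
#include <iostream>

// A by-need thunk: evaluate the actual parameter on first use, then cache it.
struct ByNeedInt {
    std::function<int()> compute;
    bool evaluated = false;
    int cached = 0;
    int get() {
        if (!evaluated) { cached = compute(); evaluated = true; }
        return cached;
    }
};

int g_by_need() {
    int i = 3;
    ByNeedInt a{[&] { return i + 1; }};   // actual parameter i+1
    // body of f: b = a; b = a;  -- b's lvalue is i, a is fetched through the thunk
    i = a.get();   // first use: i+1 is evaluated (4) and cached; i becomes 4
    i = a.get();   // second use: the cached 4 is reused; i stays 4
    return i;
}

int g_by_name() {
    int i = 3;
    auto a = [&] { return i + 1; };       // by-name: reevaluated on every use
    i = a();   // i becomes 4
    i = a();   // i+1 is recomputed: i becomes 5
    return i;
}

int main() {
    std::cout << g_by_need() << " " << g_by_name() << "\n";   // prints 4 5
}
```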
All three of the last parameter-passing methods—macro expansion, passing by name, and passing by need—have the property that if the called method does not use a formal parameter, the corresponding actual parameter is never evaluated. Consider this function, which implements the \&\& operator of Java:
```java
boolean andand(by-need boolean a, by-need boolean b) {
    if (!a) return false;
    else return b;
}
```
This example short-circuits, just like Java's \&\& and ML's \(\text{andalso}\). If the first parameter is false, andand decides the result is false without ever evaluating the second parameter. This can make a big difference, in more than just efficiency. For example:
```java
boolean g() {
    while (true) {
    }
    return true;
}

void f() {
    andand(false, g());
}
```
When \(f\) calls \(\text{andand}\), it passes the expressions \(false\) and \(g()\) by need. Since the first parameter is false, \(\text{andand}\) never evaluates the second parameter; that is, it never calls \(g\). This is easily observable when the program runs, since the function \(g\) has an infinite loop. If \(f\) did call it, the program would hang. As it is, \(f\) completes normally.
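The same short-circuit effect can be obtained in C++ by passing the second operand as a deferred computation. Because the first argument is false, the potentially non-terminating function is never called; this emulates the by-need andand rather than real lazy evaluation:

```cpp
#include <functional>
#include <iostream>

// 'b' is only evaluated if needed, mimicking the by-need parameter above.
bool andand(bool a, const std::function<bool()>& b) {
    if (!a) return false;
    return b();
}

bool g() {
    while (true) { }   // never terminates if it is ever called
    return true;
}

int main() {
    // g is wrapped in a lambda, so it is not called at the point of call.
    bool r = andand(false, [] { return g(); });
    std::cout << std::boolalpha << r << "\n";   // prints false and terminates
}
```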
By-need parameter passing is used in so-called lazy functional languages, such as Haskell. Such languages evaluate only as much of the program as necessary to get an answer. Consistent with that philosophy, they only evaluate an actual parameter if the corresponding formal parameter is really used. As in the example above, lazy languages can produce an answer where an eager language hangs or gets an exception.
---
### 18.10 Specification Issues
You have now seen seven different methods of passing parameters. Are these techniques part of the language specification? Does a programmer know and rely on the peculiarities of the parameter-passing technique used by the language system, or are they just hidden system-implementation details? The answer depends on the language.
In languages without side effects, the exact parameter-passing technique is often invisible to the programmer. For functional languages, the big question is whether actual parameters are always evaluated (eager evaluation) or whether they are evaluated only if the corresponding formal parameter is actually used (lazy evaluation). ML uses eager evaluation. It guarantees to evaluate all actual parameters, whether they are used in the called method or not. It follows that an ML language system does not pass parameters by name, by need, or by macro expansion. But it is difficult to distinguish among the other possibilities without side effects, and there are no side effects in the subset of ML seen so far.
In imperative languages, it is possible to write a program whose behavior differs with each of our seven parameter-passing techniques. In that sense the technique is perfectly visible to the programmer. Nevertheless, some imperative language specifications define parameter passing abstractly, so that the language system is free to use one of several techniques. Ada, for example, has three parameter-passing "modes": in, out, and in out. An in parameter is used to pass values into the called method. The language treats in-mode formal parameters rather like constants in the called method and does not permit assignments to them. An out parameter is used to pass values out of the called method. The language requires assignment to out-mode formal parameters in the called method, and does not permit their values to be read. An in out parameter is used for two-way communication. It may be both read and assigned within the called method. None of this specifies whether the formal parameters are copies of the actual parameters or references to the actual parameters. For scalar values, the Ada standard specifies copying. But for aggregates like arrays and records it permits implementations to go either way. For example, an implementation might pass in out parameters by reference or by value-result. A program that can tell the difference (like some of the examples in this chapter) is simply not considered to be a valid Ada program.
This exemplifies the abstract definition of parameter passing. In this approach, the parameter-passing techniques in this chapter are all considered to be implementation details—they belong in the language system, not in the definition of the language itself. The language definition specifies parameter passing abstractly, not saying how parameters should be implemented, but only how they should be used. Any program that tries to make use of implementation-specific, parameter-passing properties not guaranteed by the language deserves what it gets.
### 18.11 Conclusion
The parameter-passing techniques described in this chapter are among the most commonly used, but they are certainly not the only techniques that have been tried. In particular, Prolog handles parameters in a completely different way, as you will see starting in the next chapter. Further, there are many minor variations on the
A Statement Level Bug Localization Technique using Statement Dependency Graph
Shanto Rahman, Md. Mostafijur Rahman, Ahmad Tahmid and Kazi Sakib
Institute of Information Technology, University of Dhaka, 1205, Bangladesh
{bit0321, bit0312, bit0332, sakib}@iit.du.ac.bd
Keywords: Statement level bug localization, search space minimization, statement dependency, similarity measurement
Abstract: Existing bug localization techniques suggest source code methods or classes as buggy, which requires manual investigation to find the buggy statements. Considering this issue, this paper proposes Statement level Bug Localization (SBL), which can effectively identify buggy statements in the source code. In SBL, relevant buggy methods are ranked using dynamic analysis followed by static analysis of the source code. For each ranked buggy method, a Method Statement Dependency Graph (MSDG) is constructed where each statement acts as a node of the graph. Since each statement contains little information, the information is maximized by combining the contents of each node and its predecessor nodes in the MSDG, resulting in a Node Predecessor-node Dependency Graph (NPDG). To identify relevant statements for a bug, similarity is measured between the bug report and each node of the NPDG using the Vector Space Model (VSM). Finally, the buggy statements are ranked based on the similarity scores. Rigorous experiments on three open source projects, namely Eclipse, SWT and PasswordProtector, show that SBL localizes buggy statements with reasonable accuracy.
1 Introduction
In automatic software bug localization, finding bugs at a more granular level, i.e., the statements of the source code, is needed because it reduces software cost by minimizing maintenance effort. Many bug localization techniques have already been proposed which rank buggy classes (Zhou et al., 2012) or methods (Poshyvanyk et al., 2007). Suggesting buggy classes provides a large solution search space in which a bug can reside. Among the suggested class list, it is almost impossible to find the buggy statements within a short time because a class may contain numerous statements. Although suggesting buggy methods is better than suggesting classes, this hardly reduces the time consumption because a method may also have a large number of statements.
Although statement level bug localization is required, it faces several challenges, such as the large amount of irrelevant information in a project and the small amount of valid information within a statement. A software project often contains a large number of statements carrying massive amounts of information that is irrelevant for a given bug. For example, Bug Id 31779 of Eclipse is related to src.org.eclipse.core.internal.localstore.UnifiedTree.java, whereas the total number of files in Eclipse is 86206; except for the aforementioned buggy file, all other files are irrelevant for this bug. As bug localization using a bug report follows a probabilistic approach, considering irrelevant information can mislead the ranking of buggy statements. Therefore, irrelevant source code needs to be discarded as much as possible. Meanwhile, a single statement contains very little information about a bug. For example, for Bug Id 31779, line 1221 (i.e., child = createChildNodeFromFileSystem(node, parentLocalLocation, localName);) is one of the buggy statements, and it contains little information about the bug. Suggesting buggy statements using only this information is another challenging issue.
Fault localization is closely related to bug localization; the main difference is that fault localization does not consider the bug report whereas bug localization does (Zhou et al., 2012). In real-life projects, a user or the Quality Assurance (QA) team reports a faulty scenario, and the bug is fixed using that report. Although several studies address bug localization (Zhou et al., 2012), (Poshyvanyk et al., 2007), to the best of the authors' knowledge, no research has yet been conducted to suggest buggy statements using the bug report. Zhou et al. propose a class level bug localization technique by considering the whole source code as a search space. As a result, bias may be introduced. As this technique suggests classes, it demands manual inspection of the source code files to find the buggy statements. Considering this issue, a comparatively more granular, i.e., method level, suggestion is derived (Poshyvanyk et al., 2007), (Rahman and Sakib, 2016). Poshyvanyk et al. propose PROMESIR, which suggests methods as buggy (Poshyvanyk et al., 2007). Here, the authors consider the whole source code, which may degrade the accuracy of bug localization (Rahman and Sakib, 2016). To improve the accuracy, Rahman et al. have recently introduced MBuM, where irrelevant source code for a bug is discarded (Rahman and Sakib, 2016). Still, developers need manual investigation to find the actual buggy statements. Interestingly, the minimized search space extracted in MBuM can be used to reach a more granular level.
1 https://bugs.eclipse.org/bugs/attachment.cgi?id=3472&action=diff
In this paper, Statement level Bug Localization (SBL) is proposed, where statements are suggested as buggy. At first, SBL extracts the source code methods generated by (Rahman and Sakib, 2016). As the statements of a method are related to each other, for each suggested method a dependency relationship is developed among the statements, which is named the Method Statement Dependency Graph (MSDG). The information of each node is processed to produce corpora. Later, a Node Predecessor-node Dependency Graph (NPDG) is developed, where the corpora of each node (i.e., statement of the source code) and its predecessor nodes in the MSDG are combined. This is because a statement often contains little information about a bug; combining each node with its predecessors increases the valid information of each statement. The similarity between the bug report and each node of the NPDG is measured using VSM. These statement similarity scores are weighted by the corresponding method similarity score. Finally, a list of buggy statements is suggested in descending order of the similarity scores.
The effectiveness of SBL has been measured using 230 bugs from three open source projects, namely Eclipse, SWT and PasswordProtector, where Top N Rank, Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP) are used as the evaluation metrics. In all the projects, SBL ranks more than 28.33% of the buggy statements at top 1 and more than 58.33% within the top 5. In terms of MAP, accuracies of 38.5%, 44.2% and 61% are obtained for Eclipse, SWT and PasswordProtector respectively. For MRR, accuracies of 45.8%, 51.3% and 66% have been achieved for the same projects. These results support the assumptions SBL makes.
2 Literature Review
This section focuses on research conducted to suggest buggy classes or methods. The following discussion first covers the techniques which suggest classes as buggy, and then describes the method level bug localization techniques.
2.1 Class Level Bug Localization
Zhou et al. propose BugLocator, where class level buggy locations are suggested (Zhou et al., 2012). Here, two sets of corpora are generated, one for the bug report and another for the source code, using several text-processing approaches such as stop word removal, multi-word identifier splitting and stemming. Similarity is measured between these two sets of corpora by a revised Vector Space Model (rVSM). This technique considers the whole source code for static analysis, which may degrade the accuracy (Rahman and Sakib, 2016). An improved version of BugLocator is addressed by Saha et al., where special weights are assigned to structural information including class names, method names, variable names and comments of the source code (Saha et al., 2013). Due to considering the whole source code for a bug, the accuracy of these techniques may be biased (Rahman and Sakib, 2016).
Later, another class level bug localization technique is proposed by Tantithamthavorn et al. (Tantithamthavorn et al., 2013). Here, the intrinsic assumption is that when a bug is fixed, a set of classes is changed together. Based on this concept, a co-change score is calculated and a list of co-changed files is identified. Files with a large number of changes receive a high score of being buggy, whereas a file with rare changes gets a small score and thus has a low probability of being buggy. After finding the co-change score for each file, these results are adjusted with BugLocator (Zhou et al., 2012). As this technique suggests classes as buggy, manual searching is needed to find the actual buggy statements from the source code, which consumes a lot of time.
Sisman and Kak incorporate version histories (Sisman and Kak, 2012) to suggest buggy locations. Similarly, Wang et al. introduce a technique that considers similar bug reports, version history and the structure of the source code (Wang and Lo, 2014). This technique also suggests class level buggy locations, so developers have to spend searching time to find the buggy statements. Based on such version histories and structural information of the source code, Rahman et al. recently propose another class level bug localization technique (Rahman et al., 2015), in which a suspicious score is identified for each class from this combined information.
2.2 Method Level Bug Localization
Lukins et al. propose a method level bug localization technique (Lukins et al., 2008) where each method of the source code is considered as the unit of measurement. The source code is processed using stop word removal, programming language specific keyword removal, multi-word identifier splitting and stemming. Later, a Latent Dirichlet Allocation (LDA) model is generated, and that model is queried using the bug report to get the buggy methods. Another method level localization technique considers the whole source code and extracts the semantic meaning of each method (Nichols, 2010). The authors gather extra information from the previous bug history. When a new bug arrives, Latent Semantic Indexing (LSI) is applied on the method documents to identify the relationships between the terms of the bug report and the concepts of the method documents. Based on that, a list of buggy methods is suggested.
A feature location based bug localization approach is introduced in (Alhindawi et al., 2013), where the source code corpus is enhanced with stereotypes. Stereotypes represent the roles of words that are commonly used in programming; for example, the stereotype 'get' means that a method returns a value. This stereotype information is derived automatically from the source code via program analysis. After adding the stereotype information to the source code methods, an IR technique is used to execute the queries. However, as the technique suggests methods as buggy, it still requires a lot of time to find the buggy statements.
The first dynamic analysis based bug localization technique is proposed by Wilde et al., where minimization of the source code is performed by considering passing and failing test cases of the program (Wilde et al., 1992). However, due to using passing test cases, irrelevant features may be included in the search space, and the accuracy of bug localization may be hampered. An improved version of (Wilde et al., 1992) is proposed by Eisenbarth et al., where both dynamic and static analysis of the source code are combined (Eisenbarth et al., 2003). Here, static analysis identifies the dependencies among the data to locate the features in a program, while dynamic analysis collects the source code execution traces for a set of scenarios. Poshyvanyk et al. propose PROMESIR, where the authors also use both static and dynamic analysis of the source code.
Through dynamic analysis, the executed buggy methods are extracted for a bug. Initially, the two analysis techniques produce bug similarity scores independently, without interacting with each other. Finally, these two scores are combined and a weighted ranking score for each method is measured. Although this technique uses dynamic information of the source code, it fails to minimize the solution search space during static analysis, so the bug localization accuracy may deteriorate. Manual searching is still required because methods, rather than statements, are suggested as buggy.
Recently, Rahman et al. introduced MBuM, which focuses on search space minimization by applying dynamic analysis of the source code (Rahman and Sakib, 2016). Executed methods are identified by reproducing the bug, and during static analysis only the contents of those executed methods are extracted. As a result, irrelevant source code is removed from the search space. The remaining text is processed to measure the similarity between the bug report and each source code method using a modified Vector Space Model (mVSM). In mVSM, the length of the method is incorporated into the existing VSM.
From the above discussion, it is clear that the accuracy depends on identifying the valid information domain by removing irrelevant source code. Besides, existing approaches suggest either classes or methods as buggy, which still demands manual inspection to find more granular buggy locations, i.e., buggy statements. As a result, the time for localizing a bug increases, which undermines the purpose of automatic software bug localization.
3 The Proposed Approach
This section proposes Statement level Bug Localization (SBL), which consists of two major phases: minimizing the irrelevant search space while maximizing the relevant data domain, and ranking buggy statements from that relevant data domain. The details are described in the following subsections.
3.1 Minimization of Irrelevant Search Space by Identifying Probable Buggy Methods
Since MBuM provides better ranking of buggy methods than others, that ranked list of buggy methods is used here as minimized search space (Rahman and Sakib, 2016). To do so, at first irrelevant source code is discarded by considering source code execution traces. Only the relevant method contents are processed using several text processing techniques such as stop word removal, programming language specific keyword removal, multi-word identifications, semantic meaning extraction and stemming which produce code corpora. Similarly, bug report is also processed to generate bug corpora. Finally, textual similarity is measured between code and bug corpora by modified Vector Space Model (mVSM) where the lengths of the source code methods are considered. The overall procedure is demonstrated in Figure 1.
3.2 Statement Level Bug Localization using Minimized Search Space
In this section, source code statements are suggested as buggy by considering only the ranked buggy methods described in Section 3.1. For each ranked method, a graph is generated where each statement acts as a node and an edge between two nodes represents the callee (predecessor) - called (successor) relationship between statements. This graph is named the Method Statement Dependency Graph (MSDG). For each node, a super node is generated by combining the node and its predecessors, because the execution of a node may depend on its predecessor nodes. This process maximizes the valid information of a statement. The resulting graph is called the Node Predecessor-node Dependency Graph (NPDG). To suggest a list of buggy statements, a node similarity score \( N_s \) is measured between each node of the NPDG and the bug report using VSM. For each node, the \( N_s \) score is weighted by the score \( M_s \) of the method (obtained from MBuM) which contains the node. All these steps are described in the following subsections.
3.2.1 Generation of Method Statement Dependency Graph (MSDG)
The MSDG is developed because one statement may have relationships with other statements (e.g., in \( x = 5; y = x + z \); \( x \) and \( y \) have a relationship and \( x \) is a predecessor of \( y \)), and defects very often propagate from one statement to another (e.g., one variable holds a null value, and when the next statement uses that value, a null pointer exception is raised) (Chen and Rajlich, 2001). Usually, errors propagate forward (Rajlich, 1997). As a result, the statement which is executed first may affect the statements which use its result.
For a better understanding of the generation of the MSDG, a Java source code class is considered (see Figure 2). Here, two methods, namely calculateProfit and calculateOperationCost, are available within the class IncomeProfit. Each method introduces some local, global and method call variables. The method calculateProfit contains sell, buy, etc. as local variables, whereas profit is a global variable. In addition, calculateProfit also depends on the result of the calculateOperationCost method, so operationCost at line 7 is a method call variable.
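Figure 2 itself is not reproduced in this text, so the following Java sketch is only a hypothetical reconstruction of the class described above; the initial values and exact line positions are assumptions, while the variable names, method names and printed strings follow the prose.

```java
// Hypothetical reconstruction of the IncomeProfit class described in the text.
// Initial values and exact line numbers are assumptions; only the variable and
// method names and the printed strings come from the prose.
public class IncomeProfit {

    double profit = 0.0;                                 // global (field) variable

    public void calculateProfit() {
        int sell1 = 500;                                 // assumed initial value
        int buy1 = 300;                                  // assumed initial value
        int sell = sell1;                                // predecessor: int sell1
        int buy = buy1;                                  // predecessor: int buy1
        double operationCost = calculateOperationCost(); // method call variable
        profit += sell - buy - operationCost;            // uses sell, buy, operationCost, profit
        if (profit > 0) {
            System.out.println("Yes you gain profit");
        } else {
            System.out.println("Sorry you lose");
        }
    }

    public double calculateOperationCost() {
        double cost = 0.0;                               // cost computation elided in the prose
        return cost;                                     // the return statement referenced as line 20
    }
}
```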
For the calculateProfit method, an MSDG, which is a directed graph, is depicted in Figure 3. Let \( G = (V,E) \) be a graph where \( V \) is the set of vertices, i.e., the statements of the method. Each vertex stores some properties of the source code (i.e., package name, class name, method name, method score and line number of the statement). Figure 4 shows these properties for the node int sell = sell1 of the MSDG in Figure 3; all other nodes contain the same types of information. \( E \) is the set of directed edges between statements, which represents the data flow. If there exists an edge \((v_1, v_2)\) from node \(v_1\) to \(v_2\), \(v_1\) is said to be a predecessor of \(v_2\). In source code, predecessors can be defined as follows: if a statement contains an assignment such as \(x = y + z\), the statements that last modified \(y\) and \(z\) are its predecessors. For arithmetic assignment operators such as \(x += y\), the last modification of \(x\) is also considered a predecessor. In the case of increment (\(x++\)), decrement (\(x--\)) and conditional statements (e.g., if \((x > 0)\)), the statement containing the last modification of \(x\) is a predecessor. These rules are needed to maximize the valid information of each statement, which can improve the accuracy of the bug localization technique. To find the last modification of a variable, three cases are identified based on the types of predecessor variables: Local Variable Usage (LVU), Global Variable Usage (GVU) and Method Return Variable Call (MRVC).
1. In LVU, the scope of the predecessor variable is bounded within the method. In the calculateProfit method of Figure 2, int sell1 affects sell, hence int sell1 is the predecessor of sell. Similarly, int buy1 is the predecessor of buy (see Figure 2), and so on.
2. In GVU, the predecessor variable is declared globally and multiple methods may use it. The declaration double profit = 0.0 in line 3 affects line 8, where profit is calculated using sell, buy, operationCost and profit. Therefore, line 3 is a predecessor of line 8.
3. MRVC indicates that a statement calls a method which returns a value. In this case, the last modification of the called method's return variable is the predecessor of the current node. In Figure 2, line 7 shows that operationCost calls a method (i.e., calculateOperationCost), which returns the cost variable (see line 20). Therefore, line 20 is the predecessor of the calling node (i.e., line 7), as shown in Figure 3.
From Figure 3 it can be seen that the last two nodes, System.out.println("Yes you gain profit") and System.out.println("Sorry you lose"), have no edges to other nodes because they do not use any variables or call any methods. As a result, none of the above mentioned cases (i.e., LVU, GVU and MRVC) are satisfied.
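The construction just described can be approximated with a simple last-modification table. The sketch below is illustrative rather than the authors' implementation: the class names (MsdgSketch, Stmt) and the precomputed uses/defs sets are assumptions, and only LVU/GVU-style predecessor lookup is covered, since resolving MRVC would additionally require linking a call site to the called method's return statement.

```java
import java.util.*;

/** Illustrative MSDG builder: nodes are statements, and an edge points from the
 *  statement that last modified a variable to the statement that uses it. */
class MsdgSketch {

    static class Stmt {
        final int line;                 // line number of the statement
        final String text;              // raw statement text
        final Set<String> uses;         // variables read by the statement
        final Set<String> defs;         // variables written by the statement
        Stmt(int line, String text, Set<String> uses, Set<String> defs) {
            this.line = line; this.text = text; this.uses = uses; this.defs = defs;
        }
    }

    /** predecessors.get(s) = statements whose results s depends on. */
    static Map<Stmt, Set<Stmt>> buildMsdg(List<Stmt> statements) {
        Map<String, Stmt> lastModified = new HashMap<>();   // variable -> last defining statement
        Map<Stmt, Set<Stmt>> predecessors = new LinkedHashMap<>();
        for (Stmt s : statements) {
            Set<Stmt> preds = new LinkedHashSet<>();
            for (String var : s.uses) {                     // LVU / GVU cases
                Stmt def = lastModified.get(var);
                if (def != null) preds.add(def);
            }
            predecessors.put(s, preds);
            for (String var : s.defs) lastModified.put(var, s);
        }
        return predecessors;
    }
}
```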
3.2.2 Generation of Node Predecessor-node Dependency Graph (NPDG)
In this phase, textual information is maximized by combining the corpora of each node with those of its predecessors (i.e., only the direct predecessors). At first, each node of the MSDG, containing a statement of the source code, is processed by applying several text preprocessing techniques. Stop words (e.g., am, is, are) and programming language specific keywords (e.g., int, HashMap, public) are removed because these words do not provide any bug specific information but rather increase ambiguity and affect the accuracy of bug localization. Besides this, multi-word identifier splitting is performed using multiple separators such as Camel case, underscore and dot. This splitting helps to increase the valid information about a bug. For example, if a word is gasCost, Camel case splitting yields the two terms gas and Cost.
The remaining words are transformed into lower case so that all words have the same format. The semantic meanings of each word are identified using WordNet to increase the valid information (Finlayson, 2014); here, only WordNet\(^2\) synsets containing a single word are considered. Porter stemming is applied on the remaining words to get their root forms, because each word may appear in multiple forms. For example, a developer may use the word 'contains' whereas QA uses 'containing'; the basic concept or root form is the same (i.e., contain) even though multiple forms of the word are used. After applying these text-processing techniques, a list of valid code corpora is obtained.
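A rough version of this preprocessing pipeline is sketched below. The stop-word and keyword lists are abbreviated, a crude suffix-stripping rule stands in for Porter stemming, and the WordNet synonym expansion is omitted, so this only approximates the authors' pipeline; all class and method names here are illustrative.

```java
import java.util.*;

/** Approximate corpus builder for a single statement (see caveats above). */
class CorpusSketch {

    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("am", "is", "are", "the", "a", "an"));
    private static final Set<String> JAVA_KEYWORDS =
            new HashSet<>(Arrays.asList("int", "double", "public", "void",
                                        "return", "if", "else", "new"));

    static List<String> toCorpus(String statement) {
        List<String> terms = new ArrayList<>();
        // split on non-alphanumeric separators (dots, underscores, operators, ...)
        for (String token : statement.split("[^A-Za-z0-9]+")) {
            // split camelCase identifiers, e.g. gasCost -> gas, Cost
            for (String part : token.split("(?<=[a-z])(?=[A-Z])")) {
                String word = part.toLowerCase();
                if (word.isEmpty() || STOP_WORDS.contains(word)
                        || JAVA_KEYWORDS.contains(word)) continue;
                terms.add(stem(word));
            }
        }
        return terms;
    }

    /** Very rough stand-in for Porter stemming: "contains"/"containing" -> "contain". */
    private static String stem(String word) {
        if (word.endsWith("ing") && word.length() > 5) return word.substring(0, word.length() - 3);
        if (word.endsWith("s") && word.length() > 3)   return word.substring(0, word.length() - 1);
        return word;
    }
}
```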
As SBL depends on the similarity between the bug report and the statements of the source code, increasing the valid information of a statement is an implicit demand. Hence, a super node is generated by combining each node with its predecessor nodes' corpora, because predecessors may affect their successors; this results in an NPDG. Figure 5 shows an NPDG derived from Figure 3 by combining each node with its predecessor nodes. In Figure 3, int sell = sell1 is a node, while in the NPDG this node is incorporated with its predecessor node (i.e., int sell1). As a result, the corpus of the statement int sell = sell1 grows. Similarly, profit += sell - buy - operationCost has four predecessor nodes, so its super node combines the corpora of this node and those four predecessor nodes.
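Given the per-statement corpora and the predecessor relation, the super nodes can be formed by simple concatenation, as in the following illustrative sketch (statements are keyed by line number; the class and method names are assumptions).

```java
import java.util.*;

/** Illustrative NPDG construction: each statement's corpus is merged with the
 *  corpora of its direct predecessor statements to form a super node. */
class NpdgSketch {

    /** Keys are statement line numbers; predecessorsByLine comes from the MSDG. */
    static Map<Integer, List<String>> buildNpdg(Map<Integer, Set<Integer>> predecessorsByLine,
                                                Map<Integer, List<String>> corpusByLine) {
        Map<Integer, List<String>> superNodes = new LinkedHashMap<>();
        for (Map.Entry<Integer, Set<Integer>> e : predecessorsByLine.entrySet()) {
            List<String> merged = new ArrayList<>(corpusByLine.get(e.getKey()));
            for (Integer predLine : e.getValue()) {
                merged.addAll(corpusByLine.get(predLine));   // direct predecessors only
            }
            superNodes.put(e.getKey(), merged);
        }
        return superNodes;
    }
}
```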
After the generation of the NPDG, a similarity score is measured between the bug report and each node of the NPDG. The graph is traversed to find the frequencies of the terms shared between the bug corpora and each node's corpora. Among the available term-frequency weighting variants (e.g., logarithmic, boolean), the logarithmic variant performs better than the others (Croft et al., 2010), so \(tf(t, n)\) and \(tf(t, b)\) are calculated according to Equations 1 and 2.
\[
tf(t, n) = \log f_{tn} + 1 \tag{1}
\]
\[
tf(t, b) = \log f_{tb} + 1 \tag{2}
\]
\(f_{tn}\) and \(f_{tb}\) are the frequencies of term \(t\) in an NPDG node and in the bug report respectively; they are used for measuring the similarity between the bug report and each statement of the source code (see Equation 3).
\[
N_s(n, b) = VSM(n, b) = cos(n, b) = \frac{\sum_{t \in V} (\log f_{tn} + 1) \times (\log f_{tb} + 1)}{\sqrt{\sum_{t \in V} (\log f_{tn} + 1)^2} \times \sqrt{\sum_{t \in V} (\log f_{tb} + 1)^2}} \tag{3}
\]
Here, \(N_s(n, b)\) denotes the similarity score of each node. A statement with a large score \(N_s\) has high similarity with the bug report, whereas a low score indicates that the bug has little effect on that node.
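Equations 1-3 amount to a cosine similarity over log-scaled term frequencies, which can be computed as in the sketch below; the class and method names are illustrative, not part of the original tool.

```java
import java.util.*;

/** Cosine similarity with log-scaled term frequencies (Equations 1-3). */
class VsmSketch {

    /** tf(t) = log f_t + 1 for every term that occurs in the given corpus. */
    static Map<String, Double> logTf(List<String> terms) {
        Map<String, Integer> freq = new HashMap<>();
        for (String t : terms) freq.merge(t, 1, Integer::sum);
        Map<String, Double> tf = new HashMap<>();
        for (Map.Entry<String, Integer> e : freq.entrySet()) {
            tf.put(e.getKey(), Math.log(e.getValue()) + 1.0);
        }
        return tf;
    }

    /** N_s(n, b): cosine of the log-tf vectors of a super node and the bug report. */
    static double similarity(List<String> nodeCorpus, List<String> bugCorpus) {
        Map<String, Double> n = logTf(nodeCorpus);
        Map<String, Double> b = logTf(bugCorpus);
        double dot = 0.0, normN = 0.0, normB = 0.0;
        for (Map.Entry<String, Double> e : n.entrySet()) {
            normN += e.getValue() * e.getValue();
            Double w = b.get(e.getKey());
            if (w != null) dot += e.getValue() * w;          // shared terms only
        }
        for (double w : b.values()) normB += w * w;
        if (normN == 0.0 || normB == 0.0) return 0.0;
        return dot / (Math.sqrt(normN) * Math.sqrt(normB));
    }
}
```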
The score of each statement is weighted by the score of its method using Equation 4. This is because a statement contains far fewer words than a method (Moreno et al., 2013), and methods with larger ranking scores are more likely to be responsible for the bug.
\[
SBL_{score} = N_s \times M_s \tag{4}
\]
\(M_s\) and \(N_s\) represent the score of the method containing statement \(s\) and the statement similarity score respectively. \(SBL_{score}\) denotes the ranking score of each statement, and the statements are ranked based on it.
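The final ranking step then simply scales each statement's node score by its method score and sorts in descending order; a minimal sketch follows (the Candidate type and field names are assumptions for illustration).

```java
import java.util.*;

/** Rank statements by SBL_score = N_s * M_s (Equation 4). */
class RankingSketch {

    static class Candidate {
        final int line;          // statement line number
        final double sblScore;   // N_s * M_s
        Candidate(int line, double nodeScore, double methodScore) {
            this.line = line;
            this.sblScore = nodeScore * methodScore;
        }
    }

    static List<Candidate> rank(List<Candidate> candidates) {
        List<Candidate> ranked = new ArrayList<>(candidates);
        ranked.sort(Comparator.comparingDouble((Candidate c) -> c.sblScore).reversed());
        return ranked;
    }
}
```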
4 Result Analysis
To measure the effectiveness of SBL, several bug reports from three open source projects, namely Eclipse, SWT and PasswordProtector, are considered. The experiments are conducted to validate the ranking of buggy statements. To measure the accuracy of SBL, Top N Rank, Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP) are used as metrics, which are also commonly used in bug localization. The data collection and the experimental details are discussed in the following subsections.
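For reference, MRR and MAP over a set of bugs can be computed as in the following sketch, where each bug is represented by the 1-based ranks at which its true buggy statements appear in the suggested list; the class name and input representation are assumptions.

```java
import java.util.*;

/** Standard MRR and MAP over ranked suggestions; ranks are 1-based positions
 *  of the true buggy statements for each bug. */
class MetricsSketch {

    /** Mean Reciprocal Rank: average of 1 / (rank of the first correct hit). */
    static double mrr(List<List<Integer>> relevantRanksPerBug) {
        double sum = 0.0;
        for (List<Integer> ranks : relevantRanksPerBug) {
            if (!ranks.isEmpty()) sum += 1.0 / Collections.min(ranks);
        }
        return sum / relevantRanksPerBug.size();
    }

    /** Mean Average Precision: mean over bugs of the average precision. */
    static double map(List<List<Integer>> relevantRanksPerBug) {
        double sum = 0.0;
        for (List<Integer> ranks : relevantRanksPerBug) {
            List<Integer> sorted = new ArrayList<>(ranks);
            Collections.sort(sorted);
            double ap = 0.0;
            for (int i = 0; i < sorted.size(); i++) {
                ap += (i + 1.0) / sorted.get(i);   // precision at each relevant rank
            }
            if (!sorted.isEmpty()) ap /= sorted.size();
            sum += ap;
        }
        return sum / relevantRanksPerBug.size();
    }
}
```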
4.1 Data Collection
Eclipse, SWT and PasswordProtector are used as the subjects of evaluation.
Table 1: The performance of SBL by considering 230 bugs
<table>
<thead>
<tr>
<th>Project name</th>
<th>Top 1 (Rank)</th>
<th>Top 5 (Rank)</th>
<th>Top 10 (Rank)</th>
<th>Top 20 (Rank)</th>
<th>MRR</th>
<th>MAP</th>
</tr>
</thead>
<tbody>
<tr>
<td>Eclipse</td>
<td>34 (28.33%)</td>
<td>70 (58.33%)</td>
<td>79 (65.83%)</td>
<td>98 (81.66%)</td>
<td>45.8%</td>
<td>38.5%</td>
</tr>
<tr>
<td>SWT</td>
<td>38 (38%)</td>
<td>63 (63%)</td>
<td>72 (72%)</td>
<td>85 (85%)</td>
<td>51.3%</td>
<td>44.2%</td>
</tr>
<tr>
<td>PasswordProtector</td>
<td>4 (40%)</td>
<td>9 (90%)</td>
<td>10 (100%)</td>
<td>10 (100%)</td>
<td>66%</td>
<td>61%</td>
</tr>
</tbody>
</table>
Eclipse is a widely used open source Integrated Development Environment (IDE) which is written in Java. SWT is a widget toolkit which is integrated with Eclipse. PasswordProtector is also an open source project, used to encrypt passwords for accessing multiple websites.
Different versions of Eclipse and SWT (e.g., versions 2.1.0, 3.0.1, 3.0.2 and 3.1.0) are chosen, which contain large volumes of source code. For example, Eclipse 3.0.2 contains 12,863 classes, 95,341 methods and 186,772 non-empty statements, while SWT contains 489 classes, 18,784 methods and 32,032 non-empty statements. These benchmark projects lack patches with statement level bug-fixing information because no prior research has been conducted at the statement level. Hence, SBL is evaluated using 230 bugs from the three projects (i.e., 120 from Eclipse, 100 from SWT and 10 from PasswordProtector). These bugs are manually collected from Bugzilla for Eclipse and SWT. For each bug, the corresponding patched files are also collected to validate the results provided by SBL. Meanwhile, the bug reports and fixed files of PasswordProtector are collected from the developers of the project (Rahman, 2016).
The details of each bug are available in (Rahman, 2016) where bug id, description and buggy locations are given. It is noteworthy that stack trace is omitted from the bug description as it may bias the evaluation.
4.2 Research Questions and Evaluation
In this section, SBL is validated by addressing two research questions, RQ1 and RQ2. RQ1 asks how many bugs are successfully located by suggesting buggy statements, and RQ2 examines the effect of considering the predecessors of each node. The following discussion provides a detailed description of each research question along with its evaluation.
4.2.1 RQ1: How many bugs are successfully located at statement level?
To answer this question, three open source projects described in Section 4.1 are considered. For each bug, the ranked list of buggy statements suggested by SBL are compared with the original patched buggy statements. If the buggy statements are ranked at the top 1, top 5, top 10 or top 20, the bug is considered effectively localized. MRR and MAP metrics are also used to prove the efficacy of SBL.
Table 1 shows that SBL locates 28.33% of the buggy statements at the top position in Eclipse, which means that for 28.33% of the bugs developers do not need to examine any further statements. Besides, 58.33% and 65.83% of the bugs are located within the top 5 and top 10 respectively. In the case of SWT, SBL locates 38%, 63% and 72% at top 1, top 5 and top 10. For PasswordProtector, 40% of the bugs are localized at top 1, and 90% and 100% are correctly localized within the top 5 and top 10 suggestions. For Eclipse, SWT and PasswordProtector, the MRR of SBL is 45.8%, 51.3% and 66%, while the MAP is 38.5%, 44.2% and 61% respectively. These results indicate that SBL can effectively localize buggy statements.
4.2.2 RQ2: Does the consideration of predecessor nodes improve the accuracy of bug localization?
SBL creates super nodes by combining each node with its predecessor nodes. Hence, it is necessary to show whether the consideration of these predecessor nodes improves the bug localization accuracy or not. To validate this, a comparison is made between ranking without considering predecessors (i.e., using MSDG) and with predecessors (i.e., using NPDG).
Figures 6-8 show that the Top N Rank accuracy of ranking buggy statements with MSDG is lower than that of NPDG for all the studied projects, because a buggy statement depends on its previously executed statements. NPDG is especially effective when the bug does not stay within a single statement but propagates from one statement to another.
Figure 6 presents a comparative study between the results of NPDG and MSDG for Eclipse. The ranking of buggy statements in Eclipse shows that 21.66% and 28.33% of the bugs are located at the 1st position for MSDG and NPDG respectively. For SWT, 29% and 38% of the bugs are located at the 1st position for MSDG and NPDG. Although PasswordProtector is a smaller project than the other two, NPDG also performs better than MSDG there, locating 40% of the bugs at the 1st position against 30% for MSDG. Thus NPDG improves the performance by 6.67%, 9% and 10% over MSDG for Eclipse, SWT and PasswordProtector respectively.
The effects of using predecessors are also shown in Figures 9 and 10, where MRR and MAP are the respective metrics; NPDG again shows higher values than MSDG. For MRR, improvements of 9.8%, 8.3% and 5% (Figure 9) are found by using NPDG instead of MSDG for Eclipse, SWT and PasswordProtector respectively. Meanwhile, accuracy improvements of 7.3%, 10.1% and 7.3% (Figure 10) have been found on MAP by NPDG over MSDG for the same three projects.
5 Conclusion
In this paper, a novel statement level bug localization technique named SBL is proposed, where the irrelevant search space is discarded using dynamic analysis of the source code. The relevant search space is further reduced by ranking the buggy methods. Later, for each suggested method, a Method Statement Dependency Graph (MSDG) is generated which holds the relationships among the statements. To incorporate predecessor node information, a Node Predecessor-node Dependency Graph (NPDG) is generated for each method, where the bag of words of each node and its predecessor nodes are combined. The similarity between each node (i.e., statement) of the NPDG and the bug report is measured to rank the buggy statements. The effectiveness of SBL has been evaluated on three open source projects. The experimental results show that SBL can successfully rank buggy statements, and it is also evident from the results that the consideration of predecessor nodes improves the accuracy.
Since SBL performs well in different types of projects, in the future it can be applied to industrial projects to assess its effectiveness in practice.
REFERENCES
CSE 30
Fall 2008
Final Exam
1. Number Systems ______________________ (15 points)
2. Binary Addition/Condition Code Bits/Overflow Detection ______________________ (12 points)
3. Branching ______________________ (22 points)
4. Bit Operations ______________________ (13 points)
5. Recursion/SPARC Assembly ______________________ (10 points)
6. Local Variables, The Stack, Return Values ______________________ (19 points)
7. More Recursive Subroutines ______________________ (16 points)
8. Floating Point ______________________ (12 points)
9. Machine Instructions ______________________ (20 points)
10. Linkage, Scope, Lifetime, Data ______________________ (32 points)
11. Load/Store/Memory ______________________ (11 points)
12. Miscellaneous ______________________ (27 points)
SubTotal ______________________ (209 points)
Extra Credit ______________________ (11 points)
Total ______________________
1. Number Systems
Convert **0xFADE** (2’s complement, 16-bit word) to the following. (6 points)
- **binary** ___________________________ (straight base conversion)
- **octal** ___________________________ (straight base conversion)
- **decimal** ___________________________ (convert to signed decimal)
Convert **298** to the following (assume 16-bit word). **Express answers in hexadecimal.** (3 points)
- **sign-magnitude** ___________________________
- **1’s complement** ___________________________
- **2’s complement** ___________________________
Convert **-542** to the following (assume 16-bit word). **Express answers in hexadecimal.** (6 points)
- **sign-magnitude** ___________________________
- **1’s complement** ___________________________
- **2’s complement** ___________________________
2. Binary Addition/Condition Code Bits/Overflow Detection
Indicate what the condition code bits are when adding the following 8-bit 2’s complement numbers. (12 points)
- 10101010 +11010110
- 01110110 +11001101
- 11111111 +00000001
<table>
<thead>
<tr>
<th>N</th>
<th>Z</th>
<th>V</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
3. Branching
Write the SPARC Assembly leaf subroutine to perform the following C function. Use only standard looping/branching format as detailed in class and class notes. Comment! **Do not optimize.** (22 points)
```c
int checkIfPowerOf2( unsigned int value )
{
int i;
int cnt = 0;
unsigned int mask = 0x80000000;
for ( i = 0; i < 32; ++i )
{
if ( (value & mask) != 0 )
++cnt;
if ( cnt > 1 )
return 0;
mask = mask >> 1;
}
return cnt;
}
```
```sparc-assembler
.global checkIfPowerOf2 ! i mapped to %o1
.section " .text" ! cnt mapped to %o3
checkIfPowerOf2: ! mask mapped to %o5
int i;
int cnt = 0;
unsigned int mask = 0x80000000;
for ( i = 0; i < 32; ++i )
{
if ( (value & mask) != 0 )
++cnt;
if ( cnt > 1 )
return 0;
mask = mask >> 1;
}
return cnt;
```
4. Bit Operations
What is the value of %l0 after each statement is executed? **Express your answers in hexadecimal.**
```
set 0xCA4273BD, %l0
set 0x9035768A, %l1
and %l0, %l1, %l0
Value in %l0 is _______________________________________ (2 points)
```
```
set 0xCA4273BD, %l0
sra %l0, 13, %l0
Value in %l0 is _______________________________________ (2 points)
```
```
set 0xCA4273BD, %l0
sll %l0, 9, %l0
Value in %l0 is _______________________________________ (2 points)
```
```
set 0xCA4273BD, %l0
set 0x????????, %l1
xor %l0, %l1, %l0
! Value in %l0 is now 0xFEEDBEEF
Value set in %l1 must be this bit pattern _______________________________________ (3 points)
```
```
set 0xCA4273BD, %l0
set 0x9035768A, %l1
or %l0, %l1, %l0
Value in %l0 is _______________________________________ (2 points)
```
```
set 0xCA4273BD, %l0
srl %l0, 7, %l0
Value in %l0 is _______________________________________ (2 points)
```
5. Recursion/SPARC Assembly
Given `main.s` and `recurse.s`, what gets printed when executed? (10 points)
```assembly
.global main
/* main.s */
.section ".text"
main:
save %sp, -96, %sp
set 24568, %o0
call recurse
nop
ret
restore
.global recurse
/* recurse.s */
.section ".rodata"
fmt: .asciz "%d\n"
.section ".text"
recurse:
save %sp, -(92 + 4) & -8, %sp ! One int local variable on the stack
mov %i0, %o0
mov 10, %o1
call .rem
nop
st %o0, [%fp - 4]
mov %i0, %o0
mov 10, %o1
call .div
nop
mov %o0, %i0
set fmt, %o0
ld [%fp - 4], %o1
call printf
nop
cmp %i0, %g0
be base
nop
mov %i0, %o0
call recurse
nop
base:
set fmt, %o0
mov %i0, %o1
call printf
nop
ret
restore
```
6. Local Variables, The Stack, and Return Values
Here is a C function that allocates a few local variables, performs some assignments and returns a value. Don’t worry about any local variables not being initialized before being used. Just do a direct translation. **Draw lines.**
```c
int fubar( short x, long y ) {
char *local_stack_var1;
short *local_stack_var2;
struct foo {
short s1;
char s2[3];
short s3[3];
int s4;
} local_stack_var3;
local_stack_var1 = &local_stack_var3.s2[2]; /* 1 */
local_stack_var2 = local_stack_var3.s3 + y; /* 2 */
local_stack_var3.s1 = y + *++local_stack_var2; /* 3 */
return ( local_stack_var3.s4 + x ); /* 4 */
}
```
Write the equivalent **full unoptimized** SPARC assembly language module to perform the equivalent. You must allocate all local variables on the stack. No short cuts. Treat each statement independently. (19 points)
7. More Recursive Subroutines
```c
#include <stdio.h>
int mystery1( int x )
{
    int result;
    printf( "x = %d\n", x );
    if ( x <= 0 )
        return 0;
    else {
        result = x + mystery2( x - 1 );
        printf( "result = %d\n", result );
        return result;
    }
}
int mystery2( int x )
{
    int result;
    printf( "x = %d\n", x );
    if ( x <= 0 )
        return 0;
    else {
        result = x + mystery1( x - 2 );
        printf( "result = %d\n", result );
        return result;
    }
}
int main( int argc, char *argv[] )
{
    printf( "%d\n", mystery1( 10 ) );
    return 0;
}
```
8. Floating Point
Convert $129.125_{10}$ (decimal fixed-point) to binary fixed-point (binary) and single-precision IEEE floating-point (hexadecimal) representations.
binary fixed-point __________________________________ (2 points)
IEEE floating-point __________________________________ (4 points)
Convert $0xC348C000$ (single-precision IEEE floating-point representation) to fixed-point decimal.
fixed-point decimal __________________________________ (6 points)
9. Machine Instructions
Translate the following instructions into SPARC machine code. Use hexadecimal values for your answers. If an instruction is a branch, specify the number of instructions away for the target (vs. a Label).
and %i2, %l3, %o4 ___________________________________ (5 points)
std %o2, [%fp - 8] ___________________________________ (5 points)
Translate the following SPARC machine code instructions into SPARC assembly instructions.
0x3CBFFFF9 ___________________________________ (5 points)
0xEC0B401C ___________________________________ (5 points)
10. Linkage, Scope, Lifetime, Data
For the following program fragment, specify what C runtime area/segment will be used for each variable definition or statement: (32 points — 1 point each)
static int a = 411; ______________
int b; ______________
int c = 404; ______________
static int d; ______________
int foo( int e ) { ____________ (foo) ______________ (e)
double f = 4.20; ______________
static int g = 8675309; ______________
static int *h; ______________
h = (int *) malloc( b ); ______________ (where h is pointing)
int (*i)(int) = foo; ____________ (i) ______________ (where i is pointing)
...
}
Fill in the letter corresponding to the correct scoping/visibility for each of the variables:
A) Global across all modules/functions linked with this source file.
B) Global just to this source file.
C) Local to function foo().
a _______
b _______
c _______
d _______
e _______
f _______
g _______
h _______
i _______
foo _______
Fill in the letter corresponding to the correct lifetime for each of the variables:
A) Exists from the time the program is loaded to the point when the program terminates.
B) Exists from the time function foo() is called to the point when foo() returns.
11. Load/Store/Memory
What gets printed in the following program? (11 points)
```assembly
.global main
.section "".data"
fmt: .asciz "0x%08X\n" ! prints value as hex 0xXXXXXXXX
.c: .byte 0x83
.s: .half 0xBEAD
.i1: .word 0xABCD1234
.i2: .word 0xABCD1234
.i3: .word 0xABCD1234
.x: .word 0x5FF50000
.section "".text"
main:
save %sp, -96, %sp
set x, %l0
set s, %l1
lduh [%l1], %l2 _____________________ Hex value in %l2
stb %l2, [%l0+2] ______________________ Hex value in word labeled x
sll %l2, 4, %l2 _____________________ Hex value in %l2
stb %l2, [%l0+1]
set fmt, %o0
ld [%l0], %o1
call printf _____________________ Hex value in word labeled x
nop (same as output of this printf)
set i1, %l0
set c, %l1
ldsb [%l1], %l2 _____________________ Hex value in %l2
stb %l2, [%l0+2] ______________________ Hex value in word labeled i1
stb %l2, [%l0+1]
set fmt, %o0
ld [%l0], %o1
call printf _____________________ Hex value in word labeled i1
nop (same as output of this printf)
set i2, %l0
set i3, %l1
ld [%l1], %l2 _____________________ Hex value in %l2
stb %l2, [%l0] ______________________ Hex value in word labeled i2
sra %l2, 8, %l2 _____________________ Hex value in %l2
stb %l2, [%l0+2]
set fmt, %o0
ld [%l0], %o1
call printf _____________________ Hex value in word labeled i2
nop (same as output of this printf)
ret
restore
```
12. Miscellaneous
Fill in the blanks so the following program correctly determines if it is run on a Big-Endian or Little-Endian architecture. (2 points)
```c
#include <stdio.h>
int main( void ) {
int word = 1;
if ( *(char *)&word == 1 )
printf( "____________________________\n" ); /* Prints either Big-Endian or Little-Endian */
else
printf( "____________________________\n" ); /* Prints either Big-Endian or Little-Endian */
return 0;
}
```
What gets printed with the statements below? (4 points)
```c
unsigned short x = 0xF024;
putchar( (x & 0xF) + '0' ); ________
putchar( (x << 8 >> 12) + '0' ); ________
putchar( (x & 0xF00) + '0' ); ________
putchar( (x >> 12) + '0' ); ________
```
What is Rick's favorite TV show? (1 pt)
Given the following program, order the printf() lines so that the values that are printed when run on a Sun SPARC Unix system are displayed from smallest value to largest value. (2 points each)
```c
void foo( int, int ); /* Function Prototype */
int a;
int main( int argc, char *argv[] ) {
static int b = 311;
int c = 69;
foo( argc, b );
/* 1 */ (void) printf( "argc --> %p\n", &argc );
/* 2 */ (void) printf( "foo --> %p\n", foo );
/* 3 */ (void) printf( "a --> %p\n", &a );
/* 4 */ (void) printf( "malloc --> %p\n", malloc(50) );
/* 5 */ (void) printf( "c --> %p\n", &c );
/* 6 */ (void) printf( "b --> %p\n", &b );
}
void foo( int d, int e ) {
int f = e;
int g = f;
/* 7 */ (void) printf( "e --> %p\n", &e );
/* 8 */ (void) printf( "g --> %p\n", &g );
/* 9 */ (void) printf( "d --> %p\n", &d );
/* 10 */ (void) printf( "f --> %p\n", &f );
}
```
**Extra Credit** (11 points)
What is the value of each of the following expressions taken sequentially based on changes that may have been made in previous statements?
```c
char a[] = "SD Chargers!";
char *p = a + 3;
printf( "%c", --*p );
++p;
printf( "%c", *p = *p + 4 );
printf( "%c", p[1] = *(a + strlen(a) - 2) + 2 );
p = p + 2;
printf( "%c", *p = p[-2] + 2 );
p++;
printf( "%c", *p = p[0] - 3 );
printf( "%d", ++p - a );
printf( "\n%s\n", a );
```
Optimize the following assembly code fragments using only the given instructions. Some optimizations are worth more than others.
```
mov %l3, %o0
mov 32, %o1
call .mul
nop
mov %o0, %l3
```
```
cmp %l1, %i2
bge L1
nop
add %l0, %l1, %i2
L1:
andcc %o0, %i2, %o2
sub %o2, 5, %o2
```
Hexadecimal - Character
| 00 NUL | 01 SOH | 02 STX | 03 ETX | 04 EOT | 05 ENQ | 06 ACK | 07 BEL |
| 08 BS | 09 HT | 0A NL | 0B VT | 0C NP | 0D CR | 0E SO | 0F SI |
| 10 DLE | 11 DC1 | 12 DC2 | 13 DC3 | 14 DC4 | 15 NAK | 16 SYN | 17 ETB |
| 18 CAN | 19 EM | 1A SUB | 1B ESC | 1C FS | 1D GS | 1E RS | 1F US |
| 20 SP | 21 ! | 22 " | 23 # | 24 $ | 25 % | 26 & | 27 ' |
| 28 ( | 29 ) | 2A * | 2B + | 2C , | 2D - | 2E . | 2F / |
| 30 0 | 31 1 | 32 2 | 33 3 | 34 4 | 35 5 | 36 6 | 37 7 |
| 38 8 | 39 9 | 3A : | 3B ; | 3C < | 3D = | 3E > | 3F ? |
| 40 @ | 41 A | 42 B | 43 C | 44 D | 45 E | 46 F | 47 G |
| 48 H | 49 I | 4A J | 4B K | 4C L | 4D M | 4E N | 4F O |
| 50 P | 51 Q | 52 R | 53 S | 54 T | 55 U | 56 V | 57 W |
| 58 X | 59 Y | 5A Z | 5B [ | 5C \ | 5D ] | 5E ^ | 5F _ |
| 60 ` | 61 a | 62 b | 63 c | 64 d | 65 e | 66 f | 67 g |
| 68 h | 69 i | 6A j | 6B k | 6C l | 6D m | 6E n | 6F o |
| 70 p | 71 q | 72 r | 73 s | 74 t | 75 u | 76 v | 77 w |
| 78 x | 79 y | 7A z | 7B { | 7C | | 7D } | 7E ~ | 7F DEL |
A portion of the Operator Precedence Table
<table>
<thead>
<tr>
<th>Operator</th>
<th>Associativity</th>
</tr>
</thead>
<tbody>
<tr>
<td>++ postfix increment</td>
<td>L to R</td>
</tr>
<tr>
<td>-- postfix decrement</td>
<td></td>
</tr>
<tr>
<td>* indirection</td>
<td>R to L</td>
</tr>
<tr>
<td>++ prefix increment</td>
<td></td>
</tr>
<tr>
<td>-- prefix decrement</td>
<td></td>
</tr>
<tr>
<td>& address-of</td>
<td></td>
</tr>
<tr>
<td>* multiplication</td>
<td>L to R</td>
</tr>
<tr>
<td>/ division</td>
<td></td>
</tr>
<tr>
<td>% modulus</td>
<td></td>
</tr>
<tr>
<td>+ addition</td>
<td>L to R</td>
</tr>
<tr>
<td>- subtraction</td>
<td></td>
</tr>
<tr>
<td>= assignment</td>
<td>R to L</td>
</tr>
</tbody>
</table>
An Automatic Development Process for Integrated Modular Avionics Software
Ying Wang
State Key Laboratory of Software Development Environment, Beihang University, Beijing, China
[email protected]
Dianfu Ma
State Key Laboratory of Software Development Environment, Beihang University, Beijing, China
[email protected]
Abstract—With ever-growing avionics functionality, the modern avionics architecture is evolving from the traditional federated architecture to Integrated Modular Avionics (IMA). ARINC653 is a major industry standard supporting the partitioning concept introduced in IMA to achieve security isolation between avionics functions with different criticalities. To decrease the complexity and improve the reliability of the design and implementation of IMA-based avionics software, this paper proposes an automatic development process based on the Architecture Analysis & Design Language. An automatic model transformation approach from domain-specific models to platform-specific ARINC653 models and a safety-critical ARINC653-compliant code generation technology are presented for this process. A simplified multi-task flight application is given as a case study, with preliminary experiment results, to show the validity of this process.
Index Terms—Integrated Modular Avionics, ARINC653, Architecture Analysis & Design Language, model transformation, code generation
I. INTRODUCTION
With ever-growing avionics functionality, the modern avionics architecture is evolving from the traditional federated architecture to Integrated Modular Avionics (IMA) to save hardware resource cost [1]. IMA introduces a partitioning concept to achieve security isolation between avionics functions with different criticalities executing on the same processor platform. ARINC653 [2] is currently the major industry standard supporting the development of avionics software based on the IMA architecture.
Currently, model-driven engineering has become an important method for the development of safety-critical embedded systems [3]. However, some problems remain when designing and implementing ARINC653-based avionics software with model-driven engineering: (1) an architecture-level language with precise semantics is needed to represent such a partitioned architecture. Although the Architecture Analysis & Design Language (AADL) has been extended with the AADL ARINC653 annex to describe partition-specific concepts, the annex still has some shortcomings, such as lacking a description of an ARINC process's internal dynamic behavior and lacking a formal semantics to specify an accurate AADL model that conforms to the ARINC653 requirements; (2) it is not easy to manually build valid ARINC653-compliant models directly, because avionics engineers from different application domains mainly focus on the high-level design of domain-specific models (e.g. an air data computer) rather than on low-level platform-specific models (e.g. the ARINC653 platform). The complexity of ARINC653 also increases the difficulty of building a valid partition model that conforms to the ARINC653 requirements; (3) a partitioned system implementation needs to consider its safety and compatibility with ARINC653 platforms. Existing code generation studies mainly focus on platform-independent code mapping, and there is little discussion of the ARINC653-specific platform. Besides, their limited attention to safety-critical constraints in the code generation rules makes it hard for the generated system to meet safety-critical needs.
To address the problems mentioned above, an AADL-based integrated development process is proposed to make the design and implementation of avionics software based on ARINC653 more productive and reliable, as shown in Figure 1. An extended AADL-based formal language for ARINC653 (AADL653) is proposed to accurately describe such a partitioned architecture; it is divided into two parts: the AADL653 Multi-Task Model and the AADL653 Runtime Model (as shown in the lower-left column).
The whole process takes the platform-independent avionics function model represented in AADL as input. First, a model transformation is proposed in the AADL Modeling column to automatically transform this input AADL model to a valid AADL653 Multi-Task Model, and then to integrate it into an AADL653 Runtime Model. Second, the integrated AADL653 model can be used for verification and analysis as shown in the V&V column. Finally, the code generation column shows an ARINC653-compliant and safety-critical system implementation, consisting of real-time code and runtime configurations, automatically generated from this verified AADL653 model.
This paper mainly focuses on the model transformation and code generation parts of this process. The rest of this paper is organized as follows. Section 2 gives a brief introduction to the AADL653 language. In Section 3, the model transformation from domain-specific AADL models to platform-specific AADL653 models is discussed in detail. An object-oriented safety-critical ARINC653 code generation approach is proposed in Section 4. Section 5 presents a simplified multi-task flight application to illustrate the approach proposed in this paper. Section 6 discusses related work, and Section 7 ends this paper with a conclusion and future work.
II. AADL653 LANGUAGE
A. AADL Overview
The AADL standard [4] supports a top-down, stepwise-refinement, component-based design process that fits safety-critical system design. It provides three categories of components (system, software, hardware), component features (e.g. data port, requires data access), timing properties and modes to model safety-critical systems with functional and non-functional requirements. Some annexes have been proposed to further refine or extend the AADL standard for modeling specific requirements; e.g. the AADL behavior annex [5] is used to refine the internal functional behavior of a thread or a subprogram for functional correctness verification and detailed code generation. However, the AADL standard and the above annex are highly abstract and do not target any concrete platform implementation, so a new extended language with platform-specific semantics needs to be defined to model avionics software constructed on the ARINC653 execution architecture.
B. AADL653 Definition
The existing AADL ARINC653 annex [6], which describes a mapping from ARINC653 elements to AADL elements and extended properties, aims to provide a guideline for modeling the ARINC653 architecture. But the annex still has some shortcomings: for example, it uses natural language to describe the mapping from ARINC653 elements to AADL models, which is likely to cause ambiguity and inaccuracy in the mapped models, and it does not represent an ARINC process's internal behavior in a partitioned architecture, so it lacks complete model descriptions. Therefore, the AADL653 language is proposed to extend the AADL standard and combine it with the AADL behavior annex [5] to provide an accurate and complete ARINC653-compliant AADL model representation based on classic set theory and first-order logic.
The AADL653 language is divided into two parts: the AADL653 Multi-Task Model and the AADL653 Runtime Model. The AADL653 Multi-Task Model describes multiple ARINC processes communicating in the same or different partitions, as well as the internal dynamic behavior of an ARINC process in the application software layer. For example, the following Definition 1 gives a formal description of the AADL653 Inter-Partition Task Queuing Communication Model. It uses first-order logic to specify the property this model must satisfy, that is, the source task writes queuing messages through its port $sqp$, connected by $c_1$ to the queuing channel, and the destination task reads queuing messages through its port $dqp$, connected by $c_2$ to the channel. The corresponding graphical representation is shown in Figure 2.
Definition 1 (Inter-Partition Task Queuing Communication). It consists of a tuple $(t_1, t_2, c_1, c_2)$ where $t_1, t_2 \in \text{ThreadTypeSet}$ and $c_1, c_2 \in \text{EvtDataPortConnSet}$, satisfying the property:
1. $\exists\, sqp, dqp:\ sqp \in \text{OutEvtDataPortSet}(t_1) \land dqp \in \text{InEvtDataPortSet}(t_2)$
2. $\text{srcPort}(c_1) = sqp$
3. $\text{destPort}(c_2) = dqp$
Figure 2. AADL653 Inter-Partition Task Queuing Communication Model
Definition 2 shows a Task Behavior Automation Machine that controls the task's state transitions with guards (e.g. receiving a message from a queuing port, represented by $dqp?$) and actions (e.g. calling intra-partition communication subprograms, represented by $intraComSub$).
Definition 2 (Task Behavior Automation Machine). It consists of a tuple $(t, S, s_0, G, A, E)$ where
1. $t \in \text{ThreadTypeSet}$
2. $S$ is a set of states and $s_0 \in S$ is the initial state
3. $G$ is a set of guards, e.g. receiving a message from a queuing port ($dqp?$)
4. $A$ is a set of actions, e.g. application and intra-partition communication subprogram calls ($appSub!$, $intraComSub!$)
5. $E \subseteq S \times G \times A \times S$ is a set of transitions.
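To make the structure of this tuple concrete, the following minimal Java sketch shows one possible in-memory representation of such an automaton; all type and field names are illustrative assumptions rather than part of the paper's tooling.

```java
import java.util.List;
import java.util.Set;

// Illustrative representation of the tuple (t, S, s0, G, A, E) from Definition 2.
public class TaskBehaviorAutomaton {
    public final String thread;            // t  : the owning ARINC653 process / AADL thread
    public final Set<String> states;       // S  : state names
    public final String initialState;      // s0 : initial state, must be contained in S
    public final Set<String> guards;       // G  : e.g. "dqp?" (message available on a queuing port)
    public final Set<String> actions;      // A  : e.g. "RECEIVE_BUFFER!" (communication subprogram call)
    public final List<Transition> edges;   // E  : subset of S x G x A x S

    public record Transition(String from, String guard, String action, String to) {}

    public TaskBehaviorAutomaton(String thread, Set<String> states, String initialState,
                                 Set<String> guards, Set<String> actions, List<Transition> edges) {
        this.thread = thread;
        this.states = states;
        this.initialState = initialState;
        this.guards = guards;
        this.actions = actions;
        this.edges = edges;
    }
}
```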
The AADL653 Runtime Model describes the partitioned runtime environment in the core software layer. It involves inter-partition sampling or queuing communication channels, partition scheduling window time requirements, partition memory requirements and health monitor configuration information.
III. FROM AADL TO AADL653 MODELS
The AADL653 language can be used to manually build ARINC653-specific models, but this is still hard and error-prone for avionics engineers who focus on the high-level design of avionics applications. This section proposes a model transformation framework with formal rules to automatically transform a domain-specific AADL model built by the avionics engineers into an AADL653 Multi-Task Model conforming to the ARINC653 requirements. The model can then be integrated into an AADL653 Runtime Model for validation and analysis.
A. Model Transformation
Each transformation rule consists of three parts: pattern, config and action. The pattern specifies an abstract platform-independent model, which refers to an AADL model. The config specifies the concrete ARINC653 task communication style (e.g. inter-partition or intra-partition) applied to the current pattern. The action gives a sequence of operations performed on the current pattern to transform the AADL model into an AADL653 model.
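As a rough sketch of how this pattern/config/action split might look in code, a rule could expose three corresponding operations; the interface below is hypothetical and only illustrates the idea, it does not reproduce the authors' rule engine.

```java
import java.util.List;

// Hypothetical shape of a transformation rule: the engine matches the pattern,
// checks the configured communication style, then runs the actions.
public interface TransformationRule<M, E> {

    /** Pattern: returns all model elements (e.g. port connections) matching the abstract pattern. */
    List<E> match(M model);

    /** Config: true if the configured communication style (e.g. "Inter-Partition") applies to this element. */
    boolean configApplies(E element, String configuredStyle);

    /** Action: rewrites the matched element into its ARINC653-specific counterpart. */
    void apply(M model, E element);
}
```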
The transformation rules involve two levels. The first is the inter-task interaction level, which transforms abstract inter-task static dependencies into ARINC653-specific inter-task communication styles according to a specified config; e.g. Rule 1 (implemented by Algorithm 1 below) transforms an AADL Event Data Port Interaction Model (EDPIM), specified in its pattern, into a valid AADL653 Task Queuing Communication Model as in Definition 1 when the config is inter-partition. Likewise, the EDPIM can be transformed into an AADL653 Task Buffer Communication Model, in which each of the source and destination tasks has a requires data access to a buffer data component, if the config is intra-partition.
The second-level transformation targets each ARINC process's internal dynamic behavior, transforming platform-independent task behaviors into ARINC653-specific task behaviors. For example, if a guard of a transition represents a message arriving at an in event data port (iedp?) of the task, and this port has been transformed into a requires data access (rda) to a buffer object (bufImpl) in the first-level transformation, then the second-level transformation rule (e.g. Rule 2) generates an explicit intra-partition communication subprogram call (RECEIVE_BUFFER!), which is respectively added to the head of the action part of the transition and to the tail of the subprogram call sequence of the task, instead of reading the message from an abstract in event data port.
Rule 2: AADL REDPM to AADL653 RBM
Pattern (Read Event Data Port Model): a tuple $(g, e, t)$ where $g \in \text{GuardSet}(e)$; $iedp \in \text{InEvtDataPortSet}(t)$; $e \in \text{TransitionSet}(t)$; $t \in \text{ThreadTypeSet}$
Config (Intra-Partition): isTransformedTo(iedp, rda, bufImpl);
Actions:
addTransAction(actions, head, RECEIVE_BUFFER!);
addSubCall(psc(t), tail, RECEIVE_BUFFER!);
Rule 3 shows a data exchange behavior conversion example that transforms the abstract AADL In Event Data Port Parameter Passing Model (IEDPPM) into the concrete AADL653 Buffer Parameter Passing Out Model (BPPOM). The pattern means that the value of the in event data port (spar) of the task will be read and passed to its internal subprogram. The config means that reading a message from the abstract port (spar?) has already been substituted with the concrete subprogram call (RECEIVE_BUFFER!). Therefore, the action part of this rule adds an AADL data access connection and parameter connections to represent that the actual parameter value passed to the subprogram comes from the data value read from the concrete buffer resource.
Rule 3: AADL IEDPPM to AADL653 BPPOM
Pattern (In Event Data Port Parameter Passing Model): a tuple $(c, t, s)$ where $c$ is a parameter connection from $spar$ to $dpar$; $spar \in \text{InEvtDataPortSet}(t)$; $dpar \in \text{InParameterSet}(s)$
Config (Intra-Partition): isTransformedTo(spar, rda, bufImpl, RECEIVE_BUFFER!);
Actions:
addParameterConn(c);
addDataAccessConn(rda, RECEIVE_BUFFER.rda);
addParameterConn(RECEIVE_BUFFER.output, dpar);
B. Transformation Implementation
A rule engine implementing the above two-level transformation rules has been developed to accomplish the automatic model transformation. It takes any valid AADL model instance and an associated XML configuration file as input, and outputs a valid AADL653 Multi-Task Model instance. The configuration file describes the communication style (e.g. inter-partition with a sampling channel, or intra-partition with a buffer) and the communication resource details (e.g. buffer size) that each pair of dependent tasks will use. It is generally specified by the avionics application integrator so as to place the tasks in the appropriate partitions.
For each rule, the rule engine first searches for each matched pattern in the input AADL model; e.g. Algorithm 1 is an implementation of Rule 1, where lines 5-9 search for each EDPIM pattern, that is, source and destination threads with an event data port connection between them. It then checks in the input configuration file whether the matched model is configured with the same communication style as specified in the config part of the rule (line 10); if so, it performs the action sequence defined in the rule (lines 11-14).
Algorithm 1: AADL EDPIM To AADL653 TQCM Rule Implementation
1: input: AADL model and XML configuration file
2: output: AADL653 task queuing communication model
3: begin
4: for all pImpl ∈ ProcessImplSet in AADL model
5:   for all c ∈ ConnectionSet(pImpl) do // search for EDPIM pattern
6:     t₁ ← srcComponent(c); sqp ← srcPort(c);
7:     t₂ ← destComponent(c); dqp ← destPort(c);
8:     if (t₁ ∈ ThreadTypeSet &&
9:         t₂ ∈ ThreadTypeSet) then
10:      if (XMLconfig(t₁, t₂) == "Inter-Partition") then // do actions
11:        delete(c); qc ← getQueChannel(t₁, t₂);
12:        c₁ ← createEventDataPortConn(sqp, srcPort(qc));
13:        c₂ ← createEventDataPortConn(destPort(qc), dqp);
14:        add(c₁); add(c₂);
15: end
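For illustration, the loop of Algorithm 1 could be written along the following lines in plain Java; every model type here (Connection, AadlModel and their methods) is an illustrative stand-in, not an API of any real AADL tool or of the authors' rule engine.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Compact Java rendering of the pattern-search / config-check / action steps of Algorithm 1.
public class EdpimToTqcmRule {

    interface Connection {
        String srcComponent(); String destComponent();
        String srcPort(); String destPort();
    }

    interface AadlModel {
        List<Connection> connections();
        boolean isThread(String component);
        void delete(Connection c);
        Connection createEventDataPortConn(String fromPort, String toPort);
        void add(Connection c);
        /** Source and destination port of the queuing channel created for the thread pair. */
        String[] queuingChannelPorts(String t1, String t2);
    }

    /** xmlConfig maps a "t1->t2" thread pair to its configured communication style. */
    public void apply(AadlModel model, Map<String, String> xmlConfig) {
        for (Connection c : new ArrayList<>(model.connections())) {        // pattern search
            String t1 = c.srcComponent(), t2 = c.destComponent();
            if (!model.isThread(t1) || !model.isThread(t2)) continue;
            if (!"Inter-Partition".equals(xmlConfig.get(t1 + "->" + t2))) continue;  // config check
            model.delete(c);                                               // actions
            String[] qc = model.queuingChannelPorts(t1, t2);
            model.add(model.createEventDataPortConn(c.srcPort(), qc[0]));
            model.add(model.createEventDataPortConn(qc[1], c.destPort()));
        }
    }
}
```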
C. Model Integration and Analysis
The purpose of the model integration is to integrate the AADL653 Multi-Task Model instance obtained after transformation into an AADL653 Runtime Model instance, so that the integrated model still satisfies system requirements such as soft and hard real-time deadlines.
There are two scenarios for the model integration. One is the application-driven scenario: the multiple tasks and their dependency relations in the AADL653 Multi-Task Model instance are known, as well as the partition each task will be placed in, and the runtime environment is then configured accordingly.
### Table 1. The DestQuePort Interface.
```java
public interface DestQuePort {
    public Object READ_QUEUING_MESSAGE();
    public int GET_MAX_MSG_NUM();
    public byte GET_MAX_MSG_SIZE();
}
```
IV. SAFETY-CRITICAL ARINC653 CODE GENERATION

A. Code Mapping
The code mapping in this framework aims to build a safety-critical programming model class library for ARINC653. The class library will be used for ARINC653 object instantiation during the code generation process. The mapping involves two levels. The first is to map an ARINC653 process interface with inter/intra-partition communication resources, defined in the AADL653 Task Communication Model, to an RT-Java/C++ task communication programming model. For example, Table 1 gives the RT-Java interface declaration DestQuePort, mapped from the in event data port (dqp) in Definition 1, which represents a queuing channel's destination port from which queuing messages can be read.
### Table 2. The PeriodicArincProcessImpl Class.
```java
public class PeriodicArincProcessImpl extends NoHeapRealtimeThread implements ArincProcess {

    // Constructor
    public PeriodicArincProcessImpl(int priority, HighResolutionTime start,
            RelativeTime period, RelativeTime cost, RelativeTime deadline,
            AsyncEventHandler overrunHandler, AsyncEventHandler missHandler) {
        super(new PriorityParameters(priority),
              new PeriodicParameters(start, period, cost, deadline,
                                     overrunHandler, missHandler),
              ImmortalMemory.instance());
    }

    // ScopedMemory with initial size in bytes
    static ScopedMemory smArea = new LTMemory(size);

    // Structure body of the run method
    public void run() {
        while (true) {
            smArea.enter(new Runnable() {
                public void run() {
                    // execute behavior code in this scoped memory ...
                }
            });
            boolean ok = waitForNextPeriod();
        }
    }
}
```
The second is to map an ARINC653 process instance defined in the AADL653 Task Behavior Model to an RT-Java/C++ task behavior programming model. Table 2 shows the PeriodicArincProcessImpl class, which represents a periodic ARINC653 process. The run() method consists of an endless loop whose body contains the business logic followed by a call to waitForNextPeriod() [9], which realizes the periodic dispatch behavior. The class extends NoHeapRealtimeThread [9], which is never allowed to allocate or reference any object allocated in the heap, so it is always safe for it to interrupt the garbage collector at any time. Moreover, two types of memory, ImmortalMemory and ScopedMemory [9], are used to allocate objects outside the heap, which makes the approach very suitable for safety-critical system implementation.
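As an illustration of how such generated initialization code might look, the following sketch creates and starts one periodic ARINC653 process using the class from Table 2; the priority value, the surrounding main() method and the assumption that the generated class is on the classpath are illustrative, not taken from the paper.

```java
import javax.realtime.ImmortalMemory;
import javax.realtime.RelativeTime;

// Hypothetical excerpt of generated partition initialization code.
public class OnFlightPartitionInit {
    public static void main(String[] args) throws Exception {
        // NoHeapRealtimeThread instances must live outside the heap, so allocate in immortal memory.
        ImmortalMemory.instance().executeInArea(new Runnable() {
            public void run() {
                RelativeTime period = new RelativeTime(60, 0);   // 60 ms period, as in the case study
                PeriodicArincProcessImpl pos =
                        new PeriodicArincProcessImpl(20, null, period, null, null, null, null);
                pos.start();                                     // dispatches the periodic run() loop
            }
        });
    }
}
```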
B. Code Generation
A two-phase code generation process is proposed to automatically implement an ARINC653-compliant system consisting of real-time code running on the target and runtime configuration code provided to the ARINC653-compliant OS.
The first phase is to generate real-time code consisting of partition initialization code and task behavior code. The partition initialization code includes all the ARINC653 processes and communication resource objects required in each partition, generated from the AADL653 Task Communication Model instance of each partition. For example, Algorithm 2 (line 4-6) gives a blackboard communication object instantiation algorithm by parsing the intra-partition task blackboard communication model where there are two threads both having access to the same blackboard data.
Algorithm 2: Intra-Partition Communication Resource Instantiation Code Generation
1: Input: AADL653 Intra-Partition Task Communication Model instance of a partition: (tImpl, tImpl1, dImpl, scon, rcon)
2: Output: Intra-Partition Communication Resource Instantiation Code
3: begin
4: if (dest(scon) == dImpl &&
5:     dest(rcon) == dImpl) then
6:   instantiate an ArincBlackboard object with
7:   getPropertyValue(dImpl, "MaxMessageSize");
8: end
In order to adapt to multi-task collaboration scenarios, we present a detailed internal behavior code generation algorithm for each ARINC653 process, so that the global interaction sequence among multiple tasks can be determined.
As shown in Algorithm 3, for each transition in the Task Behavior Automation Machine (line 5), the generator first matches each guard of the transition (line 6) to generate inter-partition message-receiving code; e.g. lines 7-9 show the generation of a READ_SAMPLING_MESSAGE() call if sampling data arrives at the sampling port (dsp?). It then matches each action of the transition (line 13) to generate inter-partition message-sending and intra-partition communication subprogram call code; e.g. lines 14-16 show the generation of a SEND_BUFFER(rda, message) call on the buffer object referenced by the rda.
Algorithm 3: Task Behavior Code Generation
1: Input: AADL653 Task Behavior Model Instance of Each ARINC653 Process
2: Output: Task Behavior Code of Each ARINC653 Process
3: begin
4: for all tImpl ∈ ThreadImplSet do
5:   for all transition ∈ BAM(tImpl) do
6:     for all guard ∈ GuardSet(transition) do
7:       if (guard == dsp? && dsp ∈ InDataPortSet(tImpl)) then
8:         ArincDspImpl ← tImpl.destPortMap.get(dsp.name)
9:         Call ArincDspImpl.READ_SAMPLING_MESSAGE();
......
13:    for all action ∈ ActionSet(transition) do
14:      if (action == SEND_BUFFER(rda, message) &&
15:          rda ∈ ReqDataAccessSet(tImpl)) then
16:        Call ArincBufImpl.SEND_BUFFER(rda, message);
......
The second phase of code generation aims to generate the runtime XML configuration code which will be used by the ARINC653-compliant OS for core module and partition configuration. Algorithm 4 gives the overall configuration code generation algorithm, which generates inter-partition channel (lines 5-6), partition scheduling window (lines 8-9) and partition memory requirement (lines 11-13) configuration information.
Algorithm 4: Partition XML Configuration Code Generation
1: Input: AADL653 Runtime Model Instance
2: Output: Partition XML Configuration Code
3: begin
4: for all sImpl ∈ SystemImplSet do
5:   for all (p, c, p') ∈ InterParCom(sImpl) do
6:     Call GeChannel(p, c, p');
7:   VP ← VirProSubComp(sImpl);
8:   for all vp ∈ VP do
9:     Call GeSchWindow(vp);
10:  P ← ProcessSubComp(sImpl);
11:  for all p ∈ P do
12:    m ← BindMem(p);
13:    Call GeMemReq(m);
14: end
A code generation tool which implements all the algorithms in the two-phase code generation process has been developed to generate ARINC653 systems automatically.
V. CASE STUDY
In this section, we take a simplified multi-task flight application [10] as a case study to illustrate the approach proposed in this paper. This example is a scenario concerned with an aircraft system computing position and fuel information through multi-task collaboration. It consists of four periodic tasks Pos, Fuel, Para and Fsam, each with a 60ms period, and a shared data Global_params, as shown in Figure 3.
respectively generated in the communication subprogram calls (e.g. updatePos.pos -> updatePos.updpos -> upd_posMes). The task sends a position request (reqPos_refresh) to the task Para for global parameter refreshment. Then, it waits for the notification of the end of refreshment (refresh_end?) from the task Para.
### TABLE 3. AADL BEHAVIOR MODEL OF THE TASK PositionIndicator
First, Figure 4 shows the corresponding AADL653 task communication model instance after the first-level transformation from Figure 3. For example, the tasks Pos and Para, which interact through abstract event data ports, are converted to communicate through a buffer resource (buff2) under an intra-partition config. For the second-level transformation, Table 4 shows the textual AADL653 task behavior model of the task Pos, transformed from Table 3. We can see that explicit intra-partition communication subprogram calls (e.g. send_buff2!) are respectively generated in the subprogram call sequence and in the transition action part. Besides, the data exchange (e.g. a parameter passing connection ParaCon1 between the subprograms getData and send_buff2!) on the ARINC653 platform is also generated in the connections part of the thread.
Next, the model after the two-level transformation is integrated into a specified AADL653 Runtime Model instance as shown in Figure 5. It shows a core module with the On_Flight and Info_Collection partitions communicating over a sampling channel. The two partitions are respectively bound to separate memories to enforce spatial partitioning, and to unique virtual processors to implement temporal partitioning, with allocated 40ms and 20ms scheduling windows in a 60ms major frame. The integrated model is then verified as schedulable with the Cheddar tool [8]. Finally, the RT-Java partition initialization and task behavior code, as well as the XML configuration code, can be generated automatically from this verified model.
We choose the ARINC653-compatible OS VxWorks653 [11] and a safety-critical commercial RT-Java virtual machine, Jamaica VM for VxWorks653 [12], as our experiment environment. We first compile the previously generated code to binaries and run it in the VxWorks653 simulator (VxSim). Part of the preliminary output is shown in Table 5. We can see that the required partition resources (e.g. the buffer resource buff2) and concurrent entities (e.g. the task Pos) are created successfully, and the global synchronized interaction sequence among the tasks is as expected; e.g. the task Pos with the highest priority waits for the event notification until the task Para with the lowest priority calls the SET_EVENT method (lines 19-20). The results demonstrate the validity of our approach.
TABLE 5. EXPERIMENT RESULT EXCERPT.
1[VxWorks653]: creating the task Pos in Partition OF;
2[VxWorks653]: creating the task Fuel in Partition OF;
4[VxWorks653]: creating the source sampling port ssp1;
5[VxWorks653]: creating the buffer buff2;
11[VxWorks653]: the Partition OF enters into NORMAL mode;
12[VxWorks653]: the task Pos is started;
13[VxWorks653]: SEND BUFFER of buff2 by Pos
14[VxWorks653]: the task Fuel is started;
15[VxWorks653]: the task Para is started;
16[VxWorks653]: RECEIVE_BUFFER of buff2 by Para;
17[VxWorks653]: WAIT SEMAPHORE of sem1 by Para;
18[VxWorks653]: SIGNAL_SEMAPHORE of sem1 by Para;
19[VxWorks653]: DISPLAY_BLACKBOARD of board1 by Para;
20[VxWorks653]: SET EVENT of evtl by Para;
21[VxWorks653]: WAIT EVENT of evtl by Pos;
......
VI. RELATED WORK
Some studies have been done on safety-critical embedded system development based on model-driven engineering. Esterel Technologies proposes correct-by-construction methods and develops the SCADE tool to support the automated production of a large part of the software development life-cycle elements [13]. But it mainly focuses on domain-specific applications independent of platforms, whereas we pay attention to applications constructed on the ARINC653 platform. Hugues et al. present rapid prototyping of distributed real-time embedded systems using AADL and Ocarina [14]. They discuss a mapping from AADL models describing distributed embedded systems to high-integrity Ada/C code, while our work targets the automatic generation of safety-critical ARINC653 systems based on the AADL653 model. Delange et al. propose an AADL-based model-driven method to model, validate and implement ARINC653 systems [15]. It is similar to our work, but there are three differences. First, it models the ARINC653 architecture using the AADL ARINC653 annex, which has some shortcomings as stated previously, while our work is based on the formal AADL653 language we defined to provide an accurate and complete ARINC653-compliant AADL model. Second, it suggests using the ARINC653 annex to manually build the AADL model for the ARINC653 system directly, while we separate the domain-specific AADL model from the platform-specific AADL653 model and provide a transformation approach to automatically build the AADL653 model. Third, it focuses on a C code generation strategy for partitioned systems and only discusses a simple inter-task communication code mapping strategy, whereas we focus on object-oriented ARINC653 code generation and give a detailed code generation algorithm for an ARINC653 process's internal dynamic behavior, suitable for multi-task collaboration scenarios.
VII. CONCLUSIONS
In this paper we have presented an AADL-based integrated development process for ARINC653-based avionics systems, to reduce the complexity and improve the reliability of this work. An automatic model transformation and a safety-critical code generation approach are presented in detail. A simplified multi-task flight application is given as a case study to illustrate and show the validity of our approach. In future work, we will verify the correctness of the model transformation and code generation to facilitate the certification of ARINC653 systems as far as possible.
ACKNOWLEDGMENT
This work is partially supported by National Natural Science Foundation of China (NSFC) under Grant No.61003017.
REFERENCES
Ying Wang received her B.E. degree in 2005 from the National University of Defense Technology. She is now a Ph.D. candidate at the Department of Computer Science, Beihang University. Her research interests are software engineering, embedded real-time systems and service-oriented computing.
Dianfu Ma is a professor in the State Key Laboratory of Software Development Environment, Beihang University, China. His research interests are severe-environment computing, embedded real-time systems and service-oriented computing.
D5.3.2 Real-time Stream Media Processing Platform and Cloud-based Deployment - v2
Alex Simov (Ontotext)
Abstract
FP7-ICT Strategic Targeted Research Project TrendMiner (No. 287863)
Deliverable D5.3.2 (WP 5)
This document presents the final version of the TrendMiner integrated platform. It focuses on the new developments and improvements of the platform since M24. This includes: integration of new components; updates of existing components; customizations for specific use-case scenarios (finance); and adaptation of the system for use in a more general context (extended use-cases).
Keyword list: Integration, REST, scalability, cloud computing
Nature: Prototype
Contractual date of delivery: M36
Reviewed By: Márton Miháltz (RILMTA); Thierry Declerck (DFKI)
Web links: n/a
TrendMiner Consortium
This document is part of the TrendMiner research project (No. 287863), partially funded by the FP7-ICT Programme.
DFKI GmbH
Language Technology Lab
Stuhlsatzenhausweg 3
D-66123 Saarbrücken
Germany
Contact person: Thierry Declerck
E-mail: [email protected]
University of Southampton
Southampton SO17 1BJ
UK
Contact person: Mahesan Niranjan
E-mail: [email protected]
Internet Memory Research
45 ter rue de la Révolution
F-93100 Montreuil
France
Contact person: France Lafarges
E-mail: [email protected]
Eurokleis S.R.L.
Via Giorgio Baglivi, 3
Roma RM
00161 Italia
Contact person: Francesco Bellini
E-mail: [email protected]
University of Sheffield
Department of Computer Science
Regent Court, 211 Portobello St.
Sheffield S1 4DP, UK
Tel: +44 114 222 1930
Fax: +44 114 222 1810
Contact person: Kalina Bontcheva
E-mail: [email protected]
Ontotext AD
Polygraphia Office Center fl.4,
47A Tsarigradsko Shosse,
Sofia 1504, Bulgaria
Contact person: Atanas Kiryakov
E-mail: [email protected]
Sora Ogris and Hofinger GmbH
Bennogasse 8/2/16
1080 Wien Austria
Contact person: Christoph Hofinger
E-mail: [email protected]
Hardik Fintrade Pvt Ltd.
227, Shree Ram Cloth Market,
Opposite Manilal Mansion,
Revdhi Bazar, Ahmedabad 380002
India
Contact person: Suresh Aswani
E-mail: [email protected]
DAEDALUS - DATA, DECISIONS AND LANGUAGE, S. A.
C/ López de Hoyos 15, 3º, 28006 Madrid,
Spain
Contact person: José Luis Martínez Fernández
Email: [email protected]
Institute of Computer Science Polish Academy of Sciences
5 Jana Kazimierza Str., Warsaw, Poland
Contact person: Maciej Ogrodniczuk
E-mail: [email protected]
Universidad Carlos III de Madrid
Av. Universidad, 30, 28911, Madrid, Spain
Contact person: Paloma Martínez Fernández
E-mail: [email protected]
Research Institute for Linguistics of the Hungarian Academy of Sciences
Benczúr u. 33., H-1068 Budapest, Hungary
Contact person: Tamás Váradi
Email: [email protected]
Executive Summary
In this document we present the final version of the TrendMiner integrated platform. It focuses on the new developments and improvement of the platform since M24. This includes:
- integration of new components;
- updates of existing components;
- customizations for specific use-cases scenarios (finance);
- and adapting the system to be used in a more general context (extended use-cases).
The whole platform has been deployed on the Amazon EC2 Cloud\(^1\) and it is accessible for all consortium members.
---
\(^1\) [http://aws.amazon.com/ec2/](http://aws.amazon.com/ec2/)
Contents
Executive Summary
Contents
List of abbreviations
List of figures
1 Introduction
2 Integrated Platform Overview
3 New components integration
3.1 Mimir Integration
Mimir Overview
Mimir Integration
3.2 Clustering Service Integration
Clustering overview
Clustering management layer & data workflow
New Clustering Service API
RDF model for clusters
Clustering Browser (UI)
3.3 Summarization Service Integration
4 Extended Use-cases Architecture
4.1 Data model for social media resources
4.2 New APIs for direct data warehouse management
RDF data import
RDF repository reset
5 Use-cases demonstrators deployment
6 Extension for financial data support (WP6)
6.1 CSV Import Service API
6.2 Data access APIs
7 Cloud Deployment
Conclusions
List of abbreviations
<table>
<thead>
<tr>
<th>Abbreviation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>UX</td>
<td>User Experience</td>
</tr>
<tr>
<td>REST</td>
<td>REpresentational State Transfer</td>
</tr>
<tr>
<td>RDF</td>
<td>Resource Description Framework</td>
</tr>
<tr>
<td>CPU</td>
<td>Central Processing Unit</td>
</tr>
<tr>
<td>API</td>
<td>Application Programming Interface</td>
</tr>
<tr>
<td>MG4J</td>
<td>Managing Gigabytes for Java</td>
</tr>
</tbody>
</table>
List of figures
Figure 1. TrendMiner overview
Figure 2. TrendMiner integrated platform
Figure 3. Mimir life cycle
Figure 4. Clustering management components
Figure 5. Clustering RDF model
Figure 6. Clustering browser
Figure 7. Common architecture for the extended use-cases
Figure 8. Financial data and social media activity
1 Introduction
TrendMiner provides a platform for distributed and real-time processing over social media streams. The platform covers all phases of the social media stream processing lifecycle: large scale data collection\(^2\), multilingual information extraction and entity linking, sentiment extraction, trend detection, summarization and visualisation (Figure 1).

This document presents the updates to the platform since M24; familiarity with D5.3.1\(^3\) is strongly encouraged. For the sake of brevity, the general requirements and tasks sections are omitted here. The document focuses on the final implementation of the platform.
---
\(^2\) Data Collection and storage in TrendMiner is the focus of T5.1, and described in D5.1.1 “Real-time Stream Media Collection, v.1”
\(^3\) [http://www.trendminer-project.eu/images/d5.3.1.pdf](http://www.trendminer-project.eu/images/d5.3.1.pdf)
2 Integrated Platform Overview
From a higher-level point of view, there are no significant changes in the architecture since the M24 prototype (Figure 2). The general workflow also stays more or less the same; however, significant effort has been put into internal optimisation of the platform's performance.
The new features of the current version of the platform include:
- Integration of new components (the Mímir framework)
- Updates of existing components supporting extended features (clustering, summarization)
- Opening the platform for third party contributing components (extended use-case partners tools)

3 New components integration
3.1 Mímir Integration
Mímir[^4] is an integrated semantic search framework, which offers indexing and search over full text, document structure, document metadata, linguistic annotations, and any linked, external semantic knowledge bases. It supports hybrid queries that arbitrarily mix full-text, structural, linguistic and semantic constraints. Its key distinguishing feature is the containment operators that allow flexible creation and nesting of full-text, structural, and semantic constraints, as well as the support for interactive knowledge discovery.

[^4]: http://gate.ac.uk/mimir/
To improve the suitability of Mímir for large amounts of streaming data, we implemented live indexes that can accept new documents for indexing at the same time as they serve queries based on the documents already indexed. Mímir relies on MG4J\(^5\) for the implementation of on-disk indexes, and MG4J indexes are designed to be read-only, in order to maximize performance. Given these constraints, the approach we used was to index incoming documents in batches, and make each batch searchable as soon as it’s produced.

**Mímir Integration**
The integration of Mímir with the rest of the platform can be achieved at two levels - data provisioning and data querying & visualization. For the latter we decided on a lightweight integration approach and kept the interaction with the two existing use-case frontends independent. Still, the user is presented with a single entry point to the system and the option to choose which sub-system to use. The Mímir UI offers sophisticated search mechanisms over various aspects of the data, but this comes at the price of a certain expertise required from the user. The TrendMiner UI, on the other hand,
---
\(^5\) See [http://java-source.net/open-source/search-engines/mg4j](http://java-source.net/open-source/search-engines/mg4j) for more details.
provides simplified means for data exploration, based entirely on UI widgets and controls. On the representational level, the strength of Mímir is to deliver results efficiently from arbitrarily complex queries, whereas the TrendMiner UI focuses on the analytical aspect of the data (aggregation and summarization) and provides drill-down data exploration support.
The data provisioning integration is approached by extending the metadata pre-processing components so that they can send the final results to Mímir for indexing. The component interaction is based on the pre-existing HTTP API that Mímir exposes for document indexing. From the platform side, the results from all pre-processing components are packaged as a GATE document and passed to the API endpoint URL. This way, any new data entering the system is synchronised immediately with the Mímir indexes.
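A client of such an HTTP indexing API could look roughly as follows; the endpoint URL and the content type are assumptions made for illustration, since the exact Mímir indexing path and payload format are not specified here.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: send one serialized (e.g. GATE XML) document to an indexing endpoint.
public class MimirIndexClient {
    public static int index(String endpointUrl, String serializedDocument) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpointUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/xml; charset=UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(serializedDocument.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();   // e.g. 200 if the document was accepted for indexing
    }
}
```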
3.2 Clustering Service Integration
Clustering overview
Spectral clustering is a clustering algorithm that has been shown to achieve state-of-the-art performance for a range of tasks, from image segmentation to community detection. This method treats the clustering problem as one of graph partitioning on the similarity graph between objects. The algorithm projects the objects via Singular Value Decomposition into a reduced space which aims for maximal separation of clusters. This way, spectral clustering is useful when data dimensionality is high. It is also particularly useful when the clusters cannot be discovered using a spherical metric, as is assumed, for example, by k-means. The algorithm is also appealing because it is supported by spectral graph theory. More details on the algorithm and its implementation are available in D3.2.1\(^6\).

The code of the clustering implementation and its packaging as a RESTful web service is available as open source on GitHub\(^7\).
Clustering management layer & data workflow
In order to enable the end users to utilize the complete power of the clustering service implementation we built an additional layer of components supporting various clustering tasks accessible directly through the UX layer. Such tasks include:
- constructing new clustering sets based on custom parameters;
- exploring clustering results;
- running parallel clusterings;
- sharing or removing clustering results.
The following diagram (Figure 4) reveals the major functional components of the clustering management layer. All activities and tasks are initiated from the UI frontend, refined by the UX Data Service component. Tasks related to data exploration are driven mainly by a Clustering Manager responsible for fetching the proper data from the data warehouse and delivering it in suitable form for visualization.
---
6 http://www.trendminer-project.eu/images/d3.2.1.pdf
7 https://github.com/danielpreotiuc/trendminer-clustering
The process of creating new clustering sets involves more components playing their role in a complex data processing chain. Here the role of the Clustering Manager is to trigger the proper clustering task, described as a set of parameters, and to orchestrate the whole subsequent process. The clustering task instructs the Data Extractor how to collect the input data for clustering based on parameters like the type of sources and the time frame. The extractor collects the desired data and transforms it into the format the spectral clustering implementation requires. The result is handed to the RDFizer component, which parses it and transforms it into a format suitable for efficient consumption by the rest of the components. The actual model and representation are discussed in the following sections. The final result is loaded into the data warehouse, which makes it immediately available to the UX.
The clustering computation task can be arbitrarily complex (the user can select an arbitrary amount of input data), which might cause certain performance issues for the platform. To cope with this possible problem, we built a service around the components involved, thus decoupling them from the rest of the components. The communication and workflow are implemented in an asynchronous way, utilizing task queues to handle arbitrary spikes of load. If necessary, the service can further be spawned on a cluster of computing nodes/machines to avoid processing bottlenecks.
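The asynchronous decoupling can be sketched with a plain single-worker executor, as below; this is only an illustration of the queueing idea, not the component's actual implementation.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of the task-queue idea: clustering jobs are accepted immediately
// and executed one at a time, so load spikes only lengthen the internal queue.
public class ClusteringTaskQueue {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public Future<?> submit(Runnable clusteringTask) {
        return worker.submit(clusteringTask);   // returns at once; the job runs later
    }

    public void shutdown() {
        worker.shutdown();                      // finish queued jobs, accept no new ones
    }
}
```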
The service API specification is described in the following section (New Clustering Service API).

**New Clustering Service API**
The following table provides the API specification of the integrated clustering creation service, comprising the source data selection, the actual cluster computation and the transformation of the results into the RDF model data (next section).
<table>
<tbody>
<tr><td>URL</td><td>/trendminer/clustering</td></tr>
<tr><td>HTTP Method</td><td>POST</td></tr>
<tr><td>Parameters</td><td><b>sesameURL</b>: the location of the source data and the result storage location<br/><b>source</b>: filter by source (twitter, facebook, news, blogs)<br/><b>fromDate</b>: start of the selected period (YYYY-MM-DD)<br/><b>toDate</b>: end of the period (YYYY-MM-DD)</td></tr>
</tbody>
</table>
The clustering computation in general can be a heavy and time-consuming task, therefore the service invocation is done asynchronously. On a service request, all input parameters are validated, a clustering computation task is scheduled for execution and the service generates a successful response. The actual computation is started as soon as any preceding tasks are completed.
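A caller of this service might look roughly like the following sketch; the host name and the form-encoded parameter passing are assumptions made for illustration, only the path and parameter names come from the table above.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Illustrative client that triggers an asynchronous clustering computation.
public class ClusteringServiceClient {
    public static int startClustering(String sesameUrl, String source,
                                      String fromDate, String toDate) throws Exception {
        String body = "sesameURL=" + URLEncoder.encode(sesameUrl, "UTF-8")
                + "&source=" + URLEncoder.encode(source, "UTF-8")
                + "&fromDate=" + URLEncoder.encode(fromDate, "UTF-8")
                + "&toDate=" + URLEncoder.encode(toDate, "UTF-8");
        URL url = new URL("http://localhost:8080/trendminer/clustering");  // placeholder host
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode();   // the service answers immediately; the work runs asynchronously
    }
}
```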
**RDF model for clusters**
In order to represent the results from the clustering computation and to have flexible means for managing them, we define a simple light-weight conceptual model for representing clusters and their properties. The following diagram represents the main classes with their properties and relations.

Based on the results produced by the spectral clustering service implementation, we identified three major types of entities:
- *ClusterSet* - a labelled container for the clusters resulting from a single clustering service invocation. Each cluster set can be opened for exploration in the UI.
- *Cluster* - a labelled set of terms, forming a common topic
- *Word* - representation of a single term within a cluster with its relevance score.
On the ground level this model is represented as RDF graphs and stored in the RDF metadata store. Any operation on the clustering model is based on querying and updating the RDF data. For demonstrative purposes we include a fragment of clustering data represented in RDF (Turtle):
```
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
@prefix tm-cluster: <http://trendminer.eu/cluster#>.
<http://trendminer.eu/usecases/sora/cs_sora2013> a tm-cluster:ClusterSet ;
rdfs:label "Twitter data april 2013" ;
tm-cluster:hasCluster <http://trendminer.eu/usecases/sora/cs_sora2013/cl_1> ;
tm-cluster:hasCluster <http://trendminer.eu/usecases/sora/cs_sora2013/cl_2> ;
tm-cluster:hasCluster <http://trendminer.eu/usecases/sora/cs_sora2013/cl_3> .
<http://trendminer.eu/usecases/sora/cs_sora2013/cl_1> a tm-cluster:Cluster ;
rdfs:label "1" ;
tm-cluster:hasWord <http://trendminer.eu/usecases/sora/cs_sora2013/cl_1/%23arnautovic> ;
tm-cluster:hasWord <http://trendminer.eu/usecases/sora/cs_sora2013/cl_1/%23autfar> ;
tm-cluster:hasWord <http://trendminer.eu/usecases/sora/cs_sora2013/cl_1/%23autger> ;
```
---
8 This allows to consider a cluster as an instance that can be associated with any other object in the triple store.
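Because the cluster model is stored as plain RDF, clients can read it back with a SPARQL query; the sketch below uses the Sesame 2 HTTP repository API with a placeholder repository URL and the cluster-set URI from the fragment above.

```java
import org.openrdf.query.BindingSet;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQueryResult;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.http.HTTPRepository;

// Illustrative client that lists the clusters of one cluster set from the RDF model.
public class ClusterSetReader {
    public static void main(String[] args) throws Exception {
        Repository repo = new HTTPRepository("http://localhost:8080/openrdf-sesame/repositories/trendminer");
        repo.initialize();
        RepositoryConnection con = repo.getConnection();
        try {
            String query =
                "PREFIX tm-cluster: <http://trendminer.eu/cluster#> " +
                "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
                "SELECT ?cluster ?label WHERE { " +
                "  <http://trendminer.eu/usecases/sora/cs_sora2013> tm-cluster:hasCluster ?cluster . " +
                "  ?cluster rdfs:label ?label . }";
            TupleQueryResult result =
                con.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate();
            while (result.hasNext()) {
                BindingSet row = result.next();
                System.out.println(row.getValue("cluster") + "  " + row.getValue("label"));
            }
            result.close();
        } finally {
            con.close();
            repo.shutDown();
        }
    }
}
```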
3.3 Summarization Service Integration
The summarization service developed as part of WP4 (D4.1.2) has been integrated in the TrendMiner platform providing ranking facilities over the user data of interest (tracks). The application of the service is initiated by the user over the current result set to deliver the topmost representative tweets or other types of resources.
The service itself exposes a RESTful API which accepts a collection of textual resources in Twitter JSON format and responds with a ranked list of the same resources. The input preparation includes extracting the textual content of the resources as well as any entities and topics detected in the text during the pre-processing phase. The result (ranking list) is applied directly to the representation layer without any further actions.
4 Extended Use-cases Architecture
The inclusion of the new use-cases during the third year caused certain changes to the architecture of the platform, but as a whole the backbone of the system is preserved. No major changes to the general workflow were required. A significant direction of change is related to improving the openness of the system: we exposed more APIs for accessing and managing different sub-parts of the system, and we built cleaner and more formal models for resources and other moving parts of the platform. The rationale for these changes is that the text mining tools developed by the second year of the project do not completely cover the languages required by the new partners. On the other hand, the new partners came with their own means for handling these language-specific tasks, as well as custom data collection approaches (different sources and selection criteria).
Thus a reasonable integration approach was to replace the whole text pre-processing chain up to the point where the language specifics are abstracted away from the processing components (for example spectral clustering, text summarisation, ontology-based search).
Having analyzed case-by-case the needs and the abilities of each of the use-case partners, we identified the entry point of the data warehouse as the most appropriate border line between use-case specific tools and the common workflow components. The following Figure 7 represents the common architecture applicable to the extended use-cases workflow.

In order to implement this architecture along with the mainstream TrendMiner platform we had two major tasks:
- clean formal model for the resources entering the data warehouse
- exposing the data warehouse as independent component with appropriate access APIs
### 4.1 Data model for social media resources
Having been defined only as an abstract model in the previous version of this deliverable, the model is described here in more detail in terms of its actual data modelling and representation. Previously this model was internal to the platform, so the actual details were not essential. Now, as we provide direct access to the data warehouse, the detailed descriptions here play an important role for anyone willing to publish annotated resources to the platform. For a more coherent description we start with a short summary of the model we presented in D5.3.1:
<table>
<thead>
<tr>
<th>Property</th>
<th>Description</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>identifier of the resource</td>
<td>11050918246</td>
</tr>
<tr>
<td>author</td>
<td>Author name (and URI if it can be resolved)</td>
<td>David MacLean</td>
</tr>
<tr>
<td>url</td>
<td>Original source on the Web</td>
<td><a href="http://original.web.pg">http://original.web.pg</a></td>
</tr>
<tr>
<td>topics</td>
<td>Related keywords</td>
<td>Labour, Budget, etc.</td>
</tr>
<tr>
<td>time</td>
<td>time of issuing or collecting the resource</td>
<td>2010-03-25T22:05:40</td>
</tr>
<tr>
<td>text</td>
<td>The actual text content of the resource</td>
<td>"Budget 2010: Labour is stealing from our children's future to buy votes ..."</td>
</tr>
<tr>
<td>source</td>
<td>the source the resource is retrieved from</td>
<td>Twitter, facebook, blogs, ...</td>
</tr>
<tr>
<td>sentiment</td>
<td>sentiment/polarity value</td>
<td>float number, negative or positive</td>
</tr>
<tr>
<td>location</td>
<td>geo-location related to the resource (location of origin)</td>
<td><a href="http://dbpedia.org/resource/London">http://dbpedia.org/resource/London</a></td>
</tr>
<tr>
<td>language</td>
<td>"en", "de", ....</td>
<td></td>
</tr>
</tbody>
</table>
The concrete realization of this model in RDF is presented in the following table. We aimed for a simple and comprehensible representation that reuses popular and well-accepted vocabularies. Thus our internal RDF model is based on the Dublin Core Metadata Initiative with some minor additions covering the specific modelling aspects. To improve readability, we adopt the following namespaces in the text:
- `dc-terms: <http://purl.org/dc/terms/>`
- `dc: <http://purl.org/dc/elements/1.1/>`
- `tm: <http://trendminer.eu/>`
- `xsd: <http://www.w3.org/2001/XMLSchema#>`
---
9 http://dublincore.org/
RDF model for **Social Media Resources:**
<table>
<thead>
<tr>
<th>Property</th>
<th>Description</th>
<th>Range</th>
<th>Required?</th>
<th>Multi-valued?</th>
</tr>
</thead>
<tbody>
<tr>
<td>dc:identifier</td>
<td>Internal identifier for the resource</td>
<td>literal</td>
<td>optional</td>
<td>no</td>
</tr>
<tr>
<td>dc:originLocation</td>
<td>Original location on the Web (if available)</td>
<td>URI</td>
<td>optional</td>
<td>no</td>
</tr>
<tr>
<td>dc-terms:creator</td>
<td>The author of the resource</td>
<td>literal</td>
<td>required</td>
<td>no</td>
</tr>
<tr>
<td>dc-terms:subject</td>
<td>Keywords/topics associated with the resource</td>
<td>literal</td>
<td>optional</td>
<td>no</td>
</tr>
<tr>
<td>dc:hashtag</td>
<td>Any hashtags found in the text (including the hash sign)</td>
<td>literal</td>
<td>optional</td>
<td>yes</td>
</tr>
<tr>
<td>dc:sentiment</td>
<td>Sentiment/polarity value, preferably in the range [-1.0, 1.0]</td>
<td>xsd:float</td>
<td>optional</td>
<td>no</td>
</tr>
<tr>
<td>dc:description</td>
<td>The original resource text</td>
<td>literal</td>
<td>required</td>
<td>no</td>
</tr>
<tr>
<td>dc:date</td>
<td>The time stamp of the resource creation</td>
<td>xsd:dateTime</td>
<td>required</td>
<td>no</td>
</tr>
<tr>
<td>dc:source</td>
<td>The source of the resource: twitter.com, facebook.com, blogs, news</td>
<td>literal</td>
<td>required</td>
<td>no</td>
</tr>
<tr>
<td>dc-terms:references</td>
<td>Mentions found in the text</td>
<td>URI</td>
<td>optional</td>
<td>yes</td>
</tr>
<tr>
<td>dc-terms:ref_lab</td>
<td>The literal surface-level representations of the mentions above; they should come in pairs with dc-terms:references</td>
<td>literal</td>
<td>optional</td>
<td>yes</td>
</tr>
<tr>
<td>dc:language</td>
<td>Language id: 'en', 'it', 'de', 'es' ...</td>
<td>literal</td>
<td>required</td>
<td>no</td>
</tr>
<tr>
<td>dc:location</td>
<td>The geo location of resource origin (if available)</td>
<td>literal</td>
<td>optional</td>
<td>yes</td>
</tr>
<tr>
<td>dc:location_uri</td>
<td>The same as above but resolved to URIs in DBpedia (if possible)</td>
<td>URI</td>
<td>optional</td>
<td>yes</td>
</tr>
<tr>
<td>tm:hasTokens</td>
<td>Whitespace separated list of tokens which might be normalized, lemmatized or whatever is necessary to feed the clustering tool properly.</td>
<td>literal</td>
<td>required for clustering tool only</td>
<td>no</td>
</tr>
</tbody>
</table>
Here follows an example representation of a single resource in RDF Turtle:
```turtle
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix dc-terms: <http://purl.org/dc/terms/> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
<http://trendminer.eu/resources#tweet_327367546749718528> dc:identifier "327367546749718528" ;
dc-terms:creator "waldydzikowski" ;
dc:description "@KrzysztofLisek Dziekuje, w koncu sie zdecydowalem;) Pozdrawiam" ;
dc:language "pl" ;
dc:sentiment "1.0"^^xsd:float ;
```
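Partners publishing their own annotated resources may find it convenient to generate this Turtle form programmatically. The following minimal Java sketch builds one such document with plain string formatting; the resource values mirror the example above and the class name is purely illustrative.

```java
public class TurtleResourceExample {
    public static void main(String[] args) {
        // Illustrative values only; a real tool would take them from its own pipeline.
        String id = "327367546749718528";
        String author = "waldydzikowski";
        String text = "@KrzysztofLisek Dziekuje, w koncu sie zdecydowalem;) Pozdrawiam";
        String lang = "pl";
        float sentiment = 1.0f;

        StringBuilder turtle = new StringBuilder();
        turtle.append("@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n");
        turtle.append("@prefix dc-terms: <http://purl.org/dc/terms/> .\n");
        turtle.append("@prefix dc: <http://purl.org/dc/elements/1.1/> .\n\n");
        turtle.append("<http://trendminer.eu/resources#tweet_").append(id).append("> ")
              .append("dc:identifier \"").append(id).append("\" ;\n")
              .append("  dc-terms:creator \"").append(author).append("\" ;\n")
              .append("  dc:description \"").append(text.replace("\"", "\\\"")).append("\" ;\n")
              .append("  dc:language \"").append(lang).append("\" ;\n")
              .append("  dc:sentiment \"").append(sentiment).append("\"^^xsd:float .\n");

        // The resulting string can be sent to the RDF import service described in Section 4.2.
        System.out.println(turtle);
    }
}
```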
### 4.2 New APIs for direct data warehouse management
An important step in opening the platform to third-party contributions is providing means for direct access to the operational data in the warehouse. At the same time, the integrity and consistency of the system data must be guaranteed. To meet these requirements we provide limited-access APIs on top of the data warehouse.
**RDF data import**
This service enables publishing RDF data directly into the data warehouse (OWLIM), where it becomes immediately available in the UI demonstrator. The service can be called on a regular basis to ensure live updates of the data.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value and Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>/tm-data-service-v3/rdf</td>
</tr>
<tr>
<td>Method</td>
<td>POST</td>
</tr>
<tr>
<td>Input Parameters</td>
<td>Content-Type - any valid RDF serialization format. Valid values are: application/rdf+xml, text/plain, text/turtle, text/rdf+n3, text/x-nquads, application/rdf+json, application/trix, application/x-trig, application/x-binary-rdf</td>
</tr>
<tr>
<td>Response</td>
<td>Success confirmation or error report (HTTP codes)</td>
</tr>
</tbody>
</table>
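To illustrate how a partner tool might call this endpoint, here is a hedged Java sketch using the standard java.net.http client (Java 11+); the host name is a placeholder and the tiny Turtle payload is only an example.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RdfImportExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host; each deployment exposes its own base URL.
        String endpoint = "http://example-uc.ontotext.com/tm-data-service-v3/rdf";

        String turtle =
            "@prefix dc: <http://purl.org/dc/elements/1.1/> .\n" +
            "<http://trendminer.eu/resources#tweet_1> dc:description \"example text\" ;\n" +
            "  dc:language \"en\" .\n";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "text/turtle")          // any supported RDF serialization
                .POST(HttpRequest.BodyPublishers.ofString(turtle))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The service answers with HTTP status codes (success confirmation or error report).
        System.out.println("HTTP " + response.statusCode());
    }
}
```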
**RDF repository reset**
This service can be used to remove all the documents from the data store resetting it to its initial state. This functionality is supposed to be used mainly during the development phase by the use-case partners responsible for delivering their annotated data directly in the store.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value and Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>/tm-data-service-v3/admin/resetstore</td>
</tr>
<tr>
<td>Method</td>
<td>DELETE</td>
</tr>
<tr>
<td>Input parameters</td>
<td>none</td>
</tr>
<tr>
<td>Response</td>
<td>Success confirmation or error report (HTTP codes)</td>
</tr>
</tbody>
</table>
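A corresponding sketch for the reset call, again with a placeholder host; it should only be pointed at a development deployment.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResetStoreExample {
    public static void main(String[] args) throws Exception {
        // Placeholder host; removes all documents from the store, so use with care.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example-uc.ontotext.com/tm-data-service-v3/admin/resetstore"))
                .DELETE()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode()); // success confirmation or error report
    }
}
```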
5 Use-cases demonstrators deployment
Since June 2014 we have been running five separate platform deployments, one for each of the use-cases. Each of them is reachable at a URL of the form:
http://<partner_name>-uc.ontotext.com/TrendMiner/tracks.html
where <partner_name> is one of: sora, ek, ipipan, riltna, daedalus.
During the development phase each partner has separate credentials to avoid accidental damage to another partner's data. Each demonstrator is configured separately depending on the corresponding partner's needs (recall Figure 2 and Figure 7).
6 Extension for financial data support (WP6)
For the purpose of the financial use-case it is of great importance to be able to align stock market information with social media activities (Figure 8). In this way it is easy to analyze the correlation between prices and hot topics and trends over time.
The financial data consists of numeric tables containing stock exchange market data on a daily basis (companies, prices, volumes, indexes, etc.). The data is represented as tabular (CSV) files, for which we had to provide a separate API for upload and querying.

### 6.1 CSV Import Service API
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value & Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>/tm-anno-service/stockdata</td>
</tr>
<tr>
<td>HTTP Method</td>
<td>POST</td>
</tr>
<tr>
<td>Input Parameters</td>
<td>Content-Type - 'text/csv'<br/>Company - the name of the company or index to which the data relates<br/>CSV content as request body; the format is three columns: date, volume, and price.<br/>Example:<br/>17/04/2014,10154800,524.94<br/>16/04/2014,7670200,519.01</td>
</tr>
<tr>
<td>Response</td>
<td>Success confirmation or error report (HTTP codes)</td>
</tr>
</tbody>
</table>
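A hedged sketch of an upload call follows; since the table above does not state exactly how the Company value is transmitted, passing it as a company query parameter is an assumption made only for this example.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class StockDataUploadExample {
    public static void main(String[] args) throws Exception {
        // CSV body: date, volume, price (one row per trading day).
        String csv = "17/04/2014,10154800,524.94\n"
                   + "16/04/2014,7670200,519.01\n";

        // Placeholder host; the 'company' query parameter is an assumption.
        String company = URLEncoder.encode("ExampleCorp", StandardCharsets.UTF_8);
        String endpoint = "http://example-uc.ontotext.com/tm-anno-service/stockdata?company=" + company;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint))
                .header("Content-Type", "text/csv")
                .POST(HttpRequest.BodyPublishers.ofString(csv))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
    }
}
```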
### 6.2 Data access APIs
Listing all companies for which the platform has financial data uploaded.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value & Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>/tm-data-service-v3/companies</td>
</tr>
<tr>
<td>HTTP method</td>
<td>GET</td>
</tr>
<tr>
<td>Input parameters</td>
<td>-</td>
</tr>
<tr>
<td>Response</td>
<td>JSON array of company names or error message</td>
</tr>
</tbody>
</table>
Accessing detailed information for a company in a certain time interval.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value & Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>URL</td>
<td>/tm-data-service-v3/companies/{company-id}/stockdata/</td>
</tr>
<tr>
<td>HTTP method</td>
<td>GET</td>
</tr>
<tr>
<td>Input parameters</td>
<td>company-id - the company for which data is requested<br/>from - start of time period<br/>to - end of time period<br/>Time format is: yyyy-MM-dd'T'HH:mm:ss</td>
</tr>
<tr>
<td>Response</td>
<td>JSON object containing the information required by the timeline diagram to plot the volumes and prices for the specified period</td>
</tr>
</tbody>
</table>
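The following sketch exercises both read APIs; the host, the company name, and the use of from/to as query parameters are assumptions made for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StockDataQueryExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://example-uc.ontotext.com/tm-data-service-v3"; // placeholder host

        // 1. List all companies with uploaded financial data (JSON array of names).
        HttpRequest listReq = HttpRequest.newBuilder()
                .uri(URI.create(base + "/companies")).GET().build();
        System.out.println(client.send(listReq, HttpResponse.BodyHandlers.ofString()).body());

        // 2. Fetch stock data for one company over a period; passing 'from'/'to'
        //    as query parameters is an assumption about the interface.
        String detailUrl = base + "/companies/ExampleCorp/stockdata/"
                + "?from=2014-04-01T00:00:00&to=2014-04-30T23:59:59";
        HttpRequest detailReq = HttpRequest.newBuilder()
                .uri(URI.create(detailUrl)).GET().build();
        System.out.println(client.send(detailReq, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```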
7 Cloud Deployment
The following table summarizes the cloud deployment resources consumed and the purpose of each resource. The described deployment setup is capable of serving all five use-cases without significant performance drops. All machines run Linux, though this is not a strict requirement.
<table>
<thead>
<tr>
<th>Machine type</th>
<th>Purpose</th>
<th>Price category</th>
<th>Number of instances</th>
</tr>
</thead>
<tbody>
<tr>
<td>r3.2xlarge</td>
<td>Data warehouse, requiring plenty of RAM and fast I/O. Capable of serving the data for all use-cases on one host.</td>
<td>$$$</td>
<td>1</td>
</tr>
<tr>
<td>c3.2xlarge</td>
<td>Pre-processing tools (LODIE, Sentiment, Geo-location, etc.) requiring high CPU utilization and I/O, not so much RAM.</td>
<td>$$</td>
<td>2</td>
</tr>
<tr>
<td>c3.2xlarge</td>
<td>Clustering service - balance between CPU and RAM utilization requirements. Clustering is a CPU-intensive operation; however, it is used relatively rarely.</td>
<td>$$</td>
<td>1</td>
</tr>
<tr>
<td>c3.2xlarge</td>
<td>UI & Backend services - moderate CPU and Memory requirements, minimal I/O usage</td>
<td>$</td>
<td>5</td>
</tr>
<tr>
<td>r3.large</td>
<td>Mimir indexing and querying - relatively high CPU and RAM utilization</td>
<td>$$</td>
<td>1</td>
</tr>
</tbody>
</table>
Conclusions
In this document we have presented the final version of the TrendMiner integrated platform. It focuses on the new developments and improvements of the platform since M24. These include the integration of new components; updates of existing components; customizations for specific use-case scenarios (finance); and adaptation of the system for use in a more general context (extended use-cases).
The whole platform has been deployed on the Amazon EC2 Cloud and it is accessible for all consortium members.
Chapter 11
Inheritance and Polymorphism
Composition is one example of code reuse. We have seen how classes can be composed with other classes to make more sophisticated classes. In this chapter we will see how classes can be reused in a different way. Using inheritance, a programmer can make a new class out of an existing class, thereby providing a way to create objects with enhanced behavior.
11.1 Inheritance
Recall TextTrafficLight (§ 10.2). TextTrafficLight objects simulate the cycling behavior of simple, real-world traffic lights. In Intersection (§ 10.4) we modeled a four-way intersection synchronizing the operation of four TextTrafficLight objects. Suppose the traffic at our intersection has grown, and it has become difficult for vehicles to make left-hand turns because the volume of oncoming traffic is too great. The common solution is to add a left turn arrow that allows opposing traffic to turn left without interference from oncoming traffic. Clearly we need an enhanced traffic light that provides such a left turn arrow. We could copy the existing TrafficLightModel source code, rename it to TurnLightModel, and then commence modifying it so that it provides the enhanced functionality. We then could copy the TextTrafficLight code and modify it so that it can display a left turn arrow. This is not unreasonable given the size of our classes, but there are some drawbacks to this approach:
- Whenever code is copied and modified there is the possibility of introducing an error. It is always best, as far as possible, to leave working code untouched.
- If the code is copied and a latent error is discovered and fixed in the original code, the copied code should be repaired as well. The maintainers of the original code may not know who is using copies of their code and, therefore, cannot notify all concerned parties of the change.
Object-oriented languages provide a mechanism that addresses both of these issues. New classes can be built from existing classes using a technique known as inheritance, or subclassing. This technique is illustrated in TurnLightModel (§ 11.1):
```java
public class TurnLightModel extends TrafficLightModel {
    // Add a new state indicating left turn
    public static int LEFT_TURN = 1000;

    // Creates a light in the given initial state
    public TurnLightModel(int initialState) {
        super(initialState);
    }

    // Add LEFT_TURN to the set of valid states
    public boolean isLegalState(int potentialState) {
        return potentialState == LEFT_TURN
               || super.isLegalState(potentialState);
    }

    // Changes the state of the light to its next state in its normal cycle.
    // Properly accounts for the turning state.
    public void change() {
        int currentState = getState();
        if (currentState == LEFT_TURN) {
            setState(GO);
        } else if (currentState == STOP) {
            setState(LEFT_TURN);
        } else {
            super.change();
        }
    }
}
```
Listing 11.1: TurnLightModel—extends the TrafficLightModel class to make a traffic light with a left turn arrow
In TurnLightModel (11.1):
- The reserved word extends indicates that TurnLightModel is being derived from an existing class—TrafficLightModel. We can say this in English various ways:
- TurnLightModel is a subclass of TrafficLightModel, and TrafficLightModel is the superclass of TurnLightModel.
- TurnLightModel is a derived class of TrafficLightModel, and TrafficLightModel is the base class of TurnLightModel.
- TurnLightModel is a child class of TrafficLightModel, and TrafficLightModel is the parent class of TurnLightModel.
- By virtue of being a subclass, TurnLightModel inherits all the characteristics of the TrafficLightModel class. This has several key consequences:
- While you do not see a state instance variable defined within the TurnLightModel class, all TurnLightModel objects have such an instance variable. The state variable is inherited from the superclass. Just because all TurnLightModel objects have a state variable does not mean the code within their class can access it directly—state is still private to the superclass TrafficLightModel. Fortunately, code within TurnLightModel can see state via the inherited getState() method and change state via setState().
– While you see neither `getState()` nor `setState()` methods defined in the `TurnLightModel` class, all `TurnLightModel` objects have these methods at their disposal since they are inherited from `TrafficLightModel`.
- `TurnLightModel` inherits the state constants OFF, STOP, CAUTION, and GO and adds a new one, LEFT_TURN. LEFT_TURN’s value is defined so as to not coincide with the previously defined state constants. We can see the values of OFF, STOP, CAUTION, and GO because they are publicly visible, so here we chose 1,000 because it is different from all of the inherited constants.
- The constructor appears to be calling an instance method named `super`:
```java
super(initialState);
```
In fact, `super` is a reserved word, and when it is used in this context it means call the superclass’s constructor. Here, `TurnLightModel`’s constructor ensures the same initialization activity performed by `TrafficLightModel`’s constructor will take place for `TurnLightModel` objects. In general, a subclass constructor can include additional statements following the call to `super()`. If a subclass constructor provides any statements of its own besides the call to `super()`, they must follow the call of `super()`.
- `TurnLightModel` provides a revised `isLegalState()` method definition. When a subclass redefines a superclass method, we say it *overrides* the method. This version of `isLegalState()` expands the set of integer values that map to a valid state. `isLegalState()` returns true if the supplied integer is equal to LEFT_TURN or is approved by the superclass version of `isLegalState()`. The expression:
```java
super.isLegalState(potentialState)
```
looks like we are calling `isLegalState()` with an object reference named `super`. The reserved word `super` in this context means execute the superclass version of `isLegalState()` on behalf of the current object. Thus, `TurnLightModel`’s `isLegalState()` adds some original code (checking for LEFT_TURN) and reuses the functionality of the superclass. It in essence does what its superclass does plus a little extra.
Recall that `setState()` calls `isLegalState()` to ensure that the client does not place a traffic light object into an illegal state. `TurnLightModel` does not override `setState()`—it is inherited as is from the superclass. When `setState()` is called on a pure `TrafficLightModel` object, it calls the `TrafficLightModel` class’s version of `isLegalState()`. By contrast, when `setState()` is called on behalf of a `TurnLightModel` object, it calls the `TurnLightModel` class’s `isLegalState()` method. This ability to “do the right thing” with an object of a given type is called *polymorphism* and will be addressed in §11.4.
- The `change()` method inserts the turn arrow state into its proper place in the sequence of signals:
- red becomes left turn arrow,
- left turn arrow becomes green, and
- all other transitions remain the same (the superclass version works fine)
Like `isLegalState()`, it also reuses the functionality of the superclass via the `super` reference.
Another interesting result of inheritance is that a `TurnLightModel` object will work fine in any context that expects a `TrafficLightModel` object. For example, a method defined as

public static void doTheChange(TrafficLightModel tlm) {
    System.out.println("The light changes!");
    tlm.change();
}

obviously accepts a TrafficLightModel reference as an actual parameter, because its formal parameter is declared to be of type TrafficLightModel. What may not be so obvious is that the method will also accept a TurnLightModel reference. Why is this possible? A subclass inherits all the capabilities of its superclass and usually adds some more. This means anything that can be done with a superclass object can be done with a subclass object (and the subclass object can probably do more). Since any TurnLightModel object can do at least as much as a TrafficLightModel, the tlm parameter can be assigned a TurnLightModel just as easily as a TrafficLightModel. The doTheChange() method calls change() on the parameter tlm. tlm can be an instance of TrafficLightModel or any subclass of TrafficLightModel. We say an *is a* relationship exists from the TurnLightModel class to the TrafficLightModel class. This is because any TurnLightModel object is a TrafficLightModel object.

In order to see how our new turn light model works, we need to visualize it. Again, we will use inheritance and derive a new class from an existing class, TextTrafficLight. TextTurnLight (11.2) provides a text visualization of our new turn light model:
```
public class TextTurnLight extends TextTrafficLight {
// Note: constructor requires a turn light model
public TextTurnLight(TurnLightModel lt) {
super(lt); // Calls the superclass constructor
}
// Renders each lamp
public String drawLamps() {
// Draw non-turn lamps
String result = super.drawLamps();
// Draw the turn lamp properly
if (getState() == TurnLightModel.LEFT_TURN) {
result += " (<) ";
} else {
result += " ( ) ";
}
return result;
}
}
```
Listing 11.2: TextTurnLight—extends the TextTrafficLight class to make a traffic light with a left turn arrow
TextTurnLight (11.2) is a fairly simple class. It is derived from TextTrafficLight (10.2), so it inherits all of TextTrafficLight's functionality, and the differences are minimal:
- The constructor expects a TurnLightModel object and passes it to the constructor of its superclass. The superclass constructor expects a TrafficLightModel reference as an actual parameter. A TurnLightModel reference is acceptable, though, because a TurnLightModel is a TrafficLightModel.
• In `drawLamps()`, a `TextTurnLight` object must display four lamps instead of only three. This method renders all four lamps. The method calls the superclass version of `drawLamps()` to render the first three lamps:
```java
super.drawLamps();
```
and so the method needs only draw the last (turn) lamp.
Notice that the `draw()` method, which calls `drawLamps()`, is not overridden. The subclass inherits and uses `draw()` as is, because it does not need to change how the “frame” is drawn.
The constructor requires clients to create `TextTurnLight` objects with only `TurnLightModel` objects. A client may not create a `TextTurnLight` with a simple `TrafficLightModel`:
```java
// This is illegal
TextTurnLight lt = new TextTurnLight(new TrafficLightModel
                                         (TrafficLightModel.STOP));
```
The following interaction sequence demonstrates some of the above concepts. First, we will test the light’s cycle:
```
Welcome to DrJava. Working directory is /Users/rick/java
> TextTurnLight lt = new TextTurnLight
(new TurnLightModel
(TrafficLightModel.STOP));
> System.out.println(lt.show());
[(R) ( ) ( ) ( )]
> lt.change(); System.out.println(lt.show());
[( ) ( ) ( ) (<)]
> lt.change(); System.out.println(lt.show());
[( ) ( ) (G) ( )]
> lt.change(); System.out.println(lt.show());
[( ) (Y) ( ) ( )]
> lt.change(); System.out.println(lt.show());
[(R) ( ) ( ) ( )]
> lt.change(); System.out.println(lt.show());
[( ) ( ) ( ) (<)]
> lt.change(); System.out.println(lt.show());
[( ) ( ) (G) ( )]
> lt.change(); System.out.println(lt.show());
[( ) (Y) ( ) ( )]
> lt.change(); System.out.println(lt.show());
[(R) ( ) ( ) ( )]
```
All seems to work fine here. Next, let us experiment with this *is a* concept. Reset the Interactions pane and enter:
```
Welcome to DrJava. Working directory is /Users/rick/java
> TextTrafficLight lt = new TextTurnLight
                            (new TurnLightModel
                                (TrafficLightModel.STOP));
> System.out.println(lt.show());
[(R) ( ) ( ) ( )]
> lt.change(); System.out.println(lt.show());
[( ) ( ) ( ) (<)]
> lt.change(); System.out.println(lt.show());
[( ) ( ) (G) ( )]
> lt.change(); System.out.println(lt.show());
[( ) (Y) ( ) ( )]
> lt.change(); System.out.println(lt.show());
[(R) ( ) ( ) ( )]
```
Notice that here the variable lt's declared type is TextTrafficLight, not TextTurnLight as in the earlier interactive session. No error is given because a TextTurnLight object (created by the new expression) is a TextTrafficLight, and so it can be assigned legally to lt. Perhaps Java is less picky about assigning objects? Try:
```
Welcome to DrJava. Working directory is /Users/rick/java
> Intersection light = new TextTurnLight
(new TurnLightModel
(TrafficLightModel.STOP));
Error: Bad types in assignment
```
Since no superclass/subclass relationship exists between Intersection and TextTurnLight, there is no *is a* relationship either, and the types are not assignment compatible. Furthermore, the *is a* relationship works in only one direction. Consider:
```
Welcome to DrJava. Working directory is /Users/rick/java
> TextTurnLight lt2 = new TextTrafficLight
(new TrafficLightModel
(TrafficLightModel.STOP));
ClassCastException: lt2
```
All TurnLightModels are TrafficLightModels, but the converse is not true. As an illustration, all apples are fruit, but it is not true that all fruit are apples.
While inheritance may appear to be only a clever programming trick to save a little code, it is actually quite useful and is used extensively for building complex systems. To see how useful it is, we will put our new kind of traffic lights into one of our existing intersection objects and see what happens. First, to simplify the interactive experience, we will define TestIntersection (11.3), a convenience class for making either of the two kinds of intersections:
```java
public class TestIntersection {
    public static Intersection makeSimple() {
        return new Intersection(
                   new TextTrafficLight
                       (new TrafficLightModel(TrafficLightModel.STOP)),
                   new TextTrafficLight
                       (new TrafficLightModel(TrafficLightModel.STOP)),
                   new TextTrafficLight
                       (new TrafficLightModel(TrafficLightModel.GO)),
                   new TextTrafficLight
                       (new TrafficLightModel(TrafficLightModel.GO)));
    }

    public static Intersection makeTurn() {
        return new Intersection(
                   new TextTurnLight
                       (new TurnLightModel(TrafficLightModel.STOP)),
                   new TextTurnLight
                       (new TurnLightModel(TrafficLightModel.STOP)),
                   new TextTurnLight
                       (new TurnLightModel(TrafficLightModel.GO)),
                   new TextTurnLight
                       (new TurnLightModel(TrafficLightModel.GO)));
    }
}
```
Listing 11.3: TestIntersection—provides some convenience methods for creating two kinds of intersections
Both methods are class (static) methods, so we need not explicitly create a TestIntersection object to use the methods. The following interactive session creates two different kinds of intersections:
```
Welcome to DrJava. Working directory is /Users/rick/java
> simple = TestIntersection.makeSimple();
> simple.show();
[(R) ( ) ( )]
[( ) ( ) (G)]
[( ) ( ) (G)]
> simple.change(); simple.show();
[(R) ( ) ( )]
[( ) (Y) ( )]
[( ) (Y) ( )]
> simple.change(); simple.show();
[(R) ( ) ( )]
[( ) ( ) (G)]
[( ) ( ) (G)]
> simple.change(); simple.show();
[(R) ( ) ( )]
[( ) ( ) (G)]
[( ) ( ) (G)]
> simple.change(); simple.show();
[( ) (Y) ( )]
[(R) ( ) ( )] [(R) ( ) ( )]
[( ) (Y) ( )]
> simple.change(); simple.show();
[( ) ( ) (G)] [( ) ( ) (G)]
[(R) ( ) ( )] [(R) ( ) ( )]
> simple.change(); simple.show();
[( ) (Y) ( )] [( ) (Y) ( )]
[(R) ( ) ( )] [(R) ( ) ( )]
> turn = TestIntersection.makeTurn();
> turn.show();
[(R) ( ) ( ) ( )]
[( ) ( ) (G) ( )] [( ) ( ) (G) ( )]
[(R) ( ) ( ) ( )] [(R) ( ) ( ) ( )]
> turn.change(); turn.show();
[( ) (Y) ( ) ( )] [( ) (Y) ( ) ( )]
[(R) ( ) ( ) ( )] [(R) ( ) ( ) ( )]
> turn.change(); turn.show();
[( ) ( ) ( ) (<)] [( ) ( ) ( ) (<)]
[(R) ( ) ( ) ( )] [(R) ( ) ( ) ( )]
> turn.change(); turn.show();
[( ) ( ) (G) ( )] [( ) ( ) (G) ( )]
[(R) ( ) ( ) ( )] [(R) ( ) ( ) ( )]
> turn.change(); turn.show();
[( ) (Y) ( ) ( )] [( ) (Y) ( ) ( )]
[(R) ( ) ( ) ( )] [(R) ( ) ( ) ( )]
> turn.change(); turn.show();
[( ) (Y) ( ) ( )] [( ) (Y) ( ) ( )]
[(R) ( ) ( ) ( )] [(R) ( ) ( ) ( )]
> turn.change(); turn.show();
```
Notice that our original Intersection class was not modified at all, yet it works equally well with TextTurnLight objects! This is another example of the “magic” of inheritance. A TextTurnLight object can be treated exactly like a TextTrafficLight object, yet it behaves in a way that is appropriate for a TextTurnLight, not a TextTrafficLight. A TextTrafficLight object draws three lamps when asked to show() itself, while a TextTurnLight draws four lamps. This is another example of polymorphism (see § 11.4).
11.2 Protected Access
We have seen how client access to the instance and class members of a class are affected by the public and private specifiers:
- Elements declared public within a class are freely available to code in any class to examine and modify.
- Elements declared private are inaccessible to code in other classes. Such private elements can only be accessed and/or influenced by public methods provided by the class containing the private elements.
Sometimes it is desirable to allow special privileges to methods within subclasses. Java provides a third access specifier—protected. A protected element cannot be accessed by other classes in general, but it can be accessed by code within a subclass. Said another way, protected is like private to non-subclasses and like public to subclasses.
Class designers should be aware of the consequences of using protected members. The protected specifier weakens encapsulation (see Section 8.7). Encapsulation ensures that the internal details of a class cannot be disturbed by client code. Clients should be able to change the state of an object only through the public methods provided. If these public methods are correctly written, it will be impossible for client code to put an object into an undefined
or illegal state. When fields are made protected, careless subclassers may write methods that misuse instance variables and place an object into an illegal state. Some purists suggest that protected access never be used because the potential for misuse is too great.
Another issue with protected is that it limits how superclasses can be changed. Anything public becomes part of the class’s interface to all classes. Changing public members can break client code. Similarly, anything protected becomes part of the class’s interface to its subclasses, so changing protected members can break code within subclasses.
Despite the potential problems with the protected specifier, it has its place in class design. It is often convenient to have some information or functionality shared only within a family (inheritance hierarchy) of classes. For example, in the traffic light code, the setState() method in TrafficLightModel (10.1) might better be made protected. This would allow subclasses like TurnLightModel (11.1) to change a light’s state, but other code would be limited to making a traffic light with a specific initial color and then alter its color only through the change() method. This would prevent a client from changing a green light immediately to red without the caution state in between. The turn light code, however, needs to alter the basic sequence of signals, and so it needs special privileges that should not be available in general.
Since encapsulation is beneficial, a good rule of thumb is to reveal as little as possible to clients and subclasses. Make elements protected and/or public only when it would be awkward or unworkable to do otherwise.
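To make the access rules concrete, here is a small sketch (hypothetical classes, not the traffic light code) showing that a protected method is reachable from a subclass but not from an unrelated class in another package.

```java
// File counters/Counter.java
package counters;

public class Counter {
    private int count;

    // Subclasses (and classes within this package) may adjust the count directly.
    protected void add(int amount) {
        count += amount;
    }

    public int getCount() {
        return count;
    }
}

// File counters/DoubleCounter.java
package counters;

public class DoubleCounter extends Counter {
    public void bump() {
        add(2);                  // OK: protected member used within a subclass
    }
}

// File app/Client.java
package app;

import counters.Counter;

public class Client {
    public static void main(String[] args) {
        Counter c = new Counter();
        // c.add(5);             // Illegal here: add() is protected
        System.out.println(c.getCount());
    }
}
```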
### 11.3 Visualizing Inheritance
The Unified Modeling Language is a graphical, programming language-independent way of representing classes and their associated relationships. The UML can quickly communicate the salient aspects of the class relationships in a software system without requiring the reader to wade through the language-specific implementation details (that is, the source code). The UML is a complex modeling language that covers many aspects of system development. We will limit our attention to a very small subset of the UML used to represent class relationships.
In the UML, classes are represented by rectangles. Various kinds of lines can connect one class to another, and these lines represent relationships among the classes. Three relationships that we have seen are composition, inheritance, and dependence.
- **Composition.** An Intersection object is composed of four TextTrafficLight objects. Each TextTrafficLight object manages its own TrafficLightModel object, and a TextTurnLight contains a TurnLightModel. The UML diagram shown in Figure 11.1 visualizes these relationships. A solid line from one class to another with a diamond at one end indicates composition. The end with the diamond is connected to the container class, and the end without the diamond is connected to the contained class. In this case we see that an Intersection object contains TextTrafficLight objects. A number at the end of the line indicates how many objects are contained. (If no number is provided, the number 1 is implied.) We see that each intersection is composed of four traffic lights, while each light has an associated model.
The composition relationship is sometimes referred to as the *has a* relationship; for example, a text traffic light has a traffic light model.
- **Inheritance.** TurnLightModel is a subclass of TrafficLightModel, and TextTurnLight is a subclass of TextTrafficLight. The inheritance relationship is represented in the UML by a solid line with a triangular arrowhead as shown in Figure 11.2. The arrow points from the subclass to the superclass.
We have already mentioned that the inheritance relationship represents the *is a* relationship. The arrow points in the direction of the *is a* relationship.
Figure 11.1: UML diagram for the composition relationships involved in the Intersection class
Figure 11.2: UML diagram for the traffic light inheritance relationship
• Dependence. We have used dependence without mentioning it explicitly. Objects of one class may use objects of another class without extending them (inheritance) or declaring them as fields (composition). Local variables and parameters represent temporary dependencies. For example, the TestIntersection class uses the Intersection, TrafficLightModel, TextTrafficLight, TurnLightModel, and TextTurnLight classes within its methods (local objects), but neither inheritance nor composition are involved. We say that TestIntersection depends on these classes, because if their interfaces change, those changes may affect TestIntersection. For example, if the maintainers of TextTurnLight decide that its constructor should accept an integer state instead of a TurnLightModel, the change would break TestIntersection. Currently TestIntersection creates a TextTurnLight with a TurnLightModel, not an integer state.
A dashed arrow in a UML diagram illustrates dependency. The label «uses» indicates that a TestIntersection object uses the other object in a transient way. Other kinds of dependencies are possible. Figure 11.3 shows the dependency of TestIntersection upon Intersection.
Figure 11.3: UML diagram for the test intersection dependencies
11.4 Polymorphism
How does the code within Intersection’s `show()` method decide which of the following ways to draw a red light?
```
[(R) ( ) ( )]
```
or
```
[(R) ( ) ( ) ( )]
```
Nothing within Intersection’s `show()` method reveals any distinction. It does not use any `if/else` statements to select between one form or another. The `show()` method does the right thing based on the exact kind of traffic light that it is asked to render. Can the compiler determine what to do when it compiles the source code for Intersection? The compiler is powerless to do so, since the `Intersection` class was developed and tested before the `TurnLightModel` class was ever conceived!
The compiler generates code that at runtime decides which `show()` method to call based on the actual type of the light. The burden is, in fact, on the object itself. The expression
```
northLight.show()
```
is a request to the `northLight` object to draw itself. The `northLight` object draws itself based on the `show()` method in its class. If it is really a `TextTrafficLight` object, it executes the `show()` method of the `TextTrafficLight` class; if it is really a `TextTurnLight` object, it executes the `show()` method of the `TextTurnLight` class.
This process of executing the proper code based on the exact type of the object when the `is a` relationship is involved is called **polymorphism**. Polymorphism is what makes the `setState()` methods work as well. Try to set a plain traffic light to the `LEFT_TURN` state, and then try to set a turn traffic light to `LEFT_TURN`:
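A minimal sketch of that experiment follows (what setState() does when it rejects an illegal state is defined back in Chapter 10 and is not shown here):

```java
TrafficLightModel plain = new TrafficLightModel(TrafficLightModel.STOP);
TurnLightModel turn = new TurnLightModel(TrafficLightModel.STOP);

// TrafficLightModel's isLegalState() does not recognize LEFT_TURN,
// so this request is rejected.
plain.setState(TurnLightModel.LEFT_TURN);

// TurnLightModel's overriding isLegalState() approves LEFT_TURN,
// so this request succeeds.
turn.setState(TurnLightModel.LEFT_TURN);
```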
Remember, `setState()` was not overridden. The same `setState()` code is executed for `TurnLightModel` objects as for `TrafficLightModel` objects. The difference is what happens when `setState()` calls the `isLegalState()` method. For `TrafficLightModel` objects, `TrafficLightModel`'s `isLegalState()` method is called; however, for `TurnLightModel` objects, `TurnLightModel`'s `isLegalState()` method is invoked. We say that `setState()` calls `isLegalState()` polymorphically. `isLegalState()` polymorphically "decides" whether `LEFT_TURN` is a valid state. The "decision" is easy though: call it on behalf of a pure `TrafficLightModel` object, and it says "no," but call it on behalf of a `TurnLightModel` object, and it says "yes."
Polymorphism means that given the following code:

```java
TextTrafficLight light;
// Initialize the light somehow ...
light.show();
```

we cannot predict whether three lamps or four lamps will be displayed. The code between the two statements may assign `light` a TextTrafficLight object or a TextTurnLight depending on user input (§ 8.3), a random number (§ 13.6), time (§ 13.2), or any of hundreds of other criteria.
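For instance, the following small program (a hypothetical demo class, assuming the chapter's traffic light classes are available) picks the kind of light at random, so the very same show() call prints either three or four lamps.

```java
import java.util.Random;

public class RandomLightDemo {
    public static void main(String[] args) {
        TextTrafficLight light;
        if (new Random().nextBoolean()) {
            light = new TextTrafficLight(new TrafficLightModel(TrafficLightModel.STOP));
        } else {
            light = new TextTurnLight(new TurnLightModel(TrafficLightModel.STOP));
        }
        // Same statement, but the output depends on the object's exact type.
        System.out.println(light.show());
    }
}
```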
11.5 Extended Rational Number Class
`Rational` (§ 9.1) is a solid, but simple, class for rational number objects. Addition and multiplication are provided, but what if we wish to subtract or divide fractions? Without subclassing, clients could do all the dirty work themselves:
- Subtracting is just adding the opposite:
\[ \frac{a}{b} - \frac{c}{d} = \frac{a}{b} + \left( -1 \times \frac{c}{d} \right) \]
> f3 = f1.add(new Rational(-1, 1).multiply(f2));
> f3.show()
"1/6"
Since $f1 = \frac{1}{2}$ and $f2 = \frac{1}{4}$, the statement
`Rational f3 = f1.add(new Rational(-1, 1).multiply(f2));`
results in
\[
f3 = \frac{1}{2} + \left( \frac{-1}{1} \times \frac{1}{4} \right) = \frac{2}{4} - \frac{1}{4} = \frac{1}{4}
\]
and for $f2 = \frac{1}{3}$, the statement
`f3 = f1.add(new Rational(-1, 1).multiply(f2));`
results in
\[
f3 = \frac{1}{2} + \left( \frac{-1}{1} \times \frac{1}{3} \right) = \frac{3}{6} - \frac{2}{6} = \frac{1}{6}
\]
- Dividing is multiplying by the inverse:
\[
\frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \times \frac{d}{c}
\]
> // 1/2 divided by 2/3 = 3/4
> f1.show()
"1/2"
> f2 = new Rational(2, 3);
> f2.show()
"2/3"
> f3 = f1.multiply(new Rational(f2.getDenominator(),
                                f2.getNumerator()));
> f3.show()
"3/4"
The problem with this approach is that it is messy and prone to error. It would be much nicer to simply say:
`f3 = f1.subtract(f2);`

and

`f3 = f1.divide(f2);`

but these statements are not valid if `f1` is a Rational object. What we need is an extension of Rational that supports the desired functionality. EnhancedRational (11.4) is such a class.
public class EnhancedRational extends Rational {
// num is the numerator of the new fraction
// den is the denominator of the new fraction
// Work deferred to the superclass constructor
public EnhancedRational(int num, int den) {
super(num, den);
}
// Returns this - other reduced to lowest terms
public Rational subtract(Rational other) {
return add(new Rational(-other.getNumerator(),
other.getDenominator()));
}
// Returns this / other reduced to lowest terms
// (a/b) / (c/d) = (a/b) * (d/c)
public Rational divide(Rational other) {
return multiply(new Rational(other.getDenominator(),
other.getNumerator()));
}
}
Listing 11.4: EnhancedRational—extended version of the Rational class
With EnhancedRational (11.4) subtraction and division are now more convenient:
Welcome to DrJava. Working directory is /Users/rick/java
> EnhancedRational f1 = new EnhancedRational(1, 2),
f2 = new EnhancedRational(1, 4);
> Rational f3 = f1.subtract(f2);
> f3.show()
"1/4"
> f3 = f1.divide(f2);
> f3.show()
"2/1"
11.6 Multiple Superclasses
In Java it is not possible for a class to have more than one superclass. Some languages, such as C++, do support multiple superclasses, a concept called multiple inheritance.
Even though a class may not have more than one superclass, it may have any number of subclasses, including none.
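For example, the first declaration below is legal, while the second (a hypothetical class, shown only to make the rule concrete) is rejected by the compiler:

```java
// Legal: a class may extend exactly one superclass.
public class FlashingLightModel extends TrafficLightModel {
    public FlashingLightModel(int initialState) {
        super(initialState);
    }
}

// Illegal: the following would not compile, because Java does not
// support multiple inheritance of classes.
// public class HybridLight extends TrafficLightModel, TextTrafficLight { }
```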
11.7 Summary
• Inheritance allows us to derive a new class from an existing class.
• The original class is called the superclass, and the newly derived class is called the subclass.
• Other terminology often used instead of superclass/subclass is base class/derived class, and parent class/child class.
• The subclass inherits everything from its superclass and usually adds more capabilities.
• The reserved word extends is used to subclass an existing class.
• Subclasses can redefine inherited methods; the process is called overriding the inherited method.
• Subclass methods can call superclass methods directly via the super reference.
• Constructors can invoke the superclass constructor using super in its method call form.
• A subclass inherits everything from its superclass, including private members (variables and methods), but it has no more privileges for accessing those members than any other class.
• Subclass objects have an is a relationship with their superclass; that is, if \( Y \) is a subclass of \( X \), an instance of \( Y \) is a \( Y \), and an instance of \( Y \) is an \( X \) also.
• A subclass object may be assigned to a superclass reference; for example, if \( Y \) is a subclass of \( X \), an instance of \( Y \) may be assigned to a variable of type \( X \).
• A superclass object may not be assigned to a subclass reference; for example, if \( Y \) is a subclass of \( X \), an instance whose exact type is \( X \) may not be assigned to a variable of type \( Y \).
• The Unified Modeling Language (UML) uses special graphical notation to represent classes, composition, inheritance, and dependence.
• Polymorphism executes a method based on an object’s exact type, not simply its declared type.
• A class may not have more than one superclass, but it may have zero or more subclasses.
### 11.8 Exercises
1. Suppose the variable \( lt \) is of type TextTrafficLight and that it has been assigned properly to an object. What will be printed by the statement:
```java
System.out.println(lt.show());
```
2. What does it mean to override a method?
3. What is polymorphism? How is it useful?
4. If TurnLightModel (11.1) inherits the setState() method of TrafficLightModel (10.1) and uses it as is, why and how does the method behave differently for the two types of objects?
5. May a class have multiple superclasses?
6. May a class have multiple subclasses?
7. Devise a new type of traffic light that has TrafficLightModel (10.1) as its superclass but does something different from TurnLightModel (11.1). One possibility is a flashing red light. Test your new class in isolation, and then create a view for your new model. Your view should be oriented horizontally so you can test your view with the existing Intersection (10.4) code.
8. Derive VerticalTurnLight from VerticalTextLight (10.3) that works like TextTurnLight (11.2) but that displays its lamps vertically instead of horizontally. Will you also have to derive a new model from TurnLightModel (11.1) to make your VerticalTurnLight work?
INCOM: A WEB-BASED HOMEWORK COACHING SYSTEM FOR LOGIC PROGRAMMING
Nguyen-Thinh Le and Niels Pinkwart
Clausthal University of Technology
Germany
ABSTRACT
Programming is a complex process which usually results in a large space of solutions. However, existing software systems which support students in solving programming problems often restrict students to fill in pre-specified solution templates or to follow an ideal solution path. In this paper, we introduce a web-based homework coaching system for Logic Programming (INCOM) which allows students to develop a Prolog predicate in an exploratory manner, i.e., students are allowed to explore a solution space by themselves. As a tutoring model, this system supports students in two stages: a task analysis prior to an implementation phase. To model the solution space for a Logic Programming problem, a weighted constraint-based model was deployed in INCOM. This model consists of a set of constraints, constraint weights, a semantic table, and several transformation rules. The contribution of this paper is two-fold: it reviews a number of tutoring systems for learning programming which have not been considered in other surveys so far, and it presents an effective exploratory homework coaching system for Logic Programming.
KEYWORDS
Intelligent Tutoring Systems for programming, Logic Programming, weighted constraint-based model.
1. INTRODUCTION
Programming is a subject which is considered difficult by many students (Matthíasdóttir, 2006). To address programming novices’ difficulties, several types of educational software systems have been devised, e.g.: programming environments, debugging aids, intelligent tutoring systems (ITS), intelligent programming environments, visualization and animation systems, and simulation environments (Deek & McHugh, 1998; Gómez-Albarrán, 2005). Among these types of systems, ITS, which deploy AI techniques to support students in solving programming problems and in developing their programming skills, are rarely found in the literature. The scarcity of successful ITS for programming can be explained by the fact that teaching programming is a difficult task even for a human tutor. Jenkins (2002) suggested that programming should be taught by persons who can teach programming, not by people who can program. In addition, diagnosing errors in a student’s program is a challenge for a computational system due to the large space of possible correct (and incorrect) implementations. To be able to provide appropriate feedback to a student’s program, a system must hypothesize the student’s intention correctly. Otherwise, feedback is useless or even misleading. Two questions arise: Which educational approach should a tutoring system apply to support students in learning programming? How can a tutoring system identify errors in a student’s program and give appropriate feedback according to the student’s intention?
To address these two questions, in this paper, we introduce INCOM, a web-based homework coaching system for Logic Programming. INCOM supports a two-stage coaching strategy. First, the system requires students to analyze a problem specification and to represent the results of their task analysis in the form of a predicate signature. In the second stage, the system asks students to input a Prolog program in a free-form manner so that the requirements of the programming problem and the specified predicate signature are satisfied. To model a large solution space for each programming problem, we propose to apply the weighted constraint-based model (Le & Pinkwart, 2011).
2. TUTORING PROGRAMMING
In this section, we summarize findings about difficulties of programming novices and review existing ITS for programming. Then, we describe INCOM, a system which is intended to help students solve homework problems in Logic Programming.
2.1 Difficulties of programming novices
Novices’ difficulties in learning programming are diverse. Here, we consider only some essential findings agreed upon among researchers. First, programming is a subject which requires a combination of both surface and deep learning styles. Deep learning means that students have to concentrate on gaining an understanding of a topic, while surface learning focuses on memorizing facts. Thus, programming cannot be learned solely from books as in other subjects; instead, students have to learn programming by developing algorithms themselves to deepen their understanding (Lahtinen et al., 2005; Bellaby et al., 2003). However, many students are not aware of this combination of learning styles and apply inappropriate study methodologies (Gomes & Mendes, 2007). Second, there exists a correlation between programming and mathematical skills. According to Byrne and Lyons (2001) and Pacheco et al. (2008), it is necessary that programming novices be equipped with an appropriate level of mathematical knowledge, because logical and abstract thinking is required for developing algorithms. To relieve this difficulty, Jenkins (2002) suggested that programming courses should be provided after some other courses (including Math) have been taught. Third, programming languages which are intended for professional use in industry usually have complex syntax and programming concepts. Thus, they are considered inappropriate for programming novices (Gomes & Mendes, 2007). These languages impose a high cognitive load, since students must memorize a new syntax in addition to programming concepts and techniques. A solution is using an easy-to-learn programming language (e.g., Pascal). However, a counter-argument might be the students’ worry about poor job market chances if they learn a language which is not commonly used in industry. Finally, programming is a complex activity which includes several sub-processes: analyzing a problem specification, transforming the problem specification into an algorithm, translating an algorithm into program code, and testing a program. According to Jenkins (2002), the first sub-process is the most difficult and crucial phase of programming, because a correct and efficient algorithm is the basis of a program. While a programming expert has a set of solution schemata for a certain problem specification, programming novices usually struggle in this phase.
2.2 A survey of tutoring systems for programming
There exist numerous attempts at devising educational software systems which support students in learning programming. Deek and McHugh (1998) classified 29 educational software systems for programming into four classes: programming environments, debugging aids, ITS, and intelligent programming environments. Programming environments and debugging aids can be used to relieve the novices’ difficulties of learning the syntax and concepts of a new programming language. These systems allow students to experiment with specific features of a new programming language and to observe the process of compilation. ITS usually provide students with programming tasks and integrated course materials. They are intended to support both surface and deep learning styles. Intelligent programming environments combine functionalities of ITS and programming environments. Twelve years ago, the authors of this survey concluded that a large part of research had focused on developing systems that support novices in learning syntax and concepts, and that the novices’ difficulties in solving programming problems had not gained much attention. At that time only a few tutoring systems for programming had proven successful, e.g., the LISP tutor (Anderson & Reiser, 1985), which had been used for several years in regular programming courses, and ELM, a tutoring system for LISP (Weber & Möllenberg, 1995), whose successor ELM-ART is still being used in LISP curricula and is available on the Internet (http://art2.ph-freiburg.de/Lisp-Course). In addition, Deek and McHugh also pointed out that in 1998 many systems under review constricted the student’s freedom by providing students with solution templates to be filled in, due to the restricted ability of the underlying error diagnosis approaches, and thereby narrowed down the possibilities to develop creative solutions. Gómez-Albarrán (2005) extended the existing classification of educational software systems with three other types, among them example-based environments, in which the systems exploit examples to support students in solving new problems.
There exists another type of ITS for programming which focuses on providing curriculum lessons according to a student’s knowledge level rather than on supporting problem solving. For example, the system developed by Sierra et al. (2007) relies on the idea that a transfer can be made between two domains which share similar concepts. The system supports students who have already been exposed to a programming language in learning new programming languages. In the cited paper, the system supports Perl, Java and C++. BITS (Butz et al., 2004), a web-based intelligent tutoring system for computer programming, focuses rather on helping students navigate through the course material.
The second difficulty of programming novices can potentially be addressed by studying mathematics. To our knowledge, there exists no software system intended to support programming by enhancing a student’s mathematical knowledge. Usually, universities offer mathematics courses in the early semesters of a degree programme.
The third difficulty of programming novices, namely that the syntax and concepts of a new programming language are complex to learn, can be relieved by providing programming environments enriched with text editors which are able to detect syntax errors and to propose correct syntax immediately while coding. Programming environments of this type can be found in (Deek & McHugh, 1998; Gómez-Albarrán, 2005). In addition, several systems have been developed that focus on a certain commonly used API of a specific programming language. For example, JTutors (Dahotre, 2011) searches the Internet for examples for Java APIs and provides them to students. JTutors uses CTAT as a platform to author and deliver tutorials related to APIs. For a purpose similar to that of JTutors, MICA (Stylos & Myers, 2006) searches the Internet for appropriate API classes and methods given a description of the desired functionality. In addition, the system helps students with examples for class methods.
Few software systems have been developed to address the fourth difficulty of programming novices, i.e., transforming a problem specification into an algorithm. PROPL (Lane & VanLehn, 2005) is an ITS which uses communication patterns to coach students to understand a given problem specification and to develop a pseudo-code algorithm by holding a conversation. The system focuses on the activities of analyzing a task and planning a solution. Students who used this system were frequently better at creating algorithms for programming problems and demonstrated fewer errors in their implementations.
We have reviewed eight educational software systems for programming. Among them, only three systems (JITS, SQL-Tutor, and PROPL) have been evaluated with respect to their learning effect. We can infer that
other systems are still prototypes (or that only unsuccessful attempts at finding learning benefits have been made, which were not published). This can partially be explained by the fact that developing and seriously evaluating a tutoring system requires a huge amount of time. Although a lot of time has been invested in building ITS for programming, most of them have not been deployed widely. This is consistent with the observation of Eitelman (2006). Especially ITS focusing on problem solving, like SQL-Tutor and JITS, provide students with solution templates to be filled in. These systems have proven useful; however, their user interfaces restrict the students’ ability to develop a program freely. The reason for this is that diagnosing errors in a program is not an easy process and the error diagnosis approaches underlying SQL-Tutor and JITS are limited. In the following, we introduce INCOM, a coaching system for Logic Programming which allows students to develop a program in a free-form manner. INCOM consists of two components: a two-stage coaching model and a domain model.
2.3 The two-stage coaching model of INCOM
The programming novices’ difficulty of transforming a problem specification into an algorithm has been confirmed by a pilot study for Logic Programming (Le, 2011). The study reported that among 632 incorrect solution attempts submitted by students, on average 27.75% of errors were due to false task analysis, i.e., students were not able to specify the clause head of a predicate correctly. As an approach to help students overcome this difficulty, INCOM applies a two-stage coaching strategy: students are required to analyze a problem specification and to reproduce their analysis in the form of a predicate signature prior to coding the predicate.
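To make the notion of a false task analysis concrete, the following sketch contrasts a typical faulty clause head with an appropriate one for a hypothetical exercise ("reverse a given list"); the exercise and predicate names are illustrative and not taken from the study data.

```prolog
% Hypothetical exercise: "Reverse a given list."

% Faulty task analysis: the clause head models only the input list and
% omits an argument position for the computed result.
%   reverse_list(List) :- ...

% Appropriate task analysis: one argument position for the input list and
% one for the reversed result, reflected in the clause head reverse_list/2.
reverse_list([], []).
reverse_list([X|Xs], Reversed) :-
    reverse_list(Xs, ReversedTail),
    append(ReversedTail, [X], Reversed).
```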
2.3.1 First stage: Task analysis
Task analysis aims at developing an understanding of the given problem specification and constructing a mental representation of the task. We propose to help students understand a programming problem by requesting them to reproduce the information and goals given in the problem specification in the form of an adequate predicate signature, which consists of five components: a predicate name, argument names, and the meaning, type, and mode of each argument. The predicate name is the identifier of the predicate to be implemented. Argument names serve as unique identifiers for the argument positions of the predicate. Meaning(A_i) represents the purpose of the argument position A_i. Type(A_i) represents the data structure for the argument position A_i. Strictly speaking, Logic Programming does not require specifying a data structure for variables; the computation is based on unification. From a pedagogical point of view, however, it is useful to request the student to specify the data structure she intends to use at a particular argument position. The most frequently used data structures in Logic Programming are atom, list and number; other terms can be classified as “arbitrary type”. Mode(A_i) is the calling mode for the argument position A_i. For a given predicate with at least one argument position, each argument position can be specified to be in one of three calling modes: input (+), output (-), or indeterminate (?). INCOM provides a user interface which requires the student to analyze a programming task (Figure 1). The interface is divided into three parts. The upper part displays the problem specification. The middle part is for specifying the signature of the predicate to be implemented; the five labels 1-5 in this part indicate where to input the five components of a signature. Using the box with label 0, students can add more argument positions or submit the predicate signature for evaluation. The system’s feedback is shown in the bottom part. When specifying a predicate signature in this way, the student is free to choose the position of each argument and to name the identifiers according to her understanding of the given problem specification. As long as the signature input by the student is not yet appropriate with respect to the problem, the system provides feedback prompting the student to think about important information and goals highlighted in the problem specification. Requesting students to specify a predicate signature is consistent with recommendations of Logic Programming experts: Brna (2001) suggested that learners of Logic Programming comment their code in order to indicate a predicate’s intended usage. From a technical point of view, the task analysis stage not only encourages the student to practice analyzing tasks, but also provides valuable information which helps to make the subsequent error diagnosis more accurate, because by specifying a predicate signature, students indicate the meaning of each argument position.
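For illustration, a signature for a hypothetical exercise ("compute the sum of a list of numbers") could be reproduced as follows; the predicate and argument names are our own, not INCOM's, and the corresponding implementation is sketched after Figure 2 below.

```prolog
% Predicate signature for the hypothetical exercise "sum a list of numbers":
%
%   predicate name : list_sum
%   argument A1    : name = Numbers, meaning = the numbers to be added up,
%                    type = list,    mode = input (+)
%   argument A2    : name = Sum,     meaning = the computed total,
%                    type = number,  mode = output (-)
%
% A common shorthand for such a signature is a mode declaration:
%   list_sum(+Numbers, -Sum)
```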
Although analyzing a programming problem by specifying a signature prior to the implementation is good programming practice, this approach has several limitations. First, this coaching approach is not able to cover all possible understanding problems. For instance, a problem specification may contain the noun “list”, which can be used to model the data type for an argument position. If the student does not know the concept of a “list” structure in Logic Programming, coaching her to specify “list” as a data type for an argument position would not help her further. This kind of knowledge should have been acquired during lectures or from textbooks, not during the stage of task analysis. In addition, it is not always possible to derive a unique data type for an argument position from a noun phrase if the noun phrase does not indicate a data type explicitly. For example, the noun phrase “a pair of persons” does not point to a specific data structure, so various data structures can be used. In such a case, the student is forced to use the predicate signature exactly as specified by the exercise author. Furthermore, the requirement to specify noun phrases for the argument positions of a predicate signature explicitly might easily make the problem specification look artificial; for example, most exercise descriptions do not include a noun phrase which represents the result of a computation.
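For instance, the noun phrase “a pair of persons” mentioned above could legitimately be modelled by several different Prolog terms; the variants below are purely illustrative.

```prolog
% Three equally reasonable ways to model "a pair of persons" at an
% argument position (illustrative only):
pair_as_list([anna, bob]).          % a two-element list
pair_as_compound(pair(anna, bob)).  % a binary compound term
couple(anna, bob).                  % two separate argument positions
```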
2.3.2 Second stage: Implementation
Once the student has provided an appropriate signature, the system guides her to the second stage, where she is allowed to implement the predicate. The aim of this stage is to support students in developing a correct predicate for a given problem specification. For this purpose, the following requirements need to be satisfied. First, since students have already specified a predicate signature, information about the agreed-upon predicate signature should be available during the process of coding the predicate. Second, the implementation stage should allow students to input a solution in a free-form manner, because they should be able to explore a large space of possible solutions. Third, the system should provide feedback consistent with the students’ intention, in order to help them develop a correct predicate.
INCOM provides a user interface for this implementation stage which fulfills these three requirements (Figure 2). The user interface includes four parts: 1) the top part for the problem specification, 2) a display of the specified predicate signature, 3) a free-form input area for solutions, and 4) the bottom part for the system’s feedback. Although students are moderately restricted to a given solution structure (a clause always consists of a clause head and a possibly empty clause body), this kind of layout still agrees with the requirement of designing a problem-solving environment with free-form solution input. According to our survey in Section 2.2, most current ITS for programming have not considered this requirement. Using this user interface, the student first needs to define the necessary clauses by adding a new clause (Label 0). To be able to follow her intention, the system asks her to additionally specify, for each clause, the type (recursive case, base case, or non-recursive) of the clause she intends to implement (Label 1). Then, she is required to specify the clause head (Label 2) and a clause body (Label 3). After coding the predicate, the student chooses an action (Label 0) to submit her predicate for evaluation. If her solution, including the main predicate, does not fulfill the goals specified in the problem specification, the system provides feedback to help improve the implementation. Feedback, which includes the location of and an explanation for the error, is displayed in the bottom part of the interface (Figure 2).
Figure 2: A user interface for developing a predicate in a free-form manner
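Continuing the illustrative list_sum signature from Section 2.3.1, a solution entered at this stage might consist of one base-case clause and one recursive clause; this sketch is ours and not a student solution from the study.

```prolog
% Implementation matching the illustrative signature list_sum(+Numbers, -Sum).

% Base case: the empty list sums to 0.
list_sum([], 0).

% Recursive case: sum the tail, then add the head.
list_sum([X|Xs], Sum) :-
    list_sum(Xs, TailSum),
    Sum is TailSum + X.

% Example query:
% ?- list_sum([2, 3, 5], Sum).   % Sum = 10
```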
2.4 The domain model of INCOM and its error diagnosis
The domain model of INCOM has been developed by applying the weighted constraint-based approach; see (Le & Pinkwart, 2011) for a detailed description. Here, we only outline the guiding principles of this approach, which consists of four components. First, a semantic table is used to cover alternative solution strategies for a problem specification. Second, constraints are defined to check the semantic correctness of a student’s predicate by comparing required components in the semantic table with the student’s predicate; in addition, constraints are used to check the general well-formedness of a predicate. Third, each constraint is associated with a weight value which indicates the importance of that constraint. The determination of the importance level for constraints resembles the assessment of examinations by a human tutor: if a solution contains more important components, then it receives a better mark. Fourth, to extend the coverage of possible solutions, transformation rules can be exploited to transform a code fragment without changing the semantics of the predicate, e.g. using commutative and distributive laws to transform an arithmetic expression. The process of error diagnosis generates hypotheses about the student’s intention by iteratively matching the student’s predicate against the alternative solution strategies in the semantic table. Hypotheses are evaluated using the weighted constraints. The hypothesis whose violated constraints carry the least total weight is considered the solution strategy intended by the student, and its violated constraints indicate the errors in the student’s predicate. In this process, constraint weights serve three purposes: 1) controlling the process of error diagnosis, 2) determining the student’s intention, and 3) ranking feedback messages according to the severity of the underlying errors. By enhancing constraints with weight values in this way, Le and Pinkwart argue, the weighted constraint-based model is better suited to determining the student’s intention than the classical constraint-based modeling technique without weights.
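The following Prolog sketch illustrates only the selection step of this process; the hypothesis encoding, constraints and weights are invented for the example and are far simpler than INCOM's actual constraint base.

```prolog
% Weighted constraint-based hypothesis selection (illustrative sketch).

% violated(+Hypothesis, -Constraint, -Weight): a constraint the hypothesis
% fails; a hypothesis pairs an expected solution strategy with the clauses
% found in the student's predicate.
violated(hyp(Strategy, Clauses), missing_base_case, 0.9) :-
    member(base_case, Strategy),
    \+ member(clause(base, _), Clauses).
violated(hyp(Strategy, Clauses), missing_recursive_case, 0.8) :-
    member(recursive_case, Strategy),
    \+ member(clause(recursive, _), Clauses).

% sum_weights(+ConstraintWeightPairs, -Total): add up the weights.
sum_weights([], 0).
sum_weights([_-W|Rest], Total) :-
    sum_weights(Rest, Total0),
    Total is Total0 + W.

% diagnose(+Hypotheses, -Best, -Violations): choose the hypothesis whose
% violated constraints carry the least total weight; its violations are
% reported as the diagnosed errors.
diagnose(Hypotheses, Best, Violations) :-
    findall(Cost-(H-Vs),
            ( member(H, Hypotheses),
              findall(C-W, violated(H, C, W), Vs),
              sum_weights(Vs, Cost) ),
            Scored),
    keysort(Scored, [_-(Best-Violations)|_]).

% Example: the second hypothesis violates no constraint and is selected.
% ?- diagnose([hyp([base_case, recursive_case], [clause(recursive, r1)]),
%              hyp([base_case, recursive_case], [clause(base, b1), clause(recursive, r1)])],
%             Best, Violations).
```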
3. EVALUATION
We conducted an evaluation of INCOM to determine whether students improve their skills in Logic Programming after having used the system. The study was carried out during regular classroom hours, in which students are normally expected to demonstrate their homework in the presence of a human tutor. The study took place in two sessions: in 2009 with 35 participants and in 2010 with 32 participants. In both sessions, a stable trend in the learning gains of the experimental group could be observed: the experimental group outperformed a control group (whose students used a regular Prolog tool without feedback) by an effect size between 0.23 and 0.33 standard deviations (Le, 2011).
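Read as a standardized mean difference (we assume the conventional Cohen's d here; the paper itself only states the effect size in standard deviations), an effect size of d places the experimental group's mean learning gain d pooled standard deviations above the control group's:

$$
d \;=\; \frac{\bar{x}_{\mathrm{exp}} - \bar{x}_{\mathrm{ctrl}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} \;=\; \sqrt{\frac{(n_{\mathrm{exp}}-1)\,s_{\mathrm{exp}}^{2} + (n_{\mathrm{ctrl}}-1)\,s_{\mathrm{ctrl}}^{2}}{n_{\mathrm{exp}}+n_{\mathrm{ctrl}}-2}}
$$

An effect size between 0.23 and 0.33 thus corresponds to roughly a quarter to a third of a standard deviation, conventionally regarded as a small positive effect.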
In addition to the statistical results about the learning benefits of INCOM, we used a questionnaire to ask the students of the experimental group about their attitude towards the system. The questionnaire addressed the following issues: 1) the usability of the user interface, 2) the helpfulness of the two-stage coaching model, 3) the precision of error locations, 4) the comprehensiveness of the system’s hints, 5) students’ motivation, 6) the overall helpfulness of the system, 7) the confidence in solving similar tasks, and 8) using the system for homework. For each question, participants were asked to provide their opinion on a scale between 1 (very negative) and 5 (very positive). We aggregated the questionnaire results of the two experiment sessions.
In addition, students from both the control and the experimental group were asked about the difficulty of the given experiment exercises: 50% of the participants rated them as (very) difficult and 21% rated them as (very) simple.
Table 1: Students’ attitudes towards INCOM (1: very negative; 5: very positive)
<table>
<thead>
<tr>
<th>Feature</th>
<th>m (s.d.)</th>
<th>Feature</th>
<th>m (s.d.)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. User interface</td>
<td>3.31 (0.7)</td>
<td>5. Motivation</td>
<td>3.41 (1.3)</td>
</tr>
<tr>
<td>2. Two-stage coaching</td>
<td>2.53 (1.1)</td>
<td>6. System’s helpfulness</td>
<td>2.57 (1.5)</td>
</tr>
<tr>
<td>3. Error location</td>
<td>3.27 (1.1)</td>
<td>7. Transferability</td>
<td>3.00 (1.3)</td>
</tr>
<tr>
<td>4. System’s hints</td>
<td>2.97 (1.5)</td>
<td>8. Use for homework</td>
<td>2.47 (1.4)</td>
</tr>
</tbody>
</table>
Table 1 summarizes the students’ attitudes towards the system INCOM. We note that students rated INCOM around or above the scale midpoint for questions 1-7. In particular, the categories concerning the user interface, the precise error location provided by the system, and the motivation of students using the system received the highest ratings. With respect to the comprehensiveness of the system’s hints and the confidence of students in being able to solve future problems of the same type, a positive trend could also be identified. However, with respect to the two-stage coaching strategy, the helpfulness of the system, and the deployment of INCOM for homework, a clearly positive answer could not be determined. Despite this cautious subjective assessment, the objective statistical results showed that at least some of the students made moderate learning gains.
We took the subjective results seriously and attempted to find reasons for the non-positive attitude towards the helpfulness and the deployment of INCOM in homework settings. First, students may not have been aware of their learning progress while using the system because the time of using the system was too short (60 minutes). Second, they needed considerable time to become familiar with the functionality of the system; five of the thirteen free comments from students (given in addition to the nine questions in the questionnaire) addressed this issue. In particular, participants of the experimental group spent between 17 minutes (in the 2nd session) and 21 minutes (in the 1st session) analyzing five programming tasks. This is a remarkable amount of time compared to the time remaining for the implementation stage, during which the main coding activity takes place. We can therefore suspect that the first stage is one of the reasons why the usefulness of the system was not rated positively. Perhaps the feedback during this coaching stage was not sufficient because it could not elicit information hidden in the problem specification (e.g., nouns used to represent an argument position of a predicate), as discussed in Section 2.3.1. Third, the quality of feedback also plays an important role for the usefulness of the system. Although students rated the comprehensiveness of feedback above average (cf. Table 1), one participant commented that the system’s feedback was of little use if it only explained the error without giving a recommendation on how to correct the solution. Indeed, we had assumed that students were able to derive a corrective action from an error explanation message.
4. CONCLUSION
In this paper, we have reviewed a number of educational software systems for learning programming and we have introduced the system INCOM intended to help students do homework in Logic Programming. The
novelty contributed by this system is that it supports students in solving programming problems in an exploratory manner. INCOM provides a two-stage coaching model which requires students to analyze a task by specifying a predicate signature prior to coding the predicate. The domain model of the system has been developed by applying a weighted constraint-based approach. An evaluation study showed that the system was able to help students improve their ability to define predicates for logic programs: students who used the system outperformed students who did not use it by between 0.23 and 0.33 standard deviations. Positive attitudes of students after using the system could also be identified: students were motivated while working with the system and felt confident about solving similar programming problems. However, a large number of students considered the two-stage coaching model too restrictive, and the user interface needs to be improved. In future work, we will investigate the weighted constraint-based approach in the imperative programming paradigm.
REFERENCES
Agent Capabilities: Extending BDI Theory
Lin Padgham¹ and Patrick Lambrix²
¹RMIT University, Melbourne, Australia
²Linköpings universitet, Linköping, Sweden
Abstract
Intentional agent systems are increasingly being used in a wide range of complex applications. Capabilities have recently been introduced into one of these systems as a software engineering mechanism to support modularity and reusability while still allowing meta-level reasoning. This paper presents a formalisation of capabilities within the framework of beliefs, goals and intentions and indicates how capabilities can affect an agent's reasoning about its intentions. We define a style of agent commitment, which we refer to as a self-aware agent, which allows an agent to modify its goals and intentions as its capabilities change. We also indicate which aspects of the specification of a BDI interpreter are affected by the introduction of capabilities and give some indications of additional reasoning which could be incorporated into an agent system on the basis of both the theoretical analysis and the existing implementation.
Introduction
Agent systems are becoming increasingly popular for solving a wide range of complex problems. Intentional agent systems have a substantial base in theory as well as a number of implemented systems that are used for challenging applications such as air-traffic control and space systems (Rao and Georgeff 1995). One of the strengths of the BDI (Belief, Desire, Intention) class of systems (including IRMA (Bratman et al. 1988), PRS (Georgeff and Ingrand 1989), JACK (Busetta et al. 1999b), JAM (Huber 1999) and UMPRS (Lee et al. 1994)) is their strong link to theoretical work, in particular that of Rao and Georgeff (Rao and Georgeff 1991), but also Cohen and Levesque (Cohen and Levesque 1990), Bratman et al. (Bratman et al. 1988) and Shoham (Shoham 1993). Although the theory is not implemented directly in the systems it does inform and guide the implementations (Rao and Georgeff 1992).
In this paper we investigate how a notion of capability can be integrated into the BDI logic of Rao and Georgeff (Rao and Georgeff 1991), preserving the features of the logic while adding to it in ways that eliminate current intuitive anomalies and mismatches between the theory and implemented systems. We understand capability as the ability to react rationally towards achieving a particular goal. Depending on circumstances a capability may not always result in an achievable plan for realising the goal, but it is a pre-requisite for such.
We describe a possible formal relationship of capabilities to the other BDI concepts of beliefs, goals and intentions. The addition of capabilities enriches the existing formal model and allows for definition of a self-aware agent which takes on and remains committed to goals only if it has a capability to achieve such goals. The formalisation we introduce deals only with a single agent, but we indicate directions for development that would be suitable for dealing with rational behaviour in a multi-agent system which takes into account the known capabilities of other agents.
This work is partially motivated by the recently reported development and use of a capability construct in JACK, a java based BDI agent development environment (Busetta et al. 1999b), which follows the basic abstract interpreter described in (Rao and Georgeff 1992). We indicate how capabilities can be integrated into this abstract interpreter and also indicate some issues for consideration in implementation of capabilities that are highlighted by this work. This work can be seen as part of the ongoing interplay between theory and practice in the area of BDI agent systems. It provides a foundation for exploring some of the practical reasoning mechanisms involving capabilities and for further developing the theory as well as informing the ongoing implementations.
Using Capabilities in Reasoning
Most BDI systems contain a plan library made up of plans which are essentially abstract specifications for achieving certain goals or doing subtasks on the way to achieving a goal. Each plan is associated with a triggering event (which may be an event of type achieve goal X). Each plan may also have a list of pre-conditions or a context which describes the situation in which the plan is intended to be used. The context condition may be used to bind variables which are then used in the plan body. The plan body is the code which executes the plan. This may contain invocations of subgoals which allow new plans to flesh out the detail of the plan, calls to external “actions”, or other code in the plan or host language.
We understand having a capability to achieve X as meaning that the agent has at least one plan that has as its trigger the goal event achieve X. That is the agent has at least one way it knows how to achieve X in some situation. At any given time the agent may be unable to actually use this plan (depending on whether its pre-conditions match the state of the world), but having some such plan is clearly a prerequisite to being able to achieve X.\footnote{This assumes that all plans explicitly state what goals they achieve, and does not take account of goals being achieved as a result of side-effects. This is consistent with how all BDI systems of which we are aware are implemented, and is part of the mechanism which allows for efficient practical reasoning.}
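A toy Prolog rendering of this reading (the plan representation and predicate names are ours, not JACK's API) makes the distinction explicit between having a capability for X and having a currently applicable plan for X:

```prolog
% Toy plan library: plan(TriggerEvent, ContextCondition, Body).
plan(achieve(thirst_quenched), have(glass_of_water), [drink(water)]).
plan(achieve(thirst_quenched), have(coins),          [buy(soda), drink(soda)]).

% Current beliefs about the situation (for illustration only).
holds(have(glass_of_water)).

% The agent has a capability for X iff at least one plan is triggered by
% the goal event achieve(X), regardless of whether its context holds now.
cap(X) :-
    plan(achieve(X), _Context, _Body).

% A plan is an actual option only when its context condition holds.
applicable_plan(X, Body) :-
    plan(achieve(X), Context, Body),
    holds(Context).
```

Here cap(thirst_quenched) succeeds even when no context condition currently holds, whereas applicable_plan/2 additionally checks the context; this mirrors the distinction drawn above between having a capability and being able to act on it in the current situation.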
In the description of the implementation of capabilities in JACK (Busetta et al. 1999a) a capability is essentially a set of plans, a fragment of the knowledge base that is manipulated by those plans and a specification of the interface to the capability. The interface is specified partially in terms of what events generated external to the capability, can be handled by this capability. Thus a part of the interface to a capability will be a list of the goal achievement events that the capability is designed to handle. Additional subgoal events and the plans that deal with these can be hidden within the internals of the capability. The interface also specifies what events generated by the capability are to be visible externally and gives information as to what portion of the knowledge base fragment is used by the capability.
As an example a scheduling capability may contain a set of plans to construct a schedule in a certain domain. The knowledge base fragment defined as part of this capability may have knowledge about the objects to be scheduled, their priorities, and various other information that is generated and used as a schedule is being built. There may be a single external goal event called achieve-schedule which this capability responds to while the only events it generates that are seen externally are events which notify the schedule or which notify failure to generate a schedule.
It is easy to see how this abstraction of a set of plans into a capability could be used to advantage in finding plans that are a possibility for responding to a specific event. Rather than examining all plans it is only necessary to look within the plans of either the generating capability, or a capability that has the relevant event specified as part of its external interface. Naturally this relies on appropriate software engineering design and will preclude the system “discovering” a plan within the internals of a capability that achieves a goal that is not specified as part of the interface of the capability. This is consistent with the practical reasoning approach inherent in these systems which relies on forward chaining based on specified triggers (combined with the ability to manage failure and retry alternative mechanisms to achieve goals and subgoals). The abstraction of sets of plans into capabilities also provides a mechanism for name scoping which is a practical help in building large and complex systems.
Busetta et al. (Busetta et al. 1999a) describe how agents can be built by incorporating specific capabilities. A growing amount of work in multi-agent systems discusses agents with varying “roles”. If an agent changes roles dynamically the expectation is that their behaviour also changes. One way to achieve this could be to use capabilities. A capability could specify and implement the things that an agent could do within a particular role. As an agent changed role, appropriate capabilities could then be activated or de-activated.
While a capability (in general language usage) cannot be regarded as a mental attitude similar to beliefs, desires, goals and intentions, beliefs about capabilities (both one’s own and others) are clearly important mental attitudes for reasoning about action.
When we talk about goals and intentions we expect that they are related to aspects of the world that the agent has (at least potentially) some control over. While it is reasonable to talk about an agent having a desire for it to be sunny tomorrow, having a goal for it to be sunny tomorrow makes little intuitive sense - unless of course our agent believes it can control the weather. Just as goals are constrained to be a consistent subset of the set of desires, we would argue that they should also be constrained to be consistent with its capabilities (at least within a single agent system - this needs to be modified for multi-agent systems but the notion of capability remains relevant; for multi-agent systems one must also consider capabilities of agents other than oneself). As intentions are commitments to achieve goals these also are intuitively limited to aspects of the world the agent has some control over. Consequently, we would wish our agent’s goals and intentions to be limited by its capabilities (or what it believes to be its capabilities).
Capabilities may also provide a suitable level at which agents in a multi-agent heterogeneous system have information about other agents. An agent observing an (external) event that it may not itself have the capability to respond to, may pass on the event to another agent if it believes that agent has the capability to respond to the event. (Beliefs about) capabilities of other agents may also provide a mechanism for supporting co-operation. An agent in a multi-agent system may contact or try to influence some other agent with the required capability, or alternatively may make decisions about its own actions based on the believed capabilities of other agents. Goals of an agent in a multi-agent system are likely to be constrained (in some way) by the capabilities of other agents as well as one’s own capabilities.
We explore a possible formalisation of capabilities within BDI logic that lays the initial foundation for addressing some of these issues. We first summarise the BDI logic of Rao and Georgeff and then explore how this can be extended to incorporate capabilities - currently in the context of a single agent reasoning about its own capabilities, although we are also working on extending this to multi-agent systems.
### Semantics of R&G BDI Logic
The logic developed by Rao and Georgeff\footnote{Due to space limitations we are unable to fully define R&G’s logic here, though we attempt to give the basic idea. The reader is referred to (Rao and Georgeff 1991) for full formal definitions.} (e.g. (Rao and Georgeff 1991; 1992)) is a logic involving multiple worlds, where each world is a time-tree of world states with branching time future and single time past. The various nodes in
the future of the time-tree represent the results of different events or agent actions. The different worlds (i.e., different time-tree structures) result from incomplete knowledge about the current state of the world and represent different scenarios of future choices and effects based on differing current state.
The main value of Rao and Georgeff’s formalism is that it avoids anomalies present in some other formalisms whereby an agent is forced to accept as goals (or intentions) all side effects of a given goal (or intention). Modalities are ordered according to a strength relation $\leq_{\text{strong}}$ and modal operators are not closed under implication with respect to a weaker modality, making formulae such as:
$$\text{GOAL}(\psi) \land \text{BEL}(\text{inevitable}(\text{always}(\psi \supset \gamma))) \land \neg\text{GOAL}(\gamma)$$
satisfiable. Thus it is possible to have a goal to go to the dentist, to believe that going to the dentist necessarily involves pain, but not have a goal to have pain.
Unlike the logic of predicate calculus BDI logic formulae are always evaluated with respect to particular time points. The logic has two kinds of formulae; state formulae are evaluated at a specific point in a time-tree, whereas path formulae are evaluated over a path in a time-tree.3 The modal operator $\text{optional}$ is said to be true of a path formula $\theta$ at a particular point in a time-tree if $\theta$ is true of at least one path emanating from that point. The operator $\text{inevitable}$ is said to be true of a path formula $\theta$ at a particular point in a time-tree if $\theta$ is true of all paths emanating from that point. The logic also includes the standard temporal operators $\langle n \rangle$ (next), $\diamond$ (eventually), $\Box$ (always) and $\bigcup$ (until) which operate over path formulae.
Figure 1 illustrates evaluation of some formulae in a belief, goal or intention world (i.e. a time-tree).
A belief $\alpha$ (written $\text{BEL}(\alpha)$) implies that $\alpha$ is true in all belief-accessible worlds. Similarly, a goal (GOAL($\alpha$)) is something which is true in all goal-accessible worlds and an intention (INTEND($\alpha$)) is true in all intention-accessible worlds. The axiomatisation for beliefs is the standard weak-S5 (or KD45) modal system. For goals and intentions the D and K axioms are adopted.
The logic requires that goals be compatible with beliefs (and intentions compatible with goals). This is enforced by requiring that for each belief-accessible world $\omega$ at time $t$, there must be a goal-accessible sub-world of $\omega$ at time $t$. This ensures that no formula can be true in all goal-accessible worlds unless it is true in a belief-accessible world. There is a similar relationship between goal-accessible and intention-accessible worlds.
The key axioms of what Rao and Georgeff refer to as the basic $I$-system (Rao and Georgeff 1991) are as follows:
- AI1: $\text{GOAL}(\alpha) \supset \text{BEL}(\alpha)$
- AI2: $\text{INTEND}(\alpha) \supset \text{GOAL}(\alpha)$
This framework can then be used as a basis for describing and exploring various commitment axioms that correspond to agents that behave in various ways with respect to commitment to their intentions. Rao and Georgeff describe axioms for what they call a blindly committed agent, a single-minded agent and an open-minded agent, showing that as long as an agent’s beliefs about the current state of the world are always true, as long as the agent only acts intentionally, and as long as nothing happens that is inconsistent with the agent’s expectations, then these agents will eventually achieve their goals.
**Semantics of Capabilities**
As discussed previously it makes little intuitive sense to have a goal and an intention for the sun to shine, unless an agent also has some mechanism for acting to achieve this world state. We extend the BDI logic of Rao and Georgeff’s $I$-system (Rao and Georgeff 1991; 1992) to incorporate capabilities which constrain agent goals and intentions to be compatible with what it believes are its capabilities. We will call our extended logic the $IC$-system.
The $IC$-system requires capability-accessible worlds exactly analogous to belief-accessible worlds, goal-accessible worlds and intention-accessible worlds. $\text{CAP}(\phi)$ is then defined as being true if it is true in all the capability-accessible worlds. If $\mathcal{C}$ is the accessibility relation with respect to capabilities, then
$$M, v, w_t \models \text{CAP}(\phi) \iff \forall w' \in \mathcal{C}^w_t : M, v, w'_t \models \phi$$
We adopt the D and K axioms for capabilities, i.e. capabilities are closed under implication and consistent.
**Compatibility Axioms**
The first two axioms of the basic $I$-system described in the previous section have to do with the compatibility between beliefs and goals, and goals and intentions. We add two further compatibility axioms relating to capabilities. Note that the compatibility axioms refer only to so-called $O$-formula, i.e. formula that do not contain any positive occurrences of $\text{inevitable}$.
**Belief-Capability Compatibility:**
This axiom states that if the agent has an $O$-formula $\alpha$ as a capability, the agent believes that formula.
$$\text{AIC1: } \text{CAP}(\alpha) \supset \text{BEL}(\alpha)$$
$^3$See (Rao and Georgeff 1991) for definitions of state and path formulae.
$^4$AI1 and AI2 only hold for so-called $O$-formulae which are formulae with no positive occurrences of inevitable outside the scope of the modal operators. See (Rao and Georgeff 1991) for details. Also $\supset$ is implication (not superset).
$^5$It is also possible to have a variant where capability-accessible worlds are also required to always be sub-worlds of belief-accessible worlds. This variant and its ramifications are considered in a longer version of this paper which will be available as an RMIT technical report.
$^6$All the details of the supporting framework are not given here due to space limitations, but follow straightforwardly from (Rao and Georgeff 1991).
Beliefs, capabilities, goals and intentions respectively. So \( \text{CAP}(\text{rich}) \supset \text{BEL}(\text{rich}) \) means that if I am capable of being rich now then I believe I am rich now. Importantly it does not mean that if I have a capability of being rich in the future, I believe that I am rich in the future - I believe only that there is some possible future where I am rich. We note that intuitively it only really makes sense to talk about capabilities (and goals and intentions) with respect to future time, so the semantics of formulae such as \( \text{CAP}(\text{rich}) \supset \text{BEL}(\text{rich}) \) are intuitively awkward though not problematic. This is inherent in the original logic and applies to goals and intentions at least as much as to capabilities. It could be addressed by limiting the form of valid formulae using \( \text{CAP}, \text{GOAL} \) and \( \text{INTEND} \) but we have chosen to remain consistent with the original BDI logic.
The semantic condition associated with this axiom is:
\[
\text{CIC1: } \forall w' \in \mathcal{B}^w_t,\ \exists w'' \in \mathcal{C}^w_t \text{ such that } w'' \sqsubseteq w'.
\]
**Capability-Goal Compatibility**
This axiom and associated semantic condition states that if the agent has an O-formula \( \alpha \) as a goal, then the agent also has \( \alpha \) as a capability. This constrains the agent to adopt as goals only formulae where there is a corresponding capability.
\[
\text{AIC2: } \text{GOAL}(\alpha) \supset \text{CAP}(\alpha)
\]
\[
\text{CIC2: } \forall w' \in \mathcal{C}^w_t,\ \exists w'' \in \mathcal{G}^w_t \text{ such that } w'' \sqsubseteq w'.
\]
**Mixed Modality Axioms**
Axioms AI4, AI5 and AI6 define the relationships when the \( \text{BEL}, \text{GOAL} \) and \( \text{INTEND} \) modalities are nested. We add two new axioms and a corollary along with semantic conditions to capture the relationship between \( \text{CAP} \) and each of the other modalities. We note that the original axiom AI4 actually follows from AI1 and AI6.
**Beliefs about Capabilities**
If the agent has a capability \( \alpha \) then it believes that it has a capability \( \alpha \).
\[
\text{AIC3 } \text{CAP}(\alpha) \supset \text{BEL}(\text{CAP}(\alpha))
\]
\[
\text{CIC3: } \forall w' \in \mathcal{B}^w_t,\ \forall w'' \in \mathcal{C}^{w'}_t \text{ we have } w'' \in \mathcal{C}^w_t
\]
**Capabilities regarding Goals**
If an agent has a goal \( \alpha \) then it has the capability to have the goal \( \alpha \).
\[
\text{AIC4: } \text{GOAL}(\alpha) \supset \text{CAP}(\text{GOAL}(\alpha))
\]
\[
\text{CIC4: } \forall w' \in \mathcal{C}^w_t,\ \forall w'' \in \mathcal{G}^{w'}_t \text{ we have } w'' \in \mathcal{G}^w_t
\]
**Capabilities regarding Intentions**
If an agent has an intention \( \alpha \) it also has the capability to have the intention \( \alpha \).
**Follows from AIC2 and AI6**
\[
\text{INTEND(}\alpha\text{)} \supset \text{CAP(INTEND(}\alpha\text{))}
\]
**Semantic Condition:**
\[
\forall w' \in \mathcal{C}^w_t,\ \forall w'' \in \mathcal{I}^{w'}_t \text{ we have } w'' \in \mathcal{I}^w_t
\]
Strengthening of this group of axioms by replacing implication with equivalence would result in the expanded version of the equivalences mentioned in (Rao and Georgeff 1991) namely \( \text{INTEND(}\alpha\text{)} \equiv \text{BEL(INTEND(}\alpha\text{))} \equiv \text{CAP(INTEND(}\alpha\text{))} \equiv \text{GOAL(INTEND(}\alpha\text{))} \equiv \text{GOAL(}\alpha\text{)} \equiv \text{BEL(GOAL(}\alpha\text{))} \equiv \text{CAP(GOAL(}\alpha\text{))} \). Equivalence strengthening would also give \( \text{CAP(}\alpha\text{)} \equiv \text{BEL(CAP(}\alpha\text{))} \).
As mentioned in (Rao and Georgeff 1991) this has the effect of collapsing mixed nested modalities to their simpler non-nested forms.
We will refer to the axioms AI2, AI3, AI6, AI7, AI8, AIC1, AIC2, AIC3 and AIC4 as the basic IC-system. We note that all axioms of the I-system remain true, although some are consequences rather than axioms.$^9$
**Commitment Axioms**
Rao and Georgeff define three variants of a commitment axiom, which taken together with the basic axioms define
$^9$AI1 follows from AIC1 and AIC2. AI4 follows from AIC1, AIC2 and AI6. AI5 follows from AIC1 and AIC4.
what they call a blindly committed agent, a single-minded agent and an open-minded agent. The blindly committed agent maintains intentions until they are believed true, the single-minded agent maintains intentions until they are believed true or are believed impossible to achieve, while the open-minded agent maintains intentions until they are believed true or are no longer goals.
We define an additional kind of agent which we term a self-aware agent which is able to drop an intention if it believes it no longer has the capability to achieve that intention.
The self-aware agent is defined by the basic IC-system plus the following axiom which we call AIC9d.\(^{10}\)
\[
\text{AIC9d: } \text{INTEND}(\text{inevitable}\,\Box\phi) \supset \text{inevitable}\bigl(\text{INTEND}(\text{inevitable}\,\Box\phi)\ \bigcup\ (\text{BEL}(\phi) \lor \neg\text{CAP}(\text{optional}\,\Box\phi))\bigr)
\]
It is then possible to extend theorem 1 in \((\text{Rao and Georgeff 1991})\) to show that a self-aware agent will inevitably eventually believe its intentions, and to prove a new theorem that under certain circumstances the self-aware agent will achieve its intentions.\(^{11}\) Self-awareness can be combined with either open-mindedness or single-mindedness to obtain self-aware-open-minded and self-aware-single-minded agents.
### Properties of the Logic
The logic allows for believing things without having the capability for this, i.e. \(\text{BEL}(\phi) \land \neg \text{CAP}(\phi)\) is satisfiable. This means that, for instance, you can believe the sun will inevitably rise, without having a capability to achieve this. Also \(\neg \text{inevitable}(\Box \text{BEL}(\phi)) \land \neg \text{GOAL}(\phi)\) is satisfiable. Similarly, one can have the capability to achieve something without having the goal to achieve this. In general, a modal formula does not imply a stronger modal formula, where \(\text{BEL} <_{\text{strong}} \text{CAP} <_{\text{strong}} \text{GOAL} <_{\text{strong}} \text{INTEND}\).
**Theorem 1** For modalities \(R_1\) and \(R_2\) such that \(R_1 <_{\text{strong}} R_2\), the following formulae are satisfiable:
\[(a) \ R_1(\phi) \land \neg R_2(\phi) \]
\[(b) \ \neg \text{inevitable}(\Box R_1(\phi)) \land \neg R_2(\phi)\]
**Proof**: We prove the result for BEL and CAP. The proof for the other pairs of modalities is similar. Assume \(\text{BEL}(\phi)\). Then, \(\phi\) is true in every belief-accessible world. For every belief-accessible world there is a capability-accessible world. However, \(C\) may map to worlds that do not correspond to any belief-accessible world. If \(\phi\) is not true in one of these worlds, then \(\phi\) is not a capability. This shows the satisfiability of (a). Similar reasoning yields (b).
As we have seen before, the modalities are closed under implication. However, another property of the logic is that a modal operator is not closed under implication with respect to weaker modalities. For instance, an agent may have the capability to do \(\phi\), believe that \(\phi\) implies \(\gamma\), but not have the capability to do \(\gamma\).\(^{12}\)
---
10This numbering is chosen because of the relationship of AIC9d to AI9a, AI9b, and AI9c in the original I-system.
11These theorems and proofs are not shown here due to space restrictions. They are available in the longer version of the paper.
12The alternative formulation referred to in footnote 5 does not have this property with respect to capabilities.
---
**Theorem 2** For modalities \(R_1\) and \(R_2\) such that \(R_1 <_{\text{strong}} R_2\), the following formulae are satisfiable:
\[(a) \ R_2(\phi) \land R_1(\text{inevitable}(\Box (\phi \supset \gamma))) \land \neg R_2(\gamma) \]
\[(b) \ R_2(\phi) \land \neg \text{inevitable}(\Box R_1(\neg \text{inevitable}(\Box (\phi \supset \gamma)))) \land \neg R_2(\gamma)\]
**Proof**: We prove the result for BEL and CAP. The proof for the other pairs of modalities is similar. Assume \(\text{CAP}(\phi)\) and \(\text{BEL}(\text{inevitable}(\Box (\phi \supset \gamma)))\). Then, \(\phi\) is true in every capability-accessible world. To be able to infer that \(\gamma\) is true in each capability-accessible world, we would need that \(\phi \supset \gamma\) is true in each capability-accessible world. We know that for every belief-accessible world \(\text{inevitable}(\Box (\phi \supset \gamma))\) is true and that for each belief-accessible world there is a capability-accessible world. However, \(C\) may map to other worlds, where this is not true and thus \(\gamma\) is not a capability. This shows the satisfiability of (a). Similar reasoning yields (b).
The formal semantics of capabilities as defined fit well into the existing R\&G BDI logic and allow definition of further interesting types of agents. We look now at how this addition of capabilities affects the specification of an abstract interpreter for BDI systems and also what issues and questions arise for implementations as the result of the theoretical exploration.
### Implementation aspects
An abstraction of a BDI-interpreter which follows the logic of the basic I-system is given in \((\text{Rao and Georgeff 1992})\).\(^{13}\) The first stages in the cycle of this abstract interpreter are to generate and select plan options. These are filtered by beliefs, goals and current intentions. Capabilities now provide an additional filter on the options we generate and select. Similarly capabilities must be considered when dropping beliefs, goals and intentions. In a system with dynamic roles capabilities themselves may also be dropped. Thus we obtain this slightly modified version of the interpreter in \((\text{Rao and Georgeff 1992})\).
**BDI with capabilities interpreter:**
\begin{verbatim}
initialise-state();
do
options := option-generator(event-queue,B,C,G,I);
selected-options := deliberate(options,B,C,G,I);
update-intentions(selected-options,I);
execute(I);
get-new-external-events();
drop-successful-attitudes(B,C,G,I);
drop-impossible-attitudes(B,C,G,I);
until quit.
\end{verbatim}
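As a toy illustration of the added filter (reusing the plan/3, holds/1, cap/1 and applicable_plan/2 predicates from the earlier sketch; none of this is JACK's or PRS's actual API), option generation can discard goal events for which the agent has no capability before context conditions are even considered:

```prolog
% Option generation with a capability filter (illustrative sketch only).
% A goal event achieve(X) yields options only if the agent has a
% capability for X; otherwise it produces none (in a multi-agent setting
% it might instead be forwarded to an agent believed to have the capability).
options_for(achieve(X), Options) :-
    cap(X),
    !,
    findall(option(X, Body), applicable_plan(X, Body), Options).
options_for(achieve(_), []).
```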
This abstract interpreter is at a very high level and there are many details which must be considered in the actual implementation that are hidden in this abstraction. One important implementation detail that is highlighted by the
---
13Due to lack of space we cannot give more than the most basic summary here of this interpreter and its relation to the logic.
definitions of the various kinds of agents (blindly committed, single-minded, open-minded and self-aware) has to do with when intentions should be dropped. With respect to capabilities the axiom AIC9d highlights the fact that if capabilities are allowed to change during execution it may be necessary to drop some intentions when a capability is lost/removed.
The observation that it is possible for an agent to have the capability to do $\phi$, believe that $\phi$ implies $\gamma$, but not have the capability to do $\gamma$ (see before), highlights an area where one may wish to make the agent more "powerful" in its reasoning by disallowing this situation. This is possible by a modification of the logical formalisation\textsuperscript{14} but would have an impact on how the option generation and selection phases of the abstract interpreter work.
In (Rao and Georgeff 1992) an example is given to illustrate the workings of the specified abstract interpreter. In this example John wants to quench his thirst and has plans (which are presented as a special kind of belief) for doing this by drinking water or drinking soda, both of which then become options and can be chosen as intentions (instantiated plans that will be acted on).
It is also possible to construct the example where the agent believes that rain always makes the garden wet, and that rain is eventually possible, represented as:
$$\text{BEL}(\text{inevitable}\,\Box(\text{rain} \supset \text{garden-wet}))$$
$$\text{BEL}(\text{optional } \lozenge (\text{rain}))$$
In the R&G formalism, which does not differentiate between plans and other kinds of beliefs, this would allow our agent to adopt (rain) as a GOAL. However, in the absence of any plan in the plan library for ever achieving rain this does not make intuitive sense - and in fact could not happen in implemented systems. With the IC-system presented here we would also require $\text{CAP}(\text{optional}\,\lozenge(\text{rain}))$, thus restricting goal adoption to situations where the agent has appropriate capabilities (i.e. plans).
This example demonstrates that in some respects the IC-system is actually a more correct formalisation of implemented BDI systems than the original I-system.
**Conclusion and Future Work**
The formalisation of capabilities and their relationships to beliefs, goals and intentions is a clean extension of an existing theoretical framework. Advantages of the extension include eliminating mismatch between theory and what happens in actual systems, better mapping of theory to intuition, indication of areas for development of implemented reasoning in line with the theory and highlighting of issues for consideration in actual implementations.
Exploration of how an agent’s knowledge of other agents’ capabilities affects its own goals and intentions requires further work and some modifications to the axioms relating goals to capabilities. This seems to require a framework which allows for beliefs about other agents’ capabilities.
Goals would then be constrained by a combination of one’s own capabilities plus beliefs about other agents’ capabilities.
**References**
\textsuperscript{14}The necessary modification is essentially to require that all capability-accessible worlds are sub-worlds of belief-accessible worlds. However this breaks the symmetry of the current formalisation where capability accessible worlds are exactly analogous to belief/goal/intention accessible worlds.
XEP-0389: Extensible In-Band Registration
Sam Whited
mailto:[email protected]
xmpp:[email protected]
https://blog.samwhited.com/
2020-11-17
Version 0.6.0
Status: Experimental
Type: Standards Track
Short Name: ibr2
This specification defines an XMPP protocol extension for in-band registration with instant messaging servers and other services with which an XMPP entity may initiate a stream. It aims to improve upon the state of the art and replace XEP-0077: In-Band Registration by allowing multi-factor registration mechanisms and account recovery.
Legal
Copyright
This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).
Permissions
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at https://xmpp.org/about/xsf/ipr-policy or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
Contents
1 Introduction
2 Requirements
3 Glossary
4 Use Cases
5 Discovering Support
6 Flows
6.1 Stream Feature
6.2 Retrieving the Flows
6.3 Selecting a Flow
6.4 Issuing Challenges
6.5 Completing Registration or Recovery
7 Challenges
7.1 Data Form
7.2 Out of Band Data
7.3 SASL
8 Internationalization Considerations
9 Security Considerations
10 IANA Considerations
11 XMPP Registrar Considerations
11.1 Protocol Namespaces
11.2 IBR Challenges Registry
11.3 Challenge Types
11.4 Namespace Versioning
1 Introduction
Historically, registering with an XMPP service has been difficult. Each server either used customized out-of-band registration mechanisms such as web forms which were difficult to discover, or they used In-Band Registration (XEP-0077) which could easily be abused by spammers to register large numbers of accounts and which allowed for only limited extensibility.
To solve these issues this specification provides a new in-band registration protocol that allows servers to present the user with a series of "challenges". This allows for both multi-stage proof-of-possession registration flows and spam prevention mechanisms such as proof-of-work functions.
2 Requirements
• The server MUST be able to present multiple challenges to the client.
• The server SHOULD be able to reduce account registration spam.
• The server MAY present a challenge that requires the user to complete a step out-of-band.
• A client SHOULD be able to register an account without requiring the user to leave the client.
• A client MUST be able to use the same mechanism to register an account and to recover a forgotten password (subject to server policy).
• A client MUST be able to register with a server as well as external components.
3 Glossary
Challenge A challenge is an action taken during account registration or recovery that requires a response. For example, displaying a form to a user or asking for a token.
Challenge Type The type of a challenge is a unique string that identifies the type of payload that can be expected. For example, a challenge element with type "jabber:x:data" can be expected to contain a data form. Challenge types must be defined and registered in the challenge types registry. When defining a challenge it is often convenient to reuse an XML namespace from the document defining the challenge.
Flow A flow, or more specifically a "registration flow" or "recovery flow", is a collection of challenges that together can be used to gather enough information to register a new account or recover an existing account.
4 Use Cases
- As a server operator, I want to prevent individual spammers from registering many accounts so I require registrants to perform a proof-of-work function before registration is completed.
- As a server operator I want to prevent bots from registering accounts so I require that registrants submit a form which requires user interaction.
- As a user I do not want to lose access to my account if I forget my password, so I provide my email and telephone number in response to the server's data form.
- As a server operator I do not want users to accidentally add an incorrect recovery address so I send an email with a unique link to the indicated account and require that they click the link before registration can continue.
- As a server operator I want to prevent SPIM using a proof-of-possession protocol so I present the user with a form asking for a mobile phone number and then send a verification code to that number via SMS and show another form requesting the verification code.
5 Discovering Support
Clients, servers, and other services such as components that support Extensible IBR MUST advertise the fact by including a feature of “urn:xmpp:register:0” in response to Service Discovery (XEP-0030) information requests and in their Entity Capabilities (XEP-0115) profiles.
Listing 1: Disco info response
```xml
<query xmlns='http://jabber.org/protocol/disco#info'>
  ...
  <feature var='urn:xmpp:register:0'/>
  ...
</query>
```
6 Flows
Registration or recovery is completed after responding to a series of challenges issued by the server. Challenges are grouped into “flows”, a number of challenges that may be issued together to complete an action. For example, a registration flow might be created that issues a data form challenge which will be shown to the user to gather information, then issues a second data form challenge to let the user enter a confirmation code that was sent to their email.
6.1 Stream Feature
If a server supports registering for or recovering an account using Extensible IBR during stream negotiation, it MUST inform the connecting client when returning stream features during the stream negotiation process. This is done by including a `<register/>` element, qualified by the 'urn:xmpp:register:0' namespace for account registration, or a `<recovery/>` element qualified by the same namespace for account recovery. The register and recovery features are always voluntary-to-negotiate. The registration and recovery features MUST NOT be advertised before a security layer has been negotiated, e.g. using direct TLS or opportunistic TLS. They SHOULD be advertised at the same time as the SASL authentication feature, meaning that after registration or recovery is completed SASL authentication can proceed.
For recovery or registration, the server MUST include a list of all challenges which the client may receive during the course of registering or recovering an account. These are grouped into “flows” and let the client pick a registration workflow that only contains challenges which the client supports.

Each `<flow/>` element MUST have a unique "id" attribute which is used by the client to identify the flow being selected. The id attribute is only used during this particular flow negotiation and has no meaning after a flow has been selected. Flows must also have at least one `<name/>` element containing a short, human readable description of the flow. If multiple `<name/>` elements are present they MUST have unique values for the "xml:lang" attribute. Clients MAY use the name element to show the different flows to the user and ask them to pick between them.

Each flow element must also contain an unordered set of `<challenge/>` elements representing the various challenge types that may be required to complete the registration or recovery flow. Each `<challenge/>` element contains a "type" attribute that uniquely identifies the challenge for the purpose of determining if it is supported. If a flow would offer the same challenge twice (e.g. two data forms asking for different data), the challenge SHOULD only be listed once in the flow element.
For example, a server may advertise a "Verify with SMS" flow and a "Verify by Phone Call" flow that both show a data form asking for a phone number and then a second data form asking for a token provided to the user in a text message or phone call depending on which flow the user selects.
Listing 2: Host Advertises Stream Features
```xml
<stream:features>
<mechanisms xmlns='urn:xmpp:sasl:0'>
<mechanism>EXTERNAL</mechanism>
<mechanism>SCRAM-SHA-1-PLUS</mechanism>
<mechanism>SCRAM-SHA-1</mechanism>
<mechanism>PLAIN</mechanism>
</mechanisms>
</stream:features>
```
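Listing 2 as reproduced above shows only the SASL mechanisms. For orientation, the following non-normative sketch illustrates how a `<register/>` feature carrying the two flows described in the prose might be advertised; the flow ids, names, and challenge sets are assumptions made for illustration, not content taken from this specification.

```xml
<stream:features>
  <register xmlns='urn:xmpp:register:0'>
    <!-- Each flow carries a unique id, at least one human-readable
         name, and the set of challenge types the client may receive. -->
    <flow id='0'>
      <name xml:lang='en'>Verify with SMS</name>
      <challenge type='jabber:x:data'/>
    </flow>
    <flow id='1'>
      <name xml:lang='en'>Verify by Phone Call</name>
      <challenge type='jabber:x:data'/>
    </flow>
  </register>
</stream:features>
```

Because both example flows would issue two data forms, the 'jabber:x:data' challenge type is listed only once per flow, in line with the requirement above.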
Just because a challenge type is listed by the server in the initial flow element does not mean that it will be issued by the server. Servers MAY choose to issue more or fewer challenges based on the result of previous challenges and may not use every challenge type listed in the original flow.
6.2 Retrieving the Flows
Registration or recovery may also be completed after stream negotiation if server policy allows it.
To find what flows an entity provides (if any) after stream negotiation is complete the requester can send an IQ of type "get" containing a <register> or <recovery> element qualified by the "urn:xmpp:register:0" namespace.
Listing 3: Registration flows query
```xml
<iq type='get'>
<register xmlns='urn:xmpp:register:0'/>
</iq>
```
When responding to a query for registration or recovery flows the list of challenges MUST be included just as it would be during stream feature negotiation. That is, a "register" or "recovery" element containing a list of flows, each with an id, containing a name and a list of challenges.
If an entity supports issuing challenges but does not provide any flows after stream negotiation is complete it MUST respond with an empty list. Similarly, an entity that supports this specification but does not support issuing challenges itself (for example, a client that only supports receiving challenges) MUST respond successfully with an empty list.
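As a non-normative sketch (the flow content and the shape of the empty list are assumptions for illustration), a server answering the query in Listing 3 might reply with an IQ result carrying its flows, or with an empty `<register/>` element if it offers none:

```xml
<iq type='result'>
  <register xmlns='urn:xmpp:register:0'>
    <flow id='0'>
      <name xml:lang='en'>Verify with SMS</name>
      <challenge type='jabber:x:data'/>
    </flow>
  </register>
</iq>

<!-- An entity that offers no flows, or that only receives challenges,
     is assumed to respond successfully with an empty element: -->
<iq type='result'>
  <register xmlns='urn:xmpp:register:0'/>
</iq>
```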
6.3 Selecting a Flow
A client selects the registration or recovery feature for negotiation by replying with an element of the same name and namespace. The element MUST contain a <flow> element that MUST have an "id" attribute matching one of the flows advertised by the server. For example, to select the "Verify by Phone Call" registration flow from the previous example, the client would reply with:
Listing 6: Client selects a registration flow
```xml
<register xmlns='urn:xmpp:register:0'>
<flow id='1'/>
</register>
```
If during stream initialization the client attempts to select a flow that does not match one of the flows sent by the server, the server MUST respond with an "undefined-condition" stream error containing an “invalid-flow” application error qualified by the ‘urn:xmpp:register:0’ namespace.
Listing 7: Server responds to an invalid selection during stream negotiation
```xml
<stream:error>
<undefined-condition xmlns='urn:ietf:params:xml:ns:xmpp-streams'/>
<invalid-flow xmlns='urn:xmpp:register:0'/>
</stream:error>
</stream:stream>
```
If the client is initiating registration or recovery after a stream has already been initiated it uses the same registration element wrapped in an IQ of type "set".
Listing 8: Client selects a recovery flow after stream negotiation
```xml
<iq type='set' id='foo'>
<recovery xmlns='urn:xmpp:register:0'>
<flow id='0'/>
</recovery>
</iq>
```
If the client attempts to select a flow that does not match one of the flows sent by the server in response to an IQ after stream initialization the server MUST respond with a stanza error of type "item-not-found".
Listing 9: Server responds to an invalid selection after stream negotiation
```xml
<iq type='error'>
<error type='cancel'>
<item-not-found xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
</error>
</iq>
```
6.4 Issuing Challenges
If a valid flow is selected by the client the server then replies to the IQ or feature selection with a challenge. If replying to an IQ, the challenge must be wrapped in an IQ of type "result". Challenges take the form of a `<challenge/>` element qualified by the ‘urn:xmpp:register:0’ namespace with a ‘type’ attribute that uniquely identifies the type of payload a client might expect the element to contain.
Listing 10: Server issues a challenge
```xml
<challenge xmlns='urn:xmpp:register:0' type='urn:example:challenge'>
<example xmlns='urn:example:challenge'>Payload</example>
</challenge>
```
After a challenge is received, the client replies to the challenge by sending a `<response/>` element qualified by the 'urn:xmpp:register:0' namespace or a cancellation as defined later in this document. If the client sends a response, it MUST also include the payload corresponding to the challenge's 'type' (which may be empty).
Listing 11: Client responds to a challenge
```xml
<response xmlns='urn:xmpp:register:0'>
<result xmlns='urn:example:challenge'>Example Response</result>
</response>
```
After a response is received, if the server needs more information it MAY issue another challenge. For example, if the user has entered their email in response to a challenge, the server might send an email and then issue another challenge asking for the unique code sent in the email.
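Continuing the email example, a follow-up challenge might look roughly like the following non-normative sketch; the instructions, field names, and FORM_TYPE value are assumptions for illustration only.

```xml
<challenge xmlns='urn:xmpp:register:0' type='jabber:x:data'>
  <x xmlns='jabber:x:data' type='form'>
    <instructions>Enter the code that was sent to your email address.</instructions>
    <!-- A single text field for the emailed verification code. -->
    <field type='hidden' var='FORM_TYPE'>
      <value>urn:xmpp:register:0</value>
    </field>
    <field type='text-single' label='Verification Code' var='code'/>
  </x>
</challenge>
```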
6.5 Completing Registration or Recovery
If after receiving a challenge or response a client or server does not wish to continue registration or recovery, it may send an empty `<cancel/>` element qualified by the 'urn:xmpp:register:0' namespace. This informs the other party that the registration or recovery will not be completed. This is the same as submitting a data form of type 'cancel' in response to a data form challenge.
Listing 12: User Cancels Registration or Recovery
```xml
<cancel xmlns='urn:xmpp:register:0'/>
```
If the IQ based registration or recovery flow is being used and the server wishes to cancel the flow, it MAY respond to any IQ with the cancel element and type "result".
Listing 13: Client or server cancels request
```xml
<iq type='result' id='bar'>
<cancel xmlns='urn:xmpp:register:0'/>
</iq>
```
A server may also issue a cancellation IQ with type 'set' if it wishes to cancel after a request/response has been received (i.e. when there is no existing IQ to respond to).
Listing 14: Server cancels flow
```xml
<iq type='set' id='bar'>
<cancel xmlns='urn:xmpp:register:0'/>
</iq>
```
If the client successfully completes all required challenges during stream negotiation the server MUST return a `<success/>` element qualified by the ‘urn:xmpp:register:0’ namespace, at which point it may continue with the stream negotiation process. The success element MUST contain a `<jid>` element containing the bare JID as registered or recovered by the server and a `<username>` element containing the simple user name for use with SASL (normally this will be the same as the localpart of the JID).
Listing 15: Server indicates success during stream negotiation
```xml
<success xmlns='urn:xmpp:register:0'>
<jid>[email protected]</jid>
<username>mercutio</username>
</success>
```
If the IQ based flow is being used and the server wishes to indicate success after a challenge has been completed it sends an IQ of type "set" containing the `<success/>` element.
Listing 16: Server indicates success after stream negotiation
```xml
<iq type='set' id='bar'>
<success xmlns='urn:xmpp:register:0'>
<jid>[email protected]</jid>
<username>mercutio</username>
</success>
</iq>
```
7 Challenges
This document defines several challenges that use existing technologies.
7.1 Data Form
Challenges of type ‘jabber:x:data’ MUST always contain a data form (an ‘x’ element with type 'form') as defined by Data Forms (XEP-0004).
Listing 17: Server issues a data form challenge
```xml
<challenge xmlns='urn:xmpp:register:0' type='jabber:x:data'>
  <x xmlns='jabber:x:data' type='form'>
    <title>Chat Registration</title>
    <instructions>
      Please provide the following information
      to sign up to view our chat rooms!
    </instructions>
    <field type='hidden' var='FORM_TYPE'>
      <value>urn:xmpp:register:0</value>
    </field>
    <field type='text-single' label='Given_Name' var='first'>
      <value>Juliet</value>
    </field>
    <field type='text-single' label='Family_Name' var='last'>
      <value>Capulet</value>
    </field>
    <field type='text-single' label='Nickname' var='nick'>
      <value>Jule</value>
    </field>
    <field type='text-single' label='Recovery_Email_Address' var='email'>
      <value>[email protected]</value>
    </field>
  </x>
</challenge>
```
The response to a "jabber:x:data" challenge MUST be a form submission (an 'x' element of type 'submit'). For instance, to reply to the data form challenge from the previous example a client might send:
```xml
<response xmlns='urn:xmpp:register:0'>
<x xmlns='jabber:x:data' type='submit'>
<field type='hidden' var='FORM_TYPE'>
<value>urn:xmpp:register:0</value>
</field>
<field type='text-single' label='Given_Name' var='first'>
<value>Juliet</value>
</field>
<field type='text-single' label='Family_Name' var='last'>
<value>Capulet</value>
</field>
<field type='text-single' label='Nickname' var='nick'>
<value>Jule</value>
</field>
<field type='text-single' label='Recovery_Email_Address' var='email'>
<value>[email protected]</value>
</field>
</x>
</response>
```
7.2 Out of Band Data
Challenges of type "jabber:x:oob" MUST contain an <x/> element qualified by the "jabber:x:oob" namespace as defined in Out-of-Band Data (XEP-0066)\(^5\).
Listing 19: Server issues an OOB challenge
```xml
<challenge xmlns='urn:xmpp:register:0'
type='jabber:x:oob'>
<x xmlns='jabber:x:oob'>
<url>http://example.net/login?token=foo</url>
</x>
</challenge>
```
If the client sends a response to the OOB challenge it MUST be empty.
Listing 20: Client acknowledges the OOB challenge
```xml
<response xmlns='urn:xmpp:register:0'/>
```
7.3 SASL
Servers can support changing passwords by providing a reset flow containing a SASL challenge. The SASL challenge re-uses the SASL profile from RFC 6120\(^6\). The server begins by sending the mechanisms list, and the client responds by selecting a mechanism and possibly including initial data. Each step in the SASL process is issued as a new SASL challenge.
Listing 21: SASL challenge flow
```xml
<!-- Server -->
<challenge xmlns='urn:xmpp:register:0'
type='urn:ietf:params:xml:ns:xmpp-sasl'>
<mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'>
<mechanism>EXTERNAL</mechanism>
<mechanism>SCRAM-SHA-1-PLUS</mechanism>
<mechanism>SCRAM-SHA-1</mechanism>
<mechanism>PLAIN</mechanism>
</mechanisms>
</challenge>
<!-- Client -->
<response xmlns='urn:xmpp:register:0'>
<auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl'
mechanism='SCRAM-SHA-1'>
biwsbj1qdWxpZXQscj1vTXNUQUF3QUFBQU1BQUBT1AwWEFBQUFBQUJQVTBBQQ==
</auth>
</response>
```
8 Internationalization Considerations
When providing instructions in a data form, or in the name element of a registration or recovery flow, the server SHOULD use the language specified in the XML stream’s current xml:lang, or the closest language for which the server has a translation (e.g. based on mutual intelligibility between scripts and languages).
For more information about language tags and matching, see BCP 47.\(^7\)
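As a non-normative sketch of what this can look like on the wire, a flow may advertise its name in several languages through repeated `<name/>` elements with distinct xml:lang values, from which the closest match to the stream's language is chosen; the German translation below is purely illustrative.

```xml
<flow id='0'>
  <name xml:lang='en'>Verify with SMS</name>
  <name xml:lang='de'>Verifizierung per SMS</name>
  <challenge type='jabber:x:data'/>
</flow>
```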
9 Security Considerations
Servers that allow in-band registration need to take measures to prevent abuse. Common techniques to prevent spam registrations include displaying CAPTCHAs or requiring proof-of-possession of a valid email address or telephone number by sending a unique code (e.g. an HMAC that can later be verified as having originated at the server) to the user's email and requiring that they enter the code before continuing. Servers that do not take such measures risk being blacklisted by other servers in the network.
\(^7\)BCP 47: Tags for Identifying Languages <http://tools.ietf.org/html/bcp47>.
10 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA).
11 XMPP Registrar Considerations
11.1 Protocol Namespaces
This specification defines the following XML namespace:
- `urn:xmpp:register:0`
Upon advancement of this specification from a status of Experimental to a status of Draft, the XMPP Registrar shall add the foregoing namespace to the registries located at <https://xmpp.org/registrar/stream-features.html>, and <https://xmpp.org/registrar/disco-features.html> as described in Section 4 of XMPP Registrar Function (XEP-0053).
The XMPP Registrar shall also add the foregoing namespace to the Jabber/XMPP Protocol Namespaces Registry located at <https://xmpp.org/registrar/namespaces.html>. Upon advancement of this specification from a status of Experimental to a status of Draft, the XMPP Registrar ¹² shall remove the provisional status from this registry entry.
8The Internet Assigned Numbers Authority (IANA) is the central coordinator for the assignment of unique parameter values for Internet protocols, such as port numbers and URI schemes. For further information, see <http://www.iana.org/>.
9The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.
11.2 IBR Challenges Registry
The XMPP Registrar shall maintain a registry of IBR challenges. Challenges defined within the XEP series MUST be registered with the XMPP Registrar.
In order to submit new values to this registry, the registrant shall define an XML fragment of the following form and either include it in the relevant XMPP Extension Protocol or send it to the email address <[email protected]>:
```xml
<challenge>
<type>A name that uniquely identifies the challenge.</type>
<desc>A natural-language summary of the challenge.</desc>
<doc>The document (or documents) in which the IBR challenge and its payload are defined.</doc>
</challenge>
```
For an example registration, see the next section.
11.3 Challenge Types
This specification defines the following IBR challenges:
- `jabber:x:data`
- `jabber:x:oob`
Upon advancement of this specification from a status of Experimental to a status of Draft, the XMPP Registrar ¹³ shall add the following definitions to the IBR challenges registry, as described in this document:
```xml
<challenge>
  <type>jabber:x:data</type>
  <desc>Requests that the client fill out an XEP-0004 data form.</desc>
  <doc>&xep0389;, &xep0004;</doc>
</challenge>
<challenge>
  <type>jabber:x:oob</type>
  <desc>Requests that the client execute a URI.</desc>
  <doc>&xep0066;</doc>
</challenge>
```
¹²The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.
¹³The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.
11.4 Namespace Versioning
If the protocol defined in this specification undergoes a revision that is not fully backwards-compatible with an older version, the XMPP Registrar shall increment the protocol version number found at the end of the XML namespaces defined herein, as described in Section 4 of XEP-0053.
Business Process Modelling Towards Derivation of Information Technology Goals
Youseef Alotaibi and Fei Liu
Department of Computer Science and Computer Engineering,
La Trobe University, Bundoora, VIC, 3086, Australia,
Email: [yaalotaibi@student, f.liu]latrobe.edu.au
Abstract
Business Process Modelling (BPM) is a way to support business processes by using several techniques, methodologies, models, and systems to design, control and analyse business processes, where many resources are used: humans, applications, technologies, organizations etc. The current existing literature describes several BPM techniques, however, these techniques are often hard for IT people to understand which is one of the reasons why IT is often unable to completely implement the desired business process. This paper aims to present a business process modelling framework that is easy for IT people to understand. A mobile phone order management process in a telecommunication company has been used as a case study to validate the proposed framework. The results indicate that: 1) BPM has a positive influence on the implementation of a system according to business expectations; 2) by considering IT at the time of BPM, this resulted in better cultural and social relationships between the business and IT staff.
Key words: Business process modelling; Information system; Business process; Case study.
1. Introduction
One of the major challenges of BPM is managing rapid changes in the business environment [1]. Hammer and Champy (1994) found that business environments can be affected by several forces, such as consumers assuming to be in the operation department of the company instead of the salesmen and strong competition with other similar firms, resulting in companies changing their business goals rapidly. Consequently, the business processes suffer [2]. BPM is an important aspect of managing business processes within companies. It provides support to the organization’s processes by using different methods, techniques and software tools to control and analyse the organizational processes and activities, which includes people, organizations, applications, documents and other related information [3], [4]. Moreover, a well-defined BPM method improves business performance by clarifying the business process including company goals, objectives, policies, and strategies [5, 6].
There are many business process modelling standards, languages, techniques, and tools in the literature, such as the Business Process Modelling Notation (BPMN) [7], Unified Modelling Language (UML) [8], Business Process Execution Language (BPEL) [9], etc. Some of the various techniques include the classification of modelling the business process [10], modelling languages for the process [11], task-based methods to construct the business model [12], modelling the process for information system design [13], etc. However, these techniques are hard for IT people to understand, which is often the reason why IT is not always able to completely implement the desired business process.
This paper aims to present a business process modelling framework that can be easily understood by IT people. We have divided our proposed framework into two modelling environments: the business modelling environment and the Information Technology modelling environment. A mobile phone order management process in a telecommunication company has been used as a case study in order to validate our proposed framework. The results show that BPM can have a positive influence on system implementation according to business expectations, as well as have a positive influence on the social and cultural relationship between IT and business staff in the organization. The remainder of this paper is organized as follows: section II describes the background of BPM; section III presents our proposed framework; section IV describes the proposed framework validation with the help of a case study; and the conclusion and future research directions are presented in section V.
2. Background
The notion of business process management is not new; it has been studied for many years. For example, in 1776, Adam Smith proposed the idea of managing labour in the manufacturing industry. He argued that a process could be divided into several sub-parts in order to make it more efficient [14]. Frederick Taylor (1911) proposed a management method known as 'time and motion' to document and analyse the work involved in business processes in order to reduce the time and the number of functions involved in any process. Proponents of the time and motion method claim that as a result of implementing this, there was an overall improvement in employees’ efficiency and the quality of the end product [15].
This concept of managing business and business activities expanded over time, and researchers began to use different terms to refer to this. In the 1960s, the term BPM was first used in the field of system engineering by S. Williams [16]. The idea of his approach was to cope with business process management issues, which helps companies produce a large volume of goods in less time, as in the 1960s, companies were aiming to increase their production in order to meet consumer demands [17]. In the 1970s, researchers introduced the concept of managing business processes automatically and proposed many techniques such as: workflow technologies, transaction process systems and manufacturing automation. In the early 1980s, the Total Quality Management (TQM) concept was introduced with several ideas, such as lean manufacturing and Six Sigma that were proposed in order to produce higher quality products with more services for a lower cost in less time [18]. The key features of TQM are that it is mainstreamed and can be successfully adapted to suit business processes in the 2000s.
In the 1990s, Business Process Reengineering (BPR) was proposed by Michael Hammer [19]. BPR aims to improve the critical measures of business performance by using IT services in order to fundamentally rethink and redesign business processes [20]. There are several tools and techniques used in BPR, such as process visualisation, process mapping/operation, change management, benchmarking and process and customers focus [21].
In the 21st century, business process management is considered to support different aspects of business processes in and between organizations, such as advanced reporting and analysis methodologies, executing business processes with workflow management, business process quality assurance and optimizing and redesigning business processes. Furthermore, companies aim to provide more high quality services in the 21st century. BPM allows organizations to abstract business processes from IT innovations as well as enabling them to modify their own business processes quickly, according to their changing requirements and customers [14].
As a result, the literature today describes many business process modelling techniques [10-13, 22-26]. For example, Aguilarsaven proposed a framework to classify different BPM standards and techniques according to their change model permissiveness and purposes [10]. Zur and Indulska used the Bunge-Wand-Weber (BWW) representation in order to compare the representation capabilities for the rule and process modelling language [11]. Shi et al. proposed a Task-Based modelling (TBM) method which defines the key verb as the basic task components in business processes in order to model processes in a construction business [12]. Barjis’ aim was to design a BPM-based transaction theory of the DEMO method and the Petri net formal semantic graphic [13]. However, these techniques have several drawbacks and limitations, the most serious being that they are difficult for IT people to understand, which is often the reason why IT is unable to completely implement the desired business process.
In other words, it is hard for IT analysts to understand the business process, which is why they often struggle to develop the information system according to the proposed business process or the business expectations. Because a process is a complex element of the business, one business process can carry more than one sub-process or business goal. Therefore, business process modelling is an important step prior to developing the process. This paper presents a business process modelling approach to model the process prior to implementing it [27].
3. Proposed Framework
BPM is a well accepted method within the business organization sector for structuring business processes. It provides support to the organization’s processes using different methods, techniques and software tools to control and analyse organizational processes and activities, which includes people, organizations, applications, documents and other related information. A successful BPM method contains three important components: model, strategy, and operations. A business model includes knowledge of how the organization creates and delivers value, and how to capture the business goals and objectives. Strategies carry rules and guidelines that fulfil all model-related elements. Operations in the business are the combination of several elements, such as people, processes and technology, where different groups of people work together to achieve the organization’s required goals with the help of information system services.
We have categorized our proposed framework into three separate parts: the business decision level, the business process modelling level and the Information Technology system goals level, where each level is made up of four business components as shown in figure 1. The business decision level consists of the business goals, the business rules, the rules measurement and the business rules analysis. The business goals are used to specify what the organizations need to achieve and when. The business rules aim to describe the operations and constraints which apply to the organization. The rules measurement is used to model and define business rules. The business rules analysis is used to assist the organizations to organize their business rules so that rules, including errors, can be identified [27].
The business process modelling level consists of the role model, the process events, the decision model and process monitoring. The role model is a technique to define the business goals. Process event is used to identify the detailed activities of the proposed process to be studied. The decision model is a model to manage and organize the business rules and logic. Process monitoring is a technique to track every individual process within the organization. The information system goals consist of the system behaviour, the business process, the system behaviour analysis, and the use case. System behaviour describes how process activities react. The business process is a set of activities acquiring one or more inputs and generating output as a value to the consumers. The system behaviour analysis is used to identify any missing information or errors in the process. The use case identifies the interaction between the system and the external actors in order to effectively achieve the business goals and objectives.
3.1. Modelling Business Environment
The modelling business environment contains two parts: the business decision and the business process modelling.
Figure 1: Proposed BPM Framework.
3.1.1. Modelling business decision
Modelling business decisions consists of the business goals, the business rules, the rules measurement and the business rules analysis. Business goals are used to represent why business processes exist and how to fulfill the organization’s mission statement. Business rules are statements about how to control the overall business behaviour. They define the operations, business constraints and definitions that apply to an organization. The business rules could be applied to people, business processes, behaviour and the information system in the organization and are put in place in order to assist organizations in achieving their goals and objectives [11]. The measurement of business rules depicts the detailed analysis of business rules. Business rule analysis is a procedure to define rules and refine their meaning.
3.1.2. Business Process Modelling
Business process modelling consists of the role model, the process events, the decision model and the process monitoring. The business role model is used to capture the business organizational value. Events in the process are things in the business that affect the sequence of the process, including activities. The decision model is a unique logical representation for business logic showing how and where it is executed. Business logic, which is the logic proposed by the business rules, represents how the business intends to make significant decisions. The decision model is used to perceive, manage and organize the business rules and logic. Business process monitoring is a method used to identify how business people can provide real-time information on the significant indicators of the business performance in order to improve the speed and effectiveness of business operations. In the process monitoring, each individual activity is tracked and thus information on the state of the process can easily be seen and statistics on the performance of the process can be presented.
In this proposed paper, we model business decisions and business processes using well accepted modelling techniques, namely I* and the UML goal tree, as shown in figure 2 and figure 3.
3.2. Modelling IT environment
The term “IT modelling environment” became popular in the mid 1990s and refers to a set of shared IT resources that work together to achieve common goals. The IT environment normally comprises two major parts: “technical” and “human”, where technical includes software, hardware, network, telecommunication, etc, and human refers to the technical skills (persons) and knowledge that is required to maintain the IT resources. In the context of organizations, business processes are increasingly becoming more and more complex every day and their goals and objectives are changing rapidly. In this situation, the IT environment needs to be flexible so that rapid changes in business goals and objectives can be managed. In this paper, we propose to model the IT environment in relation to four different components: system behaviour, the business process, system behaviour analysis and use case.
System behaviour refers to how the system should behave when the customer places a query. The business process is a set of internal organizational procedures or activities that work together to achieve an organization’s goals and objectives to meet the consumers’ expectations. It is the key element of the business where other business components, such as goals, strategies, policies etc are based. System behaviour analysis is used to identify errors in the system’s behaviour; for example are all the system’s functions working well or not? Use Case Analysis is a technique used to identify the high level requirements of a system. We begin by identifying the actors involved in using the system. We then identify all the functions each actor will be performing with the system. Each function an actor is intended to carry out with the system is a use case. Two important elements are necessary for a complete use case diagram: actor and use case, where an actor is a person, system or other external entity that interacts with the system in question and a use case is a description of a system’s intended behaviour, given an external request by an actor. A use case identifies the type of interaction with a system and the actor involved. Use cases are a fundamental feature of the UML notation for describing system models.
4. Case Study
To validate this proposed framework, a mobile phone order management process in a telecommunication company is used as a case study, where the company goal is to implement the process of registering a new customer automatically in order to save customer time and reduce the number of staff which will, in turn, have a positive effect on company revenue. The I* modelling language has been used to model the proposed business model.
The modelling framework is an agent-oriented requirements modelling language appropriate for the early phase of system modelling to understand the system’s problems. It is used for the strategic actor relationships and intentional model. This framework contains two important components: the Strategic Dependency Model (SDM) and the Strategic Rationale Model (SRM). The SDM is used to describe the network of the relationships between actors. Moreover, the SDM is a component where every node represents an actor and every link between two nodes shows that one actor is dependent on the other actor. It provides a description for the external relationships between the actors. The aim of the SDM component is to provide indications about why the business process is organized in a certain way. However, it cannot adequately support the exploration, suggestion and evaluation of other solutions for the process, which the SRM can do.
Figure 2: Telecommunication Company Process
The SRM is used to support and describe why the actors can have different ways to organize their work, such as a different configuration for Strategic Dependency networks. Moreover, the SRM has four main nodes: goal, soft goal, resource and task, and two main links, which are means-ends links and task decomposition links. It is used to model the internal relationships between actors. This model can systematically explore the area of possible new business process designs [28, 29].
Figure 2 shows how we model the proposed business process using I*. The model starts when the customer completes an online application form to order a new mobile phone and connection. After the company’s head office receives the form, they then check the information provided by the customer; if the information meets the company’s requirements, then the order is forwarded to the operational department. At this stage, the operational department creates a temporary packet which includes the order notes and sends it to the warehouse staff. The warehouse staffs are responsible for checking the availability of the mobile. If the mobile is in stock in the warehouse, they complete the packet and send the packet for shipment. However, if the mobile is not in stock, they hold the process and wait for new stock to arrive.
Figure 3: Goal Tree Model
Once the process has been modelled, then there is a need to analyse the process. As the process detailed in figure 2 only shows the business point of view, it is hard for IT people to understand the business process completely due to their lack of business knowledge. Therefore, we introduce the goal tree to analyse the process. A UML goal tree is used to analyse the mobile phone order management process, as shown in figure 3. A goal tree consists of different sets of nodes that are used to illustrate the goal. The nodes could be an operator, goal or test group nodes. The operator nodes are either a logic AND & OR operators [30, 31].
The analysis method is categorized into three main functions: (1) the place order function which is made up of three sub-process activities: “order to be processed”, “modify list” and “packet final”; (2) the mobile company system function which consists of “receive order” and “order shipped” activities; and (3) the packet ready to ship function contains two process activities: “packet complete” and “order shipped” and five sub-functions: complete packet, mobile availability, check when complete (“empty box”, “mobile” and “delivery note”), check when not complete (“wait for stock” and “delivery note”) and place delivery note in packet (“prioritise delivery notes” and “delivery notes placed”) functions.
After the process has been analysed, we then read the process in figure 3 thoroughly and identify the activities and functions which can be automated and those which are manual. Figure 4 shows the automatic and manual functions and sub-functions in our case study, where one symbol marks automatic functions and another marks manual functions, sub-functions, and activities. There are 18 automatic functions, sub-functions and activities in our case study process. However, there are 3 manual activities which are empty box, mobile, and wait for stock.
At this stage, the process is completed and analysed and is ready to derive the system goals in the form of a UML use case. The use case is a graphical description of the actions or steps involved in the business process between the users and the system. The UML behavioural diagram is used to assist IT people in developing and determining the features to implement and how to resolve errors. Use cases help to obtain the system goals, such as: what needs to be included in the proposed mobile phone order management process (demand); where the process is going to be used (location); who the process stakeholders are (users); and what the company’s deadlines are. After obtaining this information, it is then easy for IT developers to implement the process.
Figure 5 shows two use case diagrams for our case study. The first use case contains one actor, namely process start and four use cases which include “create the package”, “add delivery notes”, “confirm delivery notes” and “sort order”. The second use case contains five actors: the customer, head office, general manager, store manager and company staff. The customer actor logs on to start the order and then head office manages the customer order. Next, the store manager actor adds the notes. The general manager actor modifies the package, confirms this modification and makes the mobile shipment after the company staff actor checks the notes. By using the use case model, the system goals according to business expectations are now clear, allowing IT developers to develop the system easily.
5. Conclusion & implications
In this paper, we have proposed a business process modelling framework for IT people to better understand the business process. It contains three levels: the business decision level, the business process modelling level, and the Information Technology system goals level. The framework was validated by using the case study of mobile phone order management process in a telecommunication company. The results indicate: (1) the modelling business process has a positive influence on obtaining the systems’ goals; and (2) that this enables IT developers to implement the system according to the business’ desires, which alternately positively influences social and cultural relationships between business and IT staff.
Two major implications can be derived from the study for information system developers and business organizations. First, for developers, the study shows how system goals can be derived from the business environment which leads them to better understand the business’ demands. Second, for the business organization, it is always hard for business analysts to define business goals and objectives. This proposed framework enables business analysts to identify and analyze business goals and objectives. However, the paper has one limitation; we only tested our proposed framework on one business process. Thus, in the future, it could be possible to test our framework with more than one business process in different business sectors by using different modeling techniques, such as BPMN or ARIS etc.
References
Abstract
This draft describes the separation of service forwarding function and service delivery function abstractions, along with the mechanics of NSH encapsulated packet forwarding with such separation, in SFC deployments.
This separation frees the service functions from making forwarding decisions and the necessary control plane integration, thereby keeping the service functions simple and focused on service delivery. Further, this separation fully contains the forwarding decisions in forwarding functions, thereby allowing implementations to enforce integrity of the forwarding state carried in NSH which in turn is required for correctly forwarding NSH encapsulated packets.
Status of This Memo
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."
This Internet-Draft will expire on February 20, 2017.
Copyright Notice
Copyright (c) 2016 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
Table of Contents
1. Introduction
   1.1. Requirements Language
2. Definition Of Terms
3. NSH And Forwarding
   3.1. NSH Background
   3.2. NSH Forwarding
   3.3. NSH Forwarding Shortcomings
4. SFC Service Forwarding And Service Delivery Separation
   4.1. Forwarding Function And Service Function Separation
   4.2. NSH Infrastructure Flag
   4.3. Rules For Infrastructure Flag Usage
   4.4. Service Header Integrity Check
   4.5. SF Considerations for Reverse Service Path Packets
   4.6. SF Considerations for Spontaneous Packets
5. Infrastructure Forwarding Example
6. Infrastructure Forwarding Advantages
7. Acknowledgements
8. IANA Considerations
9. Security Considerations
10. References
    10.1. Normative References
    10.2. Informative References
Authors’ Addresses
1. Introduction
SFC involves steering user or application traffic on a service overlay network through a list of ordered service functions before forwarding onwards to its destination, in the process servicing the traffic as per policies in the service functions as well as the SFC infrastructure.
NSH is the encapsulation designed to carry SFC specific forwarding state as well as metadata relevant to service delivery. The forwarding state in the NSH dictates how to forward the encapsulated packet or frame while the metadata aids service delivery by having one SFC entity produce it and the other consume it.
NSH in its current form, as described in [I-D.ietf-sfc-nsh], blurs the lines between service delivery and service forwarding. This leads to complexities in SFC deployment and operation, as the SFC control plane has to deal with a large number of forwarding touch points, further challenging the scalability of the deployment. Requiring forwarding decisions to be made in the service functions violates operational environment policies and risks errors or unintended modification of forwarding state by the service functions.
This draft describes the separation of SFC overlay network into a service infrastructure overlay and service function overlay, thereby clearly demarking the boundaries between the two distinct architecture functions. This allows infrastructure components to create and manage the forwarding state as per control plane policy while freeing the service functions to focus on service delivery and not participate in forwarding decisions.
This draft further describes the forwarding process in SFC to achieve such separation that is not only architecturally clean but is friendly to software as well as hardware implementations of SFC infrastructure components.
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].
2. Definition Of Terms
This document uses some terms defined in SFC architecture [I-D.ietf-sfc-architecture] and NSH [I-D.ietf-sfc-nsh] drafts for ease of understanding and the reader is advised to refer to those documents for up to date and additional information.
SFC Infrastructure: A general term used to refer to, collectively, the SFC control plane entity, the classifier, the SFFs. As used freely in the rest of this document, it refers to the infrastructure data plane components - classifiers and SFFs.
Service Infrastructure Overlay: The overlay network extending between SFC infrastructure data plane components. In particular, the overlay network between the SFFs or Classifiers and SFFs.
Service Function Overlay: The overlay network extending between the SFF and SF.
3. NSH And Forwarding
3.1. NSH Background
NSH encapsulation is comprised of three parts as specified in [I-D.ietf-sfc-nsh]; namely a base header, a service path header and one or more context headers predicated on MD-type in the base header. The base and service path headers are reproduced in Figure 1 for ease of reading.
1. The base header provides the structure to NSH with code points to signal the payload carried in addition to control bits in the form of flags.
2. The service path header is the forwarding state carried in NSH and consists of a "Service Path ID" (SPI) and a "Service Index" (SI).
3. The context headers carry the metadata produced or consumed by the SFC infrastructure or the service functions.
Figure 1 reproduces the base and service path headers from [I-D.ietf-sfc-nsh], which are relevant to this discussion.
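To make the forwarding state concrete, the following minimal Python sketch packs and unpacks the service path header (a 24-bit SPI followed by an 8-bit SI). It is an illustration only, ignores the base and context headers, and is not part of the draft.

```python
import struct

def pack_service_path_header(spi: int, si: int) -> bytes:
    """Pack the NSH service path header: 24-bit SPI followed by an 8-bit SI."""
    if not (0 <= spi < 1 << 24) or not (0 <= si < 1 << 8):
        raise ValueError("SPI must fit in 24 bits and SI in 8 bits")
    return struct.pack("!I", (spi << 8) | si)

def unpack_service_path_header(header: bytes) -> tuple:
    """Return (SPI, SI) from a 4-byte service path header."""
    (word,) = struct.unpack("!I", header)
    return word >> 8, word & 0xFF

# Example: service path 42, service index 255 (an assumed initial value)
assert unpack_service_path_header(pack_service_path_header(42, 255)) == (42, 255)
```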
3.2. NSH Forwarding
Traffic requiring servicing is forwarded on a network overlay constructed using NSH and an outer transport. The forwarding of traffic on this overlay is specified by the NSH. In particular, NSH asserts the following as described in the NSH Actions section of [I-D.ietf-sfc-nsh].
1. Mandates service functions to update the service index (SI).
2. Mandates the service function forwarders to make forwarding decisions based on the contents in the service path header, namely SPI and SI.
These assertions essentially require the modification of the service path header in NSH by the SFs. The SFs are required to control the packet forwarding by making those decisions for the SFFs to use and forward NSH encapsulated packets on.
3.3. NSH Forwarding Shortcomings
NSH’s inability to separate packet forwarding and service delivery leads to the following disadvantages.
- NSH is based on a model where SFs are fully trusted to maintain the integrity of the encapsulation, thereby allowing SFFs to forward on the decision made by SFs. However, this may not be acceptable in all network environments. Strict infrastructure and application boundaries in operators’ environments essentially disallow such a method of packet forwarding.
- Since forwarding decisions are made at SFs, including non-classifier SFs, by way of SI updates in NSH, the control plane has to program the SFs with information to aid such updates. This includes the SI updates corresponding to each SPI the SF belongs to. This approach impacts scalability as the number of SFs are significantly greater in number as compared to SFFs in a typical deployment.
- Since non-classifier SFs require forwarding information programmed as described above, and the SFs can be from any vendor or third party, in some cases with their own control and management planes, programming both SFs and SFC infrastructure leads to increased control plane complexity. This in turn impacts the scalability of SFC deployments, weakening the SFC architecture.
- Since service forwarding at the SFFs is based solely on the SPI and SI fields in the NSH encapsulation header, SFFs are left vulnerable to forwarding on decisions not made by themselves but by the SFs, as listed in the assertions above. If an SF is buggy or compromised, or manipulates the service index incorrectly, many issues can result, including packet loops (when an SF does not update the SI) and bypassed SFs (when an SF over-decrements or over-increments the SI).
- Forwarding decisions at the SFF cannot be avoided entirely, as many use cases require that decision to be performed in the SFF, making the model inconsistent. For instance, when flows are offloaded to the SFF by an SF, as described in [I-D.kumar-sfc-offloads], the SFF MUST update the service path header because the SF will be bypassed.
- By inspecting the service path header in NSH on the wire, it is not possible to determine which service function the packet is associated with and where along the path it is at any moment, because SFs update the SI and hence the service path header. For instance, the SI inside the NSH of a packet being serviced points to one SF while in flight from SFF to SF and to another while in flight from that same SF back to the SFF. In other words, additional context is required to make such an assertion, making troubleshooting cumbersome.
4. SFC Service Forwarding And Service Delivery Separation
This section describes the separation of the forwarding and service delivery functions into separate abstract planes in the context of NSH but is generally applicable to the SFC architecture. Figure 2 depicts the separation of the service plane into service infrastructure and service function overlays. For the sake of simplicity SFC Proxy function is not shown as it is equivalent of an NSH aware SF in its handling of NSH encapsulation. The network connectivity between Classifier and SFF or SFFs or SFF and SF (or SFC Proxy) can be any network overlay over which NSH encapsulation can be transported, such as [I-D.kumar-sfc-nsh-udp-transport].
4.1. Forwarding Function And Service Function Separation
- We propose the separation of forwarding and servicing planes in NSH encapsulation to render the SFC architecture cleanly. This enables forwarding from Classifier to SFF or one SFF to another or SFF to SF (or SFC Proxy) to be fully owned and controlled by the service chaining infrastructure while service delivery is the sole responsibility of SFs. This allows for scaling the service plane independent of the forwarding plane while avoiding forwarding conflicts that may otherwise arise. In other words, SFC forwarding is fully controlled by the SFFs and any forwarding-state carried in NSH, be in NSH service context header or metadata context header, is fully opaque to the SFs.
- We propose the overlay network be separated into infrastructure overlay and the service overlays as depicted in Figure 2. Infrastructure overlay extends between SFFs or Classifier and SFFs while the service overlay extends between SFFs and SFs.
- We propose that only the SFFs and Classifiers update the service path header. This restriction limits the forwarding decisions to SFC infrastructure components. These steps make the service path header (or SPI and SI) opaque to SFs and immutable as it passes through an SF. Since SFs performing re-classification do so within the purview of the SFC control plane, re-classification SFs are an exception to maintaining this immutable property and are allowed to update the service path header.
- We propose that the update operation on the service index in the NSH service path header at the SFFs be controlled by the presence of a signal or flag that indicates whether the packet is on the infrastructure overlay or the service overlay. Section 4.2 describes the allocation specifics in NSH to achieve this, which is software- as well as hardware-friendly.
- We further propose that SFFs verify the integrity of the service path header every time a NSH packet is received from a SF. A simple approach to such verification is described in Section 4.4.
4.2. NSH Infrastructure Flag
Figure 3 shows the format of the NSH encapsulation with the I flag in the base header.

I Bit: Infrastructure overlay flag

   I = 1: Packet or frame is on the infrastructure overlay
   I = 0: Packet or frame is on the service-function overlay
4.3. Rules For Infrastructure Flag Usage
The 'I' flag acts as a discriminator identifying the sender of the NSH encapsulated packet as SFC infrastructure component or service function. This becomes essential architecturally, as the same interface at a SFF may receive NSH encapsulated packets from both the SFC infrastructure components and the service functions.
The following rules MUST be observed by the SFC components in updating the 'I' flag and the service path header in NSH encapsulation header.
1. Classifier MUST set the 'I' flag to '1' when sending the NSH encapsulated packet or frame to the next SFF
2. SFF MUST set the 'I' flag to '1' when sending the NSH encapsulated packet to the next SFF
3. Classifier and SFF MUST set the 'I' flag to '0' in all other circumstances when forwarding an NSH encapsulated packet
4. SF and SFC Proxy MUST NOT set the 'I' flag
5. SFF MUST update the Service Index in NSH, before making the next forwarding decision, only when a packet with NSH is received with the 'I' flag set to '0'.
In addition to the above rules guiding the use of the 'I' flag, the following constraints must be met; a forwarding sketch combining the rules and constraints follows this list.
- When more than one classifier exists in the deployment, all classifiers MUST adhere to the above rules.
- Non-classifier SFs MUST NOT update the NSH service path header
- Control plane or static configuration at SFs and SFFs (outside the scope of this draft) MUST control the use of I flag and the overall behavior described in this draft. This is recommended as the default behavior of SFFs and SFs.
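The following Python sketch illustrates how an SFF might apply rules 2, 3 and 5 above. It is illustrative only: `lookup_next_hop` stands in for a control-plane-programmed table (an assumption, not something defined by this draft), and error handling, integrity checks and the end-of-chain case are omitted.

```python
from dataclasses import dataclass

INFRA_OVERLAY = 1      # 'I' = 1: packet is on the infrastructure overlay
SERVICE_OVERLAY = 0    # 'I' = 0: packet is on the service-function overlay

@dataclass
class NshPacket:
    i_flag: int
    spi: int
    si: int
    payload: bytes

def sff_forward(pkt: NshPacket, lookup_next_hop):
    """Apply the 'I'-flag rules at an SFF.

    lookup_next_hop(spi, si) returns ("sf", address) or ("sff", address).
    """
    # Rule 5: update (decrement) the SI only for packets received with I = 0.
    if pkt.i_flag == SERVICE_OVERLAY:
        pkt.si -= 1

    kind, address = lookup_next_hop(pkt.spi, pkt.si)
    if kind == "sf":
        # Rule 3: packets handed to an SF travel on the service-function overlay.
        pkt.i_flag = SERVICE_OVERLAY
    else:
        # Rule 2: packets handed to the next SFF travel on the infrastructure overlay.
        pkt.i_flag = INFRA_OVERLAY
    return address, pkt
```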
4.4. Service Header Integrity Check
The separation of service function and forwarding function responsibilities with respect to forwarding state, allows the service function forwarders to enforce integrity checks to verify the immutable aspect of the service path header. Implementations are recommended to use an appropriate method to verify the integrity of the service path header.
There are many approaches to performing the integrity checks; the actual method is out of scope for this document. A few methods are briefly summarized here as examples, and implementations must devise their own.
- Every NSH packet received from a SF, ‘I=0’ in NSH base header, is checked against the three tuple: <SF-Transport-Address, SPI, SI> programmed in the SFF, by the control plane, for that SF. This method is simple and works well when a SF appears only once across all service paths.
- SFFs compute a hash of an n-tuple or a pseudo header and transport this hash, as opaque metadata in NSH, through the SFs on a service path. When the SFF receives the opaque metadata back, after the packet has been serviced, it re-computes the hash of the same n-tuple and checks it against the hash received in NSH. The n-tuple may include the inner payload, the outer transport, the service path header and SFF local data, among others. Implementations must determine the n-tuple based on the SFC deployment requirements.
- SFFs that are stateful, use flow state to record SPI and SIs and validate the same when the packet is received back from a SF. This works well as long as an SF appears only once in a given SPI. If multiple instances of the same SF within the same SPI is needed, additional check to protect the SI must be used.
- As a generalized approach, control plane programs a mask to be applied to the NSH header to select the bits to perform integrity checks against. In the simplest case, the mask represents just the service path header.
These methods do not protect against threats such as packet replay or spoofing attacks, which do not violate the integrity of the service path header. These methods protect only against modification of the NSH service path header accidentally or otherwise thus ensuring the integrity of the same.
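As one concrete illustration of the hash-based option above, the sketch below computes a keyed digest over an assumed n-tuple of (service path header, SF transport address). The choice of n-tuple, the key management and the truncation length are all assumptions for the example, not requirements of this draft.

```python
import hashlib
import hmac
import struct

def path_header_digest(key: bytes, spi: int, si: int, sf_address: bytes) -> bytes:
    """Keyed digest over a simple n-tuple: (SPI, SI, SF transport address)."""
    n_tuple = struct.pack("!I", (spi << 8) | si) + sf_address
    return hmac.new(key, n_tuple, hashlib.sha256).digest()[:16]

def verify_path_header(key: bytes, spi: int, si: int,
                       sf_address: bytes, received_digest: bytes) -> bool:
    """SFF-side check when a packet comes back from an SF (I flag = 0)."""
    expected = path_header_digest(key, spi, si, sf_address)
    return hmac.compare_digest(expected, received_digest)
```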
4.5. SF Considerations for Reverse Service Path Packets
Service functions are essentially applications servicing the traffic steered to them over NSH. Some service functions simply service the traffic received, by transmitting every packet along the path, in the same direction as received, after servicing it. Some service functions on the other hand, for instance service functions that act as proxies, often terminate the TCP flows locally before re-originating them towards the ultimate destination. Termination of a TCP flow locally at the SF requires completion of the TCP handshake, which further requires responding to the first sign of life packet or the TCP SYN packet.
SFs must be able to generate an NSH payload packet, in response to one received from an SFF, that flows in the opposite direction of the received payload packet. These response packets must thus traverse the service chain in the reverse direction of the one received from the SFF. However, NSH has provision to carry only one service path in the service path header; an SFF cannot convey both the forward and the reverse SPIs to the SFs to enable SFs to use the reverse SPIs in such scenarios. SFs that need to send a packet on the reverse service path must thus know how to fill the service path header with the correct SPI and SI. One approach is to have the control plane provision such information for SFs to use. However, this requires SFs to integrate with the control plane, leading to all the issues discussed in Section 3.3. Moreover, as discussed in this draft, SFs benefit from focusing on service delivery while leaving the service forwarding decisions to SFFs.
This draft requires the service path header to be not updated by non-classifier SFs. In order to enable the SFs to send packets on reverse path while not modifying the service path header, we propose the SF request the SFF to move the packet to the appropriate service path. This is achieved by the use of the critical flag in NSH Base Header and a critical flag in the context header or TLV as shown in Figure 4 and Figure 5.
SF that wants to send a packet on the reverse path MUST insert a new CriticalFlags TLV and set the ‘C’ flag in the NSH Base Header to 1, in case of MD Type-2. The SF MUST set the ‘B’ flag to 1 to request forwarding of the packet on the reverse path.
SFF that receives a NSH packet with ‘B’ flag set to 1 in case of MD Type-1 or ‘C’ and ‘B’ flags set in case of MD Type-2 MUST transition the packet to the reverse path associated with the service path in the received NSH service path header. This transitioning involves SFF updating the NSH service path header to the right SPI and SI based on SFF configuration, policy or state.
```
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|R|R|B|R|B|R|R|R| Context Header 1 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Context Header 2 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Context Header 3 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Context Header 4 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
B : Backwards Flag;
SF requests packet to be sent on the reverse service path
```
Figure 4: Reverse Path Request Messages with NSH Type-1

Figure 5: Reverse Path Request Messages with NSH Type-2
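A minimal sketch of the reverse-path request mechanism follows, reusing the illustrative NshPacket from the Section 4.3 sketch with assumed b_flag and c_flag fields and a hypothetical control-plane mapping reverse_path(spi); none of these names come from the draft.

```python
def sf_request_reverse_path(pkt):
    """SF side: mark a locally generated response for reverse-path forwarding.

    The SF leaves the service path header (SPI, SI) untouched.
    """
    pkt.b_flag = 1    # request forwarding on the reverse service path
    pkt.c_flag = 1    # MD Type-2: signal the presence of a critical TLV
    return pkt

def sff_handle_reverse_request(pkt, reverse_path):
    """SFF side: transition the packet to the reverse service path.

    reverse_path(spi) is a hypothetical control-plane mapping from a forward
    SPI to the (SPI, SI) of the associated reverse path.
    """
    if pkt.b_flag == 1:
        pkt.spi, pkt.si = reverse_path(pkt.spi)
        pkt.b_flag = 0
    return pkt
```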
4.6. SF Considerations for Spontaneous Packets
SFs in most cases process and service NSH packets inline and in direction as a response to receiving the NSH packet from a SFF. However, some SFs generate packets in the middle of a flow spontaneously without receiving any NSH packet from a SFF. This is typical in SFs terminating TCP or proxies that need to act on a timer or an internal event.
In order for SFF to process these spontaneous packets, the SFs MUST encapsulate them in NSH, which in turn requires the service path header to be filled with the right SPI and SI.
Stateful SFs MUST cache the service path header in the flow state, received in each direction from the SFF, and use the appropriate cached service path header in the NSH encapsulation for sending spontaneous packets. SFs MUST treat the service path header as opaque metadata while caching or encapsulating with NSH.
SFs that have no flow-state MUST host a classifier or interact with one to obtain the right content for the service path header.
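The flow-state caching behaviour can be pictured as in the sketch below, where the cached service path header is treated as an opaque byte string keyed by flow and direction; all names are illustrative.

```python
# Per-flow cache of opaque service path headers, keyed by (flow_id, direction).
_sph_cache: dict = {}

def on_nsh_packet_from_sff(flow_id, direction, service_path_header: bytes):
    """Record the opaque service path header seen for this flow and direction."""
    _sph_cache[(flow_id, direction)] = service_path_header

def encapsulate_spontaneous_packet(flow_id, direction, payload: bytes) -> bytes:
    """Reuse the cached opaque header when the SF originates a packet mid-flow."""
    header = _sph_cache[(flow_id, direction)]  # a stateless SF would consult a classifier instead
    return header + payload
```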
5. Infrastructure Forwarding Example
Figure 6: SFC Infrastructure Overlay Separation Example
This section outlines a typical packet flow to show the working of this behavior through an example SPI1 = SFa@SFF1, SFb@SFF1, SFc@SFF2 with the topology depicted in Figure 6.
1. Packet enters the system via the SFC ingress network (net1) reaching the classifier function
2. Classifier determines the SPI and SI as part of classification
3. Classifier formulates the NSH infrastructure overlay packet, sets the ‘I’ flag among other header updates and forwards it onwards to SFF1
4. SFF1 receives the NSH infrastructure overlay packet, skips the decrement operation due to I=1, and performs a forwarding lookup to determine the next-hop
5. SFF1 determines SFa as the next-hop, formulates the NSH service overlay packet, clears the ‘I’ flag among other header updates and forwards it onwards to SFa
6. SFa services the packet by consuming metadata or producing metadata and forwards the packet back to SFF1
7. SFF1 receives the NSH service overlay packet and decrements the SI, due to I=0, before performing a forwarding lookup
8. SFF1 determines the next-hop as SFb and the process repeats with SFb as before with SFa, with I=0
9. SFF1 receives the SFb serviced packet, decrements the SI due to I=0 and determines the next-hop to be SFc. It sets I=1 and forwards the packet to SFF2 on NSH infrastructure overlay.
10. SFF2 receives the packet from SFF1 and repeats the process through SFc similar to the steps for SFa performed by SFF1, by setting I=0
11. SFF2 receives the SFc serviced packet, decrements the SI due to I=0 and determines SPI1 is fully executed and proceeds with forwarding on the SFC egress network (net2)
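The walk-through above can be summarized, under the same illustrative assumptions as the earlier forwarding sketch, as a pair of per-SFF next-hop tables; the starting service index of 255 is an assumption for the example, not a value mandated by the draft.

```python
# Per-SFF next-hop tables for SPI1 = SFa@SFF1, SFb@SFF1, SFc@SFF2.
SFF1_TABLE = {
    (1, 255): ("sf", "SFa"),    # step 5: first hop on the service overlay
    (1, 254): ("sf", "SFb"),    # step 8: SI decremented after SFa returned the packet
    (1, 253): ("sff", "SFF2"),  # step 9: hand over on the infrastructure overlay
}
SFF2_TABLE = {
    (1, 253): ("sf", "SFc"),    # step 10: SFF2 did not decrement (packet arrived with I=1)
    (1, 252): ("net", "net2"),  # step 11: SPI1 fully executed, forward on the egress network
}
```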
6. Infrastructure Forwarding Advantages
The following are some of the benefits of separating the SFC overlay into infrastructure overlay and service function overlay.
- Constrains the SFC forwarding decisions to SFFs where it belongs, providing meaning to the last ‘F’ in ‘SFF’
- Frees the SFs to focus purely on service delivery and avoid complexities associated with forwarding decisions
- Enables validation of forwarding state carried in NSH, thereby maintaining the integrity of the forwarding state used to forward the packets along the service path. This removes issues arising from incorrect updates to service path header by SFs, accidentally or otherwise
- Allows the service index in NSH packet to be always associated with the service function as indicated by the service index, whether the packet is in transit from SFF to the SF or from SF to the SFF
- Allows additional security policies to be enforced between the infrastructure and the service functions by the network operators
- Allows snooping tools or any type of middle boxes to clearly tell whether the NSH encapsulated packet is going between SFFs or between SFF and SF (without relying on the source and destination locators), due to the ‘I’ flag, which is useful in tracing and debugging, especially in cloud deployments
- Allows the service chaining control plane to scale independent of the number of service functions
7. Acknowledgements
The authors would like to thank Abhijit Patra for his guidance and Mike Leske for his review comments.
8. IANA Considerations
IANA is requested to allocate a "STANDARD" class from the TLV Class registry as already requested in [I-D.kumar-sfc-offloads].
IANA is requested to allocate TLV type with value 0x2 from the STANDARD class TLV registry. The format of the "CriticalFlags" TLV is as defined in this draft, which can further be extended to define new flags in other drafts.
Table 1: New TLV in Standard Class Registry

   TLV#  Name      Description                                    Reference
   ----  --------  ---------------------------------------------  -------------
   2     Critical  Flags that are critical to SFC functionality   This document
9. Security Considerations
Separating forwarding decisions from service functions allows for additional constraints to be enforced by the infrastructure controlling the forwarding decisions. This separation enables additional security methods in the infrastructure and does not itself mandate any new security considerations.
10. References
10.1. Normative References
[I-D.ietf-sfc-nsh]
[I-D.kumar-sfc-offloads]
10.2. Informative References
[I-D.ietf-sfc-architecture]
[I-D.kumar-sfc-nsh-udp-transport]
Authors’ Addresses
Surendra Kumar
Cisco Systems, Inc
170 W. Tasman Dr.
San Jose, CA 95134
US
Email: [email protected]
Kent Leung
Cisco Systems, Inc
170 W. Tasman Dr.
San Jose, CA 95134
US
Email: [email protected]
Peter Bosch
Cisco Systems, Inc
Haarlerbergpark Haarlerbergweg 13-19
Amsterdam, NOORD-HOLLAND 1101 CH
Netherlands
Email: [email protected]
Dongkee Lee
SK Telecom
9-1 Sunae-dong, Pundang-gu
Sungnam-si, Kyunggi-do
South Korea
Email: [email protected]
Rajeev Manur
Broadcom
Email: [email protected]
First-Order Theory Revision
Bradley L. Richards
Dept. of Computer Sciences
University of Texas at Austin
[email protected]
Raymond J. Mooney
Dept. of Computer Sciences
University of Texas, Austin
[email protected]
Abstract
Recent learning systems have combined explanation-based and inductive learning techniques to revise propositional domain theories (e.g., EITHER, RTLS, KBANN). Inductive systems working in first order logic have also been developed (e.g., CIGOL, FOIL, FOCL). This paper presents a theory revision system, Forte, that merges these two developments. Forte provides theory revision capabilities similar to those of the propositional systems, but works with domain theories stated in first-order logic.
1 INTRODUCTION
The past few years have seen a merger of inductive and explanation-based capabilities into a new class of systems performing theory revision. The premise of theory revision is that we can obtain a domain theory, be it from a book or an expert, but we cannot expect that theory to be entirely complete or correct. Theory revision systems use a set of training instances to improve the theory.
Forte (First-Order Revision of Theories with Examples) is a theory revision system for first-order logic. Theories are stated in a restricted form of Prolog, and a training set is used to identify where faults in the theory may lie. Forte uses operators drawn from propositional theory revision, first-order inductive systems, and inverse resolution to develop possible theory revisions.
2 RELATED WORK
2.1 PROPOSITIONAL THEORY REVISION
There are a number of propositional theory revision systems, including RTLS ([Ginsberg, 1990]), KBANN ([Towell, Shavlik, and Noordewier, 1990]), and EITHER ([Ourston and Mooney, 1990]). RTLS translates a theory into an operational form for use by an inductive learner, and translates it back into the theory language after modification. KBANN translates the initial theory into a neural network, and revises the network using standard neural network techniques. However, extracting a revised theory from the trained network is the subject of ongoing research. EITHER revises the theory directly, and in this sense is the most similar to Forte. However, all of these systems are limited to propositional domains. While most finite problem domains can be expressed in propositional form, doing so may greatly increase their size and reduce their understandability.
2.2 FIRST-ORDER LEARNING
Stephen Muggleton first pointed the way to first-order theory revision with his propositional system Duce ([Muggleton, 1987]). Duce uses six theory revision operators, which Muggleton later grouped under the heading inverse resolution. Duce takes advantage of the case with which resolution steps can be reversed in propositional logic; if we know the resolvent and either the goal or the input clause, we can abduce the missing element. Duce uses an oracle to verify its operations.
From Duce it was a short but important step to CIGOL, a related system working in first-order logic ([Muggleton and Buntine, 1988]). CIGOL performs inverse resolution in first-order logic, but it assumes that all input clauses are unit clauses and, like DUCE, it depends on an oracle. Forte uses inverse resolution operators, but without these limitations.
[Quinlan, 1990] describes FOIL, an inductive learning system working in first-order logic. FOIL works by generalization, constructing a set of Horn clauses that cover the positive examples while excluding the negative ones. FOCL, described in [Pazzani, Brunk, and Silverstein, 1991], extends FOIL by using an input theory to guide and augment the search process. Clauses and portions of clauses from the input theory are considered for addition to the rules under development by FOIL. If adding a fragment of the input theory provides more information gain than adding a newly created antecedent, the term from the theory is chosen. Thus, a good input theory provides a substantial boost to the learning process. The primary difference between FOCL and Forte is that FOCL uses the input theory as an aid to the learning process, whereas Forte performs true theory revision.
3 PROBLEM DEFINITION
Our objective is to create a system that performs theory revision in first-order logic. The paragraphs below define our terminology, provide a more formal statement of our objective, and describe the restrictions placed on our prototype implementation.
3.1 THEORY
A theory, $T$, is a Prolog program without cuts.
3.2 ASSERTION
An assertion is a predicate corresponding with the consequent of one or more clauses in the theory. If an assertion is given that does not correspond to a clause in the theory, this indicates that a rule corresponding to the assertion needs to be added to the theory.
3.3 EXAMPLE
An example is a set of related instances that share a common set of facts. An example consists of a set of facts, $F$, a set of positive ground assertions, and a set of negative ground assertions. A fact is a ground atom corresponding to a predicate that may appear as an antecedent to clauses in the theory. A ground assertion is an instantiation of an assertion together with a boolean value indicating whether or not the ground assertion would be provable using a correct theory.
3.4 INSTANCE
An instance is a ground assertion with its associated truth value plus the facts associated with the example from which the ground assertion came. Instances that should be provable are positive instances, and instances that should not be provable are negative instances. Given a set, $P$, of positive instances and a set, $N$, of negative instances, we say theory $T$ is correct on these instances if
$$\forall p \in P,\; T \cup F \vdash p$$
$$\forall n \in N,\; T \cup F \nvdash n$$
A training set $P \cup N$ is consistent if $P \cap N = \emptyset$. A theory cannot be correct on an inconsistent training set.
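A minimal sketch of these definitions follows, assuming a hypothetical proves(theory, facts, assertion) predicate standing in for a Prolog-style prover; the names are invented for illustration.

```python
def is_correct(theory, facts, positives, negatives, proves):
    """Theory T is correct on the instances if it proves every positive
    ground assertion and no negative one, given the example facts F."""
    return (all(proves(theory, facts, p) for p in positives) and
            not any(proves(theory, facts, n) for n in negatives))

def is_consistent(positives, negatives):
    """A training set is consistent if no assertion is both positive and negative."""
    return set(positives).isdisjoint(negatives)
```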
3.5 OBJECTIVE
Given an initial theory and a consistent set of instances, produce an "appropriately revised" theory that is correct on the given instances.
3.6 DISCUSSION
We say that a theory is appropriately revised if it meets certain heuristic criteria, namely
- A revised theory should be as similar as possible to the initial theory, both semantically and syntactically.
- A revised theory should be as simple as possible.
- A revised theory should make meaningful generalizations from the input instances, so that it will be as accurate as possible on instances that did not appear in the training set.
3.7 RESTRICTIONS
The initial version of Forte does not allow recursion or negation in its theories, and it is vulnerable to local maxima, which means that it does not always generate a theory that is correct on the training set. Lifting the theory restrictions, limiting or eliminating Forte's susceptibility to local maxima, and providing a more formal definition of an "appropriately revised" theory are the primary goals of our ongoing research.
4 SYSTEM DESCRIPTION
Forte uses a training set to identify and correct errors and omissions in the given domain theory. It chooses, if they exist, one positive instance that is unprovable and one negative instance that is provable and proposes revisions to the theory that correct one of these errors. Each revision is evaluated globally, to see what its effect is on the theory's overall accuracy on the training set. The best revision is implemented, and the system chooses another pair of improperly classified instances.
Classes or categories in Forte are assertions that are to be proven using the domain theory. Training examples may include both attribute and relational information. Objects in training examples may be many-sorted, e.g., a domain might include both birds and bicycles, each with their own set of attributes.
The outermost layer of the program is a relatively simple iterative shell that calls the theory revision operators, evaluates the revisions they propose, and implements the best such revision.
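A rough, hypothetical illustration of such a shell is given below; the names and the scoring interface are invented for the sketch and do not reflect Forte's actual implementation.

```python
def revise_theory(theory, instances, operators, accuracy, misclassified):
    """Hill-climbing shell: repeatedly implement the single best revision.

    operators     : functions mapping (theory, misclassified instance) to a
                    list of candidate revised theories
    accuracy      : scores a theory against the full training set
    misclassified : returns the instances the theory currently gets wrong
    """
    while True:
        wrong = misclassified(theory, instances)
        if not wrong:
            return theory                              # correct on the training set
        candidates = [revision
                      for op in operators
                      for inst in wrong[:2]            # a couple of misclassified instances per pass
                      for revision in op(theory, inst)]
        if not candidates:
            return theory
        best = max(candidates, key=lambda t: accuracy(t, instances))
        if accuracy(best, instances) <= accuracy(theory, instances):
            return theory                              # stuck in a local maximum
        theory = best
```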
Theory revision operators come from a variety of sources. Simple ones, like delete-rule, are drawn from propositional theory revision systems. Operators for adding and deleting antecedents are based on two separate derivatives of FOIL. And operators for modifying intermediate rules are drawn from inverse resolution.
Operators can have three effects on a theory: specialization, generalization, and compaction. If a positive instance is unprovable then the theory needs to be generalized, whereas if a negative instance is provable the theory needs to be specialized. Forte also compacts (simplifies) the theory when doing so does not degrade accuracy on the training set. Operators implemented in Forte, and their effects, appear in Table I.
Note that, in several cases, Table I shows that an operator can be used both to compact the theory and to generalize or specialize it. In these cases, there are actually two versions of the operator. While conceptually similar, they work with different information toward different goals.
Add antecedent (FOIL-based). If a negative example is provable, the proof may be forced to fail by specializing the theory. Each rule used in the proof is passed to a derivative of FOIL, along with sets of positive and negative instances. FOIL finds antecedents that distinguish between the positive and negative instances and adds these to the rule. If necessary, several rules will be added, each covering a portion of the positive instances.
Delete rule. If a negative example is provable, each of the rules used in the proof is considered for deletion from the theory. When used in compaction, a rule is deleted if doing so does not reduce the accuracy of the theory.
Delete antecedent (Inverse FOIL-based). If a positive instance is unprovable, each failing clause in the attempted proof is considered by delete-antecedent. This operator depends on a conceptual derivative of FOIL, called Inverse FOIL (IFOIL). Sets of positive and negative instances are passed to IFOIL, which deletes antecedents to create a rule allowing proof of some or all of the positive instances, but none of the negative ones. If necessary, IFOIL will create multiple rules to cover all positive instances.
Add rule (FOIL-based). If a positive instance is unprovable, each failure point in its proof is considered for add-rule. The failing clause is copied, with the failing antecedent deleted. If this allows the instance to be proven, FOIL is called to add any new antecedents that are required to keep negative instances from also becoming provable.
Identification (inverse resolution). Identification constructs an alternate definition for an antecedent identified in a failure point. It develops an alternate definition by performing an inverse resolution step using two existing rules in the domain theory. For example, suppose we need an alternate definition for predicate x, and we have the following two rules in the domain theory:
\[
a \leftarrow b, x
\]
\[
a \leftarrow b, c, d
\]
Identification will replace these two rules with the logically equivalent pair:
\[
a \leftarrow b, x
\]
\[
x \leftarrow c, d
\]
While this has no effect on the deductive closure of these rules alone, we have now introduced a new definition of x into the theory, which may allow our positive example to be proven. In first order logic, unification substantially complicates the picture, but the basic concept remains the same.
When used in compaction, identification seeks pairs of rules where it can construct definitions of intermediate predicates as shown. These changes are implemented if they reduce the size of the theory without reducing its accuracy.
Absorption (inverse resolution). Absorption is the complement of identification. Rather than constructing new definitions for intermediate predicates, absorption seeks to allow existing definitions to come into play. Suppose predicate c in the rule below is a failure point:
\[
a \leftarrow b, c, d \qquad (1)
\]
Now suppose our domain theory contains the following rule, as well as other rules with consequent x:
\[
x \leftarrow c, d
\]
In this case, absorption would replace rule (1) with the new rule
\[
a \leftarrow b, x
\]
thereby possibly allowing alternate definitions of x to be used when proving a.
In compaction, absorption makes the same kind of modifications to rules, trying to reduce the size of the theory without adversely affecting its accuracy.
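To make the propositional forms of these two operators concrete, here is a small illustrative sketch over clauses represented as (head, antecedent-list) pairs; it deliberately ignores the first-order unification issues noted above and is not part of Forte.

```python
def identification(rule_with_x, rule_expanded, x):
    """Propositional identification: from  a <- b, x  and  a <- b, c, d
    derive the new definition  x <- c, d."""
    head1, body1 = rule_with_x          # ("a", ["b", "x"])
    head2, body2 = rule_expanded        # ("a", ["b", "c", "d"])
    assert head1 == head2 and x in body1
    shared = [lit for lit in body1 if lit != x]
    return (x, [lit for lit in body2 if lit not in shared])   # ("x", ["c", "d"])

def absorption(rule, definition):
    """Propositional absorption: given  a <- b, c, d  and  x <- c, d,
    rewrite the rule as  a <- b, x."""
    head, body = rule                   # ("a", ["b", "c", "d"])
    x, x_body = definition              # ("x", ["c", "d"])
    if all(lit in body for lit in x_body):
        body = [lit for lit in body if lit not in x_body] + [x]
    return (head, body)                 # ("a", ["b", "x"])
```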
Delete Antecedent (ordinary). The delete antecedent operator based on Inverse FOIL may be unable to develop a revision that excludes all negative instances. However, deleting an antecedent may still improve performance on the training set. Hence, this operator independently considers antecedents identified in failure points for deletion from the theory. When used in compaction, this operator will delete an antecedent if doing so does not degrade the performance of the theory on the training set.
5 RESULTS
In this section we present results showing Forte's learning performance on the family domain used in [Quinlan, 1990] to test FOIL. This gives us a basis for comparison to a first-order inductive learner. Readers familiar with FOIL should note that Forte's instance-based representation is substantially different from the tuple representation used by FOIL. FOIL used the equivalent of 2400 training instances to achieve 97.5% accuracy on this data. Forte's learning performance, both with and without an initial theory, is shown in Figures 1 and 2.
Training sets given to Forte were randomly selected from a database that included all 112 positive instances and 272 negative instances, which were chosen as being those closest to being provable. These are, in essence, the most useful negative instances to the theory revision process. FOIL, in its representation, had the equivalent of all positive instances and all negative instances that share their base constant with a positive instance (e.g., if John has an uncle, then FOIL would receive all negative instances of the form uncle(X, john)).
With no initial theory, Forte averaged 83% accuracy with 150 training instances, and improved slowly thereafter. With no training, we can reach 71% accuracy by guessing all instances to be negative. The initial fall-off in accuracy seen in Figure 1 reflects the fact that, with fewer than 75 instances, we do not have enough data for meaningful learning across twelve concepts. The training set performance shows that Forte is being caught in local maxima; in fact, with more than 75 instances, Forte rarely achieves 100% accuracy on the training set.
With an initial theory, Forte’s performance improves dramatically. The given theory begins with an accuracy of 83%. Forte can completely correct the theory with as few as 120 training instances, and it rarely falls into local maxima. By the time Forte has seen 150 instances (an average of 12.5 per concept), training and test accuracies have nearly converged.
5.1 Revised theory
The theory below is the initial theory given to Forte in Figure 2. It contains multiple faults, including missing and added rules, missing and added antecedents, and incorrect antecedents. Added and incorrect items are shown in italics, and missing items are struck out.
\[
\begin{align*}
\text{wife}(X, Y) & : \text{gender}(X, \text{female}), \text{married}(X, Y), \\
\text{husband}(X, Y) & : \text{gender}(X, \text{male}), \text{married}(X, Y), \\
\text{mother}(X, Y) & : \text{gender}(X, \text{male}), \text{parent}(X, Y), \\
\text{father}(X, Y) & : \text{gender}(X, \text{male}), \text{parent}(X, Y), \\
\text{daughter}(X, Y) & : \text{gender}(X, \text{female}), \text{parent}(X, Y), \\
\text{son}(X, Y) & : \text{gender}(X, \text{male}), \text{parent}(X, Y), \\
\text{sister}(X, Y) & : \text{gender}(X, \text{female}), \text{parent}(X, Y), \\
\text{brother}(X, Y) & : \text{gender}(X, \text{male}), \text{parent}(X, Y), \\
\text{uncle}(X, Y) & : \text{parent}(X, Y), \\
\text{aunt}(X, Y) & : \text{gender}(X, \text{female}), \text{uncle}(X, Y), \\
\text{nieces}(X, Y) & : \text{gender}(X, \text{female}), \text{sibling}(X, Y), \\
\text{nephews}(X, Y) & : \text{gender}(X, \text{male}), \text{uncle}(X, Y), \\
\text{niece}(X, Y) & : \text{gender}(X, \text{female}), \text{niece}(X, Y), \\
\text{neice}(X, Y) & : \text{gender}(X, \text{female}), \text{uncle}(X, Y), \\
\text{nephew}(X, Y) & : \text{gender}(X, \text{male}), \text{uncle}(X, Y), \\
\text{aunt}(X, Y) & : \text{gender}(X, \text{female}), \text{aunt}(X, Y), \\
\text{aunt}(X, Y) & : \text{married}(X, A), \text{sibling}(X, C), \text{parent}(C, Y), \\
\text{sibling}(X, Y) & : \text{parent}(A, X), \text{parent}(A, Y), X = Y.
\end{align*}
\]
Using 120 instances, Forte produced the correctly revised theory below, where additional compactions are shown in italics.
\[
\begin{align*}
\text{wife}(X, Y) & : \text{gender}(X, \text{female}), \text{married}(X, Y), \\
\text{husband}(X, Y) & : \text{gender}(X, \text{male}), \text{married}(X, Y), \\
\text{mother}(X, Y) & : \text{gender}(X, \text{female}), \text{parent}(X, Y), \\
\text{father}(X, Y) & : \text{gender}(X, \text{male}), \text{parent}(X, Y), \\
\text{daughter}(X, Y) & : \text{gender}(X, \text{female}), \text{parent}(X, Y), \\
\text{son}(X, Y) & : \text{gender}(X, \text{male}), \text{parent}(X, Y), \\
\text{sister}(X, Y) & : \text{gender}(X, \text{female}), \text{parent}(X, Y), \\
\text{brother}(X, Y) & : \text{gender}(X, \text{male}), \text{parent}(X, Y), \\
\text{uncle}(X, Y) & : \text{gender}(X, \text{male}), \text{uncle}(X, Y), \\
\text{aunt}(X, Y) & : \text{gender}(X, \text{female}), \text{aunt}(X, Y), \\
\text{niece}(X, Y) & : \text{gender}(X, \text{female}), \text{niece}(X, Y), \\
\text{nephew}(X, Y) & : \text{gender}(X, \text{male}), \text{uncle}(X, Y), \\
\text{niece}(X, Y) & : \text{gender}(X, \text{female}), \text{uncle}(X, Y), \\
\text{nephew}(X, Y) & : \text{gender}(X, \text{male}), \text{uncle}(X, Y), \\
\text{aunt}(X, Y) & : \text{gender}(X, \text{female}), \text{uncle}(X, Y), \\
\text{aunt}(X, Y) & : \text{married}(X, A), \text{sibling}(X, C), \text{parent}(C, Y), \\
\text{sibling}(X, Y) & : \text{parent}(A, X), \text{parent}(A, Y), X = Y.
\end{align*}
\]
6 CONCLUSION
Theory revision is an exciting development in machine learning, since it allows a system to take advantage of expert knowledge without requiring the expert to be infallible. In this paper we presented a system, Forte, that performs theory revision in first-order logic. Forte builds on prior work done in propositional theory revision, inductive learning, and inverse resolution. Future versions of Forte will lift a number of restrictions placed on the current system. Planned enhancements will introduce recursion and negation into the domain theories, and limit or eliminate Forte’s susceptibility to local maxima.
Acknowledgements
This research was supported in part by the Air Force Institute of Technology faculty preparation program, and in part by the NASA Ames Research Center under grant NCC 2-629.
References
Principles for secure design
Some of the slides and content are from Mike Hicks’ Coursera course
Making secure software
• **Flawed approach**: Design and build software, and *ignore security at first*
• Add security once the functional requirements are satisfied
• **Better approach**: *Build security in* from the start
• Incorporate security-minded thinking into all phases of the development process
Development process

Four common **phases** of development:
- Requirements
- Design
- Implementation (we've been talking about this phase)
- Testing/assurance

**Security activities** apply to all phases; this class is about these:
- Security Requirements
- Abuse Cases
- Architectural Risk Analysis
- Security-oriented Design
- Code Review (with tools)
- Risk-based Security Tests
- Penetration Testing
Designing secure systems
• **Model** your threats
• Define your **security requirements**
• What distinguishes a security requirement from a typical “software feature”?
• Apply good security **design principles**
Threat Modeling
Threat Model
- The **threat model** makes explicit the adversary's assumed powers
- Consequence: the threat model must match reality, otherwise the risk analysis of the system will be wrong
- The threat model is **critically important**
- If you are not explicit about what the attacker can do, how can you assess whether your design will repel that attacker?

"This system is secure" means *nothing* in the absence of a threat model
A few different network threat models (client, network, server):
- Malicious user
- Snooping attacker on the network
- Co-located user
- Compromised server
Threat-driven Design
- Different threat models will elicit different responses
- **Only malicious users**: implies **message traffic is safe**
  - No need to encrypt communications
  - This is what **telnet** remote login software assumed
- **Snooping attackers**: means **message traffic is visible**
  - So use encrypted wifi (link layer), encrypted network layer (IPsec), or encrypted application layer (SSL/TLS); see the sketch below
  - Which is most appropriate for your system? More on these when we get to networking
- **Co-located attacker**: can **access local files, memory**
  - Cannot store unencrypted secrets, like passwords; in fact, even encrypting them might not suffice! (More later)
  - Likewise with a compromised server
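As an illustration of the application-layer option above, here is a minimal sketch (not from the original slides) of a Java client that protects its traffic with TLS via the standard `javax.net.ssl` API; the host name and port are placeholders.

```java
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

// Minimal sketch: open an application-layer encrypted (TLS) connection so a
// snooping attacker on the network sees only ciphertext. Host/port are placeholders.
public class TlsClientSketch {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.org", 443)) {
            socket.startHandshake();  // negotiate keys and verify the server certificate
            OutputStream out = socket.getOutputStream();
            out.write("hello over TLS\r\n".getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}
```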
Bad Model = Bad Security
- Any **assumptions** you make in your model are potential **holes that the adversary can exploit**
- E.g.: Assuming no snooping users is **no longer valid**
  - Prevalence of wi-fi networks in most deployments
- Other mistaken assumptions
  - **Assumption**: Encrypted traffic carries no information
    - Not true! By analyzing the size and distribution of messages, you can infer application state
  - **Assumption**: Timing channels carry little information
    - Not true! Timing measurements of previous RSA implementations could eventually be used to reveal a remote SSL secret key
Bad Model = Bad Security
**Assumption**: Encrypted traffic carries no information
**Skype** encrypts its packets, so we're not revealing anything, right?
But Skype varies its packet sizes…
…and different languages have different word/unigram lengths…
…so you can infer **what language** two people are speaking based on **packet sizes**!
*Figure 2: Unigram frequencies of bit rates for English, Brazilian Portuguese, German and Hungarian* (from "Language Identification of Encrypted VoIP Traffic: Alejandra y Roberto or Alice and Bob?" by Charles V. Wright, Lucas Ballard, Fabian Monrose, and Gerald M. Masson)
Finding a good model
- **Compare against similar systems**
- What attacks does their design contend with?
- **Understand past attacks and attack patterns**
- How do they apply to your system?
- **Challenge assumptions in your design**
- What happens if an assumption is untrue?
- What would a breach potentially cost you?
- How hard would it be to get rid of an assumption, allowing for a stronger adversary?
- What would that development cost?
You have your threat model.
Now let’s define what we need to defend against.
Security Requirements
• **Software requirements** typically about *what* the software should do
• We also want to have **security requirements**
- **Security-related goals** (or policies)
- **Example**: One user's bank account balance should not be learned by, or modified by, another user, unless authorized
- **Required mechanisms for enforcing them**
- **Example**:
1. Users identify themselves using passwords,
2. Passwords must be “strong,” and
3. The password database is only accessible to login program.
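Mechanisms 2 and 3 above can be made concrete with a small sketch (not from the slides): passwords are never stored in the clear, only as salted, slow hashes, so a stolen password database reveals as little as possible. The class name and iteration count are illustrative; the hashing uses the JDK's standard PBKDF2 support.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Minimal sketch: store only salted, slow password hashes so a stolen
// password database does not directly reveal user passwords.
public final class PasswordStore {
    private static final int ITERATIONS = 210_000;  // illustrative work factor
    private static final int KEY_BITS = 256;

    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                               .generateSecret(spec).getEncoded();
    }

    /** Verify a login attempt against the stored salt and hash (constant-time compare). */
    static boolean verify(char[] attempt, byte[] salt, byte[] storedHash) throws Exception {
        return MessageDigest.isEqual(hash(attempt, salt), storedHash);
    }
}
```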
Typical *Kinds* of Requirements
- **Policies**
- **Confidentiality** (and Privacy and Anonymity)
- **Integrity**
- **Availability**
- **Supporting mechanisms**
- **Authentication**
- **Authorization**
- **Audit-ability**
- **Encryption**
Supporting mechanisms
These relate identities ("principals") to actions:
- **Authentication**: How can a system tell *who a user is*?
  - What we know
  - What we have
  - What we *are*
  - More than one of the above = *multi-factor authentication*
- **Authorization**: How can a system tell *what a user is allowed to do*?
  - Access control policies (define) + a *mediator* (checks)
- **Audit-ability**: How can a system tell *what a user did*?
  - Retain enough info to determine the circumstances of a breach
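To make the multi-factor idea concrete, here is a minimal sketch (not part of the slides); the `PasswordChecker` and `OneTimeCodeChecker` interfaces are hypothetical stand-ins for a real password store and a token or OTP device.

```java
// Minimal sketch of multi-factor authentication: the login succeeds only if
// *both* factors check out ("what we know" plus "what we have").
public final class MultiFactorLogin {
    interface PasswordChecker { boolean matches(String user, char[] password); } // factor 1
    interface OneTimeCodeChecker { boolean matches(String user, String code); }  // factor 2

    private final PasswordChecker passwords;
    private final OneTimeCodeChecker codes;

    MultiFactorLogin(PasswordChecker passwords, OneTimeCodeChecker codes) {
        this.passwords = passwords;
        this.codes = codes;
    }

    boolean authenticate(String user, char[] password, String oneTimeCode) {
        // Evaluate both factors unconditionally so the timing of a failure
        // does not reveal which factor was wrong; require both to pass.
        boolean knows = passwords.matches(user, password);
        boolean has = codes.matches(user, oneTimeCode);
        return knows && has;
    }
}
```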
Defining Security Requirements
• Many processes for deciding security requirements
• Example: **General policy concerns**
• Due to regulations/standards (HIPAA, SOX, etc.)
• Due to organizational values (e.g., valuing privacy)
• Example: **Policy arising from threat modeling**
• Which attacks cause the greatest concern?
- Who are the likely adversaries and what are their goals and methods?
• Which attacks have already occurred?
- Within the organization, or elsewhere on related systems?
Abuse Cases
• Abuse cases illustrate security requirements
• Where use cases describe what a system should do, abuse cases describe what it should not do
• Example use case: The system allows bank managers to modify an account’s interest rate
• Example abuse case: A user is able to spoof being a manager and thereby change the interest rate on an account
Defining Abuse Cases
• Construct cases in which an adversary’s exercise of power could violate a security requirement
• Based on the threat model
• What might occur if a security measure was removed?
• **Example**: Co-located attacker steals password file and learns all user passwords
• Possible if password file is not encrypted
• **Example**: Snooping attacker replays a captured message, effecting a bank withdrawal
• Possible if messages carry no nonce (a small amount of uniqueness/randomness, like the time of day or a sequence number)
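The second abuse case suggests an obvious countermeasure. The following is a minimal sketch (assumed, not from the slides) of server-side replay protection that rejects any message whose nonce has already been seen; a real protocol would also bind the nonce into the message's MAC or signature and expire old nonces.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of replay protection: the server rejects any withdrawal
// message whose nonce it has already processed.
public final class ReplayGuard {
    private final Set<String> seenNonces = ConcurrentHashMap.newKeySet();

    /** Returns true if the message is fresh; false if it is a replay. */
    boolean accept(String nonce) {
        // add() returns false when the nonce was already present.
        return seenNonces.add(nonce);
    }
}
```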
Security design principles
Design Defects = Flaws
- Recall that software defects consist of both flaws and bugs
- **Flaws** are problems in the design
- **Bugs** are problems in the implementation
- **We avoid flaws during the design phase**
- According to Gary McGraw, **50% of security problems are flaws**
- So this phase is very important
Categories of Principles
- **Prevention**
  - **Goal**: Eliminate software defects entirely
  - **Example**: The Heartbleed bug would have been prevented by using a type-safe language, like Java
- **Mitigation**
  - **Goal**: Reduce the harm from exploitation of unknown defects
  - **Example**: Run each browser tab in a separate process, so exploitation of one tab does not yield access to data in another
- **Detection (and Recovery)**
  - **Goal**: Identify and understand an attack (and undo damage)
  - **Example**: Monitoring (e.g., expected invariants), snapshotting
Principles for building secure systems
General rules of thumb that, when neglected, result in design flaws
- Security is economics
- Principle of least privilege
- Use fail-safe defaults
- Use separation of responsibility
- Defend in depth
- Account for human factors
- Ensure complete mediation
- Kerckhoffs’ principle
- Accept that threat models change
- If you can’t prevent, detect
- Design security from the ground up
- Prefer conservative designs
- Proactively study attacks
“Security is economics”
You can’t afford to secure against *everything*, so what *do* you defend against? Answer: That which has the greatest “return on investment”
THERE ARE NO SECURE SYSTEMS, ONLY DEGREES OF INSECURITY
- In practice, need to **resist a certain level of attack**
- Example: Safes come with security level ratings
- “Safe against safecracking tools & 30 min time limit”
- Corollary: Focus energy & time on **weakest link**
- Corollary: Attackers follow the *path of least resistance*
“Principle of least privilege”
Give a program the access it legitimately needs to do its job. NOTHING MORE
- This doesn’t necessarily reduce probability of failure
- Reduces the EXPECTED COST
**Example:** Unix does a BAD JOB:
- Every program gets all the privileges of the user who invoked it
- Running vim as root: it can do anything, when it should just get access to the file being edited
**Example:** Windows JUST AS BAD, MAYBE WORSE
- Many users run as Administrator,
- Many tools require running as Administrator
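By contrast, a least-privilege design hands a component only the resource it legitimately needs. The sketch below is illustrative (the class and method names are invented, not from the slides): the "editor" is scoped to a single file instead of the whole filesystem.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Minimal sketch of least privilege at the API level: instead of giving a
// component broad filesystem access (the "run vim as root" anti-pattern),
// give it a handle scoped to the single file it legitimately needs.
final class SingleFileEditor {
    private final Path file;  // the only resource this component can touch

    SingleFileEditor(Path file) { this.file = file; }

    List<String> read() throws IOException { return Files.readAllLines(file); }

    void write(List<String> lines) throws IOException { Files.write(file, lines); }
}
```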
“Use fail-safe defaults”
Things are going to break. Break safely.
- **Default-deny policies**
- Start by denying all access
- Then allow only that which has been explicitly permitted
- **Crash => fail to secure behavior**
- Example: firewalls explicitly decide to forward
- Failure => packets don’t get through
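A minimal sketch of a default-deny policy follows (illustrative; the class and method names are not from the slides): access is granted only when an explicit allow rule exists, so unknown users, unknown resources, and lookup failures all fall through to deny.

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of a default-deny policy: start by denying everything and
// allow only what has been explicitly permitted.
final class DefaultDenyPolicy {
    private final Map<String, Set<String>> allowed; // user -> resources explicitly permitted

    DefaultDenyPolicy(Map<String, Set<String>> allowed) { this.allowed = allowed; }

    boolean isAllowed(String user, String resource) {
        Set<String> resources = allowed.get(user);
        return resources != null && resources.contains(resource); // anything else: deny
    }
}
```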
“Use separation of responsibility”
Split up privilege so no one person or program has total power.
- **Example**: US government
- Checks and balances among different branches
- **Example**: Movie theater
- One employee sells tickets, another tears them
- Tickets go into lockbox
- **Example**: Nuclear weapons…
“Defend in depth”
Use multiple, redundant protections
• Only in the event that *all of them* have been breached should security be endangered.
• **Example**: Multi-factor authentication:
• Some combination of password, image selection, USB dongle, fingerprint, iris scanner,… (more on these later)
• **Example**: “You can recognize a security guru who is particularly cautious if you see someone wearing both… a belt and suspenders”
“Ensure complete mediation”
Make sure your reference monitor sees **every** access to **every** object
- Any **access control system** has some resource it needs to enforce
- Who is allowed to access a file
- Who is allowed to post to a message board…
- **Reference Monitor:** The piece of code that checks for permission to access a resource
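To illustrate, here is a minimal sketch (assumed, not from the slides) in which every read of a guarded document goes through the reference monitor; there is no second, unchecked path to the data.

```java
import java.util.Objects;

// Minimal sketch of complete mediation: the document's contents can only be
// reached through read(), and read() consults the reference monitor on every
// call, so no access bypasses the check.
final class GuardedDocument {
    interface ReferenceMonitor { boolean mayRead(String principal, String docId); }

    private final String docId;
    private final String contents;          // never exposed directly
    private final ReferenceMonitor monitor; // checks every access

    GuardedDocument(String docId, String contents, ReferenceMonitor monitor) {
        this.docId = docId;
        this.contents = contents;
        this.monitor = Objects.requireNonNull(monitor);
    }

    String read(String principal) {
        if (!monitor.mayRead(principal, docId)) {
            throw new SecurityException(principal + " may not read " + docId);
        }
        return contents;
    }
}
```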
“Account for human factors”
(1) “Psychological acceptability”: Users must buy into the security model
• The security of your system ultimately lies in the hands of those who use it.
• If they don’t believe in the system or the cost it takes to secure it, then they won’t do it.
• **Example**: “All passwords must have 15 characters, 3 numbers, 6 hieroglyphics, …”
[Screenshot: a webmail login page ("Log in to your message center") showing the error "Invalid log in or server error. Please try again." together with address/password fields and a case-sensitivity note, used to illustrate psychological acceptability]
“Account for human factors”
(2) The security system must be usable
• The security of your system ultimately lies in the hands of those who use it.
• If it is too hard to act in a secure fashion, then they won’t do it.
• **Example**: Popup dialogs
“Kerckhoffs’ principle”
Don’t rely on **security through obscurity**
- Originally defined in the context of crypto systems (encryption, decryption, digital signatures, etc.):
- Crypto systems should remain *secure even when an attacker knows all of the internal details*
- It is easier to change a compromised key than to update all code and algorithms
- The best security is the light of day
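As a concrete illustration (not from the slides): with a standard cipher such as AES-GCM from the JDK's `javax.crypto` API, the algorithm is completely public and the only secret is the key, which can be rotated if it is ever compromised. The plaintext and key size are placeholders.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Minimal sketch of Kerckhoffs' principle in practice: AES-GCM is public and
// standardized; the only secret is the key.
public final class KerckhoffsDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();        // the only secret

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);            // public, but never reused per key

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding"); // public algorithm
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("attack at dawn".getBytes(StandardCharsets.UTF_8));
        System.out.println(ciphertext.length + " bytes of ciphertext");
    }
}
```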
Principles for building secure systems
Know these well:
• Security is economics
• Principle of least privilege
• Use fail-safe defaults
• Use separation of responsibility
• Defend in depth
• Account for human factors
• Ensure complete mediation
• Kerckhoffs’ principle
Self-explanatory:
• Accept that threat models change; adapt your designs over time
• If you can’t prevent, detect
• Design security from the ground up
• Prefer conservative designs
• Proactively study attacks
COMMON RANGE ARCHITECTURE OBJECT MODEL
APPROVAL PROCESS INVESTIGATION
WHITE SANDS MISSILE RANGE
REAGAN TEST SITE
YUMA PROVING GROUND
DUGWAY PROVING GROUND
ABERDEEN TEST CENTER
NATIONAL TRAINING CENTER
ELECTRONIC PROVING GROUND
NAVAL AIR WARFARE CENTER WEAPONS DIVISION
NAVAL AIR WARFARE CENTER AIRCRAFT DIVISION
NAVAL UNDERSEA WARFARE CENTER DIVISION, NEWPORT
PACIFIC MISSILE RANGE FACILITY
NAVAL UNDERSEA WARFARE CENTER DIVISION, KEYPORT
30TH SPACE WING
45TH SPACE WING
AIR FORCE FLIGHT TEST CENTER
AIR ARMAMENT CENTER
AIR WARFARE CENTER
ARNOLD ENGINEERING DEVELOPMENT CENTER
BARRY M. GOLDWATER RANGE
UTAH TEST AND TRAINING RANGE
NATIONAL NUCLEAR SECURITY ADMINISTRATION (NEVADA)
DISTRIBUTION A: APPROVED FOR PUBLIC RELEASE
DISTRIBUTION IS UNLIMITED
COMMON RANGE ARCHITECTURE OBJECT MODEL
APPROVAL PROCESS INVESTIGATION
DECEMBER 2004
Prepared by
DATA REDUCTION AND COMPUTER GROUP
RANGE COMMANDERS COUNCIL
Published by
Secretariat
Range Commanders Council
U.S. Army White Sands Missile Range
New Mexico 88002-5110
TABLE OF CONTENTS

PREFACE
ACRONYMS
CHAPTER 1: EXECUTIVE SUMMARY
CHAPTER 2: DATA REDUCTION TASK DR-31 BREAKDOWN
    2.1 The Need For An Object Model (OM) Standardization Process
    2.2 DR-31: Task 1 and Task 2 Defined
CHAPTER 3: TASK 1: STANDARDIZATION PROCESS DESCRIPTION
    3.1 Overview
    3.2 Recommended Review Process for Object Model Standardization Requests
    3.3 Guidelines for Initiating an OM Standards Review
    3.4 OM Review Process by DR&CG Team
    3.5 RCC Coordination
    3.6 RCC Pink Sheet Process for OM Standard Review and Acceptance
    3.7 Required Documents Summary

LIST OF FIGURES

Figure 3-1. Process for a proposed Object Model standard
PREFACE
The Range Commanders Council (RCC) Data Reduction and Computer Group (DR&CG) sponsored the development and publication of this document. This document represents the release of Task 1 of the DR&CG study effort DR-31, “Common Range Architecture Object Model Approval Process Investigation.” The DR&CG Common Range Architecture Committee developed this document to provide the reader with an overview of the process for developing Object Models (OM). The goal is to achieve consistency in developing OM standards throughout the Department of Defense (DoD).
The primary contributors to this report are shown below.
Author: Dr. William T. (Tilt) Thompkins, Jr.
Associate Vice President for Infrastructure
Information Technology, Purdue University
West Lafayette, IN 47906
E-Mail: [email protected]
Mr. David Browning
Data Reduction And Computer Group (DR&CG), Associate Member
Representing: Redstone Technical Test Center (CSTE-DTC-RT-F-FL)
Redstone Arsenal, AL 35898-8052
E-Mail: [email protected]
Address questions about this document to the RCC Secretariat.
Secretariat, Range Commanders Council
ATTN: TEDT-WS-RCC
1510 Headquarters Avenue
White Sands Missile Range, New Mexico 88002-5110
Telephone: (575) 678-1107, DSN 258-1107
E-mail [email protected]
ACRONYMS

CRAC     Common Range Architecture Committee
CTEIP    Central Test and Evaluation Investment Program
DR&CG    Data Reduction and Computer Group
DoD      Department of Defense
DR-31    Data Reduction and Computer Group - Task Number 31
FI2010   Foundation Initiative 2010
JPEG     Joint Photographic Experts Group
JIST3    Joint Interoperability and Systems Technology for Test and Training
OM       Object Model
OO       Object Oriented
POC      Point of Contact
RCC      Range Commanders Council
TENA     Test and Training Enabling Architecture
UML      Unified Modeling Language
XMI      XML Metadata Interchange
XML      Extensible Markup Language
CHAPTER 1
EXECUTIVE SUMMARY
The Charter for the Range Commanders Council (RCC) Data Reduction and Computer Group (DR&CG), Common Range Architecture Committee (CRAC), includes the evaluation of proposed RCC architectural standards as well as the configuration management and distribution of candidate and accepted standards.
The Central Test and Evaluation Investment Program (CTEIP), Foundation Initiative 2010 (FI2010) project has developed the Test and Training Enabling Architecture (TENA) to support test and training range interoperability. As part of the TENA objective, the FI2010 project will be offering proposed architectural standards to the RCC for ratification and management. The first offering from the project will be the common Object Models (OM) being produced and utilized within the TENA architecture. A pathfinder project was established to articulate the issues, provide a process for Object Model standardization, and prepare a guideline for standardization. This pathfinder project is identified as RCC task DR-31, Common Range Architecture Object Model Approval Process Investigation.
This document defines a notional, top-level process that the RCC in general, and the DR&CG in particular, will follow to standardize Object Models.
The detailed process that the DR&CG will use to store, review, modify, and manage the Object Models as they progress through the standardization process, can be seen in the supplement to this document. The title of the supplement is “Document 169-04 (Supplement) Common Range Architecture Object Model Approval Process Investigation.”
CHAPTER 2
DATA REDUCTION TASK DR-31 BREAKDOWN
2.1 The Need For An Object Model (OM) Standardization Process
There are currently many activities that strive to enable interoperability between ranges and range resources. Therefore, a significant portion of these activities support the standardization of the data passed between the ranges. In addition, the architectures developed to support range interoperability, such as the Test and Training Enabling Architecture (TENA), have adopted an Object Oriented (OO) approach. When OO-based software is used in conjunction with data standardization, a notion of OM is presented. An OM is the interface to a given system that describes its data and functional capabilities. In other words, it’s the “contract” that must be enforced to support interoperability. The Range Commanders Council (RCC) task “DR-31, Common Range Architecture Object Model Approval Process Investigation,” was initiated to address concerns regarding the process required to standardize proposed Object Models.
2.2 DR-31: Task 1 and Task 2 Defined
The DR-31 effort was established to support two main tasks:
a. Task 1 - Develop the initial high-level notional process by which the RCC in general, and the DR&CG in particular, should standardize Object Models.
b. Task 2 - Develop the high-level notional process that the RCC could store, review, modify, and manage the Object Models as they progress through the standardization process defined in Task 1.
2.2.1 Task 1 Deliverables. For Task 1, the primary process deliverables are in Chapter 3 of this document. The primary deliverables include guidelines as to when a candidate OM should be reviewed, a draft process by which Object Models are reviewed by subject matter experts, and a draft process for revision and final approval of candidate Object Models. Additional deliverables include tutorial and training materials on software architectures and Object Oriented concepts.
Tutorial materials on Software Architecture and Object Oriented concepts were developed and presented at the 98th DR&CG meeting in Salt Lake City. A copy of the tutorial materials can be seen at the PowerPoint briefing “Software Architecture Concepts and Views UML Introduction” given by the author, William T. (Tilt) Thompkins on 24 March 2003.
2.2.2 Task 2 Deliverables. For Task 2, the primary deliverables include detailed data management definitions and supporting technologies necessary to properly manage the submission, review, and maintenance of proposed OMs. Task 2 deliverables are in the supplement to this document. The title of the supplement is “Document 169-04 (Supplement) Common Range Architecture Object Model Approval Process Investigation.”
CHAPTER 3
TASK 1: STANDARDIZATION PROCESS DESCRIPTION
3.1 Overview
While considerable latitude is available to RCC groups to define their internal processes to develop and review standards, these processes must fit within a clearly defined and managed RCC process for reviewing and promulgating standards across individual test ranges. These processes ensure that all relevant groups are included in the review process and that all test ranges agree to and are able to implement the standard.
The following sections outline the process for Object Model standardization requests and illustrate how the detailed Object Model subject review (detailed in Document 169-04 (Supplement) Common Range Architecture Object Model Approval Process Investigation) is integrated into this process. This process also provides uniform support to enable the individual groups to manage the overall process for acceptance and distribution of OM standards. An overview of the notional, top-level review process is shown in Figure 3-1 at the end of this chapter.
3.2 Recommended Review Process for Object Model Standardization Requests
The DR&CG Chair is assigned the primary responsibility as focal point for the Object Model standard submissions, the DR&CG review process, and the final coordination with the RCC members. An overview of the process is as follows:
a. An OM Working Group initiates the process by submitting a proposed OM standard to the DR&CG Chair for review and approval. Any group(s) or individual(s) submitting an OM standard for review will be referred to as an OM Working Group.
b. The DR&CG Chair will conduct a review of the OM standard proposal to ensure it meets the guidelines for review (see paragraph 3.3 below). Proposals not meeting the guidelines will be returned for needed changes.
c. For proposals meeting the guidelines, the Chair will appoint a DR&CG review team and a DR&CG review team leader to conduct the review.
d. The DR&CG team leader will coordinate a review of the OM by the DR&CG team (see paragraph 3.4) and forward the findings and recommendations to the DR&CG Chair.
e. The DR&CG chair will appoint a “Pink Sheet” point of contact (POC) for proposals that have been recommended for approval (see paragraph 3.5).
f. The “Pink Sheet” point of contact (POC) will forward the proposal to the RCC Secretariat for final coordination and RCC ratification.
g. The RCC Secretariat will provide coordination support between the Pink Sheet POC, the DR&CG Chair, and the appropriate RCC members and committees (see paragraph 3.6).
h. Once coordinated and approved by the RCC, the DR&CG Chair will coordinate publication of the new OM standard with the RCC Secretariat.
3.3 Guidelines for Initiating an OM Standards Review.
The DR&CG Chair will initiate a review when an OM draft standard proposal is received from a DoD community member, a test range, or RCC member. The OM standard must be in use on at least one test range and must contain the minimum OM documentation. The minimum documentation is outlined below and detailed descriptions are provided in “Document 169-04 (Supplement) Common Range Architecture Object Model Approval Process Investigation.”
The minimum OM documentation is defined to be:
a. The OM metadata - The metadata fields must be provided to put the OM definition and submission into context.
b. The Object Model - The Object Model must be graphically depicted using Unified Modeling Language (UML) notation as a standard class diagram.
c. Use case - While in UML a use case is only one of many diagrams, here we refer to a use case as several UML diagrams. A use case can be a UML-based use case model or the FI2010 use case template. If UML diagrams are used, the Use Case, Sequence, and Deployment diagrams are required.
d. Metamodel - A graphical or textual representation of the metamodel used during the OM definition
Unless otherwise specified, the Object Model and Use Case diagrams shall be presented to DR&CG Chair in the UML Standard XML metadata Interchange (XMI) 1.0 format. This format allows for diagram interchange between various UML tool programs. As standards evolve, it is recommended that the DR&CG adjust this requirement to meet new standard definitions. In addition, a Joint Photographic Experts Group (JPEG) format is also required for a quick-look capability.
The OM submission package may either be sent to an e-mail address designated by the DR&CG Chair or to a DR&CG-supported on-line submission process or system.
3.4 OM Review Process by DR&CG Team
The DR&CG review team leader will coordinate a review of the proposed OM standard using the guidelines in the OM Review Process defined below and coordinate with the OM Working Group to gain additional information and complete any needed changes.
a. The DR&CG review team will be responsible for judging compliance of the proposed OM standard with guidelines, and for judging if the proposed standard merits submission to the RCC as a draft standard.
b. For accepted OM standard proposals, the review team leader will forward the reviewed standard and documentation to the DR&CG Chair for submission to the RCC Secretariat.
3.5 RCC Coordination
When the DR&CG Chair receives an accepted OM standard proposal from a DR&CG review team, the Chair will initiate final coordination with other RCC members and groups. The Chair will proceed as follows:
a. The DR&CG Chair will appoint a POC for the RCC “Pink Sheet” review (see paragraph 3.6 below).
b. The Pink Sheet POC will prepare all relevant coordination documentation and cover letters for RCC review and submit them to DR&CG Chair.
c. The Chair will submit the coordination documents and draft OM standard proposal to the RCC Secretariat.
d. The RCC Secretariat will ask technical representatives and group chairs to review the draft standard and provide issues and comments to the Pink Sheet POC.
e. The Pink Sheet POC will address all issues and comments from reviewers, finalize the OM standard proposal, and submit it to the DR&CG Chair.
f. If approved by the DR&CG Chair, he/she will submit the finalized OM standard proposal to the RCC Secretariat for final coordination with the RCC Taskmaster and publication as an OM standard.
3.6 RCC Pink Sheet Process for OM Standard Review and Acceptance
The RCC Secretariat manages the process for reviewing and accepting an RCC standard. This process is described as the “Pink Sheet” review and allows each of the member ranges to thoroughly examine the draft OM standard and ensure that their range can agree to the requirements of the standard. The process consists of the following steps:
a. The DR&CG Chair will submit the coordination documents and draft OM standard proposal to the RCC Secretariat (see paragraph 3.5c above). A cover letter with suspense date, identity of the Pink Sheet Point of Contact (POC) to whom comments are to be sent, and distribution limitations must be included.
b. If the document is to be for unlimited distribution, the Secretariat directs the Joint Interoperability and Systems Technology for Test and Training (JIST3) office to place the draft on the RCC public page under Draft Document Review. Otherwise the Secretariat places the document on the RCC private page.
c. The Secretariat notifies the RCC Technical Representatives and group chairs that the draft is available for review and comments.
d. Comments and questions are sent directly to the identified Pink Sheet POC listed on the cover letter. It is the responsibility of the POC to resolve issues identified by reviewers appointed by the Technical Representatives or group chairs and prepare a final draft agreed to by all parties.
e. The final draft is sent to the RCC Secretariat who will send it to the RCC Taskmaster and Technical Representatives for final review and acceptance as an RCC standard.
f. If approved, the standard is assigned to the RCC Secretariat for publication.
3.7 Required Documents Summary
The RCC requirements for submitting draft OM standard documents are shown below.
3.7.1 RCC required documents.
a. Draft standard - RCC documentation requirement is not specified. The DR&CG process will use the OM documentation set described below.
b. Cover letter - A cover letter with suspense date, a Point of Contact for comments, and distribution limitations.
3.7.2 DR&CG required documents for the Object Model approval process.
a. The OM metadata - The metadata fields must be provided to put the OM definition and submission into context.
b. The Object Model - The Object Model must be graphically depicted using UML notation as a standard class diagram.
c. Use case - While in UML a use case is only one of many diagrams. In this document, we refer to a use case as several UML diagrams. A use case can be a UML-based use case model or the FI2010 use case template. If UML diagrams are used, the use case, sequence, and deployment diagrams are required.
d. Metamodel - A graphical or textual representation of the metamodel used during the OM definition.
[Figure 3-1 is a flowchart of the review process described in paragraphs 3.2 through 3.6: an OM Working Group\(^1\) submits a proposed standard; the DR&CG Chair reviews it against the guidelines and, if they are met, appoints a DR&CG review team and team leader; the team reviews the proposal using the guidelines from the OM Review Process\(^2\) with support from the OM Working Group and sends the proposal and its recommendation to the DR&CG Chair; for proposals the Chair approves, the RCC Secretariat asks Technical Representatives and group chairs to review and submit changes to the Pink Sheet POC, who coordinates the final draft and returns it to the DR&CG Chair; the Secretariat then coordinates the draft with the Taskmaster and Technical Representatives, after which the standard is either edited and published by the Technical Editor or not published, with the RCC Secretariat notifying the parties in either case.]

Figure 3-1. Process for a proposed Object Model standard.
---
\(^1\) Any group(s) or individual(s) submitting a standard for review will be referred to as an OM Working Group.
\(^2\) The DR&CG review team will conduct a review using the guidelines in the OM Review Process defined in document "DR-31 Common Range Architecture Object Model Approval Process Investigation".
An IMS testbed for SIP applications
Caba, Cosmin Marius; Soler, José
Published in: Proceedings of IIT Real-Time Communications Conference
Publication date: 2013
An IMS testbed for SIP applications
Cosmin Caba, José Soler
DTU Fotonik – Networks Technology & Service Platforms Group
Lyngby, Denmark
+45 4525 3217
{cosm, joss}@fotonik.dtu.dk
ABSTRACT
The paper presents the design and implementation of an emulation platform for the IP Multimedia Subsystem. The SIP Servlet API v1.1 has been used to implement the final system. The purpose of the emulation is to offer to IMS service developers an environment where they can integrate development, deployment and testing of their SIP applications in a single platform, easy to set up and maintain. The emulation platform offers the possibility of configuring network triggers (e.g. initial Filter Criteria and Service Point Triggers) through a web interface, and executing complex test scenarios according to the configuration. The result is ideal for introducing telecommunication students to service development, testing and deployment in a user friendly environment.
Categories and Subject Descriptors
C.2.0 [Computer-Communication Networks]: Network Architecture and Design - Packet-switching network
General Terms
Design, Experimentation, Verification
Keywords
IP Multimedia Subsystem, SIP servlet, SailFin, test, emulation.
1. INTRODUCTION
One of the key components of a converged IP Multimedia Subsystem (IMS) network is the service layer. The service layer is comprised of application servers and gateways towards third party servers. The applications servers run the software applications that represent the services for the end-users.
It is important to educate developers in programming applications for the IMS network, so that innovative and appealing services can be provided to customers. The platform implemented in this project is targeted towards students who are starting out in the field of programming SIP (Session Initiation Protocol) based services for the IMS network.
While there are free tools and application servers that developers can use to program IMS services, there is no integrated environment that offers IMS-like network capabilities alongside the regular development capabilities. The present project aims to cover this niche. The solution we implemented gives students and developers working on IMS service development a single platform where they can create, deploy and test SIP-based applications.
To test a service from a system’s perspective [1] one can use a production network or a small scale system built specially for testing purposes. While very useful and precise, this method has the disadvantage of being costly. Another method for testing IMS services is to use an emulation platform in place of the real network elements. On one hand, this method leverages the properties of a real network, offering the possibility to test an ecosystem of services and observe the interactions among them. On the other hand, it is more cost effective, faster and easier to use than a real system. The approach we take towards testing IMS services is to emulate a part of the IMS network. It is important to mention that this paper only discusses the aspect of functional testing [1]. Other types of testing are left for more complex platforms. A brief comparison with other similar systems will be given in a later section.
Our solution is essentially an emulation of the core entity of the IMS network (i.e. Serving – Call Session Control Function). Using the solution proposed in this paper, developers can create realistic IMS network conditions (e.g. service execution, network triggers) within the same machine they use for the development, with a minimum amount of effort. The IMS functionality captured in the emulation is service composition and execution according to network triggers (i.e. initial filter criteria, service point triggers). By using our solution, the developers can verify that an application meets the functional requirements it was designed to fulfill hence perform functional testing.
IMS services may be split in two categories:
- Legacy services: traditional circuit switched services that are enabled through special gateways to translate between the protocols used in IMS and the legacy network [2].
- SIP based services: services that are executed using SIP messages.
The focus of the present paper is on SIP based services created using the SIP Servlet Application Programming Interface (SSAPI) [3], although, as will become clear later, any SIP service can be tested under a specific configuration of the emulation platform.
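For orientation, the following is a minimal sketch of the kind of SSAPI-based SIP application the platform targets. It is illustrative only and not taken from the paper: the screening rule and the domain name are invented, and a real deployment in a container such as SailFin would add the usual deployment descriptors.

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.sip.SipServlet;
import javax.servlet.sip.SipServletRequest;

// Minimal sketch of a SIP application built with the SIP Servlet API (JSR 289):
// a call-screening service that rejects INVITEs from a blacklisted domain and
// proxies everything else onwards.
public class CallScreeningServlet extends SipServlet {

    @Override
    protected void doInvite(SipServletRequest req) throws ServletException, IOException {
        String caller = req.getFrom().getURI().toString();
        if (caller.contains("blocked.example.org")) {   // illustrative screening rule
            req.createResponse(603, "Decline").send();  // reject the call
        } else {
            // Continue the call towards the requested destination.
            req.getProxy().proxyTo(req.getRequestURI());
        }
    }
}
```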
2. BACKGROUND
2.1 Service triggering
Service triggering relates to the Distributed Feature Composition (DFC) concept [4]. The DFC is a logical architecture for structured feature chaining within a multimedia call. A feature is, in this context, a service (be it SIP based or legacy) that adds value to the call. An established call is composed of a source, a destination and a set of feature boxes (FBs) in between (Figure 1).
The entity that orchestrates the insertion of the feature boxes along the call path is called DFC router. The DFC router coordinates the feature boxes and routes the call setup message towards the correct boxes based on an internal algorithm. The algorithm may consist of precedence rules for the features, user subscriptions or any other mechanisms.

The SSAPI can be used to create SIP based services, which are also called SIP applications. In version 1.1 of SSAPI, the DFC concept has been included under the name application selection, and it denotes the process of selecting a set of SIP applications from the applications deployed in a given SIP container. The Application Router (AR) is the entity performing the tasks of the DFC router in the SSAPI. It must be emphasized that the application selection mechanism enables service chaining only within a single SIP container.
In IMS, the service triggering at the network level involves a slightly different mechanism (Figure 2) [5]. The main entity responsible for the service composition is the Serving-Call Session Control Function (S-CSCF). The S-CSCF is aware of all the services available in the system and the subscriber data, which consists of subscription information, different rules depending on the terminal type and capabilities, etc. This information resides in the Home Subscriber Server (HSS) in the form of initial Filter Criteria (iFC) [6]. Unlike a DFC router, the S-CSCF processes every call setup message, therefore it is on the path of the multimedia call. Upon receiving a call setup message (i.e. SIP INVITE), the S-CSCF analyzes the iFC set for both the caller and the callee and creates a chain of services, after which the SIP message is routed to each service in turn for execution.
An application server may contain multiple SIP applications. The S-CSCF uses SIP routing mechanisms to route the call to an application server, but the server must manage internally the selection of the application to be executed for that SIP call. The application server may comprise several containers, each of them managing one type of application. The SIP Servlet container manages applications built using the SSAPI. Inside the SIP Servlet container, the application selection process follows the DFC model previously described for the SSAPI. Therefore, both triggering methods are used, one at the IMS network level and the other (i.e. DFC) inside the SIP servlet container.

2.2 The emulation
The paper presents the design and implementation of an IMS emulation platform. The implemented system is named intuitively IMS Core Emulation and its main purpose is to provide to service developers an emulation environment for the IMS network, where SIP applications can be deployed and have their behavior assessed. To this end, the emulation provides a graphical interface for configuration of user information (i.e. SIP address, iFC) and executes the service logic encapsulated in the SIP applications according to the configuration. In this context, a SIP application represents the System Under Test (SUT) and the SIP application developer is the tester [1].
Moreover, the emulation platform is integrated with the application server used to run the SIP applications, such that the students or developers can benefit from a unified environment to program, deploy and test the applications’ behavior. This represents the main strength and the driver for the IMS Core Emulation platform.
The technology used to implement the emulation platform is the SSAPI, which is also used to create SIP services. The implementation of the SSAPI that we use for the emulation is SailFin [7]. SailFin provides other features apart from the SIP container implementation (e.g. a web container), offering the possibility to implement rich SIP-based services that interact with web technologies (e.g. web applications and services). In essence, the IMS Core Emulation can be viewed as a SIP application which behaves similarly to the S-CSCF with respect to other SIP applications.
We strive for simplicity; therefore, the emulation captures a thin layer of functionality from IMS, just enough to be able to test the SIP application logic and service interaction.
The features the IMS Core Emulation provides are:
- A graphical user interface (GUI) to configure the test. Through the GUI the developer can set up the network triggers (i.e. iFC, SPTs), user related data (i.e. SIP addresses) and application specific data (i.e. socket, name, etc.).
- The execution of the service logic according to the configuration. In other words, the applications to be tested are executed as dictated by the network triggers.
There are other freely available platforms for IMS that offer a larger set of features and a more complete implementation. One such example is the Open IMS Core [8], which captures the standards in great detail. Nevertheless, the advantage of using our solution, the IMS Core Emulation, is that the developer is not required to deploy and maintain a complete IMS system (comprising all the core entities). The emulation platform implemented in this project is targeted more towards software developers of SIP applications than towards telecommunication engineers who have good knowledge of IMS. As we envision it, the IMS Core Emulation can be used by developers with minimal knowledge of IMS but a good understanding of SIP, in the context of rapid prototyping of SIP applications. The IMS Core Emulation platform basically combines the simplicity of SIP testing tools such as SIPp [9] with the service triggering and composition concepts from IMS. More thorough performance testing of the SIP applications will have to be performed in later phases of the development (and/or deployment) process, using a complete IMS network, which may fall within the scope of the network engineer.
The rest of the paper is structured as follows: section 3 presents details about the implementation, section 4 describes a typical test scenario, and section 5 gives general conclusions about the project.
3. IMPLEMENTATION
3.1 Design considerations
Before delving into details regarding the implementation of the IMS Core Emulation, it is necessary to show which entities and functions from IMS need to be captured in the emulation. Figure 3 illustrates the part of the IMS architecture that is of interest for the project.
The SIP Application Server (SIP AS) is the entity hosting the services in the IMS network [2].
The Home Subscriber Server (HSS) is the user database. The Serving-Call Session Control Function (S-CSCF) is the core SIP server that handles multimedia calls and decides the set of services that must be executed for a particular call [10].
In conclusion, the entities that fall within the scope of the emulation platform and have been implemented in software are the S-CSCF and the web interface module. The SIP AS runs the applications under test and is therefore within the scope of the tester.
As previously mentioned, the emulation is implemented using the SIP Servlet API; therefore it is a SIP application which must run in an application server. The difference between the emulation and any other SIP application lies in the way it processes the incoming SIP messages: it interprets a test configuration and executes the logic of the applications under test. Because both the applications under test and the emulation run in a SIP AS, it is possible to use a single application server to deploy all the software.
The SailFin AS provides the necessary infrastructure to run the emulated S-CSCF and the web interface application. Figure 4 captures a detailed view of the components that build up the IMS Core Emulation. In this schema, a single SailFin AS is used to run both the emulation software and the SIP application to be tested. Later, we will show another scenario distributed over two application servers.
3.2 Emulation components
As can be seen in Figure 4, the final implementation comprises two software packages deployed in the same instance of SailFin. The first package, IMSCore, is also the most important because it contains the logic for the emulation platform. The second package, AR, is the SSAPI application router, which contains extra logic to perform the application selection process.
We split the logic of the emulation, encapsulated in the IMSCore software package, into two conceptual modules which we call subsystems: the IMS subsystem and the WEB subsystem. This makes it easier to refer to distinct parts of the implementation.
The WEB subsystem could have been implemented and deployed separately from the IMS subsystem, but we decided to keep the number of packages to a minimum in order to increase usability, hence they are bundled together. However, the existence of the AR module is imposed by the SSAPI programming model; therefore, it was not possible to discard it or combine it with the rest of the platform.
The database module in Figure 4 represents the HSS from IMS. All the data required for the test is persisted in the database using
the Java Persistence API (JPA) framework. To be able to execute SIP tests, the developer must use a SIP test tool to send SIP requests towards the IMS Core Emulation system. When a test is initiated, a SIP message is sent from one User Agent (UA) destined for the second UA. The message is processed by the emulation platform and the target SIP applications are executed according to the test configuration. In this way, the developer can deploy multiple services and test their behavior and the interactions between the services.
3.3 Data persistence
A test configuration consists of a set of initial Filter Criteria (iFC) and Service Point Triggers (SPT) that dictate the order in which the services must be executed and whether or not a specific service needs to be executed. The test configuration is persisted in the database so that it can be reused even if the server is restarted.
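To give a concrete idea of what such a persisted configuration could look like, the sketch below models an iFC and its SPTs as JPA entities. The entity and field names are hypothetical illustrations; they are not taken from the actual IMSCore source code.

import java.util.List;
import javax.persistence.*;

// Hypothetical JPA model of a test configuration entry: one iFC with its SPTs.
@Entity
public class InitialFilterCriteria {

    @Id
    @GeneratedValue
    private Long id;

    private String subscriberUri;          // SIP address of the subscriber
    private int priority;                  // position of the iFC in the service chain
    private String applicationServerUri;   // where the triggered application lives

    @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
    private List<ServicePointTrigger> triggers;

    // getters and setters omitted for brevity
}

@Entity
class ServicePointTrigger {

    @Id
    @GeneratedValue
    private Long id;

    private boolean conditionNegated;      // the "Condition Negation" field in the GUI

    @Column(name = "spt_group")            // avoid the SQL keyword GROUP
    private int group;                     // SPTs in the same group are AND-ed

    private String sipMethod;              // e.g. "INVITE"
}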
The emulation platform must also store the state of the service chain between consecutive invocations of the SIP applications. This information is kept in a session object running in the server; therefore, it is not maintained if the application server is restarted.
3.4 The testing process
In general, the service testing process with the IMS Core Emulation platform involves the following steps:
- Deployment of the SIP applications to be tested in a SIP Servlet container (this can be the same SailFin instance running the emulation or another SIP Servlet container).
- Creation of the test configuration. In this step the developer must configure the following data through the web interface of the emulation platform: the UA SIP addresses, the application specific data (i.e. location where the application can be reached), and the network triggers (iFC and SPT).
- Test execution. A SIP request is sent towards the IMS Core Emulation to trigger the service execution, according to the test configuration from the previous step.
- After the test execution is finished, the developer investigates the SIP messages at the originating and receiving UAs, and the log messages at the application server, to decide whether the test has completed successfully or not. The log messages at the application server may contain any of the parameters of the SIP messages processed by the CSCF servlet object, or various indications about the status of the SIP call. Figure 5 shows an example of the standard message given by the emulation platform in case one of the UAs (i.e. originating or receiving) is not registered through the web interface before the test is initiated. In the figure, it can be seen that the INVITE request is processed by the CSCF servlet. The service chain could not be created because the SIP addresses used in the request were not added to the IMS domain, hence the CSCF is not aware of those SIP addresses.
The emulated S-CSCF does not fulfill all the functions of a real S-CSCF. Its purpose is limited to interpreting the test configuration and routing the SIP messages towards the applications under test to execute the service logic. Moreover, the emulated S-CSCF can act as a registrar in case the developer activates this feature for a test scenario. More details regarding the IMS and the WEB subsystems will be offered in the following two subsections.
**Figure 5. Log message at the application server**
3.5 The IMS subsystem
This section further dissects the implementation of the IMS subsystem; therefore, the module responsible for the web interface is not included in the explanations or figures. The SSAPI offers two programming constructs to create SIP applications. The first construct is the servlet object, which defines callback methods where developers can write the code to process SIP messages. The emulated S-CSCF in the IMS Core Emulation is implemented using the servlet object, and it is represented in Figure 4 as the CSCF SIP Servlet entity belonging to the IMS subsystem. To allow increased flexibility in managing the SIP sessions, the CSCF SIP Servlet is implemented using the Back-to-Back UA (B2BUA) object from the SSAPI.
The second programming construct that the SSAPI provides is the Application Router (AR) object. The AR has been introduced in v1.1 of the specification to cater for the need of performing complex application selection inside a SIP Servlet container. Every SIP Servlet container must have one AR deployed in order to be able to select a specific application. SailFin provides a default AR which selects the applications in alphabetical order. Since this selection method is not useful in our emulation, we have implemented another AR that replaces the default one provided by SailFin.
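To illustrate the contract such a custom AR has to fulfil, the sketch below implements the javax.servlet.sip.ar.SipApplicationRouter interface with a deliberately trivial strategy: pick the first deployed application once, then stop. This is not the AR shipped with the emulation, and the signatures should be double-checked against the JSR 289 v1.1 javadoc.

import java.io.Serializable;
import java.util.List;
import java.util.Properties;
import javax.servlet.sip.SipServletRequest;
import javax.servlet.sip.ar.*;

// Sketch of a custom application router; the selection logic is purely illustrative.
public class SingleAppRouter implements SipApplicationRouter {

    private volatile List<String> deployedApps;

    public void init() { }
    public void init(Properties properties) { init(); }
    public void destroy() { }

    public void applicationDeployed(List<String> newlyDeployedApplicationNames) {
        // Simplified: remember only the most recent deployment notification.
        this.deployedApps = newlyDeployedApplicationNames;
    }

    public void applicationUndeployed(List<String> undeployedApplicationNames) { }

    public SipApplicationRouterInfo getNextApplication(
            SipServletRequest initialRequest,
            SipApplicationRoutingRegion region,
            SipApplicationRoutingDirective directive,
            SipTargetedRequestInfo targetedRequestInfo,
            Serializable stateInfo) {
        // stateInfo is null the first time the AR is consulted for a request;
        // we use it as a flag meaning "an application has already been selected".
        if (stateInfo != null || deployedApps == null || deployedApps.isEmpty()) {
            return null; // no (further) application to invoke
        }
        return new SipApplicationRouterInfo(
                deployedApps.get(0),                           // application to invoke
                region,                                        // keep the routing region
                initialRequest.getFrom().getURI().toString(),  // subscriber URI
                new String[0],                                 // no external routes
                SipRouteModifier.NO_ROUTE,                     // container pushes no route
                Boolean.TRUE);                                 // stateInfo for next call
    }
}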
It may be the case that the developer would like to program their own application router to tailor a custom mechanism for the application selection process. This is not possible with the IMS Core Emulation since it is mandatory to deploy the provided AR. However, the emulation platform is itself a configurable system that is able to perform application selection for any number of applications using flexible rules, hence taking over the role of the AR. Instead of programming an AR to select applications in a certain manner, the developer can configure the IMS Core Emulation through the GUI to select the SIP applications to be executed. This is even more useful because, unlike the AR, the IMS Core Emulation is not limited to a single application server, but can trigger SIP applications located at any network address.
The emulation can be seen as a SIP message router, where the routing logic is dictated by the test configuration. The routing of SIP messages is possible by using the Route header. As defined in the SIP specification [15], the Route header contains a SIP URI holding the destination address of the message. Therefore, by
manipulating Route headers the emulation is capable of instructing the SIP requests to reach specific destinations (which for our purpose are SIP applications). The headers are added to the message by the CSCF SIP Servlet object. However, the use of Route headers represents a coarse grained method to route SIP messages because it can only point to a SIP Servlet container. Consider for example two test target applications deployed in the same container. With the help of Route headers a SIP message can be routed to the container holding the applications but there is no implicit way to point to the exact application that must be executed, out of the two deployed in the respective container. The AR object complements the routing logic, by advising the container about which application should receive the SIP message, as explained in the DFC section.
It has been mentioned that a test scenario can be distributed over multiple application servers without losing the benefits of the IMS Core emulation. To better understand the mechanisms implemented in the IMS subsystem of the emulation, we show a detailed SIP request routing and service execution in Figure 6.
For each SIP application that must be executed for a SIP request, two Route headers are added to the respective SIP message: the first Route header contains the address of the CSCF Servlet object, and the second Route header contains the SIP URI of the application. The headers are processed in the reverse order, such that the request is routed to the SIP application first and afterwards back to the CSCF for further processing.
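Expressed in SSAPI terms, the route manipulation described above amounts to two pushRoute() calls. The sketch below is illustrative only; the user and host names are placeholders, and the real emulation derives them from the test configuration.

import javax.servlet.sip.SipFactory;
import javax.servlet.sip.SipServletRequest;
import javax.servlet.sip.SipURI;

// Sketch: push the two Route headers for one service visit.
public class RoutePushExample {
    static void pushServiceRoutes(SipFactory sipFactory, SipServletRequest request) {
        SipURI cscfUri = sipFactory.createSipURI("cscf", "server1");
        SipURI appUri  = sipFactory.createSipURI("callblock", "server1");
        cscfUri.setLrParam(true);   // loose routing
        appUri.setLrParam(true);
        // pushRoute() puts the URI on top of the route set, and routes are consumed
        // top-down: push the CSCF first and the application second, so the request
        // visits the application first and then returns to the CSCF.
        request.pushRoute(cscfUri);
        request.pushRoute(appUri);
    }
}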
In this particular case, the first application that must be executed is CallBlock, hence the CSCF servlet object pushes two routes in the SIP message: sip:cscf@server1 and sip:callblock@server1. The request is forwarded again to the AR1 for the next application selection. The AR1 decides that the request must visit the CallBlock service, and it sends the message to the network interface. However, because the host part of the top most Route header points to the same server (server1), the request is looped back on the network interface and dispatched again to the AR1. The AR1 analyzes the user part (callblock@server1) of the top most Route header and instructs the SIP Servlet container to dispatch the SIP request to the CallBlock application. The request reaches the CallBlock application, and the service logic is executed.
The top most Route header is removed, thus the request is further routed based on the next Route header in the stack (i.e. sip:cscf@server1). When the CSCF receives the SIP request the second time, it retrieves the associated service chain from the session object. The next application in the chain that must be executed is VoiceMail, hence the CSCF pushes the corresponding Route headers. The request is routed in a similar way to the VoiceMail application, except that instead of being looped back at the network interface of server1, the request is routed over the network to server2. In the end the SIP request returns to the CSCF, the service chain is removed from the server session because there are no more applications to be executed, and the request is forwarded to its intended destination.
Apart from the headers defined in the SIP base standard [11], the emulation uses the “P-Served-User” header defined in [12]. This header informs a SIP application on behalf of which subscriber the service logic must be executed.
3.6 The Web subsystem
As depicted in Figure 4, the WEB subsystem module is part of the IMSCore software package. This module supports the web based Graphical User Interface (GUI) through which the test can be configured.
The interface offers great flexibility in configuring tests and modifying already existing configurations. This supports the idea of rapid deployment of various test scenarios. The web interface consists of four pages, each of them providing means to visualize and modify specific information.
The first page of the GUI provides means for configuring the SIP applications under test. The second page contains configuration options for the subscriber data (e.g. SIP address, name). If the CSCF servlet object receives a request with unknown SIP addresses, it cannot retrieve any test configuration from the database and hence cannot create the service chain. On the third page the developer can add a set of iFC for each subscriber. An iFC represents an aggregation of SPTs. SPTs can be viewed as rules expressed as logic sentences. An iFC is deemed valid when the associated SPTs yield the logic value of true. The iFC are assessed whenever the corresponding user is involved in a SIP session establishment. The fourth view shows the SPT set belonging to a single iFC. Figure 7 illustrates the user dialog for creating an SPT. The configuration of the SPTs is done according to the Disjunctive Normal Form as standardized by 3GPP [5]. The "Condition Negation" field indicates whether the actual SPT is negated or not. The "Group" field defines a subset of SPTs linked with the logical operator AND. Subsets of SPTs belonging to different groups are linked with the logical operator OR. The rest of the fields define the body of the SPT rule. For a SIP request, an SPT is valid (logically true) if and only if the fields of the SPT match the headers in the SIP request.
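As an illustration of this evaluation rule (and not of the emulation's internal data structures, which are not shown here), the following sketch evaluates a set of SPTs arranged in Disjunctive Normal Form; the Spt type is a hypothetical stand-in.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// DNF evaluation: SPTs within one group are AND-ed, groups are OR-ed,
// and each SPT may be individually negated.
public class IfcEvaluator {

    public record Spt(int group, boolean negated, boolean matchesRequest) { }

    /** An iFC fires if at least one group has all of its SPTs satisfied. */
    public static boolean ifcMatches(List<Spt> spts) {
        Map<Integer, List<Spt>> byGroup =
                spts.stream().collect(Collectors.groupingBy(Spt::group));
        return byGroup.values().stream()              // OR over groups
                .anyMatch(group -> group.stream()     // AND within a group
                        .allMatch(spt -> spt.matchesRequest() ^ spt.negated()));
    }
}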
All the data configured through the web interface can be modified at any point in time. Modifications take effect from the next test initiated after the change has been made.
4. TEST SCENARIO
To emphasize the strengths of the IMS Core Emulation platform, Figure 8 illustrates a more complex test scenario. A single application server is used to run all the software packages: the IMS Core Emulation and the SIP applications that have to be tested. The main advantage of using a single application server is the ease of use and a decrease in computing resource consumption. Applications A and B contain very simple logic to record statistics about the processed SIP requests. These two applications are used only to validate the behavior of the test target application, Call Forwarding.
For this particular test scenario the developer needs three user agents. The UA associated with Alice is the caller, Bob is the initial receiver of the call, and John is the receiver after the call is redirected by the Call Forwarding service. At first, the developer needs to configure the database with the right information. According to the test configuration, the services must be executed in the following order: for Alice no services must be executed; for Bob the first service to be executed is Call Forwarding. If the Call Forwarding application detects that Bob is available, then application A must be executed on behalf of Bob before the call is forwarded to the destination. If Bob is not available, Call Forwarding routes the call to John, and application B must be executed on behalf of John.
When the test is initiated from Alice's UA, the emulation platform fetches the information from the database and builds the service chain. It first routes the SIP request to the Call Forwarding application. As assumed, Call Forwarding decides to change the destination of the call to John. The emulation platform detects the change in destination and rebuilds the service chain according to the new destination; it adds the services associated with John to the chain instead of those associated with Bob. The processing continues with application B, which is executed on behalf of John, and the SIP message is finally forwarded to John's UA.
By investigating the log messages from applications A and B and from the UAs, the test is considered passed, meaning that the Call Forwarding application functions properly. Without the emulation platform, the developer would have to write a significant amount of extra code to implement the routing of messages between the applications in order to validate the behavior of the Call Forwarding service. Furthermore, different scenarios can be tried just by adjusting the test configuration in the GUI.
5. DISCUSSION AND FUTURE IMPROVEMENTS
The emulation platform provided in this project has been created for the purpose of functional testing of SIP applications. It offers capabilities for service composition, using network triggers, similarly to the S-CSCF entity from IMS. Furthermore, it integrates with the SailFin application server, offering a unified platform for creating, deploying and testing SIP applications. The main advantage over existing solutions is that developers get an integrated environment on the same development machine and the same application server used for the SIP applications. The emulation captures only a part of the functionality of the IMS S-CSCF entity. The implementation of the ISC interface follows the 3GPP standards specification in order to reduce potential mismatches between the emulation platform and the tested SIP applications.
The IMS Core Emulation has been designed to be easy to use. It requires one SailFin application server and a relational database system. Once the provided software packages are deployed in the application server, the emulation platform is up and running. Furthermore, it can trigger the execution of services running on application servers at remote locations, even if the SIP applications under test are created with different APIs (e.g. JAIN) or run in a different application server (e.g. Mobicents) [13, 14].
The GUI is intuitive and offers great flexibility in setting up and modifying test configurations. To create different test setups, the developer needs to configure a number of iFC and SPTs without being worried about the implementation of the extra code needed to orchestrate the execution of several SIP applications.
The most notable limitation that the IMS Core Emulation exhibits at the time of writing is its sensitivity to stress testing. The interactions between the application router and the CSCF Servlet object contribute significantly to the delay of SIP session establishment. Therefore, testing the performance of an application in terms of handled calls per second is going to be limited by the emulation platform rather than by the application itself.
Moreover, the implemented system does not include a DNS subsystem. Consequently, domain names cannot be resolved and are not supported; the entire configuration must be made with IP addresses and port numbers.
The present work was performed by the main author as his Master's thesis project. The resulting platform is expected to be used by students interested in service development for telecommunication networks at the Technical University of Denmark. As future work, we would like to extend the emulation platform to the Mobicents SIP Servlets container [14]. Both Mobicents and SailFin use the same standard (the SIP Servlet API) as a reference [3].
6. REFERENCES
[6] 3GPP TS 29.228 v11.4.0, IP Multimedia (IM) Subsystem Cx and Dx interfaces, June 2012.
1 Introduction
VMES (versatile maintenance expert system) is a fault diagnosis system which assists maintenance technicians in troubleshooting electronic circuits. Since a major objective of VMES is its versatility in diagnosing a wide range of faults in different devices, it takes the device-model-based approach. Given the model, i.e., the structure and function of a device, and the symptoms of its malfunction, VMES reasons on the model of the device to come up with the correct diagnosis. This approach contrasts with diagnosis using shallow rules that associate symptoms with causes, as traditionally used in medical diagnosis.
VMES uses the design model of the diagnosed device and the observed symptom to locate the faulty objects at the selected maintenance level. This usually requires additional information and VMES is capable of suggesting places for the technician to probe. According to the reported measurements, candidates are reordered and some are eliminated. An enhancement is also made to diagnose a sequential circuit by utilizing fault characteristics for candidate generation.
In fact, the current software maintains two versions of the VMES system, which we call VMES I and VMES II. VMES I makes use of the SNePS knowledge representation and reasoning system to represent the logical and physical descriptions of the device as semantic nets, and to diagnose the device using that information. VMES I performs the diagnosis for combinational circuits. VMES II incorporates the following features:
- Utilization of fault characteristics for candidate generation.
- Sequential Circuit Diagnosis
- Explanation Generation
- Working with Multiple Symptoms
The following changes have been made in the representation scheme:
• Decoupling of logical hierarchy from physical hierarchy. This makes it possible to have several more levels of reasoning in the device.
• No representation of wires. This saves more than 50% of the space. Further, the diagnostic power afforded by the representation of wires can be summed up in a single rule.
Currently, VMES I is used to diagnose a half adder circuit, and VMES II is able to diagnose a sequential multiplier and a M3A2 circuit. Therefore, activation of a specific version of VMES depends upon the application.
2 How to run the programs
2.1 Load source files
The host machine name and the top directory name can differ from machine to machine, so before you run the "load" command you have to do a little housekeeping, such as changing or adding the logical path-name for the host machine. If you load this software onto another host machine, look at the file "myload.lisp" in the top directory and try to modify the function "fs:add-logical-pathname-host". After doing that, all you have to do is execute one load command.
(load "hostname:directory-spec;myload.lisp")
2.2 Run the program
Call the top-level function “vmes” as follows:
(vmes)
This will create two graphics windows and one lisp listener window as shown in Fig. 1 in the Appendix. The top window is for drawing the circuit under test, and the second window shows the current suspect list, which changes dynamically as the diagnosis continues. The third window is a normal listener window in which you can type the commands and see the error messages if something goes wrong.
After creating the three windows, the system provides a menu so that you can choose a circuit you want to diagnose, go back to the listener, or quit the diagnosis.
If you type 'a', 'b', or 'c', the system draws the corresponding circuit in the top window and starts the diagnosis by asking a series of questions. If you type 'b', you will be running VMES I, and if you type 'a' or 'c', you are running VMES II. You can tell the two versions apart by the different messages on the screen.
2.3 Running VMES I
At the beginning of a diagnostic session, the user is asked to set a couple of system parameters: VMES.IML and VMES.ConnCheck. VMES.IML, the system parameter for the intended maintenance level, can be set to one of two maintenance levels, "field" or "depot", with "field" being higher than "depot". VMES.ConnCheck is the system parameter which allows a user to make the assumption that all wires and POCONs (points of contact) are intact.
After these two parameters are set, VMES requests the initial symptom of the device by asking the user for the values of its primary inputs and outputs. The default number system of VMES is decimal, and binary numbers can be entered with the prefix "B". Violations are derived using the functional descriptions of the device, and candidates are ordered according to their relationships with the violations. Throughout a diagnostic session, VMES maintains an ordered list of candidates which contains potential faulty components. When the list is not empty, VMES suggests that the user check the first candidate by measuring its inputs and outputs. The reported values are used to reorder and eliminate candidates. This process continues until the list becomes empty or all the remaining candidates are in the same physical component at the intended maintenance level. In the latter case, the user is asked if the diagnosis can be terminated, since it may or may not be necessary to distinguish faults among the remaining candidates.
Since diagnostic reasoning is carried out on the logical model of a device, VMES always requests the value of a logical port which may correspond to several otherwise unrelated physical pins of various chips. Through the use of cross-links between logical and physical structures, VMES is able to inform the user which physical ports should be measured for a logical port.
The physical structure of a device is also used for repair suggestions. At the end of a diagnostic session, VMES suggests a repair plan to the user according to the type of the faulty object. If the faulty object is a common component, VMES simply suggests that the user replace its corresponding physical part. If it is a wire, the corresponding physical wires are identified for repair. Note that a logical wire may correspond to several physical wires, e.g., a 4-bit logical wire is realized by four physical wires on a printed circuit board. If it is a POCON, the location of the bad contact point is given to the user by referring to the involved physical port of the non-wire component.
2.4 Running VMES II
VMES II has been implemented in Common Lisp without using SNePS. This has been dictated by the necessity to use re-assignable variables to store the values of sequential circuits. A result of this change is that VMES II is much faster in its responses than VMES I.
Associated with each port of every device is a list of measured, expected
and test values. All the model information and port values are stored as property lists attached to the name of the device. This results in fast access to information. However, it also necessitates clean-up after each run.
VMES II expects all its values in decimal form. It automatically converts the values to binary form and back. Though this conversion adds considerably to the execution time, it is deemed a worthwhile feature, from the point of view of ease of use.
The reasoning code of VMES II is completely embedded in the recursive function `deep-reason()`. It checks for violation of the output of the device and for justification of the unexpected output with the help of the measured inputs. If neither of these succeeds, the algorithm goes down to the level of the subdevices of the device (if they exist), or declares the device faulty. Some of the other prominent code segments are for suspect generation by backward propagation of the violated outputs and for forward propagation of input values for simulation purposes.
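The control flow can be summarised by the following schematic Common Lisp sketch. All helper function names (output-violated-p, inputs-justify-output-p, subdevices-of, ordered-suspects) are hypothetical placeholders and do not appear under these names in the actual VMES II sources.

;; Schematic rendering of the deep-reason control flow described above.
(defun deep-reason (device)
  (cond
    ;; No violation at the outputs: the device behaves as expected.
    ((not (output-violated-p device))
     (format t "~&~A shows no problem~%" device))
    ;; The measured inputs justify the unexpected output: the fault lies elsewhere.
    ((inputs-justify-output-p device)
     (format t "~&~A is justified by its inputs~%" device))
    ;; Otherwise descend into the subdevices, or blame the device itself.
    (t
     (let ((subs (subdevices-of device)))
       (if subs
           (dolist (sub (ordered-suspects subs))
             (deep-reason sub))
           (format t "~&VMES DIAGNOSES : ~A is faulty.~%" device))))))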
Usage details are as follows. The system asks whether the explanation generator should be activated and whether the user has multiple symptoms to proffer. After this, the system asks the user for the input and output values. If the system determines that the subdevices of the device should be brought up for questioning, the procedure recurses over each of the subdevices that occur in the ordered candidate list.
Two aspects need further elaboration. The explanation generator writes canned messages into a file named explain.lsp. The sequence of these messages provides a logical explanation of the diagnosis performed. The explanation is echoed to the screen at the end of the session. Answering the question about explanation essentially switches this feature on or off.
The other question about multiple symptoms works as follows. If the user has many symptoms (complete input-output sets) to provide for the device, he answers 'yes' to this question. Thereupon, the system proceeds to ask for the first set of symptoms. After all the input and output values have been asked for the device, the user is prompted to come up with the second set of symptoms and so on. If the user had initially responded that he did not have multiple symptoms, the system takes up diagnosis after asking for the first set of values.
3 Sample Runs
> (vmes)
Select a circuit you want to diagnose:
a) M3A2 b) Half Adder c) Sequential Multiplier d) test circuit
e) goto listener f) goto listener without clearing windows q) quit
Type [a-f], or q (default a): a
Drawing M3A2...
Diagnosing M3A2...
Do you want a summary/explanation of diagnosis? n
Do you want to use multiple symptoms? n
What is the MEASURED value of the port OUT1 of the device M3A2-1
What is the MEASURED value of the port OUT2 of the device M3A2-1
What is the MEASURED value of the port IN1 of the device M3A2-1
What is the MEASURED value of the port IN2 of the device M3A2-1
What is the MEASURED value of the port IN3 of the device M3A2-1
Type any key to continue:
Type any key to continue:
Type any key to continue:
What is the MEASURED value of the port OUT1 of the device MULT-2
What is the MEASURED value of the port IN1 of the device MULT-2
What is the MEASURED value of the port IN2 of the device MULT-2
VMES DIAGNOSES: MULT-2 is faulty.
VMES DIAGNOSES: M3A2-1 is faulty.
Type any key to go back to top-level diagnosis:
Select a circuit you want to diagnose:
a) M3A2 b) Half Adder c) Sequential Multiplier d) test circuit
e) goto listener f) goto listener without clearing windows q) quit
Type [a-f], or q (default a): b
Drawing a half adder....
Diagnosing a half adder....
Please select IML (intended maintenance level):
D(epot
F(ield
* D/F? d
VMES.IML set to DEPOT level
Assume all wires/POCONs are intact?
* y/n? y
VMES.ConnCheck set to nil (False)
-diagnose H001-
What’s the value of port OUT1 of H001
Equivalent Physical Port: pin 3 of H001
* [value]/nil? 1
What’s the value of port OUT2 of H001
Equivalent Physical Port: pin 4 of H001
* [value]/nil? 1
What’s the value of port IN1 of H001
Equivalent Physical Port: pin 1 of H001
* [value]/nil? 1
What’s the value of port IN2 of H001
Equivalent Physical Port: pin 2 of H001
* [value]/nil? 1
@@ searching vio-expct ...
@@ vio-outputs found: ports (OUT1) of H001
@@ getting suspects for H001 ...
@@ suspects created:
@@ (H001-01 H001-01 H001-A2 H001-A1)
Type any key to continue:
-diagnose H001-01-
What’s the value of port OUT of H001-01
Equivalent Physical Port: pin 3 of H001-U2
* [value]/nil? 1
@@ H001-01 shows no problem
-diagnose H001-01-
What’s the value of port OUT of H001-01
Equivalent Physical Port: pin 2 of H001-U3
* [value]/nil? 1
What’s the value of port IN of H001-01
Equivalent Physical Port: pin 1 of H001-U3
* [value]/nil? 1
@@ searching vio-expct ...
@@ vio-outputs found: ports (OUT) of H001-11
@@ H001-11 is faulty
@@ by vio-expct & unit-at-IML
@@ part to be replaced: H001-U3
Terminate the diagnosis?
* y/n? y
VMES manually terminated
>>>>> I GOT THE FAULTY PARTS AS >>>>>
$$ Repair Order: replace H001-U3 (type: S4F04)
Type any key to go back to top-level diagnosis:
Select a circuit you want to diagnose:
a) M3A2 b) Half Adder c) Sequential Multiplier d) test circuit
e) goto listener f) goto listener without clearing windows q) quit
Type [a-f], or q (default a): b
Drawing a half adder....
Diagnosing a half adder....
Please select IML (intended maintenance level):
-----------------------------------------------
D(epot
F(ield
-----------------------------------------------
* D/F? d
VMES.IML set to DEPOT level
Assume all wires/POCONs are intact?
* y/n? n
VMES.ConnCheck set to t (True)
diagnose H001
What's the value of port OUT1 of H001
Equivalent Physical Port: pin 3 of H001
* [value]/nil? 0
What's the value of port OUT2 of H001
Equivalent Physical Port: pin 4 of H001
* [value]/nil? 0
What's the value of port IN1 of H001
Equivalent Physical Port: pin 1 of H001
* [value]/nil? 1
What's the value of port IN2 of H001
Equivalent Physical Port: pin 2 of H001
* [value]/nil? 0
@@ searching vio-expct ...
@@ vio-outputs found: ports (OUT1) of H001
@@ getting suspects for H001 ...
@@ suspects created:
@@ (H001-W2 H001-W1 H001-W6 H001-W5 H001-W4 H001-U1 H001-U2 H001-A2 H001-W3 H001-A1)
Type any key to continue:
@@ diagnose wire: H001-W2
What's the value of port 3 of H001-W2
The WIRE-END connected to: pin 2 of H001-U1
* [value]/nil? 0
What's the value of port 2 of H001-W2
The WIRE-END connected to: pin 2 of H001-U2
* [value]/nil? 0
@@ H001-W2 shows no problem
@@ diagnose wire: H001-W1
What's the value of port 2 of H001-W1
The WIRE-END connected to: pin 1 of H001-U2
* [value]/nil? 1
What's the value of port 3 of H001-W1
The WIRE-END connected to: pin 1 of H001-U1
* [value]/nil? 1
@@ H001-W1 shows no problem
@@ diagnose wire: H001-W6
What's the value of port 1 of H001-W6
The WIRE-END connected to: pin 6 of H001-U1
* [value]/nil? 0
@@ H001-W6 shows no problem
@@ diagnose wire: H001-W5
What's the value of port 2 of H001-W5
The WIRE-END connected to: pin 5 of H001-U1
* [value]/nil? 1
@@ H001-W5 shows no problem
@@ suspects eliminated
@@ eliminated suspects: (H001-W1 H001-W3 H001-A1)
@@ remaining suspects: (H001-W4 H001-01 H001-A2)
Type any key to continue:
@@ diagnose wire: H001-W4
What's the value of port 2 of H001-W4
The WIRE-END connected to: pin 4 of H001-U1
* [value]/nil? 0
What's the value of port 1 of H001-W4
The WIRE-END connected to: pin 3 of H001-U2
* [value]/nil? 1
@@ suspects reordered:
@@ (H001-A2 H001-01)
Type any key to continue:
@@ wire H001-W4 is faulty
@@ by showing different values at wire ends
Terminate the diagnosis?
* y/n? y
VMES manually terminated
>>>>> I GOT THE FAULTY PARTS AS >>>>>
$$ Repair Order: fix the wire connecting
port 4 of H001-U1
port 3 of H001-U2
Type any key to go back to top-level diagnosis:
Select a circuit you want to diagnose:
a) M3A2 b) Half Adder c) Sequential Multiplier d) test circuit
e) goto listener f) goto listener without clearing windows q) quit
Type [a-f], or q (default a): c
Drawing a sequential multiplier....
Diagnosing a sequential multiplier....
Do you want a summary/explanation of diagnosis?
n
Do you want to use multiple symptoms?
n
What is the MEASURED value of the port OUT1 of the device SMULT-1 49
What is the MEASURED value of the port IN1 of the device SMULT-1 6
What is the MEASURED value of the port IN2 of the device SMULT-1 8
Type any key to continue:
Type any key to continue:
What is the MEASURED value of the port OUT1 of the device QREG8-1 49
What is the MEASURED value of the port IN1 of the device QREG8-1 49
VMES DIAGNOSES : QREG8-1 is not faulty.
What is the MEASURED value of the port OUT1 of the device SADDER-1 49
What is the MEASURED value of the port IN1 of the device SADDER-1 0
What is the MEASURED value of the port IN2 of the device SADDER-1 48
"Interconnection Fault Detected"
VMES DIAGNOSES : SADDER-1 is faulty.
VMES DIAGNOSES : SMULT-1 is faulty.
Type any key to go back to top-level diagnosis:
Select a circuit you want to diagnose :
a) M3A2 b) Half Adder c) Sequential Multiplier d) test circuit
e) goto listener f) goto listener without clearing windows q) quit
Type [a-f], or q (default a): q
Quitting the diagnosis....
NIL
>
4 Figure illustration
Some TI snapshots are attached in the following pages, and brief explanations for them are given here.
Figure 1.
Initial setup of the three windows. The topmost window is for drawing the circuit under test, and the current suspect list is displayed in the second window. The bottom window is a lisp listener which shows a menu for the user to choose a circuit.
Figure 2.
A snapshot during the diagnosis of M3A2 after a suspect list is generated from the observed value of output port 1.
Figure 3.
A snapshot during the diagnosis of M3A2 after mult-2 component is found faulty. Notice that mult-2 is highlighted by a bold line on the top window.
Figure 4.
A snapshot during the diagnosis of a half adder after an initial suspect list is generated.
Figure 5.
A snapshot during the diagnosis of a sequential multiplier after the current suspect list is generated by observed output value.
Select a circuit you want to diagnose:
- a) M3A2
- b) Half Adder
- c) Sequential Multiplier
- d) test circuit
- e) goto listener
- f) goto listener without clearing windows
- q) quit
Type [a-f], or q (default a):
What is the MEASURED value of the port IN1 of the device M3A2-1?
What is the MEASURED value of the port IN2 of the device M3A2-1?
What is the MEASURED value of the port IN3 of the device M3A2-1?
Type any key to continue:
Figure 2.
What is the measured value of the port IN1 of the device MULT-2?
What is the measured value of the port IN2 of the device MULT-2?
VMES DIAGNOSES: MULT-2 is faulty.
VMES DIAGNOSES: M3A2-1 is faulty.
Type any key to go back to top-level diagnosis:
Figure 3.
Figure 4.
What is the measured value of the port IN1 of the device SMULT-1?
What is the measured value of the port IN2 of the device SMULT-1?
Type any key to continue:
Type any key to continue:
Figure 5.
COLORIST, WRITE YOUR ARTICLES OR BOOKS IN A COLORFUL WAY
JINWEN XU
[email protected]
July 2021, Beijing
Abstract
colorist is a series of styles and classes for you to typeset your articles or books in a colorful manner. The original intention in designing this series was to write drafts and notes that look colorful yet not dazzling. With the help of the ProjLib toolkit, also developed by the author, the classes provided here have multi-language support, preset theorem-like environments with clever reference support, and many other functionalities. Notably, using these classes, one can organize the author information in the AMS fashion, which makes it easy to switch to journal classes later for publication.
Finally, this documentation is typeset using the colorart class (with the option allowbf). You can think of it as a short introduction and demonstration.
Contents
Before you start
1 Introduction
2 Usage and examples
  2.1 How to load it
  2.2 Example - colorart
  2.3 Example - colorbook
3 The options
4 Instructions by topic
  4.1 Language configuration
  4.2 Theorems and how to reference them
  4.3 Define a new theorem-like environment
  4.4 Draft mark
  4.5 Title, abstract and keywords
5 Known issues
Before you start
In order to use the package or classes described here, you need to:
• install TeX Live or MiKTeX of the latest possible version, and make sure that colorist and projlib are correctly installed in your TeX system.
• be familiar with the basic usage of LaTeX, and know how to compile your document with pdfLaTeX, XeLaTeX or LuaLaTeX.
Corresponding to: colorist 2021/07/30
1 Introduction
colorist is a series of styles and classes for you to typeset your articles or books in a colorful manner. The original intention in designing this series was to write drafts and notes that look colorful yet not dazzling.
The entire collection includes colorist.sty, which is the main style shared by all of the following classes; colorart.cls for typesetting articles; and colorbook.cls for typesetting books. They compile with any major TeX engine, with native support for English, French, German, Italian, Portuguese (European and Brazilian) and Spanish typesetting via \UseLanguage (see the instructions below for details).
You can also find lebhart and beaulivre on CTAN. They are the enhanced versions of colorart and colorbook with Unicode support. With this, they have access to more beautiful fonts, and they additionally have native support for Chinese, Japanese and Russian typesetting. On the other hand, they need to be compiled with XeLaTeX or LuaLaTeX (not pdfLaTeX).
With the help of the ProjLib toolkit, also developed by the author, the classes provided here have multi-language support, preset theorem-like environments with clever reference support, and many other functionalities such as draft marks, an enhanced author information block, mathematical symbols and shortcuts, etc. Notably, using these classes, one can organize the author information in the AMS fashion, which makes it easy to switch to journal classes later for publication. For more detailed information, you can refer to the documentation of ProjLib by running texdoc projlib on the command line.
2 Usage and examples
2.1 How to load it
You can directly use colorart or colorbook as your document class. In this way, you can directly begin writing your document, without having to worry about the configurations.
\begin{verbatim}
\documentclass{colorart}    or    \documentclass{colorbook}
\end{verbatim}
\textbf{Tip}
You may wish to use lebhart or beaulivre instead, which should produce better results. All the examples below using colorart or colorbook can be adapted to lebhart and beaulivre respectively, without further modification.
You can also use the default classes article or book, and load the colorist package. This way, only the basic styles are set, and you can thus use your preferred fonts and page layout. All the features mentioned in this article are provided.
\begin{verbatim}
\documentclass{article}    or    \documentclass{book}
\usepackage{colorist}
\end{verbatim}
2.2 Example - colorart
Let's first look at a complete example of colorart (the same works for lebhart).
\begin{verbatim}
\documentclass{colorart}
\usepackage{ProjLib}

\UseLanguage{French}

\begin{document}

\title{⟨title⟩}
\author{⟨author⟩}
\date{\PLdate{2022-04-01}}
\maketitle

\begin{abstract}
    Ceci est un résumé. \dnf<Plus de contenu est nécessaire.>
\end{abstract}
\begin{keyword}
    AAA, BBB, CCC, DDD, EEE
\end{keyword}

\section{Un théorème}

\begin{theorem}\label{thm:abc}
    Ceci est un théorème.
\end{theorem}
Référence du théorème: \cref{thm:abc}

\end{document}
\end{verbatim}
If you find this example a little complicated, don’t worry. Let’s now look at this example piece by piece.
2.2.1 Initialization
\documentclass{colorart}
\usepackage{ProjLib}
Initialization is straightforward. The first line loads the document class colorart, and the second line loads the ProjLib toolkit to obtain some additional functionalities.
2.2.2 Set the language
\UseLanguage{French}
This line indicates that French will be used in the document (by the way, if only English appears in your article, then there is no need to set the language). You can also switch the language in the same way later in the middle of the text. Supported languages include Simplified Chinese, Traditional Chinese, Japanese, English, French, German, Spanish, Portuguese, Brazilian Portuguese and Russian.¹
For detailed description of this command and more related commands, please refer to the section on the multi-language support.
2.2.3 Title, author information, abstract and keywords
\title{⟨title⟩}
\author{⟨author⟩}
\date{\PLdate{2022-04-01}}
\maketitle
\begin{abstract}
⟨abstract⟩
\end{abstract}
¹The languages Simplified Chinese, Traditional Chinese, Japanese and Russian require Unicode support, and thus the classes lebhart or beaulivre.
This part begins with the title and author information block. The example shows the basic usage, but in fact, you can also write:
\author{⟨author 1⟩}
\address{⟨address 1⟩}
\email{⟨email 1⟩}
\author{⟨author 2⟩}
\address{⟨address 2⟩}
\email{⟨email 2⟩}
...
In addition, you may also write in the AMS fashion, i.e.:
\title{⟨title⟩}
\author{⟨author 1⟩}
\address{⟨address 1⟩}
\email{⟨email 1⟩}
\author{⟨author 2⟩}
\address{⟨address 2⟩}
\email{⟨email 2⟩}
\date{\PLdate{2022-04-01}}
\subjclass{*****}
\keywords{⟨keywords⟩}
\begin{abstract}
⟨abstract⟩
\end{abstract}
\maketitle
2.2.4 Draft marks
\dnf{⟨some hint⟩}
When you have some places that have not been finished yet, you can mark them with this command, which is especially useful during the draft stage.
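For example, following the angle-bracket form used in the colorart example earlier (the hint text here is arbitrary):

\begin{verbatim}
Here is a first sketch of the argument. \dnf<The complete proof still needs to be written.>
\end{verbatim}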
2.2.5 Theorem-like environments
\begin{theorem}\label{thm:abc}
Ceci est un théorème.
\end{theorem}
Référence du théorème: \cref{thm:abc}
Commonly used theorem-like environments have been pre-defined. Also, when referencing a theorem-like environment, it is recommended to use \cref{⟨label⟩} — in this way, there is no need to explicitly write down the name of the corresponding environment every time.
Tip
If you wish to switch to the standard class later, just replace the first two lines with:
\documentclass{article}
\usepackage[a4paper,margin=1in]{geometry}
\usepackage[hidelinks]{hyperref}
\usepackage[palatino,amsfashion]{ProjLib}
or to use the AMS class:
\documentclass{amsart}
\usepackage[a4paper,margin=1in]{geometry}
\usepackage[hidelinks]{hyperref}
\usepackage[palatino]{ProjLib}
Tip
If you like the current document class, but want a more “plain” style, then you can use the option classical, like this:
\documentclass[classical]{colorart}
2.3 Example - colorbook
Now let’s look at an example of colorbook (the same works for beaulivre).
\documentclass{colorbook}
\usepackage{ProjLib}
\UseLanguage{French}
\begin{document}
\frontmatter
\begin{titlepage}
\end{titlepage}
\tableofcontents
\mainmatter
\part{⟨part title⟩}
\parttext{⟨text after part title⟩}
\chapter{⟨chapter title⟩}
\section{⟨section title⟩}
There is no much differences with colorart, only that the title and author information should be typeset within the titlepage environment. Currently no default titlepage style is given, since the design of the title page is a highly personalized thing, and it is difficult to achieve a result that satisfies everyone.
In the next section, we will go through the options available.
3 The options
colorist offers the following options:
- The language options EN / english / English, FR / french / French, etc.
- For the option names of a specific language, please refer to \textit{language name} in the next section. The first specified language will be used as the default language.
- The language options are optional, mainly for increasing the compilation speed. Without them the result would be the same, only slower.
- draft or fast
- The option fast enables a faster but slightly rougher style, main differences are:
* Use simpler math font configuration;
* Do not use \texttt{hyperref};
* Enable the fast mode of \texttt{ProjLib} toolkit.
\textbf{Tip}
During the draft stage, it is recommended to use the fast option to speed up compilation. When in fast mode, there will be a watermark “DRAFT” to indicate that you are currently in the draft mode.
- allowbf
- Allow boldface. When this option is enabled, the main title, the titles of all levels and the names of theorem-like environments will be bolded.
- runin
- Use the “runin” style for \texttt{\subsubsection}
- puretext or nothms
- Pure text mode. Does not load theorem-like environments.
- nothmnum, thmnum or thmnum=⟨counter⟩
- Theorem-like environments will not be numbered / numbered in order 1, 2, 3... / numbered within ⟨counter⟩. Here, ⟨counter⟩ should be a built-in counter (such as \texttt{subsection}) or a custom counter defined in the preamble. If no option is used, they will be numbered within \texttt{chapter} (book) or \texttt{section} (article).
- regionalref, originalref
- When referencing, whether the name of the theorem-like environment changes with the current language. The default is \texttt{regionalref}, i.e., the name corresponding to the current language is used; for example, when referencing a theorem-like environment in English context, the names “Theorem, Definition...” will be used no matter which language context the original environment is in. If \texttt{originalref} is enabled, then the name will always remain the same as the original place; for example, when referencing a theorem written in the French context, even if one is currently in the English context, it will still be displayed as “Théorème”.
- In fast mode, the option \texttt{originalref} will have no effect.
Additionally, colorart and colorbook offer the following options:
- a4paper or b5paper
- Optional paper size. The default paper size is 8.5in × 11in.
- palatino, times, garamond, noto, biolinum and useosf
- Font options. As the name suggests, the font with the corresponding name will be loaded.
- The \texttt{useosf} option is used to enable old-style figures.
4 Instructions by topic
4.1 Language configuration
colorart provides multi-language support, including Simplified and Traditional Chinese, Japanese, English, French, German, Italian, Portuguese (European and Brazilian), Spanish and Russian. The language can be selected with the following macros:
- \UseLanguage{⟨language name⟩} is used to specify the language. The corresponding settings of that language will be applied after it. It can be used either in the preamble or in the main body. When no language is specified, “English” is selected by default.
- \UseOtherLanguage{⟨language name⟩}{⟨content⟩}, which uses the settings of the specified language to typeset ⟨content⟩. Compared with \UseLanguage, it will not modify the line spacing, so the line spacing remains stable when CJK and Western texts are mixed.
⟨language name⟩ can be any of the following (it is not case sensitive; for example, French and french have the same effect):
- Simplified Chinese: CN, Chinese, SChinese or SimplifiedChinese
- Traditional Chinese: TC, TChinese or TraditionalChinese
- English: EN or English
- French: FR or French
- German: DE, German or ngerman
- Italian: IT or Italian
- Portuguese: PT or Portuguese
- Portuguese (Brazilian): BR or Brazilian
- Spanish: ES or Spanish
- Japanese: JP or Japanese
- Russian: RU or Russian
In addition, you can also add new settings to selected language:
- \AddLanguageSetting{⟨settings⟩}
- Add ⟨settings⟩ to all supported languages.
- \AddLanguageSetting{⟨language name⟩}{⟨settings⟩}
- Add ⟨settings⟩ to the selected language ⟨language name⟩.
For example, \AddLanguageSetting{German}{\color{orange}} makes all German text displayed in orange (of course, one then needs to add \AddLanguageSetting{\color{black}} in order to correct the text color in the other languages).
4.2 Theorems and how to reference them
Environments such as definition and theorem have been preset and can be used directly.
More specifically, preset environments include: assumption, axiom, conjecture, convention, corollary, definition, definition–proposition, definition–theorem, example, exercise, fact, hypothesis, lemma, notation, observation, problem, property, proposition, question, remark, theorem, and the corresponding unnumbered version with an asterisk * in the name. The titles will change with the current language. For example, theorem will be displayed as “Theorem” in English mode and “Théorème” in French mode.
When referencing a theorem-like environment, it is recommended to use \cref{⟨label⟩}. In this way, there is no need to explicitly write down the name of the corresponding environment every time.
Example
\begin{definition}[Strange things] \label{def: strange} ...
will produce
**Definition 4.1** (Strange things) This is the definition of some strange objects. There is approximately a one-line space before and after the theorem environment, and there will be a symbol to mark the end of the environment.
\cref{def: strange} will be displayed as: **Definition 4.1**.
After using \UseLanguage{French}, a theorem will be displayed as:
**Théorème 4.2** (Inutile) Un théorème en français.
By default, when referenced, the name of the theorem matches the current context. For example, the definition above will be displayed in French in the current French context: **Définition 4.1** et **Théorème 4.2**. If you want the name of the theorem to always match the language of the context in which the theorem is located, you can add originalref to the global options.
The following are the main styles of theorem-like environments:
**Theorem 4.3** Theorem style: theorem, proposition, lemma, corollary, ...
Proof | Proof style
Remark style
**Conjecture 4.4** Conjecture style
**Example** Example style: example, fact, ...
**Problem 4.5** Problem style: problem, question, ...
For aesthetics, adjacent definitions will be connected together automatically:
**Definition 4.6** First definition.
**Definition 4.7** Second definition.
4.3 Define a new theorem-like environment
If you need to define a new theorem-like environment, you must first define the name of the environment in the language to use:
* \NameTheorem[⟨language name⟩]{⟨name of environment⟩}{⟨name string⟩}
For ⟨language name⟩, please refer to the section on language configuration. When ⟨language name⟩ is not specified, the name will be set for all supported languages. In addition, environments with or without asterisk share the same name, therefore, \NameTheorem{envname*}{...} has the same effect as \NameTheorem{envname}{...}.
And then define this environment in one of following five ways:
\CreateTheorem*{⟨name of environment⟩}
- Define an unnumbered environment ⟨name of environment⟩
\CreateTheorem{⟨name of environment⟩}
- Define a numbered environment ⟨name of environment⟩, numbered in order 1, 2, 3, ...
\CreateTheorem{⟨name of environment⟩}{⟨numbered like⟩}
- Define a numbered environment ⟨name of environment⟩, which shares the counter ⟨numbered like⟩
\CreateTheorem{⟨name of environment⟩}<⟨numbered within⟩>
- Define a numbered environment ⟨name of environment⟩, numbered within the counter ⟨numbered within⟩
\CreateTheorem{⟨name of environment⟩}<⟨existed environment⟩>
\CreateTheorem*{⟨name of environment⟩}<⟨existed environment⟩>
- Identify ⟨name of environment⟩ with ⟨existed environment⟩ or ⟨existed environment⟩*.
- This method is usually useful in the following two situations:
1. To use a more concise name. For example, with \CreateTheorem{thm}{theorem}, one can then use the name thm to write theorems.
2. To remove the numbering of some environments. For example, one can remove the numbering of the remark environment with \CreateTheorem{remark}{remark*}.
\textbf{Tip}
This macro utilizes the features of \texttt{amsthm} internally, so the traditional \texttt{theoremstyle} is also applicable to it. One only needs to declare the style before the relevant definitions.
Here is an example. The following code:
\NameTheorem[EN]{proofidea}{Idea}
\CreateTheorem*{proofidea*}
\CreateTheorem{proofidea}<subsection>
defines an unnumbered environment \texttt{proofidea*} and a numbered environment \texttt{proofidea} (numbered within subsection) respectively. They can be used in the English context. The effect is as follows:
Idea    The \texttt{proofidea*} environment.
Idea 4.3.1    The \texttt{proofidea} environment.
4.4 Draft mark
You can use \dnf to mark the unfinished part. For example:
- \dnf or \dnf<…>. The effect is: To be finished #1 or To be finished #2: …
The prompt text changes according to the current language. For example, it will be displayed as Pas encore fini #3 in French mode.
Similarly, there is \needgraph:
- \needgraph or \needgraph<…>. The effect is:
A graph is needed here #1
or
A graph is needed here #2: …
The prompt text changes according to the current language. For example, in French mode, it will be displayed as Il manque une image ici #3.
4.5 Title, abstract and keywords
colorart has both the features of the standard classes and those of the \textsc{AMS} classes. Therefore, the title part can either be written in the usual way, in accordance with the standard class \textit{article}:
\begin{verbatim}
\title{⟨title⟩}
\author{⟨author⟩\thanks{⟨text⟩}}
\date{⟨date⟩}
\maketitle
\begin{abstract}
⟨abstract⟩
\end{abstract}
\begin{keyword}
⟨keywords⟩
\end{keyword}
\end{verbatim}
or written in the way of \textit{\textsc{AMS}} classes:
\begin{verbatim}
\title{⟨title⟩}
\author{⟨author⟩}
\thanks{⟨text⟩}
\address{⟨address⟩}
\email{⟨email⟩}
\date{⟨date⟩}
\keywords{⟨keywords⟩}
\subjclass{⟨subjclass⟩}
\begin{abstract}
⟨abstract⟩
\end{abstract}
\maketitle
\end{verbatim}
The author information can contain multiple groups, written as:
\begin{verbatim}
\author{⟨author 1⟩}
\address{⟨address 1⟩}
\email{⟨email 1⟩}
\author{⟨author 2⟩}
\address{⟨address 2⟩}
\email{⟨email 2⟩}
...
\end{verbatim}
Among them, the mutual order of \address, \curraddr and \email is not important.
5 Known issues
- The font settings are still not perfect.
- Since many features are based on the \textit{ProjLib} toolkit, colorist (and hence colorart, lebhart and colorbook, beaulivre) inherits all its problems. For details, please refer to the “Known Issues” section of the \textit{ProjLib} documentation.
- The error handling mechanism is incomplete: there is no corresponding error prompt when some problems occur.
- There are still many things that can be optimized in the code.
Image representation
Slides from Subhransu Maji and many others
Lecture outline
- Origin and motivation of the “bag of words” model
- Algorithm pipeline
- Extracting local features
- Learning a dictionary — clustering using k-means
- Encoding methods — hard vs. soft assignment
- Spatial pooling — pyramid representations
- Similarity functions and classifiers
Figure from Chatfield et al., 2011
Bag of features
Origin 1: Texture recognition
- Texture is characterized by the repetition of basic elements or *textons*
- For stochastic textures, it is the identity of the textons, not their spatial arrangement, that matters
Origin 2: Bag-of-words models
- Orderless document representation: frequencies of words from a dictionary
Salton & McGill (1983)
Local feature extraction
- Regular grid or interest regions
Local feature extraction
Detect patches
Normalize patch
Compute descriptor
Choices of descriptor:
- SIFT
- Filterbank histograms
- The patch itself
Local feature extraction
Extract features from many images
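The pipeline just outlined (detect patches, normalize, compute a descriptor such as SIFT) can be sketched in a few lines. The snippet below is an illustrative sketch, not from the slides; it assumes the opencv-python package (version 4.4 or later, where `cv2.SIFT_create` is available) and NumPy, and the function name is our own.

```python
import cv2
import numpy as np

def extract_descriptors(image_paths):
    """Detect keypoints and compute 128-D SIFT descriptors for each image."""
    sift = cv2.SIFT_create()
    all_descriptors = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue  # skip unreadable files
        _, descriptors = sift.detectAndCompute(img, None)
        if descriptors is not None:
            all_descriptors.append(descriptors)
    # Stack into a single (N, 128) array, ready for dictionary learning
    return np.vstack(all_descriptors)
```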
Learning a dictionary
- Cluster the extracted local features; the cluster centers form the visual vocabulary
Slide credit: Josef Sivic
Review: K-means clustering
- Want to minimize sum of squared Euclidean distances between features $x_i$ and their nearest cluster centers $m_k$
$$D(X,M) = \sum_{\text{cluster } k} \sum_{\text{point } i \text{ in cluster } k} (x_i - m_k)^2$$
Algorithm:
- Randomly initialize K cluster centers
- Iterate until convergence:
- Assign each feature to the nearest center
- Recompute each cluster center as the mean of all features assigned to it
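As a concrete illustration of this two-step loop, here is a minimal NumPy sketch (the function name and interface are ours, not from the slides); in practice one would typically use a library implementation such as scikit-learn's KMeans on the stacked descriptors.

```python
import numpy as np

def kmeans(features, k, n_iters=50, seed=0):
    """Cluster (N, D) features into k centers with Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: nearest center under squared Euclidean distance
        dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of its assigned features
        new_centers = np.array([
            features[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```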
Example codebook
Source: B. Leibe
Another codebook
Encoding methods
- Assigning words to features
- Hard assignment: the visual vocabulary induces a partition of the feature space, and each feature is assigned to its single nearest word. Similar features can fall into different cells, which leads to a large quantization error.
- Soft assignment:
\[
\alpha_i \propto e^{-f(d(x, c_i))}
\]
assigns high weights to centers that are close; in practice the weights are non-zero only for the k-nearest neighbors.
- Example encodings of one feature over a three-word vocabulary: hard assignment gives (1, 0, 0) or (0, 0, 1), whereas soft assignment gives weights such as (0.6, 0, 0.4) or (0.4, 0, 0.6).
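A minimal sketch of the two encodings follows. The function name is hypothetical, and since the slides leave the kernel f unspecified, the Gaussian weighting exp(-||x - c||² / (2σ²)) used for the soft weights is an assumption.

```python
import numpy as np

def encode(features, centers, sigma=None, knn=5):
    """Return a normalized word histogram for one image's local features."""
    # Squared Euclidean distance of every feature to every visual word
    sq_dists = ((features[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    hist = np.zeros(len(centers))
    if sigma is None:
        # Hard assignment: one count for the single nearest word
        for j in sq_dists.argmin(axis=1):
            hist[j] += 1.0
    else:
        # Soft assignment: weights proportional to exp(-||x - c||^2 / (2 sigma^2)),
        # kept non-zero only for the knn nearest words
        for d in sq_dists:
            nearest = np.argsort(d)[:knn]
            w = np.exp(-d[nearest] / (2.0 * sigma ** 2))
            hist[nearest] += w / w.sum()
    return hist / max(hist.sum(), 1.0)
```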
What should be the size of the dictionary?
- Too small: don’t capture the variability of the dataset
- Too large: have too few points per cluster
- The right size depends on the task and amount of data
- e.g. instance retrieval (e.g. Nister) uses a vocabulary of 1 million, whereas recognition (e.g., texture) uses a vocabulary of about a hundred.
Speed of embedding
- Tree structured vocabulary (e.g. Nister)
- Hashing, product quantization
More accurate embeddings
- Generalizations of soft embedding: LLC coding, sparse coding
- Higher order statistics: Fisher vectors, VLAD, etc.
Spatial pyramids
**pooling**: sum embeddings of local features within a region
Same motivation as **SIFT** — keep coarse layout information
Lazebnik, Schmid & Ponce (CVPR 2006)
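The pooling step can be sketched as follows; the interface and the unweighted concatenation are our own simplification of the scheme in Lazebnik et al. (2006), which additionally weights the pyramid levels.

```python
import numpy as np

def spatial_pyramid(positions, words, image_size, vocab_size, levels=(1, 2, 4)):
    """positions: (N, 2) x,y locations; words: (N,) visual-word indices (int array)."""
    height, width = image_size
    parts = []
    for g in levels:                      # 1x1, 2x2, 4x4 grids
        cx = np.minimum((positions[:, 0] / width * g).astype(int), g - 1)
        cy = np.minimum((positions[:, 1] / height * g).astype(int), g - 1)
        cell_of = cy * g + cx
        for cell in range(g * g):
            in_cell = words[cell_of == cell]
            # Word histogram of the features pooled inside this cell
            parts.append(np.bincount(in_cell, minlength=vocab_size).astype(float))
    feat = np.concatenate(parts)
    return feat / max(feat.sum(), 1.0)    # L1-normalize the concatenated vector
```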
Bags of features representation
- An image $I$ is represented by a histogram $h = \Phi(I)$
- image similarity = feature similarity
Comparing features
- **Euclidean distance:**
\[
D(h_1, h_2) = \sqrt{\sum_{i=1}^{N} (h_1(i) - h_2(i))^2}
\]
- **L1 distance:**
\[
D(h_1, h_2) = \sum_{i=1}^{N} |h_1(i) - h_2(i)|
\]
- **\(\chi^2\) distance:**
\[
D(h_1, h_2) = \sum_{i=1}^{N} \frac{(h_1(i) - h_2(i))^2}{h_1(i) + h_2(i)}
\]
- **Histogram intersection (similarity):**
\[
I(h_1, h_2) = \sum_{i=1}^{N} \min(h_1(i), h_2(i))
\]
- **Hellinger kernel (similarity):**
\[
K(h_1, h_2) = \sum_{i=1}^{N} \sqrt{h_1(i) h_2(i)}
\]
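Written out directly in NumPy, these comparison functions look as follows (a sketch; a small epsilon is added to the chi-square denominator to avoid division by zero on empty bins).

```python
import numpy as np

def euclidean(h1, h2):
    return np.sqrt(np.sum((h1 - h2) ** 2))

def l1(h1, h2):
    return np.sum(np.abs(h1 - h2))

def chi2(h1, h2, eps=1e-10):
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def histogram_intersection(h1, h2):   # similarity: larger means more similar
    return np.sum(np.minimum(h1, h2))

def hellinger(h1, h2):                # similarity: larger means more similar
    return np.sum(np.sqrt(h1 * h2))
```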
Given a feature representation for images, how do we learn a model for distinguishing features from different classes?
Classifiers
- Given a feature representation for images, how do we learn a model for distinguishing features from different classes?
- Examples of commonly used classifiers
- Nearest neighbor classifiers
- Linear classifiers: support vector machines
Nearest neighbor classifier
- Assign label of nearest training data point to each test data point
from Duda et al.
**k-Nearest neighbor classifier**
- For a new point, find the $k$ closest points from training data.
- Labels of the $k$ points “vote” to classify.
(Diagram: k-nearest neighbor classification with k = 5.)
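A minimal k-nearest-neighbor classifier over histogram features might look like this (a sketch; any of the distances above could replace the Euclidean distance).

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=5):
    """Classify one test histogram x by majority vote of its k nearest neighbors."""
    dists = np.sqrt(((train_X - x) ** 2).sum(axis=1))
    neighbors = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in neighbors)
    return votes.most_common(1)[0][0]
```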
Linear classifiers
- Find linear function (hyperplane) to separate positive and negative examples
\[ x_i \text{ positive: } x_i \cdot w + b \geq 0 \]
\[ x_i \text{ negative: } x_i \cdot w + b < 0 \]
Which hyperplane is best?
Support vector machines
• Find hyperplane that maximizes the *margin* between the positive and negative examples
\[ \begin{align*}
x_i \text{ positive } (y_i = 1) : & \quad x_i \cdot w + b \geq 1 \\
x_i \text{ negative } (y_i = -1) : & \quad x_i \cdot w + b \leq -1
\end{align*} \]
For support vectors, \( x_i \cdot w + b = \pm 1 \)
Distance between point and hyperplane:
\[ \frac{|x_i \cdot w + b|}{\|w\|} \]
Therefore, the margin is \( \frac{2}{\|w\|} \)
1. Maximize margin \( \frac{2}{||w||} \)
2. Correctly classify all training data:
\[
\begin{align*}
\text{x}_i \text{ positive (} y_i = 1 \text{):} & \quad \text{x}_i \cdot \text{w} + b \geq 1 \\
\text{x}_i \text{ negative (} y_i = -1 \text{):} & \quad \text{x}_i \cdot \text{w} + b \leq -1
\end{align*}
\]
*Quadratic optimization problem:*
\[
\min_{w,b} \frac{1}{2} ||w||^2 \quad \text{subject to} \quad y_i (w \cdot x_i + b) \geq 1
\]
Finding the maximum margin hyperplane
• Solution:
\[ w = \sum_i \alpha_i y_i x_i \]
Learned weight (nonzero only for support vectors)
\[ w \cdot x_i + b = y_i, \text{ for any support vector} \]
• Classification function (decision boundary):
\[ w \cdot x + b = \sum_i \alpha_i y_i x_i \cdot x + b \]
• Notice that it relies on an *inner product* between the test point \( x \) and the support vectors \( x_i \)
• Solving the optimization problem also involves computing the inner products \( x_i \cdot x_j \) between all pairs of training points
What if the data is not linearly separable?
- **Separable:**
\[
\min_{w,b} \frac{1}{2} \|w\|^2 \quad \text{subject to} \quad y_i(w \cdot x_i + b) \geq 1
\]
- **Non-separable:**
\[
\min_{w,b} \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \xi_i \quad \text{subject to} \quad y_i(w \cdot x_i + b) - 1 + \xi_i \geq 0
\]
- **C:** tradeoff constant, \( \xi_i \): *slack variable* (positive)
- Whenever margin is \( \geq 1 \), \( \xi_i = 0 \)
- Whenever margin is \(< 1\),
\[
\xi_i = 1 - y_i(w \cdot x_i + b)
\]
What if the data is not linearly separable?
\[ \min_{w,b} \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} \max(0,1 - y_i (w \cdot x_i + b)) \]
Maximize margin
Minimize classification mistakes
Datasets that are linearly separable work out great:
But what if the dataset is just too hard?
We can map it to a higher-dimensional space:
• General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:
\[ \Phi: x \rightarrow \phi(x) \]
Nonlinear SVMs
- **The kernel trick**: instead of explicitly computing the lifting transformation \( \varphi(x) \), define a kernel function \( K \) such that
\[
K(x, y) = \varphi(x) \cdot \varphi(y)
\]
(the kernel function must satisfy Mercer’s condition)
- This gives a nonlinear decision boundary in the original feature space:
\[
\sum_i \alpha_i y_i \varphi(x_i) \cdot \varphi(x) + b = \sum_i \alpha_i y_i K(x_i, x) + b
\]
Non-linear kernels for histograms
- Histogram intersection kernel:
\[ I(h_1, h_2) = \sum_{i=1}^{N} \min(h_1(i), h_2(i)) \]
- Hellinger kernel:
\[ K(h_1, h_2) = \sum_{i=1}^{N} \sqrt{h_1(i) h_2(i)} \]
- Generalized Gaussian kernel:
\[ K(h_1, h_2) = \exp\left(-\frac{1}{A} D(h_1, h_2)^2\right) \]
\[ D \text{ can be L1, Euclidean, } \chi^2 \text{ distance, etc.} \]
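As an illustration, a histogram-intersection-kernel SVM can be trained by handing a precomputed Gram matrix to scikit-learn's SVC (our choice of library; LIBSVM and others accept precomputed kernels in the same way, and the function names here are hypothetical).

```python
import numpy as np
from sklearn.svm import SVC

def intersection_kernel(A, B):
    """Gram matrix K[i, j] = sum_k min(A[i, k], B[j, k])."""
    return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

def train_and_predict(train_hists, train_labels, test_hists, C=1.0):
    K_train = intersection_kernel(train_hists, train_hists)
    clf = SVC(kernel="precomputed", C=C).fit(K_train, train_labels)
    # Rows: test examples, columns: training examples (the support-vector set)
    K_test = intersection_kernel(test_hists, train_hists)
    return clf.predict(K_test)
```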
Summary: SVMs for image classification
1. Pick an image representation (in our case, bag of features)
2. Pick a kernel function for that representation
3. Feed the kernel and features into your favorite SVM solver to obtain support vectors and weights
4. At test time: compute kernel values for your test example and each support vector, and combine them with the learned weights to get the value of the decision function
\[ \sum_i \alpha_i y_i \varphi(x_i) \cdot \varphi(x) + b = \sum_i \alpha_i y_i K(x_i, x) + b \]
Lots of software available! LIBSVM, LIBLINEAR, SVMLight
Summary: SVMs for image classification
1. Pick an image representation (in our case, bag of features)
2. Feed the features into your favorite SVM solver to obtain support vectors and weights
3. At test time: compute features for your test example and multiply with the learned weights to get the value of the decision function
Lots of software available! LIBSVM, LIBLINEAR, SVMLight
What about multi-class SVMs?
• Many options!
• For example, we can obtain a multi-class SVM by combining multiple two-class SVMs
• One vs. rest
• **Training**: learn an SVM for each class vs. the rest
• **Testing**: apply each SVM to test example and assign to it the class of the SVM that returns the highest decision value
• One vs. one
• **Training**: learn an SVM for each pair of classes
• **Testing**: each learned SVM “votes” for a class to assign to the test example
• [http://www.kernel-machines.org/software](http://www.kernel-machines.org/software)
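The one-vs-rest scheme can be sketched with binary linear SVMs as below (scikit-learn also provides this wiring automatically; the function names are our own).

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_one_vs_rest(X, y):
    """Train one binary SVM per class: the class vs. the rest."""
    return {c: LinearSVC(C=1.0).fit(X, (y == c).astype(int)) for c in np.unique(y)}

def predict_one_vs_rest(models, X):
    """Assign each sample to the class whose SVM gives the highest decision value."""
    classes = np.array(list(models))
    scores = np.column_stack([models[c].decision_function(X) for c in classes])
    return classes[np.argmax(scores, axis=1)]
```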
Multi-class classification results
(100 training images per class)
<table>
<thead>
<tr>
<th></th>
<th colspan="2">Weak features (16)</th>
<th colspan="2">Strong features (200)</th>
</tr>
<tr>
<th>Level</th>
<th>Single-level</th>
<th>Pyramid</th>
<th>Single-level</th>
<th>Pyramid</th>
</tr>
</thead>
<tbody>
<tr>
<td>0 (1 × 1)</td>
<td>45.3 ±0.5</td>
<td></td>
<td>72.2 ±0.6</td>
<td></td>
</tr>
<tr>
<td>1 (2 × 2)</td>
<td>53.6 ±0.3</td>
<td>56.2 ±0.6</td>
<td>77.9 ±0.6</td>
<td>79.0 ±0.5</td>
</tr>
<tr>
<td>2 (4 × 4)</td>
<td>61.7 ±0.6</td>
<td>64.7 ±0.7</td>
<td>79.4 ±0.3</td>
<td>81.1 ±0.3</td>
</tr>
<tr>
<td>3 (8 × 8)</td>
<td>63.3 ±0.8</td>
<td>66.8 ±0.6</td>
<td>77.2 ±0.4</td>
<td>80.7 ±0.3</td>
</tr>
</tbody>
</table>
Multi-class classification results (30 training images per class)
<table>
<thead>
<tr>
<th>Level</th>
<th>Weak features (16)</th>
<th>Strong features (200)</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>Single-level</td>
<td>Pyramid</td>
</tr>
<tr>
<td>0</td>
<td>15.5 ±0.9</td>
<td>41.2 ±1.2</td>
</tr>
<tr>
<td>1</td>
<td>31.4 ±1.2</td>
<td>32.8 ±1.3</td>
</tr>
<tr>
<td>2</td>
<td>47.2 ±1.1</td>
<td>49.3 ±1.4</td>
</tr>
<tr>
<td>3</td>
<td>52.2 ±0.8</td>
<td><strong>54.0</strong> ±1.1</td>
</tr>
</tbody>
</table>
Further thoughts and readings …
• All about embeddings (detailed experiments and code)
• K. Chatfield et al., The devil is in the details: an evaluation of recent feature encoding methods, BMVC 2011
• [http://www.robots.ox.ac.uk/~vgg/research/encoding_eval/](http://www.robots.ox.ac.uk/~vgg/research/encoding_eval/)
• Includes discussion of advanced embeddings such as Fisher vector representations and locally linear coding (LLC)
• All about SVMs — [http://research.microsoft.com/pubs/67119/svmtutorial.pdf](http://research.microsoft.com/pubs/67119/svmtutorial.pdf)
• Fast non-linear SVM evaluation (scales linearly with #SVs)
• Classification using Intersection kernel SVMs is efficient, Maji et al., CVPR 2008 — O(1) evaluation, ~ 1000x faster on large datasets!
(Also see the PAMI 2013 paper on my webpage)
• Approximate embeddings for kernels (Maji and Berg, Vedaldi and Zisserman) — O(n) training ~ 100x faster on large datasets!
ABSTRACT
The design of empirical experiments involves making design decisions to trade off what is ideal against what is achievable. Researchers must weigh limitations on resources, metrics, and the current state of knowledge, against the validity of the results. In this paper, we report on the design decisions we made in a small controlled experiment and their effects on the conclusions of the study. The goal of the study was to measure the impact of requirements formats on maintenance tasks. We encountered problems with the subjects’ lack of expertise in the technology used, the equivalence of subjects in our experiment conditions, and the number of subjects. These issues meant that we were able to draw conclusions about how subjects worked with the requirements formats, but not about the effect of the formats on the completeness of the implementation. We had a practical and doable experiment, but our results were not conclusive, only informative.
Categories and Subject Descriptors
D.2.1 [Software Engineering]: Requirements/Specifications – Elicitation methods.
General Terms
Design, Experimentation, Human Factors.
Keywords
1. INTRODUCTION
Designing an experiment has much in common with designing software. It is often necessary to select one option or a combination of options based on the availability of resources. Software developers are aware that the decisions made will have an effect on the system, but these decisions will help to make the system more feasible. In the same manner, researchers also have to make design decisions. Each design decision will have trade-offs. On one hand the decisions will make the experiment practical, but on the other hand the decisions could impact the validity of the results. To trade off these constraints, researchers must keep in mind the larger goal: to perform experiments that will provide valuable information and insight about the phenomenon being studied.
To illustrate our analogy, imagine that we have to develop a small web application that allows 50 students in an elementary school to upload their homework and keep some information about the status of the uploaded files. We want to create the best design but we also have to consider the budget (small) and time constraints (short). We can consider two options to store the information: a Database Management System (DBMS) or XML files. On one hand, if we use the DBMS we can have access to the information via SQL commands and have all the power of this specialized system. On the other hand, if we store information in an XML file, we can easily create the file on disk, and read it directly without installing any other software. We could choose to implement the system using XML files knowing that it is not the best design but it will meet the requirements and constraints.
We were interested in investigating the effects of different requirements formats on the performance of maintenance tasks. Should we do a case study in an industrial setting, or a controlled experiment in a laboratory? Should we do an exploratory qualitative study or test a hypothesis quantitatively? Should we do a detailed study with a small number of subjects or a more constrained study with a large number of subjects?
We decided to conduct a laboratory experiment using a small number of subjects and to collect both qualitative and quantitative data. A laboratory experiment would allow us to perform head-to-head comparisons of the formats and to draw conclusions about causality. We would collect both quantitative and qualitative data, so that we could both explore and test hypotheses.
Having selected the basic structure of the study, there were still many other choices to be made. In this paper, we will discuss the design decisions that affected the conclusions and validity of the study. Some of these decisions were made to optimize on scarce resources, in particular, the availability of subjects and the length of the experiment. Other decisions reflected the novelty of the research problem and the limitations of our methods.
It is always a challenge to find willing subjects for experiments in software engineering. We only screened for their knowledge of Java. However, the experimental task involved making three changes to an existing web application. As a result, only one out of nine subjects was able to complete the task, which led to inconclusive results regarding the effect of the formats on performance of the maintenance tasks.
When conducting a controlled experiment, the length of a session is limited by how long someone can concentrate and how much time someone can commit in a single block. In our study, with the experiment tasks combined with the tutorials, the familiarization task, and the debriefing interview, each session was very long (2.5 hours). Consequently, we administered only a short questionnaire about the subjects' background, and nothing on their personal or cognitive traits. Without this data, we were not able to effectively counterbalance the amount of subject experience in each of the conditions. As well, we were not able to control for background experience when analyzing subject performance.
The remainder of the paper is organized as follows. Section 2 presents related work on empirical experiment design in software engineering, trade-offs in experiment design, and research design. Section 3 describes the experiment we conducted, which we use as an example for our hypothesis. Section 4 discusses the trade-off between practicality and perfection. Our conclusions are presented in Section 5.
2. BACKGROUND
There is a great deal of literature on the design of empirical studies. There are many books available from the social sciences, and a number of papers, tutorials, and books for software engineering specifically. For example, Kitchenham et al. [3] suggested taking into consideration eleven design guidelines for empirical research in software engineering. Some of these guidelines relate to the identification of the population and the process for allocating the treatments, among others. Following these guidelines will help researchers to have an ideal experiment design, but resource limitations can make it difficult to follow them.
At a macroscopic level, the trade-offs between field studies and laboratory studies, long-term studies and single session studies, qualitative and quantitative studies are well known.
Perry et al. [4] proposed that the design of better studies and the effective collection of data could help create better empirical studies in software engineering and draw more conclusive interpretations from the results. They concluded that no study is perfect and the challenge is to conduct credible studies. Perry et al. also studied the management of trade-offs in the experiment design. They suggested that design decisions should try to maximize accuracy of interpretations, relevance, and impact. However, these decisions should be subject to resource constraints and risk.
Not only empirical studies but also formal experiments in software engineering depend on careful experiment design to produce useful results. Pfleeger [5] presented the activities needed to design and analyze an experiment in software engineering. She explained in detail the principles of experimental design, which aim to satisfy the need for simplicity and for maximizing information. The author emphasized the importance of simple experiment designs that help make the experiment practical. A simple design also reduces the use of time, money, people, and other experimental resources.
However, there is little in the literature on how to make design decisions at a detailed level. There is also little discussion of the consequences and lessons learned from particular design decisions. It is here that this experience report seeks to contribute to the software engineering literature.
3. DESCRIPTION OF THE EXPERIMENT
3.1 Experiment Design
We performed an initial controlled experiment to study which requirements format was most effective: Use Cases alone, Agile Requirements alone (User Stories with access to an On-site Customer), or Use Cases with Agile Requirements. A full description of the experiment has been published elsewhere [1].
We had a small sample of nine subjects, each assigned to one of the three conditions. We attempted to counterbalance the level of experience of the subjects in each condition. In the study, subjects were asked to modify a shopping cart in a web application, by changing an existing feature and adding two new features.
3.1.1 Subjects
Nine subjects participated in our experiment. We recruited them by word of mouth. Most of them were graduate students, but we also had an undergraduate student and a research assistant. Most of them had a major in Computer Science. Five of them stated that they had between 1 and 2 years of experience in Java Web Development. More details of our subjects are shown in Table 1.
Table 1. Characteristics of subjects: Average Age, Gender, Occupation, Degree Major, Years of Experience in Software Development, and Years of Experience in Java Web Development.
3.1.2 Conditions
The goal of our experiment was to measure the level of impact that different requirement formats could have on how people implement a system. To achieve our goal, the experiment had three conditions. First, subjects in the UC Group were given the requirements only in Use Cases. Second, subjects in the US&OC Group used agile requirements. They received the requirements in User Stories and they also had access to an On-site Customer via chat. Third, subjects in the UC+US&OC Group used all the requirement formats used by the previous groups. We will refer to each condition by the name of the group from here onwards.
3.1.3 Procedure
Subjects first filled out a Background Questionnaire to provide information about their background and experience. Then we provided tutorials on the requirements format the subjects would use. A familiarization task was also included to familiarize subjects with the Eclipse Workbench, the Tomcat Application Server, and implementing JSPs (Java Server Pages) and Servlets.
Subjects were given descriptions of three features in one of three requirements formats, according to their assigned experimental condition. They had to understand the requirements, perform three maintenance tasks, and think aloud as they worked. The maintenance tasks involved modifying a feature and adding two new features to a shopping cart for a web-based application. This subject system, called "An Online Boat Shop," was taken from the book "More Servlets and Java Server Pages" by Marty Hall [2]. The boat shop application was developed using JSP and Servlets; it uses a Tomcat application server and does not need access to a database. The source code of the application includes 10 JSP files, 12 Java™ files, and an XML configuration file. In total, there were 1,340 lines of source code.
The first maintenance task asked the subject to change how items were added to the shopping cart. Initially, each time the user added an item, the system did not verify whether one was already in the cart. We asked our subjects to add a “Quantity” attribute to the shopping cart and to increment it when an existing item was added to the cart. The second maintenance task required our subjects to add a new feature that allowed users to update the quantity of an item in the shopping cart by entering a new quantity in an input field. The third implementation task asked subjects to add the functionality to delete items from the shopping cart.
If we observed that a subject would not be able to complete the tasks in the time available due to unfamiliarity with JSP and Servlets, we asked them to document their design. We suggested that they draw sketches of screens, but they could draw or write whatever they needed to show that they understood the requirements. The design sketch allowed us to collect data about how well they understood the requirements when they were not able to complete the implementation.
Finally, the subjects participated in a Debriefing Interview to provide feedback and insight about their experience using the requirements formats, about their preferences among the formats, and about their performance in the implementation and design tasks.
3.1.4 Analysis
We analyzed the data by reviewing the screen, video, and audio recordings of the experiment. We tallied the amount of time that they spent reading the requirements, chatting with the Customer (where applicable), and implementing the features. We also analyzed the chat transcripts, counted the number of questions asked, and judged the relevance of the questions.
We also collected data from the coding and design to provide an objective, performance-based measure of how well subjects understood the requirements. For subjects who completed the implementation, we scored the program code. Otherwise, we scored the design drawings and the explanation that they provided. The maximum possible score was 30 points. Finally, we examined subject responses from the Debriefing Interview.
We tested our data using non-parametric statistics. This kind of statistical method is appropriate for our study because we have a small sample size. In addition, we converted our ratio data into ordinal data by rank ordering the times and performance scores for the subjects.
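For illustration, the two non-parametric tests used in this analysis can be run with SciPy as sketched below; the per-subject numbers are hypothetical placeholders, not the actual measurements from the study.

```python
from scipy import stats

# Hypothetical minutes spent understanding requirements (three subjects per group)
uc       = [4.0, 4.5, 4.2]
us_oc    = [27.0, 29.1, 28.0]
uc_us_oc = [17.5, 18.9, 17.6]

# Kruskal-Wallis one-way analysis of variance by ranks across the three groups
H, p = stats.kruskal(uc, us_oc, uc_us_oc)
print(f"Kruskal-Wallis: H={H:.2f}, p={p:.3f}")

# Kolmogorov-Smirnov test for two independent samples (pairwise comparison)
D, p2 = stats.ks_2samp(uc, uc_us_oc)
print(f"Kolmogorov-Smirnov: D={D:.2f}, p={p2:.3f}")
```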
3.2 Results
We found that subjects using Agile Requirements spent the most time understanding the requirements (average = 28:03 minutes), followed by subjects who used both Agile Requirements and Use Cases (average = 18:00), followed by subjects using only Use Cases (average = 4:13). This difference was found to be statistically significant at p<0.05 using the Kruskal-Wallis one-way analysis of variance by ranks [3]. This result is surprising for a number of reasons and to understand this difference we will examine how subjects went about understanding the requirements. Figure 1 shows how much time each group spent using each requirement format.

Table 2. Time spent understanding requirements, per group: time reading Use Cases, time reading User Stories, time asking relevant questions to the OC, time asking irrelevant questions to the OC, and total time understanding requirements.
The time spent understanding the requirements can be divided into two parts, reading and chatting with the Customer. A summary of the time spent can be found in Table 2. Subjects who were in the UC condition did not have the opportunity to talk to a Customer, and this was the main reason that they spent the least time understanding the requirements. However, all three groups spent time reading the documentation that they were given. The Agile Requirements group spent more time reading User Stories than the UC+US&OC group who were given all the formats. This is understandable, because the US&OC only had access to these short descriptions. On the other hand, it is surprising that the
UC+US&OC group spent the most time reading the requirements of all the three groups. They even spent more time reading the Use Cases than the UC group (7:34 minutes vs. 4:13 minutes). This difference was found to be statistically significant at p<0.05 using the Kolmogorov-Smirnov test for two independent samples [1]. This difference can be attributable to the availability of the On-Site Customer and not the User Stories, because subjects in the third group spent a scant 37 seconds reading the latter. We now examine the chatting portion of the time spent understanding the requirements.
We found that subjects in US&OC condition spent more time chatting with the On-Site Customer than those in the UC+US&OC condition (25:51 minutes vs. 9:49 minutes). This result is statistically significant at p<0.05. While it appears that Agile Requirements are less efficient, in reality this time included requirements elicitation activities that were not needed in the other two conditions. In other words, subjects in the US&OC condition had to talk to the Customer just to catch up with the other two groups in terms of knowledge. While the subjects in the UC+US&OC group spent less time chatting, they made better use of it by asking fewer relevant and irrelevant questions (p<0.05 by Kolmogorov-Smirnov). The average counts of the questions asked by the two groups are presented in Table 3.
Table 3. Number of relevant and irrelevant questions asked to the on-site customer
<table>
<thead>
<tr>
<th>Average/Group</th>
<th>US&OC</th>
<th>UC+US&OC</th>
<th>p value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Number of relevant questions to the OC</td>
<td>6.00</td>
<td>4.00</td>
<td><0.05</td>
</tr>
<tr>
<td>Number of irrelevant questions to the OC</td>
<td>1.67</td>
<td>0.33</td>
<td><0.05</td>
</tr>
</tbody>
</table>
Table 4 shows the partial and overall score on the implementation tasks. Overall, the differences between the groups were not statistically significant. Although there are numerical differences between the average performance of each group, the variation could be explained by chance alone.
We broke down the performance score into sub-parts to determine if one group did better than another in a particular part of the implementation. While the UC Group had the highest average score on validations and messages, none of the differences in the sub-parts were statistically significant.
Table 4. Partial and overall scores on tasks
<table>
<thead>
<tr>
<th>Average/Group</th>
<th>UC</th>
<th>US&OC</th>
<th>UC+US&OC</th>
<th>p value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Functionality score</td>
<td>18.17</td>
<td>19.17</td>
<td>17.67</td>
<td>n.s.</td>
</tr>
<tr>
<td>Validations and messages score</td>
<td>5.33</td>
<td>1.33</td>
<td>1.67</td>
<td>n.s.</td>
</tr>
<tr>
<td>Overall score</td>
<td>23.50</td>
<td>20.50</td>
<td>19.34</td>
<td>n.s.</td>
</tr>
</tbody>
</table>
3.3 Interpretation of Results
Our experiment produced two important findings. The first is that subjects in the third condition (UC+US&OC) spent more time reading Use Cases than subjects in the UC condition, but spent less time than subjects in the US&OC condition understanding the requirements. This difference can be attributed to the availability of the On-Site Customer, which meant that subjects had to study the Use Cases and understand them well enough to ask questions about them.
The second finding is that there is no clear link between the format in which the requirements were presented and how well subjects scored on the implementation task. Because we failed to reject the null hypothesis, it is unknown whether this is an actual effect. If our conclusions are true, it means that efforts made to improve requirements formats will not benefit software engineers. However, we doubt the veracity of this implication because of the decisions we made in the experiment design.
We expected that subjects who spent more time understanding requirements would perform better in the implementation tasks. In addition, we expected that subjects using more requirement formats at the same time would also perform better because they would have more information available. Thus, we believed that the requirement formats which subjects used would have an impact on their performance. Contrary to expectations, our results showed that there was no link between the time subjects spent understanding requirements and their performance. In addition, our results showed that subjects using the largest number of requirement formats scored the worst, though the difference was not statistically significant.
Not surprisingly, it appears that design decisions we made regarding limited resources had an impact on the validity of our results. We had a practical and doable experiment but our findings were not conclusive, only informative. In the next section, we will discuss some of the design decisions and their consequences.
4. TRADING OFF PRACTICALITY AND PERFECTION
We designed our experiment to measure the impact of requirements formats in the implementation of a software system, but had to take into account resource limitations. In particular, these were the availability of subjects, qualifications of subjects, and duration of experiment sessions. Based on these limitations, we had to make choices to deal with difficult problems in the experiment design. As a result, these decisions were likely to affect the conclusions we were able to draw. These decisions made the experiment more feasible, but at the same time, also made the experiment less perfect and less ideal.
4.1 Subjects
The first scarce resource that we had to consider was the availability of qualified subjects. As a result, we had to make compromises in our sample size and our screening procedures.
4.1.1 Number of Subjects
Our experiment was designed with three conditions to evaluate. We thought that having three subjects per group, nine subjects in total, was appropriate for an initial study and was enough to draw some conclusions.
Our decision to use a small number of subjects had the advantage that we could finish the experiment faster and report our initial experience sooner. However, it has the disadvantage that we cannot be confident in our results and conclusions, and the small number of subjects clearly limited the generalizability of our results.
Another factor in the decision to use nine subjects was the effort required to run the sessions and analyze the data. Each session required two experimenters and took about three hours of their time. Each subject produced about 2.5 hours of screen, video, and audio recordings and other artifacts, which typically took a pair of researchers four or more hours to analyze, because we were collecting both qualitative and quantitative data. In total, it took approximately 15 person-hours to run and analyze each subject, which is not an inconsiderable number.
This small number of subjects meant that we had to use non-parametric statistics to analyze the data. This type of statistic relaxes assumptions about the distribution of the data, but at the cost of making it more difficult to achieve statistical significance.
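As an illustration of this kind of analysis, the sketch below runs a Kruskal-Wallis test, a rank-based alternative to one-way ANOVA, over three small groups; the score values are hypothetical stand-ins with the same shape as our data (three subjects per condition), not our actual per-subject scores.

```python
# A minimal sketch of a non-parametric comparison across three small groups,
# using hypothetical per-subject overall scores (three subjects per condition).
from scipy.stats import kruskal

uc_scores = [22.0, 24.5, 24.0]        # hypothetical UC condition scores
usoc_scores = [19.0, 21.5, 21.0]      # hypothetical US&OC condition scores
combined_scores = [18.0, 20.0, 20.0]  # hypothetical UC+US&OC condition scores

# Kruskal-Wallis H-test: a rank-based analogue of one-way ANOVA that does not
# assume normally distributed scores; note that with only three observations
# per group the chi-square approximation of the p-value is very rough.
h_statistic, p_value = kruskal(uc_scores, usoc_scores, combined_scores)
print(f"H = {h_statistic:.2f}, p = {p_value:.3f}")

# With n = 3 per group, even large-looking differences rarely reach p < 0.05,
# which is the cost of relaxing distributional assumptions noted above.
```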
Clearly, more subjects are needed in order to produce stronger results. Published software engineering experiments typically use a sample size in the mid-teens, though this figure can range from a handful to three dozen. Power analysis suggests that to achieve a statistical power of 0.95 ($1-\beta$), a total of 96 subjects would be required (32 per condition), a truly infeasible number. Once again, we would need to make design decisions that trade off resource constraints.
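The kind of calculation behind such a figure can be sketched with a standard power-analysis routine; the effect size below (Cohen's f = 0.4, a "large" effect) is our own assumption, and the resulting total shifts considerably if a smaller effect is assumed.

```python
# A rough sketch of the power analysis behind the sample-size figure above.
# The effect size is an assumption (Cohen's f = 0.4, a "large" effect); the
# required total changes substantially if a smaller effect is assumed.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
total_n = analysis.solve_power(
    effect_size=0.4,   # assumed Cohen's f for a one-way, 3-group comparison
    alpha=0.05,        # Type I error rate
    power=0.95,        # desired power (1 - beta)
    k_groups=3,        # UC, US&OC, UC+US&OC
)
print(f"Total subjects needed: {total_n:.0f} "
      f"(about {total_n / 3:.0f} per condition)")
# With these assumptions the total lands on the order of a hundred subjects;
# assuming a medium effect (f = 0.25) pushes it to several hundred.
```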
### 4.1.2 Qualifications of Subjects
When recruiting subjects, we found that many of the potential subjects had knowledge of Java, but not of JSP and Servlets. We decided to accept these subjects and included JSP and Servlet tutorials and a familiarization task. It was already very difficult to find nine subjects willing to spend 2.5 hours on the experiment, so adding another filter to the selection of subjects was not practical. We expected that some of our subjects would not be able to complete the implementation task; for that reason, we kept the option of redirecting subjects who were struggling with the implementation technology to a design task.
The advantages of this decision were that we were able to recruit nine subjects and conduct the experiment within two and a half hours.
On the other hand, there were also disadvantages. First, we had a low rate of task completion; only one of our nine subjects finished the implementation task, which likely affected our results. One possible reason could be the level of difficulty of the implementation task. Subjects had to add a new attribute to the shopping cart to count items, allow this new attribute to be modified, and allow deletion of items in the shopping cart. We felt that this task was relatively straightforward, and subjects agreed: asked to rate the difficulty of the task on a scale from 0 (easy) to 5 (difficult), they assigned an average of 2.83. We believe that the low task completion rate was instead caused by our subjects' low level of expertise in JSP and Servlets.
Second, depending on the completeness of the implementation task, we scored different artifacts. We scored the source code for the parts of the implementation task that a subject completed, and scored the design for the parts that were not implemented. If a specific feature was completed, we assigned the same score for it regardless of whether it was completed in the implementation or in the design. However, it is possible that we were mixing apples with oranges when we scored the implementation and the design equally. This equal scoring may not have been fair in some cases, because not all the subjects spent the same time implementing and designing: we asked subjects to switch to designing at different times, depending on the difficulty they were having with the implementation.
### 4.2 Duration of Experiment Sessions
The second scarce resource that we had to manage was the length of the experiment sessions. There are limits to how long a subject can focus and work intensively on a task. As well, there are fewer people who are available and willing to commit to longer experiments. Our experiment sessions were 2.5 hours long, a duration that pushed these limits. Consequently, we had to make design decisions about what we asked subjects to do in the time available. These decisions had effects on the equivalence of subjects in the three conditions, and on the software tools and information that we gave them.
#### 4.2.1 Equivalence of Groups
Our experiment had three conditions to which we needed to assign the same number of subjects. Furthermore, we wanted to ensure that each group was comparable in terms of subject characteristics, background, and experience. Counterbalancing the groups helps to ensure that the performance of the groups can be compared.
Ideally, we would have assigned our subjects to groups based on tests of their cognitive traits and their familiarity with JSP and Servlets. For example, we could have postponed the assignment of subjects until after we had the results of the background questionnaire, including quantitative information about their experience and background. The tests would have had to assess knowledge and skills, and not just ask about how much prior experience the subjects had; the number of years of experience is known not to be a good measure of expertise. We found that four of our subjects said they had worked with Java web technologies for two years, but only one of them was able to finish the implementation tasks. However, adding more tests was not feasible, because it would have made the sessions too long: a skills test would have added 30 minutes and a personality test would have added 30-60 minutes.
Instead, we decided to assign three subjects to each group based on our knowledge of the expertise and background of our subjects. We were able to do this because we recruited our subjects by word of mouth. Before conducting the experiment we had already contacted the nine subjects and knew about the background and expertise of some of them; the others were asked informally about their background and their familiarity with JSP and Servlets before scheduling the appointment for the experiment. With this information, we assigned our subjects to groups before running the experiment and tried to balance expertise and background across the groups.
These design decisions had some advantages: before starting the experiments we knew that we had three subjects in each group and that this would be enough to form reasonably balanced groups. Another advantage was that we did not need to include any additional tests that would have increased the length of our experiment sessions.
The main disadvantage of our approach was that the groups created were not ideal, in the sense that they were not completely balanced, and this likely affected the results of the study. As well, we were not able to control for personal traits, such as analytic ability, in analyzing the data. However, it was a reasonable trade-off, given the alternatives.
4.2.2 Stimuli Given to Subjects
Since this was a software engineering experiment, the subjects had to work with many different technologies in order to complete the maintenance tasks. We knew that we could not assume that all the subjects had worked with them previously, so we had to include time in the schedule for subjects to become familiar with the various tools and languages. We tried to reduce the technologies that subjects were required to use in order to save time and this affected the generalizability of the study.
Subjects in all three conditions had to use software tools (the Eclipse workbench and the Tomcat Application Server), programming languages and frameworks (Java, JSP, and Servlets), a data format (XML), and many conventions and best practices. Depending on the condition, subjects also had to work with Use Cases, User Stories, and an On-Site Customer via chat.
We decided not to include Test Cases with the material given to the groups using agile requirements for two reasons. One, we felt that Test Cases would have provided too much information and the comparison between the three conditions would have been too imbalanced and unfair. Two, we did not want to require our subjects to use yet another tool. Including a testing tool would have further increased the length of each experiment session.
In retrospect, this was not a good decision and the reasons were not well founded. Excluding Test Cases made the conditions using Agile Requirements less realistic, and in turn, less generalizable. The prevailing view is that the trio of User Stories, On-Site Customer, and Test Cases forms the core of Agile Requirements, and the omission of Test Cases affected the credibility of the study among Agilists. We had assumed that we needed to provide the Test Cases in an automated testing tool, another common practice in Agile; we felt that this would have done too much of the work for those subjects using Agile Requirements, while at the same time requiring them to learn another tool. Looking back, we could have provided written descriptions of the Test Cases, e.g. input, output, and preconditions, to the US&OC and UC+US&OC conditions. This would have made the three conditions more similar in terms of the information given to them.
5. CONCLUSIONS
Making design decisions to implement a software system is not an easy task. Software designers have to evaluate the tradeoffs of each decision before selecting an option. Similarly, researchers need to evaluate different ways to design the experiment taking into account the resource constraints.
In this paper, we discussed the design decisions that had an effect on the validity of a small controlled experiment aiming to measure the impact of different requirement formats on how people implement a system. The limited resources that we were attempting to manage were the availability of qualified subjects and the duration of the experiment sessions.
Because it is very difficult to find qualified subjects, we decided to perform the study with nine subjects who had previous experience developing software using the Java programming language. The small sample size affected the power of the experiment and meant that we could only use non-parametric statistics. The subject system in our study was a web application using JSP and Servlets. We did not screen for prior experience with these technologies, and only one of our nine subjects was able to complete the implementation tasks. The poor scores of our subjects on this task led to inconclusive results on the effect of the requirements formats on how well subjects implemented the change tasks.
The other constraint discussed in this paper was the duration of the experiment sessions. In our study, the sessions lasted 2.5 hours and included a background questionnaire, tutorials, a familiarization task, experiment tasks, and a debriefing interview. There were other tests and stimuli that we considered including, but did not.
Adding tests of skill and knowledge level in web technologies would have allowed us to make the groups in the three conditions more similar to each other. Adding tests of personality traits and cognitive ability would have allowed us to control for the effects of these factors when analyzing performance. However, there simply was no time available to add these to the schedule.
We do regret one design decision that we made with respect to time constraints; we did not provide Test Cases to the groups using Agile Requirements and in retrospect we should have. We originally felt that we could not burden these subjects with yet another tool or format, but this was a poor decision, because it decreased the credibility of our experiment especially among Agilists.
In summary, these design decisions ensured that we had a study that was feasible, but at the cost of some threats to validity. It would not have been possible to conduct an ideal experiment. Instead, we had an imperfect experiment that shed light on a phenomenon, the effect of requirements formats on maintenance tasks. The result was a practical experiment, but our results are not conclusive, merely informative, which still allows us to make incremental progress as a field.
11-731 Machine Translation
MT Quality Estimation
Alon Lavie
2 April 2015
With Acknowledged Contributions from:
• Lucia Specia (University of Sheffield)
• CCB et al (WMT 2012)
• Radu Soricut et al (SDL Language Weaver)
Outline
- Quality Estimation Measures:
- What are they and why they are needed?
- Applications
- Framework and Types of Features
- The WMT 2012 Shared Task on Quality Estimation
- Case Study: The SDL/Language Weaver QE System for WMT 2012
- Open Issues
- Conclusions
MT Quality Estimation
- MT Systems are used in a variety of applications and scenarios
- Need to assess how well they are performing and whether they are suitable for the task in which they are being used
- MT systems perform best on input similar to their training data
- System performance can vary widely from one sentence to the next
- MT Evaluation metrics can provide offline information:
- Pre-selected test data with human reference translation to compare against
- Metrics: BLEU, Meteor, TER
- What about online assessment in real time?
- No human reference translation
- Needs to be computable in real-time
MT Quality Estimation
Main Driving Applications:
- Is an MT-translated document sufficient in quality for publication and/or user consumption?
- Example: Translated product reviews or recommendations – publish?
- Example: Translated news summaries - sufficient for gisting?
MT translation used as a first-step for human translation:
- Pre-translate a document with MT or use Translation Memory?
- Is an MT-generated translation segment worth post-editing? Faster and better than translating the segment from scratch?
- Should poor quality MT-generated segments be filtered out?
- Can we predict in advance how much time/effort will it take to post-edit a document?
Hypothesis Selection and MT System Combination:
- Select the better output from multiple systems
MT Quality Estimation: Framework
- Supervised Learning Task
- Learn from examples of MT-generated translations and human-generated quality assessments to predict assessments for new unseen MT-generated translation outputs
- What level of granularity?
- Document-level or segment-level?
- What types of assessments?
- Quality scale based on human judgments
- Adequacy/Fluency [1-5] [0-1]
- Post-editing effort [1-4] [0-1]
- Class labels: Bad/OK/Good
- What type of machine learning?
- Classifiers for two or more classes [Good/Bad] [Good/OK/Bad]
- Logistic regression to maximize correlation with human label scales
- Ranking algorithms to maximize ranking correlation with human data
MT Quality Estimation: Framework
- What types of features?
- No reference translation available!
- Indicators extracted from the **MT-generated output** itself
- Output length, lexical features, linguistic complexity, LM-based
- Indicators extracted from the **source-language input**
- Input length, lexical features, linguistic complexity, LM-based
- Indicators extracted from **MT system internal features**
- Decoder features scores: translation model, LM, rules applied
- **Other features**
- OOV words, source-target similarity, similarity to training data
- Deeper linguistic analysis features
MT Quality Estimation: Framework
Quality Estimation Indicators:
- Adequacy indicators
- Complexity indicators
- Confidence indicators
- Fluency indicators
MT Quality Estimation: Framework
(Diagram of the QE framework pipeline at **Training** time and at **Runtime**; figure not reproduced.)
MT Quality Estimation: History
- Similar ideas have been around in the context of MT System Combination since the 1990s
- Some preliminary exploration in the form of “Confidence Estimation” in 2001/2002 inspired by confidence scores in speech recognition (word posterior probabilities)
- JHU Summer Workshop 2003:
- Goal: Predict BLEU/NIST/WER scores at runtime
- Relatively weak MT systems at the time
- Poor results
- New surge of interest since 2008:
- Better MT systems
- MT increasingly used for post-editing
- More meaningful human scores as data: post-editing time/effort
Some Recent Positive Results
- **Time to post-edit** subset of sentences predicted as “low PE effort” vs time to post-edit random subset of sentences [Spe11]
<table>
<thead>
<tr>
<th>Language</th>
<th>no QE</th>
<th>QE</th>
</tr>
</thead>
<tbody>
<tr>
<td>fr-en</td>
<td>0.75 words/sec</td>
<td>1.09 words/sec</td>
</tr>
<tr>
<td>en-es</td>
<td>0.32 words/sec</td>
<td>0.57 words/sec</td>
</tr>
</tbody>
</table>
- **Accuracy in selecting best translation** among 4 MT systems [SRT10]
<table>
<thead>
<tr>
<th>Best MT system</th>
<th>Highest QE score</th>
</tr>
</thead>
<tbody>
<tr>
<td>54%</td>
<td>77%</td>
</tr>
</tbody>
</table>
WMT 2012 QE Shared Task
- First large-scale competitive shared-task on Quality Estimation systems:
- Coordinated by Lucia Specia and Radu Soricut at WMT 2012
- Provide a common setting for development and comparison of QE systems
- Focus on **sentence-level QE of Post-Editing Effort**
- Main Objectives:
- Identify (new) effective features
- Identify most suitable machine learning techniques
- Contrast regression and ranking techniques
- Test (new) automatic evaluation metrics
- Establish the state of the art performance on this problem
WMT 2012 QE Shared Task
Data and Setting:
- Single common MT system generating data
- English to Spanish
- Moses Phrase-based SMT system developed on WMT 2012 data
- English source sentences; Spanish MT-generated output sentences
- MT output post-edited by a single professional translator
- Post-editing effort scored by three independent translators using a discrete [1-5] scale; averaged for each segment
- Spanish human reference translations available for analysis but not disclosed to QE development teams
- Data made available for development: 1832 segments
- Blind (unseen) test data: 422 segments
Annotation guidelines
3 human judges for PE effort assigning 1-5 scores for
(source, MT output, PE output)
[1] The MT output is incomprehensible, with little or no information transferred accurately. It cannot be edited, needs to be translated from scratch.
[2] About 50-70% of the MT output needs to be edited. It requires a significant editing effort in order to reach publishable level.
[3] About 25-50% of the MT output needs to be edited. It contains different errors and mistranslations that need to be corrected.
[4] About 10-25% of the MT output needs to be edited. It is generally clear and intelligible.
[5] The MT output is perfectly clear and intelligible. It is not necessarily a perfect translation, but requires little to no editing.
WMT 2012 QE Shared Task
SMT resources for training and test sets:
- SMT training corpus (Europarl and News Commentary)
- LMs: 5-gram LM; 3-gram LM and 1-3-gram counts
- IBM Model 1 table (Giza)
- Word-alignment file as produced by *grow-diag-final*
- Phrase table with word alignment information
- Moses configuration file used for decoding
- Moses run-time log: model component values, word graph, etc.
WMT 2012 QE Shared Task
- Two Sub-tasks:
- **Scoring**: Predict a post-editing effort score [1-5] for each test segment
- **Ranking**: Rank the test segments from best to worst
Scoring Task evaluation measures:
- Mean-Absolute-Error (MAE)
- Root-Mean-Squared-Error (RMSE)
\[
\text{MAE} = \frac{\sum_{i=1}^{N} |H(s_i) - V(s_i)|}{N}
\]
\[
\text{RMSE} = \sqrt{\frac{\sum_{i=1}^{N} (H(s_i) - V(s_i))^2}{N}}
\]
\(N = |S|\)
\(H(s_i)\) is the predicted score for \(s_i\)
\(V(s_i)\) is the human score for \(s_i\)
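Read as code, the two definitions above amount to the following sketch; the toy scores are made up purely for illustration.

```python
# Minimal sketch of the two scoring-task metrics defined above, for predicted
# scores H(s_i) and human scores V(s_i) over the same N test segments.
import math

def mae(predicted, human):
    """Mean Absolute Error between predicted and human scores."""
    n = len(human)
    return sum(abs(h - v) for h, v in zip(predicted, human)) / n

def rmse(predicted, human):
    """Root Mean Squared Error between predicted and human scores."""
    n = len(human)
    return math.sqrt(sum((h - v) ** 2 for h, v in zip(predicted, human)) / n)

# Toy example with made-up scores on the 1-5 post-editing effort scale.
predicted = [3.2, 4.1, 2.5, 4.8]
human = [3.0, 4.5, 2.0, 5.0]
print(mae(predicted, human), rmse(predicted, human))
```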
WMT 2012 QE Shared Task
- **Ranking Task** evaluation measures:
- Spearman’s Rank Correlation Coefficient
\[ \rho = 1 - \frac{6 \sum d_i^2}{n(n^2 - 1)}. \]
- New metric: DeltaAvg
For \( S_1, S_2, \ldots, S_n \) quantiles:
\[ \text{DeltaAvg}_V[n] = \frac{\sum_{k=1}^{n-1} V(S_{1,k})}{n-1} - V(S) \]
\( V(S) \): extrinsic function measuring the “quality” of set \( S \)
Average human scores (1-5) of set \( S \)
DeltaAvg
Example 1: \( n=2 \), quantiles \( S_1, S_2 \)
\[ \text{DeltaAvg}[2] = V(S_1) - V(S) \]
"Quality of the top half compared to the overall quality"
Average **human scores** of top half compared to average **human scores** of complete set
Average human score: 3
\begin{align*}
\text{Random} &= 3.0 - 3 = 0 \\
\text{QE} &= 3.8 - 3 = 0.8 \\
\text{Oracle} &= 4.2 - 3 = 1.2 \\
\text{Lower bound} &= 1.8 - 3 = -1.2
\end{align*}
Average “human” score of top 50% selected after ranking based on QE score. QE score can be on any scale...
Final DeltaAvg metric
$$\text{DeltaAvg}_V = \frac{\sum_{n=2}^{N} \text{DeltaAvg}_V[n]}{N - 1}$$
where $N = |S|/2$
Average DeltaAvg$[n]$ for all $n$, $2 \leq n \leq |S|/2$
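To make the definition concrete, the sketch below computes DeltaAvg with a simplified equal-size quantile split, together with Spearman's ρ, on toy data; it illustrates the formulas above rather than reproducing the official WMT12 evaluation script.

```python
# A sketch of the DeltaAvg metric as defined above: segments are ranked by the
# QE prediction, split into n quantiles, and the average human score of the
# union of the top-k quantiles (k = 1 .. n-1) is compared with the overall mean.
from scipy.stats import spearmanr

def delta_avg(predicted, human):
    """DeltaAvg averaged over quantile counts n = 2 .. len(S) // 2.

    Assumes at least 6 segments and uses a simplified equal-size quantile split
    (any remainder ends up in the last, unused quantile).
    """
    # Rank segments from best to worst according to the predicted QE score.
    order = sorted(range(len(predicted)), key=lambda i: predicted[i], reverse=True)
    ranked_human = [human[i] for i in order]
    overall = sum(human) / len(human)

    def delta_avg_n(n):
        size = len(ranked_human) // n
        head_averages = []
        for k in range(1, n):
            head = ranked_human[: k * size]                # union of top-k quantiles
            head_averages.append(sum(head) / len(head))    # V(S_{1,k})
        return sum(head_averages) / (n - 1) - overall

    max_n = len(ranked_human) // 2
    return sum(delta_avg_n(n) for n in range(2, max_n + 1)) / (max_n - 1)

# Toy data: higher predicted score should mean higher human score.
predicted = [0.9, 0.2, 0.7, 0.4, 0.8, 0.1, 0.6, 0.3]
human =     [4.5, 2.0, 4.0, 3.0, 4.0, 1.5, 3.5, 2.5]
print("DeltaAvg:", round(delta_avg(predicted, human), 3))
rho, _p = spearmanr(predicted, human)
print("Spearman:", round(rho, 3))
```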
WMT 2012 QE Shared Task
- **Participating Teams:**
<table>
<thead>
<tr>
<th>ID</th>
<th>Participating team</th>
</tr>
</thead>
<tbody>
<tr>
<td>PRHLT-UPV</td>
<td>Universitat Politecnica de Valencia, Spain</td>
</tr>
<tr>
<td>UU</td>
<td>Uppsala University, Sweden</td>
</tr>
<tr>
<td>SDLLW</td>
<td>SDL Language Weaver, USA</td>
</tr>
<tr>
<td>Loria</td>
<td>LORIA Institute, France</td>
</tr>
<tr>
<td>UPC</td>
<td>Universitat Politecnica de Catalunya, Spain</td>
</tr>
<tr>
<td>DFKI</td>
<td>DFKI, Germany</td>
</tr>
<tr>
<td>WLV-SHEF</td>
<td>Univ of Wolverhampton & Univ of Sheffield, UK</td>
</tr>
<tr>
<td>SJTU</td>
<td>Shanghai Jiao Tong University, China</td>
</tr>
<tr>
<td>DCU-SYMC</td>
<td>Dublin City University, Ireland & Symantec, Ireland</td>
</tr>
<tr>
<td>UEdin</td>
<td>University of Edinburgh, UK</td>
</tr>
<tr>
<td>TCD</td>
<td>Trinity College Dublin, Ireland</td>
</tr>
</tbody>
</table>
One or two systems per team, most teams submitting for ranking and scoring sub-tasks
Baseline Features and System:
**Feature extraction** software – system-independent features:
- number of tokens in the source and target sentences
- average source token length
- average number of occurrences of words in the target
- number of punctuation marks in source and target sentences
- LM probability of source and target sentences
- average number of translations per source word
- % of source 1-grams, 2-grams and 3-grams in frequency quartiles 1 and 4
- % of seen source unigrams
**SVM regression** with RBF kernel with the parameters $\gamma$, $\epsilon$ and $C$ optimized using a grid-search and 5-fold cross validation on the training set
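A rough sketch of how such a baseline might be set up with off-the-shelf tools is shown below; the feature matrix and scores are random placeholders, and the parameter grid is an assumption rather than the grid used for the official baseline.

```python
# Sketch of a baseline-style QE model in the spirit described above: epsilon-SVR
# with an RBF kernel, with C, gamma and epsilon tuned by grid search under
# 5-fold cross-validation. Feature matrix X (one row of 17 baseline features
# per segment) and effort scores y are placeholders here.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 17))          # placeholder: 17 baseline features
y = rng.uniform(1.0, 5.0, size=200)     # placeholder: [1-5] effort scores

param_grid = {
    "C": [1, 10, 100],
    "gamma": [1e-3, 1e-2, 1e-1],
    "epsilon": [0.1, 0.2, 0.5],
}
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid,
    scoring="neg_mean_absolute_error",  # optimise for MAE, as in the scoring task
    cv=5,
)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("CV MAE:", -search.best_score_)
```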
## WMT 2012 QE Shared Task
### Results - Ranking Task:
<table>
<thead>
<tr>
<th>System ID</th>
<th>DeltaAvg</th>
<th>Spearman Corr</th>
</tr>
</thead>
<tbody>
<tr>
<td>SDLLW_M5PbestDeltaAvg</td>
<td>0.63</td>
<td>0.64</td>
</tr>
<tr>
<td>SDLLW_SVM</td>
<td>0.61</td>
<td>0.60</td>
</tr>
<tr>
<td>UU_bltk</td>
<td>0.58</td>
<td>0.61</td>
</tr>
<tr>
<td>UU_best</td>
<td>0.56</td>
<td>0.62</td>
</tr>
<tr>
<td>TCD_M5P-resources-only*</td>
<td>0.56</td>
<td>0.56</td>
</tr>
<tr>
<td>Baseline (17FFs SVM)</td>
<td>0.55</td>
<td>0.58</td>
</tr>
<tr>
<td>PRHLT-UPV</td>
<td>0.55</td>
<td>0.55</td>
</tr>
<tr>
<td>UEdin</td>
<td>0.54</td>
<td>0.58</td>
</tr>
<tr>
<td>SJTU</td>
<td>0.53</td>
<td>0.53</td>
</tr>
<tr>
<td>WLV-SHEF_FS</td>
<td>0.51</td>
<td>0.52</td>
</tr>
<tr>
<td>WLV-SHEF_BL</td>
<td>0.50</td>
<td>0.49</td>
</tr>
<tr>
<td>DFKI_morphPOSibm1LM</td>
<td>0.46</td>
<td>0.46</td>
</tr>
<tr>
<td>DCU-SYMC_unconstrained</td>
<td>0.44</td>
<td>0.41</td>
</tr>
<tr>
<td>DCU-SYMC_constrained</td>
<td>0.43</td>
<td>0.41</td>
</tr>
<tr>
<td>TCD_M5P-all*</td>
<td>0.42</td>
<td>0.41</td>
</tr>
<tr>
<td>UPC_1</td>
<td>0.22</td>
<td>0.26</td>
</tr>
<tr>
<td>UPC_2</td>
<td>0.15</td>
<td>0.19</td>
</tr>
</tbody>
</table>
- = winning submissions
gray area = not different from baseline
* = bug-fix was applied after the submission
WMT 2012 QE Shared Task
- Ranking Task - Oracles:
**Oracle methods**: associate various metrics in an oracle manner with the test input:
- **Oracle Effort**: the gold-label Effort
- **Oracle HTER**: the HTER metric against the post-edited translations as reference
<table>
<thead>
<tr>
<th>System ID</th>
<th>DeltaAvg</th>
<th>Spearman Corr</th>
</tr>
</thead>
<tbody>
<tr>
<td>Oracle Effort</td>
<td>0.95</td>
<td>1.00</td>
</tr>
<tr>
<td>Oracle HTER</td>
<td>0.77</td>
<td>0.70</td>
</tr>
</tbody>
</table>
# WMT 2012 QE Shared Task
## Results - Scoring Task:
<table>
<thead>
<tr>
<th>System ID</th>
<th>MAE</th>
<th>RMSE</th>
</tr>
</thead>
<tbody>
<tr>
<td>SDLLW_M5PbestDeltaAvg</td>
<td>0.61</td>
<td>0.75</td>
</tr>
<tr>
<td>UU_best</td>
<td>0.64</td>
<td>0.79</td>
</tr>
<tr>
<td>SDLLW_SVM</td>
<td>0.64</td>
<td>0.78</td>
</tr>
<tr>
<td>UU_bltk</td>
<td>0.64</td>
<td>0.79</td>
</tr>
<tr>
<td>Loria_SVMlinear</td>
<td>0.68</td>
<td>0.82</td>
</tr>
<tr>
<td>UEdin</td>
<td>0.68</td>
<td>0.82</td>
</tr>
<tr>
<td>TCD_M5P-resources-only*</td>
<td>0.68</td>
<td>0.82</td>
</tr>
<tr>
<td>Baseline (17FFs SVM)</td>
<td>0.69</td>
<td>0.82</td>
</tr>
<tr>
<td>Loria_SVMrbuf</td>
<td>0.69</td>
<td>0.83</td>
</tr>
<tr>
<td>SJTU</td>
<td>0.69</td>
<td>0.83</td>
</tr>
<tr>
<td>WLV-SHEF_FS</td>
<td>0.69</td>
<td>0.85</td>
</tr>
<tr>
<td>PRHLT-UPV</td>
<td>0.70</td>
<td>0.85</td>
</tr>
<tr>
<td>WLV-SHEF_BL</td>
<td>0.72</td>
<td>0.86</td>
</tr>
<tr>
<td>DCU-SYMC_unconstrained</td>
<td>0.75</td>
<td>0.97</td>
</tr>
<tr>
<td>DFKI_grcfs-mars</td>
<td>0.82</td>
<td>0.98</td>
</tr>
<tr>
<td>DFKI_cfs-plsreg</td>
<td>0.82</td>
<td>0.99</td>
</tr>
<tr>
<td>UPC_1</td>
<td>0.84</td>
<td>1.01</td>
</tr>
<tr>
<td>DCU-SYMC_constrained</td>
<td>0.86</td>
<td>1.12</td>
</tr>
<tr>
<td>UPC_2</td>
<td>0.87</td>
<td>1.04</td>
</tr>
<tr>
<td>TCD_M5P-all</td>
<td>2.09</td>
<td>2.32</td>
</tr>
</tbody>
</table>
Analysis:
New and effective quality indicators (features)
Most participating systems use external resources: parsers, POS taggers, NER, etc. → variety of features
Many tried to exploit linguistically-oriented features
- none or modest improvements (e.g. WLV-SHEF)
- high performance (e.g. “UU” with parse trees)
Good features:
- confidence: model components from SMT decoder
- pseudo-reference: agreement between 2 SMT systems
- fuzzy-match like: source (and target) similarity with SMT training corpus (LM, etc)
WMT 2012 QE Shared Task
- Analysis:
Machine Learning techniques
- Best performing: Regression Trees (M5P) and SVR
- M5P Regression Trees: compact models, less overfitting, "readable"
- SVRs: easily overfit with small training data and large feature set
- Feature selection crucial in this setup
- Structured learning techniques: "UU" submissions (tree kernels)
WMT 2012 QE Shared Task
Analysis:
Evaluation metrics
- DeltaAvg → suitable for the ranking task
- automatic and deterministic (and therefore consistent)
- Extrinsic interpretability
- Versatile: valuation function $V$ can change, $N$ can change
- High correlation with Spearman, but less strict
- MAE, RMSE → difficult task, values stubbornly high
Regression vs ranking
- Most submissions: regression results to infer ranking
- Ranking approach is simpler, directly useful in many applications
WMT 2012 QE Shared Task
- Analysis:
Establish state-of-the-art performance
- “Baseline” - hard to beat, previous state-of-the-art
- Metrics, data sets, and performance points available
- Known values for oracle-based upperbounds
- Good resource to further investigate: best features & best algorithms
Case Study: SDL LW QE System
- Best performing system(s) in WMT 2012 shared tasks
- Two main system variants:
- M5P Regression Tree model
- SVM Regression Model (SVR)
- Main distinguishing characteristics:
- Novel Features used
- Feature Selection was crucial to performance
- Machine Learning approaches used
Case Study: SDL LW QE System
- **Features Used:**
- Total number of features: 42
- Baseline Features: 17
- Decoder Features: 8
- New LW Features: 17
Case Study: SDL LW QE System
● Baseline Features:
BF1 number of tokens in the source sentence
BF2 number of tokens in the target sentence
BF3 average source token length
BF4 LM probability of source sentence
BF5 LM probability of the target sentence
BF6 average number of occurrences of the target word within the target translation
BF7 average number of translations per source word in the sentence (as given by IBM 1 table thresholded so that $Prob(t|s) > 0.2$)
BF8 average number of translations per source word in the sentence (as given by IBM 1 table thresholded so that $Prob(t|s) > 0.01$) weighted by the inverse frequency of each word in the source corpus
BF9 percentage of unigrams in quartile 1 of frequency (lower frequency words) in SMT$_{src}$
BF10 percentage of unigrams in quartile 4 of frequency (higher frequency words) in SMT$_{src}$
BF11 percentage of bigrams in quartile 1 of frequency of source words in SMT$_{src}$
BF12 percentage of bigrams in quartile 4 of frequency of source words in SMT$_{src}$
BF13 percentage of trigrams in quartile 1 of frequency of source words in SMT$_{src}$
BF14 percentage of trigrams in quartile 4 of frequency of source words in SMT$_{src}$
BF15 percentage of unigrams in the source sentence seen in SMT$_{src}$
BF16 number of punctuation marks in source sentence
BF17 number of punctuation marks in target sentence
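To make a few of these concrete, the sketch below computes the simpler, model-free baseline features (BF1-BF3, BF6, BF16-BF17); tokenisation is naive whitespace splitting, and the LM- and IBM-1-based features (BF4-BF5, BF7-BF15) need external models and corpora, so they are not shown.

```python
# Sketch of a few of the simpler baseline features (BF1-BF3, BF6, BF16-BF17).
import string
from collections import Counter

def baseline_features(source, target):
    src_tokens = source.split()
    tgt_tokens = target.split()
    tgt_counts = Counter(tgt_tokens)
    return {
        "BF1_src_length": len(src_tokens),
        "BF2_tgt_length": len(tgt_tokens),
        "BF3_avg_src_token_length": sum(map(len, src_tokens)) / max(len(src_tokens), 1),
        # BF6: total target tokens divided by distinct target tokens.
        "BF6_avg_tgt_occurrences": sum(tgt_counts.values()) / max(len(tgt_counts), 1),
        "BF16_src_punctuation": sum(ch in string.punctuation for ch in source),
        "BF17_tgt_punctuation": sum(ch in string.punctuation for ch in target),
    }

print(baseline_features(
    "The house is small .",
    "La casa es pequena .",
))
```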
Case Study: SDL LW QE System
- Moses-based Decoder Features:
MF1 Distortion cost
MF2 Word penalty cost
MF3 Language-model cost
MF4 Cost of the phrase-probability of source given target $\Phi(s|t)$
MF5 Cost of the word-probability of source given target $\Phi_{lex}(s|t)$
MF6 Cost of the phrase-probability of target given source $\Phi(t|s)$
MF7 Cost of the word-probability of target given source $\Phi_{lex}(t|s)$
MF8 Phrase penalty cost
Case Study: SDL LW QE System
- New LW Features:
- LF1 number of out-of-vocabulary tokens in the source sentence
- LF2 LM perplexity for the source sentence
- LF3 LM perplexity for the target sentence
- LF4 geometric mean ($\lambda$-smoothed) of 1-to-4-gram precision scores (i.e., BLEU score without brevity-penalty) of source sentence against the sentences of $SMT_{src}$ used as “references”
- LF5 geometric mean ($\lambda$-smoothed) of 1-to-4-gram precision scores of target translation against the sentences of $SMT_{trg}$ used as “references”
- LF6 geometric mean ($\lambda$-smoothed) of 1-to-4-gram precision scores of source sentence against the top BLEU-scoring quartile of $Dev_{src}$
- LF7 geometric mean ($\lambda$-smoothed) of 1-to-4-gram precision scores of target translation against the top BLEU-scoring quartile of $Dev_{trg}$
- LF8 geometric mean ($\lambda$-smoothed) of 1-to-4-gram precision scores of source sentence against the bottom BLEU-scoring quartile of $Dev_{src}$
- LF9 geometric mean ($\lambda$-smoothed) of 1-to-4-gram precision scores of target translation against the bottom BLEU-scoring quartile $Dev_{trg}$
- LF10 geometric mean ($\lambda$-smoothed) of 1-to-4-gram precision scores of target translation against a pseudo-reference produced by a second MT Eng-Spa system
- LF11 count of one-to-one (O2O) word alignments between source and target translation
- LF12 ratio of O2O alignments over source sentence
- LF13 ratio of O2O alignments over target translation
- LF14 count of O2O alignments with Part-of-Speech-agreement
- LF15 ratio of O2O alignments with Part-of-Speech-agreement over O2O alignments
- LF16 ratio of O2O alignments with Part-of-Speech-agreement over source
- LF17 ratio of O2O alignments with Part-of-Speech-agreement over target
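To make features like LF5 and LF10 concrete, here is a rough sketch of a λ-smoothed geometric mean of 1-to-4-gram precisions of a hypothesis against a pseudo-reference; the tokenisation, smoothing value, and example sentences are assumptions for illustration, and the original implementation details may differ.

```python
# Sketch of a pseudo-reference feature (LF5/LF10 style): a lambda-smoothed
# geometric mean of 1-to-4-gram precisions of the hypothesis against one or
# more "reference" sentences, i.e. BLEU without the brevity penalty.
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def smoothed_precision_gmean(hypothesis, references, max_n=4, lam=1.0):
    hyp = hypothesis.split()
    refs = [r.split() for r in references]
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngram_counts(hyp, n)
        # Clip each hypothesis n-gram count by its maximum count in any reference.
        max_ref_counts = Counter()
        for ref in refs:
            for ng, c in ngram_counts(ref, n).items():
                max_ref_counts[ng] = max(max_ref_counts[ng], c)
        matches = sum(min(c, max_ref_counts[ng]) for ng, c in hyp_counts.items())
        total = sum(hyp_counts.values())
        precision = (matches + lam) / (total + lam)   # lambda smoothing
        log_precisions.append(math.log(precision))
    return math.exp(sum(log_precisions) / max_n)

# Toy example: MT hypothesis scored against the output of a second MT system
# used as a pseudo-reference (as in LF10).
hypothesis = "la casa es muy pequena"
pseudo_reference = "la casa es pequena"
print(round(smoothed_precision_gmean(hypothesis, [pseudo_reference]), 3))
```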
## Case Study: SDL LW QE System
- **Results – Baseline Features:**
<table>
<thead>
<tr>
<th>Systems</th>
<th>Ranking DeltaAvg</th>
<th>Ranking Spearman-Corr</th>
</tr>
</thead>
<tbody>
<tr>
<td>17 BF with M5P</td>
<td>0.53</td>
<td>0.56</td>
</tr>
<tr>
<td>17 BF with SVR</td>
<td>0.55</td>
<td>0.58</td>
</tr>
<tr>
<td>best-system</td>
<td>0.63</td>
<td>0.64</td>
</tr>
</tbody>
</table>
*Table 1: Performance of the Baseline Features using M5P and SVR models on the test set.*
Case Study: SDL LW QE System
- Results - Moses-based Features:
<table>
<thead>
<tr>
<th>Systems</th>
<th>Ranking DeltaAvg</th>
<th>Ranking Spearman-Corr</th>
<th>MAE</th>
<th>RMSE</th>
<th>Predict. Interval</th>
</tr>
</thead>
<tbody>
<tr>
<td>8 MFs with M5P</td>
<td>0.58</td>
<td>0.58</td>
<td>0.65</td>
<td>0.81</td>
<td>[1.8-5.0]</td>
</tr>
<tr>
<td>best-system</td>
<td>0.63</td>
<td>0.64</td>
<td>0.61</td>
<td>0.75</td>
<td>[1.7-5.0]</td>
</tr>
</tbody>
</table>
Table 2: Performance of the Moses-based Features with an M5P model on the test set.
Case Study: SDL LW QE System
- Results – All Features:
<table>
<thead>
<tr>
<th>Systems</th>
<th>#L.Eq</th>
<th>Dev Set</th>
<th>Test Set</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>DeltaAvg</td>
<td>MAE</td>
</tr>
<tr>
<td>42 FFs with M5P</td>
<td>10</td>
<td>0.60</td>
<td>0.58</td>
</tr>
<tr>
<td><strong>(best-system)</strong> 15 FFs with M5P</td>
<td>2</td>
<td><strong>0.63</strong></td>
<td><strong>0.52</strong></td>
</tr>
<tr>
<td>14 FFs with M5P</td>
<td>6</td>
<td>0.62</td>
<td><strong>0.50</strong></td>
</tr>
</tbody>
</table>
Table 3: M5P-model performance for different feature-function sets (15-FFs ∈ 42-FFs; 14-FFs ∈ 42-FFs).
Case Study: SDL LW QE System
- Best Features:
<table>
<thead>
<tr>
<th>Optimization criterion</th>
<th>Selected features</th>
</tr>
</thead>
<tbody>
<tr>
<td>DeltaAvg optim.</td>
<td>BF1 BF3 BF4 BF6 BF12 BF13 BF14 MF3 MF4 MF6 LF1 LF10 LF14 LF15 LF16</td>
</tr>
<tr>
<td>MAE optim.</td>
<td>BF1 BF3 BF4 BF6 BF12 BF14 BF16 MF3 MF4 MF6 LF1 LF10 LF14 LF17</td>
</tr>
</tbody>
</table>
Table 4: Feature selection results.
Case Study: SDL LW QE System
- **Best Features (MAE Optimal):**
- BF1: number of tokens in the source sentence
- BF3: average source token length
- BF4: LM probability of source sentence
- BF6: average number of occurrences of the target word within the target translation
- BF12: percentage of bigrams in quartile 4 of frequency of source words in $SMT_{src}$
- BF14: percentage of trigrams in quartile 4 of frequency of source words in $SMT_{src}$
- BF16: number of punctuation marks in source sentence
- MF3: Language Model cost
- MF4: cost of the phrase-probability of source given target
- MF6: cost of the phrase-probability of target given source
- LF1: number of out-of-vocabulary tokens in the source sentence
- LF10: geometric mean of 1-to-4-gram precision scores of target translation against a pseudo-reference produced by a second EN-to-ES MT system
- LF14: count of 1-to-1 alignments with Part-of-Speech-agreement
- LF17: ratio of 1-to-1 alignments with Part-of-Speech-agreement over target
Open Issues
- Agreement between Translators:
- Noisy “Gold standard” PE effort data
- Absolute value judgments: difficult to achieve consistency across annotators even in highly controlled setup
- 30% of initial dataset discarded: annotators disagreed by more than one category
- Need for better methodology in establishing PE effort
- HTER is not a great solution:
- **HTER**: Edit distance between MT output and its minimally post-edited version
\[
\text{HTER} = \frac{\#\text{edits}}{\#\text{words\_postedited\_version}}
\]
- Edits: substitute, delete, insert, shift
- Analysis by Maarit Koponen (WMT-12) on post-edited translations with HTER and 1-5 scores
- Translations with low HTER (few edits) but low quality scores (i.e. judged to require high post-editing effort), and vice versa
- Certain edits seem to require more cognitive effort than others - not captured by HTER
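For concreteness, the sketch below computes an HTER-style score as a plain word-level edit distance normalised by the length of the post-edited version; unlike real (H)TER it ignores shift operations, so it is only an approximation of the metric defined above.

```python
# Rough sketch of an HTER-style score: word-level edit distance between the MT
# output and its post-edited version, divided by the length of the post-edited
# version. Shift operations (block moves), which real (H)TER also counts, are
# not handled here.
def word_edit_distance(hyp_tokens, ref_tokens):
    m, n = len(hyp_tokens), len(ref_tokens)
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                      # deletions
    for j in range(n + 1):
        dist[0][j] = j                      # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if hyp_tokens[i - 1] == ref_tokens[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # delete
                             dist[i][j - 1] + 1,        # insert
                             dist[i - 1][j - 1] + cost) # substitute / match
    return dist[m][n]

def approx_hter(mt_output, post_edited):
    hyp, ref = mt_output.split(), post_edited.split()
    return word_edit_distance(hyp, ref) / len(ref)

print(round(approx_hter("the cat sat in mat", "the cat sat on the mat"), 3))
```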
Open Issues
- How to utilize QE scores as estimated post-editing effort scores?
- Should (supposedly) bad quality translations be filtered out or shown to translators (different scores/color codes)?
- Tradeoff of translator wasting time looking at MT segments with bad scores/colors versus translators missing out on useful information
- How to define a threshold on the estimated translation quality to decide which MT segments should be filtered out?
- Translator dependent?
- Task dependent?
- Output quality and project time requirements
- Should the focus instead be on identifying the likely errors in the MT output rather than on estimating how good it is?
Open Issues
Do we really need QE? Can’t we use these features to directly improve or correct the MT output?
- In some cases yes, based on sub-sentence QE/error detection
- Generally, this is very difficult:
- Some linguistically-motivated features can be difficult and expensive to integrate into decoding (e.g. matching of semantic roles)
- Global features are particularly difficult to incorporate into decoding (e.g. coherence given the previous n sentences)
Michael Denkowski’s PhD thesis addresses many of these issues:
- Immediate incremental learning of translation models from translator post-edited segments
- Tuning of features to learn how much to trust such incremental information
- New advanced MT evaluation metrics that directly reflect post-editing effort - optimizing MT systems to such metrics
Conclusions
- It is possible to estimate at least certain aspects of translation quality in terms of PE effort
- PE effort estimates can be used in real applications:
- Ranking translations: filter out bad quality translations
- Selecting translations from multiple MT systems
- Significant and growing commercial interest in this problem
- Challenging research problem with lots of open issues and questions to work on!
A Java Application Development Platform with a Unified Modeling Language (UML) Plug-in (Part I)
Hiroshi NOTO
Contents
1. Introduction
2. Basic Design of Platform Complex
3. Configuration of Combinations of Platform Components
3.1 Combination of NetBeans and UML tool
3.2 Combination of NetBeans and iReport
3.3 Combination of NetBeans and MySQL on GlassFish Server
3.4 Combination of NetBeans Platform and Java DB
4. Case Studies of Running the Platform
4.1 NetBeans + UML
4.2 NetBeans + UML + JSP + GlassFish + MySQL
4.3 NetBeans + UML + MySQL + iReport
1. Introduction
In this article, an object-oriented Java programming environment incorporating the Unified Modeling Language (UML) is introduced as a platform on the computer system.
We have constructed a Java programming platform on which students learn and pursue application development in the five courses that the author is in charge of at the Department of Management and Information, Hokusei Gakuen University. For the last two or three years we have been planning for the programming platform in our courses, which is built on the client computers and servers in the PC rooms of the Information Systems Center of our university, to meet the following requirements:
- to support the Integrated Environment based on Object-Oriented Programming
- to manipulate Domain Modeling prescription
- to incorporate Unified Modeling Language (UML)
- to facilitate using a database system
- to enable us to design reporting layouts
Those requirements, of course, reflect recent progress in software engineering and software development methodologies. A learning and development platform satisfying the above requirements lets the students concentrate on Domain Modeling with the Unified Modeling Language, using the database system, without committing to a particular programming language. At the stage of implementing our domain-modeled deliverable, however, we have to specify a particular programming language.
The purpose of our seminars and lectures is for the students to study and pursue software development practice based on those methodologies without spending much time or effort on learning the grammar of the programming language in depth or on getting used to the platform software. We hope that the students are able to work out their own logic visually, by trial and error, in domain modeling on the introduced platform, and thereby decide on the business models and architecture specifications that will result in their own applications.
In §2 the basic design of our platform complex is presented. §3 explains the configuration of each of several combinations of our platform components used in our courses. §4 sets forth case studies of actually running the platform in our courses. §5 reports some feedback on our platform, and §6 states the conclusion of the present article.
2. Basic Design of Platform Complex
In our seminars and lectures we have adopted Java as the programming language and NetBeans⁶ as the integrated development environment (IDE) for Java. We have decided to take advantage of NetBeans features to help us develop our Java applications efficiently. In addition to its standard support for Java application development, NetBeans extends the IDE either through built-in components or through additional plug-ins. NetBeans also supports many APIs (Application Programming Interfaces) that facilitate NetBeans functions and standalone applications.
The basic design of our platform is a combination of NetBeans and additional plug-ins that meet our demands for our Java programming and software development environment which are described in the previous section. We have realized the following combination of NetBeans and plug-ins: NetBeans for Object-Oriented Programming plus Unified Modeling Language for domain modeling technique and software development architecture. We have implemented Visual Paradigm’s SDE, i.e. SDE EE NB⁷ as the UML tool which comprises a UML design tool and a UML CASE (Computer Aided Software Engineering) tool. In addition to that we further need a database system and a reporting or documentation system. In the present case we have introduced MySQL⁸ as the former (a database system) and iReport⁹ as the latter (a reporting or documentation system). NetBeans generates JPA entities from an existing database schema. Here JPA is Java Persistence API, the standard Object-Relational Mapping tool included with Java EE⁸.
In practice one combination of NetBeans and plug-ins is symbolically written as follows
NetBeans + MySQL + iReport + UML (SDE EE NB)
Those components make up our platform complex, oriented towards domain modeling and visually designing report layouts, for software application development in our courses.
Another combination of NetBeans and additional plug-ins in our courses is that for Web application development. The web application is another important field or subject of software development studies and practices. Our setup of the web application platform is like this: the basic setup mentioned above incorporates web oriented plug-ins: Web Container such as GlassFish application server, Java Frameworks which comprise JSF (Java Server Faces) and/or ICEfaces. The NetBeans web application basically makes web application development efficient by using Servlet APIs and Java Server Pages (JSP). In addition to web development, NetBeans allows us to easily develop Enterprise JavaBeans (EJBs). An EJB is a server-side, reusable component (introduced as part of the Java EE 6 specification) that encapsulates specific business logic and is activated and executed by the EJB container within an application server.
As a result, another combination of NetBeans and plug-ins, for Web application development, is symbolically written as follows:
NetBeans + MySQL + GlassFish + JSP + UML (SDE EE NB)
3. Configuration of Combinations of Platform Components
In this section the configuration of each combination of NetBeans and plug-ins is elaborated. We then briefly explain the NetBeans Platform architecture, but first describe the conceptual structure of the NetBeans IDE. It should be noted that the Java Development Kit (JDK) is a prerequisite installation for NetBeans.
The NetBeans IDE is an open-source Integrated Development Environment that is used throughout our courses and makes it easier for us to develop and deploy our applications. The functionality and characteristics of an IDE are created in the form of modules on top of the NetBeans Platform which is explained in the next paragraph. The base IDE includes those functionalities, such as an advanced multi-language editor, debugger and profiler integration, file versioning control and unique developer collaboration features in addition to Navigator API and Refactoring API and other APIs. The NetBeans profiler can provide important information about the runtime behavior of our application; monitors thread states, CPU performance, and memory usage of our application from within the IDE, and imposes relatively low overhead. In NetBeans “refactoring” is a disciplined technique for improving the structure of existing code without changing the observable behavior, e.g. rename, replace block of code with a method and so on. NetBeans IDE is a very good example of a modular-rich client application. The conceptual structure of the NetBeans IDE is displayed in Fig. 3-1 cited from 12).
The NetBeans Platform is a broad Java framework on which we can base large desktop applications. The basic building block of the NetBeans Platform is modules. A module is a collection of functionally-related classes together with a description of the interfaces that the module exposes. The interface is provided by the Windows System API through the TopComponent class group for multiple windows. The complete NetBeans Platform, as well as the application built on top of it, is divided into modules. These are loaded by the core of the NetBeans Platform which is known as the NetBeans runtime container. The runtime container loads the application’s modules dynamically and automatically. The runtime container is also responsible for running the application. To optimize the encapsulation of code within modules, which is necessary within a modular system, the NetBeans Platform provides its own “classloader system” which is a part of the NetBeans runtime container. Each module is loaded by its classloader. To use the functionality from other modules, a module can declare dependencies on other modules. These dependencies are declared in the module’s manifest file and resolved by the NetBeans runtime container, ensuring that the application always starts up consistently. As shown in Fig. 3-2 cited from 12), the NetBeans Platform itself is formed from a group of core modules which are needed for starting the application and for defining its user interface. To this end the NetBeans Platform makes many API (Application Program Interface) modules and service provider interface (SPI) modules available, simplifying development processes considerably.
NetBeans creates many types of projects. We can develop applications based on those types of projects. The NetBeans Platform application project type is one of the project types built on the NetBeans Platform. In this project only the NetBeans Platform modules are provided by default. However we may have the possibility of accessing any modules of the NetBeans IDE.
Finally we mention the application server in our case. NetBeans needs to be connected to GlassFish server so that applications built with IDE can be easily deployed to the application server. As a result we have our Web application deployed and make it available on the application server.
3.1 Combination of NetBeans and UML tool
In this combination, the Java application gets the capability of visual modeling for the object. An object is a self-contained entity with well-defined characteristics and behaviors while the characteristics and the behaviors are represented by attributes and operations respectively. A class is the generic definition for a set of similar objects and an object is an instance of the class. An object model provides a static conceptual view of an application. It shows the key components (objects) and their relationships (associations) within the application system. In our Visual Paradigm (VP) UML tool, more specifically the Smart Development Environment Enterprise Edition (SDE EE) in the present case, a class diagram can be used to draw the objects and classes inside a system and the relationships between them.
Visual modeling of the object model covers not only creating a new object model but also transforming it from a data model. A data model provides the lower-level detail (entities) of the relational database of an application; it shows the physical database models and their relationships. An Entity Relationship Diagram can be used to describe the entities inside the system and their relationships with each other. As Object-Relational Mapping (ORM) is automated, the database, code and persistence layer can be generated, which in turn streamlines the model-code-deploy software development process.
SDE is not only a visual UML modeling plug-in but also an Object-Relational Mapping (ORM) tool: it automates the mappings between Java objects, the object model, the data model and the relational database. SDE supports not only the generation of persistent code and the database, but also the synchronization between persistent code, object model, data model and relational database, which yields a significant reduction of development time.
In Visual Paradigm UML the object model (a UML diagram, a class diagram in particular) generates the Java persistent code, and the object model maps to the data model (i.e. database creation, property setup and data assignment). The persistent code consists of the objects that make it possible to store and retrieve data in relational databases permanently. This function ultimately makes it easier to develop database applications.
MySQL is a popular open-source Relational Database Management System (RDBMS) commonly used in web applications due to its speed, flexibility and reliability. MySQL employs SQL, or Structured Query Language, for accessing and processing the data contained in databases. MySQL Connector/J is an implementation of Sun's JDBC 3.0 API for the MySQL relational database server, and it strives to conform as closely as possible to the JDBC API. The Java Database Connectivity (JDBC) API is the industry standard for database-independent connectivity between the Java programming language and a wide range of databases. MySQL Connector/J is known to work with application servers, Object-Relational Mapping tools and development environments such as NetBeans.
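As a small illustration of how an application on this platform would reach MySQL through Connector/J, the sketch below opens a plain JDBC connection and runs a query; the connection URL, credentials, table and column names are hypothetical examples rather than values taken from our courses.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection parameters; adjust the host, database, user and password.
        String url = "jdbc:mysql://localhost:3306/sampledb";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM sample_table")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + ": " + rs.getString("name"));
            }
        }
    }
}
```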
3.2 Combination of NetBeans and iReport
The combination of these software components is used for creating reports in NetBeans applications. To this end the iReport plug-in needs to be installed. With the iReport plug-in, a viewer program based on the JasperReports API is required in the NetBeans project. We also need to add the MySQL JDBC driver to the libraries of the NetBeans project, since iReport accesses a database to obtain the data it displays. The Java Database Connectivity (JDBC) technology is used to access the database from NetBeans. The iReport Designer greatly facilitates designing a report layout and printing it.

Fig. 3-4. Combination of NetBeans and iReport
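As a rough sketch of what the report-viewing side of this combination looks like in code, the fragment below compiles a report design, fills it from a JDBC connection and exports it to PDF with the JasperReports API; the file names and connection parameters are hypothetical and only illustrate the typical call sequence.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;

import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.JasperReport;

public class ReportSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical report design created with the iReport Designer.
        JasperReport report = JasperCompileManager.compileReport("invoice.jrxml");

        // Parameters that the report design may expect (none in this sketch).
        Map<String, Object> params = new HashMap<>();

        // The report is filled with data taken directly from a JDBC connection.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/sampledb", "user", "password")) {
            JasperPrint print = JasperFillManager.fillReport(report, params, con);
            JasperExportManager.exportReportToPdfFile(print, "invoice.pdf");
        }
    }
}
```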
3.3 Combination of NetBeans and MySQL on GlassFish Server
This combination is used for a simple web application that connects to a MySQL database server. MySQL employs SQL (Structured Query Language) for accessing and processing the data contained in the database. The most efficient way to implement communication between the server and the database is to set up a database connection pool. Creating a new connection for each client request can be very time-consuming; to avoid this, numerous connections are created in advance and maintained in a connection pool. Any incoming request that requires access to the application's data source uses an already-created connection from the pool. Likewise, when a request is completed, the connection is not closed but returned to the pool.
The GlassFish Server provides a lightweight modular server for the development of Java Platform, Enterprise Edition (Java EE) 5 or 6 applications and Java Web Services; GlassFish is the reference implementation of a Java EE application server. An application server is a piece of software that serves applications through the Internet to provide a service, and Java EE application servers do this by implementing the Java EE specification. GlassFish is characterized by its enterprise performance, scalability and reliability. The main deliverables of GlassFish are an application server, the Java EE 5 or 6 Reference Implementation, and the Java Persistence API Reference Implementation.
As for configuring JDBC connection pools, a connection pool contains a group of JDBC connections that are created when the pool is registered, i.e. when the GlassFish Server starts up or when the connection pool is deployed to the target server or cluster. Connection pools use a JDBC driver to create the physical database connections. Our application borrows a connection from the pool, uses it, and returns it to the pool when it closes the connection.
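The following minimal sketch shows how application code typically borrows one of these pooled connections through a data source registered on the server; the JNDI name and table parameter are hypothetical examples, not resources defined in this paper.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledQuerySketch {
    public static int countRows(String table) throws Exception {
        // Look up a data source registered on the server under a hypothetical JNDI name.
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("jdbc/exampleDataSource");

        // getConnection() borrows a connection from the pool;
        // close() at the end of the try block returns it to the pool.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM " + table);
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}
```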
3.4 Combination of NetBeans Platform and Java DB
The Java DB database system is implemented entirely in Java, and Java DB is delivered as the client database by default in Java Platform 6 and 7. The NetBeans IDE supports Java DB: the actual database system is embedded as the file derby.jar, which also makes its driver available. A Java DB database can be integrated into a NetBeans Platform application. We create entity classes from Java DB and wrap the entity classes into a module, together with modules for the related Java Persistence API interfaces. In this way we obtain the code for accessing the database.
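As a hedged sketch of how the wrapped entity classes are typically used from such a module, the fragment below opens an EntityManager and runs a query; the persistence unit name "SamplePU" and the entity name "Customer" are placeholders for whatever the generated module actually contains.

```java
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class JavaDbAccessSketch {
    public static void main(String[] args) {
        // "SamplePU" is a hypothetical persistence unit configured for the embedded Java DB.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("SamplePU");
        EntityManager em = emf.createEntityManager();
        try {
            // "Customer" stands in for one of the entity classes generated from the database.
            List<?> customers =
                em.createQuery("SELECT c FROM Customer c").getResultList();
            System.out.println("Found " + customers.size() + " rows");
        } finally {
            em.close();
            emf.close();
        }
    }
}
```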
4. Case Studies of Running the Platform
In this section we present several Java application deliverables that have been developed on our Java platforms with specific combinations of NetBeans plus plug-ins in our seminars and lectures in 2012. For each case study, the type of combination is listed symbolically, followed by the name of the deliverable developed in the course. After that the Java application is briefly explained, and finally some screen shots are displayed that show what the application looks like and highlight significant scenes brought about with the help of the plug-ins implemented on our platform.
This application handles the basic operations of making reservations of tickets for events, matches, concerts, books, movies and the like. The basic operations are Show the Ticket Information, Reserve Tickets, Show and/or Confirm Reservations, Change Reservations, Cancel Reservations and Exit.
During application development, class diagrams are constructed for the objects pertaining to the present domain of reserving tickets with the help of the UML tool VP SDE. Figure 4-1 shows an example of the class diagrams drawn for the package "Application", which comprises four classes: Ticket, TicketCatalogue, Reservation and ReserveCatalogue.
As mentioned in the previous section, the Smart Development Environment (SDE) is not only a visual UML modeling tool but also an Object-Relational Mapping plug-in for the IDE, which supports building database applications and automates the mapping between the object model and the data model. This means that SDE is capable of generating Java code from the data model and the object model. We demonstrate the code generation from the class diagram of the package "Application" (see Fig. 4-1) in Fig. 4-2 and List 4-1, where the generated code of the class "Ticket" is displayed.
Fig. 4-1. Class diagram of the package "Application" of the present application drawn with SDE
Fig. 4-2. Screen shot of the generated code on NetBeans from the class diagram "Ticket" on SDE
package Application;
public class Ticket {
public String id;
public String eventName;
public String price;
public String availability;
private String name;
private int initialCount;
private int currentCount;
public String getId() {
return this.id;
}
/**
* @param id
*/
public void setId(String id) {
this.id = id;
}
public String getEventName() {
return this.eventName;
}
/**
* @param eventName
*/
public void setEventName(String eventName) {
this.eventName = eventName;
}
public String getPrice() {
return this.price;
}
/**
* @param price
*/
public void setPrice(String price) {
this.price = price;
}
public String getAvailability() {
return this.availability;
}
/**
* @param availability
*/
public void setAvailability(String availability) {
this.availability = availability;
}
public String getName() {
return this.name;
}
/**
* @param name
*/
public void setName(String name) {
this.name = name;
}
public int getInitialCount() {
return this.initialCount;
}
/**
* @param initialCount
*/
public void setInitialCount(int initialCount) {
this.initialCount = initialCount;
}
public int getCurrentCount() {
return this.currentCount;
}
/**
* @param currentCount
*/
public void setCurrentCount(int currentCount) {
this.currentCount = currentCount;
}
public Ticket() {
throw new UnsupportedOperationException();
}
}
Fig. 4-3 displays the initial menu screen, where each button in the left pane starts the operation shown in the button's text. From Fig. 4-4 to Fig. 4-10 we demonstrate how one screen switches to another as we click the buttons from ViewTicketInfor to CancelReservation.
Fig. 4-3. Initial menu screen
Fig. 4-4. Viewing the "Ticket Information" dealt with in the application
Fig. 4-5. Making reservations for ticket #3
Fig. 4-6. Specifying 10 tickets to be reserved
Fig. 4-7. Processing a reservation which results in the update of availability of ticket #3
Fig. 4-8. Cancelling 4 tickets for ticket #3
4.2 NetBeans + UML + JSP + GlassFish + MySQL
【Application II】
"HirosiIFPWAFCAD System"
We owe this application mostly to a tutorial presented by the NetBeans developers 3). The application is a simple web application that connects to a MySQL database server. The application we build involves the creation of two JSP (JavaServer Pages) pages. In each of them we use HTML and CSS (Cascading Style Sheets) to implement a simple interface, and apply JSTL (JavaServer Pages Standard Tag Library) technology 4) to perform the logic that directly queries the database and inserts the retrieved data into the two pages. The two database tables Subject and Counselor are contained in the MySQL database "mynewdatabase."
We have set up a platform based on the NetBeans IDE incorporating the VP UML tool (SDE EE). The NetBeans platform is connected to the MySQL database and to the Java EE application server with the GlassFish server plugged in. In our web application development, the JavaServer Pages (JSP) technology is used. A JSP page contains HTML tags as well as Java code; because Java code is contained in the web application, JSP is able to process the form data that is sent to it. The configuration of this combination of plug-ins requires the JavaServer Pages Standard Tag Library (JSTL), the Java Database Connectivity (JDBC) API, and a two-tier client-server architecture.
We have constructed the two tables Subject and Counselor in the MySQL database named "mynewdatabase." As stated in Section 3, the GlassFish Server provides the connection pooling functionality. In order to take advantage of this functionality, we have set up a connection pool named "HirosiIfpwafcadPool" and configured a JDBC (Java Database Connectivity) data source "jdbc/hirosiIFPWAFCAD" for the server, which our application can use for connection pooling.
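In the tutorial itself the queries are issued from JSTL tags inside the JSP pages; purely as an illustrative alternative in Java code, the hedged servlet sketch below injects the data source configured above and lists the rows of the Subject table. The servlet name, URL pattern and column name are our own assumptions.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/subjects")
public class SubjectListServlet extends HttpServlet {

    // Injects the pooled data source configured on the GlassFish server.
    @Resource(name = "jdbc/hirosiIFPWAFCAD")
    private DataSource dataSource;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             // The column name "name" is an assumption about the Subject table.
             ResultSet rs = st.executeQuery("SELECT name FROM Subject");
             PrintWriter out = resp.getWriter()) {
            while (rs.next()) {
                out.println(rs.getString("name"));
            }
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}
```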
We show the result of running the present application in the browser. When the first JSP file (index.jsp) appears in the browser, we select a subject from the drop-down list and click the submit button (see Fig. 4-11). We are then forwarded to the second JSP file (response.jsp), which shows details corresponding to our selection (see Fig. 4-12).
4.3 NetBeans + UML + MySQL + iReport
"Arbeit Wage Calculation System"
This application calculates the wages for part-time workers (Arbeit, or Arubaito in Japanese) and prints the invoices for their wages. The basic operations of the system are represented by the buttons' texts in the left pane of the menu screen: Select Arbeit Code, Input Arbeit's Work Hours, Display Arbeit's Wage Invoice, Print Arbeit's Wage Invoice, and Exit (see Fig. 4-13).
As explained in Section 3, the Smart Development Environment (SDE) is also an Object-Relational Mapping (ORM) tool. SDE therefore supports ORM for Java (called Java ORM), and we can reverse engineer Java classes into an object model stereotyped as ORM-Persistable. An ORM-Persistable class is capable of manipulating persistent data in the relational database. This means that the Java classes written in Java code are reverse engineered into the object model, i.e. their corresponding class diagrams. Furthermore, the ORM diagram provides a view of the mapping between an ORM-Persistable class and its entity (called Class Mapping).
In the process of development of the application we reverse engineer the package “UserInterface” which contains the main frame of the Initial Menu Screen into the object model in order to view and confirm the object structure of our application visually. Shown in Fig. 4–13 is the Initial Menu Screen of the system which appears when we invoke the application. The user interface of the system is coded in Java and stored in the package “UserInterface.”

In Fig. 4-14-1 we reverse engineer the "UserInterface" package into the object model, which displays the class structure of the package visually with the ORM mapping capability of SDE implemented on our NetBeans platform. It should be noted that the stereotype ORM-Persistable is assigned above the class name in each class diagram. The reverse engineered class diagrams in the package "UserInterface" are **MainFrame**, **SelectArbeitCodeJPanel**, **InputHoursAndDaysWorkedJPanel**, **DisplayArbeitInvoiceJPanel** and **PrintArbeitInvoiceJPanel**, which correspond to their respective buttons in the Initial Menu Screen, except **MainFrame**, which corresponds to the Initial Menu Screen itself. Furthermore, the mapping between the object model and the data model then brings about the mapping between each ORM-Persistable class and its corresponding entity. In Fig. 4-14-2 the class diagrams in the package "UserInterface" are synchronized with their Entity Relationship Diagram (ERD).
From Fig. 4-15 to Fig. 4-18 we demonstrate how one screen switches to another as we click the buttons from SelectArbeitCode to PrintoutArbeitWageInvoice.
Fig. 4–16. Inputting work hours from Monday to Friday for the selected Arbeit
Fig. 4–17. Displaying the Arbeit Wage Invoice for the selected Arbeit
Fig. 4–18. Printing the Arbeit Wage Invoice in the PDF format
Fig. 4–19. Designing the layout of the Arbeit Wage Invoice in the iReport Designer within the NetBeans work area
Fig. 4–20. The contents of the Arbeit Wage Invoice displayed in Fig. 4–17 are dynamically populated in the workrecord table in the “arbeitwagecalc” database.
Figure 4–19 depicts iReport plug-in’s capability of a visual report designer integrated in the NetBeans platform. Here the “Arbeit Wage Invoice” layout is designed through the main user interface of iReport Designer within the NetBeans work area.
In the "Arbeit Wage Calculation System", the contents of the "Arbeit Wage Invoice" displayed in Fig. 4-17 are dynamically populated from the workrecord table (see the area enclosed by a rectangle in Fig. 4-20) contained in the MySQL database "arbeitwagecalc", which was created in advance and for which we have registered a connection in the NetBeans IDE.
The “Arbeit Wage Invoice” for the part-time workers is printed in the PDF format in Fig. 4–21–1 and Fig. 4–21–2.
References
Elements of a domain model are DomainObject classes, and the relationships between them.
4) http://netbeans.org/
5) http://www.visual-paradigm.com/
6) http://dev.mysql.com/
7) http://community.jaspersoft.com/project/ireport-designer
8) http://docs.oracle.com/javaee/
9) http://glassfish.java.net/
10) http://www.icesoft.org/java/
13) http://netbeans.org/kb/docs/web/mysql-webapp.html
14) http://www.oracle.com/technetwork/java/index-jsp-135995.html
[Abstract]
A Java Application Development Platform with a Unified Modeling Language (UML) Plug-in (Part I)
Hiroshi NOTO
We have introduced a Java application development platform with a Unified Modeling Language tool plug-in in the computer labs of the Information Systems Center of Hokusei Gakuen University. The purpose of setting up our platform is to allow the students of our courses to manipulate their own logic visually, by trial and error, during domain modeling on this platform, and so to decide on the business models and architecture specifications that will result in their own applications. In our seminars and lectures, we have adopted Java as the programming language and NetBeans as an integrated development environment (IDE) for Java, taking advantage of NetBeans' features to help develop Java applications efficiently. The basic design of our platform complex is presented. Each configuration of several combinations of platform components is elaborated on according to each course. We present case studies of actually running the platform in our courses.
Key words: Java Application, Database Management System, NetBeans IDE, Unified Modeling Language (UML), Reporting System
Web Programming
Lecture 9 – Introduction to Ruby
Origins of Ruby
- Ruby was designed by Yukihiro Matsumoto ("Matz") and released in 1996.
- It was designed to replace Perl and Python, which Matz considered inadequate.
- It grew quickly in Japan and then spread around the world.
- Its expansion was a result of the increasing popularity of Rails, a Web development framework that was written in Ruby and that uses Ruby.
## Uses of Ruby
- Because Ruby is implemented by pure interpretation, it’s easy to use.
- Example
```ruby
irb(main):001:0> puts "hello, world"
hello, world
=> nil
```
- Ruby uses regular expressions and implicit variables like Perl, objects like JavaScript but is quite different from these languages.
## Scalar Types in Ruby
- Ruby has three categories of data types:
- Scalars – either numerics or character strings.
- Arrays – which use dynamic sizing
- Hashes – associative arrays, similar to PHP.
- Everything in Ruby is an object
Numeric and String Literals
• All numeric data types are derived from the base class `Numeric`, which has two derived classes, `Float` and `Integer`.
**Integer Literals**
• `Integer` has two derived classes:
– `Fixnum` - fits the range of a machine word (usually 32 bits).
– `Bignum` - numbers outside the range of `Fixnum` (if an operation on `Bignum` produces a smaller value, it will be coerced into `Fixnum`).
• Ruby ignores underscores in integer literals so they can be more readable.
– `1_234_567_890` is more readable than `1234567890`
**Float Literals**
- A numeric literal with either an embedded decimal point or an exponent following it is a Float object.
- Float objects are stored as double-precision floating point numbers.
- Decimal points must have a digit on both sides of it.
**String Literals**
- All string literals are String objects, which are sequences of bytes that represent characters.
- String objects are either single-quoted or double-quoted.
Single-Quoted **String** Literals
- Single quoted strings cannot have escape sequences.
- Examples
- 'I’ll meet you at O’Malleys'
the inner apostrophes are included correctly.
- 'Some apples are red, \n some are green.'
contains a backslash followed by n (not a newline).
Delimiting Single-Quoted **String** Literals
- You can use a different delimiter by beginning the string with %q followed by another character. It will even match up braces, brackets or parentheses.
- Examples
- %q$Don’t you think she’s pretty$
- %q<don’t you think she’s pretty>
Double-Quoted String Literals
- Double-quoted strings can contain the special characters specified by escape sequences. And the values of variable names can be substituted into the string.
- Example
- “Runs \t Hits \t Errors” will include the expected tabs
- For a different delimiter for double-quoted strings, begin the string with %Q:
- %Q@“Why not learn Ruby”, he asked.@
Naming Local Variables
- A local variable is neither a class variable nor an instance variable; it belongs to the block, method definition, etc. in which it is located.
- Local variable names begin with a lowercase letter or an underscore, followed by letters, digits or underscores. While variable names are case-sensitive, the convention is not to use uppercase letters.
Using Variables In Strings
- The value associated with a local variable can be inserted in a double-quoted string:
- "Tuesday’s high temperature was #{tue_high} "
- is printed as
"Tuesday’s high temperature was 83"
- Everything in Ruby is an object, so we are really working with their references, which are typeless. As a result, all variables are implicitly declared (how we use them determines their type).
Constants in Ruby
- Constants in Ruby begin with an uppercase letter.
- A constant is created by assigning it a value, which can be any constant expression.
- Constants in Ruby can be assigned new values, but there will be a warning message.
Predefined Variables
- Ruby has predefined variables (like Perl), which consist of $ followed by a special character.
- Examples - $_, $^, $\n
Numerical Operators
<table>
<thead>
<tr>
<th>Operator</th>
<th>Associativity</th>
</tr>
</thead>
<tbody>
<tr>
<td>**</td>
<td>Right</td>
</tr>
<tr>
<td>Unary +, -</td>
<td>Right</td>
</tr>
<tr>
<td>*, /, %</td>
<td>Left</td>
</tr>
<tr>
<td>Binary +, -</td>
<td>Left</td>
</tr>
</tbody>
</table>
Assignment Statements
- Assignment statements are like those in C-based languages.
- Ruby includes the Math module, which has basic trigonometric and transcendental functions, including:
- `Math.cos` (cosine)
- `Math.sin` (sine)
- `Math.log` (logarithm)
- `Math.sqrt` (square root)
- All of these return a `Float` value.
Interactive Ruby (`irb`)
```ruby
irb(main):001:0> 17*3
=> 51
irb(main):002:0> conf.prompt_i = ">>>"
=> ">>>"
```
""
**String Methods**
- Ruby’s `String` class has over 75 methods, many of which can be used as if they were operators.
- These include:
- `+` - concatenation
- `<<` - append
- Example
```ruby
>> "Happy" + " " + "Holidays!"
=> "Happy Holidays!"
>>
```
**Assigning String Values**
- `<<` appends a string to the right of another string.
```ruby
irb(main):001:0> mystr = "G'day, "
=> "G'day, "
irb(main):002:0> mystr << "mate"
=> "G'day, mate"
irb(main):003:0>
```
- This created the string literal and assigned its reference to `mystr`.
Assigning **String** Values (continued)
```ruby
irb(main):003:0> mystr = "Wow!"
=> "Wow!"
irb(main):004:0> yourstr = mystr
=> "Wow!"
irb(main):005:0> yourstr
=> "Wow!"
irb(main):006:0>
```
• Ruby assigned `yourstr` a copy of the same reference that `mystr` held.
Assigning **String** Values (continued)
```ruby
irb(main):001:0> mystr = "Wow!"
=> "Wow!"
irb(main):002:0> yourstr = mystr
=> "Wow!"
irb(main):003:0> mystr = "What?"
=> "What?"
irb(main):004:0> yourstr
=> "Wow!"
irb(main):005:0>
```
• After the assignment, `yourstr` has the same reference as `mystr`. But when `mystr` is assigned a different string literal, Ruby sets aside another memory location for the new literal and that is the reference that `mystr` now holds.
Assigning **String** Values (continued)
- If you want to change the value in the location that `mystr` references but have `mystr` reference the same location in memory, use the `replace` method:
```ruby
irb(main):001:0> mystr = "Wow!"
=> "Wow!"
irb(main):002:0> yourstr = mystr
=> "Wow!"
irb(main):003:0> mystr.replace("Golly!")
=> "Golly!"
irb(main):004:0> mystr
=> "Golly!"
irb(main):005:0> yourstr
=> "Golly!"
irb(main):006:0>
```
- You can also use `+=` to perform the append operation.
```ruby
irb(main):001:0> mystr = "check"
=> "check"
irb(main):002:0> mystr += "mate"
=> "checkmate"
irb(main):003:0>
```
### Commonly Used String Methods
<table>
<thead>
<tr>
<th>Method</th>
<th>Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>capitalize</td>
<td>Converts the first letter to uppercase and the rest of the letters to lowercase</td>
</tr>
<tr>
<td>chop</td>
<td>Removes the last character</td>
</tr>
<tr>
<td>chomp</td>
<td>Removes a newline from the right end if there is one</td>
</tr>
<tr>
<td>upcase</td>
<td>Converts all of the lowercase letters in the object to uppercase</td>
</tr>
<tr>
<td>downcase</td>
<td>Converts all of the uppercase letters in the objects to lowercase</td>
</tr>
<tr>
<td>strip</td>
<td>Removes the spaces on both ends</td>
</tr>
<tr>
<td>lstrip</td>
<td>Removes the spaces on the left end</td>
</tr>
<tr>
<td>rstrip</td>
<td>Removes the spaces on the right end</td>
</tr>
<tr>
<td>reverse</td>
<td>Reverses the characters of the string</td>
</tr>
<tr>
<td>swapcase</td>
<td>Converts all uppercase letters to lowercase and all lowercase letters to uppercase</td>
</tr>
</tbody>
</table>
- The methods mentioned before produce a new string and do NOT modify the given string in place.
- If you wish to modify the string instead of producing a new string, place a ! at the end of the method name. Such methods are called **bang methods** or **mutator methods**.
Mutator Methods – An Example
```
irb(main):001:0> str = "Frank"
=> "Frank"
irb(main):002:0> str.upcase
=> "FRANK"
irb(main):003:0> str
=> "Frank"
irb(main):004:0> str.upcase!
=> "FRANK"
irb(main):005:0> str
=> "FRANK"
irb(main):006:0>
```
Ruby Strings as Arrays
- Ruby strings can be indexed, in a manner similar to arrays, with indices starting at 0.
- The brackets serve as an accessor for a single character, returned as an ASCII value. If you want the character itself, use the `chr` method.
- More recent implementations of Ruby may return the character instead of ASCII value for the `[]` operator.
Ruby Strings as Arrays – An Example
```
irb(main):006:0> str = "Shelley"
=> "Shelley"
irb(main):007:0> str[1]
=> "h"
irb(main):008:0> str[1].chr
=> "h"
```
Ruby Strings and Substring
• A multicharacter substring can be accessed by specifying the starting character and number of characters in the substring:
```
irb(main):009:0> str = "Shelley"
=> "Shelley"
irb(main):010:0> str[2,4]
=> "elle"
irb(main):011:0>
```
Changing a String With a Substring
• The `[]=` operator can be used to specify the characters of a substring and what they are to be changed to:
```
irb(main):013:0> str = "Donald"
=> "Donald"
irb(main):014:0> str[3,3] = "nie"
=> "nie"
irb(main):015:0> str
=> "Donnie"
irb(main):016:0>
```
Comparing Strings for Equality
• == is used to see if two strings have the same content.
• equal? tests to see if both are the same object
• Example
```
irb(main):016:0> "snowstorm" == "snowstorm"
=> true
irb(main):017:0> "snowie" == "snowy"
=> false
irb(main):018:0> "snowstorm".equal?("snowstorm")
=> false
irb(main):019:0>
```
Comparing Numeric Values
• The `==` operator determines if the values are equivalent regardless of type.
• The `eql?` operator returns true if the types and values match.
```
irb(main):023:0> 7 == 7.0
=> true
irb(main):024:0> 7.eql?(7.0)
=> false
irb(main):025:0>
```
• The `<=>` operator compares two values and returns -1 if the second operand is greater than the first, 0 if they are equal and 1 if the first is greater than the second.
- Examples
```ruby
irb(main):025:0> 7 <=> 5
=> 1
irb(main):026:0> "grape" <=> "grape"
=> 0
irb(main):027:0> "grape" <=> "apple"
=> 1
irb(main):030:0> "apple" <=> "prune"
=> -1
irb(main):031:0>
```
Repetition Operator (*)
- The repetition operator (*) takes a string as its left operand and a numeric expression as its right operand and replicates the left operand as many times as indicated by the right operand.
- Example
```ruby
irb(main):031:0> "More!" * 3
=> "More!More!More!"
irb(main):032:0>
```
Screen Output
- Output is directed to the screen using the puts method (or operator).
- The operand for puts is a string literal with a newline implicitly appended to the end.
- A variable’s value can be included in the string by writing `#{variableName}`
- `print` works in the same way except that it does not append a newline.
- `sprintf` works as it does in C, allowing for formatted output.
Screen Output – An Example
```
irb(main):032:0> name = "Pudgy"
=> "Pudgy"
irb(main):033:0> puts "My name is #{name}"
My name is Pudgy
=> nil
irb(main):034:0> print "My name is #{name}"
My name is Pudgy=> nil
irb(main):035:0> total = 10
=> 10
irb(main):036:0> str = sprintf("%5.2f", total)
=> "10.00"
irb(main):037:0>
```
Keyboard Input
• The gets method gets a line of input from the keyboard. The retrieved line includes the newline character. You can get rid of it with chomp:
```ruby
irb(main):037:0> name = gets
apple
=> "apple\n"
irb(main):038:0> name = name.chomp
=> "apple"
irb(main):039:0> name = gets.chomp
apple
=> "apple"
irb(main):040:0>
```
Keyboard Input (continued)
• Since the input is taken to be a string, it needs to be converted if it is numeric:
```ruby
irb(main):042:0> age = gets.to_i
29
=> 29
irb(main):043:0> age = gets.to_f
28.9
=> 28.9
irb(main):044:0>
```
quadeval.rb
#quadeval.rb - A simple Ruby program
# Input: Four numbers, representing the values of
# a, b, c, and x
# output: The value of the expression
# a*x**2 + b*x + c
# Get input
puts "please input the value of a"
a = gets.to_i
puts "please input the value of b"
b = gets.to_i
puts "please input the value of c"
c = gets.to_i
puts "Please input the value of x"
x = gets.to_i
# compute and display the result
result = a * x ** 2 + b * x + c
puts "The value of the expression is #{result}"
Running `quadeval.rb`
```
C:\>ruby quadeval.rb
please input the value of a
1
please input the value of b
2
please input the value of c
1
Please input the value of x
5
The value of the expression is 36
C:\>
```
---
Relational Operators
<table>
<thead>
<tr>
<th>Operator</th>
<th>Operation</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>==</code></td>
<td>Is equal to</td>
</tr>
<tr>
<td><code>!=</code></td>
<td>Is not equal to</td>
</tr>
<tr>
<td><code><</code></td>
<td>Is less than</td>
</tr>
<tr>
<td><code>></code></td>
<td>Is greater than</td>
</tr>
<tr>
<td><code><=</code></td>
<td>Is less than or equal to</td>
</tr>
<tr>
<td><code>>=</code></td>
<td>Is greater than or equal to</td>
</tr>
<tr>
<td><code><=></code></td>
<td>Compare, returning -1, 0 or +1</td>
</tr>
<tr>
<td><code>eql?</code></td>
<td>True if the receiver object and the parameter have the same type and equal values</td>
</tr>
<tr>
<td><code>equal?</code></td>
<td>True if the receiver object and the parameter have the same object ID</td>
</tr>
</tbody>
</table>
Operator Precedence
<table>
<thead>
<tr>
<th>Operator</th>
<th>Associativity</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>**</code></td>
<td>Right</td>
</tr>
<tr>
<td><code>!, unary + and -</code></td>
<td>Right</td>
</tr>
<tr>
<td><code>*, /, %</code></td>
<td>Left</td>
</tr>
<tr>
<td><code>binary +, -</code></td>
<td>Left</td>
</tr>
<tr>
<td><code>==, !=, <=></code></td>
<td>Nonassociative</td>
</tr>
<tr>
<td><code>&&</code></td>
<td>Left</td>
</tr>
<tr>
<td><code>||</code></td>
<td>Left</td>
</tr>
<tr>
<td><code>=, +=, -=, *=, /=, %=, &&=, ||=</code></td>
<td>Right</td>
</tr>
</tbody>
</table>
if Statement in Ruby
- `if` statements in Ruby do not require parentheses around the control expression, but they do require `end`:
```ruby
irb(main):045:0> if a > 10
irb(main):046:1> b = a * 2
irb(main):047:1> end
=> nil
irb(main):048:0>
```
if..elsif..else
if snowrate < 1
puts "Light snow"
elsif snowrate < 2
puts "Moderate snow"
else
puts "Heavy snow"
end
unless Statement
The `unless` statement is the opposite of the `if` statement
unless sum > 100
puts "We are not finished yet!"
end
# case Statements
<table>
<thead>
<tr>
<th>case Expression</th>
<th>case BooleanExpression</th>
</tr>
</thead>
<tbody>
<tr>
<td>when value then Statement</td>
<td>when value then Statement</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>when value then Statement</td>
<td>when value then Statement</td>
</tr>
<tr>
<td>[else Statement]</td>
<td>else Expression</td>
</tr>
<tr>
<td>end</td>
<td>end</td>
</tr>
</tbody>
</table>
## case – An Example
```ruby
case in_val
when -1 then
neg_count += 1
when 0 then
zero_count += 1
when 1 then
pos_count += 1
else
puts "Error - in_val is out of range"
end
```
case – An Example
leap = case
when year % 400 == 0 then true
when year % 100 == 0 then false
else year % 4 == 0
end
while Statement
- The syntax for a while statement:
while ControlExpression
Statement(s)
end
- Example
i = 0
while i < 5 do
puts i
i += 1
end
**until Statement**
- The syntax for a `until` statement:
```ruby
until ControlExpression
Statement(s)
end
```
- Example
```ruby
i = 4
until i >= 0 do
puts i
i -= 1
end
```
**loop Statement**
- `loop` statements are infinite loops – there is no built-in mechanism to limit their iterations.
- `loop` statements can be controlled using:
- the `break` statement – which jumps to the first statement after the loop
- the `next` statement – which goes back to the first statement within the loop (i.e. starts the next iteration)
loop Statement - Examples
```ruby
sum = 0
# break example: leave the loop when a negative value is read
loop do
  dat = gets.to_i
  break if dat < 0
  sum += dat
end

sum = 0
# next example: skip negative values and keep looping
loop do
  dat = gets.to_i
  next if dat < 0
  sum += dat
end
```
Arrays in Ruby
- In Ruby, array size is dynamic, growing and shrinking as necessary
- Arrays in Ruby can store different types of data in the same array.
- Arrays can be created by:
- Using the predefined Array class.
- Assign a list literal to a variable.
Initializing Arrays - Examples
irb(main):001:0> list1 = Array.new(5)
=> [nil, nil, nil, nil, nil]
irb(main):002:0> list2 = [2, 4, 3.14159, "Fred", []]
=> [2, 4, 3.14159, "Fred", []]
irb(main):003:0> list3 = Array.new(5, "Ho")
=> ["Ho", "Ho", "Ho", "Ho", "Ho"]
irb(main):004:0>
Working With Arrays - Examples
irb(main):004:0> list = [2, 4, 6, 8]
=> [2, 4, 6, 8]
irb(main):005:0> second = list[1]
=> 4
irb(main):006:0> list[3] = 9
=> 9
irb(main):007:0> list
=> [2, 4, 6, 9]
irb(main):009:0> list[2.99999] # indices are truncated
=> 6
irb(main):010:0> len = list.length
=> 4
irb(main):011:0>
for-in Statement
- The **for-in** statement is used to process elements of an array.
- The scalar variable takes on the values in the array one at a time.
- The scalar variable gets the **value, not** a reference to a value. Therefore, operations on the scalar variable do not affect the array.
for-in Statement – An Example
```ruby
irb(main):001:0> sum = 0
=> 0
irb(main):002:0> list = [2, 4, 6, 8]
=> [2, 4, 6, 8]
irb(main):003:0> for value in list
irb(main):004:1> sum += value
irb(main):005:1> end
=> [2, 4, 6, 8]
irb(main):006:0> sum
=> 20
irb(main):007:0>
```
for-in Statement – Another Example
```ruby
irb(main):001> list = [1, 3, 5, 7]
=> [1, 3, 5, 7]
irb(main):002> for value in list
value += 2
irb(main):003> end
=> [1, 3, 5, 7]
irb(main):004> list
=> [1, 3, 5, 7]
irb(main):005>
```
for-in Statement – Another Example
```ruby
irb(main):001> list = [2, 4, 6]
=> [2, 4, 6]
irb(main):002> for index in [0, 1, 2]
puts "For index = #{index}, the value is #{list[index]}"
irb(main):003> end
For index = 0, the value is 2
For index = 1, the value is 4
For index = 2, the value is 6
=> [0, 1, 2]
irb(main):005>
```
Built-in Methods for Arrays and Lists
- There are many built-in methods that are a part of Ruby. They include:
- **shift** – removes and returns the first element of the list
- **pop** – removes and returns the last element of the list
- **unshift** – takes a scalar or an array literal and adds it to the beginning of the array.
- **push** - takes a scalar or an array literal and appends it to the end of the array.
Built-in Methods for Arrays and Lists
- There are many built-in methods that are a part of Ruby. They include:
- **+** - catenates two arrays
- **reverse** – returns an array with the order of elements of the array reversed
- **include?** – returns true if the specified object is in the array.
- **sort** – sorts elements as long as Ruby has a way to compare them.
**shift – An Example**
```ruby
irb(main):001:0> list = [3, 7, 13, 17]
=> [3, 7, 13, 17]
irb(main):002:0> first = list.shift
=> 3
irb(main):003:0> list
=> [7, 13, 17]
irb(main):004:0>
```
**pop – An Example**
```ruby
irb(main):004:0> list = [2, 4, 6]
=> [2, 4, 6]
irb(main):005:0> last = list.pop
=> 6
irb(main):006:0> list
=> [2, 4]
irb(main):007:0>
```
unshift – An Example
• `irb(main):009:0> list = [2, 4, 6]`
• => `[2, 4, 6]`
• `irb(main):010:0> list.unshift(8, 10)`
• => `[8, 10, 2, 4, 6]`
• `irb(main):011:0>`
push – An Example
• `irb(main):007:0> list = [2, 4, 6]`
• => `[2, 4, 6]`
• `irb(main):008:0> list.push(8, 10)`
• => `[2, 4, 6, 8, 10]`
• `irb(main):009:0> `
concat - An Example
irb(main):011:0> list1 = [1, 3, 5, 7]
=> [1, 3, 5, 7]
irb(main):012:0> list2 = [2, 4, 6, 8]
=> [2, 4, 6, 8]
irb(main):013:0> list1.concat(list2)
=> [1, 3, 5, 7, 2, 4, 6, 8]
irb(main):014:0>
+ - An Example
irb(main):014:0> list1 = [1, 3, 5, 7]
=> [1, 3, 5, 7]
irb(main):015:0> list2 = [2, 4, 6, 8]
=> [2, 4, 6, 8]
irb(main):016:0> list3 = list1 + list2
=> [1, 3, 5, 7, 2, 4, 6, 8]
irb(main):017:0>
**reverse – An Example**
```ruby
irb(main):018:0> list = [2, 4, 6, 8]
=> [2, 4, 6, 8]
irb(main):019:0> list.reverse
=> [8, 6, 4, 2]
irb(main):020:0> list
=> [2, 4, 6, 8]
irb(main):021:0> list.reverse!
=> [8, 6, 4, 2]
irb(main):022:0> list
=> [8, 6, 4, 2]
irb(main):023:0>
```
**include? – An Example**
```ruby
irb(main):023:0> list = [2, 4, 6, 8]
#=> [2, 4, 6, 8]
irb(main):024:0> list.include?(4)
#=> true
irb(main):025:0> list.include?(10)
#=> false
irb(main):026:0>
```
sort – An Example
irb(main):028:0> list = [16, 8, 2, 4]
=> [16, 8, 2, 4]
irb(main):029:0> list.sort
=> [2, 4, 8, 16]
irb(main):030:0> list2 = ["jo", "fred", "mike", "larry"]
=> ["jo", "fred", "mike", "larry"]
irb(main):031:0> list2.sort
=> ["fred", "jo", "larry", "mike"]
irb(main):032:0> list = [2, "jo", 8, "fred"]
=> [2, "jo", 8, "fred"]
irb(main):033:0> list.sort
ArgumentError: comparison of Fixnum with String failed
from (irb):33:in `sort'
from (irb):33
from C:/Ruby193/bin/irb:12:in `<main>'
irb(main):034:0>
SOA Security -
Secure Cross-Organizational Service Composition
Michael Menzel¹, Ivonne Thomas¹, Christian Wolter², Christoph Meinel¹
¹Hasso-Plattner-Institute,
{michael.menzel, ivonne.thomas, meinel}@hpi.uni-potsdam.de
²SAP Research, CEC Karlsruhe,
{christian.wolter}@sap.com
Abstract: Service-oriented Architectures (SOA) facilitate the interoperable and seamless interaction of services. The need to communicate with business partners demands a seamless integration of services across organizational boundaries. In fact, the integration and composition of services represent important aspects of a SOA to enable an increased responsiveness to changing business requirements. However, the interaction of independent organizations requires the establishment of trust across all involved business partners as a prerequisite to ensure secure interactions. In particular, the integration of scalable security solutions into SOA is highly demanded. This paper outlines approaches and open issues regarding secure service compositions and cross-organizational service invocation. Finally, new approaches are described to overcome current limitations regarding the dynamic composition of services based on semantic technologies, the specification and modeling of security requirements in business processes and the management of security policies based on trust levels.
1 Introduction
Service-oriented architectures are an abstract concept which exposes capabilities in distributed, domain-spanning environments as services[MLM+06]. In general, SOA facilitates the interoperable and seamless interaction of service consumer and service provider to meet the consumer’s needs by the service’s capabilities. Several key aspects can be derived from this paradigm as described in [Erl05]: Loose coupling to reduce dependencies between services, service contracts to define interaction agreements, autonomy and abstraction to hide service logic, reusability and composability of services, statelessness to minimize the information specific to an activity, and discoverability to enable visibility of services.
The SOA paradigm provides a vast amount of flexibility in the way complex software systems are implemented. Especially in terms of an enterprise SOA, composability and reusability of services are the important concepts enabling the mapping of capabilities exposed as services to abstract activities in complex business processes that can be rearranged in an easy way at any time. Furthermore, the cooperation with business partners demands the utilization of capabilities across organizational boundaries. The involvement of independent trust domains constitutes the key aspect regarding security in service-oriented architectures. Collaborations requiring the integration of foreign services represent a considerable security threat.
The important question to address is: How can security be assured in such an unsteady environment while preserving scalability and flexibility? In traditional software systems, authentication and authorization are performed in a relatively fixed manner with a dedicated registration and authentication process which was chosen at the time of design. This is not the case in SOA anymore. The exchange of simple security credentials is insufficient when multiple trust domains are involved. Each domain may have a different understanding of security attributes (such as business roles), may support different security mechanisms and may require different information for access control. In addition, users may have multiple accounts registered with different service providers.
In this paper we provide a classification of security concepts to guarantee security goals, a description of standards implementing these concepts and, finally, introduce new approaches to overcome revealed limitations concerning the secure composition of services. Our solutions are based on modeling concepts, semantic technologies and trust levels to express, manage and negotiate security requirements in a technology-independent way.
The rest of this paper is organized as follows. In Section 2 we introduce basic concepts to guarantee the security goals confidentiality, integrity, authentication, and authorization. A classification of approaches to implement access control in SOA is presented as well. Section 3 presents various security mechanisms, which were developed or adapted to address these new security requirements and concepts. Afterwards, in section 4 we will discuss open issues regarding secure service compositions, which are in the focus of our research group, and propose solutions for selected problems. The last Section concludes this paper.
## 2 Classifying Security Solutions for SOA
The abstract concept of security can be defined precisely by specifying a set of security goals [PP02]. In this chapter we will present security concepts regarding the characteristics of SOA stated above. Due to space limitations, the concepts introduced in this section are related to the security goals Authorization, Authentication, Integrity, and Confidentiality. In general, we can distinguish the concepts related to confidentiality and integrity in SOA from those realizing authentication and authorization. Confidentiality and integrity provide protection of stored, processed, or transferred information in terms of properness and secrecy, while authentication and authorization are related to a digital identity regarding the establishment of trust and granting permissions to identities.
### 2.1 Protecting stored, processed, or transferred information
Traditional security solutions enabling a secure communication regarding confidentiality and integrity - such as SSL - provide transport security by creating a secure pipe between
two hosts. Since security mechanisms are just related to the secure pipe, these solutions are not sufficient to secure information permanently. Since messages can be passed through several intermediaries in a SOA based on document exchange (i.e. Web Services using SOAP), mechanisms are required that are applied to the message itself to preserve this security information. This facilitates compliance and enables that some parts of a message, which are important to the involved intermediaries, can be kept visible. Further, different security mechanisms can be applied to different message types. However, enhanced flexibility provided by message-based security comes also along with increased complexity, since different security mechanisms may be required by service consumer and service provider. Security requirements of services regarding confidentiality and integrity have to be described by security policies and negotiated with the service consumer.
2.2 Authenticating and authorizing a digital identity
A digital identity consists of several personal attributes with different privacy requirements that unambiguously represent a related subject. The process of authenticating the subject’s identity information establishes a trust relationship between a subject and a party that relies on claims stated by the subject. Authorization concerns the determination of rights granted to the subject based on the quality of the trust relationship and attributes that are related to the subject’s identity.
Security solutions that facilitate a trusted service invocation in SOA can be categorized in three groups based on the distribution of authentication and authorization information [MWM07]. For each category we present a short description along with some examples.
2.2.1 Service Managed Policies
Approaches based on Service Managed Policies enable the service to store and handle all information for access control. The identity of the service requester and its role are usually the most important aspects to grant access. Since all this information needs to be maintained for each user who is allowed to access the service, an initial registration of users is required to create a new account in a particular trust domain (cf. Figure 1). However, this approach requires the user to maintain different accounts and to reauthenticate when he tries to access a service in another domain. Moreover, the user has to adopt the authentication method specified by the service provider. The interaction between user and service provider will fail if different security infrastructures are used, probably supporting incompatible ways for authentication.
For example, security solutions in this category may implement identity-based access control based on a public key infrastructure (PKI). Infrastructure components are linked to keystores containing the certificates of either authorized users or the issuing certificate authority. Although a basic secure cross-domain invocation of Web Services is enabled by using a PKI, the general problem remains that such a trust domain cannot interact with a
domain that is based on another security solution, such as Kerberos.
2.2.2 Equal Sharing of Policy Information
Equal sharing means that the policy information is maintained by the client along with the service provider. This can be realized based on direct policy exchange, a central federation policy repository, or dedicated authentication/authorization services (cf. Figure 2). This approach simplifies administrative aspects and represents a common way to realize security solutions in collaborations. Moreover, it constitutes the traditional way to implement single sign-on based on a central database.
Nevertheless, the establishment of a collaboration to enable cross-domain service interaction is complicated due to the necessity to adopt the central security settings in each local infrastructure. Moreover, domain-specific, individual security requirements are hard to support with this approach.
2.2.3 User Managed Policies
In the context of User Managed Policies the identities of the service users - and therefore the authentication policies - are managed and known solely in the user domain. The service provider may store some local policies necessary on the provider's side to define requirements for access control, but no cross-domain policies are used - the policies of each trust domain are restricted to that domain.
This approach is based on an identity federation. The key concept in an identity federation is the brokering of trust whereby all parties in a federation are willing to rely on assertions representing claims about a digital identity. For instance, these claims can represent authentication/authorization decisions to implement single sign-on, can state permissions such as ‘the user is allowed to perform orders that are limited to 10,000 Euro’ or additional information such as the authentication context.
Trust is usually stipulated by contracts specifying the business relationships and technically realized using security tokens that contain the assertions. Dedicated components (Identity Providers) in a federation are able to assert identity attributes that can be promoted to service providers acting as Relying Parties (RP).
A service will grant access based on this asserted user information if the asserting authority in the user domain is trustworthy. Since trustworthy communication is enabled even though the user is unknown to the service, this approach provides the scalability and flexibility that is needed in a SOA composing independent services. Furthermore, each domain is able to use its own security model independently of the others.
Although this approach decouples the security infrastructures used in the different trust domains, a common understanding of the exchanged attributes is still required. For example, the involved organisations may have a different understanding of roles and identities. This requires mapping mechanisms to translate these attributes.
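To make the brokered-trust idea more concrete, the following sketch shows a relying party that accepts asserted claims only from trusted issuers and maps a partner's role names onto its local vocabulary before authorizing a request. It is a minimal illustration of the concepts described above, not an implementation of any particular federation standard; the issuer names, roles and order limit are hypothetical.

```python
# Minimal sketch of a relying party in an identity federation (hypothetical
# issuers, roles and limits; not tied to any concrete federation standard).
from dataclasses import dataclass

TRUSTED_ISSUERS = {"idp.partner-a.example"}          # asserting authorities we rely on
ROLE_MAPPING = {"Einkaeufer": "purchaser"}           # translate foreign role names

@dataclass
class Assertion:
    issuer: str                  # identity provider that asserted the claims
    claims: dict                 # e.g. {"role": "Einkaeufer", "order_limit": 10000}

def authorize_order(assertion: Assertion, amount: int) -> bool:
    # 1. Brokered trust: only accept claims asserted by a trusted authority.
    if assertion.issuer not in TRUSTED_ISSUERS:
        return False
    # 2. Attribute mapping: translate the partner's role into the local vocabulary.
    local_role = ROLE_MAPPING.get(assertion.claims.get("role", ""))
    if local_role != "purchaser":
        return False
    # 3. Enforce the asserted permission, e.g. 'orders limited to 10,000 Euro'.
    return amount <= assertion.claims.get("order_limit", 0)

# Example: a claim issued by a trusted partner IdP
print(authorize_order(Assertion("idp.partner-a.example",
                                {"role": "Einkaeufer", "order_limit": 10000}), 7500))
```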
Figure 3: Client Managed Policies
3 Security Solutions in SOA
This section presents a selection of standards and mechanisms to secure a SOA based on the concepts which were introduced in the previous section.
3.1 WS-Security Standard
WS-Security was proposed as a standard by Microsoft and IBM [IBM02] in 2002 and was established as an OASIS standard in 2004. This standard defines enhancements to SOAP messaging in order to provide security for the messages transmitted between a consumer and a provider. For this purpose, additional information is included in the header of a SOAP message based on further specifications such as XML Encryption for message confidentiality, XML Signature for message integrity, and many more. WS-Security can be used to apply a wide range of different security technologies and models such as X.509 certificates and Kerberos.
3.2 Security Tokens
As described in the previous section, security tokens are an important concept to build trust at the technical layer by sending security credentials, encapsulated in a special structure, to the other party. WS-Security, as one way to define security tokens, supports for example the following types: an unsigned token (UsernameToken) to pass information like user name and password, and a signed token (BinarySecurityToken) that has been endorsed by a third party, such as X.509 certificates or Kerberos tickets. These security tokens can be used by a service to perform authentication or authorization.
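As an illustration of how such a token travels with a message, the following sketch assembles a SOAP 1.1 envelope whose header carries an unsigned UsernameToken. The namespace URIs are quoted from memory of the SOAP 1.1 and OASIS WS-Security 1.0 specifications and should be verified against the standards; a real deployment would use a WS-Security library and would not transmit a clear-text password.

```python
# A minimal sketch (not a complete WS-Security implementation) that builds a
# SOAP 1.1 envelope with an unsigned UsernameToken in the security header.
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def username_token_envelope(username: str, password: str) -> str:
    envelope = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(envelope, f"{{{SOAP}}}Header")
    security = ET.SubElement(header, f"{{{WSSE}}}Security")
    token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
    ET.SubElement(token, f"{{{WSSE}}}Username").text = username
    ET.SubElement(token, f"{{{WSSE}}}Password").text = password  # plain text for brevity
    ET.SubElement(envelope, f"{{{SOAP}}}Body")                   # payload omitted
    return ET.tostring(envelope, encoding="unicode")

print(username_token_envelope("alice", "secret"))
```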
Apart from the WS-Security specification, the Security Assertion Markup Language (SAML) is another standard, specified by OASIS, to describe security tokens [RHPM06]. With SAML, assertions about the authentication, authorization or attributes of a user can be stated and exchanged between the service consumer and provider domains. In order to request security tokens and to exchange them between services, WS-Trust [NGG+07] can be used.
3.3 Communicating Policies
In a Web Service environment the standard way to expose service capabilities is the utilization of WSDL. The security requirements of a service, however, are described and communicated by security policies to enable a service consumer to determine which security tokens are required to access the service. In April 2006, the WS-Policy specification [DLGea05] was submitted to the W3C as a proposed standard. This proposal describes an extensible and flexible grammar for expressing the general characteristics, capabilities and requirements of entities as policies in an XML Web Service system. This specification defines a base set of constructs which can be used by other Web Service standards to describe a variety of service requirements. Another policy specification is XACML, which is already an OASIS standard; the latest version, XACML 2.0, was accepted as an OASIS standard in February 2005. In contrast to WS-Policy, XACML is not a general-purpose policy language: it is focused on access control and authorization and specifies the architecture to enforce these policies as well. A policy in XACML is a set of rules containing boolean expressions that can be used to determine who is allowed to perform an action on a resource.
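The following sketch illustrates the rule model described above, i.e. rules with an effect and a boolean condition over subject, resource and action attributes, in plain Python. It is a deliberately simplified stand-in for XACML's XML syntax and enforcement architecture, intended only to show the shape of an attribute-based access decision; the attribute names are hypothetical, and first-applicable is used as one possible combining strategy.

```python
# Simplified, XACML-inspired access control rules (illustrative only; the real
# standard defines an XML policy language and an enforcement architecture).
from dataclasses import dataclass
from typing import Callable, Dict, List

Request = Dict[str, str]          # e.g. {"subject.role": "clerk", "action": "read", ...}

@dataclass
class Rule:
    effect: str                                   # "Permit" or "Deny"
    condition: Callable[[Request], bool]          # boolean expression over attributes

def evaluate(policy: List[Rule], request: Request) -> str:
    # First-applicable combining: return the effect of the first matching rule.
    for rule in policy:
        if rule.condition(request):
            return rule.effect
    return "Deny"                                 # deny by default

policy = [
    Rule("Permit", lambda r: r.get("subject.role") == "clerk"
                          and r.get("action") == "read"
                          and r.get("resource.type") == "order"),
    Rule("Deny",   lambda r: True),               # catch-all
]

print(evaluate(policy, {"subject.role": "clerk", "action": "read",
                        "resource.type": "order"}))   # -> Permit
```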
3.4 Solutions for Web Service Federation
As described in 2.2.3, a federation between Web Service consumer and Web Service provider is necessary to perform the authentication process on the user’s side. Several implementations and standards for Web Service federation exist, but the two major approaches are WS-Federation and Liberty Alliance.
3.4.1 WS-Federation
The Web Service Federation language (WS-Federation) [NKea06] defines a framework to federate independent trust domains by leveraging WS-* Standards such as WS-Security [NKMHB06], WS-Trust [NGG+07] and WS-SecureConversation [GNea05]. This specification provides a model for security token exchange to enable the brokering of identities, the discovery and retrieval of attributes, and the provision of security claims in a Web Service based architecture. The token exchange is based on generic Secure Token Services using WS-Trust. A meta-data model to describe and establish a federation is introduced as well [GHN+05]. Altogether, WS-Federation is designed to enable the use of identity attributes across trust domains to facilitate authorization decisions specified by WS-Policy.
3.4.2 Liberty Alliance
Liberty Alliance provides specifications for federated network identity management that are not limited to Web Services. The project has been supported by a broad range of companies (Sun Microsystems, Novell, Intel, Oracle, ...) acting in different business areas.
The specification defines a basic framework for federation including protocols, bindings and profiles to enable account federation and cross-domain authentication based on SAML 1.0 (specified in the Liberty Identity Federation Framework (ID-FF)). In addition, bindings for Web Service federation (Liberty Identity Web Service Framework (ID-WSF)) and a set of standard services (Liberty Identity Service Interface Specifications (ID-SIS)) are defined.
In contrast to WS-Federation, which can be used to exchange any type of security token, Liberty Alliance is entirely based on SAML. The Liberty federation specification (ID-FF) has meanwhile been merged into SAML 2.0.
4 Challenges of Service Compositions in SOA
In the previous section, several standards to enable a secure federation of Web Services have been introduced. However, the application of these standards in terms of service compositions is still challenging regarding the generation, verification and negotiation of security policies. The application differs depending on whether service compositions are deployed in cross-organisational scenarios or not.
4.1 Organizational Service Compositions
Service compositions in terms of business process modeling represent a cornerstone of process-aware information systems. Process modeling notations would provide a suitable abstraction to specify security goals on a more accessible level, but current notations do not support the specification of security goals at the business process level. Our research is focused on a model-driven approach addressing the difficulties of managing security mechanisms and integrating them seamlessly into process-aware information systems by providing an abstract security goal specification, see Figure 4. This specification is translated to security policies that are deployed to provide security at the service level as well as at the business level. As a result, the security goal specification is consistent with the affected business processes and the security configuration in use, as has been shown in [WS07].
Figure 4: Modeling business goals in business processes
4.2 Cross-organizational Service Compositions
Although the generation and distribution of policy information is feasible within a single organisation, it will fail in a federation comprising multiple organisations due to the need to exchange policy information. Each organization in the federation may use its own security mechanisms or require specific information for authorization. Therefore, services in a particular organization may have their own security requirements expressed as security policies in a specific language. These services may be mapped to an abstract activity in a service composition, as shown in Figure 5. Since service compositions are exposed as a service to the users, the security requirements of the composite service depend on the security policies of the basic services.
The federation frameworks introduced in the previous section support this scenario by allowing services to negotiate and resolve needed attributes at runtime. However, problems will arise if not all needed attributes can be resolved, e.g. due to privacy requirements. Another reason that causes a negotiation to fail is that the services located in one trust domain have no relationship to the client in another domain and no possibility to resolve attributes at all. Dynamic service compositions may be an additional reason that requires the determination of security preconditions in advance to enable a proper matchmaking.
Our research is focused on the prior calculation, verification and negotiation of the workflow's security requirements. A simulation environment should ensure in advance that a process can be executed successfully. This requires the determination of security preconditions defined by the basic services. Security preconditions describe the security mechanisms that must be supported in order to invoke a service, as well as the required security tokens and claims, comprising several attributes, that must be provided. Therefore, a security ontology is needed to describe security information and its relationships.
Using a formal workflow model based on Petri nets - as described in research work about the calculation of preconditions in semantic workflows [Mey07] - the security requirements of the composite service can be determined.
4.3 Security Ontology
As mentioned before, our approaches concerning the modeling of security goals in organizational business processes and the security verification of cross-organizational composite services require a security ontology to express the security preconditions of services and the relationships among these requirements. Several approaches have been described to define security in the Semantic Web and for Web Services, but these works are based on simple security annotations for services. We introduce a security model in this section that describes SOA-related security aspects including the relationship to policy definitions and security goals.
As shown in Figure 6, a security policy is composed of constraints that typically describe the relationship among security goals and affected entities. The basic entity in such a model is an Object. We define an object as an entity that is capable of participating in an Interaction with other objects. This interaction always leads to an Effect, which can comprise the provision of information or the change of state in a system. For example, one object could be a client application and another object could be a resource, such as a database. The process of accessing this database would be the interaction resulting in the effect that data in the database is changed or information is returned to the application.
Each object is related to a set of attributes describing its meta information. For instance, if the object represents a subject, attributes that constitute the digital identity will be related. Altogether, policy constraints always refer to a set of objects, a particular set of objects’ attributes, and optionally a set of interactions and effects that are related to the objects. Based on these relations, specific constraints for particular security goals can be derived. These specific constraints define requirements for associations between the entities with regard to the particular security goals.
Figures 6 and 7: Security model relating Security Policy, Constraints (e.g. Authentication Constraint), Interaction, Effect, Security Mechanism (Algorithm, Protocol, Syntax) and Credential (Simple and Composed Credential)

As shown in Figure 6, constraints specify security mechanisms that guarantee the defined constraint. For instance, a confidentiality policy usually specifies an algorithm (e.g. DES) that must be used to guarantee this requirement. In our model a Security Mechanism is designed to characterise the techniques that are used to enforce security constraints, see Figure 7. In general, these mechanisms can be classified as algorithms (e.g. DES), protocols (e.g. WS-Security) or syntax (e.g. XML). Besides security mechanisms, a Credential represents another important entity in our model that subsumes the evidence used by security mechanisms. A detailed classification of security credentials was presented by Denker et al. [DKF+03]. In this work they introduced an ontology that divides credentials into simple credentials (e.g. key, login, certificate) and composed credentials (e.g. Smart Card, SAML, WS-Security Token) that contain a set of simple credentials.
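To show how these entities relate in a machine-processable form, the following sketch encodes the core classes of the model (objects with attributes, interactions, effects, constraints, mechanisms and credentials) as plain Python data structures. It is a rough, illustrative transcription of Figures 6 and 7; the class and attribute names are assumptions made for the example, and a normative serialization of the ontology would more naturally use OWL/RDF.

```python
# Rough, illustrative encoding of the security model's entities (cf. Figures 6/7).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SecurityObject:
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)   # e.g. identity attributes

@dataclass
class Interaction:
    participants: List[SecurityObject]
    effect: str                      # e.g. "information provided" or "state changed"

@dataclass
class SecurityMechanism:
    name: str
    kind: str                        # "algorithm" (DES), "protocol" (WS-Security), "syntax" (XML)

@dataclass
class Credential:
    name: str
    parts: List["Credential"] = field(default_factory=list)    # empty -> simple credential

@dataclass
class Constraint:
    goal: str                        # e.g. "confidentiality", "authentication"
    objects: List[SecurityObject]
    mechanisms: List[SecurityMechanism]
    credentials: List[Credential]

@dataclass
class SecurityPolicy:
    constraints: List[Constraint]

# Example: a confidentiality constraint requiring DES, applied via WS-Security
policy = SecurityPolicy(constraints=[Constraint(
    goal="confidentiality",
    objects=[SecurityObject("order-service")],
    mechanisms=[SecurityMechanism("DES", "algorithm"),
                SecurityMechanism("WS-Security", "protocol")],
    credentials=[Credential("X.509 certificate")])])
print(len(policy.constraints))
```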
The strength of our model is a general description of security goals, omitting technical details. Thus, the provided models can be mapped to an arbitrary application or technical specification. For example, we mapped our models to the technical specification of the WS-Security standard described in the previous section. Besides the potential mapping to a technical implementation, we addressed the issue of security goal specification, in particular the projection to general business process models, which allows security goals to be specified directly in the context of business processes. It has been revealed that just two basic ontologies are needed to describe security preconditions in an SOA - security mechanisms and security credentials.
4.4 Managing and Negotiating Policies
As described in the previous section, several specifications such as WS-Policy or the XACML language can be used to express constraints and therefore security requirements. However, as stated earlier, service-oriented architectures can involve systems with all kinds of security infrastructures. Especially in a cross-organizational service composition, services must be capable of relying on a large set of different security tokens and claims. This makes it very hard to list all the requirements of a service, since the access control decision can depend on many attributes of these infrastructures. Considering, for instance, the authentication process, it is not sufficient to consider only the authentication method that was used to authenticate a user, but the whole set of attributes that describe such an authentication, for example the length of a password, whether it was auto-generated or user-chosen, the encryption method used to transmit and store it, and so on. Regarding all these attributes, a password-based authentication performed by one service can be much stronger or weaker than the one performed by another service, e.g. due to different encryption methods used. This means the quality of the same aspect can differ tremendously between services. And this is not only true for the authentication aspect, but for all aspects that characterize such a service and that are part of the access control decision. Consequently, it is necessary to state the attributes which characterize such differences in the security policies of the services. This, however, makes policy definition as well as the negotiation of such attributes very cumbersome. Furthermore, the policy structure reaches a complexity that is very hard for a human being to handle and may therefore lead to policy inconsistencies and, ultimately, to the disruption of the related business process execution.
Therefore, mechanisms are required to simplify the policy structure. One possible solution that we propose is to use a quantitative model which expresses the expected security for an aspect and the achieved security for this aspect by a numerical value. For this purpose, a numerical value determined by the service provider is assigned to each aspect, for example to each authentication mechanism, representing the confidence in this mechanism. This way, the attributes that characterize a certain security aspect are condensed into a numerical value indicating the trust that has to be reached by a certain attribute combination. Classical probability theory is used to calculate such a trust value.
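The combination rule below is one plausible reading of this quantitative model: each attribute of a security aspect (e.g. password length, transport encryption) is rated with a confidence value in [0, 1], interpreted as the probability that the attribute withstands an attack, and the values are combined under an independence assumption. The concrete formula and the example ratings are assumptions made for illustration; the text above only states that classical probability theory is used.

```python
# Illustrative trust calculation (assumed formula): per-attribute confidence
# values in [0, 1] are combined as independent probabilities.
from math import prod

def aspect_trust(attribute_confidences: dict) -> float:
    # Probability that all rated attributes hold simultaneously.
    return prod(attribute_confidences.values())

# Hypothetical ratings for a password-based authentication mechanism
password_auth = {
    "password_length>=10": 0.9,
    "user_chosen_password": 0.7,     # weaker than auto-generated
    "tls_protected_transport": 0.95,
    "hashed_storage": 0.9,
}
required_trust = 0.5                 # threshold demanded by the provider's policy

t = aspect_trust(password_auth)
print(f"trust={t:.2f}", "accepted" if t >= required_trust else "rejected")
```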
5 Conclusion
This paper has pointed out which new security paradigms need to be applied in order to bring service-oriented architectures to their full potential. In an environment in which services can be composed across organizational borders, security concepts are required that can deal with the dynamic and flexible nature of service-oriented architectures. Especially in a federated environment that comprises multiple independent trust domains, the establishment of trust and the provision of identity information have been revealed as the key aspects to securing information, services, and interactions.
Several solutions for federated identity management have been introduced that are designed to be used with Web Services. However, it is challenging to apply these solutions in SOA since current approaches neither consider security aspects at the business process level nor enable a seamless composition of services that have different security requirements. We revealed that the design of service compositions under security constraints and the enabling of automatic service compositions require a generic security model. A model has been introduced that specifies security goals, policies, and constraints based on a set of basic entities. The strength of our model is that these entities can be mapped to an arbitrary application domain and all layers in an SOA. This model constitutes the foundation to express security aspects at the business process level and provides an ontology to calculate the security preconditions of a workflow, which can be used for policy negotiation with clients from other trust domains. Finally, trust levels have been introduced to simplify the definition of policies that express service requirements.
References
Abstract—We discuss the problem of extracting control and data flows from vehicular distributed embedded systems at higher abstraction levels during their development. Unambiguous extraction of control and data flows is a vital part of the end-to-end timing model, which is used as input by the end-to-end timing analysis engines. The goal is to support end-to-end timing analysis at higher abstraction levels. In order to address the problem, we propose a two-phase methodology that exploits the principles of Model Driven Engineering and Component Based Software Engineering. Using this methodology, the software architecture at a higher level is automatically transformed to all legal implementation-level models. The end-to-end timing analysis is performed on each generated implementation-level model and the analysis results are fed back to the design-level model. This activity supports design space exploration, model refinement and/or remodeling at higher abstraction levels for tuning the timing behavior of the system.
I. INTRODUCTION
The intrinsic complexity of vehicular embedded systems demands development methodologies and technologies that are able to cope with it. In the last decades, Component-Based Software Engineering (CBSE) [1], [2], Model-Driven Engineering (MDE) [3] and their crossplay have gained acceptance due to their ability to both reduce the development complexity, by raising the abstraction level, and to cope with the most arduous aspects of these systems such as timing and safety requirements [2].
EAST-ADL [4], together with its development methodology, has been moving closer and closer to the status of a de facto standard within the automotive domain. It defines a top-down development process promoting the separation of concerns through a four-level architecture, where each level is designed to hide details pertaining to higher or lower levels. At the lowest level, i.e., the implementation level, EAST-ADL makes use of AUTOSAR [5], which is an industrial initiative to provide a standardized software architecture for the development of vehicular embedded systems. While the EAST-ADL methodology has been successful in raising the software development abstraction level, it provides few means for coping with the timing requirements of such software systems. In the past few years, several initiatives such as TIMMO [6] and TIMMO2USE [7] and their outcomes, including the TADL [8] and TADL2 [9] languages, have tried to provide AUTOSAR with a timing model. Nevertheless, they did not fully succeed with this goal at various abstraction levels because AUTOSAR explicitly hides some implementation-level information which is necessary for building a timing model from the software architecture.
Nowadays, the automotive industry needs development methodologies and technologies able to cope with the timing requirements of such software systems. Moreover, current industrial needs push for having such end-to-end timing analysis earlier during the development process, i.e., at the design level. Industry currently reuses most of the software architecture from previous projects, which means that some crude software architecture is already available in the early stages of software development. In this context, it is beneficial to perform early timing analysis for Design Space Exploration (DSE) and software architecture refinements.
A. Paper Contribution
We target core challenges that are faced when end-to-end timing models are extracted to support end-to-end timing analysis at higher abstraction levels and earlier stages of the software development of vehicular distributed embedded systems. These challenges include the extraction of data and control paths at the implementation level from the design-level models; the transformation of a single design-level model into multiple implementation-level models; and dealing with these transformed models from the timing analysis point of view. In order to deal with these challenges, we propose a two-phase methodology that exploits the principles of MDE and CBSE. In the first phase, the software architecture of the system at the EAST-ADL design level is automatically transformed to all legal implementation-level models, e.g., models that are built using the Rubus Component Model (RCM) [10]. In the second phase, the end-to-end timing analyses are performed on each generated implementation-level model. The analysis results of all or selected implementation-level models are fed back to the design-level model. Thus, the methodology provides support for DSE and model refinement. Moreover, it supports remodeling at higher abstraction levels for the purpose of tuning the timing behavior of the system.
B. Relation with Authors’ Previous Works
In [11], we provide a method to extract timing models and perform end-to-end timing analysis of component-based distributed embedded systems. In [12], RCM is presented as an alternative to AUTOSAR in the EAST-ADL development methodology and its usage is discussed for enabling end-to-end timing analysis at the lowest EAST-ADL abstraction level, i.e., the implementation level. In [13], RCM is extended with a concrete meta-model definition. In [14], the translation of timing constraints from the design- to the implementation-level models is provided. However, the translation is done manually and is limited in that it only considers the implementation-level model which results in worst-case response times and delays. In comparison with the above works, this paper presents a novel two-phase methodology to automatically transform the software architecture of the system at the EAST-ADL design level to all legal implementation-level models (RCM models). The existing analysis engines in the Rubus analysis framework perform timing analysis on each generated implementation-level model. The analysis results are then fed back to the design-level model to support DSE and model refinement.
II. BACKGROUND AND RELATED WORKS
A. EAST-ADL Development Methodology
EAST-ADL defines a top-down development methodology that promotes the separation of concerns through the usage of four different abstraction levels, where each level provides a complete definition of the system under development for a specific perspective. Figure 1 shows the abstraction levels architecture together with the methodologies, models and languages used at each level.
1) Vehicle level: The vehicle level, also known as end-to-end level, serves for capturing all the information regarding what the system is supposed to do, i.e., requirements and features on the end-to-end functionality of the vehicle. Feature models and requirements can be used for showing what the system provides and, eventually, how the product line is organized in terms of available assets.
2) Analysis level: At this level, the end-to-end functionalities are expressed using formal notations. Behaviors and interfaces are specified for each functionality. Yet, design and implementation details are omitted. At this stage, high-level analysis for functional verification can be performed.
3) Design level: At this level, the analysis-level artifacts are refined with more design-oriented details. The architecture of the system is redefined in terms of software, hardware and middleware architectures. Also, the allocation of software functions to hardware is expressed.
4) Implementation level: The design-level artifacts are enriched with implementation details. Component models are used to model the system in terms of components and their interconnections. The code for vehicle functions can be synthesized from the software component architecture.
B. The Rubus Component Model (RCM)
Rubus is a collection of methods, theories and tools for model- and component-based development of resource-constrained embedded real-time systems. It is developed by Arcticus Systems in collaboration with Mälardalen University. Rubus is mainly used for the development of vehicle control functionality by several international companies. The Rubus concept comprises RCM and its development environment Rubus-ICE (Integrated Component development Environment), which includes modeling tools, code generators, analysis tools and run-time infrastructure. RCM has recently been extended with a concrete meta-model definition [13] for embracing the MDE vision and streamlining the modeling language.
RCM is used for expressing the software architecture in terms of software components and interconnections. A software component in RCM is called a Software Circuit (SWC) and represents the lowest-level hierarchical element. Its purpose is to encapsulate basic functions. RCM distinguishes the SWCs' interactions by separating the data flow from the control flow. The latter is defined by triggering objects, i.e., clocks and events. SWCs communicate with each other via data ports. RCM facilitates analysis and reuse of components in different contexts by separating functional code from the infrastructure that implements the execution environment. Within the context of the above-mentioned abstraction levels in Figure 1, RCM is used at the implementation level.
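The separation of control and data flow in RCM can be pictured with a few data structures: a Software Circuit exposes trigger ports and data ports, and connections are only legal between ports of the same kind. The sketch below is a simplified rendering of that idea for illustration, not the actual RCM metamodel; the port names are assumptions.

```python
# Simplified rendering of RCM-style components: trigger flow and data flow are
# kept on separate port types, and connections must not mix the two.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SoftwareCircuit:
    name: str
    trigger_in: List[str] = field(default_factory=lambda: ["trig_in"])
    trigger_out: List[str] = field(default_factory=lambda: ["trig_out"])
    data_in: List[str] = field(default_factory=lambda: ["d_in"])
    data_out: List[str] = field(default_factory=lambda: ["d_out"])

def connect(kind: str, src: SoftwareCircuit, src_port: str,
            dst: SoftwareCircuit, dst_port: str) -> Tuple[str, str, str, str, str]:
    # A trigger output may only feed trigger inputs; likewise for data ports.
    if kind == "trigger":
        assert src_port in src.trigger_out and dst_port in dst.trigger_in
    elif kind == "data":
        assert src_port in src.data_out and dst_port in dst.data_in
    else:
        raise ValueError(kind)
    return (kind, src.name, src_port, dst.name, dst_port)

sensor, controller = SoftwareCircuit("Sensor"), SoftwareCircuit("Controller")
arch = [connect("data", sensor, "d_out", controller, "d_in"),
        connect("trigger", sensor, "trig_out", controller, "trig_in")]
print(arch)
```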
C. End-to-end Timing Models and Analyses
An end-to-end timing model consists of timing properties, requirements, dependencies and linking information of all tasks, messages and task chains in the distributed embedded system under analysis. It can be divided into timing and linking models. For instance, consider a task chain distributed over three nodes connected by a network as shown in Figure 2. The system timing model contains all the timing information about the three nodes and the network. Whereas the system linking model contains all the linking information in the task chains, including the control and data paths.
The analysis engines [11] use these models for performing end-to-end timing analyses. The analysis results include the response times of tasks and messages as well as the system utilization. Also, the analysis engines calculate the end-to-end response times and delays. The end-to-end response time of a task chain is equal to the elapsed time between the arrival of an event, e.g., the brake pedal sensor input in the sensor node, and the response of the last task in the chain, e.g., the brake actuation signal in the actuation node, as shown in Figure 2.
Within a task chain, if the tasks are triggered by independent sources, then it is important to calculate different types of delays such as age and reaction delays. Such delays are crucial in the control systems and body electronics domains, respectively. An age delay corresponds to the freshness of data, while the reaction delay corresponds to the first reaction for a given stimulus. In order to explain the meaning of reaction and age delays, consider a task chain in a single-node system as shown in Figure 3. There are two tasks in the chain, denoted by $\tau_1$ and $\tau_2$, triggered by independent clocks with periods of 25ms and 5ms respectively. Let the Worst-Case Execution Times (WCETs) of these tasks be 2ms and 1ms respectively. $\tau_1$ reads data from the register Reg-1 and writes data to Reg-2. Similarly, $\tau_2$ reads data from Reg-2 and writes data to Reg-3. Since the tasks are activated independently with different clocks, there can be multiple outputs (Reg-3) corresponding to one input (Reg-1) to the chain, as shown by several unidirectional arrows in Figure 4. The age and reaction delays are also identified in the figure. These delays are equally important in distributed embedded systems.
Fig. 3: A task chain with independent activations of tasks
Fig. 4: Example demonstrating end-to-end delays
D. Model Driven Engineering (MDE) and Janus Transformation Language (JTL)
MDE is a discipline which aims to abstract software development from the implementation technology by shifting the focus from the coding to the modeling phase. In this context, MDE promotes models and model transformations as first-class citizens. Models are seen as an abstraction of a real system, built for a specific purpose [3]. Model transformations, in turn, can be seen as a gluing mechanism among models [15]. Rules and constraints for the models' construction are specified in so-called metamodels, i.e., a language definition to which a correct model must conform.
JTL [16] is a declarative model transformation language tailored to support bidirectionality and change propagation. The JTL transformation engine is implemented by means of Answer Set Programming (ASP) [17], which is a form of declarative programming oriented towards difficult search problems and based on the stable model (answer set) semantics of logic programming. In JTL, a model transformation between a source and a target model is specified as a set of relations among models, which must hold for the transformation to be successful. The transformation engine considers such mapping rules for generating the set of all possible solutions. Then, it can refine the generated set by applying constraints on the generated target models, i.e., meta-model conformance rules.
E. MDE for DSE
During the last decades, MDE has been successfully employed for DSE. In [18], the author exploits JTL for implementing an automatic deployment exploration technique based on refinement transformations and platform-based design. The technique is validated upon an automotive case study using an AUTOSAR-like metamodel. [19] presents a pattern catalog for categorizing different MDE approaches for DSE. It demonstrates the usage of the identified patterns with a literature survey. The work in [20] defines a guided DSE approach based on selection and cut-off criteria defined using dependency analysis of transformation rules and an algebraic abstraction. Cut-off criteria are used to identify dead-end states, while selection criteria are used to order applicable rules in a given state. The methodology has been effectively evaluated upon a cloud configuration problem.
III. PROBLEM STATEMENT
In order to support the end-to-end timing analysis at the design level, the end-to-end timing model should be extracted from the design-level model of the application. Consider the design-level model of a component chain consisting of three software components shown in Figure 5. Among other parameters, complete control (trigger) and data paths along component chains (task chains at run-time) must be unambiguously captured in the timing model. Unambiguous extraction of control and data paths from the system is vital for performing its timing analysis.
A control path captures the flow of triggers along the component chain, e.g., the control path of the chain in Figure 6(b) can be expressed as $\{\text{Sensor} \rightarrow \text{Controller}\}$. This means that the Controller SWC is triggered by the Sensor SWC, while the Actuator SWC is triggered independently. Similarly, the control paths of the chains shown in Figure 6(a) and Figure 6(c) can be expressed as $\{\text{Sensor} \rightarrow \text{Controller} \rightarrow \text{Actuator}\}$ and $\{\text{Sensor}, \text{Controller}, \text{Actuator}\}$ respectively. It should be noted that the three component chains shown in Figure 6 are modeled at the implementation level using the Rubus-ICE tool suite.
Fig. 5: Design-level model of a component chain
The main challenge faced during the extraction of end-to-end timing models at the design level is the lack of a clear separation between control and data paths. Although TADL2 augments EAST-ADL with some timing information at the design level, support for a clear separation and unambiguous extraction of control and data flows is still missing. At the implementation level, e.g. in RCM, these paths are clearly separated from each other by means of trigger and data ports as shown in Figure 6. A trigger output port of an SWC can only be connected to the trigger input port(s) of other SWC(s). Similarly, a data output port of an SWC can only be connected to the data input port(s) of other SWC(s). Hence, the trigger and data paths can be clearly identified and extracted in the timing model. At the design level, in contrast, the components communicate by means of flow ports, as shown in Figure 5. A flow port is an EAST-ADL object that is used to transfer data between components. It has a single buffer. The data contained in the port is non-consumable and over-writable. Since there is no other explicit information available about this object, it can be interpreted as a data or a trigger port at the implementation level. There is no support to specify explicit trigger paths at the design level. Moreover, a component can be triggered via specified timing constraints on events, modes, or the internal behavior of the component. For example, consider again the design-level model of a component chain shown in Figure 5. Assume there is a periodic constraint of 10ms specified on this chain. There can be three model interpretations of this chain at the implementation level, as shown in Figure 6. Consequently, there are three different control flows in these models. The data flow and control flow should be clearly and separately captured in the end-to-end timing model because the type of the timing analysis depends upon it. For example, it is not meaningful to perform end-to-end delay analysis on a trigger chain as shown in Figure 6(a) [11].
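To make the ambiguity tangible, the sketch below naively enumerates interpretations of a design-level chain by treating every flow-port connection as either a combined trigger-and-data link or a data-only link (the receiving component then needs its own clock), and prints the control path of each candidate. This is only an illustration of why several implementation-level models can correspond to one design-level model; the actual JTL transformation additionally applies metamodel conformance rules, and the example above yields the three models of Figure 6.

```python
# Naive enumeration of implementation-level interpretations of a design-level
# chain: each flow connection becomes either 'trigger' (trigger + data) or
# 'data' (data only, receiver clocked independently). Illustrative only.
from itertools import product

chain = ["Sensor", "Controller", "Actuator"]          # design-level component chain

def control_path(interpretation):
    """Split the chain into maximal trigger-connected segments."""
    segments, current = [], [chain[0]]
    for comp, kind in zip(chain[1:], interpretation):
        if kind == "trigger":
            current.append(comp)
        else:
            segments.append(current)
            current = [comp]
    segments.append(current)
    return segments

for interp in product(["trigger", "data"], repeat=len(chain) - 1):
    path = " , ".join(" -> ".join(seg) for seg in control_path(interp))
    print(interp, ":", "{" + path + "}")
```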
We have considered only a very small part of a large system in the above example. In reality, distributed embedded systems may contain hundreds of software components and component chains. The component chains, in turn, may be distributed over several nodes or Electronic Control Units (ECUs). Intuitively, there can be a large number of implementation-level model interpretations of the design-level model of a single distributed chain. To the best of our knowledge, RCM is the only model that intends to support high-precision end-to-end timing analysis at the design level (the solution is being prototyped). However, it considers only that implementation-level model interpretation of the design-level model which produces worst-case response times and delays. As a result, the calculated response times and delays may be very pessimistic (considerably large compared to actual response times and delays). In order to be less pessimistic with the analysis results, the end-to-end timing analysis should be performed on all possible implementation-level model interpretations of a design-level model. The analysis results of all these models should be presented to the user. The user should be able to select the model with respect to the analysis results. This activity also helps in doing DSE and performing model refinements earlier during the development. There is a need for a methodology and corresponding automated model transformations to deal with this problem.
IV. PROPOSED SOLUTION AND METHODOLOGY
In order to address the problem discussed in the previous section, we propose a solution methodology as shown in Figure 7. The input to the methodology is the EAST-ADL design-level software architecture of the system under development. The output of the methodology consists of the end-to-end timing analysis results that are fed back to the design-level software architecture. The methodology comprises two major phases: (A) the transformation phase and (B) the timing analysis phase.
A. Transformation phase
The transformation phase is realized as a model-to-model transformation between EAST-ADL design-level and RCM models. The mapping relation between the related metamodels is a non-surjective relation. We select JTL to implement the transformation because it is able to deal with partial information, information loss and uncertainty [16]. To the best of our knowledge, JTL is the only transformation language with such characteristics. The JTL transformation requires the EAST-ADL design-level model and metamodel as well as the RCM metamodel as inputs. Exploiting the ASP engine, JTL produces, with a single execution, all the possible RCM models for the specified EAST-ADL design-level model. The transformation assumes a one-to-one mapping between each design- and implementation-level component. Although a design-level component can be mapped to more than one implementation-level component, our assumption of one-to-one mapping is based on common practice in industry, especially in the construction-equipment vehicles domain. All the generated implementation-level models have the same data flow but different control flows. For instance, consider that the EAST-ADL design-level model shown in Figure 5 along with the EAST-ADL and RCM metamodels are provided as input to the JTL framework. The corresponding transformation results in three implementation-level models as shown in Figure 6. For a complex embedded application, there can be many such transformations of a design-level model.
B. Timing analysis phase
In the timing analysis phase, our methodology exploits the end-to-end timing analysis framework of Rubus-ICE [11]. All the generated implementation-level models from the previous phase are provided as inputs to the analysis framework. It should be noted that the timing analysis framework operates on implementation-level models which are annotated with complete timing information. However, in the models derived from the previous phase, some of the timing information required to do the timing analysis may be missing. In this respect, we make assumptions to compensate for the missing timing information. For example, if worst-, best- and average-case execution times are not specified at the design level, they can be estimated at the implementation level using expert estimations, by reusing them from other projects, or from previous iterations during the model refinement process. Further, we assume that the execution order of design-level components in a chain is specified; otherwise we make an implicit assumption about it. That is, each component is assumed to execute only after the successful execution of the preceding component in the chain, unless specified otherwise. This means that a data provider component is assumed to always execute before the data receiver component. Since this assumption fixes the execution order, it is safe to assume that the priorities of the components within the component chain are equal.
Eventually, the analysis framework performs end-to-end response-time and delay analyses on each implementation-level model separately. Once again, consider the three generated implementation-level models shown in Figure 6. We assume the WCET of each component to be equal to 1ms. Here we are interested in the end-to-end response times and the reaction and age delays among all timing analysis results. These times for the three component chains are (a) 3ms, 3ms, 3ms; (b) 3ms, 10ms, 10ms; and (c) 3ms, 29ms, 19ms respectively. These analysis results are provided to the filter module which selects the optimal result(s) depending upon the specified constraints (e.g. constraints on timing or constraints on the activation of individual components in a chain, i.e., dependent or independent triggering). The filter can be considered as the designer who selects the optimal implementation-level model interpretation of the design-level model based on the analysis results. The filter can also be a logical block making such decisions based on the specified constraints (the process of automating the filter is future work).
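The filter step can be pictured as a small selection function over the per-model analysis results. The sketch below uses the three result tuples quoted above and an assumed constraint set (an age-delay bound and whether a fully triggered chain is required); the data structure and the selection criterion are illustrative assumptions, since the automation of the filter is left to future work.

```python
# Illustrative filter over timing-analysis results of candidate models.
# Values taken from the example above: response time, reaction delay, age delay in ms.
results = {
    "model_a": {"response": 3, "reaction": 3,  "age": 3,  "fully_triggered": True},
    "model_b": {"response": 3, "reaction": 10, "age": 10, "fully_triggered": False},
    "model_c": {"response": 3, "reaction": 29, "age": 19, "fully_triggered": False},
}

def select(results, max_age=None, require_trigger_chain=False):
    candidates = {
        name: r for name, r in results.items()
        if (max_age is None or r["age"] <= max_age)
        and (not require_trigger_chain or r["fully_triggered"])
    }
    # Among the feasible models, prefer the one with the smallest age delay.
    return min(candidates, key=lambda n: candidates[n]["age"]) if candidates else None

print(select(results, max_age=15))                      # -> model_a
print(select(results, require_trigger_chain=True))      # -> model_a
```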
The translation from the design- to the implementation-level models is automatic. Moreover, the translation is not limited by the constraint of considering only the implementation-level model which results in worst-case timing behavior. For example, in the case of constrained translation, the design-level model in Figure 5 is only translated to the implementation-level model of Figure 6(c) because that chain results in worst-case delays. In contrast, the timing analysis phase in our current methodology provides all possible implementation-level model interpretations of the design-level model. For example, the filter module can select the chain in Figure 6(a) or Figure 6(b) as optimal because of lower end-to-end delays and provide the corresponding analysis results back to the design level. Based on this feedback, better decisions can be made during DSE or the refinement of the system model. Moreover, the system can be remodeled or decisions can be made such that the timing analysis results in the next iteration are less pessimistic. This can help in fine-tuning the timing behavior of the system.
C. Proof of concept
As a proof of concept, we instantiate the above presented methodology within Rubus-ICE as depicted in Figure 8. In the Rubus-ICE tool suite, the Rubus-EAST tool supports modeling of applications with EAST-ADL. There are two options to start the modeling at the design level: i) model directly in Rubus-EAST, or ii) import XMI formats of EAST-ADL models of the application from any other EAST-ADL designer. The transformation phase of the methodology can be implemented as a plug-in for Rubus-ICE, denoted as the DL-JTL plug-in, where DL stands for Design Level. According to the proposed methodology, this plug-in calls the JTL framework, which generates all feasible RCM models corresponding to the design-level model and provides them back to the plug-in. Consequently, the DL-JTL plug-in calls the HRTA and E2EDA plug-ins [11] and provides all generated implementation-level models to them. The HRTA and E2EDA plug-ins, in turn, perform end-to-end response time and delay analyses of all the input models and provide their analysis results back to the DL-JTL plug-in. Finally, the DL-JTL plug-in selects the optimal analysis results and feeds them back to the design-level model of the application in the Rubus-EAST tool. The sequence of the above-mentioned steps is identified in Figure 8.
Fig. 8: Methodology instantiated within Rubus-ICE
V. CONCLUSION
In this work we target core challenges arising when end-to-end timing models are extracted to support end-to-end timing analysis at the design level of the EAST-ADL development methodology. Towards this goal we propose a two-phase methodology that exploits MDE, CBSE and their crossplay. Within the proposed methodology, the design-level model of the system under development is automatically transformed to all possible implementation-level models. Further, end-to-end timing analyses are performed on each generated implementation-level model; the analysis results are filtered based on specified constraints and eventually fed back to the design-level model. Due to the lack of needed information, timing models cannot be unambiguously extracted from a design-level model. More precisely, more than one timing model may correspond to a single design-level model, as shown in Section III. One way to deal with this issue might be to consider an a priori mapping between the design-level model and one of the feasible implementation-level models. In contrast, the proposed methodology is able to generate and manage all the feasible implementation-level models (transformation phase) and to choose the implementation-level model which best meets the timing requirements, based on the timing analysis results (timing analysis phase). Such a methodology naturally supports DSE and model refinements. As a proof of concept, we instantiate the proposed methodology within the Rubus-ICE industrial tool suite. As a future investigation direction, we will, together with our industrial partners, validate, and possibly refine, the methodology on real industrial design-level models. In this context, it is important to evaluate the performance and scalability of the proposed methodology when the number of alternatives grows remarkably.
ACKNOWLEDGEMENT
This work is supported by the Swedish Research Council (VR) and the Swedish Knowledge Foundation (KKS) within the projects SynthSoft and FEMMVA respectively. The authors would like to thank the industrial partners Arcticus Systems and Volvo Construction Equipment, Sweden.
REFERENCES
Improvement and Realization of Rete Algorithm for the Dynamic Evolution of Software System
DanFeng WU, GuangPing ZENG, JingYing YAN
School of Computer & Communication Engineering, University of Science & Technology Beijing, Beijing 100083, China, [email protected]
Abstract
In this paper, we focus on the optimization of the Rete algorithm with respect to memory consumption and the time consumed by the matching process. The rule pattern matching process in the dynamic evolution of software systems faces the requirements of limited memory and quick response to users. On the basis of the classic Rete algorithm, we analyze the space complexity of the algorithm and the matching efficiency of the Rete network structure and, taking into account the strong dynamics and high efficiency required by system dynamic evolution, we improve the algorithm by an adjustable node storage space mechanism based on rule weights and by introducing self-learning into the fact-adding process of the network. Comparison tests show that the optimized Rete algorithm can complete the self-regulation of the storage space and reduce the running time of the system, which satisfies the need for real-time matching of the evolving system rules.
Keywords: Rete Algorithm, Rules Engine, Pattern Matching, Software Dynamic Evolution
1. Introduction
With the changing of the running environment and service requirements, software gradually needs a self-adjusting ability, which constitutes the dynamic evolution of software [1]. Since the system's running environment and user requirements may change over the life cycle of the software, the capability of software evolution has become an important indicator to measure the performance of a system. The rule engine, which evolved from the inference engine of rule-based expert systems, is an effective way to realize the system's dynamic evolution logic [2]; it consists of three parts: the pattern matcher, the agenda, and the execution engine. Among them, pattern matching is responsible for matching the known patterns of the system rule set against the changing information in working memory (WM) that is received by the matcher. The corresponding rule is activated and put into the agenda, waiting to be executed, if the matching is successful. The pattern matching algorithm is thus an important tool to guarantee short operation times and low memory cost during the matching process. So far, typical pattern matching algorithms include LFA, Rete, LEAPS and so on [3]. Among them, Rete, proposed by Charles L. Forgy of Carnegie Mellon University in 1974, is a forward-chaining algorithm. Forgy later described the Rete algorithm in detail in his 1979 Ph.D. thesis and in a 1982 paper [4]. This algorithm, which makes rule-based reasoning and matching efficient, is one of the most efficient pattern matching algorithms for rule engines and, in today's dynamic global business environment, underlies the world's leading commercial business rule engines [5].
This paper is based on the 'SoftMan' distributed component (SMC) system. We investigate the requirements of quick response to users, the hard real-time nature of rule matching, and the shortage of memory resources, and we improve the classic Rete algorithm in two ways. First, a node storage space adjustment mechanism based on rule weights and a first-in-first-delete strategy is established by setting an upper limit on the utilization of the storage space of the two-input nodes of the network. Second, the time spent in two-input node matching, and hence the overall running time of the system, is reduced by introducing self-learning into the fact-adding process of Rete network matching. Experimental results show that the optimized Rete algorithm keeps the system running stably when new objects are added, reduces the storage space used when facts are deleted, and saves system running time.
2. Related Conception
To enable the industrial and automated development of application systems, the dynamic behaviour and structural flexibility of software [6][7] have become key research issues because of the open, dynamic, and ever-changing Internet [8]. A component model, the 'SoftMan' component (SMC), can be formed with construction and evolution properties by a special 'component' that generalises the 'SoftMan' ontology and focuses explicitly on the core technology of dynamic evolution supported at the component level. The SMCOSE, the SMC-based Open System Evolution platform, consists of four layers, from lower to upper: the support platform, the infrastructure, the evolution control, and the application [9]. In this paper we mainly focus on rule-based reasoning in the evolution control layer.

To improve the matching efficiency of the Rete algorithm as a whole and to reduce the amount of computation, part of the memory space is sacrificed to save and reuse the information (facts, rules, matching results, rule patterns, and so on) produced in previous pattern matching steps [10]. The algorithm is discussed below from two aspects: the identification network and the pattern matching process.
The Rete algorithm compiles the rules into a network used for pattern matching [4]. A production rule consists of two parts, the left-hand side LHS (the conditions) and the right-hand side RHS (the conclusion). The Rete algorithm compiles the left-hand side of each rule into an identification network. Dr. Forgy described four basic node types of the identification network [10]: the root, one-input nodes (also known as alpha nodes), two-input nodes (also known as beta nodes), and terminal nodes. The root node is the entry point for facts entering the network. A one-input node has an alpha storage space and one input port; the alpha network of the identification network is built from these nodes. A two-input node has both left and right storage spaces and input ports; the left storage space is a beta storage and the right one is an alpha storage. The beta network of the identification network is built from two-input nodes. When information reaches a terminal node, the corresponding rule in the network is activated and the actions associated with the rule are put into the agenda to wait for execution. A WME (Working Memory Element), which is created to represent a fact, is the element matched against non-root nodes and is the smallest storage unit of a storage space. A WME can serve as the input of a one-input node or as the right input of a two-input node. A Token, which can serve as the left input of a two-input node, is a binding list of one or more WMEs. If a WME is passed to the left side of a two-input node, it is first packaged into a Token containing only that WME [6]. The pattern matching process between WMEs/Tokens and nodes is presented in Figure 1.

(1) If the type match between a WME and a successor node of the root (a kind of one-input node) succeeds, pass the WME to that successor and continue matching; otherwise, end the matching.
(2) If a WME is passed to a one-input node, match the WME against the pattern of this node; if the match succeeds, add the WME to a Token and pass the Token to the next node; otherwise, end the matching.
(3) If a WME is passed to the right side of a two-input node, add the WME to the alpha storage space of this node and match it against the Tokens in the node's beta storage; if a match succeeds, package the result into a new Token and pass it to the next node; otherwise, end the matching.
(4) If a Token is passed to the left side of a two-input node, add the Token to the beta storage space of this node and match it against the WMEs in the alpha storage space; if a match succeeds, extend the Token with the matching WME and pass it to the next node; otherwise, end the matching.
(5) If a WME is passed to the left side of a two-input node, package the WME as a Token containing only this WME and then perform step (4).
(6) If a Token is passed to a terminal node, activate the corresponding rule and put the action associated with the rule into the agenda to wait for execution.
(7) If a WME is passed to a terminal node, package the WME as a Token containing only this WME and execute step (6).
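To make the data structures behind these steps more concrete, the following sketch shows one possible shape of the alpha/beta memories and of the activation steps (3) and (4). The class names and the way tests and joins are represented are illustrative assumptions only and are not taken from any particular rule engine implementation.

```scala
// Illustrative sketch of the node memories used in steps (1)-(7); names are hypothetical.
case class WME(attributes: Map[String, String])   // smallest storage unit, one fact
case class Token(wmes: List[WME])                  // binding list passed through the beta network

class AlphaNode(val test: WME => Boolean) {
  val memory = scala.collection.mutable.ListBuffer[WME]()   // alpha storage space
  def activate(w: WME): Option[WME] = if (test(w)) { memory += w; Some(w) } else None
}

class BetaNode(val join: (Token, WME) => Boolean) {
  val alphaMemory = scala.collection.mutable.ListBuffer[WME]()    // right storage space
  val betaMemory  = scala.collection.mutable.ListBuffer[Token]()  // left storage space

  // Step (3): a WME arriving on the right input is stored and joined
  // against every Token already held in the left (beta) memory.
  def rightActivate(w: WME): List[Token] = {
    alphaMemory += w
    betaMemory.toList.collect { case t if join(t, w) => Token(t.wmes :+ w) }
  }

  // Step (4): a Token arriving on the left input is stored and joined
  // against every WME already held in the right (alpha) memory.
  def leftActivate(t: Token): List[Token] = {
    betaMemory += t
    alphaMemory.toList.collect { case w if join(t, w) => Token(t.wmes :+ w) }
  }
}
```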
3. The Analysis of Matching Efficiency of Rete Network Structure
The Rete identification network is generated from one or more rules. Before analyzing the matching efficiency of the Rete network structure, we assume that there is only one rule and that only affirmative conditions are considered.
The alpha network of the identification network tests the intrinsic characteristics of each WME. If the test is passed, the result is put into a storage space $A_i$. Assume $\delta_i$ $(0 \leq i \leq n)$ is the number of objects in storage space $A_i$ (an object is the instance that results from a successful match). Because a two-input node has both a left and a right input port, the beta network mainly uses two-input nodes to test characteristics shared between WMEs and saves the intermediate matching results of each node into a storage space $B_j$. Assume $\theta_j$ $(0 \leq j \leq n-1)$ is the number of objects in storage space $B_j$ and $p_k$ $(0 \leq k \leq n-1)$ is the conditional probability that an object satisfies the $k$-th two-input node. Figure 2 shows a Rete network with $n+1$ original storage spaces.

When an object enters the network from storage space $A_0$, i.e., $\delta_0 = 1$, the number of objects in storage space $B_0$ is:

$$\theta_0 = \delta_0 \delta_1 p_0 = \delta_1 p_0$$ \hspace{1cm} (1)

When the object passes the first two-input node, it is matched against the $\delta_1$ objects of $A_1$ and affected by the probability $p_0$. The objects that enter storage space $B_0$ are then matched against the $\delta_2$ objects of $A_2$; the number of objects entering storage space $B_1$, after being affected by $p_1$, is:
$$\theta_1 = \delta_1 p_0 \delta_2 p_1$$ \hspace{1cm} (2)
In the same way, the count of objects which enters the storage space $B_{n-1}$ is as follows:
$$\theta_{n-1} = \delta_1 p_0 \, \delta_2 p_1 \cdots \delta_j p_{j-1} \cdots \delta_n p_{n-1} = \prod_{j=1}^{n} \delta_j p_{j-1}$$ \hspace{1cm} (3)
Hence, the total number of objects held in the storage spaces while an added object propagates from $A_0$ to $B_{n-1}$ is:

$$\mathrm{Ram} = \sum_{j=0}^{n} \delta_j + \sum_{j=0}^{n-1} \theta_j$$ \hspace{1cm} (4)
Equation (4) shows that the total memory space needed by the Rete network depends on the number of storage spaces $A_0, \ldots, A_n$ and on the number of objects $\delta_j$ in every storage space. A system with more storage spaces and more objects not only requires a large memory space but also takes more time to match each object.
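For concreteness, the recurrence behind equations (1)–(4) can be evaluated directly once the $\delta_j$ and $p_k$ values are known. The following sketch is an illustration only, with assumed inputs; it computes the $\theta_j$ values and the resulting memory estimate.

```scala
// Evaluate equations (1)-(4): delta(j) is the object count of A_j,
// p(k) the matching probability at the k-th two-input node.
def memoryEstimate(delta: Vector[Double], p: Vector[Double]): Double = {
  // theta(k) = delta_1*p_0 * delta_2*p_1 * ... * delta_(k+1)*p_k, following equation (3)
  val theta = p.indices.scanLeft(1.0)((acc, k) => acc * delta(k + 1) * p(k)).tail
  delta.sum + theta.sum   // equation (4): alpha memories plus beta memories
}

// Example: three alpha memories (delta_0 = 1 for the entering object) and two two-input nodes.
val ram = memoryEstimate(Vector(1.0, 4.0, 5.0), Vector(0.5, 0.2))   // 10.0 + (2.0 + 2.0) = 14.0
```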
Compared with adding facts, deleting facts in the Rete network carries the additional overhead of searching storage space $B_j$ for the records that match the intermediate results produced during deletion; the details of the two-input node are not discussed further here.
4. Improvement of Rete Algorithm for Dynamic Evolution of Software System
During the dynamic evolution of a software system, the component architecture has to evolve dynamically according to application requirements and changes in the network environment, mainly through variation in the number of entity elements, adjustment of the relation structure, and dynamic reconfiguration of the structural form. In the dynamic evolution of the 'SoftMan' component (SMC) system, information about the current environment status and users' demands is abstracted into fact objects stored in the fact base of the system's decision-making device. Since components have strong environment awareness, they react strongly to the dynamic environment, which gives the evolution of the component system a strongly dynamic character. Decision-driven dynamic evolution [11-14] is the core of Internetware. When the facts change, the pattern matching module of the decision-making device calls the rule management module and matches all fact data against the rules using the Rete algorithm. Since the SMC system requires the average response time to users to be as short as possible, system evolution also has a hard real-time property. The dynamic evolution process of the SMC system therefore has both strongly dynamic and real-time characteristics.
Considering these two main characteristics of the dynamic evolution of the 'SoftMan' component system and the above analysis of the matching efficiency of the Rete network, this paper improves the Rete algorithm in the following two respects.
(1) Establish an adjustable node storage space mechanism.

Environment information changes constantly and the values required by users keep accumulating. In such a strongly dynamic environment, facts are continuously added, deleted, and updated, while the total memory space of the Rete network is fixed. To meet the strongly dynamic nature of component evolution, we manage memory occupancy by introducing an adjustable node storage space mechanism. An upper limit is set on the utilization of the storage space of each two-input node, and users can adjust this limit to suit their specific conditions. When the utilization rate exceeds 80%, we remove 20% of the memory of this storage space, in order of increasing rule weight of the stored objects. If rule weights are equal, a first-in-first-delete strategy is applied according to the order in which the rules entered the network. The memory contents corresponding to these objects in the related one-input and other two-input nodes are also deleted. Deleting the corresponding memory contents of a one-input node has two cases: if the one-input node is shared with other rules that are not being deleted, only the stored information belonging to the rules being deleted is removed; otherwise, the node is removed from the Rete network completely.
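A minimal sketch of this adjustable storage mechanism is given below. The 80% upper limit and the 20% reduction follow the description above, while the class and field names are assumptions made purely for illustration.

```scala
// Hypothetical entry of a two-input node's storage space: the cached object,
// the weight of the rule it belongs to, and the time it entered the network.
case class StoredObject(id: Long, ruleWeight: Double, enteredAt: Long)

class AdjustableMemory(capacity: Int, upperLimit: Double = 0.8, evictFraction: Double = 0.2) {
  private val entries = scala.collection.mutable.ArrayBuffer[StoredObject]()

  def add(obj: StoredObject): Unit = {
    entries += obj
    if (entries.size.toDouble / capacity > upperLimit) evict()
  }

  // Remove 20% of the stored objects, lowest rule weight first;
  // ties are broken first-in-first-delete (earliest entry time first).
  private def evict(): Unit = {
    val toRemove = entries.sortBy(e => (e.ruleWeight, e.enteredAt))
                          .take((entries.size * evictFraction).toInt)
    entries --= toRemove
    // A full implementation would also drop the matching entries held by the
    // related one-input and two-input nodes, as described above.
  }
}
```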
(2) Introduce self-learning into the fact-adding process.

The key to achieving the hard real-time property of the dynamic evolution of the component system is to improve the efficiency of rule matching. Introducing a self-learning mechanism into the matching process of the Rete algorithm reduces the matching time: recording information about added facts reduces the time needed to delete a fact later. In the traditional Rete algorithm, deleting a fact from the working memory proceeds as follows. The fact is first tested by the alpha network; if the test succeeds, the related content is deleted from the corresponding storage space A and the alpha record is passed to the corresponding two-input node. The left condition of this two-input node is matched against the alpha record; if the match succeeds, a beta record is generated and the corresponding storage space B is searched for an identical beta record. If an identical record is found, it is deleted and the beta record is passed to the next two-input node, where the same deletion process is repeated until all content associated with the object has been deleted. The records produced while adding a fact are thus not reused during deletion, and deletion incurs an extra search compared with addition. Instead, while a fact is being added, we retain the intermediate results of matching at each two-input node in its storage space B. When the fact is deleted, the matching against storage space B is skipped because the intermediate results recorded during addition are reused, and only the memory of each node needs to be looked up.
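The essence of this self-learning step is to cache, at fact-addition time, the intermediate records each fact produced, so that deletion only has to look those records up instead of repeating the join against storage space B. A minimal sketch, with hypothetical names:

```scala
// Cache of intermediate (beta) records produced when a fact was added,
// keyed by the fact's identifier. Names are illustrative only.
class MatchRecordCache {
  private val produced = scala.collection.mutable.Map[Long, List[Long]]()   // factId -> beta record ids

  // On addition: remember which beta records this fact generated at the two-input nodes.
  def recordAddition(factId: Long, betaRecordIds: List[Long]): Unit =
    produced(factId) = betaRecordIds

  // On deletion: return the records to remove directly, skipping the join
  // against storage space B that the unoptimized algorithm would repeat.
  def recordsToDelete(factId: Long): List[Long] =
    produced.remove(factId).getOrElse(Nil)
}
```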
5. Experiments and discussion
There are two main criteria for evaluating the efficiency of a rule pattern matching algorithm: matching time and memory usage rate. Using the Matlab 2006 environment, this paper presents a preset evolution experiment that compares the memory area usage rate and the system running time before and after the improvement, carried out on SMCOSER, which was developed on Eclipse 3.4.
(1) Comparison of usage rate in memory area
Take $B_i$ as the object of study. The preset evolution experiment records the storage area usage rate at different times over 140 trials of matching objects into $B_i$, under the condition that the usage rates of the other storage areas did not reach the upper limit. The result is shown in Figure 3 (the pentagram markers denote the storage area usage rate before optimization, and the diamond markers the usage rate after optimization).

In the first 50 tests, no new objects were added to the storage space, the usage rate of the system storage area did not change much, and the system ran smoothly. During the next 50 tests, objects entered the network and the utilization of the storage area changed as the number of matched objects increased, but the test still ran smoothly before the 100th trial. At the 100th trial, newly added objects pushed the utilization of the storage space up to 80%, the upper limit of memory utilization. Since 20% of the memory is released as soon as the utilization reaches 80%, the utilization of the storage area after optimization drops sharply to around 60% and the system then runs smoothly again.
(2) Comparison of matching time

The running time of the system can be obtained while observing the storage area usage rate, i.e., the time taken from an object entering the Rete network until it is matched into the storage space. The result is shown in Figure 4 (the pentagram markers denote the time consumption before optimization, and the diamond markers the time consumption after optimization):

In the first 50 tests the system ran smoothly, so the time consumption did not change noticeably. After the 50th test, new objects entered the network and the utilization of storage space \( B_5 \) increased, placing an additional burden on the running system, so the time consumption increased markedly. Before the second batch of added objects, however, the time consumption did not change much. With the second batch of added objects, the utilization of storage space \( B_5 \) reached the upper limit, which slowed the storage space down dramatically and increased the time consumption sharply. Because the unoptimized network has no operation for reducing the storage area utilization, its running time stays high after this sharp increase. After optimization, the storage area utilization is reduced by 20%, leaving some spare memory in the storage space, and about 25% of the system running time is saved. The system running time therefore decreases together with the storage area usage rate, and the system runs faster than before.
6. Conclusion
In this paper, we presented an optimized Rete algorithm based on an adjustable node storage space mechanism and a self-learning approach for the fact-adding process. The optimized algorithm achieves dynamic control of the storage space occupied by the Rete network and reduces the time spent removing objects. The running efficiency of the pattern matching algorithm is improved noticeably, addressing the low efficiency of rule matching and the memory resource problems of the SMC dynamic evolution system. However, the complexity and variability of the context environment information make the related facts in the working space highly unstable. We achieved adjustment of the memory space shared by the Rete network, but self-adaptive use of the memory space still requires further research.
7. References
PROTEUS
Scalable online machine learning for predictive analytics and real-time interactive visualization
687691
D3.10 Optimizer Prototype
Lead Author: Bonaventura Del Monte
With contributions from:
Jeyhun Karimov, Alireza Rezaei Mahdiraji
Reviewers: Waqas Jamil (BU), Javier De Matias Bejarano (TREE)
<table>
<thead>
<tr>
<th>Deliverable nature:</th>
<th>Demonstrator (D)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dissemination level: (Confidentiality)</td>
<td>Public (PU)</td>
</tr>
<tr>
<td>Contractual delivery date:</td>
<td>May 31st, 2018</td>
</tr>
<tr>
<td>Actual delivery date:</td>
<td>May 31st, 2018</td>
</tr>
<tr>
<td>Version:</td>
<td>1.0</td>
</tr>
<tr>
<td>Total number of pages:</td>
<td>17</td>
</tr>
<tr>
<td>Keywords:</td>
<td>Optimization, Domain Specific Language, Machine Learning, Dataflow Engine, Operator Fusion</td>
</tr>
</tbody>
</table>
Abstract
In this deliverable, we describe our first prototype of the PROTEUS optimizer. One of the main objectives of PROTEUS is to provide end-users with a Domain Specific Language (DSL) that allows them to define Machine Learning algorithms on data streams. We proposed Lara as the DSL for linear algebra and relational operations on streams (for details on Lara see D3.4 – D3.8). Our proposed language maps each linear algebra operation to one specific high-order operator of the underlying execution dataflow engine (e.g., Apache Flink). However, the language requires an extra module to perform holistic optimization on the sequence of high-order operators that are shipped to the execution engine for the actual processing. To this end, we introduce the PROTEUS optimizer, which analyzes the logical execution plan of a streaming query (containing Machine Learning methods) and performs non-trivial optimizations tailored to the domain specific language. In particular, the PROTEUS optimizer performs operator fusion for Sum-Product operators. This optimization results in fewer intermediate results and better resource efficiency. We target this class of optimizations because almost all PROTEUS workloads contain machine learning methods, which rely on linear algebra operations.

This enables better resource management of the underlying execution engine, as fewer operators have to be scheduled for execution.
Executive summary
In this deliverable, we describe a first prototype of the PROTEUS optimizer. One of the main objectives of PROTEUS is to provide end-users with a Domain Specific Language that allows them to define Machine Learning algorithms on data streams. To that end, we proposed Lara as the Domain Specific Language for linear algebra operations on streams (see D3.4 – D3.8). Our proposed language maps each linear algebra operation to one specific high-order operator of the underlying execution engine (e.g., Apache Flink). However, the language requires an extra module to perform holistic optimization on the sequence of high-order operators that are shipped to the execution engine for the actual processing. To this end, we introduce the PROTEUS optimizer, a module that analyzes the logical execution plan of a streaming query and performs non-trivial optimizations tailored to the domain specific language; namely, it performs operator fusion for Sum-Product operators. This enables better resource management of the underlying execution engine, as fewer operators have to be scheduled for execution.

In this document, we first provide a comprehensive list of common optimizations for stream data processing (e.g., operator fusion, operator fission) as well as the current state-of-the-art optimizations in systems for declarative large-scale offline machine learning (i.e., SystemML and SPOOF). Starting from these two types of optimizations, we build an optimizer that combines the two worlds, i.e., our optimizer performs Sum-Product optimizations by means of operator fusion.
### Document Information
<table>
<thead>
<tr>
<th>IST Project Number</th>
<th>Acronym</th>
<th>PROTEUS</th>
</tr>
</thead>
<tbody>
<tr>
<td>687691</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
**Full Title**: Scalable online machine learning for predictive analytics and real-time interactive visualization
**Project URL**: http://www.proteus-bigdata.com/
**EU Project Officer**: Martina EYDNER
<table>
<thead>
<tr>
<th>Deliverable Number</th>
<th>Title</th>
</tr>
</thead>
<tbody>
<tr>
<td>D3.10</td>
<td>Optimizer Prototype</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Work Package Number</th>
<th>Title</th>
</tr>
</thead>
<tbody>
<tr>
<td>WP3</td>
<td>Scalable Architectures for both data-at-rest and data-in-motion</td>
</tr>
</tbody>
</table>
**Date of Delivery**
<table>
<thead>
<tr>
<th>Contractual Date</th>
<th>Actual Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>M30</td>
<td>M30</td>
</tr>
</tbody>
</table>
**Status**: version 1.0 (final)
**Nature**: demonstrator
**Dissemination level**: public
<table>
<thead>
<tr>
<th>Author (Partner)</th>
<th>Name</th>
<th>E-mail</th>
</tr>
</thead>
<tbody>
<tr>
<td>Responsible</td>
<td>Bonaventura Del Monte</td>
<td><a href="mailto:[email protected]">[email protected]</a></td>
</tr>
<tr>
<td>Partner</td>
<td>DFKI</td>
<td></td>
</tr>
<tr>
<td>Phone</td>
<td>+49 30 23895 6631</td>
<td></td>
</tr>
</tbody>
</table>
### Abstract (for dissemination)
**Keywords**: Optimization, Domain Specific Language, Machine Learning, Dataflow Engine, Operator Fusion
### Version Log
<table>
<thead>
<tr>
<th>Issue Date</th>
<th>Rev. No.</th>
<th>Author</th>
<th>Change</th>
</tr>
</thead>
<tbody>
<tr>
<td>11-Apr-2018</td>
<td>0.1</td>
<td>Bonaventura Del Monte</td>
<td>Initial version</td>
</tr>
<tr>
<td>29-Apr-2018</td>
<td>0.2</td>
<td>Alireza Rezaei Mahdiraji</td>
<td>First internal review</td>
</tr>
<tr>
<td>02-May-2018</td>
<td>0.3</td>
<td>Bonaventura Del Monte</td>
<td>Addressing first internal review comments</td>
</tr>
<tr>
<td>04-May-2018</td>
<td>0.4</td>
<td>Alireza Rezaei Mahdiraji</td>
<td>Second internal review</td>
</tr>
<tr>
<td>07-May-2018</td>
<td>0.5</td>
<td>Bonaventura Del Monte</td>
<td>Addressing second internal review comments</td>
</tr>
<tr>
<td>22-May-2018</td>
<td>1.0</td>
<td>Bonaventura Del Monte</td>
<td>Addressing comments from Waqas Jamil (BU) and Javier De Matias Bejarano (TREE)</td>
</tr>
</tbody>
</table>
# Table of Contents
Executive summary
Document Information
Table of Contents
List of figures and/or list of tables
1 Introduction
2 Background
2.1 Common Streaming Optimizations
2.2 Machine Learning Optimizations
3 The PROTEUS Optimizer
3.1 Compilers and Optimizers to the Rescue
4 Conclusions
References
List of figures and/or list of tables

Figure 1: Simple execution plan for X * W – Y.
Figure 2: Optimized execution plan for X * W – Y.
1 Introduction
One of the main objectives of PROTEUS is to provide end-users with a Domain Specific Language (DSL) that allows them to define Machine Learning algorithms on data streams. We proposed Lara as the DSL for linear algebra and relational algebra operations on streams (for details on Lara see D3.4 – D3.8). The Lara language maps each linear algebra operation to one specific high-order operator of the underlying execution dataflow engine (e.g., Apache Flink). However, our language compiler requires an extra component to perform holistic optimization on the sequence of high-order operators that are shipped to the execution engine for the actual processing. To this end, we introduce the PROTEUS optimizer in this deliverable. The PROTEUS optimizer is designed to analyze the logical execution plan of a streaming query (containing Machine Learning methods) and to perform non-trivial optimizations. These optimizations are tailored to the domain specific language and thus to analytics workloads containing online machine learning methods. In particular, the PROTEUS optimizer performs operator fusion for Sum-Product operators. This optimization results in fewer intermediate results and better resource efficiency. We target this class of optimizations because almost all PROTEUS workloads contain machine learning methods, which rely on linear algebra operations. This enables better resource management of the underlying execution engine, as fewer operators have to be scheduled for execution.
This document is structured as follows: Section 2 provides basic background information on common streaming optimizations and on the Sum-Product class of optimizations for offline machine learning. Section 3 describes the first prototype of the PROTEUS optimizer. Section 4 outlines how we plan to extend it.
2 Background
In this section, we provide the background knowledge regarding common techniques for stream processing optimization and common optimization for execution plans meant for machine learning workloads.
2.1 Common Streaming Optimizations
Hirzel et al. present eleven techniques for stream processing optimization [1], each with different characteristics and profitability.
OPERATOR REORDERING (A.K.A. HOISTING, SINKING, ROTATION, PUSH-DOWN)
Move more selective operators upstream to filter data early. The key idea of operator reordering is to move operators that perform selection or projection before costly operators (e.g., join).
REDUNDANCY ELIMINATION (A.K.A. SUBGRAPH SHARING, MULTI-QUERY OPTIMIZATION)
Eliminate redundant computations. This optimization removes those components of the query graph that are performing redundant work. The idea is to reuse intermediate results as much as possible.
OPERATOR SEPARATION (A.K.A. DECOUPLED SOFTWARE PIPELINING)
Separate operators into smaller computational steps. The main idea behind operator separation is to enable other optimizations such as operator reordering or fission and to build parallel pipelines that scale better on multi-core systems.
FUSION (A.K.A. SUPERBOX SCHEDULING)
Avoid the overhead of data serialization and transport. Fusion trades communication cost against pipeline parallelism: communication between two fused operators is cheap, but separate (un-fused) operators can run on different cores, so the upstream operator can already read a new item while the downstream operator is still processing the previous one.
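As a toy illustration (independent of any particular engine), fusion amounts to composing the functions of two pipelined operators so that each item crosses only one operator boundary:

```scala
// Toy illustration with plain Scala collections: two chained operators vs. one fused operator.
val input: List[String] = List("1", "2", "3")

val parse: String => Int = _.toInt     // operator 1
val enrich: Int => Int   = _ * 10      // operator 2

// Un-fused: two traversals and an intermediate collection between the operators.
val twoOperators = input.map(parse).map(enrich)

// Fused: the functions are composed, so each item crosses a single operator,
// avoiding the intermediate result and the communication between the two steps.
val fusedOperator = input.map(parse andThen enrich)
```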
FISSION (A.K.A. PARTITIONING, DATA PARALLELISM, REPLICATION)
Parallelize computations. The intuition behind this optimization is to assign items of the input stream to different instances of the upstream operator. This requires upfront partitioning but allows for parallel execution of the upstream operator.
**PLACEMENT (A.K.A. LAYOUT)**
Assign operators to hosts and cores. The key idea behind placement is to assign specific operators to specific computing resources. For instance, if an operator requires a large amount of memory to perform its task, it would be beneficial to place it on a node equipped with more memory.
**LOAD BALANCING**
Distribute workload evenly across resources. The purpose of this optimization is to ensure a balanced workload on every parallel instance of an operator, thus fighting skewness.
**STATE SHARING (A.K.A. SYNOPSIS SHARING, DOUBLE-BUFFERING)**
Optimize for space by avoiding unnecessary copies of data. State sharing can result in high-throughput processing as it decreases the memory footprint, which leads to less cache misses or disk I/O.
BATCHING (A.K.A. TRAIN SCHEDULING, EXECUTION SCALING)
Process multiple data items in a single batch. Batching trades latency for throughput: it can improve throughput by amortizing operator-firing and communication costs over more data items.
ALGORITHM SELECTION (A.K.A. TRANSLATION TO PHYSICAL QUERY PLAN)
Use a faster algorithm for implementing an operator.
LOAD SHEDDING (A.K.A. ADMISSION CONTROL, GRACEFUL DEGRADATION)
Degrade gracefully when overloaded. The intuition behind load shedding is to drop some input items when the operator is overloaded. Another technique that allows for load shedding is approximation.
As of May 2018, the PROTEUS platform, which is based on Apache Flink 1.4 release, supports all the above optimizations except for the algorithm selection and the load shedding.
2.2 Machine Learning Optimizations
Several researchers have proposed optimizations for large-scale machine learning systems running in a batch fashion. Those systems allow the user to define machine learning algorithms through linear algebra and relational algebra, and they generate efficient execution plans [2]. The main intuition behind the optimizers of those systems is to reduce the number of materialized intermediate results, to reduce the number of scans of the input, and to exploit sparsity when operators are chained.
Boehm et al. proposed the above optimizations in their SystemML – SPO (Sum-Product Optimizer) framework [3]. In particular, they focused on Sum-Product optimizations, operator fusion, and sparsity exploitation [2, 4]. Briefly, SPO analyzes the execution plan produced by SystemML, rewrites the plan by fusing sum and matrix operations, and then compiles the plan through SystemML to the underlying execution engine (e.g., Apache Spark).
3 The PROTEUS Optimizer
Optimizations such as operator fusion are well-known techniques in the area of stream data processing. Stream processing engines such as Apache Flink already feature this type of optimization. Systems designed for large-scale machine learning also feature these optimizations, but only in a batch execution model. In this deliverable, we develop a prototype of an optimizer that combines pure-streaming operator fusion techniques with Sum-Product optimizations. We target Sum-Product optimizations because they can easily be applied to a variety of linear machine learning models, including those in the SOLMA library of the PROTEUS project. We build our PROTEUS optimizer on top of the Apache Flink Streaming APIs.
Before we delve into the technical details of our solution, we present in the following a motivational example of how a General Linear Model such as Lasso can be mapped to a dataflow execution plan. We target this specific algorithm for two main reasons: 1. it features a Sum-Product operation and 2. it is the main algorithm used in the PROTEUS validation scenario. Then, we highlight how this basic mapping results in a loss of performance and what an optimizer should do to prevent such performance degradation.
Consider the traditional formulation of the objective function of Lasso (a Linear Model):
$$
\min_w \frac{1}{2n_{\text{samples}}} \|Xw - y\|^2_2 + \alpha \|w\|_1
$$
We train Lasso using Stochastic Gradient Descent as the online optimization method with $L_1$ regularization. To implement that formulation, we could use a high-level language such as Scala and would obtain something similar to the following snippet:
```scala
import scala.math.{abs, max, signum}

// Gradient and loss for a minibatch (X, Y) given the current weights W.
def computeGradient(X: Matrix, Y: Vector, W: Vector, oldGradient: Matrix): (Matrix, Double) = {
  val diff = X %*% W - Y                        // calculate activations
  val loss = diff * diff / 2.0                  // calculate loss
  val newGradient = diff * X + oldGradient      // calculate new gradient
  (newGradient, loss)
}

// Weight update (hypothetical signature; the defining line is not spelled out in the text).
def updateWeights(grad: Matrix, oldW: Vector, stepSize: Double, epoch: Int, regP: Double): Vector = {
  val stepSizeCurrentEpoch = stepSize / math.sqrt(epoch)
  val W = stepSizeCurrentEpoch * grad + oldW    // calculate new weights given gradient and step
  val shrinkageVal = regP * stepSizeCurrentEpoch
  W map {
    value => signum(value) * max(0.0, abs(value) - shrinkageVal)  // apply L1 regularization
  }
}
```
The above code however is meant for single-thread execution, i.e., it is not scalable. To make it scalable, we need to reformulate it using the APIs of a dataflow engine such as Apache Flink or Apache Spark. To this end, we show in the following how a system expert would code the same snippet using the streaming APIs of Apache Flink.
We assume that our program receives a stream of labelled data points and also uses a system-specific aspect to broadcast the updated model to the workers. In particular, we use a feedback edge in the dataflow graph in order to merge all the gradients produced by the different workers and broadcast the updated model to all the workers. The overall implementation results in a very cumbersome-to-follow block of code in which language-specific constructs (e.g., lambdas, Options, Either) are mixed with system-specific features (e.g., iteration, connected streams). We highlight that the hand-crafted code snippet below results in an optimal streaming execution plan when executed on Apache Flink. The main logic of the Lasso algorithm is embedded in a set of User Defined Functions as follows:
1. asynchronously compute the gradient upon the arrival of a minibatch of points using the latest model available
2. asynchronously merge the gradients produced by the different workers
3. asynchronously update the current model with a merged gradient and broadcast the new model to every worker.
```scala
val env = ...
val trainingStream = env.addSource(...).map(Left(_))
val initialWeights = env.addSource(WeightRandomInitializer()).map(Right(_))

def stepFunc(workerIn: ConnectedStreams[Matrix, Matrix]) = {
  val worker = workerIn.flatMap(new RichCoFlatMapFunction[Matrix, Matrix, Either[Matrix, Matrix]] {
    @transient var stagedBatches: mutable.Queue[Matrix] = _
    @transient var currentModel: Option[Matrix] = _

    override def open(parameters: Configuration): Unit = {
      stagedBatches = new mutable.Queue[Matrix]()
      currentModel = None
    }

    // incoming answer from PS
    override def flatMap2(newModel: Matrix, out: Collector[Either[(Matrix, Double), Matrix]]) = {
      currentModel = Some(newModel)
    }

    // incoming data
    override def flatMap1(dataOrInitialModel: Either[Matrix, Matrix],
                          out: Collector[Either[(Matrix, Double), Matrix]]) = {
      dataOrInitialModel match {
        case Left(data) =>
          if (data.isLabelled) {
            currentModel match {
              case Some(model) =>
                val diff = miniBatch %*% model - Y
                val loss = diff * diff / 2.0
                val newGradient = diff * data + oldGradient
                out.collect(Left(Left(newGradient, loss)))
            }
          } else {
            // do prediction here wrapping it with Right
          }
        case Right(model) =>
          currentModel = Some(model)
          out.collect(Left(Right(model)))
      }
    }
  }).setParallelism(workerParallelism)

  val wOut = worker.flatMap(x => x match {
    case Right(out) => Some(out)
    case _          => None
  }).setParallelism(workerParallelism)

  val ps = worker.flatMap(x => x match {
      case Left(workerOut) => Some(workerOut)
      case _               => None
    })
    .setParallelism(workerParallelism)
    .partitionCustom(new Partitioner[Int]() {
      override def partition(key: Int, numPartitions: Int): Int = key % numPartitions
    }, paramPartitioner)
    .flatMap {
      new RichFlatMapFunction[Either[(Matrix, Double), Matrix], Either[Matrix, Matrix]] {
        @transient var currentModel: Option[Matrix] = _

        override def open(p: Configuration): Unit = {
          super.open(p)
          currentModel = None
        }

        override def flatMap(msg: Either[(Matrix, Double), Matrix],
                             out: Collector[Either[Matrix, Matrix]]): Unit = {
          msg match {
            case Left(t) => {
```
3.1 Compilers and Optimizers to the Rescue
The example in the above section shows how a simple algorithm such as Lasso results in a muddle of system-specific constructs as well as language-specific aspects. However, the user really expects to have something like:
```scala
val env = ...
val trainingStream = env.addSource(...).toMatrix
val initialModel = env.addSource(LassoModelRandomInitializer()).toMatrix

withFeedback(trainingStream, initialModel)(
  // compute the local model update and a prediction for a minibatch X
  (X, model) => {
    val d = diag(DenseMatrix(X.colsNum)(sqrt(abs(model.gamma))))
    val A = model.A + X %*% X.t + inv(d)
    val prediction = model.b.t %*% inv(A) %*% X
    (prediction, (A, gamma))
  },
  // merge two models
  (currModelA, currModelB) => currModelA + currModelB,
  // update the model given the true target vector Y for a point X
  (currModel, Y, X) => {
    val l = Y %*% X
    currModel.b += l
  }
)
```
In the above snippet, we abstract away the complexity of the feedback loop and use an online Lasso formulation close to the one in the PROTEUS SOLMA library (see D4.3), which was also used in the Flink example, and we let the end-user define only:
1. how to compute the local parameter of the model given a current model and a minibatch of points coming from the input stream and output a prediction
2. how to merge two models
3. how to update the model given the true target vector for a given point
To have this high-level interface, we need to obtain a holistic view of the overall set of operations we want to perform on the data stream(s). To this end, we extended the language developed in T3.3 and its underlying Intermediate Representation (IR) to support operations among streaming matrices and to translate the execution flow into a streaming execution plan with feedback loops. This introduces a secondary challenge that we had to solve in this deliverable, i.e., generating the user defined functions for an arbitrary linear combination of streaming matrices. Consider the following formula, which is a common subexpression in any General Linear Model:
\[ x \times W - y \]
where \( x \) is a matrix of size (n, m), \( W \) is a matrix of size (m, p), \( y \) is a matrix of size (n, p), \( \times \) is the matrix multiplication operator, and \( - \) is the subtraction operator between two matrices of equal size. We model each matrix as a streaming matrix that is ingested asynchronously into our processing engine. The system receives each matrix as a stream of rows of size \( m \) and triggers the computation only when \( n \) rows have been ingested. Therefore, we assume we want to perform the above operations whenever we receive \( n \) new rows for each of the two streaming matrices \( x \) and \( y \). A straightforward way to implement this in an execution engine such as Apache Flink is to use two count-window operations to materialize the \( n \) new items and then connect the stream of count-windows coming from input \( x \) with the stream coming from the streaming matrix \( W \). The matrix multiplication is then performed within a coFlatMap, and the stream containing the result of the coFlatMap is connected with the one containing the materialized count-window of the streaming matrix \( y \). Finally, the subtraction is computed from the intermediate values and the last \( n \) items coming from \( y \).
The problem with this approach is that it requires at least 2p threads for the window operators (with p being the parallelism of the window operators) and 2q threads for the coFlatMaps (with q the parallelism of the coFlatMap operators). As we cannot omit the window operators, a potential optimization is to union the three streams and perform the multiplication and the subtraction in a single flatMap operator, which runs with parallelism p. The challenge with this approach is to detect a pattern of the form operand1 * operand2 - operand3 and to generate specialized code for the flatMap operator containing the fused math operations.
To detect those types of patterns, we leverage our Intermediate Representation format to holistically inspect the user code. Our Intermediate Representation allows for pattern matching and, by exploiting this feature, our PROTEUS optimizer can fuse these math operations into a single Flink flatMap operator.
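A minimal sketch of what such a rewrite can look like on a toy intermediate representation is shown below. The node names (`Mult`, `Sub`, `FusedMultSub`) are hypothetical and do not correspond to the actual Lara IR; the sketch only illustrates the pattern-matching-based fusion step.

```scala
// Toy IR for streaming-matrix expressions; names are illustrative, not the real Lara IR.
sealed trait Expr
case class StreamMatrix(name: String)              extends Expr
case class Mult(left: Expr, right: Expr)           extends Expr
case class Sub(left: Expr, right: Expr)            extends Expr
// Fused physical operator: computes (a * b) - c inside a single flatMap.
case class FusedMultSub(a: Expr, b: Expr, c: Expr) extends Expr

// Detect the pattern (a * b) - c and replace it with the fused operator,
// recursing so that nested occurrences are rewritten as well.
def fuse(e: Expr): Expr = e match {
  case Sub(Mult(a, b), c) => FusedMultSub(fuse(a), fuse(b), fuse(c))
  case Mult(a, b)         => Mult(fuse(a), fuse(b))
  case Sub(a, b)          => Sub(fuse(a), fuse(b))
  case leaf               => leaf
}

// Example: X * W - Y becomes a single fused operator.
val plan      = Sub(Mult(StreamMatrix("X"), StreamMatrix("W")), StreamMatrix("Y"))
val optimized = fuse(plan)   // FusedMultSub(StreamMatrix(X), StreamMatrix(W), StreamMatrix(Y))
```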
4 Conclusions
In this document, we presented a first prototype of the PROTEUS optimizer. The main feature of this optimizer is to holistically optimize a logical execution plan containing linear algebra operators that perform matrix operations on streams. The proposed optimizer is able to perform Sum-Product optimizations by means of operator fusion. This leads to better resource utilization, as the underlying engine has to schedule fewer physical operators.

As future work, which we will carry out in D3.11, we plan to benchmark the capabilities of the PROTEUS optimizer by using it on one of the algorithms provided by WP4.
The code of the optimizer is available on the PROTEUS Github page at the following address: https://github.com/proteus-h2020/streaming-lara
References
[2] M. Boehm et al., On Optimizing Operator Fusion Plans for Large-Scale Machine Learning in SystemML, CoRR, abs/1801.00829, 2018.
An implementation of the HLP
By T. Gyimóthy*, E. Simon*, Á. Makay**
Introduction
The Helsinki Language Processor (HLP) system was originally designed [7] for the description of programming languages and for the automatic generation of compilers. While keeping the descriptional metalanguages fixed, different implementations have great freedom with respect to the parsing, semantic evaluation, and software generation techniques they apply. Our implementation chooses SIMULA 67 [1] as the base language, which influences both the collection of semantic functions usable for describing semantic features and the structure of the generated compiler.
This text follows the steps of the generating process. A source language \( L \) is assumed which has a lexical description in the lexical metalanguage and a syntactic-semantic description in the corresponding metalanguage of the HLP.
There are two hand-written lexical analyzers for the metalanguages. One of them receives the lexical description of \( L \) and produces the input for the generator of the lexical analyzer of \( L \), which is constructed as a finite automaton. The other works on the syntactic-semantic description of \( L \), essentially in the form of an attribute grammar, and produces the input for the semantic evaluator and for the pure-syntax constructor. Because the lexical and syntactic descriptions may use different token class names and terminal strings, the symbol table of the generated lexical analyzer must be unified after these two lexical analyses.
In the semantic description of \( L \) we can use attributes as SIMULA types involving simple types, classes, expressions, functions, statements and predefined standard procedures.
Having the pure syntax of \( L \), the parser generator checks whether the grammar is of type LR(1) [2]. If so, it constructs the table of the optimal parser of type LR(1), LALR(1), SLR(1), or LR(0).
We can choose one of the modified strategies ASE [3] or OAG [4] for computing the necessary passes and the order in which the attribute values are evaluated in the generated compiler. For each syntax rule a SIMULA class is constructed, which contains the parsing and evaluation actions, decomposed into passes, for every place where this rule is applied in a derivation. One-pass compilation is possible if there are only synthesized attributes, which means that the values of all attribute occurrences can be evaluated during bottom-up parsing. This is a sufficient condition and thus a recommendation with respect to the formulation of the grammar.
By this means we have all the components of the combined parser and semantic evaluator. Under the control of the parsing table, new objects of the types predefined in the above SIMULA classes are created and connected to form the derivation tree. Subsequent passes are executed by reactivating and deactivating the objects, as the inner structures of the classes prescribe the evaluation order.
The generators have the same structure as the generated compiler, so the system itself can generate new variants of the lexical analysers and the parser. We have written these parts of the HLP in the system's own metalanguages.
Structure of the generated compiler
The nucleus of the generated compiler (GC) consists of a parser based on a grammar $G$ from the class, or a subclass, of the LR(1) grammars. It constructs the derivation tree in the grammar $G$ from the token stream produced from the incoming text $p \in L(G)$ by the generated lexical analyzer GL. The nodes of the derivation tree are the SIMULA objects of the types (SIMULA classes) representing the rewriting rules of the grammar $G$. Local pointers inside the objects provide the connections (edges) to the nodes on the lower levels of the tree.
The objects also contain the local variables for the attribute occurrences, together with the calling sequence that represents the attribute evaluation strategy, predetermined from the attribute dependencies of the grammar $G$ by one of the algorithms ASE or OAG. During parsing, when a new object is activated, not only is a new node created in the derivation tree (bottom up), but those attributes are evaluated which depend only on previously evaluated attributes. After that the object (the procedural part of the object) detaches itself, while the contents of the variables of the attribute occurrences just evaluated remain accessible. These are usable by the objects on higher levels of the derivation tree. When an object is reactivated, a new package of not-yet-evaluated attribute occurrences becomes evaluable. Of course, during evaluation this object activates other objects too, going up or down in the tree in the order prescribed by the strategy. After a finite number of activation-deactivation pairs, an object and all the objects below it have no unevaluated attribute occurrences left. This part of the tree is then unnecessary, so it is destroyed. Finally only the root of the derivation tree remains, together with one or more attributes of the initial nonterminal of the grammar $G$. Generally these attributes serve the purposes of target code generation.
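SIMULA's coroutine-style detach/reactivate mechanism has no direct counterpart in most modern languages, but the idea of derivation-tree objects that carry attribute values and are revisited once per pass can be sketched, in much simplified form, as follows (all names are illustrative, and the fixed per-pass order stands in for the ASE/OAG-derived strategy):

```scala
// Illustrative sketch: each node of the derivation tree carries its attribute
// values and one evaluation action per pass; SIMULA object reactivation is
// approximated by calling the node again for each pass.
class TreeNode(val ruleName: String, val children: List[TreeNode]) {
  val attributes = scala.collection.mutable.Map[String, Any]()       // attribute occurrences
  var passActions: Vector[TreeNode => Unit] = Vector.empty            // one action per pass

  def evaluatePass(pass: Int): Unit = {
    children.foreach(_.evaluatePass(pass))                            // visit subtree first
    if (pass < passActions.length) passActions(pass)(this)            // then evaluate this node
  }
}

// Example: a synthesized "code" attribute collected in a single bottom-up pass.
val leaf = new TreeNode("const", Nil)
leaf.passActions = Vector(n => n.attributes("code") = "PUSH 1")
val root = new TreeNode("expr", List(leaf))
root.passActions = Vector(n => n.attributes("code") = n.children.map(_.attributes("code")).mkString("; "))
root.evaluatePass(0)   // root.attributes("code") == "PUSH 1"
```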
Of course, with an attribute grammar as the metalanguage of the system we can describe, and thus generate, not only compilers for programming languages but also other special-purpose systems based on context-free languages: database schemes, machine architectures, picture description and processing, and so on. The common feature of these tasks is that there exists a class of very similar algorithms, each of which can be specified by a context-free grammar together with several special attributes. The result is a generated software system specialized to one task only, and the gain is in time or space complexity. This holds for a compiler too: GC has a parser for one grammar and one strategy for evaluating a given attribute set.
Although it is possible to describe the generation of the target code by an attribute in the metalanguage too, we recommend a final pass for it, based on the other attribute values evaluated earlier. Several procedures defined for this purpose can help users with this target-language-dependent job. So far we have neglected this aspect because we first need experience with large and complicated languages.
**Lexical metalanguage**
The lexical metalanguage is used to describe the lexical structure of the source language for the automatic construction of the lexical analyzer, which forms tokens from the character strings of the source program. A description in the lexical metalanguage consists of five parts. In the first part a collection of character sets is defined. The specification of token classes by regular expressions can be found in the second part. The description of transformations concerns both characters and token classes; transformations are performed while characters or tokens are being isolated. The action blocks specify the scanning sequence, the screening of keywords from token classes, and the way the isolated tokens are sent to the syntax analyzer.
To give an idea of what a lexical description looks like we refer to the description of a simple block structured language called BLOCK_HLP given in Appendix.
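For a rough intuition only, a token-class description of the kind outlined above can be approximated by regular expressions and a simple prefix-matching scanner; the token class names below are invented for the example and keyword screening is only hinted at in a comment.

```scala
import scala.util.matching.Regex

// Hypothetical token classes defined by regular expressions, tried in order.
val tokenClasses: List[(String, Regex)] = List(
  "IDENTIFIER" -> "[A-Za-z][A-Za-z0-9]*".r,
  "INTEGER"    -> "[0-9]+".r,
  "WHITESPACE" -> "\\s+".r
)

// Scan the input, repeatedly taking the first token class whose regular expression
// matches a prefix; keywords would be screened out of IDENTIFIER afterwards,
// as the action blocks of the metalanguage prescribe.
def scan(input: String): List[(String, String)] =
  if (input.isEmpty) Nil
  else tokenClasses.view
    .flatMap { case (name, re) => re.findPrefixOf(input).map(lexeme => (name, lexeme)) }
    .headOption match {
      case Some((name, lexeme)) => (name, lexeme) :: scan(input.drop(lexeme.length))
      case None                 => Nil   // unrecognised character: stop (a real analyzer reports an error)
    }

scan("count 42")   // -> IDENTIFIER "count", WHITESPACE " ", INTEGER "42"
```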
**The syntactic and semantic metalanguage**
The definition of an attribute grammar is divided into five parts. First, the inherited and synthesized attributes must be defined by SIMULA types. It should be noted that the concept of global attributes was not implemented; global attributes can be replaced by SIMULA objects. In the nonterminal declaration part those nonterminals are declared which appear as the left-hand side of at least one production in the production list. Each nonterminal declaration has a possibly empty attribute list associated with it; an attribute from this list is associated with every nonterminal appearing in the nonterminal list. The third part of the description is the declaration of the start symbol. We assume the grammar to be reduced. The auxiliary SIMULA variables, classes, functions and procedures which are used in the semantic rules and in code generation are declared in the procedure declaration part.
As in the original HLP system, we employ the BNF (Backus Naur Form) notation for the syntax of the source language. Semantic rules and code generation are built into the productions. Note that if the semantic part of a production is empty, then an ECF (Extended Context Free) description [5] may be used on its right-hand side. In keeping with the SIMULA features, the original notation of an attribute occurrence is modified. In a production, if an attribute is associated with the left-hand side nonterminal, then only the attribute name must occur. Other attribute occurrences are denoted by
```
nonterminal name.attribute name
```
In Appendix the syntactic semantic description of the language BLOCK_HLP can be found too.
Attribute grammars
An attribute grammar (AG) can be considered as an extension of a context free (CF) grammar with attributes and semantic rules defining values of attributes. These attributes serve to describe the semantic features of the language elements. An AG is a 3-tuple
\[ AG = (G, A, F), \]
where \( G = (V_N, V_T, P, S) \) is a reduced CF grammar, \( V_N, V_T, P \) and \( S \) denote the nonterminals, terminals, productions and the start symbol of the grammar respectively.
A production \( p \in P \) has the form
\[ p: X_0 \rightarrow X_1 \ldots X_{n_p}, \quad \text{where} \quad n_p \geq 0, \quad X_0 \in V_N, \quad X_i \in V_N \cup V_T \quad (1 \leq i \leq n_p). \]
The finite set \( A \) is the set of attributes. There is a fixed set \( A(X) \) associated with each nonterminal \( X \in V_N \) denoting the attributes of \( X \). For an \( X \in V_N, p \in P \) and \( a \in A(X) \) \( X \cdot a \) denotes an attribute occurrence in \( p \). An attribute can be either inherited or synthesized, so each \( A(X) \) is partitioned into two disjoint subsets, \( I(X) \) and \( S(X) \).
The set
\[ A_p = \bigcup_{i=0}^{n_p} \bigcup_{a \in A(X_i)} X_i \cdot a \]
denotes all attribute occurrences in a syntactic rule \( p \).
The set \( F \) consists of semantic rules associated with syntactic rules too. A semantic rule is a function type defined on attribute occurrences as argument types. For each attribute we have a set of attribute values (the domain of attribute) and for each semantic rule a semantic function defined on the sets which are related to its type. Formally, let us denote by \( F_p \) the rules associated with syntactic rule \( p \), then
\[ F = \bigcup_{p \in P} F_p. \]
We classify the set \( A_p \) into an output attribute occurrence set
\[ OA_p = \{X_i \cdot a | (i = 0 \text{ and } a \in S(X_i)) \text{ or } (i > 0 \text{ and } a \in I(X_i))\} \]
and an input attribute occurrence set [6].
\[ IA_p = A_p - OA_p \]
We assume, that for each \( X_i \cdot a \in OA_p \) there is exactly one semantic rule \( f \in F_p \), the function related to it defines the value of \( X_i \cdot a \). An AG is in normal form provided that only input occurrences appear as arguments of the semantic rules.
Evaluation of attribute values
Denote by \( t \) a derivation tree in the grammar \( G \). If a node of \( t \) is labeled by \( X \), then we can augment it by the attribute occurrences of \( X \) and by the semantic rules defined by two syntactic rules: the one applied on the level above \( X \) in \( t \) defines the inherited occurrences, while the one applied on the level below \( X \) determines the synthesized ones. Naturally, the root has no inherited and the leaves have no synthesized occurrences. (Leaves have attributes defined by the lexical analyzer, which can be considered synthesized.) A rule \( p \), and hence an attribute occurrence, may occur several times in \( t \); we distinguish between them and, where it could be confusing, speak of an occurrence in the tree \( t \). Denote by \( T_{AG} \) the set of the augmented derivation trees of the \( AG \).
Assume that a semantic function is given for each semantic rule in \( F \). The value of each attribute occurrence in \( t \) is computed by one of these functions, and it is computable only if the values of its arguments have already been computed. Therefore we have dependencies among the attribute occurrences in the tree \( t \). We denote by \( (X_i \cdot a \rightarrow X_j \cdot b) \) the fact that the function defining the value of \( X_j \cdot b \) in \( t \) has the value of \( X_i \cdot a \) as an argument; we say that \( X_j \cdot b \) depends on \( X_i \cdot a \) in \( t \). Through this relation the set \( F \) and the tree \( t \) induce a dependency graph \( D_t \). If \( D_t \) has no cycle, it determines an evaluation order for the computation of the values of all attribute occurrences in \( t \). An attribute grammar is noncircular if there is no derivation tree whose dependency graph contains a cycle. Deciding whether an AG is noncircular requires algorithms of exponential complexity.
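As an illustration only (the HLP system itself precomputes the strategy and is written in SIMULA), the following Python sketch shows how an acyclic dependency graph \( D_t \) over attribute occurrences yields an evaluation order; the occurrence names and the graph encoding are invented for the example.

```python
from collections import defaultdict, deque

def evaluation_order(occurrences, dependencies):
    """Topological order of attribute occurrences, or None if the graph has a cycle.

    occurrences  -- iterable of hashable occurrence identifiers
    dependencies -- pairs (a, b) meaning "b depends on a"
    """
    successors = defaultdict(list)                 # a -> occurrences that depend on a
    indegree = {occ: 0 for occ in occurrences}
    for a, b in dependencies:
        successors[a].append(b)
        indegree[b] += 1

    ready = deque(occ for occ, d in indegree.items() if d == 0)
    order = []
    while ready:
        occ = ready.popleft()
        order.append(occ)
        for succ in successors[occ]:
            indegree[succ] -= 1
            if indegree[succ] == 0:
                ready.append(succ)
    return order if len(order) == len(indegree) else None   # None signals circularity

# Tiny example: S.val depends on X.val and Y.val coming from the lexical level.
print(evaluation_order(["S.val", "X.val", "Y.val"],
                       [("X.val", "S.val"), ("Y.val", "S.val")]))
```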
Determining the dependency graph for each derivation tree during compilation is time-consuming. For several subclasses of AG's it is possible to determine an evaluation strategy based on the grammar only. Such a strategy consists of an ordering of the attribute occurrences in the rules of the grammar, in the form of a dependency graph \( D \), and means that wherever an occurrence \( X \cdot a \) appears in any tree \( t \), it is computable once the occurrences on which it depends according to \( D \) have been evaluated. The problem is to determine \( D \) from the AG. During compilation we have to follow the evaluation order defined by \( D \) for each derivation tree. Naturally this is a tree traversal strategy, and one traversal may be seen as a pass of the compilation.
Two subclasses of AG's are considered in our system in accordance with them two algorithms, ASE and OAG serve to generate evaluation strategies.
**ASE**
The ASE algorithm is based on a fixed tree traversal strategy. An AG is ASE if any \( t \in T_{AG} \) can be evaluated during \( m \) alternating depth-first left-to-right (L-R) and depth-first right-to-left (R-L) tree traversal passes.
The attribute evaluation during an L-R traversal can be illustrated for a syntactic rule \( p : X_0 \rightarrow X_1 \ldots X_n \) as follows.

```
PROCEDURE TRAVERSE (X0);
BEGIN
  FOR i := 1 STEP 1 UNTIL n DO
    BEGIN EVAL (I(Xi)); TRAVERSE (Xi) END;
  EVAL (S(X0));
END OF TRAVERSE;
```
During an R-L pass the FOR statement above has the form

```
FOR i := n STEP -1 UNTIL 1 DO
```
The ASE algorithm performs a membership test for an AG by means of this traversal procedure and assigns attributes to passes. By the EVAL procedure we denote the computation of the attribute values. The different instances of the same attribute are evaluated during the same pass.
Our experience shows that the ASE subclass is large enough and can be applied well in a compiler writing system, but the original algorithm needs some modification to be usable in a practical system. For example, we need not traverse a subtree during the ith pass if there is no evaluable attribute in this subtree. This can be decided by the following test.
Denote \( H(X) \) the set of nonterminals which can be derived from an \( X \in V_N \).
It is easy to generate these sets by the transitive closure using \( P \).
Let \( K(X) = \bigcup_{Y \in H(X)} A(Y) \), and denote by \( A_j \) the set of attributes which can be evaluated during the jth pass.
If \( (K(X) \cup S(X)) \cap A_j = \emptyset \), then for \( X = X_i \) we do not call TRAVERSE \( (X_i) \) during the jth pass.
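For illustration, the sets \( H(X) \) and the subtree-skipping test above could be computed along the following lines; the Python code and its grammar representation are not part of the HLP implementation and serve only as a sketch.

```python
def derivable_nonterminals(productions):
    """H(X): the nonterminals derivable from X, obtained as a transitive closure.

    productions -- list of (lhs, rhs) pairs, where rhs is a list of symbols;
    the nonterminals are exactly the symbols appearing as some lhs.
    """
    nonterminals = {lhs for lhs, _ in productions}
    H = {X: set() for X in nonterminals}
    for lhs, rhs in productions:
        H[lhs].update(sym for sym in rhs if sym in nonterminals)
    changed = True
    while changed:                      # iterate until the closure is stable
        changed = False
        for X in nonterminals:
            before = len(H[X])
            for Y in list(H[X]):
                H[X] |= H[Y]
            if len(H[X]) != before:
                changed = True
    return H


def skip_subtree(X, H, A, S, A_j):
    """True if TRAVERSE(X) can be skipped in pass j, i.e. (K(X) | S(X)) & A_j is empty."""
    K = set()
    for Y in H[X]:
        K |= A[Y]                       # K(X) = union of A(Y) for Y in H(X)
    return not ((K | S[X]) & A_j)
```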
In the ASE algorithm the tree traversal and the attribute evaluation start from the root of the derivation tree. In our system we use a bottom-up tree constructor, and many synthesized attributes can be evaluated interleaved with the construction of the derivation tree. These synthesized attributes can easily be handled by the TRAVERSE procedure, often decreasing the number of evaluation passes of an AG.
An efficient space management technique can be ensured for a generated compiler by using an extended version of the ASE algorithm. For each \( p \in P \) we test whether, after the ith pass, all attributes of the subtrees that can be derived from \( p \) have been computed. If so, we generate a statement for the rule \( p \) which releases these subtrees. This technique relies on the garbage collector and is very efficient, because large parts of a derivation tree are released already during the construction of the tree.
The ASE algorithm is pessimistic in the sense that it considers all dependencies of an attribute \( a \): for example, there may be dependencies for \( a \) in the rules \( p \) and \( q \) even though no derivation tree contains the rules \( p \) and \( q \) together. Generally this does not occur in practical programming languages, but it causes problems in some types of languages. Whether there is a derivation tree containing the rules \( p \) and \( q \) together may be decided by a simple algorithm using the sets \( H(X) \).
**OAG**
In this section we give a short description of the OAG algorithm, using some notations of [4]. We modified this algorithm so that an attribute evaluation strategy is obtained for a larger subclass of noncircular AG's. The time needed for the modified algorithm does not differ significantly from that of the original algorithm.
As opposed to the ASE algorithm, the OAG algorithm does not use a predefined tree traversal strategy. For each AG∈OAG an attribute evaluation strategy is generated, and all derivation trees of the AG can be evaluated by this strategy. For each $X\in V_N$ the OAG algorithm constructs a partial order over the set $A(X)$ such that in any derivation tree containing $X$ its attributes are evaluable in that order.
Denote by $DS(X)$ the partial order over the $A(X)$, and let
$$DS = \bigcup_{X \in V_N} DS(X)$$
be the set of these partial orders.
We define dependency graphs over the attribute occurrences of syntactic rules and over the attributes of nonterminals, finally we construct $DS$ using these graphs.
The dependency graph $DP_p$ contains the direct dependencies between attribute occurrences associated to a syntactic rule $p$.
$$DP_p = \{(X_i \cdot a \rightarrow X_j \cdot b) \mid \text{there is an } f \in F_p \text{ defining } X_j \cdot b \text{ depending on } X_i \cdot a\}$$
$$DP = \bigcup_{p \in P} DP_p$$
The dependency graph $IDP_p$ can be constructed from the $DP$,
$$IDP_p = DP_p \cup \{(X_i \cdot a \rightarrow X_i \cdot b) \mid X_i \text{ occurs in rules } p \text{ and } q, (X_i \cdot a \rightarrow X_i \cdot b) \in IDP_q^+\},$$
where $IDP_q^+$ denotes the nonreflexive, transitive closure of $IDP_q$.
$$IDP = \bigcup_{p \in P} IDP_p$$
The graph $IDP$ comprises the direct and indirect dependencies of attribute occurrences. For an $X \in V_N$ the dependency graph $IDS(X)$ contains the induced dependencies between attributes of $X$.
$$IDS(X) = \{(X \cdot a \rightarrow X \cdot b) \mid \text{there is an } X_i = X \text{ in a rule } p \text{ and } (X_i \cdot a \rightarrow X_i \cdot b) \in IDP_p\}$$
$$IDS = \bigcup_{X \in V_N} IDS(X).$$
The set $DS$ can be constructed using $IDS$. For an $X \in V_N$ the set $A(X)$ is partitioned into disjoint subsets $A(X)_i$, and $DS(X)$ defines a linear ordering over these subsets. The sets $A(X)_i$ are determined such that for an $a \in A(X)_i$ if $(X \cdot a \rightarrow X \cdot b) \in IDS(X)$ and $b \in A(X)_k$, then $k \leq i$. The sets $A(X)_i$ consist of either synthesized or inherited attributes only. The $DS(X)$ defines an alternating sequence of the synthesized and inherited sets $A(X)_i$.
$$DS(X) = IDS(X) \cup \{(X \cdot a \rightarrow X \cdot b) \mid X \cdot a \in A(X)_k, X \cdot b \in A(X)_{k-1}, 2 \leq k \leq m_x\},$$
where \( m_x \) is the number of the sets \( A(X)_i \). The extended dependency graph EDP is defined by IDP and DS:
\[
EDP_p = IDP_p \cup \{(X_i \cdot a \rightarrow X_i \cdot b) \mid (X \cdot a \rightarrow X \cdot b) \in DS(X),\ X_i = X \text{ and } X_i \text{ occurs in rule } p\},
\]
\[
EDP = \bigcup_{p \in P} EDP_p.
\]
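The chain of graph constructions DP, IDP, IDS described above is essentially a fixpoint computation. The Python sketch below is not taken from the HLP system (which implements OAG in SIMULA); it only illustrates, under a simplified rule representation invented here, how the induced dependencies can be propagated until they stabilize.

```python
def transitive_closure(edges):
    """Nonreflexive transitive closure of a set of directed edges."""
    closure = set(edges)
    while True:
        new = {(u, w) for (u, v) in closure for (v2, w) in closure if v == v2}
        if new <= closure:
            return closure
        closure |= new


def induced_dependencies(rules, DP):
    """Compute IDP_p and IDS(X) from the direct dependencies DP_p.

    rules -- dict: p -> list of the nonterminals occurring in rule p (index 0 = left-hand side)
    DP    -- dict: p -> set of edges ((i, a), (j, b)) between attribute occurrences of rule p,
             where i, j are positions in rules[p] and a, b are attribute names
    """
    IDP = {p: set(edges) for p, edges in DP.items()}
    changed = True
    while changed:
        changed = False
        for p in rules:
            # dependencies between two attributes of the same occurrence in p ...
            same_node = {(i, a, b) for ((i, a), (j, b)) in transitive_closure(IDP[p]) if i == j}
            # ... are carried over to every occurrence of the same nonterminal in any rule q
            for i, a, b in same_node:
                X = rules[p][i]
                for q, occs in rules.items():
                    for j, Y in enumerate(occs):
                        edge = ((j, a), (j, b))
                        if Y == X and edge not in IDP[q]:
                            IDP[q].add(edge)
                            changed = True
    IDS = {}
    for p, occs in rules.items():
        for i, X in enumerate(occs):
            IDS.setdefault(X, set())
            IDS[X] |= {(a, b) for ((k, a), (l, b)) in IDP[p] if k == l == i}
    return IDP, IDS
```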
A given AG is an OAG iff the EDP is noncircular. We implemented the OAG algorithm as a part of our compiler writing system. We have favourable experiences using the algorithm, but we have found simple attribute grammars (occurring in practical applications, see Fig. 1), where the IDP is noncircular but the EDP is circular. We modified the OAG algorithm so that in these cases we generate a new EDP.
The graphs DP, IDP, IDS, DS are computed by the original algorithm. In the next step, for each \( X \in V_N \) and \( (X \cdot a \rightarrow X \cdot b) \in DS(X) \setminus IDS(X) \) we add \( (X \cdot a \rightarrow X \cdot b) \) to \( IDP_p \) whenever \( X \) occurs in rule \( p \), and construct \( IDP^+ \). If a dependency \( (Y \cdot c \rightarrow Y \cdot d) \) is induced in \( IDP^+ \), then

(a) if \( (Y \cdot d \rightarrow Y \cdot c) \in DS(Y) \setminus IDS(Y) \), then we add \( (Y \cdot c \rightarrow Y \cdot d) \) to \( IDS(Y) \) and generate a new \( DS(Y) \) from the modified \( IDS(Y) \);

(b) if \( (Y \cdot d \rightarrow Y \cdot c) \in IDS(Y) \), then the algorithm terminates and the given AG is not an OAG;

(c) otherwise we leave \( (Y \cdot c \rightarrow Y \cdot d) \) out of consideration.

Once every \( (X \cdot a \rightarrow X \cdot b) \in DS(X) \setminus IDS(X) \) has been added for an \( X \in V_N \), the set \( DS(X) \) is not changed any more.
In Fig. 1 we show an AG which is neither ASE nor OAG but for which an attribute evaluation strategy can be generated using the modified OAG algorithm. We denote by \( \circ \) an inherited attribute and by \( \bullet \) a synthesized one.
The dependencies in rule 2 show that AG \( \notin \) ASE. We construct the sets \( A(Y)_i \) and \( A(Z)_i \) using the rules 1, 3, 5. The sets \( A(Y)_1 = \emptyset, A(Y)_2 = \{e, g\}, A(Y)_3 = \{f\} \) and \( A(Z)_1 = \{f\}, A(Z)_2 = \{e\} \) imply that \( (Y \cdot f \rightarrow Y \cdot e) \in DS(Y) \) and \( (Z \cdot e \rightarrow Z \cdot f) \in DS(Z) \). If we construct \( EDP_3 \) from \( DS(Y) \) and \( DS(Z) \), it will be circular, so AG \( \notin \) OAG. Using the modified algorithm, if we add \( (Y \cdot f \rightarrow Y \cdot e) \) to \( IDP_3 \), then \( (Z \cdot f \rightarrow Z \cdot e) \) is induced in \( IDS(Z) \). The new \( DS(Z) \) is constructed from the sets \( A(Z)_1 = \emptyset, A(Z)_2 = \{e\}, A(Z)_3 = \{f\} \), and the EDP generated from this \( DS(Z) \) will be noncircular. It is easy to prove that for an AG \( \in \) OAG the modified algorithm changes neither the set DS nor the graph EDP. For each \( p \in P \) the OAG algorithm generates a visit-sequence \( VS_p \) using the graph \( EDP_p \). Each \( VS_p \) is a linear sequence of node visits and attribute evaluations, and it is easy to generate an attribute evaluation strategy using the sets \( VS_p \).
Fig. 1

1. $S \rightarrow XY$
2. $X \rightarrow Xt$
3. $Y \rightarrow lZ$
4. $X \rightarrow t$
5. $Z \rightarrow l$

(The attribute dependency annotations of Fig. 1 are not reproduced here.)

Construction of the parser

In the present paragraph the logical description of the parsing automata constructor module is given. This module serves to compute the state transitions for both the finite state and the stack automaton. The definition of the token classes by regular expressions and the description of an ECF grammar are coded in a uniform manner. Consequently the procedure which computes the parsing states can be controlled at the job control level to generate finite, ELR (1), ELALR (1), ESLR (1) or ELR (0) [2] states as well. The states are represented by SIMULA objects based on the following declarations.

CLASS ITEM (NO, DOT, RSET);
INTEGER NO, DOT; REF (SET) RSET;
BEGIN
REF (ITEM) LINK;
END ITEM;

CLASS SET (BOUND);
INTEGER BOUND;
BEGIN INTEGER ARRAY TSET [0: BOUND];
END SET;

It is easy to see that this representation has two advantages. First, the finite and LR (0) states, which have no follower set, can be stored in a uniform manner. Secondly, those items which have the same follower sets store only one SIMULA reference to an object in which the followers have been written. If a computed state is equal to a state which has been computed earlier, then the SIMULA run-time system releases the space by calling the garbage collector. For each computed state there is a table which contains a set of ordered pairs: the first element describes the state number from which this state has been derived, and the second element contains the symbol code used to compute the considered state.
By applying the following theorem from [2] to our parser constructor we can obtain a useful conclusion: an ELR (0) language can be parsed by a finite state automaton iff there is no state which can be derived by a nonterminal from more than one state. Hence, in order to generate a finite state automaton, the ELR (0) states are computed first, followed by the finite state test.
After computing the selected type of states (ELR (1), ELALR (1), ESLR (1) or ELR (0)) a membership test is performed together with parser code generation. If it succeeds, then the next type of states is computed from the last states; the tests are performed from ELR (1) towards ELR (0). Some simple optimization procedures are executed during the tests. In the present version of our implementation there is no automatic error recovery procedure; an efficient error correcting algorithm, based on an ordering of the states [8], is under development.
The states and the internal code of the lexical analyzer, as a finite automaton, are generated by the same module. Of course, additional service routines are needed in the lexical analyzers. These are written for metalanguage purposes, but they can be used in the generated compilers in the same form as well.
Appendix
% LEXICAL DESCRIPTION FOR A SIMPLE BLOCK STRUCTURED
% LANGUAGE
% CALLED BLOCK_HLP
LEXICAL DESCRIPTION BLOCK_HLP
CHARACTER SETS
LETTER OR DIGIT = LETTER/DIGIT;
END OF CHARACTER SETS
TOKEN CLASSES
UNDERSCORE = * * ;
IDENTIFIER = LETTER (LETTER OR DIGIT/UNDERSCORE)* [16];
PROPERTY =DIGIT + [2];
COMMENT = ≠ % ≠ ANY* ENDOFLINE;
SPACES =SPACE* ENDOFLINE;
END OF TOKEN CLASSES
TRANSFORMATIONS ARE
UNDERSCORE=>;
END OF TRANSFORMATIONS
ACT BLOCK: BEGIN
IDENTIFIER => IDENTIFIER / KEYSTRINGS;
PROPERTY => PROPERTY;
COMMENT => ;
SPACES => ;
END OF ACT BLOCK
END OF LEXICAL DESCRIPTION BLOCK_HLP.
FINIS
% SYNTACTIC-SEMANTIC DESCRIPTION OF BLOCK_HLP
ATTRIBUTE GRAMMAR BLOCK_HLP
SYNTHESIZED ATTRIBUTES ARE
REF (SYMB) SYMREF; REF (SDECL) SEREF;
INTEGER ID, TYPE, EXTYPE;
END OF SYNTHESIZED ATTRIBUTES
INHERITED ATTRIBUTES ARE
REF (SBL) SYMT;
END OF INHERITED ATTRIBUTES
NONTERMINALS ARE
PROGRAM;
BLOCK HAS SYMT, SYMREF;
STATLIST HAS SYMT, SYMREF;
STAT HAS SYMT, SEREF;
IDECL HAS ID, TYPE;
EXDECL HAS SYMT, EXTYPE;
END OF NONTERMINALS
% PROCEDURES AND CLASSES
$$$$
CLASS SBL (A, B);
REF (SBL) A; REF (SYMB) B;
BEGIN
END OF SBL;
CLASS SYMB (A, B);
REF (SYMB) A; REF (SDECL) B;
BEGIN
END OF SYMB;
CLASS SDECL (A, B);
INTEGER A, B;
BEGIN
END OF SDECL;
PROCEDURE FIND (A, B, C);
NAME A;
INTEGER A, B; REF (SBL) C;
BEGIN
% The value of A will be the type of the variable B. This type is searched for % in the list of identifiers referenced by C.B. If B is not found in it, then C is replaced % by C.A. Repeating this until the type of B is found or the list becomes empty, either the % requested value is obtained or the identifier is undeclared.
END OF FIND
****
% END OF PROCEDURES AND CLASSES
PRODUCTIONS ARE
%1%
PROGRAM = BLOCK;
DO
BLOCK.SYMT := NEW SBL (NONE, BLOCK.SYMREF);
END
%2%
BLOCK = STATLIST;
%3%
STATLIST = STATLIST STAT;
DO
SYMREF := IF STAT.SEREF /= NONE THEN
NEW SYMB (STATLIST.SYMREF, STAT.SEREF) ELSE
STATLIST.SYMREF;
END
%4%
STATLIST = STAT;
DO
SYMREF := IF STAT.SEREF == NONE THEN NONE
ELSE NEW SYMB (NONE, STAT.SEREF);
END
%5%
STAT = IDECL;
DO
SEREF := NEW SDECL (IDECL.ID,IDECL.TYPE);
END
%6%
STAT = EXDECL;
DO
SEREF := NONE;
END
%7%
STAT = BEGIN BLOCK END;
DO
BLOCK.SYMT := NEW SBL (SYMT,BLOCK.SYMREF);
SEREF := NONE;
END
%8%
IDECL = DECLARE IDENTIFIER PROPERTY;
DO
ID := IDENTIFIER.VALUE;
TYPE := PROPERTY.VALUE;
END
%9%
EXDECL = USE IDENTIFIER;
DO
EXTYPE = FIND (EXTYPE, IDENTIFIER.VALUE, EXDECL.SYMT);
END
END OF PRODUCTIONS
END OF ATTRIBUTE GRAMMAR
% SIMULA classes associated with two nonterminals and a production in the
% generated compiler
NODE CLASS GRNODE 1;
BEGIN COMMENT PROGRAM;
END;
NODE CLASS GRNODE 2;
BEGIN COMMENT BLOCK;
REF (SBL) SYMT;
REF (SYMB) SYMREF;
END;
GRNODE 1 CLASS P 1;
BEGIN COMMENT PROGRAM;
REF (GRNODE 2) BLOCK;
BLOCK :- POP QUA GRNODE 2;
PUSH (GOTO (1), THIS NODE);
DETACH;
BLOCK.SYMT :- NEW SBL (NONE, BLOCK.SYMREF);
CALL (BLOCK);
DETACH;
END;
References
(Received Feb. 7, 1983)
Iteration:
**while** loops, **for** loops, iteration tables
---
CS111 Computer Programming
Department of Computer Science
Wellesley College
---
**What is Iteration?**
Repeated execution of a set of statements
Keep repeating… until the **stopping** condition is reached.

Example: when looping over the list [5, 9, 7, 8], the stopping condition of the **for** loop is that the list is over.
---
**High-level motivation for iteration**
- Display time until no more time left
- Hillary shimmy; Shaq shimmy
- Play until blocks too small to stack
- Keep coding until all test cases passed
---
**How does a for loop work?**
Execution model of a **for** loop
```python
nums = [5, 9, 7, 8]
sumSoFar = 0
for n in nums:
    sumSoFar += n
print sumSoFar
```

(The slide's memory diagram shows sumSoFar = 29 and n = 8 after the loop; the printed value is 29.)
Some **for** loop examples
A **for** loop performs the loop body for each element of a sequence.
```python
word = 'boston'
for i in range(len(word)):
    print(i, word[i])
```
We can also loop directly over the string if we don't need indices.
```python
word = 'boston'
for c in word:
    print(c)
```
More **for** loop examples
```python
nums = [2, -5, 1, 3]
for n in nums:
    print(n * 10)
```
```python
sumSoFar = 0
for n in nums:
    sumSoFar += n
print(sumSoFar)
```
What if we don't know in advance when something will be over?
- Stopping condition of **for** loop: list is over
```python
[5, 9, 7, 8]
```
- Example: repeatedly ask user for input until they say to stop
```
Please enter your name: Ted
Hi, Ted
Please enter your name: Marshall
Hi, Marshall
Please enter your name: Lily
Hi, Lily
Please enter your name: quit
Goodbye
```
Another construct: **while** loops
**while** loops are a fundamental mechanism for expressing iteration.
```python
while continuation_condition:
    statement1
    statement2
    ...
    statementN
```
- **while**: the keyword indicating a while loop
- **continuation_condition**: an obligatory boolean expression denoting whether to iterate through the body of the loop one more time
- **statement1**, **statement2**, ..., **statementN**: the statements forming the body of the loop
**while** loops and user input
```python
name = raw_input('Please enter your name: ')
while (name.lower() != 'quit'):
    print 'Hi,', name
    name = raw_input('Please enter your name: ')
print('Goodbye')
```
Please enter your name: Ted
Hi, Ted
Please enter your name: Marshall
Hi, Marshall
Please enter your name: Lily
Hi, Lily
Please enter your name: quit
Goodbye
**while** loops are not just for user input.
Useful for other problems too.
---
**while** Loop Example: `printHalves`
```python
def printHalves(n):
    '''Prints positive successive halves of n'''
    while n > 0:
        print(n)
        n = n/2
```
```python
In[2]: printHalves(22)
```
What is printed here?
---
A slight variation of `printHalves`:
```python
def printHalves2(n):
    '''Attempts to print positive successive halves of n'''
    while n > 0:
        print(n)
    n = n/2
```
What's the output? `printHalves2(22)`
```python
In[2]: printHalves2(22)
```
---
Why don’t computer scientists ever get out of the shower?
Because the shampoo bottle says:
- Lather
- Rinse
- Repeat
---
Accumulating a result with a \texttt{while} loop
It is common to use a \texttt{while} loop with “accumulators” that accumulate results from processing the elements.
Define a \texttt{sumHalves} function that takes a nonnegative integer and returns the sum of the values printed by \texttt{printHalves}.
\begin{verbatim}
def sumHalves(n):
    sumSoFar = 0
    while n > 0:
        sumSoFar = sumSoFar + n   # or sumSoFar += n
        n = n/2
    return sumSoFar
\end{verbatim}
\begin{verbatim}
In [3]: sumHalves(22)
Out[3]: 41 # 22 + 11 + 5 + 2 + 1
\end{verbatim}
Iteration Tables
An iteration is a step-by-step process characterized by a collection of \texttt{state variables} that determine the next step of the process from the current one. E.g. the state variables of \texttt{sumHalves} are \texttt{n} and \texttt{sumSoFar}.
The execution of an iteration can be summarized by an iteration table, where columns are labeled by state variables and each row represents the values of the state variables at one point in time.
Example: iteration table for \texttt{sumHalves(22)}:
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{step} & \textbf{n} & \textbf{sumSoFar} \\
\hline
0 & 22 & 0 \\
1 & 11 & 22 \\
2 & 5 & 33 \\
3 & 2 & 38 \\
4 & 1 & 40 \\
5 & 0 & 41 \\
\hline
\end{tabular}
\end{center}
Iteration Rules
An iteration is governed by
- initializing the state variables to appropriate values;
- specifying iteration rules for how the next row of the iteration table is determined from the previous one;
- specifying the continuation condition (alternatively, stopping condition)
Iteration rules for \texttt{sumHalves}:
\begin{itemize}
\item next \texttt{sumSoFar} is current \texttt{sumSoFar} plus current \texttt{n}.
\item next \texttt{n} is current \texttt{n} divided by 2.
\end{itemize}
Printing the iteration table in a loop
By adding a print statement to the top of a loop and after the loop, you can print each row of the iteration table.
\begin{verbatim}
def sumHalvesPrint(n):
    sumSoFar = 0
    while n > 0:
        print 'n:', n, ' | sumSoFar:', sumSoFar
        sumSoFar = sumSoFar + n   # or sumSoFar += n
        n = n/2
    print 'n:', n, ' | sumSoFar:', sumSoFar
    return sumSoFar
\end{verbatim}
\begin{verbatim}
In[4]: sumHalvesPrint(22)
\end{verbatim}
\begin{verbatim}
Out[4]: 41
\end{verbatim}
What is the result? Fill in the table.
```
def sumHalves2(n):
    '''Prints positive successive halves of n'''
    sumSoFar = 0
    while n > 0:
        n = n/2
        sumSoFar = sumSoFar + n
    return sumSoFar
```
**sumHalves2(22)**
| step | n  | sumSoFar |
|------|----|----------|
| 0    | 22 | 0        |
| 1    | 11 |          |
| 2    | 5  |          |
| 3    | 2  |          |
| 4    | 1  |          |
| 5    | 0  |          |
```
def sumBetween(lo, hi):
    '''Returns the sum of the integers from lo to hi (inclusive). Assume lo and hi are integers.'''
```
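The slide leaves `sumBetween` as an exercise; one possible while-loop solution, a sketch rather than the official course answer, is consistent with the iteration table for `sumBetween(4,8)` shown further below:

```python
def sumBetween(lo, hi):
    '''Returns the sum of the integers from lo to hi (inclusive).'''
    sumSoFar = 0
    while lo <= hi:       # continuation condition
        sumSoFar += lo    # accumulate the current value
        lo += 1           # move to the next integer
    return sumSoFar
```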
```
def sumList(nums):
    '''Returns the sum of the elements in nums'''
    sumSoFar = 0
    for n in nums:
        sumSoFar += n
    return sumSoFar
```
```
def sumListWhile(nums):
    '''Returns the sum of the elements in nums'''
    sumSoFar = 0
    index = 0
    while index < len(nums):
        n = nums[index]
        sumSoFar += n   # or sumSoFar = sumSoFar + n
        index += 1      # or index = index + 1
    return sumSoFar
```
```
def sumListFor(listOfNums):
    '''Returns the sum of elements in listOfNums'''
    sumSoFar = 0
    for n in listOfNums:
        sumSoFar += n
    return sumSoFar
```
Accumulating a result with a for loop
**sumList** should take any list of numbers and return the sum of the numbers
```
In [ ]: sumList([8,3,10,4,5])
Out[ ]: 30
```
```
In [ ]: sumList([5,10,-20])
Out[ ]: -5
```
```
In [ ]: sumList([])
Out[ ]: 0
```
**sumBetween with while loop**
```
In[6]: sumBetween(4,8)
Out[6]: 30 # 4 + 5 + 6 + 7 + 8
```
| step | lo | hi | sumSoFar |
|------|----|----|----------|
| 0    | 4  | 8  | 0        |
| 1    | 5  | 8  | 4        |
| 2    | 6  | 8  | 9        |
| 3    | 7  | 8  | 15       |
| 4    | 8  | 8  | 22       |
| 5    | 9  | 8  | 30       |
**sumBetween(4,8) returns 30**
**sumBetween(4,4) returns 4**
**sumBetween(4,3) returns 0**
Accumulators with lists of strings
```python
concatAll(['To', 'be', 'or', 'not', 'to', 'be']) # 'To be or not to be'
beatles = ['John', 'Paul', 'George', 'Ringo']
concatAll(beatles) # 'JohnPaulGeorgeRingo'
concatAll([]) # ''
```
What should the accumulator do in this case?
```python
def concatAll(elts):
    '''Returns the string that results from concatenating all elements in elts'''
```
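A possible accumulator solution is sketched below (not the official course answer). It concatenates with no separator, matching the 'JohnPaulGeorgeRingo' example; the 'To be or not to be' example on the slide appears to include spaces, which would need a variant that inserts a separator between elements.

```python
def concatAll(elts):
    '''Returns the string that results from concatenating all elements in elts'''
    resultSoFar = ''          # string accumulator
    for s in elts:
        resultSoFar += s
    return resultSoFar
```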
for loop: countOf
```python
sentence = 'the cat that ate the mouse liked the dog that played with the ball'
sentence.split() # ['the', 'cat', 'that', 'ate', ... 'ball']
```
```python
countOf('the', sentence.split())
countOf('that', sentence.split())
countOf('mouse', sentence.split())
countOf('bunny', sentence.split())
countOf(3, [1, 2, 3, 4, 5, 4, 3, 2, 1])
```
```python
def countOf(val, elts):
    '''Returns the number of times that val appears in elts'''
```
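One possible solution uses a counter as the accumulator (a sketch, not the official answer). For the sentence above it gives 4 for 'the', 2 for 'that', 1 for 'mouse', 0 for 'bunny', and 2 for the call with the number 3.

```python
def countOf(val, elts):
    '''Returns the number of times that val appears in elts'''
    countSoFar = 0
    for e in elts:
        if e == val:
            countSoFar += 1
    return countSoFar
```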
Returning early from a loop
In a function, `return` can be used to exit the loop early (e.g., before it visits all the elements in a list).
```python
def isElementOf(val, elts):
    '''Returns True if val is found in elts; False otherwise'''
    for e in elts:
        if e == val:
            return True   # return (and exit the function)
    return False          # only get here if val is not in elts
```
Premature return done wrong (1)
```python
def isElementOfBroken(val, elts):
    '''Faulty version of returns True if val is found in elts; False otherwise'''
    for e in elts:
        if e == val:
            return True
        return False
```
Always returns after the 1st element without examining the rest of the list.
```python
In [1]: sentence = 'the cat that ate the mouse liked the dog that played with the ball'
In [2]: isElementOf('cat', sentence.split())
Out[2]: True # returns as soon as 'cat' is encountered
In [3]: isElementOf('bunny', sentence.split())
Out[3]: False
```
```python
In [1]: isElementOfBroken(2, [2, 6, 1])
Out[1]: True
In [2]: isElementOfBroken(6, [2, 6, 1])
Out[2]: False
```
Premature return done wrong (2)
```python
def sumHalvesBroken2(n):
    '''Broken version of returns sum of halves of n'''
    sumSoFar = 0
    while n > 0:
        sumSoFar = sumSoFar + n   # or sumSoFar += n
        n = n/2
        return sumSoFar           # wrong indentation!
                                  # exits function after first
                                  # loop iteration. Sometimes we
                                  # want this, but not here!

In [4]: sumHalvesBroken2(22)
Out[4]: 22
```
Example of returning early
```python
containsDigit('The answer is 42')        # True
containsDigit('pi is 3.14159...')        # True
containsDigit('76 trombones')            # True
containsDigit('the cat ate the mouse')   # False
containsDigit('one two three')           # False
```
Use the built-in `isdigit()` string predicate to check if a character is a digit. E.g.
'4'.isdigit() returns True
'h'.isdigit() returns False
```python
def containsDigit(string):
    '''Returns True if the string contains a number'''
```
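A possible early-return solution (a sketch, not the official answer):

```python
def containsDigit(string):
    '''Returns True if the string contains a number'''
    for c in string:
        if c.isdigit():
            return True   # a digit was found; stop looking
    return False          # only reached if no character was a digit
```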
```
areAllPositive

areAllPositive([17, 5, 42, 16, 31])      returns True
areAllPositive([17, 5, -42, 16, 31])     returns False
areAllPositive([-17, 5, -42, -16, 31])   returns False
areAllPositive([])                       returns True

def areAllPositive(listOfNums):
    '''Returns True if all elements of listOfNums are positive'''
```
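A sketch of one possible early-return solution (not the official answer): return False as soon as a non-positive element is seen, and True if the loop finishes, which also covers the empty list.

```python
def areAllPositive(listOfNums):
    '''Returns True if all elements of listOfNums are positive'''
    for n in listOfNums:
        if n <= 0:
            return False
    return True
```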
```
indexOf

indexOf(8, [8,3,6,7,2,4])   returns 0
indexOf(7, [8,3,6,7,2,4])   returns 3
indexOf(5, [8,3,6,7,2,4])   returns -1

def indexOf(val, elts):
    '''Returns the first index in elts at which val appears. If val does not appear in elts, returns -1'''
```
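A possible solution that loops over indices (a sketch, not the official answer):

```python
def indexOf(val, elts):
    '''Returns the first index in elts at which val appears; -1 if val is absent'''
    for i in range(len(elts)):
        if elts[i] == val:
            return i
    return -1
```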
**longestConsonantSubstring**
longestConsonantSubstring('strong') returns 'str'
longestConsonantSubstring('strengths') returns 'ngths'
longestConsonantSubstring('lightning') returns 'ghtn'
longestConsonantSubstring('Program') returns 'Pr'
longestConsonantSubstring('adobe') returns 'd'
```python
def longestConsonantSubstring(s):
    '''Returns the longest substring of consecutive consonants. If more than one such substring has the same length, returns the first to appear in the string.'''
```
Note: This is hard! Draw iteration tables first! What state variables do you need?
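One possible solution, a sketch rather than the official answer, uses two state variables: the current run of consonants and the best run seen so far. It treats any alphabetic character that is not a vowel as a consonant.

```python
def longestConsonantSubstring(s):
    '''Returns the longest substring of consecutive consonants (first one wins ties)'''
    best = ''
    current = ''
    for c in s:
        if c.isalpha() and c.lower() not in 'aeiou':
            current += c                     # extend the current consonant run
            if len(current) > len(best):     # strictly longer, so ties keep the first
                best = current
        else:
            current = ''                     # a vowel or non-letter breaks the run
    return best
```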
**Nested loops**
A **for** loop body can contain a **for** loop.
```
# print the multiplication table from 2 to 5
for i in range(2, 6):
    for j in range(2, 6):
        print i, 'x', j, '=', i*j
```
Inner loop gets executed for each value of i
**Nested Loops**
Here's a picture involving a grid of randomly colored circles with radius = 50 on a 800x600 canvas.
This picture is created using two nested **for** loops and the `Color.randomColor()` function.
What is printed?
```
for letter in ['g', 'p', 'd', 's']:
    for letter2 in ['ib', 'ump']:
        print letter + letter2
```
Variable update order matters
```python
def sumHalvesBroken(n):
    sumSoFar = 0
    while n > 0:
        n = n/2        # updates n too early!
        sumSoFar += n
    return sumSoFar
```
In [3]: sumHalvesBroken(22)
Out[3]: 19
(The filled-in iteration table shown on this slide is the solution to the earlier fill-in-the-table exercise for sumHalves2(22).)
Simultaneous update example:
Greatest Common Divisor algorithm
- The greatest common divisor (gcd) of integers a and b is the largest integer that divides both a and b
- Eg: gcd(84, 60) is 12
- Euclid (300 BC) wrote this algorithm to compute the GCD:
- Given a and b, repeat the following steps until b is 0.
- Let the new value of b be the remainder of dividing a by b
- Let the new value of a be the old value of b
- … this is a perfect opportunity for a while loop.
| step | a  | b  |
|------|----|----|
| 0    | 84 | 60 |
| 1    | 60 | 24 |
| 2    | 24 | 12 |
| 3    | 12 | 0  |
Neither of the following two gcd functions works. Why?
```python
# Assume a >= b > 0
def gcdBroken1(a, b):
    while b != 0:
        a = b
        b = a % b
    return a

# Assume a >= b > 0
def gcdBroken2(a, b):
    while b != 0:
        b = a % b
        a = b
    return a
```
Fixing simultaneous update
```python
# Assume a >= b > 0
def gcdFixed1(a, b):
    while b != 0:
        prevA = a
        prevB = b
        a = prevB
        b = prevA % prevB
    return a

# Assume a >= b > 0
def gcdFixed2(a, b):
    while b != 0:
        prevA = a
        prevB = b
        b = prevA % prevB
        a = prevB
    return a
```
Python's simultaneous assignment is an even more elegant solution!
```python
# Assume a >= b > 0
def gcdFixed3(a, b):
    while b != 0:
        a, b = b, a % b
    return a
```
Pipelined Processor Design: Handling Control Hazards
We have been discussing pipelined design for MIPS processor. Last time we had seen how we can handle data hazards by introducing stalls if necessary and we also saw how we can introduce bypass paths or forwarding paths so that delay can be cut down.
We noticed that in some cases delay may still be required or the stall cycle may still be required and we worked out the logic which is required to be put in the controller to take care of this situation. So you need to detect when hazards are occurring; you need to basically find out the dependency and then introduce appropriate control signals. Today we are going to look at the issue of handling control hazards.
(Refer Slide Time: 00:02:04)
So we will take the design which we have discussed so far and modify it to handle the control hazards, then we will see how we can improve performance in view of control hazards or branch hazards and a couple of techniques we will look at briefly and not go into too much of detail.
We would look at the possibility of eliminating branches altogether in some special cases or trying to speed up the execution of branches or introduce something what is called prediction of branch. So you do branch prediction and try to take action accordingly.
Finally I will make a brief mention about dynamic scheduling which is used for instruction level parallel architecture; that is another way which is used to keep the pipeline full in view of branch hazards and data hazards.
(Refer Slide Time: 00:03:07)
So just to recollect what was the effect of data hazards on execution of instructions. We had seen that when data hazards are there you need to introduce null instruction or nop instructions or bubbles in between instructions.
So, for example, looking at a sequence of instructions in the stage wise view where you are showing various stages and vertically you are looking at various clock cycles or time instants you would notice that nops are getting introduced when there are two instructions which are dependent on each other.
The same thing is seen in another view where you are looking at instruction by instruction and here are the two nop instructions because the instruction which is dependent is getting delayed and this delay is passed onto subsequent instructions also so everything behind this instruction in the pipeline gets delayed.
So, to improve things we had identified where we can have forwarding paths. So this diagram, for example, shows that you can have paths going from output of ALU through this inter stage register back to ALU or you could have output of the memory stage after this register and possibly after multiplexer going back to memory and also back to ALU.
So now you need to derive control signals which guide these multiplexers. So you need to detect which source address is matching with the destination address in the two instructions and accordingly enable these paths.
(Refer Slide Time: 00:05:02)
Now, coming back to the control hazards we now imagine a branch instruction like this (Refer Slide Time: 5:05) and these are the stages it is going through and here we take some decision and either continue sequentially or branch to an instruction label L.
So we assume that the processor keeps getting instructions in sequence; at some point we come to know that there is a branch and the condition has become true, so the address changes and the sequence is broken. You can fetch the target instruction here, but now you realize that the two instructions which had already entered the pipeline are not the ones you intended, and therefore they need to be flushed out; so you need to actually generate a control signal which will do this.
Here is the datapath and controller which we have designed so far. One thing we need to notice here is that, the way we have designed it, the branch will actually take place in the fourth cycle, unlike what we expected; we thought that the condition is tested in the ALU and the next instruction can be fetched in the following cycle. But what is happening in this datapath is that we are looking at the target address after this register, and we are also looking at the condition which is tested by the ALU after this register, which means it is only in the memory stage that we are looking at the outcome of the branch, and here the control signal is being generated for feeding this multiplexer.
So basically if this is how it is done then there will be one more additional delay. So this instruction actually can start only... the instruction will be fetched and be available from this cycle onwards. Because at the end of fourth cycle you are latching the new address into program counter and therefore the fetch cycle shifts to this. So we will try to improve this by tapping this output the new address and the control signal before the register (Refer Slide Time: 7:34).
So this is the change which is shown here.
We are not making these pass through this register (Refer Slide Time: 7:42). The consequences of this are that the delay calculation which ultimately determines the clock period would undergo a change. Now you need to account for the multiplexer delay of this which we have ignored so far along with this adder and also with this ALU to consider the path delays.
So now if you look at the paths going from this register to some registers in this case PC so you need to account for this ALU, this AND gate, (Refer Slide Time: 8:28) this multiplexer and then to PC so this is one path. Similarly, the path you need to consider is that going through S2, adder, then AND gate and multiplexer and so on. So the delays get redistributed and whatever is the influence of this change on the clock has to be borne in mind. The objective here is that we should be able to decide at the end of third cycle what the next instruction is so this is the change which will enable that.
Now, having done that, we need to flush the instructions which have entered the pipeline in anticipation but are not required. When we are loading the target address into PC, that is the signal which makes the branch effective, and we need to use the same signal to flush out the instructions which are in these two stages. The instruction which would have normally gone from this stage to this stage, and the instruction which would have gone from this stage to this stage (Refer Slide Time: 9:50), would be flushed if you make the contents of these two registers 0. Effectively you have turned them into nop instructions; what we had realized earlier is that if you make everything in such a register 0 it acts like a nop instruction, which is what we were doing when we were introducing nops in the case of data hazards.

So a somewhat similar action is required: we need to make the contents of this register and this register 0 when the signal is active. Effectively we have flushed away these two instructions and do not allow them to proceed forward.
So this is how we can handle the branches in the most simple manner. In general, now we realize that there are two things which are happening in a branch instruction in general that there is a condition which is being evaluated and there is an address which is being evaluated so two things would typically happen which would happen in some particular cycle.
Now suppose there were a deep pipeline: somewhere the condition evaluation gets completed, somewhere the target address generation gets completed. So what is the earliest you can have the next instruction if you are going inline, that is, not branching, and what is the earliest you can have the next instruction if you are going to the target, that is, the branch is taking place? The particular cycles in which these two activities happen determine how early you can place the next instruction. So whatever technique you are applying, if you are predicting one of the two alternatives and going that way, we need to keep these delays in mind. To make things better, this is what we need to cut down.
In our example here this was happening. We started with this (Refer Slide Time: 12:05) where basically the computation of next address and condition evaluation were getting completed in the third cycle and in fourth cycle we were using it. So if you can reduce this for example we finished this in the third cycle itself (Refer Slide Time: 00:12:20) and took decision there so thereby we reduced the delay. If we can reduce it further as we will see later on we can make things even better. So this is what has to be kept in mind.
Another factor is that there are architectures where condition evaluation and branching are split into two instructions. So you would have for example a subtract instruction and the result of that will be used to set some flags. The branch instruction will simply test the flag; it will not actually do the comparison it will only test the flag and carry out branch. So condition evaluation there is trivial. In fact the real evaluation takes place in an instruction which is earlier. So it could be immediately preceding instruction or it could be an instruction which is occurring one or two instructions earlier. So we need to see where exactly in the previously instruction the condition is getting evaluated.
Now, there are several ways in which you can improve the performance in view of branch hazards.
There are cases when you can eliminate branches altogether and therefore get rid of the problem. You can speed up branch execution this is what I was discussing in the previous slide that you can take these decisions earlier or you can move these events as early as possible or increase the gap between decision making and actual branching so we will see that in some more detail.
The third technique is branch prediction, where you do something in anticipation in such a manner that most of the time you are correct; when you are correct you save a lot of time, and in the rare event of getting it wrong you lose time, but in a probabilistic sense, on average, you have improved the situation. This branch prediction can be done statically or dynamically; I will distinguish between how these two are done. Lastly, particularly when you are doing dynamic prediction you are looking at what happened in the past, and while doing that you can also remember the target address and thereby speed up not only the decision but also the calculation of the target address.
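To make the idea of dynamic prediction concrete, here is a small Python sketch of a classic 2-bit saturating-counter predictor; it is a generic illustration, not part of the MIPS design developed in this lecture, and the program counter value used below is arbitrary.

```python
class TwoBitPredictor:
    """Per-branch 2-bit saturating counter: states 0,1 predict not taken; 2,3 predict taken."""
    def __init__(self):
        self.counters = {}                        # branch PC -> state 0..3

    def predict(self, pc):
        return self.counters.get(pc, 1) >= 2      # unseen branches start as weakly not taken

    def update(self, pc, taken):
        c = self.counters.get(pc, 1)
        self.counters[pc] = min(3, c + 1) if taken else max(0, c - 1)

# A loop-closing branch: taken ten times, then falls through once.
predictor = TwoBitPredictor()
hits = 0
outcomes = [True] * 10 + [False]
for taken in outcomes:
    if predictor.predict(0x400100) == taken:
        hits += 1
    predictor.update(0x400100, taken)
print(hits, "correct out of", len(outcomes))      # 9 out of 11 for this pattern
```

Because the counter needs two wrong outcomes in a row to flip its prediction, a single loop exit costs only one misprediction the next time around.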
Here is the small illustration of branch elimination.
Suppose you are testing some condition, and if the condition is true you want to do one step, one instruction, and if the condition is false you want to skip it; so it is a very small if or conditional structure, with one or two instructions, not too many. These can be replaced by what is called a conditional or predicated instruction. Some processors have the feature that with almost every instruction you can attach a condition: there is a field which indicates that the instruction is to be performed only under a certain condition. So essentially you have fused the two; you have attached the condition to the instruction itself, and if the condition is true the instruction gets done, otherwise it acts like a nop.

So suppose there was an explicit conditional branch here (this is not MIPS language). It is testing a condition; this specifies which condition is being tested, and the condition being tested is the zero flag, which indicates whether the result of the previous instruction was 0. If the z flag is set then you branch to the current instruction plus 2, which means you skip the ADD. If the ADD instruction had a provision for specifying a condition, you could put the two together. So here we are saying: do this if the z flag is not set (NZ means nonzero). We are removing the explicit branch instruction and putting that condition into the add instruction itself.

Now you might wonder that, whatever you do, the condition has to be tested somewhere or the other. Yes, that is true, but now condition testing is part of the ADD instruction, so you can start preparing to do the addition and, depending upon the condition, you may or may not store the result. So if you look at this as a whole you would have saved a few cycles.
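As a toy illustration of why predication helps, compare a branchy version with a predicated one in Python; this is only an analogy for the hardware behaviour described above, not actual processor code.

```python
# Branchy version: the 'if' corresponds to a conditional branch that may flush the pipeline.
def add_if_nonzero_branchy(acc, x, z_flag):
    if not z_flag:
        acc = acc + x
    return acc

# Predicated version: the addition is always carried out, and the predicate only
# decides whether the result is committed, so no branch is needed.
def add_if_nonzero_predicated(acc, x, z_flag):
    candidate = acc + x
    return candidate if not z_flag else acc
```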
How do you speed up execution of branch?
You can speed up both the target address calculation and the condition evaluation. For calculating the target address early, you can assume that every instruction is a branch instruction and generate the target address anyway. Normally you might decode the instruction, find out whether it is a branch instruction, and only then do the target address calculation, because only then do you know it is a branch; instead, we can calculate the target address in anticipation. Actually, in the design we have done, this is already happening.
If you recall, in the second cycle when you are fetching the registers A and B we are also calculating the target address and we keep it in a register and use it if necessary. So this technique we have not mentioned explicitly but we have already used this. Situation could be more complex if virtual memory was involved. So address calculation also involves page table look up and doing that translation from virtual address to real address. So address calculation does not simply involve doing an addition but it may involve something much more.
So, starting early in anticipation like this is definitely advantageous, and you can also omit this page translation (Refer Slide Time: 19:16) if the target address is in the same page as the current instruction; all these checks can be performed early, and if the instruction turns out not to be a branch instruction you can discard the result, so no harm is done.
Now secondly, condition evaluation. Again, try to move this as early as possible in the sequence of cycles, and we will see an example of that. So I am showing a part of the design; I have just stretched things horizontally and the last stages have been thrown out just to make space.
What we have done is, instead of checking for equality in the ALU, we have introduced a comparator in the second stage, in the second cycle itself, and also moved up the target address calculation which was actually happening in the third cycle. Although in the multi-cycle design we had described earlier we had done this in the second cycle because the ALU was free at that time, our motivation there for doing it in the second cycle was to utilize the ALU while it is free. But now we know that it is advantageous if we do all this as early as possible. So I am moving that adder here and introducing a comparator here (Refer Slide Time: 20:55).
Now what is the implication of this in terms of clock cycle?
It is that this path may not be making so much of difference because whether it is here or there it is just coming in series with AND gate and multiplexer these do not cause any delay but the main effect would be that the delay of register file and the delay of this comparator they are coming in series within a clock cycle so it will definitely have some adverse effect on the clock cycle.
Now remember that testing for equality is much simpler as compared to testing for less than or greater than. In this case you need to do a bit by bit comparison and then take the AND of all those; you do not need any carry to be propagated. So it may be possible to afford an equality or inequality comparison here; less than or greater than comparison may still be done within the fast ALU which is sitting in the third cycle, so at least beq and bne kind of instructions can be speeded up by doing this. So we are putting a simpler comparator here which will add to the delay but only marginally.
Now with this done we are ready to get the target instruction in the third cycle itself so we need to only flush this one we do not need to flush instruction which is going into this. So this is a design with slight improvement where we are losing one cycle basically when branch actually occurs instead of two cycles.
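As a small aside (my sketch, not from the lecture), the hardware intuition can be mirrored in C: equality is a bitwise XOR followed by a zero test, with no carry to ripple across the word, whereas an ordered comparison is essentially a subtraction and therefore needs a full carry/borrow chain.

#include <stdbool.h>
#include <stdint.h>

/* Equality needs only a bitwise XOR and a zero test (a wide NOR in hardware);
   no carry has to ripple across the word. */
bool equal32(uint32_t a, uint32_t b) {
    return (a ^ b) == 0;
}

/* A signed less-than, by contrast, is essentially a subtraction, so the
   hardware has to propagate a carry/borrow through all 32 bits. */
bool less_than32(int32_t a, int32_t b) {
    return (int64_t)a - (int64_t)b < 0;
}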
Now, another way of speeding up is to increase the gap between condition code evaluation and actual branching.
(Refer Slide Time: 00:23:03)
Now imagine that you had separate instructions to evaluate the condition, and the branch instruction was only looking at some flags; a flag can be easily looked at within a cycle as soon as you have the instruction. So what can be done is that you increase the gap between the instruction which is actually setting the condition code and the instruction which is testing it, so that the situation is somewhat similar to the data hazard situation where one instruction is dependent upon the previous one. Here one instruction is setting cc, the condition code, and another instruction is testing it; if you increase the gap between these, that is, have other useful instructions in between, then the waiting is cut down. So, as soon as the branch instruction can calculate its target address you can branch; condition testing is done early.
The other approach was to do delayed branch. That means do the target address calculation and assume that branch has to be effective whether it is true or false after a few cycles. So let us just say that you want to delay by one instruction. That means after a branch instruction there will always be an unconditional instruction which has to be done whether condition is true or false. So the role of compiler or the code generator is that it tries to find suitable independent instruction which has to be done under both conditions and keep it after the branch instruction and the role of the hardware here is that you take a decision about branching, well, you take a decision but actually make branch effective one cycle later.
Next is the technique of branch prediction where effectively we try to treat branches either as unconditional branches that means do not worry about the condition and try to branch or as no operation that means do not branch but continue sequentially and when you know whether you have gone wrong or if necessary you can undo.
Now this is the basic idea. Question is how do you do the prediction? There are three ways of doing it: one is fixed prediction that you always guess inline so effectively this is what we were trying to do; we allow the pipeline to get subsequent instructions into the pipeline and if you find that branch condition is true and branch is to be taken then you undo. So this is called fixed prediction.
Second is static prediction. Here you make choice between going inline or going for the target by using some criteria and the criteria is fixed it does not depend upon situation which is created at the run time you should know ahead of time whether for this particular branch you need to predict inline or you need to predict the target. So the basis of such prediction could be opcode or the target address; you might say that some kind of branches are more likely to be taken and some kind of branches are less likely to be taken.
Or it could be based on the target address. For example, if you are branching to an address which is earlier, that is a backward branch, which typically indicates the end of a loop, then you may predict that the branch is more likely to be taken, because loops are often iterated several times. Otherwise, if it is a forward jump, you might say that it is either equally likely or less likely to be taken; it is an exception condition, normally you go through, and the condition checking is there to take care of situations which occur less often. So depending upon any such feature you can decide.
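A minimal C sketch of this static backward-taken/forward-not-taken rule (my illustration, with hypothetical addresses):

#include <stdbool.h>
#include <stdint.h>

/* Static "backward taken, forward not taken" rule: a branch whose target is
   at a lower address is assumed to close a loop and is predicted taken. */
bool predict_taken_static(uint32_t branch_pc, uint32_t target_pc) {
    return target_pc < branch_pc;   /* backward branch -> predict taken */
}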
The dynamic branch prediction tries to take into account what has happened while the program has been executing. So there you keep a record of what happened in the past: if there was a branch instruction which occurs several times inside a loop, then you see what happened on the previous occurrences of it, whether the condition was true or false. A simple-minded approach there would be to think that the pattern which was there last time is going to repeat, so you could just look at the last outcome and say that this is what is going to happen next time; a more sophisticated decision would look at the last several occurrences and decide based on that.
(Refer Slide Time: 00:28:39)
When you look at branch prediction and delay slots together, you can make the design a little more intelligent by giving the programmer the flexibility of selectively bypassing the delay slots; that is called delayed branch with nullification or annulment. So the delay slot is not something which is fixed by the hardware; the programmer has the choice to use it or not to use it. With the branch instruction you could have another field which indicates whether the delay slot is enabled or not. It would depend upon how you are predicting: depending upon your prediction you may or may not exercise this option of using the delay slot.
A simple branch prediction which is based on just looking at the last occurrence of the same branch may sometimes work poorly, whereas some very straightforward logic can do better. So just look at a simple loop like this: you have a few instructions and at the end you have an instruction which loops back.
So now suppose this itself is in a larger loop, an outer loop, where every time you come here you do a few iterations and then come out; then you will come here again, do a few iterations and then come out. Now suppose your strategy is to predict based on the previous outcome: if last time the condition had become true you think it will be true this time, and vice versa. Then with every instance of the loop you will have two mispredictions. When you enter the loop and come to this branch for the first time in some loop instance, you will find that last time the condition was false, so you will think it is false again; but now it is the beginning of the loop, so the loop will iterate and you go wrong at this point. And when the condition finally becomes false you might still think the loop is going to continue, and you will make a mistake again. So twice you will make a mistake: when you are entering the loop and when you are exiting the loop.
But on the other hand, if you had chosen a static branch policy here saying that always you predict that loop will be taken or the branch will be taken you will make one mistake per loop. So apparently the dynamic prediction strategy does worse than the static prediction strategy in this case. To make dynamic branch prediction more effective we need to modify our dynamic prediction strategy and one possible scheme which is commonly used is shown here.
So instead of just remembering what happened in the last branch you try to remember little more. Here I am showing a scheme where you actually imagine that there is a state machine which can be in one of the four states and these states can be represented by 2 bits. So instead of remembering 1-bit information you are remembering effectively 2 bits of information. What this machine remembers is some summary of last several outcomes. What we are trying to do here is that you do not change your decision just by looking at one change. So, for example, suppose you are in state 0 and you are predicting that branch is taken; suppose branch keeps on getting taken you remain in that state and you keep on predicting that branch is taken. At some point branch does not get taken so \( N \) means branch is not taken, \( T \) means branch is taken.
Now these arcs (Refer Slide Time: 33:10) indicate how you transition between states; the label on the arc indicates what the actual outcome was, and based on the actual outcome you go to some next state. While you are in states 0 and 1 you predict that the branch is taken; while you are in states 2 and 3 you predict that the branch is not taken. So the situation I am trying to depict here is this: when you are comfortably in this state, the branch is continuously being taken and you are predicting that it is taken; if once the condition becomes false and you do not take the branch, you are still in a state where you will continue to predict that the branch is taken, unless you get another N, and only then do you go to a different state.
So in 1 also you will predict that branch is taken and if it turns out to be true you get back to 0. If branch is not taken then you go to state 3 where you now start predicting that branch is not taken.
So the idea here is that in a situation like what I just depicted, with a single loop, you will actually continue in state 0 or 1; you will not come to states 2 and 3. So this is a general mechanism which can be used for dynamic branch prediction, and you will avoid making double mistakes when there is a simple loop.
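Here is a minimal C sketch of such a 2-bit predictor (my illustration). I use the common saturating-counter encoding, where states 0 and 1 predict taken and states 2 and 3 predict not taken; the exact arcs on the slide may differ slightly, but the key property is the same: two consecutive wrong outcomes are needed before the prediction flips.

#include <stdbool.h>

/* Two-bit predictor: states 0,1 predict taken; states 2,3 predict not taken.
   Initialize state to 0 ("strongly taken") or any state you prefer. */
typedef struct { int state; } two_bit_predictor;   /* state in 0..3 */

bool predict(const two_bit_predictor *p) {
    return p->state <= 1;               /* 0 or 1 -> predict taken */
}

void update(two_bit_predictor *p, bool taken) {
    if (taken) {
        if (p->state > 0) p->state--;   /* drift toward "strongly taken" (0) */
    } else {
        if (p->state < 3) p->state++;   /* drift toward "strongly not taken" (3) */
    }
}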
Lastly we are trying not only to know the decision early either as anticipation or by actual computation; we also want target address to be available to us as early as possible.
(Refer Slide Time: 00:34:57)
Branch Target Capture:
- Branch Target Buffer (BTB)
- Target Instruction Buffer (TIB)
So if you are keeping the history; if you are looking at what happened in the past we can also recall what was the target address in the previous occurrence of the same branch and if that is available in a buffer we can simply pick up the address from there rather than calculating. So, actually speaking the address calculation will still occur just to ensure that what you have picked up from the buffer is same as what you actually get and if the two are same it is fine otherwise you will try to undo what was done in anticipation.
So here the assumption is that the address is not going to change from one occurrence of the branch to the next occurrence of same branch. There are situations where this may not hold. For example, if you take JR instruction, the branch address is coming from a register and you cannot be sure that what comes next time will be same. But if you take a jump instruction with a constant address or a beq type of instruction where the address is obtained by adding a constant to the program counter so none of these two things are changing and therefore the address next time is bound to be the same. So keeping in mind that JR will occur much less often than beq, bne and J instructions this scheme will work.
So here is the picture (Refer Slide Time: 36:35) of how this buffer could be organized; this is one way, there are many different organizations. It could be organized as an associative or content addressable memory where you look at the current instruction address, and each word here has three fields: one field carries the instruction address; the second field carries prediction statistics, like those 2 bits of the finite state machine; and the third carries the target address. So, given the address of the current instruction, which is the branch instruction, you try to look it up in this table; looking up in this table means that in parallel it will search and try to match that address with all the addresses in the first field, and if any match occurs you pick up the prediction statistics and you pick up the target address.
Again here there is a variation possible: you may store the target address or you may store the target instruction directly. So the instruction to which you are branching can be directly fetched from there; you go a step further here. The only thing you need to keep in mind is that if you have the target instruction here, the instruction following that is not available here, so for that you need to calculate the address, but it is assumed that it will be calculated in due course. So, in due course of time the branch instruction carries out all that it has to do; all these things we are doing in anticipation, and we of course have to make sure that what we anticipated or what we predicted is really correct. But it allows us a way of doing it early most of the time.
Typically, for example, probability of changing of target could be less than 5 percent and therefore at least 95 percent of the time you are doing it fast and you are doing it correctly.
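A minimal C sketch of such a buffer (my illustration, not the lecture's design): each entry keeps the branch instruction address as a tag, the 2-bit prediction statistics, and the target seen on the previous occurrence. The lecture describes a fully associative lookup; for brevity this sketch uses a direct-mapped table, which is a simpler common variant, and the size is hypothetical.

#include <stdbool.h>
#include <stdint.h>

#define BTB_ENTRIES 256   /* hypothetical size; must be a power of two */

typedef struct {
    bool     valid;
    uint32_t branch_pc;   /* address of the branch instruction (tag) */
    uint8_t  state;       /* 2-bit prediction statistics, 0..3 */
    uint32_t target;      /* target address seen on the previous occurrence */
} btb_entry;

static btb_entry btb[BTB_ENTRIES];

/* Look up the current fetch address; on a hit, return the remembered target
   and the taken/not-taken prediction so fetch can be redirected early. */
bool btb_lookup(uint32_t pc, uint32_t *predicted_target, bool *predict_taken) {
    btb_entry *e = &btb[(pc >> 2) & (BTB_ENTRIES - 1)];
    if (e->valid && e->branch_pc == pc) {
        *predicted_target = e->target;
        *predict_taken    = (e->state <= 1);
        return true;
    }
    return false;   /* miss: fall back to sequential fetch */
}

/* After the branch resolves, record the actual outcome and target. */
void btb_update(uint32_t pc, bool taken, uint32_t actual_target) {
    btb_entry *e = &btb[(pc >> 2) & (BTB_ENTRIES - 1)];
    e->valid = true;
    e->branch_pc = pc;
    e->target = actual_target;
    if (taken)  { if (e->state > 0) e->state--; }
    else        { if (e->state < 3) e->state++; }
}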
Now, so far we assumed that we look at repeated occurrences of same branch. So you have branch somewhere in the program and because of loop you are coming to this again and again. Sometimes there is some coupling or some correlation between various branches.
(Refer Slide Time: 00:38:51)
For example, look at this sequence on the left: you are testing some condition here, you are testing some other condition there, and then you are looking at something which depends on both of these. So suppose z, which is being tested here, was x AND y; now, once you have gone through this and through this, and the values of x and y have not changed in the meantime, then it is very easy to predict what z may be, because you have already evaluated x and y; if both are true, for example, then you can predict that the branch will be taken here.
So B3 can be predicted with 100 percent accuracy on the basis of outcomes of B1 and B2. So what this means is that, this is a very simple case but in general what this points out to is that there could be some correlation between various branches. So, if you are looking at not just the history of this branch but global history that means history of branches which have recently occurred in time there may not be recent occurrences of just the same branch but other branches.
For example, if you encode the outcome of this branch by 0 or 1, and the outcome of this branch by 0 or 1 (Refer Slide Time: 40:24), you look at the string of 1s and 0s which represents the last few branches, and looking at that pattern it may be possible to predict with much more accuracy the outcome of the branch you currently have at hand. So, by going for global history you can improve your prediction. Of course, the more of these things you do, the more cost you incur somewhere; you are putting in more and more control hardware to carry out these things and also more hardware to undo the effect of wrong decisions. So, finally, one very powerful method of trying to avoid stalls, whether they are due to data hazards or branch hazards, is to do what is called dynamic scheduling.
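Before moving on to dynamic scheduling, here is a minimal C sketch (my illustration) of one common way to use such a global history: XOR the branch address with the recent outcome string to index a table of 2-bit counters, in the style of a gshare predictor. The sizes and the exact indexing are assumptions, not necessarily the scheme on the slide.

#include <stdbool.h>
#include <stdint.h>

#define HIST_BITS 8
#define PHT_SIZE  (1u << HIST_BITS)

static uint8_t  pht[PHT_SIZE];      /* 2-bit counters, 0..3; <= 1 means "taken" */
static uint32_t global_history;     /* last HIST_BITS outcomes, 1 = taken */

static uint32_t pht_index(uint32_t pc) {
    /* XOR the branch address with the global outcome string (gshare style),
       so correlated branches such as B1/B2/B3 share useful history. */
    return ((pc >> 2) ^ global_history) & (PHT_SIZE - 1);
}

bool gshare_predict(uint32_t pc) {
    return pht[pht_index(pc)] <= 1;
}

void gshare_update(uint32_t pc, bool taken) {
    uint8_t *c = &pht[pht_index(pc)];
    if (taken)  { if (*c > 0) (*c)--; }
    else        { if (*c < 3) (*c)++; }
    global_history = ((global_history << 1) | (taken ? 1u : 0u)) & (PHT_SIZE - 1);
}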
(Refer Slide Time: 00:41:20)
Instead of putting the instructions in the pipeline in the order in which they occur, you try to analyze the instructions and push into the pipeline those instructions which can go through without causing any stalls; that is dynamic scheduling. What we were doing, for example, when doing data forwarding was to check the dependency between instructions and take the correct action.
On the other hand, a similar mechanism could be built in hardware: if one instruction is producing a result which is to be used by the next instruction, the next instruction could be pushed into the pipeline a little later, and you can put something else in the pipeline in the meantime. Some of these things can be done by the compiler also, but the compiler does not have a complete picture of what is happening dynamically. So dynamic pipelining is an expensive thing; you need a lot of extra hardware, but it tries to find instructions which can keep the pipeline as busy as possible. If necessary it can change the order of instructions, that is, out-of-order execution, and it can nicely support speculative execution and dynamic branch prediction, the kind of stuff we have been talking of. This also forms the basis of what is called superscalar architecture.
In a superscalar architecture you try to fetch, decode and execute several instructions at the same time, and the pipeline there has to be necessarily a dynamic pipeline where you look at a window of instructions in your stream of instructions and pick up something which can be pushed into the pipeline. So the same idea of a dynamic pipeline is actually getting extended to multiple instructions, and this is called instruction level parallelism. You are not talking of parallelism in terms of multiple processors doing multiple instructions; rather, the same processor is capable of fetching, decoding and executing several instructions at a time. The key here is the hardware which looks at a set of instructions and finds out which instructions can be initiated. Of course, the hardware which executes these instructions in parallel also has to make sure that the results are obtained consistently.
Coming to ILP or Instruction Level Parallelism there is a different approach to this also that is we rely entirely on compiler to identify what instructions can be done together in parallel and do not leave this worry to the hardware.
(Refer Slide Time: 00:44:06)
Here each instruction could possibly be carrying multiple operations. It is like a long instruction word where you have coded multiple instructions and put them together; that is where the term VLIW, Very Long Instruction Word, comes into the picture. So whether we are having a superscalar architecture with dynamic scheduling or a compiler-driven VLIW approach, the basic thing is that you have multiple functional units, multiple ALUs which can handle multiple instructions; that is the basic requirement. And all of these have to access a register file, so the register file also must support multiple read and write ports so that all of them can actually operate in parallel.
Between VLIW and superscalar the difference will be the way instructions are fetched and pushed in the pipeline. In case of VLIW we will expect that compiler forms long instructions carrying multiple operations and the hardware will simply take them one by one and execute them. On the other hand, superscalar architecture will have a complicated decode and issue unit which will look at many instructions which are fetched simultaneously; pick up the right one out of these and then assign to different functional units to do them in parallel.
(Refer Slide Time: 46:14)
Here we are showing that multiple instructions are there, but in terms of the stream of instructions they follow each other. Each instruction, as you see in the program, is like a scalar instruction, but in the case of VLIW each instruction is a special instruction which has many operation fields.
So, if you look at the timings of these two alternative architectures versus the timing of a simple pipeline, this is how it will look. The top picture shows a four stage pipeline; let us say the four stages are: instruction fetch, decode, execute and write back. Then in the ideal case you have instructions which overlap in this manner, so at any given time you have up to four instructions in flight.
For a superscalar, suppose again in the ideal case that the degree of parallelism is 3; then in every cycle it is fetching three instructions, decoding three instructions, executing three instructions, and doing write back for three instructions. A VLIW, on the other hand, will look at these as a single instruction being fetched and a single instruction being decoded, but each instruction will actually carry out multiple operations; so on the same scale it does three execute operations. This is how one could place these three architectures in a common perspective.
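Under the usual ideal assumptions (no stalls, the pipeline always full), the cycle counts can be compared with a rough formula: a k-stage scalar pipeline takes about k + n - 1 cycles for n instructions, while an m-wide superscalar or an m-operation VLIW takes about k + ceil(n/m) - 1. A small C sketch (my illustration, with hypothetical numbers):

#include <stdio.h>

/* Ideal-case cycle counts (no stalls, pipeline always full). */
static unsigned scalar_cycles(unsigned n, unsigned k) { return k + n - 1; }
static unsigned wide_cycles(unsigned n, unsigned k, unsigned m) {
    return k + (n + m - 1) / m - 1;   /* m instructions (or operations) per cycle */
}

int main(void) {
    unsigned n = 120, k = 4, m = 3;   /* hypothetical workload and machine */
    printf("scalar 4-stage pipeline : %u cycles\n", scalar_cycles(n, k));   /* 123 */
    printf("3-way superscalar/VLIW  : %u cycles\n", wide_cycles(n, k, m));  /* 43  */
    return 0;
}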
Now, most of the modern processors are actually superscalar processors; whether you take the Pentium or PowerPC or IBM's POWER4 or PA-RISC and so on, all these high performance desktop computing machines are superscalar processors. VLIWs are used only in specialized embedded applications.
So the question is: why are superscalars popular for general purpose computing?
The main reason there is binary code compatibility. If you take, for example, Intel's 486 processor and then the Pentium, a superscalar version of the same thing, the code compatibility exists; the code which was available for older machines can be made to work on the new machines without any change. Code can be taken directly from the old machine and run on the new machine in the same series as long as the instruction set is the same. The difference comes only in the hardware.
As far as programmer or user is concerned it is just another machine executing the same code faster; architecturally it is different fundamentally whereas if you were to achieve speed through VLIW technique you would need to regenerate the code you have to have a specialized compiler which can detect parallelism at instruction level, pack the instructions appropriately and then you can run the code. So there could be source level compatibility but no object level compatibility. Actually this code compatibility from a commercial point of view is a very very major issue.
So these VLIW processors require very specialized compilers. It is almost impossible to code them by hand, whereas if you can code a normal scalar machine by hand you can code a superscalar, and it does not require anything extra. Of course, a compiler which is designed for a superscalar processor would try to take some features into account and produce better code.
If you know that the machine is superscalar; how many functional units it has in parallel and how it will be better to keep the pipeline busy the compiler can produce better code but even the earlier code which is not generated specially for superscalar will still run correctly. Sometimes code density in VLIW can be very poor. You may not find many instructions to be packed together and you have to fill them with nops.
In terms of area or the cost superscalars are more expensive. In terms of performance it is possible to achieve a higher degree of performance in VLIW technique provided you are able to have a good compiler. Finally before I conclude let me say few words about exception handling.
We discussed exception handling in the case of the multi-cycle design; how is it placed in the case of the pipelined design? In fact, in a pipelined design exception handling becomes more complex. The main problem is that, since there are many instructions in processing, you could have many exceptions generated at the same time. In the same cycle you may find that one instruction is showing overflow and another instruction is showing a page fault or an illegal opcode. And knowing that different types of exceptions get detected in different cycles, it is also possible that an instruction which comes later may show an exception earlier.
So, for example, let us say there was an instruction which results in overflow; the overflow will be detected only when you have done the arithmetic operation, whereas another instruction which is coming behind it has a wrong opcode, so you will come to know that immediately. So it may happen that you find one exception earlier which was actually supposed to occur later in the instruction sequence. This makes things very difficult, and you have the concept of precise versus imprecise interrupts. A precise interrupt means that you detect interrupts or exceptions in the same order in which the instructions occur. Somehow you have to time your detections so that the order in which you detect is not inconsistent with the order in which the instructions occur.
(Refer Slide Time: 00:53:08)
Some machines insist on precise interrupts; some do not worry about it and allow imprecise interrupts to happen. The saving of status is naturally very complex here, because if you need to handle the exception, come back and resume the instruction, you need to know how far the instructions were executed, and with many instructions in flight, each having progressed to some stage, you need to remember all that has happened so that you can continue.
Sometimes what you do is that instructions which are partially executed you simply flush; rather than trying to remember how far they have gone, you flush them out and restart them all over again when you come back. So there are a lot of possibilities, and we will leave this at this point, realizing that there are many complexities involved in exception handling when there is pipelining.
(Refer Slide Time: 00:54:11)
Finally, to summarize, we looked at the stalls which occur due to branch hazards and we saw how we can flush the instructions which may come wrongly into the pipeline. We looked at several techniques to improve branch performance including branch elimination, branch speed up, branch prediction in a static or dynamic manner, and dynamic scheduling. Dynamic scheduling led us to superscalar and VLIW architectures, which are basically instruction level parallel architectures and try to improve performance beyond a CPI of 1. So pipelining ideally tries to make the CPI 1, but in the real case it will be slightly worse than 1. To cross this CPI 1 barrier you need to exploit instruction level parallelism, either in a VLIW manner or in a superscalar manner. I will stop at that, thank you.
A method for generating a representation of multimedia content by first segmenting the multimedia content spatially and temporally to extract objects. Feature extraction is applied to the objects to produce semantic and syntactic attributes, relations, and a containment set of content entities. The content entities are coded to produce directed acyclic graphs of the content entities, where each directed acyclic graph represents a particular interpretation of the multimedia content. Attributes of each content entity are measured and the measured attributes are assigned to each corresponding content entity in the directed acyclic graphs to rank order the multimedia content.
[FIG. 1a (Prior Art): Audio-Visual DS containing a Syntactic DS, a Syntactic/Semantic Relation Graph DS, and a Semantic DS]
[FIG. 1b (Prior Art): Syntactic DS containing a Segment DS, a Segment/Region Relation Graph DS, and a Region DS]
[FIG. 1c (Prior Art): Semantic DS containing an Event DS, an Event/Object Relation Graph DS, and an Object DS]
[Figure residue: "Commentary" entity with Attributes/Properties; Speaker: Bob Costas; Text: "Clavin Schtaldi winds up with a fast ball"]
[FIG. 4]
METHOD FOR REPRESENTING AND COMPARING MULTIMEDIA CONTENT ACCORDING TO RANK

CROSS-REFERENCE TO RELATED APPLICATION
This is a Continuation-in-Part application of U.S. patent application Ser. No. 09/385,169, "Method for Representing and Comparing Multimedia Content" filed on Aug. 30, 1999, now U.S. Pat. No. 6,546,135, by Lin et al.

FIELD OF THE INVENTION

This invention relates generally to processing multimedia content, and more particularly, to representing and comparing ranked multimedia content.
BACKGROUND OF THE INVENTION

There exist many standards for encoding and decoding multimedia content. The content can include audio signals in one dimension, images with two dimensions in space, video sequences with a third dimension in time, text, or combinations thereof. Numerous standards exist for audio and text.

For images, the best known standard is JPEG, and for video sequences, the most widely used standards include MPEG-1, MPEG-2 and H.263. These standards are relatively low-level specifications that primarily deal with the spatial compression in the case of images, and spatial and temporal compression for video sequences. As a common feature, these standards perform compression on a frame basis. With these standards, one can achieve high compression ratios for a wide range of applications.
Newer video coding standards, such as MPEG-4, see "Information Technology—Generic coding of audio/visual objects," ISO/IEC FDIS 14496-2 (MPEG4 Visual), November 1998, allow arbitrary-shaped objects to be encoded and decoded as separate video object planes (VOP). This emerging standard is intended to enable multimedia applications, such as interactive video, where natural and synthetic materials are integrated, and where access is universal. For example, one might want to "cut-and-paste" a moving figure or object from one video to another. In this type of scenario, it is assumed that the objects in the multimedia content have been identified through some type of segmentation algorithm, see for example, U.S. patent application Ser. No. 09/326,759 "Method for Ordering Image Spaces to Search for Object Surfaces" filed on Jun. 4, 1999 by Lin et al.
The most recent standardization effort taken on by the MPEG committee is that of MPEG-7, formally called "Multimedia Content Description Interface," see "MPEG-7 Context, Objectives and Technical Roadmap," ISO/IEC N2729, March 1999. Essentially, this standard plans to incorporate a set of descriptors and description schemes that can be used to describe various types of multimedia content. The descriptor and description schemes are associated with the content itself and allow for fast and efficient searching of material that is of interest to a particular user. It is important to note that this standard is not meant to replace previous coding standards. Rather, it builds on other standard representations, especially MPEG-4, because the multimedia content can be decomposed into different objects and each object can be assigned a unique set of descriptors. Also, the standard is independent of the format in which the content is stored. MPEG-7 descriptors can be attached to compressed or uncompressed data.
Descriptors for multimedia content can be used in a number of ways, see for example "MPEG-7 Applications," ISO/IEC N2728, March 1999. Most interesting, for the purpose of the description below, are database search and retrieval applications. In a simple application environment, a user may specify some attributes of a particular object. At this low level of representation, these attributes may include descriptors that describe the texture, motion and shape of the particular object. A method of representing and comparing shapes has been described in U.S. patent application Ser. No. 09/326,759 "Method for Ordering Image Spaces to Represent Object Shapes" filed on Jun. 4, 1999 by Lin et al. One of the drawbacks of this type of descriptor is that it is not straightforward to effectively combine this feature of the object with other low-level features. Another problem with such low-level descriptors, in general, is that a high-level interpretation of the object or multimedia content is difficult to obtain. Hence, there is a limitation in the level of representation.
To overcome the drawbacks mentioned above and obtain a higher level of representation, one may consider more elaborate description schemes that combine several low-level descriptors. In fact, these description schemes may even contain other description schemes, see "MPEG-7 Description Schemes (V0.5)," ISO/IEC N2844, July 1999. As shown in FIG. 1a, a generic description scheme (DS) has been proposed to represent multimedia content. This generic audio-visual DS 100 includes a separate syntactic DS 101, and a separate semantic DS 102. The syntactic structure refers to the physical and logical signal aspects of the content, while the semantic structure refers to the conceptual meaning of the content. For a video sequence, the syntactic elements may be related to the color, shape and motion of a particular object. On the other hand, the semantic elements may refer to information that cannot be extracted from low-level descriptors, such as the time and place of an event or the name of a person in the multimedia content. In addition to the separate syntactic and semantic DSs, a syntactic-semantic relation graph DS 103 has been proposed to link the syntactic and semantic DSs.
The major problem with such a scheme is that the relations and attributes specified by the syntactic and semantic DS are independent, and it is the burden of the relation graph DS to create a coherent and meaningful interpretation of the multimedia content. Furthermore, the DSs mentioned above are either tree-based or graph-based. Tree-based representations provide an efficient means of searching and comparing, but are limited in their expressive ability; the independent syntactic and semantic DS are tree-based. In contrast, graph-based representations provide a great deal of expressive ability, but are notoriously complex and prone to error for search and comparison.

For the task at hand, it is crucial that a representation scheme is not limited to how multimedia content is interpreted. The scheme should also provide an efficient means of comparison. From a human perspective, it is possible to interpret multimedia content in many ways; therefore, it is essential that any representation scheme allows multiple interpretations of the multimedia content. Although the independent syntactic and semantic DSs, in conjunction with the relation graph DS, may allow multiple interpretations of multimedia content, it would not be efficient to perform comparisons.
As stated above, it is possible for a DS to contain other DSs. In the same way that the generic DS includes a syntactic DS, a semantic DS, and a syntactic/semantic relation graph DS, it has been proposed that the syntactic DS

directed acyclic graphs of the content entities. Edges of the directed acyclic graphs represent the content entities, and nodes represent breaks in the segmentation. Each directed acyclic graph represents a particular interpretation of the multimedia content.

In one aspect the multimedia content is a two dimensional image, and in another aspect the multimedia content is a three dimensional video sequence.

In a further aspect of the invention, representations for different multimedia contents are compared based on similarity scores obtained for the directed acyclic graphs. Attributes of each content entity are measured and the measured attributes are assigned to each corresponding content entity in the directed acyclic graphs to rank order the multimedia content.
In another aspect of the invention, attributes of each content entity are measured, and the entities are ranked according to the measured attributes. The rank list can be culled for desirable permutations of primary content entities as well as secondary entities associated with the primary entities. By culling desirable permutations, one can summarize, browse or traverse the multimedia content. For example, the most active and least active video segments of a video sequence form a summary that has the desirable attribute of conveying the dynamic range of action contained in the video sequence.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1a-1c are block diagrams of prior art description schemes;
FIG. 2 is a block diagram of a description scheme for a general content entity according to the invention;
FIGS. 3a-3c are block diagrams of description schemes for example content entities;
FIG. 4 is a flow diagram of a method for generating the description scheme according to the invention;
FIG. 5 is a flow diagram for a method for comparing the description schemes according to the invention;
FIG. 6 is a block diagram of a client accessing multimedia on a server according to the invention;
FIG. 7 is a ranked graph;
FIG. 8 is a summary of the graph of FIG. 7; and
FIG. 9 is a ranked graph with secondary content entities.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Introduction
We describe methods for representing and comparing multimedia content according to a ranking of the content. The methods are based on a new generic data structure, which includes a directed acyclic graph (DAG) representation. In the following, we describe objects in our scheme and the advantages of the DAG representation. It is the DAG representation that allows the scheme to infer multiple interpretations of multimedia content, yet still be efficient in the comparison with other multimedia content. In fact, when we score with respect to a probability likelihood function, the computations are not only tractable, but also optimal.

Besides describing the generic data structure, we also describe three important functions that allow us to realize this efficient representation and perform comparisons. The first function will be referred to as a DAG-Coder. The DAG-Coder is responsible for taking individual content entities contained in the object and producing a DAG-Composition. The second function is an Object-Compare. The Object-Compare efficiently compares two content entities by determining a similarity score. The third function is a Content Ranker. This function ascribes a ranking score to content entities so that DAG-Compositions can be traversed, browsed, or summarized according to rank. The traversing, browsing, and summarizing can be in an increasing or decreasing rank order.
After the data structure and three functions mentioned above have been described, we review and elaborate on applications that are enabled by our representation scheme. An integrated application system that performs feature extraction, database management and object comparison is described. Also described is an application system for traversing, browsing, and summarizing multimedia content according to a ranking of the content.
Generic Description Scheme of a Content Entity
To introduce our scheme of representing content objects, we define generic object types, and restrictions on instantiations of such generic object types.
As shown in FIG. 2, a content entity, for example, a video entity 200 is the main part of our scheme. The content entity is a data object that relates contained objects together. The content entity is a recursive data structure divided into four parts: attributes (properties) 201, relations 202, DAG-Compositions 203, and a containment set 204.
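The following C sketch (my illustration; the names and field types are hypothetical and only mirror the structure described in the text, not the patent's notation) shows one way the four-part content entity could be laid out in memory.

/* Illustrative sketch of the four-part content entity. */
typedef struct attribute { const char *name; const char *value; } attribute;
typedef struct relation  { const char *type; struct content_entity *other; } relation;

typedef struct dag_node { int id; } dag_node;               /* breakpoint in the segmentation */
typedef struct dag_edge {
    struct content_entity *entity;                          /* edge = contained content entity */
    dag_node *from, *to;
} dag_edge;
typedef struct dag_composition {
    dag_edge *edges; int n_edges;
    dag_node *nodes; int n_nodes;
} dag_composition;

typedef struct content_entity {
    attribute        *attributes;   int n_attributes;       /* global properties (201) */
    relation         *relations;    int n_relations;        /* links to related entities (202) */
    dag_composition  *compositions; int n_compositions;     /* interpretations (203) */
    struct content_entity **contained; int n_contained;     /* containment set (204) */
} content_entity;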
Attributes
The attributes 201 form the basis within our recursive description scheme. Attributes are an unordered set that contains properties that may provide details about parts of the entity or summarize the entity as a whole. Attributes are global to the object and may refer to such syntactic properties as color and motion, or other semantic properties of the object such as time and place. The attributes provide basic, low-level information without any structure; however, after structure is added, it is these properties that actually contribute to the degree of similarity. Also, as we will describe later, attributes can define an ordering that helps to compose and interpret the individual entities contained within the object. It should be noted that these properties are inherent qualities of the content entity that contains them and instantiations of this entity should be accessible/visible through the content entity itself.
As an example, a video sequence of a airplane landing on a runway may contain the semantic attributes of place, date, time and temperature, along with the caption, “airplane (767) landing.” Some syntactic attributes that may be attached to this multimedia content are the trajectory of descent. Attached to the airplane object may be the color and shape of the airplane itself. Here, we make an important distinction between attributes of the multimedia content and attributes of the object. The reason that the trajectory is an attribute of the multimedia content is because trajectory is relative to the ground. Therefore, it does not make sense as an attribute of the plane alone, whereas color and shape do make sense.
Relations
The relations (R) 202 are objects that detail relationships between content entities (VE). It is important to note that the context of the relations is given by the containing content entity. The reason is that multimedia content that are segmented differently will produce different relations. Essentially, the relation can be viewed as a hyperlink between a contained object and something else, for example, another content entity. Types of relations are global and instantiation of relations should only be accessible within the content entity itself. One of the utilities of relations is that they may be useful in guiding a search. Returning to our example of the airplane landing, several relations can be identified: the plane is landing on the runway, the lights are guiding the plane, and the runway is located at a particular airport with a particular orientation.
The relations are different from containment, described below, in that the related object may not be completely contained by the content entity and therefore is not considered in similarity comparisons. However, relations allow a user to search for other relevant objects to the content entity in question. All the relations in the content entity must have one argument that is contained within the content entity.
DAG-Compositions
In general, the DAG-Compositions 203 are directed acyclic graphs 205 where edges 206 represent content entities and nodes 207 correspond to breakpoints in the segmentation. The DAG-Composition allow us to infer multiple interpretations of the same multimedia content. Because DAGs operate on 1D spaces, segmentation in this context refers to the delineation of some 1D process. For instance, if we consider a spatio-temporal multimedia content, then the temporal segmentation is a 1D process that defines points in time where several successive events may begin and end. Hence, we may have a DAG-Composition that corresponds to temporal actions. In the spatial domain, we may define an order from left to right across an image. In this way, we may have a DAG-Composition that corresponds to object positions from left to right. Of course, we may define other orderings such as a counter-clockwise spatial ordering, which may serve a totally different purpose.
In U.S. patent application Ser. Nos. 09/326,750 and 09/326,759, incorporated herein by reference, Voronoi ordering functions were respectively defined over the exterior and interior image space with respect to an object boundary. The ordering on the interior space was particularly useful in obtaining a skeleton-like representation of the object shape, then forming a partially ordered tree (POT), which made use of the DAG representation.
It should be emphasized that the method of ordering 2D images or 3D video sequences to achieve DAG-Compositions is not the focus here, rather we are concerned with techniques that use the DAG-Composition to infer higher-level interpretations of a particular multimedia content.
Containment Set
The containment set 204 includes pointers to other content entities that are strictly contained temporally and/or spatially within the content entity 200. The restriction on the containment set is that one object cannot contain another object that contains the first object, i.e., containment induces a directed acyclic graph. The content entities need not be mutually exclusive and there is no ordering within the containment set. For example, in the video sequence of the airplane landing, the containment set includes pointers to each content entity. Some possibilities include pointers to the plane, the runway, the runway lights, the plane touching down, radio communications, etc.
DAG-Coder
The DAG-Compositions are the result of different DAG-Coders applied to the content entity. In other words, given the content entities in the containment set and their relations, different DAG-Coders produce different interpretations of the multimedia content. This function is further described in the following.
A DAG-Coder is a function that segments a given content entity into its components by inducing an ordering over the content entity components. The DAG-Coder produces the DAG-Composition 203. The DAG-Coder is global to the database and can be applied to any content entity. The DAG-Coder provides a perspective on the spatio-temporal content space and makes similarity calculations between objects more tractable. A path in the DAG represents an interpretation of the content entity 200. This DAG representation becomes a framework for the description scheme that can interchange syntactic and semantic information, at any level. Furthermore, the complexity of the description scheme is hidden from the user.
Multiple Path Through a DAG
The DAG-Coder produces multiple interpretations of the multimedia content through such DAG-Compositions. This is achieved through the multiple path structure of the DAG. In the following, we focus on what these multiple paths really mean in terms of the multimedia content.
FIGS. 3a-3c illustrate multiple paths in terms of an example "baseball video" entity 300. In FIG. 3a, the content entity 300 includes attributes 301, relations 302, DAG-Compositions 303, and a containment set 304. In FIG. 3b, a content entity 310 includes attributes 311, relations 312, DAG-Compositions 313, and a containment set 314.
As illustrated, a temporal DAG can represent equivalent interpretations of the same event. For instance, as shown in FIGS. 3a and 3b, in the baseball video, a pitching and hitting sequence, or the inning that is being played, may be recognizable through the observation of syntactic elements, such as motion, color and/or activity. However, as an alternate means of representation as shown in FIG. 3c, such a sequence or event can also be summarized by attributes 321 of the commentary of the announcer 320. So, from this example, it is evident that multiple temporal interpretations of multimedia content are possible and that they may or may not occur simultaneously.
In the case of spatial DAGs, multiple paths can also represent equivalent interpretations, and in some sense can add a higher level of expressiveness. This added level of expressiveness is achieved by a grouping of individual objects into a composite object, then realizing that this composite object can be interpreted with a different semantic meaning. Usually, this new semantic interpretation is higher than before since more information is considered as a whole.
As an example, consider several objects: a gasoline pump, a gas attendant and a car. Individually, these objects have their own set of attributes and are distinct in their semantic meaning. Put together though, these individual objects can obviously be interpreted as a gas station. These multiple paths are efficiently represented by the DAG structure. On the syntactic side, various interpretations of the shape of an object for example may be deduced in a similar manner.
Generating Multimedia Content Description
FIG. 4 illustrates a method 400 for generating a description scheme 409 from a multimedia content 410. The multimedia content can be a 2D image or a 3D video sequence. First, spatial and temporal segmentation 410 is applied to the multimedia content to extract objects 411. Next, feature extraction 420 is applied to the objects to obtain a set of all content entities 429. Feature extraction includes attribute extraction 421, containment extraction 422, and relations extraction 423. The DAG-Coder 430, according to an ordering 431, generates the DAG-Compositions for the entities 429 to form the multimedia content description 409 according to the invention.
Focusing on the DAG structure, we map the DAG structure of a DAG-Composition as follows: edges represent content entities, and nodes correspond to breakpoints in the segmentation. We can structure the object as a configuration of contained content entities within DAGs according to a predefined topological order. The restriction on the DAGs compared to a general graph structure is their topological ordering. This order may be temporal or spatial, but it must be 1D. By following the order and obeying connectivity, a subgraph of the DAG structure leads to a new concept: an ordered path represents a particular interpretation of multimedia content, i.e., a representative view of the content entity as an ordered subset of its contained entities.
Because a DAG can contain multiple ordered paths, the DAG becomes a compact representation of the multiple interpretations of the data. The DAG data structure allows for the concept of parallel paths; thus, the DAG may integrate both semantic and syntactic elements through this parallel structure. The semantic and syntactic elements are not necessarily equivalent, but, within the context of the DAG structure, they can be made interchangeable by placing them on these parallel constructions and its ordering. These functionalities are a subset of a generic graph structure. However, as most graph matching problems are still open, these restrictions will allow us to compare these expressive structures. Although this ordering constrains the expressiveness of a DAG-Composition, it does allow for element alignment in robust comparison of content entities.
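As a rough C sketch of the idea (my illustration, not the patent's implementation), the following enumerates every ordered path between the first and last breakpoint of a tiny DAG-Composition; each printed path corresponds to one interpretation. The adjacency representation, node numbering, and example segments are assumptions.

#include <stdio.h>

#define MAX_NODES 8

/* adj[i][j] != 0 means an edge (a content entity) runs from breakpoint i to j.
   Nodes are assumed to be numbered in topological order, as the text requires. */
static int adj[MAX_NODES][MAX_NODES];

/* Print every ordered path from 'node' to 'last'; each complete path is one
   interpretation of the multimedia content. */
static void enumerate_paths(int node, int last, int *path, int depth) {
    path[depth] = node;
    if (node == last) {
        for (int i = 0; i <= depth; i++) printf("%d ", path[i]);
        printf("\n");
        return;
    }
    for (int next = node + 1; next <= last; next++)
        if (adj[node][next])
            enumerate_paths(next, last, path, depth + 1);
}

int main(void) {
    int path[MAX_NODES];
    /* Tiny example: two parallel interpretations between breakpoints 0 and 3. */
    adj[0][1] = adj[1][3] = 1;   /* e.g. pitching segment then hitting segment   */
    adj[0][2] = adj[2][3] = 1;   /* e.g. the announcer's commentary segment(s)   */
    enumerate_paths(0, 3, path, 0);   /* prints "0 1 3" and "0 2 3" */
    return 0;
}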
Universal Multimedia Access
Because our description scheme is capable of representing and comparing multiple interpretations of multimedia content, it fits very well with the concept of Universal Multimedia Access (UMA). The basic idea of UMA, as shown in FIG. 6, is to enable client devices 601 with limited communication, processing, storage, and display capabilities to access, via a network 602, rich multimedia content 603 maintained by a server device 604.
Recently, several solutions have focused on adapting the multimedia content to the client devices. UMA can be provided in two basic ways—the first by storing, managing, selecting, and delivering different versions of the media objects (images, video, audio, graphics, and text) that comprise the multimedia presentations. The second way is by manipulating the media objects on-the-fly, such as by using methods for text-to-speech translation, image and video transcoding, media conversion and summarization. This allows the multimedia content delivery to adapt to the wide diversity of client device capabilities in communication, processing, storage, and display.
Our description scheme can support UMA through the first item mentioned above, that is, depending on the client-side capabilities, the server-side may choose to send a more elaborate interpretation of the multimedia content or simply send a brief summary of the multimedia content. In this way, our description scheme acts as a managing structure that helps decide which interpretation of the multimedia content is best suited for the client-side devices. As part of the attributes for a content entity, the requirements may include items such as the size of each image or video frame, the number of video frames in the multimedia content, and other fields that pertain to resource requirements.
Ranking
As an additional feature, the content entities in a DAG can have associated ranks. FIG. 7 shows a DAG 700 including edges 701-709 having associated ranks (R) 711-719. The ranking is according to the attributes, e.g., semantic intensity, syntactic direction, spatial, temporal, and so forth. The ranking can be in an increasing or decreasing order depending on some predetermined scale, for example a scale of one to ten, or alternatively, ten to one.
For example, the various segments of an “adventure-action” movie video can be ranked on a scale of 1-10 as to the intensity of the “action” in the movie. Similarly, the segments of a sports video, such as a football match, can be ranked, where a scoring opportunity receives a relatively high score and an “injury” on the field receives a relatively low score. Segments of gothic romance videos can be ranked on the relative level of “romantic” activity, horror films on the level of fright-inducing scenes, comedies on their level of humor, rock videos on their loudness, and so forth. It should be understood that the measurements can be based on the semantic and/or the syntactic properties of the content.
The ranking can be manual or machine generated. For example, a high number of short segments in a row would generally be indicative of a high level of activity, whereas long segments would tend to include a low level of activity. See Yeo et al., “Rapid Scene Analysis on Compressed Video,” IEEE Transactions on Circuits and Systems for Video Technology, Vol. 5, No. 6, December 1995, pages 533-544, for one way of measuring content attributes.
Once the various segments have been ranked, as shown in FIG. 8, it becomes possible to traverse the DAG 800 according to the rank ordering. The traversal can be considered a permutation of the content. In FIG. 8, the arrows 801 indicate “skips,” and the bolded edges indicate the only segments that are traversed. For example, here the ranking is based on “action,” and only segments having an “action” ranking of eight or greater are traversed. It should be apparent that the traversing can be according to other rank orderings of the content.
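A minimal sketch of such a rank-based traversal, assuming the segments of one ordered path are held in a simple list with integer ranks; the names and the threshold value are illustrative.

```java
import java.util.*;

/** Hypothetical sketch: traverse ranked segments of a DAG path and keep only those
 *  whose rank meets a threshold, e.g. an "action" rank of eight or greater as in FIG. 8. */
public class RankedTraversal {
    record Segment(String name, int rank) {}

    /** Returns the rank-ordered traversal: segments below the threshold are skipped. */
    static List<Segment> traverse(List<Segment> orderedSegments, int threshold) {
        List<Segment> summary = new ArrayList<>();
        for (Segment s : orderedSegments) {
            if (s.rank() >= threshold) summary.add(s);   // "bolded" edge: traversed
            // otherwise the segment is skipped (the "arrow" in FIG. 8)
        }
        return summary;
    }

    public static void main(String[] args) {
        List<Segment> video = List.of(
            new Segment("car-chase", 9), new Segment("dialogue", 3),
            new Segment("explosion", 10), new Segment("credits", 1));
        // Keeps only car-chase and explosion: a "high-action" summary of the video.
        System.out.println(traverse(video, 8));
    }
}
```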
Summary
Specifying a particular rank-based traversal in effect allows one to summarize a video. The “summary” shown in FIG. 8 is a “high-action” summary. Thus, if summaries for two different videos are extracted based on the same ranking criteria, the summaries can be compared with the scheme as shown in FIG. 5. The advantage here is that when the videos are fairly lengthy, extraneous segments not germane to the comparison can be rapidly skipped and ignored to provide a more meaningful and faster comparison.
In another embodiment, as shown in FIG. 9, some or all of the “primary content” entities 711-719 have associated secondary content entities (2n) 901-909. A secondary content entity characterizes its associated primary entity in a different manner. For example, a fifteen-minute interview clip of a person speaking can be associated with just one frame of the segment, a still image of the same person, or perhaps text containing the person's name and a brief description of what the person is saying. Now, a traversal can be via the primary or associated secondary content entities, and a summary can consist of the primary content entities, the secondary content entities, or a mix of both. For example, a low-bandwidth summary of a video would include only textual secondary entities in its traversal or selected permutations, and perhaps a few still images.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. A computer implemented method for ordering multimedia content, comprising the steps of:
- segmenting the multimedia content to extract video objects, in which the objects are video object planes;
- extracting and associating features of the video object to produce content entities, wherein the content entities are recursive data structures comprising features, relations, directed acyclic graphs and containment sets;
- coding the content entities to produce directed acyclic graphs of the content entities, each directed acyclic graph representing a particular interpretation of the multimedia content;
- measuring high-level temporal attributes of each content entity;
- assigning the measured high-level temporal attributes to each corresponding content entity in the directed acyclic graphs to order the content entities of the multimedia content;
- comparing the ordered content entities in a plurality of the directed acyclic graphs to determine similar interpretations of the multimedia content;
- traversing the multimedia content according to the directed acyclic graph and the measured attributes assigned to the content; and
- summarizing the multimedia content according to the directed acyclic graph and the measured attributes assigned to the content entities.
2. The method of claim 1 wherein the measured attributes include intensity attributes.
3. The method of claim 1 wherein the measured attributes include direction attributes.
4. The method of claim 1 wherein the measured attributes include spatial attributes and the order is spatial.
5. The method of claim 1 wherein the measured attributes include temporal attributes and the order is temporal.
6. The method of claim 1 wherein the measured attributes are arranged in an increasing rank order.
7. The method of claim 1 wherein the measured attributes are arranged in a decreasing rank order.
8. The method of claim 1 wherein the multimedia content is a three dimensional video sequence.
9. The method of claim 1 wherein nodes of the directed acyclic graphs represent the content entities and edges represent breaks in the segmentation, and the measured attributes are associated with the corresponding edges.
10. The method of claim 1 wherein at least one secondary content entity is associated with a particular content entity, and wherein the secondary content entity is selected during the traversing.
11. The method of claim 1 wherein a summary of the multimedia is a selected permutation of the content entities according to the associated ranks.
* * * * *
A Rigorous Approach to Resource Management in Activity Coordination
Rodion M. Podorozhny, Barbara Staudt Lerner, Leon J. Osterweil
University of Massachusetts, Amherst MA 01003, USA
Abstract. System behaviors can be expected to vary widely depending upon the availability, or shortage, of resources that they require. Thus, the precise specification of resources required by, and resources available to, a system is an important basis for being able to reason about, and optimize, system behavior. Previous resource models for such disciplines as management and workflow have lacked the rigor to support powerful reasoning and optimization. Some resource models for operating systems have been quite rigorous, but have generally been overly narrow in scope. This paper argues that it is important to be able to optimize and reason about the broad and complex resource requirements of modern distributed multi agent systems. This entails precisely modeling a wide range of entity types, including humans, tools, computation platforms, and data. The paper presents a metamodeling approach that can be used to create precise models of a wide range of resource types, and provides examples of the use of this metamodel. The paper also describes a prototype resource allocation and management system that implements these approaches. This prototype is designed to be a separable, orthogonal component of a system for supporting the execution of processes defined as hierarchies of steps, each of which incorporates a specification of resource requirements.
Keywords
Resources, resource specification, process execution, process centered environments
1 Introduction
Much research in such areas as software process, workflow, multi agent scheduling, and computer supported cooperative work focuses on devising mechanisms for adequately specifying coordination of diverse activities to accomplish a complex task. Most of this work incorporates approaches in which the overall task to be accomplished is represented as a synthesis of lower level tasks. In addition, different approaches incorporate, to differing degrees of formality, the specification of the artifacts that these various tasks use and produce. Some approaches also have mechanisms to specify the agents and resources that are to be used to support task execution.
The differences in emphasis on these different components of activity coordination specification are often clearly due to differences in goals. Some specifications are intended to be largely illustrative and advisory, being intended for use in helping humans to come to common understandings. But many specifications are intended to be sufficiently rigorous and detailed that they can be used as prescriptions for computer support and the application of tools. Some of these more rigorous specifications are sufficiently rigorous that they can support powerful reasoning about the activities, thereby allowing them to be used as the basis for strong support of these activities, and for diagnosis and improvement of the activities being represented.
Our own past work has this latter character and goal. We have concentrated on developing and evaluating specification formalisms that are sufficiently rigorous that they can be used to reason about such difficult and complex activities as the processes used to develop software. Our goals have included being able to provide strong tool and automation support to these processes, and also to reason about how to support these processes most efficiently. This past work has demonstrated the need for structuring the tasks that comprise larger overall processes, and the need for being articulate and complete in specifying the artifacts that these tasks use and produce. But our work has also increasingly shown that it is essential to represent the use of resources in descriptions of processes. This need is most acute in the case of processes that are to be carried out through the coordination of multiple agents.
Operating systems research has long ago demonstrated the pivotal importance of reasoning about resources in parallelizing, coordinating, and optimizing the execution of system processes. Clearly, the abundance of resources can enable execution of tasks in parallel, thereby speeding up accomplishment of larger goals. This same phenomenon is also clearly observable in broader classes of activity coordination. If a software design activity requires the use of a particular design tool, then the various members of a design team are able to work in parallel only if there are multiple copies of that tool available. Similarly, if the design activity has been decomposed into separate parallel subtasks, the design process may go faster, but only if more than one qualified designer is available to work on the project.
Correspondingly, the lack of resources causes contention, occasions the need for some tasks to wait for others to complete, and generally slows down accomplishment of larger goals. Often potential delays can be avoided or reduced by using resource analysis to identify ways in which tasks can be parallelized. Thus, for example, if only one design tool is available for use by two designers, delay may be avoided or reduced by scheduling one designer to perform some other task (e.g. requirements revision) while the other designer has use of the one copy of the design tool.
Of course operating systems research has also long ago demonstrated that it is all too easy to create such problems as deadlocks and livelocks if one is not careful in the scheduling of the assignment of resources. These problems are very real for the larger class of activity coordination problems as well. It is not hard to devise a process in which a requirements analyst is awaiting the results of a prototype activity in order to complete requirements specification, while a prototyper is awaiting the completion of the requirements specification in order to complete the prototype.
In our work, we are interested in creating activity coordination specifications that are sufficiently precise and rigorous that they effectively support reasoning of the sorts just indicated. We would like to be able to identify tasks that are truly parallelizable so that we can schedule them in parallel. We would like to be able to identify where deadlocks, race conditions, and starvation are possible, so that we can assure that they do not occur. We would like to be able to infer when additional resources could potentially speed up execution of the overall activity, and when an activity has an excess of resources, some of which might potentially be reassigned. We would like to be able to manage contention for resources that are in high demand, and those to which access should be managed by disciplines such as transaction mechanisms.
The preceding sorts of reasoning and controls seem to us to be impractical or impossible unless the resources needed to perform tasks are specified. As with most software analyses, these sorts of reasoning can be more powerful and precise when the rigor with which needed resources are specified is more powerful and precise. Thus, in this work we propose a powerful and precise resource specification formalism, and indicate how it can be combined with a suitably powerful and precise process formalism to provide a basis for the kinds of analysis indicated above.
2 Motivation
In order to motivate our work we present an example of how it might be useful in addressing a typical software development problem. To that end, we suggest how the management of resources specified by our modeling formalism can improve the effectiveness of a small software development team. We hypothesize that the team has a variety of human resources, among which are three developers, one of whom is both a qualified coder and designer, one of whom is a qualified coder and tester, and one of whom is qualified to do design work, coding, and testing. Non-human resources include one license for the Rational Rose design support system, and one license for the Visual C++ tool.
Let us further hypothesize that the team is currently in the late stages of development of a product, and is both completing implementation and also carrying out some redesign that is required in order to fix bugs that have been detected in earlier work. Thus, some of the team’s activities entail recoding, some requires redesign, and all of it requires retesting. It is not hard to also imagine that the progress of the team is, from time to time, slowed by the lack of availability of resources. In particular there are probably times when it would be desirable to have more qualified designers or coders. More likely, however, there are probably times at which progress could be expedited if the team had available an additional Rose license and an additional Visual C++ license.
We see a number of ways in which the resource management capability that we have developed could be of substantial benefit to this organization. A reasonable scenario might be that the organization has money to purchase only one additional license, and it would be important for it to have some way to determine the best choice, a Rose license or a Visual C++ license. Our resource specification capability is intended to be a facility for supporting informed decisions of precisely this sort. It might be used as the basis for analysis of flow time reduction achievable through one purchase or the other (or both). It might be used to study increases in human resource utilization, or it might be used to suggest ways in which the process itself might be altered to reduce flow time without having to purchase new software licenses at all.
The capability we present here assumes that a process is represented as a set of steps that are connected to each other by control flow and/or dataflow dependencies, and that it supports the careful specification of potential parallel activities. The capability requires that each step has attached to it a specification of the resources that are required in order for the step to be executed. Attaching such resource specifications to the various steps enables a process execution supervisor to be assured that steps have what they need before being executed. More important, however, such specifications support determination of how much parallelization can be supported by the available resources, and which infusions of new resources are likely to be most useful, especially when exceptional cases have arisen. Thus, in our example it may be reasonable to purchase an additional design support tool, either because of the lack of availability of a second designer, or because the set of tasks dictated by the process does not require or allow parallel design activities. Precise process and resource specification should help determine such things.
We now introduce a resource modeling and management approach and indicate why it seems to us to be effective in dealing with problems such as these.
3 Overview
Our resource management component is intended to be one component of a larger system that may be used to represent processes, support reasoning about real-time systems, or be integrated into a planning system, for example. Since we believe that the need for powerful and precise resource management is widespread we have designed this component to support the modeling of a wide range of types of resources, from physical entities such as robots, to electronic entities such as programs or data artifacts, and to human entities. The resource management component similarly does not prescribe specific protocols about when resources should be reserved, acquired, and released, but rather leaves the definition of those protocols to the external system with which it is integrated. In this section, we present an overview of our resource management component and introduce some key terminology.
A resource model is a model of those entities of an environment that may be required, but for which an unlimited supply cannot be assumed.
A resource model is defined as a collection of resource classes and resource instances. Resource classes are connected to each other using IS-A links to define a resource hierarchy. This allows resources to be described and referenced at various levels of detail. Resource instances are themselves connected to the model using IS-A links to identify which resource class(es) they are instances of.
In addition to IS-A links, resources may also be connected by Requires links and Whole-Part links. A Requires link indicates that any use of a particular resource also requires use of another particular resource. For example, a piece of licensed software requires use of a computer configured to run that software.
A Whole-Part link indicates a relationship in which a resource can be thought of as a single aggregate unit or may be decomposed into smaller pieces that are separately managed. An example of this is a design team that can be assigned a high-level design task, implying responsibility of the team members, while allowing each individual design team member (part) to be assigned other tasks.
There are four primary operations on a resource model:
- **Identification** of resources that can satisfy specific requirements.
- **Reservation** of a resource for a specific start time and duration at some point in the future.
- **Acquisition** of a resource locking the resource for use in a specific activity.
- **Release** of a resource so that it can be used by other activities.
The mechanism described here is quite general and would be used in different ways by different types of systems. A simple process execution environment might rely just on acquisition and release, greedily acquiring resources as necessary. A planning system would reserve resources as the plan is being developed and acquire and release them according to the plan created. Static analysis might examine the resource requests being made by a system to determine whether the necessary resources exist and where delays or even deadlock might arise when resources are contended for. The resource model is also intended to be manipulable by non-programmers. We therefore require both the static definition of the resource model and its dynamic reservation and acquisition status to be visualizable and easily manipulable through a GUI tool.
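A minimal sketch of how these four operations might surface as an interface, assuming illustrative Java types for queries, reservations, acquisitions, and activities; the paper does not prescribe this API, so all names are assumptions.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Set;

/** Illustrative sketch of the four primary operations on a resource model.
 *  All type and method names are assumptions; the real API may differ. */
public interface ResourceManager {
    /** Identification: find instances of a class that satisfy a query over attribute values. */
    Set<ResourceInstance> identify(String resourceClass, Query query);

    /** Reservation: hold a resource (or schedulable class) for a future start time and duration. */
    Reservation reserve(String resourceClass, Query query, Instant start, Duration duration, double capacity);

    /** Acquisition: lock a resource for immediate use by a specific activity. */
    Acquisition acquire(String resourceClass, Query query, String activityId, double capacity);

    /** Release: make an acquired or reserved resource available to other activities. */
    void release(Acquisition acquisition);

    // Marker types used above, left abstract in this sketch.
    interface ResourceInstance {}
    interface Query {}
    interface Reservation {}
    interface Acquisition {}
}
```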
### 4 Defining the resource model
A resource model serves two purposes. First, it is a mechanism to describe and organize the resources available within an organization. It is essential that a resource modeling notation be expressive enough to capture the characteristics of resources and relationships among resources that accurately reflect the actual resource entities being modeled. Second, a resource model is used dynamically to control concurrent access to resources, to schedule resources, and to analyze the impact of the resource environment on an organization's ability to carry out an activity. Thus, the dynamic modeling capability of a resource management mechanism is also vitally important. In this section, we present the descriptive features of the resource modeling mechanism, while in the next we present the dynamic characteristics of our mechanism.
4.1 Resource Entities
A resource model is described as a collection of resource classes, resource instances, and relationships between resources. A resource instance represents a unique entity from the physical environment, such as a specific person, printer, or document. A resource class represents a set of resources (other classes and/or instances) that have some common properties. Resource classes are further subdivided into unschedulable resource classes and schedulable resource classes. A schedulable resource class is one in which its instances are similar enough that the person using the resource model might not care which instance is chosen. For example, Printer would probably be a schedulable resource class. An unschedulable resource class is more abstract and is intended more as an organizational convenience when defining the resource model. For example, Person might be a sensible resource class, but is too general to be scheduled for a specific task.
Each resource is described using a set of typed attribute-value pairs. There is a small set of predefined attributes required of all resources. In addition, the resource modeler can define user-defined attributes that are unique to the specific type of resource. The attribute values of a resource serve to identify the resource and distinguish it from other similar resources. For example, a printer might have an attribute indicating whether it produces color or black output, but that attribute would not be required of resources that are not printers. These descriptive attributes have static values, that is, the values do not change as a result of use of the resources.
There are three pre-defined descriptive attributes required of all resources: a name, a textual description, and a set of criteria assertions. Criteria assertions are membership tests used to ensure that the resources are placed in the model consistent with semantic rules defined by the modeler. Criteria assertions are described further in Section 4.3.
Schedulable resource classes and resource instances also have a capacity attribute associated with them. Capacity is used to indicate the maximum rate at which a resource can be used. The units by which capacity is measured are specific to the type of resource. For example, a printer’s capacity would be measured in pages per minute, while a human’s capacity would be measured in hours per week. Capacity is used to support resource sharing and is discussed further in Section 4.2.
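The following sketch shows one plausible encoding of a resource entity with the three predefined descriptive attributes, an optional capacity, and user-defined typed attributes; the field names and the printer example values are assumptions rather than the system's actual representation.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

/** Sketch of a resource entity: predefined attributes (name, description, criteria assertions),
 *  an optional capacity for schedulable resources, and user-defined typed attributes.
 *  Field names are illustrative assumptions. */
public class Resource {
    final String name;
    final String description;
    final List<Predicate<Resource>> criteriaAssertions;     // membership tests (Section 4.3)
    final Double capacity;                                   // e.g. pages/minute or hours/week; null if unschedulable
    final Map<String, Object> userAttributes = new LinkedHashMap<>();

    Resource(String name, String description, List<Predicate<Resource>> criteria, Double capacity) {
        this.name = name;
        this.description = description;
        this.criteriaAssertions = criteria;
        this.capacity = capacity;
    }

    public static void main(String[] args) {
        Resource colorPrinter = new Resource("lab-printer-1", "Color laser printer in room 203",
                List.of(), 16.0 /* pages per minute */);
        colorPrinter.userAttributes.put("output", "color");   // attribute unique to printers
        System.out.println(colorPrinter.name + " capacity=" + colorPrinter.capacity);
    }
}
```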
4.2 Relationships between Resources
The resource modeling mechanism supports the definition of three types of relationships between resources: IS-A relationships, Requires relationships, and Whole-Part relationships. IS-A relationships provide a hierarchical abstraction mechanism. Requires relationships express functional dependencies between the use of resources. Whole-Part relationships allow resources to be manipulated at either coarse or fine granularities and allow refinement of resource usage over time.
**IS-A hierarchy** The relationship between a resource class and its members is expressed with an IS-A link. The resource classes and their IS-A links form a singly-rooted tree. The root of the tree is the predefined resource class named Resource. A resource instance may belong to multiple classes, thus the entire resource model forms a singly-rooted DAG. Each child of an IS-A link inherits all attributes of its parents. If the same attribute name appears in multiple parents and is originally defined in separate classes (that is, not in a common ancestor), the instance contains both attributes, qualified by the class name. For example, suppose Amelia is an instance of both the Programmer class and the Translator class. Suppose each of these classes has a language attribute. Amelia might have Java and C++ as the values for the Programmer.language attribute and English and Japanese as the values for the Translator.language attribute. An example of an IS-A hierarchy is presented in Fig. 1.

**Requires** Functional dependencies between resources are captured with a Requires relation. A Requires relation denotes that one resource requires another resource in order to perform any work. For example, a piece of software requires a computer with a certain configuration or a delivery person requires a vehicle. The fundamental property captured by the Requires relation is that any conceivable use of one resource requires another resource. By capturing these relations in the resource model, the dependency is permanently captured and does not need to be repeated whenever the first resource is used. Arities may be associated with the Requires relation to indicate how many instances of a resource are required.
It is also possible to associate a capacity attribute with a Requires relation. This is useful in situations in which a resource does not require exclusive use of a second resource. For example, while a software package might require use of a specific computer, it might be possible for multiple people to use the same computer concurrently, if the CPU and memory requirements of that software package are not excessive. When the computer is used for that software package, the capacity available for other activities is reduced by the amount specified in the Requires relation.
The Requires relation can exist between resource classes, resource instances, or a combination of these. A Requires relation between resource instances establishes a tight, static binding between the resources. For example, a particular software package might only be loaded on a specific computer. To use that software package therefore requires use of that specific computer. A Requires relation between resource classes establishes the dependency, but defers the binding of which specific required resource is assigned. For example, a delivery person requires a car, but it does not matter which delivery person uses which car in general. A specific task to be performed by a delivery person might place additional requirements on the car. For example, the product to be delivered might need to remain cool, so the delivery car would then require air conditioning. Those additional requirements can be described in the context in which the resources are being used.
An example of the Requires relation is presented in Fig. 2. The example shows that a resource instance under the schedulable resource class Coder requires one resource instance of Visual C++, which, in turn, requires one resource instance under the schedulable resource class Computer.
**Whole–Part** The semantics of the Whole-Part relation is that one resource is logically or physically part of a second resource. The difference between Requires and Whole-Part is subtle. Both the Whole and the Parts are modeled as resources to indicate that the resource can be used as a whole, or parts of it can be used individually. Arities and capacities can also be associated with the Whole-Part relation. An example of a Whole-Part relation is that a team is a Whole whose Parts are the team members. The team can be given an assignment, implying a joint assignment to the team members. The assignment can be refined later, giving parts of the assignment to the individual members. This refinement capability is discussed further in Section 5.5.
The Whole-Part relation can be between resource classes or between resource instances. A Whole-Part relation between resource classes specifies a typing relation that must hold for all Whole instances. For example, a Team resource class might have a Whole-Part relation with a Member resource class with an arity of 3–5. Such a relation implies that any Team instance must have a Whole-Part relation with 3–5 Member instances. Thus, when a Whole instance is added to the resource model, it is imperative that the corresponding Whole-Part relation be created to identify the specific team’s members. This differs from the Requires relation in which the binding between resource instances can be deferred until the resources are used, if desired.
An example of Whole-Part relation overlaid on the IS-A hierarchy is presented in Fig. 3. The figure shows that resource instance Team1 is an aggregate of resource instances Frank and Dave.
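A compact sketch of Requires and Whole-Part links, loosely encoding the examples of Fig. 2 and Fig. 3 and showing how the transitive closure of Requires links can be computed; the arities, capacities, and names are illustrative assumptions.

```java
import java.util.*;

/** Sketch of Requires and Whole-Part links with arity and capacity,
 *  loosely encoding the examples of Fig. 2 and Fig. 3. Names are illustrative. */
public class ResourceLinks {
    record Requires(String from, String to, int arity, double capacityUsed) {}
    record WholePart(String whole, String part) {}

    public static void main(String[] args) {
        List<Requires> requires = List.of(
            new Requires("Coder", "VisualC++", 1, 1.0),        // a coder needs one license
            new Requires("VisualC++", "Computer", 1, 0.5));    // the license uses half a computer's capacity
        List<WholePart> parts = List.of(
            new WholePart("Team1", "Frank"),
            new WholePart("Team1", "Dave"));

        // Transitive closure of what acquiring a Coder also requires.
        Deque<String> pending = new ArrayDeque<>(List.of("Coder"));
        Set<String> needed = new LinkedHashSet<>();
        while (!pending.isEmpty()) {
            String r = pending.pop();
            if (!needed.add(r)) continue;
            requires.stream().filter(q -> q.from().equals(r)).forEach(q -> pending.push(q.to()));
        }
        System.out.println("Acquiring a Coder transitively involves: " + needed);
        System.out.println("Acquiring Team1 also locks its parts: " + parts);
    }
}
```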
### 4.3 Criteria assertions
A resource class may define a set of criteria assertions. A criteria assertion is a boolean expression over the attribute values of the resource. These assertions serve as membership criteria for all the resource classes and resource instances that are children of the resource class. The additional assertions introduced at a child resource class may not contradict those of a parent resource class. While this is not checkable in general, such contradictions would make it impossible to populate the resource class with instances. Therefore, the resource classes farther from the root along a certain path of IS-A relations have more membership criteria to satisfy and thus represent more specific classes than those closer to the root.
As an example, one might define a resource class FastPrinter as a child of the more general Printer class. The FastPrinter class might have associated with it an assertion that its capacity is greater than 12 pages per minute. Each resource class or instance below it must satisfy this criterion. Failure to do so indicates that the model does not satisfy the semantics that were intended.
The resource manager checks the criteria assertions after any change to the criteria assertions or to the attribute values of a resource, and after the addition of any resource connected (transitively) with IS-A links.
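A small sketch of a criteria assertion as a predicate over attribute values, using the FastPrinter example; this representation is an assumption and stands in for whatever assertion notation the resource manager actually uses.

```java
import java.util.List;
import java.util.function.Predicate;

/** Sketch of a criteria assertion: a boolean test over attribute values that every
 *  member below a resource class must satisfy (the FastPrinter example). */
public class CriteriaCheck {
    record Printer(String name, double pagesPerMinute) {}

    public static void main(String[] args) {
        Predicate<Printer> fastPrinterCriterion = p -> p.pagesPerMinute() > 12;

        List<Printer> candidates = List.of(
            new Printer("hallway-laser", 18),
            new Printer("old-inkjet", 6));

        for (Printer p : candidates) {
            if (!fastPrinterCriterion.test(p)) {
                // A violation means the model does not satisfy the modeler's intended semantics.
                System.out.println("Rejecting " + p.name() + " as a FastPrinter member");
            }
        }
    }
}
```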
### 4.4 Modifying the Resource Model
We expect a resource model to be a relatively static entity. Many resources represent physical entities, such as people and equipment. These types of resources change relatively infrequently. As such, we expect these resources to be added to a model by a human resource modeler. We provide a GUI tool to facilitate the creation, reorganization, and visualization of these resource models.
Additionally, we expect some resources to be created, or perhaps destroyed, dynamically during the execution of some activity. For example, a software engineering process produces artifacts, such as design documents, implemented classes, test cases, etc. Each of these could be modeled as a resource. To facilitate this, we provide an API that allows tools to dynamically create, reorganize, and examine the resource models.
5 Using the resource model
Once a resource model has been defined, it can be used to control resource sharing, to plan future resource usage, or to reason about whether activities could be performed in less time by increasing the number of resources available. In this section, we describe the operations provided by the resource manager that allow such reasoning to be performed. It is important to keep in mind that the activities for which the resources are used are defined and controlled from outside the resource manager. The responsibility of the resource manager is to provide information about the types and availability of resources and to track their usage.
5.1 Identifying Needed Resources
The first step in using a resource model is to determine what resources exist and how well they match the needs of the activity to be performed. To do this, a resource user specifies the name of a resource class and a query over the attribute values contained by resources of that class. The resource manager then returns a collection of resource instances that match that resource class and query. To find matching resources, the resource manager finds all instances connected transitively via IS-A links from the named resource class. The query is applied to each instance to see if it satisfies the necessary conditions and, if so, that instance is added to the set to be returned.
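A sketch of this identification step, assuming the IS-A links and instance attributes are held in simple maps; the query is a predicate over an instance's attribute values. All structures and names are illustrative assumptions.

```java
import java.util.*;
import java.util.function.Predicate;

/** Sketch of resource identification: collect all instances transitively below a named
 *  class via IS-A links and keep those satisfying the query. Structures are illustrative. */
public class Identification {
    // className -> direct IS-A children (subclasses or instance names)
    static Map<String, List<String>> isAChildren = Map.of(
        "Printer", List.of("FastPrinter", "old-inkjet"),
        "FastPrinter", List.of("hallway-laser", "lobby-laser"));
    // instance -> attribute map (classes have no entry here)
    static Map<String, Map<String, Object>> instances = Map.of(
        "old-inkjet", Map.of("location", "room 101"),
        "hallway-laser", Map.of("location", "room 203"),
        "lobby-laser", Map.of("location", "lobby"));

    static List<String> identify(String resourceClass, Predicate<Map<String, Object>> query) {
        List<String> matches = new ArrayList<>();
        Deque<String> pending = new ArrayDeque<>(List.of(resourceClass));
        while (!pending.isEmpty()) {
            String node = pending.pop();
            pending.addAll(isAChildren.getOrDefault(node, List.of()));
            Map<String, Object> attrs = instances.get(node);
            if (attrs != null && query.test(attrs)) matches.add(node);
        }
        return matches;
    }

    public static void main(String[] args) {
        // "Find a Printer located in room 203" -> [hallway-laser]
        System.out.println(identify("Printer", a -> "room 203".equals(a.get("location"))));
    }
}
```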
5.2 Acquiring Resources for Use
The resource manager controls the use of resources by keeping track of bindings to activities that are using them. The resource manager has only an abstract notion of the activities that are using them. An activity requests use of a resource by specifying either a specific resource instance by name, or a resource class and query to identify a resource to acquire. If the resource manager can successfully identify a matching resource, the resource is locked for use by the given activity. If more than one resource matches, the resource manager selects one. The acquisition request may also indicate a capacity of the resource that is going to be used if exclusive use of the resource is not required.
If the acquisition request is for a resource that requires another resource, the acquisition only succeeds if all transitively-required resources can also be acquired. Similarly, if the resource is a Whole, the corresponding Parts must also be available for the acquisition to succeed.
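The all-or-nothing character of acquisition can be sketched as follows: a resource is locked for an activity only if every transitively-required resource and every Part can also be locked. The maps, names, and the check-then-lock strategy are simplifying assumptions.

```java
import java.util.*;

/** Sketch of all-or-nothing acquisition: a resource is locked for an activity only if
 *  every transitively-required resource (and every Part of a Whole) can also be locked.
 *  Maps and names are illustrative. */
public class AcquisitionSketch {
    static Map<String, List<String>> requiresAndParts = Map.of(
        "VisualC++", List.of("build-server"),
        "Team1", List.of("Frank", "Dave"));
    static Map<String, String> lockedBy = new HashMap<>();   // resource -> activity

    static boolean acquire(String resource, String activity) {
        List<String> toLock = new ArrayList<>();
        Deque<String> pending = new ArrayDeque<>(List.of(resource));
        while (!pending.isEmpty()) {
            String r = pending.pop();
            if (lockedBy.containsKey(r)) return false;        // someone else holds it: acquisition fails
            toLock.add(r);
            pending.addAll(requiresAndParts.getOrDefault(r, List.of()));
        }
        toLock.forEach(r -> lockedBy.put(r, activity));       // lock everything only when all are free
        return true;
    }

    public static void main(String[] args) {
        System.out.println(acquire("VisualC++", "bug-fix-42"));   // true: license and build-server locked
        System.out.println(acquire("build-server", "nightly"));   // false: already in use
    }
}
```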
Note that in general, the resource manager is not actually able to prevent unauthorized use of the resources as their use is done externally to the resource manager. For example, a person can be given an assignment without the resource manager being informed. While this cannot be prevented, it does compromise the resource manager's ability to assist in planning and scheduling.
5.3 Reserving Resources for Future Use
Acquisition results in immediate locking of a resource for use. Reservation supports the ability to plan future use of a resource. As with acquisition requests, reservation requests must specify the resource instance to reserve or a resource class and query as well as the capacity required. In addition, a reservation request must identify the time in the future that the reservation is for and the duration of time that it is for. If there is exactly one matching resource instance available at that time, that resource instance is reserved. If there is a schedulable resource class for which all its resource instances satisfy the query, the reservation is made at the level of the schedulable resource instance. In this way, the selection of a specific resource is deferred until acquisition time. This has the potential to increase the overall utilization of the resources since a later request might require a specific resource and by not binding a specific resource instance at reservation time, it is more likely that the more specific request can be satisfied.
The resource manager creates schedulable resource classes dynamically to allow late binding of reservation requests even when the matching resource instances do not wholly comprise an existing schedulable resource class. For example, suppose our resource model had a class named Printer and all printer instances in the building were direct children of the Printer class. A printer reservation request might use a query that stipulated a specific value for the location attribute. If there was a single printer at that location, that instance would be reserved. If there were multiple printers at that location and at least one printer at a different location, a schedulable resource class would be created dynamically to represent the co-located printers and a reservation would be placed on the class.
Reservations automatically reserve required resources and part resources in a manner similar to acquisitions.
In order to use a reserved resource, an acquisition request must be made identifying the reservation that is to be converted into an acquisition.
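A sketch of a reservation record with its start time, duration, deferred binding, and expiry behaviour, and of its conversion into an acquisition; the types and the encoding of the expiry rule are assumptions.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Optional;

/** Sketch of a reservation: a future hold on either a specific instance or a schedulable
 *  class (binding deferred), later converted into an acquisition. Types are illustrative. */
public class ReservationExample {
    record Reservation(String resourceOrClass, Instant start, Duration duration, double capacity) {
        boolean expired(Instant now) { return now.isAfter(start.plus(duration)); }
    }

    /** Convert a reservation into an acquisition; an expired, never-converted reservation is released. */
    static Optional<String> convertToAcquisition(Reservation r, Instant now, String chosenInstance) {
        if (r.expired(now)) return Optional.empty();          // automatically released
        return Optional.of(chosenInstance);                   // late binding happens only now
    }

    public static void main(String[] args) {
        Instant now = Instant.now();
        Reservation r = new Reservation("Printer@room203", now.plus(Duration.ofHours(1)),
                Duration.ofMinutes(30), 1.0);
        System.out.println(convertToAcquisition(r, now.plus(Duration.ofMinutes(70)), "hallway-laser"));
    }
}
```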
5.4 Releasing Resources
If a resource is acquired or reserved, it can be made available again for others to use by releasing the resource. A reserved resource is automatically released if the duration of its reservation expires and it has never been converted to an acquisition.
5.5 Refining Usage of Part Resources
In general, a resource cannot be reserved or acquired if there is insufficient capacity available given its current reservations and acquisitions. An exception to this is in the use of Part resources. The main purpose of the Whole-Part relation is to allow a Whole resource to be reserved/acquired resulting in all its parts being similarly reserved/acquired, but then to allow the initial reservation/acquisition to be refined to a collection of reservations/acquisitions on the Parts. For example, imagine a project management tool that assigns the design of a software system to a team of designers. This ensures that each member of the team is reserved for that task. As the design progresses, the large design assignment will be refined into more specific tasks, each of those given to a smaller team or an individual.
To support refinement, Parts may be reserved/acquired within the context of an existing reservation/acquisition of the Whole. In these cases the reservation/acquisition are made with respect to the capacity already reserved by the Whole reservation/acquisition and not with respect to the generally available capacity of the Parts.
Note that unlike aggregation found in object-oriented design notations, our Whole-Part relation does not denote that the part is entirely owned by the Whole. In this way, an individual might devote 50% of their time to one team and 50% of their time to another team, for example.
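The capacity accounting behind refinement can be sketched as follows, assuming hours-per-week capacities and the 50%/50% team split mentioned above; the numbers and the bookkeeping are illustrative only.

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of refining a Whole reservation into Part reservations: the Part's use is charged
 *  against capacity already held by the Whole, not against its generally available capacity.
 *  Numbers and names are illustrative. */
public class Refinement {
    public static void main(String[] args) {
        // Frank has 40 hours/week; 50% is already reserved through Team1, 50% through Team2.
        Map<String, Double> reservedViaWhole = new HashMap<>(Map.of("Team1", 20.0, "Team2", 20.0));

        // Refining Team1's design assignment: give Frank a 15-hour sub-task within Team1's context.
        double subTask = 15.0;
        if (subTask <= reservedViaWhole.get("Team1")) {
            reservedViaWhole.put("Team1", reservedViaWhole.get("Team1") - subTask);
            System.out.println("Sub-task charged to Team1's existing hold; remaining: "
                    + reservedViaWhole.get("Team1") + " hours");
        } else {
            System.out.println("Refinement exceeds the capacity held by the Whole reservation");
        }
    }
}
```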
6 Related work
Other resource modeling and specification work has been done in such resource sensitive application areas as software process, operating systems, artificial intelligence planning and management. The approaches in these areas have some similarities to our own work, as they concern themselves with such similar problems as the coordination of activities that can span long time periods.
6.1 Related work in software process
There have been a number of software process modeling and programming languages and systems that have addressed the need to model and manage resources. Among the most ambitious and comprehensive have been APEL [10], and MVP-L [15], both of which have attempted to incorporate general resource models and to use resource managers to facilitate process execution. APEL’s approach is similar to ours in that APEL deals with resource management as a separate issue that is orthogonal to other process issues. APEL provides a way to specify an organizational model that includes human resources and their aggregates (teams). It also introduces the notion of a position that is very similar to our notion of a class in that it tags human resources according to their skill sets. APEL’s roles define the capacity in which a resource is used by a specific activity. APEL’s resource modeling approach, however, seems to be less general and comprehensive than our model of resources. In addition it does not seem to incorporate any provision for support of scheduling.
There are a number of other process languages that provide for the explicit modeling of different sorts of resources. Merlin [6], for example, provides rules for associating tools and roles (or specific users) with a work context (which may be likened to a Little-JIL step). Some others that offer similar limited capabilities are ALF [5], Statemate [9], and ProcessWeaver [1]. In all of these cases, however, the sorts of resources that are modeled are rather limited in scope.
6.2 Related work in operating systems
The problem of scheduling resources has been extensively studied in the field of operating systems (for example, [4], Chapter 6.4). The most common resources in this problem domain include peripheral devices and parts of code or data that require exclusive access. The differences between the needs of resource management in operating systems and software engineering (or artificial intelligence) arise from the fact that operating system resources:
- are used for much shorter periods of time (hence, more elaborate notions of availability are not usually needed).
- are generally far less varied (e.g. Humans are not considered to be resources in operating systems)
- resource usage is much less predictable. Independent programs compete for the same resources and are executed at unpredictable times.
Operating systems research on resource management per se usually focuses on scheduling techniques of a specific resource (such as a CPU or a hard disk as in [7] or [17]). The similarities in purposes of resource modeling between software engineering and operating systems fields appear only in research on admissions control ([3], [2], [16], [13]) which is more in the domain of networks research. For instance, both of these fields require a resource environment abstraction that can model several kinds of resources. They also require a way to satisfy a general resource request (as opposed to the request for a specific instance). This means that a search mechanism is needed in both cases. The resource modeling approaches in the field of networks research are somewhat similar to our approach in that they also often introduce a hierarchy of resources and provide some functionality (i.e., operations for search, reservation and acquisition of resources). It is interesting to note that the authors of [3] and [2] also saw the need to make the resource model independent from the model representing tasks (applications in their terminology). Resource models in this domain, however, seem to lack flexibility and generality.
6.3 Related work in AI planning systems
Probably, the closest resource modeling approach to ours is suggested in the DITOPS/OZONE system. OZONE is a toolkit for configuring constraint-based scheduling systems [18]. DITOPS is an advanced tool for generation, analysis and revision of crisis-action schedules that was developed using the OZONE ontology. The closeness is evidenced by the fact that OZONE also incorporates
a definition of a resource, contains an extensive predefined set of resource attributes, uses resource hierarchies, offers similar operations on resources, and also resource aggregate querying. We believe that our resource modeling approach places a greater emphasis on human resources in the predefined attributes and allows for an implementation that is easier to adapt to different environments.
The Cypress integrated planning environment is another example of a resource-aware AI planning system. It integrates several separately developed systems (including a proactive planning system (SIPE-2 [20]) and a reactive plan execution system). The ACT formalism [12] used for proactive control specification in the Cypress system has a construct for resource requirements specification. It allows the specification of only a particular resource instance. The resource model does not allow for resource hierarchies and the set of predefined resource attributes is rigid and biased towards the problem domain (transportation tasks).
6.4 Related work in management
An example of a resource modeling approach in a management system is presented in the Toronto Virtual Enterprise (TOVE) project [8]. This approach suggests a set of predefined resource properties, a taxonomy based on the properties and a set of predicates that relate the state with the resource required by the activity. The predicates have a rough correspondence to some methods of our resource manager. It is very likely that our resource manager would satisfy the functionality requirements for a resource management system necessitated by the activity ontology suggested in the TOVE project.
6.5 Related work in other distributed software systems
The Jini distributed software system [19], which is currently being developed by Sun Microsystems, appears to employ a resource modeling approach somewhat similar to ours. The Jini system is a distributed system based on the idea of federating groups of users and the resources required by those users. The overall goal of the system is to turn a network into a flexible, easily administered tool on which resources can be found by human and computational clients. One of the end goals of the Jini system is to provide users easy access to resources. Jini boasts the capability for modeling humans as resources, allows for resource hierarchies, and provides ways to query a resource repository using a resource template, which is very similar to resource queries in our suggested approach. Because information about Jini is limited, it is difficult to say what kind of resource model is used. It is also difficult to see how easily Jini's resource model can be adapted to new environments.
7 Evaluation and Future Work
In an earlier paper [14] we describe some of the experience we have had in applying this resource management system. Most of this experience has come from integrating this system with the Little-JIL [11] process programming system, and using that integrated system to program processes for robot coordination, data mining, and negotiation, as well as software engineering processes such as collaborative design and regression testing. This past experience has confirmed that the features of the system we have developed are of substantial value, and that the approaches we are taking seem appropriate. Our experiences have resulted in the creation and modification of our initial notions and decisions. For example, the Whole-Part relation was incorporated into the resource modeling capability after the need became apparent in trying to address the problem of programming the design of software by teams. These experiences have served to reinforce our commitment to further experimentation and evaluation, which we believe will lead to further improvements to this system.
Our experiences have also encouraged us to continue to enhance the features of this system. We are planning to focus more effort on the design and implementation of better languages for the definition of the resource model and the specification of resource requirements. Currently we rely primarily on the use of Java for these specifications, and we seek languages that will be more accessible to non-programmer users.
We would also like to see the creation of a simulation system that would use the process and resource specification systems we have developed to draw inferences about projected resource needs and utilization. A simulation system of this sort would enable users to gauge the gains and losses that would be likely to result from making changes in resource availability, or from making changes in the process itself.
We would also like to see the resource management system enhanced with a capability for modeling the skilling-up of humans as a result of their execution of certain process steps. Our current system assumes that agents have fixed skill levels. But clearly humans will learn as they go through a lengthy process. As a consequence their skill levels should go up. This is clearly a desirable modeling feature and should be incorporated in a future version of our system.
As a more distant goal we see the addition of a best-match resource querying mechanism. Ideally, it should be able to find the closest match to a resource query based on user-specified criteria. Currently, a query is satisfied by returning all the matching resources. For such a mechanism to operate we also need to devise metrics that would allow us to measure “distances” between resources (to be exact, their attribute values) and resource queries.
Finally, we need to gain more experience in the use of the resource manager in support of other systems. To this end, we intend to continue the development of processes that use resources, to evaluate how well the resource management system addresses the needs of the process execution environment and specific processes, and to determine how well the resource management system can facilitate planning and analysis activities.
8 Acknowledgments
This research was partially supported by the Air Force Research Laboratory/IFTD and the Defense Advanced Research Projects Agency under Contract F30602-97-2-0032. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements either expressed or implied, of the Defense Advanced Research Projects Agency, the Air Force Research Laboratory/IFTD, or the U.S. Government.
References
This article was processed using the \LaTeX\ macro package with LLNCS style
Aonix
Modelling and generating COM components with StP/UML and ACD
Software through Pictures®
White Paper
May 2000
Oliver Maus, Aonix GmbH
Executive Summary
Code generation from application development tools promised the earth and actually delivered just a few stones. With a new approach and proven technology, it is now possible to leverage modelling tools to get meaningful amounts of high quality code automatically generated.
This white paper describes the application of this technology in the modelling of COM components with Software through Pictures (StP) for the UML and the generation of code using the Architecture Component Development technology (ACD).
The modelling of components is thus taken beyond the traditional role of building models for documentation purposes; the models are enriched, in accordance with the UML, into compact, re-usable models that can be used for application generation.
**More Compact Models, and more Generated Code - How?**
Demonstrably larger amounts of much more meaningful source code can be generated from UML models by capturing the architectural and technical aspects of an application in a set of templates, which are automatically applied when generating code. Putting this detail into templates rather than into the UML results in more compact models that are easier to understand and maintain and that more accurately reflect the business and application requirements.
This approach helps to achieve a level of re-use of these templates and models on subsequent projects. Ultimately this can mean increased quality, better usage of people resources, and a greater likelihood of meeting project cost and schedules.
The starting point is to get a clear understanding of what needs to be generated - using the target language and the way in which it will be used. The next step is to make clear what elements can be modelled with UML, and how these are to be represented in the models. The third and last step is to write templates that generate the source code from the model elements using these strategies.
**Mapping UML to COM**
COM (Component Object Model) is a binary standard for remote object invocation between clients and servers. It is based on interfaces that are separated from the implementations, and its components are binary, reusable components that are used via standardised mechanisms.
The interfaces mentioned above are described in an “Interface Description Language” (IDL) and stored in one or several files with an extension of .idl.
The Interface Description Language
This is an example of an IDL file describing the interface of a simple timeserver:
```
// IDL for the time server: two interfaces and the coclass TimeServer,
// grouped into the type library TimeLib.
import "unknwn.idl";

[ object,
  uuid(1a658790-982e-11d3-b844-00008606a51e),
  helpstring("ITime Interface"),
  pointer_default(unique)
]
interface ITime : IUnknown
{
    HRESULT setSwitchTime( [in] int hour, [in] int minute);
    HRESULT getSwitchTime( [out] int* hour, [out] int* minute);
};

[ object,
  uuid(1a6ba450-982e-11d3-b844-00008606a51e),
  helpstring("ITimeString Interface"),
  pointer_default(unique)
]
interface ITimeFormat : IUnknown
{
    HRESULT getEuropeanTime( [out] int* hour, [out] int* minute);
    HRESULT getAmericanTime( [out] int* hour, [out] int* minute,
                             [out] int* am);
};

[ uuid(1a71c110-982e-11d3-b844-00008606a51e),
  version(1.0),
  helpstring("Library for time server")
]
library TimeLib
{
    importlib("stdole32.tlb");

    [ uuid(1a7656a0-982e-11d3-b844-00008606a51e),
      helpstring("TimeServer Class")
    ]
    coclass TimeServer
    {
        [default] interface ITime;
        interface ITimeFormat;
    }
};
```
Two interfaces, ITime and ITimeFormat, are specified. Both interfaces have two methods; for example, interface ITime has the methods setSwitchTime and getSwitchTime. Interfaces can be seen as abstract classes that have no attributes. Interfaces directly or indirectly inherit from the base interface IUnknown. IUnknown itself has three methods: QueryInterface, AddRef and Release, but more about this later.
Interfaces must be implemented in at least one class, which inherits from them; such a class therefore implements all methods of the interfaces it inherits from. Because interfaces can be seen as abstract classes, the implementation class has to implement all methods of the interface hierarchy up to the base interface IUnknown. The keyword coclass denotes the implementation class (sometimes referred to as the object class), and this term will be used throughout the document. The class TimeServer therefore inherits from ITime and ITimeFormat and has to implement their methods as well as those of IUnknown.
The declarations of the coclass and its related interfaces are grouped by the keyword library into a so-called type library.
All type libraries, classes and interfaces must be globally unique, which is why every element is identified by a Universally Unique Identifier (UUID). The COM terminology for these identifiers is Globally Unique Identifier (GUID). Depending on the context they are called Class Identifier (CLSID), Interface Identifier (IID) or Library Identifier (LIBID). An example of such an identifier is 1a658790-982e-11d3-b844-00008606a51e.
The necessary UUIDs are part of the model, because they may not be changed. They are provided for the interfaces, the coclass and the “TimeLibrary” package, and are stored in the annotation item “UniqueID”, which belongs to the note “object”.
Each element in an IDL file is preceded by some (COM) attributes, enclosed in square brackets. Therefore, the UUIDs are attributes of the elements shown in Table 1 – COM Constructs and Mapping to UML.
If a class implements more than one interface, one of the interfaces must be the default interface denoted by [default] in front of that interface.
Parameters of interface methods can have various COM-attributes as well, e.g. [in], [out], [in, out], etc. This information tells COM how to marshal parameters (if marshalling is required).
Let’s start collecting requirements and map them to StP/UML model elements:
COM Construct | Mapping to UML Model Element
--- | ---
Interfaces | Either classes with stereotype "Interface" or interface symbol (bubble), depending on the context
Interface methods | Methods(operations) in class tables, with parameters and return types
Implementation class (coclass) for interfaces | Class symbol with stereotype "COMObject"
Implementation relation between coclass and interfaces | Implements link between coclass and interfaces in class diagram
IDL attributes ([...]) | Tagged values, added to the appropriate modelling element
Type library | Package symbol with stereotype "TypeLibrary"
**Table 1 – COM Constructs and Mapping to UML**
**The Interface Implementing Class**
An interface represents an abstract class concept, and its methods must be implemented in a derived class that is already named in the interface file with the keyword `coclass`. Therefore, this provides the name of the interface implementing class as well as its method declarations.
The header file for the coclass would look like the following:
```cpp
#include "TimeLib.h"   // Header generated by MIDL.exe

class TimeServer : public ITime, public ITimeFormat
{
private:
    ULONG localRefCounter;

public:
    TimeServer() { localRefCounter = 0; }

    // Root interface (IUnknown) methods
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv);
    STDMETHODIMP_(ULONG) AddRef(void);
    STDMETHODIMP_(ULONG) Release(void);

    // Interface ITime
    STDMETHODIMP setSwitchTime(int hour, int minute);
    STDMETHODIMP getSwitchTime(int* hour, int* minute);

    // Interface ITimeFormat
    STDMETHODIMP getEuropeanTime(int* hour, int* minute);
    STDMETHODIMP getAmericanTime(int* hour, int* minute, int* am);
};
```
The parameter types of our own interface methods are in pure C++ style, without the marshalling information. This poses a further requirement for the code generator: a single representation of the parameter types in the model that maps to the appropriate target style as part of the generation process.
The implementation file comprises only the methods of interface IUnknown. These methods of IUnknown serve two purposes:
*QueryInterface* is needed for the client to ask the object for a special interface different to the one it is already using. The required interface is specified in parameter *riid*; if the interface is available, a pointer to it is returned in parameter *ppv*.
*AddRef* and *Release* are used for reference counting. If a client has access to one of the interfaces of the object, it has to call AddRef, which increases an internal counter. If the client no longer uses the object, it has to call *Release* to decrement the counter. When the counter reaches zero, the object is no longer needed and can remove itself from memory.
Since it is known what has to happen in the methods of IUnknown, it is possible to provide a standard implementation of these functions.
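As an illustration, a minimal sketch of what such a standard implementation could look like for TimeServer follows. It assumes that the IID constants (IID_ITime, IID_ITimeFormat) come from the MIDL-generated TimeLib.h; the reference counting is deliberately kept non-thread-safe to match the simple style of the header above, so this is a sketch rather than the code the generator actually emits.

```cpp
// Sketch of the standard IUnknown implementation that a generator
// could emit for TimeServer (IIDs assumed to come from TimeLib.h).
#include <windows.h>
#include "TimeLib.h"
#include "TimeServer.h"

STDMETHODIMP TimeServer::QueryInterface(REFIID riid, void** ppv)
{
    if (ppv == NULL)
        return E_POINTER;
    if (riid == IID_IUnknown || riid == IID_ITime)
        *ppv = static_cast<ITime*>(this);
    else if (riid == IID_ITimeFormat)
        *ppv = static_cast<ITimeFormat*>(this);
    else {
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    // Hand out a new reference for the pointer just returned.
    reinterpret_cast<IUnknown*>(*ppv)->AddRef();
    return S_OK;
}

STDMETHODIMP_(ULONG) TimeServer::AddRef(void)
{
    // A production implementation would typically use InterlockedIncrement.
    return ++localRefCounter;
}

STDMETHODIMP_(ULONG) TimeServer::Release(void)
{
    ULONG count = --localRefCounter;
    if (count == 0)
        delete this;   // the object removes itself from memory
    return count;
}
```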
**The Class Factory**
There are different ways in which COM components can be used. The simplest way is as an *In-Process Server*, which is realised as a DLL and is loaded into the same address space as the client application. The second way is as a *Local Server*, usually an EXE file on the same machine as the client, but clearly with its own address space. The third possibility is as a *Remote Server*, which can be either a DLL or an EXE on a remote machine. The client need not know in which way the requested server is realised (although it can specify a location for the COM server).
Since a COM component can exist in a number of different ways and the client does not need to specify the location of the server, a mechanism must be present to instantiate an object with the specified interface, even if it is located in a different address space. This mechanism is realised using the *class factory* concept.
Class factories are ordinary COM classes that support a special interface called *IClassFactory*, which inherits from IUnknown. For each COM class that can be addressed by a client, a class factory must be provided. Therefore, the first step for a client is to obtain a pointer to the appropriate class factory, which can instantiate as many COM objects as needed. For this purpose the class factory provides a method called *CreateInstance*. Besides the aggregation parameter *punkOuter*, this method expects two parameters of interest – *riid* as identification of the interface of the COM object the client wants to address, and *ppv* to deliver a pointer to that interface. The client programmer can then use *QueryInterface* to ask for the appropriate interface. The second method, *LockServer*, keeps the class factory in memory if it is used frequently.
Looking at the header of such a class factory:
```cpp
#include "TimeLib.h" // Header generated by MIDL.exe
#include "TimeServer.h"
class TimeServerFactory : public IClassFactory
{
public:
// IUnknown
STDMETHODIMP QueryInterface (REFIID riid, void** ppv);
STDMETHODIMP_(ULONG) AddRef(void);
STDMETHODIMP_(ULONG) Release(void);
// IClassFactory
STDMETHODIMP CreateInstance (LPUNKNOWN punkOuter, REFIID iid, void **ppv);
STDMETHODIMP LockServer (BOOL fLock);
TimeServerFactory() {localRefCounter=0;}
private:
ULONG localRefCounter;
};
```
*TimeServerFactory* has to implement the methods *CreateInstance* and *LockServer* from IClassFactory as well as the methods from IUnknown. All of these methods can be delivered as standard implementations in the implementation file. Therefore, class factories can be generated virtually 100% automatically.
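For illustration, a minimal sketch of such a generated factory implementation is shown below. The include name TimeServerFactory.h, the global lock counter g_lockCount and the refusal of aggregation are assumptions of this sketch, not details taken from the ACD templates; the factory's own IUnknown methods would follow the same pattern shown earlier.

```cpp
// Sketch of a generated class factory implementation (illustrative only).
// TimeServerFactory.h is assumed to include TimeLib.h and TimeServer.h.
#include <windows.h>
#include "TimeServerFactory.h"

static LONG g_lockCount = 0;   // assumed global server lock counter

STDMETHODIMP TimeServerFactory::CreateInstance(LPUNKNOWN punkOuter,
                                               REFIID iid, void** ppv)
{
    *ppv = NULL;
    if (punkOuter != NULL)
        return CLASS_E_NOAGGREGATION;   // aggregation not supported here

    TimeServer* obj = new TimeServer();
    // Cast to the default interface (ITime) and ask the new object for
    // the interface the client actually requested.
    HRESULT hr = static_cast<ITime*>(obj)->QueryInterface(iid, ppv);
    if (FAILED(hr))
        delete obj;   // no reference was handed out
    return hr;
}

STDMETHODIMP TimeServerFactory::LockServer(BOOL fLock)
{
    if (fLock)
        InterlockedIncrement(&g_lockCount);
    else
        InterlockedDecrement(&g_lockCount);
    return S_OK;
}
```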
**The Type Library**
The include statement “TimeLib.h” appears in the headers for both the implementation class and class factory.
As discussed earlier, the declarations in the IDL file can be collected into a *type library*. This is a binary that can be delivered together with the COM server or, alternatively, can be obtained by compiling the IDL file with `midl.exe`, which is shipped with Microsoft's Developer Studio. A second output of `midl.exe` is a header file with the same name as the IDL file, containing the interfaces in a COM-specific way, as well as *proxy* (client) and *stub* (server) method declarations. Including this header is necessary for our own generated headers of implementation classes and class factories.
Capturing COM components in UML Class Diagrams
Now that we have established some of the heuristics for mapping from COM constructs to UML model elements, we can turn our attention to the model representation and the role that this plays in the generation of source code.
It is important to point out that the standard UML extensibility mechanisms such as stereotypes and tagged values are used, rather than proprietary notations and symbols. This is a key part of adapting the UML to support particular architectures or target languages, and as such is fully supported in the Software through Pictures tool suite.
Modelling the Interfaces and Mapping these to Code
In the class diagram below, the inheritance hierarchy of the interfaces is modelled using class symbols with the stereotype “Interface”. This stereotype is necessary so that the code generator knows that it is dealing with an interface and not a class, and can thus generate code accordingly.
Diagram 1 – Modelling Interfaces in Class Diagrams
It is generally accepted good practice that class diagrams should not be used to model the internals or details of classes or interfaces; thus, to specify the methods of the interfaces, a class table is used, one for each interface. The class table captures the implementation view.
The interface IUnknown is marked as external in its properties, so that the ACD generator detects that it is not necessary to generate any code for it. Since the methods of IUnknown are standard, there is no need to specify them in the class table; they can be fully generated.
Diagram 2 – Class Table for an Interface
Interface ITime has two methods, setSwitchTime and getSwitchTime, which both have two parameters and the same return type HRESULT.
The table below describes the stereotypes needed for modelling interfaces, to which UML model elements they are added, and how they map to source code. The code generator can extract this information from the model to generate source code correctly from the model representation.
Stereotype | Modelling element | Mapping
--- | --- | ---
TypeLibrary | Package | **IDL**: keyword library, whereby the package label represents the name of the library. The package label is used as the name for the IDL file as well. **CPP**: the package label is used for the automatic include statement at the top of both headers (for the coclass and the factory class).
Interface | Class | **IDL**: keyword interface; the label of the class symbol is used as the interface name. **CPP**: in the header of the coclass, the class label of the interface is used as a base class for the coclass. Therefore, the coclass inherits from the interface.

**Table 2 – Stereotypes for Modelling Interfaces and Mapping to Source Code**
**Type Mapping**
The parameter types are of special interest, because there is the issue of generating them with the proper type information for both the IDL file as well as the implementation file.
As mentioned earlier, the IDL file needs some extra marshalling information, so the abstract data types `IN_INT` and `OUT_PINT` are used. These abstract types are mapped to the appropriate concrete data types during code generation. What we now need is a clear type mapping rule. (Note: the class table for interface `ITimeFormat` looks very similar, so there is no need to show it in a separate diagram.)
[Diagram 3 – Modelling Data Types for Type Mapping]
Abstract data types are modelled as classes with the stereotype "DataType". The labels of the class symbols represent the abstract data types that are used as parameter types in the class tables above; the mapping is defined with tagged values (shown in curly braces `{}`).
The table below describes the stereotype needed for modelling the type mapping, to which UML model elements it is added and how it maps to source code.
Stereotype | Modelling element | Mapping
--- | --- | ---
DataType | Class | **IDL**: used for mapping to the type denoted by the value associated with the tag IDL. **CPP**: used for mapping to the type denoted by the value associated with the tag CPP.

**Table 3 – Stereotypes for Modelling Type Mapping and Mapping to Source Code**
**IDL Specific Attributes**
All the elements in the IDL file could have COM attributes (enclosed in square brackets). This information is captured in the model as *tagged values*.
The diagram below shows the property dialog for interface `ITime`, with each attribute added to the *Tagged Values* field. Since `ITime` should be the default interface, the tagged value "default" is added, but without a value. (This conforms to the UML specification: a boolean tagged value may be written without the value TRUE; if its value were FALSE, the tag would be omitted entirely.)

The property dialog for the package representing the type library is shown below, and contains the tagged values for the helpstring and the version attributes of the IDL file.
Diagram 5 – Dialog window for adding tagged values to package symbol
In the table below you will find the tagged values needed for modelling the interfaces, to which UML model elements they are added and how they map to source code.
<table>
<thead>
<tr>
<th>Tagged Value</th>
<th>Modelling element</th>
<th>Mapping</th>
</tr>
</thead>
<tbody>
<tr>
<td>default (no value)</td>
<td>Interface/Class</td>
<td><strong>IDL</strong>: denotes the default interface in the coclass section</td>
</tr>
<tr>
<td></td>
<td></td>
<td><strong>CPP</strong>: in the CreateInstance method of the factory implementation it's that interface to which the new instance of coclass is cast. The QueryInterface method of this interface is called and returns a pointer to itself in parameter ppv.</td>
</tr>
<tr>
<td>helpstring=&lt;string&gt;</td>
<td>Interface</td>
<td><strong>IDL</strong>: helpstring attribute</td>
</tr>
<tr>
<td>helpstring=&lt;string&gt;</td>
<td>package</td>
<td><strong>IDL</strong>: helpstring attribute</td>
</tr>
<tr>
<td>version=&lt;string&gt;</td>
<td>package</td>
<td><strong>IDL</strong>: version attribute</td>
</tr>
</tbody>
</table>
Table 4 – Tagged Values for adding COM Attributes to Modelling Elements and Mapping to Source Code
Modelling the coclass
So far we have specified the interfaces, introduced the package symbol for the type library, and set all the necessary COM attributes for the IDL file as tagged values in the model. The type mapping for the abstract data types has been specified as well.
The last step that is required is to model the implementation class for the interfaces and introduce the information for its class factory.
The interfaces are drawn with UML interface symbols, having implements links to a class labelled TimeServer and stereotyped as "COMObject". This is the way we express that the class TimeServer is the (co)class implementing the interfaces ITime and ITimeFormat.
Now only the factory class information has to be specified. This is done as a tagged value associated with the coclass it has to instantiate. In our example, class TimeServer has the tagged value “Factory=TimeServerFactory”, where TimeServerFactory is the name of the factory class to be generated. (There are exceptional cases in which a COM class has no class factory, so it was decided to add the Factory tag to make it explicit when a class factory should be generated.)
Now we have all the necessary model information to generate the code following our requirements.
The table below lists the stereotypes and tagged values needed for modelling the coclass, the UML model elements to which they are added, and how they map to source code:
<table>
<thead>
<tr>
<th>Stereotype</th>
<th>Modelling element</th>
<th>Mapping</th>
</tr>
</thead>
<tbody>
<tr>
<td>COMObject</td>
<td>Class (coclass)</td>
<td><strong>IDL</strong>: keyword coclass, the label of the class represents the name of the coclass.</td>
</tr>
<tr>
<td></td>
<td></td>
<td><strong>CPP</strong>: the class label is used as the name for the class implementing the associated interfaces as well as the name for the appropriate C++ files (header and implementation). This is also the name of the class that is instantiated by the class factory (see also the tagged value below).</td>
</tr>
</tbody>
</table>
Table 5 - Stereotype for Modelling the coclass and Mapping to Source Code
<table>
<thead>
<tr>
<th>Tagged Value</th>
<th>Modelling element</th>
<th>Mapping</th>
</tr>
</thead>
<tbody>
<tr>
<td>Factory=&lt;name of factory class&gt;</td>
<td>Class (coclass)</td>
<td><strong>CPP</strong>: value used for the name of the class factory as well as the file name for the factory (header and implementation).</td>
</tr>
<tr>
<td>helpstring=&lt;string&gt;</td>
<td>Class (coclass)</td>
<td><strong>IDL</strong>: helpstring attribute</td>
</tr>
</tbody>
</table>
Table 6 – Tagged values as additional information to the coclass
(Re)Using a COM component
The modelling of COM components and the generation of code for them has so far demonstrated a server that makes services available to clients. How can clients use the services of such a component? Normally a client uses a COM component through its interfaces. What must be modelled is a client class and a relationship to the used interface.
Take a look at the following class diagram:

This class diagram could be part of the StP system holding the client model. *Clockwork* is a class that uses a service from our COM component that is made available by the interface *ITime*. The relationship between *Clockwork* and *ITime* is of type *dependency*. We have therefore expressed the usage of the service by the client in modelling terms.
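In code, a client such as Clockwork would typically obtain the ITime interface through the standard COM activation mechanism. The following minimal sketch assumes that CLSID_TimeServer and IID_ITime are available from the MIDL-generated headers; it is illustrative and not part of the generated code described in this paper.

```cpp
// Minimal sketch of a client using the TimeServer component via ITime.
// CLSID_TimeServer and IID_ITime are assumed to come from the
// MIDL-generated TimeLib.h / TimeLib_i.c.
#include <windows.h>
#include <cstdio>
#include "TimeLib.h"

int main()
{
    HRESULT hr = CoInitialize(NULL);
    if (FAILED(hr))
        return 1;

    ITime* pTime = NULL;
    // Let COM locate the server, drive its class factory and return ITime.
    hr = CoCreateInstance(CLSID_TimeServer, NULL, CLSCTX_ALL,
                          IID_ITime, reinterpret_cast<void**>(&pTime));
    if (SUCCEEDED(hr)) {
        pTime->setSwitchTime(12, 30);   // use the service
        pTime->Release();               // release the reference
    } else {
        std::printf("CoCreateInstance failed: 0x%08lx\n",
                    static_cast<unsigned long>(hr));
    }

    CoUninitialize();
    return 0;
}
```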
**Summary**
This paper has demonstrated the effective modelling of, and code generation for, COM components using the UML, a state-of-the-art modelling tool and code generation technology. Significant benefits such as re-use, quality and productivity can be gained from these approaches.
Asymmetric Key Packages
Abstract
This document defines the syntax for private-key information and a content type for it. Private-key information includes a private key for a specified public-key algorithm and a set of attributes. The Cryptographic Message Syntax (CMS), as defined in RFC 5652, can be used to digitally sign, digest, authenticate, or encrypt the asymmetric key format content type. This document obsoletes RFC 5208.
Status of This Memo
This is an Internet Standards Track document.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc5958.
Copyright Notice
Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
1. Introduction

This document defines the syntax for private-key information and a Cryptographic Message Syntax (CMS) [RFC5652] content type for it. Private-key information includes a private key for a specified public-key algorithm and a set of attributes. The CMS can be used to digitally sign, digest, authenticate, or encrypt the asymmetric key format content type. This document obsoletes PKCS #8 v1.2 [RFC5208].
1.1. Requirements Terminology
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
1.2. ASN.1 Syntax Notation
The key package is defined using ASN.1 [X.680], [X.681], [X.682], and [X.683].
1.3. Summary of Updates to RFC 5208
The following summarizes the updates to [RFC5208]:
- Changed the name "PrivateKeyInfo" to "OneAsymmetricKey". This reflects the addition of the publicKey field to allow both parts of the asymmetric key to be conveyed separately. Not all algorithms will use both fields; however, the publicKey field was added for completeness.
- Defined Asymmetric Key Package CMS content type.
- Removed redundant IMPLICIT from attributes.
- Added publicKey to OneAsymmetricKey and updated the version number.
- Added that PKCS #9 attributes may be supported.
- Added discussion of compatibility with other private-key formats.
- Added requirements for encoding rule set.
- Changed imports from PKCS #5 to [RFC5912] and [RFC5911].
- Replaced ALGORITHM-IDENTIFIER with ALGORITHM from [RFC5912].
- Registers application/pkcs8 media type and .p8 file extension.
2. Asymmetric Key Package CMS Content Type
The asymmetric key package CMS content type is used to transfer one or more plaintext asymmetric keys from one party to another. An asymmetric key package MAY be encapsulated in one or more CMS protecting content types (see Section 4). Earlier versions of this specification [RFC5208] did not specify a particular encoding rule set, but generators SHOULD use DER [X.690] and receivers MUST support BER [X.690], which also includes DER [X.690].
The asymmetric key package content type has the following syntax:
```
ct-asymmetric-key-package CONTENT-TYPE ::=
{ AsymmetricKeyPackage IDENTIFIED BY id-ct-KP-aKeyPackage }
id-ct-KP-aKeyPackage OBJECT IDENTIFIER ::=
{ joint-iso-itu-t(2) country(16) us(840) organization(1)
gov(101) dod(2) infosec(1) formats(2)
key-package-content-types(78) 5 }
AsymmetricKeyPackage ::= SEQUENCE SIZE (1..MAX) OF OneAsymmetricKey
OneAsymmetricKey ::= SEQUENCE {
version Version,
privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
privateKey PrivateKey,
attributes [0] Attributes OPTIONAL,
...,
[[2: publicKey [1] PublicKey OPTIONAL ]],
...
}
PrivateKeyInfo ::= OneAsymmetricKey
-- PrivateKeyInfo is used by [P12]. If any items tagged as version 2 are used, the version must be v2, else the version should be v1. When v1, PrivateKeyInfo is the same as it was in [RFC5208].
Version ::= INTEGER { v1(0), v2(1) } (v1, ..., v2)
PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier
{ PUBLIC-KEY,
{ PrivateKeyAlgorithms } }
PrivateKey ::= OCTET STRING
-- Content varies based on type of key. The algorithm identifier dictates the format of the key.
PublicKey ::= BIT STRING
-- Content varies based on type of key. The algorithm identifier dictates the format of the key.
Attributes ::= SET OF Attribute { { OneAsymmetricKeyAttributes } }
```
AsymmetricKeyPackage contains one or more OneAsymmetricKey elements.
The syntax of OneAsymmetricKey accommodates a version number, an indication of the asymmetric algorithm to be used with the private key, a private key, optional keying material attributes (e.g., userCertificate from [X.520]), and an optional public key. In general, either the public key or the certificate will be present. In very rare cases will both the public key and the certificate be present as this includes two copies of the public key.
OneAsymmetricKey renames the PrivateKeyInfo syntax defined in [RFC5208]. The new name better reflects the ability to carry both private- and public-key components. Backwards compatibility with the original PrivateKeyInfo is preserved via version number. The fields in OneAsymmetricKey are used as follows:
- version identifies the version of OneAsymmetricKey. If publicKey is present, then version is set to v2 else version is set to v1.
- privateKeyAlgorithm identifies the private-key algorithm and optionally contains parameters associated with the asymmetric key pair. The algorithm is identified by an object identifier (OID) and the format of the parameters depends on the OID, but the PrivateKeyAlgorithms information object set restricts the permissible OIDs. The value placed in privateKeyAlgorithmIdentifier is the value an originator would apply to indicate which algorithm is to be used with the private key.
- privateKey is an OCTET STRING that contains the value of the private key. The interpretation of the content is defined in the registration of the private-key algorithm. For example, a DSA key is an INTEGER, an RSA key is represented as RSAPrivateKey as defined in [RFC3447], and an Elliptic Curve Cryptography (ECC) key is represented as ECPrivateKey as defined in [RFC5915].
- attributes is OPTIONAL. It contains information corresponding to the public key (e.g., certificates). The attributes field uses the class ATTRIBUTE which is restricted by the OneAsymmetricKeyAttributes information object set. OneAsymmetricKeyAttributes is an open ended set in this document. Others documents can constrain these values. Attributes from [RFC2985] MAY be supported.
- publicKey is OPTIONAL. When present, it contains the public key encoded in a BIT STRING. The structure within the BIT STRING, if any, depends on the privateKeyAlgorithm. For example, a DSA key is an INTEGER. Note that RSA public keys are included in RSAPrivateKey (i.e., n and e are present), as per [RFC3447], and ECC public keys are included in ECPrivateKey (i.e., in the publicKey field), as per [RFC5915].
3. Encrypted Private Key Info
This section gives the syntax for encrypted private-key information, which is used by [P12].
Encrypted private-key information shall have ASN.1 type EncryptedPrivateKeyInfo:
```
EncryptedPrivateKeyInfo ::= SEQUENCE {
encryptionAlgorithm EncryptionAlgorithmIdentifier,
encryptedData EncryptedData }
EncryptionAlgorithmIdentifier ::= AlgorithmIdentifier
{ CONTENT-ENCRYPTION,
{ KeyEncryptionAlgorithms } }
EncryptedData ::= OCTET STRING
```
The fields in EncryptedPrivateKeyInfo are used as follows:
- encryptionAlgorithm identifies the algorithm under which the private-key information is encrypted.
- encryptedData is the result of encrypting the private-key information (i.e., the PrivateKeyInfo).
The encryption process involves the following two steps:
1. The private-key information is encoded, yielding an octet string. Generators SHOULD use DER [X.690] and receivers MUST support BER [X.690], which also includes DER [X.690].
2. The result of step 1 is encrypted with the secret key to give an octet string, the result of the encryption process.
4. Protecting the AsymmetricKeyPackage
CMS protecting content types, [RFC5652] and [RFC5083], can be used to provide security to the AsymmetricKeyPackage:
- SignedData can be used to apply a digital signature to the AsymmetricKeyPackage.
- EncryptedData can be used to encrypt the AsymmetricKeyPackage with symmetric encryption, where the sender and the receiver already share the necessary encryption key.
- EnvelopedData can be used to encrypt the AsymmetricKeyPackage with symmetric encryption, where the sender and the receiver do not share the necessary encryption key.
- AuthenticatedData can be used to protect the AsymmetricKeyPackage with message authentication codes, where key management information is handled in a manner similar to EnvelopedData.
- AuthEnvelopedData can be used to protect the AsymmetricKeyPackage with algorithms that support authenticated encryption, where key management information is handled in a manner similar to EnvelopedData.
5. Other Private-Key Format Considerations
This document defines the syntax and the semantics for a content type that exchanges asymmetric private keys. There are two other formats that have been used for the transport of asymmetric private keys:
- Personal Information Exchange (PFX) Syntax Standard [P12], which is more commonly referred to as PKCS #12 or simply P12, is a transfer syntax for personal identity information, including private keys, certificates, miscellaneous secrets, and extensions. OneAsymmetricKey, PrivateKeyInfo, and EncryptedPrivateKeyInfo can be carried in a P12 message. The private key information, OneAsymmetricKey and PrivateKeyInfo, are carried in the P12 keyBag BAG-TYPE. EncryptedPrivateKeyInfo is carried in the P12 pkcs8ShroudedKeyBag BAG-TYPE. In current implementations, the file extensions .pfx and .p12 can be used interchangeably.
- Microsoft’s private-key proprietary transfer syntax. The .pvk file extension is used for local storage. The .pvk and .p12/.pfx formats are not interchangeable; however, conversion tools exist to convert from one format to another.
To extract the private-key information from the AsymmetricKeyPackage, the encapsulating layers need to be removed. At a minimum, the outer ContentInfo [RFC5652] layer needs to be removed. If the AsymmetricKeyPackage is encapsulated in a SignedData [RFC5652], then the SignedData and EncapsulatedContentInfo layers [RFC5652] also need to be removed. The same is true for EnvelopedData, EncryptedData, and AuthenticatedData, all from [RFC5652], as well as AuthEnvelopedData from [RFC5083]. Once all the outer layers are removed, there are as many sets of private-key information as there are OneAsymmetricKey structures. OneAsymmetricKey and PrivateKeyInfo are the same structure; therefore, either can be saved as a .p8 file or copied into the P12 KeyBag BAG-TYPE. Removing encapsulating security layers will invalidate any signature and may expose the key to unauthorized disclosure.
.p8 files are sometimes PEM-encoded. When .p8 files are PEM encoded they use the .pem file extension. PEM encoding is either the Base64 encoding, from Section 4 of [RFC4648], of the DER-encoded EncryptedPrivateKeyInfo sandwiched between:
"-----BEGIN ENCRYPTED PRIVATE KEY-----
-----END ENCRYPTED PRIVATE KEY-----"
or the Base64 encoding, see Section 4 of [RFC4648], of the DER-encoded PrivateKeyInfo sandwiched between:
"-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----"
6. Security Considerations
Protection of the private-key information is vital to public-key cryptography. Disclosure of the private-key material to another entity can lead to masquerades. The encryption algorithm used in the encryption process must be as ‘strong’ as the key it is protecting.
The asymmetric key package contents are not protected. This content type can be combined with a security protocol to protect the contents of the package.
7. IANA Considerations
This document makes use of object identifiers to identify a CMS content type and the ASN.1 module found in Appendix A. The CMS content type OID is registered in a DoD arc. The ASN.1 module OID is registered in an arc delegated by RSADSI to the SMIME Working Group. No further action by IANA is necessary for this document or any anticipated updates.
This specification also defines a new media subtype that IANA has registered at http://www.iana.org/.
7.1. Registration of media subtype application/pkcs8
Type name: application
Subtype name: pkcs8
Required parameters: None
Optional parameters: None
Encoding considerations: binary
Security considerations: Carries a cryptographic private key. See section 6.
Interoperability considerations:
The PKCS #8 object inside this media type MUST be DER-encoded PrivateKeyInfo.
Published specification: RFC 5958
Applications which use this media type:
Any MIME-compliant transport that processes asymmetric keys.
Additional information:
Magic number(s): None
File extension(s): .p8
Macintosh File Type Code(s):
Person & email address to contact for further information:
Sean Turner <[email protected]>
Restrictions on usage: none
Author:
Sean Turner <[email protected]>
Intended usage: COMMON
Change controller:
The IESG
8. References
8.1. Normative References
8.2. Informative References
Appendix A. ASN.1 Module
This annex provides the normative ASN.1 definitions for the structures described in this specification using ASN.1 as defined in [X.680] through [X.683].
```
AsymmetricKeyPackageModuleV1
{ iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-9(9)
smime(16) modules(0) id-mod-asymmetricKeyPkgV1(50) }
DEFINITIONS IMPLICIT TAGS ::= BEGIN
-- EXPORTS ALL
IMPORTS
-- FROM New SMIME ASN.1 [RFC5911]
Attribute{}, CONTENT-TYPE
FROM CryptographicMessageSyntax-2009
{ iso(1) member-body(2) us(840) rsadsi(113549) pkcs(1) pkcs-9(9)
smime(16) modules(0) id-mod-cms-2004-02(41) }
-- From New PKIX ASN.1 [RFC5912]
ATTRIBUTE
FROM PKIX-CommonTypes-2009
{ iso(1) identified-organization(3) dod(6) internet(1)
security(5) mechanisms(5) pkix(7) id-mod(0)
id-mod-pkixCommon-02(57) }
-- From New PKIX ASN.1 [RFC5912]
AlgorithmIdentifier{}, ALGORITHM, PUBLIC-KEY, CONTENT-ENCRYPTION
FROM AlgorithmInformation-2009
{ iso(1) identified-organization(3) dod(6) internet(1)
security(5) mechanisms(5) pkix(7) id-mod(0)
id-mod-algorithmInformation-02(58) }
ContentSet CONTENT-TYPE ::= {
ct-asymmetric-key-package,
... -- Expect additional content types --
}
ct-asymmetric-key-package CONTENT-TYPE ::=
{ AsymmetricKeyPackage IDENTIFIED BY id-ct-KP-aKeyPackage }
id-ct-KP-aKeyPackage OBJECT IDENTIFIER ::=
{ joint-iso-itu-t(2) country(16) us(840) organization(1)
gov(101) dod(2) infosec(1) formats(2)
key-package-content-types(78) 5 }
AsymmetricKeyPackage ::= SEQUENCE SIZE (1..MAX) OF OneAsymmetricKey
OneAsymmetricKey ::= SEQUENCE {
version Version,
privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
privateKey PrivateKey,
attributes [0] Attributes OPTIONAL,
...,
[[2: publicKey [1] PublicKey OPTIONAL ]],
...
}
PrivateKeyInfo ::= OneAsymmetricKey
-- PrivateKeyInfo is used by [P12]. If any items tagged as version
-- 2 are used, the version must be v2, else the version should be
-- v1. When v1, PrivateKeyInfo is the same as it was in [RFC5208].
Version ::= INTEGER { v1(0), v2(1) } (v1, ..., v2)
PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier
{ PUBLIC-KEY,
{ PrivateKeyAlgorithms } }
PrivateKey ::= OCTET STRING
-- Content varies based on type of key. The
-- algorithm identifier dictates the format of
-- the key.
PublicKey ::= BIT STRING
-- Content varies based on type of key. The
-- algorithm identifier dictates the format of
-- the key.
Attributes ::= SET OF Attribute { { OneAsymmetricKeyAttributes } }
OneAsymmetricKeyAttributes ATTRIBUTE ::= {
... -- For local profiles
}
-- An alternate representation that makes full use of ASN.1
-- constraints follows. Also note that PUBLIC-KEY needs to be
-- imported from the new PKIX ASN.1 Algorithm Information module
-- and PrivateKeyAlgorithms needs to be commented out.
-- OneAsymmetricKey ::= SEQUENCE {
-- version Version,
-- privateKeyAlgorithm SEQUENCE {
-- algorithm PUBLIC-KEY.&id({PublicKeySet}),
-- parameters PUBLIC-KEY.&Params({PublicKeySet})
-- (@privateKeyAlgorithm.algorithm))
-- OPTIONAL}
-- privateKey OCTET STRING (CONTAINING
-- PUBLIC-KEY.&PrivateKey({PublicKeySet})
-- (@privateKeyAlgorithm.algorithm)),
-- attributes [0] Attributes OPTIONAL,
-- ...}
-- [[2: publicKey [1] BIT STRING (CONTAINING
-- PUBLIC-KEY.&Params({PublicKeySet})
-- (@privateKeyAlgorithm.algorithm))
-- OPTIONAL,
-- ...]
EncryptedPrivateKeyInfo ::= SEQUENCE {
encryptionAlgorithm EncryptionAlgorithmIdentifier,
encryptedData EncryptedData }
EncryptionAlgorithmIdentifier ::= AlgorithmIdentifier
{ CONTENT-ENCRYPTION,
{ KeyEncryptionAlgorithms } }
EncryptedData ::= OCTET STRING -- Encrypted PrivateKeyInfo
PrivateKeyAlgorithms ALGORITHM ::= {
... -- Extensible
}
KeyEncryptionAlgorithms ALGORITHM ::= {
... -- Extensible
}
END
```
Acknowledgements
Many thanks go out to Burt Kaliski and Jim Randall at RSA. Without the prior version of the document, this one wouldn’t exist.
I’d also like to thank Pasi Eronen, Roni Even, Alfred Hoenes, Russ Housley, Jim Schaad, and Carl Wallace.
Author’s Address
Sean Turner
IECA, Inc.
3057 Nutley Street, Suite 106
Fairfax, VA 22031
USA
EMail: [email protected]
MWSMF: A Mediation Framework Realizing Scalable Mobile Web Service Provisioning
Satish Narayana Srirama 1, Matthias Jarke 1,2, Wolfgang Prinz 1,2
1 RWTH Aachen, Informatik V, Ahornstrasse 55, 52056 Aachen, Germany
2 Fraunhofer FIT, Schloss Birlinghoven, 53754 Sankt Augustin, Germany
{srirama, jarke}@cs.rwth-aachen.de, [email protected]
ABSTRACT
It is now feasible to invoke basic web services on a smart phone due to the advances in wireless devices and mobile communication technologies. While mobile web service clients are common these days, we have studied the scope of providing web services from smart phones. Although the applications possible with the Mobile Host are quite welcoming, the scalability of such a Mobile Host is observed to be considerably low. In the scalability analysis of our Mobile Host, we have observed that binary compression of the SOAP messages exchanged over the cellular network greatly improves the performance of the Mobile Host. While binary compression is observed to be very efficient, the mechanism raises the need for an intermediary in the mobile web service invocation cycle. The paper describes our mobile web service message optimization approach, realized in a mediation framework based on Enterprise Service Bus technology, together with evaluation results.
1. INTRODUCTION
From the viewpoint of information systems engineering, the Internet has led the evolution from static content to web services. Web services are software components that can be accessed over the Internet using well-established web mechanisms, XML-based open standards and transport protocols such as SOAP and HTTP. The public interfaces of web services are defined and described using the Web Service Description Language (WSDL), regardless of their platforms and implementation details. Web services have a wide range of applications and are primarily used for the integration of different organizations. The biggest advantage of web services technology lies in its simplicity in expression, communication and servicing. The componentized architecture of web services also makes them reusable, thereby reducing development time and costs. [1]
Concurrently, the capabilities of high-end mobile phones and PDAs have increased significantly, both in terms of processing power and memory. Smart phones are becoming pervasive and are being used in a wide range of applications like location-based services, mobile banking services, ubiquitous computing etc. The market capture of such smart phones is quite evident. The higher data transmission rates achieved in wireless domains with 3G and 4G technologies and the fast spread of all-IP broadband-based mobile networks have also boosted this growth in the cellular market. The situation creates a large scope and demand for software applications for such high-end smart phones.
To meet this demand of the cellular domain and to reap the benefits of the fast-growing web services domain and standards, the scope of mobile terminals as both web service clients and providers is being explored. While mobile web service clients are common these days [7, 4], we have studied the scope of mobile web service provisioning in one of our previous projects. In this project, we have developed a Mobile Host [19], capable of providing basic web services from smart phones. An extensive performance analysis of the Mobile Host was conducted and many applications were designed and developed, proving the feasibility of the concept.
While the applications possible with the Mobile Host are quite welcoming, during the performance analysis of the Mobile Host we have observed its scalability to be considerably low. We define scalability as the Mobile Host’s ability to process a reasonable number of clients, over long durations, without failure and without seriously impeding the normal functioning of the smart phone for the user. To improve the scalability of the Mobile Host, the mobile web service messages exchanged over the radio link are to be compressed without seriously affecting their interoperability. Different compression techniques were studied in detail, BinXML [8] was identified as the best compression option, and the binary encoding was adopted for the mobile web service invocation cycle.
While BinXML is observed to be very efficient for mobile web service message compression, the encoding mechanism raises the need for an intermediary or middleware framework in the mobile web service invocation cycle. The mediation framework should encode/decode the mobile web service messages to/from XML/BinXML formats in the mobile operator’s proprietary networks. During our research on the Mobile Host’s applications, QoS and discovery, we have identified the final deployment scenario of Mobile Hosts in cellular networks. The Mobile Web Services Mediation Framework (MWSMF) is established as an intermediary between the web service clients and the Mobile Hosts, based on Enterprise Service Bus (ESB) technology. The features, realization details and performance analysis of the mediation framework, specific to maintaining the scalability of the Mobile Host, are addressed in this paper. The rest of the paper is organized as follows:
Section 2 discusses the concept of mobile web service provisioning. Section 3 addresses scalability analysis of the Mobile Host. Section 4 discusses the components and realization details of the mobile web services mediation framework, while section 5 provides the evaluation of the MWSMF. Section 6 summarizes the results and concludes the paper.
2. MOBILE WEB SERVICE PROVISIONING
Service Oriented Architecture (SOA) [6] is a component model that delivers application functionality as services to end-user applications and other services, bringing the benefits of loose coupling and encapsulation to enterprise application integration. SOA is not a new notion, and many technologies like CORBA and DCOM at least partly represent this idea. Web services are the newest of these developments and by far the best means of achieving SOA. Using web services for SOA provides certain advantages over other technologies; specifically, web services are based on a set of still-evolving, though well-defined, W3C standards that allow much more than just defining interfaces.
The quest for enabling these open XML web service interfaces and standardized protocols also on the radio link, together with the latest developments in the cellular domain, has led to a new domain of applications: mobile web services. The developments in the cellular world are twofold: firstly, there is a significant improvement in device capabilities like better memory and processing power; secondly, with the latest developments in mobile communication technologies such as 3G and 4G, higher data transmission rates in the order of a few Mbps have been achieved. In the mobile web services domain, resource-constrained mobile devices are used as both web service clients and providers, still preserving the basic web services architecture in wireless environments. While mobile web service clients are quite common these days [7, 4], research on providing web services from smart phones is still sparse. In our mobile web service provisioning project one such Mobile Host was developed, proving the feasibility of the concept [19]. Figure 1 shows the deployment scenario of mobile web services, where mobile devices are used as both web service providers and clients.
The Mobile Host is a lightweight web service provider built for resource-constrained devices like cellular phones. It has been developed as a web service handler built on top of a normal Web server. The SOAP-based web service requests sent by HTTP tunneling are diverted and handled by the web service handler component. The Mobile Host was developed in PersonalJava on a SonyEricsson P800 smart phone. The footprint of the fully functional prototype is only 130 KB. The open source kSOAP2 [22] was used for creating and handling the SOAP messages.
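As a toy illustration of this request-diversion idea (the actual Mobile Host is written in PersonalJava and uses kSOAP2, so the C++ sketch below is purely illustrative and all names in it are assumptions), an HTTP front end can inspect incoming requests and hand tunnelled SOAP POSTs over to a web service handler while serving everything else as ordinary web content:

```cpp
// Toy sketch of diverting tunnelled SOAP requests to a web service
// handler; purely illustrative, not the Mobile Host implementation.
#include <iostream>
#include <string>

// Returns true if the raw HTTP request looks like a tunnelled SOAP call.
bool isSoapRequest(const std::string& rawRequest)
{
    return rawRequest.rfind("POST", 0) == 0 &&
           (rawRequest.find("SOAPAction:") != std::string::npos ||
            rawRequest.find("text/xml") != std::string::npos);
}

// Assumed handler hooks; in a real system these would parse the SOAP
// envelope and serve static content, respectively.
void handleSoap(const std::string&)          { std::cout << "dispatched to web service handler\n"; }
void handleStaticContent(const std::string&) { std::cout << "served by the normal web server\n"; }

int main()
{
    const std::string request =
        "POST /services/location HTTP/1.1\r\n"
        "Content-Type: text/xml; charset=utf-8\r\n"
        "SOAPAction: \"getLocation\"\r\n\r\n"
        "<soap:Envelope>...</soap:Envelope>";

    if (isSoapRequest(request))
        handleSoap(request);
    else
        handleStaticContent(request);
    return 0;
}
```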
3. SCALABILITY ANALYSIS OF MOBILE WEB SERVICE PROVISIONING
While the applications possible with the Mobile Host are quite welcoming, during the performance analysis of the Mobile Host we have observed its scalability to be considerably low. In the regression analysis of the Mobile Host's scalability, the Mobile Host was successful in handling 8 concurrent accesses for a reasonable service, such as the location data provisioning service, with a response size of approximately 2 KB. The main reason for not being able to process more mobile web service clients was the transmission delay, which constituted 90% of the mobile web service invocation cycle time. A similar analysis conducted with the mobile picture service, where the response size is approximately 40 KB, further supported this point. The Mobile Host's scalability is thus inversely proportional to increased transmission delays. The transmission delays can be reduced in two ways: 1) by achieving higher data transmission rates with current-generation telecommunication technologies; 2) by reducing the size of the message. In the scalability analysis of the Mobile Host we mainly concentrated on the second issue, i.e. reducing the size of the message being transmitted over the radio link.
Web services communication is layered across different protocols. Considering SOAP over HTTP, at the lowest level is the transport protocol, TCP. On top of TCP lies the HTTP communication, and the SOAP communication runs over the HTTP protocol. The application communication and protocols, for example WS-Security, lie on top of SOAP. So any message exchanged over web services carries some overhead from all the different layers. Since we consider wireless environments, where the message exchange is over the cellular network, the size of the message has to be reduced to the minimum possible level [13]. The size of the mobile web service message is shown in equation (1).
\[ B_{msg} = B_{tp} + B_{http} + B_{soap} + B_{app} \quad (1) \]
Where \( B_{tp} \), \( B_{http} \), \( B_{soap} \), \( B_{app} \) are the message overheads of the transport (TCP), HTTP, SOAP and application protocols respectively. So to exchange messages efficiently over the radio link, \( B_{msg} \) has to be minimized, and for this the messages have to be compressed/encoded in an optimal way. The minimal encoding may not always be the best solution. The first reason is that the encoding should be efficient, both in terms of the message size reduction and the extra processing penalty added to the devices. For example, if the size of a message is reduced by 50% but the encoding processing takes more than half the time of the actual message exchange cycle, the encoding mechanism is not efficient. Secondly, the encoding mechanism should not affect interoperability. If an attempt is made to reduce the overhead at \( B_{tp} \) or \( B_{http} \), the interoperability of the web services is seriously impeded. So the best place to target the encoding process is at \( B_{soap} \) and the upper levels, which means the XML based SOAP messages are to be compressed.
Web service messages can be compressed with standard compression techniques like Gzip, or with XML-specific compression techniques like XMill, to obtain smaller message sizes. The Canonical XML standard [5] addresses the logical equivalence of such compressed XML messages. Recently there have been efforts such as Fast Web Services [17], the Fast Infoset standard draft [18], Efficient XML [14] and BinXML [8] to specify a binary format for XML data that is an efficient alternative to XML in resource constrained environments. Most recently there is also the BiM (Binary Format for Metadata) standard [11] for the binary encoding of MPEG-7 metadata. BiM is designed in a way that allows fast parsing and filtering of the XML data at the binary level itself, without having to decompress it first. [9] gives a comparison of different compression technologies for XML data and identifies the best scenario for web service message exchange across smart phones. The analysis suggests that BinXML is the best option (BiM was not considered in this analysis) for compressing web service messages, considering compression ratio, processing time and resource usage.
Based on the analysis in [9] we have adapted BinXML for compressing the mobile web service messages. BinXML is a very simple binary representation of XML data. It replaces each tag and attribute with a unique byte value and replaces each end tag with 0xFF. By using a state machine and 6 special byte values including 0x0F, any XML document with circa 245 tags can be represented in this format. The approach is specifically designed to target SOAP messages across radio links. So the mobile web service messages are exchanged in the BinXML format over the radio link, and the approach has improved the performance of the Mobile Host significantly.
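To make the encoding idea concrete, the following is a minimal illustrative sketch of such a tag-to-byte encoder in Java. It is not the BinXML codec used in the Mobile Host; in particular, the text marker byte and the one-byte length field are assumptions made for this sketch, and overflow handling for documents with more than circa 245 distinct tags is omitted.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Minimal BinXML-style encoder sketch: one byte per distinct tag, 0xFF per end tag.
public class TinyBinaryXmlEncoder {
    private static final int END_TAG = 0xFF;      // marks the end of the current element
    private static final int TEXT_MARKER = 0x0F;  // assumed marker byte for character data

    private final Map<String, Integer> tagCodes = new HashMap<>();
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    public void startElement(String tag) {
        Integer code = tagCodes.get(tag);
        if (code == null) {                 // first occurrence: assign the next free byte value
            code = tagCodes.size() + 1;
            tagCodes.put(tag, code);
        }
        out.write(code);
    }

    public void text(String data) {
        byte[] bytes = data.getBytes(StandardCharsets.UTF_8);
        out.write(TEXT_MARKER);
        out.write(bytes.length);            // one-byte length: sketch only, limits text to 255 bytes
        out.write(bytes, 0, bytes.length);
    }

    public void endElement() {
        out.write(END_TAG);
    }

    public byte[] toBytes() {
        return out.toByteArray();
    }
}
```

Encoding a small SOAP envelope with such a sketch is then just a sequence of startElement/text/endElement calls mirroring the XML event stream.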
To analyze the effects of BinXML compression for mobile web services, we used two Sony Ericsson P990i smart phones as the web service requestor and the Mobile Host. The phones have an internal shared flash memory of 64 MB and support MIDP 2.0 with the CLDC 1.1 configuration. The two mobile phones were connected to the Internet using GPRS connections. The services of the Expertise Finder modules are deployed on the smart phones and the mobile web service clients invoke these services. The Expertise Finder scenario is developed with the Mobile Host and is used in the m-learning domain. Using the Expertise Finder modules, learners can collaborate with each other to find an expert who can solve a specific problem. Once the expert is found, the expert can rate himself to the client using the expert rating service. The Expert Rating service is used in the scalability analysis of the Mobile Host. The SOAP request message of this service includes the actual expert finder request message, the intermediaries to whom the message is forwarded before reaching the expert, and the rating and details of the expert. The size of the request message is observed to be 2544 bytes, with 4 forwards. The response just carries the acknowledgement from the client, and its size is 570 bytes. With the BinXML encoding, the size of the messages to be transmitted over the radio link is reduced significantly: the request shrinks to 1591 bytes and the response to 495 bytes. This reduction in message size causes a significant reduction in the transmission delays of the request and response messages.
The actual gain in the mobile web service invocation cycle time, i.e. the mobile web service compression gain \( T_{mwscg} \) achieved with the BinXML encoding, is:
\[ T_{mwscg} = \delta T_{req} + \delta T_{res} - T_{reqenc} - T_{reqdec} - T_{resenc} - T_{resdec} \]
Where \( T_{reqenc} \) and \( T_{reqdec} \) are the encoding and decoding delays of the SOAP request message, \( T_{resenc} \) and \( T_{resdec} \) are the encoding and decoding delays of the SOAP response message, and \( \delta T_{req} \), \( \delta T_{res} \) are the respective reductions in the request and response transmission delays.
Figure 2 shows the comparison of the delays in the mobile web service invocation cycle, with and without BinXML encoding. From this diagram, we can derive that there is approximately a 1333 millisecond (~15%) gain in the performance of the Mobile Host with the BinXML encoding. We can also conclude that the performance gain is directly proportional to the compression gain achieved with the binary encoding. Alternative compression mechanisms can also be verified for the Mobile Host's performance gain with the architecture we have proposed in this study: as long as the \( T_{mwscg} \) value is positive, the compression mechanism is efficient.
However, BinXML is not an open standard, and hence not all the messages transmitted over the radio link can be based on it. If a client sends an uncompressed message to the Mobile Host, the transmission is not very efficient, even though the Mobile Host can process such a request. In such a scenario a mediation framework should be established at the gateway of the mobile operator's proprietary network, helping to encode/decode the mobile web service messages to/from the XML/BinXML formats sent into the cellular domain.
### 4. MOBILE WEB SERVICES MEDIATION FRAMEWORK (MWSMF)
The Mobile Web Services Mediation Framework (MWSMF) is established as an intermediary between the web service clients and the Mobile Hosts. The mobile web service clients in the Internet can thus invoke the services deployed on the Mobile Hosts via the MWSMF. The architecture and the features of the MWSMF are addressed in [18].
The detailed architecture of the MWSMF includes many Peer to Peer (P2P) concepts and is thus beyond the scope of this paper. A perusal of the deployment scenario is recommended for a clear understanding of the framework, though the scalability concepts discussed further in this paper do not require such detail. The mediation framework ensures the QoS of the mobile web service messages, transforms them as and when necessary, and routes the messages based on their content to the respective Mobile Hosts. Apart from handling security and improvements to scalability, the QoS provisioning features of the MWSMF also include message persistence, guaranteed delivery, failure handling and transaction support.
The Enterprise Service Bus (ESB) is the most recent development in the enterprise integration domain, and a standards-based ESB solves the integration problems raised by the MWSMF. Gartner defines an enterprise service bus as a new architecture that exploits web services, messaging middleware, intelligent routing, and transformation [17]. We realized the MWSMF based on ESB technology and implemented the middleware framework using the Java Business Integration (JBI) based open source ServiceMix ESB.
ServiceMix, following the JBI architecture, supports two types of components: service engine components and binding components. Service engines are components responsible for implementing business logic, and they can be service providers/consumers. The binding components marshal and unmarshal messages to and from protocol-specific data formats into normalized messages; thus the JBI environment processes only normalized messages. A normalized message consists of the message content (also called the payload), message properties or metadata, and optional message attachments referenced by the payload. No two components in the framework communicate directly; the messages are exchanged over the Normalized Message Router (NMR). The components are deployed into the framework using a Spring-based XML configuration file. Spring is a lightweight container, with wrappers that make it easy to use many different services and frameworks; lightweight containers accept any Java Bean, instead of specific types of components [20]. The configuration uses WS-Addressing for routing the messages across the components via the Normalized Message Router. WS-Addressing is a specification of transport-neutral mechanisms that allow web services to communicate addressing information. It essentially consists of two parts: a structure for communicating a reference to a web service endpoint, and a set of Message Addressing Properties, which associate addressing information with a particular message.
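For illustration, a SOAP header carrying WS-Addressing properties of the kind used for such routing could look as follows. The header element names and namespaces come from the WS-Addressing 1.0 recommendation, while the endpoint URIs and the service name are made-up placeholders rather than addresses used by the MWSMF.

```xml
<soap:Header xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
             xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <!-- Ultimate receiver of the message (placeholder URI) -->
  <wsa:To>http://example.org/mobilehost/ExpertRating</wsa:To>
  <!-- Intended action at the receiver (placeholder URI) -->
  <wsa:Action>http://example.org/mobilehost/ExpertRating/rate</wsa:Action>
  <wsa:MessageID>urn:uuid:6b29fc40-ca47-1067-b31d-00dd010662da</wsa:MessageID>
  <wsa:ReplyTo>
    <wsa:Address>http://www.w3.org/2005/08/addressing/anonymous</wsa:Address>
  </wsa:ReplyTo>
</soap:Header>
```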
### 4.1 Components developed for scalability maintenance of the Mobile Host
The following components are developed at the mediation framework for improving the scalability of the Mobile Hosts.
**HttpReceiver:** The HttpReceiver component receives the web service requests (SOAP over HTTP) over a specific port and forwards them to the Broker component via the NMR. The component thus acts as the gateway to the mediation framework and as a proxy for the Mobile Hosts. The component is configured into the mediation framework by adding the following XML fragment to the configuration file.
```xml
<sm:activationSpec componentName="httpReceiver"
                   service="ssn:httpBinding"
                   destinationService="ssn:mwsmfBroker">
  <sm:component>
    <bean class="org.apache.servicemix.components.http.HttpConnector">
      <property name="host" value="localhost"/>
      <property name="port" value="8912"/>
    </bean>
  </sm:component>
</sm:activationSpec>
```
The `componentName` attribute of the `activationSpec` specifies the name of the component, while `service` indicates the service name of the proxied endpoint; an optional `endpoint` attribute can additionally specify the endpoint name of the proxied endpoint. The `destinationService` attribute specifies the service name of the target endpoint, here the Broker component. The remaining properties of the component are specified on the bean inside the `<sm:component>` element.
Support for this component has recently been deprecated in ServiceMix, which now ships with a JBI compliant HTTP/SOAP binding component named `servicemix-http`. That component can be used as both a requester and a provider [2] and also supports the WS-Addressing specification. However, as most of the message flow analyses of the MWSMF were conducted with the HttpReceiver component, we still follow the old architecture here. ServiceMix is an open source project under active development, and hence modifications to old components and support for new components are quite common.
**HttpInvoker:** This binding component generates a web server request if the message is a normal HTTP request. The component can also invoke web services by transferring the SOAP messages as the HTTP body. We developed the component while evaluating the mediation framework. As discussed already, and similar to the HttpReceiver component, the HttpInvoker component can in principle be replaced by the servicemix-http binding component.
**Broker:** This component serves as a hub for all the communication with the other components in the mediation framework. It receives the client-supplied message from the HttpReceiver component and hosts the main integration logic of the mediation framework. In the case of scalability maintenance, the messages received by the Broker are checked for mobile web service/BinXML messages. The component also interfaces with the other components and provides the result to the client.
**BinaryTransformer:** This service engine component transforms the mobile web service messages to and from the BinXML format: if the request message is in XML format it encodes the message to BinXML, and it decodes to XML if it receives a BinXML message. The component always acts as a provider and receives its messages from the Broker.
Apart from the components discussed above, the MWSMF also provides several further components that help in providing QoS and discovery for the mobile web services deployed with the Mobile Host. For example, the QoSVerifier and XSLT-Transformer components help in maintaining the security of the mobile web services. The mediation framework also provides features such as the automatic startup of the Mobile Hosts, conserving the resources of the smart phones, and a UDDI server for publishing the mobile web services. Further details of the MWSMF are provided in [13].
### 4.2 Mobile web service message optimization
Once the MWSMF is established, the components discussed in the previous subsection are deployed with the mediation framework to improve the scalability of the Mobile Hosts. The messages received by the mediation framework from external clients are compressed using BinXML encoding, and the binary messages are sent over the radio link. This scenario is named mobile web service message optimization. The message flow of the scenario is shown and numbered in Figure 3; the service engine components are shown as straight lined rectangles, while the binding components are shown as dashed rectangles. A plain-Java sketch of the Broker's routing decision is given after the numbered steps below.
1. The HttpReceiver component receives the mobile web service request or the HTTP Request from the client.
2. The HttpReceiver sends the message to the Broker through the NMR.
3. The Broker examines the message for a web service request and transfers the message to the HttpInvoker if it is a normal HTTP request. The Broker transfers the message to the BinaryTransformer component through the NMR if the message comprises a mobile web service request.
4. The BinaryTransformer component BinXML encodes the message and transfers the response message back to the Broker component.
5. The Broker component sends the message to the HttpInvoker via the NMR.
6. The HttpInvoker generates the request to the Mobile Host by setting the BinXML data to the body of the HTTP message.
7. The Mobile Host processes the request using a BinXML adapter and sends the response message back to the HttpInvoker.
8. The HttpInvoker sends the response back to the Broker via the NMR.
9. The Broker transfers the response to the BinaryTransformer through the NMR.
10. The BinaryTransformer decodes the BinXML data to the XML format and transfers the response message back to the Broker component.
11. The Broker returns the response to the HttpReceiver component through the NMR.
12. The HttpReceiver component returns the response back to the client.
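The following is a plain-Java illustration of the routing decision the Broker makes in step 3. It is deliberately independent of the ServiceMix/JBI API; the class and method names are ours, and the simple content-type/payload check is an assumption used only to make the decision concrete.

```java
import java.nio.charset.StandardCharsets;

// Illustration of the Broker's routing decision (step 3 above), not ServiceMix code.
public final class BrokerRoutingSketch {

    enum Route { HTTP_INVOKER, BINARY_TRANSFORMER }

    /** Decide where the Broker forwards an incoming message. */
    static Route route(byte[] payload, String contentType) {
        String body = new String(payload, StandardCharsets.UTF_8);
        boolean soapRequest =
                (contentType != null && contentType.contains("soap"))
                || body.contains(":Envelope");
        // Mobile web service (SOAP) requests are first BinXML-encoded (steps 3-4);
        // plain HTTP requests go straight to the HttpInvoker (step 3).
        return soapRequest ? Route.BINARY_TRANSFORMER : Route.HTTP_INVOKER;
    }

    public static void main(String[] args) {
        byte[] soap = "<soapenv:Envelope>...</soapenv:Envelope>".getBytes(StandardCharsets.UTF_8);
        System.out.println(route(soap, "text/xml"));     // BINARY_TRANSFORMER
        byte[] plain = "GET /status".getBytes(StandardCharsets.UTF_8);
        System.out.println(route(plain, "text/plain"));  // HTTP_INVOKER
    }
}
```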
### 5. EVALUATION OF THE MWSMF
Once the mobile web service message optimization scenario was designed and established, the MWSMF was extensively tested for its performance and scalability using load testing principles. A large number of clients was generated for the mediation framework, simulating real-time mobile operator network load. The expert rating service considered in the scalability analysis of the Mobile Host is again used for the performance evaluation of the MWSMF.
5.1 Test setup
The prototype of the ServiceMix based mediation framework was established on a HP Compaq nw8240 laptop with an Intel(R) Pentium(R) M 2.00 GHz processor and 1 GB of RAM. A Java based server was developed and run on the same laptop on an arbitrary port (4444), mocking the Mobile Host. The server receives the expert rating service request from the client and populates the standard response. The response is then BinXML encoded and the compressed response is sent back to the client in the HTTP response message format. By using this simple server, we eliminate the pure performance delays of the Mobile Host and the transmission delays of the radio link, and thus obtain the actual performance analysis of the MWSMF. For the load generation we used a Java clone of the popular ApacheBench load generator from the WSO2 ESB [3, 21]. The load generator can initiate a large number of concurrent web service invocations, simulating multiple parallel clients. The command line executable benchmark.jar also provides detailed statistics of the invocations, such as the number of concurrent requests, successful transactions per second, and the mean of the client invocation times. The benchmark.jar and commons-cli-1.0.jar are downloaded to a working directory and are used to simulate a huge number of concurrent requests. The following sample shows a command that simulates 200 concurrent clients for the expert rating service, with each client generating 10 requests for the same service.
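The exact invocation was not preserved in this copy of the paper. Since the tool mirrors ApacheBench, a command of the following form would express 200 concurrent clients issuing 10 requests each (2000 requests in total); the flags are assumed from the ApacheBench conventions the tool follows, and the service URL is a hypothetical placeholder:

```
java -jar benchmark.jar -c 200 -n 2000 http://localhost:8912/ExpertRating
```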
5.2 Test results
Figure 4 shows the time taken for handling a client request under multiple concurrent requests generated for the mediation framework. The mediation framework was successful in handling up to 110 concurrent requests without any connection refusals. Higher numbers of concurrent requests were also possible, but some of the requests failed as the mediation framework generated 'connection refused' IO errors. The main reason for this connection refusal is as follows: the ServiceMix transport is based on blocking code, which means that the ESB can handle only as many concurrent requests as the number of threads configured in the system [14]. Figure 4 also shows a steady increase in the average time taken for handling a client request as the number of concurrent requests increases. The figure also shows a sharp decline in the time taken to handle a client beyond 240 concurrent requests; the decline is caused by the large number of failed requests at this concurrency level. More than 300 concurrent requests were not considered, as at this high concurrency level the number of failed requests already exceeds 50% of the total requests. The increase in the average duration to handle a client is quite normal, and the mean duration of handling a single request remains mostly constant. The mean is calculated considering the performance of the MWSMF over long durations, including parameters like the number of failed service requests. The mean value is in the range of 100-150 milliseconds, and it improved slightly with the increase in concurrency levels. This shows that the performance of the mediation framework actually improves when there are large numbers of clients to handle.
The results from this analysis show that the mediation framework has reasonable levels of performance and that the MWSMF can scale to handle the large number of concurrent clients possible in the deployment scenario addressed in [18]. This conclusion is also evident from Figure 5, which shows that the number of transactions handled by the mediation framework per second remains almost steady (in fact growing) even under such heavy load conditions. The mediation framework is successful in handling 6-8 mobile web service invocations per second with the mobile web service message optimization scenario. The values can grow significantly when the deployment scenario is established on reasonable servers with high resource and performance capabilities.
6. CONCLUSIONS
This paper addressed our scalability analysis of the Mobile Host, which identified that the Mobile Host's scalability is inversely proportional to increased transmission delays. In order to reduce the transmission delays, we reduced the size of the mobile web service messages being exchanged, using the BinXML encoding mechanism. The performance gain for the Mobile Host with binary compression is quite significant in terms of improved scalability. The study also raised the necessity for an intermediary in the mobile web service invocation cycle. We have developed an enterprise service bus based mobile web services mediation framework, acting as a proxy for mobile web service provisioning. The paper also addressed the features, components and realization details of the MWSMF. The regression analysis of the mediation framework conducted with the mobile web service message optimization scenario clearly showed that the mediation framework has reasonable levels of performance and that the MWSMF can scale to handle the large numbers of concurrent clients possible in mobile operator networks.
7. ACKNOWLEDGMENTS
This work is supported by the Research Cluster Ultra High-Speed Mobile Information and Communication (UMIC) at RWTH Aachen University (http://www.umic.rwth-aachen.de/). The authors would also like to thank R. Levenshteyn and M. Gerdes of Ericsson Research for their help and support.
8. REFERENCES
Logical Operations
- Logical operations perform operations on the bits themselves, rather than the values they represent
- e.g. and, or, exclusive-or, not (invert)
- Truth tables
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>A AND B</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>A OR B</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>A EOR B</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>A</th>
<th>NOT A</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
</tr>
</tbody>
</table>
### Instructions for Logical Operations
- **Bitwise logical AND – AND**
```
AND r0, r1, r2 @ r0 = r1 . r2 (r1 AND r2)
```
(Bit diagram: each bit of r0 is the AND of the corresponding bits of r1 and r2.)
\[ r0 = r1 \cdot r2 \]
Instructions for Logical Operations
- Bitwise logical OR – ORR
ORR r0, r1, r2 @ r0 = r1 + r2 (r1 OR r2)
### Instructions for Logical Operations
- **Bitwise logical Exclusive OR – EOR**
```
EOR r0, r1, r2 @ r0 = r1 ⊕ r2 (r1 EOR r2)
```

**Example:**
- **r0:** 1 1
- **r1:** 1 0 1 0
- **r2:** 0 1 1 0
- **r0** AFTER operation: 0 1 1 0
**Operation:**
- **r0 ⊕ r2**
- **r0** ← **r0 ⊕ r2**
**Result:**
- **r0** AFTER operation: 0 1 1 0
**Equation:**
- **r0 = r1 ⊕ r2**
- **r0** ← **r1 ⊕ r2**
**Diagram:**
- Represents the bitwise logical Exclusive OR operation between registers r0, r1, and r2.
Instructions for Logical Operations
- Bitwise logical inversion
- **MVN MoVe Negative** – like MOV but moves the one’s complement of a value (bitwise inversion) to a register
### Example Instructions
- `MVN r0, r0` @ r0 = r0’ (NOT r0)
- `MVN r0, r1` @ r0 = r1’ (NOT r1)
**Bit Manipulation**
- e.g. Clear bits 3 and 4 of the value in r1
```
value (low byte of r1):  0 1 0 0 1 1 0 0
bit position:            7 6 5 4 3 2 1 0
```
- Observe $0 \cdot x = 0$ and $1 \cdot x = x$
- Construct a mask with 0 in the bit positions we want to clear and 1 in the bit positions we want to leave unchanged
```
mask:                    1 1 1 0 0 1 1 1
bit position:            7 6 5 4 3 2 1 0
```
- Perform a bitwise logical AND of the value with the mask
```
value AND mask:          0 1 0 0 0 1 0 0
bit position:            7 6 5 4 3 2 1 0
```
e.g. Clear bits 3 and 4 of the value in r1 (continued)
Write an assembly language program to clear bits 3 and 4 of the value in r1
```assembly
start:
LDR r1, =0x61E87F4C @ load test value
LDR r2, =0xFFFFFFE7 @ mask to clear bits 3 and 4
AND r1, r1, r2 @ clear bits 3 and 4
@ result should be 0x61E87F44
stop: B stop
```
- Alternatively, the BIC (Bit Clear) instruction allows us to define a mask with 1’s in the positions we want to clear
```assembly
LDR r2, =0x00000018 @ mask to clear bits 3 and 4
BIC r1, r1, r2 @ r1 = r1 . NOT(r2)
```
- Or use an immediate value, saving one instruction
```assembly
BIC r1, r1, #0x00000018 @ r1 = r1 . NOT(0x00000018)
```
**Bit Manipulation**
- e.g. Set bits 2 and 4 of the value in r1

- Observe $1 + x = 1$ and $0 + x = x$
- Construct a mask with 1 in the bit positions we want to set and 0 in the bit positions we want to leave unchanged

- Perform a bitwise logical OR of the value with the mask
e.g. Set bits 2 and 4 of the value in r1 (continued)
Example (low byte of r1): value 0100 1100, mask 0001 0100, value OR mask = 0101 1100
### Program 5.2 – Set Bits
- Write an assembly language program to set bits 2 and 4 of the value in r1
```assembly
start:
LDR r1, =0x61E87F4C @ load test value
LDR r2, =0x00000014 @ mask to set bits 2 and 4
ORR r1, r1, r2 @ set bits 2 and 4
@ result should be 0x61E87F5C
stop: B stop
```
- Can save an instruction by specifying the mask as an immediate operand in the ORR instruction.
```assembly
ORR r1, r1, #0x00000014 @ set bits 2 and 4
```
- REMEMBER: since the ORR instruction must fit in 32 bits, only some 32-bit immediate operands can be encoded. Assembler will warn you if the immediate operand you specify is invalid.
Bit Manipulation
- e.g. Invert bits 1 and 3 of the value in r1
- Observe $1 \oplus x = x'$ and $0 \oplus x = x$
- Construct a mask with 1 in the bit positions we want to invert and 0 in the bit positions we want to leave unchanged
- Perform a bitwise logical exclusive-OR of the value with the mask
e.g. Invert bits 1 and 3 of the value in r1 (continued)
\[
\begin{array}{ll}
\texttt{0100\ 1100} & \text{(low byte of the value in r1)} \\
\oplus\ \texttt{0000\ 1010} & \text{(mask: 1 in bits 1 and 3)} \\
\hline
\texttt{0100\ 0110} & \text{(result: bits 1 and 3 inverted)} \\
\end{array}
\]
Write an assembly language program to invert bits 1 and 3 of the value in r1
```
start: LDR r1, =0x61E87F4C @ load test value
LDR r2, =0x0000000A @ mask to invert bits 1 and 3
EOR r1, r1, r2 @ invert bits 1 and 3
@ result should be 0x61E87F46
stop: B stop
```
- Again, we can save an instruction by specifying the mask as an immediate operand:
```
EOR r1, r1, #0x0000000A @ invert bits 1 and 3
```
- Again, only some 32-bit immediate operands can be encoded.
Design and write an assembly language program that will make the ASCII character stored in r0 upper case.
<table>
<thead>
<tr>
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>NUL</td>
<td>DLE</td>
<td>SPACE</td>
<td>0</td>
<td>@</td>
<td>P</td>
<td>`</td>
<td>p</td>
</tr>
<tr>
<td>1</td>
<td>SOH</td>
<td>DC1</td>
<td>!</td>
<td>1</td>
<td>A</td>
<td>Q</td>
<td>a</td>
<td>q</td>
</tr>
<tr>
<td>2</td>
<td>STX</td>
<td>DC2</td>
<td>"</td>
<td>2</td>
<td>B</td>
<td>R</td>
<td>b</td>
<td>r</td>
</tr>
<tr>
<td>3</td>
<td>ETX</td>
<td>DC3</td>
<td>#</td>
<td>3</td>
<td>C</td>
<td>S</td>
<td>c</td>
<td>s</td>
</tr>
<tr>
<td>4</td>
<td>EOT</td>
<td>DC4</td>
<td>$</td>
<td>4</td>
<td>D</td>
<td>T</td>
<td>d</td>
<td>t</td>
</tr>
<tr>
<td>5</td>
<td>ENQ</td>
<td>NAK</td>
<td>%</td>
<td>5</td>
<td>E</td>
<td>U</td>
<td>e</td>
<td>u</td>
</tr>
<tr>
<td>6</td>
<td>ACK</td>
<td>SYN</td>
<td>&</td>
<td>6</td>
<td>F</td>
<td>V</td>
<td>f</td>
<td>v</td>
</tr>
<tr>
<td>7</td>
<td>BEL</td>
<td>ETB</td>
<td>'</td>
<td>7</td>
<td>G</td>
<td>W</td>
<td>g</td>
<td>w</td>
</tr>
<tr>
<td>8</td>
<td>BS</td>
<td>CAN</td>
<td>(</td>
<td>8</td>
<td>H</td>
<td>X</td>
<td>h</td>
<td>x</td>
</tr>
<tr>
<td>9</td>
<td>HT</td>
<td>EM</td>
<td>)</td>
<td>9</td>
<td>I</td>
<td>Y</td>
<td>i</td>
<td>y</td>
</tr>
<tr>
<td>A</td>
<td>LF</td>
<td>SUB</td>
<td>*</td>
<td>:</td>
<td>J</td>
<td>Z</td>
<td>j</td>
<td>z</td>
</tr>
<tr>
<td>B</td>
<td>VT</td>
<td>ESC</td>
<td>+</td>
<td>;</td>
<td>K</td>
<td>[</td>
<td>k</td>
<td>{</td>
</tr>
<tr>
<td>C</td>
<td>FF</td>
<td>FS</td>
<td>,</td>
<td><</td>
<td>L</td>
<td>\</td>
<td>l</td>
<td>|</td>
</tr>
<tr>
<td>D</td>
<td>CR</td>
<td>GS</td>
<td>-</td>
<td>=</td>
<td>M</td>
<td>]</td>
<td>m</td>
<td>}</td>
</tr>
<tr>
<td>E</td>
<td>SO</td>
<td>RS</td>
<td>.</td>
<td>></td>
<td>N</td>
<td>^</td>
<td>n</td>
<td>~</td>
</tr>
<tr>
<td>F</td>
<td>SI</td>
<td>US</td>
<td>/</td>
<td>?</td>
<td>O</td>
<td>_</td>
<td>o</td>
<td>DEL</td>
</tr>
</tbody>
</table>
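One possible solution sketch for this exercise (a solution is not given on the slide itself): as the table above shows, each lower-case letter differs from its upper-case counterpart only in bit 5 (0x20), so clearing that bit converts the character; the sketch assumes r0 already holds a letter.

```assembly
start:  BIC r0, r0, #0x20   @ clear bit 5: e.g. 'g' (0x67) -> 'G' (0x47)
stop:   B   stop
```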
Logical Shift Left by 1 bit position
ARM MOV instruction allows a source operand, Rm, to be shifted left by $n = 0 \ldots 31$ bit positions before being stored in the destination operand, Rd
$$\text{MOV Rd, Rm, LSL } \#n$$
- LSB of Rd is set to zero, MSB of Rm is discarded
Logical Shift Right by 1 bit position
ARM MOV instruction allows a source operand, Rm, to be shifted right by $n = 0 \ldots 31$ bit positions before being stored in the destination operand, Rd
$$\text{MOV } R_d, \ R_m, \ \text{LSR } \#n$$
- MSB of Rd is set to zero, LSB of Rm is discarded
Logical Shift Left/Right – Examples
- Logical shift left r1 by 2 bit positions
MOV r1, r1, LSL #2 @ r1 = r1 << 2
- Logical shift left r1 by 5 bit positions, store result in r0
MOV r0, r1, LSL #5 @ r0 = r1 << 5
- Logical shift right r2 by 1 bit position
MOV r2, r2, LSR #1 @ r2 = r2 >> 1
- Logical shift right r3 by 4 bit positions, store result in r1
MOV r1, r3, LSR #4 @ r1 = r3 >> 4
- Logical shift left r4 by the number of positions in r0
MOV r4, r4, LSL r0 @ r4 = r4 << r0
Instead of discarding the MSB when shifting left (or LSB when shifting right), we can cause the last bit shifted out to be stored in the Carry Condition Code Flag
- By setting the S-bit in the MOV machine code instruction
- By using MOVS instead of MOV
```
MOVS Rd, Rm, LSL #n
MOVS Rd, Rm, LSR #n
```
Design and write an assembly language program that will calculate the parity bit for a 7-bit value stored in r1. The program should then store the computed parity bit in bit 7 of r1. Assume even parity.
Parity bits are used to detect data transmission errors.
- Using even parity, the parity bit of a value is set such that the number of set bits (1’s) in a value is always even.
Parity example
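One possible solution sketch (not from the original slides), assuming bits 7 to 31 of r1 are already zero: XOR-fold the seven data bits down to a single bit, which is 1 exactly when the number of set bits is odd, and place it in bit 7 so that the total number of 1's becomes even.

```assembly
start:  EOR r2, r1, r1, LSR #4   @ bit i of r2 = bit i XOR bit (i+4) of r1
        EOR r2, r2, r2, LSR #2   @ fold again
        EOR r2, r2, r2, LSR #1   @ bit 0 of r2 = XOR of all seven data bits
        AND r2, r2, #1           @ keep only that single parity bit
        ORR r1, r1, r2, LSL #7   @ store the even-parity bit in bit 7 of r1
stop:   B   stop
```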
Shift-And-Add Multiplication
- Shifting a binary value left (right) by $n$ bit positions is an efficient way of multiplying (~dividing) the value by $2^n$
- Example
```plaintext
MOV r1, r1, LSL #2
```

$r1 = 6 \times 2^2 = 24$
We can express multiplication by any value as the **sum** of the results of multiplying the value by different powers of 2.
**Example**
- \( a \times 12 = a \times (8 + 4) = a \times (2^3 + 2^2) = (a \times 2^3) + (a \times 2^2) \)
- \( a \times 12 = (a \ll 3) + (a \ll 2) \)
Is there a simple way to determine which powers of 2 we need to use for our partial products? Yes: write the multiplier in binary; each 1 bit corresponds to a power of 2 and hence to one shifted partial product (e.g. 12 = 1100₂ = 2³ + 2²).
Design and write an assembly language program that will multiply the value in r1 by 12 and store the result in r0
```assembly
start:
        MOV r0, r1, LSL #3      @ r0 = r1 * 2^3
        ADD r0, r0, r1, LSL #2  @ r0 = r0 + r1 * 2^2
stop:   B stop
```
[ASIDE] We can also formulate instructions to efficiently compute \( Rm \times (2^n-1) \) or \( Rm \times (2^n+1) \), saving one instruction
```assembly
ADD r0, r1, r1, LSL #3   @ r0 = r1 * 9
RSB r0, r1, r1, LSL #3   @ r0 = r1 * 7
```
Arithmetic Shift Right
- Arithmetic Shift Right by 1 bit position

- ARM MOV instruction allows a source operand, \( R_m \), to be shifted right by \( n = 0 \ldots 31 \) bit positions before being stored in the destination operand, \( R_d \)
\[
\text{MOV } R_d, \ R_m, \ \text{ASR } \#n
\]
- MSB of \( R_d \) is set to MSB of \( R_m \), LSB of \( R_m \) is discarded
Rotate Right
- Rotate Right by 1 bit position
- ARM MOV instruction allows a source operand, \( Rm \), to be rotated right by \( n = 0 \ldots 31 \) bit positions before being stored in the destination operand, \( Rd \)
\[
\text{MOV } Rd, \ Rm, \ ROR \ #n
\]
Sticky right-shift and zero-fill right-shift (arithmetic and logical shift)
e.g. (4-bit values, arithmetic shift right by 1): 1011 (−5) → 1101 (−3); 0101 (5) → 0010 (2)
Shifting right by n bits on a two's complement signed binary number has the effect of dividing it by $2^n$, but it always rounds down (towards negative infinity).
This is different from the way rounding is usually done in signed integer division (which rounds towards 0)
Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by $2^n$
For an 8-bit signed integer -1 is 1111 1111. An arithmetic right-shift by 1 (or 2, 3, ..., 7) yields 1111 1111 again, which is still −1.
This corresponds to rounding down (towards negative infinity),
but is not the usual convention for division.
A barrel shifter is a digital circuit that can shift a data word by a specified number of bits in one clock cycle.
It can be implemented as a sequence of multiplexers (mux.), and in such an implementation the output of one mux is connected to the input of the next mux in a way that depends on the shift distance.
Register, optionally with shift operation
- Shift value can be either be:
- 5 bit unsigned integer
- Specified in bottom byte of another register.
- Used for multiplication by constant
Immediate value
- 8 bit number, with a range of 0-255.
- Rotated right through even number of positions
- Allows increased range of 32-bit constants to be loaded directly into registers
Here's the bit layout of an ARM data processing instruction
<table>
<thead>
<tr>
<th>Bits</th>
<th>Field</th>
<th>Value in this example</th>
</tr>
</thead>
<tbody>
<tr>
<td>31-28</td>
<td>Condition code</td>
<td>cond</td>
</tr>
<tr>
<td>27-26</td>
<td>00 = data processing</td>
<td>0 0</td>
</tr>
<tr>
<td>25</td>
<td>Immediate bit (I)</td>
<td>0 or 1</td>
</tr>
<tr>
<td>24-21</td>
<td>Opcode field</td>
<td>0100 (ADD)</td>
</tr>
<tr>
<td>20</td>
<td>S bit</td>
<td>0</td>
</tr>
<tr>
<td>19-16</td>
<td>Operand 1 (Rn)</td>
<td></td>
</tr>
<tr>
<td>15-12</td>
<td>Destination (Rd)</td>
<td></td>
</tr>
<tr>
<td>11-0</td>
<td>Operand 2</td>
<td></td>
</tr>
</tbody>
</table>
Any instruction with bits 27 and 26 as 00 is “data processing”. The four-bit opcode field in bits 24–21 defines exactly which instruction this is: add, subtract, move, compare, and so on. 0100 is ADD.
Bit 25 is the "immediate" bit. If it's 0, then operand 2 is a register. If it's set to 1, then operand 2 is an immediate value. Note that operand 2 is only 12 bits. That doesn't give a huge range of numbers: 0–4095, or a byte and a half. Not great when you're mostly working with 32-bit numbers and addresses.
Each ARM instruction is encoded into a 32-bit word.
Access to memory is provided only by Load and Store instructions.
ARM data-processing instructions operate on data and produce a new value.
They are not like the branch instructions that control the operation of the processor and sequencing of instructions.
But ARM doesn't use the 12-bit immediate value as a 12-bit number. Instead, it's an 8-bit number with a 4-bit rotation, like this:
![Rotation Diagram]
The 4-bit rotation value has 16 possible settings, so it's not possible to rotate the 8-bit value to any position in the 32-bit word. The most useful way to use this rotation value is to multiply it by two. It can then represent all even numbers from zero to 30.
To form the constant for the data processing instruction, the 8-bit immediate value is extended with zeroes to 32 bits, then rotated the specified number of places to the right. For some values of rotation, this can allow splitting the 8-bit value between bytes.
The rotated byte encoding allows the 12-bit value to represent a much more useful set of numbers than just 0–4095. It's occasionally even more useful than the MIPS or PowerPC 16-bit immediate value.
ARM immediate values can represent any power of 2 from 0 to 31. So you can set, clear, or toggle any bit with one instruction.
```
ORR r5, r5, #0x8000       @ Set bit 15 of r5
BIC r0, r0, #0x20         @ ASCII lower-case to upper-case
EOR r9, r9, #0x80000000   @ Toggle bit 31 of r9
```
More generally, you can specify a byte value at any of the four locations in the word:
```
AND r0, r0, #0xff000000   @ Only keep the top byte of r0
```
In practice, this encoding gives a lot of values that would not be available otherwise. Large loop termination values, bit selections and masks, and lots of other weird constants are all available.
Faced with the constraint of only having 12 bits to use, the ARM designers had the insight to reuse the idle barrel shifter to allow a wide range of useful numbers.
No other architecture has this feature. It's unique.
e.g. 0xC0000034 is encoded as the 8-bit immediate 0xD3 with rotation field 0x1, i.e. 0xD3 ROR 2.
However, for a large immediate value “mov” does NOT work, it is not ok to use mov r0,#0x55555555
The reason is that there is no way to fit a 32-bit value into a 32-bit instruction (an instruction must have the instruction-code part and the data part; if the data part is 32 bits long, there is no room left for the instruction-code part).
Instead use ldr r0,=0x55555555
Then, the assembler will generate some code to place the constant 0x55555555 in a nearby table in the code area. Then it uses an instruction to load a data from that table pointed by the program counter and an offset to fill up r0.
LDR rn, [pc, #offset to literal pool]
@ load register n with one word
@ from the address [pc + offset]
MOVS Rd, Operand2
Updates the N and Z flags according to the result.
Can update the C flag during the calculation of Operand2.
Does not affect the V flag.
When an Operand2 constant is used with the instructions MOVS, MVNS, ANDS, ORRS, ORNS, EORS, BICS, TEQ or TST, the carry flag is updated to bit[31] of the constant, if the constant is greater than 255 and can be produced by shifting an 8-bit value. These instructions do not affect the carry flag if Operand2 is any other constant.
Analysis and Learning Frameworks for Large-Scale Data Mining
Kohsuke Yanai and Toshihiko Yanase
Additional information is available at the end of the chapter
http://dx.doi.org/10.5772/51713
1. Introduction
Recently, lots of companies and organizations try to analyze large amount of business data and leverage extracted knowledge to improve their operations. This chapter discusses techniques for processing large-scale data. In this chapter, we propose two computing frameworks for large-scale data mining:
1. Tree-structured data analysis framework, and
2. Parallel machine learning framework.
The first framework is for the analysis phase, in which we find out how to utilize business data through trial and error. The proposed framework stores tree-structured data using a vertical partitioning technique, and uses Hadoop MapReduce for distributed computing. These methods make it possible to reduce the disk I/O load and to avoid computationally intensive processing, such as grouping and combining of records.
The second framework is for the model learning phase, in which we create predictive models using machine learning algorithms. The proposed framework is another implementation of MapReduce. The framework is designed to ease the parallelization of machine learning algorithms and to reduce calculation overheads for iterative procedures. The framework minimizes the frequency of thread generation and termination, and keeps feature vectors in local memory and on local disk during iteration.
We start with a discussion of the process of data utilization in enterprises and organizations, described in Figure 1. We suppose the data utilization process consists of the following phases.
1. Pre-processing phase
2. Analysis phase
3. Model learning phase
4. Model application phase
1.1. Pre-processing phase
Pre-processing phase consists of 2 steps:
Step 1-1 Cleansing
Step 1-2 Structuring
Firstly Step 1-1 removes incorrect values and secondly Step 1-2 transforms table-format data into tree-structured data. This pre-processing phase combines raw data from multiple data
sources and creates tree-structured data in which records from multiple data sources are "joined".
Figure 2 illustrates an example of tree-structured server logs, in which the log data are grouped by each site at the top level. Site information consists of site ID (e.g. "site001") and a list of server information. Server information consists of server ID (e.g. "serv001"), average CPU usage (e.g. "ave-cpu:84.0%") and a list of detail records. Furthermore, a detail record consists of date (e.g. "02/05"), time (e.g. "10:20"), CPU usage (e.g. "cpu:92%") and memory usage (e.g. "mem:532MB").
```
[(site001
   [(serv001 ave-cpu:84.0%
      [(02/05 10:10 cpu:92% mem:532MB)
       (02/05 10:20 cpu:76% mem:235MB)])
    (serv002 ave-cpu:12.6%
      [(02/05 15:30 cpu:13% mem:121MB)
       (02/05 15:40 cpu:15% mem:142MB)
       (02/05 15:50 cpu:10% mem:140MB)])])
 (site021
   [(serv001 ave-cpu:50.0%
      [(02/05 11:40 cpu:88% mem:889MB)
       (02/05 11:50 cpu:12% mem:254MB)])])]
```
Figure 2. Example of tree-structured data.
If we store the data in table format, data grouping and data combining are repeatedly computed in the analysis phase which comes after the pre-processing phase. Data grouping and data combining correspond to "group-by" and "join" in SQL, respectively. Note that the tree structure keeps the data grouped and joined. In general, when the data size is large, the computation cost of data grouping and data combining becomes intensive. Therefore, we store data in tree-structured format so that we avoid repetition of this computationally intensive processing.
1.2. Analysis phase
Analysis phase finds out how to utilize the data through trial-and-error. In most situations the purpose of data analysis is not clear at an early stage of the data utilization process. This is the reason why this early phase needs trial-and-error processes.
As described in Figure 1, the analysis phase consists of 3 independent steps:
Step 2-1 Attribute appending
Step 2-2 Aggregation
Step 2-3 Extraction
This phase iterates Step 2-1 and Step 2-2. The purpose of the iterative process is
- To obtain statistical information and trend,
- To decide what kind of predictive model should be generated, and
- To decide which attributes should be used to calculate feature vectors of the predictive model.
Step 2-1 appends new attributes to tree-structured data by combining existing attributes. We suppose the iteration of attribute appending increases data size by 5-20 times. On the other hand, Step 2-2 calculates statistics of attributes and generates charts that help to grasp characteristics of the data. The calculations of Step 2-2 include mean, variance, histogram, cross tabulation, and so on.
An instance of the iterative process consisting of attribute appending and aggregation is the following.
1. Calculate frequencies of CPU usage (Step 2-2)
2. Append a new attribute "average memory usage for each server" (Step 2-1)
3. Calculate standard deviation of a new attribute "average memory usage" (Step 2-2)
4. Append a new attribute "difference of memory usage from its average" (Step 2-1)
5. ...
We usually append more than 10 new attributes into the raw data. Attribute appending increases value and visibility of data, and eases trial-and-error process for finding how to utilize the data.
After the iterative process of attribute appending and aggregation, Step 2-3 extracts feature vectors from tree-structured data, which are used in model learning phase.
1.3. Model learning phase
Model learning phase generates predictive models which are used in real-world operations of enterprises and organizations. The model learning phase uses machine learning techniques, such as SVM (support vector machine) [1] and K-Means clustering [2].
For instance, this phase generates a model that predicts when hardware troubles will happen in IT system. The input of the model is history of CPU usage and memory usage. The output is date and time.
1.4. Model application phase
Model application phase applies the predictive models obtained from the model learning phase to actual business operations. We emphasize that the input data is processed in real time.
As described in Figure 1, the model application phase consists of 2 steps:
Step 4-1 Extraction
Step 4-2 Classification
Step 4-1 extracts a feature vector from real time data. Usually computation of this step is similar to that of Step 2-3. Step 4-2 attaches a predictive label to the input data by using predictive models. For example, this label represents date and time of hardware trouble. The label is used in business operations as an event.
2. Architecture
We propose architecture for large-scale data mining. Figure 3 illustrates our architecture.
![Architecture Diagram]
Table 1. Approach of each phase. MR: MapReduce, DFS: Distributed File System.
<table>
<thead>
<tr>
<th>#</th>
<th>Phase</th>
<th>Data</th>
<th>Computation</th>
<th>Approach</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Pre-processing</td>
<td>Table format</td>
<td>I/O-intensive</td>
<td>Hadoop MR</td>
</tr>
<tr>
<td>2</td>
<td>Analysis</td>
<td>Tree-structured, large number of attributes</td>
<td>I/O-intensive</td>
<td>Hadoop MR + vertical partitioning + data store in tree-structured format</td>
</tr>
<tr>
<td>3</td>
<td>Model learning</td>
<td>Vectors</td>
<td>CPU-intensive</td>
<td>Iterative MR + Hadoop DFS</td>
</tr>
<tr>
<td>4</td>
<td>Model application</td>
<td>Stream</td>
<td>Real time</td>
<td>Event driven software</td>
</tr>
</tbody>
</table>
As discussed in Section 1, we suppose four phases: pre-processing, analysis, model learning and model application. In pre-processing phase, data is in table format and the computation is I/O-intensive. Hadoop MapReduce [3] is appropriate for the pre-processing from the viewpoint of data format and I/O load reduction. Hadoop MapReduce is distributed computing platform based on MapReduce computation model [4, 5]. Hadoop MapReduce consists of three computation phases: Map, Combine and Reduce. Hadoop MapReduce parallelizes disk I/O by reading and writing data in parallel on Hadoop DFS (Distributed File System). Regarding details of Hadoop, refer to the literature [4, 5].
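As a concrete illustration of this style of I/O-intensive processing, the following minimal Hadoop MapReduce sketch computes the average CPU usage per server. It is not code from the proposed framework; the flat input format assumed here (one whitespace-separated record per line, with the server ID in the second field and the CPU usage in the fifth) is hypothetical.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Sketch: average CPU usage per server, assuming lines like "site001 serv001 02/05 10:10 92 532".
public class AverageCpuUsage {

  public static class CpuMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context ctx)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\\s+");
      // fields[1] = server ID, fields[4] = CPU usage (assumed layout)
      ctx.write(new Text(fields[1]), new DoubleWritable(Double.parseDouble(fields[4])));
    }
  }

  public static class AvgReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    @Override
    protected void reduce(Text key, Iterable<DoubleWritable> values, Context ctx)
        throws IOException, InterruptedException {
      double sum = 0;
      long n = 0;
      for (DoubleWritable v : values) { sum += v.get(); n++; }
      ctx.write(key, new DoubleWritable(sum / n));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "average cpu usage");
    job.setJarByClass(AverageCpuUsage.class);
    job.setMapperClass(CpuMapper.class);
    job.setReducerClass(AvgReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(DoubleWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```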
We developed a cleansing program and a structuring program which run on Hadoop MapReduce. The cleansing program and the structuring program are general-purpose, which means we can use the same programs for all cases. They read a cleansing rule and a structuring rule respectively, and run by following these rules, which are written by users as XML files.
In analysis phase, data is tree-structured and the computation is I/O-intensive. In addition, the number of attributes is large since this phase repeatedly appends new attributes. Therefore, key approach is also reduction of I/O load. We propose a method combining Hadoop MapReduce, vertical partitioning and data store in tree-structured format in Section 3. This phase also needs chart viewer that displays result of aggregation of Step 2-2.
On the other hand, the I/O load in the model learning phase is moderate, because the input of machine learning algorithms is feature vectors, whose size is much smaller than that of the raw data. The computation in the model learning phase is CPU-intensive, since machine learning algorithms include iterative calculation for optimization. Section 4 proposes another MapReduce framework for parallel machine learning, in which iterative algorithms are easily parallelized.
In the model application phase, the data is a stream and the computation should be performed in real time. Therefore, we develop event-driven software that runs when input data arrives. The software includes a library of classification functions. It reads a predictive model written in PMML [6], an XML-based language for model description.
We summarize our approaches in Table 1. The rest of this paper focuses on the frameworks for the analysis phase and the model learning phase, because new techniques are necessary for efficient computation in these two phases, whereas the systems for pre-processing and model application can be implemented by combining existing technologies.
3. Tree-structured data analysis framework
3.1. Method
This section proposes a computing framework that performs data analysis on a large amount of tree-structured data. As discussed in Section 1, an early stage of the data utilization process involves trial-and-error, in which we repeatedly append new attributes and calculate statistics of attributes. As a result of this repeated attribute appending, the number of attributes increases. Therefore, not only scalability in the number of records but also scalability in the number of attributes is important.
The key approaches of the proposed framework are:
1. To partition tree-structured data column-wise and store the partitioned data in separate files corresponding to each attribute, and
2. To use Hadoop MapReduce framework for distributed computing.
The method (1) is referred to as "vertical partitioning." It is well known that vertical partitioning of table-format data is efficient [7]. We propose vertical partitioning of tree-structured data. Figure 4 illustrates the vertical partitioning method. The proposed framework partitions tree-structured data into multiple lists so that each list includes values belonging to the same attribute. Then the framework stores the lists of each attribute in corresponding files. Note that the file for "Average CPU usage" in Figure 4 includes only values belonging to the "Average CPU usage" attribute, and does not include values of any other attribute.
The framework reads only the 1-3 attributes required in a data analysis out of 10-30 attributes, and restores tree-structured data that consists of only the required attributes. In addition, when appending a new attribute, the framework writes only the newly created attribute into files. Without the vertical partitioning technique, it would have to write all existing attributes into files. Thus the proposed method reduces the amount of input data as well as the amount of output data.
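The storage layout can be pictured with the following simplified, flat sketch (one file per attribute); the real framework applies the same idea to nested tree-structured data via the Partition and Restore algorithms shown below, and the class and file-name conventions here are our own.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;

// One file per attribute: an analysis touching 1-3 attributes reads only those files.
final class AttributeStore {
    private final Path baseDir;

    AttributeStore(Path baseDir) {
        this.baseDir = baseDir;
    }

    /** Writes each attribute's value list into its own file, e.g. "Average CPU usage.col". */
    void writeAttributes(Map<String, List<String>> valuesByAttribute) throws IOException {
        for (Map.Entry<String, List<String>> e : valuesByAttribute.entrySet()) {
            Files.write(baseDir.resolve(e.getKey() + ".col"), e.getValue());
        }
    }

    /** Reads back only the attributes actually needed by the analysis. */
    List<String> readAttribute(String attributeName) throws IOException {
        return Files.readAllLines(baseDir.resolve(attributeName + ".col"));
    }
}
```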
3.2. Implementation
The data model of the proposed framework is a recursively-defined tuple.
**Tuple:** Combination of lists and values, e.g. ("serv002" 13 [(15 10)]).
**List:** Sequence of tuples whose types are the same, e.g. [("serv001" 4.0) ("serv002" 2.6)].
**Value:** Scalar, vector, matrix or string, e.g. "532MB".
A round bracket ( ) represents a tuple while a square bracket [ ] represents a list. In this paper, elements of a tuple and a list are separated by white spaces.
Figure 5 shows the pseudo code of the partitioning algorithm. The algorithm partitions tree-structured data into recursive lists by running the function “Partition” recursively. Each list generated by the algorithm consists of values belonging to the same attribute.
Similarly, Figure 6 shows the pseudo code of the restoring algorithm. The algorithm restores tree-structured data from the partitioned attribute data. An example of input for the algorithm is shown in Figure 7. S is the trimmed schema, which excludes attributes unused in the analysis computation. D is generated by replacing the attribute names in the trimmed schema with the recursive lists stored in the attribute files.
\begin{verbatim}
Partition(S, D) {
  if S is atom then return [D]          # a single value becomes a one-element list
  if S is list then {
    L = []
    foreach di in D:
      append Partition(SOE(S), di) to L # partition each element with the element schema
    return transpose of L               # group values of the same attribute together
  }
  if S is tuple then {
    L = []
    foreach (si, di) in (S, D):
      L = L + Partition(si, di)         # concatenate the per-field results
    return L
  }
}
\end{verbatim}
Figure 5. Pseudo code of the partitioning algorithm. S is schema information and D is tree-structured data. The function SOE returns the schema of an element of a list.
Restore(S, D, d=0) {
  if S is atom then return D
  if S is list then {
    L = []
    foreach di in D:
      append Restore(SOE(S), di, d+1) to L
    return transpose L with depth d
  }
  if S is tuple then {
    L = []
    foreach (si, di) in (S, D):
      append Restore(si, di, d) to L
    return L
  }
}
Figure 6. Pseudo code of restoring algorithm. An example of the input is shown in Figure 7. The function SOE returns schema of an element of a list.
S: [(["Average CPU usage"]
     ["Memory Usage"])]
D: [([ave-cpu:84.0% ave-cpu:12.6%] [ave-cpu:50.0%])]
Figure 7. Example of input of the restoring algorithm.
We implemented the partitioning algorithm and the restoring algorithm in Gauche, an implementation of the Scheme programming language. Users implement programs for attribute appending and aggregation in Gauche. The proposed framework combines the user programs with the partitioning and restoring programs. The combined program then runs in parallel on Hadoop Streaming of Hadoop MapReduce 0.20.0. Table 2 summarizes the key Hadoop components used to implement the framework.
Figure 8 shows an example of a user program. The program appends a new attribute "Average memory usage". The variable "new-schema" represents the location of the newly appended attribute in the tree structure. The function mapper generates new tree-structured data that includes only the attribute to be appended. The framework provides accessors to attributes and tuples, such as "ref-Server-tuples" and "ref-Memory-usage".
```scheme
(define new-schema
'(["Average memory usage"]))
(define (mapper site)
(tuple
[foreach (ref-Server-tuples site) (lambda (server)
(tuple
(mean (map ref-Memory-usage (ref-Record-tuples server)))
))]))
```
Figure 8. Example of user program.
3.3. Evaluation
We evaluated the proposed framework on the following benchmark tasks.
**Task A** Calculates the average CPU usage for each server and appends it as a new attribute to the corresponding tuple of server information. If a relational database were used instead of the proposed framework, the SQL for this calculation would include "group-by" and "update".
**Task B** Calculates the difference between CPU usage and average CPU usage for each server. The SQL for this calculation would include "join".
**Task C** Calculates the frequency distribution of CPU usage with an interval of 10. The SQL for this calculation would include "group-by".
**Task D** Calculates the difference between the CPU usages of two successive detail records and appends it as a new attribute to the corresponding detail-record tuple. It is impossible to express this calculation with SQL.
**Task E** Searches for detail records in which both CPU usage and memory usage are 100%.
Figure 9 shows the result of the evaluation on 90 GB of data. We used 19 servers as slave machines for Hadoop: 9 servers with a 2-core 1.86 GHz CPU and 3 GB memory, and 10 servers with two 4-core 2.66 GHz CPUs and 8 GB memory. Thus the Hadoop cluster has 98 CPU cores in total. The vertical axis of Figure 9 represents the average execution time over 5 runs. The result indicates that vertical partitioning accelerates the calculations by 17.5 times on task A and by 12.7 times on task D. Tasks A and D require attribute appending, in which a large amount of tree-structured data is not only read from files but also written into files. That is why the acceleration on tasks A and D is greater than on the other tasks.

Table 3 compares the proposed method with MySQL. Both the proposed framework and MySQL run on a single server, and the size of the benchmark data is 891 MB. Note that parallelization is not used in this experiment, so that we can investigate the effect of vertical partitioning and data storage in tree-structured format without the disturbing factor of parallel computation. We created indexes on the columns of primary id, CPU usage and memory usage in the MySQL tables. Table 3 shows the average and standard deviation of execution times over 5 runs. The performance of the proposed method is comparable or superior to that of MySQL on tasks A, B, C and D, even though the proposed method is mainly implemented in Gauche. On the other hand, the performance of the proposed method on task E is inferior to that of MySQL. This is because MySQL finds records that match the condition by using indexes, while the proposed framework scans the whole data linearly to find the records. However, the actual execution time of the proposed framework on task E is acceptable, since it is not long compared to that on the other tasks.
<table>
<thead>
<tr>
<th>Task</th>
<th>Proposed method [sec]</th>
<th>MySQL [sec]</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>10.67 ± 0.08</td>
<td>402.72 ± 5.55</td>
</tr>
<tr>
<td>B</td>
<td>76.67 ± 0.36</td>
<td>445.48 ± 3.42</td>
</tr>
<tr>
<td>C</td>
<td>13.21 ± 0.18</td>
<td>12.89 ± 0.05</td>
</tr>
<tr>
<td>D</td>
<td>36.36 ± 0.20</td>
<td>-</td>
</tr>
<tr>
<td>E</td>
<td>16.87 ± 0.14</td>
<td>1.34 ± 2.66</td>
</tr>
</tbody>
</table>
Table 3. Comparison of the tree-structured data analysis framework and MySQL using a single server.
As a result of the experiments, we conclude that the proposed framework is efficient for data analysis of a large amount of tree-structured data. The performance could be further improved by implementing the framework in Java instead of Gauche.
4. Parallel machine learning framework
4.1. Method
This section proposes a computing framework for parallel machine learning. The proposed framework is designed to ease parallelization of machine learning algorithms and reduce calculation overheads of iterative procedures.
We start with a discussion of a model of machine learning algorithms. Let \( D = \{(x_n, y_n)\}_{n=0}^{N-1} \) be training data, where \( x_n \) is a \( d \)-dimensional feature vector and \( y_n \) is a label. A machine learning algorithm estimates a model \( M \) that describes \( D \) well. In this paper we discuss machine learning algorithms that can be described as an iteration of the following steps:
\begin{align*}
z_n &= f(x_n, y_n, M) \tag{1} \\
M &= r(g([z_0, z_1, \ldots, z_{N-1}])) \tag{2}
\end{align*}
where \( M \) represents the model to be trained, \( f \) computes a per-sample quantity \( z_n \), \( r \) updates the model from the aggregated result, and \( g \) is a function that satisfies the following constraint.
\[
\forall\, i < j < \cdots < k < N:\quad g([z_0, \ldots, z_{N-1}]) = g([\,g([z_0, \ldots, z_{i-1}]),\, g([z_i, \ldots, z_{j-1}]),\, \ldots,\, g([z_k, \ldots, z_{N-1}])\,]) \tag{3}
\]
For instance, a function that sums the elements of an array satisfies the constraint above. By using this property of \( g \), we reformulate the steps of machine learning algorithms as follows.
\begin{align*}
M_{i,j} &= g([f(x_i, y_i, M), f(x_{i+1}, y_{i+1}, M), \ldots, f(x_{j-1}, y_{j-1}, M)]) \tag{4} \\
M &= r(g([M_{0,i}, M_{i,j}, \ldots, M_{k,N}])) \tag{5}
\end{align*}
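To make constraint (3) concrete, take componentwise summation for \( g \) (the example mentioned above): splitting the data at arbitrary positions and summing the partial sums gives the same total,

$$g([z_0, \ldots, z_{N-1}]) = \sum_{n=0}^{N-1} z_n = \sum_{n=0}^{i-1} z_n + \sum_{n=i}^{j-1} z_n + \cdots + \sum_{n=k}^{N-1} z_n = g([\,g([z_0, \ldots, z_{i-1}]),\, \ldots,\, g([z_k, \ldots, z_{N-1}])\,]),$$

which is exactly what allows each \( M_{i,j} \) in (4) to be computed independently on its own data partition.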
Note that the calculation of each \( M_{i,j} \) can be parallelized, since it depends only on the samples \( (x_n, y_n) \) of its own partition and is independent of the others.
Suppose we use MapReduce for parallelization: the Map phase calculates \( M_{i,j} \) and the Reduce phase calculates \( M \). Although MapReduce fits the parallelization of machine learning algorithms described by the above formulas, using Hadoop MapReduce, the most popular implementation of MapReduce, is unreasonable here, because Hadoop MapReduce is optimized to perform non-iterative algorithms efficiently. The problems with repeatedly running Hadoop MapReduce are the following.
- Hadoop MapReduce does not keep feature vectors in memory devices during iterations.
- Hadoop MapReduce restarts the Map and Reduce threads at every iteration. The initialization overhead of these threads is large compared to the computation time of the machine learning algorithm.
Consequently, the proposed framework provides another MapReduce implementation for iterative machine learning algorithms. The key approaches of the framework are as follows.
1. It keeps feature vectors in memory during iterations. If the data size of the feature vectors is larger than the memory size, it uses the local disk as a cache.
2. It does not terminate threads of Map and Reduce and uses the same threads repeatedly.
3. It controls iterations, read/write and data communication.
4. Users implement only 4 functions: initialization of \( M \), calculation of \( M_{i,j} \), update of \( M \) and the termination condition (see the sketch after this list).
5. It utilizes Hadoop DFS as its file system.
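As a purely illustrative sketch of point 4, the four user-supplied functions could be captured by an interface like the one below; the method names and signatures are our assumptions, not the framework's actual API.

```java
import java.util.List;

// Hypothetical user-facing contract: M is the model type.
public interface IterativeLearner<M> {
    M initModel();                                                   // initialization of M
    M computePartial(List<double[]> x, List<Integer> y, M current);  // M_{i,j} over one data partition
    M updateModel(List<M> partials, M current);                      // combine partial results, update M
    boolean isConverged(M previous, M current, int iteration);       // termination condition
}
```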
A few MapReduce frameworks for iterative computation have been proposed. HaLoop [8] adds loop control, caching and indexing to Hadoop. However, it restarts the Map and Reduce threads at every iteration like Hadoop, so the initialization overheads remain. Twister [10, 11] and Spark [9] reduce the initialization overheads and keep feature vectors in memory during iterations. These frameworks perform similarly to the proposed framework if the input data size is smaller than the total memory size of the computing cluster. However, when the data size is larger than the total memory size, the performance of the proposed framework is superior to that of Twister and Spark, since the proposed framework uses the local disk as a cache.
4.2. Implementation
We implemented the proposed framework in Java. The framework reads feature vectors and configuration parameters from Hadoop DFS (version 0.20.2). Figure 10 illustrates the sequence diagram of the proposed framework. The framework consists of a master thread, a Reduce thread and multiple Map threads. The master thread controls the Reduce thread and the Map threads. The Reduce thread controls the iterations. The Map threads parallelize the calculations of \( M_{i,j} \).
First, the master thread starts multiple Map threads, which read feature vectors from Hadoop DFS and keep the data in memory and on the local disk during the iterations. Second, the master thread starts a Reduce thread. The Map threads and the Reduce thread are not terminated until the iterations end. Next, the Reduce thread initializes \( M \), and the Map threads calculate \( M_{i,j} \) in parallel. The Reduce thread updates \( M \) by collecting the calculation results from the Map threads and continues the iteration.
Figures 11 and 12 show the implementation of the parallel K-Means algorithm using the proposed framework. We omit the initialization of \( M \) and the termination condition since these implementations are obvious. As shown in Figures 11 and 12, parallelization of the algorithm is easily implemented, and the source code is short. The remaining procedures are implemented inside the framework, so users do not have to write code for data transfer or data reading and can focus on the core logic of machine learning algorithms.
4.3. Evaluation
We compared the proposed framework with Hadoop. We used the Mahout library as the implementation of the machine learning algorithms on Hadoop [16]. We used 6 servers as slave machines for both the proposed framework and Hadoop: 4 servers with a 4-core 2.8 GHz CPU and 4 GB memory, and 2 servers with two 4-core 2.53 GHz CPUs and 2 GB memory. In the Map phase, 40 Map threads run in parallel; in the Reduce phase, one Reduce thread runs. The data size of the feature vectors is 1.4 GB. Table 4 shows the execution times of one iteration for three machine learning algorithms: K-Means [2], Dirichlet process clustering [12] and IPM perceptron [13, 14]. The values are the mean and standard deviation over 10 runs. The result indicates that the proposed framework is 33.8-274.1 times as fast as Mahout.
Figure 13 illustrates the scalability of the proposed framework on three machine learning algorithms: K-Means, variational Bayes clustering [15] and linear SVM [1]. The horizontal axes represent the number of Map threads that run in parallel. The vertical axes represent 1 / (execution time), i.e., speed. Figure 13 indicates that the more Map threads run in parallel, the faster the parallelized algorithms run.
class KMeansReducer extends Reducer<KMeansModel> {
    public KMeansModel reduce(KMeansModel[] Mijs) {
        KMeansModel M = new KMeansModel();
        for (int cid = 0; cid < M.num_of_cluster; cid++) {
            // accumulate the per-partition sums (s) and counts (l) for cluster cid
            for (KMeansModel Mij : Mijs) {
                M.s[cid].add(Mij.s[cid]);
                M.l[cid] += Mij.l[cid];
            }
            // new centroid = sum of assigned points / number of assigned points
            M.centroid[cid] = M.s[cid] / M.l[cid];
        }
        return M;
    }
}
Figure 12. Implementation of updating $M$ in parallel K-Means algorithm.
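The Map-side counterpart (referred to as Figure 11 above but not reproduced in this text) computes per-cluster sums and counts over one data partition. The following self-contained sketch, with names of our own choosing, shows that computation; the `sum` and `count` arrays correspond to `M.s` and `M.l` in Figure 12.

```java
import java.util.List;

// Partial result M_{i,j} for K-Means over one partition of feature vectors.
final class KMeansPartial {
    final double[][] sum;   // per-cluster coordinate sums (cf. M.s in Figure 12)
    final long[] count;     // per-cluster point counts    (cf. M.l in Figure 12)

    KMeansPartial(int k, int dim) {
        sum = new double[k][dim];
        count = new long[k];
    }
}

final class KMeansMapSide {
    /** Assigns each point to its nearest centroid and accumulates sums and counts. */
    static KMeansPartial computePartial(List<double[]> partition, double[][] centroids) {
        int k = centroids.length;
        int dim = centroids[0].length;
        KMeansPartial p = new KMeansPartial(k, dim);
        for (double[] x : partition) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < k; c++) {
                double d = 0.0;
                for (int j = 0; j < dim; j++) {
                    double diff = x[j] - centroids[c][j];
                    d += diff * diff;               // squared Euclidean distance
                }
                if (d < bestDist) { bestDist = d; best = c; }
            }
            for (int j = 0; j < dim; j++) p.sum[best][j] += x[j];
            p.count[best]++;
        }
        return p;
    }
}
```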
<table>
<thead>
<tr>
<th>Algorithm</th>
<th>Proposed method [sec]</th>
<th>Mahout [sec]</th>
</tr>
</thead>
<tbody>
<tr>
<td>K-Means</td>
<td>0.93 ± 0.052</td>
<td>31.8 ± 1.49</td>
</tr>
<tr>
<td>Dirichlet process clustering</td>
<td>1.14 ± 0.057</td>
<td>67.4 ± 3.87</td>
</tr>
<tr>
<td>IPM perceptron</td>
<td>0.11 ± 0.026</td>
<td>30.7 ± 2.00</td>
</tr>
</tbody>
</table>
Table 4. Comparison of the parallel machine learning framework and Mahout on K-Means [2], Dirichlet process clustering [12] and IPM perceptron [13, 14].
Figure 13. Scalability evaluation of the parallel machine learning framework.
We also applied the framework to parallelize the learning algorithm of an acoustic model for speech recognition. The learning algorithm reads voice data and the corresponding text data, and generates a Hidden Markov model using the Forward-Backward algorithm. We compared the performance of the parallelized algorithm with that of a single-thread implementation in C. We used 1.0 GB of feature vectors as the input of these programs. The parallelized algorithm on the proposed framework with 32 parallel Map threads ran 7.15 times faster than the single-thread implementation. Considering the speed difference between Java and C, the proposed framework performs the parallelization well. Consequently, we conclude that the proposed framework is efficient for parallel machine learning.
5. Conclusion
This chapter discussed techniques for processing large-scale data. First, we explained that the process of data utilization in enterprises and organizations includes (1) a pre-processing phase, (2) an analysis phase, (3) a model learning phase and (4) a model application phase. Second, we described an architecture for the data utilization process. Then we proposed two computing frameworks: a tree-structured data analysis framework for the analysis phase, and a parallel machine learning framework for the model learning phase. The experimental results demonstrated that our approaches work well.
Future work includes:
- To implement tree-structured data analysis framework using Java.
- To design original machine learning algorithms which run on the parallel machine learning framework.
- To formulate a framework for model application phase.
Author details
Kohsuke Yanai
Research & Development Centre, Hitachi India Pvt. Ltd.
Central Research Laboratory, Hitachi Ltd.
Toshihiko Yanase
Central Research Laboratory, Hitachi Ltd.
6. References
Top-Down Enterprise Application Integration with Reference Models
Willem-Jan van den Heuvel & Wilhelm Hasselbring & Mike Papazoglou
Infolab, Dept. Information Management and Computer Science, Tilburg University, PO Box 90153, NL-5000 LE Tilburg, Netherlands,
Email: {wjheuvel|hasselbring|mikep}@kub.nl
Abstract. For Enterprise Resource Planning (ERP) systems such as SAP R/3 or IBM SanFrancisco, the tailoring of reference models for customizing the ERP systems to specific organizational contexts is an established approach. In this paper, we present a methodology that uses such reference models as a starting point for a top-down integration of enterprise applications. The re-engineered models of legacy systems are individually linked via cross-mapping specifications to the forward-engineered reference model’s specification. The actual linking of reference and legacy models is done with a methodology for connecting (new) business objects with (old) legacy systems.
1 Introduction
With the traditional bottom-up approach to the integration of existing (legacy) systems, the structure of the (merged) integrated information models is highly determined by the overlaps among the component system models. As discussed in [Has99], the maintenance of such integrated models may become a serious problem, because the merged models rapidly become very complex; usually more complex than required for the actual integration goals. This situation can lead to severe scalability problems with respect to execution performance, usability, and maintenance. For a discussion of the resulting problems refer to [Has99].
Another way to approach the integration of heterogeneous information systems is a top-down process. Starting with common reference models, the individual component models are integrated into these common reference models [Has99]. The resulting integration process is illustrated in Figure 1. The local models of the legacy systems are not integrated into a common global model (which would be the ‘federated schema’ in federated database systems [SL90]) as it would be the case with the traditional bottom-up approach. Instead, an integration of the given reference model with each individual local model is constructed via a linking mechanism (a form of type matching in our methodology). A cross-mapping specification defines the mapping from the (given) reference model to the (local) legacy models. The integration process starts top-down with the reference model. The linking process combines forward and reverse engineering techniques. Both, the reference model and the legacy models are specified in our Component Definition Language (CDL), before they are integrated. An important difference with integration in federated database systems is that with enterprise applications, we integrate business models, not database schemas. CDL has been proposed as the standard
component specification language by the Business Object Domain Task Force of the OMG. It is a superset of the OMG Interface Definition Language (IDL) and the ODMG Object Definition Language (ODL). We introduce specific extensions for business modeling and, in particular, cross-mapping specification.
For the actual linking of reference and legacy models, we employ a methodology for business model integration, called Business Applications to LEgacy Systems (BALES) [vdH01]. This methodology allows reusing as much of the legacy data and functionality as needed for the development of applications that meet modern organization requirements and policies. In particular, the BALES methodology allows to construct configurable business applications on the basis of business objects and activities that can be linked through parameterization to their legacy counterparts.
2 The BALES Methodology for Linking Business Models
Most of the approaches to integrate legacy systems with modern applications are designed around the idea that data residing in a variety of legacy database systems and applications represents a collection of entities that describe various elements of an enterprise. Moreover, they assume that by combining these entities in a coherent manner with legacy functionality and objectifying (wrapping) them, legacy systems can be readily used in place. In this way it is expected that the complexities surrounding the modern usage of legacy data and applications can be effectively reduced. Unfortunately, these approaches do not take into account the evolutionary nature of business and the continual changes of business activities and policies which need to be reflected in the legacy systems. Although part of the functionality of a legacy system can be readily used, many of its business activities and policies may change with the passage of time.
One important characteristic of business object technology, which also contributes to the critical challenge described above, is the explicit separation of interface and implementation.
Fig. 2. Methodology of linking reference and legacy models. Workflows initiate business activities and business activities use business objects. The dashed lines are meant to illustrate (possible) mappings between reference and legacy elements.
The BALES methodology, which is under development [vdH01], has as its main objective to link business objects (BOs) with legacy objects (LOs). Legacy objects serve as conceptual repositories of extracted (wrapped) legacy data and functionality. These objects, just like business objects, are described by means of their interfaces rather than their implementation. A business object interface can be constructed from a legacy object interface partition comprising a set of selected attribute and method signatures. All remaining interface declarations are masked off from the business object interface specification. In this way, business objects in the BALES methodology are configured so that part of their specification is supplied by data and services found in legacy objects. A business object within the reference model can thus have a part that is directly supplied from some legacy data and services which it integrates with data and services defined at its own level. This means that the business object interfaces are parameterizable to allow these objects to evolve by accommodating upgrades or adjustments in their structure and behavior.
The core of the BALES linking methodology comprises three phases, as illustrated on the right border of Figure 2: forward engineering, reverse engineering and linking. To illustrate this linking methodology, a simplified example is drawn from the domain of maintenance and overhaul of aircraft. This example is inspired by building block definitions that are currently being developed at the Department of Defense in the Netherlands [Dep97].
The upper part of Figure 2 illustrates the reference model for the business domain (which is based on the Defense and Aerospace reference models for SAP R/3 [SAP]) in terms of workflows, business activities and business objects. As can be seen from this figure, the reference model defines a Request_Part workflow which comprises three business activities: Request, Prognosis and Issue. The Request_Part workflow is initiated by a maintenance engineer who requests parts (for maintaining aircraft) from a warehouse. A warehouse manager can react to a request in two different ways:
1. Firstly, the manager can directly issue an invoice and charge/dispatch the requested products to the requester. In this case, the workflow will use information from the Request activity to register the maintenance engineer’s request in an order list. This list can be used to check availability and plan dispatch of a specific aircraft part from the warehouse. The Request activity uses the business (entity) objects Part and Warehouse for this purpose. Subsequently, the workflow initiates the Issue activity (see Figure 2). The Issue activity registers administrative results regarding the dispatching of requested parts and updates the part inventory record by means of the Part Stock business object. The business object Request Part Control is an auxiliary control object used during the execution of the workflow to store and control the state of the running business activities. If the requested part is not in stock, then an Order Part workflow is triggered (not shown in this figure). This workflow then orders the requested parts to fulfill the request of the Request Part workflow.
2. Secondly, in case of an ‘abnormal’ request, for example if the customer informs the warehouse manager about a large future purchase, the manager may decide to run a prognosis. This activity first registers the request information provided by the Request business activity and runs a prognosis on the basis of the availability and consumption history of the requested part. The Prognosis activity uses information from the Part and Warehouse business objects for this purpose. After the prognosis finished successfully, the part can be reserved. If the results of the activity Prognosis are negative with respect to the future availability of the requested aircraft part, another workflow for ordering parts is activated.
The lower part of Figure 2 represents the result of the reverse engineering activity in the form of two activities (wrapped applications and related databases) Material Requirements Planning and Purchase Requisition. These activities make use of five legacy objects to perform their operations. Figure 2 also indicates that the reference workflow draws not only on “modern” business objects and activities, but also on existing (legacy) data and functionality to accomplish its objectives. For example, business activities such as Request and Issue on the reference model level are linked to the legacy activities Material Requirements Planning and Purchase Requisition by means of solid lines. This signifies the fact that the activities on the reference model level reuse the functionality of the activities at the legacy model level. The same applies for business objects at the reference model level such as Part, Part Stock and Stock Location, which are parameterized with legacy objects. In this
simplified example we assume that problems such as conflicting naming conventions and semantic mismatches between the reference and legacy models have already been resolved.
Figure 3 illustrates the integration approach of the BALES methodology and the individual steps applied during its three phases. The forward engineering phase transforms a conceptual reference model (e.g., SanFrancisco [A+98] Reference Models specified in the UML notation [BRJ99]) into CDL and maps this CDL definition to a Meta-CDL Model which serves as a basis for comparison between business and legacy enterprise models. This phase comprises the activities which correspond to steps 1, 2, and 3 in Figure 3. In the second phase of the BALES methodology, we represent the legacy objects and activities in terms of CDL and link them to a Meta-CDL Legacy Model. The activities during the reverse engineering phase, which correspond to steps 4, 5, and 6 in Figure 3, are similar to those performed during the forward engineering phase. The actual linking is then done in step 7, and the cross-mapping specification is constructed in the final step 8. Below, we illustrate the BALES methodology for constructing a cross-mapping specification by means of the aircraft maintenance and overhaul example following those steps:
1. **Reference Modeling:**
The forward engineering activity starts with the given reference model. The reference model reveals the activities, structure, information, actors, goals, and constraints of the business in terms of business objects, activities, and workflows, and is illustrated by the reference workflow in the upper part of Figure 2.
2. **CDL-Specification of the Reference Model:**
The interface descriptions of the business objects and activities need to be constructed on the basis of the reference model. To formally describe the interfaces of business objects, we use a variant of the CDL that has been developed by the OMG [Dat97]. CDL is a declarative specification language — a superset of the OMG Interface Definition Language (IDL) and the ODMG Object Definition Language (ODL) with specific extensions for business modeling and, in particular, cross-mapping specification. A specification in CDL defines business object interfaces, structural relationships between business objects, collective behavior of related business objects, and temporal dependencies among them [Dat97]. Detailed descriptions of the CDL syntax can be found in [Dat97] and some practical experience with the use of CDL is discussed in [HMPS98].
The reference model represented in the upper part of Figure 2 serves as a starting point to specify the business objects/activities in CDL. Figure 4 gives an extract of the CDL specification involving a business object with interesting dynamic behavior, namely the Request Part Control object. This CDL specification describes the interface of the business control object Request Part Control (see Figure 2) and shows that this business object encapsulates three business activities: Request, Prognosis, and Issue. The dynamic behavior of the encapsulated Prognosis activity should be interpreted as follows: the Prognosis activity is triggered by the incoming signal register-expected. After this signal is received, the activity moves from the state initial to the state forecasting. This is expressed with the state transition rule (STR) Start_forecasting. While the Prognosis activity resides in the state forecasting, it can perform the forecast operation. This operation calculates the required stock on the basis of past data (stock levels) in the warehouse (warehouseID) and the required future demand of the part (partID) for the period consumptionPeriod.
The manualReorderPlanning business operation of the Prognosis activity refers to the situation where the user has to define the reorder point and the safety stock him/herself. This approach is in contrast to the automatic reorder planning method, where both parameters are automatically forecasted. Reorder planning is a special category of consumption-based planning that calculates the reorder point on the basis of past and future consumption, delivery lead times, etc. In this planning strategy, the available stock is compared with the reorder point; if the actual stock falls below the reorder point, the system automatically creates an order form.
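Since the CDL listing itself (Figure 4) is not reproduced in this text, the following Java-style rendering only illustrates the interface structure described above; it is not CDL, and the parameter types and method names are our own assumptions.

```java
// Illustrative rendering only; the actual specification is written in OMG CDL (Figure 4).
interface RequestActivity { }   // placeholder for the encapsulated Request activity
interface IssueActivity { }     // placeholder for the encapsulated Issue activity

interface PrognosisActivity {
    /** Incoming signal; triggers the transition initial -> forecasting (STR Start_forecasting). */
    void registerExpected(String partID, String warehouseID);

    /** Allowed in state "forecasting": required stock from past stock levels and future demand. */
    double forecast(String partID, String warehouseID, int consumptionPeriod);

    /** Manual reorder planning: the user supplies reorder point and safety stock explicitly. */
    void manualReorderPlanning(double reorderPoint, double safetyStock);
}

// The control object encapsulates the three business activities of the Request_Part workflow.
interface RequestPartControl {
    RequestActivity request();
    PrognosisActivity prognosis();
    IssueActivity issue();
}
```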
3. **Instantiating the Meta-CDL Reference Model:**
The CDL descriptions of both the forward- and backward-engineered models have to be connected to each other in order to be able to ascertain which parts of the legacy object interfaces can be re-used with new applications. To achieve this, we represent both business and legacy CDL specifications in a repository system. For this purpose we utilize the ConceptBase system [JJNS98], which has an advanced query language for abstract models (like the CDL meta model) and it uniformly represents objects at any abstraction level (data objects, model components, modeling notations, etc.). The advantage of this repository approach is that the content of the repository, viz. Meta-CDL models, is subject to automated analysis, mainly by means of queries.
Fig. 4. The CDL specification for the reference business object Request_Part_Control.
After the interfaces of the business objects and activities have been specified in CDL, the CDL specifications are instantiated according to a Meta-CDL reference model. This meta model depicts the instantiations of the CDL model elements. It defines how the CDL constructs are related to each other, and provides information about their types. The CDL meta-modeling step is used as a basis to infer how the constructs found in a Meta-CDL reference model can be connected to related constructs found in the legacy models (see below). In summary, the Meta-CDL model serves as a shared description (could be compared to the ‘canonical data model’ in federated database systems [SL90]) to which the forward and the reverse engineered CDL models will be linked in order to ascertain which (portions of) legacy elements can be linked to the reference model level. In this way, it is possible to parameterize reference model elements with related legacy elements for linking them to each other in the cross-mapping specification.
4. **Reverse Engineered Legacy Model:**
The reverse engineered model represents the wrapped legacy data and functionality. To construct the legacy objects, we rely on techniques that combine object wrapping and meta-modeling with semantic schema enrichment [PR95, PvdH00]. The legacy object model comprises a distinct legacy object and activity layer in the BALES methodology (see bottom part of Figure 2).
Reverse engineered legacy activities such as Material Requirements Planning and Purchase Requisition and wrapped objects like Part, Plant, Warehouse, etc., are represented in the reverse engineered model as illustrated in Figure 2. The legacy activity Material Requirements Planning is used to determine the requirements for parts at an aircraft maintenance location.
5. **CDL Specification of the Legacy Model:**
The interfaces of the legacy objects and activities are described in CDL in the same way as we explained for reference activities and objects. Figure 5 presents an example interface of the legacy object Warehouse. As can be seen from this example, the legacy object Warehouse offers the legacy activity Material Requirements Planning. This legacy activity can be used to plan all the part requirements in the warehouse. For this purpose it uses the legacy operation forecast-StochModel (where stochastic planning is a special form of consumption-based planning, see the business operation forecast).
The definitions in the legacy object Warehouse will subsequently be used as a basis to construct the interface of the reference business object Warehouse.
Fig. 5. The CDL specification for the legacy object Warehouse.
6. **Instantiating the Meta-CDL Legacy Model:**
After the CDL specifications of the legacy components are available, they are also instantiated into the meta-model repository.
7. **Link Phase of the CDL Meta Models:**
When both the forward and reverse engineered CDL descriptions have been instantiated by means of the Meta-CDL model in ConceptBase, the actual linking of business objects and activities to legacy objects and activities can take place.
We assume the semantic matching problem is not solvable in an automatic way; many techniques have been proposed such as ontologies and lexicons, but their practical applicability is often weak. They only seem to work in a clean lab environment. This is even harder when working with reverse engineered code that uses obscure variable names.
However, since both the forward and reverse engineered models ideally are defined in CDL, the underlying syntactical structures are alike. Our idea is to limit the “semantic” matching to the types of the reference and legacy model constructs. Moreover, we look at a representation of CDL descriptions as type trees. In a type tree, a leaf represents a type, and a node represents a typed entity. We do not look at the names of the typed entities, but only at the type structure. Figure 5 shows the legacy CDL specification that has been derived from an SAP R/2 legacy system. It corresponds to the reference CDL specification (for SAP R/3) in Figure 4.
Given an interface description in CDL $I_0$ and a set of several CDL descriptions $I_1 \cdots I_n$, we want to know which CDL description in the set fits best to the given CDL description. A type tree is a directed acyclic graph $G = (N, E, S, C, R)$, where $N$ is the set of nodes and $E$ is the set of edges. The leaves $L \subseteq N$ of $G$ are nodes with no outgoing edges. The leaves are marked by Simple Type, all other nodes by Composite Type. The inner nodes in $N \setminus L$ are called constructor nodes.
The predefined set Simple Types includes types such as Integer, String, etc. The predefined set Composite Types includes type constructors like function and record. Both sets may be extended by user-defined types. The function
$$S : L \rightarrow \text{Simple-Types}$$
assigns to each leaf a simple type. The function
$$C : N \setminus L \rightarrow \text{Composite Types}$$
assigns to each constructor node a composite type. The partial function
$$R : E \rightarrow \text{Roles}$$
can be used to assign a role to some edges. For a node with the constructor function, for example, the roles can be return type, input type, and output type. Other roles can be method or attribute. Given two type trees $t_1, t_2$, we intend to compute a number which is a measure of the similarity of $t_1$ and $t_2$. The higher this number, the greater the similarity.
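As a rough illustration of the idea (the actual weighted matching algorithm is given in [vdH01] and is not reproduced here), a type tree can be represented and compared purely on its type structure, ignoring the names of the typed entities. The classes and the naive scoring below are our own simplification and, unlike the full algorithm, ignore roles and unordered children.

```java
import java.util.ArrayList;
import java.util.List;

// A leaf carries a simple type ("Integer", "String", ...); an inner node a composite type
// ("function", "record", ...). Entity names are deliberately not stored.
final class TypeNode {
    final String type;
    final List<TypeNode> children = new ArrayList<>();

    TypeNode(String type) {
        this.type = type;
    }
}

final class TypeTreeMatch {
    /** Naive structural similarity: counts positions where the types of both trees agree. */
    static int similarity(TypeNode a, TypeNode b) {
        int score = a.type.equals(b.type) ? 1 : 0;
        int n = Math.min(a.children.size(), b.children.size());
        for (int i = 0; i < n; i++) {
            score += similarity(a.children.get(i), b.children.get(i));
        }
        return score;
    }
}
```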
Figure 6 illustrates the type-tree matching process for the CDL examples we have discussed above. The left hand side of this figure depicts the type tree of the reference (enterprise) model. The right hand side expresses the reverse engineered (SAP R/2) legacy system model, and has been derived from Figure 5. The algorithm now calculates the match between both type trees. This particular calculation results in an equivalence ratio of \( \frac{139}{180} \). For details on the algorithm, we refer the reader to [vdH01].
Fig. 6. The type-tree matching process for the reference model and the legacy enterprise model.
8. **CDL Specification of the Cross Mapping:**
The BALES methodology results in a CDL specification of reference elements in terms of their related legacy counterparts. The interfaces that are most likely to match, now need to be checked by the designer to resolve semantic conflicts. The syntactically and semantically matched constructs now need to be specified in the resulting parameterized business (task) object(s). For this purpose we use the initial CDL specification for reference objects from step 3, in which we connect reference element specifications with links to equivalent (mappable) legacy component specifications that we identified by means of the matching algorithm.
An example of such a mapping is given in Figure 7. This (simplified) example defines the reference object operation `forecast` in terms of the legacy operation `Material_Requirements_Planning` which is embedded in the business object by means of the linking operator `-->`.
This linking process is followed for each legacy system that shall be integrated into the common reference model.
### 3 Conclusions and Future Research
In this paper, we present a top-down approach to enterprise application integration, whereby reference models are used as starting point for the integration process. The linking of reference and legacy models combines forward and reverse engineering techniques employing the BALES methodology. A resulting cross-mapping specification defines the mapping from the reference model to the individual legacy models. The
BALES methodology has as its main objective to inter-link parameterizable business objects to legacy objects. Legacy objects serve as conceptual repositories of extracted (wrapped) legacy data and functionality. These objects are, just like business objects, described by means of their interfaces rather than their implementation. Business objects in the BALES methodology are configured so that part of their implementation is supplied by legacy objects. Future research includes considering similarity weights in the matching algorithm.
The reference models serve as the starting point in the integration process with the top-down approach. The overall integrated system will be more scalable than with the bottom-up approach, because the integrated reference models do not grow linearly with the number of component systems. The decentralized responsibility for maintaining the cross-mapping specifications reduces the central coordination needs and distributes the maintenance cost. Starting with the reference models should avoid changes to the fundamental structure of these models, making the integration more usable and scalable. For integrations with a small number of local/component systems, the top-down approach may not offer the optimal solutions, but for integrations with a large number of connected systems, we will obtain a more usable and maintainable overall systems architecture.
In practice, we can also expect a *yo-yo* approach, as discussed in [Has99]: the integration process alternates between bottom-up and top-down steps. For instance, the bottom-up process may provide input for extending the reference models. In the presented example, which was taken from a project with the Department of Defense in the Netherlands, it turned out that the existing reference models (the so-called Defense and Aerospace Solution Maps [SAP]) for SAP R/3 did not cover all requirements of this Dutch setting.
These problems are currently addressed by SAP through extending their respective reference models according to these additional requirements. The development process must take feedback, which is based on experience with actual applications, into consideration. Anyway, it is important to start at the top (with the common models). To take the analogy of the yo-yo toy: when the game starts, the reel should be coiled up.
References
Framework for path finding in multi-layer transport networks
Dijkstra, F.
Appendix A
Algorithm Time Complexity
A.1 Running Time of Multi-Layer Path Finding
In both the Multi-Layer-Breadth-First and Multi-Layer-k-Shortest-Path algorithms (listings 7.1 and 7.4 respectively) the running time will mostly depend on the size of the queue $Q$. The best estimate we can give is to first estimate the length of the shortest path, and then estimate the number of paths of that length. Since the algorithms are basically a flooding mechanism, we assume that the path branches at each hop, and the number of branches after $i$ hops is $O(r^i)$, with $r$ the number of branches per hop. If we ignore all suppression mechanisms, as we should for the worst-case scenario, the queue length is proportional to the number of branches, $O(r^i)$. The number of branches per hop $r$ is roughly proportional to the average out-degree and to the number of possible labels $|\langle Lb \rangle|$ per layer. The average out-degree is the average number of adjacencies:
$$|\langle Adj \rangle| = \frac{|E_i|}{|V_i|}$$ \hspace{1cm} (A.1)
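As a quick numerical illustration with hypothetical values: for \( |E_i| = 300 \) edges, \( |V_i| = 100 \) vertices and \( |\langle Lb \rangle| = 4 \) labels per layer,

$$|\langle Adj \rangle| = \frac{300}{100} = 3, \qquad r \approx |\langle Adj \rangle| \times |\langle Lb \rangle| = 12 \text{ branches per hop.}$$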
Now we can estimate the queue length.
\begin{align*}
\mathcal{O}(|Q|) &= \mathcal{O}(r^i) \\
&= \mathcal{O}\left(\left(\frac{|E_i| \times |\langle Lb \rangle|}{|V_i|}\right)^{|V|}\right) \\
&= \mathcal{O}\left(\left(\frac{(|N| \times |Y| + |L|) \times |\langle Lb \rangle|}{|N| \times |Y|}\right)^{|N| \times |Y|}\right) \tag{A.2}
\end{align*}
Assuming that our network is a small world network, the average path length is $O(\log(|V|))$ [p1]. While the estimate of the worst case remains the same, the estimate for the average running time would reduce $i$ from $|\mathcal{V}|$ to $\log(|\mathcal{V}|)$:
$$
\mathcal{O}(|Q|) = \mathcal{O}(r^i) = \mathcal{O}\left(\frac{|\mathcal{E}_i| \times |\langle Lb \rangle|}{|\mathcal{V}|} \log(|\mathcal{V}|)\right) = \mathcal{O}\left(\frac{|\mathcal{N}| \times |\mathcal{Y}| + |L| \times |\langle Lb \rangle|}{|\mathcal{N}| \times |\mathcal{Y}|} \log(|\mathcal{N}| \times |\mathcal{Y}|)\right)
$$
Such exponential behaviour is typical for an NP-complete problem, such as path-constrained path finding.
A.2 Multi-Layer Dijkstra’s Algorithm
Listing A.1 shows MULTI-LAYER-DIJKSTRA, a variant of the Dijkstra algorithm applied to the graph $G_s$ as defined in section 7.5.
This algorithm is an improvement over Dijkstra’s algorithm [p11]. Dijkstra’s algorithm applied to the graph $G_s$ in figure 7.4 would find the path $A_{Eth} - B_{Eth} - B_{24c} - E_{24c} - D_{24c} - D_{Eth} - D_{3c7v} - E_{3c7v} - F_{3c7v} - F_{Eth} - C_{Eth}$ as shortest path from $A$ to $C$. This is not correct, due to the limited capacity between $D$ and $E$. The Multi-Layer-Dijkstra algorithm in listing A.1 would find the correct shortest path, $A_{Eth} - B_{Eth} - B_{24c} - E_{24c} - D_{24c} - D_{Eth} - D_{3c7v} - B_{3c7v} - E_{3c7v} - F_{3c7v} - F_{Eth} - C_{Eth}$.
Nevertheless, the Multi-Layer-Dijkstra algorithm is still imperfect. For example, while it finds the shortest path from $A$ to $C$, it will not find the shortest path from $C$ to $A$. The reason is that this algorithm does check the used bandwidth, but it only keeps track of the bandwidth usage in a global variable, rather than per path. This means that it adds edges to the list of used bandwidth, even if an edge later turns out not to be used anymore. This condition is too strict, resulting in false negatives, such as the above.
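Since Listing A.1 itself is not reproduced in this appendix text, the following is a rough, simplified sketch (all class and field names are ours) of the idea described above: ordinary Dijkstra relaxation plus a single, global used-bandwidth table. The global bookkeeping is also what produces the false negatives: an edge's remaining capacity stays reduced even when the tentative path through it is later abandoned.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Simplified illustration only; not the thesis' Listing A.1.
final class MultiLayerDijkstraSketch {
    record Edge(String from, String to, double weight, double capacity, double demand) {}

    static Map<String, Double> shortestDistances(List<Edge> edges, String src) {
        Map<String, List<Edge>> adj = new HashMap<>();
        for (Edge e : edges) adj.computeIfAbsent(e.from(), k -> new ArrayList<>()).add(e);

        Map<String, Double> dist = new HashMap<>();
        Map<Edge, Double> usedBandwidth = new HashMap<>();   // global, not per path
        PriorityQueue<String> queue = new PriorityQueue<>(
                (a, b) -> Double.compare(dist.getOrDefault(a, Double.MAX_VALUE),
                                         dist.getOrDefault(b, Double.MAX_VALUE)));
        dist.put(src, 0.0);
        queue.add(src);

        while (!queue.isEmpty()) {
            String u = queue.poll();                          // extract-min
            for (Edge e : adj.getOrDefault(u, List.of())) {
                double remaining = e.capacity() - usedBandwidth.getOrDefault(e, 0.0);
                if (remaining < e.demand()) continue;         // not enough bandwidth left
                double alt = dist.get(u) + e.weight();
                if (alt < dist.getOrDefault(e.to(), Double.MAX_VALUE)) {
                    dist.put(e.to(), alt);
                    usedBandwidth.merge(e, e.demand(), Double::sum); // reduce capacity globally
                    queue.remove(e.to());                     // re-insert with the new key
                    queue.add(e.to());
                }
            }
        }
        return dist;
    }
}
```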
Lines 1-6 of the meta-code initialise all vertices and edges. Line 7 inserts all vertices into the queue $Q$. The main algorithm starts at line 8. Line 9 extracts the vertex $u$ from the queue that has the shortest weight (i.e., $d[u] \leq d[v]$ $\forall v \neq u \in Q$). Vertex $u$ can be regarded as the new scanning vertex towards destination $v_{dst}$. Consequently, we have to reduce the bandwidth of the last edge in the path to vertex $u$ by the amount of consumed bandwidth (which in this case we take from the edge parameter $B_{e}(e)$). Lines 13 and 14 make sure we only retrieve the shortest path between a source and a single destination (as
A.3. Running Time of Multi-Layer-Dijkstra
The running time of algorithm A.1 is slightly longer than Dijkstra’s algorithm due to the extension in line 20, which re-inserts edges into the queue. If we
ignore this extension, the running time is:
\[ \mathcal{O}(\text{Algorithm A.1}) = \mathcal{O}(|V_s| \times \mathcal{O}(\text{Extract-Min}) + |E_s| \times \mathcal{O}(\text{Insert})) \]
(A.4)
Here, \( \mathcal{O}(\text{Extract-Min}) \) is caused by line 9 and \( \mathcal{O}(\text{Insert}) \) is caused by line 21. If the graph is sufficiently sparse (\(|E| < |V|^2\)) [p9], this equation reduces to \( \mathcal{O}(|V_s| \times \log(|V_s|) + |E_s| \times \mathcal{O}(1)) = \mathcal{O}(|V_s| \cdot \log(|V_s|) + |E_s|) \).
Equation 7.34 estimates \( |E_s| \approx (|A| + |L|) \times T^{|Y|} \), and we assume that \( \mathcal{O}(|A|) = \mathcal{O}(|N| \times |Y|) \). Equation 7.29 gives the upper limit \( |V_s| \approx |N| \times |S| \approx |N| \times |Y| \times T^{|Y|} \), with \( T = \langle |T(y)| \rangle \) the average number of technologies per layer.
The running time of algorithm A.1 with constant edge weights \( W_e(e) \) is:
\[ \mathcal{O}(\text{Algorithm A.1}) = \mathcal{O}(|E_s| + |V_s|) = \mathcal{O}((|A| + |L|) \times T^{|Y|} + |N| \times |Y| \times T^{|Y|}) \]
(A.5)
The running time of algorithm A.1 with variable edge weights \( W_e(e) \) is:
\[ \mathcal{O}(\text{Algorithm A.1}) = \mathcal{O}(|E_s| + |V_s| \cdot \log(|V_s|)) = \mathcal{O}\left((|A| + |L|) \times T^{|Y|} + |N| \times |Y| \times T^{|Y|} \times \log(|N| \times |Y| \times T^{|Y|})\right) \]
(A.6)
A.4 Running Time of Multi-Layer-Breadth-First
The rough estimate of the running time in the previous section does not provide many insights. In this section, we will assume that a segment of a shortest path is also a shortest path. This means we can abort any path if it contains a (node, adaptation stack) tuple that we encountered before. Such an algorithm would be comparable to Multi-Layer-Dijkstra, as described above. This allows us to do a more thorough comparison of running times between path finding in \( G_l \) and path finding in \( G_s \).
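A minimal sketch of the restriction just described: a breadth-first search over (node, adaptation stack) states that abandons any state it has seen before. The callable `adj`, the tuple representation of stacks, and the termination test on an empty stack are assumptions of the sketch, not the thesis's actual data structures.
```
from collections import deque

def multilayer_bfs(adj, src, dst, initial_stack):
    """Breadth-first search over (node, adaptation stack) states.

    adj(node, stack) is assumed to yield (next_node, next_stack) pairs.
    A state is abandoned as soon as the same (node, stack) tuple has been
    reached before -- the restriction discussed above, which bounds the
    queue to at most one entry per (node, stack) combination.
    """
    start = (src, initial_stack)
    seen = {start}
    queue = deque([(start, [src])])
    while queue:
        (node, stack), path = queue.popleft()
        if node == dst and not stack:      # destination reached with an empty stack (assumed goal test)
            return path
        for nxt_node, nxt_stack in adj(node, stack):
            state = (nxt_node, nxt_stack)
            if state in seen:              # duplicate (node, stack): prune
                continue
            seen.add(state)
            queue.append((state, path + [nxt_node]))
    return None
```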
If we can abort a path if the current vertex has been processed with the same adaptation stack $s$, then each vertex is processed at most $|S_y|$ times,
with $|S_y|$ the number of possible technology stacks for layer $Y_y(v)$ of vertex $v$. The worst case of $|S_y|$ is $|S|$, or $\prod_{y \in Y} |T(y)|$ according to equation 7.26.
Recall that average adjacency can be found by dividing the number of edges by the number of vertices in a graph (equation A.1).
According to equation 7.35, the running time of algorithm 7.1 is:
$$O(\text{Algorithm 7.1}) = O(|Q|) \times O(\text{loop})$$
$$= O(|Q|) \times (O(\text{dequeue}) + O(|\text{adj}|) \times O(\text{Extend-Path}))$$
$$= O(|Q|) \times \left(O(\text{dequeue}) + \frac{O(|E_l|)}{O(|V_l|)} \times O(\text{Extend-Path})\right)$$
(A.7)
With the restriction in place, the queue holds at most one entry per (node, adaptation stack) combination. Since there are at most $|S|$ adaptation stacks, the upper limit of $|Q|$ is equal to
$$O(|Q|) = O(|N| \times |S|) \lesssim O(|N| \times |Y| \times T^{|Y|})$$
(A.8)
$|Q|$ is equivalent to $|V_s|$, since $V_s$ is also determined by the number of adaptation stacks per node. We have seen in equations 7.25 and 7.27 that the estimate of $|S|$ is lower than its upper limit by a factor of $|Y|$. The estimated average of $|Q|$ is equal to
$$O(|Q|) = O(|N| \times |S|) = O(|N| \times T^{|Y|})$$
(A.9)
These results apply to both the Multi-Layer-Breadth-First and Multi-Layer-k-Shortest-Path algorithm with restricted search space.
If we assume $O(|A|) = O(|N| \times |Y|)$ and use equation 7.12 as the estimate of $|V_l|$ and $|E_l|$, we can expand equation A.7:
$$O(\text{Algorithm 7.1}) = O(|Q|) \times \left(O(\text{dequeue}) + \frac{O(|E_l|)}{O(|V_l|)} \times O(\text{Extend-Path})\right)$$
$$= O(|N| \times |Y| \times T^{|Y|}) \times \left(O(\text{dequeue}) + \frac{O(|A| + |L|)}{O(|N| \times |Y|)} \times O(\text{Extend-Path})\right)$$
$$= O(|N| \times |Y| \times T^{|Y|}) \times O(\text{dequeue}) + O((|N| \times |Y| + |L|) \times T^{|Y|}) \times O(\text{Extend-Path})$$
(A.10)
Appendix A. Algorithm Time Complexity
The only operations in the path extension subroutine (algorithm 7.2) with a running time larger than $O(1)$ are the operations on lines 6 and 9, which check for duplicate (vertex, stack) tuples using $R$, and the operation on line 22, which checks the number of labels per layer $⟨Lb⟩$. The cost of both operations depends on whether $R$ and the labels can be sorted. If no sorting is possible, the running times are $O(|R|) = O(|Q|)$ and $O(⟨Lb⟩)$ respectively. If sorting is possible, the running times are $O(\log(|Q|))$ and $O(\log(⟨Lb⟩))$ respectively.
$$O(\text{Extend-Path}) = O(\log(|Q|) + \log(⟨Lb⟩)) \quad (A.11)$$
The original breadth-first search algorithm can only deal with edge lengths of 1 ($W_e(e) = 1$ for all $e$). The advantage is that the dequeue operation only takes $O(1)$, since the queue is already sorted in order of path length. If we allow different $W_e(e)$, this function needs to be replaced with an extract-min operation. The time complexity of this operation becomes $O(\log(|Q|))$, provided that the queue $Q$ is kept sorted using a Fibonacci heap [p9].
$$O(\text{dequeue operation}) = O(1) \quad (A.12)$$
$$O(\text{extract-min operation}) = O(\log(|Q|)) \quad (A.13)$$
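To make the distinction concrete, here is a minimal Python sketch (not part of the thesis) contrasting the two queue disciplines; Python's heapq is a binary heap rather than a Fibonacci heap, so it only illustrates the $O(1)$ versus $O(\log(|Q|))$ behaviour of the dequeue step.
```
import heapq
from collections import deque

# Unit edge weights: a plain FIFO keeps the queue ordered by path length,
# so the dequeue operation is O(1).
q = deque()
q.append(("A", 0))
node, dist = q.popleft()

# Variable edge weights: the dequeue must become an extract-min. With a
# binary heap (heapq) this costs O(log|Q|); a Fibonacci heap gives the same
# bound here plus cheaper amortised decrease-key operations.
pq = []
heapq.heappush(pq, (0, "A"))
dist, node = heapq.heappop(pq)
```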
The queue length $|Q|$ depends strongly on how quickly the flooding principle is suppressed by the incompatibility check and the duplicate stack check. The worst case is given in equation A.8.
If we assume $W_e(e) = 1$ and $O(\text{Extend-Path}) = O(\log(|Q|))$ (no labels), the worst-case running time for algorithm 7.1 becomes:
$$O(\text{Algorithm 7.1}) = O(|N| \times |Y| \times T^{|Y|}) \times O(\text{dequeue}) + O((|N| \times |Y| + |L|) \times T^{|Y|}) \times O(\text{Extend-Path})$$
$$= O(|N| \times |Y| \times T^{|Y|}) \times O(1) + O((|N| \times |Y| + |L|) \times T^{|Y|}) \times O(\log(|Q|))$$
$$\approx O((|N| \times |Y| + |L|) \times T^{|Y|}) \times O(\log(|N| \times |Y| \times T^{|Y|})) \quad (A.14)$$
We can now calculate the worst-case running time of algorithm 7.1 for
variable edge weights \( W_e(e) \):
\[
O(\text{Algorithm 7.1}) = O(|N| \times |Y| \times T^{|Y|}) \times O(\text{dequeue}) + O((|N| \times |Y| + |L|) \times T^{|Y|}) \times O(\text{Extend-Path})
\]
\[
= O(|N| \times |Y| \times T^{|Y|}) \times O(\log(|Q|)) + O((|N| \times |Y| + |L|) \times T^{|Y|}) \times O(\log(|Q|) + \log(\langle Lb \rangle))
\]
\[
\approx O((|N| \times |Y| + |L|) \times T^{|Y|}) \times O(\log(|N|) + \log(|Y|) + |Y| \cdot \log(T) + \log(\langle Lb \rangle))
\]
(A.15)
Similarly, the estimated average running time of algorithm 7.1 is:
\[
O(\text{Algorithm 7.1}) = O(|Q|) \times O(\text{dequeue}) + O\left(\left(|N| + \frac{|L|}{|Y|}\right) \times T^{|Y|}\right) \times O(\text{Extend-Path})
\]
\[
= O(|Q|) \times O(\log(|Q|)) + O\left(\left(|N| + \frac{|L|}{|Y|}\right) \times T^{|Y|}\right) \times O(\log(|Q|) + \log(\langle Lb \rangle))
\]
\[
= O(|N| \times T^{|Y|}) \times O(\log(|N| \times T^{|Y|})) + O\left(\left(|N| + \frac{|L|}{|Y|}\right) \times T^{|Y|}\right) \times O(\log(|N| \times T^{|Y|}) + \log(\langle Lb \rangle))
\]
\[
\approx O\left(\left(|N| + \frac{|L|}{|Y|}\right) \times T^{|Y|} \times \left(\log(|N| \times T^{|Y|}) + \log(\langle Lb \rangle)\right)\right)
\]
(A.16)
A.5 Running Time of Multi-Layer-k-Shortest-Path
The running time of Multi-Layer-k-Shortest-Path (algorithm 7.4) is roughly comparable to the running time of Multi-Layer-Breadth-First (algorithm 7.1) as it is basically the same algorithm.
The only difference in the running time is caused by two factors. First, Multi-Layer-k-Shortest-Path reduces the search space by using the estimated path length. This reduces the average running time by aborting paths that are infeasible due to their length. At the same time, this change increases the worst-case running time, because in the worst case no paths are aborted, and the running time of Dijkstra’s algorithm adds to the total.
The second difference between the two algorithms stems from the fact that Multi-Layer-k-Shortest-Path operates on a larger graph than Multi-Layer-Breadth-First. While this may seem worse in terms of the number of vertices, the running time is nearly equivalent in terms of devices and technology layers. In fact, any algorithm running on $G_s$ is slightly faster than the equivalent algorithm running on $G_l$. The reason is that $G_s$ contains more intrinsic information. For example, the loop check (lines 6-9 in algorithm 7.1, line 6 in algorithm 7.5) is more expensive in $G_l$ than in $G_s$, since it has to search through a list of possible adaptations in $G_l$, while the adaptation function is immediately obvious from the vertex in $G_s$. So, while the creation of the graph $G_s$ is more computationally intensive than the creation of the graph $G_l$, this drawback is offset by a benefit when running the algorithm. Finally, $G_s$ may be slightly more efficient, since $G_s$ can collapse multiple links onto the same edge, while this is not done in $G_l$. Again, this advantage during the algorithm's running time is offset by a disadvantage when generating the graph $G_s$.
The worst-case running time of Multi-Layer-k-Shortest-Path is:
$$O(\text{Algorithm 7.4}) = O(|V_s|) + O(|Q|) \times (O(\text{Extract-Min}) + O(|\text{adj}|) \times (O(\text{Feasible}) + O(\text{Enqueue})))$$
(A.17)
If, again, we assume that $O(|Q|) = O(|V_s|) \leq |N| \times |S|$, and further assume $O(\text{Extract-Min}) = O(\log(|Q|))$, $O(\text{Enqueue}) = O(1)$, and $O(\text{Feasible}) = O(|p|) = O(\log(|V_s|))$, then we can specify the worst-case running time for Multi-Layer-k-Shortest-Path.
$$O(\text{Algorithm 7.4}) = O(|V_s|) + O(|V_s|) \times (O(\log(|V_s|)) + O(|\text{adj}|) \times (O(\log(|V_s|)) + O(1)))$$
$$\approx O(|V_s|) \times O(\log(|V_s|)) + O(|E_s|) \times O(\log(|V_s|))$$
$$= O((|V_s| + |E_s|) \times \log(|V_s|))$$
$$= O((|N| \times |Y| \times T^{|Y|} + (|N| \times |Y| + |L|) \times T^{|Y|}) \times \log(|N| \times |Y| \times T^{|Y|}))$$
$$\approx O((|N| \times |Y| + |L|) \times T^{|Y|} \times \log(|N| \times |Y| \times T^{|Y|}))$$
(A.18)
The estimated average running time for Multi-Layer-k-Shortest-Path
is
\[ O(\text{Algorithm 7.4}) = O((|V_s| + |E_s|) \times \log(V_s)) \]
\[ = O((|N| \times T^{|Y|} + (|N| + \frac{|L|}{|Y|}) \times T^{|Y|}) \times \log(|N| \times T^{|Y|})) \]
\[ \approx O((|N| + \frac{|L|}{|Y|}) \times T^{|Y|} \times \log(|N| \times T^{|Y|})) \]
(A.19)
DNS Certification Authority Authorization (CAA) Resource Record
Abstract
The Certification Authority Authorization (CAA) DNS Resource Record allows a DNS domain name holder to specify one or more Certification Authorities (CAs) authorized to issue certificates for that domain. CAA Resource Records allow a public Certification Authority to implement additional controls to reduce the risk of unintended certificate mis-issue. This document defines the syntax of the CAA record and rules for processing CAA records by certificate issuers.
Status of This Memo
This is an Internet Standards Track document.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc6844.
Copyright Notice
Copyright (c) 2013 IETF Trust and the persons identified as the document authors. All rights reserved.
This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.
1. Introduction
The Certification Authority Authorization (CAA) DNS Resource Record allows a DNS domain name holder to specify the Certification Authorities (CAs) authorized to issue certificates for that domain. Publication of CAA Resource Records allows a public Certification Authority to implement additional controls to reduce the risk of unintended certificate mis-issue.
Like the TLSA record defined in DNS-Based Authentication of Named Entities (DANE) [RFC6698], CAA records are used as a part of a mechanism for checking PKIX certificate data. The distinction between the two specifications is that CAA records specify an authorization control to be performed by a certificate issuer before issue of a certificate and TLSA records specify a verification control to be performed by a relying party after the certificate is issued.
Conformance with a published CAA record is a necessary but not sufficient condition for issuance of a certificate. Before issuing a certificate, a PKIX CA is required to validate the request according to the policies set out in its Certificate Policy. In the case of a public CA that validates certificate requests as a third party, the certificate will typically be issued under a public trust anchor certificate embedded in one or more relevant Relying Applications.
Criteria for inclusion of embedded trust anchor certificates in applications are outside the scope of this document. Typically, such criteria require the CA to publish a Certificate Practices Statement (CPS) that specifies how the requirements of the Certificate Policy (CP) are achieved. It is also common for a CA to engage an independent third-party auditor to prepare an annual audit statement of its performance against its CPS.
A set of CAA records describes only current grants of authority to issue certificates for the corresponding DNS domain. Since a certificate is typically valid for at least a year, it is possible that a certificate that is not conformant with the CAA records currently published was conformant with the CAA records published at the time that the certificate was issued. Relying Applications MUST NOT use CAA records as part of certificate validation.
CAA records MAY be used by Certificate Evaluators as a possible indicator of a security policy violation. Such use SHOULD take account of the possibility that published CAA records changed between the time a certificate was issued and the time at which the certificate was observed by the Certificate Evaluator.
2. Definitions
2.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].
2.2. Defined Terms
The following terms are used in this document:
Authorization Entry: An authorization assertion that grants or denies a specific set of permissions to a specific group of entities.
Certificate: An X.509 Certificate, as specified in [RFC5280].
Certificate Evaluator: A party other than a relying party that evaluates the trustworthiness of certificates issued by Certification Authorities.
Certification Authority (CA): An issuer that issues certificates in accordance with a specified Certificate Policy.
Certificate Policy (CP): Specifies the criteria that a Certification Authority undertakes to meet in its issue of certificates. See [RFC3647].
Certification Practices Statement (CPS): Specifies the means by which the criteria of the Certificate Policy are met. In most cases, this will be the document against which the operations of the Certification Authority are audited. See [RFC3647].
Domain: A DNS Domain Name.
Domain Name: A DNS Domain Name as specified in [STD13].
Domain Name System (DNS): The Internet naming system specified in [STD13].
DNS Security (DNSSEC): Extensions to the DNS that provide authentication services as specified in [RFC4033], [RFC4034], [RFC4035], [RFC5155], and revisions.
Issuer: An entity that issues certificates. See [RFC5280].
Property: The tag-value portion of a CAA Resource Record.
Property Tag: The tag portion of a CAA Resource Record.
Property Value: The value portion of a CAA Resource Record.
Public Key Infrastructure X.509 (PKIX): Standards and specifications issued by the IETF that apply the [X.509] certificate standards specified by the ITU to Internet applications as specified in [RFC5280] and related documents.
Resource Record (RR): A particular entry in the DNS including the owner name, class, type, time to live, and data, as defined in [STD13] and [RFC2181].
Resource Record Set (RRSet): A set of Resource Records of a particular owner name, class, and type. The time to live on all RRs within an RRSet is always the same, but the data may be different among RRs in the RRSet.
Relying Party: A party that makes use of an application whose operation depends on use of a certificate for making a security decision. See [RFC5280].
Relying Application: An application whose operation depends on use of a certificate for making a security decision.
3. The CAA RR Type
A CAA RR consists of a flags byte and a tag-value pair referred to as a property. Multiple properties MAY be associated with the same domain name by publishing multiple CAA RRs at that domain name. The following flag is defined:
Issuer Critical: If set to ‘1’, indicates that the corresponding property tag MUST be understood if the semantics of the CAA record are to be correctly interpreted by an issuer.
Issuers MUST NOT issue certificates for a domain if the relevant CAA Resource Record set contains unknown property tags that have the Critical bit set.
The following property tags are defined:
issue <Issuer Domain Name> [; <name>=<value> ]* : The issue property entry authorizes the holder of the domain name <Issuer Domain Name> or a party acting under the explicit authority of the holder of that domain name to issue certificates for the domain in which the property is published.
issuewild <Issuer Domain Name> [; <name>=<value> ]* : The issuewild property entry authorizes the holder of the domain name <Issuer Domain Name> or a party acting under the explicit authority of the holder of that domain name to issue wildcard certificates for the domain in which the property is published.
iodef <URL> : Specifies a URL to which an issuer MAY report certificate issue requests that are inconsistent with the issuer’s Certification Practices or Certificate Policy, or that a Certificate Evaluator may use to report observation of a possible policy violation. The Incident Object Description Exchange Format (IODEF) format is used [RFC5070].
The following example is a DNS zone file (see [RFC1035]) that informs CAs that certificates are not to be issued except by the holder of the domain name ‘ca.example.net’ or an authorized agent thereof. This policy applies to all subordinate domains under example.com.
```
$ORIGIN example.com
. CAA 0 issue "ca.example.net"
```
If the domain name holder specifies one or more iodef properties, a certificate issuer MAY report invalid certificate requests to that address. The domain name holder can specify that reports may be made by means of email with the IODEF data as an attachment, a Web service [RFC6546], or both, by publishing additional CAA records with the iodef property (see Section 5.4).
A certificate issuer MAY specify additional parameters that allow customers to specify additional parameters governing certificate issuance. This might be the Certificate Policy under which the certificate is to be issued, the authentication process to be used might be specified, or an account number specified by the CA to enable these parameters to be retrieved.
For example, the CA ‘ca.example.net’ has requested its customer ‘example.com’ to specify the CA’s account number ‘230123’ in each of the customer’s CAA records.
```
$ORIGIN example.com
. CAA 0 issue "ca.example.net; account=230123"
```
The syntax of additional parameters is a sequence of name-value pairs as defined in Section 5.2. The semantics of such parameters is left to site policy and is outside the scope of this document.
The critical flag is intended to permit future versions of CAA to introduce new semantics that MUST be understood for correct processing of the record, preventing conforming CAs that do not recognize the new semantics from issuing certificates for the indicated domains.
In the following example, the property ‘tbs’ is flagged as critical. Neither the example.net CA nor any other issuer is authorized to issue under either policy unless the processing rules for the ‘tbs’ property tag are understood.
```
$ORIGIN example.com
. CAA 0 issue "ca.example.net; policy=ev"
. CAA 128 tbs "Unknown"
```
Note that the above restrictions only apply at certificate issue. Since the validity of an end entity certificate is typically a year or more, it is quite possible that the CAA records published at a domain will change between the time a certificate was issued and validation by a relying party.
4. Certification Authority Processing
Before issuing a certificate, a compliant CA MUST check for publication of a relevant CAA Resource Record set. If such a record set exists, a CA MUST NOT issue a certificate unless the CA determines that either (1) the certificate request is consistent with the applicable CAA Resource Record set or (2) an exception specified in the relevant Certificate Policy or Certification Practices Statement applies.
A certificate request MAY specify more than one domain name and MAY specify wildcard domains. Issuers MUST verify authorization for all the domains and wildcard domains specified in the request.
The search for a CAA record climbs the DNS name tree from the specified label up to but not including the DNS root ‘.’.
Given a request for a specific domain X, or a request for a wildcard domain *.X, the relevant record set R(X) is determined as follows:
Let CAA(X) be the record set returned in response to performing a CAA record query on the label X, P(X) be the DNS label immediately above X in the DNS hierarchy, and A(X) be the target of a CNAME or DNAME alias record specified at the label X.
- If CAA(X) is not empty, R(X) = CAA(X), otherwise
- If A(X) is not null, and R(A(X)) is not empty, then R(X) = R(A(X)), otherwise
- If X is not a top-level domain, then R(X) = R(P(X)), otherwise
- R(X) is empty.
For example, if a certificate is requested for X.Y.Z the issuer will search for the relevant CAA record set in the following order:
X.Y.Z
Alias (X.Y.Z)
Y.Z
Alias (Y.Z)
Z
Alias (Z)
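A small Python sketch of the climbing rules above; the three callables (`caa_lookup`, `alias_lookup`, `is_tld`) are placeholders the caller would supply, and a real issuer would additionally perform DNSSEC validation and handle lookup errors.
```
def relevant_caa_record_set(domain, caa_lookup, alias_lookup, is_tld):
    """Sketch of the R(X) rules above.

    caa_lookup(name)   -> list of CAA records published at `name` (may be empty)
    alias_lookup(name) -> CNAME/DNAME target of `name`, or None
    is_tld(name)       -> True if `name` is a top-level domain
    All three callables are assumptions supplied by the caller.
    """
    records = caa_lookup(domain)
    if records:                                    # R(X) = CAA(X)
        return records
    alias = alias_lookup(domain)
    if alias is not None:
        alias_records = relevant_caa_record_set(alias, caa_lookup, alias_lookup, is_tld)
        if alias_records:                          # R(X) = R(A(X))
            return alias_records
    if "." in domain and not is_tld(domain):
        parent = domain.split(".", 1)[1]           # P(X): strip the leftmost label
        return relevant_caa_record_set(parent, caa_lookup, alias_lookup, is_tld)
    return []                                      # R(X) is empty
```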
4.1. Use of DNS Security
Use of DNSSEC to authenticate CAA RRs is strongly RECOMMENDED but not required. An issuer MUST NOT issue certificates if doing so would conflict with the relevant CAA Resource Record set, irrespective of whether the corresponding DNS records are signed.
DNSSEC provides a proof of non-existence for both DNS domains and RR sets within domains. DNSSEC verification thus enables an issuer to determine if the answer to a CAA record query is empty because the RR set is empty or if it is non-empty but the response has been suppressed.
Use of DNSSEC allows an issuer to acquire and archive a proof that they were authorized to issue certificates for the domain. Verification of such archives MAY be an audit requirement to verify CAA record processing compliance. Publication of such archives MAY be a transparency requirement to verify CAA record processing compliance.
5. Mechanism
5.1. Syntax
A CAA RR contains a single property entry consisting of a tag-value pair. Each tag represents a property of the CAA record. The value of a CAA property is that specified in the corresponding value field.
A domain name MAY have multiple CAA RRs associated with it and a given property MAY be specified more than once.
The CAA data field contains one property entry. A property entry consists of the following data fields: a Flags octet, a Tag Length octet, a Tag of n octets, and a Value of m octets, where n is the length specified in the Tag Length field and m is the number of remaining octets in the Value field \((m = d - n - 2)\), with \(d\) the length of the RDATA section.
The data fields are defined as follows:
Flags: One octet containing the following fields:
- **Bit 0, Issuer Critical Flag:** If the value is set to ‘1’, the critical flag is asserted and the property MUST be understood if the CAA record is to be correctly processed by a certificate issuer.
A Certification Authority MUST NOT issue certificates for any Domain that contains a CAA critical property for an unknown or unsupported property tag for which the issuer critical flag is set.
Note that according to the conventions set out in [RFC1035], bit 0 is the Most Significant Bit and bit 7 is the Least Significant Bit. Thus, the Flags value 1 means that bit 7 is set while a value of 128 means that bit 0 is set according to this convention.
All other bit positions are reserved for future use.
To ensure compatibility with future extensions to CAA, DNS records compliant with this version of the CAA specification MUST clear (set to "0") all reserved flags bits. Applications that interpret CAA records MUST ignore the value of all reserved flag bits.
Tag Length: A single octet containing an unsigned integer specifying the tag length in octets. The tag length MUST be at least 1 and SHOULD be no more than 15.
Tag: The property identifier, a sequence of US-ASCII characters.
Tag values MAY contain US-ASCII characters ‘a’ through ‘z’, ‘A’ through ‘Z’, and the numbers 0 through 9. Tag values SHOULD NOT contain any other characters. Matching of tag values is case insensitive.
Tag values submitted for registration by IANA MUST NOT contain any characters other than the (lowercase) US-ASCII characters ‘a’ through ‘z’ and the numbers 0 through 9.
Value: A sequence of octets representing the property value. Property values are encoded as binary values and MAY employ sub-formats.
The length of the value field is specified implicitly as the remaining length of the enclosing Resource Record data field.
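A minimal Python sketch of decoding a single property entry from the RDATA laid out above; it is not a full DNS parser, and the sample record at the end is constructed by hand purely for illustration.
```
def parse_caa_rdata(rdata: bytes):
    """Decode one CAA property entry: one Flags octet, one Tag Length octet,
    n Tag octets, and the remaining octets as the Value (a sketch only)."""
    if len(rdata) < 2:
        raise ValueError("CAA RDATA too short")
    flags = rdata[0]
    tag_len = rdata[1]
    if not 1 <= tag_len <= len(rdata) - 2:
        raise ValueError("invalid tag length")
    tag = rdata[2:2 + tag_len].decode("ascii").lower()   # tag matching is case insensitive
    value = rdata[2 + tag_len:]                          # m = d - n - 2 octets
    critical = bool(flags & 0x80)    # bit 0 is the MSB per RFC 1035 conventions (value 128)
    return flags, critical, tag, value

# Hand-built example: flags=0, tag="issue", value="ca.example.net"
rdata = bytes([0, 5]) + b"issue" + b"ca.example.net"
print(parse_caa_rdata(rdata))
```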
5.1.1. Canonical Presentation Format
The canonical presentation format of the CAA record is:
CAA <flags> <tag> <value>
Where:
Flags: Is an unsigned integer between 0 and 255.
Tag: Is a non-zero sequence of US-ASCII letters and numbers in lower case.
Value: Is the <character-string> encoding of the value field as specified in [RFC1035], Section 5.1.
5.2. CAA issue Property
The issue property tag is used to request that certificate issuers perform CAA issue restriction processing for the domain and to grant authorization to specific certificate issuers.
The CAA issue property value has the following sub-syntax (specified in ABNF as per [RFC5234]).
issuevalue = space [domain] space [";" *(space parameter) space]
domain = label *("." label)
label = (ALPHA / DIGIT) *( *("-") (ALPHA / DIGIT))
space = *(SP / HTAB)
parameter = tag "=" value
tag = 1*(ALPHA / DIGIT)
value = *VCHAR
For consistency with other aspects of DNS administration, domain name values are specified in letter-digit-hyphen Label (LDH-Label) form.
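A rough Python sketch of splitting an issue or issuewild value into its issuer domain and parameters, loosely following the ABNF above; it assumes parameters are whitespace-separated after the ';' and does not enforce the label syntax.
```
def parse_issue_value(value: str):
    """Split an issue/issuewild property value into (domain, parameters).
    An empty domain part is returned as None ("no issuer authorized")."""
    domain_part, _, param_part = value.partition(";")
    domain = domain_part.strip() or None
    params = {}
    for item in param_part.split():          # parameters assumed whitespace-separated
        tag, _, val = item.partition("=")
        if tag:
            params[tag] = val
    return domain, params

print(parse_issue_value("ca.example.net; account=230123"))
# -> ('ca.example.net', {'account': '230123'})
print(parse_issue_value(";"))
# -> (None, {})
```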
A CAA record with an issue parameter tag that does not specify a domain name is a request that certificate issuers perform CAA issue restriction processing for the corresponding domain without granting authorization to any certificate issuer.
This form of issue restriction would be appropriate to specify that no certificates are to be issued for the domain in question.
For example, the following CAA record set requests that no certificates be issued for the domain 'nocerts.example.com' by any certificate issuer.
nocerts.example.com CAA 0 issue ";"
A CAA record with an issue parameter tag that specifies a domain name is a request that certificate issuers perform CAA issue restriction processing for the corresponding domain and grants authorization to the certificate issuer specified by the domain name.
For example, the following CAA record set requests that no certificates be issued for the domain 'certs.example.com' by any certificate issuer other than the example.net certificate issuer.
certs.example.com CAA 0 issue "example.net"
CAA authorizations are additive; thus, the result of specifying both the empty issuer and a specified issuer is the same as specifying just the specified issuer alone.
An issuer MAY choose to specify issuer-parameters that further constrain the issue of certificates by that issuer, for example, specifying that certificates are to be subject to specific validation policies, billed to certain accounts, or issued under specific trust anchors.
The semantics of issuer-parameters are determined by the issuer alone.
5.3. CAA issuewild Property
The issuewild property has the same syntax and semantics as the issue property except that issuewild properties only grant authorization to issue certificates that specify a wildcard domain and issuewild properties take precedence over issue properties when specified. Specifically:
issuewild properties MUST be ignored when processing a request for a domain that is not a wildcard domain.
If at least one issuewild property is specified in the relevant CAA record set, all issue properties MUST be ignored when processing a request for a domain that is a wildcard domain.
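The selection and authorization logic of the issue and issuewild properties can be sketched as follows; the record representation (tag plus pre-parsed value) is an assumption carried over from the earlier parsing sketch, and handling of unknown critical tags is assumed to happen before this point.
```
def caa_permits_issuance(records, ca_domain, wildcard=False):
    """Decide whether `ca_domain` may issue for the domain that published
    `records`, following the issue/issuewild rules above.

    records: iterable of (tag, (issuer_domain, params)) pairs from the
    relevant CAA RRSet, as produced by the earlier parsing sketch."""
    issue = [v for t, v in records if t == "issue"]
    issuewild = [v for t, v in records if t == "issuewild"]

    if wildcard and issuewild:
        relevant = issuewild     # issuewild takes precedence for wildcard requests
    else:
        relevant = issue         # issuewild is ignored for non-wildcard requests

    if not relevant:
        return True              # no relevant property: CAA does not restrict issuance here
    # Authorizations are additive; an empty issuer domain grants nothing.
    return any(domain is not None and domain.lower() == ca_domain.lower()
               for domain, _params in relevant)

records = [("issue", ("ca.example.net", {}))]
print(caa_permits_issuance(records, "ca.example.net"))        # True
print(caa_permits_issuance(records, "other.example", True))   # False
```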
5.4. CAA iodef Property
The iodef property specifies a means of reporting certificate issue requests or cases of certificate issue for the corresponding domain that violate the security policy of the issuer or the domain name holder.
The Incident Object Description Exchange Format (IODEF) [RFC5070] is used to present the incident report in machine-readable form.
The iodef property takes a URL as its parameter. The URL scheme type determines the method used for reporting:
mailto: The IODEF incident report is reported as a MIME email attachment to an SMTP email that is submitted to the mail address specified. The mail message sent SHOULD contain a brief text message to alert the recipient to the nature of the attachment.
http or https: The IODEF report is submitted as a Web service request to the HTTP address specified using the protocol specified in [RFC6546].
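A small sketch of dispatching on the iodef URL scheme; the actual submission over SMTP or the RFC 6546 web service is left to the caller, and the example addresses are placeholders.
```
from urllib.parse import urlparse

def iodef_transport(url: str) -> str:
    """Pick the reporting method implied by the iodef URL scheme (sketch only)."""
    scheme = urlparse(url).scheme.lower()
    if scheme == "mailto":
        return "email: attach the IODEF report as MIME to " + url[len("mailto:"):]
    if scheme in ("http", "https"):
        return "web service: submit the IODEF report to " + url
    return "unsupported iodef scheme: " + scheme

print(iodef_transport("mailto:[email protected]"))
print(iodef_transport("https://iodef.example.com/report"))
```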
6. Security Considerations
CAA records assert a security policy that the holder of a domain name wishes to be observed by certificate issuers. The effectiveness of CAA records as an access control mechanism is thus dependent on observance of CAA constraints by issuers.
The objective of the CAA record properties described in this document is to reduce the risk of certificate mis-issue rather than avoid reliance on a certificate that has been mis-issued. DANE [RFC6698] describes a mechanism for avoiding reliance on mis-issued certificates.
6.1. Non-Compliance by Certification Authority
CAA records offer CAs a cost-effective means of mitigating the risk of certificate mis-issue: the cost of implementing CAA checks is very small and the potential costs of a mis-issue event include the removal of an embedded trust anchor.
6.2. Mis-Issue by Authorized Certification Authority
Use of CAA records does not prevent mis-issue by an authorized Certification Authority, i.e., a CA that is authorized to issue certificates for the domain in question by CAA records.
Domain name holders SHOULD verify that the CAs they authorize to issue certificates for their domains employ appropriate controls to ensure that certificates are issued only to authorized parties within their organization.
Such controls are most appropriately determined by the domain name holder and the authorized CA(s) directly and are thus out of scope of this document.
6.3. Suppression or Spoofing of CAA Records
Suppression of the CAA record or insertion of a bogus CAA record could enable an attacker to obtain a certificate from an issuer that was not authorized to issue for that domain name.
Where possible, issuers SHOULD perform DNSSEC validation to detect missing or modified CAA record sets.
In cases where DNSSEC is not deployed in a corresponding domain, an issuer SHOULD attempt to mitigate this risk by employing appropriate DNS security controls. For example, all portions of the DNS lookup
process SHOULD be performed against the authoritative name server. Data cached by third parties MUST NOT be relied on but MAY be used to support additional anti-spoofing or anti-suppression controls.
6.4. Denial of Service
Introduction of a malformed or malicious CAA RR could in theory enable a Denial-of-Service (DoS) attack.
This specific threat is not considered to add significantly to the risk of running an insecure DNS service.
An attacker could, in principle, perform a DoS attack against an issuer by requesting a certificate with a maliciously long DNS name. In practice, the DNS protocol imposes a maximum name length and CAA processing does not exacerbate the existing need to mitigate DoS attacks to any meaningful degree.
6.5. Abuse of the Critical Flag
A Certification Authority could make use of the critical flag to trick customers into publishing records that prevent competing Certification Authorities from issuing certificates even though the customer intends to authorize multiple providers.
In practice, such an attack would be of minimal effect since any competent competitor that found itself unable to issue certificates due to lack of support for a property marked critical SHOULD investigate the cause and report the reason to the customer. The customer will thus discover that they had been deceived.
7. IANA Considerations
7.1. Registration of the CAA Resource Record Type
IANA has assigned Resource Record Type 257 for the CAA Resource Record Type and added the line depicted below to the registry named "Resource Record (RR) TYPEs" and QTYPEs as defined in BCP 42 [RFC6195] and located at http://www.iana.org/assignments/dns-parameters.
<table>
<thead>
<tr>
<th>RR Name</th>
<th>Value and meaning</th>
<th>Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td>CAA</td>
<td>257 Certification Authority Restriction</td>
<td>[RFC6844]</td>
</tr>
</tbody>
</table>
7.2. Certification Authority Restriction Properties
IANA has created the "Certification Authority Restriction Properties" registry with the following initial values:
<table>
<thead>
<tr>
<th>Tag</th>
<th>Meaning</th>
<th>Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td>issue</td>
<td>Authorization Entry by Domain</td>
<td>[RFC6844]</td>
</tr>
<tr>
<td>issuewild</td>
<td>Authorization Entry by Wildcard Domain</td>
<td>[RFC6844]</td>
</tr>
<tr>
<td>iodef</td>
<td>Report incident by IODEF report</td>
<td>[RFC6844]</td>
</tr>
<tr>
<td>auth</td>
<td>Reserved</td>
<td>[HB2011]</td>
</tr>
<tr>
<td>path</td>
<td>Reserved</td>
<td>[HB2011]</td>
</tr>
<tr>
<td>policy</td>
<td>Reserved</td>
<td>[HB2011]</td>
</tr>
</tbody>
</table>
Although [HB2011] has expired, deployed clients implement the CAA properties specified in the document and reuse of these property tags for a different purpose could cause unexpected behavior.
Addition of tag identifiers requires a public specification and Expert Review as set out in [RFC6195], Section 3.1.1.
The tag space is designed to be sufficiently large that exhausting the possible tag space need not be a concern. The scope of Expert Review SHOULD be limited to the question of whether the specification provided is sufficiently clear to permit implementation and to avoid unnecessary duplication of functionality.
7.3. Certification Authority Restriction Flags
IANA has created the "Certification Authority Restriction Flags" registry with the following initial values:
<table>
<thead>
<tr>
<th>Flag</th>
<th>Meaning</th>
<th>Reference</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Issuer Critical Flag</td>
<td>[RFC6844]</td>
</tr>
<tr>
<td>1-7</td>
<td>Reserved</td>
<td>[RFC6844]</td>
</tr>
</tbody>
</table>
Assignment of new flags follows the RFC Required policy set out in [RFC5226], Section 4.1.
8. Acknowledgements
The authors would like to thank the following people who contributed to the design and documentation of this work item: Chris Evans, Stephen Farrell, Jeff Hodges, Paul Hoffman, Stephen Kent, Adam Langley, Ben Laurie, James Manger, Chris Palmer, Scott Schmit, Sean Turner, and Ben Wilson.
9. References
9.1. Normative References
9.2. Informative References
Authors’ Addresses
Phillip Hallam-Baker
Comodo Group, Inc.
EMail: [email protected]
Rob Stradling
Comodo CA, Ltd.
EMail: [email protected]
Temporal Logic Based Fast Verification System Using Cover Expressions
Hiroshi Nakamura Masahiro Fujita* Shinji Kono
Hidehiko Tanaka
Department of Electrical Engineering, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113, Japan
* FUJITSU LABORATORIES LTD.
1015 Kamikodanaka, Nakahara-ku, Kawasaki 211, Japan
We have developed a verification and synthesis method for hardware logic designs specified by temporal logic using Prolog, but this system was not satisfactory from the viewpoint of speed and memory. Hence, we have implemented another verification system using the C language, where the combinational circuit part is handled in sum-of-product form (cover expressions).
While the verification times of both systems are nearly equal for small designs, the verification time of the Prolog-version system grows almost exponentially with the scale of the design. The C-version system can handle much larger designs in comparison, and it has successfully verified a DMA controller about 1000 times faster than the Prolog-version system.
1 Introduction
In the past several years there has been an increasing interest in verifying, as distinct from simply testing, the proposed logic designs [2,6].
We have developed a verification and synthesis method for hardware logic designs specified by temporal logic with Prolog [4], but this system was unsatisfactory due to its deficiency in speed and memory. Hence, we have implemented another verification system using the C language. This system verifies the synchronous circuits of the synchronization part in digital systems. In this system, the combinational circuit part is handled in sum-of-product form. Since the undefined values of the variables are directly expressed in this form, the amount of backtracking is reduced and the number of times the combinational circuit must be traced is also reduced.
In this system, we used Tokio [9] as a specification description language, which is based on temporal logic and which enables us to describe a specification at any level [7].
In this paper, we present the method of verification of hardware logic circuits using temporal logic. We will show the efficiency of the system implemented with the C language and compare the results with those of the Prolog implementation.
1.1 Contents
In the following sections, we discuss the following topics:
Section 2 The structure of the system.
Section 3 Verification based on temporal logic.
Section 4 Verification method using cover expressions.
Section 5 Application and evaluation of the system.
Section 6 Verification method using terminal variables.
Section 7 Conclusions.
2 The Structure of the Verification System
The structure of the verification system is as shown in Figure 1. This system verifies the synchronous circuits of the synchronization part in the digital systems. The synchronization part is generally small enough to be treated in sum-of-product form. The parts of translating Tokio into Linear Time Temporal Logic (LTTL; see section 3) and LTTL into state diagrams are implemented with Prolog [4]. In this paper, we will not discuss the method of translating Tokio into LTTL. The basic idea of the verification in this system (see section 3) is generally the same as that in the Prolog-version system. HSL is a hardware description language which only describes the networks among the gates.
Figure 1: The structure of the verification system
3 Temporal Logic and State Diagram
In this section, we first briefly introduce temporal logic, then describe the method of translating it into state diagrams, and finally explain the verification method using state diagrams.
3.1 Specifications in Temporal Logic
There are several kinds of temporal logic; here we use Linear Time Temporal Logic (LTTL) [11], which is an extension of the traditional logic with four temporal operators added, that is, \(\circ\) (next), \(\Box\) (always), \(\Diamond\) (sometimes), and U (until). LTTL is defined not on continuous states but on discrete states. The first three operators are unary and the last is a binary operator. The meaning of each temporal operator is as follows.
• P (without temporal operators): P is true at the current state.
• ○P: P is true at the next state.
• □P: P is true in all future states.
• ◇P: P is true in some future state.
• P U Q: P is true for all states until the first state where Q is true.
LTTL can express a wide variety of properties of sequences, which make it easy to describe the specifications of the hardware. For instance,
Every state (clock) where signal P is active is immediately followed by a state in which signal Q is active.
is described as
□(P → ○Q).
(From now on, “A → B” means that “if A holds true then B must be true”.)
3.2 Translation into State Diagrams
Next, we describe the technique to translate LTTL formulas into state diagrams. The basic idea of this technique is that an LTTL formula can be decomposed into sets containing formulas which are either atomic (that is, without temporal operators) or have ○ as their main operator. The atomic sets are transition conditions, and the rest, excluding the outermost ○ operator, are conditions in the next state (details are described in [10]). The decompositions are repeated until every condition in the next states produced during the decompositions is the same as a condition already treated.
The decomposition rules are as follows.
• □F = F ∧ ○□F
• ◇F = F ∨ (~F ∧ ○◇F{F})
• F1 U F2 = F2 ∨ (F1 ∧ ~F2 ∧ ○(F1 U F2))
(From now on, “~” represents negation.)
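The decomposition rules can be sketched in a few lines of Python; formulas are nested tuples, negation is assumed to be pushed down to atoms, and the eventuality markers {F} (as well as the ~F conjunct in the ◇ rule) are omitted, so this is only a skeleton of the full translation, not the system described in the paper.
```
from itertools import product

# Formulas as tuples: ('atom', name), ('not', f) with f atomic, ('and', f, g),
# ('or', f, g), ('next', f), ('always', f), ('sometime', f), ('until', f, g).

def decompose(f):
    """Return a list of branches; each branch is (literals, next_formulas),
    i.e. a transition condition plus the conditions for the next state."""
    op = f[0]
    if op in ('atom', 'not'):
        return [(frozenset([f]), frozenset())]
    if op == 'next':
        return [(frozenset(), frozenset([f[1]]))]
    if op == 'and':
        return [(l1 | l2, n1 | n2)
                for (l1, n1), (l2, n2) in product(decompose(f[1]), decompose(f[2]))]
    if op == 'or':
        return decompose(f[1]) + decompose(f[2])
    if op == 'always':     # []F = F and o[]F
        return decompose(('and', f[1], ('next', f)))
    if op == 'sometime':   # <>F = F or o<>F  (the ~F conjunct and {F} marker are dropped here)
        return decompose(('or', f[1], ('next', f)))
    if op == 'until':      # F1 U F2 = F2 or (F1 and o(F1 U F2))
        return decompose(('or', f[2], ('and', f[1], ('next', f))))
    raise ValueError(op)

# Example: []P decomposes into one branch: condition {P}, next-state condition {[]P}.
print(decompose(('always', ('atom', 'P'))))
```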
For example, let P, Q, and R be atomic and suppose we want to translate
(A) □P
(B) ~((P ∧ ○□Q) → ○□R)
into state diagrams using the above rules. It goes as follows;
(A) □P = P ∧ ○□P
Figure 2: State Diagram (1)
Figure 3: State Diagram (2)
Since the condition in the next state ‘$\Box P$’ (underlined) is the same as the condition at the current state, the decomposition is completed and the corresponding state diagram is obtained as shown in Figure 2.
The translation of (B) goes as follows;
\[
\begin{aligned}
(B)\quad & \sim((P \land \circ\Box Q) \rightarrow \circ\Box R) \\
&= P \land \circ\Box Q \land \sim(\circ\Box R) \\
&= P \land \circ\Box Q \land \circ\Diamond(\sim R) \\
&= P \land \circ(\Box Q \land \Diamond(\sim R))
\end{aligned}
\]
$(\Box Q \land \Diamond(\sim R))$ is the next condition and is decomposed as follows.
\[
\begin{aligned}
\Box Q \land \Diamond(\sim R)
&= Q \land \circ\Box Q \land (\sim R \lor (R \land \circ\Diamond(\sim R)\{\sim R\})) \\
&= (Q \land \sim R \land \circ\Box Q) \lor (Q \land R \land \circ(\Box Q \land \Diamond(\sim R))\{\sim R\})
\end{aligned}
\]
Therefore, the corresponding state diagram is as shown in Figure 3.
**Satisfiability** An LTTL formula is satisfiable iff it has at least one infinite sequence of state transitions when it is translated into a state diagram. The logic formula (A) is satisfiable because it has an infinite sequence of state transitions $<1>, <1>, \ldots$. Similarly, the logic formula (B) is satisfiable because it has an infinite sequence of state transitions $<5>, <5>, \ldots$. Obviously, an infinite sequence is nothing but a loop. Here, the sequence $<4>, <4>, \ldots$ is not infinite because it does not satisfy the eventuality $\{\sim R\}$. The eventuality $\{P\}$ means that $P$ must eventually be true in all sequences of future states which follow the state $\{P\}$. Since $\sim R$ is never true in the sequence $<4>, <4>, \ldots$, this sequence cannot be infinite.
It is easy to check the satisfiability of products of some logic formulas by tracing each state diagram concurrently. For example, we check the satisfiability of the formula
\[
\Box (Q \land R) \land \sim ((P \land \circ\Box Q) \rightarrow \circ\Box R).
\]
To do so, we only have to trace the state diagrams shown in Figure 3 and Figure 4 concurrently. The result is as shown in Figure 5. Since there exists no loop (even the sequence $<4,6>, <4,6>, \ldots$ does not satisfy the eventuality), this formula is unsatisfiable.
Figure 4: State Diagram (3)
Figure 5: State Diagram (4)
### 3.3 Verification Based on Temporal Logic
Here, we verify that the hardware designs satisfy the specifications. Let $D$ be the temporal logic expression for the hardware design and $S$ be that for the specification. We must investigate whether the following formula;
\[
D \rightarrow S
\]
is valid. To do so, we show that the negation of the formula, that is,
\[ D \land \sim S \]
is unsatisfiable. In order to check this, we only need to do the following operations;
- Make state diagrams for \( \sim S \) and \( D \).
- Check whether there is any loop for both state diagrams \( \sim S \) and \( D \).
If there exists an infinite sequence, the design does not satisfy the specification (contradiction), and if it does not exist, the design is correct with respect to the specification \( S \).
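A minimal sketch of the loop check on the product of the two state diagrams. The transition relations are assumed to be dictionaries mapping a state to (condition, successor) pairs, and `compatible` stands for the check that two transition conditions can hold at the same time (e.g., their cubes intersect); eventuality conditions are ignored, so a reported loop would still have to be validated against them.
```
def product_has_loop(trans_ns, trans_d, compatible, start):
    """Depth-first search for a cycle reachable from the initial product state.

    trans_ns / trans_d: {state: [(condition, next_state), ...]}
    compatible(c1, c2): True if the two transition conditions can hold together.
    """
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}

    def dfs(state):
        colour[state] = GREY
        ns, d = state
        for c1, ns2 in trans_ns.get(ns, []):
            for c2, d2 in trans_d.get(d, []):
                if not compatible(c1, c2):
                    continue
                nxt = (ns2, d2)
                c = colour.get(nxt, WHITE)
                if c == GREY:           # back edge: an infinite sequence exists
                    return True
                if c == WHITE and dfs(nxt):
                    return True
        colour[state] = BLACK
        return False

    return dfs(start)
```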
### 3.4 Implementation on Prolog
Here, we describe the method of obtaining the state diagram for the design on the Prolog-version system. This is where the difference between the Prolog-version system and C-version system is most obvious.
Since states for the design are nothing but the conditions of the flip-flops, in order to acquire the state diagram for the design, it is only necessary to trace all the gates and decide the next condition of the flip-flops, provided that their current condition is given.
At first, the description concerning the networks between gates is translated into Prolog. For example, an AND gate like the one in Figure 6 is described as
\[
\text{and2}([I1, I2, O1]).
\]
Figure 6: AND gate
There also exists a database for the functional gates, such as
\[
\text{and2}([1,1,1]).
\]
\[
\text{and2}([A,0,0]).
\]
\[
\vdots
\]
and so on. Therefore, if the values of \( I1 \) and \( I2 \) are both 1, the value of \( O1 \) is unified to 1 by unification. The next condition of the flip-flops is only obtained once this operation has been executed throughout all the gates.
Here, we must consider the case in which the current input values, such as external inputs, are not fixed. For example, let us consider that \( I1 = 1 \) and the value of \( I2 \) is not fixed in Figure 6. In this case, the value of \( O1 \) is unified to 1 at first, because the value of \( I2 \) is unified to 1 using the database, and tracing the gates continues. Even if there does not exist a loop in this case, since there still remains the possibility of contradiction, the verification backtracks and the value of \( O1 \) is unified to 0 (the value of \( I2 \) is unified to 0), and the verification continues.
Therefore, in this system, all the gates have to be traced every time the next condition of the flip-flops is obtained, and there are also many backtrackings, which degrade the efficiency of the verification. To raise the efficiency of the verification, the number of backtrackings and gate traversals should be reduced. Thus, we suggest two approaches for efficiency;
1. Use triple-valued logic and handle undefined values.
2. Trace the combinational part of the design only once.
In the next section, we show a more efficient verification system in which these two approaches are implemented.
4 Logic Design Verification using Cover Expressions
The synchronous circuits are divided into the combinational part and flip-flop part as in Figure 7.
Figure 7: Combinational part and flip-flop part of a synchronous circuit
Here, we show the verification method by handling the combinational part as cover expressions [3].
At first, we briefly explain the cube and cover.
4.1 Cube and Cover
Let $p$ be the product term associated with a sum-of-product expression of a logic function with $n$ inputs $(x_1,..,x_n)$ and $m$ outputs $(f_1,..,f_m)$. Then $p$ is specified by a row vector $c = [c_1,..,c_n,c_{n+1},..,c_{n+m}]$, where
$$
c_i = \begin{cases}
10 & \text{if } x_i \text{ appears complemented in } p \quad (1 \le i \le n), \\
01 & \text{if } x_i \text{ appears not complemented in } p \quad (1 \le i \le n), \\
11 & \text{if } x_i \text{ does not appear in } p \quad (1 \le i \le n), \\
0 & \text{if } p \text{ is not present in the representation of } f_{i-n} \quad (n < i \le n+m), \\
1 & \text{if } p \text{ is present in the representation of } f_{i-n} \quad (n < i \le n+m).
\end{cases}
$$
For example, let us consider a logic function with 4 inputs and 2 outputs. For $f_1 = x_1x_2\overline{x_4}$, we have $c = [01 \ 01 \ 11 \ 10 \ 1 \ 0]$. The input part of $c$ is the subvector of $c$ containing the first $n$ entries of $c$. The output part of $c$ is the subvector of $c$ containing the last $m$ entries of $c$. A variable corresponding to 11 in the input part is referred to as an input don't care, and 00 never appears in the input part.
A set of cubes is said to be a cover $C$ associated with a sum-of-product expression. For
$$f_1 = x_1x_2 + \overline{x_2}x_3 + x_1x_3;$$
$$f_2 = \overline{x_2}x_3 + \overline{x_3}x_4;$$
we have $C = \begin{bmatrix}
01 & 01 & 11 & 11 & 1 & 0 \\
11 & 10 & 01 & 11 & 1 & 1 \\
01 & 11 & 01 & 11 & 1 & 0 \\
11 & 11 & 10 & 01 & 0 & 1
\end{bmatrix}$.
**Intersection** Suppose that the intersection (logical AND) of two cubes $c$ and $d$, written as $c \cdot d$, is a cube $e$. Then the entries $e_i$ of the cube $e$ are obtained by a bitwise AND of the corresponding entries of $c$ and $d$.
**Example.** The intersection of the first two cubes below is the third:
\[ f_1 = x_1 x_2 \quad [01 \ 01 \ 11 \ 1] \]
\[ f_1 = x_2 x_3 \quad [11 \ 01 \ 01 \ 1] \]
\[ f_1 = x_1 x_2 x_3 \quad [01 \ 01 \ 01 \ 1] \]
**On-cover and Off-cover** For a certain output variable \( f_i \), the set of all cubes where \( f_i \) is 1 is called on-cover for the output variable \( f_i \); similarly the set of all cubes which \( f_i \) is 0 is called off-cover for the output variable \( f_i \).
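A small Python sketch of the cube intersection just described, using lists of bit-strings for cubes; passing the input/output split explicitly is an assumption of the sketch rather than the paper's data structure.
```
def intersect_cubes(c, d, n_inputs):
    """Entry-wise AND of two cubes.  A cube is a list of 2-bit strings for the
    input part followed by single bits for the output part; an input entry of
    '00' means the intersection is empty."""
    assert len(c) == len(d)
    e = []
    for i, (a, b) in enumerate(zip(c, d)):
        bits = format(int(a, 2) & int(b, 2), "02b" if i < n_inputs else "01b")
        if i < n_inputs and bits == "00":
            return None                      # empty intersection
        e.append(bits)
    return e

# x1*x2 intersected with x2*x3 gives x1*x2*x3 (both cubes of f1):
print(intersect_cubes(["01", "01", "11", "1"], ["11", "01", "01", "1"], 3))
# -> ['01', '01', '01', '1']
```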
### 4.2 Verification Method using Cover Expressions
The verification flowchart using cover expressions is shown in Figure 8. (From now on, the state diagram corresponding to the Negation of the Specification is called NS.) We explain Figure 8 by verifying an example. The example is the control part of a receiver by handshaking [5]. The design is shown in Figure 9 and the specification to be verified is
\[ \Box(\text{Call} \rightarrow \Diamond\,\text{Hear}) \]
with the condition that flip-flops are reset at the initial state.
**Figure 8: Flowchart of Verification**
**Figure 9: The Control Part of Receiver by Handshaking**
(preparation) We describe each cube in the form of [Call, CY, Hear-i, Call-o, Hear-o, Hear] (input variables are Call, CY, and Hear-i; output variables are Call-o, Hear-o, and Hear). Then on-cover and off-cover of the combinational part of this circuit are as follows.
\[
\begin{align*}
\text{Con} &= (\text{on-cover}) \\
&= \begin{bmatrix}
01 & 11 & 11 & 1 & 0 & 0 \\
01 & 10 & 11 & 0 & 1 & 0 \\
01 & 01 & 01 & 0 & 1 & 0 \\
11 & 11 & 01 & 0 & 0 & 1 \\
10 & 11 & 11 & 1 & 0 & 0 \\
10 & 11 & 11 & 0 & 1 & 0 \\
11 & 01 & 01 & 0 & 1 & 0 \\
11 & 11 & 10 & 0 & 0 & 1
\end{bmatrix}, \\
\text{Coff} &= (\text{off-cover}) \\
&= \begin{bmatrix}
01 & 11 & 11 & 1 & 0 & 0 \\
11 & 11 & 11 & 1 & 0 & 0 \\
11 & 10 & 11 & 1 & 0 & 0 \\
11 & 11 & 10 & 0 & 0 & 1
\end{bmatrix}.
\end{align*}
\]
For example, the second and third rows of Con show that Hear-o is on in two cases, that is, either Call is on and CY is off or Call, CY, and Hear-i are all off.
The connections between the flip-flops and the combinational part are described as \(\bigcirc\text{Hear-i} = \text{Hear-o}\) and \(\bigcirc\text{CY} = \text{Call-o}\), where \(\bigcirc\) denotes the value at the next state.
The negation of the specification \(\lnot \square (\text{Call} \rightarrow \Diamond \text{Hear})\) is translated into a state diagram such as in Figure 10. This state diagram is nothing but NS.
\[ \text{Figure 10: State Diagram for NS} \]
(step 1) The initial state of NS is \(<1>\) in Figure 10. The condition in which the flip-flops are reset is described in the cube form as
\[ \text{Ccond} = [11 10 10 1 1 1]. \]
(step 2) The transition condition from the state \(<1>\) in NS is "Call \(\land \lnot \text{Hear}\)."
The cube for the condition Call is
\[ [01 11 11 1 1 1], \]
Then the cube for the condition "\(\lnot \text{Hear}\)" is
\[ [11 11 10 1 1 1], \]
which is taken from the fourth row of Coff, because \(\lnot\text{Hear}\) must be derived from the off-cover for the output variable Hear.
Therefore, we obtain the cover for "Call \(\land \lnot \text{Hear}\)" as
\[ \text{Ct} = [01 11 10 1 1 1]. \]
The next state in NS is \(<2>\), and the next state in the design is calculated from
\[ \text{Cnext-on} = \text{Ccond} \cdot \text{Con} \cdot \text{Ct} \]
and
\[ \text{Cnext-off} = \text{Ccond} \cdot \text{Coff} \cdot \text{Ct}. \]
(step 3) The next state in the design is obtained from Cnext-on and Cnext-off in the following way. If there is a certain output variable that has the value 1 only in the cover Cnext-on, that variable should be 1 at the next state. Similarly, if there exists a certain output variable that has the value 1 only in the cover Cnext-off, that variable should be 0 at the next state.
Here, since
\[ \text{Cnext-on} = [01 10 10 1 1 0], \]
\[ \text{Cnext-off} = [01 10 10 0 0 1], \]
Call-o and Hear-o are 1 and Hear is 0 at the next state. Considering the connections between the flip-flops and the combinational part, the next state in the design is as follows.
\[ \text{Cnext} = [11 01 01 1 1 1]. \]
If there are some variables that have the value 1 both in Cnext-on and in Cnext-off, the values of those variables at the next state cannot be decided.
If there exists a variable that has the value 1 neither in Cnext-on nor in Cnext-off, the next state in the design cannot be obtained. In this case, the verification flow goes to (step 5), as shown in Figure 8.
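A minimal sketch of (step 3) in Python (hedged; the actual systems are written in Prolog and C, and a cover may in general contain several cubes, whereas the sketch assumes a single cube per cover as in the example):

```python
def next_state_values(cnext_on, cnext_off, n, m):
    """Decide each output variable from single-cube covers Cnext-on and Cnext-off.

    Returns, per output variable, 1, 0, 'undecided' (value 1 in both covers) or
    'no-next-state' (value 1 in neither cover, i.e. go to (step 5)).
    """
    result = []
    for j in range(m):
        on = cnext_on is not None and cnext_on[n + j] == 1
        off = cnext_off is not None and cnext_off[n + j] == 1
        if on and not off:
            result.append(1)
        elif off and not on:
            result.append(0)
        elif on and off:
            result.append("undecided")
        else:
            result.append("no-next-state")
    return result

# Values from the receiver example: Cnext-on = [01 10 10 | 1 1 0], Cnext-off = [01 10 10 | 0 0 1].
cnext_on = [0b01, 0b10, 0b10, 1, 1, 0]
cnext_off = [0b01, 0b10, 0b10, 0, 0, 1]
print(next_state_values(cnext_on, cnext_off, n=3, m=3))  # -> [1, 1, 0]
```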
(step 4) Check whether there is a transition loop both in NS and in the design. If such a loop exists and it satisfies eventuality, it is a counterexample.
In this case, since no loop exists, set the current state in NS to $<2>$ and the current state in the design to Cnext (in other words, Cnext becomes the new Ccond), and go to (step 2).
(step 2) The condition of the state transition in NS is $Ct = [11 11 10 1 1 1]$.
In this case, since $Ct \cdot \text{Ccond} = \text{nil}$, both Cnext-on and Cnext-off are nil, and the next state in the design cannot be obtained. Then go to (step 5).
(step 5) Since no state transitions remain in NS, the verification is finished.
5 Evaluation of Verification System
Here, we use two examples: a receiver by handshaking [8] and a DMA controller for a minicomputer [1]. As mentioned earlier, we have developed two verification systems: the Prolog version [4] and the C-language version implemented in this work. We verify both examples using both systems and discuss the results. The measurements were taken on a VAX11/730 (0.2~0.3 MIPS) under UNIX, using C-Prolog, which was developed at Edinburgh University [8].
(1) Receiver
The design is shown in Figure 11.

Figure 11: Receiver by Handshaking
We verify the specification
$$\Box(\text{Reset} \rightarrow \Box(\text{Call} \rightarrow \Diamond \text{Hear})).$$
The results are shown in Table 1.
### Table 1: CPU time for Verifying Receiver
| Bit width of data path | Prolog version: without memorizing states | Prolog version: with memorizing states | C version: state diagram | C version: without memorizing states | C version: with memorizing states |
|---|---|---|---|---|---|
| 1 bit | 9.88 | 4.28 | 2.3 (2.1) | 0.7 | 0.6 |
| 2 bits | 204.80 | 28.25 | (not measured) | (not measured) | |
| 3 bits | 5645.80 | 253.3 | 5.2 (3.3) | 1.4 | 1.4 |
| 4 bits | | | 10.2 (4.7) | 2.9 | 3.1 |
| 8 bits (keeps on increasing) | | | 26.8 (8.4) | 9.1 | 10.2 |
| 16 bits | | | | 0.4 | 0.4 |
* In the C version, the design is filtered while making the covers, so this time depends on the bit width of the data path. The time for each width is shown in parentheses.
(VAX11/730 0.2~0.3 MIPS)
### Filtering Design Description
One technique for restraining the increase in verification time is filtering the design description, that is, extracting only the part of the design necessary for the verification. For example, this specification involves only one output variable, Hear, so we only need to verify the filtered part related to Hear (surrounded by the broken lines in Figure 11). This technique is implemented in both systems.
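One way to realize such filtering — a hedged sketch, since neither system's implementation is shown in the paper — is a backward traversal from the output variables mentioned in the specification over the fan-in of each gate (the usual cone-of-influence extraction). The dictionary representation below is purely illustrative, not the HSL format used by the systems.

```python
def filter_design(gates, outputs_of_interest):
    """Keep only the signals/gates that can influence the given output signals.

    `gates` maps a signal name to the list of signals feeding it (illustrative).
    """
    needed, stack = set(), list(outputs_of_interest)
    while stack:
        sig = stack.pop()
        if sig in needed:
            continue
        needed.add(sig)
        stack.extend(gates.get(sig, []))      # follow the fan-in backwards
    return {s: ins for s, ins in gates.items() if s in needed}

# Toy netlist: only the part feeding output 'f' is kept for the verification.
gates = {"f": ["a", "b"], "g": ["b", "c"], "h": ["g", "d"]}
print(filter_design(gates, ["f"]))            # -> {'f': ['a', 'b']}
```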
### Memorization of states
In the verification flowchart of Figure 8, when we reach a state that has appeared before, we need not check it again, provided that every state that has appeared is remembered. This is called memorization of states.
The basic operation for obtaining the next state is bit calculation in the C-version system, whereas it is unification in the Prolog-version system. Hence, obtaining the next state takes much more time in the Prolog version than in the C version. As a result, memorization of states is less effective in the C version than in the Prolog version, and we therefore do not implement this method in the C-version system.
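For completeness, a minimal sketch of the memorization idea as used in the Prolog-version system (hedged; the state representation below is assumed, not the systems' actual data structure):

```python
visited = set()

def seen_before(ns_state, design_cube):
    """Return True if this (NS state, design state) pair was already explored."""
    key = (ns_state, tuple(design_cube))
    if key in visited:
        return True
    visited.add(key)
    return False

# The pair <2>, Cnext from the receiver example is explored only once:
print(seen_before("<2>", [0b11, 0b01, 0b01, 1, 1, 1]))  # False: first visit
print(seen_before("<2>", [0b11, 0b01, 0b01, 1, 1, 1]))  # True: already remembered
```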
(2) DMA controller

The design is shown in Figure 12. We verify these two specifications.
(1) $\Box((\text{Reset} \land \bigcirc\Box\lnot\text{Reset} \land \bigcirc\Box\lnot\text{Acdt}) \rightarrow \bigcirc\Box(\text{Rqdma} \rightarrow \bigcirc\text{Rqdt}))$
(2) $\Box((\text{Reset} \land \bigcirc\Box\lnot\text{Reset} \land \bigcirc\Box\lnot\text{Rqdma}) \rightarrow \bigcirc\Box(\text{Acdt} \rightarrow \bigcirc\lnot\text{Rqdt}))$
Filtered part is dotted in Figure 12, and the results are shown in Table 2.
| CPU time [sec] | Prolog version: without memorizing states | Prolog version: with memorizing states | C language version: HSL → cover | C language version: verification |
|---|---|---|---|---|
| not filtered, specification (1) | > 60,000 | > 60,000 | | |
| not filtered, specification (2) | > 60,000 | > 60,000 | | |
| filtered, specification (1) | > 60,000 | 2672.05 | | |
| filtered, specification (2) | > 60,000 | 1923.35 | | |
("> 60,000" means over 60,000 seconds )
(VAX11/730 0.2~0.3 MIPS)
Table 2: CPU time for Verifying DMA Controller
**Evaluation** What differs most between the two systems is the way the combinational part is handled. In the Prolog-version system, all gates are traced every time the next state in the design is obtained, whereas in the C-version system they are traced only once, when the covers Con and Coff are built. Moreover, because undefined values cannot be handled in Prolog, the Prolog-version system performs many needless backtrackings that do not occur in the C-version system.
Therefore, while the verification times of the two systems are nearly equal for small designs, the larger the design, the longer the Prolog-version system takes (the time increases almost exponentially). The C-version system can handle much larger designs in comparison, and it verified the DMA controller about 1000 times faster than the Prolog-version system. Also, it takes little time to make the covers.
### 6 Verification Method using Terminal Variables
As designs become larger and the number of input variables increases, it is not easy to translate the combinational part of a design into cover expressions. The number of cubes is $2^{n-1}$ in the worst case, where $n$ is the number of input variables, and the more complicated the logic of the combinational part, the closer it comes to this worst case. Although the combinational logic of the synchronization part is usually not as large and complicated as that of the function part, the covers may still explode.
In this section, we present a verification method using terminal variables that prevents the covers from exploding. Introducing terminal variables makes it easy to translate the design into cover expressions, since the logic for each output variable becomes simple. This method, however, is not yet implemented.
In using terminal variables, the structure of the synchronous circuits is as shown in Figure 13. Terminal variables should be derived from the original designs on condition that the terminal inputs and outputs have one-to-one correspondence.
Figure 13: Structure of Synchronous Circuits using Terminal Variables
The verification flowchart is very similar to the one described in Section 4. The differences arise where the on-cover and off-cover of the combinational part are used, namely in
(1) translating a transition condition of the specification into cover expressions
(2) calculating the next state in the design.
(preparation) First, we get on-cover and off-cover of the combinational part of the circuit. The form of each cube is
\[ \{I,FI,TI,O,FO,TO\}. \]
Here, each column means as follows.
- I: external input variables
- FI: internal input variables
- TI: terminal input variables
- O: external output variables
- FO: internal output variables
- TO: terminal output variables
Input part consists of I, FI, and TI, and output part consists of O, FO, and TO.
(1) Obtaining the cover Ct, i.e., the transition condition of NS. Ct has the form
\[ \{I,FI,TI,O,FO,TO\}, \]
and values of all the output variables are 1. Here, we remove terminal variables from the input part of Ct.
**Removal of terminal variables** If a terminal input variable \( TI \) is 01 (respectively 10), the input part of Ct is replaced by the input part of the intersection between Ct and the rows of the on-cover (respectively off-cover) that correspond to \( TO \) — \( TI \) and \( TO \) correspond to each other one-to-one — and the value of \( TI \) is then changed to 11. This operation is repeated until all values of TI have been changed to 11.
If any input variable (I, FI, or TI) becomes 00 during this operation, the cube is nil and should be deleted. If the whole cover Ct becomes nil during this operation, the transition condition cannot be satisfied and another transition should be found.
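The removal loop can be sketched as follows (a hedged Python illustration of one possible reading of the procedure; the method is not implemented in the paper's systems, the cube layout [I, FI, TI, O, FO, TO] is addressed via index lists supplied by the caller, and for brevity only the first non-nil intersection is kept, whereas in general Ct may expand into a cover of several cubes):

```python
def intersect(c, d, n_in):
    e = [ci & di for ci, di in zip(c, d)]
    return None if any(v == 0 for v in e[:n_in]) else e

def remove_terminal_vars(ct, ti_idx, to_idx, on_cover, off_cover, n_in):
    """Eliminate terminal input variables from the input part of the cube Ct.

    ti_idx / to_idx: positions of the TI and TO fields (corresponding one-to-one).
    Returns the rewritten cube, or None if Ct becomes nil.
    """
    ct = list(ct)
    for ti, to in zip(ti_idx, to_idx):
        if ct[ti] == 0b11:                       # already a don't care
            continue
        # TI = 01 -> use the on-cover rows for TO, TI = 10 -> use the off-cover rows.
        cover = on_cover if ct[ti] == 0b01 else off_cover
        rows = [c for c in cover if c[to] == 1]
        ct[ti] = 0b11                            # the terminal variable disappears
        replaced = None
        for row in rows:
            cand = intersect(ct, row, n_in)
            if cand is not None:                 # keep only the input part of the result
                replaced = cand[:n_in] + ct[n_in:]
                break
        if replaced is None:
            return None                          # Ct became nil: condition unsatisfiable
        ct = replaced
    return ct
```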
(2) We remove the terminal variables in the same way from the input parts of Cnext-on and Cnext-off, which are obtained as described in Section 4.
The rest of the verification flowchart is the same as described earlier.
7 Conclusions
We have presented a verification method using cover expressions. The verification system in which this method is implemented can verify larger designs, and it has verified a DMA controller about 1000 times faster than the Prolog-version system. This is due to handling the combinational part as cover expressions.
We have also presented a verification method using terminal variables. We intend to implement this method and to demonstrate its efficiency on much larger designs.
References
Transforming Object-Centric Process Models into BPMN 2.0 Models in the PHILharmonicFlows Framework
Marius Breitmayer †, Lisa Arnold †, Marko Pejic †, Manfred Reichert †
Abstract: Business processes can be modeled using a plethora of different paradigms including activity-centric (e.g., imperative, declarative), and data-centric processes. The former focus on the process activities to be executed as well as their execution order and constraints, whereas the latter deal with the data required to progress during process execution. Both representations, however, allow describing the same process, but from different viewpoints. Consequently, a transformation between process representations based on the different paradigms yields promising perspectives for enabling a holistic view on both the behavior and the data perspective of a process and fosters a common understanding of different paradigms. This paper presents an approach for transforming object-centric processes (i.e., object lifecycle processes and their interactions) into corresponding activity-centric representations modeled in terms of BPMN 2.0. We present seven transformation rules for mapping an object- to an activity-centric process, illustrated along a running example. We evaluate the approach based on a proof-of-concept implementation that can automatically perform the necessary transformations and has been applied in multiple scenarios. Overall, our approach for transforming object-centric processes into BPMN 2.0 models provides new insights into the relationship between the two paradigms and enables a more flexible and effective way of modeling business processes in general.
Keywords: Object-centric Process; Data-centric Process; Activity-centric Process; Process Model Transformation
1 Introduction
Process-aware Information Systems (PAISs) are based on executable business process models expressed in terms of the activity-centric paradigm [DvTH05]. In the latter, a process model comprises a set of connected black-box activities that represent units of work or sub-processes, specified with an imperative language (e.g., BPMN 2.0). At runtime, the completion of these activities drives process execution. However, many processes (e.g., unstructured, and knowledge-intensive processes [Si09]) are data-centric, requiring the treatment of data as a first-class citizen at both design- and run-time. Due to the insufficient integration of processes and data, traditional PAISs do not adequately support such processes. To remedy this drawback, the data-centric paradigm has emerged [St19]. As opposed to activity-centric processes, the available data (values) drives process execution. Usually, approaches implementing this paradigm follow an object-centric approach, i.e., a business process corresponds to a multitude of concurrently processed business objects (of the same or different types) that interact with each other to reach the overall process goal. Examples of object-centric process management approaches include case handling [vdAWG05], artifact-centric processes [CH09], object-centric process mining [vdA19], and object-centric/-aware process management [KR11]. Both activity-centric and object-centric paradigms have their pros and cons. An object-centric process model may be useful for understanding how data drives process execution, and how the various business objects involved in a business process interact with each other. In contrast, an activity-centric process model fosters our understanding of how work is executed. In general, the same process may be described from both viewpoints (i.e., object- and activity-centric), thus allowing for a more comprehensive way of expressing process semantics and fostering process literacy. Despite the relevance of the different viewpoints, there exists a gap regarding the common use of activity- and object-centric process management approaches.
† Ulm University, Institute of Databases and Information Systems, Helmholtzstraße 16, 89081 Ulm, Germany [email protected]
The contribution of this paper is twofold. On the one hand, we provide a conceptual approach for transforming object-centric processes (i.e., a PHILharmonicFlows process model) into an activity-centric representation (i.e., BPMN 2.0). As a benefit, the strengths of both paradigms can be exploited, increasing overall process information provided by the two representations. Furthermore, the approach allows for a more flexible and effective way of modeling processes, improving their compatibility and comparability. On the other hand, the paper provides valuable insights into how such transformation can be automatically performed using the FLOWS2BPMN approach.
This paper is structured as follows: Section 2 introduces fundamentals. Section 3 describes the proposed approach and presents the transformation rules of FLOWS2BPMN. Section 4 evaluates the approach and Section 5 relates it to existing works. Section 6 concludes the paper with a summary and outlook.
2 Fundamentals
As the pillar of this work, we use the object-centric process management approach PHILharmonicFlows, which implements the fundamental concepts of the object-centric paradigm, covering all lifecycle stages (i.e., modeling, execution, monitoring, and analysis/evolution). As shown in a literature study [St19], PHILharmonicFlows provides the most comprehensive approach, together with an integrated and usable implementation.
2.1 PHILharmonicFlows
In the PHILharmonicFlows approach, we developed a framework for data-centric and -driven process management and enhanced it with the concept of objects. Generally, an object-centric business process comprises multiple interacting objects (e.g., Job Offer, Application, Review, and Interview) with each object representing a real-world business
object. A (semantic) data model (see Fig. 1a) is used to organize all process-relevant objects (including their attributes) as well as their semantic relations [KR11]. The latter also consider cardinality constraints. Finally, object behavior (i.e., the data-driven processing of the respective object lifecycle) is expressed in terms of a state-based object lifecycle process model. Thereby, each state of a specific lifecycle process (e.g., the lifecycle process of object Application in Fig. 1b comprises states Created, Sent, Checked, Accepted, and Rejected) may comprise multiple steps. Each step refers to a specific object attribute to be written before completing the respective state. After all required attributes (i.e., steps) of the present state have assigned values, the object may transition to the next state, i.e., the execution of a lifecycle process is data-driven.
At runtime, each object may be instantiated multiple times with the corresponding lifecycle process instances being executed concurrently [ASR21]. Furthermore, relations (e.g., the one between an Application instance and a Review instance) may be instantiated multiple times, enabling 1-to-many or many-to-many associations between individual object instances. Overall, this results in a large relational process structure at runtime [SAR18b]. The interactions between the various object instances of such a process structure, are managed by a coordination process [SAR18a]. In Fig. 2, for example, an application may only be rejected if either the review or the interview proposes rejection. Conversely, an application is accepted if the job offer is closed, the application is checked, and the interview proposes hiring the applicant. In a nutshell, the coordination process enables or prohibits objects to change to another state depending on pre-defined constraints of that object in relation to the states of other objects (see [SAR18a] for details).
### 2.2 Business Process Model and Notation
Business Process Model and Notation (BPMN) 2.0 constitutes an established standard for representing business processes [Ko15]. BPMN is based on the activity-centric process
management paradigm, providing a standardized multi-domain modeling notation [We19]. Generally, the completion of activities drives process execution, with the order in which these activities are executed being managed by the control flow. The latter comprises *sequence flows*, *message flows*, and *gateways*. In addition, *data objects* allow modeling data and may be read or written by activities, enriching the latter with specific information. Modeling the data perspective, however, is often neglected as activities are treated as first-class citizens [Re12]. Fig. 3 depicts a simplified process model of a recruitment process in which a *Job Offer* is created and published by a *Personnel Officer*. Then, *Applicants* may create and send their *Application*. Afterwards, a *Department Expert* evaluates the application and either accepts or rejects it. Finally, the *Applicant* is notified. Note that the execution of this process is driven by the execution of activities rather than the data becoming available (cf. Section 2.1).
Fig. 3: Example Process Recruitment in BPMN 2.0 (simplified)
### 3 Transformation Approach
The goal of FLOWS2BPMN, the approach we propose for transforming object-centric process models to BPMN 2.0 models, has been three-fold: First, we developed a concept for transforming object-centric processes into an activity-centric representation, i.e., a BPMN 2.0 process model. Second, we implemented a proof-of-concept prototype capable of automatically realizing this model transformation. Third, the transformation between object-centric and activity-centric approaches enables a holistic view on business processes and facilitates our understanding of processes expressed with these different paradigms. To enable the transformation between object- and activity-centric processes, we derived a set of transformation rules that allow transforming object types, lifecycle processes, coordination processes, and user assignments into a suitable BPMN 2.0 representation, enabling the application of existing process management tools to object-centric processes.
Fig. 4a depicts the main components of the FLOWS2BPMN approach, which enables the generic transformation of object-centric process models into BPMN 2.0 models.
The core of the approach comprises 7 transformation rules (TR), each belonging to one of the following categories: Object Type Transformation, Lifecycle Process Transformation, Coordination Process Transformation, and User Assignment Transformation. The transformation procedure is illustrated by Fig. 4b. It describes the order in which the transformation rules are applied to generate a BPMN 2.0 process model from an object-centric process representation. The remainder of this section introduces the 7 transformation rules along the running example of a recruitment process (cf. Section 2) and the transformation procedure.
3.1 Object Type Transformation
TR1 (Object Type Transformation):
An object type of an object-centric process is mapped to a pool of a BPMN collaboration diagram.
Transformation rule TR1 maps object types to BPMN elements. Each object type is transformed into a separate pool. Note that the generation of a pool implies adding a start and end event to it. As a consequence, we map different object types to different pools, each having corresponding start and end events. Fig. 5 illustrates the application of TR1 to object type Application. Generally, multiple instances of an object type may be created at runtime, each being executed by a separate lifecycle process instance. In the transformation of object types to BPMN pools, this is reflected by the use of multi-instance pools (MI pool), i.e., each pool generated by TR1 is tagged as a MI pool.
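To give a flavour of TR1, the following hedged Python sketch (not the FLOWS2BPMN source code; the element and attribute names follow the BPMN 2.0 XML vocabulary, while the function and identifier names are assumptions made for illustration) creates a multi-instance pool, i.e., a participant referencing a process that already contains a start and an end event.

```python
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def object_type_to_pool(collaboration, definitions, object_type_name):
    """TR1 sketch: map one object type to a (multi-instance) BPMN pool."""
    proc_id = f"Process_{object_type_name}"
    participant = ET.SubElement(
        collaboration, f"{{{BPMN_NS}}}participant",
        id=f"Pool_{object_type_name}", name=object_type_name, processRef=proc_id)
    # Marker that several object (pool) instances may exist at runtime.
    ET.SubElement(participant, f"{{{BPMN_NS}}}participantMultiplicity")
    process = ET.SubElement(definitions, f"{{{BPMN_NS}}}process", id=proc_id)
    ET.SubElement(process, f"{{{BPMN_NS}}}startEvent", id=f"Start_{object_type_name}")
    ET.SubElement(process, f"{{{BPMN_NS}}}endEvent", id=f"End_{object_type_name}")
    return process

definitions = ET.Element(f"{{{BPMN_NS}}}definitions")
collaboration = ET.SubElement(definitions, f"{{{BPMN_NS}}}collaboration", id="Collab_1")
object_type_to_pool(collaboration, definitions, "Application")
print(ET.tostring(definitions, encoding="unicode"))
```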
3.2 Lifecycle Process Transformation
Based on TR1, each object type can be transformed to a multi-instance pool. The second step of the transformation procedure (see Fig. 4b) then transforms each object lifecycle process to a semantically corresponding BPMN 2.0 representation. This includes the transformation of lifecycle states, steps (i.e., attributes), and transitions between them to corresponding BPMN elements.
**TR2 (State Type Transformation):**
Each state type of the object lifecycle process is transformed into a BPMN activity. Depending on the respective state type, this activity corresponds to an atomic task or a sub-process reflecting the internal logic of the steps within an object state. Additionally, activities write data objects that indicate the state of an object type.
Remember that a state of an object lifecycle process comprises a set of ordered steps, each representing atomic actions (i.e., writing an attribute) of the object. In BPMN, this behavior can be encapsulated by a sub-process. When transforming a state to BPMN elements, two cases need to be distinguished:
1. The state comprises a number of connected steps that reflect the attributes to be written (e.g., in a form) before leaving the state. In this case, the state is mapped to a (collapsed) BPMN sub-process. Particularly, this sub-process requires the transformation of the steps within the respective state as well (see Fig. 6b and TR3).
2. The state is silent, i.e., it does not comprise any step and action respectively, (e.g., the silent state Accepted in Fig. 6a). Consequently, representing the internal logic of the state is not required. In this case, the state is transformed into a BPMN task.
The resulting BPMN process model reflects the state-based view of the lifecycle process (see Fig. 1b). Expanding sub-processes, in turn, displays the internal logic of a state and its respective steps (see TR3). Consequently, both granularity levels of an object lifecycle process (i.e., state and step level) can be represented. Moreover, each created BPMN activity is connected to a data object that refers to the object in the corresponding state. Note that this data object corresponds to a multi-instance data object.
(a) Transforming Silent State Type
(b) Transforming Non-silent State Type
Fig. 6: Example Lifecycle State Type Transformation
Lifecycle steps represent atomic actions to write or update object attributes, e.g., by filling in form fields or sensing a data value from the physical environment. Mapping a step to BPMN results in a task (i.e., atomic activity), embedded in the respective sub-process generated by TR2. The created sub-process and its tasks reflect the attributes to be updated when processing an object state as well as the order (including if-then-else constraints) in which the corresponding attribute values may be written during runtime. Note that, for example, this information may be exploited by process implementers to create corresponding forms (which are auto-generated in PHILharmonicFlows).
TR3 (Lifecycle Step Type Transformation):
When mapping a lifecycle step to a BPMN element, different cases need to be distinguished, depending on the properties of the step (cf. Fig 1b):
1. **Attribute steps** update object attributes and correspond to the default step type. They are transformed to a BPMN task.
2. **Computation steps** enable the automatic computation of attribute values (e.g., the current date or a price including VAT). They are mapped to a Service Task in BPMN.
3. **Decision steps** comprise predicate steps and are mapped to tasks.
4. **Predicate steps** enable choices during lifecycle process execution, i.e., the respective object may transition to different successors depending on attribute values. Predicate steps are not mapped to BPMN tasks, but to the labels of the sequence flows outgoing from a predicate step.
5. **Silent steps** (i.e., steps without associated action) are represented by the state generated in TR2 – no transformation is required.
In a nutshell, only attribute, computation, decision, and predicate steps need to be mapped to the BPMN model. In particular, attribute and decision steps are mapped to tasks, whereas computation steps are mapped to a service task. They further require the integration of data in the BPMN model. For this purpose, each task is associated with a corresponding data object in the BPMN model. As steps are allocated to states, these data objects exist within the created sub-process. Note that these data objects are labelled as multi-instance data objects. The mapping of steps to BPMN elements is depicted in Fig. 7.
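The case distinction of TR3 can be summarised in a small dispatch function (a hedged sketch; the dictionary-based step representation is an assumption for illustration, not the PHILharmonicFlows data model):

```python
def map_step(step):
    """TR3 sketch: decide which BPMN element a lifecycle step becomes."""
    kind = step["kind"]
    if kind in ("attribute", "decision"):
        return ("task", step["name"])             # plain BPMN task plus data object
    if kind == "computation":
        return ("serviceTask", step["name"])      # automatically computed attribute
    if kind == "predicate":
        # No own task: becomes the label of the outgoing sequence flows (TR4).
        return ("sequenceFlowLabel", step["expression"])
    if kind == "silent":
        return None                               # already covered by the state (TR2)
    raise ValueError(f"unknown step kind: {kind}")

steps = [
    {"kind": "attribute", "name": "Upload CV"},
    {"kind": "computation", "name": "Compute date"},
    {"kind": "predicate", "expression": "proposal = reject"},
    {"kind": "silent"},
]
print([map_step(s) for s in steps])
```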
Note that attribute values need not necessarily be provided in the pre-specified order of the lifecycle process steps. Transitions between the steps of a state, therefore, only define a default execution order of steps [SAR19], e.g., an applicant may upload her CV before providing her address when creating an application. Consequently, the sub-process representing a state may be declared as a BPMN ad-hoc activity, i.e., the order in which attributes are written is arbitrary. In particular, the sub-process may only contain activities, data objects, sequence flows, and gateways. If sequence flows are omitted, the execution order of the internal tasks will be arbitrary. Consequently, the flexible execution (i.e., arbitrary order of how attributes are written) of the lifecycle process is adopted in the resulting BPMN model. Note that we greyed out the (yet to-be transformed) sequence flows in Fig. 7 as they are not mandatory, but optional. Again, note that the created sub-process provides a specification to process engineers that is useful for implementing the process (e.g., user form design). In general, the transitions of a lifecycle process correspond to sequence flows of a BPMN model as both possess the same semantics in their respective process modeling paradigm. Mapping the exact semantics of a transition to a sequence flow, however, depends on its context.
**TR4 (Transition Type Transformation):**
A transition type is transformed into a sequence flow. Expressions specified in lifecycle predicate steps are mapped to labels of the corresponding sequence flows. To minimize the routing paths per element, exclusive gateways are generated to group these sequence flows and to ensure that activities only have one incoming and outgoing sequence flow.
More precisely, a sequence flow in the BPMN model must be solely labeled if the source element of the transition corresponds to a *predicate step* (see TR3). In this case, the label of the created sequence flow should correspond to the expression attached to the predicate step. Fig. 8 shows the mapping of a predicate step to labeled sequence flows according to TR4.
TR5 (Backwards Transition Type Transformation):
A backwards transition type of a lifecycle process is transformed to a loop in the corresponding BPMN model.
Backwards transitions allow the users involved in the execution of a lifecycle process to return to previous states. When mapping a backwards transition to a BPMN model, certain activities of the BPMN model need to be repeated. For this purpose, each backwards transition is mapped to a loop in the BPMN model. Fig. 9 illustrates this transformation.
3.3 Coordination Process Transformation
The previous sections have introduced the transformation rules for object types and their lifecycle processes. This section shows how interactions between different object lifecycle processes are considered in the model transformation procedure. In PHILharmonicFlows, an interaction between lifecycle processes refers to the states of different objects and is managed by a coordination process (see Fig. 2) [SAR18a]. The latter comprises a number of coordination steps of which each reflects a semantic relationship between object states.
TR6 (Coordination Process Transformation):
The information contained in a coordination process is transformed to BPMN elements as well. In particular, this transformation considers the constraints attached to the coordination steps of the coordination process, i.e., the semantics of each individual coordination step needs to be mapped to its BPMN counterpart. For this purpose, intermediate message events (catching) are applied to catch messages. Moreover, intermediate parallel multiple events are used to catch multiple messages. In turn, intermediate message events (throwing) enable sending messages. Finally, event-based gateways allow coordinating alternative paths.
The semantics of a coordination step is defined by the ports associated with it as well as the transitions connected to these ports (see Fig. 2). Different cases need to be distinguished:
1. Multiple ports attached to a coordination step reflect an (X)OR-semantics (cf. coordination step Application Rejected in Fig. 2). State Rejected of object Application may be only executed after executing one of the preceding states. In BPMN, we realize this behavior using an Event-based Gateway. This transformation is depicted in Fig. 10.
2. Multiple transitions targeting at a single port of a coordination step express AND-semantics. Consider coordination step Application Accepted in Fig. 2. Regarding this step, multiple constraints (e.g., Job Offer Closed, Application Checked, and Interview Hired in Fig. 2) need to be met. When mapping this behavior to a BPMN process model, intermediate parallel multiple events are leveraged, i.e., process execution is delayed upon arrival of corresponding messages. These messages are modeled via intermediate message events (throwing) following the corresponding activities in the respective (object) pool. This transformation is depicted in Fig. 10 (2).
3. If a coordination step has exactly one port with one incoming transition, the activation of the state necessitates the previous completion of another state corresponding to a different object (see coordination step Application Created in Fig. 2). Such behavior can be expressed with BPMN using a message exchange, i.e., a particular coordinating activity sends a message received by another coordination activity. For this purpose, an intermediate message event (throwing) as well as an intermediate message event (catching) need to be added to the process model. Note that the control flow of the BPMN model is delayed until completing this message exchange. This transformation is illustrated by Fig. 10 (3).
For object-centric processes these different semantics are not mutually exclusive. The actual complexity might increase when using the various coordination constraints in combination.
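The three cases can be condensed into a small decision function (a hedged sketch; it assumes that the number of ports and of incoming transitions per port is available from the coordination process model):

```python
def coordination_step_construct(num_ports, transitions_per_port):
    """TR6 sketch: choose the BPMN construct for one coordination step.

    transitions_per_port: number of incoming transitions for each port.
    """
    if num_ports > 1:
        # Case 1 - (X)OR semantics: one of several preceding states suffices.
        return "event-based gateway with one catching message event per port"
    if num_ports == 1 and transitions_per_port[0] > 1:
        # Case 2 - AND semantics: several constraints must be met before proceeding.
        return "intermediate parallel multiple (catching) event"
    if num_ports == 1 and transitions_per_port[0] == 1:
        # Case 3 - plain dependency on the state of one other object.
        return "throwing/catching intermediate message event pair"
    return "no coordination construct required"

print(coordination_step_construct(2, [1, 1]))  # e.g. coordination step Application Rejected
print(coordination_step_construct(1, [3]))     # e.g. coordination step Application Accepted
print(coordination_step_construct(1, [1]))     # e.g. coordination step Application Created
```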
3.4 User Assignment Transformation
To complete the transformation rules of the FLOWS2BPMN approach, the activities created by the previous TRs need to be associated with their respective user roles. The lifecycle process model associates each state with a user assignment, which determines the user roles responsible for processing the respective state.
TR7 (User Assignment Transformation):
A user role is transformed to a lane within a pool. Based on this transformation, a human task (i.e., state) is assigned to that lane whose role corresponds to the user assignment set out by the respective state of the lifecycle process model. A system user role (lane) comprises non-assigned states.
We accomplish the transformation of user roles in a two-step procedure. First, we map the user role to a new lane of the pool created by TR1. Further, we label the newly added lane with the name of the respective user role. The activities of the corresponding pool need to be assigned to their proper lanes. To be more precise, each task resulting from the application of TR2 is assigned to the lane that represents the user role in the context of the respective state. In turn, lifecycle states without user assignments (see states Accepted and Rejected in Fig. 1b) are executed by a software system. Modeling such behavior in terms of BPMN requires an additional lane representing a System-user role. Similar to user roles, this lane indicates that a (computer) system auto-executes the activities of this lane. Fig. 11 illustrates the transformation of object Application (cf. Fig. 1b) including user assignments and collapsed sub-processes.

Fig. 11: Example User Assignment Transformation
4 Evaluation
4.1 Proof-of-concept Prototype
We implemented a proof-of-concept prototype that realizes the presented transformations. It leverages standard data interchange formats for representing object-centric processes (i.e., JSON) and BPMN processes (i.e., XML) respectively. Generated BPMN process models can be imported to any tool capable of visualizing BPMN models (e.g., Signavio, bpmn.io or Camunda). The source code is available via GitHub².
² https://github.com/markopejc-git/Transforming-Object-Centric-Processes-into-BPMN
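At a very high level, the input/output formats of such a transformation can be illustrated as follows (a hedged sketch only; the JSON keys are assumptions for illustration and do not reflect the actual PHILharmonicFlows export format or the prototype's code):

```python
import json
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

# Illustrative object-centric model: object types with lifecycle states.
model = json.loads("""
{"objectTypes": [{"name": "Application", "states": ["Created", "Sent", "Checked"]}]}
""")

definitions = ET.Element(f"{{{BPMN_NS}}}definitions")
collaboration = ET.SubElement(definitions, f"{{{BPMN_NS}}}collaboration", id="Collab_1")

for obj in model["objectTypes"]:
    proc_id = f"Process_{obj['name']}"                    # TR1: one pool per object type
    ET.SubElement(collaboration, f"{{{BPMN_NS}}}participant",
                  id=f"Pool_{obj['name']}", name=obj["name"], processRef=proc_id)
    process = ET.SubElement(definitions, f"{{{BPMN_NS}}}process", id=proc_id)
    for state in obj["states"]:                           # TR2: one activity per state
        ET.SubElement(process, f"{{{BPMN_NS}}}task", id=f"Task_{state}", name=state)

# The resulting XML can be imported into any BPMN 2.0 viewer.
print(ET.tostring(definitions, encoding="unicode"))
```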
4.2 Case Studies
We conducted case studies in which we applied the transformation procedure to process models of three different real-world scenarios initially modeled using the PHILharmonicFlows framework. Each of these object-centric process models focuses on the complexity in different parts of the process model (e.g., objects, lifecycles, or coordination). Supplementary material on PHILharmonicFlows, the different object-centric process models, and the resulting BPMN process models is provided in a cloudstore³.
**Recruitment** The recruitment process served as a running example in this paper and stems from a long-term collaboration we have had with an ERP software provider. In total, the recruitment model comprises 4 object types, 2 user types, and 5 relations (see Fig. 1a). Corresponding lifecycle processes consist of 20 states and 43 steps (see Fig. 1b for the lifecycle of object Application), while the coordination includes 16 steps and 19 transitions (see Fig. 2). This coordination process has the highest complexity as it includes a plethora of coordination steps and various combinations of ports and transitions (cf. TR 6).
**PHoodle** The e-learning system PHoodle, an object- and process-centric information system we implemented with PHILharmonicFlows, includes 7 object types (e.g., Lecture, Exercise, or Submission), 2 user types, 11 relations, and corresponding lifecycle processes. The latter comprise 20 states and 52 steps. Moreover, a coordination process exists that consists of 6 steps and 9 transitions. This process has also been applied in a real-world deployment at Ulm University to organize the lecture, exercises, and exams of a course over one semester. In this real-world scenario, PHoodle managed 2 teaching employees, 5 Exercises, 6 Tutors, 14 Downloads, 51 Tutorials, 128 Students, and 487 Submissions.
**Diagnosis & Treatment** The Diagnosis & Treatment process deals with the admission, diagnosis, tests, treatment, and discharge of patients in a hospital scenario. The object-centric process model comprises 3 object types, 2 user types, and 4 relations. Their corresponding lifecycle processes contain 14 states and 22 steps. The coordination process consists of 10 steps and 10 transitions, and mainly focuses on the sequential coordination of objects.
4.3 Limitations
The presented approach faces several limitations:
1. The transformation requires the extended set of BPMN elements. On the one hand, the extended set of BPMN elements allows reducing the number of process model elements (i.e., model size and complexity). On the other hand, the resulting models might be harder to comprehend, especially for inexperienced modelers, and the syntax of the extended BPMN elements might not fully match that of the object-centric model.
2. Pools generated for each object according to TR1 do not fully conform with the traditional representation of pools (i.e., participants) in BPMN. However, in the context of object-centric processes, a variety of (interacting) objects participate in the process rather than traditional participants (i.e., organizations or roles). In future work, we will extend the approach to further address this issue.
³ https://cloudstore.uni-ulm.de/s/d9Mq3kBHbyKNa
3. The multi-instance symbol in BPMN assumes that the number of instances is known beforehand, which is not always the case in object-centric processes. In the running example, the number of Application objects associated with a job offer might be arbitrary. The latter corresponds to unbounded interleaving behavior, for which BPMN does not have a special symbol.
4. The execution semantics of a lifecycle process state and of the ad-hoc sub-process generated by TR3 are not identical. An ad-hoc sub-process in BPMN specifies that the performer determines the sequence and number of activity executions. The sequence in which the steps of a lifecycle process are organized specifies the guidance provided at runtime; however, a lifecycle process state may be completed as soon as all required values are available. We incorporated the execution guidance of lifecycles through sequence flows within the ad-hoc sub-process rather than losing this information during the transformation.
4.4 Benefits
The presented approach enables the exploration of object-centric process for modelers unfamiliar with object-centric processes. The representation uses an established modeling language (i.e., BPMN 2.0), facilitating the understanding of the fundamental differences between the two process management paradigms. Especially, this is beneficial in education or training, during which understanding the differences of the modeling paradigms is of utmost importance. In a nutshell, the presented transformation might increase understandability of object-centric processes in general. Furthermore, the representation of object-centric processes in terms of BPMN enables applying a plethora of existing approaches for process management to object-centric processes. Consequently, this further strengthens the use of the object-centric process paradigm. This includes approaches towards modeling, analyzing, and evolving & optimizing business processes [Du18].
5 Related Work
The presented work is related to process model transformations, especially between activity-centric and data-centric process representations. Despite their fundamental differences, activity- and data-centric processes are not mutually exclusive [Re12] and approaches often combine existing principles. In Case Handling [vdAWG05], for example, activities are completed upon provision of data and do not constitute atomic work units, i.e., process execution is data-driven. The work presented in [Me13] enriches activities with SQL-enabling data support.
[KLW08] formally defines activity-centric process models and presents an approach for transforming activity- to information-centric models. UML state charts are used for representing lifecycle processes. These state charts, however, have limited capabilities regarding the communication between tasks and objects, i.e., coordination aspects are neglected. The work presented in [EVG16] overcomes these issues by enabling parallelism as well as event communication. However, processes are represented in terms of UML state charts, and no explicit support for BPMN is provided. As opposed to [KLW08, EVG16], the presented approach also considers the transformation of object coordination constraints using message events in the resulting BPMN model.
A similar approach is presented in [MW14]. It allows transforming artifact- to activity-centric models and vice versa, but requires intermediate steps (e.g., a synchronized object lifecycle), potentially increasing complexity. Besides, no implementation is provided. Similar to [KLW08], the approach lacks the integration of coordination constraints.
[SMW07] presents an approach for enabling users to define semantic correspondences between different syntax elements using mapping operators, and to execute the transformation between EPC and UML process models. Both EPC and UML process models represent activity-centric process models. In contrast, our approach is able to automatically derive the BPMN model without mapping operators specified by users.
[Es13] presents a framework for representing artifact-centric process models in UML. It considers elements of artifact-centric process models (i.e., business artifacts, lifecycles, services, and associations), similar to our approach’s consideration of different granularity levels (i.e., data models, object lifecycles, and coordination processes). However, [Es13] focuses on representing artifact-centric elements in UML, rather than performing a complete transformation of process models. Consequently, specific UML methods (e.g., state machines and activity diagrams) are required for representing artifact-centric elements. In contrast, our approach provides a complete transformation of all levels of granularity, integrating them into one BPMN 2.0 process model.
[Out06] presents an approach for transforming BPMN and UML processes to BPEL-based workflows. However, no data-centric approaches were considered. In contrast, the presented approach tackles the transformation of data-centric processes to BPMN. Finally, [KRG07] presents an approach for generating compliant business process models from a set of reference object lifecycles. Synchronization points between lifecycles need to be manually identified first. In contrast, our approach derives coordination constructs based on the information available from object-centric coordination processes (i.e., coordination steps).
6 Summary and Outlook
This paper presents an approach for transforming object-centric processes into activity-centric processes with the latter being modeled in terms of BPMN. In particular, we want to bridge the gap between the two paradigms. In detail, we introduced 7 transformation rules that cover different aspects of object-centric processes (i.e., data model, lifecycle processes, and coordination process) and enable their mapping to activity-centric process models (i.e., BPMN 2.0). The technical feasibility of the automated transformation procedure was demonstrated by a proof-of-concept prototype. Furthermore, we applied the transformation to three processes of varying complexity and from different domains. In future work, we will further enhance the transformation by examining the comprehensibility of the generated BPMN 2.0 models, investigate different labels of object-centric processes, and simplify the resulting model. Furthermore, we will enable the reverse transformation (i.e., mapping BPMN 2.0 models to object-centric models) by inverting the transformation rules.
Bibliography
Chapter 7 – XML Data Modeling
Outline
Overview
1. **Object-Relational Database Concepts**
1. User-defined Data Types and Typed Tables
2. Object-relational Views and Collection Types
3. User-defined Routines and Object Behavior
4. Application Programs and Object-relational Capabilities
2. **Online Analytic Processing**
5. Data Analysis in SQL
6. Windowed Tables and Window Functions in SQL
3. **XML**
7. **XML Data Modeling**
8. XQuery
9. SQL/XML
4. **More Developments** (if there is time left)
- temporal data models, data streams, databases and uncertainty, ...
XML Origin and Usages
- Defined by the WWW Consortium (W3C)
- Originally intended as a document markup language, not a database language
- Documents have tags giving extra information about sections of the document
- For example:
- `<title> XML </title>`
- `<slide> XML Origin and Usages </slide>`
- Derived from SGML (Standard Generalized Markup Language)
- standard for document description
- enables document interchange in publishing, office, engineering, ...
- main idea: separate form from structure
- XML is simpler to use than SGML
- roughly 20% complexity achieves 80% functionality
- XML (like SGML) is a meta-language
- a language for the definition of languages (vocabularies)
- examples
- SGML -> HTML
- XML -> XHTML
XML – Data and Metadata
- XML documents are to some extent self-describing
- Tags (markup) represent metadata about specific parts/data items of a document
- metadata provided at the 'instance'-level
- Example
```xml
<bank>
<account>
<account-number> A-101 </account-number>
<branch-name> Downtown </branch-name>
<balance> 500 </balance>
</account>
<depositor>
<account-number> A-101 </account-number>
<customer-name> Johnson </customer-name>
</depositor>
</bank>
```
- Schema provides 'global' metadata (optional!)
- defines the vocabulary, rules for document structure, permitted or default content
- associated with/referenced by the document
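Because the markup is self-describing, generic tools can read such a document even without a schema. A small Python illustration (not part of the slides) that parses the bank document above:

```python
import xml.etree.ElementTree as ET

doc = """
<bank>
  <account>
    <account-number> A-101 </account-number>
    <branch-name> Downtown </branch-name>
    <balance> 500 </balance>
  </account>
  <depositor>
    <account-number> A-101 </account-number>
    <customer-name> Johnson </customer-name>
  </depositor>
</bank>
"""

bank = ET.fromstring(doc)
# The tags (metadata) say what each piece of character data means.
for account in bank.findall("account"):
    number = account.findtext("account-number").strip()
    balance = account.findtext("balance").strip()
    print(number, balance)        # -> A-101 500
```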
Forces Driving XML
- Document Processing
- Goal: use document in various, evolving systems
- structure - content - layout
- grammar: markup vocabulary for mixed content
- Data Bases and Data Exchange
- Goal: data independence
- structured, typed data - schema-driven - integrity constraints
- Semi-structured Data and Information Integration
- Goal: integrate autonomous data sources
- data source schema not known in detail - schemata are dynamic
- schema might be revealed through analysis only after data processing
XML Documents
- XML documents are text (unicode)
- markup (always starts with '<' or '&')
- start/end tags
- references (e.g., `<`, `&`, ...)
- declarations, comments, processing instructions, ...
- data (character data)
- characters '<' and '&' need to be indicated using references (e.g., `<`) or using the character code
- alternative syntax: "<![CDATA[ (a<b)&(c<d) ]]>"
- XML documents are well-formed
- logical structure:
- optional XML declaration (XML version, encoding, ...)
- (optional) schema (DTD)
- single root element (possibly nested)
- comments
- processing instructions
- example: reference to a stylesheet, used by a browser
- additional requirements on the structure and content of elements
XML Documents: Elements
- **Tag**: label for a section of data
- **Element**:
- start tag `<tagname>`
- content: text and/or nested element(s)
- may be empty, alternative syntax: `<tagname/>`
- end tag `</tagname>`
- Elements must be properly nested for the document to be **well-formed**
- Formally: every start tag must have a unique matching end tag, that is in the context of the same parent element.
- Mixture of text with sub-elements (mixed content) is legal in XML
- Example:
```xml
<account>
This account is seldom used any more.
<account-number> A-102 </account-number>
<branch-name> Perryridge </branch-name>
<balance>400 </balance>
</account>
```
- Useful for document markup, but discouraged for data representation
- Element content (i.e., text and nested elements) is ordered!
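Well-formedness (every start tag closed by a matching end tag, proper nesting) can be checked with any conforming XML parser. Below is a minimal sketch using Python's standard library; the two tiny documents are illustrative assumptions, not examples from the slides.
```python
# Minimal well-formedness check: a conforming parser rejects documents
# that are not well-formed.
import xml.etree.ElementTree as ET

good = "<account><balance>500</balance></account>"
bad = "<account><balance>500</account></balance>"   # improperly nested tags

for doc in (good, bad):
    try:
        ET.fromstring(doc)
        print("well-formed:", doc)
    except ET.ParseError as err:
        print("not well-formed:", err)
```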
XML Element Structure
- Arbitrary levels of nesting
- Same element tag can appear multiple times
- at the same level
```xml
<bank-1>
<customer>
<customer-name> Hayes </customer-name>
<account>
<account-number> A-102 </account-number>
<balance>400 </balance>
</account>
<account>
... </account>
</customer>
</bank-1>
```
- at different levels
```xml
<product>
<productName> ... </productName>
<part>
<id> ... </id>
<part> ... </part>
...
</part>
...
</product>
```
XML Documents: Attributes
- **Attributes**: can be used to further describe elements
- attributes are specified by `name="value"` pairs inside the starting tag of an element
- value is a text string
- no further structuring of attribute values
- attributes are not ordered
- Example:
```xml
<account acct-type = "checking">
<account-number> A-102 </account-number>
<branch-name> Perryridge </branch-name>
<balance> 400 </balance>
</account>
```
- Well-formed documents:
- attribute names must be unique within the element
- attribute values are enclosed in single or double quotation marks
Attributes vs. Subelements
- Distinction between subelement and attribute
- In the context of documents, attributes are part of markup, while subelement contents are part of the basic document content
- markup used to interpret the content, influence layout for printing, etc.
- In the context of data representation, the difference is unclear and may be confusing
- Same information can be represented in two ways
- `<account account-number = "A-101"> ... </account>`
- `<account> <account-number>A-101</account-number> </account>`
- Limitations of attributes
- single occurrence within element
- no further attribute value structure, no ordering
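To make the comparison of the two representations concrete, the following minimal sketch (using Python's standard xml.etree.ElementTree parser, an assumed tool choice that the slides do not prescribe) reads the same account number once from an attribute and once from a subelement.
```python
import xml.etree.ElementTree as ET

as_attribute = ET.fromstring('<account account-number="A-101"/>')
as_subelement = ET.fromstring(
    '<account><account-number>A-101</account-number></account>')

print(as_attribute.get("account-number"))        # A-101 (attribute access)
print(as_subelement.findtext("account-number"))  # A-101 (subelement access)
```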
**Namespaces**
- A single XML document may contain elements and attributes defined by different vocabularies
- Motivated by modularization considerations, for example
- Name collisions have to be avoided
- Example:
- A **Book** vocabulary contains a Title element for the title of a book
- A **Person** vocabulary contains a Title element for an honorary title of a person
- A **BookOrder** vocabulary uses both vocabularies
- Namespaces specify how to construct universally unique names
**Namespaces (cont.)**
- Namespace is a collection of names identified by a URI
- Namespaces are declared via a set of special attributes
- These attributes are prefixed by xmlns
- Example:
```xml
<BookOrder xmlns:Customer="http://mySite.com/Person"
xmlns:Item="http://yourSite.com/Book">
...
</BookOrder>
```
- Namespace applies to the element where it is declared, and all elements within its content
- unless overridden
- Elements/attributes from a particular namespace are prefixed by the name assigned to the namespace in the corresponding declaration of the using XML document
```xml
...Customer:Title='Dr'...
...Item:Title='Introduction to XML'...
```
- Default namespace declaration for fixing the namespace of unqualified names
- Example:
```xml
<BookOrder xmlns="http://mySite.com/Person"
xmlns:Item="http://yourSite.com/Book">
```
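As a hedged illustration of how a namespace-aware parser expands prefixed names into universally unique names, the sketch below uses Python's standard xml.etree.ElementTree; for simplicity the two Title names from the example are used here as elements rather than attributes (an assumption for the sake of the sketch).
```python
import xml.etree.ElementTree as ET

doc = """<BookOrder xmlns:Customer="http://mySite.com/Person"
                    xmlns:Item="http://yourSite.com/Book">
  <Customer:Title>Dr</Customer:Title>
  <Item:Title>Introduction to XML</Item:Title>
</BookOrder>"""

root = ET.fromstring(doc)
for child in root:
    # the parser expands each prefix into "{namespace-uri}local-name"
    print(child.tag, "->", child.text)
# {http://mySite.com/Person}Title -> Dr
# {http://yourSite.com/Book}Title -> Introduction to XML
```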
XML Document Schema
- XML documents may optionally have a schema
- standardized data exchange, ...
- Schema restricts the structures and data types allowed in a document
- document is **valid**, if it follows the restrictions defined by the schema
- Two important mechanisms for specifying an XML schema
- Document Type Definition (DTD)
- XML Schema
Document Type Definition - DTD
- Original mechanism to specify type and structure of an XML document
- What elements can occur
- What attributes can/must an element have
- What subelements can/must occur inside each element, and how many times.
- DTD does not constrain data types
- All values represented as strings in XML
- Special DTD syntax
- `<!ELEMENT element (subelements-specification) >`
- `<!ATTLIST element (attributes) >`
- DTD is
- contained in the document, or
- stored separately, referenced in the document
- DTD clause in XML document specifies the root element type, supplies or references the DTD
- `<!DOCTYPE bank [ ... ]>`
Element Specification in DTD
- Subelements can be specified as
- names of elements, or
- #PCDATA (parsed character data), i.e., character strings
- EMPTY (no subelements) or ANY (anything defined in the DTD can be a subelement)
- Structure is defined using regular expressions
- sequence (subel, subel, ...), alternative (subel | subel | ...)
- number of occurrences
- "?" - 0 or 1 occurrence
- "+" - 1 or more occurrences
- "*" - 0 or more occurrences
- Example
```xml
<!ELEMENT depositor (customer-name, account-number)>
<!ELEMENT customer-name (#PCDATA)>
<!ELEMENT account-number (#PCDATA)>
<!ELEMENT bank ( (account | customer | depositor)+)>
```
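As an optional illustration (not part of the slides), the sketch below validates a small document against the depositor declarations from the example above. It uses the third-party lxml package as one possible DTD-aware tool; the package choice and the tiny document are assumptions.
```python
from io import StringIO
from lxml import etree

# DTD fragment repeating the depositor declarations from the example above
dtd = etree.DTD(StringIO("""
<!ELEMENT depositor (customer-name, account-number)>
<!ELEMENT customer-name (#PCDATA)>
<!ELEMENT account-number (#PCDATA)>
"""))

doc = etree.fromstring(
    "<depositor><customer-name>Johnson</customer-name>"
    "<account-number>A-101</account-number></depositor>")

print(dtd.validate(doc))   # True: the document follows the DTD structure
```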
Attribute Specification in DTD
- Attribute list of an element defines for each attribute
- name
- type of attribute (as relevant for data modeling)
- character data (CDATA)
- identifiers (ID) or references to an identifier attribute (IDREF, IDREFS)
- see next chart for details
- XML name tokens (NMTOKEN, NMTOKENS)
- enumeration type
- whether
- mandatory (#REQUIRED)
- default value (value)
- optional without default (#IMPLIED), or
- the value, if present, must not differ from the given one (#FIXED value)
- Examples
```xml
<!ATTLIST account acct-type CDATA "checking">
<!ATTLIST customer customer-id ID #REQUIRED
accounts IDREFS #REQUIRED >
```
IDs and IDREFs
- An element can have at most one attribute of type ID
- The ID attribute value of each element in an XML document must be distinct
- ID attribute (value) is an object identifier
- An attribute of type IDREF must contain the ID value of an element in the same document
- An attribute of type IDREFS contains a set of (0 or more) ID values; each value must be the ID value of an element in the same document
- IDs and IDREFs are untyped, unfortunately
- Example below: The owners attribute of an account may contain a reference to another account, which is meaningless; owners attribute should ideally be constrained to refer to customer elements
Example: Extended Bank DTD
- Bank DTD with ID and IDREF attribute types
```xml
<!DOCTYPE bank [
<!ELEMENT account (branch-name, balance)>
<!ATTLIST account
account-number ID #REQUIRED
owners IDREFS #REQUIRED>
<!ELEMENT customer (customer-name, customer-street, customer-city)>
<!ATTLIST customer
customer-id ID #REQUIRED
accounts IDREFS #REQUIRED>
... declarations for bank, branch-name, balance, customer-name, customer-street and customer-city
]>
```
XML data with ID and IDREF attributes
```xml
<bank>
<account account-number="A-401" owners="C100 C102">
<branch-name> Downtown </branch-name>
<balance>500 </balance>
</account>
<customer customer-id="C100" accounts="A-401">
<customer-name> Joe </customer-name>
<customer-street> Monroe </customer-street>
<customer-city> Madison </customer-city>
</customer>
<customer customer-id="C102" accounts="A-401 A-402">
<customer-name> Mary </customer-name>
<customer-street> Erin </customer-street>
<customer-city> Newark </customer-city>
</customer>
</bank>
```
Schema Definition with XML Schema
- XML Schema is closer to the general understanding of a (database) schema
- XML Schema (unlike DTD) supports
- Typing of values
- E.g., integer, string, etc.
- Constraints on min/max values
- Typed references
- User defined types
- Schema specification in XML syntax
- Schema is a well-formed and valid XML document
- Integration with namespaces
- Many more features
- List types, uniqueness and foreign key constraints, inheritance ..
- BUT: significantly more complicated than DTDs
Types in XML Schema
- Simple vs. complex types
- Simple type
- no further structure, does not contain child elements or attributes
- can be used as a type for both attribute values and element content
- facets of simple types provide additional characteristics
- e.g., pattern, length
- Complex type
- consists of attribute declarations (optional) and a content model
- content model defines possible child elements, content based on simple types, mixed content
- Primitive vs. derived types
- Primitive types
- subset of the simple types that are not defined in terms of other types
- Examples: string, decimal
- Derived types
- defined in terms of other (derived or primitive) base types
- different derivation mechanisms
- by restriction – derived type permits only subset of value or literal space of the base type
- by list, union – similar to composite types
- by extension – similar to subtyping
- Built-in vs. user-derived types
XML Schema Built-in Types
- Integer is derived from decimal by restriction:
- decimal.fractionDigits = 0
- decimal point in the lexical representation is disallowed
Derivation By Restriction
- Based on the following facets
- upper/lower bounds for value domain
- minExclusive, minInclusive
- maxExclusive, maxInclusive
- length for strings, names, URIs or lists
- length
- maxLength
- minLength
- length restrictions for decimal
- totalDigits
- fractionDigits
- value enumeration
- enumeration
- regular expression limiting the lexical space
- pattern
- Examples
- `<xs:simpleType name="MoneyAmnt">`<xs:restriction base="xs:decimal">
<xs:totalDigits value="10"/>
<xs:_fractionDigits value="2"/>
</xs:restriction>
</xs:simpleType>
- `<xs:simpleType name="Phone">`
<xs:restriction base="xs:string">
<xs:pattern value="0[1-9][0-9\d]+\-[1-9][0-9\d]+"/>
</xs:restriction>
</xs:simpleType>
Complex Types
- Needed for modeling attributes and content model of elements
- defines the type of the element, but not the element tag name
- Simple content: no child elements, extends/restricts a simple type for element content
- `<xs:complexType name="Money">`
`<xs:simpleContent>`
`<xs:extension base="MoneyAmnt">`
`<xs:attribute name="currency" type="xs:string" use="required"/>
</xs:extension>
</xs:simpleContent>`
</xs:complexType>
Complex Types (cont.)
- Complex content
- three types of content models (may be nested arbitrarily)
- sequence - subelements have to occur in the specified order
- choice - only one of the subelements may occur
- all - each subelement can appear at most once, in arbitrary order
```xml
<xs:complexType name="AccountT">
<xs:sequence>
<xs:element name="account-number" type="xs:string"/>
<xs:element name="branch-name" type="xs:string"/>
<xs:element name="balance" type="Money"/>
</xs:sequence>
</xs:complexType>
```
- Specifying the number of occurrences
- minOccurs, maxOccurs attributes can be used in element and content model definitions
```xml
<xs:element name="account" type="AccountT" minOccurs="0" maxOccurs="10"/>
<xs:choice minOccurs="2" maxOccurs="unbounded"> ...
</xs:choice>
```
Restricting And Extending Complex Types
- Derivation by restriction
- derived type has the same content model as the base type in terms of valid attributes, elements
- restrictions possible by
- limiting the number of occurrences by choosing a larger min or smaller max value
- supplying a default or fixed attribute value
- remove an optional component
- replacing a simple type with a derivation of the simple type
- Derivation by extension
- new attributes and elements can be added to the type definition inherited from the base type
```xml
<xs:complexType name="SavingsAccountT">
<xs:complexContent>
<xs:extension base="AccountT">
<xs:sequence>
<xs:element name="interest-rate" type="xs:decimal"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
</xs:complexType>
```
Derived Types and "Substitutability"
- Derived types can be explicitly used in schema definitions
- At the document (i.e., "instance") level
- an instance of a derived type may appear instead of an instance of its base type
- derivation by extension or by restriction
- may be explicitly blocked for a base type in the schema definition
- the derived type has to be indicated using xsi:type
- example (assuming that element account has type AccountT):
```xml
<account xsi:type="SavingsAccountT">
<account-number>1234</account-number>
<branch-name>Kaiserslautern</branch-name>
<balance currency="Euro">3245.78</balance>
<interest-rate>3.5</interest-rate>
</account>
```
- the element name is not affected, only the content
- Substitution groups
- extends the concept to the element level
- a named head element may be substituted by any element in the substitution group
- group elements have to be derived from head element
- Elements and types may be declared as "abstract"
Namespaces and XML Schema
- XML-Schema elements and data types are imported from the XML-Schema namespace http://www.w3.org/2001/XMLSchema
- xsd is generally used as a prefix
- The vocabulary defined in an XML Schema file belongs to a target namespace
- declared using the `targetNamespace` attribute
- declaring a target namespace is optional
- if none is provided, the vocabulary does not belong to a namespace
- required for creating XML schemas for validating (pre-namespace) XML1.0 documents
- XML document using an XML schema
- declares namespace, refers to the target namespace of the underlying schema
- can provide additional hints where an XML schema (xsd) file for the namespace is located
- schemaLocation attribute
XML Schema Version of Bank DTD
```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
targetNamespace="http://www.banks.org"
xmlns="http://www.banks.org">
<xsd:element name="bank" type="BankType"/>
<xsd:element name="account">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="account-number" type="xsd:string"/>
<xsd:element name="branch-name" type="xsd:string"/>
<xsd:element name="balance" type="xsd:decimal"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
.... definitions of customer and depositor ....
<xsd:complexType name="BankType">
<xsd:choice minOccurs="1" maxOccurs="unbounded">
<xsd:element ref="account"/>
<xsd:element ref="customer"/>
<xsd:element ref="depositor"/>
</xsd:choice>
</xsd:complexType>
</xsd:schema>
```
XML Document Using Bank Schema
```xml
<bank xmlns="http://www.banks.org"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.banks.org Bank.xsd">
<account>
<account-number> ... </account-number>
<branch-name> ... </branch-name>
<balance> ... </balance>
</account>
...
</bank>
```
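As a hedged illustration of schema validation, the sketch below uses the third-party lxml package and assumes the schema above has been saved as Bank.xsd and the instance document as bank-data.xml; neither the tool nor the file names are prescribed by the slides.
```python
from lxml import etree

# Bank.xsd and bank-data.xml are assumed file names for the schema and
# instance document shown above.
schema = etree.XMLSchema(etree.parse("Bank.xsd"))
document = etree.parse("bank-data.xml")

if schema.validate(document):
    print("document is valid with respect to Bank.xsd")
else:
    for error in schema.error_log:
        print(error.message)
```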
Assertions in XML-Schema
- Uniqueness: UNIQUE-Element, KEY-Element
- forces uniqueness of attribute or element values
- <field> element(s)
- can be applied to/declared for specific parts of the XML document
- <selector> element
- Example: within a bank element, all accounts should have a unique account number
- <xs:element name="bank" type="bankType">
<xs:unique name="uniqueAcctNo">
<xs:selector xpath="/account"/>
<xs:field xpath="account-number"/>
</xs:unique>
</xs:element>
- Some remarks
- NULL value semantics: nillable at the schema level, nil in the document
- <key> equivalent to <unique> and nillable="false"
- composite keys/unique elements
Mapping ER-Model -> XML Schema
- Mapping Entities
- 1:1 mapping to XML elements
- use <key> to represent ER key attributes
- <element name="ABT">
<complexType>
<attribute name="anr" type="string" />
<attribute name="street" type="string" />
<attribute name="name" type="string"/>
</complexType>
</element>
- <key name="abt_pk">
<selector xpath="//ABT"/>
<field xpath="@anr"/>
</key>
Mapping 1:N Relationships
- Mapping alternative: nesting
- using local element definition
```xml
<element name="ABT">
<complexType>
<sequence>
<element name="ANG">
<complexType>
<attribute name="street" type="string"/>
<attribute name="name" type="string"/>
<attribute name="spnr" type="string"/>
<attribute name="abtid" type="string"/>
</complexType>
</element>
</sequence>
<attribute name="street" type="string"/>
<attribute name="name" type="string"/>
</complexType>
</element>
```
- using global element definition
```xml
<element name="ABT">...
<element name="ANG">...
<complexType>
<attribute name="street" type="string"/>
<attribute name="name" type="string"/>
</complexType>
</element>
...
```
Primary/Foreign Keys
- Problem
- nesting alone is not sufficient for modeling a 1:n relationship
- element identity is required to avoid duplicate entries
- Foreign Keys
- guarantee referential integrity: `<key>` / `<keyref>` elements
```xml
<element name="ABT">
<complexType>
<sequence>
<element name="ANG">
<complexType>
<attribute name="spnr" type="string"/>
<attribute name="name" type="string"/>
<attribute name="office" type="string"/>
<attribute name="abtid" type="string"/>
</complexType>
</element>
</sequence>
<attribute name="street" type="string"/>
<attribute name="name" type="string"/>
</complexType>
</element>
```
```xml
<key name="abt_pk">
<selector xpath="./ABT" />
<field xpath="@anr" />
</key>
```
```xml
<key name="ang_uniq">
<selector xpath="./ABT/ANG" />
<field xpath="@spnr" />
</unique>
```
```xml
<keyref name="abt_fk" refer="abt_pk">
<selector xpath="./ABT/ANG" />
<field xpath="@abtid" />
</keyref>
```
Primary/Foreign Keys (cont.)
- Advantages over ID/IDREF
- based on equality of data types
- composite keys
- locality, restricting scope to parts of the XML document
- Mapping of N:M – relationships
- use <key/> <keyref/> elements
- flat modeling plus "pointers"
- addition of helper element similar to mapping to relational model
```xml
<element name="PROJ_ANG">
<complexType>
<attribute name="pnr" type="string"/>
<attribute name="jnr" type="string"/>
</complexType>
</element>
```
Summary
- XML introduction and overview
- document structure – elements, attributes
- namespaces
- XML schema support
- document type definitions (DTD)
- document structure, but no support for data types, namespaces
- XML Schema specification
- powerful: structure, data types, complex types, type refinement, constraints, ...
- complex!
- Mapping ER -> XML
- 1:1, 1:n, n:m relationships
- primary/foreign keys
Planning
- Introduction
- Planning vs. Problem-Solving
- Representation in Planning Systems
- Situation Calculus
- The Frame Problem
- STRIPS representation language
- Blocks World
- Planning with State-Space Search
- Progression Algorithms
- Regression Algorithms
- Planning with Plan-Space Search
- Partial-Order Planning
- The Plan Graph and GraphPlan
- SatPlan
Material from Russell & Norvig, chapters 10.3. and 11
Slides based on slides by Russell/Norvig, Lise Getoor and Tom Lenaerts
Sussman Anomaly
- Famous example that shows that subgoals are not independent
- **goal:** on(A, B), on(B, C)
- **achieve on(B, C) first:**
- shortest solution will just put B on top of C → subgoal has to be undone in order to complete the goal
- **achieve on(A, B) first:**
- shortest solution will not put B on C → subgoal has to be undone later in order to complete the goal
Partial-Order Planning (POP)
- Progression and regression planning are totally ordered plan search forms
- this means that in all searched plans the sequence of actions is completely ordered
- Decisions must be made on how to sequence actions in all the subproblems
→ They cannot take advantage of problem decomposition
- If actions do not interfere with each other, they could be made in any order (or in parallel) → partially ordered plan
- if a plan for each subgoal only makes minimal commitments to orders
- only orders those actions that must be ordered for a successful completion of the plan
- it can re-order steps later on (when subplans are combined)
- Least commitment strategy:
- Delay choice during search
Shoe Example
Initial State: nil
Goal State: RightShoeOn & LeftShoeOn
Action( LeftSock,
PRECOND: -
ADD: LeftSockOn
DELETE: -
)
Action( RightSock,
PRECOND: -
ADD: RightSockOn
DELETE: -
)
Action( LeftShoe,
PRECOND: LeftSockOn
ADD: LeftShoeOn
DELETE: -
)
Action( RightShoe,
PRECOND: RightSockOn
ADD: RightShoeOn
DELETE: -
)
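A minimal sketch of this STRIPS-style action representation in Python follows; the field names, the use of plain string literals and the tiny example run are illustrative assumptions, not a fixed formalism.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    precond: frozenset = frozenset()   # literals that must hold before
    add: frozenset = frozenset()       # literals made true (ADD list)
    delete: frozenset = frozenset()    # literals made false (DELETE list)

    def applicable(self, state):
        return self.precond <= state

    def apply(self, state):
        return (state - self.delete) | self.add

left_sock = Action("LeftSock", add=frozenset({"LeftSockOn"}))
left_shoe = Action("LeftShoe", precond=frozenset({"LeftSockOn"}),
                   add=frozenset({"LeftShoeOn"}))

state = frozenset()                       # initial state: nil
for a in (left_sock, left_shoe):
    if a.applicable(state):
        state = a.apply(state)
print(sorted(state))                      # ['LeftShoeOn', 'LeftSockOn']
```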
Shoe Example
- **Total-Order Planner**
- all actions are completely ordered
- **Partial-Order Planner**
- may leave the order of some actions undetermined
- any order is valid
State-Space vs. Plan-Space Search
State-Space Planning
- Search goes through possible states ($S_0$, $S_1$, $S_2$, ...)
- a search node is a state, i.e. a set of formulas; a transition is the application of a STRIPS operator
Plan-Space Planning
- Search goes through possible plans
- a search node is an incomplete plan (a set of plan components); a transition is a plan transformation operator
POP as a Search Problem
- A solution can be found by a search through Plan-Space:
- States are (mostly unfinished) plans
Each plan has 4 components:
- A set of actions (steps of the plan)
- A set of ordering constraints: \( A < B \) (\( A \) before \( B \))
- Cycles represent contradictions.
- A set of causal links \( A \rightarrow p \rightarrow B \) (\( A \) adds \( p \) for \( B \))
- The plan may not be extended by adding a new action \( C \) that conflicts with the causal link.
- An action \( C \) conflicts with causal link \( A \rightarrow p \rightarrow B \)
- if the effect of \( C \) is \( \neg p \) and if \( C \) could come after \( A \) and before \( B \)
- A set of open preconditions
- Preconditions that are not achieved by action in the plan
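The four components listed above can be captured directly as data. The following is a minimal sketch in Python; the field names and the string-based encoding of literals are illustrative assumptions. It includes the conflict test for causal links described above.
```python
from dataclasses import dataclass, field

@dataclass
class PartialPlan:
    actions: set = field(default_factory=set)        # names of the plan steps
    orderings: set = field(default_factory=set)      # pairs (A, B) meaning A < B
    causal_links: set = field(default_factory=set)   # triples (A, p, B): A achieves p for B
    open_preconds: set = field(default_factory=set)  # pairs (p, B) not yet achieved

    def threatened_links(self, deletes):
        """Causal links A --p--> B whose protected condition p would be
        negated (deleted) by some new action, i.e. the conflict case above."""
        return {(a, p, b) for (a, p, b) in self.causal_links if p in deletes}

plan = PartialPlan(
    actions={"Start", "RightSock", "RightShoe", "Finish"},
    orderings={("Start", "RightSock"), ("RightSock", "RightShoe"),
               ("RightShoe", "Finish")},
    causal_links={("RightSock", "RightSockOn", "RightShoe")},
    open_preconds=set(),
)
print(plan.threatened_links(deletes={"RightSockOn"}))
# {('RightSock', 'RightSockOn', 'RightShoe')}
```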
Example of Final Plan
- **Actions** = `{RightSock, RightShoe, LeftSock, LeftShoe, Start, Finish}
- **Orderings** =
- `{RightSock < RightShoe; LeftSock < LeftShoe}
- **Causal Links** =
- `{RightSock → RightSockOn → RightShoe, LeftSock → LeftSockOn → LeftShoe, RightShoe → RightShoeOn → Finish, LeftShoe → LeftShoeOn → Finish}
- **Open preconditions** = `{}`
Search through Plan-Space
- **Initial State** (empty plan):
- contains only virtual **Start** and **Finish** actions
- ordering constraint **Start** < **Finish**
- no causal links
- all preconditions in **Finish** are open
- these are the original goal
- **Successor Function** (refining the plan):
generates all consistent successor states
- picks one open precondition $p$ on an action $B$
- generates one successor plan for every possible *consistent* way of choosing action that achieves $p$
- a plan is *consistent* iff
- there are *no cycles* in the ordering constraints
- *no conflicts* with the causal links
- **Goal test** (final plan):
- A consistent plan with no open preconditions is a solution.
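The consistency test on the ordering constraints amounts to cycle detection. A minimal sketch using Python's standard graphlib module (available from Python 3.9) is shown below; the encoding of constraints as pairs is an illustrative assumption.
```python
from graphlib import TopologicalSorter, CycleError

def orderings_consistent(orderings):
    """orderings: set of pairs (A, B) meaning A must come before B."""
    predecessors = {}
    for a, b in orderings:
        predecessors.setdefault(b, set()).add(a)
    try:
        TopologicalSorter(predecessors).prepare()   # raises on a cycle
        return True
    except CycleError:
        return False

print(orderings_consistent({("Start", "A"), ("A", "Finish")}))     # True
print(orderings_consistent({("A", "B"), ("B", "C"), ("C", "A")}))  # False
```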
Subroutines
- **Refining a plan** with action $A$, which achieves $p$ for $B$:
- add causal link $A \rightarrow p \rightarrow B$
- add the ordering constraint $A < B$
- add the ordering constraints **Start** < $A$ and $A$ < **Finish** to the plan (only if $A$ is new)
- resolve conflicts between
- new causal link $A \rightarrow p \rightarrow B$ and all existing actions
- new action $A$ and all existing causal links (only if $A$ is new)
- **Resolving a conflict** between a causal link $A \rightarrow p \rightarrow B$ and an action $C$
- we have a conflict if the effect of $C$ is $\neg p$ and $C$ could come after $A$ and before $B$
- resolved by adding the ordering constraints $C < A$ or $B < C$
- both refinements are added (two successor plans) if both are consistent
Search through Plan-Space
- **Operators** on partial plans
- Add an action to fulfill an open condition
- Add a causal link
- Order one step w.r.t another to remove possible conflicts
- **Search** gradually moves from incomplete/vague plans to complete/correct plans
- **Backtrack** if an open condition is unachievable or if a conflict is irresolvable
- pick the next condition to achieve at one of the previous choice points
- ordering of the conditions is irrelevant for completeness (the same plans will be found), but may be relevant for consistency
Executing Partially Ordered Plans
- Any particular order that is consistent with the ordering constraints is possible
- A partial order plan is executed by repeatedly choosing any of the possible next actions.
- This flexibility is a benefit in non-cooperative environments.
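Choosing the possible next actions one after another is simply a topological sort of the ordering constraints. A minimal sketch for the shoe plan, again using Python's standard graphlib module, follows; any order it produces is a valid execution.
```python
from graphlib import TopologicalSorter   # Python 3.9+

# ordering constraints of the shoe plan: node -> set of predecessors
orderings = {
    "RightShoe": {"RightSock"},   # RightSock < RightShoe
    "LeftShoe": {"LeftSock"},     # LeftSock  < LeftShoe
}
print(list(TopologicalSorter(orderings).static_order()))
# e.g. ['RightSock', 'LeftSock', 'RightShoe', 'LeftShoe'] -- any consistent order is fine
```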
Example: Spare Tire Problem
Initial State: \( \text{at(flat,axle), at(spare,trunk)} \)
Goal State: \( \text{at(spare,axle)} \)
Action( remove(spare,trunk),
PRECOND: \( \text{at(spare,trunk)} \)
ADD: \( \text{at(spare,ground)} \)
DELETE: \( \text{at(spare,trunk)} \)
)
Action( remove(flat,axle),
PRECOND: \( \text{at(flat,axle)} \)
ADD: \( \text{at(flat,ground)} \)
DELETE: \( \text{at(flat,axle)} \)
)
Action( putOn(spare,axle),
PRECOND: \( \text{at(spare,ground), not(at(flat,axle))} \)
ADD: \( \text{at(spare,axle)} \)
DELETE: \( \text{at(spare,ground)} \)
)
Here we need a not, which is not part of the original STRIPS language!
Example: Spare Tire Problem
- Initial plan:
- Action `start` has the current state as effects
- Action `finish` has the goal as preconditions
```
Start:   effects        At(Spare, Trunk), At(Flat, Axle)
Finish:  preconditions  At(Spare, Axle)
```
Example: Spare Tire Problem
- Action \texttt{putOn(spare,axle)} is the only action that achieves the goal \texttt{at(spare,axle)}
- the current plan is refined to one new plan:
- \texttt{putOn(spare,axle)} is added to the list of actions
- add the ordering constraints \texttt{start < putOn(spare,axle)} and \texttt{putOn(spare,axle) < finish}
- add causal link \texttt{putOn(spare,axle) \rightarrow at(spare,axle) \rightarrow finish}
- the preconditions of \texttt{putOn(spare,axle)} are now open
Example: Spare Tire Problem
- we select the next open precondition \( \text{at(spare,ground)} \) as a goal
- only \( \text{remove(spare,trunk)} \) can achieve this goal
- the current plan is refined to a new one as before, causal links are added
Example: Spare Tire Problem
- we select the next open precondition \( \text{not(at(flat,axle))} \) as a goal
- could be achieved with two actions
- leave-overnight
- remove(flat,axle)
- \( \rightarrow \) we have two successor plans
Example: Spare Tire Problem
Plan 1: leave-overnight
- leave-overnight is in conflict with the causal link
\( \text{remove(spare,trunk)} \rightarrow \text{at(spare,ground)} \rightarrow \text{putOn(spare,axle)} \)
- it cannot be ordered after putOn(spare,axle), because it achieves one of putOn's preconditions
- → it therefore has to be ordered before remove(spare,trunk)
- the ordering constraint leave-overnight < remove(spare,trunk) is added
Example: Spare Tire Problem
Plan 1: leave-overnight
- the condition $at(spare, trunk)$ has to be achieved next
- $start$ is the only action that can achieve this
- however, $start \rightarrow at(spare, trunk) \rightarrow remove(spare, trunk)$
is in conflict with leave-overnight
- this conflict cannot be resolved $\rightarrow$ backtracking
leave-overnight cannot be ordered before $start$, and is already ordered before $remove(spare, trunk)$
$\rightarrow$ irresolvable conflict
Example: Spare Tire Problem
Plan 2: `remove(\text{flat,axle})`
- achieves goal `\text{not(at(flat,axle))}`
- corresponding causal link and order relation are added
- `at(\text{flat,axle})` becomes open precondition
Example: Spare Tire Problem
- open precondition $\text{at}(\text{spare}, \text{trunk})$ is selected as goal
- action $\text{start}$ is added
- corresponding causal link and order relation are added
- open precondition $\text{at}(\text{flat}, \text{axle})$ is selected as goal
- action $\text{start}$ can achieve this and is already part of the plan
- corresponding causal link and order relation are added
- no more open preconditions remain
→ plan is completed
POP in First-Order Logic
- Operators may leave some variables unbound
**Example**
- Achieve goal \(\text{on}(a,b)\) with action \(\text{move}(a,\text{From},b)\)
- It remains unspecified from where block \(a\) should be moved (\(\text{PRECOND}: \text{on}(a,\text{From})\))
**Two approaches**
- Decide for one binding and backtrack later on (if necessary)
- Defer the choice for later (least commitment)
**Problems with least commitment:**
- e.g., an action that has \(\text{on}(a,\text{From})\) on its delete-list will only conflict with above if both are bound to the same variable
- can be resolved by introducing inequality constraint.
Heuristics for Plan-Space Planning
- Not as well understood as heuristics for state-space planning
- General heuristic: number of distinct open preconditions
- maybe minus those that match the initial state
- underestimates costs when several actions are needed to achieve a condition
- overestimates costs when multiple goals may be achieved with a single action
- Choosing a good precondition to refine has also a strong impact
- select open condition that can be satisfied in the fewest number of ways
- analogous to most-constrained variable heuristic from CSP
- Two important special cases:
- select a condition that cannot be achieved at all (early failure!)
- select deterministic conditions that can only be achieved in one way
Planning Graph
- A planning graph is a special structure used to
- achieve better heuristic estimates.
- directly extract a solution using GRAPHPLAN algorithm
- Consists of a sequence of levels (time steps in the plan)
- Level 0 is the initial state.
- Each level consists of a set of literals and a set of actions.
- Literals = all those that could be true at that time step
- depending on the actions executed at the preceding time step
- Actions = all those actions that could have their preconditions satisfied at that time step
- depending on which of the literals actually hold.
- Only a restricted subset of possible negative interactions among actions is recorded
- Planning graphs work only for propositional problems
- STRIPS and ADL can be propositionalized
Cake Example
- Initial state: \texttt{have(cake)}
- Goal state: \texttt{have(cake), eaten(cake)}
Action( \texttt{eat(cake)},
PRECOND: \texttt{have(cake)}
ADD: \texttt{eaten(cake)}
DELETE: \texttt{have(cake)} )
Action( \texttt{bake(cake)},
PRECOND: \texttt{not(have(cake))}
ADD: \texttt{have(cake)}
DELETE: \texttt{-} )
Persistence Actions (□)
- pseudo-actions for which the effect equals the precondition
- analogous to frame axioms
- are automatically added by the planner
Mutual exclusions (mutex)
- link actions or preconditions that are mutually exclusive (mutex)
Cake Example
- Start at level $S_0$, determine action level $A_0$ and next level $S_1$
- $A_0$ contains all actions whose preconditions are satisfied in the previous level $S_0$
- Connect preconditions and effects of these actions
- Inaction is represented by persistence actions
- Level $A_0$ contains the actions that could occur
- Conflicts between actions are represented by mutex links
**Cake Example**
- Per construction, Level $S_1$ contains all literals that could result from picking any subset of actions in $A_0$
- Conflicts between literals that can not occur together are represented by mutex links.
- $S_1$ defines multiple possible states and the mutex links are the constraints that hold in this set of states
- Continue until two consecutive levels are identical
- Or contain the same number of literals (explanation later)
Mutex Relations
- A mutex relation holds between **two actions** when:
- **Inconsistent effects:**
- one action negates the effect of another.
- **Interference:**
- one of the effects of one action is the negation of a precondition of the other.
- **Competing needs:**
- one of the preconditions of one action is mutually exclusive with the precondition of the other.
- A mutex relation holds between **two literals** when:
- **Inconsistent support:**
- If one is the negation of the other OR
- if each possible action pair that could achieve the literals is mutex
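A minimal sketch of the action-mutex test defined above follows; literals are plain strings and "-p" denotes the negation of p (this encoding, and the dict-based actions, are illustrative assumptions).
```python
def negate(lit):
    """String negation: '-p' is the negation of 'p'."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def actions_mutex(a, b, literal_mutexes=frozenset()):
    """Mutex test for two actions given as dicts with keys 'pre' and 'eff'
    (effects = add list plus negated delete list)."""
    inconsistent_effects = any(negate(e) in b["eff"] for e in a["eff"])
    interference = (any(negate(e) in b["pre"] for e in a["eff"]) or
                    any(negate(e) in a["pre"] for e in b["eff"]))
    competing_needs = any((p, q) in literal_mutexes or (q, p) in literal_mutexes
                          for p in a["pre"] for q in b["pre"])
    return inconsistent_effects or interference or competing_needs

eat = {"pre": {"have(cake)"}, "eff": {"eaten(cake)", "-have(cake)"}}
persist_have = {"pre": {"have(cake)"}, "eff": {"have(cake)"}}   # persistence action
print(actions_mutex(eat, persist_have))   # True (inconsistent effects / interference)
```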
Example: Spare Tire Problem
Initial State: at(flat,axle),
at(spare,trunk)
Goal State: at(spare,axle)
Action( remove(spare,trunk),
PRECOND: at(spare,trunk)
ADD: at(spare,ground)
DELETE: at(spare,trunk)
)
Action( remove(flat,axle),
PRECOND: at(flat,axle)
ADD: at(flat,ground)
DELETE: at(flat,axle)
)
Action( putOn(spare,axle),
PRECOND: at(spare,ground),
not(at(flat,axle)),
ADD: at(spare,axle)
DELETE: at(spare,ground)
)
Here we need a not, which is not part of the original STRIPS language!
GRAPHPLAN Example
- $S_0$ consists of 5 literals (initial state and the CWA literals):
- $S_0 = \{\text{At(Spare, Trunk)},\ \text{At(Flat, Axle)},\ \neg\text{At(Spare, Axle)},\ \neg\text{At(Flat, Ground)},\ \neg\text{At(Spare, Ground)}\}$
- EXPAND-GRAPH adds actions with satisfied preconditions
- add the effects at level $S_1$
- also add persistence actions and mutex relations
GRAPHPLAN Example
- Repeat (expand the graph to the next level)
- Note: Not all mutex links are shown!
- [Figure: planning graph with one example mutex link of each type: inconsistent effects, interference, competing needs, inconsistent support]
GRAPHPLAN Example
- Repeat until all goal literals are pairwise non-mutex in $S_i$
- If all goal literals are pairwise non-mutex, this means that a solution might exist
- not guaranteed because only pairwise conflicts are checked
→ we need to search whether there is a solution
Deriving Heuristics from the PG
- Planning Graphs provide information about the problem
- Example:
- A literal that does not appear in the final level of the graph cannot be achieved by any plan
- Extraction of a **serial plan**
- PG allows several actions to occur simultaneously at a level
- can be serialized by restricting PG to one action per level
- add mutex links between every pair of actions
- provides a **good heuristic** for serial plans
- Useful for backward search
- Any state with an unachievable precondition has cost $= +\infty$
- Any plan that contains an unachievable precond has cost $= +\infty$
- In general: **level cost** $= \text{level of first appearance of a literal}$
- clearly, level cost are an admissible search heuristic
- PG may be viewed as a **relaxed problem**
- checking only for consistency between pairs of actions/literals
Costs for Conjunctions of Literals
- **Max-level**: maximum level cost of all literals in the goal
- admissible but not accurate
- **Sum-level**: sum of the level costs
- makes the subgoal independence assumption
- inadmissible, but works well in practice
- **Cake Example**:
- estimated costs for `have(cake) ∧ eaten(cake)` is 0+1=1
- true costs are 2
- **Cake Example without action `bake(cake)`**
- estimated costs are the same
- true costs are $+\infty$
- **Set-level**: find the level at which all literals appear and no pair has a mutex link
- gives the correct estimate in both examples above
- dominates max-level heuristic, works well with interactions
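A minimal sketch of the max-level and sum-level heuristics follows, with level costs taken from the cake example above; the dictionary encoding of level costs is an illustrative assumption.
```python
import math

# level of first appearance of each literal in the planning graph
# (values taken from the cake example; infinity = never appears)
level_cost = {"have(cake)": 0, "eaten(cake)": 1}

def max_level(goals):
    return max(level_cost.get(g, math.inf) for g in goals)

def sum_level(goals):
    return sum(level_cost.get(g, math.inf) for g in goals)

goals = ["have(cake)", "eaten(cake)"]
print(max_level(goals))   # 1  (admissible, but not very accurate)
print(sum_level(goals))   # 1  (= 0 + 1; assumes independent subgoals, true cost is 2)
```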
The \texttt{GRAPHPLAN} Algorithm
- Algorithm for extracting a solution directly from the PG
- alternates solution extraction and graph expansion steps
```
function GRAPHPLAN(problem) returns solution or failure
graph ← INITIAL-PLANNING-GRAPH(problem)
goals ← GOALS[problem]
loop do
if goals all non-mutex in last level of graph then do
solution ← EXTRACT-SOLUTION(graph, goals, LENGTH(graph))
if solution ≠ failure then return solution
else if NO-SOLUTION-POSSIBLE(graph) then return failure
graph ← EXPAND-GRAPH(graph, problem)
```
- \texttt{EXTRACT-SOLUTION}:
- checks whether a plan can be found searching backwards
- \texttt{EXPAND-GRAPH}:
- adds actions for the current level and state literals for the next level
A state consists of
- a pointer to a level in the planning graph
- a set of unsatisfied goals
- Initial state
- last level of PG
- set of goals from the planning problem
- Actions
- select any non-conflicting subset of the actions of $A_{i-1}$ that cover the goals in the state
- Goal
- success if level $S_0$ is reached with all goals satisfied
- Cost
- 1 for each action
Could also be formulated as a Boolean CSP
GRAPHPLAN Example
- Start with goal state \( at(\text{spare,axle}) \) in \( S_2 \)
- only action choice is \( \text{puton}(\text{spare,axle}) \) with preconditions
\( \neg \text{at}(\text{flat,axle}) \) and \( \text{at}(\text{spare,ground}) \) in \( S_1 \)
- two new goals in level 1
GRAPHPLAN Example
- \( \text{remove}(\text{spare, trunk}) \) is the only action to achieve \( \text{at}(\text{spare, ground}) \)
- \( \neg \text{at}(\text{flat, axle}) \) can be achieved with \( \text{leave-overnight} \) and \( \text{remove}(\text{flat, axle}) \)
- \( \text{leave-overnight} \) is mutex with \( \text{remove}(\text{spare, trunk}) \) → choose \( \text{remove}(\text{spare, trunk}) \) and \( \text{remove}(\text{flat, axle}) \)
- preconditions are satisfied in \( S_0 \) → we're done
[Figure: planning graph for the spare tire problem with levels $S_0$, $A_0$, $S_1$, $A_1$, $S_2$; the extracted solution selects remove(spare,trunk) and remove(flat,axle) in $A_0$ and putOn(spare,axle) in $A_1$.]
Termination of GraphPlan
1. The planning graph converges because everything is finite
- number of literals is monotonically increasing
- a literal can never disappear because of the persistence actions
- number of actions is monotonically increasing
- once an action is applicable it will always be applicable
(because its preconditions will always be there)
- number of mutexes is monotonically decreasing
- If two actions are mutex at one level, they are also mutex in all
previous levels in which they appear together
- inconsistent effects and interferences are properties of actions
→ if they hold once, they will always hold
- competing needs are properties of mutexes
→ if the number of actions goes up, chances increase that there is
a pair of non-mutex actions that achieve the preconditions
2. After convergence, EXTRACT-SOLUTION will find an existing
solution right away or in subsequent expansions of the PG
- more complex proof (not covered here)
**SatPlan**
- **Key idea:**
- translate the planning problem into *propositional logic*
- similar to situation calculus, but all facts and rules are ground
- the same literal in different situations is represented with two different propositions (we call them propositions at a depth \(i\))
- actions are also represented as propositions
- rules are used to derive propositions of depth \(i+1\) from actions and propositions of depth \(i\)
- **Goal:**
- find a true formula consisting of propositions of the *initial state*, propositions of the *goal state*, and some action propositions
- **Method:**
- use a satisfiability solver with iterative deepening on the depth
- first try to prove the goal in depth 0 (initial state)
- then try to prove the goal in depth 1
- .... until a solution is found in depth \(n\)
Key Problem
- Complexity
- In the worst case, a proposition has to be generated
- for each of $a$ actions with
- each of $o$ possible objects in the $n$ arguments
- for a solution depth $d$
$\rightarrow$ maximum number of propositions is $d \cdot a \cdot o^n$
- the number of rules is even larger
Solution Attempt: Symbol Splitting
- a possible solution is to convert each $n$-ary relation into $n$ binary relations
- “the $i$-th argument of relation $r$ is $y$”
- this will also reduce the size of the knowledge base because arguments that are not used can be omitted from the rules
- Drawback: multiple instances of the same rule get mixed up
$\rightarrow$ no two actions of same type at the same time step
- Nevertheless, SATPLAN is very competitive
The topic of today's lecture is data structuring. Till now our efforts have been concentrated more on problem solving techniques, and as we said before, solving a problem on a computer consists of two main parts: one is the problem solving, or what we call the problem decomposition aspect, where we decompose the problem and try to generate an initial solution, and then we analyze that solution.
The next aspect is to analyze the solution to the problem, and we have seen very quickly what is meant by asymptotic analysis, which is one thing we will do for that. The second is how to improve the solution and obtain good solutions. For example, in our analysis of the merge sort algorithm we saw that if we split in the middle, we get a better algorithm than if we split in other ways. There are various other aspects of how to improve the solution, one of which is the organization of data.
So we shall study the various techniques by which we can improve the solution one by one. One of them is data structuring; another, which we have already seen, is what we may call balancing the split: if you decompose a problem into two parts or into n parts, where will you split it? That balance is very important, and it is called the problem of balancing. There are various other issues which we shall come to later on, and then we shall see how to combine all of them in more complex situations. So we have just done a bit of problem decomposition, we
have seen how to analyze the solutions asymptotically either by doing a loop analysis in terms of analyzing how many loops will be there or in terms of solving a recurrence equation.
The other aspect is the organization, structuring, storage and manipulation of the data elements that we have in our program. There are other techniques for improving a solution which we shall come to later on, but data structuring is also a very important consideration in algorithm design, and it is something we shall concentrate on independently, by choosing problems which are typically data structuring problems themselves, so that the algorithm design and problem decomposition aspects do not come in immediately. We shall choose problems which are pure data structuring problems and see how to organize and define such data structures, what techniques we may need to employ in order to build them, and what standard techniques are known. And it is there that dynamic allocation, the structure of linked lists and the way we have manipulated them will come in very handy.
(Refer Slide Time 05:25)
So we shall concentrate for the next few classes on the problem of data structuring and then we shall go back using our knowledge of data structuring, balancing and one or two more techniques to see how we can completely solve given problems so that will be the approach that we shall take till the rest of the course. So what is data structuring which is today’s topic. The data structuring consists of two things, one is the organization of the data elements and two the efficient procedures for data manipulation. So how we will organize the data elements, how we will store the data elements and how we will efficiently write by manipulation routines on the data elements and both are interlinked and we shall first see some examples of problems which are pure data structuring problems alright.
And every programming language will provide you some data structures or data types which are in built with certain operations and facility is to create other data structures. So first we will see several examples of data structuring problems themselves from simple ones to fairly complex ones. The first one is something which we have seen before. We have to represent complex numbers and in complex numbers we have to add, subtract, read and write them and we have seen how we will represent the complex number as a structure with two parts conceptually it is one single structure with two parts, the real part and the imaginary part and we define operations read, write, add and subtract on them.
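In C, such a complex-number structure with read, write, add and subtract operations might look like the following minimal sketch; the names and layout here are illustrative, not prescribed by the lecture.

```c
#include <stdio.h>

/* One conceptual structure with two parts: the real and the imaginary part. */
typedef struct {
    double re;   /* real part */
    double im;   /* imaginary part */
} Complex;

Complex complex_add(Complex a, Complex b) {
    Complex c = { a.re + b.re, a.im + b.im };   /* two additions: constant time */
    return c;
}

Complex complex_sub(Complex a, Complex b) {
    Complex c = { a.re - b.re, a.im - b.im };
    return c;
}

void complex_read(Complex *c)        { scanf("%lf %lf", &c->re, &c->im); }
void complex_write(const Complex *c) { printf("%g + %gi\n", c->re, c->im); }
```

Every operation touches a fixed number of fields, so, as argued below, each one runs in constant time.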
Now when we provide a data structure, we will not allow an external user of that data structure to manipulate its internal data elements directly. The only view of the data structure available to anybody using it is the set of procedures or functions which operate on it. A programming language will automatically provide you several such data structures in the name of data types. For example, even an array is a structure provided by the language: you can declare and define it, and the operation provided is direct access. Suppose you define a two dimensional array; then writing a[i][j] with instantiated values of i and j is actually an operation on that data structure. We are not allowed to change the address of the array, we are not allowed to move the array in memory and store it at another location; some things we are simply not allowed to do. The system will store it somewhere automatically and provide certain operations. So a programming language like C gives you certain mechanisms to define data structures, gives you certain predefined data types like a string type, and gives you a whole set of library functions to operate on strings, comparison and so on. Similarly, you could have created a data structure in a language and told the user: this is the data structure I am giving you, and these are the functions you can use on it.
So if you provide that, you have effectively given a data structure as a data type in a programming language. A data structure therefore has two aspects. One is its declaration: this is the data structure for complex numbers, with these operations. The other is its instance: I may declare two complex numbers of that type and then operate on them. So one is like a type declaration, which is called the data structure definition; the other is like a variable declaration, which is called the data structure instantiation.
So for different instances the operations will be the same, but the data elements inside the data structure will be different. You can define a data structure for complex numbers, declare some 20 complex numbers, and then add, subtract and read them; but when you declare this data structure you do not allow direct manipulation of its internals, for example the real part or the imaginary part cannot suddenly be changed from outside, because no such operation is provided. A second example is sets of numbers, with the operations you define: insert an element into a set, delete an element from a set, find out whether an element is a member of a set, take two sets and return their union, take two sets and return their intersection.
You can implement this by whatever means you choose. The data inside, these operations, and the way the data is organized for these operations together define your data structure. Now for the same data structure you can have three or four different implementations, and what we will be looking for is an efficient implementation.
(Refer Slide Time 10:51)
Let's see another one, vectors. You can define vectors, each parameterized by its size, and then read a vector, write a vector, add two vectors, subtract two vectors, take the dot product of two vectors. Similarly you can define a data structure for this and imagine how you will implement it. Matrices: you can define a data structure called a matrix, parameterized by two parameters, the number of rows and the number of columns, and then define operations for adding two matrices, subtracting two matrices, multiplying two matrices, inverting a matrix, finding the determinant of a matrix and many others. These matrices will then be operated on only through the operators which are defined.
So suppose I give you such problems: declare and use complex numbers, define a library for complex numbers. You are essentially left with defining the data in a particular way and writing out the manipulation procedures on that data in an efficient manner. Let us see some examples which are slightly different but are in the same mould of data structuring. Consider a simple doctor's appointment list. In this appointment list, somebody will request an appointment and the doctor's secretary will make that appointment; by the normal rule of making appointments you will be placed last, that is, at the end of the list. The other thing the operator will do, once the doctor has finished with one patient, is call the next person.
So these are the two operations. Over and above this you can define cancellation of appointments: somebody may make an appointment and then ask for a cancellation. You may make it more complex still and say a person will ask for a particular time slot, "can you give me this slot?", and the secretary will look and see whether that slot is available. Here we have made a simple appointment system, and there are various ways in which you can look at this appointment problem. The simplest is like a queue: you stand in a queue, you go to the doctor and put your chit there, so you are number 9; when number 6 finishes, number 7 is called; and somebody may cancel an appointment, so number 12 may get cancelled.
So after 11, 13 should be called; that is how you handle cancellation of an appointment. In a more complex situation you can ask for an appointment in a particular slot and you will be given a time slot to come for, say between 10 and 11. If somebody asks for a time slot between 10 and 12, then calling the next person becomes more complex, because many people will overlap between 10 and 11, and you may have to form a queue there or make some prioritized choice. So it may become more and more complex depending on the situation, but this is also a problem of organizing data elements. Here the data elements are the people who request appointments, and you have to organize the data in such a way that you can make an appointment appropriately and select the next person appropriately. This is the job of the computer: as if in a menu these two operations are there, and these are the two operations which have to be done, or they may be called from another function.
Now let us look at it this way. You can define this as a data structure, and in another program where you require an appointment list you can say: I will declare an appointment list L1, an appointment list L2 and an appointment list L3 for the three doctors in a polyclinic. You declare three such appointment lists and you manipulate them by making the appropriate calls on each list. So there may be a main menu at the secretary's desk of the polyclinic with three or four appointment lists for the doctors, and the program will use this data structure to build a higher level menu which makes an appointment with a given doctor at any particular point of time. This is how you will grow and develop the structures. Look at it as if, in very high level programming, this is a predefined data type: in a very high level language you can tell a user who is using a menu or writing a program, this is your data type.
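As a rough illustration, here is a minimal C sketch of such an appointment-list type implemented as a simple queue with make-appointment and call-next operations (cancellation would be a linear-time removal by name). The names and the linked-list choice are assumptions for this sketch, not something fixed by the lecture.

```c
#include <stdlib.h>
#include <string.h>

typedef struct Appt {
    char name[32];
    struct Appt *next;
} Appt;

typedef struct {            /* one instance per doctor */
    Appt *front, *back;
} ApptList;

void make_appointment(ApptList *l, const char *name) {
    Appt *a = malloc(sizeof *a);
    strncpy(a->name, name, sizeof a->name - 1);
    a->name[sizeof a->name - 1] = '\0';
    a->next = NULL;
    if (l->back) l->back->next = a; else l->front = a;   /* join at the end */
    l->back = a;
}

/* Returns the next patient (caller frees the node), or NULL if empty. */
Appt *call_next(ApptList *l) {
    Appt *a = l->front;
    if (a) {
        l->front = a->next;
        if (!l->front) l->back = NULL;
    }
    return a;
}
```

Declaring `ApptList l1 = {0}, l2 = {0}, l3 = {0};` for three doctors gives three independent instances of the same type, exactly as described above.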
(Refer Slide Time 16:18)
You can declare appointment_list x1, x2, x3 just like you declare int i1, i2, i3. So this is how the problem of data structuring scales up slowly. Let's come to a slightly more interesting problem, a merit list. In our student information system we are to prepare a merit list: you can add a student (where a student means a roll number and a CGPA), you can delete a student, you can select the best student, you can select the worst or last student, and you can print the complete merit list at any particular point of time. These are the operations available in this problem.
We are now discussing pure problems where the problem is only of data structuring all right. And each one of them will have to be implemented by an algorithm and we will have to come to an algorithm which will implement this. But these algorithms will be related to the data structure defined, so we are more or less on a pure data structuring mode rather than an absolute problem decomposition mode. And finally maybe we can have a complete academic section information which will have add, delete or modify all the subjects, these are the subjects, subject name, subject number offered by this department etc etc.
Student information: these are the students, their departments and so on, with add, delete and modify. Then registration information, grade information, computation of CGPA and SGPA, merit list preparation, failure list preparation, preparation and printing of individual grade cards; and when a student registers for a semester you may have to check that he has not registered for the same subject twice, or for a subject he has already cleared; all the rules of the institute must be maintained. So this is another problem which is essentially a pure data structuring problem, and when there is a massive amount of data there are two aspects of data structuring. Here we are discussing data structuring for which we will write ordinary programs and in which the data will be stored in memory. But when there is a large amount of data you cannot keep it all in memory; you will have to store it in files and do file operations, and then the complexity issues will be slightly different from in-memory operations, depending on what kind of file you are using and how you are accessing it. We will come to all these issues slowly, but before that we should start from the beginning and see what the issues in data structure manipulation are.
So let us come back to our first problem, which we have done before: complex numbers with operations add, subtract, read and write. We saw that converting to a programming language is the last step; it is the conceptual design of the data structure which is important, and we will see how we can describe this structure in diagrams or whatever notation we like. Now here we have these operations. What we decided in the previous case was that we have one structure, the data structure, which as we said consists of two parts, the real part and the imaginary part; this is what we define for any instance of the data. For different instances these values will be different and a different actual allocation will be done, but the structure, or the type, will be the same, and the functions add two numbers, subtract two numbers and so on, as we have written before. And let us see, could we have done anything better? I don't think so, because addition takes just two component additions, and read and write are similarly bounded; so all operations are constant time: add is done in constant time, subtraction in constant time, read and write in constant time for a fixed size.
(Refer Slide Time 19:55)
Obviously if you are asked to read complex numbers of length some 20,000 then you have to think of it in a different way. For example the problem of adding two very large numbers which you cannot declare in your programming language is itself a data structuring problem. Given two numbers, you will have to add two numbers of arbitrary sizes. Then how will you read, them how will you store them? Here we have fixed size, so all of them are constant time and therefore this is a reasonably good way to implement it.
On the other hand, suppose you are told that you have to work with very long integers and you will have to read them, write them and add them. How would you implement such very long integers? Very long means you do not know the size in advance; you will just be given a huge number, terminated, say, by a dollar sign. So how would you organize it, how would you store it? One way could be to break it up into fixed size pieces: you know that a machine integer can hold at most, say, 10 digits in your representation, so you break the number into fixed size chunks. But you still do not know how long the whole number will be. So what would you do? Either you read the number into a character string, or you read it character by character and count how many digits there are; once you know the count, you know the size, and once you know the size, you know how many machine integers you will require to store it. So you can simply malloc that many integers.
So after reading the number of characters, you can malloc or you can dynamically allocate that many fixed size integers to store your very long integer. The other option could be, you could have done it as a linked list. That is after you will read in some numbers and you will just form a linked list of these numbers. Now given two such very long integers, writing them is no problem, both of them will require one will require to access the array and the other will require to access the linked list and the time of reading and writing will be proportional to the length of the number of digits in that integer which is quite fine because you cannot read it in less than that time.
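As a small illustration of the first option, here is a hedged C sketch that reads a dollar-terminated number into a character buffer, counts the digits, and then mallocs an array holding the digits least significant first. The buffer limit and the one-digit-per-cell packing are simplifying assumptions made only for this sketch.

```c
#include <stdio.h>
#include <stdlib.h>

/* Reads digits until '$', stores them least significant digit first.
   Returns the malloc'd digit array and writes its length to *n. */
int *read_big_number(int *n) {
    char buf[100000];                 /* assumed upper bound, for the sketch only */
    int len = 0, ch;
    while ((ch = getchar()) != EOF && ch != '$')
        if (ch >= '0' && ch <= '9')
            buf[len++] = (char)ch;    /* count the digits while reading */

    int *digits = malloc(len * sizeof *digits);
    for (int i = 0; i < len; i++)     /* reverse: least significant digit first */
        digits[i] = buf[len - 1 - i] - '0';
    *n = len;
    return digits;
}
```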
Addition: how would you do addition? Given two linked lists, in which direction should the pointers go? Suppose the number is 3 5 7 8 and each cell can hold only one digit; then in the array representation you would declare an array of size 4 and store it as 8, 7, 5, 3, least significant digit first. In a linked list you would allocate 4 elements, and how would you store them? One alternative is to store them as 3, 5, 7, 8 in that order; the other is 8, 7, 5, 3; or we could use a doubly linked list with both forward and backward pointers, which leaves the choice open. So which one should we choose here, the first or the second? The second one will be chosen.
Now why will the second one be chosen over the first one? What is the intuitive argument for choosing the second and not the first? The read and write operations are unaffected by the choice, because, as we have seen in our linked list programs, while reading you can build the list in either order. But consider the add operation. Addition inherently proceeds from the least significant digit, so if the two numbers are stored most significant digit first, with the pointers going that way, then after you have added one pair of digits you cannot step back towards the less significant ones; you will again have to start from the head of the list and walk forward. Each such walk costs order n, where n is the number of digits: reaching the least significant digit takes n steps, then n minus 1, then n minus 2, then n minus 3, and so on.
Normally your addition would just add one pair of digits and move forward, add the next pair and move forward, and so on. So if you analyze it quickly: stored in the second form, your addition operation is order n plus m, where n is the number of digits of one number and m of the other. On the other hand, if you store the numbers with the pointers the other way, what is the cost? For the first digit you need n plus m movements, for the second n minus 1 plus m minus 1, for the third still fewer, and summing these up you end up with order n squared plus m squared; that is the order of the whole addition.
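A minimal C sketch of this addition, assuming the digits are stored least significant first in singly linked lists (the node layout and names are illustrative):

```c
#include <stdlib.h>

typedef struct Digit {
    int d;                    /* one decimal digit, 0..9 */
    struct Digit *next;       /* next more-significant digit */
} Digit;

/* Adds two numbers stored least significant digit first.
   Walks each list once, so the cost is O(n + m). */
Digit *big_add(const Digit *a, const Digit *b) {
    Digit head = {0, NULL};
    Digit *tail = &head;
    int carry = 0;
    while (a || b || carry) {
        int s = carry + (a ? a->d : 0) + (b ? b->d : 0);
        Digit *node = malloc(sizeof *node);
        node->d = s % 10;
        node->next = NULL;
        tail->next = node;
        tail = node;
        carry = s / 10;
        if (a) a = a->next;
        if (b) b = b->next;
    }
    return head.next;
}
```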
So we have seen an example where even simple choices in how the data is stored, such as which way the pointers go, have to depend on the efficiency of the algorithm that will manipulate it. That is why I said that the efficient manipulation algorithm and the storage of the data go hand in hand. These are very simple examples; real examples will be much more complex. But is the linked list better than the array version? If you store the number as a dynamically allocated block of an array, then you have to know the size of the array, and then you can add in the same way. That will also take order n time to add, won't it? You add one pair of cells, take the carry, add the next pair, take the carry, and each cell, instead of being a single digit, may itself hold a big chunk of digits. You will just use the maximum word size available to you in your program. Now how do you compare the two? The most-significant-first linked list is out; how do you compare the array version with the least-significant-first linked list, which is better? In terms of the addition operation both are order n in time, but which is better? In terms of storage space the array is better, because the linked list uses an additional pointer per cell while the array just allocates one block of memory. So when the time is the same, as we said before, space becomes an important consideration. This is how we decide on the data structure, and once we decide on the data structure we decide on our final algorithms to implement it. So here we get order n algorithms to read, to write and to add two very long numbers, and you can always show that you cannot do better than order n, otherwise you would be missing some digit in the reading, the writing or the addition itself.
So let’s come to the vector problem. Can you see? So, how will you represent vectors? How will you represent a vector? Let us see the alternatives. The size of the vector will be fixed for every vector. Now this vector is say a vector of integers, it’s a vector of numbers say it can be a vector of complex numbers as well where each element of that vector will be implemented as a complex number and the addition of the elements will be implemented by complex number addition. So that way we can develop it hierarchically but let us see what are the alternatives. Here again if you know the size, you can define a vector as a data, as a block of an array by dynamically allocating that size.
Once somebody asks you to declare a vector of a particular size, you can dynamically allocate an array of that size, and then read it, print it, add two vectors in linear time, subtract two vectors, take the dot product. How much time will the dot product of two vectors take, what is its complexity? This element times this element, plus this times this, n times; so it is order n time. On the other hand, if you take the outer product of two vectors, what would it return? It would return a matrix. So if you define the outer product of two vectors, you would also have to define a matrix type, and you would write m = multiply(v1, v2), where v1 and v2 are declared as vectors, multiply operates on the two, and m is of type matrix. That is how you would write it explicitly. Now comes the fun of something called polymorphism: you would simply write m = v1 * v2.
Now the system would find out that this is of type vector, this is of type vector, it will realize that this multiplication must be vector multiplication. When you generate the code, the translator will itself know what to do. When you write $v_3$ is equal to $v_1$ plus $v_2$, it will do vector addition because it knows vectors have been declared. So automatically the function, the appropriate function will be called knowing what data you have defined. So this will make your programming, your style of programming, your technique of programming much more simpler but you will have to implement all these data structures in an efficient way. So this is the one approach which data structure and programming takes to take programming out at the highest level and for that you must know what is data structuring because data structuring will be required even at the highest level.
Once you are provided with matrix multiplication operation, you would like to do something more complex and define it as a data structure. On the other hand if these vectors were bits 1 0 1 0 1 0 then may be you wouldn’t represent it as an array, you would represent it may be as one integer in a bit format and if a language allows you to do bitwise ANDing and ORing, it would be useful for you to do several other operations may be AND OR because it is a bit vector, bit vectors usually represent some other types of information. So bit vectors can be used to implement such things.
Coming to sets: sets can be implemented by bit vectors, provided the elements come from a fixed range, there is a maximum limit on the number of elements, and you know the universal set. So let's see how sets can be represented. If you know the universal set, then you can represent any set over it. Suppose the universal set is, say, 1 to 50; then you can represent a set by a bit vector of size 50. Insert of an element i will make the ith bit one, delete will make the ith bit zero, member will check whether the ith bit is one, union will do a bitwise OR of the two bit vectors, and intersection will do a bitwise AND.
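A minimal C sketch of this bit-vector representation, assuming a known universal set of 1..50 (the word size and names are illustrative):

```c
#include <stdint.h>

#define UNIVERSE 50                        /* assumed universal set: 1..50 */
typedef uint64_t BitSet;                   /* one machine word is enough here */

static BitSet bs_insert(BitSet s, int i)  { return s |  ((BitSet)1 << (i - 1)); } /* O(1) */
static BitSet bs_delete(BitSet s, int i)  { return s & ~((BitSet)1 << (i - 1)); } /* O(1) */
static int    bs_member(BitSet s, int i)  { return (int)((s >> (i - 1)) & 1); }   /* O(1) */
static BitSet bs_union(BitSet a, BitSet b)        { return a | b; }               /* bitwise OR  */
static BitSet bs_intersection(BitSet a, BitSet b) { return a & b; }               /* bitwise AND */
```

Every operation is constant time, exactly as argued in the lecture; for larger universes the same idea works with an array of words.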
On the other hand, if you do not know the universal set, then all you are left with is to make a linked list and store the elements in it. You store the numbers as a linked list because insert and delete dynamically change the size of the set. Once the size changes dynamically, you cannot malloc a fixed size up front, as we could for long integers where we knew the size after reading; here a new element may be inserted or deleted at any point of time. So what we can do is implement the set as a linked list.
We cannot malloc a fixed size and implement it. Are you getting the point? So what will the insert operation, suppose we just make a linked list, insert operation how much time will it take? Constant time, you can insert in the beginning, you can insert in the end wherever you want. Delete operation will take, so insert operation will take order one time. Delete operation will take, if you give a number and asked to delete that number, how much will it take? You have to find out the number in the list and then delete it. You will have to find the number in the list and then delete it. So in the worst case, if there are n elements it will take order n time. Member will take, member will take how much time? Order of n. On the other hand, if you knew the universal set and you represented it as an array of binary numbers that is as bit vectors, insert would take order one time, delete would take order one time, member would take order one time, just accessing that element in that array. Isn’t it?
(Refer Slide Time 38:48)
How much time will union and intersection take in this case? How will you find the union? Remember, if the sets were disjoint, union could be done in constant time: just join the end of one list to the beginning of the other. If you were told the sets are disjoint, you would maintain in every list a start and an end pointer, so your data structure would be a linked list with both a start and an end pointer, and when somebody asks for the union of disjoint sets you simply make the end of one list point to the start of the other; it does not matter which way round. But if the sets are not disjoint, then you are in trouble: you have to scan each element of one list and check whether it is present in the other; only if it is not present can you append it to the end of the list, or put it into a third list. How much time will this take? n times n, which is n squared, where n is the size of one set and of the other, so it takes order n squared. So is there anything you can do about it, any improvement you can make, any way to organize the data better? Can you come up with a suggestion?
I will give you a hint. This was an arbitrary list: if I give you 5 3 4 7, you would have stored 5 3 4 7 as is. But suppose during insertion you kept the list in sorted order, like you do in insertion sort; then insert becomes order n. Let us see how everything changes. Delete is order n, member is order n, and largest and smallest are order one, whereas in the unsorted list member was order n and finding the largest or smallest was also order n. What about union? Order n. Remember merge sort, the merge routine of merge sort? You start with one pointer in each list, and whichever element is smaller you put into the output; if the two are equal you put only one of them, and that is the only change you have to make. So union becomes order n plus m, which for sets of comparable size is order 2n, that is order n. So by sorting the list you have increased the cost of insertion, but you have brought union down from order n squared to order n.
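A small C sketch of this merge-style union over two sorted linked lists, reusing the idea of the merge routine (node layout and names are illustrative assumptions):

```c
#include <stdlib.h>

typedef struct Node {
    int key;
    struct Node *next;
} Node;

static Node *new_node(int key) {
    Node *n = malloc(sizeof *n);
    n->key = key;
    n->next = NULL;
    return n;
}

/* Union of two sorted lists without duplicates; one pass over each list, O(n + m). */
Node *sorted_union(const Node *a, const Node *b) {
    Node head = {0, NULL};
    Node *tail = &head;
    while (a && b) {
        int k;
        if (a->key < b->key)      { k = a->key; a = a->next; }
        else if (b->key < a->key) { k = b->key; b = b->next; }
        else                      { k = a->key; a = a->next; b = b->next; } /* equal: keep one copy */
        tail->next = new_node(k);
        tail = tail->next;
    }
    for (const Node *r = (a ? a : b); r; r = r->next) {  /* append whatever remains */
        tail->next = new_node(r->key);
        tail = tail->next;
    }
    return head.next;
}
```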
(Refer Slide Time 42:02)
This matters because the total cost is a weighted sum over the operations, something like f_insert times the cost of insert, plus f_union times the cost of union, plus f_member times the cost of member, and so on, where each f is the number of times that operation is performed. The unsorted list wins only if inserts are performed vastly more often than unions, since it trades an order one insert against an order n squared union; otherwise the sorted list is better in the worst case. So I hope you are now getting the idea of what is meant by data structuring: how to organize the data elements, in exactly what form the data will be stored, whether it will be sorted or unsorted, and what the complexity of each and every operation is. And if you can do union, intersection is done in nearly the same way.
So in this particular case, if all the operations occur with roughly equal probability, then for the time being we would choose the sorted list. Maybe we can see later on whether we can improve on it, but for the time being this is what we have. This is the essence of data structuring, and we shall come to other examples and see, slowly, what we can do about different problems: whether, beyond simple linked lists or ordered linked lists, we can define other kinds of conceptual structures which help us perform these operations better. Those are the well known data structures; besides linked lists we will see others like trees, graphs and so on.
CS 433: Computer Architecture – Fall 2022
Homework 5
Total Points: Undergraduates (44 points), Graduates (52 points)
Undergraduate students should only solve the first 4 problems.
Graduate students should solve all problems.
Due Date: November 1, 2022 at 10:00 pm CT
(See course information slides for more details)
Directions:
● All students must write and sign the following statement at the end of their homework submission. "I have read the honor code for this class in the course information handout and have done this homework in conformance with that code. I understand fully the penalty for violating the honor code policies for this class." No credit will be given for a submission that does not contain this signed statement.
● On top of the first page of your homework solution, please write your name and NETID, your partner’s name and NETID, and whether you are an undergrad or grad student.
● Please show all work that you used to arrive at your answer. Answers without justification will not receive credit. Errors in numerical calculations will not be penalized. Cascading errors will usually be penalized only once.
● See course information slides for more details.
Problem 1 [5 Points]
A 4 entry victim cache for a 4KB direct mapped cache removes 80% of the conflict misses in a program. Without the victim cache, the miss rate is 0.064 (6.4%) and 67% of these misses are conflict misses. What is the percentage improvement in the AMAT (average memory access time) due to the victim cache?
Assume a hit in the main (4KB) cache takes 1 cycle. For a miss in the main cache that hits in the victim cache, assume an additional penalty of 1 cycle to access the victim cache. For a miss in both the main and victim caches, assume a further penalty of 48 cycles to get the data from memory. Assume a simple, single-issue, 5-stage pipeline, in-order processor that blocks on every read and write until it completes.
Solution:
AMAT = Hit Time + Miss Rate × Miss Penalty
Without victim cache: AMAT = 1 + 0.064 × 48 = 4.072 cycles
With victim cache: AMAT = 1 + 0.064 × [(0.67 × 0.80 × 1) + ((1 − 0.67 × 0.80) × 49)] = 2.489
Improvement: \( \frac{\text{AMAT}_{\text{orig}} - \text{AMAT}_{\text{victim}}}{\text{AMAT}_{\text{orig}}} = \frac{4.072 - 2.489}{4.072} \approx 0.389 = 38.9\% \)
Grading:
1 point for the original AMAT
3 points for victim cache AMAT:
1 point for determining the correct rate for victim cache hits and misses
1 point for assigning the correct penalty to victim cache hits
1 point for assigning the correct penalty to victim cache misses
1 point for percent improvement
Some students used \( \frac{\text{AMAT}_{\text{orig}}}{\text{AMAT}_{\text{victim}}} - 1 = \frac{4.072}{2.489} - 1 \approx 0.636 = 63.6\% \) as the percentage improvement.
For this homework, we did not take off points for this. (This would be the performance improvement i.e. speedup, assuming that performance is 1/AMAT.)
Problem 2 [12 points]
You are building a computer system around a processor with in-order execution that runs at 1 GHz and has a CPI of 1, excluding memory accesses. The only instructions that read or write data from/to memory are loads (20% of all instructions) and stores (5% of all instructions).
The memory system for this computer has a split L1 cache. Both the I-cache and the D-cache hold 32 KB each. The I-cache has a 2% miss rate and 64 byte blocks, and the D-cache is a write-through, no-write-allocate cache with a 5% miss rate and 64 byte blocks. The hit time for both the I-cache and the D-cache is 1 ns. The L1 cache has a write buffer. 95% of writes to L1 find a free entry in the write buffer immediately. The other 5% of the writes have to wait until an entry frees up in the write buffer (assume that such writes arrive just as the write buffer initiates a request to L2 to free up its entry and the entry is not freed up until the L2 is done with the request). The processor is stalled on a write until a free write buffer entry is available.
The L2 cache is a unified write-back, write-allocate cache with a total size of 512 KB and a block size of 64-bytes. The hit time of the L2 cache is 15ns for both read hits and write hits. Tag comparison for hit/miss is included in the 15ns in all cases, do not add hit time to miss time on a miss. The local hit rate of the L2 cache is 80%. Also, 50% of all L2 cache blocks replaced are dirty. The 64-bit wide main memory has an access latency of 20ns (including the time for the request to reach from the L2 cache to the main memory), after which any number of bus words may be transferred at the rate of one bus word (64-bit) per bus cycle on the 64-bit wide 100 MHz main memory bus. Assume inclusion between the L1 and L2 caches and assume there is no write-back buffer at the L2 cache. Assume a write-back takes the same amount of time as an L2 read miss of the same size.
Assume all caches in the system are blocking; i.e., they can handle only one memory access (load, store, or writeback) at a time. When calculating the miss penalty for a load or store for a writeback cache, the time for any needed writebacks should be included in the miss penalty.
While calculating any time values (such as hit time, miss penalty, AMAT), please use ns (nanoseconds) as the unit of time. For miss rates below, give the local miss rate for that cache.
By miss penalty$_{L2}$, we mean the time from the miss request issued by the L2 cache up to the time the data comes back to the L2 cache from main memory.
Part A [7 points]
Computing the AMAT (average memory access time) for instruction accesses.
i. Give the values of the following terms for instruction accesses: hit time\textsubscript{L1}, miss rate\textsubscript{L1}, hit time\textsubscript{L2}, miss rate\textsubscript{L2}. [1 point]
**Solution:**
\begin{align*}
\text{hit time}\textsubscript{L1} &= 1 \text{ cycle} = 1 \text{ ns} \\
\text{miss rate}\textsubscript{L1} &= 0.02 \\
\text{hit time}\textsubscript{L2} &= 15 \text{ ns} \\
\text{miss rate}\textsubscript{L2} &= 1 - 0.8 = 0.2
\end{align*}
**Grading:**
1 point for giving the correct values of all 4 terms, otherwise no points.
This part is just meant to get everyone started on the right track.
ii. Give the formula for calculating miss penalty\textsubscript{L2}, and compute the value of miss penalty L2. [4 points]
**Solution:**
\begin{align*}
\text{miss penalty}\textsubscript{L2} &= \text{memory access latency} + \text{time to transfer one L2 cache block} \\
\text{bus bandwidth} &= 64 \text{ bits per } 10 \text{ ns} = \frac{8 \text{ bytes}}{10 \text{ ns}} = 0.8 \text{ bytes/ns} = 800 \text{ MB/s} \\
\text{time to transfer one 64-byte L2 block} &= \frac{64 \text{ bytes}}{0.8 \text{ bytes/ns}} = 80 \text{ ns} \\
\text{miss penalty}\textsubscript{clean L2} &= 20 \text{ ns} + 80 \text{ ns} = 100 \text{ ns}
\end{align*}
50\% of replaced blocks are dirty and must first be written back, which costs another 100 ns:
\[ \text{miss penalty}\textsubscript{L2} = 100 \text{ ns} + 0.5 \times 100 \text{ ns} = 150 \text{ ns} \]
**Grading:**
1 point for the correct formula for miss penalty\textsubscript{L2}.
1 point for correctly setting up the time to transfer one block.
1 point for the correct value of miss penalty\textsubscript{clean L2}.
1 point for noting that 50\% of the time the replaced block will need to be written back and for correctly setting up the value of miss penalty\textsubscript{L2} taking this into account.
No points to be taken off for calculation errors.
iii. Give the formula for calculating the AMAT for this system using the five terms whose values you computed above and any other values you need. [1 point]
**Solution:**
\[ \text{AMAT} = \text{hit time}\textsubscript{L1} + \text{miss rate}\textsubscript{L1} \times (\text{hit time}\textsubscript{L2} + \text{miss rate}\textsubscript{L2} \times \text{miss penalty}\textsubscript{L2}) \]
**Grading:**
1 point for a completely correct formula, otherwise no points.
iv. Plug in the values into the AMAT formula above, and compute a numerical value for AMAT for instruction accesses. [1 point]
**Solution:**
\[ \text{AMAT} = 1 \text{ ns} + 0.02 \times (15 \text{ ns} + 0.2 \times 150 \text{ ns}) = 1.9 \text{ ns} \]
**Grading:**
1 point for setting up the correct values in the AMAT formula.
No points to be taken off for calculation errors.
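As a quick, hedged sanity check of the two-level AMAT arithmetic above (not part of the official solution), a few lines of C reproduce the numbers:

```c
#include <stdio.h>

/* Two-level AMAT in ns: L1 hit time, L1 miss rate, L2 hit time,
   L2 local miss rate, and the L2 miss penalty to main memory. */
static double amat(double hit1, double mr1, double hit2, double mr2, double pen2) {
    return hit1 + mr1 * (hit2 + mr2 * pen2);
}

int main(void) {
    double pen2 = 20.0 + 64.0 / 0.8;             /* clean L2 miss: 100 ns */
    pen2 += 0.5 * pen2;                          /* 50% dirty writebacks: 150 ns */
    printf("instruction AMAT = %.2f ns\n", amat(1.0, 0.02, 15.0, 0.2, pen2)); /* 1.90 */
    printf("data read AMAT   = %.2f ns\n", amat(1.0, 0.05, 15.0, 0.2, pen2)); /* 3.25 */
    return 0;
}
```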
Part B [2 points]
Computing the AMAT for data reads.
i. Give the value of miss rate\textsubscript{L1} for data reads. [1 point]
**Solution:** \text{miss rate}\textsubscript{L1} = 0.05
**Grading:** 1 point for giving the correct value of miss rate\textsubscript{L1}.
ii. Calculate the value of the AMAT for data reads using the above value, and other values you need. [1 point]
**Solution:**
\[ \text{AMAT} = \text{hit time}\textsubscript{L1} + \text{miss rate}\textsubscript{L1} \times (\text{hit time}\textsubscript{L2} + \text{miss rate}\textsubscript{L2} \times \text{miss penalty}\textsubscript{L2}) \]
\[ \text{AMAT} = 1 + 0.05 \times (15 + 0.2 \times 150) = 3.25 \text{ ns} \]
**Grading:** 1 point for setting up the correct AMAT formula for data reads.
No points to be taken off for calculation errors.
Part C [3 points]
Computing the AMAT for data writes. Assume miss penalty\textsubscript{L2} for a data write is the same as that computed previously for a data read.
i. Give the value of write time\textsubscript{L2Buff}, the time for a write buffer entry to be written to the L2 cache. [2 points]
**Solution:**
\[ \text{write time}\textsubscript{L2Buff} = \text{hit time}\textsubscript{L2} + \text{miss rate}\textsubscript{L2} \times \text{miss penalty}\textsubscript{L2} \]
\[ \text{write time}\textsubscript{L2Buff} = 15 + 0.2 \times 150 = 45 \text{ ns} \]
**Grading:** 1 point for setting up the correct formula for write time\textsubscript{L2Buff}.
1 point for setting up the correct values in the write time\textsubscript{L2Buff} formula.
No points to be taken off for calculation errors.
ii. Calculate the value of the AMAT for data writes using the above information, and any other values that you need. Only include the time that the processor will be stalled. Hint: There are two cases to be considered here depending upon whether the write buffer is full or not. [1 point]
**Solution:**
\[ \text{AMAT} = \text{hit time}\textsubscript{L1} + (\text{fraction of writes that find the write buffer full}) \times \text{write time}\textsubscript{L2Buff} \]
\[ \text{AMAT} = 1 + 0.05 \times 45 = 3.25 \text{ ns} \]
The two cases to consider are whether the write buffer has a free entry or not. When it does, it can be written to in 1 cycle. When it does not, it must incur the penalty for writing an entry to the L2 cache, which is write time\textsubscript{L2Buff}.
**Grading:** 1 point for setting up the correct AMAT equation and inserting the correct values.
No points to be taken off for calculation errors.
Problem 3 [13 points]
Consider the following piece of code:
```c
register int i, j;               /* i, j are in the processor registers */
register float sum1, sum2;
float a[64][64], b[64][64];

for (i = 0; i < 64; i++) {       /* 1 */
    for (j = 0; j < 64; j++) {   /* 2 */
        sum1 += a[i][j];         /* 3 */
    }
    for (j = 0; j < 32; j++) {   /* 4 */
        sum2 += b[i][2*j];       /* 5 */
    }
}
```
Assume the following:
- There is a perfect instruction cache; i.e., do not worry about the time for any instruction accesses.
- Both `int` and `float` are of size 4 bytes.
- Only the accesses to the array locations `a[i][j]` and `b[i][2*j]` generate loads to the data cache. The rest of the variables are all allocated in registers.
- Assume a fully associative, LRU data cache with 32 lines, where each line has 16 bytes.
- Initially, the data cache is empty.
- To keep things simple, we will assume that statements in the above code are executed sequentially. The time to execute lines (1), (2), and (4) is 4 cycles for each invocation. Lines (3) and (5) take 10 cycles to execute and an additional 40 cycles to wait for the data if there is a data cache miss.
- There is a data prefetch instruction with the format `prefetch(array[index])`. This prefetches the entire block containing the word `array[index]` into the data cache. It takes 1 cycle for the processor to execute this instruction and send it to the data cache. The processor can then go ahead and execute subsequent instructions. If the prefetched data is not in the cache, it takes 40 cycles for the data to get loaded into the cache.
- The arrays `a` and `b` are stored in row major form.
- The arrays `a` and `b` both start at cache line boundaries.
Part A [2 points]
How many cycles does the above code fragment take to execute if we do NOT use prefetching?
Solution:
Each cache line holds \( \frac{\text{cache line size}}{\text{element size}} = \frac{16}{4} = 4 \) elements, so one out of every 4 accesses in line 3 misses and one out of every 2 accesses in line 5 (stride 2), for a total of \( 64 \times \left( \frac{64}{4} + \frac{32}{2} \right) = 64 \times (16 + 16) = 2048 \) misses.
Line 1 executes 65 times: \( 65 \times 4 = 260 \)
Line 2 executes \( 64 \times 65 \) times: \( 64 \times 65 \times 4 = 16,640 \)
Line 3 executes \( 64 \times 64 \) times: \( 64 \times 64 \times 10 = 40,960 \)
Line 3 misses \( 64 \times \frac{64}{4} \) times: \( 64 \times \frac{64}{4} \times 40 = 40,960 \)
Line 4 executes \( 64 \times 33 \) times: \( 64 \times 33 \times 4 = 8,448 \)
Line 5 executes \( 64 \times 32 \) times: \( 64 \times 32 \times 10 = 20,480 \)
Line 5 misses \( 64 \times \frac{32}{2} \) times: \( 64 \times \frac{32}{2} \times 40 = 40,960 \)
Total cycles: 168,708
Average cycles per outer loop iteration: \( \frac{168,708}{64} = 2636.0625 \)
Grading: 1 point for correct cycles taken by lines 3 and 5.
1 point for correct cycles taken by lines 1, 2, and 4.
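As an informal cross-check (not part of the required solution), a short C snippet can count the compulsory misses by marking each 16-byte line of a and b the first time it is touched; under the stated assumptions it reports 2048:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* 64*64 floats of 4 bytes = 16 KB per array = 1024 lines of 16 bytes each */
    char seen_a[1024], seen_b[1024];
    memset(seen_a, 0, sizeof seen_a);
    memset(seen_b, 0, sizeof seen_b);
    long misses = 0;

    for (int i = 0; i < 64; i++) {
        for (int j = 0; j < 64; j++) {               /* accesses a[i][j]   */
            int line = (i * 64 + j) * 4 / 16;
            if (!seen_a[line]) { seen_a[line] = 1; misses++; }
        }
        for (int j = 0; j < 32; j++) {               /* accesses b[i][2*j] */
            int line = (i * 64 + 2 * j) * 4 / 16;
            if (!seen_b[line]) { seen_b[line] = 1; misses++; }
        }
    }
    printf("compulsory misses = %ld\n", misses);     /* prints 2048 */
    return 0;
}
```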
Part B [2 points]
Consider inserting prefetch instructions for the two inner loops for the arrays \( a \) and \( b \) respectively. Explain why we may want to unroll the loops to insert prefetches. What is the minimum number of times you would need to unroll for each of the two loops for this purpose?
Solution:
There is one miss every four iterations of the first loop and every two iterations of the second loop, and the computation performed on each cache line takes long enough to cover the prefetch latency. We only need to issue a prefetch instruction once per cache line accessed. If code size is not a problem, unrolling the loop is the most efficient way to do this, since it avoids branches that test for the correct iteration count.
We unroll the first loop 4 times and the second loop 2 times; the prefetch is for iteration \( j + 4 \). (For the second loop, the prefetch is not for the next iteration of the unrolled loop, but for the one after that.)
Grading: 1 point for unroll count of the first loop.
1 point for unroll count of the second loop.
Part C [4 points]
Unroll the inner loops for the number of times identified in part b, and insert the minimum number of software prefetches to minimize execution time. The technique to insert prefetches is analogous to software pipelining. You do not need to worry about startup and cleanup code and do not introduce any new loops.
Solution:
```c
register int i, j;               /* i, j are in the processor registers */
register float sum1, sum2;
float a[64][64], b[64][64];

for (i = 0; i < 64; i++) {           /* 1 */
    for (j = 0; j < 64; j += 4) {    /* 2 */
        prefetch(a[i][j+4]);         /* P1 */
        sum1 += a[i][j+0];           /* 3a */
        sum1 += a[i][j+1];           /* 3b */
        sum1 += a[i][j+2];           /* 3c */
        sum1 += a[i][j+3];           /* 3d */
    }
    for (j = 0; j < 32; j += 2) {    /* 4 */
        prefetch(b[i][2*j+8]);       /* P2 */
        sum2 += b[i][2*j+0];         /* 5a */
        sum2 += b[i][2*j+2];         /* 5b */
    }
}
```
Grading: 1 point for correct indices, statements, and loop header for the first loop.
1 point for prefetch in the first loop.
1 point for correct indices, statements, and loop header for the second loop.
1 point for prefetch in the second loop.
Part D [2 points]
How many cycles does the code in part (c) take to execute? Calculate the average speedup over the code without prefetching. Assume prefetches are not present in the startup code. Extra time needed by prefetches executing beyond the end of the loop execution time should not be counted.
Solution:
Now the only misses are the very first execution of line 3a and the first two executions of line 5a, so there are 3 misses in total. For line 3a, the prefetch issued in the last iteration of each row, prefetch(a[i][64]), falls in row-major order on a[i+1][0], so every row after the first finds its first line already prefetched; only a[0][0] misses. For line 5a, the prefetch runs two iterations ahead (b[i][2*j+8]), so in the very first row the first two cache lines of b are accessed before any prefetch could have covered them; after that, the last two prefetches of each row cover the first two lines of the next row.
Line 1 executes 65 times: $65 \times 4 = 260$
Line 2 executes $64 \times 17$ times: $64 \times 17 \times 4 = 4,352$
Line P1 executes $64 \times 16$ times: $64 \times 16 \times 1 = 1,024$
Lines 3a–d execute $64 \times 16$ times: $4 \times 64 \times 16 \times 10 = 40,960$
Line 3a misses 1 time: $1 \times 40 = 40$
Line 4 executes $64 \times 17$ times: $64 \times 17 \times 4 = 4,352$
Line P2 executes $64 \times 16$ times: $64 \times 16 \times 1 = 1,024$
Lines 5a–b execute $64 \times 16$ times: $2 \times 64 \times 16 \times 10 = 20,480$
Line 5a misses 2 times: $2 \times 40 = 80$
Total cycles: 72,572
Average cycles per outer loop iteration: $\frac{72,572}{64} = 1133.9375$
Speedup over code without prefetching: $\frac{168,708}{72,572} \approx 2.32$
Grading:
1 point for calculating the correct number of misses.
1 point for the rest of the cycles.
Part E [3 points]
Is there another technique that can be used to achieve the same objective as loop unrolling in this example, but with fewer static instructions? Explain this technique and illustrate its use for the code in part (c).
Solution:
The simplest option is to issue excess prefetch requests. Costing only one cycle if the data has already been requested, that's probably cheaper than trying to use branches.
```c
for (i = 0; i < 64; i++) {
    for (j = 0; j < 64; j++) {
        prefetch(a[i][j+4]);      /* P1 */
        sum1 += a[i][j];          /* 3 */
    }
    for (j = 0; j < 32; j++) {
        prefetch(b[i][2*j+8]);    /* P2 */
        sum2 += b[i][2*j];        /* 5 */
    }
}
```
Grading: 1.5 points for prefetching in the first loop.
1.5 points for prefetching in the second loop.
Alternate Solution: Code may also test j%4 (resp. j%2) to only issue the same number of prefetches as in the unrolled code.
```c
for (i = 0; i < 64; i++) {
    for (j = 0; j < 64; j++) {
        if (j % 4 == 0)               /* P1a */
            prefetch(a[i][j+4]);      /* P1b */
        sum1 += a[i][j];              /* 3 */
    }
    for (j = 0; j < 32; j++) {
        if (j % 2 == 0)               /* P2a */
            prefetch(b[i][2*j+8]);    /* P2b */
        sum2 += b[i][2*j];            /* 5 */
    }
}
```
```
Grading: 1.5 points for inserting the correct if statement in the first loop
1.5 points for inserting the correct if statement in the second loop.
Problem 4 [14 points]
Way prediction allows an associative cache to provide the hit time of a direct-mapped cache. The MIPS R10000 processor used way prediction to achieve a different goal: reduce the cost of the chip package. The R10000 hardware includes an on-chip L1 cache, on-chip L2 tag comparison circuitry, and an on-chip L2 way prediction table. L2 tag information is brought on chip to detect an L2 hit or miss. The way prediction table contains 8K 1-bit entries, each corresponding to two L2 cache blocks. L2 cache storage is built external to the processor package, is 2-way associative, and may have one of several block sizes.
Part A [2 points]
How can way prediction reduce the number of pins needed on the R10000 package to read L2 tags and data, and what is the impact on performance compared to a package with a full complement of pins to interface to the L2 cache?
Solution: With way prediction, only one way is read at a time rather than reading and comparing both. The package only needs enough pins to read the tag and data from a single line in a cycle instead of two, plus one extra bit to select the way. (Assuming the cache is just simple memory). A cache access takes an extra cycle whenever the way prediction is incorrect, and on every cache miss, which will slow performance.
Grading: 2 points for observing that we have to access both ways simultaneously if we don’t have way prediction.
Part B [2 points]
How could a 2-associative cache be implemented with the same smaller number of pins but without the way prediction table? What is the performance drawback?
Solution: Without way prediction, the processor will access the ways sequentially. This will incur a delay whenever the data is found in the second way, and prediction would have been accurate.
Grading: 1 point for suggesting sequential access.
1 point for describing the performance loss.
Part C [4 points]
Assume that the R10000 uses most-recently used way prediction. What are reasonable design choices for the cache state update(s) to make when the desired data is in the predicted way, the desired data is in the non-predicted way, and the desired data is not in the L2 cache? Please fill in your answers in the following table.
| Cache Access Case | Change to Way Prediction Entry |
| --- | --- |
| Desired data is in the predicted way | No change |
| Desired data is in the non-predicted way | Flip the entry to the way used by this access |
| Desired data is not in the L2 cache | Set the entry to the location of the newly filled data (the entry flips if the fill overwrites the least recently used way) |
**Grading:** 2 points per entry.
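To make the update policy in the table concrete, here is a small, illustrative C sketch (not from the original solution) of a 1-bit-per-set MRU predictor for a 2-way cache; the names and structure are assumptions:

```c
#include <stdint.h>

#define SETS 8192                       /* 8K one-bit entries, as in the R10000 */
static uint8_t predicted_way[SETS];     /* per set: which of the 2 ways is MRU */

/* Called after every L2 access; way_used is the way that hit, or the way the
   miss was filled into. This covers the three rows of the table above:
   - hit in the predicted way:     the store leaves the entry unchanged,
   - hit in the non-predicted way: the entry flips to the way that hit,
   - miss:                         the entry points at the newly filled way. */
void update_prediction(int set, int way_used) {
    predicted_way[set] = (uint8_t)way_used;
}
```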
Part D [2 points]
For a 1024 KB L2 cache with 64-byte blocks and 8-way set associativity, how would the prediction table be organized for this new size? Give your answer in the form of “X entries by Y bits per entry.”
Solution: \[
\frac{1024 \text{ KB}}{64 \text{ B/line}} = 16,384 \text{ lines, } \frac{16,384 \text{ lines}}{8 \text{ ways/set}} = 2,048 \text{ sets}
\]
For 8-way associativity, \(\log_2 8 = 3\) bits are needed.
So need 2,048 (2K) entries by 3 bits per entry
Grading: 1 point for number of entries (sets).
1 point for entry width (bits).
Part E [2 points]
For an 8 MB L2 cache with 128-byte blocks and 2-way set associativity, what would the prediction table organization be? Again, give your answer as “X entries by Y bits per entry.”
Solution: \[
\frac{8 \text{ MB}}{128 \text{ B/line}} = 65,536 \text{ lines, } \frac{65,536 \text{ lines}}{2 \text{ ways/set}} = 32,768 \text{ sets}
\]
2-way associativity, \(\log_2 2 = 1\) bit is needed.
So need 32,768 (32K) entries by 1 bit per entry.
Grading: 1 point for number of entries (sets).
1 point for entry width (bits).
Part F [2 points]
What is the difference in the way that the R10000 with only 8K way prediction table entries will support the cache in part d) versus the cache in part e)? Hint: Think about the similarity between a way prediction table and a branch prediction table.
Solution: The 8K 1-bit entries (8 Kbits of predictor state) are enough for the cache in part D, which needs 2K entries of 3 bits each (6 Kbits), provided the bits can be regrouped; alternatively the same storage could be treated as a smaller number of more sophisticated predictors. For the cache in part E, several sets will have to share a predictor entry, using some mapping from 32K sets onto 8K entries, much like aliasing in a branch prediction table. For best results, the mapping should try to ensure that sets used at similar times map to different predictor entries, for example by simply dropping some of the high index bits.
Grading: 1 point for saying there are enough entries for part D.
1 point for suggesting ways to share predictors in part E.
Problem 5 [8 points]
Consider a computer with an in-order CPU, and with a data cache block size of 64 bytes (16 words) and a 32-bit wide bus to the memory. The memory takes 10 cycles to supply the first word and 2 cycles per word to supply the rest of the block. The cache is non-blocking, and it can support any number of outstanding misses. The memory can service multiple requests simultaneously if required (techniques to achieve this will be discussed in class).
This cache and memory system implement a “Requested Word First and Early Restart” policy, and the bus delivers the block data in “cyclic order” starting with the requested word. Cyclic order means that if the requested word is the 5th in a block of size 16 words, then the order in which the words in the block are supplied is 5, 6, 7 … 16, 1, 2, 3, 4.
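A quick sketch of the cyclic delivery order described above, using the same 1-based word numbering as the example (illustrative only, not part of the assignment):

```c
#include <stdio.h>

#define WORDS_PER_BLOCK 16

/* Print the order in which a 16-word block is delivered under the
 * "requested word first" policy with cyclic ordering.
 * `requested` is the 1-based position of the requested word. */
static void cyclic_order(int requested)
{
    for (int k = 0; k < WORDS_PER_BLOCK; k++)
        printf("%d ", (requested - 1 + k) % WORDS_PER_BLOCK + 1);
    printf("\n");
}

int main(void)
{
    cyclic_order(5);   /* prints: 5 6 7 8 9 10 11 12 13 14 15 16 1 2 3 4 */
    return 0;
}
```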
Part A [3 points]
Consider the following code fragment, which operates on an integer array A which is block-aligned (that is A[0] is located at the start of a cache block in memory):
```c
for (i = 11; i < 100; i += 16) {   /* 1 */
    A[i] *= 2;                     /* 2 */
}
```
Suppose that the cache is big enough so that there are only compulsory misses. Further, statement 1 takes 4 cycles to execute, and statement 2 takes 4 cycles to execute in addition to any miss latency. Assume no overlap in the execution of these statements. Initially, the array A is not present in the cache, so any initial accesses to A cause misses in the cache.
What is the running time of this loop with the “Requested Word First and Early Restart” policy?
**Solution:** A[i] is supplied after 10 cycles.
Line 1 executes 7 times: \(7 \times 4 = 28\)
Line 2 executes 6 times: \(6 \times (10 + 4) = 84\)
**Total:** 112 cycles
**Grading:** 1 point for line 1.
2 points for line 2.
Part B [3 points]
How many cycles would the above loop take to run in a system with just “Early Restart” (i.e. the block is fetched in normal order, but the program is started early at arrival of requested word).
**Solution:**
A[i] is at word offset 11 within its block (C arrays are zero-indexed), so 11 words of the block are delivered before it.
So A[i] is supplied after $10 + 2 \times 11 = 32$ cycles.
Line 1 executes 7 times: $7 \times 4 = 28$
Line 2 executes 6 times: $6 \times (32 + 4) = 216$
Total: 244 cycles
**Grading:**
2 points for data supply cycles.
1 point for statement execution time.
Part C [2 points]
How many cycles would the above loop take to run in a system with the base policy (i.e. normal fetch and restart)?
**Solution:**
The entire block, including A[i], is supplied after $10 + 2 \times 15 = 40$ cycles.
Line 1 executes 7 times: $7 \times 4 = 28$
Line 2 executes 6 times: $6 \times (40 + 4) = 264$
Total: 292 cycles
**Grading:**
1 point for data supply cycles.
1 point for statement execution time.
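The three running times above differ only in how long statement 2 waits for A[i]; a short sketch that reproduces the 112-, 244-, and 292-cycle results (variable names are illustrative):

```c
#include <stdio.h>

/* The loop touches A[11], A[27], ..., A[91]: six accesses, each a
 * compulsory miss at word offset 11 of its block.  Statement 1 (the
 * loop header) runs 7 times at 4 cycles; statement 2 runs 6 times at
 * 4 cycles plus the wait for A[i], which depends on the policy. */
int main(void)
{
    const int first_word = 10, per_word = 2, offset = 11, last = 15;
    const int stmt_cycles = 4, iters = 6, header_runs = 7;

    int wait_rwf   = first_word;                      /* requested word first */
    int wait_early = first_word + per_word * offset;  /* early restart        */
    int wait_base  = first_word + per_word * last;    /* whole block first    */

    int waits[3] = { wait_rwf, wait_early, wait_base };
    const char *names[3] = { "requested word first", "early restart", "base" };

    for (int p = 0; p < 3; p++) {
        int total = header_runs * stmt_cycles
                  + iters * (waits[p] + stmt_cycles);
        printf("%-22s: %d cycles\n", names[p], total);  /* 112, 244, 292 */
    }
    return 0;
}
```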
Reacting and Adapting to the Environment
Designing Autonomous Methods
for Multi-Objective Combinatorial Optimisation
Aymeric Blot
Supervisor: Laetitia Jourdan
Co-advisor: Marie-Éléonore Kessaci
ORKAD team, CRIStAL, Université de Lille
PhD defence – September 21, 2018
Contents
▶ Introduction
▶ Context
▶ Multi-Objective Local Search
▶ Automatic Design
▶ Wrap-up
Thesis
Reacting and Adapting to the Environment
Designing Autonomous Methods
for Multi-Objective Combinatorial Optimisation
Topic Automatic algorithm design
Context Multi-objective combinatorial optimisation
Use Case Multi-objective local search algorithms
Travelling Salesman Problem
Input Set of \( n \) cities, travel costs
Solutions Hamiltonian paths (permutations)
Quality Total cost (e.g., distance, time, money)
**Thesis**
**Reacting and Adapting to the Environment**
Designing Autonomous Methods for Multi-Objective Combinatorial Optimisation
**Environment**
<table>
<thead>
<tr>
<th>Problem</th>
<th>Circuit board drilling? Order-picking? Vehicle routing?</th>
</tr>
</thead>
<tbody>
<tr>
<td>Search</td>
<td>Easy to improve? Stuck in local optima?</td>
</tr>
</tbody>
</table>
**Permutation Flowshop Scheduling Problem**
**Input** Set of $n$ jobs, processing times on $m$ machines
**Solutions** Jobs schedules (permutations)
**Quality** Various, e.g.:
- Makespan (max of completion times)
- Flowtime (sum of completion times)
[Figure: machines \(M_1, M_2, \ldots, M_m\)]
**Ambitions**
Automatically, in a multi-objective context:
- Design algorithm variants for specific problem characteristics
- Benefit from many existing strategies
- Avoid relying on expert knowledge
**Roadmap**
**Reacting and Adapting to the Environment**
Designing Autonomous Methods for Multi-Objective Combinatorial Optimisation
**Topic** Automatic algorithm design
**Context** Multi-objective combinatorial optimisation
**Use Case** Multi-objective local search algorithms
**Automatic Algorithm Design**
**Algorithm Performance**
- Differs with the problem
- Differs with the instance
- Depends on explicit or hidden design choices
**Ideas**
- Select from a set of existing algorithms
- Tune a specific algorithm
- Generate new algorithms
**AAD: Taxonomy Proposition**
**Algorithmic viewpoint**
- Parameters
- Components
- Algorithms
**Temporal viewpoint**
- Problem features
- Tuning
- Configuration
- Mapping
- A priori features
- Setting
- Selection
- Search features
- Control
- Scheduling
**AAD: Investigated Fields**
**Roadmap**
**Reacting and Adapting to the Environment**
Designing Autonomous Methods for Multi-Objective Combinatorial Optimisation
**Topic** Automatic algorithm design
**Context** Multi-objective combinatorial optimisation
**Use Case** Multi-objective local search algorithms
Multi-Objective Optimisation
Bi-objective minimisation
- Dominated solutions
- (Optimal) archive Pareto (optimal) set
Performance Assessment
Hypervolume (1-HV)
- Spread
Roadmap
Reacting and Adapting to the Environment
Designing Autonomous Methods for Multi-Objective Combinatorial Optimisation
Topic: Automatic algorithm design
Context: Multi-objective combinatorial optimisation
Use Case: Multi-objective local search algorithms
Questions:
- General structure?
- Possible strategies?
- Efficiency?
Local Search Algorithms
“Similar solutions have similar quality”
Trajectory
- Identify neighbours
- Move the current solution
- Iterate
Multi-Objective Local Search Algorithms
Selected History
- Single trajectory
- MOSA [Serafini, 1994]
- TPLS [Paquete et al., 2003]
- Multiple trajectories
- PSA [Czyzak et al., 1996]
- MOTS [Hansen, 1997]
- Archive
- PAES [Knowles et al., 1999]
- PLS [Paquete et al., 2004]
MOLS Generalisation
Components
- Initialisation
- Selection
- Exploration
- Archive
- Stopping condition
- Perturbation
Selected MOLS Parameters
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Type</th>
<th>Parameter values</th>
</tr>
</thead>
<tbody>
<tr>
<td>initStrat</td>
<td>category</td>
<td>{...}</td>
</tr>
<tr>
<td>selectStrat</td>
<td>category</td>
<td>{all, rand, newest, oldest}</td>
</tr>
<tr>
<td>selectSize</td>
<td>integer</td>
<td>N*</td>
</tr>
<tr>
<td>explorStrat</td>
<td>category</td>
<td>{all, imp, ndom, ...}</td>
</tr>
<tr>
<td>explorRef</td>
<td>category</td>
<td>{pick, arch}</td>
</tr>
<tr>
<td>explorSize</td>
<td>integer</td>
<td>N*</td>
</tr>
<tr>
<td>archiveStrat</td>
<td>category</td>
<td>{bounded, unbounded, ...}</td>
</tr>
<tr>
<td>archiveSize</td>
<td>integer</td>
<td>N*</td>
</tr>
<tr>
<td>iterationLength</td>
<td>integer</td>
<td>N*</td>
</tr>
<tr>
<td>perturbStrat</td>
<td>category</td>
<td>{restart, kick, ...}</td>
</tr>
<tr>
<td>perturbSize</td>
<td>integer</td>
<td>N*</td>
</tr>
<tr>
<td>perturbStrength</td>
<td>integer</td>
<td>N*</td>
</tr>
</tbody>
</table>
Parameter Distribution Analysis
How efficient are the generated MOLS?
- Protocol
- 300 MOLS configurations
- 3 PFSP + 3 TSP scenarios
- 10 runs per instance
- Average $(1 - HV, \Delta')$
- Scenarios
- PFSP (10 instances)
- 50 jobs, 20 machines
- 100 jobs, 20 machines
- 200 jobs, 20 machines
- TSP (15 instances)
- 100 cities
- 300 cities
- 500 cities
Results: Parameter Distribution Analysis
Analysis
Conclusions
- Generated MOLS can be very efficient
- Parameters values are meaningful
Next Step
- Automatically design efficient MOLS algorithms
The configuration space is structured!
Knowledge can be extracted!
Expert knowledge is limited
Roadmap
Reacting and Adapting to the Environment
Designing Autonomous Methods for Multi-Objective Combinatorial Optimisation
Topic Automatic algorithm design
Context Multi-objective combinatorial optimisation
Use Case Multi-objective local search algorithms
Questions:
- How to automatically design efficient MOLS?
- Is it possible to beat expert knowledge?
- How to improve adaptability?
Algorithm Configurators
Automatic Algorithm Configuration
Goal Optimise performance over a given distribution of instances
Mean Optimisation, machine learning
Twist Data is unreliable and very expensive
Single-Objective Configuration
- irace [López-Ibáñez et al., 2016]
- ParamILS [Hutter et al., 2009]
- SMAC [Hutter et al., 2010]
- GGA++ [Ansótegui et al., 2015]
Multi-Objective Configuration
- SPRINT-Race [Zhang et al., 2015]
- MO-ParamILS [Blot et al., 2016]
MO-ParamILS
- Extension of ParamILS for multiple performance indicators
- Iterated MOLS on the configuration space
- Outputs a Pareto set of configurations
Configuration Protocol
How to ensure efficient predictions?
3 Phases
- Training
- On training instances
- Multiple times (e.g., ×20)
- Validation
- All final configurations
- Test
- Non-dominated configurations
- On test instances
[Animated figure: \(1 - HV\) vs. \(\Delta'\)]
Automatic Configuration
How efficient is our multi-objective approach?
Configurators
- ParamILS
- Single-objective
- \((1 - HV)\)
- ParamILS
- Single-objective
- \(\frac{3}{4} (1 - HV) + \frac{1}{4} \Delta'\)
- MO-ParamILS
- Multi-objective
- \((1 - HV), \Delta'\) simultaneously
Protocol
- Few configurations
- 10×100 runs / 300 MOLS
- 3 PFSP + 3 TSP scenarios
- More configurations
- 20×1000 runs / 10920 MOLS
- 3 PFSP + 3 TSP scenarios
- Crafted instances
- 20×1000 runs / 10920 MOLS
- 3 PFSP + 3 TSP scenarios
Results: Automatic Configuration
“Exhaustive” analysis: x (300 configurations)
Configurator: ○ ParamILS △ ParamILS(0.75,0.25) □ MO-ParamILS
MO-ParamILS: excellent spread, no loss of convergence
Analysis
Conclusions
- MO-ParamILS allows much better context
- Configuration of MO algorithms is a MO problem
- Problem: predicts single configurations
Next Steps
- Scheduling
- Sequence multiple strategies
- Control
- Interweave multiple predictions
- Delay predictions
How to better fit the algorithm to the search?
Configuration Scheduling
- Performance may vary during the search
- Real-time decisions are difficult
- Static schedules can be optimised offline
Experiments
How efficient are configuration schedules?
Protocol
- $K = 1$ ($k = 1$): exhaustive analysis, single configurations; 60 configurations = 60 schedules
- $K = 2$ ($k \in \{1, 2\}$): automatic configuration, up to two configurations; 20×1000 runs / 10860 schedules
- $K = 3$ ($k \in \{1, 2, 3\}$): automatic configuration, up to three configurations; 20×10000 runs / 658860 schedules
Selected $K = 3$ Configuration Schedules
- (T/3, T/3, T/3) timed
- (T/4, T/4, T/2) timed
- (T/2, T/4, T/4) timed
- (T/2, T/2) timed
- (T/4, 3T/4) timed
- (3T/4, T/4) timed
- (T) timed
\[3 \times 60^3 + 3 \times 60^2 + 60 = 658,860 \text{ schedules}\]
Results: Configuration Scheduling
$K = k = 1$ exhaustive analysis (PFSP: 50 jobs, 20 machines)
[Figure: results plotted by \(1 - HV\) and \(\Delta'\); K = k = 1 Pareto dominated]
Better balanced algorithms!
Analysis
Conclusions
- $k = 1$ schedules are limited
- Schedules can be optimised offline
- Combinatorial explosion
Offline Adaptation
- Schedules are still predicted
- No real-time decisions
Control
<table>
<thead>
<tr>
<th>Offline Design</th>
<th>Online Design</th>
</tr>
</thead>
<tbody>
<tr>
<td>▶ Prediction based</td>
<td>▶ Adaptation based</td>
</tr>
<tr>
<td>▶ Instance classes / distributions</td>
<td>▶ Single current instance</td>
</tr>
<tr>
<td>▶ Computationally expensive</td>
<td>▶ Slight overhead</td>
</tr>
</tbody>
</table>
Motivations
▶ Use control as an extension of offline learning
▶ Take advantage of multiple strategies during the run
▶ Delay the final prediction
Control Mechanisms
Generic Parameter Control
▶ Random
▶ Probability based
▶ Multi-armed bandits
▶ Reinforcement learning
[Karafotias et al., 2015]
Experiments
Can efficient strategies be determined online?
Protocol
▶ 2 simple control mechanisms
▶ 12 PFSP scenarios
▶ 200 runs per scenario
Strategies
▶ 3 arms (imp, imp-ndom, ndom)
▶ 2 arms (imp-ndom, ndom)
▶ 3 → 2 arms
Simple Control Mechanisms
▶ Uniform random: \( p_i(t + 1) = \frac{1}{N} \)
▶ \( \varepsilon \)-greedy: \( p_i(t + 1) = \begin{cases} (1 - \varepsilon) + \frac{\varepsilon}{N}, & \text{if } i = \arg \max_j q_j(t) \\ \varepsilon / N, & \text{otherwise} \end{cases} \)
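A minimal sketch of these two mechanisms as arm-selection routines (the quality estimates `q[]`, the helper `rand01()`, and integer arm indices are assumptions for illustration; choosing the best arm with probability 1 − ε and uniformly otherwise yields exactly the probabilities above):

```c
#include <stdlib.h>

/* Uniform double in [0, 1), used by both selection rules below. */
static double rand01(void) { return (double)rand() / ((double)RAND_MAX + 1.0); }

/* Uniform random: every arm has probability 1/N. */
static int uniform_random_arm(int n)
{
    return (int)(rand01() * n);
}

/* Epsilon-greedy: with probability eps explore uniformly, otherwise
 * exploit the arm with the highest current quality estimate q[i]. */
static int epsilon_greedy_arm(const double *q, int n, double eps)
{
    if (rand01() < eps)
        return (int)(rand01() * n);
    int best = 0;
    for (int i = 1; i < n; i++)
        if (q[i] > q[best])
            best = i;
    return best;
}
```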
### Results: 3-arm Ranking
Wilcoxon signed-rank tests, Friedman post-hoc analysis
<table>
<thead>
<tr>
<th>Approach</th>
<th>Instance ($n, m$)</th>
<th>Avg.</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>20</td>
<td>50</td>
</tr>
<tr>
<td></td>
<td>5</td>
<td>10</td>
</tr>
<tr>
<td>imp</td>
<td>5 5 5 5 5 5 5 5 5 5 5 5</td>
<td>5</td>
</tr>
<tr>
<td>imp-ndom</td>
<td>4 4 3 4 4 4 4 1 2 1 2 1</td>
<td>2.8</td>
</tr>
<tr>
<td>ndom</td>
<td>1 1 3 1 1 1 1 1 1 1 1 1</td>
<td>1.2</td>
</tr>
<tr>
<td>rand_3</td>
<td>1 1 1 1 1 1 1 1 2 3 3 3</td>
<td>1.6</td>
</tr>
<tr>
<td>greedy_3</td>
<td>1 1 1 1 1 1 1 1 2 3 3 3</td>
<td>1.6</td>
</tr>
</tbody>
</table>
Control fails on larger instances
### Results: 2-arm Ranking
Wilcoxon signed-rank tests, Friedman post-hoc analysis
<table>
<thead>
<tr>
<th>Approach</th>
<th>Instance ($n, m$)</th>
<th>Avg.</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>20</td>
<td>50</td>
</tr>
<tr>
<td></td>
<td>5</td>
<td>10</td>
</tr>
<tr>
<td>imp-ndom</td>
<td>4 4 3 4 4 4 4 4 4 4 4 1</td>
<td>3.7</td>
</tr>
<tr>
<td>ndom</td>
<td>1 1 3 1 1 1 1 1 1 1 1 1</td>
<td>1.2</td>
</tr>
<tr>
<td>rand_2</td>
<td>1 1 1 1 1 1 1 1 1 1 1 1</td>
<td>1.1</td>
</tr>
<tr>
<td>greedy_2</td>
<td>1 1 1 1 1 1 1 1 1 1 1 1</td>
<td>1.1</td>
</tr>
</tbody>
</table>
imp was the culprit
### Results: Long Term Learning Ranking
Wilcoxon signed-rank tests, Friedman post-hoc analysis
<table>
<thead>
<tr>
<th>Approach</th>
<th>Instance ($n, m$)</th>
<th>Avg.</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>20</td>
<td>50</td>
</tr>
<tr>
<td></td>
<td>5</td>
<td>10</td>
</tr>
<tr>
<td>rand_3</td>
<td>4 4 2 4 4 4 4 4 4 4 4 3</td>
<td>3.8</td>
</tr>
<tr>
<td>rand_ltl_50</td>
<td>3 1 2 1 1 1 3 3 3 2 3 3</td>
<td>2.2</td>
</tr>
<tr>
<td>rand_ltl_20</td>
<td>1 1 2 1 1 1 1 1 1 2 2 2</td>
<td>1.3</td>
</tr>
<tr>
<td>rand_2</td>
<td>1 1 1 1 1 1 1 1 1 1 1 1</td>
<td>1</td>
</tr>
<tr>
<td>greedy_3</td>
<td>1 1 1 1 4 4 4 4 4 4 4 3</td>
<td>2.9</td>
</tr>
<tr>
<td>greedy_ltl_50</td>
<td>1 1 1 1 1 1 3 3 3 3 2 3</td>
<td>1.9</td>
</tr>
<tr>
<td>greedy_ltl_20</td>
<td>1 1 1 1 3 1 1 1 1 2 2 2</td>
<td>1.3</td>
</tr>
<tr>
<td>greedy_2</td>
<td>1 1 1 1 1 1 1 1 1 1 1 1</td>
<td>1</td>
</tr>
</tbody>
</table>
Ineffective arms should be automatically removed
### General Contributions and Conclusions
**Automatic Algorithm Design**
- Taxonomy proposition
- Multi-objective configuration, MO-ParamILS
- MO algorithms are better optimised using a MO configurator
- Configuration scheduling
- Better balanced algorithms can be predicted
- Control as extension of automatic configuration
- Some design choices can be postponed to the search itself
**Multi-objective Optimisation**
- Wider generalisation of MOLS algorithms
- Automatic design of multi-objective algorithms
Short-Term Perspectives
Automatic design
▶ Extension to other algorithms
▶ Other multi-objective configurators
▶ Robustness in configurators
Automatic configuration
▶ Validation on other types of problems
Configuration scheduling
▶ Guided experimentation protocol
▶ More semantic representation
Online mechanisms
▶ More strategies, more complex mechanisms
Long-Term Perspectives
Anytime Behaviour of Algorithms
Insight Other applications of multi-objective algorithm design
Example Quality/running time trade-off
Ideas ▶ Designing for multiple running times
▶ Area-under-the-curve as fitness
▶ Configuration scheduling
Artificial Configuration Spaces
Insight Automatic configuration extremely time-expensive
Problem So is developing/improving/comparing configurators
Ideas ▶ Semantic parameter analysis
▶ Zero-cost configuration spaces
Publications I
Blot, Hoos, Jourdan, Kessaci-Marmion, and Trautmann – LION 2016
MO-ParamILS: A Multi-objective Automatic Algorithm Configuration Framework
Blot, Pernet, Jourdan, Kessaci-Marmion, and Hoos – EMO 2017
Automatically Configuring Multi-objective Local Search Using Multi-objective Optimisation
Blot, Kessaci-Marmion, and Jourdan – MIC 2017
AMH: a new Framework to Design Adaptive Metaheuristics
Blot, Kessaci-Marmion, and Jourdan – GECCO 2017
Automatic design of multi-objective local search algorithms: case study on a bi-objective permutation flowshop scheduling problem
Publications II
Blot, Kessaci, Jourdan, and de Causmaecker – LION 2018
Adaptive Multi-Objective Local Search Algorithms for the Permutation Flowshop Scheduling Problem
Blot, López-Ibáñez, Kessaci, and Jourdan – PPSN 2018
Archive-aware Scalarisation-based Multi-Objective Local Search for a Bi-objective Permutation Flowshop Problem
Blot, Hoos, Kessaci, and Jourdan – ICTAI 2018
Automatic Configuration of Multi-objective Optimization Algorithms. Impact of Correlation between Objectives
Blot, Kessaci, and Jourdan – Journal of Heuristics, 2018
Survey and Unification of Local Search Techniques in Metaheuristics for Multi-objective Combinatorial Optimisation
Efficient Entity Embedding Construction from Type Knowledge for BERT
Yukun Feng¹, Amir Fayazi², Abhinav Rastogi², Manabu Okumura¹
¹Tokyo Institute of Technology
²Google Research
{yukun,oku}@lr.pi.titech.ac.jp
{amiraf,abhirast}@google.com
Abstract
Recent work has shown advantages of incorporating knowledge graphs (KGs) into BERT (Devlin et al., 2019) for various NLP tasks. One common way is to feed entity embeddings as an additional input during pre-training. There are two limitations to such a method. First, to train the entity embeddings to include rich information of factual knowledge, it typically requires access to the entire KG. This is challenging for KGs with daily changes (e.g., Wikidata). Second, it requires a large scale pre-training corpus with entity annotations and high computational cost during pre-training. In this work, we efficiently construct entity embeddings only from the type knowledge, that does not require access to the entire KG. Although the entity embeddings contain only local information, they perform very well when combined with context. Furthermore, we show that our entity embeddings, constructed from BERT’s input embeddings, can be directly incorporated into the fine-tuning phase without requiring any specialized pre-training. In addition, these entity embeddings can also be constructed on the fly without requiring a large memory footprint to store them. Finally, we propose task-specific models that incorporate our entity embeddings for entity linking, entity typing, and relation classification. Experiments show that our models have comparable or superior performance to existing models while being more resource efficient.
1 Introduction
Many studies have attempted to enhance pre-trained language models with knowledge such as ERNIE (Zhang et al., 2019), KnowBert (Peters et al., 2019), K-ADAPTER (Wang et al., 2020), E-BERT (Poerner et al., 2020), and KEPLER (Wang et al., 2021). Among them, ERNIE, KnowBert, E-BERT, and KEPLER are typical work that do so by incorporating entity embeddings. The entity embeddings are usually trained by methods that model the global graph structure, such as TransE (Bordes et al., 2013a) used in ERNIE and TuckER (Balažević et al., 2019) used in KnowBert. These entity-incorporated pre-trained language models have shown to be powerful on various natural language processing (NLP) tasks, such as entity linking, entity typing, and relation classification.
In this paper, we investigate whether we can construct entity embeddings by considering only local entity features. This is motivated by the observation that the context itself usually provides good information for the right answer. A number of examples are shown in Table 1. Instead of heavily relying on entity embeddings that encode global information, we simply tell the model what these entities are by using local features to help the model infer the answer from the context more easily. For example, if we can know ‘Cartí Sugtupu’ is a place in the relation classification example in Table 1, the task may be easier. To utilize such information for an entity, we select entity-type knowledge from Wikidata as a local feature for the entity. Specifically, we propose to encode the labels of neighboring nodes of the entity connected through instance_of edges in Wikidata. Figure 1 shows an example. These labels can informatively tell the entity type and are usually short, which enables them to be efficiently encoded by simple methods, that we mention later.
One big advantage of utilizing only local features of entities is that we can update our entity embeddings very fast once the knowledge graph (KG) is changed, which is a desirable feature for KGs with rapid updates. We can construct the entity embeddings even on the fly to significantly reduce memory consumption and parameters since a number of tasks (e.g., entity linking) easily involve millions of entities. A disadvantage is that it is hard to infer the answer if large amounts of information are missing. For example, the LAnguage Model Analysis (LAMA) task (Petroni et al., 2019) requires a [MASK] placeholder in the given sentence “Sullivan was born in Chippewa Falls, Wisconsin in [MASK]” to be filled. The type knowledge may not be able to answer this question. Thus, we do not focus on such tasks. Instead, we apply our method on several typical entity-focused tasks, which were also chosen by related work.
To construct the entity embeddings, we simply average BERT’s WordPiece embeddings from the type label of the entity as there are only 2.8 or 2.96 WordPiece tokens on average per label depending on our tasks. Thus, our method is very fast and can be used to construct the entity embeddings on the fly without much cost to save memory and reduce parameters. For example, E-BERT requires six hours to train its entity embeddings, while our method takes only about 1 minute to prepare the entity embeddings for our downstream tasks. The trained entity embeddings of E-BERT take up around 30GB in size. Thus, storing these embeddings requires a large memory footprint, and the size continues to grow linearly if new entities are added. However, our method does not require such extra space for entity embeddings.
For incorporation, previous work incorporates their entity embeddings during both fine-tuning and pre-training (ERNIE and KnowBert). However, pre-training language models is a cumbersome and resource-intensive task. We show simply incorporating our entity embeddings during fine-tuning without any pre-training works well. One reason may be that these entity embeddings are directly constructed through averaging BERT’s WordPiece embeddings, so that they look like BERT’s WordPiece embeddings, which may be helpful for incorporation for BERT.
Finally, we propose task-specific models to incorporate our entity embeddings. For entity linking, we propose a model that incorporates entity embeddings into the output; for entity typing and relation classification, the proposed model incorporates entity embeddings into the input. We show that our entity embeddings and incorporation method are simple and can achieve comparable or superior performance to existing methods on entity linking, entity typing, and relation classification. The contribution of this work can be summarized as follows:
- We propose an efficient method to construct entity embeddings that are particularly a good fit for BERT, and they work well without any pre-training step during incorporation.
- Our entity embeddings can be constructed on the fly for BERT. We do not need a large memory footprint to store entity embeddings, which is often required by other work.
- We propose task-specific models to incorporate our entity embeddings for entity linking, entity typing and relation classification.
2 Related Work
ERNIE (Zhang et al., 2019), KnowBert (Peters et al., 2019), E-BERT (Poerner et al., 2020), and our model are all based on Google BERT and aim to incorporate entity embeddings into them. The main differences between the models are the methods for constructing entity embeddings and incorporating them.
For entity embeddings, ERNIE uses the one trained on Wikidata by TransE (Bordes et al., 2013b). KnowBert uses TuckER (Balazevic et al., 2019) embeddings, and E-BERT incorporates Wikipedia2Vec entity embeddings (Yamada et al., 2016). These entity embeddings were trained with consideration for a KG structure and have to be trained again if new updates need to be incorporated from KGs, which further requires additional pre-training of ERNIE and KnowBert. When only local features are used to construct the entity embeddings, the aforementioned issues can be avoided. In addition, our entity embeddings are simply obtained by averaging BERT WordPiece embeddings and can be constructed on the fly to save a large memory footprint usually required by
other work. We found that although our entity embeddings contain only local information, they perform well when combined with context. However, ERNIE, KnowBert or E-BERT are supposed to work better than ours where large amounts of information are missing, such as in the LAMA task.

Figure 1: An example of connected entity nodes from Wikidata. The circles are entity nodes with blue texts as their labels. We encode the labels of the neighboring nodes of “baltimore” connected through instance_of edges to construct its entity embedding.
For the incorporation, ERNIE and KnowBert both use new encoder layers to feed the entity embeddings, which requires pre-training. In contrast, E-BERT achieves comparable results without pre-training by directly incorporating its entity embeddings into the standard BERT model during task-specific fine-tuning. One proposal from E-BERT is to align the entity and BERT WordPiece embeddings in the same space. To do so, it first trains word and entity embeddings jointly and then learns a linear mapping from word to BERT WordPiece embeddings. The final entity embeddings can be obtained by applying this learned linear mapping so that they look like BERT WordPiece embeddings. This mapping helps improve 4.4 micro F1 score on the test data on entity linking task. To learn this mapping, E-BERT needs to train both word and entity embeddings, which are 30GB in size. Our method for constructing entity embeddings shares the similar spirit, but it is an averaging method from BERT WordPiece embeddings.
K-ADAPTER (Wang et al., 2020) and KEPLER (Wang et al., 2021) are both trained using multi-task learning based on RoBERTa (Liu et al., 2019) in relation classification and knowledge base completion and do not rely on entity embeddings.
Outside the area of incorporating entity embedding into pretrained language model, there are a number of work that propose to use entity types from KGs on various tasks. For example, on entity linking task, some work use entity types together with entity descriptions or entity embedding trained over whole KG (Francis-Landau et al., 2016; Gupta et al., 2017; Gillick et al., 2019; Hou et al., 2020; Tianran et al., 2021). Some work use only entity types on entity linking task (Sun et al., 2015; Le and Titov, 2019; Raiman, 2022). Khosla and Rose (2020) use entity type embeddings for coreference resolution. The main difference between our work with them is that we mainly design our method for constructing entity embedding and our incorporation method for BERT. As introduced before, we simply create entity embeddings from the BERT’s internal WordPiece embeddings. When incorporating our entity embeddings into BERT, we also propose a model that makes use of BERT’s position embeddings on entity typing and relation classification task (mentioned in Sec. 5.2).
3 Entity Embedding Construction
We take the labels of the neighboring nodes for an entity obtained from Wikidata as local features. Since these labels are usually very short, as shown in Figure 1, we can efficiently obtain label embeddings by averaging WordPiece embeddings in the label. The final entity embeddings are computed as follows:
$$e = \frac{1}{M} \sum_{i=1}^{M} \frac{1}{N_i} \sum_{j=1}^{N_i} m_{ij}, \quad (1)$$
where $M$ and $N_i$ are the number of labels and that of WordPiece tokens in the $i$-th label, respectively. Please note that $M$ and $N_i$ are small in our relation classification task (1.27 and 2.96, respectively, on average). Finally, the generated entity embeddings are updated in the task-specific fine-tuning.
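A small sketch of the averaging in Eq. (1) in plain C (the function signature and the flat, row-major layout of the label token embeddings are assumptions for illustration; the paper operates directly on BERT's WordPiece embedding matrix):

```c
#include <stddef.h>

/* Eq. (1): average the WordPiece embeddings of each type label, then
 * average the resulting label vectors.
 * label_tok_emb[i] points to n_tok[i] contiguous d-dimensional
 * WordPiece embedding vectors for the i-th type label. */
void entity_embedding(const float *const *label_tok_emb,
                      const size_t *n_tok, size_t n_labels,
                      size_t d, float *out /* length d */)
{
    for (size_t k = 0; k < d; k++)
        out[k] = 0.0f;

    for (size_t i = 0; i < n_labels; i++) {
        for (size_t k = 0; k < d; k++) {
            float label_avg = 0.0f;            /* inner average over tokens */
            for (size_t j = 0; j < n_tok[i]; j++)
                label_avg += label_tok_emb[i][j * d + k];
            out[k] += label_avg / (float)n_tok[i];
        }
    }
    for (size_t k = 0; k < d; k++)
        out[k] /= (float)n_labels;             /* outer average over labels */
}
```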
4 Entity Linking
4.1 Task Description
Entity linking (EL) is the task of recognizing named entities and linking them to a knowledge base. In this paper, we focus on an end-to-end EL system that includes detecting the entities and then disambiguating them to the correct entity IDs.
4.2 Dataset
We use the AIDA dataset (Hoffart et al., 2011), which was also chosen in related works. The gold named entities in AIDA and the spans found by KnowBert’s generator are identified with Wikipedia URLs. For this reason, we have to convert them to Wikidata IDs to determine the type knowledge of an annotated entity, and a number of them are missing during conversion. The statistics of AIDA, the entities found by the generator, and the conversion rates are shown in Table 2.
<table>
<thead>
<tr>
<th></th>
<th>Train</th>
<th>Dev</th>
<th>Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>#Tokens</td>
<td>222K</td>
<td>56K</td>
<td>51K</td>
</tr>
<tr>
<td>#Gold entities</td>
<td>18454</td>
<td>4778</td>
<td>4778</td>
</tr>
<tr>
<td>#Unique generated entities</td>
<td>230K</td>
<td>154K</td>
<td>148K</td>
</tr>
<tr>
<td>#Conversion rate</td>
<td>0.8</td>
<td>0.80</td>
<td>0.81</td>
</tr>
</tbody>
</table>
Table 2: Data statistics of AIDA and found unique entities by generator. The conversion rate is the ratio of found entities that we can link to Wikidata.
Following the same setting of E-BERT, we use KnowBert’s candidate generator to first find all spans that might be potential entities in a sentence. These spans are matched in a precomputed span-entity co-occurrence table (Hoffart et al., 2011) and each span is annotated with linked entity candidate IDs associated with prior probabilities based on frequency. Note that the generator tends to over-generate and most found spans should be rejected according to our observation on the training dataset. Thus, given a span in a sentence, our model needs to learn to reject it or predict the correct one among its candidate IDs in accordance with the context. As with E-BERT, we formulate this task as a classification task where the model needs to classify the
given input. The classified labels contain candidate IDs and a rejection label.
4.3 Model
Our model is based on BERT BASE and the architecture is shown in Figure 2. We describe the incorporation method, modeling, and training hyperparameters in the following.
4.3.1 Incorporation Method
Given a span from the generator, we denote the embeddings of candidate entities as \{c_1, c_2, ..., c_N\} and corresponding prior probabilities as \{p_1, p_2, ..., p_N\}. The entity embeddings are
computed by Eq. 1. Since different candidate entities may have the same type (e.g., the type ’country’ may contain different entities), the model cannot distinguish these label embeddings in classification if we simply use the entity embeddings as the label embeddings. Note that this is not an issue when incorporating these entity embeddings into the input, as shown later in our entity typing and relation classification tasks, because the surface forms of entities included in the input can help distinguish between each embedding. Thus, to distinguish these label embeddings, we propose to combine the surface forms of entity candidates, which are still local features, and entity embeddings into label embeddings. The embeddings of surface forms of entities are denoted as \{s_1, s_2, ..., s_N\}. \(s_i\) is simply computed by averaging the WordPiece embeddings in the surface form, which is the same way as computing our entity embeddings. Since large number of entities are involved in this task as shown in Table 2, we compute \(s_i\) and \(c_i\) both on the fly to save memory and reduce the parameters. This means the gradients will come to the WordPiece embeddings during backpropagation. To combine \(s_i\) and \(c_i\), we use a gate to learn to control the weight between \(s_i\) and \(c_i\), and label embedding \(l_i\) is computed as follows:
$$g = \text{sigmoid}(W c_i), \qquad l_i = (1 - g) \odot c_i + g \odot s_i \quad (2)$$
\(\odot\) is element-wise multiplication and \(W \in \mathbb{R}^{d \times d}\)
are trainable parameters where \(d\) is a BERT dimension. If \(c_i\) is not found during the aforementioned conversion, we only use \(s_i\).
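A sketch of the gate defined above (Eq. 2), again with an assumed row-major layout for W; this only illustrates the arithmetic and is not the authors' implementation:

```c
#include <math.h>
#include <stddef.h>

/* g = sigmoid(W c_i), l_i = (1 - g) * c_i + g * s_i, element-wise.
 * W is d x d, stored row-major; c and s are the candidate-entity and
 * surface-form embeddings; l is the resulting label embedding. */
void gated_label_embedding(const float *W, const float *c, const float *s,
                           size_t d, float *l /* output, length d */)
{
    for (size_t r = 0; r < d; r++) {
        float wc = 0.0f;                        /* (W c)_r                 */
        for (size_t k = 0; k < d; k++)
            wc += W[r * d + k] * c[k];
        float g = 1.0f / (1.0f + expf(-wc));    /* sigmoid, element r      */
        l[r] = (1.0f - g) * c[r] + g * s[r];    /* gate between c and s    */
    }
}
```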
4.3.2 Modeling
We denote the output vector from the BERT encoder at the position of ‘[ENT]’ as \(o_{ENT}\). The value of the \(i\)-th candidate entity before the softmax function is computed as \(l_i^T o_{ENT} + b_i\) where \(b_i\) is the bias of the \(i\)-th entity candidate. To incorporate the prior probabilities in the classification, we set \(b_i\) as \(\log p_i\) so that the probability will be \(p_i\) if no other information is available (i.e., \(l_i^T o_{ENT}\) equals zero). The bias of a rejection label will be learned from the training data. We use the standard cross entropy as our loss function.
4.3.3 Hyper-parameters
Since the dataset is quite small as shown in Table 2, we only train for maximum of four epochs, and the
model with the best micro F1 score on the valid dataset is chosen. The batch size is set to 16 and the default AdamW optimizer was used with a linear learning rate scheduler (10% warmup). The learning rate was chosen among {1e-5, 2e-5, 3e-5} on the valid set.
4.4 Results
The results on the AIDA test set are shown in Table 3. We mainly compare our model with BERT-Random (introduced later), KnowBert and E-BERT as they also focus on incorporating entity embeddings to BERT. Note that we only include end-to-end EL models in this table, and the results are not comparable to ones of disambiguation-only EL models where the golden entity mentions are given.
We used BERT-Random as our baseline, which is the same as our model except that the label embeddings are randomly initialized and trained from scratch. Compared with BERT-Random, our model shows significant improvement, which suggests our proposed label embeddings are effective.
E-BERT incorporates its entity embeddings not only to the output but also to the input. The embedding of its `[ENT]` in the input is computed by averaging all embeddings of candidate entities. We also tried a similar strategy but found no obvious change in our model. Thus, we only focused on the output. In addition, E-BERT uses another strategy that iteratively refines predictions during inference. However, this strategy slows down the inference speed. The results indicate that the local features work even better than the global features used to train entity embeddings in E-BERT. This may suggest that we can utilize local features to construct entity embeddings in tasks where the context already contains a lot of information. Please also note that we can only convert around 80% of Wikipedia URLs to Wikidata IDs, and this may limit the performance of our model. Another advantage is that our label embeddings are constructed on the fly and thus save memory and reduce the number of training parameters. Finally, our model and E-BERT achieved the highest strong micro-F1 and macro-F1 scores among all models, indicating it may be a good way to incorporate knowledge through entity embeddings.
5 Entity Typing and Relation Classification
5.1 Task Description
The goal of entity typing is to predict the types of a given entity from its context. Note that it is not necessary that the mention of a given entity is a named entity. For example, the mention ‘they’ is labeled as ‘organization’, as shown in the entity typing example in Table 1.
5.2 Incorporation Method
Unlike the EL task, where we incorporated our entity embeddings into the output, we apply our entity embeddings to the input for these two tasks. To incorporate the entity embeddings, we propose a method that emphasizes target entities (e.g., in relation classification, there are two entity mentions). Specifically, for all entities, we first sum the embeddings of the entities and the corresponding BERT WordPiece tokens, and then feed them into the BERT model. For target entities, we explicitly insert the entity embeddings into the input of WordPiece token embeddings and make the entity embeddings share the same position embeddings with their corresponding WordPiece token embeddings, as if they are in the same position. Our model architecture is shown in Figure 3. We mathematically describe our method as follows.
We denote the number of WordPiece tokens in a sentence as $T$, and the $i$-th WordPiece token embedding, entity embedding, and position embedding as $w_i$, $e_i$, and $p_i$, respectively. As shown in the figure, the entity embedding $e_i$ is 0 if the $i$-th token is not the start token of an entity. For simplicity, we ignore token type embeddings here, although they are actually used in our model. We first obtain the input $x_i$ to the BERT encoder by summing the entity embeddings with the other embeddings:
$$x_i = e_i + w_i + p_i. \quad (3)$$
Since target entities are usually more important than other entities in an entity-centric task, we explicitly insert target entity embeddings that have the same position embeddings as their aligned WordPiece embeddings, as if they are in the same position. For the relation classification task, there are two target entities, and thus the extra inserted inputs are $x_{T+1}$ and $x_{T+2}$, which are computed as follows:
$$x_{T+1} = e_{k_1} + p_{k_1},$$
$$x_{T+2} = e_{k_2} + p_{k_2}, \quad (4)$$
where $k_1$ and $k_2$ are the index of the first and second target entities, respectively.
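A sketch of Eqs. (3) and (4) as a single input-construction routine (the buffer layout, argument names, and the omission of token-type embeddings are assumptions stated in the comments; this is not the authors' code):

```c
#include <stddef.h>

/* Build the encoder input by summing WordPiece, entity and position
 * embeddings (Eq. 3), then append one extra row per target entity that
 * reuses the position embedding of its first WordPiece (Eq. 4).
 * All buffers are row-major [rows x d]; token-type embeddings omitted. */
void build_input(const float *w,   /* [T x d] WordPiece embeddings        */
                 const float *e,   /* [T x d] entity embeddings (zero rows
                                      for tokens that start no entity)    */
                 const float *p,   /* [T x d] position embeddings         */
                 size_t T, size_t d,
                 const size_t *target_idx, size_t n_targets,
                 float *x /* [(T + n_targets) x d] output */)
{
    for (size_t i = 0; i < T; i++)                    /* Eq. (3) */
        for (size_t k = 0; k < d; k++)
            x[i * d + k] = e[i * d + k] + w[i * d + k] + p[i * d + k];

    for (size_t t = 0; t < n_targets; t++) {          /* Eq. (4) */
        size_t ki = target_idx[t];
        for (size_t k = 0; k < d; k++)
            x[(T + t) * d + k] = e[ki * d + k] + p[ki * d + k];
    }
}
```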
5.3 Experiments
5.3.1 Entity Typing
We chose Open Entity (Choi et al., 2018) to evaluate our model. The dataset has several versions, and we chose the one that has nine general types (e.g., person, location, and object), which is the same as that in previous works. One example from this dataset is shown in Table 1. As previously mentioned, the entity mention in Open Entity is not limited to named entities, and pronoun mentions and common noun expressions are also included. We used a preprocessed version from ERNIE (Zhang et al., 2019). This preprocessed dataset was annotated with mentions of named entities and automatically linked to Wikidata by TAGME (Ferragina and Scaiella, 2010) so that we could find their type knowledge in Wikidata for all entities in Open Entity. We used the same annotated entities as the ones used in ERNIE by keeping the same confidence threshold to filter unreliable entity annotations. The statistics of this dataset are shown in Table 4. Most annotated entities are non-target because the entity mention in Open Entity is not limited to named entities. Our model needs to utilize the context together with the entity annotations to infer the types of the target entity. We can also see the type labels of entities are quite short (only 2.8 word pieces per label), and this may be one reason that our averaging method for constructing entity embeddings works. If the label is long (e.g., becoming a text description), the averaging method might be too simple to encode it. Since the involved entities are not that many, we did not construct the entity embeddings on the fly to speed up the training. That is, the entity embeddings are initialized by Eq. 1 and are updated in the training.
<table>
<thead>
<tr>
<th></th>
<th>Train</th>
<th>Dev.</th>
<th>Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>#Instances</td>
<td>2,000</td>
<td>2,000</td>
<td>2,000</td>
</tr>
<tr>
<td>#Target entities</td>
<td>122</td>
<td>107</td>
<td>94</td>
</tr>
<tr>
<td>#All entities</td>
<td>2,573</td>
<td>2,511</td>
<td>2,603</td>
</tr>
<tr>
<td>#Labels per entity</td>
<td>1.56</td>
<td>1.56</td>
<td>1.63</td>
</tr>
<tr>
<td>#WordPieces per label</td>
<td>2.8</td>
<td>2.8</td>
<td>2.8</td>
</tr>
</tbody>
</table>
Table 4: Statistics of Open Entity dataset with nine label types. TAGME (Ferragina and Scaiella, 2010) is used to automatically annotate named entities in the dataset.
Our code was adapted from ERNIE, and we
5.3.2 Relation Classification
We used a preprocessed relation classification dataset from ERNIE (Zhang et al., 2019) to evaluate our model. This dataset is from the FewRel corpus (Han et al., 2018) and was rearranged by Zhang et al. (2019) for the common relation classification setting. One example from this dataset is shown in Table 1. We used FewRel oracle entity IDs, which were also used in ERNIE and E-BERT (Poerner et al., 2020). These oracle entity IDs cover only target entities; there are no annotations for non-target entities. Our model needs to predict the relation of two given target entities with their annotations and context. The statistics of the FewRel dataset are shown in Table 6. Since oracle annotations were used, the statistics of annotated target entities are not shown in the table. Again, we can see the type labels are quite short, which enables them to be encoded with a simple averaging method. Since there are not many entities involved, we take these entity embeddings as parameters and do not construct them on the fly.
As with the entity typing task, special tokens [HD] and [TL] were used to mark the span of a head and tail entity, respectively. The [CLS] vector in the last hidden layer of the BERT encoder was used for relation classification. For the hyper-parameters, we basically followed those of ERNIE. The model is trained for 10 epochs with a batch size of 16. The default AdamW optimizer was used with a linear learning rate scheduler (10% warmup). The learning rate was set to 4e-5, which was chosen among {2e-5, 3e-5, 4e-5, 5e-5} on the valid dataset.
The results are shown in Table 7. ERNIE, E-BERT, and our model can be directly compared because all the models are based on BERT_BASE and used the same entity annotations. Our model achieves better results than ERNIE and E-BERT, indicating that our methods are effective while being cost-efficient. However, E-BERT reports that their entity coverage is about 90% (around 10% of entity embeddings are not found in their Wikipedia2Vec embeddings), while the entity coverage in our model and ERNIE is about 96%. This may put E-BERT at a disadvantage.
### 5.4 Ablation Study
To analyze the gain, we define three components in our model for entity typing and relation classification: entityEmbs, defined by Eq. 1, sum, defined by Eq. 3, and insert, defined by Eq. 4. When entityEmbs is not used, the entity embeddings are initialized randomly. The results for cases when independently excluding each component are shown in Table 8. When entityEmbs was removed, the performance of our model on two datasets dropped significantly, which indicates our method for con-
Table 5: Results of our model and related models on the entity typing dataset - Open Entity. Note that only K-ADAPTER is in the LARGE size, and ERNIE, KnowBERT, and K-ADAPTER also require incorporating knowledge during fine-tuning.
<table>
<thead>
<tr>
<th></th>
<th>Model</th>
<th>Architecture</th>
<th>P</th>
<th>R</th>
<th>F1</th>
</tr>
</thead>
<tbody>
<tr>
<td>Incorporate KG in pre-training</td>
<td>ERNIE (Zhang et al., 2019)</td>
<td>BERT_BASE</td>
<td>78.42</td>
<td>72.90</td>
</tr>
<tr>
<td></td>
<td>KnowBERT (Peters et al., 2019)</td>
<td>BERT_BASE</td>
<td>78.60</td>
<td>73.70</td>
</tr>
<tr>
<td></td>
<td>K-ADAPTER (Wang et al., 2020)</td>
<td>RoBERTA_LARGE</td>
<td>79.30</td>
<td>75.84</td>
</tr>
<tr>
<td></td>
<td>KEPLER (Wang et al., 2021)</td>
<td>RoBERTA_BASE</td>
<td>77.80</td>
<td>74.60</td>
</tr>
<tr>
<td>Fine-tuning only</td>
<td>BERT_BASE (our reproduction)</td>
<td>BERT_BASE</td>
<td>79.78</td>
<td>70.90</td>
</tr>
<tr>
<td></td>
<td>Our model</td>
<td>BERT_BASE</td>
<td>78.53</td>
<td>74.16</td>
</tr>
</tbody>
</table>
Table 6: Relation classification dataset FewRel with 80 relation types.
<table>
<thead>
<tr>
<th></th>
<th>Train</th>
<th>Dev.</th>
<th>Test</th>
</tr>
</thead>
<tbody>
<tr>
<td>#Instances</td>
<td>8,000</td>
<td>16,000</td>
<td>16,000</td>
</tr>
<tr>
<td>#Labels per entity</td>
<td>1.27</td>
<td>1.25</td>
<td>1.25</td>
</tr>
<tr>
<td>#WordPieces per label</td>
<td>2.96</td>
<td>3.0</td>
<td>3.02</td>
</tr>
</tbody>
</table>
Table 7: Relation classification results on FewRel. Only ERNIE incorporates entity embeddings in both pre-training and fine-tuning steps.
<table>
<thead>
<tr>
<th>Model</th>
<th>P</th>
<th>R</th>
<th>F-1</th>
</tr>
</thead>
<tbody>
<tr>
<td>ERNIE (Zhang et al., 2019)</td>
<td>88.49</td>
<td>88.44</td>
<td>88.32</td>
</tr>
<tr>
<td>E-BERT (Poerner et al., 2020)</td>
<td>88.51</td>
<td>88.46</td>
<td>88.38</td>
</tr>
<tr>
<td>BERT_BASE (our reproduction)</td>
<td>86.16</td>
<td>86.16</td>
<td>86.16</td>
</tr>
<tr>
<td>Our model</td>
<td>88.93</td>
<td>88.93</td>
<td>88.93</td>
</tr>
</tbody>
</table>
Table 8: Ablation study with F1 scores. Each component in our model is excluded independently.
<table>
<thead>
<tr>
<th>Model</th>
<th>Open Entity</th>
<th>FewRel</th>
</tr>
</thead>
<tbody>
<tr>
<td>Our model</td>
<td>76.28</td>
<td>88.93</td>
</tr>
<tr>
<td>w/o entityEmbs</td>
<td>74.03</td>
<td>84.98</td>
</tr>
<tr>
<td>w/o sum</td>
<td>75.83</td>
<td>88.81</td>
</tr>
<tr>
<td>w/o insert</td>
<td>75.62</td>
<td>87.99</td>
</tr>
</tbody>
</table>
Table 9: Ablation study on Open Entity dataset.
6 Conclusion
In this paper, we proposed to construct entity embeddings using local features instead of training those with consideration of the whole KG for tasks where the context already contains large amounts of information. Utilizing local features to construct the entity embeddings is much faster than the methods mentioned in related work. The local features of an entity used in this paper are the labels of its neighboring nodes connected through instance_of edges in Wikidata. Since these labels are usually very short, we can simply encode them by averaging their WordPiece embeddings. The simple averaging method enables us to even construct entity embeddings on the fly without much cost. This is helpful for saving memory and reducing parameters in tasks where millions of entities may be involved. Finally, we proposed task-specific models to incorporate our entity embeddings. Unlike most previous works, our entity embeddings can be directly incorporated during fine-tuning without requiring any specialized pre-training. Our experiments on entity linking, entity typing, and relation classification show that our entity embeddings and incorporation method are simple and effective, and the proposed models have comparable or superior performance to existing models while having the aforementioned advantages.
Caching Best Practices | Goals for Today
- Differences between Raster Tiles and Vector Tiles
- Picking a format
- Best ways to cook each
- How to share them
- How to consume them
# Creating, Using, and Maintaining Tile Services
<table>
<thead>
<tr>
<th>WORKSHOP</th>
<th>LOCATION</th>
<th>TIME FRAME</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Tuesday, 10 July</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• ArcGIS Pro: Creating Vector Tiles</td>
<td>• SDCC – 17B</td>
<td>• 10:00 am – 11:00 am</td>
</tr>
<tr>
<td>• Caching Maps and Vector Tile Layers: Best Practices</td>
<td>• SDCC – 10</td>
<td>• 2:30 pm – 3:30 pm</td>
</tr>
<tr>
<td>• Working With OGC WMS and WMTS</td>
<td>• SDCC – Esri Showcase: Interoperability and Standards Spotlight Theater</td>
<td>• 4:30 pm – 4:50 pm</td>
</tr>
<tr>
<td><strong>Wednesday, 11 July</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• ArcGIS for Python: Managing Your Content</td>
<td>• SDCC – Demo Theater 01</td>
<td>• 11:15 am – 12:00 pm</td>
</tr>
<tr>
<td>• ArcGIS Online: Three-and-a-Half Ways to Create Tile Services</td>
<td>• SDCC – Demo Theater 06</td>
<td>• 1:15 pm – 2:00 pm</td>
</tr>
<tr>
<td>• Understanding and Styling Vector Basemaps</td>
<td>• SDCC – 15B</td>
<td>• 2:30 pm – 3:30 pm</td>
</tr>
<tr>
<td>• Working With OGC WMS and WMTS</td>
<td>• SDCC – Esri Showcase: Interoperability and Standards Spotlight Theater</td>
<td>• 4:30 pm – 4:50 pm</td>
</tr>
<tr>
<td><strong>Thursday, 12 July</strong></td>
<td></td>
<td></td>
</tr>
<tr>
<td>• ArcGIS Enterprise: Best Practices for Layers and Service Types</td>
<td>• SDCC – 16B</td>
<td>• 10:00 am – 11:00 am</td>
</tr>
<tr>
<td>• ArcGIS Pro: Creating Vector Tiles</td>
<td>• SDCC – 17B</td>
<td>• 10:00 am – 11:00 am</td>
</tr>
<tr>
<td>• Web Mapping: Making Large Datasets Work in the Browser</td>
<td>• SDCC – 16B</td>
<td>• 1:00 pm – 2:00 pm</td>
</tr>
<tr>
<td>• Caching Maps and Vector Tile Layers: Best Practices</td>
<td>• SDCC – 04</td>
<td>• 4:00 pm – 5:00 pm</td>
</tr>
<tr>
<td>• Understanding and Styling Vector Basemaps</td>
<td>• SDCC – 10</td>
<td>• 4:00 pm – 5:00 pm</td>
</tr>
</tbody>
</table>
Caching Best Practices | Roadmap
1. Overview
2. Compare and contrast
3. Use cases
4. What’s new in ArcGIS Pro
5. Optimizing raster tile generation
6. Optimizing vector tile generation
7. Share and Publish
8. Restyling multiple maps from one tileset
Raster and Vector Tiles
...an overview
Overview | Raster Tiles
- What are Raster Tiles?
- Pre-rendered snapshots of your map
- JPEG’s and PNG’s
- Tiling Scheme (see the sketch after this list):
- Origin
- Tile Dimension and Format
- Extent
- CRS
- LOD’s
- Generate Cache
- Cooking
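To make the tiling-scheme bullets above concrete, here is a minimal, hedged Python sketch of how a cache maps a map coordinate to a tile row/column at a given level of detail. It assumes a simple scheme described only by an origin, a square tile size in pixels, and a per-LOD resolution (map units per pixel); real ArcGIS tiling schemes carry more detail (extent, format, compression), and the example numbers are illustrative only.

```python
import math

def tile_for_point(x, y, origin_x, origin_y, resolution, tile_size=256):
    """Return (column, row) of the cache tile containing map point (x, y).

    origin_x/origin_y: upper-left origin of the tiling scheme.
    resolution: map units per pixel at the chosen LOD.
    tile_size: tile width/height in pixels (256 is typical).
    """
    tile_span = resolution * tile_size                  # ground width of one tile
    col = int(math.floor((x - origin_x) / tile_span))
    row = int(math.floor((origin_y - y) / tile_span))   # rows count downward from the origin
    return col, row

# Example with Web-Mercator-like numbers (values are illustrative only)
ORIGIN = (-20037508.34, 20037508.34)
res_lod3 = 19567.88  # roughly the level-3 resolution (m/px) of a 256 px scheme
print(tile_for_point(0, 0, *ORIGIN, resolution=res_lod3))  # tile containing the map center
```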
Overview | Vector Tiles
• What are Vector Tiles?
- Tiled containers of your data
- Separate style provides rendering instructions for how to draw your map
• Client device / browser is responsible for drawing the map
• Tileset components:
- Tiles
- Styles
- Sprites
- Fonts
- Index
Overview | Vector Tiles in ArcGIS
- Leverages several Open Source projects
- Tiles use the Mapbox vector tile spec
- Based on Google protocol buffers
- Styling conforms to the Mapbox GL style spec
- More aggressive overzoom
- Indexed tiling scheme
- Support for traditional tiling also exists
- Any supported Coordinate System
Overview | Advantages of Vector Tiles
• Display quality
- Best possible resolution for HD displays
• Dynamic labeling
- Clearer, more readable text
- On the fly labeling for heads up display
• Map Styling
- Many styles from one tileset
- Restyling
Compare and Contrast
Raster Tiles and Vector Tiles
## Compare / Contrast | Authoring Clients / Tools
### Raster Tiles
- MXD’s, Map Projects, and MosaicDatasets
- ArcGIS Desktop
- Manage Tile Cache
- Create Map Tile Package
- Integrated sharing in ArcGIS Pro 1.4
- ArcGIS Server / Enterprise / Online
- Server tools / caching toolset
### Vector Tiles
- Map Projects
- ArcGIS Pro v1.2+
- Create Vector Tile Package
- ArcGIS Pro v1.4*
- Integrated sharing workflow
Compare / Contrast | Tileset Structure
• Raster Tiles:
- .bundle
- JPEG, PNG8, PNG24, PNG32, LERC
- Smart Tiles: PNG, MIXED
• Vector Tiles:
- .bundle
- Tiled data encoded as Protocol Buffers (.PBF)
- Fonts
- Glyphs as .PBF
- Sprites
- sprite.png / [email protected]
- Style
- .JSON
Compare / Contrast | Tile creation process - Esri World Basemap
• Raster Tiles for entire world
- ~ many weeks on a server cluster per map style
- Tiles ~ 20 TB
• Compared to vector tiles
- ~ 12 hrs on a desktop machine
- Tiles ~ 26 GB
- Multiple styles can use the same tileset
Compare / Contrast | Supporting Architecture
ArcGIS Server
Raster Tiles
Federated Server
ArcGIS Pro
Vector Tiles
ArcGIS Online
Hosting Server
Cached Map Image Layers
Cloud store
Hosted Tile Layers
## Summary | Compare and Contrast
<table>
<thead>
<tr>
<th></th>
<th>Raster Tiles</th>
<th>Vector Tiles</th>
</tr>
</thead>
<tbody>
<tr>
<td>Imagery</td>
<td>√</td>
<td>X</td>
</tr>
<tr>
<td>Projection</td>
<td>All Supported CRS</td>
<td>All Supported CRS</td>
</tr>
<tr>
<td>Updating AOI</td>
<td>√</td>
<td>Future Release</td>
</tr>
<tr>
<td>Changing styles</td>
<td>X</td>
<td>√</td>
</tr>
<tr>
<td>Tile format</td>
<td>JPEG, PNG, LERC</td>
<td>PBF</td>
</tr>
<tr>
<td>Tile consumption</td>
<td>ArcGIS Pro, ArcGIS Desktop</td>
<td>ArcGIS Pro 1.3+</td>
</tr>
<tr>
<td></td>
<td>Runtime</td>
<td>Modern Browsers with WebGL support*</td>
</tr>
<tr>
<td></td>
<td>JSAPI</td>
<td>Runtime 100.0+</td>
</tr>
<tr>
<td></td>
<td>ArcGIS Earth</td>
<td>JSAPI 3.15+</td>
</tr>
<tr>
<td>Authoring Clients</td>
<td>ArcGIS Pro, ArcGIS Desktop</td>
<td>ArcGIS Pro 1.2+</td>
</tr>
<tr>
<td>Hosting Components</td>
<td>ArcGIS Online, ArcGIS Enterprise</td>
<td>ArcGIS Online</td>
</tr>
<tr>
<td></td>
<td>ArcGIS for Server</td>
<td>ArcGIS Enterprise 10.4+</td>
</tr>
<tr>
<td>Export Packages</td>
<td>√</td>
<td>ArcGIS Enterprise 10.6.1 and ArcGIS Online</td>
</tr>
<tr>
<td>Printing</td>
<td>√</td>
<td>ArcGIS Enterprise 10.6 and ArcGIS Online</td>
</tr>
</tbody>
</table>
*Current Display Driver*
Use Cases
Raster Tiles and Vector Tiles
Use Cases | Common Basemaps
Raster Tiles:
- Imagery Basemap
- CADRG / ECRG (Scanned Maps)
- Hillshade / Shaded Relief
- 3D Terrain
Vector Tiles:
- StreetMap
- Canvas Maps
- Boundaries and Places
- Transportation
Use Cases | Mapping & Visualization Comparison
- Vector:
- Map Service (Tiled)
- Vector Tile Service
- Map Service (Dynamic)
- Feature Service
- Raster:
- Image Service (Tiled)
- Image Service (Dynamic)
What’s new?
Quite a bit actually…
Authoring | What’s new in ArcGIS Pro?
• New at Pro 2.2
- Popups for vector tile layers!
- Visual variables / attribute driven styling
- Single symbol / unique value
- Color picker directly from symbol
- Font fallback
• New at Pro 2.1
- Visual variables / attribute driven styling
- Graduated / Proportional symbols
- Text rotation
- Unclassed symbol support in vector tiles
- Pause drawing!
- Improved rendering of vector tiles
- Convert representations to unique values
- Batch geoprocessing
- Create directories and FGDB’s
- Improved polygon labeling / placement
- Improved vector tile layer display in Pro
Caching | What’s new in ArcGIS Enterprise and Online?
• **ArcGIS Enterprise 10.6.1**
- Level 1 users are free (as of 10.6)
- Replace vector tile layer
- Offline / Export tile layers
- Caching in cloud store directories
- Amazon S3
- Azure Blob Store
- Alibaba OSS
- Huawei OBS
• **ArcGIS Online**
- Replace vector tile layer
- Auto update Hosted Tile Layers based on Hosted Feature services
Optimizing Raster Tile Generation
Raster Tiles
Raster Tiles | Are they still relevant?
<table>
<thead>
<tr>
<th>Product</th>
<th>Raster Basemaps</th>
<th>Vector Basemaps</th>
</tr>
</thead>
<tbody>
<tr>
<td>ArcGIS Pro Map Type</td>
<td>Imagery, Scanned Maps/Charts, Elevation</td>
<td>3D Terrain</td>
</tr>
<tr>
<td>Publish as</td>
<td>Map</td>
<td>Scene</td>
</tr>
<tr>
<td>Tile Format</td>
<td>JPEG or MIXED</td>
<td>LERC</td>
</tr>
<tr>
<td>Compression / Quality</td>
<td>65 - 75</td>
<td>0.1</td>
</tr>
<tr>
<td>Mosaic Dataset Overviews</td>
<td>Optional</td>
<td>YES</td>
</tr>
<tr>
<td>Data location</td>
<td>Source rasters - network share / NAS / SAN</td>
<td>Local FGDB</td>
</tr>
<tr>
<td>Cache Extents</td>
<td>Mosaic Datasets - local FGDB</td>
<td></td>
</tr>
<tr>
<td>Special Considerations</td>
<td>Increase mosaic max rasters and row/columns</td>
<td>Optionally compress the FGDB</td>
</tr>
<tr>
<td>Maplex</td>
<td></td>
<td>When needed</td>
</tr>
<tr>
<td>Data Conditioning</td>
<td></td>
<td>Check spatial indices and attribute indices</td>
</tr>
<tr>
<td>FGDB Health</td>
<td></td>
<td>Compact FGDB’s after data updates / edits</td>
</tr>
<tr>
<td>Analyzers</td>
<td></td>
<td>Utilize mosaic dataset and map analyzers to identify common issues</td>
</tr>
</tbody>
</table>
Pop Quiz
Raster Tile Selection
Raster Tiles | Imagery
- **Service Type:**
- Map or Image Service
- Web Map Image Layer
- Web Image Layer
- **Tile format:**
- MIXED
- JPEG-65
- **Cache Extents:**
- Mosaic Dataset footprints
Raster Tiles | 3D Terrain
- **Service Type:**
- Image Service
- Web Elevation Layer
- **Tile format:**
- LERC-0.1
- **Cache Extents:**
- Mosaic Dataset footprints
Raster Tiles | Topographic Map with Hillshade
- Service Type:
- Map Service
- Web Map Image Layer
- Tile format:
- JPEG-90
- Cache Extents:
- Custom
Cache smarter…not harder
Raster Tiles | Navigation StreetMap
- Service Type:
- Map Service
- Web Map Image Layer
- Tile format:
- JPEG-90
- PNG?
- Cache Extents:
- Custom
• Don’t use Fine, Verbose, or Debug logging.
• Size your Caching Tools Instances (see the sketch below):
- $N = \# \text{ of cores per machine}$
- Min and Max = $N$
- 2 - 4GB of RAM x $N$
- Decrease $N$ if necessary
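A small sketch of the instance-sizing rule of thumb above. This is purely illustrative; the constants mirror the bullets (N = cores, min = max = N, roughly 2-4 GB of RAM per instance), not the output of any Esri tool.

```python
def caching_instance_plan(cores, ram_gb, ram_per_instance_gb=3):
    """Suggest min/max caching tool instances for one machine.

    Starts from N = number of cores (min = max = N) and decreases N
    until the assumed 2-4 GB per instance (default 3 GB) fits in RAM.
    """
    n = cores
    while n > 1 and n * ram_per_instance_gb > ram_gb:
        n -= 1  # "decrease N if necessary"
    return {"min_instances": n, "max_instances": n,
            "estimated_ram_gb": n * ram_per_instance_gb}

print(caching_instance_plan(cores=8, ram_gb=16))  # -> 5 instances at ~3 GB each
```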
Cache smarter…not harder
• Only cache what is necessary
• Use AOI’s with decreasing coverage as you increase LOD’s
• Break your basemap project into multiple cache jobs by bracketing LOD's (see the sketch after this list)
- Each job can / should have a unique AOI
• Only update what has changed
- You don’t need to re-cache everything if you have partial updates to your data
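The LOD-bracketing advice above can be captured as data before any caching tool is run. The sketch below is plain Python with no arcpy; the paths, LOD numbers, and job structure are assumptions for illustration. It pairs each LOD bracket with its own, progressively smaller area of interest so that large scales are only cooked where they are needed.

```python
# Each job: a contiguous bracket of LODs plus the AOI feature class to cache within.
# Paths and LOD numbers are illustrative; substitute your own scheme and AOIs.
CACHE_JOBS = [
    {"lods": range(0, 10),  "aoi": r"C:\cache\aoi.gdb\whole_country"},
    {"lods": range(10, 15), "aoi": r"C:\cache\aoi.gdb\urban_areas"},
    {"lods": range(15, 20), "aoi": r"C:\cache\aoi.gdb\city_centres"},
]

def describe_jobs(jobs):
    """Print one caching job per bracket; in practice each entry would be fed
    to the Manage Map Server Cache Tiles tool (or an equivalent GP service)."""
    for i, job in enumerate(jobs, start=1):
        lods = list(job["lods"])
        print(f"Job {i}: LODs {lods[0]}-{lods[-1]} restricted to {job['aoi']}")

describe_jobs(CACHE_JOBS)
```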
Cache smarter…not harder
Summary | Optimizing Raster Tile Generation
• Optimize your data:
- spatial index, compact FGDB, copy data local
• Optimize MXD / APRX and Imagery Projects:
- analyzer results, scale dependencies, Maplex when needed, Mosaic Dataset tuning
• Configure ArcGIS Server Caching instances
• Optimize cache jobs:
- Multiple jobs, AOI per LOD’s per job, only cache what is necessary
• ArcGIS Server will scale and leverage system resources
Cache smarter…not harder
Optimizing Vector Tile Generation
Vector Tiles
Vector Tiles | Data
- Use a local FGDB copy / extract of your data
- Clean your data
- Eliminate duplicates
- Check/fix geometry errors
- How dense is your data?
- Set reasonable scale dependencies
- Generalize
Vector Tiles | Cartography
- Set your scales according to the tiling scheme you select
- Remember scale logic in Pro is different from ArcMap
- Convert representations to unique value symbols
- Limit...
- number of layers
- duplication of content
- inclusion of additional fields / data in the tileset
• Avoid…
- group layers
- complex symbols and unsupported symbol effects: hatched / gradient fills
- unsupported layer types: annotations, basemaps
• Be mindful of users that want to re-style your maps
Vector Tiles | Cooking Tips
- Create and use index polygons
- Set max scale appropriately
- Choose a local directory for the .VTPK
Summary | Optimizing Vector Tile Generation
- Uncheck the box - “Draw up to and including maximum scale in scale ranges.”
- Pick a tiling scheme and set scale properties to match
- Copy your data to a FGDB
- Get your data healthy
- Make a valid map
- Make an efficient map
Make an Efficient Map
#KnowBeforeYouPro
Sharing, Cooking, and Updating
...is caring
Sharing
Cached Image Layers
Share Web Elevation Layers
Sharing Web Elevation Layers
Sharing and Cooking
Vector Tile Layers
1. Draw a line in the map.
2. Select the line and press enter.
3. Right-click the line and select Share as Web Layer.
4. Name the layer "Migrate Final Vector".
5. Check the box for Vector Tile.
6. Click Publish.
7. Share with "Admin (root)".
(Demo screenshots: the ArcGIS Online item page for the hosted tile layer "New Zealand Production", created Jul 6, 2018 from a vector tile package, showing the street basemap description, the View style option, and the Replace Layer option.)
Restyling multiple maps from one tileset
ArcGIS Vector Tile Style Editor
Design your own custom styles for Esri Vector Basemaps.
Get Started
Summary
Raster Tiles and Vector Tiles…choose wisely
Caching Best Practices | Summary
Raster Tiles:
- Rasters and elevation datasets
- Any client
- Big Footprint
- TB’s of cache data
- Generation can consume lots of resources
- Days and Weeks
Vector Tiles
- All vector datasets
- Modern browsers with WebGL
- ArcGIS Pro 1.3+
- Small footprint
- ~26 GB for whole world
- Generation consumes fewer resources
- Minutes and Hours
https://esri.box.com/v/CachingReadAhead
References
ArcGIS Pro
- Cartography MOOC (5 September - 17 October): https://www.esri.com/training/catalog/596e584bb826875993ba4ebf/cartography/
Vector Tiles
- Esri Vector Tile Style Editor (VTSE): https://developers.arcgis.com/vector-tile-style-editor/
- UC 2017 - Creating Vector Tiles: https://www.youtube.com/watch?v=dqKsEos1iSw
- Replace vector tile workflow: https://developers.arcgis.com/rest/users-groups-and-items/replace-service.htm
ArcGIS Online
- Esri Vector Basemap Group: https://www.arcgis.com/home/group.html?id=30de8da907d240a0bccd5ad3ff25ef4a#overview
- Blogs Human Geography Basemaps:
JSAPI Sample Apps
- Flights:
- Code: https://github.com/gbochenek/vector-tile-demo-js
- Live Demo: https://gbochenek.github.io/vector-tile-demo-js
- Browse Styles:
- Code: https://github.com/tfauvell/vt-styles-js
- Live Demo: https://tfauvell.github.io/vt-styles-js
Download the Esri Events app and find your event
Select the session you attended
Scroll down to find the feedback section
Complete answers and select “Submit”
Roger Lee (Ed.)
Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing 2012
Supporting End-User Development of Context-Aware Services in Home Network System
Masahide Nakamura, Shuhei Matsuo, and Shinsuke Matsumoto
Abstract. The home network system (HNS, for short) provides value-added and context-aware services for home users by orchestrating networked home appliances and sensors. Although HNS services have been developed by system vendors, there is a strong need for end-users to create their own services according to their convenience. This paper presents a novel service creation environment, called Sensor Service Binder (SSB), which provides a user-friendly interface for creating context-aware services within the HNS. Built on top of the service-oriented HNS, the SSB allows non-expert users to register contexts using the sensors, and to bind a registered context to any operation of the networked appliances. Experimental evaluation with an actual HNS showed that the effort for service creation was reduced to 10% by introducing the proposed SSB.
Keywords: home network system, home appliances, sensors, context-aware services, end-user development.
1 Introduction
Research and development of the home network system (HNS, for short) has recently been a hot topic in the area of ubiquitous computing applications [13][1]. Orchestrating household appliances and sensors via a network, the HNS provides various value-added services for home users. Typical services include the remote control service inside/outside home, the monitoring service of device and environment status, and the integrated service of multiple appliances [9].
Masahide Nakamura · Shuhei Matsuo · Shinsuke Matsumoto
Graduate School of System Informatics, Kobe University, 1-1 Rokkodai-cho, Nada-ku, Kobe, Hyogo 657-8501, Japan
e-mail: {masa-n, shinsuke}@cs.kobe-u.ac.jp, [email protected]
springerlink.com © Springer-Verlag Berlin Heidelberg 2013
In this paper, we especially focus on the context-aware service [10] within the HNS, which automatically triggers service operations based on contextual information. A context-aware service is basically implemented by binding a context to an operation of the HNS device. The context is usually characterized by device states and/or environment properties gathered by sensors. For instance, binding a context “Hot: temperature>=28C” with an operation “airConditioner.cooling()” implements an autonomous air-conditioning service.
Such context-aware services have been developed by vendors in a “ready-made” form. However, due to the variety of users' tastes regarding contexts, deployed appliances, and surrounding environments, the conventional ready-made services do not necessarily cover all requirements of end-users. In the above example, a user may feel hot when the temperature is 26C. Also, she may want to use a fan instead of the air-conditioner, as she dislikes the air-conditioner.
To cope with such fine requirements by end-users, we propose a context-aware service creation environment for the HNS, called Sensor Service Binder (SSB). We have previously developed the service-oriented HNS [9], where operations of the HNS appliances and sensors are exhibited as Web services. The SSB is built on top of the service-oriented HNS, and supports end-users to perform the following two primary tasks to create a context-aware service:
- **Register**: Define an end-user context with device states and sensor values, and register it to the server.
- **Subscribe**: Bind a registered context to a desired appliance operation. The operation is triggered when the context becomes true.
We have conducted an experimental evaluation where non-expert users create simple context-aware services with the SSB. It was shown that the time taken for each subject to create a service was a couple of minutes. We also observed that the number of faults in invoking Web service APIs was significantly reduced. These facts imply that the proposed SSB can contribute to efficient and reliable end-user development of context-aware services in the HNS.
2 Related Work
There have been several methods and systems that facilitate the development of context-aware applications. In [4] and [12], middleware support for context-aware application development has been proposed. Using the middleware, developers can rapidly create their own applications by combining building blocks for reading sensor values, reasoning about contexts, executing callbacks, etc. Also, [11] presented a graphical user interface for modeling context-aware applications using UML. These approaches, however, are intended for sophisticated programmers and designers, so they differ from our SSB, which supports development by non-expert users at home.
In [3], a user interface, aCAPpella, for context-aware application development by end-users was proposed. Using aCAPpella, a user can program contexts by demonstration. The demonstration is captured by a camera and sensors. Then it is annotated manually to bind scenes with relevant contexts.
Our SSB provides a more light-weight approach for home users, by limiting contexts to the ones constructed by sensor services within the HNS. Since an application is created by a set of simple IF-THEN rules of ready-made services, users can easily do “scrap and build” of applications within a couple of minutes.
3 Preliminaries
3.1 Service-Oriented HNS
Applying the *service-oriented architecture (SOA)* to the HNS is a smart solution to achieve the programmatic interoperability among heterogeneous and distributed HNS devices. Wrapping proprietary control protocols by Web services provides loose-coupling and platform-independent access methods for external software that uses the devices. Several studies have been reported on the service orientation of home appliances [13][1] and sensor networks [5][6].
In our previous work [9][8], we have also designed and implemented a service-oriented HNS, called CS27-HNS, using actual home appliances and sensors. As shown in Figure 1, the CS27-HNS consists of *appliance services*, *sensor services*, and a *home server* that manages and controls the services. Each appliance (or sensor) device is abstracted as a service, where features of the device are exhibited as Web services (Web-APIs), encapsulating a device-proprietary protocol under the service layer.
Each appliance service has a set of Web-APIs that operate vendor-neutral features of the appliances. These Web-APIs can be executed by external applications (usually installed in the home server) using standard Web service protocols (i.e., REST or SOAP). For example, a TV service has methods for selecting channels, volume, input sources, etc., which are commonly included in any kinds of TVs. To select channel No.4 of the TV, one can just access a URL http://cs27-hns/TVServe/channel?no=4 with a Web-supported application (e.g., Web browser).
On the other hand, every sensor service in the CS27-HNS has the same set of Web-APIs. The API `getValue()` returns the current normalized value of an environment property, such as temperature (C), brightness (lux). Other APIs `register()` and `subscribe()` are for sensor-driven context-aware services, explained in the next section.
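Since every appliance and sensor feature is exposed as a REST Web-API, a client can drive the CS27-HNS with ordinary HTTP calls. The Python sketch below is ours, not from the paper: the TV channel URL follows the example given in the text, while the path for getValue() is an assumption extrapolated from the same Service/api pattern used later for register() and subscribe().

```python
import requests

HNS = "http://cs27-hns"  # home server host used in the paper's examples

def select_tv_channel(no: int):
    # URL pattern as given in the text: /TVServe/channel?no=4
    return requests.get(f"{HNS}/TVServe/channel", params={"no": no}, timeout=5)

def read_temperature():
    # Assumed path: the paper names the getValue() Web-API but not its exact URL.
    resp = requests.get(f"{HNS}/TemperatureSensorService/getValue", timeout=5)
    return resp.text  # normalized value, e.g. degrees Celsius

if __name__ == "__main__":
    select_tv_channel(4)
    print("temperature:", read_temperature())
```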
3.2 Context-Aware Services in HNS
By using the sensor services in the HNS, it is possible to gather various contexts [2] of the environment and users. A context can be used for triggering appliance services, which implements a context-aware service in the HNS. For example, one may define a context “Hot” to represent that “the room temperature is 28°C or greater”. Then, binding Hot to a Web-API AirConditioner.cooling() achieves a context-aware air-conditioning service.
To facilitate the context management, the sensor services in our CS27-HNS conform to a special framework, called Sensor Service Framework (SSF) [8]. For every sensor device, the SSF provides autonomous monitoring/notification services for the device, performed by a pair of Web-APIs: register() and subscribe(). A client of a sensor service first defines a context by a logical expression over the sensor value. Then, the client registers the context to the service using register() method. Next, the client executes subscribe() to tell a callback Web-API. After that, the service keeps monitoring the sensor value. When the registered context (i.e., the logical expression) becomes true, the SSF invokes the callback Web-API. Note that different clients can register multiple contexts in the same sensor service, and that any registered context can be shared and reused by different subscriptions.
Using the SSF in the CS27-HNS, the example air-conditioning service can be easily implemented by the following sequence of REST invocations.
1. Define a context Hot as an expression “temperature $\geq 28$”, and register it to TemperatureSensorService.
```
http://cs27-hns/TemperatureSensorService/register?
    context=Hot&expression='temperature>=28'
```
2. Bind Hot to Web-API AirConditioner.cooling().
```
http://cs27-hns/TemperatureSensorService/
    subscribe?context=Hot&notify='http://cs27-hns/AirConditionerService/cooling'
```
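For readers who prefer code to raw URLs, the two REST invocations above can be wrapped as below. This is a minimal sketch of a client, not the authors' implementation; it simply issues the same register() and subscribe() requests with the query parameters shown in the listing.

```python
import requests

HNS = "http://cs27-hns"

def register_context(sensor_service, name, expression):
    """Step 1: define a context and register it with the sensor service."""
    url = f"{HNS}/{sensor_service}/register"
    return requests.get(url, params={"context": name, "expression": expression}, timeout=5)

def subscribe_context(sensor_service, name, callback_api):
    """Step 2: bind the registered context to a callback Web-API."""
    url = f"{HNS}/{sensor_service}/subscribe"
    return requests.get(url, params={"context": name, "notify": callback_api}, timeout=5)

# The air-conditioning example from the text
register_context("TemperatureSensorService", "Hot", "temperature>=28")
subscribe_context("TemperatureSensorService", "Hot",
                  f"{HNS}/AirConditionerService/cooling")
```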
### 3.3 End-User Development of Context-Aware Services
Although the CS27-HNS with the SSF facilitates the development of context-aware services, it is yet quite challenging for end-users, who have no expertise in programming with Web services, to develop their own services. To use a sensor (or appliance) service, a user has to understand the interface and end point of the Web-API, usually described in WSDL. Also, the information managed by the SSF (sensor spec., registered contexts, callback APIs, etc.) are all described in XML. It is hard for non-expert users to understand and use the sensor services correctly.
Under this situation, our objective is to support the non-expert end-users to create their own services. For this, we propose a novel service creation environment, called Sensor Service Binder (SSB), built on top of the CS27-HNS.
4 Sensor Service Binder: User-Friendly Interface for Context-Aware Service Creation
4.1 Overview
The SSB provides a graphical user interface for rapid creation of context-aware services, acting as a front-end of the CS27-HNS with the SSF. The SSB automatically parses the WSDL and the XML files of the sensor/appliance services. It then displays the information in an intuitive and user-friendly form. The user can play with the services through basic widgets such as buttons, lists and textboxes, without knowing underlying information like the service end point, the message types, etc. Since the SSB restricts the user's input to the GUI only, it is possible to minimize careless faults in operating the services.
Also, the SSB can search and aggregate contexts and callback Web-APIs registered in all sensor services. This feature allows users to overview the entire list of available contexts and corresponding services. The list can be used to verify, reuse and refine the existing context-aware services, which were difficult activities with the SSF alone.
The SSB provides the following two primary features supporting end-users.
- **(F1: Registration Feature)** Register a user-defined context by executing `register()` method of the sensor service.
- **(F2: Subscription Feature)** Bind a registered context to a Web-API of a HNS operation using `subscribe()` method of the sensor service.
4.2 Context Registration Feature of SSB
This feature allows a user to define and register a context using the sensor services. In the SSB, a context is defined by a `name` and a `condition`. The context name is a unique label identifying the context, whereas the context condition is a logical expression composed of sensor values and comparison operators.
Figure 2(a) shows a screenshot of the registration feature. The left side of the screen is the registration pane. A user first chooses a sensor service from the drop-down list, and then enters a context name in the textbox. Below the textbox, an attribute of the sensor service is automatically derived and shown. The user defines a context condition by an expression over the attribute. In the default mode, the SSB allows only a constant value and a comparison operator, just for convenience. Finally, the user presses the “Register” button. The SSB registers the context to the service by invoking the register() method.
The right side of the screen represents a list of contexts that were already registered. The list is dynamically created by `getRegisteredContexts()` method of the SSF. Each line contains a context name, a context condition and a sensor service where the context is registered. The user can check if the created context is
registered. The user can also discard unnecessary contexts by just pressing the “Delete” button. The SSB requests the service to delete the context.
Fig. 2 Screenshot of the Sensor Service Binder: (a) Screen of Context Registration, (b) Screen of Context Subscription
4.3 Context Subscription Feature of SSB
This feature helps a user bind a registered context to a Web-API of the appliance operation. Figure 2(b) shows a screenshot of the context subscription feature. The left side of the screen enumerates the registered contexts, each of which is labeled by the context name. When a user clicks a preferred context, the context is chosen for the binding.
The right side of the screen shows the list of appliance services deployed in the HNS. When a user clicks a preferred appliance, a menu of operations of the appliance is popped up. Then the user chooses an operation to bind. The list of appliances and the menu of operations are automatically generated by parsing the WSDL of the appliance services.
Finally, when the user clicks “Bind” button in the center, the SSB subscribes the binding by executing subscribe() method. This completes a service creation. The subscribed contexts are shown in the textbox in the center, where the user can delete any binding.
4.4 Example
As an illustrative example, let us create a simple service, say automatic TV service with the SSB. This service turns on a TV only when a user sits down on a couch. We suppose that a Force sensor is deployed under the couch to detect a human sitting on the couch.
First, we define and register a context SitDown using the registration feature. From the drop-down list of sensors (see Figure 2(a)), we choose the Force sensor. Then we enter the name SitDown in the textbox and set the condition to pressure==true. Finally, we press the register button.
Next, we bind SitDown to TV.on() using the subscription feature. We first choose SitDown from the context list of Figure 2(b). Then, we choose TV from the appliance list, and on from the operation menu. Finally, we press the bind button. Similarly, we create a context StandUp as pressure==false, and then bind it to TV.off(), which completes the creation of the automatic TV service.
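Scripted against the same register()/subscribe() Web-APIs, the whole automatic TV service amounts to four REST calls. The sketch below mirrors the GUI steps just described; the ForceSensorService name and the TVService on/off API paths are assumptions for illustration, following the URL pattern from Section 3.2.

```python
import requests

HNS = "http://cs27-hns"

def call(path, **params):
    """Tiny helper for GET-style Web-API invocations."""
    return requests.get(f"{HNS}/{path}", params=params, timeout=5)

# Register the two contexts on the force sensor under the couch
call("ForceSensorService/register", context="SitDown", expression="pressure==true")
call("ForceSensorService/register", context="StandUp", expression="pressure==false")

# Bind them to the TV operations (service/API names assumed for the example)
call("ForceSensorService/subscribe", context="SitDown", notify=f"{HNS}/TVService/on")
call("ForceSensorService/subscribe", context="StandUp", notify=f"{HNS}/TVService/off")
```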
5 Evaluation
5.1 Experiment Setting
To evaluate the effectiveness of the SSB, we have conducted an experiment of service creation with (and without) the proposed SSB. A total of 6 subjects (3 undergraduates, 2 graduates, and 1 faculty member) participated in the experiment. None of the subjects was familiar with the CS27-HNS or the SSB.
In the experiment, we asked the subjects to do the following tasks.
• (T1: context registration) Each subject defines and registers the following 5 contexts.
1. SitDown: A force sensor detects a pressure.
2. StandUp: A force sensor detects no pressure.
3. Dark: A light sensor measures below 200 lux.
4. Hot: A temperature sensor shows above 15C.
5. Moved: A motion sensor detects a motion.
• (T2: context subscription) Each subject binds a registered context to an appliance service. Specifically, each subject creates the following 5 bindings.
1. Turn on a TV when SitDown holds.
2. Turn off a TV when StandUp holds.
3. Turn on a ceiling light when Dark holds.
4. Turn on an air conditioner when Hot holds.
5. Close a curtain when Moved holds.
Each task was performed in two ways.
• [with SSB] Each subject uses the SSB.
• [without SSB] Each subject uses a Web browser to directly access the Web-APIs of the CS27-HNS.
To avoid the habituation effect, half of the subjects performed [with SSB] first, and the other half executed [without SSB] first.
The usage of the Web browser in [without SSB] is due to the fact that it is the most familiar tool for users that can invoke the Web-APIs. In the experiment, the subjects were instructed to enter URIs of the Web-APIs in the address bar of the browser. Another option would be to train the subjects to write programs; however, this is too expensive and is beyond our assumption of “end-users”.
The experiment was performed as follows.
1. We gave instructions for the experiment as well as the usage of the SSB and the Web-APIs.
2. We showed a sample task of T1 to the subjects.
3. Each subject conducted T1.
4. We showed a sample task of T2 to the subjects.
5. Each subject conducted T2.
6. We interviewed the subjects about the usability of the SSB and the browser-based service creation.
We have measured the time taken for completing the tasks, to evaluate the efficiency. We also counted the number of faults in the users’ operations as a reliability measure.
5.2 Result
Figure 3 shows a boxplot of the time taken for the subjects to complete each task. In task T1, context registration with the SSB took 76 seconds on average, which is 12% of the time taken without the SSB. Similarly, in task T2, context subscription with the SSB took only 74 seconds on average, which is 9% of the time without the SSB. It is also interesting to see that, using the SSB, every subject completed each task in as little as a minute or so, which reflects the user-friendly and intuitive design of the SSB. These results show how the SSB improves the efficiency of end-user development.
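For orientation, the absolute without-SSB times implied by those percentages are roughly (our back-of-the-envelope reading of the reported averages, rounded):

\[
T1_{\text{w/o SSB}} \approx \frac{76\,\text{s}}{0.12} \approx 630\,\text{s} \;(\approx 10.5\,\text{min}), \qquad
T2_{\text{w/o SSB}} \approx \frac{74\,\text{s}}{0.09} \approx 820\,\text{s} \;(\approx 13.7\,\text{min}).
\]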
Table 1 shows the number of faults made by subjects in each task, summarized according to (a) individual subjects and (b) fault types. Due to a trouble in recording, data for [T1 without SSB] of subject S1 was omitted.
It was surprising to see that no operational fault was made in any task with the SSB. Among the tasks without the SSB, more faults occurred in T2 since the task of subscription is generally more troublesome than registration. Investigating the type of faults, we found that the subjects were likely to mistype very long URIs of the Web-APIs in the browser. It was also seen that the subjects were often at a loss to identify the correct sensors and appliances. These faults were well circumvented by the GUI of the SSB, which reflects the reliability of the service creation.
Alongside these successful results, we also recognized limitations of the SSB in the subsequent interview. A subject pointed out: “As more and more contexts are registered, the context list of the SSB will become larger. So, it is quite hard for me to search a correct context.” The same thing happens when the number of sensors and appliances is dramatically increased. To cope with this problem, the SSB has to employ efficient search and reuse techniques for the contexts and services, which is left for our future work.
Table 1 Experiment Result: number of faults
(a) Number of faults for individual subject
<table>
<thead>
<tr>
<th>Subject ID</th>
<th>w/o SSB (T1)</th>
<th>w/o SSB (T2)</th>
<th>with SSB (T1)</th>
<th>with SSB (T2)</th>
</tr>
</thead>
<tbody>
<tr><td>S1</td><td>(omitted)</td><td>6</td><td>0</td><td>0</td></tr>
<tr><td>S2</td><td>1</td><td>1</td><td>0</td><td>0</td></tr>
<tr><td>S3</td><td>2</td><td>3</td><td>0</td><td>0</td></tr>
<tr><td>S4</td><td>2</td><td>5</td><td>0</td><td>0</td></tr>
<tr><td>S5</td><td>1</td><td>1</td><td>0</td><td>0</td></tr>
<tr><td>S6</td><td>0</td><td>3</td><td>0</td><td>0</td></tr>
<tr><td>Total</td><td>6</td><td>19</td><td>0</td><td>0</td></tr>
<tr><td>Average</td><td>1.2</td><td>3.2</td><td>0</td><td>0</td></tr>
</tbody>
</table>
(b) Number of faults with respect to fault types
<table>
<thead>
<tr>
<th>Fault type</th>
<th>w/o SSB (T1)</th>
</tr>
</thead>
<tbody>
<tr><td>Wrong URI of sensor service</td><td>4</td></tr>
<tr><td>Wrong URI of appliance operation</td><td>0</td></tr>
<tr><td>Wrong argument of Web service</td><td>1</td></tr>
<tr><td>Registration to wrong sensor service</td><td>0</td></tr>
<tr><td>Subscription to wrong appliance operation</td><td>0</td></tr>
<tr><td>Wrong context name</td><td>0</td></tr>
<tr><td>Wrong context condition</td><td>1</td></tr>
</tbody>
</table>
6 Conclusion and Future Directions
We have presented a novel environment, called Sensor Service Binder, for end-user development of context-aware services. We have also conducted an experimental evaluation with non-expert users using the practical home network system, CS27-HNS. It was shown that the SSB significantly reduced the development time and the number of faults in service creation, which contributed to the efficiency and the reliability of developing context-aware services.
As for the future work, we are currently implementing several extensions of the SSB. One important issue is the discovery feature, with which users can search sensors and appliances by name, location, purpose, etc. Another issue is to share and reuse the existing contexts, facilitating the context creation and registration. For these issues, we are constructing a registry within the CS27-HNS to manage metadata for locating services and contexts.
The feature interaction [7] problem is also an important problem to be tackled, which is functional conflicts among different services. The feature interactions can occur as well within the services created by the SSB users. For instance, if different users bind incompatible appliance operations with the same context, a race condition occurs, leading to unexpected behaviors. In the future, we plan to develop a validation feature within the SSB, which detects and resolves feature interactions among the user-made services.
Acknowledgements. This research was partially supported by the Japan Ministry of Education, Science, Sports, and Culture [Grant-in-Aid for Scientific Research (C) (No.24500079), Scientific Research (B) (No.23300009)], and Kansai Research Foundation for technology promotion.
References
A Principled Approach to Reasoning about the Specificity of Rules
John Yen
Department of Computer Science
Texas A&M University
College Station, TX 77843
[email protected]
Abstract
Even though specificity has been one of the most useful conflict resolution strategies for selecting productions, most existing rule-based systems use heuristic approximation such as the number of clauses to measure a rule's specificity. This paper describes an approach for computing a principled specificity relation between rules whose conditions are constructed using predicates defined in a terminological knowledge base. Based on a formal definition about pattern subsumption relation, we first show that a subsumption test between two conjunctive patterns can be viewed as a search problem. Then we describe an implemented pattern classification algorithm that improves the efficiency of the search process by deducing implicit conditions logically implied by a pattern and by reducing the search space using subsumption relationships between predicates. Our approach enhances the maintainability of rule-based systems and the reusability of definitional knowledge.
Introduction
Specificity is a classic conflict resolution heuristic used by many rule languages from OPS through ART for selecting productions [McDermott and Forgy 1978]. It provides a convenient way for expert systems (such as R1) to describe general problem solving strategies as well as strategies for handling exceptional cases. In a similar spirit, common sense reasoning also relies on the specificity of a rule's antecedents to override conclusions drawn by more general rules when they contradict the more specific rule.
Even though the specificity of rules has long been recognized as important information for the selection of rules, few efforts have been made to develop algorithms for computing a principled measure of rules' specificity. Instead, most existing rule systems use syntactic information such as the number of clauses as a heuristic approximation to the specificity of rules. This has both encouraged, and to some extent necessitated, bad programming practices in which clauses are placed in production rules solely to outsmart the conflict resolution algorithm. As a result, it is hard to explain rules and difficult to determine how to correctly add or revise them. Two other problems with rule-based systems have often been identified by critics. First, the meaning of the terminology used by rules is often ill-defined [Swartout and Neches 1986]. This makes it difficult to determine when rules are, or should be, relevant to some shared abstraction - which, in turn, makes it difficult to find and change abstractions. Second, it is difficult to structure large rule sets [Fikes and Kehler 1985]. This makes it difficult to decompose the set into smaller, more comprehensible and maintainable subsets.
To address these problems with rule-based systems, we have developed a production system, CLASP, where the semantics of predicates used in rules are defined using a term subsumption language (LOOM)\(^1\) [Yen et al. 1989]. One of the major features of CLASP is a pattern classifier that organizes patterns into a lattice where more specific patterns are below more general ones, based on the definitions of predicates in the patterns. Using the pattern classifier, CLASP can compute a well-defined specificity relation between rules.
Related Work
The idea of using the taxonomic structure of a terminological knowledge base to infer specificity relations between rules was first introduced by CONSUL [Mark 1981]. Because rules in CONSUL mapped one description to another, the condition of a CONSUL rule is just a concept. Specificity of rules can thus be obtained directly from the concept subsumption lattice. To verify the consistency and completeness of expert systems, researchers have also developed algorithms for detecting subsumed rules based on a subset test of clauses [Suwa et al. 1982, Nguyen et al. 1985]. More recently, the problem of computing the subsumption relation between plan classes has also been explored [Wellman 1988].
---
\(^1\)Term subsumption languages refer to knowledge representation formalisms that employ a formal language, with a formal semantics, for the definition of terms (more commonly referred to as concepts or classes), and that deduce whether one term subsumes (is more general than) another using a classifier [Patel-Schneider et al. 1990]. These formalisms generally descend from the ideas presented in KL-ONE [Brachman and Schmolze 1985]. LOOM is a term subsumption based knowledge representation system developed at USC/ISI [MacGregor and Bates 1987].
**Defining Pattern Subsumption Relations**
Conceptually, a pattern \( P_2 \) is more specific than (i.e., is subsumed by) a pattern \( P_1 \) if, for all states of the fact base, a match with \( P_2 \) implies a match with \( P_1 \). To define the subsumption of patterns more formally, we need to introduce the following terminology. A pattern is denoted by \( P(X) \), where \( X \) denotes the set of variables in the pattern\(^2\). An instantiation of the pattern is denoted as \( P(\vec{x}) \) where \( \vec{x} \) is a vector of variable bindings for \( X \). For instance, the expression \( P([\text{John},\ \text{Angela},\ \text{Carl}]) \) denotes an instantiation of \( P \) that binds the pattern variables \( ?x_1, ?x_2, \) and \( ?x_3 \) to John, Angela, and Carl respectively. Let \( \mathcal{T} \) be a terminological knowledge base. Concepts and roles (i.e., relations) are unary predicates and binary predicates defined in \( \mathcal{T} \). An interpretation \( \mathcal{I}_T \) of \( \mathcal{T} \) is a pair \( (\mathcal{D}, \mathcal{E}) \) where \( \mathcal{D} \) is a set of individuals described by terms in \( \mathcal{T} \) and \( \mathcal{E} \) is an extension function that maps concepts in \( \mathcal{T} \) to subsets of \( \mathcal{D} \) and roles in \( \mathcal{T} \) to subsets of the Cartesian product \( \mathcal{D} \times \mathcal{D} \), denoted \( \mathcal{D}^2 \). \( P^{\mathcal{E}}(\vec{x}) \) denotes that \( \vec{x} \) satisfies the condition of the pattern \( P \) under the extension function \( \mathcal{E} \), i.e.,
\[
\begin{aligned}
&\forall x, y \in \mathcal{D}\\
&(C\ x)^{\mathcal{E}} \iff x \in \mathcal{E}(C) \\
&(R\ x\ y)^{\mathcal{E}} \iff [x, y] \in \mathcal{E}(R) \\
&(l_1 \land l_2)^{\mathcal{E}} \iff l_1^{\mathcal{E}} \land l_2^{\mathcal{E}}
\end{aligned}
\]
where \( C \) and \( R \) denote concepts and relations defined in \( \mathcal{T} \); \( l_1 \) and \( l_2 \) denote two literals.
**Definition 1** Suppose \( P_1 \) and \( P_2 \) are patterns whose predicates are defined in a terminological knowledge base \( \mathcal{T} \). The pattern \( P_1 \) subsumes \( P_2 \), denoted as \( P_1 \subseteq P_2 \), if
\[
\forall \mathcal{I}_T = (\mathcal{D}, \mathcal{E}),\ \forall \vec{x} \in \mathcal{D}^n:\quad
P_2^{\mathcal{E}}(\vec{x}) \;\Rightarrow\; \exists \vec{y} \in \mathcal{D}^m \ \ P_1^{\mathcal{E}}(\vec{y})
\]
where \( \vec{x} \) and \( \vec{y} \) are vectors of elements in \( \mathcal{D} \), with dimension \( n \) and \( m \) respectively.
It is easy to verify that the pattern subsumption relation is reflexive and transitive.
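As a concrete illustration of Definition 1 (this worked example is ours, using the terminology of Figure 3 below), let
\[
P_1(?x) = (\text{Father } ?x), \qquad
P_2(?z, ?w) = (\text{Successful-Father } ?z) \wedge (\text{Daughter } ?z\ ?w).
\]
For every interpretation \( (\mathcal{D}, \mathcal{E}) \) and every binding \( \vec{x} = [z, w] \) that satisfies \( P_2 \), the individual \( z \) belongs to \( \mathcal{E}(\text{Successful-Father}) \subseteq \mathcal{E}(\text{Father}) \), so the binding \( \vec{y} = [z] \) satisfies \( P_1 \). Hence \( P_1 \subseteq P_2 \), i.e., \( P_2 \) is more specific than \( P_1 \).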
Several important points about our definition of the pattern subsumption relation are worth mentioning. First, the definition allows patterns with different numbers of variables to be compared with each other.
\( ^{2} \)When it is not important to refer to the variables of a pattern, we denote patterns simply by \( P \).
Classifying Conjunctive Patterns
This section describes an algorithm for classifying patterns that are conjunctions of non-negated literals (which we will refer to as conjunctive patterns). The algorithm consists of two steps. First, each pattern is normalized by making explicit in the pattern any unstated conditions logically implied by the pattern and the terminological knowledge. Second, the algorithm searches for a subsumption substitution between pairs of normalized patterns.
A General Strategy
The general strategy of CLASP's pattern classification algorithm is to simplify the subsumption test between pairs of patterns by first normalizing them. This strategy is analogous to completing a concept definition before actually classifying the concept in KL-ONE's classifier [Schmolze and Lipkis 1983]. To formally justify our approach, this section first defines normalized patterns, then describes a theorem about the subsumption test of normalized conjunctive patterns.
A pattern is normalized if it contains no implicit conditions other than those that can be deduced easily from the subsumption lattice of concepts and of roles, which has been precomputed by LOOM's classifier. More formally, we define a normalized pattern as follows:
Definition 2 A pattern \( P \) is said to be normalized iff
\[ \forall l,\ \text{if } P \Rightarrow l \text{, then } \exists\, l' \text{ in } P \text{ such that } l' \Rightarrow l \]
where \( l \) and \( l' \) are literals with the same number of arguments.
We say a pattern \( \overline{P} \) is a normalized form of \( P \) if and only if \( \overline{P} \) is normalized and \( P \) equals \( \overline{P} \) (i.e., they are equivalent without variable substitution).
The rationale behind normalizing patterns is to simplify the subsumption test. Without the normalization process, the search for a subsumption substitution would have to consider the possibility that a condition in the parent pattern subsumes a conjunctive subpattern of the child pattern. For example, consider rules R2 and R3 in Figure 4: a single condition of R2 may subsume a conjunctive subpattern of R3's condition, such as (College-graduate ?y) ∧ (Successful-Father ?z) ∧ (Child ?z ?w), under a suitable substitution. Having deduced the conditions implied by these conjunctive subpatterns during the normalization process, the subsumption test only needs to consider pairs of conditions with the same arity (one from the parent pattern, one from the child pattern) for testing subsumption possibility of the two patterns. Thus, normalizing patterns significantly reduces the complexity of the subsumption test. The following theorem formally states the impact of pattern normalization on the subsumption test.
Theorem 2 Suppose \( P_1 \) and \( P_2 \) are two normalized conjunctive patterns:
\[ P_1 = l_1^1 \wedge l_2^1 \wedge \ldots \wedge l_n^1 \]
\[ P_2 = l_1^2 \wedge l_2^2 \wedge \ldots \wedge l_m^2 \]
where \( l_i^1 \) and \( l_j^2 \) are literals without negations. The pattern \( P_1 \) subsumes \( P_2 \) if and only if there exists a subsumption substitution \( S \) such that every literal \( l_i^1 \) in \( P_1 \) subsumes at least one literal in \( P_2 \) with the same arity, i.e.,
\[ P_1 \subseteq P_2 \iff \exists S \left[\, \forall\, l_i^1 \text{ in } P_1,\ \exists\, l_j^2 \text{ in } P_2 \text{ such that } l_j^2 \xrightarrow{S} l_i^1 \,\right] \]
To prove the theorem, we first introduce the following lemma.
Lemma 1 Suppose \( P_1 \) is a conjunction of \( n \) literals, i.e., \( P_1 = l_1^1 \wedge l_2^1 \wedge \ldots \wedge l_n^1 \), where \( l_1^1, l_2^1, \ldots, l_n^1 \) are literals without negations. The pattern \( P_1 \) subsumes a pattern \( P_2 \) if and only if there exists a subsumption substitution such that each literal \( l_i^1 \) subsumes the pattern \( P_2 \) under the substitution, i.e.,
\[
P_1 \subseteq P_2 \iff \exists S \text{ such that } (l_1^1/S \subseteq P_2) \land (l_2^1/S \subseteq P_2) \land \ldots \land (l_n^1/S \subseteq P_2)
\]
Proofs of Lemma 1 and Theorem 2 can be found in [Yen 1990].
Comparing Equations 3 and 7, we can see immediately that the complexity of the subsumption test has been reduced significantly by first normalizing the patterns. Based on Theorem 2, computing whether \( P_2 \) is more specific than \( P_1 \) only requires searching for a subsumption mapping such that each condition (i.e., literal) in \( P_1 \) subsumes at least one condition (i.e., literal) in \( P_2 \) under the mapping. We will refer to a condition of \( P_2 \) that is subsumed by a condition \( l_i^1 \) in \( P_1 \) as \( l_i^1 \)'s subsumee. The subsumption test between normalized conjunctive patterns is thus a simpler search problem. The following two sections describe the normalization of patterns and the subsumption test between normalized patterns implemented in CLASP.
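To make the search problem concrete, the following Python sketch implements the condition of Theorem 2 in its naive form. It is our own illustration, not CLASP's implementation: patterns are lists of (predicate, arguments) literals, variables are strings beginning with '?', and predicate subsumption is stubbed out by a small hand-built table standing in for LOOM's precomputed lattice.

```python
def literal_subsumes(l1, l2, subst, pred_subsumes):
    """Does parent literal l1 subsume child literal l2 under substitution subst?
    Returns the extended substitution (parent variable -> child term) or None."""
    p1, args1 = l1
    p2, args2 = l2
    if len(args1) != len(args2) or not pred_subsumes(p1, p2):
        return None
    new = dict(subst)
    for a1, a2 in zip(args1, args2):
        if a1.startswith('?'):                 # parent variable: bind consistently
            if new.get(a1, a2) != a2:
                return None
            new[a1] = a2
        elif a1 != a2:                         # parent constant must match exactly
            return None
    return new

def pattern_subsumes(p1, p2, pred_subsumes, subst=None):
    """Naive Theorem 2 test: every literal of the normalized parent pattern p1 must
    subsume some same-arity literal of the normalized child pattern p2 under one
    consistent substitution. Exponential in the worst case."""
    subst = {} if subst is None else subst
    if not p1:
        return True
    head, rest = p1[0], p1[1:]
    for l2 in p2:
        extended = literal_subsumes(head, l2, subst, pred_subsumes)
        if extended is not None and pattern_subsumes(rest, p2, pred_subsumes, extended):
            return True
    return False

# Hand-built stand-in for the concept/role lattice (an assumption for this sketch).
LATTICE = {'Person': {'Person', 'Father', 'Successful-Father', 'Female'},
           'Child':  {'Child', 'Daughter', 'Son'}}
def pred_subsumes(parent, child):
    return child in LATTICE.get(parent, {parent})

R2 = [('Person', ('?x',)), ('Child', ('?x', '?y'))]
R3 = [('Successful-Father', ('?z',)), ('Daughter', ('?z', '?w'))]
print(pattern_subsumes(R2, R3, pred_subsumes))   # True: R3 is more specific than R2
```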
Normalizing Patterns
The normalization step transforms each pattern into an equivalent normalized pattern. Five kinds of normalization steps have been implemented in CLASP: (1) domain and range deductions, (2) normalizing unary conditions, (3) normalizing binary conditions, (4) value restriction deductions, and (5) at least one deductions. Each normalization step will be described and illustrated with examples, based on Figures 3 and 4. These normalization steps are correct because each one transforms a pattern into an equivalent one based on the semantics of LOOM's term-forming expressions in Figure 2.
1. Domain and Range Deduction: This step deduces unary conditions about variables that appear in a binary condition using domains and ranges of the condition's predicate (i.e., a relation). For instance, this step will infer an implicit condition of R3, (Vehicle ?c), from the range of the Has-car relation.
2. Normalizing Unary Conditions: Unary conditions that involve the same variable are replaced by one unary condition whose predicate is the conjunction of the unary predicates (i.e., concepts) in the original pattern. This ensures that all patterns are transformed into a canonical form where each variable has at most one unary condition. The condition side of R2 is thus normalized by combining the two unary conditions about the variable ?y into one condition (College-graduate&Car-Owner ?y), where College-graduate&Car-Owner is the conjunction of College-graduate and Car-Owner (a code sketch of steps 1 and 2 follows this list).
3. Normalizing Binary Conditions: Binary conditions with the same arguments are collected, and replaced by a new composite binary condition that takes into account the unary conditions of its domain variable and its range variable. This ensures that all normalized patterns have at most two binary conditions for each variable pair (the argument position of the variables can be switched). For instance, the conditions in R3 (Child ?z ?w) ∧ (Female ?w) are transformed to (Daughter ?z ?w) ∧ (Female ?w).
4. Value Restriction Deduction: Suppose a pattern contains conditions of the form (:and ... (C_1 ?x) ... (R ?x ?y) ...) and the definition of \( C_1 \) in the terminological space has a value restriction on \( R \), say \( C_2 \). Then the pattern is equivalent to a pattern that has an additional unary condition \( (C_2\ ?y) \). For example, the conditions (Successful-Father ?z) and (Daughter ?z ?w) in R3 deduce an implicit condition (College-graduate ?w), because Successful-Father has been defined as a father all of whose children, which include daughters, are college graduates, as shown in Figure 3.
5. At-least-one Deduction: A pattern containing two conditions of the form (:and ... (C ?x) ... (R ?x \( \alpha \)) ...), where \( \alpha \) is either a variable or a constant, is transformed into one that replaces \( C \) by the concept \( C' \) defined below, which has an additional at-least-one number restriction on the relation \( R \).
\[
\text{(defconcept C' (:and C (:at-least 1 R)))}
\]
Following our example, the conditions (Female ?w) and (Has-car ?w ?c) in R3 now deduce another implicit condition about ?w: (Car-owner ?w), for Car-Owner has been defined to be a person who has at least one car (Figure 3).
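The following toy Python sketch (ours, with a hand-made DOMAIN/RANGE table standing in for the terminological knowledge base) illustrates the first two normalization steps; the remaining deductions would additionally consult the concept definitions of Figure 3.

```python
# Hypothetical domain/range declarations extracted from the terminology (assumption).
DOMAIN = {'Child': 'Person', 'Daughter': 'Person', 'Has-car': 'Person'}
RANGE  = {'Child': 'Person', 'Daughter': 'Female', 'Has-car': 'Vehicle'}

def domain_range_deduction(pattern):
    """Step 1: add the unary conditions implied by domains/ranges of binary conditions."""
    out = list(pattern)
    for pred, args in pattern:
        if len(args) == 2:
            for concept, var in [(DOMAIN.get(pred), args[0]), (RANGE.get(pred), args[1])]:
                lit = (concept, (var,))
                if concept is not None and lit not in out:
                    out.append(lit)
    return out

def merge_unary_conditions(pattern):
    """Step 2: collapse all unary conditions on the same variable into a single
    condition whose predicate is the conjunction of the original concepts."""
    unary, rest = {}, []
    for pred, args in pattern:
        if len(args) == 1:
            unary.setdefault(args[0], set()).add(pred)
        else:
            rest.append((pred, args))
    merged = [('&'.join(sorted(preds)), (var,)) for var, preds in unary.items()]
    return merged + rest

R3 = [('Successful-Father', ('?z',)), ('Daughter', ('?z', '?w')),
      ('Female', ('?w',)), ('Has-car', ('?w', '?c'))]
print(merge_unary_conditions(domain_range_deduction(R3)))
# e.g. ?w ends up with the single condition 'Female&Person', and (Vehicle ?c) is added
```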
<table>
<thead>
<tr>
<th>Expression</th>
<th>Interpretation</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( (\text{and } C_1\ C_2) \)</td>
<td>\( \lambda x.\ [C_1](x) \land [C_2](x) \)</td>
</tr>
<tr>
<td>\( (\text{and } R_1\ R_2) \)</td>
<td>\( \lambda x\, y.\ [R_1](x, y) \land [R_2](x, y) \)</td>
</tr>
<tr>
<td>\( (\text{at-least } 1\ R) \)</td>
<td>\( \lambda x.\ \exists y.\ [R](x, y) \)</td>
</tr>
<tr>
<td>\( (\text{all } R\ C) \)</td>
<td>\( \lambda x\, y.\ [R](x, y) \to [C](y) \)</td>
</tr>
<tr>
<td>\( (\text{domain } C) \)</td>
<td>\( \lambda y.\ [C](y) \)</td>
</tr>
<tr>
<td>\( (\text{range } C) \)</td>
<td>\( \lambda x.\ [C](x) \)</td>
</tr>
</tbody>
</table>
Figure 2: Semantics of Some Term-Forming Expressions
(defconcept Person (:primitive))
(defconcept Male (:and Person :primitive))
(defconcept Female (:and Person :primitive))
(defconcept College-graduate (:and Person :primitive))
(defrelation Child (:and :primitive (:domain Person) (:range Person)))
(defrelation Daughter (:and Child (:range Female)))
(defrelation Son (:and Child (:range Male)))
(defconcept Father (:and Male (:at-least 1 Child )))
(defconcept Successful-Father (:and Father (:all Child College-graduate)))
(defrelation Has-car (:and :primitive (:domain Person) (:range Vehicle)))
(defconcept Car-owner (:and Person (:at-least 1 Has-car)))
Figure 3: An Example of Terminological Knowledge
(defrule R2
:when
(:and (Person ?x)
(College-graduate&Car-Owner ?y)
(Child ?x ?y))
)
(defrule R3
:when
(:and (Successful-Father ?z)
(Female&College-graduate&Car-owner ?w)
(Daughter ?z ?w)
(Father ?f)
(Vehicle ?c)
(Son ?f Fred)
(Has-Car ?w ?c))
)
Figure 5: Two rules after normalization
**Reducing the Search Space**

Although an exhaustive search that considers all possible mappings cannot be avoided in the worst case, the search space of possible subsumption mappings can be significantly reduced in most cases by considering the subsumption relationships between predicates. Normally, the condition pattern of a rule consists of several different predicates, only a small percentage of which are subsumed by a predicate in another pattern. Thus, using the subsumption relationships between predicates, we can significantly reduce the search space for finding a subsumption mapping.
Our strategy is to identify potential subsumees for all literals in the parent pattern \( P_1 \). A literal \( l_2 \) is a potential subsumee of a literal \( l_1 \) if there exists a subsumption substitution \( S \) such that \( l_2 \xrightarrow{S} l_1 \). The set of potential subsumees of a unary literal determines a set of potential candidates (which we call potential images) that a variable can map to under a subsumption mapping. The set of potential subsumees of a binary literal generates mapping constraints on how pairs of variables should be mapped. Potential images are used to reduce the branching factor of the search space, and mapping constraints are used to prune the search tree. This is illustrated using the example in Figure 5. Only two conditions in R3, (Son ?f Fred) and (Daughter ?z ?w), can potentially be subsumed by (Child ?x ?y) in R2. Since (Child ?x ?y) must have a subsumee under a subsumption mapping, we can infer that any subsumption mapping that proves
R3 is more specific than R2 has to satisfy one of the following two mapping constraints: (1) If \((\text{Child} \ ?x \ ?y)\) subsumes \((\text{Son} \ ?f \ Fred)\), then the variable \(?x\) should map to \(?f\) and the variable \(?y\) should map to \(Fred\). (2) If \((\text{Child} \ ?x \ ?y)\) subsumes \((\text{Daughter} \ ?z \ ?w)\), then the variable \(?x\) should map to \(?z\) and the variable \(?y\) should map to \(?w\). Similarly, potential subsumees of a parent pattern's unary condition restrict the candidate images a variable can map to. Using the example in Figure 5 again, \((\text{Successful-father} \ ?z)\) and \((\text{Father} \ ?f)\) are the only two unary conditions in R3 that can potentially be subsumed by \((\text{Person} \ ?x)\) in R2. Hence, the potential images of \(?x\) are \(?z\) and \(?f\).
The process of reducing the search space can also detect early failure of the subsumption test. The subsumption test terminates and returns false whenever (1) it fails to find, among the terms of \(P_2\), any potential images for some variable of \(P_1\); or (2) a binary condition in \(P_1\) fails to find any binary condition in \(P_2\) as a potential subsumee.
**Searching for a Subsumption Substitution**

A subsumption mapping between two normalized patterns is constrained by the potential images of each variable in the parent pattern and the mapping constraints imposed by binary conditions of the parent pattern \(P_1\). To search for a subsumption mapping that satisfies these constraints, which are generated by the algorithms discussed in the previous sections, the pattern classifier first sorts the parent variables in increasing order of the number of their potential images, then performs a dependency-directed backtracking search. The position of a variable in the sorted list corresponds to the level at which its images are assigned in the search tree. At each node, the current assignment of variables' images is checked against the mapping constraints. If it violates a mapping constraint, the algorithm backtracks to the closest node whose assignment causes the violation.
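A compact Python sketch of this strategy follows (again ours, not CLASP's actual dependency-directed implementation; the LATTICE table and the R2/R3 encodings of Figure 5 are hand-built illustrations). Potential images are computed from the unary conditions, parent variables are ordered by how few images they have, and binary mapping constraints are checked during a plain backtracking search.

```python
def classify_more_specific(p1, p2, pred_subsumes):
    """Is the child pattern p2 more specific than the parent pattern p1?
    Patterns are lists of (predicate, argument-tuple) literals over '?'-variables."""
    # Potential images of each parent variable, from the potential subsumees of
    # the parent's unary conditions (early failure if some image set is empty).
    images = {}
    for pred, args in p1:
        if len(args) == 1 and args[0].startswith('?'):
            cand = {a[0] for q, a in p2 if len(a) == 1 and pred_subsumes(pred, q)}
            images[args[0]] = images[args[0]] & cand if args[0] in images else cand
            if not images[args[0]]:
                return False
    all_vars = {a for _, args in p1 for a in args if a.startswith('?')}
    universe = {t for _, args in p2 for t in args}        # terms occurring in p2
    domain = {v: images.get(v, universe) for v in all_vars}
    order = sorted(domain, key=lambda v: len(domain[v]))  # fewest images first

    def binary_constraints_hold(assign):
        # Every binary parent condition needs a potential subsumee consistent
        # with the (complete) assignment of parent variables.
        for pred, args in p1:
            if len(args) != 2:
                continue
            if not any(len(a) == 2 and pred_subsumes(pred, q) and
                       all((assign[x] if x.startswith('?') else x) == y
                           for x, y in zip(args, a))
                       for q, a in p2):
                return False
        return True

    def search(i, assign):
        if i == len(order):
            return binary_constraints_hold(assign)
        v = order[i]
        for t in domain[v]:
            assign[v] = t
            if search(i + 1, assign):
                return True
            del assign[v]
        return False

    return search(0, {})

# Hand-built lattice standing in for LOOM's classifier (assumption).
LATTICE = {'Person': {'Person', 'Successful-Father', 'Father'},
           'College-graduate&Car-Owner': {'College-graduate&Car-Owner',
                                          'Female&College-graduate&Car-owner'},
           'Child': {'Child', 'Daughter', 'Son'}}
def pred_subsumes(parent, child):
    return child in LATTICE.get(parent, {parent})

R2 = [('Person', ('?x',)), ('College-graduate&Car-Owner', ('?y',)), ('Child', ('?x', '?y'))]
R3 = [('Successful-Father', ('?z',)), ('Female&College-graduate&Car-owner', ('?w',)),
      ('Daughter', ('?z', '?w')), ('Father', ('?f',)), ('Vehicle', ('?c',)),
      ('Son', ('?f', 'Fred')), ('Has-Car', ('?w', '?c'))]
print(classify_more_specific(R2, R3, pred_subsumes))   # True: R3 is more specific than R2
```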
Discussion
We have shown elsewhere that CLASP's pattern classification algorithm is sound [Yen 1990]. It is also complete for a simple term subsumption language whose expressiveness is equivalent to that of \(\mathcal{FL}^{-}\) in [Brachman and Levesque 1984]. In general, an implementation of our pattern classification algorithm is sound if (1) the normalization step transforms an input pattern to an equivalent one, and (2) all identified potential subsumees are correct (which requires the classifier to be sound). An implementation of the general algorithm is complete if (1) the normalization step transforms an input pattern into its normalized equivalent form, and (2) the complete set of potential subsumees is identified for each literal of the parent pattern (which requires the classifier to be complete). A more detailed discussion on the issues regarding soundness and completeness of the pattern classification algorithm can be found in [Yen 1990].
Determining the subsumption of normalized conjunctive patterns is NP-complete, for it can be reduced from the problem of determining subgraph isomorphism for directed graphs, which is known to be NP-complete. However, the worst case rarely occurs in practice. To analyze the behavior of the algorithm in practice, we have defined normal cases \(^3\) and have shown that the complexity of the algorithm for normal cases is polynomial [Yen 1990].
Brachman and Levesque have demonstrated that there is an important tradeoff between the expressiveness of a terminological language and the complexity of its reasoner [Brachman and Levesque 1984]. A similar tradeoff between the computational complexity of the normalization process and the expressiveness of the terminological language has also been investigated [Yen 1990].
Summary
We have presented a principled approach to computing the specificity of rules whose conditions are constructed from terms defined using a terminological language. Based on a formal definition of pattern subsumption relation, we first show that the subsumption test between conjunctive patterns can be viewed as a search problem. Then we describe a pattern classification algorithm that improves the efficiency of the search process in two ways. First, implicit conditions logically implied by a pattern are made explicit before the search step. Second, the algorithm attempts to reduce the search space using information about the subsumption relation between predicates.
Our approach offers several important benefits to the developers of rule-based systems. First, the pattern classifier makes it possible to provide, for the first time, a principled account of the notion of rule-specificity as a guide to conflict resolution. This will greatly improve the predictability of rule-based systems, and thus alleviate the problems in maintaining them. Second, using the pattern classifier to compute the specificity of methods, CLASP is able to generalize methods in object-oriented programming for describing complex situations to which a method applies. Third, separating definitional knowledge from rules enhances the reusability of knowledge and the explanation capability of the system. Finally, the pattern classifier is also the enabling technology for our future development of a rule base organizer, which automatically determines groupings of a large set of rules based on the semantics of rules and rule classes.
Acknowledgements
I would like to thank Robert Neches for his encouragement and support of this research. I am also grateful to Robert MacGregor, Bill Swartout, and David Benjamin for their fruitful ideas regarding the pattern classification algorithm. Finally, the research on CLASP has benefited from many discussions with Paul Rosenbloom and John Granacki. Part of the work described in this paper was supported by Engineering Excellence Fund at Texas A&M University.
References
An argumentation framework with uncertainty management designed for dynamic environments
Marcela Capobianco and Guillermo R. Simari
Artificial Intelligence Research and Development Laboratory
Department of Computer Science and Engineering
Universidad Nacional del Sur – Av. Alem 1253, (8000) Bahía Blanca ARGENTINA
Email: {mc,grs}@cs.uns.edu.ar
Abstract. Nowadays, data intensive applications are in constant demand and there is a need for computing environments with better intelligent capabilities than those present in today’s Database Management Systems (DBMS). To build such systems we need formalisms that can perform complex inferences, obtain the appropriate conclusions, and explain the results. Research in argumentation could provide results in this direction, providing means to build interactive systems able to reason with large databases and/or different data sources.
In this paper we propose an argumentation system able to deal with explicit uncertainty, a vital capability in modern applications. We have also provided the system with the ability to seamlessly incorporate uncertain and/or contradictory information into its knowledge base, using a modular upgrading and revision procedure.
1 Introduction and motivations
Nowadays, data intensive applications are in constant demand and there is a need for computing environments with better intelligent capabilities than those present in today’s Database Management Systems (DBMS). Recently, there has been progress in developing efficient techniques to store and retrieve data, and many satisfactory solutions have been found for the associated problems. However, the problem of how to understand and interpret a large amount of information remains open, particularly when this information is uncertain, imprecise, and/or inconsistent. To do this we need formalisms that can perform complex inferences, obtain the appropriate conclusions, and explain the results.
Research in argumentation could provide results in this direction, providing means to build interactive systems able to reason with large databases and/or different data sources, given that argumentation has been successfully used to develop tools for common sense reasoning [8, 4, 14].
Nevertheless, there exist important issues that need to be addressed before argumentation can be used in this kind of practical application. A fundamental one concerns the quality of the information expected by argumentation systems: most of them are unable to deal with explicit uncertainty, which is a vital capability in
modern applications. Here, we propose an argumentation-based system that addresses this problem, incorporating possibilistic uncertainty into the framework following the approach in [9]. We have also provided the system with the ability to seamlessly incorporate uncertain and/or contradictory information into its knowledge base, using a modular upgrading and revision procedure.
This paper is organized as follows. First, we present the formal definition of our argumentation framework showing its fundamental properties. Next, we propose an architectural software pattern useful for applications adopting our reasoning system. Finally, we state the conclusions of our work.
2 The OP-DeLP programming language: fundamentals
Possibilistic Defeasible Logic Programming (P-DeLP) [1, 2] is an important extension of DeLP in which the elements of the language have the form \((\varphi, \alpha)\), where \(\varphi\) is a DeLP clause or fact. Below, we will introduce the elements of the language necessary in this presentation. Observation based P-DeLP (OP-DeLP) is an optimization of P-DeLP that allows the computation of warranted arguments in a more efficient way, by means of a pre-compiled knowledge component. It also permits a seamless incorporation of new perceived facts into the program codifying the knowledge base of the system. Therefore the resulting system can be used to implement practical applications with performance requirements. The idea of extending the applicability of DeLP in a dynamic setting, incorporating perception and pre-compiled knowledge, was originally conceived in [5]. Thus the OP-DeLP system incorporates elements from two different variants of the DeLP system, O-DeLP [5] and P-DeLP [9]. In what follows we present the formal definition of the resulting system.
The concepts of signature, functions and predicates are defined in the usual way. The alphabet of OP-DeLP programs generated from a given signature \(\Sigma\) is composed of the members of \(\Sigma\), the symbol ‘\(\sim\)’ denoting strong negation [11] and the symbols ‘\(\neg\)’, ‘\(\land\)’, ‘\(\lor\)’ and ‘\(\Rightarrow\)’. Terms, Atoms and Literals are defined in the usual way. A certainty weighted literal, or simply a weighted literal, is a pair \((L, \alpha)\) where \(L\) is a literal and \(\alpha \in [0, 1]\) expresses a lower bound for the certainty of \(L\) in terms of a necessity measure.
OP-DeLP programs are composed by a set of observations and a set of defeasible rules. Observations are weighted literals and thus have an associated certainty degree. In real world applications, observations model perceived facts. Defeasible rules provide a way of performing tentative reasoning as in other argumentation formalisms.
**Definition 1.** A defeasible rule has the form \((L_0 \prec L_1, L_2, \ldots, L_k,\ \alpha)\) where \(L_0\) is a literal, \(L_1, L_2, \ldots, L_k\) is a non-empty finite set of literals, and \(\alpha \in [0, 1]\) expresses a lower bound for the certainty of the rule in terms of a necessity measure.
Intuitively a defeasible rule \(L_0 \prec L_1, L_2, \ldots, L_k\) can be read as “\(L_1, L_2, \ldots, L_k\) provide tentative reasons to believe in \(L_0\)” [15]. In OP-DeLP these rules also have a certainty degree that quantifies the strength of the connection between the
ψ
(virus(b), 0.7)
(local(b), 1)
(local(d), 1)
(~filters(b), 0.9)
(~filters(c), 0.9)
(~filters(d), 0.9)
(black_list(c), 0.75)
(black_list(d), 0.75)
(contacts(d), 1)
∆
(move_inbox(X) ≺ ~filters(X), 0.6)
(~move_inbox(X) ≺ move_junk(X), 0.8)
(~move_inbox(X) ≺ filters(X), 0.7)
(move_junk(X) ≺ spam(X), 1)
(move_junk(X) ≺ virus(X), 1)
(spam(X) ≺ black_list(X), 0.7)
(~spam(X) ≺ contacts(X), 0.6)
(~spam(X) ≺ local(X), 0.7)
Fig. 1. An OP-DeLP program for email filtering
premises and the conclusion. A defeasible rule with a certainty degree 1 models a strong rule.

A set of weighted literals \(\Gamma\) will be deemed contradictory, denoted \(\Gamma \vdash \bot\), iff \(\Gamma \vdash (l, \alpha)\) and \(\Gamma \vdash (\sim l, \beta)\) with \(\alpha, \beta > 0\). In a given OP-DeLP program we can distinguish certain from uncertain information. A clause \((\gamma, \alpha)\) will be deemed certain if \(\alpha = 1\), otherwise it will be uncertain.
**Definition 2 (OP-DeLP Program).** An OP-DeLP program \(\mathcal{P}\) is a pair \(\langle \Psi, \Delta \rangle\), where \(\Psi\) is a non-contradictory finite set of observations and \(\Delta\) is a finite set of defeasible rules.
Example 1. Fig. 1 shows a program for basic email filtering. Observations describe different characteristics of email messages. Thus, virus(X) stands for “message X has a virus”; local(X) indicates that “message X is from the local host”; filters(X) specifies that “message X should be filtered”, redirecting it to a particular folder; black_list(X) indicates that “message X is considered dangerous” because of the server it is coming from; and contacts(X) indicates that “the sender of message X is in the contact list of the user”.

The first rule expresses that if the email does not match any user-defined filter then it usually should be moved to the “inbox” folder. The second rule indicates that unfiltered messages in the “junk” folder usually should not be moved to the inbox. According to the third rule, messages to be filtered should not be moved to the inbox. The following two rules establish that a message should be moved to the “junk” folder if it is marked as spam or it contains viruses. Finally, there are three rules for spam classification: a message is usually labeled as spam if it comes from a server that is in the blacklist. Nevertheless, even if an email comes from a server in the blacklist it is not labeled as spam when the sender is in the contact list of the user. Besides, a message from the local host is usually not classified as spam.
In OP-DeLP the proof method, written \(\vdash\), is defined by derivation based on the following instance of the generalized modus ponens rule (GMP): \((L_0 \prec L_1 \land \cdots \land L_k,\ \gamma), (L_1, \beta_1), \ldots, (L_k, \beta_k) \vdash (L_0, \min(\gamma, \beta_1, \ldots, \beta_k))\), which is a particular instance of the well-known possibilistic resolution rule. Literals in the set of observations \(\Psi\) are the base case of the derivation sequence: for every literal \(Q\) in \(\Psi\) with a certainty degree \(\alpha\) it holds that \((Q, \alpha)\) can be derived from \(\mathcal{P} = (\Psi, \Delta)\).
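The following Python fragment is a minimal forward-chaining sketch of this derivation notion (our illustration; rules are encoded as (head, body, weight) triples and literals as plain strings, with '~' marking strong negation). It only computes derivability degrees; the dialectical analysis described below is a separate step.

```python
def derive_all(observations, rules):
    """Exhaustively apply the possibilistic GMP: from a rule (head, body, w) and
    derived literals (b_i, d_i) conclude (head, min(w, d_1, ..., d_k)),
    keeping the best degree obtained for each literal."""
    degrees = dict(observations)          # literal -> best certainty degree so far
    changed = True
    while changed:
        changed = False
        for head, body, w in rules:
            if all(b in degrees for b in body):
                d = min([w] + [degrees[b] for b in body])
                if d > degrees.get(head, 0.0):
                    degrees[head] = d
                    changed = True
    return degrees

# A ground fragment of the email-filtering program of Fig. 1 for message d.
observations = {'local(d)': 1.0, '~filters(d)': 0.9,
                'black_list(d)': 0.75, 'contacts(d)': 1.0}
rules = [('move_inbox(d)',  ['~filters(d)'],   0.6),
         ('spam(d)',        ['black_list(d)'], 0.7),
         ('~spam(d)',       ['contacts(d)'],   0.6),
         ('~spam(d)',       ['local(d)'],      0.7),
         ('move_junk(d)',   ['spam(d)'],       1.0),
         ('~move_inbox(d)', ['move_junk(d)'],  0.8)]
print(derive_all(observations, rules))
# spam(d) gets min(0.7, 0.75) = 0.7; ~move_inbox(d) then gets 0.7; move_inbox(d) gets 0.6
```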
Given an OP-DeLP program $P$, a query posed to $P$ corresponds to a ground literal $Q$ which must be supported by an argument [15, 10].
**Definition 3.** [Argument] Let $P = (\Psi, \Delta)$ be a program, $A \subseteq \Delta$ is an argument for a goal $Q$ with necessity degree $\alpha > 0$, denoted as $(A, Q, \alpha)$, iff: (1) $\Psi \cup A \vdash (Q, \alpha)$, (2) $\Psi \cup A$ is non contradictory, and (3) there is no $A_1 \subset A$ such that $\Psi \cup A_1 \vdash (Q, \beta), \beta > 0$. An argument $(A, Q, \alpha)$ is a subargument of $(B, R, \beta)$ iff $A \subseteq B$.
As in most argumentation frameworks, arguments in OP-DeLP can attack each other. An argument $(A_1, Q_1, \alpha_1)$ counter-argues an argument $(A_2, Q_2, \beta)$ at a literal $Q$ if and only if there is a subargument $(A, Q, \gamma)$ of $(A_2, Q_2, \beta)$ (called the disagreement subargument) such that $Q_1$ and $Q$ are complementary literals. Defeat among arguments is defined by combining the counterargument relation and a preference criterion “$\preceq$”. This criterion is defined on the basis of the necessity measures associated with arguments.
**Definition 4.** [Preference criterion $\preceq$] [9] Let $(A_1, Q_1, \alpha_1)$ be a counterargument for $(A_2, Q_2, \alpha_2)$. We will say that $(A_1, Q_1, \alpha_1)$ is preferred over $(A_2, Q_2, \alpha_2)$ (denoted $(A_1, Q_1, \alpha_1) \preceq (A_2, Q_2, \alpha_2)$) iff $\alpha_1 \geq \alpha_2$. If it is the case that $\alpha_1 > \alpha_2$, then we will say that $(A_1, Q_1, \alpha_1)$ is strictly preferred over $(A_2, Q_2, \alpha_2)$, denoted $(A_1, Q_1, \alpha_1) \succ (A_2, Q_2, \alpha_2)$. Otherwise, if $\alpha_1 = \alpha_2$ we will say that both arguments are equi-preferred, denoted $(A_2, Q_2, \alpha_2) \approx (A_1, Q_1, \alpha_1)$.
**Definition 5.** [Defeat] [9] Let $(A_1, Q_1, \alpha_1)$ and $(A_2, Q_2, \alpha_2)$ be two arguments built from a program $P$. Then $(A_1, Q_1, \alpha_1)$ defeats $(A_2, Q_2, \alpha_2)$ (or equivalently $(A_1, Q_1, \alpha_1)$ is a defeater for $(A_2, Q_2, \alpha_2)$) iff (1) Argument $(A_1, Q_1, \alpha_1)$ counter-argues argument $(A_2, Q_2, \alpha_2)$ with disagreement subargument $(A, Q, \alpha)$; and (2) Either it is true that $(A_1, Q_1, \alpha_1) \succ (A, Q, \alpha)$, in which case $(A_1, Q_1, \alpha_1)$ will be called a proper defeater for $(A_2, Q_2, \alpha_2)$, or $(A_1, Q_1, \alpha_1) \approx (A, Q, \alpha)$, in which case $(A_1, Q_1, \alpha_1)$ will be called a blocking defeater for $(A_2, Q_2, \alpha_2)$.
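As a small illustration of Definitions 4 and 5 (ours, not part of the original formalization), the kind of defeat can be decided purely from the necessity degrees of the attacker and of the disagreement subargument it counter-argues:

```python
def defeat_type(attacker_degree, disagreement_degree):
    """Classify an attack per Definitions 4 and 5 (degrees only; the
    counterargument relation itself is assumed to hold already)."""
    if attacker_degree > disagreement_degree:
        return 'proper defeater'
    if attacker_degree == disagreement_degree:
        return 'blocking defeater'
    return 'no defeat'

print(defeat_type(0.7, 0.6))   # 'proper defeater', as for B attacking A in Example 2 below
```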
As in most argumentation systems [7, 13], OP-DeLP relies on an exhaustive dialectical analysis which makes it possible to determine if a given argument is ultimately undefeated (or warranted) wrt a program $P$. An argumentation line starting in an argument $\langle A_0, Q_0, \alpha_0 \rangle$ is a sequence $[\langle A_0, Q_0, \alpha_0 \rangle, \langle A_1, Q_1, \alpha_1 \rangle, \ldots, \langle A_n, Q_n, \alpha_n \rangle]$ that can be thought of as an exchange of arguments between two parties, a proponent (evenly-indexed arguments) and an opponent (oddly-indexed arguments). In order to avoid fallacious reasoning, argumentation theory imposes additional constraints on such an argument exchange to be considered rationally acceptable wrt an OP-DeLP program $P$, namely:
1. **Non-contradiction**: given an argumentation line $\lambda$, the set of arguments of the proponent (resp. opponent) should be non-contradictory wrt $P$. Non-contradiction for a set of arguments is defined as follows: a set $S = \bigcup_{i=1}^{n} \{\langle A_i, Q_i, \alpha_i \rangle\}$ is contradictory wrt $P$ iff $\Psi \cup \bigcup_{i=1}^{n} A_i$ is contradictory.
2. **No circular argumentation**: no argument $(A_j, Q_j, \alpha_j)$ in $\lambda$ is a subargument of an argument $(A_i, Q_i, \alpha_i)$ in $\lambda$, $i < j$.
3. **Progressive argumentation**: every blocking defeater $(A_i, Q_i, \alpha_i)$ in $\lambda$ is defeated by a proper defeater $(A_{i+1}, Q_{i+1}, \alpha_{i+1})$ in $\lambda$.
An argumentation line satisfying the above restrictions is called *acceptable*, and can be proved to be finite. Given a program $\mathcal{P}$ and an argument $\langle A_0, Q_0, \alpha_0 \rangle$, the set of all acceptable argumentation lines starting in $\langle A_0, Q_0, \alpha_0 \rangle$ accounts for a whole dialectical analysis for $\langle A_0, Q_0, \alpha_0 \rangle$ (i.e. all possible dialogs rooted in $\langle A_0, Q_0, \alpha_0 \rangle$), formalized as a *dialectical tree*, denoted $\mathcal{T}_{\langle A_0, Q_0, \alpha_0 \rangle}$. Nodes in a dialectical tree $\mathcal{T}_{\langle A_0, Q_0, \alpha_0 \rangle}$ can be marked as *undefeated* and *defeated* nodes (U-nodes and D-nodes, resp.). A dialectical tree will be marked as an And-Or tree: all leaves in $\mathcal{T}_{\langle A_0, Q_0, \alpha_0 \rangle}$ will be marked U-nodes (as they have no defeaters), and every inner node is to be marked as D-node iff it has at least one U-node as a child, and as U-node otherwise. An argument $\langle A_0, Q_0, \alpha_0 \rangle$ is ultimately accepted as valid (or *warranted*) iff the root of $\mathcal{T}_{\langle A_0, Q_0, \alpha_0 \rangle}$ is labeled as U-node.
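The marking procedure itself is a one-line And-Or recursion; the sketch below (ours) represents a dialectical tree node simply as the list of its defeaters' subtrees:

```python
def mark(node):
    """Leaves (no defeaters) are U-nodes; an inner node is a D-node iff at least
    one child is a U-node, and a U-node otherwise."""
    if not node:
        return 'U'
    return 'D' if any(mark(child) == 'U' for child in node) else 'U'

# The single argumentation line of Example 2 below: A is defeated by B, B by D.
tree_for_A = [[[]]]          # A -> [B], B -> [D], D -> []
print(mark(tree_for_A))      # 'U': the root is undefeated, so the query is warranted
```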
**Definition 6.** [Warrant][9] Given a program $\mathcal{P}$, and a literal $Q$, $Q$ is warranted wrt $\mathcal{P}$ iff there exists a warranted argument $\langle A, Q, \alpha \rangle$ that can be built from $\mathcal{P}$.
To answer a query for a given literal we should see if there exists a warranted argument supporting this literal. Nevertheless, in OP-DeLP there may be different arguments with different certainty degrees supporting a given query. This fact was not considered in [9], but we are clearly interested in finding the warranted argument with the highest certainty degree.
**Definition 7.** [Strongest Warrant] Given a program $\mathcal{P}$, and a literal $Q$, we will say that $\alpha$ is the strongest warrant degree of $Q$ iff (1) there exists a warranted argument $\langle A, Q, \alpha \rangle$ that can be built from $\mathcal{P}$ and (2) no warranted argument $\langle B, Q, \beta \rangle$ such that $\beta > \alpha$ can be built from $\mathcal{P}$.
Note that to find out the strongest warrant degree for a given literal $Q$ we need to find the strongest warranted argument supporting it, that is, the warranted argument supporting $Q$ with the highest certainty degree. Then, to find the strongest warrant degree for a literal $Q$ we must first build the argument $A$ that supports the query $Q$ with the highest possible certainty degree and see if $A$ is a warrant for $Q$. Otherwise we must find another argument $B$ for $Q$ with the highest certainty degree among the remaining ones, see if it is a warrant for $Q$, and so on, until a warranted argument is found or there are no more arguments supporting $Q$.
**Example 2.** Consider the program shown in Example 1 and let $\text{move\_inbox}(d)$ be a query wrt this program. The search for a warrant for $\text{move\_inbox}(d)$ will result in an argument $\langle A, \text{move\_inbox}(d), 0.6 \rangle$, with
$$A = \{ (\text{move\_inbox}(d) \leftarrow \neg \text{filters}(d), 0.6) \}$$
allowing to conclude that message $d$ should be moved to the folder Inbox, as it has no associated filter with a certainty degree of 0.6. However, there exists a defeater for $\langle A, \text{move\_inbox}(d), 0.6 \rangle$, namely $\langle B, \neg \text{move\_inbox}(d), 0.7 \rangle$, as there are reasons to believe that message $d$ is spam:
$$B = \{ (\neg \text{move\_inbox}(d) \leftarrow \text{move\_junk}(d), 0.8),\ (\text{move\_junk}(d) \leftarrow \text{spam}(d), 1),\ (\text{spam}(d) \leftarrow \text{black\_list}(d), 0.7) \}$$
Using the preference criterion, \( \langle B, \neg \text{move_inbox}(d), 0.7 \rangle \) is a proper defeater for \( \langle A, \text{move_inbox}(d), 0.6 \rangle \). However, two counterarguments can be found for \( \langle B, \neg \text{move_inbox}(d), 0.7 \rangle \), since message \( d \) comes from the local host, and the sender is in the user’s contacts list:
- \( \langle C, \neg \text{spam}(d), 0.6 \rangle \), where \( C = \{ (\neg \text{spam}(d) \leftarrow \text{contacts}(d), 0.6) \} \).
- \( \langle D, \neg \text{spam}(d), 0.9 \rangle \), where \( D = \{ (\neg \text{spam}(d) \leftarrow \text{local}(d), 0.9) \} \).
\( B \) defeats \( C \) but is defeated by \( D \). There are no more arguments to consider, and the resulting dialectical tree has only one argumentation line: \( A \) is defeated by \( B \) who is in turn defeated by \( D \). Hence, the marking procedure determines that the root node \( \langle A, \text{move_inbox}(d), 0.6 \rangle \) is a \textbf{U-node} and the original query is warranted.
3 Dialectical graphs and pre-compiled knowledge
To obtain faster query processing in the OP-DeLP system we integrate pre-compiled knowledge to avoid the construction of arguments that were already computed before. The approach follows the proposal presented in [5] where the pre-compiled knowledge component is required to: (1) minimize the number of stored arguments in the pre-compiled base of arguments (for instance, using one structure to represent the set of arguments that use the same defeasible rules); and (2) maintain independence from the observations that may change with new perceptions, to avoid also modifying the pre-compiled knowledge when new observations are incorporated.
Considering these requirements, we define a database structure called *dialectical graph*, which will keep a record of all possible arguments in an OP-DeLP program \( P \) (by means of a special structure named potential argument) as well as the counterargument relation among them. Potential arguments, originally defined in [5], contain non-grounded defeasible rules; they thus depend only on the set of rules \( \Delta \) in \( P \) and are independent of the set of observations \( \Psi \).
Potential arguments have been devised to sum up arguments that are obtained using *different* instances of the *same* defeasible rules. Recording every generated argument could result in storing many arguments which are structurally identical, differing only in the constants being used to build the corresponding derivations. Thus, a potential argument stands for several arguments which use the same defeasible rules. Attack relations among potential arguments can also be captured, and in some cases even defeat can be pre-compiled. In what follows we introduce the formal definitions, adapted from [5] to fit the OP-DeLP system.
**Definition 8.** [Weighted Potential argument] Let \( \Delta \) be a set of defeasible rules. A *subset* \( A \) of \( \Delta \) is a *potential argument* for a literal \( Q \) with an upper bound \( \gamma \) for its certainty degree, noted as \( \langle\langle A, Q, \gamma \rangle\rangle \), if there exists a non-contradictory set of literals \( \Phi \) and an instance \( A \) of the potential argument, obtained by finding an instance for every rule in it, such that \( \langle A, Q, \alpha \rangle \) is an argument wrt \( \langle \Phi, \Delta \rangle \) with \( \alpha \leq \gamma \), and there is no instance \( \langle B, Q, \beta \rangle \) such that \( \beta > \gamma \).
The nodes of the dialectical graph are the potential arguments. The arcs of our graph are obtained calculating the counterargument relation among the nodes previously obtained. To do this, we extend the concept of counterargument for potential arguments. A potential argument \( \langle \langle A_1, Q_1, \alpha \rangle \rangle \) counter-argues \( \langle \langle A_2, Q_2, \beta \rangle \rangle \) at a literal \( Q \) if and only if there is a non-empty potential sub-argument \( \langle \langle A, Q, \gamma \rangle \rangle \) of \( \langle \langle A_2, Q_2, \beta \rangle \rangle \) such that \( Q_1 \) and \( Q \) are contradictory literals.\(^1\) Note that potential counter-arguments may or may not result in a real conflict between the instances (arguments) associated with the corresponding potential arguments. In some cases instances of these arguments cannot co-exist in any scenario (e.g., consider two potential arguments based on contradictory observations). Now we can finally define the concept of dialectical graph:
**Definition 9.** [Dialectical Graph] Let \( \mathcal{P} = (\Psi, \Delta) \) be an OP-DeLP program. The dialectical graph of \( \Delta \), denoted as \( G_\Delta \), is a pair \( (\text{PotArg}(\Delta), C) \) such that: (1) \( \text{PotArg}(\Delta) \) is the set \( \{ \langle \langle A_1, Q_1, \alpha_1 \rangle \rangle, \ldots, \langle \langle A_k, Q_k, \alpha_k \rangle \rangle \} \) of all the potential arguments that can be built from \( \Delta \); (2) \( C \) is the counterargument relation over the elements of \( \text{PotArg}(\Delta) \).
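One possible in-memory encoding of a dialectical graph (a sketch under our own assumptions, not the database structure used by the authors) keeps the potential arguments as nodes and the counterargument relation as a set of ordered pairs:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class PotentialArgument:
    rules: frozenset          # non-grounded defeasible rules used by the argument
    conclusion: str           # non-grounded conclusion literal
    gamma: float              # upper bound for the certainty degree

@dataclass
class DialecticalGraph:
    nodes: set = field(default_factory=set)
    counterargues: set = field(default_factory=set)   # ordered (attacker, attacked) pairs

    def add_attack(self, attacker, attacked):
        self.nodes.update({attacker, attacked})
        self.counterargues.add((attacker, attacked))

a1 = PotentialArgument(frozenset({'spam(X) if black_list(X) [0.7]'}), 'spam(X)', 0.7)
a2 = PotentialArgument(frozenset({'~spam(X) if local(X) [0.7]'}), '~spam(X)', 0.7)
g = DialecticalGraph()
g.add_attack(a1, a2)
g.add_attack(a2, a1)
print(len(g.nodes), len(g.counterargues))   # 2 nodes, 2 attack pairs
```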
We have devised a set of algorithms that use the dialectical graph to improve the inference process. For space reasons these algorithms are not detailed in this work; the interested reader may consult [6] for a more detailed treatment of this subject. We have also compared the obtained algorithms theoretically with standard argument-based inference techniques (such as those used in P-DeLP). For the inference process, we found that the complexity is lowered from \( O \left( 2^{\Delta' |(2^{\Delta})^1/4} \right) \) to \( O(2^{\Delta' \Delta'}) \).
4 **A proposed architecture for OP-DeLP applications**
Applications that use the OP-DeLP system will be engineered for contexts where: (1) information is uncertain and heterogeneous, (2) handling of great volume of data flows is needed, and (3) data may be incomplete, vague or contradictory. In this section we present an architectural pattern that can be used in these applications.
Before proposing a pattern, we analyzed the characteristics of OP-DeLP applications. First, we found that data will generally be obtained from multiple sources. Nowadays the availability of information through the Internet has shifted the issue of information from quantitative stakes to qualitative ones [3]. For this reason, new information systems also need to provide assistance for judging and examining the quality of the information they receive.
For our pattern, we have chosen to use a multi-source perspective in the characterization of data quality [3]. In this case the quality of data can be evaluated by comparison with the quality of other homologous data (i.e. data from different information sources which represent the same reality but may have contradictory values). The approaches usually adopted to reconcile heterogeneity between values of data are: (1) to prefer the values of the most reliable sources, (2) to mention the source ID for each value, or (3) to store quality meta-data with the data. We have chosen to use the second approach. In multi-source databases, each attribute of a multiple source element has multiple values with the ID of their source and their associated quality expertise. Quality expertise is represented as meta-data associated with each value. We have simplified this model for an easy and practical integration with the OP-DeLP system. In our case, data sources are assigned a unique certainty degree. For simplicity's sake, we assume that different sources have different values. All data from a given source will have the same certainty degree. This degree may be obtained weighting the plausibility of the data value, its accuracy, the credibility of its source and the freshness of the data.

\(^1\) Note that \( P(X) \) and \( \sim P(X) \) are contradictory literals although they are non-grounded. The same idea is applied to identify contradiction in potential arguments.
OP-DeLP programs basically have a set of observations $\Psi$ and a set of rules $\Delta$. The set of rules is chosen by the knowledge engineer and remains fixed. The observation set may change according with new perceptions received from the multiple data sources. Nevertheless, inside the observation set we will distinguish a special kind of perceptions, those with certainty degree 1. Those perceptions are also codified by the knowledge engineer and cannot be modified in the future by the perception mechanism. To assure this, we assume that every data source has a certainty value $\gamma$ such that $0 < \gamma < 1$.
Example 3. Consider the program in Example 2. In this case data establishing a given message is from the local host comes from the same data source and can be given a certainty degree of 1. The same applies for $\text{contacts}(X)$. The algorithm that decides whether to filter a given message is another data source with a degree of 0.9, the filter that classifies a message as a virus is another data source with a degree of 0.7, and the algorithm that checks if the message came from some server in the blacklist is a different source that has a degree of 0.75. Note that we could have different virus filters with different associated certainty degrees if we wanted to build higher trust on this filter mechanism.
The scenario just described requires an updating criterion different from the one presented in [5], given that the situation regarding perceptions in OP-DeLP is much more complex. To solve this, we have devised Algorithm 1, which summarizes the different situations in two conditions. The first one acts when the complement of the literal $Q$ is already present in the set $\Psi$. Three different cases can be analyzed in this setting: (1) If both certainty degrees are equal it means that both $Q$ and its complement come from the same data source. Then the only reason for the conflict is a change in the state of affairs, thus an update is needed and the new literal is added. (2) If $\alpha > \beta$ it means that the data sources are different. Thus we choose to add $(Q, \alpha)$ since it has the higher certainty degree. (3) If $\alpha < \beta$ we keep $(\overline{Q}, \beta)$. Note that (1) is an update operation [12] while (2) and (3) are revisions over $\Psi$. The difference between updating and revision is fundamental. Updating consists in bringing the knowledge base up to date when the world changes. Revision allows us to obtain new information about a static scenario [12].
The second condition in Algorithm 1 considers the case when \( Q \) was in \( \Psi \) with a different certainty degree. Then it chooses the weighted literal with the highest degree possible. Note that the observations initially codified with a certainty degree of 1 cannot be deleted or modified by Algorithm 1.
**Algorithm 1 UpdateObservationSet**

Input: \( \mathcal{P} = (\Psi, \Delta), (Q, \alpha) \)

Output: \( \mathcal{P} = (\Psi, \Delta) \) \{with \( \Psi \) updated\}

If there exists a weighted literal \( (\overline{Q}, \beta) \in \Psi \) such that \( \beta \leq \alpha \) Then
- delete\( ((\overline{Q}, \beta)) \)
- add\( ((Q, \alpha)) \)

If there exists a weighted literal \( (Q, \beta) \in \Psi \) such that \( \beta \leq \alpha \) Then
- delete\( ((Q, \beta)) \)
- add\( ((Q, \alpha)) \)
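A direct Python transcription of Algorithm 1 follows (our sketch, following the prose description above; the handling of a perception whose literal does not occur in \( \Psi \) at all is not spelled out by the algorithm, so it is added here as an explicit assumption):

```python
def update_observation_set(psi, q, alpha):
    """psi maps literals to certainty degrees; '~' marks strong negation.
    Observations with degree 1 are never displaced because incoming
    perceptions always have alpha < 1 (sources have certainty below 1)."""
    q_bar = q[1:] if q.startswith('~') else '~' + q
    if q_bar in psi and psi[q_bar] <= alpha:     # conflicting literal, weaker or equal
        del psi[q_bar]
        psi[q] = alpha
    elif q in psi and psi[q] <= alpha:           # same literal with a lower degree
        psi[q] = alpha
    elif q not in psi and q_bar not in psi:      # brand-new perception (assumption)
        psi[q] = alpha
    return psi

psi = {'local(d)': 1.0, 'black_list(d)': 0.75}
update_observation_set(psi, '~black_list(d)', 0.8)   # displaces the weaker complement
print(psi)   # {'local(d)': 1.0, '~black_list(d)': 0.8}
```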
Finally, Fig. 2 summarizes the main elements of the OP-DeLP-based architecture. Knowledge is represented by an OP-DeLP program \( \mathcal{P} \). Perceptions from multiple sources may result in changes in the set of observations in \( \mathcal{P} \), handled by the updating mechanism defined in Algorithm 1. To solve queries the OP-DeLP inference engine is used. This engine is assisted by the dialectical graph (Def. 9) to speed up the argumentation process. The final answer to a given query \( Q \) will be yes, with the particular certainty degree of the warranted argument supporting \( Q \), or no if the system could not find a warrant for \( Q \) from \( \mathcal{P} \).
## 5 Conclusions
In this work we have defined an argumentation-based formalism that integrates uncertainty management. This system was also provided with an optimization mechanism based on pre-compiled knowledge. Using this, the argumentation system can comply with real time requirements needed to administer data and model reasoning over this data in dynamic environments. Another contribution is the architectural model to integrate OP-DeLP in practical applications to administer and reason with data from multiple sources.
As future work, we are developing a prototype based on the proposed architecture to extend the theoretical complexity analysis with empirical results and to test the integration of the OP-DeLP reasoning system in real world applications. We are also working on the integration of OP-DeLP and database management systems by means of a strongly-coupled approach.
References
SOFTWARE TOOL ARTICLE
**ReactomeFIViz: a Cytoscape app for pathway and network-based data analysis [version 2; referees: 2 approved]**
Previously titled: ReactomeFIViz: the Reactome FI Cytoscape app for pathway and network-based data analysis
Guanming Wu¹,², Eric Dawson³, Adrian Duong¹, Robin Haw¹, Lincoln Stein¹,⁴
¹Ontario Institute for Cancer Research, Toronto, Ontario M5G 0A3, Canada
²DMICE, Oregon Health & Science University, Portland, Oregon 97239, USA
³Section of Integrative Biology, Institute for Cellular and Molecular Biology, and Center for Computational Biology and Bioinformatics, The University of Texas at Austin, Austin, TX 78712, USA
⁴Department of Molecular Genetics, University of Toronto, Toronto, Ontario M5S 1A8, Canada
**Abstract**
High-throughput experiments are routinely performed in modern biological studies. However, extracting meaningful results from massive experimental data sets is a challenging task for biologists. Projecting data onto pathway and network contexts is a powerful way to unravel patterns embedded in seemingly scattered large data sets and assist knowledge discovery related to cancer and other complex diseases. We have developed a Cytoscape app called “ReactomeFIViz”, which utilizes a highly reliable gene functional interaction network combined with human curated pathways derived from Reactome and other pathway databases. This app provides a suite of features to assist biologists in performing pathway- and network-based data analysis in a biologically intuitive and user-friendly way. Biologists can use this app to uncover network and pathway patterns related to their studies, search for gene signatures from gene expression data sets, reveal pathways significantly enriched by genes in a list, and integrate multiple genomic data types into a pathway context using probabilistic graphical models. We believe our app will give researchers substantial power to analyze intrinsically noisy high-throughput experimental data to find biologically relevant information.
Introduction
High-throughput experiments, which generate large and complex data sets, are routinely performed in modern biological and clinical studies to unravel mechanisms underlying complex diseases, such as cancer. However, extracting reliable and meaningful results from these experiments is usually difficult and requires sophisticated computational tools and algorithms, which are challenging for experimental biologists to comprehend. A user-friendly software tool is extremely important for both bench and computational biologists to perform high-throughput data analysis related to cancer and other complex diseases.
Many studies have shown that alterations in pathways or networks are better correlated with complex disease phenotypes than any particular gene or gene product. Pathway- and network-based data analysis approaches project information about seemingly unrelated genes and proteins onto pathway and network contexts, and create an integrated view for researchers to understand mechanisms related to phenotypes of interest.
In this paper, we describe a software tool called ReactomeFIViz (also called the Reactome FI Cytoscape app or ReactomeFIPlugin), which can be used to perform pathway- and network-based data analysis for data generated from high-throughput experiments. This tool uses the highly reliable Reactome functional interaction (FI) network for network-based data analysis. The FI network was constructed by merging interactions extracted from human curated pathways with interactions predicted using a machine learning approach. This tool can also be used to perform pathway-based data analysis using the high-quality, human-curated pathways in the Reactome database, the most comprehensive open source pathway database.
Implementation
Software architecture
We used a conventional three-tier software architecture to implement ReactomeFIViz (Figure 1). The back-end contains several databases hosted in the open-source MySQL database engine (http://www.mysql.com). The middle server-side application uses Hibernate (http://hibernate.org) to access the databases storing FIs and cancer gene index data (see below). The server-side application also uses the in-house developed Reactome API for Object/Relational mapping to access pathway-related content stored in a database using the Reactome database schema. On the server side, a lightweight container, the Spring Framework (http://projects.spring.io/spring-framework/), and a Java RESTful framework, Jersey (https://jersey.java.net), are used to power a RESTful API for the Cytoscape front-end. The front-end Cytoscape app uses this RESTful API to communicate with the server-side application. Almost all analysis features in the app are provided by this RESTful API, which should also facilitate their use by other front-end applications, such as a web browser or tablet app.
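As a concrete illustration of this pattern, the sketch below shows a minimal Jersey (JAX-RS) resource of the kind such a RESTful API could expose. The resource path, method name and return format are illustrative assumptions; the actual ReactomeFIViz endpoints are not documented in this paper.

```java
// Minimal sketch of a Jersey (JAX-RS) resource in the style described above.
// The resource path, method name and return format are illustrative assumptions;
// the real ReactomeFIViz RESTful API is not specified in this paper.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/network")
public class FINetworkResource {

    // Returns the functional interaction partners of a gene as plain text,
    // one partner per line (hypothetical output format).
    @GET
    @Path("/interactions/{gene}")
    @Produces(MediaType.TEXT_PLAIN)
    public String getInteractions(@PathParam("gene") String gene) {
        // In the real application this would query the FI database
        // through Hibernate; here we return a stub value.
        return "TP53\nMDM2\nATM";
    }
}
```

A front end such as the Cytoscape app would then retrieve results with plain HTTP GET requests against such a URL.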
For cancer data analysis, we imported the cancer gene index (CGI, https://wiki.nci.nih.gov/display/cageneindex) data into a MySQL database and then developed a Hibernate API for the server-side application. The CGI data contains annotations for cancer-related genes. These annotations were extracted by using text-mining technologies and then validated by human curators (https://wiki.nci.nih.gov/display/cageneindex/Creation+of+the+Cancer+Gene+Index).
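A minimal sketch of how such an annotation record might be mapped for Hibernate is shown below; the entity name, table and columns are hypothetical, since the actual CGI schema used on the server side is not described here.

```java
// Hypothetical JPA/Hibernate mapping for a cancer gene index record.
// Table and column names are invented for illustration only.
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "cancer_gene_annotation")
public class CancerGeneAnnotation {

    @Id
    private Long id;                 // primary key of the annotation record

    @Column(name = "gene_symbol")
    private String geneSymbol;       // gene the annotation refers to

    @Column(name = "cancer_type")
    private String cancerType;       // cancer type extracted by text mining

    @Column(name = "sentence")
    private String sentence;         // supporting sentence validated by curators

    public String getGeneSymbol() { return geneSymbol; }
    public String getCancerType() { return cancerType; }
}
```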
The Reactome FI network is updated annually. We recommend using the latest version of the FI network. Different versions of the FI network may yield different results due to updates to gene interactions, so we have also deployed two older versions of the FI network to use for comparison of legacy data sets and to reproduce published results.
R (http://www.r-project.org) is used on the server side for executing network module-based survival analysis and other statistical computations. ReactomeFIViz uses Java-based methods on the server side to call functions in R. Users of our app do not need to install R on their machines in order to perform the statistical analyses implemented in the app.
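The paper does not state which Java-to-R bridge is used; as one possible approach, the server side could simply run an R script in a subprocess, as in the hypothetical sketch below (the script name and arguments are invented for illustration).

```java
// One possible way to call R from server-side Java: run an R script as a
// subprocess. The paper does not state which bridge ReactomeFIViz actually
// uses (e.g. Rserve, JRI, or a subprocess); the script name is hypothetical.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RSurvivalRunner {

    public static String runCoxph(String clinicalFile, String moduleFile)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "Rscript", "coxph_module_survival.R", clinicalFile, moduleFile);
        pb.redirectErrorStream(true);          // merge stderr into stdout
        Process process = pb.start();

        StringBuilder output = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append('\n');   // collect p-values etc.
            }
        }
        process.waitFor();
        return output.toString();
    }
}
```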
ReactomeFIViz is designed and implemented for Cytoscape 3, and includes all features of the Reactome FI Cytoscape plug-in for Cytoscape 2. Users are recommended to use the latest version of our app with Cytoscape 3.
**Network analysis features**
ReactomeFIViz implements multiple features for users to perform network-based data analysis, including FI sub-network construction, network module discovery, functional annotation, HotNet mutation analysis, and network module-based gene signature discovery from microarray data sets. The HotNet algorithm was implemented by porting the Python and MATLAB code of HotNet_v1.0.0 (downloaded from http://compbio.cs.brown.edu/projects/hotnet/) to Java and R. For details about other algorithms and their implementations, please refer to our previous work\(^{3,7}\).
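To convey the general idea behind diffusion-based methods such as HotNet, the sketch below spreads mutation "heat" over a column-normalised adjacency matrix by repeated neighbour averaging with a restart term. It is a schematic illustration only, not the exact HotNet formulation; the restart probability and convergence threshold are arbitrary choices.

```java
// Schematic illustration of spreading mutation "heat" over a gene network.
// This conveys the general idea behind diffusion-based methods such as HotNet,
// but it is NOT the exact HotNet algorithm; constants are arbitrary.
public class HeatDiffusionSketch {

    // w:  column-normalised adjacency matrix of the FI network
    // h0: initial heat per gene (e.g. mutation counts)
    public static double[] diffuse(double[][] w, double[] h0) {
        double restart = 0.3;          // fraction of heat retained at the source
        double[] h = h0.clone();
        for (int iter = 0; iter < 1000; iter++) {
            double[] next = new double[h.length];
            for (int i = 0; i < h.length; i++) {
                double fromNeighbours = 0.0;
                for (int j = 0; j < h.length; j++) {
                    fromNeighbours += w[i][j] * h[j];
                }
                next[i] = (1.0 - restart) * fromNeighbours + restart * h0[i];
            }
            double delta = 0.0;
            for (int i = 0; i < h.length; i++) {
                delta += Math.abs(next[i] - h[i]);
            }
            h = next;
            if (delta < 1e-9) break;   // converged
        }
        return h;
    }
}
```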
The majority of interactions in the Reactome FI network are extracted from reactions and complexes. In order to display the semantic meanings (e.g. catalysis, activation and inhibition) of these interactions, we created a Reactome FI network specific visual style. This visual style is registered as a service using the OSGi API supported by Cytoscape 3, and is applied automatically to newly constructed FI sub-networks for network analysis.
**Pathway analysis features**
Since version 4.0.0.beta, released in January 2014, ReactomeFIViz allows users to explore a list of high quality, human curated Reactome pathways, visualize Reactome pathways directly in Cytoscape, and perform pathway enrichment analysis on a list of genes based on a binomial test.\(^8\) In April 2014, we added a new experimental feature for performing integrated analysis of multiple genomic data types by adapting a factor graph based approach called "PARADIGM" into ReactomeFIViz.
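For reference, a one-sided binomial test of this kind computes, for a gene list of size $n$ with $k$ hits in a pathway whose genes make up a fraction $p$ of the annotated background, the upper-tail probability

$$P = \sum_{i=k}^{n} \binom{n}{i} p^{i} (1-p)^{n-i}.$$

This is the textbook form; the exact variant used by the Reactome implementation is described in reference 8 and is not restated here.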
The Reactome database contains several hundred manually laid-out pathway diagrams.\(^1\) Pathway diagrams in Reactome are drawn based on biochemical reactions. A reaction usually contains multiple inputs and outputs, in addition to catalysts, inhibitors and activators. The network model in Cytoscape is designed to support simple graphs containing edges between two nodes only. In order to display Reactome pathway diagrams, we adapted the pathway diagram view in the Reactome curator tool into the Cytoscape environment, and wrapped it in a JInternalFrame so that a pathway view can be displayed along with a network view in the Cytoscape desktop (Figure 2).
**Figure 2.** A Reactome pathway diagram displayed in a Reactome diagram view. The diagram view is wrapped in a JInternalFrame and hosted in the Cytoscape desktop.

**Results**

ReactomeFIViz provides a suite of features to assist users in performing pathway- and network-based data analyses (Figure 3). Based on a list of genes loaded from a file, the user can construct a sub-network, perform network clustering to search for network modules related to patient clinical or other phenotypic information, annotate network modules, perform pathway enrichment analysis, and even model pathway activities based on probabilistic graphical models\(^9\). By performing pathway- and network-based analyses using ReactomeFIViz, researchers will be able to uncover pathway and network patterns related to their studies and then link the found patterns to clinical phenotypes\(^{3,7}\).
As an example, we present results generated from network module based analysis of the TCGA ovarian cancer mutation data\(^{10}\) using ReactomeFIViz. The TCGA mutation data file and clinical information file were downloaded from the Broad Institute Firehose website (https://confluence.broadinstitute.org/display/GDAC), released in July 2012. The clinical information has been pre-processed.
For this data set, we chose the 2009 version of the FI network, and picked genes mutated in three or more samples to construct an FI sub-network. We performed network clustering, followed by survival analysis for each network module, splitting samples into two groups: samples having genes mutated in the module (Group 1) and samples not having genes mutated in the module (Group 0). For module 3, our results indicate that Group 1 samples (Figure 4, green line in the Kaplan-Meier plot\(^{11}\)) have significantly longer overall survival times compared to Group 0 samples (Figure 4, red line in the Kaplan-Meier plot) (p-value = $3.4 \times 10^{-5}$ based on the CoxPH analysis\(^{12}\)). Pathway enrichment analysis results imply that module 3 is enriched with genes in the calcium signaling pathway (http://www.genome.jp/kegg/pathway/hsa/hsa04020.html) and mitotic G2/M transition (http://www.reactome.org/cgi-bin/control_panel_st_id?ST_ID=REACT_2203.2). These results suggest that mutations impacting calcium signaling and the cell cycle may increase the survival of ovarian cancer patients. However, we may need more samples and independent data sets to validate this conclusion.
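As a reminder, the Kaplan-Meier curves compared in Figure 4 are product-limit estimates of the survival function,

$$\hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right),$$

where $d_i$ is the number of events at time $t_i$ and $n_i$ is the number of samples still at risk just before $t_i$; the quoted p-value, however, comes from the Cox proportional hazards model rather than from the curves themselves.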
Using the same version of ReactomeFIViz but different versions of the FI network may yield different results because of updates of protein interactions in the FI network. We performed the same analysis with the latest version of the FI network (the 2013 version), and found that genes in module 3 from the 2009 version of the FI network have been split among several modules discovered using the newer version of the FI network. The module having the largest overlap with module 3 from the 2009 version of the FI network has the most significant p-value from the survival analysis (p-value = 1.1 × 10^{-3} from CoxPH), which implies that our method is fairly robust against updates of the FI network. For details, see the supplementary results.
**Discussion**
Our Cytoscape app provides a suite of features for users to perform network- and pathway-based analysis of data generated from multiple experiments related to cancer and other complex diseases. Users can use our tool to search for disease-related network and pathway patterns. Our tool is built upon the Reactome database, arguably the most comprehensive human curated open source pathway database, and leverages the highly reliable functional interaction network extracted from human curated pathways. Many studies based on the FI network and this app have shown its many applications to cancer and other disease studies\(^{13-16}\).

**Figure 4.** Module 3 generated from the TCGA ovarian cancer mutation data file is significantly related to patient overall survival. The central main panel shows the network view of module 3. The bottom table displays pathway annotations for genes in module 3, with two pathways, Calcium Signaling Pathway and G2/M Transition, highlighted. The right panel shows survival analysis results using both the Cox proportional hazards (CoxPH) model and the Kaplan-Meier model. The Kaplan-Meier plot was added to the figure later.
For future development, we will focus on using probabilistic graphical models, such as factor graphs, for performing pathway modeling and linking results to patient clinical information in order to uncover cellular mechanisms related to cancer drug sensitivity, search for cancer biomarkers, and assist new drug development.
**Data availability**
Use the detailed procedures described in our user guide to reproduce the results described in the example: http://wiki.reactome.org/index.php/Reactome_FI_Cytoscape_Plugin.
**Software availability**
Homepage: http://wiki.reactome.org/index.php/Reactome_FI_Cytoscape_Plugin
Cytoscape app: http://apps.cytoscape.org/apps/reactomefiplugin
Latest source code: https://github.com/reactome-fi/CytoscapePlugIn
Source code as at the time of publication: https://github.com/F1000Research/CytoscapePlugIn
Archived source code as at the time of publication: http://www.dx.doi.org/10.5281/zenodo.10385\textsuperscript{17}
License: the Creative Commons Attribution 3.0 Unported License (http://www.reactome.org/?page_id=362).
**Author contributions**
GW and LS initiated and guided the project. GW designed the software. GW, ED, and AD implemented the software. GW, ED, RH and LS wrote the paper.
**Competing interests**
No competing interests were disclosed.
**Grant information**
This project is supported by a NIH grant (2U41HG003751-05) to LS and a Genome Canada grant (OGI 5458) to LS.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
**Acknowledgements**
We would like to thank Irina Kalatskaya, Christina Yung and other members of Dr. Lincoln Stein’s group for software testing and much feedback. We also thank Peter D’Eustachio and Irina Kalatskaya for reading and editing the manuscript. We are indebted to Alexander Pico and William Longabaugh from the Cytoscape core development team for reviewing the manuscript and providing many suggestions.
Supplementary results
In this supplementary document, we describe analysis results for the example data set using the 2013 version of the FI network. Figure S1 shows the two modules, module 4 and module 11, having the smallest p-values from survival analyses based on the CoxPH model. Figure S2 shows the Kaplan-Meier plots for modules 4 and 11. Overlapping analysis (Table S1) indicated that the 58 genes in module 3 from the 2009 version of the FI network have been spread into module 4 (18 genes, p-value = $5.5 \times 10^{-13}$ based on the hypergeometric test), module 3 (21 genes, p-value = $5.8 \times 10^{-9}$), module 26 (2 genes, p-value = 0.01), module 11 (5 genes, p-value = 0.02), module 1 (4 genes, p-value = 1.0), and module 0 (1 gene, p-value = 1.0). It is interesting to see that module 4 has the most significant overlap with module 3 from the 2009 version of the FI network, and also has the most significant p-value from the survival analyses (p-value = $1.1 \times 10^{-3}$ from CoxPH), which implies that our method is fairly robust against updates of the FI network.
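The hypergeometric p-values in Table S1 are upper-tail probabilities of the form

$$P = \sum_{i=k}^{\min(n,K)} \frac{\binom{K}{i}\binom{N-K}{n-i}}{\binom{N}{n}},$$

where $K$ is the size of module 3 from the 2009 network, $n$ is the size of the 2013 module, $k$ is the observed overlap, and $N$ is the size of the background gene set (the exact background used for the table is not stated here).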
Figure S1. Modules 4 and 11 from the TCGA ovarian cancer mutation data file, generated using the 2013 version of the FI network. The central panel shows the network view of module 4 (right, in lightseagreen) and module 11 (left, in darkkhaki). The bottom table displays pathway annotations for module 4. The right panel shows survival analysis results using the CoxPH model for modules containing no fewer than 10 genes, and the Kaplan-Meier model for modules 4 and 11.
Figure S2. Kaplan-Meier survival plots for modules 4 and 11. P-values are calculated based on the Kaplan-Meier survival model.
Table S1. Distribution of module 3 genes from the 2009 version of the FI network into modules from the 2013 version of the FI network. Column Module is for module indices, Size for numbers of genes contained by modules from the 2013 version of the FI network, Shared for numbers of genes shared between 2009 module 3 and 2013 module, and P-value for significance of sharing based on the hypergeometric test.
| Module | Size | Shared | P-value |
|---|---|---|---|
| 4 | 30 | 19 | 5.5E-13 |
| 3 | 60 | 21 | 5.8E-08 |
| 26 | 2 | 2 | 1.2E-02 |
| 11 | 16 | 5 | 2.2E-02 |
| 1 | 97 | 4 | 1.0E+00 |
| 0 | 110 | 1 | 1.0E+00 |
References
Open Peer Review
Current Referee Status: ✔ ✔
Version 2
Referee Report 22 September 2014
doi:10.5256/f1000research.5631.r6104
Nikolaus Schultz¹, B. Arman Aksoy²
¹ Computational Biology Center, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
² Memorial Sloan Kettering Cancer Center, New York, NY, USA
The authors have addressed all of our comments and concerns.
Competing Interests: No competing interests were disclosed.
We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
Version 1
Referee Report 11 August 2014
doi:10.5256/f1000research.4742.r5319
Nikolaus Schultz¹, B. Arman Aksoy²
¹ Computational Biology Center, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
² Memorial Sloan Kettering Cancer Center, New York, NY, USA
In this manuscript, the authors describe details of a Cytoscape 3.0 App: ReactomeFIViz. This app is intended to help users easily investigate genomic alteration data in the context of biological networks. The app allows users to obtain a network from the curated Reactome Functional Interaction database; map mutation, copy-number alteration or gene expression data onto the network; conduct a gene set enrichment analysis or module discovery on the simplified Reactome network; and finally, see the detailed pathway view provided by the Reactome Pathway Browser.
The previous version of the App, which was compatible with Cytoscape 2, was introduced as a supplemental software tool in an earlier study (Wu, Feng and Stein, 2010). Since then, both Cytoscape extensions, the plug-in and the app, have actively been used by users. This has also been evident by the positive reviews the App has received on the Cytoscape App Store. Considering the rich functionality and the ease of cancer genomics analysis that the App provides to the users, we believe this app is of interest to many researchers working in the fields of Computational Biology, Cancer Genomics and Systems Biology.
There is already extensive documentation about the App on the Reactome web site, however the manuscript fails to provide a general overview for non-experienced users. We have the following suggestions for the authors and if addressed, we believe, these will considerably improve the manuscript:
- In the abstract, the authors say "... pathways from Reactome and other pathway databases...". The paper, however, creates the impression that the App is highly dependent on the Reactome infrastructure and does not allow communication with other databases. We suggest that the authors remove "and other pathway databases" from the abstract or better clarify this point in the abstract.
- The details about the type of analyses and the statistical tests the App enables are provided in an earlier paper by the same group (Wu, Feng and Stein, 2010), but not in this paper. For readers that are interested in learning these details, we suggest the authors to add a sentence to the manuscript and refer to their earlier work for details. If the functionality and the implementation of these tests have changed since then, we suggest that the authors clearly list these new improved items in the paper for more clarification.
- As an example use case, the authors provide the details of a re-analyzed data set and mention that the results of these two analyses differ due to changes in Reactome FI networks. Curated databases are, of course, subject to changes over time; but it is not clear from the text whether it was the changed network that was causing the problem or the new version of the extension. We suggest that the authors provide the results of these two runs as a supplement to their paper for users to compare and contrast.
- The last sentence of the Implementation section says that the analyses were conducted in R, but can the authors clarify the requirements for this App? Do users, for example, need to install R to use this App? Related to this, the authors do not talk about the Cytoscape version that the App targets. Is the Cytoscape 2 Plugin deprecated? Do authors suggest that users install the newer version in Cytoscape 3 as an App?
- Figure 1 provides details about the implementation and the architecture of the ReactomeFIViz App, however we think the manuscript needs a simple user flow diagram that shows where different types of data are obtained, the functionalities of the App and the output the users get. The mutation-based module discovery and differential survival analysis examples mentioned in the paper are good use cases, however it is not clear from the text what the App supports other than these examples.
- Finally, we suggest the authors to provide a supplemental step-by-step guide to replicate the results that they describe in the paper. For a new user, this may provide a good base to start using the software and for many researchers it might be more convenient to have such an article provided as a supplementary file to this paper.
**Competing Interests:** No competing interests were disclosed.
We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however we have significant reservations, as outlined above.
Guanming Wu,
Thanks a lot for your review of our article. We have made changes according to your thoughtful suggestions. Please see details below:
- In the abstract, the authors say “... pathways from Reactome and other pathway databases...”. The paper, however, creates the impression that the App is highly dependent on the Reactome infrastructure and does not allow communication with other databases. **We suggest that the authors remove “and other pathway databases” from the abstract or better clarify this point in the abstract.**
Although the app uses a software infrastructure running on Reactome, the curated pathways it uses for the network and pathway enrichment analysis come in equal parts from Reactome and other pathway databases. We have clarified this in the abstract.
- The details about the type of analyses and the statistical tests the App enables are provided in an earlier paper by the same group (Wu, Feng and Stein, 2010), but not in this paper. For readers that are interested in learning these details, we suggest the authors to add a sentence to the manuscript and refer to their earlier work for details. If the functionality and the implementation of these tests have changed since then, we suggest that the authors clearly list these new improved items in the paper for more clarification.
We have added references to our original FI network paper in the paragraph introducing “Network analysis features”. Also, we added the date that version 4.0.0.beta was released in order to indicate that pathway analysis features are new features that have not been covered by our previous work.
- As an example use case, the authors provide the details of a re-analyzed data set and mention that the results of these two analyses differ due to changes in Reactome FI networks. Curated databases are, of course, subject to changes over time; but it is not clear from the text whether it was the changed network that was causing the problem or the new version of the extension. We suggest that the authors provide the results of these two runs as a supplement to their paper for users to compare and contrast.
We modified the sentence in the last paragraph describing different results using different versions of the FI network to make it clear that the same software, but different versions of the FI network, were used. We also deleted this sentence in the implementation section, “Each version is handled by its own web application on the server-side for easy software maintenance.”, to avoid confusing the readers with technical details.
As suggested, added a supplementary document to describe results from the 2013 version of the FI network.
- The last sentence of the Implementation section says that the analyses were conducted in R, but can the authors clarify the requirements for this App? Do users, for example, need to install R to use this App? Related to this, the authors do not talk about the Cytoscape version that the App targets. Is the Cytoscape 2 Plugin deprecated? Do authors suggest that users install the newer version in Cytoscape 3 as an App?
We have clarified the section to make the requirements clearer, and added a new paragraph to address the relationship of the application to Cytoscape 2 and 3.
- Figure 1 provides details about the implementation and the architecture of the ReactomeFIViz App, however we think the manuscript needs a simple user flow diagram that shows where different types of data are obtained, the functionalities of the App and the output the users get. The mutation-based module discovery and differential survival analysis examples mentioned in the paper are good use cases, however it is not clear from the text what the App supports other than these examples.
We have added a new paragraph at the top of the Results section to highlight some major features in the App, and created a new diagram to show these major features as suggested.
- Finally, we suggest the authors to provide a supplemental step-by-step guide to replicate the results that they describe in the paper. For a new user, this may provide a good base to start using the software and for many researchers it might be more convenient to have such an article provided as a supplementary file to this paper.
The application’s web-based tutorial already has provided very detailed, step-by-step instructions that replicate the results. We have added a new sentence in the Data availability section to point this out, and the README file contained in the zip file for downloading now points readers interested in replicating the analysis to the online tutorial.
**Competing Interests:** No competing interests were disclosed.
Why does this happen and how can users guard against it?
I was also not able to explore/reproduce the example presented in the paper, because (contrary to its ".txt" extension) the MAF file provided did not appear to be plain text and the needed survival data file was not included in the zip file provided. A 'README' file with instructions may be useful here.
P.S. The multiple names given to this resource - "ReactomeFIViz (also called the Reactome FI Cytoscape app or ReactomeFIPlugIn)" - are a little confusing and seem unnecessary.
**Competing Interests:** No competing interests were disclosed.
I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.
---
**Author Response 06 Sep 2014**
Guanming Wu,
Thanks a lot for your comments. We have made changes according to your comments. Please see details below:
- **My only reservation about this article is that its brevity makes it sketchy. For example, how are pathway enrichment statistics calculated? Does the implementation of HotNet offered here differ in any way from the original?**
We have now stated that a binomial test is used for pathway enrichment analysis in the "Pathway analysis features" section and cited a reference (reference 8) for it. We also added a sentence to point out that our implementation of HotNet was done by porting the original Python and Matlab code to Java and R, and so the original algorithm is unchanged.
- **I am especially intrigued that the example use-case given is reported to work well with the 2009 version of the ReactomeFI network, but apparently the discovered modules cannot be found with the newer versions of the FI because the underlying interactions have been "spread into several modules in the newer version of the FI network". Why does this happen and how can users guard against it?**
Because new interactions have been added into the latest version of the FI network, these new interactions may change the network clustering results. However, we still find some network modules, which overlap significantly with the original module and have significant p-values, though higher than the original one, from survival analysis. As suggested by another reviewer, we have added a supplementary document to describe results from the 2013 version of the FI network. As stated in the manuscript, we recommend that the user should use the latest version of the FI network, and check with previous versions of the FI network to investigate the stability of found network modules.
I was also not able to explore/reproduce the example presented in the paper, because (contrary to its ".txt" extension) the MAF file provided did not appear to be plain text and the needed survival data file was not included in the zip file provided. A 'README' file with instructions may be useful here.
We apologize for the problem. It turned out that the zip file was collapsed somehow. We have fixed the problem, and added a simple README file as suggested.
The multiple names given to this resource - "ReactomeFIViz (also called the Reactome FI Cytoscape app or ReactomeFIPlugIn)" - are a little confusing and seem unnecessary.
Because of the development history of this software, the same app has been called different names. For example, prior to Cytoscape 3, all Cytoscape extensions were called “plug-ins”, but the nomenclature has now been changed to “app”. We have tried to minimize the confusion by using ReactomeFIViz consistently and referring to the names of earlier versions of the software just once.
**Competing Interests:** No competing interests were disclosed.
QuARS Express: A Tool for Evaluating Natural Language Requirements
A. Bucchiarone, S. Gnesi, G. Lami and G. Trentanni
Istituto di Scienze e Tecnologie dell’Informazione, CNR, Pisa, Italy
{antonio.bucchiarone,stefania.gnesi,giuseppe.lami,gianluca.trentanni}@isti.cnr.it
A. Fantechi
Dipartimento di Sistemi e Informatica
Università degli studi di Firenze, Italy
[email protected]
Abstract
Requirements analysis is an important phase in a software project. It is often performed in an informal way by specialists who review documents looking for ambiguities, technical inconsistencies and incompleteness. Automatic evaluation of Natural Language (NL) requirements documents has been proposed as a means to improve the quality of the system under development. We show how the tool QuARS Express, introduced in a quality analysis process, is able to manage complex and structured requirement documents containing metadata, and to produce an analysis report rich in categorized information that points out linguistic defects and provides indications about the writing style of NL requirements. In this paper we report our experience in the automatic analysis, using this tool, of a large collection of natural language requirements produced inside the MODCONTROL project.
1 INTRODUCTION
The achievement of quality in system and software requirements is the first step towards the development of a reliable and dependable product. It is well known that inaccuracies in requirements documents can cause serious problems in the subsequent phases of system and software development. The availability of techniques and tools for the analysis of requirements may improve the effectiveness of the requirements process and the quality of the final product. In particular, the availability of automatic tools for the quality analysis of Natural Language (NL) requirements [13] is recognized as a key factor. QuARS (Quality Analyzer for Requirements Specifications) [24] was introduced as an automatic analyzer of such requirements documents. It performs a lexical analysis of requirements, detecting ambiguous terms or wordings. In this paper we introduce QuARS Express, a modified version of QuARS, specialized for the analysis of a large collection of NL requirements produced inside the EU/IP MODTRAIN project, subproject MODCONTROL [12]. MODCONTROL addresses the standardization of an innovative Train Control and Monitoring System (TCMS) for the future interoperable European trains. In the specification phase for TCMS, project partners have gathered requirements from different existing sources. These requirements had to be consolidated, harmonized and refined among the various project partners. An analysis of the natural language requirements by means of automatic tools has been considered an added value for guaranteeing the successful outcome of the project, due to its capability to point out potential sources of ambiguity and other weaknesses. TCMS requirements have then been stored in a single repository, associating with each requirement several metadata attributes providing several notions of traceability (to the author, to the package, and so on). In order to be able to use QuARS on the TCMS requirements it was necessary to interface it with the repository. A modified version of the QuARS tool (QuARS Express) has therefore been developed for the MODCONTROL project to address these needs. In particular, QuARS Express is able to handle a more complex and structured data format containing metadata and produces an analysis report rich in categorized information. The information grows as a function of the number of metadata items available (e.g. as a function of the number of authors, the number of packages and so on), and the size of the report grows accordingly and can be composed of several pages. As an improvement over the simple text-based report made by QuARS, the new report exploits HTML technology to produce structured hypertext pages. We have analyzed using QuARS Express the Functional and System Requirements of TCMS, including more than 5700 requirements. The results of the analysis have shown that the analysis process based on QuARS Express not only points out linguistic defects, but also provides some indications on the writing style of different NL requirements authors (from different partners), giving them the opportunity to become aware of defects and of potential improvements.
In the next section we briefly present the MODCONTROL case study. In Section 3 we introduce QuARS, and in Section 4 we show how it has been modified to cope with the needs of the MODCONTROL project. In Section 5 we present the quality analysis process used in the project, in which QuARS Express is used together with other tools (i.e., IBM RequisitePro and SoDA). In Section 6 we discuss the experience gained in MODCONTROL, while conclusions and future work are presented in Section 7.
2 MODCONTROL TCMS CASE STUDY
A key objective of MODCONTROL is the standardization of TCMS functional modules and their interfaces with other subsystems on-board and external to the train. The TCMS controls and monitors the various subsystems of a train, providing the necessary information to the driver. It also performs other integrational tasks like allowing train-wide diagnosis and maintenance.
The MODCONTROL approach is to elaborate a Functional Requirements Specification (FRS) and a System Requirements Specification (SRS) for the new generation of TCMS. These specifications will aim at the standardization of essential interfaces of the TCMS with other major subsystems of the train, such as Traction Control, Air Conditioning, Doors, Brakes or Auxiliary Power Distribution. During MODCONTROL’s specification phase, project partners gather requirements from different sources such as specifications of existing trains, standards or drafted specifications from other EU projects. These requirements are then consolidated, harmonized and refined among the project partners in several review sessions.
For the production of harmonized and consistent FRS and SRS, the collection of requirements into a common, project-wide structure is essential.
The SRD (System Requirements Document) has been generated from the common server of the MODTRAIN project and is the result of the input provided by the project partners. The SRD, expressed as natural language sentences, is in its current status composed of more than 5700 requirements, categorized in the following way:
- Functional Requirements (FREQ): Requirements for a TCMS function.
- System Requirements (SREQ): Requirements for devices carrying some functions (or sub-functions).
- Glossary Items (TERM): Identifies all glossary items within the project.
- Use Cases (UC): Description of use cases in the project.
As previously said, MODCONTROL aims to produce the Requirements Specifications (FRS and SRS) for a new generation of train control systems. It is therefore evident that the produced specifications should not be open to misinterpretation due to weaknesses and ambiguities in the NL requirements for TCMS. An added difficulty from this point of view was the fact that requirements have been produced by several partners and then merged into a single repository, with consequent problems due to heterogeneous writing styles. A set of writing rules was enforced in the project [11] and these rules were (almost always) followed by the various partners when inserting new requirements. Using RequisitePro [22], each requirement has been stored in a repository; a requirement consists of several attributes, namely:
- Text: provides the NL text of a single requirement;
- Source: indicates from which previous product requirements document, if any, the requirement derives. It may reveal that a defective requirement has actually been borrowed from some standard and hence cannot be resolved unless a standard change request is issued;
- Responsibility: refers to the person who has actually inserted the requirement in the repository. Using it, it is therefore possible to ask an individual to resolve a potential weakness, either by correcting the requirement or by recognizing a so-called false defect;
- Package: indicates which part of the system the requirement refers to;
- Type: describes the category the requirement belongs to (i.e., Functional, Architectural, Performance, Real-time, etc.).
3 NL REQUIREMENTS ANALYSIS
An NL requirements document composed from different sources may suffer from differences in style and accuracy, producing an unbalanced and ambiguous final requirements document. Several approaches can be followed to ensure a good quality requirements document. One approach is the linguistic analysis of an NL requirements document, aimed at removing as many readability and ambiguity issues as possible. Several studies dealing with the evaluation and the achievement of quality in NL requirements documents can be found in the literature, and natural language processing (NLP) tools have recently been applied to NL requirements documents for checking consistency and completeness. Among such tools, QuARS [4, 3, 24] (see the next subsection) and ARM [9, 26] perform a lexical analysis of documents, detecting and possibly correcting ambiguous terms or wordings, while tools such as LOLITA [10] and Circe-Cico [1] exploit syntactic analyzers to detect ambiguous sentences having different interpretations.
### 3.1 QuARS
In the context of MODCONTROL, the tool QuARS (Quality Analyzer for Requirements Specifications) was initially chosen for the evaluation of the TCMS requirements document, since it had also been used in several previous projects [2, 5, 6]. QuARS performs an initial parsing of the requirements for the automatic detection of potential linguistic defects that can cause ambiguity problems impacting the following development stages.
The functionalities provided by QuARS are:
- **Defect Identification**: QuARS performs a linguistic analysis of a requirements document in plain text format and points out the sentences that are defective according to the expressiveness quality model described in [4, 3]. The defect identification process is split into two parts: (i) the "lexical analysis", capturing optionality, subjectivity, vagueness and weakness defects by matching candidate defective words against a corresponding set of dictionaries; and (ii) the "syntactical analysis", capturing implicity, multiplicity and under-specification defects. Detected defects may, however, be false defects. This may occur mainly for three reasons: (i) a correct usage of a candidate defective word, (ii) a usage of a candidate defective wording which is not usually considered a defect in the specific system or domain, and (iii) a possible source of ambiguity inserted on purpose to give more freedom to implementors. For this reason, a false positive masking feature is provided.
- **Requirements clustering**: The capability to handle collections of requirements, i.e. the capability to highlight clusters of requirements holding specific properties, can facilitate the work of the requirements engineers.
- **Metrics derivation**: Metrics have been defined in QuARS for evaluating the quality of an NL requirements document, with respect to measures of the readability of the document plus measures of the percentage of defects throughout the whole document. Among the metrics calculated by QuARS, we cite the readability index and the defect rate. The readability index is given by the Coleman-Liau formula [17]. The reference value of this formula for an easy-to-read technical document is 10; if it is greater than 15, the document is considered difficult to read. The defect rate is the percentage ratio of defects in the document with respect to the performed kind of analysis.
- **View derivation**: A View is a subset of the input requirements document, consisting of those sentences that deal with particular quality attributes or other non-functional aspects of the system. The view derivation identifies and collects together those sentences belonging to a given View. The availability of Views makes the detection of inconsistencies and incompleteness easier, because the reviewer only has to consider smaller sets of sentences where possible defects can be found with much less effort.
### 4 QuARS Express
QuARS Express exploits the same core engine as QuARS, but the huge number of requirements to be analyzed called for a simpler-to-use tool, producing richer reports and able to manage a minimum metadata set. To address these needs, four improvements have been implemented:
- a new graphical user interface has been developed allowing the user to perform the time-consuming analysis in a click (Figure 1);
- since the requirements and the metadata are stored in a repository based on RequisitePro, the tool has been interfaced to it by means of the SoDA plug-in [23]. This required the definition of a text format handling five metadata fields: a unique requirement ID, the Responsibility, the Type, the Source, and the Package. Any requirement is traceable by means of at least one of its five metadata fields, and the produced report is tailored to be used both for analysis and correction purposes and for productivity investigations. The text format is illustrated in Figure 2 with an example (a parsing sketch under an assumed layout is given after this list);
- several readability analyses have been introduced, allowing requirement authors to improve their writing style;
- the set of metrics has been enriched by adding statistics both on the whole document and on requirements subsets singled out by means of metadata fields.
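As flagged in the list above, the sketch below parses one requirement record under an assumed tab-separated layout of the five metadata fields plus the requirement text; the actual SoDA template format is the one shown in Figure 2 and may differ.

```java
// Illustrative parser for a requirement record. The tab-separated layout
// (ID, Responsibility, Type, Source, Package, Text) is an assumption made for
// this sketch; the real format produced by the SoDA template is defined in
// Figure 2 of the paper and may differ.
public class RequirementRecord {
    public final String id, responsibility, type, source, pkg, text;

    private RequirementRecord(String[] f) {
        id = f[0]; responsibility = f[1]; type = f[2];
        source = f[3]; pkg = f[4]; text = f[5];
    }

    public static RequirementRecord parse(String line) {
        String[] fields = line.split("\t", 6);   // limit 6 keeps tabs inside the text
        if (fields.length != 6) {
            throw new IllegalArgumentException("Malformed requirement record: " + line);
        }
        return new RequirementRecord(fields);
    }
}
```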
Figure 3 shows a feature-based comparison between QuARS and QuARS Express. Although most of the features are shared, it is clear that the two tools are complementary rather than one being an extension of the other.
In the following, the QuARS Express features are described in more detail.
**Defect Identification** As we already said, QuARS Express shares with QuARS the core analysis engine and produces the same analysis results. These are based on the same quality model, and divided into lexical analysis, capturing *optionality*, *subjectivity*, *vagueness* and *weakness* defects, and syntactic analysis, capturing *implicity*, *multiplicity* and *under-specification* defects.
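To make the lexical part of the quality model concrete, the following sketch flags the vague words found in a sentence using a tiny illustrative vagueness dictionary; the real QuARS dictionaries (for optionality, subjectivity, vagueness and weakness) and matching rules are far richer and configurable.

```java
// Minimal illustration of dictionary-based lexical defect detection in the
// spirit of the QuARS quality model. The dictionary below is a tiny example;
// the real QuARS dictionaries are far more complete.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class VaguenessChecker {

    private static final Set<String> VAGUE_WORDS = new TreeSet<>(
            Arrays.asList("adequate", "appropriate", "flexible", "efficient", "user-friendly"));

    // Returns the vague words found in a requirement sentence (case-insensitive).
    public static List<String> findDefects(String sentence) {
        List<String> hits = new ArrayList<>();
        for (String token : sentence.toLowerCase().split("[^a-z\\-]+")) {
            if (VAGUE_WORDS.contains(token)) {
                hits.add(token);
            }
        }
        return hits;
    }
}
```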
**Readability Analysis** In QuARS Express, seven readability indexes have been introduced. This new feature exploits the GNU program "Diction/Style" [14]. The Style program analyzes the surface characteristics of the writing style of a document and calculates the values of seven readability indexes well known in the readability research field: Kincaid [19], ARI [15], Coleman-Liau [17], Flesch [18], FOG [20], LIX [21] and SMOG [16]. These readability indexes are a mathematical attempt, based on word and syllable counts, to estimate the minimum US school grade the reader needs to understand the text. As a consequence, there is not an actually *good* value for any of them, but we can assume that technical writings, as requirements documents are, present an unavoidable reading difficulty that leads to scores higher than those of common popular writings such as newspapers, novels, etc. The readability scores are shown in each report file for each defective sentence, alongside the lexical and syntactic analysis results. Moreover, the readability scores calculated for all the sentences, even the non-defective ones, and for the whole document are reported as well, but in separate report files.
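For example, the Coleman-Liau index mentioned above is commonly given as

$$CLI = 0.0588\,L - 0.296\,S - 15.8,$$

where $L$ is the average number of letters per 100 words and $S$ is the average number of sentences per 100 words; the exact constants used by the Style program may be rounded slightly differently.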
**Metrics and Statistics derivation** The set of metrics has been enriched with the analysis defect rate and error defect rate, explained in detail in the following.
- **Defect Rate.** It is the percentage ratio between the number of requirements with at least a defect and the total number of analyzed requirements. Moreover, the same ratio is calculated with respect to requirements subsets catalogued by metadata fields.
- **Analysis Defect Rate.** It is the percentage ratio between the number of requirements with at least a defect of a chosen type (Optionality, Subjectivity, Vagueness, Weakness, Implicity, Multiplicity, Underspecification), divided by the number of defective requirements found in the document. The same ratio is calculated with respect to requirements subsets belonging to metadata fields as well.
- **Error Defect Rate.** Since more than one defect can be found in a single requirement, this finer metric gives the percentage ratio between the number of defects of the chosen type and the total number of defects found.
Note that all the defect rates are calculated with respect to both the general analysis results and any single chosen kind of analysis. Moreover, QuARS Express separately produces metrics reports based on requirements subsets related to the metadata fields (e.g. Responsibility, Type, Source and Package).
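Restated as formulas, for a chosen analysis type $t$ the three rates are:

$$\mathrm{DefectRate} = 100 \cdot \frac{\#\{\text{requirements with at least one defect}\}}{\#\{\text{analyzed requirements}\}}$$

$$\mathrm{AnalysisDefectRate}_t = 100 \cdot \frac{\#\{\text{requirements with at least one defect of type } t\}}{\#\{\text{defective requirements}\}}$$

$$\mathrm{ErrorDefectRate}_t = 100 \cdot \frac{\#\{\text{defects of type } t\}}{\#\{\text{all defects found}\}}$$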
**False Defects masking/hiding** During the evaluation, false defects may be detected. QuARS Express provides a simple mechanism to hide false defective wording from the analysis engine. Thanks to the metadata handling included in QuARS Express, the management of false positive defects can be done with the granularity of the classification given by the metadata. For this reason, we have not maintained in QuARS Express the more refined false positive management implemented in QuARS.
### 4.1 QuARS Express Report Structure
QuARS Express produces an analysis report rich in categorized information. The information grows as a function of the number of metadata items available (e.g. as a function of the number of authors, the number of packages and so on), and the size of the report grows accordingly and can be made of several pages. As an evolution of the simple text-based report made by QuARS, the new report exploits HTML technology to produce structured hypertext pages, organized in a main directory and five subdirectories (Figure 4).
The name of the main directory is formed by the fixed string "QuARSxpsReport" followed by the "ReportID", a unique report identifier based on the time of generation, which allows users to store several reports without the risk of overwriting. The main directory contains general report files, the ReferenceFiles directory and four additional subdirectories. The general report files show the analysis performed on the whole document and give a general idea of the defect distribution, showing concise overview tables and global statistics. The ReferenceFiles directory contains explanations about how the tool works and about the meaning of the various analyses performed, the statistics calculated and the readability index formulas utilized.
The other four subdirectories, namely Responsibility, Type, Source and Package, are the metadata related ones, containing report files about the analysis filtered through the metadata field values. Each of them can contain several HTML files, depending on how many values the specific metadata field contains. Each of these files gives a projection of the performed analysis over the subset of the requirements catalogued by means of the metadata field, hence providing help for traceability with respect to authors, source document, requirement type or originating package.
All the HTML pages are dynamically produced following a common structure. The header, the Table of Contents and the analysis results are organized in tables providing hypertext links to allow for easily jumping from a detailed point of view to a more general one and vice versa.
The HTML pages share a number of common items:
- the heading of the file, which specifies the path of the analyzed requirements file, the metadata item the page belongs to and its name, the session identified by the unique Report ID, and the date of the performed analysis;
- a table of the page contents: a list of links pointing to the related sections of the page;
- an index of defective sentences by means of their IDs, where every ID is a link pointing to the complete defect description;
- a synoptic view of all defective sentences, shown as a table with the associated defective wording and the kind of analysis performed, together with a link pointing to the complete description (full view) of the requirement analysis;
- some statistics (e.g., Analysis Defect Rate, Error Defect Rate) related to the whole document. Figure 5 shows an example of this kind of output.
**Analysis Statistics**

| Analyzed Requirements | Defective Requirements | Error | Defect Rate* |
|---|---|---|---|
| 560 | 340 | 237 | 77% |

\* The number of sentences found in the document with at least an error (defective sentences) divided by the number of the analyzed sentences (e.g. all the requirements found in the document)

| Defect Rates | Analyzed Defect Rates** | Error Defect Rates*** |
|---|---|---|
|  | Analysis Statistic | Error Statistic |
|  | 5% (3/70) | 10% (1/10) |

\*\* The number of sentences with at least an error of the kind of the analysis related item (Optionality, Subjectivity...) divided by the number of defective sentences found in the document

\*\*\* The number of related analysis item errors divided by the total number of errors found in the document

**Figure 5.** Analysis Statistics
#### 5 Quality Analysis Process
The overall Quality Analysis Process adopted in the project is depicted in Figure 6 and is summarized in the following: (a) The partners of the project create a new project in RequisitePro [22] and insert the requirements with all the required attributes (Name, Text, Responsibility, Package, etc.).
(b) The different requirements are stored in a Requirements File, one for each requirement class.
(c) At this point, in an automatic way, the tool SoDA [23] generates a text document containing the requirements and the relevant attributes, and saves it in **txt** format (alternative formats are **DOC**, **HTML** and **XML**). A specific template has been defined for SoDA in order to allow QUARS XPRESS to properly interpret the information contained in the generated document.
(d) The obtained **txt** file is input to QUARS XPRESS, which analyzes the sentences (requirements) and gives as output the Defects Requirement Reports (DRR), for both the FREQ and SREQ documents, together with the calculation of the relevant metrics.
(e) In case QUARS XPRESS points out some defects, a refinement activity is needed, possibly followed by another quality analysis step. (f) The DRR should first be filtered by experts in a "false defect survey", in order to establish whether a refinement is really necessary or not.
(g) Otherwise, the approved requirements document is released.
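Purely as an illustration of the iterative nature of steps (c)–(g), a driver loop might look as follows; all callables are placeholders standing in for the external tools (SoDA, QUARS XPRESS) and for the manual expert review, not real APIs.

```python
# Hypothetical sketch of the iterative quality analysis loop (steps (c)-(g)).
def quality_analysis_loop(requirements_db, export, analyze, survey, refine, release):
    while True:
        report = analyze(export(requirements_db))   # (c)+(d): export to txt, run the analyzer
        real_defects = survey(report)               # (f): experts filter out false defects
        if not real_defects:
            return release(requirements_db)         # (g): approved document is released
        refine(requirements_db, real_defects)       # (e): refinement, then another analysis step
```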
#### 6 The Results of the Analysis of MODCONTROL Requirements
In the MODCONTROL project, we have analyzed by means of QUARS XPRESS the whole set of produced requirements, that is the SREQ and FREQ documents. The results of the analysis have shown that the underlying process not only is able to point out linguistic defects, but can also provide some indications on the writing style of the different requirements authors (from different partners), giving them the opportunity to become aware of defects and of potential improvements. In particular, it has been noted that a requirement author is inclined to repeat the same type of mistakes, unless becoming aware of them. In Figure 7 we can see the number of requirements (SREQ or FREQ) written by the partners (A, B, C, and Others for the requirements that have been recorded without the author indication). Project partner B has apparently had more responsibility for system requirements, while C has had more responsibility for functional requirements.
In Tables 1 and 2 we can see the number of defective requirements and the "Defect Rate" associated with each partner of the project after the application of QUARS XPRESS to the SREQ and FREQ documents. These numbers, once false defects have been filtered out, can give an indication of which partner can be considered less accurate in the process of writing requirements. Another important piece of information is which type of defect is more often introduced in the writing.
<table>
<thead>
<tr>
<th>Partners</th>
<th>Analyzed</th>
<th>Defective</th>
<th>Errors</th>
<th>Defect Rate(%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>388</td>
<td>238</td>
<td>585</td>
<td>61</td>
</tr>
<tr>
<td>B</td>
<td>596</td>
<td>296</td>
<td>558</td>
<td>50</td>
</tr>
<tr>
<td>C</td>
<td>1803</td>
<td>1046</td>
<td>2516</td>
<td>58</td>
</tr>
<tr>
<td>Others</td>
<td>422</td>
<td>67</td>
<td>136</td>
<td>15</td>
</tr>
<tr>
<td>Total</td>
<td>3209</td>
<td>1647</td>
<td>3795</td>
<td>51</td>
</tr>
</tbody>
</table>
### Table 1. FREQ: Defect Rate and Errors
<table>
<thead>
<tr>
<th>Partners</th>
<th>Analyzed</th>
<th>Defective</th>
<th>Errors</th>
<th>Defect Rate(%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>710</td>
<td>353</td>
<td>900</td>
<td>61</td>
</tr>
<tr>
<td>B</td>
<td>1153</td>
<td>524</td>
<td>998</td>
<td>45</td>
</tr>
<tr>
<td>C</td>
<td>208</td>
<td>46</td>
<td>88</td>
<td>22</td>
</tr>
<tr>
<td>Others</td>
<td>497</td>
<td>356</td>
<td>836</td>
<td>72</td>
</tr>
<tr>
<td>Total</td>
<td>2568</td>
<td>1282</td>
<td>2822</td>
<td>50</td>
</tr>
</tbody>
</table>
### Table 2. SREQ: Defect Rate and Errors
In Tables 3 and 4 we can notice that multiplicity and vagueness are the most frequent defects. Table 5 gives some results about the execution time needed to perform the described analysis over such large documents. The difference in execution speed between FREQ and SREQ depends on the text length of each requirement. SREQ requirements tend to be more concise than FREQ ones: apparently, describing functions requires more verbosity.

The last analysis performed is the Readability Analysis. Table 6 shows the average readability scores of the two documents, FREQ and SREQ. Note that the SREQ document turns out to be more readable than the FREQ one. In fact, the index values of the SREQ document stand in reasonable ranges according to their technical nature, whereas the scores of the FREQ document are higher than we expected.

Indeed, values of the Kincaid, ARI, Coleman-Liau, FOG and SMOG indexes higher than 15, of the LIX index higher than 58, and of the Flesch index lower than 60 indicate a hardly readable document. In our case FREQ exceeds most of such thresholds, and it is close to the limits for the other ones: though this is not a dramatic defect, it is advisable to improve the readability of the functional requirements, for example by shortening phrases and splitting paragraphs.
### Table 3. FREQ: Defects per Type
<table>
<thead>
<tr>
<th>Analysis</th>
<th>Defects</th>
<th>%</th>
<th>Errors</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>Optionality</td>
<td>35</td>
<td>2</td>
<td>47</td>
<td>1</td>
</tr>
<tr>
<td>Subjectivity</td>
<td>39</td>
<td>2</td>
<td>54</td>
<td>1</td>
</tr>
<tr>
<td>Vagueness</td>
<td>353</td>
<td>22</td>
<td>652</td>
<td>18</td>
</tr>
<tr>
<td>Weakness</td>
<td>128</td>
<td>8</td>
<td>164</td>
<td>4</td>
</tr>
<tr>
<td>Implicity</td>
<td>116</td>
<td>7</td>
<td>251</td>
<td>7</td>
</tr>
<tr>
<td>Multiplicity</td>
<td>847</td>
<td>51</td>
<td>2437</td>
<td>64</td>
</tr>
<tr>
<td>Underspecification</td>
<td>129</td>
<td>8</td>
<td>190</td>
<td>5</td>
</tr>
</tbody>
</table>
### Table 4. SREQ: Defects per Type
<table>
<thead>
<tr>
<th>Analysis</th>
<th>Defects</th>
<th>%</th>
<th>Errors</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>Optionality</td>
<td>23</td>
<td>2</td>
<td>29</td>
<td>1</td>
</tr>
<tr>
<td>Subjectivity</td>
<td>39</td>
<td>3</td>
<td>61</td>
<td>2</td>
</tr>
<tr>
<td>Vagueness</td>
<td>396</td>
<td>31</td>
<td>613</td>
<td>22</td>
</tr>
<tr>
<td>Weakness</td>
<td>54</td>
<td>4</td>
<td>61</td>
<td>2</td>
</tr>
<tr>
<td>Implicity</td>
<td>66</td>
<td>5</td>
<td>129</td>
<td>5</td>
</tr>
<tr>
<td>Multiplicity</td>
<td>633</td>
<td>49</td>
<td>1809</td>
<td>64</td>
</tr>
<tr>
<td>Underspecification</td>
<td>68</td>
<td>6</td>
<td>120</td>
<td>4</td>
</tr>
</tbody>
</table>
### Table 5. Execution Time of Analysis
<table>
<thead>
<tr>
<th>Doc.</th>
<th>N. Req.</th>
<th>Time(min)</th>
<th>Speed(Req/min)</th>
</tr>
</thead>
<tbody>
<tr>
<td>FREQ</td>
<td>3209</td>
<td>210</td>
<td>15.28</td>
</tr>
<tr>
<td>SREQ</td>
<td>2568</td>
<td>52.8</td>
<td>48.63</td>
</tr>
</tbody>
</table>
### Table 6. Readability Analysis Results
<table>
<thead>
<tr>
<th>Readability Index</th>
<th>FREQ Scores</th>
<th>SREQ Scores</th>
</tr>
</thead>
<tbody>
<tr>
<td>Kincaid</td>
<td>13.5</td>
<td>7.4</td>
</tr>
<tr>
<td>ARI</td>
<td>15.6</td>
<td>7.6</td>
</tr>
<tr>
<td>Coleman Liau</td>
<td>14.2</td>
<td>13</td>
</tr>
<tr>
<td>Flesch Index</td>
<td>44.8/100</td>
<td>63.4/100</td>
</tr>
<tr>
<td>Fog Index</td>
<td>16.8</td>
<td>10.4</td>
</tr>
<tr>
<td>LIX</td>
<td>56.5</td>
<td>40.7</td>
</tr>
<tr>
<td>SMOG-Grading</td>
<td>14.2</td>
<td>10.1</td>
</tr>
</tbody>
</table>
### 6.1 Review Process
In MODCONTROL, after the first execution of the evaluation process, the partners have been invited not only to correct defects, but also to return knowledge about false defects. We have thus identified some typical sources of false defects, such as:
- Words usually indicating vagueness are used to allow implementation freedom to the manufacturers, that is, not to impose implementation choices.
- Sometimes the use of the passive voice can be deliberately chosen by authors in order not to address a specific subject for a specific requirement. In such cases, however, a discussion of that requirement among experts is useful to clarify its intended meaning.
- Some defects originate from previous guidelines or norms, which are taken as they are.
Consider these examples of false defects, taken from the requirements related to the lighting systems:
- **FREQ2349:**... lighting shall provide a comfortable and pleasing visual environment.
In this case the judgment about a "comfortable" and "pleasing" (two vague words) lighting level for passengers is left to the manufacturers, which will also follow marketing criteria. In any case, this requirement is derived from European guidelines, and hence it has been imported as it was.
- **FREQ2351:** The emergency lighting shall be sufficient to enable continued occupation or safe egress from the vehicle.
In this case the vague word "sufficient" is indeed weak, since a standard is expected to be more precise about emergency issues. However, this text is taken as it is from the same European guidelines.
- **FREQ1760:** The emergency lighting system shall provide a suitable lighting level in the passenger and in the service areas of at least 5 lux at floor level.
In this case, the vague word "suitable" is indeed a vagueness defect, but we can note that the lighting level is specified in the next line: this is actually a redundant requirement, which would be better written as:
The emergency lighting system shall provide, in the passenger and in the service areas, a lighting level of at least 5 lux at floor level.
These examples show that the detection of most false defects requires the domain knowledge of the experts who have written the requirements. By collecting the feedback from experts on false defects, we will be able to tune the tools in order to reduce the percentage of false defects. Actually, in MODCONTROL this collection has been performed point-wise, and no systematic means to collect feedback, and hence to measure the false defect rate, was established. This is a point to improve in future applications of the approach. In any case, the application of QUARS XPRESS to check the quality of the requirements has been appreciated at the project level, as an added means to consolidate the results of the project.
#### 7 Conclusions and Future Work
In this paper we have presented the tool QUARS XPRESS, aimed at the analysis of natural language requirements, and we have reported on its application to the analysis of a large set of requirements coming from the MODCONTROL project. We discuss in the following the two key points that have emerged from this experience and the main issue that we would like to consider in the future.
- **Process automation and learning phase**: the evaluation process introduced in Figure 6 is very simple to use and has a high degree of automation. The user is only required to learn the use of the tools, to insert the requirements in the database and to define the SoDA template in order to generate, in an automatic way, the requirements document that is going to be analyzed by QUARS XPRESS.
- **Scalability**: QUARS XPRESS has been shown to easily scale up by an order of magnitude. Moreover, the documents analyzed for MODCONTROL were not only text lines, but included metadata which has been used for a better and more accurate presentation of the results. The flexibility of QUARS XPRESS is further witnessed by the connection to RequisitePro by means of the SoDA documentation tool, or to any other tool able to export in the customized txt format accepted by QUARS XPRESS (e.g. DOORS [25]).
Another important issue in requirements management is semantic consistency among requirements coming from different sources. This issue has been discussed, for example, in [8], but is currently not addressed by QUARS XPRESS. On the other hand, in the context of the MODCONTROL project, semantic consistency has been addressed by separation of concerns, giving the responsibility for each different function or subsystem to one partner.
#### 8 Acknowledgments
This work has been partially supported by the European project MODTRAIN (FP6-PLT-506652/TIP3-CT-2003-506652), subproject MODCONTROL. Moreover, the authors thank Andreas Winzen of Siemens AG, Erlangen, Germany, for his valuable insights about this work.
#### References
Extensible Multi-Domain Generation of Virtual Worlds using Blackboards
Gaetan Deglorie\textsuperscript{1}, Rian Goossens\textsuperscript{2}, Sofie Van Hoecke\textsuperscript{1} and Peter Lambert\textsuperscript{1}
\textsuperscript{1}ELIS Department, IDLab, Ghent University-iMinds, Sint-Pietersnieuwstraat 41, B-9000, Ghent, Belgium
\textsuperscript{2}ELIS Department, Ghent University, Sint-Pietersnieuwstraat 41, B-9000, Ghent, Belgium
\{gaetan.deglorie, rian.goossens, sofie.vanhoecke, peter.lambert\}@ugent.be
Keywords: Procedural Modeling, Blackboard Architecture, Heterogeneous Modeling
Abstract: Procedural generation of large virtual worlds remains a challenge, because current procedural methods mainly focus on generating assets for a single content domain, such as height maps, trees or buildings. Furthermore, current approaches for multi-domain content generation, i.e. generating complete virtual environments, are often too ad-hoc to allow for varying design constraints from creative industries such as the development of video games. In this paper, we propose a multi-domain procedural generation method that uses modularized, single-domain generation methods that interact on the data level while operating independently. Our method uses a blackboard architecture specialized to fit the needs of procedural content generation. We show that our approach is extensible to a wide range of use cases of virtual world generation and that manual or procedural editing of the generated content of one generator is automatically communicated to the other generators, which ensures a consistent and coherent virtual world. Furthermore, the blackboard approach automatically reasons about the generation process, which allows 52% to 98% of the activations, i.e. executions of the single-domain content generators, to be discarded without compromising the generated content, resulting in better performing large world generation.
1 INTRODUCTION
As consumer expectations put increasing pressure on the video game industry to improve the visual quality of games, game content production today focuses on increasing the visual detail and quantity of game assets used in virtual worlds. The creation of game content, including but not limited to 2D art, 3D geometry (trees, terrain, etc.) and sounds, largely remains a manual process by human artists. Using more artists to produce a larger but still coherent virtual world becomes increasingly infeasible, not to mention the associated increase in production costs. Procedural content generation (PCG) is the generation of (game) assets through the use of algorithms (Togelius et al., 2013a). This allows for a large increase in the size and diversity of produced content without an associated increase in cost.
However, previous work on PCG focuses primarily on single-domain content generation, i.e. algorithms that only generate one specific type of asset. Multi-domain content generation integrates these methods, but producing coherent content at a larger scale remains challenging, and existing work focuses mostly on ad-hoc solutions for specific use cases (Smelik et al., 2011; Dormans, 2010; Kelly and McCabe, 2007). Although these methods generate varying types of content in an integrated manner, the generation process targets a specific mixture of content and generates it in a specific sequence and manner. This makes these solutions less suitable for creative industries, such as game development, because each project comes with highly varying design constraints and targeted content domains. Although the motivation for this work stems from the game development domain, our work is more broadly applicable to other domains such as animation, movies, simulation, etc.
In this paper, we introduce a blackboard architecture as a solution to this problem that allows PCG methods to cooperate while manipulating highly heterogeneous data to create a coherent virtual world. Our approach is extensible with any PCG method and allows edits of the generated content to be automatically communicated to other content and procedural generators. Furthermore, our approach supports artists and designers by allowing the reuse of integrated PCG methods across different use cases.
The remainder of this paper is as follows. In Section 2 we give an overview of previous approaches for multi-domain content generation and the concepts and extensions for blackboard systems. In Section 3 we elaborate upon our architecture. Then we show several design considerations when using our approach to generate virtual worlds in Section 4. In Section 5 we evaluate the extensibility of our approach, as well as the editability of the generated content and the generation performance. Finally, we present the conclusions and future work in Section 6.
2 RELATED WORK
We introduce a multi-domain content generation approach based on blackboard systems. The discussion of single-domain procedural methods lies beyond the scope of this work, for an overview of procedural methods we refer to a recent survey book (Shaker et al., 2016).
2.1 Multi-Domain Content Generation
A prominent approach to multi-domain content generation is the waterfall model used in urban contexts (Parish and Müller, 2001; Kelly and McCabe, 2007) and game level generation (Dormans, 2010; Hartsook et al., 2011). The works by Parish and Kelly both integrate several procedural methods (road network generation, plot subdivision and building generation) sequentially to create a complete and coherent virtual city scene. Dormans combines generative grammars to generate both the mission and the space of game levels, where the mission is generated first and the space is mapped onto it. However, the rules used in the system are game-specific and as such cannot be easily reused. In a similar approach, Hartsook uses procedurally generated or manually authored stories or narratives to generate the spatial layout of a game level. Waterfall model approaches, however, assume a predefined order between generators. This imposes constraints from each layer to the next, which in turn reduces the re-usability of the system. The sequential and tightly coupled nature of such systems makes them hard to extend to new use cases with new generators. Furthermore, editing content at a certain level requires content at lower levels to be entirely regenerated, although creating (non-extensible) ad-hoc editing operations is still possible (e.g. the work by Kelly).
Alternative approaches to multi-domain content generation include data flow systems (Silva et al., 2015; Ganster and Klein, 2007), combining procedural models into a meta-procedural model (Gurin et al., 2016a; Gurin et al., 2016b; Grosbellet et al., 2015; Genevaux et al., 2015), declarative modelling (Smelik et al., 2010; Smelik et al., 2011), answer set programming (Smith and Mateas, 2011) and evolutionary algorithms (Togelius et al., 2013b). Silva augments generative grammars by representing them as a data flow graph. This allows them to add additional content domains such as lights and textures as well as new filtering, grouping and aggregation features. Although the data flow approach splits the procedural generation process into separate nodes, an additional requirement for data flow graphs is that every node needs to know in advance how it will interact with other nodes. This adds overhead to creating nodes as the designer potentially needs to revisit previous nodes. Furthermore, all editing operations need to be translated from the content domain into the procedural graph domain, which requires additional expertise from the user. The work by Guerin, Grosbellet and Genevaux on meta-procedural models focuses on two separate directions. Firstly, meta-procedural modeling of geometric decoration details involves generating details such as leaves, pebbles and grass tufts for pre-authored environments. Secondly, meta-procedural modeling of terrains involves generating terrain height maps, waterways, lakes and roads in an integrated manner. This is achieved by using a common geometric representation set, i.e. elevation functions, which allows these different terrain features to be integrated. Both methods impose a specific ordering or structure on the generated content, which means that they are incapable of handling different domains. Smelik combines the generation of different aspects of a virtual world, including road networks, vegetation and terrain. Instead of a waterfall model, all generation occurs independently and is subsequently combined using a conflict resolution system. Their editor provides an intuitive way of editing for their specific content. However, adding new generators to their system is difficult. Smith defines the design space of the procedural generation problem as an Answer Set Program (ASP). By formally declaring the design space, they can define new procedural methods for a variety of content domains. Togelius formalizes the entire game level into a single data model and uses multi-objective evolution to generate balanced strategy game levels. Both the evolutionary and the ASP approaches additionally require a formalization of the underlying data model. Changing the underlying constraints or adding new procedural methods also means updating the data model manually. Additionally, edits to the generated content cannot be communicated back to such systems.
2.2 Blackboard Systems
The blackboard system is a technique from the domain of Artificial Intelligence (AI) used to model and solve ill-defined and complex problems (Corkill, 1991). Analogous to a real blackboard on which several researchers work on a problem, the system uses knowledge sources specialized in different tasks. A control system ensures that the knowledge sources are triggered at the right time to avoid conflicts.
Blackboard systems are a good solution for design problems as they can handle problems that involve highly heterogeneous data. Blackboards are therefore used in semantic problem solving (Verborgh et al., 2012), game design (Treanor et al., 2012; Mateas and Stern, 2002) and poetry generation (Misztal-Radecka and Indurkhya, 2016). Previous work does not include the use of blackboards for procedural generation of virtual worlds.
3 BLACKBOARD SYSTEM FOR PROCEDURAL GENERATION
Current multi-domain content generation approaches are hard to extend, impose a predefined order of generation procedures (implicitly constraining the whole generation process), or regenerate large parts when editing content at a certain level. To solve these problems, we introduce a blackboard architecture that allows PCG methods to cooperate while manipulating highly heterogeneous data to create coherent virtual worlds.
3.1 Blackboard and Data Model
The blackboard stores all the data, i.e. all generated content instances and additional input information. In contrast to previous work using data models where the designer needs to manually create and manage them, we instead opt to automatically generate our model based on the input and output data types of the knowledge sources. The model is automatically created, before the content generation process is started, using the selected knowledge sources that were chosen to generate the virtual world. This model is used to structure the generated data and to help inform the scheduler (which will be discussed in Section 3.4). Figure 2 provides an example of our data model as it was built for the evaluation.
As each knowledge source will generate new content based on previous content from the blackboard, this implies a dependency from one piece of content to another. These dependencies are encoded as the edges of a directed graph used to store the generated data. For example, the placement of trees might depend on the placement of the road network, to avoid placing a tree on a road instead of next to it.
Our data model allows cyclical dependencies in the resulting content. This increases the expressiveness of the overall system as content generated further in the generation process can affect and improve earlier generated content (Togelius et al., 2013a). Although we support cyclical dependencies at all levels of our system, our use cases do not extensively test the robustness of including cycles, as cycles can introduce potential deadlocks or infinite knowledge source activations that create an unstable virtual world. We suggest careful design of the virtual world generator, taking care to avoid these unstable situations. A detailed exploration of cyclic dependencies will be addressed in future work.
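As a minimal sketch of this idea (illustrative names and API, not the paper's actual implementation), the data model can be derived directly from the declared input and output types of the selected knowledge sources:

```python
# Minimal sketch: derive the blackboard data model (a directed dependency graph
# over data types) from the input/output types declared by the knowledge sources.
class KnowledgeSourceSpec:
    def __init__(self, name, input_types, output_types):
        self.name = name
        self.input_types = set(input_types)
        self.output_types = set(output_types)

def build_data_model(specs):
    """Return edges (A, B), read as: content of type B depends on content of type A."""
    return {(src, dst)
            for ks in specs
            for src in ks.input_types
            for dst in ks.output_types}

# Example: tree placement depends on the road network, which depends on the height map.
specs = [
    KnowledgeSourceSpec("RoadGenerator", {"HeightMap"}, {"RoadNetwork"}),
    KnowledgeSourceSpec("TreePlacer", {"HeightMap", "RoadNetwork"}, {"Tree"}),
]
print(build_data_model(specs))
```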
3.2 Query System
The query system is used to access and retrieve information from the blackboard. It supports direct queries (e.g. retrieve the terrain height map) or contextual queries (e.g. retrieve terrain only from Arctic biomes). The query system is currently implemented as a basic database with basic select and where clauses. Queries are executed based on data type; this means that a formal model is needed for all PCG knowledge sources to interact in a coherent manner.
The contextual queries are performed by searching the directed graph for ancestors and children based on the required information. Additionally, a contextual query might encode a requirement for the parameters of a piece of data instead of its relation to other nodes inside the graph. For example, a terrain data request could contain the added context of only requiring data of a minimal elevation.
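The following sketch shows what a direct and a contextual query could look like; the `Blackboard` class and its `query` method are assumptions made for illustration, not the actual interface of the system.

```python
# Illustrative blackboard with direct (type-only) and contextual (filtered) queries.
class Blackboard:
    def __init__(self):
        self.items = []                      # list of (data_type, payload) pairs

    def add(self, data_type, payload):
        self.items.append((data_type, payload))

    def query(self, data_type, where=None):
        """Select by data type; 'where' is an optional predicate over the payload."""
        return [p for t, p in self.items
                if t == data_type and (where is None or where(p))]

bb = Blackboard()
bb.add("Terrain", {"biome": "Arctic", "elevation": 1200})
bb.add("Terrain", {"biome": "Desert", "elevation": 300})

all_terrain = bb.query("Terrain")                                          # direct query
arctic_only = bb.query("Terrain", where=lambda t: t["biome"] == "Arctic")  # contextual query
high_ground = bb.query("Terrain", where=lambda t: t["elevation"] > 1000)   # parameter constraint
```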
3.3 PCG Knowledge Sources
PCG knowledge sources encapsulate procedural methods, preferably single-domain algorithms such as L-systems or coherent noise generators as these improve re-usability. Additionally, we distinguish three types of PCG behaviour: (1) addition of new content instances, (2) modification of existing content instances and (3) deletion of existing content instances. A PCG knowledge source is required to implement one or more of these behaviours to be supported by the blackboard system. As stated earlier, a knowledge source does not immediately execute when it retrieves data from the scheduler; instead, it produces a knowledge source activation. Knowledge source activations store their operation (i.e. addition, modification or deletion) in an event and send it to the scheduler for execution. The separation into three types allows us to reason about what the generators plan to do and optimize the order or even remove some unnecessary activations (see next section). However, this requires the PCG knowledge sources to be stateless, i.e. they should not remember what content instances were previously generated by them, as manipulating the order or occurrence of activations would create a mismatch between the internal state of each knowledge source and the state of the blackboard.
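A stateless knowledge source emitting activations of the three kinds could be sketched as follows; the `Event` record and the `activate` signature are illustrative assumptions, reusing the hypothetical blackboard interface shown above.

```python
# Sketch of a stateless PCG knowledge source producing activation events
# instead of writing to the blackboard directly.
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                         # "add", "modify" or "delete"
    data_type: str
    payload: dict = field(default_factory=dict)

class TreePlacer:
    """Reads placement points from the blackboard, emits tree additions."""
    input_types = {"PlacementPoint"}
    output_types = {"Tree"}

    def activate(self, blackboard):
        points = blackboard.query("PlacementPoint")
        # Stateless: no memory of previously generated trees is kept here.
        return [Event("add", "Tree", {"id": f"tree-{i}", "position": p["pos"]})
                for i, p in enumerate(points)]
```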
3.4 Scheduler
The scheduler controls the execution order of PCG knowledge sources and handles the execution of knowledge source activations. The execution order is determined by automatically calculating a priority for each knowledge source in the system. This priority is derived from the blackboard data model. The priority order can be determined by performing a topological sort of the directed graph. However, topological sorts only work on directed acyclic graphs (DAGs). By condensing cycles in the graph into single vertices, we can make any directed graph into a DAG.
Based on the determined priorities, the knowledge source activations are collected in a priority queue. This ensures that when a knowledge source activates, all work resulting in its input type will be finished. The next step is merging and removing unnecessary knowledge source activations, done in accordance with two rules: (1) deletion cancels additions and modifications and (2) modification can be merged with an addition event to form a new altered addition. Merging activations reduces the amount of activations to be executed, without changing the results, thus increasing the generation performance of the system. This will be evaluated in Section 5.4.
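The priority computation and the two reduction rules can be sketched as follows; this is a simplification that assumes each activation carries a stable target identifier in its payload, and it reuses the illustrative `Event` and specification objects from above.

```python
# Sketch: knowledge source priorities via topological sorting of the data model,
# plus event reduction (deletion cancels, modification merges into addition).
from graphlib import TopologicalSorter

def knowledge_source_priorities(edges, specs):
    """Priority of a knowledge source = earliest topological position of its outputs.
    (Cycles in the data model would first be condensed into single vertices.)"""
    ts = TopologicalSorter()
    for src, dst in edges:                       # (src, dst): dst depends on src
        ts.add(dst, src)
    position = {t: i for i, t in enumerate(ts.static_order())}
    return {ks.name: min(position.get(t, 0) for t in ks.output_types) for ks in specs}

def reduce_events(events):
    pending = {}                                 # target -> pending event
    for ev in events:
        target = (ev.data_type, ev.payload.get("id"))
        previous = pending.get(target)
        if ev.kind == "delete":
            pending.pop(target, None)            # rule 1: cancel pending add/modify ...
            if previous is None or previous.kind != "add":
                pending[target] = ev             # ... but keep deletions of existing content
        elif ev.kind == "modify" and previous is not None and previous.kind == "add":
            previous.payload.update(ev.payload)  # rule 2: merge modification into the addition
        else:
            pending[target] = ev
    return list(pending.values())
```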
4 DESIGN CONSIDERATIONS
In this section, we will discuss the insights obtained from the implementation of our use cases. As each knowledge source encapsulates a procedural method and only communicates through the data on the blackboard, this provides ample opportunity to create reusable modularized behaviours. The scope of our work is virtual world generation. For this, we propose a set of reusable abstractions, which we dubbed knowledge source design patterns. First we will discuss what considerations can be made in terms of compartmentalizing generation processing to improve modularity when designing knowledge sources. Next, we will provide an overview of the knowledge source design patterns that were useful when designing use cases for virtual world generation.
4.1 Modularity of PCG Knowledge Sources
Each knowledge source encapsulates a procedural method capable of generating a specific piece of content. One could encapsulate any stateless state-of-the-art procedural method into one of these modules, e.g. putting an entire L-system and turtle interpreter into a single module. However, naively encapsulating algorithms will negatively impact performance. The PCG knowledge sources are only dependent on their input and output data types. For example, a knowledge source might produce one or more elements of type C from an input of type A. This knowledge source can be replaced by two (or more) knowledge sources, e.g. one knowledge source that produces B from A and another that produces C from B. As long as the intermediate data type differs from the starting input and output types no conflicts will arise within the generation process.
From a design perspective, we can modularize knowledge sources in three different ways: (1) by output type, (2) by input type and (3) by generation process.
1. The output type can be made more generic. By splitting a knowledge source in two, an intermediate data type can be created which contains more generic information which can be reused for subsequent processes. Figure 3 shows an example of splitting a forest generator into an object distribution generator and a tree generator. We can then reuse the locations produced by the object distribution generator to place other objects.
2. The input type can also be made more generic. Instead of making specialized processes for each input type, it is more beneficial to split these into a specialized converter to an intermediary type and create a more generic process for this type. Figure 4 shows an example of generating bird nests in trees for both fir and jungle type forests. We can make a specialized parser that converts this tree information into a more general tree model, such as a branch list. From this list a generic bird nest placer can be used.
3. Given the same input and output data types, multiple knowledge sources can be created that contain different algorithms. For example, a knowledge source calculating the minimal spanning tree using Kruskal's algorithm, which is faster for sparse graphs, could be switched out with another one using Prim's algorithm, which is faster for denser and larger graphs.
These three types of modularity should ideally be used together, maximizing the reuse of modules across different domains. As will be discussed in Section 5.4, this will impact the performance of the entire system.
4.2 Knowledge Source Design Patterns
The different design patterns obtained from the implementation of our example scenes for evaluation (see Section 5.2) are:
- The **converter** simply converts data types from one type to another. These are typically used to convert specific types into generic types or vice-versa (e.g. generic bird nest placer).
- The **distributor** generates a specific amount of instances from an input type, according to a distribution function. These are typically used for object placement in a certain search space.
- The **modifier** simply modifies data of a certain type, i.e. manipulating its parameters. For example, modifying the height of a mountain at a certain location.
- The **constraint** ensures that data has been produced in the right environment by allowing it to modify or delete parent data if it does not adhere to this constraint.
- The **collector** collects instances of a specified data type and puts them into a collection. This is useful when you want to perform an operation on several instances of the same data type.
Further identification of these design patterns will support the applicability of our approach in future work.
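Two of these patterns, sketched with the same hypothetical event/knowledge-source interface used earlier (names and fields are illustrative, not the paper's implementation):

```python
# Illustrative sketches of the distributor and converter patterns.
import random

class PointDistributor:
    """Distributor: generates a number of placement points inside a region."""
    input_types = {"Region"}
    output_types = {"PlacementPoint"}

    def __init__(self, count=50, seed=0):
        self.count, self.seed = count, seed      # parameters only, no generated-content state

    def activate(self, blackboard):
        rng = random.Random(self.seed)
        events = []
        for region in blackboard.query("Region"):
            for i in range(self.count):
                pos = (rng.uniform(region["x_min"], region["x_max"]),
                       rng.uniform(region["y_min"], region["y_max"]))
                events.append(Event("add", "PlacementPoint", {"id": f"pt-{i}", "pos": pos}))
        return events

class FirTreeToBranchList:
    """Converter: turns a specific tree representation into a generic branch list."""
    input_types = {"FirTree"}
    output_types = {"BranchList"}

    def activate(self, blackboard):
        return [Event("add", "BranchList", {"id": t["id"], "branches": t["branches"]})
                for t in blackboard.query("FirTree")]
```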
5 EVALUATION
The evaluation of procedural generation methods remains a challenge. Content representation has not been standardized, i.e. a geometric object can be represented in a variety of ways (e.g. voxel representation, triangle meshes or billboard images). This means that comparing the resulting content is often approximate at best. Furthermore, different procedural methods can focus on specific features, e.g. generation performance, content quality, extensibility, etc.
We chose to compare our approach to Esri’s CityEngine, a state-of-the-art urban city generator, to give the reader a general idea of what is currently used in industry. CityEngine is also a good example of a waterfall approach to multi-domain content generation; it is a well established commercial tool and has an advantage over our approach in terms of design features, interface accessibility and performance. However, comparing our proof-of-concept in terms of design features or user interface is out of the scope of our research. Instead, we aim to show that the blackboard-based approach is highly extensible and is more robust when editing generated content.
5.1 Test Cases
We created a number of test cases to evaluate the extensibility and the generation performance of our system. These were created using custom modules for procedural content generation of virtual worlds: (1) a generic object placement module allows for objects to be distributed randomly and allows meshes to be placed in the scene; (2) a terrain system allows for complex terrains with customizable and fully independent biomes; (3) a vegetation module leverages L-systems to produce vegetation meshes; and (4) a road module allows for road networks to be built by various sources and supports multiple road strategies, similar to the technique used by Kelly et al. (Kelly and McCabe, 2007). Covering all possible content generation algorithms to create virtual worlds would be out of scope for this paper, however these examples form a representative sample of virtual world generation in general.
Example 1: Forest Scene
The forest scene features a generated height map with variable amount of vegetation objects. By default, the forest scene contains an equal amount of both trees and bushes. We can however remove the bushes or add a third object, e.g. plants. Varying the amount of object types will be discussed further in Section 5.4.
Figure 5: Forest scene using object placement (left) and L-systems (right).
Figure 5 shows an example forest scene featuring 50 trees, 100 bushes and 100 plants. By swapping out knowledge sources the vegetation can be either created by placing predefined objects, or by creating meshes at runtime using an L-system.
Example 2: Road Scene
The road scene generates a height map for a desert environment, with a variable amount of interest point objects. The interest points are combined into a road graph according to two strategies based on previous work (Kelly and McCabe, 2007): (1) straight road, connecting two points with the shortest possible road, and (2) minimum elevation difference, roads with lowest steepness within a maximum allowed extension versus straight roads.
Figure 6: Road scenes using minimum elevation difference (left) and straight strategies (right).
Figure 6 shows an example road scene with the two different strategies. The different road strategies can be utilized by swapping out knowledge sources.
**Example 3: Combined Scene**
The combined scene is an example of combining all of the aforementioned techniques. It features a generated height map, different biomes (e.g. arctic, desert and forest), vegetation generation (e.g. using L-systems) and placement and road network generation. This example serves as a “complete” virtual world and a point of comparison with the content produced by CityEngine.

Figure 7 shows an example of the combined scene with the placement of vegetation and the choice of road strategy automatically adapting to the biome. It can be noted that our prototype does not feature building generation as is the case with CityEngine. However, CityEngine’s tree representation uses pre-authored billboards while we use L-systems to generate full 3D trees with leaves. Thus we argue that these scenes have a similar scene complexity generally speaking.
**5.2 Extensibility**
Current approaches of multi-domain content generation have limited extensibility. They make choices on what content should be generated, how and in what order. This makes them less suitable for the creative industries, as artists need (nearly) complete control over their design tools. CityEngine, for example, generates the road networks first, followed by plot subdivision in the resulting street blocks and finally generating a building or open area on each plot. This hampers the usability of such a system for different domains or projects. For example, this implies that an artist cannot place one or more buildings first and add a street connecting them to the street network, or create a plot subdivision based on the placement of a set of buildings and roads.
Our blackboard approach however allows a more flexible way of editing the content generation process for any combination of content domains, e.g. extending the system with new generators or changing the content dependencies, because our blackboard data model is automatically generated based on the selected knowledge sources. Furthermore, the scene generation does not happen in concrete steps or in a set order. Instead the scene is built in small increments and previously generated content can still be affected by content that has yet to be generated. This allows for more innovation than would normally be possible in a system where rules cannot be dependent on aspects that appear later in the generation order (Togelius et al., 2013a).
Our architecture makes knowledge sources, and consequently the procedural methods, independent of each other where communication is handled implicitly through the blackboard, or more specifically through the data.
**5.3 Editing Generated Content**
Another key advantage of our approach over waterfall approaches is that it allows newly generated content to edit previously generated content without complete regeneration. In a waterfall-based approach, editing content in a certain layer causes all subsequent layers to regenerate as there is no way to communicate what specifically has changed in that layer and which parts of the depending content should change accordingly. This can be partly alleviated by implementing ad-hoc solutions where necessary. For example, in CityEngine, very small translational movements (less than 1 meter) of a street intersection mostly do not regenerate all surrounding building blocks but instead resize the building plots without having to regenerate the building. However, even slightly increasing this movement causes the subdivision to update, which in turn regenerates the buildings. This means an apartment building on the corner of the street might turn into a small park.
Obviously, when changing content in a scene, all dependent content should change; however, the underlying problem is that content dependencies in a waterfall model are over-generalized. Pieces of content naturally depend on each other, but we need to model these dependencies at a more granular level. In our approach, generation is segmented into several knowledge sources, which exposes these content dependencies. This way, changes in the scene do not cause complete regeneration, but instead allow a more localized effect where only the properties of the content that should change do change.
Our generation process triggers knowledge sources through changes in data on the blackboard, i.e. additions, modifications and deletions. Thus any modification of data on the blackboard, i.e. the generated virtual world, triggers knowledge sources that depend on that content type and subsequent knowledge sources, cf. the data model. Figure 8 shows an example of the height map being displaced and the nearby trees automatically updating their vertical position and orientation (to follow the surface normal). In this case, only the vertical position logically depended on the height map, thus the overall horizontal distribution is maintained and all trees are still placed correctly on the surface of the virtual world.

Figure 8: Displacing the height map locally updates the height and orientation of nearby trees.
Important to note here is that this behaviour is observed irrespective of whether the edit was procedural or manual. Because the generation process has been split into different knowledge sources, manual editing of the resulting content automatically causes dependent knowledge sources to update their content. The changes cannot be communicated to the knowledge source that created said piece of content however. Consequently, if another change happens earlier in the dependency graph, the first change will be overwritten.
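A minimal sketch of this propagation, under the same illustrative interfaces as above: any change event re-activates only the knowledge sources that declare the changed data type as an input.

```python
# Sketch: propagate a (manual or procedural) change on the blackboard to the
# knowledge sources that depend on the changed data type.
def dependents_of(changed_type, specs):
    return [ks for ks in specs if changed_type in ks.input_types]

def on_blackboard_change(event, specs, scheduler_queue):
    # E.g. a "modify HeightMap" event re-activates the tree placer (vertical
    # position, orientation), while content not depending on the height map
    # is left untouched.
    for ks in dependents_of(event.data_type, specs):
        scheduler_queue.append(ks)
```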
5.4 Generation Performance
To evaluate the generation performance, we will broadly compare our system with CityEngine. Example 3 from Section 5.2 will be used as a reference to CityEngine’s urban worlds. We generate a virtual world of similar size and scene complexity with both tools and compare the execution times. Our framework has been implemented on the Unity Engine (UnityTechnologies, 2016), and all tests were performed on an Intel Core i7-5960X 3.00GHz computer with 32GB of RAM.
Esri's CityEngine (Esri, 2016) creates cityscapes with an approximate virtual size of 12 km², containing about 7500 to 9500 geometric objects, in 15 to 30 seconds. Conversely, our approach creates a virtual world of approx. 12 km² with 8000 geometric objects in about 4 minutes. Although our system performs about 10 times slower than CityEngine, it should be noted that this is for full regeneration in both cases, and that we are comparing an optimized commercial solution with a research prototype. When editing the scene, the changes typically take a couple of milliseconds to at most a couple of seconds for our system, similar to CityEngine.
It should be noted that the resulting performance of our system depends on the number of knowledge source activations. The constant sorting and scheduling of these events introduces an overhead into the system, and furthermore the design choices, i.e. how to split up the generation process into knowledge sources, also impact performance. However, they do not increase the runtime complexity of the underlying algorithms, e.g. a road generation algorithm of complexity $O(n^2)$ remains $O(n^2)$. For example 3, at 8000 geometric objects, we measured on average 6.7 seconds of overhead.
In the next sections, we discuss the impact of scheduling and modularization on the generation performance of our blackboard architecture.
**Impact of Scheduling on Performance**
In order to evaluate the impact of scheduling, we generate each example scene (see Section 5.2) with and without event reduction at varying scene complexities. Figure 9 shows the relative number of reduced activations due to event reduction.

We can see that the activation reduction increases significantly with scene complexity for examples 2 and 3, while it remains fairly constant for example 1. This can be explained by the presence of the road network generation algorithm in both examples 2 and 3.
Creating a road network in our case involves relatively more knowledge sources than, for example, forest generation, i.e. 6 steps instead of 3: (1) height map to point cloud, (2) point cloud to interest point, (3) interest point to point collection, (4) point collection to abstract road graph, (5) abstract road graph to road graph with strategy tags and (6) road graph to textured mesh. Furthermore, event deletion reduces activations mostly along such vertical chains, i.e. generators using highly sequential or a large number of dependent steps benefit most from event deletion. In conclusion, event reduction can reduce the number of activations by 52% to 98%.
**Impact of Modularization on Performance**
One could argue, however, that modularizing PCG generators into several smaller modules will significantly increase the runtime cost, as more modules also mean more events. However, although more events are indeed created, we will show that the overall runtime can decrease depending on the design.
This test uses example 1; as stated in Section 5.2, the forest scene can be designed with only trees (single object type), with trees and bushes (double object types) or with trees, bushes and plants (triple object types). The modularization in all three cases is an example of modularization by output type (see Section 4.1). We will evaluate the performance of example 1 for all three configurations (single, double and triple) with and without the introduced modularization by output type. The configurations are introduced to show the impact of data and knowledge source reuse facilitated by modularization.
First, modularization increases the activation count by 200% for the single configuration and by 50% for the double configuration, while it stays about the same for the triple configuration. The highest increase is for the single configuration; this is to be expected, as we cut up the generation process into more steps without having more knowledge sources that take advantage of it (i.e. reuse). Increasing the number of object types in our test case, however, decreases this disadvantage, with an object type count of three almost nullifying the increase in activation count from adding extra modules.
**Figure 10**: Overhead ratio with event reduction for all use cases.
It is clear that event reduction is beneficial for large scale virtual world generation. However, event reduction also introduces an overhead into the system. Figure 10 shows the relative overhead (i.e. the percentage of time not spent on generation) for all example scenes. We can see that the overhead is between 13% and 42%. Example 2 can be considered highly vertical in terms of content generation; conversely, examples 1 and 3 are more horizontal, i.e. subsequent knowledge sources mostly reuse content generated by a single knowledge source. For example, the placement of road intersections and tree positions are both derived from a point cloud generated based on the height map. From this, we can infer that the more horizontal, i.e. reusable, the generation process, the lower the overhead.
**Figure 11**: Total execution speed-up due to modularization for different object type amounts.
**Figure 12**: Generation speed-up due to modularization for different object type amounts.
Next we take a look at the execution times, both total (Figure 11) and actual generation (Figure 12). The results show that in the single configuration the generation process is considerably slower: although the generation time does not decrease that much (by about 20% at 50,000 objects), the total execution time at least doubles. However, when introducing more object types (i.e. the double and triple configurations), the modularized knowledge sources can be reused by other knowledge sources, thus increasing the performance. Modularization moves work related to converting from one data type to another into a separate knowledge source. The result of this conversion can be used by multiple knowledge sources which would otherwise have to do this conversion themselves. Moving the conversion into its own knowledge source and making the input and output as generic as possible also allows various optimizations which are otherwise not possible. This means that the extra overhead created by modularization due to the extra activations is less than the benefit gained from it.
6 CONCLUSIONS AND FUTURE WORK
To improve the usability of multi-domain content generation, the selected content domains for generation as well as the order and manner in which content is generated should be configurable. We presented an extensible framework for multi-domain content generation of virtual worlds. We have introduced blackboards to the domain of procedural generation to alleviate the current limitations of multi-domain content generation. Encapsulating the different (single content-domain) procedural methods into knowledge sources allows the system to reason about the generation process, which in turn allows optimization of the generation process by eliminating unnecessary generation executions and ordering the remaining ones based on priority in terms of content dependencies.
We provided an overview of the design considerations when using our method for virtual world generation: modularization and knowledge source design patterns. We have shown the extensibility of our system by implementing a set of 3 different use cases (forest, road and combined environments) which form a representative sample set of virtual world generation. Furthermore, our system facilitates a more stable way of editing generated content, as changes in the data only trigger the specific procedural methods that depend on it. Finally, the generation performance of the system depends on the scheduling system (i.e. event reduction) and modularization design paradigms. Event reduction reduced the number of knowledge source activations by as much as 98% resulting in better performing large world generation. Although modularization increases the number of activations, we proved that the overall runtime can be reduced by intelligent data and knowledge source reuse.
As paths for future research, we suggest five possible extensions. Firstly, exploring the behaviour of dependency cycles in the data model. Secondly, improving the editing of generated content by automatically communicating edits to the knowledge source that created said changed content. Thirdly, improving the generation performance of the system, for example by automatic concurrency of the PCG blackboard architecture. The separation of procedural modelling methods into knowledge sources should provide opportunities for parallelization. Fourthly, data ontologies could be utilized to provide a formalized data format, allowing for more optimizations for scheduling and overall improvement of the coherence of the resulting content. Lastly, recursive blackboards could be used for PCG blackboards, where the knowledge sources can contain blackboards themselves. This could be used to enable scoping operations, where certain knowledge sources are scoped within certain regions of the virtual world. This would allow finer-grained control over the generation process and allow different types of constraints in different regions.
REFERENCES
Computation Tree Logic (CTL) &
Basic Model Checking Algorithms
Martin Fränzle
Carl von Ossietzky Universität
Dpt. of Computing Science
Res. Grp. Hybride Systeme
Oldenburg, Germany
What you’ll learn
1. **Rationale behind declarative specifications:**
- Why operational style is insufficient
2. **Computation Tree Logic CTL:**
- Syntax
- Semantics: Kripke models
3. **Explicit-state model checking of CTL:**
- Recursive coloring
Operational models
Nowadays, a lot of ES design is based on executable behavioral models of the system under design, e.g. using
- Statecharts (a syntactically sugared variant of Moore automata)
- VHDL.
Such operational models are good at
- supporting system analysis
- simulation / virtual prototyping
- supporting incremental design
- executable models
- supporting system deployment
- executable model as “golden device”
- code generation for rapid prototyping or final product
- hardware synthesis
...are bad at
- supporting non-operational descriptions:
- *What* instead of *how*.
- E.g.: Every request is eventually answered.
- supporting negative requirements:
- “Thou shalt not...”
- E.g.: The train ought not move, unless it is manned.
- providing a structural match for requirement *lists*:
- System has to satisfy $R_1$ *and* $R_2$ *and* ...
- If system fails to satisfy $R_1$ then $R_2$ should be satisfied.
Multiple viewpoints

[Figure: the multiple viewpoints in embedded-system design. Requirements analysis and aspects address the "What?", programming and algorithmics address the "How?", and tests & proofs address the "Consistent?" question; validation/verification and model checking relate these viewpoints to each other.]

[Figure: model-checking workflow. A device description (e.g. a VHDL architecture of a processor) and a specification are fed to a model checker, which returns either approval or a counterexample.]

Model checking:
- Exhaustive state-space search
- Automatic verification/falsification of invariants
**Safety requirement:** Gate has to be closed whenever a train is in “In”.
The gate model

[Figure: gate automaton with states Open, Opening, Closed, Closing; transitions are driven by the events enter?/~enter? and leave?/~leave?.]

Track model (safe abstraction)

[Figure: track automaton with states Empty, Appr., In, emitting the events enter! and leave!.]

Gate reaction: Open, Closing, Closed, Opening, Open, Open.
Computation Tree Logic
Syntax of CTL
We start from a countable set $\mathcal{AP}$ of atomic propositions. The CTL formulae are then defined inductively:
- Any proposition $p \in \mathcal{AP}$ is a CTL formula.
- The symbols $\bot$ and $\top$ are CTL formulae.
- If $\phi$ and $\psi$ are CTL formulae, so are
\begin{align*}
&\neg \phi, \phi \land \psi, \phi \lor \psi, \phi \rightarrow \psi \\
&\text{EX } \phi, \text{AX } \phi \\
&\text{EF } \phi, \text{AF } \phi \\
&\text{EG } \phi, \text{AG } \phi \\
&\phi \text{ EU } \psi, \phi \text{ AU } \psi
\end{align*}
Semantics (informal)
- \( E \) and \( A \) are **path quantifiers**:
- \( A \): for **all paths** in the computation tree . . .
- \( E \): for **some path** in the computation tree . . .
- \( X, F, G \) and \( U \) are **temporal operators** which refer to the path under investigation, as known from LTL:
- \( X\phi \) (**Next**): evaluate \( \phi \) in the next state on the path
- \( F\phi \) (**Finally**): \( \phi \) holds for some state on the path
- \( G\phi \) (**Globally**): \( \phi \) holds for all states on the path
- \( \phi U \psi \) (**Until**): \( \phi \) holds on the path at least until \( \psi \) holds
**N.B.** Path quantifiers and temporal operators are compound in CTL: there never is an isolated path quantifier or an isolated temporal operator. There is a lot of things you can’t express in CTL because of this...
CTL formulae are interpreted over Kripke structures. A **Kripke structure** $K$ is a quadruple $K = (V, E, L, I)$ with
- $V$ a set of vertices (interpreted as system states),
- $E \subseteq V \times V$ a set of edges (interpreted as possible transitions),
- $L : V \rightarrow \mathcal{P}(\mathcal{AP})$ labels the vertices with atomic propositions that apply in the individual vertices,
- $I \subseteq V$ is a set of initial states.
A path $\pi$ in a Kripke structure $K = (V, E, L, I)$ is an edge-consistent infinite sequence of vertices:
- $\pi \in V^\omega$,
- $(\pi_i, \pi_{i+1}) \in E$ for each $i \in \mathbb{N}$.
Note that a path need not start in an initial state!
The labelling $L$ assigns to each path $\pi$ a propositional trace
$$tr_{\pi} = L(\pi) \overset{\text{def}}{=} \langle L(\pi_0), L(\pi_1), L(\pi_2), \ldots \rangle$$
that path formulae ($X\phi$, $F\phi$, $G\phi$, $\phi U \psi$) can be interpreted on.
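To make these definitions concrete, here is a minimal Python sketch (not taken from the slides) of an explicit-state Kripke structure; the states, edges and propositions form a made-up toy example.

```python
# Minimal explicit-state representation of a Kripke structure K = (V, E, L, I).
# The concrete states, edges and labels are a hypothetical toy example.

V = {"s0", "s1", "s2"}                                  # vertices (system states)
E = {("s0", "s1"), ("s1", "s2"), ("s1", "s0"),
     ("s2", "s2")}                                      # edges (possible transitions)
L = {"s0": {"p"}, "s1": {"p", "q"}, "s2": set()}        # vertex -> atomic propositions
I = {"s0"}                                              # initial states

def successors(v):
    """Direct successors of v, i.e. {w | (v, w) in E}."""
    return {w for (u, w) in E if u == v}

def holds(p, v):
    """Does atomic proposition p apply in vertex v, i.e. p in L(v)?"""
    return p in L[v]

# Paths are infinite, edge-consistent vertex sequences; the marking algorithms
# discussed later never construct them explicitly and work on V and E directly.
```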
Semantics (formal)
Let $K = (V, E, L, I)$ be a Kripke structure and $v \in V$ a vertex of $K$.
- $v, K \models \top$
- $v, K \not\models \bot$
- $v, K \models p$ for $p \in AP$ iff $p \in L(v)$
- $v, K \models \neg \phi$ iff $v, K \not\models \phi$,
- $v, K \models \phi \land \psi$ iff $v, K \models \phi$ and $v, K \models \psi$,
- $v, K \models \phi \lor \psi$ iff $v, K \models \phi$ or $v, K \models \psi$,
- $v, K \models \phi \Rightarrow \psi$ iff $v, K \not\models \phi$ or $v, K \models \psi$.
Semantics (contd.)
- $v, K \models \text{EX } \phi$ iff there is a path $\pi$ in $K$ s.t. $v = \pi_0$ and $\pi_1, K \models \phi$,
- $v, K \models \text{AX } \phi$ iff all paths $\pi$ in $K$ with $v = \pi_0$ satisfy $\pi_1, K \models \phi$,
- $v, K \models \text{EF } \phi$ iff there is a path $\pi$ in $K$ s.t. $v = \pi_0$ and $\pi_i, K \models \phi$ for some $i$,
- $v, K \models \text{AF } \phi$ iff all paths $\pi$ in $K$ with $v = \pi_0$ satisfy $\pi_i, K \models \phi$ for some $i$ (that may depend on the path),
- $v, K \models \text{EG } \phi$ iff there is a path $\pi$ in $K$ s.t. $v = \pi_0$ and $\pi_i, K \models \phi$ for all $i$,
- $v, K \models \text{AG } \phi$ iff all paths $\pi$ in $K$ with $v = \pi_0$ satisfy $\pi_i, K \models \phi$ for all $i$,
- $v, K \models \phi \text{ EU } \psi$ iff there is a path $\pi$ in $K$ s.t. $v = \pi_0$ and some $k \in \mathbb{N}$ s.t. $\pi_i, K \models \phi$ for each $i < k$ and $\pi_k, K \models \psi$,
- $v, K \models \phi \text{ AU } \psi$ iff all paths $\pi$ in $K$ with $v = \pi_0$ have some $k \in \mathbb{N}$ s.t. $\pi_i, K \models \phi$ for each $i < k$ and $\pi_k, K \models \psi$.
A Kripke structure $K = (V, E, L, I)$ satisfies $\phi$ iff all its initial states satisfy $\phi$,
i.e. iff $is, K \models \phi$ for all $is \in I$.
CTL Model Checking
Explicit-state algorithm
Rationale
We will extend the idea of verification/falsification by exhaustive state-space exploration to full CTL.
- Main technique will again be breadth-first search, i.e. graph coloring.
- Need to extend this to other modalities than AG.
- Need to deal with nested modalities.
Model-checking CTL: General layout
Given: a Kripke structure $K = (V, E, L, I)$ and a CTL formula $\phi$
Core algorithm: find the set $V_\phi \subseteq V$ of vertices in $K$ satisfying $\phi$ by
1. for each atomic subformula $p$ of $\phi$, mark the set $V_p \subseteq V$ of vertices in $K$ satisfying $p$
2. for increasingly larger subformulae $\psi$ of $\phi$, synthesize the marking $V_\psi \subseteq V$ of nodes satisfying $\psi$ from the markings for $\psi$’s immediate subformulae until all subformulae of $\phi$ have been processed (including $\phi$ itself)
Result: report “$K \models \phi$” iff $V_\phi \supseteq I$
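To illustrate the bottom-up order of this layout, the following Python sketch (ours, not the lecture's) represents CTL formulae as nested tuples and enumerates subformulae in post-order, so that each marking $V_\psi$ can be computed after the markings of $\psi$'s immediate subformulae; the tuple encoding is an assumption of the sketch.

```python
# CTL formulae as nested tuples, e.g. ("EU", ("AP", "p"), ("NOT", ("AP", "q"))).
# The labelling algorithm handles "increasingly larger subformulae", i.e. it
# visits subformulae in post-order: each formula comes after its parts.

def subformulas_postorder(f):
    """Yield all subformulae of f in post-order, f itself last."""
    for arg in f[1:]:
        if isinstance(arg, tuple):          # skip leaf payloads such as proposition names
            yield from subformulas_postorder(arg)
    yield f

phi = ("EU", ("AP", "p"), ("NOT", ("AP", "q")))
for psi in subformulas_postorder(phi):
    print(psi)   # ('AP','p'), ('AP','q'), ('NOT',...), ('EU',...): the marking order
```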
Reduction
The tautologies
\[
\begin{align*}
\phi \lor \psi & \iff \neg (\neg \phi \land \neg \psi) \\
AX \phi & \iff \neg EX \neg \phi \\
AG \phi & \iff \neg EF \neg \phi \\
EF \phi & \iff T \ EU \phi \\
EG \phi & \iff \neg AF \neg \phi \\
\phi \ AU \psi & \iff \neg ((\neg \psi) \ EU \neg (\phi \lor \psi)) \land AF \psi
\end{align*}
\]
indicate that we can rewrite each formula to one only containing atomic propositions, \(\neg, \land, EX, EU, AF\).
After preprocessing, our algorithm need only tackle these!
Model-checking CTL: atomic propositions

**Given**: A finite Kripke structure with vertices $V$ and edges $E$ and a labelling function $L$ assigning atomic propositions to vertices. Furthermore an atomic proposition $p$ to be checked.

**Algorithm**: Mark all vertices that have $p$ as a label.

**Complexity**: $O(|V|)$
Model-checking CTL: $\neg \phi$
**Given:** A set $V_\phi$ of vertices satisfying formula $\phi$.
**Algorithm:** Mark all vertices not belonging to $V_\phi$.
**Complexity:** $O(|V|)$
Model-checking CTL: $\phi \land \psi$
**Given:** Sets $V_\phi$ and $V_\psi$ of vertices satisfying formulae $\phi$ or $\psi$, resp.
**Algorithm:** Mark all vertices belonging to $V_\phi \cap V_\psi$.
**Complexity:** $O(|V|)$
Model-checking CTL: $\text{EX} \, \phi$

**Given:** Set $V_\phi$ of vertices satisfying formula $\phi$.

**Algorithm:** Mark all vertices that have a successor state in $V_\phi$.

**Complexity:** $O(|V| + |E|)$
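A minimal Python sketch of this marking step (the function name and the edge-set representation are our assumptions):

```python
# EX phi: a vertex is marked iff at least one of its successors satisfies phi.
# V_phi is the set of phi-vertices, E the edge relation as a set of (source, target) pairs.

def mark_EX(V_phi, E):
    """Return {v | some (v, w) in E with w in V_phi}.  O(|V| + |E|)."""
    return {v for (v, w) in E if w in V_phi}
```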
Model-checking CTL: $\phi \text{EU} \psi$
**Given:** Sets $V_\phi$ and $V_\psi$ of vertices satisfying formulae $\phi$ or $\psi$, resp.
**Algorithm:** Incremental marking by
1. Mark all vertices belonging to $V_\psi$.
2. Repeat
- if there is a state in $V_\phi$ that has some successor state marked then mark it also
until no new state is found.
**Termination:** Guaranteed due to finiteness of $V_\phi \subset V$.
**Complexity:** $O(|V| + |E|)$ if breadth-first search is used.
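A Python sketch of the backward breadth-first marking, assuming a predecessor map `pred` from each vertex to the set of its predecessors:

```python
from collections import deque

# phi EU psi: start from the psi-vertices and propagate backwards through phi-vertices.

def mark_EU(V_phi, V_psi, pred):
    """Backward BFS marking for phi EU psi; O(|V| + |E|)."""
    marked = set(V_psi)                      # 1. mark all psi-vertices
    queue = deque(marked)
    while queue:                             # 2. repeat until no new state is found
        w = queue.popleft()
        for v in pred.get(w, ()):            # every edge v -> w
            if v in V_phi and v not in marked:
                marked.add(v)                # v satisfies phi and has a marked successor
                queue.append(v)
    return marked
```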
Model-checking CTL: $\text{AF} \, \phi$

**Given:** Set $V_{\phi}$ of vertices satisfying formula $\phi$.

**Algorithm:** Incremental marking by
1. Mark all vertices belonging to $V_{\phi}$.
2. Repeat
   - if there is a state in $V$ that has all successor states marked then mark it also
   until no new state is found.

**Termination:** Guaranteed due to finiteness of $V$.

**Complexity:** $O(|V| \cdot (|V| + |E|))$.
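A Python sketch of this naive fixpoint, assuming a successor map `succ` and a total transition relation (every state has at least one successor), as usual for Kripke structures:

```python
# AF phi: mark phi-vertices, then repeatedly mark vertices all of whose successors
# are already marked.  Naive fixpoint, O(|V| * (|V| + |E|)) as on the slide.

def mark_AF(V_phi, V, succ):
    marked = set(V_phi)                              # 1. mark all phi-vertices
    changed = True
    while changed:                                   # 2. repeat until no new state is found
        changed = False
        for v in V - marked:
            successors = succ.get(v, set())
            if successors and successors <= marked:  # all successors already marked
                marked.add(v)
                changed = True
    return marked
```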
Model-checking CTL: $\text{EG} \phi$, for efficiency
Given: Set $V_\phi$ of vertices satisfying formula $\phi$.
Algorithm: Incremental marking by
1. Strip Kripke structure to $V_\phi$-states:
$$(V, E) \leadsto (V_\phi, E \cap (V_\phi \times V_\phi)).$$
Complexity: $O(|V| + |E|)$
2. Mark all states belonging to loops in the reduced graph.
Complexity: $O(|V_\phi| + |E_\phi|)$ by identifying strongly connected components.
3. Repeat
   - if there is a state in $V_\phi$ that has some successor state marked then mark it also
   until no new state is found.
   Complexity: $O(|V_\phi| + |E_\phi|)$

Overall complexity: $O(|V| + |E|)$.
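A Python sketch of the SCC-based procedure; it borrows `strongly_connected_components` from the networkx library for convenience (any linear-time SCC algorithm would do), and the function name is ours:

```python
import networkx as nx

# EG phi via the three steps above: restrict to phi-vertices, mark vertices on
# cycles of the reduced graph (non-trivial SCCs or self-loops), then propagate
# the marking backwards through phi-vertices.

def mark_EG(V_phi, E):
    # 1. Strip the Kripke structure to phi-states.
    G = nx.DiGraph()
    G.add_nodes_from(V_phi)
    G.add_edges_from((u, v) for (u, v) in E if u in V_phi and v in V_phi)

    # 2. Mark all states belonging to loops in the reduced graph.
    marked = set()
    for scc in nx.strongly_connected_components(G):
        if len(scc) > 1 or any(G.has_edge(v, v) for v in scc):
            marked |= scc

    # 3. A phi-state with some marked successor is marked as well.
    frontier = list(marked)
    while frontier:
        w = frontier.pop()
        for v in G.predecessors(w):
            if v not in marked:
                marked.add(v)
                frontier.append(v)
    return marked
```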
Theorem: It is decidable whether a finite Kripke structure $(V, E, L, I)$ satisfies a CTL formula $\phi$.
The complexity of the decision procedure is $O(|\phi| \cdot (|V| + |E|))$, i.e.
- linear in the size of the formula, given a fixed Kripke structure,
- linear in the size of the Kripke structure, given a fixed formula.
However, size of Kripke structure is exponential in number of parallel components in the system model.
Appendix
Fair Kripke Structures &
Fair CTL Model Checking
A fair Kripke structure is a pair \( (K, \mathcal{F}) \), where
- \( K = (V, E, L, I) \) is a Kripke structure
- \( \mathcal{F} \subseteq \mathcal{P}(V) \) is a set of vertex sets, called a fairness condition.
A fair path \( \pi \) in a fair Kripke structure \( ((V, E, L, I), \mathcal{F}) \) is an edge-consistent infinite sequence of vertices which visits each set \( F \in \mathcal{F} \) infinitely often:
- \( \pi \in V^\omega \),
- \( (\pi_i, \pi_{i+1}) \in E \) for each \( i \in \mathbb{N} \),
- \( \forall F \in \mathcal{F}.\ \exists^{\infty} i \in \mathbb{N}.\ \pi_i \in F \).
Note the similarity to (generalized) Büchi acceptance!
Fair CTL: Semantics
- \( \nu, K, F \models_F \text{EX } \phi \) iff there is a fair path \( \pi \) in \( K \) s.t. \( \nu = \pi_0 \) and \( \pi_1, K, F \models_F \phi \),
- \( \nu, K, F \models_F \text{AX } \phi \) iff all fair paths \( \pi \) in \( K \) with \( \nu = \pi_0 \) satisfy \( \pi_1, K, F \models_F \phi \),
- \( \nu, K, F \models_F \text{EF } \phi \) iff there is a fair path \( \pi \) in \( K \) s.t. \( \nu = \pi_0 \) and \( \pi_i, K, F \models_F \phi \) for some \( i \),
- \( \nu, K, F \models_F \text{AF } \phi \) iff all fair paths \( \pi \) in \( K \) with \( \nu = \pi_0 \) satisfy \( \pi_i, K, F \models_F \phi \) for some \( i \) (that may depend on the fair path),
- \( \nu, K, F \models_F \text{EG } \phi \) iff there is a fair path \( \pi \) in \( K \) s.t. \( \nu = \pi_0 \) and \( \pi_i, K, F \models_F \phi \) for all \( i \),
- \( \nu, K, F \models_F \text{AG } \phi \) iff all fair paths \( \pi \) in \( K \) with \( \nu = \pi_0 \) satisfy \( \pi_i, K, F \models_F \phi \) for all \( i \),
- \( \nu, K, F \models_F \phi \text{ EU } \psi \), iff there is a fair path \( \pi \) in \( K \) s.t. \( \nu = \pi_0 \) and some \( k \in \mathbb{N} \) s.t. \( \pi_i, K, F \models_F \phi \) for each \( i < k \) and \( \pi_k, K, F \models_F \psi \),
- \( \nu, K, F \models_F \phi \text{ AU } \psi \), iff all fair paths \( \pi \) in \( K \) with \( \nu = \pi_0 \) have some \( k \in \mathbb{N} \) s.t. \( \pi_i, K, F \models_F \phi \) for each \( i < k \) and \( \pi_k, K, F \models_F \psi \).
A fair Kripke structure \( ((V, E, L, I), F) \) satisfies \( \phi \), denoted \( ((V, E, L, I), F) \models_F \phi \), iff all its initial states satisfy \( \phi \), i.e. iff \( \nu, K, F \models_F \phi \) for all \( \nu \in I \).
Lemma: Given a fair Kripke structure \(((V, E, L, I), \mathcal{F})\), the set \(\text{Fair} \subseteq V\) of states from which a fair path originates can be determined algorithmically.
Alg.: This is a problem of finding adequate SCCs:
1. Find all SCCs in \(K\).
2. Select those SCCs that do contain at least one state from each fairness set \(F \in \mathcal{F}\).
3. Find all states from which at least one of the selected SCCs is reachable.
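A Python sketch of these three steps, again using networkx for SCC computation; in addition to the lemma's wording, the sketch only selects SCCs that can actually carry an infinite path (non-trivial, or trivial with a self-loop), which the slides leave implicit.

```python
import networkx as nx

# Fair = states from which a fair path originates, for fairness condition
# `fairness`, a collection of vertex sets that must each be visited infinitely often.

def fair_states(V, E, fairness):
    G = nx.DiGraph()
    G.add_nodes_from(V)
    G.add_edges_from(E)

    # 1.+2. Find the SCCs and select those that are cyclic and contain at least
    #       one state from each fairness set.
    core = set()
    for scc in nx.strongly_connected_components(G):
        cyclic = len(scc) > 1 or any(G.has_edge(v, v) for v in scc)
        if cyclic and all(scc & F for F in fairness):
            core |= scc

    # 3. All states from which at least one selected SCC is reachable.
    fair = set(core)
    frontier = list(core)
    while frontier:
        w = frontier.pop()
        for v in G.predecessors(w):
            if v not in fair:
                fair.add(v)
                frontier.append(v)
    return fair
```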
Model-checking fair CTL: $\text{EX} \, \phi$
**Given:** Set $V_\phi$ of vertices fairly satisfying formula $\phi$.
**Algorithm:** Mark all vertices that have a successor state in $V_\phi \cap \text{Fair}$.
Note that the intersection with $\text{Fair}$ is necessary even though the states in $V_\phi$ fairly satisfy $\phi$:
- $\phi$ may be an atomic proposition, in which case fairness is irrelevant;
- $\phi$ may start with a universal (A) path quantifier that is trivially satisfied by the non-existence of a fair path.
Model-checking fair CTL: $\phi \text{ EU } \psi$
**Given:** Sets $V_{\phi}$ and $V_{\psi}$ of vertices fairly satisfying formulae $\phi$ or $\psi$, resp.
**Algorithm:** Incremental marking by
1. Mark all vertices belonging to $V_{\psi} \cap Fair$.
2. Repeat
if there is a state in $V_{\phi}$ that has some successor state marked then mark it also
until no new state is found.
Model-checking fair CTL: $\text{EG} \, \phi$
**Given:** Set \( V_\phi \) of vertices fairly satisfying formula \( \phi \).
**Algorithm:** Incremental marking by
1. Strip Kripke structure to \( V_\phi \)-states:
\((V, E) \rightsquigarrow (V_\phi, E \cap (V_\phi \times V_\phi))\).
2. Mark all states that can reach a *fair* SCC in the *reduced* graph.
(Same algorithm as for finding the set \textit{Fair}, yet applied to the reduced graph.)
METHOD AND SYSTEM FOR CREATING ORDERED READING LISTS FROM UNSTRUCTURED DOCUMENT SETS
Inventors: Robert J. St. Jacques, Jr., Fairport, NY (US); Mary Catherine McCorkindale, Fairport, NY (US); Saurabh Prabhat, Webster, NY (US)
Assignee: Xerox Corporation, Norwalk, CT (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 161 days.
Prior Publication Data
US 2013/0097167 A1 Apr. 18, 2013
Field of Classification Search
CPC: G06F 17/3071; G06F 17/3086
USPC: 707/3, 737, 722, 14, 602, 749, 707; 706/11; 715/753, 854
References Cited
U.S. PATENT DOCUMENTS
5,687,364 A 11/1997 Saad et al.
5,963,940 A * 10/1999 Liddy et al.
7,539,653 B2 5/2009 Handley
8,515,937 B1 * 8/2013 Sun et al. 707/707
ABSTRACT
A method for creating an ordered reading list for a set of documents includes identifying the topics among documents in a document set; clustering the document set into groups by topic; calculating a probability that a particular topic describes a given document in a cluster based upon the occurrence of the keywords in the document; determining relevant documents in a cluster based on a probability distribution; determining relevant information in a document by repeating a similar operation on the document paragraphs; generating an ordered reading list for the related documents of the cluster based on the relevance; and associating a visual cue with non-redundant information in each document to indicate which paragraphs contain the relevant information.
17 Claims, 6 Drawing Sheets
### References Cited
#### U.S. PATENT DOCUMENTS
| Patent Number | Date | Inventor(s) | Classification |
|---|---|---|---|
| 2007/0011154 | 1/2007 | Musgrove et al. | 707/5 |
| 2008/0244418 | 10/2008 | Manolescu et al. | 715/753 |
| 2008/0256128 | 10/2008 | Pierce et al. | 707/104.1 |
| 2011/0047168 | 2/2011 | Ellingsworth | 707/749 |
| 2011/0173264 | 7/2011 | Kelly | 709/205 |
| 2011/0258229 | 10/2011 | Ni et al. | 707/776 |
| 2011/0295612 | 12/2011 | Donneau-Golencer et al. | 705/1.1 |
| 2012/0079172 | 3/2012 | Kandekar et al. | 715/256 |
| 2012/0246162 | 9/2012 | Yamaguchi | 707/737 |
| 2012/0296891 | 11/2012 | Rangan | 707/712 |
| 2013/0212060 | 8/2013 | Crouse et al. | 707/602 |
#### OTHER PUBLICATIONS
* cited by examiner
**FIG. 3** (flowchart): cluster repository documents into groups of a reasonable number of documents; determine a topical inference score per document; prune less useful documents; organize useful documents into an ordered reading list based on document score; highlight useful portions of documents in the reading list.
**FIG. 5** (screen exemplar): a displayed sample document with a tool tip explaining that a highlighted paragraph contains new information about visualizing tasks for information analysis.

**FIG. 6** (screen exemplar): a further sample document (an introduction to the HARVEST visual-analytics system, covering smart visual analytic widgets, dynamic visualization recommendation, and semantics-based capture of insight provenance) rendered with an alternative style of visual cues.
METHOD AND SYSTEM FOR CREATING ORDERED READING LISTS FROM UNSTRUCTURED DOCUMENT SETS
BACKGROUND
The disclosed embodiments generally relate to the field of data base management, and more particularly to clustering a set of documents in a document repository into cluster groups, and then organizing the clustered groups into an ordered reading list based upon the relational strength and usefulness to a topic. Such an ordered reading list comprises a document trail for efficient topical reading by a user.
The ability to store documents electronically has led to an information explosion. Information bases such as the Internet, corporate digital data networks, electronic government record warehouses, and so forth, store vast quantities of information, which motivates development of effective information organization systems. Two commonly used organizational approaches are categorization and clustering. In categorization, a set of classes are predefined, and documents are grouped into classes based on content similarity measures. Clustering is similar, except that no predefined classes are defined, rather, documents are grouped or clustered based on similarity, and groups of similar documents define the set of classes. U.S. Pat. Nos. 7,539,653 and 7,711,747 are typical examples of clustering techniques.
The use of such a clustering management system to facilitate organization, or even the manual organization of such documents into groups, is usually followed by readers/users of the clustered groups manually reading through the data of the documents therein and then making subjective judgment calls about whether or not a document is relevant or useful to a related topic. The problem is that such a judgment can only be made by manually reading the entire document itself. Manual reading of related documents usually involves a lot of wasted time due to document redundancies and overlap. It is not uncommon for each document in a series to contain much duplicate information already provided by documents earlier in the series. People reading such a series of documents often must spend a significant amount of time trying to determine what novel content exists in each subsequent document. This frequently leads to “skimming”, where readers attempt to quickly parse documents at some level of granularity (e.g., by paragraph) to determine whether the information provided is novel. This can lead to wasted time and missed information.
Thus, there is a need for improved systems and methods for further organizing a document repository so that reader/user review of accessible documents becomes more efficient, by minimizing presented overlap, redundancy or non-useful information, and by highlighting new, particularly useful or strongly related information on the desired topic.
The present embodiments are directed to solving one or more of the problems specified above and to fulfilling the identified needs.
SUMMARY
The embodiments relate to a clustering process wherein a corpus of a document set is analyzed in accordance with preselected text analytics and natural language processing steps for identifying grouping relationships for sets of documents therein and clustering the sets into a plurality of clustered groups. Such parsing of the documents in the repository is responsive to identification of words in the documents themselves that are deemed significant by the text analytic and language processing steps.
The embodiments further provide a methodology for organizing a repository of unstructured documents into groups of ordered reading lists, i.e., document trails. Each “document trail” is an ordered list of documents that are related to each other by subject matter. The disclosed embodiments combine standard tools for text analytics and natural language processing (e.g., topic extraction, entity extraction, meta data extraction, readability) with machine learning techniques (e.g., document clustering) to group documents, choose the most important/relevant documents from each group, and organize those documents into a suggested reading order. Additionally, documents within each document trail may be marked up or highlighted to indicate which paragraphs therein contain novel or useful information.
Before the present methods, systems and materials are described in detail, it is to be understood that this disclosure is not limited to the particular methodologies, systems and materials described, as these may vary. It is also to be understood that the terminology used in the description is for the purpose of describing the particular versions or embodiments only, and is not intended to limit the scope.
It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Thus, for example, reference to a “document” is a reference to one or more documents and equivalents thereof known to those skilled in the art, and so forth. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. Although any methods, materials, and devices similar or equivalent to those described herein can be used in the practice or testing of embodiments, the preferred methods, materials, and devices are now described. All publications and specifically cited patents, mentioned herein are incorporated by reference. Nothing herein is to be construed as an admission that the embodiments described herein are not entitled to antedate such disclosure by virtue of prior invention.
In accordance with certain embodiments illustrated herein, a method is disclosed for creating an ordered reading list for a set of documents. The method comprises: analyzing a corpus of the document set in accordance with preselected text analytics and natural language processing steps for identifying a grouping relationship and clustering the set into a plurality of cluster groups; prioritizing the documents in a one of the cluster groups in relation to importance to a topic of the cluster group; and organizing the documents in accordance with the prioritizing into the ordered reading list as a document trail for sequential access to a reader of the document set. Pruned documents, documents determined by the system to be less useful/relevant, are hidden, but not lost; users may choose to display hidden documents in the trail to get more information. Users may provide feedback while reading a trail to “branch” into other potential paths (e.g., more readable documents, more documents like a presented one, newer documents, etc.), deviating from the default document order. In addition, once multiple trails have been created, the trails are sorted based on anticipation/prediction of user needs; e.g., trails related to topics that a user has been recently interested in move to the top of the trail list.
In accordance with other aspects of the embodiments, a system is provided for processing a repository of documents comprising a clustering processor for segregating the documents based on topics identified in the documents and
relating a plurality of documents into a cluster group; and, a prioritizing processor for organizing the documents in the cluster group based upon strength of relationship to the topic by sequencing the documents into an ordered reading list in accordance with the strength of relationship.
**BRIEF DESCRIPTION OF THE DRAWINGS**
FIG. 1 diagrammatically shows the concept of a document trail comprising an ordered reading list of documents;
FIG. 2 is a block diagram of a system for processing a repository of documents into clustered and organized document trails;
FIG. 3 is a flowchart showing steps for creating an ordered document trail from a repository corpus;
FIG. 4 is an exemplar of a document display using visual cues to indicate particular portions having distinctive significance on a displayed page being read by a user;
FIG. 5 is an exemplar of a display comprising a tool tip explaining a highlighted portion of the display;
FIG. 6 is an exemplar of a display using an alternative style of visual cues for a document within a document trail; and
FIG. 7 is an exemplar of a document trail wherein a reader has provided feedback that affects the order of documents presented later during trail reading.
**DETAILED DESCRIPTION**
With reference to FIG. 1, the disclosed embodiments are intended to display/provide a reading user of the subject system an ordered reading list comprising a sequential order of documents with a defined beginning comprising a first document suggested by the system as being most relevant to the topic of interest to the reader. Such an ordered reading list is referred to herein as a "document trail". The document trail is intended to provide the reader with a suggested shortest path to the most relevant documents about a specific topic in a highly efficient manner by identifying novel or particularly useful information and identifying redundant overlap or less useful information with some forms of visual cues for selective and easy looking over by the reader.
The disclosed embodiments provide methods and systems that can be applied to a large set of unstructured documents such as a typical document repository corpus. The subject methodology separates the documents in the corpus into groups by determining how strongly related the individual documents are with respect to discerned topics therein. Such a topical model is acquired through known clustering processes employing text analytics and natural language processing steps that can identify a grouping relationship for the documents so that they may be clustered into distinct clustered groups. For each group, the most useful documents are extracted and then ordered into a reading list. Usefulness is typically determined based upon identifying words in the document that are the most significant therein.
The methodology further includes hiding pruned documents, i.e., documents determined by the system to be less useful/relevant within the cluster group. Such documents are not lost, and the user retains the option to choose to display hidden documents in a document trail to get more information.
An alternative aspect of the subject methodology is that the users may provide feedback while reading a trail to branch into other potential paths (e.g., more readable documents, more documents like the present one being read, newer documents, or other aspects that could be of interest to the reader.) Branching comprises deviating from the default document order originally presented as the initial document trail. Deviating the list to hidden documents is an example of branching.
Yet another alternative aspect includes sorting document trails based upon anticipation/prediction of user needs. The system records and stores the topic that a user has been recently interested in and once such topics are determined to have a relation to a topic being currently read, document trails related to that topic can be sorted so that those trails related to topics that the user is reviewing can be moved to the top of the ordered reading list.
With reference to FIGS. 1, 2 and 3, the subject document trail is constructed from a repository on behalf of a user using text analytics, machine learning and inference. A processing system clusters the documents from the repository with a clustering processor and organizes relevant documents in the clustered groups into the document trails with an organizing processor. The user accesses the documents through an interactive display device.
In general, the creation and consumption of a document trail includes the following steps: first, document clustering; second, choosing relevant documents; third, choosing the best documents; fourth, ordering the documents; and fifth, user interaction through an interface that allows the user to navigate through a trail.
The first step, document clustering, involves grouping the repository corpus into natural groupings based upon information contained in the individual documents. The text analytics and natural language processing steps involved in the grouping are known, and exemplars thereof are included in the clustering patents referenced above. A topic model is created using a training set (e.g., a randomly selected sample of significant size) from the full document corpus; each topic in the model is a collection of keywords that frequently appear together throughout the corpus. The number of topics is variable, but generally it is selected so as to ultimately end up with a reasonable number of document trails. Anything between ten and fifty could be a reasonable number of trails for a user, so the number of topics will usually be chosen to meet that objective. Once the topic model is created, the documents are clustered by topic by placing them into "buckets" for each topic and then sorting them based on the probability that the topic describes each document.
The analytics comprise generating a topic inference for each document in the corpus, one at a time. The inference comprises calculating a probability distribution across the topic model that a particular topic describes that document, based upon the occurrence of keywords in the document. Simply stated, if a lot of keywords corresponding to a particular topic appear in the document, the document will get a higher topical inference score; and if keywords are lacking, or do not appear in the document, then the document will get a lower score. Latent Dirichlet Allocation is a more specific implementation for such topic modeling/inference. See, e.g., the website at http://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf. After the documents of the corpus have all been analyzed, it can be determined which topics are commonly related to each document in the system. In other words, if documents have high scores for a plurality of the same topics, those documents are considered to be strongly related, because they are generally discussing the same topical subjects. Additionally, it is desired that a presented document trail
comprise a reasonable number of documents, i.e., one that is comfortably accessible and consumable by a user/reader. Typically, a cluster group can be preselected to be in the range of ten to fifty documents based upon the topical subject matter at hand. For larger repositories including a vast number of documents, a clustering may involve several clustering iterations to continually distill the groupings into the desired reasonable number.
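As a rough, illustrative sketch of the topical-inference and bucketing steps described above (not part of the patent), the following Python snippet uses scikit-learn's LatentDirichletAllocation; the toy corpus, the number of topics, and the five-percent threshold are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus standing in for the document repository (placeholder text).
documents = [
    "trains and railway gates and signalling",
    "railway signalling and gate control software",
    "cooking pasta with tomato sauce",
]

# Keyword counts -> LDA topic model -> per-document topic distributions.
counts = CountVectorizer().fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(counts)                           # one row per document
doc_topic = doc_topic / doc_topic.sum(axis=1, keepdims=True)    # make each row a distribution

# Place each document into the "bucket" of its most probable topic,
# dropping documents whose score for that topic is below 5 percent.
THRESHOLD = 0.05
buckets = {}
for i, dist in enumerate(doc_topic):
    topic, score = int(dist.argmax()), float(dist.max())
    if score >= THRESHOLD:
        buckets.setdefault(topic, []).append((score, i))

# Within each bucket, sort by the inference score (most strongly related first).
for topic, docs in buckets.items():
    docs.sort(reverse=True)
    print(topic, docs)
```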
The choice of the relevant documents in a cluster group to be presented in the document trail involves pruning documents that contain no or minimal useful information. Within each cluster there is likely to be a large number of documents that contain useless information (i.e., redundant or unrelated to the cluster group). In order to form the trail, such documents must be pruned 26 which can be implemented again with reference to the corresponding topical inferential scores. For example, if there is less than a five percent chance that a given topic describes a document, that document is dropped from the cluster. Redundancy of paragraphs between different documents within the cluster group can similarly be identified through applying the same analytic and natural language processing techniques for keyword identification to individual paragraphs as for the document itself.
A document trail is next built by choosing the best documents from the relevant documents. Once only relevant documents that contain useful information remain in the group, a specific target number/percentage of best documents may be chosen in order to keep the trail length reasonable and small. Again, analytics and inference may be combined with user preferences and feedback, either dynamically or statically, to prune the trail. Different kinds of thresholds may be used for identifying the most useful documents; for example, by choosing the top N documents based on topical inference/probability scores, or based on detected closeness to a topical centroid concept, or by dropping documents that are beyond a certain threshold from the centroid. Because documents are being related across a potentially broad spectrum of topics, it is difficult to choose any one topic to represent a cluster of documents. Two documents are related because they share similar probability scores across a plurality of topics; in a topic model that contains hundreds of topics, two documents could potentially have a large number of topics in common (e.g., 10 or more, easily). When more documents (10, 25, 50+) are added to the cluster, the relationships between documents and specific topics become even more complex. Overcoming this problem is based on calculating the "centroid," which is a point in Euclidean space that represents the "center" of the cluster, or, in this case, the probability distribution across the topic model for an "ideal document" in that cluster; this is easily done by averaging all of the probability distributions for the documents in the cluster. It is then possible to calculate how closely affiliated a document is with the cluster by calculating its distance from the centroid (e.g., how far its probability distribution is from the ideal, which can be done using a common technique called "Jensen-Shannon divergence", see http://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence). Weaker documents on the fringes of the cluster are dropped to get down to a specific, desirable number.
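A small illustrative sketch of the centroid and Jensen-Shannon pruning just described (not the patent's implementation); the toy distributions and the cut-off value are made up.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete probability distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Doc-topic distributions for the documents of one cluster (rows sum to 1).
cluster = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.10, 0.10, 0.80],   # an outlier far from the rest of the cluster
])

# Centroid = average distribution, the "ideal document" of the cluster.
centroid = cluster.mean(axis=0)

# Keep only documents close enough to the centroid (cut-off chosen arbitrarily).
distances = [js_divergence(doc, centroid) for doc in cluster]
CUTOFF = 0.1
kept = [i for i, d in enumerate(distances) if d <= CUTOFF]
print(distances, kept)
```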
Ordering the documents in a document trail is accomplished by organizing the documents into a logical reading order based on specific criteria, for example: most novel content first, oldest first, newest first, readability (for example this can be determined using natural language processing to count syllables per word, and words per sentence); or the documents can be presented in a random order. A preferred order would be based on documents’ probability scores.
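As a toy illustration of the readability criterion mentioned above (again not the patent's implementation), a crude score can be derived from average words per sentence plus average syllables per word, with a deliberately naive vowel-group syllable counter:

```python
import re

def naive_syllables(word):
    """Very rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_score(text):
    """Higher means harder to read: avg words/sentence plus avg syllables/word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(naive_syllables(w) for w in words) / len(words)
    return words_per_sentence + syllables_per_word

# Order a trail from easiest to hardest to read (placeholder documents).
trail = ["Trains stop. Gates close.",
         "The interdependence of signalling subsystems complicates certification."]
trail.sort(key=readability_score)
```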
With reference to FIGS. 2 and 7, the disclosed embodiments provide a user interface 20 that allows the user to navigate through a document trail (forward, backward, skipping N documents) and to move between trails. The user's state relative to the trails (e.g., position in a particular trail, which documents were read) is kept persistently, allowing the user to pick up at the last position in the most recent trail automatically. Additionally, users may provide feedback as they navigate the trail (e.g., more documents like this, documents that are easier to read than this, etc.). This feedback can be used in real time to alter the documents that appear later in the trail, effectively creating a branch (FIG. 7). For example, if a user indicates that they would like to read “more documents like this one,” entity extraction can be used to find more documents that cover similar people, places, and events later in the trail; even documents that originally had been dropped from the trail. Also, the system may consider documents that a user has most recently stored/saved, and prioritize trails that include those documents.
According to further aspects of the subject embodiments, at any time while reading the trail, a user can mouse over the "?" 54 (FIG. 4) in the displayed screen. This opens a small survey that allows the user to provide real-time feedback by rating different aspects of the trail. For example:
- Please rate the reader level of this trail from 1 (far too easy) to 5 (way too hard) with 3 being “about right.” 1 2 3 4 5
- Please rate the relevance of this trail to your desired topic from 1 (not relevant at all) to 5 (totally relevant): 1 2 3 4 5
You have 7 documents with about 15,000 words that will take approximately 3 hours to read. Please use the sliding scale to suggest adjustments from shorter to longer.
Check this box to save these settings and remember them for next time.
Depending on the feedback that the user provides, future documents in the trail may be adjusted to reflect the user’s preferences.
An alternative aspect of the subject embodiments is the selective highlighting 30 of useful portions of documents in the reading list. With reference to FIGS. 4-6, aspects of the subject embodiments are disclosed wherein a series of documents is arranged into an ordered list and each document in the series contains visual cues at the paragraph level identifying the novelty of the information presented in that paragraph. Such visual cues may be limited to highlighting the text in the paragraph, or could convey more advanced information (e.g., tool tips identifying what is special about the information, and why). Such a system allows users to quickly identify novel information in each subsequent document in the series and, additionally, to identify redundant information that can be safely skipped. Because no information is actually discarded from the original documents, contextual information is not lost.
With particular reference to FIG. 4, it can be seen that original document 40 is displayed on the interactive reader 20 to assist users/readers in navigating a document trail comprising a sequence of related documents about a specific topic. The original document 40 in the document trail sequence 42 remains completely intact, but fragments of information (e.g., paragraphs) 44, 46, 48 are selectively highlighted using clear visual cues to allow users to immediately identify at least information in the following categories: New: information that appears later in the document sequence, but is seen for the first time in the current document; Novel: unique information that only appears in the current document; Redundant: duplicate information that has appeared previously in a document sequence; and Current position in the trail: where the document that the reader is currently viewing exists in the overall trail of the documents. This indication of position is in contrast to other mechanisms that provide essentially unbounded search results (e.g., "you are viewing document 94 of 437965"). In FIG. 4, specifically highlighted section 48 represents new information (e.g., a green background); highlighted portions 46 represent unique information (e.g., a yellow background); and highlighted portions 44 represent redundant (previously seen) information (e.g., a red background). The reader's position in the trail is shown in the pane 42 by highlighting the presented document 52 with particular highlighting (e.g., yellow). Each document in the trail is represented as a single node in the pane. Controls 54 are exemplary and may be used by the reader to move forward and backward in the document trail, or to provide feedback to alter the trail. For example, the question mark can be accessed to allow the user to deviate from the present trail to certain other branches as noted above.
The subject implementations are designed to increase the efficiency of reading a collection of related documents. The embodiments do not suggest that decisions be made on behalf of the user about which information fragments should be kept, and which fragments should be discarded. Instead, all information is preserved, completely in context, but readers are given the tools that they need to quickly decide for themselves whether to read, skim or skip individual fragments entirely.
With reference to FIG. 5, an aspect of the embodiments of the disclosed elements is shown where the user is able to interact with individual data fragments to learn more about why a specific data fragment was highlighted using a particular visual cue. More particularly, clicking on paragraph 48 which had been highlighted with a cue signifying new information with a mouse (or tapping a touch screen application) will produce a tool tip that provides additional descriptive details to the user, helping them to decide whether or not to read/skim/skip the fragment. In this example, the feedback explaining why the selected paragraph was highlighted with a cue representing new information is that the paragraph contains “new information about visualizing data. Similar information appears later in the trail.”
The subject embodiments comprise methods and systems that have as a primary goal a simple, intuitive interface that allows a reader/user to respond immediately and instinctively. As such, the document trail application is configurable by the user. Examples of potential configuration options include: enabling or disabling specific kinds of highlighting (for example, disabling the highlighting for "novel" text so that it appears unaltered, e.g., "black and white"); customizing highlight colors (for example, allowing the reader to use a color picker to define the custom highlighting colors); configuring "new" and "novel" materials to use the same highlight colors; and blurring or blocking out redundant information entirely.
The examples provided throughout this specification are provided to clearly communicate the concept of a document trail with visual cues, but they are not meant to be comprehensive. Other related and similar mechanisms for providing visual feedback about the relative novelty of data fragments within a cluster of documents are also considered to be within the scope of the subject embodiments.
FIG. 6 represents a further example demonstrating an alternative methodology for achieving the same functionality as described above. In FIG. 6, redundant information is blurred out and the new/novel information is left black-on-white (it may also be bolded). While the blurred text is still legible, the user may select a blurred passage to see a crisp, more readable view if they wish to.
It will be appreciated that variants of the above-disclosed and other features and functions, or alternatives thereof, may be combined into many other different systems or applications. Various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art, which are also intended to be encompassed by the following claims.
The disclosed elements can encompass embodiments in hardware, software, or a combination thereof.
What is claimed is:
1. A method for creating an ordered reading list for a set of documents:
(a) analyzing a corpus of the document set in accordance with preselected text analytics and natural language processing steps for creating a topic model, each topic being based on a collection of words frequently appearing together throughout the corpus;
(b) after identifying the topics, clustering the document set into a plurality of cluster groups by topic, each cluster including related documents about a specific topic;
(c) calculating a probability distribution across the topic model that a particular topic describes a given document based upon the occurrence of the keywords in the document;
(d) determining relevant documents in a cluster including removing from the cluster the documents associated with a probability distribution falling below a predetermined threshold, and
(e) determining relevant information in a document by repeating steps (a)-(d) on individual paragraphs of the document to identify redundant information in the document; and
(f) for each document on the list, associating a visual cue with non-redundant information in the each document to indicate which paragraphs of the each document contain the relevant information.
2. The method of claim 1 further comprising providing at least one visual cue including:
highlighting selected portions of the documents corresponding to a predetermined relationship relative to either novelty or usefulness to the preselected subject.
3. The method of claim 2 wherein the highlighting comprises coloring a background text of the document.
4. The method of claim 1 wherein the prioritizing includes excluding documents in the cluster group from the document trail that are identified as lacking information useful to the preselected subject.
5. The method of claim 1 wherein the calculating comprises determining the relation to importance by applying a topical inference score relative to the topic to each document in the cluster group.
6. The method of claim 5 wherein the topical inference score comprises a relative probability to the topic based on identified keywords in the document.
7. The method of claim 5 wherein the applying the topical inference score comprises determining a relative probability of the document being related to the topic.
8. The method of claim 5 wherein the organizing includes sequencing the documents in the document trail in accordance with a relative probability of relationship to the topic.
9. The method of claim 8 wherein the organizing further includes setting a maximum number of documents to be included in the document trail.
10. The method of claim 1 further including a user deviating from the document trail in accordance with user feedback to a branch into a distinct document path.
11. The method of claim 10 wherein the deviating includes feedback messaging comprising requests for more readable documents than a presented document, more documents similar to the presented document, or more documents newer than the presented document.
12. The method of claim 1 including identifying a reader of the document trail, recognizing past access by the reader of alternate topic document trails, and organizing accessible document trails in accordance with an anticipated need by the reader based upon the past access.
13. The method of claim 1 wherein the analyzing includes setting a maximum number of topics to be included in the cluster groups.
14. A system for processing a repository of documents and including a processor configured to:
(a) analyze a corpus of the document set in accordance with preselected text analytics and natural language processing steps for creating a topic model, each topic being based on a collection of words frequently appearing together throughout the corpus;
(b) after identifying the topics, cluster the document set into a plurality of cluster groups by topic, each cluster including related documents about a specific topic;
(c) calculate a probability distribution across the topic model that a particular topic describes a given document based upon the occurrence of the keywords in the document;
(d) determine relevant documents in a cluster including: removing from the cluster the documents associated with a probability distribution falling below a predetermined threshold, and applying the preselected text analytics and natural language processing steps of step (a) to individual paragraphs and removing documents containing redundant paragraphs; and,
(e) generate the ordered reading list for the related documents of the cluster based on the relevance of the each related document.
15. The system of claim 14 further including an interactive display for providing user feedback for adjusting the organizing and prompting explanations regarding the documents and the reading list.
16. The system of claim 14, wherein the processor is further configured to prune documents from the reading list having minimal relative probability to the topic.
17. The system of claim 14 wherein the processor is further configured to record past access by a reader of alternate reading lists and organizes a presented reading list in accordance with an anticipated need by the reader based upon the past access.
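As an illustration only of the pruning described in step (d) of claims 1 and 14 above (removing from a cluster the documents whose topic probability falls below a predetermined threshold), the following C sketch shows one way such a step could look in code; the struct fields, function name, data and threshold value are hypothetical and are not taken from the patent.

```c
#include <stdio.h>

/* Hypothetical document record: an identifier and the probability that the
   cluster's topic describes it (the distribution computed in step (c)). */
struct doc {
    int id;
    double topic_probability;
};

/* Sketch of step (d): keep only documents whose probability meets the
   threshold; returns the new cluster size. */
static int prune_cluster(struct doc *cluster, int n, double threshold)
{
    int kept = 0;
    for (int i = 0; i < n; i++)
        if (cluster[i].topic_probability >= threshold)
            cluster[kept++] = cluster[i];
    return kept;
}

int main(void)
{
    struct doc cluster[] = { {1, 0.82}, {2, 0.10}, {3, 0.55} };
    int n = prune_cluster(cluster, 3, 0.30);
    for (int i = 0; i < n; i++)
        printf("kept doc %d (p = %.2f)\n", cluster[i].id,
               cluster[i].topic_probability);
    return 0;
}
```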
* * * * *
Zhongwen Guo, Yu Yu, Zhaosui Sun and Yunhong Lu
Department of Computer Science and Engineering, Ocean University of China, Qingdao 266100, China
Email: {guozhw2007, rainertop}@gmail.com, {i2mtc_zhaosui, i2mtc_yunhong}@163.com
Abstract—Measurement-based product testing management systems (MPTMS) are widely used during the product life cycle to validate designs and test products. Unfortunately, measurement data from measurement systems are isolated from the testing task information in MPTMSs. Furthermore, the development of MPTMSs often starts from scratch, which leads to low development efficiency and low reusability. To solve these problems, an application development framework is designed. The framework supports seamless integration between MPTMSs and measurement systems, and components such as resource management, organization management, workflow management, testing task management and process mining are provided to improve the framework's reusability. The development process of an MPTMS consists of workflow model design, component reuse and user interface customization. Moreover, an application example is given to validate the framework, and the result demonstrates its applicability.
Index Terms—Measurement-based product testing management systems, measurement systems, application development framework, workflow management, process mining
I. INTRODUCTION
MPTMSs are becoming pervasive in factories because of quality management requirements. The tests in MPTMSs are performed on measurement systems with product samples. Measurement systems, which are composed of sensors, instruments and computers, perform measurement operations by managing measured data and controlling instruments and sensors. The measured data from the measurement system are the basis for the test results in MPTMSs. MPTMSs manage product quality over the life cycle by enforcing tests against test standards, which specify test items together with test result judgment rules.
The main challenges in product testing are as follows:
- Data sharing: Measurement data come from heterogeneous measurement systems, so it is very hard to incorporate them into MPTMSs; yet the absence of the corresponding measurement data makes test results less credible. Meanwhile, measurement systems are isolated from testing task information, which keeps them separate from MPTMSs.
- Workflow management: In a manufacturing factory, many kinds of product testing management systems coexist. However, those systems lack extensibility, which causes redundant construction and low development efficiency.
- Testing task management: Devices, test engineers and product samples are dynamically changing resources, which increases the complexity of managing test orders and test reports.
- Process mining: MPTMSs generate event logs for every running workflow instance to trace the processes. Those logs should be mined to obtain valuable information for process improvement and problem identification.
Most previous work focuses either on the development of measurement systems [1]–[10] or on the human labor cost and resource management involved [11]–[14]. Aimed at large system-level test and diagnostic related systems, the IEEE SCC20 community develops a series of standards, including hardware interfaces (HI), diagnostic and maintenance control (DMC), test and ATS description (TAD), and test information integration (TII). However, they cannot be used for the product-level testing described in this paper, which puts more emphasis on measurement data and testing task information. The testing management system (TingMS) from YOKOGAWA provides test management, data acquisition, curve browsing, report export and instrument management functionality; however, it does not manage tests from a workflow-centric perspective. There are few studies on testing management systems, especially in the MPTMS area, and it is difficult for existing solutions to address the challenges of measurement-based product testing management. It is therefore necessary to design and implement an application development framework for MPTMSs. In this paper, the requirements of MPTMSs are analyzed and an application development framework is proposed first. Then, based on this framework, the design strategy and implementation details are given. To construct a new MPTMS easily, we implement it by deploying a new workflow model [15], database tables and user interfaces, along with the reusable components of the framework. Our main contributions are as follows:
The process requirements for measurement-based product testing management are studied and further decomposed into roles, actions and data entities.
An application development framework for measurement-based product testing management is proposed, which supports agile system development by leveraging a scalable architecture and reusable components.
Process mining is used to uncover information hidden in system event logs, which supports continuous process improvement.
The remainder of the paper is structured as follows. Section II outlines the background of MPTMSs. Section III describes the application development framework for MPTMSs. The design of the framework is presented in Section IV. An application example is given in Section V. Finally, conclusions are given in Section VI.
II. BACKGROUND
Fig. 1 demonstrates the generic process of measurement-based product testing using BPMN notation [16]. The main participants in measurement-based product testing are the R&D engineer, the R&D project manager, the lab supervisor, the test engineer and the test supervisor. First, the R&D engineer fills in and submits a test order; the test order contains the test items to be tested against standards. The order is then reviewed by the R&D project manager. If the order is approved, it is sent to the lab supervisor, who checks it. If the check passes, the lab supervisor receives the test order and dispatches it to a certain test engineer. The test is then performed and measurement data are collected by the measurement system. The test report is filled in by the test engineer and submitted to the test supervisor. Upon receiving the test report, the test supervisor checks it based on experience and by examining the measurement data. The test report is published if it passes the check, and the R&D engineer can then view the test results in the report.
This work is based on our previous work on measurement systems. In [17], a software generation platform for DMS [18] development in the household appliance test field has been developed. A measurement system architecture which offers a universal client with rich user interaction, facilitates system integration and simplifies system development has been described in [19]. Continuing those works, we further design an application development framework in this paper, which supports agile software development of MPTMSs and integration with measurement systems.
III. MEASUREMENT-BASED PRODUCT TESTING MANAGEMENT APPLICATION DEVELOPMENT FRAMEWORK
In the measurement-based product testing management area, there is no existing application development framework that guides the development of MPTMSs. MPTMSs from various manufacturers are developed as one-off solutions, which leads to low development efficiency. In this section, an MPTMS application development framework is provided (see Fig. 2).
Most MPTMSs follow the same structure: users log into the system via a user portal. Based on their responsibilities, users are classified into different roles, and multiple roles are involved in accomplishing a test order. The information passed between those roles is controlled by the workflow management component. Besides that, there are components which are essential for product testing management, such as resource management, organization management and testing task management. The scalability of the architecture is achieved by taking cloud computing [20] into consideration from the very beginning of system design: by leveraging cloud computing utilities, the system can easily be extended horizontally using multiple virtual machine instances, and load balancing can be provided by database clustering and application server clustering across those instances. Besides that, the agility of the framework lies in its ability to handle rapid fluctuations in user demand. Advanced web application development technology, such as the ExtJS framework [21], is used to promote the reusability of the components, and user interfaces are built and modified directly through a WYSIWYG [22] web builder. Another key point of MPTMSs is integration with measurement systems; a detailed description of the integration is given in [23].
A. MPTMS
- **Resource management**: The resource management component maintains resources shared by product testing workflow instances, such as product sample information. Product sample information describes the product sample's characteristics; it must be filled in first and then included in each test order. To improve efficiency and reduce mistakes when filling in product sample information, most of the information, such as product sample type, level, technical parameters and main parts, is maintained in advance by the administrator, so most user operations consist of searching and selecting items from existing lists.
- **Organization management**: An organization is composed of users, departments, roles and authorities. Concretely, users belong to zero or more roles and departments. Role-based administration of authorities is used to protect important and sensitive data. Organization management provides organization modeling and organization information query functions. Organization modeling is usually carried out by the administrator; the organization information query function is invoked by the workflow management component to acquire organization information, such as which department a user belongs to and who supervises that department.
- **Workflow management**: The workflow management component orchestrates the functions of the other components. Tests are abstracted as user-driven workflow instances that assemble all the information a test needs; however, a user can only drive a workflow instance in a predefined way following the process model. A process model contains activities and the connections between them: activities carry constraints on user, role and organization, and connections carry conditions for route selection. Workflow provides maximum flexibility for accommodating ever-changing product testing requirements while requiring minimal modifications to existing systems.
- **Testing task management**: The testing task management component manages test orders and test reports; users' routine operations depend heavily on it. It delivers test orders and test reports to users, with the permitted operations determined by the workflow instances related to those orders and reports. It also provides test scheduling, which improves the efficiency of product testing management under dynamically changing resource constraints.
- **Measurement information acquisition**: The measurement information acquisition component obtains measurement data from measurement systems by invoking the measurement information web service interfaces they provide.
- **Process mining**: The process mining pipeline begins with system event logs, which are transformed into the MXML log format supported by ProM [24]. ProM is a process mining tool which extracts and presents the information typically needed in measurement-based product testing management. The log transformation is performed by the process mining component, which also provides query functions for process-related data; users can turn to these query functions to interpret process mining results further.
- **Measurement systems**: Measurement systems are used by test engineers to perform tests according to the standards information in the test order. In a common scenario, sensors are attached to the product sample, which is then run according to the requirements of the standards. Data collected by the sensors are stored and further processed by the measurement system. Usually, the data are presented in an intuitive way, such as curves with colors and marks in a coordinate system, and curves can be included in the test report to certify the test result.
B. Interfaces between MPTMSs and measurement systems
- **Web service interface of measurement information**: The web service interface for measurement information complies with the IEEE 1851 standard, which defines a system framework, web service interfaces and data exchange formats that facilitate interoperation and data sharing between measurement systems. The IEEE 1851 standard was formally approved by IEEE on June 8, 2012; its main content comes from our R&D team and scientific research. Detailed information about the IEEE 1851 standard can be found in [25].
- **Web service interface of test order information:** Fig. 3 depicts the schema for test order information. The schema is distilled from years of experience with user requirements and measurement-based product testing systems. Test order information is composed of three parts: test order properties, test sample information and test standard information (a rough struct sketch follows this list).
- **Test order properties:** Test order properties describe the main attributes of a test order. Detailed descriptions of those attributes are listed in Table I.
- **Test sample information:** Test sample information describes the product under test. It is composed of test sample properties, main parts and technical parameters; detailed descriptions are given in Table II.
- **Test standard information:** A test standard is uniquely identified by its name and can be a national standard or a company-internal standard. Each standard contains many items specifying test names and the specifications the tests should follow.
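As a rough illustration only, the test order information described above and in Tables I and II can be pictured as the following C-style record. The field names follow the tables; the types and array sizes are arbitrary assumptions and are not part of the published schema.

```c
#define MAX_PARTS  16
#define MAX_PARAMS 16
#define MAX_ITEMS  32

struct test_sample {                            /* Table II */
    char product_mode[32];                      /* ProductMode */
    char level[32];                             /* Level in the product lifecycle */
    char main_parts[MAX_PARTS][64];             /* MainParts */
    char technical_parameters[MAX_PARAMS][64];  /* e.g. voltage rating */
};

struct test_standard {
    char name[64];                              /* unique standard name */
    char items[MAX_ITEMS][128];                 /* test items and their specifications */
};

struct test_order {                             /* Table I */
    char order_id[32];                          /* OrderID */
    char rd_engineer[64];                       /* R&DEngineer (originator) */
    char test_purpose[64];                      /* TestPurpose */
    char test_property[64];                     /* TestProperty */
    struct test_sample   sample;                /* test sample information */
    struct test_standard standard;              /* test standard information */
};
```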
IV. DESIGN OF MEASUREMENT-BASED PRODUCT TESTING MANAGEMENT APPLICATION DEVELOPMENT FRAMEWORK
A. **Design strategy**
Based on the analysis of user requirements, workflow management is adopted in the system design. The system is partitioned into components which serve as information providers for workflow instances. A test begins with a test order and ends with a test report, and during the test, users in different roles cooperate in a predefined way which prevents mistakes. Basically, two important factors should be considered during system design.
- **Statuses of test order and test report:** The status of a workflow instance changes when the status of its activities changes. From the users' point of view, test orders and test reports also have different statuses, such as submitted, check pending and audited. Mapping the statuses of test orders and test reports onto the statuses of workflow instances considerably reduces the complexity of system design.
- **Operations of different roles:** Users' operations are associated with their roles, and roles can have different operations according to the activities they participate in, so the mapping between activities in workflow instances and role operations should also be maintained. Based on this mapping, the system can be constructed with user interfaces classified by role; the system automatically loads a different configuration to initialize each role's user interface, which guarantees the reusability and scalability of the component.


### TABLE I. TEST ORDER PROPERTIES
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>OrderID</td>
<td>OrderID is the unique identification of test order</td>
</tr>
<tr>
<td>R&DEngineer</td>
<td>R&DEngineer is the originator of test order</td>
</tr>
<tr>
<td>TestPurpose</td>
<td>TestPurpose indicates purpose of test, such as product validation test, proof test, type test, product development test</td>
</tr>
<tr>
<td>TestProperty</td>
<td>TestProperty includes entrustment test and sampling test</td>
</tr>
</tbody>
</table>
### TABLE II. TEST SAMPLE INFORMATION
<table>
<thead>
<tr>
<th>Attribute</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>MainParts</td>
<td>MainParts are the main constituent parts of the test sample</td>
</tr>
<tr>
<td>TechnicalParameters</td>
<td>TechnicalParameters describe sample’s technical characteristics, such as voltage rating, power rating and current rating</td>
</tr>
<tr>
<td>Level</td>
<td>Level specifies current level in product’s whole lifecycle. It can be process prototype, manual prototype, small batch prototype and batch products</td>
</tr>
<tr>
<td>ProductMode</td>
<td>ProductMode is the unique identification to mode of the product. A product of certain mode may have test samples of different levels</td>
</tr>
</tbody>
</table>
### B. Role definitions
In product testing management, workflow instances may involve multiple users who belong to different roles. In this subsection, detailed role explanations are given (see Fig. 4).
- System administrator: The system administrator is responsible for maintaining basic data, which is fundamental to product testing management. Information in test orders and test samples, such as test purpose, test property, level, test standard information, category, productMode, productCode, main parts and technical parameters, is maintained as basic data by the system administrator.
- R&D engineer: The R&D engineer uses the MPTMS to submit test orders and obtain test reports.
- R&D project manager: The R&D project manager is the leader of the R&D engineers; test orders submitted by R&D engineers are checked by the R&D project manager.
- Lab supervisor: Test orders checked by the R&D project manager are checked again by the lab supervisor. If the check fails, the test order is returned, with the problems specified, to the R&D engineer, who should solve the problems and submit the test order again. The lab supervisor then dispatches the test orders to test engineers.
- Test engineer: Tests are carried out by test engineers according to the test standard items specified in the test orders. Test engineers fill in the test report according to the measurement data from the measurement system and then submit it to the test supervisor for checking.
- Test supervisor: Test reports are checked by the test supervisor. If a test report passes the check, it is published and the R&D engineer can obtain it.
### C. Organization information
Organizational management is vital to workflow applications [26]. The organization information schema is shown in Fig. 5; it includes department information, role information, user information, authority information and user-authority information. The schema supports departments organized in nested form, so each department can have many sub-departments. Users can belong to one or more departments and roles. All authority items in the system are contained in the authority information, and user authorities associate users with the corresponding authority items.
### D. Workflow management
To improve the reusability of the workflow management component, a workflow engine is designed. We build a new workflow engine rather than leveraging existing workflow engines such as Enhydra Shark [27] and jBPM [28], because those engines are hard to customize. The workflow engine is composed of a process definition and a runtime environment, both described below.
- **Process definition** is translated into an XML-based definition file which complies with a corresponding XML schema (see Fig. 6). Each process definition consists of one process element with a name attribute. The process element contains eight types of sub-elements, which can be classified into flow objects (Start, Activity, Decision, Subprocess and End), connections (Transition) and messages (Message and MessageHandler).
- Transition: A Transition has name, from and to attributes; it refers to the path from one flow object to another.
- Start: The Start element contains at least one transition element and marks the entrance of the process.
- Activity: An Activity element contains at least one transition element and refers to a work item in the process.
- Decision: A Decision element contains transition elements with condition attributes; it refers to a selection made by users, which decides the path through the process.
- Message: Messages are used to reduce the complexity of flow control; a message is sent in response to a special condition in a decision.
- MessageHandler: A message handler responds to messages by amending the statuses of activities and routing the flow in a workflow instance.
- Subprocess: A Subprocess is a recursive use of the process element; multiple subprocess elements can be included in a process.
- End: The End element has a name attribute and marks the end of the process.
- **Runtime environment** provides an execution API (Application Programming Interface) for driving workflow execution; its architecture is depicted in Fig. 7. The process XML definition is first checked by the schema validation sub-component and then resolved into elements, which are persisted into the corresponding database tables by the model persistence sub-component. A new workflow instance is created by calling the corresponding execution API exposed by the execution component, and the execution of a workflow instance is driven by calling the execution API (a hypothetical sketch of such an interface follows).
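The execution API is described only at this level in the paper; purely as an illustration, the following C-style prototypes sketch the kind of calls such an interface might expose. Every name below is hypothetical and is not the framework's actual (Java-based) API.

```c
/* Hypothetical sketch of a workflow execution interface; all names are invented. */
typedef struct workflow_instance workflow_instance;

/* Create a new instance from a persisted process definition. */
workflow_instance *wf_create_instance(const char *process_name);

/* Complete the named activity on behalf of a user; the engine then evaluates
   transitions, decisions and message handlers to advance the instance. */
int wf_complete_activity(workflow_instance *wf,
                         const char *activity,
                         const char *user);

/* Query the current activity so user interfaces can render the right form. */
const char *wf_current_activity(const workflow_instance *wf);
```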
### E. Testing task management

**Table III. Test data for job shop scheduling**
<table>
<thead>
<tr>
<th>Jobs</th>
<th>Machine sequence</th>
<th>Processing times</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1, 2, 3, 4</td>
<td>$p_{11}=12$, $p_{21}=110$ \ $p_{31}=70$, $p_{41}=75$ $p_{12}=110$, $p_{12}=12$ \ $p_{32}=70$, $p_{32}=120$ \ $p_{42}=100$</td>
</tr>
<tr>
<td>2</td>
<td>2, 1, 4, 3, 5</td>
<td>$p_{22}=110$, $p_{21}=12$ \ $p_{32}=70$, $p_{32}=120$ \ $p_{42}=100$ \ $p_{43}=110$</td>
</tr>
<tr>
<td>3</td>
<td>1, 2, 4</td>
<td>$p_{13}=24$, $p_{23}=180$ \ $p_{33}=100$</td>
</tr>
</tbody>
</table>

Test order management includes operations such as check order, save order and view order; it covers all operations on test orders and dispatches them to users according to their roles. In addition, test report management includes edit report, submit report, check report and view report; it likewise covers all operations performed on test reports and dispatches them to users according to their roles.
Besides that, testing task management also supports test scheduling, which we emphasize here. The aim of test scheduling is to schedule all the tests so as to minimize the earliest time by which all the tests are completed. We relate the test scheduling problem to the deterministic job shop scheduling problem [29]: test orders can be viewed as jobs. Each job consists of \(n\) operations, corresponding to the \(n\) test items in the test order; the \(m\) test units correspond to the \(m\) machines, \(m > n\), and each test unit performs one kind of test item. The goal is to find a schedule which minimizes the finish time of the last job, denoted \(C_{\max}\).
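In standard job shop notation, with \(C_j\) denoting the completion time of job (test order) \(j\), this objective can be restated as

$$\text{minimize } C_{\max}, \qquad C_{\max} = \max_{j} C_j.$$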
The shifting bottleneck heuristic is used to solve the deterministic job shop scheduling problem. MPTMSs can obtain exact test processing times from the measurement systems, so the scheduling unit is changed from days to hours. More precise processing times can be obtained by analyzing historical data on the time dimension of the measurement data, the designated test engineer and the test unit. The processing times in Table III are the average processing times of the test units for different product types; Job 1 and Job 2 are the same product type with different test items and predefined sequences. Fig. 8 illustrates a feasible solution with \(C_{\max} = 500\) for the test data presented in Table III; the x axis in Fig. 8 represents the machines listed in the machine sequence column of Table III. The dispatch list interface of the solution is given in Fig. 9, which shows the start time and the end time of each operation.
F. Process Mining
Server logs record the events triggered in workflow instances; each event carries a workflow identification, an event type, a timestamp and originator information. Based on the server logs, workflow patterns can be classified by the events they include. From the originator and timestamp information, the workload of each workflow participant can be obtained, and the bottlenecks of the workflows can be calculated from the timestamp information. By comparing the mined workflow patterns, further information can be found, such as why a longer pattern consumes less time than a shorter one, which may indicate deficiencies.
V. APPLICATION EXAMPLE
To validate the application development framework, it is applied to develop a washing machine testing management system. Fig. 10 shows the washing machine test laboratory; the MPTMS and the measurement system are shown in Fig. 11 and Fig. 12. The workflow model is designed during user requirement analysis; Fig. 13 and Fig. 14 demonstrate the workflow model, and an excerpt of the process definition XML is given in Fig. 15. Compared to the generic measurement-based product testing process, this process is more complicated and is customized by user requirements. Messages are used to partition the complex process and to decouple the branch processes from the main process. The design of the system is based on the workflow model: the workflow model is decomposed into flow objects, and the user interfaces and database tables for each flow object are then designed. Data forms and user interfaces are built using JSP, and the Java Message Service [30] is used to support message handling. The workflow is driven by calling the execution API exposed by the workflow runtime environment. Measurement systems that comply with the IEEE 1851 standard and the test order information web service interface can seamlessly integrate with MPTMSs.
Forty-one cases are selected as examples of measurement-based washing machine product testing process data. All of them are completed and integrated cases translated from the server logs; each case consists of events with event type, timestamp and originator information. As depicted in Fig. 16, the forty-one cases are classified into twelve patterns, which are ordered by frequency and indicated by different colors. Activities between SubmitOrder and Message:OrderModification cost a large amount of time and switch frequently before the following activities are entered. Together with the roles assigned to those activities, it can be inferred that R&D engineers are not familiar with test standards, sample main parts and sample technical parameters: the lab supervisor had to send incorrect orders back to the R&D engineers and wait for them to correct and re-submit the orders. As depicted in Fig. 17, pattern 1 has the maximum frequency of fifteen and the second lowest average cost time, which indicates that most workflows go through the normal path and avoid the time spent on correcting errors in orders and reports. From Fig. 16, pattern 3 would be expected to cost more time than pattern 2; however, in Fig. 17, pattern 2 costs more time than pattern 3. This contradiction can be explained by further exploring the cases contained in pattern 2. Using the basic log statistics function of ProM, the overall duration of each process contained in pattern 2 can be found. In Fig. 18, process6347 and process6143 have durations of 959 hours, much higher than the other processes. Further inspection of process6347 shows that its high duration is caused by the R&D engineer's latency in the ViewReport activity, which indicates that the R&D engineer was slow to view the published report. For process6143, the high duration is caused by the lab supervisor's latency in the Message:Request order modification activity, which occurs eighteen days after the ReceiveSample activity; this may indicate that the lab supervisor did not notice the order due to poor user interface design.
VI. CONCLUSION
Based on previous projects and user requirements, the design of MPTMSs is analyzed, and an application development framework for MPTMSs is presented, followed by the design of the framework itself. A new MPTMS can be built by leveraging the existing components provided by the framework. The IEEE 1851 standard interfaces and the test order information web service interface are used to remove the integration barriers between measurement systems and MPTMSs. The development of an MPTMS is complicated, so there are still limitations that can be improved: for example, the application development framework currently depends on the Apache Tomcat [31] web server, and other J2EE [32] servers are not supported; the process mining of event logs can also be studied further. Taking those limitations and growing requirements into consideration, the application development framework will be updated and extended to support the efficient development of more complex MPTMSs in the future.
REFERENCES
Zhongwen Guo was born in China in 1965. He received the B.S. degree in computer science and technology from Tongji University, Shanghai, China, in 1987 and the M.S. and Ph.D. degrees from Ocean University of China, Qingdao, China. He is currently a Professor and Doctoral Advisor with the Department of Computer Science and Engineering at Ocean University of China. His main research interests focus on sensor networks, distributed measurement systems, ocean monitoring, and so on.
Yu Yu was born in China in 1984. He received the M.S. degree in computer science and technology from Shandong University of Science and Technology in 2010. He is currently working toward the Ph.D. degree with the Department of Computer Science and Technology, Ocean University of China, Qingdao, China, where he researches information management systems.
Zhaosui Sun received the B.S. degree from Qufu Normal University, Qufu, China, in 1987 and the M.S. degree from Tianjin University, Tianjin, China, in 1992. He is currently working toward the Ph.D. degree with the Department of Computer Science and Technology, Ocean University of China, Qingdao, China, where he researches distributed management systems.
Yunhong Lu was born in China in 1972. He received the B.S. degree from Yantai University, Yantai, China, in 1994 and the M.S. degree from Jilin University, Jilin, China, in 2001. He is currently working toward the Ph.D. degree with the Department of Computer Science and Technology, Ocean University of China, Qingdao, China, where he researches wireless sensor networks.
Lecture Unit 4: Flow of Control
Examples of Statements in C
Conditional Statements
- The if-else Conditional Statement
- The switch-case Conditional Statement
- The (exp) ? x:y Conditional Expression
Iteration or Loop Statements
- The while Loop Statement
- The do-while Loop Statement
- The for Loop Statement
Jump Statements: break, continue, and goto
Example: Primality Testing
Examples of Statements
❖ Every program in C is a sequence of compiler directives, declarations, and **statements**. There are six kinds of statements in C: **expression statement, compound statement, conditional statement, iteration (loop) statement, labeled statement, and jump statement**.
❖ So far, we have seen just one of these --- the **expression statement**. An expression statement is just an expression followed by `;` for example:
```
x = a+b;
x++; a+b; ;
```
❖ Here is one more kind of statement --- the **compound statement**, also called a **block**. A compound statement is any sequence of statements and declarations enclosed in `{ . . . }`, for example:
```
int main()
{
    int x = 1, y = 2, t;   /* variables used by the block */
    t = x;                 /* swap x and y via the temporary t */
    x = y;
    y = t;
}
```
```
{ a = x++;
if (a == x)
{
printf("Wow?\n");
return 1;
}
}
```
The `if` Conditional Statement
The syntax of the `if` conditional statement is as follows:
```
if (expression) statement
```
- If the logical value of the `expression` is **True** the `statement` is executed.
- If the logical value of the `expression` is **False** the `statement` is skipped, and the program flow continues with the next statement.
- In most cases, but not always, the `statement` is a **compound statement**.
**Example:**
```c
if (b*b < 4*a*c) {
printf("Quadratic equation has no real roots!");
return 1;
}
```
The if-else Conditional Statement
The syntax of the `if-else` conditional statement is as follows:
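Here `expression`, `statement1` and `statement2` are placeholders for any expression and statements:

```
if (expression)
    statement1
else
    statement2
```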
- If the logical value of the `expression` is `True`, `statement1` is executed, then `statement2` is skipped and the program flow continues.
- If the logical value of the `expression` is `False`, `statement2` is executed, then `statement1` is skipped and the program flow continues.
**Example:**
```c
if (tomorrow_is_rainy == 'y')
printf("Take your umbrella.\n");
else
printf("Leave the umbrella at home.\n");
```
What does the following statement do?
```c
if (a==1)
if (b==2)
printf("***");
else
printf("###");
else printf("###");
```
❖ The rule is this: the `else` always belongs to the last else-less `if`.
❖ Nevertheless, the rule can be modified by explicitly creating a single compound statement enclosed in `{ . . . }` for example like this:
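For instance, wrapping the inner `if` in braces forces the `else` to pair with the outer `if` (a minimal sketch based on the example above):

```c
if (a == 1) {
    if (b == 2)
        printf("***");
}
else
    printf("###");
```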
Multi-way if-else-if Chains
The best way to explain this is by using an example:
```c
if (a < 0)
    return -1;
else
    if (a == 0)
        return 0;
    else
        if (a < 11)
            return 1;
        else
            return a;
```
Proper indentation is extremely useful in elucidating the structure of an **if-else-if** chain, like this:
```c
if (a < 0)
    return -1;
else if (a == 0)
    return 0;
else if (a < 11)
    return 1;
else
    return a;
```
Both indentation methods are well-established. The second one results in a more compact code, with shorter lines.
The syntax of the `switch-case-default` conditional statement is as follows:
```c
switch (expression) {
case value1: statements
break; // optional
case value2: statements
break; // optional
...
default: statements // optional
}
```
The value of the `expression` is compared to `value1`, `value2`, and so on sequentially until there is a match. If there is no match, `default` is executed. If there is also no `default` case, nothing is executed. If there is a match, execution starts at this case and goes on until either a `break` or the closing brace `}` is encountered.
```c
int x, y;
char op;
... // read x, y, op
switch (op)
{
case '+':
    printf("Result is: %d", x + y);
    break; // what if we remove it?
case '-':
    printf("Result is: %d", x - y);
    break;
... // code for '*' and '/' and '%'
default:
    printf("Invalid operation!");
}
```
Conditional Expression (exp) ? x : y
Expressions in C can involve the **ternary conditional operator**. The syntax of this operator ? : is as follows:
```
expression1 ? expression2 : expression3
```
- If the logical value of `expression1` is True, `expression2` is evaluated.
- If the logical value of `expression1` is False, `expression3` is evaluated.
The conditional expression ? : is **not** a statement; thus it can appear as part of another expression. For example:
```c
if (n > 1) c = 2*a;
else c = 2*b;
```
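The same assignment can be written as a single expression with the conditional operator:

```c
c = (n > 1) ? 2*a : 2*b;
```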
What is the value of `x` here?
```c
float x; x = 1/((n > 1) ? 2.0:2);
```
More Examples:
```c
min = (x < y) ? x:y;
distance = (a > b) ? (a - b):(b - a);
printf("You have %d item%s\n", n, (n == 1) ? ":" : ":s");
```
What Is a Loop?
❖ In general, a **loop** is a piece of code that is executed repeatedly several times, where “several” could range from zero to infinity. Here are some examples of loops:
```
for (i = 1; i <= n; i++)
factorial *= i;
```
```
while (1)
{
factorial *= i++;
if (i > n) break;
}
```
❖ Each time the piece of code is executed is called a **loop iteration**, or simply an iteration. These iterations continue until either:
- The **loop condition** ceases to be satisfied, or
- The program exits from the loop via a *jump statement*, such as **break** or **return**.
❖ The C language provides three different types of loops:
- The **while** statement
- The **do-while** statement
- The **for** statement
The syntax of the `while` loop statement is as follows:
```
while (expression)
statement
```
- The `expression` is evaluated. If its logical value is `True` (the `expression` is nonzero) then `statement` is executed.
- After that the `expression` is evaluated again, and the process repeats until the logical value of the `expression` becomes `False`.
- When the logical value of the `expression` is `False`, program continues to next statement.
**Example:**
```c
int n, factorial = 1, i = 1;
scanf("%d", &n);
while(i <= n) factorial *= i++;
printf("n!= %d", factorial);
```
Example: Odd and Even Averages
**Problem:** Prompt the user to enter a positive integer, and keep doing this while the user complies. When the user inputs an integer that is not positive, output:
- The average of all the even positive integers the user entered
- The average of all the odd positive integers the user entered
**Assumptions:** The user input is valid. The user enters only integers, including at least one odd integer and at least one even integer.
**Algorithm:** Read user input in a while loop, and keep the current even_sum and odd_sum. At each iteration, the integer entered by the user will be added to even_sum if it is even and to odd_sum if it is odd. Also count the total number of odd and even integers entered. Then:
```
even_average = even_sum / even_count
odd_average = odd_sum / odd_count
```
Example: Odd and Even Averages
```c
#include <stdio.h>
int main()
{
int odd_sum = 0, even_sum = 0, odd_count = 0, even_count = 0, next;
printf("Enter a positive integer: ");
scanf("%d", &next);
while (next > 0)
{
if (next % 2) {odd_sum += next; odd_count++;}
else {even_sum += next; even_count++;}
printf("Enter a positive integer: ");
scanf("%d", &next);
}
printf("\nAverage of even integers is: %6.2f\n"
" Average of odd integers is: %6.2f\n"
, (double)even_sum/even_count, (double)odd_sum/odd_count);
return 0;
}
```
The do-while Loop Statement
- The syntax of the do-while loop statement is as follows:
```c
do statement while (expression);
```
- First, the `statement` is executed, and then `expression` is evaluated.
- If its logical value is `True` (the `expression` is nonzero), the `statement` is executed again, and the process repeats until the logical value of the `expression` becomes `False`.
- When `expression` evaluates to `False`, program flow continues to the next statement.
**Example:**
```c
do
{ printf("Enter a positive integer: ");
scanf("%d", &x);
} while (x <= 0);
```
The statement of a do-while loop is executed at least once!
The syntax of the for loop statement is as follows:
```
for (expression1;
expression2;
expression3)
statement
```
- First, `expression1` is executed. This serves as the initialization of the loop.
- Next, `expression2` is evaluated. If its logical value is True (`expression2` is nonzero), the `statement` and then `expression3` are executed. They keep being executed, in this order repeatedly, as long as `expression2` evaluates to True.
- When `expression2` evaluates to False, program flow continues to next statement.
Each of the three expressions can be empty; empty `expression2` is True!
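For instance, a for loop with all three expressions empty runs until a jump statement exits it (a minimal sketch):

```c
int count = 0;
for (;;)                  /* all three expressions empty: the condition is always True */
{
    if (++count == 10)
        break;            /* exit via a jump statement */
}
/* here count == 10 */
```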
The for Loop: Examples
- Here is an example of a for loop that computes the sum of the first \( n \) squares:
```c
sum = 0;
for (i = 1; i <= n; i++) sum += i*i;
```
- Another example:
```c
factorial = 1;
for (i = 2; i <= n; i++) factorial *= i;
```
**Note:** The for and the while loops are equivalent!
The for Loop: Comma Operator
What is the comma operator?
- An expression like this `exp1, exp2` is evaluated by first evaluating `exp1` then evaluating `exp2`. Similarly for `exp1, exp2, exp3`, etc.
- The type of a comma expression is the type of the last `exp` evaluated.
- Such expressions can be used in the initialization/increment of for-loop.
Example. We have seen this before:
```c
sum = 0;
for (i = 1; i <= n; i++)
sum += i*i;
```
This can be also written as:
```c
for (sum = 0, i = 1; i <= n; i++)
sum += i*i;
```
Or even as:
```c
for (sum = 0, i = 1; i <= n; sum += i*i, i++);
```
Example: Primality Testing
**Problem:** Given a positive integer \( n \), decide whether it’s prime.
**Definition:** A positive integer \( n \) is prime if and only if \( n > 1 \) and it has no divisors in the set \( \{2, 3, \ldots, n-1\} \). Equivalently, \( n \) is not prime if either \( n = 1 \) or \( n \) has a divisor in the set \( \{2, 3, \ldots, n-1\} \).
**Solution:**
```c
unsigned int is_prime = 1, n, i;
... // read n
if(n == 1) is_prime = 0;
else
for (i = 2; i < n; i++)
if (n % i == 0) is_prime = 0;
if (is_prime)
printf("%d is a prime\n", n);
else
printf("%d is not a prime\n", n);
```
The break and continue Statements
These are special *jump statements* that change the default behavior of the *while*, *do-while*, and *for* loops:
**break** Causes **immediate exit from the loop**. The rest of the code in the loop is skipped, and the program flow proceeds to the *next statement after the end of the loop*.
Recall that **break** works in exactly the same way also for the *switch-case-default* statement. The following, though, works only for loops:
**continue** Causes the **rest of the loop to be skipped**. The program flow proceeds to the *next iteration of the same loop*.
**Example:** What will get printed?
```c
for (k = i = 0; i < 8; i++)
    for (j = 0; j < 8; j++)
    {
        if (j == 4) break;
        k++;
    }
printf("%d,%d,%d", i, j, k);
```
```c
for (k = i = 0; i < 8; i++)
    for (j = 0; j < 8; j++)
    {
        if (j == 4) continue;
        k++;
    }
printf("%d,%d,%d", i, j, k);
```
With **break** the output is `8, 4, 32`; with **continue** it is `8, 8, 56`.
Examples of continue Usage
❖ The following code reads 100 integers and computes the sum of those that are nonnegative:
```c
for (i = 0; i < 100; i++)
{
scanf("%d",&num);
if(num < 0) continue;
sum += num;
}
```
❖ What is the difference between the for loop above and the while loop sketched below? Are they equivalent?
❖ The for loop will always read exactly 100 integers. The while loop will read as many integers as it needs (perhaps 1,000,000) to get exactly 100 nonnegative integers.
What could we change to make the two loops equivalent?
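A minimal sketch of the while-loop version described above, which keeps reading until it has summed exactly 100 nonnegative integers:

```c
i = 0;
while (i < 100)
{
    scanf("%d", &num);
    if (num < 0) continue;
    sum += num;
    i++;
}
```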
Infinite Loops and **break** Usage
- Infinite loops could arise in a C program either **by design** or **by mistake**. If a program goes into an infinite loop by mistake, it will usually “hang” producing no output. In most cases, you can terminate such a program by pressing **CTRL-C** and then debug.
- However, infinite loops are often introduced into a C program by design. For example:
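One such pattern, sketched minimally here using the summing example discussed below, is a `while (1)` loop exited with **break**:

```c
sum = 0;
while (1)                  /* infinite by design */
{
    scanf("%d", &num);
    if (num < 0) break;    /* exit on the first negative input */
    sum += num;
}
```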
The program can be **designed** to exit such infinite loops via one of the following statements: **break**, **return**, or **goto**.
**Example:**
The following loop is designed to read nonnegative integers from the input and compute their sum, exiting as soon as the user enters the first negative integer.
```c
sum = 0;
do{
scanf("%d", &num);
sum += num;
} while (num >= 0);
```
**What’s wrong with this do-while loop?**
**Problem:** Compute the logical OR of \( n \) input values.
**Solution:** Two different approaches, both using `break`.
```c
enum boolean {FALSE,TRUE};
int i, value;
enum boolean or;
or = FALSE;
for (i = 0; i < n; i++)
{
scanf("%d", &value);
if (value)
{
or = TRUE;
break;
}
}
```
```c
for (i = 0; i < n; i++)
{
    scanf("%d", &value);
    if (value) break;
}
or = (i < n);
```
What happens if we delete the `break`?
The goto Jump Statement
To use the goto statement, you first need to create a labeled statement, which is any statement prefixed by a `label:`, like this:
```c
sum: x = a+b;
myloop: while(1) x++;
error: return 1;
```
Any identifier can serve as the label in a labeled statement. Thereafter, you can use `goto sum`, `goto myloop`, etc. from any point within the same function (a `goto` cannot jump into a different function).
**Example of Usage:**
```c
for (i = 0; i < n; i++)
{
    ...
    // do something
    for (j = 0; j < m; j++)
    {
        ...
        // do something else
        if (disaster) goto error;
        ...
    }
}
...
error: printf("Please wait while I clean up the mess!");
...
// clean up the mess
```
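For reference, a compilable version of the same clean-up pattern is sketched below; the loop bounds and the `disaster` condition are placeholders of my own, not from the slide:

```c
#include <stdio.h>

int main(void)
{
    int n = 3, m = 4, i, j;

    for (i = 0; i < n; i++)
    {
        for (j = 0; j < m; j++)
        {
            /* stand-in for a real error condition */
            int disaster = (i == 2 && j == 1);
            if (disaster) goto error;
        }
    }
    printf("No disaster occurred\n");
    return 0;

error:
    printf("Please wait while I clean up the mess!\n");
    /* clean-up code would go here */
    return 1;
}
```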
Recall: Primality Testing
**Problem:** Given a positive integer \( n \), decide whether it’s prime.
**Definition:** A positive integer \( n \) is **prime** if and only if \( n > 1 \) and it has no divisors in the set \( \{2,3,\ldots,n-1\} \). Equivalently, \( n \) is **not prime** if either \( n = 1 \) or \( n \) has a divisor in the set \( \{2,3,\ldots,n-1\} \).
**Solution:**
```c
unsigned int is_prime = 1, n, i;
...                     // read n
if (n == 1) is_prime = 0;
else
    for (i = 2; i < n; i++)
        if (n % i == 0) is_prime = 0;
if (is_prime)
    printf("%d is a prime\n", n);
else
    printf("%d is not a prime\n", n);
```
This solution does far more work than necessary. The following observations lead to a faster version:
❖ If \( n \) is not 2 but is divisible by 2, then \( n \) is **not** prime.
❖ If \( n \) has no divisor up to \( \sqrt{n} \), then it must be prime, so it is enough to check divisibility up to the square root of \( n \).
❖ Inside the `else` branch, \( n \) is odd (or equal to 2), so we can skip testing divisibility by even values of \( i \).
❖ As soon as a divisor \( i \) is found, \( n \) is **not** prime, and there is no need to keep testing (hence the `break`).
```c
#include <math.h>
...
int is_prime = 1, n, i, sqrt_n;
...
if (n == 1 || (n != 2 && n%2 == 0))
is_prime = 0;
else
{
sqrt_n = (int) sqrt(n);
for (i = 3; i <= sqrt_n; i += 2)
if (n%i == 0) {is_prime = 0; break;}
}
if (is_prime)
printf("%d is a prime\n", n);
else
printf("%d is not a prime\n", n);
...
```
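As a wrap-up, the optimized test can be packaged as a small function; the sketch below is my own addition, not part of the slides:

```c
#include <stdio.h>
#include <math.h>

/* Returns 1 if n is prime, 0 otherwise, using the optimizations above. */
int is_prime(unsigned int n)
{
    unsigned int i, sqrt_n;

    if (n == 1 || (n != 2 && n % 2 == 0))
        return 0;
    sqrt_n = (unsigned int) sqrt((double) n);
    for (i = 3; i <= sqrt_n; i += 2)
        if (n % i == 0)
            return 0;
    return 1;
}

int main(void)
{
    unsigned int n;

    if (scanf("%u", &n) != 1 || n == 0)
        return 1;                      /* expect one positive integer */
    printf("%u is %sa prime\n", n, is_prime(n) ? "" : "not ");
    return 0;
}
```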