Data Integration through Service-based Mediation for Web-enabled Information Systems

Yaoling Zhu, School of Computing, Dublin City University, Dublin 9, Ireland, Phone: ++353 +1 7005620, Fax: ++353 +1 700 5442, Email: [email protected]
Claus Pahl, School of Computing, Dublin City University, Dublin 9, Ireland, Phone: ++353 +1 7005620, Fax: ++353 +1 700 5442, Email: [email protected]

(This chapter appears in "Software Engineering for Modern Web Applications: Methodologies and Technologies", edited by Brandon, Daniel M., Copyright 2008, IGI Global, www.igi-global.com. Posted by permission of the publisher.)

Abstract

The Web and its underlying platform technologies have often been used to integrate existing software and information systems. Traditional techniques for data representation and transformation between documents are not sufficient to support a flexible and maintainable data integration solution that meets the requirements of modern, complex Web-enabled software and information systems. The difficulty arises from the high degree of complexity of data structures, for example in business and technology applications, and from the constant change of data and its representation. In the Web context, where the Web platform is used to integrate different organisations or software systems, the problem of heterogeneity additionally arises. We introduce a specific data integration solution for Web applications such as Web-enabled information systems. Our contribution is an integration technology framework for Web-enabled information systems comprising, firstly, a data integration technique based on the declarative specification of transformation rules and the construction of connectors that handle the integration and, secondly, a mediator architecture based on information services and the constructed connectors to handle the integration process.

Keywords: Web Applications, Data Integration, Software Architecture, Data Models, Information System Design

INTRODUCTION

The Web and its underlying platform technologies have often been used to integrate existing software and information systems. Information and data integration is a central issue in this context. Basic techniques based on XML for data representation and XSLT for transformations between XML documents are not sufficient to support a flexible and maintainable data integration solution that meets the requirements of modern, complex Web-enabled software and information systems. The difficulty arises from the high degree of complexity of data structures, for example in business and technology applications, and from the constant change of data and its representation. In the Web context, where the Web platform is used to integrate different organisations or software systems, the problem of heterogeneity additionally arises. This calls for a specific data integration solution for Web applications such as Web-enabled information systems.

The advent of Web services and service-oriented architecture (SOA) has provided a unified way to expose the data and functionality of an information system. Web services are provided as-is at a certain location and can be discovered and invoked using Web languages and protocols. SOA is a service-based approach to software application integration. The use of standard technologies reduces heterogeneity and is therefore central to facilitating application integration.
The Web services platform is considered an ideal infrastructure to solve problems in the data integration domain such as heterogeneity and interoperability (Orriens et al., 2003; Haller et al., 2005; Zhu et al., 2004). We propose a two-pronged approach to address this aim: firstly, data integration and adaptivity through declarative, rule-based service adaptor definition and construction, and, secondly, a mediator architecture that enables adaptive information service integration based on the adaptive service connectors.

Abstraction has been used successfully to address flexibility problems in data processing - database query languages are a good example here. XML as a markup language for document and data structuring has been the basis of many Web technologies. With XML-based transformation languages such as XSLT, the XML Stylesheet Transformation Language, XML-based data can be translated between formats. With recent advances in abstract, declarative XML-based data query and transformation languages beyond the procedural XSLT, this technology is ready to be utilised in the Web application context. The combination of declarative, abstract specification and automated support of the architecture implementation achieves the flexibility necessary to deal with the complexity and maintainability of constantly changing data and system specifications.

Our objective is to explore and illustrate solutions for composing a set of data integration services. The data integration services deliver a unified data model built on top of individual data models in dynamic, heterogeneous and open environments. The presentation of this technology framework aims to investigate the practical implications of current research findings in Web information systems technology. A lightweight mediated architecture for Web services composition is at the centre of our solution. Data integration is a central architectural composition aspect. The flexibility of the architecture to enable information integration is essential in order to separate the business process rules from the rest of the application logic. Therefore, the data transformation rules are best expressed at the abstract model level.

We apply our solution to the Web services platform in the context of information technology services management in the Application Service Provider (ASP, or on-demand) business area. We focus on this context to illustrate problems and solutions. Portals provided by ASPs, where data might come from different sources, are classical examples that motivate our research. In order to consume the information, the data models and representations need to be understood by all participants. The ASP maintains the application, the associated infrastructure, and the customer's data. The ASP also ensures that systems and data are available when needed. The chosen area demonstrates the need to support deployment of Web service technology beyond toy examples (Stern & Davies, 2004). It is a specific, but important area due to the need to find solutions that accommodate constant structural changes in data representations.

Two central themes shall be investigated:

- identifying data model transformation rules, and expressing these rules in a formal but also accessible and maintainable way, which is central to the data integration problem and its automation;
- service composition to enable interoperability through connector and relationship modelling based on workflow and business processes.
Our contribution based on these themes is an integration technology framework for Web-enabled information systems comprising:

- a data integration technique based on the declarative specification of transformation rules and the construction of connectors that handle the integration in a software system,
- a mediator architecture based on information services and the constructed connectors to handle the integration process.

We start our investigation by providing some data integration background. We then present the principles of our declarative data integration technique. The mediator architecture that realises the data integration technique for Web services is subsequently presented. A larger application scenario is then discussed. We end with some conclusions.

BACKGROUND

Data Integration Context

The Application Service Provider (ASP) business model, which has been embraced by many companies, promotes the use of software as a service. Information systems (IS) outsourcing is defined as handing over to a third party the management of IT and IS infrastructure, resources and/or activities (Willcocks & Lacity, 1998). The ASP takes primary responsibility for managing the software application on its infrastructure, using the Internet as the delivery channel between each customer and the primary software application. The ASP maintains the application and ensures that systems and data are available when needed. Handing over the management of corporate information systems to third-party application service providers in order to improve the availability of the systems and reduce costs is changing the way that we manage information and information systems.

Information integration aims at bringing together various types of data from multiple sources such that the data can be accessed, queried, processed and analysed in an integrated and uniform manner. In a large modern enterprise, it is inevitable that different parts of the organisation will use different systems to produce, store, and search their critical data. Recently, service-based platforms have been used to provide integration solutions for ASP applications. Data integration in these types of collaborating systems is necessary. This problem has been widely addressed in component-based software development through adaptor and connector approaches (Crnkovic & Larsson, 2000; Szyperski, 2002). In the service-based Web applications context, the data in XML representation retrieved from the individual Web services needs to be merged and transformed to meet the integration requirements. The XML query and transformation rules that govern the integration may change; therefore, the programs for building the connectors that link integrated Web services and data service providers need to be adjusted or rewritten.

As with schema integration, the schema-mapping task cannot be fully automated since the syntactic representation of schemas and data does not completely convey the semantics of different data sources. As a result, for both schema mapping and schema integration, we must rely on an outside source to provide some information about how different schemas (and data) correspond. For instance, a customer can be identified in the configuration management repository by a unique customer identifier, whereas the same customer may be identified in the problem management repository by a combination of a service support identifier and its geographical location. In this case, a transformation might be necessary; see Fig. 1 for a visualisation of the customer identifier example.

Fig. 1. Example of Data Integration in Adaptive Service Architectures - two data schemas that need to be transformed into one another.
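Fig. 1 appears in the original chapter as a graphic. As an illustration only, the two customer representations it visualises could look as follows in XML; the element names are taken from the transformation in Fig. 2 below, while the sample values are invented:

```
<!-- Source schema (provider side): upper part of Fig. 1 -->
<arrayOfCustomer>
  <item>
    <orgName>ExampleCorp</orgName>
    <companyId>C-1001</companyId>
    <gcdbOrgId>ORG-42</gcdbOrgId>
    <countryCode>IE</countryCode>
    <csiNumber>987654</csiNumber>
  </item>
</arrayOfCustomer>

<!-- Target schema (global model): lower part of Fig. 1 -->
<CustomerArray>
  <Customer>
    <nameAsContracted>ExampleCorp</nameAsContracted>
    <companyId>C-1001</companyId>
    <serviceOrganizationIdentifier>ORG-42</serviceOrganizationIdentifier>
    <supportIdentifier>
      <CustomerSupportIdentifier>987654</CustomerSupportIdentifier>
      <ISOCountryCode>IE</ISOCountryCode>
    </supportIdentifier>
  </Customer>
</CustomerArray>
```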
Data Integration Principles

Information integration is the problem of combining heterogeneous data residing at different sources and providing the user with a unified view (Lenzerini, 2002). This view is central to any attempt to adapt services and their underlying data sources to specific client and provider needs. One of the main tasks in information integration is to define the mappings between the individual data sources and the unified view of these sources, and vice versa, to enable this required adaptation, as the example in Fig. 1 illustrates. The data integration itself is defined using transformation languages. There are two major architectural approaches to the data integration problem that provide the infrastructure for the execution of transformations (Widom, 1995).

- Data warehousing is an eager or in-advance approach that gathers data from the appropriate data sources to populate the entities in the global view. A data warehousing approach to integration suits data consumers who want local copies of data so that the data can be modified and processed to suit their business needs.
- In contrast, the mediated approach is a lazy approach that extracts data from the export schemas only on demand. A mediated approach to integration is suitable for information that changes rapidly, for service environments that change, for clients that need tailored data, for queries that operate over large amounts of data from numerous information sources and, most importantly, for clients that need the most recent state of the data.

XSLT Shortcomings

XSLT is the most widely used language for XML data integration, but XSLT transformations are difficult to write and maintain for large-scale information integration. It is difficult to separate the source and target parts of the rules, as well as the filtering constraints. The verbosity of XML makes manual specification of data and transformations difficult in any case. With this difficulty in mind, we propose a declarative query and transformation approach that yields more expressive power and the ability to automatically generate query programs as connectors, in order to improve the development of service-based data integration in Web-based information systems.

XSLT works well for transforming data output from one Web service to another in an ad-hoc manner. XSLT code is, however, difficult to write and almost impossible to reuse in a large enterprise integration solution. The syntactical entanglement of the query part and the construction part of an XSLT transformation program is hard to read, and new programs are often needed even when only a small portion of the data representation changes. XSLT also does not support joins of XML documents; in our context, we would need to merge several source XML documents into one document before it can be transformed into another document according to an over-arching general schema.
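To make this criticism concrete, the following minimal XSLT sketch (our own illustration, not taken from the chapter) maps the source customer representation of Fig. 1 onto the target one. Note how the selection logic (the match and select expressions) and the construction logic (the literal result elements) are interleaved in the same template, so a change to either the source or the target representation touches the same piece of code:

```
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Querying and construction are mixed: literal result elements
       (CustomerArray, Customer, ...) are interleaved with the select
       expressions that query the source document. -->
  <xsl:template match="/arrayOfCustomer">
    <CustomerArray>
      <xsl:for-each select="item">
        <Customer>
          <nameAsContracted><xsl:value-of select="orgName"/></nameAsContracted>
          <companyId><xsl:value-of select="companyId"/></companyId>
          <serviceOrganizationIdentifier>
            <xsl:value-of select="gcdbOrgId"/>
          </serviceOrganizationIdentifier>
          <supportIdentifier>
            <CustomerSupportIdentifier>
              <xsl:value-of select="csiNumber"/>
            </CustomerSupportIdentifier>
            <ISOCountryCode><xsl:value-of select="countryCode"/></ISOCountryCode>
          </supportIdentifier>
        </Customer>
      </xsl:for-each>
    </CustomerArray>
  </xsl:template>
</xsl:stylesheet>
```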
A DECLARATIVE DATA INTEGRATION AND TRANSFORMATION TECHNIQUE

A declarative, rule-based approach can be applied to the data transformation problem (Orriens et al., 2003). A study by Peltier et al. (2001) introduces the MTRANS language, which is placed on top of XSLT to describe data model transformations: the transformation rules are expressed in MTrans and then parsed by a generator that produces XSLT. Peltier et al. argue that data transformation rules are best expressed declaratively at the abstract model level rather than at the concrete operational level, in order to reduce the complexity of the transformation rules. A data integration engine for the Web services context can be built in the Web services business process execution language WS-BPEL, which is another example of the benefits of abstraction in transformation and integration. A common over-arching information model governs what types of services are involved in the composition. In (Rosenberg & Dustdar, 2005), a business rule engine-based approach has been introduced to separate the business logic from the executable WS-BPEL process. These two examples illustrate current work in this context. A detailed discussion shall now elicit the specific requirements for service-based information integration.

Requirements for Mediated Integration

The flexibility of the architecture in which information integration is to be realised is essential in order to separate the business logic from the rest of the application logic. Therefore, the data transformation rules are best expressed at an abstract business model level. These rules, stored in a repository, can be used to dynamically create XSLT-based transformations using a connector or integration service as the mediator. These integration services are the cornerstones of a mediator architecture that processes composite client queries, possibly involving different data sources provided by different Web services.

We start our investigation by discussing the properties of suitable integration and transformation languages. XML data might be provided without an accompanying schema and is sometimes not well-formed; XML data often contains nested structures. Therefore, transformation techniques need more expressive power than traditional database languages such as relational algebra or SQL. The characteristics of an XML query language have been studied extensively (Jhingran et al., 2002; Lenzerini, 2002; Peltier et al., 2002). However, these investigations often focus on features for querying an XML or semi-structured data repository in the spirit of database query languages, rather than on constructing a new XML document in the context of data integration. The following principles, inspired by the data integration literature such as (Lenzerini, 2002), aim to provide a comprehensive requirements list:

- The language should support both querying and restructuring of XML data.
- The language must enable the generation of query programs by other programs.
- The language should be capable of expressing the following operations in addition to those existing in database languages (such as projection, selection, and joins): restructuring (constructing a new set of element instances based on variable bindings and the global schema), combination (merging two or more element instances into one), and reduction (expressing transformation rules that exclude parts of the data from the result).
- Compositionality is an essential feature for an XML query and transformation language in order to support query composition.

A rule-based, declarative language enables developers to concentrate on the integration logic rather than on implementation details, and it enables the required compositionality and expressiveness. Most XML and semi-structured data query languages have been proposed to extract XML data from XML databases or the Web.
A comparative analysis of existing languages has been carried out by Reynaud et al. (2001). A language is generally designed to suit the needs of a limited application domain such as database querying or data integration; some languages are designated only for semi-structured data that predated the XML format. A query language should be able to query data sources using complex predicates, joins and even document restructuring. We add the following criteria specifically for the context of Web-based data integration:

- **Join.** The language must support joins of multiple XML data sources. A join condition is necessary to compare attributes or elements in any number of XML documents. In data integration systems, data is most likely to come from more than one source.
- **Data Model.** The queries and their answers are instances of a data model. Sometimes a rich data model is needed to support the functionality of some query languages. The underlying framework plays a major role in determining the data model for a query language.
- **Incomplete Query Specification.** XML and semi-structured data is not as rigid as relational data in terms of schema definitions and data structure. Therefore, it is important that a query language is capable of expressing queries in incomplete form, such as by using wildcards and regular expressions - also called partially specified path expressions.
- **Halt on Cyclic Query Terms.** If a language supports querying with incomplete query specifications via wildcards and regular expressions, termination problems might arise. Therefore, features to detect cyclic conditions are required.
- **Building New Elements.** The ability to construct a new node added to the answer tree is an important feature for data integration systems.
- **Grouping.** Grouping XML nodes together by some condition, by querying the distinct values, is another important feature in data integration. Some languages use nested queries to perform grouping operations; in contrast, some more powerful languages have built-in constructors.
- **Nested Queries.** Nested queries are common in relational database languages for joining different data elements by their values. In logic-based languages, the construction part and the selection part are separated.
- **Query Reduction.** Query reduction allows users to specify which parts of the elements or which nodes in the query conditions will be removed from the resulting XML tree.

A number of potential candidates shall briefly be discussed in the context of these requirements:

- **XQuery** is a W3C-supported query language that aims at XML-based database systems. XQuery is an extension of XPath 2.0 that adds the functionality needed by a full query language. The most notable additions are support for sequences, the construction of nodes and variables, and user-defined functions.
- **UnQL** - the Unstructured Query Language - is a query language originally developed for querying semi-structured data and nested relational databases with cyclic structures. It has later been adapted to query XML documents and data. Its syntax uses query patterns and construction patterns, and a query consists of a single select or traverse rule that separates construction from querying. Queries may be nested, in which case the separation of querying and construction is abandoned. UnQL was one of the first languages to propose pattern-based querying (albeit with subqueries instead of rule chaining).
- **XML-QL** uses query patterns and path expressions to select data from XML sources. These patterns can be augmented by variables for selecting data. XML-QL uses query patterns containing multiple variables that may select several data items at a time, instead of path selections that may only select one data item at a time. Furthermore, the variables are similar to the variables of logic programming, i.e. joins can be evaluated over variable name equality. Since XML-QL does not allow more than one separate rule, it is often necessary to employ subqueries to perform complex queries.

The shortcomings of these widely known and used languages in the context of the given requirements, together with the language comparisons, have led us to choose a fully declarative language called Xcerpt (Bry & Schaffert, 2002), which satisfies all the criteria listed earlier. Other recently developed and well-supported transformation languages such as ATL and QVT are similarly suitable candidates; while QVT satisfies the criteria, it is currently not as well supported through tools and accessible tutorial material.

Xcerpt is a query language designed for querying and transforming both data on the standard Web (e.g. XML and HTML data) and data on the Semantic Web (e.g. RDF data). Xcerpt not only allows one to construct answers in the same data formats as the queried data, like XQuery, but also allows further processing of the data generated by the same query program. One of its design principles is to strictly separate the matching part and the construction part of a query. Xcerpt follows a pattern-based approach to querying XML data. A similar approach has been proposed in the languages UnQL and XML-QL. However, Xcerpt extends the pattern-based approach in the following respects. Firstly, query patterns can be incomplete in three dimensions: in depth, which allows XML data to be selected at any arbitrary depth; in breadth, which allows querying of neighbouring nodes by using wildcards; and in order. Incomplete query specifications allow patterns to be specified in a more flexible manner without losing accuracy. Secondly, simulation unification computes answer substitutions for the variables in the query pattern against the underlying XML terms - similar to UnQL, except that UnQL uses strict unification.

Declarative Transformation Rules

We have adapted Xcerpt to support the construction of service connectors, which is our central objective:

- From the technical point of view, in order to promote code reuse, the individual integration rules should not be designed to perform the transformation tasks alone. The composition of rules and rule chaining demand that the query part of a service connector be built ahead of its construction part.
- From the business point of view, the data representation of the global data model changes as element names change or elements are removed. These changes should not affect the query and integration part of the logic; only an additional construction part is needed to enable versioning of the global data model.

Grouping and incomplete query specifications turn out to be essential features. Xcerpt is a document-centric language designed to query and transform XML and semi-structured documents. Therefore the ground rules, which read data from the document resources, are tied to at least one resource identifier.
This is a bottom-up approach in terms of data population: data is assigned from the bottom level of the rules upward until the rule application reaches the ultimate goal of a complex, hierarchically structured rule. These rules are defined through an integration goal at the top level and structured into sub-rules down to ground rules, which address individual data elements.

```
CONSTRUCT
  CustomerArray {
    all Customer [
      nameAsContracted [var Name],
      companyId [var CompanyId],
      serviceOrganizationIdentifier [var OrgId],
      all supportIdentifier [
        CustomerSupportIdentifier [var CSI],
        ISOCountryCode [var Code]
      ]
    ]
  }
FROM
  arrayOfCustomer [
    item [
      orgName [var Name],
      companyId [var CompanyId],
      gcdbOrgId [var OrgId],
      countryCode [var Code],
      csiNumber [var CSI]
    ]
  ]
```

Fig. 2. Declarative query and transformation specification of the CustomerArray element in Xcerpt.

Fig. 2 shows a transformation example for a customer array based on Fig. 1. Fig. 1 is a graphical illustration of XML-based data structures: the upper structure provides the data schema of the input document; the lower structure is the target data schema that a transformation needs to map onto. The graphical representation allows us to avoid the verbosity of XML-based data representations for this investigation. An output customer in CustomerArray is constructed from the elements of an item in an arrayOfCustomer using a pattern-matching approach, identifying relevant attributes in the source and referring to them in the constructed output through variables. For instance, the Name variable is used to declare nameAsContracted and orgName as semantically equal elements of the two representations that are syntactically different.

This original Xcerpt approach is unfortunately not feasible in an information integration solution, because in our setting the resource identifiers cannot be hard-coded in the ground rules. A wrapper mechanism has been developed to pass the resource identifiers from the goal level all the way down to the ground rules. In addition to the original Xcerpt approach, we propose a mediator-based data integration architecture in which the Xcerpt-based connectors are integrated with the client and provider Web services. WS-BPEL code is generated by a transformation generator within the mediator service (see Fig. 4 below, which is explained in a separate section).

Implementation of Connector Construction

The construction of Xcerpt-based connectors, which specify integration through declarative rules, can be automated using rule chaining. Ground rules are responsible for querying data from individual Web services. Intermediate composite rules are responsible for integrating the ground rules to render the data types described in the global schemas. The composite rules render, on demand, the data objects described in the interfaces of the mediator Web services. The exported data of a mediator service is therefore the goal of the corresponding connector (i.e. a query program); see Fig. 3. Fig. 1 again defines the respective input and output data schemas. The CONSTRUCT .. FROM clauses in Fig. 3 define the individual rules. Here, information from arrayOfCustomer and Customers is selected to construct the SupportIdentifier.
```
GOAL
  out {
    resource "file:SupportIdentifier_Customer.xml",
    SupportIdentifier [ all var SupportIdentifier ]
  }
FROM
  var SupportIdentifier -> SupportIdentifier {{ }}
END

CONSTRUCT
  SupportIdentifier [ var CSI, optional var CName, var Code ]
FROM
  in {
    resource "file:customer1.xml",
    arrayOfCustomer [[
      customer [[
        optional countryName [var CName],
        countryCode [var Code],
        csiNumber [var CSI]
      ]]
    ]]
  }
END

CONSTRUCT
  SupportIdentifier [ var CSI, var CName, optional var Code ]
FROM
  in {
    resource "file:customer2.xml",
    Customers [[
      customer [[
        countryName [var CName],
        optional countryCode [var Code],
        csiNumber [var CSI]
      ]]
    ]]
  }
END
```

Fig. 3. Transformation specification in Xcerpt based on goal chaining.

We apply backward, goal-based rule chaining in this adapted implementation to execute complex queries based on composite rules. Fig. 3 shows an example of this pattern-matching-based approach, which separates a possibly partial query into resource and construction parts. This transformation rule maps the supportIdentifier element of the customer example from Fig. 1. The goal in Fig. 3 is a composite rule based on the SupportIdentifier construction rules at the lower level. These rules are saved in a repository. When needed, a rule is picked, and backward rule chaining enables data objects to be populated to answer transformation requests. This architecture is detailed in the subsequent section.

MEDIATOR ARCHITECTURE

Motivation

Zhu et al. (2004) argue that traditional data integration approaches such as federated schema systems and data warehouses fail to meet the requirements of constantly changing and adaptive environments. We propose, based on (Haller et al., 2005; Wiederhold, 1992; Sheth & Larson, 1990; Zhu et al., 2004), a service-oriented data integration architecture to provide a unified view of data on demand from various data sources. A service-oriented data integration architecture differs from business process integration, as the latter is concerned with integrating the business process rather than the data. The proposed integration architecture uses Web services to enable the provision of data on demand whilst keeping the underlying data sources autonomous.

There is consequently a need for mediators in an architecture that harmonise and present the information available in heterogeneous data sources (Stern & Davies, 2003). This harmonisation comes in the form of identifying semantic similarities in data while masking their syntactic differences; see Fig. 1. Relevant and related data is then integrated and presented to a higher layer of applications. The sourcing, integration, and presentation of information can be seen as logically separated mediator rules for integration, implemented by mediator services - which shall form the basis for the presented mediator architecture.

Garcia-Molina et al. (1997) identify the following requirements as essential for building a mediator architecture. Firstly, it must be based on a common data model that is more flexible than the models commonly used in database management systems. Secondly, it must be supported by a common query language. Finally, there must be a tool that makes the creation of new mediators and mediator systems more cost-effective than building them from scratch.
Architecture Definition

The mediator architecture transforms local XML documents into documents based on a global schema. Fig. 4 illustrates this architecture with a few sample information services - Customer Data, E-business System, and Request Logging and Analysis Service - that a client might access. The data integration engine is built as a composition of individual services using WS-BPEL, where the component invocation orders are predefined in the integration schemas. These service orchestrations are defined by specifying the order in which operations should be invoked; a minimal process skeleton is sketched after the component list below. The proposed Web services-based mediator architecture (Fig. 4) contains the following components:

- **Schema Repository**: Each object within the model is a logical representation of an entity and will often be populated with data sourced from more than one repository. The advantage of having a unified view of data is to ensure that customers have a consistent view of the data and to avoid duplication.
- **Information Services**: These provide source data retrieved from the underlying data repositories to clients and other services. The signature of the Web service interfaces, such as input parameters and data output, is agreed in advance by business domain experts from both the client and provider sides. The benefit of asking the data sources to provide a Web service interface is to delegate responsibility and cut down the effort spent on developing data access code and understanding the business logic.
- **Data Integration and Mediation Services**: A common data model can be implemented as an XML schema. Two basic approaches have been proposed for the mappings between the export schemas and the federated schema - called global-as-view and local-as-view in (Lenzerini, 2002). The former approach defines the entities in the global data model as views over the export schemas, whereas the latter defines the export schemas as views over the global data model. In this work, a data integration service is treated as a mediator in the mediator architecture. We introduce a novel approach to ease and improve the development of the mediators. There are two quite different styles of transformation: procedural, with explicit source model traversal and target object creation and update, and declarative, with implicit source model traversal and implicit target object creation. We have therefore chosen an approach based on a declarative rule markup language to express the data transformation rules, together with a rule engine. The mapping should be conducted at the level of abstract syntax mappings, leaving the rendering of the result to a separate step carried out at runtime by the BPEL engine.
- **Query Component**: The query service is designed to handle inbound requests from the application consumer side. The application developers build their applications and processes around common objects and make successive calls to the mediated Web services. The interfaces of the individual Web service providers are therefore transparent to the application customers; they may send any combination of input parameters to the query service. In order to facilitate these unpredicted needs, the query service has to decompose the input messages into a set of pre-defined WS-BPEL flows. Normally, a BPEL flow belongs to a mediator that delivers a single common object. Occasionally, two or more mediators need to be bundled together to deliver a single object.

Each of the components can in principle be offered as a service by a (potentially different) provider.
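As an illustration of the orchestration style described above, the following fragment sketches what a minimal WS-BPEL process for a customer-data mediator might look like. The partner link, message and operation names are invented for this sketch and do not come from the chapter:

```
<process name="CustomerDataMediator"
         targetNamespace="http://example.org/mediator"
         xmlns:tns="http://example.org/mediator"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <!-- Hypothetical partner links: the client, one information service,
       and the Xcerpt-based connector (transformation) service. -->
  <partnerLinks>
    <partnerLink name="client" partnerLinkType="tns:QueryPLT" myRole="mediator"/>
    <partnerLink name="customerService" partnerLinkType="tns:CustomerPLT" partnerRole="provider"/>
    <partnerLink name="connectorService" partnerLinkType="tns:ConnectorPLT" partnerRole="transformer"/>
  </partnerLinks>
  <variables>
    <variable name="request" messageType="tns:QueryRequestMsg"/>
    <variable name="sourceData" messageType="tns:CustomerDataMsg"/>
    <variable name="unifiedData" messageType="tns:UnifiedDataMsg"/>
  </variables>
  <sequence>
    <!-- Receive a client query posed against the global schema. -->
    <receive partnerLink="client" operation="query" variable="request" createInstance="yes"/>
    <!-- Invoke the information service to retrieve the source XML data. -->
    <invoke partnerLink="customerService" operation="getCustomers"
            inputVariable="request" outputVariable="sourceData"/>
    <!-- Invoke the connector service to transform the source data
         into the unified (global) representation. -->
    <invoke partnerLink="connectorService" operation="transform"
            inputVariable="sourceData" outputVariable="unifiedData"/>
    <!-- Reply to the client with the integrated result. -->
    <reply partnerLink="client" operation="query" variable="unifiedData"/>
  </sequence>
</process>
```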
Within the composite mediator service, both the transformation and connector generation services are separated and only loosely coupled.

Developer Activities

The architecture in Fig. 4 explains the runtime view from the client and user perspective. In order to complete the picture, the development perspective shall also be addressed. Fig. 5 illustrates the development activities, looking at the developers of the architecture, rules, and services, and their respective activities. A number of actors are distinguished, including service provider engineers, application software engineers, integration business analysts, integration software architects, and integration software engineers. These are associated with the activities they are involved in; in particular, the integration team is involved in Xcerpt-based rule definition and application. Activities are also related among themselves. The participation of different roles from possibly different organisations (application customer, service provider, integration team) demonstrates the need for a common understanding and for maintainability of the integration problem, which can be achieved through abstract and declarative rule specifications (here in Xcerpt format) shared by service provider developers, integration business analysts, and integration software developers.

APPLICATION SCENARIO AND DISCUSSION

The presented data integration technique and the mediated architecture are complemented by an incremental, evolutionary process model. Some pragmatic aspects of this process shall now be addressed. In the proposed architecture, the unified data model (the over-arching schema) is maintained manually. The schema for a large enterprise integration solution might consist of a large number of data aspects. From the development point of view, it is only reasonable to deliver the data integration services on a phased basis, such as one data aspect per release cycle. A mediator consists of the following components: the individual provided Web services, a WS-BPEL workflow, and one or more service connectors, as illustrated in Fig. 4. Mediators in our solution are used to deliver these data aspects according to the unified schema. This schema is available to the customers so that they can decide which mediator to call based on the definition of the unified schema.

The focus of this investigation is not the automatic composition of Web services, but rather how the data output from multiple Web services can be automatically integrated according to a global data model and sent back to users. Therefore, in terms of the WS-BPEL process flow, a static approach can be taken with respect to the orchestration of the involved Web services: these can be orchestrated together in the form of a WS-BPEL flow built in advance. During the development phase, the mappings between the global model and the local models are expressed at the abstract model level, for instance in the widely used MOF (Meta Object Facility) framework for modelling language definition. Model transformations between different metamodels can then be carried out automatically. The inputs are the source XML schema definitions and the transformation rules; the output is an XSLT transformation file. In the proposed process model illustrated in Fig. 5, the unified data model and the creation of rules are the responsibility of the business solution analysts, not necessarily the software architect. The rules are merely mappings from the elements exposed by Web service providers to the elements in the unified data model.
We assume here that the semantic similarity is determined manually. In the literature on data model transformation, the automation of the mapping is often limited to transforming a source model into a destination model, rather than integrating more than one data model into a unified data model. Even in the case of source-to-destination model mapping, the user's intervention is needed to select one from more than one set of generated mappings. In our proposed architecture, the service connectors can be generated on the fly by rule composition. The sacrifice is that semantic similarity is not taken into consideration. The data integration rules are created at a higher level than the Xcerpt ground query programs themselves, as the following schematic example demonstrates (Fig. 3 shows an example of a composite rule like A below):

- Rule A: \( A(a, b) \leftarrow B(a, b), C(b) \)
- Rule B: \( B(a, b) \leftarrow D(a), E(b) \)
- Rule C: \( C(b) \leftarrow E(b), F(b) \)

Each of the above rules would be implemented in the Xcerpt language. In the example, rule \( A \) is a composite rule based on \( B \) and \( C \). It could be used to answer a user's query directly while internally referring to subordinate rules dealing with the extraction and transformation of specific data aspects. The resource identifiers, in the form of variables, and the interfaces for the data representation, such as the version number of the unified data model, are supplied to the transformation generator. The rule mappings in the transformation generator serve as an index to find the correct Xcerpt queries for execution. As a result, a query program including both a query part and a construction part is executed to generate the XML output, which is sent back to the transformation generator.

In terms of examples, we have so far only addressed complex transformations based on compositional rules within data provided by one Web service - the customer information service. Queries could of course demand the integration of data from different services. For instance, retrieving all service requests by a particular customer would target two services, based on several composite integration and transformation rules.

FUTURE TRENDS

Adaptivity in service-based software systems is emerging as a crucial aspect beyond the discussed area of service-based ASP infrastructures and on-demand information systems. Adaptability of services and their infrastructure is necessary to reconcile integration problems that arise, in particular, in dynamic and changing environments. We have excluded the problem of semantic interoperability from our investigation; mappings between schemas might still represent the same semantic information. The recently widely investigated field of semantic Web services, with ontology-based domain and service models, can provide input for some planned extensions in this direction (Haller et al., 2005).

Re-engineering and the integration of legacy systems is another aspect that we have not addressed. The introduction of data transformation techniques for re-engineering activities can improve the process of re-engineering legacy systems and adopting service-oriented architecture to manage information technology services (Zhang & Yang, 2004). Business rules often change rapidly, requiring the integration of legacy systems to deliver a new service. How to handle information integration in the context of service management has not yet been explored in sufficient detail in the context of transformation and re-engineering.
CONCLUSIONS

The benefit of information systems on demand must be supported by corresponding information service management systems. Many application service providers are currently modifying their technical infrastructures to manage information using a Web services-based approach. However, how to handle information integration in the context of service-based information systems has not yet been fully explored. The presented framework utilises information integration technologies for service-oriented software architectures. The crucial solutions for the information integration problem are drawn from mediated architectures and data model transformation, allowing data from local schemas to be transformed, merged and adapted according to declarative, rule-based integration schemas for dynamic and heterogeneous environments. We have proposed a declarative style of transformation, with implicit source model traversal and implicit target object creation. From the deployment point of view, the development of a flexible mediator service is crucial for the success of the service-based information systems architecture.
{"Source-Url": "http://doras.dcu.ie/17088/1/SEMWA08.pdf", "len_cl100k_base": 7886, "olmocr-version": "0.1.49", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 37357, "total-output-tokens": 10367, "length": "2e12", "weborganizer": {"__label__adult": 0.0002675056457519531, "__label__art_design": 0.0004401206970214844, "__label__crime_law": 0.0003218650817871094, "__label__education_jobs": 0.0004818439483642578, "__label__entertainment": 5.805492401123047e-05, "__label__fashion_beauty": 0.00012505054473876953, "__label__finance_business": 0.0003192424774169922, "__label__food_dining": 0.0002949237823486328, "__label__games": 0.0003333091735839844, "__label__hardware": 0.0005650520324707031, "__label__health": 0.0003740787506103515, "__label__history": 0.00021708011627197263, "__label__home_hobbies": 5.507469177246094e-05, "__label__industrial": 0.00033736228942871094, "__label__literature": 0.00024116039276123047, "__label__politics": 0.00021398067474365232, "__label__religion": 0.0003190040588378906, "__label__science_tech": 0.02459716796875, "__label__social_life": 6.35981559753418e-05, "__label__software": 0.0114898681640625, "__label__software_dev": 0.9580078125, "__label__sports_fitness": 0.00018203258514404297, "__label__transportation": 0.0003542900085449219, "__label__travel": 0.0001742839813232422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 48288, 0.01079]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 48288, 0.23645]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 48288, 0.89188]], "google_gemma-3-12b-it_contains_pii": [[0, 2587, false], [2587, 6264, null], [6264, 9209, null], [9209, 11292, null], [11292, 14421, null], [14421, 17884, null], [17884, 21161, null], [21161, 24480, null], [24480, 27055, null], [27055, 29486, null], [29486, 31961, null], [31961, 34078, null], [34078, 37264, null], [37264, 37887, null], [37887, 41051, null], [41051, 43683, null], [43683, 46310, null], [46310, 48288, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2587, true], [2587, 6264, null], [6264, 9209, null], [9209, 11292, null], [11292, 14421, null], [14421, 17884, null], [17884, 21161, null], [21161, 24480, null], [24480, 27055, null], [27055, 29486, null], [29486, 31961, null], [31961, 34078, null], [34078, 37264, null], [37264, 37887, null], [37887, 41051, null], [41051, 43683, null], [43683, 46310, null], [46310, 48288, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 48288, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 48288, null]], "pdf_page_numbers": [[0, 2587, 1], [2587, 6264, 2], [6264, 9209, 3], [9209, 11292, 4], [11292, 14421, 5], [14421, 17884, 
6], [17884, 21161, 7], [21161, 24480, 8], [24480, 27055, 9], [27055, 29486, 10], [29486, 31961, 11], [31961, 34078, 12], [34078, 37264, 13], [37264, 37887, 14], [37887, 41051, 15], [41051, 43683, 16], [43683, 46310, 17], [46310, 48288, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 48288, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
196aa8c4e17abac67ec77de1c89df88271bcf2d5
Abstract

Using a case study approach, this paper introduces and outlines the Unified Modeling Language (UML) as it applies to modeling a site on the World Wide Web. The authors include an introduction to the concept of modeling in general, as well as how modeling relates to the design of a Web site. A simple, fictitious university Web site serves as an illustrative tool throughout the paper. This site is reflected in several UML-based diagrams, as well as in the discussion of some of the issues, considerations and techniques involved in using UML to model a Web site. The paper concludes with a list of 'best practices' for modeling Web sites using UML.

1. Introduction

This paper introduces and outlines the Unified Modeling Language (UML) as it applies to modeling a site on the World Wide Web. Although we focus our attention on Internet-based systems, essentially all of the UML-related design implications included in our discussion hold for Intranet-based systems as well. UML is more than able to model complex, Web-based applications, including transaction processing (such as a book/CD ordering system) and document management (such as an academic conference manager). However, for purposes of simplicity, we will be discussing some of the major considerations in modeling a relatively simple Web site using UML, largely from the user's perspective (client-side). Our examples will be drawn from the Web site of a fictitious University (see Appendix A), and we include several UML-based diagrams, which are intended to illustrate various points of modeling with UML. We used Rational Software Corporation's Rational Rose to generate our UML diagrams, which, combined with a narrative of the site, help to illustrate the concepts being proposed and the site being developed.

Booch, Rumbaugh and Jacobson [1999] define UML as a "standard language for writing software blueprints", including the capability to "visualize, specify, construct and document the artifacts" of the system to be modeled through the use of numerous diagrams [Booch et al., 1999]. UML offers consistent notations and numerous tools across processes and projects. Jim Conallen, Web Modeling Evangelist at Rational Software Corporation, suggests that UML is the "language of choice for modeling software-intensive systems" [Conallen, 1999]. Web site development falls into this category of 'software-intensive systems'. In illuminating the specific merits of UML for modeling Web sites and applications, Conallen pointed out the following [Conallen, 1999]:

- Web applications are a type of software-intensive system that are not only becoming increasingly complex, but are being implemented in more critical situations.
- A software system typically has multiple models, each representing a different viewpoint, level of abstraction and detail.
- The proper level of abstraction and detail depends on the artifacts and worker activities in the development process.
- UML can "express the execution of the system's business logic [if any] in those Web-specific elements and technologies" [Conallen, 1999].

UML can model specific representations at various levels of abstraction. These different levels are comprised of several diagrams which, taken together, allow us to view the design of the system with as much, or as little, detail as needed. Modeling at differing levels of abstraction (between high-level/generalized and low-level/detailed) will depend on exactly what information needs to be conveyed through the completed model.
When modeling any system, Conallen [1999] suggests the importance of concentrating on the elements that will be of value to those who will be using the model (system developers). This entails "model[ing] the artifacts of the system - those 'real life' entities that will be constructed and manipulated to produce the final product". Of course, the artifacts of a particular system will depend on the system being modeled. Artifacts of a Web site may include, but are not limited to, the following:

- Web pages,
- Multimedia-based elements, such as images, animation, video and audio clips,
- Inter- and intra-page hyperlinks ("navigational paths" through a Web site),
- Dynamic Web page content, both on the client and server side,
- Various end-users of the system.

Depending on the particular site, these elements, or a subset of them, are of direct concern to the designers and creators of a Web site. In general, modeling the internal workings of the Web server or Web browser will not lend any significant insight to the designers and creators (programmers) of the site, and as such would not be included in a typical UML diagram. Given the characteristics of our sample University Web site, we felt that modeling the navigational links and paths of the site was a priority. Among software-based systems, a site map is unique to a Web-based system, and UML's corresponding tool for this map is the Component Diagram, which is discussed later in this paper.

The structure of this paper is as follows. We begin with a general introduction to modeling, including why we model and what UML represents. This section also includes a general architecture of a Web site, as well as an overview of a fictitious sample University Web site. We then present more traditional approaches to modeling a Web site, including the development of a cognitive walkthrough and a storyboard. Our next major section presents some of the issues, considerations and techniques involved in using UML to model a Web site. This section makes extensive use of various UML-based diagrams. Lastly, we present our conclusions, which include our list of 'best practices' for modeling Web sites using UML.

1.3 Scope of our analysis: simplified university Web site

Regardless of the modeling method(s) and tools employed, there are several critical aspects to designing an effective, easy-to-navigate and informative Web site. It is critical for a Web site to provide the information and support the functions that its users need. These aspects are related to who is expected to use the site and what tasks these specific users need to accomplish through the site. Specifically, an initial analysis needs to cover at least the following:

- Determine the overall purpose of the site.
- Identify the intended users of the site.
- Frame the scope of information contained within the site.

The overall purpose of our sample University Web site is twofold. The first is to provide information related to programs, people and admissions. The second is to facilitate contact through phone, mail and e-mail between users of the site and representatives of the University (faculty and administration). Brannan [2000] suggested that part of the design process is to attempt to identify groups of users based on their common informational needs, and then essentially generate a navigation and manipulation model from the information gathered. This would result in applications that are more tailored to users.
The intended users of the site are as follows:

- Potential students and their parents/guardians.
- Current students and their parents/guardians.
- Faculty and administrators.
- Industry representatives.
- Alumni.

There may be inherent variety among these users in terms of their expectations, goals and technical constraints (including the speed of their modem and Internet hookup) [Baresi, 2000]. The information contained in the site pertains to the following:

- Academic programs (undergraduate, graduate and continuing education).
- People associated with the University (faculty, administration and alumni).
- Admission information (undergraduate, graduate and continuing education).

1.4 Traditional methods of Web site modeling

There are a few traditional methods for modeling a Web site. They include the following:

- A text-based description of the general contents and navigational requirements of the site.
- Cognitive walkthroughs for each user task.
- A storyboard of the site.

The following section includes a text-based description of the general contents and navigational requirements of the site. It should be noted that, for brevity, this description includes only the home page, the major sections of the site and the subsequent 'first-level' sub-pages. This site contains approximately 50 pages which, with the exception of the home page, are arranged into the following categories: Programs, People and Admissions. Appropriate links on each of their pages are shown as follows: PageName. The Programs section of the site includes pages for Undergraduate (Overview, Majors/Minors, Class List), Graduate (Overview, Majors/Minors, Class List) and Continuing Education (Overview, Class List). The People section of the site includes pages for Faculty (Overview, Faculty List), Administration (Overview, Administrator List) and Alumni (Overview, Alumni List). The Admissions section of the site includes pages for Undergraduate (Overview, Apply), Graduate (Overview, Apply) and Continuing Education (Overview, Apply). Each of these pages links to its respective Overview and Apply pages. Every page in the site contains a link back to the Home Page, as well as a link to the appropriate section of the site (Programs, People or Admissions).

A cognitive walkthrough provides step-by-step instructions, combined with prototyped screens, to test the completeness of the site in executing a given user task. Similar walkthroughs would need to be developed and documented for other common user tasks. The specific tasks would need to be determined through user studies, perhaps in the form of interviews and/or surveys. A storyboard is used to illustrate the navigational hierarchy and paths within a Web site; the direction of the various arrows indicates the destination page of a particular hyperlink.

The following sections relate to UML-specific modeling, with some general discussions that are supplemented by diagrams and documentation specific to our University Web site.

2. Implementing UML in Web site modeling

We have selected those diagrams that we deemed to be most relevant to modeling a Web site, particularly those with extensive navigation path analysis.
We include discussions of the following elements and diagrams of UML, arranged by the following general and specific categories:

- **System Analysis:**
  - 2.1 Problem Statement
  - 2.2 Use Case Diagrams
  - 2.3 Analysis-Level (high-level) Class Diagrams
- **System Design:**
  - 3.1 Sequence Diagrams
  - 3.2 State Diagrams
  - 3.3 Activity Diagrams
  - 3.4 Design-Level (low-level) Class Diagrams
- **Physical Design:**
  - 4.1 Component Diagrams
  - 4.2 Deployment Diagrams
- **Applications Design:**
  - 5.1 Interaction Diagrams

This section will investigate and demonstrate how UML is used to model the design of a Web site, with appropriate levels of abstraction. We will be developing a primary Use Case to serve as a basis when utilizing UML and producing our UML-based examples. We will not attempt to model, in detail, the ‘backend’ aspects of the Web’s client/server architecture. For simplicity, we will also not attempt to model any animation or real-time graphic techniques, such as ‘mouse-over’ help.

2. SYSTEM ANALYSIS

2.1 Problem Statement

Our assignment is to develop a Web site for a University that will provide pertinent information to a wide variety of users. These users include potential students, students, parents and guardians, faculty/administrators, alumni and representatives from industry. The included information relates to Programs, People and Admissions. The site should also assist users in contacting various representatives of the University (including faculty, administration, admissions and alumni) through e-mail links and contact information (address, telephone and fax numbers). The site must also provide for password-restricted access by enrolled students to pages containing course grades and assignments.

2.2 Use Case Diagrams

Before discussing Use Cases, we present a discussion of the concept of mapping user groups to UML’s Actor object. We have previously identified our user groups, and they can be directly mapped to Actors in UML. Actors are identified based on their distinctive interactive role with the system being modeled. For the purpose of modeling, therefore, Actors are considered external to the system. The operational and navigational needs of various user groups are “associated with the actors [that] they are specific to” [Baresi, et al, 2000]. The identified Actors (users) of our sample University Web site are: prospective students, students, parents and guardians, faculty and administration, alumni, and industry representatives.

Rosenberg [1999] defines a use case as “a sequence of actions that an Actor performs within a system to achieve a particular goal”. (For clarity, we would reword this to read, “…that an Actor performs through a system…”; we believe this wording would more accurately represent the role of the Actor(s).) We have identified two “analysis-level (or business process)” Use Cases related to our University Web site [Rosenberg, 1999]:

1. Access various information regarding the Programs, People and Admissions of the University. Relevant Actors for this Use Case include potential students, current students, students’ parents/guardians, faculty, administrators, alumni and industry representatives.
2. Contact various representatives of the University, including faculty and administrators, primarily through e-mail. Relevant Actors for this Use Case include potential students, current students, students’ parents/guardians, faculty, administrators, alumni and industry representatives.
The specific detailed, or design-level, Use Case that we will use for purposes of illustration is the following. A potential student of the University is interested in accessing an overview of “Introduction To Computing”, an undergraduate course offered through the College of Information Technology. There are several available forms of Use Case documentation. We have used a template developed by Dr. Il-Yeol Song (College of Information Science & Technology, Drexel University USA) to present formal, structured, high-level descriptions of our Use Cases. Figure 1 includes these Use Case Descriptions.

**Figure 1: Use Case Descriptions**

<table> <thead> <tr> <th>Level</th> <th>Access Information Use Case</th> <th>Contact Representative Use Case</th> </tr> </thead> <tbody> <tr> <td>Primary (1)</td> <td>1a. Access Information</td> <td>1b. Contact Representative (not detailed)</td> </tr> <tr> <td>Secondary (2)</td> <td>2a.1. Access Restricted Information</td> <td>2b.1. Contact Faculty (not detailed)</td> </tr> <tr> <td></td> <td>2a.2. Access General Information</td> <td>2b.2. Contact Administration (not detailed)</td> </tr> <tr> <td></td> <td></td> <td>2b.3. Contact Admissions (not detailed)</td> </tr> <tr> <td>Tertiary (3)</td> <td>3a.1. Access Overview for Introduction To Computing Course (not detailed)</td> <td>3b.1a. Contact Dr. Smith (not detailed)</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Use Case Reference #</th> <th>Use Case Name</th> <th>Actor</th> <th>Purpose</th> </tr> </thead> <tbody> <tr> <td>1a</td> <td>Access Information</td> <td>Currently enrolled students.</td> <td>To access grades and assignments of a particular course.</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Overview and scope</th> <th>Students use the various pages and navigational links to access information on the grades and assignments of the courses they are currently enrolled in.</th> </tr> </thead> <tbody> <tr> <td>Level</td> <td>Primary</td> </tr> <tr> <td>Preconditions</td> <td>Connection to the World Wide Web through an HTTP connection.</td> </tr> <tr> <td></td> <td>Access to the University’s Web site.</td> </tr> <tr> <td>Post conditions in words</td> <td>Desired information is displayed on the student’s screen.</td> </tr> <tr> <td>Trigger</td> <td>A student wants to access their course grade(s) or assignment(s).</td> </tr> <tr> <td>Included Use Cases</td> <td>None.</td> </tr> <tr> <td>Extension Use Cases</td> <td>Access Restricted Information.</td> </tr> <tr> <td></td> <td>Access General Information.</td> </tr> <tr> <td>Frequency</td> <td>Unknown.</td> </tr> <tr> <td>Other Comments</td> <td>This is a high-level Use Case description, as the level of detail shows.</td> </tr> <tr> <td></td> <td>‘Primary’ Level refers to a top-level Use Case.
Access Information and Contact Representatives are each primary-level Use Cases.</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Use Case Reference #</th> <th>Use Case Name</th> <th>Actor</th> <th>Purpose</th> </tr> </thead> <tbody> <tr> <td>2a.2</td> <td>Access General Information</td> <td>Potential students, students, parents &amp; guardians, faculty &amp; administrators, alumni and industry reps.</td> <td>To access information regarding the University.</td> </tr> </tbody> </table>

<table> <thead> <tr> <th>Overview and scope</th> <th>Actors use the various pages and navigational links to access information on the University’s Programs (Undergraduate, Graduate and Continuing Education), People (Faculty, Administration and Alumni) and Admissions (Undergraduate, Graduate and Continuing Education).</th> </tr> </thead> <tbody> <tr> <td>Level</td> <td>Secondary</td> </tr> <tr> <td>Preconditions</td> <td>Connection to the World Wide Web through an HTTP connection.</td> </tr> <tr> <td></td> <td>Access to the University’s Web site.</td> </tr> <tr> <td>Post conditions in words</td> <td>Desired information is displayed on the Actor’s screen.</td> </tr> <tr> <td>Trigger</td> <td>An actor wants to access information about the University.</td> </tr> <tr> <td>Included Use Cases</td> <td>None.</td> </tr> <tr> <td>Extension Use Cases</td> <td>None.</td> </tr> <tr> <td>Frequency</td> <td>Unknown.</td> </tr> <tr> <td>Other Comments</td> <td>This is a secondary Use Case description, as the level of detail shows.</td> </tr> <tr> <td></td> <td>‘Secondary’ Level refers to a second-level Use Case, specifically Access Information: Access General Information.</td> </tr> </tbody> </table>

A set of Use Case Diagrams provides a high-level view of the operational and navigational aspects of the site [Baresi, et al, 2000]. Our University example is relatively straightforward in its structure, navigation and operation. Therefore, we can model all of our access paths in one diagram. Figure 2 depicts the Use Case Diagram for our Web site. At this high level, the Use Case Diagram does not differentiate between client and server-side functions. It includes the following:

- Student Actors can be specialized into:
  - *Students In Introduction To Computing*: This refers to those students that are currently enrolled in the course. These students will have password-enabled access to the Grades & Assignment pages of Introduction To Computing.
  - *Students Not In Introduction To Computing*: This refers to all other students. These students will only be able to access those pages that are not password protected.
- The Access Information Use Case is extended into:
  - *Access Restricted Information Use Case*: This refers to those pages that are restricted to people with the appropriate password. An example is the Grades & Assignment page of the Introduction To Computing page.
  - *Access General Information Use Case*: This refers to those pages that can be accessed by any of the identified Actors.

Baresi and his colleagues [2000] suggested that requirements of Web sites that can be represented by Use Case Diagrams fall into two classes: operational and navigational. Operational requirements include those functions that “modify the state of the applications”. Transaction processing would be an example of an operational requirement. In our University site, we consider using the site to e-mail representatives of the University an operational requirement.
Navigational requirements refer to the various interactions between the Actors and the site. Operational and navigational requirements can both be represented through the use of Use Case Diagram(s) by one of two methods:

a. Separate models for navigational and operational requirements,
b. A single, combined model that includes color-coding of the two classes within one diagram.

The choice between the two methods depends on “the degree of intertwining between operations and navigations”. Due to its relative simplicity, our University site can be modeled using one color-coded diagram.

We specialized Student Actors into “Students In Intro To Computing” and “Students Not In Intro To Computing”. However, the remaining Actors also do not have access to the restricted Introduction To Computing pages. Therefore, based on restricted access to the pages related to this course, we can create two distinct groups of users for this particular Use Case. Essentially, the two groups are those that have access to the restricted course pages associated with the Introduction To Computing course and those that do not. It should also be noted that we have chosen to use the <<extends>> notation to depict the Access Restricted Information and Access General Information secondary Use Cases of Access Information, the primary Use Case. We chose not to use <<includes>> because the two secondary Use Cases are not invoked by any other Use Cases.

Grouping various Actors together based on common functionality relates to a broader UML modeling concept called ‘Packaging’. This concept relates to the grouping of common objects with Packages. This tool allows the system developer to group various objects that conceptually ‘fit’ together. Such a grouping is intended to increase the clarity of various diagrams. In our University site and specified Use Case, the Actors can be grouped according to their access to the Introduction To Computing course’s restricted Grades and Assignments pages. As mentioned, these two pages are restricted to those students currently taking the course. Other Actor-oriented packages could conceivably be developed, depending on the specified criteria of the package (a common need to access differing information or a common need to communicate with differing representatives of the University, for example), as determined by the specific Use Case being analyzed. See Figure 3, which details our Actor-based Packages.

**Figure 2: 2nd Level Use Case Diagram**

**Figure 3: Packages of Actors based on access to password-protected course pages**

In fact, all of the other Actors, aside from those students currently enrolled in the Introduction To Computing course, do not have access to that course’s password-protected pages. Given this criterion, there is no need to detail Parents/Guardians, Faculty/Administrators and Alumni as individual Actors, because all of these groups share common access (or lack thereof) to the pages related to the Introduction To Computing course. Specifically, no Actor other than those students currently enrolled in the class can access the pages related to grades and assignments. In the absence of Packages, any differences related to operational or navigational constraints, if any, among the different Actors (users) would be noted with a comment on the navigational link to that particular Actor(s) [Baresi, et al, 2000].

2.3 Analysis-Level (high-level) Class Diagrams

UML maps various components and entities of the project at hand to objects.
Class Diagrams depict the “structures, navigations and operations” of the identified objects that users of the system utilize in order to “accomplish their tasks”. Baresi and his colleagues [2000] suggested that they should be modeled from the perspective of the various users (Actors), as opposed to an implementation (physical) view. Baresi, et al, suggest that this approach might result in class diagrams that are not ‘typical’ of the classes derived from “traditional object oriented design”. Further, they suggest that since these classes are modeled from the Actor’s perspective, several class diagrams may be necessary in order to fully capture the context and essence of the viewpoint of the various users of the site. For purposes of illustration, we have developed a Class Diagram only for our design-level Use Case.

Baresi, et al, [2000] also suggested a need for the ‘navigational nodes’ of a Web site to be modeled as a class. These nodes could be identified as the start and end points for users to navigate through the system. Basically, the nodes are individual Web pages, or “well identifiable logical blocks in a page” (or intra-page links) [Baresi, et al, 2000]. Figure 4 is our version of an analysis-level (high-level) Class Diagram for our University Web site. It includes both client and server collaborations.

**Figure 4: Analysis-Level Class Diagram**

3. SYSTEM DESIGN

3.1 Sequence Diagrams

A Sequence Diagram, or a set of Sequence Diagrams, charts the steps, in order, that are necessary to complete a specific Use Case, including “all alternative courses” of action within the Use Case [Baresi, et al, 2000]. The particular Diagram being constructed determines the relative level of detail of the steps. Sequence diagrams include boundary, control, and entity objects, as well as the narrative steps from a particular Use Case description. Our sample University site’s objects can be mapped to the following:

- **Boundary Object**: various pages of the site (example: home page)
- **Control Object**: various hyperlinks (example: Programs, People, Admissions)
- **Entity Object**: various text of hyperlinks (example: Programs, People, and Admissions)

Although not part of our stated design-level Use Case (namely, accessing the Overview of the Introduction To Computing course), we wanted to include an outline of the steps necessary to gain access to one of the password-protected pages (namely, Grades). Figure 5 is a partial depiction of the Sequence Diagram that includes password-enabled access to the Grades page.

**Figure 5: Partial Sequence Diagram depicting password-enabled access**

**Notes:** Steps 1 through 6: Access Web site home page, access Programs page, access Undergraduate page, access Class List page, access Information Technology Classes page and access Introduction To Computing page. The various client-side browser windows are ‘boundary’ objects in this Use Case. The Web server object is a ‘control’ object. The format of the messaging resembles a ‘stair’ format, whereby there is a delegation of authority among the object ‘lifelines’.

UML includes a conceptual view, called the Implementation View, that consists of several diagrams. We include the following diagrams as part of our University Web site analysis: static-based views (Component Diagrams) and dynamic-based views (State Diagram and Interface Design Diagram).
3.4 Design-Level (low-level) Class Diagrams

We have added operations and attributes to our University Web site Class Diagram in Figure 6, as we further refine our model for the site in this System Design phase of modeling.

**Figure 6: Design-Level Class Diagram**

The operations included in our design-level Class Diagram were originally identified from the Sequence Diagram. We conclude our discussion of Class Diagrams with a list of the various UML elements as they map to the artifacts of a Web site.

<table> <thead> <tr> <th>Web Artifact</th> <th>UML Element</th> </tr> </thead> <tbody> <tr> <td>User</td> <td>Actor</td> </tr> <tr> <td>HTML hyperlink*</td> <td>Association element / control object</td> </tr> <tr> <td>Hyperlink text</td> <td>Entity object</td> </tr> <tr> <td>Site Map</td> <td>Component Diagram</td> </tr> <tr> <td>Storyboard</td> <td>Component Diagram</td> </tr> <tr> <td>Server page</td> <td>&lt;&lt;server page&gt;&gt;</td> </tr> <tr> <td>Client page</td> <td>&lt;&lt;client page&gt;&gt; / boundary object</td> </tr> <tr> <td>Java Script</td> <td>&lt;&lt;java script&gt;&gt;</td> </tr> <tr> <td>HTML form</td> <td>&lt;&lt;form&gt;&gt;</td> </tr> <tr> <td>HTML target of a frame</td> <td>&lt;&lt;target&gt;&gt;</td> </tr> <tr> <td>HTML frameset</td> <td>&lt;&lt;frameset&gt;&gt;</td> </tr> <tr> <td>Various groups of related elements</td> <td>Packages of these related elements</td> </tr> </tbody> </table>

* The <<link>> association has a list of parameter names that are sent along with the link request. The server then processes this link request along with any parameters [Conallen, 1999]. Hyperlinks request a specific page, either within the site or a site stored on another computer that is accessible through the Web (i.e. through HTTP).

4. PHYSICAL DESIGN

4.1 Component Diagrams

A UML-based component diagram is essentially a site map, which provides an overview of client-side navigation through high-level abstraction of the various pages of the Web site. The components of a Component Diagram, as they relate to a Web site, include each of the pages and the hyperlinks (navigational links) among, and between, the pages [Conallen, 1999]. (Due to space limitations, we have not included a component diagram.) Components, however, only represent the “physical packaging” [Conallen, 1999]. As such, they provide no value when modeling any of the workings of the component, which are conceptually internal to the page [Conallen, 1999]. When considering a Web site, these internal workings could include inter-page links, scripts, Java applets and Active Server Pages (ASP). As we have attempted to demonstrate through this paper, other charts can be used to fill in these details.

4.2 Deployment Diagrams

Deployment Diagrams provide a modeling mechanism for illustrating the physical components of a system. Figure 7 presents our conceptual representation of the physical components of a typical Web-based application. It should be noted that the Application Server component is not necessary for our University Web site. It is included in the figure only to illustrate a Web system that includes database functionality, which might be located on an Application Server. While a Deployment Diagram provides concise modeling for the physical structure of a Web site, it lends little, if any, insight into the development of our University site. As developers, we are primarily concerned with the workings of the site that directly interact with its end-users. We have presented several UML-based components that address these aspects of content presentation and navigational links.

5. APPLICATIONS DESIGN

5.1 Interface Diagram

An Interface Diagram illustrates the navigation paths, similar to a Component Diagram. The directions of the various arrows indicate the navigational flow of control among, and between, the various pages. For clarity, we have deviated from software engineering notation in our Interface Diagram.
Whereas standard UML notation includes the ability to depict bi-directional navigational flow (to and from a given hyperlink) through use of a single vertical line, each of our vertical lines is intended to depict a particular one-directional navigational flow. Figure 8 shows a partial Interface Diagram.

**Figure 8: Partial Interface Diagram**

6. **Recommended Best Practices**

We have compiled a list of some of the ‘best practices’ for UML-based modeling of Web sites.

**General comments:**

- Don’t necessarily think that you need to develop every available UML diagram for every design effort you undertake.
- We have found that, as when modeling any software development project, the level of detail of the various UML diagrams is determined by the relative sophistication of the particular Web site, as well as the particular focus of the developers of the site. You should attempt to model only at the level of abstraction that will be of the most value to you. This involves considerable judgment, and as such, requires experience.
- Don’t waste too much time and effort on modeling the server side of a Web site, unless your particular site is intended to support transaction processing or other back-end processing functions for which the site will act as a front-end.
- When short on time, consider developing Use Case Descriptions, Class Diagrams and Sequence Diagrams, as Grady Booch suggested that 80% of the design effort can be accomplished through the development of these three tools.
- When severely short on time, develop at least an analysis-level (high-level) Class Diagram.
- Provide a numbering scheme for the Use Case Descriptions, Sequence Diagrams and Activity Diagrams (1, 1.x, 2, 2.x, etc.) and be consistent with this numbering scheme across each of the three diagrams. This will assist you and your end-users/clients in following the flow of processes and objects, which in turn assists in determining any oversights in your design.
- Consider developing the appropriate diagrams in the following order of priority:
  1. Problem Statement
  2. Use Case Diagrams
  3. Analysis-Level (high-level) Class Diagrams
  4. Sequence Diagrams
  5. State Diagrams
  6. Activity Diagrams
  7. Design-Level (low-level) Class Diagrams
  8. Component Diagrams
  9. Deployment Diagrams
  10. Interaction Diagrams

**System Analysis:**

- Consider using Packages to simplify your concepts, particularly when dealing with several groups of Actors. Grouping them by function or some other common denominator will simplify your diagrams and your process modeling. (Figure 3)

**System Design:**

- Include the text from your Use Case Description down the left side of your Sequence Diagram. This will serve as an organizational tool as you develop your Diagram. Because Rational Rose does not ‘link’ the steps of your Use Case Description to individual objects on your diagram, you will find yourself doing additional moving (vertically) of objects within your diagram if you add any additional objects/procedures. Therefore, in an effort to minimize the number and complexity of your edits, we suggest that if you are using Rose as your design tool, you draw a sketch, or two, of your Sequence Diagram on an oversized piece of paper prior to developing the Diagram in Rose. (Figure 5)
- Develop additional Sequence Diagrams for each alternative course of action that may need to be modeled as part of the Use Case. This will help maintain the clarity of your original Diagram.
- We believe you will gain the most benefit from the development of a State Diagram if the Web site you are modeling is intended to serve as the front-end for some type of transaction processing system. Otherwise, as was the case with our sample University Web site, a State Diagram will not add much value to the design process.
- If you plan to develop an Activity Diagram, consider developing one that is based on physical swimlanes (client browser and Web server, as well as any other application server), particularly for a site that supports transaction processing or other back-end processing functions for which the site will act as a front-end.

**Physical Design:**

- Use a Component Diagram to develop a model of the navigational links of your site. This essentially represents a storyboard of the site.

**Applications Design:**

- Consider color-coding the various rows of an Interface Diagram in order to provide additional visual structure for the navigation paths (Figure 8).
- Consider using our suggested notation when constructing an Interface Diagram (Figure 8), depending on the level of complexity of the site you are modeling. As noted, our notation results in a wider Diagram, but we feel it is more readable than the standard notation.
- Consider supplementing the Interface Diagram with a text-based outline of the navigational links.
- Don’t waste too much time and effort on a Deployment Diagram, as it adds little value to the design effort of a Web site, particularly a site that doesn’t support transaction processing or other back-end processing functions for which the site will act as a front-end. (Figure 7)

7. **Conclusions**

There is clearly a need for tools that are robust enough to assist developers in capturing the various views, constructs and capabilities of Web sites of varying complexity. Baresi and his colleagues [2000] referred to some of these complexities when they pointed out the various interrelationships between hypermedia design (“information structures and navigational paths”) and functional design (“operations and applications behavior”). Through our personal experiences and research for this paper, as well as our development of the prototype University Web site, we found that the UML-based tools we have discussed combine to form a more robust and capable tool set than traditional methods of modeling sites (text-based descriptions, cognitive walkthroughs and storyboards). With some experience in utilizing these tools, designers can be reasonably assured of developing a complete model of even complicated sites.

Lastly, we present the following thoughts regarding possible future research. We would be interested in investigating whether there is any evidence to indicate that using UML to model and develop a Web site increases the likelihood of creating a site with a more useful, efficient and informative design than a site developed using traditional modeling techniques (text-based documentation, cognitive walkthroughs and storyboards). We believe it would also be both interesting and useful to investigate the relative usefulness and clarity of the various modifications we have introduced to some of our UML-based models. This could be accomplished through surveys and/or interviews with Web developers who would have had an opportunity to work with both standard UML diagrams, as well as our modified diagrams.
Appendix A: Scope of University site (prototyped Web pages)

**University Home Page**
- Programs
- People
- Admissions

**PROGRAMS Page**
- Undergraduate
- Graduate
- Continuing Education

**PROGRAMS: Undergraduate Page**
- Overview
- Majors/Minors
- Class List

**PROGRAMS: Undergraduate: Class List Page**
- Business Classes
- Education Classes
- Information Technology Classes
- Science & Math Classes

**PROGRAMS: Undergraduate: Info Tech Class List Page**
- Introduction To Computing
- Introduction To HTML
- Introduction To Java
- Introduction To Networking
- Introduction To Programming

**PROGRAMS: Undergraduate: Info Tech/Intro To Computing: General Information Page**
- Overview
- Schedule

**PROGRAMS: Undergraduate: Info Tech/Intro To Computing: General Information: Overview Page**

---
{"Source-Url": "http://www.journal.au.edu:80/ijcim/2002/may02/article2.pdf", "len_cl100k_base": 7956, "olmocr-version": "0.1.48", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 44332, "total-output-tokens": 8882, "length": "2e12", "weborganizer": {"__label__adult": 0.0003995895385742187, "__label__art_design": 0.0014505386352539062, "__label__crime_law": 0.00034737586975097656, "__label__education_jobs": 0.0113983154296875, "__label__entertainment": 0.0001017451286315918, "__label__fashion_beauty": 0.0002157688140869141, "__label__finance_business": 0.0004744529724121094, "__label__food_dining": 0.0003204345703125, "__label__games": 0.0005145072937011719, "__label__hardware": 0.00078582763671875, "__label__health": 0.0004487037658691406, "__label__history": 0.0004229545593261719, "__label__home_hobbies": 0.00012153387069702148, "__label__industrial": 0.0004296302795410156, "__label__literature": 0.0006737709045410156, "__label__politics": 0.00022745132446289065, "__label__religion": 0.0005178451538085938, "__label__science_tech": 0.0182037353515625, "__label__social_life": 0.00012683868408203125, "__label__software": 0.01306915283203125, "__label__software_dev": 0.94873046875, "__label__sports_fitness": 0.00021076202392578125, "__label__transportation": 0.0005335807800292969, "__label__travel": 0.0002351999282836914}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41580, 0.01738]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41580, 0.49326]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41580, 0.91004]], "google_gemma-3-12b-it_contains_pii": [[0, 1747, false], [1747, 5057, null], [5057, 7978, null], [7978, 11205, null], [11205, 14313, null], [14313, 20274, null], [20274, 23484, null], [23484, 24617, null], [24617, 26955, null], [26955, 28447, null], [28447, 29491, null], [29491, 32403, null], [32403, 33122, null], [33122, 35855, null], [35855, 38785, null], [38785, 39708, null], [39708, 40607, null], [40607, 40607, null], [40607, 40607, null], [40607, 41580, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1747, true], [1747, 5057, null], [5057, 7978, null], [7978, 11205, null], [11205, 14313, null], [14313, 20274, null], [20274, 23484, null], [23484, 24617, null], [24617, 26955, null], [26955, 28447, null], [28447, 29491, null], [29491, 32403, null], [32403, 33122, null], [33122, 35855, null], [35855, 38785, null], [38785, 39708, null], [39708, 40607, null], [40607, 40607, null], [40607, 40607, null], [40607, 41580, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41580, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41580, null]], "pdf_page_numbers": 
[[0, 1747, 1], [1747, 5057, 2], [5057, 7978, 3], [7978, 11205, 4], [11205, 14313, 5], [14313, 20274, 6], [20274, 23484, 7], [23484, 24617, 8], [24617, 26955, 9], [26955, 28447, 10], [28447, 29491, 11], [29491, 32403, 12], [32403, 33122, 13], [33122, 35855, 14], [35855, 38785, 15], [38785, 39708, 16], [39708, 40607, 17], [40607, 40607, 18], [40607, 40607, 19], [40607, 41580, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41580, 0.15079]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
c8e4555ee2724a8cd3a5c446f098a2e84b83f3b9
Chapter 4 Macro Processors -- Basic Macro Processor Functions

Introduction

- A macro instruction (macro) is a notational convenience for the programmer.
- It allows the programmer to write a shorthand version of a program (module programming).
- The macro processor replaces each macro instruction with the corresponding group of source language statements (expanding).
  - Normally, it performs no analysis of the text it handles.
  - It does not concern itself with the meaning of the involved statements during macro expansion.
- The design of a macro processor generally is machine independent!

Basic macro processor functions

- Two new assembler directives are used in macro definition:
  - MACRO: identifies the beginning of a macro definition
  - MEND: identifies the end of a macro definition
- Prototype for the macro:
  - Each parameter begins with ‘&’

```
name    MACRO   parameters
        :
        body
        :
        MEND
```

- Body: the statements that will be generated as the expansion of the macro.

## Macro Expansion

<table> <thead> <tr> <th>Source</th> <th>Expanded source</th> </tr> </thead> <tbody> <tr> <td>M1 MACRO &amp;D1, &amp;D2</td> <td></td> </tr> <tr> <td>STA &amp;D1</td> <td>STA DATA1</td> </tr> <tr> <td>STB &amp;D2</td> <td>STB DATA2</td> </tr> <tr> <td>MEND</td> <td></td> </tr> <tr> <td>M1 DATA1, DATA2</td> <td></td> </tr> <tr> <td>M1 DATA4, DATA3</td> <td></td> </tr> <tr> <td></td> <td>STA DATA4</td> </tr> <tr> <td></td> <td>STB DATA3</td> </tr> </tbody> </table>

Example of macro definition (Figure 4.1, pp. 178)

```
5    COPY    START   0                  COPY FILE FROM INPUT TO OUTPUT
10   RDBUFF  MACRO   &INDEV,&BUFADR,&RECLTH
15   .
20   .       MACRO TO READ RECORD INTO BUFFER
25   .
30           CLEAR   X                  CLEAR LOOP COUNTER
35           CLEAR   A
40           CLEAR   S
45          +LDT     #4096              SET MAXIMUM RECORD LENGTH
50           TD      =X'&INDEV'         TEST INPUT DEVICE
55           JEQ     *-3                LOOP UNTIL READY
60           RD      =X'&INDEV'         READ CHARACTER INTO REG A
65           COMPR   A,S                TEST FOR END OF RECORD
70           JEQ     *+11               EXIT LOOP IF EOR
75           STCH    &BUFADR,X          STORE CHARACTER IN BUFFER
80           TIXR    T                  LOOP UNLESS MAXIMUM LENGTH
85           JLT     *-19               HAS BEEN REACHED
90           STX     &RECLTH            SAVE RECORD LENGTH
95           MEND
```

Macro invocation

- A macro invocation statement (a macro call) gives the name of the macro instruction being invoked and the arguments to be used in expanding the macro.
  - `macro_name p1, p2, ...`
- Difference between a macro call and a procedure call:
  - Macro call: statements of the macro body are expanded each time the macro is invoked.
  - Procedure call: statements of the subroutine appear only once, regardless of how many times the subroutine is called.
- Question:
  - How does a programmer decide to use macro calls or procedure calls?
  - From the viewpoint of a programmer
  - From the viewpoint of the CPU

Exchange the values of two variables

```c
#include <stdio.h>

void exchange(int a, int b)
{
    int temp;
    temp = a;
    a = b;
    b = temp;
}

main()
{
    int i=1, j=3;
    printf("BEFORE - %d %d\n", i, j);
    exchange(i, j);
    printf("AFTER - %d %d\n", i, j);
}
```

What’s the result?

Pass by Reference

```c
#include <stdio.h>

void exchange(int *p1, int *p2)
{
    int temp;
    temp = *p1;
    *p1 = *p2;
    *p2 = temp;
}

main()
{
    int i=1, j=3;
    printf("BEFORE - %d %d\n", i, j);
    exchange(&i, &j);
    printf("AFTER - %d %d\n", i, j);
}
```

# 12 Lines of Assembly Code
Subroutine EXCH

<table> <thead> <tr> <th>EXCH</th> <th>LDA</th> <th>@P1</th> </tr> </thead> <tbody> <tr> <td></td> <td>STA</td> <td>TEMP</td> </tr> <tr> <td></td> <td>LDA</td> <td>@P2</td> </tr> <tr> <td></td> <td>STA</td> <td>@P1</td> </tr> <tr> <td></td> <td>LDA</td> <td>TEMP</td> </tr> <tr> <td></td> <td>STA</td> <td>@P2</td> </tr> <tr> <td></td> <td>RSUB</td> <td></td> </tr> </tbody> </table>

- P1: RESW 1
- P2: RESW 1
- TEMP: RESW 1

## MAIN

<table> <thead> <tr> <th>MAIN</th> <th>LDA</th> <th>#1</th> </tr> </thead> <tbody> <tr> <td></td> <td>STA</td> <td>I</td> </tr> <tr> <td></td> <td>LDA</td> <td>#3</td> </tr> <tr> <td></td> <td>STA</td> <td>J</td> </tr> </tbody> </table>

. Call a subroutine

<table> <thead> <tr> <th></th> <th>LDA</th> <th>#I</th> </tr> </thead> <tbody> <tr> <td></td> <td>STA</td> <td>P1</td> </tr> <tr> <td></td> <td>LDA</td> <td>#J</td> </tr> <tr> <td></td> <td>STA</td> <td>P2</td> </tr> <tr> <td></td> <td>JSUB</td> <td>EXCH</td> </tr> </tbody> </table>

- I: RESW 1
- J: RESW 1
- END

Swap two variables by macro

```c
#include <stdio.h>

#define swap(i,j) { int temp; temp=i; i=j; j=temp; }

main()
{
    int i=1, j=3;
    printf("BEFORE - %d %d\n", i, j);
    swap(i,j);
    printf("AFTER - %d %d\n", i, j);
}
```

6 Lines of Assembly Code

```
MAIN    LDA     #1
        STA     I
        LDA     #3
        STA     J
.       Invoke a macro
        LDA     I
        STA     TEMP
        LDA     J
        STA     I
        LDA     TEMP
        STA     J
I       RESW    1
J       RESW    1
TEMP    RESW    1
        END     MAIN
```

Macro expansion

- Each macro invocation statement will be expanded into the statements that form the body of the macro.
- Arguments from the macro invocation are substituted for the parameters in the macro prototype (according to their positions).
  - In the definition of the macro: parameter
  - In the macro invocation: argument
- Comment lines within the macro body will be deleted.
- The macro invocation statement itself is included as a comment line.
- The label on the macro invocation statement is retained as a label on the first statement generated in the macro expansion.
- We can use a macro instruction in exactly the same way as an assembler language mnemonic.

Example of macro invocation (Figure 4.1, pp. 178)

```
170  .       MAIN PROGRAM
175  .
180  FIRST   STL     RETADR             SAVE RETURN ADDRESS
190  CLOOP   RDBUFF  F1,BUFFER,LENGTH   READ RECORD INTO BUFFER
195          LDA     LENGTH             TEST FOR END OF FILE
200          COMP    #0
205          JEQ     ENDFIL             EXIT IF EOF FOUND
210          WRBUFF  05,BUFFER,LENGTH   WRITE OUTPUT RECORD
215          J       CLOOP              LOOP
220  ENDFIL  WRBUFF  05,EOF,THREE       INSERT EOF MARKER
225          J       @RETADR
230  EOF     BYTE    C'EOF'
235  THREE   WORD    3
240  RETADR  RESW    1
245  LENGTH  RESW    1                  LENGTH OF RECORD
250  BUFFER  RESB    4096               4096-BYTE BUFFER AREA
255          END     FIRST
```

Example of macro expansion (Figure 4.2, pp. 179)
<table> <thead> <tr> <th>Line</th> <th>Instruction</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>5</td> <td>COPY START 0</td> <td>COPY FILE FROM INPUT TO OUTPUT</td> </tr> <tr> <td>180</td> <td>FIRST STL RETADR</td> <td>SAVE RETURN ADDRESS</td> </tr> <tr> <td>190</td> <td>.CLOOP RDBUFF F1,BUFFER,LENGTH</td> <td>READ RECORD INTO BUFFER</td> </tr> <tr> <td>190a</td> <td>CLOOP CLEAR X</td> <td>CLEAR LOOP COUNTER</td> </tr> <tr> <td>190b</td> <td>CLEAR A</td> <td></td> </tr> <tr> <td>190c</td> <td>CLEAR S</td> <td></td> </tr> <tr> <td>190d</td> <td>+LDT #4096</td> <td>SET MAXIMUM RECORD LENGTH</td> </tr> <tr> <td>190e</td> <td>TD =X’F1’</td> <td>TEST INPUT DEVICE</td> </tr> <tr> <td>190f</td> <td>JEQ *-3</td> <td>LOOP UNTIL READY</td> </tr> <tr> <td>190g</td> <td>RD =X’F1’</td> <td>READ CHARACTER INTO REG A</td> </tr> <tr> <td>190h</td> <td>COMPR A, S</td> <td>TEST FOR END OF RECORD</td> </tr> <tr> <td>190i</td> <td>JEQ *+11</td> <td>EXIT LOOP IF EOR</td> </tr> <tr> <td>190j</td> <td>STCH BUFFER, X</td> <td>STORE CHARACTER IN BUFFER</td> </tr> <tr> <td>190k</td> <td>TIXR T</td> <td>LOOP UNLESS MAXIMUM LENGTH</td> </tr> <tr> <td>190l</td> <td>JLT *-19</td> <td>HAS BEEN REACHED</td> </tr> <tr> <td>190m</td> <td>STX LENGTH</td> <td>SAVE RECORD LENGTH</td> </tr> </tbody> </table>

### Example of macro expansion (Figure 4.2, pp. 179, continued)

<table> <thead> <tr> <th>Line</th> <th>Instruction</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>195</td> <td>LDA LENGTH</td> <td>TEST FOR END OF FILE</td> </tr> <tr> <td>200</td> <td>COMP #0</td> <td></td> </tr> <tr> <td>205</td> <td>JEQ ENDFIL</td> <td>EXIT IF EOF FOUND</td> </tr> <tr> <td>210</td> <td>. WRBUFF 05,BUFFER,LENGTH</td> <td>WRITE OUTPUT RECORD</td> </tr> <tr> <td>210a</td> <td>CLEAR X</td> <td>CLEAR LOOP COUNTER</td> </tr> <tr> <td>210b</td> <td>LDT LENGTH</td> <td></td> </tr> <tr> <td>210c</td> <td>LDCH BUFFER,X</td> <td>GET CHARACTER FROM BUFFER</td> </tr> <tr> <td>210d</td> <td>TD =X’05’</td> <td>TEST OUTPUT DEVICE</td> </tr> <tr> <td>210e</td> <td>JEQ *-3</td> <td>LOOP UNTIL READY</td> </tr> <tr> <td>210f</td> <td>WD =X’05’</td> <td>WRITE CHARACTER</td> </tr> <tr> <td>210g</td> <td>TIXR T</td> <td>LOOP UNTIL ALL CHARACTERS</td> </tr> <tr> <td>210h</td> <td>JLT *-14</td> <td>HAVE BEEN WRITTEN</td> </tr> <tr> <td>215</td> <td>J CLOOP</td> <td>LOOP</td> </tr> <tr> <td>220</td> <td>.ENDFIL WRBUFF 05,EOF,THREE</td> <td>INSERT EOF MARKER</td> </tr> </tbody> </table>

Example of macro expansion (Figure 4.2, pp. 179, continued)
<table> <thead> <tr> <th>Line</th> <th>Label</th> <th>Instruction</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>220a</td> <td>ENDFIL</td> <td>CLEAR X</td> <td>CLEAR LOOP COUNTER</td> </tr> <tr> <td>220b</td> <td></td> <td>LDT THREE</td> <td></td> </tr> <tr> <td>220c</td> <td></td> <td>LDCH EOF,X</td> <td>GET CHARACTER FROM BUFFER</td> </tr> <tr> <td>220d</td> <td></td> <td>TD =X‘05’</td> <td>TEST OUTPUT DEVICE</td> </tr> <tr> <td>220e</td> <td></td> <td>JEQ *-3</td> <td>LOOP UNTIL READY</td> </tr> <tr> <td>220f</td> <td></td> <td>WD =X‘05’</td> <td>WRITE CHARACTER</td> </tr> <tr> <td>220g</td> <td></td> <td>TIXR T</td> <td>LOOP UNTIL ALL CHARACTERS</td> </tr> <tr> <td>220h</td> <td></td> <td>JLT *-14</td> <td>HAVE BEEN WRITTEN</td> </tr> <tr> <td>225</td> <td></td> <td>J @RETADR</td> <td></td> </tr> <tr> <td>230</td> <td>EOF</td> <td>BYTE C‘EOF’</td> <td></td> </tr> <tr> <td>235</td> <td>THREE</td> <td>WORD 3</td> <td></td> </tr> <tr> <td>240</td> <td>RETADR</td> <td>RESW 1</td> <td></td> </tr> <tr> <td>245</td> <td>LENGTH</td> <td>RESW 1</td> <td></td> </tr> <tr> <td>250</td> <td>BUFFER</td> <td>RESB 4096</td> <td></td> </tr> <tr> <td>255</td> <td></td> <td>END FIRST</td> <td></td> </tr> </tbody> </table>

No label in the macro body

- Problem of labels in the body of a macro:
  - If the same macro is expanded multiple times at different places in the program…
  - There will be *duplicate labels*, which will be treated as errors by the assembler.
- Solutions:
  - Do not use labels in the body of the macro.
  - Explicitly use PC-relative addressing instead.
    - E.g., in the RDBUFF and WRBUFF macros:
      - JEQ *+11
      - JLT *-14
  - This is inconvenient and error-prone.
  - A way of avoiding such error-prone methods will be discussed in Section 4.2.2.

Two-pass macro processor

- You may design a two-pass macro processor:
  - Pass 1: Process all macro definitions
  - Pass 2: Expand all macro invocation statements
- However, one pass may be enough:
  - Because all macros would have to be defined during the first pass before any macro invocations were expanded.
  - The definition of a macro must appear before any statements that invoke that macro.
- Moreover, the body of one macro can contain definitions of other macros.

Example of recursive macro definition (Figure 4.3, pp. 182)

MACROS (for SIC): contains the definitions of RDBUFF and WRBUFF written in SIC instructions.

```
1   MACROS   MACRO                              {Defines SIC standard version macros}
2   RDBUFF   MACRO   &INDEV,&BUFADR,&RECLTH
3            MEND                               {End of RDBUFF}
4   WRBUFF   MACRO   &OUTDEV,&BUFADR,&RECLTH
5            MEND                               {End of WRBUFF}
6            MEND                               {End of MACROS}
```

Example of recursive macro definition (Figure 4.3, pp. 182)

- **MACROX (for SIC/XE)**: contains the definitions of RDBUFF and WRBUFF written in SIC/XE instructions.

<table> <thead> <tr> <th>Line</th> <th>Macro</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>MACROX</td> <td>MACRO {Defines SIC/XE macros}</td> </tr> <tr> <td>2</td> <td>RDBUFF</td> <td>MACRO &amp;INDEV,&amp;BUFADR,&amp;RECLTH {SIC/XE version}</td> </tr> <tr> <td>3</td> <td>MEND</td> <td>{End of RDBUFF}</td> </tr> <tr> <td>4</td> <td>WRBUFF</td> <td>MACRO &amp;OUTDEV,&amp;BUFADR,&amp;RECLTH {SIC/XE version}</td> </tr> <tr> <td>5</td> <td>MEND</td> <td>{End of WRBUFF}</td> </tr> <tr> <td>6</td> <td>MEND</td> <td>{End of MACROX}</td> </tr> </tbody> </table>

Example of macro definitions

- A program that is to be run on a SIC system could invoke MACROS, whereas a program to be run on SIC/XE can invoke MACROX.
- However, defining MACROS or MACROX does not define RDBUFF and WRBUFF. These definitions are processed only when an invocation of MACROS or MACROX is expanded.
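For readers who know the C preprocessor, a loose build-time analogy may help to see what invoking MACROS or MACROX achieves, although it is only an analogy: the C preprocessor cannot define new macros from within a macro expansion the way MACROS and MACROX do. The `-DSICXE` flag and the `TARGET`/`MAXLEN` names below are invented for this sketch.

```c
/* A loose C-preprocessor analogy (not the SIC mechanism): one build-time
 * choice selects a whole set of definitions, much as invoking MACROS or
 * MACROX selects the SIC or SIC/XE macro library. */
#include <stdio.h>

#ifdef SICXE                    /* roughly: "invoke MACROX" at build time */
#define TARGET "SIC/XE"
#define MAXLEN 1048576          /* 20-bit address space */
#else                           /* roughly: "invoke MACROS" */
#define TARGET "SIC"
#define MAXLEN 32768            /* 15-bit address space */
#endif

int main(void) {
    /* RDBUFF/WRBUFF would expand to different instruction sequences per
     * target; here one flag merely selects which definition set is used. */
    printf("Definitions selected for %s (max address %d)\n", TARGET, MAXLEN);
    return 0;
}
```

Compiling with `cc prog.c` selects the SIC-style definitions; `cc -DSICXE prog.c` selects the SIC/XE-style ones, mirroring the choice between invoking MACROS and MACROX.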
One-pass macro processor

- A one-pass macro processor that alternates between *macro definition* and *macro expansion* in a recursive way is able to handle recursive macro definitions.
- Restriction:
  - The definition of a macro must appear in the source program before any statements that invoke that macro.
  - This restriction does not create any real inconvenience.

Data structures for one-pass macro processor

- **DEFTAB** (definition table)
  - Stores the macro definition including *macro prototype* and *macro body*
  - Comment lines are omitted.
  - References to the macro instruction parameters are converted to a positional notation for efficiency in substituting arguments.
- **NAMTAB**
  - Stores macro names
  - Serves as an index to DEFTAB
  - Pointers to the *beginning* and the *end* of the macro definition (DEFTAB)
- **ARGTAB**
  - Stores the arguments of a macro invocation according to their positions in the argument list
  - As the macro is expanded, arguments from ARGTAB are substituted for the corresponding parameters in the macro body.

Algorithm

**Procedure GETLINE**: if EXPANDING, get the next line to be processed from DEFTAB; else, read the next line from the input file.

**MAIN program**: iterations of GETLINE and PROCESSLINE.

**Procedure EXPAND**: set up the argument values in ARGTAB and expand the macro invocation statement (like in the MAIN program) through iterations of GETLINE and PROCESSLINE.

**Procedure PROCESSLINE**: DEFINE, EXPAND, or output the source line.

**Procedure DEFINE**: make the appropriate entries in DEFTAB and NAMTAB.

Algorithm (Figure 4.5, pp. 184)

```
begin {macro processor}
    EXPANDING := FALSE
    while OPCODE ≠ 'END' do
        begin
            GETLINE
            PROCESSLINE
        end {while}
end {macro processor}

procedure PROCESSLINE
begin
    search NAMTAB for OPCODE
    if found then
        EXPAND
    else if OPCODE = 'MACRO' then
        DEFINE
    else
        write source line to expanded file
end {PROCESSLINE}
```

Algorithm (Figure 4.5, pp. 185)

```
procedure DEFINE
begin
    enter macro name into NAMTAB
    enter macro prototype into DEFTAB
    LEVEL := 1
    while LEVEL > 0 do
        begin
            GETLINE
            if this is not a comment line then
                begin
                    substitute positional notation for parameters
                    enter line into DEFTAB
                    if OPCODE = 'MACRO' then
                        LEVEL := LEVEL + 1
                    else if OPCODE = 'MEND' then
                        LEVEL := LEVEL - 1
                end {if not comment}
        end {while}
    store in NAMTAB pointers to beginning and end of definition
end {DEFINE}

procedure EXPAND
begin
    EXPANDING := TRUE
    get first line of macro definition {prototype} from DEFTAB
    set up arguments from macro invocation in ARGTAB
    write macro invocation to expanded file as a comment
    while not end of macro definition do
        begin
            GETLINE
            PROCESSLINE
        end {while}
    EXPANDING := FALSE
end {EXPAND}

procedure GETLINE
begin
    if EXPANDING then
        begin
            get next line of macro definition from DEFTAB
            substitute arguments from ARGTAB for positional notation
        end {if}
    else
        read next line from input file
end {GETLINE}
```

Handling nested macro definition

- In the DEFINE procedure:
  - When a macro definition is being entered into DEFTAB, the normal approach is to continue until an MEND directive is reached.
  - This would not work for nested macro definitions, because the first MEND encountered in the inner macro would terminate the whole macro definition process.
  - To solve this problem, a counter LEVEL is used to keep track of the level of macro definitions:
    - Increase LEVEL by 1 each time a MACRO directive is read.
    - Decrease LEVEL by 1 each time a MEND directive is read.
- A MEND terminates the whole macro definition process only when LEVEL reaches 0.
- This process is very much like matching left and right parentheses when scanning an arithmetic expression.

Comparison of macro processor designs

- One-pass algorithm:
  - Every macro must be defined before it is called.
  - A one-pass processor can alternate between macro definition and macro expansion.
  - Nested macro definitions are allowed, but nested calls are not.
- Two-pass algorithm:
  - Pass 1: Recognize macro definitions
  - Pass 2: Recognize macro calls
  - Nested macro definitions are not allowed.
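To tie the pieces together, here is a minimal C sketch of the heart of the DEFINE procedure from Figure 4.5: lines are copied into DEFTAB while a LEVEL counter tracks nested MACRO/MEND pairs. The fixed-size tables, the crude token scan for the opcode field, and the omission of NAMTAB, ARGTAB and EXPAND are simplifying assumptions of this sketch, not part of the textbook algorithm.

```c
#include <stdio.h>
#include <string.h>

#define MAXLINES 1000
#define LINELEN  128

static char DEFTAB[MAXLINES][LINELEN];  /* stored definition lines */
static int  deflen = 0;

/* Crude scan: returns 1 if any whitespace-separated token equals op. */
static int has_opcode(const char *line, const char *op) {
    char buf[LINELEN];
    char *tok;
    snprintf(buf, sizeof buf, "%s", line);
    for (tok = strtok(buf, " \t\n"); tok; tok = strtok(NULL, " \t\n"))
        if (strcmp(tok, op) == 0) return 1;
    return 0;
}

/* DEFINE (Figure 4.5): copy lines into DEFTAB until the matching MEND,
 * tracking nesting with LEVEL.  Comment lines (starting '.') are omitted. */
static void define_macro(FILE *fp, const char *prototype) {
    char line[LINELEN];
    int level = 1;                        /* we are inside one MACRO */
    snprintf(DEFTAB[deflen++], LINELEN, "%s", prototype);
    while (level > 0 && deflen < MAXLINES && fgets(line, sizeof line, fp)) {
        if (line[0] == '.') continue;     /* comment lines are omitted */
        if (has_opcode(line, "MACRO")) level++;      /* nested MACRO */
        else if (has_opcode(line, "MEND")) level--;  /* closes one level */
        snprintf(DEFTAB[deflen++], LINELEN, "%s", line);
    }
}

int main(void) {
    char line[LINELEN];
    while (fgets(line, sizeof line, stdin))
        if (has_opcode(line, "MACRO"))    /* prototype line starts DEFINE */
            define_macro(stdin, line);
    /* EXPAND, NAMTAB and positional-parameter substitution are omitted. */
    printf("%d definition line(s) stored in DEFTAB\n", deflen);
    return 0;
}
```

Fed the MACROS definition above on standard input, the sketch stores all six lines, including the inner MEND directives, because LEVEL only reaches 0 at the final MEND.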
{"Source-Url": "http://solomon.ipv6.club.tw/~solomon/Course/SP/sp4-1.pdf", "len_cl100k_base": 5149, "olmocr-version": "0.1.53", "pdf-total-pages": 30, "total-fallback-pages": 0, "total-input-tokens": 47885, "total-output-tokens": 5571, "length": "2e12", "weborganizer": {"__label__adult": 0.00033211708068847656, "__label__art_design": 0.0002911090850830078, "__label__crime_law": 0.0002703666687011719, "__label__education_jobs": 0.0004620552062988281, "__label__entertainment": 4.392862319946289e-05, "__label__fashion_beauty": 0.00012683868408203125, "__label__finance_business": 0.00011593103408813477, "__label__food_dining": 0.0003376007080078125, "__label__games": 0.000560760498046875, "__label__hardware": 0.0030803680419921875, "__label__health": 0.0002760887145996094, "__label__history": 0.0001627206802368164, "__label__home_hobbies": 0.00011646747589111328, "__label__industrial": 0.0005679130554199219, "__label__literature": 0.00014662742614746094, "__label__politics": 0.00018286705017089844, "__label__religion": 0.0004558563232421875, "__label__science_tech": 0.00960540771484375, "__label__social_life": 4.863739013671875e-05, "__label__software": 0.0042266845703125, "__label__software_dev": 0.9775390625, "__label__sports_fitness": 0.0003345012664794922, "__label__transportation": 0.0004835128784179687, "__label__travel": 0.00014960765838623047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16532, 0.0369]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16532, 0.28263]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16532, 0.71921]], "google_gemma-3-12b-it_contains_pii": [[0, 62, false], [62, 585, null], [585, 976, null], [976, 1550, null], [1550, 2169, null], [2169, 2793, null], [2793, 3070, null], [3070, 3323, null], [3323, 3951, null], [3951, 4163, null], [4163, 4459, null], [4459, 5142, null], [5142, 5693, null], [5693, 7106, null], [7106, 8479, null], [8479, 9832, null], [9832, 10366, null], [10366, 10857, null], [10857, 11226, null], [11226, 11905, null], [11905, 12218, null], [12218, 12602, null], [12602, 13300, null], [13300, 13316, null], [13316, 13810, null], [13810, 14252, null], [14252, 14744, null], [14744, 15373, null], [15373, 16134, null], [16134, 16532, null]], "google_gemma-3-12b-it_is_public_document": [[0, 62, true], [62, 585, null], [585, 976, null], [976, 1550, null], [1550, 2169, null], [2169, 2793, null], [2793, 3070, null], [3070, 3323, null], [3323, 3951, null], [3951, 4163, null], [4163, 4459, null], [4459, 5142, null], [5142, 5693, null], [5693, 7106, null], [7106, 8479, null], [8479, 9832, null], [9832, 10366, null], [10366, 10857, null], [10857, 11226, null], [11226, 11905, null], [11905, 12218, null], [12218, 12602, null], [12602, 13300, null], [13300, 13316, null], [13316, 13810, null], [13810, 14252, null], [14252, 14744, null], [14744, 15373, null], [15373, 16134, null], [16134, 16532, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 
16532, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16532, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 16532, null]], "pdf_page_numbers": [[0, 62, 1], [62, 585, 2], [585, 976, 3], [976, 1550, 4], [1550, 2169, 5], [2169, 2793, 6], [2793, 3070, 7], [3070, 3323, 8], [3323, 3951, 9], [3951, 4163, 10], [4163, 4459, 11], [4459, 5142, 12], [5142, 5693, 13], [5693, 7106, 14], [7106, 8479, 15], [8479, 9832, 16], [9832, 10366, 17], [10366, 10857, 18], [10857, 11226, 19], [11226, 11905, 20], [11905, 12218, 21], [12218, 12602, 22], [12602, 13300, 23], [13300, 13316, 24], [13316, 13810, 25], [13810, 14252, 26], [14252, 14744, 27], [14744, 15373, 28], [15373, 16134, 29], [16134, 16532, 30]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16532, 0.20519]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
386896534e58b0d19b39d5cb23e274fdf41d6079
Information Retrieval: Improving Question Answering Systems by Query Reformulation and Answer Validation

Mohammad Reza Kangavari, Samira Ghandchi, Manak Golpour

Abstract — Question answering (QA) aims at retrieving precise information from a large collection of documents. Most question answering systems are composed of three main modules: question processing, document processing and answer processing. The question processing module plays an important role in QA systems by reformulating questions. Answer processing is likewise an emerging topic in QA systems, where systems are often required to rank and validate candidate answers. These techniques, which aim at finding short and precise answers, are often based on semantic relations and co-occurring keywords. This paper discusses a new model for question answering which improves the two main modules, question processing and answer processing, both of which affect the evaluation of the system's operation. Two important components form the basis of question processing. The first component is question classification, which specifies the types of the question and the answer. The second is reformulation, which converts the user's question into a question understandable by the QA system in a specific domain. The objective of an answer validation task is to judge the correctness of an answer returned by a QA system, according to the text snippet given to support it. For validating answers, we apply candidate answer filtering and candidate answer ranking, followed by a final validation step based on user voting. This paper also describes the new architecture of the question and answer processing modules, with modeling, implementation and evaluation of the system. The system differs from most question answering systems in its answer validation model, which makes it more suitable for finding the exact answer. From a total of 50 asked questions, evaluation of the model shows a 92% improvement in the system's decisions.

Keywords — Answer Processing, Answer Validation, Classification, Question Answering and Query Reformulation.

1. INTRODUCTION

Much research has been done in recent years on QA systems. QA systems have been developed to answer simple questions correctly, but research now focuses on methods for answering complex questions reliably. Those methods analyze and parse a complex question into multiple simple questions and use existing techniques for answering them [1]. Recent research shows that the performance of a system depends on the number of probable answers in the documents. Finding the exact answer is one of the most important problems in QA systems. For this purpose, the designed model uses syntactic and semantic relations together, previously asked questions and dynamic patterns to find the exact answer in the least time. This model works on the aerology domain by forecasting weather information based on patterns in a closed-domain question answering system. If there is no suitable default pattern in the database, the user can create appropriate patterns following English grammar. The designed QA system answers only questions in factoid form, i.e. questions answerable in one sentence.

The aim of this paper is to design and implement a new model for classification, reformulation and answer validation in a QA system. The methodology used in this system finds the correct answer in the 'weather forecasting' domain using NLP techniques, syntactic and semantic relations among words, dynamic patterns and previous information about the defined domain.
The main reason behind the necessity of providing the system with an answer validation component is the difficulty of picking the "exact answer" out of a document. Our approach to automatic answer validation relies on discovering relations between a question and the answer candidates by mining the documents or a domain text corpus for their co-occurrence tendency [11]. In this model, first of all, questions are parsed using the semantic and syntactic information in the question. Second, answer patterns are specified based on their types. Then the search engine finds candidate answer documents and sends them to the answer processing module to extract correct answers. The system filters the candidate answer collection based on co-occurrence patterns and assigns a priority number to the candidate answers. Finally, the system ranks the answers and sends them to the user for final validation in order to extract the exact answer.

The patterns considered in this program are based on English grammar and are intended to include all probable patterns. If no suitable pattern is found, the user can create a new pattern. This paper uses syntactic and semantic relations together with existing information from previously asked questions saved in the system. Our system is modeled on the aerology domain, but it can easily work in both closed- and open-domain QA systems.

Section II considers QA systems. Section III covers question processing and Section IV presents answer processing. Section V describes the architecture of the new model and Section VI discusses evaluation. The final section concludes the paper.

II. QUESTION ANSWERING SYSTEMS (QA)

QA is a type of information retrieval. Given a collection of documents (such as the World Wide Web or a local collection), the system should be able to retrieve answers to questions posed in natural language. QA is regarded as requiring more complex natural language processing (NLP) techniques than other types of information retrieval such as document retrieval, and it is sometimes regarded as the next step beyond search engines [1][2]. QA research attempts to deal with a wide range of question types including: fact, list, definition, how, why, hypothetical, semantically-constrained and cross-lingual questions. Search collections vary from small local document collections, to internal organization documents, to compiled newswire reports, to the World Wide Web.

QA systems are classified into two main types [12]: open-domain QA systems and closed-domain QA systems. Open-domain question answering deals with questions about nearly everything and can only rely on general ontology [4] and world knowledge. On the other hand, these systems usually have much more data available from which to extract the answer. Closed-domain question answering deals with questions under a specific domain (for example, medicine or weather forecasting) and can be seen as an easier task, because NLP systems can exploit domain-specific knowledge frequently formalized in an ontology. Alternatively, open-domain might refer to a situation where unlimited types of questions are accepted, such as questions asking for descriptive information [1][2]. Much research has been done on English-language QA systems, and other work has addressed Chinese, Arabic, Spanish and other languages [3]. The aim of QA systems is to find the exact and correct answer to users' questions.
In addition to user interaction, QA systems contain at least the following three parts: (1) question processing, (2) document processing and (3) answer processing.

III. QUESTION PROCESSING

As mentioned before, question, document and answer processing are the three main parts of a QA system. The important components of question processing are question classification and reformulation.

A. Classification component

To extract answers from a large collection of documents and texts, the system first has to know what it is looking for. Questions should therefore be classified by type [4]. Question classification is performed before reformulation; it determines the types of the question and of the answer, and it also helps the system omit the question words from the final form of the answer. Table 1 shows question words with the corresponding question and answer types. Overall, questions can be divided as follows:
- Questions with a 'WH' question word, such as what, where, when, who, whom, which, how and why.
- Questions with a modal or auxiliary verb, whose answer is yes/no.

Clearly, knowing the question type alone is not enough to find the correct answer. For the question 'Who was the first aerologist in the USA?' the answer type is 'a person', but if a question is asked with 'what', the exact answer type is not determined: the answer may be a definition, a number or a title [6]. To extract the correct answer, patterns must be defined so that the system can find the exact answer type before passing the question to document processing [4][5]. A sketch of this classification step is given below, after the reformulation component.

B. Reformulation component

Question reformulation (also called surface patterns, paraphrasing or answer patterns) tries to identify the various ways of expressing an answer to a given natural language question. Reformulation is often used in QA systems to retrieve answers from a large document collection [7]. The query reformulation component converts the question into a set of keyword queries that are sent to the search engine for parallel evaluation. The following are important in reformulation:
1- Using the syntactic relations among the words of the question sentence.
2- Using the semantic relations among the words of the question sentence.
3- Using stored information about previously asked questions and answers that are partly or entirely the same as the user's question. In this case the system can reuse the answer type of the previous question, which shortens the search for a suitable pattern and answer type and reduces the time needed to return a correct answer [8][9]. This is possible if the system can save such information in the 'usage knowledge' database.

If all of the above work together, the flexibility of the system increases. As mentioned before, the flexibility of the designed system rests on the usage knowledge part, which acts like an FAQ (frequently asked questions) store and can also answer new questions that are not identical to previous ones but differ only in adverbs or verbs. When a user asks a question, the sentence is first parsed into its syntactic components, and its keywords are then selected for use in reformulation.
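The paper gives no implementation; the following is a minimal sketch of how the classification rules of Section III.A (summarised in Table 1 below) might be realised. All names, the token-based matching and the fallback labels are illustrative assumptions, not part of the original system.

```python
# Minimal sketch of question classification following Table 1
# (illustrative; the paper does not specify an implementation).

WH_ANSWER_TYPES = {
    "when": "DATE",
    "who": "PERSON",
    "why": "REASON",
    "where": "LOCATION",
}

# 'which' and 'what' need the following head noun to disambiguate
# (the sub-classes of Table 1); this small map is an assumption.
HEAD_NOUN_TYPES = {
    "person": "PERSON", "city": "LOCATION", "month": "DATE",
    "year": "DATE", "temperature": "NUMBER", "capital": "LOCATION",
}

MODALS = {"do", "does", "did", "is", "are", "was", "were",
          "can", "could", "shall", "should", "may", "might", "will"}

def classify(question: str) -> str:
    tokens = question.lower().rstrip("?").split()
    if not tokens:
        return "UNKNOWN"
    first = tokens[0]
    if first in WH_ANSWER_TYPES:
        return WH_ANSWER_TYPES[first]
    if first in ("which", "what"):
        # inspect the next tokens for a known head noun, e.g. "which city ..."
        for tok in tokens[1:3]:
            if tok in HEAD_NOUN_TYPES:
                return HEAD_NOUN_TYPES[tok]
        return "DEFINITION_OR_NUMBER_OR_TITLE"   # 'what' stays ambiguous
    if first in MODALS:
        return "YES_NO"
    return "UNKNOWN"   # the system would ask the user for a question word

print(classify("Which city has the min temperature?"))   # LOCATION
print(classify("Will it be raining tomorrow?"))           # YES_NO
```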
Table 1 Classification of questions and answers

<table>
<thead>
<tr> <th>Question word</th> <th>Sub-classification</th> <th>Type of answer</th> <th>Example</th> </tr>
</thead>
<tbody>
<tr> <td>When</td> <td></td> <td>DATE</td> <td>When did rain come yesterday?</td> </tr>
<tr> <td>Which</td> <td>Which-Who</td> <td>PERSON</td> <td>Which person did invent the instrument of aerology?</td> </tr>
<tr> <td>Which</td> <td>Which-Where</td> <td>LOCATION</td> <td>Which city has the min temperature?</td> </tr>
<tr> <td>Which</td> <td>Which-When</td> <td>DATE</td> <td>Which month has max rain?</td> </tr>
<tr> <td>Why</td> <td></td> <td>REASON</td> <td>Why don't we have enough rain this year?</td> </tr>
<tr> <td>What</td> <td></td> <td>MONEY / NUMBER / DEFINITION / TITLE</td> <td>What is the temperature of Tehran?</td> </tr>
<tr> <td>What</td> <td>What-Who</td> <td>PERSON</td> <td>What is the best meteorologist in Iran?</td> </tr>
<tr> <td>What</td> <td>What-When</td> <td>DATE</td> <td>What year do we have max rain?</td> </tr>
<tr> <td>What</td> <td>What-Where</td> <td>LOCATION</td> <td>What is the capital of Iran?</td> </tr>
<tr> <td>Who</td> <td></td> <td>PERSON</td> <td>Who is the first meteorologist in world?</td> </tr>
</tbody>
</table>

An important question is: what are the keywords in a question sentence? Keywords are selected as follows:
1- All words in 'quotations' and "double quotations".
2- All words that are names.
3- All words that are adverbs (time, location, status).
4- All words that are main verbs or modal verbs.
5- All words that are subjects.
6- All words that are objects.

The next question is how the keywords are used to form the answer. For this purpose the system uses patterns to find the correct format of the answer; these patterns are built according to English grammar [4].

1) Rules for extracting answer patterns. The first step in finding a suitable pattern is to find the verb of the sentence. In the defined patterns, verbs are divided into three groups:
1- Main verbs, such as 'to be' (am, is, are, was, were, been) or 'to have' (have, has, had).
2- Auxiliary verbs (do, does, did).
3- Modal verbs (can, could, shall, should, may, might, ...).

Main verbs are never deleted in the answer, although their position in the sentence may change depending on the answer type; sometimes these verbs (am, is, are, ...) appear together with another verb in the '-ing' form. Auxiliary verbs, however, are deleted in answers. Note that 'do' may occur in a sentence as a main verb, which can be detected through semantic relations [9][10]. If the question has no WH question word and is asked with a modal verb or 'to be', the answer is yes/no. If the sentence contains no question word at all (neither WH nor modal), the system asks the user which question word is intended, after which the usual process is followed.

IV. ANSWER PROCESSING

The answer processing module consists of two main components: answer extraction and answer validation. First, candidate answers are extracted from the documents retrieved by the search engine. The answers are then validated by filtering and ranking the candidates, and the system's suggested answers are finally confirmed by user voting. Our approach to automatic answer validation relies on discovering relations between the asked question and the candidate answers by mining the documents, or a domain text corpus, for their co-occurrence tendency [10][11]. The underlying hypothesis is that the number of these co-occurrences can be considered a significant clue to the validity of an answer, so this information can be used effectively to rank the large number of candidate answers that a QA system typically has to deal with.
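The paper describes this co-occurrence scoring only in prose. As a rough sketch, assuming validity is estimated by counting snippets in which the candidate co-occurs with the question keywords (the threshold of "at least half the keywords" is an assumption), it could look like this:

```python
# Illustrative sketch of co-occurrence-based answer validation.
# Scoring rule and all names are assumptions, not the paper's code.

from typing import List

def cooccurrence_score(candidate: str,
                       keywords: List[str],
                       snippets: List[str]) -> float:
    """Fraction of snippets containing the candidate together with
    at least half of the question keywords."""
    hits = 0
    for snippet in snippets:
        text = snippet.lower()
        if candidate.lower() not in text:
            continue
        present = sum(1 for k in keywords if k.lower() in text)
        if present * 2 >= len(keywords):
            hits += 1
    return hits / len(snippets) if snippets else 0.0

def rank_candidates(candidates, keywords, snippets):
    # a higher co-occurrence score is taken as higher assumed validity
    return sorted(candidates,
                  key=lambda c: cooccurrence_score(c, keywords, snippets),
                  reverse=True)
```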
Domain knowledge and answer patterns can also be exploited to create new answers based on co-occurring keywords and semantic relations [5].

A. FILTERING COMPONENT

The candidate answer collection sent by the answer extraction component is fed into the filtering component. The collection consists of snippets that may include the exact answer. Using the answer keywords, the system finds co-occurring words [9] and semantic relations [12] stored in the ontology database, together with related sentences from the domain knowledge. By analysing the candidate answers with the answer type and keywords, some snippets are eliminated from the collection, and the best candidates are sent on for ranking.

B. RANKING COMPONENT

This component receives the list of answers that passed the filter. The list contains the answers that, from the system's point of view, are most related to the question. The ranking component classifies the answers and assigns each a priority number, computed from the number of times the answer type is repeated in the snippets and the distance between the answer keywords (with respect to a threshold). The answer with the highest priority is placed at the top of the list, and this is repeated for all answers. The data are then fetched from the domain knowledge database, and the answers are sent to the user for validation.

C. USER VOTING (VALIDATION)

In this step the answers are shown to the user for validation. If the top answer is the exact answer, the system increases a validation grade for the [q, a] pair in the usage knowledge; that answer is stored in the database to answer the next similar question. Otherwise the remaining candidates are shown to the user to certify. This process continues until no answers remain, at which point the system asks the user for additional information and sends it, or the new question, to the question processing module.

V. ARCHITECTURE

To increase the reliability and capability of the designed QA system in finding the correct, exact answer, we use dynamic patterns together with semantic relations among words, verbs and keywords, as well as co-occurring keywords. In the question processing module the question is first classified according to linguistic theory and the principles of question answering. The question structure and keywords determined by classification are then sent to the document processing module, which retrieves documents that may contain a suitable answer. In the answer processing module, the candidate answers received from the search engine are filtered using co-occurrence patterns and ordered by the system's analysis; the answers are then sent to the user, who validates the candidates, and the system finally presents the exact answer.

A. SYSTEM COMPONENTS

The designed architecture has the following parts (see Fig. 1):
1- Question interface: the user writes a question through an interface; if no suitable answer is returned, the user can rephrase the question.
2- Query analyzer: the question is parsed into its constituents, such as subject, object, verb, noun, adjective and adverb.
3- Lexicon: used as a vocabulary (dictionary); it contains all the words of the related domains.
The type of each word (subject, object, verb, noun, adjective, adverb, etc.) is also specified in this part.
4- Ontology database: questions and answers are examined semantically; the semantic relations among keywords are stored in this database.
5- Domain knowledge: the domain information is stored as a database in this part and supplies the user's answer when a web service connects to the internet.
6- Question classification: question classification is one of the important functions of most QA systems. Most research on this subject is based on regular expressions, hand-written grammar rules and other advanced natural language techniques for parsing questions and finding answers. Here, all questions are classified by WH question words (what, where, when, who, etc.) or by other question words with a yes/no answer.
7- Reformulation: the original question (Q) is transformed by rules into a question with a new format (Q'). Question words and punctuation that make no difference to the question and answer are deleted, and the words are reduced to their roots; suitable patterns and information are then looked up using the words of the new question.
8- Usage knowledge: one of the most useful ways to find correct, related answers is to use a library of previous questions and answers. If a new question is similar to a previously submitted one, the answer to the old question is reused; if it differs from all stored questions, it is sent on to the other steps. Note that this database is new for answer validation.
9- Candidate answer filtering: candidate answers are filtered based on the answer type and the co-occurrence patterns created in the system; some answers are also generated dynamically from domain knowledge and co-occurring keywords.
10- Candidate answer ranking: answers are ranked by the distance of the keywords in the snippets, the answer type and answer repetition. The filtered candidate collection is ordered by validation value.
11- User voting: this part plays the role of a human assessor who checks the correctness of an answer and updates the validation grade in the usage knowledge for later validation, which affects the total response time of the system.
12- Pattern: a database of answer patterns, updated with the dynamic patterns created in the system.

B. ALGORITHM

As mentioned before, for every question written by the user in natural language, some words of the question are used as keywords in the answer; these keywords may be the subject, object, verb, adverbs, etc. The algorithm of the designed QA system is as follows:
1- The user asks a question through the query interface. If the question is similar to one of the previous questions saved in the usage knowledge database, the answer to the previous question is chosen and returned. Otherwise the next step is executed.
2- The query analyzer parses the question into subject, verb, object, adverb, etc. The types of the words and their synonyms (if any) are defined dynamically in the lexicon database. If the system cannot find a word of the question, or its type, it notifies the user, who can enter the new word and its type; in this way the lexicon database is completed and updated. Finally, a tree-structured parse is passed to the classification part.
In the classification part, the type of the question and then the type of the answer are determined:
3- The question may contain a WH question word, in which case the answer type follows from it.
3-1- The question contains no WH question word and has only a modal or auxiliary verb, so the answer is yes/no.
3-2- The user may ask the question as a sentence with no verb or question word, such as 'Temperature of Tehran'.
4- After these steps, the most important part of the job is query reformulation based on a suitable pattern.
5- We assume that in the document processing part the search engine retrieves documents within the scope of the domain, guided by the answer patterns and the important keywords.
6- The search engine sends the candidate answer collection to the answer processing module. The answer extraction part extracts candidate answers from the retrieved documents and passes them to the filtering unit.
7- Based on the co-occurring words and semantic relations in the ontology database, and on the answer type and keywords extracted in the question processing module, the system filters the candidate answer collection, eliminating answers unrelated to the question.

Fig. 1 Question answering system architecture

8- The remaining answers are ranked by keyword distance and the frequency of the answer keywords in the snippets; the filtered answers thus obtain priorities and are placed in an ordered list.
9- The answers with the highest priority are shown to the user for validation, receive a validation grade and are saved in the usage knowledge. If the user accepts the suggested answer presented as the exact answer, the algorithm terminates.
10- Otherwise the algorithm sends the next set of prioritised candidates from the list to the user; this step is performed recursively.
11- Finally, if the user accepts no answer from the candidate list, the system asks for a new question, requests additional information from the user and sends them to the question processing module; it also updates the pattern database to eliminate inefficient patterns.

VI. PROCESS OF THE MODEL

To increase efficiency and find the exact answer, the system uses the ontology database. This database includes co-occurring words, such as rain and umbrella [6], and semantically close words, such as 'temperature' and 'degree', in the weather-forecasting domain. The flexibility of the designed system is based on the usage knowledge: if a new question is entirely or nearly similar to a previously asked one, the system can reuse the generated answer. We also use a field named answer validation grade to reduce the system's response time: whenever an answer is validated, the system adds one to this grade, and when a user asks a repeated question the answer with the highest grade is selected as the valid answer. A newly asked question is parsed into its components by the query analyzer. All of these components are then checked against the data in the usage knowledge to find a probable similarity with previous questions. If the structure of the asked question is exactly the same as the stored data, the answer to the new question is certainly the same as the answer to the previous one. If there are some differences (such as the question word, a preposition, an adjective, a name or an adverb), the system uses the word ontology to find synonyms of the differing words in order to establish the similarity between the new question and previous ones.
Finally, if an answer exists for the synonymous word in a previous question, the system uses it as the answer to the new question. For example, if during checking the word 'temperature' in the new question differs from 'degree' in a previous question, but the two words are saved as synonyms in the word ontology and a previous question with 'degree' has been asked, the system treats the two questions, and their answer types, as the same, even if they have different adverbs. Different adverbs in two otherwise identical questions have no effect on the question type. This matters for question words that allow more than one answer type (such as 'what', whose answer type may be 'number', 'title' or 'definition'). If the structure of the question is entirely different, meaning that no similar question exists in the usage knowledge, the system uses the other defined patterns to find the answer. By using the co-occurrence technique, the proposed model can increase the validity of the candidate answers, and exploiting the validation value improves the efficiency of the system. In the final step the user checks the validity of the answers and selects the best validated answer(s); the system assigns the selected answer a validation value and stores it in the usage knowledge. The system also receives a score between 0 and 100 from the user, indicating the user's satisfaction with the system's operation; this score is used in evaluating the system. Because this evaluation measure reflects the user's viewpoint, the validation percentage is high. If co-occurring words appear in a candidate answer sentence, there is a strong probability that the answer is valid. The ontology of weather events consists of event concepts, similar to synsets in WordNet [15]. For example, rain and umbrella belong to the same event concept in the domain ontology for weather events, because questions about using an umbrella usually ask about rain (e.g. 'Will I need to bring an umbrella tomorrow?' and 'Will it be raining tomorrow?').

VII. EVALUATION

The model was implemented with dynamic patterns, syntactic and semantic relations among words, co-occurring keywords, answer validation values and the previous information (questions and answers) saved in the usage knowledge. Chart 1 shows the types and numbers of questions (questions asking about quality/status, quantity/amount, location, time/date, person, and definition/description). The model is able to analyse the words of a question semantically and to use co-occurring word relations to validate the answer. Another advantage of this model is its ability to create new answers from the domain knowledge, keywords and answer patterns. The model also uses an answer validation value, which affects the system's response time: since the domain is restricted to aerology, as the number of repeated questions grows, the number of validated answers grows with it, which means the validity of the answers increases. Moreover, the model works with dynamic patterns, and it can answer a sentence with more than one question word individually, a sentence without a question word, or a multi-sentence text containing a question. To evaluate the implemented system, 50 questions were asked by 20 people of various ages and backgrounds in different places and at different times. Over these 50 questions, the evaluation shows that the model improves the decisions of the system in 92% of cases.
Chart 2 shows the evaluation of the model based on user voting. As noted above, this model improves the precision of question answering systems by working on the query reformulation and answer validation modules. The model exploits syntactic and semantic relations to build dynamic patterns for reformulating the asked question, and develops the answer validation part by using word co-occurrence techniques and applying user assessments to validate answers; this raises the system's performance. Finally, the model uses the usage knowledge to reduce the total time needed to answer a question.

VIII. CONCLUSIONS AND FUTURE WORK

Restricted-domain QA (RDQA), working on small document collections and restricted subjects, appears to be no easier a task than open-domain QA. Owing to candidate scarcity, the precision of an RDQA system, and in particular of its IR module, becomes problematic; it seriously affects the overall success of the system, because if most of the retrieved candidates are incorrect, applying further QA techniques to refine the answers is pointless. The simplest way to improve the accuracy of a question answering system may be to restrict the domain it covers: by restricting the question domain, the candidate answer collection becomes smaller. Reformulation is an important part of understanding the interplay of information retrieval. There are three aspects to a user's reformulation patterns: format, content and source. The main goal of rewriting a question is to ask it in a new form that requires less time and fewer sources to search. Reformulating questions is one of the most difficult tasks for users, even on the web, where learning from and using documents seems simple. Understanding question behaviour, and designing software that supports this behaviour, are important problems for QA systems based on reformulation. To increase performance and obtain the exact correct answer, this component should use syntactic and semantic relations together, and should also draw on stored information about questions previously asked by users. An important property of the reformulation component is that it can be used separately in other QA systems: since it is not tied to a particular domain, it can serve both open and closed domains. The designed system takes the following steps to increase its proficiency:
- semantic analysis,
- syntactic analysis,
- dynamic patterns,
- use of previous data (usage knowledge),
- use of questions with related subjects,
- adding a web service as a rich source in the domain ontology.

In addition to the question processing module, improving the answer processing module completes the question processing task and raises the efficiency of the QA system, because the system must return a correct answer. With the improved answer processing module the system can present exact answers, because it operates in a closed domain, deals with frequent questions and uses validation patterns. Another reason is that the exact answers are obtained by filtering the candidate answer collection in several steps, so the answers are selected from a restricted collection, which makes the algorithm more efficient. Thanks to the validation grade, this model has a better total response time than other models: the grade is null when the system starts, but it grows as the QA system is used and shortens the answer access time.
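The paper defines the validation grade only in prose; as a hypothetical sketch of how graded answers could be stored and reused (the dict-based schema, the normalisation and all names are assumptions):

```python
# Hypothetical sketch of validation-grade-based answer reuse.
# The paper specifies no data schema; this dict layout is assumed.

usage_knowledge = {}   # question signature -> list of (answer, grade)

def signature(question: str) -> tuple:
    # crude normalisation standing in for the parse-based matching
    return tuple(sorted(question.lower().rstrip("?").split()))

def lookup(question: str):
    """Return the stored answer with the highest validation grade, if any."""
    entries = usage_knowledge.get(signature(question), [])
    return max(entries, key=lambda e: e[1])[0] if entries else None

def record_validation(question: str, answer: str):
    """The user accepted `answer`: increment its grade (grade 1 if new)."""
    entries = usage_knowledge.setdefault(signature(question), [])
    for i, (a, g) in enumerate(entries):
        if a == answer:
            entries[i] = (a, g + 1)
            return
    entries.append((answer, 1))

record_validation("What is the temperature of Tehran?", "25 degrees")
print(lookup("What is the temperature of Tehran?"))   # 25 degrees
```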
Future researches should consider factors that lead users to reformulate their questions. Also new research should be done to gather more information in various levels of understanding, effectiveness and situations. Methods of gathering multi information such as documents, interviews, reports and etc. should be done. In addition that we must improve the answer processing module by identification new kind of patterns and try to decline the timeline to find the exact answer which is performed here by using validation grade and usage knowledge database. REFERENCES
{"Source-Url": "https://publications.waset.org/13824/pdf", "len_cl100k_base": 6297, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23125, "total-output-tokens": 7560, "length": "2e12", "weborganizer": {"__label__adult": 0.0005536079406738281, "__label__art_design": 0.001041412353515625, "__label__crime_law": 0.0010595321655273438, "__label__education_jobs": 0.02288818359375, "__label__entertainment": 0.0004596710205078125, "__label__fashion_beauty": 0.00043129920959472656, "__label__finance_business": 0.0008959770202636719, "__label__food_dining": 0.0006780624389648438, "__label__games": 0.002353668212890625, "__label__hardware": 0.00103759765625, "__label__health": 0.002002716064453125, "__label__history": 0.0008096694946289062, "__label__home_hobbies": 0.00017690658569335938, "__label__industrial": 0.0005712509155273438, "__label__literature": 0.004665374755859375, "__label__politics": 0.0005555152893066406, "__label__religion": 0.0008053779602050781, "__label__science_tech": 0.3525390625, "__label__social_life": 0.000377655029296875, "__label__software": 0.124755859375, "__label__software_dev": 0.47998046875, "__label__sports_fitness": 0.0003888607025146485, "__label__transportation": 0.0006651878356933594, "__label__travel": 0.00030803680419921875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35234, 0.01039]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35234, 0.74408]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35234, 0.94109]], "google_gemma-3-12b-it_contains_pii": [[0, 4456, false], [4456, 9582, null], [9582, 14296, null], [14296, 19838, null], [19838, 22135, null], [22135, 27255, null], [27255, 32034, null], [32034, 35234, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4456, true], [4456, 9582, null], [9582, 14296, null], [14296, 19838, null], [19838, 22135, null], [22135, 27255, null], [27255, 32034, null], [32034, 35234, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35234, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35234, null]], "pdf_page_numbers": [[0, 4456, 1], [4456, 9582, 2], [9582, 14296, 3], [14296, 19838, 4], [19838, 22135, 5], [22135, 27255, 6], [27255, 32034, 7], [32034, 35234, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35234, 0.07303]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
6e26484f7881c4d05392a552035156896802c0c1
Dynamic Programming (Cormen et al., IV.15)

Motivating Example: Fibonacci numbers

\[ F(1) = F(2) = 1 \]
\[ F(n) = F(n-1) + F(n-2) \quad n > 2 \]

Simple recursive solution:

```
def fib(n):
    if n <= 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)
```

What is the size of the call tree? Problem: exponential call tree. Can we avoid it?

Efficient computation using a memo table:

```
def fib(n, table):
    # pre: n > 0; table[i] is either 0 or contains fib(i)
    if n <= 2:
        return 1
    if table[n] > 0:
        return table[n]
    result = fib(n-1, table) + fib(n-2, table)
    table[n] = result
    return result
```

We use a memo table, never computing the same value twice. How many calls now? \(O(n)\). Can we do better?

Look ma, no table:

```
def fib(n):
    if n <= 2:
        return 1
    a = b = 1
    c = 0
    for i in range(3, n+1):
        c = a + b
        a = b
        b = c
    return c
```

Compute the values "bottom up"; avoid the table and store only the previous two values. Same \(O(n)\) time complexity, constant space: we keep only the values we need.

Optimization Problems

In optimization problems a set of choices has to be made to arrive at an optimum, and sub-problems are encountered. This often leads to a recursive definition of a solution. However, the recursive algorithm is often inefficient in that it solves the same sub-problem many times. Dynamic programming avoids this repetition by solving the problem bottom up and storing the sub-solutions that are (still) needed.

Dynamic vs Greedy, Dynamic vs Div&Co

Compared to greedy, there is no predetermined local choice of a sub-solution: a solution is chosen by computing a set of alternatives and picking the best. Another way of saying this is: greedy only needs ONE best solution. Dynamic programming builds on the recursive definition of a divide-and-conquer solution, but avoids re-computation by storing earlier computed values, thereby often saving orders of magnitude of time. Fibonacci: from exponential to linear.

Dynamic Programming

Dynamic programming has the following steps:
- Characterize the structure of the problem, i.e., show how a larger problem can be solved using solutions to sub-problems.
- Recursively define the optimum.
- Compute the optimum bottom up, storing values of sub-solutions.
- Construct the optimum from the stored data.

Optimal substructure

Dynamic programming works when a problem has optimal substructure: we can construct the optimum of a larger problem from the optima of a "small set" (small: polynomial) of smaller problems. Not all problems have optimal substructure; the Travelling Salesman Problem (TSP) does not.

Weighted Interval Scheduling

We studied a greedy solution for the interval scheduling problem, where we searched for the maximum number of compatible intervals. If each interval has a weight and we search for the set of compatible intervals with the maximum sum of weights, no greedy solution is known.

Weighted interval scheduling problem:
- Job $j$ starts at $s_j$, finishes at $f_j$, and has value $v_j$.
- Two jobs are compatible if they don't overlap.
- Goal: find a maximum-value subset of compatible jobs.

Assume jobs are sorted by finish time: \( f_1 \leq f_2 \leq \ldots \leq f_n \). \( p(j) \) = the largest index \( i < j \) such that job \( i \) is compatible with \( j \); in other words, \( p(j) \) is \( j \)'s latest predecessor, and \( p(j) = 0 \) if \( j \) has no predecessors. Example: \( p(8) = 5, p(7) = 3, p(2) = 0 \). Using \( p(j) \), can you think of a recursive solution?
Recursive (either / or) Solution

Notation. \( \text{OPT}(j) \): optimal value of the problem consisting of job requests 1, 2, ..., \( j \).
- Case 1: \( \text{OPT}(j) \) includes job \( j \):
  - add \( v_j \) to the total value;
  - can't use the incompatible jobs \( \{ p(j)+1, p(j)+2, \ldots, j-1 \} \);
  - must include the optimal solution to the problem consisting of the remaining compatible jobs \( 1, 2, \ldots, p(j) \).
- Case 2: \( \text{OPT}(j) \) does not include job \( j \):
  - must include the optimal solution to the problem consisting of the remaining jobs \( 1, 2, \ldots, j-1 \).

\[ \text{OPT}(j) = \begin{cases} 0 & \text{if } j = 0 \\ \max \left\{ v_j + \text{OPT}(p(j)),\ \text{OPT}(j-1) \right\} & \text{otherwise} \end{cases} \]

Either / or recursion

This is very often a first recursive solution method: either some item is in, with one consequence, or it is not, with another (e.g. knapsack, see later slides). Here, for each job \( j \):
- either \( j \) is chosen: add \( v_j \) to the total value and consider \( p(j) \) next;
- or it is not: the total value does not change and we consider \( j-1 \) next.

Weighted Interval Scheduling: Recursive Solution

```
input: s_1,…,s_n, f_1,…,f_n, v_1,…,v_n
sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n
compute p(1), p(2), …, p(n)

Compute-Opt(j) {
    if (j == 0)
        return 0
    else
        return max(v_j + Compute-Opt(p(j)), Compute-Opt(j-1))
}
```

What is the size of the call tree here? How can you make it big, e.g. exponential?

Analysis of the recursive solution

Observation. The recursive algorithm considers an exponential number of (redundant) sub-problems. The number of recursive calls for a family of "layered" instances grows like the Fibonacci sequence: with \( p(1) = 0 \) and \( p(j) = j-2 \), the code on the previous slide becomes Fibonacci, since \( \text{opt}(j) \) calls \( \text{opt}(j-1) \) and \( \text{opt}(j-2) \).

Weighted Interval Scheduling: Memoization

Memoization. Store the result of each sub-problem in a cache; look it up as needed.

```
input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n
sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n
compute p(1), p(2), …, p(n)

for j = 1 to n
    M[j] = empty
M[0] = 0

M-Compute-Opt(j) {
    if (M[j] is empty)
        M[j] = max(v_j + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
    return M[j]
}
```

Weighted Interval Scheduling: Running Time

Claim. The memoized version of M-Compute-Opt(n) takes \(O(n \log n)\) time.
- M-Compute-Opt(n) fills in each entry of \(M\) ONCE, in constant time.
- Since \(M\) has \(n+1\) entries, filling the table takes \(O(n)\).
- But we have also sorted the jobs.
- So the overall running time is \(O(n \log n)\).

Weighted Interval Scheduling: Finding a Solution

Question. Dynamic programming computes the optimal value. What if we want the choice vector determining which intervals are chosen?

Answer. Do some post-processing, walking BACK through the dynamic programming table.

```
Run Dynpro-Opt(n)
Run Find-Solution(n)

Find-Solution(j) {
    if (j == 0)
        output nothing
    else if (v_j + M[p(j)] > M[j-1])
        print j
        Find-Solution(p(j))
    else
        Find-Solution(j-1)
}
```
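The slides give pseudocode only; a runnable Python version of the memoized recursion, with illustrative names, might look as follows (assuming jobs are already sorted by finish time and p has been computed):

```python
# Runnable sketch of the memoized weighted-interval-scheduling recursion.
# Assumes jobs 1..n sorted by finish time, values v[1..n], predecessors p[1..n].

def m_compute_opt(j, v, p, M):
    if M[j] is None:
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),
                   m_compute_opt(j - 1, v, p, M))
    return M[j]

def weighted_interval_scheduling(v, p):
    n = len(v) - 1            # index 0 unused, so jobs are 1..n
    M = [None] * (n + 1)
    M[0] = 0                  # base case
    return m_compute_opt(n, v, p, M)

# Values and predecessors of the worked example further below:
# jobs A..F in finish-time order A, B, E, D, C, F (1-indexed).
v = [0, 5, 8, 10, 5, 3, 1]
p = [0, 0, 0, 2, 1, 0, 3]
print(weighted_interval_scheduling(v, p))   # 19
```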
Weighted Interval Scheduling: Bottom-Up

Bottom-up dynamic programming: build a table.

```
input: n, s_1,…,s_n, f_1,…,f_n, v_1,…,v_n
sort jobs by finish times so that f_1 ≤ f_2 ≤ … ≤ f_n
compute p(1), p(2), …, p(n)

Dynpro-Opt {
    M[0] = 0
    for j = 1 to n
        M[j] = max(v_j + M[p(j)], M[j-1])
}
```

By going in bottom-up order, $M[p(j)]$ and $M[j-1]$ are present when $M[j]$ is computed. This takes $O(n \log n)$ for sorting and $O(n)$ for the table computation, so $O(n \log n)$ overall.

Do it, do it: Recursive

<table>
<thead>
<tr> <th>Job</th> <th>S</th> <th>V</th> </tr>
</thead>
<tbody>
<tr> <td>A</td> <td>1</td> <td>5</td> </tr>
<tr> <td>B</td> <td>2</td> <td>8</td> </tr>
<tr> <td>C</td> <td>4</td> <td>3</td> </tr>
<tr> <td>D</td> <td>6</td> <td>5</td> </tr>
<tr> <td>E</td> <td>9</td> <td>10</td> </tr>
<tr> <td>F</td> <td>11</td> <td>1</td> </tr>
</tbody>
</table>

Sort in F order: A, B, E, D, C, F.

Determine the p array:

<table>
<thead>
<tr> <th>j</th> <th>Job</th> <th>p(j)</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>A</td> <td>0</td> </tr>
<tr> <td>2</td> <td>B</td> <td>0</td> </tr>
<tr> <td>3</td> <td>E</td> <td>2</td> </tr>
<tr> <td>4</td> <td>D</td> <td>1</td> </tr>
<tr> <td>5</td> <td>C</td> <td>0</td> </tr>
<tr> <td>6</td> <td>F</td> <td>3</td> </tr>
</tbody>
</table>

Optimum: 6 (F) + 3 (E) + 2 (B) = 1 + 10 + 8 = 19.

Do the recursive algorithm. Left branch: take, add $v_j$, next is $p(j)$. Right branch: don't take, add 0, next is $j-1$. Going back up: on an edge, add; at a node, take the max.

Do it, do it: Dynamic Programming

Using the same jobs, sorted order and p array as above:

```
M[0] = 0
for j = 1 to n
    M[j] = max(v_j + M[p(j)], M[j-1])
```

Draw the intervals, sort in F order, determine the p array, and create the M table. Then walk back to determine the choices:
- 6, F: take gets you 19, don't gets you 18, so take F.
- 3, E: take gets you 18, don't gets you 8, so take E.
- 2, B: take gets you 8, don't gets you 0, so take B.

Computing the latest-predecessor array

Visually, it is "easy" to determine \( p(j) \), the largest index \( i < j \) such that job \( i \) is compatible with \( j \). For the example below:

\[ p[1 \ldots 8] = [0, 0, 0, 1, 0, 2, 3, 5] \]

How about an algorithm?
Or even as a human, try it without the visual aid (give it 5 minutes):

<table>
<thead>
<tr> <th>Activity</th> <th>A1</th> <th>A2</th> <th>A3</th> <th>A4</th> <th>A5</th> <th>A6</th> <th>A7</th> <th>A8</th> </tr>
</thead>
<tbody>
<tr> <td>Start (s)</td> <td>1</td> <td>3</td> <td>0</td> <td>4</td> <td>3</td> <td>5</td> <td>6</td> <td>8</td> </tr>
<tr> <td>Finish (f)</td> <td>4</td> <td>5</td> <td>6</td> <td>7</td> <td>8</td> <td>9</td> <td>10</td> <td>11</td> </tr>
<tr> <td>p</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

Computing the latest-predecessor array

Spoiler alert:
1. Treat all the start and finish times as "events" and sort them in increasing order (resolve ties any way you like, as long as all the finish events come before the start events).
2. Keep global variables LFSF and ILFSF (for "Latest_Finish_So_Far" and "Index_of_LFSF").
3. Process the events in order as follows:
   a. If it is a finish event f, update LFSF and ILFSF.
   b. If it is a start event s for activity i, set p(i) to ILFSF.

<table>
<thead>
<tr> <th>Event</th> <th>LFSF</th> <th>ILFSF</th> <th>p(x)=y</th> </tr>
</thead>
<tbody>
<tr> <td>s3</td> <td>0</td> <td>0</td> <td>p(3)=0</td> </tr>
<tr> <td>s1</td> <td>0</td> <td>0</td> <td>p(1)=0</td> </tr>
<tr> <td>s2</td> <td>0</td> <td>0</td> <td>p(2)=0</td> </tr>
<tr> <td>s5</td> <td>0</td> <td>0</td> <td>p(5)=0</td> </tr>
<tr> <td>f1</td> <td>4</td> <td>1</td> <td></td> </tr>
<tr> <td>s4</td> <td>4</td> <td>1</td> <td>p(4)=1</td> </tr>
<tr> <td>f2</td> <td>5</td> <td>2</td> <td></td> </tr>
<tr> <td>s6</td> <td>5</td> <td>2</td> <td>p(6)=2</td> </tr>
<tr> <td>f3</td> <td>6</td> <td>3</td> <td></td> </tr>
<tr> <td>s7</td> <td>6</td> <td>3</td> <td>p(7)=3</td> </tr>
<tr> <td>f4</td> <td>7</td> <td>4</td> <td></td> </tr>
<tr> <td>f5</td> <td>8</td> <td>5</td> <td></td> </tr>
<tr> <td>s8</td> <td>8</td> <td>5</td> <td>p(8)=5</td> </tr>
<tr> <td>f6</td> <td>9</td> <td>6</td> <td></td> </tr>
<tr> <td>f7</td> <td>10</td> <td>7</td> <td></td> </tr>
<tr> <td>f8</td> <td>11</td> <td>8</td> <td></td> </tr>
</tbody>
</table>
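As a runnable sketch of this sweep (variable names follow the slide; the tuple-based event encoding is an assumption):

```python
# Event-sweep computation of the latest-predecessor array p.
# Jobs are 1-indexed and assumed sorted by finish time.

def compute_p(s, f):
    n = len(s) - 1                      # s[0], f[0] unused
    events = []
    for i in range(1, n + 1):
        events.append((f[i], 0, i))     # kind 0: finish events sort first on ties
        events.append((s[i], 1, i))     # kind 1: start events
    events.sort()

    p = [0] * (n + 1)
    lfsf, ilfsf = 0, 0                  # latest finish so far, and its index
    for time, kind, i in events:
        if kind == 0:                   # finish event: update the running maximum
            lfsf, ilfsf = time, i
        else:                           # start event: record the predecessor
            p[i] = ilfsf
    return p

s = [0, 1, 3, 0, 4, 3, 5, 6, 8]
f = [0, 4, 5, 6, 7, 8, 9, 10, 11]
print(compute_p(s, f)[1:])   # [0, 0, 0, 1, 0, 2, 3, 5]
```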
Discrete Optimization Problems

A discrete optimization problem (S, f):
- S: set of solutions of a problem, satisfying some constraint.
- f : S → R: cost function associated with feasible solutions.
- Objective: find an optimal solution $x_{opt}$ such that
  - $f(x_{opt}) \leq f(x)$ for all $x$ in $S$ (minimization), or
  - $f(x_{opt}) \geq f(x)$ for all $x$ in $S$ (maximization).
- Ubiquitous in many application domains: planning and scheduling, VLSI layout, pattern recognition, bio-informatics.

Knapsack Problem

- Given $n$ objects and a "knapsack" of capacity $W$.
- Item $i$ has weight $w_i > 0$ and value (profit) $v_i > 0$.
- Goal: fill the knapsack so as to maximize the total value.

What would be a greedy solution? Repeatedly add the item with the maximum $v_i / w_i$ ratio... Does greedy work? Capacity $M = 7$, number of objects $n = 3$, $w = [5, 4, 3]$, $v = [10, 7, 5]$ (ordered by $v_i / w_i$ ratio). Greedy picks item 1 (weight 5, value 10) and nothing else fits, yet items 2 and 3 together weigh 7 and are worth 12: greedy fails.

Either / or Recursion for the Knapsack Problem

Notation: $OPT(i, w)$ = optimal value of a max-value subset that uses items 1, ..., $i$ with weight limit $w$.
- Case 1: item $i$ is not included: $OPT$ includes the best of \{ 1, 2, ..., $i-1$ \} using weight limit $w$.
- Case 2: item $i$ is included, if it fits ($w_i \leq w$): the new weight limit is $w - w_i$, and $OPT$ includes $v_i$ plus the best of \{ 1, 2, ..., $i-1$ \} using weight limit $w - w_i$.

\[ OPT(i, w) = \begin{cases} 0 & \text{if } i = 0 \\ OPT(i-1, w) & \text{if } w_i > w \\ \max \{ OPT(i-1, w), \ v_i + OPT(i-1, w - w_i) \} & \text{otherwise} \end{cases} \]

Knapsack Problem: Dynamic Programming

Knapsack: fill an $(n+1) \times (W+1)$ array.

```
Input: n, W, weights w_1,…,w_n, values v_1,…,v_n

for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    for w = 0 to W
        if w_i > w
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max(M[i-1, w], v_i + M[i-1, w - w_i])
return M[n, W]
```

Knapsack Algorithm

Example with $W = 11$; the rows of the table correspond to the item prefixes $\emptyset$, \{1\}, \{1, 2\}, \{1, 2, 3\}, \{1, 2, 3, 4\}, \{1, 2, 3, 4, 5\}:

<table>
<thead>
<tr> <th>Item</th> <th>Value</th> <th>Weight</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>1</td> <td>1</td> </tr>
<tr> <td>2</td> <td>6</td> <td>2</td> </tr>
<tr> <td>3</td> <td>18</td> <td>5</td> </tr>
<tr> <td>4</td> <td>22</td> <td>6</td> </tr>
<tr> <td>5</td> <td>28</td> <td>7</td> </tr>
</tbody>
</table>

At (1,1) we can fit item 1, and from then on all we have is item 1. At (2,2) we can either not take item 2 (value 1, previous row at column 2) or take item 2 (value 6, previous row at column 0 plus 6). At (2,3) we can either not take item 2 (value 1) or take item 2 together with item 1 (value 7); from then on we can fit both items 1 and 2 (value 7).

OPT: 40. How do we find the objects in the optimum solution? Walk back through the table!!
Knapsack Algorithm

Walking back from the optimum OPT = 40 at $M[5, 11]$:
- $n = 5$: don't take object 5 ($v_5 + M[4, 11 - 7] = 28 + 7 < 40$).
- $n = 4$: take object 4.
- $n = 3$: take object 3, and now nothing more fits.

So the choice set is \{3, 4\} and the choice vector is [0, 0, 1, 1, 0].

Knapsack Problem: Running Time

Running time: $\Theta(nW)$.
- Not polynomial in the input size! $W$ can be exponential in $n$.
- The decision version of knapsack is NP-complete. [Chapter 34, CLRS]

Knapsack approximation algorithm: there exists a poly-time algorithm that produces a feasible solution whose value is within 0.01% of the optimum.
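A runnable Python version of this table-filling and walk-back (the slides give only pseudocode; function and variable names here are illustrative):

```python
# Bottom-up 0/1 knapsack with solution reconstruction.
# Items are 1-indexed: w[i], v[i] for i = 1..n.

def knapsack(W, w, v):
    n = len(w) - 1
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            if w[i] > cap:
                M[i][cap] = M[i - 1][cap]          # item i does not fit
            else:
                M[i][cap] = max(M[i - 1][cap],
                                v[i] + M[i - 1][cap - w[i]])

    # walk back through the table to recover the choice set
    chosen, cap = [], W
    for i in range(n, 0, -1):
        if w[i] <= cap and M[i][cap] == v[i] + M[i - 1][cap - w[i]]:
            chosen.append(i)
            cap -= w[i]
    return M[n][W], sorted(chosen)

w = [0, 1, 2, 5, 6, 7]
v = [0, 1, 6, 18, 22, 28]
print(knapsack(11, w, v))   # (40, [3, 4])
```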
{"Source-Url": "https://www.cs.colostate.edu/~cs320/Fall21/more_resources/slides/09_dynpro.pdf", "len_cl100k_base": 6638, "olmocr-version": "0.1.50", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 48918, "total-output-tokens": 6712, "length": "2e12", "weborganizer": {"__label__adult": 0.00033593177795410156, "__label__art_design": 0.00030612945556640625, "__label__crime_law": 0.0004837512969970703, "__label__education_jobs": 0.0013570785522460938, "__label__entertainment": 7.110834121704102e-05, "__label__fashion_beauty": 0.0001785755157470703, "__label__finance_business": 0.00040340423583984375, "__label__food_dining": 0.000484466552734375, "__label__games": 0.0008783340454101562, "__label__hardware": 0.0016393661499023438, "__label__health": 0.0008563995361328125, "__label__history": 0.00029206275939941406, "__label__home_hobbies": 0.00020253658294677737, "__label__industrial": 0.0009140968322753906, "__label__literature": 0.0002135038375854492, "__label__politics": 0.00030112266540527344, "__label__religion": 0.0005340576171875, "__label__science_tech": 0.1038818359375, "__label__social_life": 0.00010025501251220704, "__label__software": 0.008331298828125, "__label__software_dev": 0.876953125, "__label__sports_fitness": 0.000476837158203125, "__label__transportation": 0.0007958412170410156, "__label__travel": 0.00023698806762695312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15531, 0.0485]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15531, 0.18894]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15531, 0.73637]], "google_gemma-3-12b-it_contains_pii": [[0, 143, false], [143, 405, null], [405, 1112, null], [1112, 2053, null], [2053, 2714, null], [2714, 3254, null], [3254, 4403, null], [4403, 5371, null], [5371, 6358, null], [6358, 7196, null], [7196, 8207, null], [8207, 9267, null], [9267, 9848, null], [9848, 11363, null], [11363, 12399, null], [12399, 13174, null], [13174, 13743, null], [13743, 14101, null], [14101, 14814, null], [14814, 15531, null]], "google_gemma-3-12b-it_is_public_document": [[0, 143, true], [143, 405, null], [405, 1112, null], [1112, 2053, null], [2053, 2714, null], [2714, 3254, null], [3254, 4403, null], [4403, 5371, null], [5371, 6358, null], [6358, 7196, null], [7196, 8207, null], [8207, 9267, null], [9267, 9848, null], [9848, 11363, null], [11363, 12399, null], [12399, 13174, null], [13174, 13743, null], [13743, 14101, null], [14101, 14814, null], [14814, 15531, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15531, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15531, null]], "pdf_page_numbers": [[0, 143, 1], [143, 405, 2], 
[405, 1112, 3], [1112, 2053, 4], [2053, 2714, 5], [2714, 3254, 6], [3254, 4403, 7], [4403, 5371, 8], [5371, 6358, 9], [6358, 7196, 10], [7196, 8207, 11], [8207, 9267, 12], [9267, 9848, 13], [9848, 11363, 14], [11363, 12399, 15], [12399, 13174, 16], [13174, 13743, 17], [13743, 14101, 18], [14101, 14814, 19], [14814, 15531, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15531, 0.225]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
93ce6e6876299fe3be97ca90d1e1eab214b9d25e
[REMOVED]
{"Source-Url": "https://www.researchgate.net/profile/Baris_Oezkan/publication/226309301_Formalization_Studies_in_Functional_Size_Measurement_How_Do_They_Help/links/549743220cf20f487d31661d.pdf?origin=publication_detail", "len_cl100k_base": 5468, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 24032, "total-output-tokens": 8831, "length": "2e12", "weborganizer": {"__label__adult": 0.00029468536376953125, "__label__art_design": 0.0003180503845214844, "__label__crime_law": 0.00023353099822998047, "__label__education_jobs": 0.0006833076477050781, "__label__entertainment": 5.65648078918457e-05, "__label__fashion_beauty": 0.00013506412506103516, "__label__finance_business": 0.00025463104248046875, "__label__food_dining": 0.0002522468566894531, "__label__games": 0.0004820823669433594, "__label__hardware": 0.0004322528839111328, "__label__health": 0.00028777122497558594, "__label__history": 0.00023305416107177737, "__label__home_hobbies": 5.9545040130615234e-05, "__label__industrial": 0.00022017955780029297, "__label__literature": 0.0003964900970458984, "__label__politics": 0.0001760721206665039, "__label__religion": 0.0002872943878173828, "__label__science_tech": 0.01024627685546875, "__label__social_life": 8.362531661987305e-05, "__label__software": 0.00820159912109375, "__label__software_dev": 0.97607421875, "__label__sports_fitness": 0.00018453598022460935, "__label__transportation": 0.0002713203430175781, "__label__travel": 0.00014710426330566406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38364, 0.04211]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38364, 0.44748]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38364, 0.90049]], "google_gemma-3-12b-it_contains_pii": [[0, 4615, false], [4615, 11375, null], [11375, 14784, null], [14784, 18514, null], [18514, 23513, null], [23513, 29835, null], [29835, 35410, null], [35410, 38364, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4615, true], [4615, 11375, null], [11375, 14784, null], [14784, 18514, null], [18514, 23513, null], [23513, 29835, null], [29835, 35410, null], [35410, 38364, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38364, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38364, null]], "pdf_page_numbers": [[0, 4615, 1], [4615, 11375, 2], [11375, 14784, 3], [14784, 18514, 4], [18514, 23513, 5], [23513, 29835, 6], [29835, 35410, 7], [35410, 38364, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38364, 0.0625]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
f2ca705c9f2f1185bdd3185107e49e67ed8ede69
"Abstract—The key problem of successful developing of the software intensive system (SIS) is adequ(...TRUNCATED)
"{\"Source-Url\": \"https://www.polibits.gelbukh.com/2010_42/42_05.pdf\", \"len_cl100k_base\": 6351,(...TRUNCATED)
olmocr_science_pdfs
2024-11-29
2024-11-29
2c85983e8f7cee296e3f3cb77e5d965d52a26963
"1 INTRODUCTION\n\nA database Web service consists of a Web service interface with operations that p(...TRUNCATED)
"{\"Source-Url\": \"http://www.scitepress.org/papers/2008/17069/17069.pdf\", \"len_cl100k_base\": 68(...TRUNCATED)
olmocr_science_pdfs
2024-12-09
2024-12-09
8c7748c1aff50d1881b471763844ea993763761e
"Alignment in Enterprise Systems Implementations: The Role of Ontological Distance\n\nMichael Rosema(...TRUNCATED)
"{\"Source-Url\": \"http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1123&context=icis2004\", \"l(...TRUNCATED)
olmocr_science_pdfs
2024-11-25
2024-11-25
4cfcadb1727d0c79337b134fcad384fdd49c8d30
"INT02-C. Understand integer conversion rules\n\nConversions can occur explicitly as the result of a(...TRUNCATED)
"{\"Source-Url\": \"https://wiki.sei.cmu.edu/confluence/download/temp/pdfexport-20221220-201222-0654(...TRUNCATED)
olmocr_science_pdfs
2024-11-28
2024-11-28