6th International Conference on
Enterprise Information Systems

ICEIS 2004
ABSTRACTS

Organized by:

INSTICC

Co-organized by:


Universidade Portucalense

Area 1 - Databases and Information Systems Integration
Area 2 - Artificial Intelligence and Decision Support Systems
Area 3 - Information Systems Analysis and Specification
Area 4 - Software Agents and Internet Computing
Area 5 - Human-Computer Interaction


AREA 1 - Databases and Information Systems Integration

Title:

A RECONFIGURATION ALGORITHM FOR DISTRIBUTED COMPUTER NETWORKS

Author(s):

Chanan Glezer , Moshe Zviran

Abstract: This article presents an algorithmic reconfiguration model, combining mechanisms of load balancing and fault tolerance in order to increase utilization of computer resources in a distributed multi-server, multi-tasking environment. The model has been empirically tested in a network of computers controlling telecommunication hubs and is compared to previous efforts to address this challenge.
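The interplay of load balancing and fault tolerance that such a reconfiguration model must manage can be sketched in a few lines. This is a toy illustration under our own assumptions, not the authors' algorithm: new tasks go to the least-loaded live server, and a failed server's tasks are redistributed among the survivors.

```python
# Toy sketch (not the paper's model): least-loaded assignment plus
# redistribution of a failed server's tasks.

class Cluster:
    def __init__(self, servers):
        self.load = {s: 0 for s in servers}   # tasks per live server
        self.assignment = {}                   # task -> server

    def assign(self, task):
        # load balancing: pick the least-loaded live server
        server = min(self.load, key=self.load.get)
        self.load[server] += 1
        self.assignment[task] = server
        return server

    def fail(self, server):
        # fault tolerance: reassign the failed server's tasks
        orphaned = [t for t, s in self.assignment.items() if s == server]
        del self.load[server]
        for t in orphaned:
            self.assign(t)

c = Cluster(["hub1", "hub2", "hub3"])
for t in range(6):
    c.assign(f"task{t}")
c.fail("hub1")   # task0 and task3 move to the surviving hubs
```

A real reconfiguration algorithm would also account for heterogeneous task costs and in-flight state, which this sketch deliberately omits.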

Title:

BVA+ - A BIT VECTORS ALGORITHM FOR ACCELERATING QUERIES IN MULTILEVEL SECURE DATABASES

Author(s):

Ramzi Haraty , Arda Zeitunlian

Abstract: Much research has been done in the area of multilevel database systems, especially on security and on accelerating queries. In this paper we present BVA+, an algorithm based on bit vectors for accelerating queries in multilevel secure database systems. Like its predecessor BVA, the BVA+ algorithm follows the classic SeaView model, but it recovers query output from single-level relations in a faster and more space-efficient manner than previous work on this subject. In addition, the BVA+ algorithm does not produce spurious or extra tuples, which have always been a major problem in the area of multilevel secure database systems.
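The abstract does not detail BVA+ itself, but the general idea of using bit vectors to pre-filter tuples by security level can be sketched as follows (our illustration of the technique, not the BVA+ algorithm; level names and the clearance rule are assumptions):

```python
# Illustrative only: one bit per tuple, one vector per security level.
# A query at clearance c may see tuples whose level is <= c; OR-ing the
# per-level vectors yields a visibility mask in one pass, avoiding a
# per-tuple level comparison at query time.

LEVELS = ["unclassified", "confidential", "secret"]

def build_vectors(tuple_levels):
    """tuple_levels[i] is the security level of tuple i."""
    vectors = {lvl: 0 for lvl in LEVELS}
    for i, lvl in enumerate(tuple_levels):
        vectors[lvl] |= 1 << i
    return vectors

def visible_mask(vectors, clearance):
    # OR together the vectors of all levels up to the clearance
    mask = 0
    for lvl in LEVELS[: LEVELS.index(clearance) + 1]:
        mask |= vectors[lvl]
    return mask

vecs = build_vectors(["unclassified", "secret", "confidential", "unclassified"])
mask = visible_mask(vecs, "confidential")   # tuples 0, 2 and 3 visible
```

The appeal of the representation is that the mask is computed with a handful of word-wide OR operations regardless of how many tuples the relation holds.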

Title:

CONNECTIVITY OF ERP SYSTEM

Author(s):

Vatcharaporn Esichaikul

Abstract: This study proposes criteria for determining the appropriate connectivity of ERP systems. Its result is a framework that assists ERP adopters in selecting an integration approach appropriate to their needs. A survey was conducted among ERP users to learn their opinions on the factors and criteria affecting the connectivity of ERP systems. The findings reveal that the data-oriented and application-integration-oriented approaches are the most preferred integration methodologies, and that the criteria for evaluating ERP connectivity are the nature of the organization's business processes, the availability of technologies and service support, the nature of the organization's information systems, system flexibility, degree of integration, transaction volume, implementation cost, ease of maintenance, implementation time, security, and budget. Finally, the study proposes a framework for determining the appropriate connectivity of ERP systems.

Title:

CONCEPTUAL MODEL FOR SOFTWARE FAULT LOCALIZATION

Author(s):

Abdallah Tubaishat

Abstract: Existing cognitive science and psychology studies suggest that fault localization requires a bi-level approach combining shallow and deep reasoning. This approach forms the underpinnings of our Conceptual Model for Software Fault Localization (CMSFL), which aids programmers with the problem of software fault localization. CMSFL proposes that, during the fault localization process, programmers build two mental models: an actual code model (the buggy code) and an expectation model (the correct code). A multi-dimensional approach with both shallow and deep reasoning phases is suggested to enhance the probability of localizing many types of faults.

Title:

ASSESSING EFFORT PREDICTION MODELS FOR CORRECTIVE SOFTWARE MAINTENANCE - AN EMPIRICAL STUDY

Author(s):

Eugenio Pompella , Andrea De Lucia , Silvio Stefanucci

Abstract: We present an assessment of an empirical study aimed at building effort estimation models for corrective maintenance projects. We show results from the application of the prediction models to a new corrective maintenance project within the same enterprise and on the same type of software systems as in a previous study. The data available for the new project are finer grained, following the indications devised in the first study. This allowed us to improve the confidence in our previous empirical analysis by confirming most of the hypotheses made, and provided further useful indications for understanding the company's maintenance process in a quantitative way.

Title:

SUPPORTING KNOWLEDGE REUSE DURING THE SOFTWARE MAINTENANCE PROCESS THROUGH AGENTS

Author(s):

Mario Piattini , Aurora Vizcaino

Abstract: Knowledge management has become an important topic as organisations wish to take advantage of the information that they produce and that can be brought to bear on important decisions. This work describes a system to manage and reuse the information (and knowledge) generated during the software maintenance process, which consumes a large part of software lifecycle costs. The architecture of the system is formed of a set of agent communities, each managing a different type of knowledge. The communities' agents have the goal of encouraging the reuse of good solutions and taking advantage of information obtained from previous experience; in consequence, software maintenance is made easier and costs and effort are reduced. To achieve this goal, agents use several reasoning techniques, such as case-based reasoning and decision-tree-based algorithms, which allow them to generate new knowledge from the information that they manage.

Title:

RETRO-DYNAMICS AND E-BUSINESS MODEL APPLICATION FOR DISTRIBUTED DATA MINING USING MOBILE AGENTS

Author(s):

Mohamed Medhat , Ezendu Ariwa

Abstract: Distributed data mining (DDM) is the semi-automatic extraction of patterns from distributed data sources. The next generation of data mining studies will be distributed, for several reasons. First of all, most currently used data mining techniques require all data to be resident in memory, i.e., the mining process must be done at the data source site; this is not feasible given the exponential growth of the data stored in organizations' databases. Another important reason is that data is inherently distributed for fault tolerance purposes. DDM implementations require two main decisions: a distributed computation paradigm (message passing, RPC, mobile agents) and an integration technique (knowledge probing, CDM) for aggregating and integrating the results of the various distributed data miners. Recently, the mobile agent has emerged as a new distributed computation paradigm and is widely used. A mobile agent is a thread of control that can trigger the transfer of arbitrary code to a remote computer. The mobile agent paradigm has several advantages: it conserves bandwidth and reduces latencies; complex, efficient and robust behaviours can be realized with surprisingly little code; and mobile agents can be used to support weak clients, allow robust remote interaction, and provide scalability. In this paper, we propose a new model that benefits from the mobile agent paradigm to build an efficient DDM model. Since the size of the data to be migrated in the DDM process is huge, our model overcomes the communication bottleneck by using mobile agents. It divides the DDM process into several stages that can be executed in parallel on different data sources: a preparation stage, a data mining stage and a knowledge integration stage. We also include a special section on how current e-business models can use our model to reinforce decision support in the organization. A cost analysis in terms of the time consumed by each sub-process (communication or processing) is given to illustrate the overheads of this model and of the other models.

Title:

IMPORTANT FACTORS IN ERP SYSTEMS IMPLEMENTATIONS

Author(s):

Piotr Soja

Abstract: This article discusses the problem of success factors in ERP system implementations. A review of the literature on success factors is presented and a collection of potential ERP implementation success factors is identified. Next, the results of a survey are presented in which respondents were asked about the importance of each factor for implementation success. There were two groups of respondents: the first consisted of people from Polish enterprises implementing ERP systems, and the second comprised experts working for ERP system suppliers. On the basis of this research, the factors that the respondents consider most important and necessary are identified, as well as the least important ones.

Title:

IDENTIFYING CLONES IN DYNAMIC WEB SITES USING SIMILARITY THRESHOLDS

Author(s):

Giuseppe Scanniello , Andrea De Lucia , Genny Tortora

Abstract: We propose an approach to automatically detect duplicated pages in dynamic Web sites. Our approach analyzes both the page structure, implemented by specific sequences of HTML tags, and the displayed content. In addition, for each pair of dynamic pages we also consider the similarity degree of their scripting source code. The similarity degree of two pages is computed using different similarity metrics for the different parts of a web page based on the Levenshtein string edit distance. We have implemented a prototype to automate the clone detection process on web applications developed using JSP technology and used it to validate our approach in a case study.
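The similarity measure the approach relies on can be sketched in a few lines: a normalized Levenshtein distance over two sequences (here, HTML tag sequences), with a threshold deciding whether two pages count as clones. The threshold value and tag sequences below are ours, for illustration only.

```python
# Minimal sketch of clone detection via normalized edit distance.
# The 0.7 threshold is an assumption for illustration, not the paper's.

def levenshtein(a, b):
    """Classic dynamic-programming string edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    """1.0 for identical sequences, 0.0 for maximally different ones."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

tags1 = ["html", "body", "table", "tr", "td"]
tags2 = ["html", "body", "table", "tr", "th"]
CLONE_THRESHOLD = 0.7                 # illustrative value
is_clone = similarity(tags1, tags2) >= CLONE_THRESHOLD
```

In the paper's approach this kind of metric is computed separately for the tag structure, the displayed content and the scripting code of each page pair, with different thresholds per part.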

Title:

INFORMATION TECHNOLOGY STRATEGIC PLANNING: ADAPTING FACTS AND BELIEFS TO BUSINESS STRATEGY GENERATION

Author(s):

Julio Bernardo Clempner Kerik , Agustín Francisco Gutiérrez Tornés

Abstract: This paper introduces a framework for adapting facts and beliefs to business strategy generation. The adaptation process model is supported by an information technology strategic planning (ITSP) model and methodology, and the aim of this paper is to validate that model. In the ITSP model, the real world is composed of entities related in terms of goals, beliefs, etc.; through interaction they incorporate or reject facts or beliefs related to the environmental conditions. The adaptation concept is proposed as a means of generating business strategies. Two methods are proposed: 1) an inference logic method, which employs facts about the environmental conditions to generate new business strategies; and 2) case-based reasoning, in which stored cases recording specific prior episodes induce the incorporation of business strategies. Both methods are presented, and the adaptation process is illustrated through application examples.

Title:

ERP BASED BUSINESS PROCESS REENGINEERING IN A HUMAN RESOURCES DEPARTMENT: A CASE STUDY APPROACH

Author(s):

Theodora Chatzikallia , Konstantinos Chertouras

Abstract: Modern organizations are constantly facing new challenges regarding the reengineering of their business departments and processes. By the term Business Process we mean the profile of specific methods that can be employed to perform specific business tasks. In general, each Business Process is uniquely tailored to the organization it applies to; therefore, the resolution of a Business Process related problem is typically carried out with custom methods developed within the organization. In this paper we propose the use of Enterprise Resource Planning (ERP) as the basis for reengineering a business department and, effectively, the Business Process that it carries through. We discuss the application of ERP in the reengineering of the Business Process of a real-world organizational department (a Human Resources Department), which led to a significant productivity enhancement.

Title:

ORGANIZATIONAL AND TECHNOLOGICAL CRITICAL SUCCESS FACTORS BEHAVIOR ALONG THE ERP IMPLEMENTATION PHASES

Author(s):

Jose Esteves , Joan Pastor

Abstract: In recent years several researchers have studied critical success factors in ERP implementations, but to date there has been little research on the management and operationalization of critical success factors within ERP implementation projects. The identification of factors leading to the success or failure of ERP systems is an issue of increasing importance, since the number of organizations choosing the ERP path keeps growing. In this paper, we analyze the evolution of organizational and technological factors along the ERP implementation phases. Our findings suggest that while both good organizational and technological perspectives are essential for a successful ERP implementation project, their importance shifts as the project moves through its lifecycle.

Title:

ACME-DB: AN ADAPTIVE CACHING MECHANISM USING MULTIPLE EXPERTS FOR DATABASE BUFFERS

Author(s):

Markus Kirchberg

Abstract: An adaptive caching algorithm known as Adaptive Caching with Multiple Experts (ACME) has recently been presented in the field of web caching. We explore the migration of ACME to the database caching environment. By integrating recently proposed database replacement policies into ACME's existing policy pool, we attempt to gauge ACME's ability to utilise newer methods of database caching. The results suggest that ACME is indeed well suited to the database environment and performs as well as the best caching policy in its policy pool at any particular moment in the request stream. Although execution time increases as more policies are integrated into ACME, the overall processing time improves drastically for erratic access patterns when compared to static policies.
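The "multiple experts" idea — several replacement policies evaluated side by side, with the cache deferring to the best performer on the request stream so far — can be illustrated with a toy sketch. This is our simplification, not the ACME machinery (ACME uses weighted expert updates rather than a plain hit count):

```python
from collections import OrderedDict

# Toy sketch: each expert is a replacement policy simulated virtually;
# hit counts on the shared request stream identify the current best.

class LRUExpert:
    def __init__(self, size):
        self.size, self.cache, self.hits = size, OrderedDict(), 0
    def access(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)     # refresh recency
            self.hits += 1
        else:
            if len(self.cache) >= self.size:
                self.cache.popitem(last=False)  # evict least recent
            self.cache[key] = True

class FIFOExpert:
    def __init__(self, size):
        self.size, self.cache, self.hits = size, OrderedDict(), 0
    def access(self, key):
        if key in self.cache:
            self.hits += 1                  # no reordering on hit
        else:
            if len(self.cache) >= self.size:
                self.cache.popitem(last=False)  # evict oldest insert
            self.cache[key] = True

def best_expert(stream, experts):
    for key in stream:
        for e in experts:
            e.access(key)
    return max(experts, key=lambda e: e.hits)

experts = [LRUExpert(2), FIFOExpert(2)]
winner = best_expert([1, 2, 1, 3, 1, 2, 1], experts)
```

On this access pattern the recency-favouring expert wins; on a scan-like pattern the ranking can flip, which is exactly the adaptivity the abstract describes.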

Title:

EVALUATION OF A DOCUMENT DATABASE DESCRIPTION BY DIFFERENT XML SCHEMAS

Author(s):

Pierre Bazex , Madani Kenab , Tayeb Ould Braham

Abstract: A document database can be represented by different XML schemas, depending on the content of the documents it contains. Starting from a simple conceptual schema of a database containing structured data, represented in the form of a document, we propose and evaluate different XML schemas describing this database in order to determine the best one. To build these XML schemas we propose different descriptions of the key concepts of the relational model (relation, key and reference link), as well as different nestings between the elements of the document (total nesting, partial nesting and no nesting). We conclude that the best-adapted XML schema depends on the intended use of the database and is a combination of the representations of the different concepts. This work is a preliminary step towards the integration of a relational database through the best XML schema. Keywords: Entity-Association, Relational Concepts, XML Schema, XML Document, Nesting Elements.

Title:

TRANSACTION DESIGN FOR DATABASES WITH HIGH PERFORMANCE AND AVAILABILITY

Author(s):

Lars  Frank

Abstract: When many concurrent transactions, such as ERP and e-commerce orders, want to update the same stock records, long duration locking may reduce the availability of the locked data. Therefore, transactions are often designed without analyzing the consequences of losing the traditional ACID (Atomicity, Consistency, Isolation and Durability) properties. In this paper, we analyze how low isolation levels, optimistic concurrency control, short duration locks, and countermeasures against isolation anomalies can be used to design transactions for databases with high performance and availability. Long duration locks are locks that are held until a transaction has been committed, i.e. the data of a record is locked from the first read to the last update of any data used by the transaction. This decreases the availability of the locked data for concurrent transactions, which is why optimistic concurrency control and low isolation levels are often used. However, in systems with relatively many updates, such as ERP and e-commerce systems, low isolation levels cannot solve the availability problem, as all update locks must be exclusive. In such situations, we recommend the use of short duration locks: local locks that are released as soon as possible, i.e. data will, for example, not be locked across a dialog with the user. Normally, databases where only short duration locks are used do not have the traditional ACID properties, as at least the isolation property is missing when locks are not held across a dialog with the user. The problems caused by the missing ACID properties may be managed by using approximated ACID properties, i.e. from an application point of view the system should function as if all the traditional ACID properties had been implemented. Examples from e-commerce illustrate how to use the transaction design recommended in this paper. We have cooperated with one of the major ERP software companies in designing our transaction model.
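One common way to obtain short-duration-lock behaviour is optimistic concurrency control with version numbers: the row is read without a lock, and the update succeeds only if the version is unchanged, so nothing stays locked across a user dialog. The sketch below is our illustration of that general technique, not the paper's transaction model (whose countermeasures against isolation anomalies go further):

```python
# Sketch of optimistic concurrency via a version check. The "lock" is
# held only for the compare-and-update itself, never across a dialog.

class Stock:
    def __init__(self, qty):
        self.qty, self.version = qty, 0

    def read(self):
        return self.qty, self.version

    def try_decrement(self, amount, seen_version):
        # succeeds only if the row is unchanged since it was read
        if self.version != seen_version:
            return False          # concurrent update won; caller retries
        self.qty -= amount
        self.version += 1
        return True

row = Stock(10)
qty, v = row.read()
row.try_decrement(3, v)           # a concurrent order commits first
ok = row.try_decrement(2, v)      # our stale attempt is rejected
if not ok:
    qty, v = row.read()           # re-read and retry with fresh version
    ok = row.try_decrement(2, v)
```

The retry loop is the price paid for giving up long duration locks: availability improves, but the application must tolerate (or compensate for) rejected updates.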

Title:

INCREMENTAL DATA QUALITY IN THE DATA WAREHOUSE

Author(s):

Karsten Boye Rasmussen

Abstract: The data warehouse is the cornerstone of the production of business knowledge in the organization, and the quality of that business knowledge rests on the quality of the data in the data warehouse. Dimensions of data quality in the data warehouse have been determined through intuitive, empirical and ontological approaches. The first point of this working paper is that data quality is not a static measure and that awareness of the data quality dimensions is a prerequisite for improving data quality. The second point is that selection is the cornerstone of data quality in the data warehouse in relation to the quality dimensions. Thirdly, post-load improvement of data quality is obtainable: metadata can be added incrementally, containing information on the use of data – and thus the users' selections within the data warehouse – and on the users' judgment of the data.

Title:

A MIDDLEWARE FOR THE MANAGEMENT OF LARGE UTILITIES PLANTS

Author(s):

Andrea Rossettini , Salvatore Cavalieri , Carmelo Floridia , Fabrizio D'Urso

Abstract: The paper presents the main features of the still-running European project Mobicossum (IST 1999-57455), a CRAFT project approved under the Fifth Framework Programme. The project aims to define a middleware offering services for the management of large plants in the field of gas and water distribution and waste water treatment systems. The paper explains the main features of the project, focusing on the implementation of the core of the middleware, called the Generalised Interface.

Title:

ACQUIRING AND INTEGRATING EXTERNAL DATA INTO DATA WAREHOUSES

Author(s):

Mattias Strand , Benkt  Wangler , Carl-Fredrik Laurén

Abstract: Data warehouses (DWs) have become one of the major IT investments of the last decades, and in order to fully exploit their potential, more and more organizations are acquiring and integrating external data into their star schemas. However, the literature covering external data acquisition and integration is limited. Therefore, this paper presents the results of an interview study conducted among banking organizations, aimed at identifying different approaches for acquiring and integrating external data into DWs. The results show that there are many different approaches, depending on the purpose and structure of the data being acquired. In addition, the most common external data acquisition and integration process is presented and discussed.

Title:

A CONCEPTUAL FRAMEWORK FOR FORECASTING ERP IMPLEMENTATION SUCCESS - A FIRST STEP TOWARDS THE CREATION OF AN IMPLEMENTATION SUPPORT TOOL

Author(s):

Fredrik Carlsson , Andreas  Nilsson , Johan Magnusson

Abstract: The soaring popularity of standardized information systems sold en masse under the label of Enterprise Resource Planning (ERP) systems is kept somewhat in check by the steady stream of industry reports of implementations gone bad. According to some researchers, as many as 90% of all initiated ERP implementation projects can be regarded as failures as a result of changes in scope, prolongation of the project time or simply budget overruns. With the implementation of an ERP system being a very costly and risky endeavour, organizations considering “getting on the bandwagon” stand to gain much from pre-emptively forecasting the probability of success of an ERP implementation in their enterprise. Given this, the purpose of this paper is to investigate a possible conceptual framework for forecasting ERP implementation success and to discuss the role of such a framework in a software-based tool. This was achieved through an initial in-depth literature review aimed at finding factors affecting the outcome of ERP implementation projects. The results were then communicated to an industrial support group comprised of possible ERP implementation stakeholders. After lengthy discussions concerning the usability, validity and reliability of the proposed list of factors, a conceptual framework for forecasting ERP implementation success was agreed upon, and then tested against a number of possible stakeholders outside the industrial support group. As the results show, we have been able to create a conceptual framework for forecasting ERP implementation success that is currently in its second wave of testing. The usability, validity and reliability of the framework are discussed and elaborated upon; the paper concludes that the perceived usability, and hence also the value, of the conceptual framework is substantial, whereas its validity and reliability remain to be tested.

Title:

VIRTUAL ORGANIZATIONS AND DATABASE ACCESS - A CASE STUDY

Author(s):

Marko NIINIMAKi , Mikko Pitkanen , John White , Tapio Niemi

Abstract: This paper presents a case study of using virtual organization technologies for database access. A virtual organization (VO) is a collection of people in the same administrative domain. A user can belong to many virtual organizations and have a different role (user, client, administrator, ...) in each of them. Authorization of a user to different services within a VO is based on the user's identity and on a service called the Virtual Organization Membership Service (VOMS) that maps these identities to roles. The user's identity can be established in two ways. If the user communicates with the service using a web browser, the user's certificate must be included in the browser. Another possibility is to use a proxy certificate: during proxy creation, the program that writes the proxy adds to the user's proxy certificate information about his participation in different VOs and his role in each of them. To demonstrate the use of these VO proxy certificates, we have extended the functionality of Spitfire, a relational database front end. This involves assigning the user a database role (read/write/update) based on the VO information in his certificate. There is also a GUI for creating the mappings between VO roles and database access roles.
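The core mapping step — from VO membership attributes in a certificate to database privileges — can be sketched as a simple lookup. The VO names, role names and privilege sets below are hypothetical, chosen for illustration; they are not Spitfire's or VOMS's actual vocabulary:

```python
# Hypothetical mapping from (VO, role) pairs found in a proxy
# certificate to sets of database privileges.

VO_TO_DB = {
    ("cms", "admin"):    {"read", "write", "update"},
    ("cms", "user"):     {"read"},
    ("atlas", "client"): {"read", "write"},
}

def db_privileges(cert_attributes):
    """cert_attributes: iterable of (vo, role) pairs from the proxy.
    Privileges accumulate across all of the user's VO memberships;
    unknown pairs grant nothing."""
    granted = set()
    for vo_role in cert_attributes:
        granted |= VO_TO_DB.get(vo_role, set())
    return granted

privs = db_privileges([("cms", "user"), ("atlas", "client")])
```

A deliberate design choice in such a scheme is that the database never sees individual identities, only roles, so user churn inside a VO requires no database-side changes.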

Title:

REASONS FOR ERP ACQUISITION

Author(s):

Sami Sarpola , Sanna Laukkanen , Petri Hallikainen

Abstract: Numerous reasons for why organisations acquire Enterprise Resource Planning (ERP) systems have been proposed in prior research. In this paper we form a synthesis of these different reasons and categorize them into technological and business reasons for acquiring ERP. Further, we test the validity of these reasons with empirical data concerning the acquisition of ERP systems in 41 Finnish companies.

Title:

DELEGATING AUTHORITY IN A DISTRIBUTED INFORMATION MANAGEMENT SYSTEM

Author(s):

Janet Barnett , Barbara Vivier , Kareem Aggour

Abstract: The need to manage large information repositories securely in a distributed environment increases with the growth of the Internet. To address this need, a system capable of managing the contents of an LDAP directory over the Web has been designed and developed. This system allows for the directory’s data to be divided into communities and supports the delegation of administrative authority over those communities to a distributed set of administrators. The communities may be subdivided recursively into subgroups, and rights over those subgroups also may be restricted. Thus, system administrators can dynamically delegate subsets of their permissions over a subset of their managed data, allowing for the effective control of permissions over the data within distributed organizations. The system solves the delegated administration problem for managing the contents of an LDAP directory in a distributed environment. Today, it supports the administration of over 20 production directories by well over 2000 distributed administrators.

Title:

DISTRIBUTED DATABASE SYSTEM OF AGRICULTURAL SCIENCE AND TECHNOLOGY ALLIANCE OF UNIVERSITIES IN CHINA

Author(s):

Longyong You , Junjing Yuan , Jiayun Wang , Jian Zhang

Abstract: Three problems need to be solved in establishing the Distributed Database System of the information platform of the Agricultural Science and Technology Alliance of Chinese universities: distribution of the data resources, decomposition and optimization of distributed queries, and safety of the data system. In this paper, firstly, through an overall analysis of the contents of the Alliance, we establish a mixed data distribution system, making the database system more integrated, consistent and reliable while improving the efficiency of local applications. Secondly, because the members of the alliance adopt different data modes, query decomposition and optimization over the global mode by means of extended semi-joins is an effective method to improve system response time. Finally, by combining asymmetric and symmetric encryption, we solve the safety problems of database identity validation, data transmission, access control, etc.

Title:

A DATA WAREHOUSE ARCHITECTURE FOR BRAZILIAN SCIENCE AND TECHNOLOGY ENVIRONMENT

Author(s):

Andre Luís Menolli , Maria Madalena Dias

Abstract: Science and technology in Brazil are areas with few available resources, and these scarce resources are often badly used. The data warehouse is a tool that can enable a better distribution of these resources. This article considers some issues in the development of a data warehouse for Science & Technology management. The paper describes the need for a decision-support system for the distribution of the resources destined for Science & Technology in Brazil, and presents a data warehouse architecture that is being developed to support this system. The data modeling characteristics defined for the proposed data warehouse architecture are also presented.

Title:

SOFTWARE PRODUCT LINE ANALYSIS OF ENTERPRISE INFORMATION SYSTEM

Author(s):

Luiz Fernando Capretz , Faheem Ahmed

Abstract: Nowadays, the geographical and physical constraints that allowed only fixed and static placement of resources have vanished completely within enterprises that use information technology to integrate their business needs. The object-oriented programming approach has paved the way to the reusability of components, thus reducing cost and development effort to a certain extent. The software product line has further strengthened the concepts of reusability and component-based architecture. In this paper we analyze the concept of software product line analysis for an enterprise information system, which will help organizations construct a software product line to produce high-quality software products that fulfill their information technology requirements.

Title:

AN APS ARCHITECTURE FOR WEB SERVICES BASED ENTERPRISE INTEGRATION

Author(s):

William Liu , FengYu Wang , Tay Jin Chua

Abstract: Web Services technology is widely used to address enterprise integration within a company or across organizations, owing to its language and operating system independence and its support for loosely coupled integration. This paper presents an architecture for an APS (Advanced Planning and Scheduling) system, describing an APS request handling engine and web services based functions, in an attempt to solve integration issues among APS, MES, ERP and other manufacturing systems that cannot be handled properly with current approaches. In addition, as manufacturing planning has been extended to cover the entire supply chain, the paper also discusses the changes to the proposed architecture necessary to cater for this extension, which helps to address capacity issues in the bigger picture.

Title:

OBTAINING E-R DIAGRAMS SEMI-AUTOMATICALLY FROM NATURAL LANGUAGE SPECIFICATIONS

Author(s):

Farid Meziane

Abstract: Since their inception, entity relationship models have played a central role in systems specification, analysis and development. They have become an important part of several development methodologies and standards, such as SSADM. Obtaining entity relationship models can, however, be a lengthy and time-consuming task for all but the very smallest of specifications. This paper describes a semi-automatic approach for obtaining entity relationship models from natural language specifications. The approach begins by using natural language analysis techniques to translate sentences into a meaning representation language called the logical form language. The logical forms of the sentences are used as a basis for identifying the entities and relationships, and heuristics are then used to suggest suitable degrees for the identified relationships. The paper describes and illustrates the main phases of the approach and summarizes the results obtained when applying it to a case study.

Title:

TOWARDS CONCEPTUAL MEDIATION

Author(s):

Ismael Navas D. , José F. Aldana M.

Abstract: Mediators are usually developed as monolithic systems which encapsulate the data sources' semantics as well as their location. Furthermore, their wrapper-based architecture involves a high degree of coupling among the mediator's components. This coupling prevents the sharing of services with other organizations and the dynamic integration of new data sources; wrappers must therefore be re-designed and manually added for each mediation system. We propose an architecture for conceptual mediation in which the sources' query capabilities are published as web services. These services can be registered in one or more resource directories (Semantic Directories), which are the core of this architecture because they provide the flexibility and scalability needed for dynamic integration. Finally, we show an application in a bioinformatics context to validate our approach.

Title:

AN AUTOMATION SYSTEM BASED ON LABVIEW TO CONTROL THE TEST OF MECHANICAL FLOW METERS

Author(s):

Víctor Mejia , Javier  Martínez , Victor Silva , Ricardo Alvarez , Petronilo Cortez

Abstract: A mechanical flow meter is a device used mainly to measure and calculate the velocity of water flow in rivers and open channels. Over their time of use these devices suffer mechanical wear, which is why it is important to calibrate them about twice a year, depending on usage. At the Mexican Institute of Water Technology (IMTA, by its Spanish acronym), a circular water tank was designed and developed for testing these meters. This paper presents the automation system designed to control the tests that calibrate these mechanical meters. The system is based on LabVIEW, a general-purpose programming tool with extensive libraries for data acquisition, instrument control, data analysis and data presentation. With this tool and a special hardware interface, it was possible to automate the process of testing these meters. The system, called SCM (System of Characterization of Mechanical meters), controls the testing of two mechanical meters simultaneously and offers user control features that give the operator an easy-to-use human-machine interface.

Title:

FUZZY MULTIPLE-LEVEL SEQUENTIAL PATTERNS DISCOVERY FROM CUSTOMER TRANSACTION DATABASES

Author(s):

Huilin Ye , An Chen

Abstract: Sequential pattern discovery is an important research topic in data mining and knowledge discovery and has been widely applied in business analysis. Previous work focused on mining sequential patterns at a single concept level, based on definite, crisp concepts that may not be concise and meaningful enough for human experts to easily obtain nontrivial knowledge from the discovered rules. In this paper, we first introduce concept hierarchies and then present F-MLSPDA, a mining algorithm for discovering multiple-level sequential patterns over quantitative attributes based on fuzzy partitions.
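The fuzzy-partition idea can be sketched in a few lines: a quantitative attribute value is mapped to membership degrees for linguistic labels via triangular membership functions. The labels and breakpoints below are hypothetical illustrations, not the partitioning used by F-MLSPDA itself.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify(quantity, partitions):
    """Map a quantitative attribute value to a membership degree for each
    linguistic label (e.g. 'low'/'medium'/'high')."""
    return {label: triangular(quantity, *abc) for label, abc in partitions.items()}

# Hypothetical partition of a purchase-quantity attribute into three fuzzy sets.
PARTITIONS = {"low": (0, 0, 5), "medium": (2, 5, 8), "high": (5, 10, 10)}
```

A quantity of 4 would then belong to "low" with degree 0.2 and to "medium" with degree about 0.67, so a single transaction can contribute (partially) to patterns at several levels.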

Title:

A METADATA REPOSITORY FOR IMAGE RETRIEVAL ALGORITHMS

Author(s):

Sahudy Montenegro González , Akebo Yamakami

Abstract: Many of the problems involved in image database applications require some form of retrieval based on image content. The explosion in the availability of image content, due to recent developments in multimedia technology, demands algorithms that facilitate content-based retrieval. Many image retrieval algorithms are implemented according to the needs of specific applications, yet there is currently no standard way of manipulating these algorithms. This limits the availability of algorithms beyond the bounds of the application for which they were originally designed. This work defines a general-purpose repository for the algorithms involved in the image retrieval process. The main goal of the repository is to provide the application developer with an infrastructure to manipulate and query image algorithms, allowing the integration of image retrieval algorithms, the creation of a stock of algorithms available to multiple users, and the reuse and sharing of algorithms across multiple applications. We define a standard set of metadata, applicable to image retrieval algorithms, that provides uniform semantic support for understanding these algorithms. The repository thus supports the development of image retrieval applications, and its architecture is centered on providing distributed database functionality.

Title:

THE CONCEPT AND IMPLEMENTATION OF THE MARKET PLACE E-UTILITIES•COM

Author(s):

Jamil Dimassi , Carine Souveyet , Colette Rolland

Abstract: In order to remain competitive in a deregulated environment, a group of European utilities developed a prototype of a single marketplace called e-utilities•com, whose mandate is a clearly customer-centric orientation in the European environment, aiming at a successful mid-term multi-utility business via the Web. This paper highlights the concept of e-utilities•com and its implementation in a Web portal.

Title:

PERFORMANCE INDICATORS: IMPORTANT TOOL FOR BUSINESS INTELLIGENCE AND INFORMATION SYSTEMS

Author(s):

María Luisa Sené

Abstract: This paper discusses the importance of performance indicators for maintaining a healthy organization. It also explains why standardization is so closely related to this topic and, most importantly, how all of this contributes to designing an information system that will help the organization in its decision-making process. Examples of performance indicators that can be applied in any organization are included.

Title:

ACCESS MODEL IN COOPERATIVE INFORMATION SYSTEMS

Author(s):

Eric Disson , Danielle Boulanger

Abstract: This research focuses on access security in cooperating information systems. The proposed model has to handle the interoperation of open, evolving information systems and, moreover, has to guarantee that the various local security policies are respected. The coexistence of heterogeneous information sources within an information system framework raises homogenization problems between local security policies. We distinguish two types of heterogeneity: heterogeneity of the local access policies, and semantic heterogeneity between object or subject instances of the local access schemas. To address this twofold difficulty, we propose an original role model that allows a unified representation of local access schemas. This model preserves the flow-control properties of the three main access policies (discretionary, role-based, and multilevel). The described access schemas are enriched to establish intra-system access authorizations.

Title:

BUSINESS MODELLING THROUGH ROADMAPS

Author(s):

Judith Barrios Albornoz , Jonás  Montilva Calderón

Abstract: Business modelling is a central activity in many different areas, including Business Process Reengineering, Organisational Development, Enterprise Modelling & Integration, Business Process Management and Enterprise Application Integration. It is well known that the business domain is not easy to understand, nor to represent, even for specialists. The success of most contemporary methods for modelling business organisations or Enterprise Information Systems (EIS) is strongly associated with the level of understanding that the modelling team can attain about the specific situation being modelled. This understanding is directly related to the team's modelling experience, as well as to its ability to work with the techniques and tools prescribed by a specific method. Nowadays, most existing business modelling methods concentrate on what the business concepts are and how to represent them, but they lack the process guidance needed to support the team throughout the modelling process. We have elaborated BMM, a method for modelling business application domains that provides working guidelines for the modelling team. This method, based on method engineering concepts, helps teams gain comprehensive knowledge not only about the business domain being modelled, but also about the process of modelling the domain itself. This paper deals with the representation of the business modelling process using a decision-oriented process model formalism; at a higher level, the process is represented by a roadmap. The main contribution of our work is a set of roadmaps capturing the modelling knowledge that team members have accumulated in business modelling and EIS development. This knowledge arises from several case studies.

Title:

AUTOMATIC DISCOVERY OF SEMANTIC RELATIONSHIPS BETWEEN SCHEMA ELEMENTS

Author(s):

Nikos Rizopoulos

Abstract: The identification of semantic relationships between schema elements, or schema matching, is the initial step in the integration of data sources. Existing approaches to automatic schema matching have mainly been concerned with discovering equivalence relationships between elements. In this paper, we present an approach to automatically discover richer and more expressive semantic relationships based on a bidirectional comparison of the elements' data and metadata. The experiments we have performed on real-world data sources from several domains show promising results, considering that we do not rely on any user or external knowledge.
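As an illustration of relationships richer than equivalence, the extensional side of such a comparison can be sketched as follows. This is a deliberate simplification under the assumption that only instance values are compared; the paper's matcher also uses metadata and works bidirectionally.

```python
def relationship(values_a, values_b):
    """Classify the semantic relationship between two schema elements
    from the overlap of their observed instance values (illustrative
    criterion only, not the paper's full matching procedure)."""
    a, b = set(values_a), set(values_b)
    if a == b:
        return "equivalence"
    if a < b:
        return "subsumption (A is-a B)"
    if b < a:
        return "subsumption (B is-a A)"
    if a & b:
        return "intersection"
    return "disjointness"
```

For example, comparing a `Student.name` column whose values form a strict subset of `Person.name` would suggest a subsumption rather than an equivalence relationship.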

Title:

MANAGING INFORMATION FLOW DYNAMICS WITH AGILE ENTERPRISE ARCHITECTURES

Author(s):

Drakoulis Martakos , Panagiotis  Kanellis , Nancy Alexopoulou

Abstract: New organization forms and ways of conducting business require architectures for enterprise systems that support, rather than hinder, entrepreneurial activities. Primarily, this means that the information flow in both internal and cross-enterprise processes must be managed by underlying systems that offer a high level of automation while being highly flexible and integrated. In this respect, we present an agile architecture that offers a coherent, high-level conceptualisation of the properties that enterprise information systems should display, consider a number of technologies as potential implementation candidates, and demonstrate how the architecture addresses node density, velocity, viscosity and volatility as parameters for managing and controlling the dynamics of information flows.

Title:

A TRANSACTIONAL MULTIMODE MODEL TO HANDLE OVERLOAD IN DISTRIBUTED RTDBSS

Author(s):

Samia Saad-Bouzefrane

Abstract: Current applications, such as Web-based services, electronic commerce, mobile telecommunication systems, etc. are distributed in nature and manipulate time-critical databases. In order to enhance the performance and the availability of such applications, the major issue is to develop efficient protocols that cooperate with the scheduler to manage the overload of the distributed system. In order to help real-time database management systems (RTDBS) to maintain data logical consistency while attempting to enhance concurrency execution of transactions, we introduce a transactional multimode model to let the application transactions adapt their behavior to the overload consequences. In this paper, we propose for each transaction several execution modes and we derive an overload controller suitable for the proposed multimode model.

Title:

A FRAMEWORK FOR EVALUATING DIFFICULTIES IN ERP IMPLEMENTATION

Author(s):

Jorge Marcelo Montagna , Luis Ferrario

Abstract: Various sources report very high failure rates for ERP system implementations. In this work, the main difficulties of this task are analyzed, and a systematic classification of the fundamental causes is proposed. By considering the reasons that lead to failure, a simple and effective mechanism is derived for evaluating, in advance, the complications a project might present. In this way, the tools to be used can be adjusted to the specific characteristics of the project. The intention is to overcome the problem posed by general methodologies, which are applied to any kind of enterprise without first considering its conditions and its readiness to face this type of project.

Title:

STUDY OF DIFFERENT APPROACHES TO THE INTEGRATION OF SPATIAL XML WEB RESOURCES

Author(s):

Jose Corcoles , Pascual Gonzalez

Abstract: The research community has begun to investigate foundations for the next stage of the Web, called the Semantic Web. Current efforts include the Extensible Markup Language (XML), the Resource Description Framework, Topic Maps and the DARPA Agent Markup Language (DAML+OIL). A rich domain that requires special attention is the Geospatial Semantic Web. In order to approach the Geospatial Semantic Web, however, it is necessary to solve the problem of developing an integration system for querying spatial resources stored in different sources. In this paper, we study two different approaches to integrating spatial and non-spatial information represented in the Geography Markup Language (GML). Both approaches follow LAV (Local As View) integration. From this study we identify the better approach for developing a real system for querying GML resources stored in different sources.

Title:

CAPABILITY-BASED QUERY PLANNING IN MEDIATOR SYSTEMS

Author(s):

Jiu Yang Tang

Abstract: This paper addresses the impact of capability descriptions on query planning in heterogeneous data integration systems. Query planning covers the selection of data sources related to the query and the determination of the subgoals' execution order. In the context of capability description, we propose a framework for describing data sources that supports the generation of good, feasible query plans. Our approach uses information such as the semantic correspondences between local schemas and mediated schemas, together with query capability descriptions, to investigate the factors that provide a good foundation for query planning. Finally, the proposed approach is compared with other capability description approaches described in the literature. The results demonstrate that our approach allows data sources to advertise their capabilities in a flexible way and helps make query planning more efficient.

Title:

AN EFFICIENT B+-TREE IMPLEMENTATION IN C++ USING THE STL STYLE

Author(s):

Gregory Butler

Abstract: Database indexes are the search engines of database management systems. The B+-tree is one of the most widely used and studied data structures and provides an efficient index structure for databases. An efficient implementation is crucial for a B+-tree index. Our B+-tree index is designed as a container following the style of the C++ Standard Template Library (STL) and is implemented efficiently using design patterns and generic programming techniques. As a result, our B+-tree index can adapt to different key types, data types, queries, and database application domains, and is as easy and convenient for developers to reuse as the other STL containers.
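The container-style interface the abstract describes (exact lookup plus ordered range scans, independent of key and value types) can be sketched as below. This is not a B+-tree and not the paper's C++ code: a sorted list stands in for the leaf level, while a real implementation would back the same operations with internal nodes, splits, and linked leaves.

```python
import bisect

class SortedIndex:
    """Illustrative container with an STL-like index interface:
    insert, find, and ordered range iteration. A real B+-tree would
    implement these over linked leaf nodes; here a sorted key list
    stands in for the leaf level."""

    def __init__(self):
        self._keys, self._vals = [], []

    def insert(self, key, value):
        i = bisect.bisect_left(self._keys, key)
        self._keys.insert(i, key)
        self._vals.insert(i, value)

    def find(self, key):
        """Point lookup; returns the value or None on a miss."""
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._vals[i]
        return None

    def range(self, lo, hi):
        """Yield (key, value) pairs with lo <= key < hi, the way a
        B+-tree answers range queries by scanning linked leaves."""
        i = bisect.bisect_left(self._keys, lo)
        while i < len(self._keys) and self._keys[i] < hi:
            yield self._keys[i], self._vals[i]
            i += 1
```

Because keys only need to support ordering, the same container works for integers, strings, or tuples, which mirrors the genericity argument made in the abstract.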

Title:

XRM: AN XML-BASED LANGUAGE FOR RULE MINING SYSTEMS

Author(s):

Dominique  Laurent , Tao-Yuan Jen , Ahmed Cheriat , Béatrice Bouchou , Mirian Halfeld-Ferrari

Abstract: In this paper, we present XRM, an XML-based language capable of promoting collaboration among data mining systems. KDD systems usually need a platform to integrate and exchange their results with different tools. XRM is a general framework for expressing any system's results and/or data as logic formulas. In this way, XRM offers the flexibility to represent data, constraints and patterns, and allows mining systems to present their results in an exchangeable format. In this work, we concentrate on the use of XRM to represent different forms of association rules. Association rule mining has evolved, giving rise to sophisticated approaches that require interaction with other tools. XRM is built on XML Schema; in this way, we can assure a certain level of correctness of the data and the mining results.

Title:

AUDIOVISUAL ARCHIVE WITH MPEG-7 VIDEO DESCRIPTION AND XML DATABASE

Author(s):

Pedro Almeida , Helder Troca Zagalo , Joaquim  Sousa Pinto , Joaquin Arnaldo Martins

Abstract: This article presents the work that has been developed in the creation of an audiovisual archive that uses the MPEG-7 standard to describe video content and an XML database to store the video descriptions. It presents the model adopted to describe the video content, the framework of the audiovisual archive information system, a video indexing tool developed to allow the creation and manipulation of XML documents containing the video descriptions, and an interface to visualize the videos over the Web.

Title:

ENHANCING THE SUCCESS RATIO OF DISTRIBUTED REAL-TIME NESTED TRANSACTIONS

Author(s):

Majed Abdouli , Bruno Sadeg , Laurent Amanton

Abstract: Traditional transaction models are not suited to real-time database systems (RTDBSs). Indeed, many current applications managed by these systems require a kind of transaction in which some of the ACID properties must be relaxed or adapted. In this paper, we propose a real-time concurrency control protocol and an adaptation of the two-phase commit protocol based on the nested transaction model, where a nested transaction is viewed as a collection of both essential and non-essential subtransactions: essential subtransactions have firm deadlines, and non-essential ones have soft deadlines. We show through simulation results how our protocol, based on this assumption, allows better concurrency both between transactions and between subtransactions of the same transaction, thus enhancing the success ratio and overall RTDBS performance, i.e., more transactions meet their deadlines.

Title:

USING IUCLID FOR WORLDWIDE EXCHANGE OF CHEMICAL AND TOXICOLOGICAL INFORMATION

Author(s):

Stefan Scheer , Remi Allanou

Abstract: A database management tool, IUCLID, has been created to administer the chemical and toxicological data submitted in structured form under existing EU legislation. Beyond normal dataset administration, the tool also offers mechanisms for data fusion, data reproduction and data deployment. IUCLID is therefore used not only by those who receive such submissions but also by those who produce them: it serves every stakeholder involved in the current legislative process, and it has been successfully adopted even beyond that circle. This worldwide acceptance has helped promote the software beyond its original purpose and establish a network of exchange.

Title:

RAPID XML DATABASE APPLICATION DEVELOPMENT

Author(s):

Kjetil Norvag , Albrecht Schmidt

Abstract: This paper proposes a rapid prototyping framework for XML database application development. By splitting the development process into several refinement steps while keeping the application programming interface stable, the framework aims at rapid implementation of a prototype with a well-defined interface, followed by the implementation of more advanced concepts such as business rules in several steps. The refinement process takes the form of incrementally adding domain-specific information to the application. This is achieved by moving from general-purpose XML tools that do not support the definition and enforcement of constraints to frameworks that support domain-specific models and constraints, such as E/R modeling. We have employed this method in the development of an example application, and we give performance numbers that illustrate the incremental improvement of each step.

Title:

ONTOLOGY-BASED REQUIREMENT ELICITATION

Author(s):

Cong Wang

Abstract: The key problem of information system development is how to acquire requirements; this has puzzled system developers for a long time. How to build a communication bridge between developers and users has become a hot issue in requirements engineering. An ontology defines common concepts and the relationships among them, and can serve as such a bridge between domain users and system developers; it can therefore guide both groups in constructing the requirements model. Reflecting the different views of the system, this paper provides three ontologies for requirements elicitation: a business ontology, a technique ontology and a functionality ontology. First, the paper defines the concept of an ontology; second, it describes the three ontologies in detail; finally, it derives the domain requirements model from these ontologies.

Title:

A TRANSACTION MODEL FOR LONG RUNNING BUSINESS PROCESSES

Author(s):

Jinling Wang , Beihong Jin , Jing Li

Abstract: Many business processes in enterprise applications are both long running and transactional in nature, but currently no transaction model provides full transaction support for such long-running business processes. In this paper, we propose a new transaction model, the PP/T model. It provides structural transaction support for long-running business processes, so that application developers can focus on the business logic while the underlying platform provides the required transactional semantics. Simulation results show that the model performs well when processing long-running business processes.

Title:

CACHING STRATEGIES FOR MOBILE DATABASES

Author(s):

Murilo de Camargo

Abstract: Caching remote data in the local storage of a mobile client has been considered an effective solution to improve system performance for data management in mobile computing applications. In this paper, we propose a taxonomy for cache management in mobile database systems. The aim is to provide a unifying framework for the problem of caching in mobile computing, followed by a comparative review of the work done in this area to date. Such a framework, with the associated analysis of existing approaches, provides a basis for identifying the strengths and weaknesses of individual methodologies, as well as general guidelines for future improvements and extensions.
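One classic replacement policy that any such taxonomy covers is least-recently-used (LRU) eviction on the mobile client. The sketch below is a generic LRU cache, offered as one concrete instance of the strategies surveyed, not as the paper's own proposal.

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache for data a mobile client has fetched
    from a remote server. On overflow, the entry untouched for the
    longest time is evicted (one of many possible policies)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None              # cache miss: would refetch from server
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

In a mobile setting such a policy is typically combined with invalidation reports from the server, since a disconnected client cannot otherwise learn that a cached item has become stale.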

Title:

DM-XIDS — AN APPLICATION-LAYER ROUTER OF INCOMING XML STREAMS

Author(s):

Hao Gui

Abstract: With the explosion of information on the Internet and the widespread use of XML as a data exchange medium, more and more applications can communicate with each other and deliver large volumes of data as continuous streams. This trend has led to the emergence of novel concepts in data acquisition, integration, exchange, management and access. In this paper, we propose a middleware architecture for disseminating information from XML streams and design a prototype, DM-XIDS, as an extension to our traditional database management system (named DM). A friendly graphical user interface efficiently generates and manages the diverse information subscriptions, which are described as XPath queries. An effective algorithm filters and matches ad hoc segments within a document. An automata-based query filtering mechanism implements the selection of data according to queries expressed as regular path expressions, which may include both nested path declarations and value predicates. A dedicated architecture dynamically directs the incoming XML data stream from a static collection of information into a specific, physically or logically distributed database environment. As middleware for our database system, DM-XIDS embodies the novel concept of an application-layer information router with additional administrative functions, building a bridge between the XML stream source and the underlying data storage according to a pre-customized strategy.
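The core of such path-based filtering can be illustrated with a toy matcher for absolute element paths. It supports only `*` wildcards and `//` descendant steps, whereas DM-XIDS handles full regular path expressions with nested paths and value predicates; the function below is an assumption-laden sketch of the automaton idea, not its implementation.

```python
def matches(pattern, path):
    """Decide whether an absolute element path (e.g. '/doc/item/name')
    satisfies a simplified XPath-style pattern supporting '*' wildcard
    steps and '//' descendant steps. Illustrative of automata-based
    stream filtering only."""
    def step(pat, loc):
        if not pat:                  # pattern exhausted: path must end too
            return not loc
        if pat[0] == "":             # '' encodes a '//' step: skip 0+ levels
            rest = pat[1:]
            return any(step(rest, loc[i:]) for i in range(len(loc) + 1))
        if not loc:
            return False
        if pat[0] == "*" or pat[0] == loc[0]:
            return step(pat[1:], loc[1:])
        return False
    # both strings are absolute: drop the empty segment before the first '/'
    return step(pattern.split("/")[1:], path.split("/")[1:])
```

Run against each element's path as a document streams in, every subscription pattern either accepts or rejects the element, which is exactly the routing decision an application-layer XML router has to make.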

Title:

AN APPROACH FOR SCHEMA EVOLUTION IN ODMG DATABASES

Author(s):

Cecilia Delgado Negrete

Abstract: Schema evolution is the process of applying changes to a schema in a consistent way and propagating these changes to the instances while the database is in operation. However, when a database is shared by many users, updates to the database schema are always difficult. To overcome this problem, in this paper we propose a version mechanism for schema evolution in ODMG databases that preserves old schemas, so that existing programs running on the shared database remain supported when schema changes are made. Our approach uses external schema definition techniques: when a schema change is requested on an external schema, rather than modifying that schema, a new schema reflecting the semantics of the change is defined.

Title:

COMPARISON OF APPROACHES IN DATA WAREHOUSE DEVELOPMENT IN FINANCIAL SERVICES AND HIGHER EDUCATION

Author(s):

Janis Benefelds , Laila Niedrite

Abstract: When a decision to develop a Data Warehouse is made, some sensitive factors should be evaluated in order to understand the tasks and prioritize them. Of course, priorities and conditions are unique to each Data Warehouse project. In this paper we assume that companies with similar business activities share common characteristics, while companies with very different activities do not. This article examines how the same criteria apply to two Data Warehouse projects, one in a for-profit and one in a not-for-profit area: as representatives of these areas we selected financial services (banking) and higher education institutions. We use the criteria from (List et al. 2002) to compare the results of the two projects. Each section of the paper describes this set of criteria for each of the two areas, along with the Data Warehouse development methodology used in each case. An evaluation matrix is provided in the Conclusion; the results show that Data Warehouse project development does not differ greatly between organizations with quite different behavior.

Title:

CORRELATING EVENTS FOR MONITORING BUSINESS PROCESSES

Author(s):

Josef Schiefer , Carolyn McGregor

Abstract: With the increasing demand for real-time information on critical performance indicators of business processes, the capturing, transformation and correlation of real-world events with minimal latency is a prerequisite for improving the speed and effectiveness of an organization's business operations. Events often include key business information about their relationships to other events, which can be utilized to collect relevant event data for the calculation of business performance indicators. In this paper we introduce an approach for correlating events of business processes that uses correlation sessions to represent correlation knowledge. Correlation sessions facilitate the processing of data across multiple events and thereby enable the calculation of business metrics in near real time. The benefit over existing approaches is that ours is tailored to instrument business processes and business applications that may operate in a heterogeneous software environment. We propose a Java-based, container-managed environment that provides distributed, scalable, near-real-time processing of events and includes a correlation service that effectively manages correlation sessions. We also show a complete example that illustrates how correlation sessions can be utilized to compute the cycle time of business processes.
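The cycle-time computation mentioned at the end can be sketched in miniature: group events by a shared correlation key (a stand-in for the paper's correlation sessions) and take the span between the first and last timestamp of each process instance. Field names here (`correlation_id`, `timestamp`) are assumptions for illustration.

```python
def cycle_times(events):
    """Correlate process events by a shared correlation key and compute
    each process instance's cycle time (last timestamp minus first).
    A minimal sketch of the correlation-session idea, assuming events
    are dicts with 'correlation_id' and numeric 'timestamp' fields."""
    sessions = {}
    for e in events:
        sessions.setdefault(e["correlation_id"], []).append(e["timestamp"])
    return {cid: max(ts) - min(ts) for cid, ts in sessions.items()}
```

A production system would keep sessions open incrementally as events stream in, rather than batching them as this sketch does, which is where the near-real-time aspect comes from.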

Title:

TRANSFORMATION-ORIENTED MIDDLEWARE FOR LEGACY SYSTEM INTEGRATION

Author(s):

Urs Frei , Guido Menkhaus

Abstract: Most established companies have acquired legacy systems through mergers and acquisitions. These systems were developed independently of each other and very often do not align with the evolving IT infrastructure; still, they drive day-to-day business processes. Replacing a legacy application with a new solution may be infeasible, impractical, or require a considerable amount of time. However, immediate integration may be required for a strategic project, such as supply chain management or e-business. This article presents a transformation system for legacy system integration that allows flexible and effective transformation of data between heterogeneous systems. Sequences of transformations are described using a grammar-based approach.

Title:

SCHEMA EVOLUTION FOR STARS AND SNOWFLAKES

Author(s):

Christian Kaas , Torben Bach  Pedersen , Bjørn  Rasmussen

Abstract: The most common implementation platform for multidimensional data warehouses is RDBMSs storing data in relational star and snowflake schemas. DW schemas evolve over time, which may invalidate existing analysis queries used for reporting purposes. However, the evolution properties of star and snowflake schemas have not previously been investigated systematically. This paper systematically investigates the evolution properties of star and snowflake schemas. Eight evolution operations are considered, covering insertion and deletion of dimensions, levels, dimension attributes, and measure attributes. For each operation, the formal semantics of the changes for star and snowflake schemas are given, and the instance adaptation and impact on existing queries are described. Finally, we compare the evolution properties of star and snowflake schemas, concluding that the star schema is considerably more robust to schema changes than the snowflake schema.

Title:

AN EVENT PROCESSING SYSTEM FOR RULE-BASED COMPONENT INTEGRATION

Author(s):

Susan  Urban

Abstract: The IJK project has developed an environment in which active rules, known as integration rules, are used together with transactions to provide an event-driven, rule-based approach to the integration of black-box components. This paper presents the event processing system that supports the use of integration rules over components. The event processing system is composed of a language framework for the specification of different types of events, an event generation system for generating event instances, and an event handler for communicating the occurrence of events to the integration rule processor. The language framework supports the enhancement of EJB components with events that are generated before and after the execution of methods on components. Since integration rules support an immediate coupling mode and execute in the context of nested transactions, a synchronization algorithm has been developed to coordinate the execution of immediate integration rules with the execution of methods on components. The synchronization algorithm makes it possible to suspend and resume distributed application transactions to accommodate the nested execution of integration rules with an immediate coupling mode.

Title:

CONV2XML: RELATIONAL SCHEMA CONVERSION TO XML NESTED-BASED SCHEMA

Author(s):

Angela Duta , Ken Barker

Abstract: Conversion of relational data to XML is a critical topic in the database area. This approach translates the rigid tabular structures of relational databases into hierarchical XML structures. Logical connections between pieces of data expressed by relationships are represented more naturally by tree-like structures. Conv2XML and ConvRel are two algorithms for converting a relational schema to XML Schema, focusing on preserving the source relationships and their structural constraints. ConvRel translates each relationship individually into a nested XML structure. Conv2XML identifies complex nested structures capable of modeling all relationships existing in a relational database.
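The basic nesting transformation can be sketched for the simplest case, a single 1:N relationship: child rows are placed under the element of the parent row they reference. Element names (`orders`, `order`, `item`) and the foreign-key column are hypothetical; Conv2XML's actual output schema and its treatment of other relationship types differ.

```python
import xml.etree.ElementTree as ET

def to_nested_xml(orders, items, fk="order_id"):
    """Render two tables linked by a 1:N relationship as nested XML,
    with each child row nested under its parent element. A minimal
    sketch of the nesting idea only."""
    root = ET.Element("orders")
    by_id = {}
    for row in orders:                       # parent table
        el = ET.SubElement(root, "order", id=str(row["id"]))
        by_id[row["id"]] = el
    for row in items:                        # child table, joined via the FK
        item = ET.SubElement(by_id[row[fk]], "item")
        item.text = str(row["product"])
    return ET.tostring(root, encoding="unicode")
```

The foreign key disappears from the output: the parent-child containment in the tree now carries the relationship that the `order_id` column carried in the flat tables.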

Title:

APPLYING CROSS-TOPIC RELATIONSHIPS TO SEARCHING WITH INCREMENTAL RELEVANCE FEEDBACK

Author(s):

Stephen  Chan

Abstract: General-purpose search engines such as Google and Yahoo define search topic hierarchies for document organization, yet such hierarchical structures cover only a portion of the possible relationships among search topics. It is believed that search effectiveness can be improved significantly by making better use of the semantic relations among search topics. In general, the is-child relation allows a search to start from general concepts, while the is-neighbor relation provides fresh information that can help users identify related search areas. This paper describes a topic network encompassing such relations, based on Bayesian network techniques, to support searching. Our experiments show that making use of such a topic network can improve search effectiveness in a search engine using incremental relevance feedback.

Title:

INFORMATION INVASION IN ENTERPRISE SYSTEMS

Author(s):

Stephen Crouch , Peter Henderson , Robert Walters

Abstract: With the proliferation of internet-based technologies within and between organisations, large-scale enterprise systems are becoming more interconnected than ever before. A significant problem facing these organisations is how their information systems will cope with inconsistency introduced from external data sources. Major problems arise when low-quality information enters an authoritative enterprise system from these external sources and, in so doing, gains credibility. This problem is compounded by the propagation of this information to other systems and other enterprises, potentially 'invading' an inter-enterprise network. In this paper we introduce and examine this behaviour, which we term 'information invasion'. A characterisation of the systems most vulnerable to such an occurrence is provided, together with details of an experiment that simulates information invasion on an example network topology.

Title:

KNOWLEDGE TRANSFER TO AND AMONG END-USERS IN PRE-PACKAGED ENTERPRISE APPLICATION SOFTWARE IMPLEMENTATION: AN EXPLORATORY STUDY OF THE ROLES OF COMMUNITIES OF PRACTICE

Author(s):

Jimmy Tanamal

Abstract: This paper is concerned with the roles of Communities of Practice (CoPs) in knowledge transfer during the implementation of a particular IT artefact, the Pre-packaged Enterprise Application Software (PEAS), also known as Enterprise Resource Planning (ERP) software. Using an in-depth longitudinal case study across different stages of a financial PEAS implementation in a large Australian university, we assess the effectiveness and applicability of the practices of CoPs for transferring PEAS knowledge to and among end-users. The key finding of this paper is that CoPs can be utilized to enhance knowledge transfer for a better PEAS implementation result. Our findings also indicate that CoPs can be assigned to steward this dynamic PEAS knowledge, in its most up-to-date version, among the very people who are its owners.

Title:

AN OBJECT ORIENTED APPROACH FOR DOCUMENT MANAGEMENT

Author(s):

Abdul Adamu , Souheil Khaddaj

Abstract: It is already widely accepted that the use of data abstraction in object oriented modelling enables real world objects to be well represented in information systems. In this work we are particularly interested in the use of object oriented techniques for document management. Object orientation is well suited for such systems, which require the ability to handle multiple types of content. However, how to deal with the reuse and management of existing documents over time remains a major issue. This paper aims to investigate a conceptual model, based on object versioning techniques, that represents the semantics needed to allow the continuity and pattern of changes of documents to be determined over time.

Title:

HEALTH CARE PROCESS BASED ON THE ABC MODEL THROUGH A META-STRUCTURED INFORMATION SYSTEM

Author(s):

Christine  VERDIER , Gérard CLUZE

Abstract: In this article we propose to define a system which generates a generic care process based on the ABC method. For this purpose, we dynamically adapt the medical information system with UML packages in order to generate semantic and syntactic links between the different packages that represent the “business objects” of a hospital. These packages contain all the information related to a specific problem for all the patients. We are thus able to extract the particular data concerning a criterion (diagnosis, IP number, etc.) and a patient and, in that manner, to rebuild the care process. The ABC method gives the skeleton of the care process and allows the definition of costs for a particular care process (e.g. the care process of the patient “John” concerning the disease “kidney failure” in hospital H).

Title:

A DATA WAREHOUSE FOR WEATHER INFORMATION

Author(s):

José Torres-Jiménez

Abstract: Data warehouse technologies allow historical data to be extracted, grouped and analyzed in order to identify information valuable to decision-making processes. In this paper the implementation of a weather data warehouse (WDW) to store Mexico’s weather variables is presented. The weather variable data were provided by the Mexican Institute for Water Technologies (IMTA), which carries out research, development, adaptation, human resource formation and technology transfer to improve Mexico’s water management and thereby contribute to the country’s sustainable development. The implemented WDW contains two dimension tables (one time dimension table and one geographical dimension table) and one fact table (which stores the values of the weather variables). The time dimension table spans ten years, from 1980 to 1990. The geographical dimension table covers many of Mexico’s hydrological zones and draws on 5551 measuring stations. The WDW enables (through dimension navigation) the identification of weather patterns that would be useful for: a) the definition of agricultural policies; b) climate change research; and c) contingency plans for extreme weather conditions. Although it is well known, it is worth mentioning that the data warehouse paradigm is in many cases better suited than the database paradigm for deriving knowledge from data, a fact that was confirmed through the exploitation of the WDW.
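
The star-schema organisation described above can be made concrete with a minimal sketch. The table and column names below (dim_time, dim_geo, fact_weather, the zone "Lerma") are illustrative assumptions, not the WDW's actual schema; the sketch only shows how a dimensional roll-up of a weather variable reduces to joins and grouping:

```python
import sqlite3

# Toy star schema: two dimension tables and one fact table, as in the WDW.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_time (time_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_geo  (geo_id  INTEGER PRIMARY KEY, zone TEXT, station TEXT);
CREATE TABLE fact_weather (
    time_id INTEGER REFERENCES dim_time,
    geo_id  INTEGER REFERENCES dim_geo,
    temperature REAL,
    rainfall REAL);
""")
cur.executemany("INSERT INTO dim_time VALUES (?, ?, ?)",
                [(1, 1980, 1), (2, 1980, 2)])
cur.executemany("INSERT INTO dim_geo VALUES (?, ?, ?)",
                [(1, "Lerma", "st-001"), (2, "Lerma", "st-002")])
cur.executemany("INSERT INTO fact_weather VALUES (?, ?, ?, ?)",
                [(1, 1, 15.0, 2.5), (2, 1, 17.0, 0.0), (1, 2, 14.0, 4.5)])

# A typical dimensional query: roll rainfall up by hydrological zone and year.
cur.execute("""
SELECT g.zone, t.year, SUM(f.rainfall)
FROM fact_weather f
JOIN dim_time t ON f.time_id = t.time_id
JOIN dim_geo  g ON f.geo_id  = g.geo_id
GROUP BY g.zone, t.year""")
rows = cur.fetchall()
print(rows)   # [('Lerma', 1980, 7.0)]
```

A production warehouse would carry far more attributes per dimension, but navigating the dimensions reduces to exactly this join-and-group pattern.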

Title:

INTEGRATION, FLEXIBILITY AND TRANSVERSALITY: ESSENTIAL CHARACTERISTICS OF ERP SYSTEMS

Author(s):

Louis Raymond , Sylvestre Uwizeyemungu

Abstract: The interest of firms in ERP systems has been echoed in both the scientific and professional literature. It is worth noting however that while this literature has become increasingly abundant, there does not yet exist an operational definition of the ERP concept that is, if not unanimously, at least widely accepted. This constitutes a handicap for both the research and practice communities. The present study outlines what could be considered as an ERP by first determining the essentially required characteristics of such a system: integration, flexibility and transversality. Indicators are then provided in order to operationalise these three characteristics. The study concludes by proposing a research framework on the impact of an ERP’s key characteristics upon the performance of the system in a given organisational setting.

Title:

SEMANTIC INTEGRATION OF DISPARATE DATA SOURCES IN THE COG PROJECT

Author(s):

Jos de Bruijn

Abstract: We present a novel approach to the integration of structured information sources in enterprises, based on Semantic Web technology. The semantic information integration approach presented in this paper was applied in the COG project. We describe Unicorn's Semantic Information Management along with the Unicorn Workbench tool, a component of the Unicorn System, and how they were applied in the project to solve the information integration problem. We used the Semantic Information Management Methodology and the Unicorn Workbench tool to create an Information Model (an ontology) based on data schemas taken from the automotive industry. We map these data schemas to the Information Model in order to make the meaning of the concepts in the data schemas explicit and to relate them to each other, thereby creating an information architecture that provides a unified view of the data sources in the organization.

Title:

IMPROVING VIEW SELECTION IN QUERY REWRITING USING DOMAIN SEMANTICS

Author(s):

Qingyuan Bai , Michael F. McTear , Jun Hong

Abstract: Query rewriting using views is an important issue in data integration. Several algorithms have been proposed, such as the bucket algorithm, the inverse rules algorithm, the SVB algorithm, and the MiniCon algorithm. These algorithms can be divided into two categories. The algorithms of the first category are based on use of buckets while the ones of the second category are based on use of inverse rules. The bucket-based algorithms have not considered the effects of integrity constraints, such as domain semantics, functional and inclusion dependencies. As a result, they might miss query rewritings or generate redundant query rewritings in the presence of these constraints. A bucket-based algorithm consists of two steps. The first step is called view selection that selects views relevant to a given query and puts the views into the corresponding buckets. The second step is to generate all the possible query rewritings by combining a view from each bucket. In this paper, we consider an improvement of view selection in the bucket-based algorithms using domain semantics. We use the resolution method to generate a pseudo residue for each view given a set of domain semantics. Given a query, the pseudo residue of each view is compared with it and any conflict that exists can be found. As a result, irrelevant views can be removed even before a bucket-based algorithm is used.
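
As a deliberately simplified illustration of this pruning idea (not the paper's pseudo-residue construction), suppose each view carries a single interval constraint on one attribute it shares with the query; a view whose constraint cannot overlap the query's is provably irrelevant and never enters its bucket. All view names and constraints below are hypothetical:

```python
import math

def overlaps(a, b):
    # Each constraint is a closed interval (lo, hi) over the shared attribute.
    return max(a[0], b[0]) <= min(a[1], b[1])

def select_views(query_constraint, views):
    # Keep only views whose constraint can co-exist with the query's;
    # conflicting views are pruned before any rewriting is attempted.
    return [name for name, c in views if overlaps(query_constraint, c)]

query = (0, 17)                          # e.g. the query asks for age <= 17
views = [("minors_v", (0, 17)),          # consistent with the query: kept
         ("adults_v", (21, math.inf)),   # provable conflict: pruned
         ("all_v",    (0, math.inf))]    # consistent with the query: kept
kept = select_views(query, views)
print(kept)   # ['minors_v', 'all_v']
```

Real domain semantics are richer than single intervals, but the payoff is the same: fewer views per bucket means far fewer candidate combinations in the rewriting step.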

Title:

THE ABORTION RATE OF LAZY REPLICATION PROTOCOLS FOR DISTRIBUTED DATABASES

Author(s):

Luis  Irún-Briz

Abstract: Lazy update protocols have proven to have undesirable behavior due to their high abortion rate in scenarios with a high degree of access conflicts. In this paper, we present the problem of the abortion rate in such protocols from a statistical point of view, in order to provide an expression that predicts the probability that an object is out of date during the execution of a transaction. We also suggest a pseudo-optimistic technique that makes use of this expression to reduce the abortion rate caused by accesses to out-of-date objects. The proposal is validated by means of simulations of the behavior of the expression. Finally, the application of the presented results to improving lazy update protocols is discussed, providing a technique to theoretically determine the boundaries of the improvement.
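
The quantity being modelled can be illustrated with a toy Monte-Carlo sketch (the object count, staleness fraction and accesses per transaction below are arbitrary assumptions, not figures from the paper): under lazy replication, the chance that a transaction touches at least one stale object, and thus risks abortion, grows quickly with the number of objects it accesses:

```python
import random

def estimate_abort_rate(n_objects=1000, stale_fraction=0.05,
                        accesses=10, trials=100_000, seed=1):
    """Fraction of simulated transactions that touch >= 1 stale object."""
    rng = random.Random(seed)
    stale_cutoff = int(n_objects * stale_fraction)
    aborts = 0
    for _ in range(trials):
        touched = rng.sample(range(n_objects), accesses)
        if any(obj < stale_cutoff for obj in touched):  # ids below the cutoff are stale
            aborts += 1
    return aborts / trials

# For these parameters the estimate lies close to the analytic
# 1 - (1 - stale_fraction)**accesses, i.e. roughly 0.40.
est = estimate_abort_rate()
print(round(est, 3))
```

Even a 5% staleness rate aborts around 40% of ten-access transactions, which is why an expression predicting this probability is worth exploiting.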

Title:

NEW FAST ALGORITHM FOR INCREMENTAL MINING OF ASSOCIATION RULES

Author(s):

Yasser El-Sonbaty , Rasha Kashef

Abstract: Mining association rules is a well-studied problem, and several algorithms have been presented for finding large itemsets. In this paper we present a new algorithm for the incremental discovery of large itemsets in a growing set of transactions. The proposed algorithm is based on partitioning the database and keeping a summary of the local large itemsets for each partition, based on the negative border technique. A global summary for the whole database is also created to facilitate the fast updating of the overall large itemsets. When adding a new set of transactions to the database, the algorithm uses these summaries instead of scanning the whole database, thus reducing the number of database scans. The results of applying the new algorithm show that the technique is quite efficient and, in many respects, superior to other incremental algorithms such as the Fast Update Algorithm (FUP) and Update Large Itemsets (ULI).
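
The summary idea can be sketched in a deliberately simplified form (plain per-partition itemset counts, without the negative-border bookkeeping of the actual algorithm): each partition keeps local counts, and the overall large itemsets are recomputed from the per-partition summaries, so adding new transactions never forces a rescan of the old partitions:

```python
from collections import Counter
from itertools import combinations

def partition_summary(transactions, max_size=2):
    # Local summary: counts of every itemset up to max_size in this partition.
    counts = Counter()
    for t in transactions:
        for k in range(1, max_size + 1):
            for itemset in combinations(sorted(t), k):
                counts[itemset] += 1
    return counts

def large_itemsets(summaries, min_support):
    # Global counts are just the sum of partition summaries: no rescan needed.
    global_counts = sum(summaries, Counter())
    return {i for i, c in global_counts.items() if c >= min_support}

old = partition_summary([{"a", "b"}, {"a", "c"}, {"a", "b"}])
new = partition_summary([{"a", "b"}])        # freshly added transactions
frequent = sorted(large_itemsets([old, new], min_support=3))
print(frequent)   # [('a',), ('a', 'b'), ('b',)]
```

Keeping full counts is exact but memory-hungry; the negative border is precisely what lets the real algorithm store far less per partition while still detecting newly large itemsets.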

Title:

WISH QUERY COMPOSER

Author(s):

Gregory Butler

Abstract: The WISH (With Intuitive Search Help) Query Composer is a software tool for composing form-based queries and their associated reports for relational databases. It incorporates the SQL and XML industry standards to generate user-friendly customizable queries and reports. It uses the very simple but flexible XML semantics to represent database schemas, SQL queries and result datasets, regardless of in which relational database management system (RDBMS) the data is stored. The tool is developed in the Eclipse development environment using the Java programming language with Swing components, and connects to the database through Java Database Connectivity (JDBC). The Java Architecture for XML Binding (JAXB) is used to automate the mapping between XML documents and Java objects.

Title:

AN EXCHANGE SERVICE FOR FINANCIAL MARKETS

Author(s):

Fethi Rabhi , Feras Dabous , Hairong Yu

Abstract: The critical business requirements and the compelling nature of the competitive landscape are pushing Information Technology systems away from the traditional centrally controlled corporate-wide architectures towards dynamic, loosely coupled, self-defining and service-based solutions. Web services are regarded as a key technology for addressing the need to connect extended applications and for providing standards and flexibility for enterprise legacy systems integration. This paper reports our experiences in integrating a financial market trading system. The integration process starts by analysing the trading system’s architecture, then identifies system functionality and finally realises the design and implementation of a Web service. Performance, security and the trade-offs involved are the major focus points throughout this process. Comprehensive benchmarking is conducted with and without Web service and security considerations.

Title:

DYNAMIC CHANGE OF SERVER ASSIGNMENTS IN DISTRIBUTED WORKFLOW MANAGEMENT SYSTEMS

Author(s):

Manfred Reichert

Abstract: Process-oriented application systems can only be realized -- with reasonable effort and at acceptable costs -- by the use of a workflow management system (WfMS). A central WfMS, with a single server controlling all workflow (WF) instances, however, may become overloaded very soon. In the WF literature, therefore, many approaches suggest using a multi-server WfMS with distributed WF control. In such a distributed WfMS, the concrete WF server for the control of a particular WF activity is usually defined by an associated server assignment. Following such an approach, problems may occur if components (WF servers, subnets, or gateways) become overloaded or break down. As we know from other fields of computer science, a favorable approach to handling such cases may be to dynamically change the hardware assignment. This corresponds to the dynamic change of server assignments in a WfMS. For the first time, this paper analyses to what extent this approach is reasonable in such situations.

Title:

A/D CASE: A NEW HEART FOR FD3

Author(s):

Manuel Enciso

Abstract: In [anonymous] we introduced the Functional Dependencies Data Dictionary (FD3) as an architecture to facilitate the integration of database systems. We propose the use of logics based on the notion of Functional Dependencies (FD) to allow formal specification of the objects of a data model and to enable future automated treatment. The existence of an FD logic provides a formal language suitable for carrying out integration tasks and eases the design of an automatic integration process based on the axiomatic system of the FD logic. In addition, FD3 provides a High Level Functional Dependencies (HLFD) Data Model which is used in a similar way to the Entity/Relationship Model. In this paper, we develop a CASE tool named A/D CASE (Attribute/Dependence CASE) that illustrates the practical benefits of the FD3 architecture. In the development of A/D CASE we have taken into account other theoretical results which improve our original FD3 proposal [anonymous]: a new functional dependencies logic named SLfd for removing redundancy in a database sub-model, which we present in [anonymous] and whose use adds formalization to the software engineering process; and an efficient preprocessing transformation based on the substitution paradigm, which we present in [anonymous]. Since A/D CASE is independent of the Relational Model, it can be integrated into different database systems and is compatible with relational DBMSs.

Title:

EFFICIENT QUERYING OF TRANSFORMED XML DOCUMENTS

Author(s):

Georg Birkenheuer , Stefan Böttcher , Sven Groppe

Abstract: An application using XML for data representation requires the transformation of XML data if the application accesses XML data of other applications, or of a global database using another XML format. The common approach transforms entire XML documents from one format into another e.g. by using an XSLT stylesheet. The application can then work locally on a copy of the original document transformed in the application-specific format. Different from the common approach, we use an XSLT stylesheet in order to transform a given XPath query such that we retrieve and transform only that part of the XML document which is sufficient to answer the given query. Among other things, our approach avoids problems of replication, saves processing time and in distributed scenarios, transportation costs. Experimental results of a prototype prove that our approach is scalable and efficient.

Title:

ATTENUATING THE EFFECT OF DATA ABNORMALITIES ON DATA WAREHOUSES

Author(s):

Orlando Belo , Anália Lourenço

Abstract: Today’s informational entanglement makes it crucial to enforce adequate management systems. Data warehousing systems appeared with the specific mission of providing adequate contents for data analysis, ensuring the gathering, processing and maintenance of all data elements deemed valuable. Data analysis in general, and data mining and on-line analytical processing facilities in particular, can achieve better, sharper results because data quality is finally taken into account. The available elements must be submitted to intensive processing before they can be integrated into the data warehouse. Each data warehousing system embraces extraction, transformation and loading processes, which are in charge of all the processing concerning the preparation of data for its integration into the data warehouse. Usually, data is scoped at several stages, inspecting data and schema issues and filtering out all those elements that do not comply with the established rules. This paper proposes an agent-based platform which not only ensures the traditional data flow, but also tries to recover the filtered data when a data error occurs. It is intended to perform the process of error monitoring and control automatically. Bad data is processed and eventually repaired by the agents, which integrate it again into the data warehouse’s regular flow. All data processing efforts are registered and afterwards mined in order to establish data error patterns. The obtained results will enrich the wrappers’ knowledge about the resolution of abnormal situations. Eventually, this evolution will enhance the data warehouse population process, enlarging the integrated volume of data and enriching its quality and consistency.

Title:

A HYBRID APPROACH FOR EFFICIENT STORAGE AND RETRIEVAL OF MULTIDIMENSIONAL DATA

Author(s):

Jagdish K.T. , Srivani T.K.

Abstract: Mapping multidimensional data to one dimension using the Hilbert index has been studied as a way of indexing multidimensional data for storage and retrieval. There are two main approaches to the storage and retrieval of multidimensional data (Jurgens, 2002): the tree-based approach and bitmap indexing. The main benefit of the tree-based approach over bitmap indexing is its superior storage properties and efficient insert/update operations; bitmap indexing, on the other hand, provides faster retrieval. Our data structure is mainly based on the tree-based approach, in which every node of the tree contains a bit array. The presence of a bit array in every node provides for faster retrieval, thereby giving the benefit of both approaches. In this paper, we present a tree (the HT-tree) based on Hilbert curves for efficient storage and retrieval of multidimensional data. The HT-tree data search method mainly makes use of the bit representation of the Hilbert index values to search for data, instead of the conventional point search methods used in most R-trees. The proposed data structure overcomes the disadvantages of the HG-tree, namely the extra computation of the minimum bounding rectangle from the range of Hilbert values required for point, range and nearest-neighbour search, as well as the problems arising from the overlap area and redundant searches.
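
The dimensionality reduction the HT-tree builds on can be illustrated with the textbook rotate-and-flip computation of a 2-D Hilbert index (this is the standard algorithm, not the HT-tree's own code):

```python
def hilbert_index(order, x, y):
    """Position of grid cell (x, y) along a Hilbert curve of the given
    order (the grid side is 2**order)."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Cells adjacent along the curve get consecutive indices, so points that are
# close in 2-D space tend to stay close in the 1-D index used for storage.
print([hilbert_index(1, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]])
# [0, 1, 2, 3]
```

It is the bit pattern of this index that the HT-tree search exploits, rather than converting ranges of Hilbert values back into bounding rectangles as the HG-tree must.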

Title:

RELATIONAL SAMPLING FOR DATA QUALITY AUDITING AND DECISION SUPPORT

Author(s):

José Nuno Oliveira , Bruno Cortes

Abstract: This paper presents a strategy for applying sampling techniques to relational databases, in the context of data quality auditing or decision support processes. Fuzzy cluster sampling is used to survey sets of records for correctness of business rules. Relational algebra estimators are presented as a data quality-auditing tool.

Title:

TURNING INFORMATION INTO ACTION: FROM DATA TO BUSINESS PROCESSES THROUGH WEB SERVICES

Author(s):

Youcef Baghdadi

Abstract: Sharing Web services across the enterprise and to support business-to-business integration is becoming increasingly intensive and critical for businesses. This paper proposes a process to generate Web services from the attributes of the business objects and coordination artifacts described at the highest abstraction level of a business model, i.e. the universe of discourse, where the elements are unique and not duplicated. Indeed, the elements of the information system, the technology-based representation of the universe of discourse, are complex and redundant. The process is based on the concept of factual dependency. A factual dependency is a mechanism that allows the aggregation of attributes that are concerned by the same CRUD operations with respect to time and space. Factual dependencies are then validated with respect to the possible business events, to keep only the relevant ones. Each distinct operation, specified in terms of input/output parameters, generates a Web service of the lowest level of granularity. These Web services are then registered to be discovered and (re)used on request by any business process.

Title:

LIFESTREAMS: BRAIN-FRIENDLY DATA ACCESS

Author(s):

Jussi Kangasharju , Tobias Limberger , Gerhard Austaller

Abstract: Modern databases are rapidly growing in size and complexity. However, many users do not have enough domain knowledge to formulate precise queries and are thus unable to use these databases to their full potential. In this paper we present our LifeStreams project which aims at a brain-friendly access to data using associations between documents. Associations in LifeStreams are based on examining similarities between documents in several metadata dimensions such as time, location, and keywords. We present a model for real world and abstract entities and discuss how the relationships between entities and documents can be established. We show how LifeStreams visualizes collections of documents using a 3-dimensional visualization technique. We also discuss real-world application scenarios for LifeStreams in a corporate environment.

Title:

A METHOD BASED ON CHAOTIC AND FRACTAL CONTROL FOR SOFTWARE QUALITY - AN EXPERIENCE

Author(s):

ZHANG Kai

Abstract: Despite the fact that great efforts have been made, major software problems remain unsolved, such as schedule overruns and low quality. Chaos and fractal theory have become a focal research field in recent years, but only two papers have studied software quality with chaos tools. The purpose of this paper is to explore an approach to controlling software quality early by means of chaotic and fractal tools. After analysing the growth process of software defects, the authors conclude that software defect growth has chaotic, fractal characteristics, and design a method based on chaotic and fractal control for the process management of software quality. Two experiments have testified to the efficiency of the control.

Title:

IMPROVING QUERY PERFORMANCE ON OLAP-DATA USING ENHANCED MULTIDIMENSIONAL INDICES

Author(s):

Yaokai Feng , Hiroshi  Ryu , Akifumi Makinouchi

Abstract: Multidimensional indices are effective at improving query performance on OLAP data. The R*-tree, a member of the well-known R-tree family, is a popular and successful multidimensional index structure. We enhance the R*-tree to improve the performance of range queries on OLAP data. First, the following observations are presented. (1) The clustering pattern of the tuples (of the OLAP data) among the R*-tree leaf nodes is a decisive factor in range search performance, and it is controllable. (2) There often exist many slender nodes when the R*-tree is used to index business data, which causes problems both in the construction of the R*-tree and in queries. We then propose an approach to control the clustering pattern of tuples and an approach to solve the problem of slender nodes, where slender nodes are those having a very narrow side (even of zero length) in some dimension. Our proposals are examined by experiments using synthetic data and TPC-H benchmark data.

Title:

MANAGING WEB-BASED INFORMATION

Author(s):

Tullio  Vernazza , Giancarlo Succi , Alberto  Sillitti , Marco Scotto

Abstract: The heterogeneity and lack of structure of the World Wide Web make the automated discovery, organization, and management of Web-based information a non-trivial task. Traditional search and indexing tools provide some comfort to users, but they generally neither provide structured information nor categorize, filter, or interpret documents in an automated way. In recent years, these factors have prompted the need for data mining techniques applied to the Web, giving rise to the term “Web Mining”. This paper introduces the problem of Web data extraction and gives a brief analysis of the various techniques to address it. Then News Miner, a tool for Web content mining applied to news retrieval, is presented.

Title:

ADVANTAGES OF UML FOR MULTIDIMENSIONAL MODELING

Author(s):

Sergio Luján-Mora , Juan Trujillo , Panos Vassiliadis

Abstract: In the last few years, various approaches for multidimensional (MD) modeling have been presented. However, none of them has been widely accepted as a standard. In this paper, we summarize the advantages of using object orientation for MD modeling. Furthermore, we use the UML, a standard visual modeling language, for modeling every aspect of MD systems. We show how our approach elegantly resolves some important problems of MD modeling, such as multistar models, shared hierarchy levels, and heterogeneous dimensions. We believe that our approach, based on the popular UML, can be successfully used for MD modeling and can represent the most frequent MD modeling problems at the conceptual level.

Title:

SEMI-STRUCTURED INFORMATION WAREHOUSES: AN APPROACH TO A DOCUMENT MODEL TO SUPPORT THEIR CONSTRUCTION

Author(s):

Juan Manuel Pérez Martínez , Rafael Berlanga Llavori , Maria Jose Aramburu Cabo

Abstract: During the last decade, data warehouse and OLAP techniques have helped companies to gather, organize and analyze the structured data they produce. Simultaneously, digital libraries have applied Information Retrieval mechanisms to query their repositories of unstructured documents. In this context, the emergence of XML means the convergence of these two approaches, making possible the development of warehouses for semi-structured information. Although there exist several extensions of traditional data warehouse technology to manage semi-structured information, none of them are based on an underlying document model able to exploit this kind of information. Along this paper we present a set of requirements for semi-structured warehouses, as well as a document model to support their construction.

Title:

FACILITATING BUSINESS PROCESS MANAGEMENT WITH HARMONIZED MESSAGING

Author(s):

Shazia Sadiq , Maria Orlowska , Wasim Sadiq , Karsten Schulz

Abstract: Process communication is characterized by complex interactions between heterogeneous and autonomous systems within the enterprise and often between trading partners. A number of initiatives and proposals are underway to provide solutions for process specification and communication. However, the focus is often on defining APIs and interfaces rather than the semantics of the underlying message exchange. We see great potential in the enhancement of the current messaging infrastructure, in its new role of facilitating complex, long-running interactions for dynamic and collaborative processes operating in decentralized environments like the World-Wide Web. In this paper, we primarily present a vision for a technology aimed at providing a level of business logic on the messaging layer, which we term the harmonisation of messages. We provide the conceptual framework for the harmonized messaging technology and identify fundamental issues for the specification of complex interactions.

Title:

MINING CLICKSTREAM-BASED DATA CUBES

Author(s):

Orlando Belo , Ronnie Alves

Abstract: Clickstream analysis can reveal usage patterns on a company’s web sites, giving a highly improved understanding of customer behaviour, which can be used to improve customer satisfaction with the website and the company in general, yielding a great business advantage. Such summary information and rules have to be extracted from very large collections of clickstreams on web sites. This is a challenging data mining task, both in terms of the magnitude of the data involved and the need to incrementally adapt the mined patterns and rules as new data is collected. In this paper, we present some guidelines for implementing on-line analytical mining (OLAM) engines, that is, an integration of OLAP and mining techniques for exploring multidimensional data cube structures. In addition, we describe a data cube alternative for analyzing clickstreams, and we discuss implementations that we consider efficient approaches to exploring multidimensional data cube structures, such as DBMiner, WebLogMiner, and the OLAP-based Web Access Engine.

Title:

TRANSACTION CONCEPTS FOR SUPPORTING CHANGES IN DATA WAREHOUSES

Author(s):

Zbyszko Krolikowski , Robert Wrembel , Bartosz Bebel

Abstract: A data warehouse (DW) provides information from external data sources for analytical processing, decision making, and data mining tools. External data sources are autonomous, i.e. they change over time independently of the DW. Therefore, the structure and content of a DW have to be periodically synchronized with its external data sources. This synchronization concerns DW data as well as the schema. The concurrent execution of synchronizing processes and user queries may result in various anomalies. In order to tackle this problem we propose to apply a multiversion data warehouse and an advanced transaction mechanism to DW synchronization.

Title:

AN ALTERNATIVE APPROACH FOR BUILDING WEB-APPLICATIONS

Author(s):

Oleg Rostanin

Abstract: Nowadays in the J2EE world there are many blueprints, articles and books that propose recommendations, recipes and patterns for producing web applications in the right way. There are also ready-made solutions, such as Jakarta Struts, that can be taken as the base of a new project. While developing the DaMiT e-learning system we tried to collect, analyse and implement many of the architectural features that have been proposed, as well as to introduce some new mechanisms, such as support for multiple kinds of client software and XML-based interfaces between application tiers.

Title:

RJDBC: A SIMPLE DATABASE REPLICATION ENGINE

Author(s):

Javier Esparza Peidro

Abstract: Providing fault-tolerant services is a key concern for many service providers. Thus, enterprises usually acquire complex and expensive replication engines. This paper offers an interesting alternative for organizations which cannot afford such costs. RJDBC is a simple, easy-to-install middleware placed between the application and the database management system, intercepting all database operations and forwarding them to all the replicas of the system. From the point of view of the application, however, the database management system is accessed directly, so that RJDBC is able to supply replication capabilities in a transparent way. Such a solution provides acceptable results in clustered configurations. This paper describes the architecture of the solution and some significant results.
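
The interception principle can be sketched with a toy Python analogue (this is not RJDBC itself, and it omits failure handling, ordering guarantees and transactions): the application talks to a single connection-like object, every update is broadcast to all replicas, and reads are served by one of them:

```python
import sqlite3

class ReplicatedConnection:
    """Forward every write to all replicas; serve reads from the first."""
    def __init__(self, replicas):
        self.replicas = replicas

    def execute_write(self, sql, params=()):
        for conn in self.replicas:        # broadcast the update
            conn.execute(sql, params)
            conn.commit()

    def execute_read(self, sql, params=()):
        return self.replicas[0].execute(sql, params).fetchall()

r1, r2 = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
db = ReplicatedConnection([r1, r2])
db.execute_write("CREATE TABLE t (v INTEGER)")
db.execute_write("INSERT INTO t VALUES (?)", (42,))
# One logical write from the application, yet every replica holds the row.
print(r2.execute("SELECT v FROM t").fetchall())   # [(42,)]
```

Placing this interception at the driver layer, as RJDBC does with JDBC, is what lets the application remain unaware that replication is happening at all.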

Title:

TOWARDS DESIGN RATIONALES OF SOFTWARE CONFEDERATIONS

Author(s):

Michal Zemlicka

Abstract: The paper discusses the reasons why service-oriented architecture is a new software paradigm and the consequences of this fact for the design of enterprise information systems. It is shown that such systems, called confederations, need not (and should not) use web services in the sense of the W3C, which are more or less a necessity in e-commerce. As business processes supported by enterprise systems must be supervised by businessmen, the same must hold for communication inside confederations. It follows that the interfaces of the services must be user-oriented (user-friendly). This has positive consequences for the software engineering properties of the confederation. Confederations should sometimes include parts based on a difficult implementation philosophy (e.g. data orientation); the pros and cons of this are discussed. Open issues of service orientation are presented.

Title:

SOLVING INTEROPERABILITY PROBLEMS ON A FEDERATION OF SOFTWARE PROCESS SYSTEMS

Author(s):

Mohamed-Amine MOSTEFAI , Mohamed AHMED-NACER

Abstract: Software process components that share information and cooperate on common tasks lead to multiple interoperability problems for software process support environments based on a federation of heterogeneous and autonomous components. Some interoperability approaches have been proposed, especially at the conceptual level. However, more problems remain to be solved to enable the interoperability of heterogeneous process components at the execution level. This paper presents a process-based approach (architecture) for the federation of software process systems. Based on this federation architecture, we focus on its implementation problems for process execution interoperability. We show how we solve these problems and discuss their implementation on the main development platforms for distributed applications.

Title:

VERSION MANAGEMENT FOR DATA WAREHOUSE EVOLUTION

Author(s):

Alexandre Schlottgen , Nina Edelweiss

Abstract: Various multidimensional data models have been proposed in recent years for Data Warehouse (DW) modeling. However, there is a considerable shortage of models that deal with DW schema evolution. In order to understand the DW life cycle and guarantee the correct and consistent maintenance of the populated data, it is necessary to control the modifications made to multidimensional schemata. This article studies DW schema modification operations, presenting an extension to ME/R (Multidimensional Entity Relationship Model) to support the management of multiple versions of DW schemata.

Title:

A RESPONSIBILITY-DRIVEN ARCHITECTURE FOR MOBILE ENTERPRISE APPLICATIONS

Author(s):

Qusay Mahmoud

Abstract: This paper deals with wireless applications that are downloaded over the air onto handheld wireless devices and executed there. Once running, they may need to interact with applications residing on remote wired servers. The motivation for this work is provided in part by the characteristics of the wireless computing environment. These characteristics have several implications that call for a software architecture that reduces the load on the wireless link and supports disconnected operations. We present a responsibility-driven architecture that enables mobile thin clients to interact with enterprise servers. We extend this architecture with mobile agents to reduce the load on the wireless link and support disconnected operations. This architecture is capable of supporting multiple devices with or without a client browser.

Title:

DESIGN AND REPRESENTATION OF THE TIME DIMENSION IN ENTERPRISE DATA WAREHOUSES - A BUSINESS RELATED PRACTICAL APPROACH

Author(s):

Ahmed Hezzah , A Min Tjoa

Abstract: A data warehouse provides a consistent view of business data over time. In order to do that, data is represented in logical dimensions, with time being one of the most important. Representing time, however, is not always straightforward due to the complex nature of time issues and the strong dependence of the time dimension on the type of business. This paper addresses the specific issues encountered during the design of the time dimension for multidimensional data warehouses. It introduces design and modeling techniques for representing time in the data warehouse by the use of one or multiple time dimensions or database timestamps. It also discusses generic problems linked to the design and implementation of the time dimension which have to be considered for (global) business processes, such as representing holidays and fiscal periods, increasing the granularity of business facts, observing daylight saving time and handling different time zones. These problems seem to have wide application, and yet more in-depth investigations need to be conducted in this field for real-world time-based analysis in enterprise-wide data warehouses.

Title:

A METHOD FOR XML DOCUMENT SCHEMA EVOLUTION

Author(s):

Lina Al-Jadir

Abstract: XML has become an emerging standard for data representation and data exchange on the Web. Although XML data is self-describing, most application domains tend to use document schemas. Over a period of time, these schemas need to be modified to reflect a change in the real world, a change in the user’s requirements, or mistakes and missing information in the initial design. Most of the current XML management systems do not support schema changes. In this paper, we propose a method to manage XML document schema evolution. We consider XML documents associated with DTDs. Our method consists of three steps. First, the DTD and XML documents are stored as a database schema and a database instance respectively. Second, DTD changes are applied as schema changes on the database. Third, the updated DTD and XML documents are retrieved from the database. Our method supports a complete set of DTD changes. The semantics of each DTD change is defined by preconditions and postactions, such that the new DTD is valid, existing XML documents conform to the new DTD, and data is not lost if possible. We implemented our method in an object-oriented database system.
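
The precondition/postaction idea for a single DTD change can be sketched as follows, under assumed representations chosen for illustration only (a DTD as a dict mapping an element name to the set of child element names it may contain, a document as a list of element names); none of these names come from the paper.

```python
# Minimal sketch of one DTD change ("delete element type") defined by a
# precondition (the new DTD stays valid) and a postaction (existing
# documents are made to conform). Representations are hypothetical.

def delete_element(dtd, documents, name):
    # Preconditions: the element must exist, and no other element may
    # still reference it, otherwise the new DTD would be invalid.
    if name not in dtd:
        raise ValueError(f"unknown element {name!r}")
    for elem, children in dtd.items():
        if elem != name and name in children:
            raise ValueError(f"{name!r} still referenced by {elem!r}")
    del dtd[name]
    # Postaction: make existing documents conform to the new DTD by
    # dropping the now-undeclared elements (data loss is confined to them).
    for doc in documents:
        doc[:] = [e for e in doc if e != name]

dtd = {"book": {"title"}, "title": set(), "note": set()}
docs = [["title", "note"], ["title"]]
delete_element(dtd, docs, "note")
```

Attempting to delete `title` instead would fail the precondition, since `book` still references it; this is exactly the "new DTD is valid" guarantee the abstract describes.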

Title:

PROPOSAL FOR AUTOMATING THE GENERATION PROCESS OF QUESTIONNAIRES TO MEASURE THE SATISFACTION LEVEL OF SOFTWARE USERS

Author(s):

María Inés Lund , Sergio Zapata , Mauro Paparo

Abstract: The most recent concepts of software quality take into account the factors of product quality, process quality and the satisfaction level of users. Therefore, when putting forth a plan for improving a software product, special attention should be paid to incorporating the level of users’ satisfaction into the development premises. In this respect, well-designed surveys have proven to be a valuable tool for obtaining and measuring satisfaction variables. The survey-based strategies, however, have the drawback that the tasks involved in questionnaire generation are difficult to automate, which renders the entire approach almost impracticable. This work presents a proposal for automating the various stages of questionnaire generation, with the aim of making the measurement method both applicable and more practical.

Title:

ONTOEDITOR: A WEB TOOL FOR MANIPULATING ONTOLOGIES STORED IN DATABASE SERVERS

Author(s):

Claudio de Souza Baptista , Karine Freitas Vasconcelos , Ulrich Schiel , Ladjane Silva Arruda , Elvis Rodrigues da Silva

Abstract: The Web is moving to a new generation in which machine-understandable processing is mandatory. In order to achieve this goal it is essential to define ontologies which enable the modeling of application domains and can be shared and understood by different applications on different platforms. These ontologies are complex, so it is necessary to provide software tools which aim to facilitate ontology manipulation. In this paper, we describe a new tool for ontology manipulation known as OntoEditor. OntoEditor is a Web tool with a graphical interface for representing an ontology graph. Moreover, OntoEditor uses a database management system for ontology persistence and query manipulation. The ontologies are represented internally as RDF and RDF Schema.

Title:

REFERENTIAL INTEGRITY MODEL FOR XML DATA INTEGRATED FROM HETEROGENEOUS DATABASE SYSTEMS

Author(s):

Mauri Ferrandin

Abstract: This article presents a proposal for maintaining referential integrity in data integrated from heterogeneous relational databases and stored in XML materialised views. The core idea is the creation of a rules repository that must be observed when carrying out any update operation in the mediating layer of a system for integrating heterogeneous relational data sources, so as to guarantee that updates to the data stored in this layer can be propagated to the relational databases that are part of the integrated system without causing referential integrity problems in them. The main objective of this proposal is to specify a mechanism capable of guaranteeing that, after being exported from the heterogeneous relational databases into a mediating layer, the data continue to respect the same integrity constraints to which they were subject in the origin databases.

Title:

MODEL BASED MIDDLEWARE INTEGRATION

Author(s):

Frédérick Seyler

Abstract: In this paper, we describe a process and a meta model that we are defining for the reuse of legacy-based systems. This aims at filling the gap between design-level bridges and the implementation of interoperability. Our proposal comprises a component-based integration process, a metamodel based on well-known component research results, and a reuse architecture allowing an operational integration of legacy applications. The metamodel, called Ugatze, is composed of a set of UML packages covering multiple viewpoints of the reuse activity. Ugatze is the Basque name for the Bearded Vulture, a bird that reuses the bones of dead animals for food; its re-introduction in the Basque Country seems difficult, but it is a challenge.

Title:

REAL-TIME DATABASES FOR SENSOR NETWORKS

Author(s):

Maria Lígia Barbosa Perkusich , Pedro Fernandes  Ribeiro Neto , Angelo Perkusich

Abstract: In recent years, the demand for embedded systems has increased. Also, due to increasing competition among different kinds of companies, such as cellular phone, automobile and industrial automation manufacturers, the requirements for such systems are getting more complex. However, the data storage and processing techniques for these environments are insufficient for the new requirements. In this paper, we develop a model for the integration of real-time database technology with embedded sensor network systems to tackle such deficiencies.

Title:

MEMORY MANAGEMENT FOR LARGE SCALE DATA STREAM RECORDERS

Author(s):

Zimmermann Roger , Kun Fu

Abstract: Presently, digital continuous media (CM) are well established as an integral part of many applications. In recent years, a considerable amount of research has focused on the efficient retrieval of such media. Scant attention has been paid to servers that can record such streams in real time. However, more and more devices produce direct digital output streams. Hence, the need arises to capture and store these streams with an efficient data stream recorder that can handle both recording and playback of many streams simultaneously and provide a central repository for all data. In this report we investigate memory management in the context of large scale data stream recorders. We are especially interested in finding the minimal buffer space needed that still provides adequate resources with varying workloads. We show that computing the minimal memory is an NP-complete problem and will require further research to find efficient heuristics.

Title:

CONVERTING LEGACY RELATIONAL DATABASE INTO XML DATABASE THROUGH REVERSE ENGINEERING

Author(s):

Anthony Lo , Reda Alhajj , Ken Barker , Chunyan Wang

Abstract: XML (eXtensible Markup Language) has emerged and is being gradually accepted as the standard for data interchange over the Internet. Since most data is currently stored in relational database systems, the problem of converting relational data into XML assumes special significance. Many researchers have already made progress in this direction. They mainly focus on finding the XML schema (e.g., DTD, XML-Schema, and RELAX) that best describes a given relational database with a corresponding well-defined database catalog that contains all information about tables, keys and constraints. However, not all existing databases can provide the required catalog information. Therefore, these applications do not work well for legacy relational database systems that were developed following the logical relational database design methodology, without being based on any commercial DBMS, and hence do not provide well-defined metadata files describing the database structure and constraints. In this paper, we address this issue by first applying a reverse engineering approach to extract the ER (Extended Entity Relationship) model from a legacy relational database, and then converting the ER model to an XML Schema. The proposed approach is capable of reflecting the flexibility of the relational schema in the XML schema by considering the mapping of binary and n-ary relationships. We have implemented a first prototype, and the initial experimental results are very encouraging, demonstrating the applicability and effectiveness of the proposed approach.
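
Only the final step of such a pipeline, emitting the rows of a recovered entity as XML, is simple enough to sketch here; the reverse engineering of the ER model is the paper's actual contribution and is not shown. All names below are made up for illustration.

```python
# Sketch of the last step of a relational-to-XML conversion: rows of an
# entity become nested XML elements. Entity and column names are
# hypothetical; the ER extraction that would produce them is omitted.
import xml.etree.ElementTree as ET

def rows_to_xml(entity, columns, rows):
    root = ET.Element(entity + "s")            # crude plural for the wrapper
    for row in rows:
        elem = ET.SubElement(root, entity)
        for col, val in zip(columns, row):
            ET.SubElement(elem, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

xml = rows_to_xml("author", ["name", "dept"], [("Lo", "CS")])
```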

Title:

KEYS GRAPH-BASED RELATIONAL TO XML TRANSLATION ALGORITHM

Author(s):

Christine  VERDIER , Wilmondes MANZI DE ARANTES

Abstract: The authors propose two algorithms for generating a DTD and an XML document respectively from the metadata and the content of a relational database, without any intermediary language or user intervention. These algorithms always generate semantically correct XML output by respecting the database functional dependencies, represented in a graph structure they take as input. Finally, different XML representations (or views) meeting the expectations of different kinds of users can be obtained from the same data, according to the database entity chosen as translation pivot.

Title:

DURATIVE EVENTS IN ACTIVE DATABASES

Author(s):

Juan Carlos Augusto , Rodolfo Gomez

Abstract: Active databases are DBMSs that are able to detect certain events in the environment and trigger actions in consequence. Event detection has been the subject of much research, and a number of different event specification languages exist. However, this is far from being a trivial or accomplished task. Most of these languages handle just instantaneous events, but it has been noticed that a number of situations arise where it would be interesting or even necessary to handle durative events. We elaborate on a given specification language which combines instantaneous and durative events, revealing some issues which must be taken into account when the semantics of event composition is defined.
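
The distinction between the two event kinds can be sketched with one composition operator, as an illustration only: an occurrence is an interval (start, end), an instantaneous event is simply one with start == end, and the operator below is a generic sequence composition, not the paper's actual language.

```python
# Sketch of a sequence operator over instantaneous and durative event
# occurrences. An occurrence is an interval (start, end); instantaneous
# events have start == end. Semantics here is illustrative only.

def sequence(occs_a, occs_b):
    """A;B occurs over the smallest interval covering an occurrence of A
    followed (strictly later) by an occurrence of B."""
    return [(sa, eb)
            for (sa, ea) in occs_a
            for (sb, eb) in occs_b
            if ea < sb]

# One instantaneous and one durative occurrence of A, composed with B.
pairs = sequence([(1, 1), (4, 6)], [(3, 5), (8, 8)])
```

Even this toy operator shows the issue the paper raises: once events have duration, the semantics must say whether A must *end* before B *starts*, may overlap it, and so on, choices that are invisible when all events are points.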

Title:

EMULATIVE SOFTWARE ENGINEERING - AN EXPERIMENT AND EXPERIENCE

Author(s):

Xiong Qianxing , Zhang Zhang Kai

Abstract: Concurrent Engineering is a good method, but it overstresses the communication and cooperation of various departments in an enterprise, so it does not meet the requirements of the fast tempo and direct confrontation of modern work. This paper proposes a so-called Emulative Software Engineering and places its hope in this method to solve the difficult problems of software quality and schedule control. The authors ran a development experiment based on a teaching activity in which three groups took part. The experimental results and data show that the method is feasible and workable. It is reasonable to believe that the method has great practical value for software development, in spite of the fact that it originates from a new idea and from software development within a teaching activity. In addition, the results of the experiment indicate that Emulative Software Engineering is weak in information exchange, which needs to be rectified with support from the strong points, namely close exchange, of both Concurrent Engineering and knowledge management.

Title:

COOPERATIVE LEGACY DATABASES - AN ONTOLOGY BASED CONTEXT MEDIATION

Author(s):

Philippe Thiran , Djamal  Benslimane

Abstract: Enterprise information systems contain collections of existing databases that must cooperate to carry out common tasks. Most often, these databases are legacy, autonomous and heterogeneous systems. In this paper, we focus on the semantic and dynamic aspects of legacy database interoperation. We present a context mediation approach to support legacy database interoperability, which is based on a conceptual level of database description and on a dynamic resolution of structural and semantic conflicts. An object-oriented data model is described, which provides tools for a conceptually rich description of legacy databases, and foundations for resolving semantic heterogeneities among systems.

Title:

ORGANIZATIONAL INFORMATION SYSTEMS DESIGN AND IMPLEMENTATION WITH CONTEXTUAL CONSTRAINT LOGIC PROGRAMMING

Author(s):

Salvador Abreu

Abstract: In this article we sustain that Contextual Constraint Logic Programming (CxCLP for short) is a useful paradigm in which to specify and implement Organizational Information Systems, particularly when integrated with the ISCO mediator framework. We briefly introduce the language and its underlying paradigm, appraising it from the angle of both of its ancestries: Logic and Object-Oriented Programming. An initial implementation has been developed and is being actively used in a real-world setting -- Universidade de Évora's Academic Information System. We briefly describe both the prototype implementation and its first large-scale application. We conclude that the risk taken in adopting a developing technology such as the one presented herein for a mission-critical system has paid off, in terms of both development ease and flexibility as well as in maintenance requirements.

Title:

WEB-BASED TRAINING SYSTEM FOR FOREST FIRE OFFICE STAFF

Author(s):

Juan Garbajosa

Abstract: The objective of this paper is to present an approach to a web-based training system for forest fire offices. The development of modelling and simulation technology for systems with a network-like architecture is growing day by day, and forest fire offices represent an appropriate application for this development. The approach described is based on a family of XML languages defined in a research project and applied to a number of systems that have been modelled and simulated. This paper introduces two different points of view: first, the system architecture; second, the XML-based language and its use for simulation.

Title:

ARCO: MOVING DIGITAL LIBRARY STORAGE TO GRID COMPUTING

Author(s):

Paulo  Trezentos

Abstract: Storage has been extensively studied during the past few decades (Fost97, Trez01). However, emerging trends in distributed computing bring new solutions for existing problems. Grid computing proposes a distributed approach to data storage. In this paper, we introduce a Grid-based system (ARCO) developed for multimedia storage of large amounts of data. The system is being developed for the Biblioteca Nacional, the National Library of Portugal. Using the Grid informational system and resource management, we propose a transparent system where terabytes of data are stored in a Beowulf cluster built of commodity components, with a backup solution and error recovery mechanisms.

Title:

DATA EXTRACTION AND TRANSFORMATION WITH FLAT FILE FOR BUSINESS INTEGRATION

Author(s):

Sheng Ye , Wei Sun , Zhong Tian

Abstract: Documents and their exchange play important roles in business operations and transactions. With the development of e-business, the capability of exchanging data in different formats is necessary for integrating heterogeneous enterprise applications. Though XML is becoming the standard communication protocol over the Internet, most enterprise applications today can only process text data in a specific format, mostly in flat files. These diverse data formats will continue to exist until the enterprises’ applications are upgraded to versions supporting XML, so transformation between XML and flat files is widely demanded in business integration solutions. This paper introduces a round-trip transformation technology between flat files and XML, the Flat File Adapter. This technology employs a systematic, patent-pending data extraction and formatting method to support the processing of flat files with complex formats. Using the Flat File Adapter, developers can design data transformation rules quickly, and these rules are captured in a template, making them easy to update when requirements change later. In this paper, we introduce the system architecture, the detailed components, and the particular data extraction and transformation method. Finally, a sample application in a B2B e-procurement solution is also described.
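
The template-driven round-trip idea can be illustrated with a toy fixed-width layout. This is a sketch only: the Flat File Adapter's actual template format is not described in the abstract, and the field layout below is hypothetical.

```python
# Illustrative round trip: one template naming the fixed-width fields
# drives both extraction (flat -> XML) and formatting (XML -> flat).
# The (field, start, end) layout is invented for this example.
import xml.etree.ElementTree as ET

TEMPLATE = [("sku", 0, 6), ("qty", 6, 10)]

def flat_to_xml(line):
    rec = ET.Element("record")
    for field, start, end in TEMPLATE:
        ET.SubElement(rec, field).text = line[start:end].strip()
    return ET.tostring(rec, encoding="unicode")

def xml_to_flat(xml_text):
    rec = ET.fromstring(xml_text)
    return "".join(rec.find(field).text.ljust(end - start)
                   for field, start, end in TEMPLATE)

xml = flat_to_xml("AB123   42")
```

Because both directions read the same template, a later change in the flat file layout is a one-line template edit rather than a change to two parsers, which is the maintainability benefit the abstract claims for the template approach.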

Title:

UNDERSTANDING THE ERP POST-IMPLEMENTATION DISCOURSE

Author(s):

Fergal Carton , Frederic Adam , David Sammon

Abstract: This paper presents the first stage of a larger research project focusing on understanding the emergence of ERP II. ERP is now being seen for what it really is: ‘a means to an end’, in that its primary benefit is the integrated infrastructure it introduces and its ability to support future IS investments. The paper focuses on the changes that have been observed in the services offered by vendors and consultants in the now renamed ERP II market. Terms like ‘ERP’ and ‘e-business’ are now for the most part avoided by vendors and consultants, as they are perceived to be out of date. For example, SAP once promoted the fact that they were ‘29 years in the business of e-business’ with ‘the best-run e-businesses run SAP’, but now their message promotes ‘30 years in the business of helping businesses grow’ with ‘the best-run businesses run SAP’. In this paper, issues of concern with the realities of ERP post-implementation are presented through examining: benefits realisation; informational requirements; and generic to specific solutions. While we would argue that it is difficult to understand the rationale for the introduction of these ‘newer’ ERP extensions, we must acknowledge that a market has been created and that once again the ‘new-look’ ERP vendors are the dominant ERP II players. This leads us to question whether there is anything new in ERP II.

Title:

A FRAMEWORK FOR ON-DEMAND INTEGRATION OF ENTERPRISE DATA SOURCES

Author(s):

Tapio Niemi

Abstract: Deploying a data warehouse system in a company is usually an expensive and risky investment. Constructing a data warehouse is a large project that can take a very long time. However, a company cannot know in advance exactly what benefits a data warehouse will offer, nor is it easy to predict what kind of functionality it should support to remain usable when the company's processes or structures change. For these reasons, many data warehousing projects have either been abandoned or been shown to be at least partial failures. We propose a new method by providing a platform on which to implement business intelligence systems. The basic idea is to construct the analysis database (i.e. an OLAP cube) on demand and to include, from the operational databases, only the data needed for the analysis at hand. In this way the data is always up to date and suitable for the current analysis, and some of the biggest risks associated with data warehouse systems can be avoided. The computational costs of cube construction are likely to remain at an acceptable level, since only the part of the data relevant to the current analysis is needed from the operational databases. Moreover, business intelligence systems such as OLAP are traditionally limited to the data stored in the company's data warehouse. In many cases this is not enough, since the phenomenon under analysis may depend on something outside the scope of the company. For example, the oil price or the weather can have a remarkable effect on business. If a decision support system cannot access this kind of external data, the analysis cannot find the right explanation for the problem. The proposed method enables the user performing the analysis to include external data in the OLAP cube.
We outline the use of Grid technologies - a research field closely related to Internet computing - in the implementation, to offer a cost-effective way to harness enough computing power for parallel processing together with a sufficient security infrastructure (GSI). Another aspect of the Grid is that, due to its potential to offer large amounts of storage capacity in a way that optimally leverages the advances in the price/capacity ratio of new storage devices, archived transactional data can also be retrieved in a transparent manner. To deal with heterogeneous data sources, XML with XSL transformations is applied.
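
The on-demand idea, aggregating only the cells the current analysis needs directly from fresh operational rows, can be sketched minimally as follows. The fact schema (region/month/sales) is hypothetical and chosen only for illustration.

```python
# Minimal sketch of on-demand cube construction: group only by the
# dimensions the current analysis asks for, over rows pulled straight
# from operational sources (which could include external data such as
# weather or oil prices). Schema and names are invented.
from collections import defaultdict

def build_cube(rows, dims, measure):
    """rows: dicts; dims: dimension keys to group by; measure: key to sum."""
    cube = defaultdict(float)
    for row in rows:
        cube[tuple(row[d] for d in dims)] += row[measure]
    return dict(cube)

rows = [
    {"region": "north", "month": "01", "sales": 10.0},
    {"region": "north", "month": "02", "sales": 5.0},
    {"region": "south", "month": "01", "sales": 7.0},
]
cube = build_cube(rows, ("region",), "sales")
```

A different analysis simply calls `build_cube` again with different dimensions; no pre-built warehouse has to anticipate which groupings will ever be needed, which is the risk reduction the abstract argues for.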

Title:

MEASURING THE IMPACT OF ENTERPRISE SYSTEMS ON BUSINESS OBJECTIVES

Author(s):

Vincent Owhoso , Donald Chand , James Hunton , Sri Vasudevan , George Hachey

Abstract: This is a research-in-progress report on our project aimed at understanding how to assess the success of ERP systems. Based on an in-depth study of a successful ERP implementation in a multinational manufacturing and service organization, we have identified sample performance indicators in all four dimensions of the Balanced Scorecard. Our study shows that instead of building an ERP scorecard, it is more fruitful to study the impact of ERP systems on business goals and strategies.

Title:

INFOFLEX: FLEXIBLE AND DISTRIBUTED CONTENT MANAGEMENT. USING WEB SERVICES AND SEMANTIC WEB TO MANAGE CONTENT

Author(s):

Antonio Hernández Pérez , Tomás Nogales Flores , David Rodríguez Mateos , Luis Sánchez Fernández , Jesús Arias Fisteus , Norberto Fernández García , Jesús Villamor Lugo

Abstract: The development of information and communication technologies and the expansion of the Internet mean that huge amounts of information are nowadays available via these emergent media. The need to manage such information, which was in the past stored on paper media, has become apparent in different fields. A number of content management systems have appeared which aim to achieve this task. Most of these systems are oriented towards Web publishing on a central site, and they do not support collaboration among several distributed sources of managed content. In this paper we present a proposal for an architecture for the efficient and flexible management of distributed content.

Title:

INFORMATION SYSTEM FOR SUPPORTING THE INCLUSION OF JOB SEEKERS TO THE LABOUR MARKET

Author(s):

Theodoros Alevizos , Christos Skourlas , Paraskevas Hadjidiakos

Abstract: In this paper, the interconnection and integration problem of disparate Information sources including multilingual information related to the Unemployed and Business is analyzed. A possible solution based on the use of the European curriculum vitae and the creation of Data Marts is briefly described. The approach is also influenced by well-known Cross-Lingual Information Retrieval (CLIR) techniques. We also focus on the creation of a pilot Information System for the Institute of Labour (INE) of the Greek General Confederation of Labour (GSEE). Eventually, our experience and a first evaluation of the system are discussed.

Title:

COMPONENT BASED INFORMATION SYSTEM RE-ENGINEERING APPROACH

Author(s):

Abdelaziz  KHADRAOUI , Michel Léonard

Abstract: This paper presents a concept called Component Based Information System Re-Engineering (CISRE), which lays down the foundation of a new re-engineering approach. CISRE covers all the facets of an Information System at three levels: system, collaboration and organization. The proposed approach to IS re-engineering distinguishes two main phases, the comprehension phase and the renovation phase, which are not disjoint. The cognitive space of the comprehension phase permits the clarification of links between legal texts (general procedures) and the IS. The main goal is to converge towards a new IS within a rapidly evolving environment. Therefore, the new IS will be built on stable concepts based on invariants.

Title:

DEPLOYING A SUPPLY CHAIN PORTAL TO TRANSFORM MILITARY OPERATIONS

Author(s):

Robert Sullivan , Sandor Boyson , Robert Stevens

Abstract: This short paper addresses the challenges and anticipated benefits of building and deploying a comprehensive end-to-end supply chain technology infrastructure for the U.S. Army, layering a portal, middleware, collaborative planning and forecasting applications and integrated ERP software in a rapid deployment process. As noted by Boyson and Corsi, a supply chain portal can “harness diverse real time data sources to: provide a unified format and middleware platform for legacy, enterprise and internet data; personalize views based on user requirements and access classifications; distribute field-based data gathered from scanners, PDA devices and other information appliances to multiple users in real time over the portal. Thus, the portal provides a unifying structure allowing a single shared database to coordinate all the transactions within the organization as well as the transactions between the organization and its trading partners in real time.”

Title:

ADDING SPATIAL COMPONENTS TO SCIENTIFIC DATA WAREHOUSES

Author(s):

Kevin Deeb

Abstract: For many years universities and government agencies have been collecting a wealth of scientific data. It is now time to transform these data into information and make them readily available in a common format that is easily accessible, fast, and bridges the islands of information that have evolved at each site. The best architecture for this application is the data warehouse that protects the confidentiality of data before it can be published by principal investigators, preserves the privacy of contributors, provides sufficient granularity to enable scientists to variously manipulate data, supports robust metadata services, and contains a standardized spatial component. The benefits of the warehouse can be further enhanced by adding a spatial component so that the data can be brought to life, overlapping layers of information in a format that is easily grasped by management, enabling them to tease out trends in their areas of expertise.

Title:

DEVELOPING A CORPORATE INFORMATION SYSTEM ARCHITECTURE: THE CASE OF EUROSTAT

Author(s):

François Vernadat , Georges  Pongas

Abstract: The paper presents the vision being deployed at the Statistical Office of the European Communities (Eurostat) about a rationalised IT infrastructure for integrated operations of its various statistical production systems. The new architecture being implemented isolates physical data from applications and users, uses database federation mechanisms, strongly relies on the use of meta-data about storage systems, application systems and data life cycles, emphasises the use of thematic and support servers and will use a message-oriented middleware as its backbone for data exchange. Portal technology will provide the unique gateway both for internal and external users to have public or restricted access to information produced by over 130 statistical production systems working in the back-office. Architectural principles and solutions are discussed.


AREA 2 - Artificial Intelligence and Decision Support Systems
 

Title:

THE DATA FLOW AND DISTRIBUTED CALCULATIONS INTELLIGENCE INFORMATION TECHNOLOGY FOR DECISION SUPPORT SYSTEM IN REAL TIME

Author(s):

Michael Okhtilev

Abstract: The aim of this investigation is to develop: unified models for representing, as knowledge, the states of a complex technological process viewed as a controlled object; methods, algorithms and a system for the automatic synthesis of monitoring (situation assessment) programs for those states, according to a preset target and with the capability of verification and optimization; and a special software prototype realizing the automatic monitoring of controlled objects.

Title:

STRATEGIC DMSS FOR E-BUSINESS PLANNING

Author(s):

Lidan Ha , Guisseppi  Forgionne , Fen Wang

Abstract: Strategic business planning is a critical decision problem determining the long-term survival and prosperity of companies, especially in this e-era. The complex planning process can be facilitated through management science, economics, statistics, and other technological tools. However, managers are rarely aware of these tools, are not proficient in their use, or are incapable of acquiring the proficiency. Through theoretical exploration in previous phases of an ongoing project, we concluded that such proficiency can be delivered through decision-making support systems. The current study aims to develop and implement such a DMSS to deliver the specified e-business planning model and statistical methodologies, which can provide integrated and intelligent support for decision makers during the entire decision-making process. A SAS-based approach was selected as the desired system development and implementation environment. It is the first time that theoretical implications from management science, marketing strategies and economic rules have been integrated in a strategic DMSS and implemented in a field setting.

Title:

UNSUPERVISED ARTIFICIAL NEURAL NETWORKS FOR CLUSTERING OF DOCUMENT COLLECTIONS

Author(s):

Ayad Fekry Ayad , Abdel-Badeeh Salem , Mostafa Syiam

Abstract: The Self-Organizing Map (SOM) has been shown to be a stable neural network model for high-dimensional data analysis. However, its applicability is limited by the fact that some knowledge about the data is required to define the size of the network. In this paper the Growing Hierarchical SOM (GHSOM) is proposed. This dynamically growing architecture evolves into a hierarchical structure of self-organizing maps according to the characteristics of the input data. Furthermore, each map is expanded until it represents the corresponding subset of the data at a specific level. We demonstrate the benefits of this novel model using a real-world example from the document-clustering domain. A comparison between the two models (SOM and GHSOM) was carried out to explain the differences and investigate the benefits of using the GHSOM.
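
The competitive-learning rule that a SOM (and hence the GHSOM) is built on can be sketched in a few lines, as an illustration only: the 2-D grid, the neighbourhood function and the GHSOM's growth and expansion logic are all omitted, and the names are invented.

```python
# Tiny sketch of one SOM training step: the input pulls its best-matching
# unit towards itself (a full SOM also updates the winner's grid
# neighbours; the GHSOM additionally grows and stacks such maps).

def som_step(units, x, lr=0.5):
    """units: list of weight vectors; x: input vector. Returns winner index."""
    dist = lambda u: sum((ui - xi) ** 2 for ui, xi in zip(u, x))
    winner = min(range(len(units)), key=lambda i: dist(units[i]))
    units[winner] = [ui + lr * (xi - ui) for ui, xi in zip(units[winner], x)]
    return winner

units = [[0.0, 0.0], [1.0, 1.0]]
w = som_step(units, [0.9, 1.1])
```

The GHSOM's contribution is precisely that the number of `units` need not be fixed in advance: a map grows, and spawns child maps, until each represents its subset of the data well enough.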

Title:

MULTILAYER PERCEPTRONS TECHNIQUE IN CLASSIFYING STOCKS: A CASE STUDY OF EGYPTIAN STOCKS EXCHANGE

Author(s):

Medhat Abdelaal

Abstract: Classification rates on out-of-sample predictions can often be improved through model selection when fitting a model on the training data. In this paper, a multilayer perceptron neural network trained with the back-propagation algorithm is studied for the classification of financial variables of the Egyptian Stock Exchange. The best network architecture is made up of eleven units: five input units, five hidden units and one output unit. A sensitivity analysis has also been carried out; it can give important insights into the usefulness of individual variables, often identifying variables that can be safely ignored in subsequent analyses and key variables that must always be retained. Finally, the receiver operating characteristic (ROC) curve is used to compare classifiers; it can also be used to select an optimal decision threshold and thereby the network that best captures the variability of the data.
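
The ROC curve mentioned above is obtained by sweeping a decision threshold over the classifier's output scores. A minimal sketch (illustrative, not the paper's code; it assumes both classes are present):

```python
def roc_points(scores, labels):
    """Sweep the decision threshold over the scores and record, for each
    threshold, the (false positive rate, true positive rate) pair."""
    P = sum(1 for y in labels if y)      # positives (assumed > 0)
    N = len(labels) - P                  # negatives (assumed > 0)
    pts = []
    for t in sorted(set(scores), reverse=True) + [min(scores) - 1.0]:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        pts.append((fp / N, tp / P))
    return pts
```

A perfect classifier's curve passes through (0, 1); the threshold achieving the best trade-off can be read off these points.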

Title:

LINGUISTIC DESCRIPTION OF PATTERNS FROM MINED IMAGES

Author(s):

Hema Nair

Abstract: The objective of this paper is to propose an approach to describing patterns in remotely sensed images using fuzzy logic. The general form of a linguistically quantified proposition is “Q Y’s are F”, where Q is a fuzzy linguistic quantifier, Y is a class of objects and F is a summary that applies to that class. The truth of such a proposition can be determined for each object characterised by a tuple in the database. Fuzzy descriptions of linguistic summaries help to evaluate the degree to which a summary describes an object or pattern in the image. A genetic algorithm is used to obtain optimal solutions that describe all the objects or patterns in the database. Image mining is used to extract unusual patterns from multi-date satellite images of a geographic area.
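
The truth of a proposition such as "most Y's are F" is commonly computed, following Zadeh, by applying the quantifier's membership function to the proportion of objects satisfying F. A small sketch (the piecewise-linear "most" below is an illustrative choice, not taken from the paper):

```python
def truth_of_summary(memberships, quantifier):
    """Truth of 'Q Y's are F': apply the fuzzy quantifier Q to the relative
    sigma-count, i.e. the average membership of the objects in F."""
    r = sum(memberships) / len(memberships)
    return quantifier(r)

def most(r):
    """A common piecewise-linear membership function for the quantifier 'most'."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5
```

With object memberships [1.0, 1.0, 1.0, 0.5], the proportion is 0.875 and "most Y's are F" is fully true.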

Title:

ARTIFICIAL INTELLIGENCE REPRESENTATIONS OF MULTI-MODEL BASED CONTROLLERS

Author(s):

Manuel de la Sen , Asier Ibeas

Abstract: This paper develops a representation of multi-model based controllers using typical artificial intelligence structures, namely neural networks, genetic algorithms and fuzzy logic. The interpretation of multi-model controllers in an artificial intelligence framework allows each specific technique to be applied to the design of multi-model based controllers. A method for synthesizing multi-model based neural network controllers from already designed single-model based ones is presented, and some applications of genetic algorithms and fuzzy logic to multi-model controller design are proposed.

Title:

COMPREHENSIBLE CREDIT-SCORING KNOWLEDGE VISUALIZATION USING DECISION TABLES AND DIAGRAMS

Author(s):

Jan Vanthienen , Christophe Mues , Bart Baesens

Abstract: One of the key decision activities in financial institutions is to assess the credit-worthiness of an applicant for a loan, and thereupon decide whether or not to grant the loan. Many classification methods have been suggested in the credit-scoring literature to distinguish good payers from bad payers; neural networks in particular have received a lot of attention. However, a major drawback is their lack of transparency: while they can achieve high predictive accuracy, the reasoning behind their decisions is not readily available, which hinders their acceptance by practitioners. Therefore, in earlier work, we proposed a two-step process to open the neural network black box: (1) extract rules from the network; (2) visualize the rule set using an intuitive graphical representation. In this paper, we focus on the second step and further investigate two types of representations: decision tables and diagrams. The former are a well-known representation originally used as a programming technique. The latter are a generalization of decision trees taking the form of a rooted acyclic digraph instead of a tree, and have mainly been studied and applied by the hardware design community. We compare both representations in terms of their ability to compactly represent the decision knowledge extracted from two real-life credit-scoring data sets.
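
A decision table of the kind discussed maps every combination of condition entries to exactly one action, which is what makes an extracted rule set easy to check for completeness and consistency. A toy sketch with invented attributes and thresholds (not the rules extracted in the paper):

```python
def credit_decision(income, years_employed, has_default):
    """Extracted rules rendered as a decision table: each combination of
    condition entries (table column) maps to exactly one action."""
    table = {
        # (income >= 30000, employed >= 2 years, past default) -> decision
        (True,  True,  False): 'accept',
        (True,  False, False): 'accept',
        (False, True,  False): 'reject',
        (False, False, False): 'reject',
        (True,  True,  True):  'reject',
        (True,  False, True):  'reject',
        (False, True,  True):  'reject',
        (False, False, True):  'reject',
    }
    return table[(income >= 30000, years_employed >= 2, has_default)]
```

Because the eight columns are exhaustive and mutually exclusive, every applicant hits exactly one rule, a property that is much harder to verify on a raw rule list.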

Title:

A COMPARISON BETWEEN THE PROPORTIONAL KEEN APPROXIMATOR AND THE NEURAL NETWORKS LEARNING METHODS

Author(s):

Peyman Kabiri

Abstract: The Proportional Keen Approximation method is a young learning method that uses linear approximation to learn hypotheses. In this paper it is compared with a well-established learning method, artificial neural networks. The aim of the comparison is to learn about the strengths and weaknesses of these learning methods with regard to different properties of their learning processes. The comparison is made in two ways. In the first, the algorithms and the known behavioural models of the methods are analysed and then compared on that basis. In the second, a reference dataset containing some of the most problematic features for the learning process is selected, and the differences between the two learning methods on it are numerically analysed and compared.

Title:

AN AGENT-BASED KNOWLEDGE MANAGEMENT MODEL FOR ENABLING A STATISTICAL TESTING APPROACH TO DECISION SUPPORT E-COMMERCE

Author(s):

Faiz Al-Shrouf , Walter James

Abstract: This paper integrates decision support e-commerce applications and the knowledge management domain with software agent technology. First, we give a brief overview of decision support systems. Then we present our terminology for the decision support e-commerce model and its components, which comprise an e-commerce application, agent-based knowledge management components, and a statistical testing model. We give a scenario for a multi-bidding e-commerce application and formulate a statistical testing model (a likelihood ratio test) based on a bivariate normal distribution. This test model uses the power function to simulate results using four main agents: an information-searching agent, a computing agent, a knowledge agent, and a decision support agent.

Title:

CONSOLIDATED TREE CONSTRUCTION ALGORITHM: STRUCTURALLY STEADY TREES

Author(s):

Olatz Arbelaitz Gallego , Jesús Maria Pérez de la Fuente , Javier Muguerza Rivero , Ibai Gurrutxaga Goikoetxea

Abstract: This paper presents a new methodology for building decision (classification) trees, the Consolidated Trees Construction algorithm, which addresses the instability that appears in this paradigm when small variations in the training set occur. As a consequence, the comprehensibility of the classification is not lost, which distinguishes this technique from techniques such as bagging and boosting, where the explanatory capacity of the classification disappears. The presented methodology consists of a new meta-algorithm for building structurally steadier and less complex trees (consolidated trees), so that they maintain their explanatory capacity and are faster, without losing discriminating capacity. The meta-algorithm uses C4.5 as the base classifier. Besides the meta-algorithm, we propose a measure of structural diversity used to analyse the stability of the structural component; it gives an estimate of the heterogeneity of a set of trees from the structural point of view. The results obtained have been compared with those of C4.5 on several UCI Repository databases and on a real customer-fidelisation application from a company of electrical appliances.

Title:

PROMAIS: A MULTI-AGENT MODEL FOR PRODUCTION INFORMATION SYSTEMS

Author(s):

Khaled Ghédira , Lobna Hsairi , Faiez Gargouri

Abstract: In the age of information proliferation and communication advances, Cooperative Information System (CIS) technology has become a vital factor in production system design for every modern enterprise. Indeed, current production systems must adapt to new strategic, economic and organizational structures in order to face new challenges. Consequently, intelligent software based on agent technology has emerged to improve system design on the one hand, and to increase production profitability and the enterprise's competitive position on the other. This paper starts with an analytical description of the logical and physical flows involved in manufacturing, then proposes a Production Multi-Agent Information System (ProMAIS). ProMAIS is a collection of stationary, intelligent agent-agencies with specialized expertise, interacting to carry out shared objectives: cost-effective production within promised deadlines, and adaptability to change. To bring out ProMAIS's dynamic aspect, particular attention is paid to its interaction protocols: cooperation, negotiation and Contract Net protocols.

Title:

COGNITIVE REASONING IN INTELLIGENT MEDICAL INFORMATION SYSTEMS

Author(s):

Marek Ogiela

Abstract: This paper presents a new approach to cognitive reasoning in the field of artificial intelligence, used in medical information systems. Such systems are applied in various tasks supporting decisions taken across the wide area of medical imaging. These systems, in particular decision support systems (DSS), can be based on methods of perceptual cognitive analysis of visual medical data, and are directed at offering possibilities of automatic interpretation and semantic understanding of this type of data. The paper presents a general method for applying DSS in selected cases of CR and MRI image interpretation, showing the development of disease processes.

Title:

A HYBRID DECISION SUPPORT TOOL

Author(s):

Panayotis Pintelas , Sotiris Kotsiantis

Abstract: In decision support systems a classification problem can be solved by employing one of several methods, such as different types of artificial neural networks, decision trees, Bayesian classifiers, etc. Moreover, it may happen that certain parts of the instance space are better predicted by one method than by the others. Thus, deciding which particular method to choose is a complicated problem. A good alternative to choosing only one method is to create a hybrid forecasting system incorporating a number of possible solution methods as components (an ensemble of classifiers). For this purpose, we have implemented a hybrid decision support system that combines a neural network, a decision tree and a Bayesian algorithm using a stacking variant methodology. The presented system can be trained with any data, but the current implementation is mainly used by tutors of the Hellenic Open University to identify drop-out-prone students. A comparison with other ensembles using the same classifiers as base learners on several standard benchmark data sets showed that this tool gives better accuracy in most cases.

Title:

PROBLEMS RESOLUTION IN MATHNET SYSTEM

Author(s):

Sofiane Labidi , Hélder Borges

Abstract: We propose and develop a problem-resolution component for the MATHNET project environment. The project results from the integration of the Computer Assisted Learning and Cooperative Learning paradigms, and implements a computer model for an interactive environment of cooperative teaching and learning based on multiple artificial and human agents, placed on a computer network structure and making use of several multimedia resources. The MATHNET nucleus is made of small software packages that effectively implement the Cooperative Learning paradigm. Its full integration with the computer and the use of multimedia and network technologies offer new opportunities in the educational field, challenging traditional pedagogical methods to the benefit of the learning process. In this paper, we present the structure and architecture of a Problem Resolution Assistant with the following objectives: (a) to give the student fixation or evaluation problems to solve, according to the apprentice's profile; (b) to help the learner, when necessary, during the problem's resolution; and (c) to give an opinion on a solution found by an apprentice after analysing it.

Title:

STRUCTURAL INERTIA OF VOTING SYSTEMS

Author(s):

Francesc Carreras

Abstract: Simple games reflect, with more or less fidelity, the strategic tensions inherent in voting systems. An interesting feature of these systems is their capability to act, i.e. their decisiveness. In this work we introduce a normalized measure of the inertia of any simple game from a strictly structural, or normative, viewpoint. Mathematical properties of this measure are presented, including axiomatic characterizations. Its application to a comparative study of certain actual voting systems reveals striking differences in the degrees of inertia they show.
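
The paper's own inertia measure is not reproduced here, but the underlying notion of decisiveness can be illustrated with Coleman's classical "power of a collectivity to act": the fraction of all coalitions that are winning in a weighted voting game. A sketch:

```python
from itertools import combinations

def decisiveness(weights, quota):
    """Coleman's 'power of a collectivity to act': the fraction of all 2^n
    coalitions that are winning in the weighted game [quota; weights]."""
    n = len(weights)
    winning = sum(1
                  for r in range(n + 1)
                  for coal in combinations(range(n), r)
                  if sum(weights[i] for i in coal) >= quota)
    return winning / 2 ** n
```

Under this reading, a high quota makes few coalitions winning, i.e. the system shows high inertia: simple majority among three equal voters gives 0.5, unanimity only 0.125.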

Title:

ANALYSIS OF THE ITERATED PROBABILISTIC WEIGHTED K NEAREST NEIGHBOR METHOD, A NEW DISTANCE-BASED ALGORITHM

Author(s):

José María Martínez-Otzeta

Abstract: The k-Nearest Neighbor (k-NN) classification method assigns to an unclassified point the class of the nearest of a set of previously classified points. A problem that arises when applying this technique is that each labeled sample is given equal importance in deciding the class membership of the pattern to be classified, regardless of the typicalness of each neighbor. We report on the application of a new hybrid version, the Iterated Probabilistic Weighted k Nearest Neighbor algorithm (IPW-k-NN), which classifies new cases based on the probability distribution each case has of belonging to each class. These probabilities are computed for each case in the training database according to its k nearest neighbors in that database; this is a new way to measure the typicalness of a given case with regard to every class. Experiments have been carried out on well-known databases from the UCI Machine Learning Repository, using 10-fold cross-validation to validate the results obtained on each of them. Three different distances (Euclidean, Canberra and Chebyshev) are used in the comparison.
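
The core idea, class-membership probabilities estimated from the k nearest neighbours, can be sketched as follows (Euclidean distance only; this is the building block, not the authors' full iterated IPW-k-NN procedure):

```python
import math
from collections import Counter

def knn_class_probs(train, x, k=3):
    """Estimate P(class | x) from the k nearest labelled points.

    train: list of (point, label) pairs; distances are Euclidean.
    """
    neighbours = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    counts = Counter(label for _, label in neighbours)
    return {cls: n / k for cls, n in counts.items()}
```

In IPW-k-NN, such distributions computed for every training case serve as its typicalness profile and are then used when classifying new cases.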

Title:

APPLICATION OF NEURAL NETWORKS FOR PRIOR APPRAISAL OF STRUCTURAL FUNDS PROJECT PROPOSALS

Author(s):

Tadeusz A. Grzeszczyk

Abstract: The present paper discusses a conception for the use of artificial intelligence methods (neural networks) in the prior appraisal of project proposals submitted by Polish enterprises to the European Union in order to obtain financial assistance for investments from the EU structural funds and the state budget. The experiments are limited to prior appraisal of the submitted projects only, as their practical execution could begin no earlier than 1 May 2004 (the enlargement of the European Union). The author discusses a method for appraising project proposals submitted by enterprises, related to the review and acceptance of expenditures for investments co-financed by the European Regional Development Fund, and formulates a conception for implementing appraisal principles that could be considered an element of the review and acceptance of expenditures according to Commission Regulation 1685/2000.

Title:

OPTIMIZATION OF NEURAL NETWORK’S TRAINING SETS VIA CLUSTERING: APPLICATION IN SOLAR COLLECTOR REPRESENTATION

Author(s):

João Paulo Domingos Silva , Daniel Alencar Soares , Antônia Sônia Cardoso Diniz , Elizabeth Marques Duarte Pereira , Luis Enrique Zárate Gálvez , Renato Vimieiro

Abstract: Due to the need for new ways of producing energy, solar collector systems have been widely used around the world. The efficiency of this kind of system is calculated through the measurement of process parameters. Mathematical models exist that represent these systems; however, they involve several parameters and may lead to nonlinear equations of the process. Artificial neural networks (ANN) are proposed in this work as an alternative to these models. A good model of the process by means of an ANN, however, depends on a representative training set. In order to better define the training set, the clustering technique k-means is used in this work.
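
The k-means step used to shape the training set can be sketched as follows (a plain textbook k-means with a simple deterministic initialization, not the authors' implementation):

```python
def kmeans(points, k, iters=20):
    """Plain k-means over tuples (assumes k >= 2); each resulting cluster can
    contribute a representative sample to the ANN training set."""
    # simple deterministic initialization: k points spread over the input list
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        for j, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[j] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, clusters
```

Picking one sample per cluster (e.g. the point closest to each centroid) yields a compact training set that still covers the operating range of the collector.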

Title:

ONTOLOGY-BASED FRAMEWORK FOR DOCUMENT INDEXING

Author(s):

Youssef Amghar , D. Bahloul , P. Maret

Abstract: The work presented in this paper addresses a project for the computer centre CIRTIL. This company wants to preserve and capitalize on its knowledge and know-how concerning production activities, in particular the technical hitches relating to software applications encountered during their exploitation. Indeed, with a well-organized, accessible document base, actors will be able to solve problems better. Our purpose is an ontology-based framework for indexing relevant documents. The domain ontology (OntoCIRTIL) has a structure which supports a semantic model based on semantic links and inference mechanisms. In this paper, we present a new model called S3 which permits knowledge to be modelled upstream and documents (or formalized knowledge) to be indexed downstream. To illustrate partial results, this model is then applied to OntoCIRTIL.

Title:

AN INTELLIGENT TUTORING SYSTEM FOR DATABASE TRANSACTION PROCESSING

Author(s):

Paul Douglas

Abstract: We describe an intelligent tutoring system that may be used to assist university-level students to learn key aspects of database transaction processing. The tutorial aid is based on a well defined theory of learning, and is implemented using PROLOG and Java. Some results of the evaluation of the learning tool are presented to demonstrate its effectiveness as a tutorial aid in an e-learning environment.

Title:

A DISTRIBUTED TRANSIENT INTER-PRODUCTION SCHEDULING FOR FLEXIBLE MANUFACTURING SYSTEMS

Author(s):

Pascal Yim , Olfa Belkahla , Khaled Ghedira , Ouajdi Korbaa

Abstract: This paper deals with the problem of cyclic scheduling for Flexible Manufacturing Systems (FMS) and presents a new multi-agent model, composed of cooperating agents, for computing the transient states between successive cyclic productions (called transient inter-productions). It aims to minimize the global makespan while reducing temporal complexity. The originality of the model lies in its combined use of artificial intelligence techniques, multi-agent systems and production management. In the cyclic context, the planning phase determines the cyclic productions that meet the initial demand; these cyclic productions then have to be sequenced with respect to one another. Once this is done, the transient state that leads from one cyclic state to the next has to be determined and optimized.

Title:

AN XML-BASED BOOTSTRAPPING METHOD FOR PATTERN ACQUISITION

Author(s):

Zeng Xingjie , Li Fang , Zhang Dongmo

Abstract: Extensible Markup Language (XML) has been widely used as a middleware format because of its flexibility. The fixed domain is one of the bottlenecks of Information Extraction (IE) technologies. In this paper we present an XML-based, domain-adaptable bootstrapping method for pattern acquisition which focuses on minimizing the cost of domain migration. The approach starts from a seed corpus with some seed patterns, extends the corpus through the Internet, and acquires new patterns from the extended corpus. Positive and negative examples classified from the training corpus are used to evaluate the acquired patterns. The results show that our method is a practical approach to pattern acquisition.

Title:

DYNAMIC MULTI-AGENT BASED VARIETY FORMATION AND STEERING IN MASS CUSTOMIZATION

Author(s):

Nizar Abdelkafi , Gerhard Friedrich , Gerold Kreutler , Thorsten Blecker

Abstract: Large product variety in mass customization involves a high internal complexity level inside a company’s operations, as well as a high external complexity level from a customer’s perspective. To cope with both complexity problems, an information system based on agent technology can be identified as a suitable solution approach. The mass-customized products are assumed to be based on a modular architecture, and each module variant is associated with an autonomous rational agent. Agents have to compete with each other in order to join coalitions representing salable product variants which suit real customers’ requirements. The negotiation process is based on a market mechanism supported by the target costing concept and a Dutch auction. Furthermore, in order to integrate the multi-agent system into the existing information system landscape of the mass customizer, a technical architecture is proposed and a scenario depicting the main communication steps is specified.

Title:

USING MAS TO SOLVE PRODUCER CUSTOMER TRANSPORT PROBLEMS

Author(s):

Baltazar Frankovic , Tung Dang

Abstract: This paper deals with the problem of using multi-agent technology to simulate and resolve planning problems. Concretely, multi-agent systems (MAS) are used to study and resolve optimization problems within the Producer-Customer-Transport (PCT) domain.

Title:

IMAGE CLASSIFICATION ACCORDING TO THE DOMINANT COLOUR

Author(s):

Amine Aït Younes , Isis Truck , Herman Akdag , Yannick Remion

Abstract: The aim of this work is to develop user-friendly software that allows users to classify images according to their dominant colour, expressed through linguistic expressions. With this aim in view, images are processed and stored in a database. The processing consists in assigning a profile to each image: we consider the pixels of the image in the HLS colorimetric space and build a restricted number of colour classes, which depend on the hue (H). For each colour class, a certain number of subclasses depending on the lightness (L) and the saturation (S) are defined. Finally, the profile is drawn from the membership of the pixels in the classes and subclasses. Thus, starting from a linguistic expression of a colour, the user can extract images from the database.
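
The hue-based colour classes can be illustrated with the standard RGB-to-HLS conversion; the six class names and the saturation cut-off below are arbitrary illustrative choices, not the classes defined in the paper:

```python
import colorsys

def dominant_hue_class(pixels):
    """Build a coarse hue profile of an image (pixels as RGB floats in [0, 1])
    and return the most frequent hue class."""
    names = ['red', 'yellow', 'green', 'cyan', 'blue', 'magenta']  # illustrative
    counts = {}
    for r, g, b in pixels:
        h, l, s = colorsys.rgb_to_hls(r, g, b)
        if s < 0.1:                        # near-grey pixels carry no usable hue
            continue
        cls = names[int(h * 6 + 0.5) % 6]  # nearest of six hue sectors
        counts[cls] = counts.get(cls, 0) + 1
    return max(counts, key=counts.get) if counts else 'grey'
```

The full profile (counts per class and subclass) rather than just the winner is what the paper stores, so that fuzzy linguistic queries can be matched against it.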

Title:

AN IMPLEMENTATION ENVIRONMENT OF KNOWLEDGE DISCOVERY SYSTEMS

Author(s):

Maria Dias , Roberto Pacheco

Abstract: Once an organization has solved its operational problems, the need arises for systems that support decision making. Data mining is an area that is growing quickly to meet such needs. However, data mining techniques are seldom used because of the difficulties normally found in developing knowledge discovery systems. This paper presents an environment for knowledge discovery in databases, called ADesC. Its main objective is to generate information relevant to decision making through the application of data mining techniques. The environment is based on agent technology to facilitate the performance of its tasks.

Title:

MULTI-AGENT APPROACH BASED ON TABU SEARCH FOR THE FLEXIBLE JOB SHOP SCHEDULING PROBLEM

Author(s):

Meriem Ennigrou , Khaled Ghédira

Abstract: This paper proposes a multi-agent approach based on a tabu search method for solving the flexible job shop scheduling problem. The characteristic of this problem is that an operation can be processed by one of several machines, so that its processing time depends on the machine used. This generalization of the classical problem makes it considerably more difficult to solve. The objective is to minimize the makespan, i.e. the total duration of the schedule. The proposed model is composed of three classes of agents: Job agents and Resource agents, which are responsible for the satisfaction of the constraints under their jurisdiction, and an Interface agent containing the tabu search core. Experiments have been performed on several benchmarks and the results are presented.
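
A generic tabu search core of the kind the Interface agent would host can be sketched over a toy permutation problem (swap-move neighbourhood, fixed tenure, aspiration on the best cost; illustrative only, not the paper's scheduler):

```python
from itertools import combinations

def tabu_search(cost, start, iters=30, tenure=5):
    """Minimize cost(sequence) with tabu search over pairwise swap moves."""
    current = list(start)
    best = current[:]
    tabu = {}  # move -> last iteration at which it is still forbidden
    for it in range(iters):
        candidates = []
        for move in combinations(range(len(current)), 2):
            i, j = move
            neigh = current[:]
            neigh[i], neigh[j] = neigh[j], neigh[i]
            c = cost(neigh)
            # a tabu move is only allowed by aspiration (it beats the best)
            if tabu.get(move, -1) < it or c < cost(best):
                candidates.append((c, neigh, move))
        c, current, move = min(candidates, key=lambda t: t[0])
        tabu[move] = it + tenure          # forbid undoing this move for a while
        if c < cost(best):
            best = current[:]
    return best, cost(best)
```

Accepting the best non-tabu neighbour even when it is worse than the current solution is what lets the search escape local optima.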

Title:

MONTHLY FLOW ESTIMATION USING ELMAN NEURAL NETWORKS

Author(s):

Luiz Biondi Neto , João  Soares de Mello , Maria Fernandes Velloso , Lidia Angulo Meza , Pedro Gouvêa Coelho

Abstract: This paper investigates the application of partially recurrent artificial neural networks (ANN) to flow estimation for the São Francisco River, which feeds the hydroelectric power plant of Sobradinho. An Elman neural network was used, suitably arranged to receive samples of the available flow time series for the São Francisco River shifted by one month; the network input had a delay loop including several sets of inputs separated into five-year periods, monthly shifted. The network had three hidden layers, with a feedback connection between the output and the input of the first hidden layer that gives the network the temporal capabilities useful in tracking time variations. The data used concern the measured São Francisco river flow time series from 1931 to 1996, a total of 65 years, of which 60 were used for training and 5 for testing. The results obtained indicate that the Elman neural network is suitable for estimating the monthly river flow over five-year periods. The average estimation error was less than 0.2%.
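
The defining feature of an Elman network, the hidden state fed back as a "context" input at the next time step, can be shown in a single forward step (a one-neuron toy, not the paper's three-hidden-layer architecture):

```python
import math

def elman_step(x, h_prev, W_in, W_ctx, W_out):
    """One forward step of an Elman network: the previous hidden state h_prev
    acts as a 'context' input, giving the network short-term memory."""
    h = [math.tanh(sum(wi * xi for wi, xi in zip(row_in, x)) +
                   sum(wc * hc for wc, hc in zip(row_ctx, h_prev)))
         for row_in, row_ctx in zip(W_in, W_ctx)]
    y = sum(wo * hi for wo, hi in zip(W_out, h))   # linear output unit
    return y, h
```

Feeding the returned hidden state back in at the next call makes the output depend on the history of inputs, which is exactly what monthly flow tracking exploits.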

Title:

ISYDS - INTEGRATED SYSTEM FOR DECISION SUPPORT

Author(s):

Pedro Gouvêa Coelho , Eliane Gonçalves Gomes , João  Soares de Mello , Lidia Angulo Meza , Luiz Biondi Neto

Abstract: Data Envelopment Analysis (DEA) is based on linear programming problems (LPPs) used to determine the efficiency of Decision Making Units (DMUs). This process can be computationally intense, as an LPP has to be run for each unit. Besides, a typical DEA LPP has a large number of redundant constraints concerning the inefficient DMUs, which results in degenerate LPPs and, in some cases, multiple efficient solutions. The work developed here intends to fill a gap in current DEA software, i.e. the lack of a package capable of producing full results for the classic DEA models as well as of handling more advanced DEA models. The software interface, the models and the solution algorithms were implemented in Delphi. Both basic and advanced DEA models are supported. Besides the main module containing the DEA models, there is an additional module with models for decision support, such as the multicriteria Analytic Hierarchy Process (AHP). The software was named ISYDS – Integrated System for Decision Support; it has been used in several theoretical and applied papers and has proved very useful.

Title:

FACE PATTERN DETECTION

Author(s):

Adriano Moutinho , Antonio Carlos Thome , Luiz Biondi Neto , Pedro Henrique Golvea Coelho

Abstract: Security systems based on face recognition often have to deal with the problem of finding and segmenting the region of the face, containing the nose, mouth and eyes, from the rest of the objects in the image. Finding the right position of a face is a part of any automatic identity recognition system and is, by itself, a very complex problem, normally handled separately. This paper describes an approach, using artificial neural networks (ANN), to find the correct position of the face and separate it from the background. To accomplish this goal, a windowing method was created and combined with several image pre-processing steps, from histogram equalization to illumination correction, in an attempt to improve the neural network's recognition capability. The paper also proposes methods to segment facial features such as the mouth, nose and eyes. Finally, the system is tested on 400 images and the performance of face and facial-feature segmentation is presented.

Title:

RESULT COMPARISON OF TWO ROUGH SET BASED DISCRETIZATION ALGORITHMS

Author(s):

Shanchan Wu , Wenyuan Wang

Abstract: The area of knowledge discovery and data mining is growing rapidly, and a large number of methods are employed to mine knowledge. Many of these methods rely on discrete data. However, most of the datasets used in real applications have attributes with continuous values. To make data mining techniques useful for such datasets, discretization is performed as a preprocessing step. In this paper, we discuss rough set based discretization. We conduct experiments to compare the quality of local and global discretization based on rough sets. Our experiments show that both are dataset-sensitive: neither is always better than the other, though in some cases global discretization generates far better results than local discretization.
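
Rough set based discretizers search for cut points on continuous attributes; as a stand-in for the flavour of global discretization (cuts chosen once over the whole attribute, not per subtable), here is a simple equal-frequency cut finder (illustrative, not the rough set algorithms compared in the paper):

```python
def equal_frequency_cuts(values, n_bins):
    """Global discretization stand-in: choose cut points once over the whole
    attribute so that each bin holds roughly the same number of values."""
    v = sorted(values)
    return [v[len(v) * i // n_bins] for i in range(1, n_bins)]

def discretize(x, cuts):
    """Bin index of x: the number of cut points it meets or exceeds."""
    return sum(x >= c for c in cuts)
```

A local discretizer would instead recompute cuts on each subset of objects considered during the search, which is why the two strategies can behave so differently across datasets.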

Title:

MANAGING ENGINEERING ASSETS: A KNOWLEDGE BASED ASSET MANAGEMENT METHODOLOGY THROUGH INFORMATION QUALITY

Author(s):

Abrar Haider

Abstract: As manufacturing organizations are becoming technology intensive, asset management is becoming crucial for profitability and efficiency of the business. Ensuring asset reliability, maintenance and management is profoundly dependent on knowledge based decision support backed by quality information. Multiplicity of data acquisition systems and techniques, together with the operation of assets in often unsettled and variable environments, makes it difficult to obtain quality information that could be used to make informed choices. Asset maintenance and reliability are important activities that can considerably influence an organisation’s ability to compete. This paper discusses the importance of data and information quality within asset management by analysing the intricacies of data quality and information flow within asset management systems and processes; and proposes frameworks for information quality and a model for an information driven, knowledge based asset management.

Title:

A DSS FOR ASSESSING TECHNOLOGY ENVIRONMENTS

Author(s):

Giovanni Camponovo , Yves Pigneur , Samuel Bendahan

Abstract: Assessing the external environment is an important component of organizations' survival and success. Unfortunately, a huge amount of information must be collected and processed in order to obtain a thorough and comprehensive representation of the environment. A decision support system can be very useful in helping decision makers to organize and analyze this information efficiently and effectively. This paper outlines a conceptual proposition helping to design such a system by presenting an ontology of the relevant information elements (actors, issues and needs) and a set of tools to analyze them. This paper also illustrates a prototype version of one of these tools which supports the analysis of the actors and issues perspectives.

Title:

WEB USAGE MINING WITH TIME CONSTRAINED ASSOCIATION RULES

Author(s):

Jan Vanthienen , Johan Huysmans , Bart Baesens

Abstract: Association rules are typically used to describe what items are frequently bought together. One could also use them in web usage mining to describe the pages that are often visited together. In this paper, we propose an extension to association rules by the introduction of timing constraints. Subsequently, the introduced concepts are used in an experiment to pre-process logfiles for web usage mining. We also describe how the method could be useful for market basket analysis and give an overview of related research. The paper is concluded by some suggestions for future research.
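
The proposed timing constraint can be illustrated by counting, per session, the page pairs visited within a given window, and computing confidence over the sessions containing the antecedent (a simplified sketch, not the paper's exact definitions):

```python
from collections import Counter

def timed_rules(sessions, window, min_conf=0.5):
    """Rules 'A -> B within `window` seconds', mined from visit sessions.

    sessions: list of sessions, each a list of (page, timestamp) pairs.
    Confidence = sessions where B follows A within the window, divided by
    sessions containing A at all.
    """
    pair_count, page_count = Counter(), Counter()
    for session in sessions:
        pairs, pages = set(), set()
        for i, (a, ta) in enumerate(session):
            pages.add(a)
            for b, tb in session[i + 1:]:
                if a != b and 0 < tb - ta <= window:
                    pairs.add((a, b))
        page_count.update(pages)     # count each page once per session
        pair_count.update(pairs)     # count each timed pair once per session
    return {(a, b): pair_count[a, b] / page_count[a]
            for (a, b) in pair_count
            if pair_count[a, b] / page_count[a] >= min_conf}
```

The window keeps rules like "visitors who open the cart pay within a minute" while discarding co-occurrences spread over a whole session, which is the pre-processing effect exploited in the paper's log-file experiment.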

Title:

DOCTUS INTELLIGENT EXECUTIVE PORTAL FOR BUSINESS DECISIONS. USING HARD DATA AND SOFT KNOWLEDGE IN DOCTUS INTELLIGENT PORTAL

Author(s):

Zoltan Baracskai , Zoltan Nagy

Abstract: Business decision makers live in an avalanche of information, and the environment produces all kinds of surprises. Yet companies must survive, and that is what business decision makers struggle for. DoctuS, our knowledge-based system using case-based reasoning, is integrated in an intelligent portal which links the soft knowledge of experts with the help of a company-wide knowledge map and its "zoom-ins". Hard data can also be derived ("mined") from data warehouses or integrated information systems. The portal supports business decision makers in taking routine decisions easily, but also provides contact with the Knowledge Factory, where new knowledge is created; this can assist business decision makers in making original decisions.

Title:

MODEL P : AN APPROACH OF THE ADAPTABILITY

Author(s):

Claude Petit , Mathilde Billy , François-Xavier Magaud

Abstract: This paper summarizes a new approach to Case-Based Reasoning (CBR). Cases are not stored; the solution to the problem case is built like a puzzle, and the completed puzzle corresponds to the required solution. Each piece carries information and has an associative behaviour: a piece seeks the piece with which it can be associated, searching in width and in depth. This associative behaviour is governed by several mechanisms: an expert-system engine with binary rules, a multicriteria choice model based on ordinal outranking, and a search for close indices. A puzzle can thus exhibit a complex mode of reasoning, with each piece having a specific behaviour. The tool was tested on two decision-aid applications: identification of malaria facies and assistance in the specification of habitats. These applications confirmed the interest of this original framework; in particular, it brings an elegant solution to the adaptation phase of the CBR technique.

Title:

DATA MINING OF CRM KNOWLEDGE BASES FOR EFFECTIVE MARKET SEGMENTATION: A CONCEPTUAL FRAMEWORK

Author(s):

Jounghae Bang , Nikhilesh Dholakia , Lutz Hamel , Ruby Roy Dholakia

Abstract: This paper illustrates the linkages between CRM systems, data mining techniques, and the strategic notions of market segmentation and relationship marketing. Using the hypothetical example of a consumer bank, the data in a relationship based marketing environment are illustrated and guidelines for knowledge discovery, data management and strategic marketing are developed.

Title:

A QUALITATIVE MODEL OF THE INDEBTEDNESS FOR THE SPANISH AUTONOMOUS REGIONS

Author(s):

Juan Moreno García , Luis Jimenez Linares , José Jesús Castro Sanchez , Victor Raúl López , José Baños

Abstract: This work presents a fuzzy model of the indebtedness of the Spanish autonomous regions, obtained using approximate reasoning and induction methods. The ADRI algorithm is used to induce a linguistic model composed of a set of fuzzy rules. The quality of this linguistic model is checked and its interpretation is shown.

Title:

BDI AGENTS WITH FUZZY ASSOCIATIVE MEMORY FOR VESSEL BERTHING IN CONTAINER PORTS

Author(s):

Damminda Alahakoon , Parakrama Dissanayake , Prasanna Lokuge

Abstract: Vessel scheduling in container terminals is subject to various vague constraints and often relies on uncertain, dynamically changing data. Faster turnaround time of vessels in berths has a direct impact on terminal productivity. Since container terminals have a limited number of berths and resources for servicing vessels, the need for an intelligent system that dynamically adapts to the changing environment is apparent. BDI (Beliefs, Desires and Intentions) agents are proposed for this complex collaborative vessel-scheduling environment, assuring better management and control in the terminal. Because BDI agents must deal with many criteria and different goals under uncertain beliefs, we propose using a fuzzy associative memory in the planning process of the BDI architecture to facilitate better decision making throughout the process. In this paper we propose a hybrid BDI architecture with fuzzy associative memory for handling the uncertainty of vessel berthing in container terminals. The execution of plans in a collaborative multi-agent environment is strengthened by the introduction of fuzzy associative memory in BDI agents. Plans are constructed at different stages in order to achieve current desires, which allows agents to observe dynamic changes in the environment and reflect them in subsequent levels of planning.

Title:

DYNAMIC DIAGNOSIS OF ACTIVE SYSTEMS WITH FRAGMENTED OBSERVATIONS

Author(s):

Gianfranco Lamperti

Abstract: Diagnosis of discrete-event systems (DESs) is a complex and challenging task. Typical application domains include telecommunication networks, power networks, and digital-hardware networks. Recent blackouts in northern America and southern Europe offer evidence for the claim that automated diagnosis of large-scale DESs is a major requirement for the reliability of such critical systems. This paper is a small step in that direction. A technique for the dynamic diagnosis of active systems with uncertain observations is presented. The essential contribution of the method lies in its ability to cope with uncertainty while monitoring the systems, by generating diagnostic information at the occurrence of each newly received fragment of observation. Uncertainty stems, on the one hand, from the complexity and distribution of the systems, where noise may affect the communication channels between the system and the control rooms, and, on the other, from the multiplicity of such channels, which is bound to relax the absolute temporal ordering of the observable events generated by the system during operation. The solution of these diagnostic problems requires nonmonotonic reasoning, where estimates of the system state and the relevant candidate diagnoses may not survive the occurrence of new observation fragments.

Title:

AN EFFICIENT FRAMEWORK FOR ITERATIVE TIME-SERIES TREND MINING

Author(s):

Ken Barker , Ajumobi Udechukwu

Abstract: Trend analysis has applications in several domains, including stock market prediction, environmental trend analysis, and sales analysis. Temporal trend analysis is possible when the source data (either business or scientific) is collected with time stamps, or with time-related ordering. These time stamps (or orderings) are the core data points for time sequences, as they constitute time series or temporal data. Trends in these time series, when properly analyzed, lead to an understanding of the general behavior of the series, making it possible to understand dynamic behaviors found in the data more thoroughly. This analysis provides a foundation for discovering pattern associations within the time series through mining, and it is necessary for the more insightful analysis that can only be achieved by comparing different time series found in the source data. Previous work on mining temporal trends attempts to discover patterns efficiently by optimizing the discovery process in a single pass over the data. Recent experience with data mining clearly indicates that the process is inherently iterative, with no guarantee that the best results are achieved in the first pass. Current iterative proposals introduce expensive re-computation after tuning the algorithm to address shortcomings discovered in the first heavyweight pass over the data: the same heavyweight process is re-run on the data in the hope that new discoveries will be made on subsequent iterations. Unfortunately, this re-execution and re-processing of the data is expensive. In this work we present a framework in which all the frequent trends in the time series are computed in a single pass, eliminating expensive re-computations in subsequent iterations. We also demonstrate that trend associations within the time series, or with related time series, can be found.

Title:

AUTOMATED PRODUCT RECOMMENDATION BY EMPLOYING CASE-BASED REASONING AGENTS

Author(s):

Reda Alhajj , Ozgur Baykal , Faruk Polat

Abstract: This paper proposes a cooperation framework for multiple role-based case-based reasoning (CBR) agents to handle the product recommendation problem in e-commerce applications. Each agent has a different case structure with intersecting features, and the agents exploit all information related to the problem by cooperation, accomplished through the merging of distributed cases into cases that better represent the problem. The presented merge algorithm handles noisy distributed cases by negotiation on the difference values of the intersecting features. The role-based CBR agents merge the distributed cases using a global heuristic function that evaluates the relevance of merged cases. The heuristic function exploits the relevancy of each merged case from the viewpoint of each agent and the satisfied/unsatisfied problem constraints. The viewpoint of an agent is represented by the consistency value of the distributed components of merged cases and the agent's individual relevance values for the merged cases. Finally, the proposed framework has been tested on elective course recommendation.

Title:

ASSESSMENT OF SPILLAGE OF LARGE-SCALE HYDROPOWER PLANT UNDERTAKING SPINNING RESERVE

Author(s):

Maihuan Zhao , Qiang Huang , Chenguang Xu

Abstract: Since a large-scale hydropower plant must provide spinning reserve for the power system, a small amount of outflow water does not generate electricity. To increase water use efficiency, it is necessary to calculate the spillage caused by improper dispatch; the optimal operation of hydroelectric systems should therefore account for the spinning-reserve obligation. A method for calculating the spillage caused by improper dispatch at large-scale plants is discussed and applied to the Longyangxia hydropower plant for 2001. The spillage caused by improper dispatch is remarkable, and it could be saved by proper dispatch.

Title:

REDUCING REWORK IN THE DEVELOPMENT OF INFORMATION SYSTEMS THROUGH THE COMPONENTS OF DECISIONS

Author(s):

Bernadette Sharp , Andy Salter , Hanifa Shah

Abstract: The failure of information systems has been partially the result of incorrect or inefficient rework in the development of the systems. If greater transparency can be brought to the decision-making process, then the number of instances of incorrect or inefficient rework could be reduced. Transparency in the development process can be achieved by identifying and tracking the components of the decisions made during the development of the information system. This paper presents a theoretical framework for facilitating this tracking by comparing the components of the decisions in the development of the information system with those of an organisation, and by considering how the ‘needs’ of agents and the actions taken to fulfil those needs are related.

Title:

MINING SEQUENTIAL PATTERNS WITH REGULAR EXPRESSION CONSTRAINTS USING SEQUENTIAL PATTERN TREE

Author(s):

Mohamed Younis

Abstract: The significant growth of sequence database sizes in recent years increases the importance of developing new techniques for data organization and query processing. Discovering sequential patterns is an important problem in data mining with a host of application domains. For effectiveness and efficiency, constraints are essential in many sequential applications. In this paper, we give a brief review of different sequential pattern mining algorithms and then introduce a new algorithm (termed NewSPIRIT) for mining frequent sequential patterns that satisfy user-specified regular expression constraints. The general idea of our algorithm is to represent the regular expression constraints with a finite state automaton and to build a sequential pattern tree representing all data sequences that satisfy these constraints, scanning the database of sequences only once. Experimental results show that NewSPIRIT is much more efficient than existing algorithms.
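A minimal sketch of the core idea: represent the constraint as a deterministic finite automaton and filter sequences in a single scan of the database. The DFA below encodes the illustrative pattern `a b* c`; all names and data are assumptions, not the paper's NewSPIRIT implementation (which additionally builds a sequential pattern tree).

```python
def accepts(dfa, start, accepting, seq):
    """Run a sequence through a DFA given as {(state, item): next_state};
    reject as soon as a transition is missing."""
    state = start
    for item in seq:
        state = dfa.get((state, item))
        if state is None:
            return False
    return state in accepting

def frequent_constrained(db, dfa, start, accepting, min_count):
    """Single scan over the sequence database: keep DFA-accepted
    sequences whose count reaches the support threshold."""
    counts = {}
    for seq in db:
        if accepts(dfa, start, accepting, seq):
            key = tuple(seq)
            counts[key] = counts.get(key, 0) + 1
    return {s: c for s, c in counts.items() if c >= min_count}

# DFA for the illustrative constraint a b* c:
# state 0 --a--> 1, state 1 --b--> 1, state 1 --c--> 2 (accepting)
dfa = {(0, "a"): 1, (1, "b"): 1, (1, "c"): 2}
db = [["a", "b", "c"], ["a", "c"], ["a", "b", "c"], ["b", "c"]]
frequent = frequent_constrained(db, dfa, start=0, accepting={2}, min_count=2)
```

Here `["b", "c"]` is rejected by the constraint and `["a", "c"]` is accepted but falls below the support threshold, leaving only `("a", "b", "c")`.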

Title:

WAREHOUSING AND MINING OF HIGHER EDUCATION DATA: USING EXISTING DATA TO MANAGE QUALITY

Author(s):

Pieter Conradie , Liezl Van Dyk

Abstract: Data warehouses are constructed at higher education institutions (HEIs) using data from transactional systems such as the student information system (SIS), the learning management system (LMS), the learning content management system (LCMS) and certain enterprise resource planning (ERP) modules. The most common HEI data mining applications are directed toward customer relationship management (CRM) and quality management. When students are viewed as material in a manufacturing process, instead of as the customer, different meaningful correlations, patterns and trends can be discovered that would otherwise remain unexploited. As an example, statistical process control (SPC) is applied as a data mining tool to student result data. This may eliminate the need to gather student-customer feedback for quality control purposes.
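The SPC idea can be sketched with a simple Shewhart-style control chart: limits are estimated from an in-control reference sample of marks, and later results falling outside mean ± 3σ are flagged. The data and function names below are illustrative, not taken from the paper.

```python
def control_limits(reference):
    """Shewhart-style control limits: mean +/- 3 standard deviations,
    estimated from an in-control reference sample."""
    n = len(reference)
    mean = sum(reference) / n
    sigma = (sum((x - mean) ** 2 for x in reference) / n) ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(samples, reference):
    """Indices of new observations falling outside the control limits."""
    lo, hi = control_limits(reference)
    return [i for i, x in enumerate(samples) if x < lo or x > hi]

# Illustrative data: module marks for a stable cohort, then a new cohort.
reference_marks = [62, 64, 63, 65, 61, 63, 62, 64, 63, 64]
new_marks = [63, 64, 52, 65]
flagged = out_of_control(new_marks, reference_marks)
```

Estimating limits from a separate in-control sample matters: computing them from the new data itself would let an outlier inflate σ and mask its own detection.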

Title:

PREDICTING WEB REQUESTS EFFICIENTLY USING A PROBABILITY MODEL

Author(s):

Shanchan Wu , Wenyuan Wang

Abstract: As the world-wide web grows rapidly and users' browsing experiences need to be personalized, the problem of predicting a user's behavior on a web site has become important. In this paper, we present a probability model that utilizes path profiles of users from web logs to predict the user's future requests. Each of the user's probable next requests is given a conditional probability value, calculated according to the function we present. Our model can give several predictions ranked by probability instead of a single one, thus increasing its recommending ability. Based on a compact tree structure, our algorithm is efficient. Our results can potentially be applied to a wide range of web applications, including pre-sending, pre-fetching, enhancement of recommendation systems, and web caching policies. Experiments show that our model performs well.
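A stripped-down sketch of the idea: an order-1 approximation that estimates P(next page | current page) from logged paths and returns several predictions ranked by conditional probability. A real path-profile model would condition on longer prefixes stored in a compact tree; the data and names here are illustrative.

```python
from collections import defaultdict

def build_model(paths):
    """Count next-page frequencies conditioned on the current page."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for cur, nxt in zip(path, path[1:]):
            counts[cur][nxt] += 1
    return counts

def predict(counts, page, k=3):
    """Top-k next pages ranked by conditional probability P(next | page)."""
    nexts = counts.get(page)
    if not nexts:
        return []
    total = sum(nexts.values())
    ranked = sorted(nexts.items(), key=lambda kv: kv[1], reverse=True)
    return [(p, c / total) for p, c in ranked[:k]]

paths = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
model = build_model(paths)
predictions = predict(model, "B", k=2)
```

Returning a ranked list rather than a single page is what enables the "several predictions" behaviour the abstract describes: a pre-fetching policy can then act on any prediction above a probability cutoff.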

Title:

DATA MINING: PATTERN MINING AS A CLIQUE EXTRACTING TASK

Author(s):

Grete Lind , Rein Kuusik , Leo Võhandu

Abstract: One of the important tasks in solving data mining problems is finding frequent patterns in a given dataset. It underlies several tasks such as pattern mining, discovering association rules, and clustering. There are several algorithms to solve this problem. In this paper we describe our task and results: a method for reordering a data matrix to give it a more informative form, the problems of large datasets, and the (frequent) pattern finding task. Finally we show how to treat a data matrix as a graph, a pattern as a clique, and the pattern mining process as a clique extraction task. We also present a fast diclique extraction algorithm for pattern mining.
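The graph view can be sketched as follows: items become vertices, an edge joins two items that co-occur often enough, and candidate patterns are the maximal cliques (here enumerated with a basic Bron-Kerbosch recursion). The transactions and threshold are invented for illustration; this is not the authors' diclique algorithm.

```python
from itertools import combinations

def cooccurrence_graph(transactions, min_count):
    """Adjacency sets: edge between two items that co-occur in at
    least `min_count` transactions."""
    pair_counts = {}
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            pair_counts[pair] = pair_counts.get(pair, 0) + 1
    adj = {}
    for (a, b), c in pair_counts.items():
        if c >= min_count:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (candidate patterns)."""
    cliques = []
    def expand(r, p, x):
        if not p and not x:
            cliques.append(frozenset(r))
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    expand(set(), set(adj), set())
    return cliques

transactions = [["a", "b", "c"], ["a", "b", "c"], ["a", "b", "d"], ["c", "d"]]
graph = cooccurrence_graph(transactions, min_count=2)
patterns = maximal_cliques(graph)
```

With a support threshold of 2, only the pairs a-b, a-c and b-c survive, so the single maximal clique {a, b, c} emerges as the candidate pattern.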

Title:

MULTIPLE ORGAN FAILURE DIAGNOSIS USING ADVERSE EVENTS AND NEURAL NETWORKS

Author(s):

Paulo Cortez

Abstract: In the past years, the Clinical Data Mining arena has undergone remarkable development, and intelligent data analysis tools, such as Neural Networks, have been successfully applied in the design of medical systems. In this work, Neural Networks are applied to the prediction of organ dysfunction in Intensive Care Units. The novelty of this approach comes from the use of adverse events, which are triggered by four bedside alarms, achieving an overall predictive accuracy of 70%.

Title:

MINING SCIENTIFIC RESULTS THROUGH THE COMBINED USE OF CLUSTERING AND LINEAR PROGRAMMING TECHNIQUES

Author(s):

Sergio Greco , Andrea Tagarelli , Irina Trubitsyna

Abstract: The paper proposes a technique based on a combined approach of data mining algorithms and linear programming methods for classifying organizational units, such as research centers. We exploit clustering algorithms for grouping information concerning the scientific activity of research centers. We also show that replacing an expensive efficiency measurement, based on the solution of linear programs, with a simple formula makes it possible to compute clusters of very good quality efficiently. Initial experimental results, obtained from the analysis of research centers in the agro-food sector, show the effectiveness of our approach from both an efficiency and a quality-of-results viewpoint.

Title:

APPLICATION OF UNCERTAIN VARIABLES TO STABILITY ANALYSIS AND STABILIZATION FOR ABR ATM CONGESTION CONTROL SYSTEMS

Author(s):

Magdalena Turowska

Abstract: The paper presents the application of uncertain variables to stability analysis and stabilization of ABR ATM control systems. The unknown parameter is assumed to be a value of an uncertain variable described by a certainty distribution given by an expert. The estimation of the certainty index that the congestion control system is stable is presented, and a specific stabilization problem is considered.

Title:

HIERARCHICAL MODEL-BASED CLUSTERING FOR RELATIONAL DATA WITH AGGREGATES

Author(s):

Jianzhong Chen , Sally McClean , Mary Shapcott , Kenny Adamson

Abstract: Clustering is a widely used technique in data mining to discover patterns in the underlying data. Most traditional clustering methods handle datasets that have single flat formats. Recently, there has been growing interest in relational data mining, which deals with datasets containing multiple types of objects and richer relationships, presented in relational formats, e.g. relational databases with multiple tables. In this paper, we propose a hierarchical model-based method for clustering relational data by introducing frequency aggregates. We first define a relational data model that contains composite objects as an object-relational star schema, and present a method of integrating relational composite objects into flat aggregate objects through aggregation. In order to apply hierarchical model-based clustering to the data, we define a new type of aggregate, the frequency aggregate, which has a vector data type and can record not only the observed values but also the distribution of the values of a categorical attribute. A hierarchical agglomerative clustering algorithm with log-likelihood distance is then applied to cluster the aggregated data tentatively. After stopping at a coarse estimate of the number of clusters, a mixture model-based method with the EM algorithm performs a further relocation clustering, in which the Bayesian Information Criterion (BIC) is used to determine the optimal number of clusters. Finally, we evaluate our approach on a real-world dataset.

Title:

BUILDING PROVEN CAUSAL MODEL BASES FOR STRATEGIC DECISION SUPPORT

Author(s):

Christian Hillbrand

Abstract: Since many Decision Support Systems (DSS) in the area of causal strategy planning methods incorporate techniques to draw conclusions from an underlying model but fail to prove the implicitly assumed hypotheses within the latter, this paper focuses on the improvement of the model base quality. Therefore, this approach employs Artificial Neural Networks (ANNs) to infer the underlying causal functions from empirical time series. As a prerequisite for this, an automated proof of causality for nomothetic cause-and-effect hypotheses has to be developed.

Title:

A SEMI-AUTOMATIC BAYESIAN ALGORITHM FOR ONTOLOGY LEARNING

Author(s):

Mario Vento , Massimo De Santo , Francesco Colace , Pasquale Foggia

Abstract: The entire world is living through a transformation, perhaps the most important of the last thirty years. The dissemination of new information technologies is radically modifying the nature of the relationships between countries, markets, people and cultures. The technological revolution has favoured the process of globalization (the Internet represents the global village better than anything else) and the exchange of information. Today information can be considered an economic good whose value is closely connected to the knowledge it can give. The dynamism of the new society forces professionals to keep abreast of technical progress, so it is essential to introduce new didactic methodologies based on continuous, lifelong learning; a good solution can be e-learning. Although distance education environments provide trainees and instructors with a cooperative learning atmosphere, where students can share their experiences and teachers guide them in their learning, some problems must still be solved. One of the most important is the correct definition of the domain of knowledge (i.e. the ontology) related to the various courses. Teachers are often unable to formalize the reference ontology easily and correctly. On the other hand, if we want to realize an intelligent tutoring system that can help students and teachers during the learning process, the starting point is the ontology. In addition, the choice of the best contents and information for students is closely connected to the ontology. In this paper, we propose a method for learning ontologies used to model a domain in the field of intelligent e-learning systems. The method is based on the formalism of Bayesian networks for representing ontologies, as well as on a learning algorithm that obtains the corresponding probabilistic model from the results of the evaluation tests associated with the didactic contents under examination.
Finally, we present an experimental evaluation of the method using data from real courses.

Title:

BAYESIAN NETWORK STRUCTURAL LEARNING FROM DATA: AN ALGORITHMS COMPARISON

Author(s):

Francesco Colace , Pasquale Foggia , Mario Vento , Massimo De Santo

Abstract: The manual determination of Bayesian network structure, or more generally of probabilistic models, can be complex, time consuming and imprecise, particularly for domains of considerable size. In recent years, therefore, the interest of the scientific community in learning Bayesian network structure from data has increased considerably. Many techniques and disciplines, such as data mining, text categorization and ontology description, can take advantage of this type of process. In this paper we describe some possible approaches to the structural learning of Bayesian networks and introduce in detail some algorithms deriving from them. We aim to compare the results obtained using the main algorithms on databases normally used in the literature. To this end, we have selected and implemented the five algorithms most used in the literature. We evaluate the algorithms' performance considering both the topological reconstruction of the network and the correct orientation of the obtained arcs.

Title:

MINING THE RELATIONSHIPS IN THE FORM OF THE PREDISPOSING FACTORS AND CO-INCIDENT FACTORS AMONG NUMERICAL DYNAMIC ATTRIBUTES IN TIME SERIES DATA SET BY USING THE COMBINATION OF SOME EXISTING TECHNIQUES

Author(s):

Suwimon Kooptiwoot

Abstract: Temporal mining is a natural extension of data mining, with added capabilities of discovering interesting patterns, inferring relationships of contextual and temporal proximity, and possibly leading to cause-effect associations. Temporal mining covers a wide range of paradigms for knowledge modeling and discovery. A common practice is to discover frequent sequences and patterns of a single variable. In this paper we present a new algorithm combining several existing ideas: the reference event proposed in (Bettini, Wang et al. 1998), the event detection technique proposed in (Guralnik and Srivastava 1999), the large fraction proposed in (Mannila, Toivonen et al. 1997), and the causal inference proposed in (Blum 1982). We use all of these ideas to build a new algorithm for discovering multi-variable sequences in the form of predisposing and co-incident factors of a reference event of interest. We define an event as a positive or negative direction of data change above a threshold value. From these patterns we infer predisposing and co-incident factors with respect to a reference variable. For this purpose we study Open Source Software data collected from the SourceForge website. Out of more than 240 attributes we consider only thirteen time-dependent attributes: Page-views, Download, Bugs0, Bugs1, Support0, Support1, Patches0, Patches1, Tracker0, Tracker1, Tasks0, Tasks1 and CVS. These attributes indicate the degree and patterns of project activity through the course of their progress. The number of Downloads is a good indication of a project's progress, so we use Download as the reference attribute. We also test our algorithm with four synthetic data sets that include up to 50% noise. The results show that our algorithm works well and tolerates noisy data.
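The event definition used here (a change in a positive or negative direction above a threshold) can be sketched as follows, together with a naive notion of a co-incident factor as an attribute whose events coincide in direction with those of the reference attribute. The data and thresholds are invented, and this is only a simplified illustration of the paper's approach.

```python
def detect_events(series, threshold):
    """Label each step: +1 for a positive change above the threshold,
    -1 for a negative change below -threshold, 0 otherwise."""
    events = []
    for prev, cur in zip(series, series[1:]):
        delta = cur - prev
        events.append(1 if delta > threshold else -1 if delta < -threshold else 0)
    return events

def co_incident_steps(ref_events, other_events):
    """Steps where another attribute's event coincides in direction
    with a reference-attribute event."""
    return [i for i, (r, o) in enumerate(zip(ref_events, other_events))
            if r != 0 and r == o]

downloads = [100, 150, 155, 120, 180]       # reference attribute (Download)
pageviews = [1000, 1500, 1490, 1200, 1800]  # candidate co-incident attribute
ref = detect_events(downloads, threshold=30)
other = detect_events(pageviews, threshold=100)
shared = co_incident_steps(ref, other)
```

A predisposing factor would instead be detected by looking for the other attribute's events shortly *before* the reference events, which requires comparing shifted event sequences rather than aligned ones.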

Title:

MINING THE RELATIONSHIPS IN THE FORM OF PREDISPOSING FACTOR AND CO-INCIDENT FACTOR IN TIME SERIES DATA SET BY USING THE COMBINATION OF SOME EXISTING IDEAS WITH A NEW IDEA FROM THE FACT IN THE CHEMICAL REACTION

Author(s):

Suwimon Kooptiwoot

Abstract: In this work we propose new algorithms combining several existing ideas: the reference event proposed in (Bettini, Wang et al. 1998), the event detection technique proposed in (Guralnik and Srivastava 1999), the causal inference proposed in (Blum 1982), and a new idea based on the behaviour of catalysts in chemical reactions. We use all of these ideas to build algorithms that mine the predisposing and co-incident factors of a reference event of interest. We apply our algorithms to the OSS (Open Source Software) data set and show the results. We also test them with four synthetic data sets that include up to 50% noise. The results show that our algorithms work well and tolerate noisy data.

Title:

THE DEVELOPMENT OF A KNOWLEDGE SYSTEM FOR ISO 9001 QUALITY MANAGEMENT

Author(s):

Hsun-Cheng HU , Sheng-Tun Li , Li-Yen Shue

Abstract: Many researchers in knowledge management point out that the first step toward knowledge management is the management of documents. However, the complexity embedded in some documents presents great difficulty for most methodologies. The knowledge content for building an excellent quality management system that complies with ISO 9001 falls into this category; this knowledge is characterized by multi-dimensionality and knowledge embedment through various procedures and forms. We applied ontology, a new approach in AI for better representing the knowledge structure of a domain, to develop a knowledge-based ISO 9001 quality management system for a Taiwanese chemical company that has to refer to a total of 175 ISO manuals. The system is built with Protégé 2000 as the knowledge platform, and we follow the development process recommended by Ontology Engineering of Toronto Virtual Enterprise. One main feature of the system is its capability of understanding the semantics of documents, which is a vital part of the inference mechanism in answering users' queries.

Title:

AN EXPERIENCE WITH THE NEURAL NETWORK FOR AUTO-LANDING SYSTEM OF AN AIRCRAFT

Author(s):

Sreenatha Anavatti

Abstract: Generalization by Neural Networks is an added advantage that can provide very good robustness and disturbance rejection properties. Given a sufficient number of training samples (inputs and their corresponding outputs), a network can deal with inputs it has never seen before. This ability makes Neural Networks very interesting for control applications: not only can they learn complicated control functions, they are also able to respond to changing or unexpected environments. An aircraft landing system provides one such scenario, wherein flight conditions change quite dramatically over the path of descent. The present work discusses the training of a neural network to imitate a robust controller for auto-landing of an aircraft. Comparisons with the robust controller indicate the additional advantages of the neural network. The effects of disturbance and a sensitivity analysis are presented to highlight the generalization property of the neural network.

Title:

KNOWLEDGE MANAGEMENT AND ITS APPLICATION TO IMPROVE WORKFLOW

Author(s):

Tung Dang , Baltazar Frankovic

Abstract: This paper deals with one of the many problems associated with building and developing a platform, based on multi-agent technology, for assisting office employees in their organization: the problem of classifying and identifying the right contacts. In order to assist newly arrived employees, agents search the contacts used by previous employees and extract the one most appropriate for assisting the current activity. This paper presents methods for the classification and selection of contacts based on the CBR technique and the forward-search principle. The process of searching for contacts is guided by the user's personal criteria. Finally, the paper discusses some possible techniques for satisfying user requirements that cannot be met using traditional search methods.

Title:

APPLYING DATA MINING TO SOFTWARE DEVELOPMENT PROJECTS: A CASE STUDY

Author(s):

Jacinto Mata Vázquez

Abstract: One of the main challenges that project managers face during the building of a software development project (SDP) is to optimise the values of the parameters that measure the viability of the final process. The accomplishment of this task, which was not easy at the beginning, was aided by the appearance of dynamic models and simulation environments. The application of data mining techniques to the management of SDPs is not uncommon, as in any other productive process that generates information in the form of input data and output variables. In this paper, we present and analyze the results obtained from a tool, developed by the authors, based on a Knowledge Discovery in Databases (KDD) technique. One of the most important contributions of these techniques to the software engineering field is the possibility of improving the management process of an SDP. The purpose is to provide accurate decision rules that help the project manager make decisions during development.

Title:

AN ADAPTABLE TIME-DELAY NEURAL NETWORK FOR PREDICT THE SPANISH ECONOMIC INDEBTEDNESS

Author(s):

Waldo Fajardo Contreras , Manuel Pegalar Cuellar , Mª Carmen Pegalajar Jimenez , Mª Angustias Navarro Ruiz , Ramón Pérez Pérez

Abstract: In this paper, we study and predict the economic indebtedness of the Spanish autonomous regions using a neural network model. We assess the feasibility of the Time-Delay neural network as an alternative to classical forecasting models; this network can accumulate more past values and thus predict the future better. We report the MSE to verify the quality of the indebtedness forecasts.

Title:

A COMPARATIVE STUDY OF EVOLUTIONARY ALGORITHMS FOR TRAINING OF ELMAN RECURRENT NEURAL NETWORKS TO PREDICT THE AUTONOMOUS INDEBTNESS

Author(s):

M. Carmen Pegalajar , Manuel-Pegalajar Cuéllar

Abstract: In this paper we present a training model for Elman Recurrent Neural Networks based on Evolutionary Algorithms, applied to Spanish autonomous indebtedness prediction. The evolutionary algorithms applied are the classic Genetic Algorithm, the Multimodal Clearing algorithm and the CHC algorithm. We make a comparative study, training the network with each evolutionary algorithm, to assess the effectiveness of each training model in predicting Spanish autonomous indebtedness.

Title:

DEVELOPMENT OF EXPERT SYSTEM FOR DETECTING INCIPIENT FAULTS IN TRANSFORMER BY USING DISSOLVED GAS ANALYSIS

Author(s):

Nitin Keshao Dhote

Abstract: The power transformer is a vital component of the power system, with no substitute for its major role, and it is also quite expensive. It is therefore very important to closely monitor its in-service behavior to avoid costly outages and loss of production. Many devices have evolved to monitor the serviceability of power transformers, but devices such as the Buchholz relay or the differential relay respond only to a severe power failure requiring immediate removal of the transformer from service, in which case outages are inevitable. Preventive techniques for early detection of faults, avoiding outages, would thus be valuable. A prototype expert system based on the Dissolved Gas Analysis (DGA) technique is developed for diagnosing suspected transformer faults and recommending maintenance actions. A synthetic method is proposed to assist the popular gas ratio methods. The expert system is implemented on a PC using Turbo Prolog with rule-based knowledge representation. The designed expert system has been tested on transformer gas ratio records from N.T.P.C., Talcher (India) to show its effectiveness in transformer diagnosis.

Title:

PRACTICAL APPLICATION OF KDD TECHNIQUES TO AN INDUSTRIAL PROCESS

Author(s):

Victoria Pachón Álvarez

Abstract: In the process of smelting copper mineral, a large amount of sulphur dioxide (SO2) is produced. This compound would be highly pollutant if it were emitted into the atmosphere. By means of an acid plant it is possible to transform it into sulphuric acid through a set of chemical and physical processes. In this way we obtain a marketable product and, at the same time, the environment is protected. However, there are certain situations in which the gases escape into the atmosphere, creating pollution. This would be avoidable if we knew exactly under which circumstances the problem occurs. In this paper we present a practical application of KDD techniques to the chemical industry. The results obtained show the viability of using automatic classifiers to improve a production process, increasing production and decreasing environmental pollution.

Title:

DATABASE REDUCTION

Author(s):

Jesús S. Aguilar-Ruiz , Jose C. Riquelme , Roberto Ruiz Sánchez

Abstract: Progress in digital data acquisition and storage technology has resulted in the growth of huge databases holding great quantities of information. Nevertheless, data mining techniques often have a high computational cost, so it is advisable to apply a preprocessing phase to reduce time complexity. These preprocessing techniques are fundamentally oriented to one of two goals: horizontal reduction of the database, or feature selection; and vertical reduction, or editing. In this paper we present a new proposal to reduce databases by applying vertical and horizontal reduction techniques sequentially. They are based on our original works and use a projection concept as a method to choose representative examples and features. The results obtained are very satisfactory, because the reduced database offers the same knowledge with little added computational cost.

Title:

DATA MINING APPLICATION IN CLINICAL DATA OF PATIENTS WITH NEPHROLITHIASIS

Author(s):

Romero Paoliello , Paulo José Lage Alvarenga , Luis Enrique Zárate , Thiago Ribeiro

Abstract: Nephrolithiasis is a disease for which no clinical treatment that ensures a cure is yet known. In the adult population its incidence is estimated at around 5 to 12%, being somewhat lower in the pediatric group. Renal colic, caused by nephrolithiasis, is the main symptom of the disease in adults and is observed in 14% of pediatric patients. The symptoms in pediatric patients do not follow a pattern, which makes diagnosing the disease difficult. The main objective of this work is to discover the patterns of the disease symptoms and to identify the population prone to acquiring it. With this objective, the KDD methodology is applied to determine discriminant rules for the symptom patterns and, with them, to select the groups of patients with those sets of symptoms. The results and conclusions of the work are presented at the end of the article.

Title:

QUALITY CONTROL USING FUZZY RULE BASED CLASSIFICATION SYSTEMS

Author(s):

Kumar Ujjwal , Rajendra Sahu

Abstract: In recent years, Total Quality Management (TQM) has captured worldwide attention and is being adopted in many organizations, both profit and non-profit. The aim of this paper is to generate, from existing data, the rules that affect the quality of the product and to use them to construct a Fuzzy Inference System (FIS) for classifying products into the categories Good, Average and Poor. The rules incorporate all the important attributes that affect a particular product. This paper uses Fuzzy Inference Systems, which are widely used for process simulation and control. They can be designed either from expert knowledge or from data. For complex systems, an FIS based on expert knowledge only may suffer from a loss of accuracy, which is the main incentive for using fuzzy rules inferred from data. In the synthesis of a fuzzy system from data, two steps are generally employed: automatic rule generation and system optimization. This paper analyzes the grid partitioning approach to extracting rules from data and then focuses on how the rules can be optimized and used for classifying products on the basis of their quality.
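A minimal sketch of this kind of fuzzy inference classifier, with triangular membership functions and min/max rule evaluation. The attribute names (defect rate, dimensional deviation), fuzzy sets and rules are invented for illustration, not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify(defect_rate, deviation):
    """Tiny Mamdani-style classifier: two crisp inputs -> Good/Average/Poor.

    Attributes, fuzzy sets and rules are hypothetical, for illustration only.
    """
    # fuzzify the inputs
    low_d  = tri(defect_rate, -0.1, 0.0, 0.05)
    med_d  = tri(defect_rate, 0.0, 0.05, 0.15)
    high_d = tri(defect_rate, 0.05, 0.2, 1.0)
    small_e = tri(deviation, -1.0, 0.0, 1.0)
    large_e = tri(deviation, 0.5, 2.0, 4.0)

    # rule base: antecedent strength via min (AND), aggregation via max (OR)
    good    = min(low_d, small_e)
    average = max(med_d, min(low_d, large_e))
    poor    = max(high_d, min(med_d, large_e))

    scores = {"Good": good, "Average": average, "Poor": poor}
    return max(scores, key=scores.get)
```

Rule generation from data (the paper's grid partitioning step) would fix the breakpoints of each `tri` from the data distribution rather than by hand.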

Title:

OBJECTMINER: A NEW APPROACH FOR MINING COMPLEX OBJECTS

Author(s):

Rafael Berlanga , Roxana Danger , José Ruíz-Shulcloper

Abstract: Since their introduction in 1993, association rules have been successfully applied to the description and summarization of discovered relations between attributes in large collections of objects. However, most research in this area has focused on mining simple objects, usually represented as sets of binary variables. This work presents a framework for mining complex objects, whose attributes can be of any data type (single- and multi-valued). The mining process is guided by the semantics associated with each object feature, which users state by providing both a comparison criterion and a similarity function over the object subdescriptions. Experimental results show the usefulness of the proposal.

Title:

INFORMATION ACCESS VIA TOPIC HIERARCHIES AND THEMATIC ANNOTATIONS FROM DOCUMENT COLLECTIONS

Author(s):

Hermine Njike Fotzo

Abstract: With the development and availability of large textual corpora, there is a need to enrich and organize these corpora so as to make search and navigation among the documents easier. Semantic Web research focuses on augmenting ordinary Web pages with semantics. Indeed, although a wealth of information exists today in electronic form, it cannot be easily processed by computers due to the lack of external semantics. Furthermore, adding semantics helps users locate and process information and compare document contents. So far, Semantic Web research has focused on standardization, the internal structuring of pages, and the sharing of ontologies in a variety of domains. Concerning external structuring, the hypertext and information retrieval communities propose indicating relations between documents via hyperlinks or by organizing documents into concept hierarchies, both manually developed. We consider here the problem of automatically structuring and organizing corpora in a way that reflects semantic relations between documents. We propose an algorithm for automatically inferring concept hierarchies from a corpus. We then show how this method may be used to create specialization/generalization links between documents, leading to document hierarchies. As a byproduct, documents are annotated with keywords giving the main concepts they contain. We also introduce numerical criteria for measuring the relevance of the automatically generated hierarchies and describe experiments performed on data from the LookSmart and New Scientist web sites.
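One common way to infer specialization/generalization links from a corpus is the co-occurrence subsumption heuristic (in the style of Sanderson and Croft): a term x is taken as broader than y when x appears in nearly every document containing y, but not vice versa. The abstract does not give the authors' exact algorithm, so this is a generic sketch:

```python
def subsumption_pairs(docs, threshold=0.8):
    """Infer (parent, child) concept links from term co-occurrence.

    docs: list of sets of terms. A term x subsumes y when P(x|y) >= threshold
    while P(y|x) < 1, i.e. x co-occurs with (almost) all of y's documents but
    is more general. Generic heuristic, not the paper's exact method.
    """
    terms = set().union(*docs)
    df = {t: sum(1 for d in docs if t in d) for t in terms}  # document freq.
    pairs = []
    for x in terms:
        for y in terms:
            if x == y:
                continue
            both = sum(1 for d in docs if x in d and y in d)
            # P(x|y) high and P(y|x) strictly lower -> x is broader than y
            if df[y] and both / df[y] >= threshold and both / df[x] < 1.0:
                pairs.append((x, y))
    return pairs
```

Chaining the pairs (parents of parents) yields the concept hierarchy; the document keywords mentioned in the abstract would be the terms placed in that hierarchy.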

Title:

LEARNING BAYESIAN NETWORKS WITH LARGEST CHAIN GRAPHS

Author(s):

Mohamed BENDOU , Paul MUNTEANU

Abstract: This paper proposes a new approach for designing Bayesian network learning algorithms that explore the space of structure equivalence classes. Its main originality consists in representing equivalence classes by largest chain graphs, instead of the essential graphs generally used for this task. We show that this approach drastically simplifies the formulation of the algorithms and has beneficial effects on their execution time.

Title:

MODEL-BASED COLLABORATIVE FILTERING FOR TEAM BUILDING SUPPORT

Author(s):

Alípio Jorge , Miguel Veloso , Paulo Azevedo

Abstract: In this paper we describe an application of recommender systems to team building in a company or organization. The recommender system uses a model-based collaborative filtering approach. The recommendation models are sets of association rules extracted from the activity log of employees assigned to projects or tasks. Recommendation is performed at two levels: first by recommending a single team element given a partially built team, and second by recommending changes to a complete team. The methodology is applied to a case study with real data. The results are evaluated through experimental tests and a survey of users' perceptions.

Title:

NEW ENERGETIC SELECTION PRINCIPLE IN DIFFERENTIAL EVOLUTION

Author(s):

Vitaliy Feoktistov

Abstract: The Differential Evolution (DE) algorithm belongs to the class of Evolutionary Algorithms and inherits their philosophy and concepts. Possessing only three control parameters (population size, differentiation and recombination constants), DE has promising characteristics of robustness and convergence. In this paper we introduce a new principle of Energetic Selection. It consists in decreasing both the population size and the computational effort according to an energetic barrier function that depends on the generation number. The value of this function acts as an energetic filter, through which only individuals with lower fitness can pass. Furthermore, this approach allows us to initialize a population of sufficient (large) size. This method leads to an improvement in algorithm convergence.
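A compact sketch of classic DE/rand/1/bin with a population that shrinks over the generations. The linear shrinking schedule here merely stands in for the paper's energetic barrier function, whose exact form is not given in the abstract:

```python
import random

def de_minimize(f, bounds, np0=40, gens=80, F=0.8, CR=0.9, np_min=8):
    """DE/rand/1/bin with a shrinking population (illustrative schedule).

    f: objective to minimize; bounds: list of (lo, hi) per dimension.
    The 'energetic filter' is approximated by keeping only the fittest
    individuals while the population shrinks linearly from np0 to np_min.
    """
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np0)]
    for g in range(gens):
        survivors = []
        for i, x in enumerate(pop):
            # mutation: v = a + F*(b - c) with three distinct others
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = random.randrange(dim)
            trial = [a[j] + F * (b[j] - c[j])
                     if (random.random() < CR or j == jrand) else x[j]
                     for j in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            survivors.append(trial if f(trial) <= f(x) else x)  # greedy selection
        # energetic filter: only the fittest pass; population shrinks to np_min
        size = max(np_min, np0 - (np0 - np_min) * (g + 1) // gens)
        pop = sorted(survivors, key=f)[:size]
    return min(pop, key=f)
```

Starting large and filtering down captures the abstract's point: a large initial population for exploration, with computational effort decreasing as generations pass.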

Title:

CASE-BASED APPROACH FOR EFFICIENT REDESIGN OF BUSINESS PROCESS

Author(s):

Farhi Marir

Abstract: Business Process Redesign (BPR) addresses the reengineering of one specific process within the firm. It helps rethink a process in order to enhance its performance. Academics and business practitioners have been developing methodologies to support the application of BPR principles. However, most methodologies lack actual guidance on deriving a process design, threatening the success of BPR. In this paper, we suggest the use of case-based reasoning (CBR) to support solving new problems by adapting previously successful solutions to similar problems, and we investigate how CBR can support a BPR implementation. An implementation framework for BPR and the CBR cyclical process are used as knowledge management technical support for the effective reuse of redesign methods, as a knowledge creation and sharing mechanism. This is developed in an attempt to improve the success rate of BPR implementations by using case stories.

Title:

TOWARDS HIGH DIMENSIONAL DATA MINING WITH BOOSTING OF PSVM AND VISUALIZATION TOOLS

Author(s):

Thanh-Nghi Do

Abstract: In recent years support vector machines (SVM) have been successfully applied to a large number of applications. Training an SVM usually requires solving a quadratic program, so the learning task for large data sets demands a large memory capacity and a long time. The proximal SVM (PSVM) proposed by Fung and Mangasarian is a new SVM formulation. It is very fast to train because it requires only the solution of a linear system. We have used the Sherman-Morrison-Woodbury formula to adapt the PSVM to process data sets with a very large number of attributes. We have extended this idea by applying boosting to PSVM for mining massive data sets with a very large number of both data points and attributes. We have evaluated its performance on the UCI, Twonorm, Ringnorm, Reuters-21578 and Ndc data sets. We also propose a new graphical tool that tries to interpret the results of the new algorithm by displaying the separating frontier between the classes of the data set. This can help the user understand in depth how the new algorithm works.
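The linear-system training that makes PSVM fast can be sketched for the linear case of Fung and Mangasarian's formulation: with E = [A  -e], the classifier (w, gamma) solves (I/nu + E'E)[w; gamma] = E'y. A minimal NumPy version, without the Sherman-Morrison-Woodbury adaptation or the boosting extension described in the abstract:

```python
import numpy as np

def psvm_train(A, y, nu=1.0):
    """Linear proximal SVM: training reduces to a single linear solve.

    A: (m, n) data matrix; y: (m,) labels in {-1, +1}.
    Returns (w, gamma) for the classifier sign(x @ w - gamma).
    """
    m, n = A.shape
    E = np.hstack([A, -np.ones((m, 1))])          # E = [A  -e]
    # (I/nu + E'E) [w; gamma] = E'y  (y stands for D e with D = diag(labels))
    z = np.linalg.solve(np.eye(n + 1) / nu + E.T @ E, E.T @ y)
    return z[:-1], z[-1]

def psvm_predict(A, w, gamma):
    return np.sign(A @ w - gamma)
```

The system is (n+1) x (n+1), which is why the Sherman-Morrison-Woodbury identity matters when the number of attributes n is very large: it lets one solve an m x m system instead.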

Title:

ROBUST, GENERALIZED, QUICK AND EFFICIENT AGGLOMERATIVE CLUSTERING

Author(s):

Manolis Wallace

Abstract: Hierarchical approaches, dominated by the generic agglomerative clustering algorithm, are suitable for cases in which the number of distinct clusters in the data is not known a priori, which is not a rare case in real data. On the other hand, important problems are related to their application, such as susceptibility to errors in the initial steps that propagate all the way to the final output, and high complexity. Finally, as with all other clustering techniques, their efficiency decreases as the dimensionality of the input increases. In this paper we propose a robust, generalized, quick and efficient extension of the generic agglomerative clustering process. Robust refers to the proposed approach's ability to overcome the classic algorithm's susceptibility to errors in the initial steps; generalized, to its ability to simultaneously consider multiple distance metrics; quick, to its suitability for larger datasets by applying the computationally expensive components to only a subset of the available data samples; and efficient, to its ability to produce results comparable to those of trained classifiers, largely outperforming the generic agglomerative process.
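For reference, the generic agglomerative process that the proposal extends: start from singleton clusters and repeatedly merge the closest pair until the desired count remains. This sketch uses single linkage and squared Euclidean distance as assumptions; the paper's robust/generalized extension is not reproduced:

```python
def agglomerate(points, k, dist=None):
    """Generic bottom-up (single-link) agglomerative clustering.

    points: list of coordinate tuples; k: number of clusters to keep.
    This is the classic O(n^3) baseline the abstract extends, not the
    proposed variant.
    """
    if dist is None:
        dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single link: distance between the closest members
                d = min(dist(p, q) for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters
```

The early merges here are exactly the error-prone initial steps the abstract mentions: a single wrong merge can never be undone later.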

Title:

TOWARDS VISUAL DATA MINING

Author(s):

Francois Poulet

Abstract: In this paper, we present our work on a new data mining approach called Visual Data Mining (VDM). This approach tries to involve the user (the data expert, not a data mining or analysis specialist) more intensively in the data mining process and to increase the role of visualisation in this process. The visualisation part can be increased with cooperative tools, where visualisation is used as a pre- or post-processing step of the usual (automatic) data mining algorithms, or visualisation tools can be used instead of the usual automatic algorithms. All these topics are addressed in this paper, with an evaluation of the algorithms presented and a discussion of the interactive algorithms compared with automatic ones. This work must be improved in order to allow data specialists to use these kinds of algorithms efficiently to solve their problems.

Title:

HYBRID EXPERT SYSTEM FOR THE SELECTION OF RAPID PROTOTYPING PROCESSES

Author(s):

Farhi Marir

Abstract: A wide variety of rapid prototyping processes are available, each with different and unique features. Selecting the most suitable process for a given prototype can be difficult and costly if a mistake is made. In this paper, the design of a knowledge-based system to support the selection of a rapid prototyping process is presented. The method utilises a hybrid expert system, which is formulated to interrogate the acquired data streams from a rapid prototyping model simulator for the purpose of comparative studies with the knowledge base.

Title:

A CONNEXIONIST APPROACH FOR CASE BASED REASONING

Author(s):

José María de la Torre , Miguel Delgado , Eva Gibaja , Antonio B. Bailón

Abstract: Case-Based Learning is an approach to automatic learning and reasoning based on using the knowledge gained in past experiences to solve new problems. To suggest a solution for a new problem, it is necessary to search for similar problems in the base of problems whose solutions are known. After selecting one or more similar problems, their solutions are used to elaborate a suggested solution for the new problem. Associative memories recover patterns based on their similarity to a new input pattern. This behaviour makes them useful for storing the case base of a Case-Based Reasoning system. In this paper we analyze the use of a special model of associative memory named CCLAM (Bailón et al., 2002) for this purpose. To test the potential of the tool, we discuss its use in a particular application: the detection of the "health" of a company.

Title:

INTELLIGENT VIRTUAL ENVIRONMENTS FOR TRAINING IN NUCLEAR POWER PLANTS

Author(s):

Pilar Herrero , Gonzalo Mendez , Angelica de Antonio

Abstract: Educational Virtual Environments are gaining popularity as tools to enhance student learning. These environments are often used to allow students to experience situations that would be difficult, costly, or impossible in the physical world. At the Technical University of Madrid we have developed several applications to explore the use of intelligent tutors in VR. In this paper we present two of these applications which have been used for training in radiological protection in Nuclear Power Plants (NPP). These applications are inhabited by avatars and/or agents which are continuously monitoring the state of the environment and manipulating it periodically through virtual motor actions. Our applications help students learn to perform physical, procedural tasks in some different risky areas of NPP.

Title:

BAYESIAN NETWORK CLASSIFIERS VERSUS K-NN CLASSIFIER USING SEQUENTIAL FEATURE SELECTION

Author(s):

Franz Pernkopf , Djamel  Bouchaffra

Abstract: The aim of this paper is to compare Bayesian network classifiers to the k-NN classifier based on a subset of features. This subset is established by means of sequential feature selection methods. Experimental results show that Bayesian network classifiers more often achieve a better classification rate on different data sets than selective k-NN classifiers. The k-NN classifier performs well in the case where the number of samples for learning the parameters of the Bayesian network is small. Bayesian network classifiers outperform selective k-NN methods in terms of memory requirements and computational demands. This paper demonstrates the strength of Bayesian networks for
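The selective k-NN side of the comparison can be sketched as a greedy sequential forward search whose score is the k-NN accuracy on the candidate feature subset. The leave-one-out scoring protocol below is an assumption, as the abstract does not specify the evaluation details:

```python
def knn_accuracy(X, y, feats, k=3):
    """Leave-one-out accuracy of k-NN restricted to the given feature subset."""
    correct = 0
    for i in range(len(X)):
        dists = sorted(
            (sum((X[i][f] - X[j][f]) ** 2 for f in feats), y[j])
            for j in range(len(X)) if j != i)
        votes = [label for _, label in dists[:k]]
        if max(set(votes), key=votes.count) == y[i]:   # majority vote
            correct += 1
    return correct / len(X)

def sequential_forward_selection(X, y, n_feats):
    """Greedy wrapper selection: add the feature that helps k-NN most."""
    selected, remaining = [], list(range(len(X[0])))
    while len(selected) < n_feats:
        best = max(remaining, key=lambda f: knn_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Sequential backward elimination works the same way in reverse, starting from all features and dropping the least useful one at each step.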

Title:

G.R.E.E.N. AN EXPERT SYSTEM TO IDENTIFY GYMNOSPERMS

Author(s):

Antonio Bailón , Eva Gibaja

Abstract: The application of Artificial Intelligence techniques to the problem of botanical identification is not particularly widespread, even less so on the Internet. There are several interactive identification systems, but they usually deal with raw knowledge, so it appears that "research and development of web-based expert systems are still in their early stage" (Li et al., 2002). In this paper we present the G.R.E.E.N. (Gymnosperms Remote Expert Executed Over Networks) system, an expert system for the identification of Iberian gymnosperms which allows on-line queries with uncertainty to be made. The system is operative and can be consulted at http://drimys.ugr.es/experto/index.html.

Title:

ADAPTIVE TECHNIQUES FOR HUMAN FACE DETECTION

Author(s):

João Fernando Marar , Danilo Nogueira Costa

Abstract: This paper presents results from an efficient approach to the automatic detection and extraction of human faces from images with any colour, texture or objects in the background, which consists in finding the isosceles triangles formed by the eyes and the mouth.

Title:

OLIMPO SYSTEM: WEB TECHNOLOGY FOR ELECTRONIC GOVERNMENT AND WORLD PEACE

Author(s):

Andre Bortolon , Hugo Cesar Hoeschl , Tania Bueno , Eduardo Mattos , Vania Ferreira

Abstract: The paper describes the Olimpo System, a knowledge-based system that enables the user to access textual files and to retrieve information that is similar to the search context described by the user in natural language. The paper is focused on the innovation recently implemented on the system and its new features. A detailed description is presented about the search level and the similarity metrics used by the system. The methodology applied to the Olimpo system emphasises the use of information retrieval methods combined with the Artificial Intelligence technique named SCS (Structured Contextual Search).

Title:

DESIGN AND IMPLEMENTATION OF A SCALABLE FUZZY CASE-BASED MATCHING ENGINE

Author(s):

Jonas  Van Poucke , Bartel Van de Walle , Rami Hansenne , Veerle Van der Sluys

Abstract: We discuss the design and the implementation of a flexible and scalable fuzzy case-based matching engine. The engine’s flexible design is illustrated for two of its core components: the internal representation of cases by means of a variety of crisp and fuzzy data types, and the fuzzy operations to execute the ensuing case matching process. We investigate the scalability of the matching engine by a series of benchmark tests of increasing complexity, and find that the matching engine can manage an increasingly heavy load. This indicates that the engine can be used for demanding matching processes. We conclude by pointing at several applications in experimental electronic markets for which the matching engine currently is being put to use, and indicate avenues for future research.

Title:

INFORMED K-MEANS: A CLUSTERING PROCESS BIASED BY PRIOR KNOWLEDGE

Author(s):

Wagner Castilho , Hércules do Prado , Marcelo Ladeira

Abstract: Knowledge Discovery in Databases (KDD) is the process by which unknown and useful knowledge and information are extracted, by automatic or semi-automatic methods, from large amounts of data. With the evolution of Information Technology and the rapid growth in the number and size of databases, the development of methodologies, techniques, and tools for data mining has become a major concern for researchers and has led, in turn, to applications in a variety of areas of human activity. Around 1997, the processes and techniques associated with cluster analysis began to be researched with increasing intensity by the KDD community. Within the context of a model intended to support decisions based on cluster analysis, prior knowledge about the data structure and the application domain can be used as important constraints that lead to better cluster configurations. This paper presents an application of cluster analysis in the area of public safety using a schema that takes into account prior knowledge acquired from statistical analysis of the data. This information was used as a bias for the k-means algorithm, which was applied to identify the dactyloscopic (fingerprint) profile of criminals in the Brazilian capital, also known as the Federal District. The results were then compared with a similar analysis that disregarded the prior knowledge. The analysis using prior knowledge generated clusters that are more coherent with the expert knowledge.
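The biasing idea, seeding k-means with centroids derived from prior statistical analysis instead of random initialization, can be sketched as follows. The actual domain constraints used in the paper are not reproduced; only the seeding mechanism is shown:

```python
def kmeans(points, centroids, iters=20):
    """k-means seeded with centroids supplied by prior knowledge.

    points: list of coordinate tuples; centroids: initial centers chosen
    by the analyst (the bias), rather than at random.
    Returns the final centroids and the point groups assigned to them.
    """
    centroids = [list(c) for c in centroids]
    groups = [[] for _ in centroids]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            # assign each point to its nearest centroid (squared Euclidean)
            i = min(range(len(centroids)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:  # recompute each centroid as the mean of its assigned points
                centroids[i] = [sum(v) / len(g) for v in zip(*g)]
    return centroids, groups
```

Since k-means only converges to a local optimum, informed seeds steer it toward configurations consistent with the prior analysis, which is the effect the abstract reports.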

Title:

NEURAL NETWORK AND TIME SERIES AS TOOLS FOR SALES FORECASTING

Author(s):

Maria Emilia Camargo , Walter Priesnitz Filho , Angela Isabel dos Santos

Abstract: This paper presents the use of the time series AutoRegressive Integrated Moving Average (ARIMA) model with interventions, and a back-propagation neural network model, to analyze the behavior of sales in a medium-sized enterprise located in Rio Grande do Sul, Brazil, for the period January 1979 to December 2002. The forecasts obtained using the back-propagation model were found to be more accurate than those of the ARIMA model with interventions.

Title:

A SYMBOLIC APPROACH TO LINGUISTIC NEGATION

Author(s):

Daniel PACHOLCZYK , Mazen EL-SAYED

Abstract: Negation processing is a challenging problem studied by a large number of researchers from different communities. This paper focuses on linguistic rather than logical negation. Our work is based on the main standard forms of linguistic negation interpretation, represented as "x is not A". The reference frame associated with a standard form contains all its positive interpretations. The main goal of dealing with negation is the selection of one (or several) positive interpretation(s) associated with a negative sentence from its reference frame. The originality of our approach results from the fact that we do not search directly for all affirmative interpretations of a negation, but approximate its significance. We introduce two operators, one optimistic and the other pessimistic, defined according to rough set theory. Using the new negation formulation, we propose several generalizations of the Modus Ponens rule dealing with negative information. The new model is proposed within a symbolic many-valued predicate logic.
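The two operators are defined via rough set theory; a plausible reading is that the pessimistic operator corresponds to the lower approximation and the optimistic one to the upper approximation of the set of positive interpretations. A minimal sketch of those standard approximations (the paper's exact definitions are not in the abstract):

```python
def approximations(universe, equiv, target):
    """Standard rough-set lower/upper approximations of a target set.

    universe: iterable of elements; equiv: maps an element to its
    equivalence-class label; target: the set to approximate.
    The lower approximation is certain (pessimistic), the upper is
    possible (optimistic); mapping these to the paper's operators is
    our reading of the abstract.
    """
    classes = {}
    for x in universe:
        classes.setdefault(equiv(x), set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:          # class entirely inside the target
            lower |= cls
        if cls & target:           # class overlapping the target
            upper |= cls
    return lower, upper
```

Everything between the lower and upper approximations is the boundary region, where an interpretation of "x is not A" can neither be asserted nor ruled out.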

Title:

DYNAMIC INTEREST PROFILES: TRACKING USER INTERESTS USING PERSONAL INFORMATION

Author(s):

Joann Ruvolo , Justin Lessler , Vikas Krishna , Stefan Edlund

Abstract: When building applications it is usually the case that developers are forced to focus on “one size fits all” solutions. Customization is often burdensome for the user, or would be so complex that it would be unrealistic to ask an end user to undertake this task. In the areas of personal information management and collaboration there is no reason to accept this limitation, as there is a body of information about the user that reflects their interests: namely their personal documents. The Dynamic Interest Profile (DIP) is a system intended to track user interest to allow for the creation of more intelligent applications. In this paper we discuss our approach to implementing the DIP, challenges that this implementation presents, as well as the security and privacy concerns that the existence of such an application raises.

Title:

A FAST SCALE AND POSE INVARIANT FACE RECOGNITION METHOD

Author(s):

Younus Javed , Almas Anjum , Aamir Nadeem

Abstract: High-speed computing, database and networking technologies and sophisticated image processing methodologies have increased the topical significance of face recognition. The proposed system is a scale-invariant face recognition model which works on a reduced image size to increase speed and reduce complex computation. The approach transforms face images into a small set of characteristic feature matrices, the principal components of the initial training set of images. On the basis of these small sets of features, a general matrix and difference matrices of the normalized images are formed, which ultimately provide a base for face recognition. The model consists of two parts. The first part is the conversion of the RGB image into a gray image by averaging the RGB values, and the preprocessing of the image. In the second part, recognition is performed by projecting a test image onto the face space spanned by the general matrix; an error matrix is obtained and compared with the difference matrices of all the training images, and the minimum error gives the recognized image. Recognition under reasonably varying conditions is achieved by training on a limited number of images with different poses and intensity levels. This approach has advantages over other face recognition schemes in its speed, simplicity, learning capacity and relative insensitivity to small or gradual changes of pose, intensity level and size in the face images.

Title:

DYNAMIC NEGOTIATION FOR REAL-TIME MANUFACTURING EXECUTION

Author(s):

Li Qun Zhuang , Jing Bing Zhang , Bryan Tsong Jye Ng , Yi Zhi Zhao , Yue Tang

Abstract: This paper presents a dynamic negotiation framework for real-time execution in self-organised manufacturing environments. The negotiation strategies in this framework bridge the gap between the distributed negotiation of self-interested agents and cooperative negotiation among agent groups. In particular, the proposed framework is based on the model of Performance and Cost for Manufacturing Execution (PCME). By forming a dynamic organisation called an agent consortium, individual agents negotiate over the PCME in order to optimise resource allocation under time constraints and uncertainty of job execution, and resolve conflicts to fulfil the goal of the overall system. The ultimate goal of the framework is to reduce negotiation time, make effective use of resources, adapt to changes in execution and increase the throughput of the entire system. Experimental work based on PCME has been carried out to demonstrate the high performance of this approach despite unanticipated and dynamic changes in the manufacturing execution environment.

Title:

VISUALIZING SOFTWARE PROJECT ANALOGIES TO SUPPORT COST ESTIMATION

Author(s):

Martin Auer

Abstract: Software cost estimation is a crucial task in software project portfolio decisions like start scheduling, resource allocation, or bidding. A variety of estimation methods have been proposed to support estimators. Especially the analogy-based approach (based on a project's similarities with past projects) has been reported as both efficient and relatively transparent. However, its performance was typically measured automatically and the effect of human estimators' sanity checks was neglected. Thus, this paper proposes the visualization of high-dimensional software project portfolio data using multidimensional scaling (MDS). We (i) propose data preparation steps for an MDS visualization of software portfolio data, (ii) visualize several real-world industry project portfolio data sets and quantify the achieved approximation quality to assess the feasibility, and (iii) outline the expected benefits referring to the visualized portfolios' properties. This approach offers several promising benefits by enhancing portfolio data understanding and by providing intuitive means for estimators to assess an estimate's plausibility.
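The MDS step itself can be sketched in its classical (Torgerson) form: double-center the squared distance matrix and embed each project along the top eigenvectors. The paper's data preparation steps for portfolio attributes are not reproduced here:

```python
import numpy as np

def classical_mds(D, dims=2):
    """Classical (Torgerson) MDS: embed points given pairwise distances.

    D: (n, n) matrix of pairwise Euclidean distances. Returns (n, dims)
    coordinates whose pairwise distances approximate D; for distances that
    are exactly Euclidean in `dims` dimensions, the recovery is exact up
    to rotation and translation.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dims]     # largest eigenvalues first
    # clamp tiny negative eigenvalues produced by floating-point noise
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

The discarded eigenvalues measure the approximation quality the abstract mentions quantifying: the larger their share of the spectrum, the more the 2-D picture distorts the portfolio distances.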

Title:

ORDER PLANNING DECISION SUPPORT SYSTEM FOR CUSTOMER DRIVEN MANUFACTURING: OVERVIEW OF MAIN SYSTEM REQUIREMENTS

Author(s):

Américo  Azevedo , Henrique  Proença

Abstract: An important goal in scheduling production orders through a manufacturing facility is to ensure that the work is completed as close as possible to its due date. Work that is late creates downstream delays, while early completion can be detrimental if storage space is limited. Production planning and control in manufacturing is becoming more difficult as product families grow and quantities decrease. This paper presents an ongoing information system development that targets the production planning of special test-table equipment for automobile component manufacturers. The simulation-based information system will be used to support planning and scheduling activities; to compare and analyze the impact of rescheduling; to forecast the production completion date; to detect bottlenecks; and to evaluate machine performance.

Title:

AN EXPERIENCE IN MANAGEMENT OF IMPRECISE SOIL DATABASES BY MEANS OF FUZZY ASSOCIATION RULES AND FUZZY APPROXIMATE DEPENDENCIES

Author(s):

J.M. Serrano , M. Sánchez-Marañón , Daniel Sánchez , M.A. Vila , G. Delgado , J. Calero

Abstract: In this work, we start from a database built with soil information from heterogeneous scientific sources (Local Soil Databases, LSDB). We call this an Aggregated Soil Database (ASDB). We are interested in determining whether knowledge obtained by means of fuzzy association rules or fuzzy approximate dependencies can adequately represent expert knowledge for a soil scientist familiar with the study zone. A master relation between two soil attributes was selected and studied by the expert in both the ASDB and the LSDB. The results obtained reveal that the knowledge extracted by means of fuzzy data mining tools is significantly better than the crisp one. Moreover, it is highly satisfactory from the soil expert's point of view, since it handles with more flexibility the imprecision factors (IFASDB) commonly related to this type of information.

Title:

DECISION FOLLOW-UP SUPPORT MECHANISM BASED ON ASYNCHRONOUS COMMUNICATION

Author(s):

Wolfgang Prinz , Carla Valle

Abstract: Decision management and decision support systems have been under investigation for several decades, and both research areas have contributed to the quality of decision-making processes. However, little work has been done in the area of decision follow-up, especially regarding decisions made during meetings. In this paper we analyse the concepts related to this problem and propose a solution based on computer-supported mechanisms that assist the formalization of meeting outcomes and provide decision follow-up.

Title:

THE ORM MODEL AS A KNOWLEDGE REPRESENTATION FOR E-TUTORIAL SYSTEMS

Author(s):

Tanaporn Leelawattananon , Suphamit Chittayasothorn

Abstract: At present, information technology plays an important role in teaching and learning activities. E-learning systems have the potential to reduce operating costs and train more people. Teachers and students do not have to be in the same place at the same time, and students have the opportunity to perform self-study and self-evaluation using e-tutorial systems. E-learning systems can be considered expert systems in the sense that they provide expert advice on particular subjects of study to students. The exploitation of knowledge bases and knowledge representation techniques is therefore vital to the development of e-learning systems. This paper presents the development of a knowledge-based e-tutorial system that uses the Object Role Model (ORM) as its knowledge representation. The system provides Physics tutorials. It was implemented in Prolog, and the knowledge base resides on a relational database server.

Title:

IMPLEMENTING KNOWLEDGE MANAGEMENT TECHNIQUES FOR SECURITY PURPOSES

Author(s):

Ioannis  Drakopoulos , Petros Belsis , Stefanos Gritzalis , Christos Skourlas

Abstract: Due to its rapid growth, Information Systems Security has become a new area of expertise, related to a vast quantity of knowledge. Exploiting all this knowledge is a difficult task, due to its heterogeneity. Knowledge Management (KM), on the other hand, is an expanding and promising discipline that has drawn considerable attention. In this paper we argue for the benefits of KM techniques and their possible applications in assisting security officers to improve their productivity and effectiveness. To support this, we explore possible technological prospects and present the architecture of a prototype developed to implement selected innovative KM components, embedding state-of-the-art multimedia Java-based applications.

Title:

MAJORITY VOTING IN STABLE MARRIAGE PROBLEM WITH COUPLES

Author(s):

Tarmo Veskioja

Abstract: Providing centralised matching services can be viewed as a group decision support system (GDSS) for the participants to reach a stable matching solution. In the original stable marriage problem all the participants have to rank all members of the opposite party. Two variations for this problem allow for incomplete preference lists and ties in preferences. If members from one side are allowed to form couples and submit combined preferences, then the set of stable matchings may be empty (Roth et al., 1990). In that case it is necessary to use majority voting between matchings in a tournament. We propose a majority voting tournament method based on monotone systems and a value function for it. The proposed algorithm should minimize transitivity faults in tournament ranking.
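
The original stable marriage problem referred to above is classically solved by the Gale-Shapley deferred-acceptance algorithm. As a point of reference (this is the textbook algorithm, not the authors' couples-and-voting extension), a minimal sketch:

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred-acceptance algorithm for the classic stable marriage problem.

    men_prefs / women_prefs: dicts mapping each person to an ordered
    preference list over the opposite side (complete lists, no ties).
    Returns a stable matching as a dict {man: woman}.
    """
    # rank[w][m] = position of m in w's list (lower is preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)                   # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}  # index of next woman to propose to
    engaged = {}                             # woman -> man

    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m                   # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])          # w trades up; old partner is free again
            engaged[w] = m
        else:
            free.append(m)                   # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["a", "b"], "y": ["b", "a"]}
print(gale_shapley(men, women))  # stable matching: a with x, b with y
```

With complete individual preference lists a stable matching always exists; it is precisely when couples submit combined preferences, as the abstract notes, that the set of stable matchings may be empty and tournament voting becomes necessary.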

Title:

OUTLIER DETECTION AND VISUALISATION

Author(s):

Lydia BOUDJELOUD , François POULET

Abstract: The outlier detection problem has important applications in the fields of fraud detection, network robustness analysis, and intrusion detection. Most such applications involve high-dimensional domains in which the data can contain hundreds of dimensions. However, in high-dimensional space the data are sparse and the notion of proximity fails to retain its meaningfulness. Many recent algorithms use heuristics such as genetic algorithms or tabu search to mitigate these difficulties in high-dimensional data. We present in this paper a new hybrid algorithm for outlier detection in high-dimensional data. We evaluate the performance of the new algorithm on different high-dimensional data sets, and visualise the results for some of them.
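
The loss of meaningful proximity mentioned in the abstract (often called distance concentration) is easy to demonstrate numerically; the following sketch, assuming uniform random data, measures how the contrast between the nearest and farthest point from the origin collapses as dimensionality grows:

```python
import random

def contrast(dim, n_points=200):
    """Relative spread (max - min) / min of distances from the origin
    to uniform random points in the unit hypercube. As dim grows,
    nearest and farthest points become almost equidistant."""
    pts = [[random.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [sum(x * x for x in p) ** 0.5 for p in pts]
    return (max(dists) - min(dists)) / min(dists)

random.seed(0)
for d in (2, 20, 200):
    print(d, round(contrast(d), 3))  # the ratio shrinks as d increases
```

This is why distance-based outlier definitions degrade in hundreds of dimensions, motivating the heuristic search strategies the abstract cites.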

Title:

MULTI-AGENT ORGANISATIONAL MODEL FOR E-CONTRACTING

Author(s):

Djamel  KHADRAOUI

Abstract: The paper covers development and analysis tools, software and system architecture engineering, and development methodologies. It introduces the MOISE+ model for organizing agents inside a multi-agent system and discusses the MOISE Java API. The Model of Organization for multI-agent Systems is an organizational model for multi-agent systems seen from three points of view: structural, functional and deontic. In practice, this model is available as a Java component. The original contribution of the paper is the extension of the model to take into account the notion of sanctions, which are necessary in order to enforce the normative specifications (obligation, permission, prohibition) of behaviours. The generalized model is implemented in an eBusiness application dealing with eContracting.

Title:

MULTI-AGENT PROPOSITIONS TO MANAGE ORGANIZATIONAL KNOWLEDGE: POSITION PAPER CONCERNING A THREE-DIMENSIONAL RESEARCH PROJECT

Author(s):

Francisco  Guimarães , César  Rosa , Jorge  Louçã , Valmir  Meneses

Abstract: This paper presents the work in progress in a three-dimensional project, including the theoretical foundations and main goals of the lines of research comprising our project: user modeling in a distributed cooperative system, interactive cooperation in a multi-agent structure, and knowledge representation in a cognitive agent architecture. These lines of research are complementary and share a main goal: to make propositions regarding the use of multi-agent systems in organizations, namely concerning support for decision-making processes and, more generally, knowledge management within organizations.

Title:

AN AGENT-BASED INFRASTRUCTURE FOR FACILITATING EVIDENCE-BASED HEALTH CARE

Author(s):

Jennifer Sampson

Abstract: Evidence-based medicine relies heavily on the timely dissemination of ‘best evidence’ to a wide audience of health practitioners (Atkins and Louw, 2000). However, finding, assimilating and using this information resource effectively can be difficult. In this paper we describe an infrastructure for facilitating evidence-based health care using Agora, a multi-agent system. The paper discusses our extensions to Agora and describes issues in disseminating such medical knowledge via an adaptive, intelligent, distributed, mobile information service. We describe how an agent-based approach can deliver clinical cases and diagnosis information to clinicians at the point of care, tailored to their needs. This research in progress is particularly important for facilitating the flow of information in health care.

Title:

AN ALGORITHM FOR LINEAR BILEVEL PROGRAMMING PROBLEMS

Author(s):

Jie Lu , Chenggen Shi

Abstract: For linear bilevel programming problems, branch and bound is the most successful algorithm for dealing with the complementarity constraints arising from the Kuhn-Tucker conditions. This paper proposes a new branch and bound algorithm for linear bilevel programming problems. Based on this algorithm, a web-based bilevel decision support system is developed.


AREA 3 - Information Systems Analysis and Specification
 
Title:

PRIVACY CONCERNS IN INTERNET APPLICATIONS

Author(s):

Seev Neumann , Moshe Zviran

Abstract: The Merriam-Webster Dictionary defines privacy as “freedom from unauthorized intrusion”. While privacy has been a sensitive issue long before the advent of computers, the concern has been significantly elevated by the widespread use of large databases that make it easy to compile a dossier about an individual from many data sources. The problem of privacy has been further exacerbated by the fact that the Web makes it easy for new data to be automatically collected and added to databases and analyzed by sophisticated data mining tools and personalized marketing services. This study explores the nature of the privacy concern in detail, especially for the online environment. The objective of this study is to get a better understanding of the factors that can affect online privacy concerns and how this concern could affect the users’ behavior and the future of the Internet and electronic commerce.

Title:

A NEW VULNERABILITY TAXONOMY BASED ON PRIVILEGE ESCALATION

Author(s):

Yongzheng Zhang , Xiaochun Yun

Abstract: On the basis of an analysis of typical vulnerability taxonomies reported in the literature, a privilege-escalation-based vulnerability taxonomy with multidimensional quantitative attributes is presented in this paper. We then give examples of three vulnerabilities to illustrate the characteristics of this taxonomy, and present a risk evaluation formula together with the ranks of the risk evaluation levels.

Title:

A COMPARATIVE STUDY OF ELGAMAL BASED CRYPTOGRAPHIC ALGORITHMS

Author(s):

Ramzi Haraty , Hadi Otrok

Abstract: Cryptography is the art or science of keeping messages secret. People mean different things when they talk about cryptography. Children play with toy ciphers and secret languages; however, these have little to do with real security and strong encryption. Strong encryption is the kind of encryption that can be used to protect information of real value against organized criminals, multinational corporations, and major governments. Strong encryption used to be confined to the military domain; in the information society, however, it has become one of the central tools for maintaining privacy and confidentiality.

As we move further into an information society, the technological means for global surveillance of millions of individuals are becoming available to major governments. Cryptography has become one of the main tools for privacy, trust, access control, electronic payments, corporate security, and countless other fields.

Perhaps the most striking development in the history of cryptography came in 1976, when Diffie and Hellman published "New Directions in Cryptography" [3]. Their work introduced the concept of public-key cryptography and provided a new method for key exchange, based on the intractability of the discrete logarithm problem. Although the authors had no practical realization of a public-key encryption scheme at the time, the idea was clear and it generated extensive interest and activity in the world of cryptography. One of the most powerful and practical public-key schemes was produced by ElGamal in 1985 [4].

El-Kassar and Awad [1][6] modified the ElGamal public-key encryption scheme from the domain of natural integers, Z, to two principal ideal domains, namely the domain of Gaussian integers, Z[i], and the domain of rings of polynomials over finite fields, F[x], by extending the arithmetic needed for the modifications to these domains.

In this paper, we compare and evaluate the classical and modified ElGamal algorithms by implementing and running them on a computer. We investigate the issues of complexity, efficiency and reliability by running the programs with different sets of data, and we compare the different algorithms given the same input data. In addition, we present an attack algorithm consisting of subroutines used to crack encrypted messages by applying certain mathematical concepts to find the private key; once the key is found, it is easy to decrypt the message. A study using the results of the attack algorithm compares the security of the different classical and modified cryptographic algorithms.
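
For orientation, the classical ElGamal scheme over the integers referred to above (not the Gaussian-integer or polynomial variants studied in the paper) can be sketched in a few lines; the tiny prime here is purely illustrative, as real use requires a large prime:

```python
import random

def elgamal_keygen(p, g):
    """Toy ElGamal key generation over Z_p."""
    x = random.randrange(2, p - 1)        # private key
    return x, pow(g, x, p)                # (private x, public h = g^x mod p)

def elgamal_encrypt(m, p, g, h):
    k = random.randrange(2, p - 1)        # fresh ephemeral key per message
    return pow(g, k, p), (m * pow(h, k, p)) % p   # ciphertext (c1, c2)

def elgamal_decrypt(c1, c2, x, p):
    s = pow(c1, x, p)                     # shared secret c1^x = g^(kx)
    return (c2 * pow(s, p - 2, p)) % p    # divide by s via Fermat inverse

p, g = 467, 2                             # demo prime and base only
x, h = elgamal_keygen(p, g)
c1, c2 = elgamal_encrypt(123, p, g, h)
assert elgamal_decrypt(c1, c2, x, p) == 123
```

Security rests on the discrete logarithm problem: recovering x from h = g^x mod p, which is exactly what the attack subroutines mentioned in the abstract attempt for the various domains.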


Title:

ON INFORMATION SECURITY GUIDELINES FOR SMALL/MEDIUM ENTERPRISES

Author(s):

David Chapman , Leonid Smalov

Abstract: The adoption rate of Internet-based technologies by United Kingdom (UK) Small and Medium Enterprises (SMEs) is well documented. Over several decades, information security has evolved from early work such as the Bell La Padula (BLP) model toward widely disseminated Information Security Guidelines containing detailed advice. The overwhelming volume and level of detail provided often fails to address the information security requirements of SMEs. SMEs typically fail to implement effective Internet strategies due to lack of information security awareness, lack of technical skills and inadequate financial resources. The European Union-supported ISA-EUNET Consortium has developed a set of best practices to support SMEs. We present a mapping of the Computer Security Expert Assist Team (CSEAT) Information Security Review Areas onto the Alliance for Electronic Business (AEB) web security guidelines as an example of a possible roadmap approach for SMEs to gain information security awareness.

Title:

ANALYSIS AND CONFIGURATION METHODOLOGY FOR VIDEO ON DEMAND SERVICES BASED ON MONITORING INFORMATION AND PREDICTION

Author(s):

Ángel Neira , Xabiel García Pañeda , David Melendi Palacio , Roberto García , Víctor García

Abstract: This paper presents an analysis and configuration methodology for video-on-demand services. Usually, two entities take part in this kind of service: a network operator and a content provider. The former provides the Internet connection and manages servers and proxies, whereas the latter, normally a media company, generates the contents. All configuration decisions must be based on an accurate analysis of service behaviour which evaluates the quality and quantity of resources, contents and subscribers. This analysis can be performed using monitoring information and predictions of near-future behaviour established by managers. To formalize both analysis and configuration, a methodology must be developed in order to help service managers attain good performance and, at the same time, make a profit for their companies.

Title:

DESCRIBING SOFTWARE-INTENSIVE PROCESS ARCHITECTURES USING A UML-BASED ADL

Author(s):

Ilham ALLOUI , Flavio OQUENDO

Abstract: Many Architecture Description Languages (ADLs) have been proposed in the software architecture community, with several competing notations, each bringing its own body of specification languages and analysis techniques. The aim of all of them is to reduce the costs of error detection and repair while providing adequate abstractions for modelling large software-intensive systems and establishing properties of interest. However, there now exists a large consensus to standardise notations and methods for software analysis and design, as standardisation provides an economy of scale that results in more and better tools, better interoperability between tools, more available developers skilled in the standard notation, and lower training costs. Software-intensive process architectures can therefore usefully be described using a standard-compliant design notation. Among such notations is the UML modelling language, which on the one hand makes use of visual notations and on the other is an emerging standard software design language and a starting point for bringing architectural modelling into industrial use. This paper presents an architecture-centred UML-based notation to describe software process architectures. The architectural concepts have already been formally defined in a textual Architecture Description Language. The notation is illustrated by a business-to-business process application. The main contribution of this work is to show that UML, with its large and extensible set of predefined constructs, imposes itself as a relevant candidate to be extended with the necessary architectural concepts and customisation to model software-intensive processes. The work presented is being developed and validated within the framework of the X IST 5 ongoing European project.

Title:

U_VBOOM : UNIFIED ANALYSIS AND DESIGN PROCESS BASED ON THE VIEWPOINT CONCEPT

Author(s):

Hair Abdellatif

Abstract: The introduction of viewpoints into object-oriented design provides several improvements in the modelling of complex systems. In fact, it enables users to build a unique model accessible by different users with various points of view, instead of building several sub-models whose management is too hard to maintain. The concepts of view and viewpoint were implemented in VBOOL, a language which proposes a new relationship, "visibility". VBOOM, the analysis/design method, integrates these concepts into object-oriented modelling. The aims of this work are, firstly, to propose a new representation of the visibility relationship of VBOOL in UML, the standard language for modelling and specifying object-oriented systems; and secondly, to complete UML with a viewpoint-oriented method in order to obtain a complete software engineering process. The definition of this method is based on the VBOOM method. The resulting method, called U_VBOOM, represents an adaptation of VBOOM to UML. The new representation of the visibility relationship supports multi-target code generation and improves the development process proposed by the VBOOM method.

Title:

TESTING SOFTWARE SYSTEMS FROM A USER'S PERSPECTIVE

Author(s):

Thomas Thelin

Abstract: An important determinant of whether a software system will be used is the satisfaction of its users. In order to fulfil the users’ requirements during development, software inspection and testing are two important activities. Software inspection is used in the first phases of development, and testing is used after the system has been implemented. Several inspection and testing techniques have been developed, and some of them validate the software from the perspective of the users. Statistical usage testing (SUT) is one such technique, used to test a software product from a user's point of view. In SUT, usage models are designed to anticipate future usage, and test cases are then developed from the models. The derivation of test cases from the usage model can be automated by a tool. This paper focuses on verification and validation from a usage perspective and presents a novel tool for SUT called MaTeLo. The purpose of the tool is to automatically produce test cases based on usage models, and to calculate important quality metrics such as reliability. Furthermore, the paper describes an empirical evaluation of the tool and how SUT relates to inspection and estimation techniques with a user focus.

Title:

WORKFLOW ACCESS CONTROL FROM A BUSINESS PERSPECTIVE

Author(s):

Dulce Domingos

Abstract: Workflow management systems are increasingly being used to support business processes. Methodologies have been proposed to derive workflow process definitions from business models. However, these methodologies do not address access control. In this paper we propose an extension to the Work Analysis Refinement Modelling (WARM) methodology that also derives workflow access control information from the business process model. This is done by identifying useful information in business process models and showing how it can be refined into access control information. Our approach reduces the effort required to define workflow access control, ensures that authorization rules are directly related to the business, and aligns access control with the information system architecture that implements the business process.

Title:

USING SECURITY ATTACK SCENARIOS TO ANALYSE SECURITY DURING INFORMATION SYSTEMS DESIGN

Author(s):

Paolo  Giorgini , Haralambos Mouratidis , Gordon  Manson

Abstract: It has been widely argued in the literature that security concerns should be integrated with software engineering practices; however, only recently has work been initiated in this direction. Most of this work considers only how security can be analysed during the development lifecycle, not how the security of an information system can be tested during the analysis and design stages. In this paper we present results from the development of a scenario-based technique for testing the reaction of an information system against potential security attacks.

Title:

METRICS FOR DYNAMICS: HOW TO IMPROVE THE BEHAVIOUR OF AN OBJECT INFORMATION SYSTEM

Author(s):

Maria Jose Escalona , Jean-Louis Cavarero

Abstract: If we ask what the main difference is between modelling a system with a traditional model, such as the entity-relationship model, and with an object-oriented model, our answer is that in the former the processes are not located anywhere, while in the latter the processes (operations or methods) are encapsulated in classes. The choice of the right class to home each operation is essential for the behaviour of the system. It is useless to design a well-built system, according to many static metrics, if the system does not run well afterwards. In other words, dynamic metrics that evaluate the behaviour of a system when it runs are much more useful than static metrics that tell whether the system is correctly built. Accordingly, we propose in this paper a new approach to evaluating a priori the behaviour of a system, taking into account the notion of event cost and the notion of time (which is obviously essential). The final goal of this approach is to deliver information on how operations should be placed in classes in order to obtain better performance when the system is running. However, a proposal of metrics is of no value if its practical use is not demonstrated, either by means of case studies taken from real projects or by controlled experiments. For this reason, an optimisation tool is under construction in order to provide solutions to this problem.

Title:

ALIGNING BUSINESS PROCESS MODELING AND SOFTWARE SPECIFICATION IN A COMPONENT-BASED WAY, THE ADVANTAGES OF SDBC

Author(s):

Boris Shishkov , Jan L.G. Dietz

Abstract: One frequent cause of software project failure is the mismatch between the (business) requirements and the actual functionality of the delivered (software) application. In this paper, some popular methods that address this problem (as well as their strengths and shortcomings) are briefly outlined, and an approach is proposed for the design of software that consistently bases the design on prior business process modeling. The alignment between these two tasks is realized in a component-based way, by deriving the software model from identified (generic) business components, thus taking advantage of the benefits of object orientation. The paper introduces not only the concepts of the approach but also elaborated views on how it could be implemented using particular software design and business process modeling techniques; one way to implement the approach is through UML, the standard language for designing software. The suggested approach is expected to be a useful contribution to the knowledge on aligning business process modeling and software design.

Title:

A NEW MODEL TO MANAGE IDS ALERTS

Author(s):

Walter Godoy Junior , Marco Bonato

Abstract: The goal of this paper is to present a new model to reduce the number of alerts generated by an IDS analyzer. This model allows the administrator to analyze only the messages that really pose risks to an environment or machine. This is very important in a complex environment with many machines running many services.

Title:

CONSTRAINT-GUIDED ENTERPRISE PORTALS

Author(s):

Frank Kriwaczek , Christopher Hogger

Abstract: It is shown how an enterprise portal, supporting a community of users discharging roles expressed as combinations of plans and constraints, can be usefully guided by a constraint processor. In particular, constraint logic programming on finite domains provides the users with useful insights regarding their possible work schedules. Constraints also assist in shaping the electronic artefacts created and transmitted by the users. The implementation is supported by mechanisms for assigning and updating roles and for assisting the search for remedies in the case of constraint failure.

Title:

MODEL CHECKING AN OBJECT-ORIENTED DESIGN

Author(s):

Simon C Stanton , Vishv Malhotra

Abstract: Object classes are the building blocks of object-oriented software. Design methodologies have focused on methods, tools and representations for building classes that take advantage of inheritance and encapsulation. The guiding principle is that if all classes are correctly constructed, a system consisting of objects of these classes will be correct. Efforts to include object constraints in object-oriented programs have not attained a role commensurate with the role invariants play in traditional imperative programs for understanding programs and enforcing correctness properties. The paper describes the use of a model checker to establish the correctness of an object-oriented design.
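
The essence of checking object constraints with a model checker can be conveyed by a generic explicit-state sketch (an illustrative toy, not the authors' tool or design): enumerate every reachable state of the design and verify an invariant in each.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Breadth-first exploration of all states reachable from `initial`
    via `successors`; checks `invariant` on each state.
    Returns (True, None) if the invariant holds everywhere,
    otherwise (False, violating_state)."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s                  # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

# Toy object design: a counter whose class invariant is 0 <= n <= 3,
# but whose (faulty) methods allow stepping just outside the bounds.
def succ(n):
    return [m for m in (n + 1, n - 1) if -1 <= m <= 4]

ok, bad = check_invariant(0, succ, lambda n: 0 <= n <= 3)
print(ok, bad)  # the checker reports a state that violates the invariant
```

Real model checkers add temporal logic and state-space reduction, but the payoff is the same: a concrete counterexample trace when a class invariant can be violated.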

Title:

A TECHNIQUE FOR INTRODUCING STEREOTYPES INTO UML TOOLS

Author(s):

Miroslaw  Staron , Ludwik Kuzniarz

Abstract: The Unified Modeling Language is a general-purpose, visual object-oriented modeling language, which can be used for a variety of purposes. However, the language can be customized for specific purposes and needs with the help of its built-in extension mechanisms. The customization must be supported by the tools used to produce models during software development. This paper examines the capabilities of UML tools, which results in the identification of some problems. The paper then proposes an alternative way of introducing stereotypes, independent of the UML tool used, based on the XML Metadata Interchange (XMI) format and related XML technologies. The method is compared with the introduction of stereotypes directly into UML tools by means of an example design.

Title:

THE COMPONENT BASED PROGRAMMING MODEL FOR LINUX (CBPM)

Author(s):

Ali Raza , Omer Muhammad , Sikander Hayat , Imran Gondal

Abstract: Contemporary component model development is becoming more and more important in the software industry. The academic world has spent a long time on the development and refinement of component models, and rarely considers the alternative of not using a bridge. We propose and have implemented the Component Based Programming Model (CBPM) for Linux, which removes the use of a bridge while conforming to the Component Object Model. CBPM aims to lower software development costs by providing sophisticated facilities for Component Object Model (COM) based component reuse on Linux, and focuses on eliminating the bridging overheads of using COM components. CBPM defines a standard for component interoperability, is not dependent on any particular programming language, and is extensible.

Title:

EFFECTIVE XML REPRESENTATION FOR SPOKEN LANGUAGE IN ORGANISATIONS

Author(s):

Philip Windridge , Dali Dong , Rodney Clarke

Abstract: Spoken language can be used to provide insights into organisational processes; unfortunately, the transcription and coding stages are very time-consuming and expensive. The concept of partial transcription and coding is proposed, in which spoken language is indexed prior to any subsequent processing. The functional linguistic theory of texture is used to describe the effects of partial transcription on observational records. The standard used to encode transcript context and metadata is called CHAT, but a previous XML schema developed to implement it contains design assumptions that make it difficult to support, for example, partial transcription. This paper describes a more effective XML schema that overcomes many of these problems and is intended for use in applications that support the rapid development of spoken language deliverables.

Title:

ONTOLOGY MODELING TOOL USING CONCEPT DICTIONARY AND INFERENCE

Author(s):

Yoichi Hiramatsu

Abstract: The usefulness of an ontology is strongly dependent on its knowledge representation policy and its maintenance. Knowledge representation and modeling tools have been one of the most actively discussed themes among ontology scientists. Some ontology editing tools were born and grew up in the field of expert systems, while others were designed originally by ontology research groups. Key features of the newly implemented tool are: (a) reference to concept dictionaries (EDR and WordNet) to quickly find the semantics of words, and (b) use of the inference algorithm provided by Schank’s Memory Organization Package. Satisfactory results were obtained in applications of ontologies modeled by the present tool. The paper describes the implementation of the modeling tool and its effectiveness in solving some actual problems of enterprise integration.

Title:

OPEN SOURCE VS. CLOSED SOURCE

Author(s):

Vidyasagar Potdar , Elizabeth Chang , Ljiljana  Brankovic

Abstract: Open source software development represents a fundamentally new concept in the field of software engineering. Compared to the traditional software engineering approach, this approach is essentially reversed. Open source development and delivery occur on Internet time. Developers are not confined to a geographic area. They work voluntarily on a project of their choice; they do not have to join a particular project just because it needs more developers or has a high degree of urgency. Developers work for peer recognition and self-satisfaction. In the open source community, every project has equal priority. Software developed as open source is not released until the project owner thinks that the software has reached a functional stage. One of the success stories is the Linux operating system. Open source software is always in an evolutionary stage: it never reaches a final stage, since as new requirements emerge the software is enhanced by the user/developers. In this paper, we give an introduction to the insights of open source software development. We then elucidate the perceived benefits and point out the differences between the open source and closed source software development approaches. At the end we propose a new model for open source software development.

Title:

USING WORKFLOW TECHNOLOGY: INTEGRATING EXISTING ENTERPRISE SYSTEMS WITH WORKFLOW TECHNOLOGY

Author(s):

Jeanne Stynes , Patrick Rushe

Abstract: Reducing costs and reducing time to market are two major keys to survival in the software market. Workflow reduces costs and time dramatically where applications involve the passage of work between recipients in order to meet certain business objectives. New projects in this area often use workflow technology. However, workflow’s applicability is often overlooked where developers are working on maintaining or upgrading existing systems. This paper discusses the work involved in integrating an existing system with a workflow management system, and examines the benefits of incorporating workflow into existing systems.

Title:

SMALL ENTERPRISES’ PREDISPOSITION TO ADOPT AN ERP

Author(s):

Suzanne Rivard , Danie Jutras , Louis  Raymond

Abstract: Enterprise resource planning systems (ERPs) are now being implemented in small and medium enterprises (SMEs). In addition to allowing for the integration of technological architectures, these systems make best practices available to small firms. This paper presents the results of a study that was aimed at identifying the dimensions of SMEs’ readiness for the adoption of this new technology.

Title:

FORMAL SPECIFICATION AND VERIFICATION OF XML-BASED BUSINESS DOMAIN MODELS

Author(s):

Wolfgang Schuetzelhofer

Abstract: The rapidly growing use of XML in the development of business-to-business (B2B) applications requires new approaches to building enterprise application infrastructures. In this field, the modeling of business domain semantics, focusing on the user’s perception of data in contrast to its physical representation, is gaining more and more importance. It is increasingly important to provide a sound mathematical foundation for modeling business domains, together with a well-defined way to map business domain semantics to XML structures. In our recent work we propose a semantic meta-model, built on set and algebra theory, intended to serve for the formal definition of operations and transformations and to prove the correctness and completeness of design methods. Based on the mathematical model, we propose an XML language to construct domain models and to formally express business domain semantics. The language not only allows structural schemas and static constraints to be expressed but also makes it possible to formulate dynamic business rules, which we consider critical for the quality of a business domain model and which is therefore a central focus of our work. In addition, we provide an XML syntax to encode domain instances, and we apply standardized XML technologies to formally verify the validity of domain instances with respect to their specifying domain models. With this paper we contribute to the field of formal software engineering by proposing a business domain modeling language based on XML and founded on a sound mathematical model. The expression of dynamic business rules and the application of XML technologies to formally verify the validity of domain instances and of entire domain models are the strengths of our approach.

Title:

ANALYSIS AND RE-ENGINEERING OF WEB SERVICES

Author(s):

Axel Martens

Abstract: To an increasing extent, software systems are integrated across the borders of individual enterprises. The Web Service approach provides a group of technologies to describe components and their composition, based on well-established protocols. Focused on business processes, one Web Service implements a local subprocess. A distributed business process is implemented by the composition of a set of communicating Web Services. At the moment, various modeling languages are under development to describe the internal structure of one Web Service (e.g. the Business Process Execution Language for Web Services) and the choreography of a set of Web Services (e.g. the Web Service Choreography Interface). Nevertheless, there is a need for methods for the stepwise construction and verification of such components. This paper abstracts from the concrete syntax of any proposed language definition. Instead, we apply Petri nets to model Web Services. Thus, we are able to reason about essential properties, e.g. the usability of a Web Service - our notion of a quality criterion. Based on this framework, we present an algorithm to analyze a given Web Service and to transfer a complex process model into an appropriate model of a Web Service.

Title:

BALANCING STAKEHOLDER’S PREFERENCES ON MEASURING COTS COMPONENT FUNCTIONAL SUITABILITY

Author(s):

Mario Piattini , Alejandra Cechich

Abstract: COTS (Commercial Off-The-Shelf) components can be incorporated into other systems to help software developers produce a new system, so that both artefacts – components and the system – form a single functional entity. In this way, developing software becomes a matter of balancing required and offered functionality between the parties. But required functionality is highly dependent on a component’s users, i.e. the stakeholders of a COTS component selection process. Inputs to this process include discussions with composers, reuse architects, business process coordinators, and so forth. In this paper, we present an approach for balancing stakeholders’ preferences, which can be used in the process of measuring the functional suitability of COTS candidates. We describe and illustrate the use of our proposal to weight component requirements and determine suitable COTS candidates for a given software system.

Title:

A POLYMORPHIC CONTEXT FRAME TO SUPPORT SCALABILITY AND EVOLVABILITY OF INFORMATION SYSTEM DEVELOPMENT PROCESSES

Author(s):

Isabelle MIRBEL

Abstract: Nowadays, there is an increasing need for flexible approaches that are adaptable to different kinds of Information System Development (ISD). Customization of ISD processes has mainly been thought of for the person in charge of building processes, i.e. the methodologist, in order to allow him/her to adapt the process to the needs of the company or its projects. But there is also a need for customizations dedicated to project team members (application engineers), to provide them with customized guidelines (or heuristics) to be followed while performing their daily tasks. We propose a knowledge capitalization framework to support the evolvability and customization of ISD processes. Reuse and customization are handled through process fragments stored in a dedicated repository. Our purpose is not to propose a new way to build processes, as several approaches already exist on this topic, but to ease the use of existing ones by making them less rigid and allowing their adaptation to the needs of the company, the project and, most of all, the project team member. Therefore, in addition to a repository of process fragments, we propose a scalable and polymorphic structure allowing methodologists to define a working space through a context made of criteria. Thanks to this context, project team members can better qualify their ISD problem in order to find a suitable solution. A solution is made of process fragments organized into a route-map specially built to answer the project team member's need and directly usable by him/her. Our framework supports ISD by reuse as well as ISD for reuse. From the methodologist's point of view, there is a need for a common framework for all the project team members working in the company, and for means to keep project team members within the boundaries of the company's development process. Such a structure should encourage project team members to focus on specific/critical aspects of the project they are involved in and the development process they use. It should help project team members to always take as much advantage as possible of the latest version of the development process chosen, adapted and used in the company. From the project team member's point of view, means have to be provided to help select the right fragments to solve his/her problems and to allow him/her to qualify his/her reusable elements of solution when adding them as new fragments in the repository. The context-frame we focus on in this paper answers these needs. It is a scalable structure which supports evolution and tailoring by the methodologists for the project team members' needs with regard to project and process features.

Title:

ON FIXPOINT SEMANTICS OF FUNCTIONAL PROGRAMS IN MONOTONIC MODELS OF TYPED λ-CALCULUS

Author(s):

Semyon Nigiyan

Abstract: The mathematical theory of programming languages based on typed λ-calculus, complete sets and monotonic mappings is considered. The fixpoint semantics (least solution) of functional programs in monotonic models of typed λ-calculus is investigated.

Title:

EVALUATION OF STRUCTURAL PROPERTIES FOR BUSINESS PROCESSES

Author(s):

Vladimír Modrák

Abstract: The article analyses the evaluation of processes designed on the principles of reengineering. Models of enterprise processes are usually designed on the principles of graph theory. The advantage of such models is that they do not require an exhaustive quantity of information about the modelled reality, but instead put emphasis on the structural properties of the modelled system. These structural properties form the subject of the analysis presented here, in the framework of which some important properties are evaluated.

Title:

METHOD-IN-ACTION AND METHOD-IN-TOOL: SOME IMPLICATIONS FOR CASE

Author(s):

Björn Lundell , Brian Lings

Abstract: Tool support for Information Systems development can be considered from many perspectives, and it is not surprising that different stakeholders perceive such tools very differently. This can contribute, on the one hand, to poor selection processes and ineffective deployment of CASE, and on the other, to inappropriate tool development. In this paper we consider the relationship between CASE tools and Information Systems development methods from three stakeholder perspectives: concept developer, Information Systems developer and product developer. These perspectives, and the tensions between them, are represented within a ‘stakeholder triangle’, which we use to consider how the concept of method-in-action affects and is affected by the concept of method-in-tool. We believe that the triangle helps when interpreting seemingly conflicting views about CASE adoption and development.

Title:

VIEW VISUALISATION FOR ENTERPRISE ARCHITECTURE

Author(s):

Maria-Eugenia Iacob , Diederik van Leeuwen

Abstract: In this paper we address the problem of the visualisation of enterprise architectures. To this purpose, a framework for the visualisation of architectural views and the design of a visualisation infrastructure are presented. Separation of concerns between storage, internal representation and presentation is the main requirement for setting up this framework, since it allows us to select the same content (models) and subsequently present it differently to different types of stakeholders. Our approach has resulted in an operational prototype that has been tested in a pilot case, also presented in what follows.

Title:

A TOOL INTEGRATION WORKBENCH FOR ENTERPRISE ARCHITECTURE

Author(s):

Diederik van Leeuwen , Hugo ter Doest , Marc Lankhorst

Abstract: Enterprise architecture incorporates the specification of relations between different domains, each speaking its own language and using its own tools. As a consequence, enterprise architecture calls for the integration of existing modelling tools. This integration has both technical and conceptual aspects. On a technical level, models in different formats managed by dedicated tools need to be related. On a conceptual level, models are expressed in different modelling languages or conceptual schemas, making the integration of these models complex. In this paper we present the design of a workbench for enterprise architecture that serves as a tool integration environment and a modelling tool at the same time: it supports both the technical integration of existing modelling tools and the conceptual integration of modelling schemas. The workbench is a viewpoint-driven environment that provides the means to bring together and elaborate upon existing heterogeneous content, as well as to break down existing content into more specific content managed by dedicated tools. This viewpoint-driven environment serves as a starting point for report generation for stakeholders more remote from the architecture design process. Moreover, the re-use of architectural assets is supported in a straightforward manner by a transparent disclosure of existing design artefacts in one integrated environment.

Title:

FORMALIZATION OF CLASS STRUCTURE EXTRACTION THROUGH LIFETIME ANALYSIS

Author(s):

Mikio Ohki

Abstract: For an analyst who tries to extract class structures from given requirements specifications for an application area with which he/she is not familiar, it is usually easier first to extract analysis elements, such as attributes, methods, and relationships, then to compose classes from those elements, than to extract entire classes at the same time. This paper demonstrates how to define the set of operations that can be used to derive lifetime-based class structures, provided that methods, including their identification names and lifetimes, can be extracted from given requirements specifications. The latter part of this paper describes an experiment that validates the defined operations by deriving typical design patterns, and also describes the differences between my approach and Pree's meta-pattern approach. Finally, it discusses the important role of lifetime analysis and an effective style of requirements specifications for object-oriented system development.

Title:

OO SYSTEMS DEVELOPMENT BARRIERS FOR STRUCTURAL DEVELOPERS

Author(s):

Aurona Gerber , Elsabe Cloete

Abstract: Paradigm contamination occurs where methods from different system development (SD) paradigms are integrated or combined. We investigate the OO and structural SD approaches and concern ourselves with the question of how paradigm contamination can be avoided, especially when developers were initially exposed to structural programming techniques and are now expected to apply an OO approach. By comparing the techniques associated with specific SD approaches, an outline is given of the particular differences and commonalities that regularly cause paradigm contamination. Guidelines for avoiding contamination traps are then provided. This is significant for practitioners as well as SD instructors, enabling them to be aware or make their students aware of the possible contamination pitfalls as well as how to avoid them, and as a result to reap the intended benefits of the chosen SD method.

Title:

P- MANAGER: THE IMPORTANCE OF AUTHORITY AND RESPONSIBILITY

Author(s):

Manuela Aparicio

Abstract: The use of tools to support management and coordination among workers has been the subject of considerable research effort in several fields of computer science and information systems. Here, however, we stress the importance of authority and responsibility as a way to achieve coordination. In this context, we propose a system to support the planning, organisation and control of operations. This system is also intended to be enhanced with functionalities supported by mobile and wireless technology.

Title:

INTERPRETING COLLABORATION DIAGRAMS BY USING DESCRIPTION LOGICS

Author(s):

Isamu Shioya , Takao Miura , Hiroyuki Nakanishi

Abstract: UML (Unified Modeling Language) is a de facto standard language for information system design and development. However, because of its ambiguity, we cannot apply intelligent operations such as model transformation or the examination of equivalence and redundancy as well as consistency. By using Description Logics, we can formalize UML, especially for validating model consistency and for reasoning that until now has been carried out by human beings. In this investigation, we put our focus on behavior over collaboration diagrams and propose how to describe and reason about them. With this approach we can co-evaluate collaboration diagrams and class diagrams.

Title:

DIFFICULT ISSUES IN DESIGNING ADAPTIVE OBJECT MODEL SYSTEMS

Author(s):

Jinmiao Li , Greg Butler , Yun Mai

Abstract: The adaptive object model enables a system to change its behavior at run-time without re-programming. It provides an extremely extensible architecture solution for large software systems. As a particular kind of reflective architecture, the core of the adaptive object model encapsulates changeable system properties and behaviors as meta-information. Changing the meta-information reflects changes in the domain. However, this approach leads to a more complex design compared to a traditional object-oriented design, and thus its implementation is difficult for developers. This paper provides a general design model that compiles techniques proposed by existing adaptive systems and models. The core of the design model is based on a layered architecture. The paper starts from a high-level view of the architecture. It then zooms in on the different components. Major issues in designing the various components are fully discussed. General design solutions are elicited as a result of the discussions.

Title:

REVERSING THE TREND OF COMMODITIZATION: A CASE STUDY OF THE STRATEGIC PLANNING AND MANAGEMENT OF A CALL CENTER

Author(s):

Robert  Galliers , Sue Newell , Brad  Poulson , Jimmy Huang

Abstract: The paper challenges the prevalent paradigm that differentiates between the management of a core competence and commodity processes. A case study is conducted to examine the strategic planning and management of a call center to illustrate that a commodity process, such as handling customers’ complaints and enquiries, can be transformed into a core competence, if a clear strategic intent is articulated and adequate management approaches are followed. Findings derived from this study suggest that a call center can provide substantial added value to the business and be managed differently through devising an appropriate intellectual capital management approach.

Title:

A PROTOTYPE TOOL FOR USE CASE REFACTORING

Author(s):

Gregory Butler

Abstract: Use cases are widely used in software engineering. It is important to improve the understandability and maintainability of use case models. We propose the approach of refactoring use case models. This paper describes a prototype tool for the refactoring process. We introduce the use case metamodel and its XML document type definition (DTD) used in the tool. Based on the Drawlets framework, we implement the functionality for drawing and viewing use case models. This is the basis for our refactoring framework, which implements some (but not yet all) of our use case refactorings. Our experience shows that the tool greatly facilitates the process of reorganizing use case models.

Title:

REASONS FOR INTEGRATING SOFTWARE COMPONENT SPECIFICATIONS IN BUSINESS PROCESS MODELS

Author(s):

Benneth Christiansson , Marie-Therese Christiansson

Abstract: Organisations, business processes and co-workers are in a ”never-ending change-mode”. It is therefore unrealistic to expect any definitive requirements for computer-based information systems. In this paper we argue for the need to bridge the gap between business process modelling and software component specification. By using a core business process model that integrates both essential knowledge concerning business processes and their possible improvements, and software component requirements in the form of software component specifications, IS professionals should be able to judge the potential, development and management of component-based information systems. This implies the need for an “informal” software component specification that is grounded in business processes and created for the people who are best suited to model requirements, i.e. the people who run and perform the business. This “close to business” specification can be expressed at a high level and in an informal manner, since it does not have to serve as software development requirements: the software component already exists, and the difficulty lies in its identification and acquisition. With an integrated core business process model we can perform modelling more effectively, achieve more benefits, and also use the model as a foundation for software component acquisition. We can focus on business actions and their constant changes and at the same time identify the corresponding changes in software requirements.

Title:

A HOLISTIC INFORMATION SYSTEMS STRATEGY FOR ORGANISATIONAL MANAGEMENT (HISSOM), APPLIED TO EUROPE’S LARGEST BANCASSURER

Author(s):

David Lanc , Lachlan MacKinnon

Abstract: The importance of Information Systems Strategy, ISS, has become critical, as we have moved into an era of ever-faster information response systems, which have moved the emphasis from information providers to information users. Co-ordinated, integrated ISS and organisational strategy are necessary to ensure success. Ironically, innovative IS without an integrated organisational strategy to leverage its benefits is likely to prove suboptimal, as witnessed by countless “dot-com” failures. HISSOM is a practical, holistic model for the integrated management and co-ordination of ISS as part of an organisation’s strategic planning and management process. HISSOM assesses an organisation’s IS capability, and focuses the development of that capability on supporting more effectively the achievement of organisational goals. HISSOM is applied to the first electronic commerce, e-commerce, enabled strategy implementation programme for Europe’s largest bancassurer. The impact of the new organisation’s corporate strategy on its IS capability, and the impact of IS-driven initiatives on the corporate strategy, are highlighted. The differing HISSOM perspectives, of external stakeholders, executive management, business management, IS management responsible for IS strategy, the wider IS function, and their roles in creating the resulting strategy, are also described. Themes describing the applicability of HISSOM and its relative success are provided.

Title:

DETECTING INTELLIGENT COORDINATED ATTACKS ON ENTERPRISE SYSTEMS

Author(s):

Sviatoslav Braynov

Abstract: The problem we raise in this paper is how to detect malicious cooperation. A formal model of cooperation detection is defined and two detection algorithms are discussed. The model is domain independent and could be applied to many problems and in many domains. For example, it could be applied in computer security to detect coordinated attacks, in e-commerce to detect deviations from market mechanisms, in distributed systems to detect deviations from distributed protocols, in security for identifying and detecting terrorist groups, to name just a few. The paper also raises the problem of detecting not only the actual executers of an attack, but also their assistants who organized, prepared, and made the attack possible, without taking active part in it. A solution to the problem is proposed and discussed.

Title:

SPECIFYING INFORMATION SYSTEM ARCHITECTURES WITH DASIBAO

Author(s):

Philippe BEDU , Anne PICAULT , Bruno TRAVERSON , Jean PERRIN , Juliette LE-DELLIOU

Abstract: If companies want to be competitive, they undoubtedly have to manage IS evolution and IS architecture. EDF, the French state utility, has developed its own architecture method called DASIBAO. DASIBAO is based on two standards: the OMG's MDA and ISO's RM-ODP. DASIBAO provides guidelines for architecture design, from capturing user needs to system implementation. DASIBAO's progressive steps help to choose between architecture scenarios and to keep track of these choices. This traceability makes it possible to assess the impacts of any IS evolution and to limit them to the bare minimum. This article presents the use of DASIBAO through an example related to customer relationships. DASIBAO has been applied at EDF in various projects and is now beginning to be used on a large scale.

Title:

FEATURE MATCHING IN MODEL-BASED SOFTWARE ENGINEERING

Author(s):

Alar Raabe

Abstract: There is a growing need to reduce the cycle of business information systems development and make it independent of underlying technologies. Model-driven synthesis of software offers solutions to these problems. This article describes a method for synthesizing business software implementations from technology-independent business models. The synthesis of a business software implementation is based on establishing a common feature space for the problem and solution domains, and it is performed in two steps. In the first step, a solution domain and a software architecture style are selected by matching the explicitly required features of a given software system, and the implicitly required features of a given problem domain, to the features provided by the solution domain and architectural style. In the second step, all the elements of a given business analysis model are transformed into elements or configurations in the selected solution domain according to the selected architectural style, by matching their required features to the features provided by the elements and configurations of the selected solution domain. In both steps it is possible to define cost functions for selecting between different alternatives that provide the same features. The distinctive features of our method are the separate step of solution domain analysis during the software process, which produces a feature model of the solution domain, and the use of a common feature space to select the solution domain, the architectural style and specific implementations.

Title:

THE BUSINESS RULE-TRANSFORMATION APPROACH

Author(s):

Ales Groznik , Andrej Kovacic

Abstract: The main goal of the paper is to present the business process renovation as the key element of e-business orientation and the highest level of strategy for managing change that commonly cannot be handled by continuous improvement and reengineering methods or organizational restructuring. The paper introduces a business rule-transformation approach to business renovation. Its motivation is to help establish an environment and approach in which business rules can be traced from their origin in the business environment through to their implementation in information systems.

Title:

EXPERIENCING AUML IN THE GAIA METHODOLOGY

Author(s):

Luca Cernuzzi , Franco Zambonelli

Abstract: In the last few years a great number of AOSE methodologies have been proposed, trying to model specific agent architectures, extending accepted techniques and methods from the traditional OO engineering paradigm, or centering on organizational aspects to better capture the behavior of agent societies. The last category may be considered very useful for modeling open systems composed of a great number of interacting autonomous agents. Gaia exploits organizational abstractions to provide clear guidelines for the analysis and design of complex and open Multi-Agent Systems (MAS). However, Gaia's notation is probably less powerful (and perhaps less acceptable for industry solutions) than others (like AUML). In this sense, the present work aims to analyze the application of AUML within the Gaia methodology. This paper explores the above issues, using an application example and paying specific attention to the problem of modeling the complexity of open MAS and emergent behaviors.

Title:

A MANUFACTURING SYSTEMS ARCHITECTURE FOR THE 21ST CENTURY

Author(s):

Kweku-Muata  Osei-Bryson , Delvin Grant

Abstract: The nature of the 21st century marketplace places many challenges on the modern manufacturing organization that require it to display increased levels of agility supported by an effective information systems infrastructure. The objective of this paper is to propose an Information Systems Architecture that meets the challenges of modern manufacturing organizations. Given the organizational context of the modern manufacturing organization, we suggest that for the manufacturing information systems architecture (MISA) to be successful it has to satisfy six objectives: support the Value Chain Activities; support the interactions among the five Interacting Organizational Variables (i.e. Task, Communication, Technology, People, Structure); effectively deal with Industry Factors and Forces; integrate the organization internally and with its environment; address other Enterprise Engineering issues; and support Infrastructure Capabilities. We present a MISA that satisfies these six objectives.

Title:

SECURITY XML WEB SERVICES

Author(s):

Luminita Vasiu , Cristian  Donciulescu

Abstract: Recently, Web services have been emerging as a dominant application in the computing world. The Web has evolved into an active medium for providers and consumers of services. One of the major problems of Web services is security. The paper describes a comprehensive Web Services Management Architecture that supports, integrates and unifies several security models, mechanisms and technologies in a way that enables a variety of systems to interoperate securely in a platform-independent manner.

Title:

INTEGRATING A PATTERN CATALOGUE IN A BUSINESS PROCESS MODEL

Author(s):

Lucinéia Heloisa Thom , Cirano Iochpe

Abstract: Since an organization has its origin in the Business Processes (BP) it performs, its structural aspects are present in those processes. Modern organizations demand the automation of their BP, due to their high complexity and the need for more efficiency in their execution. Within this context, workflow technology has proven very effective, mainly in BP automation. In Workflow Systems, a BP is automated through a Workflow Process (WP) that uses a WP model to represent all the singularities of the BP needed for its automation. There are several models for BP modeling; however, none of them relates WPs to the structural aspects of organizations. This fact may compromise the accuracy and efficiency of the workflow project, since such a WP may not reflect the reality of the BP carried out in the organization. The present paper proposes a Transactional Model of Business Processes (TMBP), an extension of the Transactional Model of Workflow Processes (TMWP) proposed in the context of the Workflow on Intelligent Distributed database Environment (WIDE). The TMBP mainly includes elements such as a Pattern Catalogue that make it possible to create business sub-processes (BSP) from the reuse of BSP patterns based on structural aspects.

Title:

THE POWER OF USING ARCHITECTURE DESCRIPTIONS IN EXTENDED ENTERPRISE INTEGRATION

Author(s):

Raf Haesen , Monique Snoeck , Jacques Vandenbulcke , Manu De Backer , Wilfried Lemahieu , Frank Goethals

Abstract: In the very complex world of Business-to-Business integration (B2Bi), companies should try to come to a manageable integration solution. Integration is considered to be not merely an IT issue, but also a business problem. This paper draws attention to the communication problems companies are confronted with when integrating their systems. To overcome these problems we propose the use of enterprise architecture descriptions when developing B2B systems. Therefore we give a bird’s-eye view of what enterprise architecture descriptions look like in the context of the Extended Enterprise, as well as the compelling advantages that can be gained from using such descriptions in integration exercises. This paper is not a how-to guide for Extended Enterprise Architecture but is meant to show the importance of architecture descriptions and communication in this realm, something that is regrettably neglected.

Title:

TOWARDS PATTERN MANAGEMENT SYSTEM

Author(s):

Erki Eessaar

Abstract: Patterns allow knowledge to be divided into manageable pieces. In order to use patterns effectively, it is necessary to develop a Pattern Management System (PMS). In this article, the components of a PMS and its database are proposed and a classification scheme of patterns is presented. The PMS uses metadata about pattern types to guide the management of patterns. Textual descriptions of patterns and the models that specify a pattern are stored in the pattern database.

Title:

EVALUATION OF RECOMMENDER SYSTEMS THROUGH SIMULATED USERS

Author(s):

Miquel Montaner , Beatriz López , Josep Lluís de la Rosa

Abstract: Recommender systems have proved really useful for handling the information overload on the Internet. However, it is very difficult to evaluate such personalised systems, since this involves purely subjective assessments. Actually, only very few recommender systems developed over the Internet evaluate and discuss their results scientifically. The contribution of this paper is a methodology for evaluating recommender systems: the "profile discovering procedure". Based on a list of item evaluations previously provided by a real user, this methodology simulates the recommendation process of a recommender system over time. Besides, two extensions of this methodology have been designed, one to perform cross-validations and another to simulate collaboration among users. At the end of the simulations, the desired evaluation measures (precision and recall, among others) are presented to the user. This methodology and its extensions have been successfully used in the evaluation of different parameters and algorithms of a restaurant recommender system.

Title:

INTEGRATING PROCESS- AND OBJECT-APPROACHES: AN ONTOLOGICAL IMPERATIVE

Author(s):

Shivraj Kanungo

Abstract: There is an emerging, virtually unanimous belief that the object-oriented paradigm is superior to the classical (structured) paradigm. We do not accept such unqualified judgments. In this paper, we address the differences from the ontological perspective. We adopt a discursive approach to analysing and discussing the differences, similarities and resolution approaches. We accept the position that object-oriented programming is here to stay and is one of the legitimate silver bullets. However, the gap between object-oriented approaches and traditional approaches diminishes significantly as we move up the system development lifecycle. Once we contrast the two approaches, we explain how the consumer of the approach perceives its utility. By employing this approach, we highlight the end-user and developer perspectives. We conclude the paper by restoring some perspective on the supposedly uncontested superiority of the object paradigm over the classical paradigm. Lastly, we highlight research and pedagogical issues regarding the contemporary treatment of structured and object-oriented approaches.

Title:

DIMANAGER: A TOOL FOR DISTRIBUTED SOFTWARE DEVELOPMENT MANAGEMENT

Author(s):

Tania Tait , Gabriel Santiago , Maria Edith Pedras , Elisa Huzita

Abstract: In a competitive world, it is very important to have tools that offer adequate support to the project manager with information about the development process. At the moment, however, there is no complete environment offering adequate resources to develop distributed software, integrating both the technical aspects related to software development and the management aspects. It is therefore worthwhile to develop tools that offer adequate support to the project manager with technical and managerial information. This paper presents DIMANAGER, a tool to manage distributed software development, including planning and monitoring aspects. It is part of the DiSEN environment and offers adequate technical and managerial information for the project manager. This information can be used to make decisions about how resources can best be used and about the actions that need to be taken to obtain quality software.

Title:

A COMPARATIVE ANALYSIS OF STATE-BASED AND BEHAVIOURAL APPROACHES TO CHECKING LOGICAL CONSISTENCY IN UML

Author(s):

W. Lok Yeung

Abstract: An association between two classes implies some interaction between objects of the two classes. An association is often adorned with multiplicities and other constraints. In this paper, such adornments are considered as constraints upon the behaviour of objects of the associated classes, and checking that they satisfy these constraints is a consistency problem that can be addressed by formal means. Two formal approaches, namely, the state-based and behavioural approaches, to this consistency problem are compared and discussed.

Title:

A FORMAL APPROACH TO ENTERPRISE MODELING

Author(s):

Yoshiyuki Shinkawa

Abstract: Model driven development for software systems provides us with many advantages in quality, productivity, or reusability. For accurate modeling, we have to create many kinds of models from various viewpoints. When applying model driven development to enterprise information systems, those viewpoints include not only software oriented matters but also business oriented matters. Such complexity in modeling often causes inconsistency between models. This paper presents a formal and systematic way to create consistent and integrated enterprise models that reflect those various viewpoints. Set theory, Colored Petri Nets (CPNs), and Unified Modeling Language (UML) are used for this formalism. In addition, the paper proposes a set theoretic approach to evaluating consistency between enterprise models. The consistency is discussed in traditional hierarchical organization and modern matrix organization.

Title:

META DATA FRAMEWORK FOR ENTERPRISE INFORMATION SYSTEMS SPECIFICATION

Author(s):

Elizabeth Chang , Andrew Tierney , Jon Davis

Abstract: This paper reviews the nature of generic computational modelling and proposes a process for implementing a meta-data approach to defining a platform-independent operational computer system application. It identifies Enterprise Information System (EIS) type systems as ideal candidates for implementation using this meta-data, based on the simplification opportunities afforded by the typically visual and transactional component bias of EIS systems. It describes an architecture for the development of a suitable meta-data based application generator system. This development could lead to new, accelerated EIS development methodologies in business modelling, analysis, design, system deployment and global information exchange.

Title:

THE DELEGATION PROBLEM AND PRACTICAL PKI-BASED SOLUTIONS

Author(s):

Venus L.S. Cheung , Lucas C.K. Hui , S.M. Yiu

Abstract: Delegation is a process where a person (called the delegator) grants or authorizes all or some of his/her power to another person (called the delegate), to work on his/her behalf. In an office, it is a common practice for officers to delegate their power to subordinates when they need assistance or they are on leave. In a paper-based environment, delegation can be achieved easily. However, in a digital environment (e.g. a secure enterprise information system with confidential electronic documents), how delegation can be handled properly is still an open question. In this paper, we address the delegation problem in the context of a secure information system, lay down a set of requirements from the users' point of view and propose several practical PKI-based schemes to solve the problem. Analysis on the proposed schemes concludes that Proxy Memo can solve the problem quite efficiently while reducing the key management problem.

Title:

IMPROVING JOB SHOP OPERATIONS BY PERSUADING SUPPLIERS: A SIMULATION APPROACH

Author(s):

Jorge Luis Navarro García , Raúl Morales Salcedo

Abstract: This paper presents the use of simulation as a modeling tool to demonstrate that a substantial, often ignored factor can play an essential role in improving job shop operations. This factor is persuading vendors to take part in a collaborative effort to increase the number of early deliveries to final customers. Typical performance indexes of a job shop, such as delivery rate, manufacturing costs, holding costs, reordering costs, and penalty costs, were considered to evaluate the effect of company-focused improvement scenarios, and later to show how the simulation results can be used to persuade vendors to develop their own improvement programs.

Title:

MATCHING ERP FUNCTIONALITIES WITH THE LOGISTIC REQUIREMENTS OF FRENCH RAILWAYS: A SIMILARITY APPROACH

Author(s):

Camille Salinesi , Iyad Zoukar

Abstract: Ensuring the adequacy of ERP implementations to business requirements is still an issue that needs to be addressed if ERPs are to provide the advantages organisations expect. One important cause of inadequacy is the lack of attention paid to precise and systematic analysis of how well ERP functionalities match the business requirements. The reason for this is twofold: on the one hand, the language used to define ERPs is different from the one used to define business requirements, so there is a language barrier to overcome first. On the other hand, no technique is available so far to systematically evaluate similarities between ERP functionality models and business requirements models. Our approach to this issue is (i) to materialise both the ERP functionalities and the business requirements with a unified goal/strategy modelling language, and (ii) to systematically specify, using a similarity model, how a given ERP functionality model and a business requirement model expressed in our goal/strategy language match. Based on these techniques, we have developed at SNCF a matching method that helps elicit ERP implementation requirements. This paper outlines this matching method, explains how the similarity model was developed in a systematic way, and reports on its application in a project undertaken at SNCF to implement the PeopleSoft ERP to support the supply chain process.

Title:

MEASURING REQUIREMENT EVOLUTION - A CASE STUDY IN THE E-COMMERCE DOMAIN

Author(s):

Päivi Ovaska , Petteri  Johansson

Abstract: Although changing requirements are a widely recognized phenomenon, only a few approaches are available to measure requirement evolution. These existing approaches assume that all requirements exist and can be seen in the requirement elicitation and analysis phases. They do not anticipate that new requirements can emerge during software development that cannot be seen during requirement elicitation and analysis. This paper introduces a new quantitative metric for measuring requirement evolution, called Conceptual Creep (CC). The CC metric measures those requirements that do not exist at the beginning of software development but emerge during it through organizational and social processes. We use a case study in the e-commerce domain to introduce the use of this metric for software development assessment. The results of our case study suggest that the Conceptual Creep metric can be valuable for measuring requirement evolution and helping organizations understand the phenomenon. By measuring requirement evolution, organizations can be better prepared for it when estimating project risks and timetables.

Title:

UML VS. IDEF: AN ONTOLOGY-ORIENTED COMPARATIVE STUDY IN VIEW OF BUSINESS MODELLING

Author(s):

Ovidiu Noran

Abstract: The UML and IDEF sets of languages characterize typical modelling approaches of software engineering and computer integrated manufacturing, respectively. This paper presents a comparative analysis of these languages based on their ontologies and in view of their use in business modelling. A brief introduction to UML and IDEF is followed by a high-level comparison taking into account underlying paradigms and language structure. This is followed by a comparative assessment of the expressive power of the two groups of languages, based on the ontologies of their relevant components. The analysis is structured using a set of views deemed appropriate for the modelling domain (i.e. business). The key findings of this paper aim to provide an insight into the suitability of UML 'versus' that of IDEF in business modelling.

Title:

MODELLING THE DYNAMIC RELATIONSHIPS BETWEEN WORKFLOW COMPONENTS

Author(s):

Elizabeth Chang , Leo Pudhota

Abstract: Whether the economy is strong or weak, competition is fierce. Changes come faster in this rough business environment [32], causing business process models to become more dynamic and complex. Nevertheless, these processes have to be managed so that their efficiency is maximised. Workflow modelling offers methods and techniques to achieve this and allows a company to deliver on its promises. As the workflow paradigm continues to infiltrate organisations that need to cope with complex and growing business operations, the workflow system will become a fundamental building block. Therefore, workflow modelling tools and methods are of the utmost importance for the leaders of a company in the design and re-design of administrative, operational and management processes and in the development of systems to support these processes. This paper develops a modelling approach for dynamic business processes that defines exceptions and enables business strategies to be captured rigorously while simultaneously allowing changes to be handled. Often business processes are composed of several parts, a structured operational part and an unstructured operational part, or they may be composed of semi-structured parts with some given and some unknown details. This situation raises problems in workflow design and workflow systems development. One of the problems with current workflow systems is that they cannot deal with unpredictable situations and changes. Unpredictable situations may occur as a result of changes by management. The inability to deal with such changes greatly limits the applicability of workflow systems in real industrial and commercial operations. Workflow modelling methods are needed that model business processes efficiently while allowing business users to maintain control and flexibility when dealing with changes, and allowing the systems to remain flexible.

Title:

LEGACY MIGRATION AS PLANNED ORGANIZATIONAL CHANGE

Author(s):

Panagiotis Kanellis , Teta Stamati , Drakoulis Martakos , Konstantina Stamati

Abstract: Traditionally, legacy migration has been viewed as the simple replacement of aged or problematic hardware and software, including the applications, interfaces and databases that compose an information system infrastructure. Our position is that this view is outdated and at best myopic, given that the role of technology is no longer merely supportive but today pervades every aspect of the way enterprises conduct their business. Migration should therefore be approached as a planned change process that first and foremost requires an understanding and an approach covering the full range of issues and organisational entities involved. In this context, this paper presents such a structured approach, one that defines the landscape, deals with the semantics of legacy migration and can be applied by organisations that recognise the need to manage the process in a controlled rather than a piecemeal and ad hoc fashion.

Title:

CONSTRUCTIVE RESEARCH AS AN IS RESEARCH APPROACH

Author(s):

Timo Lainema

Abstract: We first discuss how constructive research relates to Information Systems science. We then introduce some views on constructive research and define the constructive research process. This process is closely related to action research: it includes a phase with action research characteristics. As the last topic, we discuss how to validate the results of constructive research. We conclude that the validation of constructive research, in particular, still needs to be studied.

Title:

ACTIVITY CREDITING IN DISTRIBUTED WORKFLOW ENVIRONMENTS

Author(s):

Eric Browne

Abstract: Workflow Management Systems (WfMSs) are increasingly being introduced to deal with cooperative inter-organisational business processes. There are many situations in these distributed workflow environments where, for a given business process, activities undertaken in one enterprise might overlap with, or repeat, activities undertaken elsewhere. This paper examines such situations in the context of healthcare, where duplicated tests and procedures are costly and can have negative health impacts on patients undergoing unnecessary tests and interventions. Our approach is based on a two-tier goal/process representation of business processes and an execution model comprising a candidate discovery phase followed by a component crediting phase. We introduce the notions of full vs. partial crediting and goal-level vs. activity-level crediting, and examine the role that temporal constraints play in determining candidate components for crediting.

Title:

PROCESS MODELLING - BURDEN OR RELIEF? LIVING PROCESS MODELLING WITHIN A PUBLIC ORGANISATION

Author(s):

Silke Palkovits , Maria Wimmer , Thomas Roessler

Abstract: Process modelling and process reorganisation are key criteria for the successful implementation of e-government. Until recently, e-government had a rather technical dimension. Nowadays, it is recognised that e-government is multi-faceted and requires a holistic approach. However, the questions of ‘how can the concept of business process modelling (BPM) be applied successfully’ and ‘what is the added value of managing an authority’s processes’ often cannot be answered immediately and directly, due to the complexity of the topic. Many public authorities therefore shy away from thinking in a comprehensive way and instead continue to focus on single issues, because these are simpler to understand and easier to manage. The aims of this paper are to create awareness of the added value of integrated business process modelling, to introduce a holistic concept for the analysis, re-organisation and modelling of government processes, and to propose a tailor-made methodology for describing these processes. The authors go deeply into the topic of process management with the specific requirements of public authorities, including legal as well as security aspects. Reading through this contribution, the reader should easily recognise the added value of BPM for public administrations and that the management of processes within public administration is a relief and not a burden.

Title:

THE PROFILES OF PROJECTS SUPPLIED BY A FULL-SCALE ICT-SERVICES PROVIDER

Author(s):

Ari P. Hirvonen , Jarmo J Ahonen , Mirja Pulkkinen

Abstract: The role of modern ICT-services providers has changed from that of easily defined system engineers to full-scale vendors. The types, or profiles, of projects delivered by such vendors are not very well understood. In this article, a company-specific analysis of a set of projects is presented. The analysis clarifies the types of projects and discusses the features of those types. Five distinctive profiles of projects were found, excluding maintenance. Some of the profiles were unexpected and seem to reflect the changing nature of the system engineering market. This change shows that revisions to existing methodologies, as well as new methodologies, are required in order to offer proper methodological support for modern full-scale ICT-services providers.

Title:

PERFORMANCE IMPROVEMENT BY WORKFLOW MANAGEMENT SYSTEMS: PRELIMINARY RESULTS FROM AN EMPIRICAL STUDY

Author(s):

Hajo Reijers

Abstract: Workflow Management (WfM) systems have acquired a respectable place in the market of enterprise information systems. Although it is clear that implementation of a WfM system may shorten process execution and increase efficiency, little is known about the extent of these effects on business process performance. In this paper, we report on a running longitudinal multi-case study into the quantitative effects of WfM systems on logistic parameters such as lead time and service time. We conclude that in most cases significant decreases of lead time and service time will take place for the cases under consideration. In the presentation of our research outline, we show how we use process simulation for the validation of our measurements, the prediction of performance improvement, and the comparison of the pre- and post-implementation situation. As a side effect of this study, we present some interesting characteristics of actual business processes and the way WfM systems are implemented in practice.

Title:

EXPLICIT CONCEPTUALIZATIONS FOR KNOWLEDGE MAPPING

Author(s):

Willem-Olaf Huijsen , Jan Jacobs , Samuel Driessen

Abstract: Knowledge mapping supports members of an organization in finding knowledge available within the organization, and in developing insights into corporate expertise. An essential prerequisite is an explicit conceptualization of the subject domain to enable the classification of knowledge resources. Many tools exist to create explicit conceptualizations. This paper establishes a set of requirements for conceptualization tools from the perspective of knowledge mapping. Next, a number of tools are reviewed: thesauri, ontologies, and semantic networks. The reader is guided in the choice through a comparison of the tools using the following criteria: complexity, the amount of effort required for building and maintenance, and the degree to which the conceptualization can be integrated into the overall knowledge mapping system. Recommendations are given on the use of conceptualization tools for knowledge mapping.

Title:

EMPLOYING THE C2C PRINCIPLE FOR MAKING DATA SERVICES USED FROM MOBILE PHONES MORE ATTRACTIVE

Author(s):

Hans Weghorn

Abstract: At the moment, the acceptance of data services accessed from mobile phones appears much lower than was estimated when these services were introduced several years ago. This delay in the development of a new market segment can be explained by various issues that prevent customers from using these services broadly. The main concerns are the extremely high costs of data transfer over wireless telephony networks, and the poor ergonomics of the software tools and implementations in terms of user handling. Here, an approach based on a customer-to-customer (C2C) model is discussed, which has the capability to overcome the main concerns and is therefore expected to make wireless services more attractive to the average customer.

Title:

A METHODOLOGY FOR INTEGRATING NEW SCIENTIFIC DOMAINS AND APPLICATIONS IN A VIRTUAL LABORATORY ENVIRONMENT

Author(s):

Louis O. Hertzberger , Hamideh Afsarmanesh , Ersin C. Kaletas

Abstract: The emergence of advanced, complex experiments in the experimental sciences has changed the way experimentation is carried out. Several solutions have been proposed to support scientists with their complex experiments, ranging from simple data portals to virtual laboratories. These solutions offer a variety of facilities to scientists, such as management of experiments and experiment-related information, and management of resources. However, issues related to adding new types of experiments to the proposed support environments remain untouched, causing inefficient utilization of effort and inadequate transfer of expertise. The main topic of this paper is therefore a methodology for integrating new scientific domains and applications in a multi-disciplinary virtual laboratory environment. To place the methodology in the right context, the paper also presents an experiment model that uniformly represents scientific experiments, data models for modelling experiment-related information, and mechanisms for the management of this information.

Title:

THE DEGREE OF DIGITALIZATION OF THE INFORMATION OVER-FLOW

Author(s):

Pasi Tyrväinen , Turo Kilpeläinen

Abstract: The degree of digitalization in organizations has increased remarkably. This trend will continue if the so-called natural laws of information technology hold true. At the same time, the format of communicated information has shifted from traditional face-to-face and analogue communication to digital forms of communication, such as digital documents. On the one hand, because information is increasingly available in digital form, the duplication and forwarding of email messages and attachments, for example, becomes easier, which may easily lead to information overflow. On the other hand, digitalization increases productivity, improves quality and reduces costs. As the human ability to absorb information has not developed at the same pace as information and communication technology (ICT), it is interesting to see whether the degree of digital communication correlates with the total amount of communication in an organization. In this paper, we tested this hypothesis in an industrial organization, using a genre-based measurement method to gather data on communication flows. The results show that a correlation between the degree of digital communication and the total amount of communication can be observed to some degree.

Title:

A SOFTWARE REENGINEERING METHOD USING TRANSFORMATIONS AND COMPONENTS

Author(s):

Darley Rosa Peres , Raphael Marcilio de Souza Neto , Valdirene Fontanette , Vinicius Cardoso Garcia , Adriano Aleixo Bossonaro , Antonio Francisco do Prado , Joao Luis Cardoso de Moraes

Abstract: This article presents a Software Reengineering Method using Transformations and Components to reconstruct legacy systems. The proposed method extends the Software Reengineering using Transformation (SRT) method, adding resources to handle component-based reengineering. The extension aims to support the construction and reuse of software components in the reengineering of legacy systems. The method is supported by two tools: a software transformation system named Draco-PUC, and a CASE tool named MVCASE.

Title:

APPLYING ONTOLOGIES IN THE KNOWLEDGE DISCOVERY IN GEOGRAPHIC DATABASES

Author(s):

Guillermo Hess , Cirano Iochpe

Abstract: This article proposes a software architecture for integrating the conceptual models of geographic databases. The goal is to support the preprocessing phase of knowledge discovery in databases, using geographic database conceptual schemas as input data, in order to obtain candidate analysis patterns. Semantic unification is very important in this process, since data mining tools are not capable of recognizing synonyms or of distinguishing between homonyms in, for example, class and attribute names. Accordingly, the first step was a study of the different techniques of knowledge organization. A set of criteria was then used to choose one of them, after which a methodology for referencing and updating the knowledge base was developed.

Title:

ENHANCING COLLABORATION IN BUSINESS PROCESS MODELLING

Author(s):

Nikos Karacapilidis , Emmanuel Adamides

Abstract: Business process modelling is widely considered as the most critical task in the development of enterprise information systems that address the actual needs of a company. As business processes cross functional and sometimes company boundaries, the coordinated inclusion of diverse perspectives and knowledge sources is necessary. Towards this end, this paper presents an information systems framework that aims at the exploitation of personalised knowledge through a structured process of collaborative and argumentative business process model construction. By integrating an argumentation system that is specific to business process modelling with a discrete-event modelling simulation tool, we provide the appropriate infrastructure to increase the productivity and effectiveness of process design and re-engineering efforts. The paper presents the design rationale, the structure and the functionality of the proposed framework through a comprehensive example of collaborative work towards building a model of a typical business process in a manufacturing company.

Title:

IMPLEMENTING A NEW SOFTWARE PROCESS: CASE STUDY

Author(s):

Elenita Nascimento , Luis Soeiro

Abstract: We present a case study in which a new software process for the Brazilian Senate software plant was developed and implemented. Some software practices were already in place; however, they were not working as they should have. Worse, the main software project had already consumed a lot of resources and was behind schedule. The new process is based on well-known and proven Software Engineering best practices and techniques adapted to the IT department. The new process was validated by deploying it in a pilot project encompassing a real part of the ongoing software project.

Title:

TOWARDS A BUSINESS PROCESS FORMALISATION BASED ON AN ARCHITECTURE CENTRED APPROACH

Author(s):

Lionel Blanc dit Jolicoeur , Fabien Leymonerie

Abstract: Nowadays, enterprises need to control their business processes and to manage more and more information. EAI - Enterprise Application Integration - solutions offer a partial response to these requirements. However, the lack of formalisation that characterises such solutions limits reuse and the verification of properties. This paper claims that business processes have to be formally defined using a formalism that provides certain features (representation of several abstraction levels, domain-specific concepts, property expression and preservation, etc.), and proposes the use of an ADL - Architecture Description Language - as that formalism. A case study illustrates our proposition.

Title:

FROM ONTOLOGY CHARTS TO CLASS DIAGRAMS: SEMANTIC ANALYSIS AIDING SYSTEMS DESIGN

Author(s):

Cecilia  Baranauskas , Kecheng Liu , Rodrigo Bonacin

Abstract: Despite the broad adoption of the Object-Oriented paradigm of software development and the usefulness of the Unified Modelling Language, there are still aspects of business modelling that are not well captured and represented. Previous literature in Organisational Semiotics has shown that its methods can facilitate a converging process for reaching a semantic representation that delivers an agreed business model. In this paper, we define a process for informing UML class diagrams with the results of Semantic Analysis. We provide a group of heuristic rules to aid the construction of a preliminary class diagram from an ontology chart.

Title:

TOWARDS A META MODEL FOR DESCRIBING COMMUNICATION: HOW TO ADDRESS INTEROPERABILITY ON A PRAGMATIC LEVEL

Author(s):

Boriana Rukanova , Kees van Slooten , Robert A. Stegwee

Abstract: Developments in ICT have led companies to strive to make parts of their business transactions electronic, and have raised again the issue of interoperability. Although interoperability between computer systems has been widely addressed in the literature, the concept of interoperability between organizations is still largely unexplored. Standards are claimed to help achieve interoperability. However, experience with the implementation of EDI standards shows that many EDI implementation projects led to technical solutions with unclear business benefits. New standards are currently being developed; however, their implementation can again lead to purely technical solutions if the social context is not taken sufficiently into account. In this paper, we address the problem of how to identify interoperability problems on a pragmatic level that can occur between organizations wanting to carry out business transactions electronically. We also point out that, in order to identify interoperability problems on a pragmatic level, it is necessary to capture the communication requirements of the business parties and to evaluate to what extent a standard is capable of meeting these requirements. To perform that evaluation, we develop a meta model for describing communication. The meta model is based on speech act theory and the theory of communicative action. The use of the meta model to identify interoperability problems on a pragmatic level is illustrated with an example.

Title:

ORGANISATIONAL SEMIOTICS EMBEDDED IN A SYSTEM DEVELOPMENT CYCLE: A CASE STUDY IN A BUSINESS ORGANISATION

Author(s):

Carlos Alberto Cocozza Simoni

Abstract: In the search for competitiveness and excellence in quality, we have observed that companies have expanded their processes, generating new demands for the Information Technology (IT) area and involving knowledge that goes beyond software development itself: understanding the organisation, its processes and its business as a whole. Regarding software development methods, we have perceived significant advances in the technical aspects, but little evolution in relation to domain analysis, which can have serious consequences for the applications and for the company. As a contribution in this direction, we have been developing research involving the use of Organisational Semiotics in real development contexts. The case study discussed in this paper is part of this research and involves introducing this theoretical basis into a business organisation for evaluation in real work situations.

Title:

INTEGRATING AGILE AND MODEL-DRIVEN PRACTICES IN A METHODOLOGICAL FRAMEWORK FOR THE WEB INFORMATION SYSTEMS DEVELOPMENT

Author(s):

Esperanza Marcos Martinez , Paloma Caceres Garcia de Marina , Valeria de Castro

Abstract: Nowadays, Web information systems (WIS) development has become an interesting area in both the research and business worlds. On the one hand, WIS development needs specific methodologies, because traditional methodologies do not take into account certain aspects that are specific to Web systems; moreover, traditional methodologies are too bureaucratic and tedious, and they do not facilitate quick and light development. This is the reason why agile methodologies have appeared: they provide a sufficient development process in an adaptive way. On the other hand, new technologies are constantly emerging and becoming popular. Enterprises usually develop their information systems according to these modern technologies, and so the system modeling becomes too specific. To address this, OMG proposes the Model-Driven Architecture (MDA), a model-driven framework for software development. Given the advantages of both the agile and model-driven proposals, combining them is a topic of interest. We are working on a model-driven methodological framework for the agile development of WIS, named MIDAS, which we present in this work.

Title:

CAPTURING REQUIREMENTS VARIABILITY INTO COMPONENTS

Author(s):

Carine Souveyet , Sondes Bennasri

Abstract: Software customisation, also known as software variability, is a central concept in the development of different kinds of software, such as product families or software for disabled people. The solutions proposed in the literature to deal with variability address design and implementation aspects, such as the mechanisms that can be used to implement variability in a software architecture. The representation of variability at the requirements level is neglected. Our contribution in this paper is a goal-driven approach that captures variability at the requirements level and maps it into a component-based solution centred on the concept of the Customisable Component. An identification process is provided to assist the designer during the identification and conceptualisation of customisable components. The approach is illustrated with the Crews L’Ecritoire software.

Title:

SEMI-AUTOMATED SOFTWARE INTEGRATION: AN APPROACH BASED ON LOGICAL INFERENCE

Author(s):

Mikhail Kazakov

Abstract: This paper addresses the problem of semi-automated enterprise application integration. More specifically, we discuss the automation of the integration of numerical simulation components in the area of manufacturing engineering information systems. We propose an approach based on annotating software interfaces with formal logical specifications. A logical inference procedure is used to choose the appropriate enterprise software component depending on client requests. We first discuss the problem and the difficulties of integrating numerical simulation solvers and manufacturing engineering solutions in general. This is followed by a description of the methodology of semi-automated integration based on the use of description logics. Finally, we provide the reader with details on the applicability and software implementation of the methodology.

Title:

COMPONENT-BASED SOFTWARE DEVELOPMENT

Author(s):

Daniel Lucredio , Antonio Francisco do Prado , Iolanda Claudia Sanches Catarino , Adriano A. Bossonaro , Raphael Marcilio de Souza Neto , Joao Ronaldo Del Ducca Cunha

Abstract: This paper presents a Component-Based Software Development Environment, referred to as CBDE, that supports the construction and reuse of software components according to the Catalysis method. It integrates a CASE tool, called MVCase, and a RAD tool, called C-CORE, to support the whole process of Component-Based Development (CBD). The CBD process follows the spiral model of software development, including activities that range from communication with the customers, to identify the requirements for the construction and reuse of components, to delivery and customer assessment of the components. This paper focuses on details of the two Construction phases of the CBD process. In the first phase, the components of an application domain are modeled and then implemented in a component-oriented language, being made available for reuse in a repository. In the second phase, the software engineer builds applications by reusing the components available in the repository and adding new application-specific components. The MVCase and C-CORE tools help the software engineer by automating a great part of the component construction and reuse tasks.

Title:

SYSTEM DEVELOPMENT USING A PATTERN LANGUAGE-BASED TOOL

Author(s):

Rosana Braga , Fernao Germano , Paulo Masiero

Abstract: Domain-specific pattern languages can be used to model applications, so that following particular paths in the pattern language leads to the complete design of particular systems. This paper shows how to use a pattern language-based analysis method and tool to help in the development of domain-specific systems, where the development is done mostly at the analysis level. The requirements of the target system are matched against analysis patterns, so that the system is specified in terms of the patterns used to model it. The tool is fed with this information and uses it to instantiate a framework that was built based on the same pattern language. The result is the source code for the target system, which can be used as a prototype, or extended and improved to become the real system.

Title:

DISTRIBUTED REQUIREMENTS SPECIFICATION: MINIMIZING THE EFFECT OF GEOGRAPHIC DISPERSION

Author(s):

Azriel Majdenbaum , Leandro Lopes , Rafael Prikladnicki , Jorge  Audy

Abstract: Requirements specification is an important phase of the requirements engineering area in the software development process. In geographically distributed environments, this phase becomes critical due to the characteristics of distributed development (physical and temporal distance, cultural differences, trust, communication, etc.). The objective of this paper is to analyze requirements specification in geographically distributed environments, identifying the main challenges and proposing a process to minimize the impacts of this scenario. The results are based on a case study carried out at a multinational organization that has offshore software development units in 3 countries and was recognized as a SW-CMM level 2 organization in 2 of them. The results suggest the necessity of adapting the requirements specification phase to the distributed software development environment, addressing the main existing challenges. The problems and the solutions adopted are presented, relating these solutions to the organization's distribution level and considering where the project team, users and customers are located.

Title:

OPEN ISSUES ON INFORMATION SYSTEM ARCHITECTURE RESEARCH DOMAIN: THE VISION

Author(s):

André Vasconcelos , Carla Pereira , Pedro Sousa , Jose Tribolet

Abstract: Currently, organizations, pushed by several business and technological changes, are more concerned about information systems (IS) than ever. Yet organizations usually still treat each IS as a separate technological issue with only slight relations to the business domain. This paper discusses the importance of the Information System Architecture (ISA) as the tool for ensuring a global view of IS and for explicitly assessing the alignment between technology and business processes and strategies. Considering the numerous topics, technologies and buzzwords surrounding the ISA domain, we identify the major ISA open issues, namely: ISA Modelling, ISA Methodology, ISA Evaluation, IS Architectural Styles and Patterns, and IS/Business Alignment. We also present our advances in addressing some of these issues by proposing an approach for ISA evaluation and IS/Business alignment measurement. This approach is supported by an ISA modelling framework and provides several indicators and measures for ISA evaluation. It is applied to the evaluation of an IS health care project.

Title:

DEVELOPMENT OF ICT IN PROFESSIONAL WORK

Author(s):

Ann Johansson

Abstract: ICT plays a critical role in many organisations today. Object-oriented methods, both well tried and recently improved, are important: they provide prescribed and formal ways to perform systems development. An ethnographical study has been performed in the NU health care organisation in Sweden. Another study was performed as a case study carried out at the Wing of Såtenäs, an airbase within the Swedish Air Force. The aim of this paper is to describe and analyse the character of professional work and its impact on systems development. Professions involve special competences and attitudes toward the work being performed, as the health care workers and the flight technicians studied here also point out.

Title:

REQUIREMENTS ENGINEERING FOR THE BUSINESS PROCESS RE-ENGINEERING: AN EXAMPLE IN THE AGRO-FOOD SUPPLY CHAIN

Author(s):

Fabrizio Sannicolò , Floriana Marin , Paolo BRESCIANI

Abstract: Being able to reduce the gap between Requirements Engineering and Software Engineering is crucial to foster the development of better information systems that more precisely address the organizational needs of the stakeholders. One of the key factors toward this objective is adopting methodologies in which the conceptual level of Requirements Engineering techniques is raised, so that formal representations can be used since the very early stages of requirements elicitation and analysis. The Tropos methodology targets this objective by means of the so-called Early Requirements Analysis, which is aimed at understanding and analyzing the organizational goals by means of a precise and very expressive diagrammatic notation. The paper exemplifies the use of Tropos Early Requirements applied to a simplified case of business analysis in the context of the terminal part of the agro-food products delivery chain. The case is extracted from a more comprehensive analysis performed in the context of an ongoing project in the field of the dissemination of knowledge concerning the topic of “Genetically Modified Organisms” (GMO).

Title:

CONSTRUCTING A DISTRIBUTED BILINGUAL CONCEPT SPACE USING HOPFIELD NETWORK

Author(s):

Mohammad Azadnia , Ali Mohammad Zare bidoki , Mazeiar Salehie

Abstract: One of the crucial issues in a search engine is the ambiguity of a user query and how it can be resolved to match the user’s real information need. One solution to this problem is using a concept space to help users express their queries more precisely. In this paper we describe the automatic construction of a bilingual concept space from a sample text collection. We used co-occurrence analysis to discover correlations between terms and phrases and then stored the constructed concept space in a Hopfield network as an associative memory. We ran our experiments on the MEDLINE collection, which includes 1100/7500 docs/words, and also on a text collection including 1000 Persian/English documents. To reach higher speed in Hopfield convergence, we used a distributed architecture based on Java/RMI.
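The abstract above describes storing a co-occurrence-based concept space in a Hopfield network used as an associative memory for query expansion. The following is a minimal illustrative sketch of that idea only; the terms, weights, and threshold are invented for illustration and are not taken from the paper:

```python
# Illustrative sketch: co-occurrence weights as a Hopfield-style
# associative memory for query-term expansion (hypothetical data).

TERMS = ["cancer", "tumor", "therapy", "database"]

# Symmetric co-occurrence weights, zero diagonal (as in a Hopfield net).
W = [
    [0, 3, 2, 0],
    [3, 0, 1, 0],
    [2, 1, 0, 0],
    [0, 0, 0, 0],
]

def expand_query(active, threshold=2, max_iters=10):
    """Activate the query terms, then iterate the network until the
    activation pattern stabilises; the active units form the
    suggested, expanded set of concept-space terms."""
    state = [1 if t in active else 0 for t in TERMS]
    for _ in range(max_iters):
        new_state = []
        for i in range(len(TERMS)):
            net = sum(W[i][j] * state[j] for j in range(len(TERMS)))
            # keep the originally queried terms clamped on
            new_state.append(1 if net >= threshold or TERMS[i] in active else 0)
        if new_state == state:  # converged
            break
        state = new_state
    return [t for t, s in zip(TERMS, state) if s]

print(expand_query({"cancer"}))  # → ['cancer', 'tumor', 'therapy']
```

Terms that co-occur strongly with the query are switched on by the network; unrelated terms (here, "database") stay off.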

Title:

COMPETENCE MODELING AND MANAGEMENT: A CASE STUDY

Author(s):

Mounira Harzallah , Giuseppe Berio

Abstract: This paper presents a novel approach to enterprise competence management and a case study. The approach is based on a model called CRAI (Competency-Resource-Aspect-Individual), which allows representing enterprise personnel's competencies. In addition, the paper provides a generic competency management process in which the CRAI model plays the central role. The case study is part of a real project developed in partnership with a French enterprise in the manufacturing domain.

Title:

Q-ONLINE: INTEGRATING A QUESTIONNAIRE SYSTEM IN AN ORGANIZATION

Author(s):

Nuno Miguel  Vicente de Pina Gonçalves , Cláudio Miguel  Sapateiro , Hugo Gamboa

Abstract: Organizations are increasingly using questionnaires as a form of collecting data. Our work focuses on the creation of a web-based questionnaire platform, Q-Online, that manages multi-questionnaire projects in a multi-user environment. The project goal is to provide a standard structure to collect data in several organizational situations, particularly answering the needs of our organization: a school of technology. Examples of applications of the platform are the collection of data from students or teachers and usage inside an e-learning system. The system was tested in a major school questionnaire covering the entire school population, and we present preliminary results from this questionnaire. User interaction during the answering of the questionnaire was monitored in order to enable future retrieval of behavioural information. The data analysis developed permits a first overview of the questionnaire answers, while Data Mining techniques will be provided to identify relevant information in the answer data.

Title:

BUSINESS-DRIVEN ENTERPRISE AUTHORIZATION - MOVING TOWARDS A UNIFIED AUTHORIZATION ARCHITECTURE

Author(s):

Tom Beiler

Abstract: Information systems of large enterprises are experiencing a shift from an application-centric architecture towards a focus on process orientation and service components. Additionally, the information system is opened to business partners to allow for self-management and seamless cross-border process integration. This strategy aims at higher flexibility, but also produces new challenges that the security and administrative support systems have to cope with. We propose an architecture for enterprise authorization systems which allows the native integration of authorization processes into the business system and permits a unified treatment of the authorization issues of an enterprise. This authorization system supports information system architects in preventing authorization from becoming a bottleneck within the new architectural strategic direction.

Title:

AN INFORMATION SYSTEM DEVELOPMENT TOOL BASED ON PATTERN REUSE

Author(s):

Agnès Conte , Dominique Rieu , Laurent Tastet , Ibtissem Hassine

Abstract: A pattern is a general and consensual solution to a problem frequently encountered in a particular context. The need for pattern reuse in information systems has given rise to many pattern systems, which are becoming more and more numerous. They offer product patterns or process patterns of varied range and coverage (analysis, design or implementation patterns; general, domain or enterprise patterns). New application development environments have been developed together with these pattern-oriented approaches. These tools address two kinds of actors: pattern engineers, who specify pattern systems, and application engineers, who use these systems to specify information systems. Most of the existing development environments are made for application engineers; they offer few functionalities for defining and organizing pattern systems. This paper presents AGAP, a development environment for defining and using patterns, which distinguishes pattern formalisms from pattern systems. Not only does AGAP address application engineers, but it also allows pattern engineers to define pattern systems. The same formalisms, or items of existing formalisms, may be used either to facilitate the engineering of pattern systems or to increase the level of their reuse in designing information systems.

Title:

MIXIN BASED BEHAVIOUR MODELLING

Author(s):

Nicholas Simons , Ashley McNeile

Abstract: State machines are the basic mechanism used to specify the behaviour of objects in UML-based object models, and they admit the possibility of direct animation or execution of a model. Tools that exploit this potential offer the promise of both supporting early validation of a model under development and allowing generation of final code directly from the model. Recently, some new proposals have been made on how state machines are used to model behaviour: firstly, that complex object behaviour can best be modelled by the parallel composition of multiple state machines; and secondly, that a formal distinction can be made between purely event-driven machines and those whose states are derived from other information in the model. We illustrate the advantages of this approach with a small example that shows how it can help reduce redundancy and promote simplicity.

Title:

INFORMATION SYSTEMS SUPPORT FOR MANUFACTURING PROCESSES - THE STANDARD S95 PERSPECTIVE

Author(s):

Patrícia Macedo , Pedro Sinogas , Jose Tribolet

Abstract: In recent years, Manufacturing Execution Systems (MES) and Enterprise Resource Planning (ERP) systems have been developed to support the manufacturing enterprise. These two families of systems have been developed independently, so they have grown without a strictly defined scope or border. The feature overlap between the two raises relevant issues in the integration with control systems. The main goal of this paper is to analyze how different types of manufacturing processes (discrete, batch and continuous) are supported by ERP and MES systems, and how the standard developed by ISA, S95 - Enterprise-Control System Integration, defines the scope of each system and provides manufacturing independence. This standard allows the separation of business processes from production processes. To illustrate these ideas, a case study from a paper mill enterprise is presented, where the business processes are identified and a system framework is proposed in accordance with the S95 hierarchy function model.

Title:

MODELLING ONTOLOGICAL AGENTS WITH GAIA METHODOLOGY

Author(s):

María de Lourdes Fernández , Alfredo Sánchez , Maria Auxilio Medina Nieto

Abstract: Multi-agent systems have been successfully applied in information retrieval tasks, especially in environments whose sources of information are distributed and highly heterogeneous. They can be perceived as an alternative for facing problems that traditional search engines are not yet able to solve. On the other hand, ontologies have shown their efficiency in managing different sources of information. We present the model of some software agents that use ontologies to improve information retrieval tasks in a set of federated digital libraries. The Gaia methodology is used for this purpose, and the paper highlights some of its main advantages. It also shows that this methodology can easily be used in similar environments to avoid ad hoc construction of agent-based systems.

Title:

A CASE STUDY OF COMBINING I* FRAMEWORK AND THE Z NOTATION

Author(s):

Aneesh Krishna , Sergiy Vilkomir , Aditya Ghose

Abstract: Agent-oriented conceptual modeling (AOCM) frameworks are gaining wider popularity in software engineering. In this paper, we use the AOCM framework i* and the Z notation together for requirements engineering (RE). Most formal techniques, like Z, are suitable for and designed to work in the later phases of RE and the early design stages of system development. We argue that early requirements analysis is a crucial phase of software development: understanding the organisational environment, the reasoning and rationale underlying requirements, and the goals and social dependencies of its stakeholders is important for modeling and building effective computing systems. The i* framework is one language which addresses the early-stage RE issues cited above extremely well. It supports the modeling of social dependencies between agents with respect to tasks and goals, both functional and non-functional. We have developed a methodology involving the combined use of i* and the Z notation for agent-oriented RE. In our approach, we suggest performing a one-to-one mapping between the i* framework and Z. First, the general i* model is mapped into Z schemas, and then the i* diagrams of the Emergency Flood Rescue Management case study are mapped into Z. Some steps explaining further information refinement are also provided, with examples. Using Z specification schemas, we are in a position to express properties that are not restricted to the current state of the system, but extend to its past and future history. The case study described in this paper is taken from one of the most important responsibilities of the emergency services agency: managing flood rescue and evacuation operations. Using this case study, we have tested the effectiveness of our methodology on a real-life application.

Title:

IMPACT OF THE EVOLUTION OF BUSINESS RULES ON INTER-ORGANIZATIONAL WORKFLOWS

Author(s):

Joanna Li , Daniela Rosca

Abstract: One important characteristic of e-commerce is the ability to react rapidly to changes imposed by trading partners or the market environment. Therefore, in the context of B2B integration, interorganizational workflows need to allow for a quick implementation of change. In this paper we look at the dynamic modification of interorganizational workflows due to the addition of business rules. Business rules express statements and constraints about the way an enterprise is doing business; they represent the most dynamic component of a workflow. We are especially interested in enacting business rule changes without interrupting the existing workflow instances. To achieve this goal we apply the P2P approach, which guarantees that if the modification is done based on projection inheritance transformation rules, the initial behaviour of the workflow is not disrupted. In this paper, we introduce a rule markup language, rXRL, based on XML and grounded in Petri net theory. It is an extension of XRL that allows for the representation and enactment of business rules in workflows. We demonstrate the expressive power of rXRL with a small e-commerce application that involves the cooperation of multiple trading partners.

Title:

OVERVIEW OF THE VIEW ORIENTED DEVELOPMENT

Author(s):

Ayman Moghnieh , Joumana Dargham

Abstract: In recent years, the vast improvements achieved in the computer hardware industry have rendered the possibilities for the utilization of computers enormous. With the expansion of computer use around the globe, and especially in the corporate sector, the market demand for complex software applications has increased in a way that has raised the pressure on software development firms to increase the efficiency of their working methodologies and decrease the cost of their projects. Nevertheless, in most cases the utilization of classic programming techniques in the development of complex, multi-purpose applications is still a slow and complicated process, so upgrading programming paradigms has become essential to facilitate and simplify the development of such applications. With the growth of dependency on computer technology and complex multi-purpose applications in the global corporate sector, a strong interest was born within the software engineering research community in building complex object-oriented applications with new, enhanced methods, in order to diminish the currently colossal cost of such products and to facilitate their production, maintenance, and upgrading. Thus, strong needs have been identified for software reusability and component concatenation on the one hand, and application decentralization and support for dynamic behavior on the other. In this spirit, an approach is proposed whereby an application object is represented by a varying set of instances (called views) [1]. These instances specialize in different parts of the behavioural domain of that object, while delegating the core functionalities that make up the object's identity to a core instance representing that identity. In this approach, the object's behaviour is determined by the set of views attached to its core instance.
This View Oriented approach offers a new programming methodology that can fill the needs identified for the progress of complex multi-purpose application development technology.

Title:

AGENT-ORIENTED DESIGN PATTERNS: THE SKWYRL PERSPECTIVE

Author(s):

Manuel Kolp , T. Tung Do

Abstract: Multi-Agent System (MAS) architectures are gaining popularity over traditional ones for building the open, distributed, and evolving software required by today's corporate IT applications, such as e-business systems, web services or enterprise knowledge bases. Since the fundamental concepts of multi-agent systems are social and intentional rather than object-, functional-, or implementation-oriented, the design of MAS architectures can be eased by using social patterns. These are detailed agent-oriented design idioms that describe MAS architectures as composed of autonomous agents that interact and coordinate to achieve their intentions, like actors in human organizations. This paper presents social patterns and proposes SKwyRL, a framework aimed at gaining insight into these patterns. The framework can be integrated into agent-oriented software engineering methodologies used to build MAS. We consider the Broker social pattern as a combination of patterns and use it to illustrate the framework. The automation of pattern design is also overviewed.

Title:

AS IS ORGANIZATIONAL MODELING, THE PROBLEM OF ITS DYNAMIC MANAGEMENT

Author(s):

Nuno Castela , José Tribolet

Abstract: In today's competitive business world, organizations need to have an integrated and accurate representation of their business processes and information systems to allow fast responses in activities such as business process reengineering, information systems requirements capture and quality systems implementation, to name a few. The frequency of this kind of activity is rising. Unfortunately, the maintenance of this representation is not a trivial question, and the business model tends to be constructed, used once, and then left to “sit on the shelf”. In this paper, we first show why the As-Is model frequently “sits on the shelf”. We then show who the “clients” of the As-Is model are, and how these organizational actors can contribute to keeping the As-Is model updated. Finally, the preliminary characteristics of a model able to become self-sustaining are identified. A meta-model of the As-Is model and a tool prototype are also presented.

Title:

UML MODEL VERIFICATION THROUGH DIAGRAM DEPENDENCY RELATIONSHIPS

Author(s):

Faïez Gargouri , Hanêne Ben-Abdallah , Mouez Ali

Abstract: The Unified Modeling Language (UML) has emerged as a de facto standard modeling language, especially for information systems. However, in spite of its widespread usage, UML still lacks support for verification methods and tools. In fact, several researchers have proposed verification methods for certain UML diagrams; however, none of the proposed methods covers all the UML diagrams, which are semantically overlapping. In this paper, we propose a modular verification method for UML models. The proposed method uses the implicit (semantic) and explicit (syntactic) relations among all the diagrams of a UML model; the implicit inter-diagram relations are deduced from the UP design process. We give an overview of the proposed method and illustrate its feasibility through an example of an information system.

Title:

E-SYSTEMS DESIGN THROUGH THE STUDY OF AUTHENTIC WORK PRACTICE

Author(s):

John Perkins

Abstract: E-commerce systems involve collaborative systems that support and enable trading partners to work together as members of communities of practice. Eliciting the information requirements necessary to design, develop and run these systems requires understanding of what practitioners do in practice, as well as what policy directives impose as practice. A practice-centric approach is proposed for identification of elements of practice, a brief summary is made of some tools and concepts from Social Activity Theory and their relevance for further analysis of collaborative system information requirements is assessed.

Title:

BUSINESS PROCESS MODELING WITH OBJECTS AND ROLES

Author(s):

Artur Caetano , José Tribolet , António Rito Silva

Abstract: Role-based business process modeling deals with partitioning the universe of process modeling into different areas of concern by describing how business objects collaborate. A business object represents a concept of interest in the organization, such as an activity or an entity, and can play multiple roles according to its behavior while interacting with other business objects. A specific business object collaboration can be expressed by the roles played by every participant in that scenario. Roles organize the business object features required to display some behavior. This approach allows, on the one hand, creating semantically richer business process models and, on the other, designing business objects whose behavior is clearly separated and dependent on its usage context. Both of these results contribute to increasing the understandability of process models and improving business object reuse.

Title:

BUSINESS PROCESS MODELING TOWARDS DATA QUALITY: AN ORGANIZATIONAL ENGINEERING APPROACH

Author(s):

José Tribolet , Hugo Bringel , Artur Caetano

Abstract: Data is produced and consumed every day by information systems, and its inherent quality is a fundamental aspect of operational and business support activities. However, inadequate data quality often causes severe economic and social losses in the organizational context. The problem addressed in this paper is how to assure data quality, both syntactically and semantically, at the information entity level. An information entity is a model representation of a real-world business entity. To address this problem, we take an organizational engineering approach, consisting of using a business process modeling pattern to describe, at a high level of abstraction, how to ensure and validate business object data. The pattern defines a conceptual data quality model with specific quality attributes. We use object-oriented concepts to take advantage of inheritance and traceability, and the concepts and notation we use are an extension of the Unified Modeling Language. A case study exemplifying the use of the proposed concepts is detailed.

Title:

INTRUSION DETECTION SYSTEMS USING ADAPTIVE REGRESSION SPLINES

Author(s):

Vitorino Ramos , Srinivas  Mukamala , Ajith Abraham , Andrew Sung

Abstract: The past few years have witnessed a growing recognition of soft computing technologies for the construction of intelligent and reliable intrusion detection systems. Due to increasing incidents of cyber attacks, building effective intrusion detection systems (IDSs) is essential for protecting information systems security, and yet it remains an elusive goal and a great challenge. In this paper, we report a performance analysis of Multivariate Adaptive Regression Splines (MARS), neural networks and support vector machines. The MARS procedure builds flexible regression models by fitting separate splines to distinct intervals of the predictor variables. A brief comparison of different neural network learning algorithms is also given.
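The abstract notes that MARS builds flexible regression models by fitting separate splines to distinct intervals of the predictor variables. The building block behind this is the hinge (one-sided spline) basis function, sketched below; the knot location and coefficients are invented for illustration and are not from the paper:

```python
# Illustrative sketch of the MARS building block: hinge basis
# functions max(0, x - t) and max(0, t - x), which let a model fit
# separate linear pieces on distinct intervals of a predictor.

def hinge(x, knot, direction):
    """One-sided linear spline: zero on one side of the knot,
    linear on the other (direction = +1 or -1)."""
    return max(0.0, direction * (x - knot))

def mars_predict(x, intercept, terms):
    """A fitted MARS model is a weighted sum of hinge terms,
    given here as (coefficient, knot, direction) triples."""
    return intercept + sum(c * hinge(x, t, d) for c, t, d in terms)

# Hypothetical fitted model: flat below the knot at 5, slope 2 above it.
model_terms = [(2.0, 5.0, +1)]
print(mars_predict(3.0, 1.0, model_terms))  # 1.0 (below the knot)
print(mars_predict(8.0, 1.0, model_terms))  # 1.0 + 2*(8-5) = 7.0
```

The actual MARS procedure additionally searches for the knots and terms in a forward/backward stepwise fashion; only the basis-function idea is shown here.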

Title:

VIEWS, SUBJECTS, ROLES AND ASPECTS: A COMPARISON ALONG SOFTWARE LIFECYCLE

Author(s):

Abdelaziz Kriouile , Bouchra El Asri , Mahmoud Nassar , Bernard Coulette

Abstract: To face the increasing complexity of software systems and to meet new needs in flexibility, adaptability and maintainability, classical object-oriented technology is not powerful enough. As pointed out by many authors, one must take into account the multiplicity of actors’ viewpoints in complex systems development. Views, subjects, roles and aspects are viewpoint-oriented concepts that permit a flexible adaptation of the modelling and use of systems. This article aims to provide software developers with a comparison between the view, subject, role and aspect approaches with respect to their principles and their impact on systems development as well as on systems use. After a brief presentation of these approaches, we discuss their similarities and differences by means of criteria positioning them along the software lifecycle.

Title:

REPRESENTATION OF BUSINESS INFORMATION FLOW WITH AN EXTENSION FOR UML

Author(s):

Oliver Daute

Abstract: The importance of enterprise software solutions has increased significantly. Today a single software solution can cover the whole work and information flow of an entire enterprise, and many thousands of users may now work in one integrated enterprise solution. From the earliest solutions to today, the requirements on software systems have changed dramatically in quality and quantity. While technological know-how used to be the main factor for success, today much more effort is needed to understand what the customer wants us, as software engineers, to produce. At this point Business Process Engineering (BPE) became an important way to describe business and system requirements. Business Process Engineering is focused on requirements, terminology, processes, dependencies, and on the Business Process Flow (BPF). The Business Process Flow is the subject of this article, in particular the Business Information Flow (BIF). Many procedures and methodologies are available for modeling business process requirements. Most work with their own proprietary description languages, which are not compatible with each other, and some are only useful for implementing a pre-configured standard software solution, for instance to customize an ERP system. A much better way to model business processes, especially if no standard software solution is required, is the use of an independent standard modeling language such as the Unified Modeling Language (UML). UML is well established and quite often used in the domain of object-oriented software development. [Meta] For business process modeling with UML, some further investigation is required, especially for the representation of business information flow. An appropriate way, as proposed here, is to add information flow to standard use case diagrams.

Title:

A BUSINESS PROCESS MODEL FOR PUBLIC HEALTH INFORMATION SYSTEMS: A GOVERNMENTAL PERSPECTIVE

Author(s):

José Luís Oliveira , Daniel Polónia , Ilídio  Oliveira

Abstract: The business process models available for the telecom industry have, in the recent past, made significant developments and reached leading-edge maturity levels. The 1997-2000 technology bubble injected significant amounts of cash into the market, which allowed a quick maturing of both the process models and their supporting software applications and integration tools. In turn, the health industry, with regard to technology and associated processes, has been maturing more slowly, with lower levels of integration and process models more “institution oriented” than “client oriented”. In this paper, a process model for the health industry is proposed, derived from the enhanced Telecom Operations Map (eTOM), from which a functional architecture is derived that intends to support applications responding to the current technological, political and economical challenges of the Portuguese national health service.

Title:

COMPONENT-BASED MODELLING OF ORGANISATIONAL RESOURCES USING COLOURED PETRI NETS

Author(s):

Khodakaram Salimifard

Abstract: Collaborative software applications such as workflow management systems require a clear separation between the process model and the resource model. The process model realises the partial order of business processes, while the organisation model provides the structure of the resources to be utilised. In this paper, we propose a CPN-based framework for modelling organisational functioning units. The models are developed independently of the process layer. The model can hence be modified without altering the process model, maintaining flexibility and scalability.

Title:

TOWARDS MEETING INFORMATION SYSTEMS: MEETING KNOWLEDGE MANAGEMENT

Author(s):

Vincenzo Pallotta

Abstract: Interaction through meetings is among the richest human communication activities. Recently, the problem of building information repositories out of recordings of real meetings has gained interest, and several research projects have started. We report here a summary of the first two years of research carried out within the Swiss-funded research project (IM)2, together with some lessons learned and future perspectives. However, this paper is not intended as an activity report; rather, its aim is to point out outstanding problems (and their possible solutions) arising from the first attempt to tackle the problem of designing information systems for meeting records. Moreover, the paper addresses issues related to the role that such an information system may play in the context of enterprise knowledge management.

Title:

ORGANISATIONAL LEARNING – FOUNDATIONAL ROOTS FOR DESIGN FOR COMPLEXITY

Author(s):

Angela Nobre

Abstract: Organisational learning has developed from many roots and threads of thought. As the field matures it is critical that some of the baseline concepts are not overlooked. In tune with hierarchical systems theory, it is necessary to distinguish those issues which have a structuring effect over the others, thus allowing for an overall consistent development. Hierarchies exist because not everything has the same importance, which does not imply that we need hierarchical organisations as we know them. Prescriptive, simplistic and mechanistic forms of interpreting and promoting organisational learning are of less consequence than exploratory, complex and interpretative approaches. In order to envision which paths will lead us in which directions, it is important to consider what might bring us to a situation where the greatest diversity of possibilities may materialise, i.e. how we may open, and keep open, the complex systems in which we are immersed. The current paper focuses on some of the origins of organisational learning and aims to point to an approach which may help 21st century organisations deal with the daily struggle of bridging theory and practice, our intentions and our actions, and what the organisation as a whole officially states it stands for and how that materialises in current reality.

Title:

HUMAN-CENTERED SYSTEMS DEVELOPMENT AND USE INCONSISTENCIES

Author(s):

Salem Dakhli , Claudine Toffolon

Abstract: The framework we describe in this paper is composed of two parts. The first part provides a typology of deviations and inconsistencies which occur during human-centered systems development and use. This typology, based on four facets and three levels of abstraction (conceptual, detailed, technical), permits the identification of other types of deviations and inconsistencies not considered in the literature. It may be useful for defining methods and tools to manage and reduce deviations and inconsistencies in compliance with the organization’s constraints, priorities and technical maturity. The second part consists of a coordination framework which permits the reduction of deviations and inconsistencies inherent in human-centered systems.

Title:

XML DATA CONSTRAINT AND XINCAML

Author(s):

Shun Xiang Yang , Ying Nan Zuo , Jing Min Xu , Zhong Tian

Abstract: XML is becoming the de facto standard for data exchange. Because it brings structure and semantics to content, it is very important for applications to verify the validity of XML data before further processing. The W3C XML Schema language can specify many of the constraints in XML data, but it lacks the capability to express application-specific inter-node constraints. XincaML (eXtensible inter-nodes constraint Markup Language) was therefore invented as a complement to the XML Schema language to specify this kind of application constraint. XincaML is a descriptive inter-node constraint specification language. The XincaML Processor is a reference Java implementation of the XincaML language parser and constraint checker. Developers can easily integrate the processor into their applications to handle inter-node constraints in addition to validating XML data against an XML Schema. XincaML and the processor provide a common mechanism for applications to describe and process inter-node constraints, thus significantly reducing the need to hard-code constraint handling in applications and speeding up application development.
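As an aside, the kind of inter-node constraint the abstract refers to can be illustrated with a short sketch. The XML document and the checking code below are hypothetical and do not use XincaML's actual syntax; they only show a constraint relating two separate nodes (a total versus a sum of item prices) that an XML Schema alone cannot express.

```python
import xml.etree.ElementTree as ET

# Sample document (illustrative only)
doc = ET.fromstring("""
<order>
  <item price="10"/>
  <item price="15"/>
  <total>25</total>
</order>
""")

def check_total(order):
    # Inter-node constraint: /order/total must equal the sum of /order/item/@price.
    # XML Schema can type-check each node but cannot relate them to each other.
    total = float(order.findtext("total"))
    items = sum(float(i.get("price")) for i in order.findall("item"))
    return total == items

print(check_total(doc))  # True for the sample document
```

A language like XincaML would declare such a relationship once, and its processor would perform a check of this kind generically instead of requiring hand-written code per constraint.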

Title:

A USER-CENTERED METHODOLOGY TO GENERATE VISUAL MODELING ENVIRONMENTS

Author(s):

Carmine Gravino , Vincenzo Deufemia , Gennaro Costagliola , Filomena Ferrucci

Abstract: CASE tools supporting many activities of the software development process embed visual modeling environments. Indeed, visual languages are practical means to allow engineers to define models and different views of software systems. However, the effectiveness of visual modeling environments strongly depends on the process and tools used for their development. In this paper we present a user-centered methodology for the development of customized visual environments, and a tool to support it. The use of UML meta-modeling techniques and formal methods characterizes the proposed approach. Moreover, incremental development and rapid prototyping are ensured by the use of an automatic generation tool that allows designers to focus on the structural features of the target language while disregarding the creation of the visual environment.

Title:

DETERMINING REQUIREMENTS AND SPECIFICATIONS OF ENTERPRISE INFORMATION SYSTEMS FOR PROFITABILITY

Author(s):

K. Donald Tham , Mark S. Fox

Abstract: A company’s profits may be defined as the positive difference between its revenues and its operational costs. Today, most companies use traditional costing methods and/or traditional Activity-Based Costing (ABC) to determine their operational costs with a view to directing operational and business process changes so that profits are realized. A tripartite approach is presented towards determining the requirements and specifications of enterprise information systems for profitability (EISP). The first part presents an understanding of the nuances of traditional costing and ABC as currently practiced in enterprises, pointing out the shortcomings of these costing practices. The second part provides a case study that vividly demonstrates the problems with current costing methods and clearly points out their inadequacies with respect to profitability. The third part presents a framework for the specification of enterprise information systems for profitability through ontology-based enterprise modeling, EABEM and Temporal-ABC, for the attainment of improved knowledge about costs.

Title:

REQUIREMENTS ENGINEERING FOR ORGANISATIONAL MODELLING

Author(s):

Simon Tan , Kecheng Liu

Abstract: This paper explores a semiotic perspective on information systems engineering, using organisational modelling techniques rooted in organisational semiotics. The components and relationships of large corporations are highly complex, volatile and unstructured. Semiotic modelling techniques are therefore introduced to address these challenges posed by large enterprises. MEASUR, a suite of methods based on organisational semiotics, is used to address the IT and organisational requirements needed to encapsulate behavioural patterns and to formalise the convoluted relationships. A case study illustrating the applicability of MEASUR is presented, evaluating a crime reporting system from the Police Information Technology Organisation (PITO) in the UK and examining its application and significance in the modelling of organisations. We focus on two fundamental issues. First, we investigate agent behaviour within the organisation. Second, we analyse the semantics of the relationships between these patterns of behaviour in building a normative model of a large organisation.

Title:

USING SAP SYSTEM CONFIGURATION SECURITY TEST TO COMPLY WITH SARBANES-OXLEY ACT

Author(s):

Jen-Hao Tu

Abstract: Most observers would agree that the Sarbanes-Oxley Act (SOA) is the single most important piece of legislation affecting corporate governance, financial disclosure and the practice of public accounting. At the same time, the SAP system is the most widely used ERP (Enterprise Resource Planning) system in the world, with thousands of seamlessly linked components and subsystems. Conducting security tests in such a complicated ERP system remains a major challenge. Based on a study of SAP system configuration security testing at the author’s company, this work-in-progress paper discusses related configuration security weaknesses in the SAP system and suggests practical solutions to enhance SAP’s security controls in order to comply with the SOA.

Title:

A FRAMEWORK FOR ASSESSMENT OF ENTERPRISE INTEGRATION APPROACHES AND TECHNOLOGIES

Author(s):

Alba Nydia Nunez , Ronald Giachetti , Dwayne Truex , Bertha M. Arteta

Abstract: Enterprise integration is the study of an organization, its business processes, and its resources, understanding how they are related to each other and determining the enterprise structure so as to efficiently and effectively achieve the enterprise goals. There are many separate research streams that have developed theories, approaches, and technologies for integrating the enterprise, yet there seems to be little sharing of concepts across disciplines or consensus on the topic of enterprise integration. Moreover, what is meant by the term ‘integration’ is poorly defined. In this article an enterprise integration framework is presented to bring together the divergent views of enterprise integration so that their interrelationships can be understood. The framework defines five levels of the enterprise system and the integration types encountered at each level. The five levels are organization, process, application, data, and network. The enterprise integration framework is used to analyse the many approaches taken by different disciplines toward enterprise integration. The analysis identifies gaps for further research.

Title:

SECURING A WEB-BASED EPR: AN APPROACH TO SECURE A CENTRALIZED EPR WITHIN A HOSPITAL

Author(s):

Altamiro Costa-Pereira , Ricardo Correia , Ana Ferreira

Abstract: The introduction of new technologies such as the EPR stresses the importance of healthcare information security. The Biostatistics and Medical Informatics Department of Porto University is developing a centralized Electronic Patient Record, the HSJ.ICU, at Hospital S. João in Portugal. The main objective is to electronically integrate heterogeneous departmental information in a secure way, using Internet technology. The methodology used takes into consideration user-driven security issues: access control and secure communications (confidentiality), integrity of information, and availability of that information to authorized users. This was achieved using CEN/TC251 prestandards, Internet security protocols (e.g. TLS) and digital signature protocols. Keeping the CIA (Confidentiality, Integrity and Availability) structure in mind helps to organize, and in a way separate, concepts that can then be assessed in a more direct and efficient way. Security issues are already rooted in the system and constitute a good basis for any enhancements that will be made in the future.

Title:

ACHIEVING SUPPLEMENTARY REQUIREMENTS USING ASPECT-ORIENTED DEVELOPMENT

Author(s):

Julie Vachon , Farida Mostefaoui

Abstract: Software development most often focuses on the elicitation, design and implementation of functional requirements. Although this is a common and reasonable approach, one should not neglect supplementary specifications, that is, all those quality attributes and constraints required by the client or the users. The practice of attempting to work software quality in around the end of the development process is frequent and quite risky, for there is little chance the final architecture will be able to meet these quality requirements without important modifications. Supplementary specifications capture the requirements which are not defined by the use case model. Among others, these include functionalities which are common across many use cases as well as various URPS(*) quality attributes whose realization may require implementing complementary services. Supplementary requirements are a kind of cross-cutting concern which one would like to plug in at some later stage of the design process (e.g. after prototyping use cases). The analogy with the aspect notion suggests that aspect-oriented techniques may be called upon advantageously here. This article presents an aspect-oriented methodology comprising techniques to support the development of supplementary specifications. Use-case analysis is adapted to take care of crosscutting requirements, and a design pattern (structure and collaboration) is proposed for the elaboration of aspect-oriented solutions. (*)usability, reliability, performance, supportability.

Title:

FOUNDING ENTERPRISE SYSTEMS ON ENTERPRISE PERFORMANCE ANALYSIS

Author(s):

Ian Douglas

Abstract: Information, knowledge and learning systems are developed with the implicit belief that their existence will lead to better performance for those using them, and that this will translate into better performance for the users’ organisation. There is an important activity that must occur prior to requirements analysis for such systems: organisational and human performance analysis. One key software application missing from most organisations is an integrated enterprise system for analysing performance needs, determining appropriate support solutions, monitoring the effect of those solutions, and facilitating the reuse and sharing of the resulting knowledge. A model for such a system is presented, together with a prototype demonstrating how it could be implemented.

Title:

MITIGATING THE EFFECTS OF DENIAL OF SERVICE ATTACKS ON E-COMMERCE SITES

Author(s):

S. Zaidi , Mohammad Ali Awan

Abstract: DoS attacks have caused substantial damage to the Internet. An attacker with limited resources can carry out these attacks against very sophisticated sites or networks, causing huge financial losses, disrupting key public utilities and, above all, inducing a sense of insecurity in a world which is increasingly becoming dependent on the Internet. Recently, a number of mechanisms have been proposed to counter the threat posed by DoS attacks; proactive server roaming and the feedback mechanism for Diffserv clients are two of the more prominent ones. We describe in detail VIPNet, a mechanism for protecting e-commerce sites. In VIPNet, traffic is divided into two classes. Some important clients are chosen as VIPs and their packets are given preferential treatment with guaranteed Quality of Service, enabling them to withstand congestive attacks. An attacker with a valid VIP right can only mount a DoS attack against another VIP, and a non-VIP cannot utilize the resources of a VIP. A technical comparison of VIPNet with other schemes is also provided in the paper. We show through a practical implementation of VIPNet that it effectively counters the threat posed by DoS attacks. Finally, we propose some enhancements to the VIP protocol in order to remove one of its limitations, namely that it assumes the interaction between servers and clients is transaction-based, with the server receiving a payment at the end of each transaction. Our proposed enhancements would permit the protocol to support sites that charge a flat fee.
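The two-class idea described above can be sketched in a few lines. This is a minimal illustration assuming strict priority between the classes (the paper's actual QoS mechanism may differ); the class names and scheduler are hypothetical, but the effect shown is the one claimed: a flood of non-VIP packets cannot starve a VIP client.

```python
import heapq

VIP, NON_VIP = 0, 1  # lower value = higher scheduling priority

class TwoClassScheduler:
    """Strict-priority scheduler: all VIP packets are served before any non-VIP packet."""
    def __init__(self):
        self._queue = []
        self._seq = 0  # monotonically increasing tie-breaker: FIFO within a class

    def enqueue(self, packet, vip=False):
        cls = VIP if vip else NON_VIP
        heapq.heappush(self._queue, (cls, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

s = TwoClassScheduler()
s.enqueue("attack-1")               # non-VIP flood packets
s.enqueue("attack-2")
s.enqueue("vip-request", vip=True)  # arrives last, served first
print(s.dequeue())  # vip-request
```

Strict priority is the simplest policy that captures the abstract's guarantee; a production scheme would typically add rate limiting so that VIP credentials themselves cannot be abused, which is exactly the residual attack surface the abstract notes (a VIP attacking another VIP).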


AREA 4 - Software Agents and Internet Computing
 
Title:

JURISDICTION IN B2C E-COMMERCE REDRESS

Author(s):

Chin Eang Ong

Abstract: E-commerce jurisdiction has always been an issue because e-commerce exists in a borderless environment, and this e-environment diminishes the importance of physical location and locality. This raises a great concern over which country’s jurisdiction to engage when disputes occur between business and consumer in the e-environment. It is crucial when the consumer is seeking ‘redress’, as there is always the question of where a court action should be brought. The current jurisdiction instruments of the European Commission (EC) within the European Union (EU), the E-commerce Directive (Country of Origin) and Rome II, are still in the drafting process, and this legislation is not a total solution. This paper discusses the current jurisdictional issues, whether there is a need to call for a single jurisdiction, and what complications arise when seeking redress in this borderless e-environment. It also raises important issues relating to the gaps and loopholes that exist in Country of Origin and Rome II.

Title:

E-ENTERPRISE: AWARENESS AND IMPLEMENTATION OF TRANSPARENT FACTORY IN SOUTH EAST ASIA

Author(s):

Abu Hassan Darussman , Gobbi Ramasamy , Josia Anthony , Seng Hoong Chua , Manimaran B

Abstract: The need for flexible manufacturing due to demand, supply, product, process, workforce, and equipment variability forces companies to transform their current manufacturing systems into leaner production systems, or big just-in-time (JIT). Three strategies, denoted M3A (Management Automation, Marketing Automation and Manufacturing Automation), have to be jointly incorporated to confront an increasingly competitive market. To answer these needs, the transparent factory, an open automation framework based on Internet technologies that provides seamless communication between the plant floor and business systems, has been introduced by Schneider Electric. Despite the good work and technology introduced, acceptance has been significant only in the United States, Europe and Africa. Hence, this paper looks into the awareness of the transparent factory in South East Asia (SEA) in particular. Some important figures are presented to gauge the level of awareness in this part of the world. Particular reference is made to an oil & gas plant in Indonesia, which recently had the system implemented, and to a waste treatment plant in Malaysia.

Title:

DEVELOPING INTRANET AND EXTRANET BUSINESS APPLICATION FOR A LARGE TRAVEL AGENT

Author(s):

Anthony Atkins , Robert Shaw

Abstract: This paper outlines an e-business strategy for a large independent travel agent with multiple sales channels and business units. The present configuration does not provide a framework for the development of e-business solutions for the travel company. The paper discusses the creation of an infrastructure for the development of the company’s Intranet to integrate its separate business units with Extranet technology using e-business applications. This strategy provides a stable platform and infrastructure capable of supporting the traditional business system while allowing for the development of e-business operations. The paper discusses a number of tools and techniques for strategic development to incorporate e-business sales channels. The most appropriate tools for the travel industry are discussed, and their application shows how the travel agent can develop competitive advantage through the use of strategic information systems. The creation of a centralised e-business system utilising a Virtual Private Network (VPN) is outlined, with predicted cost savings of £1 million per annum. The centralised e-business system supported by the VPN has also allowed a CRM system to be evaluated. An initial trial of the CRM system gave increased sales of £150,000 which, if applied throughout the business, would increase sales by £1.2 million.

Title:

PROCESS DESIGN AND OUTSOURCING ISSUES IN E-COMMERCE

Author(s):

Anne Nelson , William Nelson , Ali Yakhlef

Abstract: Electronic commerce (EC) involves business transactions, marketing efforts, information gathering, and other functional activities with respect to information technology (IT), both within and outside an organization. It provides a firm with various opportunities to adopt different business sourcing models and new opportunities to configure its organizational structure within the New Economy. Critical factors for EC success dictate that the firm must re-evaluate its business sourcing model from within complexity theory and the New Economy, emphasizing the need for the firm to effectively coordinate its EC initiatives and consider all sourcing opportunities in this nonlinear, decentralized, alliance-focused, and CRM-based environment. This research builds from 1) an understanding of EC, to 2) the complex systems of EC in the New Economy, to 3) the sourcing mode used in the EC business model. The results have significant implications for IT managers deciding on the ideal sourcing mode for an EC initiative. The study results point to the determinants of the choice made by the sample of large firms in the study. Cost savings expectations are an important consideration in the choice of sourcing mode: as expectations of cost savings from outsourcing increased, the firms in the sample increasingly used market mechanisms (service providers) rather than internal resources. Firms were also concerned about the business potential associated with a project; when the business potential was high, they preferred joint ventures and internal development to the use of market mechanisms. This indicates that the move toward outsourcing based on cost savings expectations was mitigated by the desire to develop relevant capabilities for high-potential projects through increased day-to-day involvement.

Title:

A COOPERATIVE LEARNING MULTI-AGENT SYSTEM

Author(s):

Yacine Lafifi , Tahar Bensebaa

Abstract: The interest of applying cooperation lies in showing that education is fundamentally a cooperative process; cooperative learning certainly influences the learner’s level. Current developments are concerned more with group learning environments than with individual ones. In this paper, we present the architecture of a multi-agent system (SACA) which supports cooperative learning. SACA’s aim is to offer each learner an adaptable learning environment that takes into account the learner’s aptitudes, capacities and needs. Moreover, it offers the possibility of effective cooperation among learners so that each can reach his or her goal. The system is a set of heterogeneous agents; the artificial agents help the learner in order to create the possibility of effective cooperation. Each agent is represented by its mental state and its capacities.

Title:

INCORPORATING THE ELEMENTS OF THE MASE METHODOLOGY INTO AGENT OPEN

Author(s):

Brian Henderson-Sellers

Abstract: Construction of an enterprise-wide, web-based system can be assisted by the use of agents and an agent-oriented methodology. As part of an extended research programme to create such an AO methodology by combining the benefits of method engineering and existing object-oriented frameworks (notably the OPF), we have analysed here contributions to the OPF repository of process components from the MASE agent-oriented methodology. We have identified three new Tasks, together with one additional Technique and two new Work Products.

Title:

MANAGING E-MARKET TRANSACTION PROCESSES

Author(s):

John Debenham

Abstract: Knowledge-driven processes are business processes whose execution is determined by the prior knowledge of the agents involved and by the knowledge that emerges during a process instance. They are characteristic of emergent business processes. The amount of process knowledge relevant to a knowledge-driven process can be enormous and may include common-sense knowledge. If a process’s knowledge cannot be represented feasibly, then that process cannot be managed, although its execution may be partially supported. In an e-market domain, the majority of transactions, including requests for advice and information, are knowledge-driven processes for which the knowledge base is the Internet, and so representing the knowledge is not an issue. These processes are managed by a multi-agent system that manages the extraction of knowledge from this base using a suite of data mining bots.

Title:

A CASE STUDY ON SOCIAL NETWORK IN A COMPUTER GAME

Author(s):

Julita Vassileva , Golha Sharifi , Yang Cao , Yamini Upadrashta

Abstract: When designing a distributed system where a certain level of cooperation among real people is important, for example CSCW systems, systems supporting workflow processes and peer-to-peer (P2P) systems, it is important to study the evolution of relationships among the users. People develop attitudes toward other people and reciprocate the attitudes of others when they are able to observe them. We are interested in finding out how the design of the environment, specifically the feedback mechanisms and the visualization, may influence this process. For this purpose we designed a web-based multi-player computer game which requires the players to represent their attitudes toward other players explicitly and allows studying the evolution of interpersonal relationships in a group of players. Two versions of the game deploying different visualization techniques were compared with respect to the dynamics of attitude change and the types of reactions. The results show that there are strong individual differences in the way people react to success and failure, and in how they attribute blame and change their attitude toward the other people involved in the situation. Also, the level and manner of visualizing the other players’ attitudes significantly influences the dynamics of attitude change.

Title:

USING INTERACTION PROTOCOLS IN DISTRIBUTED CONSTRUCTION PROCESSES

Author(s):

Santtu Toivonen , Heikki Helin , Jung Ung Min , Tapio Pitkäranta

Abstract: We present an interaction protocol based approach for facilitating distributed construction processes. In our approach, software agents represent the various participants of a construction project, such as contractor, subcontractor, and supplier. These agents are supposed to communicate according to predefined interaction protocols. Should an agent be unaware of some protocol needed in the process, it benefits from mechanisms for adopting it. We approach this problem with interaction protocol descriptions serialized in a commonly agreed-upon format, and design our agents so that they can adapt to the descriptions. We present a scenario in the construction industry where the project participants do not know in advance how to communicate with each other. However, by adapting to the protocol descriptions provided by the respective parties, they are eventually able to interact.
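The mechanism the abstract describes, an agent adopting a protocol it did not know in advance from a serialized description, can be sketched briefly. The JSON format, message names, and `ProtocolFollower` class below are illustrative assumptions, not the paper's actual serialization; they only show an agent driving its conversation from a description loaded at runtime.

```python
import json

# A contractor/subcontractor negotiation protocol, serialized in a
# commonly agreed format (here plain JSON, as an assumed stand-in).
protocol_json = """{
  "start": "request",
  "transitions": {
    "request": ["quote", "refuse"],
    "quote": ["accept", "reject"],
    "accept": [], "reject": [], "refuse": []
  }
}"""

class ProtocolFollower:
    """An agent that adopts an interaction protocol from its description."""
    def __init__(self, description):
        p = json.loads(description)
        self.state = p["start"]
        self.transitions = p["transitions"]

    def can_send(self, message):
        return message in self.transitions[self.state]

    def send(self, message):
        if not self.can_send(message):
            raise ValueError(f"{message!r} not allowed after {self.state!r}")
        self.state = message

agent = ProtocolFollower(protocol_json)
agent.send("quote")              # subcontractor answers a request with a quote
print(agent.can_send("accept"))  # True
```

The point of the approach is that the agent code above never hard-codes the protocol: handing it a different description is enough to make it interact with a party it has never met.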

Title:

TOWARDS AN AGENT-BASED AND CONTEXT-ORIENTED APPROACH

Author(s):

Zakaria Maamar , Soraya Kouadri Mostéfaoui , Hamdi Yahyaoui , Willem-Jan van den Heuvel

Abstract: We present an agent-based and context-oriented approach for the composition of Web services. A Web service is an accessible application that other applications and humans can discover and trigger to satisfy certain needs, e.g., hotel booking. Because of the complexity that characterizes the composition of Web services, two concepts are put forward in this paper to reduce this complexity, namely software agents and context. A software agent is an autonomous entity that acts on behalf of users, whereas context is any relevant information that characterizes a situation. During the composition process, software agents engage in conversations with their peers to agree on the Web services that will participate in this process. In these conversations, agents take into account the execution context of the Web services.

Title:

RESOURCE SHARING AND LOAD BALANCING BASED ON AGENT MOBILITY

Author(s):

Gilles Klein , Amal El Fallah-Seghrouchni , Alexandru Suna

Abstract: From recent improvements in network and peer-to-peer technologies and the ever-growing need for computing power, new ways of sharing resources between users have emerged. These methods are very diverse, from SETI@HOME, which shares the load of analysing data from space in order to find traces of extraterrestrial life, to NAPSTER and its successors, and to real-time video games. However, these technologies allow only centralised computation sharing, even if they already offer "peer-to-peer" sharing of data. We present in this paper a method based on multi-agent systems which allows load-sharing between distant users.
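The core decision in agent-mobility load sharing, whether an agent should stay put or migrate to a less loaded peer, can be sketched as follows. The function name, the load representation, and the 0.8 threshold are illustrative assumptions, not details from the paper.

```python
def pick_destination(hosts, current, threshold=0.8):
    """Return the host an agent on `current` should migrate to, or None to stay.

    `hosts` maps host names to load fractions in [0, 1].
    """
    if hosts[current] < threshold:
        return None  # local host still has spare capacity: no migration
    # The local host is overloaded: migrate to the least-loaded remote host,
    # but only if that actually improves the agent's situation.
    dest = min((h for h in hosts if h != current), key=hosts.get)
    return dest if hosts[dest] < hosts[current] else None

loads = {"a": 0.95, "b": 0.40, "c": 0.70}
print(pick_destination(loads, "a"))  # 'b', the least-loaded peer
print(pick_destination(loads, "b"))  # None: 'b' is under the threshold
```

A decentralised scheme would have each host run such a policy on its own view of peer loads, which is what distinguishes the multi-agent approach from the centralised computation sharing the abstract contrasts it with.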

Title:

FINANCIAL REPORTING: AN INTERNET CLEARINGHOUSE

Author(s):

Boris Stavrovsky , Max Gottlieb

Abstract: The creation of accounting transactions has changed from manual to computerized recording. In many operational applications the accounting entries are generated as a byproduct of the underlying transactions (such as sales), making it possible to shorten the existing delays in the creation of accounting data. Under this method it is possible to issue financial statements monthly or weekly, as opposed to the presently used quarterly and annual periods. Many corporations already generate such financial reports for internal use, but not for external purposes. Corporations provide the Securities and Exchange Commission (SEC) with more detailed and supplemental information in addition to their financial reporting, including sales of their stock by their officers. Corporations also disclose substantial facts in press releases and conferences with financial analysts, and they are obligated to disclose this information to their shareholders. But how can this be done quickly, so that small investors obtain the information at the same time as institutional investors? It would be advisable to distribute financial reports via an electronic clearinghouse. This method would permit instant access to the reports and assure that these documents cannot be modified. In the following paragraphs we review the existing reporting frequencies, contrasting them with the needs of investors, and describe the generation of accounting transactions. Next, the proposed method of collection and distribution of financial reports, as well as their possible analysis by a central electronic clearinghouse, is discussed. Finally, we explore the need for changes to the attestation standards, describe how to assure the integrity of electronically distributed financial statements, and propose a sequence for implementing the new distribution.

Title:

VIRTUAL ACTIVE IP NODE FOR COLLABORATIVE ENVIRONMENTS

Author(s):

Francisco Puentes , Victor Carneiro

Abstract: This document describes the VAIN (Virtual Active IP Node) architecture, which enables users to deploy new network services based on virtual active networks, and how it solves the challenge of segmenting the incoming traffic that crosses nodes towards the services while preserving the original objective of protocol independence [1]. Our solution is based on network expressions that use all the semantics contained in each incoming packet, so the node does not need to know the inner structure of the protocols. The VAIN architecture has been developed in response to the challenges posed by electronic commerce, specifically those regarding collaborative environments and marketplaces. To achieve this objective we have considered the following goals: first, a three-layer conceptualization; second, a transparent implementation and integration with existing infrastructures; and third, a strategy of network traffic distribution based on all the information within the input packets, which is named “expressions based distribution”. VAIN mainly uses as guest code an interpreter of intermediate code from the .NET architecture, although it is open to other guest codes. VAIN sits immediately above the link layer, can be extended to any other similar network protocol, and is independent of upper protocols, whether or not they exist at present. Our architecture also presents a polymorphic character, since it allows its behavior to be changed in a transparent way, virtually emulating other architectures without affecting its functionality.

Title:

THE ASSESSMENT OF E-COMMERCE AWARENESS ON HIGHLY VALUABLE TRADITIONAL PRODUCTS IN THAILAND

Author(s):

Chamnong JungThirapanich , Sakuna Vanichvisuttikul

Abstract: This paper discusses the potential for e-commerce development among the Thai rural people who own products in the government project entitled OTOP (One Tambon or District, One Product) in Thailand. This is done by reviewing the awareness and readiness of the product owners, who are the regional product champions from all over Thailand. The study also identifies the enabling factors and the limitations, and forecasts the future growth of e-commerce for the OTOP project. Additionally, the paper will be beneficial to both parties, the government and the people in the rural areas, in solving the problem at the grassroots level in Thailand. Five hundred product owners were selected from different product categories; 253 of the 500 responded with usable answers. The response rate of 50.6% is higher than the expected rate for such surveys. The major problems of the existing OTOP production process and business operation are price, lack of funds for stock inventory, and piracy of local wisdom. Awareness of e-commerce among these rural people is high, but the level of acceptance of the knowledge and technology transferred is rather low, due to the digital divide in Thailand. Most of them face the same situation: seeking more distribution channels and reaching more markets.

Title:

INTRANET USE - A STUDY OF FIVE SWEDISH ORGANISATIONS

Author(s):

Christina Amcoff Nyström , Björn Bank

Abstract: This paper presents a study, carried out in 2002, of the Intranets of five Swedish organisations. The purpose was to investigate in what way different aspects influence the use and understanding of an Intranet. An explorative approach was used, based on two interview guides. The first guide was directed to managers and IS representatives and covered background aspects of the Intranets as well as data about the businesses and their Intranets. The second guide was directed to all kinds of users and covered how the Intranet is used and in what way users could influence its content and understanding. The persons interviewed represented end-users, managers and members of the IS staff. The results show that the Intranets in the study were rather immature and that the main use mode was “publishing”. The underlying philosophy of the Intranets seemed to be informing oneself rather than informing others. Furthermore, the use and understanding of the Intranet differed between end-users, managers and the IS staff with respect to trust and ideas about responsibilities. Finally, we have identified important aspects to be considered when investigating the use of Intranets in further research: strategies, context, the further development process, competence, the Intranets’ organisational affiliation, and the culture of the organisation.

Title:

TEAMBROKER: CONSTRAINT BASED BROKERAGE OF VIRTUAL TEAMS

Author(s):

Achim Karduck

Abstract: Some consulting projects are carried out in virtual teams: networks of people sharing a common purpose and working across organizational and temporal boundaries by using information technologies. Most investigations of such teams focus on coordination, group communication and computer-supported collaborative work. However, additional perspectives, such as the formation of teams, are also important. Here one has to deal with the question of how to form the best team. To approach this question, we have defined team formation as the process of finding the right expert for a given task and allocating the set of experts that best fulfills the team requirements. This has been further transformed into a problem of constraint-based optimal resource allocation. Our environment for computer-supported team formation adopts the brokerage view, mediating experts between peers requesting a team and those willing to participate in one. Computer-supported brokerage of experts has been realized as distributed problem solving involving entities representing experts, brokers and team initiators.
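The constraint-based optimal resource allocation that the abstract describes can be sketched, in drastically simplified form, as an exhaustive search over candidate teams. The expert profiles, skills, and rates below are invented for illustration:

```python
from itertools import combinations

# Hypothetical expert profiles: (name, skills, daily rate).
experts = [
    ("ada",   {"db", "java"}, 900),
    ("ben",   {"ui"},         500),
    ("chris", {"db"},         400),
    ("dana",  {"java", "ui"}, 700),
]

def form_team(required, team_size):
    """Return the cheapest team of `team_size` experts whose combined
    skills cover `required`, or None if the constraints are unsatisfiable."""
    feasible = [
        team for team in combinations(experts, team_size)
        if required <= set().union(*(skills for _, skills, _ in team))
    ]
    return min(feasible, key=lambda t: sum(rate for _, _, rate in t), default=None)

best = form_team({"db", "java", "ui"}, 2)
```

A real broker would use constraint-solving techniques rather than enumeration, which grows combinatorially with the size of the expert pool.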

Title:

DESIGN AND EVALUATION OF SOFTWARE AGENTS FOR ONLINE NEGOTIATIONS

Author(s):

Kaushal Chari , Manish Agrawal

Abstract: This paper presents a negotiation heuristic for software agents that enables them to use market information and learn about an opponent’s behavior while conducting online negotiations. The heuristic is tested in a pilot experimental study, where the performance of agents is evaluated against human negotiators in a simulated electronic market. Preliminary results indicate that agents may have the potential to do better than humans in multi-issue negotiation settings.
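The paper's heuristic additionally exploits market information and opponent modelling; as a baseline, the standard time-dependent concession tactic from the negotiation literature can be sketched as follows (parameter names and values are illustrative, not the authors' heuristic):

```python
def offer(t, deadline, reserve, target, beta=2.0):
    """Time-dependent concession: start at `target` and move toward
    `reserve` as the deadline approaches. With beta > 1 the agent
    concedes early; with beta < 1 it holds firm until late."""
    frac = min(t / deadline, 1.0) ** (1.0 / beta)
    return target - frac * (target - reserve)

# A selling agent asking 100 initially, willing to accept 60 by round 10.
prices = [round(offer(t, 10, 60, 100), 1) for t in range(11)]
```

A learning agent would adjust `beta` or the reserve price as it observes the opponent's counter-offers and the prevailing market prices.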

Title:

AN AGENT ARCHITECTURE FOR STEEL PRODUCT BUSINESS NETWORK

Author(s):

Janne Kipinä , Harri Haapasalo , Heli Helaakoski

Abstract: Networked manufacturing enterprises are now moving towards more open information exchange in order to integrate their activities with those of their suppliers, customers and partners within wide supply chain networks. There has therefore been an increasing need for software systems that support business networks. This paper introduces the SteelNet agent architecture, which facilitates real collaboration between companies by enabling seamless information and material flow in a business network. The SteelNet agent architecture has been developed to meet the requirements of a steel product industry network that works as a supply chain. The different operations of the order-delivery process in the network have been modelled as agents that are able to collaborate with each other. The SteelNet agent architecture is the basis for a prototype that handles the operations of manufacturing a steel product in a supply chain. By digitising the information flow between the collaborating companies it increases their competitive position and profitability.

Title:

THE IMPACT OF THE COMMUNICATION AND INFORMATION TECHNOLOGIES IN THE EDUCATIONAL SYSTEM – CASE STUDY OF NORTH OF PORTUGAL AND SOUTH OF GALICIA

Author(s):

Luis Vilán Crespo , Paulo Costa , Ana Isabel Díez Sanches , Manuel Perez Cota

Abstract: This article synthesizes the results of the investigation carried out in the North of Portugal and the South of Galicia by GEAC (Group of Computer-Assisted Teaching) of the University of Vigo, under the supervision of Dr. Manuel Pérez Cota. The main objective of this phase of the investigation is to identify the teacher's profile and his or her sensibility towards a technology-based teaching-learning process: in what ways computer tools are or are not used; how, when and where the teacher's computing knowledge was acquired; and the decisive causes, and their importance, for an appropriate use of computing in the education process within the teacher's own class.

Title:

THE PROJECT OF VIRTUAL LABORATORY FOR INFORMAL COMMUNICATION ON GIGABIT NETWORK

Author(s):

Kenzi Watanabe , Toshihiko Shimokawa , Yuko Murayama , Yasuo Ebara , Shinji Yamane , Yukinori Goto

Abstract: As computer network technology has evolved, the Internet has spread to include a wide variety of users. They get together and create communities on the network, so that virtual relations between people have been emerging. Many such virtual communities use chat rooms, mailing lists and message boards built on existing applications. At the same time, informal communication, such as a chat or a private conversation during a break at a conference, has been recognized as important: we often come across a good idea while having a relaxing conversation. In daily life we have various environments for informal communication, which are necessary for keeping up relations with others and even for having better formal communication. However, it is not easy to create an environment for informal communication in cyberspace when relying only on existing applications. In this research, we try out new experimental informal communication tools in our virtual laboratory environment, in which several universities are interconnected via the Japan Gigabit Network (JGN)/ATM. First, we set up a CCD camera at each laboratory and deliver streaming live video to share the environment. Second, we constructed a meeting system using Microsoft NetMeeting and OpenMCU. In addition, we implemented on-door communication systems using the metaphor of a door on the WWW as a medium for novel types of informal communication. In this paper we describe the experimental environment and the supporting applications for informal communication. We also let students take the main role in communication, so that they use the tools for their informal communication in a realistic way.

Title:

EGOVERNMENT MATURITY MODEL (EGMM)

Author(s):

Kazem Haki , Ayob Mohammadian , Emad Farazmand , Hossein Safari , Gholamreza Khoshsima , Adel Moslehi

Abstract: eGovernment has been defined as an Information and Communication Technology (ICT) enabled route to good governance. eGovernment is an evolutionary path whose effective implementation requires a complete understanding of its constituent elements and, at the same time, a holistic view in order to stay focused on its overall objectives. This paper introduces a new heuristic model that can be used to measure eGovernment maturity, called the “eGovernment Maturity Model” (eGMM). eGMM has five levels of maturity encompassing varying degrees of initiative, from the lowest to the highest: close, readiness, develop, manage, and seamless. In this model two aspects are considered: eService maturity and plan maturity.

Title:

A MULTI-SERVER APPROACH FOR DISTRIBUTED COLLABORATIVE KNOWLEDGE SPACES

Author(s):

Thomas Bopp

Abstract: Cooperative knowledge areas are a proven approach to supporting cooperative work processes and e-learning. The Paderborn open-source sTeam system establishes cooperative knowledge spaces in the form of a single-server implementation. This paper presents our architecture of distributed cooperative knowledge areas. The main conceptual idea of the sTeam system is to combine a document management system with a MUD. A distributed architecture of cooperative knowledge spaces enables us to create a single world of connected virtual knowledge spaces across different servers. This is particularly important when considering new scenarios for integrating peer-to-peer clients into a multi-server architecture. Distributed knowledge spaces must also encompass concepts for multi-server group and user management, allowing users to move transparently from one server to another. Materials should be structured independently of their location on a specific server. The paper begins by discussing the idea of structuring a virtual world into zones or areas, such as is also found in multi-user virtual environments. Then our architecture of distributed cooperative knowledge areas is presented. In the field of user management, two different approaches for peer-to-peer and master-server group and user management are possible, and these are discussed in detail. Our trial implementation will integrate both concepts and prototypes. The paper concludes with a discussion of potential extensions to our architecture.

Title:

ADAPTIVE AGENTS FOR SUPPLY NETWORKS

Author(s):

Jeff Barker , Gavin  Finnie

Abstract: Dynamic information flow in e-supply networks requires that buyers and suppliers have the ability to react rapidly when needed. Using intelligent agents to automate the process of buyer/seller interaction has been proposed by a number of researchers. One problem in providing intelligent automated collaboration is incorporating a learning capability, i.e. an agent should be capable of adapting its behaviour as conditions change. This paper proposes a scalable multi-agent system which uses case-based reasoning as a framework for at least part of its intelligence. Tests with a simulated system show that such an agent is capable of learning which supplier is best and of adapting if supply conditions change.
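The case-based reasoning cycle the abstract refers to (retrieve a similar past order, reuse its supplier choice, retain the observed outcome) can be sketched as follows; the features, distance weights, and supplier names are invented for illustration:

```python
# Each past case: (order context, supplier chosen, observed outcome in [0, 1]).
cases = [
    ({"qty": 100, "urgency": 0.2}, "supplierA", 0.9),
    ({"qty": 500, "urgency": 0.9}, "supplierB", 0.8),
    ({"qty": 120, "urgency": 0.3}, "supplierA", 0.7),
]

def similarity(a, b):
    """Inverse-distance similarity over crudely normalised features."""
    d = abs(a["qty"] - b["qty"]) / 500 + abs(a["urgency"] - b["urgency"])
    return 1.0 / (1.0 + d)

def choose_supplier(order):
    """Retrieve: pick the past case that is most similar and most successful."""
    best = max(cases, key=lambda c: similarity(order, c[0]) * c[2])
    return best[1]

def record_outcome(order, supplier, outcome):
    """Retain: adaptation to changing supply conditions happens simply by
    adding fresh cases, which gradually outweigh stale ones."""
    cases.append((order, supplier, outcome))

choice = choose_supplier({"qty": 110, "urgency": 0.25})
```

With the toy case base above, the small low-urgency order retrieves a `supplierA` case; poor later outcomes recorded for that supplier would shift subsequent choices.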

Title:

USING AGENT PLATFORMS FOR SERVICE COMPOSITION

Author(s):

Michele Tomaiuolo , Matteo Somacher , Agostino Poggi , Paola Turci

Abstract: Agentcities is a network of agent platforms that constitute a distributed environment to demonstrate the potential of autonomous agents. One of the aims of the project is the development of a network architecture to allow the integration of platforms based on different technologies and models while remaining compliant with the FIPA agent interoperability specifications. This network provides basic white pages and yellow pages services to allow the dynamic discovery of the hosted agents and the services they offer. An important outcome is the exploitation of the capability of agent-based applications to adapt to rapidly evolving environments. This is particularly appropriate to dynamic societies where agents act as buyers and sellers negotiating their goods and services, and composing simple services offered by different providers into new compound services.

Title:

A P2P-BASED INFRASTRUCTURE FOR VIRTUAL-ENTERPRISE'S SUPPLY-CHAIN MANAGEMENT

Author(s):

Maurizio Panti , Loris Penserini , Luca Spalazzi

Abstract: This paper proposes and describes a prototype of a peer-to-peer based infrastructure to support a virtual enterprise's supply chain management. Because a virtual enterprise is composed of autonomous, distributed, and continuously evolving entities, we have naturally modeled each business entity as a peer agent platform that can play several roles according to the task to be fulfilled. Moreover, we are interested in coordination issues among both the peer agent platforms and the agent platforms' roles. To this end, we describe and apply the roles required by the organizational architecture in a virtual storehouse scenario.

Title:

TRUSTED EMAIL: A PROPOSED APPROACH TO PREVENT CREDIT CARD FRAUD IN SOFT-PRODUCTS E-COMMERCE

Author(s):

Saleh Alfuraih , Dennis McLeod , Nien Sui

Abstract: Soft-products are intangible products that can be consumed without shipment, such as software, music and calling cards (calling time). The demand for soft-products on the Internet has been increasing for the past few years. At the same time, fraudulent credit card transactions have also increased. Compared to tangible products, fraudulent credit card transactions on soft-products are easier to conduct and more difficult to recover. Fraudulent transactions are a major problem for e-commerce merchants, customers, and credit card issuers. In this paper, we classify the types of products sold on the Internet and the common fraud that occurs for each type. We review some of the best existing credit card fraud prevention methods and introduce the Trusted Email mechanism as a new way to prevent fraudulent transactions on soft-products. Trusted Email is a custom email solution that can uniquely identify and authenticate the online customer, prevent unauthorized credit card transactions, and effectively resolve e-commerce disputes.

Title:

A RECORDED STATE MECHANISM FOR PROTECTING MOBILE AGENTS AGAINST MALICIOUS HOSTS

Author(s):

Kamalrulnizam Abu Bakar , Bernard S. Doherty

Abstract: The mobile agent is an emerging technology that is gaining momentum in the field of distributed systems. It provides powerful and effective mechanisms for developing applications and is often described as a promising technology for building applications in open, distributed and heterogeneous environments such as the Internet. However, without proper security protection, especially against malicious host attacks, the widespread use of this technology can be severely impeded. In this paper an approach is proposed that protects the integrity of a mobile agent from attack by a malicious host. The approach uses the state of the mobile agent, recorded during its execution inside the remote host environment, to detect a manipulation attack by the malicious host. The approach is implemented using a master-slave agent architecture and operates on a distributed migration pattern.
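The core idea of recording state for tamper detection (though not the authors' full master-slave protocol) can be sketched as a chained digest over the agent's state snapshots, so that altering or reordering recorded entries is detectable:

```python
import hashlib

def record(trace, state: bytes):
    """Append a state snapshot to the agent's trace, chaining each entry
    to the digest of the previous one."""
    prev = trace[-1][1] if trace else ""
    digest = hashlib.sha256(prev.encode() + state).hexdigest()
    trace.append((state, digest))

def verify(trace) -> bool:
    """The master agent re-computes the chain to detect manipulation."""
    prev = ""
    for state, digest in trace:
        if hashlib.sha256(prev.encode() + state).hexdigest() != digest:
            return False
        prev = digest
    return True

trace = []
record(trace, b"visited host A, result=42")
record(trace, b"visited host B, result=17")
assert verify(trace)
trace[0] = (b"visited host A, result=99", trace[0][1])  # malicious edit
assert not verify(trace)
```

In practice the digests must also be signed or shipped back to the master as the agent migrates, since a host holding the entire trace could otherwise recompute the whole chain.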

Title:

PRINCIPLES FOR CREATING WEB SITES: A DESIGN PERSPECTIVE

Author(s):

J. Paulo Costa

Abstract: The importance of aesthetics is frequently forgotten. To address this problem, we identified in the literature some of the theory underlying graphic design, gestalt theory and multimedia design. Based on this literature review, we propose principles for web site design. We also present a tool to evaluate web designs.

Title:

AGENT-BASED VIRTUAL MEDICAL DEVICES

Author(s):

Emil Jovanov , Dusan Starcevic , Zeljko Obrenovic

Abstract: In this paper we present a telemedical environment based on virtual medical devices (VMDs) implemented with the Java mobile agent technology called aglets. The agent-based VMD implementation provides ad-hoc agent interaction, support for mobile agents, and different user interface components in the telemedical system. We have developed a VMD agent framework with four types of agents: data agents, processing agents, presentation agents, and monitoring agents. Data agents abstract the data source, creating a uniform view of different types of data, independent of the data acquisition device. Processing agents produce derived data, such as the FFT power spectrum, from raw data provided by the data agents. Presentation agents supply user interface components providing a variety of views of user data. User interface components are based on the HTTP, SMS and WAP protocols. Monitoring agents collaborate with data and processing agents, providing support for data mining operations and searching for relevant patterns; a typical example is monitoring for possible epileptic attacks. We have applied VMDs to facilitate distributed EEG analysis and found that the flexibility of the distributed agent architecture is well suited to the telemedical application domain. This flexibility is particularly important in the case of an emergency, enabling swift system reconfiguration on the fly.
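As an illustration of the kind of derived data a processing agent produces, a one-sided FFT power spectrum over a synthetic one-second EEG-like signal might be computed as follows (the sampling rate and test signal are invented for the example, not taken from the paper):

```python
import numpy as np

def power_spectrum(signal, fs):
    """One-sided FFT power spectrum of `signal` sampled at `fs` Hz,
    returned together with its frequency axis."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum

fs = 256                                 # Hz, a common EEG sampling rate
t = np.arange(fs) / fs                   # one second of samples
eeg_like = np.sin(2 * np.pi * 10 * t)    # synthetic 10 Hz alpha-band tone
freqs, power = power_spectrum(eeg_like, fs)
peak = freqs[np.argmax(power)]           # the 10 Hz component dominates
```

A monitoring agent scanning for epileptic patterns would consume such spectra rather than the raw samples delivered by the data agent.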

Title:

SEMANTICS-BASED RETRIEVAL IN P2P NETWORK: A VISION

Author(s):

Ingeborg Solvberg , Yun Lin , Hao Ding

Abstract: P2P systems are a revived paradigm for information sharing among distributed nodes in a network. Many research projects and practical applications have emerged, from the early ICQ, Napster and Gnutella to today's CAN, Gnougat, etc., but few of them support semantic retrieval. The meeting of the Semantic Web and Peer-to-Peer (P2P) appears to offer a highly innovative way to address the precision-recall trade-off in the information searching community. This paper uses a scenario in the tourism domain to describe the problem encountered, and then presents the main requirements. Bearing this ambitious goal in mind, a preliminary architecture for semantic IR in a P2P network is also proposed.

Title:

M-COMPUTING SYSTEM FOR ENTERPRISES: A DESIGN FRAMEWORK

Author(s):

Tung Bui , Mai Thai

Abstract: Mobile computing has been touted as the next technological revolution that would finally allow businesses to achieve the required level of competitiveness in the new economy, that is, to compete in a frictionless and (close to) real-time economy. This inevitable trend is made possible by the development of mobile computing in the midst of the progressive miniaturization of virtually all system components, as well as the convergence of mobile communications and computer technologies. It offers opportunities for enterprises to create competitive advantages, form new business processes and improve old ones, while leveraging the time and location sensitivity that wireless communications have to offer. However, several enterprises have not been able to reap these opportunities, since they do not know how to develop an effective mobile computing system that can satisfy their business desiderata. This paper presents a system development framework that guides enterprises in devising a cost-effective m-computing platform that is conducive to fulfilling their business needs and creates the best return on investment.

Title:

AUTOMATIC INTEGRATION OF INTER-ENTERPRISE PROCESSES WITH HIERARCHICAL BROKER FRAMEWORK

Author(s):

Li-Chen Fu , Ming-Yu Tsai , Shun-Fa  Chang

Abstract: In recent years, manufacturing technologies have become more and more complex. Almost all production processes need cooperation among multiple enterprises: today's manufacturing process is a complex workflow forming a supply chain, in which each enterprise provides its services to accomplish specialized processes. With the growth of Internet usage, more and more services can be carried out on the web. The web service is one such Internet application, and it can help enterprises cooperate with one another easily. In this paper, we propose a Hierarchical Broker Framework to provide an advanced broker function for enterprise cooperation. In this framework, we classify all services to make searching easier, to represent the relations between two enterprises more flexibly, to match buyers and sellers more precisely, and to cut down the broker's load. On the side of the enterprise client, no existing enterprise architecture has to be modified. Besides, we also design an adapter to connect the broker server and the existing enterprises. With these designs, we seek an automatic way to integrate inter-enterprise processes, improving efficiency and reducing transaction overheads.

Title:

SEMANTIC E-LEARNING AGENTS - SUPPORTING ELEARNING BY SEMANTIC WEB AND AGENT TECHNOLOGIES

Author(s):

Juergen Dunkel , Ralf Bruns , Sascha Ossowski

Abstract: E-learning is starting to play a major role in the learning and teaching activities of institutions of higher education worldwide. Students perform significant parts of their study activities in a decentralized manner and access the necessary information sources via the Internet. Several tools have been developed that provide basic infrastructures enabling individual and collaborative work in a location- and time-independent fashion. However, systems that adequately provide personalized and permanent support for using these tools are still to come. This paper reports on the advances of the Semantic E-learning Agents (SEA) project, whose objective is to develop virtual student advisors that support university students in successfully organizing and performing their studies. The e-learning agents are developed with novel concepts from the Semantic Web and agent technology. The key concept is the semantic modeling of the e-learning domain by means of XML-based applied ontology languages such as DAML+OIL and OWL. Software agents apply ontological and domain knowledge in order to assist human users in their decision-making processes. For this task, the inference engine JESS is applied in conjunction with the agent framework JADE.

Title:

SOUTH ASIA’S EMERGING ELECTRONIC MARKETS: PATTERNS AND PROSPECTS

Author(s):

Ruby Dholakia , Nikhilesh Dholakia , Nir Kshetri

Abstract: In terms of electronic commerce and electronic markets, South Asia has been a region of sharp contrasts. This paper examines the emergence of South Asia’s electronic markets and then identifies and analyzes various influences that are shaping and are likely to shape the e-commerce landscape of this region.

Title:

AGENT SUPPORT FOR COLLABORATIVE WORK

Author(s):

Igor Hawryszkiewycz , Aizhong Lin

Abstract: This paper describes a way to support cooperative information systems in evolving, knowledge-intensive environments. Such environments require users themselves to customize or reconfigure their systems as the work situation evolves. The paper proposes that agent systems support cooperative work by facilitating such evolution. It first defines a collaborative metamodel to describe collaborative work. The metamodel provides the framework for identifying agents and the ways in which they are to interact to support collaboration. The agents are defined using the same concepts as the collaborative metamodel, thus providing a systematic way to define agent requirements. The paper then describes an example and a prototype.

Title:

A FEASIBILITY STUDY OF A PROPOSED UNIFIED SEMANTIC INFRASTRUCTURE IN THE EUROPEAN CONSTRUCTION SECTOR

Author(s):

Yacine Rezgui , Farid  Meziane

Abstract: FUNSIEC (Proposal No. 42059Y3C3FPAL2) is a research project funded by the European Commission (EC) under the e-Content programme. The key objective of FUNSIEC is to study the feasibility of building and maintaining an Open Semantic Infrastructure for the European Construction Sector (OSIECS) at a technical, organisational and business level. This infrastructure is being built by gathering multi-lingual linguistic electronic resources (e-resources) devoted to the construction sector, including various ontologies, taxonomies produced by international initiatives and EC-funded projects. OSIECS will be made available to content and service providers, as well as to other actors in the construction area, to help them exploit fully the advantages of Construction-oriented semantic-based e-resources.

Title:

A MODEL OF AGENT ONTOLOGIES FOR B2C E-COMMERCE

Author(s):

Domenico Rosaci

Abstract: This paper proposes a formal model of agent ontologies, suitable to represent the realities of both customers and sellers in a B2C electronic commerce scenario. This model is capable of describing the entities involved in the above realities (products, product features, product categories) as well as the behaviour of customers and sellers in performing their activities. A system architecture, based on the presented ontology model, is also briefly described.

Title:

UNDERLYING PLATFORM OF THE E-COMMERCE SYSTEM: J2EE VS. .NET

Author(s):

Hamid Jahankhani , Mohammed Youssef

Abstract: When considering the implementation of any new web-based application these days, the two main options available to developers are to base the application either on Sun Microsystems’ J2EE (Java 2 Enterprise Edition) platform or on Microsoft’s .NET platform. Although other platforms do exist, the IT industry has identified these two as the main choices. The .NET initiative is a broad new vision for embracing the Internet and the Web in the development, engineering and use of software. One key aspect of the .NET strategy is its independence from a specific language or platform. This paper is about the strategic decision that any small and medium-sized enterprise (SME) should make when adopting a technology platform for a new project. It refers to an ongoing development to provide an integrated business information and e-commerce system for a manufacturing company. The company uses the Syspro ERP system. Consumers of ERP systems are demanding solutions that can be easily integrated with web applications in order to provide services such as e-commerce to customers and browser-based access to remote workers. The aim of this paper is to compare the two technologies and discuss the main reasons why it is believed that .NET would be more appropriate than J2EE as the technology platform for the e-commerce solution.

Title:

INTELLIGENT ELECTRONIC INTER-SYSTEMIC CONTRACTING - ISSUES ON CONTRACT FORMATION

Author(s):

Francisco  Andrade

Abstract: Electronic contracting as an object of legal studies is becoming more and more complex. Computers are currently used not only as a means of searching and processing information, but also as communication tools, as automatic operators and, already, as a way of developing and accessing new forms of intelligent behaviour through the use of intelligent devices. New ways of electronic contracting have appeared, each with different specifications and ways of operating. Brazilian legal doctrine has established a way of classifying electronic contracts according to the specific technical way of accomplishing each type of electronic communication and contracting. For each category, a different analysis is needed of the main issue of the formation of contracts: the declaration of will, the expression of intent, and the question of knowing whether a contract should be considered formed or completed. This issue is particularly problematic as far as intelligent electronic inter-systemic contracting is concerned. The notions of digital signature and interchange agreements may not be sufficient to grant validity to contracts formed not just through the machines but indeed by the machines. Therefore, at least two main possibilities for considering the issue of the expression of consent in inter-systemic intelligent transactions must be analysed: considering the electronic devices as mere machines or tools, or, more daringly, considering them as “legal persons”.

Title:

DEPENDABILITY: A FORGOTTEN ASPECT IN LOCATION-BASED SERVICES

Author(s):

Artem Katasonov

Abstract: The dependability aspect of location-based mobile services (LBSs) seems to be almost completely overlooked by both practitioners and researchers. However, as we argue in this paper, LBSs are applications that require high dependability, and this question must therefore always be seriously considered when developing a new service. In recent years, mobile operators have launched many LBSs, but often they have not been as successful as had been hoped. We believe that low dependability is at least one principal reason hindering user acceptance of existing services. In this paper, we discuss this disagreement between the actual importance of LBS dependability and the level of attention it receives among practitioners and researchers. We also identify and briefly discuss the major factors influencing the dependability of LBSs, namely content quality, software reliability, algorithm appropriateness, interface quality, and communication quality.

Title:

LEARNING PROCESSES AND THE ROLE OF TECHNOLOGICAL NETWORKS AS AN INNOVATIVE CHALLENGE

Author(s):

Alberto Carneiro

Abstract: This paper aims to contribute to a comprehensive understanding of the role and value of technological networks in learning processes, whose integration can enhance enterprise performance. Considering that an adequate combination of variables such as IT, the Internet, Intranets, computers, information systems and teamwork activities may drastically modify organisations’ behaviour, a conceptual model for the optimisation of enterprise performance as a function of technological networks is suggested.

Title:

INTEGRATING DESIGN DOCUMENT MANAGEMENT SYSTEMS USING THE ROSETTANET E-BUSINESS FRAMEWORK

Author(s):

Paavo Kotinurmi , Hannu Laesvuori , Katrine Jokinen , Timo  Soininen

Abstract: Industry consortia have developed e-business frameworks providing standards and specifications enabling business partners to communicate over the Internet through integration of enterprise applications. This paper describes a prototype system for integrating design document management systems using the RosettaNet e-business framework. The requirements for the solution were extracted from a case product development network. We present the design and implementation of the prototype system. According to our experiences with it, the RosettaNet standards were relatively easy to implement and use. However, the RosettaNet specifications for product development processes and the related business document definitions, e.g. for design document delivery, are not sufficient in all respects. As a consequence, two implementations of the same RosettaNet standard process may be incompatible as they differ in the aspects that RosettaNet does not support sufficiently.

Title:

A NETWORK COMPONENT ARCHITECTURE FOR COLLABORATION IN MOBILE SETTINGS

Author(s):

Bernd Eßmann

Abstract: Today, Computer Supported Cooperative Work (CSCW) is used in broad areas of human cooperation. With the spread of radio-based communication and ad hoc networking, it may enter entirely new settings. One important aspect is the new quality of CSCW being independent of specially network-enabled places. Another is more intuitive support for face-to-face cooperation using personal mobile devices. To open up this field of collaboration, our approach, featuring Distributed Cooperative Knowledge Spaces, specifically addresses conceptual issues in the transition from classical, server-centred to mobile, distributed collaboration environments. With this concept we introduce persistent and personal knowledge spaces as well as so-called temporary knowledge areas and groups. Our prototype application for spontaneous collaboration implements this approach, drawing on many years of experience in developing and testing our concept of Cooperative Virtual Knowledge Spaces.

Title:

FLOW-ORIENTED DEPLOYMENT OF A MULTI-AGENT POPULATION FOR DYNAMIC WORKFLOW ENACTMENT

Author(s):

Sebastian Kanzow , Karim Djouani , Yacine Amirat

Abstract: In the virtual enterprise paradigm, workflow processes shared between different business partners lead to new requirements for workflow management applications. Several multi-agent systems have been proposed to cope with their inherently distributed nature. Most of these systems define agents as helper programs situated at the (human) resource level, instantiated on some workflow participant’s personal computer. We argue that this concept is not adequate and propose an approach that creates and deploys agents at a virtual flow level, where one agent takes care of one workflow sub-process, instead of attaching one or more agents to an existing resource. Finally, we present a probabilistic classification approach for deciding on the assignment of tasks to agents.

Title:

PEER-TO-PEER NETWORK SIMULATION

Author(s):

Ralph Deters , Nyik Ting

Abstract: Peer-to-Peer (p2p) networks are the latest addition to the already large distributed systems family. With a strong emphasis on self-organization, decentralization and autonomy of the participating nodes, p2p networks tend to be more scalable, robust and adaptive than other forms of distributed systems. The much-publicized success of p2p networks for file-sharing and cycle-sharing has resulted in increased awareness of and interest in p2p protocols and applications. However, p2p networks are difficult to study due to their size and the complex interdependencies between users, applications, protocols and networks. This paper has two aims: first, to review existing p2p-network simulators and make a case for our own simulator, named 3LS (3-Level-Simulator); second, to present our view that more realistic and complex models are needed in p2p-network simulation, since ignoring the underlying network, the topology and/or the behaviour of applications can lead to misleading simulation results.

Title:

SEAMLESS COMMUNICATION AND ACCESS TO INFORMATION FOR MOBILE USERS IN A WIRELESS ENVIRONMENT

Author(s):

Julita Vassileva , Golha Sharifi

Abstract: Providing mobile workers with mobile devices such as a Compaq iPaq with a CDPD card can support them in retrieving information from centralized information systems. More specifically, mobile devices can enable mobile users to make notifications of schedule changes and add new data to the information system. In addition, these devices can facilitate group communication anytime and anywhere. This paper presents different ways of providing non-critical information in a timely fashion to nomadic users of mobile devices on a wireless network. A distributed application prototype to support nomadic users is proposed, and a simulated environment is used to evaluate it. Since solutions for seamless access are highly domain specific, the study involves homecare workers at Saskatoon District Health (SDH). By keeping track of the users’ current context (time, location, etc.) and a user task model, it is possible to predict the information needs of mobile users and to provide context-dependent adaptation of both content and functionality. Moreover, to avoid interruptions in the user’s interaction with the main information sources, methods for mobile transaction management using agent-based smart proxies that buffer, delay or pre-fetch information are introduced.

Title:

AN ANALYSIS OF VARIATION IN TEACHING EFFORT ACROSS TASKS IN ONLINE AND TRADITIONAL COURSES

Author(s):

Gregory Hislop , Heidi Ellis

Abstract: As the role of the internet and internet technologies continues to grow in pace with the rapid growth of online education, faculty activities and tasks are changing to adapt to this increase in web-based instruction. However, little measurable evidence exists to characterize the nature of the differences in teaching effort required for online versus traditional courses. This paper reports on the results of a quantitative study of instructor use of time which investigates not only total time expended, but also examines differences in types of effort. The basis of the study is a comparison of seven comparable pairs of online and traditional course sections where instructors recorded time spent during course instruction for the seven pairs. This paper discusses relevant related work, presents the study motivation and design, discusses how teaching effort varies across different tasks between online and traditional courses, and presents thoughts for future research. The results of this study indicate that instructors of online courses spend more time on direct interaction with students when compared to instructors of traditional courses, but spend less time on other activities such as grading and materials preparation.

Title:

WEB SERVICE COMPONENT MARKETS: A COMPREHENSIVE ASSESSMENT OF THE THIRD WAVE OF SOFTWARE MARKETPLACES

Author(s):

Jos van Hillegersberg , Willem-Jan van den Heuvel

Abstract: The Service Oriented Computing paradigm, whose main manifestation is web-service technology, holds great promise, but develops its full potential only when packaged web services are traded in a service market. The Internet seems ideal for this purpose and various sources have predicted a bright future for the Internet Web Service Market (WSM). However, very little is known about the current status, structure and trends within the WSM. This study develops a model of the WSM and a classification of the components traded in it. Using these, the status and development of the WSM are investigated. The results show that the WSM is emerging but that its impact is not (yet) as dramatic as expected. Although there are some trends towards a mature market, the WSM is clearly in its early stages. However, intermediaries and component producers are offering promising new services that are likely to deliver new growth of the market in the coming years.

Title:

ANALYZING OBSERVABLE BEHAVIOURS OF DEVICE ECOLOGY WORKFLOWS

Author(s):

Sea Ling , Seng Loke

Abstract: We envision an Internet computational platform of the 21st century that will include device ecologies consisting of collections of devices interacting synergistically with one another, with users, and with Internet resources. We consider "device ecology workflows" as a type of workflow describing how such devices work together. It would be ideal if one could model the devices in a computer and analyze the effects when such workflows are executed in the device ecology. This paper provides a Petri Net model, in terms of workflow nets, for analyzing the observable effects of device ecology workflows.
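
The workflow-net view of a device ecology can be made concrete with a few lines of code. The sketch below is illustrative only (the device names and flow are invented, not taken from the paper): a transition fires when every input place holds a token, consuming one token per input place and producing one per output place, which is exactly the token game that workflow-net analysis builds on.

```python
# Minimal workflow-net sketch (illustrative, not the paper's model).
class WorkflowNet:
    def __init__(self, transitions):
        # transitions: name -> (input places, output places)
        self.transitions = transitions
        self.marking = {}          # place -> token count

    def put(self, place, n=1):
        self.marking[place] = self.marking.get(place, 0) + n

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:           # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:          # produce one token per output place
            self.put(p)

# A toy device-ecology flow: a sensor event triggers a lamp and a
# logger in parallel, then a join synchronizes both branches.
net = WorkflowNet({
    "detect":   (["start"], ["to_lamp", "to_log"]),
    "dim_lamp": (["to_lamp"], ["done_lamp"]),
    "log":      (["to_log"], ["done_log"]),
    "join":     (["done_lamp", "done_log"], ["end"]),
})
net.put("start")
for t in ["detect", "dim_lamp", "log", "join"]:
    net.fire(t)
```

Soundness analysis of a workflow net then amounts to checking that every run from the "start" marking can reach the "end" marking with no tokens left behind.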

Title:

USING ONTOLOGIES FOR PROSPECTION OF BEST OFFER ON THE WEB

Author(s):

Rafael Cunha Cardoso , Fernando  da Fonseca de Souza , Ana Carolina Salgado

Abstract: Nowadays, information retrieval and extraction systems play an important role in obtaining relevant information from the vast amount of content on the World Wide Web (WWW). The Semantic Web can be seen as the Web’s future and introduces a series of new concepts and tools that may be used to insert “intelligence” into content on the current WWW. Among these techniques, ontologies play a fundamental role in this new context. With ontologies, intelligent agents can traverse the Web and “understand” its meaning in order to execute more complex and useful tasks on behalf of their users. The main objective of this work is to create a mechanism for searching and filtering specific information from a set of HTML or XML documents extracted from the Web, using techniques from the Semantic Web, particularly ontologies.

Title:

MGAIA: EXTENDING THE GAIA METHODOLOGY TO MODEL MOBILE AGENT SYSTEMS

Author(s):

Shonali Krishnaswamy

Abstract: Mobile agents are a class of software agents that have the ability to move from host to host and are particularly relevant for mobile and distributed applications. The development of several mobile agent implementation environments has necessitated conceptual modelling techniques for mobile agent applications. In this paper, we present mGaia, our extension of the Gaia Agent Oriented Software Engineering (AOSE) methodology to model mobile agent systems. We discuss our experiences from applying a software engineering approach to building mobile agent applications by modelling applications using mGaia and mapping these models to two mobile agent toolkits, Aglets and Grasshopper.

Title:

STRATEGIC NEGOTIATION OF BANDWIDTH IN COOPERATIVE NETWORKS

Author(s):

Jonathan Bredin

Abstract: We analyze the scenario where a pair of network devices each periodically relies on the other to handle network traffic. Without immediate reward, the forwarding device incurs an opportunity cost in handling the other's request. We find, however, situations where rational decision makers prefer bandwidth exchange to isolated operation. We base our analysis on a take-it-or-leave-it protocol inspired by the Rubinstein bargaining model, and extend it to evaluate repeated interaction between pairs of devices.
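
The economic intuition behind repeated bandwidth exchange can be sketched numerically. This is not the paper's exact model, only the standard repeated-game condition it builds on: with discount factor delta, a device should forward a peer's traffic when the present value of future reciprocation outweighs the one-shot opportunity cost.

```python
# Illustrative condition (parameters invented): cooperate when
# delta * benefit / (1 - delta), the discounted value of an ongoing
# exchange, is at least the immediate opportunity cost of forwarding.
def should_forward(benefit_per_round, opportunity_cost, delta):
    future_value = delta * benefit_per_round / (1.0 - delta)
    return future_value >= opportunity_cost

# Patient devices (high delta) sustain exchanges that impatient ones refuse.
patient = should_forward(benefit_per_round=1.0, opportunity_cost=5.0, delta=0.9)
impatient = should_forward(benefit_per_round=1.0, opportunity_cost=5.0, delta=0.5)
```

The same comparison explains why isolated operation is preferred when interactions are rare: a low probability of meeting the peer again acts like a low discount factor.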

Title:

SEMANTIC SUPPORT FOR AUTOMATED NEGOTIATION WITH ALLIANCES

Author(s):

Zlatko Zlatev , Rogier Brussee , Pascal van Eck , Stanislav Pokraev

Abstract: Companies can form alliances on the Internet to aggregate buying or selling power and create value. More concretely, resources are shared, or new opportunities are jointly exploited that cannot be exploited alone. Most alliances are formed as the result of a negotiation process between the companies that constitute the alliance. This paper proposes a software framework that enables automated negotiation. Our framework allows for semantic descriptions of the negotiation objects and their attributes, and provides the means for the exchange of negotiation messages that can be unambiguously understood by all parties involved. It supports ad hoc alliances by allowing parties with a common interest to first negotiate on the proposal they want to make to other market participants. The paper outlines a software architecture and implementation technology for our framework, in which a rule-based reasoning engine is used to enact the negotiation strategy.

Title:

AGENT PROGRAMMING LANGUAGE WITH INCOMPLETE KNOWLEDGE - AGENTSPEAK(I)

Author(s):

Aditya Ghose , Duc Vo

Abstract: This paper proposes an agent programming language called AgentSpeak(I). This new language allows agent programs (1) to perform effectively with incomplete knowledge of the environment, (2) to detect goals that are no longer possible and re-plan them accordingly, and (3) to react to changes in the environment. Specifically, AgentSpeak(I) uses default theory as the agent belief theory; an agent always acts on its preferred default extension at the current time point (i.e. preferences may change over time). A belief change operator for default theories is also provided to help an agent program update its belief theory. Like other BDI agent programming languages, AgentSpeak(I) is given a transition-system semantics. The language appears well suited to intelligent applications and high-level robot control, which must perform in highly dynamic environments.

Title:

ANALYZING WEB CHAT MESSAGES FOR RECOMMENDING ITEMS FROM A DIGITAL LIBRARY

Author(s):

Tiago Primo , Leonardo Amaral , Gabriel Simões , Roberto Rodrigues , Thyago Borges , Stanley Loh , Daniel Lichtnow , Ramiro Saldaña

Abstract: This work presents a recommender system that analyzes textual messages sent during a communication session in a private Web chat, identifies the context of each message and recommends items from a Digital Library. Recommendations are made directly to users on the chat screen and are decided by the software through a proactive paradigm, without any request from the users. A domain ontology, containing concepts and a controlled vocabulary, is used to identify subjects in textual messages and to automatically classify items of the Digital Library.
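
The classify-then-recommend pipeline the abstract describes can be sketched in a few lines. The ontology concepts, vocabulary and library items below are invented for illustration, not taken from the paper: a message is matched against each concept's controlled vocabulary, and items indexed under the best-matching concept are recommended.

```python
# Toy domain ontology: concept -> controlled vocabulary (illustrative).
ONTOLOGY = {
    "databases": {"sql", "query", "relational", "index"},
    "networks":  {"tcp", "router", "packet", "latency"},
}
# Digital-library items pre-classified under the same concepts.
LIBRARY = {
    "databases": ["Intro to SQL tuning"],
    "networks":  ["TCP congestion control survey"],
}

def classify(message):
    """Pick the concept whose vocabulary overlaps the message most."""
    words = set(message.lower().split())
    scores = {c: len(words & vocab) for c, vocab in ONTOLOGY.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

def recommend(message):
    """Proactively map a chat message to library items, no user request."""
    concept = classify(message)
    return LIBRARY.get(concept, []) if concept else []

recs = recommend("can someone help me tune this slow sql statement")
```

A real system would add stemming and the ontology's concept hierarchy, but the control flow (message, concept, items) is the same.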

Title:

USING MOBILE AGENTS TO SEARCH FOR DISTRIBUTED INVISIBLE INFORMATION

Author(s):

Eurico Carrapatoso , Paula Oliveira

Abstract: Access to the information available on the Web is facilitated by diverse search engines. However, there is a large amount of information that is not accessible through these engines: the “Invisible Web” or “Deep Web”. For society to take advantage of these vast resources, it is important that efficient models for searching the Web are established and made available for wide use. In this context, a model based on mobile agents, suitable for searching for multimedia materials accessible through a network, is presented in this article. The model has been designed to be open, distributed, modular and platform independent. To validate the proposed model, an experimental prototype has been implemented, capable of searching heterogeneous databases accessible on the Web.

Title:

PERFORMANCE EVALUATION OF TCP/IP IN 802.11 WIRELESS NETWORKS

Author(s):

Sang Gap Lee , Dhinaharan Nagamalai , Beatice Cynthia Dhinaharan

Abstract: The increasing popularity of wireless networks indicates that wireless links will play an important role in future internetworks. TCP is a reliable transport protocol tuned to perform well in conventional networks made up of links with low bit-error rates. TCP was originally designed for wired networks, where loss of data is assumed to be due to congestion. However, networks with wireless and other lossy links also suffer significant losses due to high bit-error rates and handoff. TCP's assumption that data loss is caused by congestion therefore degrades end-to-end performance in wireless environments. Hence, a variety of mechanisms have been proposed to improve TCP performance over wireless links. In this paper we analyze the design and implementation of a simple protocol, called the snoop protocol, that improves the performance of TCP in wireless networks. The protocol modifies the network-layer software, mainly at the base station, and preserves end-to-end TCP semantics. Its main feature is to cache packets at the base station and to perform local retransmissions across the wireless link. The results of several experiments, performed by implementing the snoop protocol on a wireless testbed consisting of IBM ThinkPad laptops and a Pentium-based personal computer running BSD/OS 2.1 from BSDI, show that a reliable link-layer protocol that is TCP-aware provides very good performance.
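
The core mechanism of the snoop approach (caching at the base station, local retransmission, duplicate-ACK suppression) can be sketched as follows. This is a simplified illustration, not the actual snoop implementation: sequence numbers stand in for TCP byte ranges, and timers are omitted.

```python
# Simplified snoop-style cache at the base station (illustrative):
# segments heading to the wireless host are cached; a duplicate ACK
# triggers a local retransmission and is suppressed, so the fixed-host
# sender's congestion control never reacts to the wireless loss.
class SnoopCache:
    def __init__(self):
        self.cache = {}            # seq -> cached segment payload
        self.last_ack = -1
        self.local_retransmits = []

    def on_data(self, seq, payload):
        self.cache[seq] = payload  # cache before forwarding over wireless

    def on_ack(self, ack):
        """Return the ACK to forward to the sender, or None if suppressed."""
        if ack > self.last_ack:                        # new ACK: clean cache
            for seq in [s for s in self.cache if s <= ack]:
                del self.cache[seq]
            self.last_ack = ack
            return ack
        # Duplicate ACK: segment ack+1 was likely lost on the wireless hop.
        if ack + 1 in self.cache:
            self.local_retransmits.append(ack + 1)     # retransmit locally
        return None                                    # suppress duplicate

snoop = SnoopCache()
for seq in (1, 2, 3):
    snoop.on_data(seq, b"data")
# ACK 1, then a duplicate ACK 1 (segment 2 lost), then recovery ACK 3.
forwarded = [snoop.on_ack(a) for a in (1, 1, 3)]
```

The key property visible even in this sketch is that the duplicate ACK is absorbed at the base station while the lost segment is resent from the local cache, preserving end-to-end TCP semantics.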

Title:

INTRODUCING AN OPERATIONAL AND TECHNOLOGICAL E-COMMERCE FRAMEWORK FOR EUROPEAN SMES

Author(s):

Kostas Petropoulos , Achilleas Balatos , Ioannis Ignatiadis , Markus Lüken , Vladislav Jivkov

Abstract: Small and Medium Enterprises (SMEs) represent the driving force for local development and growth in European Less Favoured Regions (LFRs): geographically isolated areas characterized by poor business performance and a less developed and privileged economy. The introduction of e-commerce is considered an essential element for improving local SMEs' competitiveness and position in the global market, while simultaneously helping these regions overcome their geographical limitations and follow international business trends. In the context of the IST-2001-33251 LAURA project, the potential for regional and interregional e-commerce development has been analysed in four European LFRs (Epirus, Messinia, Saxony-Anhalt, and South Central Bulgaria). Based on these results and adopting a specific type of Virtual Organisation taxonomy (Request Based Virtual Organisation - RBVO), we present an operational and technological e-commerce framework adapted to the specific context of LFRs. The paper outlines the core identified factors that will influence the introduction and effect of e-commerce in Less Favoured Regions.

Title:

HETEROGENEOUS INTEGRATION OF SERVICES INTO AN OPEN, STANDARDIZED WEB SERVICE - A WEB SERVICE-BASED CSCW/L SYSTEM

Author(s):

Joerg Halbsgut

Abstract: There are currently a wide variety of services that are difficult or impossible to use because their interfaces, protocols and programming languages are either unknown or proprietary. In the future, this problem will be compounded by the growing range of services available, especially in the area of e-learning, and not least by the increasing number of service consumers (clients) and the resulting heterogeneity in terms of applications and protocols. The web service architecture presented in this paper uses the successfully applied open-source sTeam system to illustrate how arbitrary services can be integrated into a heterogeneous web service. A flexible service structure of this kind is designed to create standardized interfaces allowing new web-based interoperability.

Title:

EMBEDDING JAAS IN JAVA AGENT ROLES TO APPLY LOCAL SECURITY POLICIES

Author(s):

Giacomo Cabri

Abstract: Agents are an emerging technology that grants programmers a new way to exploit distributed resources. They are well suited to the development of enterprise applications, since they can act as active network components and execute on heterogeneous platforms and architectures. One of the hardest difficulties in developing agent-based applications is managing interactions, since agents must interact in a collaborative and/or competitive way to achieve their tasks. Roles are a powerful concept that can be used to model agent interactions, both between two (or more) agents and between agents and the environments where they run. Roles allow separation of concerns and code reusability, but they should be developed taking into account the permissions needed for the execution of their actions. The standard Java policy-file mechanism does not suffice in this scenario, since fine-grained permission management is required. This paper focuses on how to exploit the Java Authentication and Authorization Service (JAAS) at the role level in order to apply authorization and local policies to agents, limiting their operations.
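
The idea of attaching permissions to roles rather than to a global policy file can be shown language-agnostically. JAAS itself is a Java API; the Python sketch below only illustrates the concept the paper motivates, and all role and permission names are invented: an agent's rights change as it assumes or releases roles, giving the fine grain a single static policy file lacks.

```python
# Role-scoped permissions (conceptual sketch, not JAAS): each role
# carries its own permission set, and an agent is authorized for an
# action only while it holds a role granting that permission.
class Role:
    def __init__(self, name, permissions):
        self.name = name
        self.permissions = frozenset(permissions)

class Agent:
    def __init__(self, name):
        self.name = name
        self.roles = []

    def assume(self, role):
        self.roles.append(role)

    def release(self, role):
        self.roles.remove(role)

    def can(self, permission):
        # Authorization is evaluated against the currently held roles.
        return any(permission in r.permissions for r in self.roles)

bidder = Role("bidder", {"read_catalog", "place_bid"})
auctioneer = Role("auctioneer", {"read_catalog", "close_auction"})

agent = Agent("a1")
agent.assume(bidder)
allowed_bid = agent.can("place_bid")        # granted via the bidder role
allowed_close = agent.can("close_auction")  # not granted by any held role
```

In the JAAS setting, the equivalent step is executing the role's actions under a Subject whose permissions are scoped to that role, so a migrating agent is constrained by the local host's policy.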

Title:

SOFTWARE AGENTS FOR SUPPORTING STUDENT TEAM PROJECT WORK

Author(s):

Janice Whatley

Abstract: In this paper an agent system is described, which has been designed to support students undertaking team projects as part of their studies on campus or online. Team projects form an important part of the learning process for campus-based students, but are not easily incorporated into the learning activities of online students. The particular problems of working on projects in teams are explored, and an agent system was designed to support some of the maintenance tasks of team working. Agent technology is suggested because of the ease of communication between software agents and their autonomy of operation. The agent system has been tested on student teams working on campus, and the results indicate that this type of support agent may be helpful to students. A modified version of the agent system was successfully implemented, and the trial suggests that it may be scaled up for use over the Internet to support online student teams.

Title:

CUSTOMIZABLE DATA DISTRIBUTION FOR SYNCHRONOUS GROUPWARE

Author(s):

Stephan Lukosch

Abstract: The state of a groupware application must be shared to support interactions between collaborating users. There has been much discussion about the best distribution scheme for the state of a groupware application. Many existing groupware platforms support only one distribution scheme, e.g. a replicated or a central scheme, and apply the selected scheme to the entire application. No single scheme fits every groupware application well: different applications, and even single applications, have different requirements concerning data distribution. This paper describes DreamObjects, a development platform that simplifies the development of shared data objects. DreamObjects supports a variety of distribution schemes, which can be applied per shared data object. Additionally, it offers an interface that developers can use to introduce their own distribution schemes.
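
Per-object distribution schemes amount to a strategy pattern over shared state. The sketch below is illustrative only (class names and the example objects are invented, not DreamObjects' API): each shared object is constructed with its own scheme, so a latency-sensitive object can be replicated while a consistency-sensitive one stays central.

```python
# Two interchangeable distribution schemes behind the same interface
# (join/update/read), chosen per shared object.
class ReplicatedScheme:
    def __init__(self):
        self.replicas = {}                 # site -> local copy

    def join(self, site):
        self.replicas[site] = None

    def update(self, value):
        for site in self.replicas:         # push the update to every replica
            self.replicas[site] = value

    def read(self, site):
        return self.replicas[site]         # local read, no network hop

class CentralScheme:
    def __init__(self):
        self.value = None

    def join(self, site):
        pass                               # nothing is stored at the site

    def update(self, value):
        self.value = value                 # single authoritative copy

    def read(self, site):
        return self.value                  # a remote fetch in a real system

class SharedObject:
    def __init__(self, scheme):
        self.scheme = scheme               # distribution chosen per object

whiteboard = SharedObject(ReplicatedScheme())   # fast local reads matter
audit_log = SharedObject(CentralScheme())       # one consistent copy matters
for site in ("alice", "bob"):
    whiteboard.scheme.join(site)
whiteboard.scheme.update("sketch-v1")
audit_log.scheme.update("entry-1")
```

A developer-supplied scheme (the extension point the abstract mentions) would simply be another class implementing the same three operations.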

Title:

AMPLIA LEARNING ENVIRONMENT: A PROPOSAL FOR PEDAGOGICAL NEGOTIATION

Author(s):

Rosa Maria Vicari , Louise Seixas , João Carlos Gluz , Cecilia Dias Flores

Abstract: AMPLIA is an Intelligent Multi-Agent Learning Environment designed to support training in diagnostic reasoning and the modelling of domains with complex and uncertain knowledge. AMPLIA focuses on the medical area, where the learner's modelling task consists of creating a Bayesian network for a problem the system presents. A pedagogic negotiation process (managed by an intelligent Mediator Agent) handles the differences in topology and probability distribution between the model the learner built and the one built into the system. That negotiation process occurs between the agents that represent the expert knowledge domain and the agent that represents the learner's knowledge. Using Bayesian networks for knowledge representation allows learners to visualize the organization of their ideas and to create and test hypotheses.

Title:

MULTI-AGENT SYSTEMS AND THE SEMANTIC WEB - THE SEMANTICCORE AGENT-BASED ABSTRACTION LAYER

Author(s):

Marcelo Ribeiro , Carlos Lucena

Abstract: In the Web's first years, it was claimed that it would revolutionize the way people work and learn by creating a rich information environment in which everybody would cooperate through publishing and retrieving content. This promising model showed its limitations with the explosive growth of information. Many initiatives have addressed this problem, but none has gained as much attention as the Semantic Web proposal. The combination of machine-understandable content with human-oriented content can avoid information overload and create a new set of possibilities for software development and integration. Although the Semantic Web is still in its infancy, some proposals already address requirements for its creation. This paper presents SemantiCore, an agent-based abstraction layer for the Semantic Web. SemantiCore uses high-level agent-based abstractions to create applications for the Semantic Web, and relies on the middleware concept to allow integration with well-known technologies such as the FIPA-OS platform and the Web Services standards.

Title:

MODELLING MOBILE AGENT APPLICATIONS BY EXTENDED UML ACTIVITY DIAGRAM

Author(s):

Kenji Taguchi , Miao Kang

Abstract: Mobile agent technology has gained increasing importance in recent years. However, little work has been done on defining notations and languages to capture and model mobile agent applications. This paper presents extensions of UML activity diagrams for modelling mobile agent applications, which capture specific features of mobile agents such as mobility, cloning and communication. In order to demonstrate their applicability as a design notation, a mobile agent auction system is designed in the proposed notation and implemented using the Java Agent DEvelopment framework (JADE).

Title:

AN EVENT-BASED FRAMEWORK FOR SERVICE-ORIENTED COMPUTING

Author(s):

Kees Leune

Abstract: Service Oriented Computing (SOC) demands a framework that seamlessly integrates all connection points between business processes, services and associated support resources. To address this challenge, we introduce the Event-driven Framework for Service Oriented Computing (EFSOC) that is organized in three tiers: the event tier, the business process tier, and the access control tier. The event tier encompasses definitions of business-related events, and supports their propagation throughout the business process flow. The business process tier specifies the interactions between business processes and services and the access control tier defines access roles that are allowed to invoke certain services.
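
The interplay of EFSOC's three tiers can be sketched in code. The event names, roles and services below are invented for illustration, not EFSOC's actual vocabulary: an event propagates to subscribed process steps, and the access-control tier checks the subscriber's role before the corresponding service is invoked.

```python
# Minimal three-tier sketch (illustrative, not the EFSOC implementation):
# event tier = publish/subscribe, business process tier = handlers,
# access control tier = role checks before every service invocation.
class EventFramework:
    def __init__(self):
        self.subscribers = {}      # event type -> [(role, service handler)]
        self.grants = {}           # service handler -> allowed roles
        self.audit = []

    def subscribe(self, event_type, role, handler):
        self.subscribers.setdefault(event_type, []).append((role, handler))

    def grant(self, service, roles):
        self.grants[service] = set(roles)

    def invoke(self, service, role, payload):
        if role not in self.grants.get(service, set()):
            self.audit.append(("denied", role))        # access-control tier
            return None
        return service(payload)                        # business process tier

    def publish(self, event_type, payload):
        for role, handler in self.subscribers.get(event_type, []):
            self.invoke(handler, role, payload)        # event tier propagates

bus = EventFramework()
results = []
approve = lambda order: results.append(("approved", order))
bus.grant(approve, ["clerk"])
bus.subscribe("order_received", "clerk", approve)
bus.subscribe("order_received", "guest", approve)      # will be denied
bus.publish("order_received", "order-42")
```

Even at this scale the separation is visible: the business event is defined once, the process step reacts to it, and the role check gates the service independently of both.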

Title:

FROM CORBA TO WEB SERVICES COMPOSITION

Author(s):

Slimane Hammoudi

Abstract: CORBA has some positive aspects for developing applications, but its communication model is too limited to support interactions among clients and enterprise servers on the Web. Web Service technologies seem to offer a better answer for developing distributed applications on the Web: B2B, B2C and A2A. The first part of this paper discusses the evolution of CORBA and of Web Services, showing their benefits and limitations. The new solutions provided by Web Service technologies (XML, WSDL, UDDI and SOAP) are better adapted to the Web than CORBA. However, these technologies are not sufficient to compose Web Services, which remains a real challenge; workflow technology seems to be a better answer to it. The second part of this paper deals with this integration of workflow technology and Web Services as designed in WEWS, an architecture proposed to enable CORBA objects (wrapped as Web Services) and Web Services to work together with the benefits of workflow. An approach to transactions based on conversations plus an optimistic commit protocol is also presented, along with a comparison of our work with other proposals, highlighting similarities and differences.

Title:

A CONTACT RECOMMENDER SYSTEM FOR A MEDIATED SOCIAL MEDIA

Author(s):

Laurence Vignollet , Jean-Charles Marty , Michel Plu , Layda Agosto Franco

Abstract: Within corporate intranets and on the WWW, a global search engine is the main service used to discover and sort information. Nevertheless, even the most "intelligent" engines have great difficulty selecting results targeted to each user's specific needs and preferences. We have built a mediated social medium named SoMeONe, which helps people control their information exchanges through trusted relationships. A key component of this system is a contact recommender, which helps people open up their relationship networks by exchanging targeted information with qualified new users. Instead of only matching users' interests, this "socially aware" recommender system also takes into account existing relationships in the system's social network. In this paper we describe the computation of these recommendations based on social network analysis.

Title:

E-COMMERCE PENETRATION AND ORGANIZATIONAL LEARNING IN SMES

Author(s):

Élisabeth Lefebvre , Onno Omta , Louis-A. Lefebvre , Elie Elia

Abstract: This paper attempts (i) to assess the relative importance of the benefits related to the gradual unfolding of business-to-business e-commerce (B2B e-commerce) penetration among manufacturing SMEs and (ii) to demonstrate that the scope and intensity of these benefits increase in the later stages of e-commerce penetration as organizational learning gradually takes place. Empirical evidence strongly suggests that these benefits are cumulative and that organizational learning allows SMEs to reap them.

Title:

AGENT BASED DECENTRALIZED WORKFLOW ENACTMENT: COMPILATION AND TRANSFORMATION OF WORKFLOW MODELS

Author(s):

Hugo Miguel Mendes Ferreira

Abstract: Today’s workflow management systems are distributed yet centralized information systems. In an attempt to increase the flexibility, robustness and scalability of such systems, a decentralized workflow engine based on autonomous mobile agents is being developed, allowing the creation of a flexible and robust solution. Unlike previous work in this area, this article focuses on process flow control and on the use of workflow patterns to describe and support such flow control. In essence, the article describes how workflow models are compiled and transformed into a set of agents that enact the workflow process. During development and testing, several issues concerning process compilation, agent creation and process execution arose; some of these are also briefly described.

Title:

DESIGNING QUALITY WEB APPLICATIONS USING WEB PATTERNS

Author(s):

Andreas S. Andreou , Stephanos Mavromoustakos

Abstract: Patterns are commonly utilized by Web developers for reusability purposes. However, this paper shows how Web patterns can also enhance the quality of Web applications. Firstly, Web quality is divided into five major components, namely usability, functionality, reliability, efficiency, and maintainability. Secondly, the relationship of these quality components with certain Web patterns is demonstrated and a set of guidelines for designing quality Web applications using these patterns is proposed. A successful Web site is then used as a case study to demonstrate the efficacy of the proposed guidelines. The Web patterns utilized by the site under study are identified and matched against the proposed list of patterns. Finally, we investigate how these patterns contribute to the success of the specific Web application.

Title:

A HYBRID COLLABORATIVE RECOMMENDER SYSTEM BASED ON USER PROFILES

Author(s):

Giovanni Semeraro , Oriana Licchelli , Marco Degemmis , Maria Francesca Costabile , Stefano Paolo Guida , Pasquale Lops

Abstract: Nowadays, users are overwhelmed by the abundant amount of information created and delivered through the Internet. In the e-commerce area especially, the catalogues of the largest sites offer millions of products for sale and are visited by users with a variety of interests, so providing customers with personal advice is of particular interest: Web personalization has become an indispensable part of e-commerce. One type of personalization that many Web sites have started to embody is the recommender system, which provides customers with personalized advice about products or services. Collaborative systems currently represent the state of the art of the recommendation engines used in most e-commerce sites. In this paper, we propose a hybrid method that aims to improve collaborative techniques by means of user profiles that store knowledge about user interests.

Title:

FEDERATED MEDIATORS FOR QUERY COMPOSITE ANSWERS

Author(s):

Dong Cheng

Abstract: Capturing, structuring and exploiting the expertise or capabilities of an "object" (such as a business partner, an employee, a software component, or a Web site) are crucial problems in various applications, such as cooperative and distributed applications or e-business and e-commerce applications. The work described in this paper concerns advertising the capabilities, or know-how, of an object. The capabilities are structured and organized so that they can be used when searching for objects that satisfy a given objective or meet a given need. One original aspect of our proposal is the nature of the answers the system can return. The answers are not yes/no answers; they may be cooperative answers, in the sense that when no single object meets the search criteria, the system attempts to find a set of "complementary" objects that jointly satisfy the whole search criteria, each object in the resulting set satisfying part of them. In this approach, Description Logics (DL) is used as the knowledge representation formalism and classification techniques are used as search mechanisms. The determination of the "complementary" objects is founded on the DL notion of complement.
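
The idea of a composite answer can be illustrated with a simple cover computation. The paper's actual mechanism is DL classification over complement concepts; the greedy set cover below (with invented object names and criteria) only sketches the outcome it produces: when no single object satisfies all criteria, complementary objects are assembled that jointly do.

```python
# Greedy assembly of complementary objects (illustrative sketch, not
# the paper's DL-based algorithm): repeatedly pick the object covering
# the most still-unsatisfied criteria.
def composite_answer(criteria, objects):
    """objects: name -> set of criteria it satisfies.
    Returns a list of names jointly covering all criteria, or None."""
    needed, answer = set(criteria), []
    while needed:
        best = max(objects, key=lambda o: len(objects[o] & needed))
        gained = objects[best] & needed
        if not gained:
            return None            # some criterion is satisfied by no object
        answer.append(best)
        needed -= gained
    return answer

# No single object speaks English, knows Java and does graphics, but a
# pair of complementary objects covers the whole request.
objects = {
    "translator": {"english", "french"},
    "designer":   {"graphics"},
    "developer":  {"java", "graphics"},
}
team = composite_answer({"english", "java", "graphics"}, objects)
```

The DL formulation refines this picture: the criteria left uncovered by a candidate object are characterized via concept complement, and classification finds objects subsumed by that remainder.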

Title:

REDUCING SPAM: A SIMPLE SOLUTION.

Author(s):

Chris Rose

Abstract: Unsolicited e-mail, otherwise called spam, continues to flood the inboxes of all Internet users: it is estimated that more than half of all e-mail, or over one trillion pieces of spam, will reach the inboxes of Internet users in 2003. The problems of controlling spam are many, since: (a) spam is virtually free for the sender; (b) the SMTP protocol, which governs the transmission of e-mail on the Internet, was not designed to handle the complexities of deception and mistrust on a large network; and (c) many major corporations are surreptitiously involved in spam. Although the development of a social conscience might keep some large corporations from engaging in spam, spam as we know it would cease to exist only if either the cost of sending e-mail increased or a new secure protocol for exchanging e-mail were developed. Of the two options, the quickest and easiest remedy would be to eliminate the reverse economics of sending spam by introducing a computing cost for sending e-mail.
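The "computing cost" idea the abstract describes is commonly realized as a hashcash-style proof of work (our illustration; the paper's concrete scheme may differ): the sender must find a nonce whose hash over the message header has a given number of leading zero bits, which is expensive to mint but cheap to verify.

```python
# Hashcash-style proof-of-work sketch (illustrative; not necessarily the
# paper's mechanism). Minting costs the sender CPU time that grows
# exponentially with `bits`; verification costs a single hash.
import hashlib
from itertools import count

def leading_zero_bits(digest: bytes) -> int:
    # Count leading zero bits of a hash digest.
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def mint(header: str, bits: int = 12) -> str:
    # Search nonces until the stamp's hash has enough leading zero bits.
    for nonce in count():
        stamp = f"{header}:{nonce}"
        if leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= bits:
            return stamp

def verify(stamp: str, bits: int = 12) -> bool:
    return leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= bits
```

With, say, 20 bits of work per recipient, sending a handful of messages is imperceptible, while sending millions becomes economically unattractive, which is precisely the reversal of spam economics the author argues for.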

Title:

HOW TO BUILD A MULTI-MULTI-AGENT SYSTEM: THE AGENT.ENTERPRISE APPROACH

Author(s):

Tim Stockheim

Abstract: The maturity of the technical foundations for multi-agent systems, together with support from development tools, infrastructure services, and a number of development methodologies, has led to a growing number of deployed multi-agent systems. In an increasingly networked environment, coupling these heterogeneous systems into large multi-multi-agent systems is required. Unfortunately, the design and implementation steps necessary in this context are currently not supported by established development methodologies; conventional approaches mainly focus on isolated multi-agent systems. In this paper, we present an approach for the integration of heterogeneous multi-agent systems. The Agent.Enterprise system, a coupled multi-multi-agent system, has been designed and tested in the manufacturing logistics domain.

Title:

ARCHCOLLECT FRONT-END: A WEB USAGE DATA MINING KNOWLEDGE ACQUISITION MECHANISM FOCUSED ON STATIC OR DYNAMIC CONTENTING APPLICATIONS.

Author(s):

Ahmed Esmin , Tiago Carneiro , Joubert Lima

Abstract: A knowledge acquisition mechanism is essential to every Web usage mining project, and it can be implemented on the user side or on the servers. This paper presents a loosely coupled mechanism that acquires knowledge only from the Web browser, separates requests into two streams (one for the monitored application and one for the server, called ArchCollect), and has a parser that automatically inserts the knowledge acquisition mechanism into static or dynamic user pages. It is flexible, since the monitored applications can be developed in the HTML, DHTML, XHTML or XML markup languages. It is scalable, since it can handle massive network traffic by adopting scalable ArchCollect servers or scalable internal components. It is efficient, since it drastically reduces preprocessing by sharing this hard activity among all users, and since it performs no log file interpretation or completion. It is reliable, since it eliminates browser and server cache problems. The project can collect layout, usage and performance data, providing a general application focus, as proposed by Srivastava et al.

Title:

A DYNAMIC AGGREGATION MECHANISM FOR AGENT-BASED SERVICES

Author(s):

Jerome Picault

Abstract: At a time when the web is switching from a data-oriented view to a service-oriented view, we can envision an environment where services are dynamically and automatically combined to solve new problems that one single service cannot solve. Agent technology provides a good basis for creating such an environment but many issues remain to be solved. This paper presents a step towards a dynamic service aggregation mechanism, introducing a pragmatic approach and an implementation. This work was carried out in the context of the Agentcities.RTD EU project.

Title:

CAN AVATARS REPLACE THE TRAINER? A CASE STUDY EVALUATION

Author(s):

Elaine  Ferneley , Ahmad Kamil Mahmood

Abstract: E-learning implementations have become an important agenda item for academic and business institutions as an enabler that complements their education and training needs. However, many existing e-learning systems present several limitations, such as being static, passive and consisting of a time-consuming set of services. This has highlighted the need for functionality that allows more creativity, autonomy, and flexibility on the part of the learner. The inclusion of avatar technology in e-learning environments has attracted growing interest, aiming to encourage the learner to become more engaged and motivated whilst augmenting the use of human trainers. However, empirical investigations of the effect of animated agents on teaching and learning have revealed diverse results, on a continuum from avatars being helpful to their being distracting. This research evaluates the utility of avatars. Unusually, the research has chosen a qualitative interpretive approach with supporting case study data as its methodology. The justification for the research approach is given, and the initial findings are presented together with a proposed conceptual framework.

Title:

COMBINING ONE-CLASS CLASSIFIERS FOR MOBILE-USER SUBSTITUTION DETECTION

Author(s):

Seppo Puuronen , Oleksiy Mazhelis

Abstract: Modern personal mobile devices, such as mobile phones, smartphones, and communicators, can easily be lost or stolen. Given the power and functional abilities of these devices, their use by an unintended person may result in a severe security incident concerning private or corporate data and services. Means of user substitution detection are needed to detect situations where a device is used by a non-legitimate user. In this paper, the problem of user substitution detection is treated as a one-class classification problem, where the current user behavior is classified as that of the legitimate user or of another person. Different behavioral characteristics are analyzed independently by dedicated one-class classifiers. To combine the classifications produced by these classifiers, a new combining rule is proposed. The rule is applied in a way that makes the outputs of the dedicated classifiers independent of the dimensionality of the underlying behavioral characteristics. As a result, the overall classification accuracy may improve significantly, as illustrated in the simulated experiments presented.
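The abstract does not state the combining rule itself; a common way to make per-classifier outputs comparable regardless of the dimensionality of each feature space, sketched here purely for illustration, is to calibrate each classifier's raw distance score against scores observed for the legitimate user and then average the calibrated values:

```python
# Generic sketch of combining one-class classifiers (the paper's specific
# rule is not given in the abstract). Each classifier's raw score is
# calibrated against legitimate-user reference scores, so the combined
# output does not depend on the dimensionality of any feature space.
from bisect import bisect_right

class CalibratedOneClass:
    def __init__(self, train_scores):
        # Distance scores of the legitimate user on held-out data.
        self.ref = sorted(train_scores)

    def legit_probability(self, score):
        # Fraction of reference scores strictly above `score`: distances
        # that are rare for the owner yield a low probability.
        rank = bisect_right(self.ref, score)
        return 1.0 - rank / len(self.ref)

def combine(classifiers, scores):
    # Average of calibrated outputs; higher values suggest the owner.
    probs = [c.legit_probability(s) for c, s in zip(classifiers, scores)]
    return sum(probs) / len(probs)
```

Because each classifier emits a dimensionless probability-like value, a classifier over a 50-dimensional keystroke profile and one over a 2-dimensional call-pattern profile contribute on equal footing.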

Title:

A WIRELESS APPLICATION THAT MONITORS ECG SIGNALS

Author(s):

Jimena  Rodriguez Arrieta , Lacramioara Dranca

Abstract: In this paper, we present an innovative on-line monitoring system developed by applying new advances in biosensors, mobile devices and wireless technologies. The aim of the system is to monitor people who suffer from heart arrhythmias without requiring hospitalization, so that they can lead a normal life while feeling safe. On the one hand, the architecture of the system is presented; on the other, performance results and implementation details are given that show how this solution can be effectively implemented and deployed in a system that makes use of PDAs and wireless communications: Bluetooth and GPRS. Moreover, special attention has been paid to two aspects: the cost of the wireless communications and the delay time for detected serious heart anomalies.

Title:

BULB – VISUALISING BULLETIN BOARD ACTIVITY

Author(s):

David Elsweiler , Alasdair Mac Cormack , John Ferguson , Rehman Mohamed

Abstract: Visualisation is well known as an effective means of enriching user interaction with complex systems. Recent research with online communities has considered the application of visualisation tool support, with the intention of further improving communication between community members. This paper reviews existing work in this area with specific reference to the application of visualisation to improve user interaction within online forums such as bulletin boards. The paper then outlines work undertaken by the authors to develop a second-generation visualisation tool - ‘BulB’.

Title:

ANALYSIS OF PRIORITY AND PARTITIONING EFFECTS ON WEB CRAWLING PERFORMANCE

Author(s):

Ali Mohammad Zare Bidoki , Mazeiar Salehie

Abstract: For nearly a decade, broad web search engines, as well as many more specialized search tools, have used web crawlers to acquire and update large repositories of web objects for indexing and analysis. Because of the volatile nature of the web and the proliferation of web objects, building a high-performance web crawler remains more complex than building the other components of a typical search engine. Freshness of the page repository is one of the major metrics for assessing the performance of a web crawler. In addition, network resources, I/O performance and OS limits must be taken into account in order to achieve high performance at a reasonable cost. The main purpose of this paper is to analyze how the importance factor, multi-crawling and partitioning affect the freshness of the web page repository of a typical web search engine. By means of several experiments in a simulated environment, we tested several parameters to improve the freshness of the repository.

Title:

INTEGRATING SOFTWARE AGENTS WITH THE EXISTING WEB INFRASTRUCTURE

Author(s):

Leslie Yu , Qusay Mahmoud

Abstract: The mobile agent (MA) paradigm presents itself as a viable communication approach not only in wired computing, but even more so in disconnected mobile computing environments. The fundamental way in which distributed systems interact today is through the client-server paradigm, which has been around since the 1970s. In this paper, we examine some of the performance and extensibility advantages that the mobile agent paradigm brings to the table, and how mobile agents can provide a better web browsing and information retrieval experience for end-users in both wired and wireless computing environments. We also look at a few hurdles that are keeping mobile agents from becoming commonplace. This is followed by a novel approach for integrating mobile agents into existing Web sites.

Title:

A VIRTUAL ASSISTANT FOR WEBSITES

Author(s):

Feliz Gouveia , Stanley Loh , Daniel Brahm , Lizandro Silva , José Luiz Duizith , Gustavo Tagliassuchi

Abstract: This work presents a Virtual Assistant (VA) whose main goal is to supply information to Website users. A VA is a software system that interacts with people through a Web browser, receiving textual questions and answering them automatically, without human intervention. The VA supplies information by looking for similar questions in a knowledge base and giving the corresponding answer. Artificial Intelligence techniques are employed in this matching process to compare the user's question against the questions stored in the base. The main advantage of the VA is to minimize information overload when users get lost in Websites: the VA can guide the user across the web pages or supply information directly. This is especially important for customers visiting an enterprise site looking for products, services or prices, or needing information about some topic. The VA can also help in Knowledge Management processes inside enterprises, offering people an easy way to store and retrieve knowledge. An extra advantage is a reduction in Call Center structure, since the VA can be given to customers on a CD-ROM. Furthermore, the VA provides Webmasters with statistics about its usage (most-asked themes, number of visitors, conversation times).

Title:

A WEB-ENABLED MOBILE AGENT PLATFORM FOR E-COMMERCE

Author(s):

Leslie Yu , Qusay Mahmoud

Abstract: A side effect of our increasingly information-driven economy and lifestyle is the annoyance of information overload. Everywhere we go, we are bombarded by email, spam, online advertisements, beepers beeping, cell phones ringing, and incoming SMS messages. The wealth of information available at our fingertips online is both a blessing and a curse. In this paper, we discuss the implementation details of our mobile agent system, which tries to automate the process of online shopping. With the aid of user location information, mobile agents are deployed to wade through the mountains of information online in order to comparison shop on our behalf while filtering out irrelevant information. The idea behind deploying such a system is given first, followed by a tour through its simple API; finally, the paper delves into a discussion of security and of how our system can be seamlessly integrated with the existing infrastructure.

Title:

RECENT RESEARCH AND FUTURE DIRECTIONS IN MOBILE AGENTS FOR MOBILE DEVICES

Author(s):

Xining Li , Qusay Mahmoud , Zhujun Xu

Abstract: Due to the potentially explosive growth of mobile devices, people have shown great interest in the field of mobile computing through various wireless applications. It benefits users who would never have used a computer or who are simply not able to get one. However, most mobile devices suffer from limited computational resources, such as low bandwidth and slow network connections. The low network-connectivity requirements of mobile agents make them a promising tool for mobile devices. In this paper, we present an overview of currently available mobile agent platforms developed for mobile devices. Given a classification of the approaches underlying these platforms, the paper discusses both their advantages and disadvantages. To conclude, limitations of and concerns about embedded mobile agent platforms are discussed.

Title:

ON ONTOLOGY MATCHING PROBLEMS (FOR BUILDING A CORPORATE SEMANTIC WEB IN A MULTI-COMMUNITIES ORGANIZATION)

Author(s):

Thanh Le BACH , Rose Dieng-Kuntz

Abstract: Ontologies are nowadays used in many domains, such as the Semantic Web and information systems, to represent the meaning of data and data sources. In the framework of knowledge management in a heterogeneous organization, materializing the organizational memory in a “corporate semantic web” may require integrating the various ontologies of the organization's different groups. To build a corporate semantic web in a heterogeneous, multi-communities organization, it is essential to have methods for comparing, aligning, integrating or mapping different ontologies. This paper proposes a new algorithm for matching two ontologies that is based on all the information available about them (e.g. their concepts, relations, and the structure of each hierarchy of concepts or relations), applying the TF/IDF scheme (a method widely used in the information retrieval community) and integrating WordNet (an electronic lexical database) in the ontology matching process.
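The TF/IDF weighting borrowed from information retrieval can be sketched as follows (a minimal illustration with hypothetical concept names; the paper's full algorithm additionally exploits hierarchy structure and WordNet): each concept label is tokenized, tokens are weighted by term frequency times inverse document frequency over the ontology's vocabulary, and labels are compared by cosine similarity.

```python
# TF/IDF + cosine similarity sketch for comparing concept labels
# (illustrative only; concept names below are hypothetical).
import math
from collections import Counter

def tfidf_vectors(documents):
    # Each "document" is the token list of one concept label.
    n = len(documents)
    df = Counter(t for doc in documents for t in set(doc))
    vectors = []
    for doc in documents:
        tf = Counter(doc)
        # Weight = term frequency * log(inverse document frequency).
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Hypothetical concept labels from two ontologies.
concepts = [["research", "staff"], ["research", "team"], ["sales", "staff"]]
vecs = tfidf_vectors(concepts)
```

The IDF term automatically down-weights tokens shared by every concept, so that matching is driven by the discriminative parts of the labels.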

Title:

A RECOMMENDATION BASED FRAMEWORK FOR ONLINE PRODUCT CONFIGURATION

Author(s):

Thomas Leckner , Nikos  Karacapilidis

Abstract: Adopting a mass customization strategy, enterprises often enable customers to specify their individual product wishes by using web based configurator tools. With such tools, customers can interactively and virtually create their own instance of a product. However, customers are not usually supported in a comprehensive way during the configuration process, thus facing problems such as complexity, uncertainty, and lack of knowledge. To address the above issue, this paper presents a framework that aids customers in selecting and specifying individualized products by exploiting recommendations. Having first focused on the characteristics of configurator tools and the principles of model-based configuration, we then introduce the concept of masks for product models. The main contribution of this paper is the proposal of an integrated approach for supporting model-based product configurator tools by similarity-based recommendations. Our approach in providing recommendations has been based on the widely accepted theory of Fuzzy Sets and its associated concept of similarity measures, while recommendations provided are based on the processes of stereotype definitions and dynamic customer clustering.

Title:

A PATTERN FOR INTERCONNECTING DISTRIBUTED COMPONENTS

Author(s):

Khalid  Benali , Claude Godart , Walid GAALOUL , Karim  Baïna

Abstract: Nowadays, enterprises express huge needs for mechanisms allowing the interconnection of their business components. Due to the weakness of component integration facilities, a large amount of research and development has been done in this area. Nevertheless, the mechanisms developed are generally hard-coded, proprietary and lacking in abstraction. This paper presents our contribution to the design, implementation, and experimentation of an architectural pattern named “Service”. This pattern supports interconnection and cooperation between distributed components independently of their specific contexts (workflow processes, database robots, agents, network nodes, etc.). Our “Service” pattern proposes a generic solution to interconnection and cooperation between components through object-oriented structures and scenarios. The essence of the pattern is the ability of a “Service” to provide registration, discovery, negotiation and dynamic API information on behalf of a contained service. Moreover, several alternatives for implementing the pattern are presented.

Title:

SOLVING TRANSACTIONAL CONTROL IN CURRENT MANAGEMENT FRAMEWORKS

Author(s):

Vitor Roque , Rui Pedro Lopes , José Luis Oliveira

Abstract: Today’s information systems are typically based on a large number of heterogeneous computing devices connected through communication networks, joining together various resources, services, and user applications. These resources and applications are now indispensable to organizations, but as the whole system becomes larger and more complex, a growing number of elements can be the source of disruption to critical business operations. Network management has therefore gained great importance in recent years, due to enterprises' increased dependence on their computer systems, networks and networked applications. This dependence has made the availability and performance of the network infrastructure and network services more critical than ever. In addition, the growth in size and complexity of modern networks increases the need for standard configuration mechanisms for efficient network management. These mechanisms are expected to be strongly related to fault-tolerance systems as well as to performance management systems. The concept of policy-based management has emerged during the last years as an adequate paradigm for this type of requirement, and it has been widely supported by standards organizations such as the IETF and DMTF; in particular, the Policy Working Group is chartered to define a scalable and secure framework for policy definition and administration. Due to the diversity and types of equipment involved, policy-based management applications can be very complex in structure, with complex relationships between their constituent parts. Because of this, the success of network operations (configuration operations and others) is a critical issue in network management and deserves great attention; indeed, transactional control mechanisms are today receiving great attention in the scope of network management, and they are particularly important in the context of policy-based network management.
Here, we identify the shortcomings of current management paradigms concerning transactional control, and we propose a policy-based network management system that allows operations to be specified over aggregations of agents and that provides high-level atomic transactions.

Title:

TURNING THE WEB INTO AN EFFECTIVE KNOWLEDGE REPOSITORY

Author(s):

Luís Veiga , Paulo Ferreira

Abstract: To fulfill Vannevar Bush's Memex and Ted Nelson's hypertext vision of a world-size interconnected store of knowledge, there are still quite a few rough edges to smooth. There are no large-scale mechanisms to enforce referential integrity in the WWW, and the weight of dynamically generated content relative to static content has grown enormously; preserving accessibility to this type of content raises new issues. We propose a system, comprising a distributed web-proxy and cache architecture, to access and automatically manage web content, both static and dynamically generated. It is combined with an implementation of a cyclic distributed garbage collection algorithm for wide-area memory. It correctly handles dynamic content, enforces referential integrity on the web, and is complete w.r.t. minimizing storage waste.

Title:

COMPOSITION OF WEB SERVICES IN THE ICS ARCHITECTURE

Author(s):

Carlos Roberto Baluz

Abstract: This paper proposes the use of Web Services composition to enhance the matchmaking process currently in use within the ICS (Intelligent Commerce System), a Business-to-Business e-commerce system. The current matchmaking process used in the ICS considers only single services and may return a high number of false-negative results. The new approach aims to reduce the number of false negatives through the composition of existing single services to obtain new functionality.

Title:

AGENT-ORIENTED DESIGN OF E-COMMERCE SYSTEM ARCHITECTURE

Author(s):

T. Tung Do , Manuel Kolp , Stéphane Faulkner , Adrien Coyette

Abstract: Agent architectures are gaining popularity for building open, distributed, and evolving software required by e-commerce applications. Unfortunately, despite considerable work in software architecture during the last decade, few research efforts have aimed at truly defining patterns and languages for agent architectural design. This paper proposes a modern approach based on organizational structures and architectural description languages to define and specify agent architectures notably in the case of e-commerce system design.

Title:

EFFICIENT MULTICAST E-SERVICES OVER APPCAST: EXPLOITING NETWORK TOPOLOGY AND BROADCAST MEDIA PROPERTIES

Author(s):

Radha Vedala , A.K Pujari , V.P Gulati

Abstract: Multicasting is well known as a bandwidth-conserving technology. Yet applications have been unable to exploit the broadcast property of the medium, or to reduce the movement of redundant packets over common network paths, because applications are written in unicast mode owing to the complexity of multicast application programming support. Researchers have therefore turned to alternative multicast mechanisms such as Application Layer Multicast (ALM), wherein participating hosts are arranged in overlay topologies such as trees or meshes and route data among themselves in normal unicast mode. In this paper we discuss a new application-layer topology, APPCAST, and show a common application architecture for both unicast and multicast using SOAP, with no special effort from the programmer for multicast. The architecture also allows the broadcast nature of the medium to be exploited.

Title:

DESIGNING A WEB-BASED APPLICATION FRAMEWORK

Author(s):

Liping Zhao , Abdelgadir  Ibrahim

Abstract: A framework can be viewed as a design scheme from which application systems derive. This article illustrates the design of a time booking framework. It describes the various design steps and considerations, from requirements gathering and architectural arrangement to the organisation of classes, and it shows that the framework can easily be extended to implement an application system.

Title:

OPTIMAL ALLOCATION IN SEQUENTIAL INTERNET AUCTION SYSTEMS WITH RESERVE PRICE

Author(s):

Wuyi YUE , Li Du , Qiying Hu

Abstract: In this paper, we present a new performance model, and an analysis of its optimal allocation, for a sequential Internet auction system with a reserve price. In such a system, a seller wants to sell a given number of items through sequential auctions on the Internet and has a reserve price for each item. For each auction, the seller must allocate a quantity of items, from the total available, to be auctioned. Buyers arrive according to a Poisson process and bid honestly (without collusion, etc.). We first model the sequential Internet auction as a Markov decision process and present its performance analysis. In the analysis, we show that the result is the same whether the reserve price is private (known only to the seller) or public (posted on the web). We then establish monotone properties of the optimal policy: the more items in hand, or the fewer auction horizons remaining, the more items are allocated for auction. Finally, numerical results are given, in which we compute the maximal expected total revenue and the optimal allocation, display the effect of the arrival rate, and discuss the optimal reserve price and the available number of auctions.
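The monotone structure of the optimal policy can be illustrated with a toy finite-horizon dynamic program. This is our simplification, not the paper's Markov decision model: we assume a known concave expected-revenue function R(q) per auction (standing in for the expectation over Poisson bidder arrivals and the reserve price) and solve V(n, k) = max_q [ R(q) + V(n-1, k-q) ].

```python
# Toy finite-horizon DP for sequential allocation (illustrative; the
# paper's model derives expected revenue from Poisson arrivals and a
# reserve price, which we replace with an assumed concave R(q)).
import math
from functools import lru_cache

def R(q):
    # Assumed concave expected revenue from auctioning q items at once.
    return math.sqrt(q)

def solve(horizon, stock):
    # Return the optimal quantity to allocate in the first auction.
    @lru_cache(maxsize=None)
    def V(n, k):
        if n == 0 or k == 0:
            return 0.0, 0
        # Choose the allocation q maximizing current plus future revenue.
        best = max(range(k + 1), key=lambda q: R(q) + V(n - 1, k - q)[0])
        return R(best) + V(n - 1, k - best)[0], best
    return V(horizon, stock)[1]
```

Under this assumed R, the solver reproduces the abstract's qualitative claim: allocations grow weakly with the stock in hand and with fewer auctions remaining.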

Title:

TOOLKITS SUPPORTING OPEN INNOVATION IN E-GOVERNMENT

Author(s):

Alexander Felfernig , Manfred Wundara

Abstract: Today there exist a variety of efforts to bring public administration closer to its customers (citizens, entrepreneurs, etc.). This paper investigates the concept of open innovation w.r.t. its applicability in the area of e-government. The concept is well known within the context of mass customization of products and services, i.e. producing and selling customer-individual products and services under mass-production pricing conditions. The authors show how approaches from the area of artificial intelligence can be applied as tools for open innovation in e-government.

Title:

FORMATION AND FULFILLMENT OF ELECTRONIC CONTRACTS IN THE ICS

Author(s):

Sofiane Labidi , Nathalia  R. S. Oliveira

Abstract: This work is part of the ICS project (Intelligent Commerce System), whose aim is to design and implement an effective B2B e-commerce system based on mobile and intelligent agents. The ICS lifecycle comprises five phases: User Modeling, Matchmaking, Negotiation, Contract Formation and Contract Fulfillment. We propose here an automated process for the Contract Formation and Fulfillment phases. We present an ontology for sharing knowledge between the agents that participate in the negotiation, as well as a repository to store contract templates and contract instances. To manage contract fulfillment, a temporal workflow and ECA rules are applied.

Title:

MODELLING WEB SERVICES INTEROPERABILITY

Author(s):

Tarak Melliti

Abstract: With the development of the semantic Web, the specification of Web services has evolved from a ``remote procedure call'' style to a behavioral description including the standard constructors of programming languages. Such a transformation introduces new problems, since traditional clients will not be able to interact with these sophisticated services. In this work, we develop a generic agent capable of fully controlling the interaction process with a Web service given its XLANG behavioral description (XLANG being one of these languages). First, we give an operational semantics to XLANG in terms of timed transition systems. Then we define a relation between two communicating systems that formalizes the concept of a correct interaction, and we propose an algorithm that either detects ambiguity in the Web service or generates a timed deterministic automaton that controls the agent's behavior during the interaction with the service. Starting from these theoretical developments, we have built a platform that ensures for the user the correct handling of any complex Web service dynamically discovered through the Web.

Title:

E-SERVICES IN MISSION-CRITICAL ORGANIZATIONS: IDENTIFICATION ENFORCEMENT

Author(s):

Carlos Costa , José Luis  Oliveira , Augusto Silva

Abstract: The increasing dependency of enterprises on IT has raised major concerns about security technology and procedures. Access control mechanisms, which are the core of most security policies, are mostly based on PINs and, sometimes, on Public Key Cryptography (PKC). Although these techniques are already broadly disseminated, the storage and retrieval of security secrets remains a sensitive and open issue for organizations and users. One possible solution is to use smart cards to store digital certificates and private keys. However, in some organizations even this does not solve the security problems: when users deal with sensitive data and it is mandatory to prevent the delegation of access privileges to third persons, new solutions must be provided. In this case, access to the secrets can be enforced by a three-factor scheme: possession of the token, knowledge of a PIN code, and fingerprint validation. This paper presents a Professional Information Card system that dynamically combines biometrics with PKC technology to assure stronger authentication and can be used equally in Internet and Intranet scenarios. The system was designed to fulfill the access control requirements of today's mission-critical enterprises, and was deployed, as a proof of concept, in the Healthcare Information System of a major Portuguese hospital.

Title:

CONTEXT AWARE COLLABORATION IN ENTERPRISES

Author(s):

Krithi Ramamritham , Sridhar V , Harish Kammanahalli , Srividya Gopalan

Abstract: Providing the most relevant information at the most appropriate time and location helps improve overall enterprise productivity. Contextual information plays a key role in achieving this objective: the richer and deeper the context, the higher the relevance and appropriateness. In this paper, we discuss the various aspects of a context and the ways and means of tracking it, so as to exploit the most recent and presumably accurate description of the business situation when delivering information to assist in collaboration. Further, we discuss the role of data and application grids in meeting real-time delivery requirements.

Title:

ANTECEDENTS OF SUCCESSFUL WEB BASED COMMUNITIES FOR DISABLED CITIZENS

Author(s):

James Lawler

Abstract: In this period of constrained economic conditions, this study initiates an analysis of the critical success factors in the implementation of World Wide Web-based communities in the non-profit sector, focusing on communities of disabled citizens with mental health conditions in the City of New York. Non-profit organizations in New York depend ever more upon Web technology to help disabled members in the city, as the disruption from the September 11 disaster continues to impact social services. Though investment in technology is limited in the non-profit sector, the preliminary analysis of this study indicates that the implementation of community networks serving rehabilitating members of society is facilitated by distinct enabling factors of cohesion, effectiveness, help, language, relationship and self-regulation in the innovation of the supporting Web sites. The analysis contributes insight into the dynamics of Web communities that is applicable in an international context, and the study furnishes a new framework for researching Web-based communities in the non-profit sector.

Title:

GRIDBLOCKS - WEB PORTAL AND CLIENT FOR DISTRIBUTED COMPUTING

Author(s):

Marko Niinimaki

Abstract: GridBlocks is an architecture, and a reference implementation, of a distributed computing platform for heterogeneous computer clusters. It can be used when there is a need to analyze vast amounts of data stored in a distributed fashion. The computing and storage resources can be accessed either through a Web interface or through a standalone Java client. The Grid Security Infrastructure (GSI) is used for secure authentication and communication.

Title:

DATA ZOOMING – A CHALLENGE FOR EXPLORING THE SEMANTIC WEB

Author(s):

Jean-Marc Saglio , Talel Abdessalem , Tuan Anh TA

Abstract: Zooming techniques have been used in many applications to supply users with a cognitive way of exploring data and information: a user changes focus to observe either an overview or details. This paper presents a dynamic model for exploring data. The model can be applied to fields where the quantity of data is always large, such as the Web. We call it a zooming model because it permits users to focus on different amounts of data. Moreover, users can adjust zoom restriction parameters to dynamically explore the objects appearing in a zoom. In this paper, we also show that our work allows for a more intelligent framework for browsing the Semantic Web.

Title:

TOWARDS AN INFORMATION ASSESSMENT FRAMEWORK FOR USE WITH THE SEMANTIC WEB

Author(s):

Heidi Ellis

Abstract: The extension of the existing Web with meaningful information to form the Semantic Web holds great potential for allowing applications to carry out much more sophisticated tasks than those supported by the current Web. As part of carrying out these tasks, Semantic Web applications must access and integrate information from a variety of sources including databases, services, programs, sensors, personal devices, etc. The ability of Semantic Web applications to assess this information with respect to its trustworthiness and quality is a key contribution to the successful completion of tasks. The availability of an information assessment framework for the Semantic Web that incorporates aspects of trust and information quality would enable applications to dynamically determine the trustworthiness and worth of information. In addition, increasing interest in the research areas of security and information assurance highlights the need for an assessment framework that encompasses trust and information quality, as both of these aspects are necessary components of information security and electronic commerce. This paper presents an overview of recent work in the area of information quality characteristics and models of trust on the Web. A research agenda is described for the development of an information assessment framework encompassing information quality and trust management or trust agency for the Semantic Web.

Title:

CONTENT ORIENTED ARCHITECTURE FOR CONSUMER-TO-BUSINESS E-COMMERCE

Author(s):

Borko Furht

Abstract: Consumer-to-Business (C2B) systems represent the future of eCommerce. Using natural language as a basis, and remaining keenly aware of its potential pitfalls, we describe a software-specific communication model based on a new concept called content-biased language (CBL). It is shown that the requirements of a C2B system cannot be satisfied with anything less than the stretchability of a CBL. Once this fact has been established, the remainder of this paper discusses a representation for a CBL, as well as an architecture for utilizing that representation. This effort results in the description of a new software quality measure called stretchability, as well as the introduction of perspective domain graphs (PDGs), external open ontological type systems (EOOTS), and global and constituent systems. Finally, the discussion closes with the definition of a new distributed system design called the Content Oriented Architecture (COA).

Title:

FASTNEWS: SELECTIVE CLIPPING OF WEB INFORMATION

Author(s):

Gilnei Barroco Farias , Stanley Loh , Rodrigo Branco Kickhöfel

Abstract: This work presents a software system for selective clipping of Web information. The system allows users to register queries expressing their information needs, and monitors information sources (Web sites) in order to find new information and push it to the users. The difference from traditional Web clipping systems is that FastNews retrieves only information relevant to the user's needs; that is, it has an intelligent engine that extracts only the parts of the information matching the user's interests. Currently, the system supports monitoring news, currency conversion and weather forecasts. An additional feature allows users to enter a URL (Web site) to monitor, in contrast to the traditional use of predefined sources.

Title:

OBJECT-PROCESS METHODOLOGY APPLIED TO AGENT DESIGN

Author(s):

Zoheir Ezziane

Abstract: As computer systems become ever more complex, we need more powerful abstractions and metaphors to explain their operations. Experience in system development shows that designing and building agent systems is a difficult task, comparable to building traditional distributed, concurrent systems. Understanding natural, artificial, and social systems requires a well-founded, yet intuitive methodology that is capable of modeling these complexities in a coherent, straightforward manner. Object-Process Methodology (OPM) is a system development and specification approach that combines the major system aspects (function, structure, and behavior) into a single integrated model. This paper provides a paradigm for designing agent systems using OPM. It aims to identify design concepts and to indicate how they interact with each other.


AREA 5 - Human-Computer Interaction
 
Title:

ONLINE LEARNING FOR FUTURE INFORMATION TECHNOLOGY PROFESSIONALS

Author(s):

Deirdre Billings

Abstract: This paper discusses the development and implementation of a new Professional Skills for IT Practitioners course in the Bachelor of Computing Systems degree programme at UNITEC, Auckland, New Zealand. IT software skills learning from a previous Packages course was successfully integrated with learning in business communication and other essential professional skills to form a more relevant and meaningful course. In developing the course prescription and in researching and evaluating possible resources, the main criteria were to ensure compatibility with an online delivery component and to maintain strong links between communication skills and the real information technology world at all stages of the course. A constructivist focus on collaborative learning and deep personal introspection into the learning process was built into the course framework from the outset. The collaborative learning element was utilised extensively by way of groupwork in the practical sessions and assignments and in online discussion board dialogue. The personal introspection aspect was the main feature of the weekly reflective online log reporting for the third assignment. The mode of delivery included an online component and resources, and a holistic assessment methodology following a constructivist, student-centred paradigm in which students were encouraged to be active learners, engaging in peer dialogue, collaboration and reflective practice. In particular, it was considered vital to develop a framework within which the students could construct knowledge and understanding, with the lecturer's role being that of facilitator rather than knowledge-bearer.

Title:

USER INTERFACE DESIGN FOR VOICE CONTROL SYSTEMS

Author(s):

Wolfgang Tschirk

Abstract: A voice control system converts spoken commands into control actions, a process which is always imperfect due to errors of the speech recognizer. Most speech recognition research focuses on decreasing recognizers' error rates; comparatively little effort has been spent on making user interfaces more robust to such errors, in other words, on finding strategies that optimize the overall system given a fixed speech recognizer performance. In order to design and evaluate such strategies prior to their implementation and test, three components are required: 1) an appropriate set of performance figures for the speech recognizer, 2) suitable performance criteria for the user interface, and 3) a mathematical framework for estimating the interface performance from that of the speech recognizer. In this paper, we identify four basic interface designs and propose a generic mathematical approach for predicting their respective performance.

Title:

ROBUST SPEECH RECOGNITION BASED ON MAPPING NOISY FEATURES TO CLEAN FEATURES

Author(s):

Mohsen Rahmani , Ahmad Akbari , Babak Nasersharif

Abstract: The problem of robustness in ASR systems can be considered as a mismatch between the training and testing conditions, and its solution is to find a way to reduce that mismatch. Common approaches are data-driven methods and model-based methods. In this paper, we study a model of the environment and, based on this model, obtain a relation between noisy and clean speech features. We propose two techniques for mapping noisy features to clean features in the cepstrum domain. We implement the proposed methods along with some preceding data-driven methods to show that the proposed methods are effective for robust speech recognition in noisy environments.

Title:

AN INVESTIGATION INTO THE REQUIREMENTS FOR AN E-LEARNING SYSTEM

Author(s):

Yaen Yaacov Sofer , Steve B. McIntosh

Abstract: The learning environment where students of the same age group learn together, instructed by a teacher, was developed over the years and is known today as the traditional classroom. This traditional classroom may be changed by using the latest Web-based technology to replace and/or support the learning process. These new learning environments are accessible using the Internet as the main communication medium, as well as by other remote means such as CD-ROM and video. Many aspects of the current use of these new technologies reflect an approach to teaching and learning reminiscent of the “programmed learning” training material of the 1970s. This paper uses Soft Systems Methodology (SSM) to construct a Consensus Primary Task Model (CPTM) and thereby analyse the requirements for a distance or e-learning system. In conducting the analysis, we investigate the alternative methods proposed for the construction of a CPTM.

Title:

ASSESSMENT OF E-LEARNING SATISFACTION FROM CRITICAL INCIDENTS PERSPECTIVE

Author(s):

Nian-Shing Chen , Kan-Min Lin

Abstract: Learner satisfaction is one of the key issues in evaluating the success of e-learning, and understanding learner satisfaction and the factors affecting it is very important for the quality development of e-learning. In this study, we propose an e-learning satisfaction assessment model from a critical incidents perspective and identify which critical incidents really affect e-learning satisfaction. The model is called SAFE, abbreviated from Satisfaction Assessment from Frequency of negative critical incidents perspective for E-learning. The theoretical model is tested in an empirical study of 230 learners at NSYSU Cyber-University. The results show that the SAFE model can explain 71% of overall cumulative satisfaction with e-learning, and that the learners of NSYSU Cyber-University are satisfied with the e-learning courses they enrolled in. The critical incidents affecting e-learning satisfaction can be classified into four categories: administration, functionality, instruction and interaction; among them, interaction and functionality are the most important factors.

Title:

CABA2L - A BLISS PREDICTIVE COMPOSITION ASSISTANT FOR AAC COMMUNICATION SOFTWARE

Author(s):

Nicola Gatti , Matteo Matteucci

Abstract: In order to support the residual communication capabilities of verbally impaired people, software applications allowing Augmentative and Alternative Communication (AAC) have been developed. AAC communication software provides verbally disabled users with an electronic table of AAC language symbols (e.g. Bliss, PCS, PIC) for composing messages, exchanging them via email, and synthesizing them vocally. A current open issue for such software concerns human-computer interaction for verbally impaired people who also suffer motor disorders. Such persons can adopt only ad-hoc input devices, such as buttons or switches, which require intelligent automatic scanning of the AAC symbol table. From this perspective we have developed caba^2l, an innovative composition assistant exploiting a user linguistic behavior model for predictive Bliss symbol scanning. caba^2l is based on an original discrete implementation of an auto-regressive hidden Markov model called DAR-HMM and on the semantic network formalism. caba^2l is able to predict a list of symbols as the most probable ones, according to both the previously selected symbol and the semantic categories associated with it. We have implemented caba^2l as a component of Bliss2003, an AAC communication software application centered on the Bliss language, and we have experimentally validated it with real data.

Title:

MANAGERIAL OPENNESS AND THE ADOPTION OF DISTRIBUTED GROUP SUPPORT SYSTEMS: THE CASE OF WEBWIDE PARTICIPATION

Author(s):

John Rohrbaugh

Abstract: The full involvement of designated participants in the meeting process is a well-recognized standard of group effectiveness, yet most face-to-face meetings are undertaken without the presence of every group member. The problem of total participation in asynchronous meetings convened with distributed group support systems has been noted frequently but investigated rarely. This paper describes a portion of a large field study using the distributed group support system WebWide Participation in which explanations for meeting involvement (and non-involvement) were explored. In particular, three WebWide meetings with varying levels of participation were selected, and surveys were sent to all designated participants. The hypothesis was that non-participants have less openness (i.e., one of the key personality dimensions in Big Five personality theory--the characteristic of being intellectually curious and receptive to new experiences) than active participants who willingly joined in the meeting process. Using two indices of managerial openness, a discriminant analysis was undertaken that correctly distinguished over three-quarters of the participants and non-participants in the targeted WebWide meetings. The importance of this finding for advancing the adoption of other new group support technologies is discussed.

Title:

INFORMATION SYSTEMS FAILURE EXPLAINED THROUGH THE LENS OF THE CULTURAL WEB

Author(s):

David Wilson , David Avison

Abstract: This paper provides a discussion of the Australian telecommunications company One.Tel Limited. The paper examines the information technology strategies employed by the company and assesses the extent to which a failure of those strategies may have contributed to, or precipitated, the downfall of the business. In particular, it looks at the company through the lens of Johnson and Scholes’ (1993) cultural web. This perspective provides clear evidence of failings at the company, which were likely to have led to failure in its IT/IS policy and applications which, in turn, at least partly explains the downfall of the business.

Title:

AN AHP BASED DECISION MODEL TO EVALUATE VARIOUS E-LEARNING PACKAGES

Author(s):

Subhajyoti Ray

Abstract: e-Learning is increasingly accepted as a standard for corporate learning and the acquisition of new skills and competencies, and it provides flexible and cost-effective solutions to an organization's training needs. With a plethora of e-learning providers and solutions available in the market, organizations face a new kind of problem: selecting the right e-learning suite. This paper addresses the issue of selecting the e-learning solution that best meets the criteria relevant to the training needs of the organization. The problem of selecting the most suitable e-learning solution is formulated as a multi-criteria decision problem to be solved by the Analytic Hierarchy Process (AHP). The hierarchical structure of the problem allows the decision maker to compare various e-learning solutions using their content, navigation and evaluation features. A numerical example is also provided to illustrate the technique proposed here.
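As generic background to the abstract above: the core of AHP is deriving a priority vector from a pairwise-comparison matrix. The sketch below uses the common geometric-mean approximation of the principal eigenvector; the three e-learning suites and the judgment values are hypothetical illustrations, not data from the paper.

```python
import math

def ahp_weights(matrix):
    """Approximate an AHP priority vector from a pairwise-comparison
    matrix using the geometric-mean (row) method, then normalize."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical comparisons of three e-learning suites on "content quality":
# suite A is judged 3x preferable to B and 5x preferable to C; B is 2x C.
pairwise = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
]
weights = ahp_weights(pairwise)  # highest weight goes to suite A
```

The geometric-mean method is a standard closed-form approximation; a full AHP implementation would instead extract the principal eigenvector and also compute a consistency ratio for the judgments.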

Title:

EFFECTIVE VISUALISATION OF WORKFLOW ENACTMENT

Author(s):

Wei Lai , Yun Yang , Jun Shen

Abstract: Although most existing teamwork management systems support a user-friendly interface to some extent, few of them take into consideration the special requirements of workflow visualisation. This paper identifies the unique features of visualisation for run-time workflow, i.e., workflow enactment and execution. We present a detailed discussion of the emerging problems against the general aesthetic criteria for drawing the workflow layout. In order to support the most essential workflow enactment facilities, the following three mechanisms are provided. Firstly, the Sugiyama algorithm has been systematically incorporated into our prototype to create a well-structured initial workflow layout. Secondly, when the workflow process changes dynamically, we can adjust the workflow layout with our force-scan algorithm to retain the mental maps created earlier among team members. Thirdly, we have applied the fisheye view technique to offer a context-focus mechanism for workflow users and to utilise the screen size more effectively. Scenarios from the prototype are also presented.

Title:

A CONTINUOUS LINE PROCESS BASED MRF MODELIZATION

Author(s):

Karim Achour , Lyes Mahiddine

Abstract: In this paper, we present a method for reconstructing contours. This study is tied to the dual problem of image segmentation and contour extraction. Indeed, obtaining closed areas in an image is desirable because it facilitates high-level processes in computer vision such as pattern recognition and human-computer interaction. To this end, our main concern is to obtain a contour-reconstructed image through the use of a Markov Random Field model coupled with a line process model. The optimization is done by the Mean Field Annealing algorithm.

Title:

MANAGING EMOTIONS IN SMART USER MODELS FOR RECOMMENDER SYSTEMS

Author(s):

Beatriz López , Josep Lluís de la Rosa , Gustavo González

Abstract: Our research focuses on the development of methodologies that take into account the human factor in user models. There is an obvious link between personality traits and user preferences - both being indications of default tendencies in behavior that can be exploited by systems that recommend items to a user. In this work, we define an emotional component for Smart User Models and provide a methodology to build and manage it. The methodology covers the acquisition of the emotional component, the use of emotions in a recommendation process, and the updating of the Smart User Model according to the recommendation feedback. The methodology is illustrated with case studies.

Title:

A METHODOLOGY FOR INTERFACE DESIGN FOR OLDER ADULTS

Author(s):

Mary Zajicek

Abstract: This paper puts forward a new design method based upon Alexandrian patterns for interface design for particular user groups. The author has created a set of interface design patterns for speech systems for older adults with the aim of supporting the dynamic diversity in this group. The patterns themselves reflect a significant body of research work with this user group uncovering important information about how they interact with speech systems. The design knowledge embedded in these patterns is therefore closely linked to knowledge about the user and enables interface designers to clarify which users are excluded from their software.

Title:

DEFECTS, USEFULNESS AND USABILITY OF ETHICS THEORIES IN IS ETHICS EDUCATION

Author(s):

Tero Vartiainen , Mikko T. Siponen

Abstract: Computer ethics is recognized as an essential component of information systems curricula. However, little is known about how students perceive the usefulness and usability of ethics theories in solving computer-related moral conflicts, and what kinds of mistakes they make in solving moral problems by applying those theories. To fill this gap, an interpretive qualitative and quantitative study (n=20) was conducted to determine the defects, perceived usefulness and usability of alternative ethics theories (utilitarianism, Kantian ethics, virtue ethics, prima facie principles, Rawls' veil of ignorance) in computer ethics teaching. The results shed new light on the use of these theories in this field of education, and also suggest new directions for it.

Title:

EVALUATING THE VISUAL MANIPULATION WITH CELLULAR AUTOMATA-LIKE ALGORITHMS

Author(s):

Mahmoud Saber , Nikolay Mirenkov

Abstract: The Active Knowledge Studio group at the University of Aizu is studying, designing, and developing multimedia programming environments for various domains. One of the promising domains is cellular automata, where the global behavior of a system arises from the collective effect of many locally interacting, simple components. Cellular automata (CA) systems have a rich theoretical basis, and they have been used in a great variety of applications. A number of programming languages and environments have been developed to support the implementation of CA models. However, these languages focus on computational and performance issues only, and do not pay enough attention to programming productivity, usability, understandability, and other aspects of software engineering. In this paper, we provide an outline of our approach to manipulating such systems in a visual, user-oriented environment. Then, we explain features of the environment, present case studies, and describe the results of evaluating our visual environment.
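For readers unfamiliar with the domain, the local-update behavior the abstract refers to can be illustrated with a minimal elementary (1D, binary) cellular automaton. This is generic background on CA semantics, not code from the visual environment described above.

```python
def ca_step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.

    Each cell's next state depends only on its 3-cell neighbourhood;
    the 8-bit rule number encodes the lookup table (Wolfram convention).
    Wrap-around boundary conditions are used.
    """
    n = len(cells)
    out = []
    for i in range(n):
        # Encode (left, center, right) as an index 0..7 into the rule bits.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# A single live cell under Rule 110: simple local rules, rich global behavior.
row = [0, 0, 0, 1, 0, 0, 0]
next_row = ca_step(row)  # -> [0, 0, 1, 1, 0, 0, 0]
```

Iterating `ca_step` over successive rows is all it takes to produce the complex global patterns that make CA a useful testbed for visual programming environments.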

Title:

HEURISTICS SUPPORTING USABLE AUTHORING TOOLS

Author(s):

Paula Kotze , Elsabe Cloete

Abstract: In the past few years, as e-learning has gained momentum, the user profile of instructional authoring tools has also evolved. Commercial authoring products have not yet adapted to address all user groups, leaving lecturers who prepare e-learning materials impeded by their working environment, with the result that the materials do not meet the required quality. In this paper, heuristics for designing an authoring tool aimed at a specific user group, namely the ordinary lecturer, are described to enable subject-expert lecturers (not necessarily technically skilled) to create and reuse their own e-materials without undergoing intensive technical training. The significance of these heuristics lies in the fact that they provide a method to overcome many of the complexities associated with the design of instructional authoring tools. Furthermore, tools developed according to these heuristics might enable institutions to cope with the universal design demands associated with e-learning without their e-learning programmes being delayed by the scarcity of professional instructional designers and instructional programmers.

Title:

INTERACTIVE 3D PRODUCT ASSEMBLER FOR THE WWW - A CASE STUDY OF A 3D FURNITURE STORE

Author(s):

Stephen Chan

Abstract: We describe a system that allows customers to interactively select, assemble, and modify 3D products over the WWW, enhancing the usage of 3D techniques for e-business. It provides a framework for a Web-based 3D assembling system that can significantly simplify the assembling process while retaining enough flexibility to build an approximate model of real products. The assembled object is captured in a two-level architecture. Components are first connected using a simplified and automatic assembling mechanism; then a bundle of connected components is grouped together by a parametric object-oriented grouping method. This grouping method parameterizes the components to build a group of descriptive, featured and related object types - product, part and primitive - within the assembling model. The system enhances the flexibility and efficiency of the assembling process over the WWW. For archival and data transfer, we developed an assembly-specific data format - Assembly ML. In the prototype implementation of our interactive 3D assembler (I3DA), we integrated an intelligent decision helper to assist casual customers in selecting and assembling their desired product.

Title:

A COMPARATIVE SURVEY OF ACTIVITY-BASED METHODS FOR INFORMATION SYSTEMS DEVELOPMENT

Author(s):

Hanifa Shah , Amanda Quek

Abstract: The role of human factors and the importance of sociocultural and contextual issues in information systems (IS) development has long been recognised. However, these ‘soft’ details remain elusive and difficult to capture. Activity theory (AT) provides a framework with which to analyse and understand human behaviour in context. AT-based methods for IS development may therefore be a way forward. This paper presents a comparative survey of five AT-based methods. Each method is described, and its strengths and weaknesses briefly identified. The methods are then compared along nine key dimensions. As part of the findings, it is determined that most of the methods are selective in their use of AT, and are not sufficiently validated. Several correlations have also been noted across dimensions. Observations are presented on the limitations of existing methods, and suggestions are then made on possible ways forward.

Title:

THE MEETING OF GESTALT AND COGNITIVE LOAD THEORIES IN INSTRUCTIONAL SCREEN DESIGN

Author(s):

Juhani E. Tuovinen , Dempsey Chang

Abstract: Without doubt, Gestalt Theory has formed an important basis for many aspects of educational visual screen design. Despite the familiarity many computer screen designers claim with it, Gestalt Theory is not a single small set of visual principles uniformly applied by all designers. In fact, the instructional visual design literature often deals with only a small set of Gestalt laws. Recently, the Gestalt literature was consulted to distil the Gestalt laws most relevant to educational visual screen design, resulting in eleven laws being identified. In this paper these laws are discussed in terms of Cognitive Load Theory (CLT), which has been used with considerable success to improve instructional design. The combined perspectives drawn from Gestalt Theory and CLT were applied to the redesign of an instructional multimedia application, WoundCare, designed to teach nursing students wound management. The evaluation results were encouraging: both the new design and the value of applying the eleven Gestalt laws and CLT principles to improve learning were strongly supported. However, many aspects of applying this combination of theories to educational interface design remain unclear, and this forms a useful direction for future research.

Title:

LEARNING BY DOING AND LEARNING WHEN DOING

Author(s):

Bernd Tschiedel , Klaus P. Jantke , Steffen Lange , Gunter Grieser , Peter Grigoriev , Bernhard Thalheim

Abstract: In this paper, e-learning meets decision support in enterprises’ business practice. This presentation is based on an on-line e-learning system named DaMiT for the domain of knowledge discovery and data mining. The DaMiT system was primarily developed for technology enhanced learning in German academia. It is now on the cusp of entering training on demand in enterprises. Stand-alone e-learning seems quite unrealistic and does not meet the needs of industries. It is very unlikely that employees fully loaded with work take a detour to study theories of whatever sort. More likely, they are willing to engage in studies whenever the need derives directly from their practical work. In those cases, they might even be willing to dive into theories. How to dovetail e-learning and enterprise business applications, such that both sides draw a proper benefit?

Title:

REAL WORLD SENSORIZATION AND VIRTUALIZATION FOR OBSERVING HUMAN ACTIVITIES

Author(s):

Toshio Hori , Yoshifumi Nishida , Koji Kitamura , Takeo Kanade , Hiroshi Mizoguchi

Abstract: This paper describes a method for robustly detecting and efficiently recognizing daily human behavior in the real world. The proposed method comprises the following steps: 1) real-world sensorization, for robustly observing a person's behavior using ultrasonic 3D tags, a kind of ultrasonic location system; 2) real-world virtualization, for creating a virtual environment by modeling the 3D shape of real objects with a stereovision system; and 3) virtual sensorization of the virtualized objects, for quickly registering human behavior involving the handling of objects in the real world and efficiently recognizing target human behavior. With regard to real-world sensorization, this paper describes algorithms for robustly estimating the 3D positions of objects that a human handles. It also describes a method for real-world virtualization and virtual sensorization using the ultrasonic 3D tag system and a stereo vision system.

Title:

SUPPORTING COURSE SEQUENCING IN A DIGITAL LIBRARY: USAGE OF DYNAMIC METADATA FOR LEARNING OBJECTS

Author(s):

Raul Morales Salcedo , Yano Yoneo , Hiroaki Ogata

Abstract: The production of interactive multimedia content is in most cases an expensive task in terms of time and cost. Hence, optimizing production by exploiting the reusability of interactive multimedia elements is mandatory. Reusability can be triggered by a combination of reusable multimedia components and the appropriate use of metadata to control the components as well as their combination. At the same time, digital libraries comprise vast digital repositories, a wide range of services, and user environments and interfaces, all intended to support learning and collaborative research activities. In this article, we discuss the reusability and adaptability aspects of interactive multimedia content in a digital library's learning environment. We extend a component-based architecture to build interactive multimedia visualization within a digital library's learning environment, using metadata for reusability and customizability.

Title:

AUTOMATIC NAVIGATION AMONG MOBILE DTV SERVICES

Author(s):

Chengyuan Peng

Abstract: The limited input buttons on a mobile device restrict people's access to digital broadcast services. In this paper, we present a reinforcement learning approach to automatically navigating among services in mobile digital television systems. Our approach uses the standard Q-learning algorithm as a theoretical basis to predict the next remote control button for the user by learning usage patterns from interaction experiences. We carried out an experiment using the modified algorithm in our system. The experimental results demonstrate that the performance is good and that the method is feasible and appropriate in practice.
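The kind of button prediction the abstract describes can be sketched with standard tabular Q-learning. The state/action encoding, reward scheme, and usage log below are hypothetical assumptions chosen for illustration; they are not details taken from the paper.

```python
import random

def train_q(presses, n_buttons, alpha=0.5, gamma=0.9, epsilon=0.3, epochs=300):
    """Tabular Q-learning over a button-press log.

    State = last button pressed, action = predicted next button,
    reward = 1 when the prediction matches the user's actual next press.
    """
    Q = [[0.0] * n_buttons for _ in range(n_buttons)]
    rng = random.Random(0)
    for _ in range(epochs):
        for s, nxt in zip(presses, presses[1:]):
            # Epsilon-greedy action selection over predictions.
            if rng.random() < epsilon:
                a = rng.randrange(n_buttons)
            else:
                a = max(range(n_buttons), key=lambda b: Q[s][b])
            r = 1.0 if a == nxt else 0.0
            # Standard Q-learning update; the next state is the button
            # the user actually pressed, regardless of the prediction.
            Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
    return Q

def predict(Q, last_button):
    """Greedy prediction of the next button from the learned table."""
    return max(range(len(Q)), key=lambda b: Q[last_button][b])

# Hypothetical usage log: the user habitually cycles 1 -> 2 -> 0.
log = [1, 2, 0, 1, 2, 0, 1, 2, 0]
Q = train_q(log, n_buttons=3)
```

After training on such a repetitive log, the greedy prediction reproduces the habitual sequence (e.g. `predict(Q, 1)` yields 2), which is the behavior a navigation assistant would exploit.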

Title:

VERBS AND TOPIC MAPS: AN ADC-BASED PROPOSAL FOR LEGAL DOCUMENTATION

Author(s):

Francisco Javier Calzada , Jorge Morato , Carmen Bolaños , Miguel Ángel Marzal

Abstract: A final evaluation of the project "Development of a verb thesaurus for dynamic information environments: implementation of the ISO/IEC 12350:1999 standard" is presented as a basis for the use of its results in future research projects. The interest of the Library and Information Science field in verbal structures is justified by their efficacy in the documental analysis of movement for relevant identification, description and classification of hypermedia, by their analytic-documental suitability for the dynamic, transverse character of hypertext contents, and by the required redefinition of the hierarchical structure of thesauri. On this basis, the methodological evolution of these dynamic concepts is tackled, as well as their implementation in the development of verb thesauri for the Spanish legal language. The main contributions of this paper are new research lines oriented towards a more specific field of theoretical formulation and instrumental implementation.

Title:

BASIC STRATEGIES TO ENHANCE TELEOPERATION PLATFORMS THROUGH THE INTERNET

Author(s):

Carlos Cerrada , Rubén Gómez , Juan Escribano

Abstract: This paper shows in a schematic way the basic objectives for enhancing teleoperation platforms that have been achieved in the PhD Thesis ... . These objectives are the result of experience acquired working with the teleoperation platform developed by our research group in teleoperation. They were formulated after analyzing the performance of the initial platform and detecting restrictions in several key parts of its design. Specifically, three basic improvement areas have been taken into account. The enhancement objectives focus on aspects such as the operating system, the human-machine interface and, finally, the communication between the master and the slave. In this paper the main detected problems are introduced, as well as the solutions chosen in each case. Experimental results regarding the last enhancement objective are finally shown.

Title:

PREDICTING THE USER ACCEPTANCE OF PERSONALIZED INFORMATION SYSTEMS: CASE MEDICAL PORTAL.

Author(s):

Seppo Pahnila

Abstract: This paper describes ongoing research, which focuses on the effect of attitudes and intentions on the use of personalized Web Information Systems (WIS). By applying the widely used Technology Acceptance Model (TAM), the theory of planned behavior, innovation diffusion theory and self-efficacy theory, we take an extended view of the factors explaining individual acceptance and usage of newly emerging personalized Web Information Systems. Many features of personalized WIS differ from those of "traditional" information systems, and we believe that this research will shed new light on the study of the acceptance of personalized WIS.

Title:

MODELLING DIALOGUES WITH EMOTIONAL INTERACTIVE AGENTS

Author(s):

Cveta Martinovska , Stevo  Bozinovski

Abstract: The paper describes a possible approach to the design of emotionally intelligent interactive agents capable of conveying emotional, verbal and non-verbal signals. The user's affective profile is built according to the Emotions Profile Index, a standard test in psychiatry and clinical psychology. We introduce a fuzzy framework to model the emotional state and personality traits. Knowing that the user is extroverted or agreeable helps the agent to formulate appropriate responses and to infer the intent behind the user's actions. Dialog automata are used to conceptualise the conversation between the user and the animated agent.

Title:

USABILITY HEURISTICS FOR XML-BASED WEB DEVELOPMENT

Author(s):

José A. López Brugos , Marta Fernández de Arriba

Abstract: Heuristic evaluation is a usability engineering method for finding the usability problems in a user interface design. This paper discusses a set of rules for evaluating the level of usability of an XML-based Web site. Taking advantage of the fact that XML separates the contents of a document from its presentation, heuristics grouped by content and presentation are defined in an analogous way, allowing a suitable evaluation for this type of application.

Title:

UNOBTRUSIVE ACQUISITION OF USER INFORMATION FOR E-COMMERCE APPLICATIONS

Author(s):

Arkady Zaslavsky , Oshadi Alahakoon , Seng Loke

Abstract: E-commerce has become a common activity among many people. Although widely used, the interfaces through which users communicate with e-commerce systems are still at an early stage of development in terms of intelligence and user-friendliness. Unobtrusiveness is recognized as one of the most important desired attributes of an intelligent and friendly interface. In this paper we describe our work on an information architecture that minimizes obtrusiveness. A layered information architecture supported by a structured user profile model is described in the paper. An example scenario is presented to clarify the new architecture, and the development of a cost model for measuring the level of obtrusiveness is discussed.

Title:

ACCESSIBLE COMPUTER INTERACTION FOR PEOPLE WITH DISABILITIES: THE CASE OF QUADRIPLEGICS

Author(s):

Ayodele Adesina-Ojo , Jan Eloff , Mariki Eloff , Paula Kotze

Abstract: Universal design is the design of products and environments so that all people can use them without adaptation or specialised design. Life must be simplified for all by making products, communications and the built environment more usable by as many people as possible at little or no extra cost. To understand the challenges that a disabled person has to face when using a computer, we have to know what capabilities such a person has. Only then will it be possible to apply universal design to computer interfaces. The purpose of this paper is to highlight the challenges that many people face in their everyday lives and to determine to what extent disabled people, especially people with limited or no use of their hands and arms, can interact independently with computer equipment. The paper specifically looks at quadriplegics: their capabilities, a survey of how they use computer equipment, and the special devices available to assist them in this interaction.