9th International Conference on Enterprise Information Systems
12-16 June 2007, Funchal, Madeira - Portugal

ICEIS 2007 Abstracts

Conference Areas
- Databases and Information Systems Integration
- Artificial Intelligence and Decision Support Systems
- Information Systems Analysis and Specification
- Software Agents and Internet Computing
- Human-Computer Interaction

Area 1 - Databases and Information Systems Integration

Title: FROM DATABASE TO DATAWAREHOUSE: A DESIGN QUALITY EVALUATION
Author(s): Maurizio Pighin and Lucio Ieronutti
Abstract: Data Warehousing provides tools and techniques for collecting, integrating and storing large amounts of transactional data extracted from operational databases, with the aim of deriving accurate management information that can be effectively used to support decision processes. However, the choice of which attributes are to be considered as dimensions and which as measures heavily influences the effectiveness of a data warehouse. Since this is not a trivial task, especially for databases characterized by a large number of tables and attributes, an expert is often required to correctly select the most suitable attributes and assign them the correct roles. In this paper, we propose a semantic-independent methodology targeted at (i) supporting data warehouse design and creation, and (ii) deriving information on the total quality of the built data warehouse. We also present the results of an experiment demonstrating the effectiveness of our methodology.

Title: PTSM: A PORTLET SELECTION MODEL
Author(s): Mª Ángeles Moraga, Coral Calero, Mario Piattini and Oscar Díaz
Abstract: The use of Web portals continues to rise, showing their importance in the current information society. The success of a portal depends on customers using and returning to it. Nowadays, it is very easy for users to change from one portal to another, so improving and assessing portal quality is a must. Hence, an appropriate quality model should be available to measure and drive portal development. Specifically, this work focuses on portlet-based portals. Portlets are web components, and they can be thought of as COTS, but in a Web setting. This paper presents a portlet selection model that guides the portal developer in choosing the best portlet, among a set of portlets with similar functions for specified tasks and user objectives, in accordance with five quality characteristics, namely functionality, reliability, usability, efficiency and reusability, and three other characteristics not related to quality but important for carrying out the selection.

Title: TRANSFORMATION OF LEGACY BUSINESS SOFTWARE INTO CLIENT-SERVER ARCHITECTURES
Author(s): Thomas Rauber and Gudula Rünger
Abstract: Business software systems in use contain specific knowledge which is essential for the enterprise using the software, and the software has often grown over years. However, it is difficult to adapt these software systems to rapidly changing hardware and software technologies. This so-called legacy problem is extremely cost-intensive when a change in the software itself or in the hardware platform is required due to a change in the business processes of the enterprise or in the hardware technology. Thus, a common problem in business software is the cost-effective analysis, documentation, and transformation of the software. In this paper, we concentrate on the transformation issue and propose an incremental process for transforming monolithic business software into client-server architectures. The internal logical structure of the software system is used to create software components in a flexible way. The transformation process is supported by a transformation toolset which preserves correctness and functionality.

Title: INFORMATION SYSTEMS INTEGRATION DURING MERGERS - INTEGRATION MODES TYPOLOGY AND INTEGRATION PATHS
Author(s): Gérald Brunetto
Abstract: Today, Information Systems (IS) integration constitutes one of the major success factors of mergers and acquisitions. This article draws on two case studies of firms that completed more than 10 mergers and acquisitions between 1990 and 2000. This paper shows the importance of a twofold approach to understanding the IS integration process. The first approach reflects the necessity of using organizational configurations to define possible IS integration modes. We thus show the importance of organizational, strategic and technological contingencies in the elaboration of an integration mode.

Title: ENTERPRISE INFORMATION SEARCH SYSTEMS FOR HETEROGENEOUS CONTENT REPOSITORIES
Author(s): Trieu C. Chieu, Shyh-Kwei Chen and Shiwa S. Fu
Abstract: In larger enterprises, business documents are typically stored in disparate, autonomous content repositories with various formats. Efficient search and retrieval mechanisms are needed to deal with the heterogeneity and complexity of this environment. This paper presents a general architecture and two industrial implementations of a service-based information system to perform search in Lotus Notes databases and data sources with Web service interfaces. The first implementation is based on a federated database system that maps the various schemas of the sources into a common interface and aggregates information from its native locations. This implementation offers the advantages of scalability and access to real-time information. The second is based on a one-index enterprise-scale search engine that crawls, parses and indexes the document contents from the sources. This latter implementation offers the ability to score the relevance ranking of documents and to eliminate duplicates in search results. The relative merits and limitations of both implementations are presented.

Title: A FRAMEWORK FOR SUPPORTING KNOWLEDGE WORK PROCESSES
Author(s): Weidong Pan, Igor Hawryszkiewycz and Dongbei Xue
Abstract: Improving knowledge work processes has become increasingly important for modern enterprises to maintain a competitive position in today's information society. This paper proposes a way to improve knowledge work processes through supportive services. A framework for supporting knowledge work processes is presented in which best practices for knowledge work processes, developed by process organizers or derived from successful applications, are described and stored in a database; according to this description, software agents dynamically organize supportive services that guide process participants through process steps towards an efficient completion of the process. The paper provides an overview of the method and explores the development of the main components of the framework.

Title: A NEW LOOK INTO DATA WAREHOUSE MODELLING
Author(s): Nikolay Nikolov
Abstract: The dominant paradigm of data warehouse design is the star schema (Kimball, 1996). For years, the main debate within the scientific community has been not whether this paradigm is really the only way, but rather about its details (e.g. "to snowflake or not to snowflake" - Kimball et al., 1998). Confining the discourse entirely to the star schema paradigm prevents the search for better alternatives. We argue that the star schema paradigm is an artefact of the transactional perspective and does not account for the analytic perspective. The most popular formalized method for deriving the star schema (Golfarelli et al., 1998) underlines just that by taking only the entity-relationship model (ERM) as input. Although this design approach follows the natural data and work-flow, it does not necessarily offer the best performance. The main thrust of our argument is that the query model should be used on a par with the ERM as a starting point in the data warehouse design process. The rationale is that the end design should reflect not just the structure inherent in the data model, but also that of the expected workload. Such an approach results in a schema which may look very different from the traditional star schema, but the performance improvement it may achieve justifies going off the beaten track.

Title: AN ORDER ALLOCATION MODEL IN VIRTUAL ENTERPRISES BASED ON INDUSTRIAL CLUSTERS
Author(s): Fangqi Cheng, Feifan Ye and Jianguo Yang
Abstract: Industrial clusters can be found all over the world, particularly in many developing countries. In an age of internationalisation and a highly competitive environment with shorter product life cycles, more customized needs and more uncertainty in markets, building virtual enterprises based on an industrial cluster is one of the most important ways to improve the agility and competitiveness of the manufacturing enterprises in the cluster. One of the key factors in the success of virtual enterprises is the correct selection of cooperative partners. An approach to order allocation and partner selection in the environment of industrial clusters is proposed. The approach is composed of two stages: task-resource matching and quantitative evaluation. In the first stage the potential candidates are identified, and in the second stage evolutionary programming is applied to the partner selection and order allocation problem. The architecture for information evaluation and order allocation is studied for the proposed approach. The target function, in which the load rate of the candidate enterprise is taken as the main variable, is developed, and a simplified example is used to verify the feasibility of the approach. The results suggest that the proposed model and algorithm obtain satisfactory solutions. It is expected that the approach can efficiently improve manufacturing resource utilization and enhance the agility of manufacturing enterprises in industrial clusters by means of virtual enterprises.
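The two-stage scheme in the order-allocation abstract above (candidate matching, then evolutionary programming with the load rate as the main variable) can be illustrated with a minimal sketch. The candidate data, the population parameters, and the variance-of-load-rates objective below are illustrative assumptions, not the authors' actual formulation:

```python
import random

# Hypothetical candidate enterprises: name -> available capacity (assumed data).
CANDIDATES = {"A": 120.0, "B": 80.0, "C": 200.0}
ORDER = 250.0          # total order quantity to allocate
POP, GENS = 30, 200    # population size and number of generations

def normalize(alloc):
    """Clamp an allocation to capacities, then scale it to cover the order."""
    alloc = [max(0.0, min(a, c)) for a, c in zip(alloc, CANDIDATES.values())]
    total = sum(alloc) or 1.0
    return [a * ORDER / total for a in alloc]

def fitness(alloc):
    """Penalize uneven load rates (allocation / capacity); higher is better."""
    rates = [a / c for a, c in zip(alloc, CANDIDATES.values())]
    mean = sum(rates) / len(rates)
    return -sum((r - mean) ** 2 for r in rates)

def evolve(seed=0):
    """Evolutionary programming: Gaussian mutation plus elitist selection."""
    rng = random.Random(seed)
    pop = [normalize([rng.uniform(0, c) for c in CANDIDATES.values()])
           for _ in range(POP)]
    for _ in range(GENS):
        children = [normalize([a + rng.gauss(0, 5.0) for a in p]) for p in pop]
        pop = sorted(pop + children, key=fitness, reverse=True)[:POP]
    return pop[0]

best = evolve()
```

A real formulation would add constraints (due dates, transport cost) to the target function; the sketch only shows the evolutionary loop and the load-rate objective.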
Title: A DATABASE INTEGRATION SYSTEM BASED ON GLOBAL VIEW GENERATION
Author(s): Uchang Park and Ramon Lawrence
Abstract: Database integration is a common and growing challenge with the proliferation of database systems, data warehouses, data marts, and other OLAP systems in organizations. Although there are many methods of sharing data between databases, true interoperability of database systems requires capturing, comparing, and merging the semantics of each system. In this work, we present a database integration system that improves on the database federation architecture by allowing domain administrators to simply and efficiently capture database semantics. The semantic information is combined using a tool for producing a global view. Building the global view is the bottleneck in integration because few tools support its construction, and those that do often require sophisticated knowledge and experience to operate properly. The technique and tool presented are simple enough to be used by all database administrators, yet expressive enough to support the majority of integration queries.

Title: UNASSUMING VIEW-SIZE ESTIMATION TECHNIQUES IN OLAP - AN EXPERIMENTAL COMPARISON
Author(s): Kamel Aouiche and Daniel Lemire
Abstract: Even if storage were infinite, a data warehouse could not materialize all possible views due to running time and update requirements. Therefore, it is necessary to estimate quickly, accurately, and reliably the size of views. Many available techniques make particular statistical assumptions, and their error can be quite large. Unassuming techniques exist, but typically assume independent hashing, for which there is no known practical implementation. We adapt an unassuming estimator due to Gibbons and Tirthapura whose theoretical bounds do not rely on impractical assumptions. We compare this technique experimentally with stochastic probabilistic counting, LogLog probabilistic counting, and multifractal statistical models. Our experiments show that we can reliably and accurately (within 10%, 19 times out of 20) estimate view sizes over large data sets (1.5 GB) within minutes, using almost no memory. However, only Gibbons-Tirthapura provides universally tight estimates irrespective of the size of the view. For large views, probabilistic counting has a small edge in accuracy, whereas the competing sampling-based method (multifractal) we tested is an order of magnitude faster but can sometimes provide poor estimates (relative error of 100%). In our tests, LogLog probabilistic counting is not competitive. Experimental validation on the US Census 1990 data set and on the Transaction Processing Performance Council (TPC-H) data set is provided.

Title: IMPLEMENTING SPATIAL DATAWAREHOUSE HIERARCHIES IN OBJECT-RELATIONAL DBMSS
Author(s): Elzbieta Malinowski and Esteban Zimányi
Abstract: Spatial Data Warehouses (SDWs) allow historical data represented in space to be analyzed in support of the decision-making process. SDW applications require a multidimensional view of data that includes dimensions with hierarchies and facts with associated measures. Hierarchies are particularly important since, by traversing them, users can analyze detailed and aggregated measures. To better represent users' requirements for SDWs, a conceptual model with spatial support should be used. Afterwards, the conceptual schema is translated into logical and physical schemas. However, semantics can be lost during this translation. In this paper, we present the translation of spatial hierarchies from conceptual to physical schemas, represented in the MultiDimER model and Oracle 10g Spatial, respectively. Further, to ensure semantic equivalence between the conceptual and physical schemas, integrity constraints are exemplified, mainly using triggers.
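The kind of "unassuming" distinct-count estimation discussed in the Aouiche-Lemire abstract above can be sketched in the spirit of the Gibbons-Tirthapura estimator: hash every row, keep only hashes in an adaptively shrinking sample region, and scale the sample size back up. This is a simplified illustration of the idea, not the authors' implementation (in particular, it provides no probabilistic guarantee without the pairwise-independence machinery they analyze):

```python
import hashlib

def _hash64(item):
    """64-bit hash of an item (SHA-1 used only as a well-mixed, stable hash)."""
    return int.from_bytes(hashlib.sha1(repr(item).encode()).digest()[:8], "big")

def estimate_distinct(items, max_sample=1024):
    """Distinct-count estimation by adaptive sampling, Gibbons-Tirthapura style.

    Keep only hashes whose `level` low-order bits are zero, i.e. a
    1/2**level sample of the hash space; when the sample overflows,
    raise the level and prune.  Estimate = sample size * 2**level.
    """
    level, sample = 0, set()
    for item in items:
        h = _hash64(item)
        if h & ((1 << level) - 1) == 0:      # h falls in the sampled region
            sample.add(h)
            while len(sample) > max_sample:  # overflow: halve the region
                level += 1
                sample = {x for x in sample if x & ((1 << level) - 1) == 0}
    return len(sample) * (1 << level)
```

For view-size estimation, `items` would be the tuples of the grouped-on attributes, so the distinct count is exactly the number of rows in the aggregated view.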
Title: TEXT ANALYTICS AND DATA ACCESS AS SERVICES - A CASE STUDY IN TRANSFORMING A LEGACY CLIENT-SERVER TEXT ANALYTICS WORKBENCH AND FRAMEWORK TO SOA
Author(s): E. Michael Maximilien, Ying Chen, Ana Lelescu, James Rhodes, Jeffrey Kreulen and Scott Spangler
Abstract: As business information is made available via the intranet and Internet, there is a growing need to quickly analyze the resulting mountain of information to infer business insights - for instance, analyzing one company's patent database against another's to find patents that are cross-licensable. IBM Research's Business Insight Workbench (BIW) is a text mining and analytics tool that allows end-users to explore, understand, and analyze business information in order to arrive at such insights. However, the first incarnation of BIW used a thick-client architecture with a database back-end. Though very successful, this architecture limited the tool's flexibility, scalability, and deployment. In this paper we discuss our initial experiences in converting BIW to a modern Service-Oriented Architecture. We also provide some insight into our design choices and outline some lessons learned.

Title: A NEW ALGORITHM FOR TWIG PATTERN MATCHING
Author(s): Yangjun Chen
Abstract: Tree pattern matching is one of the most fundamental tasks in XML query processing. Prior work has typically decomposed the twig pattern into binary structural (parent-child and ancestor-descendant) relationships or paths, and then stitched these basic matches together with join operations. In this paper, we propose a new algorithm which explores both the document tree and the twig pattern in a bottom-up way, and we show that the join operation can be completely avoided. The new algorithm runs in O(|T|·|Q|) time and O(|Q|·leafT) space, where T and Q are the document tree and the twig pattern query, respectively, and leafT is the number of leaf nodes in T. Our experiments show that our method is effective, scalable and efficient in evaluating twig pattern queries.

Title: A METHOD FOR EARLY CORRESPONDENCE DISCOVERY USING INSTANCE DATA
Author(s): Indrakshi Ray and C. J. Michael Geisterfer
Abstract: In database integration research, much effort has gone into developing automated solutions for integrating the schema (and, afterwards, the data itself). Most of the research literature has concentrated on matching schema-level information to determine the correspondences between data concepts in the component databases. If instance-level information is utilized at all, it is used only to augment the correspondences found using schema-level information, catching what schema-level matching missed. To use schema-level information, the component schemas must be transformed into a canonical data model before they can be compared. Furthermore, this basis for database integration relies heavily on the availability of schema experts, schema documentation, and well-designed schemas - items that are often not available. The main contribution of this paper is a method of initial instance-based correspondence discovery that greatly reduces the manual effort involved in current integration processes. The gains are possible because the method uses only instance data (a body of database knowledge that is always available) to make its initial discoveries. A secondary contribution is to show that correspondence discovery before schema transformation is a viable, even desirable, alternative to the current general integration process.

Title: XML SCHEMA STRUCTURAL EQUIVALENCE
Author(s): Angela C. Duta, Ken Barker and Reda Alhajj
Abstract: The Xequiv algorithm determines when two XML schemas are equivalent based on their structural organization. It calculates the percentage of one schema's inclusion in another by considering the cardinality of each leaf node and its interconnection with other leaf nodes that are part of a sequence or choice structure. Xequiv is based on the Reduction Algorithm, which focuses on the leaf nodes and eliminates intermediate levels in the XML tree.

Title: SECURE KNOWLEDGE EXCHANGE BY POLICY ALGEBRA AND ERML
Author(s): Steve Barker and Paul Douglas
Abstract: In this paper, we demonstrate how role-based access control policies may be used for secure forms of knowledge module exchange in an open, distributed environment. To that end, we define an algebra that a security administrator may use to define compositions and decompositions of shared information sources, and we describe a markup language for facilitating secure information exchange among heterogeneous information systems. We also describe an implementation of our approach and give some performance measures, which offer evidence of the feasibility of our proposal.

Title: MAINTENANCE COST OF A SOFTWARE DESIGN: A VALUE-BASED APPROACH
Author(s): Daniel Cabrero, Javier Garzás and Mario Piattini
Abstract: Alternative valid software design solutions can respond to the same software product requirements. Moreover, a great part of the success of a software project depends on the selected software design. However, there are few methods to quantify how much value each design strategy will add, and hence very little time is spent choosing the best design option. This paper presents a new approach to estimate and quantify how profitable it is to improve a design solution. This is achieved by estimating the maintenance cost of a software project using two main variables: the probability of change of each design artifact, and the cost associated with each change. Two techniques are proposed in this paper to support this approach: COCM (Change-Oriented Configuration Management) and CORT (Change-Oriented Requirement Tracing).

Title: THE CHALLENGES FACING GLOBAL ERP SYSTEMS IMPLEMENTATIONS
Author(s): Paul Hawking, Andrew Stein and Susan Foster
Abstract: Large global companies are increasingly looking to information systems to standardise business processes and enhance decision making across their operations in different countries. In particular, these companies are implementing enterprise resource planning (ERP) systems to provide this standardisation. This paper is a review of the literature on the use of ERP systems to support global operations. There are many technological and cultural challenges facing these implementations; however, a major challenge faced by companies is the balance between centralisation and localisation.

Title: INCENTIVES AND OBSTACLES IN IMPLEMENTING INTER-ORGANISATIONAL INTEROPERABILITY
Author(s): Raija Halonen and Veikko Halonen
Abstract: This paper explores the incentives and obstacles that arise when implementing interoperability in organisations. Our focus is an inter-organisational information system, especially in a context where the information system has interfaces to several information systems managed by different organisations. Information systems are implemented because they bring several benefits: they enable interaction between organisations without physical attendance; they enable information to be forwarded across organisational borders; they enable organisations to compete better in the market; and they enable organisations to partner with each other. In this respect, inter-organisational information systems differ from other information systems. Inter-organisational information systems are often linked with information systems that are aimed at supporting functionality in the partnering organisations and that were implemented earlier, even several years earlier. We limit this paper to inter-organisational information systems that are implemented to support pre-defined joint functionality.

Title: KNOWLEDGE-MASHUPS AS NEXT GENERATION WEBBASED SYSTEMS - CONVERGING SYSTEMS VIA SELF-EXPLAINING SERVICES
Author(s): Thomas Bopp, Birger Kühnel, Thorsten Hampel, Christian Prpitsch and Frank Lützenkirchen
Abstract: Webservice-based architectures are facing new challenges in terms of the convergence of systems. Using the example of a webservice integration of a digital repository/library, group knowledge management systems, and learning management systems, this contribution shows the new potential of flexible, descriptive webservices. Digital libraries are understood in their key position as searching, structuring, and archiving instances for digital media, and they actively provide services in this sense. The goal of this article is to introduce services suitable for everyday use for coupling different system classes. Conceptually, the requirements of a possible standard in the area of convergence of knowledge management, digital libraries, and learning management systems are discussed. The results are publish and search services with negotiation capabilities and a low barrier to adoption.

Title: A FRAMEWORK FOR MODEL-DRIVEN PATTERN MATCHING
Author(s): Ignacio García-Rodríguez de Guzmán, Macario Polo and Mario Piattini
Abstract: Today, software technology is evolving towards model engineering. Standards such as MOF and MDA and languages such as QVT and ATL are emerging to support this evolution from the object paradigm to model engineering. At times, these standards and languages give rules and advice at a high level of abstraction, and concrete solutions and implementations are difficult to achieve. As a consequence of this technological immaturity and the lack of documentation, many capabilities of this new field go unexploited. To this end, this paper proposes a first step: a framework for performing model-driven pattern matching operations. Pattern matching based on models is an evolution of a traditional concept adapted to the model realm. In this respect, this kind of pattern matching seems promising not only for finding occurrences of given models in others, but also for giving meaning to these patterns in order to undertake actions on the resulting matches.

Title: MODELING OF AN ANALYTICAL DATABASE SYSTEM
Author(s): Alex Sandro Romeu de Souza Poletto and Jorge Rady Almeida Junior
Abstract: This paper describes a modeling approach for constructing an analytical database, with the objective of storing historical values as well as the most recent values drawn from operational databases. The first function of this modeling is to map the operational database onto the analytical database using their E-R diagrams; for this, we created ten steps to support the mapping. The second function is to provide mechanisms for the generation, transport and storage of the historical data; for this, we specified triggers and procedures for each step.

Title: EVIE - AN EVENT BROKERING LANGUAGE FOR THE COMPOSITION OF COLLABORATIVE BUSINESS PROCESSES
Author(s): Tony O'Hagan, Shazia Sadiq and Wasim Sadiq
Abstract: Technologies that facilitate the management of collaborative processes are high on the agenda of enterprise software developers. One of the greatest difficulties in this respect is achieving a streamlined pipeline from business modelling to execution infrastructures. In this paper we present Evie, an approach for the rapid design and deployment of event-driven collaborative processes, based on significant language extensions to Java characterized by abstract and succinct constructs. The new language is positioned within an overall framework that bridges a high-level modelling tool and the underlying deployment environment.

Title: INDUCTION OF DATA QUALITY PROTOCOLS INTO BUSINESS PROCESS MANAGEMENT
Author(s): Shazia Sadiq, Maria Orlowska and Wasim Sadiq
Abstract: Data quality plays a fundamental role in the success of IT solution deployments. The success of large projects may be compromised by a lack of governance and control of data quality. The criticality of this problem has increased manifold in a business environment heavily dependent on external data, where such data may pollute enterprise databases. At the same time, it is well recognized that an organization's business processes provide the backbone for business operations through the constituent enterprise applications and services. As such, business process management systems are often the first point of contact for dirty data. It is on the basis of this role that we propose that BPM technologies can and should be viewed as a vehicle for data quality enforcement. In this paper, we target a specific data quality problem, namely data mismatch. We propose to address this problem by explicitly inducting the requisite data quality protocols into the business process management system. In addition to presenting the details of the proposed approach, we also present a detailed analysis of process data properties and typical errors.
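The idea of inducting a data-quality protocol into a process step, as in the data-mismatch abstract above, can be sketched as a validation hook that runs before a task executes. The rule names, the rules themselves, and the record layout below are invented for illustration; the paper's actual protocols are not reproduced here:

```python
# Hypothetical mismatch rules: field -> predicate the value must satisfy.
MISMATCH_RULES = {
    "order_qty": lambda v: isinstance(v, int) and v > 0,
    "country":   lambda v: v in {"DE", "FR", "PT"},
}

def check_mismatch(record):
    """Return the list of fields whose values violate their rule."""
    return [f for f, ok in MISMATCH_RULES.items()
            if f in record and not ok(record[f])]

def run_task(task, record):
    """Run `task` only if the record passes the data-quality protocol,
    so the BPM layer stops dirty data before it reaches the task."""
    errors = check_mismatch(record)
    if errors:
        raise ValueError(f"data mismatch in fields: {errors}")
    return task(record)
```

In a real BPM system the hook would sit on the process engine's task boundary rather than in application code; the sketch only shows the enforcement point.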
Title: A DOCUMENT REPOSITORY ARCHITECTURE FOR HETEROGENEOUS BUSINESS INFORMATION MANAGEMENT Author(s): Mohamed Mbarki, Chantal Soulé-Dupuy and Nathalie Vallès-Parlangeau Abstract: As part of business memories, document repositories should bring some solutions to ensure flexible and efficient uses of dematerialized information content. While the fields of repositories modeling, document integration and information interrogation have independently attracted a huge amount of attention, few works have tried to propose a general architecture of document repository management. Thus we propose a repository architecture based on the integration of different complementary modules ensuring an efficient storage of fragmented digital documents and then flexible fragments exploitation. This paper presents an implementation of such architecture of document repository. Title: EXTRACTION AND TRANSFORMATION OF DATA FROM SEMI-STRUCTURED TEXT FILES USING A DECLARATIVE APPROACH Author(s): R. Raminhos and J. Moura-Pires Abstract: The ETL problematic is becoming progressively less specific to the traditional data-warehousing domain and is being extended to the processing of textual data. The World Wide Web appears as a major source of textual information, following a human-readable semi-structured format, referring to multiple domains, some of them highly complex. Traditional ETL approaches following the development of specific source code for each data source and based on multiple domain / computer-science experts interactions, become an inadequate solution, time consuming and prone to error. This paper presents a novel approach to ETL, based on its decomposition in two phases: ETD (Extraction, Transformation and Data Delivery) followed by IL (Integration and Loading). The ETD proposal is supported by a declarative language for expressing ETD statements and a graphical application for interacting with the domain expert. 
When applying ETD, mainly domain expertise is required, while computer-science expertise will be centred in the IL phase, linking the processed data to target system models, enabling a clearer separation of concerns. This paper also presents how ETD has been integrated, tested and validated in a full data processing solution for a space domain project, currently operational at the European Space Agency for the Galileo Mission. Title: EXTENSIBLE METADATA REPOSITORY FOR INFORMATION SYSTEMS AND ENTERPRISE APPLICATIONS Author(s): Ricardo Ferreira and João Moura-Pires Abstract: Today’s Information Systems and Enterprise Applications require extensive use of Metadata information. In Information Systems, metadata helps in integration and modelling their various components and computational processes, while in Enterprises metadata can describe business and management models, human or physical resources, among others. This paper presents a light and no-cost extensible Metadata Repository solution for such cases, relying on XML and related technologies to store, validate, query and transform metadata information, ensuring common operational concerns such as availability and security yet providing easy integration. The feasibility and applicability of the solution is proved by a set of case studies and applications where an implementation is running in operational state. Title: OLAP AGGREGATION FUNCTION FOR TEXTUAL DATA WAREHOUSE Author(s): Franck Ravat, Olivier Teste and Ronan Tournier Abstract: For more than a decade, OLAP and multidimensional analysis have generated methodologies, tools and resource management systems for the analysis of numeric data. With the growing availability of semi-structured data there is a need for incorporating text-rich document data in a data warehouse and providing adapted multidimensional analysis. This paper presents a new aggregation function for keywords. The AVG_KW function uses an ontology to join keywords into a more common one. 
This allows aggregation of textual data in OLAP environment as traditional arithmetic functions would do on numeric data. Title: DETERMINING THE COSTS OF ERP IMPLEMENTATION Author(s): Rob J. Kusters, Fred J. Heemstra and Arjan Jonker Abstract: The key question of the research reported here is 'which factors influence En¬terprise Resource Planning (ERP) implementation costs'. No sufficient answers to this question can as yet be found in literature. A 'theoretical' answer to this question has been designed by studying the sparsely available literature on ERP implementation costs, and adding to this relevant items from the related fields of software cost estimation, COTS implementation cost estimation, and ERP implementation critical success factors. This result has been compared with empirical data that have been obtained from two large corporations. The combined result can be seen as a first attempt to define a generally applicable list of cost drivers for ERP implementation. Title: STATISTICS API: DBMS-INDEPENDENT ACCESS AND MANAGEMENT OF DBMS STATISTICS IN HETEROGENEOUS ENVIRONMENTS Author(s): Tobias Kraft and Bernhard Mitschang Abstract: Many of todays applications access not a single but a multitude of databases running on different DBMSs. Federation technology is being used to integrate these databases and to offer a single query-interface to the user where he can run queries accessing tables stored on different remote databases. So, the optimizer of the federated DBMS has to decide what portion of the query should be processed by the federated DBMS itself and what portion should be executed at the remote systems. Thereto, it has to retrieve cost estimates for query fragments from the remote databases. The response of these databases typically contains cost and cardinality estimates but no statistics about the data stored in these databases. 
However, statistics are optimization-critical information and a crucial factor for any kind of decision making in the optimizer of the federated DBMS. When this information is not available, optimization has to rely on imprecise heuristics mostly based on default selectivities. To fill this gap, we propose Statistics API, a Java interface that provides DBMS-independent access to statistics data stored in databases running on different DBMSs. Statistics API also defines the data structures used for the statistics data returned by or passed to the interface. We have implemented this interface for the three prevailing commercial DBMSs: IBM DB2, Oracle and Microsoft SQL Server. These implementations will be available under the terms of the GNU Lesser General Public License (LGPL). This paper introduces the interface, i.e. the methods and data structures of the Statistics API, and discusses some details of the three interface implementations. Title: DYNAMIC COMMIT TREE MANAGEMENT FOR SERVICE ORIENTED ARCHITECTURES Author(s): Stefan Böttcher and Sebastian Obermeier Abstract: Whenever Service Oriented Architectures make use of Web service transactions and atomic processing of these transactions is required, atomic commit protocols are used for this purpose. Compared to traditional client-server architectures, atomicity for Web services and Web service composition is much more challenging, since in many cases the sub-transactions belonging to a global transaction are not known in advance. In this contribution, we present a dynamic commit tree that guarantees atomicity for transactions that invoke sub-transactions dynamically during the commit protocol's execution. Furthermore, our commit tree allows the identification of obsolete sub-transactions that occur when sub-transactions are aborted and restarted. 
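The idea behind a DBMS-independent statistics interface, as in the Statistics API abstract above, can be illustrated with a minimal sketch. This is not the authors' actual Java API; all class and method names here are hypothetical, and the default-selectivity formula is a standard textbook heuristic, not taken from the paper:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ColumnStatistics:
    """Statistics for one column, in a DBMS-neutral form."""
    distinct_values: int
    null_fraction: float

class StatisticsProvider(ABC):
    """Uniform interface; one subclass per DBMS hides its catalog details."""
    @abstractmethod
    def table_cardinality(self, table: str) -> int: ...
    @abstractmethod
    def column_statistics(self, table: str, column: str) -> ColumnStatistics: ...

    def equality_selectivity(self, table: str, column: str) -> float:
        """Shared estimate for `col = constant` predicates:
        assume a uniform distribution over distinct non-null values."""
        stats = self.column_statistics(table, column)
        if stats.distinct_values == 0:
            return 0.0
        return (1.0 - stats.null_fraction) / stats.distinct_values

class InMemoryProvider(StatisticsProvider):
    """Stand-in for a real DB2/Oracle/SQL Server catalog reader."""
    def __init__(self, tables):
        self._tables = tables
    def table_cardinality(self, table):
        return self._tables[table]["rows"]
    def column_statistics(self, table, column):
        return self._tables[table]["columns"][column]

provider = InMemoryProvider({
    "orders": {"rows": 10000,
               "columns": {"status": ColumnStatistics(distinct_values=4,
                                                      null_fraction=0.0)}}})
print(provider.equality_selectivity("orders", "status"))  # 0.25
```

A federated optimizer could then call the same `equality_selectivity` regardless of which DBMS backs a given table; only the catalog-reading subclass differs per system.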
Title: A VIRTUALIZATION APPROACH FOR REUSING MIDDLEWARE ADAPTERS Author(s): Ralf Wagner and Bernhard Mitschang Abstract: Middleware systems use adapters to integrate remote systems and to provide uniform access to them. Different middleware platforms use different adapter technologies; e.g., the J2EE platform uses J2EE connectors, and federated database systems based on the SQL standard use SQL wrappers. However, a middleware platform cannot use adapters of a different middleware platform; e.g., a J2EE application server cannot use an SQL wrapper. Even if an SQL wrapper exists for a remote system that is to be integrated by a J2EE application server, a separate J2EE connector for that remote system has to be written. Tasks like this occur over and over again and require investing additional resources where existing IT infrastructure should be reused. We therefore propose an approach that allows existing adapters to be reused. Reuse is achieved by means of a virtualization tier that can handle adapters of different types and that provides uniform access to them. This enables middleware platforms to use each other's adapters and thereby avoids the costly task of writing new adapters. Title: XML INDEX COMPRESSION BY DTD SUBTRACTION Author(s): Stefan Böttcher, Rita Steinmetz and Niklas Klein Abstract: Whenever XML is used as a data format to exchange large amounts of data, or even for data streams, the verbosity of XML is one of the bottlenecks. While compression of XML data seems to be a way out, it is essential for a variety of applications that the compression result can still be queried efficiently. Furthermore, for efficient evaluation of path queries an index is desired, which usually requires an additional data structure. 
For this purpose, we have developed a compression technique that uses structure information found in the DTD to perform a structure-preserving compression of XML data and provides a compression of an index that still allows efficient search in the compressed data. Our evaluation shows that overall compression factors close to gzip are possible, while the structural part of XML files can be compressed even better. Title: DISTRIBUTED APPROACH OF CONTINUOUS QUERIES WITH KNN JOIN PROCESSING IN SPATIAL DATA WAREHOUSE Author(s): Marcin Gorawski and Wojciech Gębczyk Abstract: This paper describes the realization of a distributed approach to continuous queries with kNN join processing in a spatial telemetric data warehouse. Due to the distributed nature of the developed system, new structural members were distinguished, such as the mobile object simulator, the kNN join processing service and the query manager. Distributed tasks communicate using Java RMI methods. A kNN (k nearest neighbours) query joins every point from one dataset with its k nearest neighbours in the other dataset. In our approach we use the Gorder method, a block nested loop join algorithm that exploits sorting, join scheduling and distance computation filtering to reduce CPU and I/O usage. Title: AN EXTENSIBLE RULE TRANSFORMATION MODEL FOR XQUERY OPTIMIZATION - RULES PATTERN FOR XQUERY TREE GRAPH VIEW Author(s): Nicolas Travers and Tuyêt Trâm Dang Ngoc Abstract: Efficient evaluation of XML query languages has become a crucial issue for XML exchange and integration. Tree Patterns [1][2][3] are now well established for representing XML queries, and a model called TGV [4][5] has extended the Tree Pattern representation in order to make it more intuitive, respect the full XQuery specification and support manipulation, optimization and subsequent evaluation. For optimization, a search strategy is needed. 
It consists in generating equivalent execution plans using extensible rules and estimating the cost of each plan to find the best one. We propose the specification of extensible rules that can be used in heterogeneous environments, supporting XML and manipulating Tree Patterns. Title: AN OVERVIEW OF THE OBJECT-ORIENTED DATABASE PROGRAMMING LANGUAGE DBPQL Author(s): Markus Kirchberg Abstract: In this paper, we present an integrated object-oriented database programming and querying language. While object-oriented programming languages and languages supported by object-relational or object-oriented database systems appear to be closely related, there are a number of significant differences affecting language design and implementation. Such issues include the degree of encapsulation, persistence, the incorporation of types and classes, inheritance, concurrency, NULL values, etc. In this paper, we mainly focus on those issues that affect language design. Title: TOURISM INFORMATION AGGREGATION USING AN ONTOLOGY BASED APPROACH Author(s): Miguel Gouveia and Jorge Cardoso Abstract: Aggregating related information from different data sources allows the creation of data repositories with very useful information. In the tourism domain, aggregating tourism products with related tourism attractions adds value to those products. The ability to create dynamic packages is another reason to aggregate tourism information. Defining an ontology, composed of the concepts to aggregate, is the first step in creating tourism aggregation systems. In this paper we define the approach and the architecture that guide the creation of aggregated solutions that provide valuable tourism information and allow the creation of dynamic packages. 
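The ontology-driven grouping that underlies the tourism-aggregation abstract above (and, similarly, the AVG_KW keyword roll-up earlier in this listing) can be sketched minimally. The concept hierarchy and item names below are hypothetical illustrations, not taken from either paper:

```python
from collections import defaultdict

# Toy concept hierarchy: concept -> broader concept (hypothetical ontology).
ontology_parent = {
    "surfing": "water_sports",
    "diving": "water_sports",
    "water_sports": "outdoor",
}

def generalize(concept, levels=1):
    """Walk up the concept hierarchy a given number of levels."""
    for _ in range(levels):
        concept = ontology_parent.get(concept, concept)
    return concept

def aggregate(items, levels=1):
    """Group (name, concept) items under generalized ontology concepts."""
    groups = defaultdict(list)
    for name, concept in items:
        groups[generalize(concept, levels)].append(name)
    return dict(groups)

items = [("Surf lesson", "surfing"),
         ("Reef dive", "diving"),
         ("Hotel Mar", "water_sports")]
print(aggregate(items))
# {'water_sports': ['Surf lesson', 'Reef dive'], 'outdoor': ['Hotel Mar']}
```

Raising `levels` merges items under ever-broader concepts, which is the basic mechanism behind both rolling up keywords to a common ancestor and packaging related tourism products and attractions together.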
Title: ONE-TO-MANY DATA TRANSFORMATION OPERATIONS - OPTIMIZATION AND EXECUTION ON AN RDBMS Author(s): Paulo Carreira, Helena Galhardas, João Pereira and Andrzej Wichert Abstract: The optimization capabilities of RDBMSs make them attractive for executing data transformations that support ETL, data cleaning and integration activities. However, despite the fact that many useful data transformations can be expressed as relational queries, an important class of data transformations that produce several output tuples for a single input tuple cannot be expressed in that way. To address this limitation, a new operator named data mapper has been proposed as an extension of relational algebra for expressing one-to-many data transformations. In this paper we study the feasibility of implementing the mapper operator as a primitive operator on an RDBMS. Data transformations expressed as combinations of standard relational operators and mappers can be optimized, resulting in interesting performance gains. Title: REVISITING THE OLAP INTERACTION TO COPE WITH SPATIAL DATA AND SPATIAL DATA ANALYSIS Author(s): Rosa Matias and João Moura-Pires Abstract: In this paper we propose a new interface for spatial OLAP systems. Spatial data deals with data related to space, and its complex and specific nature brings challenges to OLAP environments. Humans understand spatial data mainly through maps. Our spatial OLAP environment is composed of the following elements: a map, a support table and a detail table. These areas have synchronized granularity. We also extend OLAP operations to perform spatial analysis, for instance spatial drill-down, spatial roll-up and spatial slice. We take special care with the spatial slice, where we identify two main groups of operations: spatial-semantic slice and spatial-geometric slice. Title: DEVELOPMENT OF AN ACCOUNTING SYSTEM - APPLYING THE INCREMENTALLY MODULAR ABSTRACTION HIERARCHY TO A COMPLEX SYSTEM Author(s): Kenji Ohmori and Tosiyasu L. 
Kunii Abstract: A new methodology for software development is introduced and applied to an accounting system. The new method is called the incrementally modular abstraction hierarchy (IMAH). IMAH provides an abstraction hierarchy from abstract to concrete levels. Invariants defined on an abstract level are kept on a concrete level, which allows adding modules incrementally on each hierarchical level and avoids combinatorial explosion, a serious problem in software engineering, while climbing down the abstraction hierarchy in designing and modeling a complex system. This paper shows how IMAH is applied in developing an accounting system, which is fundamental in enterprise systems and a suitable example of a complex software system. At first, a very simple example recording only journal vouchers in a database system is used to describe the methodology of IMAH. Then, it is described how this simple system is incrementally developed into a conventional, complex accounting system. Title: MODELING DIMENSIONS IN THE XDW MODEL - A LVM-DRIVEN APPROACH Author(s): R. Rajugan, Elizabeth Chang and Tharam S. Dillon Abstract: Since the introduction of the eXtensible Markup Language (XML), XML repositories have gained a foothold in many global (and government) organizations, where e-Commerce and e-Business models have matured in handling daily transactional data among heterogeneous information systems. Due to this, the amount of data available for the enterprise decision-making process is increasing exponentially and is being stored and/or communicated in XML. This presents an interesting challenge to investigate models, frameworks and techniques for organizing and analysing such voluminous, yet distributed, XML documents for business intelligence in the form of XML warehouse repositories and XML marts. 
In our previous work, we proposed a view-driven, conceptual modelling framework for the design and development of an XML Document Warehouse (XDW) model with emphasis on warehouse user requirements. There, we presented a view-driven framework to conceptually model and deploy meaningful XML FACT repositories in the XDW model. Here, in this paper, we look at the hierarchical dimensions and their theoretical semantics used to design, specify and define dimensions over an XML FACT repository in the XDW model. One of the unique properties of this view-driven approach is that the dimensions are considered first-class citizens of the XDW conceptual model. To illustrate our concepts, we use a real-world case study example: a logically grouped, geographically dispersed XDW model in the context of a global logistics and cold-storage company. Title: AN INFORMATION SYSTEMS AUDITOR’S PROFILE Author(s): Mariana Carroll and Alta van der Merwe Abstract: The increasing dependence of businesses upon Information Systems (IS) in the last few decades has resulted in many concerns regarding auditing. Traditional IS auditing has changed from auditing ‘around the computer’ to auditing through and with the computer. Technology is changing rapidly, and so is the profession of IS auditing. As IS auditing is dependent on Information Technology (IT), it is essential that an IS auditor possesses IT and auditing knowledge to bridge the gap between the IT and auditing professions. 
In this paper we reflect on the auditor's profile in this changing domain. We first define the roles and responsibilities expected of IS auditors; describe the basic IT and audit knowledge required of IS auditors, based on the roles and responsibilities identified; describe the soft skills required of IS auditors to successfully perform an IS audit assignment; define the main types of IS audit tools and techniques used most often to assist IS auditors in executing IS audit roles and responsibilities; and lastly propose the IS auditor's profile. Title: ON CORRECTNESS CRITERIA FOR WORKFLOW Author(s): Belinda M. Carter and Maria E. Orlowska Abstract: Exception handling during the execution of workflow processes is a frequently addressed topic in the literature. Policies describe the desired handling response to exception events in terms of the current state of process execution. In this paper, we present insights into the definition and verification of such policies for handling asynchronous, expected exceptions. In particular, we demonstrate that the definition of exception handling policies is not a trivial exercise in the context of complex processes and that, while different approaches to defining and enforcing exception handling policies have been proposed, the issue of verification of the policies has not yet been addressed. The main contribution of this paper is a set of correctness criteria which we envisage could form the foundation of a complete verification solution for exception handling policies. Title: PROBLEMS WITH NON-OPEN DATA STANDARDS IN SWEDISH MUNICIPALS: WHEN INTEGRATING AND ADOPTING SYSTEMS Author(s): Benneth Christiansson and Fredrik Svensson Abstract: Governments world-wide are applying information and communication technology in order to meet a broad range of citizen and organizational needs. 
When planning systems integration, the choice should lead to the software that best suits the organizational needs, taking into account price, quality, ease of use, support, reliability, security and other characteristics considered important. This paper is based on experiences from the KOMpiere project, which aims at modifying the open-source-licensed ERP system Compiere for use in Swedish municipalities. The overall goal of the project is to support and enhance the use of open source licensed software in the Swedish public sector and thereby enable municipalities to lower their IT-related costs and gain strategic control over their own IT environment. We discovered that at least some Swedish municipalities do not have free access to the data they are appointed to govern and protect. The software vendors have, by using non-open data standards, excluded the municipalities from using their own data freely, thereby denying Swedish municipalities an open market. In this paper we suggest the creation and usage of XML-based ODS for all systems in Swedish municipalities. Title: USING AN INDEX OF PRECOMPUTED JOINS IN ORDER TO SPEED UP SPARQL PROCESSING Author(s): Sven Groppe, Jinghua Groppe and Volker Linnemann Abstract: SparQL is a query language developed by the W3C, the purpose of which is to query a data set in RDF representing a directed graph. Many freely available or commercial products already support SparQL processing. Current index-based optimizations integrated in these products typically construct indices on the subject, predicate and object of an RDF triple, which is a single datum of the RDF data, in order to speed up the execution of SparQL queries. In order to query the directed graph of RDF data, SparQL queries typically contain many joins over a set of triples. We propose to construct and use an index of precomputed joins, taking advantage of the homogeneous structure of RDF data. 
Furthermore, we present experimental results which demonstrate the achievable speed-up factors for SparQL processing. Title: AN EXECUTIVE INFORMATION SYSTEM FOR SECURITIES BROKER’S RISK MANAGEMENT WITH DATA WAREHOUSING AND OLAP Author(s): Yung-Hsin Wang, Shing-Han Li and Kuo-Lung Sun Abstract: With the opening of the domestic financial market, the targets of investment and money management have diversified. Competition from internationalization means the stock market no longer flourishes as it used to. The risk of margin trading has become important information that securities firms try to analyze and control. Following current regulations and working processes, this study constructs an executive information system with the application of data warehousing and online analytical processing (OLAP) to help securities brokers make decisions in the operation of risk management for margin purchase and short sale of securities. The result solves the problems that managers of margin trading usually face when using traditional accounting systems. Title: CHANGE MANAGEMENT IN DATA INTEGRATION SYSTEMS Author(s): Rahee Ghurbhurn, Philippe Beaune and Hugues Solignac Abstract: In this paper, we present a flexible architecture allowing applications and functional users to access heterogeneous distributed data sources. Our proposition is based on a multi-agent architecture and a domain knowledge model. The objective of such an architecture is to introduce some flexibility into the information system's architecture. This flexibility concerns both the ease of adding or removing existing or new applications and the ease of retrieving knowledge without having to know the underlying data source structures. We propose to model the domain knowledge with the help of one or several ontologies and to use a multi-agent architecture to maintain such a representation and to perform data retrieval tasks. The proposed architecture acts as a single point of entry to existing data sources. 
We therefore hide the heterogeneity, allowing users and applications to retrieve data without being hindered by changes in these data sources. Title: RELEVANT VALUES: NEW METADATA TO PROVIDE INSIGHT ON ATTRIBUTE VALUES AT SCHEMA LEVEL Author(s): Sonia Bergamaschi, Mirko Orsini, Francesco Guerra and Claudio Sartori Abstract: Research on data integration has provided languages and systems able to guarantee an integrated intensional representation of a given set of data sources. A significant limitation common to most proposals is that only intensional knowledge is considered, with little or no consideration for extensional knowledge. In this paper we propose a technique to enrich the intension of an attribute with a new sort of metadata: the "relevant values", extracted from the attribute values. Relevant values enrich schemata with domain knowledge; moreover, they can be exploited by a user in the interactive process of creating/refining a query. The technique is automatic, independent of the attribute domain, and based on data mining clustering techniques and the semantics emerging from data values. It is parametrized with various metrics for similarity measures and is a viable tool for dealing with frequently changing sources, as in the Semantic Web context. The technique is fully implemented in a prototype, which we describe together with some experimental results. Title: A KOREAN SEARCH PATTERN IN THE LIKE OPERATION Author(s): Sung Chul Park, Eun Hyang Lo, Jong Chul Park and Young Chul Park Abstract: The string pattern search operator LIKE of SQL has been developed based on English, such that each search pattern works on characters of the English alphabet. For finding Korean, search patterns of the operator can be expressed by both the alphabet and the syllables of Korean. As a phonetic symbol, each syllable of Korean is composed either of a leading sound and a medial sound, or of a leading sound, a medial sound and a trailing sound. 
By utilizing this characteristic of Korean syllables, in addition to the traditional complete-syllable based search pattern of Korean, this paper proposes an incomplete-syllable based search pattern of Korean, as a pattern of the operator LIKE, to find Korean syllables having specific leading sounds, specific medial sounds, or both specific leading and medial sounds. Formulating predicates equivalent to the incomplete-syllable based search pattern of Korean with existing SQL expressions is cumbersome and might cause portability problems for applications, depending on the underlying character set of the DBMS. Title: INTEGRATING ENTERPRISE DATA FOR DECISION SUPPORT IN CONSTRUCTION ORGANISATIONS Author(s): Tanko Ishaya, James Chadband and Lucy Grierson Abstract: Information integration is one of the main problems to be addressed when designing a data warehouse for decision-making support. Possible inconsistencies and redundancies between data residing at the operational data sources need to be resolved before migrating to a data warehouse, so that the data warehouse is able to provide an integrated and reconciled view of data within the organisation. This paper presents a performance-oriented data warehouse that integrates data for decision-making support within a construction organisation. The process is based on a conceptual representation of the enterprise, which has been exploited both in the data integration phase of the warehouse information sources and during the decision-making activity based on the information stored in the data warehouse. The application of the process has been supported by a prototype. Title: A CONTINUOUS DATA INTEGRATION METHODOLOGY FOR SUPPORTING REAL-TIME DATA WAREHOUSING Author(s): Ricardo Jorge Santos and Jorge Bernardino Abstract: A data warehouse provides information for analytical processing, decision making and data mining tools. 
As the concept of the real-time enterprise evolves, the synchronism between transactional data and statically implemented data warehouses has been reconsidered. Traditional data warehouse systems have static schemas and relationships between data, and are therefore not able to support any dynamics in their structure and content. Their data is only periodically updated because they are not prepared for continuous data integration. For these purposes, real-time data warehouses seem very promising. In this paper we present a methodology for adapting data warehouse schemas and user-end OLAP (On-Line Analytical Processing) queries to efficiently support real-time data integration. To accomplish this, we use techniques such as table structure replication and query predicate restrictions for selecting data, enabling continuous data integration in the data warehouse with minimum impact on query execution time. We demonstrate the functionality of the method by analyzing its impact on query performance using the TPC-H benchmark, executing query workloads while simultaneously performing continuous data integration at various insertion rates. Title: ACTIVITY WAREHOUSE: DATA MANAGEMENT FOR BUSINESS ACTIVITY MONITORING Author(s): Oscar Mangisengi, Mario Pichler, Dagmar Auer, Dirk Draheim and Hildegard Rumetshofer Abstract: Nowadays, tracking data from the checkpoints of business process activities has become an important data resource for business analysts and decision-makers, supporting tactical decisions in general and strategic decisions in particular. In the context of business process-oriented applications, business activity monitoring (BAM) systems, predicted to play a major role in the near future of the business-intelligence area, are the most visible answer to current business needs. In this paper we address an approach to derive an activity warehouse model based on the BAM requirements. 
The implementation shows that the data stored in the activity warehouse makes it possible to efficiently monitor business processes in real time and provides better real-time visibility of the business process. Title: LEGACY SYSTEM EVOLUTION – A COMPARATIVE STUDY OF MODERNISATION AND REPLACEMENT INITIATION FACTORS Author(s): Irja Kankaanpää, Päivi Tiihonen, Jarmo J. Ahonen, Jussi Koskinen, Tero Tilus and Henna Sivula Abstract: Decisions regarding information system evolution strategy become topical as an organisation's information systems age and start to approach the end of their life cycle. An interview study was conducted in order to compare the factors influencing modernisation and replacement initiation. The results show that the most prevalent individual reason for a modernisation initiative is business development, while the most typical reason for system replacement is the old age of the existing system. System age, obsolete technology and high operation or maintenance costs were identified in both modernisation and replacement projects. Other common initiation criteria for replacement projects were the end of vendor support and the system's inability to respond to the company's business needs. Typically, modernisation projects were initiated because of a system's old age and obsolete technology. Title: STRATEGIC FRAMEWORK TO IMPLEMENT A TELECOMMUNICATIONS BUSINESS INTELLIGENCE SOLUTION IN A DEVELOPING COUNTRY Author(s): D. P. du Plessis and T. McDonald Abstract: After privatisation, a telecommunications company that had held exclusive rights had to prepare itself for competition. It was of the utmost importance for the company to improve its business while there was still little competition and in so doing create a competitive advantage. A new business intelligence strategy was therefore required. A framework was developed to implement the company's business intelligence strategy. 
The framework consisted of the steps that had to be followed to grow business intelligence and data warehousing in the company. These steps were supported by two modules that formed part of the framework: the data warehousing lifecycle model and the business intelligence literacy and cultural maturity model. All the components of the framework are discussed in detail. Title: TOWARDS INDUSTRIAL SERVICE BUSINESS: CHALLENGES IN DESIGNING ICT SUPPORT FOR THE NETWORKS OF COMPANIES Author(s): Sauli Hiippavuori, Markus Hänninen, Samuli Pekkola and Kari Luostarinen Abstract: Currently, traditional manufacturing business is changing its shape and becoming a service industry. In addition to products, manufacturers are also providing specialized knowledge-based services. This transformation is not easy, as both the manufacturers and their customers have to learn new ways of doing business together. Although ICT can be perceived as an enabler for such operations, its support for the activities of networks of companies is still more or less unknown. In these settings, ICT-related challenges are manifold in comparison to traditional intra-organizational domains. In this paper, we present our findings from a case study on constructing ICT support for industrial service business. We provide a list, derived from a study of synchronizing two organizations and their factory-floor level operations, of technological challenges to consider when designing and implementing systems to support the daily business operations of industrial service business. Title: SOFTWARE COST ESTIMATION USING ARTIFICIAL NEURAL NETWORKS WITH INPUTS SELECTION Author(s): Efi Papatheocharous and Andreas Andreou Abstract: Software development is an intractable, multifaceted process encountering deep, inherent difficulties. Especially when trying to produce accurate and reliable software cost estimates, these difficulties are amplified due to the high level of complexity and uniqueness of the software process. 
This paper addresses the issue of estimating the cost of software development by identifying the need for countable entities that affect software cost and using them with artificial neural networks to establish a reliable estimation method. Input Sensitivity Analysis (ISA) is performed on predictive models of the Desharnais and ISBSG datasets, aiming at identifying any correlation present between important cost parameters at the input level and development effort (the output). The degree to which the input parameters define the evolution of effort is then investigated, and the selected attributes are employed to establish accurate prediction of software cost in the early phases of the software development life-cycle. Title: DQXSD: AN XML SCHEMA FOR DATA QUALITY - AN XSD FOR SUPPORTING DATA QUALITY IN XML Author(s): Eugenio Verbo, Ismael Caballero and Mario Piattini Abstract: Traditionally, data quality management has mainly focused on both the data source and the data target. Increasingly, processing data to get a data product needs raw data typically distributed among different data sources. However, if data quality is not preserved when data is transmitted, the resulting data product and the consequent information will not be of much value. It is necessary to improve exchange methods to get a better information process. This paper focuses on that issue, proposing a new approach to data quality. Using XML and related technologies, a document structure that considers quality as a main topic is defined. The resulting schema is verified using several measures and comparing it to the data source. Title: MULTIDIMENSIONAL VECTOR ROUTING IN A P2P NETWORK Author(s): Laurent Yeh, Georges Gardarin and Florin Dragan Abstract: P2P systems tend to be largely accepted as a common support for deploying massively distributed data management applications. Many multidimensional data P2P indexing techniques suffer from severe limitations regarding the number of data dimensions to be indexed. 
In this paper, we propose a new approach for indexing multidimensional data in a P2P architecture. It is based on an efficient query overlay network (the so-called routing layer) built from a new data structure named the skip-zone. We index data items of high dimensionality as vectors, and we adopt polyhedral algebra for splitting the domain of possible values into sub-domains. Every peer controls a sub-domain of values. To manage the overlay network metadata required for tracking the network evolution or for routing data at each peer, we propose an efficient distributed metadata layer that works in cooperation with the routing layer. An evaluation outlines the main properties of our architecture versus those of similar systems. The insensitivity of our data model to vector dimensionality is a key advantage of our approach. Title: EXPOSING WORKFLOWS TO LOAD BURSTS Author(s): Dmytro Dyachuk and Ralph Deters Abstract: Well-defined, loosely coupled services are the basic building blocks of the service-oriented design-integration paradigm. Services are computational elements that expose functionality (e.g. legacy applications) in a platform-independent manner and can be described, published, discovered, orchestrated and consumed across language, platform and organizational borders. Using service-orientation (SO), it is fairly easy to expose existing applications/resources and to aggregate them into novel services called composite services (CS). This aggregation is achieved by defining a workflow that orchestrates the underlying services in a manner consistent with the desired functionality. Since CS can aggregate atomic and other composite services, they foster the development of service layers and the reuse of already existing functionality. But by defining workflows, existing services are put into novel contexts and exposed to different workloads, which in turn can result in unexpected behaviours. 
This paper examines the behaviour of sequential workflows that experience short-lived load bursts. Using workflows of varying length, the paper reports on the transformations that loads experience as they are processed by providers. Title: ENABLING CSCW SYSTEMS TO AUTOMATICALLY BIND EXTERNAL KNOWLEDGE BASES Author(s): Thomas Bopp, Jonas Schulte and Thorsten Hampel Abstract: The usage of CSCW systems for teaching, training and research collaboration is increasing, since they offer a time- and place-independent as well as cost-effective platform. The user's search should not be restricted to local material; in fact, users benefit from different search environments, such as digital libraries, to find appropriate working material. Searching and further processing of documents currently imply a media break, since the search cannot be invoked directly from within current CSCW systems. This paper presents the first prototype of a CSCW system which enables users to search external sources without a media break. To support arbitrary search environments, no restrictions on data formats or search functionality can be imposed. Hence, we have enhanced search environments with self-description capabilities in order to realize automatic binding of search environments in CSCW systems. By search environments we mean any system offering searchable knowledge bases, such as digital libraries or the CSCW system itself. Furthermore, our concept supports local search and searching in different external sources in parallel. Title: DOING THINGS RIGHT OR DOING THE RIGHT THINGS? PROPOSING A DOCUMENTATION SCHEME FOR SMALL TO MEDIUM ENTERPRISES Author(s): Josephine Antoniou, Panagiotis Germanakos and Andreas S. Andreou Abstract: Maintaining a system's intended functionality and performance is indeed one of the major problems nowadays, due to the rapid increase and continuous change of customer demands. 
Hence, it is crucial to investigate whether documentation, the most reliable source for preserving a software system’s quality over the years, is properly created, updated and used in Small to Medium Enterprises (SMEs) operating in small EU markets, focusing both on the development process and on maintenance activities. The main objective of this paper is therefore to propose the minimum documentation set required to fulfil both Software Engineering principles and the practical needs of SMEs, by comparing literature suggestions with empirical findings. In further support of our documentation set, we present and discuss the results of a small survey conducted in nine IT-oriented SMEs in Cyprus and Greece. Title: OOPUS - A PRODUCTION PLANNING INFORMATION SYSTEM TO ASSURE HIGH DELIVERY RELIABILITY UNDER SHORT-TERM DEMAND CHANGES AND PRODUCTION DISTURBANCES Author(s): Wilhelm Dangelmaier, Tobias Rust, Thomas Hermanowski, Daniel Brüggemann, Daniel Kaschula, Andre Döring and Thorsten Timm Abstract: Batch-sizing and scheduling is the central decision problem in the area of production planning. A special challenge in this context is handling the large amount of data within an adequate time interval. Appropriate techniques are required to aggregate and illustrate this data clearly. This paper presents a new approach that integrates a Production Planning Table, visualized by a Gantt chart, with a cumulative quantity table for maximum information transparency in production planning. The discussed solution is realized in OOPUS, an object-oriented tool for planning and control, which became the leading production planning system in two motor assembly plants of an international automobile manufacturer. 
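The cumulative quantity table mentioned in the OOPUS abstract above can be illustrated with a minimal sketch: cumulative planned output is compared against cumulative actual output per period, and the difference shows backlog or lead. The figures and the single-part setting are invented for illustration, not taken from the OOPUS case study.

```python
from itertools import accumulate

# Hypothetical per-shift planned and actual output for one part (units).
planned = [120, 120, 140, 140, 120]
actual = [118, 121, 133, 138, 125]

cum_planned = list(accumulate(planned))
cum_actual = list(accumulate(actual))

# Backlog (positive) or lead (negative) per period: the quantity a planner
# would read off a cumulative quantity table to spot delivery risks early.
backlog = [p - a for p, a in zip(cum_planned, cum_actual)]
print(backlog)  # [2, 1, 8, 10, 5]
```

Reading cumulative rather than per-period quantities makes short-term disturbances visible as a growing gap instead of isolated dips.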
Title: MANAGING COMPLEX INFORMATION IN REACTIVE APPLICATIONS USING AN ACTIVE TEMPORAL XML DATABASE APPROACH Author(s): Essam Mansour, Kudakwashe Dube and Bing Wu Abstract: Some tasks in application domains, such as patient care practice, require constant monitoring of a dynamic context and environment based on best practice in the form of predefined evidence- or experience-based information or knowledge. In the most basic scenario, these applications take the form of reactive and active applications. Incorporating best practice into the routines used in such domains is a challenging problem that requires dedicated approaches and methods for comprehensively managing the complex information associated with these domains. This paper presents a generic framework for Complex Information Management (CIM) in domains where best practice, applied to changing circumstances, needs to be incorporated into day-to-day work. The approach adopted combines the event-condition-action (ECA) rule paradigm, a temporal mechanism, advanced DBMS features and XML technologies, facilitating support for the three key complex information management dimensions: specification, execution, and manipulation (dissemination, query and maintenance) of the complex domain information. The uniqueness of the work presented in this paper lies in supporting these multiple management dimensions under a single unified framework. The main contribution of our framework is in managing the reactive application logic within the specification and execution dimensions as one object that is easy to manipulate, query, and disseminate within the manipulation dimension. The benefits of our approach include the flexibility of managing the complex information as one document and the ease of incorporating the complex information management system into other systems. Title: USING FUZZY DATACUBES IN THE STUDY OF TRADING STRATEGIES Author(s): M. 
Delgado Calvo-Flores, J. F. Nuñez Negrillo, E. Gibaja Galindo and C. Molina Fernández Abstract: A fuzzy multidimensional model can be used for exploratory analysis, modelling complex concepts that are very difficult to capture in crisp models. Some problems, such as the edge problem, can be reduced using this approach. Hiding the complexity of the fuzzy logic is important in this situation. In this paper we present an application of a fuzzy multidimensional model, which uses a two-layer representation to hide this complexity from the user, to the study of trading strategies. Title: ORGANIZATIONAL ISSUES ON COOPETITIVE FEDERATED INFORMATION SYSTEMS Author(s): Mirko Cesarini and Mario Mezzanzanica Abstract: In this paper we point out the organizational issues related to the set-up of an information systems federation based on coopetitive behavior. The joint exploitation of information owned by different, independent, even competing entities may be carried out according to a "coopetitive model". The term coopetition is used in the management literature to refer to a hybrid behavior comprising competition and cooperation. We show in this paper that the set-up of a coopetitive scenario raises organizational issues, which can be addressed by the creation of inter-firm personal relationships as well as by the active engagement of the firms’ decision makers. Title: ON THE SEMI-AUTOMATIC VALIDATION AND DECOMPOSITION OF TERNARY RELATIONSHIPS WITH OPTIONAL ELEMENTS Author(s): Ignacio-J. Santos, Paloma Martínez Fernandez and Dolores Cuadra Abstract: This paper analyzes problems that concern the design of databases. CASE tools supply a resource kit for the design and creation of databases in a DBMS (Database Management System). Sometimes, these tools only help to draw diagrams. Ideally, they would verify and validate the DB design and transform it from the Conceptual to the Logical Model. In a final step, they would transform the Logical Model to a specific DBMS. 
Currently, commercial tools do not verify or validate the model in an optimal way. This paper focuses on the validation and checking of database schemas. The work especially analyzes ternary or higher-order relationships with optional components. Title: STAH-TREE: HYBRID INDEX FOR SPATIO TEMPORAL AGGREGATION Author(s): Marcin Gorawski and Michał Faruga Abstract: This paper presents a new index that stores spatiotemporal data and provides efficient algorithms for processing range and time aggregation queries whose results are precise values rather than approximations. In addition, this technology allows detailed information to be reached when required. Spatiotemporal data are defined as static spatial objects with non-spatial attributes changing in time. A range aggregation query computes an aggregate over the set of spatial objects that fall into the query window. Its temporal extension allows additional time constraints to be defined. The index name, STAH-tree, stands for Spatio-Temporal Aggregation Hybrid tree. The STAH-tree is based on two well-known indexing techniques: the R-tree and aR-tree for storing spatial data, and the MVB-tree for storing non-spatial attribute values. These techniques were extended with new functionality and adapted to work together. A cost model for node accesses was also developed. Title: USING SEMANTIC WEB AND SERVICE ORIENTED TECHNOLOGIES TO BUILD LOOSELY COUPLED SYSTEMS: SWOAT – A SERVICE AND SEMANTIC WEB ORIENTED ARCHITECTURE TECHNOLOGY Author(s): Bruno Caires and Jorge Cardoso Abstract: The creation of loosely coupled and flexible applications has been a challenge faced by most organizations. This is important because organization systems need to respond and adapt quickly to changes that occur in the business environment. In order to address these key issues, we implemented SWOAT, a ‘Service and Semantic Web Oriented Architecture Technology’ based middleware. 
Our system uses ontologies to semantically describe and formalize the information model of the organization, providing a global and integrated view over a set of database systems. It also allows interoperability with several systems using Web Services. Using ontologies and Web services, clients remain loosely coupled from data sources. As a result, data structures can be changed and moved without having to change all clients, internal or external to the organization. Title: A MULTI-VIEWS REPOSITORY FOR MULTI-STRUCTURED DOCUMENTS Author(s): Karim Djemal Abstract: The diversity of uses of digital documents has created new interest in archiving, storing and accessing them. A lot of work has been done in this area. This paper presents a way of handling multi-structured documents within repositories. The use of views is a way of managing these documents. The proposed meta-model allows a better representation of the documents through the modelling of their elements, metadata and the relations that connect them. It also offers better management of the storage space for views by allowing them to overlap. Title: USABILITY ISSUES IN SERVICE-ORIENTED ARCHITECTURE Author(s): Jaroslav Král and Michal Zemlicka Abstract: Usability is of growing importance. It is crucial for the acceptance of software systems nowadays. Software usability in its classical sense is mainly a property of the user interface of a system. A usable interface should have at least three properties: it must be easy to understand, easy to remember, and not too laborious to use. We show that in SOA systems called confederations, the interfaces of the constituent application services should have the first two properties. This is a precondition for the usability of the system’s user interface. These properties are important for the software engineering aspects of confederations (scalability, modifiability, reuse of existing systems, stability) as well as for their functions, e.g. 
for business processes (flexibility, on-line modifiability, etc.). We discuss some standardization issues. We show that in many cases the requirement to have services with highly usable interfaces is more important than the requirement that the interfaces be based on widely used worldwide standards like SOAP. We discuss the reasons why the interfaces as well as the architecture should be coarse-grained. Title: A XML-BASED QUALITY MODEL FOR WEB SERVICES CERTIFICATION Author(s): J. Jorge Dias Jr., J. Adson O. G. da Cunha, Alexandre Álvaro, Roberto S. M. de Barros and Sílvio Meira Abstract: The Internet has made possible the development of software as services, consumed on demand and developed by third parties. In this sense, a quality model is necessary to enable evaluation and, consequently, reuse of the services by consumers. Thus, this paper proposes a quality model based on the ISO 9126 standard, defining a set of attributes and metrics for an effective evaluation of Web services. An XML-based representation model was created to support this quality model, and a security schema was proposed to guarantee the integrity and authenticity of the model. Title: PREFERENCE RULES IN DATABASE QUERYING Author(s): Sergio Greco, Cristian Molinaro and Francesco Parisi Abstract: The paper proposes the use of preferences for querying databases. In expressing queries it is natural to express preferences among the tuples belonging to the answer. This can be done in commercial DBMSs, for instance, by ordering the tuples in the result. The paper presents a different proposal, based on similar approaches deeply investigated in the artificial intelligence field, where preferences are used to restrict the result of queries posed over a database. 
In our proposal a query over a database DB is a triple (q, P, Phi), where q denotes the output relation, P is a Datalog program (or an SQL query) used to compute the result and Phi is a set of preference rules used to introduce preferences on the computed tuples. In our proposal, tuples which are "dominated" by other tuples do not belong to the result and cannot be used to infer other tuples. A new stratified semantics is presented in which the program P is partitioned into strata and the preference rules associated with each stratum of P are divided into layers; the result of a query is obtained by computing one stratum at a time and applying the preference rules one layer at a time. We show that our technique is sound and that the complexity of computing queries with preference rules remains polynomial. Title: DIMENSION HIERARCHIES UPDATES IN DATA WAREHOUSES - A USER-DRIVEN APPROACH Author(s): Cécile Favre, Fadila Bentayeb and Omar Boussaid Abstract: We designed a data warehouse in collaboration with LCL-Le Crédit Lyonnais (LCL) meeting users’ needs regarding marketing operations decisions. However, the nature of the users’ work implies that their requirements change often and never reach a final state. In this paper, we propose an original and global approach to achieve user-driven schema evolution that answers personalized analysis needs. Our approach is composed of four phases: (1) acquisition of users’ knowledge in the form of aggregation rules, (2) knowledge integration to transform rules into mapping tables, (3) data warehouse schema update, and (4) on-line analysis. To validate our approach, we developed a prototype called WEDrik (data Warehouse Evolution Driven by Knowledge) within the Oracle 10g DBMS. Furthermore, we applied our approach to banking data from LCL and illustrate it with a simplified running example extracted from this case study. 
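The knowledge-integration step in the WEDrik abstract above (user aggregation rules transformed into mapping tables for a new dimension level) can be sketched minimally as follows. The rule format, the attribute names and the banking-style values are all invented for illustration, not taken from the paper.

```python
# Hypothetical user-defined aggregation rules: each pairs a predicate on an
# existing dimension member with a member of a new, coarser level.
rules = [
    (lambda agency: agency["potential"] >= 100, "HighPotential"),
    (lambda agency: agency["potential"] < 100, "LowPotential"),
]

agencies = [
    {"id": "A1", "potential": 150},
    {"id": "A2", "potential": 80},
    {"id": "A3", "potential": 120},
]

# Integrate the rules into a mapping table: existing member -> new parent.
# The schema update phase would then add this level to the hierarchy.
mapping_table = {}
for agency in agencies:
    for predicate, parent in rules:
        if predicate(agency):
            mapping_table[agency["id"]] = parent
            break

print(mapping_table)
```

Materializing rules as a plain member-to-parent table keeps the schema update itself declarative: the warehouse only needs to join against the table, not re-evaluate user predicates.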
Title: A DATA WAREHOUSE ARCHITECTURE FOR INTEGRATING FIELD-BASED DATA Author(s): Alberto Salguero, Francisco Araque and Ramón Carrasco Abstract: Spatial Data Warehouses (SDWs) combine DWs and Spatial Databases (SDBs) for managing significant amounts of historical data that include spatial location. In order to manipulate spatial objects, a spatial database must include special data types to represent the geometric characteristics of objects. Space can also be seen as a continuous field, with the information of interest obtained at each point of the space. The previously proposed extensions of the multidimensional data model used in Data Warehousing only deal with spatial objects; none of them consider field-based information. Usually, field-based data is produced by interpolating the values of several sensors distributed over a surface. In Data Warehousing it is necessary to integrate semantically related data from several data sources. This paper presents an architecture that automatically determines the best parameters for refreshing and integrating field-based data from different data sources. Title: TRANSACTION SERVICE COMPOSITION - A STUDY OF COMPATIBILITY RELATED ISSUES Author(s): Anna-Brith Arntsen and Randi Karlsen Abstract: Different application domains have varying transactional requirements. Such requirements must be met by an adaptable and flexible transaction processing environment. ReflecTS is such an environment, providing flexible transaction processing through the ability to select and dynamically compose a transaction service suitable for each particular transaction execution. A transaction service (TS) can be seen as a composition of a transaction manager (TM) and a number of involved resource managers (RMs). Dynamic service composition raises the need to examine compatibility between the components in a TS. 
In this work, we present a novel approach to transaction service composition by evaluating Property and Communication compatibility between a TM and RMs. Title: MONITORING WEB DATA SOURCES USING TEMPORAL PROPERTIES AS AN EXTERNAL RESOURCES OF A DATA WAREHOUSE Author(s): Francisco Araque, Alberto Salguero and Cecilia Delgado Abstract: Flexibility to react to rapidly changing conditions of the environment has become a key factor for the economic success of any company, and the WWW has become an important information resource for this purpose. Nowadays most important enterprises have incorporated Data Warehouse (DW) technology, into which information retrieved from different sources, including the WWW, is integrated. The quality of the data provided to decision makers depends on the capability of the DW system to convey, in a reasonable time, the changes made at the data sources from the sources to the data marts. Using the data arrival properties of the underlying information sources, the DW administrator can derive more appropriate rules and check the consistency of user requirements more accurately. In this paper we present an algorithm for data integration that depends on the temporal characteristics of the data sources, and an architecture for monitoring web sources on the WWW in order to obtain their temporal properties. In addition, we show an example applied to the tourism area, where data integrated into the DW can be used to schedule personalized travel as a value-added service for electronic commerce. Title: SEMANTIC ORCHESTRATION MERGING - TOWARDS COMPOSITION OF OVERLAPPING ORCHESTRATIONS Author(s): Clementine Nemo, Mireille Blay-Fornarino, Michel Riveill and Günter Kniesel Abstract: Service oriented architectures foster the evolution of enterprise information systems by supporting loose coupling and easy composition of services. Unfortunately, current approaches to service composition are inapplicable to services that share subservices or data. 
In this paper, we define overlapping orchestrations, analyse the problems they pose to existing composition approaches and propose orchestration merging, a novel, interactive approach to the composition of overlapping orchestrations. Title: A METHOD PROPOSAL FOR ARCHITECTURAL RELIABILITY EVALUATION Author(s): Anna Grimán, María Pérez, Luis E. Mendoza and Edumilis Méndez Abstract: Software quality characteristics, such as reliability, maintainability, usability and portability, among others, are directly determined by the software architecture, which consequently constitutes a very important artifact to be evaluated as soon as a general design is obtained. This article proposes a method to estimate software reliability by evaluating the software architecture. Our method combines the strengths of three evaluation methods, ATAM (Kazman et al., 2000), DUSA (Bosch, 2000) and AEM (Losavio et al., 2004), obtained by identifying the main features needed in architectural reliability evaluation and studying several architectural mechanisms that promote this quality characteristic. Based on these features and the advantages of the studied methods and mechanisms, we established phases, activities, roles, inputs/outputs, and artifacts, and constructed a feasible method which can be applied in any organization interested in improving its software construction process and product. Title: A WEB TOOL FOR WEB DOCUMENT AND DATA SOURCE SELECTION WITH SQLFI Author(s): Marlene Goncalves and Leonid Tineo Abstract: The WWW is composed of a great volume of documents stored in several data sources. Normally, a user is interested in those documents that include certain keywords. However, these documents might be incomplete, outdated or huge, and therefore the user would have to discard irrelevant ones. An ideal web search tool would select the best documents based on user criteria defined over quality parameters such as completeness, recency, frequency of updates and granularity. 
Traditional query languages are very restrictive in expressing preference-based queries. Therefore, new query languages, such as SQLf, are needed. We present a tool that allows the selection of the best data sources and documents in terms of user preferences. Documents and data sources are described according to quality parameters. User preferences are expressed with SQLf queries. Our tool contains a wizard to retrieve the best documents and data sources: the user is guided through a set of steps in which a preference query involving quality parameters is built. Title: A METRICS PROPOSAL TO EVALUATE SOFTWARE INTERNAL QUALITY WITH SCENARIOS Author(s): Anna Grimán, María Pérez, Maryoly Ortega and Luis Mendoza Abstract: Software quality should be evaluated from different perspectives; we highlight the internal and external ones (ISO/IEC, 2002). In particular, internal quality evaluation depends on the software architecture (or design) and programming aspects rather than on the product’s behaviour. On the other hand, architectural evaluation methods tend to apply scenarios for assessing the architecture with respect to quality requirements; however, scenarios alone are often not effective enough to determine the level of satisfaction of the quality attributes. In practice, each scenario may need more than one measurement, and a quantitative way of comparing and reporting results is needed. The main objective of this article is to present a set of metrics, grouped by quality characteristics and sub-characteristics according to the ISO 9126 standard, which can be applied to assess software quality based on the architecture. Once the most important quality requirements have been selected, these metrics can be used directly, or in combination with quality scenarios, within an architectural evaluation method. The proposed metrics also consider particular technologies, such as OO, distributed and web systems. 
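To make the idea of architecture-level internal quality metrics in the abstract above concrete, here is a minimal sketch of one such measurement: average fan-out over a module dependency graph, a simple coupling indicator that could feed a maintainability sub-characteristic. The metric choice, module names and dependency map are invented for illustration and are not from the article.

```python
# Hypothetical module dependency map for a small design
# (module -> modules it uses).
dependencies = {
    "ui": ["services", "models"],
    "services": ["models", "persistence"],
    "persistence": ["models"],
    "models": [],
}

# Fan-out per module, and the design-wide average: one quantitative value
# that can be compared against a threshold during architectural evaluation.
fan_out = {module: len(deps) for module, deps in dependencies.items()}
avg_fan_out = sum(fan_out.values()) / len(fan_out)
print(avg_fan_out)  # 1.25
```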
Title: WFESELECTOR - A TOOL FOR COMPARING AND SELECTING WORKFLOW ENGINES Author(s): Karim Baïna Abstract: The task of selecting a workflow engine is becoming more and more complex and risky. For this reason, organisations require a broad and clear vision of which workflow engines are, and will continue to be, suitable for changing requirements. This paper presents a workflow engine comparison model to analyse, compare, and select business process management modelling and enactment engines (Workflow Engines, or WFEs) according to user-specific requirements. After describing the underlying model itself, we present the implementation of this comparison model in our multi-criteria workflow engine comparison and selection prototype, WFESelector. The latter offers two scenarios for selecting relevant WFEs: expressing dynamic multi-criteria queries over a WFE evaluation database, or browsing the whole WFE classification through a reporting dashboard based on aggregation. WFESelector has subsequently been used experimentally to assess criteria satisfaction on a large number of open-source workflow engines (as many as 35). Title: INTEGRATING IDENTIFICATION CONSTRAINTS IN WEB ONTOLOGY Author(s): Thi Dieu Thu Nguyen and Nhan Le-Thanh Abstract: In recent years, there has been growing interest in semantic integration in the Semantic Web environment, whose goal is to access, relate and combine knowledge from multiple sources. The need to integrate the semantics of relational data sources into this environment has therefore also emerged. However, there is one important aspect of database schemas that OWL has not yet captured, namely identification constraints. To address this problem, this paper introduces a decidable extension of OWL-DL, named OWL-K, that supports such constraints. 
Title: THE HAV DATA INTEGRATION APPROACH: THE MAPPING IN HAV Author(s): Fatima Boulçane Abstract: This paper provides an overview of a hybrid approach to heterogeneous data integration, which we term Hybrid As View (HAV), and focuses on the HAV mappings between the global schema and the source schemas through partial schemas. The contribution of this approach lies on two complementary axes: (i) proposing a multi-mediator architecture essentially made up of two types of components, specialized mediators and a global mediator. Each specialized mediator provides an integrated view of the sources sharing the same model; the global mediator integrates the partial schemas provided by the set of specialized mediators to give access to a uniform view represented by a global schema. (ii) modelling the relation between the global schema and the sources through the partial schemas by combining the best of the two approaches Global As View (GAV) and Local As View (LAV). Title: SIMPLIFIED QUERY CONSTRUCTION - QUERIES MADE AS EASY AS POSSIBLE Author(s): Brad Arshinoff, Damon Ratcliffe, Martin Saetre, Reda Alhajj and Tansel Özyer Abstract: QMAEP (Queries Made as Easy as Possible) is a new system that greatly simplifies the process of query construction for statisticians and researchers. This document focuses on the usability of the database query language and deals with visual representations of the query process, specifically the select query. Methods of integrating simple Graphical User Interfaces (GUIs) for building queries into pre-existing database forms are explored as a means of providing users with an intuitive method of query construction. This paper explores data mining as it pertains to clinical research, with emphasis on simplifying the data extraction process from complex databases so as to accommodate analysis of the data using statistical software such as SASS, QMath and MS Excel. 
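Assembling a SELECT statement from GUI choices, as the QMAEP abstract above describes, can be sketched as follows. This is not QMAEP's implementation; the function, table and column names are invented, and the sketch shows one standard safety measure: user-supplied values go into parameters, never into the SQL string.

```python
def build_select(table, columns, filters):
    """Assemble a parameterized SELECT from GUI-style choices.

    `filters` is a list of (column, operator, value) tuples; operators are
    restricted to a whitelist so free text never reaches the SQL string.
    """
    allowed_ops = {"=", "<", ">", "<=", ">=", "<>"}
    where_parts, params = [], []
    for column, op, value in filters:
        if op not in allowed_ops:
            raise ValueError(f"operator not allowed: {op}")
        where_parts.append(f"{column} {op} ?")
        params.append(value)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where_parts:
        sql += " WHERE " + " AND ".join(where_parts)
    return sql, params

sql, params = build_select("patients", ["id", "age"], [("age", ">=", 65)])
print(sql)     # SELECT id, age FROM patients WHERE age >= ?
print(params)  # [65]
```

The returned pair can be handed directly to a DB-API cursor (`cursor.execute(sql, params)`), which is what lets a form-driven tool expose query construction without exposing SQL injection.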
Title: AN EFFICIENT INTERFACE TO HANDLE COMPLEX STRUCTURE FOR DATABASE DESIGN Author(s): Hassan Badir and Adrian Tanasescu Abstract: Visual design based on schema graphs simplifies database design for technical and non-technical users who are not familiar with the database structure and do not want to use a formal language. We propose an assisting visual design interface for object-relational databases called GUEDOS. The work is motivated by the successful adaptation of the interactive design paradigm in conventional areas. This graphical design system facilitates the user's task of formulating database schemas. The human-computer interaction techniques adopted in this interactive design system, and different graphical systems, are also described through a number of illustrative examples. In this work we apply it to Molecular Biology, more precisely to complete organelle genomes. We aim to offer biologists the possibility to access, in a unified way, information spread among heterogeneous genome databanks. Title: UNDERSTANDING THE DYNAMICS OF INFORMATION SYSTEMS Author(s): Abdelwahab Hamou-Lhadj Abstract: Information systems are undergoing significant transformations triggered by Internet technology. However, most existing systems suffer from poor to non-existent documentation, which makes the maintenance process a daunting task even for a skilled software engineer. As a result, software engineers are often faced with the inevitable problem of understanding different aspects of a system before undertaking even a simple maintenance task. This paper describes ongoing research in the area of program comprehension that aims at investigating efficient techniques for understanding the dynamics of software systems, with a particular emphasis on information systems. The proposed approach is based on the analysis of the system’s execution traces. 
The long-term objective is to create effective tool support for software engineers working on maintenance tasks. Title: PIN: A PARTITIONING & INDEXING OPTIMIZATION METHOD FOR OLAP Author(s): Ricardo Jorge Santos and Jorge Bernardino Abstract: Optimizing the performance of OLAP queries in relational data warehouses has always been a major research issue. Various techniques can be used to achieve this goal, such as data partitioning, indexing, data aggregation, data sampling and redefinition of database schemas, among others. In this paper we present a method that links partitioning and indexing, based on the features present in predefined major decision-making queries, to optimize a data warehouse’s performance. The method is evaluated using the TPC-H benchmark, comparing it with standard partitioning and indexing techniques and demonstrating its efficiency in single-user and multiple-simultaneous-user scenarios. Title: MODEL-DRIVEN DEVELOPMENT USING STANDARD TOOLS Author(s): Julián Garrido, Mª Ángeles Martos and Fernando Berzal Abstract: This paper describes a model-driven software development tool suitable for the rapid development of enterprise applications. Instead of requiring new specialized development environments, our tool builds on top of a conventional programming platform, so that it is suitable for the progressive adoption of model-driven development techniques within a software development organization. Title: ONE-TO-MANY DATA TRANSFORMATIONS - AS RELATIONAL OPERATIONS Author(s): Paulo Carreira Abstract: Transforming data is a fundamental operation in data management activities like data integration, legacy data migration, data cleaning, and extract-transform-load processes for data warehousing. Since data often resides in relational databases, data transformations are often implemented as relational queries that aim at leveraging the optimization capabilities of most RDBMSs. 
However, due to the limited expressive power of Relational Algebra, several important classes of data transformations cannot be specified as SQL queries. In particular, SQL is unable to express data transformations that require the dynamic creation of several tuples for each tuple of the source relation. This dissertation proposes to address this class of data transformations, common in data management activities, by extending Relational Algebra with a new relational operator named the data mapper. An initial contribution of this work consists of studying the formal aspects of the mapper operator, focusing on its formal semantics and expressiveness. A further contribution consists of supporting cost-based optimization of data transformation expressions that combine mappers with standard relational operators. To that end, a set of algebraic rewriting rules and different physical execution algorithms are being developed. Title: THE CONCEPTUAL FRAMEWORK FOR BUSINESS PROCESS INNOVATION: TOWARDS A RESEARCH PROGRAM ON GLOBAL SUPPLY CHAIN INTELLIGENCE Author(s): Charles Møller Abstract: Most industrial supply chains today are globally scattered, and nearly all organizations rely on their Enterprise Information Systems (ES) for integration and coordination of their activities. In this context, innovation in a global supply chain must be driven by advanced information technology. This position paper proposes a research program on Global Supply Chain Intelligence. The paper argues that a conceptual framework for Business Process Innovation is required to approach innovation in a global supply chain. A research proposal based on five interrelated topics is derived from the framework. The research program is intended to establish the conceptual framework for business process innovation, to develop it further, and to apply it in a global supply chain context. 
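The one-to-many mapper operator described in the ONE-TO-MANY DATA TRANSFORMATIONS abstract above can be sketched minimally: each source tuple may yield several output tuples, which plain relational projection/selection cannot do. The relation layout and the per-quarter example are invented for illustration, not taken from the dissertation.

```python
# A minimal sketch of a one-to-many "mapper" operator over a relation
# represented as a list of tuples.
def mapper(relation, fn):
    """Apply fn to every tuple; fn returns zero or more output tuples."""
    return [out for row in relation for out in fn(row)]

# Example: split yearly totals into per-quarter tuples (1 source tuple -> 4).
sales = [("acme", 400), ("globex", 80)]

def per_quarter(row):
    name, yearly = row
    return [(name, q, yearly // 4) for q in range(1, 5)]

result = mapper(sales, per_quarter)
print(len(result))  # 8
```

Because `fn` may also return an empty list, the same operator subsumes filtering, which is one reason combining mappers with standard relational operators admits algebraic rewriting.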
Title: A DATABASE MANAGEMENT SYSTEM KERNEL FOR IMAGE COLLECTIONS Author(s): Liana Stanescu, Dumitru Burdescu, Cosmin Stoica and Marius Brezovan Abstract: The paper presents a single-user, relational DBMS kernel for managing visual information. The functions of this multimedia DBMS are: creating/deleting databases and tables, adding constraints, inserting, updating and deleting records, text-based querying and content-based visual querying using color characteristics. The originality of this DBMS lies in two aspects: the first is the Image data type, which permits binary storage of images and of the extracted color information, represented by a color histogram with a maximum of 166 colors; the second is the visual interface for building content-based visual queries using color characteristics, which generates a modified SELECT command that is sent to the kernel for execution. The advantages of this DBMS are its low cost and ease of use, making it well suited to medical or art domains where large amounts of visual information are used. Title: MEDIATION FRAMEWORK FOR ENTERPRISE INFORMATION SYSTEM INFRASTRUCTURES: APPLICATION-DRIVEN APPROACH Author(s): Leonid Kalinichenko, Dmitry Briukhov, Dmitry Martynov, Nikolay Skvortsov and Sergey Stupnikov Abstract: This position paper provides a short summary of the results obtained so far on a mediation-based, application-driven approach to EIS development. This approach has significant advantages over the conventional, information-resource-driven approach. Basic methods for the application-driven approach are discussed, including methods for synthesizing canonical information models that unify the languages of various kinds of heterogeneous information sources in one extensible model, methods for identifying sources relevant to an application and registering them at the mediator using GLAV techniques, and methods for reconciling ontological contexts. 
The methodology of EIS application development according to this approach is briefly discussed, emphasizing the importance of a mediator consolidation phase carried out by the respective community, the formulation of application problems in the canonical model, and their rewriting into requests to the registered information sources. The technique presented is planned to be used in various EIS and information systems. The work reported was partially supported by RFBR grants 06-07-08072 and 06-07-89188.
Title: AN INSERTION STRATEGY FOR A TWO-DIMENSIONAL SPATIAL ACCESS METHOD
Author(s): Wendy Osborn and Ken Barker
Abstract: We present work in progress on the 2DR-tree, a novel approach to accessing spatial data. The 2DR-tree uses nodes that have the same dimensionality as the data space. Therefore, all relationships between objects are preserved, and different searching strategies, such as binary and greedy, are supported. A validity test ensures that every node preserves the spatial relationships among its objects. The proposed insertion strategy adds a new object by recursively partitioning the space occupied by a set of objects. A performance evaluation shows the advantages of the 2DR-tree and identifies issues for current and future consideration.
Title: TIMING BEHAVIOR ANOMALY DETECTION IN ENTERPRISE INFORMATION SYSTEMS
Author(s): Matthias Rohr, Simon Giesecke and Wilhelm Hasselbring
Abstract: Business-critical enterprise information systems (EIS) have to satisfy high dependability requirements. Automatic failure detection and diagnosis are needed in order to achieve the required availability. A major cause of failures in EIS is software faults in the application layer. In this paper we propose to use anomaly detection to diagnose failures in the application layer of EIS. Anomaly detection aims to identify “unusual” system behavior in monitoring data. Such anomalies can be valuable indicators of availability or security problems and can support failure diagnosis.
In this paper we outline the basic principles of anomaly detection, present the state of the art, and discuss typical application challenges. We propose a new approach to anomaly detection in Enterprise Information Systems that addresses some of these challenges.
Area 2 - Artificial Intelligence and Decision Support Systems
Title: SPECIALIST KNOWLEDGE DIFFUSION
Author(s): Mounir Kehal
Abstract: The formats in which innovative organizations handle knowledge can be complex, as knowledge is assumed to be one of the main variables distinguishing such organizations and enabling them to survive in a marketplace. Their main asset is the knowledge of certain highly motivated individuals who appear to share a common vision for the continuity of the organization. Satellite technology is a good example. From the early pioneers to modern mini/micro satellites and nanotechnologies, one can see a large amount of risk at every stage in the development of a satellite technology: from inception to design, from design to delivery, from lessons learnt from failures to those learnt from successes, and from revisions to the design and development of successful satellites. In their groundbreaking book The Knowledge Creating Company (1995), Nonaka et al. laid out a model of how organisational knowledge is created through four conversion processes: tacit to explicit (externalisation), explicit to tacit (internalisation), tacit to tacit (socialisation), and explicit to explicit (combination). Key to this model is the authors’ assertion that none of these processes is individually sufficient; all must be present to fuel one another. However, such knowledge creation and diffusion was long thought to manifest and apply only within large organizations and conglomerates. Observational (questionnaire-based) and systematic (corpus-based) studies, through case-study elicitation experiments and the analysis of specialist text, can support research in knowledge management.
Organizations that manufacture, use, and maintain satellites depend on a continuous exchange of ideas, criticisms, and congratulations. Such organisations, from NASA to SSTL, can be regarded as a class of knowledge-based organizations. Through selective use of the approaches stated above, and concise reporting for the purposes of this paper, we show how knowledge flows in a finite organisational setting and how it can be modelled through specialist text. We aim to describe our understanding of the nature of a specialist organization in a quantifiable manner, together with the constructs of a knowledge management audit conducted through an observational study within a UK-based satellite manufacturing SME. We have examined how knowledge flows and is adapted between commercial and research types of corpora. One of the major results of the observational study was that knowledge diffusion is paramount within the lifetime of an organization and can be supported by information systems, leading us to investigate empirically how knowledge diffusion takes place. Our analysis shows that research papers (created within an educational institution) and commercial documents (created within spin-offs of that higher-education institution) can be distinguished on the basis of single-word and compound terms. These two lexical signatures show the potential for identifying points of mutual interest in the diffusion of knowledge from the research institution to the commercialization process, and thus to applications within a domain.
Title: LSGENSYS - AN INTEGRATED SYSTEM FOR PATTERN RECOGNITION AND SUMMARISATION OF MULTI-BAND SATELLITE IMAGES
Author(s): Hema Nair
Abstract: This paper presents a new system developed in Java® for pattern recognition and pattern summarisation in multi-band (RGB) satellite images. The system design is described in some detail.
Patterns such as land, islands, water bodies, rivers, and fires in remote-sensed images are extracted and summarised in linguistic terms using fuzzy sets. Some elements of supervised classification are introduced in the system to assist in the development of linguistic summaries. Results of testing the system on the analysis and summarisation of patterns in SPOT MS and LANDSAT images are also discussed.
Title: A NEW FUZZY LOGIC CONTROLLER FOR TRADING ON THE STOCK MARKET
Author(s): Francesco Maria Raimondi, Salvatore Pennacchio, Pietro Via and Marianna Mulè
Abstract: A common problem that financial operators meet in their work is making the right operational choices on the stock market at the right moment. Once the market to act on has been chosen, the financial operator has to decide when and how to operate on it in order to achieve a profit. The problem we deal with is the design of an automatic decision system for the management of long positions in a bull market. First, a trading system (TS) is implemented and its features are pointed out. Then a fuzzy logic implementation of the TS (FTS) is introduced. The fuzzy system is optimized by genetic algorithms. Finally, the two implementations of the trading system are compared using several performance indexes.
Title: EXPLANATION GENERATION IN BUSINESS PERFORMANCE MODELS - WITH A CASE STUDY IN COMPETITION BENCHMARKING
Author(s): Hennie Daniels and Emiel Caron
Abstract: In this paper, we describe an extension of the methodology for explanation generation in financial knowledge-based systems. This offers the possibility to automatically generate explanations and diagnostics to support business decision tasks. The central goal is the identification of the specific knowledge structures and reasoning methods required to construct computerized explanations from financial data and business models.
A multi-step look-ahead algorithm is proposed that deals with so-called cancelling-out effects, a common phenomenon in financial data sets. The extended methodology was tested in a case study conducted for Statistics Netherlands involving the comparison of financial figures of firms in the Dutch retail branch. The analysis is performed with a diagnostic software application that implements our theory of explanation. A comparison of the results of the original method with those of the extended method shows that the analysis clearly improves when cancelling-out effects are present in the data.
Title: A CONNECTIONIST APPROACH IN BAYESIAN CLASSIFICATION
Author(s): Luminita State, Catalina Cocianu, Panayiotis Vlamos and Viorica Stefanescu
Abstract: The research reported in the paper aims at the development of a suitable neural architecture for implementing the Bayesian procedure in solving pattern recognition problems. The proposed neural system is based on an inhibitive competition installed among the hidden neurons of the computation layer. The local memories of the hidden neurons are computed adaptively according to an estimation model of the parameters of the Bayesian classifier. The paper also reports a series of qualitative attempts at analyzing the behavior of a new procedure for learning the parameters of an HMM by modeling different types of stochastic dependencies on the space of states corresponding to the underlying finite automaton. The approach aims at the development of new methods for processing image and speech signals in pattern recognition problems. Basically, the attempts are stated in terms of weighting processes and deterministic/non-deterministic Bayesian procedures.
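The cancelling-out effects mentioned in the Daniels and Caron abstract above arise when component changes offset each other, so a single-level comparison sees no difference and a drill-down (look-ahead) step is needed. A toy numeric illustration, with numbers and function names entirely our own:

```python
# Illustrative sketch of a "cancelling-out" effect in a business model:
# profit = revenue - cost. Both components change substantially, but the
# changes offset, so a top-level comparison shows nothing and one
# look-ahead step into the components is needed to explain it.
# The data and function names are hypothetical, not from the paper.

def explain(measure, reference, actual, components):
    """One look-ahead step: if the top-level difference is (near) zero
    but components differ, the offsetting changes are the explanation."""
    top_diff = actual[measure] - reference[measure]
    detail = {c: actual[c] - reference[c] for c in components}
    hidden = abs(top_diff) < 1e-9 and any(abs(d) > 0 for d in detail.values())
    return top_diff, detail, hidden

reference = {"profit": 20, "revenue": 100, "cost": 80}
actual    = {"profit": 20, "revenue": 120, "cost": 100}
top, detail, hidden = explain("profit", reference, actual, ["revenue", "cost"])
print(top, detail, hidden)   # 0 {'revenue': 20, 'cost': 20} True
```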
Title: ARCHITECTURAL DESIGN VIA DECLARATIVE PROGRAMMING
Author(s): Luís Moniz Pereira and Ruben Duarte Viegas
Abstract: Problem solving by declarative theory building can be an extremely effective method for porting concepts and knowledge from the problem domain to the solution domain, allowing the implementation of complete procedural constructs and enabling the production of sound solutions. If conveniently expressed, such a theory may be directly coded into a declarative programming language. To wit, if expressed within the paradigm of logic programming, the theory itself represents the very procedure for obtaining its desired solutions. The illustrative case study considered here is the derivation of architectural layouts from an adjacency graph: given a list of imposed adjacencies among a set of planar rectangular spaces (represented by the graph's nodes), the goal is to generate all permissible layout schemas on the plane that respect the adjacencies, and to determine the minimal modular dimensions of such a set of spaces. A further aim of this article is to show the guidelines of an effective translation into Logic Programming of the theory constructed to solve the proposed problem, making use of the combined power of two different semantics and their implementations, namely the Well-Founded Semantics and the Stable Model Semantics.
Title: A MULTI-AGENT ARCHITECTURE FOR ENVIRONMENTAL IMPACT ASSESSMENT: INFORMATION FUSION, DATA MINING AND DECISION MAKING
Author(s): Marina V. Sokolova and Antonio Fernández-Caballero
Abstract: The paper introduces an approach to creating a multi-agent architecture for assessing the environmental impact upon human health. As indicators of the environmental impact we take water pollution, indexes of traffic and industrial activity, wastes, and solar radiation; as the human health indicator we take morbidity. All the data come from multiple heterogeneous data repositories. The general structure of the architecture is presented.
The proposed system is logically and functionally divided into three layers, solving the tasks of information fusion, pattern discovery through data mining, and decision support, respectively. The main steps of data processing and maintenance, and the principles of data fusion used in the system, are discussed. The discovered patterns will be used as a foundation for real-time decision making, which should be of great importance for adequate and effective management by responsible municipal and state government authorities.
Title: HEAVYWEIGHT ONTOLOGY MATCHING - A METHOD AND A TOOL BASED ON THE CONCEPTUAL GRAPHS MODEL
Author(s): Frédéric Furst and Francky Trichet
Abstract: Managing multiple ontologies is now a core question in most applications that require semantic interoperability. The Semantic Web is surely the most significant example: the current challenge is not to design, develop and deploy domain ontologies but to define semantic correspondences among multiple ontologies covering overlapping domains. In this paper, we introduce a new approach to ontology matching named axiom-based ontology matching. As this approach is founded on the use of axioms, it is mainly dedicated to heavyweight ontologies (a heavyweight ontology is a lightweight ontology, i.e. an ontology simply based on a hierarchy of concepts and a hierarchy of relations, enriched with axioms used to fix the semantic interpretation of concepts and relations), but it can also be applied to lightweight ontologies as a complement to current techniques based on the analysis of natural language expressions, instances and/or the taxonomical structures of ontologies. This new matching paradigm is defined in the context of the Conceptual Graphs model (CG), where the projection (i.e.
the main operator for reasoning with CGs, which corresponds to graph homomorphism) is used as a means to semantically match the concepts and relations of two ontologies through the explicit representation of the axioms in terms of conceptual graphs. We also introduce an ontology of representation dedicated to reasoning about heavyweight ontologies at the meta-level.
Title: NEW LOCAL DIVERSIFICATION TECHNIQUES FOR THE FLEXIBLE JOB SHOP PROBLEM WITH A MULTI-AGENT APPROACH
Author(s): Meriem Ennigrou and Khaled Ghédira
Abstract: The Flexible Job Shop problem is among the hardest scheduling problems. It generalizes the classical Job Shop problem in that each operation can be processed by a set of resources and has a processing time depending on the resource used. The objective is to assign and sequence the operations on the resources so that they are processed in the shortest time. In our previous work, we proposed two multi-agent approaches based on the Tabu Search (TS) meta-heuristic. Depending on the location of the optimisation core in the system, we distinguished between the global optimisation approach, where the TS has a global view of the system, and the local optimisation approach (FJS MATSLO), where the optimisation is distributed among a collection of agents, each of which has its own local view. In this paper, we first propose new diversification techniques for the second approach in order to obtain better results, and second, we propose a new, promising approach combining the two. Experimental results are also presented in order to evaluate these new techniques.
Title: RECURRENT NEURAL NETWORKS APPROACH TO THE DETECTION OF SQL ATTACKS
Author(s): Jaroslaw Skaruz, Franciszek Seredynski and Pascal Bouvry
Abstract: In this paper we present a new approach to detecting SQL attacks, based on neural networks. SQL attacks are attacks that exploit SQL statements to be carried out.
The problem of detecting this class of attacks is transformed into a time series prediction problem. SQL queries are used as a source of events in a protected environment. To differentiate between normal SQL queries and those sent by an attacker, we divide SQL statements into tokens and pass them to our detection system, which predicts the next token, taking into account previously seen tokens. In the learning phase, tokens are passed to a recurrent neural network (RNN) trained by the backpropagation-through-time (BPTT) algorithm. The training data are shifted forward in time by one token with respect to the input. The purpose of the testing phase is to predict the next token in the sequence. All experiments were conducted on Jordan and Elman networks using data gathered from a PHP Nuke portal. Experimental results show that the Jordan network outperforms the Elman network, correctly predicting queries of length up to ten.
Title: SOLVING THE MULTI-OBJECTIVE MIXED MODEL ASSEMBLY LINE PROBLEM USING A FUZZY MULTI-OBJECTIVE LINEAR PROGRAM
Author(s): Iraj Mahdavi, Babak Javadi and S. S. Sabet
Abstract: This paper develops a fuzzy multi-objective linear programming (FMOLP) model for solving the multi-objective mixed-model assembly line problem. In practice, the vagueness and imprecision of the goals, constraints and parameters of this problem complicate decision making. The proposed model attempts to simultaneously minimize total utility work cost, total production rate variation cost, and total setup cost. An asymmetric fuzzy decision-making technique is applied to enable the decision-maker to assign different weights to the various criteria in a real industrial environment. The model is explained by an illustrative example.
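The Skaruz et al. abstract above trains an RNN on SQL token streams whose targets are the inputs shifted forward by one token. The data preparation can be sketched as follows; the tokenizer is a crude editorial stand-in, not the one used in the paper:

```python
# Sketch of preparing next-token training pairs over SQL queries, as in
# the Skaruz et al. abstract: the target sequence is the input token
# stream shifted forward by one position (teacher forcing).
# The tokenization scheme below is our own guess, not the paper's.
import re

def tokenize(sql):
    """Split an SQL statement into keyword/identifier, number, and
    punctuation tokens (uppercased for normalization)."""
    return re.findall(r"[A-Za-z_]+|\d+|[^\sA-Za-z_\d]", sql.upper())

def training_pairs(sql):
    tokens = tokenize(sql)
    # input at step t, target = token at step t+1
    return list(zip(tokens[:-1], tokens[1:]))

print(tokenize("SELECT name FROM users WHERE id=1"))
# ['SELECT', 'NAME', 'FROM', 'USERS', 'WHERE', 'ID', '=', '1']
print(training_pairs("SELECT 1"))
# [('SELECT', '1')]
```

An anomalous query then shows up as a run of tokens the trained network fails to predict.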
Title: INTELLIGENT DATA BASES FOR AN EFFECTIVE INDUSTRIAL MAINTENANCE MANAGEMENT - THE AUTONOMOUS AGENTS CONWORM OF CONRAD SYSTEM
Author(s): Benlazaar Sid Ahmed Nadjib and Belbachir Hafida
Abstract: The goal of this paper is to introduce a model of information-system management dedicated to industrial maintenance, represented by an "intelligent" database in which autonomous agents transform information into strategic knowledge with respect to space and time. Their first role is to analyze information entering the system through a "reversed feedback" loop and to propose a control that protects the system against unrepresentative or erroneous information. Their second role is to propose, at the output, groupings of strongly correlated strategic variables and, through these, the grouping of management tasks into "management categories". Lastly, we describe the information system, which we call CONRAD, and the autonomous agents, which we call ConWorm, dedicated to the transformation and control tasks.
Title: THE IMPORTANCE OF AGGREGATION OPERATOR CHARACTERISTICS IN MARKETING RESEARCH
Author(s): Kris Brijs, Benoît Depaire, Koen Vanhoof, Tom Brijs and Geert Wets
Abstract: Our paper demonstrates that aggregation operator characteristics are a promising avenue for applied fuzzy set research. It is shown by means of two cases that these characteristics are particularly valuable as proxies for hard-to-measure domain knowledge within the fields of customer satisfaction and country-of-origin research. More specifically, the uninorm's neutral element can be identified as a useful asset for representing customers' expectations, while the OWA operator's orness contributes to the quantification of consumers' degree of optimism when evaluating products coming from abroad.
Both theoretical and empirical validation is provided to support the basic assumption that aggregation operator characteristics enable us to obtain superior consumer information with substantial managerial relevance.
Title: IMPLEMENTING PRIORITIZED REASONING IN LOGIC PROGRAMMING
Author(s): Luciano Caroprese, Irina Trubitsyna and Ester Zumpano
Abstract: Prioritized reasoning is an important extension of logic programming and a powerful tool for expressing desiderata on program solutions in order to identify the best ones. This paper discusses the implementation of the case of a preference relation among atoms and introduces a system, called CHOPPER, realizing the recently proposed choice optimization. CHOPPER supports the ASO_Ch and ASO_FCh semantics, based on the concept of a choice as a set of preference rules describing common choice options in different contexts, and the ASO semantics, which evaluates each preference rule separately. The paper outlines the architecture of the system and discusses aspects of the choice identification strategies and of the feasibility of choice options. Moreover, a comparison of the proposed approach with other implementation approaches proposed in the literature is provided.
Title: AN EMPIRICAL STUDY OF SIGNIFICANT VARIABLES FOR TRADING STRATEGIES
Author(s): M. Delgado Calvo-Flores, J. F. Núñez Negrillo, E. Gibaja Galindo and C. Molina Férnandez
Abstract: Nowadays, stock market investment is governed by investment strategies. An investment strategy consists in following a fixed philosophy over a period of time, and it can have a scientific, statistical or merely heuristic basis. No method currently exists that is capable of measuring, objectively or realistically, how good an investment strategy is.
Using Artificial Intelligence and Data Mining tools, we have studied the different investment strategies of an important Spanish management agency and extracted a series of significant characteristics to describe them. Our objective is to evaluate and compare investment strategies so as to be able to use those that produce a peak return on our investment.
Title: MOBILE DECISION MAKING AND KNOWLEDGE MANAGEMENT: SUPPORTING GEOARCHAEOLOGISTS IN THE FIELD
Author(s): Martin Blunn, Julie Cowie, David Cairns, Clare Wilson and Donald Davidson
Abstract: Archaeologists have a professional responsibility to record all possible information about an excavated site, of which soil analysis is one important but frequently marginalised aspect. This paper introduces SASSA (Soil Analysis Support System for Archaeologists), whose primary goal is to promote the wider use of soil analysis techniques through a selection of web-based software tools. A description is given of the field tool developed, which both supports the comprehensive recording of soil-related archaeological data and provides a means of inferring information about the site under investigation. Insight is gained through the user evaluating numerous decision trees relating to pertinent archaeological questions. Whilst the field tool is capable of working in isolation, it offers a superior experience when operated in unison with two additional software tools: a Wiki and a Forum. A brief discussion of the use of the Wiki application within the SASSA project is also presented.
Title: THE RETRIEVAL PROCESS IN THE SAFRS SYSTEM WITH THE CASE-BASED REASONING APPROACH
Author(s): Souad Demigha
Abstract: The paper presents the retrieval process in the SAFRS system (a system supporting the training of radiologists-senologists) based on the case-based reasoning (CBR) approach, which is adopted to represent the experience of expert radiologists-senologists in the form of cases, and modelled with the MAP concept. The retrieval process relies on a case-based reasoning procedure for retrieving similar cases, formalized using a MAP: a reuse methodology named the retrieval MAP. The MAP model is an intentional representation system based on the concepts of intention and strategy. The concept of intention (or goal) captures the objective to be achieved; a strategy is the manner in which an intention is achieved. The retrieval process with the MAP is a multi-step, multi-algorithm process that allows similar cases to be retrieved in various modes and strategies. It is carried out according to three complex strategies: the global retrieval strategy, the elementary retrieval strategy, and the mixed retrieval strategy. We first introduce the architecture and working principles of the system, then briefly describe the case representation model and describe the retrieval process model in detail. Finally, our conclusions and future plans are presented.
Title: INTEGRATION OF A FUZZY SYSTEM AND AN INFORMATION SYSTEM FOR THE TERRITORIAL UNITS RANKING
Author(s): Miroslav Hudec and Mirko Vujošević
Abstract: For the ranking and classification of territorial units, up-to-date and precise data as well as a ranking tool are needed. The advantage of fuzzy systems (FS) in these tasks lies in defining a problem in linguistic terms. The disadvantage lies in the universality and complexity of fuzzy systems for end users.
This disadvantage stems from the use of FS to solve a wide range of different tasks. The advantages, disadvantages and constraints of FS are analyzed. The aim of this paper is to present the information systems about the territorial units of the Slovak Republic and the possibilities of integrating a fuzzy system for ranking territorial units with these information systems. This approach enables the creation of the model, the import of input data, the processing of the rules, and the presentation of the solution in a usable and understandable form; in this case the solution is also presented on a thematic map.
Title: DYNAMIC WEB DOCUMENT CLASSIFICATION IN E-CRM USING NEURO-FUZZY APPROACH
Author(s): Iraj Mahdavi, Babak Shirazi, Namjae Cho, Navid Sahebjamnia and Meysam Aminzadeh
Abstract: Internet technology enables companies to capture new customers, track their performance and online behavior, and customize communications, products, services, and prices. The analysis of customers and customer interactions for electronic customer relationship management (e-CRM) can be performed by data mining (DM), optimization methods, or combined approaches. Web mining is defined as the discovery and analysis of useful information from the World Wide Web (WWW). Web mining techniques include the analysis of user access patterns and web document clustering and classification. Most existing classification methods are based on a model that assumes a fixed-size collection of keywords or key terms with a predefined set of categories. This assumption is not realistic in large and diverse document collections such as the World Wide Web. We propose a new approach for obtaining category-keyword sets with an unknown number of categories. On the basis of a training set of Web documents, the approach is used to classify test documents into a set of initial categories.
Finally, evolutionary rules are applied to these new sets of keywords and training documents to update the category-keyword sets and realize dynamic document classification.
Title: POLICY-BASED AGENT GRID COLLABORATION FOR C-COMMERCE
Author(s): Maoguang Wang, Ke Zhang, Zhongzhi Shi and Lida Xu
Abstract: Collaborative commerce is expected to effectively coordinate production and work across organizations, emphasizing comprehensive information collaboration at all levels among participants. Our research goal is to establish an intelligent agent grid platform to share resources, eliminate information islands, and provide intelligent collaboration by making full use of agent technologies. This paper studies the complex collaboration model and specifies the goal policy, utility policy, action policy, etc. for the collaborative commerce process. It then studies the policy-based hierarchical collaboration scheme, from the behaviour layer and agent layer to the society collaboration layer. Based on this policy-driven design, we develop an intelligent agent grid platform, AGrIP, to support C-Commerce.
Title: MODELLING HUMAN REASONING IN INTELLIGENT DECISION SUPPORT SYSTEMS
Author(s): V. N. Vagin and A. P. Yeremeyev
Abstract: Methods of analogy-based solution search in intelligent decision support systems are considered. Special attention is given to methods based on structural analogy that use the analogy of properties and relations and take the context into account. The problem of concept generalization is also considered. Several algorithms based on rough set theory are compared, and the possibility of using them for the generalization of data stored in real-world databases is tested.
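The rough set theory mentioned in the Vagin and Yeremeyev abstract above approximates a target concept by equivalence classes of objects that are indistinguishable on the chosen attributes. A minimal editorial sketch with toy data of our own (not from the paper):

```python
# Sketch of rough-set lower and upper approximations: objects with equal
# values on the chosen attributes form equivalence classes; a concept is
# approximated from below (classes fully inside it) and from above
# (classes that merely overlap it). Toy data are our own illustration.

def partition(objects, attrs):
    """Group objects into equivalence classes by attribute values."""
    classes = {}
    for name, row in objects.items():
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, set()).add(name)
    return list(classes.values())

def approximations(objects, attrs, concept):
    lower, upper = set(), set()
    for eq in partition(objects, attrs):
        if eq <= concept:
            lower |= eq      # certainly in the concept
        if eq & concept:
            upper |= eq      # possibly in the concept
    return lower, upper

objects = {
    "o1": {"color": "red",  "size": "big"},
    "o2": {"color": "red",  "size": "big"},
    "o3": {"color": "blue", "size": "big"},
}
concept = {"o1", "o3"}   # o1 and o2 are indistinguishable on these attrs
lower, upper = approximations(objects, ["color", "size"], concept)
print(sorted(lower), sorted(upper))   # ['o3'] ['o1', 'o2', 'o3']
```

The gap between the two approximations (here o1 and o2) is exactly the boundary region where generalization is uncertain.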
Title: AGENT-BASED APPROACH FOR ELECTRICITY DISTRIBUTION SYSTEMS
Author(s): Kimmo Salmenjoki, Yaroslav Tsaruk, Vagan Terziyan and Marko Viitala
Abstract: This paper describes how semantic web and agent technologies could be used to enhance electricity distribution systems. The paper starts with a brief overview of the functioning of electricity distribution systems. The introduced approaches aim at improving the functionality of electricity distribution network systems and assisting experts by automating routine tasks in daily operations. We focus on the GUN (Global Understanding Environment) framework proposed by the IOG (Industrial Ontologies Group) for intelligent services on industrial resources. The resources and infrastructure of the electricity power network are distributed. The interoperability, automation and integration features of GUN allow us to join and arrange cooperation among the heterogeneous resources of the electricity network domain. The interaction and cooperation among resources on the GUN platform are realized via resource agents. Based on discussions held with the domain expert, we also decided to use the agent approach for the automated collection of additional information from heterogeneous resources and to integrate this information into the operator interface (Dashboard). The context information supports the expert in the decision-making process.
Title: NAMED ENTITY RECOGNITION IN BIOMEDICAL LITERATURE USING TWO-LAYER SUPPORT VECTOR MACHINES
Author(s): Feng Liu, Yifei Chen and Bernard Manderick
Abstract: In this paper, we propose a named entity recognition system for biomedical literature using two-layer support vector machines. In addition, we employ a post-processing module, called the boundary check module, to eliminate certain boundary errors, which leads to improved system performance. Our system does not make use of any external lexical resources and hence is a fairly simple system.
Furthermore, with carefully designed features and the introduction of a second layer, our system can recognize named entities in biomedical literature with fairly high accuracy, achieving a precision of 83.5%, a recall of 80.8% and a balanced Fβ=1 score of 82.1%, approximately state-of-the-art performance at the moment.
Title: DEVELOPMENT OF A DECISION SUPPORT SYSTEM FOR COMPUTER AIDED PROCESS PLANNING SYSTEM
Author(s): Manish Kumar
Abstract: A decision support system for a Computer Aided Process Planning (CAPP) system has been designed, developed and implemented. The need for a decision support system for CAPP arises specifically from the poorly structured stages in process planning, such as the determination of blank size, setup planning, operation planning within each setup, selection of machine tools, and calculation of machining time. The Decision Support System (DSS) is capable of supporting operations such as turning, facing, tapering, arcing, grooving, filleting, chamfering, knurling and threading. The proposed system is capable of generating process plans for different types of rotational parts.
Title: MULTI-AGENT BUILDING CONTROL IN SHARED ENVIRONMENT
Author(s): Bing Qiao, Kecheng Liu and Chris Guy
Abstract: Multi-agent systems have been adopted to build intelligent environments in recent years. In previous projects, it was often claimed that energy efficiency and occupants’ comfort were the most important factors for evaluating the performance of a modern work environment, and that multi-agent systems presented a viable solution to handling the complexity of a dynamic building environment. Experiments have been carried out to gather expertise from both the building construction and agent technology communities in order to create such a work/residential environment, where energy efficiency is achieved without compromising occupants’ comfort.
While much research has made significant advances in some aspects, it has failed to provide a satisfactory system or model for a “shared environment”, which, in our opinion, is the main obstacle to the development, or even the idea, of multi-agent building control systems in the building construction industry. This paper introduces an ongoing project on multi-agent building control, which aims to achieve both energy efficiency and occupants’ comfort by using learning mechanisms that meet the requirements of personal profiles and preferences in a shared environment.
Title: HOLONIC ARCHITECTURE FOR A MULTIAGENT-BASED SIMULATION TOOL
Author(s): Nancy Ruiz, Adriana Giret and Vicente Botti
Abstract: Following an investigation of user requirements for 21st-century manufacturing systems, the holonic approach has been applied successfully in the manufacturing area. In the last few years, Holonic Manufacturing Systems (HMS) have been used to improve areas such as control systems, monitoring and diagnosis systems, and Computer Integrated Manufacturing architectures. This paper presents an application of the holonic approach together with Multi-agent Systems (MAS) to develop a simulation tool for manufacturing systems. The architecture of this tool is organized around the functionalities that support the manufacturing model created by the user. This proposal allows the user to take advantage of these two approaches to simulate a manufacturing environment according to the manufacturing requirements.
Title: A QUALITATIVE EXPERT KNOWLEDGE APPROACH TO RENDERING OPTIMIZATION
Author(s): D. Vallejo-Fernandez, C. Gonzalez-Morcillo and L. Jimenez-Linares
Abstract: The rendering process allows the developer to obtain a raster 2D image from the definition of a 3D scene. This process is computationally intensive when the source scene is complex or high-quality images are required.
Therefore, a lot of time is spent and many computational resources are needed. In this paper, a novel approach called QUEKARO (standing for a QUalitative Expert Knowledge Approach to Rendering Optimization) is presented for adjusting some relevant parameters involved in the rendering process by using expert systems. This way, the developer can obtain optimized results which reduce the time spent in the rendering process and, in most cases, do not affect the final quality of the raster 2D image. These results are presented in the results section, in which different optimizations are studied. As we discuss in the final section of this paper, the use of expert systems in the rendering process constitutes a novel approach that drastically reduces the resources used and provides a highly scalable system. Using these arguments, we justify the inclusion of expert systems in this area and discuss future work. Title: APPLYING INTEGRATED EXPERT SYSTEM IN NETWORK MANAGEMENT Author(s): Antonio Martín, Carlos León, Juan I. Guerrero and Francisco J. Molina Abstract: The management of modern telecommunications networks is becoming an increasingly demanding task that is difficult to implement using traditional methods, even assisted by conventional automation techniques. Integration of advanced Artificial Intelligence (AI) technology into existing and future network management systems may resolve some of the difficulties. The goal of this research is to develop an integrated expert system for network management applications. The emphasis of this research is to provide a broad view of intelligent systems by capturing the knowledge of human experts and using a modular approach that integrates the knowledge management and network resource specifications. For this purpose, an extension of the OSI management framework specification language has been added and investigated. 
The advantage of integrating both is that a large problem can be broken down into smaller, manageable sub-problems/modules. Through modification of existing resources or addition of new resources, the integrated expert system can be conveniently expanded in the future to cover the latest research findings and updated standards of network communications. Title: AN ATTITUDE BASED MODELING OF AGENTS IN COALITION Author(s): Madhu Goyal Abstract: One of the main underpinnings of the multi-agent systems community is the question of how and why autonomous agents should cooperate with one another. Several formal and computational models of cooperative work or coalition have been developed and are used within multi-agent systems research. The coalition facilitates the achievement of cooperation among different agents. In this paper, a mental construct called attitude is proposed and its significance in coalition formation in a dynamic fire world is discussed. This paper presents ABCAS (Attitude Based Coalition Agent System), which shows that coalitions in multi-agent systems are an effective way of dealing with the complexity of the fire world. It shows that coalitions explore the attitudes and behaviors that help agents to achieve goals that cannot be achieved alone, or to maximize net group utility. Title: IMPROVING CONTENT-ORIENTED XML RETRIEVAL BY APPLYING STRUCTURAL PATTERNS Author(s): Philipp Dopichaj Abstract: XML is the perfect format for storing (mostly) textual documents in a knowledge management system; its flexibility enables users to store both highly structured data and free text in the same document. For knowledge management, it is important to be able to search the free-text parts effectively; users need to find the information that helps them solve their problem without having to wade through much information that is not relevant to their problem. 
Content-oriented XML retrieval addresses this challenge: in contrast to traditional information retrieval, documents are not considered atomic units; that is, elements such as sections or paragraphs can be returned. One implication of this is that results can overlap (for example, a paragraph and the surrounding section). Although overlapping results are undesirable in the final retrieval result as presented to the user, they can help to improve the quality of the final result: we take advantage of overlaps by applying patterns to small subtrees of the retrieval result (result contexts); matching patterns adjust the retrieval status values of the involved nodes in order to promote the best results. We demonstrate on the INEX 2005 test collection that this postprocessing can lead to a significant improvement in retrieval quality. Title: PERSONAL KNOWLEDGE MANAGEMENT AS AN ICEBREAKER: MOTIVATING CONTRIBUTIONS TO KNOWLEDGE MANAGEMENT SYSTEMS Author(s): Harald Kjellin and Terese Stenfors-Hayes Abstract: Personal Knowledge Management (PKM) includes a set of techniques that individuals can use to acquire, create and share knowledge without relying on technical or financial support from the employer. The purpose of this study is to find indications of detectable value from experimental implementations of PKM in a number of organisations. The study includes 75 implementations of PKM in 75 different organisations, and evaluations of them all. The results from interviewing all employees that participated in the study showed that: 1) the implementation of PKM does not require extensive resources; 2) the effects can be measured at a personal level; and 3) the employees positively assessed the value of the descriptions of personalised knowledge. Title: PAIRWISE COMPARISONS, INCOMPARABILITY AND PARTIAL ORDERS Author(s): Ryszard Janicki Abstract: A new approach to "Pairwise Comparisons" (Saaty, 1977) is presented. 
We start with an abstract model based on the concept of partial order (as originally suggested in (Janicki and Koczkodaj, 1996)) instead of a numerical scale. Numbers are added later, if quantitative values can be assigned to the attributes. Title: GU METRIC - A NEW FEATURE SELECTION ALGORITHM FOR TEXT CATEGORIZATION Author(s): Gulden Uchyigit and Keith Clark Abstract: To improve the scalability of text categorisation and reduce over-fitting, it is desirable to reduce the number of words used for categorisation. Further, it is desirable to achieve such a goal automatically, without sacrificing categorisation accuracy. Such techniques are known as automatic feature selection methods. Typically, each word is assigned a weight (using a word scoring metric) and the top-scoring words are then used to describe the user's profile. The choice of the word scoring metric is important for the overall performance of the system. There are several popular word scoring metrics which have been employed in the literature. In this paper we present these word scoring metrics, along with two other metrics which are employed in feature selection in the context of gene categorisation but have not previously been employed in textual domains. We also present a novel feature selection metric, the GU metric. A detailed comparative evaluation of all the methods is given, which shows that our new algorithm outperforms or compares favourably with the older algorithms. Title: CONTEXT-BASED INTELLIGENT EDUCATIONAL SYSTEM FOR CAR DRIVERS Author(s): Juliette Brezillon, Patrick Brezillon, Thierry Artieres and Charles Tijus Abstract: Although initial training is concluded by a driving license, such learning is insufficient because new drivers do not know how to contextualize the learned procedures into effective practices. 
Our goal is to improve drivers' situation awareness, that is, their ability to perceive events in the environment and to project their status into the near future. To achieve this goal, we aim to build an educational system for drivers, which helps them become aware of their driving errors. This educational system aims to identify and correct drivers' shortcomings. In this paper, we discuss the reasons for combining two approaches: a local approach (from cognitive science) and a global approach (from machine learning), and we show the key role that context plays in the driving activity. Title: CREATING A BILINGUAL PSYCHOLOGY LEXICON FOR CROSS LINGUAL QUESTION ANSWERING, A PILOT STUDY Author(s): Andrea Andrenucci Abstract: This paper introduces a pilot study aimed at investigating the extraction of word relations from a sample of a medical parallel corpus in the field of psychology. Word relations are extracted in order to create a bilingual lexicon for cross lingual question answering between Swedish and English. Four different variants of the sample corpus were utilized: word inflections with and without POS tagging, and lemmas with and without POS tagging. The purpose of the study was to analyze the quality of the word relations obtained from the different versions of the corpus and to understand which version was most suitable for extracting a bilingual lexicon in the field of psychology. The word alignments were evaluated with the help of reference data (gold standards), which were constructed before the word alignment process. Title: ENERGY MANAGEMENT INFORMATION SYSTEMS: AN EXPLORATORY STUDY OF IMPLEMENTATIONS USING ADAPTIVE STRUCTURATION THEORY Author(s): Orla Kirwan, Willie Golden and Padraig Molloy Abstract: This research focuses on the implementation of an Information System (IS), more specifically a building energy management system (BEMS), within several organisations. 
One of the EU's 7th Framework Programme's (FP7) objectives is to "transform the current fossil fuel based energy system into a more sustainable one combined with enhanced energy efficiency (EE)". This research is concerned with the use of information systems to achieve the latter of these objectives: enhanced energy efficiency. The research is being undertaken using a multi-methodological approach incorporating case study methodology and grounded theory. Adaptive structuration theory (AST) will provide a conceptual model that will help to capture the longitudinal change process. A modified AST model is proposed which will provide a theoretical framework that further investigates and explains the implementation process, using two organisations at different stages of BEMS implementation. The researcher has confirmed access to these organisations, and data collection commenced on October 1st 2006. The paper concludes with an overview of how the research will progress. Title: OBTAINING AND EVALUATING GENERALIZED ASSOCIATION RULES Author(s): Veronica Oliveira de Carvalho, Solange Oliveira Rezende and Mário de Castro Abstract: Generalized association rules are rules that contain some background knowledge, giving a more general view of the domain. This knowledge is codified by a taxonomy set over the data set items. Many studies use taxonomies in different data mining steps to obtain generalized rules. This work initially presents an approach to obtain generalized association rules in the post-processing data mining step using taxonomies. However, an important issue that has to be explored is the quality of the knowledge expressed by generalized rules, since the objective of the data mining process is to obtain useful and interesting knowledge to support the user's decisions. In general, what researchers do to help users select these pieces of knowledge is to reduce the obtained set by pruning some specialized rules using a subjective measure. 
In this context, this paper also presents a quality analysis of the generalized association rules. The quality of the rules obtained by the proposed approach was evaluated. The experiments show that some objective measures for knowledge evaluation are appropriate only when the generalization occurs on one specific side of the rules. Title: HUMAN SKIN DETECTION - AN ARTIFICIAL NEURAL NETWORK APPROACH Author(s): Adriano Martins Moutinho and Antonio Carlos Gay Thomé Abstract: Skin detection is a computational method that is able to identify areas inside an image that may contain human skin. Skin detection can be used in several biometric image applications such as face detection, presence detection systems, adult content filters and others. In order to implement skin detection, an artificial neural network approach is proposed, combined with image processing methods such as illumination correction, histogram equalization and morphological operators. Title: DATA MINING CLUSTERING TECHNIQUES IN ACADEMIA Author(s): Vasile Paul Breşfelean, Mihaela Breşfelean, Nicolae Ghişoiu and Călin-Adrian Comes Abstract: In the present paper the authors exemplify the connections among undergraduate studies, continuing education and professional refinement, against the background of the requirements of Romania's integration into the EU's structures. The study was directed at a number of senior undergraduate students and master's degree students at the Faculty of Economics and Business Administration, Babes-Bolyai University of Cluj-Napoca, using questionnaires in a collaborative manner, and the resulting data was processed using data mining clustering techniques through the Weka workbench, with graphical and percentage representations. Title: NEURAL NETWORKS FOR DATA QUALITY MONITORING OF TIME SERIES Author(s): Augusto Cesar Heluy Dantas and José Manoel de Seixas Abstract: Time series play an important role in most large databases. 
Much of the information comes in temporal patterns, which are often used for decision making. Problems with missing and noisy data arise when data quality is not monitored, generating losses in many fields such as economics, customer relationship management and health management. In this paper we present a neural network based system used to provide data quality monitoring for time series data. The goal of this system is to continuously adapt a neural model for each monitored series, generating a corridor of acceptance for new observations. Each rejected observation may be substituted by its estimated value. A group of four diverse time series was tested and the system proved able to detect the induced outliers. Title: FORECASTING OF CHANGES OF COMPANIES FINANCIAL STANDINGS ON THE BASIS OF SELF-ORGANIZING MAPS Author(s): Egidijus Merkevičius, Gintautas Garšva, Stasys Girdzijauskas and Vitolis Sekliuckis Abstract: This article presents a way for a creditor to predict trends in debtors' financial standing. We propose a model for forecasting changes in financial standing. The model is based on self-organizing maps (SOM) as a tool for the prediction, grouping and visualization of large amounts of data. The inputs for training the SOM are financial ratios calculated according to any discriminant bankruptcy model. A supervised neural network automatically increases prediction accuracy by adjusting the weights of the ratios. Title: NEURALTB WEB SYSTEM: SUPPORT TO THE SMEAR NEGATIVE PULMONARY TUBERCULOSIS DIAGNOSIS Author(s): Carmen Maidantchik, José Manoel de Seixas, Afrânio Kritski, Fernanda C. de Q Mello, Rony T. V. Braga, Pedro H. S. Antunes and João Baptista de Oliveira e Souza Filho Abstract: The World Health Organization estimates that one third of the world population is infected by Mycobacterium tuberculosis. TB mainly affects places with poor health conditions in developing countries. It has therefore become mandatory to develop more efficient, fast, and inexpensive analysis methods. 
This paper presents a decision support system that uses neural networks to support the disease diagnosis. The output is the probability that a patient has the illness or not, together with the risk group. The NeuralTB system encapsulates the knowledge needed during an anamnesis interview, integrated with demographic and risk factors typically known for tuberculosis diagnosis. It was developed with Web technology, and all data was described with a markup language to enable efficient communication and information exchange among specialists. The data that is collected during the whole process can be used to identify new factors or symptoms, since the infection transmission may evolve. This information can also support governmental tuberculosis control entities in defining effective actions to protect the health and safety of the population. Title: AN INTELLIGENT INFORMATION SYSTEM FOR ENABLING PRODUCT MASS CUSTOMIZATION Author(s): Haifeng Liu, Wee-Keong Ng, Bin Song, Xiang Li and Wen-Feng Lu Abstract: We propose to develop an intelligent design decision-support system to enable mass customization through product configuration using intelligent computational approaches. The system supports customer-driven product development throughout the product's life cycle and enables rapid assessment and changes of product design in response to changes in customer requirements. The overall system consists of four subsystems: a customer requirement analysis subsystem, a product configuration subsystem, a product lifecycle cost estimation subsystem and a product data management subsystem. Various challenging issues in developing the system are investigated, and a number of methodologies and techniques to resolve the issues are presented. The proposed system will allow SMEs to effectively compete with larger companies that command superior resources. 
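At its core, a product configuration subsystem of the kind described above must check a customer's chosen options against component compatibility constraints. The following is a minimal, hypothetical sketch of such a check; the component names, options and incompatibility rules are invented for illustration and are not taken from the paper.

```python
# Hypothetical product catalogue: each component offers a set of options.
COMPONENTS = {
    "battery": ["small", "large"],
    "screen": ["lcd", "oled"],
}

# Pairs of (component, option) choices that cannot appear together
# (an illustrative compatibility constraint).
INCOMPATIBLE = {(("battery", "small"), ("screen", "oled"))}

def is_valid(config):
    """Check that every chosen option exists in the catalogue and that
    no incompatible pair of choices is selected."""
    for comp, opt in config.items():
        if opt not in COMPONENTS.get(comp, []):
            return False
    chosen = set(config.items())
    for a, b in INCOMPATIBLE:
        if a in chosen and b in chosen:
            return False
    return True

print(is_valid({"battery": "large", "screen": "oled"}))   # True
print(is_valid({"battery": "small", "screen": "oled"}))   # False
```

A real configurator would add pricing and lifecycle-cost estimation on top of such validity checks, as the abstract's subsystem decomposition suggests.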
Title: FEEDFORWARD NEURAL NETWORKS WITHOUT ORTHONORMALIZATION Author(s): Lei Chen, Hung Keng Pung and Fei Long Abstract: Feedforward neural networks have attracted considerable attention in many fields, mainly due to their approximation capability. After a Gram-Schmidt orthonormalization transformation, single-hidden-layer feedforward neural networks (SLFNs) are transformed into two-hidden-layer feedforward neural networks (TLFNs). The TLFNs do not need recomputation of network weights already calculated; therefore, the orthonormal neural networks can reduce computing time. In this paper, we show that neural networks without the orthonormal transformation are equivalent to the orthonormal neural networks, from which we naturally conclude that the orthonormalization transformation is not necessary for neural networks. Moreover, the neurons of the orthonormal neural networks are only suitable for kernel functions. In this paper, we extend such orthonormal neural networks to additive neurons by using the theory of the Extreme Learning Machine (ELM). The experimental results on many benchmark regression and classification applications further verify that neural networks without the orthonormalization transformation, i.e., ELM, may achieve faster training speed while retaining the same generalization performance. Title: NETWORK ALIGNMENT TOOL FOR NOVEL INSIGHT IN CELLULAR MACHINERY Author(s): Shailja Singh, Anup Bhatekar and Ashwini Gupta Abstract: Molecular networks represent the backbone of molecular activity within the cell. Recent studies have taken a comparative approach toward interpreting these networks, contrasting networks of different species and molecular types, and under varying conditions. In this review, we survey the field of comparative biological network analysis and describe its applications to elucidate cellular machinery and to predict protein function and interaction. 
We highlight the open problems in the field as well as propose some initial mathematical formulations for addressing them. Many of the methodological and conceptual advances that were important for sequence comparison will likely also be important at the network level, including improved search algorithms, techniques for multiple alignment, evolutionary models for similarity scoring and better integration with public databases. Title: INTERCONNECTING DOCUMENTATION - HARNESSING THE DIFFERENT POWERS OF CURRENT DOCUMENTATION TOOLS IN SOFTWARE DEVELOPMENT Author(s): Christian Prause, Julia Kuck, Stefan Apelt, Reinhard Oppermann and Armin B. Cremers Abstract: Current software documentation tools (like text processors, email, documentation generators, reporting, configuration management, wikis) have different strengths in supporting the software engineering process. But one weakness they all have in common is their inability to combine the advantages of the various techniques. Integrating documentation from diverse origins would enhance the force of expression and compensate for the individual failings of the different techniques. In this paper, we present a new brand of documentation utilities --- exemplified by the Dendrodoc system --- that overcomes current problems with documentation. By processing, at negligible cost, information that common tools ignore, our system represents an efficient way of improving software documentation. Title: ATTRIBUTE CONSTRUCTION FOR E-MAIL FOLDERING BY USING WRAPPERED FORWARD GREEDY SEARCH Author(s): Pablo Bermejo, José A. Gámez and José M. Puerta Abstract: E-mail classification is one of the outstanding tasks in text mining; however, most of the efforts on this topic have been devoted to the detection of spam or junk e-mail, that is, a classification problem with only two possible classes: spam and not-spam. 
In this paper we deal with a different e-mail classification problem known as e-mail foldering, which consists of the classification of incoming mail into the different folders previously created by the user. This task has received less attention and is quite complex due to the (usually large) cardinality of the class variable (the number of folders). In this paper we try to improve the classification accuracy by looking for new attributes that are derived from the existing ones using a data-driven approach. The attribute is constructed by taking into account the type of classifier to be used later, following a wrapper approach guided by a forward greedy search. The experiments carried out show that in all cases the accuracy of the classifier is improved when the new attribute is added to the original ones. Title: APPLICATION OF A GENETIC ALGORITHM TO A REAL WORLD NURSE ROSTERING PROBLEM INSTANCE Author(s): Özgür Kelemci and A. Sima Uyar Abstract: The nurse rostering problem involves assigning shifts to qualified personnel using a given timetable under some hard and soft constraints. In this study, we attempt to solve the nurse rostering problem instance of the Fatih Sultan Mehmet Hospital using a standard genetic algorithm. Currently, the rosters are prepared by a head nurse who performs this tedious task by hand. Due to the existence of many constraints, the resulting schedules are usually suboptimal. The aim of this study is to automatically generate better schedules for this specific real-world instance of the nurse rostering problem. This paper reports the results of the preliminary experiments conducted to understand the properties of a good genetic algorithm for this problem. The results are very promising and they promote further study. 
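The nurse rostering abstract above does not give implementation details, so the following is only a schematic sketch of how a standard genetic algorithm with penalty-based fitness can be applied to a toy rostering instance; the problem sizes, constraint set and penalty weights are all invented for illustration.

```python
import random

random.seed(0)

NURSES, DAYS, SHIFTS = 4, 7, 3  # toy sizes; shifts: 0=day, 1=evening, 2=off

def random_roster():
    return [[random.randrange(SHIFTS) for _ in range(DAYS)] for _ in range(NURSES)]

def penalty(roster):
    """Lower is better. Hard constraint: both working shifts covered each day.
    Soft constraint: no nurse works all seven days."""
    p = 0
    for d in range(DAYS):
        assigned = {roster[n][d] for n in range(NURSES)}
        for s in (0, 1):
            if s not in assigned:
                p += 10          # hard-constraint weight (illustrative)
    for n in range(NURSES):
        if all(roster[n][d] != 2 for d in range(DAYS)):
            p += 1               # soft-constraint weight (illustrative)
    return p

def evolve(pop_size=30, generations=100):
    pop = [random_roster() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=penalty)
        next_pop = pop[:2]                         # elitism: keep the best two
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)      # truncation selection
            cut = random.randrange(1, DAYS)        # one-point crossover per nurse
            child = [a[n][:cut] + b[n][cut:] for n in range(NURSES)]
            n, d = random.randrange(NURSES), random.randrange(DAYS)
            child[n][d] = random.randrange(SHIFTS) # point mutation
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=penalty)

best = evolve()
print(penalty(best))
```

Real instances encode many more constraints (qualifications, rest periods, fairness), but the overall loop of selection, crossover, mutation and penalty-based fitness is the same.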
Title: INVESTIGATIONS ON OBJECT-CENTERED ROUTING IN DYNAMIC ENVIRONMENTS: ALGORITHMIC FRAMEWORK AND INITIAL NUMERICAL RESULTS - SUPPORT FOR DISTRIBUTED DECISION MAKING IN TRANSPORT SYSTEMS Author(s): Bernd-Ludwig Wenning, Carmelita Görg, Andreas Timm-Giel, Jörn Schönberger and Herbert Kopfer Abstract: Dynamics are a subject of increasing importance in logistic processes. The more detailed the dynamics considered, the more complicated it becomes to handle them in centralized planning. Therefore, decentralized approaches with autonomous cooperating entities might become more efficient. This paper introduces some aspects of decentralized approaches, mainly focusing on the process of information acquisition, which enables the autonomous entities to decide about the handling of routes and orders. Title: TREND ANALYSIS BASED ON EXPLORATIVE DATA AND TEXT MINING: A DECISION SUPPORT SYSTEM FOR THE EUROPEAN HOME TEXTILE INDUSTRY Author(s): Andreas Becks and Jessica Huster Abstract: Trend-related industries like the European home-textile industry have to adapt quickly to evolving product trends and consumer behaviour in order to avoid the economic risks generated by misproduction. Trend indicators are manifold, ranging from changes in ordered products and consumer behaviour to ideas and concepts published in magazines or presented at trade fairs. In this paper we report on the overall design of the Trend Analyser, a decision support system that helps designers and product developers of textile producers to perform market basket analyses as well as to mine trend-relevant fashion magazines and other publications by trend-setters. Our tool design brings together explorative text and data mining methods in an ontology-based knowledge flow system, helping decision-makers to better plan their production. Title: LEARNING GREEK PHONETIC RULES USING DECISION-TREE BASED MODELS Author(s): Dimitrios P. Lyras, Kyriakos N. Sgarbas and Nikolaos D. 
Fakotakis Abstract: This paper deals with the use of decision-tree based induction techniques for the automatic extraction of phonetic knowledge. In particular, we compare the ID3 divide-and-conquer decision tree algorithm and Quinlan's C4.5 decision tree learner by applying them to two Greek pronunciation databases. The extracted knowledge is then evaluated quantitatively (i.e. by measuring accuracy). In the ten-fold cross-validation experiments conducted in our study, the decision tree models are shown to produce an accuracy higher than 99.96% when trained and tested on each of the two aforementioned datasets. This extracted knowledge allows the better adaptation of speech processing and natural language processing systems to the variants of the Greek language and may be useful in various applications such as automatic bilingual dictionary construction, Grapheme-to-Phoneme and Phoneme-to-Grapheme converters, speech recognition etc. Title: DELINEATING TOPIC AND DISCUSSANT TRANSITIONS IN ONLINE COLLABORATIVE ENVIRONMENTS Author(s): Noriko Imafuji Yasui, Xavier Llorà, David E. Goldberg, Yuichi Washida and Hiroshi Tamura Abstract: In this paper, we propose methodologies for delineating topic and discussant transitions in online collaborative environments, more precisely, focus group discussions for product conceptualization. First, we propose the KEE (Key Elements Extraction) algorithm, an algorithm for simultaneously finding key terms and key persons in a discussion. Based on the KEE algorithm, we propose approaches for analyzing two important factors of discussions: discussion dynamics and emerging social networks. Examining our approaches using actual network-based discussion data generated by real focus groups in a marketing environment, we report interesting results that demonstrate how our approaches can effectively discover knowledge in the discussions. 
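The ID3 and C4.5 algorithms compared in the Greek phonetic rules abstract both grow trees by choosing, at each node, the attribute that most reduces label entropy. A minimal sketch of that split criterion on toy grapheme-context data (the attributes, values and phoneme labels are invented for illustration and are not the paper's datasets):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a sequence of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Reduction in label entropy obtained by splitting the rows on one attribute."""
    total = entropy(labels)
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(subset) / len(labels) * entropy(subset)
                    for subset in by_value.values())
    return total - remainder

# Toy grapheme-to-phoneme-style examples: attributes are neighbouring letters.
rows = [{"left": "a", "right": "e"}, {"left": "a", "right": "o"},
        {"left": "b", "right": "e"}, {"left": "b", "right": "o"}]
labels = ["p1", "p1", "p2", "p2"]

print(information_gain(rows, labels, "left"))   # 1.0: splits the labels perfectly
print(information_gain(rows, labels, "right"))  # 0.0: carries no information
```

ID3 picks the attribute with the highest gain and recurses on each subset; C4.5 refines this with gain ratio and support for continuous and missing values.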
Title: A NICHE BASED GENETIC ALGORITHM FOR IMAGE REGISTRATION Author(s): Giuseppe Pascale and Luigi Troiano Abstract: Image registration is a task of fundamental importance in many applications of image analysis. It aims to find the unknown set of transformations able to reduce two or more images to a common reference frame. Image registration can be regarded as an optimization problem, where the goal is to maximize a measure of image similarity (e.g. the images' cross-correlation). Measuring similarity on the overall image can be computationally expensive, which leads to measuring the similarity of smaller subimages. However, the reduction of subimage size results in higher multi-modality of the function being optimized. Recent investigations have shown that genetic algorithms can address this problem. However, the simple scheme of genetic algorithms can still fall into local optima. In this paper, we explore the application of niche-oriented genetic algorithms, showing their strengths in providing a more effective image registration algorithm. Title: PROBLEMS AND FEATURES OF EVOLUTIONARY ALGORITHMS TO BUILD HYBRID TRAINING METHODS FOR RECURRENT NEURAL NETWORKS Author(s): M. P. Cuéllar, M. Delgado and M. C. Pegalajar Abstract: Dynamical recurrent neural networks are models suitable for solving problems where the input and output data may have dependencies in time, like grammatical inference or time series prediction. However, traditional training algorithms for these networks sometimes provide unsuitable results because of the vanishing gradient problem. This work focuses on hybrid training algorithms for this type of neural network. The methods studied are based on the combination of heuristic procedures with gradient-based algorithms. 
In the experimental section, we show the advantages and disadvantages that may be encountered when using these training techniques in time series prediction problems, and provide a general discussion of the problems and cases of different hybridisations based on genetic evolutionary algorithms. Title: THE JUMP PROJECT: PRACTICAL USE OF SEMANTIC WEB TECHNOLOGIES IN EPSS SYSTEMS Author(s): Giovanni Semeraro, Ignazio Palmisano, Nicola Abbattista and Silverio Petruzzellis Abstract: The JUMP project aims at bringing together the knowledge stored in different systems in order to fulfil the needs of a user. EPSS systems are meant to support a user in taking decisions, leveraging the different information sources that are available. The JUMP framework is designed to offer multiple ways for the user to request information or advice from the central knowledge and document base; the knowledge and document base itself is the result of the integration of independent systems. In order to simplify communication and maximize knowledge sharing and the standardization of languages and protocols, Semantic Web languages and technologies are used throughout the framework to represent, exchange and query the knowledge stored in each part of the framework. The basic assumption is that the user is knowledgeable w.r.t. the IT infrastructure and already has the background knowledge necessary to achieve most parts of the task he/she is involved in, but is not an expert in the domain in which the task is to be achieved. The task that the JUMP system has to accomplish, in this situation, is to help the user find the relevant information (e.g. the email address of someone who has already accomplished the same task, or of people who can relay relevant pointers, or documents and self-instruction courses that contain references to the current activity). All the available material then has to be ranked according to the user profile, i.e. 
on the basis of what the JUMP system knows about the user or about the category the user falls into. Title: AN APPROACH FOR ASSESSING DESIGN SYSTEMS: DESIGN SYSTEM SIMULATION AND ANALYSIS FOR PERFORMANCE ASSESSMENT Author(s): Richard Sohnius, Eyck Jentzsch, Wolf-Ekkehard Matzke and Vadim Ermolayev Abstract: This position paper presents our work in assessing engineering design systems in the field of microelectronics with respect to their performance and, more specifically, their productivity. Current mainstream process assessment systems show deficiencies in representation and analysis when dealing with dynamic, self-optimizing processes. To overcome this, a project called PRODUKTIV+ has been created with the goal of developing a new approach. This approach is to create a model of a design system and simulate the collaborative behavior of the involved engineers using a system of cooperating, intelligent software agents. The assessment of a design system is then based on the detailed simulation results. Title: DAY OF THE WEEK EFFECT IN SMALL SECURITIES MARKETS Author(s): Virgilijus Sakalauskas and Dalia Kriksciuniene Abstract: In this article, a statistical investigation of the day-of-the-week effect is presented for the case of a small securities market. Though this effect has already been widely examined in numerous research articles, its influence on such markets has not been sufficiently explored. By applying statistical analysis to the return index data (Vilnius Stock OMX Index return) we found that no first or last trading day effect was observed. We constructed a subset of the return variable which indicated the influence of the day-of-the-week effect. For this variable we demonstrated the effect on the higher moments of return and concluded that the hypothesis of equality of the higher moments across days of the week can be rejected, indicating that a weekly pattern in the higher moments exists. 
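Testing for a weekly pattern in the higher moments, as in the day-of-the-week study above, starts from per-weekday estimates of the skewness and kurtosis of returns. The sketch below computes these sample moments on synthetic Gaussian returns; the data, group sizes and moment definitions (population-based, uncorrected) are illustrative only and do not reproduce the paper's actual tests.

```python
import random
import statistics

random.seed(1)

def skewness(xs):
    """Third standardized sample moment (population form, no bias correction)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 3 for x in xs) / (n * s ** 3)

def kurtosis(xs):
    """Fourth standardized sample moment (3.0 for a normal distribution)."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 4 for x in xs) / (n * s ** 4)

# Synthetic daily index returns grouped by weekday (illustrative data only).
returns = {day: [random.gauss(0.0, 0.01) for _ in range(250)]
           for day in ["Mon", "Tue", "Wed", "Thu", "Fri"]}

for day, xs in returns.items():
    print(day, round(statistics.fmean(xs), 5),
          round(skewness(xs), 3), round(kurtosis(xs), 3))
```

A formal test of the paper's hypothesis would then compare these per-weekday moment estimates for equality, e.g. with a bootstrap or an asymptotic test.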
Title: EMPLOYING SOFTWARE MULTI-AGENTS FOR SIMULATING RADIOLOGICAL ACCIDENTS Author(s): Tadeu Augusto de Almeida Silva and Oscar Luiz Monteiro de Farias Abstract: Through agent-based systems we can build scenarios of radiological accidents that enable us to evaluate the consequences of accidental contaminations. The accidental release of radionuclides into an environment might cause the contamination of areas and people. It is therefore necessary to make use of tools that allow us to predict the effects of the population's exposure, evaluate the consequences and suggest protective measures. In this paper we introduce the use of multi-agent software systems immersed in a geographical representation of the world as a viable option for simulating radiological accidents and assessing doses. Title: GENERALIZED MULTICRITERIA OPTIMIZATION SOFTWARE SYSTEM MKO-2 Author(s): Mariana Vassileva, Vassil Vassilev, Boris Staykov and Danail Dochev Abstract: The paper describes a generalized multicriteria decision support system, called MKO-2, which is designed to model and solve linear and linear integer multicriteria optimization problems. The system implements an innovative generalized classification-based interactive algorithm for multicriteria optimization with variable scalarizations and parameterizations, which is applicable to different types of multicriteria optimization problems (i.e., linear, nonlinear, mixed variables). It is also applicable to different ways of defining preferences by the decision maker. It can apply different scalarizing problems and strategies in the search for new Pareto optimal solutions. The class of problems solved, the structure, the functions and the user interface of the generalized multicriteria decision support system MKO-2 are described in the paper.
The graphical user interface of this system enables decision makers with different degrees of familiarity with the methods and software tools to operate the system easily. The MKO-2 system can be used both for education and for solving real-life problems. Because of its nature, the system contains specific expert knowledge of the field of multicriteria optimization, and knowledge-based (expert) subsystems for different levels of expertise can be included in it. Title: INTEGRATING AGENTS INTO COOPERATIVE INTELLIGENT DECISION SUPPORT SYSTEMS Author(s): Abdelkader Adla Abstract: In this paper, we propose to integrate agents into a cooperative intelligent decision support system. The resulting system, ACIDS (Agent-based Cooperative Intelligent Decision-support System), is a decision support system designed to support operators during contingencies by giving them detailed, real-time information, allowing them to integrate and interpret it and then transmit and monitor their decisions through the chain of incident command. During the contingency, the operator using ACIDS should be able to: gather information about the incident location; access databases related to the incident; activate predictive modelling programs; support the operator's analyses; and monitor the progress of the situation and action execution. The decision-making process, applied to the boiler management system, relies in ACIDS on a cycle that includes recognizing the causes of a fault (diagnosis), planning actions to resolve the incidents, and executing the selected actions. Title: GROUP DECISION SYSTEMS FOR RANKING AND SELECTION - AN APPLICATION TO THE ACCREDITATION OF DOPING CONTROL LABORATORIES Author(s): Xari Rovira, Núria Agell, Mónica Sánchez, Francesc Prats and Montserrat Ventura Abstract: This paper presents a qualitative approach for representing and synthesising evaluations given by a team of experts involved in selection or ranking processes.
The paper aims at contributing to decision-making analysis in the context of group decision making. A methodology is given for selecting and ranking several alternatives in an accreditation process. Patterns or alternatives are evaluated by each expert on an ordinal scale. Qualitative orders of magnitude spaces are the frame in which these ordinal scales are represented. A representation of the different patterns by means of k-dimensional qualitative orders of magnitude labels is proposed, each of these standing for the conjunction of the k labels corresponding to the evaluations considered. A method is given for ranking patterns based on comparing distances against a reference k-dimensional label. The proposed method is applied to a real case in External Quality Assessment Schemes (EQAS) for Doping Control Laboratory contexts. Title: A PLATFORM DEDICATED TO KNOWLEDGE ENGINEERING FOR THE DEVELOPMENT OF IMAGE PROCESSING APPLICATIONS Author(s): Arnaud Renouf, Régis Clouard and Marinette Revenu Abstract: In this paper, we propose a platform dedicated to knowledge extraction and management for image processing applications. The aim of this platform is a knowledge-based system that automatically generates applications from problem formulations given by inexperienced users. We also present a new model for the formulation of such applications that covers all the image processing tasks and that is independent of any particular application domain. We show the contribution of this model to the platform's performance and to the realization of the knowledge-based system. Title: SELF-LEARNING PREDICTION SYSTEM FOR OPTIMISATION OF WORKLOAD MANAGEMENT IN A MAINFRAME OPERATING SYSTEM Author(s): Michael Bensch, Dominik Brugger, Wolfgang Rosenstiel, Martin Bogdan, Wilhelm Spruth and Peter Baeuerle Abstract: We present a framework for the extraction and prediction of online workload data from a workload manager of a mainframe operating system.
To boost overall system performance, the prediction will be incorporated into the workload manager to take preventive action before a bottleneck develops. Model and feature selection automatically create a prediction model based on given training data, thereby keeping the system flexible. We tailor data extraction, preprocessing and training to this specific task. Using error measures suited to our task, we show that our approach is promising. To conclude, we discuss our first results and give an outlook on future work. Title: SELF-ORGANIZING MAPS FOR CLASSIFICATION OF THE RIO DE JANEIRO STATE CITIES BASED ON ELECTRICAL ENERGY CONSUMPTION Author(s): Luiz Biondi Neto, Pedro Henrique Gouvêa Coelho, João Carlos Soares de Mello and Lidia Angulo Meza Abstract: The purpose of the present work is to classify the 31 cities of Rio de Janeiro State in Brazil based on their energy consumption. The point is to search for new criteria for clustering the users in order to establish, in a more homogeneous way, indices of energy quality. Moreover, it aims to bring about a framework from which it will be possible to determine the relative efficiency among the cities of all Brazilian states. Traditionally this classification task is carried out using a statistical technique known as K-means, in which only five variables are considered: the size of the main network in kilometres, the offered power, the number of users, the average monthly consumption and the covered area. This paper uses the Kohonen Self-Organizing Map technique applied to 21 variables, including residential, industrial, public and rural consumption, in order to seek a better classification. Title: IMPRECISE EMPIRICAL ONTOLOGY REFINEMENT - APPLICATION TO TAXONOMY ACQUISITION Author(s): Vít Novácek Abstract: The significance of uncertainty representation has become obvious in the Semantic Web community recently.
This paper presents new results of our research on uncertainty incorporation into ontologies created automatically by means of Human Language Technologies. The research is related to OLE (Ontology LEarning; project web page: http://nlp.fi.muni.cz/projects/ole/) - a project aimed at bottom-up generation and merging of ontologies. It utilises a proposed expressive fuzzy knowledge representation framework called ANUIC (Adaptive Net of Universally Interrelated Concepts). We discuss our recent achievements in taxonomy acquisition and show how even a simple application of the principles of ANUIC can improve the results of initial knowledge extraction methods. Title: SIMILARITY ASSESSMENT IN A CBR APPLICATION FOR CLICKSTREAM DATA MINING PLANS SELECTION Author(s): Cristina Wanzeller and Orlando Belo Abstract: We implemented a mining plan selection system founded on the Case-Based Reasoning paradigm, in order to assist the development of Web usage mining processes. The system's main goal is to suggest the methods best suited to a given data analysis problem. Our approach builds upon the reuse of the experience gained from prior successful mining processes to solve current and future similar problems. The knowledge acquired after successfully solving such problems is organized and stored in a relational case base, giving rise to a (multi-)relational case representation. In this paper we describe the similarity assessment devised for the retrieval of similar cases, to cope with the adopted representation. Structured representation and similarity assessment over complex data are issues relevant to a growing variety of application domains, and are considered in multiple related lines of active research.
We explore a number of different similarity measures proposed in the literature, and we extend one of them to better fit our purposes. Title: LEARNING TO RANK FOR COLLABORATIVE FILTERING Author(s): Jean-Francois Pessiot, Tuong-Vinh Truong, Nicolas Usunier, Massih-Reza Amini and Patrick Gallinari Abstract: Up to now, most contributions to collaborative filtering have relied on rating prediction to generate recommendations. We, instead, try to correctly rank the items according to the users' tastes. First, we define a ranking error function which takes available pairwise preferences between items into account. Then we design an effective algorithm that optimizes this error. Finally, we illustrate the proposal on a standard collaborative filtering dataset. We adapted the evaluation protocol proposed by (Marlin, 2004) for rating-prediction-based systems to our case, where pairwise preferences are predicted instead. The preliminary results fall between those of two reference rating-prediction-based methods. We suggest different directions in which to further explore our ranking-based approach to collaborative filtering. Title: PROVISION OF CONTEXT-SENSITIVE ENTERPRISE KNOWLEDGE FOR DECISION SUPPORT: AN APPROACH BASED ON ENTERPRISE MODELS AND INFORMATION DEMAND CONTEXTS Author(s): Tatiana Levashova, Michael Pashkin and Magnus Lundqvist Abstract: In this paper an approach for deriving abstract and operational context for context-sensitive decision support, and thereby also parts of information demand contexts, from enterprise models is presented, together with some thoughts on how this can be utilised in efforts to provide users with current, correct, and relevant information with respect to the tasks such users perform within organisations. The different steps involved in the process of deriving context from enterprise models are explained by means of different representations of an example model produced in earlier research by the authors.
Title: RULE BASED STABILITY CRITERIA FOR COALITION FORMATION UNDER UNCERTAINTY Author(s): Chi-Kong Chan and Ho-Fung Leung Abstract: Efficiency and stability are two important concepts in coalition formation analysis. One common assumption in many well-known criteria, such as the core and Pareto efficiency, is that there exists a publicly known value for each coalition or sub-coalition. However, in software agent applications, this assumption is often not true, as the agents rarely know the exact coalition values with certainty. Instead, agents have to rely on whatever evidence they can observe, and evaluate that evidence according to their private information based on past experience. There are two sources of uncertainty here. First, such private information is often uncertain in nature or may even be self-conflicting. Second, the agents, which are heterogeneous and autonomous, may have different conflict resolution strategies. Such uncertainties make the traditional approaches unfit for many real-world problems, except perhaps in idealized scenarios. In this paper, we extend the Core and Pareto Optimality criteria by proposing a new rule-based stability concept for uncertain environments: the CU-Core. Title: A DECISION SUPPORT SYSTEM FOR PREDICTING THE RELIABILITY OF A ROBOTIC DISPENSING SYSTEM Author(s): J. Sturek, S. Ramakrishnan, P. Nagula and K. Srihari Abstract: Decision Support Systems (DSS) are information systems designed to support individual and collective decision-making. This research presents the development of a DSS to facilitate the prediction of the reliability of a Robotic Dispensing System (RDS). While it is extremely critical for design teams to identify potential defects in a product before releasing it to customers, predicting reliability is extremely difficult due to the absence of actual failure data. Design teams often adopt tools such as Failure Mode and Effects Analysis (FMEA) to analyze the various failure modes in the product.
There are commercial software packages that facilitate predicting reliability and conducting FMEA. However, there are limited approaches that combine these two critical aspects of product design. The objective of this research is to develop a DSS that helps design teams track the overall system reliability, while concurrently using the data from the alpha testing phase to perform the FMEA. Hence, this DSS is capable of calculating the age-specific reliability value for a Robotic Dispensing System (RDS), in addition to storing the defect information for the FMEA process. The Risk Priority Number (RPN) calculated using the gathered data serves as the basis for the design team to identify modifications to the product design. The tool, developed in Microsoft Access®, would subsequently be utilized to track the on-field performance of the RDS. This would facilitate continuous monitoring of the RDS at the customer site, especially during its “infant mortality” period. Title: INTELLIGENT E-LEARNING SYSTEMS - AN INTELLIGENT APPROACH TO FLEXIBLE LEARNING METHODOLOGIES Author(s): Sukanya Ramabadran and Vivekanand Gopalkrishnan Abstract: The evolution of the educational industry from classroom training methods to e-learning systems has been remarkable and has served its purpose. But it has not been able to address the issues faced by students who do not want to be constrained by a set pattern of progress. We conducted a detailed study to analyze the preferences of students with respect to the features of an Intelligent e-Learning System (IeLS). Based on the results of the study, a framework for an IeLS that facilitates flexibility and maximizes learners' satisfaction is developed. The framework consists of components such as presentation, data mining, business logic, content management and database.
The data mining component uses techniques such as association rule discovery and conceptual clustering to generate recommendations for students, course coordinators and the institute. This framework is implemented using PHP and MySQL, with various modules such as registration, entry test, tutorials, guestbook and bulletin boards. This system allows flexibility in terms of choice of learning path, change in direction of learning path and change of learning approach. It also allows students to choose the levels of difficulty at which they pursue the course. In this paper we discuss the role that such an Intelligent e-Learning system plays in satisfying the diverse needs of students. Title: FUZZY INTERVAL NUMBER (FIN) TECHNIQUES FOR MULTILINGUAL AND CROSS LANGUAGE INFORMATION RETRIEVAL Author(s): Theodoros Alevizos, Vassilis G. Kaburlasos, Stelios Papadakis, Christos Skourlas and Petros Belsis Abstract: Fuzzy Interval Numbers (FINs) can be seen as a set of techniques applied in fuzzy system applications. In this paper, we propose a series of techniques, based on Fuzzy Interval Numbers (FINs), to solve multilingual and Cross-Language Information Retrieval (CLIR) problems. Some experiments showing the importance of these techniques in CLIR systems are briefly described and discussed. Our method is evaluated using monolingual and bilingual public bibliographic data extracted from the National Archive of the Greek National Documentation Centre. All the experiments were conducted with and without the use of stemming, stop-words and other language-dependent (pre-)processing techniques. A main advantage of our approach is that the method is language independent and there is no need for any text pre-processing or higher-level processing, thus avoiding the use of taggers, parsers, feature selection strategies, or other language-dependent NLP tools.
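The FIN machinery referred to in the abstract above can be given a toy illustration: treat a FIN as a stack of intervals at increasing membership levels (h-cuts) and measure the distance between two FINs by averaging an interval metric over the levels. Both the construction and the metric below are simplifying assumptions for illustration only, not the authors' exact definitions, and the numeric values are hypothetical.

```python
def interval_distance(i1, i2):
    """A simple metric on closed intervals [a, b]: mean endpoint gap."""
    (a1, b1), (a2, b2) = i1, i2
    return (abs(a1 - a2) + abs(b1 - b2)) / 2.0

def fin_distance(fin1, fin2):
    """Distance between two FINs given as equal-length lists of h-cuts,
    ordered from membership level near 0 (widest) to 1 (narrowest)."""
    assert len(fin1) == len(fin2), "FINs must use the same membership levels"
    return sum(interval_distance(i, j) for i, j in zip(fin1, fin2)) / len(fin1)

# Two toy FINs over normalized term-frequency values (illustrative numbers).
fin_a = [(0.0, 1.0), (0.2, 0.8), (0.4, 0.6)]
fin_b = [(0.1, 1.1), (0.3, 0.9), (0.5, 0.7)]
print(fin_distance(fin_a, fin_b))
```

In a retrieval setting of this kind, documents whose FINs lie closest to the query's FIN would be ranked highest; no tokenizer-specific preprocessing is involved, which matches the language-independence claim.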
Title: A DOMAIN KNOWLEDGE BASED APPROACH FOR SIMILARITY RETRIEVAL IN BRAIN IMAGES Author(s): Haiwei Pan, Qilong Han, Xiaoqin Xie, Wei Zhang and Jianzhong Li Abstract: The incidence of brain disease, especially brain tumors, has increased significantly in recent years. It is becoming more and more important to discover knowledge by mining medical brain images to aid doctors' diagnoses. Image mining is an important branch of data mining; it is more than just an extension of data mining to the image domain, but an interdisciplinary endeavor. Image clustering and similarity retrieval are two basic parts of image mining. In this paper, we introduce the notion of image sequence similarity patterns (ISSP) for medical image databases. ISSP refer to the longest similar and continuous sub-patterns hidden in two objects, each of which contains an image sequence. These patterns are significant in medical imaging because what matters is not the similarity between two individual medical images but rather the similarity between objects each of which has an image sequence. We design new algorithms, guided by domain knowledge, to discover the possible Space-Occupying Lesion (PSO) in brain images and ISSP for similarity retrieval. Our experiments demonstrate that the results of similarity retrieval are meaningful and interesting to medical doctors. Title: INTELLIGENT SYSTEM FOR IMAGE COMPRESSION Author(s): Adnan Khashman and Kamil Dimililer Abstract: The parallel processing capability of neural networks provides an efficient means for processing images with large amounts of data. Image compression using Discrete Cosine Transforms (DCT) is a lossy compression method where, at higher compression ratios, the quality of the compressed images is reduced; hence the need to find an optimum compression ratio that combines high compression and good quality. This paper suggests that the image intensity can affect the choice of an optimum compression ratio.
A neural network will be trained to establish the non-linear relationship between the image intensity and its compression ratios in search of an optimum ratio. Experimental results suggest that a trained neural network can relate image intensity or pixel values to a compression ratio and thus can be successfully used to predict optimum DCT compression ratios for different images. Title: MRE-KDD+: A MULTI-RESOLUTION, ENSEMBLE-BASED MODEL FOR ADVANCED KNOWLEDGE DISCOVERY Author(s): Alfredo Cuzzocrea Abstract: The problem of supporting advanced decision-support processes arises in many fields of real-life applications, ranging from scenarios populated by distributed and heterogeneous data sources, such as conventional distributed data warehousing environments, to cooperative information systems. Here, data repositories expose very different formats, and knowledge representation schemes are accordingly very heterogeneous. As a consequence, a relevant research challenge is how to efficiently integrate, process and mine such distributed knowledge in order to make it available to end-users/applications in an integrated and summarized manner. Starting from these considerations, in this paper we propose an OLAM-based framework for advanced knowledge discovery, along with a formal model underlying this framework, called the Multi-Resolution Ensemble-based Model for Advanced Knowledge Discovery in Large Databases and Data Warehouses (MRE-KDD+), and a reference architecture for such a framework. Another contribution of our work is KBMiner, a visual tool that supports the editing of even complex KDD processes according to the guidelines drawn by MRE-KDD+. Title: A DISTRIBUTED MULTI-AGENT SYSTEM TO SOLVE AIRLINE OPERATIONS PROBLEMS Author(s): Antonio Castro and Eugenio Oliveira Abstract: An airline schedule very rarely operates as planned.
Problems related to aircraft, crew members and passengers are common, and the actions towards the solution of these problems are usually known as operations recovery or disruption management. The Airline Operations Control Center (AOCC) tries to solve these problems with the minimum impact on the airline schedule, at minimum cost and, at the same time, satisfying all the required safety rules. Usually, each problem is treated separately, and some tools have been proposed to help in the decision-making process of the airline coordinators. In this paper we present the implementation of a Distributed Multi-Agent System (MAS) that represents the several roles that exist in an AOCC. This MAS deals with several operational bases, and for each type of operations problem it has several specialized software agents that implement heuristic solutions as well as solutions based on operations research mathematical models and artificial intelligence algorithms. These specialized agents compete to find the best solution for each problem. We present a real case study taken from an AOCC where a crew recovery problem is solved using the MAS. Computational results using a real airline schedule are presented, including a comparison with a solution for the same problem found by the human operators in the Airline Operations Control Center. We show that, even for simple problems and when comparing with solutions found by human operators in the case of this airline company, it is possible to find valid solutions in less time and at a smaller cost.
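The "competing specialized agents" idea in the airline MAS abstract above can be sketched generically: several solver agents each propose a candidate solution for the same disruption, and a supervisor selects the cheapest proposal. The agent names, plans and cost figures below are hypothetical stand-ins, not the paper's actual solvers.

```python
def greedy_solver(problem):
    """Fast heuristic agent: quick but typically more expensive solution."""
    return {"plan": "swap-standby-crew", "cost": problem["base_cost"] * 1.3}

def or_solver(problem):
    """Stand-in for an operations-research model: slower, usually cheaper."""
    return {"plan": "reassign-pairings", "cost": problem["base_cost"] * 1.1}

def supervisor(problem, solvers):
    """Collect every agent's proposal and pick the minimum-cost one."""
    proposals = [solve(problem) for solve in solvers]
    return min(proposals, key=lambda p: p["cost"])

problem = {"disruption": "crew-sickness", "base_cost": 1000.0}
best = supervisor(problem, [greedy_solver, or_solver])
print(best["plan"], round(best["cost"], 2))
```

In a real AOCC setting each solver would also check safety-rule feasibility before proposing, and the supervisor would discard invalid proposals.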
Title: IMPROVED FACE RECOGNITION USING KERNEL DIRECT DISCRIMINANT ANALYSIS IN COMBINATION WITH SVM CLASSIFIER Author(s): Seyyed Majid Valiollahzadeh, Abolghasem Sayadiyan and Mohammad Nazari Abstract: Applications such as Face Recognition (FR) that deal with high-dimensional data need a mapping technique that introduces a representation of low-dimensional features with enhanced discriminatory power, and a proper classifier able to classify those complex features. Most traditional Linear Discriminant Analysis (LDA) methods suffer from the disadvantage that their optimality criteria are not directly related to the classification ability of the obtained feature representation. Moreover, their classification accuracy is affected by the “small sample size” (SSS) problem which is often encountered in FR tasks. In this short paper, we combine a nonlinear kernel-based mapping of data, called KDDA, with a Support Vector Machine (SVM) classifier to deal with both of these shortcomings in an efficient and cost-effective manner. The method proposed here is compared, in terms of classification accuracy, to other commonly used FR methods on the UMIST face database. Results indicate that the performance of the proposed method is overall superior to that of traditional FR approaches, such as the Eigenfaces, Fisherfaces, and D-LDA methods, and traditional linear classifiers. Title: FACE DETECTION USING ADABOOSTED SVM-BASED COMPONENT CLASSIFIER Author(s): Seyyed Majid Valiollahzadeh, Abolghasem Sayadiyan and Mohammad Nazari Abstract: Boosting is a general method for improving the accuracy of any given learning algorithm. In this paper we employ a combination of Adaboost with Support Vector Machines (SVM) as component classifiers to be used in the face detection task. The proposed combination outperforms SVM in generalization on imbalanced classification problems.
The method proposed here is compared, in terms of classification accuracy, to other commonly used Adaboost methods, such as Decision Trees and Neural Networks, on the CMU+MIT face database. Results indicate that the performance of the proposed method is overall superior to previous Adaboost approaches. Title: USING DECISION TREE LEARNING TO PREDICT WORKFLOW ACTIVITY TIME CONSUMPTION Author(s): Liu Yingbo, Wang Jianmin and Sun Jiaguang Abstract: Knowledge of activity time consumption is essential to successful scheduling in workflow applications. However, the uncertainty of activity execution duration in workflow applications makes it a non-trivial task for schedulers to appropriately organize the ongoing processes. In this paper, we present a K-level prediction approach intended to help workflow schedulers anticipate activities' time consumption. This approach first defines K levels as a global measure of time. Then, it applies a decision tree learning algorithm to the workflow event log to learn various kinds of activities' execution characteristics. When a new process is initiated, the classifier produced by the decision tree learning technique takes prior activities' execution information as input and suggests a level as the prediction of a posterior activity's time consumption. In an experiment on three vehicle manufacturing enterprises, 896 activities were investigated, and we achieved average prediction accuracies of 80.27%, 70.93% and 61.14%, respectively, with K = 10. We also applied our approach for greater values of K; however, the results are less positive. We describe our approach and report on the results of our experiment. Title: USING GRAMMARS FOR TEXT CLASSIFICATION Author(s): P. Kroha and T. Reichel Abstract: In this contribution we present our experiments with using grammars for text classification. The approaches usually used are based on statistical methods working with term frequency.
We investigate short texts (stock exchange news) more deeply, in that we analyze the structure of sentences and the context of used phrases. The results are used for predicting market movements, based on the hypothesis that news moves markets. Title: AN EVOLUTIONARY APPROACH TO SOLVE SET COVERING Author(s): Broderick Crawford, Carolina Lagos, Carlos Castro and Fernando Paredes Abstract: In this paper we apply a new evolutionary approach to solving the Set Covering Problem. This problem is a reasonably well-known NP-hard optimization problem with many real-world applications. We use a Cultural Evolutionary Architecture to maintain knowledge of diversity and fitness learned over each generation during the search process. Our results indicate that the approach is able to produce very competitive results in comparison with other approximation algorithms on a portfolio of test problems taken from the ORLIB. Title: REACTIVE COMMONSENSE REASONING - TOWARDS SEMANTIC COORDINATION WITH HIGH-LEVEL SPECIFICATIONS Author(s): Michael Cebulla Abstract: In contemporary distributed applications, questions concerning coordination have become increasingly urgent. There is, however, a tradeoff to be made between the need for highly reactive behavior and the need for semantically rich high-level abstractions. Especially w.r.t. context-aware applications, where various systems have to act together and come to coordinated conclusions, the need for powerful semantic abstractions is evident. In our argument we start with a calculus for highly reactive behavior. Then we introduce, stepwise, two extensions w.r.t. the representation of semantic relationships. The first extension concerns the integration of description logics in order to represent statements about the current situation. The main extension, however, concerns the integration of classifications (also known as formal contexts).
By integrating these highly abstract notions into our membrane-based calculus, we make a proposal for the support of commonsense reasoning at runtime. We claim that this proposal is a contribution to the robustness of system behavior and to context-awareness. Title: MISUSE DETECTION - AN ITERATIVE PROCESS VS. A GENETIC ALGORITHM APPROACH Author(s): Pedro A. Diaz-Gomez and Dean F. Hougen Abstract: With the explosion of the internet and its use, the development of security mechanisms is quite important in order to preserve the confidentiality, integrity and availability of data stored in computers. However, the growth of intrusions can make such mechanisms almost unusable, in the sense that the computation time or space needed in order to maintain them can grow exponentially. This position paper presents an iterative process for doing misuse detection, and compares it with another approach for doing so: a genetic algorithm. Title: MISUSE DETECTION - A NEURAL NETWORK VS. A GENETIC ALGORITHM APPROACH Author(s): Pedro A. Diaz-Gomez and Dean F. Hougen Abstract: Misuse detection can be addressed as an optimization problem, where the problem is to find an array of possible intrusions x that maximizes a function f(·) subject to a constraint r imposed by a user's actions performed on a computer. This position paper presents and compares two ways of finding x in audit data: using neural networks and using genetic algorithms. Title: AN AGENT-BASED APPROACH TO SUPPORT PERFORMANCE MANAGEMENT FOR DYNAMIC AND COLLABORATIVE WORK Author(s): Nora Houari and Behrouz H. Far Abstract: This paper describes a comprehensive multiagent-based modelling approach for the support of collaborative and dynamic organizational roles. The method is role-centred: agents assist human users through collaboration within the same role, with other roles of the same team, and with roles of different teams that share task dependencies.
Agents in the system are not restricted by predefined schemes; they can join and/or leave the coalition. We identify the key elements of the role model as rules, agents, and relationships. Our role model integrates both the operational functionalities and the performance management towards specific goals; the former involve human individuals in the loop, whereas the latter is performed by software agents that assist in monitoring, control, and adaptation of performance in dynamic organizations. Title: AUTOMATIC EVALUATION OF THE QUANTITATIVE SEISMOCARDIOGRAM Author(s): Z. Trefny, J. Svacinka, S. Trojan, J. Slavicek, P. Smrcka and M. Trefny Abstract: The device for quantitative seismocardiography (Q-SCG) detects cardiac vibrations caused by heart activity; the measuring sensor is usually placed in the plate of the chair, so additional instruments applied to the proband's body are not required. The results of the Q-SCG analysis are usable in various clinical fields. The first and most important step in the process of detecting significant characteristics of measured Q-SCG curves is to detect pseudo-periods in the signal regardless of the initial pseudo-period position. Other characteristics can be acquired by a relatively simple process over the identified pseudo-period. Experimental equipment for Q-SCG measurement and analysis was developed, along with special algorithms for preprocessing, segmentation and interactive analysis of the Q-SCG signal. In this contribution the technical principles of quantitative seismocardiography are introduced; the method is easy, robust and appropriate for real-time Q-SCG processing.
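Pseudo-period detection of the kind described for the Q-SCG signal above is commonly approached via autocorrelation: the lag at which a signal best matches a shifted copy of itself estimates its period, independently of where the first pseudo-period starts. The sketch below is a generic illustration on a synthetic sine wave, not the authors' algorithm, and the lag bounds are hypothetical.

```python
import math

def autocorr_period(signal, min_lag, max_lag):
    """Return the lag in [min_lag, max_lag] with the highest normalized
    autocorrelation -- an estimate of the signal's pseudo-period."""
    n = len(signal)
    m = sum(signal) / n
    x = [s - m for s in signal]          # remove the mean
    energy = sum(v * v for v in x)       # zero-lag autocorrelation
    best_lag, best_r = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        r = sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag

# Synthetic periodic signal with a period of 50 samples.
sig = [math.sin(2 * math.pi * i / 50) for i in range(500)]
print(autocorr_period(sig, 20, 80))  # → 50
```

Real seismocardiographic signals would need bandpass filtering and beat-to-beat tolerance on top of this, which is where the segmentation algorithms mentioned in the abstract come in.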
Area 3 - Information Systems Analysis and Specification Title: TRUST AND REPUTATION ONTOLOGIES FOR ELECTRONIC BUSINESS Author(s): Stefan Schmidt, Tharam Dillon, Robert Steele and Elizabeth Chang Abstract: The emergence of social networks in centralized and distributed virtual communities is one of the hottest topics in today’s research communities. Trust and reputation ontologies which capture the social relationships and concepts among interacting parties offer a standardized and common understanding of a problem domain, such as electronic business in autonomous environments. To improve interoperability, ontologies can be shared among interacting agents and form the basis for many of the autonomous activities of intelligent agents. The ontologies presented in this paper concentrate on the formalisation of business discovery, business selection, and business interaction QoS review concepts. Special focus is put on the trust and reputation relationships which form among the entities involved. Title: CONSTRUCTING CONSISTENT USER REQUIREMENTS - LESSONS LEARNT FROM REQUIREMENTS VERIFICATION Author(s): Petra Heck Abstract: The user requirements specify what functions an information system has to fulfil. The user requirements serve as the basis for system implementation and test specification. In this paper we present a number of guidelines that improve the quality of user requirements. In order to be able to reason about requirements in general, we first present a structure that indicates the elements that requirements consist of and how these elements interrelate. Based on this general structure, we have performed a number of case studies in the area of requirements verification. In the requirements that we have verified we have found many inconsistencies. If the guidelines we present are obeyed during requirements construction, certain types of inconsistencies will not be present in the resulting requirements.
Better quality requirements lead to fewer errors in the other system development phases and during system changes. Title: THE NEED OF ‘INFORMATION ANALYSIS’ FOR INFORMATION SYSTEMS AND OUTLINE OF A HERMENEUTIC APPROACH TO IT Author(s): Sufen Wang, Junkang Feng and Binyong Tang Abstract: Requirement analysis for information systems development (ISD) results in a specification that should represent a central reference point for subsequent stages of the development. But this stage is characterized by informality and uncertainty. One essential element in this is how the information that is required by the agents in a domain is identified and formulated. In this paper, we will look at how well-known information systems methodologies handle it. Then we will identify a number of problems with them, based on which we will argue that an information analysis stage would seem to be needed and useful for overcoming these problems. Such a stage would require a certain perspective. We suggest adopting Hermeneutics as such a perspective. We describe how Hermeneutics might enable us to look at the mechanism whereby information is created and information flow takes place. Title: COMPUTATIONAL REPRESENTATIONS OF ACTIVITIES Author(s): Peter Bøgh Andersen Abstract: Social simulations, pervasive computing, and business process research all require principled, scientifically based methods for representing actions and activities digitally. The paper uses findings from ethnography, linguistics and philosophy to make a list of features that need representation and suggests methods for representing these features in an object-oriented framework. The paper makes a sharp distinction between the representation and the represented. The representations are not activities; they only represent them. Title: RISK PROFILING OF MONEY LAUNDERING AND TERRORISM FUNDING - PRACTICAL PROBLEMS OF CURRENT INFORMATION STRATEGIES Author(s): B. H. M. 
Custers Abstract: In order to track money laundering and terrorist funding, banks have to create risk profiles of their clients. Banks that want to do business in the United States have to implement a worldwide Know Your Customer (KYC) program, partially based on the Patriot Act. Implementing a KYC policy, however, raises several problems and does not seem to be effective or efficient in tracking money laundering and terrorist funding. Particularly because of problems regarding the identification of individuals, it is not too difficult for criminals and terrorists to avoid being noticed during these types of screening. This contribution will discuss how risk profiling strategies are implemented in practical environments and which problems this may cause. Title: COMPARISON OF FIVE ARCHITECTURE DESCRIPTION LANGUAGES ON DESIGN FOCUS, SECURITY AND STYLE Author(s): Csaba Egyhazy Abstract: With the increasing complexity and size of software systems, defining and specifying software architectures becomes an important part of the software development process. In the past, many software architectures have been described and modeled in an ad hoc and informal manner. For the past 20 years, Architecture Description Languages (ADLs) have been proposed to facilitate the description and modeling of software architectures. This paper reviews the history of ADLs, selects five of them, and compares them based on their design focus, security modeling, and styles modeling. Title: AN ADAPTIVE P2P WORKFLOW MANAGEMENT SYSTEM - FLEXIBILITY AND EXCEPTION HANDLING SUPPORT IN P2P BASED WORKFLOW Author(s): A. Aldeeb, K. Crockett and M. J. Stanton Abstract: Workflow processes are moving from long-lasting, well-defined, centralised business processes to dynamically changing, distributed business processes with many variants. Existing research concentrates on decentralisation and on adaptability, but there is more to be done on adaptability in decentralised workflow systems. 
The aim of this research is to overcome the limitations of current workflow management systems by moving from a centralised workflow to a flexible decentralised P2P workflow system. A P2P workflow management architecture is proposed which offers flexibility, exception handling and dynamic changes at both the workflow process definition and process instance levels by applying a range of AI techniques. An Exception Handling Peer (EHP) captures exceptions from the workflow peers, characterises the exceptions and applies a recovery policy. Initial prototyping of the system has been carried out using JBoss jBPM, whilst the P2P network environment of this prototype is based on Sun Microsystems’ JXTA. Title: GENERATING COLLABORATIVE WORK PROCESSES Author(s): Igor Hawryszkiewycz Abstract: The paper describes ways to support collaboration in business processes. Collaborative processes are different from predefined processes in the sense that they can change dynamically as the situation emerges. Such changes can be time consuming as they require users to continually adapt the system to changing contexts. The solution proposed here to support process evolution is to provide generic work objects and use software agents to assist users to dynamically change the process by quickly adding or changing work objects. The paper outlines a way of describing work processes in terms of generic work objects. The structure of the generic work objects is based on a metamodel, which provides the fundamental concepts to define generic objects. A prototype implementation is then described. 
Title: CP4WS - A METHOD FOR DESIGNING AND DEVELOPING SYSTEMS BASED ON WEB SERVICES Author(s): Zakaria Maamar, Djamal Benslimane and Chirine Ghedira Abstract: This paper presents CP4WS (standing for Context and Policy for Web Services), which is a context-based and policy-driven method for the design and development of Web services-based systems. Although Web services constitute an active area of research, very little has been achieved for the benefit of those who are responsible for modeling and developing such systems. To address this lack of support, we developed CP4WS, which consists of several steps ranging from user needs identification to Web services behavior specification. A running scenario that illustrates the use of CP4WS is also discussed in the paper. Title: SOFTWARE USABILITY EVALUATION - AN EMPIRICAL STUDY Author(s): Tereza G. Kirner and Alessandra V. Saraiva Abstract: This article presents an empirical study performed to evaluate the usability of software applied to the agri-livestock area. The evaluation plan was prepared based on the Goal-Question-Metric (GQM) paradigm. The research was performed in a government department of São Paulo state, in Brazil, and the subjects were professionals that give assistance to small rural properties in the planning, execution and control of agri-livestock activities, which can be supported by software systems. Usability is concerned with the suitability of the software for its users, defined in this work through the following attributes: ease of understanding, ease of learning, operability, software attractiveness and user satisfaction, and usefulness and accomplishment of the goals. The preparation and execution of the empirical study are described and the data analysis and conclusions are presented. 
The obtained results indicate a satisfactory level of usability for the considered software. Besides evaluating the software, the study aims to contribute to the detailing of a process, based on GQM, to perform usability evaluations. The work also represents a contribution to software quality improvement, primarily for those systems applied to agri-livestock tasks. Title: EXPLORATIVE UML MODELING - COMPARING THE USABILITY OF UML TOOLS Author(s): Martin Auer, Ludwig Meyer and Stefan Biffl Abstract: UML tools are used in three main ways: (1) to exploratively sketch key system components during initial project stages; (2) to manage large software systems by keeping design and implementation synchronized; and (3) to extensively document a system after implementation. Professional tools cover (3) to some extent, and attempt to cover (2), but the vast number of languages, frameworks and deployment procedures makes those tasks all but impossible. In aiming at these two goals, tools must enforce formal UML language constructs more rigorously and thus become more complicated; they can become unsuitable for (1). This paper looks at explorative modeling with the leading UML tool Rational Rose and the open-source sketching tool UMLet. We define usability measures, assess both tools’ performance for common UML design tasks, and comment on the consequences for the application of UML tools. Title: A REFERENCE MODEL FOR ENTERPRISE SECURITY - HIGH ASSURANCE ENTERPRISE SECURITY Author(s): David W. Enström, D’Arcy Walsh and Siavosh Hossendoust Abstract: This paper describes a reference architecture, in UML, for enterprise IT security. It defines a PIM level enterprise security model, but more importantly provides a cohesive structure for the definition and implementation of security services. The complete framework is described, but with a focus on subjects and protected objects, and how access is controlled. 
Multiple layers of security are defined, building upon the “defence in depth” concept, augmented with “domain” and “zone” concepts and associated protections. The dynamic use of roles is described, a concept that, along with user self-service, provides a practical approach for the management and use of roles for access control. This model may also be used as a reference architecture for the definition and integration of a set of security services that permit multiple vendor implementations to work together, and to establish the level of compliance of specific systems. Title: AN ONTOLOGY SUPPORTING THE DAILY PRACTICE REQUIREMENTS OF RADIOLOGISTS-SENOLOGISTS WITH THE STANDARD BI-RADS Author(s): Souad Demigha Abstract: This paper presents concepts and relationships allowing the development of an ontology that supports the daily practice requirements of radiologists-senologists with the standard BI-RADS (Breast Imaging Reporting and Data System). This ontology aims at describing the radiologic-senologic knowledge shared by the community of technicians, practitioners, gynecologists, radiologists, surgeons and anatomo-pathologists. It represents a unifying scope for reducing and eliminating ambiguities as well as conceptual and terminological disarray. It also ensures a shared understanding within the community concerned. It allows communication and dialogue between members of the scientific community even though they are working in different fields having different requirements and viewpoints. This ontology allowed us to obtain a conceptual model of the domain. Details concerning the development of the ontology and the generalization of the conceptual scheme that leads to the design of the conceptual model are described. 
Title: MODELLING OF MESSAGE SECURITY CONCERNS WITH UML Author(s): Farid Mehr and Ulf Schreier Abstract: Service oriented computing is increasingly accepted as a cross-disciplinary paradigm to integrate distributed application functionality through service interfaces. Integration through services as entry points for inter-organisational collaboration can be achieved by exchanging data in messages. In this architectural style, the security of sensitive exchanged data is essential. Security needs to be carefully considered during the entire life-cycle (Devanbu, 2000). Unfortunately, current UML-based modelling approaches do not support the adequate integration of message security concerns. In this paper, we systematically investigate various integration options with UML. The evaluation encompasses most of the options that are proposed today in science and industry as UML profiles. We conclude that none of those approaches is sufficient for the systematic and comprehensive treatment of message security during modelling. We therefore propose a new approach that is based on UML and very minor extensions of OCL. Title: AN EVALUATION OF CASE HANDLING SYSTEMS FOR PRODUCT BASED WORKFLOW DESIGN Author(s): Irene Vanderfeesten, Hajo A. Reijers and Wil M. P. van der Aalst Abstract: Case handling systems offer a solution to the lack of flexibility and adaptability in workflow management systems. Because they are data-driven, they potentially provide good support for Product Based Workflow Design (PBWD). In this paper we investigate to what degree current case handling systems (FLOWer and Activity Manager) are able to support PBWD. This is done by elaborating the design process of a case from industry in both systems. From this evaluation we conclude that current case handling systems are not yet completely ready for supporting PBWD. Therefore, we recognize that better tool support is needed to make PBWD more suitable for practical use. 
Title: DESIGNING AN E-BASED REAL TIME QUALITY CONTROL INFORMATION SYSTEM FOR DISTRIBUTED MANUFACTURING SHOPS Author(s): Iraj Mahdavi, Babak Shirazi, Maghsud Solimanpur and Shahram Ghobadi Abstract: Advanced manufacturing shops need to be developed for an enterprise to survive in the increasingly competitive global market. The statistical e-based quality control approach combines statistical quality analyses and reporting capabilities with web technology to deliver process optimization solutions. In this paper we develop a structured profile for a statistical e-based quality information system to provide the capacity to access required data from distributed manufacturing shops. It helps enterprises to develop customized quality information systems, to create and distribute reports via the internet, and to provide real-time display of quality profiles for process monitoring. The statistical e-based quality profile is designed to bridge the gap between raw data and genuine quality improvement efforts by providing a powerful web-based solution for real-time quality processes. A prototype information system (eQIS-DMS) has also been developed, and the results indicate that the quality information system can control distributed manufacturing systems and improve their efficiency. Title: AN EVOLUTIONARY APPROACH FOR BUSINESS PROCESS REDESIGN - TOWARDS AN INTELLIGENT SYSTEM Author(s): Mariska Netjes, Selma Limam Mansar, Hajo A. Reijers and Wil M. P. van der Aalst Abstract: Although extensive literature on BPR is available, there is still a lack of concrete guidance on actually changing processes for the better. It is our goal to provide a redesign approach which describes and supports the steps needed to derive a better performing redesign from an existing process. 
In this paper we present an evolutionary approach towards business process redesign and explain its first three steps: 1) modelling the existing process, 2) computing process measures, and 3) evaluating condition statements to find applicable redesign “best practices”. We show the applicability of these steps using an example process and illustrate the remaining steps. Our approach has a formal basis to make it suitable for automation. Title: REPRESENTING AUTHOR’S INTENTIONS OF SCIENTIFIC DOCUMENTS Author(s): Kanso Hassan, Soulé-Dupuy Chantal and Tazi Said Abstract: The existing structures of documents are not sufficient for today’s users’ needs in terms of search and processing. The Intentional Structure (IS) is a model that maps the author’s intentions to the segments of documents. It is defined to enhance document processing in terms of goals, means and reasons. The main objective of this work is to provide a methodology for recognizing the communicative intentions associated with the segments of scientific documents. This article focuses on the representational aspect of the author’s intention, by providing a graphical representation of intentions. Title: A SEMIOTIC APPROACH FOR FLEXIBLE E-GOVERNMENT SERVICE ORIENTED SYSTEMS Author(s): Rodrigo Bonacin, M. Cecilia C. Baranauskas and Thiago Medeiros dos Santos Abstract: E-Government is a multidisciplinary field which addresses many issues ranging from the social sciences to the technological ones. One of the big challenges of the e-Government field is the underlying complexity to elicit and model requirements. In practice it is quite hard to encompass the requirements of all citizens or organizations involved in the project. To deal with this challenge we propose a flexible distributed systems approach, which combines tailoring concepts with Organisational Semiotics methods in an SOA-based architecture. 
The proposed approach is based on two Organisational Semiotics methods: the Semantic Analysis, which delivers a stable ontology of the context, and the Norm Analysis, which can be used to specify the volatile individual and collective requirements. The paper shows how norm changes in high-level interfaces can affect different components of the software architecture. The architecture is exercised in a proof of concept for an e-Government project. Title: DESIGNING AN APPROPRIATE INFORMATION SYSTEMS DEVELOPMENT METHODOLOGY FOR DIFFERENT SITUATIONS Author(s): David Avison and Jan Pries-Heje Abstract: The number of information systems development methodologies has proliferated and practitioners and researchers alike have struggled to select a ‘one best’ approach for all applications. Several books and consultants have claimed to have found this ‘philosopher’s stone’, but there is no single methodology that will work for all development situations. The question then arises: ‘when to use which methodology?’ To address this question we used the design research approach to develop a radar diagram consisting of eight dimensions. Using three action research cycles, we attempt to validate our design in three projects that took place in a large administrative organization and elsewhere with groups of IT project managers. Title: MODEL-DRIVEN DESIGN OF CONTEXT-AWARE APPLICATIONS Author(s): Boris Shishkov and Marten van Sinderen Abstract: In many cases, in order to be effective, applications need to allow sensitivity to context changes. However, this implies additional complexity associated with the need for adaptability. Being adaptable means having the capability of capturing context, interpreting it and (based on this) reacting to it. Hence, we envision three ‘musts’ that, in combination, are especially relevant to the design of context-aware applications. 
Firstly, at the enterprise (business) modeling level, it is considered crucial that the different possible context states, which (in turn) correspond to certain desirable behaviors, can be properly captured and modeled. Secondly, the dependencies between the two, namely between states and behaviors, must be known. And finally, business needs are to be aligned with application solutions. In this work, we address the mentioned challenges by approaching the notion of context and extending from this perspective a previously reported business-software alignment approach. We illustrate our achieved results by means of a small example. It is expected that this research contribution will be useful as an additional result concerning the alignment between business modeling and software design. Title: BUILDING, AND LOSING, CONSUMER TRUST IN B2C E-BUSINESS Author(s): Phil Joyce and Graham Winch Abstract: Trust is emerging as a key element of any successful offering for B2C eBusiness. This has prompted practitioners and academics to develop better models of understanding of trust in the on-line environment. Many trust models are developed either from a firm basis in traditional research disciplines or from attempts to be multi-disciplinary. However, most, if not all, such models are essentially descriptive and static, while the building and losing of trust is a dynamic process. This paper brings some new insights to this dynamic process by presenting a four-element model that offers pictorial representations of how different contributing factors drive trust to be gained or lost. Hence, this approach offers practical support in strategy formation for developing and understanding consumer trust in B2C eBusiness. 
Title: RELEVANCE FEEDBACK AS AN INDICATOR TO SELECT THE BEST SEARCH ENGINE: EVALUATION ON TREC DATA Author(s): Gilles Hubert and Josiane Mothe Abstract: This paper explores information retrieval system variability and takes advantage of the fact that two systems can retrieve different documents for a given query. More precisely, our approach is based on data fusion (fusion of system results), taking into account the local performance of each system. Our method considers the relevance of the very first documents retrieved by different systems and from this information selects the system that will perform the retrieval for the user. We found that this principle improves performance by about 9%. Evaluation is based on TREC ad-hoc collections (TREC 3, 5, 6 and 7) and on participant runs. It considers the two and five best systems that participated in TREC in the corresponding year. Title: MAKING INCOMPLETE INFORMATION VISIBLE IN WORKFLOW SYSTEMS Author(s): Georg Peters and Roger Tagg Abstract: After a bumpy start in the 1990s, workflow systems have recently regained the focus of attention. Today they are considered a crucial part of the recently introduced middleware-based ERP systems. One of the central objectives and hopes for this technology is to make companies more process-orientated and flexible, to keep up with the increasing speed of change of a global economy. This requires sophisticated instruments to optimally manage workflow systems, e.g. to deal with incomplete information effectively. In this paper we investigate the potential of rough set theory to make missing or incomplete information visible in workflow systems. Title: A FRAMEWORK FOR ANALYSING IT GOVERNANCE APPROACHES Author(s): Bruno Claudepierre and Selmin Nurcan Abstract: The Enron and Worldcom scandals showed the weaknesses of organisations in their ability to prove the reliability of their financial information. 
Corporate governance and, related to our purpose, information technology (IT) governance ensure that enterprise strategies are properly applied. Information systems (IS) play a role of support and information processing. While IS activities are performed, information is created, updated or removed: it is advisable to set up good IT process management and to be able to evaluate the effectiveness and efficiency of the IS. Our study considers, and situates in a structural framework, the contributions of IT governance approaches (e.g. COBIT and ITIL) to an IS engineering method. Understanding these contributions prepares the ground for innovative research projects whose objective is to work out an engineering method for building governable IS. Title: THE BUSINESS PROCESS KNOWLEDGE FRAMEWORK Author(s): Janez Hrastnik, Jorge Cardoso and Frank Kappe Abstract: Organizations today are confronted with huge problems in following and implementing their own business process models. On the one hand, due to a lack of planning and requirements analysis, process models are often unfeasible or difficult to execute in practice. On the other hand, process designers often ignore the importance of studying the different roles and their perspectives on a business process when constructing a process model. This leads to the deployment of process models that do not “satisfy” process stakeholders. This paper addresses those two problems and proposes a business process knowledge framework as a possible solution. Our framework for business process knowledge management integrates three elements that we consider fundamental to correctly model business processes: stakeholders’ perspectives, knowledge types and views. It is shown how the business process framework can contribute to the improvement of the process knowledge acquisition phase of process design, and how it can support process knowledge communication to stakeholders. 
Finally, we argue that the latest developments in the Semantic Web are an interesting solution to support the integration of information and knowledge represented within our framework. Title: AN INSTRUMENT FOR THE DEVELOPMENT OF THE ENTERPRISE ARCHITECTURE PRACTICE Author(s): Marlies van Steenbergen, Martin van den Berg and Sjaak Brinkkemper Abstract: In this paper we introduce an architecture maturity model for the domain of enterprise architecture. The model differs from other existing models in that it departs from the standard 5-level approach. It distinguishes 18 factors, called key areas, which are relevant to developing an architectural practice. Each key area has its own maturity development path that is balanced against the maturity development paths of the other key areas. Two real-life case studies are presented to illustrate the use of the model. Usage of the model in these cases shows that the model delivers recognizable results, that the results can be traced back to the basic approach to architecture taken by the organizations investigated, and that the key areas chosen bear relevance to the architectural practice of the organizations. Title: A FRAMEWORK FOR ANALYZING BUSINESS/INFORMATION SYSTEM ALIGNMENT REQUIREMENTS Author(s): Islem Gmati and Selmin Nurcan Abstract: In order to provide a competitive advantage to the enterprise, business and information system (IS) strategies need to be aligned. Achieving strategic alignment continues to be a major concern for business executives and becomes more difficult to handle in an evolving environment. The literature provides conceptual frameworks dividing a company representation into independent, interacting layers and aiming at strategic alignment. In this paper, we describe ten of these works. 
Aiming at a better understanding of Business/IS alignment requirements, we propose an analysis framework in which we position the studied approaches, and we bring out the most important results related to the strengths and weaknesses of these approaches. Title: KEY-PROBLEMS AND MULTI-SCREEN VIEW: A FRAMEWORK TO PERFORM THE ALIGNMENT OF MANUFACTURING IS Author(s): Virginie Goepp and François Kiefer Abstract: In today’s highly competitive environment, the complete alignment of information systems (IS), that is to say not only with the strategy but also with the environment and with uncertain evolution, is crucial. For manufacturing IS these alignments take specific shapes due to the heterogeneity of the facilities to be integrated and the variety of the stakeholders, who are not IS specialists. The state of the art concerning IS alignment shows that the existing frameworks mainly concern managers and do not fit the manufacturing IS context. On the one hand, B-SCP tends to operationalize these frameworks by coupling them to requirements engineering. However, only the alignment with the strategy is tackled. On the other hand, the dialectical analysis based approach of manufacturing IS development tries to integrate multiple alignments through the “multi-screen” view tool. However, the underlying concepts of this tool remain fuzzy. Therefore, this paper addresses the formalisation of the “multi-screen” view, in order to work out a framework for analysing mechanisms of multiple alignments of manufacturing IS. To do this, the contributions of coupling dialectics and the “multi-screen” view to manufacturing IS are detailed through UML class diagrams. Moreover, to better grasp these contributions, its similarities and differences with B-SCP are outlined. 
Title: MULTIDIMENSIONAL REFERENCE MODELS FOR DATA WAREHOUSE DEVELOPMENT Author(s): Matthias Goeken and Ralf Knackstedt Abstract: In the area of Data Warehousing the importance of conceptual modelling increases as it gains the status of a critical success factor. Nevertheless, the application of conceptual modelling in practice often remains undone, due to time and cost restrictions. Reference models seem to be a suitable solution for this problem as they provide generic models which can be easily adapted to specific problems and thus decrease the modelling outlay. This paper identifies the requirements for multidimensional modelling techniques whose fulfilment is a prerequisite for the construction of reference models. Referring to the ME/RM, the concrete implementation of these requirements will be illustrated. Title: METHODOLOGY FOR PERFORMANCE MEASUREMENT SYSTEMS IMPLEMENTATION IN SMALL AND MEDIUM-SIZED ENTERPRISES Author(s): Matilla Magali and Chalmeta Ricardo Abstract: Performance Measurement Systems enable enterprises to evaluate the efficiency and effectiveness of their decisions and operations by means of a set of indicators related to the vision and strategy of the company. Nevertheless, and despite the important role these systems play in improving competitiveness, their implementation in small and medium-sized enterprises is scarce. In this paper we describe the PMS-IRIS methodology for designing and implementing performance measurement systems in small and medium-sized enterprises. The methodology embraces activities that concern the planning of the project, the design of the strategy, the definition of the system of indicators, process improvement, monitoring, and the design of the computer system required to support the implementation of a performance measurement system. 
Title: A SYSTEM ON WEB-BASED CONTINUOUS SOFTWARE PROCESS ASSESSMENT (CONTINUOUS SPA) Author(s): Xian Chen, Paul Sorenson and John Willson Abstract: Software process assessments are now recognized as important quality improvement activities in the software industry. Although there are many types of assessment applications, they are all generally regarded as infrequent, expensive and disruptive for the workplace. Hence, it is advantageous to find alternative ways to assess the current status of software processes and to monitor the implementation of improvement activities. In this paper, we focus on process capability monitoring and continuous process improvement. A web-based prototype system is developed to perform a practical study on continuous software process assessment in one process area: project management. The study results are positive and show that features such as global management, well-defined responsibility and visualization may help improve the efficiency and continuity of software process management. Title: A THEORETICAL MODEL TO EXPLAIN EFFECTS OF INFORMATION QUALITY AWARENESS ON DECISION MAKING Author(s): Mouzhi Ge and Markus Helfert Abstract: Making high-quality decisions depends upon the quality of the information that is used to support the decision. In most cases, decision makers are not aware of information quality issues. Decision makers frequently believe the information they use is of high quality. However, the decision-relevant information is often inaccurate and incomplete. As the intensity of decision making increases, information quality awareness is becoming important. In order to analyse the effects that information quality awareness has on decision making, in this paper we propose a theoretical model to address the relationship between information quality awareness and decision quality. Our results show the effects of information quality awareness on decision making and the importance of building an IQ culture in organizations. 
Title: DEVELOPING EXECUTABLE MODELS OF BUSINESS SYSTEMS Author(s): Joseph Barjis Abstract: Traditionally, business process models are based on graphical artifacts that do not lend themselves to model checking or simulation, e.g. any flowchart-like representation or UML diagrams. To check whether business process models are syntactically correct, the models are either translated to other diagrams with formal semantics or the validation is carried out manually. This approach poses two issues: first, models that do not lend themselves to execution (simulation) will hardly allow thorough insight into the dynamic behavior of the system under consideration; second, while manual checking of small models may not be too difficult, it is almost impossible for complex models. In this paper we investigate two research questions that result in a method allowing one to build executable business process models based on the formal semantics of Petri nets. The proposed method is theoretically based on the Transaction Concept. The two questions further studied in this paper concern the graphical extension of Petri nets towards business process modeling, and the development of a framework (guidelines) for applying the proposed method. As a marginal contribution, this paper introduces a compact modeling approach. Title: A NEW APPROACH FOR WORKFLOW PROCESS DELTA ANALYSIS BASED ON SYN-NET Author(s): Xingqi Huang, Wen Zhao and Shikun Zhang Abstract: Many of today's information systems are driven by explicit process models. Creating a workflow design is a complicated process and typically there are discrepancies between the actual workflow processes and the processes as perceived by the management. Delta analysis aims at improving this by comparing process models obtained by process mining from event logs with predefined ones, to measure business alignment of the real behaviour of an information system with the expected behaviour. 
Syn-net is a new workflow model based on Petri nets, founded on the concept of the synchronizer and suggesting a three-layer perspective of the workflow process. In this paper, we propose a new delta analysis approach based on the reduction rules of Syn-net, to examine the discrepancies between the discovered processes and the predefined ones.

Title: ENTERPRISE SYSTEMS CONFIGURATION AS AN INFORMATION LOGISTICS PROCESS - A STUDY Author(s): Mats Apelkrans and Anne Håkansson Abstract: In this paper we suggest using rule-based descriptions of customers’ requirements for Enterprise Systems implementing Information Logistics. The rules are developed from the users’ requirements and inserted as schedules into the Enterprise System. The output from testing these rules is a list of modules and parameter settings for configuring the system. By using rules, we can, at least partly, automate the configuration process of traversing the several modules and thousands of parameters in an Enterprise System. We can select the modules and the parameters that meet the customer’s requirements. These selected modules and parameters are presented visually through Unified Modeling Language diagrams, to support the user’s investigation and then to configure the system either manually or automatically. Every attempt to match a customer’s requirement to the contents of the knowledge base within the Enterprise System can be thought of as an Information Logistics Process. The output from such a process must be examined by the user, which can give rise to a new call to the Information Logistics Process. In other words, the configuration work is done through a dialogue between the customer and the knowledge base of the Enterprise System.

Title: USE CASE BASED REQUIREMENTS VERIFICATION - VERIFYING THE CONSISTENCY BETWEEN USE CASES AND ASSERTIONS Author(s): Stéphane S. Somé and Divya K. Nair Abstract: Use cases and operations are complementary requirements artefacts.
A use case refers to operations and imposes their sequencing. Use case templates usually include assertions such as preconditions, postconditions and invariants. Similarly, operations are specified using contracts consisting of preconditions and postconditions. In this paper, we present an approach aimed at checking the consistency of each description against the other. We attempt to answer questions such as the following. Is the use case postcondition guaranteed by the operations? Are all operations possible according to their preconditions? We provide answers to these questions by deriving state predicates corresponding to each step in a use case, and by showing the satisfaction of assertions according to these predicates.

Title: CONFLICT RESOLUTION IN COLLABORATIVE NETWORK ENTERPRISES Author(s): Sara Alves da Silva, Patrícia Macedo and Pedro Antunes Abstract: Enterprises nowadays face increasing demands for flexibility and customer orientation. As enterprises react to this challenge, they intensify their engagement in collaborative networks. Recent studies have shown that these networks must share goals, have some level of mutual trust, and agree on some practices and values. Value systems are considered as the ordering and prioritization of the ethical and ideological values held by collaborative networks. Conciliating different values and priorities may depend on a mediation process. From an information systems point of view, the mediation process is highly complex because of the informal and tacit nature of value systems. In this paper we propose using storytelling, an old technique used to reconstruct past events, to support conflict resolution. The paper presents an analysis of conflict resolution in collaborative network enterprises supported by the storytelling technique.

Title: ON GROUPING OF ACTIVITIES INSTANCES IN WORKFLOW MANAGEMENT SYSTEMS Author(s): Dat C. Ma, Joe Y.-C. Lin and Maria E. Orlowska Abstract: Current research in the flexibility of workflow management systems covers many aspects of this technology. The focus of this paper is primarily on the practical capabilities of workflow management systems in handling preferred work practice while dealing with many short-duration activities. It is motivated by the requirement of merging or grouping work items by one performer to achieve work performance enhancements by avoiding unnecessary communication with the system while still executing the required activities. The paper proposes a new function to group activity instances for a given process, and investigates the impact, benefits, and potential implementation of such extended functionality.

Title: EML: A TREE OVERLAY-BASED VISUAL LANGUAGE FOR BUSINESS PROCESS MODELLING Author(s): Lei Li, John Hosking and John Grundy Abstract: Visual business process modelling can fulfil an important role in enabling high-level specification of system interactions, improving system integration and supporting performance analysis. Existing modelling approaches typically use a workflow-based method. Cobweb and labyrinth problems appear quickly when this type of notation is used to model a complex enterprise system, with users having to deal with either very complex diagrams or many implicit cross-diagram relationships. In contrast, a tree-based presentation can be very efficient for handling visual relationships. We present an overview of EML (Enterprise Modelling Language), a novel tree overlay-based visual specification for enterprise process modelling, and its support tool. Its highlight is its flexibility in modelling business processes using different layers. A service-oriented tree structure represents the system's functional architecture. Business process modelling is constructed as an overlay on top of this service tree. By using a multi-layer structure, an enterprise system can be modelled with a variety of early aspects to satisfy design requirements.
An Eclipse-based software tool, MaramaEML, has been developed to edit EML diagrams; it integrates with existing modelling languages such as BPMN and supports automatic generation of BPEL code.

Title: MODELLING DATA TRANSFORMATION PROCESSES USING HIGH-LEVEL PETRI NETS Author(s): Li Peng Abstract: Data heterogeneity is one of the key problems in integrating multiple data sources, data warehousing, legacy data migration, etc. To integrate databases or information systems, the data need to be transformed from a source representation into a target representation. The foundation for developing efficient data transformation tools and automating data transformation processes is a data transformation process model. In this paper, I propose a CPN-based data transformation process model. This model provides rich constructs to represent various data structures, transformation functions and rules, and allows parallelization, composition and decomposition of data transformations. Furthermore, as an extension of the model, the CPNs are combined with high-order Petri nets, so that the components of the CPNs can be reused. This improves the efficiency of data transformations.

Title: CODE INSPECTION - A REVIEW Author(s): Robson Ytallo Silva de Oliveira, Paula Gonçalves Ferreira, Alexandre Alvaro, Eduardo Santana de Almeida and Silvio Romero de Lemos Meira Abstract: The software inspection process is generally considered a software engineering best practice. For a long time, code inspection has had the goals of finding and fixing defects as soon as possible. For this reason, the code inspection technique is suggested for use in a software reuse process in order to improve the quality of the assets developed and reused. Thus, the code becomes easier to understand and change, improving its maintainability, minimizing redundancies and improving language proficiency, safety and portability.
With the aim of analyzing this area, this paper presents a survey of code inspection research.

Title: SERVICE ORIENTED REAL-TIME ENTERPRISE CONTENT MANAGEMENT - IN ASSOCIATION WITH BUSINESS PROCESS INTEGRATION Author(s): Vikas S. Shah Abstract: Organizations' distributed and evolving enterprises demand an integrated approach providing consolidated control and secure information sharing among users and applications in support of business processes. Businesses have faced considerable challenges in setting up an integration infrastructure within a specific business context. A recent industry trend is to investigate rapid and cost-effective BPI platforms with indisputable business benefits in terms of Real-Time Enterprise Content Management (RT-ECM). RT-ECM ensures consistency among users and infrastructure, and provides secure access to necessary and valid content in real time. Modern RT-ECM architectures are focused on assisting content sharing across multiple resources as well as enterprise applications. SOA, a distributed computing environment, is poised at the intersection of business and technology. SOA enables enterprises to adapt seamlessly and rapidly to a changing environment. The service-oriented RT-ECM approach offers an integration-specific, flexible, and featured BPI platform. The contribution of this paper is an RT-ECM architecture framework illustrating the most prominent technical challenges during the establishment of business-process-perspective integration and time-sensitive content flow management. Real-time content management engines, business process engines, and service provisioning are at the centre of the presented framework. The initiative behind the research effort is to capture and estimate generic aspects of BPI such that organizations may focus exclusively on unique business characteristics. Eventually, the paper discusses advantages and consequences of service-oriented RT-ECM besides outstanding issues for further research.
Title: A MODELING LANGUAGE FOR COLLABORATIVE LEARNING EDUCATIONAL UNITS - SUPPORTING THE COORDINATION OF COLLABORATIVE ACTIVITIES Author(s): Manuel Caeiro-Rodríguez Abstract: This paper introduces a modeling language to support the computational modeling of collaborative learning educational units. The languages supporting the computational modeling of educational units are named Educational Modeling Languages (EMLs). EMLs have been proposed to facilitate the development of complex and large e-learning applications. The introduced language is proposed as an EML specially oriented towards collaborative learning. A main goal is to enable the modeling of the variety of ways in which human interaction can be supported (e.g. well-structured and ill-structured, synchronous and asynchronous, strict coordination and free collaboration). To do so, a separation-of-concerns approach is followed. The proposal, named Perspective-oriented EML (PoEML), involves several parts (named perspectives) in which all the modeling issues are arranged and separated. Each perspective focuses on a certain concern, making it possible to centre attention and effort on it while abstracting from the concerns of other perspectives. The paper introduces the ideas and constructs of the main PoEML perspectives towards the modeling of the variety of forms of collaboration support.

Title: GRID WORKFLOW SCHEDULING WITH TEMPORAL DECOMPOSITION Author(s): Fei Long and Hung Keng Pung Abstract: Grid workflow scheduling, an NP-hard problem, is an important system function in current Grid systems. In this paper, we propose a new scheduling method, "temporal decomposition", which divides a whole grid workflow into sub-workflows. By dividing a large problem (workflow) into smaller problems (sub-workflows), temporal decomposition achieves much lower computational complexity.
Another motivation for the design of temporal decomposition is the availability of grid resources, which varies dynamically with time. Furthermore, we propose an efficient algorithm for scheduling sub-workflows. Numerical results show that our proposed scheme is more efficient than a well-known existing grid workflow scheduling method.

Title: VISUALISATION AND ANALYSIS OF RELATIONAL DATA BY CONSIDERING THE TEMPORAL DIMENSION Author(s): Eloïse Loubier and Bernard Dousset Abstract: Visualization based on graph drawing allows the identification and evaluation of past and present structures between actors and concepts, as well as the deduction of future ones. VisuGraph is developed in order to offer users the visualization and interactive classification of relational data. We propose to complete this prototype with a morphing algorithm which animates the representation fluidly between different time periods, emphasizing major elements and significant tendencies.

Title: META MODEL FOR TRACING IMPACT OF CONTEXT INFORMATION EVOLUTION IN WEB-BASED WORKFLOWS Author(s): Jeewani Anupama Ginige and Athula Ginige Abstract: The environment that shapes a business process consists of various regulations, policies, guidelines, goals, values, etc., both external and internal to the organisation. In today's global and competitive business world, the constituents of this process environment change rapidly. This Context Information (CI) evolution forces organisational processes to change. When processes are supported through web-based workflows, CI evolutions need to be reflected in already automated systems, via process models. Current process modelling techniques are focused on capturing implementation aspects only. As a result, these models fail to encapsulate the CI that associates process elements with their environment.
This creates inconsistencies and errors when trying to change the implemented system to reflect high-level CI changes. To address this problem, we propose a model which allows tracing high-level CI changes down to the implementation-level artefacts. Such a model needs to map the complex correlation between context information (CIs), all process elements (objects, participants, actions and process flow rules), and various web-based workflow artefacts (data repository, function code and UIs) to types of changes (modify, add and delete). This holistic view makes the proposed model stand out among other research work in the process evolution area.

Title: AN ESTIMATION OF ATTACK SURFACE TO EVALUATE NETWORK (IN)SECURITY Author(s): Andrea Atzeni and Antonio Lioy Abstract: A measurement system is a natural requirement for every scientific work; without one, extremely subjective and incomparable results may be claimed. In spite of this, measurement methods dealing with security are unusual in practice, leaving security assessment in the hands of security experts' judgment, with poor formal argumentation on the security level of the underlying system, and with the consequent difficulty of distinguishing among security alternatives or justifying possible security changes or improvements. Since network security is an aggregate concept, composed of many different aspects, in this work we focus on a limited but important set of security indicators, suitable for estimating the attack surface a system exposes, thus introducing a simple and objective metric for a fast evaluation of an important security facet.

Title: OPTING FOR INNOVATION IN MOBILE APPLICATIONS Author(s): Jens H. Hosbond, Peter A. Nielsen and Ivan Aaen Abstract: In this paper we are concerned with innovation in the development of mobile applications. In particular, we address how we may come to think systematically about innovative aspects of mobile applications.
We suggest that there is not enough support for this in the mobile systems literature, and we hence propose a framework that supports thinking about the possible innovative features of a mobile application in a systemic and systematic way. The framework is inspired by the theory of scenario planning. In this framework we see the mobile social arrangements of node, dyad, and group as fundamental units of analysis. We apply the framework to a case where the mobile users are truck drivers in a long-distance haulage business. Our use of the framework illustrates how we can arrive at a consistent and systemic view of a possible scenario for innovative mobile applications. We continue with a discussion of to what extent, and in which ways, the framework gives rise to innovative thinking, by relating it to a common theory of types of innovation and innovation processes.

Title: REDUCING REQUIREMENTS TO EIS SPECIFICATIONS GAP USING RM-ODP ENTERPRISE VIEWPOINT Author(s): Christophe Addinquy and Bruno Traverson Abstract: As a reference model for distributed systems, the RM-ODP (Open Distributed Processing) standard prescribes architectural viewpoint specifications, but does not address traceability with requirements expression. In this article, we propose a three-layer approach to requirements modeling, from the system's high-level goals down to detailed business rules and non-functional requirements. This approach is built on top of a well-recognized requirements process and connects to key elements of the RM-ODP viewpoints.

Title: A FRAMEWORK FOR QUALITY EVALUATION IN DATA INTEGRATION SYSTEMS Author(s): J. Akoka, L. Berti-Équille, O. Boucelma, M. Bouzeghoub, I. Comyn-Wattiau, M. Cosquer, V. Goasdoué-Thion, Z. Kedad, S. Nugier, V. Peralta and S. Sisaid-Cherfi Abstract: Ensuring and maximizing the quality and integrity of information is a crucial process for today's enterprise information systems.
It requires a clear understanding of the interdependencies between the dimensions characterizing the quality of data (QoD), the quality of the conceptual data model (QoM) of the database, the keystone of the EIS, and the quality of data management and integration processes (QoP). The improvement of one quality dimension (such as data accuracy or model expressiveness) may have negative consequences on other quality dimensions (e.g., freshness or completeness of data). In this paper we briefly present a framework, called QUADRIS, for adopting a quality improvement strategy on one or many dimensions of QoD or QoM while considering the collateral effects on the other interdependent quality dimensions. We also present the scenarios of our ongoing validations on a CRM EIS.

Title: BUSINESS PROCESS VALIDATION: TESTING BEFORE DESIGNING Author(s): Cornelis G.F. [Kees] Ampt Abstract: Rather than waiting to test until (nearly) the end of a software project, with the subsequent need to redesign major parts, the Business Process Validation (BPV) method aims at systematic testing from the start of a project in the requirements phase, up to the final delivery. The method embraces three phases: a Transformation Model, a Service Model, and an IT/AO Model. A prototype of a software tool to automate the construction of classification trees, the core of the Service Model, based on the initial wishes as laid down in the Transformation Model, has been developed as part of a Masters thesis project. First real-life tests of the prototype showed no significant time reduction for small projects; for larger ones, however, a time reduction of 50% is achieved compared to developing the classification trees by hand, while for all projects several automated consistency checks can be performed.
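The classification trees at the core of the Service Model follow a well-known test-design idea: each parameter of a process is partitioned into classes, and a test case picks one class per parameter. The sketch below illustrates only that general idea; the parameter names are invented, and the described tool additionally applies consistency checks and pruning that this sketch omits.

```python
from itertools import product

# Generic classification-tree sketch: each classification (parameter)
# is partitioned into disjoint classes. All names are illustrative.
classifications = {
    "payment_method": ["card", "invoice"],
    "customer_type": ["new", "returning"],
    "order_size": ["small", "large"],
}

def derive_test_cases(tree):
    """Enumerate the full combination table; a real tool would prune
    infeasible combinations via consistency rules."""
    names = list(tree)
    return [dict(zip(names, combo))
            for combo in product(*(tree[n] for n in names))]

cases = derive_test_cases(classifications)
print(len(cases))  # 2 * 2 * 2 = 8 combinations
```

Even this naive enumeration shows why tool support matters: the number of combinations grows multiplicatively with each added classification, which is consistent with the reported gains on larger projects.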
Title: EXTENDING THE EPC AND THE BPMN WITH BUSINESS PROCESS GOALS AND PERFORMANCE MEASURES Author(s): Birgit Korherr and Beate List Abstract: The Event-Driven Process Chain (EPC) and the Business Process Modeling Notation (BPMN) are designed for modelling business processes, but do not yet include any means for modelling process goals and their measures, and they do not have a published metamodel. We derive a metamodel for both languages, and extend the EPC and the BPMN with process goals and performance measures to make them conceptually visible. The extensions based on the metamodels are tested with example business processes.

Title: EXTENDING BUSINESS PROCESS MODELING TOOLS WITH WORKFLOW PATTERN REUSE Author(s): Lucinéia Heloisa Thom, Jean Michel Lau, Cirano Iochpe and Jan Mendling Abstract: Because of their reuse advantages, workflow patterns (e.g., control flow patterns, resource patterns, activity patterns) are increasingly attracting the interest of both researchers and vendors. However, current workflow modeling tools do not provide functionality that enables users to define, query, and reuse workflow patterns properly. In this paper we gather a set of requirements for process modeling tools aiming to support pattern reuse in a direct way. In order to demonstrate the feasibility of these requirements, we present a corresponding implementation project that extends the process modeling tool EPC Tools with pattern reuse functionality.

Title: A CHANGE STRATEGY FOR ORGANISATIONAL SECURITY: THE ROLE OF CRITICAL SUCCESS FACTORS Author(s): Sue Foster, Kate Lazarenko, Paul Hawking and Andrew Stein Abstract: The focus for any organization should be on securing the critical components that are important to business survival. This can be accomplished by adopting technical and non-technical approaches. The non-technical approaches, however, tend to be more problematic and include changing the way employees perceive enterprise security.
People issues have always posed problems when implementing new systems, and an enterprise security strategy is no exception. The identification and adoption of critical success factors to support a sound security strategy could provide a successful security outcome. In this paper a security framework is developed from the literature, and each part of the framework provides the opportunity to identify critical success factors. It is contended that by using this framework organizations are able to build a strong security base for their enterprise.

Title: BUSINESS PROCESS PRIORISATION WITH MULTICRITERIA METHODS: CASE OF BUSINESS PROCESS REENGINEERING Author(s): Elena Kornyshova and Camille Salinesi Abstract: Business process (BP) engineering is used nowadays in many methods, techniques and tools. In domains such as change management, enterprise architecture, security analysis, or performance analysis, one particular concern is the identification of key BPs, i.e. the BPs that should be dealt with first. In practice, the number of BPs is often very large, which justifies the creation of a priorisation mechanism. However, the number of approaches available to prioritise BPs specifically is very limited. This paper presents a comparison of multicriteria (MC) decision-making methods, and an approach to guide the selection and application of the MC method found to be the most appropriate for BP priorisation. The approach is illustrated with the case of selecting and applying a BP priorisation method with a view to BP reengineering.

Title: A FLEXIBLE PERSPECTIVE FOR SOFTWARE PROCESSES - SUPPORTING FLEXIBILITY IN THE SOFTWARE PROCESS ENGINEERING METAMODEL Author(s): Ricardo Martinho, Dulce Domingos and João Varajão Abstract: The lack of flexibility in software process modeling is an important drawback, pointed out as the main cause of the low adoption of Process-centered Software Engineering Environments (PSEEs).
The Object Management Group (OMG) has been working on the Software Process Engineering Metamodel (SPEM) in order to provide a uniform object-oriented metamodel for building software process models, such as the Rational Unified Process (RUP). Nevertheless, SPEM neither takes flexibility aspects into account nor provides a flexibility metamodel for derived software process models. PSEEs that comply with the SPEM specification cannot benefit from a uniform flexibility metamodel, which is essential for building process models that capture complex software development processes, and also for managing the evolution of these models and their related instances. This paper proposes a flexibility metamodel for building flexible SPEM-based software process models. This metamodel has its foundations in several flexibility aspects previously identified and classified. SPEM-compliant PSEEs that implement the proposed flexibility metamodel will provide the ability to build flexible software process models, and to associate distinct flexibility mechanisms with their corresponding components.

Title: A COMPARATIVE STUDY BETWEEN WEB SERVICE AND GRID SERVICE DEVELOPMENTS IN A MDA FRAMEWORK Author(s): Marcos López Sanz, Valeria de Castro, Esperanza Marcos and José Luís Bosque Abstract: The application of the MDA approach to the development of service-oriented systems facilitates system migration across different platforms. The specific features of each platform are reflected at the PSM level of MDA. In this paper a comparison between the development of systems based on Web services and the development of systems based on Grid services is presented. The comparative study is carried out through a case study implemented on both a standard Web Service platform and a Grid platform based on the Globus Toolkit 4 middleware.
After the study we conclude that a subdivision of the MDA PSM level into two layers is needed: an upper layer with the characteristics shared by any service-based platform (a WSDL model and a model of the service code), and a lower layer with all the elements required to deploy the services successfully on the concrete execution platform.

Title: CONVERTING RELATIONAL DATABASE INTO OWL ONTOLOGY BY MULTI-WAY SEMANTICS EXTRACTION Author(s): Sohee Jang, Insuk Park, Hoyun Cho and Soon Joo Hyun Abstract: The Semantic Web provides means to share the well-defined meaning of terms with semantically annotated information. In the current Web, most Web applications generate Web content dynamically at the time of a user request from underlying relational databases. To bring relational data into the Semantic Web environment, it has to be transformed into ontology form. In this paper, we propose a Semantic Web technique to convert a relational database into an OWL ontology using a multi-way semantics extraction technique. Extracted from E/R modeling components, schema descriptions and stored data, the generated ontology will provide application developers with rich semantics, so as to quickly build a knowledge base for advanced Semantic Web services. Extracting the semantic information from traditional databases will provide enterprises with more opportunities for many value-added services.

Title: BUSINESS PROCESS MODEL TRANSFORMATION ISSUES - THE TOP 7 ADVERSARIES ENCOUNTERED AT DEFINING MODEL TRANSFORMATIONS Author(s): Marion Murzek and Gerhard Kramler Abstract: Not least due to the widespread use of metamodeling concepts, model transformation techniques have reached a certain level of maturity (Czarnecki and Helsen, 2006).
Nevertheless, defining transformations in some application areas (in our case, business process modeling) is still a challenge, because current transformation languages provide general solutions but do not support issues specific to a particular area. We aim at providing generic solutions for model transformation problems specific to the area of horizontal business process model transformations. As a first step in this endeavor, this work reports on the most pressing problems encountered when defining business process model transformations.

Title: ONTOLOGY CONSTRUCTION IN PRACTICE - EXPERIENCES AND RECOMMENDATIONS FROM INDUSTRIAL CASES Author(s): Kurt Sandkuhl, Annika Öhgren, Alexander Smrinov, Nikolay Shilov and Alexey Kashevnik Abstract: Significant progress in ontology engineering during the last decade has resulted in a growing interest in using ontologies for industrial applications. Based on case studies from different industrial domains, this paper presents experiences from ontology development and gives recommendations for industrial ontology construction projects. The recommendations include (1) using defined roles in a matrix project organisation, (2) perspectives on the generalisation/specialisation strategy and ontology lifecycle phases, and (3) aspects of user participation in ontology construction.

Title: FUTURE COLLABORATIVE SYSTEMS BETWEEN PEER-TO-PEER AND MASSIVE MULTIPLAYER ONLINE GAMES Author(s): Markus Heberling, Robert Hinn, Thomas Bopp and Thorsten Hampel Abstract: Current CSCW architectures rely on a central server and offer only limited scalability. With the emergence of distributed hash tables as a comprehensive peer-to-peer infrastructure, a wealth of new applications can be developed. In this paper we propose a new DHT-based CSCW architecture using existing systems and technologies. The resulting CSCW overlay network offers both robustness against network failures and scalability to support large numbers of users simultaneously.
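The distributed-hash-table foundation referred to above can be illustrated with a generic Chord-style consistent-hashing lookup: node identifiers and object keys are hashed onto the same ring, and an object belongs to the first node that follows its key clockwise. This is a minimal sketch of the general DHT idea, not the architecture proposed in the paper; the peer names and the 16-bit ring are illustrative.

```python
import hashlib
from bisect import bisect_right

RING_BITS = 16  # small ring for illustration; real DHTs use 128/160 bits

def ring_hash(name: str) -> int:
    """Hash a node name or object key onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** RING_BITS)

class Ring:
    def __init__(self, nodes):
        self.ids = sorted(ring_hash(n) for n in nodes)
        self.node_of = {ring_hash(n): n for n in nodes}

    def lookup(self, key: str) -> str:
        """Return the node responsible for `key` (its successor on the ring)."""
        h = ring_hash(key)
        i = bisect_right(self.ids, h) % len(self.ids)  # wrap around the ring
        return self.node_of[self.ids[i]]

ring = Ring(["peer-a", "peer-b", "peer-c"])
doc_owner = ring.lookup("shared-document-42")
```

Because responsibility is determined by hashing alone, any peer can locate a shared artifact without a central server, which is the scalability property the proposed CSCW overlay builds on.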
Title: ASPECT-ORIENTED ANALYSIS APPLIED TO THE SPACE DOMAIN Author(s): André Marques, Ricardo Raminhos, Ricardo Ferreira, Rita Ribeiro, Sérgio Agostinho, João Araújo and Ana Moreira Abstract: This paper presents an aspect metadata approach developed in the scope of the “Aspect Specification for the Space Domain” project of the European Space Agency (ESA). The approach is based on XML and XML Schema technologies, enabling a rigorous knowledge representation. It has been applied to a real complex system, the “Space Environment Support System for Telecom/Navigation Missions”, enabling a comparison and evaluation between the proposed approach and the “traditional” requirements analysis methods used during the development of the original version of the system. This paper presents a full description of both the identified metadata concepts and their relationships. The metadata concepts and associated instances have been stored in a Metadata Repository that provides simple navigation facilities between concepts. The Metadata Repository also enables the automatic generation of documentation.

Title: DESIGN AND IMPLEMENTATION OF THE VALID TIME FOR SPATIO-TEMPORAL DATABASES Author(s): Jugurta Lisboa Filho, Gustavo Breder Sampaio, Evaldo de Oliveira da Silva and Alexandre Gazola Abstract: Three different types of time are identified in the literature on Temporal Database Management Systems: valid time, transaction time and existence time. This paper describes the design of valid time for spatio-temporal databases in Geographic Information Systems, based on the UML-GeoFrame conceptual data model. Two rules for translating valid time from the conceptual to the logical level, implemented for the TerraLib spatial components library, are also presented.
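The valid-time concept the paper implements can be sketched generically: each version of a fact carries the period during which it held in the modelled reality, as opposed to transaction time, which records when it was stored. The example below is a minimal, hypothetical illustration of valid-time querying in general; the entity and the data are invented and bear no relation to the paper's GeoFrame-to-TerraLib translation rules.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ParcelVersion:
    """One valid-time version of a (hypothetical) land parcel fact."""
    parcel_id: int
    owner: str
    valid_from: date   # inclusive start of validity in modelled reality
    valid_to: date     # exclusive end of validity

def as_of(versions, parcel_id, when):
    """Return the version of a parcel that was valid at a given instant."""
    for v in versions:
        if v.parcel_id == parcel_id and v.valid_from <= when < v.valid_to:
            return v
    return None

history = [
    ParcelVersion(7, "Silva", date(2000, 1, 1), date(2004, 6, 1)),
    ParcelVersion(7, "Gazola", date(2004, 6, 1), date(9999, 12, 31)),
]
owner_2003 = as_of(history, 7, date(2003, 3, 15)).owner  # "Silva"
```

In a spatio-temporal database the same period columns would also be attached to geometry versions, which is what translation rules at the logical level have to arrange.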
Title: DYNAMIC ARCHITECTURE BASED EVOLUTION OF ENTERPRISE INFORMATION SYSTEMS Author(s): Sorana Cîmpan, Herve Verjus and Ilham Alloui Abstract: Enterprise Information Systems have to co-evolve with the enterprise they support, and their evolution is that of a substantial software system. Software evolution should be addressed at all development phases in order to notably reduce costs (Lehman, 1996). The issue of software system evolution has been addressed mainly at the code level. In this paper we present how the evolution of enterprise information systems can take place at higher abstraction levels, using an architecture-centred development process. The evolutions addressed are dynamic, i.e. they take place at runtime, and concern both planned and unplanned evolutions of the enterprise information system.

Title: CONCEPTS OF MODEL DRIVEN SOFTWARE DEVELOPMENT IN PRACTICE - GENERIC MODEL REPRESENTATION AND DSL INTERPRETATION Author(s): Christian Erfurth, Wilhelm Rossak, Christian Schachtzabel, Detlef Hornbostel and Steffen Skatulla Abstract: This paper discusses the possibilities of realizing the constructs of a domain-specific language (DSL) on the concrete development and runtime platform Ibykus AP. Here, software engineering takes advantage of a combination of generative techniques and stable so-called DSL interpreters. These techniques for implementing model-driven software development (MDSD) concepts can improve the flexibility, quality and performance of the development of large application systems. In presenting the DSL interpreter approach, the underlying techniques of generic repository structures that hold the software model as well as runtime configuration information are discussed. The importance of an associated clear and well-structured interface, and tuning alternatives for the repository, are pointed out. Finally, the paper concludes with an outlook on future research work.
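The contrast central to the DSL interpreter approach, between generating code from a model and walking the model directly at runtime, can be sketched generically. The model format and operations below are invented for illustration only and are unrelated to Ibykus AP; the point is merely that an interpreter can pick up model changes from the repository without a regeneration step.

```python
# A toy "model" as it might be held in a generic repository: a list of
# declarative steps. The op names and fields are purely illustrative.
model = [
    {"op": "set", "field": "status", "value": "new"},
    {"op": "append", "field": "log", "value": "created"},
]

def interpret(model, record):
    """Walk the model at runtime instead of generating code from it."""
    for step in model:
        if step["op"] == "set":
            record[step["field"]] = step["value"]
        elif step["op"] == "append":
            record.setdefault(step["field"], []).append(step["value"])
        else:
            raise ValueError(f"unknown op: {step['op']}")
    return record

result = interpret(model, {})  # {'status': 'new', 'log': ['created']}
```

Changing `model` changes behaviour immediately on the next run, which is the flexibility argument for interpretation; generation, by contrast, trades that flexibility for ahead-of-time optimization.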
Title: MODELING OF A DEMOCRATIC CITIZENSHIP COMMUNITY TO FACILITATE THE CONSULTATIVE AND DELIBERATIVE PROCESS IN THE WEB Author(s): Cristiano Maciel and Ana Cristina Bicharra Garcia Abstract: Electronic democracy should facilitate the debate and participation of citizens, as well as electronic voting on governmental issues. Governmental applications available on the Web have not evolved significantly toward real participation of citizens. The implementation of an e-democracy system can benefit from incorporating features from distinct information channels, especially television. This paper discusses an Interactive Government-Citizen Model that allows and stimulates the decision-making process between government and citizens, facilitating citizen participation through a virtual community and through integrated management of information in the Web environment. In this Model we identify the phases of an advisory and deliberative process as carried out through a Democratic Citizenship Community, whose debate is structured in a Government-Citizen Interaction Language known as DemIL. An evaluation of the effectiveness of the processes in a community is then discussed, ultimately seeking to conceive an environment capable of promoting a high degree of deliberative participation. Title: TOWARDS THE DYNAMIC ADAPTABILITY OF SOA Author(s): Mehdi Ben Hmida, Céline Boutrous Saab, Serge Haddad, Valérie Monfort and Ricardo Tomaz Ferraz Abstract: Service Oriented Architectures (SOA) aim to give methodological and technical answers for achieving interoperability and loose coupling between heterogeneous Information Systems (IS). Currently, Web Services are the fitting technical solution for implementing such architectures. However, both Web Service providers and clients face important difficulties in dynamically changing their behaviours. On one side, Web Service providers have no means to dynamically adapt an existing Web Service to changes in business requirements.
On the other side, Web Service clients have no way to dynamically adapt themselves to service changes in order to avoid execution failures. In this paper, we show how we achieve a dynamically adaptable SOA by introducing the Aspect-Oriented Programming (AOP) paradigm and Process Algebra (PA). We propose a Process Algebra formalism to specify a change-prone BPEL process (base service and aspect services) and demonstrate how to generate a client that dynamically adapts its behaviour to service changes. We illustrate our approach through a concrete case study and present the Aspect Service Weaver (ASW) tool, which implements our concepts. Title: INFORMATION SYSTEM REQUIREMENT ANALYSIS AND SPECIFICATION IN FOREST MANAGEMENT PLANNING PROCESS Author(s): Salvis Dagis Abstract: Forests cover up to 45% of the territory of Latvia, and forestry is the most significant export sector in Latvia, providing in total up to 14% of Gross Domestic Product. In order to manage the forests economically and efficiently, it is necessary to plan management activities several periods in advance. The forecasting of tree growth has an important place within the forest management planning process. In order to develop an IT solution for the forest management planning process, it is necessary to analyse the forestry sector, as a result of which use-case models and the corresponding static structure are developed. For the development of the use-case models and the static structure, the Unified Modeling Language (UML) specification and notation is used. Title: GIS QUALITY MODEL - A METRIC APPROACH TO QUALITY MODELS Author(s): Willington Libardo Siabato Vaca and Adriana P. Rangel Sotter Abstract: In the past few years, organizations and companies have, as a product of their work, developed new standards that have proven highly efficacious in both the public and the private sector.
The most important of these is the Quality Management System regulated by the ISO 9001 standard, whose implementation brings appreciable improvements to the production system, optimizing the product and increasing its quality for the companies that decide to adopt it. This phenomenon is not isolated within the frame of software engineering. Many works have been published, such as those of Boehm [BOEHM'78, BOEHM'76, BOEHM'73] and McCall [MCCALL'77], giving rise to standards among which ISO/IEC 9126 stands out. Thanks to this, different solutions have been created for multiple problems related to Information Technologies. Nowadays, however, managers of Geographic Information Systems projects have no tool either for selecting the software with which to implement their projects or for grounding this selection in technical criteria. The questions are: which commercial software package is appropriate for my project? Which one fulfils the project's requirements? Which one supports the needs of the users? This article presents a quality model to support this kind of decision. In this way, project managers can base their decisions on a set of metrics resulting from an in-depth evaluation of the characteristics, sub-characteristics and attributes of the software. These elements let the user identify the best software package through a GIS Quality Indicator generated from the model. Ultimately, this indicator allows GIS project managers to make decisions supported by well-constructed technical criteria and by a model grounded in international standards for software product quality, such as ISO/IEC 9126-1.
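The kind of metric aggregation such a quality model performs can be sketched as a weighted average over ISO/IEC 9126-style characteristics, each itself a weighted average of attribute scores. The characteristic names, weights and scores below are purely illustrative and are not taken from the paper:

```python
# Hypothetical sketch: aggregate attribute scores (0..1) into characteristic
# scores and an overall quality indicator via weighted averages.

def weighted_score(items):
    """items: list of (weight, score) pairs; weights need not sum to 1."""
    total_w = sum(w for w, _ in items)
    return sum(w * s for w, s in items) / total_w

# Illustrative evaluation of one GIS package (names and numbers are invented).
functionality = weighted_score([(0.5, 0.9),   # spatial analysis coverage
                                (0.3, 0.7),   # format interoperability
                                (0.2, 0.8)])  # coordinate system support
usability     = weighted_score([(0.6, 0.6),   # learnability
                                (0.4, 0.8)])  # documentation quality

gis_quality_indicator = weighted_score([(0.7, functionality),
                                        (0.3, usability)])
print(round(gis_quality_indicator, 3))  # 0.778
```

Comparing this indicator across candidate packages is what lets a project manager rank them against a single, standards-derived criterion.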
Title: BORM POINTS – NEW CONCEPT PROPOSAL OF COMPLEXITY ESTIMATION METHOD Author(s): Zdeněk Struska and Vojtěch Merunka Abstract: This paper introduces BORM points, a new method from the area of complexity estimation in object environments. The first part of the paper describes BORM (Business and Object Relation Modelling). The second part presents, for the first time, the concept of BORM points, a proposed estimation method for the BORM environment. The article closes with a list of the next steps needed to finish the method and promote it to the wider scientific community. Title: TOWARDS ENTERPRISE APPLICATIONS USING WIRELESS SENSOR NETWORKS Author(s): Stamatis Karnouskos and Patrik Spiess Abstract: Wireless Sensor Networks (WSN) have become a hot topic in research, and significant progress has been achieved in the past few years. Recently, the topic has gained a lot of momentum and has become increasingly attractive for industry, paving the way for new applications of sensor networks which go well beyond traditional sensor applications. Sensor networks are seen as one of the most promising technologies to bridge the physical and virtual worlds and enable them to interact. Expectations go beyond the research visions, towards deployment in real-world applications that would empower business processes and future business cases. In this paper we look at WSNs from the business software perspective, including the business model, service-oriented architecture, integration with enterprise software systems, and benefits and lessons learned. As an example use case we demonstrate the use of WSNs for hazardous goods management in the chemical industry. Finally, based on our experiences, we outline some directions that can be followed in order to pave the road to real business applications for WSNs.
Title: EVENT-BASED INFORMATION SYSTEM MODELS Author(s): Lars Bækgaard Abstract: Business events play important roles in information systems and their environments. We introduce and discuss a general notion of business events that covers a wide range of more specialized event concepts. We use this general event concept to analyze a set of modeling languages that support the modeling of information about events and a set of modeling languages that support the modeling of event-based activities. Finally, we outline a set of general requirements that a modeling language must satisfy in order to support the modeling of events in a conceptually rich manner. Title: OPERATIONALIZING THEORY - MOVING FROM INSIGHT TO ACTION IN A SME Author(s): Lars-Olof Johansson, Björn Cronquist and Harald Kjellin Abstract: This paper presents a method for operationalizing theory. The method has its basis in the empirical findings arising from collaboration between the researchers and a research partner, Flower Systems Ltd, a software company characterized as an SME. The presented method is exemplified with theories from learning organizations, usability, and visualization, all of which are connected to the problem articulated by our partner. The method is an iterative process characterized by a systemic and holistic long-term view that incorporates feedback. It takes as its point of departure the problematic area described by Flower Systems Ltd; the researchers both intervene and interpret in this problematic area, so the method is both described and verified. The paper combines the case study and action research methods in what is sometimes referred to as a “hybrid” method, the action case method. The view of innovation presented in this paper is that innovation entails supporting change processes in order to create purposeful and focused change.
The underlying research question has been: how usable is our method for operationalizing theory in solving the problem of adapting to changes in an SME? Title: ORGANISATIONAL LEARNING AND HEIDEGGER’S ONTOLOGY - DOES PHILOSOPHY MATTER FOR INFORMATION SYSTEMS DESIGN? Author(s): Angela Lacerda Nobre Abstract: Organisational Learning carries an intrinsic tension between the formal, formalised and procedural aspects of management and the flexible, innovative and informal processes that sustain and nurture each organisation's own dynamism and capacity for action. The present article reviews certain key influences and main theoretical perspectives that are constitutive of the organisational learning field of study. These theories and influences, in turn, are then related to certain schools of thought, world-views, paradigms and epistemic shifts that help to highlight the complexity as well as the relevance of this area of study. Title: R-TOOL: A SUPPORTING TOOL FOR A QUALITY ORIENTED REUSE STRATEGY Author(s): Maryoly Ortega, Anna Grimán, María Pérez and Luis Eduardo Mendoza Abstract: The quality of reusable elements must be rigorously monitored and guaranteed before they can be reused; this is known as Certification. High levels of certification of these elements generate trust and stimulate reuse. In this paper we describe the development of a tool (Beta version) based on a quality-oriented reuse strategy. To this end, we take as our starting point an ontology that rigorously correlates the essential concepts of systematic reuse to quality. This ontology reinforces the proposed strategy, which in turn is supported by the tool. The methodology used is based upon the Methodological Systemic Framework for Information Systems Research proposed by Pérez et al. (2004). In addition, for the development of the tool, we used the iterative, incremental development process of the Rational Unified Process (RUP).
We took into account the inception and elaboration phases, and developed one iteration of the construction phase. As a result of the development process we built a tool which supports the main activities of the proposed strategy: certifying domain models, requirement specifications, architectural designs and code through checklists, and allowing the reusable elements and their properties to be stored, classified, searched for and recovered. Title: EXTRACTION OF SEMANTIC RELATIONSHIPS STARTING FROM SIMILARITY MEASUREMENTS Author(s): Mohamed Frikha, Mohamed Mhiri and Faiez Gargouri Abstract: Modelling current applications is becoming increasingly complex. Indeed, it requires hard work to study the particular domain in order to determine its main concepts and their relationships. In certain cases the designers may face many ambiguities concerning the comprehension of the domain to be modelled and the concepts to be used. In order to resolve these ambiguities, we use an ontology as a reference to give more semantics to conceptual schemas, employing an ontology-building approach to represent the pertinent concepts of a domain. In this paper, we propose a set of similarity measurements for determining the resemblance between the concepts of a conceptual schema and those of the ontology. We then propose an algorithm using these similarity measurements to determine the semantic relationships. Title: SYNCHRONIZATION ISSUES IN UML MODELS Author(s): Marco Costa and Alberto Rodrigues da Silva Abstract: Information systems have been changing with regard not only to technologies but also to notations and methodologies. As the complexity of the implemented systems grows steadily, the need for ways to systematically develop applications increases, and a multitude of tools have appeared to help in the development process.
Tools support and generate a large number of artefacts, but development teams still face a difficult task: how to manage the coherence of that information in a context of highly dynamic changes. We discuss some important questions regarding synchronization, not only traceability, namely how to develop a fully customizable and extensible application in this field, which will instantiate a new class of applications. Title: BPEL PATTERNS FOR IMPLEMENTING VARIATIONS IN SOA APPLICATIONS Author(s): Samia Oussena, Dan Sparks and Balbir Barn Abstract: The main purpose of the COVARM research project is to define a candidate reference model utilizing a framework of web services to support a key UK Higher Education business process. Any given business domain may offer a level of complexity such that process activities, terminology (the ontology) and business rules may vary between organizations belonging to the same domain. A reference model for a domain therefore requires a significant level of adaptability and customization in order to fully address it. Our approach to reference modelling was to define a generic (or canonical) business process for the domain, recognising that this canonical process needed to be adapted to support the “variability” required by different users of the domain. While a generic process can be, and has been, built as part of the reference model, the flexibility (or variability) is afforded by the implementation strategy for the canonical model / generic process. We have implemented the following variations: activity ordering, cross-site terminology harmonization, and specific business rules to address the variability requirements. This paper presents our experience with explicitly managing this variability within the implementation technology, and describes, with the use of BPEL patterns, how these variations can be managed in an SOA application implementation.
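The "activity ordering" variation mentioned in the BPEL abstract above can be caricatured in a few lines (in Python rather than BPEL, with invented activity names, not those of the COVARM model): a canonical process fixes the set of activities, while each deployment supplies its own execution order.

```python
# Hypothetical sketch of an "activity ordering" variation point: a canonical
# process defines the activities; each organization supplies its preferred
# ordering, and the engine executes the activities in that order.

def make_process(activities):
    """activities: dict of name -> callable taking and returning a context dict."""
    def run(ordering, context):
        for name in ordering:
            context = activities[name](context)
        return context
    return run

# Illustrative activities (names are invented for the sketch).
acts = {
    "validate": lambda ctx: {**ctx, "validated": True},
    "enrich":   lambda ctx: {**ctx, "enriched": True},
    "approve":  lambda ctx: {**ctx, "approved": ctx.get("validated", False)},
}

run = make_process(acts)
# Site A validates before approving; site B approves first, so approval fails.
site_a = run(["validate", "enrich", "approve"], {})
site_b = run(["approve", "validate", "enrich"], {})
print(site_a["approved"], site_b["approved"])  # True False
```

In BPEL the same effect is achieved structurally, by pattern-based rearrangement of the sequence of invoked activities rather than by a runtime ordering parameter.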
Title: A FRAMEWORK FOR ONTOLOGICAL STANDARDIZATION OF BUSINESS PROCESS CONTENT Author(s): Maya Lincoln, Reuven Karni and Avi Wasser Abstract: One of the main challenges currently facing the world of enterprise information technology in general, and ERP/SCM/CRM systems in particular, is visibility into the business of organizations. The prevalent approach utilizes conceptual business process modelling as the foundation for creating and managing this visibility, aiming to connect business activity and its supporting information technology. While devising structural execution frameworks is widespread in academia, there have been few attempts to develop theory, empirical studies and supporting methods for the structured generation and customization of complete business process models that also include an actual content layer. Such models move beyond structural data modelling in the sense that they add the semantics and relationships of actual business data. This research suggests a framework and a set of methods for the organization and structured construction of business process content. Title: PROCESS USE CASES: USE CASES IDENTIFICATION Author(s): Pedro Valente and Paulo N. M. Sampaio Abstract: The identification of use cases is one of the key issues in the development of interactive information systems. User participation in the development life cycle can be seen as critical to achieving usable systems and has proven its efficacy in improving the appropriateness of systems. Indeed, the involvement of users in the requirements definition can add a significant improvement to both of the consecutive/interleaved tasks of (i) understanding and specifying the context of use, and (ii) specifying the user and organizational requirements, as defined in Human-Centered Design (HCD) (ISO, 1999).
Existing solutions provide a way to identify business processes and/or use cases in order to achieve system definition, but they do not do so in an agile and structured way that helps to efficiently bridge Business Process Management and Software Engineering. Process Use Cases is a methodology, defined within the Goals software construction process, for the identification of use cases and information entities during the modeling and reorganization of business processes, focusing the results on the identification of the functional requirements for the correct development of an interactive information system. Title: REFINEMENT PROPAGATION - TOWARDS AUTOMATED CONSTRUCTION OF VISUAL SPECIFICATIONS Author(s): Irina Rychkova and Alain Wegmann Abstract: The creation and transformation of visual specifications is driven by the modeler's design decisions. After a design decision has been made, the modeler needs to adjust the specification to maintain its correctness. The number of adjustments can make the design process tedious for large specifications. We are interested in a technique that reduces the modeler's obligation to control specification correctness. Every single transformation of a visual specification can be captured by the notion of refinement used in formal methods. In this work we present a technique that supports the stepwise refinement of visual specifications based on calculations. We use refinement calculus as a logic for reasoning about refinement correctness. When a design decision is made by the modeler, the necessary adjustments are calculated based on rules of refinement propagation. Refinement propagation can automate the adjustment of the specification and enforce its correctness. Title: TOWARDS UML-RT BEHAVIOURAL CONSISTENCY Author(s): Kawtar Benghazi Akhlaki, Manuel I. Capel Tuñón, Juan A. Holgado Terriza and Luis E.
Mendoza Morales Abstract: With the objective of achieving a formal characterisation of Sequence Diagrams (SD) as a means for ERTS development and validation, this paper introduces a CSP+T-based timed trace semantics for most concepts of SD. A trace is a sequence of events, which gives the necessary expressiveness to capture the standard interpretation of UML SD. Timed SD (TSD) depict workflow and message passing, and give a general view of how a system's components cooperate over time to achieve a result. Such a sequence, often called a scenario, also represents a part of the system behaviour and a possible execution of a state diagram. State diagrams and SD are used as complementary models for describing system behaviour. To demonstrate temporal consistency between SD and timed state diagrams, we propose a systematic transformation of both into a formal semantics of timed traces of events. Finally, the application example consists of the dynamic modelling of the key components of a well-known paradigmatic case from the manufacturing industry. Title: CHECKING BEHAVIOURAL CONSISTENCY OF UML-RT MODELS THROUGH TRACE-BASED SEMANTICS Author(s): Luis E. Mendoza Morales, Manuel I. Capel Tuñón and Kawtar Benghazi Akhlaki Abstract: Starting from a methodological approach intended to obtain a correct system specification in CSP+T from a UML-RT model of an RTS, we develop a systematic procedure to check whether the obtained design is consistent with other views of the same system, such as those given by class diagrams, collaboration diagrams and state charts. To achieve this objective, a formal semantics of the notational elements of UML-RT in terms of CSP+T process terms is presented, which guarantees that system requirements are preserved from the initial UML-RT modelling to the final system implementation.
As a consequence, the formal support given by the compositional refinement of CSP+T process terms allows compositional verification of the system's software. In addition, the derived formal semantic definitions are applied to the Production Cell case study. Title: MATRIX BASED PROBLEM DETECTION IN THE APPLICATION OF SOFTWARE PROCESS PATTERNS Author(s): Chintan Amrit and Jos Van Hillegersberg Abstract: Software development is rarely an individual effort and generally involves teams of developers. Such collaborations require proper communication and regular coordination among the team members. In addition, coordination is required to sort out problems due to the technical dependencies that exist when components in one part of the architecture require services or data input from components in another part of the architecture. The dynamic allocation of different tasks to people results in various socio-technical structure clashes (STSCs). These STSCs become more pronounced in an Agile Software Development environment, and managerial intervention is constantly required to alleviate the problems they cause. In this paper we propose a technique based on dependency matrices that detects STSCs in the organizational process structure. We illustrate this technique using two examples from the Organizational and Process Pattern literature. Title: AUTHORS’ INSTRUCTIONS - OSS FACTORY: DEVELOPMENT MODEL BASED AT OSS PRACTICES Author(s): Ana Isabella Muniz and José Augusto de O. Neto Abstract: In this paper we present OSS Factory (Open Source Software Factory), an ecosystem aligning software demands, the qualification of undergraduate Computing students and Open Source practices in a collaborative relationship, dedicated to producing open source applications that cope with market demands using the students' coding potential.
A contest among students attending software engineering courses (or volunteers), guided by professors and coordinated by a central entity, is the driving force behind OSS Factory. To validate the proposed elements and interactions, experiments applying the structure described in the paper were performed, and positive results were achieved. Title: SIMULATION BASED PERFORMANCE ANALYSIS OF THE MVB MAC LAYER Author(s): Yongxiang Wang, Lide Wang and Xiaobo Nie Abstract: This paper describes the design and validation of a CPN (Colored Petri Net) model of the MAC layer of the MVB (Multifunction Vehicle Bus), based on the IEC 61375-1 standard. An MVB network has been modeled, comprising an MVB bus controller, three MVB Class-1 devices and a common channel which connects them all. By means of this model we aim to analyze transmission performance under different conditions. Most of the work is done using CPN Tools, which permits visual modeling, simulation and analysis of CPNs. The simulation results and a performance comparison are presented. The authors point out that the reliability of the MVB network depends greatly on the data link layer and that redundant communication lines are necessary. Title: INTEROPERABILITY IN PERVASIVE ENTERPRISE INFORMATION SYSTEMS - A DOUBLE-FACED COIN BETWEEN SECURITY AND ACCESSABILITY Author(s): Dana Al Kukhun and Florence Sèdes Abstract: As transparency becomes a key requirement for assuring user satisfaction, Enterprise Information Systems are seeking to become pervasive in order to resolve the heterogeneity problems they face while integrating their dynamic sub-components or while interacting with customers or other business partners. In this article, we introduce the difficulties that Enterprise Information Systems face when trying to provide interoperable data exchange.
We highlight the importance of applying adaptive access control policies that provide interoperability by differentiating between giving local system users full data accessibility and applying multi-level security controls to external users. Pervasive systems need to manage data integration and processing in highly dynamic environments where data, software, hardware and connectivity constraints change over time, and they should adapt automatically and proactively to user needs. Title: INCENTIVE-BASED AND PEER-ORIENTED DESIGN OF UBIQUITOUS COMMERCE Author(s): Kyoung Jun Lee and Jeong-In Ju Abstract: Seamlessness is the keyword of U-Commerce, which may be defined as the commercial interaction among providers, consumers, products, and services, enabled and supported especially by the real-world seamless communication of each entity's and object's digital information. However, the possibility of seamless transactions increases the privacy risk of the entities involved. Therefore, the core issue of U-Commerce is how to promote seamless transactions while protecting privacy. For seamlessness, the role of incentive-emphasizing business models is important, since seamlessness makes clear which economic entities contribute to a commercial transaction, and economic entities will reject seamless transactions unless sufficient incentives are given to them. In order to address the privacy issue, we suggest an alternative U-Commerce architecture based on a Hybrid P2P Model and a Personal Information Base. Title: TOWARDS A FORMAL VERIFICATION OF PROCESS MODEL'S PROPERTIES SIMPLEPDL AND TOCL CASE STUDY Author(s): Benoît Combemale, Pierre-Loïc Garoche, Xavier Crégut, Xavier Thirioux and François Vernadat Abstract: More and more, models, through Domain Specific Languages (DSL), tend to be the solution for defining complex systems. Expressing properties specific to these metamodels and checking them appears as an urgent need.
Until now, the only complete industrial solutions available consider structural properties, such as the ones that can be expressed in OCL. There are, however, some tentative approaches to behavioural properties for DSLs. This paper presents a method to specify and then check temporal properties over models. The case study is SimplePDL, a process metamodel. We propose a way to use TOCL, a temporal extension of OCL, to express properties. We specify a model transformation to Petri Nets and LTL formulae for both the process model and its associated temporal properties. We check these properties using a model checker and enrich the model with the analysis results. This work is a first step towards a generic framework to specify and effectively check temporal properties over arbitrary models. Title: A REFERENCE ARCHITECTURE FOR MANAGING BUSINESS PROCESS VARIANTS Author(s): Ruopeng Lu and Shazia Sadiq Abstract: Business process management systems (BPMS) have been prevalent in business information systems for a substantial period of time. However, BPMS are still striving to meet emerging demands from current business environments. The most dominant challenge is supporting the dynamism of business processes, where requirements and process goals are constantly changing. This is particularly challenging in managing knowledge-intensive business processes, and has partially led to the demand for more complex BPMS functionality such as instance adaptation and streamlined process evolution. On the other hand, various process analysis and discovery techniques have been developed as an important component of BPMS. In this paper, we present a technology framework that supports process discovery from preferred work practices in a flexible process management system. The framework supports instance adaptation and a systematic approach towards process evolution and improvement.
Title: MODELLING EXTERNAL INTERACTION CONTEXT FOR ENHANCED BUSINESS PROCESS INTEGERATION IN MOBILE ENTERPRISES Author(s): Subodh Sohi Abstract: Today, with increasing mobility, enterprises are proactively looking to mobilize solutions and want to leverage wireless knowledge for enhanced business processing. In mobile enterprises, knowledge about the information flow across mobile devices can be controlled, and the interaction context can be leveraged to enable better decision capabilities for both humans and systems, thereby leading to enhanced business process integration. The current state of the art lacks modeling of the appropriate interaction context for information flow across mobile devices. In this paper, we present a framework for deriving and leveraging interaction context in a health care enterprise. Title: DYNAMIC INTERACTION OF INFORMATION SYSTEMS - WEAVING ARCHITECTURAL CONNECTORS ON COMPONENT PETRI NETS Author(s): Nasreddine Aoumeur, Gunter Saake and Kamel Barkaoui Abstract: Advances in networking over heterogeneous infrastructures are boosting market globalization and thereby forcing most software-intensive information systems to be fully distributed, cooperating and evolving in order to stay competitive. The composed behaviour emerging in such interacting components evolves dynamically, rapidly and unpredictably as market laws and user/application requirements change on the fly, both at the coarse-grained type level and at the fine-grained instance level. Despite significant proposals for promoting interaction and adaptivity using mainly architectural techniques (e.g. components and connectors), rigorously specifying, validating, verifying and dynamically adapting complex communicating information systems at both the type and instance levels remains challenging. In this contribution, we present component-based Petri nets governed by a true-concurrent rewriting-logic-based semantics for specifying and validating interacting distributed information systems.
For runtime adaptivity, we enhance this proposal with (ECA-business) rule-driven Petri net behavioral connectors, and demonstrate how to dynamically weave them onto running components to reflect any emerging behavior. Title: ON THE LOGIC UNDERLYING COMMON SENSE Author(s): Janos J. Sarbo Abstract: In order to endow computers with common sense with respect to specific domains, we need to have a representation of the world and make commitments about what knowledge is and how it is obtained. This paper is an attempt to introduce such a representation and an underlying 'naive' logic on the basis of an analysis of the properties of cognitive activity. The paper is of interest to those engaged in the development of user interfaces and ontologies, as well as to those interested in the semiotic aspects of problem specification and requirements engineering. The focus of the paper is on the theory; applications are only briefly mentioned due to lack of space. Title: TEXT CATEGORIZATION USING EARTH MOVER'S DISTANCE AS SIMILARITY MEASURE Author(s): Hidekazu Yanagimoto and Sigeru Omatu Abstract: We propose a text categorization system using the Earth Mover's Distance (EMD) as the similarity measure between documents. Many text categorization systems adopt the Vector Space Model and use cosine similarity as the similarity measure between documents. This carries the assumption that the words included in documents are uncorrelated, because the vector space is orthogonal. However, this assumption is not desirable when a document includes many synonyms and polysemous words. The EMD does not require this assumption, because it is computed as the solution of a transportation problem. To compute the EMD in a way that takes the dependency among words into account, we define the distance between words, which is needed to compute the EMD, using the co-occurrence frequency of the words.
We evaluate the proposed method on the ModApte split of the Reuters-21578 text categorization test collection and confirm that it improves the precision rate for text categorization. Title: DEVELOPMENT OF ALGORITHMS TO SOLVE COMBINATORIAL PROBLEMS Author(s): Broderick Crawford, Carlos Castro and Eric Monfroy Abstract: This paper captures our experience in developing algorithms to solve combinatorial problems using different techniques. Because this is a software engineering problem, finding better ways of developing algorithms, solvers and metaheuristics is also of interest to us. Since software development is a creative and knowledge-intensive activity, an understanding from a Knowledge Management perspective offers important insights into our work. Here, we set out the most valuable concepts from Knowledge Management and Software Engineering applied in our work. Title: GENERIC BUSINESS MODELLING FRAMEWORK Author(s): Christopher John Hogger and Min Li Abstract: We present a position paper setting out the essentials of a new declarative framework named GBMF, intended for modelling the higher-level aspects of business. It is based upon logic programming including, where appropriate, finite-domain constraints. Business plans, processes, entity constraints, assets and business rules are representable in GBMF using an economical repertoire of primitive constructs and without requiring overly burdensome programming effort. The framework, which has been fully implemented, has so far been applied to small-scale business exemplars. Our more general future aim, however, is to demonstrate the framework's generic character by providing precise semantic mappings between it and other business modelling frameworks that rely upon specialized languages and engines.
Title: ADAPTIVE PROCESSES IN E-GOVERNMENT - A FIELD REPORT ABOUT SEMANTIC-BASED APPROACHES FROM THE EU-PROJECT “FIT” Author(s): Andrea Leutgeb, Wilfrid Utz, Robert Woitsch and Hans-Georg Fill Abstract: To increase the efficiency and effectiveness of public administration, as well as to improve the usability and adaptability of systems, state-of-the-art semantic technologies can be combined with existing business process management (BPM) approaches in e-government. This position paper presents ontology-based approaches as implemented within the EU project FIT. In FIT, the customer-approved business process modelling language ADOeGov® has been enriched with business rules in order to provide the necessary transparency, flexibility and efficiency.

Title: A SUCCESS STORY: COLLABORATIVE EFFORT WITH THE INDUSTRY IN ADDRESSING REQUIREMENTS CHALLENGES FOR EARLY ADOPTION OF IWARP IN LINUX Author(s): Venkata Jagana, Claudia Salzberg, Renato Recio and Bernard Metzler Abstract: As customers embrace Linux for solving business-critical problems, the demand to support innovative and cutting-edge technologies is also increasing at a dramatic pace. This has forced system vendors to offer these technologies much sooner than traditional cycles allowed. In addition, the open-sourcing of different implementations of the same technology by different vendors poses a significant risk to reaching agreement on a common implementation. In order to address this multitude of problems and get an open-source implementation of a new technology accepted into the Linux mainstream sooner, we adopted an innovative method. This method allowed us to work on a common implementation for Linux, avoiding the clash of multiple implementations from the beginning by bringing together all the relevant vendors well before the technology gained a foothold in the market with any proprietary implementations.
In this paper, we describe in detail the success story of iWARP support for Linux: how we formed a community and generated requirements agreeable to all vendors and open-source developers, how this effort in turn drove an industry standard defining a programming interface in parallel with the implementation, and how the code should converge with the existing InfiniBand technology. We also describe how this model can be applied for faster adoption of upcoming technologies into open-source implementations once the new technology challenges have been addressed.

Area 4 - Software Agents and Internet Computing

Title: BUSINESS ADMINISTRATION AND IT PROFESSIONALS - A SOCIAL NETWORK ANALYSIS PERSPECTIVE Author(s): Susanne Berger and Georg Peters Abstract: There is an ongoing discussion about whether, to what extent, and in which respects professionals in the fields of information technology (IT) and business administration (BA) differ. IT people are often considered to be introverted, while BA professionals are assumed to be stronger with respect to communication and networking. In our paper we take a social network analysis perspective to examine whether this prejudice holds for BA and IT professionals who are members of an online business community.

Title: THE IP MULTIMEDIA SUBSYSTEM (IMS) & THE MOBILE INTERNET - OPPORTUNITIES FOR THE MOBILE OPERATOR Author(s): Amanda O'Farrell and Brendan Tierney Abstract: The IP Multimedia Subsystem (IMS) is a new mobile communications architecture which enables many new and innovative services and can extend the possibilities of mobile internet application development. These mobile internet applications and the IMS are considered in terms of the impact they can have on the critical success factors (CSFs) of mobile operators.
The CSFs identified are particular to mobile operators that are competing in a highly saturated (in terms of mobile penetration) marketplace and that are facing the threat of increasing competition.

Title: SPECIFICATION AND VERIFICATION OF VIEWS OVER COMPOSITE WEB SERVICES USING HIGH LEVEL PETRI-NETS Author(s): Khouloud Boukadi, Chirine Ghedira, Zakaria Maamar and Djamal Benslimane Abstract: This paper presents a high-level Petri-net approach for specifying and verifying views over composite Web services. High-level Petri-nets have the capacity to formally model and verify complex systems. A view is mainly used for tracking purposes, as it permits representing a contextual snapshot of a composite Web service specification. The use of the proposed high-level Petri-net approach is illustrated with a running example that shows how a Web service composition satisfies users’ needs. A proof-of-concept of this approach is also presented in the paper.

Title: INTEROPERABILITY CHALLENGES IN NEW MEMBER STATES SMALL AND MEDIUM ENTERPRISES REQUIRE SUITABLE EAI ARCHITECTURES Author(s): Karsten Tolle, Valentinas Kiauleikis, Gerald Knoll, Claudia Guglielmina and Alessandra Arezza Abstract: We study, design and develop a federated architecture using UBL messages and interoperability services, which aims at supporting EAI for SMEs in the New Member States.

Title: COMEX: COMBINATORIAL AUCTIONS FOR THE INTRA-ENTERPRISE EXCHANGE OF LOGISTICS SERVICES Author(s): Oleg Gujo and Michael Schwind Abstract: The exchange of cargo capacities is an approach that is well established in logistics practice. Few of these mostly web-based marketplaces, however, are able to take into consideration the synergies that can be generated by appropriately combining the transportation lanes of different carriers. One way to achieve this is to employ combinatorial auctions, which allow one to bid on bundles of lanes.
This article describes a combinatorial auction for the intra-enterprise exchange of logistics services. In the real-world case considered here, we implement and analyze such an exchange process in an enterprise that belongs to the food sector and is organized in a profit-center structure. In the intra-enterprise exchange process, each profit center is able to release delivery contracts for outsourcing if the geographic location of a customer allows a reduced-cost delivery by another profit center in the neighborhood. The cost calculation is based on the results of an integrated routing system, and the in- and outsourcing process is managed using the auction mechanism ComEx. For the purpose of customer retention, the delivery contracts are kept by the corresponding profit center; the incentive for exchanging customers is provided by a cost-savings distribution mechanism. After a description of the web-based logistics auction together with the route optimization system DynaRoute, the article describes the search for a cost-optimizing strategy that bundles the appropriate delivery contracts.

Title: AN EFFICIENT SYSTEM FOR EJB MOBILIZATION Author(s): Liang Zhang, Beihong Jin, Li Lin and Yulin Feng Abstract: Nowadays, conducting business requires more and more employees to be mobile. To be efficient, these mobile workers need to access enterprise applications with their mobile devices at any time and anywhere. How to efficiently extend enterprise applications to mobile devices has become a challenging task for the enterprise. In this paper, we present our recently developed system for mobilizing enterprise applications, which provides efficient access to EJB components from MIDP 2.0 mobile devices. Considering the characteristics of wireless media, our system can dynamically choose the most appropriate communication method and provides synchronous exactly-once communication semantics.
Security is addressed by providing data encryption, two-way authentication and a simple tool for managing the access control list. We also develop a mechanism for supporting priority service. A thread pool and object caching are implemented to increase efficiency. Lastly, our system offers various tools to enhance development automation, while still allowing programming flexibility by providing a rich set of APIs.

Title: A METHODOLOGY FOR DEVELOPING ONTOLOGIES USING THE ONTOLOGY WEB LANGUAGE (OWL) Author(s): Magdi N. Kamel, Ann Y. Lee and Edward C. Powers Abstract: It is generally agreed that ontologies are the knowledge representation component of the Semantic Web. There is a growing need for developing ontologies in different disciplines as a means of sharing a common understanding of the structure of information in a domain among both people and machines. This paper describes a seven-step methodology for developing ontologies using the Ontology Web Language (OWL), based on related approaches for software and ontology development. As with contemporary software development methodologies, the steps of the proposed approach are applied iteratively and in a cyclical fashion in order to accurately capture the domain knowledge.

Title: PATTERN-BASED COLLABORATION IN AD-HOC TEAMS THROUGH MESSAGE ANNOTATION Author(s): Daniel Schall, Robert Gombotz and Schahram Dustdar Abstract: In this paper we present a specification for annotating messages to enable computer-supported message processing, addressing, and analysis. The benefits of annotating messages according to our XML-based specification are two-fold. Firstly, it allows computer support during collaboration by enabling automated message addressing (i.e., determining who should get a message) and message management (e.g., managing your messages according to activities, projects, and tasks).
Secondly, it enables post-collaboration analysis of messages and mining of message logs for patterns and for workflow models. We provide a proof of concept by presenting how annotated messages may support and facilitate collaboration that happens according to certain collaboration patterns. In addition to the patterns we have introduced in our previous work, we present further patterns, such as Monitors, that emphasize the applicability of computer-supported message handling.

Title: A NEW GROUP KEY MANAGEMENT STRUCTURE FOR FRAUDULENT INTERNET BANKING PAYMENTS DETECTION Author(s): Osama Dandash, Yiling Wang, Phu Dung Le and Bala Srinivasan Abstract: Fraudulent payment detection in the banking system is an extremely important form of risk management, particularly as the industry loses close to one billion dollars a year to fraud. Several modern techniques for detecting fraud are continually evolving and being applied to many business fields. However, there is still no efficient detection mechanism that is able to identify legitimate users and trace their illegal activities. This paper presents a new Group Key Management (GKM) structure that facilitates an internal fraudulent banking payment detection mechanism by dynamically combining an Individual Key (IK) and a Group Key (GK). The main objective of the proposed mechanism is to identify internal fraudsters and trace their records amongst other group members.

Title: IMPROVING THE SEARCH AND CATALOGUING OF ITEMS IN C2C E-COMMERCE PORTALS Author(s): Antonio Gallardo, Jose Jesus Castro-Schez, Milagros Hazas and Juan Moreno-Garcia Abstract: Business conducted among consumers via e-commerce is becoming increasingly important. In this paper, we propose to make use of fuzzy logic with the aim of improving the search and cataloguing of goods and services in Consumer-to-Consumer electronic commerce (e-commerce) portals (e.g., eBay).
These portals are the media through which most electronic transactions among consumers are conducted today. We suggest a method that tries to adapt to users' real needs. It allows the buyer to carry out searches in an imprecise way, and the seller to manage catalogues of items (goods or services) that are likewise described inexactly.

Title: AUTOMATIC ORCHESTRATION OF WEB SERVICES THROUGH SEMANTIC ANNOTATIONS Author(s): Philippe Larvet Abstract: A new service can be developed as an orchestrated composition of existing web services. This paper describes an original process to automate the composition of semantic web services by processing their "semantic tags". These tags can be extracted from the WSDL descriptions of the services and inserted into a lightweight semantic description attached to the operations of the considered web services. A specific mechanism can examine these tags and automatically determine the possible "connectivity" of two given web services: if the output of WS1, for example, semantically fits the input of WS2, then the two web services are semantically connectable. This process can be used within the context of a service creation environment, in which the developer often wishes to assemble different services corresponding to an initial request. By using the semantic tags, a specific composition mechanism is able to connect the chosen services automatically and to assemble them to produce a final service that fits the original request.

Title: INTERACTION BELIEFS - A WAY TO UNDERSTAND EMERGENT ORGANIZATIONAL BEHAVIOUR Author(s): Marco Stuit, Nick B. Szirbik and Cees de Snoo Abstract: We assume that business processes consist of sets of individual and collaborative activities performed by actors and of the interactions between them. Each interaction involves multiple roles, which can be played by various agents – human or artificial. These have their own local beliefs and expectations about the behaviour(s) of the other participant(s).
We represent these beliefs using the ‘interaction belief’ concept. We show how a designer can reason about an interaction belief, how it can be modelled and what diagrams are necessary, and how it is constructed for the purpose of simulation and agent development. Some of the differences between workflow modelling and agent-oriented process modelling are discussed. In order to illustrate the new concept and how it is used operationally, we present a business interaction example that shows how agents equipped with interaction beliefs can enact a business process in a non-centralised, emergent manner. Finally, we explore some interesting questions that have arisen from the introduction of the interaction belief concept and outline some promising topics for future research.

Title: DYNAFLOW: AGENT-BASED DYNAMIC WORKFLOW MANAGEMENT FOR P2P ENVIRONMENTS Author(s): Adriana Vivacqua, Wallace Pinheiro, Ricardo Barros, Amanda de Mattos, Nathalia Cianni, Pedro Monteiro, Rafael de Martino, Vinicius Marques, Geraldo Xexeo, Jano Souza and Daniel Schneider Abstract: Many projects are characterized by their flexibility and a high number of changes before a definitive solution is implemented. In these scenarios, the people involved may change, as may deadlines, assignments and roles, especially when the projects span a long period of time. Traditional workflow systems do not handle dynamic scenarios well, as they are centralized and pre-defined at the start of the project. To address these problems, there is a need for systems that are able to adapt to the situation, dealing with the dynamic aspects of the design process. In this paper, we present a P2P approach to dynamic workflow management, where peers may join or leave and roles may change depending on the situation. DynaFlow is an agent-based system, where agents take action when exceptions occur.
This type of system could provide adequate support for dynamic groups, such as open source projects, where participation is fluid and changes according to members’ availability.

Title: SUMMARIZING DOCUMENTS USING FRACTAL TECHNIQUES Author(s): M. Dolores Ruiz and Antonio B. Bailón Abstract: Every day we search for new information on the Web, and we find many documents containing pages with a great amount of information. There is a big demand for automatic summarization that is both rapid and precise. Many methods have been used in automatic extraction, but most of them do not take into account the hierarchical structure of documents. A novel method using the structure of the document was introduced by Yang and Wang in 2004; it is based on a fractal view method for controlling the information displayed. We explain its drawbacks and solve them using the new concept of the fractal dimension of a text document, achieving a better diversification of the extracted sentences and improving the performance of the method.

Title: CORPORATE CULTURE: A NEW CHALLENGE TO E-SUPPLY CHAIN MANAGEMENT SYSTEMS Author(s): Khalid Al-Mutawah, Vincent Lee and Yen Cheung Abstract: Traditional supply chain management systems (SCMs) are giving way to opportunities provided by the rapid growth in Internet applications. The purpose of this study is to enhance the understanding of the structure of electronic SCMs (e-SCMS) and the role that corporate culture plays in e-SCMS performance. To achieve this, two path-dependence research questions are stated: how can e-SCMS be described, and how can corporate culture influences be measured? To address the first question, an in-depth literature review was conducted to describe the structure of e-SCMS. For the second question, a qualitative research approach was used, in which a case study in the Australian pharmaceutical supply chain industry was reviewed.
Preliminary findings indicate that the organizational links in the e-supply chain can be described as a rotating ring that facilitates the flexibility needed to form new strategic alliances. The findings also indicate a matrix model as a possible tool to measure the need to implement corporate cultural changes for improved corporate performance.

Title: DISCOVERING SEMANTIC WEB SERVICES IN FEDERATED DIRECTORIES Author(s): Michael Schumacher, Tim van Pelt, Ion Constantinescu, Alexandre de Oliveira e Sousa and Boi Faltings Abstract: This paper presents a flexible federated directory system called WSDir, which allows registration and discovery of semantic web services. Our directory system is used in a context where ubiquitous e-health services should be flexibly coordinated and pervasively provided to the mobile user by intelligent agents in dynamically changing environments. The system has been modeled, designed and implemented as a backbone directory system to be searched by an infrastructure made up of such agents coordinating web services. The system is modeled as a federation: directory services form its atomic units, and the federation emerges from the registration of directory services in other directory services. Directories are virtual clusters of service entries stored in one or more directory services. To create the topology, policies are defined on all possible operations that can be called on directories. For instance, they allow for routed registration and selective access to directories.

Title: A WEB-BASED CENTRAL GATEWAY INFRASTRUCTURE IN THE AUTOMOTIVE AFTER-SALES MARKET - BUSINESS INTEROPERABILITY THROUGH THE WEB Author(s): Geert Houben, Kris Luyten, Karin Coninx and Frank Schönherr Abstract: The Block Exemption Regulation of the European Commission was enacted in 2002 with the goal of strengthening competition between dependent and independent repairers in the automotive after-sales market.
The FP6 MYCAREVENT project embraces these goals while triggering new business opportunities by activating a mobile accessible infrastructure with a single gateway to different kinds of resources. This information procurement framework allows customers to find specific vehicle repair and diagnostic data from different car manufacturers and third parties in the same way. In order to provide a higher degree of accessibility, extensibility and adaptivity, the service-oriented infrastructure presented in this paper is web-based and consists of three main components: mobile clients, the Service Portal and Remote Services. New communication and multimedia technologies are employed to improve the interoperability, usability and maintenance of the underlying Mobile Service World. In this paper we focus on the architecture of this highly flexible procurement infrastructure. Standardized elements and methodologies ensure an integrated solution and enable easy integration of new content, services and components.

Title: ADAPTIVE WORKFLOWS FOR SMART DEVICES - A CONCRETE APPROACH TOWARDS DEVICE FAILURES Author(s): Seng Loke, Sea Ling, Maria Indrawan and Suryani Kurniati Abstract: Smart devices in an environment (e.g., home, factory, military settings, in-vehicle, office, etc.) can be programmed and coordinated by a workflow in advance to achieve a user's goal. No matter how advanced or smart the devices are, they can fail during workflow execution. In this paper, we describe an approach to remedy such situations. We apply the existing concept of adaptive workflow management to a collection of devices, called a "device ecology". Information about the devices is kept in a device hierarchy so that a suitable substitute device that can perform a similar task can be retrieved to replace a failed device, ensuring the workflow can continue execution. Similarity is defined based on a device hierarchy expressed in an ontology language. A prototype has been implemented as proof of concept.
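The device-hierarchy lookup described in the adaptive-workflows abstract above can be sketched in a few lines. This is purely illustrative and not the authors' implementation: the hierarchy, device names and similarity rule (sharing the nearest common ancestor class) are all hypothetical.

```python
# Hypothetical sketch of substitute-device lookup in a device hierarchy.
# All device types and the hierarchy itself are invented for illustration.

# child -> parent links of a small device ontology
HIERARCHY = {
    "ceiling_lamp": "lamp",
    "desk_lamp": "lamp",
    "lamp": "light_source",
    "tv_backlight": "light_source",
}

def ancestors(node):
    """Yield a type and its ancestor classes, nearest first."""
    while node is not None:
        yield node
        node = HIERARCHY.get(node)

def find_substitute(failed_type, available):
    """Pick the available device whose type is closest to the failed one.

    `available` maps device names to their types; we widen the search
    class by class, from the failed device's direct class upward.
    """
    for klass in ancestors(HIERARCHY.get(failed_type)):
        for device, dev_type in available.items():
            if klass in ancestors(dev_type):  # dev_type falls under klass
                return device
    return None  # nothing similar enough: the workflow cannot be repaired

# If the ceiling lamp fails, another "lamp" beats a generic light source.
print(find_substitute("ceiling_lamp",
                      {"desk1": "desk_lamp", "tv1": "tv_backlight"}))  # → desk1
```

In the paper itself the similarity would come from an ontology-language reasoner rather than a hand-written dictionary, but the retrieval pattern (walk up the failed device's class, match any available device under the same class) is the same idea.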
Title: ONTOLOGY-BASED DYNAMIC SERVICE COMPOSITION USING SEMANTIC RELATEDNESS AND CATEGORIZATION TECHNIQUES Author(s): Yacine Rezgui and Samia Nefti Abstract: Organizations need to migrate their legacy systems to higher-order applications capable of engaging in automated modes of collaboration to support distributed business processes. This requires a change of focus from intra-enterprise system integration through agreed data structures to inter-enterprise business process integration through smart composition of web-service applications. The paper presents an approach aimed at supporting ontology-based semantic composition of web services to support distributed electronic business processes. This new generation of composite services is semantically coordinated and pervasively provided in a secure, scalable, and resource-aware environment. Two services at the heart of the service composition exercise are featured, namely the semantic compatibility and categorisation services.

Title: A VIRTUAL LABORATORY FOR WEB AND GRID ENABLED SCIENTIFIC EXPERIMENTS Author(s): Francesco Amigoni, Mariagrazia Fugini and Diego Liberati Abstract: In current economic and scientific scenarios, interaction and organization models tend to be more and more oriented towards flexibility of relationships, heterogeneity of elements, and collaboration among divisions. These requirements impose new approaches to problem-solving procedures and push technologies towards new solutions for both cooperation and competition. A possible approach, which is at the same time a technical solution and an organizing paradigm, is based on the concept of the Virtual Organization. This paper, starting from the Virtual Organization paradigm and from workflows, shows an approach to the definition and execution of distributed scientific experiments as sets of services executed on distributed collaborating sites at different heterogeneous organizations.
The focus is on flexibility, reuse, orchestration, collaboration, and interoperability of services within a cooperation process. The workflow of the experiment can be specified by actors with little information technology expertise but high domain knowledge. The context of the work is e-Science, in particular bioinformatics, but the presented concepts can easily be generalized and extended to other classes of business interaction. A prototype environment is described.

Title: VALUE CREATION FOR SMES USING COLLABORATIVE COMMERCE MARKETPLACES Author(s): Yen Ping Cheung, Daisy Seng and Jay Bal Abstract: To compete with low-cost competitors from other regions of the world, collaborative commerce marketplaces (CCMs) can assist SMEs in innovating and rejuvenating their business. For instance, CCMs allow partners' capabilities to be configured very quickly in response to market demand in order to collectively bid for tenders and projects. Through collaboration with partners in the CCM, SMEs are able to venture outside their regions to capture new markets. A comprehensive, visual and dynamic CCM model is presented in this paper, which can be used as a basis for further study of CCMs. Two selected case studies from a CCM are used to verify the proposed model. The layered approach of the model provides opportunities for further examination of the dynamic and complex interactions in CCMs.

Title: BUSINESS INNOVATION VIA COLLABORATION - E-MANUFACTURING: WEB-BASED COLLABORATION SYSTEMS FOR SMES Author(s): Kwangyeol Ryu, Seokwoo Lee, Wonpyo Hong, Dongyoon Lee and Honzong Choi Abstract: Unpredictable challenges from global markets and customers make it difficult for manufacturers to produce quality products that satisfy cost and time constraints. To cope with competitive and dynamically changing internal and external conditions, the manufacturing industry needs to be equipped with advanced technologies, including IT, as well as substantial infrastructure.
“e-Manufacturing” refers to a system methodology enabling the integration of the manufacturing operations and functional objectives of an enterprise by using intelligent IT such as the Internet and tether-free communication methods, including wireless networking and web-based connections. The key factor in e-Manufacturing is collaboration. Hence, we have developed four kinds of web-based collaboration systems, referred to as hub systems, while conducting our e-Manufacturing project funded by the Korean government. In this paper, the functions and characteristics of each collaboration hub system are introduced. Furthermore, a case study of business innovation achieved by applying the collaboration systems to SMEs (small and medium-sized enterprises) is carried out. By applying the collaboration systems, SMEs can achieve competitiveness through effective web-based tools and accomplish the business innovation that allows them to survive in a global market.

Title: A NOVEL APPROACH FOR ROBUST WEB SERVICES PROVISIONING Author(s): Quan Z. Sheng and Anne H. H. Ngu Abstract: The availability and reliability of Web services are important issues for developing many electronic business applications. Unfortunately, it is hard to guarantee the availability of a service, given that the number of requests for it might be potentially huge. In this paper, we propose a novel approach for robust Web service provisioning based on mobile agent and resource discovery technologies. With our approach, new service instances can be instantiated at appropriate idle computing resources on demand, thereby reducing the risk of a service being unavailable. We present a matchmaking algorithm for resource selection, as well as a multi-phase resource planning algorithm for composite Web services.
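The resource matchmaking step mentioned in the provisioning abstract above can be sketched as a filter-then-rank procedure. This is an illustrative sketch only, not the paper's algorithm: the attribute names (`idle`, `cpu`, `memory`) and the spare-capacity ranking are assumptions introduced here.

```python
# Hypothetical sketch of resource matchmaking for on-demand service
# instantiation; resource attributes and the ranking rule are invented.

def matches(resource, requirements):
    """A resource qualifies if it is idle and meets every stated requirement."""
    return resource["idle"] and all(
        resource.get(attr, 0) >= needed for attr, needed in requirements.items()
    )

def select_resource(resources, requirements):
    """Among qualifying resources, prefer the one with the most spare capacity."""
    candidates = [r for r in resources if matches(r, requirements)]
    if not candidates:
        return None  # no idle resource can host a new service instance
    return max(candidates, key=lambda r: r["cpu"] + r["memory"])

hosts = [
    {"name": "a", "idle": True,  "cpu": 2,  "memory": 4},
    {"name": "b", "idle": True,  "cpu": 8,  "memory": 16},
    {"name": "c", "idle": False, "cpu": 16, "memory": 32},  # busy, so excluded
]
print(select_resource(hosts, {"cpu": 4, "memory": 8})["name"])
```

A real matchmaker would discover `hosts` dynamically and score on richer criteria (network locality, load forecasts), but the shape of the decision — filter out non-qualifying resources, then rank the rest — is the common core of such algorithms.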
Title: AN INDEPENDENT REPUTATION SYSTEM FOR P2P NETWORKS Author(s): Chaiyasit Tayabovorn and Songrit Maneewongvatana Abstract: Over the last few years, peer-to-peer (P2P) networks have greatly changed our online life. Users on such networks can play either the provider or the consumer role, or even both simultaneously. They can share many diverse resources directly with each other regardless of the true identity of the interacting participants. To utilize such networks securely, a reputation-based system has become a critical part of today's P2P networks. Unfortunately, in current P2P networks, the reputation data used by the reputation system is still limited and valid only within the community to which its referrers and raters belong. This inflexibility causes problems in many respects. In this paper, we propose a model of a P2P reputation-based system whose main concept is to create an independent P2P community, called the reputation community, which maintains reputation data of various kinds from other communities. With this concept, we believe the reputation system can be made more flexible, scalable, and secure.

Title: INTERACTION-ORIENTED COLLABORATIONS Author(s): Giorgio Bruno Abstract: This paper addresses binary collaborations and choreographies based on web services technology. The nature of the problem leads to two complementary approaches: one focuses on activities, the other on interactions. This paper follows the interaction-oriented approach and proposes a modeling notation, called Interaction-Oriented Nets (IONs), which allows binary collaborations, choreographies and abstract orchestration models (i.e., abstract business processes made up of communication activities) to be represented homogeneously.
Title: A CONTEXT-AWARE SEMANTIC WEB SERVICE EXECUTION AGENT Author(s): António Luís Lopes and Luís Miguel Botelho Abstract: This paper presents research on agent technology for context-aware execution of semantic web services, more specifically the development of SEA (Service Execution Agent), a semantic web service execution agent that uses context information to adapt the execution process to a specific situation, thus improving its effectiveness and providing a faster and better service. Preliminary results show that the introduction of context information and context-aware capabilities into a service execution environment can speed up the execution process, in spite of the overhead introduced by the agents’ communication and processing of context information. The developed service execution agent uses standards such as OWL-S service descriptions and WSDL grounding information. An Agent Grounding definition has also been proposed to enable the execution of semantic web services provided by agents. The agent was implemented by integrating and/or extending the following software tools: JADE, the OWL-S API and XSP.

Title: DEVELOPING AGILE USER INTERFACES FOR HETEROGENEOUS DEVICES IN BUSINESS PROCESSES Author(s): Yaojin Yang and Lasse Pajunen Abstract: With the increasing popularity of mobile devices, people now want to use various devices, in different circumstances or for different tasks, to access services provided by business processes. To support this kind of heterogeneity, user interfaces must be agile enough to adapt. In our research, we have identified five key requirements and five principles to help developers achieve this. Furthermore, we have introduced the concepts of user interface process and user interface service, positioning user interface development for business processes within the bigger picture of Web Services and Service-Oriented Architecture.
Our research results are presented through a case study developing a group messaging system.

Title: CD-LOMAS: A COLLABORATIVE DISTRIBUTED LEARNING OBJECT MANAGEMENT SYSTEM Author(s): Andrea De Lucia, Rita Francese, Ignazio Passero and Genoveffa Tortora Abstract: Learning Objects are stored in repositories and spread through the Internet. The educational sector needs to share good-quality educational contents, which can be reused and adopted in several contexts. In this paper we present CD-LOMAS (Collaborative Distributed Learning Object MAnagement System), which supports the sharing of contents and collaboration on their development in a highly distributed environment. Complex Learning Objects are decomposed into simpler Learning Objects that can be distributed at different sites. In CD-LOMAS, artifact management features, such as coordination of cooperative workers and versioning, are integrated with context awareness.

Title: BROADBAND TECHNOLOGIES AND THE ACCESS NETWORK - EVALUATION TOOL PROPOSAL Author(s): João Paulo Ribeiro Pereira and José Adriano Pires Abstract: The deployment of broadband technologies is a key driver of economic development, productivity, and social advancement, and has seen great growth over the past decade. However, most of this development and growth has been in the core networks, while the capacity of the access network to deliver broadband services remains a challenge (the "last mile problem"). The access network remains a bottleneck in terms of the bandwidth and service quality it affords the end user. On the other hand, the access network is much more geographically spread out and covers larger areas. Consequently, this part of the network is usually the most expensive component in terms of capital investment and OAM cost; some studies report that these networks require 70% of the total investment.
Several access technologies can be used in this part of the network to resolve the bandwidth bottleneck and the investment problem: xDSL, HFC, FTTx, FWA, WiMAX, PLC, satellite, etc. The goal of this paper is to identify all the essential costs of building broadband access networks, and then to compare different technologies in various scenarios. In order to do this, we have developed a model framework and an evaluation tool. The paper presents a techno-economic analysis of eight broadband technologies for access networks: digital subscriber line (DSL), hybrid fiber coax (HFC), power line communications (PLC), fiber to the home (FTTH), fiber to the curb (FTTC), fiber to the cabinet (FTTCab), and wireless alternatives such as WiMAX and satellite. Several actors (such as operators, service providers, etc.) could use this tool to compare different technological solutions, forecast deployment costs, compare different scenarios, and so on. Title: DATA QUALITY FOR EFFECTIVE E-COMMERCE CUSTOMER RELATIONSHIP MANAGEMENT Author(s): Tanko Ishaya and Julien Raigneau Abstract: The quality of web data has become a critical concern for organisations, and it has been an active area of Internet computing research. Despite its importance and many years of active research and practice, the field still requires ways of assessing and improving data quality. This paper presents a framework for assessing the quality of customer web data and a well-defined set of metrics for quantifying it. A prototype has been designed and implemented to demonstrate the usefulness of the data lifecycle and metrics for assessing the quality of customer data. Title: KEY ISSUES FOR LEARNING OBJECTS EVALUATION Author(s): Erla M. Morales Morgado, Ángela Barrón Ruiz and Francisco J. García Peñalvo Abstract: Web development is bringing important advantages to the educational area, especially to e-learning systems. 
On the one hand, Learning Objects (LOs) offer the possibility of reusing specific information; on the other hand, they can be interchanged across different contexts and platforms according to the user's needs. However, there is an urgent need to guarantee the quality of LO content. There exists a plethora of quality criteria for evaluating digital resources, but there are only a few suggestions about how to evaluate LOs in order to structure quality courses. This work proposes evaluating LOs as a continuous process, taking into account different kinds of LO evaluation (Context, Input, Process and Product) and quality criteria related to metadata information, pedagogical and usability issues, together with a strategy to ensure continued quality of LO contents. Title: WHICH CONTRIBUTION OF THE WEB SERVICES IN THE IMPROVEMENT OF WEB SEARCHING? A BEHAVIOURAL STUDY OF THE NET SURFERS Author(s): Christian Belbèze and Chantal Soulé-Dupuy Abstract: The difficulty of finding information on the Web grows, even for the most expert among us. In order to better understand how Net surfers search, we observed five adults and four children. Two different observation protocols, both including imposed and free searches, were defined for the children and the adults. We were thus able to identify a number of behaviours, attitudes and difficulties, and to classify them. The results of these observations, as well as the analysis of the behaviours, are presented in this paper. Building on these observations, the main goal of this paper is to present the contribution of Web services to the improvement of information access and Web searching. Title: TOWARDS COOPERATION AMONG COMPETITIVE TRADER AGENTS Author(s): Paulo André Lima de Castro and Jaime Simão Sichman Abstract: In order to manage their portfolios, human traders in stock markets use a set of algorithms created by economists, based on stock price series, to determine buy and sell signals. 
These algorithms are usually referred to as technical analysis. However, traders prefer to use several algorithms as indicators, rather than choosing a single algorithm. The signals these provide (often combined with fundamental signals and the trader's experience) are used to determine the trade order, or to decide not to submit any order. Some work tries to create new algorithms with learning skills in order to trade autonomously, building better algorithms using AI techniques. Inspired by traders' decision processes, our approach composes heterogeneous autonomous trader agents into a competitive multi-agent system. This architecture allows the use of several algorithms based on different technical analysis indexes to manage portfolios. We have implemented this architecture and performed a set of simulation experiments using real data. The system's results were compared to the performance of agents playing alone. Our results show better performance when traders compete with each other for resources. These results indicate that competition among agents, as proposed here, may achieve very good results, even among agents created to act alone in this kind of market. Title: A CASE STUDY ON THE APPLICATION OF THE MAAEM METHODOLOGY FOR THE SPECIFICATION MODELING OF RECOMMENDER SYSTEMS IN THE LEGAL DOMAIN Author(s): Lucas Drumond, Rosario Girardi and Adriana Leite Abstract: Recommender systems have been the target of continuous research over the last years, being used as an approach to the information overload problem. The Semantic Web is a new generation of the Web that aims at improving the effectiveness of information access on the Web by structuring its content in a machine-readable way. Agents have also been an object of active research in the software engineering field, considering the high level of abstraction for software development provided by the multi-agent paradigm. 
This paper describes the modeling of Infonorma, a multi-agent recommender system for the legal domain developed under the guidelines of MAAEM, a methodology for multi-agent application development, which is also evaluated here. Title: TOWARDS MIDDLEWARE SUPPORT FOR PERVASIVE COMPUTING Author(s): Dionisis X. Adamopoulos Abstract: Mobile ad hoc networks (MANETs) are dynamically reconfigurable multi-hop wireless networks with no fixed infrastructure, consisting of radio-equipped mobile hosts. Each host acts as a router and moves in an arbitrary manner. In such a network environment, routing becomes a challenging task that can be significantly supported and facilitated by the exploitation of location information, as this paper argues. More specifically, after a brief introduction to routing in MANETs, a location discovery algorithm is proposed. The paper then focuses on location-aware routing and, after briefly presenting the most important related protocols, attempts to compare them based on a number of qualitative properties. Finally, the emergence of location-aware services is discussed, a service discovery scenario based on Jini technology is proposed, and important related deployment challenges are highlighted. Title: WEB PLATFORM TO SUPPORT THE SHARING AND REMOTE ACCESS OF MEDICAL IMAGES Author(s): Sérgio Lima, Natércia Sousa, Carlos Costa and Augusto Silva Abstract: The production of digital medical images has been growing in every healthcare institution, and such images nowadays represent one of the most valuable tools supporting medical decision processes and treatment procedures (Hannan, 1999). One of the most important advantages of these digital systems is that they simplify the widespread sharing and remote access of medical data between healthcare institutions. However, due to security and performance issues, the usage of these software packages has been restricted to intranets. 
In general, the storage and transmission of digital medical images is based on the international DICOM standard and on PACS systems. This paper analyses the traditional PACS communication limitations that contribute to their reduced usage over the Internet. An architecture based on Web services and on the encapsulation of DICOM objects in HTTP is also proposed, to enable trans-institutional medical data transfers. Title: APPLYING HYBRID RECOMMENDATION POLICIES THROUGH AGENT-INVOKED WEB SERVICES IN E-MARKETS Author(s): Alexis Lazanas, Nikos Karacapilidis and Vagelis Katsoulis Abstract: Diverse recommendation techniques have already been proposed and encapsulated into several e-business systems, aiming to perform a more accurate evaluation of the existing alternatives and accordingly augment the assistance provided to the users involved. Extending previous work, this paper focuses on the development of an agent-invoked web service that is responsible for the coordination of the system's recommendation module. The service is invoked through a corresponding software agent that has already been implemented in our system's platform, and performs the tasks of recommendation policy synthesis as well as the formulation of the appropriate knowledge rules. The service is built with attention to coordination issues concerning the communication and data interchange between the recommendation web service and its corresponding agent. The adopted web service architecture and the issues raised during its development are presented as well. Title: A SIMULATION-BASED DIFFERENCE DETECTION TECHNIQUE FOR BOTTOM-UP PROCESS RECONCILIATION Author(s): Xi Chen and Paul W. H. Chung Abstract: With an increasingly dynamic and changing business environment, bottom-up approaches to business process collaboration are currently receiving a great deal of attention in the research community. Bottom-up approaches are seen to be more flexible than top-down approaches. 
However, none of the available techniques for process collaboration is suitable for process reconciliation, which is a common problem when different organisations have to work together. In order to address the issue in a bottom-up way, a simulation-based technique for detecting the differences between any two given processes is proposed. It is based on extended definitions of process collaborative compatibility and is the core of the process reconciliation mechanism. Title: A MODEL TO OPTIMIZE THE USE OF IMAGING EQUIPMENT AND HUMAN SKILLS SCATTERED IN VERY LARGE GEOGRAPHICAL AREAS Author(s): Daniel Ferreira Polónia, Carlos M. A. Costa and José Luís Oliveira Abstract: Recent studies have shown that the good geographical coverage of Imagiologic Information Systems and equipment such as Picture Archiving and Communication Systems (PACS) is not matched by similar coverage levels of radiologists, especially in rural and academic health institutions. In this paper, we address this problem by proposing a twofold solution: the first part is process-based, optimizing work assignment within pools of human resources according to service providers' availability; the second part is technology-based, interconnecting the PACS equipment of all the health institutions and the geographically dispersed radiologists. After describing the high-level solution, we present some results of the implementation of this concept and some of the technical challenges still to be overcome. Finally, the conclusion presents the impact of the system on the involved institutions. 
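The availability-based work assignment described in the abstract above (routing imaging studies to geographically dispersed radiologists according to who is available) can be illustrated with a toy greedy scheduler. This is a hypothetical sketch, not the authors' algorithm; all names (`assign_studies`, `dr_a`, study ids) are invented for illustration:

```python
# Toy illustration of availability-based work assignment: each imaging
# study goes to the available radiologist with the lightest current
# workload. Hypothetical sketch only, not the algorithm from the paper.

def assign_studies(studies, radiologists):
    """studies: list of study ids.
    radiologists: dict mapping name -> available (bool).
    Returns dict mapping each available name -> list of assigned studies."""
    workload = {name: [] for name, avail in radiologists.items() if avail}
    if not workload:
        raise ValueError("no radiologist is available")
    for study in studies:
        # Pick the available radiologist with the fewest assigned studies.
        least_loaded = min(workload, key=lambda name: len(workload[name]))
        workload[least_loaded].append(study)
    return workload

if __name__ == "__main__":
    plan = assign_studies(
        ["CT-1", "MR-2", "CT-3", "US-4", "MR-5"],
        {"dr_a": True, "dr_b": True, "dr_c": False},
    )
    # Studies end up balanced between dr_a and dr_b; dr_c receives none.
    print(plan)
```

A real system would of course weigh study complexity, sub-specialty and time zones rather than plain counts; the sketch only shows the shape of the assignment step.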
Title: CONEWS: A COLLABORATIVE APPROACH TO ONLINE NEWS STORIES Author(s): Daniel Schneider, Jano de Souza and Ercilia de Stefano Abstract: In this paper we present the CoNews framework, a hybrid approach to publishing online news stories, consisting of the combination of news entries extracted from authoritative sources - retrieved from news search engines - with blog articles and less-authoritative user-submitted stories. The goals of this research include experimenting with new forms of mass communication and editorial control, as well as improving users' access to the ever-expanding news information available to them. By creating an appropriate environment, we encourage users not only to become a more active audience for news stories and other issues, but also to contribute their own articles to the body of knowledge. Title: ADAPTIVE INFORMATION PROCESSING IN A DOMAIN ONTOLOGY USING RECURSIVE TRANSFER FUNCTIONS TO DETERMINE THE NON-DETERMINISTIC VICINITY OF INTELLIGENT AGENTS Author(s): Muthu Chithambara Jothi and Kabilan Giridharan Abstract: An adaptive walk over a semantic network becomes possible by describing the domain information in the form of an ontology. The theme of this paper is how intelligent agents, through non-deterministic automation, leverage the relationships between domain entities in an object model represented as an OWL or RDF resource. A successful adaptive walk over a semantic network depends on determining the vicinity of the intelligent agent by analyzing the current service it has received in relation to the prime goal it needs to achieve. The idea of recursive transfer functions is to make the agents travel through the semantic network until the final goal is achieved. A recursive transfer function takes the current service received from an entity in the semantic network as its parameter and evaluates it in the light of the prime goal in order to derive the next set of possible states. 
The next set of possible states lies along the line of the prime goal to be achieved. The adaptive walk over the semantic network enables the agents to act as proxies for human beings, thereby fulfilling their business needs by traversing the vast network of interconnected web resources. Title: E-COMMERCE TRANSACTION MODELING USING MODERATELY OPEN MULTI-AGENT SYSTEMS Author(s): A. Garcés, R. Quirós, M. Chover, J. Huerta and E. Camahort Abstract: In this paper we describe how to completely develop a Multi-Agent System using the HABA Development Framework. We propose a variant of the GAIA methodology to reduce the gap between the abstract modeling of Multi-Agent Systems and their practical implementation. To achieve this goal, we reduce the scope of our methodology to a specific class of systems that we call Moderately Open Multi-Agent Systems. As an example, we use the implementation of a set of transactions for an electronic commerce system. Title: AN INTELLIGENT AGENT BASED APPROACH TO MOSAICA'S PEDAGOGICAL FRAMEWORK Author(s): Nazaraf Shah, Jawed Siddiqi and Babak Akhgar Abstract: There has been a significant amount of research on the application of intelligent agent technology in virtual training and pedagogical environments. The characteristics of intelligent agents, such as flexible behaviour, autonomy and sociability, make them highly suitable candidates for overcoming the shortcomings of the traditional course management systems, online learning and Web resources available to learners. In this paper we propose an agent-based approach to manage virtual expeditions and learning activities in the Semantically Enhanced, Multifaceted, Collaborative Access to Cultural Heritage (MOSAICA) project. We believe that the Belief-Desire-Intention (BDI) model of intelligent agents better serves the pedagogical requirements identified in the MOSAICA project. The cognitive characteristics of the MOSAICA pedagogical framework map easily to the mental model of BDI intelligent agents. 
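The Belief-Desire-Intention (BDI) model mentioned in the abstract above can be sketched in a deliberately minimal deliberation loop: beliefs are facts, desires are candidate goals, and the agent commits to a desire when its precondition is satisfied by the current beliefs. This is a generic illustration of BDI, not the MOSAICA implementation; all class, method and goal names are invented:

```python
# Minimal generic BDI deliberation loop (hypothetical sketch, not the
# MOSAICA agents): beliefs are a set of facts; each desire carries a
# precondition set and a plan; acting on a plan may add new beliefs.

class BDIAgent:
    def __init__(self, beliefs):
        self.beliefs = set(beliefs)   # what the agent currently holds true
        self.desires = []             # list of (goal, preconditions, plan)
        self.intentions = []          # goals the agent has committed to

    def add_desire(self, goal, preconditions, plan):
        self.desires.append((goal, set(preconditions), plan))

    def deliberate(self):
        """Commit to each desire whose preconditions hold, then run its plan;
        plans return new beliefs, which can enable later desires in the pass."""
        for goal, pre, plan in self.desires:
            if pre <= self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)
                self.beliefs |= plan()
        return self.intentions

if __name__ == "__main__":
    agent = BDIAgent(beliefs={"learner_online"})
    agent.add_desire("start_expedition",
                     preconditions={"learner_online"},
                     plan=lambda: {"expedition_started"})
    agent.add_desire("show_summary",
                     preconditions={"expedition_started"},
                     plan=lambda: set())
    print(agent.deliberate())  # → ['start_expedition', 'show_summary']
```

Because the first plan adds `expedition_started` to the beliefs mid-pass, the second desire's precondition is satisfied in the same deliberation cycle, which is the chaining behaviour that makes BDI-style commitment useful for sequenced pedagogical activities.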
Title: AN AGENT BASED INFORMATION SYSTEM FOR COMMUNITIES MEDIATION Author(s): Aluizio Haendchen Filho, Hércules Antonio do Prado, Miriam Sayão and Fénelon do Nascimento Neto Abstract: The adoption of the Multi-Agent System paradigm in the context of Enterprise Information Systems has been accelerated by the technology brought by the Internet. The importance of MAS applications increases as the ubiquity of the Internet, with its distributed and interconnected elements, becomes a de facto reality. However, the development of MAS is not trivial; agent-based systems are typically complex and difficult to develop due to the features required, some of them hard to implement. In this paper we briefly describe MIDAS, a service-oriented (SOA) framework built on a reusable, adaptable and loosely coupled architecture, that aims to help in the development of MAS applications. An application in the domain of expert/customer mediation is presented to show the advantages of the framework. After that, the advantages of applying the SOA standard over the traditional message-based approach to MAS development are discussed. Title: CONTEXT AWARENESS OF MOBILE CONTENT DELIVERY BASED ON FINE LOCATION ESTIMATE Author(s): Tomohisa Yamashita, Daisuke Takaoka, Noriaki Izumi, Akio Sashima, Koichi Kurumatani and Koiti Hasida Abstract: In this paper, to tackle uncertainty in the real world, a lightweight ontology-driven approach is proposed for the realization of context-dependent services. We concentrate on position information and operation history, as a user's context, and develop our location-aware content delivery system. An evaluation experiment with our location estimation engine was performed in the Akihabara Software Showcase at the Information Technology Research Institute. Furthermore, through a demonstration experiment at Expo 2005 Aichi, our proposed architecture was confirmed to enable real-world applications of context dependency. 
Finally, we compare our location-aware content delivery system with related research, and discuss the advantages of our system. Title: MULTI-LEVEL TEXT CLASSIFICATION METHOD BASED ON LATENT SEMANTIC ANALYSIS Author(s): Hongxia Shi, Guiyi Wei and Yun Pan Abstract: In multi-level text classification, the categories are related hierarchically, and categories in the same layer share a certain generality. By applying LSA theory to multi-level text classification, the semantic relationships between words are better represented, and the weight equations are adjusted to be more reasonable. This method extends the traditional vector space model to an LSA space model, and subsequent experiments obtained very good results. Title: STRATEGIC ALIGNMENT OF E-BUSINESS DEVELOPMENT - PERFORMANCE OUTCOMES FOR MANUFACTURING SMES Author(s): Louis Raymond and François Bergeron Abstract: Facing pressures from an increasingly competitive business environment, manufacturing SMEs are called upon to implement strategies that are enabled and supported by information technologies and e-business applications. Based on Internet and Web technologies, these include applications such as e-communication, e-commerce, e-business intelligence and e-collaboration. From a contingency theory perspective, and using survey data obtained from 107 Canadian manufacturing SMEs, this study examines the alignment of e-business development with business strategy, based on Miles and Snow's strategic typology. The performance outcomes of this alignment in terms of growth, productivity and financial performance are also examined. Results indicate that the ideal e-business development profiles vary in relation to the firms' strategic orientation, whether it is of the Defender, Analyzer or Prospector type. 
Title: A COLLABORATIVE, SEMANTIC AND CONTEXT AWARE SEARCH ENGINE - POSITION PAPER Author(s): Manuela Angioni, Roberto Demontis, Massimo Deriu, Emanuela De Vita, Cristian Lai, Ivan Marcialis, Gavino Paddeu, Antonio Pintus, Andrea Piras, Raffaella Sanna, Alessandro Soro and Franco Tuveri Abstract: Search engines help people find information in the largest public knowledge system in the world: the Web. Unfortunately, its size makes it very complex to discover the right information. Users are faced with lots of useless results, forcing them to select the most suitable ones one by one. The new generation of search engines is evolving from keyword-based indexing and classification to more sophisticated techniques that consider the meaning, the context and the usage of information. The key aspects are collaboration, geo-referencing and semantics. Collaboration distributes storage, processing and trust over a world-wide network of nodes running on users' computers, getting rid of bottlenecks and central points of failure. The geo-referencing of catalogued resources allows contextualisation based on user position. Semantic analysis increases the relevance of the results. In this paper, we describe the studies, the concepts and the solutions developed in the DART project to introduce these three key features into a novel search engine architecture. Title: ECONOMY DRIVEN RESOURCE MANAGEMENT ARCHITECTURE FOR COMPUTE GRIDS Author(s): Kiran Kumar Pattanaik and Gadadhar Sahoo Abstract: The concept of coupling Internet-wide computational resources (high-end computers and low-end personal computing devices) to form a huge pool of compute resources that would provide cost-effective renting services is not new. Various approaches and economic models have been proposed for resource management in the Grid, but so far no economic initiative has been taken for the creation of the resource pool. 
Moreover, the mechanisms proposed so far do not guarantee a minimum expected return on investment for the resource providers, no matter how costly their services are, as they are primarily governed by a volunteer-first, generate-revenue-later policy. This may let resource consumers maximize their time/budget/quality-of-service objectives while leaving the resource providers behind, making the system one-sided. In this paper we propose a distributed Resource Market Place concept that is based on a dynamic cost model and adopts the economic institution paradigm for Compute Market creation and resource management (discovery and scheduling) in an Internet-scale distributed resource pool. Our proposed model ensures that both resource providers and consumers maximize their objectives through different auctioning strategies, and it is scalable and comparable, in terms of message overhead, with the existing ones. Title: SPECIFICATION OF A TOOL FOR MONITORING AND MANAGING A WEB SERVICES ARCHITECTURE Author(s): Youcef Baghdadi Abstract: Enterprises willing to realize a service-oriented architecture with Web services, to gain the advantages of an Internet- and standards-based IT infrastructure, need to monitor and manage the deployed Web services architecture for effective use, namely for the flexible composition of business processes. The paper presents an architecture and a specification of a tool for Web services monitoring and management. It mainly specifies the components of the architecture, which are: (1) an information system that represents the properties of the Web services architecture from different perspectives, namely their description, transport protocols, discovery, deployment platform, and the business processes composed out of them; and (2) the required monitoring and management artefacts built on top of the Web services architecture information system. Title: A SEMANTIC WEB APPROACH TO ENRICH INFORMATION RETRIEVAL ANSWERS Author(s): R. Carolina Medina-Ramírez and Víctor M. Ramos R. 
Abstract: In previous work, we presented the advantages of using a domain ontology and annotations for information retrieval, as well as the translation problems between languages with different semantic expressiveness. In this paper we focus on the viewpoint of the end-user. In particular, we explore the impact and helpfulness of a domain ontology, of semantic annotations relying on this ontology, and of semantic resource descriptions in enriching the end-user answers extracted from an information retrieval system. A system that embodies this approach is presented. We argue that it is necessary to improve the format of end-user answers in order to share and re-use knowledge. Title: A VOIP PLATFORM AS A VIRTUAL PBX SERVICE Author(s): Walter Balzano, Maria Rosaria Del Sorbo and Mario Epifania Abstract: Against the background of the increasing development of Voice over IP telephony, enabled by broadband communication over the Internet and supported by the prospect of low-cost communications, it is natural to consider the design of a communication platform for an IP-PBX. The idea is to use open source software to exploit a business LAN for implementing an IP phone switch system, in which a virtual PBX can provide services and reliable communication to users. The challenge is to virtualize many systems in a single physical system using Linux vserver utilities. The advantages of this approach are multiple: easy restore in case of crashes, security, and simple management of resources. The drawback is a heavier load on hardware resources, which can be avoided by using more powerful CPU and memory architectures. Future developments are oriented towards using virtualization to obtain optimal results in resource management. 
Title: ONTOLOGY GATEWAY - ENABLING INTEROPERABILITY BETWEEN FIPA COMPLIANT AGENTS AND OWL WEB SERVICES Author(s): Sabih Ur Rehman, Maruf Pasha, Farooq Ahmed and Hiroki Suguri Abstract: The Web Ontology Language (OWL) is a W3C standard for providing explicit semantics for establishing and sharing ontologies on the World Wide Web, while the FIPA Semantic Language is at the core of agent platforms due to its high expressive power. Ontologies play an important role in knowledge representation, reuse and communication between web services. Similarly, in a multi-agent system, ontologies play an important role in specifying explicit semantics: the messages exchanged between agents should conform to an ontology so that they can be understood. In this paper we introduce a technology enabling bidirectional interoperability between FIPA-compliant software agents and OWL web services. This extends our previous work, in which we proposed the development of semantic translations that enabled efficient communication between agents and web services. We also describe the testbed configuration through which a software agent invokes and uses a web service published in OWL, and vice versa. Title: A DESIGN FOR BUSINESS INTELLIGENCE SERVICE IN DEMAND DRIVEN SUPPLY CHAIN MANAGEMENT Author(s): T. He, P. Ribbins, R. Brown, L. Sun and C. Gurr Abstract: This paper discusses the problems inherent in traditional supply chain management's forecast and inventory management processes when tackling demand-driven supply chains. A demand-driven supply chain management architecture developed by Orchestr8 Ltd., U.K. is described to demonstrate its advantages over traditional supply chain management. Within this architecture, a metrics reporting system is designed, adopting business intelligence technology, that supports users in decision making and in planning supply activities based on supply chain health. 
Title: E-GOVERNMENT AND GRID COMPUTING - POTENTIALS AND CHALLENGES TOWARDS CITIZEN-CENTRIC SERVICES Author(s): Ivo J. Garcia dos Santos and Edmundo R. Mauro Madeira Abstract: The need for the delivery of integrated and efficient public services has increased worldwide over recent years, due mainly to the proliferation of Information and Communication Technologies. In developing these new e-Government applications, many challenges are being faced, including higher interoperability demands, scalability and security issues. Grid computing's promise to provide a vehicle for high computation and massive storage, added to its recent convergence towards service orientation, has transformed it into an interesting middleware solution for supporting e-Government applications. This position paper investigates the state of the art and the challenges concerning the use of Grid technologies as a support platform for citizen-centric services and applications. Area 5 - Human-Computer Interaction Title: UNSUPERVISED INFERENCE OF DATA FORMATS IN HUMAN-READABLE NOTATION Author(s): Christopher Scaffidi Abstract: One common approach to validating data such as email addresses and phone numbers is to check whether values conform to some desired data format. Unfortunately, users may need to learn a specialized notation such as regular expressions to specify the format; furthermore, even after learning the notation, specifying formats may take substantial time. To address these problems, we introduce Topei, a system that infers a format from an unlabelled collection of examples (which may contain errors). The generated format is presented as plain natural language, so users can review the format and customize it if desired. In addition, the generated format can be transformed into an augmented context-free grammar, so applications can automatically check data against the format and find outliers that do not match it. 
We evaluate Topei by applying it and an alternative algorithm to test data; ours shows substantially higher precision and recall. We demonstrate the usefulness of Topei by integrating it with spreadsheet, database, and web services systems. Title: EVALUATION OF FLEXIBLE POINTS ON USER INTERFACE FOR INFORMATION SYSTEM Author(s): Limin Shen, Chunyan Gao and Wenwen Jiang Abstract: In order to adapt to user requirement changes at runtime, software provides adaptable operations through user interfaces to change software functionality. We suggest the FleXible Point (FXP), together with flexible changes, flexible degree, flexible force, and flexible distance, to evaluate the effect of such user interfaces. An approach and a case study are given to illustrate the evaluation and measurement based on the FXP. The approach can be used as a guide to adjust, improve, and compare the FXPs on user interfaces. Title: TOWARDS A FORMAL MODEL OF KNOWLEDGE ACQUISITION VIA COOPERATIVE DIALOGUE Author(s): Asma Moubaiddin and Nadim Obeid Abstract: We aim, in this paper, to make a first step towards developing a model of knowledge acquisition/learning via cooperative dialogue. A key idea in the model is the concept of integrating information exchanged via dialogue within an agent's theory. The process is nonmonotonic. Dialogue is a structured process, and the structure is relative to what an agent knows about the world or a domain of discourse. We employ a nonmonotonic logic system, NML3, which formalizes some aspects of revisable reasoning, to capture an agent's knowledge and reasoning. We present a formalization of some basic dialogue moves and the protocols of various types of dialogue. We show how arguments, proofs, some dialogue moves and reasoning may be carried out within NML3. Title: ASSESSING THE USER ATTITUDE TOWARD PERSONALIZED SERVICES Author(s): Seppo Pahnila Abstract: The fast growth of the Web has caused an excess of information to become available. 
Personalized systems try to predict individuals' behavior based on user information, in order to deliver more accurate and targeted content by filtering out unimportant and irrelevant information. Prior personalization research has mostly focused on e-business issues, personalization techniques and processes, or privacy concerns. In this research, we have studied users' attitudes toward personalization and their desire to control personalized services. The results are based on a field study consisting of 196 relevant responses from the users of a personalized medical portal. We also analyzed respondents' changes in attitude toward personalization by comparing responses from two field studies. The results show that the respondents appreciate personalized information that is closely related to their occupation. The respondents accept personalized services, but they do not consider automatic content personalization to be important, nor do they appreciate automatic appearance personalization; they want to intervene in the transmitted information. Title: COLLABORATIVE AUGMENTED REALITY ENVIRONMENT Author(s): Claudio Kirner, Rafael Santin, Tereza G. Kirner and Ezequiel R. Zorzal Abstract: Face-to-face and remote collaborative learning have been successfully used in the educational area. Nowadays, technological evolution allows the implementation and improvement of interpersonal communication in networked computer environments, involving chat, audio and video conferencing, but the remote manipulation of objects remains a problem to be solved. Virtual reality and augmented reality, however, make it possible to manipulate virtual objects in a way similar to real situations. This paper discusses these subjects and presents a solution for interaction in remote collaborative environments, using conventional resources for interpersonal communication as well as augmented reality technology. 
Title: CONTRIBUTION TO THE REQUIREMENTS ENGINEERING OF VIRTUAL ENVIRONMENTS Author(s): Tereza G. Kirner, Valéria F. M. Salvador and Claudio Kirner Abstract: This paper aims to contribute to the requirements engineering of virtual environments. Requirements engineering is characterized as a process composed of the phases of elicitation, specification, and evaluation, based on concepts from the area and on experience obtained through the development of three virtual environments. The requirements engineering process is described and exemplified, and the main conclusions are pointed out.

Title: VISUALIZING THE PROCESS - A GRAPH-BASED APPROACH TO ENHANCING SYSTEM-USER KNOWLEDGE SHARING Author(s): Tamara Babaian, Wendy Lucas and Heikki Topi Abstract: Our research is concerned with developing design guidelines aimed at improving the usability of enterprise-wide information systems by employing collaborative problem solving as a model for user-system interaction. In this paper, we present our approach to addressing a critical design issue that was brought to our attention through our field research: namely, system-to-user communication involving components of a complex process flow. This approach uses a dynamic process graph and a set of related task links that are displayed alongside the traditional ERP task interfaces. We outline the collaborative framework and position our solution within it. This solution can benefit other application areas, especially those that involve protracted processes that are not familiar to the user.

Title: A WE-CENTRIC SERVICE FOR MOBILE POLICE OFFICERS TO SUPPORT COMMUNICATION IN AD-HOC GROUPS Author(s): Ronald van Eijk, Nicole de Koning, Marc Steen and Erik Reitsma Abstract: We-centric services are meant to stimulate and help people communicate and cooperate with others in dynamic or ad-hoc groups.
Typically, a we-centric service provides hints and reasons to contact others and, because these other people receive similar hints and reasons, stimulates people to experience “we”. The paper describes the development and evaluation of one we-centric service prototype for police officers. We found that the key issues in developing we-centric services are (1) finding the proper context elements and information sources to take into account when searching for relevant others, (2) presenting the people found and their context in an appropriate way, i.e. with clear explanations and information on their current availability, and (3) supporting reciprocal relationships.

Title: AN AUTHORING ARCHITECTURE FOR ANNOTATING EDUCATIONAL CONTENTS Author(s): José M. Gascueña, Antonio Fernández-Caballero and Pascual Gónzalez Abstract: E-learning platforms available nowadays are mainly centred on supporting management tasks, but they do not satisfactorily address adaptation to the student’s profile, the reusability of educational materials, or efficient search within educational materials. By combining the paradigms of ontologies and learning objects in authoring tools, it is possible to annotate educational contents for generating personalized material. The characteristics introduced in this paper are the learning style best suited to the student, the device used to access the contents, and the skill to be developed when using the material. The general architecture of the proposed tool is fundamentally composed of three different and interrelated ontologies: the domain, sequencing and content-repository ontologies, which respectively store all knowledge about which educational content is taught, how it is taught, and how it is organized.

Title: NON-TECHNICAL SIDE OF IMPLEMENTATION OF ELECTRONIC HRM SYSTEM - DISCURSIVE EXPLORATION OF LINE MANAGERS’ AND EMPLOYEES’ PERCEPTIONS Author(s): Tanya V.
Bondarouk and Huub Ruël Abstract: Electronic Human Resource Management (e-HRM) is reaching a more mature stage within organisational life. Much is assumed and expressed about its advantages; however, scientific proof of these advantages is scarce. It remains unclear whether e-HRM contributes to the effectiveness of HRM processes. This paper contributes to the Enterprise Information Systems field in two ways. First, in terms of findings, we present results from a qualitative study on the contribution of e-HRM to HRM effectiveness. The data were collected at the Dutch Ministry of the Interior and Kingdom Relations. Results show that e-HRM applications have some impact on HRM practices. However, e-HRM is not perceived by the users as contributing to HRM effectiveness. Interviews with line managers and employees revealed interesting differences in their needs and perceptions regarding the functionalities of e-HRM applications. Second, we integrate two approaches, namely a technology-oriented approach and an organizational-processes-oriented approach. The intersection of IT and HRM studies reveals new possibilities with both scientific and practical implications.

Title: THE IMPACT OF SOCIAL PRESENCE ON THE EXPERIENCES OF ONLINE SHOPPERS - A CROSS-CULTURAL STUDY Author(s): Khaled Hassanein, Milena Head and Chunhua Ju Abstract: A notable difference between online and offline shopping that is hindering the growth of e-Commerce is the decreased presence of human and social elements in the online environment. This paper explores how human warmth and sociability can be integrated through the Web interface to positively impact online consumer perceptions. More specifically, the impact of design elements (emotive text and socially-rich pictures) is explored across two national cultures: Canadian and Chinese. Our results show increased levels of social presence through socially-rich design elements (i.e.
socially-rich text and pictures) as having a positive impact on antecedents of attitude/intention of Canadian online shoppers (perceived usefulness, trust and enjoyment). We were also able to demonstrate similar results with Chinese online consumers in the case of perceived usefulness and enjoyment, but not for trust. The paper concludes with a discussion of these results, outlining implications for practitioners and directions for future research.

Title: UNCONSCIOUS EMOTIONAL INFORMATION PROCESSING: THEORETICAL CONSEQUENCES AND PRACTICAL APPLICATIONS Author(s): Maurits van den Noort, Kenneth Hugdahl and Peggy Bosch Abstract: The nature of unconscious human emotional information processing remains a great mystery. On the one hand, classical models view human conscious emotional information processing as computation among the brain’s neurons but fail to address its enigmatic features. On the other hand, quantum processes (superposition of states, nonlocality, and entanglement) also remain mysterious, yet are being harnessed in revolutionary information technologies like quantum computation, quantum cryptography, and quantum teleportation. In this paper, one behavioral and two neuroimaging studies are discussed that suggest a special role for unconscious emotional information processing in human interaction with other objects. Since this is a new research field, we are only beginning to understand quantum information processing in the human brain (Hameroff, 2006; Van den Noort & Bosch, 2006). This research is important since it could have important theoretical consequences for the way we understand physics and information processing in the brain. Moreover, it could lead to new information technologies and applications. For instance, it might give new insights into human consumer behavior (Dijksterhuis, 2004; Dijksterhuis, Bos, Nordgren, & Van Baaren, 2006a; 2006b), and lead to new commercial strategies for multinationals.
Title: THE EFFECT OF ICT ENABLED SOCIAL NETWORKS ON PERFORMANCE Author(s): Kon Shing Kenneth Chung and Liaquat Hossain Abstract: The task-oriented and sociological effects of information and communication technology (ICT) use continue to play an important role in studies on Human-Computer Interaction (HCI). These effects have been evidenced at inter- and intra-organisational as well as occupational community levels. Research on the direct interplay between social network structure, ICT use and individual performance is, however, lacking to date. In this study, we propose a social network based model and operational constructs for exploring individual performance in knowledge-intensive work. The context of the study is the occupational community of general practitioners (GPs). Numerous problems, such as decreasing performance with age, obsolescence of technological knowledge, isolation from urban communities and various problems specific to rural practice, make this study significant. The research focuses on understanding the interplay between social network structure and ICT use for enhancing individual performance. We argue that individuals with high levels of ICT use, dense social network structures, and rich connections to social clusters that are themselves not well connected perform better. The implications are significant for the design and development of HCI systems that incorporate social network concepts.

Title: INTERACTION IN CRITICAL SYSTEMS: CONQUESTS AND CHALLENGES Author(s): Marcos Salenko Guimarães, M. Cecília C. Baranauskas and Eliane Martins Abstract: The need for critical systems is growing fast due to the demand for hardware and software systems for critical tasks that used to be performed exclusively by human beings. These critical systems require reliable interaction with users. Despite this fact, contributions from the interaction design field have progressed slowly.
This work summarizes the main contributions from different fields to critical systems and presents an analysis, based on a classification, that offers different views and points out possible research directions towards improving the quality of interaction with this type of system.

Title: TLSA PLAYER: A TOOL FOR PRESENTING CONSISTENT SMIL 2.0 DOCUMENTS Author(s): Paulo N. M. Sampaio and Jean-Pierre Courtiat Abstract: Describing synchronization constraints in complex Interactive Multimedia Documents (IMDs) at authoring time can be an error-prone task, especially considering the increasing number of media objects participating in these relations. As a consequence, some synchronization constraints specified by the author may not be satisfied, leading the presentation of the document to undesirable deadlocks or unexpected misbehaviours, characterizing the occurrence of an inconsistency. In particular, the flexibility of high-level authoring models (such as SMIL 2.0) for the editing of complex IMDs can lead authors, in certain cases, to specify inconsistent documents. For this reason, it is important to use multimedia players that can ensure the presentation of consistent documents. This paper presents the main aspects of the development of such a player, the TLSA Player. The TLSA Player is part of a formal methodology for the design of multimedia documents which provides the formal semantics for the dynamic behaviour of the document, consistency checking, and the scheduling of the presentation, taking into account the temporal non-determinism of these documents.
Title: NOVEL VIEW TELEPRESENCE WITH HIGH-SCALABILITY USING MULTI-CASTED OMNI-DIRECTIONAL VIDEOS Author(s): Tomoya Ishikawa, Kazumasa Yamazawa and Naokazu Yokoya Abstract: The advent of high-speed networks and high-performance PCs has prompted research into networked telepresence, which allows a user to see virtualized real scenes in remote places. View-dependent representation, which provides a user with arbitrary view images using an HMD or an immersive display, is especially effective in creating a rich telepresence. The goal of our work is to realize novel view telepresence, which enables multiple users to control the viewpoint and view-direction independently by virtualizing real dynamic environments. In this paper, we describe a novel view generation method from multiple omni-directional images captured at different positions. We mainly describe our highly scalable prototype system, which enables multiple users to use the system simultaneously, and some experiments with it. The novel view telepresence system constructs a virtualized environment from real live videos. The live videos are transferred to multiple users using a multicast protocol without increasing network traffic. The system synthesizes a view image according to the user’s viewpoint and view-direction, measured by a magnetic sensor attached to an HMD, and presents the generated view on the HMD. Our system can generate the user’s view image in real time by giving correspondences among omni-directional images and estimating camera intrinsic and extrinsic parameters in advance.

Title: ON THE DETECTION OF EARLY DEMENTIA AND THE COMMUNICATION SYSTEM FOR DEAF BLIND PERSON Author(s): Masahiro Aruga, Takashi Takeda and Shuichi Kato Abstract: The “YUBITSUKII” is a personal digital assistant terminal developed as a communication system for deaf-blind persons. Deaf-blind persons have not received sufficient care until now.
Therefore, they cannot fully convey their thoughts and feelings and cannot live their daily lives freely. Generally, even their family members cannot adequately assess their ability to recognize the outside world. It is also difficult to detect early dementia in deaf-blind persons. In this paper, we consider and estimate the possibility of realizing a method and system for detecting early dementia in deaf-blind persons and others.

Title: INHIBITING FACTORS FOR COMMUNICATION AND INFORMATION TECHNOLOGIES USAGE - FIVE COLOMBIAN SMES STUDY Author(s): Olga L. Giraldo V. and Eduardo Arévalo S. Abstract: Small-to-medium sized enterprises (SMEs) are the main development engine of the economy, particularly in developing countries such as Colombia. SMEs must use information and communication technologies (ICT) as strategic tools to find their place in the global market; nevertheless, this is not common among Colombian SMEs. In this work we present the results of research aimed at finding the factors inhibiting strategic usage of ICT in five Colombian SMEs. Through this study we have looked at the structure and strategy of those enterprises, their value chains, ICT support for their value chain activities (how and where they exploit ICT), managerial attitude toward technology, and the appropriation of ICT into the business, looking for inhibiting factors. Results show that the most common inhibiting factors are: poor organizational planning; inability to identify strategic uses of ICT or lack of ICT leadership; no funding for ICT projects; lack of ICT expertise and lack of proper ICT usage by final users; and lack of technical support. Even though the findings are not conclusive, they show an existing trend and highlight the main ICT inhibiting factors to be overcome to attain sustainable local industries.
Title: 4I (FOR EYE) TECHNOLOGY - INTELLIGENT INTERFACE FOR INTEGRATED INFORMATION Author(s): Oleksiy Khriyenko Abstract: The next generation of integration systems will utilize different methods and techniques to achieve the vision of ubiquitous knowledge: the Semantic Web and Web Services, Agent Technologies and Mobility. Nowadays, unlimited interoperability and collaboration are important for industry, business, education and research, health and wellness, and other areas of life. All the parties in a collaboration process have to share data as well as information about the actions they are performing. Development of a Global Understanding eNvironment (GUN), which would support interoperation between all the resources (GUN-Resources) and the exchange of shared information, is a promising and challenging task. A graphical user interface that supports these interoperation and collaboration processes in a way that is convenient and easy for the human expert is therefore an essential part of performing and creating such processes. Following the new technological trends, it is time to start a new stage in user visual interface development – a stage of semantic-based, context-dependent, multidimensional resource visualization.

Title: A FLEXIBLE INFRASTRUCTURE FOR P-LEARNING: A FIRST APPLICATION IN THE FIELD OF PROFESSIONAL TRAINING Author(s): Alain Derycke and Vincent Chevrin Abstract: With the availability of nomadic computing, and its new user interaction devices connected through wireless networks, it is obvious that the traditional way of delivering e-learning will change. This paper focuses on a new mode, called pervasive learning, which relies on the potential of new IT infrastructures able to provide dynamic adaptation of information contents and services according to various contexts.
Using our previous experience in the design and implementation of multi-channel access to services (mobile commerce and e-learning), we are designing a new infrastructure, based on a Multi-Agent System, which satisfies our requirements for future p-learning systems. Its potential is illustrated through a dedicated usage scenario drawn from needs identified in the field of learning on demand, in the setting of a shop, contextualized for several seller situations and professional activities. The dedicated system, called a Personal Training Assistant, is supported through our infrastructure, in interaction with a Smartspace.

Title: SILENT BILINGUAL VOWEL RECOGNITION - USING FSEMG FOR HCI BASED SPEECH COMMANDS Author(s): Sridhar Poosapadi Arjunan, Hans Weghorn, Dinesh Kant Kumar and Wai Chee Yau Abstract: The paper examines the use of fSEMG (facial Surface Electromyogram) to recognise speech commands in English and German without evaluating any voice signals. The system is designed for applications based on speech commands for Human Computer Interaction (HCI). An effective technique is presented, which uses the facial muscle activity of the articulatory muscles and human factors for silent vowel recognition. Speaking speed and style vary between experiments, and this variation appears to be more pronounced when people speak a language other than their native one. This investigation reports on measuring the relative activity of the articulatory muscles for recognition of silent vowels of German (native) and English (foreign). In this analysis, three English vowels and three German vowels were used as recognition variables. The moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles is used to segment the signal and to identify the start and end of a silently spoken utterance.
The relative muscle activity is computed by integrating and normalising the RMS values of the signals between the detected start and end markers. The resulting output vector is classified using a back-propagation neural network to identify the voiceless speech. Cross-validation was performed to test the reliability of the classification. The data were also tested using the K-means clustering technique to determine the linear separability of the data. The experimental results show that this technique gives a high recognition rate for each of the participants in both languages. The results also show that the system is easy to train for a new user, and suggest that such a system works reliably for simple vowel-based commands in a human-computer interface once it is trained for the user, whether that user speaks one or more languages or has a speech disability.

Title: KEXPLORATOR - A 2D MAP EXPLORATION USER INTERFACE FOR RECOMMENDER SYSTEMS Author(s): Gulden Uchyigit, Keith Clark and Damien Coullon Abstract: Recommender systems have reached some maturity and are becoming more widely used with the rise of online social networks. However, research until now has mostly focused on improving the recommendation engines, without really advancing the way recommendations are brought to users. This paper concentrates on improving the delivery of recommendations to users via a new algorithm for the generation and 2D visualisation of similarity networks, with an emphasis on map stability. An implementation with a connection to the Amazon recommendation engine has been developed.

Title: FLEXIBLE HUMAN SERVICE INTERFACES Author(s): Josef Spillner, Iris Braun and Alexander Schill Abstract: Dynamic web service invocation without special client software may help the adoption of service-oriented architectures at the consumer level.
Ad-hoc usage of services requires a powerful set of concepts to visualise the service input and output messages in a user-friendly, ergonomic and extensible way. Such concepts are collected in a research effort named Web Service Graphical User Interface and are presented in this paper, together with an algorithm that combines the concepts into a single GUI creation engine, for which a proof-of-concept implementation exists. Extensibility is achieved by using implicit and explicit GUI generation hints in addition to inference mechanisms based on the message structures.

Title: CROSS-MEDIA USER INTERFACES FOR CONTROLLING THE ENTERPRISE - THE EAGLE INTEGRATED SYSTEM Author(s): Pedro Campos, Filipe Sousa, Lucas Pereira, Carlos Perestrelo and Duarte Freitas Abstract: Current Business Intelligence (BI) tools aim to provide managers with a way to control and measure their businesses. However, despite much research and many commercial and academic prototypes, the user acceptability of these systems remains challenging. We describe an innovative approach to enhancing the ease of use, visualization, control and decision-making of small-to-medium enterprises. Our approach to the design of BI tools is novel because (a) it combines user-centered design techniques with recent advances in quantitative information visualization, and (b) it employs several media (webcams, telephones, interactive maps and sparklines) to provide the user with a more powerful means of business control. We present a tool called Eagle, which was designed using this approach and developed in an industrial, real-world setting. We also describe some principles that emerged from this case study.
Title: SPATIAL AUDITORY INTERFACES COMPARED TO VISUAL INTERFACES FOR MOBILE USE IN A DRIVING TASK Author(s): Christina Dicke, Jaka Sodnik, Mark Billinghurst and Sašo Tomažič Abstract: This paper describes a study on user interaction with an in-vehicle mobile device using three different interfaces. Two auditory interfaces were proposed and their efficiency was compared to a standard visual interface. Both auditory interfaces consisted of spatialized auditory cues, differing in the number of sound sources played simultaneously. The users were asked to solve several tasks on a mobile phone while driving a car simulator. The mobile phone could be operated using a standard hierarchical menu structure that was either displayed on a small in-vehicle LCD screen or played through a 7.1 speaker system with one of the auditory interfaces. In both cases, a custom-made interaction device (a scrolling wheel and two buttons) attached to the steering wheel was used for controlling the interface. The driving performance, task completion times, and perceived workload were evaluated. Both auditory user interfaces were effective for most experimental tasks, except for long tasks such as composing a message and sending it to a specific person. For shorter tasks (e.g. changing the active profile or deleting an image) the task completion times were comparable for all interfaces; however, driving performance was significantly better and the perceived workload lower for the auditory interfaces.

Title: SUPPORTING GEOGRAPHICAL MEASURES THROUGH A NEW VISUALIZATION METAPHOR IN SPATIAL OLAP Author(s): Sandro Bimonte, Anne Tchounikine, Sergio Di Martino and Filomena Ferrucci Abstract: Pivot tables are the de-facto standard paradigm for the visualization of data in the context of multidimensional OLAP analysis. However, it is recognized that, in their original definition, they are not suited to support spatio-temporal data analysis (and in particular geographical measures).
In this paper, we propose the GeOlaPivot Table, a visual metaphor intended as an extension of the pivot table specifically conceived to assist decision makers in analyzing spatial information in data warehouses. Moreover, we present WebGeOlap, a web-based environment able to support geographical measures in SOLAP analyses and exploiting the GeOlaPivot Table visual metaphor. This environment has been applied to a spatial data warehouse concerning the supervision of infectious diseases in Italy. This approach represents a first effort in adapting advanced geovisualization techniques to SOLAP, in order to create a specific visual paradigm for Spatial OLAP able to effectively support and fully exploit the spatial multidimensional analysis process.

Title: INVESTIGATIONS INTO SHIPBORNE ALARM MANAGEMENT - CONDUCTION AND RESULTS OF FIELD STUDIES Author(s): Florian Motz and Michael Baldauf Abstract: Safe navigation, including collision and grounding avoidance, is the main task of the navigating officer in charge, to ensure the safety of sea transport during a ship's voyage. Modern ship bridges are highly automated man-machine systems. The growing number of systems and sensors on board, together with increased automation, has led to a proliferation of alarm signals on the bridge. Field studies were performed on board ships to investigate the situation with respect to the occurrence of alarms and their handling by the bridge team. This paper presents how the investigations were conducted, the methods used, and selected results from two field studies. An outlook on future alarm management on board is given. The investigations were partly performed within the framework of a national Research and Development project funded by the German Ministry of Transport, Building and Urban Affairs, and under the European MarNIS project, funded by the European Commission, Department for Energy and Transport.
Title: IMPROVING ACCESSIBILITY TO BUSINESS PROCESSES FOR DISABLED PEOPLE BY DOCUMENT TAGGING Author(s): Norbert Kuhn, Stefan Richter and Stefan Naumann Abstract: Although many companies and governmental institutions have provided their customers with access to electronic information systems and processes, paper-based communication remains widespread. In this paper we present an approach, used in the FABEGG system, to make these documents accessible to handicapped users. We discuss a prototypical system with both an administrative and a user front-end for working with printed forms in business processes. In particular, users with reading disabilities will be able to process documents they receive. Compared to purely electronic solutions, we can also cope with forms that already contain process-related data, e.g. the name, address and other personal information of the recipient. The prototypes presented here will be evaluated in local governmental authorities.

Title: ANTHROPOMORPHIC VS NON-ANTHROPOMORPHIC USER INTERFACE FEEDBACK FOR ONLINE HOTEL BOOKINGS Author(s): Pietro Murano, Anthony Gee and Patrik O’Brian Holt Abstract: This paper describes an experiment and its results concerning research that has been going on for a number of years in the area of anthropomorphic user interface feedback. The main aims of the research have been to examine the effectiveness of, and user satisfaction with, anthropomorphic feedback in various domains. The results are of use to all interactive systems designers, particularly when dealing with issues of user interface feedback design. There is currently some disagreement amongst computer scientists concerning the suitability of such types of feedback. This research is working to resolve this disagreement and, in turn, can help software houses to increase their profits by developing better user interfaces that promote an increase in sales.
The experiment detailed here concerns the specific software domain of Online Factual Delivery, in the specific context of online hotel bookings. Anthropomorphic feedback was compared against equivalent non-anthropomorphic feedback. Statistically significant results were obtained suggesting that the non-anthropomorphic feedback was more effective. The results for user satisfaction were, however, less clear.

Title: WHAT A PROACTIVE RECOMMENDATION SYSTEM NEEDS - RELEVANCE, NON-INTRUSIVENESS, AND A NEW LONG-TERM MEMORY Author(s): M. C. Puerta Melguizo, T. Bogers, A. Deshpande, L. Boves and A. van den Bosch Abstract: The goal of the project À Propos is to develop a proactive, just-in-time recommendation system for professional writers. While authors are writing, the proactive system searches for information relevant to what is being written and presents this information to the writers in a manner that is perceived as timely, non-intrusive, and trustworthy. In this paper we present our ideas and the first steps performed in order to reach this goal. Writing a professional document is a complex and highly demanding task that can be seriously affected by interruptions from the environment. Consequently, a proactive system should be 1) able to consistently present highly relevant information, 2) able to identify what stage of writing the author is in, and the moments at which information needs are most important and least disruptive, and 3) able to serve as an external long-term memory for the writer. In this paper we describe the steps and first results of the À Propos project towards developing a proactive recommendation system that covers these goals.
Title: TOWARDS A GENERIC AND CONFIGURABLE MODEL OF AN ELECTRONIC INFORMER TO ASSIST THE EVALUATION OF AGENT-BASED INTERACTIVE SYSTEMS Author(s): Chi Dung Tran, Houcine Ezzedine and Christophe Kolsky Abstract: This paper presents a generic and configurable model of an electronic informer to assist the evaluation of agent-based interactive systems. In order to propose this model, the current state of the art concerning architectures used for traditional interactive systems is presented, along with that for agent-based interactive systems. In this article, we propose an agent-oriented architecture that is considered mixed (both functional and structural). Using this architecture as a basis, we propose a generic and configurable model of an evaluation tool called an “electronic informer”, which assists evaluators in analysing and evaluating interactive systems based on this architecture.

Title: OOPUS-DESIGNER - USER-FRIENDLY MASTER DATA MAINTENANCE THROUGH INTUITIVE AND INTERACTIVE VISUALIZATION Author(s): Wilhelm Dangelmaier, Benjamin Klöpper, Björn Kruse, Daniel Brüggemann and Tobias Rust Abstract: Valid and consistent master data are a prerequisite for efficiently working Enterprise Resource Planning (ERP) and Production Planning and Control (PPC) systems. Unfortunately, users are often confused by the large number of forms or transactions in these systems, and confusing interfaces lead to faulty master data. In this paper we introduce a tool that provides intuitive and interactive visualization for the master data administration of a PPC system.

Title: USING GAMES IN THE TEACHING OF DIGITAL SYSTEMS AND OF COMPUTER ARCHITECTURE Author(s): Pedro José Guerra de Araújo and João Vieira Baptista Abstract: This work presents a pedagogic experience regarding the use of games as supporting examples in the teaching of Digital Systems and Computer Architecture. In this work it is not the game itself that is important, but the technologies involved in the construction of the game.
Examples of each of these technologies are presented here through the construction of three different versions of the same game. The first version is made entirely in hardware using digital integrated circuits, the second is implemented by programming in Assembly language, and the third uses a high-level language. The importance of these examples lies in the fact that each of these versions allows one to exemplify techniques that can also be found in other types of computer programs. Besides immediately capturing the students' attention and interest, the games are good examples to support the teacher's exposition of concepts in the classroom. Title: EXPERIENCE-BASED SOCIAL AND COLLABORATIVE PERFORMANCE IN AN ‘ELECTRONIC VILLAGE’ OF LOCAL INTEREST: THE EKONEΣ FRAMEWORK Author(s): D. Akoumianakis, N. Vidakis, G. Vellis, G. Milolidakis and D. Kotsalis Abstract: We present the baseline of a framework called eΚοΝΕΣ, for building electronic villages of local interest. An electronic village is considered as a virtual organization formed by representatives of different sectors who work together during a period of time to realize a common goal. We assume tight coupling between the virtual organization and a physical space to differentiate the electronic village of local interest from the notion of the global electronic village. In this context, the paper focuses on two primary issues, namely the stimulation and organization of collaborative work by virtual teams and the design of electronic artifacts which facilitate collaborative feedback and feedthrough in an exemplar case in the context of eΚοΝΕΣ-Tourism – a pilot electronic village on regional tourism. Title: A NEW LIP-READING APPROACH FOR HUMAN COMPUTER INTERACTION Author(s): Salah Werda, Walid Mahdi and Abdelmajid Ben Hamadou Abstract: Today, Human-Machine interaction represents a certain potential for autonomy, especially for dependent people. 
An automatic lip-reading system is one of several assistive technologies for hearing-impaired or elderly people, and the need for such a system is ever increasing. The extraction and reliable analysis of facial movements make up an important part of many multimedia systems, such as videoconferencing, low-bandwidth communication systems and lip-reading systems. We can imagine, for example, a dependent person commanding a machine with a simple lip movement or by pronouncing a simple viseme (visual phoneme). We present in this paper a new approach for lip localization and feature extraction in a speaker’s face. The extracted visual information is then classified in order to recognize the uttered viseme. We have developed our Automatic Lip Feature Extraction prototype (ALiFE). The ALiFE prototype was evaluated with multiple speakers under natural conditions. Experiments include a group of French visemes uttered by different speakers. Results revealed that our system recognizes 92.50% of French visemes. Title: TOWARDS AN IE AND IR SYSTEM DEALING WITH SPATIAL INFORMATION IN DIGITAL LIBRARIES – EVALUATION CASE STUDY Author(s): Christian Sallaberry, Mustapha Baziz, Julien Lesbegueries and Mauro Gaio Abstract: This paper deals with Information Extraction (IE) and Retrieval (IR) in a geographic (spatial) oriented Digital Libraries environment. The proposed approach (implemented within the PIV prototype) is based on a semantic analysis of digital corpora and free-text queries. First, we present the requirements and a methodology of semantic annotation for automatic indexing and geo-referencing of text documents. Then we report on a case study where the spatial-based IR process is evaluated and compared to classical (statistical-based) IR approaches using first pure spatial queries and then more general (“realistic”) ones containing both spatial and thematic scopes. 
The main result of these first experiments shows that combining a spatial approach with a classical (statistical-based) IR one significantly improves retrieval accuracy, notably in the case of “realistic” queries. Title: AN ADAPTIVE DOMAIN KNOWLEDGE MANAGER FOR DIALOGUE SYSTEMS Author(s): Porfírio Filipe, Luís Morgado and Nuno Mamede Abstract: This paper describes our recent effort to improve our Domain Knowledge Manager (DKM), which is part of a mixed-initiative, task-based Spoken Dialogue System (SDS) architecture, namely to interact within an ambient intelligence scenario. Machine learning applied to SDS dialogue management strategy design is a growing research area. Training of such strategies could in theory be done using human users or corpora of human-computer dialogue. However, the size of the state space grows exponentially with the number of state variables taken into account, making the task of learning dialogue strategies for large-scale SDS very difficult. To address that problem, we propose a divide-and-conquer approach, assuming that the practical dialogue and domain-independence hypotheses hold. In this context, we have considered a clear separation between linguistic-dependent and domain-dependent knowledge, which allows reducing the complexity of typical SDS components, especially the Dialogue Manager (DM). Our contribution addresses domain portability issues, proposing an adaptive DKM to simplify the DM’s clarification dialogues. The DKM learns, through trial and error, from the interaction with the DM, suggesting a set of best task-device pairs to accomplish a request and watching the user’s confirmation. This adaptive DKM has been tested in our domain simulator. Title: PHYSICAL DOCUMENT ADAPTATION TO USER’S CONTEXT AND USER’S PROFILE Author(s): Hassan Naderi and Béatrice Rumpler Abstract: Modern technology promises mobile users Internet connectivity anytime, anywhere, using any device. 
However, given the constrained capabilities of mobile devices, the limited bandwidth of wireless networks and the varying personal sphere, effective information access requires the development of new computational patterns. The variety of mobile devices available today makes device-specific authoring of web content an expensive approach. The problem is further compounded by the heterogeneous nature of the supporting networks and of users' behaviour. This research investigates the challenges posed by these problems and proposes a context-aware adaptation framework to bridge the gap between existing Internet content and today's heterogeneous computing environments. Title: A PROCESS PATTERN LANGUAGE FOR COORDINATED SOFTWARE DEVELOPMENT Author(s): Chintan Amrit, René ter Haar, Mehmet N. Aydin and Jos van Hillegersberg Abstract: In both distributed and collocated teams we often find problems in the organizational process structures. Though process patterns have been around for many years, there has been little research into categorizing the different solutions to the various problems of coordination for easy access by practitioners. This study aims to describe a way to use the emerging idea of a pattern language to deal with problems related to coordination in software development. The patterns are derived from conclusive statements in the information systems and software engineering fields, and a pattern language is used to develop them. We propose a technique to convert the knowledge base of IS and CS research on coordination into process patterns that are more accessible to practitioners. 
Title: APPROACHING TO EMOTIONAL CONTEXT ON INFORMATION SYSTEMS DESIGN IN A WEB SITE FRONT-OFFICE DEVELOPMENT - A CASE STUDY FOR PUBLIC HEALTH Author(s): Octávio Pereira, Tiago Luís, Pedro António, Eduardo Valente, Joao Caldeira, Paulo Alves, Paulo Jesus and João Cordeiro Abstract: At the beginning of this paper, a web application development context is described for the implementation of an analysis laboratory web page for agricultural purposes. Safety and hygiene matters are introduced as current public health concerns, and the emotional context of appealing interaction between users and web applications and information systems is defined in general terms. The physical database model, architecture and chosen technologies of the project implementation are then described and, finally, some conclusions are drawn. Title: JOINTS - ADDRESSING GROUP PSYCHOTHERAPY REQUIREMENTS Author(s): Luís Duarte, Luís Carriço, Marco de Sá and Diogo Luís Abstract: Providing computational support for group meetings is a challenge some applications are now addressing. Nonetheless, there are specific areas which require developers’ special attention to cover all inherent issues, which can reveal themselves as workflow, interface or context requirements, among others. In the group psychotherapy field it is necessary to be careful with both the therapist’s and the patients’ work, providing both groups with the necessary mechanisms, interfaces and tools to accomplish their tasks. This paper presents a project whose main goal is to address all these challenges in group psychotherapy sessions. Title: ENTERPRISE INFORMATION SYSTEMS INTEGRATION - PROPOSAL OF AN APPROACH BASED ON USER PROFILE AND NEEDS ANALYSIS Author(s): Boulesnane Sabrina and Bouzidi Laïd Abstract: The advent of Web technologies has influenced the development and use of Information and Communication Technologies (ICT) and Information Systems (IS) in organisations. 
On the social level, the massive integration of these technologies affects even the human actors’ linguistic practices. One of the challenges that these organizations must surmount is the formulation of information needs. We propose in this paper a mediation approach allowing users to create a context favourable to an "exact" expression and interpretation of their information needs. This approach represents a competitive resource for these organizations, situated in an uncertain and changing business world. Title: TASKS MODELS FOR COMPONENT CONTEXTUALIZATION Author(s): Arnaud Lewandowski, Grégory Bourguin and Jean-Claude Tarby Abstract: In order to answer the emerging and evolving nature of users’ needs towards the software environments supporting their activities, one solution consists in giving users the means to adapt these environments through the integration of new tools. Many technical solutions exist for software component integration, but their use is limited to an audience of experts in software development. One of the main reasons is that current dynamic integration approaches face a semantic problem: in order to finely integrate a tool into an activity, one must indeed understand well what its place in this activity will be. In order to facilitate this understanding and the dynamic integration of software components, we propose a new design and integration approach, inspired by previous work on task modeling. Title: MODELING USER INTERFACES WITH THE XIS UML PROFILE Author(s): Carlos Martins and Alberto Rodrigues da Silva Abstract: This paper discusses different UI design approaches. We describe how to design user interfaces, based on an MDD approach, by applying the XPTO language. XPTO is a coherent UML profile focused on modeling interactive systems. XPTO integrates best practices and principles of the MDA/MDD paradigm to improve UI design, such as separation of concerns, model-to-model and model-to-code transformations. 
In that way, we discuss some issues regarding the transformation processes from XPTO-based models into software system artifacts. Title: INTERACTIONAL OBJECTS: HCI CONCERNS IN THE ANALYSIS PHASE OF THE SYMPHONY METHOD Author(s): Guillaume Godet-Bar, Dominique Rieu, Sophie Dupuy-Chessa and David Juras Abstract: We present in this paper a set of concepts that extend a design method originating from the Software Engineering domain, in order to take into account Human-Computer Interaction design, in particular for Augmented Reality systems. Previous work focused on the initial phases of development (i.e., the Specification phases). Our efforts concentrate on the Analysis phase, into which we have introduced a new concept – Interactional Objects – that allows designers to structure the interactional space, and a specific relation that permits drawing links between the business and interactional spaces. These contributions also enable developers to build reusable components and encourage code generation. Title: A NEW TOOL FOR APPROACHING E-LEARNING: VIDEORDERTM - VIDEORDER™ VOICE-BASED SPEECH RECOGNITION AND LANGUAGE PROCESSING SEARCH TECHNOLOGY WITH FINDER™ ENGINE Author(s): Ferenc Kiss, Lia Bassa and Viktor Justin Abstract: Videorder™ Voice-based Speech Recognition and Language Processing search technology with the Finder™ engine from the @Your Service Media Communication Agency offers a market-ready solution today for speech-recognized searching in any kind of film material on the basis of the audible information inside, such as words and sentences. All this is independent of the language involved! The main strength of the system is Language Processing and Speech Recognition in one application package. Videorder™ allows you to search for any video or audio clips that are relevant to your query. 
The method is a good example of how the basic aim of ICEIS can be implemented: bringing together the achievements of a research team with trainers in information and knowledge management, as well as with a Foundation for the Information Society becoming the practitioner of the program. Title: PLATFORM TO DRIVE AN INTELLIGENT WHEELCHAIR USING FACIAL EXPRESSIONS Author(s): Pedro Miguel Faria, Rodrigo A. M. Braga, Eduardo Valgôde and Luís Paulo Reis Abstract: Many of the physically injured use electric wheelchairs as an aid to locomotion. Usually, commanding this type of wheelchair requires the use of one’s hands, and this poses a problem to those who, besides being unable to use their legs, are also unable to properly use their hands. The aim of the work described here is to create a prototype of a wheelchair command interface that does not require hand usage. Facial expressions were chosen instead to provide the necessary visual information for the interface to recognize user commands. The facial expressions are captured by means of a digital camera and interpreted by an application running on a laptop computer on the wheelchair. The software includes digital image processing algorithms for feature detection, such as colour segmentation and edge detection, followed by the application of a neural network that uses these features to detect the desired facial expressions. The results obtained with the framework interface provide strong evidence that it is possible to comfortably drive an intelligent wheelchair using facial expressions. Title: EDUCATIONAL SIMULATORS - COMPLIANCE WITH THE REQUIREMENTS OF DIABETES PATIENTS AND DIABETES THERAPY GUIDELINES Author(s): Andrzej Izworski, Joanna Koleszynska and Ryszard Tadeusiewicz Abstract: This paper presents a renewed approach to computer-aided diabetes education, introducing the GIGISim (Glucose-Insulin and Glycemic Index Web Simulator) e-learning tool. 
Together with our system, selected existing solutions were summarized and their functional compliance with diabetes therapy requirements checked. The analysis of diabetes patients' needs has been established through a series of intermediate research surveys and literature studies. The software implementation of the newly proposed innovations is presented, together with the effectiveness and suitability of the system with respect to the identified requirements. Title: HOME NETWORK AND HUMAN INTERACTION SYSTEM Author(s): Rudolf Volner Abstract: The term security network intelligence is widely used in the field of communication network security. A number of new and potentially powerful concepts and products based on the concept of security network intelligence have been introduced, including smart flows, intelligent routing, and intelligent web switching. Many intelligent systems focus on a specific security service, function, or device, and do not provide true end-to-end service network intelligence. True security network intelligence requires more than a set of disconnected elements; it requires an interconnecting and functionally coupled architecture that enables the various functional levels to interact and communicate with each other. Title: A PLATFORM BASED ON JAVA AND XML FOR PROTOTYPING INTERACTIVE DIGITAL TELEVISION PROGRAMS - INTERACTIVE MULTIMEDIA SYSTEMS AND HUMAN-COMPUTER INTERACTION Author(s): João Benedito dos Santos Junior, Iran Calixto Abrão, Marcos Augusto Loiola, Paulo Muniz de Ávila and André Bretas Nunes de Lima Abstract: This work presents JiTV (Java Interactive Television), a proposal for an integrated development platform for the authoring of Digital Interactive Television Programs, and discusses its implementation aspects. Among the main requirements of JiTV, we can highlight the description of scenarios using XML and the implementation of interaction controls using Java. 
In this scenario, the JiTV platform supports the specification of context-awareness aspects, the acquisition of contextual data from software agents, the building of a data carousel from both recorded and real-time video and audio streams, and the tagging of multimedia objects with XML schemas, also allowing information retrieval. Furthermore, context-awareness aspects are being added to provide personalization in the television environment. Title: AN INITIAL USABILITY EVALUATION OF SOME WORD-PROCESSING FUNCTIONALITIES WITH THE ELDERLY Author(s): Sergio Sayago and Josep Blat Abstract: This paper initially addresses two questions about evaluating the usability of word-processing functionalities with the young elderly: (i) Which factor (understanding the terminology, remembering the steps or using the mouse) is the most strongly correlated with the overall usability of some word-processing functionalities? (ii) When designing a valid usability questionnaire for the young elderly, do we need to adapt standard Likert scales? Both questions are answered after running a two-hour MS Word session at an adult school with five young elderly people with computer experience. The results point out that difficulties in remembering the steps and using the mouse have a strong relationship with the overall usability of the word-processing functionalities evaluated. The responses elicited from young elderly people are mostly dependent on the visual arrangement (vertical, horizontal) of standard Likert scales, because they draw firmly on everyday scales, which differ considerably from the scales used in standard usability questionnaires. This has a strong impact on the questionnaires’ validity. Replacing numbers with adjectives meets the requirements of the elderly and increases the validity of questionnaires, as adjectives seem to be easier to understand than numbers. 
Title: USABILITY COST-BENEFIT MODELS - DIFFERENT APPROACHES TO USABILITY COST ANALYSIS Author(s): Mikko Rajanen Abstract: Few development organizations have integrated usability activities as an integral part of their product development projects. One reason for this might be that the costs and benefits of usability activities are not visible to management. In this paper the author analyses some characteristics of the published usability cost-benefit analysis models. These models take different approaches to identifying the costs of usability. Title: USER EXPERIENCE ENHANCED PRODUCT LIFE CYCLE - INDUSTRY SPANNING MODULAR USER EXPERIENCE PARADIGMS Author(s): Joachim Schonowski Abstract: The introduction speed and complexity of the digital information/multimedia age pose challenges for the “normal” end user. New digital technologies, services and products are often provided to the end consumer with a bad user experience (e.g. WAP, PTT) [6]. Companies need to incorporate user experience into the product life cycle. Instead of multiple different user experiences across the many “digital” consumer products, only a few should exist, based on key user experience paradigms. Applying these key user experience paradigms should enable users to use products more intuitively (even if different industries are involved) and guide them towards most of the supported use cases, when needed. Society is under constant evolution and change (e.g. demographic factors). The user experience enhanced product life cycle should be harmonised with standard product development (promotion, pricing, product, placement), but should also take societal change (e.g. the silver generation) and new life concepts into account. 
Title: A QUALITY MANAGEMENT TRAINING SYSTEM ON ISO STANDARDS FOR ENHANCING COMPETITIVENESS OF SMES Author(s): Nunzio Casalino, Alessandro D’Atri and Ludmil Manev Abstract: The purpose of this paper is to introduce and discuss the benefits of on-line training in quality control and quality management as an aid for professionals and managers of SMEs, contributing to the improvement of the activities and the business performance objectives of their organizations. This kind of training includes topics on managing quality, quality processes, auditing, total quality, ISO standards, mistake proofing, and more. The paper describes the expected benefits according to the preliminary results of a European project aiming to create and validate an on-line software product in the area of quality management and to provide effective training on quality for staff in SMEs. The international standards currently covered by the project are: ISO 9001 (quality management systems), ISO 14001 (environmental management systems) and HACCP (Hazard Analysis and Critical Control Points). The users could also be students, disadvantaged people and anyone interested in quality management systems. ICEIS Doctoral Consortium Title: MODELLING ACCESS CONTROL FOR HEALTHCARE INFORMATION SYSTEMS - HOW TO CONTROL ACCESS THROUGH POLICIES, HUMAN PROCESSES AND LEGISLATION Author(s): Ana Ferreira, David Chadwick and Luís Antunes Abstract: The widening use of Information Systems, which allow the collection, extraction, storage, management and search of information, is increasing the need for information security (e.g. confidentiality, integrity and availability). Access control is a security service that focuses on information confidentiality. After a user is successfully identified and authenticated to a system, he needs to be authorized to access the resources he requests. Access control is the part of the authorisation process that checks whether a user can access those resources. 
This is particularly important in the healthcare environment, where there is a need to control access to Electronic Medical Records (EMR) that hold patients’ sensitive information. Although the EMR is a vital support tool for the healthcare professional, there are some barriers that prevent its successful integration. These barriers relate to the fact that healthcare professionals do not participate in the development of the EMR, which imposes costs on them in terms of time and effort when they need to use it. New security models and technologies to be implemented should focus on human processes and needs. The main objective of this research project is to reduce EMR barriers by including healthcare professionals and patients in the definition and improvement of access control policies and models. If access control can be improved according to the users’ needs and properly adapted to their workflow patterns, the EMR can be more successfully integrated into healthcare practice and provide for better patient treatment. Title: CONTEXT PATH SIMILARITY IN XML SCHEMA MATCHING Author(s): Amar Zerdazi Abstract: Similarity plays a crucial role in many research fields. Similarity serves as an organizing principle by which individuals classify objects and form concepts. Similarity can be computed at different layers of abstraction: at the data layer, at the type layer, or between the two layers (i.e. similarity between data and types). In this paper we propose a context path similarity algorithm, which captures the degree of similarity between the paths of two elements. In our approach, this similarity contributes to determining the structural similarity measure between XML schemas in the domain of schema matching. We essentially focus on how to maximize the use of structural information to derive mappings between source and target XML schemas. 
To this end, we adapt several existing algorithms from fields such as dynamic programming, data integration and query answering to compute similarities. Title: EXPERIMENTING DYNAMIC FITNESS ASSIGNMENT FOR MUSIC CREATIVITY - TOWARDS A DYNAMIC GA ENVIRONMENT Author(s): Tzimeas Dimitrios and Mangina Eleni Abstract: Genetic Algorithms (GAs) face difficulties in solving sufficiently complicated music-related creativity problems of an aesthetic nature, where the judgement of a chromosome is based on a combination of qualitative and quantitative parameters. Due to the fact that there is no standard mechanism for fitness function (FF) design, within this paper we claim that the utilization of a critically damped oscillator model in the assignment of FF values and critical GA operator rates can lead to an autonomous dynamic environment. The results of a series of experiments have been analyzed and described in terms of the frequency of appearance of motifs of different sizes, the appearance of the motifs in a certain sequence on the chromosome and, finally, the inclusion of a dynamic allocation of the operator rates to achieve multi-objectivity in a fully GA-based environment. Title: GENERAL ARCHITECTURE OF ADAPTIVE AND ADAPTABLE INFORMATION SYSTEMS Author(s): Martin Balík and Ivan Jelínek Abstract: The emerging information congestion of the World Wide Web makes the retrieval of the appropriate piece of information relatively difficult. Current information systems try to overcome this problem. Adaptation is used in such systems so that only information suitable for the current user is presented. Intensive research in the field of adaptive systems has been carried out in the last decade, and several models have been proposed for the description of adaptive hypermedia architectures. However, there is still a lack of generality in these architectures, which makes collaboration and content reusability difficult, even impossible. 
In this paper we investigate the state-of-the-art approaches and try to extend them to fulfil the requirements of general use. We also discuss some related topics, such as security aspects and semantics integration. Title: A LOW-LEVEL BASED TASK AND PROCESS SUPPORT APPROACH FOR KNOWLEDGE-INTENSIVE BUSINESS ENVIRONMENTS - DISCOVERING AND SUPPORTING KNOWLEDGE-INTENSIVE TASK AND PROCESS Author(s): Andreas S. Rath Abstract: Knowledge-intensive work plays an important role in organizations of all types. For ensuring the effective and efficient support of knowledge workers, the first step undertaken by this PhD work is to capture knowledge work directly from the knowledge worker’s desktop during task and process executions. Discovering, modelling and distinguishing the tasks of knowledge workers are as much in focus as revealing task patterns based on user interaction observations, system events and application usage data. By utilizing the identified task patterns, process instances are constructed that can serve as a basis for process mining approaches. The research challenges, the objectives, the methodology followed and the stage of the current research efforts are described in this paper. Title: SEMIOTIC LEARNING - A CONCEPTUAL FRAMEWORK FOR FACILITATING LEARNING IN KNOWLEDGE-INTENSIVE ORGANISATIONS Author(s): Angela Lacerda Nobre Abstract: Organisational learning has gradually gained wide recognition among researchers and practitioners since its development in the last two decades of the twentieth century. Argyris and Schön (1978) were among the first authors to develop organisational learning theories, and Senge (1990) and others further disseminated this field of study, focusing on the idea of the learning organisation. These early developments in organisational learning were integrated into a broader school of thought generally characterised as a cognitivist turn in organisation theory. 
Under this perspective, there is a prevalence of the cognitive, neurological and individual processes of learning. Later, other theorists developed complementary perspectives on organisational learning, namely introducing key aspects such as the social and cultural factors that influence learning at an organisational level. Examples of authors who have developed the social perspective on organisational learning are Cook and Yanow (1993), Gherardi and Nicolini (2001), and Elkjaer (1999, 2003), among others. The present dissertation is included in this social perspective, as it explores, as its research goals, the connections between social theory and organisational practice that enable the facilitation of the organisational learning process in knowledge-intensive organisations. According to Drucker (1999), such organisations are aware of the central role that knowledge plays and explicitly nurture core knowledge processes. In parallel with other social learning theories, the present project focuses on knowledge as being embodied and embedded in organisational practices. This social perspective enables the improvement of such practices and therefore empowers organisational learning initiatives. The social perspective complements the cognitivist one, with the advantage of addressing community-level meaning-making processes. Therefore, it is particularly adequate for the promotion of stable results in terms of organisational learning practices in knowledge-intensive organisations. The present study uses the contributions of three social philosophy theories: (i) social semiotics theory (e.g. 
Halliday, 1978, Kress, 1985, Lemke, 1995), which interprets social practices as being constitutively signifying and sense-making activities, thus semiotic in nature; (ii) pragmatism (Peirce, 1931), which takes a realistic and practical stance, rejecting the dual opposition between theory and practice, or the individual and the social; and (iii) Heidegger’s ontology (1962), which addresses the pre-reflexive, being-in-the-world nature of all human activities, already present, determining and conditioning all action and decision-making processes. The research methodology used is multi-grounded theory (Goldkuhl, Cronholm, 2003, Goldkuhl, 1999), an extension of Glaser and Strauss’s original approach (1967), which incorporates the use of specific and previously defined theoretical orientations, as is the case in the present project. Four organisations are analysed as case studies (Stake, 1995, Yin, 1989), and four other organisations are used to test the implementation of the methodology that has been developed. Thus, “Semiotic Learning” is an organisational learning initiative whose purpose is to facilitate learning in knowledge-intensive organisations. As a conceptual framework, it rests on the philosophical foundations of organisational learning theory, and as a practical methodology it promotes and nurtures learning processes at an applied level. Title: DATA SCHEDULING FOR LARGE SCALE DISTRIBUTED APPLICATIONS Author(s): Mehmet Balman and Tevfik Kosar Abstract: Current large-scale distributed applications studied by large communities of researchers give rise to new challenging problems in widely distributed environments. In particular, scientific experiments using geographically separated and heterogeneous resources necessitate transparent access to distributed data and the analysis of huge collections of information. We focus on data-intensive distributed computing and describe a data scheduling approach to manage large-scale scientific and commercial applications. 
We identify the parameters affecting data transfer and analyze different scenarios for possible use cases of data placement tasks to discover key attributes for performance optimization. We plan to define the crucial factors in data placement in widely distributed systems and to develop a strategy to schedule data transfers according to the characteristics of dynamically changing distributed environments. Title: UNIVERSAL ENTERPRISE FMC SOLUTION Author(s): Yevgeniy Yeryomin and Jochen Seitz Abstract: Fixed Mobile Convergence (FMC) is a technology that allows smooth cooperation between the Public Switched Telephone Network (PSTN) and the Public Land Mobile Network (PLMN) and the common use of all their services. FMC subscribers are able to reach the landline network through unlicensed mobile access (Bluetooth, WLAN, WiMAX) instead of licensed cellular access. Both carriers and customers profit from the use of FMC. Carriers using WLAN hotspots and landline networks can increase the capacity and coverage of their networks, while FMC subscribers benefit from lower costs when using the landline network. In addition to private customers and carriers, FMC technology is attractive for businesses too. The subject of this paper is FMC for enterprises. Businesses can benefit from the convergence of their own information and telecommunication networks with external cellular networks. This is especially relevant for large-scale enterprises with many offices around the world and a large number of mobile employees. FMC allows an enterprise to expand its network with all its services and to reduce cellular carrier costs. There is a range of FMC solutions on the market today, but most of them are carrier-centric. This paper analyses the existing implementations and suggests a concept for a new premises-based universal FMC solution, which can be deployed inside an enterprise network. 
Title: JINI-BASED ADAPTABLE MIDDLEWARE PLATFORM FOR VIRTUAL INSTRUMENTATION APPLICATIONS Author(s): Alfredo Moreno Guillén Abstract: The increasing heterogeneity of execution environments, with varying operating systems, processors, hardware devices, resources, etc. interconnected by different networks, requires the design and development of new adaptable middleware platforms capable of managing the complexity of current distributed systems. In this doctoral project, the design and development of a new adaptable middleware platform is proposed, together with the study of state-of-the-art distributed systems technology and the development of novel mechanisms and services on top of known middleware architectures, allowing for easier instrument and data integration. This paper outlines the steps involved in the development of the proposed Jini-based middleware, JOVIM, establishes the requirements for such a platform and describes its main characteristics and mechanisms. Title: BALANCING THE USE OF REMOTE I/O VERSUS STAGING IN DISTRIBUTED ENVIRONMENTS Author(s): Ibrahim H. Suslu and Tevfik Kosar Abstract: Data staging and remote I/O are the two most widely used data access methods for distributed applications. Application developers generally choose one over the other intuitively, without making any scientific comparison specific to their applications, since there is no generic model available that they can use. Our goal is to develop a model and a set of guidelines that will help application developers choose the most appropriate data access method for their application. We define the parameters that potentially affect the end-to-end performance of distributed applications that need to access remote data. We aim to study several widely used and well-known applications, as well as some local projects, based on the parameters we define, and will develop performance models for these applications based on our findings. 
Our ultimate goal is to develop a generic model that can be applied to most data-intensive distributed applications to decide the best data access model for those applications. Title: MODEL CHECKING OF COMPONENT-BASED SYSTEMS AND COORDINATION MODELS Author(s): Mohammad Izadi and Ali Movaghar Abstract: Reo is an exogenous coordination language for the compositional construction of the coordinating subsystem of component-based software. Constraint automata are defined as the operational semantics of Reo. The main goal of this work is to prepare a model-checking-based verification environment for component-based systems whose component connectors are modeled by Reo networks and constraint automata. We use compositional minimization and abstraction methods of model checking for the verification of component-based systems and their component connectors modeled by Reo. Special Session on Business Intelligence, Knowledge Management and Knowledge Management Systems Title: HUMAN-CENTERED META-SYNTHETIC ENGINEERING FOR KNOWLEDGE CREATIVE SYSTEM Author(s): Cui Xia, Dai Ruwei, Li Yaodong and Zhao Mingchang Abstract: Meta-synthetic engineering and the Cyberspace for Workshop of Meta-synthetic Engineering (CWME) constitute the methodology for Open Complex Giant Systems (OCGS), which synthesizes qualitative opinion into quantitative understanding by human-centered, human-computer cooperative means. Meta-synthetic engineering is highly related to data mining, knowledge management, collective intelligence emergence and the complexity of the Knowledge Creative System (KCS). A KCS is an intelligent information system including humans, computers and knowledge. In an effort to satisfy the great demand for new, powerful tools for turning data into useful, task-oriented knowledge in almost every area, the research area called data mining and knowledge discovery has emerged to explore ideas and methods concerning the qualitative-quantitative research dichotomy. 
This paper proposes a new architecture for a Knowledge Creative System based on OCGS and human-centered meta-synthetic engineering. This type of information system is knowledge-conductive, human-centered data computing. Humans think and decide the key points with creative thinking; machines carry out the repetitive and tedious work. In broader terms, this type of KCS is a multi-agent system consisting of [expert/human, computer in network] social components, and its design and implementation involve the interactions and organization of human-human, human-computer and computer-computer relationships. Further, collective intelligence emerges from the network of the KCS, established by the inter-responses embodied in the contents of interactions among components. We have developed algorithmic tools, i.e. analysis of the link structure of the KCS, to distill the emergent collective wisdom on a given topic. Title: KNOWLEDGE FLOW ANALYSIS TO IDENTIFY KNOWLEDGE NEEDS FOR THE DESIGN OF KNOWLEDGE MANAGEMENT SYSTEMS AND STRATEGIES - A METHODOLOGICAL APPROACH Author(s): Oscar M. Rodríguez Elias, Ana I. Martínez García, Jesús Favela Vara, Aurora Vizcaíno and Juan Pablo Soto Abstract: This paper presents a methodological approach to identifying knowledge needs in organizational processes. The methodology is oriented towards facilitating the elicitation of requirements for designing knowledge management systems and/or strategies. This approach has been applied for different purposes, including identifying the relationships between the knowledge and sources involved in the activities of a process, the mechanisms used for managing knowledge in those processes, and the main problems affecting the flow of knowledge. In order to exemplify the usefulness and applicability of the proposed approach, a case study is described in which the methodology was successfully applied to analyze a software development group. From this case study, different possible solutions to some problems observed in the maintenance process were proposed. 
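The meta-synthetic KCS abstract above mentions analysing the link structure of interactions to distill emergent collective wisdom. One common way to do this kind of link-structure analysis is a PageRank-style iteration; the sketch below is only an illustration of that general idea (the graph, damping factor and function name are assumptions, not the authors' algorithm).

```python
def rank_nodes(links, damping=0.85, iterations=50):
    """Iteratively propagate weight along directed links (PageRank-style).

    `links` maps each node to a list of nodes it points to; the returned
    dict assigns each node a weight that sums to 1.0.
    """
    nodes = set(links)
    for targets in links.values():
        nodes.update(targets)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        # nodes with no outgoing links spread their weight uniformly
        dangling = sum(rank[n] for n in nodes if not links.get(n))
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank
```

In a KCS setting the nodes would be contributions or participants and the links the responses between them; the highest-ranked nodes approximate the topics the collective converges on.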
Title: KNOWLEDGE MANAGEMENT SYSTEMS WITH REPUTATION AND INTUITION - WHAT FOR? Author(s): Juan Pablo Soto, Aurora Vizcaíno, Javier Portillo and Mario Piattini Abstract: Nowadays, knowledge management is considered to be one of the most important processes by companies worried about their competitiveness. These companies focus their efforts on developing systems that can be used to capture, store and reuse the knowledge generated by their employees. Nevertheless, all this effort may be in vain if the system is not widely used by the employees, because the knowledge held in these systems is often not valuable, or, on other occasions, the knowledge sources do not provide the confidence necessary for the information to be reused. In an attempt to avoid this situation, we propose a multi-agent architecture based on communities of practice and on the concept of reputation, with the purpose of controlling the utility of the information stored in a knowledge base. Title: ENRICHING EXECUTIVES’ SITUATION AWARENESS AND MENTAL MODELS - A CONCEPTUAL ESS FRAMEWORK Author(s): Li Niu, Jie Lu and Guangquan Zhang Abstract: Despite the increasing importance of cognitive orientation, most executive support systems (ESS) and other decision support systems (DSS) focus on providing behavioural support for executives’ decision-making. In this paper, we suggest that cognitive orientation in information systems is twofold: situation awareness (SA) and mental models. A literature review of SA and mental models from different fields shows that both of these human mental constructs play very important roles in human decision-making, particularly in naturalistic settings with time pressure, dynamics, complexity, uncertainty, and high personal stakes. Based on a discussion of the application problems of present ESSs, a conceptual ESS framework based on cognitive orientation is developed. 
Under this framework, executives’ SA and mental models can be developed and enriched, which eventually increases the probability of good decision-making and good performance. Title: KNOWLEDGE SHARING AND ORGANIZATIONAL PERFORMANCE - AN AGENT-MEDIATED APPROACH Author(s): Virginia Dignum Abstract: Organizational effectiveness depends on many factors, including excellence, effective planning and the capability to understand and match context requirements. Moreover, organizational performance cannot be evaluated just in economic or other global terms; it must also consider the values of the participating agents (people or groups), such as individual satisfaction. Different organizational structures are clearly better matched to certain problems and context requirements than others, but evaluation methods are mostly lacking. In this paper, we present ongoing work on tools and formalisms to model organizations and to evaluate their performance according to global and individual values, under different circumstances. Special Session on Computer Supported Collaborative Editing Title: TRANSPARENT EXTENSION OF SINGLE-USER APPLICATIONS TO MULTI-USER REAL-TIME COLLABORATIVE SYSTEMS - AN ASPECT ORIENTED APPROACH TO FRAMEWORK INTEGRATION Author(s): Ansgar R. S. Gerlicher Abstract: This paper discusses the transformation of a single-user SVG editing application into a multi-user real-time collaborative editing system. The application’s extension with collaboration functionality was realized using a novel aspect-oriented programming approach to framework integration. This approach is platform independent, supports heterogeneous applications and does not require an application-specific API or access to the application’s source code. The collaboration functionality in this case is provided by the Collaborative Editing Framework for XML (CEFX), which uses the Document Object Model as a standard interface to the application’s data model. 
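The CEFX abstract above extends an application with collaboration features by intercepting its data-model operations rather than modifying its source. CEFX does this with aspect-oriented weaving; the Python sketch below only illustrates the underlying interception idea with a wrapper object (the `Doc` model, method names and broadcast callback are hypothetical, not part of CEFX).

```python
class Doc:
    """A stand-in 'application data model' (hypothetical, for illustration)."""
    def __init__(self):
        self.text = ""

    def insert(self, pos, s):
        self.text = self.text[:pos] + s + self.text[pos:]

class CollaborationProxy:
    """Wraps a model and broadcasts mutating calls before forwarding them,
    mimicking interception of edits without touching the model's source."""
    def __init__(self, model, broadcast, mutators):
        self._model = model
        self._broadcast = broadcast
        self._mutators = set(mutators)

    def __getattr__(self, name):
        attr = getattr(self._model, name)
        if name in self._mutators and callable(attr):
            def wrapped(*args, **kwargs):
                self._broadcast((name, args, kwargs))  # notify collaborators
                return attr(*args, **kwargs)
            return wrapped
        return attr
```

A collaborating site would replay the broadcast `(method, args)` tuples against its own copy of the model; an aspect weaver achieves the same interception at the bytecode level instead of through a wrapper.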
Title: SUPPORTING COLLABORATIVE WRITING OF XML DOCUMENTS Author(s): Gérald Oster, Hala Skaf-Molli, Pascal Molli and Hala Naja-Jazzar Abstract: Data management is a key issue in cooperative systems. Anyone who uses more than one computer or collaborates with other people is aware of the problems posed by having multiple copies of shared documents. Most existing synchronization tools are specific to a particular type of shared data, i.e. text files, calendars or XML files. Therefore, users must employ several tools to keep their different copies up to date. This is not an easy task. To address this issue, we defined a generic synchronization framework based on the operational transformation approach. This framework makes it possible to synchronise text files, calendars and XML files with the same tool. The main objective of this paper is to present this framework and how it is used to support the cooperative writing of XML documents. An implementation is illustrated through the revision control system called So6, which is part of a distributed collaborative technology called LibreSource. Title: SUPPORTING ASYNCHRONOUS COLLABORATIVE EDITING IN MOBILE COMPUTING ENVIRONMENTS Author(s): Marcos Bento and Nuno Preguiça Abstract: Mobile computing environments have changed in recent years, with the increasing use of different types of mobile devices and wireless communication technologies. To allow users to store and share their data in this new environment, we are building the Files EveryWhere (FEW) system, which exploits the multiple available storage and communication devices to provide good availability and performance. The system is based on optimistic replication and can be used as a tool for supporting asynchronous collaborative editing, as it allows users to cooperate by accessing and modifying shared documents. Our approach is implemented using the file system interface, thus allowing users to continue using their favorite applications. 
In this paper, we focus mainly on the FEW reconciliation mechanism. Our approach is based on operational transformation and includes several new techniques. First, we propose a new technique for handling operations in operational transformation algorithms that supports efficient epidemic dissemination. Second, we propose a new set of transformation functions that explicitly handle line versions in text files. Finally, we propose a set of transformation functions that explicitly handle file versions for opaque files. Title: USING NARRATIVES IN COLLABORATIVE WRITING - AN EXAMPLE Author(s): Nishadi De Silva Abstract: Document coherence is often harder to achieve in collaborative writing owing to a lack of group consensus and misaligned contributions by the co-authors. By ‘coherence’ we refer to the feature of a text that makes it easy to read and understand. This can be linked to the implicit story that a document conveys to its reader. Despite coherence being an integral aspect of a successful document, software support for it is minimal. Collaborative writing tools do ensure syntactic consistency, but this still does not guarantee coherence. Other approaches, such as agreeing on an outline at the start, can improve the document, but outlines too have their shortcomings. Previously, a technique called narrative-based writing was introduced to fill these gaps, and a prototype of a tool that allows co-authors to engage in this method was built. The purpose of this paper is to present a simple example of how a team of authors can make use of this narrative-based technique and tool, and to show how the corresponding document evolves. 
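Several of the synchronization abstracts above (So6/LibreSource and the FEW reconciliation mechanism) rely on operational transformation. A minimal sketch of the core idea, for the simplest case of two concurrent character insertions into a shared string, looks like this; it is an illustration of the general technique, not the transformation functions of either system (tie-breaking by site priority is omitted for brevity).

```python
def transform_insert(op, other):
    """Adjust insert `op` so it applies correctly after the concurrent
    insert `other` has already been executed. Ops are (position, text)."""
    pos, text = op
    opos, otext = other
    if opos <= pos:
        # the concurrent insert landed before us: shift our position
        return (pos + len(otext), text)
    return op

def apply_insert(doc, op):
    """Execute an insert operation on a string document."""
    pos, text = op
    return doc[:pos] + text + doc[pos:]
```

The convergence property: two sites that receive the same pair of concurrent inserts in opposite orders, each transforming the remote operation against the local one, end up with identical documents.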
Title: FLEXIBLE RECONCILIATION OF XML DOCUMENTS IN ASYNCHRONOUS EDITING Author(s): Claudia-Lavinia Ignat and Gérald Oster Abstract: As XML documents are increasingly being used in a wide variety of applications, and people often work in teams distributed across space and time, it is very important that users be supported in collaboratively editing XML documents. Existing tools do not offer appropriate support for the management of conflicting changes performed in parallel on XML documents. In this paper we propose a merging mechanism that offers users the possibility to define conflict nodes that are protected from concurrent changes. Changes referring to non-conflict nodes are automatically merged, while users are assisted in manually merging changes referring to conflict nodes. Changes are tracked by means of operations associated with the nodes they target, and merging relies on an operation-transformation mechanism adapted for hierarchical structures. Special Session on Applications in a Real World Title: CHANNELS TO THE FUTURE Author(s): Gábor Magyar Abstract: The long-term archiving of digital documents is a very challenging task because of policy, legal, intellectual property rights, metadata, semantic support and other issues. This paper merges technical and sociotechnical approaches. As more research disciplines and societal sectors have come to rely on data-driven models and observational data, the archiving problem is growing, the shortcomings of current technologies have become apparent and the need to preserve historical material has become imperative. The variety and complexity of digital documents as information technology objects bring up a basic question: is it necessary to preserve the variety and complexity of the original objects? Our answer in general is ’no’: the essential attributes of a document are preserved when the document is transformed to different platforms. There are many reasons to change the format of a document. 
We use the categories of physical, logical, and conceptual layers in order to define generic properties that are true of all digital documents. This approach gives an overall framework for a general preservation strategy managing technical obsolescence and semantic mutations. Title: INTELLIGENT AGENT AND KNOWLEDGE MANAGEMENT PERSPECTIVES FOR THE DEVELOPMENT OF INTELLIGENT TUTORING SYSTEMS Author(s): Janis Grundspenkis Abstract: The development of intelligent tutoring systems is discussed from the intelligent agent and knowledge management perspectives. A conceptual model in which both perspectives are integrated is proposed. The model consists of a system layer based on the agent paradigm and a knowledge worker layer responsible for the personal knowledge management of the knowledge worker (teacher and/or student). The implemented prototype of an intelligent knowledge assessment system is described. Title: ANALYSIS OF BUSINESS PROCESS FLEXIBILITY AT DIFFERENT LEVELS OF ABSTRACTION Author(s): Marite Kirikova, Renate Strazdina, Janis Grundspenkis and Janis Osis Abstract: Business process flexibility is understood as the capability of a process to be changed without replacing it completely. This implies that there should be one part of the process that may be changed and another part (the process core) that should not be changed. The challenge of business process analysis is the detection and separation of these process subparts. One possible way to meet this challenge is through the use of topological functional modeling and the utilization of graph theory methods, such as path and cycle detection in a digraph, at different levels of abstraction. The cycles that are found at several levels of abstraction may help detect the core of the business process, while other cycles may point to the changeable parts of the process. 
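The business process flexibility abstract above relies on cycle detection in a digraph. The standard DFS back-edge technique can be sketched as follows; the process graph in the usage note is an illustrative assumption, and the function reports the cycle closed by each back edge met during one DFS, not every simple cycle.

```python
def back_edge_cycles(graph):
    """Collect the cycle closed by each back edge found during DFS over a
    directed graph given as {node: [successors]}."""
    cycles, visited, on_path, path = [], set(), set(), []

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        path.append(node)
        for nxt in graph.get(node, ()):
            if nxt in on_path:
                # back edge: the tail of the current path forms a cycle
                cycles.append(path[path.index(nxt):])
            elif nxt not in visited:
                dfs(nxt)
        on_path.discard(node)
        path.pop()

    for node in list(graph):
        if node not in visited:
            dfs(node)
    return cycles
```

For a hypothetical process graph such as `{'plan': ['do'], 'do': ['check'], 'check': ['act'], 'act': ['plan'], 'setup': ['plan']}`, the detected cycle `plan → do → check → act` would be a candidate process core, while the acyclic `setup` step belongs to the changeable part.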
Title: DEVELOPING AN IT MASTERPLAN: THE IMPLICATIONS FOR LOCAL SYSTEMS DEVELOPMENT Author(s): Robert Moreton, Phil Range and Jane Caddick Abstract: The paper reviews the issues of developing a policy for local systems development and how this policy impacts the corporate IS masterplan. The key elements of the policy are presented, as are the benefits that this ‘light touch’ approach can engender. The process recognises that there needs to be some level of knowledge and management of such systems by the central IS/IT service, although functionally and operationally they ‘sit outside’ formal IS management structures. Title: FORMS OF ENTERPRISE’S AGILITY Author(s): Stefan Trzcieliński Abstract: After lean production, agile manufacturing is considered to be the current paradigm for manufacturing businesses. However, authors who write on this subject use the term as a synonym of agile enterprise or agile supply chain and, on the other side, even as a synonym of lean manufacturing. Each of these expressions has a different area of meaning and is connected with a different scope of agility, and although in some cases they can be used interchangeably, this should be done with intent. In this paper the different scopes of agility are treated as its forms. Consistently, the presumption is made that there is no single proper form of enterprise agility and that a contingency approach should be applied when deciding on the form. Each of these forms is presented, including the IT that supports the particular form. Special Session on New Information System and Approaches for Product Maintenance Title: LOGISTICS TRACEABILITY FOR SUPPLY CHAIN IMPROVEMENT - CASE STUDY OF SMMART PROJECT Author(s): Paulina Blaszkowska, Jana Pieriegud and Michal Wolanski Abstract: The tracking and tracing of shipments is nowadays a key element of customer service. RFID technology, which provides real-time tracking, helps to indicate and to monitor the transition of events along the supply chain. 
Traceability makes it possible not only to reduce the total logistics cost and to shorten the order cycle time, but also to increase efficiency, to improve quality performance and to offer new value-added services for clients. The authors present the theoretical background of the problem as well as the experiences and ideas for new solutions currently being developed within the SMMART 6th Framework Programme project. Title: A POLICY-BASED PRIVACY STORAGE APPROACH Author(s): Julien Nowalczyk and Frédérique Tastet-Cherel Abstract: Current information systems rely on complex databases, where information is provided and retrieved by many actors interacting with many items during many operations. Data access and transfer must be protected from external access as well as from any unauthorized internal access. We propose a light and scalable mechanism to secure the storage of such a database. First, we define storage and access control policies for recording information. Then we specify a secured storage proxy that applies these policies. If required, sensitive data are transferred and stored encrypted according to the previously decided security policy. Title: ACTIVITY THEORY MODEL - APPLICATION IN THE AUTOMOTIVE INDUSTRY Author(s): Jon Aldazabal, Gaizka Ballesteros and Juan Antonio Martín Abstract: Activity Theory is a set of basic principles that constitute a general conceptual system, rather than a highly predictive theory, and it constitutes an adequate research tool for generating organizational activity changes. This paper intends to present the activity theory model, a real case of its application in an organization, and how the model achieved the company’s reorganization. 
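The policy-based privacy storage abstract above describes a proxy that applies per-field storage policies before data reaches the database. A toy sketch of that flow is given below; the field names and policy format are assumptions, and base64 stands in for real encryption only to keep the sketch self-contained (an actual proxy would use a proper cipher).

```python
import base64

# Illustrative policy: which fields may be stored in the clear.
POLICY = {"name": "plain", "diagnosis": "encrypt", "ssn": "encrypt"}

def protect(value):
    """Stand-in for encryption; base64 is NOT encryption, it only marks
    where a real cipher would be applied in this sketch."""
    return base64.b64encode(value.encode()).decode()

def store(record, policy):
    """Apply the storage policy to each field before it reaches the database."""
    stored = {}
    for field, value in record.items():
        action = policy.get(field, "encrypt")  # default to the safe choice
        stored[field] = protect(value) if action == "encrypt" else value
    return stored
```

Defaulting unknown fields to the protected path mirrors the fail-safe stance the abstract takes towards unauthorized internal access.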
Title: SUPPLY CHAIN IMPROVEMENT - ASSESSING READINESS FOR CHANGE THROUGH COLLABORATION EVALUATION Author(s): Olivier Zephir, Emilie Chapotot, Stéphanie Minel and Benoît Roussel Abstract: Our goal here is to propose a practical model enabling the assessment of the readiness of cooperating organisational agents to face technological change. The focus is on the quality of cooperation and collaboration, which we presume determines the agents’ readiness for change. Providing such a model facilitates decision-making in process design, such as organisation design or product/service design. The transformation feasibility of existing cooperation is determined through a collective operational effectiveness evaluation. Lillian T. Eby et al. (2000) pointed out that little empirical research has focused on this phenomenon. Armenakis et al. (1993) proposed a theory-based model in which readiness for change is perceived as similar to Lewin’s (1951) concept of unfreezing. According to this theory, beliefs and attitudes are core factors acting on organisational actors’ perception of readiness for change. Readiness for change relates to employees’ abilities and perceptions in facing and supporting a pending organisational change. We consider the change in the routines and practices of collaborating actors in interaction with the degree of activity change. In an organizational system based on cooperation, the various actors interact in a team spirit for a general interest and share a collective output. A certain degree of confidence and comprehension between actors is inferred. When change, technological or structural, affects a company, organisational actors face changes in roles, rules, methods, tools and habits. These transformations have an effect on the quality of cooperation and the related performance. We propose below a methodology to measure the impact of change on activities accomplished through cooperation. 
Our empirical research takes place in an organisation adopting a new technology in the maintenance sector. Special Session on Comparative Evaluation of Semantic Web Service Frameworks Title: SWS CHALLENGE - FIRST YEAR OVERVIEW Author(s): Charles Petrie, Holger Lausen and Michal Zaremba Abstract: The SWS Challenge held three workshops in 2006, the third evaluating six teams. For the most part, our experience has validated the methodology, though we have learned much during the year. We take a software engineering approach to evaluating Semantic Web Services. Teams automatically validate their solutions to problems by having their systems send correct messages to the SWS Challenge infrastructure. At the workshops, teams present papers about their approach with claims about the ease of changing from one problem to another. We then peer-review these claims and agree upon an evaluation of the approach, as well as certifying the technology problem level. Title: TOWARDS SEMANTIC INTEROPERABILITY - IN-DEPTH COMPARISON OF TWO APPROACHES TO SOLVING SEMANTIC WEB SERVICE CHALLENGE MEDIATION TASKS Author(s): Maciej Zaremba, Tomas Vitvar, Matthew Moran, Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico M. Facca and Christina Tziviskou Abstract: - Title: THE SWS MEDIATOR WITH WEBML/WEBRATIO AND JABC/JETI: A COMPARISON Author(s): Tiziana Margaria, Christian Winkler, Christian Kubczak, Bernhard Steffen, Marco Brambilla, Stefano Ceri, Dario Cerizza, Emanuele Della Valle, Federico M. Facca and Christina Tziviskou Abstract: - Title: SERVICE DISCOVERY WITH SWE-ET AND DIANE - A COMPARATIVE EVALUATION BY MEANS OF SOLUTIONS TO A COMMON SCENARIO Author(s): Ulrich Küster, Andrea Turati, Maciej Zaremba, B. König-Ries, D. Cerizza, E. Della Valle, M. Brambilla, S. Ceri, F. M. Facca and C. Tziviskou Abstract: - Title: A PARTIAL SOLUTION TO THE SEMANTIC WEB SERVICES CHALLENGE PROBLEM USING SWASHUP - THE RUBY ON RAILS SERVICES MASHUP APPROACH Author(s): E. 
Michael Maximilien Abstract: - Title: SWS CHALLENGE - STATUS, PERSPECTIVES AND LESSONS LEARNED SO FAR Author(s): Charles Petrie, Tiziana Margaria, Ulrich Küster, Holger Lausen and Michal Zaremba Abstract: - Title: AUTOMATIC COMPOSITION OF SEMANTIC WEB SERVICES USING PROCESS MEDIATION Author(s): Zixin Wu, Karthik Gomadam, Ajith Ranabahu, Amit P. Sheth and John A. Miller Abstract: - Workshop on Pattern Recognition in Information Systems Title: USING WAVELETS BASED FEATURE EXTRACTION AND RELEVANCE WEIGHTED LDA FOR FACE RECOGNITION Author(s): Khalid Chougdali, Mohamed Jedra and Nouredine Zahid Abstract: In this work, we propose an efficient face recognition system that has two steps. First, we take 2D wavelet coefficients as a representation of face images. Second, for the recognition module we present a new variant of Linear Discriminant Analysis (LDA). This algorithm combines the advantages of recent LDA enhancements, namely relevance-weighted LDA and QR decomposition matrix analysis. Experiments on two well-known facial databases show the effectiveness of the proposed method. Comparisons with other LDA-based methods show that our method improves LDA classification performance. Title: STATISTICAL AND SIMILARITY-BASED CASE MINING AND LEARNING FOR NOVEL CONCEPT RECOGNITION Author(s): Djemel Ziou, Nizar Bouguila, Petra Perner and Bechir Ayeb Abstract: Novelty detection, the ability to identify new or unknown situations that were never experienced before, is useful for intelligent systems aspiring to operate in environments where data are acquired incrementally. This characteristic is common to numerous problems such as information management, medical diagnosis, fault monitoring and detection, and visual perception. We propose to view novelty detection as a case-based reasoning approach. Our novelty-detection method is able to detect novel situations as well as to use the novel events for immediate reasoning. 
To ensure this capacity, we combine statistical and similarity-based inference and learning. This view of CBR takes into account properties of the data such as uncertainty, and the underlying concepts, such as adaptation, storage, learning, retrieval and indexing, can be formalized and performed efficiently. Title: A WEIGHT VECTOR FEATURE FOR 3D SHAPE MATCHING Author(s): Yingliang Lu, Kunihiko Kaneko and Akifumi Makinouchi Abstract: Searching a database of 3D objects for objects similar to a given 3D search object is an important task that arises in a number of database applications, for example in the medical and CAD fields. Most existing similarity models are based on global features of 3D objects. According to our investigations, it is challenging to develop a feature set or feature vector for a 3D object using its partial features. In this paper, we introduce a novel segment weight vector for matching 3D objects rapidly. We also describe a solution, based on partial and geometrical similarity, to the problem of searching for similar 3D objects. As a first step, we split a 3D object into parts by its topology. Next, we introduce a new method to extract the thickness feature of each part and to combine these features into a feature vector for the 3D object. We also propose a novel search algorithm using the introduced feature vector. Furthermore, we present a new solution for improving the accuracy of similarity queries. Finally, we present a performance evaluation of our strategy. The results indicate that our approach offers a significant performance improvement over existing approaches. Since the proposed method is based on partial features, it is particularly suited to searching for objects having a distinct part structure and is invariant to part architecture. 
Title: RECOGNITION OF DYNAMIC SIGNATURES FOR PEOPLE VERIFICATION Author(s): Shern Yau and Dinesh Kumar Abstract: Machine-based identity validation has applications such as the authentication of documents, financial transactions, and entry into restricted spaces and databases. The ineffectiveness of passwords and personal identification numbers has been demonstrated by the recent explosion of fraud. This paper proposes the use of unpenned dynamic signatures to validate the authentic user and related transactional instruments. A comparison of the ability of various classifiers to classify the multi-dimensional features of dynamic signatures is reported. The technique has been tested for single users and multiple users, and also when a forger is actively attempting to cheat the system. The system is able to perfectly distinguish the authentic user from other users when the user’s signature trace is secret. The system is also able to perfectly reject forgers who may have access to the user’s signature, with 10% of the authentic user’s signatures being classified as ‘unknown’. Title: RECOGNITION OF MULTIPLE OBJECTS WITH ADAPTIVE CORRELATION FILTERS Author(s): Vitaly Kober, Marco Pinedo-García and Victor Diaz-Ramirez Abstract: A new method for the reliable pattern recognition of multiple objects in a cluttered background, and the consequent classification of the detected objects, is proposed. The method is based on an adaptive composite correlation filter. The filter is designed with the help of an iterative algorithm exploiting a modified version of the synthetic discriminant function filter. The impulse response of the filter contains the information needed to localize and classify objects belonging to different classes. Computer simulation results obtained with the proposed method are compared with those of known correlation-based techniques in terms of performance criteria for the recognition and classification of objects. 
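The adaptive correlation filter abstract above builds on correlation-based detection. In its simplest one-dimensional form, detection means sliding a template over a signal and locating the peak of the normalized correlation; the sketch below illustrates only this baseline, not the composite synthetic discriminant function filter the paper proposes.

```python
import math

def normalized_correlation(signal, template):
    """Slide `template` over `signal` and return (position, score) of the
    peak zero-mean normalized correlation (score in [-1, 1])."""
    tmean = sum(template) / len(template)
    t = [x - tmean for x in template]
    tnorm = math.sqrt(sum(x * x for x in t))
    best_pos, best_score = 0, float("-inf")
    for i in range(len(signal) - len(template) + 1):
        w = signal[i:i + len(template)]
        wmean = sum(w) / len(w)
        wc = [x - wmean for x in w]
        wnorm = math.sqrt(sum(x * x for x in wc))
        if wnorm == 0 or tnorm == 0:
            continue  # flat window or flat template: correlation undefined
        score = sum(a * b for a, b in zip(wc, t)) / (wnorm * tnorm)
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos, best_score
```

A 2D filter for images works the same way over sliding windows; composite filters such as SDF variants additionally shape the impulse response so that different target classes produce distinguishable peak values.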
Title: SHOT BOUNDARY DETECTION IN FOOTBALL VIDEO MANAGEMENT SYSTEM Author(s): Sanparith Marukatat Abstract: Today, video has become an important part of the multimedia data broadcast through various networks. Shot boundary detection is a fundamental task in a video processing system. This paper presents a shot boundary detection technique for football video. The detector is based on a color histogram with an adaptive threshold chosen by the entropic thresholding technique. This allows the detection of both cuts and gradual transitions in the video. Special attention is paid to identifying wipes among the detected gradual transitions. The system is evaluated on more than one hour of football video. The results obtained are encouraging. An analysis of detection errors is also presented, which can give a guideline for further investigation of shot boundary detection. Title: RECOGNITION OF HUMAN MOVEMENTS USING HIDDEN MARKOV MODELS - AN APPLICATION TO VISUAL SPEECH RECOGNITION Author(s): Wai Chee Yau, Dinesh Kant Kumar and Hans Weghorn Abstract: This paper presents a novel approach to the recognition of lower facial movements using motion features and Hidden Markov Models (HMM) for visual speech recognition applications. The proposed technique recognizes utterances based on mouth video, without using the acoustic signals. The paper adopts a visual speech model that divides utterances into sequences of the smallest visually distinguishable units, known as visemes. The proposed technique uses the viseme model of the Moving Picture Experts Group 4 (MPEG-4) standard. The facial movements in the video data are represented using 2D spatio-temporal templates (STT). The proposed technique combines the discrete stationary wavelet transform (SWT) and Zernike moments to extract rotation-invariant features from the STTs. Continuous Hidden Markov Models (HMM) are used as the speech classifier to model English visemes. 
The preliminary results demonstrate that the proposed technique is suitable for the classification of visemes, with an accuracy of 88.2%. Title: A NOVEL DISTANCE MEASURE FOR INTERVAL DATA Author(s): Jie Ouyang and Ishwar K. Sethi Abstract: Interval data is attracting attention from the data analysis community due to its ability to describe complex concepts. Clustering, as an important data analysis tool, has been extended to interval data. Applying traditional clustering methods to interval data loses information inherent in this particular data type. This paper proposes a novel dissimilarity measure that explores the internal structure of intervals based on domain knowledge. We aim for a probabilistically explainable distance measure. Our experiments show that interval clustering based on the new dissimilarity produces meaningful results. Title: DISTRIBUTED K-MEDIAN CLUSTERING WITH APPLICATION TO IMAGE CLUSTERING Author(s): Aiyesha Ma and Ishwar Sethi Abstract: Storage of digital data has become more distributed, with data residing on different computers loosely connected by some network. While some algorithms have been developed to deal with distributed data, many common algorithms exist only for single-repository environments. This paper proposes a distributed K-Median algorithm for use in a distributed environment with a centralized server, such as the Napster model in a peer-to-peer environment. Several approximate methods for computing the median in a distributed environment are proposed. These methods are analyzed in the context of the iterative K-Median algorithm. Using the proposed K-Median algorithm allows the clustering of multivariate data while ensuring that each cluster representative remains an item in the collection. This facilitates exploratory analysis where having a meaningful representative is important. Image clustering is one such application where having the representative be an object in the collection is beneficial.
This paper describes the application of the proposed K-Median clustering algorithm to a 7100-image dataset. Clustering the images in a distributed environment is intended to assist with both browsing and indexing. Title: INCREMENTAL NON-NEGATIVE MATRIX FACTORIZATION FOR DYNAMIC BACKGROUND MODELLING Author(s): Serhat S. Bucak, Bilge Gunsel and Ozan Gursoy Abstract: In this paper, an incremental algorithm derived from Non-negative Matrix Factorization (NMF) is proposed for background modeling in surveillance-type video sequences. The adopted algorithm, called Incremental NMF (INMF), is capable of modeling the dynamic content of surveillance video and properly controlling the contribution of subsequent observations to the existing representation. INMF preserves the additive, parts-based representation and dimension-reduction capability of NMF without increasing the computational load. Test results are reported comparing the background modeling performance of batch-mode and incremental NMF on surveillance-type video. Moreover, test results obtained with incremental PCA are also given for comparison purposes. It is shown that INMF outperforms conventional batch-mode NMF in all aspects of dynamic background modeling. Although the object tracking performance of INMF and incremental PCA is comparable, INMF is much more robust to illumination changes. Title: BRIDGING THE GAP BETWEEN NAIVE BAYES AND MAXIMUM ENTROPY TEXT CLASSIFICATION Author(s): Alfons Juan, David Vilar and Hermann Ney Abstract: The naive Bayes and maximum entropy approaches to text classification are typically discussed as completely unrelated techniques. In this paper, however, we show that both approaches are simply two different ways of doing parameter estimation for a common log-linear model of class posteriors.
In particular, we show how to map the solution given by maximum entropy into an optimal solution for naive Bayes according to the conditional maximum likelihood criterion. Title: COMBINATION BETWEEN MATHEMATICAL MORPHOLOGY AND HOUGH TRANSFORM METHODS IN ORDER TO EXTRACT ARABIC HANDWRITTEN ZONES FROM SAUDI BANK CHECKS Author(s): Fadoua Bouafif Samoud, Samia Snoussi Maddouri and Noureddine Ellouze Abstract: This paper presents the automatic extraction of handwritten components (literal amount, digital amount and date zone) from Arabic bank checks. Two different methods are developed for this extraction. The first is based on Mathematical Morphology (MM) tools, knowledge of the bank check structure and a projection method. The second is based on the Hough Transform (HT). A combination of these two methods is proposed. Finally, the developed methods are evaluated on 700 checks taken from the CENPARMI Arabic Checks Database. Title: AUTOMATIC DETECTION OF FACIAL MIDLINE AS A GUIDE FOR FACIAL FEATURE EXTRACTION Author(s): Nozomi Nakao, Wataru Ohyama, Tetsushi Wakabayashi and Fumitaka Kimura Abstract: We propose a novel approach for the detection of the facial midline from a frontal face image. The use of the midline as a guide reduces the computation time required for facial feature extraction (FFE), because the midline restricts the multi-dimensional search process to a one-dimensional search. The proposed method detects the facial midline in the edge image as the symmetry axis, using a new application of the generalized Hough transform. Experimental results on the FERET database indicate that the proposed algorithm can accurately detect the facial midline over many different scales and rotations. The total computation time for facial feature extraction is reduced by a factor of 280 using the midline detected by this method.
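The symmetry-axis-by-voting idea behind the midline detector can be sketched in a few lines: pairs of edge points in the same row vote for the column midway between them, and the most-voted column wins. The point set and the restriction to a perfectly vertical axis are illustrative assumptions; the paper's generalized Hough transform also handles scale and rotation.

```python
# Toy sketch of symmetry-axis detection by voting, in the spirit of a
# generalized Hough transform. Points are illustrative only.
from collections import Counter

def vertical_symmetry_axis(edge_points):
    """Each pair of edge points sharing a row votes for the column
    midway between them; the most-voted column is the symmetry axis."""
    votes = Counter()
    by_row = {}
    for x, y in edge_points:
        by_row.setdefault(y, []).append(x)
    for xs in by_row.values():
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                votes[(xs[i] + xs[j]) / 2] += 1
    return votes.most_common(1)[0][0]

# A shape symmetric about column x = 5, plus one noise point.
points = [(3, 0), (7, 0), (2, 1), (8, 1), (4, 2), (6, 2), (9, 3)]
axis = vertical_symmetry_axis(points)
```

Once the axis is known, feature search can indeed collapse to one dimension: candidate eye or mouth positions need only be scanned along rows relative to the axis.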
Title: A COLOUR DOCUMENT INTERPRETATION: APPLICATION TO ANCIENT CADASTRAL MAPS Author(s): Romain Raveaux, Jean-Christophe Burie and Jean-Marc Ogier Abstract: In this paper, a colour graphic document analysis is proposed, with an application to ancient cadastral maps. The approach relies on the idea that document images are quite different from usual images, such as natural scenes or paintings. From this observation, we present an architecture for colour document understanding based on two paradigms: firstly, a dedicated colour representation, named the adapted colour space, which aims to learn the document's specificity; and secondly, a document-oriented segmentation using a region-growing algorithm supervised by a hierarchical strategy. Experiments are performed to judge the whole process, and the first results show good behaviour in terms of information retrieval. Title: IMPROVING ISOMETRIC HAND GESTURE IDENTIFICATION FOR HCI BASED ON INDEPENDENT COMPONENT ANALYSIS IN BIO-SIGNAL PROCESSING Author(s): Ganesh R Naik, Dinesh K. Kumar, Hans Weghorn, Vijay P. Singh and Marimuthu Palaniswami Abstract: Hand gesture identification has various human-computer interaction (HCI) applications. There is an urgent need to establish a simple yet robust system that can identify subtle, complex hand actions and gestures for the control of prostheses and other computer-assisted devices. Here, an approach is explained that demonstrates how hand gestures can be identified from isometric muscular activity, where muscle activity is small and changes are very subtle. Obvious difficulties arise from a very low signal-to-noise ratio in the recorded electromyograms (EMG). Independent component analysis (ICA) is applied to separate these low-level muscle activities. The order and magnitude ambiguity of ICA has been overcome by using a priori knowledge of the hand muscle anatomy and a fixed un-mixing matrix. The classification is achieved using a back-propagation neural network.
Experimental results are shown in which the system was able to reliably recognize motionless gestures. The system was tested across users to investigate the impact of inter-subject variation. The experimental results demonstrate an overall accuracy of 96%, and the system was shown to be insensitive to electrode position, since the experiments were repeated on different days. The advantage of such a system is that it is easy for a lay user to train and can easily be implemented in real time after an initial training. Hence, EMG-based input devices can provide an effective solution for designing mobile interfaces that are subtle and intimate, with a range of applications in communication, emotive machines and human-computer interfaces. Title: AUTOMATIC FACE ALIGNMENT BY MAXIMIZING SIMILARITY SCORE Author(s): Bas Boom, Luuk Spreeuwers and Raymond Veldhuis Abstract: Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a face registration method which searches for the optimal alignment by maximizing the output of a face recognition algorithm. In this paper we investigate the practical usability of our face registration method. Experiments show that our registration method achieves better results in face verification than our landmark-based registration method. We even obtain face verification results similar to those obtained by manually labelling the eyes, nose and mouth and using these landmarks for registration. The performance of the method is tested on the FRGCv1 database using images taken under controlled and uncontrolled conditions.
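The registration-by-score-maximization idea can be reduced to a tiny 1-D sketch: try candidate alignments of a probe against a reference and keep the one that maximizes a similarity score. The signals, the negative-SSD score, and the shift-only search space are illustrative assumptions; the actual method searches 2-D alignment parameters and uses the recognizer's own matching score.

```python
# Toy sketch of registration by score maximization: exhaustively try
# circular shifts and keep the one with the highest similarity score.
# The signals and the score function are illustrative assumptions.

def similarity(a, b):
    """Negative sum of squared differences (higher is more similar)."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def best_shift(reference, probe, max_shift):
    """Search integer shifts s (s = leftward translation applied to
    the probe) and return the one with the highest similarity."""
    scores = {}
    n = len(reference)
    for s in range(-max_shift, max_shift + 1):
        shifted = [probe[(i + s) % n] for i in range(n)]
        scores[s] = similarity(reference, shifted)
    return max(scores, key=scores.get)

reference = [0, 0, 1, 2, 1, 0, 0, 0]
probe = [0, 0, 0, 0, 1, 2, 1, 0]   # same pattern, shifted right by 2
shift = best_shift(reference, probe, max_shift=3)
```

The appeal of this scheme, as the abstract notes, is that the alignment criterion and the verification criterion coincide, so registration is optimized for exactly the score that matters downstream.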
Title: POINT DISTRIBUTION MODELS FOR POSE ROBUST FACE RECOGNITION: ADVANTAGES OF CORRECTING POSE PARAMETERS OVER WARPING FACES TO MEAN SHAPE Author(s): Daniel González-Jiménez and José Luis Alba-Castro Abstract: In the context of pose-robust face recognition, some approaches in the literature aim to correct the original faces by synthesizing virtual images facing a standard pose (e.g. a frontal view), which are then fed into the recognition system. One way to do this is by warping the incoming face onto the average frontal shape of a training dataset, bearing in mind that discriminative information for classification may be thrown away during the warping process, especially if the incoming face shape differs substantially from the average shape. Recently, a method has been proposed for generating synthetic frontal images by modifying a subset of parameters from a Point Distribution Model (the so-called pose parameters) and texture mapping. We demonstrate that if only the pose parameters are modified, client-specific information remains in the warped image and discrimination between subjects is more reliable. Statistical analysis of the verification experiments conducted on the XM2VTS database confirms the benefits of modifying only the pose parameters over warping onto a mean shape. Title: TEXTURE LEARNING BY FRACTAL COMPRESSION Author(s): Benoît Dolez and Nicole Vincent Abstract: This paper proposes a texture learning method based on fractal compression. This type of compression is efficient for extracting self-similarities in an image. Square blocks are studied and the similarities between them highlighted. This allows establishing a score, and thus a rating, of the blocks. Selecting the blocks that best encode the largest part of the image or a texture leads to a database of the most representative ones. The recognition step consists in labelling blocks and pixels of a new image.
The blocks of the new image are matched with those of the different texture databases. As an application, we used our method to recognize vegetation and buildings in aerial images. Title: TRANSDUCTIVE SUPPORT VECTOR MACHINES FOR RISK RECOGNITION OF SUSTAINED VENTRICULAR TACHYCARDIA AND FLICKER AFTER MYOCARDIAL INFARCTION Author(s): Stanislaw Jankowski, Ewa Piatkowska-Janko, Zbigniew Szymański and Artur Oręzia Abstract: This paper presents improved recognition of patients with sustained ventricular tachycardia and flicker after myocardial infarction, based on signal-averaged electrocardiography. The novel approach includes a new filtering technique (FIR and P), an extended signal description by a set of 9 parameters, and the application of a transductive support vector machine classifier. The dataset consists of 376 patients selected and annotated by cardiologists of the Warsaw Medical University. The best score, 94% successful recognition on the test set, was obtained for signals filtered by the FIR method and described by 9 parameters. Title: RELIABLE BIOMETRICAL ANALYSIS IN BIODIVERSITY INFORMATION SYSTEMS Author(s): Martin Drauschke, Volker Steinhage, Artur Pogoda de la Vega, Stephan Müller, Tiago Mauricio Francoy and Dieter Wittmann Abstract: The conservation and sustainable utilization of global biodiversity necessitates the mapping and assessment of the current status and of the risk of loss of biodiversity, as well as the continual monitoring of biodiversity. These demands in turn require the reliable identification and comparison of animal and plant species or even subspecies, especially of those species endangered by extinction. We have developed the Automated Bee Identification System (ABIS), which has not only been successfully deployed for monitoring in Germany, Brazil and the United States, but also supports innovative taxonomical research as part of the Entomological Data Information System (EDIS). Within this framework our paper presents two contributions.
Firstly, we explain how we employ a model-driven extraction of polymorphic features, i.e. region-induced cells, line-induced veins, and point-induced vein junctions, to derive a rich and reliable set of complementary morphological features. Thereby, we emphasize new improvements in the reliable extraction of region-induced and point-induced features. Secondly, we present how this approach is employed to derive important new results, not only in biodiversity research but also in practical implications for biodiversity management with an integrated taxonomical database. Title: RELIABILITY ESTIMATION FOR MULTIMODAL ERROR PREDICTION AND FUSION Author(s): Krzysztof Kryszczuk, Jonas Richiardi and Andrzej Drygajlo Abstract: This paper focuses on the estimation of the reliability of unimodal and multimodal verification decisions produced by biometric systems. Reliability estimates have been demonstrated to be an elegant tool for incorporating quality measures into the process of estimating the probability of correctness of the decisions. In this paper we compare decision- and score-level schemes of multimodal fusion using reliability estimates obtained with two alternative methods. Further, we propose a method of estimating the reliability of multimodal decisions based on the unimodal reliability estimates. Using a standard benchmarking multimodal database, we demonstrate that score-level reliability-based fusion outperforms alternative approaches, and that the proposed estimates of multimodal decision reliability allow for an accurate prediction of errors committed by the fusion module. Title: EXPERIMENTS ABOUT THE GENERALIZATION ABILITY OF COMMON VECTOR BASED METHODS FOR FACE RECOGNITION Author(s): Marcelo Armengot, Francesc J.
Ferri and Wladimiro Díaz Abstract: This work presents some preliminary results on exploring and proposing new extensions of common vector based subspace methods that have recently been proposed to deal with very high dimensional classification problems. Both the common vector and the discriminant vector approaches are considered. The different dimensionalities of the subspaces that these methods use as an intermediate step are considered in different situations, and their relation to the generalization ability of each method is analyzed. Some proposals to improve this generalization ability are made. Comparative experiments using different databases for the face recognition problem are performed to support the main conclusions of the paper. Title: EXTENDING MORPHOLOGICAL SIGNATURES FOR VISUAL PATTERN RECOGNITION Author(s): Sébastien Lefèvre Abstract: Morphological signatures are powerful descriptions of image content based on the framework of mathematical morphology. These signatures can be computed on a global or local scale: they are called pattern spectra (or granulometries and antigranulometries) when measured on complete images, and morphological profiles when related to single pixels. Their goal is to measure shape distribution instead of intensity distribution, so they can be considered a relevant alternative to classical intensity histograms in the context of visual pattern recognition. A morphological signature (either a pattern spectrum or a morphological profile) is defined as a series of morphological operations (namely openings and closings) considering a predefined pattern called the structuring element. Even if it can be used directly to solve various pattern recognition problems related to image data, the simple definitions given in the binary and grayscale cases limit its usefulness in many applications. In this paper, we introduce several 2-D extensions of the classical 1-D morphological signature.
More precisely, we elaborate morphological signatures which try to gather more image information: they not only include a dimension related to object size, but also consider on a second dimension complementary information relative to size, intensity or spectral content. Each of the 2-D morphological signatures proposed in this paper can be defined either on a global or local scale and for the most common kinds of images (binary, grayscale or multispectral). We also illustrate these signatures with several real-life applications related to object recognition and remote sensing. Title: REMOVAL OF UNWANTED HAND GESTURES USING MOTION ANALYSIS Author(s): Khurram Khurshid and Nicole Vincent Abstract: This work presents an effective method for hand gesture recognition under non-static background conditions and for the removal of certain unwanted gestures from the video. For this purpose, we have developed a new approach which mainly focuses on the motion analysis of the hand. For the detection and tracking of the hand, we have made some small innovations in the existing methods, while for recognition, the local and global motion of the detected hand region is analyzed using optical flow. The system is initially trained for a gesture, and the motion pattern of the hand for that gesture is identified. This pattern is associated with the gesture and is searched for in the test videos. The system, thoroughly trained and tested on 20 videos filmed of 4 different people, reported a success rate of 90%.
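The 1-D pattern spectrum that the morphological-signature abstract above builds on can be sketched for a binary 1-D signal: open the signal with structuring elements of increasing length and record the area removed at each size. For a flat 1-D element, an opening simply removes runs of 1s shorter than the element, which is what the toy code below exploits; the signal and sizes are illustrative assumptions.

```python
# Toy 1-D pattern spectrum (granulometry) for a binary signal, in the
# spirit of the morphological signatures described above.

def opening_area(signal, size):
    """Area (number of 1s) surviving a morphological opening with a
    flat structuring element of the given length: runs of 1s shorter
    than the element are removed entirely."""
    area, run = 0, 0
    for v in signal + [0]:          # sentinel 0 flushes the last run
        if v:
            run += 1
        else:
            if run >= size:
                area += run
            run = 0
    return area

def pattern_spectrum(signal, max_size):
    """Area removed at each structuring-element size: a shape-size
    distribution of the signal's runs of 1s."""
    areas = [opening_area(signal, k) for k in range(1, max_size + 2)]
    return [areas[k] - areas[k + 1] for k in range(max_size)]

# Runs of length 2, 2 and 4: the spectrum peaks at sizes 2 and 4.
sig = [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0]
spectrum = pattern_spectrum(sig, max_size=4)
```

The 2-D extensions proposed in the paper add a second axis to this size-indexed distribution (intensity or spectral band), turning the vector above into a matrix.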
Title: STRING PATTERNS: FROM SINGLE CLUSTERING TO ENSEMBLE METHODS AND VALIDATION Author(s): André Lourenço and Ana Fred Abstract: - Joint Workshop on Wireless Ubiquitous Computing Title: MI-GUIDE: A WIRELESS CONTEXT DRIVEN INFORMATION SYSTEM FOR MUSEUM VISITORS Author(s): Nigel Linge, David Parsons, Duncan Bates, Robin Holgate, Pauline Webb, David Hay and David Ward Abstract: The growth in wireless and mobile communications technologies offers many new opportunities for museums, which are constantly striving to improve their overall visitor experience. There is considerable interest in the use of context-aware services to track visitors as they move around a museum gallery so that exhibit information can be delivered and personalised to the visitor. In this paper we present a visitor information system called mi-Guide that is to be deployed within a new communications gallery at the Museum of Science & Industry in Manchester. This paper also reviews previous research into context-driven information systems and other context-aware museum applications. Title: A STRATEGIC BUSINESS TOOL FOR MOBILE INFRASTRUCTURE Author(s): Awangku Hairul Nizam P. H. Ali and Anthony S. Atkins Abstract: This paper outlines the use of a strategic business tool to assist decision makers in applying mobile applications in their commercial operations. Examples of mobile business applications in hospitals, retail operations and customer relationship management are presented to improve quality of service. The paper also describes examples of wireless technologies in relation to geographic range and the cost of implementing mobile technology in various business environments. A framework is proposed to assist practitioners in applying mobile technology to business infrastructure. Title: EXPERIENCES IN APPLYING A MOBILE SERVICE PLATFORM ACROSS DIFFERENT BUSINESS DOMAINS Author(s): Anssi Karhinen, Oskari Koskimies and Jukka K.
Nurminen Abstract: The purpose of this report is to share the experiences and lessons learned in fast-paced mobile service research. We describe the evolutionary steps of a mobile service platform that has been used to field different mobile service concepts in varying business domains. The platform was developed to solve issues of rapid deployment in different field-force applications for data gathering and task management. In this report we focus on the changes needed when the platform was used to develop a mobile service for fleet management and automatic data gathering in the road-maintenance domain. Title: A KEY GENERATION SCHEME OF SELF-ENCRYPTION BASED MOBILE DISTRIBUTED STORAGE SYSTEM Author(s): Yoshihiro Kawahara, Hiroki Endo and Tohru Asami Abstract: Mobile phones are frequently lost or stolen. The latest mobile handsets contain important information such as an address book, short mail messages, and e-cash. To prevent a stranger from accessing such private information, practical security mechanisms have to be introduced into mobile handsets. We have developed a distributed network storage system that protects private data files stored on mobile handsets without demanding complex operations from users. As computation resources on mobile handsets are limited, a lighter encryption scheme is indispensable. In this paper, we report a self-encryption scheme for a mobile distributed storage system. Unlike existing encryption schemes such as PKI (Public Key Infrastructure), this scheme exploits the diversity of data files to generate unique encryption keys while minimizing the computation overhead of encryption. Experimental results show that our scheme can generate completely random keys from zipped text files with light operations. Title: ENHANCING SECURITY OF TERMINAL PAYMENT WITH MOBILE ELECTRONIC SIGNATURES Author(s): Evgenia Pisko Abstract: With the growing number of debit card transactions, security issues have arisen correspondingly.
By applying the latest technical innovations, criminals are using ever more effective methods of card fraud, exploiting security weaknesses of existing debit card payment rules. For instance, if a criminal has acquired the complete card data, he will then be able to use it to withdraw money until the card is blocked. To authorize each payment and to guarantee the integrity of payment information, we have developed a service architecture for mobile-signature-secured payments at the POS, which we present in this paper. To support the proposed architecture we suggest service subscription and payment protocols. Title: THE INTRODUCTION OF VALUE-ADDED SERVICES IN JAMAICA Author(s): Paul Golding and Opal Donaldson Abstract: Since the liberalization of the Jamaican telecommunications sector in 1999, the mobile market has become extremely competitive. The level of competition in the market, coupled with a high level of handset saturation, has created an environment ripe for implementing mobile commerce services. However, based on the reviewed literature, the introduction of mobile commerce services must be done from a consumer perspective. Telecommunication providers are becoming increasingly aware of the importance of understanding consumer attitudes towards wide-scale adoption of mobile services. Introducing applications to market without consumer input could result in the loss of time and financial investment. In order to develop best practices for launching mobile commerce products within the market, researchers have taken the approach of studying consumer interest in mobile commerce and evaluating the characteristics of the products found most desirable.
Consequently, this research seeks to understand the interests of consumers within the Jamaican telecommunication market and to determine the value-added characteristics and properties of services most desired by consumers. Title: MODEL FOR GRID SERVICE INSTANCE MIGRATION Author(s): Dhaval Shah and Sanjay Chaudhary Abstract: Grid computing, emerging as a new paradigm for next-generation computing, enables the sharing, selection, and aggregation of distributed resources for solving large-scale problems in science, engineering, and commerce. The resources in the Grid are heterogeneous, geographically distributed, and dynamic in nature. Resource owners are free to submit or donate resources to the Grid environment at their discretion. The term Web services describes a standardized way of integrating Web-based applications using the XML, SOAP, WSDL and UDDI open standards over an Internet protocol backbone [8]. Web Services are basically stateless, and there is a need to maintain state across transactions based on Web Services. Grid Services are an extension of Web Services in a Grid environment, with statefulness as a key feature. The state of a Grid Service is exposed with the help of Service Data Elements. A Grid Service may fail during its life cycle due to the failure of a resource or the withdrawal of a resource by its owner. Thus, there is a need to provide a reliable solution in the form of Grid Service instance migration to protect the work already carried out for users. This paper proposes a model that supports Grid Service instance migration. Migration of an instance can take place due to the failure of a resource, an increase in load at the resource, a change in the policy of the domain in which the resource resides, user-specified migration, or the withdrawal of a resource by its owner.
It enables users to specify migration if they do not trust the domain in which the instance is running. The model includes an incremental checkpointing mechanism to facilitate migration. Title: AN ARCHITECTURE FOR UBIQUITOUS APPLICATIONS Author(s): Sofia Zaidenberg, Patrick Reignier and James L. Crowley Abstract: This paper proposes a framework intended to help developers create ubiquitous applications. We argue that context is a key concept in ubiquitous computing and that, by nature, a ubiquitous application is distributed and needs to be easily deployable. Thus we propose an easy way to build applications made of numerous interconnected modules spread across the environment. This network of modules forms a permanently running system. The control (installation, update, etc.) of such a module is obtained by a simple, possibly remote, command, without requiring the whole system to be stopped. We have used this architecture ourselves to create a ubiquitous application, which we present here as an illustration. Title: BLUETOOTH GAMING WITH THE MOBILE MESSAGE PASSING INTERFACE (MMPI) Author(s): Daniel C. Doolan and Sabin Tabirca Abstract: The Mobile Message Passing Interface (MMPI) is a library implemented under J2ME to provide the fundamental functions found in the standard MPI libraries used for clusters and parallel machines. Nodes of a cluster are usually connected to one another over a very high speed cabled interconnect. In the mobile domain one does not have the luxury of connecting devices with cabling, hence the MMPI library was built to take advantage of the Bluetooth capabilities that the majority of current mobile devices feature as standard. Mobile devices inherently have limited processing abilities. The MMPI library alleviates this problem by allowing the processing power of several devices to be used, so one can solve problems that a single device would be incapable of solving within a reasonable time frame.
This paper discusses how the MMPI library can be applied to the application domain of Bluetooth-enabled mobile gaming. Title: PROACTIVE MOBILE LEARNING ON THE SEMANTIC WEB Author(s): Rachid Benlamri, Jawad Berri and Xiaoyun Zhang Abstract: Flexible and personalized instruction is one of the most important requirements for next-generation intelligent educational systems. The intelligence of an e-learning system is thus measured by its ability to sense, aggregate and use the various contextual elements to characterize the learner, and to react accordingly by providing a set of customized learning services. In this paper we propose a proactive context-aware mobile learning system on the Semantic Web. The contribution of this work is a combined model using both a probabilistic learning technique and an ontology-based approach to enable intelligent context processing and management. The system uses a Naïve Bayesian classifier to recognize high-level contexts in terms of their constituent atomic context elements. Recognized contexts are then interpreted as triggers of actions yielding a Web service composition. This is achieved by reasoning on the ontological description of the atomic context elements participating in the high-level context. Workshop on Modelling, Simulation, Verification and Validation of Enterprise Information Systems Title: CHECKING PROPERTIES OF BUSINESS PROCESS MODELS WITH LOGIC PROGRAMMING Author(s): Volker Gruhn and Ralf Laue Abstract: Logic programming has been successfully used for reasoning about various kinds of models. However, in the area of business-process modeling it has not yet gained the attention it deserves. In this article, we give some examples of how logic programming can be exploited for verifying or finding properties of graphical models used by business process modelers, for example event-driven process chains (EPC), UML activity diagrams, BPMN or YAWL.
We show how the approach works on different properties of business process models, including semantic (structural) correctness and modeling style. Title: CONSISTENCY OF LOOSELY COUPLED INTER-ORGANIZATIONAL WORKFLOWS WITH MULTILEVEL SECURITY FEATURES Author(s): Nirmal Gamia and Boleslaw Mikolajczak Abstract: The paper presents an algorithm to verify the consistency of Inter-Organizational Workflows (IOWF) with Multi-level Security (MLS). The algorithm verifies whether the implementation of an Inter-Organizational Workflow with Multi-level Security features meets the specification. The algorithm reduces the workflows of the participating organizations using reduction rules while preserving the communication patterns between organizations. The paper also presents an algorithm to identify redundant implicit places in the IOWF with MLS features. We conclude that an IOWF with MLS features is k-consistent with a Message Sequence Chart (MSC) if the number and order of messages passed between organizations in the reduced IOWF with MLS features is the same as in the original MSC. Title: CHECKING COMPLEX COMPOSITIONS OF WEB SERVICES AGAINST POLICY CONSTRAINTS Author(s): Andrew Dingwall-Smith and Anthony Finkelstein Abstract: Research in web services has yielded reusable, distributed, loosely coupled components which can easily be composed to build systems or to produce more complex services. Composition of these components is generally done in an ad-hoc manner. As compositions of services become more widely used and, inevitably, more complex, there is a need to ensure that they obey constraints. In this paper, we consider the need to provide policy constraints on service compositions that define how services can be composed in a particular business setting. We describe compositions using WS-CDL and we use xlinkit to express policy constraints as consistency rules over XML documents.
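The consistency criterion stated in the IOWF abstract above, that the reduced workflow must reproduce the MSC's inter-organizational messages in the same number and order, can be sketched as a simple trace comparison. The message names and the "local:" marker for internal steps are illustrative assumptions, not notation from the paper.

```python
# Toy sketch of the message-order consistency check described above:
# a reduced workflow trace is consistent with a Message Sequence Chart
# if the inter-organizational messages match in number and order.

def inter_org_messages(trace, internal_prefix="local:"):
    """Keep only inter-organizational messages, dropping internal
    steps (marked here with an assumed 'local:' prefix)."""
    return [m for m in trace if not m.startswith(internal_prefix)]

def consistent_with_msc(reduced_trace, msc_messages):
    """The reduced workflow trace must reproduce the MSC messages
    exactly: same messages, same count, same order."""
    return inter_org_messages(reduced_trace) == list(msc_messages)

msc = ["order", "invoice", "payment"]
workflow_trace = ["local:check-stock", "order", "local:book",
                  "invoice", "payment"]
ok = consistent_with_msc(workflow_trace, msc)
```

In the paper this comparison is applied after Petri-net reduction rules have collapsed each organization's internal behaviour, which is what the prefix filter crudely imitates here.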
Title: BRIDGING THE GAP BETWEEN XPDL AND SITUATION CALCULUS: A HYBRID APPROACH FOR BUSINESS PROCESS VERIFICATION Author(s): Bing Li and Junichi Iijima Abstract: Business Process Verification (BPV) is one of the important functions of emerging BPM systems. At present, the proposed approaches are not yet widely applied because of the gap between the formal models defined in research and the informal models used in industry. This paper proposes a hybrid approach to solve this problem. XPDL is used to describe business processes, and Situation Calculus is employed as the formalism to perform BPV. The gap between these two process models is bridged by a newly proposed language, XSSL. Based on the formalism of Situation Calculus, the XSSL-formatted process model is logically verified. A prototype system is implemented to demonstrate the feasibility of this approach. Title: SUPPORT OF PROJECT PLANNING IN CHEMICAL ENGINEERING VIA MODELING AND SIMULATION Author(s): Bernhard Kausch, Morten Grandt and Christopher M. Schlick Abstract: The quality of planning and conducting development processes is significantly influenced by the experience of the experts in project planning. Adequate support of these processes through objective and quantifiable data is largely missing. The approach explained in detail below shows a method which supports the fast, easy and experience-based generation of a project plan, as well as the simulation-supported examination and improvement of a project, especially under consideration of available resources. The methods are briefly introduced via an example from the chemical engineering industry. It is shown how various project constellations can be compared using Petri net simulation, thereby making it possible to reach an improved agreement on the project plan with regard to available persons and resources.
The example project is simulated with different numbers of employees and different resource configurations. An analysis of the results shows that beyond a certain number of employees, adding more no longer shortens the project duration. Additionally, the four resources that, as bottleneck resources, most strongly affect the total project duration are determined.

Title: RESOURCE WORKFLOW NETS: A PETRI NET FORMALISM FOR WORKFLOW MODELLING Author(s): Oana Otilia Prisecaru Abstract: A workflow is the automation of a business process that takes place inside one organization. While most formal approaches to workflow modelling consider only the process perspective, we propose a Petri net model which integrates both the process and the resource perspective. The paper introduces a special class of nested Petri nets, resource workflow nets (RWFN-nets), which unifies the two perspectives into a single model. Unlike other models, RWFN-nets permit a clear distinction between the perspectives, efficiently model their interaction, and ensure the flexibility of the system. The paper also defines a notion of behavioural correctness for RWFN-nets, soundness, and proves that this property is decidable.

Title: PROCESS-ORIENTED ORGANIZATION MODELING AND ANALYSIS Author(s): Viara Popova and Alexei Sharpanskykh Abstract: This paper presents a formal framework for process-oriented modeling and analysis of organizations. The high expressivity of the sorted predicate logic language used for specification allows representing a wide range of process-related concepts (e.g., tasks, processes, resources), characteristics and relations, which are described in the paper. Furthermore, for every organization, structural and behavioral constraints on process-related concepts can be identified.
Some of them should always be fulfilled by the organization (e.g., physical-world constraints), whereas others allow some degree of organizational flexibility (e.g., some domain-specific constraints). An organizational model is correct if it satisfies a set of relevant organizational constraints. This paper describes automated formal techniques for establishing the correctness of organizational models w.r.t. a set of diverse constraint types. The introduced framework is part of a general framework for organization modeling and analysis.

Title: A SPECIFICATION AND VALIDATION APPROACH FOR BUSINESS PROCESS INTEGRATION BASED ON WEB SERVICES AND AGENTS Author(s): Djamel Benmerzoug, Mahmoud Boufaida and Fabrice Kordon Abstract: Business process integration and automation have become high priorities for companies seeking operational efficiency. With the burgeoning of e-commerce, there is renewed interest in technologies for coordinating and automating intra- and inter-enterprise business processes. We believe that interaction support technologies will greatly enhance the speed of e-business integration. It seems natural to model complex B2B integrations, which themselves follow interaction protocols. On the other hand, in order to reach an implicit consensus about the possible states and actions in an interaction protocol, it is necessary for the protocol itself to be correct. From this perspective, we develop a novel approach to business process integration. Our approach is based on interaction protocols that enable autonomous, distributed business process modules to integrate and collaborate. In our case, the business process integration is modelled using AUML and specified using BPEL4WS. Furthermore, to increase the reliability of interaction protocols at design time, the approach presented in this paper can validate the BPEL4WS specification against the business constraints (which are specified using the OCL language).
The validated BPEL4WS specification is considered as a specification language for expressing the interaction protocols of the multi-agent system, which can then intelligently adapt to changing environmental conditions.

Title: VALIDATING REASONING HEURISTICS USING NEXT-GENERATION THEOREM-PROVERS Author(s): Paul S. Steyn and John A. van der Poll Abstract: Set theory is a fundamental theory of mathematics and a cornerstone of many formal specification languages. Reasoning about the properties of a formal specification is a tedious task that can be greatly facilitated by an automated reasoner. However, reasoning in set theory poses demanding challenges to automated reasoners. To this end, a number of heuristics have been developed to aid the Otter theorem prover in finding short proofs for set-theoretic problems. This paper investigates the applicability of these heuristics to a next-generation theorem prover, Vampire.

Title: TRANSFORMATION OF BPMN MODELS FOR BEHAVIOUR ANALYSIS Author(s): Ivo Raedts, Marija Petković, Yaroslav S. Usenko, Jan Martijn van der Werf, Jan Friso Groote and Lou Somers Abstract: In industry, many business processes are modelled and stored in Enterprise Information Systems (EIS). Tools supporting the verification and validation of business processes can help to improve the quality of these business processes. However, existing tools cannot be directly applied to the models used in industry. In this paper, we present our approach to model verification and validation: translating industrial models to Petri nets and mCRL2, and subsequently applying existing tools to the models derived from the initial industrial models. The following translations are described: BPMN models to Petri nets, and Petri nets to mCRL2. It is shown what analysis of the derived models can reveal about the original models.

Title: EXTENDING CADP FOR ANALYZING C CODE Author(s): M. Mar Gallardo, P. Merino and D. Sanan Abstract: Many existing open-source projects are written in the classic programming language C. The size and complexity of such projects make it necessary to study specific C-oriented methods and to construct the corresponding automatic tools to increase reliability. For instance, advanced reachability analysis techniques like model checking, which have traditionally been applied to software models, are now being considered very promising methods for detecting execution failures in final code. This paper focuses on extending the well-known toolbox CADP in order to make it easier to analyze realistic concurrent C programs that make use of external functionality provided via well-defined application programming interfaces (APIs). Our approach consists of constructing a tool to convert C code into the usual formats expected by the set of tools composing CADP. The new module allows us to exploit all the functionalities of CADP to assist software reliability: model checking, equivalence checking, testing, distributed verification and performance evaluation.

Title: AN INTERPRETATION OF BEHAVIORAL CONSISTENCY OF UML-RT DIAGRAMS IN TERMS OF CSP+T Author(s): Manuel I. Capel Tuñón, Kawtar Benghazi Akhlaki, Juan A. Holgado Terriza and Luis E. Mendoza Morales Abstract: Although the syntax of UML and UML-RT diagrams is well understood and widely accepted in industry today, it lacks a formal semantics, thus introducing a significant inconsistency risk when employing different kinds of diagrams to describe a system's behavior during modelling. It has to be ensured that these different behavioral specifications are consistent, so we present here a consistency demonstration inside UML composite structure diagrams. We demonstrate a refinement relation from the formal representation of capsule state machines to a specification of their inter-communication protocols described as sequence diagrams.
A set of transformation rules is proposed to obtain a representation of both kinds of diagrams in the common semantic domain of CSP+T process execution traces. The consistency demonstration is applied to validate a UML-RT model of the Production Cell case study.

Title: BUSINESS PROCESS MODELING USING AN INTERACTIVE FRAMEWORK FOR IMMERSIVE RESEARCH, SUPPORT AND TRAINING (I-FIRST) Author(s): Wade M. Poole and S. Ramaswamy Abstract: Business Process Management (BPM) has emerged as a leading technology for business process solutions in current-day enterprise systems. However, business processes change dynamically as companies constantly evolve to meet their core business needs. In this paper, we propose an Interactive Framework for Immersive Research, Support and Training (I-FIRST) to assist Disadvantaged Business Enterprises (DBEs) with an integrated business decision-support system. I-FIRST, which uses a dynamic approach to model the planning and integration of business processes, facilitates the alignment of DBEs' business processes to help them compete successfully. While the framework on the one hand allows modeling of the business process to leverage expert domain knowledge, on the other hand it immerses a decision-maker in selective modifications of some business processes using a dynamic feedback mechanism. The proposed methodology utilizes a complete end-to-end systems-based approach to leverage appropriate feedback, together with a computer-based learning environment called Teachable Agents (TAs) which focuses on the learning-by-teaching paradigm.

Title: USING ETHNOGRAPHIC TECHNIQUES TO DESCRIBE REQUIREMENTS ENGINEERING PROCESSES IN GEOGRAPHIC INFORMATION SYSTEMS WORKGROUPS Author(s): Luis Fernando Medina Cardona Abstract: Geographic Information Systems (GIS) have become a relevant field of focus in Information Systems due to the increasing need for managing spatial information.
However, GIS applications are hard to develop given the heterogeneity of GIS working groups, which involve several disciplines, making it difficult to assess the requirements of the system. This paper presents the initial results of a qualitative study conducted to describe the requirements processes and needs of the GIS community using ethnographic techniques. Ethnography is a discipline taken from the social sciences that places a strong emphasis on fieldwork; for this reason its conceptual framework and tools, such as interviews, participant observation and qualitative analysis, are presented as a way to extract the hidden knowledge of GIS social networks. Applied through fieldwork in different GIS scenarios such as government offices, private consultancies, NGOs and academia, the results obtained are described, allowing the identification of key features related to requirements engineering in GIS applications. Finally, the conclusion includes reflections on the ethnographic techniques applied and considerations for designing better methodologies in the requirements engineering field for the specific GIS domain.

Title: AN INNOVATIVE METHOD FOR BUSINESS PROCESS MODELING Author(s): Joseph Barjis Abstract: As a core area in the IS research field, business process modeling has long attracted theoreticians of new concepts, designers of artifacts, and practitioners of modeling. Their diligent efforts have resulted in numerous methods (e.g., UML, EPC, flowcharts) for process modeling. The existing methods have been challenged in certain respects. First, they mainly represent a flowchart model capturing only the normal flow of activities, ignoring the depth or nested structure of processes. Second, they are informal or semi-formal, not lending themselves to model checking and formal analysis without further translation and mapping procedures. Also, the existing methods are more about control flow than the interaction of human actors.
In this paper we discuss an innovative method that we hope will address the existing challenges and bridge the disconnections, or at least provide an improved tool for that purpose. The proposed method encompasses a set of graphical notations that fit the concept it is based upon.

Title: A HEALTHCARE CENTER SIMULATION USING ARENA Author(s): Joseph Barjis and Matt Hall Abstract: In this case study we report on a detailed simulation project conducted in a family healthcare system. The study was conducted when the center was planning to implement an EMR (Electronic Medical Record) system. In order to document the center's business processes and identify how the processes flow, how different entities interact, and how each entity's role will be affected after the system is implemented, we designed a detailed business process model using the business transaction concept. Once the model had identified the center's main activities and the actors involved, simulation and animation models were developed using the Arena™ simulation tool and environment. The report is in the format of a case study.

Title: FORMAL SEMANTICS FOR PROPERTY-PROPERTY RELATIONS IN SEAM VISUAL LANGUAGE: TOWARDS SIMULATION AND ANALYSIS OF VISUAL SPECIFICATIONS Author(s): Irina Rychkova and Alain Wegmann Abstract: Enterprise architecture (EA) is the discipline that analyzes and designs enterprises and IT systems. In general, EA frameworks do not propose a visual modeling notation. SEAM [1] is an EA method that defines a visual language for enterprise modeling. Our goal is to provide a formal semantics for SEAM. Model simulation, model comparison, and refinement verification are practical benefits we expect from this formalization. Formal semantics for SEAM were partially addressed in our previous work [2]. This paper complements the existing semantics with a formalization of property-property relations in the SEAM visual language.
This formalization is based on the theory of multi-relations [3] and Relation Partition Algebra (RPA) [4].

Title: UML-DRIVEN INFORMATION SYSTEMS AND THEIR FORMAL INTEGRATION VALIDATION AND DISTRIBUTION Author(s): Nasreddine Aoumeur and Gunter Saake Abstract: As the de-facto standard object-oriented (OO) methodology for developing information systems, UML, with its different diagrams, supporting tools and process, represents the most widely accepted software-engineering means for developing contemporary information systems. Nevertheless, given this wide acceptance by all organization stakeholders (including novice ones), severe weaknesses at the modelling level have to be tackled before venturing into further implementation phases. Such serious concerns include: (1) the absence of coherence and complementarity between the different structural and behavioral diagrams, such as class, collaboration and statechart diagrams and OCL constraints; (2) due to this absence of a coherent global view, any consistent validation of whole information-system conceptual models is deemed impossible; (3) whereas current information systems are mostly networked and concurrent, UML-driven information systems still fall into the old sequential and centralized tradition. To leverage UML-driven information systems conceptual modelling towards (partially) circumventing these intrinsic shortcomings, we propose a semi-automatic intermediate abstract phase (instead of direct OO implementation using Java/C++), governed by a rigorous component-based operational and visual conceptual model. Referred to as {\co}, this specification/validation formalism is based on a tailored formal integration of most OO concepts and mechanisms, enhanced by modularity principles, into a variant of algebraic Petri nets. For rapid-prototyping purposes, {\co} is semantically interpreted into rewriting logic.
This UML-{\co} proposal for the rigorous development of distributed information systems is illustrated through a non-trivial case study of production systems.

Workshop on Security in Information Systems

Title: RESEARCH ON COUNTER HTTP DDOS ATTACKS BASED ON WEIGHTED QUEUE RANDOM EARLY DROP Author(s): Guo Rui, Chang Guiran, Hou Ruidong, Baojing Sun, Liu An and Bencheng Zhang Abstract: This paper proposes a new approach, called Weighted Queue Random Early Drop admission control, which protects small and medium online-business Web sites against HTTP DDoS attacks. Weighted Queue Random Early Drop is used to compute the dropping probability to avoid bursty traffic, and a weighted-queue scheduler is adopted to implement access-rate limiting. The feasibility and effectiveness of our approach are validated by measuring the performance of an experimental prototype against a series of attacks. The advantages of the scheme are discussed and further research directions are given.

Title: A PROPOSAL FOR EXTENDING THE EDUROAM INFRASTRUCTURE WITH AUTHORIZATION MECHANISMS Author(s): Manuel Sánchez Cuenca, Gabriel López, Óscar Cánovas and Antonio F. Gómez-Skarmeta Abstract: Identity federations have emerged in recent years to ease the deployment of resource-sharing environments among organizations. One common feature of these environments is the use of access control mechanisms based on user identity. However, most of these federations have realized that user identity alone is not enough to offer fine-grained access control and value-added services. Therefore, additional information, such as user attributes, needs to be taken into account. This paper presents how one such real and widely deployed identity federation, eduroam, has been extended to make use of the user attributes defined in the user's home domain in order to make authorization decisions during the access control process.
This authorization framework has been integrated by means of the NAS-SAML infrastructure, which defines a network access control service based on SAML and the AAA architecture.

Title: A REPUTATION SYSTEM FOR ELECTRONIC NEGOTIATIONS Author(s): Omid Tafreschi, Dominique Maehler, Janina Fengel, Michael Rebstock and Claudia Eckert Abstract: In this paper we present a reputation system for electronic negotiations. The proposed system facilitates trust building among business partners who interact with each other in an ad-hoc manner. The system enables market participants to rate the business performance of their partners as well as the quality of the offered goods. These ratings are the basis for evaluating the trustworthiness of market participants and the quality of their goods. The ratings are aggregated using the Web of Trust concept. This approach makes the proposed system robust against malicious behavior aimed at manipulating the reputation of market participants.

Title: A FAIR NON-REPUDIATION SERVICE IN A WEB SERVICES PEER-TO-PEER ENVIRONMENT Author(s): Berthold Agreiter, Michael Hafner and Ruth Breu Abstract: "Non-repudiation", a well-known concept in security engineering, provides measures to ensure that participants in a communication process cannot later deny having participated in it. Such a concept is even more important in service-oriented architectures (e.g., electronic billing). However, there is no sophisticated standard implementing fair non-repudiation in such an environment. In this paper, we introduce a framework providing fair non-repudiation for Web Service messages. It executes a previously specified protocol using Web Services technology itself, but completely hides the protocol execution from the target Web Services. To allow the integration of such security requirements in an early phase of development, a model-driven configuration approach is used.
Furthermore, the procedure is not tied to non-repudiation protocols alone; a broad range of protocols can be integrated in a similar way. The framework presented in this paper leverages existing standards and protocols for efficient adoption in service-oriented architectures.

Title: A GENERAL APPROACH TO SECURELY QUERYING XML Author(s): Ernesto Damiani, Majirus Fansi, Alban Gabillon and Stefania Marrara Abstract: Access control models for XML data can be classified into two major categories: node filtering and query rewriting systems. The first category includes approaches that use access policies to compute secure user views on XML data sets; user queries are then evaluated on those views. In the second category of approaches, authorization rules are used to transform user queries to be evaluated against the original XML dataset. The aim of this paper is to describe a model combining the advantages of these approaches and overcoming their limitations. The model specification is given using a finite state automaton, ensuring generality and ease of standardization w.r.t. specific implementation techniques.

Title: A THREE LAYERED MODEL TO IMPLEMENT DATA PRIVACY POLICIES Author(s): Gerardo Canfora and Corrado Aaron Visaggio Abstract: Many business services for private companies and citizens are increasingly accomplished through the web and mobile devices. Such a scenario is characterized by high dynamism and untrustworthiness, as a large number of applications exchange different kinds of data. This poses an urgent need for effective means of preserving data privacy. This paper proposes an approach, inspired by the front-end paradigm, to manage data privacy in a very flexible way. Our approach has the potential to reduce the impact of change due to this dynamism and to foster the reuse of strategies, and their implementations, across organizations.
Title: A PRIVACY AWARE AND EFFICIENT SECURITY INFRASTRUCTURE FOR VEHICULAR AD HOC NETWORKS Author(s): Klaus Plößl and Hannes Federrath Abstract: Vehicular Ad Hoc Networks (VANETs) have the potential to dramatically increase road safety by giving drivers more time to react adequately to dangerous situations. To prevent abuse of VANETs, a security infrastructure is needed that ensures security requirements like message integrity, confidentiality, and availability. After detailing the requirements, we propose a security infrastructure that uses asymmetric as well as symmetric cryptography and tamper-resistant hardware. While fulfilling the requirements, our proposal is especially designed to protect the privacy of VANET users and proves to be very efficient in terms of computational needs and bandwidth overhead.

Title: OBTAINING USE CASES AND SECURITY USE CASES FROM SECURE BUSINESS PROCESS THROUGH THE MDA APPROACH Author(s): Alfonso Rodríguez and Ignacio García-Rodríguez de Guzmán Abstract: MDA is an approach to software development based on the transformation of models, complemented by QVT as a language for specifying transformations. This approach is receiving much attention from researchers and practitioners, since it promotes the early specification of requirements at high levels of abstraction, independently of computation, which will later become part of models closer to the software solution. Following this approach, we can create business process models incorporating requirements, including security requirements, that will later become part of more concrete models. In our proposal, based on MDA, we start from secure business process specifications and, through transformations specified with QVT, obtain use cases and security use cases. Such artifacts complement the first stages of an ordered and systematic software development process such as UP.
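To make the flavour of such a model-to-model transformation concrete, here is a deliberately toy sketch: each task of a hypothetical secure business process becomes a use case, and each security requirement attached to a task becomes a security use case. The actual proposal specifies these rules in QVT; everything below, including all names, is invented for illustration only.

```python
# Toy mapping in the spirit of a QVT model-to-model transformation:
# secure-business-process tasks -> use cases; attached security
# requirements -> security use cases. Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    security_requirements: list = field(default_factory=list)

def transform(tasks):
    """Map each task to a use case and each requirement to a security use case."""
    use_cases, security_use_cases = [], []
    for t in tasks:
        use_cases.append(f"UC: {t.name}")
        for req in t.security_requirements:
            security_use_cases.append(f"SUC: {req} ({t.name})")
    return use_cases, security_use_cases

process = [Task("Submit claim", ["Non-repudiation"]), Task("Review claim")]
ucs, sucs = transform(process)
print(ucs)   # ['UC: Submit claim', 'UC: Review claim']
print(sucs)  # ['SUC: Non-repudiation (Submit claim)']
```

A real QVT transformation would be declarative (source and target metamodels plus relation rules) rather than an imperative loop; the sketch only shows the shape of the mapping.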
Title: NEW PRIMITIVES TO AOP WEAVING CAPABILITIES FOR SECURITY HARDENING CONCERNS Author(s): Azzam Mourad, Marc-André Laverdière and Mourad Debbabi Abstract: In this paper, we present two new primitives for Aspect-Oriented Programming (AOP) languages that are needed for the systematic hardening of security concerns. These primitives, called exportParameter and importParameter, are used to pass parameters between two pointcuts. They make it possible to analyze a program's call graph in order to determine how to change function signatures for the passing of parameters associated with a given security hardening. We find this feature necessary in order to implement security hardening solutions that are infeasible or impractical with the current AOP proposals. Moreover, we show the viability and correctness of our proposed primitives by elaborating their algorithms and presenting experimental results.

Title: A DRM ARCHITECTURE FOR SECURING USER PRIVACY BY DESIGN Author(s): Daniel Kadenbach, Carsten Kleiner and Lukas Grittner Abstract: Privacy considerations are one serious point against current DRM systems, because they would allow license issuers to collect large amounts of user data, down to the time at which a user listens to a song or which users are reading which kinds of books. This sort of data could be used for marketing purposes but also for malicious deeds. This paper addresses this threat and establishes a DRM architecture which protects user privacy at the core of its design by adding a trusted third party and an appropriate communication protocol. The work was influenced by a project on mobile DRM based on the OMA specification.

Title: A NEW WAY TO THINK ABOUT SECURE COMPUTATION: LANGUAGE-BASED SECURE COMPUTATION Author(s): Florian Kerschbaum Abstract: Assume two parties, Alice and Bob, want to compute a joint function, but they want to keep their inputs private. This problem setting and its solutions are known as secure computation.
General solutions to secure computation require the construction of a binary circuit for the function to be computed. This paper proposes the concept of language-based secure computation: instead of constructing a binary circuit, program code is directly translated into a secure computation protocol. This concept is compared to approaches for language-based information-flow security, and many connections between the two approaches are identified. The major challenge in this translation is the secure translation of the program's control flow without leaking private information via a timing channel. The paper presents a method for translating an if statement with a secret branching condition that may not be known to any party. Furthermore, the protocol can be optimized using trusted computing, such that the overall performance of a program executed as a secure computation protocol can be greatly improved.

Title: SREPPLINE: TOWARDS A SECURITY REQUIREMENTS ENGINEERING PROCESS FOR SOFTWARE PRODUCT LINES Author(s): Daniel Mellado, Eduardo Fernández-Medina and Mario Piattini Abstract: Security-related requirements are becoming an increasingly significant portion of the total set of requirements for many software systems. At the same time, many systems nowadays are developed based on the product line engineering paradigm. Within product lines, security requirements issues are extremely important because weaknesses in security can cause problems throughout the lifecycle of a line. The main contribution of this work is to provide a standards-based process, which is an add-in of activities in the domain engineering as well as the application engineering processes. These processes deal with security requirements from the early stages of the product line lifecycle in a systematic and intuitive way especially adapted for product-line-based development.
It is based on the use of the latest security requirements techniques, together with the integration of the Common Criteria (ISO/IEC 15408) into the product line lifecycle. Additionally, it addresses the reuse of security artefacts by providing a Security Resources Repository. Moreover, it facilitates conformance to the most relevant security standards with regard to the management of security requirements.

Title: ON THE RELATIONSHIP BETWEEN CONFIDENTIALITY MEASURES: ENTROPY AND GUESSWORK Author(s): Reine Lundin, Thijs Holleboom and Stefan Lindskog Abstract: In this paper, we investigate in detail the relationship between entropy and guesswork. After a short discussion of the two measures and the differences between them, the formal definitions are given. Then, a redefinition of guesswork is made, since the measure is not completely accurate. The change is a minor modification of the last term of the sum expressing guesswork. Finally, two theorems are stated. The first shows that the redefined guesswork is equal to the concept of cross entropy, and the second shows, as a consequence of the first theorem, that guesswork is indeed equal to the sum of the entropy and the relative entropy.

Title: ROBOADMIN: A DIFFERENT APPROACH TO REMOTE SYSTEM ADMINISTRATION Author(s): Marco Ramilli and Marco Prandini Abstract: The most widespread approach to system administration consists in connecting to remote servers by means of a client-server protocol. This work analyzes the limitations of this approach, in terms of security and flexibility, and illustrates an alternative solution. The proposed model eliminates the predictable management port on the server by introducing an additional system, placed between the remote server and its administrator, which makes it possible to devise a management service behaving as a network client instead of a server.
While the focus of this work is on the alternative communication model, whose effectiveness has been experimentally validated, the resulting architecture can be seen as part of a larger picture, tracing a research path towards the definition of a novel administration framework aimed at overcoming many other issues of current techniques.

Title: IMPLEMENTING MOBILE DRM WITH MPEG 21 AND OMA Author(s): Silvia Llorente, Jaime Delgado and Xavier Maroñas Abstract: Digital Rights Management (DRM) is an important issue when trying to provide advanced multimedia content distribution services. DRM was mostly conceived for personal computers, but now, with emerging mobile devices that include multimedia support and higher bandwidth capabilities, users demand more valuable content for their mobile devices. The most relevant initiatives in the area, MPEG-21 and OMA DRM, have been used to implement a DRM system for mobiles. A lot of work still has to be done in this area, and what we present here is a combination of techniques coming from MPEG-21 and OMA to provide a more complex system with combined capabilities.

Title: AN ONTOLOGY FOR THE EXPRESSION OF INTELLECTUAL PROPERTY ENTITIES AND RELATIONS Author(s): Víctor Rodríguez, Marc Gauvin and Jaime Delgado Abstract: Ontologies represent knowledge in a particular area. The lifecycle of Intellectual Property (IP) entities lacks any explicit standard representation, and a semantic expression of its processes and rules would bring a series of benefits. To formalise the expression of IP entities and their relations, a Web Ontology Language (OWL) ontology is proposed to establish a common framework within which the different interested parties can interact. As a demonstration, a sample application based on the ontology is described, in which a central reasoning server receives qualified statements and queries over the ontology, returning the pertinent logical results.
Title: MMISS-SME PRACTICAL DEVELOPMENT: MATURITY MODEL FOR INFORMATION SYSTEMS SECURITY MANAGEMENT IN SMES Author(s): Luis Enrique Sánchez, Daniel Villafranca and Mario Piattini Abstract: For enterprises to be able to use information and communication technologies with guarantees, an adequate security management system is necessary. However, this requires that enterprises know, at every moment, their security maturity level and to what extent their information security system must evolve. Moreover, this security management system must have very low implementation and maintenance costs to be feasible for small and medium-sized enterprises (henceforth, SMEs). In this paper, we put forward our proposal for a maturity model for security management in SMEs and briefly analyse other models that exist on the market. This approach is being applied directly to real cases, thus obtaining improvements in its application.

Title: SECURITY IN TCINMP SYSTEMS Author(s): Katalin Anna Lázár and Csilla Farkas Abstract: In our earlier work we proposed a modification of a grammar-systems-theoretic construction, called networks of parallel language processors, to describe the behavior of peer-to-peer (P2P) systems. In the model, the language processors form teams and send/receive information through collective and individual filters. In this paper we demonstrate how this formal-language-theoretic model can be employed to incorporate network security requirements. More specifically, we show how to model and detect SYN flooding attacks and enforce Discretionary Access Control.

Title: CONFINING THE INSIDER THREAT IN MASS VIRTUAL HOSTING SYSTEMS Author(s): Marco Prandini, Eugenio Faldella and Roberto Laschi Abstract: Mass virtual hosting is a widespread solution to the market need for a platform allowing the inexpensive deployment of web sites.
By leveraging the ever-increasing performance of server platforms, it is possible to let hundreds of customers share the available storage, computing, and connectivity facilities, attaining a satisfactory level of service for a fraction of the total cost of the platform. Since the advent of dynamic web programming, however, implementing a mass hosting solution that achieves a sensible tradeoff between security and efficiency has become quite difficult. This paper compares the characteristics of different hosting implementation techniques, focusing on efficiency, intended as the number of different web sites that a server can power, and security, intended as the effectiveness at isolating (the resources of) the different customers from each other. Subsequently, we propose a solution based on the integration of components that are already well established in the Internet server world. The described architecture can be adapted to an assortment of implementations, one of which has been implemented and tested. Title: A KEY MANAGEMENT METHOD FOR CRYPTOGRAPHICALLY ENFORCED ACCESS CONTROL Author(s): Anna Zych, Milan Petković and Willem Jonker Abstract: Cryptographic enforcement of access control mechanisms relies on encrypting protected data with the keys stored by authorized users. This approach poses the problem of the distribution of secret keys. In this paper, a key management scheme is presented where each user stores a single key and is capable of efficiently calculating the appropriate keys needed to access requested data. The proposed scheme does not require encrypting the same data (key) multiple times with the keys of different users or groups of users. It is designed especially for the purpose of access control. Thanks to that, the space needed for storing public parameters is significantly reduced. Furthermore, the proposed method supports flexible updates when users' access rights change. 
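The Zych et al. abstract does not disclose the scheme's construction, but the general idea behind single-key approaches of this kind can be sketched with a one-way hash hierarchy: a user stores one secret and derives the keys for everything below it, while derivation in the opposite direction is infeasible. The hierarchy and labels below are hypothetical illustrations, not the paper's actual method:

```python
import hashlib

def derive_key(parent_key: bytes, child_label: str) -> bytes:
    """Derive a child key one-way from a parent key and a public label."""
    return hashlib.sha256(parent_key + child_label.encode()).digest()

# A toy access hierarchy: root -> department -> project.
root_key = b"\x00" * 32          # the single secret a top-level user stores
dept_key = derive_key(root_key, "dept:research")
proj_key = derive_key(dept_key, "project:alpha")

# A department-level user stores only dept_key, yet can recompute proj_key;
# going the other way (proj_key -> dept_key) would require inverting SHA-256.
assert derive_key(dept_key, "project:alpha") == proj_key
```

Because only the public labels need to be published, each user's storage stays constant regardless of how much data sits below them in the hierarchy.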
Title: COMPARISON OF IPSEC TO TLS AND SRTP FOR SECURING VOIP Author(s): Barry Sweeney and Duminda Wijesekera Abstract: With the IETF requirement to include Internet Protocol Security (IPsec) in every implementation of Internet Protocol version 6 (IPv6), it is prudent to consider IPsec as a viable protocol for securing IPv6 Voice over Internet Protocol (VoIP) sessions. This approach is currently inconsistent with the direction of industry, which has chosen Transport Layer Security (TLS) to secure the Session Initiation Protocol (SIP) packets and the Secure Real-time Transport Protocol (SRTP) to secure the Real-time Transport Protocol (RTP) packets for VoIP sessions. A comparison of these two approaches is currently not available, and this paper attempts to provide that comparison and to discuss the advantages and disadvantages of each approach so that implementers and Information Assurance (IA) architects may make an informed decision. This paper is not necessarily an IA document, but is instead focused on the comparison of the two approaches based on many factors, including IA concerns. Title: INFERRING SECRET INFORMATION IN RELATIONAL DATABASES Author(s): Stefan Böttcher Abstract: We formalize the problem of finding information leaks in multi-user database systems, and we reduce this problem to the problem of inferring secret answers to database queries from other answers to database queries and a set of given Boolean integrity constraints. Furthermore, we investigate some sufficient conditions under which the answer to a query can be inferred from a previously answered set of database queries and a set of Boolean integrity constraints. Finally, we present a heuristic algorithm to search for a proof that secret information is inferable from answers to database queries and integrity constraints. 
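The inference problem Böttcher formalizes can be illustrated on a toy instance by brute force: a secret value is inferable when it takes the same value in every database state consistent with the integrity constraints and the queries already answered. The domain, constraint, and released answers below are invented for illustration; the paper's heuristic algorithm works on proofs, not enumeration:

```python
from itertools import product

# Toy instance: three salaries, each drawn from a small domain.
DOMAIN = range(0, 6)

def consistent(db):
    """Integrity constraint known to every user: salaries sum to 9."""
    return sum(db) == 9

def inferable(answered, secret_index):
    """A secret attribute is inferable if it takes the same value in every
    database state consistent with the constraint and the answered queries."""
    values = {
        db[secret_index]
        for db in product(DOMAIN, repeat=3)
        if consistent(db) and all(db[i] == v for i, v in answered.items())
    }
    return len(values) == 1

# Answers already released: salary[0] = 4 and salary[1] = 3.
# With the sum constraint, salary[2] = 2 leaks even though it was never queried.
print(inferable({0: 4, 1: 3}, secret_index=2))  # True
print(inferable({0: 4}, secret_index=2))        # False: several values remain
```

Enumeration is exponential in the database size, which is why the paper resorts to sufficient conditions and a heuristic proof search instead.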
Title: SECRDW: AN EXTENSION OF THE RELATIONAL PACKAGE FROM CWM FOR REPRESENTING SECURE DATA WAREHOUSES AT THE LOGICAL LEVEL Author(s): Emilio Soler, Juan Trujillo, Eduardo Fernández-Medina and Mario Piattini Abstract: Data Warehouses (DWs) constitute a valuable support for storing extensive volumes of historical data for the decision-making process. For this reason, it is vital to incorporate security requirements from the early stages of DW projects and enforce them in the subsequent design phases. Very few approaches specify security and audit measures in the conceptual modeling of DWs. Furthermore, these security measures are specified in the final implementation on top of commercial systems, as there is no standard relational representation of security measures for DWs. On the other hand, the Common Warehouse Metamodel (CWM) has been accepted as the standard for the exchange and interoperability of metadata. Nevertheless, it does not allow us to specify security measures for DWs. In this paper, we make use of the extension mechanisms provided by CWM itself to extend the relational package so as to specify at the logical level the security and audit rules captured during the conceptual modelling phase of DW design. Finally, in order to show the benefits of our extension, we apply it to a case study related to the management of a pharmacy consortium's business. Workshop on Natural Language Processing and Cognitive Science Title: IDENTIFYING BOUNDARIES AND SEMANTIC LABELS OF ECONOMIC ENTITIES USING STACKING AND RE-SAMPLING Author(s): Katia Lida Kermanidis Abstract: Semantic entities of the economic domain are detected and labeled in free Modern Greek text using instance-based learning in two phases (stacking), to force the classifier to learn from its mistakes, and random undersampling of the majority class, to improve classification accuracy on the instances of the minority classes. 
Without making use of any external resources (gazetteers, etc.), and with only limited linguistic information for pre-processing, a mean f-score value of 73.3% for the minority classes is achieved. Title: THE COMPUTATION OF SEMANTICALLY RELATED WORDS: THESAURUS GENERATION FOR ENGLISH, GERMAN, AND RUSSIAN Author(s): Reinhard Rapp Abstract: A method for the automatic extraction of semantically similar words is presented which is based on the analysis of word distribution in large monolingual text corpora. It involves compiling matrices of word co-occurrences and reducing the dimensionality of the semantic space by conducting a singular value decomposition. In this way, problems of data sparseness are reduced and a generalization effect is achieved which considerably improves the results. The method is largely language independent and has been applied to corpora of English, German, and Russian, with the resulting thesauri being freely available. For the English thesaurus, an evaluation has been conducted by comparing it to experimental results obtained from test persons who were asked to give judgements of word similarities. According to this evaluation, the machine-generated results come close to native speakers' performance. Title: THE ROLE OF INTENSIONAL AND EXTENSIONAL INTERPRETATION IN SEMANTIC REPRESENTATIONS – THE INTENSIONAL AND PREEXTENSIONAL LAYERS IN MULTINET Author(s): Hermann Helbig and Ingo Glöckner Abstract: Although it is well known that the full meaning of a concept includes an intensional and an extensional aspect, the latter is neglected in almost all practically used knowledge representations and semantic formalisms, which consider only the interrelationships between concepts but not their referential meaning. In logically oriented representations with the usual model-theoretic interpretation, the extensions of predicates or symbols of the language in general belong to a metalevel (the model level) clearly distinguished from the logical language. 
In this paper, we show in what way the extensional aspect is important (in addition to the intensional aspect) to account for the full meaning of natural language expressions. The necessity of dealing with both types of meaning is illustrated by means of semantic phenomena like factual and intensional negation, intensional quantification and cardinalities, or by the meaning of prepositions describing set relationships, among others. We utilize the framework of Multilayered Extended Semantic Networks (MultiNet) to discuss these issues. The MultiNet formalism, with its explicit modeling of intensional and preextensional layers, offers an explanation of the interplay of intension and extension of conceptual entities in the overall process of constituting the meaning representations of natural language expressions. Title: AN EFFICIENT METHOD FOR MAKING UN-SUPERVISED ADAPTATION OF HMM-BASED SPEECH RECOGNITION SYSTEMS ROBUST AGAINST OUT-OF-DOMAIN DATA Author(s): Thomas Plötz and Gernot A. Fink Abstract: Major aspects of cognitive science are based on natural language processing utilizing automatic speech recognition (ASR) systems in scenarios of human-computer interaction. In order to improve the accuracy of such HMM-based ASR systems, efficient approaches for unsupervised adaptation represent the methodology of choice. The recognition accuracy of speaker-specific recognition systems derived by online acoustic adaptation depends directly on the quality of the adaptation data actually used. It drops significantly if sample data outside the scope (lexicon, acoustic conditions) of the original recognizer that generates the necessary annotation are exploited without further analysis. In this paper we present an approach for fast and robust MLLR adaptation based on a rejection model that rapidly evaluates an alternative to existing confidence measures, so-called log-odds scores. 
These measures are computed as the ratio of scores obtained from acoustic model evaluation to those produced by some reasonable background model. Threshold-based detection and rejection of improper adaptation samples, i.e. out-of-domain data, is realized by means of log-odds scores. Experimental evaluations on two challenging tasks demonstrate the effectiveness of the proposed approach. Title: DIALOGUE AS INTER-ACTION Author(s): Gemma Bel-Enguix and M. Dolores Jiménez-López Abstract: In this paper we introduce a formal model of dialogue based on grammar systems theory: Conversational Grammar Systems (CGS). The model takes into account ideas from the study of human-human dialogue in order to define a flexible mechanism for coherent dialogues that may help in the design of effective and user-friendly computer dialogue systems. The main feature of the model is that it presents an action view of dialogue. CGS model dialogue as inter-action, that is, a sequence of acts performed by two or more agents in a common environment. We claim that CGS are able to model dialogue with a high degree of flexibility, which means that they can accept new concepts and modify rules, protocols and settings during the computation. Title: AN ARTIFICIAL IMMUNE SYSTEM BASED APPROACH FOR ENGLISH GRAMMAR CHECKING Author(s): Akshat Kumar and Shivashankar B. Nair Abstract: Grammar checking and grammar correction are well-known problems in the area of Natural Language Processing (NLP). Traditional approaches fall into two major categories: rule based and corpus based. Rule-based approaches rely heavily on grammar rules, while corpus-based approaches are statistical in nature. We provide a novel corpus-based approach to grammar checking based on Artificial Immune Systems. We treat grammatical errors as pathogens (in immunological terms) and build antibody detectors which detect these grammatical errors while letting correct constructs pass through. 
Our results show that we can detect a range of grammatical errors. Our system has many potential applications, e.g. as an intelligent tutoring system, as a general-purpose grammar checker, and in the machine evaluation of documents. Title: EXPERIENCES WITH THE LVQ ALGORITHM IN MULTILABEL TEXT CATEGORIZATION Author(s): A. Montejo-Ráez, M. T. Martín-Valdivia and L. A. Ureña-López Abstract: Text categorization is an important information processing task. This paper presents a neural approach to a text classifier based on the Learning Vector Quantization (LVQ) algorithm. We focus on multilabel multiclass text categorization. Experiments were carried out using the High Energy Physics (HEP) text collection, a highly unbalanced collection. The results obtained are very promising and show that our neural approach based on the LVQ algorithm behaves robustly over different parameters. Title: SENSE ABSTRACTNESS, SEMANTIC ACTIVATION AND WORD SENSE DISAMBIGUATION: IMPLICATIONS FROM WORD ASSOCIATION NORMS Author(s): Oi Yee Kwong Abstract: Automatic word sense disambiguation (WSD) often draws on a variety of contextual cues, and uses some lexical resources or statistical classifiers to decide on the most suitable sense accordingly. While many researchers recognise the importance of using multiple types of lexical information for the task, not many pay serious attention to the cognitive aspects, and decisions are left to probabilistic terms alone. In this study, we compare the responses from a word association task with the lexical associations available from WordNet, a widely used computational lexicon, to explore the effect of sense abstractness on semantic activation, and thus the implications for the lexical sensitivity of WSD. 
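The Montejo-Ráez et al. abstract does not specify which LVQ variant is used; the core of the classical LVQ1 scheme that such classifiers build on is a single update rule: pull the nearest prototype toward a training vector when their classes match, push it away otherwise. The toy 2-D vectors and class labels below are invented for illustration:

```python
import math

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: move the nearest prototype toward x if its class
    matches y, away from x otherwise. Returns the winning prototype index."""
    dists = [math.dist(p, x) for p in prototypes]
    i = dists.index(min(dists))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] = [p + sign * lr * (xj - p) for p, xj in zip(prototypes[i], x)]
    return i

# Two prototypes for a toy 2-D, two-class problem.
prototypes = [[0.0, 0.0], [1.0, 1.0]]
labels = ["sport", "physics"]

# A training point of class "physics" near prototype 1 pulls it closer.
winner = lvq1_step(prototypes, labels, [0.9, 0.8], "physics")
print(winner)  # 1
```

For text categorization, the vectors would be document feature vectors (e.g. tf-idf), and the multilabel setting the paper addresses requires a generalization beyond this single-label rule.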
Title: DICTIONARY MANAGEMENT SYSTEM FOR THE DEB DEVELOPMENT PLATFORM Author(s): Aleš Horák and Adam Rambousek Abstract: In this paper, we introduce a new dictionary management interface for the design, preparation and presentation of generic electronic XML dictionaries using the DEB (Dictionary Editing and Browsing) development platform. The DEB platform provides a strict client-server environment for general dictionary writing systems. So far, several successful NLP tools have been implemented on this platform, one of the best known being the DEBVisDic tool for wordnet semantic network editing and visualization. This paper describes a new part of the DEB platform -- the Administration interface, which is shared by all DEB applications running on one server machine. Title: A LINGUISTICALLY-BASED APPROACH TO DISCOURSE Author(s): Rodolfo Delmonte, Gabriel Nicolae, Sanda Harabagiu and Cristina Nicolae Abstract: We present an unsupervised linguistically-based approach to discourse relation recognition, which uses publicly available resources such as manually annotated corpora (Discourse Graph Bank, Penn Discourse TreeBank, RST-DT), as well as empirically derived data from “causally” annotated lexica like LCS, to produce a rule-based algorithm. In our approach we use the subdivision of discourse relations into four subsets – CONTRAST, CAUSE, CONDITION, ELABORATION – proposed by Marcu & Echihabi, 2000, who report results obtained with a machine-learning approach on a similar experiment, against which we compare our results. Our approach is fully symbolic and is partially derived from the text-understanding system GETARUNS, adapted to a specific task: the recognition of causality relations in free text. We show that in order to achieve better accuracy, both in the general task and in the specific one, semantic information needs to be used besides syntactic structural information. 
Our approach outperforms the results reported in previous papers (Soricut & Marcu, 2003). Title: SEMANTIC RELATION MODELING AND REPRESENTATION FOR PROBLEM-SOLVING ONTOLOGY-BASED LINGUISTIC RESOURCES: ISSUES AND PROPOSALS Author(s): Francisco Alvarez, Antonio Vaquero, Fernando Sáenz and Manuel de Buenaga Abstract: Semantic relations are an important element in the construction of ontologies and models of problem domains. Nevertheless, they remain under-specified. This is a pervasive problem in both Software Engineering and Artificial Intelligence. Thus, we find semantic links that can have multiple interpretations in ontologies that support information systems, semantic data models with abstractions that are not enough to capture the relational richness of problem domains, and improperly structured taxonomies. However, if relations are provided with precise semantics, some of these problems can be avoided, and meaningful operations can be performed on them that can aid the ontology construction process. In this paper we present some insights into the modeling and representation of semantic relations, covering modeling and representational shortcomings as well as a description of the available proposals that aim to provide relations with clear and precise semantics. Title: OBJECT ORIENTED TECHNIQUES FOR AN INTELLIGENT MULTI-PURPOSE ENGLISH LANGUAGE DICTIONARY SYSTEM Author(s): Samia Yousif and Mansoor Al-A'ali Abstract: This research utilizes the features of Object Orientation (OO) to develop TOOT, a dictionary system containing English words, their Arabic meanings, associated actions, semantic relationships, inherited actions and attributes, exceptional relationships and semantics, as well as other characteristics. TOOT utilizes major OO notions such as objects, classes, aggregation, inheritance, encapsulation and polymorphism. Each word in this dictionary system belongs to a class and may have one or more subclasses. 
Subclasses inherit all the public attributes and operations of their superclass, and this concept is utilized in all types of processing on the TOOT dictionary system. TOOT is a knowledge base and can be thought of as an intelligent language model which can be used for many purposes. This research shows how simple phrases can be generated or validated to be semantically correct. In the process of using OO UML to represent semantic knowledge, we have made enhancements and additions to UML itself. Title: THE CAT AND THE BROCADED BAG: USING METAPHOR ANALYSIS TO COMPUTATIONALLY PROCESS CREATIVELY MODIFIED IDIOMS Author(s): Sylvia Weber Russell, Ingrid Fischer and Ricarda Dormeyer Abstract: Theories and computational models of natural language understanding that handle idioms generally circumvent the question of novel modifications to idioms. Yet such variations are prevalent in the media. This paper addresses perhaps the most challenging type of idiom variation, i.e., the variation of decomposable idioms through nontrivial metaphoric modifications in the source domain, that is, the domain of the words of the idiom in their literal senses. An existing metaphor representation system is used as a basis for interpreting such idioms. Title: A MACHINE-LEARNING BASED TECHNIQUE TO ANALYZE THE DYNAMIC INFORMATION FOR VISUAL PERCEPTION OF CONSONANTS Author(s): Wai Chee Yau, Dinesh Kant Kumar and Hans Weghorn Abstract: This paper proposes a machine-learning based technique to investigate the significance of dynamic information for the visual perception of consonants. Visual speech information can be described using static (facial appearance) or dynamic (movement) features. The aim of the paper is to determine the saliency of the dynamic information represented by lower facial movement for visual speech perception. The experimental results indicate that facial movement is distinguishable for nine English consonants, with a success rate of 85% using the proposed approach. 
The results suggest that the time-varying information of visual speech contained in lower facial movements is useful in the machine recognition of consonants and may provide essential cues for the human perception of visual speech. Title: SAYING IS NOT MODELLING Author(s): Christophe Roche Abstract: In this article we claim that a conceptual model built from text is rarely an ontology. Such a conceptualization is corpus-dependent and does not offer the main properties we expect from an ontology, e.g. reusability and soundness. Furthermore, an ontology extracted from text in general does not match an ontology defined by an expert using a formal language. Such a result is not surprising, since an ontology is an extra-linguistic conceptualization, whereas knowledge extracted from text is the concern of textual linguistics. The incompleteness of texts and the use of rhetorical figures, such as synecdoche, deeply modify the perception we may have of the conceptualization. This means that ontological knowledge, which is necessary for text understanding, is in general not embedded in documents. The article ends with some remarks about formal languages: while they allow one to define “a specification of a conceptualization”, they nevertheless raise their own issues, mainly due to their epistemological neutrality. Ontology design remains an epistemological issue. Title: MACHINE ASSISTED STUDY OF WRITERS' REWRITING PROCESSES Author(s): Julien Bourdaillet, Jean-Gabriel Ganascia and Irène Fenoglio Abstract: This paper presents joint work between artificial intelligence and literary studies. As part of the humanities, textual genetic criticism deals with writers' rewriting processes. By studying the drafts and manuscripts issued from these processes, the genesis of the text is discovered. When draft comparison is done manually, it requires a huge amount of work. The introduction of the machine provides a large gain in efficiency and makes it possible to focus on the interpretative work. 
The application we developed relies on a sequence alignment algorithm close to the ones used in molecular biology. This paper describes the textual alignment algorithm, presents an experimental validation, and illustrates the textual analysis with two genetic studies. Title: TOWARDS A SYSTEM ARCHITECTURE FOR INTEGRATING CROSS-MODAL CONTEXT IN SYNTACTIC DISAMBIGUATION Author(s): Patrick McCrae and Wolfgang Menzel Abstract: Most natural language utterances are inherently ambiguous, which results in semantic underspecification. Yet, despite the omnipresence of ambiguity, human communication still succeeds in most cases and even displays a remarkable robustness – quite in contrast to the majority of natural language applications today. The reason for this is that in processing an ambiguous utterance humans also integrate information from sources other than the utterance itself, linguistic or non-linguistic in nature, and thus have access to additional knowledge to enrich the semantic specification that guides disambiguation. One important source of such additional semantic knowledge in humans is sensory input from cross-modal perception. While a wide range of studies has systematically investigated the impact of context upon structural disambiguation in human sentence processing, there are surprisingly few attempts to model the integration of context in natural language processing applications. In this paper we describe a system architecture, motivated by effects observed during human sentence processing, which permits studying the integration of cross-modal context knowledge in syntactic parsing. We hypothesise that integrating cross-modal context into syntactic constraint dependency parsing will significantly and substantially improve the accuracy of structural ambiguity resolution. Title: WHO'S NEXT? 
FROM SENTENCE COMPLETION TO CONCEPTUALLY GUIDED MESSAGE COMPOSITION Author(s): Michael Zock, Paul Sabatier and Line Jakubiec-Jamet Abstract: Sentence generation is both a complex and knowledge-intensive problem. Given some goal, one or several messages must be chosen, words must be activated, put into the right place, shaped morphologically and finally produced in spoken or written form. Given the complexity at hand, people often fail, talking themselves into a corner (dead end) or getting stuck, because they can't find the right word to continue, or simply because they fail to remember what precisely they had in mind. Illico, a system developed for French, was meant to help people overcome this problem. Yet, being designed for sentence completion rather than message construction, it tends to drown the user, a shortcoming that we try to overcome by adding a linguistically motivated ontology together with a tool ensuring conceptual well-formedness. After all, authors normally know what they want to say, and their messages are generally complete and well-formed. Title: A DIALOGUE MANAGER FOR AN INTELLIGENT MOBILE ROBOT Author(s): Marcelo Quinderé, Luís Seabra Lopes and António J. S. Teixeira Abstract: This paper focuses on a dialogue manager developed for Carl, an intelligent mobile robot. It uses the Information State (IS) approach and is based on a Knowledge Acquisition and Management (KAM) module that integrates information obtained from various interlocutors. This mixed-initiative dialogue manager handles pronoun resolution and is capable of asking different kinds of clarification questions and of commenting on information based on the knowledge currently acquired. Title: CONCEPTUAL VECTORS - A COMPLEMENTARY TOOL TO LEXICAL NETWORKS Author(s): Didier Schwab, Lim Lian Tze and Mathieu Lafourcade Abstract: There is currently much research in natural language processing focusing on lexical networks. 
Most of them, in particular the most famous, WordNet, lack syntagmatic information and especially thematic information (the “Tennis Problem”). This article describes conceptual vectors, which allow the representation of ideas in any textual segment and offer a continuous vision of related thematics, based on the distances between these thematics. We show the characteristics of conceptual vectors and explain how they complement lexico-semantic networks. We illustrate this purpose by adding conceptual vectors to WordNet by emergence. Title: BUILDING PARALLEL CORPORA FROM MOVIES Author(s): Lavecchia Caroline, Smaïli Kamel and Langlois David Abstract: This paper proposes to use the DTW algorithm to construct parallel corpora from difficult data. Several parallel corpora have already been used by the machine translation community; frequently, one uses European or Canadian parliament corpora. In order to achieve a realistic machine translation system, we decided to use subtitle movie files. These data can be considered difficult because they contain unfamiliar expressions, abbreviations, hesitations, and words which do not exist in classical dictionaries (such as vulgar words), etc. The obtained parallel corpora can be used to train a machine translation decoder. From 40 movies, we align 43013 English subtitles with 42306 French subtitles. This leads to 37625 aligned pairs with a precision of 92.3%. Workshop on Computer Supported Activity Coordination Title: TASK INTEGRATION FOR KNOWLEDGE WORKERS: ESPECIALLY THOSE INVOLVED IN MULTIPLE COLLABORATIVE ACTIVITIES Author(s): Roger Tagg Abstract: One approach to overcoming the information overload that bedevils much collaborative work is to move towards an activity focus in the next generation of groupware tools. This implies a need to integrate and categorize all the activities, including tasks, that each user is required to do. This paper describes an ongoing research programme to address this need. 
An architectural model for future developments is proposed, and some prototypes developed by the author’s research team are described. Title: A FRAMEWORK FOR BUSINESS PROCESS INTEGRATION - AN AGENT-MEDIATED APPROACH Author(s): Zheng Zhao, Virginia Dignum and Frank Dignum Abstract: Coordination of business processes within and across organizations attracts more and more attention because of the growth of e-commerce and the implicit boundaries of organizations. Current approaches are not flexible enough to support decision making concerning Business Process Integration (BPI) solutions that take into account economic, social, and technical aspects. In this paper, we analyse the problems that currently exist and propose an agent-based framework for mediation. We identify the requirements of this agent-mediated framework and argue for the advantages of using self-adaptive agent organizations as a model for mediation. Title: MINING WORKFLOW EVENT LOG TO FIND PARALLEL TASK DISPATCHING RULES Author(s): Liu Yingbo, Wang Jianmin and Sun Jiaguang Abstract: In many workflow applications, actors are free to pick up work items in their work list. It is not unusual for an actor to start a work item before completing a previously accepted one. Frequent occurrence of this behavior implies potential patterns of work parallelism, which a workflow scheduler can exploit to better dispatch ongoing tasks. In this paper, we apply association rule mining techniques to a workflow event log to analyze various kinds of parallel activity execution patterns. When an actor accepts a new work item, the parallel execution rules mined from the event log can help a workflow scheduler find those work items that might be suitable to be undertaken by the same actor simultaneously. In an experiment on three vehicle manufacturing enterprises, we found 32 strong rules over 40 different workflow activities. We describe our approach and report on the results of our experiment. 
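The association rule mining that Liu et al. apply to workflow logs can be sketched in its simplest form: count which activities an actor holds open in parallel, then keep rules A -> B whose support and confidence clear a threshold. The event log, activity names, and thresholds below are invented for illustration; the paper's actual mining procedure is not described in the abstract:

```python
from collections import Counter
from itertools import combinations

# Hypothetical event log: for each actor session, the set of work items
# the actor held open simultaneously.
sessions = [
    {"weld", "inspect"}, {"weld", "inspect"}, {"weld", "paint"},
    {"weld", "inspect"}, {"paint", "dry"},
]

def mine_pair_rules(sessions, min_support=0.4, min_confidence=0.7):
    """Mine rules A -> B over activity pairs that co-occur in parallel."""
    n = len(sessions)
    item_counts = Counter(a for s in sessions for a in s)
    pair_counts = Counter(frozenset(p) for s in sessions
                          for p in combinations(sorted(s), 2))
    rules = []
    for pair, c in pair_counts.items():
        if c / n < min_support:          # pair too rare overall
            continue
        for a in pair:
            (b,) = pair - {a}
            confidence = c / item_counts[a]
            if confidence >= min_confidence:
                rules.append((a, b, c / n, confidence))
    return rules

print(mine_pair_rules(sessions))
```

A scheduler could then consult the mined rules when an actor accepts activity A, offering the matching B items as candidates for simultaneous dispatch.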
Title: SUPPORTING TIME-VARIANT ARTIFACTS IN GROUPWARE APPLICATIONS Author(s): Eberhard Grummt and Alexander Lorz Abstract: Asynchronous groupware strives to provide a "shared memory" for distributed workers. However, current systems fail to keep track of changes in work and organizational structures, leading to old information being discarded instead of being archived. Especially in so-called "Virtual Organizations", where such changes happen often, being able to "go back in time" is desirable. We present a generic relational data model, including operations, capable of storing and querying time-variant data. The applicability of this model is discussed based on experiences with a prototypical application enabling visualization of and interaction with the respective information. Title: PROACTIVE CONTRACT MANAGEMENT THROUGH RSF SPECIFICATIONS Author(s): Rossella Aiello and Giancarlo Nota Abstract: The modelling and automation of e-contracts is an active research area that aims at providing valid support to organizations for the definition and management of contractual relations. The approach adopted in this paper allows the modelling and monitoring of contracts specified in terms of RSF (Requirement Specification Language) rules. Starting from the planning of events related to contract clauses established during the negotiation phase, we define a set of RSF rules that can be used as patterns for the monitoring of both contract fulfillments and contract violations with respect to obligations, permissions and prohibitions. We also extend the semantics of the RSF language in order to allow the treatment of planned events, together with occurred and non-occurred events, in a single transition rule. This enriched semantics supports the proactive behaviour of a contract management system, enabling the immediate notification of fulfillments and non-compliances as well as the detection of imminent contract violations. 
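The proactive behaviour Aiello and Nota describe, reasoning over planned, occurred, and non-occurred events to flag fulfilled, violated, and imminent obligations, can be illustrated with a plain event checker. This is not the RSF rule syntax; the obligation ("payment within 30 days of delivery"), event names, and warning window are all hypothetical:

```python
from datetime import date, timedelta

def check_obligation(events, today, grace=timedelta(days=30),
                     warn=timedelta(days=5)):
    """Classify one obligation as fulfilled, violated, imminent, or pending."""
    delivered = events.get("delivery")
    paid = events.get("payment")
    if delivered is None:
        return "pending"                 # obligation not yet triggered
    deadline = delivered + grace
    if paid is not None and paid <= deadline:
        return "fulfilled"
    if today > deadline:
        return "violated"                # payment did not occur in time
    if deadline - today <= warn:
        return "imminent"                # proactive notification fires here
    return "pending"

events = {"delivery": date(2007, 6, 1)}
print(check_obligation(events, today=date(2007, 6, 28)))  # imminent
print(check_obligation(events, today=date(2007, 7, 5)))   # violated
```

The "imminent" state is what makes the monitoring proactive: the system can warn the obligated party before the deadline passes rather than merely recording the violation afterwards.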
Title: A MULTI AGENT DECISION SUPPORT SYSTEM FOR REAL TIME SCHEDULING Author(s): N. Taghezout and P. Zarate Abstract: The environment of firms and market requirements calls for increased performance in managing product flows during manufacturing. This management involves a number of functions, among which workshop scheduling is taking on more and more importance. This article presents an approach that takes into account the robustness and flexibility of real-time scheduling by defining and implementing an interactive decision support system. We develop a structure for piloting and supervising a distributed, integrated and reactive workshop. Independent piloting modules in the latter allow organizing resource tasks and adapting generated plans in response to disturbances. As support for modelling and for the implementation of the decisional system, we propose a multi-agent system. The actions of agents at the same hierarchical level are realized through analysis and reaction procedures; they also trigger specific behaviours to compensate for a disturbance. The decision centres (ISP agents) negotiate compromises to solve conflicts under the control of a supervisor entity. Title: ACTION, LANGUAGE AND SOCIAL SEMIOTICS – A THEORETICAL CONTRIBUTION TO COLLABORATIVE WORK AND LEARNING Author(s): Angela Lacerda Nobre Abstract: The epistemic grounding of organisational and computing science thinking is highly relevant to a discussion of collaboration and coordination activity. The critical importance of semiotics, and of the philosophy of action and of language, has been explicitly recognised by scientific communities such as “Organisational Semiotics” (OS) and “Language and Action Perspective” (LAP). 
The recognition of the social embeddedness of organisational activity, and of phenomenological interpretations of knowledge and meaning, is often referred to as a “social turn” in organisational theories such as organisational learning, knowledge management and communities of practice. However, the full potential of such approaches needs to be supported by a renewed interest in philosophical perspectives able to sustain and disseminate their intrinsic value. Opposing a humanist paradigm based on structuralism and cognitivism, the alternative perspective, grounded in social semiotics, Heidegger’s ontology and Peircean pragmatism, enables an interpretation of organisations, information systems, collaboration and coordination that radically shifts the focus towards their inner social dynamisms. Title: ONTOLOGY AND E-LEARNING Author(s): Fabio Clarizia, Francesco Colace and Massimo De Santo Abstract: In the last decade, the evolution of educational technologies has spurred extraordinary interest in new methods for delivering learning content to learners. Distance education today represents an effective way of supporting, and sometimes substituting, traditional formative processes, thanks to the technological improvements achieved in the field in recent years. However, the role of technology has often been overestimated; on the other hand, the amount of information students can obtain from the Internet is huge, and they can easily become confused. Teachers can also be disconcerted by this quantity of content, and they are often unable to suggest the correct contents to their students. In the open scientific literature, it is widely recognized that an important factor in this success is the capability of customizing the learning process to the specific needs of a given learner. 
This goal is still far from being reached, and there is real interest in investigating new approaches and tools to adapt the formative process to specific individual needs. In this scenario, the introduction of the ontology formalism can improve the quality of the formative process, allowing the introduction of new and effective services. Ontologies can lead to important improvements in the definition of a course's knowledge domain, in the generation of adapted learning paths, and in the assessment phase. This paper provides an initial discussion of the role of ontologies in the context of e-learning. We discuss the improvements related to the introduction of the ontology formalism in the e-learning field, and we present a novel algorithm for ontology building through the use of Bayesian networks. Finally, we show its application in the assessment process and some experimental results. Title: ARTIFACT-BASED COORDINATION IN MULTIMEDIA PRODUCTION Author(s): Hilda Tellioğlu Abstract: This paper tries to understand different settings for coordination depending on the interdependencies between activities carried out by team members. By applying coordination theories and investigating real work settings in multimedia production companies, we introduce the concept of artifact-based coordination. It is defined as a kind of coordination in organizations which is mainly initiated, handled and negotiated by means of artifacts. Artifacts are permanent symbolic constructs that act as mediators of coordination. They are used to clarify ambiguities and to settle disputes. They mediate articulation work by acting as intermediaries, with a specific material format, between actors. Artifacts can be of different types: specialized, material, visual, coordinative, common and multilayered. The artifact-centred view of coordinative work practices helps to clarify dependencies, structures and dynamics in organizations around a design project. 
Title: STRATEGY OF RISK MANAGEMENT FOR A DISTRIBUTED SOFTWARE ENGINEERING ENVIRONMENT Author(s): Lafaiete Henrique Rosa Leme, Tania Fatima Calvi Tait and Elisa Hatsue M. Huzita Abstract: Risk management is a subset of the overall management of a software development project and comprises the processes concerned with identifying, analyzing and controlling any threat to the project's success. The objective of this paper is to present a strategy for effective risk management that supports the project management model integrated into DiSEN (Distributed Software Engineering Environment). This strategy aims to fill the lack of a well-defined risk management process, not only for DiSEN but for distributed environments in general. Workshop on Model-Driven Enterprise Information Systems Title: MODELING SEMANTIC WEB SERVICES USING UML 2 Author(s): Florian Lautenbacher and Bernhard Bauer Abstract: The development of web services, and especially semantic web services, is a tremendous task for people who are not familiar with all the language constructs. Especially with the growing number of semantic web service approaches, language-independent service development is needed. Model-driven software development can fill this gap. Therefore, in this paper we introduce a meta-model for semantic web services and show the benefits of automatic code generation on the basis of a small example. Title: MODELS IN CONFLICT - DETECTION OF SEMANTIC CONFLICTS IN MODEL-BASED DEVELOPMENT Author(s): Thomas Reiter, Kerstin Altmanninger, Alexander Bergmayr, Wieland Schwinger and Gabriele Kotsis Abstract: To make the model-driven paradigm a widespread success, appropriate tools such as version control systems (VCS) are required to adequately support a model-based development process. However, the first approaches specializing in model-based versioning do not take into account the semantics of the artefacts they operate upon. 
Thus, conflict detection mechanisms are based on detecting conflicting concurrent modifications only on a software artefact's syntactic representation, without explicitly considering the semantics the artefact stands for. As opposed to a heavyweight approach relying on formal mathematics, we follow a lightweight approach based on creating views of a model that explicate a certain aspect of a modeling language's semantics. Such a view is created through a model transformation from the original model that has been edited by the developers. Applying graph-based comparison strategies to both the original model and the generated view, our approach detects conflicts due to concurrent editing, determining syntactic and semantic conflicts, respectively. Consequently, by means of various example scenarios, we demonstrate how our approach is able to detect conflicts that would otherwise remain undetected. Title: AN ASPECT-ORIENTED APPROACH TO MANAGE QOS DEPENDABILITY DIMENSIONS IN MODEL DRIVEN DEVELOPMENT Author(s): Carsten Köllmann, Lea Kutvonen, Peter Linington and Arnor Solberg Abstract: Model-driven development approaches commonly use an abstraction of platform-specific features to improve the reusability and verifiability of the core functionality models. However, the core functionality may still be tangled with features that address important dependability concerns across a design model – for example, security, trust and performance. These features can commonly be called Quality of Service (QoS) features. This paper presents an approach for managing several dependability dimensions. We use aspect-oriented and model-driven development techniques to separate and construct QoS-independent models, and graph-based transformation techniques to derive the corresponding QoS-specific models. 
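The distinction between syntactic and semantic conflicts drawn above can be illustrated with a toy example (invented here; the paper works on real modeling languages and graph-based comparison). A "view" transformation discards semantically irrelevant detail, so a conflict visible on the raw representation may vanish on the view:

```python
def syntactic_conflicts(base, a, b):
    """Keys that both developers changed relative to base, to different values."""
    return {k for k in base
            if a.get(k) != base[k] and b.get(k) != base[k] and a.get(k) != b.get(k)}

def view(model):
    # Hypothetical semantic view: the *set* of transitions, ignoring
    # their textual order in the serialized model.
    return frozenset(model["transitions"])

base = {"transitions": [("s0", "s1"), ("s1", "s2")]}
a = {"transitions": [("s1", "s2"), ("s0", "s1")]}   # reordered only
b = {"transitions": [("s0", "s1"), ("s1", "end")]}  # retargeted s1

conflicts = syntactic_conflicts(base, a, b)  # both edited 'transitions'
neutral = view(a) == view(base)              # a's edit is semantically neutral
```

Comparing the generated views shows that developer a's edit does not change the model's meaning, so the syntactic conflict on `transitions` need not block a merge, while b's retargeting genuinely alters the semantics.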
Title: GENERATING AND MERGING BUSINESS RULES BY WEAVING MDA AND THE SEMANTIC WEB Author(s): Mouhamed Diouf, Sofian Maabout and Kaninda Musumbu Abstract: Information systems (IS) are getting more and more complex. The design of such systems requires various individuals with varied expertise. The requirements expressed when the design of such a system is decided may change over time, and these changes may occur very often. Thus, the more flexible the system, the easier upgrades will be. One of the standard ways to design flexible systems is the use of so-called “business rules”, whose aim is the separation of business from system in an application. Business rules define and constrain business processes in enterprises. Therefore, many business-governing rules have to be implemented in business-supporting applications in order to reflect the real business environment. The aim of this paper is to show how to automatically generate and merge a part of the business rules by combining Model Driven Architecture and the Semantic Web using the Ontology Definition Metamodel. The semantic web aims to add semantics to web content in order to enable automatic reasoning. We present, through an example, the real benefits to be derived from the adoption of semantic-web based ontologies. Title: DEVELOPMENT OF TRANSFORMATIONS FROM BUSINESS PROCESS MODELS TO IMPLEMENTATIONS BY REUSE Author(s): Teduh Dirgahayu, Dick Quartel and Marten van Sinderen Abstract: This paper presents an approach for developing transformations from business process models to implementations by reusing parts of a transformation. A transformation is developed as a composition of pattern recognition, pattern realization and activity transformation. The approach aims to allow the reuse of pattern recognition and pattern realization; activity transformation is not reusable. The approach includes a pattern language for defining an intermediate model between pattern recognition and pattern realization. 
Title: APPLYING A MODEL-DRIVEN APPROACH TO MODEL TRANSFORMATION DEVELOPMENT Author(s): E. Victor Sánchez Rebull, Orlando Avila-García, José Luis Roda García and Antonio Estévez García Abstract: One of the cornerstones of MDA is the specification and execution of model transformations. This paper proposes the application of MDA to the development of model transformations themselves. In this novel approach, we transform instances of different transformation languages with differing levels of abstraction in a PIM-PSM style. By means of two case studies, we discuss technical details and assess what gains this approach can offer in terms of productivity and maintainability. Title: A CASE STUDY ON THE TRANSFORMATION OF CONTEXT-AWARE DOMAIN DATA ONTO XML SCHEMAS Author(s): Cléver R. G. de Farias, Luís Ferreira Pires and Marten van Sinderen Abstract: In order to accelerate the development of context-aware applications, it would be convenient to have a smooth path between the context models and the automated services that support these models. This paper discusses how MDA technology (metamodelling and the QVT standard) can support the transformation of high-level models of context-aware services onto the implementation of these services using web services. The total transformation process from context-aware services onto web services involves the following aspects: (1) service signatures, which should be translated onto WSDL definitions; (2) context-aware domain data used as input and output data in service operations, which should be translated onto XML schemas; and (3) service behaviours, which should be used to generate the service implementation. This paper concentrates on the modelling and transformation of the context-aware domain data. The results of this paper are generally applicable to the transformation of elements of any domain-specific language expressed in terms of a metamodel onto XML Schema data. 
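The metamodel-to-XML-Schema mapping described above can be caricatured in a few lines: each metamodel element becomes an xs:complexType and each attribute an xs:element. The names and types below are invented for illustration; the paper derives such mappings with QVT transformations rather than ad hoc code:

```python
def to_xsd(entity, fields):
    """Emit a minimal xs:complexType for one metamodel element.

    `fields` is a list of (attribute_name, xsd_builtin_type) pairs.
    """
    lines = [f'<xs:complexType name="{entity}">', "  <xs:sequence>"]
    for name, xs_type in fields:
        lines.append(f'    <xs:element name="{name}" type="xs:{xs_type}"/>')
    lines += ["  </xs:sequence>", "</xs:complexType>"]
    return "\n".join(lines)

# Hypothetical context-aware domain datum: a geographic position.
schema = to_xsd("Position", [("latitude", "double"), ("longitude", "double")])
```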
Title: PROCESS DRIVEN ARCHITECTURE: A MODEL DRIVEN DEVELOPMENT APPROACH FOR PROCESS SUPPORT SOFTWARE Author(s): Sascha Mueller, Stefan Jablonski and Matthias Faerber Abstract: In this paper we propose a new model-based development method specialized in the efficient production of process support software, called the Process Driven Architecture (PDA). It is based on our experiences in the domain of clinical process support and on a comprehensive review of related model-driven software approaches, such as the OMG’s Model Driven Architecture (MDA) or the Software Factories approach. The exemplary implementation of a clinical research process illustrates the feasibility of our PDA approach. Title: TOWARDS A SCALABLE REPRESENTATION OF RUN-TIME INFORMATION: THE CHALLENGE AND PROPOSED SOLUTION Author(s): Abdelwahab Hamou-Lhadj Abstract: An important issue in application modernization is the time and effort needed to understand existing applications. Reverse-engineering software to recover behavioral models is a difficult task, further complicated by the lack of a scalable way of representing the extracted knowledge. The behavior of a software system is typically represented in the form of execution traces. Traces, however, can be extraordinarily large. Existing metamodels such as OMG’s Knowledge Discovery Meta-model and the UML meta-model provide limited support for handling large execution traces. In this paper, we describe a meta-model called the Compact Trace Format (CTF) for efficient modeling of traces of routine (method) calls generated from multi-threaded systems. CTF is intended to facilitate interoperability among modernization tools that focus on the analysis of the behavior of software systems. CTF is designed to be easily extensible to support other types of traces. 
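The core intuition behind compact trace representations, collapsing the repetitions that make raw traces so large, can be shown with a toy run-length encoder. CTF itself is a full meta-model for multi-threaded call trees; this sketch does not attempt to reproduce it:

```python
def compact(trace):
    """Collapse consecutive repetitions of a call into (call, count) pairs,
    a toy stand-in for the loop detection behind compact trace formats."""
    out = []
    for call in trace:
        if out and out[-1][0] == call:
            out[-1] = (call, out[-1][1] + 1)
        else:
            out.append((call, 1))
    return out

trace = ["init", "read", "read", "read", "write", "read", "read", "close"]
compact(trace)
# -> [('init', 1), ('read', 3), ('write', 1), ('read', 2), ('close', 1)]
```

On real traces with millions of events, this kind of repetition folding (applied to subtrees of the call tree rather than a flat list) is what makes the representation scale.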
Joint Workshop on Technologies for Collaborative Business Processes and Management of Enterprise Information Systems Title: BINARY COLLABORATION MODELS RELATED TO MANUAL ACTIVITIES Author(s): Giorgio Bruno Abstract: The operational implications of manual activities in business processes lead to an interesting case study on binary collaborations in which situations of clashing interactions appear. As manual activities come in a number of flavors, several types of collaborations are needed, and this paper categorizes them into three major patterns. These patterns are presented by means of a notation based on colored Petri nets in which most transitions are related to interactions. A policy is proposed in order to deal with the clashing interactions. Title: KAT BASED CAD MODEL OF PROCESS ELEMENTS FOR EFFECTIVE MANAGEMENT OF PROCESS EVOLUTION Author(s): Jeewani Anupama Ginige, Athula Ginige and Uma Sirinivasan Abstract: Processes consist of actions, participants, objects and rules, known as elements. In a process, these elements are interwoven to achieve desired business goals. When managing process evolution and change, it is imperative to understand the dependencies and constraints among process elements. The use of high-level graphical models that encapsulate these dependencies and constraints is limited in practice. Therefore, we present a formal algebraic methodology to model the dependencies and constraints among process elements. This methodology is based on a relatively new branch of algebra named Kleene Algebra with Tests (KAT). This paper demonstrates the mapping of various dependencies and constraints of process elements to create a single compact KAT expression. Most importantly, in these KAT expressions, schema- and instance-level dependencies and constraints are segregated. The holistic and cohesive identification and modeling of both schema- and instance-level dependencies and constraints is the unique contribution of this research. 
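For readers unfamiliar with KAT: actions compose sequentially, tests are boolean predicates, and the program construct "if b then p else q" is the KAT term b·p + ¬b·q. A minimal functional encoding (invented here for illustration, not the authors' CAD model) might look like:

```python
def seq(p, q):
    """Sequential composition p . q of two actions."""
    return lambda state: q(p(state))

def if_then_else(b, p, q):
    """The KAT term b.p + (not b).q, read as a guarded choice."""
    return lambda state: p(state) if b(state) else q(state)

# Hypothetical process elements: an approval action guarded by a budget test,
# followed by a logging action.
approve = lambda s: {**s, "status": "approved"}
escalate = lambda s: {**s, "status": "escalated"}
log = lambda s: {**s, "logged": True}
in_budget = lambda s: s["amount"] <= 1000

process = seq(if_then_else(in_budget, approve, escalate), log)
process({"amount": 800})
# -> {'amount': 800, 'status': 'approved', 'logged': True}
```

KAT's equational laws then allow such expressions to be compared and simplified symbolically, which is what makes a single compact expression amenable to dependency and constraint analysis.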
Title: CLUSTERING ERP IMPLEMENTATION PROJECT ACTIVITIES: A FOUNDATION FOR PROJECT SIZE DEFINITION Author(s): Guy Janssens, Rob Kusters and Fred Heemstra Abstract: ERP implementation projects are large and risky projects for organizations, because they affect large parts of the implementing organization and lead to changes in the way an organization performs its tasks. For these reasons, ERP projects can be considered different from large information system implementations and should be treated as such. The costs of implementing these systems are hard to estimate. The size of an ERP project can be a useful measurement for predicting the effort needed to complete an ERP implementation project; unfortunately, such a measurement does not yet exist. Therefore, research is needed to find a set of variables that can define the size of an ERP implementation project. This paper describes the first step of this research. It presents 21 logical clusters of ERP implementation project activities, the result of a formal group session. The clusters are based on 405 ERP implementation project activities retrieved from the literature. These clusters can be used in further research to find variables for defining the size of an ERP implementation project. Title: ELEMENTS OF PERCEPTION REGARDING THE IMPLEMENTATION OF ERP SYSTEMS IN SWISS SMES Author(s): Catherine Equey and Emmanuel Fragnière Abstract: ERP systems are increasingly adopted in large companies, and it seems that this trend is being followed by small and medium-sized companies too. We have conducted a questionnaire-based survey to identify how Swiss SMEs perceive this phenomenon. The sample size is 687, of which 125 have actually implemented an ERP. Our main findings are twofold. First, SMEs that have not implemented an ERP invoke concerns (e.g. costs) that are typically not perceived as major problems by SMEs that went through an ERP implementation. Indeed, the latter companies generally acknowledge that ultimately the benefits (e.g. 
improved business information) significantly outweigh the costs and difficulties of implementation. Second, this survey brings new empirical knowledge on the implementation, utilization and benefits of ERP systems in Swiss SMEs. We learn, for instance, that the main difficulties encountered during the implementation phase relate to the “complexity” of these systems. Title: AUTHORITY AND ITS IMPLEMENTATION IN ENTERPRISE INFORMATION SYSTEMS Author(s): Alexei Sharpanskykh Abstract: The concept of power is inherent in human organizations of any type. As power relations have important consequences for organizational viability and productivity, they should be explicitly represented in enterprise information systems (EISs). Although organization theory provides a rich and very diverse theoretical basis on organizational power, most definitions of power-related concepts are too abstract, and often too vague and ambiguous, to be directly implemented in EISs. To create a bridge between informal organization theories and automated EISs, this paper proposes a formal logic-based specification language for representing power relations (in particular, authority relations). The use of the language is illustrated by considering the authority structures of organizations of different types. Moreover, the paper demonstrates how the formalized authority relations can be integrated into an EIS. Title: ERP NON-IMPLEMENTATION: A CASE STUDY OF A UK FURNITURE MANUFACTURER Author(s): Julie Dawson and Jonathan Owens Abstract: Enterprise Resource Planning (ERP) systems are pervasive information systems that have been fundamental in organisations for the past two decades. ERP systems may well count as the most important development in technology in the 1990s. There are many ERP success stories; equally, there are as many failure stories. This paper reviews the current literature on the Critical Success Factors (CSF) of ERP implementations. 
This review will be used in conjunction with the case of a UK furniture manufacturer’s (Company X) implementation of an ERP system. This paper considers the factors that resulted in the failure of the ERP at Company X in the initial stages of the implementation. Title: MEMORY AS AN ELEPHANT: HOW PRIOR EVENTS DETERMINE USER ATTITUDES IN ERP IMPLEMENTATION Author(s): Lene Pries-Heje Abstract: Assimilating a standard ERP system into an organization is difficult. User involvement seems to be the crux of the matter. However, even the best intentions for user involvement may come to nothing. A case study of a five-year ERP implementation process reveals that a main reason for this failure may be that the perception of usefulness in any given phase of the implementation is heavily dependent on preceding events – the process. A process model analysis identifies eight episodes and nine encounters in the case, showing that users' attitudes towards an ERP system shift between acceptance, equivocation, resistance and rejection depending on three things: (1) the dynamic between users and consultants, (2) the dynamic between different user groups, and (3) the understanding of technical, organizational and socio-technical options. Workshop on RFID Technology - Concepts, Applications, Challenges Title: RFID PRIVACY PROTECTION SCHEME FOR SECURE UBIQUITOUS COMPUTING Author(s): Hyun-Seok Kim, Jung-Hyun Oh and Jin-Young Choi Abstract: Radio frequency identification (RFID) is an emerging technology that brings enormous productivity benefits in applications where objects have to be identified automatically, as in mobile and ubiquitous computing. This paper presents issues concerning the security and privacy of RFID systems, which are heavily discussed in public, and introduces PPP (Privacy Protection Protocol), an RFID security protocol that serves as a proof of concept for authenticating an RFID tag to a reader device using the Vernam cipher and standard encryption as cryptographic primitives. 
To verify our protocol, we use a model checking methodology: we model the protocol with Casper (A Compiler for Security Protocols) and CSP (Communicating Sequential Processes), and then verify security properties such as secrecy and authentication using the FDR (Failures-Divergences Refinement) tool. Title: M2MGEN - AN APPLICATION GENERATOR FOR MACHINE TO MACHINE (M2M) APPLICATIONS Author(s): Chanan Glezer, Sudha Krishnamurthy, Kilian Schloeder, Omer Anson and Gil Tahan Abstract: This research conceptualizes an architecture (M2MGen) aimed at generating M2M applications as a service offered by a telecommunications network provider. M2MGen employs various plug-ins and knowledge constructs that can be configured to meet the requirements of clients from various industry sectors interested in deploying M2M applications. The architecture addresses the following aspects: Data-acquisition Management, Communication Management, Business-service Management and Control Management. Title: RFID TAG ANTENNAS DESIGNED BY FRACTAL FEATURES AND MANUFACTURED BY PRINTING TECHNOLOGY Author(s): Chi-Fang Huang, Jing-Qing Zhan and Tsung-Yu Hao Abstract: Based on fractal features, this work designs RFID tag antennas with the minimum area needed for a tag. The designed passive tag responds to the 915 MHz EM wave from the reader. The concept of complex conjugate matching is used in designing the antenna, and an electromagnetic simulation tool is used to support the design. The techniques for measuring the material parameters necessary in the design procedure are also described. Offset printing technology is employed to manufacture these tag antennas, which are assumed to be a kind of low-cost tag. The real performance of these tags is also shown in this work. Title: RFID DATA MANAGEMENT IN SUPPLY CHAINS: CHALLENGES, APPROACHES AND FURTHER RESEARCH REQUIREMENTS Author(s): Adam Melski, Lars Thoroe and Matthias Schumann Abstract: The implementation of RFID leads to improved visibility in supply chains. 
However, as a consequence of the increased data collection and enhanced data granularity, supply chain participants have to deal with new data management challenges. In this paper, we give an overview of the current challenges and solution proposals in the area of data collection and transformation, data organization and data security. We also identify further research requirements. Title: EXPLOITING RFID IN A CHALLENGING ENVIRONMENT: A COMMERCIAL CASE STUDY OF PLANT RENTAL AND INTERMITTENT-WIRELESS HAND-HELD PDA-BASED SCANNERS Author(s): Peter Dickman, Gareth P. McSorley, Jim Liddel, John Glen and Jim Green Abstract: This short industrial experience report presents an overview of the COMPANY PRODUCT software system. This is an RFID-based asset-tracking solution, exploiting robust hand-held PDA-based scanners with intermittent wireless connectivity to integrate operational activities with ERP/logistical information systems for the plant-rental sector. Unusual challenges in the operating environment and user community have been overcome using novel techniques and unique combinations of technology and methodology. The PRODUCT system is an exemplar of an innovative RFID application overcoming significant data management problems. Wireless security issues have been addressed and the system includes internal web-services interfaces that are now being extended for exploitation in operational and corporate oversight applications. The middleware platform enables integration of previously separate applications, with extension of business processes to the operational domain. The experiences reported have been gained in development, deployment and use in several countries and offer an insight into the effectiveness of an RFID-enabled infrastructure for improved business performance in a new commercial sector. 
Title: RFID USE IN HOSPITALS: A BUSINESS PERSPECTIVE Author(s): Tim Berezny and Khaled Hassanein Abstract: RFID technology is experiencing wide adoption in a number of industries, providing numerous unique benefits to each. However, the healthcare industry has been slow to adopt it, with some claiming cost constraints, satisfaction with barcodes or issues with standards as reasons. Through an analysis of the needs and concerns of the various stakeholders in healthcare organizations, and of the best practices of those who have successfully implemented RFID, healthcare institutions can be put in a good position to reap the benefits of this technology. This paper provides a background on RFID technology, facilitating and inhibiting factors for its adoption, its applications in healthcare at hospitals, and best practices for increasing user adoption and successful deployment within that environment. Title: IF OBJECTS COULD TALK: SEMANTIC-ENHANCED RADIO-FREQUENCY IDENTIFICATION Author(s): Michele Ruta, Tommaso Di Noia, Eugenio Di Sciascio and Floriano Scioscia Abstract: We propose to extend basic RFID usage by storing semantically annotated data within RFID tag memory, so that objects may actually "describe themselves" in a variety of scenarios. In particular, here we exploit our approach to carry out an advanced discovery process using annotations stored in RFID tags. A fully backward-compatible modification of the original RFID data exchange protocol is presented, integrated in a semantic-enabled Bluetooth resource discovery framework. Motivations and benefits of the approach are outlined in a u-commerce context. 
Title: PIPE-DEPCAS: A MIDDLEWARE SOLUTION FOR EPC-RFID DATA ACQUISITION SYSTEMS Author(s): Carlos Cerrada, Ismael Abad, José Antonio Cerrada and Vicente Dies Abstract: The striking growth of the market for RFID devices, such as EPC (Electronic Product Code) elements, in the most varied branches of industry and services, together with the new advantages of their intensive use that are discovered every day, has attracted the immediate attention of a multitude of hardware, software and turnkey solution providers. The disparity of potential applications in which these devices can be used means that the methods and means for their management and control differ from the classical ones known until now. In these types of applications, it is necessary to include some mechanism for the acquisition and management of the radio frequency information. In this article, a possible solution to this question is presented: a middleware solution based on a pipeline-and-filter architecture that brings together a suitable proportion of flexibility, computational power, connectivity and simplicity of use to achieve suitable solutions for the automation, management and control of goods and services. It has been developed in a more generic framework that we call DEPCAS (Data EPC Acquisition System), and it is denoted Pipe-DEPCAS. This solution contributes design best practices to the field while ensuring effective cost control through the use of open standards and well-known, simple and refined architectures. The paper describes the new DEPCAS and Pipe-DEPCAS concepts and developments. Title: USING RFID TECHNOLOGY FOR SUPPORTING DOCUMENT MANAGEMENT Author(s): Thierry Bodhuin, Rosa Preziosi and Maria Tortorella Abstract: The activity flow characterizing a business process may depend on the movement of a definite sequence of paper documents from one of an organization's offices to another. 
If document circulation is monitored and managed using RFID technology, additional data can be captured for the information system of an organization, enriching traditional document management systems. In addition, by extracting information from this data, an organization improves its knowledge regarding its activity flows. As a result, less time is needed to perform a business process, planning and decisional capability is more accurate, evaluation errors are decreased, and economic advantages are obtained. In this paper, an RFID design addressing this thesis is described. Title: UNOBTRUSIVE USER PROFILING: THE USE OF RFID TO CREATE A SMART WARDROBE Author(s): Maria Indrawan, Seng Loke, Sea Ling and Frida Samara Abstract: Many profiling systems rely on users interacting directly with the system to gather their raw data. We present a profiling system based on RFID. Unlike many profiling systems, the proposed system is unobtrusive because the user does not interact directly with a mobile device or computer system. We illustrate the idea by developing the Smart Wardrobe system, which creates a profile by observing the movement, in or out of the wardrobe, of clothing items that have been tagged with RFID. Based on these observations, the fashion profile of the user is generated. The prototype shows that a meaningful profile can be created and used to feed other applications, such as shopping assistants or recommender systems. Title: PERFORMANCE REVIEW OF RFID IN THE SUPPLY CHAIN Author(s): Paul Golding and Vanesa Tennant Abstract: Radio Frequency Identification (RFID) technology offers manufacturers, retailers, and suppliers the tools to efficiently collect, manage, distribute, and store information on inventory, business processes, and security controls. The growth of RFID in the supply chain has been spurred by mandated compliance from retailers such as Wal-Mart and Target. 
Despite the intrinsic advantages of the technology, factors such as lack of standardization, interoperability, cost and performance issues have slowed the pace of adoption. This paper provides a quantitative content analysis, using the existing literature, of the performance and reliability of RFID in the supply chain. The factors examined include tag location, tag orientation sensitivity, read range, and interference from metal and water. The reliability of an RFID system is a paramount factor that may determine the technology’s ultimate adoption and diffusion. The paper provides practitioners with insights into the issues affecting RFID implementation. Title: ANTECEDENTS TO RFID ADOPTION: PERSPECTIVES OF RETAIL SUPPLY CHAIN STAKEHOLDERS Author(s): Stephen Waters and Shams Rahman Abstract: This paper examines factors antecedent to the adoption of Radio Frequency Identification (RFID) from the perspectives of three key retail supply chain stakeholders (retailers, retailer suppliers, and technology providers) and explores the impact of RFID on retail supply chain performance. Drawing on extant interorganisational information system theory, this research identifies factors likely to impact the adoption of RFID. Four categories of factors are identified: technological, economic, organisational and external. In order to assess the relationship between the level of RFID adoption and retail supply chain performance, a conceptual framework has been developed employing the Analytical Hierarchy Process (AHP). Findings from the literature search are validated by the results of two Australian pilot studies and reactions from stakeholders to a mini survey. The study identifies several gaps and proposes that each stakeholder group must be aware of, and agree on, the salient factors that trigger an RFID adoption decision. 
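The AHP step mentioned above can be illustrated with the standard row-geometric-mean approximation of priority weights from a pairwise-comparison matrix. The matrix values below are invented for illustration, not taken from the study:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights: row geometric mean, then normalize.

    This is the common approximation of the principal eigenvector of a
    reciprocal pairwise-comparison matrix.
    """
    gm = [math.prod(row) ** (1 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical pairwise comparisons (Saaty 1-9 scale) of three factor
# categories: technological vs economic vs organisational.
m = [[1,     3,     5],
     [1 / 3, 1,     2],
     [1 / 5, 1 / 2, 1]]
weights = ahp_weights(m)  # normalized; 'technological' gets the largest weight
```

A full AHP analysis would also check the consistency ratio of the matrix before trusting the weights; the sketch covers only the prioritization step.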
Title: THE PRACTICALITY OF MULTI-TAG RFID SYSTEMS Author(s): Leonid Bolotnyy, Scott Krize and Gabriel Robins Abstract: Radio Frequency Identification (RFID) is an increasingly popular technology that uses radio signals for object identification. Successful object identification is the primary objective of RFID technology (after all, the last two letters of the acronym “RFID” stand for “identification”). Yet a recent major study by Wal-Mart has shown that the object detection probability can be as low as 66%. In this paper we address the fundamental issue of improving object detection by tagging objects with multiple tags, confirming for the first time the practicality and efficacy of previous work on multi-tag RFID systems. Using different configurations of commercial RFID equipment, we show significant improvements in object detection probability as the number of tags per object increases. We compare various combinations of multi-tags, readers, and antennas, and demonstrate that adding multi-tags to a system can improve object detection probabilities more dramatically than adding more readers. We also address issues such as tag orientation and variability, as well as the effect of multi-tags on anti-collision algorithms. Workshop on Human Resource Information Systems Title: STORYTELLING ON THE INTERNET TO DEVELOP WEAK-LINK NETWORKS: TWO CASE STUDIES "ARTISTORIA" Author(s): Bernard Fallery, Carole Marti and Gerald Brunetto Abstract: Does the Internet offer an opportunity to build new weak-link networks for sharing knowledge and developing innovation? This article describes research carried out in a French Regional Chamber of Trade and Crafts. Our work consisted of establishing two successive interactive portals collecting stories: about the experiences of craftsmen using ICT and about the experiences of collaborative spouses at work. 
Firstly, a study of the different knowledge management models allowed us to determine the characteristics necessary for the construction of such portals. Secondly, we present the tool that we developed and implemented for craftsmen, and we analyse the sharing and re-use processes through an initial qualitative study and a quantitative phase based on 48 cases. Thirdly, we present the second tool that we are going to develop for collaborative spouses at work: we present a semantic analysis of the stories already collected and make recommendations for the implementation of this second portal. Finally, we discuss the weak-link concept in order to understand the opportunity the Internet offers to build new weak-link networks for sharing knowledge and developing innovation. Title: E-LEARNING: WHICH EFFECTS ON SOCIALIZATION IN A WORK TEAM? Author(s): Ewan Oiry Abstract: E-learning tools appear to be an attractive way to help HR become a real business partner. The argument most often developed to support this linkage is that e-learning might offer a new way of dispensing training that would be more effective than the classic one. Drawing on several case studies carried out in a number of large French banks, this paper attempts to demonstrate that the effectiveness of this tool does not principally come from the fact that it proposes a new kind of “training model” but, more basically, from the cost savings that it permits. This paper also shows that e-learning tools have a major and unexpected effect on the functioning of work teams, on the positioning of team leaders and on their relationship with training. It therefore proposes to include this dimension in the reflection on the implementation, use and effectiveness of e-learning. 
Title: DOES THE "LOCAL UNIVERSE" IMPACT ON REPRESENTATIONS AND LEVELS OF UTILISATION OF AN HR INTRANET BY THE MIDDLE MANAGEMENT USER? Author(s): Karine Guiderdoni-Jourdain Abstract: The e-HR dynamic implies the development and integration of an HR intranet in order to achieve a better distribution of messages and to optimise the HR services delivered to HR clients, especially middle management. This kind of ICT investment is very expensive, and the question of its benefits is quickly raised by the board. That is why an assessment of the HR intranet through middle management’s positions and uses is necessary. We present in this paper the case of the HR department of a major aeronautical and space company that has developed an HR intranet. From 53 interviews of middle managers, a typology of actors emerges: the “super technician”, the “assembly line boss”, the “industrial artisan”, the “free electron” and the “hybrid”, each with a specific position on, and use of, this tool. This confirms our hypothesis on the effect of what we call the local universe on practices around the HR intranet tool. Title: DEVELOPING THE INITIAL FRAMEWORK OF HRIS Author(s): Hilkka Poutanen and Vesa Puhakka Abstract: The history of Human Resource Information Systems (HRIS) stretches back to the 1960s, when human resource (HR) activities started to increase in organizations. In the 1980s researchers and practitioners became interested in HRIS, and in the 1990s several studies, articles, user experiences, opinions and descriptions were published in journals, magazines and on the internet. Many different issues have arisen in these discussions. Some researchers have constructed models and definitions for HRIS. However, there is a lack of a framework that draws a coherent description out of the fragmented discipline of HRIS. In this workshop paper we introduce our initial framework to underline the importance of, and the need to consolidate, the knowledge of HRIS. 
It is based on literature and internet site reviews. The framework does not intend to cover the HRIS field as a whole, but to signal that it is time to construct frameworks to support and lead research and theory building in HRIS. Many research questions are stated in the referenced articles; now it is time to start finding the answers. Title: BUSINESS INTELLIGENCE AS A HRIS FOR ABSENTEEISM Author(s): Alysson Bolognesi Prado, Carmen E. Feitosa de Freitas and Thiago Ricardo Sbrici Abstract: This paper reports the use of Business Intelligence systems for the analysis of worker absenteeism in a large organization. The related concepts are explained, an actual application case is presented, and finally we discuss some contributions of decision support systems to the Human Resources area. Title: RE-DRESSING THE TECHNOLOGICAL FRAMES OF HUMAN RESOURCE INFORMATION SYSTEMS Author(s): Tanya Bondarouk Abstract: HRIS is reaching a more mature stage within organisational life. Much is assumed and expressed about its advantages; however, scientific proof of these advantages is scarce. No clarity exists about whether e-HRM contributes to the effectiveness of HRM processes. This paper contributes to the Enterprise Information Systems field in two ways. Firstly, findings-wise, we present results from a qualitative study on the contribution of e-HRM to HRM effectiveness. The data was collected at the Dutch Ministry of the Interior and Kingdom Relations. Results show that e-HRM applications have some impact on HRM practices; however, e-HRM is not perceived by the users as contributing to HRM effectiveness. Interviews with line managers and employees have revealed interesting differences in their needs and perceptions regarding the functionalities of e-HRM applications. Secondly, in this paper we integrate two approaches, namely a technology-oriented approach and an organizational process-oriented approach. 
An intersection of IT and HRM studies reveals new possibilities for both scientific and practical implications. Title: DO CURRENT HRIS MEET THE REQUIREMENTS OF HRM?: AN EMPIRICAL EVALUATION USING LOGISTIC REGRESSION AND NEURAL NETWORK ANALYSIS Author(s): Stefan Strohmeier and Rüdiger Kabst Abstract: Our paper examines whether major features of current HRIS actually meet the requirements of HRM. To do so, we initially identify major features of current HRIS, discuss why these features may or may not meet the requirements of HRM, and derive corresponding hypotheses. Subsequently, we employ an international large-scale survey to test these hypotheses by combining logistic regression and neural network analysis. Our results draw a rather positive picture of HRIS: if equipped with the right functionality and delivery features, current HRIS are able to meet the requirements of HRM. Title: STUDYING HUMAN RESOURCE INFORMATION SYSTEMS IMPLEMENTATION USING ADAPTIVE STRUCTURATION THEORY: THE CASE OF AN HRIS IMPLEMENTATION AT DOW CHEMICAL COMPANY Author(s): Huub Ruël and Charles Che Chiemeke Abstract: Human Resource Information Systems (HRIS) research lacks theoretical depth and richness. For that reason this paper applies a theory developed by DeSanctis & Poole (1994) for studying information systems implementation, namely Adaptive Structuration Theory (AST), to HRIS implementation. AST is based on Structuration Theory, a theory from sociology, and assumes that information systems and organizations are interrelated: they influence each other mutually. In this paper AST is applied to the HRIS implementation at Dow Chemicals. The case shows how organizational structures at Dow limit and support structures within the HRIS, and in this way redirect how the HRIS and the organization ‘get along’ together and evolve. Title: STUDYING HUMAN RESOURCE INFORMATION SYSTEMS FROM AN INTEGRATED PERSPECTIVE: A RESEARCH AGENDA Author(s): Rodrigo Magalhães and Huub Ruël Abstract: This paper aims at setting an agenda for HRIS research from an integrative perspective. This perspective assumes that organizations and information systems cannot be separated. By first elaborating on this integrated perspective in terms of a web of causes and consequences, a list of phenomena is presented. Subsequently, HRIS research to date is summarized, resulting in the observation that it needs to be broadened and deepened. In section three, we combine the list of phenomena with how HRIS are being implemented and used, mainly in large global companies. Per phenomenon, we raise a number of critical questions for HRIS research to focus on and provide answers for. Title: INNOVATOR’S STRATEGY ON THE MARKET Author(s): Olga Girstlová Abstract: INTERNAL KNOWLEDGE NETWORK: the internal corporate network; creating knowledge within the internal corporate network; knowledge sharing within the internal corporate network; knowledge accounts of the internal corporate network. CREATING KNOWLEDGE – EVALUATION TOOLS: the personal account of an individual; the knowledge account of an individual; regular assessment interviews. Title: IMPLEMENTATION OF HRMS IN INDIAN BANKS Author(s): Hemalatha Diwakar and Sushama Chaudhari Abstract: The public sector banks (PSBs) in India, in which the government is the majority shareholder, are adopting technology-based banking solutions that are well aligned with their business objectives in order to achieve business success. In the process, the banks have realized that for technology to be used efficiently, an HRMS must be in place. In this paper, we show why an HRMS is crucial for a bank’s progress by presenting the currently existing conventional HR practices and their drawbacks using UML diagrams. 
A case study of a PSB that has implemented an HRMS successfully is described, and the benefit the bank has derived is shown by a time-manpower comparison matrix for various HR functions. The issues that PSBs should keep in mind and the strategy that they should follow for a successful HRMS implementation are presented. Title: HR PORTALS AND HR METRICS FOR INTELLECTUAL CAPITAL DEVELOPMENT Author(s): Dino Ruta Abstract: Intellectual capital is nowadays considered a key issue when analysing the critical determinants of company performance. It represents one of the main and most difficult to imitate sources of sustainable competitive advantage, and it helps organizations that can create, maintain, measure and leverage it to generate superior performance. Companies, according to their strategic needs, generate a unique mix of human, social and organizational capital and exploit it to achieve their strategy. This paper explores how information technology, and in particular HR portals, can help an organization create a superior competitive advantage by leveraging its intangible assets coherently with its strategy. In addition, the paper provides a deep analysis of the benefits of HR portal implementation, focusing on their contribution to measuring and aligning intellectual capital with company strategy. Title: HR INTRANETS IN FRANCE: A LONGITUDINAL STUDY Author(s): Veronique Guilloux, Michel Kalika and Florence Laval Abstract: The article presents the results of an exploratory, longitudinal study based on a sample of French firms. The survey identifies the diffusion of HRM practices as well as their development stages. The investigation aims at answering the following questions: How do intranet networks accompany HR competence development? How can they support the HR function? What are the best French practices linking ICT and skills management?

Page Updated on 27-08-2007