o A New Approach to Water Flow Algorithm for Text Line Segmentation By www.jucs.org Published On :: 2011-04-07T14:38:30+02:00 This paper proposes a new approach to the water flow algorithm for text line segmentation. The original method assumes hypothetical water flowing under a few specified angles across the document image frame, from left to right and vice versa. As a result, unwetted image regions are extracted; these regions are of major importance for text line segmentation. The proposed modifications extend the values of the water flow angle and enlarge the function of the unwetted image regions. Results are encouraging, as they improve text line segmentation, the most challenging stage of document image processing.
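The flow-and-block idea lends itself to a compact sketch. The following is a minimal single-pass illustration on a small binary raster, not the authors' algorithm: water enters from the left edge, ink pixels block it, and background pixels left dry form the unwetted regions (the function name and the `angle_rows` parameter, a crude stand-in for the flow angle, are assumptions for illustration):

```python
def unwetted_regions(img, angle_rows=1):
    """Simplified water-flow pass (left to right).

    img: 2D list, 1 = ink, 0 = background.
    A background pixel becomes 'wet' if any pixel in the previous
    column within +/- angle_rows rows is wet; ink pixels block flow.
    Returns a mask of dry (unwetted) background pixels.
    """
    h, w = len(img), len(img[0])
    wet = [[False] * w for _ in range(h)]
    for y in range(h):                      # left edge: water enters everywhere
        wet[y][0] = img[y][0] == 0
    for x in range(1, w):
        for y in range(h):
            if img[y][x] == 1:
                continue                    # ink blocks the flow
            lo, hi = max(0, y - angle_rows), min(h - 1, y + angle_rows)
            wet[y][x] = any(wet[yy][x - 1] for yy in range(lo, hi + 1))
    # dry background pixels are the 'unwetted' regions of interest
    return [[(img[y][x] == 0 and not wet[y][x]) for x in range(w)]
            for y in range(h)]
```

In the full method, such dry stripes left behind ink at several flow angles delineate the text lines.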
o An OCR Free Method for Word Spotting in Printed Documents: the Evaluation of Different Feature Sets By www.jucs.org Published On :: 2011-04-07T14:38:36+02:00 An OCR-free word spotting method is developed and evaluated under a strong experimental protocol, with different feature sets evaluated under the same experimental conditions. In addition, a tuning step in the document segmentation stage is proposed that significantly reduces processing time. For this purpose, a complete OCR-free method for word spotting in printed documents was implemented, and a document database containing document images and their corresponding ground-truth text files was created. A strong experimental protocol based on 800 document images allows us to compare the results of the three feature sets used to represent the word image.
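As an illustration of an OCR-free word representation, here is a hypothetical zoning feature (ink density per vertical zone) with a Euclidean matching distance. The abstract does not specify the three feature sets, so both functions are stand-ins:

```python
def zoning_features(word_img, zones=3):
    """Zoning sketch: split the binary word image into vertical zones and
    use each zone's ink density as one feature (an OCR-free representation)."""
    h, w = len(word_img), len(word_img[0])
    feats = []
    for z in range(zones):
        x0, x1 = z * w // zones, (z + 1) * w // zones
        ink = sum(word_img[y][x] for y in range(h) for x in range(x0, x1))
        area = h * (x1 - x0)
        feats.append(ink / area if area else 0.0)
    return feats

def distance(a, b):
    """Euclidean distance between feature vectors; a query word would be
    'spotted' wherever the distance falls below a threshold."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```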
o The Use of Latent Semantic Indexing to Mitigate OCR Effects of Related Document Images By www.jucs.org Published On :: 2011-04-07T14:38:42+02:00 Due to the widespread, multipurpose use of document images and the current availability of a large number of document image repositories, robust information retrieval mechanisms and systems are increasingly in demand. This paper presents an approach to support the automatic generation of relationships among document images by exploiting Latent Semantic Indexing (LSI) and Optical Character Recognition (OCR). We developed the LinkDI (Linking of Document Images) service, which extracts and indexes document image content, computes its latent semantics, and defines relationships among images as hyperlinks. LinkDI was tested on document image repositories, and its performance was evaluated by comparing the quality of the relationships created among textual documents with that of the relationships created among their respective document images. Considering the same document images, we ran further experiments to compare the performance of LinkDI with and without the LSI technique. Experimental results showed that LSI can mitigate the effects of common OCR misrecognition, which reinforces the feasibility of LinkDI even for highly degraded OCR output.
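The LSI step can be sketched as a truncated SVD of the term-by-document matrix built from the OCR output; documents whose latent vectors are similar would then be linked as hyperlinks. A minimal sketch (the function name and the choice of k are illustrative, not taken from LinkDI):

```python
import numpy as np

def lsi_doc_similarity(term_doc, k=2):
    """LSI sketch: truncated SVD of the term-by-document matrix, then
    cosine similarity between the documents' k-dimensional latent vectors.
    Words mangled by OCR contribute little to the dominant singular
    directions, which is why LSI dampens misrecognition noise."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    docs = (np.diag(s[:k]) @ Vt[:k]).T          # one row per document
    norms = np.linalg.norm(docs, axis=1, keepdims=True)
    docs = docs / np.clip(norms, 1e-12, None)
    return docs @ docs.T                        # cosine similarity matrix
```

Pairs whose similarity exceeds a threshold would become hyperlinks between the corresponding images.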
o Fusion of Complementary Online and Offline Strategies for Recognition of Handwritten Kannada Characters By www.jucs.org Published On :: 2011-04-07T14:38:48+02:00 This work describes an online handwritten character recognition system working in combination with an offline recognition system. The online input data is also converted into an offline image and recognized in parallel by both online and offline strategies. Features are proposed for offline recognition, and a disambiguation step is employed in the offline system for samples for which the confidence level of the classifier is low. The outputs are then combined probabilistically, resulting in a classifier that outperforms both individual systems. Experiments are performed for Kannada, a South Indian language, over a database of 295 classes. The accuracy of the online recognizer improves by 11% when the combination with the offline system is used.
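The abstract does not give the exact combination rule, so as one plausible sketch: a weighted product of the two classifiers' class posteriors, renormalized (function name and weighting are assumptions):

```python
def fuse_posteriors(p_online, p_offline, w=0.5):
    """Combine two classifiers' class posteriors with a weighted product
    rule and renormalize (w is the online classifier's weight)."""
    fused = {c: (p_online[c] ** w) * (p_offline[c] ** (1 - w))
             for c in p_online}
    z = sum(fused.values())
    return {c: v / z for c, v in fused.items()}
```

The product rule rewards classes on which both recognizers agree, which is how a fused classifier can outperform either system alone.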
o Color Image Restoration Using Neural Network Model By www.jucs.org Published On :: 2011-04-07T14:38:54+02:00 This paper discusses a neural network learning approach for color image restoration and presents one possible solution for restoring images. Here, neural network weights are treated as regularization parameter values instead of being specified explicitly. The weights are modified during training through the supply of training-set data. The desired response of the network is the estimated value of the current pixel. This estimated value is used to modify the network weights so that the restored value produced by the network for a pixel is as close as possible to the desired response. One advantage of the proposed approach is that, once the neural network is trained, images can be restored without prior information about the noise or blurring model with which the image was corrupted.
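As a loose illustration of learning restoration weights from training data, here is a plain LMS (least mean squares) update on a flattened 3x3 neighborhood; this is a stand-in for the paper's network, and both function names are hypothetical:

```python
def train_restorer(noisy_patches, clean_pixels, lr=0.05, epochs=200):
    """LMS sketch: learn weights mapping a noisy 3x3 neighborhood
    (flattened, length 9) to the clean centre pixel. The learned
    weights play the role of the regularization parameters."""
    w = [0.0] * 9
    for _ in range(epochs):
        for x, y in zip(noisy_patches, clean_pixels):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = y - pred                    # desired response minus estimate
            for i in range(9):
                w[i] += lr * err * x[i]       # move weights toward the target
    return w

def restore(w, patch):
    """Apply the trained weights to a new noisy neighborhood."""
    return sum(wi * xi for wi, xi in zip(w, patch))
```

Once trained, `restore` needs no explicit noise or blur model, mirroring the advantage claimed in the abstract.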
o Visualizing and Analyzing the Quality of XML Documents By www.jucs.org Published On :: 2011-04-07T14:38:59+02:00 In this paper we introduce eXVisXML, a visual tool for exploring documents annotated with the mark-up language XML, in order to easily perform tasks over them such as knowledge extraction or document engineering. eXVisXML was designed mainly for two kinds of users. The first are those who want to analyze an annotated document to explore the information it contains; for them, a visual inspection tool can be of great help, and a slicing functionality can be an effective complement. The other target group is composed of document engineers who might be interested in assessing the quality of the annotation created. This can be achieved through the measurement of parameters that allow the elements and attributes of the DTD/Schema to be compared against those effectively used in the document instances. Both functionalities, and the way they were designed and implemented, are discussed throughout the paper.
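The DTD/Schema-versus-instance comparison can be sketched as counting which declared elements are actually used. A minimal sketch with Python's standard library (the declared names are passed as a plain set, a simplification of a real DTD/Schema, and the function name is hypothetical):

```python
import xml.etree.ElementTree as ET
from collections import Counter

def element_usage(xml_text, declared):
    """Compare the elements declared in a DTD/Schema (given here as a
    plain set of names) against those actually used in an instance."""
    root = ET.fromstring(xml_text)
    used = Counter(el.tag for el in root.iter())   # includes the root
    return {
        "unused_declared": sorted(set(declared) - set(used)),
        "undeclared_used": sorted(set(used) - set(declared)),
        "counts": dict(used),
    }
```

Ratios derived from these sets (e.g. the fraction of declared elements never instantiated) are the kind of quality parameter such a tool could visualize.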
o Nabuco - Two Decades of Document Processing in Latin America By www.jucs.org Published On :: 2011-04-07T14:39:03+02:00 This paper reports on the Joaquim Nabuco Project, a pioneering work in Latin America on document digitalization, enhancement, compression, indexing, retrieval and network transmission of historical document images.
o Choice of Classifiers in Hierarchical Recognition of Online Handwritten Kannada and Tamil Aksharas By www.jucs.org Published On :: 2011-04-07T14:39:07+02:00 In this paper, we propose a novel technique for fast and accurate recognition of online handwritten Kannada and Tamil characters. Based on the primary classifier's output and prior knowledge, the best classifier is chosen from a set of three classifiers for second-stage classification. Prior knowledge is obtained through analysis of the confusion matrix of the primary classifier, which helps identify the multiple sets of confused characters. Further studies were carried out to check the performance of the secondary classifiers in disambiguating among the confusion sets. Using this technique, we achieved an average accuracy of 92.6% for Kannada characters on the MILE lab dataset and 90.2% for Tamil characters on the HP Labs dataset.
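The confusion-matrix analysis that yields the confused character sets can be sketched as scanning for large off-diagonal rates (function name, labels and threshold are illustrative, not from the paper):

```python
def confused_pairs(conf_matrix, labels, threshold=0.1):
    """Find class pairs the primary classifier mixes up: off-diagonal
    rates above `threshold` (rows = true class, columns = prediction)."""
    pairs = []
    for i, row in enumerate(conf_matrix):
        total = sum(row) or 1
        for j, n in enumerate(row):
            if i != j and n / total > threshold:
                pairs.append((labels[i], labels[j], n / total))
    return pairs
```

Each pair found this way marks a confusion set for which a dedicated secondary classifier would be chosen.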
o Developing a Mobile Collaborative Tool for Business Continuity Management By www.jucs.org Published On :: 2011-07-08T12:29:58+02:00 We describe the design of a mobile collaborative tool that helps teams manage critical computing infrastructures in organizations, a task usually designated Business Continuity Management. The design process started with a requirements-definition phase based on interviews with professional teams. The elicited requirements highlight four main concerns: collaboration support, knowledge management, team performance, and situation awareness. Based on these concerns, we developed a data model and tool supporting the collaborative update of Situation Matrixes. The matrixes aim to provide an integrated view of the operational and contextual conditions that frame critical events and inform the operators' responses to them. The paper reports results from our preliminary experiments with Situation Matrixes.
o The Iceberg Effect: Behind the User Interface of Mobile Collaborative Systems By www.jucs.org Published On :: 2011-07-08T12:29:59+02:00 Advances in mobile technologies are opening new possibilities to support collaborative activities through mobile devices. Unfortunately, mobile collaborative systems have been difficult to conceive, design and implement. These difficulties are caused in part by their unclear requirements and by developers' lack of experience with this type of system. However, several requirements involved in the collaborative back-end of these products are recurrent and should be considered in every development. This paper introduces a characterization of mobile collaboration and a framework specifying a list of general requirements to be considered during the conception and design of a system in order to increase its probability of success. This framework was used in the development of two mobile collaborative systems, providing developers with a base of back-end requirements to aid system design and implementation. The systems were positively evaluated by their users.
o An Empirical Study on Human and Information Technology Aspects in Collaborative Enterprise Networks By www.jucs.org Published On :: 2011-07-08T12:30:00+02:00 Small and Medium Enterprises (SMEs) face new challenges in the global market as customers require more complete and flexible solutions and continue to drastically reduce the number of suppliers. SMEs are trying to address these challenges through cooperation within collaborative enterprise networks (CENs). Human aspects constitute a fundamental issue in these networks, as it is people, rather than organizations or Information Technology (IT) systems, who cooperate. Since there is a lack of empirical studies on the role of human factors in IT-supported collaborative enterprise networks, this paper addresses the major human aspects encountered in this type of organization. These human aspects include trust issues, knowledge and know-how sharing, coordination and planning activities, and communication and mutual understanding, as well as their influence on the business processes of CENs supported by IT tools. This paper empirically shows that these aspects constitute key factors for the success or failure of CENs. Two case studies performed on two different CENs in Switzerland are presented, and the roles of human factors are identified with respect to the IT support systems. Results show that specific human factors, namely trust, communication, and mutual understanding, have to be well addressed in order to design and develop adequate software solutions for CENs.
o Managing Mechanisms for Collaborative New-Product Development in the Ceramic Tile Design Chain By www.jucs.org Published On :: 2011-07-08T12:30:02+02:00 This paper focuses on improving the management of New-Product Development (NPD) processes within the particular context of a cluster of enterprises that cooperate through a network of intra- and inter-firm relations. Ceramic tile design chains have certain singularities that condition the NPD process, such as the lack of a strong hierarchy, fashion pressure or the existence of different origins for NPD projects. We have studied these particular circumstances in order to tailor Product Life-cycle Management (PLM) tools and some other management mechanisms to fit suitable sectoral reference models. Special emphasis will be placed on PLM templates for structuring and standardizing projects, and also on the roles involved in the process.
o A Petri Nets based Approach to Specify Individual and Collaborative Interaction in 3D Virtual Environments By www.jucs.org Published On :: 2011-07-08T12:30:03+02:00 This work describes a methodology that supports the design and implementation of software modules representing the phases of individual and collaborative three-dimensional interaction. The methodology integrates three modeling approaches: Petri Nets, a collaborative manipulation model based on a taxonomy of single-user interaction techniques, and object-oriented programming concepts. The combination of these elements allows interaction tasks to be described, with the sequence of interaction processes controlled by Petri Nets and the corresponding code generated automatically. By integrating these approaches, the present work addresses not only the entire development cycle of both individual and collaborative three-dimensional interaction, but also the reuse of developed interaction blocks in new virtual-environment projects.
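A minimal place/transition Petri net of the kind that could sequence interaction phases (selection, manipulation, release) can be sketched as follows; this is a generic net, not the paper's generated modules:

```python
class PetriNet:
    """Minimal place/transition net: a transition is enabled when every
    input place holds at least one token; firing moves the tokens."""

    def __init__(self, marking):
        self.marking = dict(marking)          # place -> token count
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

The marking enforces the legal ordering of phases: an object must be selected before it can be released, exactly the kind of sequencing constraint the methodology expresses with nets.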
o LaSca: a Large Scale Group Decision Support System By www.jucs.org Published On :: 2011-07-08T12:30:04+02:00 Decision-making involves choosing among one or more alternatives to achieve one or more goals. To support this process, there are decision support systems that employ different approaches, whether supporting groups or not. Generally, however, these systems do not offer great flexibility; their users have to follow pre-established decision methods. After reviewing some decision-making processes, this paper describes LaSca (from Large Scale), a system to support decisions in large-scale groups. Besides allowing the benefits of deciding in large groups to be effectively achieved through proper structuring of the group, the system also allows its users to define for themselves how this structuring will happen, whether or not it is based on existing theories on the subject. So, in addition to facilitating the decision-making process, LaSca also allows its users to decide how to decide.
o Let Me Tell You a Story - On How to Build Process Models By www.jucs.org Published On :: 2011-07-08T12:30:05+02:00 Process modeling has been a very active research topic over the last decades. One of its main issues is the externalization of knowledge and its acquisition for further use, as this remains deeply related to the quality of the resulting process models. This paper presents a method and a graphical supporting tool for process elicitation and modeling, combining the Group Storytelling technique with advances in Text Mining and Natural Language Processing. The implemented tool extends its previous versions with several functionalities that facilitate group storytelling by the users and improve the process models acquired from the stories.
o Security and Privacy Preservation for Mobile E-Learning via Digital Identity Attributes By www.jucs.org Published On :: 2011-07-08T12:30:07+02:00 This paper systematically discusses the security and privacy concerns of e-learning systems. A five-layer architecture for e-learning systems is proposed, and the security and privacy concerns are addressed for each layer. The paper further examines the relationship among security and privacy policy, the available security and privacy technology, and the degree of e-learning privacy and security. Digital identity attributes are introduced to e-learning portable devices to enhance the security and privacy of e-learning systems. This provides significant contributions to the e-learning security and privacy research community and should generate further research interest.
o Realising the Potential of Web 2.0 for Collaborative Learning Using Affordances By www.jucs.org Published On :: 2011-07-08T12:30:08+02:00 With the emergence of the Web 2.0 phenomenon, technology-assisted social networking has become the norm. The potential of social software for collaborative learning purposes is clear, but as yet there is little evidence that the benefits are being realised. In this paper we consider Information and Communication Technology students' attitudes to collaboration and, via two case studies, the extent to which they exploit wikis for group collaboration. Even when directed to use a particular wiki designed for the type of project they were involved with, we found that groups utilized the wiki in different ways according to the affordances they ascribed to it. We propose that integrating activity theory with an affordances perspective may lead to improved technology-assisted, specifically Web 2.0, collaboration.
o Enhancement of Collaborative Learning Activities using Portable Devices in the Classroom By www.jucs.org Published On :: 2011-07-08T12:30:09+02:00 Computer Supported Collaborative Learning could greatly impact education around the world if the proper Collaborative Learning tools are set in place. In this paper we describe the design of a collaborative learning activity for teaching Chemistry to Chilean students, together with a PDA-based software tool that allows teachers to create workgroups in their classrooms to work on the activity. The developed software tool has three modules: the first, for teachers, runs on a PC and lets them create the required pedagogical material; the second is a PDA module that lets students execute the activity; and the third allows the teacher to set up workgroups and monitor each one during the activity.
o CSCWD: Applications and Challenges By www.jucs.org Published On :: 2011-07-08T12:30:10+02:00
o Coordinated System for Real Time Muscle Deformation during Locomotion By www.jucs.org Published On :: 2011-04-24T11:14:51+02:00 This paper presents a system that simulates, in real time, the volumetric deformation of muscles during human locomotion. We propose a two-layered motion model. The requirements of realism and real-time computation lead to a hybrid locomotion system that uses a skeleton as the first layer. The muscles, represented by an anatomical surface model, constitute the second layer, whose deformations are simulated with a finite element method (FEM). The FEM subsystem is fed with the torques and forces obtained from the locomotion system through a line-of-action model, and takes into account the geometry and material properties of the muscles. High-level parameters (such as height, weight, physical constitution, step frequency, step length or speed) allow the individual and the locomotion, and therefore the deformation of the person's muscles, to be customized.
o The Architectural Design of a System for Interpreting Multilingual Web Documents in E-speranto By www.jucs.org Published On :: 2011-04-24T11:14:58+02:00 E-speranto is a formal language for generating multilingual texts on the World Wide Web; it is currently still under development. The vocabulary and grammar rules of E-speranto are based on Esperanto, while its syntax is based on XML (eXtensible Markup Language). The latter enables the integration of documents generated in E-speranto into web pages. When a user accesses a web page generated in E-speranto, the interpreter renders the document in a chosen natural language, enabling the user to read the document in any language supported by the interpreter. The basic parts of the E-speranto interpreting system are the interpreters and the information resources, in line with the principle of separating the interpretation process from the data itself. The architecture of the E-speranto interpreter takes advantage of the resemblance between languages belonging to the same linguistic group, which lowers the production cost of the interpreters for that group. We designed a proof-of-concept implementation for interpreting E-speranto in three Slavic languages: Slovenian, Serbian and Russian. These languages share many common features in addition to having a similar syntax and vocabulary. The content of the information resources (vocabulary, lexicon) was limited to the extent needed to interpret the test documents. The testing confirmed the applicability of our concept and also indicated guidelines for the future development of both the interpreters and E-speranto itself.
o The Synthesis of LSE Classifiers: From Representation to Evaluation By www.jucs.org Published On :: 2011-04-24T11:15:08+02:00 This work presents a first approach to the synthesis of Spanish Sign Language (LSE) Classifier Constructions (CCs). All current attempts at the automatic synthesis of LSE simply create the animations corresponding to sequences of signs. This work, however, includes the synthesis of the LSE classification phenomena, defining elements more complex than simple signs, such as Classifier Predicates, Inflective CCs and Affixal classifiers. The intelligibility of our synthetic messages was evaluated by native LSE signers, who reported a recognition rate of 93% correct answers.
o On Compound Purposes and Compound Reasons for Enabling Privacy By www.jucs.org Published On :: 2011-04-24T11:15:18+02:00 This paper puts forward a verification method for compound purposes and compound reasons to be used during purpose limitation. When it is absolutely necessary to collect privacy-related information, it is essential that privacy-enhancing technologies (PETs) protect access to data, which is generally accomplished by using the concept of purposes bound to data. Compound purposes and reasons are an enhancement of the purposes used during purpose limitation and binding, and are more expressive than purposes in their general form. Data users specify their access needs by making use of compound reasons, which are defined in terms of (compound) purposes. Purposes are organised in a lattice, with purposes near the greatest lower bound (GLB) considered weak (less specific) and purposes near the least upper bound (LUB) considered strong (more specific). Access is granted based on the verification of the statement of intent (from the data user) against the compound purpose bound to the data; however, because purposes are in a lattice, the data user is not limited to a statement of intent that matches the purposes bound to the data exactly: the statement can be a true reflection of their intent with the data. Hence, the verification of compound reasons against compound purposes cannot be accomplished by currently published verification algorithms. Before the verification method is presented, compound purposes and reasons, the structures used to represent them, and the operators used to define compounds are introduced. Finally, some thoughts on implementation are provided.
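The lattice-based check can be sketched as reachability from the bound purpose towards more specific purposes, with compound reasons as nested and/or terms. This is an illustration of the idea only, not the paper's verification algorithm (which also handles compound purposes on the data side); all names are hypothetical:

```python
def at_least_as_specific(stated, bound, parents):
    """True if `stated` equals `bound` or lies above it in the purpose
    lattice (`parents` maps a purpose to its more-specific successors)."""
    frontier = {bound}
    while frontier:
        if stated in frontier:
            return True
        frontier = {s for p in frontier for s in parents.get(p, ())}
    return False

def verify(reason, bound, parents):
    """Verify a compound reason, given as nested ('and', ...) / ('or', ...)
    tuples over purpose names, against the purpose bound to the data."""
    if isinstance(reason, str):
        return at_least_as_specific(reason, bound, parents)
    op, *parts = reason
    combine = all if op == "and" else any
    return combine(verify(p, bound, parents) for p in parts)
```

A stated intent of "cancer-research" would thus satisfy data bound to the weaker purpose "research", without the two matching exactly.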
o Early Results of Experiments with Responsive Open Learning Environments By www.jucs.org Published On :: 2011-04-24T11:15:28+02:00 Responsive open learning environments (ROLEs) are the next generation of personal learning environments (PLEs). While PLEs rely on the simple aggregation of existing content and services, mainly using Web 2.0 technologies, ROLEs are transforming lifelong learning by introducing a new infrastructure on a global scale while dealing with existing learning management systems, institutions, and technologies. The requirements engineering process in highly populated test-beds is as important as the technology development. In this paper, we describe first experiences deploying ROLEs at two higher-learning institutions in very different cultural settings: Shanghai Jiao Tong University in China and the “Center for Learning and Knowledge Management and Department of Information Management in Mechanical Engineering” (ZLW/IMA) at RWTH Aachen University have exposed ROLEs to their students in already established courses. The results demonstrated the readiness of the technology for large-scale trials and its benefits for the students, leading to new insights into the design of ROLEs for more informal learning situations as well.
o Pragmatic Knowledge Services By www.jucs.org Published On :: 2011-04-24T11:15:39+02:00 Knowledge, innovations and their implementation in effective practices are essential for development in all fields of societal action, e.g. policy, business, health, education, and everyday life. However, managing the interrelations between knowledge, innovation and practice is complicated, and facilitation by suitable knowledge services is needed. This paper explores the theory of converging knowledge, innovation, and practice, discusses some advances in information systems development, and identifies general requirements for pragmatic knowledge services. A trialogical approach to knowledge creation and learning is adopted as a viable theoretical basis, and three examples of novel knowledge services, Opasnet, Innovillage, and the Knowledge Practices Environment (KPE), are presented. Eventually, it is concluded that pragmatic knowledge services, as hybrid systems of information technology and its users, are not only means for the creation of practical knowledge, but also vehicles of a cultural change from individualistic perceptions of knowledge work towards mediated collaboration.
o Rule of Law on the Go: New Developments of Mobile Governance By www.jucs.org Published On :: 2011-04-24T11:15:48+02:00 This paper offers an overview of the emerging domain of mobile governance as an offspring of the broader landscape of e-governance. Mobile governance initiatives have been deployed worldwide, in parallel with the development of crowdsourced, open-source software applications that facilitate the collection, aggregation, and dissemination of information and data coming from different sources: citizens, organizations, public bodies, etc. Ultimately, mobile governance can be seen as a tool to promote the rule of law from a decentralized, distributed, and bottom-up perspective.
o IDEA: A Framework for a Knowledge-based Enterprise 2.0 By www.jucs.org Published On :: 2011-07-08T12:31:41+02:00 This paper looks at the convergence of knowledge management and Enterprise 2.0 and describes the possibilities for an over-arching exchange and transfer of knowledge in Enterprise 2.0. This is underlined by the concrete example of T-Systems Multimedia Solutions (MMS) and the establishment of its new enterprise division "IG eHealth", typified by the decentralised development of common ideas, collaboration, and the assistance in performing responsibilities that Enterprise 2.0 tools provide. Taking this archetypal example, and the derived abstraction of the problem of knowledge-worker collaboration, as the basis, a regulatory framework is developed for knowledge management to serve as a template for the systemisation and definition of specific Enterprise 2.0 activities. The paper concludes by stating success factors and supporting Enterprise 2.0 activities that facilitate the establishment of a practical knowledge management system for the optimisation of knowledge transfer.
o Enterprise Microblogging for Advanced Knowledge Sharing: The References@BT Case Study By www.jucs.org Published On :: 2011-07-08T12:31:42+02:00 Siemens is well known for ambitious efforts in knowledge management, providing a series of innovative tools and applications within the intranet. References@BT is such a web-based application, with currently more than 7,300 registered users from more than 70 countries. Its goal is to support the sharing of knowledge, experiences and best practices globally within the Building Technologies division. Launched in 2005, References@BT features structured knowledge references, discussion forums, and a basic social networking service. In response to user demand, a new microblogging service, tightly integrated into References@BT, was implemented in March 2009. More than 500 authors have created around 2,600 microblog postings since then. Following a brief introduction to the community platform References@BT, we comprehensively describe the motivation, experiences and advantages for an organization in providing internal microblogging services. We provide detailed microblog usage statistics, analyzing the top ten users by postings and followers as well as the top ten topics. In doing so, we aim to shed light on microblogging usage and adoption within a globally distributed organization.
o Leveraging Web 2.0 in New Product Development: Lessons Learned from a Cross-company Study By www.jucs.org Published On :: 2011-07-08T12:31:43+02:00 The paper explores the application of Web 2.0 technologies to support product development efforts in a global, virtual and cross-functional setting. It analyses the dichotomy between the prevailing hierarchical structure of CAD/PLM/PDM systems and the principles of the Social Web in the light of emerging product development trends. Further, it introduces the concept of Engineering 2.0, intended as a more bottom-up and lightweight knowledge-sharing approach to support early-stage design decisions within virtual and cross-functional product development teams. The lessons learned from a cross-company study show how to further develop blogs, wikis, forums and tags for the benefit of new product development teams, highlighting opportunities, challenges and no-go areas.
o On the Construction of Efficiently Navigable Tag Clouds Using Knowledge from Structured Web Content By www.jucs.org Published On :: 2011-07-08T12:31:45+02:00 In this paper we present an approach to improving the navigability of hierarchically structured Web content. The approach is based on the integration of a tagging module and the adoption of tag clouds as a navigational aid for such content. The main idea is to apply tagging to better highlight cross-references between information items across the hierarchy. Although in principle tag clouds have the potential to support efficient navigation in tagging systems, recent research has identified a number of limitations. In particular, applying tag clouds within the pragmatic limits of a typical user interface leads to poor navigational performance, as tag clouds are vulnerable to a so-called pagination effect. In this paper, a solution to the pagination problem is discussed, implemented as part of an Austrian online encyclopedia called Austria-Forum, and analyzed. In addition, a simulation-based evaluation of the new algorithm has been conducted. The first evaluation results are quite promising, as the efficient navigational properties are restored.
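One simple way to mitigate a pagination effect is to limit the cloud to k tags chosen greedily for item coverage, so that the few tags that fit on screen still reach most of the content. This sketch is illustrative and not the algorithm evaluated in the paper:

```python
def build_tag_cloud(tag_to_items, k):
    """Greedy sketch: choose k tags that together cover as many items as
    possible, so a size-limited cloud keeps the content reachable."""
    chosen, covered = [], set()
    pool = dict(tag_to_items)
    while pool and len(chosen) < k:
        # pick the tag that adds the most not-yet-covered items
        tag = max(pool, key=lambda t: len(set(pool[t]) - covered))
        chosen.append(tag)
        covered |= set(pool.pop(tag))
    return chosen, covered
```

Greedy coverage is a classic approximation for this kind of selection; a real system would also weigh tag popularity and the hierarchy structure.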
o A Clustering Approach for Collaborative Filtering Recommendation Using Social Network Analysis By www.jucs.org Published On :: 2011-07-08T12:31:46+02:00 Collaborative Filtering (CF) is a well-known technique in recommender systems. CF exploits relationships between users and recommends items to the active user according to the ratings of his/her neighbors. CF suffers from the data sparsity problem, whereby users rate only a small set of items; this makes the computation of similarity between users imprecise and consequently reduces the accuracy of CF algorithms. In this article, we propose a clustering approach based on the social information of users to derive recommendations. We study the application of this approach in two scenarios: academic venue recommendation based on collaboration information, and trust-based recommendation. Using data from the DBLP digital library and Epinions, the evaluation shows that our clustering-based CF performs better than traditional CF algorithms.
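A minimal sketch of the idea, assuming clusters are connected components of the social/trust graph and predictions are cluster means; the paper's actual clustering and prediction scheme is richer, and all names here are illustrative:

```python
def social_clusters(trust_edges, users):
    """Cluster users as connected components of the social/trust graph."""
    adj = {u: set() for u in users}
    for a, b in trust_edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for u in users:
        if u in seen:
            continue
        comp, stack = set(), [u]          # depth-first component search
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

def cluster_predict(ratings, clusters, user, item):
    """Predict `user`'s rating of `item` as the mean rating given by the
    other members of the user's social cluster (None if nobody rated it)."""
    group = next(c for c in clusters if user in c)
    votes = [ratings[u][item] for u in group
             if u != user and item in ratings.get(u, {})]
    return sum(votes) / len(votes) if votes else None
```

Restricting neighbors to a social cluster sidesteps the sparse rating overlap that makes plain user-user similarity unreliable.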
o Markup upon Video - towards Dynamic and Interactive Video Annotations By www.jucs.org Published On :: 2011-07-08T12:31:47+02:00 Interactive video is becoming an increasingly dominant feature of our media platforms. Especially due to the popular YouTube annotations framework, integrating graphical annotations in a video has become very fashionable. However, the current options are limited to a few graphical shapes for which the user can define virtually no dynamic behaviour. Despite the enormous demand for easily created interactive video, no such advanced tools are available. In this article we describe an innovative approach to realizing dynamic and interactive video annotations. First we explain basic concepts of video markup, such as the generic element model and visual descriptors. After that we introduce the event-tree model, which can be used to define the event handling in an interactive video both formally and visually. By combining these basic concepts, we give the video community an effective tool for realizing interactive and dynamic video in a simple, intuitive and focused way.
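The event-tree model can be sketched as nodes that fire on a player event, run an action, and then activate their children, so an annotation's dynamic behaviour is defined declaratively. This is a guess at the mechanics based on the abstract, not the authors' format; class and action names are hypothetical:

```python
class EventNode:
    """One node of the event tree: an event type, the action to run when
    it fires, and the child nodes that become active afterwards."""

    def __init__(self, event, action=None):
        self.event, self.action = event, action
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

class InteractiveAnnotation:
    """Dispatches player events against the currently active tree nodes,
    collecting the triggered actions (e.g. 'show-overlay', 'seek-42s')."""

    def __init__(self, root):
        self.active = [root]

    def dispatch(self, event):
        fired, still_active = [], []
        for node in self.active:
            if node.event == event:
                if node.action:
                    fired.append(node.action)
                still_active.extend(node.children)   # descend the branch
            else:
                still_active.append(node)            # keep waiting
        self.active = still_active
        return fired
```

A first click might show an overlay and arm a second click that hides it again, giving annotations stateful behaviour without scripting.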
o ODR, Ontologies, and Web 2.0 By www.jucs.org Published On :: 2011-07-08T12:31:49+02:00 Online communities and institutions create new spaces for interaction, but also open new avenues for the emergence of grievances, claims, and disputes. Consequently, online dispute resolution (ODR) procedures are core to these new online worlds. But can ODR mechanisms provide sufficient levels of reputation, trust, and enforceability for them to become mainstream? This contribution introduces the new approaches to ODR and provides a description of the design and structure of Ontomedia, a web-based platform to facilitate online mediation in different domains. Full Article
o Web 2.0: Applications and Mechanisms By www.jucs.org Published On :: 2011-07-08T12:31:50+02:00 Full Article
o Modeling Quality Attributes with Aspect-Oriented Architectural Templates By www.jucs.org Published On :: 2011-05-06T16:03:16+02:00 The quality attributes of a software system are, to a large extent, determined by the decisions taken early in the development process. Best practices in software engineering recommend the identification of important quality attributes during the requirements elicitation process, and the specification of software architectures to satisfy these requirements. Over the years the software engineering community has studied the relationship between quality attributes and the use of particular architectural styles and patterns. In this paper we study the relationship between quality attributes and Aspect-Oriented Software Architectures - which apply the principles of Aspect-Oriented Software Development (AOSD) at the architectural level. AOSD focuses on identifying, modeling and composing crosscutting concerns - i.e. concerns that are tangled and/or scattered with other concerns of the application. In this paper we propose to use AO-ADL, an aspect-oriented architectural description language, to specify quality attributes by means of parameterizable, and thus reusable, architectural patterns. We particularly focus on quality attributes that: (1) have major implications on software functionality, requiring the incorporation of explicit functionality at the architectural level; (2) are complex enough as to be modeled by a set of related concerns and the compositions among them, and (3) crosscut domain specific functionality and are related to more than one component in the architecture. We illustrate our approach for usability, a critical quality attribute that satisfies the previous constraints and that requires special attention at the requirements and the architecture design stages. Full Article
o Bio-Inspired Mechanisms for Coordinating Multiple Instances of a Service Feature in Dynamic Software Product Lines By www.jucs.org Published On :: 2011-05-06T16:03:21+02:00 One of the challenges in Dynamic Software Product Lines (DSPLs) is how to support the coordination of multiple instances of a service feature. In particular, there is a need for a decentralized decision-making capability that can seamlessly integrate new instances of a service feature without an omniscient central controller. Because of this need for decentralization, we are investigating principles of self-organization in biological organisms. As an initial proof of concept, we have applied three bio-inspired techniques to a simple smart home scenario: quorum-sensing-based service activation, a firefly algorithm for synchronization, and a gossiping (epidemic) protocol for information dissemination. In this paper, we first explain why we selected these techniques, using a set of motivating smart home scenarios, and then describe our experiences in adopting them. Full Article
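As a hedged sketch of one of the three techniques named above, the simulation below shows push-style gossip (epidemic) dissemination: each round, every informed node forwards the update to one randomly chosen peer, so the message spreads without any central controller. The node count and round limit are illustrative, not taken from the paper.

```python
import random

def gossip_rounds(n, seed=0, max_rounds=100):
    """Push-style epidemic dissemination: each round, every informed node
    forwards the message to one peer chosen uniformly at random. Returns
    the number of rounds until all n nodes are informed."""
    rng = random.Random(seed)
    informed = {0}                       # node 0 originates the update
    for rnd in range(1, max_rounds + 1):
        for node in list(informed):      # snapshot: new nodes gossip next round
            peer = rng.randrange(n)
            if peer != node:
                informed.add(peer)
        if len(informed) == n:
            return rnd
    return None  # did not converge within max_rounds

rounds = gossip_rounds(20)
print(f"all 20 nodes informed after {rounds} rounds")
```

Because every informed node keeps forwarding, coverage grows roughly exponentially at first, which is why epidemic protocols reach full dissemination in a logarithmic number of rounds with high probability.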
o Automatically Checking Feature Model Refactorings By www.jucs.org Published On :: 2011-05-06T16:03:26+02:00 A feature model (FM) defines the valid combinations of features, each of which corresponds to a program in a Software Product Line (SPL). FMs may evolve, for instance during refactoring activities. Developers may use a catalog of refactorings as support; however, such a catalog is inherently incomplete, and it is non-trivial to propose correct refactorings. To our knowledge, no previous analysis technique for FMs has been used to check properties of general FM refactorings (transformations that can be applied to any number of FMs) containing a representative number of features. We propose an efficient encoding of FMs in the Alloy formal specification language. Based on this encoding, we show how the Alloy Analyzer tool, which performs analysis on Alloy models, can be used to automatically check whether encoded general and specific FM refactorings are correct. Our approach can analyze general transformations automatically to a significant scale in a few seconds. To evaluate the analysis performance of our encoding, we ran it on automatically generated FMs ranging from 500 to 2,000 features. Furthermore, we analyze the soundness of general transformations. Full Article
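The correctness notion being checked can be illustrated with a brute-force sketch: a transformation of a feature model is a refactoring if it preserves (or extends) the set of valid products. This is a hypothetical enumeration over a toy three-feature model; the paper instead encodes FMs in Alloy and delegates the check to the Alloy Analyzer, which scales far beyond enumeration.

```python
from itertools import product

def products(features, constraint):
    """Enumerate the valid configurations (products) of a feature model,
    given its features and a propositional constraint over them."""
    valid = set()
    for values in product([False, True], repeat=len(features)):
        cfg = dict(zip(features, values))
        if constraint(cfg):
            valid.add(frozenset(f for f, v in cfg.items() if v))
    return valid

def is_refactoring(features, before, after):
    """A transformation is a refactoring if every product valid before
    remains valid afterwards."""
    return products(features, before) <= products(features, after)

feats = ["root", "a", "b"]
# Before: root mandatory, exactly one of {a, b} (alternative group).
before = lambda c: c["root"] and (c["a"] != c["b"])
# After: root mandatory, at least one of {a, b} (or-group): strictly more products.
after = lambda c: c["root"] and (c["a"] or c["b"])
print(is_refactoring(feats, before, after))   # True
print(is_refactoring(feats, after, before))   # False: {root, a, b} would be lost
```

Enumeration is exponential in the number of features, which is exactly why a formal encoding plus an analyzer is needed for the 500-to-2,000-feature models evaluated in the paper.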
o QoS-based Approach for Dynamic Web Service Composition By www.jucs.org Published On :: 2011-05-06T16:03:31+02:00 Web Services have become a standard for the integration of systems in distributed environments. By using a set of open interoperability standards, they allow computer-to-computer interaction regardless of the programming languages and operating systems used. Semantic Web Services, in turn, use ontologies to describe their functionality in a more structured manner, allowing computers to reason about the information they require and provide. Such a description also allows the dynamic composition of several Web Services when no single one can provide the desired functionality. There are scenarios, however, in which functional correctness alone is not enough to fulfill user requirements, and a minimum level of quality should be guaranteed by the providers. In this context, this work presents an approach for dynamic Web Service composition that takes into account the overall quality of the composition. The proposed approach relies on a heuristic to perform the composition efficiently. To show its feasibility, a Web Service composition prototype was developed and evaluated on public test sets, alongside an approach that does not consider quality in the composition process. The results show that the proposed approach generally finds compositions of higher quality within a reasonable processing time. Full Article
o An Aspect-Oriented Framework for Weaving Domain-Specific Concerns into Component-Based Systems By www.jucs.org Published On :: 2011-05-06T16:03:36+02:00 Software components are used in various application domains, and many component models and frameworks have been proposed to fulfill domain-specific requirements. The general trend followed by these approaches is to provide ad-hoc models and tools for capturing these requirements and for implementing their support within dedicated runtime platforms, limited to features of the targeted domain. The challenge is then to propose more flexible solutions, where component reuse is domain-agnostic. In this article, we present a framework supporting the compositional construction and development of applications that must meet various extra-functional/domain-specific requirements. The key points of our contribution are: i) We target the development of component-oriented applications where extra-functional requirements are expressed as annotations on the units of composition in the application architecture. ii) These annotations are implemented as open and extensible component-based containers, achieving full separation of functional and extra-functional concerns. iii) Finally, the full machinery is implemented using the Aspect-Oriented Programming paradigm. We validate our approach with two case studies: the first is related to real-time and embedded applications, while the Full Article
o Context-Aware Composition and Adaptation based on Model Transformation By www.jucs.org Published On :: 2011-05-06T16:03:43+02:00 Using pre-existing software components (COTS) to develop software systems requires the composition and adaptation of component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality of service, and semantic). In this article, we define an approach that supports the composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle the quality-of-service and semantic levels, respectively, while both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study in which software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle composition and adaptation. We then generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component so that it can interact with all the WF components of the system, thereby avoiding mismatch problems. Full Article
o An Approach for Feature Modeling of Context-Aware Software Product Line By www.jucs.org Published On :: 2011-05-06T16:03:50+02:00 Feature modeling is an approach to representing commonalities and variabilities among the products of a product line. Context-aware applications use context information to provide relevant services and information to their users. One of the challenges in building a context-aware product line is to develop mechanisms that incorporate context information and adaptation knowledge into a feature model. This paper presents UbiFEX, an approach to support feature analysis for context-aware software product lines, which incorporates a modeling notation and a mechanism to verify the consistency of product configurations with respect to context variations. Moreover, an experimental study was performed as a preliminary evaluation, and a prototype was developed to enable the application of the proposed approach. Full Article
o Software Components, Architectures and Reuse By www.jucs.org Published On :: 2011-05-06T16:03:55+02:00 Full Article
o A Framework to Evaluate Interface Suitability for a Given Scenario of Textual Information Retrieval By www.jucs.org Published On :: 2011-07-04T16:04:41+02:00 Visualization of search results is an essential step in the textual Information Retrieval (IR) process. Indeed, Information Retrieval Interfaces (IRIs) serve as the link between users and IR systems; a simple example is the ranked list offered by common search engines. Given the importance of search-result visualization, many interfaces (textual, 2D, or 3D IRIs) have been proposed in the last decade. Two kinds of evaluation methods have been developed: (1) evaluation methods for these interfaces, aimed at validating ergonomic and cognitive aspects; and (2) evaluation methods applied to information retrieval systems (IRSs), aimed at measuring their effectiveness. As far as we know, however, these two kinds of evaluation methods remain disjoint: considering a given IRI associated with a given IRS, what happens if we associate this IRI with another IRS that does not have the same effectiveness? In this context, we propose an IRI evaluation framework aimed at evaluating the suitability of any IRI to different IR scenarios. First, we define the notion of an IR scenario as a combination of features related to users, IR tasks, and IR systems. We have implemented the framework in an evaluation platform that enables IRI evaluations and helps end-users (e.g., IRS developers or IRI designers) choose the most suitable IRI for a specific IR scenario. Full Article
o Nondeterministic Query Algorithms By www.jucs.org Published On :: 2011-07-04T16:04:43+02:00 Query algorithms are used to compute Boolean functions. The definition of the function is known, but input is hidden in a black box. The aim is to compute the function value using as few queries to the black box as possible. As in other computational models, different kinds of query algorithms are possible: deterministic, probabilistic, as well as nondeterministic. In this paper, we present a new alternative definition of nondeterministic query algorithms and study algorithm complexity in this model. We demonstrate the power of our model with an example of computing the Fano plane Boolean function. We show that for this function the difference between deterministic and nondeterministic query complexity is 7^N versus O(3^N). Full Article
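A standard way to make the deterministic/nondeterministic gap concrete is certificate complexity: a nondeterministic query algorithm may guess a small set of input positions whose values alone force the function's output. The brute-force sketch below computes the 1-certificate complexity of a Boolean function; it illustrates the classical notion only, not the paper's alternative definition, and the example functions (OR, AND) are ours, not the Fano plane function.

```python
from itertools import combinations, product

def certificate_complexity(f, n):
    """1-certificate complexity of f on n bits: over all inputs x with
    f(x) = 1, the size of the smallest set of positions of x whose values
    force f to 1 on every completion of the remaining bits."""
    worst = 0
    for x in product([0, 1], repeat=n):
        if not f(x):
            continue
        for k in range(n + 1):
            if any(all(f(y) for y in product([0, 1], repeat=n)
                       if all(y[i] == x[i] for i in pos))
                   for pos in combinations(range(n), k)):
                worst = max(worst, k)   # smallest k that certifies this x
                break
    return worst

# OR needs n queries deterministically in the worst case, but a single
# 1-bit certifies the answer, so one nondeterministic query suffices.
OR = lambda x: int(any(x))
AND = lambda x: int(all(x))
print(certificate_complexity(OR, 4))   # 1
print(certificate_complexity(AND, 3))  # 3 (the all-ones input needs every bit fixed)
```

The same flavor of gap, a polynomial deterministic cost versus an exponentially smaller nondeterministic cost, is what the paper establishes for the iterated Fano plane function.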
o Descriptional Complexity of Ambiguity in Symmetric Difference NFAs By www.jucs.org Published On :: 2011-07-04T16:04:44+02:00 We investigate ambiguity for symmetric difference nondeterministic finite automata. We show the existence of unambiguous, finitely ambiguous, polynomially ambiguous and exponentially ambiguous symmetric difference nondeterministic finite automata. We show that, for each of these classes, there is a family of n-state nondeterministic finite automata such that the smallest equivalent deterministic finite automata have O(2^n) states. Full Article
o Improving Security Levels of IEEE802.16e Authentication by Involving Diffie-Hellman PKDS By www.jucs.org Published On :: 2011-07-04T16:04:45+02:00 Recently, IEEE 802.16, Worldwide Interoperability for Microwave Access (WiMAX for short), has provided low-cost, high-efficiency and high-bandwidth network services. However, as with WiFi, radio transmission also exposes WiMAX to wireless security problems. To address this, the IEEE 802.16 standard defines the Privacy Key Management (PKM for short) authentication process, which offers only one-way authentication; with one-way authentication, an SS may connect to a fake BS. Mutual authentication, as developed for PKMv2, can avoid this problem. Therefore, in this paper, we propose an authentication key management approach, called the Diffie-Hellman-PKDS-based authentication method (DiHam for short), which employs a secret-door asymmetric one-way function, the Public Key Distribution System (PKDS for short), to improve the current security level of facility authentication between a WiMAX BS and SS. We further integrate PKMv1 and DiHam into a system, called PKM-DiHam (P-DiHam for short), in which PKMv1 acts as the authentication process and DiHam is responsible for key management and delivery. By transmitting securely protected and well-defined parameters between SS and BS, the two stations can mutually authenticate each other. Messages, including those conveying user data and authentication parameters, can then be delivered more securely. Full Article
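The primitive underlying the approach above is Diffie-Hellman key agreement: two stations exchange only public values yet derive the same secret. The sketch below shows plain textbook DH, not the paper's PKDS construction or its protocol messages, and the parameters are deliberately tiny toy values; real deployments use vetted large prime groups.

```python
import secrets

# Toy parameters for illustration only: NOT large enough to be secure.
P = 4294967291   # a small prime (2**32 - 5)
G = 5            # public base

def dh_keypair():
    """Generate a Diffie-Hellman private/public pair."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def dh_shared(priv, other_pub):
    """Both sides derive the same secret: (g^a)^b = (g^b)^a (mod p)."""
    return pow(other_pub, priv, P)

# BS and SS each generate a keypair and exchange only the public halves...
bs_priv, bs_pub = dh_keypair()
ss_priv, ss_pub = dh_keypair()
# ...then independently compute an identical shared key.
assert dh_shared(bs_priv, ss_pub) == dh_shared(ss_priv, bs_pub)
print("shared key established")
```

Because each side proves knowledge tied to its own private value, building the key exchange into the authentication flow is what enables the mutual (rather than one-way) authentication the paper targets.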
o Least Slack Time Rate first: an Efficient Scheduling Algorithm for Pervasive Computing Environment By www.jucs.org Published On :: 2011-07-04T16:04:46+02:00 Real-time systems such as pervasive computing systems must finish executing a task within a predetermined time while ensuring that the execution results are logically correct. Such systems require intelligent scheduling methods that can promptly and adequately distribute the given tasks to a processor or processors. In this paper, we propose LSTR (Least Slack Time Rate first), a new and simple scheduling algorithm for multi-processor environments, and demonstrate its efficient performance through various tests. Full Article
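A minimal sketch of the selection rule suggested by the algorithm's name follows. The abstract does not define the metric, so the "slack rate" here (remaining slack divided by time left to the deadline) is an assumption made for illustration, as are the task names and numbers.

```python
def slack_rate(task, now):
    """Assumed metric: remaining slack as a fraction of the time left to
    the deadline. Small values mean the task is close to missing it."""
    time_left = task["deadline"] - now
    slack = time_left - task["remaining"]   # idle time the task can still afford
    return slack / time_left

def pick_next(tasks, now):
    """LSTR-style choice: run the task with the least slack-time rate."""
    return min(tasks, key=lambda t: slack_rate(t, now))

tasks = [
    {"name": "sensor-poll", "deadline": 10, "remaining": 2},  # rate (10-2)/10 = 0.80
    {"name": "hvac-adjust", "deadline": 6,  "remaining": 5},  # rate (6-5)/6  = 0.17
]
print(pick_next(tasks, now=0)["name"])  # hvac-adjust
```

Normalizing slack by time-to-deadline distinguishes this rule from plain least-slack-first: a task with little absolute slack but a distant deadline is considered less urgent than one whose slack is nearly exhausted relative to its horizon.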
o Hierarchical Graph-Grammar Model for Secure and Efficient Handwritten Signatures Classification By www.jucs.org Published On :: 2011-07-04T16:04:47+02:00 One important subject associated with personal authentication capabilities is the analysis of handwritten signatures. Among the many known techniques, algorithms based on linguistic formalisms are also possible. However, such techniques require a number of algorithms for intelligent image analysis to be applied, allowing the development of new solutions in the field of personal authentication and the building of modern security systems based on the advanced recognition of such patterns. The article presents an approach based on the use of syntactic methods for the static analysis of handwritten signatures. The graph linguistic formalisms applied, such as the IE graph and ETPL(k) grammar, are characterised by considerable descriptive strength and a polynomial membership problem for the syntactic analysis. To represent the analysed handwritten signatures, new hierarchical (two-layer) HIE graph structures based on IE graphs have been defined. The two-layer graph description makes it possible to take into consideration both local and global features of the signature. The use of attributed graphs enables the storage of additional semantic information describing the properties of individual signature strokes. The verification and recognition of a signature consists in checking whether its graph description belongs to the language describing the specimen database. Initial assessments indicate an average precision of under 75%. Full Article
o Cost-Sensitive Spam Detection Using Parameters Optimization and Feature Selection By www.jucs.org Published On :: 2011-07-04T16:04:49+02:00 E-mail spam is no longer mere garbage but a genuine risk, since it increasingly carries virus attachments and spyware agents that can compromise recipients' systems; there is therefore a pressing need for spam detection. Many spam detection techniques based on machine learning have been proposed. As the amount of spam sent with bulk-mailing tools has increased tremendously, spam detection techniques must keep pace. To cope with this, parameter optimization and feature selection have been used to reduce processing overhead while guaranteeing high detection rates. However, previous approaches have not taken into account feature variable importance or the optimal number of features; moreover, to the best of our knowledge, no approach uses both parameter optimization and feature selection together for spam detection. In this paper, we propose a spam detection model enabling both parameter optimization and optimal feature selection: we optimize two parameters of the detection models using Random Forests (RF) so as to maximize the detection rates. We provide the variable importance of each feature so that irrelevant features are easy to eliminate. Furthermore, we determine the optimal number of selected features using two methods: (i) a single parameter optimization during the overall feature selection, and (ii) parameter optimization in every feature elimination phase. Finally, we evaluate our spam detection model with cost-sensitive measures to avoid misclassifying legitimate messages, since the cost of classifying a legitimate message as spam far outweighs the cost of classifying spam as legitimate. We perform experiments on the Spambase dataset and show the feasibility of our approaches. Full Article
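The importance-driven elimination loop described above can be sketched in a few lines. This is a hedged stand-in: the paper derives importance scores from Random Forests, whereas here a simple absolute-covariance score plays that role (any per-feature score slots into the same loop), and the feature names and toy data are hypothetical.

```python
def importance(xs, ys):
    """Stand-in importance score: absolute covariance between one feature
    column and the spam labels (the paper uses RF variable importance)."""
    n = len(ys)
    mx, my = sum(xs) / n, sum(ys) / n
    return abs(sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n)

def eliminate_least_important(data, labels, names):
    """One recursive-feature-elimination step: drop the feature whose
    importance score is lowest."""
    scores = {name: importance([row[i] for row in data], labels)
              for i, name in enumerate(names)}
    victim = min(scores, key=scores.get)
    keep = [i for i, name in enumerate(names) if name != victim]
    return [[row[i] for i in keep] for row in data], [names[i] for i in keep]

# Toy data: feature 0 tracks the spam label, feature 1 is pure noise.
names = ["exclam_freq", "msg_id_parity"]
data = [[0.9, 0], [0.8, 1], [0.1, 0], [0.2, 1]]
labels = [1, 1, 0, 0]
_, kept = eliminate_least_important(data, labels, names)
print(kept)  # ['exclam_freq']
```

Repeating this step, and re-optimizing the detector's parameters at each phase as in the paper's method (ii), yields the curve from which an optimal number of features is chosen.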
o Service Oriented Multimedia Delivery System in Pervasive Environments By www.jucs.org Published On :: 2011-07-04T16:04:50+02:00 Service composition is an effective approach for large-scale multimedia delivery. In previous work, a user requirement is represented as one fixed functional path composed of several functional components in a certain order. In practice, several functional paths (delivering multimedia data at different quality levels, e.g., image resolution or frame rate) may satisfy one request, and, given the diversity of devices and connections in pervasive environments, the system should choose a suitable media-quality delivery path according to context instead of one fixed functional path. This paper presents a detailed study of the multimedia delivery problem and proposes an on-line algorithm, LDPath, and an off-line centralized algorithm, LD/RPath. LDPath aims to deliver multimedia data to the end user with the lowest delay by choosing services to build delivery paths hop-by-hop, which suits unstable open environments. LD/RPath is developed for relatively stable environments and generates delivery paths according to a trade-off between delay and reliability metrics, because service reliability is also an important factor in such scenarios. Experimental results show that both algorithms perform well with low overhead. Full Article
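The delay-minimization objective above can be illustrated with a centralized shortest-path sketch over a service graph whose edge weights are delays. This is a simplification for illustration: the paper's LDPath builds the path hop-by-hop on-line, and LD/RPath additionally trades delay against reliability; the service names and delay values here are invented.

```python
import heapq

def lowest_delay_path(graph, src, dst):
    """Dijkstra over a service graph with per-hop delays as weights,
    returning the lowest-delay delivery path and its total delay."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, delay in graph.get(u, []):
            nd = d + delay
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:                    # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# source -> transcoding services -> end user; weights are delays (ms)
graph = {
    "source":      [("transcoderA", 20), ("transcoderB", 5)],
    "transcoderA": [("user", 5)],
    "transcoderB": [("user", 30)],
}
print(lowest_delay_path(graph, "source", "user"))  # (['source', 'transcoderA', 'user'], 25.0)
```

Note that the locally cheapest first hop (transcoderB at 5 ms) loses to the globally cheaper path, which is precisely why a path-level criterion, whether computed off-line or approximated hop-by-hop, matters for delivery quality.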