science and technology

Context-Aware Composition and Adaptation based on Model Transformation

Using pre-existing software components (COTS) to develop software systems requires the composition and adaptation of the component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality of service and semantic). In this article, we define an approach which supports composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle the quality-of-service and semantic levels, respectively, while both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study, where software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle composition and adaptation. Then, we generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component with the purpose of interacting with all the WF components of the system, thereby avoiding mismatch problems.
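
As a rough illustration of the kind of behavioural mismatch an adaptor has to resolve (not the paper's CA-STS formalism, which also covers contexts and value passing), the following Python sketch models component protocols as plain labelled transition systems; all names in it are hypothetical.

```python
# Illustrative sketch only: a minimal labelled transition system (LTS), not the
# full CA-STS formalism from the paper (contexts and value passing are omitted).
# All names (Lts, can_synchronise) are hypothetical.

class Lts:
    def __init__(self, initial, transitions):
        self.initial = initial              # initial state
        self.transitions = transitions      # {state: [(label, next_state), ...]}

def can_synchronise(client, server):
    """Explores the product of both LTSs and reports the first reachable state
    pair where the client emits a request (suffix '!') that the server cannot
    accept (same name with suffix '?') -- a simple mismatch an adaptor would fix."""
    seen, frontier = set(), [(client.initial, server.initial)]
    while frontier:
        c, s = frontier.pop()
        if (c, s) in seen:
            continue
        seen.add((c, s))
        for label, c_next in client.transitions.get(c, []):
            if label.endswith("!"):
                expected = label[:-1] + "?"
                matches = [s2 for l, s2 in server.transitions.get(s, []) if l == expected]
                if not matches:
                    return False, (c, s, label)     # mismatch found
                for s_next in matches:
                    frontier.append((c_next, s_next))
    return True, None

# Example: the client requests 'book!' but the server only offers 'search?'.
client = Lts("c0", {"c0": [("book!", "c1")]})
server = Lts("s0", {"s0": [("search?", "s1")]})
print(can_synchronise(client, server))      # (False, ('c0', 's0', 'book!'))
```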




science and technology

An Approach for Feature Modeling of Context-Aware Software Product Line

Feature modeling is an approach to represent commonalities and variabilities among products of a product line. Context-aware applications use context information to provide relevant services and information for their users. One of the challenges to build a context-aware product line is to develop mechanisms to incorporate context information and adaptation knowledge in a feature model. This paper presents UbiFEX, an approach to support feature analysis for context-aware software product lines, which incorporates a modeling notation and a mechanism to verify the consistency of product configuration regarding context variations. Moreover, an experimental study was performed as a preliminary evaluation, and a prototype was developed to enable the application of the proposed approach.




science and technology

Software Components, Architectures and Reuse




science and technology

A Framework to Evaluate Interface Suitability for a Given Scenario of Textual Information Retrieval

Visualization of search results is an essential step in the textual Information Retrieval (IR) process. Indeed, Information Retrieval Interfaces (IRIs) serve as the link between users and IR systems, a simple example being the ranked list offered by common search engines. Given the importance of search-result visualization, many interfaces (textual, 2D or 3D IRIs) have been proposed in the last decade. Two kinds of evaluation methods have been developed: (1) evaluation methods for these interfaces aimed at validating ergonomic and cognitive aspects; and (2) evaluation methods applied to information retrieval systems (IRSs) aimed at measuring their effectiveness. However, as far as we know, these two kinds of evaluation methods are disjoint: considering a given IRI associated with a given IRS, what happens if we associate this IRI with another IRS that does not have the same effectiveness? In this context, we propose an IRI evaluation framework aimed at evaluating the suitability of any IRI to different IR scenarios. First, we define the notion of an IR scenario as a combination of features related to users, IR tasks and IR systems. We have implemented the framework as a specific evaluation platform that enables IRI evaluations and helps end-users (e.g. IRS developers or IRI designers) choose the most suitable IRI for a specific IR scenario.




science and technology

Nondeterministic Query Algorithms

Query algorithms are used to compute Boolean functions. The definition of the function is known, but input is hidden in a black box. The aim is to compute the function value using as few queries to the black box as possible. As in other computational models, different kinds of query algorithms are possible: deterministic, probabilistic, as well as nondeterministic. In this paper, we present a new alternative definition of nondeterministic query algorithms and study algorithm complexity in this model. We demonstrate the power of our model with an example of computing the Fano plane Boolean function. We show that for this function the difference between deterministic and nondeterministic query complexity is 7^N versus O(3^N).
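
As a toy illustration of the query model (not the paper's construction or the Fano plane function), the sketch below computes a 3-bit OR through a query black box and contrasts a deterministic strategy with a nondeterministic guess-and-verify strategy.

```python
# Illustrative sketch only: a query algorithm sees the input x only through a
# black box `query(i)` and tries to determine f(x) with few queries. Here f is
# a 3-bit OR, purely as an example.

def make_black_box(x):
    calls = {"count": 0}
    def query(i):
        calls["count"] += 1
        return x[i]
    return query, calls

def deterministic_or(query, n):
    # Worst case: all n positions must be queried.
    return any(query(i) for i in range(n))

def nondeterministic_or(query, guess):
    # A nondeterministic algorithm may 'guess' a witness index and verify it
    # with a single query; it accepts iff some guess leads to acceptance.
    return query(guess) == 1

x = (0, 0, 1)
q, calls = make_black_box(x)
print(deterministic_or(q, 3), calls["count"])           # True, 3 queries
q, calls = make_black_box(x)
print(nondeterministic_or(q, guess=2), calls["count"])  # True, 1 query
```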




science and technology

Descriptional Complexity of Ambiguity in Symmetric Difference NFAs

We investigate ambiguity for symmetric difference nondeterministic finite automata. We show the existence of unambiguous, finitely ambiguous, polynomially ambiguous and exponentially ambiguous symmetric difference nondeterministic finite automata. We show that, for each of these classes, there is a family of n-state nondeterministic finite automata such that the smallest equivalent deterministic finite automata have O(2^n) states.
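
For readers unfamiliar with symmetric difference automata, the hedged sketch below simulates a small, made-up ⊕-NFA: successor sets are combined by symmetric difference (addition over GF(2)), and a word is accepted when the number of accepting computation paths is odd, which is the source of the exponential blow-up the abstract refers to.

```python
# Illustrative sketch only: simulating a tiny, invented symmetric difference NFA.

from functools import reduce

delta = {                       # delta[state][symbol] -> set of successor states
    "q0": {"a": {"q0", "q1"}, "b": {"q1"}},
    "q1": {"a": {"q0"},       "b": set()},
}
start, accepting = frozenset({"q0"}), {"q1"}

def step(states, symbol):
    # Combine successor sets by symmetric difference (XOR over GF(2)).
    succs = [delta[q].get(symbol, set()) for q in states]
    return frozenset(reduce(lambda a, b: a ^ b, succs, set()))

def accepts(word):
    states = start
    for symbol in word:
        states = step(states, symbol)
    # Accept iff an odd number of accepting states remain, i.e. the number of
    # accepting computation paths is odd.
    return len(states & accepting) % 2 == 1

print(accepts("ab"), accepts("aa"))   # True True for this toy automaton
```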




science and technology

Improving Security Levels of IEEE802.16e Authentication by Involving Diffie-Hellman PKDS

Recently, IEEE 802.16 Worldwide Interoperability for Microwave Access (WiMAX for short) has provided low-cost, high-efficiency and high-bandwidth network services. However, as with WiFi, radio transmission also exposes WiMAX to wireless security problems. To address this, the IEEE 802.16 standard, during its development, defined the Privacy Key Management (PKM for short) authentication process, which offers only one-way authentication. With one-way authentication, however, a subscriber station (SS) may connect to a fake base station (BS). Mutual authentication, like that developed for PKMv2, can avoid this problem. Therefore, in this paper, we propose an authentication key management approach, called the Diffie-Hellman-PKDS-based authentication method (DiHam for short), which employs a secret-door asymmetric one-way function, the Public Key Distribution System (PKDS for short), to improve the current security level of facility authentication between a WiMAX BS and SS. We further integrate PKMv1 and DiHam into a system, called PKM-DiHam (P-DiHam for short), in which PKMv1 acts as the authentication process and DiHam is responsible for key management and delivery. By transmitting securely protected and well-defined parameters between SS and BS, the two stations can mutually authenticate each other. Messages, including those conveying user data and authentication parameters, can then be delivered more securely.
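
As background only, the sketch below shows a textbook Diffie-Hellman key agreement between a base station and a subscriber station; it is not the DiHam/PKDS protocol itself, and the toy parameters are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: plain Diffie-Hellman between a BS and an SS, NOT the
# DiHam/PKDS scheme from the paper. Toy parameters; real deployments use
# standardised groups and authenticated exchanges.

import secrets

p = 0xFFFFFFFB            # small prime modulus (assumption, illustration only)
g = 5                     # generator (assumption)

bs_secret = secrets.randbelow(p - 2) + 1      # BS private exponent
ss_secret = secrets.randbelow(p - 2) + 1      # SS private exponent

bs_public = pow(g, bs_secret, p)              # values exchanged over the air
ss_public = pow(g, ss_secret, p)

bs_key = pow(ss_public, bs_secret, p)         # both sides derive the same key
ss_key = pow(bs_public, ss_secret, p)
assert bs_key == ss_key
print(hex(bs_key))
```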




science and technology

Least Slack Time Rate first: an Efficient Scheduling Algorithm for Pervasive Computing Environment

Real-time systems such as pervasive computing environments have to complete executing a task within a predetermined time while ensuring that the execution results are logically correct. Such systems require intelligent scheduling methods that can promptly and appropriately distribute the given tasks to one or more processors. In this paper, we propose LSTR (Least Slack Time Rate first), a new and simple scheduling algorithm for multi-processor environments, and demonstrate its efficient performance through various tests.
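
The sketch below illustrates one plausible reading of a slack-time-rate priority (the paper's exact definition may differ): the ready task with the smallest ratio of remaining slack to remaining time until its deadline is scheduled first.

```python
# Illustrative sketch only: a slack-based priority for ready tasks; the precise
# 'slack time rate' used by LSTR is defined in the paper.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    deadline: float        # absolute deadline
    remaining: float       # remaining execution time

def slack_time_rate(task, now):
    time_left = task.deadline - now
    slack = time_left - task.remaining
    return slack / time_left if time_left > 0 else float("-inf")

def pick_next(ready, now):
    # 'Least slack time rate first': the smallest rate is the most urgent task.
    return min(ready, key=lambda t: slack_time_rate(t, now))

ready = [Task("sensor", deadline=10, remaining=4),
         Task("ui",     deadline=20, remaining=5),
         Task("log",    deadline=30, remaining=2)]
print(pick_next(ready, now=0).name)    # 'sensor' (rate 0.6 vs 0.75 vs 0.93)
```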




science and technology

Hierarchical Graph-Grammar Model for Secure and Efficient Handwritten Signatures Classification

One important subject associated with personal authentication capabilities is the analysis of handwritten signatures. Among the many known techniques, algorithms based on linguistic formalisms are also possible. However, such techniques require a number of algorithms for intelligent image analysis to be applied, allowing the development of new solutions in the field of personal authentication and the building of modern security systems based on the advanced recognition of such patterns. The article presents an approach based on the usage of syntactic methods for the static analysis of handwritten signatures. The graph linguistic formalisms applied, such as the IE graph and ETPL(k) grammar, are characterised by considerable descriptive strength and a polynomial membership problem for syntactic analysis. For the purposes of representing the analysed handwritten signatures, new hierarchical (two-layer) HIE graph structures based on IE graphs have been defined. The two-layer graph description makes it possible to take into consideration both local and global features of the signature. The usage of attributed graphs enables the storage of additional semantic information describing the properties of individual signature strokes. The verification and recognition of a signature consist in analysing the affiliation of its graph description to the language describing the specimen database. Initial assessments show an average precision of the method of just under 75%.




science and technology

Cost-Sensitive Spam Detection Using Parameters Optimization and Feature Selection

E-mail spam is no longer mere garbage but a risk, since it increasingly includes virus attachments and spyware agents that can ruin the recipient's system; there is therefore an emerging need for spam detection. Many spam detection techniques based on machine learning have been proposed. As the amount of spam has increased tremendously through bulk mailing tools, spam detection techniques must keep pace. To cope with this, parameter optimization and feature selection have been used to reduce processing overheads while guaranteeing high detection rates. However, previous approaches have not taken into account feature variable importance and the optimal number of features. Moreover, to the best of our knowledge, there is no approach which uses both parameter optimization and feature selection together for spam detection. In this paper, we propose a spam detection model enabling both parameter optimization and optimal feature selection: we optimize two parameters of the detection models using Random Forests (RF) so as to maximize the detection rates. We provide the variable importance of each feature so that irrelevant features can easily be eliminated. Furthermore, we determine the optimal number of selected features using two methods: (i) a single parameter optimization during the overall feature selection, and (ii) parameter optimization in every feature elimination phase. Finally, we evaluate our spam detection model with cost-sensitive measures to avoid misclassification of legitimate messages, since the cost of classifying a legitimate message as spam far outweighs the cost of classifying spam as a legitimate message. We perform experiments on the Spambase dataset and show the feasibility of our approaches.
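
A minimal sketch of the two ingredients, assuming scikit-learn and a placeholder Spambase CSV (not the authors' exact pipeline): a grid search tunes two Random Forest parameters, and the resulting variable importances are used to keep only the top-ranked features.

```python
# Illustrative sketch only: parameter tuning plus importance-based feature
# selection. The CSV path and label column name are placeholders; the real
# Spambase data has 57 features plus a class label.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

data = pd.read_csv("spambase.csv")                 # placeholder path
X, y = data.drop(columns=["is_spam"]), data["is_spam"]

# (i) optimise two RF parameters once, before feature selection
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_features": ["sqrt", 0.3]},
    scoring="f1", cv=5)
grid.fit(X, y)
best_rf = grid.best_estimator_

# (ii) rank features by variable importance and keep the most informative ones
importances = pd.Series(best_rf.feature_importances_, index=X.columns)
top_features = importances.sort_values(ascending=False).head(20).index
print(importances.sort_values(ascending=False).head(10))

# Retrain on the reduced feature set; a cost-sensitive evaluation would weight
# false positives (legitimate mail flagged as spam) more heavily.
best_rf.fit(X[top_features], y)
```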




science and technology

Service Oriented Multimedia Delivery System in Pervasive Environments

Service composition is an effective approach to large-scale multimedia delivery. In previous works, a user requirement is represented as one fixed functional path composed of several functional components in a certain order. In practice, there may be several functional paths (delivering multimedia data at different quality levels, e.g., image resolution or frame rate) that can meet one request. Moreover, due to the diversity of devices and connections in pervasive environments, the system should choose a suitable media-quality delivery path in accordance with context, instead of one fixed functional path. This paper presents a deep study of the multimedia delivery problem and proposes an on-line algorithm, LDPath, and an off-line centralized algorithm, LD/RPath. LDPath aims at delivering multimedia data to the end user with the lowest delay by choosing services to build delivery paths hop-by-hop, which suits unstable open environments. LD/RPath is developed for relatively stable environments and generates delivery paths according to a trade-off between delay and reliability metrics, because service reliability is also an important factor in such scenarios. Experimental results show that both algorithms perform well with low overhead to the system.
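
As a simplified, off-line illustration of the lowest-delay idea (LDPath itself builds paths hop-by-hop on-line), the sketch below runs a Dijkstra-style search over an invented service graph weighted by per-hop delay.

```python
# Illustrative sketch only: choosing a lowest-delay chain of services over a
# made-up graph. 'src' is the media source, 'user' the end device, and the
# intermediate nodes are candidate services.

import heapq

graph = {
    "src":       [("transcode", 20), ("scale", 35)],
    "transcode": [("cache", 10), ("user", 60)],
    "scale":     [("user", 15)],
    "cache":     [("user", 25)],
}

def lowest_delay_path(graph, source, target):
    queue, best = [(0, source, [source])], {}
    while queue:
        delay, node, path = heapq.heappop(queue)
        if node == target:
            return delay, path
        if best.get(node, float("inf")) <= delay:
            continue
        best[node] = delay
        for nxt, d in graph.get(node, []):
            heapq.heappush(queue, (delay + d, nxt, path + [nxt]))
    return float("inf"), []

print(lowest_delay_path(graph, "src", "user"))   # (50, ['src', 'scale', 'user'])
```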




science and technology

Semantic Web: Theory and Applications




science and technology

Knowledge Extraction from RDF Data with Activation Patterns

RDF data can be analyzed with various query languages such as SPARQL. However, due to their nature these query languages do not support fuzzy queries that would allow us to extract a broad range of additional information. In this article we present a new method that transforms the information expressed by subject-relation-object triples within RDF data into Activation Patterns. These patterns represent a common model that is the basis for a number of sophisticated analysis methods such as semantic relation analysis, semantic search queries, unsupervised clustering, supervised learning and anomaly detection. In this article, we explain the Activation Patterns concept and apply it to an RDF representation of the well-known CIA World Factbook.
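
As a crude stand-in for Activation Patterns (the paper's actual construction is richer), the sketch below uses rdflib to read subject-relation-object triples from a made-up Turtle snippet and turns each subject into a bag-of-relations vector that could feed clustering or anomaly detection.

```python
# Illustrative sketch only: per-subject relation counts from RDF triples.

from collections import Counter, defaultdict
from rdflib import Graph

turtle = """
@prefix ex: <http://example.org/> .
ex:France  ex:capital ex:Paris ;
           ex:borders ex:Spain , ex:Germany .
ex:Germany ex:capital ex:Berlin ;
           ex:borders ex:France .
"""

g = Graph()
g.parse(data=turtle, format="turtle")

patterns = defaultdict(Counter)
for s, p, o in g:
    patterns[str(s)][str(p)] += 1        # count outgoing relation types per subject

for subject, relations in patterns.items():
    print(subject, dict(relations))
# Vectors like these can then feed similarity search, clustering or anomaly detection.
```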




science and technology

Algorithms for the Evaluation of Ontologies for Extended Error Taxonomy and their Application on Large Ontologies

Ontology evaluation is an integral and important part of the ontology development process. Errors in ontologies can be catastrophic for the information systems based on them. In our experiments, existing ontology evaluation systems were unable to detect many errors defined in the error taxonomy, such as circularity errors in class and property hierarchies, common classes and properties in disjoint decompositions, redundancy of subclasses and subproperties, redundancy of disjoint relations, and disjoint knowledge omission. We have formulated efficient algorithms for the evaluation of these and other errors as per the extended error taxonomy. These algorithms have been implemented (as OntEval) and the implementations are used to evaluate well-known ontologies including the Gene Ontology (GO), the WordNet Ontology and OntoSem. The ontologies are indexed using a variant of the previously proposed Ontrel scheme. A number of errors and warnings in these ontologies have been discovered using OntEval. We also report the performance of our implementation, OntEval.
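
As one small example of the error classes involved (not the OntEval algorithms themselves), the sketch below detects circularity errors in a subclass hierarchy with a depth-first search; the hierarchy is invented.

```python
# Illustrative sketch only: circularity detection in a subclass hierarchy.

def find_circularity(subclass_of):
    """subclass_of: dict mapping a class to its direct superclasses.
    Returns classes that (directly or indirectly) subsume themselves."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {c: WHITE for c in subclass_of}
    cyclic = []

    def visit(c):
        colour[c] = GREY
        for parent in subclass_of.get(c, []):
            if colour.get(parent, WHITE) == GREY:
                cyclic.append(parent)          # back edge: circularity error
            elif colour.get(parent, WHITE) == WHITE:
                visit(parent)
        colour[c] = BLACK

    for c in list(subclass_of):
        if colour[c] == WHITE:
            visit(c)
    return cyclic

hierarchy = {"Dog": ["Mammal"], "Mammal": ["Animal"], "Animal": ["Dog"]}  # faulty ontology
print(find_circularity(hierarchy))    # ['Dog'] -- Dog is an ancestor of itself
```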




science and technology

Towards Classification of Web Ontologies for the Emerging Semantic Web

The massive growth in ontology development has opened new research challenges, such as ontology management, search and retrieval, for the entire semantic web community. This has resulted in many recent developments, like OntoKhoj, Swoogle and OntoSearch2, that facilitate the tasks users have to perform. These semantic web portals mainly treat ontologies as plain texts and use traditional text classification algorithms for classifying ontologies into directories and assigning predefined labels, rather than using the semantic knowledge hidden within the ontologies. These approaches suffer from many types of classification problems and a lack of accuracy, especially in the case of overlapping ontologies that share common vocabularies. In this paper, we define the ontology classification problem and categorize it into several sub-problems. We present a new ontological methodology for the classification of web ontologies, which has been guided by the requirements of emerging Semantic Web applications and by the lessons learnt from previous systems. The proposed framework, OntClassifire, is tested on 34 ontologies with a certain degree of domain overlap, and the effectiveness of the ontological mechanism is verified. It benefits the construction, maintenance and expansion of ontology directories on the semantic web, helping to focus crawling and to improve the quality of search for software agents and people. We conclude that the use of context-specific knowledge hidden in the structure of ontologies gives more accurate results for ontology classification.




science and technology

A Semantic Wiki Based on Spatial Hypertext

Spatial Hypertext Wiki (ShyWiki) is a wiki which represents knowledge using notes that are spatially distributed in wiki pages and have visual characteristics such as colour, size, or font type. The use of spatial and visual characteristics in wikis is important to improve human comprehension, creation and organization of knowledge. Another important capability in wikis is to allow machines to process knowledge. Wikis that formally structure knowledge for this purpose are called semantic wikis. This paper describes how ShyWiki can make use of spatial hypertext in order to be a semantic wiki. ShyWiki can represent knowledge at different levels of formality. Users of ShyWiki can annotate the content and represent semantic relations without being experts of semantic web data description languages. The spatial hypertext features make it suitable for users to represent unstructured knowledge and implicit graphic relations among concepts. In addition, semantic web and spatial hypertext features are combined to represent structured knowledge. The semantic web features of ShyWiki improve navigation and publish the wiki knowledge as RDF resources, including the implicit relations that are analyzed using a spatial parser.




science and technology

A Ranking Tool Exploiting Semantic Descriptions for the Comparison of EQF-based Qualifications

Nowadays, one of the main issues discussed at the Community level is the mobility of students and workers across Europe. In recent years, several initiatives have been carried out to address this issue: one of them is the definition of the European Qualification Framework (EQF), a common architecture for the description of qualifications. At the same time, several research activities have investigated how semantic technologies could be exploited for comparing qualifications in the field of human resources acquisition. In this paper, the EQF specifications are taken into account and applied in a practical scenario to develop a ranking algorithm for the comparison of qualifications expressed in terms of knowledge, skill and competence concepts, potentially aimed at supporting European employers during the recruiting phase.




science and technology

Ontology-based User Interface Development: User Experience Elements Pattern

The user experience of any software or website consists of elements from the conceptual to the concrete level. These elements of user experience assist in the design and development of user interfaces. On the other hand, ontologies provide a framework for the computable representation of user interface elements and the underlying data. This paper discusses strategies for introducing ontologies at different user interface layers adapted from the user experience elements. These layers range from abstract levels (e.g. user needs/application objectives) to concrete levels (e.g. the application user interface) in terms of data representation. The proposed ontological framework enables device-independent, semi-automated GUI construction, which we demonstrate with a personal information management example.




science and technology

Ontology-based Competency Management: the Case Study of the Mihajlo Pupin Institute

Semantic-based technologies have been steadily increasing their relevance in recent years in both the research and business worlds. Considering this, the present article discusses the design and implementation of a competency management system for the information and communication technologies domain utilizing the latest Semantic Web tools and technologies, including the D2RQ server, TopBraid Composer, OWL 2, SPARQL, SPARQL Rules and common public vocabularies related to human resources. In particular, the paper discusses the process of building individual and enterprise competence models in the form of an ontology database, as well as different ways of meaningfully searching and retrieving expertise data on the Semantic Web. The ontological knowledge base stores the competences extracted and integrated from structured as well as unstructured sources. Using the illustrative case study of the deployment of such a system in the Human Resources sector at the Mihajlo Pupin Institute, this paper shows an example of new approaches to data integration and information management. The proposed approach extends the functionalities of existing enterprise information systems and offers possibilities for the development of future Internet services. This allows organizations to express their core competences and talents in a standardized, machine-processable and understandable format, and hence facilitates their integration in the European Research Area and beyond.




science and technology

A Comparison of Different Retrieval Strategies Working on Medical Free Texts

Patient information in health care systems mostly consists of textual data, and free text in particular makes up a significant part of it. Information retrieval systems that concentrate on these text types have to deal with the different challenges medical free texts pose in order to achieve acceptable performance. This paper describes the evaluation of four different information retrieval strategies: keyword search, search performed by a medical domain expert, a semantic-based information retrieval tool, and a purely statistical information retrieval method. The different methods are evaluated and compared with respect to their application in medical health care systems.




science and technology

Cloud Computing




science and technology

An Ontology based Agent Generation for Information Retrieval on Cloud Environment

Retrieving information or discovering knowledge from a well-organized data center generally requires familiarity with its schema, structure, and architecture, which runs against the inherent concept and characteristics of a cloud environment. An effective approach to retrieve desired information or extract useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for information retrieval in a flexible, transparent, and easy way on a cloud environment. When a user submits a flat-text request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF), formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We also design and implement a prototype to integrate these agents and show an interesting example to demonstrate the feasibility of the architecture.




science and technology

ORPMS: An Ontology-based Real-time Project Monitoring System in the Cloud

Project monitoring plays a crucial role in project management and is part of every stage of a project's life-cycle. Nevertheless, along with the increasing share of outsourcing in many companies' strategic plans, project monitoring has been challenged by geographically dispersed project teams and culturally diverse team members. Furthermore, because of the lack of a uniform standard, data exchange between various project monitoring tools becomes practically impossible. These factors together lead to ambiguity in project monitoring processes. Ontology is a form of knowledge representation whose purpose is disambiguation. Consequently, in this paper, we propose the framework of an ontology-based real-time project monitoring system (ORPMS) in order to solve, by means of ontologies, the ambiguity issue in project monitoring processes caused by multiple factors. The framework incorporates a series of ontologies for knowledge capture, storage, sharing and term disambiguation in project monitoring processes, and a series of metrics to assist the management of project organizations in monitoring projects better. We propose to configure the ORPMS framework in a cloud environment, aiming to provide the project monitoring service to geographically distributed and dynamic project members with great flexibility, scalability and security. A case study is conducted on a prototype of the ORPMS in order to evaluate the framework.




science and technology

Cloud Warehousing

Data warehouses integrate and aggregate data from various sources to support decision making within an enterprise. Usually, it is assumed that data are extracted from the operational databases used by the enterprise. Cloud warehousing relaxes this view by permitting data sources to be located anywhere on the world-wide web in a so-called "cloud", understood as a registry of services. Thus, we need a model of data-intensive web services, for which we adopt the view of the recently introduced model of abstract state services (AS2s). An AS2 combines a hidden database layer with an operation-equipped view layer, and thus provides an abstraction of web services that can be made available for use by other systems. In this paper we extend this model to an abstract model of clouds by means of an ontology for service description. The ontology can be specified using description logics, where the ABox contains the set of services and the TBox can be queried to find suitable services. Consequently, AS2 composition can be used for cloud warehousing.




science and technology

Cooperation as a Service in VANETs

Vehicular networks, including Vehicular Ad hoc Networks (VANETs) and Vehicular Sensor Networks (VSNs), stimulate a brand new variety of services, ranging from driver safety services, traffic information and warnings regarding traffic jams and accidents, to weather and road conditions, parking availability, and advertisement. 3G networks and sophisticated Intelligent Transportation Systems (ITS), including costly roadside base stations, can indeed be used to offer such services, but they come at a cost, at both the network and hardware levels. In this paper we introduce Cooperation as a Service (CaaS): a novel architecture that provides a set of services for free and without any additional infrastructure, by taking advantage of vehicle-to-vehicle communications. CaaS uses a hybrid publish/subscribe mechanism in which the driver (or subscriber) expresses his interests regarding a service (or a set of services), and cars having subscribed to the same service cooperate to provide the subscriber with the necessary information regarding the service he subscribed to, by publishing this information in the network. CaaS structures the network into clusters, and uses Content Based Routing (CBR) for intra-cluster communications and geographic routing for inter-cluster communications.




science and technology

An efficient edge swap mechanism for enhancement of robustness in scale-free networks in healthcare systems

This paper presents a sequential edge swap (SQES) mechanism to design a robust network for a healthcare system, utilising the energy and communication range of nodes. Two operations, the sequential degree difference operation (SQDDO) and the sequential angle sum operation (SQASO), are performed to enhance the robustness of the network. With equivalent node degrees from the network's centre to its periphery, these operations build a robust network structure. Disaster attacks that have a substantial impact on the network are carried out using the network information. To identify the link between malicious and disaster attacks, the Pearson coefficient is employed. SQES creates a robust network structure as a single-objective optimisation solution by changing the connections of nodes based on the positive correlation between these attacks. Simulation results show that SQES beats current methods: compared to the hill-climbing algorithm, simulated annealing, and ROSE, the robustness of SQES is improved by roughly 26%, 19% and 12%, respectively.
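
For reference, a minimal sketch of the Pearson coefficient computation on invented attack-impact sequences; the paper applies the coefficient to relate malicious and disaster attacks.

```python
# Illustrative sketch only: Pearson correlation between two attack impact series
# (e.g., network robustness measured after each removal step). The data is invented.

from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

malicious = [0.91, 0.74, 0.58, 0.44, 0.31]   # robustness after each removal step
disaster  = [0.88, 0.70, 0.55, 0.40, 0.33]
print(round(pearson(malicious, disaster), 3))  # close to 1 => strong positive link
```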




science and technology

A feature-based model selection approach using web traffic for tourism data

The increased volume of accessible internet data creates an opportunity for researchers and practitioners to improve time series forecasting for many indicators. In our study, we assess the value of web traffic data in forecasting the number of short-term visitors travelling to Australia. We propose a feature-based model selection framework which combines random forests with a feature ranking process to select the best performing model using a limited and informative number of features extracted from web traffic data. The data were obtained for several tourist attraction and tourism information websites that potential tourists might visit to find out more about their destinations. The results of the random forest models were evaluated over 3- and 12-month forecasting horizons. Features from web traffic data appear in the final model for short-term forecasting. Further, the model with the additional data performs better on unseen data after the COVID-19 pandemic. Our study shows that web traffic data adds value to tourism forecasting and can assist tourist destination site managers and decision makers in forming timely decisions to prepare for changes in tourism demand.




science and technology

An architectural view of VANETs cloud: its models, services, applications and challenges

This research explores vehicular ad hoc networks (VANETs) and their extensive applications, such as enhancing traffic efficiency, infotainment, and passenger safety. Despite significant study, widespread deployment of VANETs has been hindered by security and privacy concerns. Challenges in implementation, including scalability, flexibility, poor connectivity, and insufficient intelligence, have further complicated VANETs. This study proposes leveraging cloud computing to address these challenges, marking a paradigm shift. Cloud computing, recognised for its cost-efficiency and virtualisation, is integrated with VANETs. The paper details the nomenclature, architecture, models, services, applications, and challenges of VANET-based cloud computing. Three architectures for VANET clouds - vehicular clouds (VCs), vehicles utilising clouds (VuCs), and hybrid vehicular clouds (HVCs) - are discussed in detail. The research provides an overview, delves into related work, explores VANET cloud computing's architectural frameworks, models, and cloud services, and concludes with insights into future work.




science and technology

DeFog: dynamic micro-service placement in hybrid cloud-fog-edge infrastructures

DeFog is an innovative microservice placement and load balancing approach for distributed multi-cluster cloud-fog-edge architectures that aims to minimise application response times. The architecture is modelled as a three-layered hierarchy. Each layer consists of one or more clusters of machines, with resource constraints increasing towards the lower layers. Applications are modelled as service-oriented architectures (SOA) comprising multiple interconnected microservices. Since many applications can run simultaneously and the resources of the edge and the fog are limited, choosing which services to run on the edge or the fog is the problem this work deals with. DeFog focuses on dynamic (i.e., adaptive) decentralised service placement within each cluster with zero downtime, eliminating the need for coordination between clusters. To assess the effectiveness of DeFog, two realistic microservice-based applications are deployed, and several placement policies are tested to select the one that reduces application latency. Least frequently used (LFU) is the reference service placement strategy. The experimental results reveal that a replacement policy that uses individual microservice latency as the crucial factor affecting service placement outperforms LFU by at least 10% in application response time.
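
A toy sketch of the policy comparison, under the assumption that an eviction decision reduces to a single score per microservice: LFU evicts the least frequently used service, while a latency-aware policy evicts the service whose cloud fallback adds the least delay. A real placement engine also has to respect dependencies, capacities and live migration; the services and numbers below are invented.

```python
# Illustrative sketch only: two toy eviction policies for a resource-limited edge cluster.

def evict_lfu(placed, usage_count, _latency_penalty):
    return min(placed, key=lambda s: usage_count[s])

def evict_lowest_latency_impact(placed, _usage_count, latency_penalty):
    # Keep on the edge the services whose removal would hurt response time most.
    return min(placed, key=lambda s: latency_penalty[s])

placed = {"auth", "catalog", "thumbnails"}
usage_count = {"auth": 120, "catalog": 45, "thumbnails": 80}
latency_penalty = {"auth": 15, "catalog": 90, "thumbnails": 40}   # extra ms if served from the cloud

print(evict_lfu(placed, usage_count, latency_penalty))                     # 'catalog'
print(evict_lowest_latency_impact(placed, usage_count, latency_penalty))   # 'auth'
```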




science and technology

Smart and adaptive website navigation recommendations based on reinforcement learning

Improving website structures is a main task of a website designer. In recent years, numerous web engineering researchers have investigated navigation recommendation systems. Page recommendation systems are critical for mobile website navigation. Accordingly, we propose a smart and adaptive navigation recommendation system based on reinforcement learning. In this system, user navigation history is used as the input to the reinforcement learning model. The model calculates a surf value for each page of the website; this value is used to rank the pages. On the basis of this ranking, the website structure is modified to shorten the user navigation path length. Experiments were conducted to evaluate the performance of the proposed system. The results revealed that user navigation paths could be shortened by up to 50% with training on 12 months of data, indicating that users could more easily find a target web page with the help of the proposed adaptive navigation recommendation system.
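
A minimal sketch of how a surf value could be learned from navigation sessions with a temporal-difference style update; the paper's actual reward and update scheme may differ, and the sessions, pages and constants below are invented.

```python
# Illustrative sketch only: assigning each page a 'surf value' from navigation
# sessions and ranking pages by it.

from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9          # learning rate and discount factor (assumed values)
value = defaultdict(float)

sessions = [                      # each session ends on the page the user wanted
    ["home", "products", "phones", "phone_x"],
    ["home", "search", "phone_x"],
    ["home", "products", "phones", "phone_x"],
]

for path in sessions:
    for i, page in enumerate(path):
        reward = 1.0 if i == len(path) - 1 else 0.0        # reaching the target pays off
        next_value = value[path[i + 1]] if i + 1 < len(path) else 0.0
        value[page] += ALPHA * (reward + GAMMA * next_value - value[page])

ranking = sorted(value, key=value.get, reverse=True)
print(ranking)    # highly ranked pages are candidates for shortcuts in the site structure
```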




science and technology

International Journal of Web and Grid Services




science and technology

Transactions on Data Privacy 12:2 (2019)

Transactions on Data Privacy, Volume 12 Issue 2 (2019) has been published.




science and technology

Transactions on Data Privacy 12:3 (2019)

Transactions on Data Privacy, Volume 12 Issue 3 (2019) has been published.




science and technology

Transactions on Data Privacy 13:1 (2020)

Transactions on Data Privacy, Volume 13 Issue 1 (2020) has been published.




science and technology

Transactions on Data Privacy 13:2 (2020)

Transactions on Data Privacy, Volume 13 Issue 2 (2020) has been published.




science and technology

Transactions on Data Privacy 13:3 (2020)

Transactions on Data Privacy, Volume 13 Issue 3 (2020) has been published.




science and technology

Transactions on Data Privacy 14:1 (2021)

Transactions on Data Privacy, Volume 14 Issue 1 (2021) has been published.




science and technology

Transactions on Data Privacy 14:2 (2021)

Transactions on Data Privacy, Volume 14 Issue 2 (2021) has been published.




science and technology

Transactions on Data Privacy 14:3 (2021)

Transactions on Data Privacy, Volume 14 Issue 3 (2021) has been published.




science and technology

Transactions on Data Privacy 15:1 (2022)

Transactions on Data Privacy, Volume 15 Issue 1 (2022) has been published.




science and technology

Transactions on Data Privacy 15:2 (2022)

Transactions on Data Privacy, Volume 15 Issue 2 (2022) has been published.




science and technology

Transactions on Data Privacy 15:3 (2022)

Transactions on Data Privacy, Volume 15 Issue 3 (2022) has been published.




science and technology

Transactions on Data Privacy 16:1 (2023)

Transactions on Data Privacy, Volume 16 Issue 1 (2023) has been published.




science and technology

Transactions on Data Privacy 16:3 (2023)

Transactions on Data Privacy, Volume 16 Issue 3 (2023) has been published.




science and technology

Transactions on Data Privacy 17:1 (2024)

Transactions on Data Privacy, Volume 17 Issue 1 (2024) has been published.




science and technology

Transactions on Data Privacy 17:2 (2024)

Transactions on Data Privacy, Volume 17 Issue 2 (2024) has been published.




science and technology

Transactions on Data Privacy 17:3 (2024)

Transactions on Data Privacy, Volume 17 Issue 3 (2024) has been published.




science and technology

Volume 15




science and technology

Issue 15:1 (1-169)




science and technology

Big Brother is Watching But He Doesn’t Understand: Why Forced Filtering Technology on the Internet Isn’t the Solution to the Modern Copyright Dilemma

by Mitchell Longan [1]

Introduction

The European Parliament is currently considering a proposal to address problems of piracy and other forms of copyright infringement associated with the digital world. [2] Article 13 of the proposed Directive on Copyright in the Digital Single