
IDEA: A Framework for a Knowledge-based Enterprise 2.0

This paper examines the convergence of knowledge management and Enterprise 2.0 and describes the possibilities for an over-arching exchange and transfer of knowledge in Enterprise 2.0. This is underlined by the concrete example of T-Systems Multimedia Solutions (MMS), which describes the establishment of a new enterprise division, "IG eHealth". This division is typified by the decentralised development of common ideas, by collaboration, and by the assistance that Enterprise 2.0 tools provide for performing responsibilities. Taking this archetypal example and the derived abstraction of the problem of knowledge-worker collaboration as the basis, a regulatory framework is developed for knowledge management to serve as a template for the systemisation and definition of specific Enterprise 2.0 activities. The paper concludes by stating success factors and supporting Enterprise 2.0 activities that facilitate the establishment of a practical knowledge management system for the optimisation of knowledge transfer.





Enterprise Microblogging for Advanced Knowledge Sharing: The References@BT Case Study

Siemens is well known for its ambitious efforts in knowledge management, providing a series of innovative tools and applications within its intranet. References@BT is one such web-based application, currently with more than 7,300 registered users from more than 70 countries. Its goal is to support the sharing of knowledge, experiences and best practices globally within the Building Technologies division. Launched in 2005, References@BT features structured knowledge references, discussion forums, and a basic social networking service. In response to user demand, a new microblogging service, tightly integrated into References@BT, was implemented in March 2009. More than 500 authors have created around 2,600 microblog postings since then. Following a brief introduction to the community platform References@BT, we comprehensively describe the motivation, experiences and advantages for an organization in providing internal microblogging services. We provide detailed microblog usage statistics, analyzing the top ten users by postings and followers as well as the top ten topics. In doing so, we aim to shed light on microblogging usage and adoption within a globally distributed organization.





Leveraging Web 2.0 in New Product Development: Lessons Learned from a Cross-company Study

The paper explores the application of Web 2.0 technologies to support product development efforts in a global, virtual and cross-functional setting. It analyses the dichotomy between the prevailing hierarchical structure of CAD/PLM/PDM systems and the principles of the Social Web in light of emerging product development trends. Further, it introduces the concept of Engineering 2.0, intended as a more bottom-up and lightweight knowledge sharing approach to support early-stage design decisions within virtual and cross-functional product development teams. The lessons learned from a cross-company study highlight how to further develop blogs, wikis, forums and tags for the benefit of new product development teams, highlighting opportunities, challenges and no-go areas.





On the Construction of Efficiently Navigable Tag Clouds Using Knowledge from Structured Web Content

In this paper we present an approach to improving the navigability of hierarchically structured Web content. The approach is based on the integration of a tagging module and the adoption of tag clouds as a navigational aid for such content. The main idea of this approach is to apply tagging to better highlight cross-references between information items across the hierarchy. Although in principle tag clouds have the potential to support efficient navigation in tagging systems, recent research has identified a number of limitations. In particular, applying tag clouds within the pragmatic limits of a typical user interface leads to poor navigational performance, as tag clouds are vulnerable to a so-called pagination effect. In this paper, a solution to the pagination problem is discussed, implemented as part of an Austrian online encyclopedia called Austria-Forum, and analyzed. In addition, a simulation-based evaluation of the new algorithm has been conducted. The first evaluation results are quite promising, as efficient navigational properties are restored.
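The paper does not spell out its selection algorithm, but the underlying idea of keeping hierarchy items reachable from a size-limited tag cloud can be sketched as a greedy coverage heuristic. The function name, data shapes, and greedy criterion below are illustrative assumptions, not the Austria-Forum implementation:

```python
def build_tag_cloud(item_tags, k):
    """Greedy coverage-based tag selection: pick at most k tags so that as
    many items as possible stay reachable from the displayed cloud,
    mitigating the pagination effect of oversized clouds."""
    uncovered = set(item_tags)          # items not yet reachable from the cloud
    tag_items = {}                      # tag -> set of items it links to
    for item, tags in item_tags.items():
        for t in tags:
            tag_items.setdefault(t, set()).add(item)
    cloud = []
    for _ in range(k):
        if not uncovered:
            break
        # choose the tag that covers the most still-unreachable items
        best = max(tag_items, key=lambda t: len(tag_items[t] & uncovered))
        cloud.append(best)
        uncovered -= tag_items[best]
    return cloud
```

For example, with three items tagged `{"a": ["x", "y"], "b": ["y"], "c": ["z"]}` and a budget of two, the heuristic keeps "y" (covering two items) and "z".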





Modeling Quality Attributes with Aspect-Oriented Architectural Templates

The quality attributes of a software system are, to a large extent, determined by the decisions taken early in the development process. Best practices in software engineering recommend the identification of important quality attributes during the requirements elicitation process, and the specification of software architectures to satisfy these requirements. Over the years, the software engineering community has studied the relationship between quality attributes and the use of particular architectural styles and patterns. In this paper we study the relationship between quality attributes and Aspect-Oriented Software Architectures, which apply the principles of Aspect-Oriented Software Development (AOSD) at the architectural level. AOSD focuses on identifying, modeling and composing crosscutting concerns, i.e. concerns that are tangled and/or scattered with other concerns of the application. In this paper we propose to use AO-ADL, an aspect-oriented architectural description language, to specify quality attributes by means of parameterizable, and thus reusable, architectural patterns. We particularly focus on quality attributes that: (1) have major implications on software functionality, requiring the incorporation of explicit functionality at the architectural level; (2) are complex enough to be modeled by a set of related concerns and the compositions among them; and (3) crosscut domain-specific functionality and are related to more than one component in the architecture. We illustrate our approach for usability, a critical quality attribute that satisfies the previous constraints and that requires special attention at the requirements and architecture design stages.





Bio-Inspired Mechanisms for Coordinating Multiple Instances of a Service Feature in Dynamic Software Product Lines

One of the challenges in Dynamic Software Product Lines (DSPLs) is how to support the coordination of multiple instances of a service feature. In particular, there is a need for a decentralized decision-making capability that can seamlessly integrate new instances of a service feature without an omniscient central controller. Because of the need for decentralization, we are investigating principles of self-organization in biological organisms. As an initial proof of concept, we have applied three bio-inspired techniques to a simple smart home scenario: quorum-sensing-based service activation, a firefly algorithm for synchronization, and a gossiping (epidemic) protocol for information dissemination. In this paper, we first explain why we selected these techniques, using a set of motivating smart home scenarios, and then describe our experiences in adopting them.
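Of the three techniques, the gossiping (epidemic) protocol is the easiest to convey compactly. The toy simulation below illustrates the push-style variant only; the function name, fanout value, and round-based model are assumptions for illustration, not the paper's smart home implementation:

```python
import random

def gossip_rounds(nodes, seed_node, fanout=2, rng=None):
    """Simulate push-style epidemic dissemination: each round, every node
    that already holds the update pushes it to `fanout` randomly chosen
    peers. Returns the number of rounds until all nodes are informed."""
    rng = rng or random.Random(0)       # seeded for reproducibility
    informed = {seed_node}
    rounds = 0
    while len(informed) < len(nodes):
        rounds += 1
        for _ in list(informed):
            for peer in rng.sample(nodes, min(fanout, len(nodes))):
                informed.add(peer)
    return rounds
```

The appeal for DSPLs is exactly what the abstract argues: no node needs global knowledge, so new service-feature instances can join by simply participating in the exchange.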





QoS-based Approach for Dynamic Web Service Composition

Web Services have become a standard for the integration of systems in distributed environments. By using a set of open interoperability standards, they allow computer-to-computer interaction regardless of the programming languages and operating systems used. Semantic Web Services, in turn, make use of ontologies to describe their functionality in a more structured manner, allowing computers to reason about the information they require and provide. Such a description also allows the dynamic composition of several Web Services when a single one cannot provide the desired functionality. There are scenarios, however, in which functional correctness alone is not enough to fulfill the user requirements, and a minimum level of quality should be guaranteed by the providers. In this context, this work presents an approach for dynamic Web Service composition that takes the overall quality of the composition into account. The proposed approach relies on a heuristic to perform the composition efficiently. To show the feasibility of the proposed approach, a Web Service composition application prototype was developed and evaluated on public test sets, alongside another approach that does not consider quality in the composition process. The results show that the proposed approach generally finds compositions of higher quality within a reasonable processing time.
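A minimal sketch of the QoS-aware idea: score each candidate service by a weighted sum of its quality attributes and pick the best candidate per workflow task. This greedy local selection is an assumed simplification; the paper's actual heuristic optimizes the composition's overall quality, and the attribute names and weights here are invented for illustration:

```python
def compose(tasks, candidates, weights):
    """Greedy QoS-aware composition: for each task in the abstract workflow,
    pick the candidate service with the best weighted QoS score.
    `candidates[task]` maps service name -> QoS attribute dict; attributes
    where lower is better (e.g. response time) are entered negated."""
    def score(qos):
        return sum(w * qos[attr] for attr, w in weights.items())
    return {task: max(candidates[task],
                      key=lambda s: score(candidates[task][s]))
            for task in tasks}
```

Swapping the greedy per-task choice for a global search over whole compositions is what separates this sketch from a real QoS optimizer, but the scoring function is the same building block.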





An Aspect-Oriented Framework for Weaving Domain-Specific Concerns into Component-Based Systems

Software components are used in various application domains, and many component models and frameworks have been proposed to fulfill domain-specific requirements. The general trend followed by these approaches is to provide ad-hoc models and tools for capturing these requirements and for implementing their support within dedicated runtime platforms, limited to the features of the targeted domain. The challenge is then to propose more flexible solutions, where component reuse is domain-agnostic. In this article, we present a framework supporting the compositional construction and development of applications that must meet various extra-functional/domain-specific requirements. The key points of our contribution are: i) we target the development of component-oriented applications where extra-functional requirements are expressed as annotations on the units of composition in the application architecture; ii) these annotations are implemented as open and extensible component-based containers, achieving full separation of functional and extra-functional concerns; iii) finally, the full machinery is implemented using the Aspect-Oriented Programming paradigm. We validate our approach with two case studies: the first is related to real-time and embedded applications, while the





Context-Aware Composition and Adaptation based on Model Transformation

Using pre-existing software components (COTS) to develop software systems requires the composition and adaptation of the component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality of service and semantic). In this article, we define an approach which supports the composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle the quality-of-service and semantic levels, respectively, though both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study, where software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle the composition and adaptation. Then, we generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component with the purpose of interacting with all the WF components of the system, thereby avoiding mismatch problems.





Least Slack Time Rate first: an Efficient Scheduling Algorithm for Pervasive Computing Environment

Real-time systems such as pervasive computing systems have to complete task execution within a predetermined time while ensuring that the execution results are logically correct. Such systems require intelligent scheduling methods that can promptly and adequately distribute the given tasks to one or more processors. In this paper, we propose LSTR (Least Slack Time Rate first), a new and simple scheduling algorithm for multi-processor environments, and demonstrate its efficient performance through various tests.
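The abstract names the policy but not its formula. One plausible reading, sketched below, normalises a task's slack by the time remaining until its deadline and dispatches the task with the smallest rate; the exact definition in the paper may differ, and the tuple layout is an assumption for illustration:

```python
def lstr_pick(tasks, now):
    """Pick the next task under a Least-Slack-Time-Rate-first policy.
    Each task is (name, absolute_deadline, remaining_exec_time).
    Slack rate = slack / time-to-deadline (assumed definition): the task
    closest to missing its deadline, proportionally, runs first."""
    def slack_rate(task):
        _, deadline, remaining = task
        time_left = deadline - now
        return (time_left - remaining) / time_left
    return min(tasks, key=slack_rate)[0]
```

Compared with plain least-slack-time-first, the rate form distinguishes between a task with 2 units of slack out of 4 and one with 2 units of slack out of 100, prioritising the former.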





Service Oriented Multimedia Delivery System in Pervasive Environments

Service composition is an effective approach for large-scale multimedia delivery. In previous work, a user requirement is represented as one fixed functional path composed of several functional components in a certain order. In practice, several functional paths (delivering multimedia data at different quality levels, e.g., image resolution or frame rate) may satisfy one request. Moreover, due to the diversity of devices and connections in pervasive environments, the system should choose a suitable media-quality delivery path according to context, instead of one fixed functional path. This paper presents an in-depth study of the multimedia delivery problem and proposes an on-line algorithm, LDPath, and an off-line centralized algorithm, LD/RPath. LDPath aims to deliver multimedia data to the end user with the lowest delay by choosing services to build delivery paths hop-by-hop, which suits unstable open environments. LD/RPath is developed for relatively stable environments and generates delivery paths according to a trade-off between delay and reliability metrics, because service reliability is also an important factor in such scenarios. Experimental results show that both algorithms perform well with low overhead to the system.
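The lowest-delay objective can be illustrated with a standard shortest-path search over the service graph. This centralised Dijkstra-style sketch is a stand-in for intuition only; LDPath itself builds the path hop-by-hop on-line, and the graph encoding below is an assumed simplification:

```python
import heapq

def lowest_delay_path(graph, src, dst):
    """Find the delivery path with the lowest total delay.
    `graph[u]` maps each neighbouring service to the link delay."""
    queue = [(0, src, [src])]           # (accumulated delay, node, path)
    seen = set()
    while queue:
        delay, node, path = heapq.heappop(queue)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, d in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (delay + d, nbr, path + [nbr]))
    return None                         # destination unreachable
```

LD/RPath's delay/reliability trade-off would replace the scalar `delay + d` with a combined cost of both metrics.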





Knowledge Extraction from RDF Data with Activation Patterns

RDF data can be analyzed with query languages such as SPARQL. However, by their nature these query languages do not support fuzzy queries that would allow us to extract a broad range of additional information. In this article we present a new method that transforms the information represented by subject-relation-object triples within RDF data into Activation Patterns. These patterns form a common model that is the basis for a number of sophisticated analysis methods such as semantic relation analysis, semantic search queries, unsupervised clustering, supervised learning and anomaly detection. In this article, we explain the Activation Patterns concept and apply it to an RDF representation of the well-known CIA World Factbook.
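A toy spreading-activation pass over triples conveys the flavour of turning graph structure into a numeric pattern. The decay factor, step count, and undirected propagation below are illustrative assumptions, not the paper's actual transformation:

```python
def activation_pattern(triples, seed, decay=0.5, steps=2):
    """Spread activation from `seed` along subject-relation-object triples
    with a per-hop decay; the resulting node->activation map is a simple
    stand-in for an 'Activation Pattern'."""
    neighbours = {}
    for s, _, o in triples:             # treat relations as undirected links
        neighbours.setdefault(s, []).append(o)
        neighbours.setdefault(o, []).append(s)
    pattern = {seed: 1.0}
    frontier = {seed: 1.0}
    for _ in range(steps):
        nxt = {}
        for node, act in frontier.items():
            for nbr in neighbours.get(node, []):
                spread = act * decay
                if spread > pattern.get(nbr, 0.0):
                    pattern[nbr] = spread
                    nxt[nbr] = spread
        frontier = nxt
    return pattern
```

Because two resources with similar neighbourhoods yield numerically similar patterns, such vectors can feed the clustering, learning, and anomaly detection methods the abstract lists.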





Algorithms for the Evaluation of Ontologies for Extended Error Taxonomy and their Application on Large Ontologies

Ontology evaluation is an integral and important part of the ontology development process. Errors in ontologies can be catastrophic for the information systems based on them. In our experiments, existing ontology evaluation systems were unable to detect many errors defined in the error taxonomy, such as circularity errors in class and property hierarchies, common classes and properties in disjoint decompositions, redundancy of subclasses and subproperties, redundancy of disjoint relations, and disjoint knowledge omission. We have formulated efficient algorithms for the evaluation of these and other errors as per the extended error taxonomy. These algorithms are implemented (named OntEval), and the implementations are used to evaluate well-known ontologies including the Gene Ontology (GO), the WordNet Ontology and OntoSem. The ontologies are indexed using a variant of the previously proposed scheme OntRel. A number of errors and warnings in these ontologies have been discovered using OntEval. We also report the performance of our implementation.
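Of the listed error classes, circularity in a class hierarchy is the most self-contained to sketch: a class is in error if it can reach itself via subClassOf edges. The reachability check below is a generic illustration, not OntEval's algorithm, and the input encoding is an assumption:

```python
def circularity_errors(subclass_of):
    """Return classes involved in circularity errors, i.e. any class that
    can reach itself by following subClassOf edges.
    `subclass_of` maps each class to its list of direct superclasses."""
    def reaches(start, node, visited):
        for parent in subclass_of.get(node, []):
            if parent == start:
                return True
            if parent not in visited:
                visited.add(parent)
                if reaches(start, parent, visited):
                    return True
        return False
    return sorted(c for c in subclass_of if reaches(c, c, set()))
```

The same traversal skeleton, applied to a property hierarchy instead of a class hierarchy, detects the property-circularity case the abstract mentions.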





A Semantic Wiki Based on Spatial Hypertext

Spatial Hypertext Wiki (ShyWiki) is a wiki which represents knowledge using notes that are spatially distributed in wiki pages and have visual characteristics such as colour, size, or font type. The use of spatial and visual characteristics in wikis is important to improve human comprehension, creation and organization of knowledge. Another important capability in wikis is to allow machines to process knowledge. Wikis that formally structure knowledge for this purpose are called semantic wikis. This paper describes how ShyWiki can make use of spatial hypertext in order to be a semantic wiki. ShyWiki can represent knowledge at different levels of formality. Users of ShyWiki can annotate the content and represent semantic relations without being experts of semantic web data description languages. The spatial hypertext features make it suitable for users to represent unstructured knowledge and implicit graphic relations among concepts. In addition, semantic web and spatial hypertext features are combined to represent structured knowledge. The semantic web features of ShyWiki improve navigation and publish the wiki knowledge as RDF resources, including the implicit relations that are analyzed using a spatial parser.





A Ranking Tool Exploiting Semantic Descriptions for the Comparison of EQF-based Qualifications

Nowadays, one of the main issues discussed at the Community level is the mobility of students and workers across Europe. In recent years, several initiatives have been carried out to address this issue: one of them is the definition of the European Qualification Framework (EQF), a common architecture for the description of qualifications. At the same time, several research activities have been established with the aim of finding out how semantic technologies could be exploited for comparing qualifications in the field of human resources acquisition. In this paper, the EQF specifications are taken into account and applied in a practical scenario to develop a ranking algorithm for the comparison of qualifications expressed in terms of knowledge, skill and competence concepts, potentially aimed at supporting European employers during the recruiting phase.
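At its simplest, ranking qualifications against a required profile reduces to scoring concept overlap. The coverage score below is a deliberately naive stand-in for the paper's semantic comparison (which can exploit ontology relations between concepts), and all names are illustrative:

```python
def rank_qualifications(required, qualifications):
    """Rank EQF-style qualifications against a required profile of
    knowledge/skill/competence concepts by simple concept coverage.
    `qualifications` maps a qualification id to its concept list."""
    req = set(required)
    def coverage(concepts):
        # fraction of required concepts the qualification covers
        return len(req & set(concepts)) / len(req)
    return sorted(qualifications,
                  key=lambda q: coverage(qualifications[q]),
                  reverse=True)
```

A semantic version would additionally award partial credit when a qualification holds a concept that is a sub- or super-concept of a required one.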





Ontology-based User Interface Development: User Experience Elements Pattern

The user experience of any software or website consists of elements ranging from the conceptual to the concrete level. These elements of user experience assist in the design and development of user interfaces. Ontologies, in turn, provide a framework for the computable representation of user interface elements and their underlying data. This paper discusses strategies for introducing ontologies at the different user interface layers adapted from the user experience elements. These layers range from abstract levels (e.g. user needs/application objectives) to concrete levels (e.g. the application user interface) in terms of data representation. The proposed ontological framework enables device-independent, semi-automated GUI construction, which we demonstrate with a personal information management example.





Ontology-based Competency Management: the Case Study of the Mihajlo Pupin Institute

Semantic-based technologies have been steadily increasing in relevance in recent years in both the research and business worlds. With this in mind, the present article discusses the design and implementation of a competency management system in the information and communication technologies domain utilizing the latest Semantic Web tools and technologies, including the D2RQ server, TopBraid Composer, OWL 2, SPARQL, SPARQL Rules and common public human-resources vocabularies. In particular, the paper discusses the process of building individual and enterprise competence models in the form of an ontology database, as well as different ways of meaningfully searching and retrieving expertise data on the Semantic Web. The ontological knowledge base aims at storing the competences extracted and integrated from structured as well as unstructured sources. Using the illustrative case study of the deployment of such a system in the Human Resources sector at the Mihajlo Pupin Institute, this paper shows an example of new approaches to data integration and information management. The proposed approach extends the functionalities of existing enterprise information systems and offers possibilities for the development of future Internet services. This allows organizations to express their core competences and talents in a standardized, machine-processable and understandable format, and hence facilitates their integration in the European Research Area and beyond.





A Comparison of Different Retrieval Strategies Working on Medical Free Texts

Patient information in health care systems mostly consists of textual data, and free text in particular makes up a significant amount of it. Information retrieval systems that concentrate on these text types have to deal with the particular challenges medical free texts pose in order to achieve acceptable performance. This paper describes the evaluation of four different information retrieval strategies: keyword search, search performed by a medical domain expert, a semantics-based information retrieval tool, and a purely statistical information retrieval method. The different methods are evaluated and compared with respect to their applicability in medical health care systems.





An Ontology based Agent Generation for Information Retrieval on Cloud Environment

Retrieving information or discovering knowledge from a well-organized data center generally requires familiarity with its schema, structure, and architecture, which runs against the inherent concept and characteristics of the cloud environment. An effective approach to retrieving desired information or extracting useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for information retrieval on a cloud environment in a flexible, transparent, and easy way. When a user submits a flat-text request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF) that is formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We also design and implement a prototype that integrates these agents and show an interesting example to demonstrate the feasibility of the architecture.





ORPMS: An Ontology-based Real-time Project Monitoring System in the Cloud

Project monitoring plays a crucial role in project management and is part of every stage of a project's life-cycle. Nevertheless, along with the increasing share of outsourcing in many companies' strategic plans, project monitoring has been challenged by geographically dispersed project teams and culturally diverse team members. Furthermore, because of the lack of a uniform standard, data exchange between various project monitoring software packages becomes nearly impossible. These factors together lead to the issue of ambiguity in project monitoring processes. Ontology is a form of knowledge representation whose purpose is disambiguation. Consequently, in this paper, we propose the framework of an ontology-based real-time project monitoring system (ORPMS) in order to solve, by means of ontologies, the ambiguity issue in project monitoring processes caused by multiple factors. The framework incorporates a series of ontologies for knowledge capture, storage, sharing and term disambiguation in project monitoring processes, and a series of metrics for helping the management of project organizations to better monitor projects. We propose to deploy the ORPMS framework in a cloud environment, aiming at providing the project monitoring service to geographically distributed and dynamic project members with great flexibility, scalability and security. A case study is conducted on a prototype of the ORPMS in order to evaluate the framework.





An efficient edge swap mechanism for enhancement of robustness in scale-free networks in healthcare systems

This paper presents a sequential edge swap (SQES) mechanism to design a robust network for a healthcare system, utilising the energy and communication range of nodes. Two operations, the sequential degree difference operation (SQDDO) and the sequential angle sum operation (SQASO), are performed to enhance the robustness of the network. By equalising node degrees from the network's centre to its periphery, these operations build a robust network structure. Disaster attacks that have a substantial impact on the network are carried out using the network information. To identify a link between malicious and disaster attacks, the Pearson coefficient is employed. SQES creates a robust network structure as a single-objective optimisation solution by changing the connections of nodes based on the positive correlation between these attacks. Simulation results show that SQES beats the current methods: compared to the hill-climbing algorithm, simulated annealing, and ROSE, the robustness of SQES is improved by roughly 26%, 19% and 12%, respectively.
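The Pearson coefficient used to relate the impact of malicious and disaster attacks is the standard statistic, computable in a few lines. This is the textbook formula, not code from the paper; the attack-impact series passed in would come from the simulation:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. per-node damage under malicious vs. disaster attacks."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A coefficient near +1 between the two attack-impact series is what justifies driving the edge swaps from a single (positively correlated) objective.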





A feature-based model selection approach using web traffic for tourism data

The increased volume of accessible internet data creates an opportunity for researchers and practitioners to improve time series forecasting for many indicators. In our study, we assess the value of web traffic data in forecasting the number of short-term visitors travelling to Australia. We propose a feature-based model selection framework which combines random forest with a feature ranking process to select the best performing model using a limited number of informative features extracted from web traffic data. The data was obtained for several tourist attraction and tourism information websites that potential tourists might visit to find out more about their destinations. The random forest models were evaluated over 3- and 12-month forecasting horizons. Features from web traffic data appear in the final model for short-term forecasting. Further, the model with additional data performs better on unseen data after the COVID-19 pandemic. Our study shows that web traffic data adds value to tourism forecasting and can assist tourist destination site managers and decision makers in forming timely decisions to prepare for changes in tourism demand.
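The feature ranking step can be illustrated without the random forest machinery: score each candidate web-traffic series against the target and keep the top k. Ranking by absolute correlation, as below, is an assumed lightweight stand-in for the paper's forest-based importance ranking:

```python
def select_features(features, target, k):
    """Rank candidate features by absolute correlation with the target
    series and keep the k most informative ones.
    `features` maps a feature name to its value series."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy) if vx and vy else 0.0
    ranked = sorted(features,
                    key=lambda f: abs(corr(features[f], target)),
                    reverse=True)
    return ranked[:k]
```

The selected subset would then be fed to the forecasting model, keeping the feature count "limited and informative" as the abstract describes.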





DeFog: dynamic micro-service placement in hybrid cloud-fog-edge infrastructures

DeFog is an innovative microservice placement and load balancing approach for distributed multi-cluster cloud-fog-edge architectures that minimises application response times. The architecture is modelled as a three-layered hierarchy. Each layer consists of one or more clusters of machines, with resource constraints increasing towards the lower layers. Applications are modelled as service-oriented architectures (SOA) comprising multiple interconnected microservices. Since many applications can run simultaneously and the resources of the edge and the fog are limited, the problem addressed in this work is choosing which services to run on the edge or the fog. DeFog focuses on dynamic (i.e., adaptive) decentralised service placement within each cluster with zero downtime, eliminating the need for coordination between clusters. To assess the effectiveness of DeFog, two realistic microservice-based applications are deployed, and several placement policies are tested to select the one that reduces application latency. Least frequently used (LFU) is the reference service placement strategy. The experimental results reveal that a placement policy that uses individual microservice latency as the crucial factor affecting service placement outperformed LFU by at least 10% in application response time.
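The intuition behind the winning latency-driven policy can be sketched as a greedy ranking: pin the microservices with the highest observed latency onto the capacity-limited edge, leaving the rest to the fog or cloud. This is an illustrative simplification of the policy's core idea, not DeFog's implementation; the names and the single-capacity model are assumptions:

```python
def place_services(latencies, edge_capacity):
    """Latency-driven placement sketch: greedily assign the microservices
    with the highest observed latency to the edge cluster (limited to
    `edge_capacity` services); everything else stays in fog/cloud."""
    ranked = sorted(latencies, key=latencies.get, reverse=True)
    return {"edge": ranked[:edge_capacity],
            "fog_or_cloud": ranked[edge_capacity:]}
```

An LFU-style baseline would rank by request frequency instead of latency; the experiments in the paper suggest the latency signal is the better predictor of response-time gains.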





Smart and adaptive website navigation recommendations based on reinforcement learning

Improving website structure is a main task of a website designer. In recent years, numerous web engineering researchers have investigated navigation recommendation systems. Page recommendation systems are critical for mobile website navigation. Accordingly, we propose a smart and adaptive navigation recommendation system based on reinforcement learning. In this system, user navigation history is used as the input to the reinforcement learning model. The model calculates a surf value for each page of the website; this value is used to rank the pages. On the basis of this ranking, the website structure is modified to shorten user navigation paths. Experiments were conducted to evaluate the performance of the proposed system. The results revealed that user navigation path lengths could be decreased by up to 50% with training on 12 months of data, indicating that users could more easily find a target web page with the help of the proposed adaptive navigation recommendation system.
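A minimal stand-in for the surf-value computation: treat each session's final page as the user's target and propagate discounted credit backwards along the visited path, as in a simple return calculation. The discount factor and the "last page is the target" assumption are illustrative; the paper's reinforcement learning model is more elaborate:

```python
def surf_values(histories, gamma=0.9):
    """Rank pages by a 'surf value' estimated from navigation histories:
    pages visited closer to the end of a session (the presumed target)
    receive higher, discounted credit."""
    values = {}
    for path in histories:
        for i, page in enumerate(path):
            # credit decays with distance from the session's final page
            credit = gamma ** (len(path) - 1 - i)
            values[page] = values.get(page, 0.0) + credit
    return sorted(values, key=values.get, reverse=True)
```

Pages at the top of this ranking are the ones the restructuring step would move closer to the site's entry points, shortening typical navigation paths.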









Niagara Health offering free parking after delays reported - News Talk 610 CKTB

  1. Niagara Health offering free parking after delays reported  News Talk 610 CKTB
  2. Implementation of new Niagara Health patient info system resulting in long wait times  St. Catharines Standard
  3. Temporary delays impacting registration at emergency departments  Thorold News
  4. Niagara Health Working Through Delays  101.1 More FM
  5. Niagara Health experiencing temporary delays impacting registration and EDs  Niagara Health








UN chief warns COP29 summit to pay up or face climate-led disaster for humanity - The Globe and Mail

  1. UN chief warns COP29 summit to pay up or face climate-led disaster for humanity  The Globe and Mail
  2. Climate Summit, in Early Days, Is Already on a ‘Knife Edge’  The New York Times
  3. At COP29 summit, nations big and small get chance to bear witness to climate change  The Globe and Mail
  4. Terence Corcoran: COP29 hit by political ‘dunkelflaute’  Financial Post
  5. COP29: Albania PM goes off script to ask 'What on Earth are we doing?'  Euronews





Former B.C. premier John Horgan dies aged 65, after third bout of cancer - National Post

  1. Former B.C. premier John Horgan dies aged 65, after third bout of cancer  National Post
  2. Adam Pankratz: John Horgan wasn't your typical NDP premier  National Post
  3. John Horgan: Reluctant leader became B.C.'s most-loved premier  Vancouver Sun
  4. Premier’s statement on the passing of John Horgan  BC Gov News
  5. UBC political scientist remembers former B.C. premier John Horgan’s legacy  CBC.ca





Big Brother is Watching But He Doesn’t Understand: Why Forced Filtering Technology on the Internet Isn’t the Solution to the Modern Copyright Dilemma

By Mitchell Longan[1]

Introduction

The European Parliament is currently considering a proposal to address problems of piracy and other forms of copyright infringement associated with the digital world.[2] Article 13 of the proposed Directive on Copyright in the Digital Single





SCRIPTed is turning 15!

“Fifteen Years of Evolution of Law, Technology and Society” To celebrate SCRIPTed’s 15th birthday, we are hosting a conference on Monday 28 January (3pm-7pm) at Evolution House here in Edinburgh. For more details, the programme, and (free) registration, see our





Predicting Innovation: Why Facebook/WhatsApp Merger Flunked

By Hasan Basri Cifci[1]

In the world of 2014, the Commission, in the Facebook/WhatsApp merger case,[2] concluded that integration and interoperation of Facebook and WhatsApp were unfeasible. However, Facebook integrated its three subsidiaries (WhatsApp, Instagram, and Facebook) under its brand in





Timed influence: The future of Modern (Family) life and the law

By Lucas Miotto Lopes and Jiahong Chen

The future of real-time appeal

Knowing when to say or do something is often just as important as knowing what to say or do. The right advice at the wrong time is not





Risk evaluation method of electronic bank investment based on random forest

To address the high error rates, low evaluation accuracy, and low investment returns of traditional methods, a random-forest-based method for evaluating electronic banking (e-banking) investment risk is proposed. First, a scientific e-banking investment risk evaluation index system is established. Then, the G1-COWA combined weighting method is used to calculate the weight of each index. Finally, with the risk evaluation index data as the input vector and the risk evaluation result as the output vector, a random forest model is built to produce the evaluation result. Experimental results show that the method's maximum relative error is 4.32%, its evaluation accuracy ranges from 94.5% to 98.1%, and the maximum e-banking investment return rate is 8.32%, indicating that the method can accurately evaluate e-banking investment risk.
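As a rough illustration of the weighting step, the sketch below linearly blends a subjective (G1-style) weight vector with an objective (COWA-style) weight vector and renormalises the result; the combination coefficient `alpha` and the toy weight values are assumptions, as the abstract does not specify how the two sets of weights are merged.

```python
def combine_weights(w_g1, w_cowa, alpha=0.5):
    """Linearly combine two index-weight vectors and renormalise to sum to 1."""
    combined = [alpha * a + (1 - alpha) * b for a, b in zip(w_g1, w_cowa)]
    total = sum(combined)
    return [w / total for w in combined]

# toy weights for three risk indices: subjective (G1) vs objective (COWA)
w = combine_weights([0.4, 0.35, 0.25], [0.3, 0.3, 0.4])
print([round(x, 3) for x in w])  # → [0.35, 0.325, 0.325]
```

The combined vector would then weight each index before the data enters the random forest.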





Research on Weibo marketing advertising push method based on social network data mining

Current advertising push methods have low accuracy and poor conversion performance, so a Weibo marketing advertising push method based on social network data mining is studied. First, a social network graph is built, and a graph clustering algorithm is used to mine the association relationships among users in the network. Second, sparsification is applied to uncover the associations between nodes in the graph. Next, the closeness between user preferences and other nodes in the network is evaluated, and the TF-IDF algorithm is used to extract user interest features. Finally, an attention mechanism is introduced to improve the deep learning model, which matches user interests with advertising-domain features and outputs the push results. Experimental results show that the method's push accuracy exceeds 95%, with a maximum click-through rate of 82.7% and a maximum conversion rate of 60.7%.
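The interest-extraction step can be sketched as a minimal TF-IDF computation over users' post tokens; the toy corpus and tokenisation below are illustrative assumptions, and the paper's graph clustering and attention model are not reproduced.

```python
import math

def tf_idf(docs):
    """Score each term in each document by term frequency x inverse document frequency."""
    n = len(docs)
    df = {}                                   # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    scores = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        scores.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return scores

# three users' post tokens: "sports" is distinctive to user 0,
# while "news" appears for two users and so scores lower
users = [["sports", "news", "sports"], ["news", "music"], ["music", "music"]]
scores = tf_idf(users)
print(scores[0]["sports"] > scores[0]["news"])  # → True
```

Terms with the highest scores per user would serve as the interest features matched against advertising-domain features.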





E-commerce growth prediction model based on grey Markov chain

To address the long prediction times and large number of iterations of traditional prediction models, an e-commerce growth prediction model based on a grey Markov chain is proposed. The Scrapy crawler framework is used to collect a variety of data from e-commerce websites, and a feedforward neural network model cleans the collected data. With the cleaned data as the input vector and the growth prediction as the output vector, a grey-Markov-chain prediction model is built and then improved with the background value optimisation method. After the model is trained with an improved particle swarm optimisation algorithm, accurate e-commerce growth predictions are obtained. Experimental results show that the model's maximum prediction time is only 0.032, with few iterations.
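The grey component of such a model can be sketched as a plain GM(1,1) one-step forecast; the Markov state correction, background-value optimisation, and particle-swarm training described in the abstract are omitted, and the sample series is illustrative.

```python
import math

def gm11_forecast(x0):
    """One-step-ahead GM(1,1) grey forecast (Markov correction omitted)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]              # accumulated series
    z = [0.5 * (x1[i] + x1[i - 1]) for i in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # least-squares fit of y_k = -a*z_k + b via the 2x2 normal equations
    p = (m * sum(zi * yi for zi, yi in zip(z, y)) - sum(z) * sum(y)) / (
        m * sum(zi * zi for zi in z) - sum(z) ** 2)
    a, b = -p, (sum(y) - p * sum(z)) / m
    c = b / a
    fitted = lambda k: (x0[0] - c) * math.exp(-a * k) + c  # fitted x1 at index k
    return fitted(n) - fitted(n - 1)                       # next x0 value

# on a 10%-growth series the grey model recovers the trend closely
print(round(gm11_forecast([1.0, 1.1, 1.21, 1.331]), 3))  # ≈ 1.463 (true: 1.4641)
```

In the full model, a Markov chain over residual states would then correct this grey forecast.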





A method for selecting multiple logistics sites in cross-border e-commerce based on return uncertainty

To reduce the location cost of cross-border e-commerce logistics sites, this article proposes a multi-site location method based on return uncertainty. First, a site selection model is established whose objective function minimises site construction, transportation, return, and operating costs, subject to constraints on return recovery costs and delayed pick-up times. Then, the Monte Carlo method is used to simulate the number of returned items, and an improved chicken swarm algorithm based on simulated annealing solves the model to complete the site selection. Experimental results show that the method effectively reduces the costs of cross-border e-commerce multi-site location: after applying it, the total location cost is 19.4 million yuan, whereas the total cost of each of the five comparison methods exceeds 20 million yuan.
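A toy version of the solution step is sketched below using plain simulated annealing over open/closed site decisions, with a Monte Carlo estimate of uncertain return volumes inside the cost function. Note the swap: the paper's improved chicken swarm algorithm is not reproduced, and the candidate sites, cost figures, and return distribution are illustrative assumptions.

```python
import math, random

random.seed(42)

BUILD_COST = [5.0, 4.0, 6.0, 3.0]   # construction cost per candidate site (toy)

def total_cost(open_sites):
    n_open = sum(open_sites)
    if n_open == 0:
        return float("inf")
    build = sum(c for c, o in zip(BUILD_COST, open_sites) if o)
    # Monte Carlo estimate of return handling: fewer open sites means
    # longer average return routes in this toy model
    mean_returns = sum(random.gauss(10, 2) for _ in range(200)) / 200
    return build + 2.0 * mean_returns / n_open

def anneal(n_sites, iters=300, t0=5.0):
    state = [True] * n_sites
    best, best_cost = state[:], total_cost(state)
    cur_cost = best_cost
    for i in range(iters):
        t = t0 * (1.0 - i / iters) + 1e-9            # linear cooling schedule
        cand = state[:]
        cand[random.randrange(n_sites)] ^= True      # open or close one site
        c = total_cost(cand)
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / t):
            state, cur_cost = cand, c
        if c < best_cost:
            best, best_cost = cand[:], c
    return best, best_cost

sites, cost = anneal(len(BUILD_COST))
print(sites, round(cost, 2))  # a plan cheaper than opening every site
```

The annealing acceptance rule lets occasional uphill moves escape local optima, which is the same role the simulated-annealing component plays inside the paper's hybrid algorithm.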





Exploring the impact of TPACK on Education 5.0 during the times of COVID-19: a case of Zimbabwean universities





The Impact of Physics Open Educational Resources (OER) on the Professional Development of Bhutanese Secondary School Physics Teachers





Feature-aware task offloading and scheduling mechanism in vehicle edge computing environment

With the rapid development and deployment of driverless technology, the number and location of vehicles and the channel and bandwidth of the wireless network are all time-varying, which increases the offloading delay and energy consumption of existing algorithms. To address this, the vehicle-terminal task offloading decision problem is modelled as a Markov decision process, and a task offloading algorithm based on DDQN is proposed. To guide agents towards quickly selecting optimal strategies, an offloading mechanism based on task features is introduced; to keep the processing delay of edge-server tasks from exceeding their delay upper limit, a task scheduling mechanism based on buffer delay is proposed. Simulation results show that, compared with existing algorithms, the proposed method offers clear advantages in reducing both delay and energy consumption.
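The learning core can be illustrated by the double-DQN target computation, in which the online network chooses the next action and the target network scores it; the Q-values and reward below are toy numbers, not values learned from the offloading environment.

```python
def ddqn_target(reward, gamma, q_online_next, q_target_next):
    """Double-DQN target: online net picks the action, target net evaluates it."""
    a_star = max(range(len(q_online_next)), key=q_online_next.__getitem__)
    return reward + gamma * q_target_next[a_star]

# the online net prefers action 1 (say, offload to the edge server);
# the target net evaluates that same action at 2.0
y = ddqn_target(reward=1.0, gamma=0.9,
                q_online_next=[0.5, 1.2, 0.3],
                q_target_next=[1.0, 2.0, 0.5])
print(y)  # → 2.8
```

Decoupling selection from evaluation in this way reduces the overestimation bias of plain Q-learning targets.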





Risk-based operation of plug-in electric vehicles in a microgrid using downside risk constraints method

To capture as much of the benefit as possible, the available PEV capacity must be identified and scheduling plans prepared on that basis. The analysis revealed that risk-based scheduling of the microgrid can eliminate the financial risk entirely, reducing it from $9.89 to $0.00, while increasing the expected operation cost by 24%, from $91.38 to $112.94. This implies that a risk-averse decision-maker is willing to spend more to reduce the expected risk in cost using the proposed downside risk management technique. Finally, with the help of the fuzzy satisfying method, a suitable risk-averse strategy is determined for the studied case.
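The downside risk measure referred to here can be sketched as the probability-weighted shortfall of scenario costs above a target cost; the scenario costs, probabilities, and target below are illustrative, with the target set near the abstract's expected cost of $91.38.

```python
def downside_risk(costs, probs, target):
    """Expected amount by which scenario cost exceeds the target cost."""
    return sum(p * max(c - target, 0.0) for c, p in zip(costs, probs))

# three toy cost scenarios around the expected operation cost
dr = downside_risk(costs=[80.0, 95.0, 110.0], probs=[0.3, 0.4, 0.3], target=91.38)
print(round(dr, 3))  # → 7.034
```

Constraining this quantity (rather than the variance) penalises only unfavourable deviations, which is the appeal of downside risk management for a risk-averse operator.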





Research on multi-objective optimisation for shared bicycle dispatching

Dispatching is key to the management of shared bicycles. Considering the number of borrowing and returning events during the dispatching period, this paper studies optimised dispatching plans for shared bicycles. First, a dispatching model is built that takes dispatching cost and lost demand as the objectives to be optimised. Second, a solution algorithm is designed based on the non-dominated sorting genetic algorithm. Finally, a case study illustrates the application of the method. The results show that the proposed method yields optimised dispatching plans, and that the model which accounts for borrowing and returning during the dispatching period performs better, with a 39.3% decrease in lost demand.
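The non-dominated sorting at the heart of such an algorithm reduces, within one generation, to filtering out dominated plans; the sketch below keeps only Pareto-optimal (dispatch cost, lost demand) pairs, both to be minimised, from an illustrative candidate set.

```python
def pareto_front(points):
    """Keep points not weakly dominated by any other point (both objectives minimised)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# toy plans as (dispatch cost, lost demand); (9, 9) is dominated by (8, 7)
plans = [(10, 5), (8, 7), (12, 3), (9, 9), (11, 4)]
print(sorted(pareto_front(plans)))  # → [(8, 7), (10, 5), (11, 4), (12, 3)]
```

A genetic algorithm would apply this ranking each generation to steer the population towards the cost/lost-demand trade-off surface.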





Adaptive terminal sliding mode control of a non-holonomic wheeled mobile robot

In this paper, a second-order sliding mode adaptive controller with finite-time stability is proposed for trajectory tracking of robotic systems. To reduce the chattering phenomenon in the response of the variable-structure robust controller, two dependent sliding surfaces are used: in the outer loop, a kinematic controller compensates for the geometric uncertainty of the robot, while in the inner loop the proposed robust controller serves as the main loop. Furthermore, to handle the robot's dynamic uncertainty and disturbances, an adaptive strategy estimates the uncertainty bound during the control process, eliminating the need for prior knowledge of that bound in the robust structure. The proposed control method demonstrates significant performance enhancements, with the linear velocity error improving by approximately 80%, leading to a more accurate and responsive system.
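A one-dimensional toy simulation can illustrate the adaptive idea: the switching gain grows with the sliding variable until it dominates a bounded, unknown disturbance, so no prior bound on the uncertainty is needed. The dynamics, gains, and disturbance below are assumptions for illustration, not the wheeled-robot model from the paper.

```python
import math

dt = 0.001
x, v, k = 1.0, 0.0, 0.0          # state, velocity, adaptive switching gain
lam, gamma, eps = 5.0, 20.0, 0.1  # surface slope, adaptation rate, boundary layer
for i in range(20000):            # simulate 20 s of a double integrator
    s = v + lam * x                          # sliding surface
    k += gamma * abs(s) * dt                 # adaptive gain: grows while |s| > 0
    u = -lam * v - k * math.tanh(s / eps)    # smoothed switching control
    d = 0.5 * math.sin(0.01 * i)             # bounded unknown disturbance
    v += (u + d) * dt
    x += v * dt
print(abs(x) < 0.05)  # → True: state driven to a small neighbourhood of 0
```

The `tanh` boundary layer plays the chattering-reduction role that the paper assigns to its second sliding surface, while the gain update needs only that the disturbance is bounded, not its bound.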





Machine learning and deep learning techniques for detecting and mitigating cyber threats in IoT-enabled smart grids: a comprehensive review

The confluence of the internet of things (IoT) with smart grids has ushered in a paradigm shift in energy management, promising unparalleled efficiency, economic robustness and unwavering reliability. However, this integrative evolution has concurrently amplified the grid's susceptibility to cyber intrusions, casting shadows on its foundational security and structural integrity. Machine learning (ML) and deep learning (DL) emerge as beacons in this landscape, offering robust methodologies to navigate the intricate cybersecurity labyrinth of IoT-infused smart grids. While ML excels at sifting through voluminous data to identify and classify looming threats, DL delves deeper, crafting sophisticated models equipped to counteract avant-garde cyber offensives. Both of these techniques are united in their objective of leveraging intricate data patterns to provide real-time, actionable security intelligence. Yet, despite the revolutionary potential of ML and DL, the battle against the ceaselessly morphing cyber threat landscape is relentless. The pursuit of an impervious smart grid continues to be a collective odyssey. In this review, we embark on a scholarly exploration of ML and DL's indispensable contributions to enhancing cybersecurity in IoT-centric smart grids. We meticulously dissect predominant cyber threats, critically assess extant security paradigms, and spotlight research frontiers yearning for deeper inquiry and innovation.





Robust watermarking of medical images using SVM and hybrid DWT-SVD

The security of medical images is an important aspect of image processing. Support vector machines (SVMs) are a supervised machine learning technique, rooted in statistical learning theory, that is widely used in image classification; SVMs have gained significance as a robust, accurate, and effective algorithm even when trained on a small set of samples, and they support binary or multi-class classification according to the application's needs. Discrete wavelet transform (DWT) and singular value decomposition (SVD) are used to enhance the image's security. In this paper, the image is first classified by an SVM into ROI and RONI regions; then, to preserve the image's diagnostic capability, a hybrid DWT-SVD watermarking technique embeds the watermark in the RONI region. Overall, this work contributes a novel and effective solution for medical image security. The results are evaluated for imperceptibility using the PSNR and SSIM metrics, and a range of attacks applied to the watermarked image demonstrates the efficacy and robustness of the proposed algorithm.
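The wavelet stage can be illustrated with a minimal single-level 1-D Haar transform and its inverse; the SVM classification and SVD embedding steps are omitted, and the sample values are illustrative.

```python
def haar_dwt_1d(x):
    """Single-level 1-D Haar DWT: return (approximation, detail) coefficients."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt_1d(approx, detail):
    """Inverse of the transform above: perfect reconstruction of the signal."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

a, d = haar_dwt_1d([10, 12, 14, 18])      # one toy row of RONI pixels
print(a, d)                # → [11.0, 16.0] [-1.0, -2.0]
print(haar_idwt_1d(a, d))  # → [10.0, 12.0, 14.0, 18.0]
```

In a DWT-SVD scheme, the watermark would modify the singular values of a chosen subband before this inverse step restores the pixel domain.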