
Context-Aware Composition and Adaptation based on Model Transformation

Using pre-existing software components (COTS) to develop software systems requires composing and adapting the component interfaces to solve mismatch problems. These mismatches may appear at different interoperability levels (signature, behavioural, quality-of-service and semantic). In this article, we define an approach that supports composition and adaptation of software components based on model transformation, taking all four levels into account. The signature and behavioural levels are addressed by means of transition systems. Context-awareness and semantic-based techniques are used to tackle the quality-of-service and semantic levels, respectively, while both also consider the signature level. We have implemented and validated our proposal for the design and application of realistic and complex systems. Here, we illustrate the need to support the variability of the adaptation process in a context-aware pervasive system through a real-world case study in which software components are implemented using Windows Workflow Foundation (WF). We apply our model transformation process to extract transition systems (CA-STS specifications) from WF components. These CA-STSs are used to tackle composition and adaptation. We then generate a CA-STS adaptor specification, which is transformed into its corresponding WF adaptor component so that it can interact with all the WF components of the system, thereby avoiding mismatch problems.





A Semantic Wiki Based on Spatial Hypertext

Spatial Hypertext Wiki (ShyWiki) is a wiki that represents knowledge using notes that are spatially distributed in wiki pages and have visual characteristics such as colour, size or font type. The use of spatial and visual characteristics in wikis is important for improving human comprehension, creation and organisation of knowledge. Another important capability for wikis is to allow machines to process knowledge; wikis that formally structure knowledge for this purpose are called semantic wikis. This paper describes how ShyWiki can make use of spatial hypertext in order to be a semantic wiki. ShyWiki can represent knowledge at different levels of formality. Users of ShyWiki can annotate the content and represent semantic relations without being experts in semantic web data description languages. The spatial hypertext features make it suitable for representing unstructured knowledge and implicit graphic relations among concepts. In addition, semantic web and spatial hypertext features are combined to represent structured knowledge. The semantic web features of ShyWiki improve navigation and allow the wiki knowledge, including the implicit relations analysed by a spatial parser, to be published as RDF resources.





A Ranking Tool Exploiting Semantic Descriptions for the Comparison of EQF-based Qualifications

Nowadays, one of the main issues discussed at the Community level is the mobility of students and workers across Europe. In recent years, several initiatives have been carried out to address this picture: one of them is the definition of the European Qualification Framework (EQF), a common architecture for describing qualifications. At the same time, several research activities have investigated how semantic technologies could be exploited to compare qualifications in the field of human resources acquisition. In this paper, the EQF specifications are taken into account and applied in a practical scenario to develop a ranking algorithm for comparing qualifications expressed in terms of knowledge, skill and competence concepts, potentially supporting European employers during the recruiting phase.





Ontology-based User Interface Development: User Experience Elements Pattern

The user experience of any software or website consists of elements ranging from the conceptual to the concrete level. These elements of user experience assist in the design and development of user interfaces. Ontologies, in turn, provide a framework for the computable representation of user interface elements and the underlying data. This paper discusses strategies for introducing ontologies at the different user interface layers adapted from the user experience elements. These layers range from abstract levels (e.g. user needs/application objectives) to concrete levels (e.g. the application user interface) in terms of data representation. The proposed ontological framework enables device-independent, semi-automated GUI construction, which we demonstrate with a personal information management example.





Ontology-based Competency Management: the Case Study of the Mihajlo Pupin Institute

Semantic-based technologies have been steadily increasing their relevance in recent years in both the research and business worlds. Considering this, the present article discusses the design and implementation of a competency management system in the information and communication technologies domain that utilises the latest Semantic Web tools and technologies, including the D2RQ server, TopBraid Composer, OWL 2, SPARQL, SPARQL Rules and common public vocabularies related to human resources. In particular, the paper discusses the process of building individual and enterprise competence models in the form of an ontology database, as well as different ways of meaningfully searching and retrieving expertise data on the Semantic Web. The ontological knowledge base aims at storing the competences extracted and integrated from structured as well as unstructured sources. Using the illustrative case study of the deployment of such a system in the Human Resources sector at the Mihajlo Pupin Institute, this paper shows an example of new approaches to data integration and information management. The proposed approach extends the functionalities of existing enterprise information systems and offers possibilities for the development of future Internet services. This allows organisations to express their core competences and talents in a standardised, machine-processable and understandable format and hence facilitates their integration in the European Research Area and beyond.





An Ontology based Agent Generation for Information Retrieval on Cloud Environment

Retrieving information or discovering knowledge from a well-organised data centre generally requires familiarity with its schema, structure and architecture, which runs against the inherent concept and characteristics of a cloud environment. An effective approach to retrieving desired information or extracting useful knowledge is therefore an important issue in the emerging information/knowledge cloud. In this paper, we propose an ontology-based agent generation framework for retrieving information in a flexible, transparent and easy way on a cloud environment. When a user submits a flat-text request for retrieving information on a cloud environment, the request is automatically deduced by a Reasoning Agent (RA) based on a predefined ontology and reasoning rules, and then translated into a Mobile Information Retrieving Agent Description File (MIRADF), formatted in a proposed Mobile Agent Description Language (MADF). A generating agent, named MIRA-GA, is also implemented to generate a MIRA according to the MIRADF. We design and implement a prototype that integrates these agents and present an interesting example to demonstrate the feasibility of the architecture.





ORPMS: An Ontology-based Real-time Project Monitoring System in the Cloud

Project monitoring plays a crucial role in project management and is part of every stage of a project's life-cycle. Nevertheless, along with the increasing share of outsourcing in many companies' strategic plans, project monitoring has been challenged by geographically dispersed project teams and culturally diverse team members. Furthermore, because of the lack of a uniform standard, data exchange between different project monitoring software packages has become practically impossible. Together, these factors lead to ambiguity in project monitoring processes. Ontology is a form of knowledge representation whose purpose is disambiguation. Consequently, in this paper we propose the framework of an ontology-based real-time project monitoring system (ORPMS) that uses ontologies to resolve the ambiguity in project monitoring processes caused by these factors. The framework incorporates a series of ontologies for knowledge capture, storage, sharing and term disambiguation in project monitoring processes, and a series of metrics to help the management of project organisations monitor projects more effectively. We propose configuring the ORPMS framework in a cloud environment, aiming to provide the project monitoring service to geographically distributed and dynamic project members with great flexibility, scalability and security. A case study is conducted on a prototype of the ORPMS in order to evaluate the framework.





A feature-based model selection approach using web traffic for tourism data

The increased volume of accessible internet data creates an opportunity for researchers and practitioners to improve time series forecasting for many indicators. In our study, we assess the value of web traffic data in forecasting the number of short-term visitors travelling to Australia. We propose a feature-based model selection framework that combines random forest with a feature ranking process to select the best performing model using a small, informative set of features extracted from web traffic data. The data were obtained for several tourist attraction and tourism information websites that potential tourists might visit to find out more about their destinations. The random forest models were evaluated over 3- and 12-month forecasting horizons. Features from web traffic data appear in the final model for short-term forecasting. Further, the model with the additional data performs better on unseen data following the COVID-19 pandemic. Our study shows that web traffic data add value to tourism forecasting and can assist tourist destination site managers and decision makers in making timely decisions to prepare for changes in tourism demand.





Smart and adaptive website navigation recommendations based on reinforcement learning

Improving website structures is the main task of a website designer. In recent years, numerous web engineering researchers have investigated navigation recommendation systems, and page recommendation systems are critical for mobile website navigation. Accordingly, we propose a smart and adaptive navigation recommendation system based on reinforcement learning. In this system, user navigation history is used as the input to the reinforcement learning model. The model calculates a surf value for each page of the website, and this value is used to rank the pages. On the basis of this ranking, the website structure is modified to shorten the user navigation path length. Experiments were conducted to evaluate the performance of the proposed system. The results revealed that user navigation paths could be shortened by up to 50% with training on 12 months of data, indicating that users could more easily find a target web page with the help of the proposed adaptive navigation recommendation system.





Risk evaluation method of electronic bank investment based on random forest

Aiming at the problems of high error rates, low evaluation accuracy and low investment returns in traditional methods, a random forest-based e-bank investment risk evaluation method is proposed. First, a scientific e-bank investment risk evaluation index system is established. Then, the G1-COWA combined weighting method is used to calculate the weight of each index. Finally, with the e-bank investment risk evaluation index data as the input vector and the e-bank investment risk evaluation result as the output vector, a random forest model is established and the e-banking investment risk evaluation results are obtained. The experimental results show that the maximum relative error rate of this method is 4.32%, the evaluation accuracy ranges from 94.5% to 98.1%, and the maximum return rate of the e-banking investment is 8.32%, indicating that this method can accurately evaluate the investment risk of electronic banking.
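As a minimal sketch of the index-weighting step described above: the snippet normalises indicator readings and combines them with fixed weights into a single risk score. The readings and weights are invented placeholders, not the paper's actual G1-COWA results.

```python
# Illustrative weighted-index scoring; all numbers are hypothetical.

def min_max_normalise(values):
    """Scale raw indicator values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def weighted_risk_score(indicators, weights):
    """Combine normalised indicator values with combined weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(v * w for v, w in zip(indicators, weights))

raw = [3.2, 7.5, 1.1, 4.8]       # hypothetical indicator readings
weights = [0.4, 0.3, 0.2, 0.1]   # hypothetical combined weights
score = weighted_risk_score(min_max_normalise(raw), weights)
```

In the paper's pipeline the weighted index data would then feed the random forest model rather than being summed directly.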





Research on Weibo marketing advertising push method based on social network data mining

The current advertising push methods have low accuracy and poor advertising conversion effects; therefore, a Weibo marketing advertising push method based on social network data mining is studied. First, a social network graph is established, and a graph clustering algorithm is used to mine the association relationships among users in the network. Second, through sparsification, the associations between nodes in the social network graph are extracted. Then the closeness between user preferences and other nodes in the social network is evaluated, and the TF-IDF algorithm is used to extract user interest features. Finally, an attention mechanism is introduced to improve the deep learning model, which matches user interests with advertising domain features and outputs the push results. The experimental results show that the push accuracy of this method is above 95%, with a maximum advertising click-through rate of 82.7% and a maximum advertising conversion rate of 60.7%.
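The TF-IDF feature extraction step named above can be sketched as follows; the toy documents of user-interest terms are invented for illustration.

```python
import math

# Minimal TF-IDF: term frequency within a document times the log of
# inverse document frequency across the corpus.

def tf_idf(docs):
    """Return one {term: tf-idf weight} dict per tokenised document."""
    n = len(docs)
    df = {}                             # document frequency per term
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    scores = []
    for doc in docs:
        tf = {t: doc.count(t) / len(doc) for t in set(doc)}
        scores.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return scores

docs = [["travel", "food", "travel"],
        ["food", "music"],
        ["music", "travel", "music"]]
weights = tf_idf(docs)
```

Terms that dominate one document but appear in few others get the largest weights, which is what makes them usable as interest features.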





E-commerce growth prediction model based on grey Markov chain

In order to solve the problems of long prediction times and many prediction iterations in traditional prediction models, an e-commerce growth prediction model based on the grey Markov chain is proposed. The Scrapy crawler framework is used to collect a variety of e-commerce data from e-commerce websites, and a feedforward neural network model is used to clean the collected data. With the cleaned e-commerce data as the input vector and the e-commerce growth prediction results as the output vector, an e-commerce growth prediction model based on the grey Markov chain is built. The prediction model is improved with the background-value optimisation method, and after the model is trained with an improved particle swarm optimisation algorithm, accurate e-commerce growth prediction results are obtained. The experimental results show that the maximum time consumed by this model for e-commerce growth prediction is only 0.032, and the number of iterations is small.
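The grey half of a grey Markov chain is typically a GM(1,1) model; the following is a minimal sketch of GM(1,1) fitting and forecasting under that assumption, with an invented growth series. The paper's background-value optimisation, particle swarm training and Markov correction are not reproduced here.

```python
import math

# Plain GM(1,1): accumulate the series, fit the grey differential
# equation by least squares, then invert the accumulation to forecast.

def gm11_forecast(x0, steps=1):
    """Fit GM(1,1) to series x0 and forecast `steps` future values."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]          # 1-AGO series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    sz, sy = sum(z), sum(y)
    szz = sum(v * v for v in z)
    szy = sum(v * w for v, w in zip(z, y))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det       # development coefficient
    b = (szz * sy - sz * szy) / det     # grey input
    def x1_hat(k):                      # time-response function, k 0-based
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

preds = gm11_forecast([100, 112, 125, 140, 157], steps=1)
```

A Markov-chain correction would then classify the fitting residuals into states and adjust each forecast by the expected residual of its state.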





A method for selecting multiple logistics sites in cross-border e-commerce based on return uncertainty

To reduce the location cost of cross-border e-commerce logistics sites, this article proposes a multi-logistics-site location method based on return uncertainty. First, a site selection model is established with an objective function that minimises site construction, transportation, return and operating costs, under constraints on return recovery costs and delayed pick-up time. Then the Monte Carlo method is used to simulate the number of returned items, and an improved chicken swarm algorithm based on simulated annealing solves the cross-border e-commerce multi-logistics-site location model to complete the site selection. Experimental results show that this method can effectively reduce the costs associated with cross-border e-commerce multi-logistics site selection: after applying the method, the total cost of multi-logistics site selection is 19.4 million yuan, while the total cost under each of the five comparison methods exceeds 20 million yuan.





Risk-based operation of plug-in electric vehicles in a microgrid using downside risk constraints method

To realise as many of the benefits as possible, it is necessary to identify the available plug-in electric vehicle (PEV) capacity and to prepare scheduling plans accordingly. The analysis revealed that risk-based scheduling of the microgrid could eliminate the financial risk completely, from $9.89 to $0.00, while increasing the expected operation cost by 24%, from $91.38 to $112.94. This implies that a risk-averse decision-maker tends to spend more money to reduce the expected risk in cost by using the proposed downside risk management technique. Finally, with the help of the fuzzy satisfying method, a suitable risk-averse strategy is determined for the studied case.





A robust feature points-based screen-shooting resilient watermarking scheme

Screen-shooting can lead to information leakage. Anti-screen-shooting watermarks, which can trace the source of a leak and protect the copyright of images, play an important role in image information security. Because of the randomness of shooting distance and angle, more robust watermarking algorithms are needed to resist the mixed attacks generated by screen-shooting. This paper proposes a robust digital watermarking algorithm that is resistant to screen-shooting. We use an improved Harris-Laplace algorithm to detect image feature points and embed the watermark into the feature domain. All test images are selected from the USC-SIPI dataset, and six related common algorithms are used for performance comparison. The experimental results show that, within a certain range of shooting distances and angles, the presented algorithm can not only extract the watermark effectively but also preserve the basic invisibility of the watermark. The algorithm therefore offers good robustness against screen-shooting.





What drives mobile game stickiness and in-game purchase intention? Based on the uses and gratifications theory

Despite the considerable growth potential predicted for mobile games, little research has explored what motivates users to be sticky and make purchases in the mobile game context. Drawing on uses and gratifications theory (UGT), this study evaluates the effects of players' characteristics (i.e., individual gratification and individual situation) and the mobile game structure (i.e., presence and governance) on players' mobile game behaviour (i.e., stickiness and purchase intention). Specifically, the model was extended with factors of the individual situation and governance. After surveying 439 respondents, the research model was examined using the partial least squares structural equation modelling (PLS-SEM) approach. The results indicate that stickiness is a crucial antecedent of users' in-game purchase intention. The individual situation plays an essential role in influencing user gratification, and individual gratification is the most important criterion affecting stickiness. Finally, presence and integration positively affect stickiness, whereas incentives do not. This study provides further insights into both mobile game design and governance strategies.





An effective differential privacy protection method of location data based on perturbation loss constraint

Differential privacy is often applied in location privacy protection scenarios: it obfuscates real data by adding interference noise to location points in order to protect privacy. However, this approach can produce a significant amount of redundant noisy data and reduce location accuracy. Considering both the security and the practicability of location data, an effective differential privacy protection method for location data based on a perturbation loss constraint is proposed. After the Laplace mechanism is applied under differential privacy to perturb the location data, Savitzky-Golay filtering is used to correct the noisy data, optimising the data with large deviations and low availability. Introducing the Savitzky-Golay filtering mechanism into differential privacy reduces the error caused by noisy data while still protecting user privacy. The experimental results indicate that the scheme improves the practicability of location data and is feasible.
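The Laplace mechanism named above can be sketched as follows; the coordinates, sensitivity and privacy budget are invented placeholders, and the Savitzky-Golay correction step is not reproduced.

```python
import math
import random

# For sensitivity s and privacy budget eps, the Laplace mechanism adds
# noise drawn from Laplace(0, s / eps) to each released value.

def laplace_noise(scale, rng):
    """Inverse-CDF sampling of a Laplace(0, scale) variate."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def perturb_location(point, sensitivity, epsilon, rng):
    """Add independent Laplace noise to each coordinate of a location."""
    scale = sensitivity / epsilon
    return tuple(c + laplace_noise(scale, rng) for c in point)

rng = random.Random(42)   # seeded only to make the sketch reproducible
noisy = perturb_location((39.9042, 116.4074),
                         sensitivity=0.01, epsilon=0.5, rng=rng)
```

A smaller epsilon gives stronger privacy but a larger noise scale, which is exactly the accuracy loss the perturbation loss constraint is meant to bound.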





Emotion recognition method for multimedia teaching classroom based on convolutional neural network

In order to further improve the quality of multimedia teaching in schools' daily instruction, a classroom facial expression emotion recognition model based on a convolutional neural network is proposed. VGGNet and CliqueNet are used as the basic expression emotion recognition methods, the two recognition models are fused, and the attention module CBAM is added. Simulation results show that the resulting classroom facial expression emotion recognition model, based on V-CNet, has high recognition accuracy, reaching 93.11% on the test set, and can be applied to actual teaching scenarios to improve the quality of classroom teaching.





Design of traffic signal automatic control system based on deep reinforcement learning

To address the aggravation of traffic congestion caused by unstable signal control in traffic signal control systems, the Multi-Agent Deep Deterministic Policy Gradient-based Traffic Cyclic Signal (MADDPG-TCS) control algorithm is used to control the time and data dimensions of the signal control scheme. The results show that the maximum vehicle delay time and vehicle queue length of the proposed algorithm are 11.33 s and 27.18 m, both lower than those of the traditional control methods. Therefore, this method can effectively reduce the delay of traffic signal control and improve the stability of signal control.





Application of integrated image processing technology based on PCNN in online music symbol recognition training

To improve the effectiveness of online training for music education, this study investigates how to improve the pulse-coupled neural network (PCNN) used in image processing for spectral image segmentation. A two-scale descent method is proposed to correct oblique spectra. Subsequently, a convolutional neural network is optimised with a two-channel feature-fusion recognition network for music theory notation recognition. The results show that this image segmentation method had the highest accuracy, close to 98%, and that the accuracy of spectral tilt correction was also as high as 98.4%, providing good image pre-processing results. Combined with the improved convolutional neural network, the average accuracy of music theory symbol recognition was about 97%, and the highest score of music majors improved by 16 points. This shows that the method can effectively improve the teaching effect of online training in music education and has practical value.





Multi-agent Q-learning algorithm-based relay and jammer selection for physical layer security improvement

Physical Layer Security (PLS) and relay technology have emerged as viable methods for enhancing the security of wireless networks. Adopting relay technology extends coverage and improves reliability; moreover, it can improve the PLS. Choosing relay and jammer nodes from the group of intermediate nodes effectively mitigates powerful eavesdroppers. Current methods for Joint Relay and Jammer Selection (JRJS) address the optimisation problem of achieving near-optimal secrecy, but most of these techniques are not scalable to large networks because of their computational cost. Secrecy decreases if eavesdroppers know the relay and jammer intermediate nodes, because beamforming can then be used to counter the jammer. Consequently, this study introduces a multi-agent Q-learning-based, PLS-enhanced secure joint relay and jammer selection scheme for dual-hop wireless cooperative networks in the presence of several eavesdroppers. The performance of the proposed algorithm is compared with that of current algorithms for secure node selection, and the simulation results verify its superiority.
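A single tabular Q-learning update of the kind underlying the multi-agent scheme above can be sketched as follows; the states, actions and reward values are invented placeholders.

```python
# One Bellman update for tabular Q-learning:
# Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Apply one Q-learning update to table q and return the new value."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q.setdefault(state, {}).setdefault(action, 0.0)
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
    return q[state][action]

# Example: a node in state "s0" choosing between two candidate jammers.
q = {"s0": {"jam_A": 0.0, "jam_B": 0.2}, "s1": {"jam_A": 0.5}}
new_value = q_update(q, "s0", "jam_A", reward=1.0, next_state="s1")
```

In the multi-agent setting, each agent (e.g. the relay selector and the jammer selector) maintains its own table and receives a reward derived from the achieved secrecy rate.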





Injury prediction analysis of college basketball players based on FMS scores

It is inevitable that basketball players sustain physical injuries in sports, and reducing basketball injuries is one of the main aims of basketball research. In view of this, this paper proposes a monocular-vision and FMS-based injury prediction model for basketball players. To address the limitations of traditional FMS testing methods, this study introduces intelligent machine learning methods: a random forest algorithm is incorporated into the OpenPose network to mitigate node occlusion, missed detections and false detections. In addition, to reduce the computational load of the network, the original OpenPose network is replaced by a lightweight OpenPose network. The experimental results show that the average processing time of the proposed model is about 90 ms, with an output video frame rate of 10 frames per second, which meets the real-time requirements. This study analysed students participating in the basketball league of the College of Sports Science of Nantong University, and the results confirmed the accuracy of FMS-score-based injury prediction for college basketball players. It is hoped that this study can provide a reference for research on injury prevention for basketball players.





Attention-based gating units separate channels in neural radiance fields

We introduce a unique inductive bias to improve the reconstruction quality of Neural Radiance Fields (NeRF). NeRF employs a Fourier transform to map 3D coordinates into a high-dimensional space, enhancing the representation of high-frequency information in scenes. However, this transformation often introduces significant noise, affecting NeRF's robustness. Our approach allocates attention effectively by segregating channels within NeRF using attention-based gating units. Experiments on an open-source dataset demonstrate the effectiveness of our method, which yields significant improvements in the quality of synthesised novel-view images compared to state-of-the-art methods. Notably, we achieve an average PSNR increase of 0.17 over the original NeRF. Furthermore, our method is implemented through a carefully designed Multi-Layer Perceptron (MLP) architecture, ensuring compatibility with most existing NeRF-based methods.
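The Fourier mapping referred to above is commonly NeRF's positional encoding, gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)); below is a minimal sketch under that assumption, with an arbitrary (hypothetical) number of frequency bands.

```python
import math

# Positional encoding: each scalar coordinate is expanded into
# sin/cos features at exponentially increasing frequencies.

def positional_encoding(coords, num_bands=4):
    """Map each coordinate to 2 * num_bands Fourier features."""
    features = []
    for p in coords:
        for k in range(num_bands):
            freq = (2.0 ** k) * math.pi
            features.append(math.sin(freq * p))
            features.append(math.cos(freq * p))
    return features

feats = positional_encoding([0.25, -0.5], num_bands=4)
```

The high-frequency bands are what carry fine detail, and they are also where the noise the abstract mentions originates, which motivates gating the corresponding channels.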





BEFA: bald eagle firefly algorithm enabled deep recurrent neural network-based food quality prediction using dairy products

Food quality is defined as the collection of properties that differentiate individual units and influence the degree to which the food is acceptable to users or consumers. Owing to the nature of food, predicting food quality after specific periods of storage, or before use by consumers, is highly significant; however, accuracy is the major problem of the existing methods. Hence, this paper presents a BEFA_DRNN approach for accurate food quality prediction using dairy products. First, the input data are fed to a data normalisation phase based on min-max normalisation. The normalised data then enter a feature fusion phase conducted with a DNN using the Canberra distance, and the fused data are subjected to a data augmentation stage carried out with an oversampling technique. Finally, food quality prediction is performed, with milk graded by a deep recurrent neural network (DRNN). The DRNN is trained with the proposed BEFA, a combination of the bald eagle search (BES) and firefly (FA) algorithms. BEFA_DRNN obtained maximum accuracy, TPR and TNR values of 93.6%, 92.5% and 90.7%, respectively.
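The min-max normalisation step named above can be sketched as follows; the milk pH readings are invented placeholders.

```python
# Min-max normalisation rescales a feature column into a fixed range
# so that features with different units contribute comparably.

def min_max_column(column, lo=0.0, hi=1.0):
    """Rescale a feature column into [lo, hi]."""
    cmin, cmax = min(column), max(column)
    span = cmax - cmin
    if span == 0:                      # constant column: map to midpoint
        return [(lo + hi) / 2.0] * len(column)
    return [lo + (v - cmin) * (hi - lo) / span for v in column]

ph = [6.4, 6.7, 6.8, 6.5]   # hypothetical milk pH readings
normalised = min_max_column(ph)
```

Each feature column of the dairy dataset would be rescaled independently in this way before feature fusion.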





QoS-based handover approach for 5G mobile communication system

5G mobile communication systems are an in-depth fusion of multiple radio access technologies characterised by frequent handover between cells, and handover management is a particularly challenging issue for 5G network development. In this article, a novel optimised handover framework is proposed to find the optimal network to connect to, with good quality of service, in accordance with the user's preferences. The framework is based on an extension of the IEEE 802.21 standard with new components and new service primitives for seamless handover. Moreover, the proposed vertical handover process is based on an adaptive heuristic model aimed at selecting an optimal network during the decision-making stage. Simulation results demonstrate that, compared with existing work, the proposed framework can accurately select the best candidate network based on the quality-of-service requirements of the application, network conditions, mobile terminal conditions and user preferences, and that it significantly reduces the handover delay, handover blocking probability and packet loss rate.





Human resource management and organisation decision optimisation based on data mining

The utilisation of big data presents significant opportunities for businesses to create value and gain a competitive edge, enabling firms to anticipate and uncover information quickly and intelligently. The author introduces a human resource scheduling optimisation strategy using a parallel network fusion structure model. The approach involves designing a set of network structures based on parallel networks and streaming media, enabling the macro implementation of the enterprise parallel network fusion structure. Furthermore, the author proposes a human resource scheduling optimisation method based on a parallel deep-learning network fusion structure, which combines convolutional neural networks and transformer networks to fuse streaming media features and thereby comprehensively identify the effectiveness of an enterprise's current human resource scheduling. The results show that the combined macro and deep learning methods achieve a recognition rate of 87.53%, making it feasible to assess the current state of human resource scheduling in enterprises.





Natural language processing-based machine learning psychological emotion analysis method

To achieve psychological and emotional analysis of massive internet chat text, researchers have used statistical methods, machine learning and neural networks to analyse the emotional tendencies of texts. The author first compares, for simple sentences, the differences between two psychological analysis algorithms, one based on an emotion dictionary and one on machine learning, then studies an expansion algorithm for the emotion dictionary, and finally proposes a long-text psychological analysis algorithm based on conditional random fields. In the experiments, the emotion dictionary's precision, recall and F-score were recalculated for every ten words added to the dictionary: precision decreased while recall and F-score improved. The F-score, an effective single indicator for evaluating a psychological analysis method, demonstrates that the algorithm adapts well to the extended dictionary. It has been shown that this scheme achieves good results in analysing emotional tendencies and is more efficient than ordinary weight-based psychological sentiment analysis algorithms.





Dual network control system for bottom hole throttling pressure control based on RBF with big data computing

In the context of smart city development, the managed pressure drilling (MPD) process faces many uncertainties; the process dynamics are complex and require accurate wellbore pressure control, yet there is a risk of introducing un-modelled dynamics into the system. To address this problem, this paper employs neural network control techniques to construct a dual-network system for throttle pressure control, whose design encompasses both a controller and an identifier. In the controller structure, a radial basis function (RBF) network and a proportional term are connected in parallel, and the RBF network learning algorithm is used to train the identifier. The simulation results show that the actual wellbore pressure quickly tracks the reference pressure when the pressure setpoint changes. In addition, the neural-network-based controller achieves effective control, enabling the system to track the input target quickly and converge stably.
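A Gaussian radial basis function layer of the kind used in the controller above can be sketched as follows; the centres, width and output weights are invented placeholders rather than the paper's trained values.

```python
import math

# Gaussian RBF layer: each hidden unit responds most strongly to
# inputs near its centre, phi(x) = exp(-||x - c||^2 / (2 sigma^2)).

def rbf_layer(x, centres, sigma):
    """Return the hidden-layer activations for input vector x."""
    acts = []
    for c in centres:
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        acts.append(math.exp(-dist2 / (2.0 * sigma ** 2)))
    return acts

def rbf_output(x, centres, sigma, weights):
    """Weighted sum of hidden activations: the network's scalar output."""
    return sum(w * h for w, h in zip(weights, rbf_layer(x, centres, sigma)))

centres = [(0.0, 0.0), (1.0, 1.0)]   # hypothetical hidden-unit centres
u = rbf_output((0.0, 0.0), centres, sigma=1.0, weights=[0.5, 0.5])
```

In the paper's scheme a proportional term is added in parallel with this output, and the identifier network's weights are adapted online from the tracking error.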




base

Educational countermeasures of different learners in virtual learning community based on artificial intelligence

In order to reduce the challenges that learners and educators encounter in educational activities, this paper classifies learners' roles in virtual learning communities and explores how their behavioural characteristics, and their positions in collaborative knowledge-construction networks, promote the process of knowledge construction. The study first analyses the relationship structure among learners in the virtual learning community and then applies the FCM algorithm to group learners along several dimensions into distinct learning communities. The test results demonstrate that the FCM method performs consistently during clustering, with few performance oscillations and good node aggregation; the model's ARI value reaches 0.90. The identified roles are found to be important in the social interaction of learners' virtual learning communities, which in turn supports the application of artificial intelligence in education.
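The FCM (fuzzy c-means) clustering step can be sketched in a few lines. The one-dimensional data, cluster count, and fuzzifier below are illustrative, not the study's actual learner features:

```python
import random

# Sketch of fuzzy c-means (FCM), the clustering step used to group learners.
# The 1-D data, cluster count c, and fuzzifier m are illustrative assumptions.

def fcm(points, c=2, m=2.0, iters=50):
    """Return the membership matrix u, where u[i][j] is the degree to which
    point i belongs to cluster j (each row sums to 1)."""
    random.seed(1)
    u = [[random.random() for _ in range(c)] for _ in points]
    u = [[v / sum(row) for v in row] for row in u]          # normalise rows
    for _ in range(iters):
        # centres are membership-weighted means of the points
        centres = []
        for j in range(c):
            den = sum(u[i][j] ** m for i in range(len(points)))
            num = sum((u[i][j] ** m) * points[i] for i in range(len(points)))
            centres.append(num / den)
        # memberships follow from relative inverse distances to the centres
        for i, x in enumerate(points):
            d = [abs(x - cj) + 1e-9 for cj in centres]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2 / (m - 1)) for dk in d)
    return u

learners = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
u = fcm(learners)
```

Unlike hard clustering, each learner receives graded memberships, so a learner can partially belong to several role groups at once.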




base

Computer aided translation technology based on edge computing intelligent algorithm

This paper investigates computer-aided translation technology based on an edge-computing intelligent algorithm. Using the K-means edge algorithm, it analyses the traditional approach of average resource allocation and the alternative of virtual-machine allocation. Solving online provides a more detailed view of the data at the edge and avoids routing every interaction between network users and the platform, which would otherwise impair the internal operating efficiency of the system. The K-means algorithm partitions the network-user population into several distinct groups, and information resources are tallied according to each group's characteristics. Computer-aided translation technology can significantly improve translation quality, raise translation efficiency, and reduce translation cost.
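A minimal sketch of the K-means grouping step is shown below; the two illustrative features per user record are assumptions, since the paper does not specify the actual user attributes:

```python
import random

# Sketch of the K-means step that partitions network users into groups.
# The two features per user (e.g. normalised request rate and session
# length) are assumptions; the paper's actual attributes are not given.

def kmeans(points, k=2, iters=20):
    """Lloyd's algorithm: alternate nearest-centre assignment and mean update."""
    random.seed(0)
    centres = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centres]
            clusters[dists.index(min(dists))].append(p)
        for j, members in enumerate(clusters):
            if members:   # keep the old centre if a cluster empties out
                centres[j] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centres, clusters

users = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9)]
centres, clusters = kmeans(users)
```

Each resulting group could then be assigned its own resource quota at the edge instead of a uniform average allocation.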




base

Urban public space environment design based on intelligent algorithm and fuzzy control

With the development of urban construction, spatial evolution is influenced by behavioural actors such as enterprises and residents and by environmental factors, leading to some decision-making behaviours that are not conducive to urban public space and environmental design. At the same time, the construction of public areas in some cities is vulnerable to distance, transportation, and human psychological factors, resulting in a decline in the quality of urban human settlements. Urban public space is the guarantee of urban life; standardising it is therefore necessary to improve the quality of the urban living environment. The rapid development of intelligent algorithms and fuzzy control provides technical support for the environmental design of urban public spaces: through the modelling of intelligent algorithms and the construction of fuzzy spaces, the diverse needs of urban public-space design can be met.
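A basic element of such fuzzy-space modelling is a membership function. The triangular form below, with hypothetical breakpoints for a "comfortable walking distance" variable, is one common choice and is not taken from the paper:

```python
# Sketch of a triangular fuzzy membership function of the sort used in
# fuzzy-control models of spatial comfort; the breakpoints are hypothetical.

def triangular(x, a, b, c):
    """Membership rising from a to a peak at b, falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# e.g. "comfortable walking distance" peaks at 300 m and vanishes by 800 m
print(triangular(300, 0, 300, 800))  # 1.0
print(triangular(550, 0, 300, 800))  # 0.5
```

A fuzzy controller would combine several such memberships (distance, transport access, perceived safety) through rules before defuzzifying into a design score.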




base

Design of data mining system for sports training biochemical indicators based on artificial intelligence and association rules

Physiological indicators are an important basis for assessing the physiological health of the human body and play an important role in medical practice. Association rules have likewise been a major research hotspot in recent years. This study aims to create a data-mining system that applies association rules and artificial intelligence to the biochemical indicators of sports training. Markov logic is used for network creation and system training, and tests examine whether the Markov logic network can be associated with the training system. The results show accuracy and recall of about 90%, indicating that establishing sports-training biochemical indicators on a Markov logic network is feasible; the system offers general, guiding, and constructive value, helping to ensure that the construction of training-system indicators does not go in the wrong direction.
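The association-rule component can be illustrated with a minimal support/confidence miner. The indicator names and records below are hypothetical, and the paper's actual system uses a Markov logic network rather than this simple enumeration:

```python
from itertools import combinations

# Minimal support/confidence rule miner over training records. Indicator
# names and records are hypothetical; the paper's system is based on a
# Markov logic network, not this simple pairwise enumeration.

def mine_rules(transactions, min_support=0.5, min_conf=0.8):
    """Return (lhs, rhs, support, confidence) for single-item rules."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    rules = []
    for a, b in combinations(items, 2):
        for lhs, rhs in ((a, b), (b, a)):
            both = sum(1 for t in transactions if lhs in t and rhs in t)
            lhs_count = sum(1 for t in transactions if lhs in t)
            if (both / n >= min_support and lhs_count
                    and both / lhs_count >= min_conf):
                rules.append((lhs, rhs, both / n, both / lhs_count))
    return rules

records = [
    {"high_lactate", "low_hemoglobin"},
    {"high_lactate", "low_hemoglobin"},
    {"high_lactate"},
    {"normal"},
]
rules = mine_rules(records)
```

Here only rules meeting both thresholds survive, mirroring how weak indicator associations would be filtered out of the training system.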




base

Digital architectural decoration design and production based on computer image

The digitisation of computer images has transformed production and lifestyles and has promoted the development of the construction industry. This article aims to study architectural decoration design and production in a computer-network environment and to promote the ecological development of indoor and outdoor design in the construction industry. It proposes using virtual reality technology with image digitisation to guide architectural decoration design. In a comparative analysis of the weights of architectural decoration elements, among the second-level elements, spatial function has the largest weight (0.2155) and landscape the smallest (0.0113); among the third-level elements, the service area has the largest weight (0.0976) and the fence frame the smallest (0.0119).




base

The role of pre-formation intangible assets in the endowment of science-based university spin-offs

Science-based university spin-offs face considerable technology and market uncertainty over extended periods of time, increasing the challenges of commercialisation. Scientist-entrepreneurs can play formative roles in commercialising lab-based scientific inventions through the formation of well-endowed university spin-offs. Through case study analysis of three science-based university spin-offs within a biotechnology innovation ecosystem, we unpack the impact of <i>pre-formation</i> intangible assets of academic scientists (research excellence, patenting, and international networks) and their entrepreneurial capabilities on spin-off performance. We find evidence that the pre-formation entrepreneurial capabilities of academic scientists can endow science-based university spin-offs by leveraging the scientists' pre-formation intangible assets. A theory-driven model depicting the role of pre-formation intangible assets and entrepreneurial capabilities in endowing science-based university spin-offs is developed. Recommendations are provided for scholars, practitioners, and policymakers to more effectively commercialise high potential inventions in the university lab through the development and deployment of pre-formation intangible assets and entrepreneurial capabilities.




base

E-portfolio Assessment System for an Outcome-Based Information Technology Curriculum




base

Database Security: What Students Need to Know




base

A Tools-Based Approach to Teaching Data Mining Methods




base

Fostering Digital Literacy through Web-based Collaborative Inquiry Learning – A Case Study




base

The Implementation of Hypertext-based Learning Media for a Local Cultural Based Learning




base

Effective Adoption of Tablets in Post-Secondary Education: Recommendations Based on a Trial of iPads in University Classes




base

Technology-based Participatory Learning for Indigenous Children in Chiapas Schools, Mexico




base

Enhancing Classroom Learning Experience by Providing Structures to Microblogging-based Activities




base

Designing a Mobile-app-based Collaborative Learning System




base

Automatic Grading of Spreadsheet and Database Skills




base

A Cross-Case Analysis of the Use of Web-Based ePortfolios in Higher Education




base

A Database Practicum for Teaching Database Administration and Software Development at Regis University




base

Using Student e-Portfolios to Facilitate Learning Objective Achievements in an Outcome-Based University




base

Advancing Creative Visual Thinking with Constructive Function-based Modelling




base

A Template-Based Short Course Concept on Android Application Development




base

A Real-time Plagiarism Detection Tool for Computer-based Assessments

Aim/Purpose: The aim of this article is to develop a tool that detects plagiarism in real time among students being evaluated in a computer-based assessment setting. Background: Cheating, or copying all or part of the source code of a program, is a serious concern for academic institutions. Many institutions apply a combination of policy-driven and plagiarism-detection approaches. These mechanisms are either proactive or reactive and focus on identifying, catching, and punishing those found to have cheated or plagiarized. To counter plagiarism more effectively, mechanisms that detect cheating or collusion in real time are desirable. Methodology: Literature review and prototyping were used to develop the tool; the prototype was implemented in the Delphi programming language using Indy components. Contribution: A real-time plagiarism detection tool suitable for use in a computer-based assessment setting is developed. This tool can be used to complement other existing mechanisms. Findings: The developed tool was tested in an environment of 55 personal computers and found to be effective in detecting unauthorized access to the internet, the intranet, and USB ports on the personal computers. Recommendations for Practitioners: The developed tool is suitable for use in any environment where computer-based evaluation may be conducted. Recommendation for Researchers: This work provides a set of criteria for developing a real-time plagiarism prevention tool for use in a computer-based assessment. Impact on Society: The developed tool prevents academic dishonesty during an assessment process, thereby instilling confidence in assessment processes and upholding the respectability of the education system in society. Future Research: As future work, we propose comparing our tool with similar tools in terms of performance and features, and extending the work to test the tool's scalability to larger settings.
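One check such a tool might perform is probing whether outbound internet access, which should be blocked during an assessment, is possible from the exam machine. The abstract's tool is built in Delphi with Indy components; this Python fragment only illustrates the idea, and the probe address is a documentation-only placeholder:

```python
import socket

# Sketch of a connectivity probe an invigilation tool might run. The real
# tool is written in Delphi/Indy; this is an illustrative Python analogue.

def internet_reachable(host, port=80, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # covers refused, unreachable, and timed-out connections
        return False

# 192.0.2.1 is a documentation-only address, so this probe should fail
print(internet_reachable("192.0.2.1", timeout=0.2))  # False
```

A real deployment would pair such probes with USB-port and intranet monitoring and report violations to the invigilator as they occur.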