Runtime uncertainty, such as unpredictable resource unavailability, changing environmental conditions and user needs, and system intrusions or faults, represents one of the main current challenges of self-adaptive systems. Moreover, today's systems are increasingly complex, distributed and decentralized, and therefore have to reason about and cope with more and more unpredictable events. Approaches to deal with such changing requirements in today's complex systems are still missing. This work presents SACRE (Smart Adaptation through Contextual REquirements), our approach leveraging an adaptation feedback loop to detect the contextual requirements of self-adaptive systems affected by uncertainty and to integrate machine learning techniques to determine the best operationalization of context based on data sensed at runtime. SACRE advances our former approach, ACon, whose focus was on adapting the context in contextual requirements and on their basic implementation. SACRE primarily focuses on architectural decisions, addressing the engineering challenges of self-adaptive systems. Furthering the work on ACon, in this paper we evaluate the entire approach in different uncertainty scenarios in real time in the extremely demanding domain of smart vehicles. The real-time evaluation is conducted in a simulated environment in which the smart vehicle is implemented through software components. The evaluation results provide empirical evidence of the applicability of SACRE in real and complex software system domains.
We consider the NP-hard problem of scheduling n jobs in F identical parallel flow shops, each consisting of a series of m machines, under a blocking constraint. The applied criterion is to minimize the makespan, i.e., the maximum completion time of all the jobs across the F flow shops (lines). The Parallel Flow Shop Scheduling Problem (PFSP) is conceptually similar to another problem known in the literature as the Distributed Permutation Flow Shop Scheduling Problem (DPFSP), which allows modeling the scheduling process in companies with more than one factory, each factory having a flow shop configuration. Therefore, the proposed methods can solve the scheduling problem under the blocking constraint in both situations, which, to the best of our knowledge, has not been studied previously. In this paper, we propose a mathematical model along with several constructive and improvement heuristics to solve the parallel blocking flow shop problem (PBFSP) and thus minimize the maximum completion time among lines. The proposed constructive procedures use two approaches that are completely different from those proposed in the literature. These methods are used as initial solution procedures for an iterated local search (ILS) and an iterated greedy algorithm (IGA), both of which are combined with a variable neighborhood search (VNS). The proposed constructive procedures and improvement methods take into account the characteristics of the problem. The computational evaluation demonstrates that both of them, especially the IGA, perform considerably better than the algorithms adapted from the DPFSP literature.
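As an illustration of the blocking constraint, the makespan of a fixed job permutation on a single flow shop line can be computed with a departure-time recursion: a job may leave machine j only once machine j+1 is free, because there are no intermediate buffers. The sketch below is a minimal illustration with hypothetical helper names, not code from the paper:

```python
def blocking_makespan(seq, p):
    """Makespan of a job permutation in one flow shop line with blocking.

    seq: job indices in processing order
    p:   p[job][machine] processing times
    """
    m = len(p[0])
    # dep[j] = departure time of the previously scheduled job from machine j
    # (dep[0] is the start time on machine 1).
    dep = [0.0] * (m + 1)
    for job in seq:
        new = [0.0] * (m + 1)
        new[0] = dep[1]  # machine 1 is free once the previous job has left it
        for j in range(1, m):
            # the job leaves machine j only when it is finished AND
            # machine j+1 has been vacated by the previous job (blocking)
            new[j] = max(new[j - 1] + p[job][j - 1], dep[j + 1])
        new[m] = new[m - 1] + p[job][m - 1]  # the last machine never blocks
        dep = new
    return dep[m]
```

For the parallel version, the same recursion would be applied per line after jobs are assigned to lines, and the makespan is the maximum over lines.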
Developing data-driven fault detection systems for chemical plants requires managing uncertain data labels and dynamic attributes arising from operator-process interactions. Mislabeled data is a well-known problem in computer science that has received scarce attention from the process systems community. This work introduces and examines the effects of operator actions on records and labels, and the consequences for the development of detection models. Using a state space model, this work proposes an iterative relabeling scheme for retraining classifiers that continuously refines dynamic attributes and labels. Three case studies are presented: a reactor as a motivating example, flooding in a simulated de-Butanizer column as a complex case, and foaming in an absorber as an industrial challenge. For the first case, detection accuracy is shown to increase by 14% while operating costs are reduced by 20%. Moreover, for the de-Butanizer column, the performance of the proposed strategy is shown to be 10% higher than that of the filtering strategy. Finally, promising results are reported regarding efficient strategies to deal with the presented problem.
This paper analyzes a realistic variant of the Permutation Flow-Shop Problem (PFSP) by considering a non-smooth objective function that takes into account not only the traditional makespan cost but also failure-risk costs due to uninterrupted operation of machines. After completing a literature review on the issue, the paper formulates an original mathematical model to describe this new PFSP variant. Then, a Biased-Randomized Iterated Local Search (BRILS) algorithm is proposed as an efficient solving approach. An oriented (biased) random behavior is introduced in the well-known NEH heuristic to generate an initial solution. From this initial solution, the algorithm is able to generate a large number of alternative good solutions without requiring a complex setting of parameters. The relative simplicity of our approach is particularly useful in the presence of non-smooth objective functions, for which exact optimization methods may fail to reach their full potential. The gains of considering failure-risk costs during the exploration of the solution space are analyzed throughout a series of computational experiments. To promote reproducibility, these experiments are based on a set of traditional benchmark instances. Moreover, the performance of the proposed algorithm is compared against other state-of-the-art metaheuristic approaches, which have been conveniently adapted to consider failure-risk costs during the solving process. The proposed BRILS approach can be easily extended to other combinatorial optimization problems with similar non-smooth objective functions.
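The biased-randomized selection at the heart of BRILS-style constructive procedures can be sketched as follows: jobs are ranked (here by total processing time, in the spirit of NEH) and then drawn one by one with a geometric distribution skewed toward the top of the list, so many good-but-different starting sequences can be generated with a single parameter. Function names and the parameter `beta` are illustrative, not taken from the paper:

```python
import math
import random

def biased_randomized_order(totals, beta=0.3, seed=None):
    """Draw a job order from a ranked list using a geometric bias:
    the first-ranked job is most likely, with probability decaying
    roughly by a factor of (1 - beta) per rank."""
    rng = random.Random(seed)
    ranked = sorted(range(len(totals)), key=lambda j: -totals[j])  # NEH-style ranking
    order = []
    while ranked:
        u = 1.0 - rng.random()  # u in (0, 1], so log(u) is defined
        # geometric sample folded back into the current list length
        k = int(math.log(u) / math.log(1.0 - beta)) % len(ranked)
        order.append(ranked.pop(k))
    return order
```

With `beta` close to 1 the selection collapses to the deterministic ranked order; smaller values of `beta` diversify the constructed solutions while still favouring promising jobs.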
Dimensionality reduction is required to produce visualisations of high-dimensional data. In this framework, one of the most straightforward approaches to visualising high-dimensional data is based on reducing complexity and applying linear projections while tumbling the projection axes in a defined sequence, which generates a Grand Tour of the data. We propose using smooth nonlinear topographic maps of the data distribution to guide the Grand Tour, increasing the effectiveness of this approach by prioritising the linear views of the data that are most consistent with the global data structure in these maps. A further consequence of this approach is to enable direct visualisation of the topographic map onto projective spaces that discern structure in the data. The experimental results on standard databases reported in this paper, using self-organising maps and generative topographic mapping, illustrate the practical value of the proposed approach. The main novelty of our proposal is the definition of a systematic way to guide the search for data views in the Grand Tour, selecting and prioritising some of them based on nonlinear manifold models.
Context and motivation: Service-Based Systems are highly dynamic software systems composed of several web services. In contrast to other types of systems, Service-Based Systems rely on service providers to ensure that their web services comply with the agreed Quality of Service. Delivering an adequate Quality of Service is a critical and significant challenge that requires monitoring along the different activities in the Service-Based System's lifecycle. Question/problem: Current monitoring systems are designed to support specific activities (e.g. service selection, adaptation, etc.), but do not fulfil the requirements of all the activities in the Service-Based System's lifecycle. Principal ideas/results: In this paper, we present SALMon, a QoS monitoring framework able to support the whole Service-Based System's lifecycle. SALMon is highly versatile, since it combines different strategies for its configuration (model-based and invocation-based) and for the way it obtains the Quality of Service (passive monitoring and online testing). Furthermore, its architecture supports easy extensibility with new quality attributes, independence from the technology of the monitored services and interoperability with other tools. We conducted a performance evaluation over real web services using suitable estimators for response time and evaluated both its overhead and capacity. Contribution: SALMon provides an infrastructure that can be used in very different scenarios, as exemplified in this paper, both in terms of the lifecycle phase addressed and the type of system (pure Service-Oriented Architecture, cloud-based systems, etc.). This diversity of situations makes SALMon a significant contribution both for practitioners who may be interested in integrating a working technology into their software solutions, and for researchers who can conduct their investigations on top of a reliable infrastructure.
Cortés, D.; Gordillo, N.; Riba Romeva, C.; Lloveras, J. Expert systems with applications Vol. 42, num. 15-16, p. 6147-6154 DOI: 10.1016/j.eswa.2015.03.030 Publication date: 2015-09 Journal article
This work presents a methodology for the selection and comparison of non-traditional sheet metal cutting processes as a new structure of selection by means of an expert system. The model is generated from a knowledge base acquired from diverse experts, and the use of fuzzy logic techniques. With a simple input of the parameters of a piece, the system offers the most appropriate cutting options (based on the requirements of the piece) allowing a non-expert user selecting the most appropriate process with emphasis on a predefined priority: finish, cost or time. The selection process consists of four base algorithms that measure the attributes of each process as a dependent indicator of the other processes, that is, a pre-selection that considers (1) the process capability to cut a material-thickness relation, (2) the speed that can be achieved with this relation, (3) the inherent complexity of the piece to be cut, and (4) the process tolerance. Results of experiments under three different approaches prove that the expert system here presented accurately prioritizes the most convenient cutting processes.
This paper presents a high-performing Discrete Artificial Bee Colony (DABC) algorithm for the blocking flow shop problem with flow time criterion. To develop the proposed algorithm, we considered four strategies for the food source phase and two strategies for each of the three remaining phases (employed bees, onlookers and scouts). One of the strategies tested in the food source phase and one implemented in the employed bees phase are new, and both have proven to be very effective for the problem at hand. In particular, the initialization scheme named HPF2(¿, µ), which is used to construct the initial food sources, is shown in the computational evaluation to be one of the main procedures that allow the DABC_RCT to obtain good solutions for this problem. To find the best configuration of the algorithm, we used design of experiments (DOE). This technique has been used extensively in the literature to calibrate the parameters of algorithms, but not to select their configuration. A comparison with other algorithms proposed in the literature for this problem demonstrates the effectiveness and superiority of the DABC_RCT.
Test assembly design problems appear in the areas of psychology and education, among others. The goal of these problems is to construct one or multiple tests to evaluate the test subject. This paper studies a recent formulation of the problem known as the one-dimensional minimax bin-packing problem with bin size constraints (MINIMAX_BSC). In the MINIMAX_BSC, items are initially divided into groups, and multiple tests need to be constructed using a single item from each group while minimizing differences among the tests. We first show that the problem is NP-hard, which remained an open question. Second, we propose three different local search neighborhoods derived from the exact resolution of special cases of the problem, and combine them into a variable neighborhood search (VNS) metaheuristic. Finally, we test the proposed algorithm using real-life-based instances. The results show that the algorithm is able to obtain optimal or near-optimal solutions for instances with up to 60,000-item pools. Consequently, the algorithm is a viable option for designing large-scale tests, as well as for providing tests in small-sized online settings such as those found in e-learning platforms.
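The general shape of a VNS metaheuristic can be sketched generically: shake the incumbent with the current neighborhood, keep improvements, and fall back to progressively larger neighborhoods on failure. The skeleton below uses illustrative names and toy neighborhoods; the paper's actual neighborhoods derive from exact resolutions of special cases of the problem:

```python
import random

def vns(initial, neighborhoods, cost, iters=500, seed=0):
    """Minimal variable neighborhood search: draw a candidate from
    neighborhood k, accept strict improvements, and restart from the
    first neighborhood after each success."""
    rng = random.Random(seed)
    best = initial
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best, rng)
            if cost(candidate) < cost(best):
                best, k = candidate, 0   # improvement: back to the first neighborhood
            else:
                k += 1                   # no luck: try a larger neighborhood
    return best

# Toy usage: minimise (x - 3)^2 over the integers with +-1 and +-2 moves.
steps = [lambda x, r: x + r.choice([-1, 1]),
         lambda x, r: x + r.choice([-2, 2])]
```

In the test assembly setting, a solution would be an assignment of one item per group to each test, and the neighborhoods would swap or reassign items between tests.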
Mijumbi, R.; Gorricho, J.; Serrat, J.; Shen, M.; Xu, K.; Yang, K. Expert systems with applications Vol. 42, num. 3, p. 1376-1390 DOI: 10.1016/j.eswa.2014.08.058 Publication date: 2015-02-15 Journal article
Network virtualisation promises to lead to better manageability of the future Internet by allowing for adaptable sharing of physical network resources among different virtual networks. However, the sharing of resources is not trivial, as virtual nodes and links should first be mapped onto substrate nodes and links, and thereafter the allocated resources managed throughout the lifetime of the virtual network. In this paper, we design and evaluate reinforcement learning-based neuro-fuzzy algorithms that perform dynamic, decentralised and coordinated self-management of substrate network resources. The objective is to achieve better efficiency in the utilisation of substrate network resources while ensuring that the quality of service requirements of the virtual networks are not violated. The proposed algorithms are evaluated through comparisons with a Q-learning-based approach as well as two static resource allocation schemes.
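For reference, the Q-learning baseline used in such comparisons relies on the standard tabular update rule, which moves the value of a state-action pair toward the bootstrapped target r + γ·max Q(s', ·). A minimal sketch (dictionary-based table, illustrative names; the paper's own algorithms are neuro-fuzzy, not tabular):

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step.

    Q: dict mapping state -> dict of action -> value.
    Moves Q[state][action] toward reward + gamma * max_a' Q[next_state][a'].
    """
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]
```

In a resource-allocation setting, states would encode observed substrate utilisation, actions would adjust allocated capacity, and the reward would balance efficiency against QoS violations.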
Starting from a variant of the sequencing problem for mixed-product lines (MMSP-W: Mixed-Model Sequencing Problem with Workload Minimization), we propose two new models that incorporate a set of working conditions regarding the human resources of the workstations on the line. These conditions come from collective agreements and therefore must be respected by both the company and the labor unions. The first model takes into account the saturation limit of the workstations, and the second model also includes the activation of the operators throughout the working day. Two computational experiments were carried out using a case study of the Nissan motor plant in Barcelona, with two main objectives: (1) to study the repercussions of the saturation limit on the decrease in productivity of the line, and (2) to evaluate the recovery of productivity on the line via both the activation of operators, while maintaining the same quality of working conditions achieved by limiting saturation, and auxiliary processors. The results show that limiting saturation leads to a significant increase in work overload, which translates into average economic losses of 28,731.8 Euros/day. However, the reduction in productivity may be counteracted by increasing the work pace factor at certain moments of the workday and/or by incorporating auxiliary processors into the line.
Neuro-oncologists must ultimately rely on their acquired knowledge and accumulated experience to undertake the sensitive task of brain tumour diagnosis. This task strongly depends on indirect, non-invasive measurements, which are the source of valuable data in the form of signals and images. Expert radiologists should benefit from their use as part of an at least partially automated computer-based medical decision support system. This paper focuses on Magnetic Resonance Spectroscopy signal analysis and illustrates a method that combines Gaussian Decomposition, dimensionality reduction by Moving Window with Variance Analysis and classification using adaptively regularized Artificial Neural Networks. The method yields encouraging results in the task of binary classification of human brain tumours, even for tumour types that have seldom been analyzed from this viewpoint.
Rodriguez-Martin, D.; Sama, A.; Perez, C.; Catala, A.; Cabestany, J.; Rodríguez, A. Expert systems with applications Vol. 40, num. 18, p. 7203-7211 DOI: 10.1016/j.eswa.2013.07.028 Publication date: 2013-12 Journal article
Analysis of human body movement is an important research area, especially for health applications. In order to assess the quality of life of people with mobility problems, such as Parkinson's disease or stroke patients, it is crucial to monitor and assess their daily life activities. The main goal of this work is the characterization of basic activities using a single triaxial accelerometer located at the waist. This paper presents a novel postural detection algorithm based on SVM methods which is able to detect and identify Walking, Stand, Sit, Lying, Sit-to-Stand, Stand-to-Sit, Bending up/down, Lying-from-Sit and Sit-from-Lying transitions, with a sensitivity of 97% and a specificity of 84% over 2884 postures analyzed from 31 healthy volunteers. The parameters and models found have been tested on another dataset from Parkinson's disease patients, achieving results of 98% sensitivity and 78% specificity in postural transitions. The proposed algorithm has been optimized to be easily implemented in a real-time system for online monitoring applications.
In this paper, we present an extension to the mixed-model sequencing problem with work overload minimisation (MMSP-W) for production lines with serial workstations and parallel homogeneous processors. The extension is intended to meet the industrial need to sequence various products so that the work required and completed at the workstations over time is as constant as possible. Several approaches are proposed to formulate the problem, giving rise to different models. Four models are selected, and their performances are compared after running the Gurobi solver, using instances from the literature and a case study of the Nissan powertrain plant in Barcelona.
In this paper, a flexible role-based architecture for Body Sensor Networks (BSNs) is introduced. The proposed non-layered context-aware architecture is application-oriented and able to incorporate future applications. Particular applications have certain requirements. Functional units (roles) instead of protocol layers are designed to perform the required tasks for applications to work properly. The role data of an application is inserted in the role headers of the container and is available for other applications with the same basic, specific or particular roles. Furthermore, the performance of Automatic Repeat Request (ARQ), Forward Error Correction (FEC) block codes and FEC convolutional codes with respect to the throughput efficiency has also been analyzed for a BSN following the proposed role-based architecture. The numerical results show that the proposed role-based architecture outperforms the traditional layered architecture with respect to the throughput efficiency for all error control schemes. FEC block codes are able to maintain a high throughput efficiency over longer distances because the hop length extension technique is applied.
Steganography is the process of hiding information in a host signal. Transparency refers to the ability to avoid suspicion about the existence of a secret message. The most popular mechanisms for hiding data in audio signals are Least Significant Bit (LSB) substitution, Frequency Masking (FM), Spread Spectrum (SS), and the Shift Spectrum Algorithm (SSA). In this paper, we adapt the Frequency Masking concept using an efficient sorting of the wavelet coefficients of the secret messages and use an indirect LSB substitution for hiding speech signals within speech signals. The experimental results show that the proposed model, the Efficient Wavelet Masking (EWM) scheme, has a hiding capacity significantly higher than the Spread and Shift Spectrum Algorithms and, additionally, a statistical transparency higher than all of the above-mentioned mechanisms. Moreover, the transparency does not depend on the chosen host signal, because the wavelet sorting guarantees the adaptation of the secret message to the host signal.
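Of the mechanisms listed above, plain LSB substitution is the simplest; a minimal sketch over integer audio samples is shown below. Note that EWM itself applies an indirect LSB substitution to sorted wavelet coefficients, not the direct per-sample form sketched here:

```python
def lsb_embed(host, bits):
    """Hide one bit per sample by overwriting each sample's least
    significant bit with a message bit."""
    stego = list(host)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b  # clear the LSB, then set it to b
    return stego

def lsb_extract(stego, n):
    """Recover the first n hidden bits by reading each sample's LSB."""
    return [s & 1 for s in stego[:n]]
```

Because each sample changes by at most one quantization step, direct LSB embedding is nearly inaudible, but it is also fragile and statistically detectable, which is what the masking-based schemes above aim to improve on.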
Bistarelli, S.; Gadducci, F.; Larrosa, J.; Rollon, E.; Santini, F. Expert systems with applications Vol. 39, num. 2, p. 1708-1717 DOI: 10.1016/j.eswa.2011.06.062 Publication date: 2012-02-01 Journal article
Angulo, C.; Cabestany, J.; Rodriguez, P.; Batlle, M.; González, A.; de Campos, S. Expert systems with applications Vol. 39, num. 1, p. 1011-1020 DOI: 10.1016/j.eswa.2011.07.102 Publication date: 2012 Journal article
In order to prevent and reduce water pollution, promote sustainable use, protect the environment and enhance the status of aquatic ecosystems, this article deals with the application of advanced mathematical techniques designed to aid in the management of records from different water quality monitoring networks. These studies include the development of a software tool for decision support, based on the application of fuzzy logic techniques, which can indicate water quality episodes from the behaviour of variables measured at continuous automatic water control networks. Using a few physical-chemical variables recorded continuously, the expert system is able to obtain water quality phenomena indicators which can be associated, with a high probability of a cause-effect relationship, with human pressure on the water environment, such as urban discharges or diffuse agricultural pollution. In this sense, in the proposed expert system, automatic water quality control networks complement the manual sampling of official administrative networks and laboratory analysis, providing information related to specific events (discharges) or continuous processes (eutrophication, fish risk) which can hardly be detected by discrete sampling.
This paper introduces a new method to implement a motion recognition process using a mobile phone fitted with an accelerometer. The data collected from the accelerometer are interpreted by means of a statistical study and machine learning algorithms in order to obtain a classification function. That function is then implemented in a mobile phone, and online experiments are carried out. Experimental results show that this approach can effectively recognize different human activities with a high level of accuracy.
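As an illustration of the kind of input such a classification function consumes, a typical first step is to reduce each window of accelerometer samples to summary features. The features below (mean and standard deviation of the acceleration magnitude) are common choices in activity recognition, not necessarily the ones used in the paper:

```python
import math

def window_features(ax, ay, az):
    """Summarise one accelerometer window: mean and standard deviation
    of the acceleration magnitude sqrt(x^2 + y^2 + z^2)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in zip(ax, ay, az)]
    n = len(mags)
    mean = sum(mags) / n
    var = sum((m - mean) ** 2 for m in mags) / n
    return mean, math.sqrt(var)
```

The magnitude mean separates static from dynamic postures (gravity dominates at rest), while its variability distinguishes, e.g., walking from standing; a classifier is then trained on these per-window feature vectors.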
We propose a modification of the Shapley value for monotonic games with a coalition structure. The resulting coalitional value is a twofold extension of the Shapley value in the following sense: (1) the amount obtained by any union coincides with the Shapley value of the union in the quotient game; and (2) the players of the union share this amount proportionally to their Shapley value in the original game (i.e., without unions). We provide axiomatic characterizations of this value close to those existing in the literature for the Owen value and include applications to coalition formation in bankruptcy and voting problems.
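For context, the classical Shapley value that the proposed coalitional value extends can be computed exactly for small games by averaging each player's marginal contribution over all orderings. The sketch below uses an illustrative characteristic-function interface (a callable on frozensets), not the paper's formulation:

```python
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value: average each player's marginal contribution
    v(S | {p}) - v(S) over every ordering of the players.
    v maps a frozenset coalition to its worth; v(frozenset()) should be 0."""
    orders = list(permutations(players))
    phi = {p: 0.0 for p in players}
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: total / len(orders) for p, total in phi.items()}
```

For instance, in the three-player majority game (worth 1 for any coalition of at least two players), symmetry gives each player 1/3. The coalitional value proposed in the paper then redistributes each union's quotient-game Shapley value among its members in proportion to their Shapley values in the original game.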
Ribas, V.; Vellido, A.; Ruiz-Rodríguez, Juan C.; Rello, J. Expert systems with applications Vol. 39, num. 22, p. 1937-1943 DOI: 10.1016/j.eswa.2011.08.054 Publication date: 2011-02-01 Journal article
Most decision support systems for balancing industrial assembly lines are designed to report a huge number of possible line configurations according to several criteria. In this contribution, we tackle a more realistic variant of the classical assembly line balancing formulation: time and space assembly line balancing. Our goal is to study the influence of incorporating user preferences based on Nissan automotive domain knowledge to guide the multi-objective search process, with two different aims. First, to reduce the number of equally preferred assembly line configurations (i.e., solutions in the decision space) according to the requirements of Nissan plants. Second, to provide plant managers only with configurations of contextual interest in the objective space (i.e., solutions within their preferred Pareto front region) based on real-world economic variables. We address this problem with a multi-objective ant colony optimisation algorithm. Using the real data of the Nissan Pathfinder engine, a solid empirical study is carried out to obtain the most useful solutions for the decision makers in six different Nissan scenarios around the world.
Gaudioso, E.; Montero, M.; Talavera, L.; Hernandez del Olmo, F. Expert systems with applications Vol. 36, num. 2, p. 2260-2265 DOI: 10.1016/j.eswa.2007.12.035 Publication date: 2009-03 Journal article
Collaborative student modeling in adaptive learning environments allows learners to inspect and modify their own student models. It is often considered a collaboration between students and the system to promote learners' reflection and to collaboratively assess the course. When adaptive learning environments are used in the classroom, teachers act as a guide through the learning process. Thus, they need to monitor students' interactions in order to understand and evaluate their activities. Although the knowledge gained through this monitoring can be extremely useful for student modeling, collaboration between teachers and the system to achieve this goal has not been considered in the literature. In this paper we present a framework to support teachers in this task. In order to prove the usefulness of this framework, we have implemented and evaluated it in an adaptive web-based educational system called PDinamet.