Dealing with missing data is of great practical and theoretical interest in forecasting applications. In this study, we address the problem of forecasting with missing data in smart grid and BEMS applications, where the information from home area sensors and/or smart meters is sometimes missing, which may hinder or even prevent forecasting for the next hours and days. In particular, we focus on a Soft Computing technique called Fuzzy Inductive Reasoning (FIR) and its improved version, flexible FIR, which can cope with missing information in the forecasting process. In this article, eight different strategies for flexible FIR forecasting are defined and studied, taking into account the causal relevance of input variables, the consistency of predictions, an inertia criterion and K-Nearest Neighbours. Furthermore, we evaluate the implications for prediction accuracy and number of predictions when the number of Missing Values (MVs) in the training dataset is increased progressively. To this end, a real smart grid forecasting application, namely electricity load forecasting, has been chosen for this study. The results show that all eight proposed strategies are able to cope with MVs and take advantage of the inherent information in the data, with better results for those strategies making use of causal relevance. In addition, the robustness of flexible FIR and its eight strategies is demonstrated, given that the percentage of electricity load predictions obtained from the test dataset is on average 96.15% when the percentage of MVs in the training dataset is around 73%.
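To make the K-Nearest Neighbours ingredient concrete, below is a minimal sketch of an abstaining KNN forecaster over numeric patterns with missing values. FIR itself works on fuzzified qualitative data, so the Euclidean distance, the complete-rows filter and the abstention rule here are illustrative assumptions, not the paper's algorithm; they only mirror the trade-off between accuracy and number of predictions discussed above.

```python
import numpy as np

def knn_forecast_with_mvs(X_train, y_train, x_query, k=5):
    """Illustrative KNN fallback for forecasting with missing values (MVs).

    Training patterns containing NaNs are skipped; if fewer than k complete
    patterns remain, the forecaster abstains (returns None), trading number
    of predictions for accuracy. x_query is assumed complete.
    """
    complete = ~np.isnan(X_train).any(axis=1)   # usable training patterns
    Xc, yc = X_train[complete], y_train[complete]
    if len(Xc) < k:
        return None                             # abstain rather than guess
    d = np.linalg.norm(Xc - x_query, axis=1)    # Euclidean distance
    nearest = np.argsort(d)[:k]
    return float(np.mean(yc[nearest]))          # average of the k neighbours
```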
In this paper, we present a new automatic screening method to assess whether a patient from ambulatory care or the emergency department should be referred to a cardiology service. The method is based on deep neural networks with pretraining and takes a raw, unannotated ECG signal as input.
This work is based on a prospective clinical study that took place at Hospital Clínic in Barcelona between 2011 and 2012 and recruited 1390 patients. For each patient, we recorded a 12-lead ECG, and the diagnosis was made by the cardiology service at the same hospital. Normal, borderline normal and normal variant ECGs were labelled as normal, and the rest as abnormal.
Our deep neural networks with pretraining were tested through cross-validation with a cohort of 416 test patients. The performance of our model was compared against other standard classification methods such as neural networks without pretraining, Support Vector Machines, Extreme Learning Machines, k-Nearest Neighbours and a professional classification algorithm certified for medical use that annotates the raw ECG signals prior to classification.
The resulting best classifier was a pretrained neural network with three hidden layers and 700 units in every layer. This network yielded an accuracy of 0.8552, a sensitivity of 0.9176 and a specificity of 0.7827. The best alternative classification method was a Support Vector Machine with a Gaussian kernel, which yielded an accuracy of 0.8476, a sensitivity of 0.9446 and a specificity of 0.7346. The professional classification algorithm yielded an accuracy of 0.8407, a sensitivity of 0.8558 and a specificity of 0.8214.
Neural networks with pretraining automatically obtain a representation of the input data without resorting to any annotation and thus simplify the process of assessing the normality of ECG signals. The results we have obtained are slightly better than those obtained with the professional classification system and, for some network configurations, they can be considered interchangeable.
Neural networks with pretraining open up a promising line of research for the automatic assessment of ECG signals that may be used in the future in clinical practice.
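As a rough illustration of the kind of model described above (three hidden layers of 700 units, initialized by unsupervised pretraining), the sketch below uses greedy layer-wise autoencoder pretraining in Keras. The abstract does not fix the pretraining scheme, activations or optimizer, so those choices are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def pretrain_layer(X, n_units, epochs=10):
    """Greedy layer-wise pretraining with a plain autoencoder (one possible
    scheme; the paper does not specify the exact variant here)."""
    inp = keras.Input(shape=(X.shape[1],))
    h = layers.Dense(n_units, activation="sigmoid")(inp)
    out = layers.Dense(X.shape[1], activation="linear")(h)
    ae = keras.Model(inp, out)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(X, X, epochs=epochs, verbose=0)
    encoder = keras.Model(inp, h)
    return encoder, encoder.predict(X, verbose=0)

def build_pretrained_classifier(X, sizes=(700, 700, 700)):
    """Stack pretrained encoders, then add a sigmoid output for
    normal vs abnormal; fine-tune afterwards with model.fit()."""
    model = keras.Sequential([keras.Input(shape=(X.shape[1],))])
    rep = X
    for n in sizes:
        enc, rep = pretrain_layer(rep, n)
        dense = layers.Dense(n, activation="sigmoid")
        model.add(dense)
        dense.set_weights(enc.layers[-1].get_weights())  # copy pretrained weights
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```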
The respiratory control system is one of the most complex physiological systems, and its modeling remains an open problem; different models have been proposed based on the criterion of minimizing the work of breathing (WOB). The aim of this study is twofold: to compare two known models of the respiratory control system, which set the breathing pattern by quantifying the respiratory work; and to assess the influence of using direct-search or evolutionary optimization algorithms on the adjustment of model parameters. The study was carried out using experimental data from a group of healthy volunteers under incremental CO2 inhalation, which were used to adjust the model parameters and to evaluate how closely the WOB equations follow a real breathing pattern. The breathing pattern was characterized by the following variables: tidal volume, inspiratory and expiratory time durations, and total minute ventilation. Different optimization algorithms were considered to determine the most appropriate model from a physiological viewpoint. The algorithms were used for a double optimization: first, to minimize the WOB, and second, to adjust the model parameters. The performance of the optimization algorithms was also evaluated in terms of convergence rate, solution accuracy and precision. Results showed strong differences in the performance of the optimization algorithms according to the constraints and topological features of the function to be optimized. In breathing pattern optimization, sequential quadratic programming (SQP) showed the best performance and convergence speed when the respiratory work was low. In addition, SQP allowed multiple non-linear constraints to be implemented through mathematical expressions in the easiest way. Regarding the adjustment of model parameters to the experimental data, the covariance matrix adaptation evolution strategy (CMA-ES) provided the best-quality solutions, with fast convergence and the best accuracy and precision in both models. CMA-ES achieved the best adjustment because of its good performance on noisy and multi-peaked fitness functions. Although one of the studied models has been much more commonly used to simulate the respiratory response to CO2 inhalation, the results showed that an alternative model has a more appropriate cost function for minimizing WOB from a physiological viewpoint, according to the experimental data.
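A minimal sketch of the double optimization described above: the inner problem (the breathing pattern minimizing WOB) is solved with SciPy's SQP implementation (SLSQP), and the outer problem (fitting model parameters to an observed pattern) with CMA-ES from the `cma` package. The WOB expression, bounds and starting points are placeholders, since the models' actual equations are not reproduced in the abstract.

```python
import numpy as np
from scipy.optimize import minimize
import cma  # pip install cma

def wob(pattern, params):
    """Placeholder work-of-breathing cost; the real models use published
    elastance/resistance equations not given in the abstract."""
    vt, ti, te = pattern          # tidal volume, inspiratory/expiratory times
    E, R = params                 # elastance-like and resistance-like terms
    return E * vt**2 / ti + R * (vt / ti)**2 + 1.0 / te

def optimal_pattern(params):
    """Inner optimization: SLSQP (SciPy's SQP) minimizes WOB under simple
    bounds, mirroring the abstract's use of SQP for the breathing pattern."""
    res = minimize(wob, x0=[0.5, 1.5, 2.5], args=(params,), method="SLSQP",
                   bounds=[(0.2, 2.0), (0.5, 4.0), (0.5, 6.0)])
    return res.x

def fit_params(observed_pattern):
    """Outer optimization: CMA-ES adjusts the model parameters so the
    predicted optimal pattern matches the measured one."""
    def err(params):
        return float(np.sum((optimal_pattern(params) - observed_pattern) ** 2))
    best, _ = cma.fmin2(err, [10.0, 2.0], 1.0,
                        options={"bounds": [0.1, None], "verbose": -9})
    return best
```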
Agell, N.; van Ganzewinkel, C.; Sanchez, M.; Rosello, L.; Prats, F.; Andriessen, P. Applied Soft Computing, Vol. 35, pp. 942-948. DOI: 10.1016/j.asoc.2015.03.024. Publication date: 2015-10. Journal article.
This paper proposes a new model of consensus based on linguistic terms to be implemented in Delphi processes. The model of consensus involves qualitative reasoning techniques and is based on the concept of entropy. The proposed model has the ability to reach consensus automatically without the need for either a moderator or a final interaction among panelists. In addition, it permits panelists to answer with different levels of precision depending on their knowledge of each question. The model defined has been used to establish the relevant features for the definition of a type of chronic disease. A real-case application conducted in the Department of Neonatology of Máxima Medical Center in The Netherlands is presented. This application considers the opinions of stakeholders in neonate health care in order to reach a final consensual definition of chronic pain in neonates.
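The entropy-based consensus idea can be sketched as follows for a single Delphi question. The paper's actual measure over linguistic terms differs; the label weighting, normalization and stopping threshold used here are illustrative assumptions.

```python
import math
from collections import Counter

def entropy_consensus(answers, labels, threshold=0.4):
    """Illustrative entropy-based consensus check for one Delphi question.

    Each answer is a set of adjacent linguistic labels (a wider set means
    a less precise answer). Labels are weighted by 1/len(answer), the
    weights are normalized, and consensus is declared when the Shannon
    entropy (normalized by log of the number of labels) falls below the
    threshold.
    """
    weights = Counter()
    for ans in answers:
        for lab in ans:
            weights[lab] += 1.0 / len(ans)
    total = sum(weights.values())
    probs = [w / total for w in weights.values()]
    h = -sum(p * math.log(p) for p in probs)
    h_norm = h / math.log(len(labels)) if len(labels) > 1 else 0.0
    return h_norm <= threshold, h_norm

labels = ["very low", "low", "medium", "high", "very high"]
answers = [{"high"}, {"high", "very high"}, {"medium", "high"}, {"high"}]
print(entropy_consensus(answers, labels))
```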
Benyoucef, Abou Soufyane; Chouder, A.; Kara, K.; Silvestre, S.; Shaed, Oussama Ait. Applied Soft Computing, Vol. 32, pp. 38-48. DOI: 10.1016/j.asoc.2015.03.047. Publication date: 2015-07-01. Journal article.
The artificial bee colony (ABC) algorithm has several characteristics that make it more attractive than other bio-inspired methods. In particular, it is simple, it uses fewer control parameters and its convergence is independent of the initial conditions. In this paper, a novel artificial bee colony based maximum power point tracking (MPPT) algorithm is proposed. The developed algorithm not only overcomes the common drawbacks of conventional MPPT methods, but also provides a simple and robust MPPT scheme. A co-simulation methodology, combining Matlab/Simulink™ and Cadence/Pspice™, is used to verify the effectiveness of the proposed method and to compare its performance, under dynamic weather conditions, with that of the Particle Swarm Optimization (PSO) based MPPT algorithm. Moreover, a laboratory setup has been realized and used to experimentally validate the proposed ABC-based MPPT algorithm. Simulation and experimental results show the satisfactory performance of the proposed approach.
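A simplified, single-variable ABC sketch for tracking the maximum power point follows. The toy P-V curve, colony sizes and the collapsed employed/onlooker phases are assumptions for illustration; a real controller would evaluate candidate duty cycles against measured panel voltage and current.

```python
import random

def pv_power(d):
    """Toy P-V curve: power as a function of converter duty cycle d in [0, 1].
    A real implementation would measure panel voltage and current instead."""
    return max(0.0, 100 * d * (1 - d**3))   # single-peak placeholder

def abc_mppt(n_food=6, limit=10, cycles=30):
    """Minimal artificial bee colony search for the duty cycle that
    maximizes PV power (employed and onlooker phases are collapsed)."""
    foods = [random.random() for _ in range(n_food)]   # candidate duty cycles
    trials = [0] * n_food
    fit = [pv_power(d) for d in foods]
    for _ in range(cycles):
        for i in range(n_food):
            k = random.randrange(n_food)               # random peer source
            phi = random.uniform(-1, 1)
            cand = min(1.0, max(0.0, foods[i] + phi * (foods[i] - foods[k])))
            cand_fit = pv_power(cand)
            if cand_fit > fit[i]:                      # greedy replacement
                foods[i], fit[i], trials[i] = cand, cand_fit, 0
            else:
                trials[i] += 1
        for i in range(n_food):                        # scout phase
            if trials[i] > limit:
                foods[i] = random.random()
                fit[i] = pv_power(foods[i])
                trials[i] = 0
    best = max(range(n_food), key=lambda i: fit[i])
    return foods[best], fit[best]

print(abc_mppt())
```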
A new support vector machine (SVM), called GSVM, is introduced, specially designed for binary classification problems where balanced accuracy between classes is the objective. Starting from a standard SVM, the GSVM is obtained through a low-cost post-processing strategy that modifies the initial bias: the original SVM bias is shifted so as to improve the geometric mean of the true positive rate and the true negative rate. The proposed solution neither modifies the original optimization problem for SVM training nor introduces new hyper-parameters. Experiments carried out on a large number of datasets (23) show that GSVM obtains the desired balanced accuracy between classes. Furthermore, it outperforms well-known cost-sensitive schemes for SVM without adding complexity or computational cost.
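The post-processing step can be sketched with scikit-learn as follows: after training a standard SVM, candidate bias values are scanned on a validation set and the one maximizing the geometric mean of the true positive and true negative rates is kept. The scanning procedure is an assumption; the paper may compute the bias shift differently.

```python
import numpy as np
from sklearn.svm import SVC

def gsvm_bias(clf, X_val, y_val):
    """Shift the decision threshold of a trained SVM to maximize the
    geometric mean of sensitivity and specificity (labels in {0, 1})."""
    scores = clf.decision_function(X_val)
    best_b, best_g = 0.0, -1.0
    for b in np.unique(scores):                  # candidate bias values
        pred = (scores >= b).astype(int)
        tpr = np.mean(pred[y_val == 1] == 1)     # true positive rate
        tnr = np.mean(pred[y_val == 0] == 0)     # true negative rate
        g = np.sqrt(tpr * tnr)                   # geometric mean
        if g > best_g:
            best_b, best_g = b, g
    return best_b

# illustrative usage:
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# b = gsvm_bias(clf, X_val, y_val)
# y_pred = (clf.decision_function(X_test) >= b).astype(int)
```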
Assembly lines for mass manufacturing incrementally build production items by performing tasks on them as they flow between workstations. The configuration of an assembly line consists of assigning tasks to the different workstations in order to optimize its operation subject to certain constraints, such as the precedence relationships between the tasks. The operation of an assembly line can be optimized by minimizing two conflicting objectives, namely the number of workstations and the physical area these require. This configuration problem is an instance of the TSALBP, which is commonly found in the automotive industry. It is a hard combinatorial optimization problem for which finding the optimal solution may be infeasible or even impossible, but finding a good solution is still of great value to managers configuring the line. We adapt eight different Multi-Objective Ant Colony Optimization (MOACO) algorithms and compare their performance on ten well-known instances of this complex problem. Experiments under different modalities show that the commonly used heuristic functions deteriorate the performance of the algorithms in time-limited scenarios due to the added computational cost. Moreover, even neglecting this cost, the algorithms achieve better performance without such heuristic functions. The algorithms are ranked according to three multi-objective indicators, and the differences between the top four are further examined using statistical significance tests. Additionally, these four best-performing MOACO algorithms compare favourably with the Infeasibility Driven Evolutionary Algorithm (IDEA), designed specifically for industrial optimization problems.
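Since the problem is bi-objective (number of workstations versus required area), MOACO algorithms maintain an archive of non-dominated configurations. A minimal sketch of such a dominance filter is shown below; the tuple encoding of solutions is an assumption.

```python
def pareto_front(solutions):
    """Keep the non-dominated line configurations, where each solution is
    (num_workstations, total_area) and both objectives are minimized."""
    front = []
    for s in solutions:
        dominated = any(o[0] <= s[0] and o[1] <= s[1] and o != s
                        for o in solutions)
        if not dominated:
            front.append(s)
    return front

# (5, 12.0) is dominated by (5, 11.0); (7, 9.5) by (6, 9.5)
print(pareto_front([(5, 12.0), (6, 9.5), (5, 11.0), (7, 9.5)]))
```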
P2P systems are very important for future distributed systems and applications. In such systems, the computational burden can be distributed to the peer nodes of the system. Therefore, in decentralized systems users become actors themselves by sharing, contributing and controlling the resources of the system. This characteristic makes P2P systems very interesting for the development of decentralized applications. Data replication techniques are commonplace in P2P systems: storing copies of the same data at multiple peers improves availability and scalability. The trustworthiness of peers is also very important for safe communication in P2P systems; it can be evaluated based on the reputation and actual behaviour of peers when providing services to other peers. In this paper, we propose two fuzzy-based systems, one for data replication and one for peer trustworthiness, for the JXTA-Overlay P2P platform. The simulation results show that in the first system the replication factor increases proportionally with the number of documents per peer, the replication percentage and the scale of replication per peer, and that the second system can successfully select the most reliable peer candidate to execute tasks.
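A toy Mamdani-style sketch of the first system (replication) is given below. The input universes, membership shapes and the single rule are invented for illustration and do not reproduce the paper's rule base; they only mirror the reported trend that the replication factor grows with all three inputs.

```python
def ramp(x, a, b):
    """Membership in fuzzy set 'high': 0 below a, 1 above b, linear between."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def replication_factor(docs_per_peer, repl_percent, scale):
    """Toy fuzzy inference: the replication factor grows when all three
    inputs are 'high'. Universe bounds and the rule are assumptions."""
    firing = min(ramp(docs_per_peer, 0, 1000),   # documents per peer
                 ramp(repl_percent, 0, 100),     # replication percentage
                 ramp(scale, 0, 10))             # scale of replication
    return 1 + 9 * firing                        # crisp factor in [1, 10]

print(replication_factor(600, 70, 6))
```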
Research in metaheuristics for combinatorial optimization problems has lately experienced a noteworthy shift towards the hybridization of metaheuristics with other techniques for optimization. At the same time, the focus of research has changed from being rather algorithm-oriented to being more problem-oriented. Nowadays the focus is on solving the problem at hand in the best way possible, rather than promoting a certain metaheuristic. This has led to an enormously fruitful cross-fertilization of different areas of optimization, documented by a multitude of powerful hybrid algorithms obtained by combining components from several different optimization techniques. Notably, hybridization is not restricted to the combination of different metaheuristics but includes, for example, the combination of exact algorithms and metaheuristics. In this work we provide a survey of some of the most important lines of hybridization. The literature review is accompanied by the presentation of illustrative examples.
Evolutionary algorithms are among the most successful approaches for solving problems in which systematic searches over huge domains must be performed. One problem of practical interest that falls into this category is the Root Identification Problem in Geometric Constraint Solving, where one solution to the geometric problem must be selected from a set of candidate solutions whose size is bounded by an exponential function. In previous works we have shown that applying genetic algorithms, a category of evolutionary algorithms, to solve the Root Identification Problem is both feasible and effective.
In this work, we report on an empirical statistical study conducted to establish the influence of the driving parameters of the PBIL and CHC evolutionary algorithms when they are used to solve the Root Identification Problem, and we identify a set of values that optimize the algorithms' performance. The driving parameters considered for the PBIL algorithm are population size, mutation probability, mutation shift and learning rate. For the CHC algorithm we studied population size, divergence rate, differential threshold and the set of best individuals. In both cases we applied unifactorial and multifactorial analyses, post hoc tests and best parameter level selection. Experimental results show that CHC outperforms PBIL when applied to solve the Root Identification Problem.
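For reference, the PBIL driving parameters studied above (population size, mutation probability, mutation shift and learning rate) all appear directly in a minimal PBIL loop such as the following sketch; the toy fitness function is an assumption.

```python
import random

def pbil(fitness, n_bits, pop_size=50, lr=0.1, mut_p=0.02, shift=0.05,
         generations=100):
    """Minimal PBIL: a probability vector is nudged towards the best sample
    of each generation (learning rate) and occasionally mutated (mutation
    probability and shift), matching the driving parameters listed above."""
    p = [0.5] * n_bits                                  # probability vector
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if random.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        elite = max(pop, key=fitness)
        if fitness(elite) > best_fit:
            best, best_fit = elite, fitness(elite)
        for i in range(n_bits):
            p[i] = p[i] * (1 - lr) + elite[i] * lr      # learn from the elite
            if random.random() < mut_p:                 # mutate the vector
                p[i] = p[i] * (1 - shift) + random.choice((0, 1)) * shift
    return best, best_fit

# toy usage: maximize the number of ones in a 20-bit string
print(pbil(lambda bits: sum(bits), n_bits=20))
```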