Optimization under uncertainty has recently attracted increasing interest in the process systems engineering literature. The inclusion of uncertainties in an optimization problem inevitably leads to the need to manage the associated risk in order to control the variability of the objective function in the uncertain parameter space. So far, risk management methods have focused on optimizing a single risk metric along with the expected performance. In this work we propose an alternative approach that can handle several risk metrics simultaneously. First, a multi-objective stochastic model containing a set of risk metrics is formulated. This model is then solved efficiently using a tailored decomposition strategy inspired by the Sample Average Approximation. After a normalization step, the resulting solutions are assessed using Pareto filters, which identify the solutions showing better performance in the uncertain parameter space. The capabilities and benefits of our approach are illustrated through a supply chain design and planning case study.
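As an illustration of the final filtering step, a Pareto filter over solutions scored on several normalized risk metrics (all to be minimized) can be sketched in a few lines; this is a generic sketch, not the authors' implementation:

```python
def pareto_filter(solutions):
    """Return the non-dominated solutions (all metrics to be minimized).

    `solutions` is a list of tuples of normalized metric values, e.g.
    (expected cost, value-at-risk, worst case).
    """
    def dominates(a, b):
        # a dominates b if it is no worse in every metric and strictly
        # better in at least one of them
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Hypothetical two-metric candidates: (0.5, 0.5) is dominated by (0.4, 0.4).
candidates = [(0.2, 0.9), (0.4, 0.4), (0.5, 0.5), (0.9, 0.1)]
print(pareto_filter(candidates))  # -> [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1)]
```

The normalization step mentioned in the abstract matters here: dominance comparisons across metrics with different units are only meaningful after each metric is rescaled.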
This paper introduces an optimization-based approach for the simultaneous solution of batch process synthesis and plant allocation, with decisions like the selection of chemicals, process stages, task-unit assignments, operating modes, and optimal control profiles, among others. The modeling strategy is based on the representation of structural alternatives in a state-equipment network (SEN) and its formulation as a mixed-logic dynamic optimization (MLDO) problem. Particularly, the disjunctive multistage modeling strategy by Oldenburg and Marquardt (2008) is extended to combine and organize single-stage and multistage models for representing the sequence of continuous and batch units in each structural alternative and for synchronizing dynamic profiles in input and output operations with material transference. Two numerical examples illustrate the application of the proposed methodology, showing the enhancement of the adaptability potential of batch plants and the improvement of global process performance thanks to the quantification of interactions between process synthesis and plant allocation decisions.
A novel scenario-based dynamic negotiation (SBDN) approach is proposed for the coordination of decentralized supply chains under uncertainty. The relations between the involved organizations (client, provider and third parties) and their respective conflicting objectives are captured through a non-zero-sum, non-symmetric-roles negotiation. The client (leader) designs coordination agreements considering the uncertain reaction of the provider (follower) resulting from the uncertain nature of the third parties, which is modeled as a probability-of-acceptance function. Different negotiation scenarios are studied: (i) cooperative, (ii) non-cooperative, and (iii) standalone cases. The use of the resulting models is illustrated through a case study with different vendors around a "leader" (client) in a decentralized scenario. Although the usual cooperation hypothesis allows higher overall profit expectations, the proposed approach makes it possible to identify non-cooperative scenarios with high individual profit expectations which are more likely to be accepted by all individual partners. (C) 2016 Elsevier Ltd. All rights reserved.
Carbon capture and storage (CCS) and carbon capture and utilisation (CCU) are acknowledged as important R&D priorities for achieving the environmental goals set for the coming decades. This work studies biomass-based energy supply chains with CO2 capture and utilisation. The problem is formulated as a mixed-integer linear program. The study presents a flexible supply chain superstructure to address the economic and environmental benefits achievable by integrating biomass-coal plants with CO2 capture and utilisation plants, i.e. the location of intermediate steps, the fraction of CO2 emissions captured per plant, and the size of the CO2 utilisation plants, among others. Moreover, possible incentives and environmental revenues are discussed as means to make the project economically feasible. A large-size case study located in Spain is presented to highlight the proposed approach. Two key scenarios are envisaged: (i) biomass, capture and utilisation of CO2 are not contemplated; (ii) biomass, capture and CO2 utilisation are all considered. Finally, concluding remarks are drawn.
Hjaila, K.; Lainez, J.M.; Zamarripa, M.; Puigjaner, L.; Espuña, A. Computers & chemical engineering Vol. 86, p. 48-61 DOI: 10.1016/j.compchemeng.2015.12.002 Publication date: 2016-03 Journal article
A generic tactical model is developed considering third party price policies for the optimization of coordinated and centralized multi-product Supply Chains (SCs). To allow a more realistic assessment of these policies in each marketing situation, different price approximation models to estimate these policies are proposed, which are based on the demand elasticity theory, and result in different model implementations (LP, NLP, and MINLP). The consequences of using the proposed models on the SCs coordination, regarding not only their practical impact on the tactical decisions, but also the additional mathematical difficulties to be solved, are verified through a case study in which the coordination of a production–distribution SC and its energy generation SC is analyzed. The results show how the selection of the price approximation model affects the tactical decisions. The average price approximation leads to the worst decisions with a significant difference in the real total cost in comparison with the best piecewise approximation.
Askarian, M.; Escudero, G.; Graells, M.; Zarghami, R.; Jalali Farahani, F.; Mostoufi, N. Computers & chemical engineering Vol. 84, p. 104-116 DOI: 10.1016/j.compchemeng.2015.08.018 Publication date: 2016-01-04 Journal article
An important problem to be addressed by diagnostic systems in industrial applications is the estimation of faults from incomplete observations. This work discusses different approaches for handling missing data and their effect on the performance of data-driven fault diagnosis schemes. Classifier-based and combined methods were assessed on the Tennessee Eastman process, for which diverse incomplete observations were produced. The use of several indicators revealed the trade-off between the performances of the different schemes. Support vector machines (SVM) and C4.5, combined with k-nearest neighbours (kNN), produce the highest robustness and accuracy, respectively. Bayesian networks (BN) and the centroid method appear as inappropriate options in terms of accuracy, while Gaussian naive Bayes (GNB) is sensitive to imputation values. In addition, feature selection was explored for further performance enhancement, and the proposed contribution index showed promising results. Finally, an industrial case was studied to assess the informative level of incomplete data in terms of the redundancy ratio and to generalize the discussion. (C) 2015 Elsevier Ltd. All rights reserved.
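The kNN step that the abstract combines with SVM and C4.5 amounts to imputing missing entries from similar complete observations; a minimal numpy-only sketch (not the code used in the study) might read:

```python
import numpy as np

def knn_impute(X, k=3):
    """Fill NaN entries using the mean of the k nearest complete rows.

    Distances are computed only on the features observed in the incomplete
    row. A minimal sketch: production imputers also weight neighbours and
    handle data sets with no fully complete rows.
    """
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        # distance to each complete row over the observed features only
        d = np.linalg.norm(complete[:, ~miss] - X[i, ~miss], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        X[i, miss] = nearest[:, miss].mean(axis=0)
    return X

# Toy data: the NaN is filled with the mean of the 3 nearest rows' values.
X = np.array([[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.0, np.nan]])
print(knn_impute(X))
```

The imputed matrix can then be fed to any of the classifiers compared in the paper (SVM, C4.5, BN, GNB), which is the "combined" scheme the abstract refers to.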
Current supply chain (SC) optimization models deal with material and information flows along a few echelons of the SC ("own SC"), minimizing the role of the complex behavior of third parties (raw material and utility suppliers, clients, waste and recovery systems, etc.) in the decision-making process of this SC of interest. Third parties are represented just by simplified parameters (capacity, cost, etc.) usually considered constant, but the decisions based on this picture are not adequate when the third parties' behavior is significantly affected by these decisions or other circumstances, especially when global coordination is attained. In this work, the role of these third parties, which might face different objectives, has been integrated, and a solution based on the full SC management problem is proposed. This results in a generic model which may be used to optimize the planning decisions of the multi-product multi-site SC of interest (production/distribution echelons), taking into account the coherence of production versus demand between this SC and the third parties. The features of the proposed model are illustrated using a case study which considers the coordination of a series of resource (energy) generation SCs linked to a production/distribution SC ("SC of interest"). The results show how the behavior of the considered SCs determines the best planning decisions of each organization, which will depend on the way used to coordinate them (e.g. toward lower total or individual costs), adding to PSE science a new point of view which allows all involved organizations to share responsibilities in the system. (C) 2014 Elsevier Ltd. All rights reserved.
Muñoz, E.; Capon-Garcia, E.; Lainez, J.M.; Espuña, A.; Puigjaner, L. Computers & chemical engineering Vol. 66, p. 139-150 DOI: 10.1016/j.compchemeng.2014.02.026 Publication date: 2014-07-04 Journal article
The basis of decision-making in the enterprise consists in formally representing the system and its subsystems in models which adequately capture those features which are necessary to reach consistent decisions. This work represents the elements of the enterprise which are included in mathematical models (i.e. decisions, parameters, constraints, performance indicators) in an ontology which captures the knowledge of the mathematical domain. Thus, this ontology relates the mathematical elements of the models to their corresponding semantic representation within the enterprise ontology. As a result, the mathematical symbolic abstractions of a given enterprise element in different models are directly linked to their actual unique meaning, and the integration of decisions in the enterprise is transparent and improved. The purpose of this work is illustrated in a case study related to capacity planning in the supply chain and scheduling problems. (C) 2014 Elsevier Ltd. All rights reserved.
Muñoz, E.; Capon-Garcia, E.; Lainez, J.M.; Espuña, A.; Puigjaner, L. Computers & chemical engineering Vol. 72, p. 52-67 DOI: 10.1016/j.compchemeng.2014.06.002 Publication date: 2014-06 Journal article
The integration of planning and scheduling decisions in rigorous mathematical models usually results in large-scale problems. In order to tackle the problem complexity, decomposition techniques based on duality and information flows between a master problem and a set of subproblems are widely applied. In this sense, ontologies improve information sharing and communication in enterprises and can even represent holistic mathematical models, facilitating the use of analytic tools and providing higher flexibility for model building. In this work, we exploit this capability of ontologies to address the optimal integration of planning and scheduling using a Lagrangian decomposition approach. Scheduling/planning subproblems are created for each facility/supply chain entity, and their dual solution information is shared by means of the ontological framework. Two case studies based on an STN representation of supply chain planning and scheduling models are presented to highlight the advantages and limitations of the proposed approach.
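The Lagrangian decomposition loop outlined above — dual information flowing between a master multiplier update and independent per-facility subproblems — can be sketched generically; the toy two-plant instance and the step rule below are illustrative assumptions, not the paper's model:

```python
import numpy as np

def lagrangian_coordination(subproblems, residual, lam0, steps=50, alpha=1.0):
    """Subgradient loop for a dualized coupling constraint: each entity
    solves its own subproblem, then the multipliers move along the
    residual of the coupled constraint (projected, diminishing step)."""
    lam = np.asarray(lam0, dtype=float)
    xs = None
    for t in range(1, steps + 1):
        xs = [solve(lam) for solve in subproblems]      # independent subproblems
        g = residual(xs)                                # subgradient of the dual
        lam = np.maximum(0.0, lam + (alpha / t) * g)    # dual ascent step
    return lam, xs

# Toy instance: two plants with unit costs 1 and 2 must jointly cover a
# demand of 8 units; the demand constraint is the dualized coupling, so
# each plant produces at capacity only when the price lam beats its cost.
costs, cap, demand = [1.0, 2.0], 10.0, 8.0
subs = [lambda lam, c=c: cap if lam[0] > c else 0.0 for c in costs]
res = lambda xs: np.array([demand - sum(xs)])
lam, xs = lagrangian_coordination(subs, res, [0.0])
```

In the paper's setting the subproblem solves would be scheduling/planning MILPs per supply chain entity, with the ontological framework carrying the dual information between them instead of a shared array.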
The effective management of multi-site systems involves the proper coordination of activities performed in multiple factories, distribution centers (DCs), retailers and end-users located in many different cities, countries and/or continents. To optimally manage the numerous production and transportation decisions, a novel monolithic continuous-time MILP-based framework is developed to determine the best short-term operational plan to meet all customer requests at minimum total cost. The formulation relies on the unit-specific general precedence concept for the production scheduling problem, whereas the immediate precedence notion is used for transportation decisions. To illustrate the applicability and potential benefits of the model, a challenging example corresponding to a supply chain comprising several locations geographically spread over six European countries has been solved to optimality with modest CPU times. Several scenarios with different logistics features were addressed in order to highlight the significant advantages of using the integrated approach.
Zamarripa, M.; Aguirre, A.; Méndez, C. A.; Espuña, A. Computers & chemical engineering Vol. 42, num. SI, p. 178-188 DOI: 10.1016/j.compchemeng.2012.03.009 Publication date: 2012-07-11 Journal article
In the domain of chemical process engineering, there is an increasing interest in the integration of the enterprise hierarchical levels for decision-making purposes. At the scheduling level, decisions on the allocation of tasks to resources and on the sequencing and timing of tasks must be managed. However, such decisions are directly related to other enterprise actions, such as control and planning, but they are difficult to coordinate because they are modeled at different time and space scales and their goals are not the same. In order to achieve integrated decisions supported by high-quality information, there is a need to improve and develop robust computational tools and consistent models. In general, scheduling optimization approaches for decision-making differ depending on problem features, such as the physical layout or the time representation. Therefore, this work focuses on providing a framework based on a semantic model that captures the diversity in scheduling problem representation. This semantic model uses the master recipe concept from the ANSI/ISA-88 standard perspective and encapsulates the features of the scheduling decision task. As a result, by using a single representation approach, any scheduling problem can be modeled and solved with an adequate optimization tool. The potential of a general model representation is demonstrated by means of several case studies related to the scheduling function. These case studies shed light on the model's capabilities to represent different kinds of scheduling problems, achieving integration at the different decision-support levels.
The production scheduling of a real-world multistage food process is considered in this work. An efficient mixed-integer programming (MIP) continuous-time model is proposed to address the production problem under study. The overall mathematical framework relies on an efficient modeling approach for the sequencing decisions, the integrated modeling of all production stages, and the inclusion of a set of strong tightening constraints. The simultaneous optimization of all processing stages aims at facilitating the interaction among the different departments of the production facility. Moreover, an alternative MIP-based solution strategy is proposed for dealing with large-scale food processing scheduling problems. Although this method may not guarantee global optimality, it offers low computational requirements and solutions of very good quality. Several problem instances are solved to reveal the salient computational performance and the practical benefits of the proposed MIP formulation and solution strategy.
The resource-constrained production planning problem in semicontinuous multiproduct food industries is addressed. In particular, the case of yogurt production, a representative food process, in a real-life dairy facility is studied in detail. The problem in question is mainly focused on the packing stage, whereas timing and capacity constraints are imposed with respect to the batch stage to ensure the generation of feasible production plans. A novel mixed discrete/continuous-time mixed-integer linear programming model, based on the definition of families of products, is proposed. Timing and sequencing decisions are taken for product families rather than for individual products, thus significantly reducing the model size. Additionally, material balances are realized for every particular product, permitting the detailed optimization of inventory and operating costs. Packing units operate in parallel and share resources. Qualitative as well as quantitative objectives are considered. Several industrial case studies, including some unexpected-event scenarios, have been solved to optimality.
This paper introduces a data-based fault diagnosis system that includes an enhanced characterization of faults during transient stages. First, data under abnormal operating conditions (AOC) are projected onto a reference PCA model constructed with data under normal operating conditions (NOC). The T2 and Q statistics of this first PCA model are used both to detect the fault and to estimate the duration and delay of its transient evolution. After a dimensionality reduction, a second NOC PCA model is used to process data before diagnosing the faults by standard classification methods such as artificial neural networks (ANN) or support vector machines (SVM). A quantitative validation of the procedure has been carried out using simulated on-line data sets of the Tennessee Eastman Process (TEP). Results indicate that the incorporation of transient data in the models improves the overall diagnosis performance, regardless of the particular choice of statistical or classification methods.
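The T2 and Q statistics that drive the detection step can be computed from a PCA model fitted on NOC data; the function below is a minimal illustration of those two measures, not the validated TEP pipeline:

```python
import numpy as np

def pca_monitor(X_noc, x_new, n_comp=2):
    """Hotelling T2 and Q (squared prediction error) of a new sample
    against a PCA model fitted on normal-operating-condition data.
    A minimal sketch; control limits and scaling safeguards omitted."""
    mu, sd = X_noc.mean(axis=0), X_noc.std(axis=0, ddof=1)
    Z = (X_noc - mu) / sd                    # autoscaled NOC data
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                        # retained loadings
    var = (s[:n_comp] ** 2) / (len(Z) - 1)   # variance of each score
    z = (x_new - mu) / sd
    t = z @ P                                # scores of the new sample
    T2 = np.sum(t ** 2 / var)                # distance inside the model
    resid = z - P @ t                        # part the model cannot explain
    Q = resid @ resid
    return T2, Q
```

A fault that breaks the correlation structure learned under NOC shows up mainly in Q, while an unusually large but structure-preserving excursion shows up in T2 — which is why the abstract uses both.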
Oxygen-blown biomass integrated gasification combined cycle (IGCC) plants are one of the most promising options for clean energy generation with CO2 abatement potential. However, the integrated nature of IGCC leads to difficult design problems. In this study, we present an advanced simulation environment for the preliminary design and retrofit of IGCC plants. We describe the modelling approach, the model validation strategy and the plant behaviour, as determined by sensitivity analyses. The simulation environment uses Pareto curves to examine various co-gasification and co-production case studies in terms of technical, economic and environmental performance. It serves as a decision support tool in the design stage, which can be used to explore ways to improve plant performance and to analyse the influence of raw materials and the units' operational parameters. The test and validation results are discussed.
Muñoz, E.; Capon-García, E.; Moreno, M.; Espuña, A.; Puigjaner, L. Computers & chemical engineering Vol. 35, num. 5, p. 774-786 DOI: 10.1016/j.compchemeng.2011.01.025 Publication date: 2011-05-11 Journal article
The complexity of decision-making in process industries and the needs of highly competitive organizations require new supporting tools to coordinate and optimize the information flow among decision levels. This work presents a framework for integrating the scheduling and control decision levels by means of an ontology, which enables and coordinates the information exchange among the different modeling paradigms/conventions currently used for enterprise-wide optimization (EWO). The scheduling of two multiproduct batch plants of increasing complexity is presented to illustrate the proposed working procedure.
A crucial step for batch process improvement and optimization is to develop information structures that streamline data gathering and, above all, are capable of integrating transactional data into a system using the analytical tools that are developed. Current trends in electronics, computer science, artificial intelligence and control system technology are providing technical capabilities that greatly facilitate the development of multilevel decision-making support. In this paper, we present the batch process ontology (BaPrOn), wherein different concepts regarding batch processes are categorized and the relationships between them are examined and structured in accordance with the ANSI/ISA-88 standard, which provides a solid and transparent framework for integrating batch-related information. This paper also focuses on the systematic integration of the different actors within the control process. The proposed approach bases its conceptualization on the ANSI/ISA-88 representation, providing the advantage of establishing a more general conceptualization of the batch process domain. The capabilities of the envisaged ontological framework were assessed in the PROCEL test-bed pilot plant: the scheduling-monitoring and control-rescheduling loop was closed, information quality was assessed through the knowledge description, and an optimal decision-making task was performed. The ontological structure can be extended in the future to incorporate other hierarchical levels and their respective modeling knowledge.
Bojarski, A.; Lainez, J.; Espuña, A.; Puigjaner, L. Computers & chemical engineering Vol. 33, num. 10, p. 1747-1759 DOI: 10.1016/j.compchemeng.2009.04.009 Publication date: 2009-05 Journal article
Corporate approaches to improving environmental performance cannot be undertaken in isolation, so a concerted effort along the supply chain (SC) entities is needed, which poses another important challenge to managers. This work addresses the optimization of SC planning and design considering economic and environmental issues. The strategic decisions considered in the model are facility location, processing technology selection and production-distribution planning. A life cycle assessment (LCA) approach is envisaged to incorporate the environmental aspects into the model. The IMPACT 2002+ methodology is selected to perform the impact assessment within the SC, thus providing a feasible implementation of a combined midpoint-endpoint evaluation. The proposed approach reduces the value-subjectivity inherent to the assignment of weights in the calculation of an overall environmental impact by considering endpoint damage categories as objective functions. Additionally, the model performs an impact mapping along the comprising SC nodes and activities. Such mapping allows financial efforts for reducing environmental burdens to be focused on the most promising subjects. Furthermore, consideration of a CO2 trading scheme and the temporal distribution of environmental interventions are also included, with the intention of providing a tool that may be utilized to evaluate current regulatory policies or pursue more effective ones. The mathematical formulation of this problem becomes a multi-objective MILP (moMILP). The criteria selected for the objective function are damage category impacts, the overall impact factor and the net present value (NPV). The main advantages of this model are highlighted through a realistic case study of a maleic anhydride SC production and distribution network.
Yélamos, I.; Escudero, G.; Graells, M.; Puigjaner, L. Computers & chemical engineering Vol. 33, num. 1, p. 244-255 DOI: 10.1016/j.compchemeng.2008.08.008 Publication date: 2009-01-13 Journal article
Fault diagnosis in chemical plants is reviewed and discussed, and an innovative data-based fault diagnosis system (FDS) approach is proposed. The use of support vector machines (SVM) is considered for their simpler design and implementation, and for their better handling of complex and large data sets. In order to compare results with previously reported works, a standard case study, the Tennessee Eastman (TE) process benchmark, is considered. SVM achieves consistent and promising results. However, the difficulties arising when comparing SVM with previously reported results reveal the need for a systematic procedure for contrasting the performance of different FDS. Hence, general performance assessment indexes based on the precision and recall of each FDS are proposed and used. In this sense, this study provides a data set and evaluation measures that could be used as a framework for future comparative studies.
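Precision- and recall-based assessment indexes of the kind proposed here can be computed per fault class directly from labeled diagnoses; the sketch below is generic, and the exact indexes in the paper may aggregate these differently:

```python
def fds_scores(y_true, y_pred):
    """Per-fault precision and recall for a fault diagnosis system.

    `y_true` holds the actual fault labels, `y_pred` the diagnoses.
    Returns {fault: (precision, recall)}.
    """
    labels = sorted(set(y_true) | set(y_pred))
    scores = {}
    for f in labels:
        tp = sum(t == f and p == f for t, p in zip(y_true, y_pred))
        fp = sum(t != f and p == f for t, p in zip(y_true, y_pred))
        fn = sum(t == f and p != f for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[f] = (precision, recall)
    return scores

print(fds_scores(["ok", "f1", "f1", "ok"], ["ok", "f1", "ok", "f1"]))
# -> {'f1': (0.5, 0.5), 'ok': (0.5, 0.5)}
```

Reporting both measures per fault is what makes two FDS comparable: a diagnoser that rarely raises a fault can have high precision yet poor recall, and a single accuracy figure hides that trade-off.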
There is a large body of work on supply chain (SC) optimization in the chemical process industry (CPI). However, some of the basic aspects of the optimization problem are not adequately handled by the models and solution strategies developed so far in the literature. This paper focuses on the underlying philosophy of our approach to supply chain management (SCM) in the CPI, which aims to overcome the challenges posed by this problem. Two main topics that offer great opportunities for improvement in SCM are discussed: the development of modeling approaches and solution strategies that reflect SC dynamics and include environmental considerations, and the incorporation of novel business aspects and key performance indicators (KPI) into the existing formulations to enlarge the scope of SC analysis, which is currently rather limited. Our integrated solution strategy for SCM, which covers the aforementioned aspects and implements the ideas and concepts developed in our research, is also presented and its advantages are highlighted in a case study.
Mele, F.; Guillen-Gosalbez, G.; Espuña, A.; Puigjaner, L. Computers & chemical engineering Vol. 31, num. 5-6, p. 722-735 DOI: 10.1016/j.compchemeng.2006.12.013 Publication date: 2007-05 Journal article
This paper describes a methodology for tuning real-time evolution (RTE) parameters. RTE was introduced in a previous work [Industrial Engineering and Chemical Research 41 (7) (2002) 1815] as a new approach to on-line model-based optimization. This strategy differs from classical real-time optimization (RTO) in that waiting for steady state is not necessary, and also in the use of simple optimization concepts in the solution procedure. Instead, the current plant set points are periodically improved around the current operating neighborhood, following a periodically updated model. Thus, RTE is based on a continuous improvement of the plant operation rather than on the formal optimization of a hypothetical future steady-state operation. In spite of using a simpler scheme, the proposed strategy offers a faster response to disturbances, better adaptation to changing conditions and smoother plant operation, regardless of the complexity of the control layer. However, a successful application of this strategy requires appropriate parameter tuning, that is: how often the set points should be adjusted, and what size of neighborhood should be used. Although the optimal values of these parameters strongly depend on the process dynamics and require complex calculations, this work uses a simple benchmark to obtain general guidelines and illustrates a methodology for easy parameter tuning as a function of the process information typically available.
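The RTE idea — periodically probing a neighborhood of the current set points and keeping any improvement, with the probing period and the neighborhood radius as the two tuning parameters — can be caricatured with a toy quadratic plant model (purely illustrative, not the benchmark used in the paper):

```python
def rte_step(profit, setpoints, radius):
    """One real-time-evolution move: probe the neighbourhood of the
    current set points and keep any improving candidate (a coordinate-
    search sketch; `radius` is the neighbourhood-size tuning parameter)."""
    best = list(setpoints)
    for i in range(len(setpoints)):
        for delta in (-radius, radius):
            cand = list(best)
            cand[i] += delta
            if profit(cand) > profit(best):
                best = cand
    return best

# Toy plant model whose profit peaks at set points (3, 5); each loop
# iteration stands for one adjustment period.
profit = lambda sp: -((sp[0] - 3) ** 2 + (sp[1] - 5) ** 2)
sp = [0.0, 0.0]
for _ in range(20):
    sp = rte_step(profit, sp, radius=0.5)
print(sp)  # -> [3.0, 5.0]
```

The tuning trade-off the paper studies is visible even here: a larger radius moves the plant faster but overshoots and oscillates near the optimum, while a shorter adjustment period reacts faster to disturbances at the cost of more frequent moves.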
Rodrigues, L.; Graells, M.; Cantón, J.; Espuña, A.; Puigjaner, L. Computers & chemical engineering Vol. 24, num. 2, p. 353-359 DOI: 10.1016/S0098-1354(00)00472-5 Publication date: 2000-07 Journal article
The problem considered is short-term scheduling in multipurpose batch plants when the main objective is adherence to due dates. The solution approach contemplates a two-phase procedure. The planning phase consists of computing processing time windows for all the batches necessary to meet the product demand. This information is extensively used in the scheduling phase, for which three approaches are considered: a MILP formulation using a uniform discrete-time representation, constraint-based search (CBS), and simulated annealing (SA). An assessment is made of these techniques in heavily constrained problems and of how they deal with the sharing of equipment units and the constraints imposed by material balances and storage conditions. The advantages of the two-phase approach are shown in each case with illustrative examples.
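The planning-phase time windows can be illustrated with a toy backward pass from a batch's due date through its consecutive stage durations; this is an illustrative sketch, not the procedure used in the paper:

```python
def time_windows(due_date, durations):
    """Latest (start, end) window for each consecutive stage of a batch,
    working backwards from the due date. A toy backward-pass sketch:
    real time windows also account for release dates, unit availability
    and storage policies."""
    windows, end = [], due_date
    for d in reversed(durations):
        windows.append((end - d, end))  # latest start, latest end
        end -= d
    return list(reversed(windows))

# A 3-stage batch due at t=100 with stage durations 10, 20 and 5.
print(time_windows(100, [10, 20, 5]))  # -> [(65, 75), (75, 95), (95, 100)]
```

Windows like these prune the search space for all three scheduling techniques compared in the paper, since any assignment starting a stage after its latest start is known to violate the due date.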
Ruiz, D.; Nougues, J.; Calderon, Z.; Espuña, A.; Puigjaner, L. Computers & chemical engineering Vol. 24, num. 6, p. 777-784 DOI: 10.1016/S0098-1354(00)00371-9 Publication date: 2000-06 Journal article
In this work, an artificial neural network (ANN) based framework for fault diagnosis in batch chemical plants is presented. The proposed FDS consists of an ANN structure supplemented with a knowledge-based expert system (KBES) in a block-oriented configuration. The system combines the adaptive learning diagnostic procedure of the ANN and the transparent deep-knowledge representation of the KBES. The information needed to implement the FDS includes a historical database of past batches, a hazard and operability (HAZOP) analysis and a model of the batch plant. The historical database, which includes information related to normal and abnormal operating conditions, is used to train the ANN structure. The deviations of the on-line measurements from a reference profile are processed by a multi-scale wavelet transform in order to determine the singularities of the transients and to reduce the dimensionality of the data. The processed signals are the inputs of an ANN, whose outputs are the signals of the different suspected faults. The HAZOP analysis is used to build the deep process knowledge base (KB) of the plant. This base relies on the knowledge of the operators and engineers about the process and allows the formulation of artificial intelligence algorithms. The case study corresponds to a batch reactor. The FDS performance is demonstrated through the simulation of different process faults. The proposed FDS is also compared with other approaches based on multi-way principal component analysis.