The widespread deployment of smart edge devices and of applications that require real-time data processing has undoubtedly created the need to extend the reach of cloud computing to the edge, recently also referred to as Fog or Edge Computing. Fog computing implements the idea of extending the cloud to where the "things" are; in other words, it improves application performance and resource efficiency by removing the need to process all the information in the cloud, thus also reducing bandwidth consumption in the network. Fog computing is designed to complement cloud computing, paving the way for a novel, enriched architecture that can benefit from and include both edge (fog) and cloud resources. From a resources perspective, this combined scenario requires resource continuity when executing a service, whereby the assumption is that the selection of resources for service execution remains independent of their physical location. This new resource model, i.e., resource continuity, has recently gained significant attention, as it carries the potential to seamlessly provide a computing infrastructure from the edge to the cloud, with improved performance and resource efficiency. In this paper, we study the main architectural features of managed resource continuity, proposing the foundation of a coordinated management plane responsible for resource continuity provisioning. We also study an illustrative example of the performance benefits, in relation to database size, under the proposed architectural model.
Ramirez, W.; Masip, X.; Marin, E.; Barbosa, V.; Jukan, A.; Ren, G.; González de Dios, O. Computer Communications Vol. 113, p. 43-52. DOI: 10.1016/j.comcom.2017.09.011. Publication date: 2017-11-15. Journal article.
The need to extend the features of Cloud computing to the edge of the network has fueled the development of new computing architectures, such as Fog computing. When put together, the combined and continuous use of fog and cloud computing lays the foundation for a new and highly heterogeneous computing ecosystem, making the most out of both cloud and fog. Incipient research efforts are devoted to proposing a management architecture to properly manage such a combination of resources, such as the reference architecture proposed by the OpenFog Consortium or the recent Fog-to-Cloud (F2C). In this paper, we pay attention to such a combined ecosystem and particularly evaluate the potential benefits of F2C in dynamic scenarios, considering computing resources mobility and different traffic patterns. By means of extensive simulations, we specifically study the aspects of service response time, network bandwidth occupancy, power consumption and service disruption probability. The results indicate that a combined fog-to-cloud architecture brings significant performance benefits in comparison with the traditional standalone Cloud, e.g., over 50% reduction in terms of power consumption.
Fog computing has emerged as a promising technology that can bring cloud applications closer to the physical IoT devices at the network edge. While it is widely known what cloud computing is, how data centers can build the cloud infrastructure and how applications can make use of this infrastructure, there is no common picture of what fog computing and, particularly, a fog node, as its main building block, really is. One of the first attempts to define a fog node was made by Cisco, qualifying a fog computing system as a “mini-cloud” located at the edge of the network and implemented through a variety of edge devices, interconnected by a variety of, mostly wireless, communication technologies. Thus, a fog node would be the infrastructure implementing the said mini-cloud. Other proposals have their own definition of what a fog node is, usually in relation to a specific edge device, a specific use case or an application. In this paper, we first survey the state of the art in technologies for fog computing nodes, paying special attention to the contributions that analyze the role edge devices play in the fog node definition. We summarize and compare the concepts, lessons learned from their implementation, and end up showing how a conceptual framework is emerging towards a unifying fog node definition. We focus on the core functionalities of a fog node as well as on the accompanying opportunities and challenges towards their practical realization in the near future.
Barbosa, V.; Masip, X.; Marin, E.; Ramirez, W.; Sanchez, S. IEEE International Workshop on Computer-aided Modeling Analysis and Design of Communication Links and Networks, p. 1-5. DOI: 10.1109/CAMAD.2017.8031528. Presentation date: 2017-06-19. Conference presentation.
The increasing number of end-user devices at the edge of the network, along with their ever increasing computing capacity, as well as the advances in Data Center technologies, paved the way for the generation of the Internet of Things (IoT). Several IoT services have been deployed leveraging Cloud Computing and, more recently, Fog Computing. In order to enable efficient control of cloud and fog premises, Fog-to-Cloud (F2C) has been recently proposed as a distributed architecture for the coordinated management of both fog and cloud resources. Certainly, many challenges remain unsolved in combined Fog-to-Cloud systems, mostly driven by the dynamicity and volatility imposed by edge devices, such as the recovery of failures at the edge of the network. Indeed, possible failures in computing commodities may be prohibitive for the achievement of the envisioned performance in F2C systems. In this work, we assess proactive and reactive strategies for failure recovery of network elements by modelling them as a Multidimensional Knapsack Problem (MKP) and study the impact of each one on several aspects such as service allocation time, recovery delay and computing resources load. The obtained results show the effect each strategy brings, thus concluding with some analysis on the recovery strategy best suiting distinct IoT scenarios.
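As a rough illustration of the kind of formulation referred to above, a generic Multidimensional Knapsack Problem can be written as follows; the decision variables, profits and resource dimensions are illustrative assumptions and not the exact recovery model evaluated in the paper.

```latex
% Generic MKP sketch (illustrative; not the paper's exact formulation).
\begin{align}
\max \quad & \sum_{i=1}^{n} p_i \, x_i
  && \text{(value of the recovered/allocated services)} \\
\text{s.t.} \quad & \sum_{i=1}^{n} w_{ij} \, x_i \le c_j,
  && j = 1,\dots,m \quad \text{(CPU, memory, bandwidth, ...)} \\
& x_i \in \{0,1\}, && i = 1,\dots,n
\end{align}
```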
Barbosa, V.; Gomez, A.; Masip, X.; Marin, E.; Garcia, J. IEEE/ACM International Symposium on Quality of Service, p. 1-5. DOI: 10.1109/IWQoS.2017.7969140. Presentation date: 2017-06-15. Conference presentation.
The recent deployment of novel network concepts, such as M2M communication or IoT, has undoubtedly stimulated the placement of a new set of services, leveraging both centralized resources in Cloud Data Centers and distributed resources shared by devices at the edge of the network. Moreover, Fog Computing has been recently proposed, having as one of its main assets the reduction of service response time, further enabling the deployment of real-time services. Although QoS-aware network research has originally focused on data plane issues, the successful deployment of real-time services, demanding very low delay on the allocation of distributed resources, depends on the assessment of the impact of controlling decisions on QoS. Recently, Fog-to-Cloud (F2C) computing has been proposed as a hierarchical layered architecture relying on a coordinated and distributed management of both Fog and Cloud resources, enabling the distributed and parallel allocation of resources at distinct layers, thus suitably mapping service demands onto resource availability. In this paper, we assess the layered management architecture in F2C systems, taking into account its distributed nature. Preliminary results show the tradeoff observed regarding controller capacity, number of controllers, and number of controller layers in the F2C architecture.
Sinaeepourfard, A.; Garcia, J.; Masip, X.; Marin, E. IEEE International Conference on Distributed Computing Systems, p. 2622-2623. DOI: 10.1109/ICDCS.2017.202. Presentation date: 2017-06-05. Conference presentation.
Traditional smart city resource management relies on cloud-based solutions to provide a centralized and rich set of open data. The advantages of cloud-based frameworks are their ubiquity, (almost) unlimited resource capacity, cost efficiency, as well as elasticity. However, accessing data from the cloud implies large network traffic, high data latencies, and higher security risks. Alternatively, fog computing emerges as a promising technology to absorb these inconveniences. The use of devices at the edge provides closer computing facilities, reduces network traffic and latencies, and improves security. We have defined a new framework for data management in the context of the smart city through a global fog-to-cloud management architecture; in this paper we present the data acquisition block. As a first experiment we estimate the network traffic during data collection, and compare it with a traditional real system. We also show the effectiveness of some basic data aggregation techniques in the model, such as redundant data elimination and data compression.
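A minimal sketch of the kind of edge-side aggregation mentioned above (redundant-sample elimination followed by compression) is shown below; the reading format, the change threshold and the use of zlib are assumptions for the example, not the framework's actual implementation.

```python
import json
import zlib

def deduplicate(readings, epsilon=0.1):
    """Drop consecutive sensor readings whose value changed by less than epsilon."""
    kept, last = [], None
    for r in readings:
        if last is None or abs(r["value"] - last) >= epsilon:
            kept.append(r)
            last = r["value"]
    return kept

def compress(readings):
    """Serialize and compress the remaining readings before uplink to the cloud."""
    return zlib.compress(json.dumps(readings).encode("utf-8"))

# Illustrative use: noisy temperature samples from one city sensor.
samples = [{"t": i, "value": 21.0 + 0.01 * (i % 3)} for i in range(1000)]
reduced = deduplicate(samples)
payload = compress(reduced)
print(len(samples), "->", len(reduced), "readings,", len(payload), "bytes compressed")
```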
Future IoT services execution may benefit from combining resources at the cloud and at the edge. To that end, new architectures should be proposed to handle IoT services in a coordinated way at either the edge of the network, the cloud, or both. Reacting to that need, the Fog-to-Cloud (F2C) concept has been recently proposed. A key aspect in the F2C design refers to security, since F2C raises security issues besides those yet unsolved in fog and cloud. Thus, we envision the need for new security strategies to handle all components in the F2C architecture. In this paper we propose an SDN-based (master/slave) security architecture leveraging a centralized controller on the cloud, and distributed controllers at the edge of the network. We argue that the proposed architecture brings more security and privacy to the cloud users by reducing the distance between them and, therefore, reducing the risks of the so-called man-in-the-middle attacks. The proposed security architecture is analyzed in some critical infrastructure scenarios in order to illustrate its potential benefits.
Fog computing brings cloud computing capabilities closer to the end-device and users, while enabling location-dependent resource allocation, low latency services, and extending significantly the IoT services portfolio as well as market and business opportunities in the cloud sector. With the number of devices exponentially growing globally, new cloud and fog models are expected to emerge, paving the way for shared, collaborative, extensible mobile, volatile and dynamic compute, storage and network infrastructure. When put together, cloud and fog computing create a new stack of resources, which we refer to as Fog-to-Cloud (F2C), creating the need for a new, open and coordinated management ecosystem. The mF2C proposal sets the goal of designing an open, secure, decentralized, multi-stakeholder management framework, including novel programming models, privacy and security, data storage techniques, service creation, brokerage solutions, SLA policies, and resource orchestration methods. The proposed framework is expected to set the foundations for a novel distributed system architecture, developing a proof-of-concept system and platform, to be tested and validated in real-world use cases, as envisioned by the industrial partners in the consortium with significant interest in rapid innovation in the cloud computing sector.
Sinaeepourfard, A.; Garcia, J.; Masip, X.; Marin, E. IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, p. 100-106. DOI: 10.1145/3006299.3006311. Presentation date: 2016-12-06. Conference presentation.
A huge amount of data is constantly being produced in the world. Data coming from the IoT, from scientific simulations, or from any other field of eScience are accumulated over historical data sets and set up the seed for future Big Data processing, with the final goal to generate added value and discover knowledge. In such computing processes, data are the main resource; however, organizing and managing data during their entire life cycle becomes a complex research topic. As part of this, Data LifeCycle (DLC) models have been proposed to efficiently organize large and complex data sets, from creation to consumption, in any field, and at any scale, for effective data usage and big data exploitation. Several DLC frameworks can be found in the literature, each one defined for specific environments and scenarios. However, we realized that there is no global and comprehensive DLC model that can be easily adapted to different scientific areas. For this reason, in this paper we describe the Comprehensive Scenario Agnostic Data LifeCycle (COSA-DLC) model, a DLC model which: i) is proved to be comprehensive as it addresses the 6Vs challenges (namely Value, Volume, Variety, Velocity, Variability and Veracity), and ii) can be easily adapted to any particular scenario and, therefore, fit the requirements of a specific scientific field. In this paper we also include two use cases to illustrate the ease of the adaptation in different scenarios. We conclude that the comprehensive scenario agnostic DLC model provides several advantages, such as facilitating global data management, organization and integration, easing the adaptation to any kind of scenario, guaranteeing good data quality levels and, therefore, saving design time and efforts for the scientific and industrial communities.
Cloud computing has been lately extended by fog computing. The main aim of fog computing is to shift computing resources to the edge of the network, hence generating proximate rich infrastructures highly matching common latency and privacy requirements for IoT services. Recently, fog computing and cloud computing have been merged in a collaborative computing model referred to as Fog-to-Cloud computing (F2C). F2C’s aim is to make the most out of the set of distributed and heterogeneous resources found at fog and cloud premises, hence building a global stack of resources, offered for an optimized service performance. The F2C paradigm is then based on providing services with those resources best matching their demands. In this paper, we illustrate how F2C may be used for a particular ehealth scenario with specific constraints in mobility, also including future research lines in the area.
The novel Fog-to-Cloud (F2C) computing paradigm has been recently proposed aiming at the enhanced integration of Fog Computing and Cloud Computing through the coordinated management of underlying resources, taking into account the peculiarities inherent to each computing model, and enabling the parallel and distributed execution of services onto distinct fog/cloud resources. Nevertheless, studies on F2C are still premature and several issues remain unsolved. For instance, in an F2C scenario service allocation must cope with the specific aspects associated to cloud and fog resource models, requiring distinct strategies to properly map IoT services onto the most suitable available resources. In this paper, we propose a QoS-aware service distribution strategy contemplating both service requirements and resource offerings. We model the service allocation problem as a multidimensional knapsack problem (MKP) aiming at an optimal service allocation taking into consideration delay, load balancing and energy consumption. The presented results demonstrate that the adopted strategy may be applied in F2C computing, reducing the service allocation delay while also diminishing load and energy consumption on cloud and fog resources.
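To make the allocation idea concrete, the sketch below shows a simplified greedy heuristic that weighs delay, load and energy when placing services on fog or cloud resources; the weights, the resource model and the sample data are assumptions for illustration and do not reproduce the MKP strategy evaluated in the paper.

```python
def allocate(services, resources, w_delay=0.5, w_load=0.3, w_energy=0.2):
    """Greedily place each service on the feasible resource with the lowest
    weighted cost (delay, load after placement, energy per unit of work)."""
    placement = {}
    for s in sorted(services, key=lambda s: s["cpu"], reverse=True):
        best, best_cost = None, float("inf")
        for r in resources:
            if r["free_cpu"] < s["cpu"]:
                continue  # capacity constraint, as in a knapsack dimension
            load = 1.0 - (r["free_cpu"] - s["cpu"]) / r["cpu"]
            cost = w_delay * r["delay"] + w_load * load + w_energy * r["energy"]
            if cost < best_cost:
                best, best_cost = r, cost
        if best is not None:
            best["free_cpu"] -= s["cpu"]
            placement[s["id"]] = best["name"]
    return placement

# Hypothetical fog/cloud offerings and two services with different footprints.
fog = {"name": "fog-1", "cpu": 4, "free_cpu": 4, "delay": 5, "energy": 0.2}
cloud = {"name": "cloud", "cpu": 1000, "free_cpu": 1000, "delay": 80, "energy": 1.0}
svcs = [{"id": "cam-analytics", "cpu": 2}, {"id": "batch-report", "cpu": 8}]
print(allocate(svcs, [fog, cloud]))
```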
Sinaeepourfard, A.; Garcia, J.; Masip, X.; Marin, E. IEEE International Conference on eScience, p. 276-281. DOI: 10.1109/eScience.2016.7870909. Presentation date: 2016-10-24. Conference presentation.
There is a vast amount of data being generated every day in the world, coming from a variety of sources, with different formats, quality levels, etc. This new data, together with the archived historical data, constitutes the seed for future knowledge discovery and value generation in several fields of eScience. Discovering value from data is a complex computing process where data is the key resource, not only during its processing, but also during its entire life cycle. However, there is still a huge concern about how to organize and manage this data in all fields, and at all scales, for efficient usage and exploitation during the whole data life cycle. Although several specific Data LifeCycle (DLC) models have been recently defined for particular scenarios, we argue that there is no global and comprehensive DLC framework to be widely used in different fields. For this reason, in this paper we present and describe a comprehensive scenario agnostic Data LifeCycle (COSA-DLC) model successfully addressing all challenges included in the 6Vs, namely Value, Volume, Variety, Velocity, Variability and Veracity, not tailored to any specific environment, but easy to adapt to fit the requirements of any particular field. We conclude that a comprehensive scenario agnostic DLC model provides several advantages, such as facilitating global data organization and integration, easing the adaptation to any kind of scenario, guaranteeing good data quality levels, and helping save design time and efforts for the research and industrial communities.
Sinaeepourfard, A.; Garcia, J.; Masip, X.; Marin, E.; Yin, X.; Wang, C. International Conference on Information and Communication Technology Convergence, p. 400-405. DOI: 10.1109/ICTC.2016.7763506. Presentation date: 2016-10-19. Conference presentation.
Smart Cities are the most challenging and promising technological solutions for absorbing the increasing pressure of population growth, while simultaneously enforcing a sustainable economic progress as well as a higher quality of life. Several technologies are involved in a potential Smart City deployment, although data are the fuel to achieve the demanded and mandatory smartness. Data can be obtained from multiple sources, in large quantities, and with a variety of formats, therefore, an appropriate management is critical for their effective usage. Data life cycle models constitute an effective trend towards developing an integral and efficient data management framework, from data creation to data consumption and removal. In this paper we present the Smart City Comprehensive Data LifeCycle (SCC-DLC) model, a data management architecture generated from a comprehensive scenario agnostic model, tailored for the particular scenario of Smart Cities. We define the management of each data life phase, and describe its implementation on a Smart City with Fog-to-Cloud (F2C) resources management, an architecture that combines the advantages of both cloud and fog strategies.
The ever increasing requirements of new Internet applications are pushing to optimize the design of optical networks. A key design criterion in network design is the ability to recover from failures in an agile and efficient manner. Protection capabilities are highly required in optical networks, since the failure of an optical link might potentially lead to a significant traffic loss. In this context, Network Coding Protection (NCP) has emerged as an innovative solution to proactively enable protection in an agile and efficient manner by means of throughput improvement techniques such as Network Coding (NC). Nevertheless, the benefits of NC can be reduced by the negative effects of inaccurate Network State Information (NSI), which are common in dynamic scenarios. In this paper, we propose a novel proactive protection strategy based on NC jointly with a Path Computation Element (PCE) architecture, called Predictive Network Coding Protection (PNCP). PNCP leverages predictive techniques in order to mitigate the negative impact of inaccurate NSI on the blocking probability. In addition, PNCP computes resilient lightpaths with a low amount of network resources devoted to path protection. By means of extensive simulation results, we show that in comparison with proactive protection strategies such as Dedicated Path Protection (DPP) and conventional dynamic NCP, PNCP reduces the blocking probability as well as the network resources allocated for path protection in dynamic scenarios.
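For readers unfamiliar with the underlying coding idea, the toy sketch below shows how an XOR-coded protection packet lets a destination recover one lost flow from the surviving flow; it is a generic 1+N-style illustration and not the PNCP scheme itself.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two working flows share one protection path carrying their XOR combination.
pkt_a = b"flow-A payload.."
pkt_b = b"flow-B payload.."
protection = xor_bytes(pkt_a, pkt_b)   # sent on the backup lightpath

# If the link carrying flow A fails, the destination rebuilds pkt_a
# from the surviving flow B and the coded protection packet.
recovered_a = xor_bytes(protection, pkt_b)
assert recovered_a == pkt_a
```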
The recent advances in cloud services technology are fueling a plethora of information technology innovation, including networking, storage, and computing. Today, various flavors of IoT, cloud computing, and so-called fog computing have evolved, the latter a concept referring to the capabilities of edge devices and users' clients to compute, store, and exchange data among each other and with the cloud. Although the rapid pace of this evolution was not easily foreseeable, today each piece of it facilitates and enables the deployment of what we commonly refer to as a smart scenario, including smart cities, smart transportation, and smart homes. As most current cloud, fog, and network services run simultaneously in each scenario, we observe that we are at the dawn of what may be the next big step in the cloud computing and networking evolution, whereby services might be executed at the network edge, both in parallel and in a coordinated fashion, as well as supported by the unstoppable technology evolution. As edge devices become richer in functionality and smarter, embedding capacities such as storage or processing, as well as new functionalities, such as decision making, data collection, forwarding, and sharing, a real need is emerging for coordinated management of fog-to-cloud (F2C) computing systems. This article introduces a layered F2C architecture, its benefits and strengths, as well as the arising open and research challenges, making the case for the real need for their coordinated management. Our architecture, the illustrative use case presented, and a comparative performance analysis, albeit conceptual, all clearly show the way forward toward a new IoT scenario with a set of existing and unforeseen services provided on highly distributed and dynamic compute, storage, and networking resources, bringing together heterogeneous and commodity edge devices, emerging fogs, as well as conventional clouds.
Sinaeepourfard, A.; Garcia, J.; Masip, X.; Marin, E.; Cirera, J.; Grau, G.; Casaus, F. IFIP Annual Mediterranean Ad Hoc Networking Workshop, p. 1-8. DOI: 10.1109/MedHocNet.2016.7528424. Presentation date: 2016-06. Conference presentation.
Nowadays, Smart Cities are positioned as one of the most challenging and important research topics, highlighting major changes in people's lifestyle. Technologies such as smart energy, smart transportation or smart health are being designed to improve citizens' quality of life. Smart Cities leverage the deployment of a network of devices (sensors and mobile devices), all connected through different means and/or technologies, according to their network availability and capacities, setting a novel framework feeding end-users with an innovative set of smart services. Aligned to this objective, a typical Smart City architecture is organized into layers, including a sensing layer (generates data), a network layer (moves the data), a middleware layer (manages all collected data and makes it ready for usage) and an application layer (provides the smart services benefiting from this data). In this paper a real Smart City is analyzed, corresponding to the city of Barcelona, with special emphasis on the layers responsible for collecting the data generated by the deployed sensors. The amount of daily sensor data transmitted through the network has been estimated, and a rough projection has been made assuming an exhaustive deployment that fully covers the whole city. Finally, we discuss some solutions to both reduce the data transmission and improve the data management.
Fog Computing recently came up as an extension of cloud computing to facilitate the development of IoT services with strong requirements in latency and security, while minimizing the traffic load in the network. The stack of resources set by putting together fog and cloud premises has been recently coined as Fog-to-Cloud (F2C) computing, and has been positioned as an innovative computing paradigm best matching current and foreseen IoT services demands. This paper emphasizes the benefits F2C may bring to a particular health area, namely COPD, whose patients' quality of life intensely depends on the patients' mobility. We argue that by enriching current breath assistance systems for COPD patients with F2C capacities, the patients may comfortably afford physical activities, therefore helping to reduce not only patients' deterioration but also the re-admission incidence rate (IRR), with a clear impact on health costs as well.
Barbosa, V.; Ramirez, W.; Masip, X.; Marin, E.; Ren, G.; Tashakor, G. IEEE International Conference on Communications, p. 1-5. DOI: 10.1109/ICC.2016.7511465. Presentation date: 2016-05. Conference presentation.
The recent technological advances related to computing, storage, cloud, networking and the unstoppable deployment of end-user devices are all coining the so-called Internet of Things (IoT). IoT embraces a wide set of heterogeneous services in highly impacting societal sectors, such as Healthcare, Smart Transportation or Media delivery, all of them posing a diverse set of requirements, including real-time response, low latency, or high capacity. In order to properly address such a diverse set of requirements, the combined use of Cloud and Fog computing turns up as an emerging trend. Indeed, Fog provides low delay for services demanding real-time response, constrained to support low capacity queries, whereas Cloud provides high capacity at the cost of a higher latency. There is no doubt that a new strategy is required to ease the combined operation of cloud and fog infrastructures in IoT scenarios, also referred to as Combined Fog-Cloud (CFC), in terms of service execution performance metrics. To that end, in this paper, we introduce and formulate the QoS-aware service allocation problem for CFC architectures as an integer optimization problem, whose solution minimizes the latency experienced by the services while guaranteeing the fulfillment of the
Marrero, R.; Marin, E.; Masip, X.; Sanchez, S. IFIP Annual Mediterranean Ad Hoc Networking Workshop, p. 1-8. DOI: 10.1109/MedHocNet.2015.7173293. Presentation date: 2015-06-10. Conference presentation.
Intelligent Transport Systems (ITS) can improve safety, mobility and productivity for cities, but the ITS ecosystem lacks efficient methods to ease content integration and communication between different road transport modes. We believe that an emerging technology able to provide relevant capabilities to integrate content, applications and services for ITS users is Cloud Computing. This technology offers the delivery of computing services through the network and is characterized by different service and deployment models, which can allow scalability, reliability, pay as you go, shared resources and service customization for the heterogeneous ITS users. In this paper, we propose a Conceptual Approach of a Hierarchical Cloud Architecture for ITS that is able to provide content integration, and may also provide the possibilities to create different business models based on the Cloud and the ITS ecosystem.
Barbosa, V.; Masip, X.; Marin, E.; Ramirez, W.; Sanchez, S. European Conference on Networks and Optical Communications, p. 1-6. DOI: 10.1109/NOC.2015.7238629. Presentation date: 2015-06-05. Conference presentation.
Despite the increasing number of mobile heterogeneous network elements (NEs) interconnected through the Internet, all of them setting the foundations for an agile IoT development, many issues still remain unsolved. The scalability of the current host-oriented Internet model is one of these problems. In this paper, we present a novel service-oriented architecture dealing with the scalability problem by leveraging the Path Computation Element (PCE) concept. PCE has already been proven an efficient technology to decouple the control tasks from the forwarding nodes, which undoubtedly benefits scalability. Given the importance of control solutions for IoT, we propose to enrich the current host-oriented PCE model to become a Service-oriented PCE (SPCE). Results obtained after running several evaluation tests show that the proposed PCE-based solution may support a higher number of Network Elements (NEs).
IP/MPLS and Optical technologies are the foundations of current Carrier-Grade Networks (CGNs), due to both the flexibility of IP/MPLS to provide services with distinct requirements and the high transport capacity offered by new Optical technologies, such as Elastic Optical Networks (EONs). However, despite the widespread adoption of these two technologies, interoperability issues still impact on key network features, in particular on resilience capabilities. Resilience has been gaining momentum in recent years due to the advent of new CGN scenarios, such as Data Center Networks (DCNs), where a link or a node failure might lead to a substantial traffic loss. We consider that any potential contribution in the resilience arena must be built on top of a solid knowledge of available proposals, emphasizing the most appealing research trends and the limitations affecting the management of resilience in CGNs. Aligned to this scenario, the main goals of this article are to: (1) compile distinct approaches for managing resilience; (2) describe, in a comprehensive manner, the challenges faced by CGN operators in managing resilience; (3) show why current solutions for managing resilience do not completely address the interoperability issues present in multi-layer CGNs with multi-vendor settings; and (4) provide insights into future trends.
The rapid emergence of new network scenarios and architectures, such as Data Center Networks (DCNs), Path Computation Element (PCE), and Software Defined Networking (SDN), has refreshed some on-line routing-related problems, the objective of many research efforts in the past. As a result, new scalable and efficient path computation algorithms are required to address particular characteristics and demands of on-line scenarios, such as those brought by inaccurate Network State Information (NSI), which strongly affects the overall blocking probability. In this paper, we propose a prediction-based PCE scheme, referred to as PPCE. PPCE is devised for highly dynamic network scenarios, aiming at reducing the amount of signaling messages as well as the blocking probability.
Ramirez, W.; Masip, X.; Marin, E.; Yannuzzi, M.; Martinez, A.; Sanchez, S.; Siddiqui, M.; López, V. IEEE Global Communications Conference, p. 2148-2153. Presentation date: 2014-12-01. Conference presentation.
The recent advances in optical technologies pave the way to the deployment of high-bandwidth services. As reliability becomes a mandatory requirement for some of these services, network providers must endow their networks with resilience capabilities. In recent years, network coding protection (NCP) has emerged as a tentative solution aiming at enabling network resilience in a proactive and efficient way. The goal of this paper is to conduct a techno-economic study to evaluate the protection cost required by NCP schemes deployed either at the IP/MPLS or at the Optical layer of a multi-layer network, as well as its impact on both the capital and operational expenditures (CAPEX, OPEX) of a network provider. Our evaluation results show that a significant reduction in both CAPEX and OPEX is obtained with NCP. Indeed, reductions of at least 49% in CAPEX and 52% in OPEX are achieved in comparison with conventional proactive protection schemes.
The design of new inter-domain optical routing protocols may start from scratch or, on the contrary, exploit all the research already developed in IP networks with the Border Gateway Protocol (BGP). Even though the network premises under which BGP was conceived have drastically changed, the pervasive deployment of BGP makes its replacement almost impossible, hence everything indicates that BGP-based routing will remain present in the coming years. In light of this, the approach often used for distributing reachability information and routing inter-domain connections below the IP layer has been to propose extensions to the BGP protocol, which unfortunately exports all well-known BGP weaknesses to these routing scenarios. In this paper we deeply analyze all these problems so that the reader gets a clear idea of the existing limitations inherent to BGP, before exploring the routing problem in optical networks. Then, focusing on the optical layer, we demonstrate that current optical extensions of BGP do not meet the particular optical layer constraints. We then propose minor, though effective, changes to a path vector protocol, overall offering a promising line of work and a simple solution designed to be deployed in a multi-domain and multi-layer scenario.
Amongst the few distinctive European alliances carrying the promise of making a lasting impact in software-defined networking (SDN) and the IT industry are the research and industrial communities pursuing developments in the Path Computation Element (PCE), an effort 'made-in-the-EU'. In regard to PCE design, deployment, and evolution, Europeans are amongst today's leaders in the process of transitioning PCE from software-defined concept to interoperable networking standards. This is particularly the case in the standardization bodies of IETF and ETSI, including the areas of advanced optical networks, sensor networks as well as frameworks for the Internet of Things (IoT).

We are still facing a number of open challenges in the SDN areas, yet to be addressed through PCE design. First, current Open Networking Foundation (ONF) specifications lack control systems and well-defined interfaces with external (non-ONF) functions, which represents a not-to-be-missed opportunity for the PCE-based architecture. Second, the new ETSI initiative on Network Functions Virtualization (NFV) is leveraging virtualization concepts, with the growing role of PCE concepts. Third, Application-based Network Operations (ABNO), an architecture proposed, developed and led within the IETF by leading members of our consortium, uses a variety of PCE-based tools and techniques.

PACE seizes this great opportunity of consolidation of the existing PCE developments, thereby facilitating a one-stop solution for all PCE-related issues, with an open-source software repository, workshops, standardization activities, plugin marketplace, etc. PACE will bring together a community of standardization and community leaders, developers, and academics. PACE will ensure that different aspects of PCE are not developed in isolation, while addressing interoperability issues, and thus avoiding any delays in innovation, and in sealing European leadership in the sector.
Serral, R.; Yannuzzi, M.; Marin, E.; Martinez, A.; Masip, X. International Journal of Internet Protocol Technology Vol. 7, num. 3, p. 148-164. DOI: 10.1504/IJIPT.2013.055471. Publication date: 2013. Journal article.
Peer-to-peer (P2P) is a growing technology offering an affordable platform to deploy distributed services and streaming of multimedia content through peer-to-peer television (P2PTV). Nevertheless, to promote such technology it is necessary to provide a solid streaming quality assessment mechanism. In this context, legacy solutions tied to service level agreements are no longer suitable, as for P2P systems, the service consumers become an active part by assisting in the service delivery. To overcome this limitation, we present a generic multi-layer monitoring and management framework to assess the quality of service (QoS) and the quality of experience (QoE) of multimedia traffic in any P2PTV streaming application. We also demonstrate the usefulness of our solution, by analysing the performance of a real streaming application in a P2PTV environment.
Puype, B.; Marin, E.; Colle, D.; Sanchez, S.; Pickavet, M.; Masip, X.; Demeester, P. Photonic Network Communications Vol. 23, num. 2, p. 172-182. DOI: 10.1007/s11107-011-0348-5. Publication date: 2012-04. Journal article.
Multilayer traffic engineering (MLTE) allows coping with ever-increasing and varying traffic demands in IP-over-Optical multilayer networks. It utilizes cross-layer TE (Traffic Engineering) techniques to provision optical lightpath capacity to the IP/MPLS (Internet Protocol/Multi-Protocol Label Switching) logical topology on-demand. Such provisioning, however, causes optical connection arrival rates that pose strong performance requirements on Routing and Wavelength Assignment (RWA) strategies. Collecting up-to-date network information for the RWA with rapidly changing network states can be quite difficult. Exposing optical layer state information to the IP layer in the overlay model, or transforming this optical layer information into a workable representation in an integrated control plane, is similarly problematic. Prediction-Based Routing (PBR) has been proposed as an RWA mechanism for optical transport networks; it bases routing not on possibly inaccurate or outdated network state, but instead on previous connection set-ups. In this article, we propose to implement PBR as the RWA mechanism in the optical layer of a multilayer network, and to use the predictive capabilities of PBR to expose dynamic optical network information to the multilayer traffic engineering algorithm with minimal control plane overhead. Simulations show the benefits of using PBR in the optical layer for MLTE purposes.
Over the last years, a number of query-based routing protocols have been proposed for Wireless Sensor Networks (WSNs). In this context, routing protocols can be classified into two categories, energy savers and energy balancers. In a nutshell, energy saving protocols aim at decreasing the overall energy consumed by a WSN, whereas energy balancing protocols attempt to efficiently distribute the consumption of energy throughout the network. In general terms, energy saving protocols are not necessarily good at balancing energy and vice versa. In this paper, we introduce an Energy-aware Query-based Routing protocol for WSNs (EQR), which offers a good trade-off between the traditional energy balancing and energy saving objectives. This is achieved by means of learning automata along with zonal broadcasting so as to decrease the total energy consumption. We consider that, in the near future, EQR could be positioned as the routing protocol of choice for a number of query-based WSN applications, especially for deployments where the sensors show moderate mobility.
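As background on the learning-automata mechanism mentioned above, the sketch below implements a linear reward-inaction automaton that a sensor could keep per neighbour; the action set, learning rate and reward signal are illustrative assumptions rather than EQR's actual design.

```python
import random

class LinearRewardInaction:
    """L_RI automaton: reinforce the chosen action on reward, do nothing on penalty."""
    def __init__(self, n_actions, a=0.1):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities
        self.a = a                              # learning rate

    def choose(self):
        return random.choices(range(len(self.p)), weights=self.p)[0]

    def update(self, action, rewarded):
        if not rewarded:
            return  # inaction on penalty
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.a * (1.0 - self.p[i])
            else:
                self.p[i] -= self.a * self.p[i]

# Each sensor could keep one automaton per neighbour and reward forwarding
# choices that returned query answers at a low residual-energy cost.
automaton = LinearRewardInaction(n_actions=3)
nxt = automaton.choose()
automaton.update(nxt, rewarded=True)
print(automaton.p)
```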
The success of Peer-to-Peer (P2P) overlay networks has led to their broad adoption in different scenarios, thereby increasing the set of features present in such protocols, e.g., audio and video-conferencing, streaming of multimedia content using Peer-to-Peer TeleVision (P2PTV), and so on. The wide acceptance of P2P multimedia streaming by end-users has captured the attention of content providers, making it a suitable candidate to be adopted as technology for the offered multimedia services. Nevertheless, to deliver a proper service, content providers need to quantify and assess the quality perceived by the users. In the past, this assessment was embedded into Service Level Agreements under the client/server paradigm, but with the introduction of P2P, these solutions became unsuitable. To overcome this limitation, we present a Monitoring and Management Framework to assess the Quality of Service and the Quality of Experience of multimedia traffic in any P2PTV streaming application.
Ahvar, E.; Marin, E.; Yannuzzi, M.; Masip, X.; Ahvar, S. European Conference on Networks and Optical Communications, p. 1-6. DOI: 10.1109/NOC.2012.6249944. Presentation date: 2012. Conference presentation.
Providing networks with QoS guarantees is one of the key issues to support current and future expected clients' demands. In this scenario, QoS routing is definitely critical as being responsible for defining those optimal routes supporting traffic forwarding throughout the whole network. This paper proposes two new QoS-aware RWA algorithms dealing with the routing inaccuracy problem, aiming at reducing blocking probability while limiting signaling overhead and balancing network load. The proposed algorithms extend the work already published by the authors on prediction based routing by adding a novel fuzzy-based technique featuring a powerful tool for modeling uncertainty. The proposed algorithms are compared with a well-known RWA algorithm and results show the benefit of introducing the fuzzy techniques in the RWA selection.
The flourishing of user driven demand, the heterogeneity of networks, the multiplicity of new devices, all mean that the Internet as we know it is reaching a saturation point. One of the main challenges of Future Internet research is to address the surge in complexity that service and network developers are facing.

Building on top of the on-going actions to support large-scale experimentation for Future Internet protocols, TEFIS brings evaluation processes one step further. TEFIS provides an open platform to support large-scale experimentation of resource-demanding Internet services in conjunction with upcoming Future Internet networking technologies and user-oriented living labs.

It will act as a single access point to a variety of existing and next-generation experimental facilities.

TEFIS outcomes will be:
• An open platform to integrate and use heterogeneous testbeds based on a connector model, exposed as a classical service.
• Integration of 8 complementary experimental facilities, including network and software testing facilities, and user-oriented living labs.
• A platform to share expertise and best practices.
• Core services for flexible management of experimental data and underlying testbed resources during the experiment workflow.
• A single access point to testbeds instrumented with a large number of tools to support the users throughout the whole experiment lifecycle (compilation, integration, deployment, dimensioning, user evaluation, monitoring, etc.) and allow them to work together by sharing expertise.

A specific action is foreseen via an Open Call to engage new experimentations and to gradually expand TEFIS. Combining the efforts of the software and service industry, the FIRE community and the user-centric Living Labs, TEFIS will foster research and business communities in collaboratively elaborating knowledge about the provisioning of Future Internet services.
Martinez, A.; Ramirez, W.; German, M.; Serral, R.; Marin, E.; Yannuzzi, M.; Masip, X. International Conference on Wired/Wireless Internet Communications. Presentation date: 2011-06. Conference presentation.
The Next Generation Internet points out the challenge of addressing "things" on networks both with (wired) and without (wireless) infrastructure. In this scenario, new efficient and scalable addressing and routing schemes must be sought, since currently proposed solutions can hardly manage current scalability issues in the global Internet routing table due to, for example, multihoming practices. One of the most relevant proposals for an addressing scheme is the Locator Identifier Separation Protocol (LISP), whose key advantage is that it does not follow a disruptive approach. Nevertheless, LISP has some drawbacks, especially in terms of reachability at the border routers. In the face of this, in this paper we propose a protocol called the LISP Redundancy Protocol (LRP), which provides an interesting approach for managing the reachability and reliability issues common in a LISP architecture, such as those motivated by an inter-domain link failure.
Yannuzzi, M.; Marin, E.; Serral, R.; Masip, X.; González de Dios, O.; Jimenez, J.; Verchere, D. International Conference on Optical Network Design and Modeling. Presentation date: 2011-02-08. Conference presentation.
The advantages of optical transparency are still confined to the boundaries of a domain, since the optical signals are subject to O/E/O conversions at the border nodes that separate two optical domains. The extension of transparent connections across domains requires advances both in the modeling of the impairments suffered by the optical signals and in devising strategies to exploit such models in practice. In this latter regard, one of the main challenges is to design information exchange models and protocols enabling optical bypass without disclosing detailed physical-layer information among domains. In this paper, we focus on the modeling and the exchange of impairment-related information between optical domains. We propose a model that conveniently captures the degradation experienced by an optical signal along a lightpath, and describe its use at the frontier between two neighboring domains. Our approach respects the privacy and administrative limits of carrier networks, while enabling the provision of transparent connections beyond domain boundaries. The model and strategies proposed in this paper generalize the contributions made by some of the most relevant works in the field, providing in this way a first attempt toward a unifying view and theory for quantifying the transmission impairments in DWDM optical networks.
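For context, impairment-aware models of this kind usually summarize signal degradation through the Q factor, which relates to the bit error rate via the standard textbook expression below; this is background only, not the inter-domain model proposed in the paper.

```latex
\mathrm{BER} \;\approx\; \tfrac{1}{2}\,\operatorname{erfc}\!\left(\frac{Q}{\sqrt{2}}\right),
\qquad Q_{\mathrm{dB}} = 20\log_{10} Q
```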
This paper presents a Cross-layer Network Management System (NMS) that allows Service Providers (SPs) to perform cost-effective network resource reservations with their Network Operators (NOs). The novelty of our NMS is that it offers a fresh and promising approach to using the end-users' satisfaction level as the metric to perform the resource management. We show that our system is capable of achieving considerable reductions in the operational costs of SPs, while keeping proper bounds on the end-user satisfaction for the offered multimedia services.
Serral, R.; Marin, E.; Yannuzzi, M.; Masip, X.; Sanchez, S. IEEE Global Communications Conference, p. 1627-1631. DOI: 10.1109/GLOCOMW.2010.5700215. Presentation date: 2010-12-06. Conference presentation.
Network performance measurements have been broadly used to debug and to assess the reliability of the network. In general, performance measurement is a resource-consuming process, which in practice results in hardly scalable systems. However, in order to control and monitor network resource usage and to assess the quality of multimedia traffic, it is broadly accepted that efficient and scalable network performance evaluation infrastructures must be present in today's networks. Nevertheless, most current research focuses on the assessment of the different performance metrics in order to determine whether the quality of the delivered service is correct. Differently, the main contribution presented in this paper relies on the fact that an accurate estimation of the network metrics is generally not necessary, given that it is much more efficient to directly detect service disruptions. To this end, we propose an algorithm to detect anomalies in the service level delivered to the users. Our solution is based on the distance measured between acquired reference distributions of the Inter-Packet Arrival Times. This easy-to-measure time series permits detecting QoS disruptions very efficiently, making the solution a very good candidate to be used in Service Level Agreement assessment systems, as we prove in our experimental setup, where we evaluate the performance and accuracy of the proposal in a real scenario.
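A minimal sketch of the distribution-distance idea follows: it compares a reference inter-packet arrival time (IPAT) histogram against an observed one using total-variation distance; the binning, the synthetic exponential traffic and the alert threshold are assumptions for illustration, not the algorithm evaluated in the paper.

```python
import numpy as np

def ipat_distance(reference_ipats, observed_ipats, bins=50, upper=0.2):
    """Total-variation distance between reference and observed IPAT histograms (seconds)."""
    edges = np.linspace(0.0, upper, bins + 1)
    ref_hist, _ = np.histogram(reference_ipats, bins=edges)
    obs_hist, _ = np.histogram(observed_ipats, bins=edges)
    ref_p = ref_hist / max(ref_hist.sum(), 1)
    obs_p = obs_hist / max(obs_hist.sum(), 1)
    return 0.5 * float(np.abs(ref_p - obs_p).sum())

# Synthetic example: a QoS disruption stretches the inter-packet gaps.
rng = np.random.default_rng(0)
reference = rng.exponential(0.02, 5000)   # IPATs under normal delivery
observed = rng.exponential(0.06, 5000)    # IPATs during congestion
distance = ipat_distance(reference, observed)
print(round(distance, 3))  # flag a disruption when this exceeds a calibrated threshold
```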
Marin, E.; Masip, X.; Yannuzzi, M.; Serral, R.; Sanchez, S. International Congress on Ultra Modern Telecommunications and Control Systems and Workshops, p. 1206-1211. Presentation date: 2010-10-19. Conference presentation.
This paper studies the impact of considering crosstalk effects in the IA-RWA process and in the regenerator allocation algorithms. First of all, we evaluate whether the regenerator allocation optimization is still useful when crosstalk effects are considered. To this end, we improve a simple physical model which computes the quality-of-signal factor for each optical connection. Until now, this physical model was unaware of the physical-layer impairments that depend on the interference between different optical connections, such as crosstalk. Following the same simple methodology, we add the switching-node crosstalk effect to the computation of the quality of the signal. Once the new physical model is enhanced, we evaluate the impact of the crosstalk on the performance of IA-RWA algorithms that optimize the use of the regenerators, and we also evaluate the impact of different wavelength assignment techniques.
There is a wide consensus among telecom vendors and operators that the next decade will see a mélange of evolving Internet architectures embedded into high-bandwidth technologies and carrier-grade systems for control and management. The combined Internet Protocol (IP) and Ethernet-based optical transport solutions are expected to drastically lower capital and operational expenses and improve overall network performance. Central to this premise is the concept of autonomic network management, offering a radical improvement in the way the Internet can interact with the transport layers, making automated use of available capacity and physical interconnectivity. Unfortunately, practice lags far behind this promise. The segmentation of IP and carrier-grade technologies has not only produced the carrier's organizational separation, but also a fragmentation of the technical competence through separate Network Management Systems (NMSs). In the isolated Internet and carrier-grade management ecosystems, even simple operations, such as IP link upgrades, require multiple human-assisted configurations, and are far from automation. As a result, carriers are seeking ways to alleviate the dependency on manual processes that not only create management expenditures, but also lead to a heavy overprovision of the IP network. In the project ONE, we propose to alleviate the current isolation between the IP and carrier-grade management ecosystems. As a first step towards a commercially-viable autonomic management solution, we plan to design and prototype an ontology-based communication adapter between the two NMS systems, enabling: i) automated provisioning of IP topologies and services; ii) policy-based setup/release of resources; and iii) coordinated self-healing. We emphasize that the solution does not aim to integrate the NMSs, but it should enable their communication, and thus effectively exploit a set of common objectives as they evolve in future systems.
This paper presents a promising Autonomic Network Management System (ANMS) that allows Service Providers (SPs) to perform efficient network resource provisioning with their Network Operators (NOs). The novelty of our ANMS is that it offers a starting point for using the end-users' perceived quality as a metric to manage the network resources. We show that our system is capable of achieving considerable reductions in the operational costs of SPs, while keeping proper bounds on the end-user satisfaction for the offered multimedia services.
The performance of the regenerator allocation algorithms in WDM networks strongly depends on the accuracy of the physical-layer information, such as the Q factor. In a translucent WDM network, the already installed regenerators along the lightpath are allocated based on the physical information (Q factor) in order to maximize the quality of the optical signal while minimizing the opaqueness of the network. The Q factor used by the IA-RWA algorithms is usually inaccurate due to the drift suffered by the physical-layer parameters during the operation of the optical network. In this scenario, the allocation of regenerators is not optimized and the performance of the network worsens. New regenerator allocation schemes should be proposed in order to counteract the inherent and unpredictable uncertainty in the physical-layer information.
Quality assessment of multimedia traffic is a hot topic. In this paper, we present a Multimedia Management System which can be used on-line to assess and guarantee the quality of multimedia traffic in a wired and wireless network. The proposed platform uses both network and application layer metrics to build up a scalable quality assessment framework. The core of this framework provides means for traffic provisioning capabilities by coordinating the network access and usage both from the wireless node and from the network access point. These two combined features permit our platform to guarantee a satisfactory multimedia user experience. We evaluate our proposal by issuing an experimental deployment in a testbed and performing a series of tests under different network situations to demonstrate the Quality of Experience guarantees of our system. The results show that the quality of video perceived by end-users is considerably improved compared to the typical wireless network.
Marin, E.; Yannuzzi, M.; Serral, R.; Masip, X.; Sanchez, S. European Conference on Networks and Optical Communications and the Conference on Optical Cabling and Infrastructure, p. 175-180. Presentation date: 2010-06-08. Conference presentation.
Regenerator allocation consists of selecting which of the already installed regenerators in a translucent network may be used, according to the dynamic traffic requests, in order to maximize the quality of the optical signal while minimizing the opaqueness of the network. A recent study has shown that the performance of the regenerator allocation techniques strongly depends on the accuracy of the physical-layer information. The reason for this physical inaccuracy is the drift suffered by the physical-layer parameters during the operation of the optical network. Under these conditions, the performance of the Impairment Aware-Routing and Wavelength Assignment (IA-RWA) process might drop sharply when regenerators are allocated inappropriately. In this paper, we propose new regenerator allocation schemes taking into account the inherent and unavoidable inaccuracy in the physical-layer information.
Serral, R.; Yannuzzi, M.; Marin, E.; Masip, X.; Sanchez, S. International Conference on Wired/Wireless Internet Communications, p. 180-191. DOI: 10.1007/978-3-642-13315-2_15. Presentation date: 2010-06-03. Conference presentation.
In this paper, we present a Multimedia Wireless Management System which can be used on-line to assess and guarantee the quality of multimedia traffic in a wireless network. The proposed platform uses both network and application layer metrics to build up a scalable quality assessment of multimedia traffic. Moreover, the system provides traffic provisioning capabilities by coordinating the network access and usage both from the wireless node and from the network access point. These two combined features permit our platform to guarantee a satisfactory multimedia user experience in wireless environments. We evaluate our proposal by issuing an experimental deployment in a testbed and performing a series of tests under different network situations to demonstrate the Quality of Experience guarantees of our system. The results show that the quality of video perceived by end-users is considerably improved compared to the typical wireless network.
The routing inaccuracy problem is one of the major issues impeding the evolution and deployment of Constraint-Based Routing (CBR) techniques. This paper proposes a promising CBR strategy that combines the strengths of prediction with an innovative link-state cost. The latter explicitly integrates a two-bit counter predictor, with a novel metric that stands for the degree of inaccuracy (seen by the source node) of the state information associated with the links along a path. In our routing model, Link-State Advertisements (LSAs) are only distributed upon topological changes in the network, i.e., the state and availability of network resources along a path are predicted from the source rather than updated through conventional LSAs. As a proof-of-concept, we apply our routing strategy in the context of circuit-switched networks. We show that our approach considerably reduces the impact of routing inaccuracy on the blocking probability, while eliminating the typical LSAs caused by the traffic dynamics in CBR protocols.
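As a concrete illustration of the two-bit counter predictor mentioned in the link-state cost, a minimal saturating-counter sketch follows; the mapping of counter states to a "link likely free/busy" prediction is an assumption for the example, not the exact cost function of the paper.

```python
class TwoBitPredictor:
    """Saturating 2-bit counter: states 0,1 predict 'busy'; states 2,3 predict 'free'."""
    def __init__(self):
        self.state = 2  # start weakly predicting the link has spare capacity

    def predict_free(self) -> bool:
        return self.state >= 2

    def observe(self, link_was_free: bool):
        if link_was_free:
            self.state = min(self.state + 1, 3)
        else:
            self.state = max(self.state - 1, 0)

# One predictor per link: the source predicts residual capacity instead of
# waiting for fresh Link-State Advertisements.
p = TwoBitPredictor()
for outcome in [True, True, False, False, False]:
    p.observe(outcome)
print(p.predict_free())  # after a run of 'busy' outcomes the counter saturates low
```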
Marin, E.; Yannuzzi, M.; Masip, X.; Sanchez, S.; Martínez, R.; Muñoz, R.; Casellas, R.; Maier, G. International Conference on Optical Network Design and Modeling. Presentation date: 2010-02-02. Conference presentation.
Optimized regenerator allocation techniques select which of the already installed regenerators in a translucent network must be used in order to maximize the quality of the optical signal while minimizing the opaqueness of the network. Unfortunately, the performance of an optimized regenerator allocation strategy strongly depends on the accuracy of the physical-layer information. In this paper, we investigate the effects of optimized regenerator allocation techniques when the physical-layer information is inaccurate. According to the performed experiments, we conclude that most of the current techniques for regenerator usage optimization are only effective when perfect knowledge of the physical information is available. Hence, new regenerator allocation schemes taking into account the inherent inaccuracy in the physical-layer information need to be designed.