Traffic anomalies can create network congestion, so their prompt and accurate detection would allow network operators to make decisions that guarantee network performance and prevent services from experiencing any perturbation. In this paper, we focus on origin–destination (OD) traffic anomalies; to detect them efficiently, we study two different anomaly detection methods based on data analytics and combine them with three monitoring strategies. In view of the short monitoring period needed to reduce anomaly detection times, which entails a large amount of monitoring data to be collected and analyzed in a centralized repository, we propose bringing data analytics to the network nodes to efficiently detect traffic anomalies, while keeping traffic estimation centralized. Once an OD traffic anomaly is detected, a network reconfiguration can be triggered to adapt the network to the new traffic conditions. However, an external event might cause multiple related traffic anomalies. If a network reconfiguration were triggered just after one traffic anomaly is detected, some Key Performance Indicators (KPI), such as the number of network reconfigurations and the total reconfiguration time, would be unnecessarily high. In light of that, we propose the Anomaly and Network Reconfiguration (ALCOR) method to anticipate whether other ODs are anomalous after one anomalous OD pair is detected. Exhaustive simulation results on a realistic network scenario show that the monitoring period should be as low as possible (e.g., 1 min) to keep anomaly detection times low, which clearly motivates placing the traffic anomaly detection function in the network nodes. In the case of multiple anomalies, results show that ALCOR can significantly improve KPIs such as the number of network reconfigurations, the total reconfiguration time, and traffic losses.
Online advertising, the pillar of the “free” content on the Web, has revolutionized the marketing business in recent years by creating a myriad of new opportunities for advertisers to reach potential customers. The current advertising model builds upon an intricate infrastructure composed of a variety of intermediary entities and technologies whose main aim is to deliver personalized ads. For this purpose, a wealth of user data is collected, aggregated, processed and traded behind the scenes at an unprecedented rate. Despite the enormous value of online advertising, however, the intrusiveness and ubiquity of these practices prompt serious privacy concerns. This article surveys the online advertising infrastructure and its supporting technologies, and presents a thorough overview of the underlying privacy risks and the solutions that may mitigate them. We first analyze the threats and potential privacy attackers in this scenario of online advertising. In particular, we examine the main components of the advertising infrastructure in terms of tracking capabilities, data collection, aggregation level and privacy risk, and overview the tracking and data-sharing technologies employed by these components. Then, we conduct a comprehensive survey of the most relevant privacy mechanisms, and classify and compare them on the basis of their privacy guarantees and impact on the Web.
The ever-increasing requirements of new Internet applications are pushing the optimization of optical network design. A key criterion in network design is the ability to recover from failures in an agile and efficient manner. Protection capabilities are essential in optical networks, since the failure of an optical link might lead to a significant traffic loss. In this context, Network Coding Protection (NCP) has emerged as an innovative solution to proactively enable protection in an agile and efficient manner by means of throughput-improvement techniques such as Network Coding (NC). Nevertheless, the benefits of NC can be reduced by the negative effects of inaccurate Network State Information (NSI), which are common in dynamic scenarios. In this paper, we propose a novel proactive protection strategy based on NC jointly with a Path Computation Element (PCE) architecture, called Predictive Network Coding Protection (PNCP). PNCP leverages predictive techniques in order to mitigate the negative impact of inaccurate NSI on the blocking probability. In addition, PNCP computes resilient lightpaths with a low amount of network resources devoted to path protection. By means of extensive simulation results, we show that, in comparison with proactive protection strategies such as Dedicated Path Protection (DPP) and conventional dynamic NCP, PNCP reduces the blocking probability as well as the network resources allocated for path protection in dynamic scenarios.
Telecom operators are starting the deployment of Content Delivery Networks (CDN) to better control and manage video contents injected into the network. Cache nodes placed close to end users can manage contents and adapt them to users' devices, while reducing video traffic in the core. By adopting the standardized MPEG-DASH technique, video contents can be delivered over HTTP. Thus, HTTP servers can be used to serve contents, while packagers running as software can prepare live contents. This paves the way for virtualizing the CDN function. In this paper, a CDN manager is proposed to adapt the virtualized CDN function to current and future demand. A Big Data architecture, fulfilling the ETSI NFV guidelines, allows controlling virtualized components while collecting and pre-processing data. Optimization problems minimize CDN costs while ensuring the highest quality. Re-optimization is triggered upon threshold violations; data stream mining sketches transform the collected data into modeled data, and statistical linear regression and machine learning techniques are proposed to produce estimations of future scenarios. Exhaustive simulation over a realistic scenario reveals a remarkable cost reduction obtained by dynamically reconfiguring the CDN.
The performance of protocols and architectures for upcoming vehicular networks is commonly investigated by means of computer simulations, due to the excessive cost and complexity of large-scale experiments. Dependable and reproducible simulations are thus paramount to a proper evaluation of vehicular networking solutions. Yet, we lack today a reference dataset of vehicular mobility scenarios that are realistic, publicly available, heterogeneous, and that can be used for networking simulations straightaway. In this paper, we contribute to the endeavor of developing such a reference dataset, and present original synthetic traces that are generated from high-resolution real-world traffic counts. They describe road traffic in quasi-stationary state on three highways near Madrid, Spain, for different time-spans of several working days. To assess the potential impact of the traces on networking studies, we carry out a comprehensive analysis of the vehicular network topology they yield. Our results highlight the significant variability of the vehicular connectivity over time and space, and its invariant correlation with the vehicular density. We also underpin the dramatic influence of the communication range on the network fragmentation, availability, and stability, in all of the scenarios we consider.
In this work, we address the problem of jointly deciding the placement of the contents delivered by a Content Distribution Network (CDN) among the available data centers, together with the allocation of the lightpaths required to serve the anycast demands initiated by the CDN network users, assuming an underlying high-capacity Elastic Optical Network (EON). We first present an Integer Linear Programming (ILP) formulation to optimally solve the targeted problem. This ILP formulation is of high complexity, though, and cannot be used to solve realistically sized problem instances. Hence, we also introduce a novel heuristic called CPRMSA-PD, which decomposes the problem into three sub-problems and applies greedy heuristics and simulated annealing meta-heuristic techniques to yield accurate solutions with practical execution times. We validate the performance of our CPRMSA-PD heuristic on medium-sized problem instances by comparing its results to those of the optimal ILP formulation. Next, we use it to give extensive insights into the effects of different key parameters identified in large CDN over EON backbone networks.
In a question-driven survey, the answers to one question may decide which question is presented next. In this case, encrypting the answers of the participants is not enough to protect their privacy, since the system is able to learn them by inspecting the next question the participants request. In this article, we explore the technologies involved in surveys performed through a mobile phone. Participants receive the questions using VoIP technologies and, since their answers affect which questions are presented next, they must protect the selection of the relevant questions. In addition, this paper considers the performance of the proposed encryption technologies on mobile phones. Finally, the answers to the poll must be sent to the server; this paper proposes an eVoting framework to preserve the privacy of the users while sending the answers to the system. Such a scenario involves many different communication channels and technologies. As we will show, the decisions taken in some of the modules force some technologies and decisions in the others.
Ricciardi, S.; Sembroiz, D.; Palmieri, F.; Santos-Boada, G.; Perello, J.; Careglio, D. Computer Communications, Vol. 77, p. 85-99. DOI: 10.1016/j.comcom.2015.06.010. Publication date: 2015-06-25. Journal article.
In recent years, the power consumption of telecommunication networks has attracted the attention of both researchers and field experts, with the aim of containing the associated energy bills and reducing their ecological impact. Many of the proposed solutions have focused exclusively on reducing power consumption, without adequately considering more traditional network engineering objectives such as balancing resource utilization, routing policy, or resilience schemes. As a consequence, network control plane strategies passed from one extreme to the other: from being totally energy-unaware to being exclusively energy-efficient at the expense of load balancing, with obvious impacts on power consumption in the former case and on the blocking rate in the latter.
In this paper, we present a hybrid routing and wavelength assignment algorithm that, when the network is lightly loaded, operates in an energy-efficient way, by routing the connections on the paths requiring the lowest amount of energy, while, when the network load increases, it dynamically switches to a pure load-balancing scheme in order to best allocate the available communication resources. The switching decision between load balancing and energy awareness is taken dynamically, driven by a threshold on the number of new connection requests reaching the network during a prefixed time window.
Simulation results show the effectiveness of the hybrid algorithm, which achieves lower energy consumption than a pure load-balancing algorithm while keeping the network load fairly distributed on the available resources.
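The threshold-driven mode switch described in this abstract can be sketched as follows; the function name, the window accounting, and the example threshold are illustrative assumptions, not taken from the paper:

```python
def choose_routing_mode(requests_in_window, threshold):
    """Pick the RWA policy for the next time window.

    requests_in_window: number of new connection requests observed
    during the last prefixed time window (hypothetical interface).
    """
    if requests_in_window > threshold:
        # Network heavily loaded: spread connections across resources.
        return "load_balancing"
    # Lightly loaded: route connections on the lowest-energy paths.
    return "energy_aware"

# Example: with a window threshold of 20 requests.
print(choose_routing_mode(5, 20))   # light load -> energy_aware
print(choose_routing_mode(25, 20))  # heavy load -> load_balancing
```

The single threshold keeps the decision cheap enough to re-evaluate every window.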
IP/MPLS and optical technologies are the foundations of current Carrier-Grade Networks (CGNs), due both to the flexibility of IP/MPLS to provide services with distinct requirements and to the high transport capacity offered by new optical technologies, such as Elastic Optical Networks (EONs). However, despite the widespread adoption of these two technologies, interoperability issues still impact key network features, in particular resilience capabilities. Resilience has gained momentum in recent years due to the advent of new CGN scenarios, such as Data Center Networks (DCNs), where a link or node failure might lead to a substantial traffic loss. We consider that any potential contribution in the resilience arena must be built on top of a solid knowledge of available proposals, emphasizing the most appealing research trends and the limitations affecting the management of resilience in CGNs. Aligned with this scenario, the main goals of this article are to: (1) compile distinct approaches for managing resilience; (2) describe, in a comprehensive manner, the challenges faced by CGN operators in order to manage resilience; (3) show why current solutions for managing resilience do not completely address the interoperability issues present in multi-layer CGNs with multi-vendor settings; and (4) provide insights into future trends.
Velasco, L.; Asensio, A.; Berral, J.; Bonetto, E.; Musumeci, F.; López, V. Computer Communications, Vol. 50, p. 142-151. DOI: 10.1016/j.comcom.2014.03.004. Publication date: 2014-09-01. Journal article.
The huge energy consumption of datacenters providing cloud services over the Internet has motivated different studies regarding cost savings in datacenters. Since energy expenditure is a predominant part of the total operational expenditures for datacenter operators, energy-aware policies for minimizing datacenters' energy consumption try to minimize energy costs while guaranteeing a certain quality of experience (QoE). Federated datacenters can take advantage of their geographically distributed infrastructure by appropriately managing the green energy resources available in each datacenter at a given time, in combination with workload consolidation and virtual machine migration policies. In this scenario, inter-datacenter networks play an important role, and communication costs must be considered when minimizing operational expenditures. In this work we tackle the Elastic Operations in Federated Datacenter for Performance and Cost Optimization (ELFADO) problem for scheduling workloads and orchestrating federated datacenters. Two approaches, distributed and centralized, are studied, and integer linear programming (ILP) formulations and heuristics are provided. Using those heuristics, we analyze cost savings with respect to a fixed workload placement. For the sake of a compelling analysis, exhaustive simulation experiments are carried out considering realistic scenarios. Results show that the centralized ELFADO approach can save up to 52% of energy cost, and more than 44% when communication costs are also considered.
Vázquez Rodas, A.; de la Cruz Llopis, Luis J.; Aguilar Igartua, M.; Sanvicente, E. Computer Communications, Vol. 44, p. 44-58. DOI: 10.1016/j.comcom.2014.03.003. Publication date: 2014-05-15. Journal article.
Buffer overflow is an important phenomenon in data networks that has much bearing on the overall network performance. Such overflow critically depends on the amount of storage space allotted to the transmission channels. To properly dimension this buffering capacity, detailed knowledge of a certain set of probabilities is needed. In real practice, however, that information is seldom available, and only a few average values are at the analyst's disposal. In this paper, a solution to this quandary based on maximum entropy is proposed. On the other hand, when wireless devices are taken into account, transmission over a shared medium imposes additional restrictions. This paper also presents an extension of the maximum-entropy approach for this kind of device. The main purpose is for wireless nodes to become able to dynamically self-configure their buffer sizes, achieving more efficient memory utilization while keeping the packet loss probability bounded. Simulation results using different network settings and traffic load conditions demonstrate a meaningful improvement in memory utilization efficiency. This could potentially benefit devices of different wireless network technologies, like mesh routers with multiple interfaces or memory-constrained sensor nodes. Additionally, when the available memory resources are not a problem, the buffer memory reduction also contributes to preventing the high latency and network performance degradation due to overbuffering. It also facilitates the design and manufacturing of devices with faster memory technologies and future all-optical routers.
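To illustrate the maximum-entropy idea at its simplest: when only the mean buffer occupancy is known, the maximum-entropy distribution over occupancy is geometric, and the buffer can be sized so that the tail (loss) probability stays below a target. This is a hedged sketch under those simplifying assumptions, not the paper's actual scheme:

```python
import math

def maxent_buffer_size(mean_occupancy, target_loss):
    """Smallest buffer size B such that rho**B <= target_loss, where the
    geometric distribution with parameter rho = m/(1+m) is the
    maximum-entropy occupancy distribution consistent with mean m."""
    rho = mean_occupancy / (1.0 + mean_occupancy)
    return math.ceil(math.log(target_loss) / math.log(rho))

# Example: mean occupancy of 4 packets, loss target 1e-3.
print(maxent_buffer_size(4, 1e-3))
```

A node could re-run this rule periodically with the measured mean, which is the kind of dynamic self-configuration the abstract describes.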
Wireless Local Area Networks (WLANs) have been increasingly deployed and have become very popular. They offer important advantages, such as higher flexibility and user mobility; however, this kind of network also presents some security concerns due to its broadcast nature. Security mechanisms can be classified into two groups: user authentication and data confidentiality. The IEEE 802.11i specification presents the RSNA authentication mechanism, which allows user authentication employing the IEEE 802.1X protocol and EAP methods. Authentication mechanisms constitute an important step of the handoff process, which occurs when a mobile node leaves the coverage area of an access point and associates with another. Handoff is a critical function for IEEE 802.11 MAC operation due to important delay restrictions. Thus, pre-authentication and IEEE 802.11r mechanisms have been presented to allow an important latency reduction, providing interesting results in real-time communications. Besides, WLAN users usually employ mobile devices, which provide limited capabilities in terms of energy management. Accordingly, in this paper we evaluate authentication latency and battery consumption by means of an analytical model that we have developed for this purpose. The analysis also includes the influence of transmission errors, which allows the evaluation of authentication mechanisms in error-prone scenarios.
Service Providers (SPs) offering services based on elastic reservations with a guaranteed Grade of Service (GoS) should be interested in knowing how to price these services: how to calculate the benefits associated with a given service class i, or for how long the price for class i could be maintained, when an evolutionary function of the aggregate demand is involved and the established GoS for the elastic service is guaranteed. This paper thus proposes a method to price elastic services (or elastic reservations) with guaranteed GoS under an evolving aggregate demand function. The method first obtains the average rate of accepted elastic reservations of class i with guaranteed GoS. Second, according to the accepted reservations, it calculates the price that maximizes the selected revenue function. The considered aggregate demand function depends not only on a demand modulation factor, the mean reserved bandwidth Bres,i, but also on the evolution of this aggregate demand according to a Bass diffusion model. Third, in a scenario where abundant access bandwidth Bi is not available, it evaluates the optimum value of the elasticity of the reservations that maximizes the revenue function for the obtained price. Finally, the method forecasts the time until which the SP does not need to change the calculated price or elasticity when the demand increases and the GoS is guaranteed. The paper applies the method to a class i of elastic reservations, analyzes the influence of each of the parameters, and could be extended to multiple classes of independent guaranteed elastic services.
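The Bass diffusion model driving the demand evolution has a standard closed form for cumulative adoption; a minimal sketch follows, where the parameter values in the example are purely illustrative and the function is not the paper's demand model itself:

```python
import math

def bass_cumulative(t, p, q, M):
    """Cumulative adopters at time t under the Bass diffusion model:
    F(t) = (1 - exp(-(p+q)*t)) / (1 + (q/p)*exp(-(p+q)*t)), adopters = M*F(t).
    p: innovation coefficient, q: imitation coefficient, M: market potential."""
    e = math.exp(-(p + q) * t)
    return M * (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative parameters: p=0.03, q=0.38, market of 1000 subscribers.
print(bass_cumulative(5, 0.03, 0.38, 1000))
```

Feeding such a curve into the revenue function is what lets the method forecast how long a computed price remains valid as demand grows.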
Existing methods to detect and measure heavy hitters (frequent items) are either lightweight but too inaccurate and memory-demanding (e.g., those relying on sampling), or too heavyweight to be deployed at high speeds. In this paper, we present several sampling-based algorithms for the problem and show that they exhibit two critical features. First, despite sampling, our schemes provide accurate results and detection guarantees that are independent of the traffic properties. Second, they are provably shown to require memory that is not only constant regardless of the amount of traffic observed and its composition, but also only a small factor above the theoretical minimum. Thus, unlike most solutions, ours scale in both space and speed, the use of sampling allowing performance to be traded off for cost. As we will see, our algorithms build on similar principles. The first two use a constant sampling probability. Upgrading the second to support a variable sampling rate, adjusted depending on the traffic intensity and the available CPU, yields our third scheme: a highly versatile solution that performs quasi-optimally and requires minimal configuration.
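The common principle — sample packets, count the sampled items, and report those whose sampled count crosses a threshold — can be sketched as follows. This toy version uses a fixed sampling probability and has none of the accuracy or memory guarantees the paper establishes:

```python
import random
from collections import Counter

def sampled_heavy_hitters(stream, p, phi):
    """Report items whose sampled frequency is at least a fraction phi
    of the sampled traffic. p: per-item sampling probability."""
    counts = Counter()
    n_sampled = 0
    for item in stream:
        if random.random() < p:  # keep each item with probability p
            counts[item] += 1
            n_sampled += 1
    return {x for x, c in counts.items() if n_sampled and c >= phi * n_sampled}

# With p = 1.0 (no sampling) the output is exact:
stream = ["a"] * 90 + ["b"] * 10
print(sampled_heavy_hitters(stream, 1.0, 0.5))
```

Lowering p reduces per-packet work at the cost of estimation noise, which is exactly the performance-for-cost trade-off the abstract mentions.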
Vera-del-Campo, J.; Pegueroles, J.; Hernández-Serrano, J.; Soriano, M. Computer Communications, Vol. 36, num. 1, p. 90-104. DOI: 10.1016/j.comcom.2012.07.018. Publication date: 2012-08. Journal article.
The success and intensive use of social networks makes strategies for efficient document location a hot topic of research. In this paper, we propose a common vector space to describe documents and users to create a social network based on affinities, and explore epidemic routing to recommend documents according to the user's interests. Furthermore, we propose the creation of a SoftDHT structure to improve the recommendation results. Using these mechanisms, an efficient document recommender system with a fast organization of clusters of users based on their affinity can be provided, preventing the creation of unlinked communities. We show through simulations that the proposed system has a short convergence time and presents a high recall ratio.
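Affinity between a user profile and a document described in a common vector space is typically measured with cosine similarity; the paper does not specify its exact measure, so treat this as an illustrative assumption:

```python
import math

def affinity(u, v):
    """Cosine similarity between two profile vectors in a shared
    user/document vector space (hypothetical measure)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Identical interest vectors have affinity 1, orthogonal ones 0.
print(affinity([1, 0, 2], [1, 0, 2]))
print(affinity([1, 0], [0, 1]))
```

Clustering users by such an affinity score is what enables the fast cluster formation the abstract claims.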
de la Cruz Llopis, Luis J.; Vázquez Rodas, A.; Sanvicente, E.; Aguilar Igartua, M. Computer Communications, Vol. 35, num. 8, p. 993-1003. DOI: 10.1016/j.comcom.2012.02.015. Publication date: 2012-03-07. Journal article.
Nowadays, video on demand is one of the services most highly appreciated and demanded by customers. As the number of users increases, the capacity of the system that provides these services must also be increased to guarantee the required quality of service. One approach to that end is to have several video servers available at various distribution points in order to satisfy the different incoming demands (a video-server cluster). When a movie request arrives at such a cluster, a load-balancing device must assign it to a specific server according to a procedure that must be fast, easy to implement, and scalable. In this article we consider the problem of appropriately splitting this load to improve the system performance. After an analysis of video packet generation, we point out the similarity between this problem and that of optimally routing packets in data networks. With this similarity in mind, a new mechanism to select the appropriate video server is proposed. The purpose of this mechanism is to minimize the average packet transfer time (waiting time plus transmission time) at the video-server cluster. In this way, we obtain a dynamic load-balancing policy that performs satisfactorily and is very easy to implement in practice. The results of several experiments run with real data are shown and commented on to substantiate our claims. A description of a practical implementation of the system is also included.
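If each video server is modeled as an M/M/1 queue, the average packet transfer time at server i is 1/(mu_i − lambda_i), and a dispatcher can send each new request to the server minimizing that delay. This is a hedged sketch of such a selection rule; the paper's actual mechanism may differ:

```python
def pick_server(arrival_rates, service_rates):
    """Return the index of the server with the smallest M/M/1 average
    transfer time 1/(mu - lambda); servers whose queue would be
    unstable (lambda >= mu) are skipped. Returns None if none fits."""
    best, best_delay = None, float("inf")
    for i, (lam, mu) in enumerate(zip(arrival_rates, service_rates)):
        if lam < mu:
            delay = 1.0 / (mu - lam)
            if delay < best_delay:
                best, best_delay = i, delay
    return best

# Server 1 has the same capacity but less load, so it wins:
print(pick_server([5.0, 2.0], [10.0, 10.0]))
```

The rule needs only per-server load estimates, which keeps it fast, simple, and scalable, as the abstract requires.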
The digital content industry is facing significant challenges. One of the most important is Intellectual Property protection. This challenge has been addressed technologically by using Digital Rights Management (DRM) systems, which at a first stage ensured appropriate management of digital content.
However, rights management systems, as they are today, are completely non-interoperable, creating immense problems for digital content end users. Perhaps one of the biggest problems is that the same digital content, when governed by different rights management systems, cannot be exchanged between different rights governance systems.
This paper presents some of the work performed under the framework of the VISNET-II Network of Excellence (NoE) to address some of the rights management systems' interoperability problems. The proposed approach consists in the use of service descriptions and service-oriented architectures as a means to create a common and interoperable environment between the different systems.
Trullols-Cruces, O.; Fiore, M.; Casetti, C.; Chiasserini, C-F.; Barcelo, J. Computer Communications, Vol. 33, num. 1, p. 432-442. DOI: 10.1016/j.comcom.2009.11.021. Publication date: 2010-06-01. Journal article.
The increasing spread of mobile nodes, along with the technical advances in multi-hop MANETs (Mobile Ad hoc NETworks), makes this kind of network an important type of access network for the next generation. The demand for multimedia services from these networks is expected to grow significantly in the coming years. Multimedia services, though, require the provision of Quality of Service (QoS). Nevertheless, the highly dynamic nature of MANETs, the energy constraints, the lack of centralized infrastructure and the variable link capacity make QoS provision over MANETs a challenging matter. These features make self-configuration and system adaptation questions of major importance when developing a QoS-aware framework. To tackle this issue, we have designed a-MMDSR (adaptive-Multipath Multimedia Dynamic Source Routing), a multipath routing protocol able to self-configure dynamically depending on the state of the network. The approach includes cross-layer techniques especially designed to improve the end-to-end performance of video-streaming services over IEEE 802.11e ad hoc networks.
Besides, a straightforward analytical model to estimate the path error probability is presented. This model is used by the routing scheme to estimate the lifetime of the paths, so that proper proactive decisions can be made before the paths break. The model's simplicity makes it appropriate for low-capacity wireless devices. Simulation results validate the proposal and show the improvement over standard DSR (Dynamic Source Routing) and over a previous static version.
Forne, J.; Rebollo-Monedero, D.; Solanas, A.; Martínez-Ballesté, A. Computer Communications, Vol. 33, num. 6, p. 762-774. DOI: 10.1016/j.comcom.2009.11.024. Publication date: 2010-04-15. Journal article.
Privacy and security are paramount in the proper deployment of location-based services (LBSs). We present a novel protocol based on user collaboration to privately retrieve location-based information from an LBS provider. Our approach does not assume that users or providers can be completely trusted with regard to privacy, and does not rely on a trusted third party. In addition, user queries, containing accurate locations, remain unchanged, and the collaborative protocol does not impose any special requirements on the query–response function of the LBS. Our protocol is analyzed in terms of privacy, network traffic, and processing overhead. We define the concept of guaranteed privacy breach probability, and we show that our proposal provides exponential scalability in that probability, at the expense of a linear relative network cost.
The ease with which nodes may join or leave a Mobile Ad-hoc Network (MANET) implies changing trust relationships among them and problems in building certification paths. Peer-to-peer Public Key Infrastructures (PKIs) are quite dynamic, and certification paths can be built even when part of the infrastructure is temporarily unreachable. However, path discovery is difficult because trust relationships are bidirectional. In contrast, in hierarchical PKIs there is only one path between two entities, and certification paths are easy to find. We propose a protocol that establishes a virtual hierarchy in a peer-to-peer PKI. This protocol is suitable for dynamic environments such as MANETs, since it is executed in a short time. In addition, our protocol does not require issuing new certificates among PKI entities, facilitates the certification path discovery process, and allows the maximum path length to be adapted to the characteristics of users with limited processing and storage capacity.
The possibility of making the Internet accessible via mobile devices has generated an important opportunity for electronic commerce. Nevertheless, some deficiencies deter a massive use of m-commerce applications. Security and ease of use are unavoidable conditions. The use of brokerage systems constitutes an interesting solution to speed up the delivery of information to users. Moreover, brokers can use mobile agents to efficiently and easily perform the search and retrieval of commercial information on the Internet. Although mobile agent technology is a very suitable choice for the m-commerce scenario, there are security issues that hinder its use. In particular, an important aspect that must be solved for the m-commerce scenario is the protection of mobile agents from manipulation attacks performed by malicious hosts. The first part of this paper describes a mechanism to achieve this protection. We describe how to use software watermarking techniques in the mobile agent to detect manipulation attacks, and how the broker can be used to punish the malicious hosts. Once an m-commerce site is selected by the user, an end-to-end secure transaction must be established. The transaction can use several protocols, from a simple secure TLS channel for sending a credit card number to a sophisticated payment protocol. In any case, Public Key Certificates (PKCs) are required for these protocols. It must be stressed that certificate management is a heavy process and that clients in the brokerage scenario are usually resource-limited. For this reason, the best option is for clients to delegate this task to the broker. Notice that the broker is a Trusted Third Party (TTP) and, in general, is not resource-limited. Therefore, the broker is appropriate for storing and managing PKCs. The second part of this paper addresses this issue, with particular emphasis on certificate status management, which is the most complex task of certificate management.
Masip, X.; Yannuzzi, M.; Domingo, J.; Fonte, A.; Curado, M.; Kuipers, F.; Van Mieghem, P. Computer Communications, Vol. 29, num. 5, p. 563-581. DOI: 10.1016/j.comcom.2005.06.008. Publication date: 2006-03. Journal article.
Quality of Service Routing is at present an active and remarkable research area, since most emerging network services require specialized Quality of Service (QoS) functionalities that cannot be provided by current QoS-unaware routing protocols. The provisioning of QoS-based network services is, in general terms, an extremely complex problem, and a significant part of this complexity lies in the routing layer. Indeed, the problem of QoS Routing with multiple additive constraints is known to be NP-hard. Thus, a successful and wide deployment of the most novel network services demands that we thoroughly understand the essence of QoS Routing dynamics, and also that the proposed solutions to this complex problem be indeed feasible and affordable. This article surveys the most important open issues in QoS Routing, and also briefly presents some of the most compelling proposals and ongoing research efforts, both inside and outside the E-Next Community, to address some of those issues.