Centrality and influence spread are two of the most studied concepts in social network analysis. In recent years, centrality measures have attracted the attention of many researchers, generating a large and varied body of new studies on social network analysis and its applications. However, as far as we know, traditional models of influence spread have not yet been exhaustively used to define centrality measures according to influence criteria. Most of the existing work on this topic is based on the independent cascade model. In this paper we explore the possibilities of the linear threshold model for defining centrality measures for weighted and labeled social networks. We propose a new centrality measure to rank the users of the network, the Linear Threshold Rank (LTR), and a centralization measure to determine to what extent the entire network has a centralized structure, the Linear Threshold Centralization (LTC). We appraise the viability of the approach through several case studies. We consider four different social networks to compare our new measures with two centrality measures based on relevance criteria and another centrality measure based on the independent cascade model. Our results show that our measures are useful for ranking both actors and networks in a distinguishable way.
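To make the ranking idea concrete, the sketch below (an illustration, not the paper's exact definition of LTR) simulates a standard linear threshold process on a weighted directed network and scores each actor by the spread obtained when the process is seeded with the actor together with its neighborhood; the seeding rule and the numeric thresholds are assumptions made for the example.

```python
# Illustrative sketch of a linear-threshold-based ranking: each actor is
# scored by the spread of a linear threshold process seeded with the
# actor and its neighbors. Not the paper's exact LTR definition.

def lt_spread(succs, weights, thresholds, seed):
    """Run the linear threshold process until it stabilizes.

    succs[u]       -- iterable of successors of u
    weights[(u,v)] -- influence weight of edge (u, v)
    thresholds[v]  -- activation threshold of v
    seed           -- initially active nodes
    """
    active = set(seed)
    changed = True
    while changed:
        changed = False
        for v in thresholds:
            if v in active:
                continue
            incoming = sum(weights.get((u, v), 0.0) for u in active)
            if incoming >= thresholds[v]:
                active.add(v)
                changed = True
    return active

def ltr(succs, weights, thresholds):
    """Rank actors by spread when seeded with the actor plus its neighbors."""
    scores = {}
    for u in succs:
        seed = {u} | set(succs[u])
        scores[u] = len(lt_spread(succs, weights, thresholds, seed))
    return sorted(scores, key=scores.get, reverse=True)

# Tiny example: a weighted, directed 4-node network.
succs = {1: [2, 3], 2: [3], 3: [4], 4: []}
weights = {(1, 2): 0.6, (1, 3): 0.3, (2, 3): 0.4, (3, 4): 0.8}
thresholds = {1: 0.5, 2: 0.5, 3: 0.6, 4: 0.7}
print(ltr(succs, weights, thresholds))
```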
In this study, we describe a new coordination mechanism for non-atomic congestion games that leads to a (selfish) social cost arbitrarily close to the non-selfish optimum. This mechanism incurs no additional cost, in contrast to tolls, which typically differ from the social cost as expressed in terms of delays.
The longest arc-preserving common subsequence problem is an NP-hard combinatorial optimization problem from the field of computational biology. This problem finds applications, in particular, in the comparison of arc-annotated ribonucleic acid (RNA) sequences. In this work we propose a simple hybrid evolutionary algorithm to tackle this problem. The most important feature of this algorithm is a crossover operator based on solution merging. In solution merging, two or more solutions to the problem are merged, and an exact technique is used to find the best solution within this union. It is experimentally shown that the proposed algorithm outperforms a heuristic from the literature.
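As a hedged, generic illustration of the solution-merging idea (not tied to the arc-preserving subsequence encoding), the sketch below merges the components of two parent solutions and then searches the merged component set exactly, here by brute force, for the best feasible subset; in practice an ILP solver would play that role.

```python
from itertools import combinations

# Generic sketch of crossover by solution merging: take the union of the
# components of two parents, then find the best feasible subset of that
# union with an exact method (brute force here; an ILP solver in practice).

def merge_crossover(parent_a, parent_b, is_feasible, value):
    pool = sorted(set(parent_a) | set(parent_b))
    best, best_val = None, float("-inf")
    for r in range(len(pool), 0, -1):
        for subset in combinations(pool, r):
            if is_feasible(subset) and value(subset) > best_val:
                best, best_val = subset, value(subset)
    return best

# Toy feasibility rule standing in for problem-specific constraints:
# a solution may not contain two consecutive integers.
ok = lambda s: all(b - a > 1 for a, b in zip(s, s[1:]))
print(merge_crossover([1, 4, 7], [2, 4, 9], ok, len))
```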
Lizárraga, E.; Blesa, M.; Blum, C. European Conference on Evolutionary Computation in Combinatorial Optimization, pp. 60-74. DOI: 10.1007/978-3-319-55453-2_5. Presentation date: 2017-04. Conference presentation.
Both Construct, Merge, Solve and Adapt (CMSA) and Large Neighborhood Search (LNS) are hybrid algorithms based on iteratively solving sub-instances of the original problem instance, if possible to optimality. This is done by reducing the search space of the tackled problem instance in algorithm-specific ways which differ from one technique to the other. In this paper we provide first experimental evidence for the intuition that, conditioned by the way in which the search space is reduced, LNS should generally work better than CMSA in the context of problems in which solutions are rather large, and the opposite is the case for problems in which solutions are rather small. The size of a solution is measured here by the number of components it is composed of, relative to the total number of solution components. Experiments are conducted in the context of the multi-dimensional knapsack problem.
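For concreteness, here is a minimal skeleton of the LNS side of the comparison: repeatedly destroy a random part of the incumbent solution and rebuild it within the reduced search space. The destroy rate, the greedy repair (a stand-in for an exact sub-instance solver), and the single-constraint knapsack (a stand-in for the multi-dimensional version used in the experiments) are all simplifying assumptions.

```python
import random

# Minimal LNS skeleton for a 0-1 knapsack-style problem: destroy part of
# the incumbent (free some items), then repair over the freed items
# (greedily here, standing in for an exact sub-instance solver).

def lns_knapsack(values, weights, capacity, iters=1000, destroy=0.3, seed=0):
    rng = random.Random(seed)
    n = len(values)
    incumbent = set()                      # start from the empty solution
    for _ in range(iters):
        fixed = {i for i in incumbent if rng.random() > destroy}
        used = sum(weights[i] for i in fixed)
        free = [i for i in range(n) if i not in fixed]
        # Repair: fill remaining capacity from the freed items.
        for i in sorted(free, key=lambda i: values[i] / weights[i], reverse=True):
            if used + weights[i] <= capacity:
                fixed.add(i)
                used += weights[i]
        if sum(values[i] for i in fixed) >= sum(values[i] for i in incumbent):
            incumbent = fixed              # accept improving (or equal) repairs
    return incumbent

values, weights = [10, 7, 5, 9, 3], [4, 3, 2, 5, 1]
print(lns_knapsack(values, weights, capacity=8))
```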
This paper studies the complexity of computing a representation of a simple game as the intersection (union) of weighted majority games, as well as the dimension or the codimension. We also present some examples with linear dimension and exponential codimension with respect to the number of players.
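To illustrate the notion of dimension (the minimum number of weighted majority games whose intersection equals the given game), the sketch below represents a game as such an intersection: a coalition wins if and only if it meets the quota of every constituent weighted game. The two-game instance is invented for the example.

```python
# A simple game of dimension at most 2, represented as the intersection
# of two weighted majority games: a coalition wins iff it meets every quota.

def weighted_wins(coalition, weights, quota):
    return sum(weights[p] for p in coalition) >= quota

def intersection_wins(coalition, games):
    return all(weighted_wins(coalition, w, q) for w, q in games)

# Two weighted games over players 0..3; winning requires passing both.
games = [
    ({0: 2, 1: 1, 2: 1, 3: 1}, 3),   # game 1: weights and quota
    ({0: 1, 1: 2, 2: 1, 3: 1}, 4),   # game 2
]
print(intersection_wins({0, 1, 2}, games))   # True: passes both quotas
print(intersection_wins({0, 2, 3}, games))   # False: fails the second quota
```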
We analyze the computational complexity of the power measure in models of collective decision: the generalized opinion leader-follower model and the oblivious and non-oblivious influence models. We show that computing the power measure is #P-hard in all these models, and we provide two subfamilies in which the power measure can be computed in polynomial time.
This paper analyzes quantitatively the different measures taken in the continuous assessment of the CS1 course at the Facultat d'Informàtica de Barcelona of the Universitat Politècnica de Catalunya since 2010, when the new degree in Informatics Engineering began. The effect of those measures on the marks obtained by the students is analyzed with respect to the increase in course load for the faculty members. These two values are compared over time through a classical marginal cost-benefit approach. Using tools commonly accepted in the social sciences, this study is complemented with a preliminary analysis of inequality. Both approaches aim at introducing new techniques for an objective analysis of the impact of teaching measures.
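As a hedged illustration of the two techniques mentioned (the study's actual indicators and data are not reproduced here), the sketch below computes year-over-year marginal benefit-to-cost ratios and a Gini-style inequality index over a vector of marks; all numbers are invented sample data.

```python
# Illustrative only: marginal cost-benefit ratios across course editions
# and a Gini index over marks. The sample data below are invented; the
# study's actual indicators may differ.

def marginal_ratios(benefit, cost):
    """Ratio of marginal benefit to marginal cost between consecutive years."""
    return [(b2 - b1) / (c2 - c1)
            for (b1, b2), (c1, c2) in zip(zip(benefit, benefit[1:]),
                                          zip(cost, cost[1:]))]

def gini(xs):
    """Gini coefficient of a non-negative sample (0 = perfect equality)."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

pass_rate = [0.55, 0.60, 0.66, 0.68]    # hypothetical benefit per edition
load_hours = [100, 110, 125, 135]       # hypothetical teaching load
print(marginal_ratios(pass_rate, load_hours))
print(gini([2.0, 4.5, 5.0, 6.5, 9.0]))  # marks on a 0-10 scale
```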
We consider two collective decision-making models associated with influence games: the oblivious and non-oblivious influence decision models. The difference is that in oblivious influence models the initial decision of the actors that are neither leaders nor independent is never taken into account, while in non-oblivious models it is taken into account when the leaders cannot exert enough influence to make the actor deviate from it. Following earlier work, we consider the power measure for an actor in a decision-making model. Power is ascribed to an actor when, by changing its inclination, the collective decision can be changed. We show that computing the power measure is #P-hard in both oblivious and non-oblivious influence decision models. We present two subfamilies of (oblivious and non-oblivious) influence decision models in which the power measure can be computed in polynomial time.
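The hardness claim refers to exact computation, which naively requires enumerating all inclination profiles. The sketch below makes the definition concrete for a generic decision rule: an actor's power is the fraction of profiles in which flipping its inclination flips the collective decision. The simple-majority rule passed in is a placeholder for the (oblivious or non-oblivious) model's actual rule.

```python
from itertools import product

# Brute-force power measure: the fraction of inclination profiles in
# which flipping actor i's inclination changes the collective decision.
# The exponential enumeration is exactly what the #P-hardness results
# say cannot be avoided in general.

def power(n, i, decide):
    swings = 0
    for profile in product((0, 1), repeat=n):
        flipped = list(profile)
        flipped[i] ^= 1
        if decide(profile) != decide(tuple(flipped)):
            swings += 1
    return swings / 2 ** n

# Placeholder decision rule: simple majority over all actors.
majority = lambda p: sum(p) * 2 > len(p)
print([power(5, i, majority) for i in range(5)])
```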
An opinion leader-follower model (OLF) is a two-action collective decision-making model for societies, in which three kinds of actors are considered: "opinion leaders", "followers", and "independent actors". In OLF, the initial decision of the followers can be modified by the influence of the leaders. Once the final decision is set, a collective decision is taken by applying the simple majority rule. We consider a generalization of OLF, the gOLF models, which allow collective decisions to be taken by rules other than the simple majority rule. Inspired by this model, we define two new families of collective decision-making models associated with cooperative influence games: the "oblivious" and "non-oblivious" influence models. We show that gOLF models are non-oblivious influence models played on a two-layered bipartite influence graph.
Together with OLF models, the satisfaction measure was introduced and studied. We analyze the computational complexity of the satisfaction measure for gOLF models and the other collective decision-making models introduced in the paper. We show that computing the satisfaction measure is #P-hard in all the considered models except for the basic OLF model, for which the complexity remains open. On the other hand, we provide two subfamilies of decision models in which the satisfaction measure can be computed in polynomial time.
Exploiting the relationship with influence games, we can relate the satisfaction measure to the Rae index of an associated simple game. The Rae index is closely related to the Banzhaf value. Thus, our results also extend the families of simple games for which computing the Rae index and the Banzhaf value is computationally hard.
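The closeness alluded to here is, in the standard normalization, a classical identity due to Dubey and Shapley (1979), which makes the two quantities interreducible, so a hardness result for one transfers to the other:

```latex
% Rae index of player i in terms of the (non-normalized) Banzhaf value:
% beta_i is the probability that i is a swing in a random coalition of
% the other players.
\[
  \mathrm{Rae}_i \;=\; \frac{1}{2} + \frac{1}{2}\,\beta_i,
  \qquad
  \beta_i \;=\; \frac{\bigl|\{\, S \subseteq N\setminus\{i\} :
      S\cup\{i\} \text{ wins and } S \text{ loses} \,\}\bigr|}{2^{\,n-1}}.
\]
```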
In this paper we present the application of a recently proposed general algorithm for combinatorial optimization to the repetition-free longest common subsequence problem. The applied algorithm, called Construct, Merge, Solve & Adapt (CMSA), generates sub-instances by merging the solution components found in randomly constructed solutions. These sub-instances are subsequently solved by means of an exact solver. Moreover, the considered sub-instances change dynamically: new solution components are added at each iteration, while existing solution components are removed on the basis of indicators of their usefulness. The results of applying this algorithm to the repetition-free longest common subsequence problem show that it generally outperforms competing approaches from the literature. Moreover, they show that the algorithm is competitive with CPLEX for small and medium-sized problem instances, whereas it outperforms CPLEX for larger problem instances.
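A hedged, problem-independent skeleton of the CMSA loop just described is given below; the randomized construction, the exact sub-instance solver, and the solution value are passed in as stubs, and the toy usage at the end is invented for illustration.

```python
import random

# Skeleton of Construct, Merge, Solve & Adapt. Problem-specific parts
# (randomized construction, exact sub-instance solver, solution value)
# are supplied as functions; the aging policy is the generic one.

def cmsa(construct, solve_exact, value, iters=100, n_constructions=5,
         max_age=3, seed=0):
    rng = random.Random(seed)
    age = {}                 # sub-instance: solution component -> age
    best = None
    for _ in range(iters):
        # Construct & merge: add components of randomly built solutions.
        for _ in range(n_constructions):
            for comp in construct(rng):
                age.setdefault(comp, 0)
        # Solve: exact method restricted to the merged sub-instance.
        sol = solve_exact(set(age))
        if best is None or value(sol) > value(best):
            best = sol
        # Adapt: reset ages of used components, retire the old ones.
        for comp in age:
            age[comp] = 0 if comp in sol else age[comp] + 1
        age = {c: a for c, a in age.items() if a <= max_age}
    return best

# Toy usage: pick three numbers from 0..9 maximizing their sum.
construct = lambda rng: rng.sample(range(10), 3)
solve_exact = lambda comps: sorted(comps)[-3:]
print(cmsa(construct, solve_exact, value=sum))
```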
We introduce the quad-kd tree: a general-purpose hierarchical data structure for the storage of multidimensional points. Quad-kd trees include point quad trees and kd trees as particular cases, and therefore they could constitute a general framework for the study of the fundamental properties of such trees. Moreover, quad-kd trees can be tuned by means of insertion heuristics and bucketing techniques to obtain trade-offs between their costs in time and space. We propose three such heuristics and show their competitive performance analytically and experimentally. Our analytical results back the experimental outcomes and suggest that the quad-kd tree is a flexible data structure that can be tailored to the resource requirements of a given application.
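A minimal sketch of the unifying idea follows (the paper's actual structure and heuristics are richer): each node discriminates on a chosen subset of the dimensions, so one dimension per node yields a kd tree and all dimensions yield a point quad tree. The depth-based, single-dimension choice below is just one possible insertion heuristic.

```python
# Sketch of a quad-kd tree: each node discriminates on a subset of the
# k dimensions. One dimension per node gives a kd tree; all k give a
# point quad tree. The dims are chosen kd-style here, rotating by depth.

k = 2  # dimensionality of the points

class QuadKdNode:
    def __init__(self, point, dims):
        self.point = point
        self.dims = dims          # discriminating dimensions at this node
        self.children = {}        # tuple of comparison bits -> child node

    def branch(self, point):
        return tuple(point[d] < self.point[d] for d in self.dims)

def insert(node, point, depth=0):
    if node is None:
        # Heuristic: one rotating dimension (kd-tree-like). Returning
        # tuple(range(k)) instead would give point-quad-tree-like nodes.
        return QuadKdNode(point, (depth % k,))
    key = node.branch(point)
    node.children[key] = insert(node.children.get(key), point, depth + 1)
    return node

root = None
for p in [(5, 3), (2, 8), (7, 1), (4, 6)]:
    root = insert(root, p)
print(root.point, list(root.children))
```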
We analyze the computational complexity of the problem of deciding whether, for a given simple game, the participants of j given losing coalitions can be rearranged into j winning coalitions. We also look at the problem of turning winning coalitions into losing coalitions. We analyze the problem when the simple game is represented by a list of winning, losing, minimal winning, or maximal losing coalitions.
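To make the decision problem concrete, the brute-force sketch below tests whether the players appearing in j given (here assumed pairwise disjoint) losing coalitions can be redistributed into j winning coalitions; the win test and the instance are invented, and the exponential enumeration reflects the hardness of the problem rather than a practical algorithm.

```python
from itertools import product

# Brute force: can the players of j disjoint losing coalitions be
# redistributed into j winning coalitions? `wins` abstracts the game.

def rearrangeable(coalitions, wins):
    players = sorted(set().union(*coalitions))
    j = len(coalitions)
    for assignment in product(range(j), repeat=len(players)):
        groups = [set() for _ in range(j)]
        for p, g in zip(players, assignment):
            groups[g].add(p)
        if all(wins(group) for group in groups):
            return True
    return False

# Game given by its minimal winning coalitions {1,2} and {3,4}.
wins = lambda s: {1, 2} <= s or {3, 4} <= s
losing = [{1, 3}, {2, 4}]             # both lose as given ...
print(rearrangeable(losing, wins))    # ... but swapping 2 and 3 wins both
```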
Simple games are cooperative games in which the benefit that a coalition may have is always binary, i.e., a coalition may either win or lose. This paper surveys different forms of representation of simple games, as well as representations of some of their subfamilies, such as regular games and weighted games. We analyze the forms of representation that have been proposed in the literature based on different data structures for sets of sets. We provide bounds on the computational resources needed to transform a game from one form of representation to another. This includes the study of the problem of enumerating the fundamental families of coalitions of a simple game. In particular, we prove that several changes of representation that require exponential time can be performed with polynomial delay, and we highlight some open problems.
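As a small illustration of why the form of representation matters (the paper's specific data structures are not reproduced here), the same game can be stored as the explicit list of all winning coalitions or, far more compactly, as its minimal winning coalitions, at the price of a superset test per query; going from the compact form back to the explicit list is exactly the kind of enumeration whose cost the paper quantifies.

```python
from itertools import chain, combinations

# Two representations of the same simple game: the explicit list of
# winning coalitions versus the list of minimal winning coalitions.

def wins_minimal(coalition, min_winning):
    return any(m <= coalition for m in min_winning)

def all_winning(players, min_winning):
    subsets = chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))
    return [set(s) for s in subsets if wins_minimal(set(s), min_winning)]

players = {1, 2, 3, 4}
min_winning = [{1, 2}, {1, 3, 4}]
print(len(all_winning(players, min_winning)))  # explicit list is larger
```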
We introduce celebrity games, a new model of network creation games. The specific features of this model are that players have different celebrity weights and that a critical distance is taken into consideration. The aim of any player is to be close (at distance less than the critical one) to the others, mainly to those with high celebrity weights. The cost of each player depends on the cost of establishing direct links to other players and on the sum of the weights of those players at a distance greater than the critical distance. We show that celebrity games always have pure Nash equilibria, and we characterize the family of subgames having connected Nash equilibria, the so-called star celebrity games. Exact bounds on the price of anarchy (PoA) of non-star celebrity games and a bound of O(n/β + β) for star celebrity games are provided.
The upper bound on the PoA can be tightened when restricted to particular classes of Nash equilibria graphs. We show that the upper bound is O(n/β) in the case of 2-edge-connected graphs and 2 in the case of trees.
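Concretely, a player's cost in this model can be sketched as below: a price per bought link (the per-link price alpha and the instance are illustrative assumptions) plus the total celebrity weight of every player farther away than the critical distance beta, with distances taken over all links, bought by either endpoint.

```python
from collections import deque

# Sketch of a player's cost in a celebrity game: alpha per link the
# player buys, plus the weights of all players at distance > beta.
# alpha (the link price) and the instance below are illustrative.

def distances(adj, source):
    """BFS distances from source over the undirected link graph."""
    dist, queue = {source: 0}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def cost(player, bought, adj, weight, alpha, beta):
    dist = distances(adj, player)
    far = sum(weight[v] for v in adj
              if v != player and dist.get(v, float("inf")) > beta)
    return alpha * len(bought[player]) + far

# Four players on a path 1-2-3-4; each link is bought by its left endpoint.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
bought = {1: [2], 2: [3], 3: [4], 4: []}
weight = {1: 5, 2: 1, 3: 1, 4: 8}
print(cost(1, bought, adj, weight, alpha=2, beta=2))  # one link, misses player 4
```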