In this paper we provide new algebraic tools to study the relationship between different Matrix Diffie-Hellman (MDDH) Problems, which were recently introduced as a natural generalization of the so-called Linear Problem. Namely, we provide an algebraic criterion to decide whether there exists a generic black-box reduction, and in many cases, when the answer is positive, we also build an explicit reduction with the following properties: it makes only a single oracle call, it is tight, and it uses only operations in the base group. It is well known that two MDDH problems described by matrices with a different number of rows are separated by an oracle computing a certain multilinear map. Thus, we put the focus on MDDH problems of the same size. We then show that MDDH problems described with a different number of parameters are also separated (meaning that a successful reduction cannot decrease the amount of randomness used in the problem instance description). When comparing MDDH problems of the same size and number of parameters, we show that they are either equivalent or incomparable. This suggests that a complete classification into equivalence classes could be done in the future. In this paper we give some positive and negative partial results about equivalence, in particular solving the open problem of whether the Linear and the Cascade MDDH problems are reducible to each other. The results given in the paper are limited by some technical restrictions on the shape of the matrices and on the degree of the polynomials defining them. However, these restrictions are also present in most of the work dealing with MDDH Problems. Therefore, our results apply to all known instances of practical interest.
The final publication is available at link.springer.com
Acyclic Join Dependencies (AJD) play a crucial role in database design and normalization. In this paper, we use Formal Concept Analysis (FCA) to characterize a set of AJDs that hold in a given dataset. The present work simplifies and generalizes the characterization of Multivalued Dependencies with FCA.
We propose the use of the angel-daemon framework to assess Coleman's power of a collectivity to act under uncertainty in weighted voting games.
In this framework, uncertainty profiles describe the potential changes in the weights of a weighted game and fix the spread of the weights' change. For each uncertainty profile, a strategic angel-daemon game can be considered. This game has two selfish players: the angel selects its action so as to maximize the effect on the measure under consideration, while the daemon acts oppositely. Together, the angel and the daemon give a balance between the best and the worst case. The angel-daemon games associated with Coleman's power are zero-sum games, and therefore the expected utilities of all the Nash equilibria are the same. In this way we can assess Coleman's power under uncertainty. Besides introducing the framework for this particular setting, we analyse basic properties and make some computational complexity considerations. We provide several examples based on the evolution of the voting rules of the EU Council of Ministers.
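Because the angel-daemon games are zero-sum, their equilibrium value is unique and can be computed by linear programming. The following is a minimal sketch of that computation, with a hypothetical 2x2 payoff matrix and assuming scipy is available:

    import numpy as np
    from scipy.optimize import linprog

    def zero_sum_value(A):
        """Value of a zero-sum game with payoff matrix A (row player maximises).

        Shift A so all entries are positive; then the LP in variables
        y = x / v (x the row player's mixed strategy, v the value) is:
        minimise sum(y) subject to B^T y >= 1, y >= 0.
        """
        A = np.asarray(A, dtype=float)
        shift = 1.0 - A.min()
        B = A + shift                       # value(B) = value(A) + shift > 0
        m, n = B.shape
        res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n),
                      bounds=[(0, None)] * m)
        return 1.0 / res.x.sum() - shift

    # Toy angel-daemon payoff matrix (hypothetical numbers):
    print(zero_sum_value([[3.0, 1.0], [0.0, 2.0]]))  # unique equilibrium value 1.5

Since every Nash equilibrium of a zero-sum game yields this same value, a single LP suffices to assess the measure under uncertainty.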
We put forward a new family of computational assumptions, the Kernel Matrix Diffie-Hellman Assumption. Given some matrix A sampled from some distribution D, the kernel assumption says that it is hard to find "in the exponent" a nonzero vector in the kernel of A^T. This family is a natural computational analogue of the Matrix Decisional Diffie-Hellman Assumption (MDDH), proposed by Escala et al. As such, it allows one to extend the advantages of their algebraic framework to computational assumptions. The k-Decisional Linear Assumption is an example of a family of decisional assumptions of strictly increasing hardness as k grows. We show that for any such family of MDDH assumptions, the corresponding Kernel assumptions are also strictly increasingly weaker. This requires ruling out the existence of some black-box reductions between flexible problems (i.e., computational problems with a non-unique solution).
The final publication is available at link.springer.com
Automatic Speech Recognition has reached almost human performance in some controlled scenarios. However, recognition of impaired speech is a difficult task for two main reasons: data is (i) scarce and (ii) heterogeneous. In this work we train different architectures on a database of dysarthric speech. A comparison between architectures shows that, even with a small database, hybrid DNN-HMM models outperform classical GMM-HMM according to word error rate measures. A DNN is able to improve the recognition word error rate by 13% for subjects with dysarthria with respect to the best classical architecture. This improvement is higher than the one given by other deep neural networks such as CNNs, TDNNs and LSTMs. All the experiments have been done with the Kaldi toolkit for speech recognition, for which we have adapted several recipes to deal with dysarthric speech and work on the TORGO database. These recipes are publicly available.
The final publication is available at https://link.springer.com/chapter/10.1007%2F978-3-319-49169-1_10
Valls, F.; Redondo, E.; Fonseca, D.; Garcia-Almirall, P.; Subiros, J. Lecture Notes in Computer Science, Vol. 9733, p. 436-447. DOI: 10.1007/978-3-319-39513-5_41. Publication date: 2016-06-19. Journal article.
Videogame technology is quickly maturing and approaching, in real time, levels of realism once reserved for 3D rendering applications used in architecture, with the capacity to react to user input. This paper describes an educational experience using videogame technology in architecture education, exploring its applicability in the field of architecture compared with more traditional media. A prototype application modeling a proposed urban space was developed using Unreal Engine, and a group of architecture students were asked to use the software to navigate the virtual environment. The development process of the application is discussed, as well as the design of the survey to assess the participants' experience in four key areas: (a) player profile, (b) experience using the beta version, (c) use of videogame technology as an educational tool, and (d) applicability of game engines in architecture.
In this paper we present the application of a recently proposed, general, algorithm for combinatorial optimization to the repetition-free longest common subsequence problem. The applied algorithm, which is labelled Construct, Merge, Solve & Adapt, generates sub-instances by merging the solution components found in randomly constructed solutions. These sub-instances are subsequently solved by means of an exact solver. Moreover, the considered sub-instances change dynamically: new solution components are added at each iteration, and existing solution components are removed on the basis of indicators of their usefulness. The results of applying this algorithm to the repetition-free longest common subsequence problem show that the algorithm generally outperforms competing approaches from the literature. Moreover, they show that the algorithm is competitive with CPLEX for small- and medium-sized problem instances, whereas it outperforms CPLEX for larger problem instances.
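A minimal, problem-agnostic sketch of the Construct, Merge, Solve & Adapt loop described above; the helpers construct_random_solution, solve_subinstance and evaluate are hypothetical placeholders, not the paper's implementation:

    def cmsa(evaluate, solve_subinstance, construct_random_solution,
             n_iter=100, n_constructions=5, age_limit=3):
        """Generic Construct, Merge, Solve & Adapt loop (sketch).

        Assumed helpers: construct_random_solution() returns a set of
        solution components; solve_subinstance(components) returns the best
        solution (a set of components) using only those components, e.g. via
        an exact ILP solver; evaluate(solution) returns a score to maximize.
        """
        age = {}     # solution component -> iterations since it was last used
        best = None
        for _ in range(n_iter):
            # Construct & merge: add components of random solutions to the sub-instance
            for _ in range(n_constructions):
                for comp in construct_random_solution():
                    age.setdefault(comp, 0)
            # Solve: run the exact solver on the restricted sub-instance
            solution = solve_subinstance(set(age))
            if best is None or evaluate(solution) > evaluate(best):
                best = solution
            # Adapt: reset the age of used components, retire stale ones
            for comp in list(age):
                age[comp] = 0 if comp in solution else age[comp] + 1
                if age[comp] > age_limit:
                    del age[comp]
        return best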
In the early age of the internet, users enjoyed a large degree of anonymity. At the time, web pages were just hypertext documents and almost no personalisation of the user experience was offered. The Web today has evolved into a worldwide distributed system following specific architectural paradigms. On the web now, an enormous quantity of user-generated data is shared and consumed by a network of applications and services, reasoning upon users' expressed preferences and their social and physical connections. Advertising networks follow users' browsing habits while they surf the web, continuously collecting their traces and surfing patterns. We analyse how user tracking happens on the web by measuring users' online footprint and estimating how quickly advertising networks are able to profile users by their browsing habits.
Process mining techniques rely on event logs: the extraction of a process model (discovery) takes an event log as the input, the adequacy of a process model (conformance) is checked against an event log, and the enhancement of a process model is performed by using the data available in the log. Several notations and formalisms for event log representation have been proposed in recent years to enable efficient algorithms for the aforementioned process mining problems. In this paper we show how Conditional Partial Order Graphs (CPOGs), a recently introduced formalism for the compact representation of families of partial orders, can be used in the process mining field, in particular for addressing the problem of compact and easy-to-comprehend representation of event logs with data. We present algorithms for extracting both the control flow and the relevant data parameters from a given event log, and show how CPOGs can be used for efficient and effective visualisation of the obtained results. We demonstrate that the resulting representation can be used to reveal the hidden interplay between the control and data flows of a process, thereby opening the way for new process mining techniques capable of exploiting this interplay. Finally, we present open-source software support and discuss current limitations of the proposed approach.
Currently, there is a trend to promote personalized health care in order to prevent diseases and lead healthier lives. Using current devices such as smartphones and smartwatches, an individual can easily record detailed data from her daily life. Yet, this data has been mainly used for self-tracking in order to enable personalized health care. In this paper, we provide ideas on how process mining can be used as a fine-grained evolution of traditional self-tracking. We have applied the ideas of the paper to recorded data from a set of individuals, and present conclusions and challenges.
The complexity of urban processes requires professionals trained in understanding and managing the design of urban spaces and the implementation of urban policies. This paper discusses an educational methodology that complements the standard Project-Based Learning approach with an experience using serious games with gamification elements to stimulate critical thinking in urban planning and urban design students, and to promote the design of spaces that are more adaptable and usable for a wide range of users and situations of public life. The proposed methodology uses five "mini games" that place students in different situations: (1) finding an unknown landmark, (2) reaching a goal while avoiding obstacles, (3) navigating with artificial lighting, (4) simulating the point of view of a person with a disability, and (5) simulating group behaviour. As a secondary objective, the experience will track the participants' behaviour to extract data to be incorporated into an agent-based model rule set.
The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-20609-7_59
One way to build secure electronic voting systems is to use Mix-Nets, which break any correlation between voters and their votes. One of the characteristics of Mix-Net-based eVoting is that ballots are usually decrypted individually and, as a consequence, invalid votes can be detected during the tallying of the election. In particular, this means that the ballot does not need to contain a proof of the vote being valid. However, allowing invalid votes to be detected only during the tallying of the election can have bad consequences for the reputation of the election. First, casting a ballot for an invalid vote might be considered an attack against the eVoting system by non-technical people, who might expect that the system does not accept such ballots. Besides, it would be impossible to track the attacker due to the anonymity provided by the Mix-Net. Second, if a ballot for an invalid vote is produced by a software bug, it might only be detected after the election period has finished. In particular, voters would not be able to cast a valid vote again. In this work we formalize the concept of a system that detects invalid votes during the election period. In addition, we give a general construction of an eVoting system satisfying such a property and an efficient concrete instantiation based on well-studied assumptions.
Martin Wirsing is one of the earliest contributors to the area of Algebraic Specification (e.g., ), which he explored in a variety of domains over many years. Throughout his career, he has also inspired countless researchers in related areas. This paper is inspired by one of the domains that he explored thirty years or so after his first contributions, when leading the FET Integrated Project SENSORIA: the use of constraint systems to deal with non-functional requirements and preferences [13,8]. Following in his footsteps, we provide an extension of the traditional notion of algebraic data type specification to encompass soft constraints as formalised in . Finally, we relate this extension with institutions and recent work on graded consequence in institutions .
SDN and NFV are two novel paradigms that open the way for a more efficient operation and management of networks, allowing the virtualization and centralization of some functions that are distributed in current network architectures. Optical access networks present several characteristics (tree-like topology, distributed shared access to the upstream channel, partial centralization of the complex operations in the OLT device) that make them appealing for the virtualization of some of their functionalities. We propose a novel EPON architecture where OLTs and ONUs are partially virtualized and migrated to the network core following SDN and NFV paradigms, thus decreasing CAPEX and OPEX, and improving the flexibility and efficiency of network operations.
The Firefighter Problem was proposed in 1995 as a deterministic discrete-time model for the spread (and containment) of a fire. Its applications range from real fires to the spreading of diseases and the containment of floods. Furthermore, it can be used to model the spread of computer viruses or viral marketing in communication networks.
In this work, we study the problem from a game-theoretical perspective. Such a context seems very appropriate when applied to large networks, where entities may act and make decisions based on their own interests, without global coordination.
We model the Firefighter Problem as a strategic game where there is one player for each time step who decides where to place the firefighters. We show that the Price of Anarchy is linear in the general case, but at most 2 for trees. We prove that the quality of the equilibria improves when coalitional cooperation among players is allowed. In general, the Price of Anarchy is in Θ(n/k), where k is the coalition size. Furthermore, we show that there are topologies which have a constant Price of Anarchy even when constant-sized coalitions are considered.
An effective global impostor selection method is proposed in this paper for discriminative Deep Belief Networks (DBNs) in the context of multi-session i-vector based speaker recognition. The proposed method is an iterative process in which, in each iteration, the whole impostor i-vector dataset is divided randomly into two subsets. The impostors in one subset that are closest to each impostor in the other subset are selected, and impostor frequencies are computed. At the end, the impostors with the highest frequencies are the globally selected ones. They are then clustered, and the centroids are taken as the final impostors for the DBN speaker models. The advantage of the proposed method is that, contrary to other similar approaches, only the background i-vector dataset is employed. Experiments are performed on the NIST 2014 i-vector challenge dataset, and it is shown that the proposed selection method improves the performance of the DBN-based system in terms of minDCF by 7%, and the whole system outperforms the challenge baseline by more than 22%.
This paper describes the design, development and execution of a MOOC entitled "Approaches to Machine Translation: rule-based, statistical and hybrid". The course is launched on the Canvas platform, used by recognized European universities. The course contains video lectures, quizzes and laboratory assignments. Evaluation is done using a virtual learning environment for computer programming and peer-to-peer strategies. This MOOC introduces people from various areas to Machine Translation theory and practice, and it also helps internationalize different tools developed at the Universitat Politècnica de Catalunya.
Pérez-Pellitero, E.; Salvador, J.; Torres-Xirau, I.; Ruiz-Hidalgo, J.; Rosenhahn, B. Lecture Notes in Computer Science, p. 346-359. DOI: 10.1007/978-3-319-16811-1_23. Publication date: 2014-11-01. Journal article.
Regression-based Super-Resolution (SR) addresses the upscaling problem by learning a mapping function (i.e. regressor) from the low-resolution to the high-resolution manifold. Under the locally linear assumption, this complex non-linear mapping can be properly modeled by a set of linear regressors distributed across the manifold. In such methods, most of the testing time is spent searching for the right regressor within this trained set. In this paper we propose a novel inverse-search approach for regression-based SR. Instead of performing a search from the image to the dictionary of regressors, the search is done inversely from the regressors’ dictionary to the image patches. We approximate this framework by applying spherical hashing to both image and regressors, which reduces the inverse search into computing a trained function. Additionally, we propose an improved training scheme for SR linear regressors which improves perceived and objective quality. By merging both contributions we improve speed and quality compared to the state-of-the-art.
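A toy sketch of the inverse-search idea, with random-hyperplane hashing standing in for the trained spherical hashing used in the paper (all sizes and names are hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)

    def hash_codes(X, hyperplanes):
        """Binary codes via random-hyperplane hashing (a stand-in for the
        trained spherical hashing functions of the paper)."""
        return (X @ hyperplanes.T > 0).astype(np.uint8)

    def build_buckets(anchors, hyperplanes):
        """Index each regressor's anchor point by its hash code, so a patch
        finds its candidate regressors with one hash evaluation."""
        buckets = {}
        for idx, code in enumerate(hash_codes(anchors, hyperplanes)):
            buckets.setdefault(code.tobytes(), []).append(idx)
        return buckets

    # Hypothetical setup: 1024 linear regressors anchored in patch space.
    d, n_regressors, n_bits = 36, 1024, 12
    anchors = rng.standard_normal((n_regressors, d))
    hyperplanes = rng.standard_normal((n_bits, d))
    buckets = build_buckets(anchors, hyperplanes)

    patch = rng.standard_normal(d)
    code = hash_codes(patch[None, :], hyperplanes)[0].tobytes()
    candidates = buckets.get(code, [])  # regressors competing for this patch

The point of the inverse search is visible here: the per-patch cost is a single hash computation and a dictionary lookup, rather than a nearest-neighbour scan over the whole dictionary of regressors.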
Álvarez Cabrales, A.; Zayas F., E.E.; Pérez, R.; Simeón, R.; Riba Romeva, C.; Cardona, S. Lecture Notes in Computer Science, no. 8683, p. 214-221. DOI: 10.1007/978-3-319-10831-5. Publication date: 2014-09-14. Journal article.
Commercial Computer-Aided Design systems have mainly focused on supporting the process of capturing and representing geometric shapes and incorporating technological information. In contrast, these systems offer few utilities to facilitate decision making in the early stages of the design process, such as the capture, modeling and conceptual design synthesis of solutions. Typical tasks of the conceptual design process in mechanical design are carried out in stand-alone applications or rely on the heuristic knowledge of the designer. Such approaches are not interoperable with commercial computer-aided design systems, leading to discontinuities in the design information along the design process. This study addresses this subject and proposes a method to improve the integration of product conceptual design synthesis into a Computer-Aided Design system. To validate the feasibility of the approach, a prototype application based on a Computer-Aided Design system was developed and a case study was carried out.
Minimum distance controlled tabular adjustment (CTA) is an emerging perturbative method of statistical disclosure control for tabular data. The goal of CTA is to find the closest safe table to some original tabular data with sensitive information. Closeness is usually measured by the l1 or l2 distance. Distance l1 provides solutions with a smaller l0 norm than l2 (i.e., with a smaller number of changes with respect to the original table). However, the optimization problem formulated with l2 requires half the number of variables of that for l1, and it is more efficiently solved. In this work a pseudo-Huber function (a continuous nonlinear approximation of the Huber function) is considered to measure the distance between the original and protected tables. This pseudo-Huber function approximates l1 but can be formulated with the same number of variables as l2. It results in a nonlinear convex optimization problem which, theoretically, can be solved in polynomial time. Some preliminary results using the Huber-CTA model are reported.
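For reference, the pseudo-Huber function alluded to above is, in its standard form with threshold parameter delta > 0 (the standard formulation, not reproduced from the paper):

    \phi_\delta(z) = \delta^2 \left( \sqrt{1 + (z/\delta)^2} - 1 \right)

It behaves like z^2/2 for |z| << delta and like delta|z| for |z| >> delta, so it approximates l1 for large deviations while remaining smooth and convex everywhere; and since it requires no splitting of each deviation into positive and negative parts, it can be modelled with the same number of variables as l2.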
Minimum distance controlled tabular adjustment (CTA) is a perturbative technique of statistical disclosure control for tabular data. Given a table to be protected, CTA looks for the closest safe table by solving an optimization problem using some particular distance in the objective function. CTA has been shown to exhibit a low disclosure risk. The purpose of this work is to show that CTA also provides low information loss, focusing on two-way tables. Computational results on a set of midsize tables validate this statement.
Network virtualisation is a promising technique for a better future Internet by allowing network resource sharing. However, resource sharing requires that virtual nodes and links be embedded onto substrate nodes and links (virtual network embedding), and thereafter that the allocated resources be dynamically managed throughout the lifetime of the virtual network (dynamic resource allocation). Since the constrained virtual network embedding problem is NP-hard, many existing approaches are not only static, but also make simplifying assumptions, most of which would not apply in practical environments. This PhD research proposes improvements to both virtual network embedding and dynamic resource allocation. The objective is to achieve an efficient utilisation of physical network resources. To this end, we propose a path generation-based approach for a one-shot, unsplittable-flow virtual network embedding, and a reinforcement learning-based dynamic allocation of substrate network resources.
Online Reputation Systems allow markets to exclude providers that are untrustworthy or unreliable. System failures and outages may decrease the reputation of honest providers, which would lose potential clients. For that reason, providers require trust-aware management policies aimed at retaining their reputation when unexpected failures occur. This paper proposes policies to operate cloud resources so as to minimise the impact of system failures on reputation. On the one side, we discriminate among clients under conflicting situations to favour those that would impact more positively the reputation of the provider. On the other side, we analyse the impact of management actions on the reputation and the revenue of the provider in order to select those with less impact when an action is required. The validity of these policies is demonstrated through experiments for various use cases.
Cloud providers may not always fulfil the Service Level Agreements with their clients because of outages in the data centre or inaccurate resource provisioning. Minimizing the probability of failure of the tasks allocated within a cloud infrastructure can be infeasible, because overprovisioning resources increases the cost and is economically inefficient. This paper intends to increase the fulfilment rate of Service Level Agreements at the infrastructure provider side while maximizing economic efficiency, by considering risk in the decision process. We introduce a risk model based on graph analysis for risk propagation, and we model it economically to offer three levels of risk to the clients: moderate risk, low risk, and very low risk. The client may decide the risk of the service and pay proportionally: the lower the risk, the higher the price.
Huang-Ming, C.; Diaz, M.; Catala, A.; Chen, W.; Rauterberg, M. Lecture Notes in Computer Science, Vol. 8520 Design, User Experie, p. 220-231. DOI: 10.1007/978-3-319-07638-6_22. Publication date: 2014-04-03. Journal article.
Emotion is an essential part of user experience. While researchers are striving for new research tools to evaluate emotional experiences in design, designers have been using experience-based tools for studying emotions in practice, such as mood boards. Mood boards were developed for communicating emotional qualities between designers and clients, but have not yet been considered as an evaluation tool for investigating emotional experience. In this study we examined whether design students and non-design students have similar criteria in evaluating mood boards. The results showed that the inter-rater reliability among all participants was considerably high, which suggests that mood boards have the potential to be used as an evaluation tool for research on emotion.
This work introduces an interactive robotic system for assistance, conceived to tackle some of the challenges that domestic environments impose. The system is organized into a network of heterogeneous components that share both physical and logical functions to perform complex tasks. It consists of several robots for object manipulation, an advanced vision system that supplies information about objects in the scene and human activity, and a spatial augmented reality interface that constitutes a comfortable means of interacting with the system. A first analysis based on users' experiences confirms the importance of having a friendly user interface. The inclusion of context awareness from visual perception enriches this interface, allowing the robotic system to become a flexible and proactive assistant.
Recently, cloud services have been evaluated by scientific communities as a viable solution to satisfy their computing needs, reducing the cost of ownership and operation to the minimum. The analysis of the adoption of the cloud computing model for eScience has identified several areas of improvement, such as federation management and interoperability between providers. Portability between cloud vendors is a widely recognized feature to avoid the risk of user lock-in to proprietary systems, an obstacle to the complete adoption of clouds.
In this paper we present a programming framework that allows the coordination of applications on federated clouds, used to provide flexibility to traditional research infrastructures such as clusters and grids. This approach allows researchers to program applications abstracting away the underlying infrastructure, providing scaling and elasticity features through the dynamic provision of virtualized resources. The adoption of standard interfaces is a basic feature in the development of connectors for different middleware, ensuring the portability of the code between different providers.
The literature on categorial type logic includes proposals for semantically inactive additives, quantifiers, and modalities (Morrill 1994; Hepple 1990; Moortgat 1997), but to our knowledge there has been no proposal for semantically inactive multiplicatives. In this paper we formulate such a proposal (thus filling a gap in the typology of categorial connectives) in the context of the displacement calculus of Morrill et al. (2011), and we give a formulation of words as types whereby for every expression w there is a corresponding type W(w). We show how this machinery can treat the syntax and semantics of collocations involving apparently contentless words such as expletives, particle verbs, and (discontinuous) idioms. In addition, we give an account in these terms of the only known examples treated by Hybrid Type Logical Grammar (HTLG henceforth; Kubota and Levine 2012) beyond the scope of the unaugmented displacement calculus: gapping of particle verbs and discontinuous idioms.
This work aims to design an academic experience involving the implementation of an augmented reality tool in architecture education practices to improve the motivation and final marks of students. We worked with different platforms for mobile devices to create virtual information channels through a database associated with 3D virtual models and any other type of media content, which are geo-located in their real position. The basis of our proposal is the spatial-skills improvement that students can achieve using their innate affinity with user-friendly digital media such as smartphones or tablets, which allow them to visualize educational exercises in real geo-located environments and to share and evaluate students' own-generated proposals on site. The proposed method aims to improve access to multimedia content on mobile devices, allowing access to be adapted to all types of users and contents. The students were divided into various groups, control and experimental, according to the devices used and the activities to perform. The goal they were given was to display 3D geo-referenced architectural content using SketchUp and ArMedia for iOS, and a custom platform for the Android environment.
Microaggregation is one of the most commonly employed microdata protection methods. The basic idea of microaggregation is to anonymize data by aggregating original records into small groups of at least k elements, therefore preserving k-anonymity. Usually, in order to avoid information loss, when records are large, i.e., the number of attributes of the data set is large, this data set is split into smaller blocks of attributes and microaggregation is applied to each block, successively and independently. This is called multivariate microaggregation. By using this technique, the information loss after collapsing several values to the centroid of their group is reduced. Unfortunately, with multivariate microaggregation, the k-anonymity property is lost when at least two attributes of different blocks are known by the intruder, which might be the usual case.
In this work, we present a new microaggregation method called one dimension microaggregation (Mic1D-k). With Mic1D-k, the problem of k-anonymity loss is mitigated by mixing all the values in the original microdata file into a single non-attributed data set using a set of simple pre-processing steps and then microaggregating all the mixed values together. Our experiments show that, using real data, our proposal obtains lower disclosure risk than previous approaches whereas the information loss is preserved.
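A minimal sketch of the core univariate microaggregation step, assuming the Mic1D-k pre-processing (normalising and mixing all attribute values into one non-attributed vector) has already been done; group formation by sorting is the standard univariate approach, an assumption here:

    import numpy as np

    def microaggregate_1d(values, k):
        """Sort the mixed values, partition them into consecutive groups of
        at least k elements, and replace each value by its group centroid."""
        values = np.asarray(values, dtype=float)
        order = np.argsort(values)
        out = np.empty_like(values)
        n = len(values)
        for start in range(0, n, k):
            group = order[start:start + k]
            if n - start < 2 * k:          # last group absorbs the remainder
                group = order[start:]
            out[group] = values[group].mean()
            if n - start < 2 * k:
                break
        return out

    print(microaggregate_1d([4.1, 0.2, 3.9, 0.1, 8.5, 0.3, 4.0, 8.6], k=3))

Every published value then coincides with at least k-1 others, which is what restores the k-anonymity-style guarantee on the mixed data set.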
Public transport optimisation is becoming every day a more difficult and challenging task because of the increasing number of transportation options as well as the exponential increase of users. Many research contributions on this issue have been recently published under the umbrella of smart cities research. In this work, we sketch a possible framework to optimize the tourist bus in the city of Barcelona. Our framework extracts information from Twitter and other web services, such as Foursquare, to infer not only the most visited places in Barcelona but also the trajectories and routes that tourists follow. After that, instead of using complex geospatial or trajectory clustering methods, we propose to use simpler clustering techniques such as k-means or DBSCAN, using a distance measure defined over sequences of symbols so as to incorporate the trajectory information into the clustering process.
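A minimal sketch of such a clustering step, using the Levenshtein distance between symbol sequences as a precomputed metric for DBSCAN; the place identifiers and parameter values are hypothetical, not the paper's:

    import numpy as np
    from sklearn.cluster import DBSCAN

    def edit_distance(a, b):
        """Levenshtein distance between two symbol sequences
        (each symbol a visited-place identifier)."""
        m, n = len(a), len(b)
        d = np.arange(n + 1, dtype=float)
        for i in range(1, m + 1):
            prev, d[0] = d[0], i
            for j in range(1, n + 1):
                prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                       prev + (a[i - 1] != b[j - 1]))
        return d[n]

    # Hypothetical tourist trajectories as sequences of place identifiers.
    trajectories = ["ABCD", "ABCE", "XYZ", "XYZW", "ABDE"]
    D = np.array([[edit_distance(a, b) for b in trajectories]
                  for a in trajectories])
    labels = DBSCAN(eps=2.0, min_samples=2,
                    metric="precomputed").fit_predict(D)
    print(labels)   # trajectories sharing a route pattern share a label

The precomputed-metric interface is what lets an off-the-shelf clusterer like DBSCAN work on sequences instead of points, which is the simplification the abstract argues for.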
In logical categorial grammar [23,11] syntactic structures are categorial proofs and semantic structures are intuitionistic proofs, and the syntax-semantics interface comprises a homomorphism from syntactic proofs to semantic proofs. Thereby, logical categorial grammar embodies in a pure logical form the principles of compositionality, lexicalism, and parsing as deduction. Interest has focused on multimodal versions, but the advent of the (dis)placement calculus of Morrill, Valentín and Fadda suggests that the role of structural rules can be reduced, which facilitates computational implementation. In this paper we specify a comprehensive formalism of (dis)placement logic for the parser/theorem prover CatLog, integrating the categorial logic connectives proposed to date, and illustrate it with a cover grammar of the Montague fragment.
The sequent calculus sL for the Lambek calculus L has no structural rules. Interestingly, sL is equivalent to a multimodal calculus mL, which consists of the nonassociative Lambek calculus with the structural rule of associativity. This paper proves that the hypersequent calculus hD of the discontinuous Lambek calculus, which like sL has no structural rules, is also equivalent to an ω-sorted multimodal calculus mD. More concretely, we present a faithful embedding translation (·)# between mD and hD, in such a way that it can be said that hD absorbs the structural rules of mD.
Chapter in the tribute volume "Categories and Types in Logic, Language, and Physics. Essays Dedicated to Jim Lambek on the Occasion of His 90th Birthday".
Case retrieval is an important step in the case-based reasoning cycle. Up to now, several algorithms have been proposed for the indexing of cases, since the original indexing approach of k-d trees appeared in the literature. The main approaches propose the use of a precomputed binary search tree to obtain an average logarithmic time effort in searching. The proposal presented in this paper is an indexing algorithm based on the principle of binary search trees for efficient case retrieval according to a given similarity measure called sim. The proposed NIAR k-d tree algorithm embodies two main steps: computing the average value of the corresponding attribute among the subtree cases, and selecting for that attribute, as the root (partition value), the value of the Nearest Instance/case to the Average. Experimental results on several databases have shown that the retrieval step with a NIAR k-d tree is faster than with the standard k-d tree approach. The time efficiency and the depth and breadth of both trees are analyzed; the results show a significant difference in the number of levels of the trees. The presented approach is implemented within current research on an introspective reasoning framework for case-based reasoning in continuous domains.
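A minimal sketch of the tree construction as described above; choosing the split attribute by cycling with depth is an assumption (the actual algorithm may select attributes differently):

    import numpy as np

    def build_niar_kdtree(cases, depth=0):
        """NIAR k-d tree sketch: at each node, compute the split attribute's
        average over the subtree cases and pick as root the case whose value
        is Nearest to that Average (NIAR), then partition the rest."""
        if len(cases) == 0:
            return None
        attr = depth % cases.shape[1]
        mean = cases[:, attr].mean()
        root = np.argmin(np.abs(cases[:, attr] - mean))  # nearest case to average
        pivot = cases[root, attr]
        rest = np.delete(cases, root, axis=0)
        left, right = rest[rest[:, attr] <= pivot], rest[rest[:, attr] > pivot]
        return {"case": cases[root],
                "attr": attr,
                "left": build_niar_kdtree(left, depth + 1),
                "right": build_niar_kdtree(right, depth + 1)}

    tree = build_niar_kdtree(np.random.default_rng(1).random((20, 4)))

Centering each partition value on the subtree mean is what keeps the tree balanced and shallow, which is the source of the depth difference the abstract reports against standard k-d trees.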
The dynamic nature of the Web and its increasing importance as an economic platform create the need for new methods and tools for business efficiency. Current Web analytics tools do not provide the necessary abstracted view of the underlying customer processes and critical paths of site-visitor behavior. Such information can offer insights for businesses to react effectively and efficiently. We propose applying Business Process Management (BPM) methodologies to e-commerce Website logs, and present the challenges, results and potential benefits of such an approach.
We use the Business Process Insight (BPI) platform, a collaborative process intelligence toolset that implements the discovery of loosely-coupled processes and includes novel process mining techniques suitable for the Web. Experiments are performed on custom click-stream logs from a large online travel and booking agency. We first compare Web clicks and BPM events, and then present a methodology to classify and transform URLs into events. We evaluate traditional and custom process mining algorithms to extract business models from real-life Web data. The resulting models present an abstracted view of the relation between pages, exit points, and critical paths taken by customers. Compared to current state-of-the-art Web analytics, such models represent an important improvement and aid high-level decision making and optimization of e-commerce sites.
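A minimal sketch of the kind of URL-to-event classification step mentioned above; the patterns and event names are hypothetical placeholders for a travel/booking site, not the paper's actual taxonomy:

    import re
    from urllib.parse import urlparse

    # Hypothetical mapping from URL path patterns to business-level events.
    URL_EVENT_RULES = [
        (re.compile(r"^/search"),           "SearchFlights"),
        (re.compile(r"^/hotel/\d+"),        "ViewHotel"),
        (re.compile(r"^/cart"),             "AddToCart"),
        (re.compile(r"^/checkout/payment"), "Payment"),
        (re.compile(r"^/checkout/confirm"), "BookingConfirmed"),
    ]

    def url_to_event(url):
        """Map a raw click-stream URL to an abstract process event (or None)."""
        path = urlparse(url).path
        for pattern, event in URL_EVENT_RULES:
            if pattern.match(path):
                return event
        return None   # navigation noise, dropped before process mining

    print(url_to_event("https://example.com/hotel/42?ref=home"))  # ViewHotel

Once clicks are lifted to named events, the log becomes an ordinary event log on which standard discovery algorithms can run.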
Redondo, E.; Sanchez Riera, A.; Fonseca, D.; Peredo, A. Lecture Notes in Computer Science, Vol. 8022 LNCS, no. part 2, p. 188-197. DOI: 10.1007/978-3-642-39420-1_21. Publication date: 2013-07. Journal article.
This work addresses the implementation of a mobile Augmented Reality (AR) browser in educational environments. We seek to analyze new, non-traditional educational tools and methodologies to improve students' academic performance, commitment and motivation. The basis of our claim lies in the skills improvement that students can achieve thanks to their innate affinity with the digital media features of new smartphones. We worked with the Layar platform for mobile devices to create virtual information channels through a database associated with 3D virtual models and any other type of media content. The teaching experience was carried out with Master of Architecture students and developed in two subjects focused on the use of ICT and urban design. We call it Geo-elearning because of the use of new eLearning strategies and methodologies that incorporate geolocation, allowing students' own-generated proposals to be received, shared, and evaluated on site.
The lighting of models displayed in Augmented Reality (AR) is one of the most studied techniques and is in constant development. Dynamic control of lighting by the user can improve the transmission of the displayed information and enhance the understanding of the project or model presented. This project shows the development of dynamic lighting control based on a data glove with accelerometers and an A/D NI-DAQ converter. This device transmits the signal (wired or wirelessly) to the AR software, simulating the keystrokes equivalent to the lighting control commands of the model. The system shows how quickly and easily the lighting of a model can be controlled in real time following user movements, generating great expectations for the transmission of information and dynamism in AR.
In this paper we present a keypoint detector based on the bimodality of histograms of oriented gradients (HOGs). We compute the bimodality of each HOG, and a bimodality image is constructed from the result of this bimodality test. The maxima with the highest dynamics in the resulting image are selected as robust keypoints. The bimodality test of HOGs is itself also based on dynamics. We compare the results obtained using this method with those of a set of well-known keypoint detectors.
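A much-simplified sketch of the kind of per-patch test involved; the paper's actual bimodality measure is based on dynamics, so the crude two-peak score below is only a hypothetical stand-in:

    import numpy as np

    def orientation_histogram(gx, gy, n_bins=18):
        """Gradient-orientation histogram of a patch, weighted by magnitude."""
        ang = np.arctan2(gy, gx) % np.pi      # orientations folded into [0, pi)
        mag = np.hypot(gx, gy)
        hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
        return hist

    def bimodality_score(hist):
        """Mass of the two strongest, well-separated bins relative to the
        total mass: high when the patch has two dominant orientations."""
        n = len(hist)
        i = int(np.argmax(hist))
        far = [j for j in range(n)
               if min(abs(j - i), n - abs(j - i)) >= n // 4]
        j = max(far, key=lambda j: hist[j])
        return (hist[i] + hist[j]) / (hist.sum() + 1e-9)

Evaluating such a score over a sliding window yields the bimodality image from which the keypoints are then taken as the most persistent maxima.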
Chang, H.; Ivonin, L.; Diaz, M.; Catala, A.; Chen, W.; Rauterberg, M. Lecture Notes in Computer Science, Vol. 8028, p. 205-214. DOI: 10.1007/978-3-642-39351-8_23. Publication date: 2013-03-01. Journal article.
According to the theories of symbolic interactionism, phenomenology of perception and archetypes, we argue that symbols play the key role in translating information from the physical world to human experience, and that archetypes are the universal knowledge of cognition that generates the background of human experience (the life-world). Therefore, we propose a conceptual framework that depicts how people experience the world through symbols, and how archetypes relate to the deepest level of human experience. This framework indicates a new direction for research on memory and emotion, and also suggests that archetypal symbolism can be a new resource for aesthetic experience design.
A new approach to simplify orthogonal pseudo-polyhedra (OPP) and binary volumes is presented. The method is incremental and produces a level-of-detail (LOD) sequence of OPP. Any object of this sequence contains the previous objects and is therefore a bounding orthogonal approximation of them. The sequence finishes with the minimum axis-aligned bounding box (AABB). OPP are represented by the Extreme Vertices Model, a complete model that stores a subset of their vertices and performs fast Boolean operations. Simplification is achieved using a new approach called merging faces, which relies on the application of 2D Boolean operations. We also present a technique, based on model continuity, for better shape preservation. The method has been tested with several datasets and compared with two similar methods.
In this paper we study the permutability of the composition of fuzzy consequence operators (fuzzy closings) and fuzzy interior operators (fuzzy openings). We establish several characterizations and we show the relation of permutability with the fuzzy closure and fuzzy interior of a fuzzy operator. We also study the connection between permutability and the preservation of the operator type through composition; more precisely, when the composition of two openings is an opening and the composition of two closings is a closing.
Online Reputation Systems would help mitigate the information asymmetry between clients and providers in Cloud Computing Markets. However, these systems raise two main drawbacks: the disagreement over who assumes the cost of ownership of such services, and their vulnerability to reputation attacks from dishonest parties that want to increase their reputation. This paper faces both problems by describing a decentralized model that does not need the intervention of a central managing entity. This model includes mechanisms that allow participants to avoid dishonest behaviour from other peers: each client statistically analyses the external reports about providers and weights them accordingly in the overall trust calculation. The validity of the model is demonstrated through experiments for several use cases.
Abellanas, M.; Bajuelos, A.; Canales, S.; Claverol, M.; Hernández, G.; Pereira de, I. Lecture Notes in Computer Science, no. LNCS 7579, p. 210-219. DOI: 10.1007/978-3-642-34191-5. Publication date: 2012-11. Journal article.
Let S be a set of n + m sites, of which n are red and have weight w_R, and m are blue and have weight w_B. The objective of this paper is to calculate the minimum value of the red sites' weight such that the union of the red Voronoi cells in the weighted Voronoi diagram of S is a connected region. This problem is solved for the multiplicatively-weighted Voronoi diagram in O((n+m)^2 log(nm)) time and for both the additively-weighted and power Voronoi diagrams in O(nm log(nm)) time.
In many industrial robotic applications there is a need to track periodic reference signals and/or reject periodic disturbances. This paper presents a novel repetitive control design for systems with constant time delays in both the forward and feedback control channels. An additional delay is introduced, together with the plant delays, to construct an internal model for periodic signals, and a simple proportional control is utilized to stabilize the closed-loop system. Sufficient stability conditions for the closed-loop system and the robustness analysis under modeling uncertainties are studied. Experimental results are included to evaluate the validity and effectiveness of the proposed method.
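A toy discrete-time sketch of the internal-model idea behind repetitive control: a delay line of one signal period regenerates any N-periodic signal, and a proportional gain closes the loop. The plant, gains, and the forgetting factor (which stands in for the low-pass filtering used in practice for robustness) are all hypothetical, not the paper's design:

    import numpy as np

    N, kp, leak, steps = 50, 0.4, 0.95, 2000
    buffer = np.zeros(N)   # one-period delay line: the internal model
    y = 0.0                # toy first-order plant state
    err = []
    for k in range(steps):
        r = np.sin(2 * np.pi * k / N)       # N-periodic reference
        e = r - y
        u = leak * buffer[k % N] + kp * e   # repetitive law: u[k] = q*u[k-N] + kp*e[k]
        buffer[k % N] = u
        y = 0.9 * y + 0.1 * u               # toy plant: y[k+1] = 0.9*y[k] + 0.1*u[k]
        err.append(abs(e))
    print(f"mean|e|: first period {np.mean(err[:N]):.3f} "
          f"-> last period {np.mean(err[-N:]):.3f}")

Because the control input is rebuilt from its value one period earlier, the loop accumulates exactly the correction each point of the period needs, which is why the tracking error shrinks from the first period to the last.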