Humans have no problem segmenting different motion stimuli despite the ambiguity of local motion signals. Adaptive surround modulation, i.e., the apparent switching between integrative and antagonistic modes, is assumed to play a crucial role in this process. However, motion-processing models based on local integration have so far been unable to provide a unifying explanation for this phenomenon. This motivated us to investigate the problem of local stimulus disambiguation in an alternative and fundamentally distinct motion-processing model which uses global motion filters for velocity computation. Local information is reconstructed at the end of the processing stream through the constructive interference of global signals, i.e., inverse transformations. We show that in this model local stimulus disambiguation can be achieved by means of a novel filter embedded in this architecture. This gives rise to both integrative and antagonistic effects that agree with those observed in psychophysical experiments with humans, providing a functional explanation for effects of motion repulsion.
Recently, error-minimized extreme learning machines (EM-ELMs) have been proposed as a simple and efficient approach to build single-hidden-layer feed-forward networks (SLFNs) sequentially. They add random hidden nodes one by one (or group by group) and update the output weights incrementally to minimize the sum-of-squares error on the training set. Other, very similar methods that also construct SLFNs sequentially were reported earlier, with the main difference that their hidden-layer weights are a subset of the data instead of being random. These approaches are referred to as support vector sequential feed-forward neural networks (SV-SFNNs), and they are a particular case of the sequential approximation with optimal coefficients and interacting frequencies (SAOCIF) method. In this paper, it is first shown that EM-ELMs can also be cast as a particular case of SAOCIF. In particular, EM-ELMs can easily be extended to test a number of random candidates at each step and select the best of them, as SAOCIF does. Moreover, it is demonstrated that the cost of computing the optimal output-layer weights in the originally proposed EM-ELMs can be improved if it is replaced by the one included in SAOCIF. Second, we present the results of an experimental study on 10 benchmark classification and 10 benchmark regression data sets, comparing EM-ELMs and SV-SFNNs under the same conditions for the two models. Although both models have the same (efficient) computational cost, a statistically significant improvement in the generalization performance of SV-SFNNs over EM-ELMs was found in 12 out of the 20 benchmark problems.
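To illustrate the shared sequential scheme, the sketch below grows an SLFN node by node, trying several random hidden-node candidates per step and keeping the one that most reduces the training sum-of-squares error, as in the SAOCIF-style extension described above. For simplicity it recomputes the least-squares output weights from scratch at every step rather than using the incremental update of EM-ELM or SAOCIF; the activation, toy problem, and all names are our own choices, not taken from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_output(X, w, b):
    # Sigmoid activation of one random hidden node.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def grow_slfn(X, y, max_nodes=20, n_candidates=5):
    """Grow an SLFN one hidden node at a time, keeping the best of
    several random candidates at each step (candidate selection as
    in SAOCIF; output weights refit by ordinary least squares)."""
    n, d = X.shape
    H = np.empty((n, 0))          # hidden-layer output matrix
    nodes, beta = [], None
    for _ in range(max_nodes):
        best = None
        for _ in range(n_candidates):
            w, b = rng.standard_normal(d), rng.standard_normal()
            Hc = np.hstack([H, hidden_output(X, w, b)[:, None]])
            bc, *_ = np.linalg.lstsq(Hc, y, rcond=None)
            err = np.sum((Hc @ bc - y) ** 2)
            if best is None or err < best[0]:
                best = (err, w, b, Hc, bc)
        _, w, b, H, beta = best
        nodes.append((w, b))
    return nodes, beta

# Toy regression problem: approximate y = sin(3x) on [-1, 1].
X = np.linspace(-1, 1, 200)[:, None]
y = np.sin(3 * X[:, 0])
nodes, beta = grow_slfn(X, y)
H = np.hstack([hidden_output(X, w, b)[:, None] for w, b in nodes])
mse = np.mean((H @ beta - y) ** 2)
```

Refitting all output weights per candidate costs more than the incremental updates the papers analyse, but keeps the sketch short and makes the greedy selection step explicit.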
Most of the existing research on multivariate time series concerns supervised forecasting problems. In comparison, little research has been devoted to their exploration through unsupervised clustering
and visualization. In this paper, the capabilities of Generative Topographic Mapping Through Time, a model with foundations in probability theory that performs simultaneous time series clustering and visualization, are assessed in detail. Focus is placed on the visualization of the evolution of signal regimes and on the exploration of sudden transitions, for which a novel identification index is defined. Interpreting time series clustering results may become extremely difficult, even in exploratory visualization, for high-dimensional datasets. Here, we define and test an unsupervised time series relevance determination method, fully integrated in the Generative Topographic Mapping Through Time model, that can be used as a basis for time series selection. This method should ease the interpretation of time series clustering results.
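The transition-identification idea can be illustrated schematically: once each time point has been assigned a position on the visualization map, a sudden regime change shows up as an unusually large jump between consecutive positions. The index below (jump length, thresholded at the mean plus k standard deviations) is a simplified stand-in of our own, not the index defined in the paper, and the per-timepoint map coordinates are assumed to come from an already-fitted model.

```python
import numpy as np

def transition_index(map_coords):
    """Jump distance between consecutive latent-map positions.
    `map_coords` is a (T, 2) array of per-timepoint coordinates on
    the visualization map (e.g., posterior mean positions)."""
    return np.linalg.norm(np.diff(map_coords, axis=0), axis=1)

def sudden_transitions(map_coords, k=3.0):
    """Flag time steps whose jump exceeds mean + k * std of all jumps."""
    d = transition_index(map_coords)
    return np.where(d > d.mean() + k * d.std())[0] + 1

# Toy trajectory: smooth drift with one abrupt regime switch at t = 50.
t = np.arange(100)
coords = np.stack([0.01 * t, np.zeros(100)], axis=1)
coords[50:] += np.array([0.0, 2.0])   # sudden jump on the map
flags = sudden_transitions(coords)
```

A fixed mean-plus-k-sigma threshold is the crudest possible choice; the point is only that regime transitions become a one-dimensional signal once the series lives on the map.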
The Generative Topographic Mapping (GTM) was originally conceived as a probabilistic alternative to the well-known, neural network-inspired Self-Organizing Maps. The GTM can also be interpreted as a constrained mixture of distribution models. In recent years, much attention has been directed towards Student t-distributions as an alternative to Gaussians in mixture models, due to their robustness to outliers. In this paper, the GTM is redefined as a constrained mixture of t-distributions, the t-GTM, and the Expectation–Maximization algorithm used to fit the model to the data is modified to carry out missing-data imputation. Several experiments show that the t-GTM successfully detects outliers, while minimizing their impact on the estimation of the model parameters. It is also shown that the t-GTM provides an overall more accurate imputation of missing values than the standard Gaussian GTM.
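The robustness mechanism that the t-GTM inherits from Student t-distributions can be seen in a one-dimensional sketch: in the EM algorithm for a t-distribution, each point receives a weight u_i = (nu + 1) / (nu + r_i^2) from the latent scale variable, where r_i^2 is the squared standardized residual, so gross outliers are automatically down-weighted. This is a generic univariate illustration, not the constrained-mixture model of the paper.

```python
import numpy as np

def t_location_em(x, nu=3.0, iters=50):
    """EM estimate of the location of a univariate Student t-distribution.
    Outliers receive small weights u_i = (nu + 1) / (nu + r_i^2),
    limiting their influence on the location estimate."""
    mu, sigma2 = np.median(x), np.var(x)
    for _ in range(iters):
        r2 = (x - mu) ** 2 / sigma2
        u = (nu + 1.0) / (nu + r2)            # E-step: latent scale weights
        mu = np.sum(u * x) / np.sum(u)        # M-step: weighted location
        sigma2 = np.sum(u * (x - mu) ** 2) / len(x)
    return mu

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), [50.0, 60.0]])  # two gross outliers
robust_mu = t_location_em(x)   # stays near 0
plain_mu = x.mean()            # dragged towards the outliers
```

The same weighting appears inside each mixture component of the t-GTM, which is why outliers barely perturb the estimated parameters.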
The Self-Organizing Map (SOM) is a neural network algorithm that has the special property of creating spatially organized representations of various features of input signals. The resulting maps resemble real neural structures found in the cortices of developed animal brains. The SOM has also been successful in various pattern recognition tasks involving noisy signals, as for instance speech recognition, and for this reason we are studying its application to some astronomical problems. In this paper we present the 2-D mapping and subsequent study of one local sample of 12000 stars using the SOM. The available attributes are 14: 3-D position and velocities, photometric indexes, spectral type and luminosity class. The possible location of halo, thick disk and thin disk stars is discussed. Their kinematical properties are also compared using the velocity distribution moments up to order four.
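For readers unfamiliar with the algorithm, a minimal SOM training loop is sketched below: each input moves its best-matching unit and that unit's grid neighbours towards itself, with the learning rate and neighbourhood width decaying over epochs. This is a generic sketch on toy 2-D data, not the 14-attribute stellar configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_som(X, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0):
    """Minimal 2-D SOM: for each input, pull the best-matching unit
    and its grid neighbours towards the input, with learning rate
    and neighbourhood width shrinking as training proceeds."""
    gy, gx = grid
    coords = np.array([(i, j) for i in range(gy) for j in range(gx)], float)
    W = rng.standard_normal((gy * gx, X.shape[1]))     # codebook vectors
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)
        sigma = sigma0 * (1 - e / epochs) + 0.5
        for x in rng.permutation(X):
            bmu = np.argmin(np.sum((W - x) ** 2, axis=1))
            h = np.exp(-np.sum((coords - coords[bmu]) ** 2, axis=1)
                       / (2 * sigma ** 2))             # neighbourhood kernel
            W += lr * h[:, None] * (x - W)
    return W

# Two well-separated clusters; after training, units specialize per cluster,
# and neighbouring units on the grid map to nearby regions of input space.
X = np.vstack([rng.normal(-3, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
W = train_som(X)
```

The spatially organized representation mentioned above arises from the neighbourhood kernel `h`: because neighbours on the grid are updated together, nearby units end up responding to similar inputs.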