The use of the Hilbert–Huang transform in the analysis of biomedical signals has increased during the past few years, but its use for respiratory sound (RS) analysis is still limited. The technique includes two steps: empirical mode decomposition (EMD) and instantaneous frequency (IF) estimation. Although the mode mixing (MM) problem of EMD has been widely discussed, this technique continues to be used in many RS analysis algorithms.
In this study, we analyzed the MM effect in RS signals recorded from 30 asthmatic patients, and studied the performance of ensemble EMD (EEMD) and noise-assisted multivariate EMD (NA-MEMD) as means of preventing this effect. We propose quantitative parameters for measuring the extent of MM, its reduction, and the residual noise level of each method. These parameters showed that EEMD is a good solution for MM, outperforming NA-MEMD. After testing different IF estimators, we propose Kay's method to calculate an EEMD-Kay-based Hilbert spectrum that offers high energy concentration and high time and frequency resolution. We also propose an algorithm for the automatic characterization of continuous adventitious sounds (CAS). The tests performed showed that the proposed EEMD-Kay-based Hilbert spectrum makes it possible to characterize CAS more precisely than other conventional time-frequency techniques.
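As a rough illustration of the IF-estimation step, the sketch below implements a Kay-style weighted phase-difference estimator on the analytic signal; the window length, and the way it would be combined with the EEMD modes, are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.signal import hilbert

def kay_if(x, fs, win=15):
    """Kay-style IF estimate: a weighted average of the phase increments
    of the analytic signal over a sliding window (illustrative sketch)."""
    z = hilbert(x)                             # analytic signal
    dphi = np.angle(z[1:] * np.conj(z[:-1]))   # wrapped phase increments
    k = np.arange(win)
    # Kay's parabolic weights, minimum-variance for white Gaussian noise
    w = 1.5 * win / (win**2 - 1) * (1.0 - ((k - (win / 2 - 1)) / (win / 2))**2)
    w /= w.sum()
    return np.convolve(dphi, w, mode="same") * fs / (2 * np.pi)
```

Applied to each EEMD mode (obtained, for instance, with a package such as PyEMD) and weighted by instantaneous amplitude, estimates of this kind can be accumulated into a Hilbert spectrum of the sort described above.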
This paper deals with the problem of optimal decentralized power control in systems whose spectrum is regulated in time and space, under the so-called time-area-spectrum (TAS) license. This license system gives the owner the right to use a given frequency band at a given time and within a given geographical area. In this paper we consider locations with colliding transmissions, thus addressing a scenario with full interference. In order to facilitate the coexistence of different TAS licenses, the power spectral density of the used band must be limited. Since controlling the overall radiated power in a given area is cumbersome (especially when several base stations or access points operate in an uncoordinated way), we control the amount of received power instead. First, we characterize the achievable rates (i.e., the rate Pareto set) and their corresponding powers by means of multi-criteria optimization theory. Second, we study a completely decentralized, gradient-based power control scheme that obtains Pareto-efficient rates and powers, the so-called DPC-TAS (Decentralized Power Control for TAS). The convergence of the power control and the possibility of guaranteeing a minimum Quality of Service (QoS) per user are analyzed. Third, in order to gain more insight into the features of DPC-TAS, the paper compares it with other baseline power control approaches; for the sake of comparison, a simple pricing mechanism is proposed. Numerical simulations verify the good performance of DPC-TAS.
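DPC-TAS itself is not reproduced here, but the toy sketch below conveys the flavor of decentralized gradient-based power control with pricing: each link climbs the gradient of its own log-rate minus a price for the rate loss its power inflicts on the other links. The gains, constants and step size are hypothetical, and the received-power mask of the TAS setting is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                                  # links (hypothetical scenario)
G = rng.uniform(0.1, 1.0, (K, K))      # G[i, j]: gain from transmitter j to receiver i
noise, p_max, step = 0.01, 1.0, 0.02
p = np.full(K, 0.1)

for _ in range(2000):
    I = noise + G @ p - np.diag(G) * p          # interference plus noise per receiver
    for i in range(K):
        own = G[i, i] / (I[i] + G[i, i] * p[i])             # d/dp_i of own log-rate
        # price: marginal rate loss caused to the other links (previous iterate)
        price = sum(G[j, i] * (1.0 / I[j] - 1.0 / (I[j] + G[j, j] * p[j]))
                    for j in range(K) if j != i)
        p[i] = np.clip(p[i] + step * (own - price), 0.0, p_max)
```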
This paper presents a low-complexity algorithm for multiuser scheduling and resource allocation in the Multiple Input Single Output (MISO) downlink channel with Orthogonal Frequency Division Multiple Access (OFDMA). The goal of the algorithm is to maximize the sum-rate on the radio channel while ensuring that the rate assignment is suitably balanced among users. The proposed algorithm uses partial Channel State Information (CSI) and therefore has a reduced feedback requirement. It also allows an on-line implementation, based on an ergodic optimization framework with dual optimization and stochastic approximation. Performance and complexity reduction are quantified by comparison with other solutions in a realistic single-cell system configuration. It is shown that the algorithm effectively balances average rates among users, even in heterogeneous and non-stationary channel conditions, with lower computational complexity and feedback requirements.
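As a hedged illustration of the ergodic/dual-optimization idea (not the paper's algorithm), the sketch below schedules at each slot the user with the best price-weighted rate and updates the dual prices with a stochastic subgradient step toward per-user rate targets; the rates and targets are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
K, T, r_min, step = 8, 5000, 1.0, 0.01
lam = np.zeros(K)                       # dual prices for per-user rate targets
avg = np.zeros(K)

for t in range(1, T + 1):
    rates = rng.exponential(2.0, K)          # toy instantaneous feasible rates
    k = int(np.argmax((1.0 + lam) * rates))  # serve the user with best priced rate
    served = np.zeros(K)
    served[k] = rates[k]
    avg += (served - avg) / t                             # running average rates
    lam = np.maximum(0.0, lam + step * (r_min - served))  # stochastic dual update
```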
In the last few years, a new design paradigm has arisen in the field of wireless communications research: so-called cross-layer optimization. This paradigm implies a redefinition of the overall design strategy for such systems, as it breaks the classical OSI model. The endless need for ever higher bit rates, stringent QoS requirements and anytime-anywhere connectivity in wireless systems makes it necessary to squeeze the available radio bandwidth to the utmost, and cross-layer design plays a key role in achieving this goal. The literature on this issue is still relatively scarce, but the first published results show that the obtainable gains justify the increasing attention that cross-layer design is receiving. This paper reviews the different definitions used for the paradigm, describes the mechanisms that fit those definitions, outlines research challenges to be met in the near future, and analyses different strategies proposed by the authors, showing some recent novel results for CDMA-based and WLAN systems.
This paper presents a new approach to robust beamforming, based on fuzzy logic theory, that is suitable for both point and scattered sources. The presented technique builds on the optimum beamformer and makes it robust to an imperfect estimate of the direction of arrival (DOA), even when powerful interferences lie within the uncertainty range of the desired source. This robust approach overcomes the deficiencies of classical non-robust space reference beamformers (SRB), in which the real DOA and the presumed one are taken as equal although they are not. At low signal-to-noise ratio (SNR) the fuzzy inference-based beamformer relies on the fuzzy description of the DOA estimate, and at high SNR it places more emphasis on the estimated DOA. Interference rejection is well achieved for interference-to-noise ratios (INRs) above the SNR. When the number of antennas is large, the fuzzy inference-based beamformer can be implemented by means of a generalized sidelobe canceller (GSLC) architecture; stability improves while the beamformer remains capable of suppressing weak interferences. The proposed schemes are compared with existing techniques, showing that the fuzzy inference beamformer is a viable alternative in scenarios with DOA uncertainty and interferences. The main goals of the proposed scheme are DOA robustness, adjustability, and numerical stability, which shortens the distance between theory and implementation.
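For orientation, the sketch below shows the optimum (MVDR) beamformer the technique builds on, plus one crude way to blend steering vectors over a DOA uncertainty range with a triangular membership function; the paper's actual fuzzy inference rules are not reproduced.

```python
import numpy as np

def steering(theta_deg, n_ant, d=0.5):
    """Uniform linear array steering vector, half-wavelength spacing."""
    k = np.arange(n_ant)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def mvdr(R, a):
    """Optimum beamformer: w = R^{-1} a / (a^H R^{-1} a)."""
    u = np.linalg.solve(R, a)
    return u / (a.conj() @ u)

def fuzzy_steering(theta_hat, spread, n_ant, n_pts=11):
    """Illustrative only: membership-weighted average of steering vectors
    over the DOA uncertainty range (not the paper's inference engine)."""
    thetas = np.linspace(theta_hat - spread, theta_hat + spread, n_pts)
    mu = 1.0 - np.abs(thetas - theta_hat) / spread     # triangular membership
    return sum(m * steering(t, n_ant) for m, t in zip(mu, thetas)) / mu.sum()
```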
This paper presents a beamforming technique for the reception of frequency hopping (FH) modulated signals, which takes advantage of their inherent frequency diversity. This technique, based on a code reference beamformer, requires neither temporal nor spatial a priori reference and allows continuous self-calibration of the array. The proposed framework is composed of two different stages. The first stage employs the inverse of the noise plus interference covariance matrix obtained by an anticipative processor. The second stage makes use of the steering vector of the signal of interest which is adaptively obtained by maximizing the output signal to interference plus noise ratio (SINR). Using this information, the first stage is in turn readjusted and, as a result, the scheme is able to track non-stationary scenarios following the channel variations with no previous references other than knowledge of the frequency hopping sequence. The two-stage code reference beamformer provides the convergence rate necessary to avoid the SINR reduction associated with frequency hops in existing methods.
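A minimal sketch of the two stages, using the standard result that the SINR-maximizing steering vector is the principal generalized eigenvector of the total and noise-plus-interference covariances; the anticipative processor and the hop-by-hop tracking are omitted.

```python
import numpy as np
from scipy.linalg import eigh

def max_sinr_steering(R_x, R_in):
    """Second stage: steering vector maximizing output SINR, i.e. the
    principal generalized eigenvector of (R_x, R_in)."""
    _, vecs = eigh(R_x, R_in)       # generalized EVD, eigenvalues ascending
    a = vecs[:, -1]
    return a / np.linalg.norm(a)

def first_stage_weights(R_in, a):
    """First stage: whiten with the inverse noise-plus-interference
    covariance and steer; readjusted as the steering estimate improves."""
    u = np.linalg.solve(R_in, a)
    return u / (a.conj() @ u)
```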
This paper proposes a trigonometric functional extension, hereafter named the Fourier model, as an alternative framework to the Volterra approach for non-linear systems modelling. This work is focused on the general advantages that trigonometric functionals show in adaptive implementations and also on the possibility they provide to reuse well-known linear processing tools in a non-linear context. The performance of the Fourier model is compared in a set of simulations that cover companders for audio and radio frequency amplifiers, probability density function (PDF) whitening and PDF estimation.
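A minimal sketch of the idea, assuming a memoryless target nonlinearity: expand the input in a trigonometric basis and adapt the coefficients with ordinary LMS, i.e. reuse a standard linear tool inside a non-linear model. The nonlinearity, model order and step size below are hypothetical.

```python
import numpy as np

def fourier_features(x, order, scale=1.0):
    """Memoryless trigonometric expansion of the (normalized) input."""
    u = x / scale
    feats = [np.cos(k * np.pi * u) for k in range(order + 1)]
    feats += [np.sin(k * np.pi * u) for k in range(1, order + 1)]
    return np.stack(feats, axis=-1)

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 5000)
d = np.tanh(2 * x)                     # toy compander-like nonlinearity
Phi = fourier_features(x, order=6)
w = np.zeros(Phi.shape[1])
mu = 0.05                              # LMS step size
for phi, dn in zip(Phi, d):
    e = dn - phi @ w                   # a-priori error
    w += mu * e * phi                  # standard LMS update
```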
This paper deals with the class of extensive operators on the lattice of partitions. These operators are merging techniques: they can be used as filtering tools or as segmentation algorithms. In the first case they are known as connected operators, and in the second case they are region-growing techniques. This paper discusses the basic elements that have to be defined to create a merging algorithm: the merging order, the merging criterion and the region model. This analysis highlights the similarities and differences between a filtering tool, such as a connected operator, and a segmentation algorithm. Drawing on both the filtering and segmentation viewpoints, we propose a general merging algorithm that can be used to create new connected operators (in particular self-dual operators) and efficient segmentation algorithms (robust criteria and efficient implementation).
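A toy rendering of the three ingredients on a 1-D signal: the region model is the mean, the merging criterion is the distance between adjacent region models, and the merging order always takes the currently most similar pair. Connected operators and segmenters differ precisely in how these three choices are made.

```python
import numpy as np

def merge_regions(signal, n_regions):
    """Greedy 1-D region merging: model = mean, criterion = |mean difference|
    of adjacent regions, order = most similar pair first (toy sketch)."""
    regions = [(i, i + 1, float(v)) for i, v in enumerate(signal)]  # [start, end), mean
    while len(regions) > n_regions:
        diffs = [abs(regions[i][2] - regions[i + 1][2])
                 for i in range(len(regions) - 1)]
        i = int(np.argmin(diffs))                 # merging order
        a, b = regions[i], regions[i + 1]
        na, nb = a[1] - a[0], b[1] - b[0]
        regions[i:i + 2] = [(a[0], b[1], (na * a[2] + nb * b[2]) / (na + nb))]
    return regions
```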
This paper presents a new technique for downlink transmission beamformer design in cellular mobile communication systems using antenna arrays at the base station. The method is based on estimation of an underlying spatial distribution associated with each source’s spatial downlink channel. The algorithm is ‘blind’ in the sense that it depends only on uplink spatial channel statistics, requiring no mobile-to-base-station feedback in the design procedure. The assumed underlying spatial distribution models are general enough to be used in a wide variety of mobile communications scenarios (e.g., rural, urban, suburban, indoor). The technique is of reasonable complexity, with a significant portion of the computation carried out off-line. Simulation results verify the effectiveness of the approach.
One of the most important steps in the vector quantization of images is the design of the codebook. The codebook is generally designed using the LBG algorithm, which is in essence a clustering algorithm operating on a large training set of empirical data that is statistically representative of the images to be quantized. The LBG algorithm, although quite effective in practice, is computationally very expensive, and the resulting codebook has to be recalculated each time the type of image to be encoded changes. An alternative approach to codebook generation, called stochastic vector quantization (SVQ), is presented in this paper. SVQ is based on generating the codebook according to a model defined a priori for the image to be encoded. The well-known AR model has been used to model the image in the current implementations of the technique, and has shown good performance in the overall scheme. To show the merit of the technique in different contexts, stochastic vector quantization is discussed and applied to both pixel-based and segmentation-based image coding schemes.
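A one-dimensional sketch of the SVQ idea: codewords are drawn from an AR synthesis of the source model rather than clustered from a training set as in LBG. The paper uses a 2-D image model; the coefficients and sizes below are placeholders.

```python
import numpy as np

def svq_codebook(ar_coeffs, n_codewords, block_size, rng=None):
    """Stochastic VQ: generate each codeword from the AR model."""
    rng = rng or np.random.default_rng(0)
    a = np.asarray(ar_coeffs, dtype=float)
    p = len(a)
    codebook = []
    for _ in range(n_codewords):
        x = np.zeros(block_size + 10 * p)        # burn-in to forget the zero start
        e = rng.standard_normal(x.size)
        for n in range(p, x.size):
            x[n] = a @ x[n - p:n][::-1] + e[n]   # x_n = sum_k a_k x_{n-k} + e_n
        codebook.append(x[-block_size:])
    return np.array(codebook)

# e.g. svq_codebook([0.9], n_codewords=256, block_size=16)
```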
This paper deals with the problem of multisensor multiuser detection in CDMA systems in the presence of interferences external to the system. The paper presents a new method for estimating the spatial signature of all active users projected onto the subspace orthogonal to the external interference. With this information, a specific beamformer can be designed for each user in order to combat both the external and the multiple-access interference. The method operates in frequency non-selective multipath scenarios such as those arising in some CDMA satellite and indoor communication systems. The estimation process requires neither a training signal nor any a priori spatial information; it exploits the temporal structure of CDMA signals and extracts the required information directly from the received signals. In addition, the method is independent of the nature of the external interference and can cope with both narrowband and wideband interfering signals.
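The core projection step can be sketched as follows, assuming an estimated basis J for the external-interference subspace; how J and the spatial signatures are actually estimated from the CDMA temporal structure is the substance of the method and is not reproduced here.

```python
import numpy as np

def project_out_interference(X, J):
    """Project array snapshots X (sensors x snapshots) onto the orthogonal
    complement of the interference subspace spanned by the columns of J."""
    Q, _ = np.linalg.qr(J)                    # orthonormal interference basis
    P = np.eye(J.shape[0]) - Q @ Q.conj().T   # orthogonal-complement projector
    return P @ X                              # per-user beamformers are then
                                              # designed on the projected data
```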
This work deals with Boolean functions of non-linear and linear basis. Boolean random functions of non-linear basis were proposed by Serra (1988, 1989). These functions are generated through a Poisson point process upon which a family of independent functions, called germ functions, is installed. This installation process consists of taking the supremum (Sup), point to point, of the result of placing the germ functions at the points of the Poisson process. Boolean functions of linear basis, which are defined and proposed in this paper, are generated in the same manner as the non-linear functions but with a modified installation process: instead of the point-to-point Sup, the point-to-point sum is taken. The process is then equivalent to the convolution of a Poisson train of deltas with a random pulse. The aim of this paper is to analyse textures through these two models in order to infer their genetics from a given realisation of the process, i.e., to analyse the complete statistics of the germ functions and the density of the associated Poisson process in order to characterise a given texture. Experiments and results are provided which show that real textures can be understood as realisations of Boolean random functions (of linear and non-linear basis), and that it is possible to infer the genetics of unidimensional Boolean random functions of linear basis with the algorithm proposed here. It is also possible to do so with non-linear Boolean functions, but only by imposing two restrictive conditions on the genetics of the realisation.
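The two installation rules are easy to state in code: the linear basis sums the germs placed at the Poisson points (equivalently, a delta train convolved with the germ), while the non-linear basis takes their point-to-point supremum. The germ shape and density below are arbitrary.

```python
import numpy as np

def boolean_linear(length, density, germ, rng=None):
    """Linear basis: convolution of a Poisson train of deltas with a germ."""
    rng = rng or np.random.default_rng(0)
    train = (rng.random(length) < density).astype(float)   # Bernoulli ~ Poisson
    return np.convolve(train, germ, mode="same")

def boolean_nonlinear(length, density, germ, rng=None):
    """Non-linear basis: point-to-point supremum of the placed germs."""
    rng = rng or np.random.default_rng(0)
    out = np.zeros(length)
    for p in np.flatnonzero(rng.random(length) < density):
        hi = min(p + len(germ), length)
        out[p:hi] = np.maximum(out[p:hi], germ[:hi - p])
    return out

# e.g. boolean_linear(512, 0.02, np.hanning(25))
```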
A new architecture to model and design nonlinear transfer functions is presented, using a new formulation for nonlinear systems. The approach follows the guidelines of the mapping theorem due to A. Kolmogorov and is based on the direct Fourier transform of the transfer function. The resulting scheme is formed by two stages. The first stage contains phase modulators which, based on random sampling concepts reported by I. Bilinskis, are duplicated with a small perturbation in the modulation factor; this stage depends on the number of diversity data and is independent of the function. The second stage reduces to Volterra systems and a direct combiner of the new diversity kernels. The reported architecture and design appear able to cope with both linear and nonlinear filtering problems, and may be considered a formal framework for generalised signal processing.
The problem of parameter estimation of a cyclostationary signal is addressed in this paper. The maximum likelihood function is approximated by the mean squared error (MSE) between the estimated and expected cyclic autocorrelation matrices, and a joint frequency-timing estimator is obtained. The derived estimator is connected with optimum detection, time-frequency analysis and the maximum SNR spectral line generation. A recursive filter is designed based on Bayesian estimation theory. The filter bears close resemblance to the extended Kalman filter (EKF), but the gain matrices can be computed off-line. The resulting filter is simple and performs an automatic, gradual transition between the acquisition and tracking stages.
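For reference, one common estimator of the cyclic autocorrelation underlying the MSE criterion is sketched below; lag and phase conventions vary across the literature, and the paper's estimated and expected matrices are not reproduced.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, max_lag):
    """Estimate R_x^alpha(tau) = <x[n+tau] x*[n] e^{-j 2 pi alpha n}>,
    with alpha the cycle frequency in cycles per sample."""
    n = np.arange(len(x))
    xr = x * np.exp(2j * np.pi * alpha * n)   # conj(xr[n]) carries e^{-j2pi alpha n}
    return np.array([np.mean(x[k:] * np.conj(xr[:len(x) - k]))
                     for k in range(max_lag + 1)])
```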
This paper is concerned with a hierarchical segmentation algorithm for image coding. It is based on mathematical morphology, which is very attractive for this purpose: morphology can efficiently deal with geometrical features such as size, shape, contrast or connectivity, which can be considered object-oriented, and therefore segmentation-oriented, features. The proposed algorithm follows a purely top-down procedure. It first takes into account the most global information of the image and produces a coarse segmentation (with a reduced number of regions). The segmentation is then improved by introducing regions corresponding to more local information. Each segmentation stage relies on four basic steps: simplification, feature extraction, decision and quality estimation. The simplification removes information from the input image to make it easier to segment; morphological filters based on reconstruction processes prove very efficient for this purpose. The feature extraction, called marker extraction, identifies the presence of homogeneous regions. The goal of the decision step is to locate precisely the contours of the regions detected by the feature extraction; this decision is performed by the watershed algorithm. Finally, the quality estimation concentrates in an image, called the coding residue, all the information about the regions that have not been properly represented by the current segmentation. The procedure allows the introduction of the texture and contour coding schemes within the segmentation algorithm: the coding residue is transmitted to the next segmentation stage to improve the segmentation and coding quality. Examples of transformations that fit within this structure are described and discussed. Finally, some segmentation and coding examples are presented to show the validity and interest of the coding approach.
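The four-step pipeline maps naturally onto standard tools; the sketch below uses scikit-image as a stand-in (the paper predates such libraries), with an arbitrary gradient threshold for the marker extraction.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, morphology, segmentation

img = data.camera().astype(float)
# 1) simplification: opening by reconstruction removes small bright details
#    while preserving the contours of the surviving objects
seed = morphology.erosion(img, morphology.disk(5))
simplified = morphology.reconstruction(seed, img, method="dilation")
# 2) marker extraction: flat (low-gradient) zones mark homogeneous regions
grad = filters.sobel(simplified)
markers, _ = ndi.label(grad < 0.01 * grad.max())   # threshold is arbitrary
# 3) decision: the watershed of the gradient locates the region contours
labels = segmentation.watershed(grad, markers)
# 4) quality estimation (coding residue) would compare a region-model
#    reconstruction of `labels` against `img`; omitted in this sketch
```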
In this paper, a method for segmenting image sequences and its application to motion estimation are presented. The method is based on a three-dimensional (3D) morphological segmentation. A 3D (i.e., two spatial dimensions plus time) approach has advantages over a 2D one, as it produces a segmentation that is coherent along the time dimension. Mathematical morphology is a very attractive tool for segmentation because it deals with geometric features, such as size, shape, contrast or connectivity, which can be considered object-oriented, and therefore segmentation-oriented, features. The proposed method follows a purely top-down procedure: it first produces a coarse segmentation and then refines it at subsequent levels. The original image sequences are considered as functions defined on a 3D space, so the method directly segments 3D regions. Furthermore, a time-recursive approach is introduced in order to deal with interactive applications, thus avoiding the drawbacks of purely 3D methods. Sequence segmentation has many applications in image sequence processing; in this paper, its application to motion analysis is discussed. As the segmentation is performed in a three-dimensional space, the produced regions are connected components in this space which can be related to moving objects. This implies complete knowledge of the position and shape of every segmented object of the scene in every time section. From this information, an affine transformation is used within each connected component in order to estimate the motion parameters of every region.
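For the motion-analysis step, the affine parameters of each region can be fitted by least squares to displacement samples inside the region. The six-parameter model is standard; the estimation details below are illustrative assumptions.

```python
import numpy as np

def affine_from_flow(xs, ys, vx, vy):
    """Least-squares affine motion for one region: the displacement field is
    modelled as v = A [x, y]^T + b, i.e. six parameters per region."""
    M = np.stack([xs, ys, np.ones_like(xs, dtype=float)], axis=1)
    px, *_ = np.linalg.lstsq(M, vx, rcond=None)   # a11, a12, b1
    py, *_ = np.linalg.lstsq(M, vy, rcond=None)   # a21, a22, b2
    return px, py
```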
We address the problem of estimating the parameters of a pure AR causal model excited by a non-Gaussian, ergodic, unobservable process. The output samples may be corrupted with colored Gaussian noise. Other types of noise are allowed, provided they have a set of mth-order statistics that vanish while the same statistics are nonzero for the signal. It is shown that each sample of the impulse response of the AR system that generates the process may be expressed as a linear combination of cumulant slices of any order, thus providing a new framework for combining cumulants of different orders. The resulting algorithm is shown to be well behaved and to provide consistent estimates while significantly reducing complexity with respect to other approaches.
A general class of higher-order moment- and spectra-based time-frequency distributions, including Wigner higher-order moment spectra (WHOS), has recently been defined and studied as an extension of bilinear time-frequency representations (TFRs) to multilinear (higher than second-order) moments of the signal. In this paper, the analysis of mono- and multi-component signals is addressed using higher-order TFRs. A computationally feasible implementation of the Wigner bispectrum and trispectrum (i.e., WHOS in the third- and fourth-order domains) is proposed, considering two-dimensional slices of the time-multifrequency domain of support of WHOS. It is shown that these simplified formulations provide information about the evolution of the higher-order spectra of a signal with resolution and complexity similar to those of bilinear TFRs. The problem of cross-term cancellation is also considered by defining reduced-interference distributions as extensions of the Choi-Williams distribution in the fourth-order domain. A simplified implementation that employs only the principal slice of WHOS is also proposed for practical purposes.
Several papers have been devoted to the alternative maximum entropy method (MEM2), which uses a finite-length cepstrum model to estimate the spectrum from a given set of autocorrelations. In this paper, a simple technique is presented that avoids the computational burden of MEM2 by using the causal part of the autocorrelation instead of the complete two-sided sequence. Its finite-length cepstrum model, which arises from the minimization of a root-mean-square measure of spectral distance, can readily include prior information without increasing the computational complexity of the algorithm. As illustrated with some numerical examples, the new method demonstrates greater potential than MEM2.
This paper presents a procedure for CIR beamforming that avoids SVD-like procedures and is derived from the classic CINR solution. The resulting nulls along the directions of the impinging interferers are of the same degree as those produced by noise-subspace procedures. The method is based both on the use of a temporal reference, available in the satellite payload, and on the spatially steered beamvector (the Applebaum solution). The resulting procedure is a valuable alternative to the classic covariance loading methods, as no further matrix inversion is required and the noise estimate is derived from the existing hardware. Some results corresponding to a data relay satellite (DRS) scenario are reported.
Two important questions in array signal processing are addressed in this paper: the choice between the data matrix and the autocorrelation matrix, and the recursive implementation of subspace DOA methods. The first question is discussed in light of the proposed class of recursive algorithms. These new algorithms are easily implementable and have a high degree of parallelism, which makes them suitable for on-line implementations. Algorithms for the recursive implementation of the eigendecomposition (ED) of the autocorrelation matrix and the SVD of the data matrix are described, and the ED/SVD trade-off is discussed.
The Karhunen-Loeve transform (KLT) is an optimal method for encoding images in the mean square error (MSE) sense. Only one spatially adaptive method is known to have been reported in the literature. Many other suboptimal encoding methods have been developed to avoid the problems encountered in applying the KLT; such methods perform worse than the Karhunen-Loeve scheme in both MSE and visual quality, although some of them are quite efficient for first-order Markov processes. We have modified the KLT scheme so that the number of eigenvectors assigned to code each subimage is determined according to a specified rule: depending on the amount of detail or variation among the pixels, more or fewer eigenvectors are allotted to reconstruct the subimage to a given mean square error. Following this idea, an adaptive algorithm in the spatial domain is presented that provides approximately a 30% increase in compression compared to the nonadaptive KLT scheme. Image processing examples are given and compared against the nonadaptive KL transform. On the other hand, in some situations the covariance matrix is nearly singular, making some eigenvalues equal to zero and the corresponding eigenvectors completely undetermined. We present an algorithm that mitigates the problem of ill-conditioned eigenvectors by computing the eigenvector corresponding to the largest eigenvalue of successive covariance matrices; with this approach, computational errors and ill-conditioning are minimized. This algorithm can be used with the adaptive KLT scheme, thus providing a new and efficient coding scheme. Finally, a computer experiment is presented showing that, for some classes of images, the same code set may be used, thus avoiding the computational burden normally associated with KLT encoding.
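A sketch of the adaptive allocation rule: for each block, keep the smallest number of leading KLT coefficients whose discarded energy meets the MSE budget, so detailed blocks receive more eigenvectors. The tail-energy rule shown is one plausible reading of the 'specified rule' above.

```python
import numpy as np

def adaptive_klt(blocks, target_mse):
    """blocks: (n_blocks, d) array of vectorized subimages."""
    X = blocks - blocks.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    vecs = vecs[:, ::-1]                        # descending eigenvalue order
    coded = []
    for x in X:
        y = vecs.T @ x                          # KLT coefficients
        # tail[k] = energy discarded if only the first k coefficients are kept
        tail = np.concatenate([np.cumsum((y ** 2)[::-1])[::-1], [0.0]])
        k = int(np.argmax(tail / x.size <= target_mse))   # smallest k in budget
        coded.append((k, y[:k]))
    return vecs, coded
```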
A filtering method for sampling the lowpass equivalent of a bandpass analog signal is introduced. This input is sampled at exactly twice the sampling rate of the in-phase and quadrature components provided at the output. The sampling frequency determination and the lowpass decimator filter design are straightforward. The practical implementation of the sampling method is accomplished by the polyphase realization of the filter; this implementation is particularly suitable for linear phase FIR designs.
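A plain (non-polyphase) rendering of the idea, with an assumed mixer frequency: mix the bandpass samples to baseband, lowpass-filter with a linear-phase FIR, and decimate by 2, so the I and Q outputs emerge at half the input rate. A polyphase realization splits the filter into even/odd branches so that each output sample costs only half the multiplications.

```python
import numpy as np
from scipy.signal import firwin, lfilter

def iq_downconvert(x, fs, fc, num_taps=63):
    """Quadrature demodulation sketch: I/Q at fs/2 from real samples at fs."""
    n = np.arange(len(x))
    bb = x * np.exp(-2j * np.pi * fc / fs * n)   # complex mix to baseband
    h = firwin(num_taps, 0.45)                   # linear-phase FIR lowpass
    y = lfilter(h, 1.0, bb)[::2]                 # filter, then decimate by 2
    return y.real, y.imag                        # in-phase and quadrature
```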
In the optimization formulation, once the known autocorrelations are fixed as constraints, the objective functional completely characterizes the corresponding method of spectral estimation. After observing that this functional involves a sort of spectral flatness maximization, the relationship between the form of its integrand and the salient trends of the various methods is pointed out. This serves as a basis to propose a generalized optimization approach that encompasses some classical estimators and originates new ones.
This paper begins with a classification of power spectral estimates from the point of view of filter bank analysis. To reinforce the interest of such a classification, a review of the main and most familiar procedures for spectral estimation is included. Starting from the most general approach, due to Frost, we indicate why it is not appropriate to classify Capon's maximum likelihood method as a low-resolution procedure.
The second part of the paper deals with a modification of the so-called maximum likelihood estimate in order to obtain the resolution that corresponds to a power density estimate. The modification provided here consists of a bandwidth normalization. The resulting estimate shows how the area of application of ML filters (as the data-dependent filters reported some years ago by Capon and Lacoss might be called) is considerably extended, yielding a reliable procedure for estimating both power levels and power density levels.
We also explain in this paper how to obtain cross-spectral estimates from ML filters. From our point of view, this approach is the only one among currently reported methods that achieves quality levels adequate to compete with classical Fourier analyzers.
In addition, the interesting ideas of Pisarenko about power function estimates can also be applied to the new approach presented here. The resulting family of power function estimates can further improve resolution up to the quality provided by SVD-like methods, while avoiding the computational burden associated with them.
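A compact sketch of the two estimates discussed above: per frequency, the Capon/ML power level 1/(e^H R^{-1} e) and the bandwidth-normalized power density (e^H R^{-1} e)/(e^H R^{-2} e). The normalization shown is the standard one and may differ in detail from the paper's derivation.

```python
import numpy as np

def ml_filter_estimates(R, freqs):
    """Return (power level, normalized power density) per frequency,
    with frequencies in cycles per sample."""
    m = R.shape[0]
    Ri = np.linalg.inv(R)
    out = []
    for f in freqs:
        e = np.exp(2j * np.pi * f * np.arange(m))
        q1 = np.real(e.conj() @ Ri @ e)          # e^H R^-1 e
        q2 = np.real(e.conj() @ Ri @ Ri @ e)     # e^H R^-2 e
        out.append((1.0 / q1, q1 / q2))
    return np.array(out)
```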