Automatic summary generation for sports video content has been the object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that yield rather limited results, owing to the complexity of the problem and to the low capability of these descriptors to represent semantic content. In this paper, a new approach for the automatic generation of highlights summaries of soccer videos using audio-visual descriptors is presented. The approach is based on segmenting the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest is the use of audio information, which adds robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and then suitably combined to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting the shots with the highest interest according to the user's specifications and the relevance measures. A variety of results obtained with real soccer video sequences prove the validity of the approach.
Raventos, A.; Quijada, R.; Torres, L.; Tarres, F.; Carasusan, E.; Farre, D. International Multi-Conference on Systems, Signals and Devices p. 1-6 DOI: 10.1109/SSD.2014.6808845 Presented: 2014-02 Conference paper
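The shot-selection stage described in the abstract above can be sketched as follows. This is an illustrative assumption of how the selection might work, not the paper's implementation: each shot carries a relevance score from the fused audio-visual descriptors, and the highest-scoring shots are kept up to a user-specified target duration.

```python
def select_summary(shots, target_duration):
    """Greedy highlight selection (hypothetical sketch).

    shots: list of (start_s, end_s, relevance) tuples, where relevance
    would come from the fused audio-visual relevance measures.
    target_duration: maximum total length of the summary, in seconds.
    """
    # Rank shots by relevance, highest first.
    ranked = sorted(shots, key=lambda s: s[2], reverse=True)
    summary, total = [], 0.0
    for start, end, score in ranked:
        duration = end - start
        # Keep a shot only if it still fits in the requested duration.
        if total + duration <= target_duration:
            summary.append((start, end, score))
            total += duration
    # Restore chronological order for playback.
    summary.sort(key=lambda s: s[0])
    return summary
```

A knapsack-style optimization could replace the greedy pass, but the greedy form already reflects the "select the shots with the highest interest" criterion stated in the abstract.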
Automatic generation of sports highlights from recorded audiovisual content has been the object of great interest in recent years. The problem is particularly important in the production of highlight videos for second- and third-division leagues, where the quantity of raw material is significant and contains no manual annotations. Many approaches are based mostly on the analysis of the video and disregard the important information provided by the audio track. In this paper, a new approach that combines audio and video descriptors for automatic soccer highlights generation is proposed. The approach is based on segmenting the video content into shots that are further analyzed to determine their relevance and interest. These video shots are scored by fusing different audio and video features. The paper mainly emphasizes the importance of the audio detectors, which play a key role in the analysis and scoring of the video shots. Specifically, a new algorithm for referee's whistle detection is proposed. The algorithm has proven to be very robust and efficiently discriminates professional whistles from other types of sounds, such as crowd cheering, musical instruments, etc. Several results obtained with real soccer video sequences prove the validity of the proposed audio and video fusion scheme.
Quijada, R.; Raventos, A.; Tarres, F.; Dura, R.; Hidalgo, S. International Multi-Conference on Systems, Signals and Devices p. 1-4 DOI: 10.1109/SSD.2014.6808796 Presented: 2014-02 Conference paper
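The abstract above does not give the details of the whistle detector; a minimal frequency-domain sketch of the general idea follows, assuming the professional whistle concentrates its energy in a narrow band (the 3-4.5 kHz band and the 0.6 ratio threshold are illustrative assumptions, not the paper's values).

```python
import numpy as np

def whistle_frames(signal, sr, frame_len=2048, band=(3000.0, 4500.0),
                   ratio_thresh=0.6):
    """Flag frames whose spectral energy concentrates in the whistle band.

    Illustrative sketch only: band edges and threshold are assumed,
    not taken from the paper's algorithm.
    """
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
    in_band_mask = (freqs >= band[0]) & (freqs <= band[1])
    flags = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len] * window
        spec = np.abs(np.fft.rfft(frame)) ** 2
        # Fraction of the frame's energy that falls inside the whistle band.
        ratio = spec[in_band_mask].sum() / (spec.sum() + 1e-12)
        flags.append(bool(ratio > ratio_thresh))
    return flags
```

Broadband sounds such as crowd noise spread their energy across the whole spectrum, so their in-band ratio stays well below the threshold, which is the discrimination property the abstract highlights.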
IC reverse engineering is the process of analyzing an integrated circuit to obtain information about its design, materials, logic circuitry, functionality, performance and other relevant features. The increasing complexity of microchips, which use a greater number of layers and logic gates, makes this process unaffordable with traditional methods that rely on human inspection and analysis. Digital image processing is therefore a fruitful field for automation. In this paper, a system for circuitry extraction, analysis and presentation is described. It is divided into three blocks: 2D Image Tiling, Logic Gate Localization & Recognition, and Microchip Navigator. The paper presents an overview of the complete system and mainly describes the image processing algorithms applied in the different blocks, such as image stitching, a customized Scale Invariant Feature Transform (SIFT), and logic gate localization & recognition.
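As a rough illustration of the logic gate localization step, the sketch below uses plain normalized cross-correlation to find a known gate template in a die image. This is a simplified stand-in for the customized-SIFT localization the paper describes, with hypothetical function names; it is not the paper's method.

```python
import numpy as np

def locate_template(image, template):
    """Exhaustive normalized cross-correlation search (illustrative only).

    Returns the (row, col) of the best match and its correlation score.
    A real system would use feature-based matching (e.g. the paper's
    customized SIFT) for scale and rotation robustness.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum()) + 1e-12
    best, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            score = (p * t).sum() / (np.sqrt((p ** 2).sum()) * tnorm + 1e-12)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

The exhaustive search is quadratic in image size; it only serves to make the localization idea concrete.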
Shot detection in sports video sequences has been of great interest in recent years. In this paper, a new approach to detect onboard-camera shots in compressed Formula 1 video sequences is presented. To that end, and after studying the characteristics of this shot type, a technique based on a threshold comparison between a high-motion area and a stationary one has been devised. Efficient computation is achieved by directly decoding the motion vectors in the MPEG stream. The shot detection process is carried out through a frame-by-frame hysteresis thresholding analysis. To enhance the results, an SVD shot boundary detector is applied. Promising results are presented that show the validity of the approach.
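The frame-by-frame hysteresis thresholding mentioned above can be sketched as follows. The per-frame ratio between the high-motion and stationary areas, and the two threshold values, are illustrative; the paper derives the motion measure directly from the MPEG motion vectors.

```python
def detect_onboard_shots(ratios, t_high, t_low):
    """Hysteresis thresholding over per-frame motion ratios (sketch).

    A shot opens when the ratio rises above t_high and closes only when
    it drops below t_low, so brief dips do not split a shot.
    Returns (first_frame, last_frame) pairs.
    """
    inside, shots, start = False, [], 0
    for i, r in enumerate(ratios):
        if not inside and r >= t_high:
            inside, start = True, i      # shot begins
        elif inside and r < t_low:
            inside = False               # shot ends
            shots.append((start, i - 1))
    if inside:                           # shot still open at sequence end
        shots.append((start, len(ratios) - 1))
    return shots
```

The two-threshold scheme is what gives the detector stability against frame-level noise in the motion measure: a single threshold would toggle on every fluctuation around it.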
This deliverable identifies all the software demonstrators implemented to date. For each demonstrator, its location in the SVN repository is specified and a brief manual is provided for a correct understanding of its functions and usage.
This deliverable provides an overview of the MPEG-7 video descriptors, covering both those selected for implementation in document E1.3.1 and those that make up the rest of the standard.
This deliverable specifies the different methodologies for analyzing the ambient sound and the commentary of sports events, as well as for extracting the metadata that identify them. As a result, a reduced set of metadata will be available that will be annotated automatically, and whose algorithms will be designed, implemented and evaluated in this activity.
This deliverable reviews the state of the art in automatic metadata annotation technology and its use for generating summaries of video sequences. The objective is to produce a technical report that can serve as a roadmap during the execution of the project and as the technological basis for the developments aimed at the automatic generation of content.