Scientific and technological production

1 to 50 of 86 results
  • Tuning and hybrid parallelization of a genetic-based multi-point statistics simulation code

     Peredo, Oscar; Ortiz, Julian; Herrero Zaragoza, José Ramón; Samaniego, Cristobal Augusto
    Parallel computing
    Date of publication: 2014-05
    Journal article

    One of the main difficulties in using multi-point statistical (MPS) simulation based on annealing techniques or genetic algorithms is the excessive amount of time and memory that must be spent in order to achieve convergence. In this work we propose code optimizations and parallelization schemes over a genetic-based MPS code with the aim of speeding up the execution. The code optimizations involve reducing cache misses in array accesses, avoiding branch instructions, and increasing the locality of the accessed data. The hybrid parallelization scheme involves a fine-grain parallelization of loops using a shared-memory programming model (OpenMP) and a coarse-grain distribution of load among several computational nodes using a distributed-memory programming model (MPI). Convergence, execution time and speed-up results are presented using 2D training images of sizes 100 × 100 × 1 and 1000 × 1000 × 1 on a distributed-shared memory supercomputing facility.
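
    A minimal sketch of the hybrid scheme described above, under illustrative assumptions (a toy population-based evaluation; POP_SIZE, GENOME and evaluate() are hypothetical names, not taken from the paper's code): MPI distributes work among nodes at coarse grain, while OpenMP parallelizes the evaluation loop within each node at fine grain.

        /* Hybrid MPI + OpenMP skeleton: coarse-grain distribution of work
         * across nodes, fine-grain loop parallelism within each node. */
        #include <mpi.h>
        #include <omp.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define POP_SIZE 1024   /* candidates per node (illustrative) */
        #define GENOME   4096   /* candidate length (illustrative)    */

        /* Toy objective; the real code scores multi-point statistics
         * against a training image. */
        static double evaluate(const double *g, int n)
        {
            double acc = 0.0;
            for (int i = 0; i < n; i++)
                acc += g[i];
            return acc;
        }

        int main(int argc, char **argv)
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double *pop = malloc((size_t)POP_SIZE * GENOME * sizeof(double));
            double *fit = malloc((size_t)POP_SIZE * sizeof(double));
            for (size_t i = 0; i < (size_t)POP_SIZE * GENOME; i++)
                pop[i] = (double)rank;                     /* dummy data */

            /* Fine grain: OpenMP evaluates the local candidates in parallel. */
            #pragma omp parallel for schedule(static)
            for (int c = 0; c < POP_SIZE; c++)
                fit[c] = evaluate(&pop[(size_t)c * GENOME], GENOME);

            /* Coarse grain: nodes agree on the best fitness of this generation. */
            double local_best = fit[0], global_best;
            for (int c = 1; c < POP_SIZE; c++)
                if (fit[c] > local_best) local_best = fit[c];
            MPI_Allreduce(&local_best, &global_best, 1, MPI_DOUBLE, MPI_MAX,
                          MPI_COMM_WORLD);

            if (rank == 0)
                printf("best fitness: %f on %d nodes\n", global_best, size);

            free(pop); free(fit);
            MPI_Finalize();
            return 0;
        }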

  • Level-3 Cholesky factorization routines improve performance of many Cholesky algorithms

     Gustavson, Fred G.; Wasniewski, Jerzy; Dongarra, Jack J.; Herrero Zaragoza, José Ramón; Langou, Julien
    ACM transactions on mathematical software
    Date of publication: 2013-02
    Journal article

    Four routines called DPOTF3i, i = a, b, c, d, are presented. DPOTF3i are a novel type of level-3 BLAS for use by BPF (Blocked Packed Format) Cholesky factorization and the LAPACK routine DPOTRF. The performance of the DPOTF3i routines is still increasing when the performance of the Level-2 LAPACK routine DPOTF2 starts decreasing. This is our main result; because it allows a larger block size nb, it also means that DGEMM, DSYRK, and DTRSM performance increases. The four DPOTF3i routines use simple register blocking. Different platforms have different numbers of registers, so our four routines have different register blocking sizes. BPF is introduced. LAPACK routines for POTRF and PPTRF using BPF instead of full and packed format are shown to be trivial modifications of the LAPACK POTRF source codes. We call these codes BPTRF. There are two variants of BPF: lower and upper. Upper BPF is "identical" to Square Block Packed Format (SBPF). "LAPACK" implementations on multicore processors use SBPF...
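
    The point above is that the kernel applied to the diagonal block limits the block size nb of the whole factorization. The sketch below is a minimal blocked right-looking Cholesky, assuming plain row-major full storage and a CBLAS library (it does not use BPF or the DPOTF3i codes themselves); chol_diag() occupies the DPOTF2/DPOTF3i slot, and a faster diagonal kernel allows a larger nb, which shifts more of the work into the DTRSM and DSYRK updates.

        /* Blocked right-looking Cholesky (lower triangular, row-major).
         * chol_diag() plays the role of DPOTF2/DPOTF3i on the diagonal
         * block; the level-3 work is done by DTRSM and DSYRK.
         * Build against a CBLAS implementation, e.g.: cc -O2 chol.c -lcblas -lm */
        #include <cblas.h>
        #include <math.h>
        #include <stddef.h>

        /* Unblocked Cholesky of an n x n block with leading dimension lda. */
        static int chol_diag(double *a, int n, int lda)
        {
            for (int j = 0; j < n; j++) {
                double d = a[j * lda + j];
                for (int k = 0; k < j; k++)
                    d -= a[j * lda + k] * a[j * lda + k];
                if (d <= 0.0) return j + 1;          /* not positive definite */
                d = sqrt(d);
                a[j * lda + j] = d;
                for (int i = j + 1; i < n; i++) {
                    double s = a[i * lda + j];
                    for (int k = 0; k < j; k++)
                        s -= a[i * lda + k] * a[j * lda + k];
                    a[i * lda + j] = s / d;
                }
            }
            return 0;
        }

        /* Factor the n x n matrix A in place with block size nb. */
        int chol_blocked(double *A, int n, int nb)
        {
            for (int k = 0; k < n; k += nb) {
                int kb = (k + nb <= n) ? nb : n - k;   /* current block size */
                double *Akk = A + (size_t)k * n + k;

                int info = chol_diag(Akk, kb, n);      /* DPOTF2/DPOTF3i slot */
                if (info) return k + info;

                int m = n - k - kb;                    /* rows below the block */
                if (m > 0) {
                    double *Aik = A + (size_t)(k + kb) * n + k;        /* panel    */
                    double *Att = A + (size_t)(k + kb) * n + (k + kb); /* trailing */

                    /* Panel update:    Aik := Aik * Akk^{-T}   (DTRSM) */
                    cblas_dtrsm(CblasRowMajor, CblasRight, CblasLower,
                                CblasTrans, CblasNonUnit,
                                m, kb, 1.0, Akk, n, Aik, n);
                    /* Trailing update: Att := Att - Aik * Aik^T (DSYRK) */
                    cblas_dsyrk(CblasRowMajor, CblasLower, CblasNoTrans,
                                m, kb, -1.0, Aik, n, 1.0, Att, n);
                }
            }
            return 0;
        }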

  • Graphics processing unit computing and exploitation of hardware accelerators

     Amor López, Margarita; Doallo Biempica, Ramón; Fraguela Rodríguez, Basilio Bernardo; Herrero Zaragoza, José Ramón; Quintana Ortí, Enrique Salvador; Strzodka, Robert
    Concurrency and computation: practice and experience
    Date of publication: 2013-06-10
    Journal article

    This special issue contributes to this promising field with extended and carefully reviewed versions of selected papers from two workshops, namely the 2nd Minisymposium on GPU Computing, which was held as part of the 9th International Conference on Parallel Processing and Applied Mathematics (PPAM 2011) in Torun (Poland), and the Workshop on Exploitation of Hardware Accelerators (WEHA 2011), which was held in conjunction with the 2011 International Conference on High Performance Computing & Simulation in Istanbul (Turkey).

  • Square block code for positive definite symmetric Cholesky band routines

     Gustavson, Fred G.; Herrero Zaragoza, José Ramón; Morancho Llena, Enrique
    Workshop on Numerical Algorithms on Hybrid Architectures
    Presentation's date: 2013-09-09
    Presentation of work at congresses

  • Evaluación del trabajo Final de Grado  Open access

     Sanchez Carracedo, Fermin; Climent Vilaro, Juan; Corbalan Gonzalez, Julita; Fonseca Casas, Pau; Garcia Almiñana, Jordi; Herrero Zaragoza, José Ramón; Llinas Audet, Francisco Javier; Rodriguez Hontoria, Horacio; Sancho Samsó, Maria Ribera
    Jornadas de Enseñanza Universitaria de la Informática
    Presentation's date: 2013-07
    Presentation of work at congresses

    Final-year projects (Proyectos de Fin de Carrera, PFC) have traditionally been assessed on the basis of a written report and a public presentation. This assessment is generally carried out by a panel of several lecturers, which judges the project as a whole from the submitted documentation and its public presentation. Schools generally lack clear and precise criteria for setting the final grade, so each panel relies on its own previous experience to decide the grade of each project. In the new engineering curricula, the Bachelor's Thesis (Trabajo de Fin de Grado, TFG) replaces the former PFC. The assessment of the TFG must explicitly consider both specific and generic competences, and clear criteria on how to assess them are needed. To move in this direction, the Ministry of Science and Innovation and the Agency for the Quality of the Catalan University System funded, in 2008 and 2009, the project "Guide for the assessment of competences in Bachelor's and Master's Theses in Engineering". This guide is, in fact, a guide to help each school or degree define its own TFG assessment procedure. This work presents an implementation of the proposals contained in the guide and defines a methodology for assessing the TFG based on the competences addressed in the Bachelor's degree in Informatics Engineering at the Facultat d'Informàtica de Barcelona. The methodology can easily be replicated or adapted by other schools and degrees, which may make it easier for them to produce their own TFG assessment guide.

  • On new computational local orders of convergence

     Grau Sanchez, Miguel; Noguera Batlle, Miguel; Grau Gotés, Mª Ángela; Herrero Zaragoza, José Ramón
    Applied mathematics letters
    Date of publication: 2012-12-01
    Journal article

  • Organización y gestión de una titulación del EEES  Open access

     Sanchez Carracedo, Fermin; Sancho Samsó, Maria Ribera; Herrero Zaragoza, José Ramón
    Jornadas de Enseñanza Universitaria de la Informática
    Presentation's date: 2011
    Presentation of work at congresses

    The organization of the new EHEA bachelor's and master's degrees must guarantee that students acquire the technical and transversal competences defined by the degree. To this end, there must be very precise coordination mechanisms between courses that ensure the acquisition of these competences and guarantee that the organization of the studies allows continuous assessment and active learning. To achieve these goals, it is essential to provide teaching staff with the appropriate tools to design and deploy their courses in line with the objectives of the school.

  • Special Issue: GPU computing

     Herrero Zaragoza, José Ramón; Quintana Ortí, Enrique Salvador; Strzodka, Robert
    Concurrency and computation. Practice and experience
    Date of publication: 2011-05
    Journal article

  • Estudio y evaluación de formatos de almacenamiento para matrices dispersas en arquitecturas multi-core  Open access

     Pasarin, Marc; Otero Calviño, Beatriz; Herrero Zaragoza, José Ramón
    Date: 2011-11-01
    Report

  • Operación stencil en CUDA

     Garces, Bernardo; Herrero Zaragoza, José Ramón; Otero Calviño, Beatriz
    Date: 2011-11-29
    Report

  • Desarrollo de Algoritmos Escalables y Tolerantes a Fallos Basados en Métodos Probabilisticos

     Herrero Zaragoza, José Ramón; Otero Calviño, Beatriz; Acebrón de Torres, Juan A.; Rodriguez Rozas, Angel; Gonçalves, Pedro
    Participation in a competitive project

  • New level-3 BLAS kernels for Cholesky factorization

     Gustavson, Fred G.; Wasniewski, Jerzy; Herrero Zaragoza, José Ramón
    International Conference on Parallel Processing and Applied Mathematics
    Presentation's date: 2011
    Presentation of work at congresses

  • Sistemes operatius

     Herrero Zaragoza, José Ramón; Jové Lagunas, Teodor; Marzo Lázaro, José Luis; Morancho Llena, Enrique; Royo Valles, Maria Dolores
    Date of publication: 2010-09-01
    Book

  • Sistemas operativos

     Herrero Zaragoza, José Ramón; Jové Lagunas, Teodor; Marzo Lázaro, José Luis; Morancho Llena, Enrique; Royo Valles, Maria Dolores
    Date of publication: 2010-09-01
    Book

  • Parallelizing dense and banded linear algebra libraries using SMPSs

     Badia Sala, Rosa Maria; Herrero Zaragoza, José Ramón; Labarta Mancho, Jesus Jose; Perez, Josep M.; Quintana Ortí, Enrique Salvador; Quintana-Ortí, Gregorio
    Concurrency and Computation: Practice and Experience
    Date of publication: 2009-12-25
    Journal article

    The promise of future many-core processors, with hundreds of threads running concurrently, has led the developers of linear algebra libraries to rethink their design in order to extract more parallelism, further exploit data locality, attain better load balance, and pay careful attention to the critical path of computation. In this paper we describe how existing serial libraries such as (C)LAPACK and FLAME can be easily parallelized using the SMPSs tools, consisting of a few OpenMP-like pragmas and a runtime system. In the LAPACK case, this usually requires the development of blocked algorithms for simple BLAS-level operations, which expose concurrency at a finer grain. For better performance, our experimental results indicate that the column-major order employed by this library needs to be abandoned in favour of a block data layout. This will require a deeper rewrite of LAPACK or, alternatively, a dynamic conversion of the storage pattern at run time. The parallelization of FLAME routines using SMPSs is simpler, as this library includes blocked algorithms (or algorithms-by-blocks in the FLAME argot) for most operations and storage-by-blocks (or block data layout) is already in place.
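
    The sketch below illustrates the algorithm-by-blocks idea, written with standard OpenMP task dependences rather than the SMPSs pragmas themselves (the dependence-driven task graph is analogous). Tiles are stored contiguously (block data layout, as argued above), the first element of each tile serves as the dependence token, and the kernels are naive reference versions rather than tuned BLAS; NB and the helper names are illustrative.

        /* Tiled Cholesky "by blocks": every kernel call becomes a task and
         * the runtime orders tasks from the declared in/inout dependences.
         * A is a p x p grid of pointers to contiguous NB x NB tiles (block
         * data layout); tile (i,j) with i >= j holds the lower factor. */
        #include <math.h>
        #include <omp.h>

        #define NB 64                                 /* tile size (illustrative) */
        #define T(A, p, i, j) ((A)[(i) * (p) + (j)])  /* tile (i,j) of the grid   */

        static void potrf_t(double *a) {              /* a := lower Cholesky of a */
            for (int j = 0; j < NB; j++) {
                for (int k = 0; k < j; k++) a[j*NB+j] -= a[j*NB+k] * a[j*NB+k];
                a[j*NB+j] = sqrt(a[j*NB+j]);
                for (int i = j + 1; i < NB; i++) {
                    for (int k = 0; k < j; k++) a[i*NB+j] -= a[i*NB+k] * a[j*NB+k];
                    a[i*NB+j] /= a[j*NB+j];
                }
            }
        }
        static void trsm_t(const double *l, double *a) {   /* a := a * l^{-T} */
            for (int i = 0; i < NB; i++)
                for (int j = 0; j < NB; j++) {
                    for (int k = 0; k < j; k++) a[i*NB+j] -= a[i*NB+k] * l[j*NB+k];
                    a[i*NB+j] /= l[j*NB+j];
                }
        }
        static void gemm_t(const double *a, const double *b, double *c) { /* c -= a*b^T */
            for (int i = 0; i < NB; i++)
                for (int j = 0; j < NB; j++)
                    for (int k = 0; k < NB; k++) c[i*NB+j] -= a[i*NB+k] * b[j*NB+k];
        }

        void chol_by_blocks(double **A, int p)
        {
            #pragma omp parallel
            #pragma omp single
            for (int k = 0; k < p; k++) {
                double *akk = T(A, p, k, k);
                #pragma omp task depend(inout: akk[0])
                potrf_t(akk);
                for (int i = k + 1; i < p; i++) {
                    double *aik = T(A, p, i, k);
                    #pragma omp task depend(in: akk[0]) depend(inout: aik[0])
                    trsm_t(akk, aik);
                }
                for (int i = k + 1; i < p; i++) {
                    double *aik = T(A, p, i, k);
                    double *aii = T(A, p, i, i);
                    /* SYRK case (full tile updated for simplicity). */
                    #pragma omp task depend(in: aik[0]) depend(inout: aii[0])
                    gemm_t(aik, aik, aii);
                    for (int j = k + 1; j < i; j++) {
                        double *ajk = T(A, p, j, k);
                        double *aij = T(A, p, i, j);
                        #pragma omp task depend(in: aik[0], ajk[0]) depend(inout: aij[0])
                        gemm_t(aik, ajk, aij);
                    }
                }
            }
        }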

  • MPEXPAR: MODELS DE PROGRAMACIO I ENTORNS D'EXECUCIO PARAL·LELS

     Gonzalez Tallada, Marc; Sirvent Pardell, Raül; Guitart Fernández, Jordi; Carrera Perez, David; Torres Viñals, Jordi; Badia Sala, Rosa Maria; Alonso López, Javier; Gil Gómez, Maria Luisa; Navarro Mas, Nacho; Martorell Bofill, Xavier; Cortes Rossello, Antonio; Corbalan Gonzalez, Julita; Costa Prats, Juan Jose; Farreras Esclusa, Montserrat; Herrero Zaragoza, José Ramón; Becerra Fontal, Yolanda; Nou Castell, Ramon; Tejedor Saavedra, Enric; Labarta Mancho, Jesus Jose; Ayguade Parra, Eduard
    Participation in a competitive project

  • Hypermatrix oriented supernode amalgamation

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Journal of supercomputing
    Date of publication: 2008-10
    Journal article

  • Sistemes Operatius. Quadern de Laboratori

     Pajuelo González, Manuel Alejandro; Lopez Alvarez, David; Millan Vizuete, Amador; Heredero Lazaro, Ana M.; Durán, Alex; Herrero Zaragoza, José Ramón; Verdu Mula, Javier; Becerra Fontal, Yolanda; Morancho Llena, Enrique
    Date of publication: 2008-07
    Book

  • Implantación de la Evaluación Continuada en SO

     Lopez Alvarez, David; Pajuelo González, Manuel Alejandro; Herrero Zaragoza, José Ramón; Duran González, Alejandro
    Date: 2007-04
    Report

  • Sistemes Operatius. Conceptes Bàsics

     Duran González, Alejandro; Herrero Zaragoza, José Ramón; Pajuelo González, Manuel Alejandro; Lopez Alvarez, David
    Date of publication: 2007-09
    Book

  • Resultats de la implantació de l'avaluació continuada a l'assignatura Sistemes Operatius

     Lopez Alvarez, David; Pajuelo González, Manuel Alejandro; Herrero Zaragoza, José Ramón; Duran González, Alejandro
    Jornades de Docència del Departament d'Arquitectura de Computadors. 10 Anys de Jornades
    Presentation of work at congresses

  • Recepció, classificació i resposta automàtica de pràctiques i treballs

     Herrero Zaragoza, José Ramón
    Jornades de Docència del Departament d'Arquitectura de Computadors. 10 Anys de Jornades
    Presentation of work at congresses

  • Elaboració de pàgines WWW per a les assignatures

     Artiaga Amouroux, Ernest; Herrero Zaragoza, José Ramón
    Jornades de Docència del Departament d'Arquitectura de Computadors. 10 Anys de Jornades
    Presentation of work at congresses

  • Evaluacion continuada sin morir en el intento

     Lopez Alvarez, David; Pajuelo González, Manuel Alejandro; Herrero Zaragoza, José Ramón; Duran González, Alejandro
    Jornadas de Enseñanza Universitaria de la Informática
    Presentation of work at congresses

  • New Data Structures for Matrices and Specialized Inner Kernels: Low overhead for High Performance

     Herrero Zaragoza, José Ramón
    European Signal Processing Conference
    Presentation's date: 2007-09-03
    Presentation of work at congresses

  • New Data Structures for Matrices and Specialized Inner Kernels: Low Overhead for High Performance

     Herrero Zaragoza, José Ramón
    7th International Conference PPAM 2007. Parallel Processing and Applied Mathematics
    Presentation of work at congresses

  • Analysis of a sparse hypermatrix Cholesky with fixed-sized blocking

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Applicable algebra in engineering communication and computing
    Date of publication: 2007-05
    Journal article

  • Exploiting computer resources for fast nearest neighbor classification

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Pattern analysis and applications
    Date of publication: 2007-10
    Journal article

  • Sparse Hypermatrix Cholesky: Customization for High Performance

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    IAENG international journal of applied mathematics
    Date of publication: 2007-02
    Journal article

  • New Data Structures for Matrices and Specialized Inner Kernels: Low Overhead for High Performance

     Herrero Zaragoza, José Ramón
    Lecture notes in computer science
    Date of publication: 2007-09
    Journal article

  • Computación de altas prestaciones V

     Martorell Bofill, Xavier; Valero Cortes, Mateo; Gil Gómez, Maria Luisa; Ramirez Bellido, Alejandro; Alvarez Martinez, Carlos; Torres Viñals, Jordi; Herrero Zaragoza, José Ramón; Guitart Fernández, Jordi; Morancho Llena, Enrique
    Participation in a competitive project

  • A Proposal for Continuous Assessment at Low Cost

     Lopez Alvarez, David; Herrero Zaragoza, José Ramón; Pajuelo González, Manuel Alejandro; Duran González, Alejandro
    37th Annual Frontiers in Education Conference Program
    Presentation of work at congresses

  • A framework for efficient execution of matrix computations  Open access

     Herrero Zaragoza, José Ramón
    Defense's date: 2006-07-07
    Department of Computer Architecture, Universitat Politècnica de Catalunya
    Theses

    Matrix computations lie at the heart of most scientific computational tasks. The solution of linear systems of equations is a very frequent operation in many fields in science, engineering, surveying, physics and others. Other matrix operations occur frequently in many other fields such as pattern recognition and classification, or multimedia applications. Therefore, it is important to perform matrix operations efficiently. The work in this thesis focuses on the efficient execution on commodity processors of matrix operations which arise frequently in different fields. We study some important operations which appear in the solution of real world problems: some sparse and dense linear algebra codes and a classification algorithm. In particular, we focus our attention on the efficient execution of the following operations: sparse Cholesky factorization; dense matrix multiplication; dense Cholesky factorization; and Nearest Neighbor Classification.

    A lot of research has been conducted on the efficient parallelization of numerical algorithms. However, the efficiency of a parallel algorithm depends ultimately on the performance obtained from the computations performed on each node. The work presented in this thesis focuses on the sequential execution on a single processor.

    There exist a number of data structures for sparse computations which can be used in order to avoid the storage of and computation on zero elements. We work with a hierarchical data structure known as hypermatrix. A matrix is subdivided recursively an arbitrary number of times. Several pointer matrices are used to store the location of submatrices at each level. The last level consists of data submatrices which are dealt with as dense submatrices. When the block size of these dense submatrices is small, the number of zeros can be greatly reduced. However, the performance obtained from BLAS3 routines drops heavily. Consequently, there is a trade-off in the size of data submatrices used for a sparse Cholesky factorization with the hypermatrix scheme. Our goal is that of reducing the overhead introduced by the unnecessary operation on zeros when a hypermatrix data structure is used to produce a sparse Cholesky factorization. In this work we study several techniques for reducing such overhead in order to obtain high performance.

    One of our goals is the creation of codes which work efficiently on different platforms when operating on dense matrices. To obtain high performance, the resources offered by the CPU must be properly utilized. At the same time, the memory hierarchy must be exploited to tolerate increasing memory latencies. To achieve the former, we produce inner kernels which use the CPU very efficiently. To achieve the latter, we investigate nonlinear data layouts. Such data formats can contribute to the effective use of the memory system.

    The use of highly optimized inner kernels is of paramount importance for obtaining efficient numerical algorithms. Often, such kernels are created by hand. However, we want to create efficient inner kernels for a variety of processors using a general approach and avoiding hand-made codification in assembly language. In this work, we present an alternative way to produce efficient kernels automatically, based on a set of simple codes written in a high level language, which can be parameterized at compilation time. The advantage of our method lies in the ability to generate very efficient inner kernels by means of a good compiler. Working on regular codes for small matrices, most of the compilers we used on different platforms created very efficient inner kernels for matrix multiplication. Using the resulting kernels we have been able to produce high performance sparse and dense linear algebra codes on a variety of platforms.

    In this work we also show that techniques used in linear algebra codes can be useful in other fields. We present the work we have done in the optimization of the Nearest Neighbor classification, focusing on the speed of the classification process.

    Tuning several codes for different problems and machines can become a heavy and unbearable task. For this reason we have developed an environment for the development and automatic benchmarking of codes, which is presented in this thesis. As a practical result of this work, we have been able to create efficient codes for several matrix operations on a variety of platforms. Our codes are highly competitive with other state-of-the-art codes for some problems.
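
    A minimal sketch of the hypermatrix layout described above: interior levels are small matrices of pointers, leaves are dense data submatrices, and NULL pointers stand for zero blocks, so neither storage nor arithmetic is spent on them. The fan-out and data-block size are illustrative values, not the ones tuned in the thesis.

        /* Hypermatrix sketch: interior nodes are fanout x fanout matrices of
         * pointers, leaves are dense DB x DB data submatrices, and NULL
         * pointers represent zero blocks.  The top level must cover a
         * matrix of dimension DB * fanout^levels. */
        #include <stdlib.h>

        #define DB 8                      /* data submatrix size: DB x DB */

        typedef struct hm_node {
            int levels;                   /* levels below this node (0 => leaf)  */
            int fanout;                   /* interior node: fanout x fanout grid */
            union {
                struct hm_node **child;   /* fanout*fanout pointers, NULL = zero */
                double *data;             /* DB*DB dense block at the leaves     */
            } u;
        } hm_node;

        /* Allocate an interior node whose children are all zero (NULL). */
        hm_node *hm_interior(int levels, int fanout)
        {
            hm_node *n = malloc(sizeof *n);
            n->levels = levels;
            n->fanout = fanout;
            n->u.child = calloc((size_t)fanout * fanout, sizeof(hm_node *));
            return n;
        }

        /* Return the address of element (i, j), creating the pointer matrices
         * and the data block on its path on demand, so only nonzero blocks
         * are ever materialized. */
        double *hm_elem(hm_node *n, int i, int j, int dim)
        {
            while (n->levels > 0) {
                int sub = dim / n->fanout;            /* size covered by each child */
                int ci = i / sub, cj = j / sub;
                hm_node **slot = &n->u.child[ci * n->fanout + cj];
                if (*slot == NULL) {
                    hm_node *child;
                    if (n->levels == 1) {             /* next level is a data block */
                        child = malloc(sizeof *child);
                        child->levels = 0;
                        child->fanout = 1;
                        child->u.data = calloc((size_t)DB * DB, sizeof(double));
                    } else {
                        child = hm_interior(n->levels - 1, n->fanout);
                    }
                    *slot = child;
                }
                n = *slot;
                i -= ci * sub;
                j -= cj * sub;
                dim = sub;
            }
            return &n->u.data[i * DB + j];
        }

    For example, with two pointer levels of fan-out 4 over 8 × 8 data blocks, hm_interior(2, 4) covers a 128 × 128 matrix, and *hm_elem(A, 100, 37, 128) = 3.0; materializes only the pointer matrices and the single data block along that path.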

  • Using non-canonical array layouts in dense matrix operations

     Herrero Zaragoza, José Ramón
    Workshop on state-of-the-art in scientific computing (PARA'2006)
    Presentation's date: 2006-06-18
    Presentation of work at congresses

  • Sparse Hypermatrix Cholesky: Customization for High Performance

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    International Multiconference of Engineers and Computer Scientists 2006
    Presentation of work at congresses

  • Compiler-Optimized Kernels: An Efficient Alternative to Hand-Coded Inner Kernels

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    The 2006 International Conference on Computational Science and its Applications
    Presentation of work at congresses

  • A Framework for Accurate Measurements with Low Resolution Clocks

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    The 18th IASTED International Conference on Software Engineering and Applications
    Presentation of work at congresses

  • Advances in Sparse Hypermatrix Cholesky Factorization

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Recent Advances in Engineering and Computer Science 2007
    Presentation of work at congresses

  • Compiler-Optimized Kernels: An Efficient Alternative to Hand-Coded Inner Kernels

     Herrero Zaragoza, José Ramón
    The 2006 International Conference on Computational Science and its Applications
    Presentation's date: 2006-05-07
    Presentation of work at congresses

  • A Framework for Accurate Measurements with Low Resolution Clocks

     Herrero Zaragoza, José Ramón
    The 18th IASTED International Conference on Software Engineering and Applications
    Presentation's date: 2006-11-13
    Presentation of work at congresses

  • Using non-canonical array layouts in dense matrix operations

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Workshop on state-of-the-art in scientific computing (PARA'2006)
    Presentation of work at congresses

  • Compiler-Optimized Kernels: An Efficient Alternative to Hand-Coded Inner Kernels

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Lecture notes in computer science
    Date of publication: 2006-05
    Journal article

  • Optimization of a Statically Partitioned Hypermatrix Sparse Cholesky Factorization

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Lecture notes in computer science
    Date of publication: 2006-01
    Journal article

  • Using Non-canonical Array Layouts in Dense Matrix Operations

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Lecture notes in computer science
    Date of publication: 2006-06
    Journal article

  • Intra-Block Amalgamation in Sparse Hypermatrix Cholesky

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    International Conference on Parallel Processing
    Presentation of work at congresses

  • Intra-Block Amalgamation in Sparse Hypermatrix Cholesky Factorization

     Herrero Zaragoza, José Ramón
    International Conference on Computational Science and Engineering
    Presentation's date: 2005-06-27
    Presentation of work at congresses

  • Efficient Implementation of Nearest Neighbor Classification

     Herrero Zaragoza, José Ramón
    4th International Conference on Computer Recognition Systems
    Presentation's date: 2005-05-22
    Presentation of work at congresses

  • A Study on Load Imbalance in Parallel Hypermatrix Multiplication Using OpenMP

     Herrero Zaragoza, José Ramón; Navarro Guerrero, Juan Jose
    Parallel Processing and Applied Mathematics
    Presentation of work at congresses
