Scientific and technological production

1 to 50 of 163 results
  • A general framework for dynamic and automatic I/O scheduling in hard and solid-state drives

     González Férez, Pilar; Piernas, Juan; Cortes Rossello, Antonio
    Journal of parallel and distributed computing
    Vol. 74, num. 5, p. 2380-2391
    DOI: 10.1016/j.jpdc.2014.02.002
    Date of publication: 2014-05-01
    Journal article


    The selection of the right I/O scheduler for a given workload can significantly improve I/O performance. However, this is not an easy task because several factors have to be considered, and even the "best" scheduler can change over time, especially if the workload's characteristics change too. To address this problem, we present a Dynamic and Automatic Disk Scheduling framework (DADS) that simultaneously compares two different Linux I/O schedulers and dynamically selects the one that achieves the best I/O performance for any workload at any time. The comparison is made by running two instances of a disk simulator inside the Linux kernel. Results show that, by using DADS, the performance achieved is always close to that obtained by the best scheduler. Thus, system administrators are exempted from selecting a suboptimal scheduler that may provide good performance for some workloads but degrade system throughput when the workloads change.
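    The selection logic the abstract describes can be sketched as follows. This is an illustrative stand-in, not the authors' in-kernel implementation: `SimulatedScheduler` and its `service_time` interface are hypothetical names for the two simulator instances that replay the recent request stream under different schedulers.

    ```python
    class SimulatedScheduler:
        """Illustrative stand-in for one in-kernel disk-simulator instance
        replaying the recent request stream under a given I/O scheduler."""
        def __init__(self, name: str, service_time_s: float):
            self.name = name
            self._service_time_s = service_time_s

        def service_time(self) -> float:
            """Total simulated time needed to serve the recent workload."""
            return self._service_time_s


    def pick_scheduler(simulators: list) -> str:
        """Select the scheduler whose simulation served the workload fastest,
        as DADS does periodically while the workload runs."""
        best = min(simulators, key=lambda sim: sim.service_time())
        return best.name
    ```

    Re-evaluating periodically is what lets the selection track workload changes over time.
    
    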

  • File System Metadata Virtualization  Open access

     Artiaga Amouroux, Ernest
    Universitat Politècnica de Catalunya
    Theses


    The advance of computing systems has brought new ways to use and access the stored data that push the architecture of traditional file systems to its limits, making them inadequate to handle the new needs. Current challenges affect both the performance of high-end computing systems and their usability from the applications' perspective. On one side, high-performance computing equipment is rapidly developing into large-scale aggregations of computing elements in the form of clusters, grids or clouds. On the other side, there is a widening range of scientific and commercial applications that seek to exploit these new computing facilities. The requirements of such applications are also heterogeneous, leading to dissimilar patterns of use of the underlying file systems. Data centres have tried to compensate for this situation by providing several file systems to fulfil distinct requirements. Typically, the different file systems are mounted on different branches of a directory tree, and the preferred use of each branch is publicised to users. A similar approach is used in personal computing devices: there is usually a visible and clear distinction between the portion of the file system name space dedicated to local storage, the part corresponding to remote file systems and, recently, the areas linked to cloud services, for example, directories to keep data synchronized across devices, to be shared with other users, or to be remotely backed up. In practice, this approach compromises the usability of the file systems and the possibility of exploiting all their potential benefits. We consider that this burden can be alleviated by determining the applicable features on a per-file basis, rather than associating them with a location in a static, rigid name space. Moreover, usability would be further increased by providing multiple dynamic name spaces that could be adapted to specific application needs. 
    This thesis contributes to this goal by proposing a mechanism to decouple the user view of the storage from its underlying structure. The mechanism consists in the virtualization of file system metadata (including both the name space and the object attributes) and the interposition of a sensible layer that decides where and how files should be stored in order to benefit from the underlying file system features, without incurring usability or performance penalties due to inadequate usage. This technique makes it possible to present multiple, simultaneous virtual views of the name space and the file system object attributes, which can be adapted to specific application needs without altering the underlying storage configuration. The first contribution of the thesis introduces the design of a metadata virtualization framework that makes the above-mentioned decoupling possible; the second contribution consists in a method to improve file system performance in large-scale systems by using this metadata virtualization framework; finally, the third contribution consists in a technique to improve the usability of cloud-based storage systems in personal computing devices.
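    The decoupling the abstract describes can be illustrated with a minimal sketch: a per-file mapping from the user-visible (virtual) path to the backend that actually holds the file. All names here are hypothetical; the thesis' framework goes much further (object attributes, multiple simultaneous views).

    ```python
    class VirtualNamespace:
        """Minimal sketch of metadata virtualization: the path a user sees
        is decoupled, file by file, from where the file is really stored."""
        def __init__(self):
            self._table = {}  # virtual path -> (backend, physical path)

        def bind(self, vpath: str, backend: str, ppath: str) -> None:
            """Place one file: features are chosen per file, not per subtree."""
            self._table[vpath] = (backend, ppath)

        def resolve(self, vpath: str):
            """Translate a user-visible path to its physical location."""
            return self._table[vpath]
    ```

    Note that two files in the same virtual directory may end up on different storage backends, which is precisely what a rigid, location-based name space cannot express.
    
    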

  • Scalability in Extensible and Heterogeneous Storage Systems  Open access

     Miranda Bueno, Alberto
    Universitat Politècnica de Catalunya
    Theses


    The evolution of computer systems has brought an exponential growth in data volumes, which pushes the capabilities of current storage architectures to organize and access this information effectively: as the unending creation and demand of computer-generated data grows at an estimated rate of 40-60% per year, storage infrastructures need increasingly scalable data distribution layouts that are able to adapt to this growth with adequate performance. In order to provide the required performance and reliability, large-scale storage systems have traditionally relied on multiple RAID-5 or RAID-6 storage arrays, interconnected with high-speed networks like FibreChannel or SAS. Unfortunately, the performance of the current, most commonly used storage technology, the magnetic disk drive, cannot keep up with the rate of growth needed to sustain this demand. Moreover, storage architectures based on solid-state devices (the successors of current magnetic drives) do not seem poised to replace HDD-based storage for the next 5-10 years, at least in data centers. Though the performance of SSDs significantly improves on that of hard drives, it would cost the NAND industry hundreds of billions of dollars to build enough manufacturing plants to satisfy the forecasted demand. Besides the problems derived from technological and mechanical limitations, the massive data growth poses more challenges: to build a storage infrastructure, the most flexible approach consists in using pools of storage devices that can be expanded as needed by adding new devices or replacing older ones, thus seamlessly increasing the system's performance and capacity. This approach, however, needs data layouts that can adapt to these topology changes and also exploit the potential performance offered by the hardware. 
    Such strategies should be able to rebuild the data layout to accommodate the new devices in the infrastructure, extracting the utmost performance from the hardware and offering a balanced workload distribution. An inadequate data layout might not effectively use the enlarged capacity or better performance provided by newer devices, thus leading to balancing problems like bottlenecks or resource underusage. Besides, massive storage systems will inevitably be composed of a collection of heterogeneous hardware: as capacity and performance requirements grow, new storage devices must be added to cope with demand, but it is unlikely that these devices will have the same capacity or performance as those already installed. Moreover, upon failure, disks are most commonly replaced by faster and larger ones, since it is not always easy (or cheap) to find a particular model of drive. In the long run, any large-scale storage system will have to cope with a myriad of devices. The title of this dissertation, "Scalability in Extensible and Heterogeneous Storage Systems", refers to the main focus of our contributions: scalable data distributions that can adapt to increasing volumes of data. Our first contribution is the design of a scalable data layout that can adapt to hardware changes while redistributing only the minimum data needed to keep a balanced workload. With the second contribution, we perform a comparative study on the influence of pseudo-random number generators on the performance and distribution quality of randomized layouts, and prove that a badly chosen generator can degrade the quality of the strategy. Our third contribution is an analysis of long-term data access patterns in several real-world traces to determine whether it is possible to offer high performance and a balanced load with less than minimal data rebalancing. 
    In our final contribution, we apply the knowledge learnt about long-term access patterns to design an extensible RAID architecture that can adapt to changes in the number of disks without migrating large amounts of data, and prove that it can be competitive with current RAID arrays with an overhead of at most 1.28% of the storage capacity.
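    The thesis' own layout is not reproduced here, but the key property it targets, relocating only a minimal amount of data when devices are added, can be illustrated with rendezvous (highest-random-weight) hashing, a well-known randomized placement strategy:

    ```python
    import hashlib

    def place(block_id: str, devices: list) -> str:
        """Rendezvous (highest-random-weight) hashing: every block goes to
        the device on which it scores highest. When a device is added, only
        the blocks that now score highest on the newcomer move, so the
        redistribution stays close to the minimum possible (about 1/n)."""
        def score(device: str) -> int:
            digest = hashlib.sha256(f"{block_id}:{device}".encode()).digest()
            return int.from_bytes(digest[:8], "big")
        return max(devices, key=score)
    ```

    Growing a pool from four to five devices relocates roughly a fifth of the blocks, while every block not claimed by the new device stays exactly where it was.
    
    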

  • Living objects: Towards flexible big data sharing

     Marti Fraiz, Jonathan; Queralt Calafat, Anna; Gasull Moreira, Daniel; Cortes Rossello, Antonio
    Journal of Computer Science & Technology
    Vol. 13, num. 2, p. 56-63
    Date of publication: 2013-10
    Journal article


    Data sharing and especially enabling third parties to build new services using large amounts of shared data is clearly a trend for the future and a main driver for innovation. However, sharing data is a challenging and involved process today: The owner of the data wants to maintain full and immediate control on what can be done with it, while users are interested in offering new services which may involve arbitrary and complex processing over large volumes of data. Currently, flexibility in building applications can only be achieved with public or non-sensitive data, which is released without restrictions. In contrast, if the data provider wants to impose conditions on how data is used, access to data is centralized and only predefined functions are provided to the users. We advocate for an alternative that takes the best of both worlds: distributing control on data among the data itself to provide flexibility to consumers. To this end, we exploit the well-known concept of object, an abstraction that couples data and code, and make it act and react according to the circumstances.

  • Towards DaaS 2.0: enriching data models  Open access

     Marti Fraiz, Jonathan; Gasull Moreira, Daniel; Queralt Calafat, Anna; Cortes Rossello, Antonio
    IEEE World Congress on Services
    p. 349-355
    DOI: 10.1109/SERVICES.2013.59
    Presentation's date: 2013-06
    Presentation of work at congresses


    Current Data as a Service solutions lack flexibility in allowing users to customize the underlying data models with new concepts or functionalities. Data providers either publish global APIs to make data available, or 'sell' and transfer data to clients so they can do whatever they want with it. As a result, collaboration and B2B integration become limited and are sometimes not even feasible. Our technology implements the necessary mechanisms for data providers to enable their clients to enrich data models both with additional concepts and with new methods that can be executed and, in turn, published as new services.

    Postprint (author’s final draft)

  • DYON: Managing a new scheduling class to improve system performance in multicore systems  Open access

     Nou Castell, Ramon; Giralt Celiméndiz, Jacobo; Cortes Rossello, Antonio
    Workshop on Runtime and Operating Systems for the Many-core Era
    p. 759-768
    DOI: 10.1007/978-3-642-54420-0_74
    Presentation's date: 2013-08
    Presentation of work at congresses


    Due to the increase in the number of available cores in current systems, plenty of system software has started to use some of these cores to perform tasks that help optimize application behaviour. Unfortunately, current onload mechanisms are too limited. On the one hand, there is no dynamic way to decide the number of cores that is taken from applications and given to these system helpers. On the other hand, the onload mechanisms do not offer enough control over when and where onloading tasks should be executed. In this paper we propose a new Onload Framework that addresses these issues. First, we propose DYON, a dynamic and adaptive method to control the amount of extra CPUs offered to the Onload Framework to generate benefits for the whole system. Second, we propose a submission mechanism that, given a task, executes it if there are idle resources or rejects it otherwise. This feature is useful to move the execution of small pieces of code out of the critical path (allowing parallel execution) when possible, or to discard them and execute code that does not rely on them.
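    The submission mechanism described, running a helper task only when spare resources exist and otherwise rejecting it, can be sketched as follows; the function and parameter names are invented for illustration and do not come from the paper:

    ```python
    def submit(task, idle_cpus: int, fallback):
        """Run the helper task only if an idle CPU is available; otherwise
        reject it and run the fallback code path, which must not depend on
        the helper's result."""
        if idle_cpus > 0:
            return task()      # executed off the critical path, in parallel
        return fallback()      # helper discarded, system stays responsive
    ```

    The point of the design is that rejection is cheap: the caller always has a fallback that works without the helper's output, so a busy system loses only an optimization, never correctness.
    
    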

    Postprint (author’s final draft)

  • Direct lookup and hash-based metadata placement for local file systems

     Lensing, Paul Hermann; Cortes Rossello, Antonio; Brinkmann, Andre
    International Systems and Storage Conference
    p. Article 5
    DOI: 10.1145/2485732.2485741
    Presentation of work at congresses


    New challenges to file systems' metadata performance are imposed by the continuously growing number of files existing in file systems. The total amount of metadata can become too big to be cached, potentially leading to multiple storage device accesses for a single metadata lookup operation. This paper takes a look at the limitations of traditional file system designs and discusses an alternative metadata handling approach, using hash-based concepts already established for metadata and data placement in distributed storage systems. Furthermore, a POSIX-compliant prototype implementation based on these concepts is introduced and benchmarked. A variety of file system metadata and data operations as well as the influence of different storage technologies are taken into account and performance is compared with traditional file systems.
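    The core idea, locating a file's metadata by hashing its full path instead of walking the directory tree component by component, can be sketched as follows; the bucket count and hash function are illustrative choices, not taken from the paper:

    ```python
    import hashlib

    NUM_BUCKETS = 1024  # illustrative: fixed number of metadata buckets

    def metadata_bucket(path: str) -> int:
        """Direct lookup: hash the full path straight to a bucket, so one
        metadata access can suffice instead of one per path component."""
        digest = hashlib.sha256(path.encode("utf-8")).digest()
        return int.from_bytes(digest[:4], "big") % NUM_BUCKETS
    ```

    Because the bucket is a pure function of the path, a lookup for `/a/b/c` needs no reads of the metadata of `/a` or `/a/b`, which is exactly what removes the cascading device accesses when the metadata cache is cold.
    
    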

  • An autonomic framework for enhancing the quality of data grid services

     Sánchez, Alberto; Montes, Jesús; Pérez, María S.; Cortes Rossello, Antonio
    Future generation computer systems
    Vol. 28, num. 7, p. 1005-1016
    DOI: 10.1016/j.future.2011.08.016
    Date of publication: 2012-07
    Journal article


  • DADS: dynamic and automatic disk scheduling

     González Férez, Pilar; Piernas, Juan; Cortes Rossello, Antonio
    ACM Symposium on Applied Computing
    p. 1759-1764
    DOI: 10.1145/2245276.2232062
    Presentation's date: 2012
    Presentation of work at congresses


  • A study on data deduplication in HPC storage systems

     Meister, Dirk; Kaiser, Jürgen; Brinkmann, Andre; Cortes Rossello, Antonio; Kuhn, Michael; Kunkel, Julian
    International Conference for High Performance Computing, Networking, Storage and Analysis
    p. 1-11
    DOI: 10.1109/SC.2012.14
    Presentation's date: 2012-11-13
    Presentation of work at congresses


    Deduplication is a storage-saving technique that is highly successful in enterprise backup environments. On a file system, a single data block might be stored multiple times across different files; for example, multiple versions of a file might exist that are mostly identical. With deduplication, this data replication is localized and the redundancy is removed: by storing data just once, all files that use identical regions refer to the same unique data. The most common approach splits file data into chunks and calculates a cryptographic fingerprint for each chunk. By checking whether the fingerprint has already been stored, a chunk is classified as redundant or unique, and only unique chunks are stored. This paper presents the first study on the potential of data deduplication in HPC centers, which are among the most demanding storage producers. We have quantitatively assessed this potential for capacity reduction for 4 data centers (BSC, DKRZ, RENCI, RWTH). In contrast to previous deduplication studies, which focus mostly on backup data, we have analyzed over one PB (1212 TB) of online file system data. The evaluation shows that typically 20% to 30% of this online data can be removed by applying data deduplication techniques, peaking at up to 70% for some data sets. This reduction can only be achieved by a subfile deduplication approach, while approaches based on whole-file comparisons only lead to small capacity savings.
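    The chunk-and-fingerprint approach the abstract describes can be sketched with fixed-size chunks. This is a simplification: production systems (and the tooling behind such studies) often use content-defined chunking, and the tiny chunk size here is purely for illustration.

    ```python
    import hashlib

    def dedup_ratio(data: bytes, chunk_size: int = 8) -> float:
        """Split data into fixed-size chunks, fingerprint each chunk, and
        report the fraction of chunks that need not be stored because an
        identical chunk (same fingerprint) was already seen."""
        fingerprints = [hashlib.sha256(data[i:i + chunk_size]).hexdigest()
                        for i in range(0, len(data), chunk_size)]
        return 1 - len(set(fingerprints)) / len(fingerprints)
    ```

    For example, a buffer made of the same 8-byte pattern repeated four times yields four chunks with one unique fingerprint, so three quarters of the chunks are redundant.
    
    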

  • Analyzing long-term access locality to find ways to improve distributed storage systems  Open access

     Miranda Bueno, Alberto; Cortes Rossello, Antonio
    Euromicro International Conference on Parallel, Distributed, and Network-Based Processing
    p. 544-553
    DOI: 10.1109/PDP.2012.15
    Presentation's date: 2012
    Presentation of work at congresses


    An efficient design for a distributed filesystem originates from a deep understanding of common access patterns and user behavior, which is obtained through a detailed analysis of traces and snapshots. In this paper we analyze traces for eight distributed filesystems that represent a mix of workloads taken from educational, research and commercial environments. We focused on characterizing block access patterns, amount of block sharing and working set size over long periods of time, and we tried to find common behaviors for all workloads that can be generalized to other storage systems. We found that most environments shared large amounts of blocks over time, and that block sharing was significantly affected by repetitive human behavior. We also found that block lifetimes tended to be short, but there were significant amounts of blocks with long lifetimes that were accessed over many consecutive days. Lastly, we determined that most daily accesses were made to a reduced set of blocks. We strongly believe that these findings can be used to improve long-term caching policies as well as data placement algorithms, thus increasing the performance of distributed storage systems.

    Postprint (author’s final draft)
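
A toy version of the long-term analysis this abstract describes, assuming a hypothetical trace format of (day, block) access pairs, could compute daily working sets and day-over-day block sharing like this:

```python
def daily_stats(trace):
    """Working-set size per day, and for each day the fraction of its
    blocks that were already accessed the previous day."""
    days = {}
    for day, block in trace:
        days.setdefault(day, set()).add(block)
    ordered = sorted(days)
    working_set = {d: len(days[d]) for d in ordered}
    shared = {cur: len(days[prev] & days[cur]) / len(days[cur])
              for prev, cur in zip(ordered, ordered[1:])}
    return working_set, shared

# Day 2 re-reads block "a" (repetitive behavior) and touches new block "c".
ws, shared = daily_stats([(1, "a"), (1, "b"), (2, "a"), (2, "c")])
```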

  • AbacusFS integrated storage and computational platform  Open access

     Nuhic, Isak; Šterk, Marjan; Cortes Rossello, Antonio
    International Convention on Information and Communication Technology, Electronics and Microelectronics
    p. 306-312
    Presentation's date: 2012-05-21
    Presentation of work at congresses


    Today's applications, especially those in the scientific community, deal with an ever growing amount of data. Among the problems that arise from this explosion of data are how to organize the data so that the information about how the data was produced is not lost, how to ensure repeatability of calculations, how to automate calculations, and how to save computational and storage resources. Storage is currently a passive component that neither imposes any rules on data organization nor helps with associating a calculation with its result. We thus introduce AbacusFS – an integrated storage and computational platform which interacts with the file system and connects the calculations and the storage. Users and applications see it as a normal file system; however, the semantics are changed for files that are results of calculations done with the platform. When such a file is opened for reading, AbacusFS first regenerates it if the input files used in the calculation have been modified. Alternatively, result files can be virtual, i.e. not stored anywhere but rather regenerated on every read access.

    Postprint (author’s final draft)
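
The regenerate-on-read semantics can be sketched with a make-style timestamp check; `open_result` and `regenerate` are hypothetical names, and AbacusFS applies this logic transparently inside the file system rather than in application code:

```python
import os

def open_result(result_path, inputs, regenerate):
    """Serve a result file, first re-running its recorded calculation if
    any input was modified after the result was produced (or if the
    result does not exist yet, e.g. for a virtual file)."""
    mtime = os.path.getmtime
    if (not os.path.exists(result_path)
            or any(mtime(i) > mtime(result_path) for i in inputs)):
        regenerate()   # re-run the calculation that produces result_path
    return open(result_path, "rb")
```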

  • Automatic I/O scheduler selection through online workload analysis  Open access

     Nou Castell, Ramon; Giralt, Jacobo; Cortes Rossello, Antonio
    IEEE International Conference on Autonomic and Trusted Computing
    p. 431-438
    DOI: 10.1109/UIC-ATC.2012.12
    Presentation's date: 2012-09-04
    Presentation of work at congresses


    I/O performance is a bottleneck for many workloads, and the I/O scheduler plays an important role in it. The scheduler is typically configured once by the administrator, yet no single choice suits the system at all times: every I/O scheduler behaves differently depending on the workload and the device. We present a method to automatically select the most suitable I/O scheduler for the ongoing workload. The selection is done online, using a workload analysis method that finds common I/O patterns in small I/O traces. Our dynamic mechanism automatically adapts to one of the best schedulers, sometimes improving I/O performance for heterogeneous workloads beyond that of any fixed configuration (by up to 5%). The technique works with any application and device type (RAID, HDD, SSD), as long as there is a system parameter to tune, and it needs no disk simulations or hardware models, which are normally unavailable. We evaluate it in different setups and with different benchmarks.

    Postprint (author’s final draft)
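
A toy rendition of the online selection idea; the sequentiality heuristic and the scheduler mapping below are illustrative stand-ins for the paper's actual pattern analysis:

```python
def classify_trace(offsets, seq_threshold=0.7):
    """Label a short trace of block offsets by the fraction of strictly
    consecutive accesses (a crude stand-in for real pattern matching)."""
    if len(offsets) < 2:
        return "sequential"
    seq = sum(1 for a, b in zip(offsets, offsets[1:]) if b == a + 1)
    return "sequential" if seq / (len(offsets) - 1) >= seq_threshold else "random"

def pick_scheduler(offsets):
    # Illustrative mapping to Linux scheduler names; the actual choice
    # would depend on the device and the measured performance.
    return {"sequential": "deadline", "random": "cfq"}[classify_trace(offsets)]
```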

  • Efficient OpenMP over sequentially consistent distributed shared memory systems  Open access

     Costa Prats, Juan Jose
    Department of Computer Architecture, Universitat Politècnica de Catalunya
    Theses


    Nowadays clusters are one of the most used platforms in High Performance Computing, and most programmers use the Message Passing Interface (MPI) library to program their applications on these distributed platforms and obtain their maximum performance, although it is a complex task. On the other hand, OpenMP has been established as the de facto standard for programming applications on shared memory platforms because it is easy to use and obtains good performance without too much effort. So, could it be possible to join both worlds? Could programmers use the easiness of OpenMP on distributed platforms? A lot of researchers think so. One of the ideas developed is distributed shared memory (DSM), a software layer on top of a distributed platform that gives applications an abstract shared memory view. Even though it seems a good solution, it also has some drawbacks: memory coherence between the nodes of the platform is difficult to maintain (complex management, scalability issues, high overhead, among others), and the latency of remote-memory accesses can be orders of magnitude greater than on a shared bus due to the interconnection network. This research therefore improves the performance of OpenMP applications executed on distributed memory platforms using a DSM with sequential consistency, thoroughly evaluating the results on the NAS Parallel Benchmarks. The vast majority of existing DSMs use a relaxed consistency model because it avoids some major problems in the area. In contrast, we use a sequential consistency model because we think that exposing these potential problems, which are otherwise hidden, may allow solutions to be found and then applied to both models. The main idea behind this work is that both runtimes, OpenMP and the DSM layer, should cooperate to achieve good performance; otherwise they interfere with each other, trashing the final performance of applications.
    We develop three different contributions to improve the performance of these applications: (a) a technique to avoid false sharing at runtime, (b) a technique to mimic the MPI behaviour, where produced data is forwarded to its consumers, and (c) a mechanism to avoid the network congestion caused by the DSM coherence messages. The NAS Parallel Benchmarks are used to test the contributions. The results of this work show that the impact of false sharing depends on the application. Another result is that moving the data flow out of the critical path and forwarding data as early as possible, as MPI does, benefits the final application performance. Additionally, this data movement is usually concentrated at single points and affects application performance due to the limited bandwidth of the network; it is therefore necessary to provide mechanisms that distribute this data throughout the computation time, using an otherwise idle network. Finally, the results show that the proposed contributions improve the performance of OpenMP applications in this kind of environment.

  • Reliable and randomized data distribution strategies for large scale storage systems  Open access

     Miranda Bueno, Alberto; Effert, S.; Kang, Y.; Miller, E.L.; Brinkmann, A.; Cortes Rossello, Antonio
    International Conference on High Performance Computing
    p. 1-10
    DOI: 10.1109/HiPC.2011.615274
    Presentation's date: 2011
    Presentation of work at congresses


    The ever-growing amount of data requires highly scalable storage solutions. The most flexible approach is to use storage pools that can be expanded and scaled down by adding or removing storage devices. To make this approach usable, it is necessary to provide a solution to locate data items in such a dynamic environment. This paper presents and evaluates the Random Slicing strategy, which incorporates lessons learned from table-based, rule-based, and pseudo-randomized hashing strategies and is able to provide a simple and efficient strategy that scales up to handle exascale data. Random Slicing keeps a small table with information about previous storage system insert and remove operations, drastically reducing the required amount of randomness while delivering a perfect load distribution.

    Postprint (author’s final draft)
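
The core lookup of Random Slicing can be sketched as an interval table over [0, 1); the slice boundaries and the SHA-256-based hashing below are assumptions for illustration:

```python
import bisect
import hashlib

class RandomSlicing:
    """The unit interval is cut into slices, each owned by a device; an
    item is located by hashing its key to a point and finding its slice."""
    def __init__(self, slices):
        # slices: (start, device) pairs sorted by start; each slice runs
        # to the next start, the last one to 1.0.
        self.starts = [s for s, _ in slices]
        self.devices = [d for _, d in slices]

    def locate(self, key):
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
        point = h / 2.0**64            # pseudo-random point in [0, 1)
        return self.devices[bisect.bisect_right(self.starts, point) - 1]

# Two devices sharing the load evenly; adding a device would only split
# existing slices, keeping the table small.
table = RandomSlicing([(0.0, "disk0"), (0.5, "disk1")])
```

Because only the small slice table changes on inserts and removals, lookups stay deterministic while the amount of randomness needed is drastically reduced.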

  • A high performance suite of data services for grids

     Sánchez, Alberto; Pérez, María S.; Montes, Jesús; Cortes Rossello, Antonio
    Future generation computer systems
    Vol. 26, num. 4, p. 622-632
    DOI: 10.1016/j.future.2009.11.006
    Date of publication: 2010-04
    Journal article


  • Transient congestion avoidance in software distributed shared memory systems

     Costa Prats, Juan Jose; Cortes Rossello, Antonio; Martorell Bofill, Xavier; Bueno Hedo, Javier; Ayguade Parra, Eduard
    International Conference on Parallel and Distributed Computing, Applications and Technologies
    p. 357-364
    DOI: 10.1109/PDCAT.2010.32
    Presentation's date: 2010-12
    Presentation of work at congresses


  • XtreemOS application execution management: a scalable approach  Open access

     Nou Castell, Ramon; Giralt Celiméndiz, Jacobo; Corbalan Gonzalez, Julita; Tejedor Saavedra, Enric; Fitó Comellas, Josep Oriol; Perez, Josep Maria; Cortes Rossello, Antonio
    ACM/IEEE International Conference on Grid Computing
    p. 49-56
    Presentation's date: 2010-10-25
    Presentation of work at congresses


    Designing a job management system for the Grid is a non-trivial task. While a complex middleware can give a lot of features, it often implies sacrificing performance. Such performance loss is especially noticeable for small jobs. A Job Manager’s design also affects the capabilities of the monitoring system. We believe that monitoring a job or asking for a job status should be fast and easy, like doing a simple ’ps’. In this paper, we present the job management of XtreemOS - a Linux-based operating system to support Virtual Organizations for Grid. This management is performed inside the Application Execution Manager (AEM). We evaluate its performance using only one job manager plus the built-in monitoring infrastructure. Furthermore, we present a set of real-world applications using AEM and its features. In XtreemOS we avoid reinventing the wheel and use the Linux paradigm as an abstraction.

  • Using file system virtualization to avoid metadata bottlenecks  Open access

     Artiaga Amouroux, Ernest; Cortes Rossello, Antonio
    Design, Automation and Test in Europe
    p. 1-6
    Presentation's date: 2010-03
    Presentation of work at congresses


    Parallel file systems are very sensitive to adverse conditions, and the lack of synergy between such file systems and some of the applications running on them has a negative impact on the overall system performance. Our observations indicate that the increased pressure on metadata management is one of the relevant causes of performance drops. This paper proposes a virtualization layer above the native file system that, transparently to the user, reorganizes the underlying directory tree, mitigating bottlenecks by taking advantage of the native file system optimizations and limiting the effects of potentially harmful application behavior. We developed COFS (COmposite File System) as a proof-of-concept virtual layer to evaluate the feasibility of the proposal.
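
One way such a virtualization layer can relieve a hot directory, sketched here with a hypothetical `remap` helper (COFS's actual reorganization policies are richer), is to hash entries into hidden subdirectories of the native file system:

```python
import hashlib

def remap(path, fanout=16):
    """Map a logical path to a physical one where the directory's entries
    are spread over `fanout` hidden buckets, so that no single native
    directory has to hold every entry."""
    parent, _, name = path.rpartition("/")
    bucket = int(hashlib.md5(name.encode()).hexdigest(), 16) % fanout
    return f"{parent}/.b{bucket:02x}/{name}"
```

Applications keep using the logical path; only the virtualization layer sees the bucketed physical layout.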

  • HRaidTools: an on-line suite of simulation tools for heterogeneous RAID systems

     González, José Luis; Cortes Rossello, Antonio; Delgado-Meraz, Jaime; Rubio, Ana Piedad
    International Conference on Simulation Tools and Techniques
    Presentation's date: 2010-03
    Presentation of work at congresses


  • XtreemOS-MD: Grid computing from mobile devices  Open access

     Martínez, Álvaro; Prieto, Santiago; Gallego, Noé; Nou, Ramon; Giralt, Jacobo; Cortes Rossello, Antonio
    International ICST Conference on MOBILe Wireless MiddleWARE, Operating Systems, and Applications
    p. 45-58
    DOI: 10.1007/978-3-642-17758-3_4
    Presentation's date: 2010-06
    Presentation of work at congresses


    Grid and Cloud computing are well known topics, and currently the focus of many research and commercial projects. Nevertheless, transparent access to Grid facilities from mobile devices (like PDAs or smartphones) is normally out of the scope of those initiatives, given the intrinsic limitations of such devices: screen size and resolution, storage capacity, computational power, etc. To provide an integrated solution for mobile access to Grid computing, aspects like Virtual Organization (VO) support, graphical job management, flexible authentication and access to the user's file system volume in the Grid from a mobile device should be covered. That is the purpose of XtreemOS-MD, the mobile flavour of XtreemOS (a Linux-based operating system to support VOs for Grids), which provides transparent access to Grid facilities from mobile devices.

  • Simultaneous evaluation of multiple I/O strategies

     González Férez, Pilar; Piernas, Juan; Cortes Rossello, Antonio
    International Symposium on Computer Architecture and High Performance Computing
    p. 183-190
    DOI: 10.1109/SBAC-PAD.2010.30
    Presentation's date: 2010-10-27
    Presentation of work at congresses


  • Reducing data access latency in SDSM systems using runtime optimizations

     Bueno Hedo, Javier; Martorell Bofill, Xavier; Costa Prats, Juan Jose; Cortes Rossello, Antonio; Ayguade Parra, Eduard; Zhang, Guansong; Barton, Christopher; Silvera, Raul
    Conference of the Center for Advanced Studies on Collaborative Research (CASCON)
    p. 160-173
    DOI: 10.1145/1923947.1923965
    Presentation's date: 2010-11-01
    Presentation of work at congresses


  • FaTLease: scalable fault-tolerant lease negotiation with Paxos

     Hupfeld, Felix; Kolbeck, Björn; Stender, Jan; Högqvist, Mikael; Cortes Rossello, Antonio; Martí, Jonathan; Malo, Jesús
    Cluster computing
    Vol. 12, num. 2, p. 175-188
    DOI: 10.1007/s10586-009-0074-2
    Date of publication: 2009-06
    Journal article


  • MPEXPAR: MODELS DE PROGRAMACIO I ENTORNS D'EXECUCIO PARAL·LELS

     Nou Castell, Ramon; Gonzalez Tallada, Marc; Tejedor Saavedra, Enric; Becerra Fontal, Yolanda; Herrero Zaragoza, José Ramón; Navarro Mas, Nacho; Gil Gómez, Maria Luisa; Farreras Esclusa, Montserrat; Costa Prats, Juan Jose; Corbalan Gonzalez, Julita; Badia Sala, Rosa Maria; Torres Viñals, Jordi; Martorell Bofill, Xavier; Carrera Perez, David; Labarta Mancho, Jesus Jose; Cortes Rossello, Antonio; Guitart Fernández, Jordi; Sirvent Pardell, Raül; Alonso López, Javier; Ayguade Parra, Eduard
    Competitive project


  • Overlapping communication with computation on NAS BT benchmark

     Costa Prats, Juan Jose; Bueno Hedo, Javier; Cortes Rossello, Antonio; Martorell Bofill, Xavier; Ayguade Parra, Eduard
    Advanced Computer Architecture and Compilation for Embedded Systems
    p. 55-58
    Presentation's date: 2009-07
    Presentation of work at congresses


  • Optimization and functional improvement of disk arrays.

     Gonzalez Compean, Jose Luis
    Department of Computer Architecture, Universitat Politècnica de Catalunya
    Theses


  • Measuring TCP bandwidth on top of a Gigabit and Myrinet network

     Costa Prats, Juan Jose; Bueno Hedo, Javier; Martorell Bofill, Xavier; Cortes Rossello, Antonio
    Date: 2009
    Report


  • The XtreemFS architecture: a case for object-based file systems in Grids

     Hupfeld, Felix; Cortes Rossello, Antonio; Kolbeck, Björn; Stender, Jan; Focht, Erich; Hess, Matthias; Malo, Jesús; Marti, Jonathan; Cesario, Eugenio
    Concurrency and Computation: Practice and Experience
    Vol. 20, num. 17, p. 2049-2060
    DOI: 10.1002/cpe.1304
    Date of publication: 2008-12
    Journal article


  • Distributing Orthogonal Redundancy on Adaptive Disk Arrays

     Gonzalez, Jl; Cortes Rossello, Antonio
    Lecture notes in computer science
    Vol. 5331, p. 914-931
    Date of publication: 2008-01
    Journal article


  • Resource Management in Virtualized Execution Environments: New Opportunities for Application Specific Decisions

     Becerra Fontal, Yolanda; Cortes Rossello, Antonio; Garcia Almiñana, Jordi; Navarro Mas, Nacho
    Date: 2008-06
    Report


  • Optimizing programming models for massively parallel computers  Open access

     Farreras Esclusa, Montserrat
    Department of Computer Architecture, Universitat Politècnica de Catalunya
    Theses


    Since the invention of the transistor, clock frequency increase was the primary method of improving computing performance. As the reach of Moore's law came to an end, however, technology driven performance gains became increasingly harder to achieve, and the research community was forced to come up with innovative system architectures. Today increasing parallelism is the primary method of improving performance: single processors are being replaced by multiprocessor systems and multicore architectures. The challenge faced by computer architects is to increase performance while limited by cost and power consumption. The appearance of cheap and fast interconnection networks has promoted designs based on distributed memory computing. Most modern massively parallel computers, as reflected by the Top 500 list, are clusters of workstations using commodity processors connected by high speed interconnects. Today's massively parallel systems consist of hundreds of thousands of processors. Software technology to program these large systems is still in its infancy. Optimizing communication has become a key to overall system performance. To cope with the increasing burden of communication, the following methods have been explored: (i) Scalability in the messaging system: The messaging system itself needs to scale up to the 100K processor range. (ii) Scalable algorithms reducing communication: As the machine grows in size the amount of communication also increases, and the resulting overhead negatively impacts performance. New programming models and algorithms allow programmers to better exploit locality and reduce communication. (iii) Speed up communication: reducing and hiding communication latency, and improving bandwidth. 
    Following the three items described above, this thesis contributes to the improvement of the communication system (i) by proposing scalable memory management of the communication system that guarantees the correct reception of data and control-data, (ii) by proposing a language extension that allows programmers to better exploit data locality to reduce inter-node communication, and (iii) by presenting and evaluating a cache of remote addresses that aims to reduce control-data and exploit the RDMA native network capabilities, resulting in latency reduction and better overlap of communication and computation. Our contributions are analyzed in two different parallel programming models: Message Passing Interface (MPI) and Unified Parallel C (UPC). Many different programming models exist today, and the programmer usually needs to choose one or another depending on the problem and the machine architecture. MPI has been chosen because it is the de facto standard for parallel programming in distributed memory machines. UPC was considered because it constitutes a promising easy-to-use approach to parallelism. Since parallelism is everywhere, programmability is becoming important and languages such as UPC are gaining attention as a potential future of high performance computing. Concerning the communication system, the languages chosen are relevant because, while MPI offers two-sided communication, UPC relies on a one-sided communication model. This difference potentially influences the communication system requirements of the language. These requirements, as well as our contributions, are analyzed and discussed for both programming models, and we state whether they apply to both.

  • Distributing orthogonal redundancy on adaptive disk arrays

     Cortes Rossello, Antonio; González, José Luis
    International Conference on Grid computing, high-performAnce and Distributed Applications
    p. 914-931
    DOI: 10.1007/978-3-540-88871-0_64
    Presentation's date: 2008-11
    Presentation of work at congresses


    When upgrading storage systems, the key is to migrate data from the old storage subsystems to the new ones, achieving a data layout that delivers high-performance I/O, increased capacity and strong data availability while preserving the effectiveness of its location method. However, achieving such a layout is not trivial when a redundancy scheme is involved, because the migration algorithm must guarantee that data and its redundancy are never allocated on the same disk. Orthogonal redundancy, for instance, delivers strong data availability for distributed disk arrays, but this scheme is basically focused on homogeneous and static environments, and upgrading it requires a technique called re-striping that moves the overall data layout. This paper presents a deterministic placement approach for distributing orthogonal redundancy on distributed heterogeneous disk arrays, which is able to adapt the storage system on-line to capacity/performance demands by moving only a fraction of the data layout. The evaluation reveals that our proposal achieves data layouts that deliver improved performance and increased capacity while keeping the redundancy scheme effective even after several migrations. Finally, it keeps the complexity of data management at an acceptable level.
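
The invariant at the heart of the problem (a data block and its redundancy must never land on the same disk) can be stated as a tiny placement sketch; the shift-by-one rule is purely illustrative, not the paper's actual layout:

```python
def place(stripe_id, n_disks):
    """Assign a stripe's data block and its redundancy block to two
    distinct disks (the invariant holds whenever n_disks >= 2)."""
    data_disk = stripe_id % n_disks
    redundancy_disk = (stripe_id + 1) % n_disks
    return data_disk, redundancy_disk
```

Any migration step the paper's algorithm performs must preserve exactly this separation while moving only a fraction of the blocks.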

  • Evaluating the effectiveness of REDCAP to recover the locality missed by today's Linux Systems

     González Férez, Pilar; Piernas, Juan; Cortes Rossello, Antonio
    IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems
    Presentation's date: 2008-10
    Presentation of work at congresses


  • XtreemOS grid checkpointing architecture

     Mehnert-Spahn, John; Schöttner, Michael; Ropars, Thomas; Margery, David; Morin, Christine; Corbalan Gonzalez, Julita; Cortes Rossello, Antonio
    IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing
    Presentation's date: 2008-05
    Presentation of work at congresses


  • FaTLease: scalable fault-tolerant lease negotiation with Paxos

     Hupfeld, Felix; Kolbeck, Björn; Stender, Jan; Högqvist, Mikael; Cortes Rossello, Antonio; Marti, Jonathan; Malo, Jesús
    International Symposium on High-Performance Distributed Computing
    Presentation's date: 2008-06
    Presentation of work at congresses


  • Increasing Parallelism for Workflows in the Grid

     Martí, Jonathan; Malo, Jesús; Cortes Rossello, Antonio
    Lecture notes in computer science
    Vol. 4641, num. 1, p. 415-424
    Date of publication: 2007-08
    Journal article


  • Improving GridFTP transfers by means of a multiagent parallel file system

     Sánchez, Alberto; Pérez, María S.; Gueant, Pierre; Montes, Jesús; Herrero, Pilar; Cortes Rossello, Antonio
    Multiagent and grid systems
    Vol. 3, num. 4, p. 1
    Date of publication: 2007-10
    Journal article


  • Adaptive Data Block Placement Based on Deterministic Zones (AdaptiveZ)

     Gonzalez, J L; Cortes Rossello, Antonio
    Lecture notes in computer science
    Vol. 4804, num. 1, p. 1214-1232
    Date of publication: 2007-11
    Journal article


  • The Design of New Journaling File Systems: The DualFS case

     Piernas, J; Cortes Rossello, Antonio; García, J M
    IEEE transactions on computers
    Vol. 56, num. 2, p. 267-281
    Date of publication: 2007-02
    Journal article


  • The RAM Enhanced Disk Cache Project (REDCAP)

     Gonzalez, Pilar; Piernas, Juan; Cortes Rossello, Antonio
    24th IEEE Conference on Mass Storage Systems and Technologies
    p. 251-256
    Presentation of work at congresses


  • Oraculo: A Scalable File Access Prediction Service

     Martí, Jonathan; Malo, Jesús; Artiaga Amouroux, Ernest; Cortes Rossello, Antonio
    Workshop on Execution Environments for Distributed Computing
    p. 33-40
    Presentation of work at congresses


  • Aprendizaje activo

     Barcelò Ordinas, José María; Cortes Rossello, Antonio; Fernandez Jimenez, Agustin; Garcia Vidal, Jorge; Morancho Llena, Enrique; Navarro Guerrero, Juan Jose; Valero Garcia, Miguel
    Jornades de Docència del Departament d'Arquitectura de Computadors. 10 Anys de Jornades
    p. 1-10
    Presentation of work at congresses


  • Observación institucional

     Barcelò Ordinas, José María; Cortes Rossello, Antonio; Fernandez Jimenez, Agustin; Garcia Vidal, Jorge; Morancho Llena, Enrique; Valero Garcia, Miguel
    Jornades de Docència del Departament d'Arquitectura de Computadors. 10 Anys de Jornades
    p. 1-10
    Presentation of work at congresses


  • Adaptive Data Block Placement Based on Deterministic Zones (AdaptiveZ)

     Gonzalez, J L; Cortes Rossello, Antonio
    On the Move to Meaningful Internet Systems 2007: CooplS, DOA, ODBASE, GADA, and IS
    p. 1214-1232
    Presentation of work at congresses


  • Lessons learnt from cluster computing: How they can be applied to grid environments

     Sánchez, Alberto; Cortes Rossello, Antonio; Montes, Jesús; Gueant, Pierre; Pérez, María S.
    The 8th Hellenic European Research on Computer Mathematics & its Applications Conference
    p. 1
    Presentation of work at congresses


  • Towards an Open Grid Marketplace Framework for Resource Trade

     Cortes Rossello, Antonio
    On the Move to Meaningful Internet Systems 2007: CooplS, DOA, ODBASE, GADA, and IS
    Presentation's date: 2007-11-25
    Presentation of work at congresses


  • Improving Data Locality in NAS BT Benchmark

     Vaquero, Jordi; Gonzalez Tallada, Marc; Costa Prats, Juan Jose; Bueno Hedo, Javier; Martorell Bofill, Xavier; Cortes Rossello, Antonio; Ayguade Parra, Eduard
    Third International Summer School on Advanced Computer Architecture and Compilation for Embedded Systems (ACACES 2007)
    p. 199-202
    Presentation's date: 2007-07
    Presentation of work at congresses


  • XtreemFS - an object-based file system for large scale federated IT infrastructures

     Cesario, Eugenio; Cortes Rossello, Antonio; Focht, Erich; Hess, Matthias; Hupfeld, Felix; Kolbeck, Björn; Malo, Jesús; Martí, Jonathan; Stender, Jan
    LinuxTAG
    p. 1
    Presentation of work at congresses


  • Adaptive Data Block Placement Based on Deterministic Zones (AdaptiveZ)

     Gonzalez, J L; Cortes Rossello, Antonio
    International Conference on Grid Computing, High-Performance and Distributed Applications (GADA'07)
    p. 1214-1232
    Presentation of work at congresses
