
High Performance Computing: International Symposium, ISHPC'97, Fukuoka, Japan, November 4-6, 1997, Proceedings: Lecture Notes in Computer Science, vol. 1336

Edited by Constantine Polychronopoulos, Kazuki Joe, Keijiro Araki, Makoto Amamiya
English, Paperback – 22 Oct 1997
This book constitutes the refereed proceedings of the International Symposium on High Performance Computing, ISHPC '97, held in Fukuoka, Japan in November 1997.
The volume presents four distinguished papers and 16 revised regular papers selected from more than 40 submissions on the basis of at least three peer reviews. Also included are seven invited contributions by leading authorities and 10 selected poster presentations. The papers are organized in topical chapters on high performance systems architectures, networks, compilers, systems software, and applications in various areas.

From the series Lecture Notes in Computer Science

Price: 343.58 lei

Old price: 429.48 lei
-20% New

Express points: 515

Estimated price in foreign currency:
€65.76 / $68.54 / £54.74

Printed to order

Economy delivery: 06-20 January 25

Order line: 021 569.72.76

Specifications

ISBN-13: 9783540637660
ISBN-10: 3540637664
Pages: 436
Illustrations: XIII, 423 p.
Dimensions: 216 x 279 x 23 mm
Weight: 1 kg
Edition: 1997
Publisher: Springer Berlin, Heidelberg
Collection: Springer
Series: Lecture Notes in Computer Science

Place of publication: Berlin, Heidelberg, Germany

Target audience

Research

Contents

- The generation of optimized codes using nonzero structure analysis
- On the importance of an end-to-end view of memory consistency in future computer systems
- High performance distributed object systems
- Instruction cache prefetching using multilevel branch prediction
- High performance wireless computing
- High-performance computing and applications in image processing and computer vision
- Present and future of HPC technologies
- Evaluation of multithreaded processors and thread-switch policies
- A multithreaded implementation concept of prolog on Datarol-II machine
- Thread Synchronization Unit (TSU): A building block for high performance computers
- Data dependence path reduction with tunneling load instructions
- Performance estimation of embedded software with pipeline and cache hazard modeling
- An implementation and evaluation of a distributed shared-memory system on workstation clusters using fast serial links
- Designing and optimizing 3-connectivity communication networks using a distributed genetic algorithm
- Adaptive routing on the Recursive Diagonal Torus
- Achieving multi-level parallelization
- A technique to eliminate redundant inter-processor communication on parallelizing compiler TINPAR
- An automatic vectorizing/parallelizing Pascal compiler V-Pascal ver. 3
- An algorithm for automatic detection of loop indices for communication overlapping
- NaraView: An interactive 3D visualization system for parallelization of programs
- Hybrid approach for non-strict dataflow program on commodity machine
- Resource management methods for general purpose massively parallel OS SSS-CORE
- Scenario-based hypersequential programming: Formulation of parallelization
- Parallelization of space plasma particle simulation
- Implementing iterative solvers for irregular sparse matrix problems in high performance Fortran
- Parallel navigation in an A-NETL based parallel OODBMS
- High performance parallel FFT on distributed memory parallel computers
- Parallel computation model logPQ
- Cost estimation of coherence protocols of software managed cache on distributed shared memory system
- A portable distributed shared memory system on the cluster environment: Design and implementation fully in software
- An object-oriented framework for loop parallelization
- A method for runtime recognition of collective communication on distributed-memory multiprocessors
- Improving the performance of automated forward deduction system EnCal
- Efficiency of parallel machine for large-scale simulation in computational physics
- Parallel PDB data retriever "PDB diving booster"
- A parallelization method for neural networks with weak connection design
- Exploiting parallel computers to reduce neural network training time of real applications