
Languages and Compilers for Parallel Computing: 8th International Workshop, Columbus, Ohio, USA, August 10-12, 1995. Proceedings: Lecture Notes in Computer Science, volume 1033

Edited by Chua-Huang Huang, Ponnuswamy Sadayappan, Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua
English, Paperback – 24 January 1996
This book presents the refereed proceedings of the Eighth Annual Workshop on Languages and Compilers for Parallel Computing, held in Columbus, Ohio in August 1995.
The 38 revised full papers presented were carefully selected for inclusion in the proceedings and reflect the state of the art of research and advanced applications in parallel languages, restructuring compilers, and runtime systems. The papers are organized in sections on fine-grain parallelism, interprocedural analysis, program analysis, Fortran 90 and HPF, loop parallelization for HPF compilers, tools and libraries, loop-level optimization, automatic data distribution, compiler models, irregular computation, and object-oriented and functional parallelism.

Part of the Lecture Notes in Computer Science series

Price: 648.08 lei

Old price: 810.10 lei
-20% New

Express Points: 972

Estimated price in foreign currency:
124.03€ 128.83$ 103.02£

Book printed on demand

Economy delivery: 03-17 February 25

Phone orders: 021 569.72.76

Specifications

ISBN-13: 9783540607656
ISBN-10: 354060765X
Pages: 620
Illustrations: XIV, 606 p.
Dimensions: 155 x 235 x 35 mm
Weight: 0.86 kg
Edition: 1996
Publisher: Springer Berlin, Heidelberg
Collection: Springer
Series: Lecture Notes in Computer Science

Place of publication: Berlin, Heidelberg, Germany

Target audience

Research

Table of contents

Array data flow analysis for load-store optimizations in superscalar architectures
An experimental study of an ILP-based exact solution method for software pipelining
Insertion scheduling: An alternative to list scheduling for modulo schedulers
Interprocedural array region analyses
Interprocedural analysis for parallelization
Interprocedural array data-flow analysis for cache coherence
An interprocedural parallelizing compiler and its support for memory hierarchy research
V-cal: a calculus for the compilation of data parallel languages
Transitive closure of infinite graphs and its applications
Demand-driven, symbolic range propagation
Optimizing Fortran 90 shift operations on distributed-memory multicomputers
A loop parallelization algorithm for HPF compilers
Fast address sequence generation for data-parallel programs using integer lattices
Compiling array statements for efficient execution on distributed-memory machines: Two-level mappings
A communication backend for parallel language compilers
Parallel simulation of data parallel programs
A parallel processing support library based on synchronized aggregate communication
FALCON: A MATLAB interactive restructuring compiler
A simple mechanism for improving the accuracy and efficiency of instruction-level disambiguation
Hoisting branch conditions — improving super-scalar processor performance
Integer loop code generation for VLIW
Dependence analysis in parallel loops with i ± k subscripts
Piecewise execution of nested data-parallel programs
Recovering logical structures of data
Efficient distribution analysis via graph contraction
Automatic selection of dynamic data partitioning schemes for distributed-memory multicomputers
Data redistribution in an automatic data distribution tool
General purpose optimization technology
Compiler architectures for heterogeneous systems
Virtual topologies: A new concurrency abstraction for high-level parallel languages
Interprocedural data flow based optimizations for compilation of irregular problems
Automatic parallelization of the conjugate gradient algorithm
Annotations for a sparse compiler
Connection analysis: A practical interprocedural heap analysis for C
Language and run-time support for network parallel computing
Agents: An undistorted representation of problem structure
Type directed cloning for Object-Oriented programs
The performance impact of granularity control and functional parallelism