Quantity/Price
Product

Computational Auditory Scene Analysis: Proceedings of the Ijcai-95 Workshop

Edited by David F. Rosenthal and Hiroshi G. Okuno
In English · Paperback – 2 Dec 2019
The interest of AI in problems related to understanding sounds has a rich history dating back to the ARPA Speech Understanding Project in the 1970s. While a great deal has been learned from this and subsequent speech understanding research, the goal of building systems that can understand general acoustic signals--continuous speech and/or non-speech sounds--from unconstrained environments is still unrealized. Instead, there are now systems that understand "clean" speech well in relatively noiseless laboratory environments, but that break down in more realistic, noisier environments. As seen in the "cocktail-party effect," humans and other mammals have the ability to selectively attend to sound from a particular source, even when it is mixed with other sounds. Computers also need to be able to decide which parts of a mixed acoustic signal are relevant to a particular purpose--which part should be interpreted as speech, and which should be interpreted as a door closing, an air conditioner humming, or another person interrupting.

Observations such as these have led a number of researchers to conclude that research on speech understanding and on nonspeech understanding need to be united within a more general framework. Researchers have also begun trying to understand computational auditory frameworks as parts of larger perception systems whose purpose is to give a computer integrated information about the real world. Inspiration for this work ranges from research on how different sensors can be integrated to models of how humans' auditory apparatus works in concert with vision, proprioception, etc. Representing some of the most advanced work on computers understanding speech, this collection of papers covers the work being done to integrate speech and nonspeech understanding in computer systems.

Price: 461.03 lei

Old price: 542.39 lei
-15% New

Express points: 692

Estimated price in other currencies:
88.22€  92.34$  73.43£

Printed to order

Economy delivery 31 March - 14 April

Phone orders: 021 569.72.76

Specifications

ISBN-13: 9780367447847
ISBN-10: 0367447843
Pages: 414
Dimensions: 210 x 280 x 22 mm
Weight: 0.45 kg
Edition: 1
Publisher: CRC Press
Series: CRC Press

Target audience

Academic, Professional, and Professional Practice & Development

Contents

Preface.
A.S. Bregman, Psychological Data and Computational ASA.
J. Rouat, M. Garcia, A Prototype Speech Recognizer Based on Associative Learning and Nonlinear Speech Analysis.
M. Slaney, A Critique of Pure Audition.
R. Meddis, L. O'Mard, Psychophysically Faithful Methods for Extracting Pitch.
F. Berthommier, C. Lorenzi, Implications of Physiological Mechanisms of Amplitude Modulation Processing for Modelling Complex Sounds Analysis and Separation.
D. Wang, Stream Segregation Based on Oscillatory Correlation.
G.J. Brown, M. Cooke, Temporal Synchronization in Neural Oscillator Model of Primitive Auditory Stream Segregation.
F. Klassner, V. Lesser, H. Nawab, The IPUS Blackboard Architecture as a Framework for Computational Auditory Scene Analysis.
K. Kashino, K. Nakadai, T. Kinoshita, H. Tanaka, Application of the Bayesian Probability Network to Music Scene Analysis.
D.J. Godsmark, G.J. Brown, Context-Sensitive Selection of Competing Auditory Organizations: A Blackboard Model.
M. Goto, Y. Muraoka, Musical Understanding at the Beat Level: Real-Time Beat Tracking for Audio Signals.
S.H. Nawab, C.Y. Espy-Wilson, R. Mani, N.N. Bitar, Knowledge-Based Analysis of Speech Mixed With Sporadic Environmental Sounds.
T. Nakatani, M. Goto, T. Ito, H.G. Okuno, Multiagent Based Binaural Sound Stream Segregation.
M.K. Bhandaru, V.R. Lesser, Discrepancy Directed Model Acquisition for Adaptive Perceptual Systems.
A. Fishbach, Auditory Scene Analysis: Primary Segmentation and Feature Estimation.
J.W. Grabke, J. Blauert, Cocktail Party Processors Based on Binaural Models.
D. Ellis, D. Rosenthal, Midlevel Representations for Computational Auditory Scene Analysis: The Weft Element.
L. Solbach, R. Wöhrmann, J. Kliewer, The Complex-Valued Continuous Wavelet Transform as a Preprocessor for Auditory Scene Analysis.
N. Saint-Arnaud, K. Popat, Analysis and Synthesis of Sound Textures.
S.M. Boker, Predicting the Grouping of Rhythmic Sequences Using Local Estimators of Information Content.
B.L. Karlsen, G.J. Brown, M. Cooke, M. Crawford, P. Green, S. Renals, Analysis of a Simultaneous-Speaker Sound Corpus.
H. Kawahara, Hearing Voice: Transformed Auditory Feedback Effects on Voice Pitch Control.
L. Wyse, S.W. Smoliar, Toward Content-Based Audio Indexing and Retrieval and a New Speaker Discrimination Technique.
E.D. Scheirer, Using Musical Knowledge to Extract Expressive Performance Information from Audio Recordings.

Biographical note

Rosenthal, David F.; Okuno, Hiroshi G.

Description

This book is a collection of papers representative of a growing body of work in computational auditory scene analysis. Representing some of the most advanced work on computers understanding speech, it covers efforts to integrate speech and nonspeech understanding in computer systems.