Chinese References
[1] 韓寶強,「音的歷程」,中國文聯出版社,2003。
[2] 蕭雅文,「聽力學導論」,五南圖書出版公司,第8-31頁,2008。
[3] 詹智閔,「小波分析在音樂訊號上的應用」,生醫電資所,2012。
[4] 徐茂銘,「漫談聽力及聽力障礙」,1977。http://210.60.224.4/ct/content/1977/00110095/0006.htm
[5] Henry Doktorski,「在線音樂理論:音調、音質與音色」,2012。
http://www.dolmetsch.com/musictheory27.htm
[6] 張智星,「音訊處理與辨識」,2006。
http://neural.cs.nthu.edu.tw/jang/books/audiosignalprocessing/index.asp
[7] 林信鋒、蔡正富,「植基於離散小波轉換之聲音浮水印技術」,東華大學,2004。
[8] 劉超群,「雙重任務干擾與資訊處理模式」,國立中央大學機械工程學系,2010。
[9] 許勝雄、彭游、吳水丕,「人因工程」,滄海書局,第118-119頁,2004。
English References
von Ahn, L., Blum, M., & Hopper, N. J. (2004). Telling humans and computers apart (automatically) or how lazy cryptographers do AI. Communications of the ACM, 47, 57–60.
Brokx, J. P. L., & Nooteboom, S. G. (1982). Intonation and the perceptual separation of simultaneous voices. Journal of Phonetics, 10, 23–36.
Arons, B. (2008). A review of the cocktail party effect. MIT Media Lab, Conversational Computer Systems.
Bigham, J. P., & Cavender, A. C. (2009). Evaluating existing audio CAPTCHAs and an interface optimized for non-visual use. Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI '09 (pp. 1829–1838).
Cherry, E. C. (1953). Some experiments on the recognition of speech, with one and with two ears. Journal of the Acoustical Society of America, 25(5), 975–979.
Chan, T.-Y. (2003). Using a text-to-speech synthesizer to generate a reverse Turing test. 15th IEEE International Conference on Tools with Artificial Intelligence, 2003. Proceedings (pp. 226–232).
Chellapilla, K., Larson, K., Simard, P. Y., & Czerwinski, M. (2005). Building segmentation-based human-friendly human interaction proofs (HIPs). Proceedings of the Second International Workshop on Human Interactive Proofs, May 2005, 1–26.
Cainer, K. E., James, C., & Rajan, R. (2008). Learning speech-in-noise discrimination in adult humans. Hearing Research, 238(1–2), 155–164.
Du, Y., Kong, L., Wang, Q., Wu, X., & Li, L. (2011). Auditory frequency-following response: A neurophysiological measure for studying the “cocktail-party problem”. Neuroscience & Biobehavioral Reviews, 35(10), 2046–2057.
Helenius, R., & Hongisto, V. (2004). The effect of acoustical improvement of an open-plan office on workers. Proceedings of Inter-Noise 2004, Paper 674, 21–25 August.
Gao, H., Liu, H., Yao, D., Liu, X., & Aickelin, U. (2010). An audio CAPTCHA to distinguish humans from computers. 2010 Third International Symposium on Electronic Commerce and Security (ISECS) (pp. 265–269).
Tam, J., Simsa, J., Hyde, S., & von Ahn, L. (2007). Breaking audio CAPTCHAs. Computer Science Department, Carnegie Mellon University.
Tam, J., Simsa, J., Huggins-Daines, D., von Ahn, L., & Blum, M. (2008). Improving audio CAPTCHAs. Computer Science Department, Carnegie Mellon University.
Killion, M. C., Niquette, P. A., & Gudmundsen, G. I. (2004). Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America, 116, 2395–2405.
Kolupaev, A., & Ogijenko, J. (2008). CAPTCHAs: Humans vs. Bots. IEEE Security & Privacy, 6(1), 68-70.
Sauer, G., Hochheiser, H., Feng, J., & Lazar, J. (2008). Towards a universally usable CAPTCHA. Proceedings of the Symposium on Accessible Privacy and Security.
Soupionis, Y., Tountas, G., & Gritzalis, D. (2009). Audio CAPTCHA for SIP-based VoIP. Emerging Challenges for Security, Privacy and Trust, IFIP Advances in Information and Communication Technology (Vol. 297, pp. 25–38).
Venetjoki, N., Kaarlela-Tuomaala, A., Keskinen, E., & Hongisto, V. (2006). The effect of speech and speech intelligibility on task performance. Ergonomics, 49(11), 1068–1091.
Wickens, C. D. (1984). Engineering psychology and human performance. Upper Saddle River, NJ.
Yan, J., & El Ahmad, A. S. (2008). Usability of CAPTCHAs or usability issues in CAPTCHA design. Proceedings of the 4th Symposium on Usable Privacy and Security, 44–52.