Chinese References
伊彬、林演慶,(2006),視覺影像處理之眼球運動相關研究探討,設計學報,第11卷,第4期。
日本厚生労働省労働基準局,(2002),新しい「VDT作業における労働衛生管理のためのガイドライン」の策定について。取自:http://www.mhlw.go.jp/houdou/2002/04/h0405-4.html
徐志翔,(2010),探討年齡對文字驗證碼工作之影響,碩士論文,朝陽科技大學工業工程與管理系。
陳玥君,(2011),應用AR教具於單雙子葉植物特徵學習之眼動評估,碩士論文,台南大學數位學習科技學系。
張育銘,(2007),購物網站介面設計樣式對消費者使用性影響之研究,碩士論文,國立成功大學工業設計系。
蔡介立、顏妙琁、汪勁安,(2005),眼球移動測量及在中文閱讀研究之應用,應用心理研究,第28期,91–104頁。
陳學志、賴惠德、邱發忠,(2010),眼球追蹤技術在學習與教育上的應用,教育科學研究期刊,第55卷,第4期,39–68頁。
陳弘庭,(2001),模糊分群方法、語意變數、分群群數關係之研究─以市場區隔為例,碩士論文,國立成功大學工業管理科學系。
English References
Von Ahn, L., Blum, M., & Hopper, N. J. (2004). Telling humans and computers apart (automatically) or how lazy cryptographers do AI. Communications of the ACM, 47, 57–60.
Broadbent, D. E., & Broadbent, M. H. (1980). Priming and the passive/active model of word recognition. In R. S. Nickerson (Ed.), Attention and performance VIII. New York: Academic Press.
Buswell, G. T. (1935). How people look at pictures. Chicago: University of Chicago Press.
Chellapilla, K., Larson, K., Simard, P. Y., & Czerwinski, M. (2005). Building segmentation based human-friendly human interaction proofs (HIPs). Proceedings of the Second International Workshop on Human Interactive Proofs, 1–26.
Chew, M., & Baird, H. S. (2003). BaffleText: A human interactive proof. Proceedings of the 10th IS&T/SPIE Document Recognition and Retrieval Conference, 305–316.
De Graef, P., Christiaens, D., & d'Ydewalle, G. (1990). Perceptual effects of scene context on object identification. Psychological Research, 52(4), 317–329.
Duchowski, A. T. (2002). A breadth-first survey of eye-tracking applications. Behavior Research Methods, Instruments, & Computers, 34(4), 455–470.
Fitts, P. M., Jones, R. E., & Milton, J. L. (1950). Eye movements of aircraft pilots during instrument-landing approaches. Aeronautical Engineering Review, 9(2), 24–29.
Friedman, A. (1979). Framing pictures: The role of knowledge in automatized encoding and memory for gist. Journal of Experimental Psychology: General, 108(3), 316–355.
Haber, R. N., & Schindler, R. M. (1981). Error in Proofreading: Evidence of Syntactic Control of Letter Processing. Journal of Experimental Psychology: Human Perception and Performance, 7(3), 573–579.
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 239–250). Amsterdam: North-Holland Press.
Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50(1), 243–271.
Henderson, J. M., Weeks Jr, P. A., & Hollingworth, A. (1999). The effects of semantic consistency on eye movements during complex scene viewing. Journal of Experimental Psychology: Human Perception and Performance, 25(1), 210–228.
Heuer, H., Hollendiek, G., Kröger, H., & Römer, T. (1989). Die Ruhelage der Augen und ihr Einfluss auf Beobachtungsabstand und visuelle Ermüdung bei Bildschirmarbeit. Zeitschrift für experimentelle und angewandte Psychologie, 36(4), 538–566.
Hoffman, J. E., & Subramaniam, B. (1995). The role of visual attention in saccadic eye movements. Attention, Perception, & Psychophysics, 57(6), 787–795.
Hughes, A., Wilkens, T., Wildemuth, B., & Marchionini, G. (2003). Text or pictures? An eyetracking study of how people view digital video surrogates. Proceedings of the International Conference on Image and Video Retrieval (CIVR 2003), 271–280.
Hyönä, J., Lorch Jr., R. F., & Kaakinen, J. K. (2002). Individual differences in reading to summarize expository text: Evidence from eye fixation patterns. Journal of Educational Psychology, 94(1), 44–55.
Jacob, R. J. K. (1990). What you look at is what you get: Eye movement-based interaction techniques. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Empowering People, 11–18.
Jacob, R. J. K., & Karn, K. S. (2003). Eye tracking in human-computer interaction and usability research: Ready to deliver the promises. In J. Hyönä, R. Radach, & H. Deubel (Eds.), The mind's eye: Cognitive and applied aspects of eye movement research (pp. 573–605). Amsterdam: Elsevier.
Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: a review. ACM computing surveys (CSUR), 31(3), 264–323.
Just, M. A., & Carpenter, P. A. (1980). A theory of reading: from eye fixations to comprehension. Psychological Review, 87(4), 329–354.
Kolupaev, A., & Ogijenko, J. (2008). CAPTCHAs: Humans vs. bots. IEEE Security & Privacy, 6(1), 68–70.
Kotval, X. P., & Goldberg, J. H. (1998). Eye movements and interface components grouping: An evaluation method. Proceedings of the 42nd Annual Meeting of the Human Factors and Ergonomics Society, 486–490.
Lee, Y. L., & Hsu, C. H. (2011). Usability study of text-based CAPTCHAs. Displays, 32(2), 81–86.
Loftus, G. R., & Mackworth, N. H. (1978). Cognitive determinants of fixation location during picture viewing. Journal of Experimental Psychology: Human Perception and Performance, 4(4), 565–572.
Mackworth, N. H., & Morandi, A. J. (1967). The gaze selects informative details within pictures. Attention, Perception, & Psychophysics, 2(11), 547–552.
McClelland, J. L., & Johnston, J. C. (1977). The role of familiar units in perception of words and nonwords. Attention, Perception, & Psychophysics, 22(3), 249–261.
McConkie, G. W., & Rayner, K. (1975). The span of the effective stimulus during a fixation in reading. Perception & Psychophysics, 17(6), 578–586.
Mori, G., & Malik, J. (2003). Recognizing objects in adversarial clutter: Breaking a visual CAPTCHA. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, 134–141.
Nakayama, K., Shimojo, S., & Silverman, G. H. (1989). Stereoscopic depth: its relation to image segmentation, grouping, and the recognition of occluded objects. Perception, 18(1), 55–68.
Pal, N. R., & Bezdek, J. C. (1995). On cluster validity for the fuzzy c-means model. IEEE Transactions on Fuzzy Systems, 3(3), 370–379.
Pal, N. R., & Bezdek, J. C. (1997). Correction to "On cluster validity for the fuzzy c-means model". IEEE Transactions on Fuzzy Systems, 5(1), 152–153.
Pan, B., Hembrooke, H. A., Gay, G. K., Granka, L. A., Feusner, M. K., & Newman, J. K. (2004). The determinants of web page viewing behavior: an eye-tracking study. Proceedings of the 2004 symposium on Eye tracking research & applications, 147–154.
Pelz, J. B., Canosa, R., & Babcock, J. (2000). Extended tasks elicit complex eye movement patterns. Proceedings of the 2000 symposium on Eye tracking research & applications, 37–43.
Pernice, K., & Nielsen, J. (2009). Eyetracking methodology: How to conduct and evaluate usability studies using eyetracking. USA: Nielsen Norman Group, 1–163.
Rayner, K. (Ed.). (1992). Eye movements and visual cognition: Scene perception and reading (pp. 428–448). New York: Springer-Verlag.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological bulletin, 124(3), 372–422.
Rayner, K., & Pollatsek, A. (1989). The psychology of reading. Englewood Cliffs, NJ: Prentice-Hall.
Rayner, K., Rotello, C. M., Stewart, A. J., Keir, J., & Duffy, S. A. (2001). Integrating text and pictorial information: Eye movements when looking at print advertisements. Journal of Experimental Psychology: Applied, 7(3), 219–226.
Shirali-Shahreza, M., & Shirali-Shahreza, S. (2008). CAPTCHA systems for disabled people. Proceedings of the 4th International Conference on Intelligent Computer Communication and Processing, 319–322.
Von Ahn, L., Blum, M., Hopper, N., & Langford, J. (2003). CAPTCHA: Using hard AI problems for security. Advances in Cryptology—EUROCRYPT 2003, 294–311.
Von Ahn, L., Maurer, B., McMillen, C., Abraham, D., & Blum, M. (2008). reCAPTCHA: Human-based character recognition via web security measures. Science, 321, 1465–1468.
Wooding, D. S. (2002). Fixation maps: quantifying eye-movement traces. Proceedings of the 2002 symposium on Eye tracking research & applications, 31–36.
Yan, J., & El Ahmad, A. S. (2008). Usability of CAPTCHAs or usability issues in CAPTCHA design. Proceedings of the 4th symposium on Usable privacy and security, 44–52.
Yarbus, A. L. (1967). Eye movements during perception of complex objects. In Eye movements and vision (pp. 171–196). New York: Plenum Press.