Automatic Lipreading Using Color Lip Images and Principal Component Analysis


The KIPS Transactions: Part B, Vol. 15, No. 3, pp. 229-236, Jun. 2008
DOI: 10.3745/KIPSTB.2008.15.3.229

Abstract

This paper examines the effectiveness of using color images instead of grayscale ones for automatic lipreading. First, we show the effect of color information on the performance of human lipreading. Then, we compare the performance of automatic lipreading using features obtained by applying principal component analysis to grayscale and color images. Experiments with various color representations show that color information is useful for improving the performance of automatic lipreading; the best performance is obtained with the RGB color components, for which the average relative error reductions in clean and noisy conditions are 4.7% and 13.0%, respectively.
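As a rough illustration of the feature-extraction step described in the abstract, the following is a minimal sketch of applying principal component analysis to flattened lip-image frames, using NumPy and scikit-learn. The image size (16x32 pixels), the number of principal components (20), and the function name extract_pca_features are illustrative assumptions, not details taken from the paper.

# Sketch of PCA-based feature extraction from color or grayscale lip images.
# Assumes NumPy and scikit-learn; dimensions and component count are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

def extract_pca_features(lip_images, n_components=20):
    """Flatten each lip image into a vector and project it onto the
    top principal components learned from the whole image set.

    lip_images: array of shape (n_frames, height, width, n_channels),
                e.g. n_channels=3 for RGB or 1 for grayscale.
    Returns an array of shape (n_frames, n_components).
    """
    n_frames = lip_images.shape[0]
    # Each frame becomes one observation: all pixels of all color
    # channels are concatenated into a single vector.
    X = lip_images.reshape(n_frames, -1).astype(np.float64)
    pca = PCA(n_components=n_components)
    return pca.fit_transform(X)

if __name__ == "__main__":
    # Synthetic example: 100 RGB lip frames of size 16x32, plus a
    # grayscale version obtained by averaging the color channels.
    rgb_frames = np.random.rand(100, 16, 32, 3)
    gray_frames = rgb_frames.mean(axis=-1, keepdims=True)
    rgb_features = extract_pca_features(rgb_frames)
    gray_features = extract_pca_features(gray_frames)
    print(rgb_features.shape, gray_features.shape)  # (100, 20) (100, 20)

In this sketch, the only difference between the grayscale and color cases is the length of the flattened pixel vector; the RGB case retains the per-channel information that, according to the abstract, yields the best recognition performance.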




Cite this article
[IEEE Style]
J. S. Lee and C. H. Park, "Automatic Lipreading Using Color Lip Images and Principal Component Analysis," The KIPS Transactions: Part B, vol. 15, no. 3, pp. 229-236, 2008. DOI: 10.3745/KIPSTB.2008.15.3.229.

[ACM Style]
Jong Seok Lee and Cheol Hoon Park. 2008. Automatic Lipreading Using Color Lip Images and Principal Component Analysis. The KIPS Transactions: Part B 15, 3 (2008), 229-236. DOI: 10.3745/KIPSTB.2008.15.3.229.