Towards an image-computable visual text quality metric using deep neural networks

Human Vision and Electronic Imaging (HVEI)


Image quality metrics have become invaluable tools for image processing and display system development. These metrics are typically developed for and tested on images and videos of natural content. Text, on the other hand, has unique features and supports a distinct visual function: reading. It is therefore not clear whether these image quality metrics are effective or optimal as measures of text quality. Here, we developed a domain-specific image quality metric for text and compared its performance against quality metrics developed for natural images. To develop our metric, we first trained a deep neural network to perform text classification on a dataset of distorted letter images. We then computed the responses of internal layers of the network to uncorrupted and corrupted images of the same text, and used the cosine dissimilarity between these responses as a measure of text quality. Preliminary results indicate that both our model and more established quality metrics (e.g., SSIM) are able to predict general trends in participants' text quality ratings, and in some cases our model outperforms SSIM. We further extended our model to predict response data in a two-alternative forced choice (2AFC) experiment, on which only our model achieved very high accuracy. Lastly, we demonstrated that our model has the potential to generalize to novel perceptual dimensions on which it has not been explicitly trained.
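The core of the proposed metric, comparing internal network responses to clean and distorted text via cosine dissimilarity, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random-weight linear layer stands in for an internal layer of the trained text-classification network, and the image sizes and function names are placeholders.

```python
import numpy as np

def feature_response(image, weights):
    """Toy stand-in for an internal network layer: a linear map followed
    by a ReLU. In the paper's setup this would be a layer of the trained
    text-classification DNN; the weights here are random placeholders."""
    return np.maximum(weights @ image.ravel(), 0.0)

def cosine_dissimilarity(a, b, eps=1e-12):
    """1 minus cosine similarity between two response vectors."""
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)

def text_quality_score(clean, distorted, weights):
    """Higher score = lower predicted text quality for the distorted image."""
    return cosine_dissimilarity(
        feature_response(clean, weights),
        feature_response(distorted, weights),
    )

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 28 * 28))               # placeholder layer weights
clean = rng.random((28, 28))                         # stand-in letter image
noisy = clean + 0.5 * rng.standard_normal((28, 28))  # distorted copy

print(text_quality_score(clean, clean, W))   # ~0 for identical images
print(text_quality_score(clean, noisy, W))   # grows with distortion
```

In practice the responses would be taken from one or more intermediate layers of the trained network (e.g., via forward hooks in a deep learning framework), and scores could be pooled across layers.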
