This article is part of the series Patches in Vision.

Open Access Research Article

Models for Patch-Based Image Restoration

Mithun Das Gupta1*, Shyamsundar Rajaram1, Nemanja Petrovic2 and Thomas S. Huang1

Author Affiliations

1 Beckman Institute, Department of Electrical and Computer Engineering (ECE), University of Illinois at Urbana-Champaign (UIUC), IL 61801, USA

2 Google Inc., NY 10011, USA


EURASIP Journal on Image and Video Processing 2009, 2009:641804  doi:10.1155/2009/641804

The electronic version of this article is the complete one and can be found online at: http://jivp.eurasipjournals.com/content/2009/1/641804

Received: 29 April 2008
Accepted: 24 October 2008
Published: 29 January 2009

© 2009 The Author(s).

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


We present a supervised learning approach for object-category-specific restoration, recognition, and segmentation of images blurred with an unknown kernel. The novelty of this work is a multilayer graphical model that unifies the low-level vision task of restoration and the high-level vision task of recognition in a cooperative framework. The graphical model is an interconnected two-layer Markov random field. The restoration layer accounts for the compatibility between sharp and blurred images and models the association between adjacent patches in the sharp image. The recognition layer encodes the entity class and its location in the underlying scene. The potentials are represented using nonparametric kernel densities and are learnt from training data. Inference is performed using nonparametric belief propagation. Experiments demonstrate the effectiveness of our model for the restoration and recognition of blurred license plates as well as face images.
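The message-passing structure the abstract describes can be illustrated with a toy sketch. This is an assumption-laden simplification, not the authors' implementation: the paper uses *nonparametric* belief propagation over kernel densities, whereas the sketch below runs standard loopy belief propagation with discrete candidate labels on a small grid of patch nodes. The unary potential plays the role of the sharp/blurred compatibility, and the pairwise potential plays the role of the adjacent-patch association in the restoration layer.

```python
import numpy as np

# Hypothetical toy setup: a 3x3 grid of patch nodes, K candidate
# "sharp patch" labels per node. All potentials are synthetic.
H, W, K = 3, 3, 4
rng = np.random.default_rng(0)
unary = rng.random((H, W, K))  # stand-in for sharp/blurred compatibility
# stand-in for adjacent-patch association: nearby labels are compatible
pairwise = np.exp(-np.abs(np.subtract.outer(np.arange(K), np.arange(K))))

def loopy_bp(unary, pairwise, iters=20):
    """Discrete loopy belief propagation on a 4-connected grid."""
    H, W, K = unary.shape
    dirs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    # msgs[d][i, j] = message into node (i, j) from its neighbour
    # at (i - di, j - dj), i.e. arriving from direction d
    msgs = {d: np.ones((H, W, K)) for d in dirs}
    for _ in range(iters):
        new = {}
        for di, dj in dirs:
            out = np.ones((H, W, K))
            for i in range(H):
                for j in range(W):
                    si, sj = i - di, j - dj  # sender coordinates
                    if 0 <= si < H and 0 <= sj < W:
                        # product of the sender's unary and all messages
                        # into the sender, excluding the one from (i, j)
                        b = unary[si, sj].copy()
                        for e in dirs:
                            if e != (-di, -dj):
                                b *= msgs[e][si, sj]
                        m = pairwise.T @ b
                        out[i, j] = m / m.sum()  # normalise for stability
            new[(di, dj)] = out
        msgs = new
    beliefs = unary.copy()
    for d in dirs:
        beliefs *= msgs[d]
    return beliefs / beliefs.sum(axis=2, keepdims=True)

beliefs = loopy_bp(unary, pairwise)
labels = beliefs.argmax(axis=2)  # MAP-like patch choice per node
```

In the paper's setting each node's state would be a continuous sharp-image patch with kernel-density potentials, so the sums over discrete labels above are replaced by sampling-based nonparametric message updates.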
