17 July 2012
My main research interests include machine learning, image processing and computer vision, with a focus on sensing and modelling multimodal human affective states and interaction signals. The interdisciplinary nature of my research makes it all the more challenging and interesting. I particularly enjoy the fact that the techniques I am currently working on are being adopted by industry and introduced into people's daily lives soon after we develop them.
The thought that my work has immediate application in the real world keeps me focused. Automatically detecting non-posed facial behaviour in naturalistic contexts has become an increasingly important research area. It draws on computer vision, machine learning and the behavioural sciences, and has many applications: practical home and health support appliances, intelligent living and working environments, interactive devices such as game companions and robots, tutoring systems, and security support systems.
Image enhancement is one of the most interesting and visually appealing areas of image processing. Its aim is to improve the perception of information in images for human viewers, or to provide restored input for other automated image processing tasks. Illumination problems in general, and directional illumination problems in particular, are the biggest challenges. Encoding natural lighting changes is a fundamental and ubiquitous human perceptual skill: our brains cope with illumination variations easily, yet they remain a real problem in computer vision. Furthermore, more research is needed to reduce the complexity and computational cost of the developed algorithms and to make them real-time capable, by employing refined multi-grid approaches and by implementing them on parallel hardware architectures.
Existing image enhancement algorithms amplify noise when they amplify weak edges, since they cannot distinguish between the two image features. However, weak edges are geometric structures and noise is not, so in this Letter we propose a nonsubsampled contourlet transform to disambiguate them. Low-contrast images are transformed into multiscale and multidirectional contour information, and a nonlinear mapping function modifies the contour coefficients at each level. The enhancement is achieved by amplifying weak edges and suppressing background noise while adjusting the dynamic range. Furthermore, the proposed algorithm can be applied effectively to both grey-level and colour images under diverse illumination conditions, without any parameter tuning.
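The coefficient-mapping idea can be illustrated in miniature. The sketch below is not the Letter's method: it substitutes a simple Laplacian-pyramid decomposition for the nonsubsampled contourlet transform (which requires directional filter banks beyond a short example), and the `noise_sigma` and `gain` thresholds are illustrative assumptions. It shows only the general principle described above: subband coefficients below an estimated noise level are attenuated, coefficients in the weak-edge range are amplified, and strong edges are left alone.

```python
import numpy as np

def downsample(img):
    # 2x box-filter decimation -- a stand-in for a proper low-pass filter
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def upsample(img):
    # nearest-neighbour 2x upsampling, the synthesis counterpart of downsample
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def nonlinear_gain(c, noise_sigma=2.0, gain=2.5):
    # suppress sub-noise coefficients, amplify weak edges, keep strong edges
    # (noise_sigma and gain are illustrative values, not from the Letter)
    mag = np.abs(c)
    return c * np.where(mag < noise_sigma, 0.5,
                        np.where(mag < 10 * noise_sigma, gain, 1.0))

def enhance(img, levels=3):
    # analysis: band-pass "contour" layers plus a coarse residual
    bands, cur = [], np.asarray(img, dtype=float)
    for _ in range(levels):
        h, w = cur.shape
        cur = cur[:h // 2 * 2, :w // 2 * 2]    # keep dimensions even
        low = downsample(cur)
        bands.append(cur - upsample(low))      # band-pass detail layer
        cur = low
    # synthesis: remap each detail layer, rebuild, clip the dynamic range
    out = cur
    for band in reversed(bands):
        out = upsample(out) + nonlinear_gain(band)
    return np.clip(out, 0.0, 255.0)
```

Running `enhance` on a low-contrast step image boosts the step while leaving flat regions untouched; a real contourlet implementation would apply the same remapping per direction as well as per scale.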
The enhancement achieved by the proposed method can be used in the pre-processing stages of many other problems to improve performance, potentially contributing to healthcare technologies, emerging image- and vision-based technologies, and the film and television production industries.
As part of my work in the Computer Vision and Interaction group at Queen Mary University of London, I am also developing a pose- and illumination-robust affect recognition system for high-quality human-robot interaction in the FP7 ICT project LIREC (Living with Robots and Interactive Companions).
Currently, transform-based image enhancement algorithms are regarded as computationally complex. However, over the next few years, as processors gain computing power, research on image enhancement will include more studies of multiscale and multidirectional approaches, since they enable global and local contrast enhancement simultaneously by transforming the signal in the appropriate bands and scales. We therefore expect this work to have a wide impact on consumer electronics and medical imaging products.
The Letter presenting the results on which this interview is based can be found on the IET Digital Library.
Browse or search all papers in the latest or past issues of Electronics Letters on the IET Digital Library.