Autonomous Mobile Robot That Can Read
Research Laboratory on Mobile Robotics and Intelligent Systems (LABORIUS), Department of Electrical Engineering and Computer Engineering, University of Sherbrooke, Sherbrooke, Quebec J1K 2R1, Canada

The ability to read would surely increase the autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must acquire an image of a message, extract the characters from it, and recognize them as symbols, characters, and words. Running an optical character recognition (OCR) algorithm on a mobile robot, however, brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual messages to read, potentially compensate for its viewpoint of the message, and decode the message using its limited onboard processing capabilities. The robot must also cope with variations in lighting conditions. In this paper, we present our approach and demonstrate that it is feasible for an autonomous mobile robot to read messages of specific colors and fonts in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a 233 MHz Pentium and a Sony EVI-D30 pan-tilt-zoom camera.
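The character-extraction step mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it merely shows one common way to isolate text of a known color (color thresholding followed by connected-component grouping), and all function names and tolerance values below are assumptions.

```python
import numpy as np

def colour_mask(img, target, tol=40):
    """Binary mask of pixels within `tol` (per channel) of the `target` RGB colour."""
    diff = np.abs(img.astype(int) - np.array(target, dtype=int))
    return np.all(diff <= tol, axis=-1)

def connected_components(mask):
    """Group mask pixels into 4-connected components via iterative flood fill.

    Returns one bounding box (y_min, x_min, y_max, x_max) per component;
    each box would be a candidate character region passed on to recognition.
    """
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                stack = [(y, x)]
                seen[y, x] = True
                ys, xs = [], []
                while stack:
                    cy, cx = stack.pop()
                    ys.append(cy)
                    xs.append(cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes
```

In practice, an onboard system would add filters on component size and aspect ratio to reject non-character blobs before recognition, which matters on hardware as limited as the 233 MHz Pentium used in the paper.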
EURASIP Journal on Advances in Signal Processing 2004, 2004:595142 doi:10.1155/S1110865704408142
The electronic version of this article is the complete one and can be found online at: http://asp.eurasipjournals.com/content/2004/17/595142
Received: 18 January 2004
Revisions received: 11 May 2004
Published: 27 December 2004
© 2004 Létourneau et al.