With about 3.6 million adults in the United States living with visual impairment or blindness, assistive technology is essential to giving these individuals independence in grocery shopping. This thesis presents a new method for detecting and localizing nutrition facts tables (NFTs) on mobile devices more quickly, and from lower-quality inputs, than previous approaches. The method is a drop-in replacement for an existing NFT analysis pipeline and combines multiple image analysis techniques that exploit various properties of standard NFTs.
In testing, the method performed well, producing no false positives with 42% total recall. These results suit real-world use, where input frames must be analyzed as quickly as possible. The new method also opens many avenues for future improvement.
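The abstract does not detail the individual image analysis techniques, but one property of standard NFTs that localization methods commonly exploit is their dense stack of horizontal separator lines. The following is a hypothetical, simplified sketch (not the thesis's actual algorithm) of a horizontal-projection heuristic that flags rows of a binarized image likely to belong to such separators:

```python
# Hypothetical sketch: score each image row by its longest consecutive
# dark run; rows whose longest run spans most of the image width
# resemble the horizontal separator lines of a standard NFT.
# This illustrates the general idea only, not the thesis's method.

def longest_dark_run(row):
    """Length of the longest consecutive run of dark (1) pixels in a row."""
    best = cur = 0
    for px in row:
        cur = cur + 1 if px else 0
        best = max(best, cur)
    return best

def separator_rows(image, min_frac=0.8):
    """Indices of rows whose longest dark run spans >= min_frac of the width.

    `image` is a list of rows of 0/1 pixels (1 = dark).
    """
    width = len(image[0])
    return [i for i, row in enumerate(image)
            if longest_dark_run(row) >= min_frac * width]

# A tiny 6x8 binary image: rows 0 and 3 are near-solid separator lines.
img = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
print(separator_rows(img))  # → [0, 3]
```

A real pipeline would combine a cue like this with others (text-line density, aspect ratio, vertical boundary lines) and run on grayscale camera frames rather than toy binary arrays.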
|Advisor:||Kulyukin, Vladimir A.|
|Committee:||Dyreson, Curtis, Flann, Nick|
|School:||Utah State University|
|School Location:||United States -- Utah|
|Source:||MAI 52/04M(E), Masters Abstracts International|
|Keywords:||Assistive technology, Computer vision, Image recognition, Nutrition facts tables|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved