Food wastage is a problem that affects all demographics and regions of the world. Each year, approximately one-third of the food produced for human consumption is thrown away. In an effort to track and reduce food waste in the commercial sector, some companies use third-party devices that collect data to analyze their individual contributions to the global problem. These devices track the type of food wasted (such as vegetables, fruit, boneless chicken, or pasta) along with its weight. Some devices also allow the user to leave the food in a kitchen container while it is weighed, so the container weight must also be accounted for. Through the use of these devices, a company can better understand the amount and type of food being wasted, along with its cost and estimated CO2 impact. Unfortunately, data collection is often a manual process that requires a human to identify what is being thrown away at the time of the event. Manual data entry is prone to user error, which in turn leads to less accurate trending of waste and overall environmental impact.
In this thesis, convolutional neural networks (CNNs) are trained and evaluated on a novel food waste dataset to assist in automating food waste classification. To fully realize automation, both food waste and container classifiers are required. A major contribution of this work is to highlight the value of cleaning the test data, while also emphasizing the importance of leaving it untouched until the very end to avoid introducing bias into the estimate of the out-of-sample error. Another major contribution is to test the feasibility of learning on commercial food waste datasets. Following the recent successes of others in machine learning, best known practices are outlined and applied in dataset creation, network design, and performance evaluation. Performance metrics used include classification accuracy, micro-averaged precision and recall, F-score, Grad-CAM visualizations, and the confusion matrix. The resulting models are high-performing, but they do have some limitations, such as the inability to effectively classify instances with mixed foods and to fully distinguish container depth. Specifically, the food waste model achieved top-1 accuracy of 90.7% and top-5 accuracy of 98.8%, excluding mixed instances. The container model that attempts to distinguish between different depths achieves top-1 accuracy of 70.2% and top-5 accuracy of 94.6%, while the model that ignores depth achieves 84.6% and 98.6% accuracy, respectively.
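To make the reported metrics concrete, the following is a minimal sketch of how top-k accuracy and micro-averaged precision/recall can be computed for a multiclass classifier. The function and variable names are illustrative only and are not taken from the thesis code.

```python
def top_k_accuracy(scores, labels, k=1):
    """Fraction of instances whose true label is among the k highest-scoring classes."""
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k largest scores for this instance
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

def micro_precision_recall(preds, labels, n_classes):
    """Micro-averaging pools TP/FP/FN across all classes before dividing."""
    tp = fp = fn = 0
    for c in range(n_classes):
        tp += sum(1 for p, y in zip(preds, labels) if p == c and y == c)
        fp += sum(1 for p, y in zip(preds, labels) if p == c and y != c)
        fn += sum(1 for p, y in zip(preds, labels) if p != c and y == c)
    return tp / (tp + fp), tp / (tp + fn)

# Toy example: 3 instances, 3 classes
scores = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6], [0.4, 0.5, 0.1]]
labels = [0, 1, 1]
print(top_k_accuracy(scores, labels, k=1))  # 2 of 3 correct
print(top_k_accuracy(scores, labels, k=2))  # all 3 within top 2
```

Note that for single-label multiclass problems, micro-averaged precision and recall both reduce to overall classification accuracy; the distinction matters more for multi-label settings.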
In future work, more sophisticated networks could be trained to perform other computer vision tasks, such as semantic and instance segmentation. Through segmentation, images with mixed foods could be better classified. There may also be opportunities to reduce model size and inference time with deep compression. Other recommendations for improving the food waste and container classification models are also provided.
|Committee:||Lipor, John; Wan, Eric A.|
|School:||Portland State University|
|Department:||Electrical and Computer Engineering|
|School Location:||United States -- Oregon|
|Source:||MAI 81/7(E), Masters Abstracts International|
|Subjects:||Electrical engineering, Computer science, Artificial intelligence|
|Keywords:||Classification accuracy, Convolutional neural networks, Deep learning, EfficientNet, Food Waste|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved