[1] L. Jiang, B. Qiu, X. Liu, C. Huang, and K. Lin, "Deepfood: food image analysis and dietary assessment via deep model," IEEE Access, vol. 8, pp. 47477–47489, Dec. 2020.
[2] T. T. Tai, D. N. H. Thanh, and N. Q. Hung, "A dish recognition framework using transfer learning," IEEE Access, vol. 10, pp. 7793–7799, Jan. 2022.
[3] R. Z. Tan, X. Chew, and K. W. Khaw, "Quantized deep residual convolutional neural network for image-based dietary assessment," IEEE Access, vol. 8, pp. 111875–111888, Jun. 2020.
[4] H. Liang, G. Wen, Y. Hu, M. Luo, P. Yang, and Y. Xu, "Mvanet: Multi-task guided multiview attention network for Chinese food recognition," IEEE Transactions on Multimedia, vol. 23, pp. 3551–3561, 2020.
[5] C. Liu, Y. Liang, Y. Xue, X. Qian, and J. Fu, "Food and ingredient joint learning for fine-grained recognition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 31, no. 6, pp. 2480–2493, Aug. 2020.
[6] B. Mandal, N. B. Puhan, and A. Verma, "Deep convolutional generative adversarial network-based food recognition using partially labeled data," IEEE Sensors Letters, vol. 3, no. 2, pp. 1–4, Feb. 2018.
[7] B. Zhu, C.-W. Ngo, and W.-K. Chan, "Learning from web recipe-image pairs for food recognition: Problem, baselines and performance," IEEE Transactions on Multimedia, vol. 24, pp. 1175–1185, Oct. 2021.
[8] G. Xiao, Q. Wu, H. Chen, D. Cao, J. Guo, and Z. Gong, "A deep transfer learning solution for food material recognition using electronic scales," IEEE Transactions on Industrial Informatics, vol. 16, no. 4, pp. 2290–2300, Jul. 2019.
[9] B. Arslan, S. Memiş, E. B. Sönmez, and O. Z. Batur, "Fine-grained food classification methods on the uec food-100 database," IEEE Transactions on Artificial Intelligence, vol. 3, no. 2, pp. 238–243, Aug. 2021.
[10] G. Song, Z. Tao, X. Huang, G. Cao, W. Liu, and L. Yang, "Hybrid attention-based prototypical network for unfamiliar restaurant food image few-shot recognition," IEEE Access, vol. 8, pp. 14893–14900, Jan. 2020.
[11] M. N. Razali, E. G. Moung, F. Yahya, C. J. Hou, R. Hanapi, R. Mohamed, and I. A. T. Hashem, "Indigenous food recognition model based on various convolutional neural network architectures for gastronomic tourism business analytics," Information, vol. 12, no. 8, p. 322, Jun. 2021.
[12] S. Jiang, W. Min, L. Liu, and Z. Luo, "Multi-scale multi-view deep feature aggregation for food recognition," IEEE Transactions on Image Processing, vol. 29, pp. 265–276, Jul. 2019.
[13] H. Zhao, K.-H. Yap, A. C. Kot, and L. Duan, "Jdnet: A joint-learning distilled network for mobile visual food recognition," IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 4, pp. 665–675, Jan. 2020.
[14] B. Sainz-De-Abajo, J. M. García-Alonso, J. J. Berrocal-Olmeda, S. Laso-Mangas, and I. De La Torre-Díez, "Foodscan: Food monitoring app by scanning the groceries receipts," IEEE Access, vol. 8, pp. 227915–227924, Dec. 2020.
[15] M. B. Lam, T.-H. Nguyen, and W.-Y. Chung, "Deep learning-based food quality estimation using radio frequency-powered sensor mote," IEEE Access, vol. 8, pp. 88360–88371, May 2020.
[16] P. Zhou, C. Bai, J. Xia, and S. Chen, "Cmrdf: A real-time food alerting system based on multimodal data," IEEE Internet of Things Journal, vol. 9, no. 9, pp. 6335–6349, May 2020.
[17] T. Ilyas, A. Khan, M. Umraiz, Y. Jeong, and H. Kim, "Multi-scale context aggregation for strawberry fruit recognition and disease phenotyping," IEEE Access, vol. 9, pp. 124491–124504, Sep. 2021.
[18] O. M. Lawal, "Yolomuskmelon: quest for fruit detection speed and accuracy using deep learning," IEEE Access, vol. 9, pp. 15221–15227, Jan. 2021.
[19] Z. Liu, J. Wu, L. Fu, Y. Majeed, Y. Feng, R. Li, and Y. Cui, "Improved kiwifruit detection using pre-trained vgg16 with rgb and nir information fusion," IEEE Access, vol. 8, pp. 2327–2336, 2019.
[20] F. A. Kateb, M. M. Monowar, M. A. Hamid, A. Q. Ohi, and M. F. Mridha, "Fruitdet: Attentive feature aggregation for real-time fruit detection in orchards," Agronomy, vol. 11, no. 12, p. 2440, Oct. 2021.
[21] Y. Zhu, X. Zhao, C. Zhao, J. Wang, and H. Lu, "Fooddet: Detecting foods in refrigerator with supervised transformer network," Neurocomputing, vol. 379, pp. 162–171, Feb. 2020.
[22] Y.-C. Liu, D. D. Onthoni, S. Mohapatra, D. Irianti, and P. K. Sahoo, "Deep-learning-assisted multi-dish food recognition application for dietary intake reporting," Electronics, vol. 11, no. 10, p. 1626, Apr. 2022.
[23] X. Xu, L. Wang, M. Shu, X. Liang, A. Z. Ghafoor, Y. Liu, Y. Ma, and J. Zhu, "Detection and counting of maize leaves based on two-stage deep learning with uav-based rgb image," Remote Sensing, vol. 14, no. 21, p. 5388, Oct. 2022.
[24] Q. Cai, J. Li, H. Li, and Y. Weng, "Btbufood-60: Dataset for object detection in food field," in 2019 IEEE International Conference on Big Data and Smart Computing (BigComp), pp. 1–4, Feb. 2019.
[25] J. Qi, X. Liu, K. Liu, F. Xu, H. Guo, X. Tian, M. Li, Z. Bao, and Y. Li, "An improved yolov5 model based on visual attention mechanism: Application to recognition of tomato virus disease," Computers and Electronics in Agriculture, vol. 194, p. 106780, Mar. 2022.
[26] L. Rachakonda, S. P. Mohanty, and E. Kougianos, "ilog: An intelligent device for automatic food intake monitoring and stress detection in the iomt," IEEE Transactions on Consumer Electronics, vol. 66, no. 2, pp. 115–124, Feb. 2020.
[27] J. Li, J. Xiong, and Z. Chen, "Food-agnostic dish detection: A simple baseline," IEEE Access, vol. 9, pp. 125375–125383, Aug. 2021.
[28] D. Pandey, P. Parmar, G. Toshniwal, M. Goel, V. Agrawal, S. Dhiman, L. Gupta, and G. Bagler, "Object detection in Indian food platters using transfer learning with yolov4," in 2022 IEEE 38th International Conference on Data Engineering Workshops (ICDEW), pp. 101–106, May 2022, doi: 10.1109/ICDEW55742.2022.00021.
[29] S. Wang, Y. Liu, Y. Qing, C. Wang, T. Lan, and R. Yao, "Detection of insulator defects with improved resnest and region proposal network," IEEE Access, vol. 8, pp. 184841–184850, Oct. 2020.
[30] R. Nijhawan, A. Batra, O. Loyola-González, M. Kumar, and D. K. Jain, "Food classification of Indian cuisines using handcrafted features and vision transformer network," Available at SSRN 4014907, pp. 1–27, Jan. 2022.
[31] S. Ren, K. He, R. Girshick, and J. Sun, "Faster r-cnn: Towards real-time object detection with region proposal networks," Advances in Neural Information Processing Systems, vol. 28, pp. 1–14, Jun. 2015.
[32] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin transformer: Hierarchical vision transformer using shifted windows," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012–10022, Feb. 2021.
[33] D. Hendrycks and K. Gimpel, "Gaussian error linear units (gelus)," arXiv preprint arXiv:1606.08415, 2016.
[34] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125, Feb. 2017.
[35] S. Qiao, L.-C. Chen, and A. Yuille, "Detectors: Detecting objects with recursive feature pyramid and switchable atrous convolution," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10213–10224, Jun. 2021.
[36] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834–848, Apr. 2017.
[37] M. Chen, K. Dhingra, W. Wu, L. Yang, R. Sukthankar, and J. Yang, "Pfid: Pittsburgh fast-food image dataset," in 2009 16th IEEE International Conference on Image Processing (ICIP), pp. 289–292, Nov. 2009.
[38] H. Hoashi, T. Joutou, and K. Yanai, "Image recognition of 85 food categories by feature fusion," in 2010 IEEE International Symposium on Multimedia, pp. 296–301, Dec. 2010.
[39] L. Bossard, M. Guillaumin, and L. V. Gool, "Food-101 – mining discriminative components with random forests," in European Conference on Computer Vision, vol. 8694, pp. 446–461, Sep. 2014.
[40] G. Ciocca, G. Micali, and P. Napoletano, "State recognition of food images using deep features," IEEE Access, vol. 8, pp. 32003–32017, Feb. 2020.
[41] K. Chen, J. Wang, J. Pang, Y. Cao, Y. Xiong, X. Li, S. Sun, W. Feng, Z. Liu, J. Xu, Z. Zhang, D. Cheng, C. Zhu, T. Cheng, Q. Zhao, B. Li, X. Lu, R. Zhu, Y. Wu, J. Dai, J. Wang, J. Shi, W. Ouyang, C. C. Loy, and D. Lin, "MMDetection: Open mmlab detection toolbox and benchmark," arXiv preprint arXiv:1906.07155, pp. 1–11, Jun. 2019.