A New Visual Annotation Method for Improving the Accuracy of a Camera-Based Indoor Positioning System
Abstract
Indoor positioning systems (IPS) play an important role in applications such as robot navigation, asset tracking, and automated service systems. One promising IPS approach is camera-based positioning using the YOLOv4 algorithm, owing to its relatively low cost and flexible deployment; however, its accuracy remains low. This study improves the accuracy of a camera-based IPS through a new visual annotation approach: the annotation is placed at the midpoint between a person's two feet, which serves as the reference for the object's position on the floor. A custom dataset of 2,000 images was used to train an adapted YOLOv4 model. Evaluation shows improved performance, with a mean average precision (mAP) of 99.19% after 6,000 training iterations. The resulting pixel-to-centimeter conversion ratios are 0.309 cm/pixel (x-axis) and 0.308 cm/pixel (y-axis), with a 62.43% improvement in accuracy and a 69.01% reduction in standard deviation. These findings show that the calibration method using foot-level annotation significantly improves indoor position-estimation accuracy.
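The calibration described in the abstract can be sketched as a simple linear pixel-to-centimeter mapping anchored at the midpoint between the detected feet. The conversion ratios below are the values reported in the abstract; the function names, the choice of origin pixel, and the sample coordinates are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the foot-midpoint calibration described in the abstract.
# The cm/pixel ratios are the reported values; everything else
# (function names, origin, sample points) is assumed for illustration.

CM_PER_PX_X = 0.309  # reported x-axis conversion ratio (cm/pixel)
CM_PER_PX_Y = 0.308  # reported y-axis conversion ratio (cm/pixel)

def feet_midpoint(left_foot, right_foot):
    """Pixel midpoint between the two detected feet, used as the
    floor-position anchor for the annotation."""
    (x1, y1), (x2, y2) = left_foot, right_foot
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def pixels_to_cm(point_px, origin_px=(0, 0)):
    """Convert an image point to floor coordinates in cm, relative to
    a calibrated origin pixel (assumed, e.g. a marked room corner)."""
    dx = point_px[0] - origin_px[0]
    dy = point_px[1] - origin_px[1]
    return (dx * CM_PER_PX_X, dy * CM_PER_PX_Y)

# Example: feet detected at pixel coordinates (400, 700) and (440, 710)
mid = feet_midpoint((400, 700), (440, 710))   # (420.0, 705.0)
pos_cm = pixels_to_cm(mid)
```

In practice the origin and the two ratios would come from a floor-grid calibration step; a single linear ratio per axis assumes the camera views the floor without strong perspective distortion.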

DOI: http://dx.doi.org/10.31544/jtera.v10.i1.2025.105-112


Copyright (c) 2025 JTERA (Jurnal Teknologi Rekayasa)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright @2016-2025 JTERA (Jurnal Teknologi Rekayasa) p-ISSN 2548-737X e-ISSN 2548-8678.