Application of You Only Look Once (YOLO) Method for Sign Language Identification
Abstract
Limited understanding of sign language widens the social gap for deaf people, creating barriers to communication and social interaction. Addressing this challenge requires technology-based solutions that facilitate inclusive communication. Deep learning-based detection methods, particularly the You Only Look Once (YOLO) algorithm, have gained attention for their speed and accuracy in real-time object detection. This research aims to develop and evaluate a YOLO training model for identifying the Indonesian sign language system (sistem isyarat bahasa Indonesia, SIBI). The dataset was obtained from a resource person at the State Special School Prof. Dr. Sri Soedewi Masjchun Sofwan, SH, Jambi, and enriched with additional images collected from external subjects. Augmentation techniques in Roboflow were applied to expand the dataset, and several training schemes were implemented. Model performance was assessed using a confusion matrix, considering both accuracy and indications of overfitting. The results show that the quality and quantity of the training data, as well as the number of training epochs, strongly influence the accuracy of the trained model. The best performance was achieved with 40 primary images per label class, augmented to 60 images, and trained over 24 epochs, yielding a confusion-matrix accuracy of 99.9%. The deployed model recognized SIBI gestures in real time from a webcam with fast processing. Overall, the proposed YOLO-based model successfully identifies sign language in real time and demonstrates strong potential for reducing communication barriers for deaf people. However, further refinement and expansion of the dataset are recommended to improve effectiveness and enable broader real-world application.
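The abstract reports model accuracy read off a confusion matrix. As a minimal illustration of that evaluation step (the matrix values below are illustrative placeholders, not the paper's actual results), overall accuracy is the sum of the diagonal (correct predictions per class) divided by the sum of all entries:

```python
def confusion_matrix_accuracy(matrix):
    """Overall accuracy from a square confusion matrix:
    sum of diagonal entries (correct predictions) / sum of all entries."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Illustrative 3-class matrix: rows = true class, columns = predicted class.
cm = [
    [58, 1, 1],   # class "A": 58 correct, 2 misclassified
    [0, 60, 0],   # class "B": all 60 correct
    [1, 0, 59],   # class "C": 59 correct, 1 misclassified
]
print(round(confusion_matrix_accuracy(cm), 3))  # prints 0.983
```

The same diagonal-over-total computation generalizes to any number of SIBI gesture classes; per-class precision and recall can likewise be derived from the matrix's columns and rows.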
© Jurnal Nasional Teknik Elektro dan Teknologi Informasi, under the terms of the Creative Commons Attribution-ShareAlike 4.0 International License.