Identification of Fencing Athletes Based on Anthropometric Measurements Using MediaPipe Pose

  • Bagas Alif Fimaskoro School of Applied Science, Telkom University, Bandung, Indonesia
  • Suci Aulia School of Applied Science, Telkom University, Bandung, Indonesia
  • Dery Rimasa Sports Physical Coaching Study Program, Universitas Pendidikan Indonesia, Bandung, Indonesia
Keywords: Anthropometry, Image Processing, Pose Detection, Fencing, Talent Identification

Abstract

Digital technology has increasingly benefited sports, including anthropometric measurements that characterize an athlete's physical capability. Its adoption must continue, particularly at the National Sports Committee of Indonesia (Komite Olahraga Nasional Indonesia, KONI) of Bandung City. This study proposed a technique for identifying and classifying fencing athletes' talents, developing a methodology for evaluating sports talent based on anthropometric measurements of athletes' bodies using a pose detection approach. Fencing and nonfencing athletes at KONI Bandung City were categorized using this talent identification. The study used a dataset of 36 body posture images from athletes of various sports. These images were in JPEG format at a resolution of 3,024 × 4,032 pixels and were acquired using a Canon EOS 1300D camera. Four landmark points, which are commonly used as measurement components at KONI, were utilized to categorize fencing athletes: the shoulder (S), elbow (E), index (I), and hip (H) landmarks. Testing was done under three different dataset scenarios. Of all scenarios, scenario 2 achieved the highest accuracy, categorizing fencing and nonfencing athletes with an accuracy rate of 89% and an average processing time of less than 3 s per image.
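The measurement step implied by the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes MediaPipe Pose's standard landmark indexing (left shoulder = 11, left elbow = 13, left index = 19, left hip = 23) and its normalized image-relative coordinates, and computes segment lengths between the four landmark points (S, E, I, H); the actual KONI measurement conventions and the classification rule are not reproduced here.

```python
import math

# MediaPipe Pose landmark indices (left side) for the four
# measurement points named in the paper: shoulder, elbow, index, hip.
LEFT_SHOULDER, LEFT_ELBOW, LEFT_INDEX, LEFT_HIP = 11, 13, 19, 23

def segment_length(a, b):
    """Euclidean distance between two (x, y) landmark coordinates."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def arm_and_torso_lengths(landmarks):
    """Compute upper-arm (S-E), forearm-plus-hand (E-I), and torso (S-H)
    lengths from a dict mapping landmark index -> (x, y) coordinates."""
    s, e = landmarks[LEFT_SHOULDER], landmarks[LEFT_ELBOW]
    i, h = landmarks[LEFT_INDEX], landmarks[LEFT_HIP]
    return {
        "upper_arm": segment_length(s, e),
        "forearm_hand": segment_length(e, i),
        "torso": segment_length(s, h),
    }

# Hypothetical normalized coordinates in the [0, 1] image-relative
# units that MediaPipe Pose returns (illustrative values only).
example = {11: (0.40, 0.30), 13: (0.48, 0.42), 19: (0.58, 0.52), 23: (0.42, 0.55)}
print(arm_and_torso_lengths(example))
```

In a full pipeline, `landmarks` would come from `mediapipe.solutions.pose` run on each athlete image, and the resulting segment lengths would feed the fencing/nonfencing categorization.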


Published
2024-02-02
How to Cite
Bagas Alif Fimaskoro, Suci Aulia, & Dery Rimasa. (2024). Identification of Fencing Athletes Based on Anthropometric Measurements Using MediaPipe Pose. Jurnal Nasional Teknik Elektro dan Teknologi Informasi, 13(1), 11-17. https://doi.org/10.22146/jnteti.v13i1.8145