Jurnal Nasional Teknik Elektro dan Teknologi Informasi https://journal.ugm.ac.id/v3/JNTETI <p><strong><img style="display: block; margin-left: auto; margin-right: auto;" src="/v3/public/site/images/khanifan/HEADER_JNTETI_2020_1200x180_Background_baru_tanpa_list1.jpg" width="600" height="90" align="center"></strong></p> <p><strong>Jurnal Nasional Teknik Elektro dan Teknologi Informasi</strong>&nbsp;is an international journal accommodating research results in the fields of electrical engineering and information technology.<br><br><strong>Topics cover the following fields:</strong></p> <ul> <li class="show">Information technology: Software Engineering, Knowledge and Data Mining, Multimedia Technologies, Mobile Computing, Parallel/Distributed Computing, Data Communication and Networking, Computer Graphics, Virtual Reality, Data and Cyber Security.</li> <li class="show">Power Systems: Power Generation, Power Distribution, Power Conversion, Protection Systems, Electrical Material.</li> <li class="show">Signal, System and Electronics: Digital Signal Processing Algorithm, Robotic Systems, Image Processing, Biomedical Engineering, Microelectronics, Instrumentation and Control, Artificial Intelligence, Digital and Analog Circuit Design.</li> <li class="show">Communication System: Management and Protocol Network, Telecommunication Systems, Antenna, Radar, High Frequency and Microwave Engineering, Wireless Communications, Optoelectronics, Fuzzy Sensor and Network, Internet of Things.</li> </ul> <p><strong>Jurnal Nasional Teknik Elektro dan Teknologi Informasi is published four times a year: February, May, August, and November.<br></strong><strong><br>Jurnal Nasional Teknik Elektro dan Teknologi Informasi has been accredited by the Directorate General of Higher Education, Ministry of Education and Culture, Republic of Indonesia, </strong>Number 28/E/KPT/2019 of September 26, 2019 (<strong>Sinta 2</strong>),&nbsp;<strong>Vol. 8 No. 
2 Year 2023<br></strong><strong><br>Publisher<br></strong>Department of Electrical and Information Engineering, Faculty of Engineering, Universitas Gadjah Mada<br>Jl. Grafika No. 2, Kampus UGM, Yogyakarta 55281<br>Website&nbsp; :&nbsp;&nbsp;<a href="https://jurnal.ugm.ac.id/v3/JNTETI">https://jurnal.ugm.ac.id/v3/JNTETI</a><br>Email&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; :&nbsp;&nbsp; jnteti@ugm.ac.id<br>Telephone&nbsp;&nbsp; :&nbsp; +62 274 552305</p> en-US <p style="text-align: justify;">© <span style="font-weight: 400;">Jurnal Nasional Teknik Elektro dan Teknologi Informasi, under the terms of the</span><a href="https://creativecommons.org/licenses/by-sa/4.0/"> <span style="font-weight: 400;">Creative Commons Attribution-ShareAlike 4.0 International License</span></a><span style="font-weight: 400;">.</span></p> jnteti@ugm.ac.id (JNTETI Secretariat) jnteti@ugm.ac.id (JNTETI Secretariat) Wed, 13 May 2026 08:47:33 +0700 OJS 3.1.2.0 http://blogs.law.harvard.edu/tech/rss 60 Student Behavior Detection Using YOLOv10 for Classroom Engagement Analysis https://journal.ugm.ac.id/v3/JNTETI/article/view/24611 <p>Student engagement is a critical determinant of learning effectiveness, yet manual observation in classroom environments remains labor-intensive, subjective, and difficult to scale. This study examined a student behavior detection framework built on You Only Look Once version 10 (YOLOv10), the latest generation of real-time object detection models. A dataset of 2,600 annotated classroom images covering eight behavioral categories was collected under diverse conditions, including variations in lighting, camera perspectives, and occlusion. Five YOLOv10 variants (n, s, m, l, x) were trained and evaluated using precision, recall, F1 score, and mean average precision (mAP). 
The best-performing configuration achieved an overall mAP@0.5 of 0.821 and mAP@0.5:0.95 of 0.640, with strong performance on the upright (AP = 0.967), bow-head (AP = 0.958), and sleep (AP = 0.943) classes, while more subtle behaviors such as writing (AP = 0.519) and hand-raising (AP = 0.650) proved challenging. Importantly, the system maintained real-time inference speeds of 40 to 88 FPS, depending on the YOLOv10 variant, when evaluated on an RTX 2060 GPU, demonstrating its suitability for deployment in classroom settings. To ensure usability, the optimized YOLOv10 model was integrated into a Streamlit-based interactive dashboard, enabling educators to monitor engagement levels and respond with timely interventions. By combining the state-of-the-art YOLOv10 architecture with real-time behavioral analytics, this work establishes a scalable foundation for intelligent classroom monitoring and contributes to advancing technology-enhanced education.</p> Resa Pramudita, Mochamad Rizal Fauzan, Ilyasa Nafan Faza, Jaja Kustija, Ibnu Hartopo, Muhammad Adli Rizqulloh Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/24611 Tue, 12 May 2026 11:08:51 +0700 Optimizing YOLOv8 Architecture and Augmentation for Efficient License Plate Detection https://journal.ugm.ac.id/v3/JNTETI/article/view/24886 <p class="JNTETIIntisari"><span lang="EN-US">Automatic Number Plate Recognition (ANPR) is crucial for intelligent transportation systems but often falters in real-world conditions due to environmental variations. This study constructed a robust and computationally efficient vehicle license plate detection system that achieved high accuracy under diverse real-world challenges and was deployable on resource-constrained edge hardware for real-time operation. 
The proposed holistic framework integrated three key components: (a) the creation of the Dynamic Vehicle License Plate Dataset (DVLPD) v1.0, containing 866 annotated images with variations in lighting, weather, and camera angles; (b) the implementation of a targeted data augmentation pipeline employing geometric and photometric transformations to enhance model robustness; and (c) the architectural optimization of a You Only Look Once version 8 (YOLOv8) model through pruning, quantization, and hyperparameter tuning specifically for edge deployment. The optimized model achieved a mean average precision (mAP) of 91% on the test set. When deployed on a Raspberry Pi 4 in a prototype parking system, it demonstrated practical viability with an inference latency of 0.4 seconds per frame and an error rate of 4.2%. The results validate that the integration of a diverse dataset, strategic augmentation, and model optimization can yield an accurate and efficient ANPR solution suitable for real-time edge applications. Future work will focus on expanding the dataset to include more extreme conditions for greater generalization.</span></p> Muryan Awaludin, Yoke Lucia R Rehatalanit Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/24886 Tue, 12 May 2026 11:09:15 +0700 A Comparison of SR and CBAM for Optimized Thermal Drone Object Detection https://journal.ugm.ac.id/v3/JNTETI/article/view/24931 <p class="JNTETIIntisari"><span lang="EN-US">Human detection using thermal cameras is particularly useful in certain conditions, such as locating people lost in mountainous areas that are difficult to traverse. Rescue operations are usually conducted by deploying a search and rescue (SAR) team to the location, but this approach is not always effective: such operations can only be carried out under certain conditions and may endanger the SAR team itself. Therefore, one alternative approach is the use of drones equipped with human detection and recognition capabilities. 
In this context, thermal cameras are used because they remain effective in challenging, low-visibility environments, making them suitable for SAR operations. The object detection method used in this study was You Only Look Once version 8 (YOLOv8). This study aimed to compare the effectiveness of integrating enhanced super-resolution generative adversarial networks (ESRGAN) with YOLOv8 against incorporating a convolutional block attention module (CBAM) into the neck architecture of YOLOv8. The performance of ESRGAN with YOLOv8 and CBAM with YOLOv8 was evaluated using precision, mean average precision (mAP), and training loss. Based on the experimental results, the combination of ESRGAN with YOLOv8 outperformed the CBAM-based modification, as indicated by higher precision and mAP values, as well as lower training loss, in the ESRGAN-enhanced YOLOv8 detection framework. The experimental findings highlight that image enhancement using ESRGAN is more effective than CBAM-based modification in improving thermal image-based human detection performance for SAR applications.</span></p> Helfy Susilawati, Akhmad Fauzi Ikhsan, Firman, Arief Suryadi Satyawan, Chandra Rahmana Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/24931 Tue, 12 May 2026 11:09:35 +0700