https://journal.ugm.ac.id/v3/JNTETI/issue/feed Jurnal Nasional Teknik Elektro dan Teknologi Informasi 2026-03-02T14:00:13+07:00 JNTETI Secretariat jnteti@ugm.ac.id Open Journal Systems <p><strong><img style="display: block; margin-left: auto; margin-right: auto;" src="/v3/public/site/images/khanifan/HEADER_JNTETI_2020_1200x180_Background_baru_tanpa_list1.jpg" width="600" height="90" align="center"></strong></p> <p><strong>Jurnal Nasional Teknik Elektro dan Teknologi Informasi</strong>&nbsp;is an international journal accommodating research results in the fields of electrical engineering and information technology.<br><br><strong>Topics cover the fields of:</strong></p> <ul> <li class="show">Information Technology: Software Engineering, Knowledge and Data Mining, Multimedia Technologies, Mobile Computing, Parallel/Distributed Computing, Data Communication and Networking, Computer Graphics, Virtual Reality, Data and Cyber Security.</li> <li class="show">Power Systems: Power Generation, Power Distribution, Power Conversion, Protection Systems, Electrical Material.</li> <li class="show">Signal, System and Electronics: Digital Signal Processing Algorithm, Robotic Systems, Image Processing, Biomedical Engineering, Microelectronics, Instrumentation and Control, Artificial Intelligence, Digital and Analog Circuit Design.</li> <li class="show">Communication System: Management and Protocol Network, Telecommunication Systems, Antenna, Radar, High Frequency and Microwave Engineering, Wireless Communications, Optoelectronics, Fuzzy Sensor and Network, Internet of Things.</li> </ul> <p><strong>Jurnal Nasional Teknik Elektro dan Teknologi Informasi is published four times a year: February, May, August, and November.<br></strong><strong><br>Jurnal Nasional Teknik Elektro dan Teknologi Informasi has been accredited by the Directorate General of Higher Education, Ministry of Education and Culture, Republic of Indonesia, </strong>Number 28/E/KPT/2019 of September 26, 2019 (<strong>Sinta
2</strong>),&nbsp;<strong>Vol. 8 No. 2 Year 2019 up to Vol. 12 No. 2 Year 2023<br></strong><strong><br>Publisher<br></strong>Department of Electrical and Information Engineering, Faculty of Engineering, Universitas Gadjah Mada<br>Jl. Grafika No 2. Kampus UGM Yogyakarta 55281<br>Website: <a href="https://jurnal.ugm.ac.id/v3/JNTETI">https://jurnal.ugm.ac.id/v3/JNTETI</a><br>Email: jnteti@ugm.ac.id<br>Telephone: +62 274 552305</p> https://journal.ugm.ac.id/v3/JNTETI/article/view/20351 Integrasi IoT pada Evaluasi Efisiensi Panel Surya Off-Grid pada Beban Resistif dan Induktif 2026-02-23T13:18:46+07:00 Aripin Triyanto dosen01315@unpam.ac.id Akbar Maulana akbarmaulanasimdigei@gmail.com Joko Tri Susilo dosen02659@unpam.ac.id Yoyok Dwi Setyo Pambudi dosen00789@unpam.ac.id <p class="JNTETIIntisari" style="line-height: 102%;"><span lang="EN-US">Solar energy is one of the most widely used renewable energy sources, particularly in off-grid systems. However, its conversion efficiency depends on the type of load connected. Resistive and inductive loads have different power consumption characteristics, which affects solar panel performance. An analysis of the impact of both load types on solar panel efficiency is therefore required, with internet of things (IoT)-based monitoring for real-time data collection. This study aimed to analyze the effect of resistive (incandescent lamp) and inductive (fan) loads on the efficiency of a 50 Wp off-grid solar panel system and to evaluate the effectiveness of IoT in monitoring system performance. The method involved connecting the solar panel to each of the two load types. The observed parameters included the voltage, current, and power generated by the solar panel as well as the power consumed by each load.
Data were collected using sensors and sent to an IoT platform for remote analysis. The results show that the resistive load yielded higher efficiency, ranging from 44.47% to 49.54%, compared with the inductive load, which only reached 39.61% to 48.12%. The lower efficiency under the inductive load is caused by its reactive component, which reduces the power factor and system performance. It can therefore be concluded that the load type significantly affects the efficiency of off-grid solar panels and that the IoT implementation proved effective for real-time monitoring of system performance.</span></p> 2026-01-30T09:30:22+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/18846 Penggunaan Struktur CSRR untuk Peningkatan Kinerja BPF Berbasis Substrate Integrated Waveguide 2026-02-23T13:23:32+07:00 Junas Haidi junas.haidi@unib.ac.id Novelita Rahayu nove001@brin.go.id Achmad Munir munir@ieee.org <p>This paper explores the use of the complementary split ring resonator (CSRR) structure to improve the performance of a bandpass filter (BPF) based on a substrate integrated waveguide (SIW). The SIW-based BPF was designed on RO 4003C material with a permittivity of 3.38 and a dielectric loss of 0.0027, in a rectangular form measuring 37.5 mm (length) × 35 mm (width) × 1.52 mm (height). It consists of an SIW surface measuring 22.4 mm (length) × 35 mm (width) with 28 vias. Rectangular CSRRs measuring 4 mm (length) × 4 mm (width) were formed on the SIW surface. To optimize the performance of the SIW-based BPF, 12 CSRRs were configured into six rows and two columns. The exploration shows that the spacing between CSRR rows and columns substantially affects the BPF performance: the closer the row spacing between CSRRs, the further the operating frequency of both BPFs shifts toward lower frequencies.
Adding the CSRRs to the SIW-based BPF reduced the coupling coefficient from 0.38 to 0.28. It also raised the transmission coefficient (S<sub>21</sub>) from -2.32 dB to -0.70 dB, improving the BPF performance by 1.62 dB, and lowered the reflection coefficient (S<sub>11</sub>) from -4.56 dB to -10.96 dB, an improvement of 6.4 dB.</p> 2026-02-03T14:28:01+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/23253 The Use and Impact of AI on Students’ Achievement in Mathematics Courses 2026-02-24T09:52:53+07:00 Albinur Limbong alimbong@unai.edu Idauli Simbolon idauli.simbolon@unai.edu <p>Artificial intelligence (AI) programs are now widely used and have been shown to enhance student achievement. The aim of this study was to examine the use and impact of AI on students’ achievement in Mathematics and Statistics courses. The population comprised all 191 students of the Faculty of Information Technology, Universitas Advent Indonesia, who were taking or had taken Mathematics and Statistics courses from the lecturer (the first author of this paper). The study showed that 48% of students admitted that AI could improve their academic performance, while around 44% of students became more active and satisfied in learning mathematics and statistics. However, the students’ achievements were not permanent, as 71.3% of students admitted that their achievements were not lasting. Experimental comparison of quiz results (which were discussed after the tests) with midterm and final exam results (derived from the quizzes) indicated no improvement in test scores and no significant impact of, or correlation between, quiz results and midterm or final exam results.
In fact, the majority (60.6%) of students were unsure, disagreed, or even strongly disagreed that the impact of AI on their learning is more positive than negative. The results of this study suggest that AI is best used as a learning tool, not merely to achieve learning outcomes but to support deep learning that improves critical thinking.</p> 2026-02-24T09:52:51+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/22694 Fuzzy Control of Three-Phase Induction Motor Using Mitsubishi PLC: Experimental Study 2026-02-24T10:11:50+07:00 Nanang Rohadi nanang.rohadi@unpad.ac.id Liu Kin Men liu@phys.unpad.ac.id Akik Hidayat akik@unpad.ac.id <p>Three-phase induction motors are extensively deployed in industrial automation due to their robustness, simplicity, and efficiency. Nevertheless, maintaining speed stability under dynamically varying loads remains a significant control challenge. This study investigated the design and implementation of a fuzzy logic-based speed control system fully embedded within a Mitsubishi FX3U-64M programmable logic controller (PLC), eliminating the dependency on external software platforms. The system integrated a rotary encoder for real-time speed feedback, an FX2N-2DA digital-to-analog converter for signal output, and a Mitsubishi FR-E520 inverter for frequency and voltage regulation. The fuzzy controller utilized two input variables, speed error and its rate of change, which were fuzzified and processed through a Mamdani-type inference mechanism. All fuzzy operations, including rule evaluation and centroid-based defuzzification, were executed using ladder diagram programming via GX Works2. Experimental validation was performed across five speed references (300 to 1200 rpm) and varying mechanical loads (0.5–1.5 kg). The controller consistently achieved steady-state errors below 1% in no-load conditions and below 0.5% under load, with recovery times ranging from 1.5 to 6.75 s.
These results demonstrate that the proposed PLC-based fuzzy controller provides a responsive, accurate, and fully integrable solution for real-time industrial motor speed regulation under variable operating conditions.</p> 2026-02-24T10:11:49+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/23124 IoT-Based Smart Irrigation and Fertilization System with Realtime Cloud Integration 2026-02-25T10:20:29+07:00 Dani Rofianto danirofianto@polinela.ac.id Eko Win Kenali ekowins07@polinela.ac.id Khusnatul Amaliah khusnatul@polinela.ac.id Jaka Fitra jakafitra@polinela.ac.id Halim Fathoni fathoni@polinela.ac.id Tiara Kurnia Khoerunnisa tiarakurniakhoerunnisa@polinela.ac.id Hevia Purnama Sari heviapurnamasari@polinela.ac.id <p>Internet of Things (IoT)-based smart agriculture provides an innovative solution to enhance the efficiency and sustainability of agricultural production amid challenges such as water scarcity, inefficient fertilization, and climate variability. This study developed an IoT-based smart irrigation and fertilization management system integrated with the Firebase Realtime Database for real-time monitoring and control. The system combined soil moisture, air humidity, and temperature sensors with an ESP32 microcontroller, enabling automatic and manual decision-making based on environmental conditions. Users could interact with the system via a responsive web dashboard that provided both data visualization and manual control. System testing conducted in a greenhouse environment demonstrated stable and accurate data acquisition, with average readings of 27.91°C for temperature, 74.75% RH for air humidity, and 71.31% for soil moisture, within ±2.3% of analogue measurements. The relay actuation response time was less than 1 s, while Firebase synchronization achieved over 98% reliability during continuous operation. 
Additionally, the system achieved 20% water savings compared to manual irrigation methods and successfully controlled fertilizer distribution and exhaust ventilation to stabilize humidity. These results confirm that the proposed system supports real-time, precise, and energy-efficient control, suitable for small to medium-scale agricultural applications, especially in areas with unstable internet connectivity. This research establishes a strong foundation for future integration with AI-based systems, such as fuzzy logic and machine learning, to enable fully autonomous, adaptive precision agriculture.</p> 2026-02-25T10:20:27+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/24882 Parameter Optimization of Battery Energy Storage System Considering Degradation Using Reinforcement Learning 2026-02-27T13:52:47+07:00 Muhammad Dzaky Ashidqi Dzakyash@gmail.com Silviana Windasari silviana.windasari@lecturer.sains.ac.id Rahmat Rahmat rahmat.r@lecturer.sains.ac.id Probokusumo Probokusumo probo.p@lecturer.sains.ac.id <p>Accurate and sustainable operation of battery energy storage systems (BESS) is critical for supporting renewable energy integration, ensuring both short-term reliability and long-term asset preservation. This study proposed a reinforcement learning (RL)-based scheduling framework designed to minimize power mismatch while mitigating degradation in lithium-ion batteries. The framework dynamically adapted to fluctuations in photovoltaic generation and residential load, enabling real-time decision-making. The performance was evaluated over a 30-day horizon using three indicators: average power mismatch, cumulative capacity loss, and system stability index (SSI). Results demonstrated that the proposed method achieved near-perfect load balance with an average mismatch of only 0.002 kW, while cumulative degradation remained limited to 0.22% and SSI was maintained at 0.96, reflecting high operational stability. 
The estimated daily degradation rate of 0.0073% corresponded to an annual capacity loss of approximately 2.7%, significantly lower than the 5–6% typically observed in uncontrolled cycling scenarios. Comparative analysis with simulated annealing (SA) and multi-objective genetic algorithm (MOGA) highlighted the balanced performance of the RL method. While MOGA eliminated mismatch at the expense of excessive degradation (0.60%) and SA reduced degradation but suffered from high mismatch (0.012 kW), the RL framework delivered the most balanced trade-off across all metrics. These findings confirm the potential of RL as a practical and sustainable strategy for PV–BESS integration, providing both technical resilience and extended battery lifetime.</p> 2026-02-25T12:49:28+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/23865 Implementation Smart Contract on E-Voting System for Secure and Transparent Student Election 2026-03-02T13:43:18+07:00 Hussain Abdillah Tugas Kelarno hussainkelarno@gmail.com Widi Widayat ww130@ums.ac.id <p>The traditional paper-based voting system for electing student organization leaders has issues related to security, transparency, and trust. This research addressed these issues by implementing a blockchain-based e-voting system utilizing smart contracts to ensure the security and transparency of the voting process. The system was developed using the agile software development life cycle (SDLC) methodology and was tested using black-box testing and the system usability scale (SUS) method to evaluate its functionality and usability. Security testing was conducted through unit testing on the smart contract and block verification within the Sepolia network.
The results showed that the decentralized e-voting system could prevent vote manipulation and detect duplicate voters, as evidenced by the unit testing of the smart contract, which confirmed that recorded votes could not be manipulated and that attempts to submit multiple votes were detected and rejected. Meanwhile, system transparency was demonstrated through direct verification using a block explorer, showing that the entire voting process and the smart contract code were publicly accessible and transparent. The system was successfully simulated on a small scale within a student organization, and usability testing using the SUS method was conducted with 30 respondents. The test resulted in a score of 72 points, indicating that the system was in the good category and was well accepted by users. Therefore, the decentralized approach in this e-voting system has been proven to enhance transparency and overcome security issues in the voting process.</p> 2026-02-27T09:49:15+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/23855 Voice Command Recognition Using CNN-LSTM Parallel Architecture 2026-02-27T09:49:55+07:00 Santoso 7022202011@student.its.ac.id Tri Arief Sardjono sardjono@bme.its.ac.id Djoko Purwanto djoko@its.ac.id <p>A parallel convolutional neural network–long short-term memory (CNN–LSTM) architecture is introduced for voice command recognition, designed to simultaneously extract spatial and temporal features from speech signals. Conventional serial architectures process these components sequentially, which can lead to the loss of temporal information after CNN-based spatial compression. This study aimed to improve recognition performance by preserving complementary spectral and temporal representations through parallel feature modeling.
In the proposed approach, the CNN branch extracted spectral features from Mel-frequency cepstral coefficients (MFCCs), while the LSTM branch independently modeled long-term temporal dependencies from the same input stream. The outputs from both branches were fused through concatenation to form a comprehensive acoustic representation enhancing discrimination between phonetically similar commands. The model was trained and evaluated using a dataset containing eight classes of spoken commands. During training, the proposed model achieved a loss of 0.0186 and an accuracy of 99.87%, indicating effective learning. On the validation and test datasets, the model reached an accuracy of 89.16%, demonstrating stable convergence and consistent generalization performance. Evaluation using precision, recall, and F1 score metrics confirmed balanced recognition across classes, with particularly high accuracy for commands such as “stop,” “right,” and “yes,” while “go” and “no” showed lower accuracy due to acoustic similarity. In conclusion, the proposed parallel CNN–LSTM architecture effectively integrates convolutional and recurrent learning, resulting in improved recognition accuracy and robust performance with strong potential for real-time voice control and embedded applications.</p> 2026-02-27T09:49:55+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/24073 Test-First Protocol for Deriving Unit Tests from Use Case Specifications 2026-02-27T14:08:14+07:00 Muhammad Ridho Kurniawan Pratama muhammadridho@unj.ac.id Deni Utama deniutama@unj.ac.id Rauhil Fahmi rauhilfahmi@unj.ac.id <p class="JNTETIIntisari" style="line-height: 102%;"><span lang="EN-US">Early and systematic derivation of unit test scenarios remains challenging in software engineering, particularly in aligning functional requirements with executable tests. Graduate-level observations reveal that most students operate without granular traceability, standardized structures, or alternate flow testing. 
This study explored a structured test-first protocol that transformed use case specifications into coverage-aware test scenarios by applying object-oriented analysis and design principles. The protocol integrated sequence diagrams via behavioral modeling. Internal logic was extracted from sequence diagrams and visualized using control flow graphs. Basis path testing identified independent paths, serving as foundations for deriving unit test cases using the arrange-act-assert pattern. The “Pay the Order” use case in a hypothetical e-commerce system demonstrated the feasibility of the protocol. Cyclomatic complexity analysis yielded a complexity of 2, indicating that two independent test paths were required for complete coverage. The protocol successfully derived two unit test cases with 100% basis path coverage, demonstrating complete traceability from functional requirements to unit test scenarios with one-to-one mapping between control flow paths and test cases. Results highlight the protocol’s ability to support early verification and validation processes. Unlike prior works focused on automated system-level test generation, this protocol offers a lightweight, human-centric approach promoting testability, traceability, and strong semantic alignment between requirements and implementation. The protocol is well-suited for educational settings and environments that prioritize traceability.
Future research should pursue empirical validation, scalability investigations, semi-automated tool development, domain generalization across paradigms, and longitudinal impact assessment.</span></p> 2026-02-27T14:08:14+07:00 Copyright (c) https://journal.ugm.ac.id/v3/JNTETI/article/view/23919 Geolokasi Nirkabel Graf Faktor RSS Mencapai Tingkat Akurasi MilimeterWave pada 6G 2026-03-02T14:00:13+07:00 Muhammad Reza Kahar Aziz reza.kahar@el.itera.ac.id Heriansyah heri@el.itera.ac.id Syanne Octavia Mabuka syanne.120400103@student.itera.ac.id Muhammad Wahyu Fajrilah muhammadwahyufajrilah@gmail.com Efa Maydhona Saputra maydhona@el.itera.ac.id Anita Pascawati anita.pascawati@brin.go.id Ardiansyah Musa Efendi ardiansyah.musa.efendi@huawei.com <p>This article discusses a wireless geolocation technique that uses a factor graph (FG) based on received signal strength (RSS). Deriving a theoretical bound is important for assessing the effectiveness and validity of a newly proposed technique and for widening the opportunity for technological innovation. In this article, the Cramer-Rao lower bound (CRLB) is obtained by deriving the Jacobian matrix contained in the Fisher information matrix (FIM). This formula, which resides in the main function node of the RSS-based FG (RSS-FG), captures the relationship between the RSS and the target coordinates. The article is further enriched with an investigation of various grid monitoring spot area scenarios for the RSS-FG geolocation technique to reach the accuracy expected for 6G, namely the 1 cm level in a millimeter wave (mmWave) environment. Simulation results show that the derived CRLB has the highest accuracy, making it a valid bound for the RSS-FG-based geolocation technique.
This is indicated by the CRLB exhibiting the lowest root mean squared error (RMSE) curve. The RSS-FG technique can also achieve 1 cm accuracy and reach the CRLB at a signal-to-noise ratio (SNR) of about 20 dB or higher for the &nbsp;m × 1 m grid area scenario. A further finding is that frequency variation has little effect on the accuracy of the RSS-FG technique. This article is expected to provide a deep and clear understanding of FG-based wireless geolocation with RSS measurements as the system input.</p> 2026-02-27T14:54:25+07:00 Copyright (c)
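<p>As a supplementary illustration of the CRLB idea summarized in the last abstract: for RSS-based geolocation under a log-distance path-loss model, the Jacobian of the mean RSS with respect to the target coordinates builds the Fisher information matrix, whose inverse bounds the position-error covariance. The sketch below is a minimal numerical example, not the authors' factor-graph implementation; the function name <code>rss_crlb</code>, the four-anchor layout, the path-loss exponent, and the (deliberately small, illustrative) shadowing deviation are all assumptions.</p>

```python
import numpy as np

def rss_crlb(target, anchors, n_pl=3.0, sigma_db=2.0):
    """CRLB on 2-D position RMSE for RSS-based geolocation.

    Log-distance path-loss model: P_i = P0 - 10*n*log10(d_i), with
    i.i.d. Gaussian shadowing of std `sigma_db` (dB). The FIM is
    J = (1/sigma^2) * sum_i g_i g_i^T, where
    g_i = -(10*n/ln 10) * (x - a_i) / d_i^2
    is the Jacobian row of the mean RSS w.r.t. the target coordinates.
    """
    target = np.asarray(target, dtype=float)
    anchors = np.asarray(anchors, dtype=float)
    diff = target - anchors                     # shape (N, 2)
    d2 = np.sum(diff**2, axis=1)                # squared distances
    k = 10.0 * n_pl / np.log(10.0)
    g = -k * diff / d2[:, None]                 # Jacobian rows, (N, 2)
    fim = (g.T @ g) / sigma_db**2               # Fisher information matrix
    crlb_cov = np.linalg.inv(fim)               # CRLB covariance bound
    return float(np.sqrt(np.trace(crlb_cov)))   # RMSE lower bound (m)

# Four corner anchors on a 1 m x 1 m grid, target near the centre;
# sigma_db here is far smaller than typical shadowing, purely to show
# the regime in which the bound reaches the cm level.
anchors = [[0, 0], [0, 1], [1, 0], [1, 1]]
rmse_bound = rss_crlb([0.4, 0.6], anchors, n_pl=3.0, sigma_db=0.1)
print(f"CRLB RMSE bound: {rmse_bound * 100:.2f} cm")
```

<p>The bound scales linearly with the shadowing deviation and tightens as anchors surround the target, which is consistent with the abstract's observation that cm-level accuracy is reached only at sufficiently high SNR.</p>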