Identification of Jepara Motifs on Carvings by Utilizing Convolutional Neural Network

  • Sandhopi, Institut Sains dan Teknologi Terpadu Surabaya
  • Lukman Zaman P.C.S.W, Institut Sains dan Teknologi Terpadu Surabaya
  • Yosi Kristian, Institut Sains dan Teknologi Terpadu Surabaya
Keywords: Color Space, Convolutional Neural Network, Jepara Motifs, Transfer Learning, Carving

Abstract

As carving motifs continue to develop, their shapes and variations grow increasingly diverse, which makes it difficult to determine whether a carving bears Jepara motifs. In this paper, a transfer learning method with a modified fully connected (FC) head was used to identify Jepara's distinctive motifs in a carving. The dataset was prepared in three color spaces, i.e., LUV, RGB, and YCrCb. In addition, sliding windows, non-max suppression, and heat maps were utilized to trace the area of the carved object and to identify Jepara motifs. The test results of all weights showed that Xception achieved the highest accuracy on the Jepara motif classification, namely 0.95, 0.95, and 0.94 for the LUV, RGB, and YCrCb color space datasets, respectively. However, when all the model weights were applied to the Jepara motif identification system, ResNet50 outperformed all networks, with motif identification percentages of 84%, 79%, and 80% for the LUV, RGB, and YCrCb color spaces, respectively. These results show that the system can assist in determining whether a carving belongs to the Jepara style by identifying the typical Jepara motifs it contains.
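The abstract describes a pipeline of color-space conversion, an ImageNet-pretrained CNN with a newly trained FC head, and sliding-window scanning whose high-confidence scores are accumulated into a heat map before suppression. The sketch below, in Python with TensorFlow/Keras and OpenCV, illustrates one way such a pipeline could be assembled; it is not the authors' code, and the class count, patch size, stride, confidence threshold, and FC layer sizes are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' settings) of the pipeline in the
# abstract: color-space conversion, transfer learning with a new FC head, and
# sliding-window scoring accumulated into a heat map.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_MOTIF_CLASSES = 5   # assumption: number of Jepara motif classes
PATCH_SIZE = 128        # assumption: classifier input size
STRIDE = 32             # assumption: sliding-window stride

def to_color_space(img_bgr, space="YCrCb"):
    """Convert a BGR image into one of the color spaces used for the datasets."""
    codes = {"RGB": cv2.COLOR_BGR2RGB,
             "LUV": cv2.COLOR_BGR2Luv,
             "YCrCb": cv2.COLOR_BGR2YCrCb}
    return cv2.cvtColor(img_bgr, codes[space])

def build_classifier():
    """ImageNet-pretrained Xception backbone with a new, trainable FC head."""
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=(PATCH_SIZE, PATCH_SIZE, 3), pooling="avg")
    base.trainable = False  # transfer learning: freeze the pretrained backbone
    x = layers.Dense(256, activation="relu")(base.output)
    x = layers.BatchNormalization()(x)
    out = layers.Dense(NUM_MOTIF_CLASSES, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def motif_heat_map(image, model, threshold=0.9):
    """Slide a window over the carving, score each patch, and accumulate
    high-confidence detections into a per-pixel heat map."""
    h, w = image.shape[:2]
    heat = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - PATCH_SIZE + 1, STRIDE):
        for x in range(0, w - PATCH_SIZE + 1, STRIDE):
            patch = image[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
            patch = tf.keras.applications.xception.preprocess_input(
                patch.astype(np.float32))[np.newaxis]
            prob = model.predict(patch, verbose=0).max()
            if prob >= threshold:
                heat[y:y + PATCH_SIZE, x:x + PATCH_SIZE] += prob
    # Thresholding the accumulated heat map stands in for non-max suppression
    # here, merging overlapping windows into single motif regions.
    return heat
```

In practice the same head-replacement step would be repeated for each backbone compared in the paper (e.g., ResNet50 via tf.keras.applications.ResNet50) and trained separately on the LUV, RGB, and YCrCb datasets.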


Published: 2020-12-10