Neural Network Pruning in Unsupervised Aspect Detection based on Aspect Embedding

https://doi.org/10.22146/ijccs.72981

Muhammad Haris Maulana(1*), Masayu Leylia Khodra(2)

(1) Master Program in Informatics, School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Bandung
(2) School of Electrical Engineering and Informatics, Institut Teknologi Bandung, Bandung
(*) Corresponding Author

Abstract


Aspect detection systems for online reviews, especially those based on unsupervised models, are considered a strategically better way to process online reviews, which generally form very large collections of unstructured data. Deep learning models based on aspect embedding have been designed for this problem; however, they still rely on redundant word embeddings and are sensitive to initialization, which can significantly affect model performance. In this research, a pruning approach is used to reduce the redundancy of the deep learning model's connections and is expected to produce a model with similar or better performance. This research includes several experiments comparing the results of pruning the model's network weights based on the general neural network pruning strategy and on the lottery ticket hypothesis. The results show that pruning the unsupervised aspect detection model can, in general, produce smaller submodels with similar performance, even with a significant share of the weights pruned: our sparse model with 80% of its total weights pruned performs similarly to the original model. Our current pruning implementation, however, has not been able to produce sparse models with better performance.
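Below is a minimal sketch, in PyTorch, of the two pruning strategies the abstract compares: general magnitude pruning (train, prune, fine-tune) and lottery-ticket-style pruning (prune, rewind the surviving weights to their initialization, retrain). The tiny stand-in model, the `train` placeholder, and the layer sizes are illustrative assumptions, not the authors' exact architecture or schedule; only the `torch.nn.utils.prune` calls are real library APIs, and the 80% sparsity mirrors the figure reported in the abstract.

```python
# Minimal sketch (PyTorch) of the two pruning strategies compared here.
# The model below is a stand-in for the aspect-embedding detector (the real
# model is an attention-based autoencoder over word embeddings); `train` is
# a placeholder for the unsupervised training loop. Both are assumptions.
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(200, 200), nn.ReLU(), nn.Linear(200, 14))
initial_state = copy.deepcopy(model.state_dict())  # saved for rewinding

def train(model):
    """Placeholder for the unsupervised reconstruction training loop."""

# --- Strategy 1: general pruning (train -> prune -> fine-tune) -----------
train(model)
for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 80% of weights with the smallest magnitudes.
        prune.l1_unstructured(module, name="weight", amount=0.8)
train(model)  # fine-tune the surviving 20% of the weights

# --- Strategy 2: lottery ticket (prune -> rewind to init -> retrain) -----
# Keep the masks found above, but reset surviving weights to their
# original initialization before retraining the sparse subnetwork.
masks = {name: module.weight_mask.clone()
         for name, module in model.named_modules()
         if isinstance(module, nn.Linear)}
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")   # bake mask in, restore 'weight'
model.load_state_dict(initial_state)     # rewind to the original init
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        prune.custom_from_mask(module, name="weight", mask=masks[name])
train(model)  # retrain the sparse "winning ticket" from its original init
```

The key difference the two strategies expose is what happens after the pruning mask is found: general pruning fine-tunes the already-trained surviving weights, while the lottery ticket procedure rewinds them to their initialization and retrains the sparse subnetwork from there.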


Keywords


aspect embedding; unsupervised aspect detection; pruning; lottery ticket hypothesis











Copyright (c) 2022 IJCCS (Indonesian Journal of Computing and Cybernetics Systems)

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


