Test-First Protocol for Deriving Unit Tests from Use Case Specifications
Abstract
Early and systematic derivation of unit test scenarios remains challenging in software engineering, particularly in aligning functional requirements with executable tests. Observations at the graduate level reveal that most students operate without granular traceability, standardized structures, or alternate-flow testing. This study explored a structured test-first protocol that transformed use case specifications into coverage-aware test scenarios by applying object-oriented analysis and design principles. The protocol integrated behavioral modeling through sequence diagrams: internal logic was extracted from the sequence diagrams and visualized using control flow graphs. Basis path testing then identified independent paths, which served as the foundation for deriving unit test cases using the arrange-act-assert pattern. The “Pay the Order” use case in a hypothetical e-commerce system demonstrated the feasibility of the protocol. Cyclomatic complexity analysis yielded a complexity of 2, indicating that two independent test paths were required for complete coverage. The protocol successfully derived two unit test cases with 100% basis path coverage, demonstrating complete traceability from functional requirements to unit test scenarios with a one-to-one mapping between control flow paths and test cases. The results highlight the protocol’s ability to support early verification and validation processes. Unlike prior work focused on automated system-level test generation, this protocol offers a lightweight, human-centric approach that promotes testability, traceability, and strong semantic alignment between requirements and implementation. The protocol is well suited to educational settings and environments that prioritize traceability. Future research should pursue empirical validation, scalability investigations, semi-automated tool development, domain generalization across paradigms, and longitudinal impact assessment.
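To make the derivation concrete, the mapping from basis paths to arrange-act-assert tests described above can be sketched as follows. This is a minimal illustration, not the paper’s actual implementation: the function name `pay_order`, its signature, and the balance-check logic are hypothetical stand-ins for a payment routine with a single decision node (cyclomatic complexity 2), so exactly two independent paths — and two tests — achieve full basis path coverage.

```python
# Hypothetical "Pay the Order" logic with one decision node, giving a
# cyclomatic complexity V(G) = 2 and therefore two independent basis paths.
def pay_order(order_total, account_balance):
    # Decision node: sufficient funds vs. insufficient funds.
    if account_balance >= order_total:
        return {"status": "PAID", "balance": account_balance - order_total}
    return {"status": "REJECTED", "balance": account_balance}

# One arrange-act-assert unit test per basis path (one-to-one mapping).
def test_pay_order_path1_sufficient_funds():
    # Arrange
    total, balance = 50, 100
    # Act
    result = pay_order(total, balance)
    # Assert
    assert result == {"status": "PAID", "balance": 50}

def test_pay_order_path2_insufficient_funds():
    # Arrange
    total, balance = 100, 50
    # Act
    result = pay_order(total, balance)
    # Assert
    assert result == {"status": "REJECTED", "balance": 50}

test_pay_order_path1_sufficient_funds()
test_pay_order_path2_insufficient_funds()
```

Because each test exercises exactly one independent path of the control flow graph, the pair together yields 100% basis path coverage for this routine.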
© Jurnal Nasional Teknik Elektro dan Teknologi Informasi, under the terms of the Creative Commons Attribution-ShareAlike 4.0 International License.


