Contribute
Apache Unomi Testing
This document outlines how to write tests, which tests are appropriate where, and when tests are run, with some additional information about the testing systems at the bottom.
Testing Scenarios
Ideally, all available tests should be run against a pull request (PR) before it's allowed to be committed to Unomi's GitHub repo. This is not possible, however, due to a combination of time and resource constraints. Running all tests for each PR would take hours or even days using available resources, which would slow down development considerably.
Thus tests are split into pre-commit and post-commit suites. Pre-commit is fast, while post-commit is comprehensive. As their names imply, pre-commit tests are run on each PR before it is committed, while post-commit tests run periodically against the master branch (i.e. against already committed code).
Unomi uses GitHub Actions to run pre-commit and post-commit tests.
Pre-commit
The pre-commit test suite verifies correctness via two kinds of tests: unit tests and integration tests. A full build (mvn clean install) compiles all modules and runs their unit tests, verifying that a basic level of functionality is intact before the integration tests exercise the assembled system.
This combination of tests strikes an appropriate balance between the desire for short pre-commit times (ideally under 30 minutes) and the desire to verify that PRs going into Unomi behave as intended.
Pre-commit jobs are kicked off when a contributor makes a PR against the apache/unomi repository. Job statuses are displayed at the bottom of the PR page. Clicking on "Details" will open the status page in the selected tool; there, you can view test status and output.
Post-commit
Running in post-commit removes the stringent time constraint, which gives us the ability to do more comprehensive testing. In post-commit the full integration test suite is executed against both supported search engines — Elasticsearch and OpenSearch — via the -Pintegration-tests Maven profile. The suite is defined in AllITs.java and currently contains 30+ test classes covering profiles, segments, events, conditions, rules, privacy, import/export, GraphQL API, JSON Schema validation, health checks, and data migration scenarios.
Adding new integration tests is generally as easy as adding a *IT.java file to the itests/ module and registering it in the AllITs.java suite class. New tests should extend BaseIT, which provides access to all Unomi OSGi services, an HTTP client for REST API calls, and helper methods for waiting and retrying asynchronous operations.
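As a sketch of what such a test looks like (not compilable outside the itests/ module; the class name, profile ID, and exact helper signatures below are illustrative — consult an existing *IT.java class for the precise BaseIT API):

```java
// Illustrative sketch of a new integration test in itests/.
// Assumes the BaseIT base class and injected services described above.
public class MyFeatureIT extends BaseIT {

    @Test
    public void testProfileIsPersisted() throws Exception {
        Profile profile = new Profile("my-test-profile-id"); // hypothetical ID
        profileService.save(profile);

        // Search engine indexing is asynchronous: use BaseIT's
        // wait/retry helpers rather than Thread.sleep().
        keepTrying("Profile was not persisted",
                () -> profileService.load("my-test-profile-id"),
                Objects::nonNull, 1000, 100);
    }
}
```

Remember to register the new class in AllITs.java so it is picked up by the suite.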
Post-commit test results can be found in GitHub Actions.
Testing Types
Unit
Unit tests are, in Unomi as everywhere else, the first line of defense in ensuring software correctness. As all of the contributors to Unomi understand the importance of testing, Unomi has a robust set of unit tests, as well as test coverage measurement tools, which protect the codebase from simple to moderate breakages. Unomi Java unit tests are written in JUnit.
How to run Java tests
Apache Unomi uses JUnit 4 for unit tests and the Maven Surefire Plugin for execution. Unit tests are located in each module’s src/test/java directory and follow standard Maven conventions. To run all unit tests across the project:
$ mvn clean test
To run a specific test class:
$ mvn test -Dtest=MyClassTest
To run the full integration test suite (requires Elasticsearch or OpenSearch running):
$ mvn clean install -Pintegration-tests
For OpenSearch instead of Elasticsearch, add -Duse.opensearch=true. Integration tests use the Maven Failsafe Plugin and follow the *IT.java naming convention.
E2E
Integration tests (E2E) are meant to verify at the very highest level that the Unomi codebase is working as intended. They boot a real Apache Karaf container with Unomi deployed inside it, connected to a real search engine (Elasticsearch or OpenSearch). These tests verify that core Unomi services are fully operational end-to-end, including profile management, event collection, segmentation, rule execution, context serving, consent and privacy controls, data import/export workflows, GraphQL API operations, JSON Schema validation, and data migration between versions.
Testing Systems
Integration Testing Framework
Unomi’s integration tests use the following technology stack:
- Pax Exam — An OSGi testing framework that provisions and manages the Apache Karaf container used during tests. It handles downloading the Unomi distribution, configuring features, and injecting OSGi services into test classes.
- Apache Karaf Test Support — Provides the base class (KarafTestSupport) for container-based integration tests, enabling Karaf shell command execution and OSGi service lookups.
- Maven Failsafe Plugin — Executes the integration tests during the integration-test phase and verifies results in the verify phase. Tests follow the *IT.java naming convention.
- Embedded Elasticsearch / Docker OpenSearch — The Elasticsearch profile uses the elasticsearch-maven-plugin to start an embedded Elasticsearch instance, while the OpenSearch profile uses the docker-maven-plugin to spin up an OpenSearch container.
- JaCoCo — Optionally collects code coverage data during integration test runs.
All test classes extend BaseIT, which provides access to injected Unomi services (ProfileService, RulesService, SegmentService, etc.), a pre-configured HTTP client for REST API testing, and utility methods for polling asynchronous operations.
Continuous Integration
Unomi uses GitHub Actions for continuous integration. The unomi-ci-build-tests.yml workflow is triggered on every push to master and on pull requests. It runs in two stages:
- Unit tests — A full mvn clean install build with a 15-minute timeout. This must pass before integration tests begin.
- Integration tests — Run in a matrix against both Elasticsearch (port 9400) and OpenSearch (port 9401) using JDK 17. Test results and logs are archived as GitHub Actions artifacts on failure.
Best practices for writing tests
The following best practices help you to write reliable and maintainable tests.
Aim for one failure path
An ideal test has one failure path. When you create your tests, minimize the possible reasons for a test failure. A developer can debug a problem more easily when there are fewer failure paths.
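To illustrate, the hypothetical example below splits one test that could fail for two unrelated reasons into two focused tests, so a failure points at exactly one code path (plain assertions are used to keep the snippet self-contained; in Unomi these would be JUnit 4 test methods):

```java
// Hypothetical example: one test per failure path.
// A single test that both parses and formats has two reasons to fail;
// splitting it means each failure identifies one code path.
public class OneFailurePathExample {
    static int parsePort(String s) { return Integer.parseInt(s); }
    static String formatEndpoint(String host, int port) { return host + ":" + port; }

    static void parsingAValidPortReturnsItsIntegerValue() {
        if (parsePort("9400") != 9400) throw new AssertionError("parse failed");
    }

    static void formattingHostAndPortJoinsThemWithAColon() {
        if (!formatEndpoint("localhost", 9400).equals("localhost:9400"))
            throw new AssertionError("format failed");
    }

    public static void main(String[] args) {
        parsingAValidPortReturnsItsIntegerValue();
        formattingHostAndPortJoinsThemWithAColon();
        System.out.println("2 tests passed");
    }
}
```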
Avoid non-deterministic code
Reliable tests are predictable and deterministic. Tests that contain non-deterministic code are hard to debug and are often flaky. Non-deterministic code includes the use of randomness, time, and multithreading.
To avoid non-deterministic code, mock the corresponding methods or classes.
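For time-dependent code, one common approach is to inject a java.time.Clock instead of calling Instant.now() directly, so a test can substitute a fixed clock. A minimal sketch (the SessionTimeoutExample class and timestamps are hypothetical):

```java
import java.time.Clock;
import java.time.Duration;
import java.time.Instant;
import java.time.ZoneOffset;

// Hypothetical example: the class under test takes a Clock, so the
// production code can pass Clock.systemUTC() while tests pass a
// fixed clock, making the outcome fully deterministic.
public class SessionTimeoutExample {
    private final Clock clock;
    private final Duration timeout;

    SessionTimeoutExample(Clock clock, Duration timeout) {
        this.clock = clock;
        this.timeout = timeout;
    }

    boolean isExpired(Instant lastEventTime) {
        return Duration.between(lastEventTime, Instant.now(clock))
                .compareTo(timeout) > 0;
    }

    public static void main(String[] args) {
        // A fixed clock pins "now" to a known instant.
        Instant now = Instant.parse("2024-01-01T00:30:00Z");
        Clock fixed = Clock.fixed(now, ZoneOffset.UTC);
        SessionTimeoutExample session =
                new SessionTimeoutExample(fixed, Duration.ofMinutes(30));

        Instant recent = Instant.parse("2024-01-01T00:10:00Z"); // 20 min ago
        Instant old = Instant.parse("2023-12-31T23:00:00Z");    // 90 min ago
        System.out.println(session.isExpired(recent)); // false
        System.out.println(session.isExpired(old));    // true
    }
}
```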
Use descriptive test names
Helpful test names contain details about your test, such as test parameters and the expected result. Ideally, a developer can read the test name and know where the buggy code is and how to reproduce the bug.
An easy and effective way to name your methods is to use these three questions:
- What are you testing?
- What are the parameters of the test?
- What is the expected result of the test?
For example, consider a scenario where you want to add a test for a divide method.
If you use a simple test name, such as testDivide(), you are missing important information such as the expected action, parameter information, and expected test result. As a result, triaging a test failure requires you to look at the test implementation to see what the test does.
Instead, use a name such as invokingDivideWithDivisorEqualToZeroThrowsException(), which specifies:
- the expected action of the test (invokingDivide)
- details about important parameters (the divisor is zero)
- the expected result (the test throws an exception)
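Putting this together, a minimal sketch of the divide example (the divide method and its numbers are hypothetical; plain assertions are used so the snippet runs standalone, where Unomi code would use a JUnit 4 @Test method):

```java
// Hypothetical divide method plus a descriptively named test.
public class DivideTest {

    static int divide(int dividend, int divisor) {
        return dividend / divisor; // throws ArithmeticException if divisor == 0
    }

    // The name answers: what is tested, with which parameters,
    // and what result is expected.
    static void invokingDivideWithDivisorEqualToZeroThrowsException() {
        try {
            divide(10, 0);
            throw new AssertionError("Expected ArithmeticException was not thrown");
        } catch (ArithmeticException expected) {
            // exactly the failure we expect: integer division by zero
        }
    }

    public static void main(String[] args) {
        invokingDivideWithDivisorEqualToZeroThrowsException();
        System.out.println("test passed");
    }
}
```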
If this test fails, you can look at the descriptive test name to find the most probable cause of the failure. In addition, test frameworks and test result dashboards use the test name when reporting test results. Descriptive names enable contributors to look at test suite results and easily see what features are failing.
Long method names are not a problem for test code. Test names are rarely used (usually when you triage and debug), and when you do need to look at a test, it is helpful to have descriptive names.
Use a pre-commit test if possible
Post-commit tests validate that Unomi works correctly in a broad variety of scenarios. These tests catch errors that are hard to predict in the design and implementation stages.
However, we often write a test to verify a specific scenario. In this situation, it is usually possible to implement the test as a unit test or a component test. You can add your unit tests or component tests to the pre-commit test suite, and the pre-commit test results give you faster code health feedback during the development stage, when a bug is cheap to fix.