What Bali brings to your test process implementation compared with a common CI-based one
The primary difference: a CI operates processes and is built for a wide range of task templates (build, deploy, execute, etc.), while Bali operates JVM threads and resources, offering previously unreachable approaches, and is targeted specifically at effective Java test execution.
A direct comparison is not quite correct here, but you can see the differences between implementing execution in both ways.

Test set selection and suite start


CI:
    1. Only a predefined test set can be started by default
    2. The set can be modified through tags or other parameters
    3. You need to keep the test list in mind; with a huge suite this is inconvenient
    4. You cannot select a dedicated subset
    5. You cannot preview the test list in a visible structure
    6. Before the start, checkout and build usually take a few minutes
Bali:
    1. The project is already built and class-loaded, so execution starts immediately
    2. No boring checkout and build period
    3. The project's test structure is visible in a tree view (per your design)
    4. Pop-up info is shown on test mouse-over (per your design: Gherkin steps and any other description)
    5. Tests are selected with a click in the test tree or through tags
    6. You are able to create, edit, and start personal suites
    7. You are able to start a suite from an external system (by an HTTP trigger link from CI, for example after the target system is built and deployed)

As a result:
    1. No need to keep a lot of info in mind (exact test names, tags)
    2. You get more details on test steps than just a simple test name
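Selecting a test subset through tags (item 5 above) can be sketched roughly as follows; `TagFilter` and `TestCase` are illustrative names for this sketch, not Bali's actual API:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Minimal sketch of tag-based suite selection; TestCase is a
// hypothetical record, not Bali's actual test model.
public class TagFilter {
    record TestCase(String name, Set<String> tags) {}

    // Keep only the tests carrying at least one of the requested tags.
    static List<TestCase> select(List<TestCase> all, Set<String> wanted) {
        return all.stream()
                .filter(t -> t.tags().stream().anyMatch(wanted::contains))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<TestCase> all = List.of(
                new TestCase("loginTest", Set.of("smoke", "auth")),
                new TestCase("restartNodesTest", Set.of("cluster")),
                new TestCase("paymentTest", Set.of("smoke")));
        // Build a suite from everything tagged "smoke".
        select(all, Set.of("smoke")).forEach(t -> System.out.println(t.name()));
    }
}
```

The same predicate can drive both the tree-view selection and a tag field in the UI.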
Multi-threading with a resource pool

CI:
    1. Multi-threading is supported in TestNG and the latest JUnit, but not in other frameworks (for example, Cucumber JVM). It also already belongs to a process external to the CI itself. Still, let's assume that, in effect, an approach exists.
    2. But what is still missing here?
        • A suite can contain two or more contradicting tests that must not run in parallel. For example, one checks node restarts while another requires the nodes to be online.
        • You have a limited quantity of resources required for test runs (browsers, channels, agents, users), and you expect the system not to start a test until free resources for it appear in the pool. Most official frameworks do not handle such cases in parallel mode.
        • So if several users start test jobs, they run in parallel, the threads go, and the result looks good. But potentially the jobs will access limited shared resources without synchronization and produce distortions and failures.

Bali:
    1. You are able to run any Java code in parallel (TestNG, classic Java main, JUnit, Cucumber JVM scenarios with sub-scenario precision), executed by the original engine wrapped in Bali's runner interface and Bali's TestExecutors, so parallelization is solved universally. Feature page
    2. A resource pool mechanism is implemented. Each test declares which resources it requires to run. The Bali engine starts a test job only when these items are available in the pool, blocks it until then, and unlocks the resources when the run finishes. Resource sharing is synchronized. Feature page
    3. The TestExecutor thread count is also modifiable, and detailed info is rendered on the web
As a result:
    1. Multi-threading is solved uniformly for any Java test framework by Bali's own engine
    2. A synchronized resource pool mechanism is offered
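The pool idea described above can be sketched with standard `java.util.concurrent` primitives; this is a minimal illustration of the technique, not Bali's actual implementation, and all names are assumptions:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedSet;
import java.util.TreeSet;
import java.util.concurrent.Semaphore;

// Sketch of a synchronized resource pool: a test declares the resource
// types it needs; it starts only when all of them are free in the pool.
public class ResourcePool {
    private final Map<String, Semaphore> pool = new HashMap<>();

    ResourcePool(Map<String, Integer> capacities) {
        capacities.forEach((name, n) -> pool.put(name, new Semaphore(n, true)));
    }

    // Block until every declared resource is free. Acquiring in sorted
    // order means two tests can never deadlock on each other.
    void acquire(SortedSet<String> required) throws InterruptedException {
        for (String r : required) pool.get(r).acquire();
    }

    void release(SortedSet<String> required) {
        for (String r : required) pool.get(r).release();
    }

    int available(String name) {
        return pool.get(name).availablePermits();
    }

    public static void main(String[] args) throws InterruptedException {
        ResourcePool pool = new ResourcePool(Map.of("browser", 2, "testUser", 1));
        SortedSet<String> needs = new TreeSet<>(List.of("browser", "testUser"));
        pool.acquire(needs);          // test job starts only when both are free
        try {
            System.out.println("test running with " + needs);
        } finally {
            pool.release(needs);      // unlocked on run finish
        }
    }
}
```

Contradicting tests (e.g. "restart nodes" vs "nodes online") fall out of the same mechanism: give them a common single-permit resource and they can never overlap.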
Online suite execution details & control

CI:
    1. You can see outline information on a suite: passed/failed/total, etc.
    2. The final report artifact is built at the end; you may wait up to a few hours for it
    3. You can cancel a suite run, but cannot remove a subset from it

Bali:
    1. Maximally detailed info on the suite run is presented (with AJAX updates): test list, statuses, errors, resources in use, summary stats and more. Feature page
    2. Instant access to the current state of three types of reports: error classification, state flow, metrics.
    3. Each test can be removed from the suite in a single click
    4. An executed test can be rerun (its info is removed from the errors) in a single click
    5. Wait on error: the thread pauses on an error; you debug, click resume, and the thread continues. Useful for long scenarios
    6. Soft stop: a sign is set in the test context; the test checks it at a defined step using the API and, if it finds the sign, stops itself correctly.
    7. Hard stop: Thread.stop() is sent (not recommended, but supported). It is better to implement a soft stop from inside the test.
    8. The whole suite can be cancelled by soft stop

As a result:
    1. You get maximum visibility into what is going on and how
    2. You get advanced run control: you operate threads rather than processes
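The soft-stop pattern described above (a flag in the test context, checked at defined steps) can be sketched like this; `TestContext` and `SoftStopException` are illustrative names for the sketch, not Bali's actual API:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of the soft-stop idea: the engine sets a flag in the test
// context, and the test polls it at safe step boundaries so it can
// terminate cleanly instead of being killed via Thread.stop().
public class SoftStopDemo {
    static class TestContext {
        private final AtomicBoolean stopRequested = new AtomicBoolean(false);

        void requestSoftStop() { stopRequested.set(true); }   // e.g. from the web UI

        boolean softStopRequested() { return stopRequested.get(); }

        // Called by the test at defined steps; exits the scenario cleanly.
        void checkSoftStop() {
            if (stopRequested.get())
                throw new SoftStopException("stopped at a safe point");
        }
    }

    static class SoftStopException extends RuntimeException {
        SoftStopException(String msg) { super(msg); }
    }

    public static void main(String[] args) {
        TestContext ctx = new TestContext();
        ctx.requestSoftStop();                 // operator clicks "soft stop"
        try {
            ctx.checkSoftStop();               // step boundary inside the test
            System.out.println("step executed");
        } catch (SoftStopException e) {
            System.out.println("test stopped itself correctly: " + e.getMessage());
        }
    }
}
```

The same flag, checked once per step, is what makes "cancel whole suite by soft stop" safe for tests holding external resources.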
Orchestrate directly from the test report

CI:
    1. Not supported (custom coding is needed)


Bali:
    1. Activate a rerun of the tests in a suite that were blocked for a defined reason
    2. Kick off new suites of tests blocked for any reason on other environments

As a result:
    1. You operate directly from the test report, saving a routine chain of actions
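Kicking off a suite on another environment boils down to hitting the suite's HTTP trigger link. A hedged sketch of building such a link; the URL shape (`/trigger?suite=…&env=…`) is purely hypothetical here, so check your Bali instance for the real link format:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch of composing an HTTP trigger link for starting a suite on
// another environment; the URL layout is an assumption, not Bali's API.
public class SuiteTrigger {
    static URI triggerUri(String baseUrl, String suite, String env) {
        String q = "suite=" + URLEncoder.encode(suite, StandardCharsets.UTF_8)
                 + "&env=" + URLEncoder.encode(env, StandardCharsets.UTF_8);
        return URI.create(baseUrl + "/trigger?" + q);
    }

    public static void main(String[] args) {
        URI uri = triggerUri("http://bali.local", "smoke suite", "staging");
        System.out.println(uri);
        // A fire-and-forget GET could then be sent with java.net.http:
        // HttpClient.newHttpClient().send(
        //         HttpRequest.newBuilder(uri).GET().build(),
        //         HttpResponse.BodyHandlers.discarding());
    }
}
```

The same link works from a CI post-build step, which is how the "start from external system" item above is wired.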
Metrics and Stateflow reports

CI:
    1. Reports belong to the CI artifact level; there is a variety of options (basic or Yandex.Allure)
    2. But usually only one report is offered

Bali:
    Two more types of report are offered:
    1. A Stateflow report, which captures test scenario states at runtime (for example, a list of screenshots on key events like page open, click, etc.). It can be constructed per the tester's design (information and rendering). Useful for browsing the runtime details offline.
    2. A Metrics report, where you can see graphs rendered from the data series pushed from inside the test (response time, page load, CPU, memory or anything else)
As a result:
    • See test runtime details after execution, even if the test passed
    • Collect and visualize metrics
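Pushing a data series from inside a test can be sketched as below; the `Metrics` class and its `push` method are illustrative names for the sketch, not Bali's actual metrics API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of collecting a named data series inside a test, to be
// rendered later as a graph in a metrics report.
public class Metrics {
    record Point(long timestampMs, double value) {}

    private final List<Point> series = new ArrayList<>();
    private final String name;

    Metrics(String name) { this.name = name; }

    // Called from test steps: records one measurement with a timestamp.
    void push(double value) {
        series.add(new Point(System.currentTimeMillis(), value));
    }

    double average() {
        return series.stream().mapToDouble(Point::value).average().orElse(0);
    }

    public static void main(String[] args) {
        Metrics responseTime = new Metrics("response time, ms");
        responseTime.push(120);    // measured inside the test steps
        responseTime.push(180);
        System.out.println("avg = " + responseTime.average());  // prints: avg = 150.0
    }
}
```

Any series collected this way (response times, CPU, memory) can be plotted over the timestamps after the run.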