Zephyr includes a built-in testing framework designed to help you write and run both unit and integration tests. Remember: ztest is the testing framework, while twister is the tool used to run the tests.

Now, let’s move on to creating our first test. For this example, we’ll assume you have an empty project with the minimal setup:

app
├── app.overlay
├── CMakeLists.txt
├── include
├── prj.conf
└── src
    └── main.c

In the app folder, create a custom driver with a dummy function for simple unit testing. Place the header file in the include folder and the source file in the src directory.

app/include/dummy.h

#ifndef __DUMMY_H__
#define __DUMMY_H__

int suma( int a, int b );

#endif // __DUMMY_H__

app/src/dummy.c

#include <zephyr/kernel.h>
#include "dummy.h"

int suma( int a, int b )
{
    return a + b;
}

Create a folder named app/tests and add a new file called test_dummy.c to hold the unit tests. As a rule of thumb, name your test files after the file being tested, using the prefix test_. Maintaining a consistent naming convention for all your test cases is highly recommended.

#include <zephyr/ztest.h>
#include "dummy.h"

/* Register the test suite. The arguments after the suite name are, in order:
   a predicate deciding whether the suite should run, a setup function (runs
   once before the suite and returns a fixture pointer), before and after
   functions (run around each test case), and a teardown function (runs once
   after the suite). We don't need any of them yet, so they are all NULL. */
ZTEST_SUITE(framework_tests, NULL, NULL, NULL, NULL, NULL);

/* A test case. By convention, all test case names start with test_ */
ZTEST(framework_tests, test_sum__two_integers)
{
    /* Run the function under test with known parameters and capture the
       return value in res */
    int res = suma(2, 3);

    /* The following ztest macro validates the result: the first parameter is
       the expected value, the second is the actual value returned by the
       function under test, and the third is a message to display on failure */
    zassert_equal(5, res, "2 + 3 should be 5");
}
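
The zassert_equal macro is only one of the assertions ztest provides, and the NULL arguments to ZTEST_SUITE can be swapped for the lifecycle hooks described in the comment above. The following is a minimal sketch of both ideas; the suite fixture_tests, its hook functions, and the extra test cases are illustrative and not part of the project we build below.

/* A few other commonly used assertion macros */
ZTEST(framework_tests, test_sum__assert_variants)
{
    int res = suma(2, 3);

    /* Passes when the condition is true */
    zassert_true(res > 0, "sum of positive numbers should be positive");
    /* Passes when the two values differ */
    zassert_not_equal(res, 0, "2 + 3 should not be 0");
    /* Passes when res is within the given delta of the reference value */
    zassert_within(res, 5, 0, "2 + 3 should be exactly 5");
}

/* ZTEST_F looks the fixture type up by name: struct <suite>_fixture */
struct fixture_tests_fixture {
    int calls;
};

/* Runs once before the suite; the returned pointer is handed to the other
   hooks and to each test case */
static void *fixture_tests_setup(void)
{
    static struct fixture_tests_fixture fixture;

    fixture.calls = 0;
    return &fixture;
}

/* Runs before every test case in the suite */
static void fixture_tests_before(void *f)
{
    struct fixture_tests_fixture *fixture = f;

    fixture->calls++;
}

ZTEST_SUITE(fixture_tests, NULL, fixture_tests_setup, fixture_tests_before, NULL, NULL);

/* ZTEST_F exposes the pointer returned by the setup hook as fixture */
ZTEST_F(fixture_tests, test_before_hook_ran)
{
    zassert_equal(1, fixture->calls, "before hook should have run once");
}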

In the tests directory, add a CMakeLists.txt file, because Zephyr treats each test as a separate application in its own right:

# Minimum CMake required version
cmake_minimum_required(VERSION 3.20.0)

# Locate the Zephyr package, using ZEPHYR_BASE as a hint
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})

# Project and source files
project(test_suma)
target_include_directories(app PRIVATE ../include)
target_sources(app PRIVATE test_dummy.c)
target_sources(app PRIVATE ../src/dummy.c)

Notice that we compile ../src/dummy.c alongside the test, but not main.c: when CONFIG_ZTEST is enabled, ztest provides the application entry point itself.

You also need to add a .conf file in the tests directory. By convention, we’ll use the usual name prj.conf (although this can be customized, we’ll stick to it for simplicity and to avoid confusing beginners). In this file, you must enable ztest:

CONFIG_ZTEST=y

We’re still missing one critical file, the most important one for running our tests: the tests folder must include a file named testcase.yaml. In this file, you specify the platforms your test cases will run on and provide one or more tags. The test scenario name in our example is dummy.testing.ztest, but you can name it whatever you prefer:

tests:
  dummy.testing.ztest:
    build_only: true
    platform_allow:
      - native_sim
    integration_platforms:
      - native_sim
    tags: test_framework

In case you are wondering, this is what our project directory looks like now:

app
├── app.overlay
├── CMakeLists.txt
├── include
│   └── dummy.h
├── prj.conf
├── src
│   ├── dummy.c
│   └── main.c
└── tests
    ├── CMakeLists.txt
    ├── prj.conf
    ├── test_dummy.c
    └── testcase.yaml

OK, we have everything in place, so let’s run Twister through West from the project’s root directory to execute the test case in test_dummy.c:

$ west twister -v -n -T app/tests/
INFO    - Using Ninja..
INFO    - Zephyr version: v4.3.0
INFO    - Using 'zephyr' toolchain.
INFO    - Selecting default platforms per testsuite scenario
INFO    - Building initial testsuite list...
INFO    - Writing JSON report /home/user/workspace/twister-out/testplan.json
INFO    - JOBS: 16
INFO    - Adding tasks to the queue...
INFO    - Added initial list of jobs to queue
INFO    - 1/1 native_sim/native         dummy.testing.ztest                                NOT RUN (build <host>)

INFO    - 1 test scenarios (1 configurations) selected, 0 configurations filtered (0 by static filter, 0 at runtime).
INFO    - 0 of 1 executed test configurations passed (0.00%), 1 built (not run), 0 failed, 0 errored, with no warnings in 8.34 seconds.
INFO    - 0 of 0 executed test cases passed (0.00%) on 0 out of total 1259 platforms (0.00%).
INFO    - 1 selected test cases not executed: 1 not run (built only).
INFO    - 0 test configurations executed on platforms, 1 test configurations were only built.
INFO    - Saving reports...
INFO    - Writing JSON report /home/user/workspace/twister-out/twister.json
INFO    - Writing xunit report /home/user/workspace/twister-out/twister.xml...
INFO    - Writing xunit report /home/user/workspace/twister-out/twister_report.xml...
INFO    - Run completed

There’s a lot of information in the terminal, but we still don’t see our test results anywhere. The reason is simple: in our testcase.yaml file, we set build_only: true, so Twister compiles the test but never executes it. To actually execute the test case, we need to run the application itself. Since we’re using the Zephyr simulator as our test platform, we can simply run the executable located at the following path:

$ ./twister-out/native_sim_native/host/app/tests/dummy.testing.ztest/zephyr/zephyr.exe
*** Booting Zephyr OS build v4.3.0 ***
Running TESTSUITE framework_tests
===================================================================
START - test_sum__two_integers
 PASS - test_sum__two_integers in 0.000 seconds
===================================================================
TESTSUITE framework_tests succeeded

------ TESTSUITE SUMMARY START ------

SUITE PASS - 100.00% [framework_tests]: pass = 1, fail = 0, skip = 0, total = 1 duration = 0.000 seconds
 - PASS - [framework_tests.test_sum__two_integers] duration = 0.000 seconds

------ TESTSUITE SUMMARY END ------

===================================================================
RunID: ce5fc53efa93b408fdeb3ce260b8cdbf
PROJECT EXECUTION SUCCESSFUL

We can perform both steps at once by setting build_only to false in the testcase.yaml file and then running Twister again. This time the tests are actually executed, and their output is captured in the handler log:

twister-out/native_sim_native/host/app/tests/dummy.testing.ztest/handler.log

Choosing to only build has its own advantages. Your test build might fail for reasons unrelated to the actual test logic, such as syntax errors or missing includes. Catching these issues early during the build phase can save time and avoid confusion.

Now let’s see what happens when a test case fails. Modify the test_dummy.c file to intentionally force a failure. For example, let’s pretend that 2 + 3 = 6 (for some magical reason). In this case, set the expected result to 6 and add a new test case to trigger the failure.

ZTEST(framework_tests, test_sum__error_result)
{
    int res = suma(2, 3);

    /* For the sake of the example we pretend that 2 + 3 = 6, so the expected
       result is 6; our function will return 5, making the test fail */
    zassert_equal(6, res, "2 + 3 should be 5");
}
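
As a side note, it helps to embed the actual value in the failure message. The zassert macros accept printf-style format arguments, so the failing assertion could also have been written like this (a sketch, not part of the project we run below):

ZTEST(framework_tests, test_sum__error_result_verbose)
{
    int res = suma(2, 3);

    /* The format arguments make the failure output self-explanatory */
    zassert_equal(6, res, "expected 6 but suma(2, 3) returned %d", res);
}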

Run the test cases the same way as before and notice the failure:

$ west twister -v -n -T app/tests/
Keeping artifacts untouched
INFO    - Using Ninja..
INFO    - Zephyr version: v4.3.0
INFO    - Using 'zephyr' toolchain.
INFO    - Selecting default platforms per testsuite scenario
INFO    - Building initial testsuite list...
INFO    - JOBS: 16
INFO    - Adding tasks to the queue...
INFO    - Added initial list of jobs to queue
INFO    - 1/1 native_sim/native         dummy.testing.ztest                                FAILED rc=1 (native 0.005s <host>)
ERROR   - see: /home/user/workspace/twister-out/native_sim_native/host/app/tests/dummy.testing.ztest/handler.log

INFO    - 1 test scenarios (1 configurations) selected, 0 configurations filtered (0 by static filter, 0 at runtime).
INFO    - 0 of 1 executed test configurations passed (0.00%), 0 built (not run), 1 failed, 0 errored, with no warnings in 6.41 seconds.
INFO    - 1 of 2 executed test cases passed (50.00%), 1 failed on 1 out of total 1259 platforms (0.08%).
INFO    - 1 test configurations executed on platforms, 0 test configurations were only built.
...

The terminal only shows an ERROR message, but it doesn’t provide any details. We want to know which test case caused the failure. Fortunately, we have two options:

  1. Run the generated program manually, just like we did before, or
  2. Check the handler.log file located in the same directory.

Either way, the output looks like this:
*** Booting Zephyr OS build v4.3.0 ***
Running TESTSUITE framework_tests
===================================================================
START - test_sum__error_result

    Assertion failed at WEST_TOPDIR/app/tests/test_dummy.c:25: framework_tests_test_sum__error_result: (6 not equal to res)
2 + 3 should be 5
 FAIL - test_sum__error_result in 0.000 seconds
===================================================================
START - test_sum__two_integers
 PASS - test_sum__two_integers in 0.000 seconds
===================================================================
TESTSUITE framework_tests failed.

------ TESTSUITE SUMMARY START ------

SUITE FAIL -  50.00% [framework_tests]: pass = 1, fail = 1, skip = 0, total = 2 duration = 0.000 seconds
 - FAIL - [framework_tests.test_sum__error_result] duration = 0.000 seconds
 - PASS - [framework_tests.test_sum__two_integers] duration = 0.000 seconds

------ TESTSUITE SUMMARY END ------

===================================================================
RunID: ce5fc53efa93b408fdeb3ce260b8cdbf
PROJECT EXECUTION FAILED

One thing to note is that I’m using a few flags when invoking Twister. Here’s a short explanation:

  • -v — Makes the output more verbose. With this option, we can see messages such as PASS or FAILED.
  • -n — Reuses the existing Twister output directory, avoiding the creation of new directories with a numeric suffix.
  • -T — Specifies the directory from which to run the tests.

You can read more details in the Zephyr documentation.

Another option I like to use is setting the output directory. I usually use build/ as the location for generated binaries, so we can tell Twister to place the test output directory there with any name we choose. For example, something like this:

$ west twister -v -n --outdir build/tests -T app/tests/

This was only a simple introduction for those who are just getting started with testing their Zephyr code. But there’s much more you can do, such as integration testing and running tests on actual hardware. For the moment, I suggest reading the official documentation.