Unit / Integration testing examples in 2025

I'm looking into options for unit testing, or more specifically, I'm looking for a well-lit path that works end-to-end.

There are a bunch of (old) topics in this forum, but I'm not sure what is supposed to work today.

My priorities are, first, unit testing code (possibly with the basic wiring API mocked), and second, integration testing code. I'm targeting a P2.

I see that there are AUnit and ArduinoUnit, but I have not seen any working examples. I did look into device-os-test-runner, but given that it only works on Node.js 12, I wonder whether a) it is still maintained, and b) it can successfully be used to test user firmware - at least I did not get it to run within a few hours.

Then there is GitHub - rickkas7/UnitTestLib (a library to unit test parts of Particle device code off-device with a native gcc compile), which I believe would work, but that is going to lock me into off-device unit testing.

This is complicated by my development being split across private libraries, which I import as git submodules.

My project involves NFC, displays, a bunch of hardware accessories, and of course a bunch of cloud integrations. For example, I would really like to unit test the UART communication with the NFC chip, either by using mock serial streams, or by faking communication in a hardware test fixture that includes multiple P2s. I'm especially interested in testing error scenarios / corner cases, which are not reliably testable manually.
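To make this concrete, here is roughly the kind of test I have in mind (just a sketch; MockStream and NfcDriver are made-up names, and I'm assuming the driver can be written against a Stream-like interface rather than against USARTSerial directly):

// Hypothetical mock of a Stream-based UART, preloaded with canned NFC chip responses
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <deque>

class MockStream {
public:
    std::deque<uint8_t> rx;   // bytes the fake NFC chip will "send" to the driver
    std::deque<uint8_t> tx;   // bytes the driver wrote out, for later inspection

    int available() { return static_cast<int>(rx.size()); }
    int read() {
        if (rx.empty()) return -1;
        int b = rx.front();
        rx.pop_front();
        return b;
    }
    size_t write(uint8_t b) { tx.push_back(b); return 1; }
};

// Driver templated on the stream type, so it can take USARTSerial on the device
// and MockStream in a host-side test.
template <typename StreamT>
class NfcDriver {
public:
    explicit NfcDriver(StreamT& s) : stream_(s) {}
    bool readTag() {
        // Real protocol handling would go here; anything unexpected is rejected.
        return stream_.available() > 0 && stream_.read() == 0x00;
    }
private:
    StreamT& stream_;
};

int main() {
    MockStream uart;
    uart.rx = {0xFF};                  // corrupt/unexpected frame from the "chip"
    NfcDriver<MockStream> nfc(uart);
    assert(!nfc.readTag());            // error path: the corrupt frame must be rejected
    return 0;
}
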
What is the state of the art for developing Particle firmware test-driven? Are there example repositories out there? What are the development platforms? I ended up on WSL (after hiccups with plain Windows and ChromeOS). Do you run tests from the Particle workspace, or from the console? And if it's the latter, how do I get access to the correct toolchain?

I'm sorry for the many questions in this post, but I'm trying to understand how best to develop the software end-to-end (my background is in software engineering).

Thanks,
MikeS

Both of those options will work, and probably should both be used.

The UnitTestLib is good for off-device unit tests of modules that can be tested that way. It also provides an easy way to run tests from CI. On Linux, you can run the tests under Valgrind, which also allows testing for memory leaks, memory block overwrites, and use of freed heap-allocated memory.
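To illustrate the off-device idea (this is not UnitTestLib's actual API, just a plain native test; parseNdefText is a made-up function):

// test_parse.cpp - built with the native host compiler, no Device OS required
#include <cassert>
#include <cstdio>
#include <string>

// Hypothetical pure-logic function factored out of the firmware
static bool parseNdefText(const std::string& raw, std::string& textOut) {
    if (raw.empty()) {
        return false;          // error path: an empty frame is rejected
    }
    textOut = raw;             // real parsing would go here
    return true;
}

int main() {
    std::string out;
    assert(!parseNdefText("", out));
    assert(parseNdefText("hello", out) && out == "hello");
    printf("all tests passed\n");
    return 0;
}

Build it with something like g++ -std=c++11 -g test_parse.cpp -o test_parse, then run valgrind --leak-check=full ./test_parse to get the leak and invalid-access checking mentioned above.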

The device-os-test-runner is still actively used on every Device OS release to do on-device testing as part of the release process. You won't use it as-is since you're not testing Device OS itself, but it is one model to follow, since you'll be doing the same thing, just with different tests.

Are there examples I can follow / fork?

One thing I'm a bit confused about is that (at least in the past) it seems folks would fork the device-os repository and put their firmware in the user/ subdirectory (according to device-os/docs/gettingstarted.md at develop · particle-iot/device-os · GitHub) - but this doesn't seem to be how current development is done, where the toolchain is installed separately. In that older setup, I assume the device-os-test-runner would have worked.

Is there some documentation on how the build files / makefiles work, or are intended to work, when writing user firmware? It seems the build is driven by make (and some CMake), and the Particle Workbench extension somehow sets up a bash environment in which the toolchain is available and the makefiles are executed.

I'd prefer to read some authoritative documentation over reverse-engineering and making assumptions, since I'm afraid my reverse-engineered assumptions could easily be broken by future platform updates :slight_smile:

Thanks!


There isn't documentation for how the build system works. However, you do not need to directly use or modify it.

I would use the existing tooling, including the CLI cloud compilers, Workbench, or the GitHub Actions CI/CD flows, to generate binaries.

You may find it easier to just use that tooling to create the binaries, and use the Particle CLI to flash your test devices. There's really nothing magical about how the test runner works, and in fact if you have a different test platform you prefer to use, there is no reason why you couldn't use that instead.
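For example (adjust the platform, paths, and Device OS target to yours; the tests/nfc-uart directory is hypothetical), a cloud compile plus a USB flash looks roughly like:

# Compile the sources in tests/nfc-uart for the P2 and save the binary locally
particle compile p2 tests/nfc-uart --saveTo nfc-uart-test.bin

# Flash that binary to the connected test device over USB
particle flash --usb nfc-uart-test.bin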

Thanks, that is good to know.

Where I got stuck is: how would I compile and flash a test binary? Assuming that I have a bunch of test.cpp files in different subdirectories, I don't know how I would choose which binary to compile and flash.

This is also where I got stuck with AUnit and the like.

My intuition tells me that I need to run make in each test directory and flash the resulting binary to run the test. My understanding is that device-os-test-runner does exactly that.

Correct. But there's a lot of extra stuff in device-os-test-runner, so it may be easier to do the build, flash, and result checking from your own test runner; both are viable options.
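A home-grown runner can be as small as something like this (a sketch only; the tests/ layout, the 120-second timeout, and the "TEST PASSED"/"TEST FAILED" serial markers are conventions you would define yourself, not anything Particle provides):

#!/bin/bash
# For each test application: cloud-compile it, flash it over USB, then watch
# USB serial for the pass/fail marker printed by the test app itself.
set -e
mkdir -p build
for dir in tests/*/; do
    name=$(basename "$dir")
    particle compile p2 "$dir" --saveTo "build/$name.bin"
    particle flash --usb "build/$name.bin"
    result=$(timeout 120 particle serial monitor --follow | grep -m1 -E "TEST (PASSED|FAILED)" || echo "TEST TIMEOUT")
    echo "$name: $result"
done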

I tried to run the tests as in JsonParserGeneratorRK without modification, but it won't compile out of the box.

Are only specific versions of the toolchain supported? I'm on Linux, and installed the build-essential package.

The error I get is below. Have you seen this before? Also, this will only work for "pure" C++ - as far as I can tell, there are no mocks available, for example for USARTSerial?

cc -std=c++11 -c -o time_compat.o time_compat.cpp
time_compat.cpp: In function ‘tm* localtime32_r(const time32_t*, tm*)’:
time_compat.cpp:27:12: error: ‘localtime_r’ was not declared in this scope; did you mean ‘localtime32_r’?
27 | return localtime_r(&tmp, result);
| ^~~~~~~~~~~
| localtime32_r
time_compat.cpp: In function ‘time32_t mktime32(tm*)’:
time_compat.cpp:31:22: error: ‘mktime’ was not declared in this scope
31 | return (time32_t)mktime(tm);
| ^~~~~~

There are differences between platforms and versions of native compilers in their handling of time32, unfortunately, and I never bothered to implement autoconf. The easiest workaround is just to fix the few compile errors so it builds with your native compiler.
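For example (this depends on your libc and compiler, so treat it as a guess), making sure the standard and POSIX time declarations are visible is usually enough:

// Near the top of time_compat.cpp, pull in the standard/POSIX time declarations
// (assumption: localtime_r and mktime come from <time.h> on your system)
#include <time.h>

Switching from cc -std=c++11 to g++ -std=gnu++11 may also help on some systems, since the GNU dialect keeps POSIX extensions such as localtime_r declared.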

There isn't documentation for how the build system works.

This is problematic! How do you expect customers to test their code without any documentation on how builds work? Expecting us to suffer through it 'til it works doesn't make me want to stick around, and the grab bag of add-ons (all of which are barely documented themselves) isn't the friendly ecosystem of tools you might think it is.

Please make a properly documented build system, one that supports targeting multiple of your platforms with both a main firmware binary and an arbitrary number of unit-test binaries, a top priority.


I've been using the Particle platform for my small (unit quantity) but rather demanding industrial product for about 5 years, and I too would put build-system rework at the top of my desired features, specifically as it relates to unit testing of user code and libraries.

The current build system is great in that it just works out of the box, and got me up and running fast. But it starts to break down and become a significant hindrance as demands go up.

Affording reasonable integration with any modern test suite (GTest, CppUTest, Catch2) for user code would have saved me multiple weeks of effort and significant frustration over the past few years.

I've gotten by with some things that feel pretty hacky to work around the current build system, and while they work, the more they accumulate, the less maintainable my project becomes.

E.g. I always put this in my projects' build.mk, to prevent the overzealous build script from trying to compile/link everything under the sun:

# Exclude everything under the user source test/ directory from the firmware application build
APPSOURCES := $(filter-out $(USRSRC_SLASH)test/%,$(APPSOURCES))
CPPSRC := $(filter-out $(USRSRC_SLASH)test/%,$(CPPSRC))
CSRC := $(filter-out $(USRSRC_SLASH)test/%,$(CSRC))

I also have dedicated build scripts that run inside the Particle-provided Docker containers for compiling the binaries I release (thanks for providing those, devs!), but they took some effort to get running nicely in parallel with the expectations of the Workbench-based build system, which I will occasionally rely upon if I need to fire up a debugger.

Essentially I maintain my own (subpar) build infrastructure (bash scripts + CMake) so I can have unit tests in my user code. Unfortunately, the overhead and friction lead to writing fewer tests, but ultimately I decided the value provided by the Particle ecosystem was worth those pains.
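For anyone curious, the host-side half of that boils down to something like the following (a stripped-down sketch; Catch2 and the src/test layout are just my own conventions):

# CMakeLists.txt for the host-side unit tests only; the firmware itself is
# still built by the Particle toolchain.
cmake_minimum_required(VERSION 3.16)
project(host_tests CXX)
set(CMAKE_CXX_STANDARD 14)

find_package(Catch2 REQUIRED)

# Pure-logic firmware sources that compile cleanly off-device
add_library(applogic src/frame_codec.cpp src/schedule.cpp)
target_include_directories(applogic PUBLIC src)

add_executable(unit_tests test/test_frame_codec.cpp test/test_schedule.cpp)
target_link_libraries(unit_tests PRIVATE applogic Catch2::Catch2WithMain)

enable_testing()
add_test(NAME unit_tests COMMAND unit_tests)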

Who knows, maybe in a few months someone can just point an LLM at the problem and feed it $$, but I've found they just produce bowls of spaghetti when there's any level of complexity or engineering required in the build system.


In case anyone else comes looking:
DO NOT RELY ON SYMBOLIC LINKS!
They might seem like a great option when you're compiling a multi-device project (and I have succeeded in getting them working for local builds only), but the whole system falls apart if you try to compile with the GitHub Action, because it uses Docker, and Docker doesn't allow symbolic links (even relative ones) because they're "not repeatable."

What good reason could Particle possibly have for running GHA builds (which are themselves containerized) inside Docker? What's wrong with installing the toolchain directly on the GHA runner, just like the VS Code Workbench does? The fact that builds do not work the same across all methods is a significant blocker for anyone who needs reliable and testable software running on a Particle product. Needing to maintain, at minimum, two different build methods (local and CI) is not feasible and opens the door to all manner of other errors stemming from diverging build paths.