Local build using gcc-arm

Curious what everyone is doing to compile extended-layout projects in CI/CD pipelines these days. Does the guidance here still apply to Device OS v6.x+?

Any reason the toolchain isn't included in the Particle CLI so local builds could be run easily from the command line, without jumping through hoops (unsuccessfully, so far) to install and configure a headless VS Code + Workbench setup?

If you are using GitHub, by far the best way to do builds in CI/CD is with GitHub Actions.

If you want to do builds a different way and can use Docker, the best option is a Particle Docker buildpack. That is actually how the cloud compilers work, so the buildpacks are always maintained and updated with the correct versions of all of the tools and compilers for each version of Device OS.
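As a rough sketch (not something to copy verbatim), driving one of those buildpack images directly looks roughly like this; the image tag, the `/input` and `/output` mount points, and the `PLATFORM_ID` value are assumptions on my part, so check the notes for the `particle/buildpack-particle-firmware` image for the Device OS release and platform you actually need:

```bash
# Rough sketch of driving a buildpack image directly. The /input and /output
# mount points, the PLATFORM_ID variable, and the image tag are assumptions;
# verify them against the buildpack image docs for your Device OS version.
#   $(pwd)/src    -> mounted as the build input (your project sources)
#   $(pwd)/target -> the compiled firmware binary is written here
#   PLATFORM_ID   -> numeric platform ID (10 = Electron in this example)
docker run --rm \
  -v "$(pwd)/src:/input" \
  -v "$(pwd)/target:/output" \
  -e PLATFORM_ID=10 \
  particle/buildpack-particle-firmware:2.3.1-electron
```

If those conventions hold for your version, the firmware binary ends up in the output mount, which a CI job can then publish as an artifact.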


The compile action does not work due to our project's multi-target structure. The action only loads the `sources-folder` specified, not any of its parent or sibling directories. Consider the following structure:

```
src
├── app
├── hardware
└── platform
    ├── electron
    ├── linux (for unit tests)
    └── monitor-one
```

where `./src/app` is common code, `src/hardware` defines interfaces for Particle hardware (`.h` files only), and each `src/platform` child is an extended-layout Particle project (each containing its own `./src`, `project.properties`, and `main.cpp`).

Setting the compile action's `sources-folder` option to any of `./src/platform/X` does not load anything in `./src/app`, so the build step always fails (due to files "missing"). Can you add a second path option (e.g. `sources-include`) so the entire repo can be loaded into the buildpack instance?

Again, this would not be an issue if the toolchain were installed with `particle-cli` so it could also handle local builds. Does the guidance on independent gcc-arm installs still apply?

The gcc-arm versions in the old guide are wrong, but other than that, the process should be similar.

If you really need a non-containerized build environment, the best way is to use Workbench to install the toolchain. You don't actually need to use Workbench to invoke the build, just to install the toolchain.

However, I would probably front-end your build so it just copies your subproject into a new directory and mixes in the common code, then invokes a normal build (see the sketch below). This would make it work correctly for Workbench, cloud compile, command-line local build, or the Docker buildpack.
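A minimal sketch of that front-end step, assuming the directory layout you posted (the paths and the target argument are just placeholders):

```bash
#!/usr/bin/env bash
# Sketch only: assemble a flat, single-target Particle project from the shared
# code plus one platform subproject, then build the staged copy. Directory
# names follow the tree posted above; adjust as needed.
set -euo pipefail

TARGET="${1:-monitor-one}"          # which src/platform/<target> to build
STAGE="build-staging/$TARGET"

rm -rf "$STAGE"
mkdir -p "$STAGE"

# Start from the platform subproject (its own src/, project.properties, main.cpp)
cp -r "src/platform/$TARGET/." "$STAGE/"

# Mix in the common code and the hardware interface headers
cp -r src/app      "$STAGE/src/app"
cp -r src/hardware "$STAGE/src/hardware"

# From here, point any normal build at the staged copy, for example a cloud
# compile with the CLI:
#   particle compile electron "$STAGE" --saveTo firmware.bin
echo "Staged single-target project in $STAGE"
```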


Do you have any guidance on installing the toolchain using Workbench from the CLI? I am able to successfully install VS Code and Workbench, but I haven't been successful in actually getting the toolchain parts of Workbench to show up where I expect them (`~/.particle/` in my dev environment). GitHub Actions does not natively support a GUI, and builds need to run unattended in CI.

In Workbench, open a Particle project, then do a **Particle: Launch Compiler Shell**. Then do an `echo $PATH` in the command window. That's the path you need to set in your standalone build in order to access the toolchain. There is not one global toolchain, because different versions of Device OS require different toolchain versions.
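For example, something like this in an unattended job (the toolchain versions and the Device OS checkout location are placeholders; substitute whatever your compiler shell's `$PATH` actually shows):

```bash
# Sketch of reusing the Workbench-installed toolchain in a headless build.
# The toolchain versions and the Device OS location below are placeholders;
# copy the real paths from `echo $PATH` in your compiler shell.
export PARTICLE_TOOLCHAINS="$HOME/.particle/toolchains"
export PATH="$PARTICLE_TOOLCHAINS/gcc-arm/10.2.1/bin:$PARTICLE_TOOLCHAINS/buildtools/1.1.1:$PATH"

# Make-based local build against a Device OS source tree (Workbench keeps these
# under ~/.particle/toolchains/deviceOS/<version>; assumed here, not verified).
cd "$PARTICLE_TOOLCHAINS/deviceOS/2.3.1/modules"
make all PLATFORM=electron APPDIR="$HOME/myrepo/src/platform/electron"
```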


From a compiler shell launched from my local dev Workbench, `echo $PATH` filtered to the Particle-related entries:

```
... :/home/akurzweil/.particle/toolchains/gcc-arm/10.2.1/bin:/home/akurzweil/.particle/toolchains/buildtools/1.1.1:/home/akurzweil/.particle/toolchains/openocd/0.11.0-particle.4/bin: ...
```

This is hosted on a WSL2 instance. `$HOME/.particle/` does not exist after installing Workbench on a GitHub Actions runner. Does the toolchain location change based on the machine type? Is there a setup/configuration step that needs to be run on the action runner after installing Workbench to actually download/install the items found in `$HOME/.particle`?

I don't know enough about the local runner to answer that. I use a GitHub-hosted runner that builds using the Particle compile servers, so the amount of paid compute is negligible. It should be possible to do what you want; I'm just not exactly sure how.