We are currently using local builds of Particle system firmware for our application, building on Ubuntu 16.04 with arm-none-eabi-gcc 5.3.1.
I’m working on upgrading our system firmware from 0.5.3 to 0.6.4, and eventually 0.7.0.
My current method for wrestling the Particle makefile into submission is to include a custom makefile that does a bunch of hacky scripting to inject out-of-tree relative paths to common code directories (we have multiple projects and binaries that share a common set of code, and Particle's current build system prevents pulling in out-of-tree resources in multi-project setups like ours). This works just fine with 0.5.3 but breaks in 0.6.4. I'm still trying to figure out why, but it looks like every application built after the first fails with multiple definitions of symbols; the same files may be getting pulled in more than once for some reason.
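For context, the injection looks roughly like this (the paths and the `COMMON_DIR` variable are made-up examples; `INCLUDE_DIRS` and `CPPSRC` are the firmware makefile variables we append to):

```make
# custom.mk - included from the application makefile (hypothetical paths)
COMMON_DIR = ../../common

# Append our out-of-tree headers to the firmware's include search path
INCLUDE_DIRS += $(COMMON_DIR)/inc

# Append the shared sources so they get compiled into each application
CPPSRC += $(wildcard $(COMMON_DIR)/src/*.cpp)
```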
Instead of diving into more makefile hacks, I was wondering if there might be a better approach.
Our application is a bit too complex for a flat directory structure, and since a good chunk of the code is reused across projects, there is a lot of potential for static linking here.
I’ve got a couple of questions:
What changed between 0.5.3 and 0.6.4 that might break INCLUDE_DIRS and CPPSRC? I'm reading through the git repo change notes, but I might have already missed something.
Is it possible to direct the Particle makefile to produce static libraries that can simply be linked into all the tests and applications we have? I imagine I'd need to tinker with the makefiles and fork the system firmware a bit to support this, but it would save us upwards of 20 minutes per build.
If it’s not possible to create cache-able libraries, could the common code at least be broken into libraries that are referenced from each project, even if they are rebuilt each time? I’m trying to avoid symlinks and copied code for maintainability. As it stands, the Particle directory structure seems to rule this out, since libraries apparently need to live inside a specific project tree.