Issues with 0.4.9 firmware [SOLVED]

Help! I tried updating my photon to 0.4.9, and now it’s perpetually stuck in breathing magenta.

I just tried updating a photon from 0.4.7 to 0.4.9 and ran into a couple of problems:
first, when I run:

sudo npm update -g particle-cli && particle update

it loads the 0.4.9 system firmware, but immediately on restart it goes into breathing magenta mode and hangs there indefinitely (10 minutes and counting; I’ve done 3 restarts with the same result each time). I get the same behavior if I load the new 0.4.9 firmware over DFU.

I also tried to return to 0.4.7 by loading the 0.4.7 system firmware and a binary compiled under 0.4.7 over DFU, but that doesn’t seem to work – I’m able to load the code fine, but the photon again sticks in breathing magenta upon reboot.

No matter what I do, my photon is stuck in breathing magenta. Any ideas how to fix this?


Try flashing Tinker to it?

Quick update –
I wrote a little sketch for another photon that prints out System.version() over serial. I ran it after running sudo npm update -g particle-cli && particle update, and it prints out 0.4.7. I ran it a couple of times to be sure: particle update is absolutely loading the 0.4.7 firmware.
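For anyone who wants to check their own device, the sketch was nothing fancy — roughly this (a minimal sketch using the Particle Wiring API; runs on the device, not on a desktop):

```cpp
// Minimal version-reporting sketch (Particle Wiring API).
void setup() {
    Serial.begin(9600);
}

void loop() {
    // System.version() returns the system firmware version string,
    // e.g. "0.4.7" or "0.4.9".
    Serial.println(System.version());
    delay(5000);
}
```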

I ran the sketch after loading the 0.4.9 firmware with dfu-util, and it correctly reports 0.4.9. I’m unable to recreate the breathing magenta issue with this photon, although it’s still going on with the photon I had my original issue with.

@Moors – flashing tinker cleared up the breathing magenta issue. Thanks!

I’m also seeing an issue with flash --usb

I compiled a sketch using the particle cloud compiler:

particle compile photon sketch.ino

and then I put the photon into dfu mode and tried to load the binary on over USB, and got this error:

tests-MBP-3:8x japhy$ particle flash --usb binary.bin 
running dfu-util -l
Found DFU device 2b04:d006

Error writing firmware...CRC is invalid, use --force to override

I’m seeing the same error on two different MacBooks. OTA updates and loading directly via dfu-util work fine, but flash --usb always gives me this error.

Try adding --force to the command?

I was able to update my Photon from 0.4.7 to 0.4.9 without any problems. (Photon in DFU mode, blinking yellow, connected via USB.)

I used

npm update -g particle-cli && particle update

to update :slight_smile:

OK, so I also managed to update one photon (so far) to 0.4.9 by bringing the CLI up to date, downloading the bins, and using
particle flash --usb xxx.bin (parts 1 & 2).

Worked perfectly first time.


I rebuilt my existing (0.4.7) app and it’s fine.

Add a new call (ATOMIC_BLOCK()) and both the Atom dev environment and the CLI compile command fail with
error: ‘ATOMIC_BLOCK’ was not declared in this scope
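For context, this is roughly the call I’m adding (a minimal sketch of the new 0.4.9 interrupt-management API as I understand it; the shared counter is just for illustration):

```cpp
volatile int sharedCounter = 0;

void loop() {
    ATOMIC_BLOCK() {
        // Interrupts are disabled for the duration of this block and
        // restored automatically when the block exits.
        sharedCounter++;
    }
}
```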

So – it looks like, despite all info to the contrary, the online compiler is NOT compiling for 0.4.9 (yet??). How do I make it do so, please? :-O



The firmware was released on GitHub but not yet pushed to the cloud systems. That means: no compiling in Web IDE or CLI (which uses the cloud compiler as well). So we need to wait until they get everything up and running with the new firmware version.


@enjrolas, @hl68fx, @mhdevx, @GrahamS: 0.4.9 is out on the public build farm now too!


0.4.9 Seems to have broken something with SPI and FastLED. I get 30+ compile errors, all pointing to chipsets.h inside FastLED complaining that “mSPI was not declared in this scope.”

Possibly related, I also get six repeated errors that say “expected unqualified-id before ‘)’ token” in ../wiring/inc/spark_wiring_spi.h:115:32.

You might want to file an issue for this on the lib repo.
Meanwhile you might need to stick with 0.4.7 :weary:

Or you could try #pragma SPARK_NO_PREPROCESSOR with its implications - the preproc might play up with this.

Thanks for the suggestion, but my code is already set up for this. (I tried enabling the preprocessor too, but it didn’t help.) Filing an issue now.


We just ran into a similar issue with Serial. The problem is that Serial and SPI are now macros, so definitions that reuse those names as ordinary identifiers no longer compile.
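To illustrate the failure mode (with made-up names standing in for the firmware’s — this is a sketch, not the actual headers): once SPI is a macro expanding to a global instance, any library declaration that uses SPI as an identifier gets mangled by the preprocessor before the compiler ever sees it:

```cpp
#include <cassert>
#include <string>

// Made-up stand-ins for the firmware's names: a global instance exposed
// through a macro, the way 0.4.9 exposed SPI.
struct SPIClass { int dummy; };
SPIClass _spi_instance;
#define SPI _spi_instance

// A library that writes e.g. `typedef SomeType SPI;` now has SPI replaced
// with `_spi_instance` by the preprocessor and fails to compile, no matter
// how deeply the typedef is nested inside classes or namespaces.

// Stringizing shows exactly what the preprocessor substitutes:
#define STR2(x) #x
#define STR(x) STR2(x)

std::string what_spi_becomes() {
    return STR(SPI);   // not "SPI" -- the macro has already expanded
}
```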


Actually, apologies for the noise – I retract the utter certainty of my last statement :smile:

While it is definitely still related, I didn’t notice that the block of code I linked to typedef’d SPI before using it. I say it’s still related because of this:

Indeed, line 115 of spark_wiring_spi.h is the macro definition of SPI.

So most likely the macro is preprocessed such that FastLED’s typedef ends up looking like garbage. I guess a quick fix would be to typedef to a different symbol, but I’m guessing this issue will affect other popular libraries as well.

Sadly, this is always a potential issue when we have everything in a global namespace.

A quick fix is to add

#undef SPI

before including any headers which themselves define SPI.
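A self-contained sketch of that workaround (hypothetical names — the real macro lives in the system firmware headers):

```cpp
#include <cassert>

// Pretend this came from the system firmware header:
#define SPI _global_spi_instance

// The fix: remove the macro before including (or declaring) anything that
// uses SPI as an ordinary identifier.
#undef SPI

// A library is now free to use SPI as its own type name again:
struct SPI { int clock_khz; };

int spi_clock() {
    SPI bus{4000};   // compiles only because the macro is gone
    return bus.clock_khz;
}
```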


Brilliant! Adding this to the top of chipsets.h in the FastLED lib solved the problem. I’ll submit an issue on the FastLED GitHub, but also page @kriegsman here.

FYI I’m also using the SdFat library, and I had to move that #include above FastLED in order to compile, since it doesn’t define its own SPI.

EDIT: The changes to Particle SPI were reverted shortly after the 0.4.9 release, and #undef SPI is no longer needed.

Sadly, this is always a potential issue when we have everything in a global namespace.

The problem isn’t that everything’s in a global namespace, the SPI types in FastLED are nested inside of class definitions that themselves are nested inside of a library namespace.

The problem is that #define macros stomp all over everything no matter how well scoped or namespaced it is. That’s an excellent reason not to use them. I’ve been slowly either pulling them out of FastLED entirely or renaming them so they don’t stomp on other things; I shouldn’t have to convolute the naming of everything in my library (variables in functions, class members, nested types, etc.) to guard against Particle’s #defines. This is at least the second one I know of that messes with FastLED: one of my scoped enums for RGB ordering gets clobbered because Particle uses a #define macro for RGB instead of a constant or an enum.

Thanks for your thoughts. I fully agree that #defines should be avoided where possible (e.g. our pin constants A3, D0 etc will become true constants rather than defines.)

In this case, do you see how we can ensure the global objects are initialized before use when used themselves from a global constructor without using #defines?

Have SPI’s constructor call a function, e.g. init():

class CSPI {
public:
    CSPI() { init(); }
    void init() { /* magic */ }
};

Then have your constructor for the other object call init() on the object before using it - the memory would have already been allocated (the nice thing about static globals).

C++ guarantees that all static/global constructors will be called before main - and it guarantees order in a single file will be order of appearance. Sadly, it makes no such guarantees across files, and in fact is officially undefined.

Having static globals that depend on static globals in other compilation units being initialized first is generally a bad idea.
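A compilable sketch of that pattern (names are hypothetical, and the real init() would touch hardware): the constructor funnels into an idempotent init(), so any other global whose constructor needs SPI can call init() itself and stop depending on cross-translation-unit initialization order:

```cpp
#include <cassert>

class CSPI {
public:
    CSPI() { init(); }
    void init() {
        if (initialized_) return;   // idempotent: safe to call twice
        initialized_ = true;
        clock_khz_ = 4000;          // hardware setup would go here
    }
    int clock() const { return clock_khz_; }
private:
    bool initialized_ = false;
    int clock_khz_ = 0;
};

CSPI spi;   // the global instance

// Another global whose constructor uses SPI calls init() first. Static
// storage is zero-initialized before any constructor runs, so even if this
// constructor ran before spi's, init() would still do the right thing.
struct Display {
    int spi_clock;
    Display() {
        spi.init();
        spi_clock = spi.clock();
    }
};
Display display;
```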

(Apologies for typos - on a phone at 35,000 feet)


Thanks for the suggestion.

That could work, but requires developers to take additional steps in their code to know when to call the init method - we’d prefer to avoid the additional steps where possible. I’m wondering if this is a little dangerous - will the virtual method table be initialized for the instance when the constructor hasn’t executed?

I agree having global instances calling others is a bad idea, but it’s a reality we are facing.