it loads the 0.4.9 system firmware, but immediately on restart it goes into breathing magenta mode and hangs there indefinitely (10 minutes and counting; I’ve done 3 restarts with the same result each time). I get the same result if I load the new 0.4.9 firmware over DFU.
I also tried to return to 0.4.7 by loading the 0.4.7 system firmware and a binary compiled under 0.4.7 over DFU, but that doesn’t seem to work – I’m able to load the code fine, but the photon again sticks in breathing magenta upon reboot.
No matter what I do, my photon is stuck in breathing magenta. Any ideas how to fix this?
Quick update –
I wrote a little sketch for another photon that prints out System.version() over serial. I ran it after running sudo npm update -g particle-cli && particle update, and it prints out 0.4.7. I ran it a couple of times to be sure: particle update is absolutely loading 0.4.7 firmware.
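For reference, the version-check sketch can be as simple as something like this (a minimal sketch; the baud rate and delay are my own choices, not anything required):

```
// Minimal Particle sketch: print the running system firmware version
// over USB serial. Watch it with e.g. `particle serial monitor`.
void setup() {
    Serial.begin(9600);
}

void loop() {
    Serial.println(System.version());  // prints the Device OS version string
    delay(5000);
}
```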
I ran the sketch after loading the 0.4.9 firmware with dfu-util, and it correctly reports 0.4.9. I’m unable to recreate the breathing magenta issue with this photon, although it’s still going on with the photon I had my original issue with.
@Moors – flashing tinker cleared up the breathing magenta issue. Thanks!
I’m also seeing an issue with flash --usb
I compiled a sketch using the particle cloud compiler:
particle compile photon sketch.ino
and then I put the photon into dfu mode and tried to load the binary on over USB, and got this error:
tests-MBP-3:8x japhy$ particle flash --usb binary.bin
running dfu-util -l
Found DFU device 2b04:d006
Error writing firmware...CRC is invalid, use --force to override
I’m seeing the same error on two different MacBooks. OTA updates and loading directly via dfu-util both work fine, but flash --usb always gives me this error.
OK, so I also managed to update one photon (so far) to 0.4.9 by bringing the CLI up to date, downloading the bins, and using
particle flash --usb xxx.bin (parts 1 & 2).
Worked perfectly first time.
BUT!!!
I rebuilt my existing (0.4.7) app and it’s fine.
Adding a new call (ATOMIC_BLOCK()) makes both the Atom dev and CLI compile commands fail with
error: ‘ATOMIC_BLOCK’ was not declared in this scope
So it looks like, despite all info to the contrary, the online compiler is NOT compiling for 0.4.9 (yet??). How do I make it do so, please? :-O
The firmware was released on GitHub but not yet pushed to the cloud systems. That means: no compiling in Web IDE or CLI (which uses the cloud compiler as well). So we need to wait until they get everything up and running with the new firmware version.
0.4.9 seems to have broken something with SPI and FastLED. I get 30+ compile errors, all pointing to chipsets.h inside FastLED, complaining that “mSPI was not declared in this scope.”
Possibly related, I also get six repeated errors that say “expected unqualified-id before ‘)’ token” in ../wiring/inc/spark_wiring_spi.h:115:32
Actually, apologies for the noise; I retract the utter certainty of my last statement.
While it may well still be related, I didn't notice that the block of code I linked to typedefs SPI before using it. I say it's still related because of this:
Indeed, line 115 is the macro definition of SPI:
So most likely the macro is preprocessed such that FastLED's typedef ends up looking like garbage. A quick fix would be to typedef to a different symbol, but I'm guessing this issue will affect other popular libraries as well.
Brilliant! Adding this to the top of chipsets.h in the FastLED lib solved the problem. I'll submit an issue on the FastLED GitHub, but also page @kriegsman here.
FYI I'm also using the SdFat library, and I had to move that #include above FastLED in order to compile, since it doesn't define its own SPI.
EDIT: The changes to Particle SPI were reverted shortly after the 0.4.9 release, and #undef SPI is no longer needed.
Sadly, this is always a potential issue when we have everything in a global namespace.
The problem isn't that everything's in a global namespace; the SPI types in FastLED are nested inside class definitions that are themselves nested inside a library namespace.
The problem is that #define macros stomp all over everything, no matter how well scoped or namespaced it is. That's an excellent reason not to use them. I've been slowly either pulling them out of FastLED entirely or renaming them so they don't stomp on other things, but I shouldn't have to convolute the naming of everything in my library (variables in functions, class members, nested types, etc.) to guard against Particle's #defines. This is at least the second one I know of to mess with FastLED: one of my scoped enums for RGB ordering gets clobbered because Particle uses a #define macro for RGB instead of a constant or an enum.
Thanks for your thoughts. I fully agree that #defines should be avoided where possible (e.g. our pin constants A3, D0 etc will become true constants rather than defines.)
In this case, do you see how we can ensure the global objects are initialized before use when they are themselves used from a global constructor, without using #defines?
Then have your constructor for the other object call init() on the object before using it - the memory would have already been allocated (the nice thing about static globals).
C++ guarantees that all static/global constructors will be called before main, and it guarantees that within a single file the order is order of appearance. Sadly, it makes no such guarantee across files; the cross-file initialization order is officially unspecified.
Having static globals that depend on static globals in other compilation units being initialized first is generally a bad idea.
That could work, but requires developers to take additional steps in their code to know when to call the init method - we’d prefer to avoid the additional steps where possible. I’m wondering if this is a little dangerous - will the virtual method table be initialized for the instance when the constructor hasn’t executed?
I agree having global instances calling others is a bad idea, but it’s a reality we are facing.