C++11 standard for firmware build

I want to use some of the standard library facilities in &lt;random&gt;, but this causes an error unless the C++ standard is set to C++11 (i.e. -std=c++11 passed to gcc).

Is there a reason we are not using c++11 by default for the build?

Probably because even in gcc 4.8 it's still experimental, even for x86, let alone ARM.

The firmware doesn't seem to build if I enable the flag in my local build environment (gcc 4.8.3).


You could try -std=gnu++11 - there may be gnu-specific extensions being used in the firmware. I’m using that and it builds successfully here.

AFAIK, the way gcc works is that the front end turns the source code into an intermediate representation, which is separate from the code-generating back end, so if a language feature is supported on one architecture it's usually supported on all of them.

I'm trying to use the new &lt;random&gt; facilities in C++11 to produce generator objects that yield a repeatable sequence from a given seed. I may be able to fall back to the global srand()/rand() functions, since only one generator instance is used at a time.
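To illustrate the two approaches being weighed here, a minimal sketch (the function names are illustrative, not from the firmware):

```cpp
#include <cstdlib>
#include <random>

// C++11 approach: a generator object (here std::mt19937) seeded
// explicitly, so the sequence is repeatable from a given seed.
unsigned next_value_cxx11(std::mt19937& gen) {
    std::uniform_int_distribution<unsigned> dist(0, 99);
    return dist(gen);
}

// Pre-C++11 fallback using the global generator, as mentioned above;
// workable when only one generator instance is needed at a time.
unsigned next_value_fallback() {
    return static_cast<unsigned>(std::rand()) % 100;
}
```

The C++11 version lets each seeded generator carry its own state, which is what makes multiple independent sequences possible; rand() shares one global state.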


Are you trying to compile locally or in the cloud IDE?

Locally. I added the compiler flags to the makefile.

I’m curious too when I really think about it. I feel that there must be a good reason considering some of the relatively serious work put into things like String.

Maybe it's because C++11 features live mostly in the standard library, and pulling those in tends to inflate the binary quickly? I'm just speculating.

I’ve seen some CXX0X-kinda ifdefs in the firmware code, so someone has definitely experimented with it.
But since there were multiple somewhat obscure errors trying to build in C++11 mode out of the box, I didn’t bother trying to go further with it. Rather watered down my own code and standards and went back to gargantuan iterators etc. :stuck_out_tongue:
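For anyone wondering what "gargantuan iterators" means in practice, here's the kind of boilerplate that C++11's range-based for would remove (a generic example, not firmware code):

```cpp
#include <vector>

// Pre-C++11: the iterator type must be spelled out in full.
int sum_pre11(const std::vector<int>& v) {
    int total = 0;
    for (std::vector<int>::const_iterator it = v.begin(); it != v.end(); ++it)
        total += *it;
    return total;
}

// C++11: range-based for does the same thing without the ceremony.
int sum_cxx11(const std::vector<int>& v) {
    int total = 0;
    for (int x : v)
        total += x;
    return total;
}
```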

I don’t know about the arm side, but for x86 I wouldn’t really call it “experimental”. Some parts of the standard might still be under implementation, but the most obvious and important ones have been around and working fine for a few years already.
It’d be nice for the firmware to compile in C++11 mode. Maybe someone from the firmware team, who apparently has experimented with it already, could chip in with how plausible it might be :stuck_out_tongue:


Just to be clear on the details, I added -std=gnu++11 only to core-firmware, not to the communications or common libs, but this was because I’m porting ArduinoUnit, which uses the GNU typeof, so gnu extensions are required.

Having said that, I just tried with all common and communication libs:

  • The common lib throws warnings about fd_set and __FD_ZERO being redefined, but it does this anyway without any -std flag.
  • The core lib fails with ‘strnlen’ not being found:

…/src/spark_protocol.cpp: In member function ‘bool SparkProtocol::add_event_handler(const char*, EventHandler)’:
…/src/spark_protocol.cpp:565:67: error: ‘strnlen’ was not declared in this scope
const size_t FILTER_LEN = strnlen(event_name, MAX_FILTER_LEN);
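That strnlen error is likely because strnlen is a POSIX extension, not part of ISO C++, so a strict -std=c++11 mode can hide its declaration while -std=gnu++11 keeps it visible. One way to sidestep this locally is a small fallback (a sketch, not the firmware's actual fix; the function name is made up here):

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical strict-mode replacement for strnlen(): scan at most
// max_len bytes for the terminator using std::memchr, which is
// standard C++ and always declared.
static std::size_t my_strnlen(const char* s, std::size_t max_len) {
    const char* end =
        static_cast<const char*>(std::memchr(s, '\0', max_len));
    return end ? static_cast<std::size_t>(end - s) : max_len;
}
```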

But the common and comms libs don't currently need C++11, so there's not much point adding it there. I would just add it to the firmware lib, and use gnu++11 so that the GNU extensions can be used.

So, that’s what I did and the code compiles and runs for me.


I’m going to ping @zachary and @satishgn. They’re the firmware gurus, so maybe they can provide some more insight!

Thanks Garrett. Good idea all. We’ll consider using C++11 in the next sprint. Pull requests welcome as always!

I would like to use static_assert (apparently a new feature) and it compiles in the cloud, but not locally.

I see that -std=gnu++11 was added to core-firmware/build/makefile, but only if TEST is defined? Should that be for all builds to line up with what the cloud is doing?

Have you tried the feature/hal branch? This is what will become 0.4.0. I also implemented a STATIC_ASSERT macro in the services library that doesn't depend on C++11 features. Hope that helps!
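For reference, pre-C++11 static assertion macros are typically built on the trick that a negative array size is a compile error. A sketch of the general pattern (the services library's actual STATIC_ASSERT may be implemented differently):

```cpp
// Classic pre-C++11 static assertion: if the condition is false, the
// array size is -1 and compilation fails at this line. The name
// parameter keeps the typedef unique and shows up in the error.
#define STATIC_ASSERT(name, condition) \
    typedef char static_assert_##name[(condition) ? 1 : -1]

// Example use: fails to compile on any platform where int isn't 4 bytes.
STATIC_ASSERT(int_is_4_bytes, sizeof(int) == 4);
```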

I think so! I’ll get to it when it’s higher priority.

Sorry for the delay Matt, I now know the difference between “tracking” and “watching” in the support forum.

How is it that, even though we build against C++11, using std::to_string(1) gives error: 'to_string' is not a member of 'std'?

I get it both with local and cloud compilation. I also added CPPFLAGS += -std=c++11 to my application’s mk.

This is a topic mentioning the same issue: https://gcc.gnu.org/ml/libstdc++/2013-10/msg00245.html

Have you seen this thread? Is this the same problem?


and the answer:


Thanks for the quick answer. The thread you mentioned is linked in my previous message too. It is the same issue I’m encountering but if in there lies the answer to my issue than I didn’t understand it, which is not surprising.

As I understand it, this is a bug in newlib: it ties together configuration flags that should be independent, which causes libstdc++ to disable to_string. Fixing it would require changing newlib and recompiling it, which can be done locally.

This is not really a Particle specific issue but a general ARM gcc + newlib toolchain problem.

What problem are you trying to solve?

You are potentially dragging in a lot of code with std on this platform, which is fine, but there might be a simple way to fix your problem by say using Arduino String objects or char arrays.
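In that spirit, a lightweight workaround for the missing std::to_string is to format into a char array with snprintf, which is available on this toolchain regardless of the newlib configuration (a sketch; the function name is made up):

```cpp
#include <cstdio>
#include <string>

// Hypothetical substitute for std::to_string(int) on toolchains
// where newlib's configuration leaves it undeclared: snprintf into
// a stack buffer, then wrap the result.
std::string int_to_string(int value) {
    char buf[16];  // enough for any 32-bit int plus sign and NUL
    std::snprintf(buf, sizeof(buf), "%d", value);
    return std::string(buf);
}
```

On a tight platform you could also skip std::string entirely and pass the char buffer around directly, which pulls in even less code.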

Indeed I will just look for a different solution. I wanted to use this JSON library that uses std::to_string, but I will just look for another one. This one is a good candidate for instance.

Why would "dragging in a lot of code" be an issue, BTW?

Code space is a limited resource on any embedded platform! Photon has a decent amount of space for user code, but the older Core was very easy to overflow, for instance. Every time you link in a std method, you are pulling stuff into code space, sometimes a lot of stuff since there are dependencies.

The second library from bblanchon has already been ported to the Particle world and is available in the webIDE or from github.

I see. I haven't used the Core, and for us space is definitely not an issue. Thanks for pointing out SparkJson, I was just looking at it.
Is the porting due to Particle's flat directory structure? Do you know if this will be improved in the future?