Mutex.h compilation issues with application.h?

I need access to some of the C++11 std lib functionality, like in mutex.h. I’d like to use the std::mutex class (e.g. http://en.cppreference.com/w/cpp/thread/mutex). This super basic code below seems to blow up compilation in the web IDE. Any suggestions? (I’ve tried it with/without the mutex header included, with/without the “std” namespace, etc). Seems some macros (e.g. min, max) defined in the implicitly included “application.h” header (Spark Core) break compilation of stuff lower down in the chain?

//#include "stdlib.h"
#include <mutex>

std::mutex  some_mutex;

void setup() 
{
}

void loop() 
{
}

Compilation errors:

In file included from /opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/bits/char_traits.h:39:0,
from /opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/string:40,
from /opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/stdexcept:39,
from /opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/array:38,
from /opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/tuple:39,
from /opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/mutex:38,
from test2.cpp:2:
/opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/bits/stl_algobase.h:239:56: error: macro "min" passed 3 arguments, but takes just 2
min(const _Tp& __a, const _Tp& __b, _Compare __comp)
^
/opt/gcc_arm/arm-none-eabi/include/c++/4.8.4/bits/stl_algobase.h:260:56: error: macro "max" passed 3 arguments, but takes just 2
max(const _Tp& __a, const _Tp& __b, _Compare __comp)
^
(…)

Hi @jdr

It looks like you are running into the Arduino definitions of min() and max(). You could try adding this above the include:

#undef min
#undef max
//as before
#include <mutex>
...

I don’t know how much of the std lib you are going to be able to fit on the Core; probably not very much of it, but if you only need a few things you might be OK.


Thanks for the reply @bko. It’s looking like a hairy mess, even with undefining those macros. I think I’ll just port the mutex code over myself to avoid this. I agree, stdlib is massive and I just wanted to cherry pick a few useful things for my project.

I think it would be handy to have a basic mutex class part of the Spark firmware. I’m working on a project with a main (loop) thread and two smaller threads (one driven off a hardware timer, and one driven off an external interrupt from a radio module). There’s a small set of shared variables that need to be modified atomically (and sometimes together). Having a lock to synchronize on would be useful.

Joe


The internal SPI bus is protected by a mutex to ensure only one device asserts its CS line - https://github.com/spark/core-common-lib/blob/master/SPARK_Firmware_Driver/src/spi_bus.c#L24

I’d really like to replace those macros with inline functions so we avoid the problem with stdlib, thus keeping our low floor, and raising the ceiling.


Thanks @mdma. Checked out that code. Is calling __sync_synchronize() prior to a __sync_bool_compare_and_swap() something you should always do? Seems you want to force all memory changes to take effect right before you do the mutex compare & swap? Just curious.

Related to all this, part of my project is also running on the AVR platform. The __sync_bool_compare_and_swap() built-in isn’t supported by GCC for that environment (sadly), but it seems I can leverage the <util/atomic.h> library that’s part of avr-libc to do the equivalent (the ATOMIC_BLOCK macro globally disables and then restores interrupt state, guaranteeing an atomic block to R/W the mutex var). Code I wrote for that below:

#include <util/atomic.h>

bool try_mutex_lock(volatile bool *lock)
{
   // ATOMIC_RESTORESTATE saves SREG on entry and restores it on any
   // exit from the block (including the early return), so the whole
   // test-and-set is uninterruptible.
   ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
   {
      if (*lock)
         return false;
      *lock = true;
   }
   return true;
}

Hi @jdr!

Some nice code there! Yes, it's a pity the builtins aren't available on Arduino, but as you found, the atomic macro emulates that, although I don't know if it provides any kind of memory fence (or if that is even an issue on the Arduino!)

I don't believe __sync_synchronize() is needed before the other atomic built-ins - the gcc page on atomics says this:

In most cases, these builtins are considered a full barrier. That is, no memory operand will be moved across the operation, either forward or backward. Further, instructions will be issued as necessary to prevent the processor from speculating loads across the operation and from queuing stores after the operation.

Haha! The word "most" is key: which ones are full memory barriers, and which aren't? :question::exclamation: The compare-and-swap atomic is only meaningful with full memory-barrier semantics, so I'm betting that's how it's implemented.

Cheers,
mat.

On the Core there is precisely one thread of execution. What’s the need for any complicated mutex algorithm?

Interrupts - there are multiple threads of execution.

Yes, interrupts. You simply set a flag - you call noInterrupts() around critical sections of the code. We don’t need test-and-set or Peterson’s algorithm. Am I being naive?

Yes. :wink:


OK, thanks for the education, but if interrupts are disabled you know for sure that you are the only current thread - you won’t be interrupted - there is no pre-emptive thread switching. So then a simple solution such as this works fine:

[I’ve re-edited this mutex setting and using code to make it plain that a mutex is not required in a single threaded and therefore non-pre-emptive machine - all the ISR needs do is set a state. The mutex code is redundant.]

int mutex = 0;

void ObtainMutex()
{
  while( true) {
    while( mutex != 0)
      delay(10); // stuck unless interrupted
    noInterrupts();
    if( mutex == 0)
      break;        // still masked: safe to take the mutex
    interrupts();
  }
  mutex = 1;
  interrupts();
  return;
}

void ReleaseMutex()
{
  mutex=0;
  return;
}

// And this function really shows the futility of the whole exercise -
// if the other thread has the mutex we are *not* running
// unless we are an ISR.
// if we are not an ISR 
//    and we are running we must have the mutex - if there is one
// if we are not an ISR
//    and we are running and we don't have the mutex then we never set it
// and we call this because we can't remember whether we have set it!
int TestMutex()
{
  return mutex;
}


void ThreadAAAA()
{
  DoNonCriticalAAAA();

  // Better below to simply write "if( state)"
  ObtainMutex();
  DoCriticalAAAA();
  ReleaseMutex();
  return;
}

void ThreadBBBB()
{
  DoNonCriticalBBBB();

  // Much better would be "if( ! state)"
  ObtainMutex();
  DoCriticalBBBB();
  ReleaseMutex();
  return;
}

void loop()
{
  if( state == 1)
    ThreadAAAA();
  else
    ThreadBBBB();
}

ISR()
{
  state = ! state;
}

I had the discussion about masking interrupts before - @mdma decided on (and more importantly, implemented) a different approach.

There’s enough forward progress that needs to be made, so revisiting prior decisions like mutex vs irq masks is probably something that we can productively skip.

Possibly, but all I am suggesting is the momentary (long enough to set a volatile byte) masking of interrupts to allow a mutex to be implemented. I don’t know how expensive switching off and re-enabling interrupts is but I had (naively?) thought this might be very cheap also.

I’m not really coming down one side or the other, mutex vs irq masks, I’m simply suggesting one possible way the original poster might get his mutex implemented.

And looking for the reason why my naive suggestion won’t work. Possibly granularity?

If a generalised mutex solution by @mdma exists why not communicate this to the original poster?

Dunno - the last thing I want to do is put words in @mdma’s mouth.

I’m just trying to demarcate ground that has already been covered.

I understand, but I address the problem, not some Spark architectural issue. Perhaps the @mdma method could be generalised into a Spark mutex library. I am not competent to do that low level stuff - my solution merely uses the documented functions.

This issue has been left hanging.

@jdr, in conclusion(?) therefore:

The Core’s one memory space, one process, one thread of execution (plus interrupts) means that the full-blown GCC mutex library is inappropriate and unnecessary (as well as difficult to get working).

Usually mutex protection is not needed on the Core in the way it is on “proper” pre-emptive multi-threaded/process machines.

If a mutex is necessary on the Core, only interrupt protection is needed, as in the example above. But a mutex isn’t necessary: just set a flag in your interrupt service routine and return.

Yes, we can close this out. Thanks for your replies and suggestions. I have spent so long writing multi-threaded app code that reaching for a mutex is a trap I fall into even in this single-threaded/ISR-driven scenario. (/facepalm).

I ended up just disabling interrupts completely to create atomic code blocks when needed (maybe a bit more intensely than just using noInterrupts(), as I didn’t want Spark interrupts doing processing in the background to mess with precise timing in my project). My project runs across Spark (ARM) and AVR chips, so I use the following wrapper code (with some #define’s, of course) when wanting uninterruptible code blocks. (That, and carefully thinking through shared variable accesses and the state machine, so you don’t need a mutex!) :smile:

#ifdef __PLATFORM_ARM
uint32_t mask_val = __get_PRIMASK();
__disable_irq();
// Do uninterruptible stuff on Spark chip
#endif

#ifdef __PLATFORM_AVR
ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
// Do uninterruptible stuff on AVR chip
#endif

// Restore previous interrupt state
#ifdef __PLATFORM_ARM
if (!mask_val)
   __enable_irq();
#endif
#ifdef __PLATFORM_AVR
   }  // End of ATOMIC_BLOCK() macro
#endif