Can Spark Core's malloc implementation fail gracefully?

Hi everyone, I have a number of questions about memory allocation on the Core.

In principle, when malloc is unable to satisfy a request, it’s supposed to fail by returning a null pointer - will the Core’s malloc implementation ever do this?

In practice I’ve found that every time I’ve made a call to malloc that it can’t satisfy, I end up with the Flashing Red Light of Doom.

I’ve been trying to work out how to cache some data opportunistically (on the order of a few hundred bytes, ideally several times that). I’d like to do it by trying to allocate what I want and falling back to clearing the existing cache if the allocation fails - something like the sketch below. Obviously this relies on malloc being able to return null, which doesn’t appear to be the case.
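Here’s a minimal sketch of the pattern I’m after. cacheClear() is just a placeholder for my own code, and of course it assumes malloc really does return NULL on failure rather than panicking:

```cpp
#include <stdlib.h>

// Hypothetical placeholder: in my code this would free everything
// currently held in the cache.
static void cacheClear()
{
    // ... release cached buffers here ...
}

// Try to allocate; if that fails, drop the cache and try once more.
// Assumes malloc can return NULL instead of panicking.
static void* cacheAlloc(size_t bytes)
{
    void* p = malloc(bytes);
    if (p == NULL)
    {
        cacheClear();
        p = malloc(bytes);
    }
    return p;   // may still be NULL if even an empty heap can't satisfy it
}
```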

A different approach I’ve taken is to keep track of the total size of allocations I’ve made in order to keep it within some well-defined limit; I got further with this approach, but eventually found that it doesn’t take many allocations before I get the FRLoD even when the total allocation size is still well under the limit - presumably due to heap fragmentation?
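For illustration, the kind of tracking wrapper I mean looks roughly like this (the 4 KB budget is a made-up figure). It caps what I ask for, but it can’t see fragmentation, which is presumably why I still hit the panic while under the limit:

```cpp
#include <stdlib.h>

static const size_t kBudget = 4096;   // made-up limit for illustration
static size_t allocatedBytes = 0;     // running total of live allocations

static void* trackedAlloc(size_t bytes)
{
    if (allocatedBytes + bytes > kBudget)
        return NULL;                  // refuse: over my self-imposed budget
    void* p = malloc(bytes);          // on the Core today this can still panic
    if (p != NULL)
        allocatedBytes += bytes;
    return p;
}

static void trackedFree(void* p, size_t bytes)
{
    free(p);
    allocatedBytes -= bytes;
}
```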

Alternatively (and setting heap fragmentation aside for the moment), is there any way to find out how much heap space is available? Do the stack and the heap just grow towards each other until they collide? That’s what I’m inferring from https://community.spark.io/t/how-to-know-how-much-ram-flash-i-am-using/2150/3. If I’ve understood that correctly, then modifying that function so it doesn’t add in the size of the free list should give a figure that really is available as contiguous memory - though I may well have misread it. I guess I could then scan the free list as well if I really wanted to. Is that kind of approach my best option, or has anything changed in the interim?
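To make that concrete, the sort of check I have in mind is something like this - a sketch only, assuming sbrk(0) reports the current heap end as it does with the usual newlib stubs; I haven’t verified this against the Core’s code:

```cpp
#include <stdint.h>
#include <unistd.h>

// Rough estimate of the contiguous gap between the top of the heap and
// the current stack position.
static uint32_t stackHeapGap()
{
    char marker;                           // lives on the stack
    char* heapTop = (char*) sbrk(0);       // current end of the heap
    return (uint32_t)(&marker - heapTop);  // bytes between them
}
```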

Ultimately, I guess my question is ‘is there any way to know ahead of time whether a given memory allocation request will cause the Core to panic?’


So glad you brought this up, since I’ve been wanting to do this for a while! Dynamic memory allocation will be a key part of the firmware for the Photon, and hopefully this can be backported to the Core as well.

Yes, the stack and heap do grow towards each other, so you could compare the heap and stack pointers to see whether an allocation might fail.

In fact, in src/newlib_stubs.cpp you’ll find the _sbrk function, and it does exactly that - it looks at the heap and stack pointers to decide whether more heap can be allocated. _sbrk is what the memory manager calls when it needs to grow the heap, and at the moment it calls the SOS panic when that isn’t possible. Changing it to report failure instead of panicking - newlib expects _sbrk to signal this by returning (caddr_t) -1 with errno set to ENOMEM, at which point malloc returns NULL to the caller - should be all that is required to have malloc fail gracefully.
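Something along these lines - sketched from memory of that file, so the exact symbol names (_end, __Stack_Init) and surrounding code may differ:

```cpp
#include <errno.h>
#include <sys/types.h>

extern char _end;          /* defined by the linker: start of the heap  */
extern char __Stack_Init;  /* defined by the linker: limit _sbrk checks */

caddr_t _sbrk(int incr)
{
    static char* heap_end = &_end;
    char* prev_heap_end = heap_end;

    if (heap_end + incr > &__Stack_Init)
    {
        /* Previously: SOS panic. Instead, report failure the way newlib
           expects, so that malloc() returns NULL to the caller. */
        errno = ENOMEM;
        return (caddr_t) -1;
    }

    heap_end += incr;
    return (caddr_t) prev_heap_end;
}
```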

I hope that helps you get started - let us know how you get on!

Thanks for confirming I’m not totally off the rails.

I wonder if you could clarify one thing: _sbrk compares heap_end with __Stack_Init, which the comment notes is defined by the linker. Is that a fixed point marking how far the stack is permitted to grow - in other words, is the heap guaranteed to be able to grow up to it - as opposed to comparing against the live stack pointer to see what space merely happens to be free at that moment?

I’m also wondering why this differs from the method used in freeMemory in the post I linked, which just puts something new on the stack and checks its address - an approach that now looks wrong in light of the way _sbrk works.
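To illustrate the distinction I’m asking about, here is the quantity _sbrk appears to check, in contrast to the gap-to-current-stack figure I sketched in my first post (a sketch only - symbol names are taken from the discussion above, not verified):

```cpp
#include <stdint.h>
#include <unistd.h>

extern char __Stack_Init;   // the linker-defined limit _sbrk checks against

// The headroom _sbrk appears to guarantee: space between the current heap
// end and the fixed __Stack_Init limit, rather than the live stack pointer.
static uint32_t heapHeadroom()
{
    char* heapTop = (char*) sbrk(0);            // current end of the heap
    return (uint32_t)(&__Stack_Init - heapTop);
}
```

If __Stack_Init really is a fixed reservation for the stack, this figure would be a guarantee, whereas the stack-pointer version only reports whatever happens to be free at the instant it’s called.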