Hi everyone, I have a number of questions about memory allocation on the Core.
In principle, when malloc is unable to satisfy a request, it’s supposed to fail by returning a null pointer. Will the Core’s malloc implementation ever actually do this?
In practice I’ve found that every time I’ve made a call to malloc that it can’t satisfy, I end up with the Flashing Red Light of Doom.
I’ve been trying to work out how to opportunistically cache some data (on the order of a few hundred bytes, perhaps several times that if possible). Ideally I’d do this by trying to allocate what I want, then falling back to clearing the existing cache if the allocation fails. Obviously this relies on malloc being able to return null, which doesn’t appear to be the case.
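To make it concrete, here’s roughly what I have in mind (assuming malloc really does return NULL instead of panicking; `cache_buf` and `alloc_or_evict` are just names I made up for this sketch):

```c
#include <stdlib.h>

/* Hypothetical: cache_buf holds the existing cache I'd be willing to drop. */
static void *cache_buf = NULL;

/* Try to allocate; on failure, free the cache and retry once.
 * This only works if malloc fails by returning NULL rather than panicking. */
void *alloc_or_evict(size_t n) {
    void *p = malloc(n);
    if (p == NULL && cache_buf != NULL) {
        free(cache_buf);     /* sacrifice the old cache... */
        cache_buf = NULL;
        p = malloc(n);       /* ...and retry once */
    }
    return p;                /* may still be NULL -- caller must check */
}
```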
A different approach I’ve taken is to track the total size of my allocations and keep it within some well-defined limit. I got further this way, but eventually found that it doesn’t take many allocations before I get the FRLoD even when the total is still well under the limit - presumably due to heap fragmentation?
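For what it’s worth, my budget tracking looks something like this (the names and the limit are mine, and for simplicity this ignores malloc’s own per-block overhead, which is presumably part of why the real heap fills up sooner than the counter suggests):

```c
#include <stdlib.h>

#define ALLOC_BUDGET 2048          /* hypothetical cap, tuned for the Core */

static size_t total_allocated = 0;

/* Allocate with a hidden size header so the count can be decremented on free.
 * Refuses the request up front if it would blow the budget. */
void *budget_malloc(size_t n) {
    if (total_allocated + n > ALLOC_BUDGET)
        return NULL;
    size_t *p = malloc(n + sizeof(size_t));
    if (p == NULL)
        return NULL;               /* assumes malloc can return NULL at all */
    *p = n;                        /* stash the size just before the payload */
    total_allocated += n;
    return p + 1;
}

void budget_free(void *ptr) {
    if (ptr == NULL)
        return;
    size_t *p = (size_t *)ptr - 1; /* recover the header */
    total_allocated -= *p;
    free(p);
}
```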
Alternatively (setting aside heap fragmentation for the moment), is there any way to find out how much heap space is available? Do the stack and the heap just grow towards each other until they collide? That’s what I’m inferring from https://community.spark.io/t/how-to-know-how-much-ram-flash-i-am-using/2150/3. If I understand it correctly, then modifying that code not to add in the size of the free list should give a figure that is definitely available as contiguous memory - though of course I may have misunderstood. I guess I could then scan through the free list if I really wanted to. Is that kind of approach my best option, or has anything changed in the interim?
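As I understand it, the trick in that thread boils down to something like the sketch below (my naming; it measures only the gap between the top of the heap and the current stack position, so it says nothing about free blocks already inside the heap, and the `_DEFAULT_SOURCE` define is just to expose `sbrk` on a desktop glibc - the Core presumably declares it differently):

```c
#define _DEFAULT_SOURCE            /* expose sbrk() on glibc for testing */
#include <unistd.h>
#include <stddef.h>

/* Estimate the gap between the end of the heap and the current stack
 * pointer.  On a system where stack and heap grow towards each other,
 * this is an upper bound on how far malloc could still extend the heap;
 * it does NOT count free blocks already inside the heap (the free list). */
size_t free_gap(void) {
    char stack_top;                    /* address of a local ~= stack pointer */
    char *heap_end = (char *)sbrk(0);  /* current program break */
    return (size_t)(&stack_top - heap_end);
}
```

(On a desktop OS the stack and heap live in separate regions, so the number is meaninglessly large there - it only makes sense on a flat memory map like the Core’s.)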
Ultimately, I guess my question is: is there any way to know ahead of time whether a given memory allocation request will cause the Core to panic?