[SOLVED] Exceeding Vector array at 4097th value

Hi All. I’m playing with vectors for the first time on my Photon and I’ve encountered a problem. Whenever I go beyond 4096 values (the 12-bit boundary) in the array, I get garbage data.

#include <vector>
#include <numeric>

int sensorPin1 = A0;
size_t arraySizeMax;
std::vector<int> calibrationArray;

void setup() {
    pinMode(sensorPin1, INPUT);
    arraySizeMax = 4096;
}

// Sample the pin until the vector holds arraySizeMax + 1 readings,
// then publish the capacity and the first/last values.
int calibrate(int pinToCalibrate) {
    calibrationArray.clear();
    while (calibrationArray.size() <= arraySizeMax) {
        calibrationArray.push_back(analogRead(pinToCalibrate));
    }
    Particle.publish("Array Capacity", String(calibrationArray.capacity()), PRIVATE);
    Particle.publish("Initial Value", String(calibrationArray.front()), PRIVATE);
    Particle.publish("Final Value", String(calibrationArray.back()), PRIVATE);
    return 0;
}

void loop() {
}

Array Capacity 4096
Initial Value 503
Final Value 16392
Array Capacity 8192        <-- looks like additional memory has been allocated, but...
Initial Value 537001984    <-- garbage
Final Value 0              <-- garbage

I’ve cleaned up the code a little to make the issue clearer. Could really use help on this one, I’m not getting anywhere.

What happens if you don’t call clear() and just add another element on the second call to the sampling function?

Hi @icedMocha

I don’t know what size std::vector will preallocate on this platform, but it probably is not what you want anyway. Why trust the memory management to be perfect on an embedded processor? I would preallocate the std::vector to the size I needed. If you need lots of dynamic memory allocation in 16-kbyte chunks (out of a total of 128 kbytes of RAM, so each one is 1/8 of total memory), this is likely not the best platform for your needs.

You can preallocate the vector by passing a size argument to the constructor or by calling the reserve method later. Either way will likely save you some heartache.


No change.

I added calibrationArray.reserve(8192); above the while statement. Looks like that fixed the problem. Thank you!

As far as why to trust the memory management of an embedded processor, I guess I’d ask why not? (Honest question) This problem is forcing me to understand more about memory management than I have before so I’d be interested to know more.

I believe std::vector allocates memory through exponential growth, so each time I hit the limit it reserves double the memory, or something approximating that. I suppose I could control that more tightly through the reserve function. I was previously calling calibrationArray.capacity() to make sure I had enough space, but I guess the value it returns has nothing to do with the actual capacity of the hardware.


Hi @icedMocha

Memory is a scarce resource on any small platform like this, and managing it closely is often required when dealing with objects that are a large proportion of the total available memory. If you don’t want to understand the limitations of memory here, you will need to stick to smaller objects. Your first object is 1/8 of the total RAM, and the second time through I suspect you took up 1/4 of the available memory. That memory is shared with the networking code and RTOS, so you will have problems if you take too much. If you were working with 100-element vectors, I would not necessarily have suggested preallocation, but if you really do know the size, or have a maximum size, then preallocation often really helps.

The exponential growth (doubling) on a push that has no room works great on big computers like PCs with lots of physical RAM and virtual memory too. But on a smaller processor like Particle’s it can run wild and take too much.


An additional point: the free space that gets reported does not mean that space is available as one big chunk; it may be scattered all over the place due to fragmentation. So you could fit that many small elements into the free gaps, but not one single big object.
If there were a function that reported the biggest contiguous chunk of memory available, that would be the figure to check against.

Hence, for µCs, you need to care about memory management, due to the limitations.


So is it correct to say that the correct memory addresses won’t necessarily be referenced if the array isn’t stored in one big chunk? I had just assumed the system would add some kind of marker for where to jump for the next piece of data. (Why isn’t my mysterious magical box working??? :confounded: )

That’s what a vector is: contiguous storage.

To have objects that can be scattered, you could use lists, sets, or maps.
