[Submission] Flashee-eeprom library: eeprom storage in external flash


v0.1.7.8 published in the IDE.

  • Fixes the intermittent problem with data corruption after reboot
  • Improves robustness with power failures and invalid data
  • Unit/integration tests to highlight bugs and validate fixes.

(If you're interested in the details, the GitHub commits and related issues contain more info.)

Please take it for a spin!

Addendum: some tips to ensure a corruption-free experience:

  • if compiling locally, be sure to pull the latest code from all 3 core repos
  • if you experienced corrupted data, be sure to erase any previous data using Devices::userFlash().eraseAll()


fyi, this library supports writing strings like this:

    flash->writeString("myString", myAddress);

If you have lots of strings and numbers to read/write, then using a flash stream is a convenient way to do this:

    FlashWriter writer(device);
    writer.writeString("Hello World");
    writer.writeString("My name is Fred");

The streams free you from having to keep track of the addresses.


@mdma First off, your library is awesome! I'm building the library locally, and in your documentation you asked us to copy all the *.h files to a new folder, core-firmware\inc\flashee-eeprom.

In the ff.cpp and flashee-eeprom.cpp files, it's still #include "flashee-eeprom.h", which throws errors when I run make. If I change all the includes in those files to #include "flashee-eeprom/flashee-eeprom.h", it works fine. Am I doing it correctly?


Hi @nitred!

You can change the include if you want, but of course you'll have to manage this change when pulling updates.

An alternative way that doesn't change any includes is to add the inc/flashee folder to the list of include folders in the core-firmware/src/build.mk, like this:

    INCLUDE_DIRS += inc/flashee

But in hindsight, it would probably have been simplest to ask folks to clone the git repo and then set up pointers to it in the build.mk. E.g. if you clone the spark-flashee-eeprom repo to the same level as the core-firmware folder, then you could add

    INCLUDE_DIRS += ../spark-flashee-eeprom/firmware
    CPPSRC += ../spark-flashee-eeprom/firmware/flashee-eeprom.cpp
    CPPSRC += ../spark-flashee-eeprom/firmware/ff.cpp

to core-firmware/src/build.mk and it should compile, and pulling updates is then super easy.

I hope that helps - let me know how you get on! If it works for you, I'll update the docs!


@mdma I just added INCLUDE_DIRS += inc/flashee-eeprom to src/build.mk and it works just fine now!

In your documentation you already ask people who are building locally to update the build.mk, so maybe you could mention adding this additional line in that part of the documentation and it should be all good :smile:


It appears that the corruption is reduced, but I'm still getting corrupted / old data. I'll be working on this all day so I'll keep track of anything I discover. :/


Thanks for the feedback - I'm interested in anything you can find.

I've created many tests for this, which are all passing (along with some older tests that had previously been failing inexplicably), so I'm hoping this is fixed.

It's a good idea to completely erase the storage via device->eraseAll(), otherwise it's possible you will still see corrupted data from what has been left over by the previous version of the library. You should then not see any corruption or old data after that.


Yes, I had that same thought. I tried it with my createAddressErase portion, and that has been working just fine (well, one strange thing, but it may be my fault). The createWearLevelErase portion seems to be having issues, and definitely wasn't erasing when I called eraseAll.

As a test, I'm writing 10 'stories', each given a page of space (4096 bytes), though only the first 435 bytes are used for the test. I run eraseAll, then read back the first 435 bytes of the first ten pages, and they have been properly erased. I write the ten stories and read them back to make sure the data is sound. Then I power cycle, and read out the data of each of those ten pages.

In the first page I'm seeing the data written to the second page. In the second page, I'm seeing REALLY old data, from numerous eraseAlls ago - from before I allocated the entire flash as createAddressErase and ran eraseAll, before going back to two separate flash spaces. From the third page on, I'm seeing the proper data I wrote, but offset one page earlier. The final page contains the data that I wrote to the first page.

In each set of data I write I include numbers so I can tell which is which, starting with '0', ending with '9'. To write and read, I'm starting with an offset of zero, and increasing the offset by a single page size with each loop, reading and printing the first 435 bytes.

So yeah... still something weird. Erase isn't working for me. To be clear, I'm calling eraseAll() on the flash spaces I allocate. If I should be calling some sort of universal erase instead, I wasn't aware of it.


Are you compiling locally or with the online IDE? If local, please check you have pulled from all 3 repos, since there are fixes in the firmware concerning external flash access. If you're compiling against the spark compile farm then you already have the latest code.


Locally, and double, triple, extra checked. I'm linking against your repo with all the latest. I suppose I could try with a fresh spark and see how it does?


Ok, the good news:
Using a new Spark Core, I'm not seeing any issues. The two strange items I saw (a blip in my metadata, and the weird offset in where my story data ended up) were both the result of a very silly bug: a struct breaks my metadata up into bits for flags, and after adding a new flag I forgot to reduce the reserved bits by 1, so I had a uint8_t with 9 bits. :smile:

My best guess is the old data in flash is actually KEEPING it from properly erasing...does this seem plausible?

What's the easiest way to just WIPE OUT all the flash memory?

I'll let you know if I discover any more issues after having started with a fresh core and blank memory. Thanks for the great support!


Ah, I just had a thought - to really erase all memory, use

    Devices::userFlash().eraseAll();
This wipes the flash at the lowest level. What I told you before (device->eraseAll()) wipes the flash at whatever level your device is working at (e.g. addressErase or wear levelling), so corrupt metadata could still be interfering. That's not the case with userFlash(), which is simply direct access to the user region of external flash.


Excellent. I'll test that out with the other Core and see if that gets it back to normal! Thanks again for the support. Things are looking good!


@mdma I'm maintaining a linked list in the Flash memory and I'm still testing my logic. During this test phase I have to reset some of my memory bytes quite often to some default value so that I can start fresh.

What's the best default value to choose in terms of endurance? 0 or 255 or something else?

Normally I would choose 0, but I've read that in order to change a 0 bit back to a 1 bit in flash, the memory sector must be erased. In this case, would 255 be better?


Yes, 255 is best, since then you can change that value without needing to erase the sector first. If you set it to 0, then you'd have to perform an erase - at least with the wear levelling scheme.

With addressErase, it lets you erase in place 7 times before the underlying flash memory is physically erased. By writing 0 as default in addressErase, you'd be using up one of those erases from the outset, whereas writing 255, you can then write any other value without needing an erase.

So yes, using 255 as the default is better all round.

I was thinking about adding a symbol to the library, DEFAULT_VALUE = 0xFF, so that users could know the most appropriate default value for each memory device.


I just had an idea. Imagine you want to store 20K worth of data in a string or vector, like this:

    String s;
    for (int i=0; i<5000; i++) {
        s += "Hello World";
        s += i;
    }

    std::vector<int> vector;
    for (int i=0; i<20000; i++) {
        vector.push_back(i);
    }
This would fail on the current spark since there isn't enough memory.

However, with flashee, it's possible to provide implementations of these containers using external memory. To the programmer this would look just like a normal vector or string - the code above would be almost identical, but behind the scenes, the new implementation is taking care of storing the data in external memory.

With the external flash device and @kennethlimcp's FRAM/microSD shield, this would open up the possibility of having seamless access to megabytes or even gigabytes of memory on the spark!!! And without requiring any hardware changes to the spark!

I think this is a pretty cool idea! Does anyone else think so? Is it worth building?



Have you tried this before?! The Flexible Static Memory Controller.

It would be cool to have these pins available to the user in future versions of the hardware :wink:



Thanks for the clarification!

As for flash memory, I've used something similar with Arduino once before: http://arduino.cc/en/Reference/PROGMEM


Hi @Nitred!

Just to clarify, the PROGMEM on the arduino isn't the same thing as what I'm proposing.
In fact, there is already a PROGMEM equivalent on the spark - you simply declare your data const and it will be stored in internal flash, with your program.

There are some downsides with this:

  1. the data has to be available at compile time
  2. the data is read only
  3. there's not much free storage available in program flash (10-20k max.)

What I'm proposing is taking the external memory (external flash, microSD, FRAM) on the spark which can be megabytes or gigabytes in size and making it available in code using familiar data structures like the STL. That way you get seamless access using familiar coding paradigms.

Also, switching between memory types later would be super-easy. Say, if Spark later introduces new hardware with lots of memory (say 256k or more of RAM), the code would need only one change and would continue to work as before. That's why I say this is seamless - you don't have to jump through hoops to make it happen! :smile:


Ahh, I get it. Seems pretty useful - and extremely difficult :smiley:

I can imagine the data stored in external memory will be slow to retrieve compared to SRAM. I hope you intend to have a custom datatype, or at least a custom prefix like ExtData uint8_t instead of a normal uint8_t, so that I get to choose which of my data loses out on performance.

Also, if there is a stack in the external memory that keeps track of which data is where, then one tiny memory fault there would throw everything off! :frowning: