Let's get (non)volatile

When you move to the CC3000 EEPROM, be careful attempting to reallocate user file 1. I wrote a small test harness to allocate/delete the user files so I could expand the 16 bytes already allocated in user file 1 to 384 bytes, and the 2nd user file to roughly 5 KB. This caused my core to stop connecting to the Spark cloud. I tried flashing the CC3000 patch, which just made things worse: now the core just flashes white on startup.

@gruvin, @mdma, noted your points, and for the time being I removed the conditional WIP code for the CC3000's EEPROM use. So the only option now is to use the internal flash as emulated EEPROM.
I have submitted the feature branch for merge request. Thanks!

1 Like

It would be nice if we could add the EEPROM functions to the official docs. I'm having a play with them atm, so I can write some notes as I go and write up an example.

Are there any details that should be added to the docs that might not be obvious? The docs need to cover EEPROM.read() and EEPROM.write().

uint8_t read(int);
void write(int, uint8_t);

Are there any definitive details on EEPROM write cycles? Any warnings about inappropriate usage? Is this still accurate: "Internal Flash Page size = 1KByte"? Should existing Arduino libraries work?

@Blake, you are SO right in stating there needs to be documentation on this. I will see what I can do :smile:

1 Like

@Blake, I added the documentation here :smile:

4 Likes

It would be nice to have the EEPROM class provide a size() method so we know how large the EEPROM is. (I always felt this was an omission from the Arduino: the amount of EEPROM varies from device to device, so not knowing the size makes it hard to write code that targets different Arduino MCUs.)

For the Spark, we could imagine different implementations of the static EEPROM class being selected at compile time in future, providing different sizes of EEPROM. E.g.
if someone makes the CC3000 EEPROM available, that will be about 5k, and an external flash-based EEPROM will be larger still.

@mdma, as you may know, at this time the EEPROM (emulation) class only provides 100 bytes via internal flash. There are known timing issues between the CC3000 and the external flash, at least from the user-accessible aspects, so using the CC3000 or external flash as EEPROM is less than ideal. Some members have had success with external flash by injecting delays between operations. I am leaning towards using FRAM due to its speed, cost and ease of use. @kennethlimcp is finalizing a great FRAM/SD shield as we speak. :smile:

1 Like

Sorry for the hijack, but I know it's an important shield and @will is helping me to send the files to the fab house as we speak!

1 Like

@kennethlimcp, sorry for the shameless but necessary plug :blush:

So you're saying the external flash is essentially unusable to user code? If so, that's pretty unfortunate, since that was a motivating factor behind my buying my cores (all 5 of them.)

I realize the CC3000 and the external flash share the internal SPI channel, but I would have thought it was meant to be used by multiple resources, with the resulting contention managed appropriately?

The core firmware manages to write to flash successfully, so I'm curious what the issues are with doing this from user code. Any pointers appreciated about the specific problems and/or possible solutions, as I would like to look into this and find a resolution. Using the external flash to buffer data would be useful to me.

IMHO, it's overkill to have to buy another shield to get non-volatile storage, which is a pretty common feature for Arduino-compatible MCUs, especially when the storage is already there and it's likely a firmware issue that is preventing more widespread use.

Hi @mdma

I don't think the external FLASH is unusable. I think that the current problems can and will be fixed, but it requires quite a bit of change to the interrupt-driven TI CC3000 driver. Right now the WiFi chip has full priority, but the FLASH code has not been written to be interruptible.

I had good luck with putting delay(20) before and after every FLASH call, and I ran continuous tests overnight. I think it is wise to be cautious, but I think it can work.

Thanks for the quick response, bko. It's good that you feel it can be made to work!

Wow! I'm astounded a delay worked; I was expecting you to say disabling interrupts so that the sFLASH calls were atomic was the solution. Do you have any insight as to why the delay worked? And do you think it would be a robust solution? (Probably not?)

I agree it's wise to be cautious; it takes many pieces that all need to be in the right place and sequence for systems like this to work properly!

I've got some spare time, so I will code up the wear-leveling flash driver that I've been meaning to write for a while now. It's sufficiently well abstracted from the lower-level hardware that it should work with the external flash or any other memory devices we choose to use. (But I guess you don't really need wear leveling on FRAM!)

1 Like

My working theory is that the delay() which allows the SPARK_WLAN_Loop() to run gets a lot of the TI CC3000 service out of the way and lowers the probability of collisions. I think it is a band-aid, not a fix, but it did work OK for me.

Disabling interrupts around the FLASH access just makes the watchdog timer for the cloud connection go off eventually and is also not a solution.

I don't know the Spark firmware well, but that sounds to me like an issue with the watchdog timer. In principle, atomic code should be able to disable interrupts for short periods.

Yes... what @bko said :wink: What I was trying to communicate was that, as it stands, the operation of external flash is less than optimal. Because of the shared SPI and the fact that the CC3000 uses interrupts, I get the feeling that "snatching" SPI cycles from the CC3000 causes unexpected latencies. From what I can see, data is piped from/to the CC3000 via DMA. I don't know if the SPI code is written to handle ad-hoc SPI requests for the external flash during DMA bursts. As @bko pointed out, it may be partially an issue of interruptibility but also SPI management.

Hi,

I'm writing SFS, a 'silly|small file system', for the flash memory. It's almost ready, but now I want to upload test data with dfu-util. Is this the correct syntax:
dfu-util -d 1d50:607f -a 0 -s 0x090000:leave -D C:\spark\projects\index.htm
to upload index.htm to address 0x90000?
And what does 'leave' mean?

0x90000 is not the correct starting address for user firmware, and the :leave suffix simply tells the :spark: core to restart (leave DFU mode) after the upload is done.

You can leave it out to try and see what happens :slight_smile:

Thanks kenneth,
@zach wrote:
Another "just in case you didn't know": you can use dfu-util to read and write directly between your computer and the external flash chip on the Core. That's how we program keys and factory reset firmware onto external flash during manufacturing.

I want to find out what the dfu-util command line looks like to achieve the following:
load a text file to external flash starting at address 0x90000.

Suggestions will be highly appreciated!

The syntax you used is correct, but you are writing to a space already used by the core.

You can see the memory allocation for the external flash here:

http://docs.spark.io/hardware/#memory-mapping-external-flash-memory-map

UPDATE

Hmm... you're uploading a .htm file?

For example, writing to the cloud public key memory space on external flash is:

dfu-util -d 1d50:607f -a 1 -s 0x00001000 -v -D cloud_public.der

Notice the -a 1 flag: it selects the external flash, while -a 0 selects the STM32 internal flash.

:slight_smile:

One tip: you need to write an even number of bytes, like in this post:

1 Like