I am looking into what to do if Particle.publish fails to send to the server: I want to save that message into something like a text file on an SD card.
If I manage to get the connection back, I want to send the contents of that text file to my server with publish.
The problem I am having now is how to read one line at a time from the text file and then delete that line. The file looks like this:
2016-01-01 12:01:00; 1337
2016-01-01 12:03:00; 1238
2016-01-01 12:05:00; 1349
Or is there a better way to handle this failure case?
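One straightforward way to "read one line and delete it" is to pop the first line by rewriting the rest of the file to a temporary file and renaming it back. The sketch below shows that pattern with standard C++ `<fstream>` so the logic is easy to test on a desktop; on the actual device you would do the same thing with the SD library's `File` API, and the file name is made up:

```cpp
// Sketch: pop the first line of a pending-publish queue file.
// The rewrite-to-temp-file pattern is the same with an SD File API.
#include <cstdio>
#include <fstream>
#include <string>

// Reads the first line of `path` into `out` and rewrites the file
// without it. Returns false if the file is missing or empty.
bool popFirstLine(const std::string& path, std::string& out) {
    std::ifstream in(path);
    if (!in || !std::getline(in, out)) return false;

    // Copy every remaining line to a temporary file.
    const std::string tmpPath = path + ".tmp";
    std::ofstream tmp(tmpPath);
    std::string line;
    while (std::getline(in, line)) tmp << line << '\n';
    in.close();
    tmp.close();

    // Replace the original file with the shortened copy.
    std::remove(path.c_str());
    std::rename(tmpPath.c_str(), path.c_str());
    return true;
}
```

In `loop()` you would then, whenever the connection is up, call `popFirstLine` and hand the line to `Particle.publish`; if the publish fails you can re-append the line instead of losing it. Note the full rewrite on every pop is wasteful on flash media, which is why the flag-per-line idea below in this thread avoids it.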
I worked on a similar mechanism. In the end I opted to use multiple files instead of a single file; each file contains one JSON-formatted line.
What if I have 50,000 files on the SD card?
I’d assume that you eventually intend to have all data from that file published, allowing you to delete/truncate the file.
I also assume that one publishing turn would “get rid” of more lines than would be added during that time.
Consequently I’d expect the file to be worked off in a finite amount of turns.
Having said this, how about not actually removing the published lines, but only marking them as processed?
On the next turn you can quickly skip all the already-processed lines and just process some more until you are done.
Only then actually remove all the data.
This would only require appending new lines and flipping e.g. a flag per line.
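The flag-per-line idea above could look roughly like this: each record starts with a one-character status flag ('N' = new, 'P' = published), and marking a line processed is a single in-place byte write, so nothing needs rewriting until the whole file is done. This is a sketch with standard C++ `<fstream>` (function names and the flag characters are my own invention; on the device you would use the SD library's seek/write equivalents):

```cpp
// Sketch: append-only log with a per-line "published" flag.
#include <cstdio>
#include <fstream>
#include <string>

// Appends a new, not-yet-published record, prefixed with flag 'N'.
void appendRecord(const std::string& path, const std::string& record) {
    std::ofstream f(path, std::ios::app);
    f << "N " << record << '\n';
}

// Finds the next line flagged 'N', returns its payload in `out`,
// and flips the flag to 'P' in place. Returns false when nothing
// is pending.
bool takeNextPending(const std::string& path, std::string& out) {
    std::fstream f(path, std::ios::in | std::ios::out);
    std::string line;
    while (f) {
        std::streampos lineStart = f.tellg();
        if (!std::getline(f, line)) break;
        if (line.size() > 2 && line[0] == 'N') {
            out = line.substr(2);  // drop the "N " prefix
            f.seekp(lineStart);
            f.put('P');            // single-byte in-place update
            return true;
        }
    }
    return false;
}
```

Once `takeNextPending` returns false and everything has been published, the whole file can simply be deleted. Skipping processed lines still means scanning them, but a scan is far cheaper than rewriting the file for every published line.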
In my case it is a maximum of 100 or 150 files per day, and they are deleted once sent. I use a FAT32-formatted SD card; with this file system the maximum number of files is 65,534 per folder and 268,435,437 in total. Not the best solution, but it works. If some interesting solution comes out of this thread, I will be happy.
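For completeness, the one-file-per-message approach described above could be sketched like this (using C++17 `std::filesystem` in place of the SD library, with a made-up naming scheme and a publish stub, so the directory names and helpers here are assumptions, not the poster's actual code):

```cpp
// Sketch: queue each reading as its own file; delete files once sent.
#include <algorithm>
#include <filesystem>
#include <fstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Queues one JSON-formatted reading as its own sequentially named file.
void queueReading(const fs::path& dir, unsigned long seq,
                  const std::string& json) {
    fs::create_directories(dir);
    std::ofstream f(dir / ("reading-" + std::to_string(seq) + ".json"));
    f << json << '\n';
}

// Tries to publish every queued file in name order, deleting each one
// the publish callback reports as sent. Returns how many were sent.
size_t flushQueue(const fs::path& dir, bool (*publish)(const std::string&)) {
    std::vector<fs::path> files;
    for (const auto& e : fs::directory_iterator(dir))
        files.push_back(e.path());
    std::sort(files.begin(), files.end());

    size_t sent = 0;
    for (const auto& p : files) {
        std::ifstream f(p);
        std::string json;
        std::getline(f, json);
        f.close();
        if (!publish(json)) break;  // connection lost again: keep the file
        fs::remove(p);
        ++sent;
    }
    return sent;
}
```

The appeal of this layout is that "delete one message" is just a file delete, with no rewriting; the cost is directory overhead and the FAT32 per-folder limit mentioned above if the backlog grows unboundedly.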