Little FS, Wear levelling and SequentialFileRK / PublishQueuePosix

This is my first time posting to this forum so very grateful for any help / advice you can give for a datalogging application:

Suppose we want to save the last say 2 weeks of data to the File System (for periodic serial download / data backup) ~= 4000 samples, each less than 250 bytes in length.
[I also plan to maintain a queue of maxLength 400 samples for (synchronously controlled) cloud publishing, but I don’t think this is especially relevant to this query]
I am wondering how much my design choices might impact a) flash wear and b) if relevant, power consumption – or whether I can rely on Little FS to optimize it all regardless.

I’ve read some of the prior posts such as:
https://community.particle.io/t/boron-2g-3g-little-fs-4mb-flash-is-unable-to-store-data/57845/8
https://community.particle.io/t/using-littlefs-with-publishqueueasyncrk/59627/9
but still have some questions. Could anyone help educate me re the following?

  1. Is there any benefit to storing each sample in a separate file (similar to the approach used in SequentialFileRK and PublishQueuePosix) versus storing each sample in a large single file / circular buffer?
  2. SequentialFileRK appears to number and increment filenames infinitely(?) Is there any disadvantage to re-using or overwriting old filenames, e.g. a circular buffer of filenames?
  3. In general, does repeatedly writing fresh data to one specific filename increase the wear on one particular flash sector?
  4. In general, does repeatedly writing fresh data to one specific location within a given file increase the wear on one particular flash sector?
  5. Is there any wear associated with renaming files if otherwise leaving their contents unchanged?
  6. Sectors are 512 bytes long. Is the wear associated with overwriting 1 byte the same as overwriting 512 bytes?
  7. Is there any benefit to saving more data? If instead of saving 4000 samples (either in a single circular buffer file or in 4000 separate files) we only saved the last 40 samples, then these specific files (or specific locations within a single file) would be written to 100 times more often. Would this cause more wear or does Little FS take care of leveling this across all as-yet-unused sectors?

Many thanks.
  1. Is there any benefit to storing each sample in a separate file (similar to the approach used in SequentialFileRK and PublishQueuePosix) versus storing each sample in a large single file / circular buffer?

Separate files tend to make it easier to get rid of old data, because there is no efficient way to remove data from the beginning of a file. You could implement a circular buffer within a single file, but it’s difficult to do that atomically without corrupting the file if the device resets in the middle of a write.
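To make the separate-file approach concrete, here is a minimal sketch. The names makeSamplePath and writeSample are made up for illustration, and it assumes a POSIX-style open/write/close API like the one Device OS exposes on top of LittleFS:

```cpp
#include <cstdio>
#include <string>
#include <fcntl.h>
#include <unistd.h>

// Build a zero-padded, fixed-width filename so lexical order matches
// numeric order (the same idea SequentialFileRK uses for its file numbers).
std::string makeSamplePath(const std::string &dir, int seq) {
    char name[16];
    snprintf(name, sizeof(name), "%08d", seq);
    return dir + "/" + name;
}

// One sample per file: the sample is written whole and the descriptor is
// closed immediately, so a reset mid-write can at worst lose the newest
// file, never corrupt the older ones.
bool writeSample(const std::string &path, const std::string &data) {
    int fd = open(path.c_str(), O_WRONLY | O_CREAT | O_TRUNC, 0666);
    if (fd < 0) return false;
    ssize_t n = write(fd, data.c_str(), data.size());
    close(fd);
    return n == (ssize_t)data.size();
}
```

Deleting the oldest sample is then just an unlink() of the lowest-numbered file, which is the cheap "remove the beginning" operation that a single big file cannot offer.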

  2. SequentialFileRK appears to number and increment filenames infinitely(?) Is there any disadvantage to re-using or overwriting old filenames, e.g. a circular buffer of filenames?

The file number is 8 digits, so even if you’re writing a lot of files you will probably never run out of numbers. The problem with reusing numbers is keeping the files in order once the numbering wraps around. It would probably be easier to stick with unique, ever-increasing 8-digit numbers and never reuse them.

  3. In general, does repeatedly writing fresh data to one specific filename increase the wear on one particular flash sector?

No effect on wear.

  4. In general, does repeatedly writing fresh data to one specific location within a given file increase the wear on one particular flash sector?

No effect on wear.

  5. Is there any wear associated with renaming files if otherwise leaving their contents unchanged?

Renaming will change a sector in the directory, so that’s one sector write.

  6. Sectors are 512 bytes long. Is the wear associated with overwriting 1 byte the same as overwriting 512 bytes?

Yes. Overwriting any single byte within a sector will generally cause the entire sector to be rewritten in a new location, so the wear is the same.

It’s actually a little more complicated than that, because NOR flash can convert a 1 bit into a 0 bit without erasing the entire sector, so some changes can be made in-place. Those changes don’t affect the wear on the sector, only the erase cycle does.
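That 1 → 0 behavior can be pictured with a toy model (this models the general NOR-flash rule, not any specific Particle part): programming can only clear bits, and only the erase that restores them to 1 consumes a wear cycle.

```cpp
#include <cstdint>

// Toy model of a NOR flash sector: programming can only clear bits
// (1 -> 0), so it behaves like a bitwise AND and can be done in place.
// Returning any bit to 1 requires erasing the whole sector back to 0xFF,
// and it is the erase, not the program, that consumes a wear cycle.
struct ToySector {
    uint8_t byte = 0xFF;   // erased state: all bits 1
    int eraseCycles = 0;   // wear counter

    void program(uint8_t value) { byte &= value; } // in-place, no wear
    void erase() { byte = 0xFF; eraseCycles++; }   // wear happens here
};
```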

  7. Is there any benefit to saving more data? If instead of saving 4000 samples (either in a single circular buffer file or in 4000 separate files) we only saved the last 40 samples, then these specific files (or specific locations within a single file) would be written to 100 times more often. Would this cause more wear or does Little FS take care of leveling this across all as-yet-unused sectors?

Every time a sector is rewritten it’s wear-leveled to an unused portion of the file system, so it won’t make a difference.
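As a rough back-of-envelope check of why this tends not to matter — every number here is an assumption for illustration (free-block count, endurance rating, and blocks touched per write vary by device and are not Particle specs):

```cpp
// Rough flash-lifetime estimate under assumed numbers:
// - samplesPerDay:        how many samples are written per day
// - blockWritesPerSample: erase blocks rewritten per sample (data + metadata)
// - freeBlocks:           blocks available for wear leveling to rotate over
// - enduranceCycles:      rated erase cycles per block
double yearsToWearOut(double samplesPerDay, double blockWritesPerSample,
                      double freeBlocks, double enduranceCycles) {
    double cyclesPerBlockPerDay =
        samplesPerDay * blockWritesPerSample / freeBlocks;
    return enduranceCycles / cyclesPerBlockPerDay / 365.0;
}
```

With, say, 1440 samples/day (one per minute), ~2 block rewrites per sample, 400 free blocks, and 100,000-cycle endurance, this works out to decades. The write rate dominates, not whether the writes target 40 or 4000 logical files.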

One thing to beware of: LittleFS only wear-levels over free sectors. It doesn’t move unchanged sectors, so if your file system is mostly full of static data you can wear out the subset of sectors holding the changing data. It doesn’t sound like you’ll be in that situation, but it can occur if you use a single file system for a large number of static resources along with a smaller amount of constantly changing data.


Wow! Thanks for such a speedy and helpful response :grinning:!

Re Question. #2:

I was thinking that every time I would write/overwrite a file I would increment the file number according to:

            fileNum++;  if (fileNum>=MAX_FILES) fileNum=0;

That way I can ensure I don’t accidentally fill the file system (I’m less worried about running out of filenames than about running out of file space). The files are still ordered, though I might want to keep track of the most recently written or oldest file to know where the sequence starts / ends. Does that sound OK, or am I missing something?
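One way to recover where the wrapped sequence starts after a reboot is to scan the directory, collect the file numbers that exist, and take the number that follows the largest circular gap. findOldest is a hypothetical helper, and it assumes the on-disk numbers form one contiguous run modulo MAX_FILES:

```cpp
#include <algorithm>
#include <vector>

// Given the file numbers currently on disk (assumed to be one contiguous
// run modulo maxFiles), return the logical start (oldest) of the wrapped
// sequence: the number sitting just after the largest circular gap.
int findOldest(std::vector<int> nums, int maxFiles) {
    std::sort(nums.begin(), nums.end());
    size_t n = nums.size();
    size_t startIdx = 0;
    int bestGap = -1;
    for (size_t i = 0; i < n; i++) {
        int prev = nums[(i + n - 1) % n];               // circular predecessor
        int gap = (nums[i] - prev + maxFiles) % maxFiles; // wrap-aware distance
        if (gap > bestGap) { bestGap = gap; startIdx = i; }
    }
    return nums[startIdx];
}
```

For example, with MAX_FILES = 4000 and files {3997, 3998, 3999, 0, 1, 2} on disk, the largest gap sits between 2 and 3997, so the oldest file is 3997. If every slot is occupied the start is ambiguous, which is one more argument for tracking the head of the sequence separately.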

Now that I think about it some more, I picked 99999999 (an 8-digit filename) because that’s a really large number. You could write a new file every 3 seconds for nearly 10 years before running out of numbers.

That also makes it really easy to delete the lowest numbered files to free up space when you run low.

All went well for a couple of days, but then… Major problems and strange behaviors!
I implemented my code using a fileNum that recycles back to zero after a max limit of 4000 (overwriting previous files) and that resets to zero on power cycling. I still haven’t reached anywhere near 4000 files, but after power cycling the overwriting of previous files seemed to work fine.

Then after successfully writing about 450 files, it stopped being able to write files. Worse, I could no longer reprogram my device even though it was online, the status LEDs indicated flashing in progress, and a CLI flash reported success. I tried in Safe mode and in Listening mode (using particle flash --serial). All indicated success, but the new program was not uploaded to the device. particle doctor failed with “Error writing firmware: file does not exist and no known app found”, and particle update failed with “An error occurred while attempting to update the system firmware of your device: File too short for DFU suffix”. Finally, I was able to reprogram my device using particle flash --usb while in DFU mode, but I still can’t reprogram it any other way :frowning:

Getting back to my original code, I was worried I had somehow filled the file system completely (even though it’s limited to 4000 files max, of which only ~450 had been used, and each file is less than 250 bytes). I inserted a few more diagnostic messages:

		int DataStore::put(String dataString) {
			int returnValue = -1;
			String path = fileRoot + String(fileNum);
			// O_CREAT requires a mode argument; without it the mode is undefined
			int fd = open(path.c_str(), O_RDWR | O_CREAT | O_TRUNC, 0666);
			if (fd != -1) {
				Log.info("writing to file: " + path);
				returnValue = write(fd, dataString.c_str(), dataString.length()+1); // add 1 for the null character
				int writeErrno = errno; // capture errno before close() can change it
				close(fd);
				if (returnValue >= 0) {
					Log.info("#bytes successfully written = " + String(returnValue));
					returnValue = fileNum;
					fileNum++;
					if (fileNum >= MAX_FILES) fileNum = 0;
				} else if (writeErrno == ENOSPC) {
					Log.error("file system full!");
				} else if (writeErrno == EBADF) {
					Log.error("bad file handle");
				} else {
					Log.error("file write error " + String(writeErrno));
				}
			} else {
				Log.error("failed to open file " + path);
				Log.error("file open error " + String(errno));
			}
			return returnValue;
		}

The result shows that the file opens successfully, but the write() call fails with errno = 0, which does not appear to be a listed error code. My program runs with SYSTEM_THREAD(ENABLED), but I didn’t really know how to take precautions to make my code thread safe (some of my other variables are in retained memory, which might also be touching the flash), so now I wonder if there’s some persistent lock on the file system that is causing problems.
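If the worry is concurrent access, one defensive step is to funnel every file-system call through a single lock, so a write can never interleave with another thread’s open/write/close. A sketch using std::mutex (I believe Device OS also provides its own Mutex wiring class; putLocked and its body here are placeholders, not the actual DataStore code):

```cpp
#include <mutex>
#include <string>

// One process-wide lock serializing all file-system access.
static std::mutex fsMutex;

int putLocked(const std::string &data) {
    // lock_guard acquires the mutex here and releases it automatically
    // on every return path, including early returns and exceptions.
    std::lock_guard<std::mutex> guard(fsMutex);
    // ... open / write / close as in DataStore::put() would go here ...
    return (int)data.size(); // placeholder return for this sketch
}
```

The same lock would also have to wrap any other code that touches the file system (e.g. a download routine reading the stored samples) for the serialization to hold.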

At this point I’m a bit stuck on a) identifying the cause of my program failure, and b) regaining full flash functionality for my device (particle update and particle doctor still fail). Any thoughts / comments / advice would be much appreciated!
