Opening a new thread since CloudConfigRK is closed...
First, thanks to @rickkas7 for this really fabulous library! The integration with Google Sheets is very elegant too!
A few questions:
If my full/initial config JSON string is {"a":123,"b":"testing","c":true}, then:
What happens if a later update sends fewer parameters, e.g. just {"a":123}?
Does just a get updated? [From my experiments I think the answer is yes; b and c persist in RAM but are not stored]
Are b and c still safe in storage and accessible using, say, CloudConfig::instance().getString("b")? [From my experiments I think the answer is no: they are no longer in storage, so they are not accessible and not restored on reboot; see the sketch below]
If 'b' and 'c' are still safe in storage, is there any way to get rid of them if you wanted to? [From my experiments, it looks like b and c are no longer in storage so this question is moot]
What happens if a later update sends more parameters, e.g. {"a":123,"b":"testing","c":true,"d":"newParam"}?
Is d just appended to the stored data? [From my experiments I think the answer is yes, no problem]
Just for curiosity, what is being stored? Is it the JSON string itself?
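For anyone who wants to reproduce this, something like the following minimal sketch is what I've been using (the storage/update wiring follows the library README examples; getInt() and getBool() are my assumption, mirroring the getString() accessor above):

```cpp
#include "CloudConfigRK.h"

SYSTEM_THREAD(ENABLED);

SerialLogHandler logHandler;

void setup() {
    // Storage/update wiring as in the library README examples:
    // EEPROM storage, config pushed down via a subscription event
    CloudConfig::instance()
        .withUpdateMethod(new CloudConfigUpdateSubscription("config"))
        .withStorageMethod(new CloudConfigStorageEEPROM<256>(0))
        .setup();
}

void loop() {
    CloudConfig::instance().loop();

    // Print the three values every 10 seconds. After publishing the
    // partial update {"a":123}, b and c read back as defaults for me,
    // and they do not come back after a reboot.
    static unsigned long lastCheck = 0;
    if (millis() - lastCheck >= 10000) {
        lastCheck = millis();
        Log.info("a=%d b=%s c=%d",
            CloudConfig::instance().getInt("a"),
            CloudConfig::instance().getString("b"),
            (int) CloudConfig::instance().getBool("c"));
    }
}
```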
My use case is I have some parameters that get changed often, but others only rarely, so I don't want to be forced to update all of them every time, especially since the rarely updated ones are lengthy.
Possible approaches/solutions for comment:
How hard would it be to add a flag to the storage method (or a field to the JSON string itself)? If true (the default), it would behave as it seems to now, overwriting the data storage with only the parameters just sent and implicitly deleting any others in storage; if false, it would overwrite the parameters just sent while leaving the rest of the storage intact. [I tried looking at the code, but I can't see where the file reading/writing actually happens]
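To make the idea concrete, here is a rough sketch of the kind of merge I mean, written against the Device OS JSON API. mergeConfig() is hypothetical, not something in the library, and it only handles flat configs with bool/number/string values like my example above:

```cpp
#include "Particle.h"

// Hypothetical merge step: keep every stored key the incoming partial
// update does not mention, then apply everything the update does send.

static bool objectHasKey(const JSONValue &obj, const char *key) {
    JSONObjectIterator iter(obj);
    while (iter.next()) {
        if (iter.name() == key) {
            return true;
        }
    }
    return false;
}

static void writeScalar(JSONWriter &w, const JSONValue &v) {
    if (v.isBool()) {
        w.value(v.toBool());
    }
    else if (v.isNumber()) {
        w.value(v.toDouble());
    }
    else if (v.isString()) {
        w.value((const char *) v.toString());
    }
    else {
        w.nullValue(); // nested objects/arrays not handled in this sketch
    }
}

bool mergeConfig(const char *stored, const char *incoming, char *out, size_t outSize) {
    JSONValue storedObj = JSONValue::parseCopy(stored);
    JSONValue incomingObj = JSONValue::parseCopy(incoming);
    if (!storedObj.isObject() || !incomingObj.isObject()) {
        return false;
    }

    memset(out, 0, outSize);
    JSONBufferWriter writer(out, outSize - 1);
    writer.beginObject();

    // Keep stored keys that the update didn't touch
    JSONObjectIterator storedIter(storedObj);
    while (storedIter.next()) {
        const char *key = (const char *) storedIter.name();
        if (!objectHasKey(incomingObj, key)) {
            writer.name(key);
            writeScalar(writer, storedIter.value());
        }
    }

    // Then overwrite/append everything from the incoming partial update
    JSONObjectIterator incomingIter(incomingObj);
    while (incomingIter.next()) {
        writer.name((const char *) incomingIter.name());
        writeScalar(writer, incomingIter.value());
    }

    writer.endObject();
    return writer.dataSize() <= writer.bufferSize(); // false if truncated
}
```

The same merge could of course also be done cloud-side before sending, which would avoid touching the library at all.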
I had wondered about declaring a new instance of this tool for my different update types, or separate storage files, but I see that CloudConfig is a single-instance object and only one storage method and one update method are allowed.
Is there any other way to use this library to manage two separate sets of config data, one 'oftenChanged' and one 'rarelyChanged'?
Interesting use case... what's the constraint or the problem you are trying to solve for? Are you exceeding MB of data usage? Also, what's your definition of rarely vs. often? Are we talking minutes, hours, or days for each? What's the relative JSON size in number of characters for the rarely vs. often config data?
I personally do something similar, but I currently do not use this library. In my approach, I decided it's way easier to just send the entire JSON object every time any member changes, rather than sending only the members that changed. Every send is a data operation either way, and in the vast majority of cases the limit you hit first is total data operations rather than MB, so for me a partial message doesn't add any value and just brings added complexity. It was a keep-it-simple approach. Unless you are running into some constraint, I'd recommend the same.
a) I guess the fundamental constraint is the 1024-character limit, since 'rarely' + 'often' together exceed it. A partial-config capability would allow configuration in 'chunks', which would also reflect the logical structure of the system.
b) 'Rarely' also requires invoking some actions beyond just setting variables, but I guess if I send all the config data every time as you suggest, I could also send a flag indicating whether these actions need to be performed.
c) [I haven't thought about this much], but 'rarely' might have different permissions from 'often', so that only the owner could perform 'rarely' updates.
I'm still intrigued by what your definition of often vs. rarely is. How frequent are we talking: every 5 minutes, once an hour, every 12 hours, once a day? Just a ballpark would help. The approach might be different for every 5 minutes vs. once a day.
The 1024 limit might be a reason to consider separating it out. You could chunk it up into 2 and then join it back together, but for a frequently updating config, even once an hour, you likely don't want to use 2 data operations every single time. This all depends on what 'often' means to you, though.
If it was me, I'd still sharpen the pencil a bit on what you actually need to send and structure the JSON to limit the characters used. For example, use a single character for each JSON key, similar to your example. Just to keep it simple.
Maybe you use 2 approaches: one using this library for the infrequent config, and then a simple Particle.function() to send down the frequently changing data. Do they both require retention of the data (i.e. writing it to flash)?
I personally use 2 (actually 3) different Particle functions:
Update Normal Configuration (maybe used 6-8 times a day on the high side)
Update a slow-changing dimension (mostly static information that only changes with new devices)
Update a slow-changing dimension that is greater than 1024 chars (same as above, but the JSON is chunked up and then re-assembled)
In all the scenarios, the JSON is stored in the flash file system as a string; in setup() the device reads the string back, parses it out, and has the necessary config without having to connect to the cloud first.
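The shape of it is roughly this; a simplified sketch rather than my actual code, using the POSIX file-system API on Gen 3 devices, with an arbitrary path and function name:

```cpp
#include "Particle.h"
#include <fcntl.h>

const char * const CONFIG_PATH = "/config.json"; // arbitrary path

String configJson; // last known config, parsed as needed

// Particle.function handler: persist the JSON to flash, then apply it
int updateConfig(String data) {
    int fd = open(CONFIG_PATH, O_RDWR | O_CREAT | O_TRUNC);
    if (fd == -1) {
        return -1;
    }
    write(fd, data.c_str(), data.length());
    close(fd);

    configJson = data;
    // ...parse configJson with JSONValue::parseCopy() and apply it here
    return 0;
}

// Read the stored config back at boot, before any cloud connection
void loadConfig() {
    int fd = open(CONFIG_PATH, O_RDONLY);
    if (fd == -1) {
        return; // nothing stored yet
    }
    char buf[1025] = {0}; // function argument limit is 1024 chars
    int n = read(fd, buf, sizeof(buf) - 1);
    close(fd);
    if (n > 0) {
        configJson = buf;
    }
}

void setup() {
    loadConfig();
    Particle.function("updateConfig", updateConfig);
}

void loop() {
}
```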
Interested to hear what Rick or others from the community think as well.
Your assumptions are correct, and the library was not intended to be used with partial updates. It's unlikely that the library will be updated with new features, because the upcoming Ledger feature in Device OS does the same thing, but much better.
Ledger does support changing specific keys, a larger data size (up to 16 KB of JSON data), bidirectional synchronization (cloud-to-device or device-to-cloud), value-change notifications both in the cloud and on-device, and more.
Thanks for your thoughts @jgskarda. I think my case is very similar to yours:
'rarely' = on deployment / redeployment
'often' = device management (maybe 6-8 times a day on high side)
Both cases require retention of data.
Yes, data compression/key-shortening is one way to go to fit within 1024, but if I still go over, then I think I'll have to follow your approach, writing my own code to chunk, reassemble, and store the data, or wait for the upcoming Ledger feature that I was not aware of till now...
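If I do end up chunking, I'm picturing something like this hypothetical 'index/total:payload' scheme; the names are made up and there's no retry or timeout handling:

```cpp
#include "Particle.h"

String assembled; // chunks accumulated here until the set is complete

// Each call carries one piece, e.g. "1/3:{\"a\":123," ... "3/3:...}".
// Each piece must fit within the 1024-char function argument limit.
int configChunk(String data) {
    int slash = data.indexOf('/');
    int colon = data.indexOf(':');
    if (slash < 0 || colon < 0 || colon < slash) {
        return -1; // malformed prefix
    }
    int index = data.substring(0, slash).toInt();
    int total = data.substring(slash + 1, colon).toInt();

    if (index == 1) {
        assembled = ""; // first chunk starts a new message
    }
    assembled += data.substring(colon + 1);

    if (index == total) {
        // Complete: parse with JSONValue::parseCopy(assembled),
        // apply it, and write the string to storage
        return assembled.length();
    }
    return 0; // waiting for more chunks
}

void setup() {
    Particle.function("configChunk", configChunk);
}

void loop() {
}
```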
Many thanks! I was not aware of the upcoming Ledger feature which sounds really great! I've now registered for the Beta. Do you have any idea when it might be available?