I have 8 Cores / Photons in my home, all sharing data and communicating via Particle events. This gave me a problem: when many near-concurrent events arrive, a Core / Photon may miss events or try to handle (interrupt) an event that is already being processed. So I developed a circular event queue; the aim is to store inbound events as quickly as possible (with as little code as possible) and then act on them later without clashes.
Your thoughts and optimisations are greatly appreciated (especially as I have not coded C++ for many years).
// Buffer Code - updated 14/12/15
const int event_buffer_size = 10; // use to make buffer smaller or larger
const int event_size = 63;
const int data_size = 255;
struct event_buffer_item {
char event[event_size];
char data[data_size];
} event_buffer[event_buffer_size];
int event_buffer_top = 0;
int event_buffer_tail = 0;
bool xEvent_processing = false;
// This is the rapid inbound xEvent handler
// it processes everything and stores in a fast buffer
// use this if you receive lots of events in a short time
// direct your incoming events e.g. Particle.subscribe("xEvent/", add_to_event_buffer, MY_DEVICES);
void add_to_event_buffer(const char *_buffer_event, const char *_buffer_data) {
    // strncpy stops at the source string's terminator; the original memcpy of a fixed
    // 63/255 bytes could read past the end of the incoming strings
    strncpy(event_buffer[event_buffer_top].event, _buffer_event, event_size - 1);
    event_buffer[event_buffer_top].event[event_size - 1] = '\0'; // guarantee termination
    strncpy(event_buffer[event_buffer_top].data, _buffer_data, data_size - 1);
    event_buffer[event_buffer_top].data[data_size - 1] = '\0';
    event_buffer_top++;
    if (event_buffer_top >= event_buffer_size) event_buffer_top = 0;
    if (event_buffer_top == event_buffer_tail) event_buffer_tail++; // buffer full: overwrite the oldest entry
    if (event_buffer_tail >= event_buffer_size) event_buffer_tail = 0;
}
// run this every 5 seconds or so, it processes events in a more timely manner without clashes
// run using a timer e.g.
// declare - Timer buffer_timer(5000, get_from_event_buffer);
// start - buffer_timer.start();
// In the code below replace xHandler with your normal function you use to handle incoming Particle Events
void get_from_event_buffer() {
    // note event[0]: test the first character; the original compared the array's
    // address against '\0', which is always true
    if (!xEvent_processing && event_buffer_tail != event_buffer_top && event_buffer[event_buffer_tail].event[0] != '\0') {
        xEvent_processing = true;
        xHandler(event_buffer[event_buffer_tail].event, event_buffer[event_buffer_tail].data); // replace this with your event handler
        event_buffer[event_buffer_tail].event[0] = '\0'; // mark the slot as consumed
        event_buffer[event_buffer_tail].data[0] = '\0';
        event_buffer_tail++;
        if (event_buffer_tail >= event_buffer_size) event_buffer_tail = 0;
        xEvent_processing = false;
    }
}
// End Buffer Code
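To see the wrap-around arithmetic in isolation, here is a minimal host-side model of the same head/tail logic, stripped of the Particle-specific parts so it runs anywhere (the names push/pop/count are my own, not from the code above):

```cpp
// Minimal stand-alone model of the index arithmetic used above:
// `head` is where the next event is written, `tail` is the next one to read.
const int BUF_SIZE = 10;
int head = 0;
int tail = 0;

void push() {                       // mirrors add_to_event_buffer
    head = (head + 1) % BUF_SIZE;
    if (head == tail)               // buffer full: drop the oldest entry
        tail = (tail + 1) % BUF_SIZE;
}

bool pop() {                        // mirrors get_from_event_buffer
    if (tail == head) return false; // empty, nothing to process
    tail = (tail + 1) % BUF_SIZE;
    return true;
}

int count() {                       // events currently queued
    return (head - tail + BUF_SIZE) % BUF_SIZE;
}
```

Note that with this scheme the buffer holds at most BUF_SIZE - 1 unread events, since head == tail is reserved to mean "empty".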
I wish I understood more of what is written here. I have a similar problem: I am sending rapid RFID tag reads to an Electron. I really want a way to keep only unique reads, then publish the unique tag IDs to the cloud in a queue.
Once I get this figured out I will send the data to a Firebase DB, which I will consume in a web UI to get the timestamps and calculate lap times.
I like the looks of that. I will download the lib and test it out to see if I can gain an understanding of how it works.
Essentially, I will be setting this up on a track that USAF members will be running on for their fitness assessment, so I can have up to 8 people crossing the lap/finish line at a given time. The firmware of the RFID reader will send over serial to the Electron:
TestTrackTag01 or
TestTrackTag02,
TestTrackTag03,
TestTrackTag04
and so on...
So if I get a few runners crossing at the same time, it will send those tag reads in a burst over serial to the Electron. I need to find out how to keep only the first unique read for each tag and queue it up for a publish. The publish will include a timestamp as well, so that I can store the tag and time in a Firebase DB, and the web UI will make the calculations for each lap; on the 24th crossing it will automatically complete their assessment with the total time run. I feel like I have a good handle on the JS side of the house. C++ always gets a little confusing, though.
Regarding circular buffer coding: there is a simpler approach. I wrote a tutorial on the topic a few years back and just transferred the first few pages to Google Docs: Tutorial Circular Buffers Made Easy
@callen This looks like it should work the way it is. A couple of comments:
You may want to consider increasing the max data length of 50 to a little more, depending on how many characters the data from the RFID tag will be. You use up 20 characters for the boilerplate JSON and another 10 for the timestamp, leaving 20 characters for your RFID tag before you start dropping data. PublishManager will reject a publish event if it's over the max (unless the event can be published with no delay).
You might be able to make up some of this space by lowering the max eventName length from 20 to 5 if all of your events will only be "Lap".
ex.
PublishManager<20,20,50> publishManager; // uses about 1400 bytes
PublishManager<20,20,70> publishManager; // uses about 1800 bytes
PublishManager<20,5,70> publishManager;  // uses about 1500 bytes
// Template is <elements in cache, maxEventName, maxDataLength> --> bytes ~= (maxEventName + maxDataLength) * elements in cache
You shouldn't need this. Publish manager will take care of the timing of publishes for you. It could look like this:
What does the serial connection send when no RFID tags are present? You may want to have a check to make sure you're getting a valid RFID tag in addition to being unique.
Thanks for trying out PublishManager. Let me know how I can improve the docs or API.
What I found today was that my strcmp works, but not exactly the way I need it to. I will get anywhere between 20-30 tag reads from each tag as they cross the lap line. Using strcmp I can cut that down by half, to maybe 10-20 from each tag. As it stands now, each lap a runner completes will get 10-20 lap entries in the DB, all with very similar timestamps, not to mention the data usage is overkill.
I could root out extra entries in JS, but the data is still overcooked in the DB. I am looking into how to only keep unique tags for a span of 5 seconds.
How exactly are you using it?
I don't think it's an issue of strcmp() but how it's used.
Without going through the complete thread, I'd think you need a local list of all already-found IDs with their latest timestamp. Iterate over the whole list for each "newly" reported ID and compare them first; if you find the same ID, compare the current time with the stored latest. If the difference is greater than your threshold, update the timestamp and report the new time; otherwise break out of the loop.
If you have reached the end of the list without finding the ID, add the ID to the list with the current timestamp, report time and move on to the next "newly" reported ID.
rinse - repeat
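A minimal sketch of that list (the names TagEntry and shouldReport, and the sizes, are my own, not from the thread; timestamps are plain millisecond counts as millis() would give):

```cpp
#include <cstring>
#include <cstdint>

const int TAG_LEN = 16;             // max tag ID length + terminator (assumed)
const int MAX_TAGS = 8;             // up to 8 runners at once
const uint32_t THRESHOLD_MS = 5000; // ignore repeat reads within 5 seconds

struct TagEntry {
    char id[TAG_LEN];
    uint32_t lastSeen;              // time of the last reported read
};

TagEntry tagList[MAX_TAGS];
int tagCount = 0;

// Returns true if this read should be reported (published),
// false if it is a duplicate inside the threshold window.
bool shouldReport(const char *id, uint32_t now) {
    for (int i = 0; i < tagCount; i++) {
        if (strcmp(tagList[i].id, id) == 0) {       // already known?
            if (now - tagList[i].lastSeen > THRESHOLD_MS) {
                tagList[i].lastSeen = now;          // update and report
                return true;
            }
            return false;                           // too soon, skip it
        }
    }
    if (tagCount < MAX_TAGS) {                      // new tag: add to the list
        strncpy(tagList[tagCount].id, id, TAG_LEN - 1);
        tagList[tagCount].id[TAG_LEN - 1] = '\0';
        tagList[tagCount].lastSeen = now;
        tagCount++;
        return true;
    }
    return false;                                   // list full, drop it
}
```

On the Electron you would call something like `if (shouldReport(s.c_str(), millis())) publishWithTimeStamp("Lap", s);` after each serial read.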
Interestingly enough, the example won't compile on Particle Build unless using 0.5.3-rc3 or higher; it gives a PublishManager.h: No such file or directory error. Not sure why?
Also, should libraries be included using quotation marks or angle brackets?
#include <PublishManager.h>
// vs
#include "PublishManager.h"
@bveenema, the reason is rooted in the fact that along with the new system firmware a new file structure for libraries (commonly referred to as Libraries 2.0) was deployed, which changed the previous include format from #include "libraryName/libraryName.h" to #include "libraryName.h" (whether double quotes or angle brackets doesn't matter).
This new structure also allows for cascaded libraries where one library can require another which in turn can also require another without the user needing to track all the dependencies and import all of them manually.
And I still get wicked amounts of data to the DB for each tag, despite trying to use strcmp to reduce the number of times a tag is sent to the PublishManager.
if (strcmp(s, pS) != 0) {        // compare the tag to the previously read tag
    pS = s;                      // store this tag as the previous tag
    publishWithTimeStamp("Lap", s);
}
Perhaps I need to say a maximum number of runners exists (perhaps 8) and then in code use millis() to set an interval (perhaps 5000 ms) during which a tag cannot be read again. So if tag 1 is read, it will be ignored for 5 seconds until it can be read again.
Here’s my cloud function for Firebase just in case.
Man, I don't know how I missed this! I started something very similar, but to no avail.
I figured I should start with one tag and, when I get the desired outcome, add another and keep going until I max out at perhaps 15 tags (most of the time we don't have too many airmen testing at the same time).
So I did the simplest code to start.
void loop() {
    unsigned long currentMillis = millis(); // not used yet, kept for the planned interval check
    if (Serial1.available()) {
        String s = Serial1.readStringUntil('\n'); // from SparkFun RedBoard
        // Serial.printlnf("Received: %s", s.c_str()); // serial printing only
        Serial.println(s);
        // if (strcmp(s, pS) != 0) {   // compare the tag to the previously read tag
        //     pS = s;                 // store this tag as the previous tag
        //     publishWithTimeStamp("Lap", s);
        // }
        if (s.equals("TestTrackTag01")) {
            Serial.println("Matched");
        }
    }
}
This does not ever seem to match. I used strcmp as well, but to no avail. What does a list need to look like, anyway? Do I initialize an empty array?
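One likely culprit (an assumption, since the RedBoard's output isn't shown here): if the sender transmits "\r\n" line endings, readStringUntil('\n') keeps the '\r', so s actually holds "TestTrackTag01\r" and never equals the bare literal. Calling s.trim() before comparing usually fixes it. A host-side illustration, with std::string and a hypothetical rtrim() helper standing in for Arduino's String and trim():

```cpp
#include <string>

// Hypothetical helper: strips trailing whitespace the way Arduino's
// String::trim() strips it from the end (trim() also strips the front).
std::string rtrim(std::string s) {
    while (!s.empty() && (s.back() == '\r' || s.back() == '\n' ||
                          s.back() == ' '  || s.back() == '\t'))
        s.pop_back();
    return s;
}
```

With the raw string the comparison fails; after trimming it matches.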