Hey all,
Just wanted to reach out to see if anyone else is having this problem and/or if the Spark team is aware of it. If the Spark Core is sending UDP data to an IP address on a regular basis and receives one datagram, small enough to fit in the buffer, that is never processed (read), you get a CFOD.
Please note that the receive buffer is larger than the 12-byte payload, and I have patched the CC3000 to 1.28:
#define RX_BUF_MAX_SIZE 512
Here is the setup:
- Core transmitting UDP packets on an interval.
- The same core receives a packet from another source.
- Within a few seconds: CFOD.
- The core resets and does it again and again…
In my example below, the Spark Core has IP 10.0.0.13 and the Node.js server is running on 10.0.0.3.
Spark Core code:
UDP Udp;
unsigned char TxMsg[12] = { 1, 254, 1, 254, 1, 254, 43, 212, 71, 184, 3, 252 };

void setup() {
    pinMode(D7, OUTPUT);
    digitalWrite(D7, LOW);
    Udp.begin(9000);
}

void loop() {
    Udp.beginPacket(IPAddress(10,0,0,3), 9000);
    Udp.write(TxMsg, 12);
    Udp.endPacket();
    digitalWrite(D7, LOW);
    delay(200);
    digitalWrite(D7, HIGH);
}
Node.js code:
var PORT = 9000;
var HOST = "10.0.0.13";
var dgram = require("dgram");

var message = new Buffer("sparky CFOD");
var client = dgram.createSocket("udp4");

client.send(message, 0, message.length, PORT, HOST, function(err, bytes) {
    if (err) throw err;
    console.log("UDP message (len = " + message.length + ") sent to " + HOST + ":" + PORT);
    client.close();
});
What I would have assumed was that the packet would just sit there in the buffer and do absolutely nothing.
I also noticed a few other strange UDP behaviors.
Anomaly 1: I never seem to need to call Udp.endPacket(). If I call the Udp.write(char) method, I cannot call it multiple times to assemble the message I want; instead, each call to Udp.write(char) writes out a new datagram that is delivered right away. I was expecting to do beginPacket, write, write, write, endPacket and then have the packet sent; as it stands, those three writes send three separate packets. To send one datagram I need to assemble my own buffer/array and pass it to Udp.write(buffer, size).
Anomaly 2: More than one received packet sitting in the buffer is processed as though it were one datagram. In my case the Spark Core receives two 12-byte datagrams from two different sources, and a call to parsePacket seems to process them as one giant 24-byte datagram. I can only retrieve source information, such as IP and source port, for the first datagram and not the second.
Anomaly 3: Whenever the above Spark Core program is running, flashing the core via the Web IDE is a pain. I basically have two choices: 1. use my phone to push Tinker to the core and then re-flash what I want, or 2. use DFU to do it. What happens is the core starts to flash, pauses briefly on the magenta light, then goes dark. The core then resets and runs the last program.
I am also expecting that if the receive buffer is full and more data arrives, the new data is simply lost and does not crater the core. I haven't tested this yet, but wondered if anyone has.
I find it really frustrating to see on Waffle that the UDP problems are sitting in the ideas category, when UDP and TCP functionality is what the Spark Core is mainly about.
I know I asked a lot of questions here, but I am looking for some insight. Maybe I need to code things differently, etc.
Thanks again all