I recently needed to download a large amount of data from the Spark Core using variables. Memory limitations kept pushing me towards the limits of the Core, both in terms of Flash and RAM. This is how I solved the problem.
I was initially generating CSV strings using the String class, which cost me a lot of both Flash and RAM; sprintf actually made matters worse. So I went old school, found an itoa library on the web, and implemented that. It helped.
Please bear in mind that I later stopped using the following code, for reasons explained further down.
#define ALPHANUMS "zyxwvutsrqponmlkjihgfedcba9876543210123456789abcdefghijklmnopqrstuvwxyz"

// Converts number to a string in the given base (2-36) and returns the
// character count. The digit table is mirrored around '0' (index 35) so
// that negative remainders still index a valid digit without calling abs().
int itoa(int number, char* out, int base) {
    int t, count;
    char *p, *q;
    char c;

    p = q = out;
    if (base < 2 || base > 36) base = 10;
    do {
        t = number;
        number /= base;
        if (out) *p = ALPHANUMS[t + 35 - number * base];  // t - number*base is the remainder
        p++;
    } while (number);
    if (t < 0) {                    // the last quotient kept the original sign
        if (out) *p = '-';
        p++;
    }
    count = p - out;
    if (out) {
        *p-- = '\0';
        while (q < p) {             // digits come out least-significant first, so reverse in place
            c = *p;
            *p-- = *q;
            *q++ = c;
        }
    }
    return count;
}
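As I read the if (out) guards, you can also call it with a NULL output first to size a buffer (strictly speaking, pointer arithmetic on a null pointer is undefined behaviour, so treat this as a sketch of the intent rather than guaranteed-portable code):

int len = itoa(-12345, NULL, 10);  // counts 6 characters ('-' plus five digits), writes nothing
char buf[16];
if (len < (int)sizeof(buf)) {
    itoa(-12345, buf, 10);         // buf now holds "-12345"
}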
I would then build the string up ‘old school’ in the following manner:
char temp[12];                     // big enough for a 32-bit int, sign, and terminator
itoa(HARDWARE_PRODUCT, temp, 10);  // convert one field to decimal text
strcat(v_version, temp);           // append it to the outgoing string
It worked, but was not ideal. One of my data storage structures on the Spark Core looked like the following:
#define CIRCUIT_COUNT 8

struct BillingPosStruct {
    unsigned int start;
    unsigned int last;
    struct {
        unsigned int PP_POS_CNT;
        unsigned int PQ_POS_CNT;
        unsigned int PS_CNT;
    } count[CIRCUIT_COUNT + 1];
};

struct BillingPosStruct BillingPos;
This took up 116 bytes of RAM in itself, and would require significantly more as a comma-delimited string.
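As a sanity check on that figure: every member is a 4-byte unsigned int and they all share the same alignment, so there is no padding, and the size is (2 + 9 × 3) × 4 = 116 bytes. On a C++11 toolchain (or C11 with assert.h) you can pin that down at compile time:

static_assert(sizeof(struct BillingPosStruct) == 116, "BillingPosStruct layout changed");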
I ended up encoding it as a Base64 string, meaning I could send it as a 156-character string (Base64 turns every 3 input bytes into 4 output characters, so 116 bytes becomes (116 / 3, rounded up) × 4 = 156), which is still long, but not too bad. The Base64 library I used appears below.
// base 64
static char encoding_table[] = {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
                                'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',
                                'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
                                'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',
                                'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
                                'o', 'p', 'q', 'r', 's', 't', 'u', 'v',
                                'w', 'x', 'y', 'z', '0', '1', '2', '3',
                                '4', '5', '6', '7', '8', '9', '+', '/'};

// Number of '=' padding characters for each value of input_length % 3.
static int mod_table[] = {0, 2, 1};

// Adapted from http://stackoverflow.com/questions/342409/how-do-i-base64-encode-decode-in-c
// encoded_data must have room for 4 * ceil(input_length / 3) + 1 bytes.
void base64_encode(char *data,
                   size_t input_length,
                   char *encoded_data) {
    int output_length = ((input_length - 1) / 3) * 4 + 4;  // assumes input_length > 0
    encoded_data[output_length] = 0;                        // NUL-terminate up front

    for (unsigned int i = 0, j = 0; i < input_length;) {
        // Gather up to three input bytes, zero-filling past the end.
        unsigned int octet_a = i < input_length ? (unsigned char)data[i++] : 0;
        unsigned int octet_b = i < input_length ? (unsigned char)data[i++] : 0;
        unsigned int octet_c = i < input_length ? (unsigned char)data[i++] : 0;

        unsigned int triple = (octet_a << 0x10) + (octet_b << 0x08) + octet_c;

        // Emit the triple as four 6-bit groups.
        encoded_data[j++] = encoding_table[(triple >> 3 * 6) & 0x3F];
        encoded_data[j++] = encoding_table[(triple >> 2 * 6) & 0x3F];
        encoded_data[j++] = encoding_table[(triple >> 1 * 6) & 0x3F];
        encoded_data[j++] = encoding_table[(triple >> 0 * 6) & 0x3F];
    }

    // Overwrite the tail with '=' padding when the input wasn't a multiple of 3.
    for (int i = 0; i < mod_table[input_length % 3]; i++)
        encoded_data[output_length - 1 - i] = '=';
}
To use this, I used the following code:
char v_energy_pos[165];  // 156 Base64 characters plus the terminating NUL, with a little headroom
base64_encode((char *)&BillingPos, sizeof(BillingPos), v_energy_pos);
This was sent as a Spark variable:
Spark.variable("energy_pos", v_energy_pos, STRING);
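For completeness, here is roughly how the pieces hang together in the firmware. When you refresh the buffer depends entirely on the application, so treat the loop() below as a sketch:

void setup() {
    // Register the buffer once; the cloud reads whatever it holds at request time.
    Spark.variable("energy_pos", v_energy_pos, STRING);
}

void loop() {
    // ... update BillingPos as metering data comes in ...

    // Re-encode so the next cloud read sees the latest snapshot.
    base64_encode((char *)&BillingPos, sizeof(BillingPos), v_energy_pos);
}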
To read this on the other side, here is a snippet in Python:
import base64
from struct import unpack
ret = myCore.readVariable('energy_pos')  # fetch the variable from the cloud
dec = base64.b64decode(ret)              # back to the raw 116 bytes
un = unpack("IIIIIIIIIIIIIIIIIIIIIIIIIIIII", dec)  # 29 'I's, one per unsigned int
The struct module documentation at https://docs.python.org/2/library/struct.html explains the format characters.
In my case, the upper-case I represents a 4-byte unsigned integer, and there are 29 of them: start, last, and then the three counters for each of the nine count entries, in declaration order. Native byte order works here because both the STM32 in the Core and a typical PC are little-endian. The unpacked data is a tuple of the integer values, which can then be used as normal.
I am not saying that any of this is an ideal solution for moving a lot of data around, but when you do have a lot of data, it seems to work for me.
Darryl