Porting arduinolifx, a LIFX bulb emulator

I did send the WiFi credentials and it’s breathing cyan, but the debug output shows 0.0.0.0.

I did a reset again and it showed me the right IP address, but then I started the LIFX app and the core reset.

Hmm… I won’t be able to assist on that part…

The firmware code is now available, and you will have to test it further.

Sorry about that. :wink:

Thanks. Hopefully someone can pick it up. The idea is that someone can plug whatever they like into the Spark Core and have it interact with the LIFX app, and hopefully IFTTT soon. You don’t need a LIFX bulb, since what we are trying to do here is emulate one. :slight_smile:

Can you share which app you are using?

I will try to test it out.

Either the Android or iOS version of the LIFX app is available.

So I’m trying to debug it, and I can make it restart every time I connect to it via telnet on port 57600. It connects, but then you send an arbitrary command and it doesn’t handle it well. In theory it should ignore it, but instead it makes the core reset. PacketBuffer?

Are the EEPROM settings in lifx.h OK with the Spark in terms of length?

#define EEPROM_BULB_LABEL_START 0 // 32 bytes long
#define EEPROM_BULB_TAGS_START 32 // 8 bytes long
#define EEPROM_BULB_TAG_LABELS_START 40 // 32 bytes long
// future data for EEPROM will start at 72...

#define EEPROM_CONFIG "AL1" // 3 byte identifier for this sketch's EEPROM settings
#define EEPROM_CONFIG_START 253 // store EEPROM_CONFIG at the end of EEPROM
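For what it’s worth, the arithmetic in that layout can be checked off-device with a few static_asserts. The offsets below are copied from lifx.h; the 256-byte EEPROM_SIZE is an assumption about the Core’s emulated EEPROM capacity and is worth confirming against the firmware docs before trusting it:

```cpp
// Compile-time sanity check of the lifx.h EEPROM layout. Offsets are
// copied from the sketch; EEPROM_SIZE of 256 is an ASSUMPTION about the
// Core's emulated EEPROM -- confirm it against the firmware docs.
#define EEPROM_BULB_LABEL_START      0   // 32 bytes long
#define EEPROM_BULB_TAGS_START       32  // 8 bytes long
#define EEPROM_BULB_TAG_LABELS_START 40  // 32 bytes long
#define EEPROM_CONFIG_START          253 // 3-byte "AL1" marker
#define EEPROM_SIZE                  256 // assumed capacity

// Each region must end exactly where the next one starts.
static_assert(EEPROM_BULB_LABEL_START + 32 == EEPROM_BULB_TAGS_START,
              "label region overlaps tags");
static_assert(EEPROM_BULB_TAGS_START + 8 == EEPROM_BULB_TAG_LABELS_START,
              "tags region overlaps tag labels");
static_assert(EEPROM_BULB_TAG_LABELS_START + 32 == 72,
              "future data should start at 72");
// The 3-byte config marker must still fit inside the EEPROM.
static_assert(EEPROM_CONFIG_START + 3 <= EEPROM_SIZE,
              "config marker runs past the end of EEPROM");
```

If the Core’s EEPROM turns out to be smaller than 256 bytes, the last assert is the one that matters: the "AL1" marker at 253 would land outside the usable range.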

I’m doing a test right now, and it might be the way UDP/TCP data is being handled by the CC3000 that is causing issues.

I missed out part of the example code for UDP. Adding it in and testing now :smiley:

At least they are talking now, but the packets are being ignored somehow:

-UDP 0 0 0 34 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 FF DA 0 8 0 0 0 0 C 38 0 40 84 4 0 20 
  Received packet type 484
  Unknown packet type, ignoring
-UDP 0 0 0 34 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 FF DA 0 8 0 0 0 0 C 38 0 40 84 4 0 20 
  Received packet type 484
  Unknown packet type, ignoring
-UDP 0 0 0 34 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 FF DA 0 8 0 0 0 0 C 38 0 40 84 4 0 20 
  Received packet type 484
  Unknown packet type, ignoring

Cool!

I tried the Arduino and put it in debug mode. It has way more information coming through. The same EEPROM dump is:

EEPROM dump:
65 114 100 117 105 110 111 32 66 117 108 98 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 65 76 49 

Notice the 3 numbers at the end? They are ASCII for “AL1”, the EEPROM_CONFIG identifier. Let me know how I can help.
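The dump decodes as plain ASCII. A minimal decoder (decodeLabel is a hypothetical helper for reading the dump, not part of the firmware):

```cpp
#include <string>
#include <vector>

// Decode a run of EEPROM bytes (given as decimal values, like the dump
// above) into an ASCII string, stopping at the first NUL terminator --
// the same convention the sketch uses for the bulb label.
std::string decodeLabel(const std::vector<unsigned char>& bytes) {
    std::string out;
    for (unsigned char b : bytes) {
        if (b == 0) break;
        out.push_back(static_cast<char>(b));
    }
    return out;
}
```

The first bytes (65 114 100 117 105 110 111 32 66 117 108 98) come out as “Arduino Bulb”, and the trailing 65 76 49 as “AL1”, i.e. the sketch’s EEPROM_CONFIG marker stored at EEPROM_CONFIG_START.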

I have pushed the updated example to GitHub.

Either my porting broke something, or the UDP/TCP stack is having issues.

I see that the core managed to receive the right UDP packet after some packets were ignored, but it stopped there.

The EEPROM seems fine, as anything after byte 72 is not used by the firmware.

Indeed, it’s seeing some of the network traffic, but it stops after a minute and then resets itself. Should there be some sort of flush command somewhere?

@kennethlimcp could you try printing the result from parsePacket() when receiving the UDP? I think there was an issue with it not respecting datagram boundaries… it gives you the size of the whole buffer, not the size of the datagram.

And UDP write() sends a datagram for each write, while endPacket() does nothing. bko gave me some code to buffer the writes and send them as one datagram.

Lots of good info here…

So it looks like porting this code wouldn’t be possible because of the problem with UDP (read below)? The LIFX app does send/receive UDP to and from multiple bulbs. Is there a workaround for this?

I think it is possible if you know the size the datagrams should be… or at least know about the issue so you can handle it accordingly.

The send side isn’t an issue; there are some overloads for that. Not sure who wrote it, but it helped me a bunch!

//----- UDP + overloading the inappropriate UDP functions of the Spark Core (REQUIRED !)
// Every UDP::write() on the Core goes out as its own datagram and
// endPacket() does nothing, so this subclass buffers all writes and
// sends them as one datagram when endPacket() is called.
class myUDP : public UDP {
private :
	uint8_t myBuffer[128];
	int offset = 0;
public :
	virtual int beginPacket(IPAddress ip, uint16_t port){
		offset = 0;	// start a fresh buffer for this packet
		return UDP::beginPacket(ip, port);
	};
	virtual int endPacket(){
		// flush everything buffered so far as a single datagram
		return UDP::write(myBuffer, offset);
	};
	virtual size_t write(uint8_t buffer) {
		write(&buffer, 1);
		return 1;
	}
	virtual size_t write(const uint8_t *buffer, size_t size) {
		// truncate rather than overflow the 128-byte buffer
		if (offset + size > sizeof(myBuffer))
			size = sizeof(myBuffer) - offset;
		memcpy(&myBuffer[offset], buffer, size);
		offset += size;
		return size;
	}
};

myUDP Udp;
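To see what that class buys you without flashing a Core, here is a stand-alone sketch of the same buffering trick against a mock base class (FakeUDP and BufferedUDP are stand-ins invented for illustration, not Spark APIs):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Mock of the Core's UDP behaviour described above: every write() goes
// out as its own "datagram", and endPacket() does nothing extra.
class FakeUDP {
public:
    std::vector<std::vector<uint8_t>> datagrams;  // what hit the wire
    int beginPacket() { return 1; }
    int endPacket() { return 1; }
    size_t write(const uint8_t* buf, size_t size) {
        datagrams.push_back(std::vector<uint8_t>(buf, buf + size));
        return size;
    }
};

// Same idea as myUDP: accumulate the writes, ship one datagram at the end.
class BufferedUDP : public FakeUDP {
    uint8_t myBuffer[128];
    size_t offset = 0;
public:
    int beginPacket() { offset = 0; return FakeUDP::beginPacket(); }
    int endPacket() { return (int)FakeUDP::write(myBuffer, offset); }
    size_t write(const uint8_t* buf, size_t size) {
        std::memcpy(&myBuffer[offset], buf, size);
        offset += size;
        return size;
    }
};

// Two small writes followed by endPacket() should produce exactly one
// datagram; returns how many actually went out.
size_t sendSplitPacket(BufferedUDP& udp) {
    const uint8_t head[2] = {0x24, 0x00};
    const uint8_t tail[2] = {0x00, 0x34};
    udp.beginPacket();
    udp.write(head, 2);
    udp.write(tail, 2);
    udp.endPacket();
    return udp.datagrams.size();
}
```

With the raw FakeUDP the same pair of writes would have produced two separate datagrams, which is exactly the on-wire framing the LIFX app cannot parse.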

I think @Hootie81 is exactly right here. I looked at the protocol here and all the UDP packet sizes can be known by reading what packet type you have received. The current lifx emulator code has a “process” function that copies the data and a “handle” parser that knows the packet types and walks over the copied data. These two functions need to be merged on Spark so that you don’t read out data that you don’t know to be part of the current packet.

This plus the UDP tx buffering would make it work in my opinion.
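As a sketch of that merge, the two header fields can be read straight off the raw datagram. The offsets below are inferred from the dumps in this thread (size in bytes 0–1, packet type in bytes 32–33, both little-endian), not taken from an official spec:

```cpp
#include <cstdint>

// Peek at a raw LIFX v1 datagram. Offsets inferred from the traces in
// this thread (e.g. "-UDP 24 0 0 34 ..." with parsePacket() == 36):
// bytes 0-1 carry the datagram size, bytes 32-33 the packet type.
uint16_t lifxPacketSize(const uint8_t* pkt) {
    return (uint16_t)(pkt[0] | (pkt[1] << 8));
}

uint16_t lifxPacketType(const uint8_t* pkt) {
    return (uint16_t)(pkt[32] | (pkt[33] << 8));
}
```

For the discovery packet logged in this thread, lifxPacketSize() returns 0x24 == 36, which matches the parsePacket() result; once the size is read from the header, the receive loop can consume exactly that many bytes per datagram even when parsePacket() reports the whole buffer.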

How can I add the code by @Hootie81 into the lifx code? It’s beyond me :smiley:

I’m seeing a consistent Udp.parsePacket() of 36.

@lightx, are you able to use DEBUG and capture the UDP messages sent so that we can compare?

I see that the core occasionally manages to read the correct data:

ParsePacket :36
-UDP 24 0 0 34 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 
 Received packet type 2
+UDP 29 0 0 1 0 0 0 0 DE AD DE AD DE AD 0 0 DE AD DE AD DE AD 0 0 0 0 0 0 0 0 0 0 3 0 0 0 1 7C 1 0 0 
+UDP 29 0 0 1 0 0 0 0 DE AD DE AD DE AD 0 0 DE AD DE AD DE AD 0 0 0 0 0 0 0 0 0 0 3 0 0 0 2 7C 1 0 0 

The core gets stuck somewhere after it receives the correct data, and we have to figure that out as well.

Also, I did not set the MAC address in the code. Does that affect it somehow, too?

I hope I did the library porting correctly. :stuck_out_tongue:

I’ve managed to receive a bunch of packet type 2 messages on the Spark.

-UDP 24 0 0 34 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0
  Received packet type 2
+UDP 29 0 0 1 0 0 0 0 D0 73 D5 0 DE 0 0 0 D0 73 D5 0 DE 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 1 7C 1 0 0
+UDP 29 0 0 1 0 0 0 0 D0 73 D5 0 DE 0 0 0 D0 73 D5 0 DE 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 2 7C 1 0 0

On the Arduino, the serial monitor goes rather fast, so I’m not sure all the communication is being captured on the Spark. On the Arduino, turning off a light looks like:

-UDP 26 0 0 14 0 0 0 0 DE 1 DE AD DE AD 0 0 DE 1 DE AD DE AD 0 0 0 0 0 0 0 0 0 0 15 0 0 0 0 0 
  Received packet type 15
Set light - hue: 0, sat: 0, bri: 65535, kel: 2000, power: 0(off)
+UDP 26 0 0 54 0 0 0 0 DE 1 DE AD DE AD 0 0 DE 1 DE AD DE AD 0 0 0 0 0 0 0 0 0 0 16 0 0 0 0 0 
-UDP 26 0 0 14 0 0 0 0 DE 1 DE AD DE AD 0 0 DE 1 DE AD DE AD 0 0 0 0 0 0 0 0 0 0 15 0 0 0 0 0 
  Received packet type 15
Set light - hue: 0, sat: 0, bri: 65535, kel: 2000, power: 0(off)
+UDP 26 0 0 54 0 0 0 0 DE 1 DE AD DE AD 0 0 DE 1 DE AD DE AD 0 0 0 0 0 0 0 0 0 0 16 0 0 0 0 0 
-UDP 26 0 0 14 0 0 0 0 DE 1 DE AD DE AD 0 0 DE 1 DE AD DE AD 0 0 0 0 0 0 0 0 0 0 15 0 0 0 0 0 
  Received packet type 15
Set light - hue: 0, sat: 0, bri: 65535, kel: 2000, power: 0(off)

I did set the MAC address on the core so that it is different from the Arduino’s.

I’m not sure if the core prints anything when it receives a command that it needs to process!?

Is it possible to send everything as one UDP packet somehow?

Yes, use the class that @Hootie81 pointed you to; the whole thing is just above in his post. Change the library to use myUDP instead of UDP and you should be good to go.

It managed to not compile:

In file included from ../inc/spark_wiring.h:30:0,
from ../inc/application.h:29,
from RGBMoodLifx.h:16,
from RGBMoodLifx.cpp:1:
../../core-common-lib/SPARK_Firmware_Driver/inc/config.h:12:2: warning: #warning "Defaulting to Release Build" [-Wcpp]
#warning "Defaulting to Release Build"
^
In file included from ../inc/spark_wiring.h:30:0,
from ../inc/application.h:29,
from arduinolifx.cpp:34:
../../core-common-lib/SPARK_Firmware_Driver/inc/config.h:12:2: warning: #warning "Defaulting to Release Build" [-Wcpp]
#warning "Defaulting to Release Build"
^
arduinolifx.cpp:35:52: error: 'virtual' outside class declaration
virtual int beginPacket(IPAddress ip, uint16_t port);
^
arduinolifx.cpp:36:23: error: 'virtual' outside class declaration
virtual int endPacket();
^
arduinolifx.cpp:37:36: error: 'virtual' outside class declaration
virtual size_t write(uint8_t buffer);
^
arduinolifx.cpp:38:56: error: 'virtual' outside class declaration
virtual size_t write(const uint8_t *buffer, size_t size);
^

All I changed is this…

// Ethernet instances, for UDP broadcasting, and TCP server and client
//----- UDP + overloading the inappropriate UDP functions of the Spark Core (REQUIRED !)
class myUDP : public UDP {
private :
	uint8_t myBuffer[128];
	int offset = 0;
public :
	virtual int beginPacket(IPAddress ip, uint16_t port){
		offset = 0;
		return UDP::beginPacket(ip, port);
	};
	virtual int endPacket(){
		return UDP::write(myBuffer, offset);
	};
	virtual size_t write(uint8_t buffer) {
		write(&buffer, 1);
		return 1;
	}
	virtual size_t write(const uint8_t *buffer, size_t size) {
		memcpy(&myBuffer[offset], buffer, size);
		offset += size;
		return size;
	}
};

myUDP Udp;

//UDP Udp;