Using Spark with APIPA IP address scheme?

I wish to use the Spark Core with the APIPA IP address scheme, at least in one deployment. With this scheme no router is involved; the nodes interact with each other and each picks an address in the link-local range (169.254.0.0/16). I know this means no Internet connectivity, and I am OK with that, because I only want the concerned nodes to talk to each other. Typically, when a node (such as a desktop) boots up and doesn't get a dynamic IP address, it negotiates with similar nodes and assigns its own address.

So is this possible with Spark Core?

Maybe somebody else with more knowledge of the internals can chime in, but from my brief research it looks like APIPA relies on ARP to figure out which addresses are already in use, and there appears to be no way to access the ARP table of the CC3000 (it is only used internally, and not exposed).

I’m not sure whether the CC3000 will fall back to APIPA on its own if DHCP configuration fails. Have you tried it? If not, you could just try assigning a static IP, but I don’t know how you could detect or avoid collisions.
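For reference, the collision check APIPA performs is an ARP "probe": a broadcast ARP request whose sender IP is all zeros, asking about the candidate address; if anyone answers, the address is taken. This is exactly what the CC3000's closed ARP handling prevents you from doing yourself. The sketch below (not CC3000 code, just a host-side illustration under that assumption) builds such a probe frame by hand:

```python
import struct

def build_arp_probe(sender_mac: bytes, candidate_ip: bytes) -> bytes:
    """Build a raw Ethernet ARP probe frame (RFC 5227 style).

    An ARP probe asks "who has <candidate_ip>?" while deliberately
    leaving the sender IP as 0.0.0.0, so the probe itself cannot
    pollute other hosts' ARP caches. If anyone replies, the address
    is in use and the prober must pick another one.
    """
    eth_header = struct.pack(
        "!6s6sH",
        b"\xff\xff\xff\xff\xff\xff",  # destination: broadcast
        sender_mac,                   # source MAC
        0x0806,                       # EtherType: ARP
    )
    arp_payload = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                            # hardware type: Ethernet
        0x0800,                       # protocol type: IPv4
        6,                            # hardware address length
        4,                            # protocol address length
        1,                            # opcode: request
        sender_mac,                   # sender hardware address
        b"\x00\x00\x00\x00",          # sender IP: 0.0.0.0 (makes it a *probe*)
        b"\x00\x00\x00\x00\x00\x00",  # target MAC: unknown
        candidate_ip,                 # target IP: the address being probed
    )
    return eth_header + arp_payload

# Probe for 169.254.1.50 from a locally administered MAC
frame = build_arp_probe(b"\x02\x00\x00\xaa\xbb\xcc", bytes([169, 254, 1, 50]))
```

On a normal NIC you would send this on a raw socket and listen for replies; on the Core the CC3000 owns that layer, which is the crux of the problem.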

Anyone know more about this subject? @Dave?

Sadly, I don’t think this is possible with the CC3000 that is used by the Core.


I think the lack of APIPA support has to do with the firmware as currently written, not with the CC3000 itself. Typically, when a desktop boots it tries to obtain an address via DHCP; after a timeout it falls into APIPA mode. Even if it is alone (no router, no DHCP server, no other nodes), the desktop assigns an address to itself. If there are other nodes around, they negotiate and end up with unique addresses in the APIPA range. To see APIPA in action, just boot a Windows or Linux machine that is configured for a dynamic address with no DHCP server around; after a few seconds it gets its APIPA address.