Create virtual Spark Device

Hi, is there a software version of a Spark device, e.g. one that supports Spark.variable() and Spark.function()? I’m thinking about using the (private) Spark Cloud with more devices than just the Core or Photon, and it would help a lot not to have to start from scratch.

If you look at the fw_hal branch, there is a GCC build that runs on your PC. I haven’t tried it and I’m not sure what it supports, but you should be able to compile it and see.


I think what you’re looking for is what’s coming round the corner with the HAL (hardware abstraction layer).

@harrisonhjones is playing around with building an EXE of some sort, maybe he could give some more details or tag some other pro :wink:


Darn, and again, I got beaten by @eely22 :+1:


Would something like this be useful for, e.g., adding a Raspberry Pi to my private Spark Cloud…

Or something like a Node.js implementation of the device side of the Spark protocol.

@eely22 is into low-latency all around....


It depends on what you want to do. Pretty sure the Spark team uses this to test the FW without a core. Do you have some particular use case?

Anything can connect to the cloud; I have the FW ported to the Nordic nRF51 now. I guess it just depends on what you want to connect and why.

So one example is the following:

I have a Raspberry Pi running a Node.js app which communicates with the Spark Cloud software. When I invoke a function or send a message via the Spark Cloud, I would like it to be invoked on my Raspberry Pi as well, as if it were also a Spark device.

In my current setup I would need one pub/sub server for my Raspberry Pi and one Spark Cloud for the Spark devices. This seems like a waste :smile:


You should be able to use the hal branch this way; it is really just a C/C++ application with the abstracted Spark libraries. You would basically just replace the pub/sub server with the Spark FW executable.

I have found the Spark JS APIs to be sufficient and would do it that way, but you can certainly do this if you want. I don’t know how “commercialized” the GCC build of the fw_hal branch is, so you may run into issues or corner cases. I think it is currently more an internal tool for Spark, so it may not be tested and validated like the public APIs.

One thing to note: this wouldn’t port well to the live cloud, because you would need a provisioning system for the Raspberry Pi. So if you wanted to put it on the live cloud at any point, I would recommend you use the JS APIs.

Thanks for your reply. It seems I have to dig deeper into the Spark JS API. I thought the API was more the “client” side of things and less the “device” side. Do you understand what I’m trying to say?

I guess it depends exactly on what you want to do. I thought you could subscribe to and publish events from the JS API. If that is all you want to do, that may be simplest. It is the client side of things, but you can still do a lot. You could still invoke functions on the pi by subscribing to specific events and doing something when they’re received.
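For example, something along these lines would let the Pi react to events from the cloud (a rough, untested Node/TypeScript sketch; the event name `pi/led` and the access-token environment variable are placeholders, and I’m assuming the documented /v1/devices/events SSE stream):

```typescript
// Rough sketch: the Pi's app subscribes to the cloud's Server-Sent Events stream
// and reacts locally, as if a "function" had been invoked on it.
// Assumptions/placeholders: SPARK_ACCESS_TOKEN env var, event name "pi/led".
import * as https from "https";

const token = process.env.SPARK_ACCESS_TOKEN ?? "";

https.get(
  { host: "api.spark.io", path: `/v1/devices/events?access_token=${token}` },
  (res) => {
    let eventName = "";
    res.setEncoding("utf8");
    res.on("data", (chunk: string) => {
      // Naive SSE parsing; assumes each "event:"/"data:" line arrives in one chunk.
      for (const line of chunk.split("\n")) {
        if (line.startsWith("event: ")) {
          eventName = line.slice(7).trim();
        } else if (line.startsWith("data: ")) {
          const payload = JSON.parse(line.slice(6));
          handleEvent(eventName, payload.data, payload.coreid);
        }
      }
    });
  }
);

// React on the Pi; swap the console.log for real GPIO handling.
function handleEvent(name: string, data: string, coreId: string): void {
  if (name === "pi/led") {
    console.log(`LED command from ${coreId}: ${data}`);
  }
}
```

A Core could then just Spark.publish("pi/led", "on") and the Pi would pick it up.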

If you want the Raspberry Pi to show up as an actual device, you would have to use the fw_hal build. This would allow you to directly invoke functions on the Pi through the REST API.
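Once the fw_hal build registers a function (say a digitalwrite-style handler), calling it would look like any other device call against the /v1/devices/:id/:function endpoint. A rough, untested Node/TypeScript sketch (the device ID, function name, and token below are placeholders):

```typescript
// Rough sketch: invoke a cloud function on the "virtual" device through the REST API.
// Assumptions/placeholders: device ID, function name "digitalwrite", token env var.
import * as https from "https";
import { stringify } from "querystring";

const deviceId = "0123456789abcdef";
const token = process.env.SPARK_ACCESS_TOKEN ?? "";
const body = stringify({ access_token: token, args: "D7,HIGH" });

const req = https.request(
  {
    host: "api.spark.io",
    path: `/v1/devices/${deviceId}/digitalwrite`,
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      "Content-Length": Buffer.byteLength(body),
    },
  },
  (res) => {
    let json = "";
    res.on("data", (chunk) => (json += chunk));
    res.on("end", () => console.log("return_value:", JSON.parse(json).return_value));
  }
);
req.write(body);
req.end();
```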

So, depending on what you want, one way may be easier.


The latter is what I would prefer to do. I will see if I can make use of the fw_hal build. I’m not a C programmer, so it will probably take me some time to adjust this build to my needs.
Do you know if there is documentation about the protocol aside from the basic overview?

I know @mdma was working on a guide on adding device types; it’s not quite what you’re doing, but it may help. Also, it looks like @harrisonhjones was working on this in another thread: http://community.spark.io/t/compiling-the-hal-branch-for-gcc/10488

I have done this for the nRF51; I had to add a new device type to compile, which is way more than what you need. You should just have to change the PLATFORM_ID in the build/platform-id.mk file to 3. Beyond that, I haven’t seen much documentation, and I don’t know what platforms they have tested on or what limitations may exist…


@eely22, would you mind doing a short write-up of what you achieved using the nRF? I’m curious how you attached that to the network/internet.

There is a thread on the project: http://community.spark.io/t/sparkle-a-bluetooth-le-powered-spark-core-clone/6108

Basically, I use gateways to go from BLE to the internet. Right now, the gateways are Android/iOS apps, a shield that takes a Core/Photon and has a BLE central device on it, or a Raspberry Pi with a BLE dongle. They all do the same thing: take BLE packets and forward them to the Spark Cloud.

I am also going to add native IPv6 support, since that is now available.


Ah! You’re the SparkLE guy. Cool stuff. I had been loosely following that thread. I’ll look into it. Thanks for all the work.

[Edit] Good lord I didn’t realize how far behind I was on that thread. You’ve made really awesome strides!


Thanks! I should be launching it soon; I’m going to try to run a Kickstarter campaign for it sometime this spring.


Hey, can you tell me where to look for the fw_hal branch on GitHub? Sorry, I could not find it. Thank you.

https://github.com/spark/firmware/tree/feature/hal

Sorry to append to an old thread, but it does provide an ideal context for my question.

First, some background: I’m one of the Digistump Oak Kickstarter supporters who will now become Particle users thanks to Digistump’s switch from its internally developed RootCloud to the Particle stack.

The main reason I selected Oak over Spark Core (or Particle Photon) was Digistump’s commitment to support a fully functional local server (at that time, the Particle Local Cloud wasn’t seeing much love). Since I’ll be layering some home safety and security features over my Particle devices, I must not rely on external servers.

I have two Raspberry Pi 2 devices, and I’d like to get a head start on learning the Particle stack before my Oak hardware arrives in the fall.

Ideally, I’d like to have a single Local Cloud and multiple devices (controlling and/or listening to one or more physical I/O pins) running on each Raspberry Pi, along with a “Virtual” Device that monitors the Local Cloud instance (for possible fail-over support).
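To make that last part concrete, here is roughly what I picture the “Virtual” Device doing (a minimal Node/TypeScript sketch of my own idea, not something from the Particle docs; I’m assuming the Local Cloud mirrors the hosted /v1/devices endpoint and that its API listens on port 8080, and the address below is just a placeholder):

```typescript
// Rough idea of the "Virtual" monitoring device: poll the Local Cloud's REST API
// and flag it for fail-over if it stops answering.
// Assumptions/placeholders: Local Cloud at 192.168.1.50:8080, token in an env var.
import * as http from "http";

const LOCAL_CLOUD = { host: "192.168.1.50", port: 8080 };
const token = process.env.SPARK_ACCESS_TOKEN ?? "";

function checkCloud(): void {
  const req = http.get(
    { ...LOCAL_CLOUD, path: `/v1/devices?access_token=${token}` },
    (res) => {
      console.log(res.statusCode === 200 ? "Local Cloud OK" : `unexpected status ${res.statusCode}`);
      res.resume(); // discard the response body
    }
  );
  req.on("error", () => console.error("Local Cloud unreachable, begin fail-over"));
}

setInterval(checkCloud, 30_000); // poll every 30 seconds
```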

How feasible does this sound? Are there limitations in the Particle stack that would make it unworkable?

I’m looking more for pointers from experienced RasPi/Particle devs than a ready-to-use solution (which would be nice, if it exists).

TIA,

-BobC

This is very feasible, but there is no out-of-the-box solution available. It’s something we’d love to build at some point!

Are you a developer? If so and you’re interested in coding any of this, just let me know and I’ll provide some pointers of where to begin.

This sprint I will finally get to work on the virtual Spark device that I hacked up many months ago. That will form the foundation of the solution.