Suitability of Spark as compared to Arduino


I am new to this platform but have used Arduino in the past. I have an Ethermega. This product looks fascinating. The website, videos and literature are phenomenal. I am so impressed.

Gushing completed…

I want to create a project whereby there are three mini banks of switches, separated out within a normal house. Each bank has, say, four switches and four LEDs. Someone at one bank flicks a switch and an LED on an opposing bank lights up.

Trouble picturing it? If you think of it as a telegraph system or a riff on a butler bell system in a 1920s house you’d be on the right track.

So my question is, is this platform the right one? It certainly seems much preferable to Arduinos and connecting wires. But will the opposing bank respond quickly to the switch being flicked? It’s desirable that the reaction to the switch is near instant.

The second thing is, to what extent is the Spark able to control and power things like LEDs, in a comparable way to the Arduino? I am going to want to switch 24V at 1A. Will I need a relay module or can I use a transistor?

Many thanks for humouring me.

This should work fairly well with Spark.publish and Spark.subscribe: each core publishes its own event while the rest subscribe to the other cores’ events and change their LED status accordingly.
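To make that concrete, here is a minimal sketch of how the event payload could be encoded and decoded. The `bank:switch:state` format and function names are my own invention, not an official Spark convention; on the Core, the encoded string would go out via `Spark.publish("switch-flip", payload)` and be decoded inside a `Spark.subscribe()` handler on the other banks.

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>

// Hypothetical payload format "bank:switch:state", e.g. "2:3:1".
// encodeSwitchEvent() builds the string you would hand to Spark.publish();
// decodeSwitchEvent() is what a Spark.subscribe() handler would call.
void encodeSwitchEvent(char* out, size_t n, int bank, int sw, bool on) {
    snprintf(out, n, "%d:%d:%d", bank, sw, on ? 1 : 0);
}

bool decodeSwitchEvent(const char* payload, int* bank, int* sw, bool* on) {
    int state;
    if (sscanf(payload, "%d:%d:%d", bank, sw, &state) != 3) return false;
    *on = (state != 0);
    return true;
}
```

The receiving handler would then simply `digitalWrite()` the matching LED pin based on the decoded `bank`/`sw`/`on` values.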

24V AC or DC? Either way, you will need a relay to act as a switch to connect/disconnect the load.

The Spark Core is Arduino-compatible but packs in more capabilities/features, so think of it this way: whatever the Arduino can do, the Spark Core can do as well, but much better.


You could also use a high-voltage MOSFET if you need PWM or generally higher switching frequencies than a relay can supply - like the BTS3104SDL, whose logic input accepts 3.3V and which can switch up to 60V at 6A.

Enjoy! :sunflower:

What is the latency of Spark Publish and Spark Subscribe? If it is too long for “near instantaneous” semaphoring of the switch positions maybe a non-Cloud solution would be better?

@phec, you could set up a local cloud server to reduce latency. The Subscribe and Publish functions should be available on the local cloud soon.


“Live” round-trip latency can be found here: depending on how you define “near instantaneous”, 0.072s seems pretty quick.


Generally the latency is very short if you’re in the continental US (under 100ms). We’ll be adding more availability zones which will significantly improve latency worldwide later this year hopefully. :slight_smile:



Goodness me, I’ve just been playing around and latency is NOTHING to worry about for my purposes! It’s as instant as ever I could wish it! So I’m happy there.

Now before I go any further I just wanted to resolve a potential problem further down the line.

I read somewhere, can’t remember where, that there is a limit on the number of Spark.publish and Spark.subscribe calls one can have?

Ultimately I want to have four Spark Cores chatting to each other. Say I want each Spark Core to have six buttons and for each Spark Core to be able to subscribe to and perhaps react to the status of those buttons - will that be possible?



@daneboomer, if you don’t need massive bursts of events published in a very short period of time, I don’t see a problem here.
AFAIK one publish per second should be considered your top rate - more is possible tho’, but should only occur infrequently.

If you are concerned about the number of different events (event names) you can publish, you could work out a suitable coding scheme for your published string, so you could even make do with only one event shared across all your Cores.


There’s currently a limit of 1 publish per second, with bursts of up to 4 allowed for short periods of time. If you set up a local server, you could increase this limit to whatever your server can handle. Depending on how quickly you want to push the buttons on your Cores, you could manage with the current cloud, as long as you see to it that you don’t surpass those limits.
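If you want to enforce those limits on the client side rather than trust your button-pushing fingers, a small token bucket does the trick. This is just a sketch mirroring the numbers quoted above (1 event/s average, bursts of 4); you would call `allowPublish(millis())` as a guard before each `Spark.publish()`.

```cpp
#include <cassert>

// Token-bucket guard for the published rate limits:
// refills at 1 token per second, holds at most 4 (the burst size).
struct PublishLimiter {
    double tokens = 4.0;          // start with a full burst available
    unsigned long lastMs = 0;

    bool allowPublish(unsigned long nowMs) {
        tokens += (nowMs - lastMs) / 1000.0;  // refill at 1 token/s
        if (tokens > 4.0) tokens = 4.0;       // cap at burst size
        lastMs = nowMs;
        if (tokens < 1.0) return false;       // over the limit: skip this one
        tokens -= 1.0;
        return true;
    }
};
```

A dropped publish isn’t lost forever if you also re-send full state periodically, as discussed further down the thread.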


Thanks guys. What I was seeking clarification on with my “limit on numbers” request was how many different things I could “publish” - like PIR 1, PIR 2, Door contact 1, Door contact 2 etc etc. So far I haven’t seen that there is a limit.

In the event (no pun intended) I got some useful info about how frequently one can publish to the cloud, which was interesting. Thank you!

This sounds interesting - is there any Spark documentation on this? THANKS!

You can publish under as many names as you like if you just look out for the rate limit.
I believe there isn’t a tutorial for what @ScruffR said, because it’s more of a personal programming choice. You can design your own syntax for it, as long as you parse it accordingly. An example for your PIR/door contacts could be something like this:
“D1HD2LP1LP2H”, which could be parsed as “Door 1 High, door 2 low, PIR 1 low, PIR 2 high”.
You could perhaps also do this in JSON format, since this is often easier to parse with standard code.
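Here’s one possible parser for that packed scheme, just to show it really is a few lines of code. The `Reading` struct and the two-letter sensor codes (`D` = door, `P` = PIR) are assumptions for the example, not anything Spark-specific.

```cpp
#include <cassert>
#include <cstring>

// Parses strings like "D1HD2LP1LP2H": a sensor letter, a (possibly
// multi-digit) index, then 'H' or 'L'. Returns the number of readings
// parsed, or -1 on a malformed string.
struct Reading { char type; int index; bool high; };

int parsePacked(const char* s, Reading* out, int maxOut) {
    int n = 0;
    while (*s && n < maxOut) {
        Reading r;
        r.type = *s++;                        // 'D' or 'P'
        r.index = 0;
        while (*s >= '0' && *s <= '9')        // accumulate the index digits
            r.index = r.index * 10 + (*s++ - '0');
        if (*s != 'H' && *s != 'L') return -1;  // malformed input
        r.high = (*s++ == 'H');
        out[n++] = r;
    }
    return n;
}
```

The same loop in reverse (a `snprintf` per sensor) builds the string to publish, so a single shared event name can carry the state of every input on a Core.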

Hi rastapasta - any ideas where I can buy the BTS3104SDL in smallish (fewer than 10 would be good) quantities in the UK? I have tried Googling but all I get is the product page. Thanks for your help.

Hi, sorry to revive this old thread, just a bit concerned by something I have seen a spark staff member write.

The cloud doesn’t limit how many subscriptions you can make, or how many events are sent to your devices, but I think the firmware can only register 4 subscription handlers at the moment.

I’m about to start coding my project and I don’t want to find the spark or the cloud can’t handle multiple I/O events.


You probably should have just stayed in one thread–it can make the conversation difficult to follow.

What @Dave was saying is that there are limits in the code inside your Spark Core for things like how many variables you can have and how many event subscriptions you can register in the firmware that runs on the core. There are also time-based limits, like not publishing events at a rate of more than once per second on average, with a burst of four allowed. These limits are in place to keep the firmware fitting into memory and to keep the cloud usable by a lot of users.
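One common way to live within the small handler count is to subscribe once to a shared name prefix and route events inside a single handler. The sketch below shows just the routing logic; the `bank/...` event names are invented for the example, and on the Core you would register the handler once with something like `Spark.subscribe("bank/", onEvent)`.

```cpp
#include <cassert>
#include <cstring>

// Counters stand in for whatever real work each event type would do
// (e.g. setting an LED pin). One handler routes everything by prefix.
int ledEvents = 0, doorEvents = 0, otherEvents = 0;

void onEvent(const char* name, const char* data) {
    if (strncmp(name, "bank/led/", 9) == 0)        ++ledEvents;
    else if (strncmp(name, "bank/door/", 10) == 0) ++doorEvents;
    else                                           ++otherEvents;
}
```

With this pattern, four Cores publishing dozens of distinct event names still only cost you one subscription handler each.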

I have found the Spark cloud to be remarkably scalable, in general, and I am not aware of any limits on the number of end-points you can use for accessing cloud variables, functions or events from the web side.

If you describe what you want to do, someone can almost certainly say whether it will work or not.


I basically want to have about 6 LEDs, 6 switches and 4 other sensors (magnetic door contacts etc) connected to each spark and for the sparks to transmit the state of the switches and sensors to other Sparks via Spark Publish/ Spark Subscribe. I have four sparks. So an LED connected to spark 1 would be mirroring the state of a switch connected to spark 2.

Other than IO port constraints, I think what you want to do should be easy.

If you want it to be reliable, I would plan to publish the state of the sensors periodically (you can also publish changes in state). If you only publish the state changes, that will be very fragile and any internet problems will get you out of sync.

Thanks @bko, how is publishing state different to publishing a state change? I think I know what you mean and how to do it in my loop but it would be helpful if you might clarify. I am worried that the solution I have in mind might be network traffic heavy. You no doubt have a more elegant solution in mind.

It is the difference between just publishing every time a switch is changed (say) and publishing the open/closed status of all switches every (say) 5 seconds.

In theory someone watching the stream of published events could over time understand if each switch was open or closed, but if they miss one message, it is hard to recover.


Imagine you only publish something when the state changes. If you’ve got a window sensor hooked up (like you mentioned) and it changes state, a publish will be triggered. That’s great, and works as expected.
Now let’s say the connection fails for 2 minutes, because of who-knows-what, and the sensor changes state in that period. No publish is triggered, since the connection was gone, and your receiving end won’t respond. That might not be so great if it means your window is now open without you knowing it. And you won’t find out it’s open unless the state changes again.

If you publish the state periodically, this will be corrected as soon as the connection is up again. If the sensor triggers during an outage, it will be picked up the next time a publish is scheduled. So it acts as a sort of backup in case of unforeseen connectivity issues.
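The advice above boils down to one small decision per pass through `loop()`: publish immediately on a change, and also re-publish the full state as a heartbeat. A minimal sketch, assuming a 5-second heartbeat interval (an arbitrary choice for the example):

```cpp
#include <cassert>

// Publish on any state change, but also re-send the full state every
// INTERVAL_MS so a message lost during an outage self-corrects.
const unsigned long INTERVAL_MS = 5000;

bool shouldPublish(unsigned long nowMs, unsigned long lastPublishMs,
                   bool stateChanged) {
    return stateChanged || (nowMs - lastPublishMs >= INTERVAL_MS);
}
```

In the firmware you would call `shouldPublish(millis(), lastPublishMs, changed)` each pass and, when it returns true, `Spark.publish()` the full packed state string and update `lastPublishMs`.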