Does SPARK support simulation?

Hi,
It's my first look at the Spark framework and products. They are amazing in terms of cloudifying IoT services. I want to experiment with a large number of IoT devices, like the Spark Photon, being available as Private/Public Cloud provided services. I wonder whether Spark supports such a simulation/emulation-based solution or not?

I had just been thinking about this yesterday. Spark does not have a tool to do this today, but it is possible to some degree. The new hal branch of the firmware offers a way to run the Spark FW as an executable on a PC. I consider that one virtualized Core/Photon. So all it needs is a service around it to launch and manage many of these instances.
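Just to make that concrete, here is the kind of wrapper I'm imagining in node.js/TypeScript. It's only a sketch: the ./core-firmware executable name and the --device-id flag are assumptions on my part, not anything the hal branch actually documents.

```ts
import { spawn, ChildProcess } from "child_process";

const instances: ChildProcess[] = [];

function launchVirtualCore(id: string): ChildProcess {
  // Hypothetical flag: pass a per-instance ID so each process can identify itself
  const proc = spawn("./core-firmware", ["--device-id", id], { stdio: "inherit" });
  proc.on("exit", (code) => console.log(`virtual core ${id} exited (${code})`));
  return proc;
}

// Launch ten virtualized Cores; a real manager would track, restart and tear these down
for (let i = 0; i < 10; i++) {
  instances.push(launchVirtualCore(`virtual-${i}`));
}
```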

I was envisioning a system very similar to AWS EC2, where you can quickly spin up virtualized Core instances through a web interface or a REST API. They won't have real HW attached, clearly, but they can do basic things like pub/sub and be available for function calls. You could even test FW updates by sending them compiled binaries that get saved and run while the initial instance is terminated. Ultimately, you could virtualize sensors too, having them feed in fake (but realistic) data to simulate real activity, but that's probably a long-term feature.

The main limitation, at least as I see it, is that this could only work with a local cloud for now. You wouldn't be able to provision these virtualized Cores for the live cloud without working with Spark and probably paying for them. At the same time, that may not be desirable for the public cloud, as it means there could be an unlimited number of devices instead of being restricted by HW.

The manager would have to create each instance with a unique ID and then provision that ID with the local cloud. I just imagine a simple node.js app that runs and can do all this, providing a REST API and also a simple webpage to do it manually.
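Something like this minimal sketch, assuming Express; the actual provisioning call to the local cloud is left as a stub, since that depends on the local cloud's own tooling, and the 24-hex-character ID is just my assumption to mimic real device IDs.

```ts
import express from "express";
import { randomBytes } from "crypto";

const app = express();
const cores = new Map<string, { startedAt: Date }>();

// Create a new virtualized Core with a unique ID
app.post("/cores", (_req, res) => {
  const id = randomBytes(12).toString("hex"); // 24 hex chars, mimicking a device ID
  // TODO: provision `id` with the local cloud, then spawn the firmware executable
  cores.set(id, { startedAt: new Date() });
  res.status(201).json({ id });
});

// List running virtualized Cores
app.get("/cores", (_req, res) => {
  res.json([...cores.keys()]);
});

// Tear one down
app.delete("/cores/:id", (req, res) => {
  // TODO: terminate the corresponding process and deprovision the ID
  cores.delete(req.params.id);
  res.status(204).end();
});

app.listen(8080, () => console.log("virtual Core manager listening on :8080"));
```

The webpage would then just be a thin front end over those same endpoints.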

Anything I missed? What do people think? Is this something worth doing? I would find it useful for my own Spark-based products so I can load and unit test my back end, which gathers data from online devices. It would be perfect for automated deployment and testing.


This is a valid concern, and I would look to Spark for either a) providing load metrics based on live Cloud loads, or b) creating a load-simulation cloud for private access by companies needing specific load testing.

If a company is using the Public Cloud, then existing metrics will reflect a more realistic load environment, whereas a "private" simulation cloud may not. I am not sure whether Spark offers dedicated Cloud services for larger clients. :grinning:

That definitely is a good point. For any type of real-time application, I would certainly want to know that the loading of the public cloud, plus the loading of my devices, wouldn’t cause issues.

For my case, though, that isn't as important. My real goal is to test my own backend. We are building a system to monitor our devices and collect data. I just need to test that my own backend can handle X,000 devices at once. I am basically trusting that the public cloud is doing the right thing and that the load on it doesn't matter.
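As a rough picture of what I want the simulator to replace, this is the kind of throwaway load script I'd otherwise write. The ingest URL and payload fields are made up for illustration (loosely modeled on what a webhook might forward), and it assumes Node 18+ for the built-in fetch.

```ts
const BACKEND_URL = "http://localhost:3000/ingest"; // hypothetical backend endpoint
const DEVICE_COUNT = 1000;

// Send one fake event, shaped roughly like the data a device event might carry
async function simulateDevice(deviceId: string): Promise<void> {
  const payload = {
    coreid: deviceId,
    event: "temperature",
    data: (20 + Math.random() * 5).toFixed(2),
    published_at: new Date().toISOString(),
  };
  await fetch(BACKEND_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// Fire all the simulated devices at once and wait for the responses
async function run(): Promise<void> {
  const devices = Array.from({ length: DEVICE_COUNT }, (_, i) => `virtual-${i}`);
  await Promise.all(devices.map(simulateDevice));
  console.log(`sent ${DEVICE_COUNT} simulated events`);
}

run().catch(console.error);
```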

I could certainly see instances, though, where loading of the public cloud could affect products using it. My assumption is that the public cloud infrastructure is sized against some quantitative metric and has been built to handle it. In other words, they know the number of devices, they can estimate how much data each one uses and of which type, and then they can build the cloud to handle that much load and scale predictably as more devices come online.


@eely22, good points. Having the ability to create simulated Cores/Photons/Electrons is great for load testing a backend, especially if it does not depend on the Public Cloud. The ability to create different simulated payloads would be a great benefit for that testing. :smiley: