Execution profiling?

I’m trying to find out why some curl connections to the webduino “Hello World” server are handled immediately, while others take several seconds to return. (To keep things simple, the cloud is disabled.)

For this kind of problem, an execution profiler would be the tool of choice, so you can see where the CPU is spending its time.

Are there any profiling tools for the STM32?

I’ve seen mention of execution profiling in connection with ETM trace, but I’m not familiar with this. Any pointers appreciated!

EDIT: The Keil ULINKpro advertises instruction tracing at full speed. Does anyone have experience with this?
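In the meantime, the Cortex-M3 in the STM32 has a free-running cycle counter in its Data Watchpoint and Trace (DWT) unit that makes a serviceable poor man’s profiler, with no trace hardware needed. Here’s a minimal sketch — the register addresses come from the ARMv7-M architecture, but passing them in as pointers is my own twist so the same logic can be exercised on a PC against plain variables:

```c
#include <stdint.h>

/* Cortex-M3 registers for cycle counting (addresses on the real part):
 *   DEMCR      0xE000EDFC  bit 24 (TRCENA) powers up the DWT
 *   DWT_CTRL   0xE0001000  bit 0 (CYCCNTENA) starts the counter
 *   DWT_CYCCNT 0xE0001004  free-running CPU cycle counter
 * The registers are taken as pointers so the logic can also be run
 * against ordinary variables on a host PC. */
typedef struct {
    volatile uint32_t *demcr;
    volatile uint32_t *ctrl;
    volatile uint32_t *cyccnt;
} dwt_regs;

void dwt_cyccnt_init(dwt_regs *r) {
    *r->demcr |= (1u << 24);  /* TRCENA: enable DWT/ITM */
    *r->cyccnt = 0;
    *r->ctrl  |= 1u;          /* CYCCNTENA: start counting */
}

/* Cycles elapsed since `start`; unsigned subtraction survives one
 * counter wrap (~59 s at 72 MHz). */
uint32_t dwt_cycles_since(const dwt_regs *r, uint32_t start) {
    return *r->cyccnt - start;
}
```

On the Core you’d fill the struct with the real addresses above, bracket the suspect call with two reads, and divide the difference by 72 to get microseconds at the STM32F103’s 72 MHz clock. Since the counter wraps roughly once a minute, keep measured intervals shorter than that.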


The Keil ULINKpro is $1250 - no problem for professionals, but too much for folks coding in their spare time.

How about Spark buys one or more of these and sets it up connected to a Spark Core wired to a Linux/Windows VM that is remotely accessible? This resource could be booked for use by anyone who needs it.

This would bring excellent debugging capabilities to the hands of many.

Ooh, interesting, thoughts @mohit / @zachary?

That sounds like a wonderful idea. I haven’t used that tool before. Do you think OpenOCD combined with an FT2232H-based JTAG unit would be a cheaper option?
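For what it’s worth, driving a bare FT2232H from OpenOCD only needs a few lines of configuration. The following is a sketch for current OpenOCD; the `layout_init` words are board-specific placeholders that depend on how the adapter’s GPIOs are wired to the JTAG header, and note this gets you GDB-style stop-and-step debugging rather than the instruction trace the ULINKpro offers:

```tcl
# openocd.cfg - generic FT2232H adapter driving the Core's STM32F103
adapter driver ftdi
ftdi vid_pid 0x0403 0x6010        ;# stock FT2232H VID/PID
ftdi layout_init 0x0008 0x000b    ;# placeholder: depends on board wiring
transport select jtag
source [find target/stm32f1x.cfg] ;# target script shipped with OpenOCD
```

Run `openocd -f openocd.cfg` and attach `arm-none-eabi-gdb` to OpenOCD’s default GDB server port, 3333.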

At a cursory glance, I’m not sure that does profiling. My intent was to offer something of value that most hobbyists can’t buy - that device is $27, which is about what we pay for regular JTAG debugging devices (not to mention the cost of the JTAG shield!).

In terms of cost, I imagine the overhead of administering the device, setting up the VM, and running the booking and access system would far exceed the hardware costs.

My intent was to find out if there was something similar on offer at a lower cost : )

At the same time, the idea of giving access to a high-end, remotely accessible debug tool nicely aligns with Spark’s ideas for TDD and continuous integration testing! We should definitely look into it. What say @jgoggins @zachary ?


Great idea!

I need to understand better what that would mean. What would an API call look like? What would the output look like?

Let’s flesh this out more. I love the idea of bringing advanced capabilities to a broader audience!

As I see it, it would be like a regular local development environment that users are given remote access to. The Spark API calls all happen as they do now. What’s different is that the user would be developing via a remote Spark Core using a remote development environment, set up like this:

  • At Spark HQ, a Spark Core connected to the Keil debugger and to a USB port on a machine hosting a desktop OS. The OS would run in a VM for easier manageability, so this could eventually scale to multiple instances of different OSes (each with its own Core) as demand grows.

  • The OS is configured with a Spark development environment: the Spark CLI, NetBeans/Eclipse, the Keil debugger IDE, etc.

  • Users gain access to the development environment via desktop remoting technology such as Remote Desktop on Windows or VNC on Linux.

  • After logging into the remote environment, the user writes code using the local IDE. When ready, it is flashed to the connected Spark Core as normal (e.g. via dfu-util). The user can see output via the serial port, or use the software provided with the Keil debugger to visualize in detail what is happening in the Core.

  • A booking system, access control, etc. would be needed to provision the remote desktop instances.

With this in place, users would have the ability to debug their code on a Spark Core connected to the pricey Keil debugger.

If the user doesn’t need any hardware attached to the Core to run their code, then they can just get started. E.g. my use case was debugging the TCPServer stack to find out why there are sometimes delays of several seconds before a request is handled. This use case would be ideal for this setup, as would all software-only use cases, which are the easiest to get started with.

Should a user need some hardware connected to the Core, there are a couple of choices:

  • communicate the hardware needs to someone on site so that it can be set up. Given the extra effort required of someone on site, I don’t envisage this being the norm, but it could be workable for cases where the bug is significant enough.
  • rework the code to abstract out the external hardware. This is probably the best approach generally since it provides an isolated test case.
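To illustrate the second option, the hardware access can be hidden behind a function-pointer interface, so the same request-handling logic runs on the Core against real GPIO or on a PC against canned values. All names here are hypothetical:

```c
/* Hypothetical handler that formats a response from a sensor reading.
 * It depends only on this interface, not on any Core-specific call. */
typedef struct {
    int (*read_sensor)(void);  /* on the Core: would wrap digitalRead() */
} hw_if;

const char *build_response(const hw_if *hw) {
    return hw->read_sensor() ? "sensor: on" : "sensor: off";
}

/* Host-side stubs standing in for the real pin. */
static int fake_sensor_high(void) { return 1; }
static int fake_sensor_low(void)  { return 0; }
```

With the stubs substituted, the logic becomes an isolated test case that can be debugged in the remote environment with nothing wired to the Core.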

Finally, as a fallback for when debugging remotely isn’t possible for some reason, the debugging device could be shipped and loaned out to trusted developers. This might in fact be the initial mode of operation, since we’ll obtain the Keil debugger before the rest of the infrastructure is in place.

What a fancy idea @mdma ! Rad. I’m way into the idea of providing advanced capabilities like this to users, but I’m not sure enough people would want to jump through these hoops to leverage this capability. Also, I’m a bit spooked by how complex it would be to set up the required infrastructure securely and maintain it.

It sounds like @zachary was thinking that this capability could be delivered to users via a REST API of some kind, which could potentially be provided to the masses cost-effectively. However, based on the little I know of this Keil debugger tool, that might bury its real value: its Streaming Trace UI and its Real-Time Trace UI.

In any case, a first step might be for Spark HQ to purchase a Keil debugger and have one of our embedded devs evaluate how a tool like this could be incorporated into our development, testing, and debugging workflow. If we find that a tool like this is off-the-charts awesome, we could investigate how we might package it up for usage by the broader Spark community, via something like what @mdma described or an API of some kind.

Thanks @mdma—super fun ideas! And @jgoggins, thanks, I just hadn’t thought through how the tool might be used.

Would anybody out in the community be interested in using a setup like this?
