Spark-core debugging via JTAG/SWD

I would be interested to know whether it is possible to debug the Spark Core over a JTAG/SWD connection, using an ST-Link with my PC (running Keil, Atollic, etc.) as the host.

Here is the typical wiring between the Spark Core and a standard 20-pin JTAG connector for an STM32 device:

Spark-Core pin … JTAG pin
JTMS/SWDIO (PA13) ------------------ (7) TMS
JTCK/SWCLK (PA14) ------------------ (9) TCK
JTDI (PA15) ------------------------ (5) TDI
JTDO (PB3) ------------------------- (13) TDO
nJTRST (PB4) ------------------------ (3) nTRST
NRST -------------------------------- (15) nSRST

On the Core side, PA13, PA15, and PB4 have their internal pull-ups active, and PA14 has its internal pull-down active.
On the JTAG side, pins 11 (RTCK), 17 (DBGRQ), and 19 (DBGACK) have 10k external pull-down resistors. Thank you and best regards!

Definitely. In fact, that’s basically what the Programmer Shield is doing:

If you follow the schematics of a programmer shield (or just purchase one from us), you’ll be able to debug with JTAG.

I recommend buying the Programmer Shield instead of making one :slight_smile:

I have made one already, and I enjoy making things! See here:

Is it possible to describe in more detail how I can debug a Spark Core? I use the Eclipse IDE in combination with GNU GCC and a J-Link programmer/debugger. I'm able to compile the source code, but debugging gives problems: the debugger jumps to strange addresses and memory locations outside the specified memory areas.

I can tell you how we do it on our systems, and hopefully it’ll be relatively straightforward to translate to your debug environment.

We use the stlink command line interface:

Once I’ve got the Core hooked up through the ST-Link/v2 programmer and in bootloader mode (flashing yellow), I would open one terminal window, and then type st-util -p 9025, which opens a gdb port.

Next, I would open another terminal window, browse to my build directory, and run arm-none-eabi-gdb core-firmware.elf. Then within the gdb console:

target extended :9025
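Putting those steps together, a rough sketch of a full session (the port number and ELF name come from the steps above; the `load`/`continue` steps are the usual follow-on gdb commands and are an assumption about your setup):

```
# Terminal 1: start the GDB server (Core attached via ST-Link/v2)
st-util -p 9025

# Terminal 2: from the build directory, start GDB on the firmware ELF
arm-none-eabi-gdb core-firmware.elf
# then, at the (gdb) prompt:
#   target extended :9025    # attach to the st-util GDB server
#   load                     # flash core-firmware.elf to the Core
#   continue                 # run the firmware under the debugger
```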

When I try to debug, I get "load failed".

Reading symbols from r:\sdk\spark\core\core-firmware\build\core-firmware.elf...done.
(gdb) target extended :4242
Remote debugging using :4242
0xa32c2014 in ?? ()
(gdb) kill
Kill the program being debugged? (y or n) y
(gdb) load
Loading section .isr_vector, size 0x10c lma 0x8005000
Load failed

I built successfully on the command line, and the code runs normally otherwise.

If I run the same commands, but without load, and simply run after kill, I get:

(gdb) run
Starting program: r:\sdk\spark\core\core-firmware\build\core-firmware.elf

And gdb hangs; I cannot type until I hit Ctrl-C, and then I get:

0xa32c2014 in ?? ()

It looks like it's in the wrong place and not making any progress, since the address is the same as before?

Sorry if these are daft questions, I've no prior experience with gdb.

Did you comment out #define SWD_JTAG_DISABLE in main.h? Otherwise the JTAG pins are used for other purposes and the JTAG interface won’t work.

Thanks for the response. Yes, I did add the USE_SWD_JTAG = y symbol in the makefile, which then defines USE_SWD_JTAG for the preprocessor.

I can try doing it directly in the main.h file just to be sure. Will post back if any change.
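For reference, a sketch of how that build flag is typically passed (the variable name USE_SWD_JTAG comes from the post above; invoking it on the make command line rather than editing the makefile is an assumption about the core-firmware build):

```
# from the core-firmware build directory; USE_SWD_JTAG=y should end up
# defining USE_SWD_JTAG for the preprocessor, keeping the JTAG/SWD pins
# reserved for the debugger
make clean
make USE_SWD_JTAG=y
```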

@JvD - did you make any progress getting debugging to work over JTAG? I think I’ve hit the same problem as you - jumping into out-of-bounds addresses.

Hey @zach and @zachary ! A bunch of us would love to have better tools for debugging these Spark Core apps and problems. I spent all weekend trying to get GDB set up with the Spark Core via the command prompt on Windows, and then also via Netbeans. The problem is, I can't reliably get anything useful out of it yet in either case. Please see my rough draft tutorial for command-line GDB here and let me know if you see anything I'm doing wrong:

  • When you get to the point where you type continue, what exactly happens on your system?

  • Can you describe the steps necessary to set a breakpoint via command line GDB and what the output looks like?

  • Do you know of anyone who has set up an IDE like Netbeans with working GDB support for the Spark Core yet?

Could part of the problem be this $20 ST-LINK/V2 JTAG adapter as well? I've been reading that a lot of people use the Segger J-Link, but holy crap, it's $1250! That's a bit out of my price range for debugging tools at the moment.

Open source is nice and inexpensive so far, but getting the proper tools to work reliably (on multiple machines) has been less than optimal. We're almost there though! Just one more little bump in the road and I think we'll have a fantastic end-to-end development toolchain.

I agree. We need an IDE that works for local JTAG debugging. I’ve spent tens of hours now trying with Eclipse (very hard to set up, especially on Windows), Netbeans (can’t seem to figure out how to integrate GDB even though it’s supposed to work), and CooCox (they won’t let you customize compiler settings). And as @BDub mentioned, GDB by itself seems to have problems.

Note that I had really good luck with CooCox debugging on an Olimex board (same processor) with ST-Link, so I know this should be possible on the Spark (but not with CooCox, because of the compiler settings issue). I’m pretty sure ST-Link isn’t the problem.


Key thing I don’t see mentioned is to set useful breakpoints and watchpoints, then step through one line at a time. There are lots of good tutorials on using gdb out there, and the manual’s pretty helpful. Especially applicable here are the sections Setting Breakpoints and Continuing and Stepping.

If, for instance, you wanted to always watch every command execute in your loop, but didn’t care what happened in between, you could do the following in gdb after you’ve connected to the remote target and loaded the elf file.

break loop

That sets the breakpoint; then run, and execution will not stop until it hits the loop() function. When it hits loop you’ll drop to a prompt, where you might repeatedly type

n

which is short for next. Say you wanted to keep watching the value of a variable you have named stuff. You could, in between each next command, type

p stuff

Where p is short for print. After you’ve stepped past the end of your loop, you could type

c

which is short for continue, and the code will stop again the next time loop() is called.
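Putting the whole sequence together, a sketch of what the session looks like (stuff is the example variable name from above; the port assumes the st-util setup described earlier in the thread, and the breakpoint location output is abbreviated):

```
(gdb) target extended :9025   # attach to the st-util GDB server
(gdb) load                    # flash core-firmware.elf
(gdb) break loop              # stop at the top of loop()
(gdb) run
Breakpoint 1, loop () at ...
(gdb) n                       # step one line
(gdb) p stuff                 # print the watched variable
(gdb) c                       # run until loop() is hit again
```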

Make sense? I don’t have my JTAG rig with me right now, and I haven’t gotten debugging set up in an IDE.

Keep throwing questions my way!


Check out Adafruit for an EDU version at $69. This is the full J-Link, but it can't be used for commercial projects, which is OK for most of us here using the Spark Core at home. At that price it's a steal if we can get it to work.

I am still trying to get it talking to Netbeans but it does connect to the core via the GDB Server.


I’ve found openocd to be quite stable with the st-link v2 and gdb.

Plus, openocd works flawlessly(*) for me too.

(*) Once I had bludgeoned openocd into submission. It takes a couple of config files (one for the programmer, one for the target) before it is useful, but once you’re past that it just seems to work.

Would you mind posting your openocd config files, or a few lines on how to get this setup? I have a stlink v2 and would like to try it with openocd, but seems to be quite a steep learning curve and lots of docs to read! :smile:

I'm trying to set up debugging with OpenOCD. This is my spark_core.cfg file:

source [find interface/stlink-v2.cfg]
source [find target/stm32f1x_stlink.cfg]
reset_config srst_only srst_nogate
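With that saved as spark_core.cfg, a sketch of how the pieces are launched (OpenOCD opens its GDB server on port 3333 by default, which matches the target remote command below; the file paths are assumptions):

```
# Terminal 1: start OpenOCD with the config above
openocd -f spark_core.cfg

# Terminal 2: attach gdb to OpenOCD's GDB server
arm-none-eabi-gdb core-firmware.elf
# (gdb) target remote localhost:3333
```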

I'm able to successfully connect to my Spark Core (using SWD, not JTAG). When I start the gdb server and do:

target remote localhost:3333
monitor reset halt
jump Reset_Handler
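(As a side note, those three commands can be collected into a gdb script so each session starts identically; a sketch, assuming the hypothetical file name debug.gdb:)

```
# debug.gdb — run with: arm-none-eabi-gdb -x debug.gdb core-firmware.elf
target remote localhost:3333   # attach to OpenOCD's GDB server
monitor reset halt             # reset the target and halt at the first instruction
jump Reset_Handler             # restart execution from the reset vector
```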

It just doesn't seem to get into my setup() and loop() routines at all; this is what I see whenever I press Ctrl-C:

(gdb) where
#0  GPIO_Init (GPIOx=0xdf2855ce, GPIO_InitStruct=GPIO_InitStruct@entry=0x20004f44)
    at MCU/STM32F1xx/STM32_StdPeriph_Driver/src/stm32f10x_gpio.c:232
#1  0x0800f7ac in LED_Init (Led=Led@entry=LED1) at MCU/STM32F1xx/SPARK_Firmware_Driver/src/hw_config.c:393
#2  0x0800fe8a in Set_System () at MCU/STM32F1xx/SPARK_Firmware_Driver/src/hw_config.c:159
#3  0x0800e716 in HAL_Core_Config () at src/core-v1/core_hal.c:81
#4  0x08008212 in LoopFillZerobss () at ../build/arm/startup/spark_init.S:3
#5  0x08008212 in LoopFillZerobss () at ../build/arm/startup/spark_init.S:3
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

Is this because I use SWD? I can't use JTAG because I need the ports it uses.

Sometimes, after issuing the following gdb command several times:

jump Reset_Handler

I get to app_setup_and_loop and eventually to setup.

  • What does Reset_Handler actually do, and why do I need to issue it twice, with Ctrl-C in between?

When in the setup code, I can run all the lines until I hit:


This seems to hang gdb; the timer does not appear to be updated in the function HAL_Timer_Get_Milli_Seconds(void). But again, I can get past this by issuing:

set variable system_1ms_tick = 10

  • Why is the variable system_1ms_tick not being updated? Are interrupts disabled or something when the Spark Core runs under gdb by default?

Essentially, I'm witnessing the same symptoms as when using st-util:

I’ve been able to get the J-Link and Blackmagic probes running with the Spark Core on a Mac using an open toolchain and GDB - here are my notes:

Haven’t yet set up the Eclipse IDE, but I imagine it should work fine using GDB as the debugger.

@fsoo, minor typo in your post. Should be Broadcom :wink:

I have the Segger J-Link LITE. It only supports SWD debugging.
Here is the pinout:

Will this work with the Spark Core?

Comparing with the JTAG pinout earlier in this thread, I’m missing nTRST and nSRST.