On the Core side, PA13, PA15, and PB4 have the internal pull-up active, and PA14 has the internal pull-down active. On the JTAG side, pin 11 (RTCK), pin 17 (DBGRQ), and pin 19 (DBGACK) have 10k external pull-down resistors. Thank you and best regards!
Is it possible to describe in more detail how I can debug a Spark Core? I use the Eclipse IDE in combination with GNU GCC and a J-Link programmer/debugger. I'm able to compile the source code, but debugging gives problems: the debugger jumps to strange addresses and memory locations outside the specified memory areas.
Reading symbols from r:\sdk\spark\core\core-firmware\build\core-firmware.elf...done.
(gdb) target extended :4242
Remote debugging using :4242
0xa32c2014 in ?? ()
(gdb) kill
Kill the program being debugged? (y or n) y
(gdb) load
Loading section .isr_vector, size 0x10c lma 0x8005000
Load failed
I built successfully on the command line, and the code runs normally otherwise.
If I run the same commands, but without load, and simply run after kill, I get:
(gdb) run
Starting program: r:\sdk\spark\core\core-firmware\build\core-firmware.elf
And gdb hangs; I cannot type until I hit Ctrl-C, and then I get:
0xa32c2014 in ?? ()
It looks like it's in the wrong place and not making any progress, since the address is the same as before?
Sorry if these are daft questions, I've no prior experience with gdb.
Hey @zach and @zachary! A bunch of us would love to get better tools for debugging these Spark Core apps and problems. I spent all weekend trying to get GDB set up with the Spark Core via the command prompt on Windows, and then also via Netbeans. The problem is, I can't reliably get anything useful out of it yet in either case. Please see my rough draft tutorial for command line GDB here and let me know if you see anything I'm doing wrong:
When you get to the point where you type continue, what exactly happens on your system?
Can you describe the steps necessary to set a breakpoint via command line GDB and what the output looks like?
Do you know of anyone who has setup an IDE like Netbeans with working GDB support for the Spark Core yet?
Could part of the problem be this $20 ST-LINK/V2 JTAG adapter as well? I've been reading that a lot of people use the Segger J-Link, but holy crap, it's $1250! http://www.segger.com/jlink-debug-probes.html That's a little bit out of my price range for debugging tools at the moment.
Open source is nice and inexpensive so far, but getting the proper tools to work reliably (on multiple machines) has been less than optimal. We’re almost there though! Just one more little bump in the road and I think we’ll have a fantastic end-to-end development toolchain.
I agree. We need an IDE that works for local JTAG debugging. I've spent tens of hours now trying with Eclipse (very hard to set up, especially on Windows), Netbeans (can't seem to figure out how to integrate GDB even though it's supposed to work), and CooCox (they won't let you customize compiler settings). And as @BDub mentioned, GDB by itself seems to have problems.
Note that I had really good luck with CooCox debugging on an Olimex board (same processor) with ST-Link, so I know this should be possible on the Spark (but not with CooCox, because of the compiler settings issue). I'm pretty sure ST-Link isn't the problem.
Key thing I don’t see mentioned is to set useful breakpoints and watchpoints, then step through one line at a time. There are lots of good tutorials on using gdb out there, and the manual’s pretty helpful. Especially applicable here are the sections Setting Breakpoints and Continuing and Stepping.
If, for instance, you wanted to always watch every command execute in your loop, but didn’t care what happened in between, you could do the following in gdb after you’ve connected to the remote target and loaded the elf file.
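break loop
run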
That would run from the beginning and would not stop until it hit the loop() function. When it hits loop you’ll drop to a prompt, where you might repeatedly type
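n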
which is short for next. Say you wanted to keep watching the value of a variable you have named stuff. You could, in between each next command, type
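p stuff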
Where p is short for print. After you’ve stepped past the end of your loop, you could type
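c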
which is short for continue and the code will stop again the next time loop() is called.
Make sense? I don't have my JTAG rig with me right now, and I haven't gotten debugging set up in an IDE.
Check out Adafruit for an EDU version at $69. This is the full J-Link, but it can't be used for commercial projects. That's OK for most of us here using the Spark Core at home. At that price it's a steal if we can get it to work.
I am still trying to get it talking to Netbeans but it does connect to the core via the GDB Server.
Would you mind posting your openocd config files, or a few lines on how to get this set up? I have an ST-Link v2 and would like to try it with openocd, but there seems to be quite a steep learning curve and lots of docs to read!
I'm able to successfully connect to my Spark Core (using SWD, not JTAG). When I start the GDB server and do:
target remote localhost:3333
monitor reset halt
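(The GDB server here is OpenOCD; with an ST-LINK/V2 it can be started with something like the following, though the interface/target script names vary by OpenOCD version:)

openocd -f interface/stlink-v2.cfg -f target/stm32f1x.cfg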
It just doesn't seem to get into my setup() and loop() routines at all; this is what I see whenever I hit Ctrl-C:
#0 GPIO_Init (GPIOx=0xdf2855ce, GPIO_InitStruct=GPIO_InitStruct@entry=0x20004f44)
#1 0x0800f7ac in LED_Init (Led=Led@entry=LED1) at MCU/STM32F1xx/SPARK_Firmware_Driver/src/hw_config.c:393
#2 0x0800fe8a in Set_System () at MCU/STM32F1xx/SPARK_Firmware_Driver/src/hw_config.c:159
#3 0x0800e716 in HAL_Core_Config () at src/core-v1/core_hal.c:81
#4 0x08008212 in LoopFillZerobss () at ../build/arm/startup/spark_init.S:3
#5 0x08008212 in LoopFillZerobss () at ../build/arm/startup/spark_init.S:3
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
Is this because I use SWD? I can't use JTAG because I need the pins it uses.
Sometimes, after several gdb commands, I get to app_setup_and_loop and eventually to setup().
What does Reset_Handler actually do, and why do I need to issue it twice with Ctrl-C?
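(For context, from what I've read a Cortex-M Reset_Handler mostly just prepares RAM and then calls main. Here is a minimal C sketch, assuming the usual linker-provided symbols; the Core's actual spark_init.S assembly will differ in detail:)

extern unsigned long _sidata, _sdata, _edata, _sbss, _ebss;
extern void SystemInit(void);
extern int main(void);

void Reset_Handler(void)
{
    unsigned long *src = &_sidata;
    unsigned long *dst = &_sdata;

    while (dst < &_edata)          /* copy initialized .data from flash to RAM */
        *dst++ = *src++;
    for (dst = &_sbss; dst < &_ebss; )
        *dst++ = 0;                /* zero .bss; this is the LoopFillZerobss seen in the backtrace above */
    SystemInit();                  /* clock and PLL setup */
    main();                        /* hand control to the firmware */
}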
When in the setup() code, I can run all the lines until I hit a call that waits on the millisecond timer. This seems to hang gdb; the timer seems to not be updated in the function HAL_Timer_Get_Milli_Seconds(void). But again, I can get past this by issuing:
set variable system_1ms_tick = 10
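(As a stopgap, gdb's breakpoint command lists could automate that bump each time the timer function is called; an untested sketch:)

break HAL_Timer_Get_Milli_Seconds
commands
  set variable system_1ms_tick = system_1ms_tick + 10
  continue
end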
Why is the variable system_1ms_tick not being updated? Are interrupts disabled or something when the Spark Core runs under gdb by default?
Essentially, I'm witnessing the same symptoms as when using st-util.