In the previous article, we demonstrated how to set up a Docker container for a simple program written in C and compiled with GCC. However, as embedded engineers, we typically cross-compile for microcontrollers, so in this article we’ll explore a real-world example using the ARM GNU toolchain and OpenOCD to build and flash our embedded device. For this series of examples, we’ll use the Nucleo-G0B1RE from STMicroelectronics, but you can follow along with any other board.
Build and Flash
We need to modify our Dockerfile to install our favorite ARM GNU compiler, plus OpenOCD to flash our microcontroller. As usual, we do this with pacman, the Arch Linux package manager:
# Fetch a new image from archlinux
FROM archlinux:base
# Install build tools for our STM32G0 microcontroller, and OpenOCD to flash our device
RUN pacman -Sy --noconfirm make openocd \
arm-none-eabi-gcc arm-none-eabi-gdb arm-none-eabi-newlib
# Create and change the working directory to /app
WORKDIR /app
# Create a volume using the app directory
VOLUME /app
Before anything else, we need to ensure that OpenOCD can actually communicate with our board. In addition to the instructions in our Dockerfile, we must configure our container to pass through the USB port; just run it with the flag --device=/dev/bus/usb:/dev/bus/usb.
$ docker build -t testimg .
...
$ docker run -it --rm --device=/dev/bus/usb:/dev/bus/usb testimg
Once the container is running, connect using OpenOCD. You should see a message similar to the one below. Press Ctrl + C to end the connection, then type exit to close the container. If you don't see something like this, OpenOCD is not connecting to your board and something may be wrong; check your board and make sure the USB cable is properly connected.
[root@98dbd276cd10 app]# openocd -f board/st_nucleo_g0.cfg
Open On-Chip Debugger 0.12.0
Licensed under GNU GPL v2
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
srst_only separate srst_nogate srst_open_drain connect_deassert_srst
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : clock speed 2000 kHz
Info : STLINK V2J45M31 (API v2) VID:PID 0483:374B
Info : Target voltage: 3.223037
Info : [stm32g0x.cpu] Cortex-M0+ r0p1 processor detected
Info : [stm32g0x.cpu] target has 4 breakpoints, 2 watchpoints
Info : starting gdb server for stm32g0x.cpu on 3333
Info : Listening on port 3333 for gdb connections
Let’s clone the template project for the STM32G0 micro that we always use at Embedded House, but first remove the main.c and makefile files from the previous examples.
$ git clone https://modularmx-admin@bitbucket.org/modularmx/template-g0.git project
Remember, the template comes with a pre-existing blinking-LED program. We just need to build it using make, but first we should build and run our container, adding the flag -v "$(pwd)":/app to share our current directory, just like we did in the previous part.
$ docker build -t testimg .
...
$ docker run -it --rm -v "$(pwd)":/app --device=/dev/bus/usb:/dev/bus/usb testimg
Once our container is running, we build the project with make, and then we flash our board using the makefile target make flash.
[root@15ada4fd90ed app]# cd project
[root@15ada4fd90ed project]# make
...
arm-none-eabi-objcopy -Oihex Build/temp.elf Build/temp.hex
arm-none-eabi-objdump -S Build/temp.elf > Build/temp.lst
arm-none-eabi-size --format=berkeley Build/temp.elf
text data bss dec hex filename
2188 20 1572 3780 ec4 Build/temp.elf
[root@15ada4fd90ed project]# make flash
...
** Programming Finished **
** Verify Started **
** Verified OK **
** Resetting Target **
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
Info : Unable to match requested speed 2000 kHz, using 1800 kHz
shutdown command invoked
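For reference, a flash target like the one we just invoked typically wraps OpenOCD's program command. Here is a minimal sketch, assuming the board config and artifact names shown above; the actual recipe in the template's makefile may differ:
# Hypothetical makefile rule -- check the template for the real recipe
flash:
	openocd -f board/st_nucleo_g0.cfg -c "program Build/temp.hex verify reset exit"
The program command writes the hex file, verifies it, resets the target, and then exits OpenOCD, which matches the messages printed above.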
There you go! You can edit the program on your local machine, but build and flash it inside the container, without needing any additional tools other than your code editor installed on your host. All the necessary tools, along with their specific versions, are included in the container.
Build and Debug with one container
Using the previous project and image, run the container again, but this time assign it a proper name. It doesn't matter that the container will be removed afterward; you'll still need the name for what’s coming. Then, open a connection using OpenOCD with the makefile target open.
$ docker run -it --rm -v "$(pwd)":/app --device=/dev/bus/usb:/dev/bus/usb --name server testimg
[root@15ada4fd90ed app]# cd project
[root@15ada4fd90ed project]# make open
...
Info : [stm32g0x.cpu] Cortex-M0+ r0p1 processor detected
Info : [stm32g0x.cpu] target has 4 breakpoints, 2 watchpoints
Info : starting gdb server for stm32g0x.cpu on 3333
Info : Listening on port 3333 for gdb connections
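The open target invoked above usually does nothing more than launch OpenOCD with the board configuration and leave it running as a debug server. A minimal sketch, assuming the same board config we used earlier (the template's actual rule may differ):
# Hypothetical makefile rule -- start OpenOCD as a debug server
open:
	openocd -f board/st_nucleo_g0.cfg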
Now our terminal is busy with OpenOCD, waiting for a connection. If we have previous experience with this debug server, we know we need to open a second terminal and use GDB to connect to the server. But how do we open another terminal for our container? It's simple: we attach to the container using the docker exec command, specifying the container name and the shell we want to use, which in our case is /bin/bash.
$ docker exec -it server /bin/bash
[root@92b7b7aae5f0 app]#
Now that we have another terminal (process) running within the same container, we can call the make target debug, which essentially invokes arm-none-eabi-gdb with the following command:
arm-none-eabi-gdb Build/temp.elf -iex "set auto-load safe-path /"
This will connect to OpenOCD, flash the device, and run the program from the main function. From here, you can start debugging your code using the GDB command-line interface.
[root@92b7b7aae5f0 app]# cd project/
[root@92b7b7aae5f0 project]# make debug
...
xPSR: 0xf1000000 pc: 0x08000218 msp: 0x20024000
Breakpoint 1 at 0x8000198: file app/main.c, line 27.
Note: automatically using hardware breakpoints for read-only addresses.
Breakpoint 1, main () at app/main.c:27
27 HAL_Init( );
(gdb)
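Because GDB was started with -iex "set auto-load safe-path /", it automatically loads the project's .gdbinit from the working directory. A plausible sketch of what such a file might contain, reconstructed from the output above (the template's actual file may differ):
# Hypothetical .gdbinit -- connect, flash, and stop at main
target extended-remote localhost:3333   # attach to the OpenOCD gdb server
load                                    # flash the program onto the target
monitor reset halt                      # reset the core and halt it
break main                              # place a breakpoint at main
continue                                # run until the breakpoint is hit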
Build and Debug with separate containers
We could say the previous example is more than enough, but there's more to explore. It's generally recommended that each container run only a single process, and in the previous example we’re running two processes in the same container. So how should we handle this? Time to dive into Docker networks!
Using the previous project and image, run the container again. Once inside the container, you need to find the IP address assigned to it; it's as simple as typing ip addr show. In my case, the address is 172.17.0.2. Also, note that we're adding the flag -p 3333:3333, which tells Docker to expose (or publish) port 3333, the port OpenOCD uses to accept GDB connections.
$ docker run -it --rm -v "$(pwd)":/app --device=/dev/bus/usb:/dev/bus/usb -p3333:3333 testimg
[root@15ada4fd90ed app]# ip addr show
...
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
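As an alternative to running ip addr show inside the container, you can query the address from the host with docker inspect, assuming you started the container with a name (here server is just a hypothetical name):
$ docker inspect --format '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' server
172.17.0.2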
Build the project using make (if needed), then open a connection to the target with OpenOCD. This time, specify the IP address assigned to your container, since we want to accept remote connections from a different machine or container. You can do this with OpenOCD's bindto command. If you prefer to use the make open target, modify the makefile accordingly (see the sketch after the commands below).
[root@15ada4fd90ed app]# cd project
[root@15ada4fd90ed project]# make
...
[root@15ada4fd90ed project]# openocd -f board/st_nucleo_g0.cfg -c "bindto 172.17.0.2"
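If you prefer the makefile route, the modified open target could look something like the sketch below; replace the IP address with whatever your container was assigned (the template's actual rule may differ):
# Hypothetical makefile rule -- bind the gdb server to the container's IP
open:
	openocd -f board/st_nucleo_g0.cfg -c "bindto 172.17.0.2"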
Now that the container is listening on port 3333, the next step is to run arm-none-eabi-gdb. The first thing that might come to mind is to install this tool on your local host machine, but we're not going to do that. Why not? Because that's exactly what we want to avoid. The solution is to create a new container from the same image. Note that we don't need to specify the --device flag, because this new container doesn't need access to any USB ports.
$ docker run -it --rm -v "$(pwd)":/app testimg
[root@42e31fa9a8a0 app]# cd project/
[root@42e31fa9a8a0 project]# arm-none-eabi-gdb Build/temp.elf
...
(gdb) target extended-remote 172.17.0.2:3333
Remote debugging using 172.17.0.2:3333
...
Woo-hoo! The connection has been established, and from here, the sky's the limit. Flash the microcontroller, apply a reset, and start debugging your software with the usual GDB commands.
(gdb) load
Loading section .isr_vector, size 0xbc lma 0x8000000
Loading section .text, size 0x7d0 lma 0x80000bc
Loading section .init_array, size 0x4 lma 0x800088c
Loading section .fini_array, size 0x4 lma 0x8000890
Loading section .data, size 0xc lma 0x8000894
Start address 0x08000218, load size 2208
Transfer rate: 6 KB/sec, 441 bytes/write.
(gdb) mon reset halt
Unable to match requested speed 2000 kHz, using 1800 kHz
Unable to match requested speed 2000 kHz, using 1800 kHz
[stm32g0x.cpu] halted due to debug-request, current mode: Thread
xPSR: 0xf1000000 pc: 0x08000218 msp: 0x20024000
(gdb) list
389 if (wait < HAL_MAX_DELAY)
390 {
391 wait += (uint32_t)(uwTickFreq);
392 }
393
394 while ((HAL_GetTick() - tickstart) < wait)
395 {
Again, you can use the make target debug; just remember to modify the following line in your .gdbinit to specify the container IP address where OpenOCD is running:
#---connect and load program
target extended-remote 172.17.0.2:3333
Connecting to a container using its IP address is not ideal. For one, it’s easy to forget the IP address, and you would have to look it up each time. Instead, we can use container names, which are much more convenient. In our case, the container running OpenOCD will be referenced in both instances, so it’s the only one we need to give a name.
But first, we need to create a custom network to connect both containers. By default, Docker attaches containers to the default "bridge" network, but that network doesn't resolve container names to their IP addresses. The solution is to create a user-defined bridge network, which does.
$ docker network create --driver bridge mynet
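If you want to confirm that the network was created with the bridge driver, list the Docker networks; mynet should appear among the entries:
$ docker network ls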
Let’s run the OpenOCD container again, this time providing the name openocd_server and also specifying the new network, mynet, using the --network flag.
$ docker run -it --rm -v "$(pwd)":/app --device=/dev/bus/usb:/dev/bus/usb -p3333:3333 --network mynet --name openocd_server testimg
Then, inside our container, we run OpenOCD, but this time we use the container name instead of the IP address to bind the connection.
[root@15ada4fd90ed project]# openocd -f board/st_nucleo_g0.cfg -c "bindto openocd_server"
Now, run the second container (without assigning a name, since it's not needed). In the GDB command, use the OpenOCD container's name instead of the previous IP address with the target extended-remote command.
$ docker run -it --rm -v "$(pwd)":/app --network mynet testimg
[root@42e31fa9a8a0 app]# cd project/
[root@42e31fa9a8a0 project]# arm-none-eabi-gdb Build/temp.elf
...
(gdb) target extended-remote openocd_server:3333
Remote debugging using openocd_server:3333
...
The last part demonstrates how we can set up all the necessary tools inside Docker containers and connect multiple containers. I strongly recommend reading more about Docker networking, as it’s a broad and important topic—especially when managing containers across different networks, host machines, and securing connections. But for now, this setup is more than sufficient.
If you run your containers in a custom network (like --network my_custom_network) or the default network, Docker manages the internal IPs and allows communication between containers on that network. Port mapping (-p) is not needed between containers on the same network, because they can communicate directly.
On Linux, the Docker Engine doesn't always require you to expose ports, because containers can communicate via Docker's internal network. Port mapping (-p) is only necessary if you want external (non-Docker) systems or users to be able to access the container's services.
On Docker Desktop for Windows, Linux, and macOS, Docker uses a VM to run Linux containers. As a result, Docker has to manage networking differently, and port mapping (the -p flag) is necessary for external communication between the container and the host. This is because Docker Desktop relies on the VM's network interfaces for bridging, which requires you to map ports explicitly.
To keep things simple, we will always use the -p flag in our examples, no matter whether it's inter-container or host-container communication, but as always, please do some experimentation and draw your own conclusions.