Unable to run `baseline_solution`

I am trying to run the baseline_solution to check that my setup is working properly. I have gone through the requirements and installed Docker, p7zip-full, python3 virtualenv, and the NVIDIA Container Toolkit.

I followed the documentation for the baseline_solution, but when I run bpc test bpc_pose_estimator:example ipd, I get the following:

Step 3/6 : FROM bpc_pose_estimator:example
no more output and success not detected
Failed to build detector image
WARNING unable to detect os for base image 'bpc_pose_estimator:example', maybe the base image does not exist

From Docker Desktop, I can see that I do indeed have the image:

(screenshot from Docker Desktop showing the bpc_pose_estimator:example image)

Perhaps I have done something wrong; could anyone provide advice on how I can troubleshoot this?

Also, I’m running Ubuntu 24.04.

Please let me know if I missed anything; I am more than happy to provide further details.
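In case it helps, this is how I am double-checking from a terminal that the tag is visible to the local daemon (this assumes the docker CLI points at the same daemon that Docker Desktop displays):

# list local tags for the pose estimator image
docker image ls bpc_pose_estimator

# inspect the exact tag that bpc/rocker is looking for
docker image inspect bpc_pose_estimator:example --format '{{.Id}}'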

Could you please double-check what happens if you manually run the components (i.e. follow this: GitHub - opencv/bpc at baseline_solution)?

Can do!

Given below is the output after running docker run --init --rm --net host eclipse/zenoh:1.2.1 --no-multicast-scouting:

#!/bin/ash
cat /entrypoint.sh
echo " * Starting: /$BINARY $*"
exec /$BINARY $*
 * Starting: /zenohd --no-multicast-scouting
2025-02-25T19:36:38.810573Z  INFO main ThreadId(01) zenohd: zenohd v1.2.1 built with rustc 1.75.0 (82e1608df 2023-12-21)
2025-02-25T19:36:38.810837Z  INFO main ThreadId(01) zenohd: Initial conf: {"access_control":{"default_permission":"deny","enabled":false,"policies":null,"rules":null,"subjects":null},"adminspace":{"enabled":true,"permissions":{"read":true,"write":false}},"aggregation":{"publishers":[],"subscribers":[]},"connect":{"endpoints":[],"exit_on_failure":null,"retry":null,"timeout_ms":null},"downsampling":[],"id":"efaa0ffaf8c08e3269073f01655e93da","listen":{"endpoints":{"peer":["tcp/[::]:0"],"router":["tcp/[::]:7447"]},"exit_on_failure":null,"retry":null,"timeout_ms":null},"metadata":null,"mode":"router","open":{"return_conditions":{"connect_scouted":null,"declares":null}},"plugins":{},"plugins_loading":{"enabled":true,"search_dirs":[{"kind":"current_exe_parent","value":null},".","~/.zenoh/lib","/opt/homebrew/lib","/usr/local/lib","/usr/lib"]},"qos":{"publication":[]},"queries_default_timeout":null,"routing":{"interests":{"timeout":null},"peer":{"mode":null},"router":{"peers_failover_brokering":null}},"scouting":{"delay":null,"gossip":{"autoconnect":null,"enabled":null,"multihop":null,"target":null},"multicast":{"address":null,"autoconnect":null,"enabled":false,"interface":null,"listen":null,"ttl":null},"timeout":null},"timestamping":{"drop_future_timestamp":null,"enabled":null},"transport":{"auth":{"pubkey":{"key_size":null,"known_keys_file":null,"private_key_file":null,"private_key_pem":null,"public_key_file":null,"public_key_pem":null},"usrpwd":{"dictionary_file":null,"password":null,"user":null}},"link":{"protocols":null,"rx":{"buffer_size":65535,"max_message_size":1073741824},"tcp":{"so_rcvbuf":null,"so_sndbuf":null},"tls":{"close_link_on_expiration":null,"connect_certificate":null,"connect_private_key":null,"enable_mtls":null,"listen_certificate":null,"listen_private_key":null,"root_ca_certificate":null,"so_rcvbuf":null,"so_sndbuf":null,"verify_name_on_connect":null},"tx":{"batch_size":65535,"keep_alive":4,"lease":10000,"queue":{"allocation":{"mode":"lazy"},"batching":{"enabled":true,"time_limit":1},"congestion_control":{"block":{"wait_before_close":5000000},"drop":{"max_wait_before_drop_fragments":50000,"wait_before_drop":1000}},"size":{"background":2,"control":2,"data":2,"data_high":2,"data_low":2,"interactive_high":2,"interactive_low":2,"real_time":2}},"sequence_number_resolution":"32bit","threads":3},"unixpipe":{"file_access_mask":null}},"multicast":{"compression":{"enabled":false},"join_interval":2500,"max_sessions":1000,"qos":{"enabled":false}},"shared_memory":{"enabled":true,"mode":"lazy"},"unicast":{"accept_pending":100,"accept_timeout":10000,"compression":{"enabled":false},"lowlatency":false,"max_links":1,"max_sessions":1000,"open_timeout":10000,"qos":{"enabled":true}}}}
2025-02-25T19:36:38.811197Z  INFO main ThreadId(01) zenoh::net::runtime: Using ZID: efaa0ffaf8c08e3269073f01655e93da
2025-02-25T19:36:38.812221Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fdc4:f303:9324::3]:7447
2025-02-25T19:36:38.812233Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fdc4:f303:9324::6]:7447
2025-02-25T19:36:38.812237Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fe80::5054:ff:fe12:3456]:7447
2025-02-25T19:36:38.812240Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fe80::44bc:8aff:fe06:6166]:7447
2025-02-25T19:36:38.812244Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fe80::42:74ff:fe07:f260]:7447
2025-02-25T19:36:38.812247Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fe80::ecef:78ff:feba:c610]:7447
2025-02-25T19:36:38.812251Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/192.168.65.9:7447
2025-02-25T19:36:38.812254Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/192.168.65.6:7447
2025-02-25T19:36:38.812258Z  INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/172.18.0.1:7447

And given below is the output after running rocker --nvidia --cuda --network=host bpc_pose_estimator:example:

Active extensions ['cuda', 'network', 'nvidia']
Step 1/6 : FROM golang:1.19 as detector
 ---> 80b76a6c918c
Step 2/6 : RUN git clone -q https://github.com/dekobon/distro-detect.git &&     cd distro-detect &&     git checkout -q 5f5b9c724b9d9a117732d2a4292e6288905734e1 &&     CGO_ENABLED=0 go build .
 ---> Using cache
 ---> f8e07cc1a8a0
Step 3/6 : FROM bpc_pose_estimator:example
no more output and success not detected
Failed to build detector image
WARNING unable to detect os for base image 'bpc_pose_estimator:example', maybe the base image does not exist

Looks like bpc_pose_estimator:example is not built. Did you run the docker buildx command successfully (this one: GitHub - opencv/bpc at baseline_solution)?
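If the tag does not show up in docker image ls, one thing that may be worth trying (just a sketch, since the exact Dockerfile and build context come from the baseline_solution README) is rebuilding with an explicit tag and --load, so buildx actually loads the result into the local image store instead of leaving it in the build cache:

# sketch only: replace . with the build context from the baseline_solution README
docker buildx build -t bpc_pose_estimator:example --load .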

Hi @vaheta ,

Thanks for getting back to me. I was able to run the docker buildx command successfully; however, for some reason the built image could not be found when I ran the bpc test command. I believe it was some kind of NVIDIA Container Toolkit misconfiguration, as I was unable to run any of the examples given here: GitHub - osrf/rocker: A tool to run docker containers with overlays and convenient options for things like GUIs etc.
I’m working on a new machine now and everything seems to be functioning properly. I might reinstall both Docker and the NVIDIA toolkit on my old machine and see if that makes any difference.


Hi @vaheta @rohit_thampy,
I am also facing the same issue, also on Ubuntu 24.04. @rohit_thampy, have you been able to troubleshoot it?

Hi @pranay_vandanapu,

Could you please share which type of Docker you have installed on your system? Is it Docker Engine or Docker Desktop?

I have installed Docker Desktop.

Thanks for the confirmation.

Could you please try running the following commands and share the output?

  1. docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

  2. rocker --nvidia --x11 osrf/ros:crystal-desktop rviz2
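One more thing worth checking, since you are on Docker Desktop: Docker Desktop and Docker Engine run separate daemons, and rocker uses whichever one the docker CLI is currently pointed at. This is only a guess about your setup, but a quick way to see which context is active:

# show the available Docker contexts and which one the CLI is using
docker context ls
docker context show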

Sure, here are the outputs after running the above commands:

  1. docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
docker: Error response from daemon: unknown or invalid runtime name: nvidia

Run 'docker run --help' for more information
  2. rocker --nvidia --x11 osrf/ros:crystal-desktop rviz2
Active extensions ['nvidia', 'x11']
Step 1/6 : FROM golang:1.19 as detector
 ---> 80b76a6c918c
Step 2/6 : RUN git clone -q https://github.com/dekobon/distro-detect.git &&     cd distro-detect &&     git checkout -q 5f5b9c724b9d9a117732d2a4292e6288905734e1 &&     CGO_ENABLED=0 go build .
 ---> Using cache
 ---> 4101282244ba
Step 3/6 : FROM osrf/ros:crystal-desktop
 ---> 58bc7d1f8a15
Step 4/6 : COPY --from=detector /go/distro-detect/distro-detect /tmp/detect_os
 ---> 168cd55773d9
Step 5/6 : ENTRYPOINT [ "/tmp/detect_os", "-format", "json-one-line" ]
 ---> Running in d3aee8b6c39e
 ---> Removed intermediate container d3aee8b6c39e
 ---> 908f8b94c3f1
Step 6/6 : CMD [ "" ]
 ---> Running in eb28aa14fba3
 ---> Removed intermediate container eb28aa14fba3
 ---> ff4a0b67fb62
Successfully built ff4a0b67fb62
Successfully tagged rocker:os_detect_osrf_ros_crystal-desktop
running,  docker run -it --rm ff4a0b67fb62
output:  Unable to find image 'ff4a0b67fb62:latest' locally
docker: Error response from daemon: Head "https://registry-1.docker.io/v2/library/ff4a0b67fb62/manifests/latest": Get: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Run 'docker run --help' for more information

/tmp/detect_os failed:
> Unable to find image 'ff4a0b67fb62:latest' locally
> docker: Error response from daemon: Head "https://registry-1.docker.io/v2/library/ff4a0b67fb62/manifests/latest": Get: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
> 
> Run 'docker run --help' for more information
WARNING unable to detect os for base image 'osrf/ros:crystal-desktop', maybe the base image does not exist

Thanks for sending them through.

Perhaps it’s worth going through the NVIDIA Container Toolkit installation again with the following commands.

  1. curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

  2. sudo apt-get update

  3. sudo apt-get install -y nvidia-container-toolkit

  4. sudo nvidia-ctk runtime configure --runtime=docker

  5. sudo systemctl restart docker

If you are able to run all of these commands without any errors, try running the sample workload again with docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi.
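If the sample workload still complains about an unknown nvidia runtime, it may be worth confirming that the runtime was actually registered with the daemon. Assuming a standard Docker Engine install (Docker Desktop keeps its daemon configuration elsewhere), something like:

# should list an entry for nvidia among the registered runtimes
docker info --format '{{.Runtimes}}'

# nvidia-ctk runtime configure writes the runtime entry into this file
cat /etc/docker/daemon.json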

Let me know how it goes.