UM-Driverless/driverless

Autonomous System made by Formula Student Driverless team UMotorsport

Setup

Here you have a tutorial in Spanish about the installation process.

First of all, clone this repo:

git clone https://github.com/UM-Driverless/driverless.git ~/driverless

We will use pyenv to install Python without permission problems, create a Python virtual environment called .venv in the root of the project, and then do a pip editable install based on our setup.py, which installs all the requirements and lets code changes be reflected immediately. This also manages the paths correctly without having to explicitly add them to the PYTHONPATH.

cd ~
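# 1. Install the build dependencies needed to compile Python with pyenv: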
sudo apt-get update
sudo apt-get install -y \
  make build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev wget curl llvm \
  libncurses5-dev xz-utils tk-dev libxml2-dev \
  libxmlsec1-dev libffi-dev liblzma-dev

# 2. Install pyenv (if not already installed):
git clone https://github.com/pyenv/pyenv.git ~/.pyenv

# 3. Set up pyenv in your shell (add these lines to your ~/.bashrc and reload):
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(pyenv init -)"' >> ~/.bashrc
source ~/.bashrc

# 4. Install the desired Python version (you can have multiple versions installed)
pyenv install 3.12.3

# 5. In your project root (e.g. ~/driverless), set the local Python version:
cd ~/driverless # ADJUST IF DIFFERENT
pyenv local 3.12.3
# This creates a .python-version file specifying Python 3.12.3 for this project.

# 6. Create a new isolated virtual environment using the local Python:
#    This ensures you're using the Python from your project rather than the pyenv shim.
python -m venv .venv
# (If that fails, try: ~/.pyenv/versions/3.12.3/bin/python -m venv .venv)

# 7. Activate your virtual environment:
source .venv/bin/activate

# 8. Verify that the virtual environment is active and using the local Python:
which python
# Expected output: ~/driverless/.venv/bin/python

# 9. Upgrade pip inside the venv:
python -m ensurepip --upgrade
python -m pip install --upgrade pip
which pip
# Expected output: ~/driverless/.venv/bin/pip

# 10. Install your project in editable mode:
pip install -e .
# If that fails, try:
# python -m pip install --isolated --force-reinstall -e .
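Step 10's editable install relies on the repo's setup.py. For reference, here is a minimal sketch of what such a file typically looks like; the actual package name, layout and requirements in this repo may differ:

# Hypothetical sketch of a minimal setup.py for an editable install.
# The real package name, src/ layout and dependency list in this repo may differ.
from setuptools import setup, find_packages

setup(
    name="driverless",                    # assumed project name
    version="0.1.0",
    package_dir={"": "src"},              # assumes the src/driverless layout used below
    packages=find_packages(where="src"),
    install_requires=[
        # assumed dependencies; the authoritative list lives in the repo's setup.py
        "numpy",
        "opencv-python",
        "pyyaml",
    ],
)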

For NVIDIA acceleration (CUDA on Ubuntu 24.04)

  • To check CUDA availability:
    • Check whether PyTorch sees CUDA:
      import torch
      print(torch.__version__)
      print("CUDA available:", torch.cuda.is_available())
      print("Device:", torch.device("cuda" if torch.cuda.is_available() else "cpu"))
    • Check that the driver works with nvidia-smi (if it is broken you won't get a response)
    • Check that the project uses a CUDA-enabled PyTorch with pip show torch
      • +cpu is CPU-only, +cuXXX is CUDA-enabled
  • Install NVIDIA drivers compatible with CUDA 12.6 for Ubuntu 24.04 (or adjust for your system)
    • The 550 driver is recommended: it targets CUDA 12.4 but also works with 12.6, and is compatible with both older and newer hardware.
      # 1. Purge any old NVIDIA packages
      sudo apt-get purge -y 'nvidia-*' 
      
      # 2. Add the official graphics-drivers PPA (if you haven’t)
      sudo add-apt-repository ppa:graphics-drivers/ppa
      sudo apt-get update
      
      # 3. Install the 550 series driver to match NVML 550.144
      sudo apt-get install -y nvidia-driver-550
      
      # 4. Reboot so the new kernel module loads
      sudo reboot
      Then check with nvidia-smi that the driver is installed correctly.
    • To remove your current ones (so you can install the new ones):
      sudo apt-get remove --purge '^nvidia-.*' 'libnvidia-.*' nvidia-kernel-common-* -y
      sudo apt-get autoremove -y
  • Run the install commands from the NVIDIA site to get the CUDA toolkit (headers and compiler).
  • Install a CUDA-enabled PyTorch build, following https://pytorch.org/get-started/locally/ :
    pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126

If you still have problems, try this:

  • Install the NVIDIA driver utils (the 550 driver targets CUDA 12.4 and is compatible with both older and newer hardware)
    sudo apt-get update
    sudo apt-get install -y nvidia-utils-550
    sudo ldconfig
  • Install general drivers: ubuntu-drivers autoinstall
  • Install CUDA with: apt install nvidia-cuda-toolkit
  • (skipping steps like GDS and Mellanox here)
  • Install GCC to compile C++ code; it is also listed as a requirement on the NVIDIA website.
  • Install cuDNN (a requirement for TensorFlow):
  • wget https://developer.download.nvidia.com/compute/cudnn/9.1.1/local_installers/cudnn-local-repo-ubuntu2204-9.1.1_1.0-1_amd64.deb
  • sudo dpkg -i cudnn-local-repo-ubuntu2204-9.1.1_1.0-1_amd64.deb
  • sudo cp /var/cudnn-local-repo-ubuntu2204-9.1.1/cudnn-*-keyring.gpg /usr/share/keyrings/
  • sudo apt-get update
  • sudo apt-get -y install cudnn-cuda-12

Then install the necessary packages that use CUDA, with a matching version (12.6 should be backwards compatible down to 12.0):

The YOLOv5 model is a bit outdated, so it needs to run with PyTorch 2.5.1, which you can install with the command below. That build targets CUDA 12.1, but CUDA 12.6 is backwards compatible, so newer toolkits should work as well; you just lose the features added after CUDA 12.1:

pip install torch==2.5.1+cu121 torchvision==0.20.1+cu121 torchaudio==2.5.1+cu121 --extra-index-url https://download.pytorch.org/whl/cu121
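To confirm that the wheel you just installed is actually the CUDA build (rather than a +cpu one), a quick check like the following can help; it only uses standard PyTorch attributes and is not part of the project code:

# Quick check that the installed PyTorch is a CUDA build and which toolkit it targets.
import torch

print("torch version:", torch.__version__)      # a CUDA wheel ends in +cuXXX, a CPU wheel in +cpu
print("built for CUDA:", torch.version.cuda)    # e.g. "12.1"; None on CPU-only builds
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))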

Install the simulator

Go to https://github.com/FS-Driverless/Formula-Student-Driverless-Simulator/releases and download the latest version. This is an executable that runs the simulator and can be stored and run from anywhere. To connect it to the Python code, clone that repo into the same folder as Deteccion_conos. Python examples are available in that repository.

Test program
# This code adds the fsds package to the Python path.
# It assumes the fsds repo is cloned in the home directory.
# Replace fsds_lib_path with the path to wherever the python directory is located.
import sys, os
# fsds_lib_path = os.path.join(os.path.expanduser("~"), "Formula-Student-Driverless-Simulator", "python")
fsds_lib_path = os.path.join(os.getcwd(), "python")
print('FOLDER:', fsds_lib_path)
sys.path.insert(0, fsds_lib_path)

import time

import fsds

# connect to the AirSim simulator 
client = fsds.FSDSClient()

# Check network connection
client.confirmConnection()

# After enabling API control, only the API can control the car.
# Direct keyboard and joystick input to the simulator is disabled.
# If you want to still be able to drive with the keyboard while also
# controlling the car through the API, call client.enableApiControl(False)
client.enableApiControl(True)

# Instruct the car to go full-speed forward
car_controls = fsds.CarControls()
car_controls.throttle = 1
client.setCarControls(car_controls)

time.sleep(5)

# Places the vehicle back at its original position
client.reset()

To use it, first run the fsds-... executable, click "Run simulation", and then run the Python code.

Use the code

Check ~/driverless/src/driverless/config.yaml for the configuration of the simulator. The easiest setup is with CAMERA_MODE: image and COMM_MODE: off. To take the image from the simulator use CAMERA_MODE: sim. To also control the simulator, use COMM_MODE: sim.
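As a quick sanity check, the two options mentioned above can be read back with a few lines of Python (a hypothetical helper, not part of the project; only the CAMERA_MODE and COMM_MODE key names are taken from this section):

# Hypothetical helper: print the simulator-related options from config.yaml.
import os
import yaml  # installed via the editable install or: pip install pyyaml

config_path = os.path.expanduser("~/driverless/src/driverless/config.yaml")
with open(config_path) as f:
    cfg = yaml.safe_load(f)

print("CAMERA_MODE:", cfg.get("CAMERA_MODE"))  # e.g. image or sim
print("COMM_MODE:", cfg.get("COMM_MODE"))      # e.g. off or sim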

The main script that should be run is ~/driverless/src/driverless/main.py

To-Do

  • use default logger python library
  • Use sampling profiler?
  • the agent should have all the conditionals and control the vehicle when in a mission. Should have the while True loop?
  • rename github project from Deteccion_conos to um_driverless
  • check delays between simulator and processed image, response time
  • knowing the pickling error, try to visualize to a thread
  • TODO with open to camera and threads, simulator control? So it can close when stopped.
  • SEND CAN HEARTBEAT
  • MAKE ZED WORK AGAIN
  • RESTORE GENERIC AGENT CLASS FOR NO SPECIFIC TEST. THEN THE TESTS INHERIT FROM IT. COMMENTED.
  • PUT GLOBAL VARS AS ATTRIBUTE OF CAR OBJECT?
  • Initialize trackbars of ConeProcessing. Why?
  • Only import used libraries from activations with global config constants
  • SET SPEED ACCORDING TO CAN PROTOCOL, and the rest of state variables (SEN BOARD)
  • check edgeimpulse
  • Print number of cones detected per color
  • Xavier why network takes 3s to execute. How to make it use GPU?
  • Make net faster. Remove cone types that we don't use? Reduce resolution of yolov5?
  • Move threads to different files to make main.py shorter
  • Check NVPMODEL with high power during xavier installation
  • find todos and fix them

We won't use Conda since it's not necessary, and having several Python versions has caused problems. Also, Conda can't install all the packages we need, so some packages would be installed with pip and others with Conda. It also caused problems with Docker.

  • First apt installs

    sudo apt update && sudo apt upgrade -y #; spd-say "I finished the update"
    sudo apt install curl nano git pip python3 zstd #zstd is zed dependency
    pip install --upgrade pip; #spd-say "Finished the installs"
  • Clone the GitHub directory:

    git clone https://github.com/UM-Driverless/Deteccion_conos.git
  • Install the requirements (for yolo network and for our scripts)

    cd ~/Deteccion_conos
    pip install -r {requirements_file_name}.txt #yolo_requirements.txt requirements.txt
  • [OPTIONAL] If you want to modify the weights, include the weights folder in: "yolov5/weights/yolov5_models"

  • ZED Camera Installation.

    1. Download the SDK according to the desired CUDA version and system (Ubuntu, NVIDIA Jetson Xavier JetPack, ...). If the installer doesn't find the matching CUDA version, it will install it; once detected, the installation continues.
    2. Grant permissions:
      sudo chmod 777 {FILENAME}
    3. Run it without sudo (you can copy the filename and paste it with Ctrl+Shift+V into the terminal; tab completion doesn't seem to complete the filename):
      sh {FILENAME}.run
    4. Accept the defaults to install CUDA, the static version of the SDK, the AI module, the samples and the Python API. Diagnostics are not required.
    5. It should now be installed in the default installation path: /usr/local/zed
    6. To get the Python API (otherwise pyzed won't be installed and will throw an error), run the following; a minimal pyzed check is sketched after this list:
      python3 /usr/local/zed/get_python_api.py
  • To make sure you are using the GPU (you should get IS CUDA AVAILABLE? : True)

    • Check which GPU driver you should install: https://www.nvidia.co.uk/Download/index.aspx?lang=en-uk
    • Check which GPU driver you have in Software & Updates (e.g. X.Org vs nvidia-driver-515).
    • If there are errors, reinstall the driver from scratch:
      sudo apt-get remove --purge nvidia-* -y
      sudo apt autoremove
      sudo ubuntu-drivers autoinstall
      sudo service lightdm restart
      sudo apt install nvidia-driver-525 nvidia-dkms-525
      sudo reboot
  • To check all installed CUDA versions: dpkg -l | grep -i cuda

  • You can check the CUDA version compatible with the graphics driver using nvidia-smi or the built-in app on Xavier or Orin modules.

  • To check the cuda version of the installed compiler, use /usr/local/cuda/bin/nvcc --version

  • Now pytorch should use the same CUDA version as the ZED camera. Check this: https://www.stereolabs.com/docs/pytorch/

  • You should be able to run:

    python3 main.py
  • To explore if something fails:
    • sudo apt-get install python3-tk
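If the ZED SDK and its Python API installed correctly, a minimal open-camera check along these lines should work (a sketch using the standard pyzed API with default parameters, not project code):

# Minimal check that pyzed is installed and a ZED camera can be opened.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()            # default resolution / depth settings
status = zed.open(init_params)
if status != sl.ERROR_CODE.SUCCESS:
    print("Failed to open ZED camera:", status)
else:
    print("ZED serial number:", zed.get_camera_information().serial_number)
    zed.close()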

Notes

  • To use CAN comms with the NVIDIA Jetson Orin, the CAN bus has to be working properly and connected when the Orin turns on. There has to be at least one other device on the bus to acknowledge messages.
  • For CAN to work, first run setup_can0.sh (see the heartbeat sketch below).
    • To run it on startup, add it to /etc/profile.d/
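The To-Do list mentions sending a CAN heartbeat. A minimal sketch with python-can is shown below; the arbitration ID, payload and period are placeholders, not the car's actual CAN protocol, and it assumes setup_can0.sh has already brought can0 up:

# Hypothetical heartbeat over can0 using python-can (pip install python-can).
# The ID, payload and period below are placeholders, not the real CAN protocol.
import time
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")
heartbeat = can.Message(arbitration_id=0x100, data=[0x01], is_extended_id=False)

try:
    while True:
        bus.send(heartbeat)
        time.sleep(0.1)  # placeholder 10 Hz period
except can.CanError as err:
    # Without another node on the bus to acknowledge frames, sends will eventually fail.
    print("CAN send failed:", err)
finally:
    bus.shutdown()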

NVIDIA JETSON XAVIER NX SETUP

TODO Testing with Jetpack 5.1

KVASER Setup in Ubuntu

tar -xvzf linuxcan.tar.gz
sudo apt-get install build-essential
sudo apt-get install linux-headers-`uname -r`

In linuxcan, and linuxcan/canlib, run:

make
sudo make install

In linuxcan/common, run:

make
sudo ./installscript.sh

To have the python API:

pip3 install canlib
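
Once canlib is installed, a quick channel listing from Python (roughly equivalent to the ./listChannels example mentioned below) might look like this sketch:

# List the Kvaser CAN channels visible to canlib (similar to ./listChannels).
from canlib import canlib

num_channels = canlib.getNumberOfChannels()
print(f"Found {num_channels} channel(s)")
for ch in range(num_channels):
    chd = canlib.ChannelData(ch)
    print(ch, chd.channel_name, chd.card_upc_no, chd.card_serial_no)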

To DEBUG:

make KV_Debug_ON=1


Client for cone detection in the simulator

This client works together with the simulator developed at https://github.com/AlbaranezJavier/UnityTrainerPy. To get it running, just follow the instructions in that repository to start the simulator and then run the client found in the file /PyUMotorsport/main_cone_detection.py

The neural network weights for main.py are available at the following link: https://drive.google.com/file/d/1H-KOYKMu6KM3g8ENCnYPSPTvb6zVnnFX/view?usp=sharing The archive must be extracted into the folder: /PyUMotorsport/cone_detection/saved_models/

The neural network weights for main_2.py are available at the following link: https://drive.google.com/file/d/1NFDBKxpRcfPs8PV3oftLya_M9GxW8O5h/view?usp=sharing The archive must be extracted into the folder: /PyUMotorsport_v2/ObjectDetectionSegmentation/DetectionData/

To test

Go to canlib/examples

./listChannels
./canmonitor 0

To install any driver (canlib and kvcommon must be installed first):

make
sudo ./installscript.sh

Old stuff

Create your Python 3.8 virtual environment and activate it

conda create -n formula python=3.8
conda activate formula
#conda install tensorflow-gpu

Next we are going to install the TensorFlow detection Model Zoo

If you don't have the models/research/ folder yet

git clone --depth 1 https://github.com/tensorflow/models

Once you have the models/research/ folder

cd models/research/
protoc object_detection/protos/*.proto --python_out=.
cp object_detection/packages/tf2/setup.py .
python -m pip install .
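
A quick way to verify that the Object Detection API installed correctly is to import it (a simple check, not project code):

# Quick check that the TF2 Object Detection API is importable.
import tensorflow as tf
from object_detection.utils import config_util, label_map_util

print("TensorFlow:", tf.__version__)
print("Object Detection API imported OK")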

Update the Xavier to run YOLOv5 (06/2022)

git clone https://github.com/UM-Driverless/Deteccion_conos.git
cd Deteccion_conos
pip3 install -r yolov5/yolo_requeriments.txt
sh can_scripts/enable_CAN.sh
python3 car_actuator_testing_zed_conect_yolo.py
  • Try to use a preconfigured JetPack 5.0.2 PyTorch Docker container, with all the dependencies and versions already resolved: https://blog.roboflow.com/deploy-yolov5-to-jetson-nx/
    • Register in docker website
    • Login. If it doesn't work, reboot and try again.
      docker login
    • Take the tag of a container from here: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch . For example, for JetPack 5.0.2 (L4T R35.1.0) it's l4t-pytorch:r35.1.0-pth1.13-py3
    • Pull container
      # l4t-pytorch:r35.1.0-pth1.13-py3 ->
      sudo docker pull nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3
    • Run container
      # Will download about 10GB of stuff
      sudo docker run -it --rm --runtime nvidia --network host nvcr.io/nvidia/l4t-pytorch:r35.1.0-pth1.13-py3
    • TODO FINISH

(Install visual studio, pycharm, telegram, ...)
