GMR-AI

An automated robot designed to take care of the grass on extensive surfaces. A companion app lets the user select the height at which the grass should be cut. In addition, the robot can measure the temperature and humidity of the environment, providing useful information about the climatic conditions, and it is equipped with multiple cameras that allow it to avoid obstacles and follow its own route efficiently.

Table of Contents
  1. About The Project
  2. Project Structure
  3. Getting Started
  4. Usage
  5. Project Details
  6. Project Task List
  7. References
  8. License

About The Project

This project involves the design of an automated robot capable of mowing the lawn on large-scale surfaces such as the deck of an aircraft carrier. The robot can be adjusted to cut the grass at various heights and is programmed to follow the most efficient mowing pattern possible.

In addition, it incorporates computer vision technology, which enables the robot to avoid obstacles. The robot is accompanied by multiple cameras that cover all the terrain from different types of views. These cameras survey the terrain and send all the information to the lawn-mowing robot for processing. This allows the robot to adjust its preplanned path as necessary.

The robot is also paired with a mobile application, which allows users to control the grass cutting level and turn the robot on and off. Furthermore, the application provides an accurate 3D representation of the entire terrain based on the images captured by these cameras. This comprehensive view gives users a clear understanding of the terrain and the ongoing mowing process.

(back to top)

Built With

  • ROS2
  • Fritzing
  • AndroidStudio
  • Flutter
  • CoppeliaSim
  • Python
  • YOLO
  • Flask
  • InstantNGP
  • Google Cloud
  • Firebase

(back to top)

Project Structure

  • assets: folder with the assets used in this README.md.
  • docs: documentation for the project, including the hardware and software scheme.
  • MyGMR: reference to the MyGMR app git project.
  • sim: main robot folder. Here the simulation and the ROS2 workspace can be found.
    • coppeliasim: folder to store simulations in CoppeliaSim.
    • ros2_ws: ROS2 workspace. All the packages are in the src folder and a brief description of each can be found in their respective package.xml.

(back to top)

Getting Started

To set up your project, follow the instructions below.

Prerequisites

The primary prerequisite for running this project is Ubuntu 22.04, which you can download from the official Ubuntu website.

Once your operating system is set up, install CoppeliaSim to see your GMR robot running. Make sure to install the EDU version and use it only for educational purposes.

For the 3D reconstruction process, it is necessary to install certain dependencies prior to cloning the instant-ngp project. The README.md file provided in the instant-ngp repository contains detailed instructions, particularly in the 'Requirements' section, which outlines the essential tools that need to be installed.

Installation

  1. Clone the 3D reconstruction repository named Instant-NGP:

    git clone https://github.com/NVlabs/instant-ngp.git
  2. Clone our repository:

    git clone https://github.com/GMR-AI/GMR-AI.git
  3. Replace the run.py file from Instant-NGP with our custom run.py file:

    mv ~/GMR-AI/sim/run.py ~/instant-ngp/scripts/run.py
  4. Install the CoppeliaSim dependencies:

    sudo apt install python3-pip
    python3 -m pip install pyzmq cbor2 empy==3.3.4 lark
    sudo apt install xsltproc
  5. Install ROS2 Humble Version by following the instructions on the ROS2 Humble Installation page.

  6. Install colcon:

    sudo apt install python3-colcon-common-extensions
  7. Install rosdep:

    sudo apt-get install python3-rosdep
    sudo rosdep init
    rosdep update
  8. Navigate to the ros2_ws folder and install dependencies:

    rosdep install --from-paths src --ignore-src -y --rosdistro humble
  9. Prepare the packages (still in the ros2_ws folder):

    colcon build --symlink-install
  10. Source the setup file in a new terminal window (still in the ros2_ws folder):

    . install/setup.bash
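
Once everything is built, you can optionally verify that Python can talk to a running CoppeliaSim instance through its ZeroMQ remote API. Below is a minimal sketch, assuming the coppeliasim-zmqremoteapi-client package is installed (it is not part of the steps above):

    # Minimal connectivity check against a running CoppeliaSim instance.
    # Assumes: pip install coppeliasim-zmqremoteapi-client
    from coppeliasim_zmqremoteapi_client import RemoteAPIClient

    client = RemoteAPIClient()     # connects to localhost:23000 by default
    sim = client.getObject('sim')  # handle to CoppeliaSim's 'sim' namespace

    sim.startSimulation()
    print('Simulation time:', sim.getSimulationTime())
    sim.stopSimulation()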

(back to top)

Usage

To run the project, you will need to set up your phone and install our APK, available in our MyGMR repository. Once you have created your account and added your GMR device to your list, run the following command on your computer:

ros2 launch gmrai_description main.launch.py coppelia_root_dir:=</absolute/path/to/coppelia/root/folder> coppelia_scene_path:=</absolute/path/to/coppelia/scene> coppelia_headless:=<True|False>

Now the GMR is connected. You can start a new job from the app, which creates a 3D model of the terrain. You can then choose the cutting height and select the area you want to mow. Once the job starts, the app shows a timer and a 3D model of the terrain being cut.

(back to top)

Project Details

In this section, we delve into each aspect of the project. This includes detailed explanations of the hardware used, the circuitry of said hardware, an overview of the mobile application, and finally the computer vision work. In the latter, we discuss the deep learning models and 3D reconstruction techniques employed in the project.

Hardware

The hardware used in this project is the following (prices are per unit):

| Component | Units | Unit Price |
| --- | --- | --- |
| N20 DC 6V | 1 | 4.50 € |
| Metal DC Geared Motor w/Encoder - 6V | 2 | 19.90 € |
| Mini servo FEETECH 3.5kg | 1 | 7.95 € |
| Powerbank 5V 10000 mAh, black | 1 | 14.00 € |
| Virtavo Wireless HD 1080p Camera | 32 | 30.99 € |
| NVIDIA Jetson Nano | 1 | 259.00 € |
| Adafruit Servo Driver | 1 | 14.95 € |
| 6 AA battery holder with connector | 1 | 3.85 € |
| L298N Dual H-Bridge Motor Controller | 2 | 15.00 € |
| Pack of 6 AA Rechargeable Batteries 1.2V | 1 | 11.43 € |
| Adafruit IMU | 1 | 34.95 € |

Total price: 1412.11 €

Fritzing

Using the hardware from the table above and the connections between each component, we created the following circuit diagram with Fritzing:

[Image: Fritzing circuit diagram]

MyGMR - Control your GMR device with our Android APP

We decided that the best way to control your GMR devices is through a friendly app that gives you full control over them while also offering plenty of information to help you decide whether or not to cut the grass at a given time. You can learn how the app works, and browse its source code, in the MyGMR repository.

[Image: MyGMR logo]

Communication via ROS2 and Google Cloud

To connect the robot and the MyGMR app via Google Cloud, we created a client that acts as a bridge between the main robot manager and the remote server. Different states are defined depending on the robot's status, such as Requesting when the robot isn't assigned to any account, or Working when it is currently carrying out a job.

This client also sends the app all the information it needs, from the robot status to the 3D reconstruction of the terrain to show the user.
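
The exact implementation lives in the ros2_ws packages, but the pattern can be summarized with a short sketch. Only the Requesting and Working state names come from the description above; the endpoint URL and payload fields below are hypothetical:

    # Hypothetical sketch of the robot-side bridge client; only the state
    # names come from the text above, the URL and fields are made up.
    import enum
    import json
    import urllib.request

    class RobotState(enum.Enum):
        REQUESTING = "requesting"  # not yet assigned to any account
        WORKING = "working"        # currently carrying out a job

    class CloudBridge:
        def __init__(self, server_url):
            self.server_url = server_url
            self.state = RobotState.REQUESTING

        def report(self, extra=None):
            """Push the current status (plus any extra data) to the server."""
            payload = {"state": self.state.value, **(extra or {})}
            req = urllib.request.Request(
                self.server_url + "/status",
                data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req)

    bridge = CloudBridge("http://example-server:8080")
    bridge.state = RobotState.WORKING
    bridge.report({"job_id": 1})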

[Image: software communication diagram]

YOLO V8 - Obstacle Detection

For obstacle detection, we created our own dataset with a training set, a validation set, and a test set. We started with a dataset of 63 images taken from a bird's-eye view of the entire simulation terrain. We then performed data augmentation, applying horizontal and vertical flips, 90-degree rotations clockwise and counter-clockwise, upside-down rotations, blurring up to 1.1 px, and adding noise to up to 0.3% of pixels.

This process expanded our dataset to a total of 129 training images, 12 validation images, and 6 test images, for a grand total of 147 images. The objects in these images are relatively easy to identify due to their vibrant and striking colors, especially the trees.

We used Roboflow to create this dataset. Roboflow is an end-to-end computer vision platform that simplifies the process of building computer vision models. It streamlines the process between labeling your data and training your model, making it an ideal tool for our project.

We also utilized YOLOv8, one of the latest versions of the YOLO (You Only Look Once) series of models, known for considerable accuracy while maintaining a small model size. YOLOv8 is a state-of-the-art model that can be used for object detection, image classification, and instance segmentation tasks.

We decided to use 12 images for the validation set and 6 images for the test set. We chose to have more training images to increase our model's accuracy. We ran a test to determine the model's accuracy and found that it is 95.53%, meaning there is a 4.47% chance that the detection can fail.
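
Training and validating such a model with the Ultralytics API takes only a few lines. A minimal sketch, assuming a Roboflow-exported data.yaml; the file names and hyperparameters are illustrative, not our exact settings:

    # Minimal YOLOv8 training/validation sketch with the ultralytics package.
    # 'data.yaml' is the Roboflow-exported dataset description; epochs and
    # image size here are illustrative, not the project's actual settings.
    from ultralytics import YOLO

    model = YOLO("yolov8n-seg.pt")        # pretrained segmentation weights
    model.train(data="data.yaml", epochs=100, imgsz=640)

    metrics = model.val()                 # evaluate on the validation split
    print(metrics.seg.map50)              # mask mAP at IoU 0.5

    results = model("top_down_view.png")  # inference on a top-down image
    annotated = results[0].plot()         # numpy image with drawn detections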

You can check our custom dataset here.


3D Reconstruction with Instant-NGP

In order to perform a 3D reconstruction of the terrain, we had to strategically place a total of 32 vision sensors across the area to ensure every detail was covered. The reason for using so many cameras is our use of a technology developed by NVLabs called Instant-NGP. This technology performs 3D reconstructions primarily from video input, where the camera moves around to capture every detail for optimal results. In our case, we didn’t have an aerial camera for this purpose, so we opted to use static images instead, specifically 32 of them.
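
Instant-NGP reads the camera setup from a transforms.json file in the NeRF convention; with static cameras, this file simply lists one frame per sensor. A minimal sketch of generating such a file, where the poses and paths are placeholders (in our pipeline they come from the CoppeliaSim vision sensors):

    # Sketch of a transforms.json for 32 static cameras. The field names
    # (camera_angle_x, frames, file_path, transform_matrix) follow the
    # NeRF/Instant-NGP convention; the poses below are placeholders.
    import json
    import math

    NUM_CAMERAS = 32
    frames = []
    for i in range(NUM_CAMERAS):
        # Real camera-to-world 4x4 matrices come from the vision sensors;
        # identity is used here as a stand-in.
        pose = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
        frames.append({
            "file_path": f"images/cam_{i:02d}.png",
            "transform_matrix": pose,
        })

    transforms = {
        "camera_angle_x": math.radians(60),  # horizontal field of view
        "frames": frames,
    }

    with open("transforms.json", "w") as f:
        json.dump(transforms, f, indent=2)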

This process of reconstruction from images is resource-intensive, so we carry out the reconstruction on our PCs. However, if we had hardware like the Jetson Nano from NVIDIA, it would be capable of performing the 3D reconstruction independently.

The application of this technology is quite straightforward. As mentioned in the installation section, you simply need to clone their repository and then replace their run.py script with ours.

It’s important to note that there’s a small chance (around 7.3%) that the 3D reconstruction might fail and not produce a precise result. However, since the reconstruction is performed continuously, the map will correct itself on the next pass. The really important part is to make sure the initial reconstruction is accurate, as that is the moment when the coverage planning is computed.
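
One common approach to coverage planning is the back-and-forth (boustrophedon) sweep implemented in PythonRobotics (see References). A minimal sketch over a rectangular area; the dimensions and cutting width below are made up:

    # Minimal boustrophedon (back-and-forth) coverage sketch. Dimensions
    # and tool width are made up; the real planner works on the
    # reconstructed map and the robot's cutting width.
    def boustrophedon(width, height, tool_width):
        """Yield (x, y) waypoints sweeping a width x height rectangle."""
        y, direction = 0.0, 1
        while y <= height:
            x_start, x_end = (0.0, width) if direction > 0 else (width, 0.0)
            yield (x_start, y)
            yield (x_end, y)
            y += tool_width          # step over by one cutting width
            direction = -direction   # alternate sweep direction

    for waypoint in boustrophedon(width=10.0, height=5.0, tool_width=0.5):
        print(waypoint)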

Below is an image of the final reconstruction, which can be viewed through the MyGMR app.

[Image: final 3D reconstruction of the terrain]

(back to top)

Project Task List

The task list for our project is divided into three distinct blocks, which are worked on in parallel. However, in some cases, collaboration between blocks is necessary. The tasks for each block are as follows:

CoppeliaSim Simulation

  1. Designing the GMR Device: Design the Grass Management Robot (GMR) and export it to CoppeliaSim, a robot simulation software.

  2. Applying Joints and Wheels: Add joints and links to the model and generate a URDF for future use.

  3. Creating a Personalized World: Create the world, with some obstacles, in which the robot operates and maneuvers.

  4. Placing Cameras: Place some cameras all over the area to ensure a good quality 3D reconstruction. (In the end, 32 cameras were needed to achieve that).

  5. Programming Robot Movements and ROS2 Integration: Build all the robot logic and connect all of its different components. To do this, use ROS2 (Robot Operating System 2), as it allows a clean separation between the different functionalities and provides the required multitasking.

  6. Setting Up Server Communication: Create a client in the robot that connects with the Google Cloud server to communicate with the MyGMR App.

MyGMR - App

  1. App Development: Develop an application using Flutter (Dart) for the frontend and Flask (Python) for the backend. This combination allows for a robust and user-friendly interface while ensuring efficient server-side operations.

  2. Account Creation: Implement a feature that allows users to create an account using their Google credentials via Firebase. This provides a secure and convenient way for users to access the app.

  3. Device Management: Provide the ability for users to add and delete Grass Management Robot (GMR) devices from their device list. This gives users full control over the devices they want to manage.

  4. Weather Information: Display the current weather of the user's location. This can help users plan their GMR operations based on weather conditions.

  5. Job Creation: Enable users to create a job for a specific GMR device. Users should be able to specify the height of the grass and the area they want to cut. This allows for customized operations based on user preferences.

  6. 3D Terrain Model: Show a 3D model of the terrain that is being cut. This model is generated by the robot using the Jetson Nano. This visual representation can help users understand the terrain better and plan their operations accordingly.

  7. Job Records: Once a job is completed, users should be able to view a record of each job for each GMR device. This can help users track the performance and efficiency of their devices.

  8. Robot Information: Provide information about the robot, including the model name, battery life, charging time, etc. This can help users manage their devices more effectively.

  9. Google Cloud Connection: Connect the app to Google Cloud to facilitate communication between the app and the robot in the CoppeliaSim simulation. This ensures real-time data transfer and remote control of the robot.
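
On the backend side, a Flask server exposes the endpoints that the robot client and the app talk through. A hypothetical sketch of what such endpoints could look like, matching the made-up /status route from the bridge sketch earlier (the real backend stores data in CloudSQL rather than in memory):

    # Hypothetical Flask endpoint sketch; route names and fields are made
    # up, and the dict below stands in for the real CloudSQL storage.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    robot_status = {}  # in-memory stand-in for CloudSQL

    @app.route("/status", methods=["POST"])
    def update_status():
        """Receive a status update pushed by the robot's bridge client."""
        data = request.get_json()
        robot_status[data.get("robot_id", "gmr-0")] = data
        return jsonify(ok=True)

    @app.route("/status/<robot_id>", methods=["GET"])
    def get_status(robot_id):
        """Let the app query the last known status of a robot."""
        return jsonify(robot_status.get(robot_id, {"state": "unknown"}))

    if __name__ == "__main__":
        app.run(port=8080)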

Computer Vision - YoloV8 & Instant-NGP

  1. 3D Reconstruction Using Instant-NGP: Use the 32 images captured by the cameras to perform a 3D reconstruction of the terrain using Instant-NGP, which generates a mesh (obj) and a pointcloud (ply).

  2. Pointcloud To Top-Down 2D Image: Use the pointcloud to generate a top-down 2D image (a minimal sketch of this projection appears after this list).

  3. Create Custom Dataset: Generate a custom dataset using a set of previously generated images with the aforementioned method to detect obstacles and the robot on the field. This includes creating ground truth labels for both training and testing datasets using RoboFlow.

  4. Train Object Detection Model: Train a YOLOv8 model for object segmentation using the previous dataset. After training, test the performance of the model to ensure it meets the required standards.

  5. Apply Object Detection: Use the trained model for inference on each generated 2D image for SLAM. Use RViz, a 3D visualization tool for ROS and ROS2, to check that the robot's comprehension of its surroundings is as expected. This step is crucial to ensure the model can accurately detect objects in the simulated environment.

  6. Send 3D Reconstruction and 2D Image to Google Cloud and MyGMR App: Send a 3D model and a 2D image to Google Cloud when asked from the MyGMR App.
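
For step 2, the projection from pointcloud to image amounts to flattening the points onto the ground plane. A minimal sketch using open3d and numpy; the file name and resolution are illustrative, and the real pipeline also carries over point colors:

    # Minimal sketch: flatten a pointcloud into a top-down occupancy image.
    # File name and resolution are illustrative.
    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("terrain.ply")
    pts = np.asarray(pcd.points)         # (N, 3) xyz coordinates

    RES = 512                            # output image is RES x RES pixels
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    # Normalize x/y into pixel coordinates, ignoring height (z).
    span = (maxs[:2] - mins[:2]) + 1e-9
    ij = ((pts[:, :2] - mins[:2]) / span * (RES - 1)).astype(int)

    image = np.zeros((RES, RES), dtype=np.uint8)
    image[ij[:, 1], ij[:, 0]] = 255      # mark occupied cells
    np.save("top_down.npy", image)       # or export as PNG with imageio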

(back to top)

References

  • ROS2: Documentation for the Robot Operating System 2 (ROS2).
  • Roboflow Dataset: The specific dataset used for training the object detection model in our project.
  • Instant-NGP: The GitHub repository for NVLabs' Instant-NGP technology used for 3D reconstruction.
  • Flutter Libraries: A collection of libraries for Flutter, the UI toolkit used for frontend development in our app.
  • YOLOv8: The GitHub repository for the YOLOv8 model used for object detection.
  • Flask: Documentation for Flask, the micro web framework used for backend development in our app.
  • Firebase: Documentation for Firebase, the platform used for user authentication in our app.
  • CloudSQL: Samples and documentation for CloudSQL, the service used for database management in our app.
  • Coverage Planning: The GitHub repository for PythonRobotics, which includes algorithms for coverage planning.

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)
