Automated robot designed to take care of the grass on extensive surfaces. It has an app that lets the user select the height at which they want the grass to be cut. In addition, it can measure the temperature and humidity of the environment, providing useful information about the climatic conditions. It is equipped with multiple cameras that allow the robot to avoid obstacles and follow its own route efficiently.
This project involves the design of an automated robot capable of mowing the lawn on large-scale fields such as an aircraft carrier. The robot can be adjusted to cut the grass at various levels and is programmed to follow the most efficient mowing pattern possible.
In addition, it incorporates computer vision technology, which enables the robot to avoid obstacles. The robot is accompanied by multiple cameras that cover the entire terrain from different viewpoints. These cameras survey the terrain and send all the information to the lawn-mowing robot for processing, allowing the robot to adjust its preplanned path as necessary.
The robot is also paired with a mobile application, which allows users to control the grass cutting level and turn the robot on and off. Furthermore, the application provides an accurate 3D representation of the entire terrain based on the images captured by these cameras. This comprehensive view gives users a clear understanding of the terrain and the ongoing mowing process.
- `assets`: folder with the assets used in this README.md.
- `docs`: documentation for the project, including the hardware and software scheme.
- `MyGMR`: reference to the MyGMR app git project.
- `sim`: main robot folder. Here the simulation and the ROS2 workspace can be found.
  - `coppeliasim`: folder to store simulations in CoppeliaSim.
  - `ros2_ws`: ROS2 workspace. All the packages are in the `src` folder and a brief description of each can be found in its respective `package.xml`.
To set up your project, follow the instructions below.
The primary prerequisite for running this project is Ubuntu 22.04, which you can install from the official Ubuntu website.

Once you have your operating system set up, install the CoppeliaSim simulator to see your GMR robot running. Make sure to install the EDU version and use it only for educational purposes.
For the 3D reconstruction process, it is necessary to install certain dependencies prior to cloning the `instant-ngp` project. The `README.md` file provided in the `instant-ngp` repository contains detailed instructions, particularly in the 'Requirements' section, which outlines the essential tools that need to be installed.
1. Clone the 3D reconstruction repository, Instant-NGP:

    ```bash
    git clone https://github.com/NVlabs/instant-ngp.git
    ```

2. Clone our repository:

    ```bash
    git clone https://github.com/GMR-AI/GMR-AI.git
    ```

3. Replace the `run.py` file from Instant-NGP with our custom `run.py` file:

    ```bash
    mv ~/GMR-AI/sim/run.py ~/instant-ngp/scripts/run.py
    ```

4. Install the CoppeliaSim dependencies:

    ```bash
    sudo apt install python3-pip
    python3 -m pip install pyzmq cbor2 empy==3.3.4 lark
    sudo apt install xsltproc
    ```

5. Install ROS2 Humble by following the instructions on the ROS2 Humble installation page.

6. Install `colcon`:

    ```bash
    sudo apt install python3-colcon-common-extensions
    ```

7. Install `rosdep`:

    ```bash
    sudo apt-get install python3-rosdep
    sudo rosdep init
    rosdep update
    ```

8. Navigate to the `ros2_ws` folder and install the package dependencies:

    ```bash
    rosdep install --from-paths src --ignore-src -y --rosdistro humble
    ```

9. Build the packages (still in the `ros2_ws` folder):

    ```bash
    colcon build --symlink-install
    ```

10. Source the setup file in a new terminal window (still in the `ros2_ws` folder):

    ```bash
    . install/setup.bash
    ```
To run your project, you will need to set up your phone and install our APK, available in our MyGMR repository. Once you have created your account and added your GMR device to your list, run the following command on your computer:
```bash
ros2 launch gmrai_description main.launch.py coppelia_root_dir:=</absolute/path/to/coppelia/root/folder> coppelia_scene_path:=</absolute/path/to/coppelia/scene> coppelia_headless:=<True|False>
```
Now the GMR is connected. You can start a new job from your app, which creates a 3D model of the terrain. You can then choose the cutting height and select the area you want to cut. Once you start the job, you will see a timer and a 3D model of the terrain being cut.
In this section, we will delve into each aspect of the project: the hardware used, the circuitry connecting that hardware, an overview of the mobile application, and finally the computer vision component, where we discuss the deep learning models and 3D reconstruction techniques employed in the project.
The hardware components used in this project are the following:
| Component | Units | Unit Price |
|---|---|---|
| N20 DC 6V | 1 | 4.50 € |
| Metal DC Geared Motor w/Encoder - 6V | 2 | 19.90 € |
| Mini servo FEETECH 3.5kg | 1 | 7.95 € |
| Powerbank 5V 10000 mAh, Black | 1 | 14.00 € |
| Virtavo Wireless HD 1080p Camera | 32 | 30.99 € |
| NVIDIA Jetson Nano | 1 | 259.00 € |
| Adafruit Servo Driver | 1 | 14.95 € |
| 6 AA Battery Holder with Connector | 1 | 3.85 € |
| L298N Dual H-Bridge Motor Controller | 2 | 15.00 € |
| Pack of 6 AA Rechargeable Batteries 1.2V | 1 | 11.43 € |
| Adafruit IMU | 1 | 34.95 € |
| **Total Price** | | **1412.11 €** |
Using the hardware and the connections between each component from the table above, we created the following circuit diagram with the Fritzing platform:
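On the software side of this wiring, the two geared motors are ultimately commanded from ROS2. Below is a minimal `rclpy` sketch of how a differential drive could be driven from `/cmd_vel`; the node, topic, and parameter names are illustrative assumptions, not the actual interfaces of the `ros2_ws` packages:

```python
# Minimal differential-drive sketch (illustrative only; node, topic and
# parameter names are assumptions, not the project's actual interfaces).
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
from std_msgs.msg import Float64


class WheelDriver(Node):
    def __init__(self):
        super().__init__('wheel_driver')
        self.wheel_separation = 0.2  # metres, hypothetical value
        self.left_pub = self.create_publisher(Float64, 'left_wheel_speed', 10)
        self.right_pub = self.create_publisher(Float64, 'right_wheel_speed', 10)
        self.create_subscription(Twist, 'cmd_vel', self.on_cmd_vel, 10)

    def on_cmd_vel(self, msg: Twist):
        # Standard differential-drive kinematics: split the linear and
        # angular velocity of the base into per-wheel speeds.
        v, w = msg.linear.x, msg.angular.z
        left, right = Float64(), Float64()
        left.data = v - w * self.wheel_separation / 2.0
        right.data = v + w * self.wheel_separation / 2.0
        self.left_pub.publish(left)
        self.right_pub.publish(right)


def main():
    rclpy.init()
    rclpy.spin(WheelDriver())


if __name__ == '__main__':
    main()
```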
We decided that the best way to control your GMR devices is through a friendly app that gives you full control of your devices while also offering plenty of information to help you decide whether or not to cut the grass at a given time. You can learn how the app works and browse its source code in the MyGMR repository.
To let the robot and the MyGMR app communicate via Google Cloud, we created a client that acts as a bridge between the main robot manager and the remote server. Different states are defined depending on the robot's status, such as Requesting when the robot isn't assigned to any account, or Working when it's currently doing a job.

This client also sends the app all the information it needs, from the robot's status to the 3D reconstruction of the terrain shown to the user.
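As a rough sketch of how such a state-driven client could be organised (the Requesting and Working states come from the description above; the Idle state, class names, and payload format are assumptions added for illustration):

```python
# Sketch of the bridge client's state handling (illustrative; the state
# names Requesting/Working come from the description above, everything
# else is an assumption about the real implementation).
from enum import Enum, auto


class RobotState(Enum):
    REQUESTING = auto()  # not yet assigned to any account
    IDLE = auto()        # assigned, waiting for a job (assumed state)
    WORKING = auto()     # currently executing a job


class CloudClient:
    """Bridge between the main robot manager and the remote server."""

    def __init__(self, server_url: str):
        self.server_url = server_url
        self.state = RobotState.REQUESTING

    def on_assigned(self):
        # Called when a user adds this GMR to their account via the app.
        self.state = RobotState.IDLE

    def on_job_started(self):
        self.state = RobotState.WORKING

    def status_payload(self) -> dict:
        # The app receives the robot status plus a pointer to the latest
        # 3D reconstruction of the terrain (hypothetical payload format).
        return {"state": self.state.name, "model_url": "terrain.obj"}
```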
For obstacle detection, we created our own dataset with a training set, a validation set, and a test set. We started with a dataset of 63 images taken from a bird's-eye view of the entire simulation terrain. We then performed data augmentation, applying horizontal and vertical flips, 90-degree rotations clockwise and counter-clockwise, upside-down rotations, blurring up to 1.1 px, and adding noise to up to 0.3% of pixels.
This process expanded our dataset to a total of 129 training images, 12 validation images, and 6 test images, for a grand total of 147 images. The objects in these images are relatively easy to identify due to their vibrant and striking colors, especially the trees.
We used Roboflow to create this dataset. Roboflow is an end-to-end computer vision platform that simplifies the process of building computer vision models. It streamlines the process between labeling your data and training your model, making it an ideal tool for our project.
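For anyone wanting to reproduce a similar augmentation step locally instead of in Roboflow, a rough equivalent with the albumentations library might look like this (transform parameters are approximations of the settings listed above, not Roboflow's exact internals):

```python
# Approximate local re-creation of the augmentations described above,
# using the albumentations library (parameters are approximations).
import albumentations as A
import cv2

augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomRotate90(p=0.5),           # 90° clockwise/counter-clockwise/180°
    A.Blur(blur_limit=3, p=0.3),       # mild blur, roughly the "up to 1.1 px"
    A.PixelDropout(dropout_prob=0.003, p=0.3),  # noise on ~0.3% of pixels
])

image = cv2.imread("bird_eye_view.png")  # hypothetical input image
augmented = augment(image=image)["image"]
cv2.imwrite("bird_eye_view_aug.png", augmented)
```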
We also utilized YOLOv8, one of the latest versions of the YOLO (You Only Look Once) series of models, known for considerable accuracy at a small model size. YOLOv8 is a state-of-the-art model that can be used for object detection, image classification, and instance segmentation tasks.
We decided to use 12 images for the validation set and 6 images for the test set, keeping more images for training to increase the model's accuracy. A test run put the model's accuracy at 95.53%, meaning there is a 4.47% chance that a detection fails.
You can check our custom dataset here.
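A minimal sketch of how such a YOLOv8 segmentation model can be trained and validated with the ultralytics API (checkpoint choice, dataset path, and hyperparameters are placeholders, not our exact settings):

```python
# Training/validation sketch with the ultralytics YOLOv8 API (dataset
# path and hyperparameters are placeholders, not the values we used).
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")          # small segmentation checkpoint
model.train(data="dataset/data.yaml",   # Roboflow export in YOLO format
            epochs=100, imgsz=640)

metrics = model.val()                   # metrics on the validation split
results = model("top_down_view.png")    # inference on a top-down image
```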
In order to perform a 3D reconstruction of the terrain, we had to strategically place a total of 32 vision sensors across the area to ensure every detail was covered. The reason for using so many cameras is that we rely on a technology developed by NVLabs called Instant-NGP. This technology performs 3D reconstructions primarily from video input, where the camera moves around to capture every detail for optimal results. In our case, we didn't have an aerial camera for this purpose, so we opted to use static images instead, specifically 32 of them.
This process of reconstruction from images is resource-intensive, so we carry out the reconstruction on our PCs. However, if we had hardware like the Jetson Nano from NVIDIA, it would be capable of performing the 3D reconstruction independently.
The application of this technology is quite straightforward. As mentioned in the installation section, you simply need to clone their repository and then replace their `run.py` with our custom one.
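One detail worth noting: since we feed Instant-NGP static images rather than a video, each of the 32 frames needs a known camera pose, which NeRF-style tools like Instant-NGP typically read from a `transforms.json` file. A minimal sketch of producing such a file (the field-of-view value, file names, and identity poses are placeholders; in the simulation the poses of the 32 vision sensors are known exactly):

```python
# Minimal sketch of a NeRF-style transforms.json for instant-ngp
# (camera poses and file names are placeholders).
import json
import numpy as np


def make_transforms(poses, fov_x):
    """poses: list of 4x4 camera-to-world matrices, one per image."""
    return {
        "camera_angle_x": fov_x,  # horizontal field of view in radians
        "frames": [
            {"file_path": f"images/{i:02d}.png",
             "transform_matrix": np.asarray(pose).tolist()}
            for i, pose in enumerate(poses)
        ],
    }


poses = [np.eye(4) for _ in range(32)]  # placeholder identity poses
with open("transforms.json", "w") as f:
    json.dump(make_transforms(poses, fov_x=1.047), f, indent=2)
```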
It’s important to note that there’s a small chance (around 7.3%) that the 3D reconstruction might fail and not produce a precise result. However, since we redo the reconstruction constantly, the map will correct itself the next time. The really important part is to make sure the initial reconstruction you receive is accurate, as that is the moment the coverage planning is processed.
Below is an image of the final reconstruction, which can be viewed through the MyGMR app.
The task list for our project is divided into three distinct blocks, which are worked on in parallel. However, in some cases, collaboration between blocks is necessary. The tasks for each block are as follows:
- **Designing the GMR Device:** Design the Grass Management Robot (GMR) and export it to CoppeliaSim, a robot simulation software.
- **Applying Joints and Wheels:** Add joints and links to the model and generate a URDF for future use.
- **Creating a Personalized World:** Create the world, with some obstacles, in which the robot operates and maneuvers.
- **Placing Cameras:** Place cameras all over the area to ensure a good-quality 3D reconstruction (in the end, 32 cameras were needed to achieve that).
- **Programming Robot Movements and ROS2 Integration:** Build all the robot logic and connect all its different components together. To do this, use ROS2 (Robot Operating System 2), as it allows a clean partitioning of the different functionalities and the required multitasking.
- **Setting Up Server Communication:** Create a client on the robot that connects to the Google Cloud server to communicate with the MyGMR app.
- **App Development:** Develop an application using Flutter (Dart) for the frontend and Flask (Python) for the backend. This combination allows for a robust and user-friendly interface while ensuring efficient server-side operations.
- **Account Creation:** Implement a feature that allows users to create an account using their Google credentials via Firebase. This provides a secure and convenient way for users to access the app.
- **Device Management:** Provide the ability for users to add and delete GMR devices from their device list. This gives users full control over the devices they want to manage.
- **Weather Information:** Display the current weather at the user's location. This can help users plan their GMR operations based on weather conditions.
- **Job Creation:** Enable users to create a job for a specific GMR device. Users should be able to specify the height of the grass and the area they want to cut. This allows for customized operations based on user preferences.
- **3D Terrain Model:** Show a 3D model of the terrain being cut. This model is generated by the robot using the Jetson Nano. This visual representation can help users understand the terrain better and plan their operations accordingly.
- **Job Records:** Once a job is completed, users should be able to view a record of each job for each GMR device. This can help users track the performance and efficiency of their devices.
- **Robot Information:** Provide information about the robot, including the model name, battery life, charging time, etc. This can help users manage their devices more effectively.
- **Google Cloud Connection:** Connect the app to Google Cloud to facilitate communication between the app and the robot in the CoppeliaSim simulation. This ensures real-time data transfer and remote control of the robot.
- **3D Reconstruction Using Instant-NGP:** Use the 32 images captured by the cameras to perform a 3D reconstruction of the terrain using Instant-NGP, which generates a mesh (`.obj`) and a point cloud (`.ply`).
- **Point Cloud to Top-Down 2D Image:** Use the point cloud to generate a top-down 2D image (see the sketch after this list).
- **Create Custom Dataset:** Generate a custom dataset using a set of previously generated images with the aforementioned method to detect obstacles and the robot on the field. This includes creating ground-truth labels for both training and testing datasets using Roboflow.
- **Train Object Detection Model:** Train a YOLOv8 model for object segmentation using the previous dataset. After training, test the performance of the model to ensure it meets the required standards.
- **Apply Object Detection:** Use the trained model for inference on each generated 2D image for SLAM. Use RViz, a 3D visualization tool for ROS and ROS2, to check that the robot's comprehension of its surroundings is as it should be. This step is crucial to ensure the model can accurately detect objects in the simulated environment.
- **Send 3D Reconstruction and 2D Image to Google Cloud and the MyGMR App:** Send the 3D model and a 2D image to Google Cloud when requested from the MyGMR app.
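As referenced in the point-cloud task above, here is a minimal sketch of how the reconstructed point cloud could be flattened into a top-down 2D image (the Open3D-based approach, file names, and resolution are assumptions, not necessarily what our pipeline does):

```python
# Sketch of flattening the reconstructed point cloud into a top-down
# 2D image (resolution and axis conventions are assumptions).
import numpy as np
import open3d as o3d
import cv2

pcd = o3d.io.read_point_cloud("terrain.ply")      # hypothetical path
pts = np.asarray(pcd.points)
cols = (np.asarray(pcd.colors) * 255).astype(np.uint8)

res = 512  # output image resolution, hypothetical
# Normalise the x/y coordinates of each point into pixel indices.
xy = pts[:, :2]
mins, maxs = xy.min(axis=0), xy.max(axis=0)
px = ((xy - mins) / (maxs - mins + 1e-9) * (res - 1)).astype(int)

image = np.zeros((res, res, 3), dtype=np.uint8)
order = np.argsort(pts[:, 2])   # draw lowest points first so the
image[px[order, 1], px[order, 0]] = cols[order]  # highest end on top

# Open3D colors are RGB; OpenCV writes BGR, so convert before saving.
cv2.imwrite("top_down.png", cv2.cvtColor(image, cv2.COLOR_RGB2BGR))
```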
- ROS2: Documentation for the Robot Operating System 2 (ROS2).
- Roboflow Dataset: The specific dataset used for training the object detection model in our project.
- Instant-NGP: The GitHub repository for NVLabs' Instant-NGP technology used for 3D reconstruction.
- Flutter Libraries: A collection of libraries for Flutter, the UI toolkit used for frontend development in our app.
- YOLOv8: The GitHub repository for the YOLOv8 model used for object detection.
- Flask: Documentation for Flask, the micro web framework used for backend development in our app.
- Firebase: Documentation for Firebase, the platform used for user authentication in our app.
- CloudSQL: Samples and documentation for CloudSQL, the service used for database management in our app.
- Coverage Planning: The GitHub repository for PythonRobotics, which includes algorithms for coverage planning.
Distributed under the MIT License. See `LICENSE.txt` for more information.