The camera driver has been modified to use USB cameras instead of the PS3 Eye. This change uses OpenCV with Video4Linux2 as the backend to communicate with the cameras. To check camera status, you can optionally install the `v4l-utils` package with the following command:
sudo apt install v4l-utils
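For reference, here is a minimal sketch of opening a USB camera through OpenCV's Video4Linux2 backend (the device index and resolution below are assumptions; adjust them for your setup). You can list detected devices with `v4l2-ctl --list-devices`.

```python
import cv2

# Open the first USB camera explicitly through the Video4Linux2 backend.
# Device index 0 and 640x480 are assumptions; adjust for your hardware.
cap = cv2.VideoCapture(0, cv2.CAP_V4L2)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

if not cap.isOpened():
    raise RuntimeError("Could not open camera via V4L2")

ok, frame = cap.read()  # grab one frame to verify the stream works
if ok:
    print("Frame shape:", frame.shape)
cap.release()
```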
The modifications are organized into two main sections:
- Camera Calibration (in `Camera Params`) – Handles intrinsic parameters for the new cameras.
- Mocap Backend (in `computer_code/api`) – Implements the motion capture system backend.
Additionally, some visual modifications have been made in `computer_code/src` to make the user interface more intuitive.
To calibrate the camera, follow these steps:
1. First, review the following introductory files for a better understanding of the calibration process:
   - How to Setup OpenCV
   - How to get world coordinates
   - Getting Camera Params
2. Use the calibration scripts to generate the optimal intrinsic parameters for the camera. This is typically necessary when switching to new cameras or after any changes to existing cameras. A sketch of what these scripts do follows below.
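As a rough illustration of what the calibration scripts do, here is a minimal intrinsic-calibration sketch using OpenCV's standard checkerboard routine. The board geometry, square size, and image folder are assumptions, not values from this repository.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)      # assumed inner-corner count of the checkerboard
SQUARE_MM = 25.0    # assumed square size in millimetres

# 3D points of the board corners in the board's own coordinate frame.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, BOARD, None)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Solve for the intrinsic matrix and distortion coefficients.
rms, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
print("Distortion coefficients:", dist.ravel())
```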
The mocap backend has been updated as follows:
- Modified Files:
  - `index.py` – Main file for the mocap backend.
  - `IrCamera.py` – Camera driver code.
  - `helpers.py` – Data processing for the motion capture system.
- Calibration Process:
  - To calibrate the system for a new camera layout, follow the GUI instructions.
  - Use a single LED marker for initial calibration, followed by two LED markers with a fixed spacing (e.g., 15 cm or another spacing specified in the code); a sketch of the scale-recovery step follows this list.
  - Once calibration is complete, you can track objects using a three-marker setup.
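To make the two-marker step concrete, here is a hedged sketch of how a known marker spacing can recover the metric scale of an otherwise up-to-scale reconstruction (the variable names and the 15 cm value are illustrative):

```python
import numpy as np

KNOWN_SPACING_M = 0.15  # assumed fixed distance between the two LED markers

def scale_factor(p1: np.ndarray, p2: np.ndarray) -> float:
    """Ratio between the real marker spacing and the reconstructed one.

    p1 and p2 are the triangulated 3D positions of the two markers,
    which are only correct up to a global scale before this step.
    """
    return KNOWN_SPACING_M / np.linalg.norm(p1 - p2)

# Toy example: the reconstructed separation is 0.3 units, so every
# reconstructed point (and camera translation) must be multiplied by 0.5
# to put the scene in metres.
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([0.3, 0.0, 0.0])
print("scale factor:", scale_factor(p1, p2))  # -> 0.5
```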
Read through the code to understand how the system works and how it integrates with the calibration and mocap tracking processes.
Planned improvements:

- Buy a network video recorder (NVR) and IP cameras to make the system portable. The cameras should work at 850 nm or 940 nm infrared.
- Modify the camera driver to work with the new cameras, along with all the calibrations.
- Simplify the setup using a VICON wand (it has fixed marker distances for scale calibration).
- Test the cameras outdoors; some thresholding will likely be needed, since outdoor environments are bright in the IR range (see the sketch below).
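For the outdoor-thresholding item, here is a sketch of the kind of brightness filtering that could suppress ambient IR; the threshold value is an assumption and would need tuning per environment.

```python
import cv2

def detect_ir_blobs(gray_frame, thresh=200):
    """Keep only very bright pixels (the IR markers) and return blob centroids.

    `thresh` is a tunable assumption: outdoors, ambient IR raises the
    background level, so a higher cutoff (or adaptive thresholding)
    may be necessary.
    """
    _, mask = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate contours
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```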
A general-purpose motion capture system built from the ground up, used to autonomously fly multiple drones indoors.
Watch this for information about the project & a demo! https://youtu.be/0ql20JKrscQ?si=jkxyOe-iCG7fa5th
Install the pseyepy Python library: https://github.com/bensondaled/pseyepy
This project requires the sfm (structure from motion) OpenCV module, which requires you to compile OpenCV from source. This is a bit of a pain, but these links should help you get started: SFM dependencies, OpenCV module installation guide.
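For orientation, here is a sketch of the usual from-source build that enables contrib modules such as sfm; paths are assumptions, and sfm also needs its own dependencies (Ceres, Eigen, glog/gflags) installed first, as described in the linked guides.

```bash
# Fetch OpenCV and the contrib modules (sfm lives in opencv_contrib).
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

mkdir -p opencv/build && cd opencv/build
# Point the build at the contrib modules.
cmake -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules ..
make -j"$(nproc)"
sudo make install
```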
Install npm and yarn.

From the `computer_code` directory, run `yarn install` to install node dependencies. Then run `yarn run dev` to start the webserver; you will be given a URL to view the frontend interface.

In another terminal window, run `python3 api/index.py` to start the backend server. This is what receives the camera streams and does the motion capture computations.
The documentation for this project is admittedly pretty lacking. If anyone would like to add type definitions to the Python code, that would be amazing and would probably go a long way toward improving the readability of the code. Feel free to also use the discussion tab to ask questions.
My blog post has some more information about the drones & camera: joshuabird.com/blog/post/mocap-drones
This post by gumby0q explains how `camera_params.json` can be calculated for your cameras.
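As a loose illustration of what such a file might hold (the field names below are assumptions, not necessarily this repository's actual schema), calibration results from OpenCV can be serialized like this:

```python
import json
import numpy as np

# Dummy values standing in for cv2.calibrateCamera output.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.1, 0.05, 0.0, 0.0, 0.0])

# Hypothetical schema: one entry per camera.
params = {"cameras": [{
    "intrinsic_matrix": K.tolist(),
    "distortion_coef": dist.tolist(),
}]}

with open("camera_params.json", "w") as f:
    json.dump(params, f, indent=2)
```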
This motion capture system is an "outside-in" system, with external cameras tracking objects within a fixed space. There are also "inside-out" systems which use cameras on the drones/robots to determine their locations, not requiring any external infrastructure.
My undergraduate dissertation presents such a system, which is capable of localizing multiple agents within a world in real time using purely visual data, with state-of-the-art performance. Check it out here: https://github.com/jyjblrd/distributed_visual_SLAM