Key Features • Build • Plugins parameters • Metadata • Example pipelines • RTSP Server • Related • License
GStreamer plugin package for ZED cameras.

Key Features

The package is composed of several plugins (a minimal composition example follows the list):

- `zedsrc`: acquires the camera color image and depth map and pushes them into a GStreamer pipeline.
- `zedmeta`: GStreamer library to define and handle the ZED metadata (positional tracking data, sensors data, detected object data, detected skeleton data).
- `zeddemux`: receives a composite `zedsrc` stream (`color left + color right` data or `color left + depth map`, plus metadata), processes the depth data if present, and pushes it into two new separate streams named `src_left` and `src_aux`. A third source pad is created for the metadata, to be processed externally.
- `zeddatamux`: receives a video stream compatible with the ZED caps and a ZED data stream generated by `zeddemux`, and adds the metadata back to the video stream. This is useful if the metadata are removed by a filter that does not automatically propagate them.
- `zeddatacsvsink`: example sink plugin that receives ZED metadata, extracts the positional tracking and sensors data, and saves them to a CSV file.
- `zedodoverlay`: example transform filter plugin that receives a ZED combined stream with metadata, extracts the Object Detection information, and draws the overlays on the incoming frames.
- `RTSP Server`: application for Linux that instantiates an RTSP server from a "gst-launch"-style text launch pipeline.
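A minimal sketch of how these elements compose, assuming a ZED camera is connected; it mirrors the stereo demux example shown later, previewed here with the stock `autovideoconvert` and `autovideosink` elements:

```
gst-launch-1.0 \
    zedsrc stream-type=2 ! queue ! \
    zeddemux is-depth=false name=demux \
    demux.src_left ! queue ! autovideoconvert ! autovideosink \
    demux.src_aux ! queue ! autovideoconvert ! autovideosink
```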
Build

Windows installation

- Install the latest ZED SDK v3.5.x from the official download page [optional: only required to compile the `zedsrc` plugin that acquires data from a ZED camera device]
- Install Git or download a ZIP archive
- Install CMake
- Install a GStreamer distribution (both the `runtime` and the `development` installers).
  - The installer should set the installation path via the `GSTREAMER_1_0_ROOT_X86_64` environment variable.
  - Add the path `%GSTREAMER_1_0_ROOT_X86_64%\bin` to the `PATH` system variable.
- Run the following commands from a terminal or command prompt, assuming CMake and Git are in your `PATH`:

    git clone https://github.com/stereolabs/zed-gstreamer.git
    cd zed-gstreamer
    mkdir build
    cd build
    cmake -G "Visual Studio 16 2019" ..
    cmake --build . --target INSTALL --config Release
Linux installation

- Install the latest ZED SDK v3.5.x from the official download page
- Update the list of available `apt` packages:
  $ sudo apt update
- Install the GCC compiler and build tools:
  $ sudo apt install build-essential
- Install the CMake build system:
  $ sudo apt install cmake
- Install GStreamer, the development packages and useful tools:
  $ sudo apt install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgstrtspserver-1.0-0 gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev libgstrtspserver-1.0-dev
- [Optional] Install OpenCV to build the `zedodoverlay` filter:
  $ sudo apt install libopencv-dev libopencv-contrib-dev
$ git clone https://github.com/stereolabs/zed-gstreamer.git
$ cd zed-gstreamer
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Release ..
$ make # Note: do not use the `-j` flag because parallel build is not supported
$ sudo make install
- Check the `ZED Video Source Plugin` installation by inspecting its properties: `gst-inspect-1.0 zedsrc`
- Check the `ZED Video Demuxer` installation by inspecting its properties: `gst-inspect-1.0 zeddemux`
- Check the `ZED Data Mux Plugin` installation by inspecting its properties: `gst-inspect-1.0 zeddatamux`
- Check the `ZED CSV Sink Plugin` installation by inspecting its properties: `gst-inspect-1.0 zeddatacsvsink`
- Check the `ZED Object Detection Overlay Plugin` installation by inspecting its properties: `gst-inspect-1.0 zedodoverlay`
Plugins parameters

ZED Video Source Plugin (`zedsrc`)

Most of the parameters have the same names as in the C++ SDK API, except that `_` is replaced by `-` to follow the common GStreamer formatting. An example pipeline setting a few of these properties follows the list.

- `camera-resolution`: stream resolution - {VGA (3), HD720 (2), HD1080 (1), HD2K (0)}
- `camera-fps`: stream framerate - {15, 30, 60, 100}
- `stream-type`: type of video stream - {Left image (0), Right image (1), Stereo couple (2), 16 bit depth (3), Left+Depth (4)}
- `sdk-verbose`: SDK verbose mode - {TRUE, FALSE}
- `camera-image-flip`: use the camera in forced flip/no flip or automatic mode - {No Flip (0), Flip (1), Auto (2)}
- `camera-id`: select the camera by ID - [0, 256]
- `camera-sn`: select the camera by serial number
- `svo-file-path`: SVO file path for SVO input
- `input-stream-ip`: specify the IP address when using streaming input
- `input-stream-port`: specify the port when using streaming input
- `depth-minimum-distance`: minimum depth value
- `depth-maximum-distance`: maximum depth value
- `depth-mode`: depth mode - {NONE (0), PERFORMANCE (1), QUALITY (2), ULTRA (3)}
- `coordinate-system`: 3D coordinate system - {Image (0), Left handed Y up (1), Right handed Y up (2), Right handed Z up (3), Left handed Z up (4), Right handed Z up X fwd (5)}
- `camera-disable-self-calib`: disable the self-calibration processing when the camera is opened - {TRUE, FALSE}
- `depth-stabilization`: enable depth stabilization - {TRUE, FALSE}
- `confidence-threshold`: specify the depth confidence threshold - [0,100]
- `texture-confidence-threshold`: specify the texture confidence threshold - [0,100]
- `measure3D-reference-frame`: specify the 3D reference frame - {WORLD (0), CAMERA (1)}
- `sensing-mode`: specify the depth sensing mode - {STANDARD (0), FILL (1)}
- `enable-positional-tracking`: enable positional tracking - {TRUE, FALSE}
- `set-as-static`: set to TRUE if the camera is static - {TRUE, FALSE}
- `area-file-path`: area localization file that describes the surroundings, saved from a previous tracking session
- `enable-area-memory`: this mode enables the camera to remember its surroundings. This helps correct positional tracking drift, and can be helpful for positioning different cameras relative to one another in space - {TRUE, FALSE}
- `enable-imu-fusion`: this setting allows you to enable or disable IMU fusion. When set to FALSE, only the optical odometry will be used - {TRUE, FALSE}
- `enable-pose-smoothing`: this mode enables smooth pose correction for small drift correction - {TRUE, FALSE}
- `set-floor-as-origin`: this mode initializes the tracking to be aligned with the floor plane to better position the camera in space
- `initial-world-transform-x`: X position of the camera in the world frame when the camera is started
- `initial-world-transform-y`: Y position of the camera in the world frame when the camera is started
- `initial-world-transform-z`: Z position of the camera in the world frame when the camera is started
- `initial-world-transform-roll`: roll orientation of the camera in the world frame when the camera is started
- `initial-world-transform-pitch`: pitch orientation of the camera in the world frame when the camera is started
- `initial-world-transform-yaw`: yaw orientation of the camera in the world frame when the camera is started
- `od-enabled`: enable Object Detection - {TRUE, FALSE}
- `od-image-sync`: set to TRUE to enable Object Detection frame synchronization - {TRUE, FALSE}
- `od-enable-tracking`: set to TRUE to enable tracking for the detected objects - {TRUE, FALSE}
- `od-detection-model`: Object Detection model - {Object Detection Multi class (0), Object Detection Multi class ACCURATE (1), Skeleton tracking FAST (2), Skeleton tracking ACCURATE (3), Object Detection Multi class MEDIUM (4), Skeleton tracking MEDIUM (5)}
- `od-confidence`: minimum detection confidence - [0,100]
- `od-max-range`: maximum detection range - [-1,20000]
- `od-body-fitting`: set to TRUE to enable body fitting for skeleton tracking - {TRUE, FALSE}
- `brightness`: image brightness - [0,8]
- `contrast`: image contrast - [0,8]
- `hue`: image hue - [0,11]
- `saturation`: image saturation - [0,8]
- `sharpness`: image sharpness - [0,8]
- `gamma`: image gamma - [1,9]
- `gain`: camera gain - [0,100]
- `exposure`: camera exposure - [0,100]
- `aec-agc`: automatic camera gain and exposure - {TRUE, FALSE}
- `aec-agc-roi-x`: auto gain/exposure ROI top-left 'X' coordinate (-1 to not set ROI) - [-1,2208]
- `aec-agc-roi-y`: auto gain/exposure ROI top-left 'Y' coordinate (-1 to not set ROI) - [-1,1242]
- `aec-agc-roi-w`: auto gain/exposure ROI width (-1 to not set ROI) - [-1,2208]
- `aec-agc-roi-h`: auto gain/exposure ROI height (-1 to not set ROI) - [-1,1242]
- `aec-agc-roi-side`: auto gain/exposure ROI side - {LEFT (0), RIGHT (1), BOTH (2)}
- `whitebalance-temperature`: image white balance temperature - [2800,6500]
- `whitebalance-auto`: automatic image white balance - {TRUE, FALSE}
- `led-status`: camera LED on/off - {TRUE, FALSE}
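For example, a sketch of a pipeline setting a few of the properties above (HD720 at 30 FPS, Left+Depth composite stream, ULTRA depth mode, positional tracking enabled); the values come from the enumerations listed and can be adjusted freely:

```
gst-launch-1.0 \
    zedsrc camera-resolution=2 camera-fps=30 stream-type=4 depth-mode=3 enable-positional-tracking=TRUE ! \
    queue ! autovideoconvert ! queue ! fpsdisplaysink
```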
ZED Video Demuxer (`zeddemux`)

- `is-depth`: indicates whether the bottom stream of a composite `stream-type` of the ZED Video Source Plugin is a color image (right image) or a depth map.
- `stream-data`: enable binary data streaming on the `src_data` pad - {TRUE, FALSE} (a usage sketch follows the list)
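A minimal usage sketch with both properties set; the metadata are discarded into a stock `fakesink` here, while the fuller examples below route them to `zeddatacsvsink` or `zeddatamux` instead:

```
gst-launch-1.0 \
    zedsrc stream-type=2 ! queue ! \
    zeddemux is-depth=false stream-data=TRUE name=demux \
    demux.src_left ! queue ! autovideoconvert ! autovideosink \
    demux.src_aux ! queue ! autovideoconvert ! autovideosink \
    demux.src_data ! queue ! fakesink
```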
ZED CSV Sink Plugin (`zeddatacsvsink`)

- `location`: location of the CSV file to write
- `append`: append data to an already existing CSV file
Metadata

The `zedsrc` plugin adds metadata to the video stream containing information about the original frame size, the camera position and orientation, the sensors data, and the objects and skeletons detected by the Object Detection module.
The `zeddatacsvsink` and `zedodoverlay` elements demonstrate how to handle, respectively, the sensors data and the detected object data.
The `GstZedSrcMeta` structure is provided to handle the `zedmeta` metadata and is available in the `gstzedmeta` library.
`GstZedSrcMeta` is subdivided into four sub-structures:

- `ZedInfo`: info about camera model, stream type and original stream size
- `ZedPose`: position and orientation of the camera, if positional tracking is enabled
- `ZedSensors`: sensors data (only ZED-Mini and ZED2)
- `ZedObjectData`: detected object information (only ZED2)

More details about the sub-structures are available in the `gstzedmeta.h` file.
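As a sketch, a single pipeline can feed both example consumers at once: the detected-object overlays are drawn on the left image while the metadata are written to a CSV file. The output path is just an illustration, and Object Detection requires a camera that supports it (e.g. a ZED2):

```
gst-launch-1.0 \
    zedsrc stream-type=4 od-enabled=true ! queue ! \
    zeddemux stream-data=TRUE name=demux \
    demux.src_left ! queue ! zedodoverlay ! queue ! autovideoconvert ! fpsdisplaysink \
    demux.src_aux ! queue ! autovideoconvert ! fpsdisplaysink \
    demux.src_data ! queue ! zeddatacsvsink location="${HOME}/zed_meta.csv" append=FALSE
```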
Example pipelines

- Linux: `simple-fps_rendering.sh`
- Windows: `simple-fps_rendering.bat`
gst-launch-1.0 zedsrc ! queue ! autovideoconvert ! queue ! fpsdisplaysink
- Linux: `simple-depth-fps_rendering.sh`
- Windows: `simple-depth-fps_rendering.bat`
gst-launch-1.0 zedsrc stream-type=3 ! queue ! autovideoconvert ! queue ! fpsdisplaysink
gst-launch-1.0 \
zedsrc stream-type=2 ! queue ! \
zeddemux is-depth=false name=demux \
demux.src_left ! queue ! autovideoconvert ! fpsdisplaysink \
demux.src_aux ! queue ! autovideoconvert ! fpsdisplaysink
gst-launch-1.0 \
zedsrc stream-type=4 ! queue ! \
zeddemux name=demux \
demux.src_left ! queue ! autovideoconvert ! fpsdisplaysink \
demux.src_aux ! queue ! autovideoconvert ! fpsdisplaysink
- Linux: `local-rgb-depth-sens-csv.sh`
- Windows: `local-rgb-depth-sens-csv.bat`
gst-launch-1.0 \
zedsrc stream-type=4 ! \
zeddemux stream-data=TRUE name=demux \
demux.src_left ! queue ! autovideoconvert ! fpsdisplaysink \
demux.src_aux ! queue ! autovideoconvert ! fpsdisplaysink \
demux.src_data ! queue ! zeddatacsvsink location="${HOME}/test_csv.csv" append=FALSE
- Linux: `local-rgb-od_multi-overlay.sh`
- Windows: `local-rgb-od_multi-overlay.bat`
gst-launch-1.0 \
zedsrc stream-type=0 od-enabled=true od-detection-model=0 camera-resolution=2 camera-fps=30 ! queue ! \
zedodoverlay ! queue ! \
autovideoconvert ! fpsdisplaysink
- Linux: `local-rgb-skel_fast-overlay.sh`
- Windows: `local-rgb-skel_fast-overlay.bat`
gst-launch-1.0 \
zedsrc stream-type=2 od-enabled=true od-detection-model=1 camera-resolution=0 camera-fps=15 ! queue ! \
zedodoverlay ! queue ! \
autovideoconvert ! fpsdisplaysink
gst-launch-1.0 \
zedsrc stream-type=0 od-enabled=true od-detection-model=2 camera-resolution=0 camera-fps=15 ! queue ! \
zedodoverlay ! queue ! \
autovideoconvert ! fpsdisplaysink
Local Left/Depth stream + Fast Skeleton Tracking result displaying + demux + Skeleton Tracking result displaying + Depth displaying with FPS
- Linux: `local-od-fps_overlay.sh`
- Windows: `local-od-fps_overlay.bat`
gst-launch-1.0 \
zedsrc stream-type=4 camera-resolution=2 camera-fps=30 od-enabled=true od-detection-model=1 ! \
zeddemux name=demux \
demux.src_left ! queue ! zedodoverlay ! queue ! autovideoconvert ! fpsdisplaysink \
demux.src_aux ! queue ! autovideoconvert ! fpsdisplaysink
Local Left/Depth stream + Fast Skeleton Tracking result displaying + demux + rescaling + remux + Skeleton Tracking result displaying + Depth displaying with FPS
- Linux: `local-rgb-rescale-od-overlay.sh`
- Windows: `local-rgb-rescale-od-overlay.bat`
gst-launch-1.0 \
zeddatamux name=mux \
zedsrc stream-type=4 camera-resolution=0 camera-fps=15 od-enabled=true od-detection-model=1 ! \
zeddemux stream-data=true is-depth=true name=demux \
demux.src_aux ! queue ! autovideoconvert ! videoscale ! video/x-raw,width=672,height=376 ! queue ! fpsdisplaysink \
demux.src_data ! mux.sink_data \
demux.src_left ! queue ! videoscale ! video/x-raw,width=672,height=376 ! mux.sink_video \
mux.src ! queue ! zedodoverlay ! queue ! \
autovideoconvert ! fpsdisplaysink
RTSP Server
Available only for Linux.

An application to start an RTSP server from a text pipeline (using the same syntax as the `gst-launch-1.0` CLI command) is provided.
Usage:

  gst-zed-rtsp-launch [OPTION?] PIPELINE-DESCRIPTION

Help Options:
- `-h`, `--help` -> Show help options.
- `--help-all` -> Show all help options.
- `--help-gst` -> Show GStreamer options.

Application Options:
- `-p`, `--port=PORT` -> Port to listen on (default: 8554).
- `-a`, `--address=HOST` -> Host address (default: 127.0.0.1).
Example:
gst-zed-rtsp-launch zedsrc ! videoconvert ! 'video/x-raw, format=(string)I420' ! x264enc ! rtph264pay pt=96 name=pay0
It is mandatory to define at least one payload named `pay0`; it is possible to define multiple payloads using an increasing index (e.g. `pay1`, `pay2`, ...).
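On the client side, a sketch of a receiving pipeline for the H.264 example above, built from stock GStreamer elements (`rtspsrc`, `rtph264depay`, `avdec_h264`); replace `<mount-point>` with the mount point exposed by your gst-zed-rtsp-launch instance, and the host/port with the values passed via `-a`/`-p`:

```
gst-launch-1.0 \
    rtspsrc location=rtsp://127.0.0.1:8554/<mount-point> latency=100 ! \
    rtph264depay ! avdec_h264 ! autovideoconvert ! autovideosink
```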
Related

Ready-to-use scripts are available in the `scripts/` folder for Windows and Linux:

- `local-od-fps_overlay`: Left and Depth image rendering with object detection (FAST HUMAN BODY TRACKING) data overlay.
- `local-rgb-depth-sens-csv`: Left and Depth image rendering and sensors data saved to a CSV file.
- `local-rgb_left_depth-fps_rendering`: Left and Depth image rendering.
- `local-rgb_left_right-fps_rendering`: Left and Right image rendering.
- `local-rgb-od_multi-overlay`: Left image rendering with object detection overlay (MULTI_CLASS).
- `local-rgb-rescale-od-overlay`: Left and Depth image rendering with object detection and a rescaling filter.
- `local-rgb-skel_accurate-overlay`: Left image rendering with human body pose ACCURATE overlay.
- `local-rgb-skel_fast-overlay`: Left/Right top/bottom image rendering with human body pose FAST overlay.
- [Linux only] UDP and RTSP sender/receiver scripts.
This library is licensed under the LGPL License.
If you need assistance, go to our Community site at https://community.stereolabs.com/