Add Timestamp to stream OSD @ Ingest · Issue #187 · evercam/ex_nvr · GitHub


Open
marcoherbst opened this issue Oct 3, 2023 · 7 comments

Comments

@marcoherbst
Member

Similar to undistorting.

Using different cameras we're getting different qualities of timestamp.

Let's offer the option to ingest an image with no timestamp and to add the NVR timestamp.

Benefit 1: It will be more beautiful and consistent across cameras.
Benefit 2: We remove one possible source of confusion (a camera with the wrong time settings).

@gBillal
Member
gBillal commented Oct 3, 2023

Unfortunately, there's a downside: the stream has to be decoded and re-encoded again, which is impossible on a Raspberry Pi and will consume a lot of resources on a Jetson.

@marcoherbst
Member Author

We have to try. If it means putting more powerful hardware on the edge, so be it. Let's try, gather the data, and then better understand our next actions. We will be building this functionality.

@marcoherbst
Member Author
marcoherbst commented Oct 28, 2023

https://github.com/membraneframework/membrane_ffmpeg_video_filter_plugin

TextOverlay element is implemented, based on ffmpeg drawtext filter. This element enables adding text on top of given raw video frames.
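For reference, burning a wall-clock timestamp via ffmpeg's drawtext filter boils down to a single video filter plus a full decode/re-encode. A minimal sketch of the command line (file names are hypothetical, and this is not how ex_nvr/Membrane would actually invoke it):

```python
def drawtext_command(input_path: str, output_path: str) -> list[str]:
    """Build an ffmpeg invocation that overlays the host's wall-clock
    time on each frame via the drawtext filter. Hypothetical file names."""
    # %{localtime} is expanded by drawtext to the encoding host's local time
    overlay = (
        "drawtext=text='%{localtime}'"
        ":x=10:y=10:fontcolor=white:box=1:boxcolor=black@0.5"
    )
    return [
        "ffmpeg", "-i", input_path,
        "-vf", overlay,   # video filter forces a decode + re-encode
        "-c:a", "copy",   # audio, if any, passes through untouched
        output_path,
    ]

cmd = drawtext_command("camera.h264", "stamped.mp4")
```

This is exactly the cost Billal flagged above: the `-vf` path means every frame is decoded and re-encoded.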

In this example they use HTML to define an overlay. That seems clever:
https://developer.ridgerun.com/wiki/index.php/OpenGL_Accelerated_HTML_Overlay/Basics
https://shop.ridgerun.com/products/htmloverlay?variant=44683564646587

Also:
https://compositor.live/ (Part of Membrane)

@marcoherbst marcoherbst added this to the Backlog (Long Finger) milestone Nov 9, 2023
@marcoherbst marcoherbst changed the title Add Timestamp Watermark @ Ingest Add Timestamp to stream OSD @ Ingest Jul 23, 2024
@marcoherbst marcoherbst removed this from the Backlog (Long Finger) milestone Jul 23, 2024
@oussamabonnor1
Contributor

@halimb
Member
halimb commented May 1, 2025

@oussamabonnor1 no, the "x-timestamp" header is just an HTTP header.
I think this issue is about:
1. Configuring the camera not to overlay timestamp text on the video stream.
2. Making ex_nvr render a timestamp overlay on the video stream (using ffmpeg, as pixels on the video itself).

@halimb
Member
halimb commented May 1, 2025

Personal opinion on this issue:
Although we haven't tried it, I agree with Billal that it would be costly to burn the timestamp on the RTSP stream (needs decoding / re-encoding). In addition to the hardware cost, it will inevitably introduce an additional delay for the consumers of the feed.

Favorite option:
Get as close to the camera as possible (let the camera be the source of truth)

Either using:

  • RTSP metadata and/or EXIF data (when available; vendor-dependent), for snapshots. Strangely, I tried the EXIF approach with Milesight recently, and their EXIF timestamp is actually incorrect (delayed by 1 second on average compared to the OSD time).
  • (costly, but efficient) Running OCR every few seconds to calculate the drift from the camera timestamp and adjust ExNVR's internal clock.
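Once a camera timestamp has been read off a frame, the drift-correction step in the OCR idea is plain clock arithmetic. A minimal sketch, assuming the OCR step itself is solved elsewhere (the sample timestamps below are made up):

```python
from datetime import datetime, timedelta

def clock_drift(camera_time: datetime, nvr_time: datetime) -> timedelta:
    """Drift of the camera's OSD clock relative to the NVR clock,
    measured at the same instant. Positive = camera is ahead."""
    return camera_time - nvr_time

def corrected_timestamp(nvr_time: datetime, drift: timedelta) -> datetime:
    """Map an NVR-side timestamp onto the camera's timeline."""
    return nvr_time + drift

# Hypothetical sample: OCR read 12:00:05 off the OSD while the NVR
# wall clock said 12:00:03, so the camera runs 2 seconds fast.
drift = clock_drift(
    datetime(2023, 10, 3, 12, 0, 5),
    datetime(2023, 10, 3, 12, 0, 3),
)
```

Re-measuring every few seconds and smoothing the result would keep ExNVR's view of the camera clock current without touching the video itself.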

Least favorite:
If we really want ex_nvr to decide and encode timestamps, we could go for a soft overlay (i.e. subtitles in an SRT file, muxed with the MP4). We could then use that file later to burn in the timestamps when a snapshot is requested or when an HLS stream is started.
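A timestamp sidecar like that is cheap to produce, since no video decoding is involved. A minimal sketch of a per-second SRT track (function names are illustrative, not part of ex_nvr):

```python
from datetime import datetime, timedelta

def srt_time(t: timedelta) -> str:
    """Format an offset from stream start as an SRT timestamp: HH:MM:SS,mmm."""
    total_ms = int(t.total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def timestamp_srt(start: datetime, seconds: int) -> str:
    """One subtitle cue per second, each displaying the wall-clock time."""
    cues = []
    for i in range(seconds):
        begin, end = timedelta(seconds=i), timedelta(seconds=i + 1)
        wall = (start + begin).strftime("%Y-%m-%d %H:%M:%S")
        cues.append(f"{i + 1}\n{srt_time(begin)} --> {srt_time(end)}\n{wall}\n")
    return "\n".join(cues)
```

Players that support subtitle tracks can then toggle the timestamp on and off, and the same cues can drive a later burn-in for snapshots or HLS.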

@marcoherbst
Member Author

Yes to all of this, including the statement that we should not be decoding/encoding the camera stream - for now.

But, on the not-too-distant roadmap, two things are likely to happen:
a) We may be decoding/encoding in order to do computer vision on the edge.
b) We may be supporting USB or other non-RTSP video inputs that provide raw frames, so no decoding is required, only encoding. In that story, the equation looks different.

It's ok to close this or to put it on the long-finger. It's valid, but not now, as I see it.
