
DeepStream series: SDK folder analysis

sample_apps
├── deepstream-app
├    End-to-end example that demonstrates 4 cascaded neural networks (1 primary detector and 3 secondary classifiers) on multi-camera streams and displays tiled output.
├── deepstream-dewarper-test
├    Demonstrates dewarping for single or multiple 360-degree camera streams. Reads camera calibration parameters from CSV files
├      and renders aisle and spot surfaces on the display.
├── deepstream-gst-metadata-test
├    Demonstrates how to set metadata before the Gst-nvstreammux plugin in a DeepStream pipeline,
├      and how to access it after Gst-nvstreammux; the attach step is sketched below.
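├    A minimal sketch of the attach step, assuming a hypothetical SensorInfo payload and type string (the sample uses its own struct and helpers): a pad probe upstream of Gst-nvstreammux attaches the payload with gst_buffer_add_nvds_meta.

      #include <gst/gst.h>
      #include <string.h>
      #include "gstnvdsmeta.h"

      typedef struct { guint64 frame_num; } SensorInfo;   /* hypothetical payload */

      /* Deep-copy callback; receives the NvDsMeta wrapper. */
      static gpointer
      sensor_meta_copy_func (gpointer data, gpointer user_data)
      {
        NvDsMeta *meta = (NvDsMeta *) data;
        SensorInfo *dst = g_malloc0 (sizeof (SensorInfo));
        memcpy (dst, meta->meta_data, sizeof (SensorInfo));
        return dst;
      }

      static void
      sensor_meta_free_func (gpointer data, gpointer user_data)
      {
        NvDsMeta *meta = (NvDsMeta *) data;
        g_free (meta->meta_data);
        meta->meta_data = NULL;
      }

      /* Pad probe installed upstream of Gst-nvstreammux (e.g. on the decoder's
       * src pad): attach the payload, tagged with a registered user meta type
       * so it can be recovered downstream of the mux. */
      static GstPadProbeReturn
      before_mux_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
      {
        GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
        SensorInfo *payload = g_new0 (SensorInfo, 1);
        NvDsMeta *meta = gst_buffer_add_nvds_meta (buf, payload, NULL,
            sensor_meta_copy_func, sensor_meta_free_func);
        meta->meta_type = (GstNvDsMetaType)
            nvds_get_user_meta_type ((gchar *) "EXAMPLE.SENSOR.GST_USER_META");
        return GST_PAD_PROBE_OK;
      }
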
├── deepstream-image-decode-test
├    Built on deepstream-test3 to demonstrate image decoding instead of video. This example uses a custom decode bin,
├      so that the MJPEG codec can be used as input.
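├    As a rough illustration, an equivalent pipeline can be expressed with gst_parse_launch; the sample itself builds the decode bin programmatically, and the mjpeg property of nvv4l2decoder as well as all file/config names here are assumptions:

      #include <gst/gst.h>

      int
      main (int argc, char *argv[])
      {
        gst_init (&argc, &argv);
        GError *err = NULL;
        /* nvv4l2decoder's mjpeg property is an assumption; adjust per platform. */
        GstElement *pipeline = gst_parse_launch (
            "multifilesrc location=frame.%04d.jpg caps=image/jpeg,framerate=30/1 "
            "! jpegparse ! nvv4l2decoder mjpeg=1 ! m.sink_0 "
            "nvstreammux name=m batch-size=1 width=1280 height=720 "
            "! nvinfer config-file-path=pgie_config.txt "
            "! nvvideoconvert ! nvdsosd ! nveglglessink", &err);
        if (!pipeline) {
          g_printerr ("Failed to build pipeline: %s\n", err->message);
          g_clear_error (&err);
          return -1;
        }
        gst_element_set_state (pipeline, GST_STATE_PLAYING);
        GstBus *bus = gst_element_get_bus (pipeline);
        GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
            GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        if (msg)
          gst_message_unref (msg);
        gst_object_unref (bus);
        gst_element_set_state (pipeline, GST_STATE_NULL);
        gst_object_unref (pipeline);
        return 0;
      }
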
├── deepstream-infer-tensor-meta-test
├    Demonstrates how to pass the nvinfer tensor output as metadata and access it downstream; a probe sketch follows.
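├    A sketch of reading that metadata in a pad probe on the nvinfer src pad (assumes the nvinfer instance has its output-tensor-meta property enabled; this mirrors the general pattern rather than the sample verbatim):

      #include <gst/gst.h>
      #include "gstnvdsmeta.h"
      #include "gstnvdsinfer.h"

      /* Probe on the nvinfer src pad: tensor output is attached per frame as
       * user metadata of type NVDSINFER_TENSOR_OUTPUT_META. */
      static GstPadProbeReturn
      pgie_src_probe (GstPad *pad, GstPadProbeInfo *info, gpointer u_data)
      {
        GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
        NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
        NvDsMetaList *l_frame, *l_user;

        for (l_frame = batch_meta->frame_meta_list; l_frame; l_frame = l_frame->next) {
          NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
          for (l_user = frame_meta->frame_user_meta_list; l_user; l_user = l_user->next) {
            NvDsUserMeta *user_meta = (NvDsUserMeta *) l_user->data;
            if (user_meta->base_meta.meta_type != NVDSINFER_TENSOR_OUTPUT_META)
              continue;
            NvDsInferTensorMeta *tmeta =
                (NvDsInferTensorMeta *) user_meta->user_meta_data;
            for (guint i = 0; i < tmeta->num_output_layers; i++) {
              NvDsInferLayerInfo *layer = &tmeta->output_layers_info[i];
              void *host_buf = tmeta->out_buf_ptrs_host[i];  /* raw tensor data */
              g_print ("output layer %u: %s (%p)\n", i, layer->layerName, host_buf);
            }
          }
        }
        return GST_PAD_PROBE_OK;
      }
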
├── deepstream-nvof-test
├    Demonstrates the optical flow function for single or multiple streams. This example uses two GStreamer plugins (Gst-nvof and Gst-nvofvisual).
├    The Gst-nvof element generates MV (motion vector) data and attaches it as user metadata. The Gst-nvofvisual element
├    visualizes the MV data using a predefined color wheel matrix.
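├    The attached MV data can be read with the same probe pattern as the tensor-meta sketch above (a sketch; types come from nvds_opticalflow_meta.h, and the one-vector-per-4x4-block grid is an assumption based on the plugin's documentation):

      /* inside the frame_user_meta_list loop of a probe downstream of Gst-nvof */
      if (user_meta->base_meta.meta_type == NVDS_OPTICAL_FLOW_META) {
        NvDsOpticalFlowMeta *of_meta =
            (NvDsOpticalFlowMeta *) user_meta->user_meta_data;
        NvOFFlowVector *mv = (NvOFFlowVector *) of_meta->data;  /* rows x cols grid */
        g_print ("frame %lu: %u x %u motion vectors, first = (%d, %d)\n",
            (gulong) of_meta->frame_num, of_meta->rows, of_meta->cols,
            mv[0].flowx, mv[0].flowy);
      }
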
├── deepstream-perf-demo
├    Performs single-channel cascaded inferencing and object tracking sequentially on all streams in a directory.
├── deepstream-segmentation-test
├    Demonstrates segmentation of multi-stream video or images using semantic or industrial neural networks and renders the output to the display.
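├    Using the same probe pattern again, the segmentation output can be read from frame-level user metadata (a sketch; field names follow NvDsInferSegmentationMeta in gstnvdsinfer.h):

      /* inside the frame_user_meta_list loop of a probe downstream of the
       * segmentation nvinfer instance */
      if (user_meta->base_meta.meta_type == NVDSINFER_SEGMENTATION_META) {
        NvDsInferSegmentationMeta *smeta =
            (NvDsInferSegmentationMeta *) user_meta->user_meta_data;
        /* class_map holds one class index per pixel of the network output */
        g_print ("segmentation: %u classes, map %u x %u\n",
            smeta->classes, smeta->width, smeta->height);
      }
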
├── deepstream-test1
├    A simple example of how to use DeepStream elements for a single H.264 stream: filesrc → decode → nvstreammux → nvinfer
├   (primary detector) → nvosd → renderer.
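├    A condensed sketch of that chain (element names as used by DeepStream; file and config names are placeholders). Note that the decoder feeds nvstreammux through a request pad:

      #include <gst/gst.h>

      static GstElement *
      build_test1_pipeline (void)
      {
        GstElement *pipeline = gst_pipeline_new ("ds-test1");
        GstElement *source  = gst_element_factory_make ("filesrc", "file-source");
        GstElement *parser  = gst_element_factory_make ("h264parse", "h264-parser");
        GstElement *decoder = gst_element_factory_make ("nvv4l2decoder", "decoder");
        GstElement *mux     = gst_element_factory_make ("nvstreammux", "stream-muxer");
        GstElement *pgie    = gst_element_factory_make ("nvinfer", "primary-nvinfer");
        GstElement *conv    = gst_element_factory_make ("nvvideoconvert", "converter");
        GstElement *osd     = gst_element_factory_make ("nvdsosd", "onscreendisplay");
        GstElement *sink    = gst_element_factory_make ("nveglglessink", "renderer");

        g_object_set (source, "location", "sample_720p.h264", NULL);
        g_object_set (mux, "batch-size", 1, "width", 1920, "height", 1080, NULL);
        g_object_set (pgie, "config-file-path", "pgie_config.txt", NULL);

        gst_bin_add_many (GST_BIN (pipeline), source, parser, decoder, mux,
            pgie, conv, osd, sink, NULL);
        gst_element_link_many (source, parser, decoder, NULL);

        /* the decoder feeds the muxer through a request pad (sink_0) */
        GstPad *mux_pad = gst_element_get_request_pad (mux, "sink_0");
        GstPad *dec_pad = gst_element_get_static_pad (decoder, "src");
        gst_pad_link (dec_pad, mux_pad);
        gst_object_unref (dec_pad);
        gst_object_unref (mux_pad);

        gst_element_link_many (mux, pgie, conv, osd, sink, NULL);
        return pipeline;
      }
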
├── deepstream-test2
├    Simple application, built on test1, that displays additional attributes such as tracking IDs and secondary classification results.
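├    Relative to test1, the chain gains an nvtracker element and secondary nvinfer instances. A rough sketch of the extra wiring, extending the test1 sketch above (the tracker library path and config file names are placeholders):

      /* Inserted between pgie and conv in the test1-style chain above. */
      GstElement *tracker = gst_element_factory_make ("nvtracker", "tracker");
      GstElement *sgie1 = gst_element_factory_make ("nvinfer", "secondary1");
      GstElement *sgie2 = gst_element_factory_make ("nvinfer", "secondary2");
      GstElement *sgie3 = gst_element_factory_make ("nvinfer", "secondary3");

      g_object_set (tracker, "ll-lib-file",
          "/opt/nvidia/deepstream/lib/libnvds_nvmultiobjecttracker.so", NULL);
      g_object_set (sgie1, "config-file-path", "sgie1_config.txt", NULL);
      /* sgie2/sgie3 need their own config-file-path settings as well */

      gst_bin_add_many (GST_BIN (pipeline), tracker, sgie1, sgie2, sgie3, NULL);
      gst_element_link_many (pgie, tracker, sgie1, sgie2, sgie3, conv, NULL);
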
├── deepstream-test3
├    Built on deepstream-test1 (simple test application 1) to demonstrate how to:
├   • Use multiple sources in the pipeline
├   • Use uridecodebin to accept any type of input (e.g. RTSP/file), any container format supported by GStreamer, and any codec
├   • Configure Gst-nvstreammux to generate a batch of frames and run inference on the batch for better resource utilization
├   • Extract stream metadata, which contains useful information about the frames in the batched buffer
├   The dynamic source wiring is sketched below.
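├    A hedged sketch of that wiring, assuming the usual pad-added pattern (helper names and the simplistic stream counter are illustrative, not the sample's exact code):

      #include <gst/gst.h>

      /* Each uridecodebin pad carrying video is linked to its own
       * nvstreammux request pad. */
      static void
      on_pad_added (GstElement *decodebin, GstPad *pad, gpointer data)
      {
        GstElement *streammux = GST_ELEMENT (data);
        GstCaps *caps = gst_pad_query_caps (pad, NULL);
        const gchar *name =
            gst_structure_get_name (gst_caps_get_structure (caps, 0));

        if (g_str_has_prefix (name, "video/")) {
          static guint stream_index = 0;      /* simplification for the sketch */
          gchar *pad_name = g_strdup_printf ("sink_%u", stream_index++);
          GstPad *mux_pad = gst_element_get_request_pad (streammux, pad_name);
          gst_pad_link (pad, mux_pad);
          gst_object_unref (mux_pad);
          g_free (pad_name);
        }
        gst_caps_unref (caps);
      }

      /* For each input URI (file, RTSP, ...); the mux batch-size should be
       * set to the total number of sources. */
      static GstElement *
      add_source (GstElement *pipeline, GstElement *streammux, const gchar *uri)
      {
        GstElement *src = gst_element_factory_make ("uridecodebin", NULL);
        g_object_set (src, "uri", uri, NULL);
        g_signal_connect (src, "pad-added", G_CALLBACK (on_pad_added), streammux);
        gst_bin_add (GST_BIN (pipeline), src);
        return src;
      }
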
├── deepstream-test4
├   Built on deepstream-test1 (a single H.264 stream: filesrc, decode, nvstreammux, nvinfer, nvosd, renderer) to demonstrate how to:
├   • Use the Gst-nvmsgconv and Gst-nvmsgbroker plugins in the pipeline
├   • Create metadata of type NVDS_META_EVENT_MSG and attach it to the buffer
├   • Use NVDS_META_EVENT_MSG for different types of objects, such as vehicles and people
├   • Implement the "copy" and "free" functions for metadata extended through the extMsg field
├   The attach step is sketched below.
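├    A hedged sketch of the attach step for a detected vehicle (meta_copy_func and meta_free_func stand for the user-implemented deep-copy and free functions; the full field population is in the sample):

      #include <glib.h>
      #include "nvdsmeta.h"
      #include "nvdsmeta_schema.h"

      /* user-supplied deep-copy / free for the msg meta (see the sample) */
      gpointer meta_copy_func (gpointer data, gpointer user_data);
      void meta_free_func (gpointer data, gpointer user_data);

      /* Called from a probe for a chosen object. */
      static void
      attach_event_msg_meta (NvDsBatchMeta *batch_meta, NvDsFrameMeta *frame_meta,
          NvDsObjectMeta *obj_meta)
      {
        NvDsEventMsgMeta *msg_meta = g_malloc0 (sizeof (NvDsEventMsgMeta));
        msg_meta->type = NVDS_EVENT_MOVING;
        msg_meta->objType = NVDS_OBJECT_TYPE_VEHICLE;   /* or ..._PERSON, etc. */
        msg_meta->bbox.top = obj_meta->rect_params.top;
        msg_meta->bbox.left = obj_meta->rect_params.left;
        msg_meta->bbox.width = obj_meta->rect_params.width;
        msg_meta->bbox.height = obj_meta->rect_params.height;
        msg_meta->trackingId = obj_meta->object_id;

        NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
        user_meta->user_meta_data = msg_meta;
        user_meta->base_meta.meta_type = NVDS_EVENT_MSG_META;
        user_meta->base_meta.copy_func = (NvDsMetaCopyFunc) meta_copy_func;
        user_meta->base_meta.release_func = (NvDsMetaReleaseFunc) meta_free_func;
        nvds_add_user_meta_to_frame (frame_meta, user_meta);
      }
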
├── deepstream-test5
├   Built on deepstream-app. Demonstrates:
├   • Using the Gst-nvmsgconv and Gst-nvmsgbroker plugins in a multi-stream pipeline
├   • Configuring the Gst-nvmsgbroker plugin from a configuration file as a sink plugin (for Kafka, Azure, etc.)
├   • Handling RTCP sender reports from RTSP servers or cameras and converting the Gst Buffer PTS to a UTC timestamp
├    For details, see the RTCP sender report callback function test5_rtcp_sender_report_callback, registered and used in deepstream_test5_app_main.c.
├   Registering the GStreamer callback with the "handle-sync" signal of the rtpmanager element is documented in apps-common/src/deepstream_source_bin.c. The underlying NTP-to-UTC arithmetic is sketched below.
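├    A minimal sketch of the conversion involved: an RTCP sender report carries a 64-bit NTP timestamp in 32.32 fixed-point format with a 1900 epoch (the callback names above are the sample's; this helper is illustrative):

      #include <glib.h>

      /* NTP epoch (1900) to Unix epoch (1970) offset in seconds. */
      #define NTP_TO_UNIX_EPOCH_OFFSET 2208988800ULL

      /* Convert a 32.32 fixed-point NTP timestamp, as carried in an RTCP
       * sender report, to UTC seconds since the Unix epoch. */
      static gdouble
      ntp_to_utc_seconds (guint64 ntp_ts)
      {
        guint64 seconds  = ntp_ts >> 32;            /* integer part */
        guint64 fraction = ntp_ts & 0xFFFFFFFFULL;  /* units of 1/2^32 s */
        return (gdouble) (seconds - NTP_TO_UNIX_EPOCH_OFFSET)
            + (gdouble) fraction / 4294967296.0;
      }
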
├── deepstream-user-metadata-test
├    Demonstrates how to add custom or user-specific metadata to any component of DeepStream. The test code attaches a 16-byte
├    array filled with user data to a chosen component; the data is then retrieved in another component. The pattern is sketched below.
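├    A sketch of that pattern under assumed helper names and an illustrative type string (the sample uses its own names and chooses its own attach point):

      #include <string.h>
      #include <glib.h>
      #include "nvdsmeta.h"

      #define USER_ARRAY_SIZE 16

      /* Deep-copy and free callbacks; both receive the NvDsUserMeta pointer. */
      static gpointer
      copy_user_meta (gpointer data, gpointer user_data)
      {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
        gpointer dst = g_malloc0 (USER_ARRAY_SIZE);
        memcpy (dst, user_meta->user_meta_data, USER_ARRAY_SIZE);
        return dst;
      }

      static void
      release_user_meta (gpointer data, gpointer user_data)
      {
        NvDsUserMeta *user_meta = (NvDsUserMeta *) data;
        g_free (user_meta->user_meta_data);
        user_meta->user_meta_data = NULL;
      }

      /* Attach a 16-byte payload to a frame; downstream components find it
       * again by matching the registered meta type. */
      static void
      attach_user_meta (NvDsBatchMeta *batch_meta, NvDsFrameMeta *frame_meta)
      {
        NvDsUserMeta *user_meta = nvds_acquire_user_meta_from_pool (batch_meta);
        guint8 *payload = g_malloc0 (USER_ARRAY_SIZE);
        for (guint i = 0; i < USER_ARRAY_SIZE; i++)
          payload[i] = i;                           /* fill with user data */
        user_meta->user_meta_data = payload;
        user_meta->base_meta.meta_type =
            nvds_get_user_meta_type ((gchar *) "EXAMPLE.APP.USER_META");
        user_meta->base_meta.copy_func = copy_user_meta;
        user_meta->base_meta.release_func = release_user_meta;
        nvds_add_user_meta_to_frame (frame_meta, user_meta);
      }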