Video Capture
Description
The Video Capture package allows developers to access the camera streams of the Lynx-R1: RGB, Tracking and Handtracking.
Frame formats:
RGB cameras
- YUV - NV21
- Resolution: 1536 x 1404 per eye
- Buffer size: 1536 x 1404 x 1.5 bytes per eye
Tracking cameras
- Grayscale - 8 bits per pixel
- Resolution: 1280 x 400 (640 x 400 per eye; the two eye buffers are merged into one)
- Buffer size: 1280 x 400 bytes (single merged buffer)
Handtracking cameras
- Grayscale - 8 bits per pixel
- Resolution: 400 x 400
- Buffer size: 400 x 400 bytes per eye
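The buffer sizes above follow directly from the pixel formats: NV21 stores a full-resolution Y plane plus a half-resolution interleaved VU plane (1.5 bytes per pixel), while the grayscale sensors use 1 byte per pixel. A quick sanity check:

```csharp
// Expected buffer sizes per sensor, derived from the formats above.
// NV21: 1 byte/pixel luma + 0.5 byte/pixel chroma = 1.5 bytes/pixel.
uint rgbSize = (uint)(1536 * 1404 * 1.5); // 3,234,816 bytes per eye
uint trackingSize = 1280 * 400;           // 512,000 bytes (merged stereo buffer)
uint handtrackingSize = 400 * 400;        // 160,000 bytes per eye
```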
Get started
1. Start the camera you want to stream with the max expected FPS.
LynxCaptureAPI.StartCapture(LynxCaptureAPI.ESensorType.RGB, fps); // Open RGB cameras stream
LynxCaptureAPI.StartCapture(LynxCaptureAPI.ESensorType.TRACKING, fps); // Open Tracking cameras stream
LynxCaptureAPI.StartCapture(LynxCaptureAPI.ESensorType.HANDTRACKING, fps); // Open Handtracking cameras stream
2. Attach your callback, called each time a frame is received
LynxCaptureAPI.onRGBFrames += OnCallback; // Attach function called at each frame from the RGB stream
LynxCaptureAPI.onTrackingFrames += OnCallback; // Attach function called at each frame from the Tracking stream
LynxCaptureAPI.onHandtrackingFrames += OnCallback; // Attach function called at each frame from the Handtracking stream
public void OnCallback(LynxFrameInfo frameInfo)
{
// Process the frame here
}
3. Stop the camera
LynxCaptureAPI.StopCapture(LynxCaptureAPI.ESensorType.RGB); // Stop RGB cameras stream
LynxCaptureAPI.StopCapture(LynxCaptureAPI.ESensorType.TRACKING); // Stop Tracking cameras stream
LynxCaptureAPI.StopCapture(LynxCaptureAPI.ESensorType.HANDTRACKING); // Stop Handtracking cameras stream
LynxCaptureAPI.StopAllCameras(); // Stop all cameras stream
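Putting the three steps together, a minimal component might look like the sketch below. Note that the callback is invoked outside Unity's main thread, so any Unity API work must be deferred; `30` FPS is just an example value:

```csharp
using UnityEngine;

public class RgbCaptureSample : MonoBehaviour
{
    void OnEnable()
    {
        // Subscribe before starting so no frame is missed.
        LynxCaptureAPI.onRGBFrames += OnFrame;
        LynxCaptureAPI.StartCapture(LynxCaptureAPI.ESensorType.RGB, 30);
    }

    void OnDisable()
    {
        LynxCaptureAPI.onRGBFrames -= OnFrame;
        LynxCaptureAPI.StopCapture(LynxCaptureAPI.ESensorType.RGB);
    }

    private void OnFrame(LynxFrameInfo frameInfo)
    {
        // Called for each captured frame; process frameInfo here.
    }
}
```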
LynxFrameInfo
- version: UInt32
- width: UInt32
- height: UInt32
- bufferSize: UInt32
- leftEyeBuffer: IntPtr
- rightEyeBuffer: IntPtr
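`leftEyeBuffer` and `rightEyeBuffer` are native pointers, so the pixel data must be copied into managed memory before it can be used from C#. A sketch for one frame (`bufferSize` already accounts for the pixel format):

```csharp
using System.Runtime.InteropServices;

// Copy both native eye buffers into managed byte arrays.
byte[] left = new byte[frameInfo.bufferSize];
byte[] right = new byte[frameInfo.bufferSize];
Marshal.Copy(frameInfo.leftEyeBuffer, left, 0, (int)frameInfo.bufferSize);
Marshal.Copy(frameInfo.rightEyeBuffer, right, 0, (int)frameInfo.bufferSize);
```

In practice, allocate the arrays once and reuse them across frames to avoid per-frame garbage collection pressure.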
Samples
The package contains samples to help you get started with the Video Capture.
- DisplayAll - display all 6 cameras in one scene.
- DisplayBothEyes - display the RGB stream for each eye in a different frame.
- DisplayCaptureCamera - basic RGB capture and display for one eye.
- DisplayMixedCapture - display AR + VR capture.
- DisplayMixedCaptureResized - display AR + VR capture with resizing to reduce processing time.
- DisplayResizedCaptureCamera - resize and display the RGB stream.
LynxCaptureAPI
A list of available features using LynxCaptureAPI:
public static bool IsCaptureRunning; // Check if a capture is currently running
public static OnFrameDelegate onRGBFrames; // Event to subscribe to the RGB cameras stream
public static OnFrameDelegate onTrackingFrames; // Event to subscribe to the Tracking cameras stream
public static OnFrameDelegate onHandtrackingFrames; // Event to subscribe to the Handtracking cameras stream
public static bool StartCapture(ESensorType sensorType, int maxFPS = 30); // Start the cameras of the given type at the expected maximum framerate
public static void StopCapture(ESensorType sensorType); // Stop the cameras of the given type
public static void StopAllCameras(); // Stop all running cameras
public static bool ReadCameraParameters(ESensorType sensorType, out LynxCaptureLibraryInterface.IntrinsicData intrinsic, out LynxCaptureLibraryInterface.ExtrinsicData extrinsic); // Return intrinsic and extrinsic data for the given camera type.
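For example, the calibration data of the RGB cameras can be queried as sketched below (the exact fields exposed by `IntrinsicData` and `ExtrinsicData` depend on the package version, so they are not shown here):

```csharp
// Query the calibration of the RGB cameras; returns false on failure.
if (LynxCaptureAPI.ReadCameraParameters(LynxCaptureAPI.ESensorType.RGB,
        out LynxCaptureLibraryInterface.IntrinsicData intrinsic,
        out LynxCaptureLibraryInterface.ExtrinsicData extrinsic))
{
    // Use the intrinsic (projection) and extrinsic (camera pose) data,
    // e.g. to reproject captured pixels into the scene.
}
```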
LynxOpenCV
To facilitate image processing, the LynxOpenCV API provides some useful functions:
LynxOpenCV.YUV2RGB(IntPtr YUVBuffer, uint width, uint height, byte[] outRGBBuffer); // Convert YUV buffer to RGB buffer
LynxOpenCV.YUV2RGBA(IntPtr YUVBuffer, uint width, uint height, byte[] outRGBBuffer); // Convert YUV buffer to RGBA buffer
LynxOpenCV.Compose_ARVR_2_RGBA_From_YUV_AR(IntPtr inYUVBufferAR, uint width, uint height, byte[] inOutRGBABufferVR); // Blend AR YUV and VR RGBA together based on VR alpha
LynxOpenCV.Compose_ARVR_2_RGBA_from_RGBA_AR(byte[] inRGBABufferAR, uint width, uint height, byte[] inOutRGBABufferVR); // Blend AR RGBA and VR RGBA together based on VR alpha
LynxOpenCV.ResizeFrame(uint width, uint height, uint depth, uint new_width, uint new_height, byte[] inBuffer, byte[] outBuffer); // Resize buffer from byte array
LynxOpenCV.ResizeFrame(uint width, uint height, uint depth, uint new_width, uint new_height, IntPtr inBuffer, byte[] outBuffer); // Resize buffer from IntPtr
LynxOpenCV.FlipRGBAFrame(uint width, uint height, bool verticalFlip, bool HorizontalFlip, byte[] inOutBuffer); // Flip an RGBA buffer horizontally and/or vertically
LynxOpenCV.FlipRGBFrame(uint width, uint height, bool verticalFlip, bool HorizontalFlip, byte[] inOutBuffer); // Flip an RGB buffer horizontally and/or vertically
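A typical use is converting an incoming RGB frame to RGBA and uploading it to a `Texture2D` for display. A sketch, assuming the conversion runs on the main thread (buffer reuse and thread marshalling are left out for brevity):

```csharp
using UnityEngine;

// Allocate the destination: 4 bytes per pixel for RGBA.
byte[] rgba = new byte[frameInfo.width * frameInfo.height * 4];
Texture2D texture = new Texture2D((int)frameInfo.width, (int)frameInfo.height,
                                  TextureFormat.RGBA32, false);

// Convert the native NV21 buffer of the left eye to RGBA and display it.
LynxOpenCV.YUV2RGBA(frameInfo.leftEyeBuffer, frameInfo.width, frameInfo.height, rgba);
texture.LoadRawTextureData(rgba);
texture.Apply();
```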
Limitations
This package has the following limitations:
- The package is heavy, as this version bundles many dependencies for video capture management.
- Tracking sensor buffers are currently not split: each buffer contains the stereo capture of both cameras.
- Development Build mode is not available.
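Until the package splits the tracking buffers itself, the merged stereo buffer can be separated manually. The sketch below assumes the two 640 x 400 eye images are packed side by side in each 1280-pixel row; verify this layout on-device before relying on it:

```csharp
using System.Runtime.InteropServices;

const int fullWidth = 1280, eyeWidth = 640, height = 400;

// Copy the merged grayscale buffer (1 byte per pixel) into managed memory.
byte[] merged = new byte[fullWidth * height];
Marshal.Copy(frameInfo.leftEyeBuffer, merged, 0, merged.Length);

byte[] left = new byte[eyeWidth * height];
byte[] right = new byte[eyeWidth * height];
for (int y = 0; y < height; ++y)
{
    // Each row holds the left-eye pixels followed by the right-eye pixels.
    System.Array.Copy(merged, y * fullWidth, left, y * eyeWidth, eyeWidth);
    System.Array.Copy(merged, y * fullWidth + eyeWidth, right, y * eyeWidth, eyeWidth);
}
```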
Status
The current package is a preview version.
This feature is planned to be integrated as a full part of the SDK.
Due to these limitations, the current version is provided as a complete library. Later, video capture will be provided directly as a service on the Lynx-R1 to ease development builds and reduce package size.