The LightBuzz SDK can record camera frames in real-time. Everything you see while streaming data can be saved in special video files. You can then play the recorded data using our built-in tools and user interface components. There are two main components for these tasks:

  • Video Recorder
  • Video Player

The LightBuzz SDK records many different data types (color, depth, skeleton, IMU, floor). Typical MP4 or AVI files can only store color information, so standard video formats are not enough when it comes to skeleton tracking. Our team has therefore implemented a simple alternative protocol to store all of the necessary data.

When you record a LightBuzz Body Tracking video, the recorder does not store the UI elements (dots, lines, etc.) of your app. This approach has two significant advantages:

  • You are not limited by the default 2D screen visualization. Having the data in this flexible format means that you can change and update the user interface with the same frame data!
  • You can use the existing color and depth information to re-run the skeleton tracking AI inference in future updates.

If you do need to record your UI elements, use a traditional screen-capture tool instead, such as Apple’s built-in screen recorder.

In terms of performance, the recording procedure does not impact the live feed. The recorder runs on parallel background threads, so your app keeps running smoothly.

Each video is stored in a separate folder on the file system. All you need to do is provide a valid folder path where the recorder can store its data. The sections below describe the different file types that are stored internally.

Video settings

The video data are stored in a single folder on a per-frame basis. Every valid video folder includes two special files:

  File name       File type        Description
  configuration   .configuration   A text file with video properties, such as resolution and intrinsic camera parameters.
  timestamps      .timestamps      A text file containing a list of all the video frame timestamps.

Never delete these files! They are essential: if you delete them, the video player will not be able to play the videos.
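The timestamps file can tell you how long a recording is and how fast it was captured. Here is a minimal sketch of estimating the frame rate from it; the exact file format is not documented here, so this assumes one numeric timestamp (in milliseconds) per line.

```python
# Sketch: estimate a recording's frame rate from its timestamps file.
# ASSUMPTION: one numeric timestamp per line, expressed in milliseconds.

def average_fps(timestamp_lines):
    """Return the average frames per second implied by a list of timestamp lines."""
    stamps = sorted(float(line) for line in timestamp_lines if line.strip())
    if len(stamps) < 2:
        return 0.0
    span_ms = stamps[-1] - stamps[0]
    return (len(stamps) - 1) * 1000.0 / span_ms

# Four frames spaced ~33 ms apart correspond to roughly 30 fps.
print(average_fps(["0", "33", "66", "99"]))
```

Verify the timestamp unit against your own recordings before relying on the result.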

Video data

Each frame may include one or more of the following files:

  File type   File extension   Description
  Color       .color           JPEG-encoded color data.
  Depth       .depth           Depth data in binary form.
  IMU         .imu             Accelerometer vector, gyroscope vector, and rotation information.
  Body        .body            Text file with body IDs, joint types, confidence scores, position vectors (2D and 3D coordinates), and orientation quaternions.
  Floor       .floor           Raw text with the normal vector and origin point of the floor instance.

Most of the time, you don’t need to record everything. For simple apps, color and body data are usually enough. You can select which data types to record by modifying the video recording configuration settings.

The name of every file is the unique timestamp of its corresponding frame, so the color, depth, and body files of the same frame have exactly the same name.
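Because the files of a frame share one timestamp name, reassembling frames from a video folder is a matter of grouping files by their stem. A minimal sketch (the file names below are made-up examples; only the shared-stem naming rule comes from the description above):

```python
# Sketch: group the per-frame files of a video folder by their shared
# timestamp name. The listed file names are hypothetical examples.
import os
from collections import defaultdict

def group_frames(filenames):
    """Map each frame timestamp to the set of data-file extensions recorded for it."""
    frames = defaultdict(set)
    for name in filenames:
        stem, ext = os.path.splitext(name)
        # Only per-frame data files count; configuration files are skipped.
        if ext in {".color", ".depth", ".imu", ".body", ".floor"}:
            frames[stem].add(ext)
    return dict(frames)

files = ["1001.color", "1001.body", "1002.color", "1002.body",
         "configuration.configuration", "timestamps.timestamps"]
print(group_frames(files))
```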

Color

The software encodes the one-dimensional RGBA array in JPEG format. This reduces the amount of space used while keeping processing times short. The JPEG quality is configurable (0-100%).

Depth

Depth data are stored as arrays of distance values (unsigned 16-bit integers). Each number represents the distance of a pixel in millimeters. The recorder stores the depth values consecutively in binary files.
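A consecutive run of unsigned 16-bit integers is straightforward to read back. Here is a round-trip sketch; note that the byte order of the .depth files is an assumption (the machine's native order here), so check it against a real recording.

```python
# Sketch: pack and unpack depth frames as consecutive unsigned 16-bit
# integers, matching the described .depth layout.
# ASSUMPTION: native byte order; verify against actual .depth files.
from array import array

def encode_depth(depth_mm):
    """Pack a flat list of millimeter distances into uint16 binary data."""
    return array("H", depth_mm).tobytes()

def decode_depth(blob):
    """Unpack uint16 binary data back into a list of millimeter distances."""
    values = array("H")
    values.frombytes(blob)
    return values.tolist()

frame = [0, 500, 1500, 65535]   # distances in millimeters
assert decode_depth(encode_depth(frame)) == frame
```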

IMU

The IMU frame samples include two vectors and three rotation values:

  • Acceleration vector (X, Y, Z)
  • Gyroscope vector (X, Y, Z)
  • Roll, pitch, yaw

The recorder stores these values in plain text.
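Reading an IMU sample back is then a matter of splitting the text into numbers. The field order and separator in this sketch are assumptions (nine space-separated numbers: acceleration, gyroscope, then roll/pitch/yaw); verify against a real .imu file.

```python
# Sketch: parse one IMU sample from plain text.
# ASSUMPTION: nine space-separated numbers in the order
# acceleration (X, Y, Z), gyroscope (X, Y, Z), roll/pitch/yaw.

def parse_imu(line):
    v = [float(x) for x in line.split()]
    return {
        "acceleration": tuple(v[0:3]),
        "gyroscope": tuple(v[3:6]),
        "rotation": tuple(v[6:9]),   # roll, pitch, yaw
    }

sample = parse_imu("0.0 -9.8 0.1 0.01 0.02 0.03 0.0 1.5 3.1")
print(sample["acceleration"])
```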

Body

The body data files include the ID of each skeleton and the following joint information:

  • Joint type
  • Tracking confidence
  • 2D position (X, Y)
  • 3D position (X, Y, Z)
  • Orientation (X, Y, Z, W)

The recorder stores these values sequentially in plain text.
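As a sketch of what reading a joint record back could look like: the field layout below (joint name, confidence, 2D position, 3D position, orientation quaternion, space-separated) is a guess based on the list above, not a documented specification.

```python
# Sketch: parse one joint record from a .body text file.
# ASSUMPTION: space-separated fields in the order
# joint name, confidence, 2D position, 3D position, quaternion.

def parse_joint(line):
    parts = line.split()
    return {
        "joint": parts[0],
        "confidence": float(parts[1]),
        "position_2d": tuple(float(x) for x in parts[2:4]),
        "position_3d": tuple(float(x) for x in parts[4:7]),
        "orientation": tuple(float(x) for x in parts[7:11]),
    }

joint = parse_joint("Head 0.98 120.0 80.0 0.1 0.6 2.3 0.0 0.0 0.0 1.0")
print(joint["position_3d"])
```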

Floor

The floor data files include the two vectors that mathematically describe a floor clip plane:

  • Normal vector (X, Y, Z)
  • Origin point (X, Y, Z)

The recorder stores these values sequentially in plain text.
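A normal vector and an origin point fully define the floor plane, so you can, for example, compute how high a tracked point sits above the floor with a standard point-to-plane distance. The vectors below are made-up example values:

```python
# Sketch: signed height of a 3D point above the floor plane defined by
# the recorded normal vector and origin point (standard plane geometry).

def height_above_floor(point, normal, origin):
    """Signed distance from point to the plane through origin with the given normal."""
    norm = sum(n * n for n in normal) ** 0.5
    return sum(n * (p - o) for n, p, o in zip(normal, point, origin)) / norm

# A floor at y = 0 with an upward-facing normal: a point at y = 1.7
# sits 1.7 units above the floor.
print(height_above_floor((0.0, 1.7, 2.0), (0.0, 1.0, 0.0), (0.0, 0.0, 0.0)))
```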

Most of the time, you won’t need to process the video data yourself. Instead, use the built-in VideoPlayer class to play the video and automatically load the data in your UI. If you need to import data from raw files and create your own video processor, use the VideoHelper class.
