Creating your first project

The LightBuzz SDK is designed according to the following workflow: a sensor generates frames. Each frame contains color, depth, and body data. Each body contains joints. Each joint has a joint type, a confidence level, 2D position, 3D position, and orientation.

Launch Unity3D, create a new project, and import the LightBuzz Body Tracking package.

Configuring the sensor

Your new project starts empty, so create a new Unity scene, add an empty game object, and attach a new C# script to it. Open the script to write some code. The first thing to do in the C# file is import the LightBuzz.BodyTracking namespace.

using LightBuzz.BodyTracking;

Before initializing the sensor object, we need to create the desired configuration. Simply put, the configuration parameters specify the type and characteristics of the sensor.

SensorConfiguration configuration = new SensorConfiguration
{
    SensorType = SensorType.Webcam,
    Smoothing = 0.3f,
    DeviceIndex = 0,
    RequestedFPS = 30,
    RequestedColorWidth = 1280,
    RequestedColorHeight = 720
};

In Unity3D, you can also use the DeviceConfiguration class, which is exposed in the Editor:

[SerializeField]
private DeviceConfiguration _configuration;

The Editor will display the above configuration field with some nice visual elements. Depending on the selected sensor type, the available properties will be different.

Sensor Type

The sensor type property specifies the kind of input device to use. It can have one of the following values:

  • Webcam: USB cameras, phone rear & front cameras
  • iOS LiDAR: iPhone and iPad depth camera
  • RealSense: Intel RealSense D415, D435, D435i, D455, D405, and L515 depth cameras
  • Orbbec: Orbbec Femto, Femto W, and Astra+ depth cameras
  • Structure: Occipital Structure Core depth camera
  • OAK-D: Luxonis OAK-D and OAK-D Lite depth cameras
  • Video: video file input
[Screenshots: the Unity Editor sensor type configuration for Webcam, iOS LiDAR, Intel RealSense, and Video file]

Smoothing

Smoothing allows you to control or eliminate the amount of jitter. It’s a floating-point number between 0 and 1. 0 indicates there will be no smoothing applied and, as a result, data may be jittery. 1 indicates that the highest level of smoothing will be applied. We recommend a value between 0.2 and 0.3.

Device index

If you have more than one device of the same kind connected to your machine, the Device Index indicates which one to use. On iOS, use the device index to switch between the front and rear cameras. On Windows, use it to switch between different USB cameras.
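
As a rough sketch, you could switch cameras at runtime by closing the sensor and recreating it with a different index. This assumes the configuration object exposes the DeviceIndex property shown above; which index maps to which physical camera depends on the device:

// Sketch: switch to the second camera (for example, the front camera on iOS).
// Assumes DeviceIndex is settable on the configuration object; the index-to-camera
// mapping is device-dependent.
_sensor?.Close();
_configuration.DeviceIndex = 1;
_sensor = Sensor.Create(_configuration);
_sensor?.Open();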

Requested FPS

Many cameras support running at various frame rates. For example, RealSense can run at 15, 30, 60, or 90 FPS. Set the frame rate property to specify the desired rate. If you specify an invalid value, a default value will be applied.

Requested color resolution

Just like the frame rate, a camera may support multiple color resolutions (frame width & height). For example, a webcam may support 640×480 or 1280×720. Specify the resolution of your choice. If you provide unsupported values, a default resolution will be used.

Requested depth resolution

Just like the color resolution, the depth resolution specifies the width and height of the depth frames. It’s only available on depth cameras.

You can now create your sensor by calling the Create method and providing the configuration parameter:

Sensor sensor = Sensor.Create(_configuration);

Opening & closing the sensor

Before getting any data from the sensor, you need to open it. You do so by calling the Open method. When you are done with the sensor, remember to call the Close method to stop streaming frames. At any point, you can use the IsOpen property to check whether your device is open or closed.

public class LightBuzz_BodyTracking_Demo : MonoBehaviour
{
    [SerializeField]
    private DeviceConfiguration _configuration;
    private Sensor _sensor;
    private void Start()
    {
        _sensor = Sensor.Create(_configuration);
        _sensor?.Open();
    }
    private void OnDestroy()
    {
        _sensor?.Close();
    }
}

Capturing frame data

An open sensor streams data on a background thread. To capture the latest data, call the Update method. That method returns a frame object encapsulating all of the captured data. If no data has been received, the frame object will be null, so always check for null frames before processing them.

private void Update()
{
    if (_sensor == null || !_sensor.IsOpen) return;
    FrameData frame = _sensor.Update();
    if (frame != null)
    {
        DateTime timestamp = frame.Timestamp;
        byte[] color = frame.ColorData;
        ushort[] depth = frame.DepthData;
        List<Body> bodies = frame.BodyData;
    }
}

Frames include the following types of data.

Timestamp

The unique date/time the frame was captured.

Color data

The color frame data is an array of bytes representing the color stream. The bytes are encoded in RGBA (iOS) or RGB (Windows) format. RGB stands for Red, Green, and Blue. A stands for Alpha transparency and it’s always 255.
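
As a minimal sketch, you could copy the color bytes into a Unity Texture2D for display. The ColorWidth and ColorHeight property names below are assumptions; use whatever the frame actually exposes for its dimensions, and pick the TextureFormat that matches your platform (RGBA32 on iOS, RGB24 on Windows):

// Sketch: load the raw color bytes into a texture (RGBA32 shown; use RGB24 on Windows).
// frame.ColorWidth and frame.ColorHeight are assumed property names.
Texture2D texture = new Texture2D(frame.ColorWidth, frame.ColorHeight, TextureFormat.RGBA32, false);
texture.LoadRawTextureData(frame.ColorData);
texture.Apply();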

Depth data

If your camera supports depth, the depth data will be an array of unsigned short values. Each value represents the distance between that point and the vertical camera plane. Depth values are measured in millimeters.
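
For example, here is a hedged sketch that reads the distance at a single pixel and converts it to meters. The DepthWidth property name is an assumption; use the actual depth frame width exposed by the SDK:

// Sketch: read the depth value at pixel (x, y) and convert millimeters to meters.
// frame.DepthWidth is an assumed property name for the depth frame width.
int x = 320;
int y = 240;
ushort millimeters = frame.DepthData[y * frame.DepthWidth + x];
float meters = millimeters / 1000f;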

Body & Joint data

Last but not least, the frame provides a list of the captured skeleton objects. Each skeleton is assigned a unique ID and contains a dictionary of Joints. Each joint includes the following properties:

Joint Type

Specifies the type of each joint. The LightBuzz SDK supports the following joint types (and we are adding more):

[Image: LightBuzz Body Tracking silhouette showing the supported joint types]

Confidence & Tracking State

The confidence value is a floating-point number between 0 and 1, indicating the tracking confidence of a joint. Values close to 0 indicate low confidence, usually because a joint is not visible. Values close to 1 indicate high confidence. In case a joint is entirely hidden, the confidence will be 0. Nevertheless, our SDK will try to guess the position of the hidden joints. Use the tracking state property to check whether the position of a joint is inferred or not.
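
As a sketch of how you might use these values inside your frame-processing code, the loop below skips poorly tracked joints and checks whether a position was inferred. The TrackingState property and its Inferred value are assumed names, so check the API reference for the exact identifiers:

// Sketch: ignore joints with low confidence and log whether each joint was inferred.
// TrackingState and TrackingState.Inferred are assumed identifiers.
foreach (var entry in body.Joints)
{
    Joint joint = entry.Value;
    if (joint.Confidence < 0.5f) continue;
    bool isInferred = joint.TrackingState == TrackingState.Inferred;
    Debug.Log($"{entry.Key}: confidence {joint.Confidence:0.00}, inferred: {isInferred}");
}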

Position 2D

The coordinates (X and Y) of the joint in the 2D color space. If depth is supported, the 2D position in the depth frame is the same as in the color frame, because the color and depth frames are always aligned. The 2D coordinates are measured in pixels.

Position 3D

The coordinates (X, Y, and Z) of the joint in the 3D world space. The reference point (0, 0, 0) is the camera itself. X and Y coordinates may be positive or negative. Z coordinates are positive or zero. The 3D coordinates are measured in meters.

The 3D position property is only available when the SensorType is set to iOS LiDAR, RealSense, Orbbec, Structure, or OAK-D.
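
As an illustration, the sketch below measures the distance between two joints in meters. It assumes Position3D is a Unity Vector3 and that HandLeft and HandRight exist in the JointType enumeration; adjust the names to the actual API:

// Sketch: distance between the two hands in meters.
// Assumes Position3D is a Vector3 and JointType.HandLeft/HandRight are valid values.
Joint handLeft = body.Joints[JointType.HandLeft];
Joint handRight = body.Joints[JointType.HandRight];
float distance = Vector3.Distance(handLeft.Position3D, handRight.Position3D);
Debug.Log($"Hands are {distance:0.00} meters apart");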

Orientation

The orientation (X, Y, Z, W) of the joint in the 3D world space.

The orientation property is only available when the SensorType is set to iOS LiDAR, RealSense, Orbbec, Structure, or OAK-D.
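
As a hedged sketch, you could drive a GameObject's rotation from a joint orientation. This assumes the Orientation property exposes X, Y, Z, W components that map directly to a Unity Quaternion:

// Sketch: rotate this GameObject to match the neck orientation.
// Assumes Orientation exposes X, Y, Z, W components compatible with Unity's Quaternion.
Joint neck = body.Joints[JointType.Neck];
transform.rotation = new Quaternion(neck.Orientation.X, neck.Orientation.Y, neck.Orientation.Z, neck.Orientation.W);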

The complete demo

Putting everything together, the demo project looks like this:

using LightBuzz.BodyTracking;
using System.Collections.Generic;
using UnityEngine;
public class LightBuzz_BodyTracking_Demo : MonoBehaviour
{
    [SerializeField]
    private DeviceConfiguration _configuration;
    private Sensor _sensor;
    private void Start()
    {
        _sensor = Sensor.Create(_configuration);
        _sensor?.Open();
    }
    private void OnDestroy()
    {
        _sensor?.Close();
    }
    private void Update()
    {
        if (_sensor == null || !_sensor.IsOpen) return;
        FrameData frame = _sensor.Update();
        if (frame != null)
        {
            List<Body> bodies = frame.BodyData;
            Body body = bodies.Closest();
            if (body != null)
            {
                Joint neck = body.Joints[JointType.Neck];
                Debug.Log($"Position: {neck.Position2D}");
                Debug.Log($"Tracking confidence: {neck.Confidence}");
            }
        }
    }
}
