TL;DR

In this guide, you’ll learn how to use AI body-tracking technology and a depth camera to measure the height of a person in real time. We’ll explore several approaches so you can select the one that fits your needs best.

Why do you need height estimation?

Height estimation is the process of determining an individual’s height, which can be accomplished through various methods, both manual and technological. Traditionally, height is measured using stadiometers, measuring tapes, or other physical tools. This process involves the individual standing straight against a wall or a vertical surface while a measuring device is used to record the height from the floor to the top of the head. As cool software developers, we’ll automate the process and measure a person’s height using AI and a depth camera, such as iPhone’s LiDAR sensor or Intel RealSense.

Here’s the result:

LightBuzz AI body tracking height measure (LiDAR)

There are several scenarios where you might need this kind of functionality.

  • Fitness: Fitness apps use height data to tailor workout routines. Knowing an individual’s height helps in calculating their body mass index (BMI) and customizing exercise programs to suit their fitness goals.
  • Healthcare: Healthcare apps can use height data to provide insights on proper posture and ergonomics. This information is crucial for designing exercises that prevent injury and improve overall body mechanics.
  • Fashion: In the context of virtual fitting room design, height estimation can facilitate virtual fittings for clothing and accessories, ensuring better comfort and fit.
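Taking the fitness bullet above as a concrete example: BMI is weight in kilograms divided by the square of height in meters, so a reliable height estimate feeds directly into it. Here’s a minimal, language-neutral sketch (shown in Python; the helper name is my own):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / (height_m ** 2)

# e.g. a 70 kg person measured at 1.75 m
print(round(bmi(70.0, 1.75), 1))  # → 22.9
```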

How AI Body-Tracking works

AI body-tracking is a groundbreaking technology that uses cameras to capture the coordinates of human joints, offering a detailed and dynamic representation of body movements. Traditional RGB cameras are employed to capture visual data of the human body. These cameras identify key points on the body, such as the head, shoulders, elbows, wrists, hips, knees, and ankles, and track their movements over time.

In addition to RGB cameras, depth-sensing cameras, such as LiDAR, are used to provide accurate measurements of physical distances. These depth cameras emit pulses of light and measure the time it takes for the light to reflect back, creating a detailed map of the environment and the subject within it. This allows the system to determine the exact distance of each joint from the camera, adding a critical layer of precision to the data.
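The time-of-flight principle described above boils down to a one-line formula: distance is the speed of light multiplied by half the round-trip time of the pulse. A quick sketch (the function name is illustrative):

```python
SPEED_OF_LIGHT = 299_792_458  # meters per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance recovered from a light pulse's round trip: d = c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A subject about 1 m away reflects the pulse back in roughly 6.67 nanoseconds,
# which gives a sense of the timing precision these sensors need.
print(tof_distance(6.67e-9))  # roughly 1.0 (meters)
```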

By combining the visual data from RGB cameras with the distance measurements from depth cameras, AI body-tracking systems can accurately assess the positions of joints in 3D space. This integration enables the creation of a precise skeletal model of the person, capturing their movements in real time. The resulting data is highly accurate and consistent, making AI body-tracking an invaluable tool for applications requiring detailed analysis of human motion, such as fitness training, physical therapy, and ergonomic assessments.
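To make the RGB-plus-depth combination concrete: once a joint is detected at a pixel and its depth is known, the 3D position follows from the camera’s intrinsic parameters via the standard pinhole model. The intrinsics below are made-up values for illustration; a real SDK exposes the calibrated ones for its sensor.

```python
def deproject(u: float, v: float, depth_m: float,
              fx: float, fy: float, cx: float, cy: float) -> tuple:
    """Back-project a pixel (u, v) with known depth into camera-space 3D."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Hypothetical intrinsics for a 1280x720 color stream.
fx, fy, cx, cy = 900.0, 900.0, 640.0, 360.0

# A joint detected at the image center, 2 m away, lies on the optical axis.
print(deproject(640.0, 360.0, 2.0, fx, fy, cx, cy))  # → (0.0, 0.0, 2.0)
```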

LightBuzz Body Tracking Skeleton Model (34 landmarks)

Capturing skeleton data

Capturing body skeleton data is straightforward. All we need to do is import the LightBuzz SDK into our project and open the camera. In this example, I’ll be using Unity, but you are free to use the platform of your choice — the concepts remain the same.

private Sensor _sensor;

private void Start()
{
    // Create and open the sensor (iOS LiDAR in this example).
    _sensor = Sensor.Create(new DeviceConfiguration
    {
        SensorType = SensorType.LiDAR,
        RequestedColorResolution = new Size(1280, 720)
    });
    _sensor?.Open();
}

private void OnDestroy()
{
    // Release the camera when the scene unloads.
    _sensor?.Close();
    _sensor?.Dispose();
}

private void Update()
{
    if (_sensor == null || !_sensor.IsOpen) return;

    // Grab the latest frame and pick the first available body.
    FrameData frame = _sensor.Update();
    if (frame == null) return;

    Body body = frame.BodyData?.Default();

    if (body != null)
    {
        // Measure height here!
    }
}

The above code opens the iOS LiDAR camera and selects the first available body. Feel free to experiment with different sensor types, such as RealSense, OAK-D, or Orbbec.

Measuring the height

As you can see, the LightBuzz AI captures 34 human body joints. When it comes to measuring height using AI body-tracking technology, there are two primary approaches: measuring the distance between specific joints and summing the distances of individual bones. Each method has its own advantages.

1. Measuring distance: Top of Skull ➡️ Toes

The first approach involves measuring the direct distance between the TopSkull and the HeelLeft / HeelRight joints. This method is more intuitive and closely resembles traditional manual height measurement techniques. The main advantage of this approach is its simplicity; it directly measures the vertical height from the top of the head to the feet, making it easy to understand and implement.

⚠️ Important

This method requires the person being measured to stand perfectly straight. Any deviation from a fully upright position, such as slouching or bending, can result in an inaccurate measurement. Despite this limitation, this approach is often preferred for its straightforwardness and ease of use, particularly in applications where quick and simple measurements are needed.

Let’s write some code to implement this approach:

private float CalculateHeight1(Body body)
{
    Vector3D topSkull = body.Joints[JointType.TopSkull].Position3D;
    Vector3D heelLeft = body.Joints[JointType.HeelLeft].Position3D;
    Vector3D heelRight = body.Joints[JointType.HeelRight].Position3D;
    Vector3D heelCenter = (heelLeft + heelRight) / 2.0f;
    float height = Calculations.Distance(topSkull, heelCenter);
    return height;
}

The first few lines capture the 3D positions of the required joints. Then, we estimate the midpoint between the heels and, lastly, we use the handy Calculations class to measure the distance between two points in 3D space.
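The distance computation itself is plain Euclidean geometry. What Calculations.Distance does conceptually is sketched below in Python (the SDK’s own implementation may differ):

```python
import math

def distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# A classic 3-4-5 right triangle in the XY plane.
print(distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # → 5.0
```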

Wait! Need expert help?

Navigating the intricate landscape of Computer Vision and AI demands experience. LightBuzz has been at the forefront of Computer Vision technology, developing custom projects and cutting-edge AI systems. We love math and software engineering. Need expert hands to steer your project toward success? Choose LightBuzz. Let’s bring your vision to life, pixel by pixel.

Contact us

2. Adding the lengths of individual bones

The second approach involves measuring the distances between multiple joints and summing them to calculate the total height. This method segments the body into parts: TopSkull to Neck, Neck to Pelvis, Pelvis to Hips, Hips to Knees, Knees to Ankles, and Ankles to Heels. By measuring each segment individually, this method can provide a more accurate height measurement, even if the person is not standing perfectly straight.

⚠️ Important

The main advantage of this approach is its accuracy. Since it accounts for each segment of the body, it can compensate for slight variations in posture, making it a more reliable method in scenarios where precise measurement is crucial. However, it is less intuitive compared to the direct measurement method and requires more complex calculations. Note that a small deviation from the person’s true height is expected with either method.

The code is a bit more complex, but still self-explanatory.

private float CalculateHeight2(Body body)
{
    Vector3D topSkull = body.Joints[JointType.TopSkull].Position3D;
    Vector3D neck = body.Joints[JointType.Neck].Position3D;
    Vector3D pelvis = body.Joints[JointType.Pelvis].Position3D;
    Vector3D hipLeft = body.Joints[JointType.HipLeft].Position3D;
    Vector3D hipRight = body.Joints[JointType.HipRight].Position3D;
    Vector3D kneeLeft = body.Joints[JointType.KneeLeft].Position3D;
    Vector3D kneeRight = body.Joints[JointType.KneeRight].Position3D;
    Vector3D ankleLeft = body.Joints[JointType.AnkleLeft].Position3D;
    Vector3D ankleRight = body.Joints[JointType.AnkleRight].Position3D;
    Vector3D heelLeft = body.Joints[JointType.HeelLeft].Position3D;
    Vector3D heelRight = body.Joints[JointType.HeelRight].Position3D;
    Vector3D hipCenter = (hipLeft + hipRight) / 2.0f;
    Vector3D kneeCenter = (kneeLeft + kneeRight) / 2.0f;
    Vector3D ankleCenter = (ankleLeft + ankleRight) / 2.0f;
    Vector3D heelCenter = (heelLeft + heelRight) / 2.0f;
    float height = 
        Calculations.Distance(topSkull, neck) + 
        Calculations.Distance(neck, pelvis) +
        Calculations.Distance(pelvis, hipCenter) +
        Calculations.Distance(hipCenter, kneeCenter) +
        Calculations.Distance(kneeCenter, ankleCenter) +
        Calculations.Distance(ankleCenter, heelCenter);
    return height;
}

Comparing the two methods

Moving on, let’s compare the two methods. First, we compute both heights, and then print the results:

if (body != null)
{
    float height1 = CalculateHeight1(body); // in meters
    float height2 = CalculateHeight2(body); // in meters
    Debug.Log($"Height 1: {height1 * 100.0f:N0} cm");
    Debug.Log($"Height 2: {height2 * 100.0f:N0} cm");
}

In this video, the person is doing squats. As expected, the height measured by the first method decreases as the person lowers their body. On the other hand, the height calculated with the second algorithm remains steady.

⚠️ Next steps

For even greater accuracy, it’s crucial to ensure that the person is standing straight during the measurement process. This can be achieved by assessing the alignment of the spine, the angle of the neck, and the straightness of the knees. By monitoring these factors, we can confirm that the individual is in an optimal upright position, which minimizes errors and enhances the precision of the height measurement. You can do this by using the Calculations utility class. Let me know if you need help in the comments below!
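One simple way to verify uprightness is to measure the angle between the spine vector (pelvis to neck) and the vertical axis, and reject frames where the tilt exceeds a small threshold. The sketch below uses generic vector math in Python — it is not the SDK’s Calculations API, and the 10° threshold is an arbitrary choice:

```python
import math

def tilt_degrees(pelvis: tuple, neck: tuple) -> float:
    """Angle between the pelvis-to-neck vector and the vertical (Y) axis."""
    v = tuple(n - p for n, p in zip(neck, pelvis))
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0:
        raise ValueError("degenerate spine vector")
    # cos(theta) = v . y_axis / |v|, with y_axis = (0, 1, 0)
    cos_theta = v[1] / norm
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

def is_upright(pelvis: tuple, neck: tuple, threshold_deg: float = 10.0) -> bool:
    """True when the spine deviates from vertical by at most the threshold."""
    return tilt_degrees(pelvis, neck) <= threshold_deg

# A perfectly vertical spine has zero tilt.
print(tilt_degrees((0.0, 0.0, 2.0), (0.0, 0.6, 2.0)))  # → 0.0
```

The same dot-product idea extends to the neck angle and knee straightness mentioned above: compute the angle between two bone vectors and compare it against a tolerance.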

Resources

Here are more resources to get you started with body-tracking app development:

‘Til the next time, keep coding, my friends!

Body tracking

LightBuzz has created the world’s most accurate body tracking software solution. Companies and universities worldwide use our SDK to develop commercial apps for desktop and mobile devices.

Vangos Pterneas

Vangos Pterneas is a software engineer, book author, and award-winning Microsoft Most Valuable Professional (2014-2019). Since 2012, Vangos has been helping Fortune-500 companies and ambitious startups create demanding motion-tracking applications. He's obsessed with analyzing and modeling every aspect of human motion using AI and Maths. Vangos shares his passion by regularly publishing articles and open-source projects to help and inspire fellow developers.
