Building a Real-Time Object Detection Tool with Flutter & TensorFlow Lite

Quick Summary: Explore how to build a real-time object detection tool using Flutter and TensorFlow Lite. This guide walks through streaming the camera feed into a TensorFlow Lite model, drawing bounding boxes around detected objects, and estimating their height, offering practical insights into mobile machine learning integration.

Introduction

Object detection has become an integral part of various mobile applications, from augmented reality to autonomous vehicles. In this article, we'll explore how to build a real-time object detection tool using Flutter and TensorFlow Lite. We'll leverage the pre-trained `detect.tflite` model to identify objects and measure their height in real time. This tool can be expanded for various applications where understanding the size of objects is critical.

Setting Up the Project

To begin, ensure you have Flutter set up on your machine and add the necessary dependencies for camera access and TensorFlow Lite (for example, the `camera` and `tflite_flutter` packages). The core of our application revolves around real-time object detection, which we'll implement using the following components:

- `CameraView`: A widget that streams camera images.

- `Classifier`: A utility that loads and runs the TensorFlow Lite model.

- `BoxWidget`: A widget that draws bounding boxes around detected objects.

- `Recognition`: A model class that holds information about a detected object (a minimal sketch follows this list).
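
Since `Recognition` appears throughout the snippets below, here is a minimal sketch of what such a model class might hold. The real class in your project may contain more; only the fields the later snippets rely on (`id`, `label`, `score`, `renderLocation`) are pinned down here, and everything else is an assumption.

```dart
import 'dart:ui' show Rect;

/// Minimal sketch of the Recognition model class used by the snippets below.
class Recognition {
  final int id;              // index of the detection in the model output
  final String label;        // class name from the label file
  final double score;        // confidence score between 0 and 1
  final Rect renderLocation; // bounding box mapped to screen coordinates

  const Recognition(this.id, this.label, this.score, this.renderLocation);
}
```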


Real-Time Object Detection

Let's start with the main structure of our detection tool. The `RealTimeObjectDetectionPage` widget is responsible for displaying the camera feed and drawing bounding boxes around detected objects.

```dart
class RealTimeObjectDetectionPage extends StatefulWidget {
  // ... other code
}

class _RealTimeObjectDetectionPageState
    extends State<RealTimeObjectDetectionPage> {
  // ... other code

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      key: scaffoldKey,
      body: Stack(
        children: [
          // The camera preview fills the screen.
          Positioned.fill(
            child: CameraView(resultsCallback, statsCallback),
          ),
          // Bounding boxes are overlaid once detection results are available.
          if (results != null) boundingBoxes(results!),
        ],
      ),
    );
  }

  // Callback to update the detection results and refresh the UI.
  void resultsCallback(List<Recognition> results) {
    setState(() {
      this.results = results;
    });
  }
}
```

Here, we use a `Stack` to overlay bounding boxes on top of the camera feed. The `resultsCallback` method is triggered whenever new detection results are available, updating the UI accordingly.
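
The `boundingBoxes` helper referenced in `build` is not shown above. A minimal sketch, assuming each box's caption is produced by the `calculateHeight` and `_formatHeight` helpers introduced later in this article, could look like this:

```dart
// Sketch of the boundingBoxes helper: one BoxWidget per detection result.
// calculateHeight and _formatHeight are covered in a later section.
Widget boundingBoxes(List<Recognition> results) {
  return Stack(
    children: results
        .map((result) => BoxWidget(
              result: result,
              heightText: _formatHeight(calculateHeight(result.renderLocation)),
            ))
        .toList(),
  );
}
```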

Displaying Bounding Boxes

The bounding boxes are displayed using the `BoxWidget`, which takes the `Recognition` object and the calculated height of the detected object as input.

```dart
class BoxWidget extends StatelessWidget {
  final Recognition? result;
  final String heightText;

  const BoxWidget({Key? key, required this.result, required this.heightText})
      : super(key: key);

  @override
  Widget build(BuildContext context) {
    // Derive a stable color per label so each class keeps its own box color.
    Color color = Colors.primaries[
        (result!.label.length + result!.label.codeUnitAt(0) + result!.id) %
            Colors.primaries.length];

    return Positioned(
      left: result!.renderLocation.left,
      top: result!.renderLocation.top,
      width: result!.renderLocation.width,
      height: result!.renderLocation.height,
      child: Container(
        decoration: BoxDecoration(
          border: Border.all(color: color, width: 3),
          borderRadius: const BorderRadius.all(Radius.circular(2)),
        ),
        child: Align(
          alignment: Alignment.topLeft,
          child: Container(
            color: color,
            child: Column(
              crossAxisAlignment: CrossAxisAlignment.start,
              children: [
                // Label, confidence score, and estimated height caption.
                Text(result!.label,
                    style: const TextStyle(
                        color: Colors.white, fontWeight: FontWeight.bold)),
                Text(result!.score.toStringAsFixed(2),
                    style: const TextStyle(color: Colors.white)),
                Text(heightText, style: const TextStyle(color: Colors.white)),
              ],
            ),
          ),
        ),
      ),
    );
  }
}
```

Measuring Object Height

One distinctive feature of our tool is the ability to estimate the height of detected objects. We take the bounding box height in pixels and convert it to real-world units with a simple pinhole-camera approximation: the pixel height is scaled by an assumed distance to the object and divided by the camera's focal length in pixels. The constants below are placeholders and need to be calibrated for your device.

```dart
double calculateHeight(Rect renderLocation) {
  double boxHeightPixels = renderLocation.height;

  // Calibration constants: placeholder values to tune for your own camera.
  const double knownDistance = 2.0; // assumed distance to the object, in meters
  const double knownHeight = 1.0;   // reference object height, in meters
  const double focalLength = 500.0; // approximate focal length, in pixels

  // Pinhole-camera approximation: pixel height scaled by distance and focal length.
  double heightInMeters =
      (boxHeightPixels * knownHeight * knownDistance) / focalLength;

  // Convert meters to centimeters; the division by 5 acts as an empirical
  // correction factor and should be tuned alongside the constants above.
  return (heightInMeters * 100) / 5;
}

String _formatHeight(double height) {
  // Format as e.g. "1,234.5 cm", with one decimal place.
  final parts = height.toStringAsFixed(1).split('.');
  final wholePart = parts[0];
  final decimalPart = parts[1];
  final formattedWholePart = _addCommas(wholePart);
  return '$formattedWholePart.$decimalPart cm';
}
```
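
The `_addCommas` helper is not shown in the original snippet; a purely illustrative implementation that inserts thousands separators could be:

```dart
// Hypothetical helper: inserts thousands separators, e.g. "12345" -> "12,345".
String _addCommas(String wholePart) {
  final buffer = StringBuffer();
  for (int i = 0; i < wholePart.length; i++) {
    buffer.write(wholePart[i]);
    final remaining = wholePart.length - i - 1;
    if (remaining > 0 && remaining % 3 == 0) buffer.write(',');
  }
  return buffer.toString();
}
```

With these helpers in place, the caption passed to `BoxWidget` is simply `_formatHeight(calculateHeight(result.renderLocation))`, as used in the `boundingBoxes` sketch earlier.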

Camera Integration

The `CameraView` widget is responsible for capturing the camera feed and passing each frame to the TensorFlow Lite model for inference.

```dart
class CameraView extends StatefulWidget {
  // ... other code
}

class _CameraViewState extends State<CameraView> {
  // ... other code

  void initializeCamera() async {
    cameras = await availableCameras();
    cameraController =
        CameraController(cameras![0], ResolutionPreset.low, enableAudio: false);

    cameraController!.initialize().then((_) async {
      // Start streaming frames and cache the sizes needed to map
      // detection coordinates onto the screen.
      await cameraController!.startImageStream(onLatestImageAvailable);
      Size? previewSize = cameraController!.value.previewSize;
      CameraViewSingleton.inputImageSize = previewSize;
      Size screenSize = MediaQuery.of(context).size;
      CameraViewSingleton.screenSize = screenSize;
      CameraViewSingleton.ratio = screenSize.width / previewSize!.height;
    });
  }

  Future<void> onLatestImageAvailable(CameraImage cameraImage) async {
    if (classifier!.interpreter != null && classifier!.labels != null) {
      // Drop this frame if a previous inference is still running.
      if (predicting!) return;

      setState(() { predicting = true; });

      // Time the full round trip on the UI thread for the stats overlay.
      var uiThreadTimeStart = DateTime.now().millisecondsSinceEpoch;

      // Run inference in an isolate so the UI stays responsive.
      var isolateData = IsolateData(
          cameraImage, classifier!.interpreter!.address, classifier!.labels!);
      Map<String, dynamic> inferenceResults = await inference(isolateData);

      widget.resultsCallback(inferenceResults["recognitions"]);
      widget.statsCallback((inferenceResults["stats"] as Stats)
        ..totalElapsedTime =
            DateTime.now().millisecondsSinceEpoch - uiThreadTimeStart);

      setState(() { predicting = false; });
    }
  }
}
```
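
For completeness, here is what the `Classifier` from the component list might look like. This is a minimal sketch that assumes the `tflite_flutter` package and bundled asset files named `assets/detect.tflite` and `assets/labelmap.txt` (the label file name is an assumption, and asset path conventions vary between package versions).

```dart
import 'package:flutter/services.dart' show rootBundle;
import 'package:tflite_flutter/tflite_flutter.dart';

/// Minimal sketch of the Classifier: loads the TFLite model and its labels.
class Classifier {
  Interpreter? interpreter;
  List<String>? labels;

  Classifier() {
    _loadModel();
    _loadLabels();
  }

  Future<void> _loadModel() async {
    // Load detect.tflite from the app's bundled assets.
    interpreter = await Interpreter.fromAsset('assets/detect.tflite');
  }

  Future<void> _loadLabels() async {
    // One label per line, matching the model's class indices.
    final raw = await rootBundle.loadString('assets/labelmap.txt');
    labels = raw.split('\n').where((line) => line.trim().isNotEmpty).toList();
  }
}
```

Because loading happens asynchronously, `onLatestImageAvailable` above checks that `interpreter` and `labels` are non-null before scheduling inference.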

Conclusion

With the combination of Flutter and TensorFlow Lite, we've created a powerful real-time object detection tool that not only identifies objects but also measures their height. This tool can be adapted for various applications, making it a versatile addition to any mobile development toolkit.

Don't miss out on this opportunity to hire Flutter developers and level up your app development skills!
