firebase_ml_vision 0.2.0+1


ML Kit for Firebase


A Flutter plugin to use the ML Kit for Firebase API.

For Flutter plugins for other Firebase products, see FlutterFire.md.

Note: This plugin is still under development, and some APIs might not be available yet. Feedback and Pull Requests are most welcome!


Usage

To use this plugin, add firebase_ml_vision as a dependency in your pubspec.yaml file. You must also configure Firebase for each platform project: Android and iOS (see the example folder for step-by-step details).


Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

<application ...>
  ...
  <meta-data
      android:name="com.google.firebase.ml.vision.DEPENDENCIES"
      android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,label,barcode,face" -->
</application>

Using an On-device FirebaseVisionDetector

1. Create a FirebaseVisionImage.

Create a FirebaseVisionImage object from your image. To create a FirebaseVisionImage from an image File object:

final File imageFile = getImageFile();
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(imageFile);

2. Create an instance of a detector.

Get an instance of a FirebaseVisionDetector.

final BarcodeDetector barcodeDetector = FirebaseVision.instance.barcodeDetector();
final CloudLabelDetector cloudLabelDetector = FirebaseVision.instance.cloudLabelDetector();
final FaceDetector faceDetector = FirebaseVision.instance.faceDetector();
final LabelDetector labelDetector = FirebaseVision.instance.labelDetector();
final TextRecognizer textRecognizer = FirebaseVision.instance.textRecognizer();

You can also configure all detectors except TextRecognizer with desired options.

final LabelDetector detector = FirebaseVision.instance.labelDetector(
  LabelDetectorOptions(confidenceThreshold: 0.75),
);
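Step 4 below refers to landmark detection, classification, and tracking being "enabled with FaceDetectorOptions". As a sketch of what such a configuration looks like (the parameter names mirror the underlying ML Kit face detector options and may differ slightly between plugin versions):

```dart
// Hypothetical configuration sketch: parameter names follow the ML Kit
// face detector options and may vary between plugin versions.
final FaceDetector faceDetector = FirebaseVision.instance.faceDetector(
  FaceDetectorOptions(
    enableClassification: true, // enables smilingProbability, etc.
    enableLandmarks: true,      // enables ear/eye/cheek/nose/mouth positions
    enableTracking: true,       // enables a stable trackingId across frames
  ),
);
```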

3. Call detectInImage() with visionImage.

final List<Barcode> barcodes = await barcodeDetector.detectInImage(visionImage);
final List<Label> cloudLabels = await cloudLabelDetector.detectInImage(visionImage);
final List<Face> faces = await faceDetector.detectInImage(visionImage);
final List<Label> labels = await labelDetector.detectInImage(visionImage);
final VisionText visionText = await textRecognizer.detectInImage(visionImage);

4. Extract data.

a. Extract barcodes.

for (Barcode barcode in barcodes) {
  final Rectangle<int> boundingBox = barcode.boundingBox;
  final List<Point<int>> cornerPoints = barcode.cornerPoints;

  final String rawValue = barcode.rawValue;

  final BarcodeValueType valueType = barcode.valueType;

  // See API reference for complete list of supported types
  switch (valueType) {
    case BarcodeValueType.wifi:
      final String ssid = barcode.wifi.ssid;
      final String password = barcode.wifi.password;
      final BarcodeWiFiEncryptionType type = barcode.wifi.encryptionType;
      break;
    case BarcodeValueType.url:
      final String title = barcode.url.title;
      final String url = barcode.url.url;
      break;
  }
}

b. Extract faces.

for (Face face in faces) {
  final Rectangle<int> boundingBox = face.boundingBox;

  final double rotY = face.headEulerAngleY; // Head is rotated to the right rotY degrees
  final double rotZ = face.headEulerAngleZ; // Head is tilted sideways rotZ degrees

  // If landmark detection was enabled with FaceDetectorOptions (mouth, ears,
  // eyes, cheeks, and nose available):
  final FaceLandmark leftEar = face.getLandmark(FaceLandmarkType.leftEar);
  if (leftEar != null) {
    final Point<double> leftEarPos = leftEar.position;
  }

  // If classification was enabled with FaceDetectorOptions:
  if (face.smilingProbability != null) {
    final double smileProb = face.smilingProbability;
  }

  // If face tracking was enabled with FaceDetectorOptions:
  if (face.trackingId != null) {
    final int id = face.trackingId;
  }
}

c. Extract labels.

for (Label label in labels) {
  final String text = label.label;
  final String entityId = label.entityId;
  final double confidence = label.confidence;
}

d. Extract text.

String text = visionText.text;
for (TextBlock block in visionText.blocks) {
  final Rectangle<int> boundingBox = block.boundingBox;
  final List<Point<int>> cornerPoints = block.cornerPoints;
  final String text = block.text;
  final List<RecognizedLanguage> languages = block.recognizedLanguages;

  for (TextLine line in block.lines) {
    // Same getters as TextBlock
    for (TextElement element in line.elements) {
      // Same getters as TextBlock
    }
  }
}

Getting Started

See the example directory for a complete sample app using ML Kit for Firebase.
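Putting steps 1 through 4 together, on-device text recognition reduces to a few lines. This helper function is illustrative, not part of the plugin; it assumes `imageFile` was obtained elsewhere (e.g. with image_picker):

```dart
// Condensed version of steps 1-4: recognize text in an image file.
Future<void> recognizeText(File imageFile) async {
  // 1. Wrap the file in a FirebaseVisionImage.
  final FirebaseVisionImage visionImage =
      FirebaseVisionImage.fromFile(imageFile);
  // 2. Get a detector instance.
  final TextRecognizer textRecognizer =
      FirebaseVision.instance.textRecognizer();
  // 3. Run detection.
  final VisionText visionText =
      await textRecognizer.detectInImage(visionImage);
  // 4. Extract the recognized text.
  print(visionText.text);
}
```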


Changelog

0.2.0+1

  • Bump Android dependencies to latest.

0.2.0

  • Breaking Change Update TextDetector to TextRecognizer for android mlkit '17.0.0' and firebase-ios-sdk '5.6.0'.
  • Added CloudLabelDetector.

0.1.2

  • Fix example imports so that publishing will be warning-free.

0.1.1

  • Set pod version of Firebase/MLVision to avoid breaking changes.

0.1.0

  • Breaking Change Add Barcode, Face, and Label on-device detectors.
  • Remove close method.

0.0.2

  • Bump Android and Firebase dependency versions.

0.0.1

  • Initial release with text detector.


// Copyright 2018 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

import 'dart:async';
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';

import 'detector_painters.dart';

void main() => runApp(MaterialApp(home: _MyHomePage()));

class _MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<_MyHomePage> {
  File _imageFile;
  Size _imageSize;
  dynamic _scanResults;
  Detector _currentDetector = Detector.text;

  Future<void> _getAndScanImage() async {
    setState(() {
      _imageFile = null;
      _imageSize = null;
    });

    final File imageFile =
        await ImagePicker.pickImage(source: ImageSource.gallery);

    if (imageFile != null) {
      _getImageSize(imageFile);
      _scanImage(imageFile);
    }

    setState(() {
      _imageFile = imageFile;
    });
  }

  Future<void> _getImageSize(File imageFile) async {
    final Completer<Size> completer = Completer<Size>();

    final Image image = Image.file(imageFile);
    image.image.resolve(const ImageConfiguration()).addListener(
      (ImageInfo info, bool _) {
        completer.complete(Size(
          info.image.width.toDouble(),
          info.image.height.toDouble(),
        ));
      },
    );

    final Size imageSize = await completer.future;
    setState(() {
      _imageSize = imageSize;
    });
  }

  Future<void> _scanImage(File imageFile) async {
    setState(() {
      _scanResults = null;
    });

    final FirebaseVisionImage visionImage =
        FirebaseVisionImage.fromFile(imageFile);

    FirebaseVisionDetector detector;
    switch (_currentDetector) {
      case Detector.barcode:
        detector = FirebaseVision.instance.barcodeDetector();
        break;
      case Detector.face:
        detector = FirebaseVision.instance.faceDetector();
        break;
      case Detector.label:
        detector = FirebaseVision.instance.labelDetector();
        break;
      case Detector.cloudLabel:
        detector = FirebaseVision.instance.cloudLabelDetector();
        break;
      case Detector.text:
        detector = FirebaseVision.instance.textRecognizer();
        break;
    }

    final dynamic results =
        await detector.detectInImage(visionImage) ?? <dynamic>[];

    setState(() {
      _scanResults = results;
    });
  }

  CustomPaint _buildResults(Size imageSize, dynamic results) {
    CustomPainter painter;

    switch (_currentDetector) {
      case Detector.barcode:
        painter = BarcodeDetectorPainter(_imageSize, results);
        break;
      case Detector.face:
        painter = FaceDetectorPainter(_imageSize, results);
        break;
      case Detector.label:
        painter = LabelDetectorPainter(_imageSize, results);
        break;
      case Detector.cloudLabel:
        painter = LabelDetectorPainter(_imageSize, results);
        break;
      case Detector.text:
        painter = TextDetectorPainter(_imageSize, results);
        break;
    }

    return CustomPaint(
      painter: painter,
    );
  }

  Widget _buildImage() {
    return Container(
      constraints: const BoxConstraints.expand(),
      decoration: BoxDecoration(
        image: DecorationImage(
          image: Image.file(_imageFile).image,
          fit: BoxFit.fill,
        ),
      ),
      child: _imageSize == null || _scanResults == null
          ? const Center(
              child: Text(
                'Scanning...',
                style: TextStyle(
                  fontSize: 30.0,
                ),
              ),
            )
          : _buildResults(_imageSize, _scanResults),
    );
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('ML Vision Example'),
        actions: <Widget>[
          PopupMenuButton<Detector>(
            onSelected: (Detector result) {
              _currentDetector = result;
              if (_imageFile != null) _scanImage(_imageFile);
            },
            itemBuilder: (BuildContext context) => <PopupMenuEntry<Detector>>[
              const PopupMenuItem<Detector>(
                child: Text('Detect Barcode'),
                value: Detector.barcode,
              ),
              const PopupMenuItem<Detector>(
                child: Text('Detect Face'),
                value: Detector.face,
              ),
              const PopupMenuItem<Detector>(
                child: Text('Detect Label'),
                value: Detector.label,
              ),
              const PopupMenuItem<Detector>(
                child: Text('Detect Cloud Label'),
                value: Detector.cloudLabel,
              ),
              const PopupMenuItem<Detector>(
                child: Text('Detect Text'),
                value: Detector.text,
              ),
            ],
          ),
        ],
      ),
      body: _imageFile == null
          ? const Center(child: Text('No image selected.'))
          : _buildImage(),
      floatingActionButton: FloatingActionButton(
        onPressed: _getAndScanImage,
        tooltip: 'Pick Image',
        child: const Icon(Icons.add_a_photo),
      ),
    );
  }
}
Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:

dependencies:
  firebase_ml_vision: ^0.2.0+1

2. Install it

You can install packages from the command line:

with Flutter:

$ flutter packages get

Alternatively, your editor might support flutter packages get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:

import 'package:firebase_ml_vision/firebase_ml_vision.dart';
Versions

Version  Uploaded
0.2.0+1  Oct 12, 2018
0.2.0    Oct 10, 2018
0.1.2    Aug 21, 2018
0.1.1    Aug 17, 2018
0.1.0    Jul 25, 2018
0.0.1    Jun 28, 2018

This package was analyzed on Nov 14, 2018 using:

  • Dart: 2.0.0
  • pana: 0.12.6
  • Flutter: 0.11.3

Detected platforms: Flutter (references Flutter and has no conflicting libraries).


Dependencies

Direct dependencies:
  • Dart SDK: >=2.0.0-dev.28.0 <3.0.0
  • flutter: 0.0.0

Transitive dependencies:
  • collection: 1.14.11
  • meta: 1.1.6
  • sky_engine: 0.0.99
  • typed_data: 1.1.6
  • vector_math: 2.0.8

Dev dependencies:
  • firebase_core: ^0.2.5+1
  • image_picker: ^0.4.5