firebase_ml_vision 0.1.2


Google ML Kit for Firebase

A Flutter plugin to use the Google ML Kit for Firebase API.

For Flutter plugins for other Firebase products, see FlutterFire.md.

Note: This plugin is still under development, and some APIs might not be available yet. Feedback and Pull Requests are most welcome!

Usage

To use this plugin, add firebase_ml_vision as a dependency in your pubspec.yaml file. You must also configure Firebase for each platform project: Android and iOS (see the example folder or https://codelabs.developers.google.com/codelabs/flutter-firebase/#4 for step by step details).
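
For reference, the dependency entry in your pubspec.yaml looks like this (the version shown matches this release; the full installing steps are repeated at the bottom of this page):

dependencies:
  firebase_ml_vision: ^0.1.2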

Android

Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

<application ...>
  ...
  <meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,label,barcode,face" -->
</application>

Using an On-device FirebaseVisionDetector

1. Create a FirebaseVisionImage.

Create a FirebaseVisionImage object from your image. To create a FirebaseVisionImage from an image File object:

final File imageFile = getImageFile();
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(imageFile);
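
Here, getImageFile() is a placeholder for however your app obtains the image file. One option, used by the example app below, is the image_picker plugin; a minimal sketch (note this variant is asynchronous, so you would await it):

import 'dart:io';

import 'package:image_picker/image_picker.dart';

// Placeholder helper: lets the user pick an image from the gallery.
// Returns null if the user cancels, so callers should check for null.
Future<File> getImageFile() async {
  return await ImagePicker.pickImage(source: ImageSource.gallery);
}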

2. Create an instance of a detector.

Get an instance of a FirebaseVisionDetector.

final BarcodeDetector barcodeDetector = FirebaseVision.instance.barcodeDetector();
final FaceDetector faceDetector = FirebaseVision.instance.faceDetector();
final LabelDetector labelDetector = FirebaseVision.instance.labelDetector();
final TextDetector textDetector = FirebaseVision.instance.textDetector();

You can also configure all detectors except TextDetector with desired options.

final LabelDetector detector = FirebaseVision.instance.labelDetector(
  LabelDetectorOptions(confidenceThreshold: 0.75),
);
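
The landmark, classification, and tracking data shown in step 4 below is only populated when the corresponding options are enabled. A sketch, assuming FaceDetectorOptions exposes enableLandmarks, enableClassification, and enableTracking flags (check the API reference for the exact names):

final FaceDetector faceDetector = FirebaseVision.instance.faceDetector(
  FaceDetectorOptions(
    enableLandmarks: true,      // positions for ears, eyes, cheeks, mouth, nose
    enableClassification: true, // e.g. smilingProbability
    enableTracking: true,       // assigns a trackingId stable across frames
  ),
);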

3. Call detectInImage() with visionImage.

final List<Barcode> barcodes = await barcodeDetector.detectInImage(visionImage);
final List<Face> faces = await faceDetector.detectInImage(visionImage);
final List<Label> labels = await labelDetector.detectInImage(visionImage);
final List<TextBlock> blocks = await textDetector.detectInImage(visionImage);

4. Extract data.

a. Extract barcodes.

for (Barcode barcode in barcodes) {
  final Rectangle<int> boundingBox = barcode.boundingBox;
  final List<Point<int>> cornerPoints = barcode.cornerPoints;

  final String rawValue = barcode.rawValue;

  final BarcodeValueType valueType = barcode.valueType;

  // See API reference for complete list of supported types
  switch (valueType) {
    case BarcodeValueType.wifi:
      final String ssid = barcode.wifi.ssid;
      final String password = barcode.wifi.password;
      final BarcodeWiFiEncryptionType type = barcode.wifi.encryptionType;
      break;
    case BarcodeValueType.url:
      final String title = barcode.url.title;
      final String url = barcode.url.url;
      break;
    default:
      break;
  }
}

b. Extract faces.

for (Face face in faces) {
  final Rectangle<int> boundingBox = face.boundingBox;

  final double rotY = face.headEulerAngleY; // Head is rotated to the right rotY degrees
  final double rotZ = face.headEulerAngleZ; // Head is tilted sideways rotZ degrees

  // If landmark detection was enabled with FaceDetectorOptions (mouth, ears,
  // eyes, cheeks, and nose available):
  final FaceLandmark leftEar = face.getLandmark(FaceLandmarkType.leftEar);
  if (leftEar != null) {
    final Point<double> leftEarPos = leftEar.position;
  }

  // If classification was enabled with FaceDetectorOptions:
  if (face.smilingProbability != null) {
    final double smileProb = face.smilingProbability;
  }

  // If face tracking was enabled with FaceDetectorOptions:
  if (face.trackingId != null) {
    final int id = face.trackingId;
  }
}

c. Extract labels.

for (Label label in labels) {
  final String text = label.label;
  final String entityId = label.entityId;
  final double confidence = label.confidence;
}

d. Extract text.

for (TextBlock block in blocks) {
  final Rectangle<int> boundingBox = block.boundingBox;
  final List<Point<int>> cornerPoints = block.cornerPoints;
  final String text = block.text;

  for (TextLine line in block.lines) {
    // ...

    for (TextElement element in line.elements) {
      // ...
    }
  }
}
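
Each TextLine and TextElement carries the same kind of data as its parent TextBlock; a sketch of drilling down, assuming they expose the same text, boundingBox, and cornerPoints fields:

for (TextBlock block in blocks) {
  for (TextLine line in block.lines) {
    final String lineText = line.text;          // one recognized line
    for (TextElement element in line.elements) {
      final String elementText = element.text;  // roughly one word or symbol
    }
  }
}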

Getting Started

See the example directory for a complete sample app using Google ML Kit for Firebase.

0.1.2

  • Fix example imports so that publishing will be warning-free.

0.1.1

  • Set pod version of Firebase/MLVision to avoid breaking changes.

0.1.0

  • Breaking Change: Add Barcode, Face, and Label on-device detectors.
  • Remove close method.

0.0.2

  • Bump Android and Firebase dependency versions.

0.0.1

  • Initial release with text detector.

example/lib/main.dart

// Copyright 2018 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

import 'dart:async';
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';

import 'detector_painters.dart';

void main() => runApp(new MaterialApp(home: _MyHomePage()));

class _MyHomePage extends StatefulWidget {
  @override
  _MyHomePageState createState() => new _MyHomePageState();
}

class _MyHomePageState extends State<_MyHomePage> {
  File _imageFile;
  Size _imageSize;
  List<dynamic> _scanResults;
  Detector _currentDetector = Detector.text;

  Future<void> _getAndScanImage() async {
    setState(() {
      _imageFile = null;
      _imageSize = null;
    });

    final File imageFile =
        await ImagePicker.pickImage(source: ImageSource.gallery);

    if (imageFile != null) {
      _getImageSize(imageFile);
      _scanImage(imageFile);
    }

    setState(() {
      _imageFile = imageFile;
    });
  }

  Future<void> _getImageSize(File imageFile) async {
    final Completer<Size> completer = new Completer<Size>();

    final Image image = new Image.file(imageFile);
    image.image.resolve(const ImageConfiguration()).addListener(
      (ImageInfo info, bool _) {
        completer.complete(Size(
          info.image.width.toDouble(),
          info.image.height.toDouble(),
        ));
      },
    );

    final Size imageSize = await completer.future;
    setState(() {
      _imageSize = imageSize;
    });
  }

  Future<void> _scanImage(File imageFile) async {
    setState(() {
      _scanResults = null;
    });

    final FirebaseVisionImage visionImage =
        FirebaseVisionImage.fromFile(imageFile);

    FirebaseVisionDetector detector;
    switch (_currentDetector) {
      case Detector.barcode:
        detector = FirebaseVision.instance.barcodeDetector();
        break;
      case Detector.face:
        detector = FirebaseVision.instance.faceDetector();
        break;
      case Detector.label:
        detector = FirebaseVision.instance.labelDetector();
        break;
      case Detector.text:
        detector = FirebaseVision.instance.textDetector();
        break;
      default:
        return;
    }

    final List<dynamic> results =
        await detector.detectInImage(visionImage) ?? <dynamic>[];

    setState(() {
      _scanResults = results;
    });
  }

  CustomPaint _buildResults(Size imageSize, List<dynamic> results) {
    CustomPainter painter;

    switch (_currentDetector) {
      case Detector.barcode:
        painter = new BarcodeDetectorPainter(_imageSize, results);
        break;
      case Detector.face:
        painter = new FaceDetectorPainter(_imageSize, results);
        break;
      case Detector.label:
        painter = new LabelDetectorPainter(_imageSize, results);
        break;
      case Detector.text:
        painter = new TextDetectorPainter(_imageSize, results);
        break;
      default:
        break;
    }

    return new CustomPaint(
      painter: painter,
    );
  }

  Widget _buildImage() {
    return new Container(
      constraints: const BoxConstraints.expand(),
      decoration: new BoxDecoration(
        image: new DecorationImage(
          image: Image.file(_imageFile).image,
          fit: BoxFit.fill,
        ),
      ),
      child: _imageSize == null || _scanResults == null
          ? const Center(
              child: Text(
                'Scanning...',
                style: TextStyle(
                  color: Colors.green,
                  fontSize: 30.0,
                ),
              ),
            )
          : _buildResults(_imageSize, _scanResults),
    );
  }

  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      appBar: new AppBar(
        title: const Text('ML Vision Example'),
        actions: <Widget>[
          new PopupMenuButton<Detector>(
            onSelected: (Detector result) {
              _currentDetector = result;
              if (_imageFile != null) _scanImage(_imageFile);
            },
            itemBuilder: (BuildContext context) => <PopupMenuEntry<Detector>>[
                  const PopupMenuItem<Detector>(
                    child: Text('Detect Barcode'),
                    value: Detector.barcode,
                  ),
                  const PopupMenuItem<Detector>(
                    child: Text('Detect Face'),
                    value: Detector.face,
                  ),
                  const PopupMenuItem<Detector>(
                    child: Text('Detect Label'),
                    value: Detector.label,
                  ),
                  const PopupMenuItem<Detector>(
                    child: Text('Detect Text'),
                    value: Detector.text,
                  ),
                ],
          ),
        ],
      ),
      body: _imageFile == null
          ? const Center(child: Text('No image selected.'))
          : _buildImage(),
      floatingActionButton: new FloatingActionButton(
        onPressed: _getAndScanImage,
        tooltip: 'Pick Image',
        child: const Icon(Icons.add_a_photo),
      ),
    );
  }
}

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:


dependencies:
  firebase_ml_vision: ^0.1.2

2. Install it

You can install packages from the command line:

with Flutter:


$ flutter packages get

Alternatively, your editor might support flutter packages get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:


import 'package:firebase_ml_vision/firebase_ml_vision.dart';
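
Putting the pieces together, a minimal end-to-end sketch that picks an image and runs text recognition on it, following the same calls as the example app above:

import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';
import 'package:image_picker/image_picker.dart';

Future<void> recognizeTextFromGallery() async {
  // Let the user pick an image; bail out if the picker was canceled.
  final File imageFile =
      await ImagePicker.pickImage(source: ImageSource.gallery);
  if (imageFile == null) return;

  // Wrap the file and run the on-device text detector.
  final FirebaseVisionImage visionImage =
      FirebaseVisionImage.fromFile(imageFile);
  final TextDetector detector = FirebaseVision.instance.textDetector();
  final List<TextBlock> blocks = await detector.detectInImage(visionImage);

  for (TextBlock block in blocks) {
    print(block.text);
  }
}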
  
Version  Uploaded
0.1.2    Aug 21, 2018
0.1.1    Aug 17, 2018
0.1.0    Jul 25, 2018
0.0.1    Jun 28, 2018
Popularity: 90 (how popular the package is relative to other packages)
Health: 100 (code health derived from static analysis)
Maintenance: 100 (reflects how tidy and up-to-date the package is)
Overall: 95 (weighted score of the above)

We analyzed this package on Sep 18, 2018, and provided the score and details below. The analysis completed successfully using:

  • Dart: 2.0.0
  • pana: 0.12.3
  • Flutter: 0.8.4

Platforms

Detected platforms: Flutter

References Flutter, and has no conflicting libraries.

Dependencies

Package        Constraint               Resolved

Direct dependencies:
  Dart SDK     >=2.0.0-dev.28.0 <3.0.0
  flutter                               0.0.0

Transitive dependencies:
  collection                            1.14.11
  meta                                  1.1.6
  sky_engine                            0.0.99
  typed_data                            1.1.6
  vector_math                           2.0.8

Dev dependencies:
  flutter_test
  image_picker ^0.4.5