mlkit 0.6.0


mlkit


A Flutter plugin for using Firebase ML Kit.

Only your stars motivate me!

Note: This plugin is still under development, and some APIs might not be available yet. Feedback and Pull Requests are most welcome!

Official package

The Flutter team now provides the firebase_ml_vision package for Firebase ML Kit. Please consider using firebase_ml_vision instead.

Features

Feature                     | Android | iOS
Recognize text (on device)  | ✓       | ✓
Recognize text (cloud)      | not yet | not yet
Detect faces (on device)    | ✓       | ✓
Scan barcodes (on device)   | ✓       | ✓
Label images (on device)    | ✓       | ✓
Label images (cloud)        | not yet | not yet
Recognize landmarks (cloud) | not yet | not yet
Custom model                | ✓       | not yet

What features are available on device or in the cloud?

Usage

To use this plugin, add mlkit as a dependency in your pubspec.yaml file.
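
For example, in pubspec.yaml:

dependencies:
  mlkit: ^0.6.0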

Getting Started

Check out the example directory for a sample app using Firebase ML Kit.

Android Integration

To integrate the plugin into the Android part of your app, follow these steps:

  1. Using the Firebase Console, add an Android app to your project: follow the assistant, download the generated google-services.json file, and place it inside android/app. Then modify the android/build.gradle and android/app/build.gradle files to add the Google services plugin, as described by the Firebase assistant.

iOS Integration

To integrate the plugin into the iOS part of your app, follow these steps:

  1. Using the Firebase Console, add an iOS app to your project: follow the assistant, download the generated GoogleService-Info.plist file, open ios/Runner.xcworkspace with Xcode, and within Xcode place the file inside ios/Runner. Don't follow the steps named "Add Firebase SDK" and "Add initialization code" in the Firebase assistant.

  2. Remove the use_frameworks! line from ios/Podfile (workaround for flutter/flutter#9694).

Dart/Flutter Integration

From your Dart code, import the plugin and obtain a detector instance:

import 'package:mlkit/mlkit.dart';

FirebaseVisionTextDetector detector = FirebaseVisionTextDetector.instance;

// Detect from a file/image by path
var currentLabels = await detector.detectFromPath(_file?.path);

// Detect from the binary data of a file/image
var currentLabels = await detector.detectFromBinary(_file?.readAsBytesSync());
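
The detectors return a list of detection results. Here is a minimal sketch of consuming them, assuming each result exposes its recognized content via toString() and its bounding box via a rect property, as the example app below does:

import 'package:mlkit/mlkit.dart';

Future<Null> printDetectedText(String path) async {
  FirebaseVisionTextDetector detector = FirebaseVisionTextDetector.instance;
  var texts = await detector.detectFromPath(path);
  for (var text in texts) {
    // Print each detected element and where it was found in the image.
    print("text : ${text.toString()}, rect : ${text.rect}");
  }
}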

Custom model interpreter

native sample code

import 'dart:typed_data';

import 'package:flutter/services.dart' show rootBundle;
import 'package:image/image.dart' as img;
import 'package:mlkit/mlkit.dart';

FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.instance;
FirebaseModelManager manager = FirebaseModelManager.instance;

// Register the cloud-hosted model under the name configured in the Firebase Console.
manager.registerCloudModelSource(
    FirebaseCloudModelSource(modelName: "mobilenet_v1_224_quant"));

// Decode the bundled image and resize it to the model's 224x224 input size.
var imageBytes = (await rootBundle.load("assets/mountain.jpg")).buffer;
img.Image image = img.decodeJpg(imageBytes.asUint8List());
image = img.copyResize(image, 224, 224);

// Run inference: the input is a [1, 224, 224, 3] BYTE tensor,
// the output a [1, 1001] BYTE tensor of quantized label scores.
var results = await interpreter.run(
    "mobilenet_v1_224_quant",
    FirebaseModelInputOutputOptions(
        0,
        FirebaseModelDataType.BYTE,
        [1, 224, 224, 3],
        0,
        FirebaseModelDataType.BYTE,
        [1, 1001]),
    imageToByteList(image));

// Converts the image into the byte buffer expected by the model's
// [1, 224, 224, 3] BYTE input tensor, in row-major order.
Uint8List imageToByteList(img.Image image) {
  var _inputSize = 224;
  var convertedBytes = new Uint8List(1 * _inputSize * _inputSize * 3);
  var buffer = new ByteData.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var y = 0; y < _inputSize; y++) {
    for (var x = 0; x < _inputSize; x++) {
      var pixel = image.getPixel(x, y);
      // Unpack the three 8-bit color channels from the packed pixel value.
      buffer.setUint8(pixelIndex++, (pixel >> 16) & 0xFF);
      buffer.setUint8(pixelIndex++, (pixel >> 8) & 0xFF);
      buffer.setUint8(pixelIndex++, pixel & 0xFF);
    }
  }
  return convertedBytes;
}
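
The quantized BYTE output can be mapped back to approximate probabilities by scaling each byte into [0, 1]. A minimal sketch, mirroring what the example app below does with the results:

for (var i = 0; i < results.length; i++) {
  int item = results[i];
  if (item != 0) {
    // Each non-zero quantized byte corresponds to a score in [0, 1] for label i.
    print("probability of label $i : ${(item & 0xff) / 255.0}");
  }
}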

0.6.0

  • Support custom model interpreter for Android (cloud-hosted models only).

0.5.0

  • Added functionality to detect from binary data.

0.4.1

  • Added on-device face detection API for Android.

0.4.0

  • Added on-device face detection API (iOS only).

0.3.0

  • Added on-device image labeling API.

0.2.2

  • Fixed package dependency for Android.

0.2.1

  • Fully support barcode scanning on device.

0.2.0

  • Support barcode scanning (iOS only).

0.1.1

  • Brushed up the example code.

0.1.0

  • Fully support text recognition on device.

0.0.2

  • Support iOS.

0.0.1

  • Initial release.

example/lib/main.dart

import 'dart:async';
import 'dart:io';
import 'dart:typed_data';

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:image/image.dart' as img;
import 'package:image_picker/image_picker.dart';
import 'package:mlkit/mlkit.dart';

void main() => runApp(new MyApp());

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => new _MyAppState();
}

class _MyAppState extends State<MyApp> {
  File _file;
  List<VisionFace> _currentLabels = <VisionFace>[];

  FirebaseVisionFaceDetector detector = FirebaseVisionFaceDetector.instance;

  FirebaseModelInterpreter interpreter = FirebaseModelInterpreter.instance;

  FirebaseModelManager manager = FirebaseModelManager.instance;

  @override
  initState() {
    super.initState();
    manager.registerCloudModelSource(
        FirebaseCloudModelSource(modelName: "mobilenet_v1_224_quant"));
  }

  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: new Scaffold(
        appBar: new AppBar(
          title: new Text('Plugin example app'),
        ),
        body: _buildBody(),
        floatingActionButton: new FloatingActionButton(
          onPressed: () async {
            try {
              //var file = await ImagePicker.pickImage(source: ImageSource.camera);
              var file =
                  await ImagePicker.pickImage(source: ImageSource.gallery);
              setState(() {
                _file = file;
              });
              try {
                var imageBytes =
                    (await rootBundle.load("assets/mountain.jpg")).buffer;
                img.Image image = img.decodeJpg(imageBytes.asUint8List());
                image = img.copyResize(image, 224, 224);
                var ret = await interpreter.run(
                    "mobilenet_v1_224_quant",
                    FirebaseModelInputOutputOptions(
                        0,
                        FirebaseModelDataType.BYTE,
                        [1, 224, 224, 3],
                        0,
                        FirebaseModelDataType.BYTE,
                        [1, 1001]),
                    imageToByteList(image));
                for (var i = 0; i < ret.length; i++) {
                  int item = ret[i];
                  if (item != 0) {
                    print("prob in flutter : ${(ret[i] & 0xff) / 255.0 }");
                  }
                }
                var currentLabels = await detector.detectFromPath(
                    _file?.path,
                    VisionFaceDetectorOptions(
                        classificationType:
                            VisionFaceDetectorClassification.All,
                        landmarkType: VisionFaceDetectorLandmark.All));
                setState(() {
                  _currentLabels = currentLabels;
                });
              } catch (e) {
                print(e.toString());
              }
            } catch (e) {
              print(e.toString());
            }
          },
          child: new Icon(Icons.camera),
        ),
      ),
    );
  }

  // Converts the image into the byte buffer expected by the model's
  // [1, 224, 224, 3] BYTE input tensor, in row-major order.
  Uint8List imageToByteList(img.Image image) {
    var _inputSize = 224;
    var convertedBytes = new Uint8List(1 * _inputSize * _inputSize * 3);
    var buffer = new ByteData.view(convertedBytes.buffer);
    int pixelIndex = 0;
    for (var y = 0; y < _inputSize; y++) {
      for (var x = 0; x < _inputSize; x++) {
        var pixel = image.getPixel(x, y);
        // Unpack the three 8-bit color channels from the packed pixel value.
        buffer.setUint8(pixelIndex++, (pixel >> 16) & 0xFF);
        buffer.setUint8(pixelIndex++, (pixel >> 8) & 0xFF);
        buffer.setUint8(pixelIndex++, pixel & 0xFF);
      }
    }
    return convertedBytes;
  }

  Widget _buildImage() {
    return SizedBox(
      height: 500.0,
      child: new Center(
        child: _file == null
            ? Text('No Image')
            : new FutureBuilder<Size>(
                future: _getImageSize(Image.file(_file, fit: BoxFit.fitWidth)),
                builder: (BuildContext context, AsyncSnapshot<Size> snapshot) {
                  if (snapshot.hasData) {
                    return Container(
                        foregroundDecoration:
                            TextDetectDecoration(_currentLabels, snapshot.data),
                        child: Image.file(_file, fit: BoxFit.fitWidth));
                  } else {
                    return new Text('Detecting...');
                  }
                },
              ),
      ),
    );
  }

  // Resolves the image's intrinsic size once the image has loaded.
  Future<Size> _getImageSize(Image image) {
    Completer<Size> completer = new Completer<Size>();
    image.image.resolve(new ImageConfiguration()).addListener(
        (ImageInfo info, bool _) => completer.complete(
            Size(info.image.width.toDouble(), info.image.height.toDouble())));
    return completer.future;
  }

  Widget _buildBody() {
    return Container(
      child: Column(
        children: <Widget>[
          _buildImage(),
          _buildList(_currentLabels),
        ],
      ),
    );
  }

  Widget _buildList(List<VisionFace> texts) {
    if (texts.length == 0) {
      return Text('Empty');
    }
    return Expanded(
      child: Container(
        child: ListView.builder(
            padding: const EdgeInsets.all(1.0),
            itemCount: texts.length,
            itemBuilder: (context, i) {
              return _buildRow(texts[i].toString());
            }),
      ),
    );
  }

  Widget _buildRow(String text) {
    return ListTile(
      title: Text(
        "Text: ${text}",
      ),
      dense: true,
    );
  }
}

class TextDetectDecoration extends Decoration {
  final Size _originalImageSize;
  final List<VisionFace> _texts;
  TextDetectDecoration(List<VisionFace> texts, Size originalImageSize)
      : _texts = texts,
        _originalImageSize = originalImageSize;

  @override
  BoxPainter createBoxPainter([VoidCallback onChanged]) {
    return new _TextDetectPainter(_texts, _originalImageSize);
  }
}

class _TextDetectPainter extends BoxPainter {
  final List<VisionFace> _texts;
  final Size _originalImageSize;
  _TextDetectPainter(texts, originalImageSize)
      : _texts = texts,
        _originalImageSize = originalImageSize;

  @override
  void paint(Canvas canvas, Offset offset, ImageConfiguration configuration) {
    final paint = new Paint()
      ..strokeWidth = 2.0
      ..color = Colors.red
      ..style = PaintingStyle.stroke;
    print("original image size : ${_originalImageSize}");

    // Scale factors from the original image's coordinate space to the
    // rendered widget's coordinate space.
    final _heightRatio = _originalImageSize.height / configuration.size.height;
    final _widthRatio = _originalImageSize.width / configuration.size.width;

    canvas.save();
    for (var text in _texts) {
      print("text : ${text.toString()}, rect : ${text.rect}");
      // Map the detected bounding box into the widget's coordinates.
      final _rect = Rect.fromLTRB(
          offset.dx + text.rect.left / _widthRatio,
          offset.dy + text.rect.top / _heightRatio,
          offset.dx + text.rect.right / _widthRatio,
          offset.dy + text.rect.bottom / _heightRatio);
      print("_rect : ${_rect}");
      canvas.drawRect(_rect, paint);
    }

    print("offset : ${offset}");
    print("configuration : ${configuration}");

    final rect = offset & configuration.size;
    print("rect container : ${rect}");
    canvas.restore();
  }
}

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:


dependencies:
  mlkit: ^0.6.0

2. Install it

You can install packages from the command line:

with pub:


$ pub get

with Flutter:


$ flutter packages get

Alternatively, your editor might support pub get or flutter packages get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:


import 'package:mlkit/mlkit.dart';
  
Version | Uploaded
0.6.0   | Jul 18, 2018
0.5.0   | Jun 29, 2018
0.4.1   | Jun 13, 2018
0.4.0   | Jun 12, 2018
0.3.0   | Jun 11, 2018
0.2.2   | Jun 8, 2018
0.2.1   | Jun 4, 2018
0.2.0   | Jun 3, 2018
0.1.1   | Jun 3, 2018
0.1.0   | Jun 1, 2018

All 12 versions...

Popularity: 90 (how popular the package is relative to other packages)
Health: 0 (code health derived from static analysis)
Maintenance: 0 (reflects how tidy and up-to-date the package is)
Overall: 45 (weighted score of the above)

The package version is not analyzed, because it does not support Dart 2. Until this is resolved, the package will receive a health and maintenance score of 0.

Issues and suggestions

Support Dart 2 in pubspec.yaml.

The SDK constraint in pubspec.yaml doesn't allow the Dart 2.0.0 release. For information about upgrading it to be Dart 2 compatible, please see https://www.dartlang.org/dart-2#migration.

Dependencies

Package (direct dependency) | Constraint
Dart SDK                    | >=1.23.0 <2.0.0