Performs segmentation on images.
The API expects a TFLite model with TFLite Model Metadata. The API supports models with one image input tensor and one output tensor. More specifically, the requirements are as follows.
- Input image tensor (kTfLiteUInt8/kTfLiteFloat32)
  - image input of size [batch x height x width x channels].
  - batch inference is not supported (batch is required to be 1).
  - only RGB inputs are supported (channels is required to be 3).
  - if type is kTfLiteFloat32, NormalizationOptions are required to be attached to the metadata for input normalization.
- Output image tensor (kTfLiteUInt8/kTfLiteFloat32)
  - tensor of size [batch x mask_height x mask_width x num_classes], where batch is required to be 1, mask_width and mask_height are the dimensions of the segmentation masks produced by the model, and num_classes is the number of classes supported by the model.
  - optional (but recommended) label map(s) can be attached as AssociatedFile-s with type TENSOR_AXIS_LABELS, containing one label per line. The first such AssociatedFile (if any) is used to fill the class name, i.e. ColoredLabel.getlabel(), of the results. The display name, i.e. ColoredLabel.getDisplayName(), is filled from the AssociatedFile (if any) whose locale matches the `display_names_locale` field of the `ImageSegmenterOptions` used at creation time ("en" by default, i.e. English). If none of these are available, only the `index` field of the results will be filled.
An example of such a model can be found on TensorFlow Hub.
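
As an illustration, a minimal end-to-end sketch could look like the following. The asset name "deeplabv3.tflite", the wrapper class, and the bitmap argument are placeholders, and Segmentation.getColoredLabels() is assumed to expose the ColoredLabel accessors referenced above; verify against the library version in use.

```java
import android.content.Context;
import android.graphics.Bitmap;
import java.io.IOException;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.segmenter.ColoredLabel;
import org.tensorflow.lite.task.vision.segmenter.ImageSegmenter;
import org.tensorflow.lite.task.vision.segmenter.Segmentation;

final class SegmentationExample {
  /** Runs a bundled segmentation model on a bitmap and logs the recognized classes. */
  static List<Segmentation> segmentBitmap(Context context, Bitmap bitmap) throws IOException {
    // "deeplabv3.tflite" is a placeholder asset name.
    ImageSegmenter segmenter = ImageSegmenter.createFromFile(context, "deeplabv3.tflite");
    try {
      // Wrap an ARGB_8888 bitmap and run segmentation; a single Segmentation is expected.
      List<Segmentation> results = segmenter.segment(TensorImage.fromBitmap(bitmap));
      for (ColoredLabel label : results.get(0).getColoredLabels()) {
        System.out.println(label.getlabel() + " / " + label.getDisplayName());
      }
      return results;
    } finally {
      segmenter.close();
    }
  }
}
```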
Inherited Methods

From class java.lang.Object:
  boolean equals(Object arg0)
  final Class<?> getClass()
  int hashCode()
  final void notify()
  final void notifyAll()
  String toString()
  final void wait(long arg0, int arg1)
  final void wait(long arg0)
  final void wait()

From interface java.io.Closeable
From interface java.lang.AutoCloseable
Public Methods

public static ImageSegmenter createFromFile(Context context, String modelPath)

Parameters
  context
  modelPath: path of the segmentation model with metadata in the assets
public static ImageSegmenter createFromFile(File modelFile)

Parameters
  modelFile: the segmentation model File instance
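
For instance, a minimal sketch of the File-based factory, assuming the model has already been copied into app-internal storage; the file name is a placeholder.

```java
import android.content.Context;
import java.io.File;
import java.io.IOException;
import org.tensorflow.lite.task.vision.segmenter.ImageSegmenter;

final class CreateFromFileExample {
  /** Loads the segmenter from a model file in app-internal storage. */
  static ImageSegmenter load(Context context) throws IOException {
    // Placeholder file name: any readable File containing a model with TFLite Metadata.
    File modelFile = new File(context.getFilesDir(), "deeplabv3.tflite");
    return ImageSegmenter.createFromFile(modelFile);
  }
}
```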
public static ImageSegmenter createFromFileAndOptions(File modelFile, ImageSegmenterOptions options)

Parameters
  modelFile: the segmentation model File instance
  options
public static ImageSegmenter createFromFileAndOptions(Context context, String modelPath, ImageSegmenterOptions options)

Parameters
  context
  modelPath: path of the segmentation model with metadata in the assets
  options
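
A sketch of the options-based factory. The builder calls shown (setOutputType, setDisplayNamesLocale, setNumThreads) are assumed from the ImageSegmenterOptions builder and should be verified against the library version in use; the asset name is a placeholder.

```java
import android.content.Context;
import java.io.IOException;
import org.tensorflow.lite.task.vision.segmenter.ImageSegmenter;
import org.tensorflow.lite.task.vision.segmenter.ImageSegmenter.ImageSegmenterOptions;
import org.tensorflow.lite.task.vision.segmenter.OutputType;

final class CreateWithOptionsExample {
  static ImageSegmenter load(Context context) throws IOException {
    ImageSegmenterOptions options =
        ImageSegmenterOptions.builder()
            .setOutputType(OutputType.CATEGORY_MASK)  // one mask of class indices (assumed enum)
            .setDisplayNamesLocale("en")              // locale used to fill getDisplayName()
            .setNumThreads(4)
            .build();
    // "deeplabv3.tflite" is a placeholder asset name.
    return ImageSegmenter.createFromFileAndOptions(context, "deeplabv3.tflite", options);
  }
}
```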
Parameters
  frameBufferHandle
  options
public List<Segmentation> segment(TensorImage image)

Parameters
  image: a UINT8 TensorImage object that represents an RGB or YUV image

Returns
  results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned; the result is stored in a List for later extension to, e.g., instance segmentation models, which may return one segmentation per object.
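
A sketch of the TensorImage overload. Reading the mask assumes the CATEGORY_MASK output type, where a single returned mask is expected to hold class indices; getMasks(), getWidth(), and getHeight() usage should be checked against the library version in use.

```java
import android.graphics.Bitmap;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.vision.segmenter.ImageSegmenter;
import org.tensorflow.lite.task.vision.segmenter.Segmentation;

final class SegmentTensorImageExample {
  /** Segments a bitmap and returns the expected single Segmentation result. */
  static Segmentation segment(ImageSegmenter segmenter, Bitmap bitmap) {
    List<Segmentation> results = segmenter.segment(TensorImage.fromBitmap(bitmap));
    Segmentation segmentation = results.get(0);
    // Assumed CATEGORY_MASK layout: one mask whose pixel values are class indices.
    TensorImage mask = segmentation.getMasks().get(0);
    System.out.println("Mask size: " + mask.getWidth() + "x" + mask.getHeight());
    return segmentation;
  }
}
```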
public List<Segmentation> segment(MlImage image)

Performs actual segmentation on the provided MlImage.

Parameters
  image: an MlImage to segment

Returns
  results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned; the result is stored in a List for later extension to, e.g., instance segmentation models, which may return one segmentation per object.
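
A sketch of the MlImage overload, assuming com.google.android.odml.image.BitmapMlImageBuilder is used to wrap a bitmap; in a real app the MlImage would more likely come from a camera pipeline.

```java
import android.graphics.Bitmap;
import com.google.android.odml.image.BitmapMlImageBuilder;
import com.google.android.odml.image.MlImage;
import java.util.List;
import org.tensorflow.lite.task.vision.segmenter.ImageSegmenter;
import org.tensorflow.lite.task.vision.segmenter.Segmentation;

final class SegmentMlImageExample {
  /** Wraps a bitmap into an MlImage and segments it. */
  static List<Segmentation> segment(ImageSegmenter segmenter, Bitmap bitmap) {
    MlImage mlImage = new BitmapMlImageBuilder(bitmap).build();
    try {
      return segmenter.segment(mlImage);
    } finally {
      mlImage.close();  // MlImage is reference-counted; release it when no longer needed.
    }
  }
}
```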
public List<Segmentation> segment(TensorImage image, ImageProcessingOptions options)

Parameters
  image: a UINT8 TensorImage object that represents an RGB or YUV image
  options: the options to configure how to preprocess the image

Returns
  results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned; the result is stored in a List for later extension to, e.g., instance segmentation models, which may return one segmentation per object.
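
A sketch of passing preprocessing options. The orientation value is only an example, and the ImageProcessingOptions builder from org.tensorflow.lite.task.core.vision is assumed; verify against the library version in use.

```java
import android.graphics.Bitmap;
import java.util.List;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.task.core.vision.ImageProcessingOptions;
import org.tensorflow.lite.task.core.vision.ImageProcessingOptions.Orientation;
import org.tensorflow.lite.task.vision.segmenter.ImageSegmenter;
import org.tensorflow.lite.task.vision.segmenter.Segmentation;

final class SegmentWithPreprocessingExample {
  /** Segments a bitmap whose underlying image was captured rotated (example orientation). */
  static List<Segmentation> segment(ImageSegmenter segmenter, Bitmap bitmap) {
    ImageProcessingOptions options =
        ImageProcessingOptions.builder()
            .setOrientation(Orientation.RIGHT_TOP)  // example EXIF-style orientation value
            .build();
    return segmenter.segment(TensorImage.fromBitmap(bitmap), options);
  }
}
```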
public List<Segmentation> segment(MlImage image, ImageProcessingOptions options)

Parameters
  image: an MlImage to segment
  options: the options to configure how to preprocess the image

Returns
  results of performing image segmentation. Note that, at the time of writing, a single Segmentation element is expected to be returned; the result is stored in a List for later extension to, e.g., instance segmentation models, which may return one segmentation per object.