class TensorflowLite::Interpreter

Overview

The Interpreter loads a model and lets you run (or "interpret") it, i.e., use it to make predictions from input data.

Defined in:

tensorflow_lite/interpreter.cr

Constructor Detail

def self.new(model : Model, options : InterpreterOptions) #

Provide the model and options required for inference.
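
For example, construction might look like the sketch below. The require path, the Model constructor taking a .tflite file path, and the no-argument InterpreterOptions constructor are assumptions not documented on this page.

require "tensorflow_lite" # assumed require path

# assumption: Model can be loaded from a .tflite file on disk
model = TensorflowLite::Model.new("./mobilenet_v2.tflite")

# assumption: options can be created with defaults
options = TensorflowLite::InterpreterOptions.new

interpreter = TensorflowLite::Interpreter.new(model, options)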



Instance Method Detail

def input_tensor(index : Int) : Tensor #

Returns the requested input tensor, used to load and manipulate input data.
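
A minimal sketch of fetching the first input tensor; how data is actually written into the returned Tensor depends on the Tensor API, which this page does not cover.

input = interpreter.input_tensor(0)
# populate `input` with preprocessed data via the Tensor API before invoking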


def input_tensor_count : Int32 #

The number of input tensors used to feed data into the model.
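
The count pairs naturally with #input_tensor to enumerate every input, as in this sketch using only the methods documented here.

interpreter.input_tensor_count.times do |index|
  tensor = interpreter.input_tensor(index)
  # inspect or populate each input tensor here
end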


def inspect(io : IO) : Nil #
Description copied from class Reference

Appends a String representation of this object which includes its class name, its object address and the values of all instance variables.

class Person
  def initialize(@name : String, @age : Int32)
  end
end

Person.new("John", 32).inspect # => #<Person:0x10fd31f20 @name="John", @age=32>

def invoke : Status #

Runs the model and returns the result status.

NOTE: the results are stored in the output tensors.
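
A sketch of checking the status after running the model; the Status::Ok member is an assumption based on the underlying TensorFlow Lite status codes and is not documented on this page.

status = interpreter.invoke
if status == TensorflowLite::Status::Ok # assumption: an Ok member exists
  output = interpreter.output_tensor(0)
  # read the prediction from `output`
else
  puts "inference failed: #{status}"
end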


def invoke! #

Runs the model, processing the input tensors and updating the output tensors.
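
Presumably the bang variant raises when invocation fails, allowing a more compact flow; this is an assumption based on Crystal naming conventions rather than anything stated on this page.

interpreter.invoke!                   # assumed to raise on failure
output = interpreter.output_tensor(0) # output tensors now hold the results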


def model : Model #

The model this interpreter is running.


def options : InterpreterOptions #

The options used to initialize this interpreter.


def output_tensor(index : Int) : Tensor #

Returns the requested output tensor for extracting results.


def output_tensor_count : Int32 #

The number of output tensors, used to obtain the results of an invocation.
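
The count pairs with #output_tensor to read every result, as in this sketch using only the methods documented here.

interpreter.output_tensor_count.times do |index|
  output = interpreter.output_tensor(index)
  # extract this output tensor's results here
end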

