models.utils

Utilities for the entire package.

Module Contents

Functions

_to_unicode(x)
save_spec(spec,filename) Save a protobuf model specification to file.
load_spec(filename) Load a protobuf model specification from file
_get_nn_layers(spec) Returns a list of neural network layers if the model contains any.
_fp32_to_reversed_fp16_byte_array(fp32_arr)
_fp32_to_fp16_byte_array(fp32_arr)
_wp_to_fp16wp(wp)
_convert_nn_spec_to_half_precision(spec)
convert_neural_network_spec_weights_to_fp16(fp_spec)
convert_neural_network_weights_to_fp16(full_precision_model) Utility function to convert a full precision (float) MLModel to a half precision MLModel (float16).
_get_model(spec) Utility to get the model and the data.
evaluate_regressor(model,data,target="target",verbose=False) Evaluate a CoreML regression model and compare against predictions from the original framework.
evaluate_classifier(model,data,target="target",verbose=False) Evaluate a CoreML classifier model and compare against predictions from the original framework.
evaluate_classifier_with_probabilities(model,data,probabilities="probabilities",verbose=False) Evaluate a classifier specification for testing.
rename_feature(spec,current_name,new_name,rename_inputs=True,rename_outputs=True) Rename a feature in the specification.
_sanitize_value(x) Performs cleaning steps on the data so various type comparisons can be performed correctly.
_element_equal(x,y) Performs a robust equality test between elements.
evaluate_transformer(model,input_data,reference_output,verbose=False) Evaluate a transformer specification for testing.
has_custom_layer(spec) Returns true if the given protobuf specification has a custom layer, and false otherwise.
get_custom_layer_names(spec) Returns a set of the unique className fields of custom layers that appear in the given protobuf spec.
get_custom_layers(spec) Returns a list of all neural network custom layers in the spec.
replace_custom_layer_name(spec,oldname,newname) Substitutes newname for oldname in the className field of custom layers. If there are no custom layers, or no layers with className equal to oldname, the spec is unchanged.
macos_version() Returns macOS version as a tuple of integers, making it easy to do proper version comparisons. On non-Macs, returns an empty tuple.
_get_feature(spec,feature_name)
_get_input_names(spec) Returns a list of the names of the inputs to this model.
_to_unicode(x)
save_spec(spec, filename)

Save a protobuf model specification to file.

spec: Model_pb
Protobuf representation of the model
filename: str
File path where the spec gets saved.
>>> coremltools.utils.save_spec(spec, 'HousePricer.mlmodel')

See also: load_spec

load_spec(filename)

Load a protobuf model specification from file.

filename: str
Location on disk (a valid filepath) from which the file is loaded as a protobuf spec.
model_spec: Model_pb
Protobuf representation of the model
>>> spec = coremltools.utils.load_spec('HousePricer.mlmodel')

See also: save_spec

_get_nn_layers(spec)

Returns a list of neural network layers if the model contains any.

spec: Model_pb
A model protobuf specification.
[NN layer]
A list of all layers (including layers from elements of a pipeline).
_fp32_to_reversed_fp16_byte_array(fp32_arr)
_fp32_to_fp16_byte_array(fp32_arr)
_wp_to_fp16wp(wp)
_convert_nn_spec_to_half_precision(spec)
convert_neural_network_spec_weights_to_fp16(fp_spec)
convert_neural_network_weights_to_fp16(full_precision_model)

Utility function to convert a full precision (float) MLModel to a half precision MLModel (float16).

full_precision_model: MLModel
Model to be converted to half precision. Currently, conversion is supported only for neural network models. If a pipeline model is passed in, all neural network models embedded within it will be converted.
model: MLModel
The converted half precision MLModel.
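At its core, the conversion re-encodes each float32 weight buffer as float16 bytes. A minimal NumPy sketch of that idea (an illustration, not coremltools' actual implementation; the library's private helpers operate on protobuf weight params):

```python
import numpy as np

# Hypothetical float32 weight values, as might appear in a layer's weight buffer.
fp32_arr = np.array([0.1, 1.5, -2.25], dtype=np.float32)

# Convert to half precision and serialize to raw bytes (2 bytes per value).
fp16_bytes = fp32_arr.astype(np.float16).tobytes()
print(len(fp16_bytes))  # 3 values * 2 bytes each -> 6

# Round-trip to observe precision loss: 1.5 and -2.25 are exactly
# representable in fp16, while 0.1 is not.
restored = np.frombuffer(fp16_bytes, dtype=np.float16).astype(np.float32)
```

The round-trip makes the trade-off visible: halving the weight storage keeps exactly representable values intact but perturbs values like 0.1 slightly.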
_get_model(spec)

Utility to get the model and the data.

evaluate_regressor(model, data, target="target", verbose=False)

Evaluate a CoreML regression model and compare against predictions from the original framework (for testing correctness of conversion)

model: [str | MLModel]
File path from which to load the MLModel, or an already-loaded MLModel instance.
data: [str | Dataframe]
Test data on which to evaluate the models (dataframe, or path to a .csv file).
target: str
Name of the column in the dataframe that must be interpreted as the target column.
verbose: bool
Set to true for a more verbose output.

See also: evaluate_classifier

>>> metrics = coremltools.utils.evaluate_regressor(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, "rmse": 0.0, "max_error": 0.0}
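For reference, the reported metrics follow the usual definitions and can be reproduced by hand. A sketch in plain Python (illustrative values, not the library's code):

```python
import math

# Hypothetical reference values (from the original framework) and
# predictions from the converted CoreML model.
reference = [2.0, 3.5, 5.0]
predicted = [2.1, 3.4, 5.0]

errors = [p - r for p, r in zip(predicted, reference)]

# Root-mean-square error over all samples.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

# Largest absolute deviation across samples.
max_error = max(abs(e) for e in errors)

metrics = {"samples": len(reference), "rmse": rmse, "max_error": max_error}
```

A converted model that matches the original framework exactly yields rmse and max_error of 0.0, as in the example output above.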
evaluate_classifier(model, data, target="target", verbose=False)

Evaluate a CoreML classifier model and compare against predictions from the original framework (for testing correctness of conversion). Use this evaluation for models that don’t deal with probabilities.

model: [str | MLModel]
File path from which to load the MLModel, or an already-loaded MLModel instance.
data: [str | Dataframe]
Test data on which to evaluate the models (dataframe, or path to a csv file).
target: str
Column to interpret as the target column
verbose: bool
Set to true for a more verbose output.

See also: evaluate_regressor, evaluate_classifier_with_probabilities

>>> metrics = coremltools.utils.evaluate_classifier(spec, 'data_and_predictions.csv', 'target')
>>> print(metrics)
{"samples": 10, "num_errors": 0}
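Unlike the regression metrics, the classifier report counts outright label disagreements. A sketch of the idea with illustrative labels (not the library's code):

```python
# Hypothetical class labels from the original framework and from the
# converted CoreML model; num_errors counts the disagreements.
reference = ["cat", "dog", "cat", "bird"]
predicted = ["cat", "dog", "dog", "bird"]

num_errors = sum(1 for p, r in zip(predicted, reference) if p != r)
metrics = {"samples": len(reference), "num_errors": num_errors}
```

A lossless conversion yields num_errors of 0, as in the example output above.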
evaluate_classifier_with_probabilities(model, data, probabilities="probabilities", verbose=False)

Evaluate a classifier specification for testing.

model: [str | MLModel]
File path from which to load the MLModel, or an already-loaded MLModel instance.
data: [str | Dataframe]
Test data on which to evaluate the models (dataframe, or path to a csv file).
probabilities: str
Column to interpret as the probabilities column
verbose: bool
Set to true for a more verbose output.
rename_feature(spec, current_name, new_name, rename_inputs=True, rename_outputs=True)

Rename a feature in the specification.

spec: Model_pb
The specification containing the feature to rename.
current_name: str
Current name of the feature. If this feature doesn’t exist, the rename is a no-op.
new_name: str
New name of the feature.
rename_inputs: bool
Search for current_name only in the input features (i.e., ignore output features).
rename_outputs: bool
Search for current_name only in the output features (i.e., ignore input features).
# In-place rename of spec
>>> coremltools.utils.rename_feature(spec, 'old_feature', 'new_feature_name')
_sanitize_value(x)

Performs cleaning steps on the data so various type comparisons can be performed correctly.

_element_equal(x, y)

Performs a robust equality test between elements.
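A robust equality test of this kind typically tolerates floating-point rounding and recurses into sequences. A sketch of the idea (a hypothetical illustration, not the private function's actual implementation):

```python
import math

def element_equal(x, y, tol=1e-5):
    """Hypothetical robust comparison: numbers compare within a tolerance,
    sequences compare element-wise, everything else compares exactly."""
    if isinstance(x, (int, float)) and isinstance(y, (int, float)):
        return math.isclose(x, y, rel_tol=tol, abs_tol=tol)
    if isinstance(x, (list, tuple)) and isinstance(y, (list, tuple)):
        return len(x) == len(y) and all(
            element_equal(a, b, tol) for a, b in zip(x, y)
        )
    return x == y
```

Tolerance-based comparison matters here because fp16 conversion and framework differences introduce small numeric drift that strict `==` would flag as errors.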

evaluate_transformer(model, input_data, reference_output, verbose=False)

Evaluate a transformer specification for testing.

model: [str | MLModel]
File path from which to load the MLModel, or an already-loaded MLModel instance.
input_data: list[dict]
Test data on which to evaluate the models.
reference_output: list[dict]
Expected results for the model.
verbose: bool
Set to true for a more verbose output.
>>> input_data = [{'input_1': 1, 'input_2': 2}, {'input_1': 3, 'input_2': 3}]
>>> expected_output = [{'input_1': 2.5, 'input_2': 2.0}, {'input_1': 1.3, 'input_2': 2.3}]
>>> metrics = coremltools.utils.evaluate_transformer(scaler_spec, input_data, expected_output)

See also: evaluate_regressor, evaluate_classifier

has_custom_layer(spec)

Returns true if the given protobuf specification has a custom layer, and false otherwise.

spec: mlmodel spec

True if the protobuf specification contains a neural network with a custom layer, False otherwise.

get_custom_layer_names(spec)

Returns a set of the unique className fields of custom layers that appear in the given protobuf spec.

spec: mlmodel spec

set(str) A set of unique className fields of custom layers that appear in the model.

get_custom_layers(spec)

Returns a list of all neural network custom layers in the spec.

spec: mlmodel spec

[NN layer] A list of custom layer implementations

replace_custom_layer_name(spec, oldname, newname)

Substitutes newname for oldname in the className field of custom layers. If there are no custom layers, or no layers with className=oldname, then the spec is unchanged.

spec: mlmodel spec

oldname: str
The custom layer className to be replaced.
newname: str
The new className value to replace oldname.

An mlmodel spec.

macos_version()

Returns macOS version as a tuple of integers, making it easy to do proper version comparisons. On non-Macs, it returns an empty tuple.
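Returning the version as a tuple of integers means ordinary Python tuple comparison does the right thing. A sketch with an illustrative value (the hard-coded tuple stands in for the function's return value):

```python
# Tuples compare lexicographically, so version checks read naturally.
version = (10, 13, 2)  # e.g. what macos_version() might return on macOS 10.13.2

supports_feature = version >= (10, 13)   # at least High Sierra
older_than_mojave = version < (10, 14)

# On non-Macs the function returns (), and an empty tuple compares less
# than any non-empty tuple, so version-gated checks fail safely.
non_mac = () >= (10, 13)
```

This is why a tuple is preferable to a version string: `"10.9" < "10.13"` is False under string comparison, while `(10, 9) < (10, 13)` is True.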

_get_feature(spec, feature_name)
_get_input_names(spec)

Returns a list of the names of the inputs to this model.

spec: Model_pb
The model protobuf specification.
[str]
A list of input feature names.