Error when converting a TF model to a TFLite model

Hi all!

I am currently building a model to run on my Nano 33 BLE Sense board to predict the weather from humidity, pressure, and temperature measurements; I have 5 output classes.
I trained it on a Kaggle dataset.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.utils import to_categorical
    from tensorflow.keras.layers.experimental import preprocessing
    
    # One-hot encode the 'Summary' column as labels; the rest are features
    df_labels = to_categorical(df.pop('Summary'))
    df_features = np.array(df)
    
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(df_features, df_labels, test_size=0.15)
    
    normalize = preprocessing.Normalization()
    normalize.adapt(X_train)
    
    
    activ_func = 'gelu'
    model = tf.keras.Sequential([
                 normalize,
                 tf.keras.layers.Dense(units=6, input_shape=(3,)),
                 tf.keras.layers.Dense(units=100,activation=activ_func),
                 tf.keras.layers.Dense(units=100,activation=activ_func),
                 tf.keras.layers.Dense(units=100,activation=activ_func),
                 tf.keras.layers.Dense(units=100,activation=activ_func),
                 tf.keras.layers.Dense(units=5, activation='softmax')
    ])
    
    model.compile(optimizer='adam',#tf.keras.optimizers.Adagrad(lr=0.001),
                 loss='categorical_crossentropy',metrics=['acc'])
    model.summary()
    model.fit(x=X_train,y=y_train,verbose=1,epochs=15,batch_size=32, use_multiprocessing=True)

Once the model is trained, I want to convert it into a TFLite model. When I run the conversion, I get the following error:

    # Convert the model to the TensorFlow Lite format without quantization
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()
    
    # Save the model to disk
    open("my_model.tflite", "wb").write(tflite_model)
      
    import os
    basic_model_size = os.path.getsize("my_model.tflite")
    print("Model is %d bytes" % basic_model_size)




    <unknown>:0: error: failed while converting: 'main': Ops that can be supported by the flex runtime (enabled via setting the -emit-select-tf-ops flag):
    	tf.Erf {device = ""}

For your information I use google colab to design the model.

If anyone has any idea or solution to this issue, I would be glad to hear it !


At a high level, the "Ops that can be supported" error means that there is something in your model that TensorFlow supports but TensorFlow Lite (or TensorFlow Lite for Microcontrollers) does not support. Remember that each of those is a subset of the former. I'd need to check on my laptop, but my gut is that the "gelu" activation function is not supported yet; it is computed via the `tf.Erf` op, which is exactly the op your error message lists. Try changing it to "relu" and see if that does the trick (the Dense layers themselves should be fine).
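A minimal sketch of that suggested fix, assuming a cut-down version of the architecture from the question (the layer sizes here are illustrative, not the full model): swap the activation and re-run the same converter call.

```python
import tensorflow as tf

# Same style of model as in the question, but with 'relu' instead of 'gelu'.
# relu lowers to a builtin TFLite op, so no tf.Erf flex op should be emitted.
activ_func = 'relu'
model = tf.keras.Sequential([
    tf.keras.layers.Dense(units=6, input_shape=(3,)),
    tf.keras.layers.Dense(units=100, activation=activ_func),
    tf.keras.layers.Dense(units=5, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# The conversion that previously failed on tf.Erf
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
print("Model is %d bytes" % len(tflite_model))
```

If you really do need gelu, plain TFLite (but not TFLite Micro) can fall back to the flex runtime, as the error message hints: add `tf.lite.OpsSet.SELECT_TF_OPS` alongside `tf.lite.OpsSet.TFLITE_BUILTINS` in `converter.target_spec.supported_ops`. That won't help on the Nano 33 BLE Sense, though, since the microcontroller runtime has no flex delegate.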


Thanks for your answer!! I'll try it!