I would like to train a custom object detector for iOS and Android. It has to detect objects of 10 classes. I have an issue with exporting a KerasCV model to CoreML. I prepared data to train my own object detector; training goes well and on the Python side everything is recognised correctly, but after conversion to CoreML I get weird results.
```python
model = keras_cv.models.RetinaNet.from_preset(
    "mobilenet_v3_large_imagenet",
    num_classes=len(class_mapping),
    # For more info on supported bounding box formats, visit
    # https://keras.io/api/keras_cv/bounding_box/
    bounding_box_format="xyxy",
)
model.compile(
    classification_loss="focal",
    box_loss="smoothl1",
    optimizer=optimizer,
    metrics=None,
)
model.fit(
    train_ds.ragged_batch(4),
    validation_data=eval_ds.ragged_batch(4),
    epochs=40,
    callbacks=[tensorboard_callback, VisualizeDetections(), model_checkpoint_callback],
)
```
I'm using MobileNet V3 and the output should look like this:
```
boxes      [num_detections, 4]
confidence [num_detections, 10]
classes    [num_detections]
```
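For context, the relation between these three arrays (assuming the confidence array holds one score per class) can be sketched with numpy: the per-detection class is the argmax over the 10 class scores, and a scalar confidence is the corresponding max. The values below are made up for illustration:

```python
import numpy as np

# Hypothetical decoded output for num_detections = 3 and 10 classes.
confidence = np.zeros((3, 10), dtype=np.float32)
confidence[0, 2] = 0.9
confidence[1, 7] = 0.6
confidence[2, 0] = 0.4

classes = confidence.argmax(axis=1)  # class index per detection
scores = confidence.max(axis=1)      # scalar confidence per detection
print(classes)  # [2 7 0]
```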
After converting the model to CoreML with this code:
```python
outputs = [
    ct.TensorType(name="Identity", dtype=np.float32),
    ct.TensorType(name="Identity_1", dtype=np.float32),
]
converted_model = ct.convert(
    model,
    inputs=[ct.ImageType(shape=(1, 640, 640, 3))],
    outputs=outputs,
    convert_to="mlprogram",
)
print(converted_model.output_description)

# save the converted model
converted_model.save("converted.mlpackage")
```
In the output I get two arrays (boxes and confidence) of size 1 × 76725 × 4 and 1 × 76725 × 10. I know this output has to be passed through NMS, but before that I tried to get some results, and every value in the confidence array is negative. Why? What can I do to get real confidence values out of the CoreML model?
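One plausible reading of the all-negative values: a RetinaNet head trained with a focal classification loss emits raw logits, and KerasCV applies the sigmoid and NMS inside its prediction decoder (used by `model.predict`), which the exported graph may not include. If that is the case here, the post-processing can be reproduced on the raw arrays; a minimal numpy sketch with made-up values standing in for the (1, 76725, 4) / (1, 76725, 10) outputs, assuming xyxy boxes:

```python
import numpy as np

def sigmoid(x):
    # Class logits -> per-class probabilities in [0, 1];
    # negative logits simply mean probability < 0.5.
    return 1.0 / (1.0 + np.exp(-x))

def nms_xyxy(boxes, scores, iou_thresh=0.5):
    """Greedy NMS over (x1, y1, x2, y2) boxes; returns kept indices."""
    order = np.argsort(-scores)
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        rest = order[1:]
        # Intersection of the best box with all remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]  # drop boxes overlapping the kept one
    return np.array(kept, dtype=int)

# Made-up raw logits and boxes: two overlapping detections plus one separate.
raw_logits = np.array([[3.0, -2.0], [2.0, -1.0], [-4.0, 1.5]], dtype=np.float32)
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=np.float32)

probs = sigmoid(raw_logits)    # logits -> probabilities
scores = probs.max(axis=1)     # best class score per anchor
keep = nms_xyxy(boxes, scores) # suppress overlapping duplicates
print(keep)  # [0 2]
```

Whether the exported CoreML graph really stops at the logits is worth verifying by comparing its arrays against the Python-side `model.predict` output on the same image.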