How To Transform Byte[] (Decoded As PNG Or JPG) To TensorFlow's Tensor
I'm trying to use TensorFlowSharp in a project in Unity. The problem I'm facing is that for the transform you usually use a second graph to convert the input into a tensor. The
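For context, the "second graph" approach mentioned above, as it appears in the TensorFlowSharp examples, looks roughly like this. This is a sketch modeled on the library's ExampleInceptionInference sample; CreateTensorFromImageBytes and the mean/scale parameters are illustrative names, not from the question:

using TensorFlow;

static TFTensor CreateTensorFromImageBytes(byte[] contents, int size, float mean, float scale)
{
    // The raw JPEG bytes go in as a string tensor; a throwaway graph
    // decodes, resizes and normalizes them.
    var jpeg = TFTensor.CreateString(contents);
    using (var graph = new TFGraph())
    {
        TFOutput input = graph.Placeholder(TFDataType.String);
        TFOutput output = graph.Div(
            x: graph.Sub(
                x: graph.ResizeBilinear(
                    images: graph.ExpandDims(
                        input: graph.Cast(graph.DecodeJpeg(contents: input, channels: 3), TFDataType.Float),
                        dim: graph.Const(0)),
                    size: graph.Const(new int[] { size, size })),
                y: graph.Const(mean)),
            y: graph.Const(scale));
        using (var session = new TFSession(graph))
        {
            // Run the helper graph once to produce a normalized [1, size, size, 3] tensor.
            return session.Run(inputs: new[] { input }, inputValues: new[] { jpeg }, outputs: new[] { output })[0];
        }
    }
}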
Solution 1:
Instead of feeding the byte array and then using DecodeJpeg, you can feed the actual float array, which you can get like this (the snippet below is from the Android/Java side, but the same per-pixel normalization applies in C#):
// Allocate one int (packed ARGB) per pixel in, and three floats (R, G, B) per pixel out.
float[] floatValues = new float[inputSize * inputSize * 3];
int[] intValues = new int[inputSize * inputSize];
bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
for (int i = 0; i < intValues.length; ++i) {
    final int val = intValues[i];
    // Unpack the R, G and B channels and normalize each with the model's mean/std.
    floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;
    floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;
    floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;
}
Tensor<Float> input = Tensors.create(floatValues);
In order to use "Tensors.create()" you need at least TensorFlow version 1.4.
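On the C#/TensorFlowSharp side, the same idea would be to fill a float[] from your texture's pixels and wrap it in a TFTensor yourself. A minimal sketch, assuming texture has already been cropped and scaled to inputSize x inputSize, and with ToTensor as an illustrative helper name:

using TensorFlow;
using UnityEngine;

static TFTensor ToTensor(Texture2D texture, int inputSize, float imageMean, float imageStd)
{
    // Color32 exposes the channels directly, so no bit-shifting is needed.
    Color32[] pixels = texture.GetPixels32();
    float[] floatValues = new float[inputSize * inputSize * 3];
    for (int i = 0; i < pixels.Length; ++i)
    {
        floatValues[i * 3 + 0] = (pixels[i].r - imageMean) / imageStd;
        floatValues[i * 3 + 1] = (pixels[i].g - imageMean) / imageStd;
        floatValues[i * 3 + 2] = (pixels[i].b - imageMean) / imageStd;
    }
    // Note: GetPixels32 returns rows bottom-up, so a vertical flip may be
    // needed depending on how the model was trained.
    // Shape [1, inputSize, inputSize, 3] is the usual NHWC layout for image models.
    return TFTensor.FromBuffer(new TFShape(1, inputSize, inputSize, 3), floatValues, 0, floatValues.Length);
}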
Solution 2:
You probably didn't crop and scale your image before putting it into @sladomic's function.
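For the scaling part in Unity, one common pattern is to blit through a temporary RenderTexture and read the pixels back. A sketch; ScaleTexture is a hypothetical helper, and it ignores aspect-ratio cropping:

using UnityEngine;

static Texture2D ScaleTexture(Texture2D source, int size)
{
    // Blit the source into a temporary RenderTexture of the target size,
    // then read the scaled pixels back into a new Texture2D.
    RenderTexture rt = RenderTexture.GetTemporary(size, size);
    Graphics.Blit(source, rt);
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = rt;
    Texture2D result = new Texture2D(size, size, TextureFormat.RGBA32, false);
    result.ReadPixels(new Rect(0, 0, size, size), 0, 0);
    result.Apply();
    RenderTexture.active = previous;
    RenderTexture.ReleaseTemporary(rt);
    return result;
}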
I managed to hack together a sample of using TensorFlowSharp in Unity for object classification. It works with the model from the official TensorFlow Android example, but also with my self-trained MobileNet model. All you need is to replace the model and set your mean and std, which in my case were all equal to 224.
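Once you have the input tensor, running the model with TensorFlowSharp looks roughly like this. A sketch: the file path and the op names "input" and "output" are assumptions, so check the actual names in your own graph:

using System.IO;
using TensorFlow;

// Load the frozen model and run one inference.
// Assumes `tensor` is the input TFTensor built above.
var graph = new TFGraph();
graph.Import(File.ReadAllBytes("frozen_graph.pb")); // hypothetical path
using (var session = new TFSession(graph))
{
    var runner = session.GetRunner();
    runner.AddInput(graph["input"][0], tensor).Fetch(graph["output"][0]);
    TFTensor[] results = runner.Run();
    // For a classifier, the first result is typically a [1, numClasses] probability array.
    var probabilities = (float[,])results[0].GetValue(jagged: false);
}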