Using a batch size greater than 1 with TensorFlow's C++ API


I have a TensorFlow model trained in Python and frozen with the freeze_graph script. I've successfully loaded the model in C++ and run inference on a single image. However, it seems that freeze_graph fixes the batch size to a single image at a time, since I'm unable to pass the model a tensor containing more than one image.

Does anyone know a way of changing this? I haven't been able to locate where in the script this happens.

Thanks!

Edit:

Okay, I scrapped Keras to eliminate any black magic it might be doing, and set a batch size of 16 when defining the network directly in TensorFlow.

If I print the graph def, the placeholder has this shape:

node {
  name: "inputs"
  op: "Placeholder"
  attr {
    key: "dtype"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "shape"
    value {
      shape {
        dim {
          size: 16
        }
        dim {
          size: 50
        }
        dim {
          size: 50
        }
        dim {
          size: 3
        }
      }
    }
  }
}
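One way to avoid baking a fixed batch size into the graph in the first place is to declare the placeholder's leading dimension as None, so the frozen graph accepts any batch size. A minimal sketch, assuming TF1-style graph construction via tf.compat.v1 (the placeholder name "inputs" mirrors the graph def above):

```python
# Sketch: a placeholder whose batch dimension is left as None, so the
# graph (and the frozen graph derived from it) accepts any batch size.
# Assumes TF1-style graph construction through tf.compat.v1.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # tf.placeholder requires graph mode

# None = variable batch size; each image is 50 x 50 x 3.
inputs = tf.placeholder(tf.float32, shape=[None, 50, 50, 3], name="inputs")

print(inputs.shape.as_list())  # [None, 50, 50, 3]
```

With this definition, the frozen graph's shape attr records an unknown leading dimension instead of size: 16, and a 1 x 50 x 50 x 3 or 16 x 50 x 50 x 3 tensor can both be fed from C++.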

However, when I attempt to load and run the model in C++ with a tensor of shape 16 x 50 x 50 x 3, I get the error:

tensorflow/core/framework/tensor.cc:433] Check failed: 1 == NumElements() (1 vs. 16)Must have a one element tensor

Is something happening somewhere when I freeze the graph?

This turned out to be a stupid mistake on my part. When getting the output of the graph, I called .scalar&lt;float&gt;() on it. That worked fine when I had one input image, and therefore one output, but you can't view a vector as a scalar. Changing it to .flat&lt;float&gt;() fixed the issue.
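The fix on the C++ side looks roughly like this. This is a sketch, not the exact code from the question: the session variable and the "output" node name are assumptions, and it must be linked against TensorFlow's C++ library:

```cpp
// Sketch of reading a batched output with TensorFlow's C++ API.
// Assumes `session` is an already-loaded tensorflow::Session and that
// the graph's output node is named "output" (both assumptions).
#include <vector>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"

void ReadBatchedOutput(tensorflow::Session* session,
                       const tensorflow::Tensor& batch_input) {
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"inputs", batch_input}},
                           {"output"}, {}, &outputs));

  // .scalar<float>() CHECK-fails unless the tensor has exactly one
  // element ("1 vs. 16" above). .flat<float>() instead views a tensor
  // of any shape as a 1-D array of all its elements.
  auto values = outputs[0].flat<float>();
  for (int i = 0; i < values.size(); ++i) {
    // values(i) walks every element across the whole batch.
  }
}
```

If per-image structure is needed rather than one flat array, the tensor can also be viewed with a rank-preserving accessor such as .tensor&lt;float, 2&gt;() for a batch x classes output.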

