Specify either CPU or GPU for multiple models (TensorFlow Java API)

Alex:

I am using the TensorFlow Java API (1.8.0) to load multiple models (in different sessions). Those models are loaded from .pb files using the SavedModelBundle.load(...) method; the .pb files were obtained by saving Keras models.

Let's say that I want to load 3 models A, B, and C. To do that, I implemented a Java Model class:

import java.io.Closeable;

import org.tensorflow.SavedModelBundle;
import org.tensorflow.Session;
import org.tensorflow.Tensor;

public class Model implements Closeable {

    private String inputName;
    private String outputName;
    private Session session;
    private int inputSize;

    public Model(String modelDir, String input_name, String output_name, int inputSize) {
        // Load the SavedModel exported from Keras and keep its session for inference.
        SavedModelBundle bundle = SavedModelBundle.load(modelDir, "serve");
        this.inputName = input_name;
        this.outputName = output_name;
        this.inputSize = inputSize;
        this.session = bundle.session();
    }

    @Override
    public void close() {
        session.close();
    }

    public Tensor predict(Tensor t) {
        // Feed the input, fetch the output, and return the first (only) result tensor.
        return session.runner().feed(inputName, t).fetch(outputName).run().get(0);
    }
}

With this class I can easily instantiate 3 Model objects corresponding to my A, B and C models and make predictions with all 3 models in the same Java program. I also noticed that if a GPU is available, all 3 models are loaded onto it.
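
For reference, a minimal sketch of how I use the class (the model directories, tensor names, and input shape below are just placeholders, not my real values):

try (Model a = new Model("/path/to/A", "input_a", "output_a", 224);
     Model b = new Model("/path/to/B", "input_b", "output_b", 224);
     Model c = new Model("/path/to/C", "input_c", "output_c", 224);
     // Dummy 1x224 float input; in practice I build it from my real data.
     Tensor input = Tensor.create(new float[1][224])) {
    Tensor outA = a.predict(input);
    Tensor outB = b.predict(input);
    Tensor outC = c.predict(input);
    // ... read the results, then close the output tensors as well
    outA.close();
    outB.close();
    outC.close();
}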

However, I would like only model A to run on the GPU and to force the other two to run on the CPU.

By reading the documentation and diving into the source code, I didn't find a way to do so. I tried defining a new ConfigProto that sets the visible device list to empty and instantiating a new Session on the bundle's graph with it, but it didn't work (see code below).

// ConfigProto and GPUOptions are the org.tensorflow.framework protobuf classes.
public Model(String modelDir, String input_name, String output_name, int inputSize) {
    SavedModelBundle bundle = SavedModelBundle.load(modelDir, "serve");
    this.inputName = input_name;
    this.outputName = output_name;
    this.inputSize = inputSize;
    // Attempt to hide all GPUs by passing an empty visible device list,
    // then open a new Session on the bundle's graph with that config.
    ConfigProto configProto = ConfigProto.newBuilder()
            .setAllowSoftPlacement(false)
            .setGpuOptions(GPUOptions.newBuilder().setVisibleDeviceList("").build())
            .build();
    this.session = new Session(bundle.graph(), configProto.toByteArray());
}

When I load the model this way, it still uses the available GPU. Do you have any solution to this problem?

Thank you for your answer.

Remzouz:

According to this issue, the newer source code fixes this problem. Unfortunately, you will have to build TensorFlow from source following these instructions.

Then you can test:

ConfigProto configProto = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true) // allow fewer GPUs than configured
        .setGpuOptions(GPUOptions.newBuilder().setPerProcessGpuMemoryFraction(0.01).build())
        .build();
SavedModelBundle bundle = SavedModelBundle.loader(modelDir)
        .withTags("serve")
        .withConfigProto(configProto.toByteArray())
        .load();
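
To apply this to the original question (A on GPU, B and C on CPU), something along these lines should work once you have a build where the loader accepts a ConfigProto. The model directories are placeholders, and I haven't verified that an empty visible_device_list is honored per session in every TensorFlow version, so treat this as a starting point rather than a guaranteed recipe:

// Config that hides all GPUs from the session; soft placement lets ops fall back to the CPU.
ConfigProto cpuOnly = ConfigProto.newBuilder()
        .setAllowSoftPlacement(true)
        .setGpuOptions(GPUOptions.newBuilder().setVisibleDeviceList("").build())
        .build();

// Model A: default config, so it can use the GPU.
SavedModelBundle bundleA = SavedModelBundle.loader("/models/A").withTags("serve").load();

// Models B and C: loaded with the CPU-only config.
SavedModelBundle bundleB = SavedModelBundle.loader("/models/B")
        .withTags("serve")
        .withConfigProto(cpuOnly.toByteArray())
        .load();
SavedModelBundle bundleC = SavedModelBundle.loader("/models/C")
        .withTags("serve")
        .withConfigProto(cpuOnly.toByteArray())
        .load();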
