python - Keras multiple model API

I would like to build a REST API service with Keras and two models (model A and model B). I found this example, but it works with just one model; I need something I can use like:

curl -X POST -F "image=@your_image.jpg" "http://localhost:5000/predictA"

curl -X POST -F "image=@your_image.jpg" "http://localhost:5000/predictB"

# keras_server.py

# Python program to expose a Keras ML model as a Flask REST API

# import the necessary modules
# (load_model is aliased so that our own load_model() below does
# not shadow keras.models.load_model and call itself recursively)
from keras.models import load_model as keras_load_model
from keras.preprocessing.image import img_to_array
from keras.applications import imagenet_utils
import tensorflow as tf
from PIL import Image
import numpy as np
import flask
import io

# Create the Flask application and initialize the Keras model
app = flask.Flask(__name__)
model = None
graph = None

# Function to load the model once at startup
def load_model():

    # global variables, to be used in the predict route
    global model, graph
    model = keras_load_model("modelAPath")
    # TF1-style: keep a handle to the default graph so the
    # request-handling threads can predict against it
    graph = tf.get_default_graph()
  
# Every ML/DL model has a specific format 
# of taking input. Before we can predict on 
# the input image, we first need to preprocess it. 
def prepare_image(image, target): 
    if image.mode != "RGB": 
        image = image.convert("RGB") 
      
    # Resize the image to the target dimensions 
    image = image.resize(target)  
      
    # PIL Image to Numpy array 
    image = img_to_array(image)  
      
    # Expand the shape of an array, 
    # as required by the Model 
    image = np.expand_dims(image, axis = 0)  
      
    # preprocess_input function is meant to 
    # adequate your image to the format the model requires 
    image = imagenet_utils.preprocess_input(image)  
  
    # return the processed image 
    return image 
  
# Now, we can predict the results.
@app.route("/predict", methods=["POST"])
def predict():
    data = {"success": False}  # dictionary to store the result

    # Check that an image was properly sent to our endpoint
    if flask.request.method == "POST":
        if flask.request.files.get("image"):
            image = flask.request.files["image"].read()
            image = Image.open(io.BytesIO(image))

            # Resize it to 224x224 pixels
            # (the required input dimensions for ResNet)
            image = prepare_image(image, target=(224, 224))

            # Predict within the graph captured at startup
            with graph.as_default():
                preds = model.predict(image)
                results = imagenet_utils.decode_predictions(preds)

            data["predictions"] = []
            for (ID, label, probability) in results[0]:
                r = {"label": label, "probability": float(probability)}
                data["predictions"].append(r)

            data["success"] = True

    # return the JSON response
    return flask.jsonify(data)
  
  
  
if __name__ == "__main__":
    print("* Loading Keras model and starting Flask server... "
          "please wait until the server has fully started")
    load_model()
    app.run()
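For reference, the JSON body the /predict route returns has the shape sketched below, built here with plain dicts and the stdlib json module. The label/probability values are made up for illustration; they are not real model output.

```python
import json

# Mirror of the structure the /predict route builds: a "success"
# flag plus a list of {"label", "probability"} records.
data = {"success": True, "predictions": []}
for label, probability in [("beagle", 0.92), ("basset", 0.05)]:
    data["predictions"].append(
        {"label": label, "probability": float(probability)})

# flask.jsonify(data) would serialize this same structure
body = json.dumps(data)
```

A client can therefore check `data["success"]` first and then iterate over `data["predictions"]`.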
question from:https://stackoverflow.com/questions/65882036/keras-multiple-model-api


1 Answer


You should just load the different models and add them to the application as global state, as indicated below:

# Create the Flask application and initialize both Keras models.
# In TF1-style code each model needs its own graph, and each model
# must be loaded *inside* its graph's context.
app = flask.Flask(__name__)

app.config['graphA'] = tf.Graph()
with app.config['graphA'].as_default():
    app.config['modelA'] = load_model("modelAPath")

app.config['graphB'] = tf.Graph()
with app.config['graphB'].as_default():
    app.config['modelB'] = load_model("modelBPath")

Then have a respective endpoint for each model, for instance:

@app.route("/predictA", methods=["POST"])
def predictA():
    # get the image data here (as in the question's predict()
    # route), feed it to model A, and return the JSON results
    with app.config['graphA'].as_default():
        preds = app.config['modelA'].predict(image)
    return flask.jsonify({"predictions": preds.tolist()})

@app.route("/predictB", methods=["POST"])
def predictB():
    # get the image data here, feed it to model B,
    # and return the JSON results
    with app.config['graphB'].as_default():
        preds = app.config['modelB'].predict(image)
    return flask.jsonify({"predictions": preds.tolist()})

Note that this assumes you want two separate endpoints; you could also do it with a single endpoint, using an extra POSTed form or JSON argument to select the model.
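The single-endpoint variant mentioned above can be sketched as a plain dictionary dispatch. This is a minimal illustration, not from the original answer: the lambdas are dummy stand-ins for `modelA.predict` / `modelB.predict`, and the names `MODELS` and `select_and_predict` are made up.

```python
# Dispatch table mapping a POSTed "model" field to a predictor.
# The lambdas stand in for the real Keras models' predict calls.
MODELS = {
    "A": lambda image: {"model": "A", "prediction": "cat"},
    "B": lambda image: {"model": "B", "prediction": "dog"},
}

def select_and_predict(model_name, image):
    """Pick a model by name and run it; reject unknown names."""
    try:
        predictor = MODELS[model_name]
    except KeyError:
        raise ValueError(f"unknown model {model_name!r}")
    return predictor(image)
```

Inside a single Flask route this would look like `select_and_predict(flask.request.form["model"], image)`, with the graph/context handling done per model as in the two-endpoint version.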

