
Commit 4843209

author hhsecond committed
tf example
1 parent 999fef6 commit 4843209

File tree

3 files changed

+319
-7
lines changed


ImageClassificationWithPytorch.ipynb

Lines changed: 1 addition & 1 deletion
Original file line number / Diff line number / Diff line change
@@ -286,7 +286,7 @@
286286
"name": "python",
287287
"nbconvert_exporter": "python",
288288
"pygments_lexer": "ipython3",
289-
"version": "3.8.7"
289+
"version": "3.7.6"
290290
}
291291
},
292292
"nbformat": 4,

ImageClassificationWithTensorflow.ipynb

Lines changed: 286 additions & 2 deletions
Original file line number / Diff line number / Diff line change
@@ -1,9 +1,293 @@
11
{
22
"cells": [
3+
{
4+
"cell_type": "markdown",
5+
"id": "0e9ffdce",
6+
"metadata": {},
7+
"source": [
8+
"# Image Classification with Tensorflow (1.x)\n",
9+
"\n",
10+
"This example is built with Tensorflow 1.5 (tensorflow 2.x support for RedisAI is can be implemented using graph freezing. Checkout the [documentation](https://github.com/RedisAI/RedisAI)). We first take a prebuilt model from Tensorflow hub and then push that to RedisAI for production. Towards the end of this example, we also show how the loaded model can be used for inference"
11+
]
12+
},
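The intro above mentions that Tensorflow 2.x models can be served by RedisAI after graph freezing. As a rough illustration (not part of this commit), a minimal sketch of freezing a TF 2.x Keras model with `convert_variables_to_constants_v2` might look like the following; the model choice, tensor shapes, and file names are illustrative assumptions.

```python
# Hedged sketch: freeze a TF 2.x Keras model into a constant GraphDef that the
# RedisAI TF backend can load. Model choice, shapes, and paths are illustrative.
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

model = tf.keras.applications.ResNet50(weights="imagenet")

# Wrap the model in a tf.function with a fixed input signature.
full_model = tf.function(lambda x: model(x))
concrete_func = full_model.get_concrete_function(
    tf.TensorSpec([1, 224, 224, 3], tf.float32, name="images"))

# "Freeze" the graph: convert all variables to constants.
frozen_func = convert_variables_to_constants_v2(concrete_func)

# Serialize the frozen GraphDef; the input/output node names of this graph are
# what you would later pass to RedisAI when storing the model.
tf.io.write_graph(frozen_func.graph.as_graph_def(),
                  logdir=".", name="resnet50_frozen.pb", as_text=False)
```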
13+
{
14+
"cell_type": "code",
15+
"execution_count": 1,
16+
"id": "26841334",
17+
"metadata": {},
18+
"outputs": [],
19+
"source": [
20+
"import tensorflow as tf\n",
21+
"import tensorflow_hub as hub\n",
22+
"import json\n",
23+
"import time\n",
24+
"from redisai import Client\n",
25+
"from ml2rt import load_model, load_script, save_tensorflow\n",
26+
"from skimage import io\n",
27+
"import os"
28+
]
29+
},
30+
{
31+
"cell_type": "markdown",
32+
"id": "9f501f18",
33+
"metadata": {},
34+
"source": [
35+
"## Downloading and save the model\n",
36+
"The pretrained model is downloaded from the tensorlfow hub. We then use a dummy input `image` to serialize the graph into the disk"
37+
]
38+
},
39+
{
40+
"cell_type": "code",
41+
"execution_count": 3,
42+
"id": "5fc2e4d4",
43+
"metadata": {},
44+
"outputs": [
45+
{
46+
"name": "stdout",
47+
"output_type": "stream",
48+
"text": [
49+
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
50+
]
51+
},
52+
{
53+
"name": "stderr",
54+
"output_type": "stream",
55+
"text": [
56+
"INFO:tensorflow:Saver not created because there are no variables in the graph to restore\n"
57+
]
58+
},
59+
{
60+
"name": "stdout",
61+
"output_type": "stream",
62+
"text": [
63+
"INFO:tensorflow:Froze 272 variables.\n"
64+
]
65+
},
66+
{
67+
"name": "stderr",
68+
"output_type": "stream",
69+
"text": [
70+
"INFO:tensorflow:Froze 272 variables.\n"
71+
]
72+
},
73+
{
74+
"name": "stdout",
75+
"output_type": "stream",
76+
"text": [
77+
"INFO:tensorflow:Converted 272 variables to const ops.\n"
78+
]
79+
},
80+
{
81+
"name": "stderr",
82+
"output_type": "stream",
83+
"text": [
84+
"INFO:tensorflow:Converted 272 variables to const ops.\n"
85+
]
86+
}
87+
],
88+
"source": [
89+
"url = 'https://tfhub.dev/google/imagenet/resnet_v2_50/classification/1'\n",
90+
"images = tf.placeholder(tf.float32, shape=(1, 224, 224, 3), name='images')\n",
91+
"module = hub.Module(url)\n",
92+
"logits = module(images)\n",
93+
"logits = tf.identity(logits, 'output')\n",
94+
"\n",
95+
"with tf.Session() as sess:\n",
96+
" sess.run([tf.global_variables_initializer()])\n",
97+
" save_tensorflow(sess, 'resnet50.pb', output=['output'])"
98+
]
99+
},
100+
{
101+
"cell_type": "markdown",
102+
"id": "c3d45a93",
103+
"metadata": {},
104+
"source": [
105+
"## Setup RedisAI\n",
106+
"This tutorial assumes you already have a RedisAI server running. The easiest way to setup one instance is using docker\n",
107+
"\n",
108+
"```\n",
109+
"docker run -p 6379:6379 redislabs/redisai:latest-cpu-x64-bionic\n",
110+
"```\n",
111+
"\n",
112+
"Take a look at this [quickstart](https://oss.redis.com/redisai/quickstart/) for more details. Here we setup the connection credentials and ping the server to verify we can talk "
113+
]
114+
},
115+
{
116+
"cell_type": "code",
117+
"execution_count": 4,
118+
"id": "2be4e169",
119+
"metadata": {},
120+
"outputs": [
121+
{
122+
"data": {
123+
"text/plain": [
124+
"True"
125+
]
126+
},
127+
"execution_count": 4,
128+
"metadata": {},
129+
"output_type": "execute_result"
130+
}
131+
],
132+
"source": [
133+
"REDIS_HOST = os.getenv(\"REDIS_HOST\", \"localhost\")\n",
134+
"REDIS_PORT = int(os.getenv(\"REDIS_PORT\", 6379))\n",
135+
"con = Client(host=REDIS_HOST, port=REDIS_PORT)\n",
136+
"con.ping()"
137+
]
138+
},
139+
{
140+
"cell_type": "markdown",
141+
"id": "ea511bfc",
142+
"metadata": {},
143+
"source": [
144+
"## Load model\n",
145+
"Next step is to load the model we trained above into RedisAI for serving. We are using a convinent package [ml2rt](https://pypi.org/project/ml2rt/) here for loading but it's not a mandatory dependency if you want to keep your `requirements.txt` small. Take a look at the `load_model` function. This will give us a binary blob of the model we have built above. We need to send this to RedisAI and also inform which backend we'd like to use and which device this should run on. We'll set the model on a key so we can reference this key later\n",
146+
"\n",
147+
"Note: If you want to run on GPU, take a look at the above quick start to setup RedisAI on GPU"
148+
]
149+
},
150+
{
151+
"cell_type": "code",
152+
"execution_count": 6,
153+
"id": "bf775104",
154+
"metadata": {},
155+
"outputs": [
156+
{
157+
"data": {
158+
"text/plain": [
159+
"'OK'"
160+
]
161+
},
162+
"execution_count": 6,
163+
"metadata": {},
164+
"output_type": "execute_result"
165+
}
166+
],
167+
"source": [
168+
"model = load_model(\"resnet50.pb\")\n",
169+
"con.modelstore(\"tensorflow_model\", backend=\"TF\", device=\"CPU\", inputs=['images'], outputs=['output'], data=model)"
170+
]
171+
},
172+
{
173+
"cell_type": "markdown",
174+
"id": "3b5fdd8c",
175+
"metadata": {},
176+
"source": [
177+
"## Load script\n",
178+
"Why do you need Script? It's very likely that your deep learning model would have a pre/post processing step, like changing the dimensionality of the input (adding batch dimension) or doing normalizatoin etc. You normally do this from your client code and send the processed data to model server. With script, you can club this into your model serving pipeline. Script is one of the powerful feature of RedisAI. RedisAI Scripts are built on top of [TorchScript](https://pytorch.org/docs/stable/jit.html) and it's recommended to take a look if TorcScript is new to you. Torchscript is a subset of python programming langauge i.e it looks and smells like python but all the python functionalities are not available in torchscript. Now if you are wondering what's the benefit of TorchScript in RedisAI, there are few\n",
179+
"\n",
180+
"- It runs on a highly effecient C++ runtime\n",
181+
"- It can pipeline your preprocessing and postprocessing jobs, right where your model and data resides. So no back and forth of huge data blobs between your model server and pre/post processing scripts\n",
182+
"- It can run in a single redis pipeline or in RedisAI Dag which makes serving channel implementation smooth\n",
183+
"- As in this example, even if your model is in Tensorflow, you can use Script to pipe the worflow\n",
184+
"\n",
185+
"You can load the script from a file (`ml2rt.load_script` does this for you) which is probably your workflow normally since you save the script in a file but here we pass the string into the `scriptstore` method"
186+
]
187+
},
188+
{
189+
"cell_type": "code",
190+
"execution_count": 7,
191+
"id": "f097999c",
192+
"metadata": {},
193+
"outputs": [
194+
{
195+
"data": {
196+
"text/plain": [
197+
"'OK'"
198+
]
199+
},
200+
"execution_count": 7,
201+
"metadata": {},
202+
"output_type": "execute_result"
203+
}
204+
],
205+
"source": [
206+
"script = \"\"\"\n",
207+
"def pre_process(tensors: List[Tensor], keys: List[str], args: List[str]):\n",
208+
" image = tensors[0]\n",
209+
" return image.float().div(255).unsqueeze(0)\n",
210+
"\n",
211+
"def post_process(tensors: List[Tensor], keys: List[str], args: List[str]):\n",
212+
" output = tensors[0]\n",
213+
" # tf model has 1001 classes, hence negative 1\n",
214+
" return output.max(1)[1] - 1\n",
215+
"\"\"\"\n",
216+
"con.scriptstore(\"processing_script\", device=\"CPU\", script=script, entry_points=(\"pre_process\", \"post_process\"))"
217+
]
218+
},
219+
{
220+
"cell_type": "markdown",
221+
"id": "7d9f2834",
222+
"metadata": {},
223+
"source": [
224+
"## Load the image and final classes\n",
225+
"Here we load the input image and the final classes to find the predicted output"
226+
]
227+
},
228+
{
229+
"cell_type": "code",
230+
"execution_count": 8,
231+
"id": "3bef00e0",
232+
"metadata": {},
233+
"outputs": [],
234+
"source": [
235+
"image = io.imread(\"data/cat.jpg\")\n",
236+
"class_idx = json.load(open(\"data/imagenet_classes.json\"))"
237+
]
238+
},
239+
{
240+
"cell_type": "markdown",
241+
"id": "44013e64",
242+
"metadata": {},
243+
"source": [
244+
"## Run the model serving pipeline\n",
245+
"Here we run the serving pipeline one by one and finally fetch the results out. The pipeline is organized into 5 steps\n",
246+
"\n",
247+
"```\n",
248+
"Setting Input -> Pre-processing Script -> Running Model -> Post-processing Script -> Fetching Output\n",
249+
"```"
250+
]
251+
},
252+
{
253+
"cell_type": "code",
254+
"execution_count": 10,
255+
"id": "7a3a24ff",
256+
"metadata": {},
257+
"outputs": [
258+
{
259+
"name": "stdout",
260+
"output_type": "stream",
261+
"text": [
262+
"281 tabby, tabby catamount\n"
263+
]
264+
},
265+
{
266+
"name": "stderr",
267+
"output_type": "stream",
268+
"text": [
269+
"/Users/hhsecond/asgard/redisai-examples/venv/lib/python3.7/site-packages/ipykernel_launcher.py:2: DeprecationWarning: Call to deprecated method scriptrun. (Use scriptexecute instead) -- Deprecated since version 1.2.0.\n",
270+
" \n",
271+
"/Users/hhsecond/asgard/redisai-examples/venv/lib/python3.7/site-packages/ipykernel_launcher.py:3: DeprecationWarning: Call to deprecated method modelrun. (Use modelexecute instead) -- Deprecated since version 1.2.0.\n",
272+
" This is separate from the ipykernel package so we can avoid doing imports until\n",
273+
"/Users/hhsecond/asgard/redisai-examples/venv/lib/python3.7/site-packages/ipykernel_launcher.py:4: DeprecationWarning: Call to deprecated method scriptrun. (Use scriptexecute instead) -- Deprecated since version 1.2.0.\n",
274+
" after removing the cwd from sys.path.\n"
275+
]
276+
}
277+
],
278+
"source": [
279+
"con.tensorset('image', image)\n",
280+
"out4 = con.scriptrun('processing_script', 'pre_process', inputs='image', outputs='processed')\n",
281+
"out5 = con.modelrun('tensorflow_model', 'processed', 'model_out')\n",
282+
"out6 = con.scriptrun('processing_script', 'post_process', 'model_out', 'final')\n",
283+
"final = con.tensorget('final')\n",
284+
"print(final[0], class_idx[str(final[0])])"
285+
]
286+
},
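The deprecation warnings captured above point at `modelexecute`/`scriptexecute` as the replacements for `modelrun`/`scriptrun` since redisai-py 1.2. A minimal sketch of the same five-step pipeline using the newer calls, reusing `con`, `image`, and `class_idx` from the cells above, could look like this; the exact keyword argument names are an assumption and may vary between client versions.

```python
# Hedged sketch: same pipeline with the non-deprecated redisai-py >= 1.2 calls.
# Keyword argument names (inputs=, outputs=) are assumptions; check your client version.
con.tensorset('image', image)
con.scriptexecute('processing_script', 'pre_process',
                  inputs=['image'], outputs=['processed'])
con.modelexecute('tensorflow_model',
                 inputs=['processed'], outputs=['model_out'])
con.scriptexecute('processing_script', 'post_process',
                  inputs=['model_out'], outputs=['final'])
final = con.tensorget('final')
print(final[0], class_idx[str(final[0])])
```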
3287
{
4288
"cell_type": "code",
5289
"execution_count": null,
6-
"id": "c0ef479d",
290+
"id": "b8aa981c",
7291
"metadata": {},
8292
"outputs": [],
9293
"source": []
@@ -25,7 +309,7 @@
25309
"name": "python",
26310
"nbconvert_exporter": "python",
27311
"pygments_lexer": "ipython3",
28-
"version": "3.8.7"
312+
"version": "3.7.6"
29313
}
30314
},
31315
"nbformat": 4,
