|
6 | 6 | "metadata": {},
|
7 | 7 | "source": [
|
8 | 8 | "# Image Classification with PyTorch\n",
|
9 |
| - "Pytorch has been both researcher's and engineer's preferred choice of framework for DL development but when it comes to productionizing pytorch models, there still hasn't been a consensus on what to use. This guide run you through building a simple image classification model using Pytorch and then deploying that to RedisAI" |
| 9 | + "Pytorch has been both researcher's and engineer's preferred choice of framework for DL development but when it comes to productionizing pytorch models, there still hasn't been a consensus on what to use. This guide run you through building a simple image classification model using Pytorch and then deploying that to RedisAI. Let's start with importing the necessary packages" |
10 | 10 | ]
|
11 | 11 | },
|
12 | 12 | {
|
13 | 13 | "cell_type": "code",
|
14 |
| - "execution_count": 12, |
| 14 | + "execution_count": 28, |
15 | 15 | "id": "1e657632",
|
16 | 16 | "metadata": {},
|
17 | 17 | "outputs": [],
|
18 | 18 | "source": [
|
19 | 19 | "import torchvision.models as models\n",
|
20 |
| - "import torch" |
21 |
| - ] |
22 |
| - }, |
23 |
| - { |
24 |
| - "cell_type": "code", |
25 |
| - "execution_count": 13, |
26 |
| - "id": "56edea97", |
27 |
| - "metadata": {}, |
28 |
| - "outputs": [], |
29 |
| - "source": [ |
30 |
| - "model = models.resnet50(pretrained=True)\n", |
31 |
| - "model.eval()\n", |
| 20 | + "import torch\n", |
32 | 21 | "\n",
|
33 |
| - "scripted_model = torch.jit.script(model)\n", |
34 |
| - "torch.jit.save(scripted_model, 'resnet50.pt')" |
| 22 | + "import json\n", |
| 23 | + "import time\n", |
| 24 | + "from redisai import Client\n", |
| 25 | + "import ml2rt\n", |
| 26 | + "from skimage import io\n", |
| 27 | + "\n", |
| 28 | + "import os\n", |
| 29 | + "from redisai import Client" |
35 | 30 | ]
|
36 | 31 | },
|
37 | 32 | {
|
38 |
| - "cell_type": "code", |
39 |
| - "execution_count": 14, |
40 |
| - "id": "4ffd3d48", |
| 33 | + "cell_type": "markdown", |
| 34 | + "id": "43cd67a3", |
41 | 35 | "metadata": {},
|
42 |
| - "outputs": [], |
43 | 36 | "source": [
|
44 |
| - "import json\n", |
45 |
| - "import time\n", |
46 |
| - "from redisai import Client\n", |
47 |
| - "import ml2rt\n", |
48 |
| - "from skimage import io" |
| 37 | + "## Build Model\n", |
| 38 | + "For this example, we use a pretrained model from torchvision for image classification - the renowned resnet50. Since RedisAI is a C/C++ runtime, we'd need to export the torch model into [TorchScript](https://pytorch.org/docs/stable/jit.html). Here is how to do it but you can read more about TorchScript in the attached link" |
49 | 39 | ]
|
50 | 40 | },
|
51 | 41 | {
|
52 | 42 | "cell_type": "code",
|
53 |
| - "execution_count": 15, |
54 |
| - "id": "59b6599a", |
| 43 | + "execution_count": 29, |
| 44 | + "id": "56edea97", |
55 | 45 | "metadata": {},
|
56 | 46 | "outputs": [],
|
57 | 47 | "source": [
|
58 |
| - "import os\n", |
59 |
| - "from redisai import Client\n", |
| 48 | + "model = models.resnet50(pretrained=True)\n", |
| 49 | + "model.eval()\n", |
60 | 50 | "\n",
|
61 |
| - "REDIS_HOST = os.getenv(\"REDIS_HOST\", \"localhost\")\n", |
62 |
| - "REDIS_PORT = int(os.getenv(\"REDIS_PORT\", 6379))" |
| 51 | + "scripted_model = torch.jit.script(model)\n", |
| 52 | + "torch.jit.save(scripted_model, 'resnet50.pt')" |
63 | 53 | ]
|
64 | 54 | },
|
65 | 55 | {
|
66 |
| - "cell_type": "code", |
67 |
| - "execution_count": 16, |
68 |
| - "id": "97bbea43", |
| 56 | + "cell_type": "markdown", |
| 57 | + "id": "75c8faa0", |
69 | 58 | "metadata": {},
|
70 |
| - "outputs": [], |
71 | 59 | "source": [
|
72 |
| - "con = Client(host=REDIS_HOST, port=REDIS_PORT)" |
| 60 | + "## Setup RedisAI\n", |
| 61 | + "This tutorial assumes you already have a RedisAI server running. The easiest way to setup one instance is using docker\n", |
| 62 | + "\n", |
| 63 | + "```\n", |
| 64 | + "docker run -p 6379:6379 redislabs/redisai:latest-cpu-x64-bionic\n", |
| 65 | + "```\n", |
| 66 | + "\n", |
| 67 | + "Take a look at this [quickstart](https://oss.redis.com/redisai/quickstart/) for more details. Here we setup the connection credentials and ping the server to verify we can talk " |
73 | 68 | ]
|
74 | 69 | },
|
75 | 70 | {
|
76 | 71 | "cell_type": "code",
|
77 |
| - "execution_count": 17, |
78 |
| - "id": "95325dca", |
| 72 | + "execution_count": 31, |
| 73 | + "id": "59b6599a", |
79 | 74 | "metadata": {},
|
80 | 75 | "outputs": [
|
81 | 76 | {
|
|
84 | 79 | "True"
|
85 | 80 | ]
|
86 | 81 | },
|
87 |
| - "execution_count": 17, |
| 82 | + "execution_count": 31, |
88 | 83 | "metadata": {},
|
89 | 84 | "output_type": "execute_result"
|
90 | 85 | }
|
91 | 86 | ],
|
92 | 87 | "source": [
|
| 88 | + "REDIS_HOST = os.getenv(\"REDIS_HOST\", \"localhost\")\n", |
| 89 | + "REDIS_PORT = int(os.getenv(\"REDIS_PORT\", 6379))\n", |
| 90 | + "con = Client(host=REDIS_HOST, port=REDIS_PORT)\n", |
93 | 91 | "con.ping()"
|
94 | 92 | ]
|
95 | 93 | },
|
| 94 | + { |
| 95 | + "cell_type": "markdown", |
| 96 | + "id": "f68e8993", |
| 97 | + "metadata": {}, |
| 98 | + "source": [ |
| 99 | + "## Load model\n", |
| 100 | + "Next step is to load the model we trained above into RedisAI for serving. We are using a convinent package [ml2rt](https://pypi.org/project/ml2rt/) here for loading but it's not a mandatory dependency if you want to keep your `requirements.txt` small. Take a look at the `load_model` function. This will give us a binary blob of the model we have built above. We need to send this to RedisAI and also inform which backend we'd like to use and which device this should run on. We'll set the model on a key so we can reference this key later\n", |
| 101 | + "\n", |
| 102 | + "Note: If you want to run on GPU, take a look at the above quick start to setup RedisAI on GPU" |
| 103 | + ] |
| 104 | + }, |
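The visible part of the diff does not show how the `model` blob passed to `modelstore` below is produced. A minimal sketch of that step with `ml2rt.load_model`, assuming the `resnet50.pt` file exported earlier, might look like this:

```python
# Sketch only: read the TorchScript file exported earlier back in as a
# binary blob; ml2rt.load_model returns the raw serialized model, which
# is what modelstore expects in its `data` argument.
import ml2rt

model = ml2rt.load_model("resnet50.pt")
# con.modelstore("pytorch_model", backend="TORCH", device="CPU", data=model)
```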
96 | 105 | {
|
97 | 106 | "cell_type": "code",
|
98 |
| - "execution_count": 18, |
| 107 | + "execution_count": 32, |
99 | 108 | "id": "f7ddde68",
|
100 | 109 | "metadata": {},
|
101 | 110 | "outputs": [
|
|
105 | 114 | "'OK'"
|
106 | 115 | ]
|
107 | 116 | },
|
108 |
| - "execution_count": 18, |
| 117 | + "execution_count": 32, |
109 | 118 | "metadata": {},
|
110 | 119 | "output_type": "execute_result"
|
111 | 120 | }
|
|
115 | 124 | "con.modelstore(\"pytorch_model\", backend=\"TORCH\", device=\"CPU\", data=model)"
|
116 | 125 | ]
|
117 | 126 | },
|
| 127 | + { |
| 128 | + "cell_type": "markdown", |
| 129 | + "id": "c635e5f4", |
| 130 | + "metadata": {}, |
| 131 | + "source": [ |
| 132 | + "## Load script\n", |
| 133 | + "Why do you need Script? It's very likely that your deep learning model would have a pre/post processing step, like changing the dimensionality of the input (adding batch dimension) or doing normalizatoin etc. You normally do this from your client code and send the processed data to model server. With script, you can club this into your model serving pipeline. Script is one of the powerful feature of RedisAI. RedisAI Scripts are built on top of [TorchScript](https://pytorch.org/docs/stable/jit.html) and it's recommended to take a look if TorcScript is new to you. Torchscript is a subset of python programming langauge i.e it looks and smells like python but all the python functionalities are not available in torchscript. Now if you are wondering what's the benefit of TorchScript in RedisAI, there are few\n", |
| 134 | + "\n", |
| 135 | + "- It runs on a highly effecient C++ runtime\n", |
| 136 | + "- It can pipeline your preprocessing and postprocessing jobs, right where your model and data resides. So no back and forth of huge data blobs between your model server and pre/post processing scripts\n", |
| 137 | + "- It can run in a single redis pipeline or in RedisAI Dag which makes serving channel implementation smooth\n", |
| 138 | + "- You can use it with any framework, not just pytorch\n", |
| 139 | + "\n", |
| 140 | + "You can load the script from a file (`ml2rt.load_script` does this for you) which is probably your workflow normally since you save the script in a file but here we pass the string into the `scriptstore` method" |
| 141 | + ] |
| 142 | + }, |
118 | 143 | {
|
119 | 144 | "cell_type": "code",
|
120 |
| - "execution_count": 28, |
| 145 | + "execution_count": 8, |
121 | 146 | "id": "50bb90b1",
|
122 | 147 | "metadata": {},
|
123 | 148 | "outputs": [
|
|
127 | 152 | "'OK'"
|
128 | 153 | ]
|
129 | 154 | },
|
130 |
| - "execution_count": 28, |
| 155 | + "execution_count": 8, |
131 | 156 | "metadata": {},
|
132 | 157 | "output_type": "execute_result"
|
133 | 158 | }
|
|
153 | 178 | "con.scriptstore(\"processing_script\", device=\"CPU\", script=script, entry_points=(\"pre_process\", \"post_process\"))"
|
154 | 179 | ]
|
155 | 180 | },
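The `script` string stored above falls outside the visible diff window. Based on the fragments of `pre_process` that appear in the TorchScript traceback removed later in this diff, a rough sketch of what the two entry points could look like is below; the normalization constants and the `post_process` body are illustrative assumptions, not taken from this commit.

```python
import torch

def pre_process(image: torch.Tensor) -> torch.Tensor:
    # Assumed ImageNet normalization constants (torchvision defaults).
    mean = torch.tensor([0.485, 0.456, 0.406]).unsqueeze(1).unsqueeze(1)
    std = torch.tensor([0.229, 0.224, 0.225]).unsqueeze(1).unsqueeze(1)
    # HWC uint8 image -> scaled, normalized, batched NCHW float tensor
    temp = image.float().div(255).permute(2, 0, 1)
    return temp.sub(mean).div(std).unsqueeze(0)

def post_process(output: torch.Tensor) -> torch.Tensor:
    # Reduce the 1x1000 logits to the index of the most likely class.
    return output.argmax(1)
```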
|
| 181 | + { |
| 182 | + "cell_type": "markdown", |
| 183 | + "id": "5b16d972", |
| 184 | + "metadata": {}, |
| 185 | + "source": [ |
| 186 | + "## Load the image and final classes\n", |
| 187 | + "Here we load the input image and the final classes to find the predicted output" |
| 188 | + ] |
| 189 | + }, |
156 | 190 | {
|
157 | 191 | "cell_type": "code",
|
158 |
| - "execution_count": 29, |
159 |
| - "id": "f24ce05d", |
| 192 | + "execution_count": null, |
| 193 | + "id": "fe95d716", |
160 | 194 | "metadata": {},
|
161 | 195 | "outputs": [],
|
162 | 196 | "source": [
|
163 |
| - "image = io.imread(\"../data/cat.jpg\")" |
| 197 | + "image = io.imread(\"data/cat.jpg\")\n", |
| 198 | + "class_idx = json.load(open(\"data/imagenet_classes.json\"))" |
| 199 | + ] |
| 200 | + }, |
| 201 | + { |
| 202 | + "cell_type": "markdown", |
| 203 | + "id": "9440afb8", |
| 204 | + "metadata": {}, |
| 205 | + "source": [ |
| 206 | + "## Run the model serving pipeline\n", |
| 207 | + "Here we run the serving pipeline one by one and finally fetch the results out. The pipeline is organized into 5 steps\n", |
| 208 | + "\n", |
| 209 | + "```\n", |
| 210 | + "Setting Input -> Pre-processing Script -> Running Model -> Post-processing Script -> Fetching Output\n", |
| 211 | + "```" |
164 | 212 | ]
|
165 | 213 | },
|
166 | 214 | {
|
167 | 215 | "cell_type": "code",
|
168 |
| - "execution_count": 30, |
169 |
| - "id": "40e02215", |
| 216 | + "execution_count": 49, |
| 217 | + "id": "f24ce05d", |
170 | 218 | "metadata": {},
|
171 | 219 | "outputs": [
|
172 | 220 | {
|
173 |
| - "data": { |
174 |
| - "text/plain": [ |
175 |
| - "'OK'" |
176 |
| - ] |
177 |
| - }, |
178 |
| - "execution_count": 30, |
179 |
| - "metadata": {}, |
180 |
| - "output_type": "execute_result" |
| 221 | + "name": "stdout", |
| 222 | + "output_type": "stream", |
| 223 | + "text": [ |
| 224 | + "281 tabby, tabby catamount\n" |
| 225 | + ] |
181 | 226 | }
|
182 | 227 | ],
|
183 | 228 | "source": [
|
184 |
| - "con.tensorset('image', image)" |
| 229 | + "con.tensorset('image', image)\n", |
| 230 | + "con.scriptexecute('processing_script', 'pre_process', inputs='image', outputs='processed')\n", |
| 231 | + "con.modelexecute('pytorch_model', 'processed', 'model_out')\n", |
| 232 | + "con.scriptexecute('processing_script', 'post_process', inputs='model_out', outputs='final')\n", |
| 233 | + "final = con.tensorget('final')\n", |
| 234 | + "print(final[0], class_idx[str(ind[0])])" |
| 235 | + ] |
| 236 | + }, |
| 237 | + { |
| 238 | + "cell_type": "markdown", |
| 239 | + "id": "7b75d0ba", |
| 240 | + "metadata": {}, |
| 241 | + "source": [ |
| 242 | + "## Running with DAG\n", |
| 243 | + "Although this looks good, each of these calls has a network overhead of going back and forth and sometimes it's better to run everything as a single execution and that's what you can do with RedisAI DAG. DAGs are much more powerful than that but let's discuss that in another tutorial. Here we first setup a dag object and track all the operations we did above in the dag. Note that none of these tracking steps sends a request to RedisAI server. Once the dag object is ready with all the paths, you can trigger `dag.run()` to initiate the DAG execution in RedisAI backend" |
185 | 244 | ]
|
186 | 245 | },
|
187 | 246 | {
|
188 | 247 | "cell_type": "code",
|
189 |
| - "execution_count": 31, |
190 |
| - "id": "cacc9eb6", |
| 248 | + "execution_count": 51, |
| 249 | + "id": "40e02215", |
191 | 250 | "metadata": {},
|
192 | 251 | "outputs": [
|
193 | 252 | {
|
194 |
| - "ename": "ResponseError", |
195 |
| - "evalue": "The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): File \"<string>\", line 10, in pre_process mean = mean.unsqueeze(1).unsqueeze(1) std = std.unsqueeze(1).unsqueeze(1) temp = image.float().div(255).permute(2, 0, 1) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return temp.sub(mean).div(std).unsqueeze(0) RuntimeError: number of dims don't match in permute ", |
196 |
| - "output_type": "error", |
197 |
| - "traceback": [ |
198 |
| - "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", |
199 |
| - "\u001b[0;31mResponseError\u001b[0m Traceback (most recent call last)", |
200 |
| - "\u001b[0;32m/var/folders/66/g3bgwk8s0mq9fmm1d32nmb8c0000gq/T/ipykernel_4521/4111896467.py\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mcon\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mscriptexecute\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'processing_script'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'pre_process'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'image'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'processed'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", |
201 |
| - "\u001b[0;32m~/asgard/redisai-examples/venv/lib/python3.8/site-packages/redisai/client.py\u001b[0m in \u001b[0;36mscriptexecute\u001b[0;34m(self, key, function, keys, inputs, args, outputs, timeout)\u001b[0m\n\u001b[1;32m 786\u001b[0m \"\"\"\n\u001b[1;32m 787\u001b[0m \u001b[0margs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mbuilder\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mscriptexecute\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkey\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mfunction\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkeys\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minputs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0moutputs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtimeout\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 788\u001b[0;31m \u001b[0mres\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mexecute_command\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 789\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mres\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0menable_postprocess\u001b[0m \u001b[0;32melse\u001b[0m \u001b[0mprocessor\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mscriptexecute\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mres\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 790\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", |
202 |
| - "\u001b[0;32m~/asgard/redisai-examples/venv/lib/python3.8/site-packages/redis/client.py\u001b[0m in \u001b[0;36mexecute_command\u001b[0;34m(self, *args, **options)\u001b[0m\n\u001b[1;32m 899\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 900\u001b[0m \u001b[0mconn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msend_command\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 901\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mparse_response\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mconn\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcommand_name\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 902\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mConnectionError\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mTimeoutError\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 903\u001b[0m \u001b[0mconn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdisconnect\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", |
203 |
| - "\u001b[0;32m~/asgard/redisai-examples/venv/lib/python3.8/site-packages/redis/client.py\u001b[0m in \u001b[0;36mparse_response\u001b[0;34m(self, connection, command_name, **options)\u001b[0m\n\u001b[1;32m 913\u001b[0m \u001b[0;34m\"Parses a response from the Redis server\"\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 914\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 915\u001b[0;31m \u001b[0mresponse\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mconnection\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread_response\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 916\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mResponseError\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 917\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mEMPTY_RESPONSE\u001b[0m \u001b[0;32min\u001b[0m \u001b[0moptions\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", |
204 |
| - "\u001b[0;32m~/asgard/redisai-examples/venv/lib/python3.8/site-packages/redis/connection.py\u001b[0m in \u001b[0;36mread_response\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 754\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 755\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0misinstance\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mresponse\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mResponseError\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 756\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mresponse\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 757\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mresponse\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 758\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", |
205 |
| - "\u001b[0;31mResponseError\u001b[0m: The following operation failed in the TorchScript interpreter. Traceback of TorchScript (most recent call last): File \"<string>\", line 10, in pre_process mean = mean.unsqueeze(1).unsqueeze(1) std = std.unsqueeze(1).unsqueeze(1) temp = image.float().div(255).permute(2, 0, 1) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return temp.sub(mean).div(std).unsqueeze(0) RuntimeError: number of dims don't match in permute " |
| 253 | + "name": "stdout", |
| 254 | + "output_type": "stream", |
| 255 | + "text": [ |
| 256 | + "281 tabby, tabby catamount\n" |
| 257 | + ] |
| 258 | + }, |
| 259 | + { |
| 260 | + "name": "stderr", |
| 261 | + "output_type": "stream", |
| 262 | + "text": [ |
| 263 | + "/var/folders/66/g3bgwk8s0mq9fmm1d32nmb8c0000gq/T/ipykernel_16269/3084769917.py:8: DeprecationWarning: Call to deprecated method run. (Use execute instead) -- Deprecated since version 1.2.0.\n", |
| 264 | + " final = dag.run()[-1]\n" |
206 | 265 | ]
|
207 | 266 | }
|
208 | 267 | ],
|
209 | 268 | "source": [
|
210 |
| - "con.scriptexecute('processing_script', 'pre_process', 'image', 'processed')" |
| 269 | + "dag = con.dag(routing='default')\n", |
| 270 | + "dag.tensorset('image', image)\n", |
| 271 | + "dag.scriptexecute('processing_script', 'pre_process', inputs='image', outputs='processed')\n", |
| 272 | + "dag.modelexecute('pytorch_model', 'processed', 'model_out')\n", |
| 273 | + "dag.scriptexecute('processing_script', 'post_process', inputs='model_out', outputs='final')\n", |
| 274 | + "dag.tensorget('final')\n", |
| 275 | + "\n", |
| 276 | + "final = dag.run()[-1]\n", |
| 277 | + "print(final[0], class_idx[str(ind[0])])" |
211 | 278 | ]
|
212 | 279 | },
|
213 | 280 | {
|
214 | 281 | "cell_type": "code",
|
215 | 282 | "execution_count": null,
|
216 |
| - "id": "848627bd", |
| 283 | + "id": "3fd407e5", |
217 | 284 | "metadata": {},
|
218 | 285 | "outputs": [],
|
219 | 286 | "source": []
|
|