From 59d7d5f76bf0f625024a2fb9083c5e989feb6f62 Mon Sep 17 00:00:00 2001
From: Wu Yuhan <90036431+yuwu46@users.noreply.github.com>
Date: Thu, 27 Feb 2025 21:08:18 +0800
Subject: [PATCH 1/5] Delete docs/api_guides/low_level/inference.rst

remove docs/api_guides/low_level/inference.rst
---
 docs/api_guides/low_level/inference.rst | 58 -------------------------
 1 file changed, 58 deletions(-)
 delete mode 100644 docs/api_guides/low_level/inference.rst

diff --git a/docs/api_guides/low_level/inference.rst b/docs/api_guides/low_level/inference.rst
deleted file mode 100644
index 84dfbacfce2..00000000000
--- a/docs/api_guides/low_level/inference.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-.. _api_guide_inference:
-
-################
-Inference Engine
-################
-
-The inference engine provides two interfaces: :ref:`cn_api_fluid_io_save_inference_model` to save an inference model and :ref:`cn_api_fluid_io_load_inference_model` to load one.
-
-Format of the Saved Inference Model
-===================================
-
-A saved inference model has two possible formats, controlled by the :code:`model_filename` and :code:`params_filename` parameters of the two interfaces above:
-
-- Parameters are saved to separate files, e.g. with :code:`model_filename` set to :code:`None` and :code:`params_filename` set to :code:`None`
-
-  .. code-block:: bash
-
-      ls recognize_digits_conv.inference.model/*
-      __model__ conv2d_1.w_0 conv2d_2.w_0 fc_1.w_0 conv2d_1.b_0 conv2d_2.b_0 fc_1.b_0
-
-- Parameters are saved to a single file, e.g. with :code:`model_filename` set to :code:`None` and :code:`params_filename` set to :code:`__params__`
-
-  .. code-block:: bash
-
-      ls recognize_digits_conv.inference.model/*
-      __model__ __params__
-
-Saving an Inference Model
-=========================
-
-To save an inference model, :code:`fluid.io.save_inference_model` is typically used to prune the default :code:`fluid.Program`, keeping only the parts required to compute :code:`predict_var`.
-The pruned program is saved to ./infer_model/__model__, and the parameters are saved to separate files under ./infer_model.
-
-Sample code:
-
-.. code-block:: python
-
-    exe = fluid.Executor(fluid.CPUPlace())
-    path = "./infer_model"
-    fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
-                                  target_vars=[predict_var], executor=exe)
-
-
-Loading an Inference Model
-==========================
-
-.. code-block:: python
-
-    exe = fluid.Executor(fluid.CPUPlace())
-    path = "./infer_model"
-    [inference_program, feed_target_names, fetch_targets] = (
-        fluid.io.load_inference_model(dirname=path, executor=exe))
-    results = exe.run(inference_program,
-                      feed={feed_target_names[0]: tensor_img},
-                      fetch_list=fetch_targets)
-
-In this example, :code:`fluid.io.load_inference_model` is called first to obtain the inference :code:`inference_program`, the input names :code:`feed_target_names`, and the output variables :code:`fetch_targets`;
-the :code:`executor` then runs :code:`inference_program` to produce the inference results.

From eeb20118f556f96ab9f0f9cb814988a721832fb0 Mon Sep 17 00:00:00 2001
From: Wu Yuhan <90036431+yuwu46@users.noreply.github.com>
Date: Thu, 27 Feb 2025 21:08:57 +0800
Subject: [PATCH 2/5] Delete docs/api_guides/low_level/inference_en.rst

remove docs/api_guides/low_level/inference_en.rst
---
 docs/api_guides/low_level/inference_en.rst | 58 ----------------------
 1 file changed, 58 deletions(-)
 delete mode 100755 docs/api_guides/low_level/inference_en.rst

diff --git a/docs/api_guides/low_level/inference_en.rst b/docs/api_guides/low_level/inference_en.rst
deleted file mode 100755
index 4faf6de48e9..00000000000
--- a/docs/api_guides/low_level/inference_en.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-.. _api_guide_inference_en:
-
-#################
-Inference Engine
-#################
-
-The inference engine provides two interfaces: :ref:`api_fluid_io_save_inference_model` to save an inference model and :ref:`api_fluid_io_load_inference_model` to load one.
-
-Format of the Saved Inference Model
-===================================
-
-There are two formats for a saved inference model, controlled by the :code:`model_filename` and :code:`params_filename` parameters of the two interfaces above:
-
-- Parameters are saved to separate files, e.g. with :code:`model_filename` set to :code:`None` and :code:`params_filename` set to :code:`None`
-
-  .. code-block:: bash
-
-      ls recognize_digits_conv.inference.model/*
-      __model__ conv2d_1.w_0 conv2d_2.w_0 fc_1.w_0 conv2d_1.b_0 conv2d_2.b_0 fc_1.b_0
-
-- Parameters are saved to a single file, e.g. with :code:`model_filename` set to :code:`None` and :code:`params_filename` set to :code:`__params__`
-
-  .. code-block:: bash
-
-      ls recognize_digits_conv.inference.model/*
-      __model__ __params__
-
-Save Inference Model
-====================
-
-To save an inference model, :code:`fluid.io.save_inference_model` is normally used to prune the default :code:`fluid.Program`, keeping only the parts needed to compute :code:`predict_var`.
-After pruning, the :code:`program` is saved under :code:`./infer_model/__model__`, while the parameters are saved to separate files under :code:`./infer_model`.
-
-Sample code:
-
-.. code-block:: python
-
-    exe = fluid.Executor(fluid.CPUPlace())
-    path = "./infer_model"
-    fluid.io.save_inference_model(dirname=path, feeded_var_names=['img'],
-                                  target_vars=[predict_var], executor=exe)
-
-
-Load Inference Model
-====================
-
-.. code-block:: python
-
-    exe = fluid.Executor(fluid.CPUPlace())
-    path = "./infer_model"
-    [inference_program, feed_target_names, fetch_targets] = (
-        fluid.io.load_inference_model(dirname=path, executor=exe))
-    results = exe.run(inference_program,
-                      feed={feed_target_names[0]: tensor_img},
-                      fetch_list=fetch_targets)
-
-In this example, :code:`fluid.io.load_inference_model` is called first to obtain the inference :code:`inference_program`, the names of the input data :code:`feed_target_names`, and the output variables :code:`fetch_targets`;
-the :code:`executor` then runs :code:`inference_program` to produce the inference results.
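Both deleted guides describe the same two on-disk layouts for a saved model: one file per parameter alongside `__model__`, or everything merged into a single `__params__` file. As a filesystem-level sketch of that difference (hypothetical directory and helper names, stdlib only, no PaddlePaddle required):

```python
import os
import tempfile

def save_separate(model_dir, param_names):
    """Mimic the layout where each parameter gets its own file."""
    os.makedirs(model_dir, exist_ok=True)
    # __model__ holds the serialized program; here it is just a placeholder.
    open(os.path.join(model_dir, "__model__"), "wb").close()
    for name in param_names:
        # One file per parameter, named after the parameter itself.
        open(os.path.join(model_dir, name), "wb").close()

def save_combined(model_dir):
    """Mimic the layout where all parameters share one __params__ file."""
    os.makedirs(model_dir, exist_ok=True)
    open(os.path.join(model_dir, "__model__"), "wb").close()
    open(os.path.join(model_dir, "__params__"), "wb").close()

root = tempfile.mkdtemp()
save_separate(os.path.join(root, "separate"),
              ["conv2d_1.w_0", "conv2d_1.b_0", "fc_1.w_0"])
save_combined(os.path.join(root, "combined"))

print(sorted(os.listdir(os.path.join(root, "separate"))))
print(sorted(os.listdir(os.path.join(root, "combined"))))
```

The single-file layout is what you get when `params_filename` is set to `__params__`; leaving both filename parameters as `None` produces the per-parameter layout shown by the `ls` listings in the deleted docs.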
From b2529f7cb202ce49b2b9a11a8d1d00c973d43d60 Mon Sep 17 00:00:00 2001
From: Wu Yuhan <90036431+yuwu46@users.noreply.github.com>
Date: Fri, 28 Feb 2025 14:24:15 +0800
Subject: [PATCH 3/5] Update index_cn.rst

---
 docs/api_guides/index_cn.rst | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docs/api_guides/index_cn.rst b/docs/api_guides/index_cn.rst
index b70aec927ee..c1f993c4b05 100755
--- a/docs/api_guides/index_cn.rst
+++ b/docs/api_guides/index_cn.rst
@@ -13,7 +13,6 @@ API 功能分类
     low_level/optimizer.rst
     low_level/metrics.rst
     low_level/model_save_reader.rst
-    low_level/inference.rst
     low_level/memory_optimize.rst
     low_level/executor.rst
     low_level/compiled_program.rst

From 0a0abe6cb37558243a119f032b8e15a464a7dadc Mon Sep 17 00:00:00 2001
From: Wu Yuhan <90036431+yuwu46@users.noreply.github.com>
Date: Fri, 28 Feb 2025 14:24:39 +0800
Subject: [PATCH 4/5] Update index_en.rst

---
 docs/api_guides/index_en.rst | 1 -
 1 file changed, 1 deletion(-)

diff --git a/docs/api_guides/index_en.rst b/docs/api_guides/index_en.rst
index c193ed56ec9..a1d2323135a 100755
--- a/docs/api_guides/index_en.rst
+++ b/docs/api_guides/index_en.rst
@@ -13,7 +13,6 @@ This section introduces the Fluid API structure and usage, to help you quickly g
     low_level/optimizer_en.rst
     low_level/metrics_en.rst
     low_level/model_save_reader_en.rst
-    low_level/inference_en.rst
     low_level/memory_optimize_en.rst
     low_level/executor_en.rst
     low_level/compiled_program_en.rst

From 529c1ebe089deb648e74a0868f973e6d977ba530 Mon Sep 17 00:00:00 2001
From: Wu Yuhan <90036431+yuwu46@users.noreply.github.com>
Date: Fri, 28 Feb 2025 14:24:55 +0800
Subject: [PATCH 5/5] Update index_cn.rst
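The deleted guides describe `fluid.io.save_inference_model` as pruning the default program down to only what `predict_var` needs. A minimal, framework-free sketch of that pruning idea (a hypothetical helper, not the fluid implementation): treat the program as a dependency graph and keep only the target variable's ancestors, so training-only branches such as the loss are dropped.

```python
def prune_to_target(deps, target):
    """Keep only the variables the target (transitively) depends on.

    deps maps each variable name to the list of variables it is computed
    from; this mimics, in spirit, how an inference program is trimmed.
    """
    keep, stack = set(), [target]
    while stack:
        var = stack.pop()
        if var in keep:
            continue
        keep.add(var)
        stack.extend(deps.get(var, []))  # walk back through producers
    return keep

# A toy program: a training graph with a loss branch inference doesn't need.
deps = {
    "predict": ["fc_1"],
    "fc_1": ["conv2d_1"],
    "conv2d_1": ["img"],
    "loss": ["predict", "label"],  # training-only branch, pruned away
}
print(sorted(prune_to_target(deps, "predict")))
```

Only `img`, `conv2d_1`, `fc_1`, and `predict` survive; `loss` and `label` are never reached from the target, which is why the saved `__model__` is smaller than the training program.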