diff --git a/e2e-tutorials/image_classification/notebooks/30-CreatePipelinePackage.ipynb b/e2e-tutorials/image_classification/notebooks/30-CreatePipelinePackage.ipynb
index f88e39a..68215a8 100644
--- a/e2e-tutorials/image_classification/notebooks/30-CreatePipelinePackage.ipynb
+++ b/e2e-tutorials/image_classification/notebooks/30-CreatePipelinePackage.ipynb
@@ -1,258 +1,258 @@
 {
- "cells": [
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "metadata": {
-    "SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
-    "SPDX-License-Identifier": "MIT"
-   },
-   "source": [
-    "# Create a TensorFlow Lite edge configuration package\n",
-    "\n",
-    "In this notebook, the main goal is to create a pipeline with all of the contents that are necessary for the execution of the model on an Industrial Edge device. \n",
-    "In order to put the elements together, this example collects files\n",
-    "\n",
-    "from [10-CreateClassificationModel](10-CreateClassificationModel.ipynb) notebook: \n",
-    "- **classification_mobilnet.tflite**: the trained model for classification with TFlite\n",
-    "\n",
-    "from [20-CreateInferenceWrapper](20-CreateInferenceWrapper.ipynb) notebook: \n",
-    "- **entrypoint.py**: the script that is called by the runtime to execute the model on the Edge side\n",
-    "- **payload.py**: contains the method which extracts the payload and create a PIL Image to be processed\n",
-    "- **vision_classifier.py**: contains the method to utilize the model and produces a prediction"
-   ]
+ "cells": [
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {
+    "SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
+    "SPDX-License-Identifier": "MIT"
+   },
+   "source": [
+    "# Create a TensorFlow Lite edge configuration package\n",
+    "\n",
+    "In this notebook, the main goal is to create a pipeline with all of the contents necessary to execute the model on an Industrial Edge device. \n",
+    "To put the elements together, this example collects the following files\n",
+    "\n",
+    "from the [10-CreateClassificationModel](10-CreateClassificationModel.ipynb) notebook: \n",
+    "- **classification_mobilnet.tflite**: the trained TFLite classification model\n",
+    "\n",
+    "from the [20-CreateInferenceWrapper](20-CreateInferenceWrapper.ipynb) notebook: \n",
+    "- **entrypoint.py**: the script that is called by the runtime to execute the model on the Edge side\n",
+    "- **payload.py**: contains the method that extracts the payload and creates a PIL Image to be processed\n",
+    "- **vision_classifier.py**: contains the method that utilizes the model and produces a prediction"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Please note that there is no official TFLite runtime for Windows."
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Imports "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "from simaticai import deployment"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Create a single component\n",
+    "\n",
+    "In this step, we create a `PythonComponent` whose `entrypoint.py` script reads the input data `vision_payload`, processes the images with `vision_classifier.py` using the model `classification_mobilnet.tflite`, and produces the output `prediction`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "COMPONENT_DESCRIPTION = \"\"\"\\\n",
+    "This component uses a trained TensorFlow Lite image classification model that reads the vision_payload input and produces a prediction as an output.\n",
+    "\"\"\"\n",
+    "INPUT_DESCRIPTION = \"\"\"\n",
+    "Vision connector MQTT payload holding the image to be classified.\n",
+    "\"\"\"\n",
+    "OUTPUT_DESCRIPTION = \"\"\"\n",
+    "The most probable class predicted for the image as an integer string.\n",
+    "\"\"\"\n",
+    "\n",
+    "component = deployment.PythonComponent(\n",
+    "    name='inference',\n",
+    "    python_version='3.11',\n",
+    "    desc=COMPONENT_DESCRIPTION)\n",
+    "\n",
+    "component.add_resources('..', 'entrypoint.py')\n",
+    "component.set_entrypoint('entrypoint.py')\n",
+    "component.add_resources('..', ['src/payload.py', 'src/vision_classifier.py'])\n",
+    "\n",
+    "component.add_input('vision_payload', 'ImageSet')\n",
+    "component.add_output('prediction', 'String')"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Add metrics\n",
+    "It can be useful to monitor the pipeline, e.g. to watch the prediction probabilities. The metric name must contain an underscore (`_`), because the part before the underscore is used to group custom metrics on the dashboard.\n",
+    "\n",
+    "**⚠ Remember!**\n",
+    "You must use the same metric names here and in the inference wrapper script."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "component.add_metric(\"ic_probability\")"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Add dependencies\n",
+    "\n",
+    "All of the required dependencies are collected in the file `runtime_requirements.txt`. See [How to handle Python dependencies](../../../howto-guides/04-handle-python-dependencies.md) for more options."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "component.set_requirements('../runtime_requirements-py3.11.txt')"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Add a model"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "component.add_resources('..', 'models/classification_mobilnet.tflite')"
+   ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Create a pipeline from this component\n",
+    "\n",
+    "Now you can use the component to create a pipeline configuration. "
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "tags": [
+     "testinit_package_name"
+    ]
+   },
+   "outputs": [],
+   "source": [
+    "PIPELINE_DESCRIPTION = \"\"\"\\\n",
+    "This pipeline runs a TensorFlow Lite model on an Industrial Edge device.\n",
+    "The model was trained to recognize and classify images of the following Siemens SIMATIC automation products: ET 200AL, ET 200eco PN, ET 200SP, S7-1200, S7-1500.\n",
+    "\n",
+    "The pipeline is designed to be fed from the Vision Connector via the Databus with PNG or JPEG images.\n",
+    "The pipeline output is sent to the Databus.\n",
+    "\"\"\"\n",
+    "\n",
+    "#To assure compatibility with older versions of AI SDK (