Showing 5 changed files with 1,058 additions and 1,058 deletions.
e2e-tutorials/batch_state_identifier/notebooks/40-TestPipelineLocally.ipynb (230 changes: 115 additions & 115 deletions)
@@ -1,118 +1,118 @@
{
  "cells": [
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {
        "SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
        "SPDX-License-Identifier": "MIT"
      },
      "source": [
        "# Testing the edge configuration package\n",
        "\n",
        "In this notebook, the main goal is to test the edge configuration package created in notebook [30-CreatePipelinePackage](30-CreatePipelinePackage.ipynb),\n",
        "driving it with the training data set that was used in notebook [10-CreateModel](10-CreateModel.ipynb).\n",
        "\n",
        "The `LocalPipelineRunner` object takes the edge configuration package and extracts its components.\n",
        "Once the components are extracted, you can run them individually by calling `run_component` with the component name and structured input data.\n",
        "The method builds a `venv` Python virtual environment for running the component and installs the dependencies listed in the component's `requirements.txt`.\n",
        "Once the virtual Python environment is ready, the method executes the component and feeds it with your test data.\n",
        "The result will be the list of outputs of your component. If your component does not always produce an output, as is the case with a preprocessor aggregating a windowful of data, the output list will be shorter than the input list."
      ]
"cells": [ | ||
{ | ||
"attachments": {}, | ||
"cell_type": "markdown", | ||
"metadata": { | ||
"SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.", | ||
"SPDX-License-Identifier": "MIT" | ||
}, | ||
"source": [ | ||
"# Testing the edge configuration package\n", | ||
"\n", | ||
"In this notebook, the main goal is to test the edge configuration package created in notebook [30-CreatePipelinePackage](30-CreatePipelinePackage.ipynb)\n", | ||
"driving it with the training data set which was used in notebook [10-CreateModel](10-CreateModel.ipynb).\n", | ||
"\n", | ||
"The `LocalPipelineRunner` object takes the edge configuration package and extracts its components.\n", | ||
"Once the components are extracted, you can run them individually by calling `run_component` with component name and structured input data.\n", | ||
"The method builds a `venv` Python virtual environment for running the component and installs the required dependencies listed in `requirements.txt` for the component.\n", | ||
"Once the virtual python environment is ready, the method executes and feeds the component with your test data.\n", | ||
"The result will be the list of outputs of your component. If your component does not always produce an output, as is the case with a preprocessor aggregating a windowful of data, the output list will be shorter than the input list." | ||
] | ||
    },
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Define a dataset to test the package\n",
        "The goal here is to create the list of input data with which the `process_input(data: dict)` method will be triggered.\n",
        "For this reason, we read the JSON files and create the list of payloads as we did in notebook [10-CreateModel](10-CreateModel.ipynb)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "from pathlib import Path\n",
        "\n",
        "# Collect the preprocessed example files and wrap each one the way the\n",
        "# pipeline expects it: a {'json_data': <serialized JSON>} dict per file.\n",
        "json_files = Path('../data/processed/example').glob('*.json')\n",
        "inputs = []\n",
        "for json_file in json_files:\n",
        "    with open(json_file) as f:\n",
        "        json_string = json.load(f)\n",
        "    inputs.append({'json_data': json.dumps(json_string)})"
      ]
    },
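    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick, optional sanity check (not part of the original notebook), the cell below peeks at the prepared payloads to confirm they have the expected `{'json_data': ...}` shape before the pipeline is run."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sanity check: how many payloads were prepared and what does the first one look like?\n",
        "print(f'{len(inputs)} payloads prepared')\n",
        "if inputs:\n",
        "    print(inputs[0]['json_data'][:200])  # first 200 characters of the first payload"
      ]
    },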
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Test the pipeline locally\n",
        "To do so, we instantiate a `LocalPipelineRunner` object with the path of our configuration package and a directory where we want to check the results.\n",
        "This directory will contain both the extracted components and the created Python virtual environment.\n",
        "If the directory is not defined, a temporary directory is created and deleted after testing."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "tags": []
      },
      "outputs": [],
      "source": [
        "from simaticai.testing.pipeline_runner import LocalPipelineRunner\n",
        "\n",
        "test_dir = Path('../test')\n",
        "package_path = Path('../packages/Batch-State-Identifier-edge_1.zip')\n",
        "\n",
        "# Run the whole pipeline on the prepared payloads; results are collected in 'output'.\n",
        "output = []\n",
        "with LocalPipelineRunner(package_path, test_dir) as pipelineRunner:\n",
        "    output = pipelineRunner.run_pipeline(inputs)"
      ]
    },
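    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The cell above runs the whole pipeline end to end with `run_pipeline`. As described at the top of this notebook, components can also be tested one by one with `run_component`. The sketch below is illustrative only and is left commented out: the component name is a placeholder and must be replaced with a component name actually defined in your edge configuration package."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch: test a single component instead of the whole pipeline.\n",
        "# 'my_component' is a placeholder; use a component name from your package.\n",
        "# with LocalPipelineRunner(package_path, test_dir) as pipelineRunner:\n",
        "#     component_output = pipelineRunner.run_component('my_component', inputs)\n",
        "#     print(component_output[:3])"
      ]
    },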
    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Check the results\n",
        "The method returns the calculated results, which can be compared to the expected results."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "output"
      ]
    },
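    {
      "attachments": {},
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an optional addition to the original notebook, the cell below prints the first few entries of `output`. The exact type and keys of each entry depend on the pipeline's output definition, so this is only a structural inspection, not a validation against expected values."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Inspect the structure of the first few pipeline outputs.\n",
        "for i, item in enumerate(output[:5]):\n",
        "    print(i, type(item).__name__, item)"
      ]
    }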
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python (batch_state_identifier)",
      "language": "python",
      "name": "batch_state_identifier"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.9"
    }
  },
"nbformat": 4, | ||
"nbformat_minor": 4 | ||
"nbformat": 4, | ||
"nbformat_minor": 4 | ||
}