Fixing broken links #4

Merged: 4 commits, merged on Dec 4, 2024

Changes from all commits
@@ -5,13 +5,13 @@
"cell_type": "markdown",
"metadata": {
"SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
"SPDX-License-Identifier": "MIT"
"SPDX-License-Identifier": "MIT"
},
"source": [
"# Testing the edge configuration package\n",
"\n",
"In this notebook, the main goal is to test the edge configuration package created in notebook [30-CreatePipelinePackage](30-CreatePipelinePackage.ipynb)\n",
"driving it with the training data set which was used in notebook [20-CreateModel](20-CreateModel.ipynb).\n",
"driving it with the training data set which was used in notebook [10-CreateModel](10-CreateModel.ipynb).\n",
"\n",
"The `LocalPipelineRunner` object takes the edge configuration package and extracts its components.\n",
"Once the components are extracted, you can run them individually by calling `run_component` with component name and structured input data.\n",
@@ -27,7 +27,7 @@
"source": [
"## Define a dataset to test the package\n",
"The goal here is to create a list of input data which the `process_input(data: dict)` method will be triggered with. \n",
"For this reason, we read the json files and create the list of payloads as we did in notebook [20-CreateModel](20-CreateModel.ipynb)"
"For this reason, we read the json files and create the list of payloads as we did in notebook [10-CreateModel](10-CreateModel.ipynb)"
]
},
{
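Building the payload list that the cell above describes might look like this minimal sketch; the folder layout and the `input_data` variable name are assumptions:

```python
import json
from pathlib import Path

data_dir = Path("../data/processed")  # placeholder folder of JSON measurement files

payloads = []
for json_file in sorted(data_dir.glob("*.json")):
    record = json.loads(json_file.read_text())
    # Each entry is the dict that process_input(data: dict) will receive.
    payloads.append({"input_data": json.dumps(record)})
```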
@@ -4,8 +4,8 @@
"attachments": {},
"cell_type": "markdown",
"metadata": {
"SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
"SPDX-License-Identifier": "MIT"
"SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
"SPDX-License-Identifier": "MIT"
},
"source": [
"# Create a TensorFlow Lite edge configuration package\n",
@@ -114,7 +114,7 @@
"source": [
"### Add dependencies\n",
"\n",
"All of the required dependencies are collected in the file `runtime_requirements.txt`. See [HOWTO](../HOWTO.md) for more possibilities."
"All of the required dependencies are collected in the file `runtime_requirements.txt`. See [How to handle Python dependencies](../../../howto-guides/04-handle-python-dependencies.md) for more possibilities."
]
},
{
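As background for the hunk above: `runtime_requirements.txt` is a plain pip requirements file. One way to produce it is to freeze the versions installed in the environment where the model was built, as in this sketch (the package list is illustrative):

```python
from importlib.metadata import version

packages = ["numpy", "tflite-runtime"]  # illustrative package list
with open("runtime_requirements.txt", "w") as req_file:
    for package in packages:
        req_file.write(f"{package}=={version(package)}\n")
```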
2 changes: 1 addition & 1 deletion e2e-tutorials/object_detection/README.md
@@ -68,7 +68,7 @@ The notebook [01-CreateTestSet.ipynb](./notebooks/01-CreateTestSet.ipynb) explai

### 2. Analyzing and fixing the model ONNX format

-The notebook [10-ObjectDetectonModel.ipynb](./notebooks/10-ObjectDetectonModel.ipynb) explains the used model, the possible mistakes in the ONNX definition and how to correct them. By the end of the execution of the notebook you will have an **well prepared onnx model** to package.
+The notebook [10-ObjectDetectionModel.ipynb](./notebooks/10-ObjectDetectionModel.ipynb) explains the used model, the possible mistakes in the ONNX definition and how to correct them. By the end of the execution of the notebook you will have an **well prepared onnx model** to package.

### 3. Creating Pre- and Postprocessing steps to support the GPURuntimeComponent

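The kind of ONNX inspection and repair that notebook covers can be illustrated with the official `onnx` package. This is a sketch assuming the common case of a symbolic batch dimension; the file names and the fix itself are illustrative, not the notebook's exact steps:

```python
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # surfaces structural mistakes in the definition

# Illustrative fix: pin a symbolic batch dimension (e.g. "batch_size") to 1.
for tensor in list(model.graph.input) + list(model.graph.output):
    first_dim = tensor.type.tensor_type.shape.dim[0]
    if first_dim.dim_param:
        first_dim.Clear()
        first_dim.dim_value = 1

onnx.save(model, "model_fixed.onnx")
```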
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {
"SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
"SPDX-License-Identifier": "MIT"
"SPDX-License-Identifier": "MIT"
},
"source": [
"# Test the edge configuration package locally"
@@ -31,7 +31,7 @@
"source": [
"#### Create ImageSet input from a JPEG or PNG file\n",
"\n",
"As we discussed in notebook [](./20-PreAndPostProcessing.ipynb), the payload of the Pipeline is an `ImageSet` in a dictionary, which will be processed by the Preprocessing step. For this reason, we create this payload from images in folder _../src/data/processed_ with the help of method defined in notebook [10-ObjectDetectonModel.ipynb](./10-ObjectDetectonModel.ipynb) and saved into Python script [payload.py](../src/preprocessing/payload.py)."
"As we discussed in notebook [](./20-PreAndPostProcessing.ipynb), the payload of the Pipeline is an `ImageSet` in a dictionary, which will be processed by the Preprocessing step. For this reason, we create this payload from images in folder _../src/data/processed_ with the help of method defined in notebook [10-ObjectDetectionModel.ipynb](./10-ObjectDetectionModel.ipynb) and saved into Python script [payload.py](../src/preprocessing/payload.py)."
]
},
{
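The real `ImageSet` dictionary comes from the tutorial's `payload.py`; as a generic sketch of assembling image payloads for local testing (the dict layout and the `vision_payload` name are placeholders, not the actual `ImageSet` schema):

```python
from pathlib import Path
from PIL import Image

def image_payloads(folder: str) -> list:
    payloads = []
    for image_path in sorted(Path(folder).iterdir()):
        if image_path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        image = Image.open(image_path).convert("RGB")
        payloads.append({"vision_payload": image})  # placeholder variable name
    return payloads

payloads = image_payloads("../src/data/processed")
```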
2 changes: 1 addition & 1 deletion howto-guides/00-prepare-environment.md
@@ -26,4 +26,4 @@ The brief [How to setup Linux](00-setup-linux.md) guide explains how to prepare

## Prepare environments

-[How to prepare environment manager environments](00-prepare-environments) explains how to setup different notebook editors to work with the tutorials provided with AI SDK.
+[How to prepare environment manager environments](00-setup-environments.md) explains how to setup different notebook editors to work with the tutorials provided with AI SDK.
2 changes: 1 addition & 1 deletion howto-guides/00-setup-windows.md
@@ -179,4 +179,4 @@ However, if you want to test your model locally, currently there is no official
You can either use the full TensorFlow package, or you can build your TFLite models in a virtual Linux environment.

> **Note**\
-To complete setup, follow the steps described in the [Linux setup](#linux-setup) section.
+To complete setup, follow the steps described in the [Linux setup](00-setup-linux.md) section.
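On the local-testing point in this guide: the full TensorFlow package ships a TFLite interpreter, so a converted model can be smoke-tested on Windows roughly as follows (the model path and dummy input are illustrative):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="classifier.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
dummy = np.zeros(input_info["shape"], dtype=input_info["dtype"])  # all-zero test input

interpreter.set_tensor(input_info["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```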
2 changes: 1 addition & 1 deletion howto-guides/03-use-variable-types.md
@@ -223,4 +223,4 @@ def process_input(data):
}
```

-See details in [How to use `ImageSet` format for images.md](14-use-imageset-format-for-images.md)
+See details in [How to use String format for images.md](14-use-string-format-for-images.md)
2 changes: 1 addition & 1 deletion howto-guides/11-process-images.md
@@ -97,7 +97,7 @@ def process_input(payload: dict):

<!-- from VCA user manual, section Accessing camera data via ZeroMQ -->
`Vision Connector` supports mainly the standardized GenICam pixel formats, as the most common `Mono8`, `RGB8` formats or `BayerXX8` formats to reduce network traffic while the color information is still recorded.
-For different pixel format it is also recommended to use the GenICam naming convention as described in section 4.35 of the [GenICam_PFNC_2_4.pdf](https://www.emva.org/wp-content/uploads/GenICam_SFNC_v2_7.pdf​,) document.
+For different pixel format it is also recommended to use the GenICam naming convention as described in section 4.35 of the [GenICam_PFNC_2_4.pdf](https://www.emva.org/wp-content/uploads/GenICam_SFNC_v2_7.pdf) document.

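As a generic illustration of the pixel formats mentioned above: a GenICam `Mono8` frame carries one byte per pixel, so raw camera bytes decode into an array like this (width and height come from the frame metadata):

```python
import numpy as np

def decode_mono8(raw: bytes, width: int, height: int) -> np.ndarray:
    # Mono8: 8 bits per pixel, single channel, row-major order.
    return np.frombuffer(raw, dtype=np.uint8).reshape(height, width)
```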
### `ImageSet` as Output

2 changes: 1 addition & 1 deletion howto-guides/18-package-inference-pipelines.md
@@ -153,7 +153,7 @@ attempt to auto-wire the components passed as a list,
- connecting the last component to the pipeline output.

Some components may require special variable types, e.g. the GPU component must receive its inputs in a specific format, and it will produce the output in a specific format too.
-Please refer to [Guideline for writing runtime components](writing_components.md) for variable type handling.
+Please refer to [How to use variable types](03-use-variable-types.md) for variable type handling.
In the general case, we recommend that you pass data from one component to the other in a single variable of type `String`, serializing and deserializing whatever data you have through a string.

The low-level methods of class `Pipeline` allow you to define arbitrary wiring between components and pipeline inputs and outputs
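The auto-wiring and the String-serialization advice in this hunk might be sketched as follows, assuming the AI SDK's `Pipeline.from_components` helper; the pipeline name and export destination are placeholders:

```python
from simaticai import deployment  # assumed import path

def build_package(components: list, destination: str = "../packages"):
    # Components are auto-wired in list order:
    # pipeline input -> first component -> ... -> last component -> output.
    pipeline = deployment.Pipeline.from_components(components, name="ExamplePipeline")
    # Between components, pass data as a single String variable, serializing on
    # one side and deserializing on the other (e.g. json.dumps / json.loads).
    return pipeline.export(destination)
```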
4 changes: 2 additions & 2 deletions howto-guides/README.md
@@ -16,7 +16,7 @@ SPDX-License-Identifier: MIT
* [How to setup Linux](00-setup-linux.md)
* [How to setup environment manager](00-setup-environment-manager.md)
* [How to setup a notebook editor](00-setup-notebook-editor.md)
-* [How to setup setup environment manager environments](00-setup-environments)
+* [How to setup setup environment manager environments](00-setup-environments.md)
* [How to define pipeline components?](01-define-components.md)
* [How to create entrypoints for AI Inference Server to be able to feed the model with data?](02-create-entrypoint.md)
* [How to use variables in pipelines?](03-use-variable-types.md)
@@ -30,7 +30,7 @@
* [How to process images?](11-process-images.md)
* [How to use Binary format for images](./12-use-binary-format-for-images.md)
* [How to use Object format for images](./13-use-object-format-for-images.md)
-* [How to use ImageSet format for images](./14-use-imageset-format-for-images.md)
+* [How to use String format for images](./14-use-string-format-for-images.md)
* [How to use Tensorflow instead of Tensorflow Light](./15-use-tensorflow-instead-of-tflight.md)
* [How to version pacakges and use Pacakge ID](./16-version-packages.md)
* [How to mock AI Inference Server logger locally](./17-mock-inference-server-logging.md)
@@ -5,7 +5,7 @@
"cell_type": "markdown",
"metadata": {
"SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
"SPDX-License-Identifier": "MIT"
"SPDX-License-Identifier": "MIT"
},
"source": [
"# Model Conversion for Keras models\n",
@@ -24,7 +24,7 @@
"source": [
"## Load the model\n",
"\n",
"For this model conversion tutorial we are going to use the same model we created and trained in our [Image Classification](\"../../use-cases/image-classification/Readme.md\") example. "
"For this model conversion tutorial we are going to use the same model we created and trained in our [Image Classification](../../../e2e-tutorials/image_classification/README.md) example."
]
},
{
@@ -222,7 +222,7 @@
"\n",
"The AI Inference Server with GPU support accepts ONNX models for execution. \n",
"For this purpose the model must be packaged into a `GPURuntimeComponent` step using AI Software Development Kit. \n",
"For details on how to create `GPURuntimeComponent` and build pipelines that run on a GPU enabled AI Inference Server you can study the [Object Detection](\"../../use-cases/object-detection/Readme.md\") example.\n"
"For details on how to create `GPURuntimeComponent` and build pipelines that run on a GPU enabled AI Inference Server you can study the [Object Detection](../../../e2e-tutorials/object_detection/README.md) example.\n"
]
},
{
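Since this notebook converts a Keras model for the GPU runtime, the conversion step might look like this sketch, assuming the `tf2onnx` converter; file names and the input signature are illustrative:

```python
import tensorflow as tf
import tf2onnx

model = tf.keras.models.load_model("image_classification.h5")  # placeholder file
spec = (tf.TensorSpec(model.inputs[0].shape, tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, output_path="model.onnx")
```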
@@ -5,7 +5,7 @@
"cell_type": "markdown",
"metadata": {
"SPDX-FileCopyrightText": "Copyright (C) Siemens AG 2021. All Rights Reserved.",
"SPDX-License-Identifier": "MIT"
"SPDX-License-Identifier": "MIT"
},
"source": [
"# Model conversion for PyTorch models\n",
@@ -238,7 +238,7 @@
"\n",
"The AI Inference Server with GPU support accepts ONNX models for execution. \n",
"For this purpose the model must be packaged into a `GPURuntimeComponent` step using AI Software Development Kit. \n",
"For details on how to create `GPURuntimeComponent` and build pipelines that run on a GPU enabled AI Inference Server you can study the [Object Detection](\"../../use-cases/object-detection/Readme.md\") example."
"For details on how to create `GPURuntimeComponent` and build pipelines that run on a GPU enabled AI Inference Server you can study the [Object Detection](../../../e2e-tutorials/object_detection/README.md) example."
]
}
],
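For the PyTorch counterpart, the export step can be sketched with the standard `torch.onnx` exporter; the input shape and file name are placeholders:

```python
import torch

def export_onnx(model: torch.nn.Module, path: str = "model.onnx"):
    model.eval()
    dummy_input = torch.randn(1, 3, 224, 224)  # illustrative input shape
    torch.onnx.export(
        model,
        dummy_input,
        path,
        input_names=["input"],
        output_names=["output"],
    )
```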