Welcome to iTELL AI, a REST API for intelligent textbooks. iTELL AI provides the following principal features:
- Summary scoring
- Constructed response item scoring
- Structured dialogues with conversational AI
iTELL AI also provides utility endpoints used by the content management system:
- Generating transcripts from YouTube videos
- Creating chunk embeddings and managing a vector store
The API documentation is hosted at `/redoc`.
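As a rough sketch of what a client call might look like — note that the `/score/summary` path and the payload field names below are assumptions for illustration, not the actual schema; consult `/redoc` for the real contract:

```python
import json
import urllib.request


def build_summary_request(page_slug: str, summary: str) -> dict:
    """Build a summary-scoring payload (field names are hypothetical)."""
    return {"page_slug": page_slug, "summary": summary}


def score_summary(base_url: str, page_slug: str, summary: str) -> dict:
    """POST the payload to a hypothetical summary-scoring endpoint."""
    payload = json.dumps(build_summary_request(page_slug, summary)).encode()
    request = urllib.request.Request(
        f"{base_url}/score/summary",  # hypothetical path; see /redoc
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)


if __name__ == "__main__":
    print(build_summary_request("example-page", "A short summary."))
```

A real client should take the endpoint path and field names from `/redoc` rather than from this sketch.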
- The app is defined in `src/app.py`.
- The endpoints are defined in `src/routers/`.
- The Pydantic models are defined in `src/schemas/`.
- External connections are defined in `src/dependencies/`.
- NLP and AI pipelines are defined in `src/pipelines/`.
- Service logic is defined in `src/services/`.
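The layering can be pictured with a framework-free sketch; every name below is invented for illustration and does not mirror the real modules:

```python
from dataclasses import dataclass


# schemas/: Pydantic-style models (plain dataclasses here for illustration)
@dataclass
class SummaryRequest:
    page_slug: str
    summary: str


@dataclass
class SummaryScore:
    page_slug: str
    score: float


# pipelines/: an NLP pipeline stub; a toy heuristic stands in for a model
def score_text(text: str) -> float:
    return min(len(text.split()) / 50.0, 1.0)


# services/: service logic composes pipelines and schemas
def score_summary(request: SummaryRequest) -> SummaryScore:
    return SummaryScore(page_slug=request.page_slug, score=score_text(request.summary))


# routers/: an endpoint would deserialize the request body into the schema,
# call the service, and serialize the result back to the client
if __name__ == "__main__":
    print(score_summary(SummaryRequest("example-page", "A short summary.")))
```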
Development requires a GPU with ~50 GiB of VRAM.
- If not using the provided dev container, install `protobuf-compiler` on your system. It is required to build `gcld3`.
- Clone the repository and run `pip install -r requirements/requirements.in`.
- Create a `.env` file in the application root directory, following the layout of `.env.example`.
- Ask a team member for the values to use in the `.env` file.
- If you are on Mac, add `export` before each line in the `.env` file.
- Load the environment variables with `source .env`, or use the provided dev container.
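As an illustration of the `.env` format the steps above describe, here is a hypothetical helper (not part of the codebase) that parses `KEY=VALUE` lines while tolerating the `export ` prefix used on Mac:

```python
import os


def parse_env_lines(lines) -> dict:
    """Parse KEY=VALUE lines, tolerating an optional leading 'export '.

    Hypothetical helper for illustration; the app itself reads its
    configuration from the process environment.
    """
    values = {}
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if line.startswith("export "):
            line = line[len("export "):]
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    return values


def load_env(path: str = ".env") -> dict:
    with open(path) as handle:
        return parse_env_lines(handle)


if __name__ == "__main__":
    os.environ.update(load_env())
```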
- If not using the provided dev container, install the development dependencies: `pip install pip-tools pytest asgi-lifespan`.
- Run `pytest` from the root directory to run the test suite.
- Please write tests for any new endpoints.
- Please run tests with `pytest` before requesting a code review.
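A sketch of the expected test shape (self-contained; real tests would exercise the actual app from `src/app.py` through its test client, and the handler below is only a stub):

```python
# A stub standing in for an endpoint handler (illustrative only).
def summary_score_endpoint(payload: dict) -> dict:
    if "summary" not in payload:
        return {"status": 422, "detail": "summary is required"}
    return {"status": 200, "score": 0.5}


# pytest collects plain functions named test_* and runs their asserts.
def test_scores_valid_summary():
    response = summary_score_endpoint({"summary": "A short summary."})
    assert response["status"] == 200
    assert 0.0 <= response["score"] <= 1.0


def test_rejects_missing_summary():
    response = summary_score_endpoint({})
    assert response["status"] == 422


if __name__ == "__main__":
    test_scores_valid_summary()
    test_rejects_missing_summary()
```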
- Make changes to `requirements/requirements.in`.
- Run `pip-compile requirements/requirements.in` on a machine with a GPU.
This dev container only works on machines with an NVIDIA GPU.
- Install the Remote - Containers extension for VSCode.
- Open the repository in VSCode.
- Click the green button in the bottom left corner of the window and select "Reopen in Container".
- The container will build and VSCode will reload. You should now be able to run the code in the container.
The Makefile defines a build and push sequence to the localhost:32000 container registry.
The image is hosted on LEAR Lab Development Server #1.
`kubernetes/manifest.yaml` defines a deployment and a service for the image.
- The deployment is configured to pull the image from a local Docker registry (the microk8s built-in registry).
- The repository is located at `/srv/repos/itell-api` on the lab server.
You should only need the following commands to deploy an update. Run these from within the repository directory:
- `git fetch`
- `git pull`
- `make cuda_device=X` (where X is 0, 1, or 2, depending on which GPU is available)
If you need to make any quick fixes to get the deployment working, please do not forget to push those changes directly to main:
- Make your changes to the files.
- `git add .`
- `git commit -m "[commit message]"`
- `git push`
If you make any changes to the required environment variables, they must be updated in the Kubernetes secret:
- Manually update the `.env` file on the production server. This file is not version controlled.
- `microk8s kubectl delete secret itell-ai`
- `microk8s kubectl create secret generic itell-ai --from-env-file=.env`
- Find the pod's ID using `microk8s kubectl get pods`.
- Run `microk8s kubectl exec -i -t itell-api-[POD-ID] -- /bin/bash` to open a shell in the pod.
- Run `microk8s kubectl logs itell-api-[tab-to-complete]` to view the pod's logs.