A Conversational Framework for Faithful Multi-Perspective Analysis of Production Processes

Source code, datasets, and instructions for replicating the experiments of the paper "A Conversational Framework for Faithful Multi-Perspective Analysis of Production Processes".

Installation

Install the required Python packages for this project with pip and the requirements.txt file.

First, clone the repository:

git clone https://github.com/angelo-casciani/conv_automata
cd conv_automata

Create a conda environment:

conda create -n conv_automata python=3.9 --yes
conda activate conv_automata

Then install the dependencies with pip:

pip install -r requirements.txt

This reads requirements.txt and installs all listed packages along with their dependencies.
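After installation, an optional sanity check (a sketch, not part of the repository) confirms the environment is active:

```shell
# Optional sanity check after installation
python --version   # should report Python 3.9.x from the conv_automata conda env
pip --version      # confirms pip resolves to the environment's interpreter
```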

LLM Requirements

Please note that this software leverages the open-source and closed-source LLMs listed below. Each open model is identified by its Hugging Face model ID, while gpt-4o-mini is accessed through the OpenAI API:

Model                                   Provider
meta-llama/Meta-Llama-3-8B-Instruct     Hugging Face
meta-llama/Meta-Llama-3.1-8B-Instruct   Hugging Face
meta-llama/Llama-3.2-1B-Instruct        Hugging Face
meta-llama/Llama-3.2-3B-Instruct        Hugging Face
mistralai/Mistral-7B-Instruct-v0.2      Hugging Face
mistralai/Mistral-7B-Instruct-v0.3      Hugging Face
mistralai/Mistral-Nemo-Instruct-2407    Hugging Face
mistralai/Ministral-8B-Instruct-2410    Hugging Face
Qwen/Qwen2.5-7B-Instruct                Hugging Face
google/gemma-2-9b-it                    Hugging Face
gpt-4o-mini                             OpenAI
Request access to each Llama model in advance for your Hugging Face account, and retrieve an OpenAI API key to use the supported GPT model.

Please note that each of the selected models has specific GPU requirements. A GPU-enabled environment meeting at least the minimum requirements of the chosen model is recommended to run the software effectively.

Running the Project

Before running the project, insert your personal HuggingFace token (with access to the Llama models already granted for this token) and your OpenAI API key into the .env file.
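The exact variable names depend on the repository's configuration; a minimal .env sketch, assuming the conventional HF_TOKEN and OPENAI_API_KEY names, looks like:

```
# .env — variable names are assumptions; check the repository's own .env template
HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxx        # HuggingFace token with Llama access granted
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxx  # OpenAI API key for gpt-4o-mini
```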

Then, from the project directory, you can execute commands such as the following:

python3 main.py --llm_id Qwen/Qwen2.5-7B-Instruct --modality live --max_new_tokens 512

To run an evaluation of the simulation (evaluation-simulation), verification (evaluation-verification), or routing (evaluation-routing) capabilities:

python3 main.py --llm_id Qwen/Qwen2.5-7B-Instruct --modality evaluation-simulation --max_new_tokens 512

To generate new test sets for the three supported evaluations, run the test_sets_generation.py script before launching an evaluation:

python3 test_sets_generation.py

A comprehensive list of commands can be found at src/cmd4tests.sh.
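As a convenience, the three evaluation modes can be scripted in one loop. The sketch below only prints each command (drop the echo to actually execute them), reusing the model ID and token budget from the examples above:

```shell
# Print the command for each supported evaluation modality
for mode in evaluation-simulation evaluation-verification evaluation-routing; do
    echo python3 main.py --llm_id Qwen/Qwen2.5-7B-Instruct --modality "$mode" --max_new_tokens 512
done
```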
