LLM-Evaluation-s-Always-Fatiguing
Repositories
- open-webui (Public, forked from open-webui/open-webui)
  User-friendly WebUI for LLMs (formerly Ollama WebUI)
- leaf-playground (Public)
  A framework for building scenario simulation projects in which human and LLM-based agents can participate, with a user-friendly web UI for visualizing simulations and support for automatic evaluation at the agent-action level.
- ChainForge (Public, forked from ianarawjo/ChainForge)
  An open-source visual programming environment for battle-testing prompts to LLMs.
- node-event-source (Public)
  A better API for making Event Source (SSE) requests in Node.js, with all the features of axios.
- leaf-eval-tools (Public)
- leaf-playground-hub (Public)
- temp-lora-pipeline (Public)