OpenSSA: Small Specialist Agents for Industrial AI¶
Documentation: aitomatic.github.io/openssa
Installation:
pip install openssa
(Python 3.10-3.11)
SSA Problem-Solver App Launcher (after installation by pip install openssa[contrib]):
openssa launch solver
(try out the same app hosted at openssa.streamlit.app)
OpenSSA is an open-source framework for Small Specialist Agents (SSAs), problem-solving AI agents for industrial applications. Harnessing the power of human domain expertise, SSAs operate either alone or in collaborative “teams”, and can integrate with both informational and operational sensors/actuators to deliver real-world industrial AI solutions.
SSAs are lightweight and domain-focused, and incorporate reasoning and planning capabilities. These characteristics make them ideal for the complex hierarchical tasks typical of industrial applications.
Small Size, Specific-Domain Specialization¶
Many in the field see a clear trend towards specialization in AI models.
“… realize that smaller, cheaper, more specialized models make more sense for 99% of AI use-cases …” – Clem Delangue, Hugging Face
As predicted by Clem Delangue and others, we will see “a rich ecosystem to emerge [of] high-value, specialized AI systems.” SSAs are the central part in the architecture of these systems.
System-1 & System-2 Intelligence¶
In addition to information-retrieval and inferencing (“System-1 intelligence”) capabilities, SSAs are designed with hierarchical reasoning and planning (“System-2 intelligence”) capabilities. They can execute tasks following general-purpose problem-solving paradigms (such as OODA) and domain-specific expert heuristics, in order to solve a diverse variety of problems that are hard for System-1-only Large Language Models (LLMs) and traditional AI models.
System-2 thinking is advantageous where deliberate, analytical processing matters: it excels at complex and novel situations, supports thoughtful reflection and well-reasoned decisions, and is particularly valuable for risk assessment, mitigating impulsive judgments, and adapting mental models through intentional learning. Its conscious, effortful processing also helps avoid cognitive biases and stereotypes. System-1 thinking remains valuable for quick, intuitive decisions in familiar scenarios; System-2 thinking’s strengths lie in navigating intricate situations, analyzing information thoroughly, and making informed choices that account for long-term consequences. Which system is more effective depends on the demands of the task at hand, and both belong in the overall cognitive toolkit.
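The OODA (Observe-Orient-Decide-Act) paradigm referenced above can be sketched as a simple loop. The class and helper names below are illustrative stand-ins, not OpenSSA's actual API:

```python
# Minimal sketch of an OODA (Observe-Orient-Decide-Act) loop.
# All names here are illustrative only; OpenSSA's real classes differ.

class OODALoop:
    def __init__(self, observe, orient, decide, act):
        self.observe, self.orient = observe, orient
        self.decide, self.act = decide, act

    def run(self, task, max_iterations=3):
        history = []
        for _ in range(max_iterations):
            observation = self.observe(task, history)   # Observe: gather facts
            orientation = self.orient(observation)      # Orient: build context
            decision = self.decide(orientation)         # Decide: pick an action
            result = self.act(decision)                 # Act: execute it
            history.append(result)
            if result.get("done"):
                break
        return history

loop = OODALoop(
    observe=lambda task, history: {"task": task, "seen": len(history)},
    orient=lambda obs: {"context": obs},
    decide=lambda ori: {"action": "answer"},
    act=lambda dec: {"done": True, "output": dec["action"]},
)
steps = loop.run("diagnose pump failure")
```

Each pass through the loop feeds prior results back in as history, which is what lets a System-2 agent revise its approach instead of answering in a single shot.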
SSA vs LLM¶
Unlike LLMs, which are computationally intensive and generalized, SSAs are lean, efficient, and designed specifically for individual domains. This focus makes them an optimal choice for businesses, SMEs, researchers, and developers seeking specialized and robust AI solutions for industrial applications.
OpenSSA: Small Specialist Agents¶
Enabling Efficient, Domain-Specific Planning and Reasoning for AI¶
OpenSSA is an open-source framework for creating efficient, domain-specific AI agents: AI assistants for customer support, personalized recommendation engines, or autonomous systems for research. It provides the tools to build Small Specialist Agents (SSAs) that solve complex problems in specific domains.
SSAs tackle multi-step problems that require planning and reasoning beyond traditional language models. They apply OODA for deliberative reasoning (OODAR) and iterative, hierarchical task planning (HTP). This “System-2 intelligence” breaks complex tasks down into manageable steps and lets SSAs make informed decisions based on domain-specific knowledge. With OpenSSA, you can create agents that process, generate, and reason about information, making them more effective and efficient at solving real-world challenges.
Key Features¶
Fast, Cost-Effective & Easy to Use: SSAs are 100-1000x faster and more efficient than LLMs, making them accessible and cost-effective, particularly for industrial usage where time and resources are critical factors.
Industrial Focus: SSAs are developed with a specific emphasis on industrial applications, addressing the unique requirements of trustworthiness, safety, reliability, and scalability inherent to this sector.
System-1 AND System-2 Capabilities, not just System-1: On top of System-1 capabilities such as knowledge query and inferencing/prediction, SSAs have hierarchical problem-solving capabilities based on domain-specific knowledge and expert heuristics.
Vendor Independence: OpenSSA allows everyone to build, train, and deploy their own domain-expert AI models, offering freedom from vendor lock-in and security concerns.
Small: Create lightweight, resource-efficient AI agents through model compression techniques
Specialist: Enhance agent performance with domain-specific facts, rules, heuristics, and fine-tuning for deterministic, accurate results
Agents: Enable goal-oriented, multi-step problem-solving for complex tasks via systematic HTP planning and OODAR reasoning
Integration-Ready: Works seamlessly with popular AI frameworks and tools for easy adoption
Extensible Architecture: Easily integrate new models and domains to expand capabilities
Versatile Applications: Build AI agents for industrial field service, customer support, recommendations, research, and more
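The interplay of the “Specialist” and “Agents” features — domain heuristics guiding multi-step task decomposition — can be illustrated with a toy sketch. The keyword-matching rule format and function names below are simplified assumptions, not OpenSSA's real heuristic machinery:

```python
# Simplified sketch of heuristic-driven task decomposition.
# The keyword matching is a toy stand-in for real domain heuristics.

def decompose(task: str, heuristic_rules: dict[str, list[str]]) -> list[str]:
    """Return domain-specific subtasks whose trigger keyword appears in the task."""
    subtasks = []
    for keyword, steps in heuristic_rules.items():
        if keyword in task.lower():
            subtasks.extend(steps)
    return subtasks or [task]  # fall back to the undivided task

rules = {
    "calibration": [
        "Check the sensor's reference standard",
        "Record drift against the last calibration",
    ],
    "leak": [
        "Isolate the affected line segment",
        "Run a pressure-decay test",
    ],
}

print(decompose("Perform pump calibration", rules))
```

The point of encoding expert knowledge as rules like these is determinism: the same task always produces the same subtasks, which is what the “deterministic, accurate results” feature above refers to.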
Target Audience¶
Our primary audience includes:
Businesses and SMEs wishing to leverage AI in their specific industrial context without relying on extensive computational resources or large vendor solutions.
AI researchers and developers keen on creating more efficient, robust, and domain-specific AI agents for industrial applications.
Open-source contributors believing in democratizing industrial AI and eager to contribute to a community-driven project focused on building and sharing specialized AI agents.
Industries with specific domain problems that can be tackled more effectively by a specialist AI agent, enhancing the reliability and trustworthiness of AI solutions in an industrial setting.
Example Use Cases¶
Boost RAG Performance with Reasoning¶
OpenSSA significantly boosts the accuracy of Retrieval-Augmented Generation (RAG) systems by fine-tuning the embedding or completion model with domain-specific knowledge and by adding the ability to reason about queries and the underlying documents. This combination lifts RAG performance by significant margins, overcoming the limitations of generic language models.
Enhance Conversational AI for Improved Customer Support¶
Build AI assistants that provide accurate, context-aware responses in customer support, healthcare, and other domains. OpenSSA’s domain-specific fine-tuning capabilities enable you to create AI agents that understand and respond to user queries with greater accuracy and relevance, leading to improved customer satisfaction, reduced response times, and increased efficiency in handling customer inquiries.
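The idea of adding a reasoning step to RAG — retrieve, then check whether the retrieved context is actually sufficient before answering — can be sketched generically. The retrieval scoring and sufficiency check below are toy stand-ins, not OpenSSA's implementation:

```python
# Toy sketch of "RAG with a reasoning step": retrieve, check whether the
# retrieved context is sufficient, and only then synthesize an answer.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by naive keyword overlap with the query.
    terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:top_k]

def context_is_sufficient(query: str, context: list[str]) -> bool:
    # Reasoning step: require every query term to appear somewhere in context.
    terms = set(query.lower().split())
    seen = set(" ".join(context).lower().split())
    return terms <= seen

docs = [
    "etch chamber pressure must stay below 5 mTorr",
    "ald cycle time depends on precursor pulse length",
]
query = "ald cycle time"
context = retrieve(query, docs)
answer = context[0] if context_is_sufficient(query, context) else "ask a follow-up question"
```

A plain RAG pipeline would answer from whatever it retrieved; the sufficiency check is the extra reasoning hop that lets the agent ask a follow-up question instead of answering from inadequate context.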
SSA Architecture¶
OpenSSA Framework Library¶
Enable Efficient Planning and Reasoning for Problem Solving¶
OpenSSA enables you to create AI agents that can effectively plan and reason within specific domains to solve complex problems. By leveraging domain-specific knowledge and fine-tuned models, SSAs break down multi-step problems into manageable tasks, which they solve efficiently and precisely, leading to accurate and timely solutions to real-world challenges.
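The hierarchical breakdown described above can be sketched as a task tree solved leaf-first. The `Task` class here is an illustrative simplification, not OpenSSA's actual class:

```python
# Sketch of hierarchical task planning (HTP): a goal is broken into a tree
# of subtasks that are solved leaf-first. Illustrative only.

class Task:
    def __init__(self, goal, parent=None):
        self.goal = goal
        self.parent = parent
        self.subtasks = []

    def add_subtask(self, goal):
        child = Task(goal, parent=self)
        self.subtasks.append(child)
        return child

    def solve(self, solver):
        # Depth-first: resolve subtasks before the parent goal.
        results = [t.solve(solver) for t in self.subtasks]
        return solver(self.goal, results)

root = Task("restore reactor throughput")
root.add_subtask("identify bottleneck stage")
root.add_subtask("adjust process recipe")

plan = root.solve(lambda goal, subresults: {"goal": goal, "from": subresults})
```

The parent goal is only addressed once its subtasks have produced results, which is what makes multi-step problems tractable: each node sees only its children's answers, never the whole problem at once.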
High-Level Class Diagram¶
Optimize Industrial Field Service Operations¶
Create AI agents that guide field service technicians through complex maintenance and repair procedures. By leveraging domain-specific knowledge and reasoning capabilities, SSAs can provide step-by-step instructions, troubleshoot issues, and optimize resource allocation. This results in reduced downtime, increased first-time fix rates, and improved overall efficiency in industrial field service operations.
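Step-by-step guidance of this kind can be sketched as a symptom-to-checklist lookup. The symptoms, checks, and fixes below are made-up illustrations, not content from any real maintenance manual:

```python
# Toy lookup for guiding a technician through a repair procedure.
# Symptoms, checks, and fixes are invented for illustration.

DECISION_TREE = {
    "pump not starting": [
        ("Is the breaker tripped?", "Reset the breaker and retest"),
        ("Is the control fuse blown?", "Replace the fuse and retest"),
    ],
    "low flow rate": [
        ("Is the inlet strainer clogged?", "Clean the strainer"),
        ("Is the impeller worn?", "Replace the impeller"),
    ],
}

def next_steps(symptom: str) -> list[str]:
    """Return ordered check/fix instructions for a reported symptom."""
    steps = DECISION_TREE.get(symptom.lower(), [])
    return [f"{check} If yes: {fix}." for check, fix in steps]

for instruction in next_steps("Pump not starting"):
    print(instruction)
```

A real SSA would replace the hard-coded table with encoded expert heuristics and reasoning over equipment documents, but the interaction pattern — ordered, conditional checks per symptom — is the same.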
Getting Started with OpenSSA¶
Who Are You?¶
An end-user of OpenSSA-based applications
A developer of applications or services using OpenSSA
An aspiring contributor to OpenSSA
A committer to OpenSSA
Getting Started¶
Install OpenSSA: pip install openssa (Python 3.12)
Explore the examples/ directory
Start building your own Small Specialist Agents
Detailed tutorials and guides are available in our Documentation.
Getting Started as an End-User¶
Go straight to the OpenSSA Streamlit app and start building your own SSA with your domain document today!
Getting Started as a Developer¶
See some example user programs in the examples/notebooks directory. For example, to run the sample use case on ALD semiconductor knowledge, do:
% cd examples/notebooks
Common make targets for OpenSSA developers¶
See the MAKEFILE for more details.
% make clean
% make build
% make rebuild
% make test

% make poetry-init
% make poetry-install
% make install  # local installation of openssa

% make pypi-auth  # only for maintainers
% make publish  # only for maintainers
Getting Started as an Aspiring Contributor¶
OpenSSA is a community-driven initiative, and we warmly welcome contributions. Whether it’s enhancing existing models, creating new SSAs for different industrial domains, or improving our documentation, every contribution counts. See our Contribution Guide for more details.
You can begin contributing to the OpenSSA project in the contrib/ directory.
Getting Started as a Committer¶
You already know what to do.
Community¶
Join our vibrant community of AI enthusiasts, researchers, developers, and businesses who are democratizing industrial AI through SSAs. Participate in the discussions, share your ideas, or ask for help on our Community Discussions.
Contribute¶
OpenSSA is a community-driven initiative, and we warmly welcome contributions. Whether it’s enhancing existing models, creating new SSAs for different industrial domains, or improving our documentation, every contribution counts. See our Contribution Guide for more details.
License¶
OpenSSA is released under the Apache 2.0 License.
Contributing¶
We welcome contributions from the community!
Join the discussion on our Community Forum
Explore the contrib/ directory for ongoing work and open issues
Submit pull requests for bug fixes, enhancements, or new features
For more information, see our Contribution Guide.
API References¶
Note: Lepton API Key¶
Head to Lepton to get your API key.
Go to Settings
Select API tokens
Copy <YOUR_LEPTON_API_TOKEN>
In terminal, run
export LEPTON_API_KEY=<YOUR_LEPTON_API_TOKEN>
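Application code then reads the exported key from the environment. This minimal check is an illustration of that pattern, not OpenSSA's actual configuration logic:

```python
import os

# Read the Lepton API key exported in the shell; fail fast with a clear
# message if it is missing rather than deep inside a network call.
def get_lepton_api_key() -> str:
    key = os.environ.get("LEPTON_API_KEY", "")
    if not key:
        raise RuntimeError("LEPTON_API_KEY is not set; see the steps above")
    return key
```

Failing at startup with a named environment variable is much easier to debug than an authentication error surfacing later from an API client.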
diff --git a/modules.html b/modules.html
index 8e84c24c1..74c642985 100644
--- a/modules.html
+++ b/modules.html
@@ -10,7 +10,7 @@
-
+
@@ -92,7 +92,6 @@ openssa¶
- openssa package
- Subpackages
- openssa.contrib package
+- openssa.core.ooda namespace
+
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
+HeuristicSet
TaskDecompositionHeuristic
@@ -321,10 +387,12 @@ openssa¶
- openssa.core.ooda_rag.notifier module
- openssa.core.ooda_rag.ooda_rag module
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -369,8 +433,14 @@ openssa¶
- openssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
@@ -387,9 +457,19 @@ openssa¶
+- openssa.core.ooda_rag.query_rewritting_engine module
+
- openssa.core.ooda_rag.solver module
+PythonCodeTool
+
ReasearchAgentTool
@@ -408,6 +492,11 @@ openssa¶
ResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -419,6 +508,42 @@ openssa¶
+- openssa.core.rag_ooda namespace
+- Subpackages
+
+- Submodules
+
+
+
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
@@ -462,6 +587,18 @@ openssa¶
- openssa.core.ssa namespace
- Submodules
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -507,8 +644,8 @@ openssa¶
SSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -640,6 +777,7 @@ openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -678,6 +816,7 @@ openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -696,7 +835,6 @@ openssa¶
- openssa.integrations.llama_index.backend module
Backend
@@ -721,6 +859,7 @@ openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -748,6 +887,7 @@ openssa¶
AbstractAPIContext.key
AbstractAPIContext.max_tokens
AbstractAPIContext.model
+AbstractAPIContext.model_computed_fields
AbstractAPIContext.model_config
AbstractAPIContext.model_fields
AbstractAPIContext.temperature
@@ -776,22 +916,24 @@ openssa¶
+- openssa.utils.deprecated namespace
+- Submodules
-- Submodules
-- openssa.utils.aitomatic_llm_config module
+- Submodules
- openssa.utils.config module
Config
+Config.AITOMATIC_API_KEY
+Config.AITOMATIC_API_URL
+Config.AITOMATIC_API_URL_70B
+Config.AITOMATIC_API_URL_7B
+Config.AZURE_API_VERSION
Config.AZURE_GPT3_API_KEY
Config.AZURE_GPT3_API_URL
Config.AZURE_GPT3_API_VERSION
@@ -802,20 +944,26 @@ openssa¶
Config.AZURE_GPT4_API_VERSION
Config.AZURE_GPT4_ENGINE
Config.AZURE_GPT4_MODEL
+Config.AZURE_OPENAI_API_KEY
+Config.AZURE_OPENAI_API_URL
Config.DEBUG
+Config.DEFAULT_TEMPERATURE
Config.FALCON7B_API_KEY
Config.FALCON7B_API_URL
Config.LEPTONAI_API_KEY
Config.LEPTONAI_API_URL
+Config.LEPTON_API_KEY
+Config.LEPTON_API_URL
Config.OPENAI_API_KEY
Config.OPENAI_API_URL
+Config.US_AZURE_OPENAI_API_BASE
+Config.US_AZURE_OPENAI_API_KEY
Config.setenv()
- openssa.utils.fs module
-DirOrFilePath
FileSource
FileSource.file_paths()
FileSource.fs
@@ -829,60 +977,38 @@ openssa¶
-- openssa.utils.llm_config module
-AitomaticBaseURL
-
-LLMConfig
-LLMConfig.get_aito_embeddings()
-LLMConfig.get_aitomatic_13b()
-LLMConfig.get_aitomatic_yi_34b()
-LLMConfig.get_azure_embed_model()
-LLMConfig.get_azure_jp_api_key()
-LLMConfig.get_default_embed_model()
-LLMConfig.get_intel_neural_chat_7b()
-LLMConfig.get_llama_2_api_key()
-LLMConfig.get_llm()
-LLMConfig.get_llm_azure_jp_35_16k()
-LLMConfig.get_llm_azure_jp_4_32k()
-LLMConfig.get_llm_llama_2_70b()
-LLMConfig.get_llm_llama_2_7b()
-LLMConfig.get_llm_openai_35_turbo()
-LLMConfig.get_llm_openai_35_turbo_0613()
-LLMConfig.get_llm_openai_35_turbo_1106()
-LLMConfig.get_llm_openai_4()
-LLMConfig.get_openai_api_key()
-LLMConfig.get_openai_embed_model()
-LLMConfig.get_service_context_azure_gpt4()
-LLMConfig.get_service_context_azure_gpt4_32k()
-LLMConfig.get_service_context_azure_jp_35()
-LLMConfig.get_service_context_azure_jp_35_16k()
-LLMConfig.get_service_context_llama_2_70b()
-LLMConfig.get_service_context_llama_2_7b()
-LLMConfig.get_service_context_openai_35_turbo()
-LLMConfig.get_service_context_openai_35_turbo_1106()
-
-
-LlmBaseModel
-
-LlmModelSize
-LlmModelSize.gpt35
-LlmModelSize.gpt4
-LlmModelSize.llama2_13b
-LlmModelSize.llama2_70b
-LlmModelSize.llama2_7b
-LlmModelSize.neutral_chat_7b
-LlmModelSize.yi_34
+- openssa.utils.llms module
+AitomaticLLM
+
+AnLLM
+
+AzureLLM
+
+OpenAILLM
@@ -897,6 +1023,30 @@ openssa¶
+- openssa.utils.rag_service_contexts module
+ServiceContextManager
+ServiceContextManager.get_aitomatic_sc()
+ServiceContextManager.get_azure_jp_openai_35_turbo_sc()
+ServiceContextManager.get_azure_openai_4_0125_preview_sc()
+ServiceContextManager.get_azure_openai_sc()
+ServiceContextManager.get_openai_35_turbo_sc()
+ServiceContextManager.get_openai_4_0125_preview_sc()
+ServiceContextManager.get_openai_sc()
+
+
+
+
+- openssa.utils.usage_logger module
+
- openssa.utils.utils module
Utils
Utils.canonicalize_discuss_result()
@@ -931,7 +1081,7 @@ openssa¶
diff --git a/openssa.contrib.html b/openssa.contrib.html
index b2d26af29..edd6172b0 100644
--- a/openssa.contrib.html
+++ b/openssa.contrib.html
@@ -10,7 +10,7 @@
-
+
@@ -93,12 +93,6 @@
Candidate implementations of integrations
Reusable application components and/or templates (e.g., Gradio, Streamlit, etc.)
-
--
-openssa.contrib.StreamlitSSAProbSolver¶
-alias of SSAProbSolver
-
-
Subpackages¶
@@ -124,6 +118,7 @@ SubpackagesSSAProbSolver.ssa_solve()
+
update_multiselect_style()
@@ -136,7 +131,7 @@ Subpackages
diff --git a/openssa.contrib.streamlit_ssa_prob_solver.html b/openssa.contrib.streamlit_ssa_prob_solver.html
index a5c5913be..7a2fa56a6 100644
--- a/openssa.contrib.streamlit_ssa_prob_solver.html
+++ b/openssa.contrib.streamlit_ssa_prob_solver.html
@@ -10,7 +10,7 @@
-
+
@@ -90,7 +90,7 @@
SSA Problem-Solver Streamlit Component.
-
-class openssa.contrib.streamlit_ssa_prob_solver.SSAProbSolver(unique_name: int | str | UUID, domain: str = '', problem: str = '', expert_instructions: str = '', fine_tuned_model_url: str = '', doc_src_path: str = '', doc_src_file_relpaths: frozenset[str] = frozenset({}))¶
+class openssa.contrib.streamlit_ssa_prob_solver.SSAProbSolver(unique_name: Uid, domain: str = '', problem: str = '', expert_instructions: str = '', fine_tuned_model_url: str = '', doc_src_path: DirOrFilePath = '', doc_src_file_relpaths: FilePathSet = frozenset({}))¶
Bases: object
SSA Problem-Solver Streamlit Component.
@@ -105,7 +105,7 @@
+
+-
+openssa.contrib.streamlit_ssa_prob_solver.update_multiselect_style()¶
+
+
@@ -188,7 +193,7 @@
diff --git a/openssa.core.adapter.abstract_adapter.html b/openssa.core.adapter.abstract_adapter.html
index 74925d6da..a368babb8 100644
--- a/openssa.core.adapter.abstract_adapter.html
+++ b/openssa.core.adapter.abstract_adapter.html
@@ -10,7 +10,7 @@
-
+
@@ -177,7 +177,7 @@
diff --git a/openssa.core.adapter.base_adapter.html b/openssa.core.adapter.base_adapter.html
index 04ae4d896..611aedf86 100644
--- a/openssa.core.adapter.base_adapter.html
+++ b/openssa.core.adapter.base_adapter.html
@@ -10,7 +10,7 @@
-
+
@@ -191,7 +191,7 @@
diff --git a/openssa.core.adapter.html b/openssa.core.adapter.html
index ef92a442a..8e782716f 100644
--- a/openssa.core.adapter.html
+++ b/openssa.core.adapter.html
@@ -10,7 +10,7 @@
-
+
@@ -140,7 +140,7 @@ Submodules
diff --git a/openssa.core.backend.abstract_backend.html b/openssa.core.backend.abstract_backend.html
index 74848d0ff..2a2d9442d 100644
--- a/openssa.core.backend.abstract_backend.html
+++ b/openssa.core.backend.abstract_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -181,7 +181,7 @@
diff --git a/openssa.core.backend.base_backend.html b/openssa.core.backend.base_backend.html
index a9d1a7c70..b80ecbc99 100644
--- a/openssa.core.backend.base_backend.html
+++ b/openssa.core.backend.base_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -182,7 +182,7 @@
diff --git a/openssa.core.backend.html b/openssa.core.backend.html
index d503052e8..7853e03b0 100644
--- a/openssa.core.backend.html
+++ b/openssa.core.backend.html
@@ -10,7 +10,7 @@
-
+
@@ -163,7 +163,7 @@ Submodules
diff --git a/openssa.core.backend.rag_backend.html b/openssa.core.backend.rag_backend.html
index 7aeca8197..138c27e0d 100644
--- a/openssa.core.backend.rag_backend.html
+++ b/openssa.core.backend.rag_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -162,7 +162,7 @@
diff --git a/openssa.core.backend.text_backend.html b/openssa.core.backend.text_backend.html
index a48364e85..0d35d99be 100644
--- a/openssa.core.backend.text_backend.html
+++ b/openssa.core.backend.text_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -131,7 +131,7 @@
diff --git a/openssa.core.html b/openssa.core.html
index ed1d13f01..a8bb36357 100644
--- a/openssa.core.html
+++ b/openssa.core.html
@@ -10,7 +10,7 @@
-
+
@@ -226,23 +226,113 @@ Subpackagesopenssa.core.ooda namespace
+
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
-AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+
GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+
TaskAgent
@@ -281,6 +371,7 @@ SubpackagesHeuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -289,10 +380,12 @@ Subpackagesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -308,6 +401,7 @@ Subpackagesopenssa.core.ooda_rag.ooda_rag module
Executor
@@ -317,11 +411,6 @@ SubpackagesHistory.get_history()
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -337,8 +426,14 @@ Subpackagesopenssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
@@ -355,9 +450,19 @@ Subpackagesopenssa.core.ooda_rag.query_rewritting_engine module
+
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -368,6 +473,10 @@ SubpackagesAskUserTool.execute()
+PythonCodeTool
+
ReasearchAgentTool
@@ -376,6 +485,11 @@ SubpackagesResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -387,6 +501,73 @@ Subpackagesopenssa.core.rag_ooda namespace
+
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
@@ -450,6 +631,18 @@ Subpackagesopenssa.core.ssa namespace
- Submodules
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -495,8 +688,8 @@ SubpackagesSSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -629,7 +822,7 @@ Submodules
diff --git a/openssa.core.inferencer.abstract_inferencer.html b/openssa.core.inferencer.abstract_inferencer.html
index 14a982a56..54b60a35c 100644
--- a/openssa.core.inferencer.abstract_inferencer.html
+++ b/openssa.core.inferencer.abstract_inferencer.html
@@ -10,7 +10,7 @@
-
+
@@ -117,7 +117,7 @@
diff --git a/openssa.core.inferencer.base_inferencer.html b/openssa.core.inferencer.base_inferencer.html
index 6d561ea4c..8b7ba62b8 100644
--- a/openssa.core.inferencer.base_inferencer.html
+++ b/openssa.core.inferencer.base_inferencer.html
@@ -10,7 +10,7 @@
-
+
@@ -112,7 +112,7 @@
diff --git a/openssa.core.inferencer.html b/openssa.core.inferencer.html
index 584014ef2..bc8c7c5a2 100644
--- a/openssa.core.inferencer.html
+++ b/openssa.core.inferencer.html
@@ -10,7 +10,7 @@
-
+
@@ -117,7 +117,7 @@ Submodules
diff --git a/openssa.core.ooda.deprecated.html b/openssa.core.ooda.deprecated.html
new file mode 100644
index 000000000..12e7eb5f4
--- /dev/null
+++ b/openssa.core.ooda.deprecated.html
@@ -0,0 +1,138 @@
+
+
+
+
+
+
+ openssa.core.ooda.deprecated namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.deprecated namespace¶
+
+Submodules¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.deprecated.solver.html b/openssa.core.ooda.deprecated.solver.html
new file mode 100644
index 000000000..b9c212b8f
--- /dev/null
+++ b/openssa.core.ooda.deprecated.solver.html
@@ -0,0 +1,181 @@
+
+
+
+
+
+
+ openssa.core.ooda.deprecated.solver module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.deprecated.solver module¶
+
+-
+class openssa.core.ooda.deprecated.solver.History¶
+Bases: object
+
+-
+get_findings(step_name)¶
+
+
+
+-
+update(step_name, findings)¶
+
+
+
+
+
+-
+class openssa.core.ooda.deprecated.solver.LLM¶
+Bases: object
+
+-
+get_response(prompt, history)¶
+
+
+
+
+
+-
+class openssa.core.ooda.deprecated.solver.Solver(tools, heuristics, llm)¶
+Bases: object
+
+-
+act(decision, heuristic)¶
+
+
+
+-
+decide(orientation, heuristic)¶
+
+
+
+
+
+-
+orient(observation)¶
+
+
+
+-
+run_ooda_loop(task, heuristic)¶
+
+
+
+-
+select_optimal_heuristic(task)¶
+
+
+
+
+
+-
+subtask(task, heuristic)¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.heuristic.html b/openssa.core.ooda.heuristic.html
new file mode 100644
index 000000000..1a78d41c3
--- /dev/null
+++ b/openssa.core.ooda.heuristic.html
@@ -0,0 +1,124 @@
+
+
+
+
+
+
+ openssa.core.ooda.heuristic module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.html b/openssa.core.ooda.html
new file mode 100644
index 000000000..082672823
--- /dev/null
+++ b/openssa.core.ooda.html
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+ openssa.core.ooda namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda namespace¶
+
+Subpackages¶
+
+
+
+Submodules¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.ooda_loop.html b/openssa.core.ooda.ooda_loop.html
new file mode 100644
index 000000000..37447df18
--- /dev/null
+++ b/openssa.core.ooda.ooda_loop.html
@@ -0,0 +1,148 @@
+
+
+
+
+
+
+ openssa.core.ooda.ooda_loop module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.ooda_loop module¶
+
+-
+class openssa.core.ooda.ooda_loop.OODALoop(objective)¶
+Bases: object
+
+-
+class Step(name, prompt_function)¶
+Bases: object
+Represents a step in the OODA loop.
+
+- Attributes
name (str): The name of the step.
+prompt_function (function): The function used to generate the prompt for the step.
+input_data: The input data for the step.
+output_data: The output data generated by the step.
+
+
+
+-
+execute(objective, llm, history)¶
+Executes the step by generating a prompt using the prompt function,
+getting a response from the LLM, and storing the output data.
+
+- Args:
objective: The overall objective of the OODA loop.
+llm: The LLM (Language Learning Model) used to get the response.
+history: The history of previous prompts and responses.
+
+- Returns:
The output data generated by the step.
+
+
+
+
+
+
+
+-
+run(llm, history)¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.task.html b/openssa.core.ooda.task.html
new file mode 100644
index 000000000..154cbd9aa
--- /dev/null
+++ b/openssa.core.ooda.task.html
@@ -0,0 +1,169 @@
+
+
+
+
+
+
+ openssa.core.ooda.task module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.task module¶
+
+-
+class openssa.core.ooda.task.Task(goal, parent=None)¶
+Bases: object
+Represents a task in the OODA (Observe, Orient, Decide, Act) loop.
+
+- Attributes
goal: The goal of the task.
+subtasks: A list of subtasks associated with the task.
+parent: The parent task of the current task.
+ooda_loop: The OODA loop to which the task belongs.
+result: The result of the task.
+resources: Additional resources associated with the task.
+
+
+
+-
+class Result(status='pending', response=None, references=None, metrics=None, additional_info=None)¶
+Bases: object
+Represents the result of a task.
+
+- Attributes
status: The status of the task result.
+response: The response generated by the task.
+references: A list of references related to the task.
+metrics: Metrics associated with the task.
+additional_info: Additional information about the task result.
+
+
+
+
+
+-
+add_subtask(subtask)¶
+
+
+
+-
+has_ooda_loop()¶
+
+
+
+-
+has_subtasks()¶
+
+
+
+-
+property ooda_loop¶
+
+
+
+-
+property result¶
+
+
+
+-
+property status¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda_rag.builtin_agents.html b/openssa.core.ooda_rag.builtin_agents.html
index 87f1cdbf7..da0950548 100644
--- a/openssa.core.ooda_rag.builtin_agents.html
+++ b/openssa.core.ooda_rag.builtin_agents.html
@@ -10,7 +10,7 @@
-
+
@@ -88,34 +88,52 @@
openssa.core.ooda_rag.builtin_agents module¶
--
-class openssa.core.ooda_rag.builtin_agents.AgentRole¶
-Bases: object
-
--
-ASSISTANT = 'assistant'¶
-
-
-
--
-SYSTEM = 'system'¶
-
-
-
--
-USER = 'user'¶
-
+-
+class openssa.core.ooda_rag.builtin_agents.AnswerValidator(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, answer: str = '')¶
+Bases: TaskAgent
+AnswerValidator helps to determine whether the answer is complete
+
+-
+execute(task: str = '') bool ¶
+Execute the task agent with the given task.
+
-
-class openssa.core.ooda_rag.builtin_agents.AskUserAgent(llm: ~openai.OpenAI = <openai.OpenAI object>, model: str = 'aitomatic-model', ask_user_heuristic: str = '', conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.builtin_agents.AskUserAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, ask_user_heuristic: str = '', conversation: ~typing.List | None = None)¶
Bases: TaskAgent
AskUserAgent helps to determine if user wants to provide additional information
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.CommAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, instruction: str = '')¶
+Bases: TaskAgent
+CommAgent helps update tone, voice, format and language of the assistant final response
+
+-
+execute(task: str = '') str ¶
+Execute the task agent with the given task.
+
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.ContextValidator(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None, context: list | None = None)¶
+Bases: TaskAgent
+ContentValidatingAgent helps to determine whether the content is sufficient to answer the question
+
+-
+execute(task: str = '') dict ¶
Execute the task agent with the given task.
@@ -123,7 +141,7 @@
-
-class openssa.core.ooda_rag.builtin_agents.GoalAgent(llm: ~openai.OpenAI = <openai.OpenAI object>, model: str = 'aitomatic-model', conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.builtin_agents.GoalAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None)¶
Bases: TaskAgent
GoalAgent helps to determine problem statement from the conversation between user and SSA
@@ -134,6 +152,53 @@
+
+-
+class openssa.core.ooda_rag.builtin_agents.OODAPlanAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None)¶
+Bases: TaskAgent
+OODAPlanAgent helps to determine the OODA plan from the problem statement
+
+-
+execute(task: str = '') dict ¶
+Execute the task agent with the given task.
+
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.Persona¶
+Bases: object
+
+-
+ASSISTANT = 'assistant'¶
+
+
+
+-
+SYSTEM = 'system'¶
+
+
+
+-
+USER = 'user'¶
+
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.SynthesizingAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None, context: list | None = None)¶
+Bases: TaskAgent
+SynthesizeAgent helps to synthesize answer
+
+-
+execute(task: str = '') dict ¶
+Execute the task agent with the given task.
+
+
+
+
-
class openssa.core.ooda_rag.builtin_agents.TaskAgent¶
@@ -154,7 +219,7 @@
diff --git a/openssa.core.ooda_rag.custom.html b/openssa.core.ooda_rag.custom.html
index 2f097be98..bbbc9e07f 100644
--- a/openssa.core.ooda_rag.custom.html
+++ b/openssa.core.ooda_rag.custom.html
@@ -10,7 +10,7 @@
-
+
@@ -89,7 +89,7 @@
openssa.core.ooda_rag.custom module¶
-
-class openssa.core.ooda_rag.custom.CustomBackend(rag_llm: LLM = None, service_context=None)¶
+class openssa.core.ooda_rag.custom.CustomBackend(service_context=None)¶
Bases: Backend
-
@@ -119,7 +119,8 @@
-
query(query: str, source_path: str = '') dict ¶
-Returns a response dict with keys role, content, and citations.
+Query the index with the user input.
+Returns a tuple comprising (a) the response dicts and (b) the response object, if any.
@@ -131,7 +132,7 @@
-
-class openssa.core.ooda_rag.custom.CustomSSM(custom_rag_backend: ~openssa.core.backend.abstract_backend.AbstractBackend = None, s3_source_path: str = '', llm: ~llama_index.llms.llm.LLM = OpenAI(callback_manager=<llama_index.callbacks.base.CallbackManager object>, system_prompt=None, messages_to_prompt=<function messages_to_prompt>, completion_to_prompt=<function default_completion_to_prompt>, output_parser=None, pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'>, query_wrapper_prompt=None, model='llama2-70b', temperature=0.1, max_tokens=None, additional_kwargs={}, max_retries=3, timeout=60.0, default_headers=None, reuse_client=True, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-70b.lepton.run/api/v1', api_version=''), embed_model: ~llama_index.embeddings.openai.OpenAIEmbedding = OpenAIEmbedding(model_name='text-embedding-ada-002', embed_batch_size=10, callback_manager=<llama_index.callbacks.base.CallbackManager object>, additional_kwargs={}, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-7b.lepton.run/api/v1', api_version='', max_retries=10, timeout=60.0, default_headers=None, reuse_client=True))¶
+class openssa.core.ooda_rag.custom.CustomSSM(custom_rag_backend: AbstractBackend = None, s3_source_path: str = '')¶
Bases: RAGSSM
-
@@ -158,7 +159,7 @@
diff --git a/openssa.core.ooda_rag.heuristic.html b/openssa.core.ooda_rag.heuristic.html
index 0504e988b..fc70b4983 100644
--- a/openssa.core.ooda_rag.heuristic.html
+++ b/openssa.core.ooda_rag.heuristic.html
@@ -10,7 +10,7 @@
-
+
@@ -124,6 +124,13 @@
+
+-
+class openssa.core.ooda_rag.heuristic.HeuristicSet(**kwargs)¶
+Bases: object
+A set of heuristics.
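The hunk above adds a `HeuristicSet` container constructed purely from keyword arguments. A minimal stand-in sketch of how such a `**kwargs`-based container might behave — the attribute names `task_heuristics` and `ooda_heuristics` are taken from the surrounding `Solver` signature, but the fallback behavior here is an assumption, not the library's code:

```python
class HeuristicSet:
    """Sketch: bundle related heuristics, configured via keyword arguments."""

    def __init__(self, **kwargs):
        # Default the two groups the OODA solver signature refers to,
        # then expose every keyword argument as an attribute.
        self.task_heuristics = kwargs.get("task_heuristics")
        self.ooda_heuristics = kwargs.get("ooda_heuristics")
        for name, value in kwargs.items():
            setattr(self, name, value)


hs = HeuristicSet(task_heuristics=["decompose"], ooda_heuristics=["observe-first"])
print(hs.task_heuristics)  # ['decompose']
```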
+
+
-
class openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic(heuristic_rules: dict[str, list[str]])¶
@@ -144,7 +151,7 @@
diff --git a/openssa.core.ooda_rag.html b/openssa.core.ooda_rag.html
index dd7287b29..0897e1382 100644
--- a/openssa.core.ooda_rag.html
+++ b/openssa.core.ooda_rag.html
@@ -10,7 +10,7 @@
-
+
@@ -92,20 +92,40 @@ Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
-AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+
GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+
TaskAgent
@@ -144,6 +164,7 @@ SubmodulesHeuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -152,10 +173,12 @@ Submodulesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -171,6 +194,7 @@ Submodulesopenssa.core.ooda_rag.ooda_rag module
Executor
@@ -180,11 +204,6 @@ SubmodulesHistory.get_history()
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -200,8 +219,14 @@ Submodulesopenssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -231,6 +266,10 @@ SubmodulesAskUserTool.execute()
+PythonCodeTool
+
ReasearchAgentTool
@@ -239,6 +278,11 @@ SubmodulesResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -256,7 +300,7 @@ Submodules
diff --git a/openssa.core.ooda_rag.notifier.html b/openssa.core.ooda_rag.notifier.html
index 2cb0d2f6f..32a4cb52e 100644
--- a/openssa.core.ooda_rag.notifier.html
+++ b/openssa.core.ooda_rag.notifier.html
@@ -10,7 +10,7 @@
-
+
@@ -92,13 +92,13 @@
class openssa.core.ooda_rag.notifier.EventTypes¶
Bases: object
@@ -111,6 +111,16 @@
SUBTASK = 'ooda-subtask'¶
+
+-
+SUBTASK_BEGIN = 'ooda-subtask-begin'¶
+
+
+
+-
+SWICTH_MODE = 'switch_mode'¶
+
+
-
TASK_RESULT = 'task_result'¶
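The `EventTypes` constants above (including the newly added `SUBTASK_BEGIN` and `SWICTH_MODE`, spelled as documented) are string event names that a notifier dispatches on. A small self-contained sketch of that pattern — the `SimpleNotifier` body here is an illustrative stand-in, only the constant values come from the documentation:

```python
class EventTypes:
    """Event-name constants matching the documented values."""
    SUBTASK = "ooda-subtask"
    SUBTASK_BEGIN = "ooda-subtask-begin"
    SWICTH_MODE = "switch_mode"  # identifier spelled as in the source
    TASK_RESULT = "task_result"


class SimpleNotifier:
    """Sketch of a notifier: collect (event, payload) pairs for inspection."""

    def __init__(self):
        self.events = []

    def notify(self, event_type: str, payload: dict) -> None:
        self.events.append((event_type, payload))


notifier = SimpleNotifier()
notifier.notify(EventTypes.SUBTASK_BEGIN, {"subtask": "check pressure"})
notifier.notify(EventTypes.TASK_RESULT, {"result": "ok"})
print([event for event, _ in notifier.events])  # ['ooda-subtask-begin', 'task_result']
```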
@@ -149,7 +159,7 @@
diff --git a/openssa.core.ooda_rag.ooda_rag.html b/openssa.core.ooda_rag.ooda_rag.html
index e48bbb0a8..7eb3cbf47 100644
--- a/openssa.core.ooda_rag.ooda_rag.html
+++ b/openssa.core.ooda_rag.ooda_rag.html
@@ -10,7 +10,7 @@
-
+
@@ -91,6 +91,11 @@
-
class openssa.core.ooda_rag.ooda_rag.Executor(task: str, tools: dict[str, Tool], ooda_heuristics: Heuristic, notifier: Notifier, is_main_task: bool = False)¶
Bases: object
+
+-
+check_resource_call(ooda_plan: dict) None ¶
+
+
-
execute_task(history: History) None ¶
@@ -104,12 +109,12 @@
Bases: object
-
-add_message(message: str, role: str) None ¶
+add_message(message: str, role: str, verbose: bool = True) None ¶
@@ -119,22 +124,6 @@
-
--
-class openssa.core.ooda_rag.ooda_rag.Model(llm, model)¶
-Bases: object
-
-
-
--
-parse_output(output: str) dict ¶
-
-
-
-
-
class openssa.core.ooda_rag.ooda_rag.Planner(heuristics: Heuristic, prompts: OODAPrompts, max_subtasks: int = 3, enable_generative: bool = False)¶
@@ -142,33 +131,33 @@
The Planner class is responsible for decomposing the task into subtasks.
-
-decompose_task(model: Model, task: str, history: History) list[str] ¶
+decompose_task(model: AnLLM, task: str, history: History) list[str] ¶
-
-class openssa.core.ooda_rag.ooda_rag.Solver(task_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic object>, ooda_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.DefaultOODAHeuristic object>, notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>, prompts: ~openssa.core.ooda_rag.prompts.OODAPrompts = <openssa.core.ooda_rag.prompts.OODAPrompts object>, llm=None, model: str = 'llama2', highest_priority_heuristic: str = '', enable_generative: bool = False, conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.ooda_rag.Solver(heuristic_set: ~openssa.core.ooda_rag.heuristic.HeuristicSet = <openssa.core.ooda_rag.heuristic.HeuristicSet object>, notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>, prompts: ~openssa.core.ooda_rag.prompts.OODAPrompts = <openssa.core.ooda_rag.prompts.OODAPrompts object>, llm=<openssa.utils.llms.OpenAILLM object>, enable_generative: bool = False, conversation: ~typing.List | None = None)¶
Bases: object
-
-run(input_message: str, tools: dict) str ¶
-Run the solver on input_message
+run(problem_statement: str, tools: dict) str ¶
+Run the solver on problem_statement
- Parameters:
-input_message – the input to the solver
+problem_statement – the input to the solver
tools – the tools to use in the solver
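The rename above fixes the calling convention to `run(problem_statement, tools)`. A toy stand-in showing that convention — the real `Solver` performs OODA-based task decomposition, whereas this sketch only demonstrates the argument shape (tool names and dispatch logic here are hypothetical):

```python
class ToySolver:
    """Illustrative stand-in for Solver.run(problem_statement, tools)."""

    def run(self, problem_statement: str, tools: dict) -> str:
        # Dispatch to the first tool whose name appears in the problem
        # statement; fall back to a default answer otherwise.
        for name, tool in tools.items():
            if name in problem_statement:
                return tool(problem_statement)
        return "no applicable tool"


tools = {"research": lambda task: f"researched: {task}"}
solver = ToySolver()
print(solver.run("please research boiler pressure", tools))
```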
@@ -189,7 +178,7 @@
diff --git a/openssa.core.ooda_rag.prompts.html b/openssa.core.ooda_rag.prompts.html
index d0e65ca88..6066e4601 100644
--- a/openssa.core.ooda_rag.prompts.html
+++ b/openssa.core.ooda_rag.prompts.html
@@ -10,7 +10,7 @@
-
+
@@ -91,14 +91,44 @@
-
class openssa.core.ooda_rag.prompts.BuiltInAgentPrompt¶
Bases: object
+
+-
+ANSWER_VALIDATION = "Your role is to act as an expert in reasoning and contextual analysis. You need to evaluate if the provided answer effectively and clearly addresses the query. Respond with 'yes' if the answer is clear and confident, and 'no' if it is not. Here are some examples to guide you: \n\nExample 1:\nQuery: Can I print a part 50 cm long with this machine?\nAnswer: Given the information and the lack of detailed specifications, it is not possible to determine if the machine can print a part 50 cm long.\nEvaluation: no\n\nExample 2:\nQuery: Can I print a part 50 cm long with this machine?\nAnswer: No, it is not possible to print a part 50 cm long with this machine.\nEvaluation: yes\n\nExample 3:\nQuery: How to go to the moon?\nAnswer: I'm sorry, but based on the given context information, there is no information provided on how to go to the moon.\nEvaluation: no\n\n"¶
+
+
-
ASK_USER = 'Your task is to assist an AI assistant in formulating a question for the user. This should be based on the ongoing conversation, the presented problem statement, and a specific heuristic guideline. The assistant should formulate the question strictly based on the heuristic. If the heuristic does not apply or is irrelevant to the problem statement, return empty string for the question. Below is the heuristic guideline:\n###{heuristic}###\n\nHere is the problem statement or the user\'s current question:\n###{problem_statement}###\n\nOutput the response in JSON format with the keyword "question".'¶
+
+-
+ASK_USER_OODA = 'Your task is to assist an AI assistant in formulating a question for the user. This is done through using OODA reasoning. This should be based on the ongoing conversation, the presented problem statement, and a specific heuristic guideline. The assistant should formulate the question strictly based on the heuristic. If the heuristic does not apply or is irrelevant to the problem statement, return empty string for the question. Output the response of ooda reasoning in JSON format with the keyword "observe", "orient", "decide", "act". Example output key value:\n\n "observe": "Here, articulate your initial assessment of the task, capturing essential details and contextual elements.",\n "orient": "In this phase, analyze and synthesize the gathered information, considering different angles and strategies.",\n "decide": "Now, determine the most suitable action based on your observations and analysis.",\n "act": "The question to ask the user is here."\n \n\nBelow is the heuristic guideline:\n###{heuristic}###\n\nHere is the problem statement or the user\'s current question:\n###{problem_statement}###\n\nOutput the JSON only. Think step by step.'¶
+
+
+
+-
+COMMUNICATION = 'You are an expert in communication. Your will help to format following message with this instruction:\n###{instruction}###\n\nHere is the message:\n###{message}###\n\n'¶
+
+
+
+-
+CONTENT_VALIDATION = 'You are tasked as an expert in reasoning and contextual analysis. Your role is to evaluate whether the provided context and past conversation contain enough information to accurately respond to a given query.\n\nPlease analyze the past conversation and the following context. Then, determine if the information is sufficient to form an accurate answer. Respond only in JSON format with the keyword \'is_sufficient\'. This should be a boolean value: True if the information is adequate, and False if it is not.\n\nYour response should be in the following format:\n{{\n "is_sufficient": [True/False]\n}}\n\nDo not include any additional commentary. Focus solely on evaluating the sufficiency of the provided context and conversation.\n\nContext:\n========\n{context}\n========\n\nQuery:\n{query}\n'¶
+
+
+
+-
+GENERATE_OODA_PLAN = "As a specialist in problem-solving, your task is to utilize the OODA loop as a cognitive framework for addressing various tasks, which could include questions, commands, or messages. You have at your disposal a range of tools to aid in resolving these issues. Your responses should be methodically structured according to the OODA loop, formatted as a JSON dictionary. Each dictionary key represents one of the OODA loop's four stages: Observe, Orient, Decide, and Act. Within each stage, detail your analytical process and, when relevant, specify the execution of tools, including their names and parameters. Only output the JSON and nothing else. The proposed output format is as follows: \n{\n 'observe': {\n 'thought': 'Here, articulate your initial assessment of the task, capturing essential details and contextual elements.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // List tools and their parameters, if any are used in this stage.\n },\n 'orient': {\n 'thought': 'In this phase, analyze and synthesize the gathered information, considering different angles and strategies.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // Include any tools that aid in this analytical phase.\n },\n 'decide': {\n 'thought': 'Now, determine the most suitable action based on your observations and analysis.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // Specify tools that assist in making this decision, if applicable.\n },\n 'act': {\n 'thought': 'Finally, outline the implementation steps based on your decision, including any practical actions or responses.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // List any tools used in the implementation of the decision.\n }\n}"¶
+
+
-
-PROBLEM_STATEMENT = 'You are tasked with identifying the problem statement from a conversation between a user and an AI chatbot. Your focus should be on the entire context of the conversation, especially the most recent message from the user, to understand the issue comprehensively. Extract specific details that define the current concern or question posed by the user, which the assistant is expected to address. The problem statement should be concise, clear, and presented as a question, command, or task, reflecting the conversation\'s context and in the user\'s voice. In cases where the conversation is ambiguous return empty value for problem statement. Output the response in JSON format with the keyword "problem statement".\nExample 1:\nAssistant: Hello, what can I help you with today?\nUser: My boiler is not functioning, please help to troubleshoot.\nAssistant: Can you check and provide the temperature, pressure, and on-off status?\nUser: The temperature is 120°C.\n\nResponse:\n{\n "problem statement": "Can you help to troubleshoot a non-functioning boiler, given the temperature is 120°C?"\n}\n\nExample 2:\nAssistant: Hi, what can I help you with?\nUser: I don\'t know how to go to the airport\nAssistant: Where are you and which airport do you want to go to?\nUser: I\'m in New York\nResponse:\n{\n "problem statement": "How do I get to the airport from my current location in New York?"\n}\n\nExample 3 (Ambiguity):\nAssistant: How can I assist you today?\nUser: I\'m not sure what\'s wrong, but my computer is acting weird.\nAssistant: Can you describe the issues you are experiencing?\nUser: Hey I am good, the sky is blue.\n\nResponse:\n{\n "problem statement": ""\n}\n\nExample 4 (Multiple Issues):\nAssistant: What do you need help with?\nUser: My internet is down, and I can\'t access my email either.\nAssistant: Are both issues related, or did they start separately?\nUser: They started at the same time, I think.\n\nResponse:\n{\n "problem statement": "Can you help with my internet being down and also accessing my email?"\n}'¶
+PROBLEM_STATEMENT = 'You are tasked with constructing the problem statement from a conversation between a user and an AI chatbot. Your focus should be on the entire context of the conversation, especially the most recent messages from the user, to understand the issue comprehensively. Extract specific details that define the current concerns or questions posed by the user, which the assistant is expected to address. The problem statement should be clear, and constructed carefully with complete context and in the user\'s voice. Output the response in JSON format with the keyword "problem statement". Think step by step.\nExample 1:\nAssistant: Hello, what can I help you with today?\nUser: My boiler is not functioning, please help to troubleshoot.\nAssistant: Can you check and provide the temperature, pressure, and on-off status?\nUser: The temperature is 120°C.\n\nResponse:\n{\n "problem statement": "Can you help to troubleshoot a non-functioning boiler, given the temperature is 120°C?"\n}\n\nExample 2:\nAssistant: Hi, what can I help you with?\nUser: I don\'t know how to go to the airport\nAssistant: Where are you and which airport do you want to go to?\nUser: I\'m in New York\nResponse:\n{\n "problem statement": "How do I get to the airport from my current location in New York?"\n}\n\nExample 3 (Ambiguity):\nAssistant: How can I assist you today?\nUser: I\'m not sure what\'s wrong, but my computer is acting weird.\nAssistant: Can you describe the issues you are experiencing?\nUser: Hey I am good, the sky is blue.\n\nResponse:\n{\n "problem statement": ""\n}\n\nExample 4 (Multiple Issues):\nAssistant: What do you need help with?\nUser: My internet is down, and I can\'t access my email either.\nAssistant: Are both issues related, or did they start separately?\nUser: They started at the same time, I think.\n\nResponse:\n{\n "problem statement": "Can you help with my internet being down and also accessing my email?"\n}'¶
+
+
+
+-
+SYNTHESIZE_RESULT = 'As an expert in problem-solving and contextual analysis, you are to synthesize an answer for a given query. This task requires you to use only the information provided in the previous conversation and the context given below. Your answer should exclusively rely on this information as the base knowledge.\n\nYour response must be in JSON format, using the keyword \'answer\'. The format should strictly adhere to the following structure:\n{{\n "answer": "Your synthesized answer here"\n}}\n\nPlease refrain from including any additional commentary or information outside of the specified context and past conversation.\n\nContext:\n========\n{context}\n========\n\nQuery:\n{query}\n'¶
@@ -124,7 +154,7 @@
-
-DECOMPOSE_INTO_SUBTASKS = 'Given the tools available, if the task cannot be completed directly with the current tools and resources, break it down into maximum 3 smaller subtasks that can be directly addressed in order. If it does not need to be broken down, return an empty list of subtasks. Return a JSON dictionary {"subtasks": ["subtask 1", "subtask 2", ...]} each subtask should be a sentence or question not a function call.'¶
+DECOMPOSE_INTO_SUBTASKS = 'Given the tools available, if the task cannot be completed directly with the current tools and resources, break it down into maximum 2 smaller subtasks that can be directly addressed in order. If it does not need to be broken down, return an empty list of subtasks. Return a JSON dictionary {"subtasks": ["subtask 1", "subtask 2", ...]} each subtask should be a sentence or command or question not a function call. Return json only, nothing else. Think step by step.'¶
@@ -149,7 +179,7 @@
-
-SYNTHESIZE_RESULT = "As an expert in reasoning, you are examining a dialogue involving a user, an assistant, and a system. Your task is to synthesize the final answer to the user's initial question based on this conversation. This is the concluding instruction and must be followed with precision. You will derive the final response by critically analyzing all the messages in the conversation and performing any necessary calculations. Be aware that some contributions from the assistant may not be relevant or could be misleading due to being based on incomplete information. {heuristic} Exercise discernment in selecting the appropriate messages to construct a logical and step-by-step reasoning process."¶
+SYNTHESIZE_RESULT = "As an expert in reasoning, you are examining a dialogue involving a user, an assistant, and a system. Your task is to synthesize the final answer to the user's initial question based on this conversation. This is the concluding instruction and must be followed with precision. You will derive the final response by critically analyzing all the messages in the conversation and performing any necessary calculations. Be aware that some contributions from the assistant may not be relevant or could be misleading due to being based on incomplete information. {heuristic} If the conversation does not provide sufficient information to synthesize the answer then admit you cannot produce accurate answer. Do not use any information outside of the conversation context. Exercise discernment in selecting the appropriate messages to construct a logical and step-by-step reasoning process."¶
@@ -161,7 +191,7 @@
diff --git a/openssa.core.ooda_rag.query_rewritting_engine.html b/openssa.core.ooda_rag.query_rewritting_engine.html
new file mode 100644
index 000000000..1b451fac3
--- /dev/null
+++ b/openssa.core.ooda_rag.query_rewritting_engine.html
@@ -0,0 +1,136 @@
+ openssa.core.ooda_rag.query_rewritting_engine module
+openssa.core.ooda_rag.query_rewritting_engine module¶
+Query Rewriting Retriever Pack.
+
+-
+class openssa.core.ooda_rag.query_rewritting_engine.QueryRewritingRetrieverPack(index: VectorStoreIndex = None, chunk_size: int = 1024, vector_similarity_top_k: int = 5, fusion_similarity_top_k: int = 10, service_context: ServiceContext = None, **kwargs: Any)¶
+Bases: BaseLlamaPack
+Query rewriting retriever pack.
+Rewrite the query into multiple queries and rerank the results.
+
+-
+get_modules() Dict[str, Any] ¶
+Get modules.
+
+
+
+-
+retrieve(query_str: str) Any ¶
+Retrieve.
+
+
+
+-
+run(*args: Any, **kwargs: Any) Any ¶
+Run the pipeline.
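The pack above rewrites one query into several variants, retrieves for each, and reranks the fused results (`vector_similarity_top_k` per variant, `fusion_similarity_top_k` after fusion). A stdlib-only sketch of the fusion step using reciprocal rank fusion — the retrieval itself is stubbed out, and RRF is one common fusion choice, not necessarily the exact algorithm the pack uses:

```python
from collections import defaultdict


def fuse_rankings(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists with reciprocal rank fusion."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # Documents ranked high in any variant's results accumulate score.
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)


# Results for three rewritten variants of the same query:
fused = fuse_rankings([["d1", "d2"], ["d2", "d3"], ["d2", "d1"]])
print(fused)  # ['d2', 'd1', 'd3']
```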
+
\ No newline at end of file
diff --git a/openssa.core.ooda_rag.solver.html b/openssa.core.ooda_rag.solver.html
index b3207f9af..b7b6ff38c 100644
--- a/openssa.core.ooda_rag.solver.html
+++ b/openssa.core.ooda_rag.solver.html
@@ -10,7 +10,7 @@
-
+
@@ -89,16 +89,21 @@
openssa.core.ooda_rag.solver module¶
-
-class openssa.core.ooda_rag.solver.OodaSSA(task_heuristics, highest_priority_heuristic: str = '', ask_user_heuristic: str = '', llm=<openai.OpenAI object>, rag_llm=OpenAI(callback_manager=<llama_index.callbacks.base.CallbackManager object>, system_prompt=None, messages_to_prompt=<function messages_to_prompt>, completion_to_prompt=<function default_completion_to_prompt>, output_parser=None, pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'>, query_wrapper_prompt=None, model='llama2-70b', temperature=0.1, max_tokens=None, additional_kwargs={}, max_retries=3, timeout=60.0, default_headers=None, reuse_client=True, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-70b.lepton.run/api/v1', api_version=''), embed_model=OpenAIEmbedding(model_name='text-embedding-ada-002', embed_batch_size=10, callback_manager=<llama_index.callbacks.base.CallbackManager object>, additional_kwargs={}, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-7b.lepton.run/api/v1', api_version='', max_retries=10, timeout=60.0, default_headers=None, reuse_client=True), model='aitomatic-model')¶
+class openssa.core.ooda_rag.solver.OodaSSA(task_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic object>, highest_priority_heuristic: str = '', ask_user_heuristic: str = '', llm=<openssa.utils.llms.OpenAILLM object>, research_documents_tool: ~openssa.core.ooda_rag.tools.Tool = None, enable_generative: bool = False)¶
Bases: object
-
-activate_resources(folder_path: str) None ¶
+activate_resources(folder_path: Path | str, re_index: bool = False) None ¶
+
+
+
+-
+get_ask_user_question(problem_statement: str) str ¶
@@ -110,7 +115,7 @@
diff --git a/openssa.core.ooda_rag.tools.html b/openssa.core.ooda_rag.tools.html
index a90800fc4..c4d047718 100644
--- a/openssa.core.ooda_rag.tools.html
+++ b/openssa.core.ooda_rag.tools.html
@@ -10,7 +10,7 @@
-
+
@@ -94,11 +94,11 @@
A tool for asking the user a question.
-
-execute(question: str) str ¶
+execute(task: str) str ¶
Ask the user for personal information.
- Parameters:
-(str) (question) – The question to ask the user.
+(str) (task) – The question to ask the user.
- Return (str):
The user’s answer to the question.
@@ -108,6 +108,27 @@
+
+-
+class openssa.core.ooda_rag.tools.PythonCodeTool¶
+Bases: Tool
+A tool for executing python code.
+
+-
+execute(task: str) str ¶
+Execute python code.
+
+- Parameters:
+(str) (task) – The python code to execute.
+
+- Return (str):
+The result of the code execution.
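The documented interface for `PythonCodeTool` only fixes `execute(task) -> str`; how the real tool runs the code is not shown here. One plausible sketch executes the string in an isolated namespace and returns whatever was bound to a `result` variable — the `result` convention is an assumption for illustration:

```python
class PythonCodeTool:
    """Sketch of a tool that executes Python code passed as a string."""

    def execute(self, task: str) -> str:
        namespace: dict = {}
        # NOTE: exec() of untrusted code is unsafe; a real tool would sandbox this.
        exec(task, namespace)
        return str(namespace.get("result", ""))


tool = PythonCodeTool()
print(tool.execute("result = sum(range(5))"))  # 10
```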
+
+
+
+
+
+
-
class openssa.core.ooda_rag.tools.ReasearchAgentTool(agent: RAGSSM)¶
@@ -115,13 +136,13 @@
A tool for querying a document base for information.
-
-execute(question: str) str ¶
+execute(task: str) dict ¶
Query a document base for factual information.
- Parameters:
-(str) (question) – The question to ask the document base.
+(str) (task) – The question to ask the document base.
-- Return (str):
+- Return (dict):
The answer to the question.
@@ -136,18 +157,44 @@
A tool for querying a document base for information.
+
+
+
+
+-
+class openssa.core.ooda_rag.tools.ResearchQueryEngineTool(query_engine)¶
+Bases: Tool
+A tool for querying a document base for information.
+
+-
+execute(question: str) dict ¶
Query a document base for factual information.
- Parameters:
(str) (question) – The question to ask the document base.
-- Return (str):
+- Return (dict):
The answer to the question.
+
+-
+get_citations(metadata: dict)¶
+
+
@@ -163,7 +210,7 @@
-
-abstract execute(question: str)¶
+abstract execute(task: str)¶
Execute the tool with the given arguments.
@@ -176,7 +223,7 @@
diff --git a/openssa.core.prompts.html b/openssa.core.prompts.html
index 21f4b4dab..60f10a0d6 100644
--- a/openssa.core.prompts.html
+++ b/openssa.core.prompts.html
@@ -10,7 +10,7 @@
-
+
@@ -91,7 +91,7 @@
-
class openssa.core.prompts.Prompts¶
Bases: object
-The Prompts class provides a way to retrieve and format prompts in the OpenSSM project. The prompts are stored in a nested dictionary `self.
+The Prompts class provides a way to retrieve and format prompts in the OpenSSA project. The prompts are stored in a nested dictionary `self.
Usage Guide:
-
@@ -108,7 +108,7 @@
diff --git a/openssa.core.rag_ooda.html b/openssa.core.rag_ooda.html
new file mode 100644
index 000000000..826d17907
--- /dev/null
+++ b/openssa.core.rag_ooda.html
@@ -0,0 +1,180 @@
+ openssa.core.rag_ooda namespace
+openssa.core.rag_ooda namespace¶
+
+Subpackages¶
+
+
+
+Submodules¶
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.rag_ooda.html b/openssa.core.rag_ooda.rag_ooda.html
new file mode 100644
index 000000000..fd835429c
--- /dev/null
+++ b/openssa.core.rag_ooda.rag_ooda.html
@@ -0,0 +1,159 @@
+ openssa.core.rag_ooda.rag_ooda module
+openssa.core.rag_ooda.rag_ooda module¶
+
+-
+class openssa.core.rag_ooda.rag_ooda.RagOODA(resources: list[~openssa.core.rag_ooda.resources.rag_resource.RagResource | ~openssa.core.ooda_rag.tools.Tool] = None, conversation_id: str = 'f7b13f04-d300-4bd8-82b6-3c4001a38874', notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>)¶
+Bases: object
+
+-
+chat(query: str) str ¶
+
+
+
+-
+chat_with_agent(query: str) str ¶
+
+
+
+-
+get_answer(query: str, context: list) str ¶
+
+
+
+-
+classmethod get_conversation(conversation_id: str) list ¶
+
+
+
+-
+is_answer_complete(query: str, answer: str) bool ¶
+
+
+
+-
+is_sufficient(query: str, context: list) bool ¶
+
+
+
+
+
+-
+retrieve_context(query) list ¶
+
+
+
+-
+classmethod set_conversation(conversation_id: str, conversation: list) None ¶
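The `RagOODA` methods above suggest a two-stage chat flow: retrieve context, answer directly when the context is judged sufficient, and escalate to the OODA agent otherwise. A stand-in showing that control flow with every component stubbed out — the exact ordering inside the real `chat()` is an assumption inferred from the method names:

```python
class ToyRagOODA:
    """Sketch of the retrieve -> check sufficiency -> answer/escalate flow."""

    def retrieve_context(self, query: str) -> list:
        return ["boiler manual, p. 12"] if "boiler" in query else []

    def is_sufficient(self, query: str, context: list) -> bool:
        return bool(context)

    def get_answer(self, query: str, context: list) -> str:
        return f"answer from context: {context[0]}"

    def chat_with_agent(self, query: str) -> str:
        return "answer from OODA agent"

    def chat(self, query: str) -> str:
        context = self.retrieve_context(query)
        if self.is_sufficient(query, context):
            return self.get_answer(query, context)
        return self.chat_with_agent(query)


rag = ToyRagOODA()
print(rag.chat("why is my boiler cold?"))  # plain RAG path
print(rag.chat("how do I get to JFK?"))    # escalates to the agent
```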
+
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.dense_x.base.html b/openssa.core.rag_ooda.resources.dense_x.base.html
new file mode 100644
index 000000000..d62985dfe
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.dense_x.base.html
@@ -0,0 +1,138 @@
+ openssa.core.rag_ooda.resources.dense_x.base module
+openssa.core.rag_ooda.resources.dense_x.base module¶
+
+-
+class openssa.core.rag_ooda.resources.dense_x.base.DenseXRetrievalPack(documents: ~typing.List[~llama_index.core.schema.Document], proposition_llm: ~llama_index.core.llms.llm.LLM | None = None, query_llm: ~llama_index.core.llms.llm.LLM | None = None, embed_model: ~llama_index.core.base.embeddings.base.BaseEmbedding | None = None, text_splitter: ~llama_index.core.node_parser.interface.TextSplitter = SentenceSplitter(include_metadata=True, include_prev_next_rel=True, callback_manager=<llama_index.core.callbacks.base.CallbackManager object>, id_func=<function default_id_func>, chunk_size=1024, chunk_overlap=200, separator=' ', paragraph_separator='\n\n\n', secondary_chunking_regex='[^,.;。?!]+[,.;。?!]?'), similarity_top_k: int = 4)¶
+Bases: BaseLlamaPack
+
+-
+get_modules() Dict[str, Any] ¶
+Get modules.
+
+
+
+-
+run(query_str: str, **kwargs: Any) Any ¶
+Run the pipeline.
+
+
+
+
+
+-
+openssa.core.rag_ooda.resources.dense_x.base.load_nodes_dict(nodes_cache_path: str) Dict[str, TextNode] ¶
+Load nodes dict.
+
+
+
+-
+openssa.core.rag_ooda.resources.dense_x.base.store_nodes_dict(nodes_dict: Dict[str, TextNode], nodes_cache_path) None ¶
+Store nodes dict.
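`store_nodes_dict`/`load_nodes_dict` cache proposition nodes between runs so the expensive proposition extraction need not be repeated. A stdlib-only sketch of the same cache round-trip, using plain dicts in place of llama_index `TextNode` objects (the JSON serialization is an assumption; the real functions handle `TextNode` instances):

```python
import json
import tempfile
from pathlib import Path


def store_nodes_dict(nodes_dict: dict, nodes_cache_path: str) -> None:
    """Persist the node cache as JSON."""
    Path(nodes_cache_path).write_text(json.dumps(nodes_dict))


def load_nodes_dict(nodes_cache_path: str) -> dict:
    """Reload a previously stored node cache."""
    return json.loads(Path(nodes_cache_path).read_text())


with tempfile.TemporaryDirectory() as tmp:
    cache_path = str(Path(tmp) / "nodes.json")
    store_nodes_dict({"n1": {"text": "a proposition"}}, cache_path)
    restored = load_nodes_dict(cache_path)
print(restored)  # {'n1': {'text': 'a proposition'}}
```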
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.dense_x.dense_x.html b/openssa.core.rag_ooda.resources.dense_x.dense_x.html
new file mode 100644
index 000000000..dea9531f5
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.dense_x.dense_x.html
@@ -0,0 +1,113 @@
+ openssa.core.rag_ooda.resources.dense_x.dense_x module
+openssa.core.rag_ooda.resources.dense_x.dense_x module¶
+
+-
+openssa.core.rag_ooda.resources.dense_x.dense_x.load_dense_x(data_dir: str, cache_dir: str, nodes_cache_path: str) RagResource ¶
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.dense_x.html b/openssa.core.rag_ooda.resources.dense_x.html
new file mode 100644
index 000000000..c8678e063
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.dense_x.html
@@ -0,0 +1,129 @@
+ openssa.core.rag_ooda.resources.dense_x namespace
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.html b/openssa.core.rag_ooda.resources.html
new file mode 100644
index 000000000..67a24fc91
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.html
@@ -0,0 +1,156 @@
+ openssa.core.rag_ooda.resources namespace
+openssa.core.rag_ooda.resources namespace¶
+
+Subpackages¶
+
+
+
+Submodules¶
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.rag_resource.html b/openssa.core.rag_ooda.resources.rag_resource.html
new file mode 100644
index 000000000..b67edee9b
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.rag_resource.html
@@ -0,0 +1,114 @@
+ openssa.core.rag_ooda.resources.rag_resource module
+openssa.core.rag_ooda.resources.rag_resource module¶
+
+-
+class openssa.core.rag_ooda.resources.rag_resource.RagResource(query_engine: BaseQueryEngine, retriever: BaseRetriever)¶
+Bases: object
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.standard_vi.html b/openssa.core.rag_ooda.resources.standard_vi.html
new file mode 100644
index 000000000..86774a3ae
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.standard_vi.html
@@ -0,0 +1,119 @@
+ openssa.core.rag_ooda.resources.standard_vi namespace
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html b/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html
new file mode 100644
index 000000000..527bd36b7
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html
@@ -0,0 +1,113 @@
+ openssa.core.rag_ooda.resources.standard_vi.standard_vi module
+openssa.core.rag_ooda.resources.standard_vi.standard_vi module¶
+
+-
+openssa.core.rag_ooda.resources.standard_vi.standard_vi.load_standard_vi(data_dir: str, cache_dir: str) RagResource ¶
+
+
\ No newline at end of file
diff --git a/openssa.core.slm.abstract_slm.html b/openssa.core.slm.abstract_slm.html
index a19146fad..bb513a8c5 100644
--- a/openssa.core.slm.abstract_slm.html
+++ b/openssa.core.slm.abstract_slm.html
@@ -10,7 +10,7 @@
-
+
@@ -128,7 +128,7 @@
diff --git a/openssa.core.slm.base_slm.html b/openssa.core.slm.base_slm.html
index 1e9195505..449185928 100644
--- a/openssa.core.slm.base_slm.html
+++ b/openssa.core.slm.base_slm.html
@@ -10,7 +10,7 @@
-
+
@@ -140,7 +140,7 @@
diff --git a/openssa.core.slm.html b/openssa.core.slm.html
index 31243cd2d..c75e62a18 100644
--- a/openssa.core.slm.html
+++ b/openssa.core.slm.html
@@ -10,7 +10,7 @@
-
+
@@ -160,7 +160,7 @@ Submodules
diff --git a/openssa.core.slm.memory.conversation_db.html b/openssa.core.slm.memory.conversation_db.html
index 9f575ac54..469d5d7bd 100644
--- a/openssa.core.slm.memory.conversation_db.html
+++ b/openssa.core.slm.memory.conversation_db.html
@@ -10,7 +10,7 @@
-
+
@@ -125,7 +125,7 @@
diff --git a/openssa.core.slm.memory.html b/openssa.core.slm.memory.html
index 17889c366..0b1f7a9b4 100644
--- a/openssa.core.slm.memory.html
+++ b/openssa.core.slm.memory.html
@@ -10,7 +10,7 @@
-
+
@@ -123,7 +123,7 @@ Submodules
diff --git a/openssa.core.slm.memory.sqlite_conversation_db.html b/openssa.core.slm.memory.sqlite_conversation_db.html
index 6dc3007a0..0d08aa3ed 100644
--- a/openssa.core.slm.memory.sqlite_conversation_db.html
+++ b/openssa.core.slm.memory.sqlite_conversation_db.html
@@ -10,7 +10,7 @@
-
+
@@ -125,7 +125,7 @@
diff --git a/openssa.core.ssa.agent.html b/openssa.core.ssa.agent.html
new file mode 100644
index 000000000..7e0398503
--- /dev/null
+++ b/openssa.core.ssa.agent.html
@@ -0,0 +1,144 @@
+
+
+
+
+
+
+ openssa.core.ssa.agent module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ssa.agent module¶
+
+-
+class openssa.core.ssa.agent.Agent(llm=<openssa.utils.llms.OpenAILLM object>, resources=None, short_term_memory=None, long_term_memory=None, heuristics=None)¶
+Bases: object
+
+-
+run_ooda_loop(task, heuristic)¶
+
+
+
+-
+select_optimal_heuristic(task)¶
+
+
+
+-
+solve(objective)¶
+
+
+
+-
+solve_task(task)¶
+
+
+
+-
+subtask(task, heuristic)¶
+
+
+
+-
+update_memory(key, value, memory_type='short')¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
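The `openssa.core.ssa.agent.Agent` page above lists only signatures, so the following is a minimal, self-contained sketch of how that surface fits together. The class and method names (`solve`, `solve_task`, `update_memory`) follow the documented index, but the bodies are illustrative stand-ins, not OpenSSA's actual implementation.

```python
# Hypothetical sketch of the documented Agent surface; bodies are
# placeholders showing the intended flow, not OpenSSA internals.

class Agent:
    def __init__(self, llm=None, resources=None,
                 short_term_memory=None, long_term_memory=None, heuristics=None):
        self.llm = llm
        self.resources = resources or []
        self.short_term_memory = short_term_memory or {}
        self.long_term_memory = long_term_memory or {}
        self.heuristics = heuristics or []

    def update_memory(self, key, value, memory_type="short"):
        # Route writes to the chosen memory store, matching the documented
        # update_memory(key, value, memory_type='short') signature.
        store = (self.short_term_memory if memory_type == "short"
                 else self.long_term_memory)
        store[key] = value

    def solve(self, objective):
        # Toy decomposition: one task per line of the objective.
        tasks = [line.strip() for line in objective.splitlines() if line.strip()]
        return [self.solve_task(t) for t in tasks]

    def solve_task(self, task):
        result = f"solved: {task}"
        self.update_memory(task, result)
        return result

agent = Agent()
results = agent.solve("inspect sensor data\nflag anomalies")
```

In the real library each `solve_task` call would consult heuristics and run an OODA loop rather than returning a canned string; the sketch only shows the memory-and-decomposition shape implied by the method list.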
diff --git a/openssa.core.ssa.html b/openssa.core.ssa.html
index 6506bf069..7e45c6721 100644
--- a/openssa.core.ssa.html
+++ b/openssa.core.ssa.html
@@ -10,7 +10,7 @@
-
+
@@ -91,6 +91,18 @@
Submodules¶
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -136,8 +148,8 @@ SubmodulesSSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -154,7 +166,7 @@ Submodules
diff --git a/openssa.core.ssa.rag_ssa.html b/openssa.core.ssa.rag_ssa.html
index e1c45752a..4928ee2f5 100644
--- a/openssa.core.ssa.rag_ssa.html
+++ b/openssa.core.ssa.rag_ssa.html
@@ -10,7 +10,7 @@
-
+
@@ -139,7 +139,7 @@
diff --git a/openssa.core.ssa.ssa.html b/openssa.core.ssa.ssa.html
index a3e458b67..0698062d8 100644
--- a/openssa.core.ssa.ssa.html
+++ b/openssa.core.ssa.ssa.html
@@ -10,7 +10,7 @@
-
+
@@ -191,7 +191,7 @@
diff --git a/openssa.core.ssa.ssa_service.html b/openssa.core.ssa.ssa_service.html
index 866f9610e..08b257eb9 100644
--- a/openssa.core.ssa.ssa_service.html
+++ b/openssa.core.ssa.ssa_service.html
@@ -10,7 +10,7 @@
-
+
@@ -131,13 +131,13 @@
class openssa.core.ssa.ssa_service.SSAService¶
Bases: AbstractSSAService
@@ -161,7 +161,7 @@
diff --git a/openssa.core.ssm.abstract_ssm.html b/openssa.core.ssm.abstract_ssm.html
index 9a1d9c5d9..e5408458d 100644
--- a/openssa.core.ssm.abstract_ssm.html
+++ b/openssa.core.ssm.abstract_ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -204,7 +204,7 @@
diff --git a/openssa.core.ssm.abstract_ssm_builder.html b/openssa.core.ssm.abstract_ssm_builder.html
index b69cb84ca..629585dcb 100644
--- a/openssa.core.ssm.abstract_ssm_builder.html
+++ b/openssa.core.ssm.abstract_ssm_builder.html
@@ -10,7 +10,7 @@
-
+
@@ -137,7 +137,7 @@
diff --git a/openssa.core.ssm.base_ssm.html b/openssa.core.ssm.base_ssm.html
index a8ba40ffe..42e048d19 100644
--- a/openssa.core.ssm.base_ssm.html
+++ b/openssa.core.ssm.base_ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -93,7 +93,7 @@
Bases: AbstractSSM
@@ -262,7 +262,7 @@
diff --git a/openssa.core.ssm.base_ssm_builder.html b/openssa.core.ssm.base_ssm_builder.html
index 2258967c7..e1eaabcd1 100644
--- a/openssa.core.ssm.base_ssm_builder.html
+++ b/openssa.core.ssm.base_ssm_builder.html
@@ -10,7 +10,7 @@
-
+
@@ -142,7 +142,7 @@
diff --git a/openssa.core.ssm.html b/openssa.core.ssm.html
index 8b91d8d2c..7218debe3 100644
--- a/openssa.core.ssm.html
+++ b/openssa.core.ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -193,7 +193,7 @@ Submodules
diff --git a/openssa.core.ssm.rag_ssm.html b/openssa.core.ssm.rag_ssm.html
index d752327c8..34346a3ae 100644
--- a/openssa.core.ssm.rag_ssm.html
+++ b/openssa.core.ssm.rag_ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -144,7 +144,7 @@
diff --git a/openssa.html b/openssa.html
index 7e89c2648..f6810df8d 100644
--- a/openssa.html
+++ b/openssa.html
@@ -10,7 +10,7 @@
-
+
@@ -92,7 +92,6 @@ Subpackages
- openssa.contrib package
-StreamlitSSAProbSolver
- Subpackages
- openssa.contrib.streamlit_ssa_prob_solver package
SSAProbSolver
@@ -115,6 +114,7 @@ SubpackagesSSAProbSolver.ssa_solve()
+update_multiselect_style()
@@ -258,23 +258,113 @@ Subpackagesopenssa.core.ooda namespace
+
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
-AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+
GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+
TaskAgent
@@ -313,6 +403,7 @@ SubpackagesHeuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -321,10 +412,12 @@ Subpackagesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -340,6 +433,7 @@ Subpackagesopenssa.core.ooda_rag.ooda_rag module
Executor
@@ -349,11 +443,6 @@ SubpackagesHistory.get_history()
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -369,8 +458,14 @@ Subpackagesopenssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
@@ -387,9 +482,19 @@ Subpackagesopenssa.core.ooda_rag.query_rewritting_engine module
+
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -400,6 +505,10 @@ SubpackagesAskUserTool.execute()
+PythonCodeTool
+
ReasearchAgentTool
@@ -408,6 +517,11 @@ SubpackagesResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -419,6 +533,58 @@ Subpackagesopenssa.core.rag_ooda namespace
+
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
@@ -482,6 +648,18 @@ Subpackagesopenssa.core.ssa namespace
- Submodules
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -527,8 +705,8 @@ SubpackagesSSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -660,6 +838,7 @@ SubpackagesAPIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -698,6 +877,7 @@ SubpackagesAPIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -716,7 +896,6 @@ Subpackagesopenssa.integrations.llama_index.backend module
Backend
@@ -741,6 +920,7 @@ SubpackagesAPIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -768,6 +948,7 @@ SubpackagesAbstractAPIContext.key
AbstractAPIContext.max_tokens
AbstractAPIContext.model
+AbstractAPIContext.model_computed_fields
AbstractAPIContext.model_config
AbstractAPIContext.model_fields
AbstractAPIContext.temperature
@@ -796,22 +977,24 @@ Subpackagesopenssa.utils.deprecated namespace
+- Submodules
-- Submodules
-- openssa.utils.aitomatic_llm_config module
+- Submodules
- openssa.utils.config module
Config
Config.AZURE_GPT4_ENGINE
Config.AZURE_GPT4_MODEL
+Config.AZURE_OPENAI_API_KEY
+Config.AZURE_OPENAI_API_URL
Config.DEBUG
+Config.DEFAULT_TEMPERATURE
Config.FALCON7B_API_KEY
Config.FALCON7B_API_URL
Config.LEPTONAI_API_KEY
Config.LEPTONAI_API_URL
+Config.LEPTON_API_KEY
+Config.LEPTON_API_URL
Config.OPENAI_API_KEY
Config.OPENAI_API_URL
+Config.US_AZURE_OPENAI_API_BASE
+Config.US_AZURE_OPENAI_API_KEY
Config.setenv()
- openssa.utils.fs module
-DirOrFilePath
FileSource
FileSource.file_paths()
FileSource.fs
@@ -849,60 +1038,38 @@ Subpackagesopenssa.utils.llm_config module
-AitomaticBaseURL
-
-LLMConfig
-LLMConfig.get_aito_embeddings()
-LLMConfig.get_aitomatic_13b()
-LLMConfig.get_aitomatic_yi_34b()
-LLMConfig.get_azure_embed_model()
-LLMConfig.get_azure_jp_api_key()
-LLMConfig.get_default_embed_model()
-LLMConfig.get_intel_neural_chat_7b()
-LLMConfig.get_llama_2_api_key()
-LLMConfig.get_llm()
-LLMConfig.get_llm_azure_jp_35_16k()
-LLMConfig.get_llm_azure_jp_4_32k()
-LLMConfig.get_llm_llama_2_70b()
-LLMConfig.get_llm_llama_2_7b()
-LLMConfig.get_llm_openai_35_turbo()
-LLMConfig.get_llm_openai_35_turbo_0613()
-LLMConfig.get_llm_openai_35_turbo_1106()
-LLMConfig.get_llm_openai_4()
-LLMConfig.get_openai_api_key()
-LLMConfig.get_openai_embed_model()
-LLMConfig.get_service_context_azure_gpt4()
-LLMConfig.get_service_context_azure_gpt4_32k()
-LLMConfig.get_service_context_azure_jp_35()
-LLMConfig.get_service_context_azure_jp_35_16k()
-LLMConfig.get_service_context_llama_2_70b()
-LLMConfig.get_service_context_llama_2_7b()
-LLMConfig.get_service_context_openai_35_turbo()
-LLMConfig.get_service_context_openai_35_turbo_1106()
-
-
-LlmBaseModel
-
-LlmModelSize
-LlmModelSize.gpt35
-LlmModelSize.gpt4
-LlmModelSize.llama2_13b
-LlmModelSize.llama2_70b
-LlmModelSize.llama2_7b
-LlmModelSize.neutral_chat_7b
-LlmModelSize.yi_34
+- openssa.utils.llms module
+AitomaticLLM
+
+AnLLM
+
+AzureLLM
+
+OpenAILLM
@@ -917,6 +1084,30 @@ Subpackagesopenssa.utils.rag_service_contexts module
+ServiceContextManager
+ServiceContextManager.get_aitomatic_sc()
+ServiceContextManager.get_azure_jp_openai_35_turbo_sc()
+ServiceContextManager.get_azure_openai_4_0125_preview_sc()
+ServiceContextManager.get_azure_openai_sc()
+ServiceContextManager.get_openai_35_turbo_sc()
+ServiceContextManager.get_openai_4_0125_preview_sc()
+ServiceContextManager.get_openai_sc()
+
+
+
+
+- openssa.utils.usage_logger module
+
- openssa.utils.utils module
Utils
Utils.canonicalize_discuss_result()
@@ -948,7 +1139,7 @@ Subpackages
diff --git a/openssa.integrations.api_context.html b/openssa.integrations.api_context.html
index c21731712..2dc213d62 100644
--- a/openssa.integrations.api_context.html
+++ b/openssa.integrations.api_context.html
@@ -10,7 +10,7 @@
-
+
@@ -127,6 +127,12 @@
model: str | None¶
+
+-
+model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}¶
+A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
+
+
-
model_config: ClassVar[ConfigDict] = {}¶
@@ -165,7 +171,7 @@
openssa¶
- openssa package
- Subpackages
- openssa.contrib package
+- openssa.core.ooda namespace
+
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
+HeuristicSet
TaskDecompositionHeuristic
@@ -321,10 +387,12 @@ openssa¶
- openssa.core.ooda_rag.notifier module
- openssa.core.ooda_rag.ooda_rag module
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -369,8 +433,14 @@ openssa¶
- openssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
@@ -387,9 +457,19 @@ openssa¶
+- openssa.core.ooda_rag.query_rewritting_engine module
+
- openssa.core.ooda_rag.solver module
+PythonCodeTool
+
ReasearchAgentTool
@@ -408,6 +492,11 @@ openssa¶
ResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -419,6 +508,42 @@ openssa¶
+- openssa.core.rag_ooda namespace
+- Subpackages
+
+- Submodules
+
+
+
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
@@ -462,6 +587,18 @@ openssa¶
- openssa.core.ssa namespace
- Submodules
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -507,8 +644,8 @@ openssa¶
SSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -640,6 +777,7 @@ openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -678,6 +816,7 @@ openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -696,7 +835,6 @@ openssa¶
- openssa.integrations.llama_index.backend module
Backend
@@ -721,6 +859,7 @@ openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -748,6 +887,7 @@ openssa¶
AbstractAPIContext.key
AbstractAPIContext.max_tokens
AbstractAPIContext.model
+AbstractAPIContext.model_computed_fields
AbstractAPIContext.model_config
AbstractAPIContext.model_fields
AbstractAPIContext.temperature
@@ -776,22 +916,24 @@ openssa¶
+- openssa.utils.deprecated namespace
+- Submodules
-- Submodules
-- openssa.utils.aitomatic_llm_config module
+- Submodules
- openssa.utils.config module
Config
+Config.AITOMATIC_API_KEY
+Config.AITOMATIC_API_URL
+Config.AITOMATIC_API_URL_70B
+Config.AITOMATIC_API_URL_7B
+Config.AZURE_API_VERSION
Config.AZURE_GPT3_API_KEY
Config.AZURE_GPT3_API_URL
Config.AZURE_GPT3_API_VERSION
@@ -802,20 +944,26 @@ openssa¶
Config.AZURE_GPT4_API_VERSION
Config.AZURE_GPT4_ENGINE
Config.AZURE_GPT4_MODEL
+Config.AZURE_OPENAI_API_KEY
+Config.AZURE_OPENAI_API_URL
Config.DEBUG
+Config.DEFAULT_TEMPERATURE
Config.FALCON7B_API_KEY
Config.FALCON7B_API_URL
Config.LEPTONAI_API_KEY
Config.LEPTONAI_API_URL
+Config.LEPTON_API_KEY
+Config.LEPTON_API_URL
Config.OPENAI_API_KEY
Config.OPENAI_API_URL
+Config.US_AZURE_OPENAI_API_BASE
+Config.US_AZURE_OPENAI_API_KEY
Config.setenv()
- openssa.utils.fs module
-DirOrFilePath
FileSource
FileSource.file_paths()
FileSource.fs
@@ -829,60 +977,38 @@ openssa¶
-- openssa.utils.llm_config module
-AitomaticBaseURL
-
-LLMConfig
-LLMConfig.get_aito_embeddings()
-LLMConfig.get_aitomatic_13b()
-LLMConfig.get_aitomatic_yi_34b()
-LLMConfig.get_azure_embed_model()
-LLMConfig.get_azure_jp_api_key()
-LLMConfig.get_default_embed_model()
-LLMConfig.get_intel_neural_chat_7b()
-LLMConfig.get_llama_2_api_key()
-LLMConfig.get_llm()
-LLMConfig.get_llm_azure_jp_35_16k()
-LLMConfig.get_llm_azure_jp_4_32k()
-LLMConfig.get_llm_llama_2_70b()
-LLMConfig.get_llm_llama_2_7b()
-LLMConfig.get_llm_openai_35_turbo()
-LLMConfig.get_llm_openai_35_turbo_0613()
-LLMConfig.get_llm_openai_35_turbo_1106()
-LLMConfig.get_llm_openai_4()
-LLMConfig.get_openai_api_key()
-LLMConfig.get_openai_embed_model()
-LLMConfig.get_service_context_azure_gpt4()
-LLMConfig.get_service_context_azure_gpt4_32k()
-LLMConfig.get_service_context_azure_jp_35()
-LLMConfig.get_service_context_azure_jp_35_16k()
-LLMConfig.get_service_context_llama_2_70b()
-LLMConfig.get_service_context_llama_2_7b()
-LLMConfig.get_service_context_openai_35_turbo()
-LLMConfig.get_service_context_openai_35_turbo_1106()
-
-
-LlmBaseModel
-
-LlmModelSize
-LlmModelSize.gpt35
-LlmModelSize.gpt4
-LlmModelSize.llama2_13b
-LlmModelSize.llama2_70b
-LlmModelSize.llama2_7b
-LlmModelSize.neutral_chat_7b
-LlmModelSize.yi_34
+- openssa.utils.llms module
+AitomaticLLM
+
+AnLLM
+
+AzureLLM
+
+OpenAILLM
@@ -897,6 +1023,30 @@ openssa¶
+- openssa.utils.rag_service_contexts module
+ServiceContextManager
+ServiceContextManager.get_aitomatic_sc()
+ServiceContextManager.get_azure_jp_openai_35_turbo_sc()
+ServiceContextManager.get_azure_openai_4_0125_preview_sc()
+ServiceContextManager.get_azure_openai_sc()
+ServiceContextManager.get_openai_35_turbo_sc()
+ServiceContextManager.get_openai_4_0125_preview_sc()
+ServiceContextManager.get_openai_sc()
+
+
+
+
+- openssa.utils.usage_logger module
+
- openssa.utils.utils module
Utils
Utils.canonicalize_discuss_result()
@@ -931,7 +1081,7 @@ openssa¶
diff --git a/openssa.contrib.html b/openssa.contrib.html
index b2d26af29..edd6172b0 100644
--- a/openssa.contrib.html
+++ b/openssa.contrib.html
@@ -10,7 +10,7 @@
-
+
@@ -93,12 +93,6 @@
Candidate implementations of integrations
Reusable application components and/or templates (e.g., Gradio, Streamlit, etc.)
-
--
-openssa.contrib.StreamlitSSAProbSolver¶
-alias of SSAProbSolver
-
-
Subpackages¶
@@ -124,6 +118,7 @@ SubpackagesSSAProbSolver.ssa_solve()
+
update_multiselect_style()
@@ -136,7 +131,7 @@ Subpackages
diff --git a/openssa.contrib.streamlit_ssa_prob_solver.html b/openssa.contrib.streamlit_ssa_prob_solver.html
index a5c5913be..7a2fa56a6 100644
--- a/openssa.contrib.streamlit_ssa_prob_solver.html
+++ b/openssa.contrib.streamlit_ssa_prob_solver.html
@@ -10,7 +10,7 @@
-
+
@@ -90,7 +90,7 @@
SSA Problem-Solver Streamlit Component.
-
-class openssa.contrib.streamlit_ssa_prob_solver.SSAProbSolver(unique_name: int | str | UUID, domain: str = '', problem: str = '', expert_instructions: str = '', fine_tuned_model_url: str = '', doc_src_path: str = '', doc_src_file_relpaths: frozenset[str] = frozenset({}))¶
+class openssa.contrib.streamlit_ssa_prob_solver.SSAProbSolver(unique_name: Uid, domain: str = '', problem: str = '', expert_instructions: str = '', fine_tuned_model_url: str = '', doc_src_path: DirOrFilePath = '', doc_src_file_relpaths: FilePathSet = frozenset({}))¶
Bases: object
SSA Problem-Solver Streamlit Component.
@@ -105,7 +105,7 @@
+
+-
+openssa.contrib.streamlit_ssa_prob_solver.update_multiselect_style()¶
+
+
@@ -188,7 +193,7 @@
diff --git a/openssa.core.adapter.abstract_adapter.html b/openssa.core.adapter.abstract_adapter.html
index 74925d6da..a368babb8 100644
--- a/openssa.core.adapter.abstract_adapter.html
+++ b/openssa.core.adapter.abstract_adapter.html
@@ -10,7 +10,7 @@
-
+
@@ -177,7 +177,7 @@
diff --git a/openssa.core.adapter.base_adapter.html b/openssa.core.adapter.base_adapter.html
index 04ae4d896..611aedf86 100644
--- a/openssa.core.adapter.base_adapter.html
+++ b/openssa.core.adapter.base_adapter.html
@@ -10,7 +10,7 @@
-
+
@@ -191,7 +191,7 @@
diff --git a/openssa.core.adapter.html b/openssa.core.adapter.html
index ef92a442a..8e782716f 100644
--- a/openssa.core.adapter.html
+++ b/openssa.core.adapter.html
@@ -10,7 +10,7 @@
-
+
@@ -140,7 +140,7 @@ Submodules
diff --git a/openssa.core.backend.abstract_backend.html b/openssa.core.backend.abstract_backend.html
index 74848d0ff..2a2d9442d 100644
--- a/openssa.core.backend.abstract_backend.html
+++ b/openssa.core.backend.abstract_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -181,7 +181,7 @@
diff --git a/openssa.core.backend.base_backend.html b/openssa.core.backend.base_backend.html
index a9d1a7c70..b80ecbc99 100644
--- a/openssa.core.backend.base_backend.html
+++ b/openssa.core.backend.base_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -182,7 +182,7 @@
diff --git a/openssa.core.backend.html b/openssa.core.backend.html
index d503052e8..7853e03b0 100644
--- a/openssa.core.backend.html
+++ b/openssa.core.backend.html
@@ -10,7 +10,7 @@
-
+
@@ -163,7 +163,7 @@ Submodules
diff --git a/openssa.core.backend.rag_backend.html b/openssa.core.backend.rag_backend.html
index 7aeca8197..138c27e0d 100644
--- a/openssa.core.backend.rag_backend.html
+++ b/openssa.core.backend.rag_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -162,7 +162,7 @@
diff --git a/openssa.core.backend.text_backend.html b/openssa.core.backend.text_backend.html
index a48364e85..0d35d99be 100644
--- a/openssa.core.backend.text_backend.html
+++ b/openssa.core.backend.text_backend.html
@@ -10,7 +10,7 @@
-
+
@@ -131,7 +131,7 @@
diff --git a/openssa.core.html b/openssa.core.html
index ed1d13f01..a8bb36357 100644
--- a/openssa.core.html
+++ b/openssa.core.html
@@ -10,7 +10,7 @@
-
+
@@ -226,23 +226,113 @@ Subpackagesopenssa.core.ooda namespace
+
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
-AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+
GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+
TaskAgent
@@ -281,6 +371,7 @@ SubpackagesHeuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -289,10 +380,12 @@ Subpackagesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -308,6 +401,7 @@ Subpackagesopenssa.core.ooda_rag.ooda_rag module
Executor
@@ -317,11 +411,6 @@ SubpackagesHistory.get_history()
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -337,8 +426,14 @@ Subpackagesopenssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
@@ -355,9 +450,19 @@ Subpackagesopenssa.core.ooda_rag.query_rewritting_engine module
+
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -368,6 +473,10 @@ SubpackagesAskUserTool.execute()
+PythonCodeTool
+
ReasearchAgentTool
@@ -376,6 +485,11 @@ SubpackagesResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -387,6 +501,73 @@ Subpackagesopenssa.core.rag_ooda namespace
+
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
@@ -450,6 +631,18 @@ Subpackagesopenssa.core.ssa namespace
- Submodules
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -495,8 +688,8 @@ SubpackagesSSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -629,7 +822,7 @@ Submodules
diff --git a/openssa.core.inferencer.abstract_inferencer.html b/openssa.core.inferencer.abstract_inferencer.html
index 14a982a56..54b60a35c 100644
--- a/openssa.core.inferencer.abstract_inferencer.html
+++ b/openssa.core.inferencer.abstract_inferencer.html
@@ -10,7 +10,7 @@
-
+
@@ -117,7 +117,7 @@
diff --git a/openssa.core.inferencer.base_inferencer.html b/openssa.core.inferencer.base_inferencer.html
index 6d561ea4c..8b7ba62b8 100644
--- a/openssa.core.inferencer.base_inferencer.html
+++ b/openssa.core.inferencer.base_inferencer.html
@@ -10,7 +10,7 @@
-
+
@@ -112,7 +112,7 @@
diff --git a/openssa.core.inferencer.html b/openssa.core.inferencer.html
index 584014ef2..bc8c7c5a2 100644
--- a/openssa.core.inferencer.html
+++ b/openssa.core.inferencer.html
@@ -10,7 +10,7 @@
-
+
@@ -117,7 +117,7 @@ Submodules
diff --git a/openssa.core.ooda.deprecated.html b/openssa.core.ooda.deprecated.html
new file mode 100644
index 000000000..12e7eb5f4
--- /dev/null
+++ b/openssa.core.ooda.deprecated.html
@@ -0,0 +1,138 @@
+
+
+
+
+
+
+ openssa.core.ooda.deprecated namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.deprecated namespace¶
+
+Submodules¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.deprecated.solver.html b/openssa.core.ooda.deprecated.solver.html
new file mode 100644
index 000000000..b9c212b8f
--- /dev/null
+++ b/openssa.core.ooda.deprecated.solver.html
@@ -0,0 +1,181 @@
+
+
+
+
+
+
+ openssa.core.ooda.deprecated.solver module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.deprecated.solver module¶
+
+-
+class openssa.core.ooda.deprecated.solver.History¶
+Bases: object
+
+-
+get_findings(step_name)¶
+
+
+
+-
+update(step_name, findings)¶
+
+
+
+
+
+-
+class openssa.core.ooda.deprecated.solver.LLM¶
+Bases: object
+
+-
+get_response(prompt, history)¶
+
+
+
+
+
+-
+class openssa.core.ooda.deprecated.solver.Solver(tools, heuristics, llm)¶
+Bases: object
+
+-
+act(decision, heuristic)¶
+
+
+
+-
+decide(orientation, heuristic)¶
+
+
+
+
+
+-
+orient(observation)¶
+
+
+
+-
+run_ooda_loop(task, heuristic)¶
+
+
+
+-
+select_optimal_heuristic(task)¶
+
+
+
+
+
+-
+subtask(task, heuristic)¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
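The deprecated `Solver` page documents an observe→orient→decide→act method set (`orient`, `decide`, `act`, `run_ooda_loop`) with no bodies. The sketch below is a hedged stand-in showing how those stages might chain; the logic is assumed, since only the signatures appear above.

```python
# Illustrative OODA cycle matching the deprecated Solver's method names;
# the per-stage logic is a placeholder, not the library's code.

class Solver:
    def __init__(self, tools, heuristics, llm=None):
        self.tools = tools
        self.heuristics = heuristics
        self.llm = llm
        self.history = []  # record of actions taken

    def observe(self, task):
        return {"task": task}

    def orient(self, observation):
        return {"situation": f"analysed {observation['task']}"}

    def decide(self, orientation, heuristic):
        # The heuristic maps the oriented situation to an action.
        return {"action": heuristic(orientation["situation"])}

    def act(self, decision, heuristic):
        self.history.append(decision["action"])
        return decision["action"]

    def run_ooda_loop(self, task, heuristic):
        observation = self.observe(task)
        orientation = self.orient(observation)
        decision = self.decide(orientation, heuristic)
        return self.act(decision, heuristic)

solver = Solver(tools={}, heuristics=[])
outcome = solver.run_ooda_loop("calibrate pump", lambda s: f"plan for {s}")
```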
diff --git a/openssa.core.ooda.heuristic.html b/openssa.core.ooda.heuristic.html
new file mode 100644
index 000000000..1a78d41c3
--- /dev/null
+++ b/openssa.core.ooda.heuristic.html
@@ -0,0 +1,124 @@
+
+
+
+
+
+
+ openssa.core.ooda.heuristic module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.html b/openssa.core.ooda.html
new file mode 100644
index 000000000..082672823
--- /dev/null
+++ b/openssa.core.ooda.html
@@ -0,0 +1,183 @@
+
+
+
+
+
+
+ openssa.core.ooda namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda namespace¶
+
+Subpackages¶
+
+
+
+Submodules¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.ooda.ooda_loop.html b/openssa.core.ooda.ooda_loop.html
new file mode 100644
index 000000000..37447df18
--- /dev/null
+++ b/openssa.core.ooda.ooda_loop.html
@@ -0,0 +1,148 @@
+
+
+
+
+
+
+ openssa.core.ooda.ooda_loop module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.ooda_loop module¶
+
+-
+class openssa.core.ooda.ooda_loop.OODALoop(objective)¶
+Bases: object
+
+-
+class Step(name, prompt_function)¶
+Bases: object
+Represents a step in the OODA loop.
+
+- Attributes
name (str): The name of the step.
+prompt_function (function): The function used to generate the prompt for the step.
+input_data: The input data for the step.
+output_data: The output data generated by the step.
+
+
+
+-
+execute(objective, llm, history)¶
+Executes the step by generating a prompt using the prompt function,
+getting a response from the LLM, and storing the output data.
+
+- Args:
objective: The overall objective of the OODA loop.
+llm: The LLM (Large Language Model) used to get the response.
+history: The history of previous prompts and responses.
+
+- Returns:
The output data generated by the step.
+
+
+
+
+
+
+
+-
+run(llm, history)¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
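The `OODALoop.Step` docstring above states the contract directly: a named step whose `prompt_function` builds a prompt, an LLM returns a response, and the output is stored. A self-contained sketch under that contract (the `StubLLM` and the four prompt functions are assumptions for illustration):

```python
# Sketch of the documented Step.execute() contract: build prompt,
# query LLM, store output. StubLLM stands in for a real model client.

class StubLLM:
    def get_response(self, prompt, history):
        return f"response to: {prompt}"

class Step:
    def __init__(self, name, prompt_function):
        self.name = name
        self.prompt_function = prompt_function
        self.input_data = None
        self.output_data = None

    def execute(self, objective, llm, history):
        # Generate the prompt, get the LLM response, record it in history.
        prompt = self.prompt_function(objective)
        self.output_data = llm.get_response(prompt, history)
        history.append((self.name, self.output_data))
        return self.output_data

class OODALoop:
    def __init__(self, objective):
        self.objective = objective
        self.steps = [
            Step("observe", lambda o: f"What facts bear on: {o}?"),
            Step("orient", lambda o: f"How do the facts relate to: {o}?"),
            Step("decide", lambda o: f"What should be done about: {o}?"),
            Step("act", lambda o: f"Carry out the decision for: {o}."),
        ]

    def run(self, llm, history):
        for step in self.steps:
            step.execute(self.objective, llm, history)
        return self.steps[-1].output_data

history = []
final = OODALoop("reduce furnace downtime").run(StubLLM(), history)
```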
diff --git a/openssa.core.ooda.task.html b/openssa.core.ooda.task.html
new file mode 100644
index 000000000..154cbd9aa
--- /dev/null
+++ b/openssa.core.ooda.task.html
@@ -0,0 +1,169 @@
+
+
+
+
+
+
+ openssa.core.ooda.task module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda.task module¶
+
+-
+class openssa.core.ooda.task.Task(goal, parent=None)¶
+Bases: object
+Represents a task in the OODA (Observe, Orient, Decide, Act) loop.
+
+- Attributes
goal: The goal of the task.
+subtasks: A list of subtasks associated with the task.
+parent: The parent task of the current task.
+ooda_loop: The OODA loop to which the task belongs.
+result: The result of the task.
+resources: Additional resources associated with the task.
+
+
+
+-
+class Result(status='pending', response=None, references=None, metrics=None, additional_info=None)¶
+Bases: object
+Represents the result of a task.
+
+- Attributes
status: The status of the task result.
+response: The response generated by the task.
+references: A list of references related to the task.
+metrics: Metrics associated with the task.
+additional_info: Additional information about the task result.
+
+
+
+
+
+-
+add_subtask(subtask)¶
+
+
+
+-
+has_ooda_loop()¶
+
+
+
+-
+has_subtasks()¶
+
+
+
+-
+property ooda_loop¶
+
+
+
+-
+property result¶
+
+
+
+-
+property status¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
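The `Task` page lists its attributes (goal, subtasks, parent, result) and methods (`add_subtask`, `has_subtasks`) but no behavior. A minimal sketch of that task tree, with bodies assumed for illustration:

```python
# Hypothetical Task/Result sketch mirroring the attributes documented
# above; bodies are illustrative, not OpenSSA internals.

class Task:
    class Result:
        def __init__(self, status="pending", response=None,
                     references=None, metrics=None, additional_info=None):
            self.status = status
            self.response = response
            self.references = references or []
            self.metrics = metrics or {}
            self.additional_info = additional_info

    def __init__(self, goal, parent=None):
        self.goal = goal
        self.parent = parent
        self.subtasks = []
        self.result = Task.Result()

    def add_subtask(self, subtask):
        # Link the child back to this task so the tree can be walked upward.
        subtask.parent = self
        self.subtasks.append(subtask)

    def has_subtasks(self):
        return bool(self.subtasks)

root = Task("diagnose vibration issue")
root.add_subtask(Task("collect accelerometer data"))
root.add_subtask(Task("compare against baseline"))
```

In the documented class `result`, `status`, and `ooda_loop` are properties; the sketch uses plain attributes to keep the tree-building shape visible.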
diff --git a/openssa.core.ooda_rag.builtin_agents.html b/openssa.core.ooda_rag.builtin_agents.html
index 87f1cdbf7..da0950548 100644
--- a/openssa.core.ooda_rag.builtin_agents.html
+++ b/openssa.core.ooda_rag.builtin_agents.html
@@ -10,7 +10,7 @@
-
+
@@ -88,34 +88,52 @@
openssa.core.ooda_rag.builtin_agents module¶
--
-class openssa.core.ooda_rag.builtin_agents.AgentRole¶
-Bases: object
-
--
-ASSISTANT = 'assistant'¶
-
-
-
--
-SYSTEM = 'system'¶
-
-
-
--
-USER = 'user'¶
-
+-
+class openssa.core.ooda_rag.builtin_agents.AnswerValidator(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, answer: str = '')¶
+Bases: TaskAgent
+AnswerValidator helps to determine whether the answer is complete
+
+-
+execute(task: str = '') bool ¶
+Execute the task agent with the given task.
+
-
-class openssa.core.ooda_rag.builtin_agents.AskUserAgent(llm: ~openai.OpenAI = <openai.OpenAI object>, model: str = 'aitomatic-model', ask_user_heuristic: str = '', conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.builtin_agents.AskUserAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, ask_user_heuristic: str = '', conversation: ~typing.List | None = None)¶
Bases: TaskAgent
AskUserAgent helps to determine if user wants to provide additional information
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.CommAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, instruction: str = '')¶
+Bases: TaskAgent
+CommAgent helps update the tone, voice, format, and language of the assistant's final response
+
+-
+execute(task: str = '') str ¶
+Execute the task agent with the given task.
+
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.ContextValidator(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None, context: list | None = None)¶
+Bases: TaskAgent
+ContextValidator helps to determine whether the retrieved context is sufficient to answer the question
+
+-
+execute(task: str = '') dict ¶
Execute the task agent with the given task.
@@ -123,7 +141,7 @@
-
-class openssa.core.ooda_rag.builtin_agents.GoalAgent(llm: ~openai.OpenAI = <openai.OpenAI object>, model: str = 'aitomatic-model', conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.builtin_agents.GoalAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None)¶
Bases: TaskAgent
GoalAgent helps to determine problem statement from the conversation between user and SSA
@@ -134,6 +152,53 @@
+
+-
+class openssa.core.ooda_rag.builtin_agents.OODAPlanAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None)¶
+Bases: TaskAgent
+OODAPlanAgent helps to determine the OODA plan from the problem statement
+
+-
+execute(task: str = '') dict ¶
+Execute the task agent with the given task.
+
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.Persona¶
+Bases: object
+
+-
+ASSISTANT = 'assistant'¶
+
+
+
+-
+SYSTEM = 'system'¶
+
+
+
+-
+USER = 'user'¶
+
+
+
+
+
+-
+class openssa.core.ooda_rag.builtin_agents.SynthesizingAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None, context: list | None = None)¶
+Bases: TaskAgent
+SynthesizingAgent helps to synthesize the final answer
+
+-
+execute(task: str = '') dict ¶
+Execute the task agent with the given task.
+
+
+
+
-
class openssa.core.ooda_rag.builtin_agents.TaskAgent¶
@@ -154,7 +219,7 @@
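The builtin agents documented above all subclass `TaskAgent` and expose a single `execute(task)` entry point, with `Persona` supplying the chat-role constants. A hedged sketch of that shared pattern (the `AnswerValidator` check is a plausible stand-in; the real agent would ask its LLM whether the answer is complete):

```python
# Sketch of the Persona constants and the TaskAgent.execute() pattern
# shared by the builtin agents; logic is illustrative only.

class Persona:
    ASSISTANT = "assistant"
    SYSTEM = "system"
    USER = "user"

class TaskAgent:
    """Base class: each builtin agent wraps one LLM call behind execute()."""
    def execute(self, task: str = ""):
        raise NotImplementedError

class AnswerValidator(TaskAgent):
    def __init__(self, llm=None, answer: str = ""):
        self.llm = llm
        self.answer = answer

    def execute(self, task: str = "") -> bool:
        # Stand-in completeness check; the documented agent would prompt
        # the LLM to judge whether self.answer fully addresses the task.
        return bool(self.answer.strip())

validator = AnswerValidator(answer="42 mm")
is_complete = validator.execute("What is the bore diameter?")
```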
diff --git a/openssa.core.ooda_rag.custom.html b/openssa.core.ooda_rag.custom.html
index 2f097be98..bbbc9e07f 100644
--- a/openssa.core.ooda_rag.custom.html
+++ b/openssa.core.ooda_rag.custom.html
@@ -10,7 +10,7 @@
-
+
@@ -89,7 +89,7 @@
openssa.core.ooda_rag.custom module¶
-
-class openssa.core.ooda_rag.custom.CustomBackend(rag_llm: LLM = None, service_context=None)¶
+class openssa.core.ooda_rag.custom.CustomBackend(service_context=None)¶
Bases: Backend
-
@@ -119,7 +119,8 @@
-
query(query: str, source_path: str = '') dict ¶
-Returns a response dict with keys role, content, and citations.
+Query the index with the user input.
+Returns a tuple comprising (a) the response dicts and (b) the response object, if any.
@@ -131,7 +132,7 @@
-
-class openssa.core.ooda_rag.custom.CustomSSM(custom_rag_backend: ~openssa.core.backend.abstract_backend.AbstractBackend = None, s3_source_path: str = '', llm: ~llama_index.llms.llm.LLM = OpenAI(callback_manager=<llama_index.callbacks.base.CallbackManager object>, system_prompt=None, messages_to_prompt=<function messages_to_prompt>, completion_to_prompt=<function default_completion_to_prompt>, output_parser=None, pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'>, query_wrapper_prompt=None, model='llama2-70b', temperature=0.1, max_tokens=None, additional_kwargs={}, max_retries=3, timeout=60.0, default_headers=None, reuse_client=True, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-70b.lepton.run/api/v1', api_version=''), embed_model: ~llama_index.embeddings.openai.OpenAIEmbedding = OpenAIEmbedding(model_name='text-embedding-ada-002', embed_batch_size=10, callback_manager=<llama_index.callbacks.base.CallbackManager object>, additional_kwargs={}, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-7b.lepton.run/api/v1', api_version='', max_retries=10, timeout=60.0, default_headers=None, reuse_client=True))¶
+class openssa.core.ooda_rag.custom.CustomSSM(custom_rag_backend: AbstractBackend = None, s3_source_path: str = '')¶
Bases: RAGSSM
-
@@ -158,7 +159,7 @@
diff --git a/openssa.core.ooda_rag.heuristic.html b/openssa.core.ooda_rag.heuristic.html
index 0504e988b..fc70b4983 100644
--- a/openssa.core.ooda_rag.heuristic.html
+++ b/openssa.core.ooda_rag.heuristic.html
@@ -10,7 +10,7 @@
-
+
@@ -124,6 +124,13 @@
+
+-
+class openssa.core.ooda_rag.heuristic.HeuristicSet(**kwargs)¶
+Bases: object
+A set of heuristics.
+
+
-
class openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic(heuristic_rules: dict[str, list[str]])¶
@@ -144,7 +151,7 @@
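The heuristic module diff above adds a `HeuristicSet` class alongside the existing `TaskDecompositionHeuristic(heuristic_rules: dict[str, list[str]])`. A minimal standalone sketch of this keyword-to-subtasks pattern follows; the class names mirror the documented API, but the bodies are illustrative re-implementations, not the OpenSSA source:

```python
# Illustrative sketch of the heuristic classes documented in this diff.
class TaskDecompositionHeuristic:
    """Maps task keywords to canned lists of subtasks."""

    def __init__(self, heuristic_rules: dict[str, list[str]]):
        self.heuristic_rules = heuristic_rules

    def apply_heuristic(self, task: str) -> list[str]:
        # Return the subtasks of the first rule whose keyword appears in the task.
        task_lower = task.lower()
        for keyword, subtasks in self.heuristic_rules.items():
            if keyword in task_lower:
                return subtasks
        return []


class HeuristicSet:
    """A set of heuristics, grouped by role (cf. the new HeuristicSet above)."""

    def __init__(self, **kwargs):
        self.task_heuristics = kwargs.get("task_heuristics")


rules = {"boiler": ["Check the temperature reading.", "Check the pressure reading."]}
heuristic = TaskDecompositionHeuristic(rules)
print(heuristic.apply_heuristic("My boiler is not functioning"))
# ['Check the temperature reading.', 'Check the pressure reading.']
```

Grouping the heuristics into one `HeuristicSet` object is what lets the updated `Solver` signature (later in this diff) take a single `heuristic_set` parameter instead of separate `task_heuristics` and `ooda_heuristics` arguments.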
diff --git a/openssa.core.ooda_rag.html b/openssa.core.ooda_rag.html
index dd7287b29..0897e1382 100644
--- a/openssa.core.ooda_rag.html
+++ b/openssa.core.ooda_rag.html
@@ -10,7 +10,7 @@
-
+
@@ -92,20 +92,40 @@ Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
-AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+
GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+
TaskAgent
@@ -144,6 +164,7 @@ SubmodulesHeuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -152,10 +173,12 @@ Submodulesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -171,6 +194,7 @@ Submodulesopenssa.core.ooda_rag.ooda_rag module
Executor
@@ -180,11 +204,6 @@ SubmodulesHistory.get_history()
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -200,8 +219,14 @@ Submodulesopenssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -231,6 +266,10 @@ SubmodulesAskUserTool.execute()
+PythonCodeTool
+
ReasearchAgentTool
@@ -239,6 +278,11 @@ SubmodulesResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -256,7 +300,7 @@ Submodules
diff --git a/openssa.core.ooda_rag.notifier.html b/openssa.core.ooda_rag.notifier.html
index 2cb0d2f6f..32a4cb52e 100644
--- a/openssa.core.ooda_rag.notifier.html
+++ b/openssa.core.ooda_rag.notifier.html
@@ -10,7 +10,7 @@
-
+
@@ -92,13 +92,13 @@
class openssa.core.ooda_rag.notifier.EventTypes¶
Bases: object
@@ -111,6 +111,16 @@
SUBTASK = 'ooda-subtask'¶
+
+-
+SUBTASK_BEGIN = 'ooda-subtask-begin'¶
+
+
+
+-
+SWICTH_MODE = 'switch_mode'¶
+
+
-
TASK_RESULT = 'task_result'¶
@@ -149,7 +159,7 @@
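The notifier diff adds two event-type constants, `SUBTASK_BEGIN` and `SWICTH_MODE` (the misspelling is in the source). A small sketch of how such string constants drive event dispatch; `SimpleNotifier` here is an illustrative stand-in for the OpenSSA class of the same name:

```python
# Sketch of the notifier event pattern; bodies are illustrative, not OpenSSA source.
class EventTypes:
    SUBTASK = 'ooda-subtask'
    SUBTASK_BEGIN = 'ooda-subtask-begin'
    SWICTH_MODE = 'switch_mode'  # constant name copied verbatim from the diff
    TASK_RESULT = 'task_result'


class SimpleNotifier:
    """Records (event_type, payload) pairs; a real notifier would stream them."""

    def __init__(self):
        self.events: list[tuple[str, dict]] = []

    def notify(self, event_type: str, payload: dict) -> None:
        self.events.append((event_type, payload))


notifier = SimpleNotifier()
notifier.notify(EventTypes.SUBTASK_BEGIN, {"subtask": "check pressure"})
notifier.notify(EventTypes.TASK_RESULT, {"result": "ok"})
print(len(notifier.events))  # 2
```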
diff --git a/openssa.core.ooda_rag.ooda_rag.html b/openssa.core.ooda_rag.ooda_rag.html
index e48bbb0a8..7eb3cbf47 100644
--- a/openssa.core.ooda_rag.ooda_rag.html
+++ b/openssa.core.ooda_rag.ooda_rag.html
@@ -10,7 +10,7 @@
-
+
@@ -91,6 +91,11 @@
-
class openssa.core.ooda_rag.ooda_rag.Executor(task: str, tools: dict[str, Tool], ooda_heuristics: Heuristic, notifier: Notifier, is_main_task: bool = False)¶
Bases: object
+
+-
+check_resource_call(ooda_plan: dict) None ¶
+
+
-
execute_task(history: History) None ¶
@@ -104,12 +109,12 @@
Bases: object
-
-add_message(message: str, role: str) None ¶
+add_message(message: str, role: str, verbose: bool = True) None ¶
@@ -119,22 +124,6 @@
-
--
-class openssa.core.ooda_rag.ooda_rag.Model(llm, model)¶
-Bases: object
-
-
-
--
-parse_output(output: str) dict ¶
-
-
-
-
-
class openssa.core.ooda_rag.ooda_rag.Planner(heuristics: Heuristic, prompts: OODAPrompts, max_subtasks: int = 3, enable_generative: bool = False)¶
@@ -142,33 +131,33 @@
The Planner class is responsible for decomposing the task into subtasks.
-
-decompose_task(model: Model, task: str, history: History) list[str] ¶
+decompose_task(model: AnLLM, task: str, history: History) list[str] ¶
-
-class openssa.core.ooda_rag.ooda_rag.Solver(task_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic object>, ooda_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.DefaultOODAHeuristic object>, notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>, prompts: ~openssa.core.ooda_rag.prompts.OODAPrompts = <openssa.core.ooda_rag.prompts.OODAPrompts object>, llm=None, model: str = 'llama2', highest_priority_heuristic: str = '', enable_generative: bool = False, conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.ooda_rag.Solver(heuristic_set: ~openssa.core.ooda_rag.heuristic.HeuristicSet = <openssa.core.ooda_rag.heuristic.HeuristicSet object>, notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>, prompts: ~openssa.core.ooda_rag.prompts.OODAPrompts = <openssa.core.ooda_rag.prompts.OODAPrompts object>, llm=<openssa.utils.llms.OpenAILLM object>, enable_generative: bool = False, conversation: ~typing.List | None = None)¶
Bases: object
-
-run(input_message: str, tools: dict) str ¶
-Run the solver on input_message
+run(problem_statement: str, tools: dict) str ¶
+Run the solver on problem_statement
- Parameters:
-input_message – the input to the solver
+problem_statement – the input to the solver
tools – the tools to use in the solver
@@ -189,7 +178,7 @@
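The `Solver` changes above rename `run()`'s first argument to `problem_statement` and cap decomposition at 2 subtasks (per the updated `DECOMPOSE_INTO_SUBTASKS` prompt). A deliberately LLM-free sketch of the decompose-then-execute control flow; the real `Planner.decompose_task` calls an LLM, so the stub planner below is purely illustrative:

```python
# Illustrative control-flow sketch of Solver.run(); not the OpenSSA implementation.
def stub_decompose(problem_statement: str, max_subtasks: int = 2) -> list[str]:
    # Stand-in for Planner.decompose_task(model, task, history), which uses an LLM.
    if " and " in problem_statement:
        return problem_statement.split(" and ")[:max_subtasks]
    return []


def run(problem_statement: str, tools: dict) -> str:
    # Decompose if needed, execute each subtask with a tool, then combine.
    subtasks = stub_decompose(problem_statement) or [problem_statement]
    results = [tools["research"](subtask.strip()) for subtask in subtasks]
    return "; ".join(results)


tools = {"research": lambda task: f"answer({task})"}
print(run("internet is down and email is unreachable", tools))
# answer(internet is down); answer(email is unreachable)
```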
diff --git a/openssa.core.ooda_rag.prompts.html b/openssa.core.ooda_rag.prompts.html
index d0e65ca88..6066e4601 100644
--- a/openssa.core.ooda_rag.prompts.html
+++ b/openssa.core.ooda_rag.prompts.html
@@ -10,7 +10,7 @@
-
+
@@ -91,14 +91,44 @@
-
class openssa.core.ooda_rag.prompts.BuiltInAgentPrompt¶
Bases: object
+
+-
+ANSWER_VALIDATION = "Your role is to act as an expert in reasoning and contextual analysis. You need to evaluate if the provided answer effectively and clearly addresses the query. Respond with 'yes' if the answer is clear and confident, and 'no' if it is not. Here are some examples to guide you: \n\nExample 1:\nQuery: Can I print a part 50 cm long with this machine?\nAnswer: Given the information and the lack of detailed specifications, it is not possible to determine if the machine can print a part 50 cm long.\nEvaluation: no\n\nExample 2:\nQuery: Can I print a part 50 cm long with this machine?\nAnswer: No, it is not possible to print a part 50 cm long with this machine.\nEvaluation: yes\n\nExample 3:\nQuery: How to go to the moon?\nAnswer: I'm sorry, but based on the given context information, there is no information provided on how to go to the moon.\nEvaluation: no\n\n"¶
+
+
-
ASK_USER = 'Your task is to assist an AI assistant in formulating a question for the user. This should be based on the ongoing conversation, the presented problem statement, and a specific heuristic guideline. The assistant should formulate the question strictly based on the heuristic. If the heuristic does not apply or is irrelevant to the problem statement, return empty string for the question. Below is the heuristic guideline:\n###{heuristic}###\n\nHere is the problem statement or the user\'s current question:\n###{problem_statement}###\n\nOutput the response in JSON format with the keyword "question".'¶
+
+-
+ASK_USER_OODA = 'Your task is to assist an AI assistant in formulating a question for the user. This is done through using OODA reasoning. This should be based on the ongoing conversation, the presented problem statement, and a specific heuristic guideline. The assistant should formulate the question strictly based on the heuristic. If the heuristic does not apply or is irrelevant to the problem statement, return empty string for the question. Output the response of ooda reasoning in JSON format with the keyword "observe", "orient", "decide", "act". Example output key value:\n\n "observe": "Here, articulate your initial assessment of the task, capturing essential details and contextual elements.",\n "orient": "In this phase, analyze and synthesize the gathered information, considering different angles and strategies.",\n "decide": "Now, determine the most suitable action based on your observations and analysis.",\n "act": "The question to ask the user is here."\n \n\nBelow is the heuristic guideline:\n###{heuristic}###\n\nHere is the problem statement or the user\'s current question:\n###{problem_statement}###\n\nOutput the JSON only. Think step by step.'¶
+
+
+
+-
+COMMUNICATION = 'You are an expert in communication. Your will help to format following message with this instruction:\n###{instruction}###\n\nHere is the message:\n###{message}###\n\n'¶
+
+
+
+-
+CONTENT_VALIDATION = 'You are tasked as an expert in reasoning and contextual analysis. Your role is to evaluate whether the provided context and past conversation contain enough information to accurately respond to a given query.\n\nPlease analyze the past conversation and the following context. Then, determine if the information is sufficient to form an accurate answer. Respond only in JSON format with the keyword \'is_sufficient\'. This should be a boolean value: True if the information is adequate, and False if it is not.\n\nYour response should be in the following format:\n{{\n "is_sufficient": [True/False]\n}}\n\nDo not include any additional commentary. Focus solely on evaluating the sufficiency of the provided context and conversation.\n\nContext:\n========\n{context}\n========\n\nQuery:\n{query}\n'¶
+
+
+
+-
+GENERATE_OODA_PLAN = "As a specialist in problem-solving, your task is to utilize the OODA loop as a cognitive framework for addressing various tasks, which could include questions, commands, or messages. You have at your disposal a range of tools to aid in resolving these issues. Your responses should be methodically structured according to the OODA loop, formatted as a JSON dictionary. Each dictionary key represents one of the OODA loop's four stages: Observe, Orient, Decide, and Act. Within each stage, detail your analytical process and, when relevant, specify the execution of tools, including their names and parameters. Only output the JSON and nothing else. The proposed output format is as follows: \n{\n 'observe': {\n 'thought': 'Here, articulate your initial assessment of the task, capturing essential details and contextual elements.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // List tools and their parameters, if any are used in this stage.\n },\n 'orient': {\n 'thought': 'In this phase, analyze and synthesize the gathered information, considering different angles and strategies.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // Include any tools that aid in this analytical phase.\n },\n 'decide': {\n 'thought': 'Now, determine the most suitable action based on your observations and analysis.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // Specify tools that assist in making this decision, if applicable.\n },\n 'act': {\n 'thought': 'Finally, outline the implementation steps based on your decision, including any practical actions or responses.',\n 'calls': [{'tool_name': '', 'parameters': ''}, ...] // List any tools used in the implementation of the decision.\n }\n}"¶
+
+
-
-PROBLEM_STATEMENT = 'You are tasked with identifying the problem statement from a conversation between a user and an AI chatbot. Your focus should be on the entire context of the conversation, especially the most recent message from the user, to understand the issue comprehensively. Extract specific details that define the current concern or question posed by the user, which the assistant is expected to address. The problem statement should be concise, clear, and presented as a question, command, or task, reflecting the conversation\'s context and in the user\'s voice. In cases where the conversation is ambiguous return empty value for problem statement. Output the response in JSON format with the keyword "problem statement".\nExample 1:\nAssistant: Hello, what can I help you with today?\nUser: My boiler is not functioning, please help to troubleshoot.\nAssistant: Can you check and provide the temperature, pressure, and on-off status?\nUser: The temperature is 120°C.\n\nResponse:\n{\n "problem statement": "Can you help to troubleshoot a non-functioning boiler, given the temperature is 120°C?"\n}\n\nExample 2:\nAssistant: Hi, what can I help you with?\nUser: I don\'t know how to go to the airport\nAssistant: Where are you and which airport do you want to go to?\nUser: I\'m in New York\nResponse:\n{\n "problem statement": "How do I get to the airport from my current location in New York?"\n}\n\nExample 3 (Ambiguity):\nAssistant: How can I assist you today?\nUser: I\'m not sure what\'s wrong, but my computer is acting weird.\nAssistant: Can you describe the issues you are experiencing?\nUser: Hey I am good, the sky is blue.\n\nResponse:\n{\n "problem statement": ""\n}\n\nExample 4 (Multiple Issues):\nAssistant: What do you need help with?\nUser: My internet is down, and I can\'t access my email either.\nAssistant: Are both issues related, or did they start separately?\nUser: They started at the same time, I think.\n\nResponse:\n{\n "problem statement": "Can you help with my internet being down and also accessing my email?"\n}'¶
+PROBLEM_STATEMENT = 'You are tasked with constructing the problem statement from a conversation between a user and an AI chatbot. Your focus should be on the entire context of the conversation, especially the most recent messages from the user, to understand the issue comprehensively. Extract specific details that define the current concerns or questions posed by the user, which the assistant is expected to address. The problem statement should be clear, and constructed carefully with complete context and in the user\'s voice. Output the response in JSON format with the keyword "problem statement". Think step by step.\nExample 1:\nAssistant: Hello, what can I help you with today?\nUser: My boiler is not functioning, please help to troubleshoot.\nAssistant: Can you check and provide the temperature, pressure, and on-off status?\nUser: The temperature is 120°C.\n\nResponse:\n{\n "problem statement": "Can you help to troubleshoot a non-functioning boiler, given the temperature is 120°C?"\n}\n\nExample 2:\nAssistant: Hi, what can I help you with?\nUser: I don\'t know how to go to the airport\nAssistant: Where are you and which airport do you want to go to?\nUser: I\'m in New York\nResponse:\n{\n "problem statement": "How do I get to the airport from my current location in New York?"\n}\n\nExample 3 (Ambiguity):\nAssistant: How can I assist you today?\nUser: I\'m not sure what\'s wrong, but my computer is acting weird.\nAssistant: Can you describe the issues you are experiencing?\nUser: Hey I am good, the sky is blue.\n\nResponse:\n{\n "problem statement": ""\n}\n\nExample 4 (Multiple Issues):\nAssistant: What do you need help with?\nUser: My internet is down, and I can\'t access my email either.\nAssistant: Are both issues related, or did they start separately?\nUser: They started at the same time, I think.\n\nResponse:\n{\n "problem statement": "Can you help with my internet being down and also accessing my email?"\n}'¶
+
+
+
+-
+SYNTHESIZE_RESULT = 'As an expert in problem-solving and contextual analysis, you are to synthesize an answer for a given query. This task requires you to use only the information provided in the previous conversation and the context given below. Your answer should exclusively rely on this information as the base knowledge.\n\nYour response must be in JSON format, using the keyword \'answer\'. The format should strictly adhere to the following structure:\n{{\n "answer": "Your synthesized answer here"\n}}\n\nPlease refrain from including any additional commentary or information outside of the specified context and past conversation.\n\nContext:\n========\n{context}\n========\n\nQuery:\n{query}\n'¶
@@ -124,7 +154,7 @@
-
-DECOMPOSE_INTO_SUBTASKS = 'Given the tools available, if the task cannot be completed directly with the current tools and resources, break it down into maximum 3 smaller subtasks that can be directly addressed in order. If it does not need to be broken down, return an empty list of subtasks. Return a JSON dictionary {"subtasks": ["subtask 1", "subtask 2", ...]} each subtask should be a sentence or question not a function call.'¶
+DECOMPOSE_INTO_SUBTASKS = 'Given the tools available, if the task cannot be completed directly with the current tools and resources, break it down into maximum 2 smaller subtasks that can be directly addressed in order. If it does not need to be broken down, return an empty list of subtasks. Return a JSON dictionary {"subtasks": ["subtask 1", "subtask 2", ...]} each subtask should be a sentence or command or question not a function call. Return json only, nothing else. Think step by step.'¶
@@ -149,7 +179,7 @@
-
-SYNTHESIZE_RESULT = "As an expert in reasoning, you are examining a dialogue involving a user, an assistant, and a system. Your task is to synthesize the final answer to the user's initial question based on this conversation. This is the concluding instruction and must be followed with precision. You will derive the final response by critically analyzing all the messages in the conversation and performing any necessary calculations. Be aware that some contributions from the assistant may not be relevant or could be misleading due to being based on incomplete information. {heuristic} Exercise discernment in selecting the appropriate messages to construct a logical and step-by-step reasoning process."¶
+SYNTHESIZE_RESULT = "As an expert in reasoning, you are examining a dialogue involving a user, an assistant, and a system. Your task is to synthesize the final answer to the user's initial question based on this conversation. This is the concluding instruction and must be followed with precision. You will derive the final response by critically analyzing all the messages in the conversation and performing any necessary calculations. Be aware that some contributions from the assistant may not be relevant or could be misleading due to being based on incomplete information. {heuristic} If the conversation does not provide sufficient information to synthesize the answer then admit you cannot produce accurate answer. Do not use any information outside of the conversation context. Exercise discernment in selecting the appropriate messages to construct a logical and step-by-step reasoning process."¶
@@ -161,7 +191,7 @@
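The prompt constants above are `str.format` templates: named placeholders such as `{heuristic}` and `{problem_statement}` get filled at call time, while literal JSON braces are escaped as `{{` `}}` (as in `CONTENT_VALIDATION`). Filling one works as below; the short template is a stand-in for the full `ASK_USER` prompt, not its actual text:

```python
# Stand-in template illustrating how the BuiltInAgentPrompt strings are filled.
ASK_USER = (
    'Formulate a question based on this heuristic:\n###{heuristic}###\n\n'
    'Problem statement:\n###{problem_statement}###\n\n'
    'Output the response in JSON format with the keyword "question".'
)

prompt = ASK_USER.format(
    heuristic="Always confirm the machine model first.",
    problem_statement="Can I print a part 50 cm long with this machine?",
)
print(prompt)
```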
diff --git a/openssa.core.ooda_rag.query_rewritting_engine.html b/openssa.core.ooda_rag.query_rewritting_engine.html
new file mode 100644
index 000000000..1b451fac3
--- /dev/null
+++ b/openssa.core.ooda_rag.query_rewritting_engine.html
@@ -0,0 +1,136 @@
+
+
+
+
+
+
+ openssa.core.ooda_rag.query_rewritting_engine module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.ooda_rag.query_rewritting_engine module¶
+Query Rewriting Retriever Pack.
+
+-
+class openssa.core.ooda_rag.query_rewritting_engine.QueryRewritingRetrieverPack(index: VectorStoreIndex = None, chunk_size: int = 1024, vector_similarity_top_k: int = 5, fusion_similarity_top_k: int = 10, service_context: ServiceContext = None, **kwargs: Any)¶
+Bases: BaseLlamaPack
+Query rewriting retriever pack.
+Rewrite the query into multiple queries and
+rerank the results.
+
+-
+get_modules() Dict[str, Any] ¶
+Get modules.
+
+
+
+-
+retrieve(query_str: str) Any ¶
+Retrieve.
+
+
+
+-
+run(*args: Any, **kwargs: Any) Any ¶
+Run the pipeline.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
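`QueryRewritingRetrieverPack` rewrites a query into multiple variants, retrieves for each, and reranks the fused results (`fusion_similarity_top_k`). The rewriting and retrieval are delegated to an LLM and a `VectorStoreIndex`; the fusion step alone can be sketched library-free with reciprocal-rank fusion, a common choice for this pattern (assumed here, not confirmed by the diff):

```python
# Library-free sketch of the rank-fusion step behind a query-rewriting retriever.
def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    # Each document scores 1/(k + rank) per list it appears in; higher is better.
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)


# Stub: rankings retrieved for two rewritten variants of one query.
results_per_variant = [
    ["doc_a", "doc_b", "doc_c"],  # results for variant 1
    ["doc_b", "doc_d"],           # results for variant 2
]
print(reciprocal_rank_fusion(results_per_variant))
# ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

`doc_b` wins because it appears in both variant rankings, which is exactly the signal multi-query rewriting is meant to surface.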
diff --git a/openssa.core.ooda_rag.solver.html b/openssa.core.ooda_rag.solver.html
index b3207f9af..b7b6ff38c 100644
--- a/openssa.core.ooda_rag.solver.html
+++ b/openssa.core.ooda_rag.solver.html
@@ -10,7 +10,7 @@
-
+
@@ -89,16 +89,21 @@
openssa.core.ooda_rag.solver module¶
-
-class openssa.core.ooda_rag.solver.OodaSSA(task_heuristics, highest_priority_heuristic: str = '', ask_user_heuristic: str = '', llm=<openai.OpenAI object>, rag_llm=OpenAI(callback_manager=<llama_index.callbacks.base.CallbackManager object>, system_prompt=None, messages_to_prompt=<function messages_to_prompt>, completion_to_prompt=<function default_completion_to_prompt>, output_parser=None, pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'>, query_wrapper_prompt=None, model='llama2-70b', temperature=0.1, max_tokens=None, additional_kwargs={}, max_retries=3, timeout=60.0, default_headers=None, reuse_client=True, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-70b.lepton.run/api/v1', api_version=''), embed_model=OpenAIEmbedding(model_name='text-embedding-ada-002', embed_batch_size=10, callback_manager=<llama_index.callbacks.base.CallbackManager object>, additional_kwargs={}, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-7b.lepton.run/api/v1', api_version='', max_retries=10, timeout=60.0, default_headers=None, reuse_client=True), model='aitomatic-model')¶
+class openssa.core.ooda_rag.solver.OodaSSA(task_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic object>, highest_priority_heuristic: str = '', ask_user_heuristic: str = '', llm=<openssa.utils.llms.OpenAILLM object>, research_documents_tool: ~openssa.core.ooda_rag.tools.Tool = None, enable_generative: bool = False)¶
Bases: object
-
-activate_resources(folder_path: str) None ¶
+activate_resources(folder_path: Path | str, re_index: bool = False) None ¶
+
+
+
+-
+get_ask_user_question(problem_statement: str) str ¶
@@ -110,7 +115,7 @@
diff --git a/openssa.core.ooda_rag.tools.html b/openssa.core.ooda_rag.tools.html
index a90800fc4..c4d047718 100644
--- a/openssa.core.ooda_rag.tools.html
+++ b/openssa.core.ooda_rag.tools.html
@@ -10,7 +10,7 @@
-
+
@@ -94,11 +94,11 @@
A tool for asking the user a question.
-
-execute(question: str) str ¶
+execute(task: str) str ¶
Ask the user for personal information.
- Parameters:
-(str) (question) – The question to ask the user.
+(str) (task) – The question to ask the user.
- Return (str):
The user’s answer to the question.
@@ -108,6 +108,27 @@
+
+-
+class openssa.core.ooda_rag.tools.PythonCodeTool¶
+Bases: Tool
+A tool for executing python code.
+
+-
+execute(task: str) str ¶
+Execute python code.
+
+- Parameters:
+(str) (task) – The python code to execute.
+
+- Return (str):
+The result of the code execution.
+
+
+
+
+
+
-
class openssa.core.ooda_rag.tools.ReasearchAgentTool(agent: RAGSSM)¶
@@ -115,13 +136,13 @@
A tool for querying a document base for information.
-
-execute(question: str) str ¶
+execute(task: str) dict ¶
Query a document base for factual information.
- Parameters:
-(str) (question) – The question to ask the document base.
+(str) (task) – The question to ask the document base.
-- Return (str):
+- Return (dict):
The answer to the question.
@@ -136,18 +157,44 @@
A tool for querying a document base for information.
+
+
+
+
+-
+class openssa.core.ooda_rag.tools.ResearchQueryEngineTool(query_engine)¶
+Bases: Tool
+A tool for querying a document base for information.
+
+-
+execute(question: str) dict ¶
Query a document base for factual information.
- Parameters:
(str) (question) – The question to ask the document base.
-- Return (str):
+- Return (dict):
The answer to the question.
+
+-
+get_citations(metadata: dict)¶
+
+
@@ -163,7 +210,7 @@
-
-abstract execute(question: str)¶
+abstract execute(task: str)¶
Execute the tool with the given arguments.
@@ -176,7 +223,7 @@
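The tools diff renames the abstract `execute()` parameter from `question` to `task` and adds `PythonCodeTool`. A minimal standalone sketch of the `Tool` contract; the class names mirror the documented API, but the bodies are illustrative (the sketch evals a single expression rather than executing arbitrary code):

```python
# Illustrative sketch of the Tool abstract contract from this diff.
from abc import ABC, abstractmethod


class Tool(ABC):
    def __init__(self, description: str):
        self._description = description

    @property
    def description(self) -> str:
        return self._description

    @abstractmethod
    def execute(self, task: str):
        """Execute the tool with the given arguments."""


class PythonCodeTool(Tool):
    """A tool for executing python code (sketch: evaluates one expression)."""

    def __init__(self):
        super().__init__("Evaluates a python expression and returns the result.")

    def execute(self, task: str) -> str:
        # The documented tool executes code; this safe stand-in evals an expression.
        return str(eval(task))


print(PythonCodeTool().execute("2 ** 10"))  # 1024
```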
diff --git a/openssa.core.prompts.html b/openssa.core.prompts.html
index 21f4b4dab..60f10a0d6 100644
--- a/openssa.core.prompts.html
+++ b/openssa.core.prompts.html
@@ -10,7 +10,7 @@
-
+
@@ -91,7 +91,7 @@
-
class openssa.core.prompts.Prompts¶
Bases: object
-The Prompts class provides a way to retrieve and format prompts in the OpenSSM project. The prompts are stored in a nested dictionary `self.
+The Prompts class provides a way to retrieve and format prompts in the OpenSSA project. The prompts are stored in a nested dictionary `self.
Usage Guide:
-
@@ -108,7 +108,7 @@
diff --git a/openssa.core.rag_ooda.html b/openssa.core.rag_ooda.html
new file mode 100644
index 000000000..826d17907
--- /dev/null
+++ b/openssa.core.rag_ooda.html
@@ -0,0 +1,180 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.rag_ooda namespace¶
+
+Subpackages¶
+
+
+
+Submodules¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.rag_ooda.html b/openssa.core.rag_ooda.rag_ooda.html
new file mode 100644
index 000000000..fd835429c
--- /dev/null
+++ b/openssa.core.rag_ooda.rag_ooda.html
@@ -0,0 +1,159 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.rag_ooda module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.rag_ooda.rag_ooda module¶
+
+-
+class openssa.core.rag_ooda.rag_ooda.RagOODA(resources: list[~openssa.core.rag_ooda.resources.rag_resource.RagResource | ~openssa.core.ooda_rag.tools.Tool] = None, conversation_id: str = 'f7b13f04-d300-4bd8-82b6-3c4001a38874', notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>)¶
+Bases: object
+
+-
+chat(query: str) str ¶
+
+
+
+-
+chat_with_agent(query: str) str ¶
+
+
+
+-
+get_answer(query: str, context: list) str ¶
+
+
+
+-
+classmethod get_conversation(conversation_id: str) list ¶
+
+
+
+-
+is_answer_complete(query: str, answer: str) bool ¶
+
+
+
+-
+is_sufficient(query: str, context: list) bool ¶
+
+
+
+
+
+-
+retrieve_context(query) list ¶
+
+
+
+-
+classmethod set_conversation(conversation_id: str, conversation: list) None ¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
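`RagOODA` exposes `retrieve_context()`, `is_sufficient()`, `chat()`, and `chat_with_agent()`, which suggests a gating flow: answer straight from RAG when the retrieved context suffices, otherwise escalate to the OODA agent. The real sufficiency check calls an LLM with the `CONTENT_VALIDATION` prompt; the stubs below stand in for it, so this sketch shows control flow only:

```python
# Control-flow sketch of the RagOODA chat gating; stubs replace the LLM calls.
def retrieve_context(query: str) -> list[str]:
    corpus = {"boiler": ["Boiler manual: max temperature 130 C."]}
    return [doc for key, docs in corpus.items() if key in query for doc in docs]


def is_sufficient(query: str, context: list[str]) -> bool:
    # Stand-in for the LLM 'is_sufficient' JSON check in CONTENT_VALIDATION.
    return bool(context)


def chat(query: str) -> str:
    context = retrieve_context(query)
    if is_sufficient(query, context):
        return f"RAG answer from: {context[0]}"
    return "escalated to OODA agent"


print(chat("what is the boiler max temperature?"))
print(chat("how do I get to the airport?"))
```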
diff --git a/openssa.core.rag_ooda.resources.dense_x.base.html b/openssa.core.rag_ooda.resources.dense_x.base.html
new file mode 100644
index 000000000..d62985dfe
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.dense_x.base.html
@@ -0,0 +1,138 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.resources.dense_x.base module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.rag_ooda.resources.dense_x.base module¶
+
+-
+class openssa.core.rag_ooda.resources.dense_x.base.DenseXRetrievalPack(documents: ~typing.List[~llama_index.core.schema.Document], proposition_llm: ~llama_index.core.llms.llm.LLM | None = None, query_llm: ~llama_index.core.llms.llm.LLM | None = None, embed_model: ~llama_index.core.base.embeddings.base.BaseEmbedding | None = None, text_splitter: ~llama_index.core.node_parser.interface.TextSplitter = SentenceSplitter(include_metadata=True, include_prev_next_rel=True, callback_manager=<llama_index.core.callbacks.base.CallbackManager object>, id_func=<function default_id_func>, chunk_size=1024, chunk_overlap=200, separator=' ', paragraph_separator='\n\n\n', secondary_chunking_regex='[^,.;。?!]+[,.;。?!]?'), similarity_top_k: int = 4)¶
+Bases: BaseLlamaPack
+
+-
+get_modules() Dict[str, Any] ¶
+Get modules.
+
+
+
+-
+run(query_str: str, **kwargs: Any) Any ¶
+Run the pipeline.
+
+
+
+
+
+-
+openssa.core.rag_ooda.resources.dense_x.base.load_nodes_dict(nodes_cache_path: str) Dict[str, TextNode] ¶
+Load nodes dict.
+
+
+
+-
+openssa.core.rag_ooda.resources.dense_x.base.store_nodes_dict(nodes_dict: Dict[str, TextNode], nodes_cache_path) None ¶
+Store nodes dict.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
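`store_nodes_dict` / `load_nodes_dict` cache the proposition nodes that `DenseXRetrievalPack` generates, so repeat runs skip the expensive LLM pass. A file-cache sketch with plain dicts standing in for llama_index `TextNode` objects (the real helpers serialize `TextNode`s; this shape is illustrative):

```python
# Illustrative node-cache sketch; plain dicts stand in for TextNode objects.
import json
import tempfile
from pathlib import Path


def store_nodes_dict(nodes_dict: dict, nodes_cache_path: str) -> None:
    Path(nodes_cache_path).write_text(json.dumps(nodes_dict))


def load_nodes_dict(nodes_cache_path: str) -> dict:
    return json.loads(Path(nodes_cache_path).read_text())


with tempfile.TemporaryDirectory() as tmp:
    cache = str(Path(tmp) / "nodes.json")
    store_nodes_dict({"node-1": {"text": "a proposition"}}, cache)
    print(load_nodes_dict(cache))  # {'node-1': {'text': 'a proposition'}}
```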
diff --git a/openssa.core.rag_ooda.resources.dense_x.dense_x.html b/openssa.core.rag_ooda.resources.dense_x.dense_x.html
new file mode 100644
index 000000000..dea9531f5
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.dense_x.dense_x.html
@@ -0,0 +1,113 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.resources.dense_x.dense_x module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.rag_ooda.resources.dense_x.dense_x module¶
+
+-
+openssa.core.rag_ooda.resources.dense_x.dense_x.load_dense_x(data_dir: str, cache_dir: str, nodes_cache_path: str) RagResource ¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.dense_x.html b/openssa.core.rag_ooda.resources.dense_x.html
new file mode 100644
index 000000000..c8678e063
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.dense_x.html
@@ -0,0 +1,129 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.resources.dense_x namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.html b/openssa.core.rag_ooda.resources.html
new file mode 100644
index 000000000..67a24fc91
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.html
@@ -0,0 +1,156 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.resources namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.rag_ooda.resources namespace¶
+
+Subpackages¶
+
+
+
+Submodules¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.rag_resource.html b/openssa.core.rag_ooda.resources.rag_resource.html
new file mode 100644
index 000000000..b67edee9b
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.rag_resource.html
@@ -0,0 +1,114 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.resources.rag_resource module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.rag_ooda.resources.rag_resource module¶
+
+-
+class openssa.core.rag_ooda.resources.rag_resource.RagResource(query_engine: BaseQueryEngine, retriever: BaseRetriever)¶
+Bases: object
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.standard_vi.html b/openssa.core.rag_ooda.resources.standard_vi.html
new file mode 100644
index 000000000..86774a3ae
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.standard_vi.html
@@ -0,0 +1,119 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.resources.standard_vi namespace
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html b/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html
new file mode 100644
index 000000000..527bd36b7
--- /dev/null
+++ b/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html
@@ -0,0 +1,113 @@
+
+
+
+
+
+
+ openssa.core.rag_ooda.resources.standard_vi.standard_vi module
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ openssa
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+openssa.core.rag_ooda.resources.standard_vi.standard_vi module¶
+
+-
+openssa.core.rag_ooda.resources.standard_vi.standard_vi.load_standard_vi(data_dir: str, cache_dir: str) RagResource ¶
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/openssa.core.slm.abstract_slm.html b/openssa.core.slm.abstract_slm.html
index a19146fad..bb513a8c5 100644
--- a/openssa.core.slm.abstract_slm.html
+++ b/openssa.core.slm.abstract_slm.html
@@ -10,7 +10,7 @@
-
+
@@ -128,7 +128,7 @@
diff --git a/openssa.core.slm.base_slm.html b/openssa.core.slm.base_slm.html
index 1e9195505..449185928 100644
--- a/openssa.core.slm.base_slm.html
+++ b/openssa.core.slm.base_slm.html
@@ -10,7 +10,7 @@
-
+
@@ -140,7 +140,7 @@
diff --git a/openssa.core.slm.html b/openssa.core.slm.html
index 31243cd2d..c75e62a18 100644
--- a/openssa.core.slm.html
+++ b/openssa.core.slm.html
@@ -10,7 +10,7 @@
-
+
@@ -160,7 +160,7 @@ Submodules
diff --git a/openssa.core.slm.memory.conversation_db.html b/openssa.core.slm.memory.conversation_db.html
index 9f575ac54..469d5d7bd 100644
--- a/openssa.core.slm.memory.conversation_db.html
+++ b/openssa.core.slm.memory.conversation_db.html
@@ -10,7 +10,7 @@
-
+
@@ -125,7 +125,7 @@
diff --git a/openssa.core.slm.memory.html b/openssa.core.slm.memory.html
index 17889c366..0b1f7a9b4 100644
--- a/openssa.core.slm.memory.html
+++ b/openssa.core.slm.memory.html
@@ -10,7 +10,7 @@
-
+
@@ -123,7 +123,7 @@ Submodules
diff --git a/openssa.core.slm.memory.sqlite_conversation_db.html b/openssa.core.slm.memory.sqlite_conversation_db.html
index 6dc3007a0..0d08aa3ed 100644
--- a/openssa.core.slm.memory.sqlite_conversation_db.html
+++ b/openssa.core.slm.memory.sqlite_conversation_db.html
@@ -10,7 +10,7 @@
-
+
@@ -125,7 +125,7 @@
diff --git a/openssa.core.ssa.agent.html b/openssa.core.ssa.agent.html
new file mode 100644
index 000000000..7e0398503
--- /dev/null
+++ b/openssa.core.ssa.agent.html
@@ -0,0 +1,144 @@
+ openssa.core.ssa.agent module
+openssa.core.ssa.agent module¶
+
+-
+class openssa.core.ssa.agent.Agent(llm=<openssa.utils.llms.OpenAILLM object>, resources=None, short_term_memory=None, long_term_memory=None, heuristics=None)¶
+Bases: object
+
+-
+run_ooda_loop(task, heuristic)¶
+
+
+
+-
+select_optimal_heuristic(task)¶
+
+
+
+-
+solve(objective)¶
+
+
+
+-
+solve_task(task)¶
+
+
+
+-
+subtask(task, heuristic)¶
+
+
+
+-
+update_memory(key, value, memory_type='short')¶
+
+
+
\ No newline at end of file
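The new `openssa.core.ssa.agent.Agent` page lists `solve()`, `select_optimal_heuristic()`, `run_ooda_loop()` and `update_memory()`. A minimal sketch of how those methods plausibly compose — method names follow the generated page, but the selection rule, return values, and memory layout below are illustrative assumptions, not the actual openssa logic:

```python
class Agent:
    """Toy stand-in mirroring the documented Agent surface:
    solve() picks a heuristic for the objective, runs an OODA loop with it,
    and records the result in short-term memory."""

    def __init__(self, heuristics=None):
        # heuristics: trigger word -> heuristic text (hypothetical encoding).
        self.heuristics = heuristics or {"default": "answer directly"}
        self.short_term_memory: dict = {}
        self.long_term_memory: dict = {}

    def select_optimal_heuristic(self, task: str) -> str:
        # Placeholder rule: first heuristic whose trigger word appears in the task.
        for trigger, heuristic in self.heuristics.items():
            if trigger in task:
                return heuristic
        return self.heuristics.get("default", "answer directly")

    def run_ooda_loop(self, task: str, heuristic: str) -> str:
        # Observe -> Orient -> Decide -> Act, collapsed into one string here.
        return f"acted on '{task}' using '{heuristic}'"

    def update_memory(self, key, value, memory_type="short"):
        memory = self.short_term_memory if memory_type == "short" else self.long_term_memory
        memory[key] = value

    def solve(self, objective: str) -> str:
        heuristic = self.select_optimal_heuristic(objective)
        result = self.run_ooda_loop(objective, heuristic)
        self.update_memory(objective, result)
        return result
```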
diff --git a/openssa.core.ssa.html b/openssa.core.ssa.html
index 6506bf069..7e45c6721 100644
--- a/openssa.core.ssa.html
+++ b/openssa.core.ssa.html
@@ -10,7 +10,7 @@
-
+
@@ -91,6 +91,18 @@
Submodules¶
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -136,8 +148,8 @@ SubmodulesSSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -154,7 +166,7 @@ Submodules
diff --git a/openssa.core.ssa.rag_ssa.html b/openssa.core.ssa.rag_ssa.html
index e1c45752a..4928ee2f5 100644
--- a/openssa.core.ssa.rag_ssa.html
+++ b/openssa.core.ssa.rag_ssa.html
@@ -10,7 +10,7 @@
-
+
@@ -139,7 +139,7 @@
diff --git a/openssa.core.ssa.ssa.html b/openssa.core.ssa.ssa.html
index a3e458b67..0698062d8 100644
--- a/openssa.core.ssa.ssa.html
+++ b/openssa.core.ssa.ssa.html
@@ -10,7 +10,7 @@
-
+
@@ -191,7 +191,7 @@
diff --git a/openssa.core.ssa.ssa_service.html b/openssa.core.ssa.ssa_service.html
index 866f9610e..08b257eb9 100644
--- a/openssa.core.ssa.ssa_service.html
+++ b/openssa.core.ssa.ssa_service.html
@@ -10,7 +10,7 @@
-
+
@@ -131,13 +131,13 @@
class openssa.core.ssa.ssa_service.SSAService¶
Bases: AbstractSSAService
@@ -161,7 +161,7 @@
diff --git a/openssa.core.ssm.abstract_ssm.html b/openssa.core.ssm.abstract_ssm.html
index 9a1d9c5d9..e5408458d 100644
--- a/openssa.core.ssm.abstract_ssm.html
+++ b/openssa.core.ssm.abstract_ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -204,7 +204,7 @@
diff --git a/openssa.core.ssm.abstract_ssm_builder.html b/openssa.core.ssm.abstract_ssm_builder.html
index b69cb84ca..629585dcb 100644
--- a/openssa.core.ssm.abstract_ssm_builder.html
+++ b/openssa.core.ssm.abstract_ssm_builder.html
@@ -10,7 +10,7 @@
-
+
@@ -137,7 +137,7 @@
diff --git a/openssa.core.ssm.base_ssm.html b/openssa.core.ssm.base_ssm.html
index a8ba40ffe..42e048d19 100644
--- a/openssa.core.ssm.base_ssm.html
+++ b/openssa.core.ssm.base_ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -93,7 +93,7 @@
Bases: AbstractSSM
@@ -262,7 +262,7 @@
diff --git a/openssa.core.ssm.base_ssm_builder.html b/openssa.core.ssm.base_ssm_builder.html
index 2258967c7..e1eaabcd1 100644
--- a/openssa.core.ssm.base_ssm_builder.html
+++ b/openssa.core.ssm.base_ssm_builder.html
@@ -10,7 +10,7 @@
-
+
@@ -142,7 +142,7 @@
diff --git a/openssa.core.ssm.html b/openssa.core.ssm.html
index 8b91d8d2c..7218debe3 100644
--- a/openssa.core.ssm.html
+++ b/openssa.core.ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -193,7 +193,7 @@ Submodules
diff --git a/openssa.core.ssm.rag_ssm.html b/openssa.core.ssm.rag_ssm.html
index d752327c8..34346a3ae 100644
--- a/openssa.core.ssm.rag_ssm.html
+++ b/openssa.core.ssm.rag_ssm.html
@@ -10,7 +10,7 @@
-
+
@@ -144,7 +144,7 @@
diff --git a/openssa.html b/openssa.html
index 7e89c2648..f6810df8d 100644
--- a/openssa.html
+++ b/openssa.html
@@ -10,7 +10,7 @@
-
+
@@ -92,7 +92,6 @@ Subpackages
- openssa.contrib package
-StreamlitSSAProbSolver
- Subpackages
- openssa.contrib.streamlit_ssa_prob_solver package
SSAProbSolver
@@ -115,6 +114,7 @@ SubpackagesSSAProbSolver.ssa_solve()
+update_multiselect_style()
@@ -258,23 +258,113 @@ Subpackagesopenssa.core.ooda namespace
+
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-AgentRole
-AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+
GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+
TaskAgent
@@ -313,6 +403,7 @@ SubpackagesHeuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -321,10 +412,12 @@ Subpackagesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -340,6 +433,7 @@ Subpackagesopenssa.core.ooda_rag.ooda_rag module
Executor
@@ -349,11 +443,6 @@ SubpackagesHistory.get_history()
-Model
-
Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -369,8 +458,14 @@ Subpackagesopenssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
@@ -387,9 +482,19 @@ Subpackagesopenssa.core.ooda_rag.query_rewritting_engine module
+
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -400,6 +505,10 @@ SubpackagesAskUserTool.execute()
+PythonCodeTool
+
ReasearchAgentTool
@@ -408,6 +517,11 @@ SubpackagesResearchDocumentsTool.execute()
+ResearchQueryEngineTool
+
Tool
Tool.description
Tool.execute()
@@ -419,6 +533,58 @@ Subpackagesopenssa.core.rag_ooda namespace
+
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
@@ -482,6 +648,18 @@ Subpackagesopenssa.core.ssa namespace
- Submodules
+- openssa.core.ssa.agent module
+
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -527,8 +705,8 @@ SubpackagesSSAService
+SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
@@ -660,6 +838,7 @@ SubpackagesAPIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -698,6 +877,7 @@ SubpackagesAPIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -716,7 +896,6 @@ Subpackagesopenssa.integrations.llama_index.backend module
Backend
@@ -741,6 +920,7 @@ SubpackagesAPIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
@@ -768,6 +948,7 @@ SubpackagesAbstractAPIContext.key
AbstractAPIContext.max_tokens
AbstractAPIContext.model
+AbstractAPIContext.model_computed_fields
AbstractAPIContext.model_config
AbstractAPIContext.model_fields
AbstractAPIContext.temperature
@@ -796,22 +977,24 @@ Subpackagesopenssa.utils.deprecated namespace
+- Submodules
-- Submodules
-- openssa.utils.aitomatic_llm_config module
+- Submodules
- openssa.utils.config module
Config
Config.AZURE_GPT4_ENGINE
Config.AZURE_GPT4_MODEL
+Config.AZURE_OPENAI_API_KEY
+Config.AZURE_OPENAI_API_URL
Config.DEBUG
+Config.DEFAULT_TEMPERATURE
Config.FALCON7B_API_KEY
Config.FALCON7B_API_URL
Config.LEPTONAI_API_KEY
Config.LEPTONAI_API_URL
+Config.LEPTON_API_KEY
+Config.LEPTON_API_URL
Config.OPENAI_API_KEY
Config.OPENAI_API_URL
+Config.US_AZURE_OPENAI_API_BASE
+Config.US_AZURE_OPENAI_API_KEY
Config.setenv()
- openssa.utils.fs module
-DirOrFilePath
FileSource
FileSource.file_paths()
FileSource.fs
@@ -849,60 +1038,38 @@ Subpackagesopenssa.utils.llm_config module
-AitomaticBaseURL
-
-LLMConfig
-LLMConfig.get_aito_embeddings()
-LLMConfig.get_aitomatic_13b()
-LLMConfig.get_aitomatic_yi_34b()
-LLMConfig.get_azure_embed_model()
-LLMConfig.get_azure_jp_api_key()
-LLMConfig.get_default_embed_model()
-LLMConfig.get_intel_neural_chat_7b()
-LLMConfig.get_llama_2_api_key()
-LLMConfig.get_llm()
-LLMConfig.get_llm_azure_jp_35_16k()
-LLMConfig.get_llm_azure_jp_4_32k()
-LLMConfig.get_llm_llama_2_70b()
-LLMConfig.get_llm_llama_2_7b()
-LLMConfig.get_llm_openai_35_turbo()
-LLMConfig.get_llm_openai_35_turbo_0613()
-LLMConfig.get_llm_openai_35_turbo_1106()
-LLMConfig.get_llm_openai_4()
-LLMConfig.get_openai_api_key()
-LLMConfig.get_openai_embed_model()
-LLMConfig.get_service_context_azure_gpt4()
-LLMConfig.get_service_context_azure_gpt4_32k()
-LLMConfig.get_service_context_azure_jp_35()
-LLMConfig.get_service_context_azure_jp_35_16k()
-LLMConfig.get_service_context_llama_2_70b()
-LLMConfig.get_service_context_llama_2_7b()
-LLMConfig.get_service_context_openai_35_turbo()
-LLMConfig.get_service_context_openai_35_turbo_1106()
-
-
-LlmBaseModel
-
-LlmModelSize
-LlmModelSize.gpt35
-LlmModelSize.gpt4
-LlmModelSize.llama2_13b
-LlmModelSize.llama2_70b
-LlmModelSize.llama2_7b
-LlmModelSize.neutral_chat_7b
-LlmModelSize.yi_34
+- openssa.utils.llms module
+AitomaticLLM
+
+AnLLM
+
+AzureLLM
+
+OpenAILLM
@@ -917,6 +1084,30 @@ Subpackagesopenssa.utils.rag_service_contexts module
+ServiceContextManager
+ServiceContextManager.get_aitomatic_sc()
+ServiceContextManager.get_azure_jp_openai_35_turbo_sc()
+ServiceContextManager.get_azure_openai_4_0125_preview_sc()
+ServiceContextManager.get_azure_openai_sc()
+ServiceContextManager.get_openai_35_turbo_sc()
+ServiceContextManager.get_openai_4_0125_preview_sc()
+ServiceContextManager.get_openai_sc()
+
+
+
+
+- openssa.utils.usage_logger module
+
- openssa.utils.utils module
Utils
Utils.canonicalize_discuss_result()
@@ -948,7 +1139,7 @@ Subpackages
diff --git a/openssa.integrations.api_context.html b/openssa.integrations.api_context.html
index c21731712..2dc213d62 100644
--- a/openssa.integrations.api_context.html
+++ b/openssa.integrations.api_context.html
@@ -10,7 +10,7 @@
-
+
@@ -127,6 +127,12 @@
model: str | None¶
+
+-
+model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}¶
+A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
+
+
-
model_config: ClassVar[ConfigDict] = {}¶
@@ -165,7 +171,7 @@
- Subpackages
- openssa.contrib package +
- openssa.core.ooda namespace +
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-
-
AgentRole
+HeuristicSet
TaskDecompositionHeuristic
@@ -321,10 +387,12 @@openssa¶
- openssa.core.ooda_rag.notifier module
- openssa.core.ooda_rag.ooda_rag module -
Model
-Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -369,8 +433,14 @@
openssa¶
- openssa.core.ooda_rag.prompts module
BuiltInAgentPrompt
OODAPrompts
-
@@ -387,9 +457,19 @@
openssa¶
+ - openssa.core.ooda_rag.query_rewritting_engine module +
- openssa.core.ooda_rag.solver module +
PythonCodeTool
+ReasearchAgentTool
@@ -408,6 +492,11 @@openssa¶
ResearchDocumentsTool.execute()
+ ResearchQueryEngineTool
+Tool
Tool.description
Tool.execute()
@@ -419,6 +508,42 @@
openssa¶
+ - openssa.core.ooda_rag.builtin_agents module
- openssa.core.rag_ooda namespace
-
+
- Subpackages + +
- Submodules + +
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
-
@@ -462,6 +587,18 @@
openssa¶
- openssa.core.ssa namespace
- Submodules
-
+
- openssa.core.ssa.agent module +
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -507,8 +644,8 @@
openssa¶
SSAService
-
+
SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
+APIContext.gpt4_defaults()
APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
+APIContext.gpt4_defaults()
APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
openssa¶
- openssa.integrations.llama_index.backend module
Backend
@@ -721,6 +859,7 @@openssa¶
APIContext.from_defaults()
APIContext.gpt3_defaults()
+APIContext.gpt4_defaults()
APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
openssa¶
AbstractAPIContext.key
AbstractAPIContext.max_tokens
+AbstractAPIContext.model
AbstractAPIContext.model_computed_fields
AbstractAPIContext.model_config
AbstractAPIContext.model_fields
@@ -776,22 +916,24 @@AbstractAPIContext.temperature
openssa¶
+ - Submodules
- openssa.utils.deprecated namespace
-
+
- Submodules -
- Submodules
-
-
- openssa.utils.aitomatic_llm_config module +
- Submodules
- openssa.utils.config module
Config
-
+
Config.AITOMATIC_API_KEY
+Config.AITOMATIC_API_URL
+Config.AITOMATIC_API_URL_70B
+Config.AITOMATIC_API_URL_7B
+Config.AZURE_API_VERSION
Config.AZURE_GPT3_API_KEY
Config.AZURE_GPT3_API_URL
Config.AZURE_GPT3_API_VERSION
@@ -802,20 +944,26 @@ Config.AZURE_GPT4_API_VERSION
Config.AZURE_GPT4_ENGINE
Config.AZURE_GPT4_MODEL
+Config.AZURE_OPENAI_API_KEY
+Config.AZURE_OPENAI_API_URL
Config.DEBUG
+Config.DEFAULT_TEMPERATURE
Config.FALCON7B_API_KEY
Config.FALCON7B_API_URL
Config.LEPTONAI_API_KEY
Config.LEPTONAI_API_URL
+Config.LEPTON_API_KEY
+Config.LEPTON_API_URL
Config.OPENAI_API_KEY
Config.OPENAI_API_URL
+Config.US_AZURE_OPENAI_API_BASE
+Config.US_AZURE_OPENAI_API_KEY
Config.setenv()
openssa¶
- openssa.utils.fs module
-
-
DirOrFilePath
FileSource
FileSource.file_paths()
FileSource.fs
@@ -829,60 +977,38 @@
openssa¶
- - openssa.utils.llm_config module
-
-
AitomaticBaseURL
-
-LLMConfig
-
-
LLMConfig.get_aito_embeddings()
-LLMConfig.get_aitomatic_13b()
-LLMConfig.get_aitomatic_yi_34b()
-LLMConfig.get_azure_embed_model()
-LLMConfig.get_azure_jp_api_key()
-LLMConfig.get_default_embed_model()
-LLMConfig.get_intel_neural_chat_7b()
-LLMConfig.get_llama_2_api_key()
-LLMConfig.get_llm()
-LLMConfig.get_llm_azure_jp_35_16k()
-LLMConfig.get_llm_azure_jp_4_32k()
-LLMConfig.get_llm_llama_2_70b()
-LLMConfig.get_llm_llama_2_7b()
-LLMConfig.get_llm_openai_35_turbo()
-LLMConfig.get_llm_openai_35_turbo_0613()
-LLMConfig.get_llm_openai_35_turbo_1106()
-LLMConfig.get_llm_openai_4()
-LLMConfig.get_openai_api_key()
-LLMConfig.get_openai_embed_model()
-LLMConfig.get_service_context_azure_gpt4()
-LLMConfig.get_service_context_azure_gpt4_32k()
-LLMConfig.get_service_context_azure_jp_35()
-LLMConfig.get_service_context_azure_jp_35_16k()
-LLMConfig.get_service_context_llama_2_70b()
-LLMConfig.get_service_context_llama_2_7b()
-LLMConfig.get_service_context_openai_35_turbo()
-LLMConfig.get_service_context_openai_35_turbo_1106()
-
-LlmBaseModel
-
-LlmModelSize
-
-
LlmModelSize.gpt35
-LlmModelSize.gpt4
-LlmModelSize.llama2_13b
-LlmModelSize.llama2_70b
-LlmModelSize.llama2_7b
-LlmModelSize.neutral_chat_7b
-LlmModelSize.yi_34
+- openssa.utils.llms module
-
+
AitomaticLLM
+
+AnLLM
+
+AzureLLM
+
+OpenAILLM
openssa¶
+- openssa.utils.rag_service_contexts module
-
+
ServiceContextManager
-
+
ServiceContextManager.get_aitomatic_sc()
+ServiceContextManager.get_azure_jp_openai_35_turbo_sc()
+ServiceContextManager.get_azure_openai_4_0125_preview_sc()
+ServiceContextManager.get_azure_openai_sc()
+ServiceContextManager.get_openai_35_turbo_sc()
+ServiceContextManager.get_openai_4_0125_preview_sc()
+ServiceContextManager.get_openai_sc()
+
+
+ - openssa.utils.usage_logger module +
- openssa.utils.utils module
Utils
Utils.canonicalize_discuss_result()
@@ -931,7 +1081,7 @@ Candidate implementations of integrations
Reusable application components and/or templates (e.g., Gradio, Streamlit, etc.)
-- -openssa.contrib.StreamlitSSAProbSolver¶ -
alias of
-SSAProbSolver
update_multiselect_style()
@@ -136,7 +131,7 @@
-class openssa.contrib.streamlit_ssa_prob_solver.SSAProbSolver(unique_name: int | str | UUID, domain: str = '', problem: str = '', expert_instructions: str = '', fine_tuned_model_url: str = '', doc_src_path: str = '', doc_src_file_relpaths: frozenset[str] = frozenset({}))¶
+class openssa.contrib.streamlit_ssa_prob_solver.SSAProbSolver(unique_name: Uid, domain: str = '', problem: str = '', expert_instructions: str = '', fine_tuned_model_url: str = '', doc_src_path: DirOrFilePath = '', doc_src_file_relpaths: FilePathSet = frozenset({}))¶
Bases:
object
SSA Problem-Solver Streamlit Component.
-
@@ -105,7 +105,7 @@
- +openssa.contrib.streamlit_ssa_prob_solver.update_multiselect_style()¶ +
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-
-
AgentRole
-
-
AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+TaskAgent
@@ -281,6 +371,7 @@Subpackages
Heuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -289,10 +380,12 @@Subpackagesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -308,6 +401,7 @@ Executor
@@ -317,11 +411,6 @@
Subpackagesopenssa.core.ooda_rag.ooda_rag module
Subpackages
History.get_history()
- Model
-Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -337,8 +426,14 @@ BuiltInAgentPrompt
OODAPrompts
-
@@ -355,9 +450,19 @@
Subpackagesopenssa.core.ooda_rag.query_rewritting_engine module +
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -368,6 +473,10 @@
Subpackages
AskUserTool.execute()
Subpackagesopenssa.core.ooda_rag.prompts module
+PythonCodeTool
+ReasearchAgentTool
@@ -376,6 +485,11 @@Subpackages
ResearchDocumentsTool.execute()
+ - openssa.core.ooda_rag.builtin_agents module
ResearchQueryEngineTool
+Tool
Tool.description
Tool.execute()
@@ -387,6 +501,73 @@
Subpackagesopenssa.core.rag_ooda namespace +
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
-
@@ -450,6 +631,18 @@
- Submodules
-
+
- openssa.core.ssa.agent module +
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -495,8 +688,8 @@ SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
- +class openssa.core.ooda.deprecated.solver.History¶ +
Bases:
+object
-
+
- +get_findings(step_name)¶ +
-
+
- +update(step_name, findings)¶ +
- +class openssa.core.ooda.deprecated.solver.LLM¶ +
Bases:
+object
-
+
- +get_response(prompt, history)¶ +
- +class openssa.core.ooda.deprecated.solver.Solver(tools, heuristics, llm)¶ +
Bases:
+object
-
+
- +act(decision, heuristic)¶ +
-
+
- +decide(orientation, heuristic)¶ +
-
+
- +orient(observation)¶ +
-
+
- +run_ooda_loop(task, heuristic)¶ +
-
+
- +select_optimal_heuristic(task)¶ +
-
+
- +subtask(task, heuristic)¶ +
- +class openssa.core.ooda.ooda_loop.OODALoop(objective)¶ +
Bases:
+object
-
+
- +class Step(name, prompt_function)¶ +
Bases:
+object
Represents a step in the OODA loop.
+-
+
- Attributes
+name (str): The name of the step.
+prompt_function (function): The function used to generate the prompt for the step.
+input_data: The input data for the step.
+output_data: The output data generated by the step.
+
+
-
+
- +execute(objective, llm, history)¶ +
Executes the step by generating a prompt using the prompt function, +getting a response from the LLM, and storing the output data.
+-
+
- Args:
+objective: The overall objective of the OODA loop.
+llm: The LLM (Language Learning Model) used to get the response.
+history: The history of previous prompts and responses.
+
+- Returns:
The output data generated by the step.
+
+
-
+
- +run(llm, history)¶ +
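The `OODALoop` entry above describes named steps whose `execute()` builds a prompt from the objective, queries an LLM, and stores the output. A runnable sketch of that shape — class and method names follow the page, while the prompt functions and history format are invented for illustration:

```python
class OODALoop:
    """Sketch of the documented OODALoop: four named steps, each turning the
    objective into a prompt and feeding it to an LLM callable."""

    class Step:
        def __init__(self, name, prompt_function):
            self.name = name
            self.prompt_function = prompt_function
            self.output_data = None

        def execute(self, objective, llm, history):
            # Generate a prompt, get the LLM response, store and log the output.
            prompt = self.prompt_function(objective, history)
            self.output_data = llm(prompt)
            history.append((self.name, self.output_data))
            return self.output_data

    def __init__(self, objective):
        self.objective = objective
        # Placeholder prompt functions: just prefix the objective with the step name.
        self.steps = [
            OODALoop.Step(name, lambda obj, hist, n=name: f"{n}: {obj}")
            for name in ("observe", "orient", "decide", "act")
        ]

    def run(self, llm, history):
        for step in self.steps:
            step.execute(self.objective, llm, history)
        return self.steps[-1].output_data
```

With an echo "LLM" the loop returns the output of the final `act` step and leaves one history entry per step.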
- +class openssa.core.ooda.task.Task(goal, parent=None)¶ +
Bases:
+object
Represents a task in the OODA (Observe, Orient, Decide, Act) loop.
+-
+
- Attributes
+goal: The goal of the task.
+subtasks: A list of subtasks associated with the task.
+parent: The parent task of the current task.
+ooda_loop: The OODA loop to which the task belongs.
+result: The result of the task.
+resources: Additional resources associated with the task.
+
+
-
+
- +class Result(status='pending', response=None, references=None, metrics=None, additional_info=None)¶ +
Bases:
+object
Represents the result of a task.
+-
+
- Attributes
+status: The status of the task result.
+response: The response generated by the task.
+references: A list of references related to the task.
+metrics: Metrics associated with the task.
+additional_info: Additional information about the task result.
+
+
-
+
- +add_subtask(subtask)¶ +
-
+
- +has_ooda_loop()¶ +
-
+
- +has_subtasks()¶ +
-
+
- +property ooda_loop¶ +
-
+
- +property result¶ +
-
+
- +property status¶ +
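The `Task` entry above documents a goal, a parent/subtask tree, and a nested `Result` record. A minimal sketch of that structure — attribute and method names follow the page; the auto-registration with the parent and the default `Result` fields are assumptions:

```python
class Task:
    """Toy stand-in for the documented Task node in the OODA loop."""

    class Result:
        """Represents the result of a task (status plus optional response)."""
        def __init__(self, status="pending", response=None):
            self.status = status
            self.response = response

    def __init__(self, goal, parent=None):
        self.goal = goal
        self.parent = parent
        self.subtasks = []
        self.result = Task.Result()
        if parent is not None:
            # Assumption: constructing with a parent registers the subtask.
            parent.add_subtask(self)

    def add_subtask(self, subtask):
        self.subtasks.append(subtask)

    def has_subtasks(self):
        return bool(self.subtasks)

    @property
    def status(self):
        return self.result.status
```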
- -class openssa.core.ooda_rag.builtin_agents.AgentRole¶ -
Bases:
-object
-
-
- -ASSISTANT = 'assistant'¶ -
-
-
- -SYSTEM = 'system'¶ -
-
-
- -USER = 'user'¶ -
- +class openssa.core.ooda_rag.builtin_agents.AnswerValidator(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, answer: str = '')¶ +
Bases:
+TaskAgent
AnswerValidator helps to determine whether the answer is complete
+-
+
- +execute(task: str = '') bool ¶ +
Execute the task agent with the given task.
+
-class openssa.core.ooda_rag.builtin_agents.AskUserAgent(llm: ~openai.OpenAI = <openai.OpenAI object>, model: str = 'aitomatic-model', ask_user_heuristic: str = '', conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.builtin_agents.AskUserAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, ask_user_heuristic: str = '', conversation: ~typing.List | None = None)¶
Bases:
TaskAgent
AskUserAgent helps to determine if user wants to provide additional information
+ +- +class openssa.core.ooda_rag.builtin_agents.CommAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, instruction: str = '')¶ +
Bases:
+TaskAgent
CommAgent helps update tone, voice, format and language of the assistant final response
+-
+
- +execute(task: str = '') str ¶ +
Execute the task agent with the given task.
+
- +class openssa.core.ooda_rag.builtin_agents.ContextValidator(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None, context: list | None = None)¶ +
Bases:
+TaskAgent
ContentValidatingAgent helps to determine whether the content is sufficient to answer the question
+-
+
- +execute(task: str = '') dict ¶
Execute the task agent with the given task.
-class openssa.core.ooda_rag.builtin_agents.GoalAgent(llm: ~openai.OpenAI = <openai.OpenAI object>, model: str = 'aitomatic-model', conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.builtin_agents.GoalAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None)¶
Bases:
TaskAgent
GoalAgent helps to determine problem statement from the conversation between user and SSA
-
@@ -134,6 +152,53 @@
-
+
- +class openssa.core.ooda_rag.builtin_agents.OODAPlanAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None)¶ +
Bases:
+TaskAgent
OODAPlanAgent helps to determine the OODA plan from the problem statement
+-
+
- +execute(task: str = '') dict ¶ +
Execute the task agent with the given task.
+
-
+
- +class openssa.core.ooda_rag.builtin_agents.Persona¶ +
Bases:
+object
-
+
- +ASSISTANT = 'assistant'¶ +
-
+
- +SYSTEM = 'system'¶ +
-
+
- +USER = 'user'¶ +
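The new `Persona` constants above are the standard chat-completion role strings. A small sketch of how such constants are typically used to assemble a conversation list — the `make_conversation` helper is hypothetical, not an openssa API:

```python
class Persona:
    """Role-string constants as documented on the builtin_agents page."""
    ASSISTANT = "assistant"
    SYSTEM = "system"
    USER = "user"

def make_conversation(system_prompt: str, user_question: str) -> list[dict]:
    # Build the message list shape expected by chat-style LLM APIs.
    return [
        {"role": Persona.SYSTEM, "content": system_prompt},
        {"role": Persona.USER, "content": user_question},
    ]
```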
-
+
- +class openssa.core.ooda_rag.builtin_agents.SynthesizingAgent(llm: ~openssa.utils.llms.AnLLM = <openssa.utils.llms.OpenAILLM object>, conversation: ~typing.List | None = None, context: list | None = None)¶ +
Bases:
+TaskAgent
SynthesizeAgent helps to synthesize answer
+-
+
- +execute(task: str = '') dict ¶ +
Execute the task agent with the given task.
+
- class openssa.core.ooda_rag.builtin_agents.TaskAgent¶ @@ -154,7 +219,7 @@
-class openssa.core.ooda_rag.custom.CustomBackend(rag_llm: LLM = None, service_context=None)¶
+class openssa.core.ooda_rag.custom.CustomBackend(service_context=None)¶
Bases:
Backend
- @@ -119,7 +119,8 @@
- query(query: str, source_path: str = '') dict ¶ -
Returns a response dict with keys role, content, and citations.
+Query the index with the user input.
+Returns a tuple comprising (a) the response dicts and (b) the response object, if any.
-class openssa.core.ooda_rag.custom.CustomSSM(custom_rag_backend: ~openssa.core.backend.abstract_backend.AbstractBackend = None, s3_source_path: str = '', llm: ~llama_index.llms.llm.LLM = OpenAI(callback_manager=<llama_index.callbacks.base.CallbackManager object>, system_prompt=None, messages_to_prompt=<function messages_to_prompt>, completion_to_prompt=<function default_completion_to_prompt>, output_parser=None, pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'>, query_wrapper_prompt=None, model='llama2-70b', temperature=0.1, max_tokens=None, additional_kwargs={}, max_retries=3, timeout=60.0, default_headers=None, reuse_client=True, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-70b.lepton.run/api/v1', api_version=''), embed_model: ~llama_index.embeddings.openai.OpenAIEmbedding = OpenAIEmbedding(model_name='text-embedding-ada-002', embed_batch_size=10, callback_manager=<llama_index.callbacks.base.CallbackManager object>, additional_kwargs={}, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-7b.lepton.run/api/v1', api_version='', max_retries=10, timeout=60.0, default_headers=None, reuse_client=True))¶
+class openssa.core.ooda_rag.custom.CustomSSM(custom_rag_backend: AbstractBackend = None, s3_source_path: str = '')¶
Bases:
RAGSSM
-
@@ -158,7 +159,7 @@
-
+
- +class openssa.core.ooda_rag.heuristic.HeuristicSet(**kwargs)¶ +
Bases:
+object
A set of heuristics.
+
- class openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic(heuristic_rules: dict[str, list[str]])¶ @@ -144,7 +151,7 @@
- openssa.core.ooda_rag.builtin_agents module
-
-
AgentRole
-
-
AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+TaskAgent
@@ -144,6 +164,7 @@Submodules
Heuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -152,10 +173,12 @@Submodulesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -171,6 +194,7 @@ Executor
@@ -180,11 +204,6 @@
Submodulesopenssa.core.ooda_rag.ooda_rag module
Submodules
History.get_history()
- Model
-Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -200,8 +219,14 @@ BuiltInAgentPrompt
OODAPrompts
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -231,6 +266,10 @@
Submodules
AskUserTool.execute()
Submodulesopenssa.core.ooda_rag.prompts module
+PythonCodeTool
+ReasearchAgentTool
@@ -239,6 +278,11 @@Submodules
ResearchDocumentsTool.execute()
ResearchQueryEngineTool
+Tool
Tool.description
Tool.execute()
@@ -256,7 +300,7 @@ Bases:
object
-
@@ -111,6 +111,16 @@
SUBTASK = 'ooda-subtask'¶
-
+
- +SUBTASK_BEGIN = 'ooda-subtask-begin'¶ +
-
+
- +SWICTH_MODE = 'switch_mode'¶ +
- TASK_RESULT = 'task_result'¶ @@ -149,7 +159,7 @@
- class openssa.core.ooda_rag.ooda_rag.Executor(task: str, tools: dict[str, Tool], ooda_heuristics: Heuristic, notifier: Notifier, is_main_task: bool = False)¶
Bases:
+object
-
+
- +check_resource_call(ooda_plan: dict) None ¶ +
- execute_task(history: History) None ¶ @@ -104,12 +109,12 @@
Bases:
object
-add_message(message: str, role: str) None ¶
+add_message(message: str, role: str, verbose: bool = True) None ¶
-
@@ -119,22 +124,6 @@
-
-
- -class openssa.core.ooda_rag.ooda_rag.Model(llm, model)¶ -
Bases:
- - -object
-
-
- -parse_output(output: str) dict ¶ -
- class openssa.core.ooda_rag.ooda_rag.Planner(heuristics: Heuristic, prompts: OODAPrompts, max_subtasks: int = 3, enable_generative: bool = False)¶ @@ -142,33 +131,33 @@
-decompose_task(model: Model, task: str, history: History) list[str] ¶
+decompose_task(model: AnLLM, task: str, history: History) list[str] ¶
The Planner class is responsible for decomposing the task into subtasks.
-class openssa.core.ooda_rag.ooda_rag.Solver(task_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic object>, ooda_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.DefaultOODAHeuristic object>, notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>, prompts: ~openssa.core.ooda_rag.prompts.OODAPrompts = <openssa.core.ooda_rag.prompts.OODAPrompts object>, llm=None, model: str = 'llama2', highest_priority_heuristic: str = '', enable_generative: bool = False, conversation: ~typing.List | None = None)¶
+class openssa.core.ooda_rag.ooda_rag.Solver(heuristic_set: ~openssa.core.ooda_rag.heuristic.HeuristicSet = <openssa.core.ooda_rag.heuristic.HeuristicSet object>, notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>, prompts: ~openssa.core.ooda_rag.prompts.OODAPrompts = <openssa.core.ooda_rag.prompts.OODAPrompts object>, llm=<openssa.utils.llms.OpenAILLM object>, enable_generative: bool = False, conversation: ~typing.List | None = None)¶
Bases:
object
-run(input_message: str, tools: dict) str ¶
-Run the solver on input_message
+run(problem_statement: str, tools: dict) str ¶
+Run the solver on problem_statement
- Parameters:
-
-
input_message – the input to the solver
+problem_statement – the input to the solver
tools – the tools to use in the solver
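The `Solver.run(problem_statement, tools)` signature above implies the control flow: decompose the problem statement via heuristics, then work each subtask with the available tools. A runnable sketch of that flow — the trigger-word decomposition rule and the ask-every-tool strategy are placeholders, not openssa's actual planner:

```python
class Solver:
    """Toy stand-in for the documented Solver: decompose, then solve with tools."""

    def __init__(self, heuristic_set=None):
        # heuristic_set: trigger word -> fixed subtask list (illustrative encoding).
        self.heuristic_set = heuristic_set or {}

    def decompose(self, problem_statement: str) -> list[str]:
        # Use the first heuristic whose trigger appears in the problem statement.
        for trigger, subtasks in self.heuristic_set.items():
            if trigger in problem_statement:
                return subtasks
        return [problem_statement]  # no matching heuristic: solve as one task

    def run(self, problem_statement: str, tools: dict) -> str:
        answers = []
        for subtask in self.decompose(problem_statement):
            # Ask every tool; a real solver would route subtasks to specific tools.
            for name, tool in tools.items():
                answers.append(f"{name}: {tool(subtask)}")
        return "\n".join(answers)
```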
@@ -189,7 +178,7 @@
- class openssa.core.ooda_rag.prompts.BuiltInAgentPrompt¶
Bases:
+object
-
+
- +ANSWER_VALIDATION = "Your role is to act as an expert in reasoning and contextual analysis. You need to evaluate if the provided answer effectively and clearly addresses the query. Respond with 'yes' if the answer is clear and confident, and 'no' if it is not. Here are some examples to guide you: \n\nExample 1:\nQuery: Can I print a part 50 cm long with this machine?\nAnswer: Given the information and the lack of detailed specifications, it is not possible to determine if the machine can print a part 50 cm long.\nEvaluation: no\n\nExample 2:\nQuery: Can I print a part 50 cm long with this machine?\nAnswer: No, it is not possible to print a part 50 cm long with this machine.\nEvaluation: yes\n\nExample 3:\nQuery: How to go to the moon?\nAnswer: I'm sorry, but based on the given context information, there is no information provided on how to go to the moon.\nEvaluation: no\n\n"¶ +
- ASK_USER = 'Your task is to assist an AI assistant in formulating a question for the user. This should be based on the ongoing conversation, the presented problem statement, and a specific heuristic guideline. The assistant should formulate the question strictly based on the heuristic. If the heuristic does not apply or is irrelevant to the problem statement, return empty string for the question. Below is the heuristic guideline:\n###{heuristic}###\n\nHere is the problem statement or the user\'s current question:\n###{problem_statement}###\n\nOutput the response in JSON format with the keyword "question".'¶
-
+
- +ASK_USER_OODA = 'Your task is to assist an AI assistant in formulating a question for the user. This is done through using OODA reasoning. This should be based on the ongoing conversation, the presented problem statement, and a specific heuristic guideline. The assistant should formulate the question strictly based on the heuristic. If the heuristic does not apply or is irrelevant to the problem statement, return empty string for the question. Output the response of ooda reasoning in JSON format with the keyword "observe", "orient", "decide", "act". Example output key value:\n\n "observe": "Here, articulate your initial assessment of the task, capturing essential details and contextual elements.",\n "orient": "In this phase, analyze and synthesize the gathered information, considering different angles and strategies.",\n "decide": "Now, determine the most suitable action based on your observations and analysis.",\n "act": "The question to ask the user is here."\n \n\nBelow is the heuristic guideline:\n###{heuristic}###\n\nHere is the problem statement or the user\'s current question:\n###{problem_statement}###\n\nOutput the JSON only. Think step by step.'¶ +
-
+
- +COMMUNICATION = 'You are an expert in communication. Your will help to format following message with this instruction:\n###{instruction}###\n\nHere is the message:\n###{message}###\n\n'¶ +
-
+
- +CONTENT_VALIDATION = 'You are tasked as an expert in reasoning and contextual analysis. Your role is to evaluate whether the provided context and past conversation contain enough information to accurately respond to a given query.\n\nPlease analyze the past conversation and the following context. Then, determine if the information is sufficient to form an accurate answer. Respond only in JSON format with the keyword \'is_sufficient\'. This should be a boolean value: True if the information is adequate, and False if it is not.\n\nYour response should be in the following format:\n{{\n "is_sufficient": [True/False]\n}}\n\nDo not include any additional commentary. Focus solely on evaluating the sufficiency of the provided context and conversation.\n\nContext:\n========\n{context}\n========\n\nQuery:\n{query}\n'¶ +
-
+
+GENERATE_OODA_PLAN = "As a specialist in problem-solving, your task is to utilize the OODA loop as a cognitive framework for addressing various tasks, which could include questions, commands, or messages. You have at your disposal a range of tools to aid in resolving these issues. Your responses should be methodically structured according to the OODA loop, formatted as a JSON dictionary. Each dictionary key represents one of the OODA loop's four stages: Observe, Orient, Decide, and Act. Within each stage, detail your analytical process and, when relevant, specify the execution of tools, including their names and parameters. Only output the JSON and nothing else. The proposed output format is as follows: \n{\n    'observe': {\n        'thought': 'Here, articulate your initial assessment of the task, capturing essential details and contextual elements.',\n        'calls': [{'tool_name': '', 'parameters': ''}, ...]  // List tools and their parameters, if any are used in this stage.\n    },\n    'orient': {\n        'thought': 'In this phase, analyze and synthesize the gathered information, considering different angles and strategies.',\n        'calls': [{'tool_name': '', 'parameters': ''}, ...]  // Include any tools that aid in this analytical phase.\n    },\n    'decide': {\n        'thought': 'Now, determine the most suitable action based on your observations and analysis.',\n        'calls': [{'tool_name': '', 'parameters': ''}, ...]  // Specify tools that assist in making this decision, if applicable.\n    },\n    'act': {\n        'thought': 'Finally, outline the implementation steps based on your decision, including any practical actions or responses.',\n        'calls': [{'tool_name': '', 'parameters': ''}, ...]  // List any tools used in the implementation of the decision.\n    }\n}"
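A reply to GENERATE_OODA_PLAN is a JSON dictionary keyed by the four OODA stages, each stage optionally carrying tool calls. A sketch of walking such a plan in loop order — the sample reply and helper name are fabricated for illustration, not produced by the library:

```python
import json

# A sample reply in the shape GENERATE_OODA_PLAN requests (illustrative only;
# real model output would be domain-specific).
sample_plan = json.loads("""
{
  "observe": {"thought": "Read the task and note the target machine.",
              "calls": [{"tool_name": "research_documents", "parameters": "machine spec"}]},
  "orient":  {"thought": "Compare readings against the spec.", "calls": []},
  "decide":  {"thought": "Pick the corrective action.", "calls": []},
  "act":     {"thought": "Apply the action and report.",
              "calls": [{"tool_name": "python_code", "parameters": "print(42)"}]}
}
""")

def tool_invocations(plan: dict) -> list:
    """Flatten an OODA plan into (stage, tool_name, parameters) triples, in loop order."""
    order = ("observe", "orient", "decide", "act")
    return [(stage, call["tool_name"], call["parameters"])
            for stage in order
            for call in plan.get(stage, {}).get("calls", [])]

calls = tool_invocations(sample_plan)
```

Iterating in the fixed observe→orient→decide→act order preserves the loop semantics even if the model emits keys in another order.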
-PROBLEM_STATEMENT = 'You are tasked with identifying the problem statement from a conversation between a user and an AI chatbot. Your focus should be on the entire context of the conversation, especially the most recent message from the user, to understand the issue comprehensively. Extract specific details that define the current concern or question posed by the user, which the assistant is expected to address. The problem statement should be concise, clear, and presented as a question, command, or task, reflecting the conversation\'s context and in the user\'s voice. In cases where the conversation is ambiguous return empty value for problem statement. Output the response in JSON format with the keyword "problem statement".\nExample 1:\nAssistant: Hello, what can I help you with today?\nUser: My boiler is not functioning, please help to troubleshoot.\nAssistant: Can you check and provide the temperature, pressure, and on-off status?\nUser: The temperature is 120°C.\n\nResponse:\n{\n    "problem statement": "Can you help to troubleshoot a non-functioning boiler, given the temperature is 120°C?"\n}\n\nExample 2:\nAssistant: Hi, what can I help you with?\nUser: I don\'t know how to go to the airport\nAssistant: Where are you and which airport do you want to go to?\nUser: I\'m in New York\nResponse:\n{\n    "problem statement": "How do I get to the airport from my current location in New York?"\n}\n\nExample 3 (Ambiguity):\nAssistant: How can I assist you today?\nUser: I\'m not sure what\'s wrong, but my computer is acting weird.\nAssistant: Can you describe the issues you are experiencing?\nUser: Hey I am good, the sky is blue.\n\nResponse:\n{\n    "problem statement": ""\n}\n\nExample 4 (Multiple Issues):\nAssistant: What do you need help with?\nUser: My internet is down, and I can\'t access my email either.\nAssistant: Are both issues related, or did they start separately?\nUser: They started at the same time, I think.\n\nResponse:\n{\n    "problem statement": "Can you help with my internet being down and also accessing my email?"\n}'
+PROBLEM_STATEMENT = 'You are tasked with constructing the problem statement from a conversation between a user and an AI chatbot. Your focus should be on the entire context of the conversation, especially the most recent messages from the user, to understand the issue comprehensively. Extract specific details that define the current concerns or questions posed by the user, which the assistant is expected to address. The problem statement should be clear, and constructed carefully with complete context and in the user\'s voice. Output the response in JSON format with the keyword "problem statement". Think step by step.\nExample 1:\nAssistant: Hello, what can I help you with today?\nUser: My boiler is not functioning, please help to troubleshoot.\nAssistant: Can you check and provide the temperature, pressure, and on-off status?\nUser: The temperature is 120°C.\n\nResponse:\n{\n    "problem statement": "Can you help to troubleshoot a non-functioning boiler, given the temperature is 120°C?"\n}\n\nExample 2:\nAssistant: Hi, what can I help you with?\nUser: I don\'t know how to go to the airport\nAssistant: Where are you and which airport do you want to go to?\nUser: I\'m in New York\nResponse:\n{\n    "problem statement": "How do I get to the airport from my current location in New York?"\n}\n\nExample 3 (Ambiguity):\nAssistant: How can I assist you today?\nUser: I\'m not sure what\'s wrong, but my computer is acting weird.\nAssistant: Can you describe the issues you are experiencing?\nUser: Hey I am good, the sky is blue.\n\nResponse:\n{\n    "problem statement": ""\n}\n\nExample 4 (Multiple Issues):\nAssistant: What do you need help with?\nUser: My internet is down, and I can\'t access my email either.\nAssistant: Are both issues related, or did they start separately?\nUser: They started at the same time, I think.\n\nResponse:\n{\n    "problem statement": "Can you help with my internet being down and also accessing my email?"\n}'
-
+
+SYNTHESIZE_RESULT = 'As an expert in problem-solving and contextual analysis, you are to synthesize an answer for a given query. This task requires you to use only the information provided in the previous conversation and the context given below. Your answer should exclusively rely on this information as the base knowledge.\n\nYour response must be in JSON format, using the keyword \'answer\'. The format should strictly adhere to the following structure:\n{{\n    "answer": "Your synthesized answer here"\n}}\n\nPlease refrain from including any additional commentary or information outside of the specified context and past conversation.\n\nContext:\n========\n{context}\n========\n\nQuery:\n{query}\n'
@@ -124,7 +154,7 @@
-DECOMPOSE_INTO_SUBTASKS = 'Given the tools available, if the task cannot be completed directly with the current tools and resources, break it down into maximum 3 smaller subtasks that can be directly addressed in order. If it does not need to be broken down, return an empty list of subtasks. Return a JSON dictionary {"subtasks": ["subtask 1", "subtask 2", ...]} each subtask should be a sentence or question not a function call.'
+DECOMPOSE_INTO_SUBTASKS = 'Given the tools available, if the task cannot be completed directly with the current tools and resources, break it down into maximum 2 smaller subtasks that can be directly addressed in order. If it does not need to be broken down, return an empty list of subtasks. Return a JSON dictionary {"subtasks": ["subtask 1", "subtask 2", ...]} each subtask should be a sentence or command or question not a function call. Return json only, nothing else. Think step by step.'
-SYNTHESIZE_RESULT = "As an expert in reasoning, you are examining a dialogue involving a user, an assistant, and a system. Your task is to synthesize the final answer to the user's initial question based on this conversation. This is the concluding instruction and must be followed with precision. You will derive the final response by critically analyzing all the messages in the conversation and performing any necessary calculations. Be aware that some contributions from the assistant may not be relevant or could be misleading due to being based on incomplete information. {heuristic} Exercise discernment in selecting the appropriate messages to construct a logical and step-by-step reasoning process."
+SYNTHESIZE_RESULT = "As an expert in reasoning, you are examining a dialogue involving a user, an assistant, and a system. Your task is to synthesize the final answer to the user's initial question based on this conversation. This is the concluding instruction and must be followed with precision. You will derive the final response by critically analyzing all the messages in the conversation and performing any necessary calculations. Be aware that some contributions from the assistant may not be relevant or could be misleading due to being based on incomplete information. {heuristic} If the conversation does not provide sufficient information to synthesize the answer then admit you cannot produce accurate answer. Do not use any information outside of the conversation context. Exercise discernment in selecting the appropriate messages to construct a logical and step-by-step reasoning process."
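A DECOMPOSE_INTO_SUBTASKS reply is a JSON dictionary whose "subtasks" list may be empty when the task is directly executable; the revised prompt caps the list at two items. A hedged sketch of parsing such a reply defensively (the helper name and cap constant are ours, not the library's):

```python
import json

MAX_SUBTASKS = 2  # the revised prompt asks for at most 2 subtasks

def parse_subtasks(raw_reply: str) -> list:
    """Return the subtask list from a DECOMPOSE_INTO_SUBTASKS-style reply.

    An empty list means the task can be executed directly with the
    available tools; unparseable replies degrade to the same case.
    """
    try:
        subtasks = json.loads(raw_reply).get("subtasks", [])
    except json.JSONDecodeError:
        return []
    # Keep only string entries, and enforce the cap even if the model overshoots.
    return [s for s in subtasks if isinstance(s, str)][:MAX_SUBTASKS]

direct = parse_subtasks('{"subtasks": []}')
split = parse_subtasks('{"subtasks": ["Find the sensor range", "Check the reading", "Extra"]}')
```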
+class openssa.core.ooda_rag.query_rewritting_engine.QueryRewritingRetrieverPack(index: VectorStoreIndex = None, chunk_size: int = 1024, vector_similarity_top_k: int = 5, fusion_similarity_top_k: int = 10, service_context: ServiceContext = None, **kwargs: Any)
Bases: BaseLlamaPack
Query rewriting retriever pack.
Rewrite the query into multiple queries and rerank the results.
+get_modules() → Dict[str, Any]
Get modules.
+retrieve(query_str: str) → Any
Retrieve.
+run(*args: Any, **kwargs: Any) → Any
Run the pipeline.
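The pack's pipeline rewrites one query into several, retrieves per rewrite, and reranks the fused results. One common way to fuse such per-rewrite rankings is reciprocal-rank fusion; the sketch below illustrates the idea only and is not the pack's actual implementation (document IDs and the `k` smoothing constant are illustrative):

```python
from collections import defaultdict

def fuse_rankings(result_lists, k: int = 60, top_n: int = 3) -> list:
    """Reciprocal-rank fusion: documents ranked highly by several rewrites win."""
    scores = defaultdict(float)
    for ranking in result_lists:
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.__getitem__, reverse=True)[:top_n]

# Three hypothetical rewrites of one user query, each with its own retrieval ranking.
per_rewrite_results = [
    ["doc_pump", "doc_seal", "doc_motor"],
    ["doc_seal", "doc_pump"],
    ["doc_seal", "doc_valve"],
]
fused = fuse_rankings(per_rewrite_results)
```

Here `doc_seal` wins because two of the three rewrites rank it first, which is exactly the robustness query rewriting is after.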
+
-class openssa.core.ooda_rag.solver.OodaSSA(task_heuristics, highest_priority_heuristic: str = '', ask_user_heuristic: str = '', llm=<openai.OpenAI object>, rag_llm=OpenAI(callback_manager=<llama_index.callbacks.base.CallbackManager object>, system_prompt=None, messages_to_prompt=<function messages_to_prompt>, completion_to_prompt=<function default_completion_to_prompt>, output_parser=None, pydantic_program_mode=<PydanticProgramMode.DEFAULT: 'default'>, query_wrapper_prompt=None, model='llama2-70b', temperature=0.1, max_tokens=None, additional_kwargs={}, max_retries=3, timeout=60.0, default_headers=None, reuse_client=True, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-70b.lepton.run/api/v1', api_version=''), embed_model=OpenAIEmbedding(model_name='text-embedding-ada-002', embed_batch_size=10, callback_manager=<llama_index.callbacks.base.CallbackManager object>, additional_kwargs={}, api_key='twoun3dz0fzw289dgyp2rlb3kltti8zi', api_base='https://llama2-7b.lepton.run/api/v1', api_version='', max_retries=10, timeout=60.0, default_headers=None, reuse_client=True), model='aitomatic-model')
+class openssa.core.ooda_rag.solver.OodaSSA(task_heuristics: ~openssa.core.ooda_rag.heuristic.Heuristic = <openssa.core.ooda_rag.heuristic.TaskDecompositionHeuristic object>, highest_priority_heuristic: str = '', ask_user_heuristic: str = '', llm=<openssa.utils.llms.OpenAILLM object>, research_documents_tool: ~openssa.core.ooda_rag.tools.Tool = None, enable_generative: bool = False)
Bases:
object
-activate_resources(folder_path: str) → None
+activate_resources(folder_path: Path | str, re_index: bool = False) → None
+get_ask_user_question(problem_statement: str) → str
-execute(question: str) → str
+execute(task: str) → str
Ask the user for personal information.
Parameters:
-(str) (question) – The question to ask the user.
+(str) (task) – The question to ask the user.
Return (str):
The user's answer to the question.
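The solver's overall flow — build a problem statement, decompose it into subtasks, run an OODA pass per subtask, then synthesize a final answer — can be sketched with stub functions standing in for the LLM calls. All names and outputs below are illustrative stand-ins, not OodaSSA's internals:

```python
def decompose(task: str) -> list:
    # Stub for the planner's DECOMPOSE_INTO_SUBTASKS call.
    return [f"gather data for: {task}", f"analyze data for: {task}"]

def run_ooda(subtask: str) -> str:
    # Stub for one Observe-Orient-Decide-Act pass over a subtask.
    return f"result({subtask})"

def synthesize(task: str, results: list) -> str:
    # Stub for the final SYNTHESIZE_RESULT call.
    return f"answer to '{task}' from {len(results)} subtask results"

def solve(task: str) -> str:
    subtasks = decompose(task) or [task]   # empty decomposition: run the task directly
    results = [run_ooda(s) for s in subtasks]
    return synthesize(task, results)

answer = solve("why is the boiler off?")
```

The `or [task]` fallback mirrors the "empty subtask list means execute directly" convention from the decomposition prompt.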
@@ -108,6 +108,27 @@
-
+
+class openssa.core.ooda_rag.tools.PythonCodeTool
Bases: Tool
A tool for executing python code.
+execute(task: str) → str
Execute python code.
Parameters:
(str) (task) – The python code to execute.
Return (str):
The result of the code execution.
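A tool of this shape takes Python source as its task and returns the captured output as text. The following is a self-contained approximation of that behavior, not PythonCodeTool's actual implementation; running model-generated code through `exec` like this needs real sandboxing in practice:

```python
import contextlib
import io

class SketchPythonCodeTool:
    """Illustrative stand-in for a tool whose execute(task) runs Python source."""

    def execute(self, task: str) -> str:
        buffer = io.StringIO()
        try:
            # Capture anything the snippet prints so it can be returned as text.
            with contextlib.redirect_stdout(buffer):
                exec(task, {"__builtins__": __builtins__})  # demo only: unsandboxed
        except Exception as exc:
            # Surface errors to the calling agent as text rather than raising.
            return f"error: {exc}"
        return buffer.getvalue().strip()

result = SketchPythonCodeTool().execute("print(6 * 7)")
```

Returning errors as strings keeps the OODA loop running: the agent can observe the failure text and decide on a retry instead of crashing.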
+
+
class openssa.core.ooda_rag.tools.ReasearchAgentTool(agent: RAGSSM)
@@ -115,13 +136,13 @@
-execute(question: str) → str
+execute(task: str) → dict
Query a document base for factual information.
Parameters:
-(str) (question) – The question to ask the document base.
+(str) (task) – The question to ask the document base.
-Return (str):
+Return (dict):
The answer to the question.
A tool for querying a document base for information.
+class openssa.core.ooda_rag.tools.ResearchQueryEngineTool(query_engine)
Bases: Tool
A tool for querying a document base for information.
+execute(question: str) → dict
Query a document base for factual information.
Parameters:
(str) (question) – The question to ask the document base.
-Return (str):
+Return (dict):
The answer to the question.
+get_citations(metadata: dict)
-abstract execute(question: str)
+abstract execute(task: str)
Execute the tool with the given arguments.
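Subclasses of the abstract Tool are expected to override `execute(task)`. The sketch below is a minimal local mirror of the documented interface, defined here for illustration rather than imported from openssa (the `description` storage and the toy subclass are assumptions):

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Local mirror of the documented interface: a description plus execute(task)."""

    def __init__(self, description: str):
        self._description = description

    @property
    def description(self) -> str:
        return self._description

    @abstractmethod
    def execute(self, task: str):
        """Execute the tool with the given arguments."""

class UppercaseTool(Tool):
    """Toy concrete tool: uppercases the task text."""

    def __init__(self):
        super().__init__("Uppercases the task text (toy example).")

    def execute(self, task: str) -> str:
        return task.upper()

output = UppercaseTool().execute("check the pump")
```

Because `execute` is abstract, instantiating the base class directly raises `TypeError`, which is how the framework forces every tool to define its behavior.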
- class openssa.core.prompts.Prompts¶
Bases:
-object
The Prompts class provides a way to retrieve and format prompts in the OpenSSM project. The prompts are stored in a nested dictionary `self.
+The Prompts class provides a way to retrieve and format prompts in the OpenSSA project. The prompts are stored in a nested dictionary `self.
Usage Guide:
-
@@ -108,7 +108,7 @@
openssa.core.rag_ooda namespace
Subpackages
Submodules
openssa.core.rag_ooda.rag_ooda module
+-
+
+class openssa.core.rag_ooda.rag_ooda.RagOODA(resources: list[~openssa.core.rag_ooda.resources.rag_resource.RagResource | ~openssa.core.ooda_rag.tools.Tool] = None, conversation_id: str = 'f7b13f04-d300-4bd8-82b6-3c4001a38874', notifier: ~openssa.core.ooda_rag.notifier.Notifier = <openssa.core.ooda_rag.notifier.SimpleNotifier object>)
Bases: object
+chat(query: str) → str
+chat_with_agent(query: str) → str
+get_answer(query: str, context: list) → str
+classmethod get_conversation(conversation_id: str) → list
+is_answer_complete(query: str, answer: str) → bool
+is_sufficient(query: str, context: list) → bool
+retrieve_context(query) → list
+classmethod set_conversation(conversation_id: str, conversation: list) → None
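Given the methods above, a plausible `chat()` control flow is: retrieve context, check whether it is sufficient, answer directly from it when possible, and otherwise escalate to the OODA agent. The schematic below uses stub functions and is not RagOODA's actual code:

```python
def retrieve_context(query: str) -> list:
    # Stub retriever: pretend only pump questions have indexed context.
    return ["pump manual excerpt"] if "pump" in query else []

def is_sufficient(query: str, context: list) -> bool:
    # Stub for the CONTENT_VALIDATION LLM check.
    return bool(context)

def get_answer(query: str, context: list) -> str:
    return f"direct answer from {len(context)} snippet(s)"

def chat_with_agent(query: str) -> str:
    return "agent-planned answer"

def chat(query: str) -> str:
    context = retrieve_context(query)
    if is_sufficient(query, context):
        return get_answer(query, context)   # cheap RAG path
    return chat_with_agent(query)           # fall back to OODA planning

fast = chat("how do I prime the pump?")
slow = chat("diagnose the whole plant")
```

The point of the split is cost: the sufficiency check lets easy queries stay on the retrieval path, reserving the multi-step agent for queries the context cannot answer.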
+++-
+
openssa.core.rag_ooda.resources.dense_x.base module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.rag_ooda.resources.dense_x.dense_x.html b/openssa.core.rag_ooda.resources.dense_x.dense_x.html new file mode 100644 index 000000000..dea9531f5 --- /dev/null +++ b/openssa.core.rag_ooda.resources.dense_x.dense_x.html @@ -0,0 +1,113 @@ + + + + + + ++ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.rag_ooda.resources.dense_x.base module¶
+-
+
+class openssa.core.rag_ooda.resources.dense_x.base.DenseXRetrievalPack(documents: ~typing.List[~llama_index.core.schema.Document], proposition_llm: ~llama_index.core.llms.llm.LLM | None = None, query_llm: ~llama_index.core.llms.llm.LLM | None = None, embed_model: ~llama_index.core.base.embeddings.base.BaseEmbedding | None = None, text_splitter: ~llama_index.core.node_parser.interface.TextSplitter = SentenceSplitter(include_metadata=True, include_prev_next_rel=True, callback_manager=<llama_index.core.callbacks.base.CallbackManager object>, id_func=<function default_id_func>, chunk_size=1024, chunk_overlap=200, separator=' ', paragraph_separator='\n\n\n', secondary_chunking_regex='[^,.;。?!]+[,.;。?!]?'), similarity_top_k: int = 4)
Bases: BaseLlamaPack
+get_modules() → Dict[str, Any]
Get modules.
+run(query_str: str, **kwargs: Any) → Any
Run the pipeline.
+openssa.core.rag_ooda.resources.dense_x.base.load_nodes_dict(nodes_cache_path: str) → Dict[str, TextNode]
Load nodes dict.
+openssa.core.rag_ooda.resources.dense_x.base.store_nodes_dict(nodes_dict: Dict[str, TextNode], nodes_cache_path) → None
Store nodes dict.
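These two helpers implement a build-once cache for retrieval nodes: expensive proposition extraction runs once, and later sessions reload the result. The round-trip pattern can be sketched with plain JSON standing in for TextNode serialization (illustrative only; the real helpers persist llama-index node objects):

```python
import json
import os
import tempfile

def store_nodes_dict(nodes_dict: dict, nodes_cache_path: str) -> None:
    # Schematic: the real helper serializes llama-index TextNode objects.
    with open(nodes_cache_path, "w", encoding="utf-8") as f:
        json.dump(nodes_dict, f)

def load_nodes_dict(nodes_cache_path: str) -> dict:
    with open(nodes_cache_path, encoding="utf-8") as f:
        return json.load(f)

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "nodes.json")
    store_nodes_dict({"n1": "a proposition about pumps"}, path)
    roundtrip = load_nodes_dict(path)
```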
+
+++-
+
openssa.core.rag_ooda.resources.dense_x.dense_x module
+-
+
+openssa.core.rag_ooda.resources.dense_x.dense_x.load_dense_x(data_dir: str, cache_dir: str, nodes_cache_path: str) → RagResource
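The `(data_dir, cache_dir, ...)` signature suggests a load-or-build pattern: reuse a persisted index when the cache exists, otherwise index the documents and persist them for next time. A schematic sketch of that pattern — the marker file and return strings are ours, not the library's:

```python
import os
import tempfile

def load_resource(data_dir: str, cache_dir: str) -> str:
    """Schematic of the load-or-build pattern behind load_dense_x/load_standard_vi."""
    marker = os.path.join(cache_dir, "index.built")
    if os.path.exists(marker):
        return "loaded-from-cache"
    # First run: "index" the documents, then persist so later runs are fast.
    os.makedirs(cache_dir, exist_ok=True)
    with open(marker, "w", encoding="utf-8") as f:
        f.write(data_dir)
    return "built-and-cached"

with tempfile.TemporaryDirectory() as tmp:
    cache = os.path.join(tmp, "cache")
    first = load_resource("docs/", cache)
    second = load_resource("docs/", cache)
```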
+++-
+
openssa.core.rag_ooda.resources.dense_x namespace + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.rag_ooda.resources.html b/openssa.core.rag_ooda.resources.html new file mode 100644 index 000000000..67a24fc91 --- /dev/null +++ b/openssa.core.rag_ooda.resources.html @@ -0,0 +1,156 @@ + + + + + + +openssa.core.rag_ooda.resources namespace + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.rag_ooda.resources.rag_resource.html b/openssa.core.rag_ooda.resources.rag_resource.html new file mode 100644 index 000000000..b67edee9b --- /dev/null +++ b/openssa.core.rag_ooda.resources.rag_resource.html @@ -0,0 +1,114 @@ + + + + + + ++ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.rag_ooda.resources namespace¶
++ +Subpackages¶
+ ++ +Submodules¶
+ ++++-
+
openssa.core.rag_ooda.resources.rag_resource module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.rag_ooda.resources.standard_vi.html b/openssa.core.rag_ooda.resources.standard_vi.html new file mode 100644 index 000000000..86774a3ae --- /dev/null +++ b/openssa.core.rag_ooda.resources.standard_vi.html @@ -0,0 +1,119 @@ + + + + + + ++ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.rag_ooda.resources.rag_resource module¶
+-
+
+class openssa.core.rag_ooda.resources.rag_resource.RagResource(query_engine: BaseQueryEngine, retriever: BaseRetriever)
Bases: object
+++-
+
openssa.core.rag_ooda.resources.standard_vi namespace + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html b/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html new file mode 100644 index 000000000..527bd36b7 --- /dev/null +++ b/openssa.core.rag_ooda.resources.standard_vi.standard_vi.html @@ -0,0 +1,113 @@ + + + + + + +openssa.core.rag_ooda.resources.standard_vi.standard_vi module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.slm.abstract_slm.html b/openssa.core.slm.abstract_slm.html index a19146fad..bb513a8c5 100644 --- a/openssa.core.slm.abstract_slm.html +++ b/openssa.core.slm.abstract_slm.html @@ -10,7 +10,7 @@ - + @@ -128,7 +128,7 @@+ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.rag_ooda.resources.standard_vi.standard_vi module¶
+-
+
+openssa.core.rag_ooda.resources.standard_vi.standard_vi.load_standard_vi(data_dir: str, cache_dir: str) → RagResource
+++-
+
Submodules
Submodules
openssa.core.ssa.agent module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ssa.html b/openssa.core.ssa.html index 6506bf069..7e45c6721 100644 --- a/openssa.core.ssa.html +++ b/openssa.core.ssa.html @@ -10,7 +10,7 @@ - + @@ -91,6 +91,18 @@+ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.ssa.agent module¶
+-
+
+class openssa.core.ssa.agent.Agent(llm=<openssa.utils.llms.OpenAILLM object>, resources=None, short_term_memory=None, long_term_memory=None, heuristics=None)
Bases: object
+run_ooda_loop(task, heuristic)
+select_optimal_heuristic(task)
+solve(objective)
+solve_task(task)
+subtask(task, heuristic)
+update_memory(key, value, memory_type='short')
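The Agent surface above suggests heuristic selection plus short- and long-term memory updates around each solve. The toy mirror below illustrates that flow with all behavior stubbed; it is not the real Agent implementation, and the heuristic text and memory keys are invented for the example:

```python
class SketchAgent:
    """Toy mirror of the documented Agent surface; not the real implementation."""

    def __init__(self, heuristics=None):
        self.heuristics = heuristics or ["default: decompose, then solve parts"]
        self.short_term_memory = {}
        self.long_term_memory = {}

    def update_memory(self, key, value, memory_type="short"):
        store = self.short_term_memory if memory_type == "short" else self.long_term_memory
        store[key] = value

    def select_optimal_heuristic(self, task):
        # Stub: a real agent would score its heuristics against the task.
        return self.heuristics[0]

    def solve(self, objective):
        heuristic = self.select_optimal_heuristic(objective)
        self.update_memory("last_objective", objective)               # per-session state
        self.update_memory("solved_count", 1, memory_type="long")     # persistent state
        return f"applied '{heuristic}' to '{objective}'"

agent = SketchAgent()
outcome = agent.solve("stabilize reactor temperature")
```

The two memory stores mirror the constructor's `short_term_memory`/`long_term_memory` split: scratch state for the current objective versus knowledge meant to outlive it.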
+++-
+
Submodules¶
-
+
- openssa.core.ssa.agent module +
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -136,8 +148,8 @@ SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
Bases:
AbstractSSAService
-
@@ -161,7 +161,7 @@
Bases:
AbstractSSM
-
@@ -262,7 +262,7 @@
- openssa.contrib package
-
-
StreamlitSSAProbSolver
- Subpackages
- openssa.contrib.streamlit_ssa_prob_solver package
SSAProbSolver
-
@@ -115,6 +114,7 @@
Subpackages
SSAProbSolver.ssa_solve()
+ update_multiselect_style()
- openssa.contrib.streamlit_ssa_prob_solver package
Subpackagesopenssa.core.ooda namespace +
- openssa.core.ooda_rag namespace
- Submodules
- openssa.core.ooda_rag.builtin_agents module
-
-
AgentRole
-
-
AgentRole.ASSISTANT
-AgentRole.SYSTEM
-AgentRole.USER
+AnswerValidator
AskUserAgent
+CommAgent
+
+ContextValidator
+GoalAgent
+OODAPlanAgent
+
+Persona
+
+SynthesizingAgent
+TaskAgent
@@ -313,6 +403,7 @@Subpackages
Heuristic.apply_heuristic()
+HeuristicSet
TaskDecompositionHeuristic
@@ -321,10 +412,12 @@Subpackagesopenssa.core.ooda_rag.notifier module
EventTypes
@@ -340,6 +433,7 @@ Executor
@@ -349,11 +443,6 @@
Subpackagesopenssa.core.ooda_rag.ooda_rag module
Subpackages
History.get_history()
- Model
-Planner
Planner.decompose_task()
Planner.formulate_task()
@@ -369,8 +458,14 @@ BuiltInAgentPrompt
OODAPrompts
-
@@ -387,9 +482,19 @@
Subpackagesopenssa.core.ooda_rag.query_rewritting_engine module +
- openssa.core.ooda_rag.solver module
OodaSSA
@@ -400,6 +505,10 @@
Subpackages
AskUserTool.execute()
Subpackagesopenssa.core.ooda_rag.prompts module
+PythonCodeTool
+ReasearchAgentTool
@@ -408,6 +517,11 @@Subpackages
ResearchDocumentsTool.execute()
+ - openssa.core.ooda_rag.builtin_agents module
ResearchQueryEngineTool
+Tool
Tool.description
Tool.execute()
@@ -419,6 +533,58 @@
Subpackagesopenssa.core.rag_ooda namespace +
- openssa.core.slm namespace
- Subpackages
- openssa.core.slm.memory namespace
-
@@ -482,6 +648,18 @@
- Submodules
-
+
- openssa.core.ssa.agent module +
- openssa.core.ssa.rag_ssa module
RAGSSM
RAGSSM.custom_discuss()
@@ -527,8 +705,8 @@ SSAService.AIMO_API_URL
SSAService.AISO_API_KEY
-SSAService.AISO_API_URL
SSAService.chat()
SSAService.train()
Subpackages
SSAService
-
+
Subpackages
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
Subpackages
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
Subpackagesopenssa.integrations.llama_index.backend module
Backend
@@ -741,6 +920,7 @@Subpackages
APIContext.from_defaults()
APIContext.gpt3_defaults()
APIContext.gpt4_defaults()
+APIContext.model_computed_fields
APIContext.model_config
APIContext.model_fields
Subpackages
AbstractAPIContext.key
AbstractAPIContext.max_tokens
AbstractAPIContext.model
+AbstractAPIContext.model_computed_fields
AbstractAPIContext.model_config
AbstractAPIContext.model_fields
AbstractAPIContext.temperature
@@ -796,22 +977,24 @@ - Submodules -
- Submodules
-
-
- openssa.utils.aitomatic_llm_config module +
- Submodules
- openssa.utils.config module
Config
Config.AZURE_GPT4_ENGINE
Config.AZURE_GPT4_MODEL
+Config.AZURE_OPENAI_API_KEY
+Config.AZURE_OPENAI_API_URL
Config.DEBUG
+Config.DEFAULT_TEMPERATURE
Config.FALCON7B_API_KEY
Config.FALCON7B_API_URL
Config.LEPTONAI_API_KEY
Config.LEPTONAI_API_URL
+Config.LEPTON_API_KEY
+Config.LEPTON_API_URL
Config.OPENAI_API_KEY
Config.OPENAI_API_URL
+Config.US_AZURE_OPENAI_API_BASE
+Config.US_AZURE_OPENAI_API_KEY
Config.setenv()
- openssa.utils.config module
- openssa.utils.fs module
-
-
DirOrFilePath
FileSource
FileSource.file_paths()
FileSource.fs
@@ -849,60 +1038,38 @@ AitomaticBaseURL
-
-LLMConfig
-
-
LLMConfig.get_aito_embeddings()
-LLMConfig.get_aitomatic_13b()
-LLMConfig.get_aitomatic_yi_34b()
-LLMConfig.get_azure_embed_model()
-LLMConfig.get_azure_jp_api_key()
-LLMConfig.get_default_embed_model()
-LLMConfig.get_intel_neural_chat_7b()
-LLMConfig.get_llama_2_api_key()
-LLMConfig.get_llm()
-LLMConfig.get_llm_azure_jp_35_16k()
-LLMConfig.get_llm_azure_jp_4_32k()
-LLMConfig.get_llm_llama_2_70b()
-LLMConfig.get_llm_llama_2_7b()
-LLMConfig.get_llm_openai_35_turbo()
-LLMConfig.get_llm_openai_35_turbo_0613()
-LLMConfig.get_llm_openai_35_turbo_1106()
-LLMConfig.get_llm_openai_4()
-LLMConfig.get_openai_api_key()
-LLMConfig.get_openai_embed_model()
-LLMConfig.get_service_context_azure_gpt4()
-LLMConfig.get_service_context_azure_gpt4_32k()
-LLMConfig.get_service_context_azure_jp_35()
-LLMConfig.get_service_context_azure_jp_35_16k()
-LLMConfig.get_service_context_llama_2_70b()
-LLMConfig.get_service_context_llama_2_7b()
-LLMConfig.get_service_context_openai_35_turbo()
-LLMConfig.get_service_context_openai_35_turbo_1106()
-
-LlmBaseModel
-
-LlmModelSize
-
-
LlmModelSize.gpt35
-LlmModelSize.gpt4
-LlmModelSize.llama2_13b
-LlmModelSize.llama2_70b
-LlmModelSize.llama2_7b
-LlmModelSize.neutral_chat_7b
-LlmModelSize.yi_34
+- openssa.utils.llms module
-
+
AitomaticLLM
+
+AnLLM
+
+AzureLLM
+
+OpenAILLM
Subpackagesopenssa.utils.rag_service_contexts module
-
+
ServiceContextManager
-
+
ServiceContextManager.get_aitomatic_sc()
+ServiceContextManager.get_azure_jp_openai_35_turbo_sc()
+ServiceContextManager.get_azure_openai_4_0125_preview_sc()
+ServiceContextManager.get_azure_openai_sc()
+ServiceContextManager.get_openai_35_turbo_sc()
+ServiceContextManager.get_openai_4_0125_preview_sc()
+ServiceContextManager.get_openai_sc()
+
+
+ - openssa.utils.usage_logger module +
- openssa.utils.utils module
Utils
Utils.canonicalize_discuss_result()
@@ -948,7 +1139,7 @@ - +model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}¶ +
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
+- model_config: ClassVar[ConfigDict] = {}¶ @@ -165,7 +171,7 @@
Subpackages
-
+
Subpackagesopenssa.utils.llm_config module
-
-
Subpackagesopenssa.core.ssa namespace
Subpackagesopenssa.utils.deprecated namespace
-
+
- Submodules
- openssa.core.slm.memory namespace
- Subpackages
- Submodules
Submodules
Subpackages
- openssa.contrib package
Submodules
SSAService
-
+
Submodules
-
@@ -108,7 +108,7 @@
A tool for querying a document base for information.
-
+
-
@@ -163,7 +210,7 @@
-
@@ -149,7 +179,7 @@
openssa.core.ooda_rag.query_rewritting_engine module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ooda_rag.solver.html b/openssa.core.ooda_rag.solver.html index b3207f9af..b7b6ff38c 100644 --- a/openssa.core.ooda_rag.solver.html +++ b/openssa.core.ooda_rag.solver.html @@ -10,7 +10,7 @@ - + @@ -89,16 +89,21 @@+ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.ooda_rag.query_rewritting_engine module¶
+Query Rewriting Retriever Pack.
+-
+
+++-
+
openssa.core.ooda_rag.solver module¶
A tool for asking the user a question.
Submodules
Submodules
-
@@ -158,7 +159,7 @@
-
@@ -131,7 +132,7 @@
openssa.core.ooda_rag.custom module¶
Subpackages
SSAService
-
+
Submodules
Submodules
openssa.core.ooda.deprecated namespace + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ooda.deprecated.solver.html b/openssa.core.ooda.deprecated.solver.html new file mode 100644 index 000000000..b9c212b8f --- /dev/null +++ b/openssa.core.ooda.deprecated.solver.html @@ -0,0 +1,181 @@ + + + + + + ++ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.ooda.deprecated namespace¶
++ +Submodules¶
+ ++++-
+
openssa.core.ooda.deprecated.solver module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ooda.heuristic.html b/openssa.core.ooda.heuristic.html new file mode 100644 index 000000000..1a78d41c3 --- /dev/null +++ b/openssa.core.ooda.heuristic.html @@ -0,0 +1,124 @@ + + + + + + ++ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.ooda.deprecated.solver module¶
+-
+
-
+
-
+
+++-
+
openssa.core.ooda.heuristic module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ooda.html b/openssa.core.ooda.html new file mode 100644 index 000000000..082672823 --- /dev/null +++ b/openssa.core.ooda.html @@ -0,0 +1,183 @@ + + + + + + +openssa.core.ooda namespace + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ooda.ooda_loop.html b/openssa.core.ooda.ooda_loop.html new file mode 100644 index 000000000..37447df18 --- /dev/null +++ b/openssa.core.ooda.ooda_loop.html @@ -0,0 +1,148 @@ + + + + + + ++ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.ooda namespace¶
++ +Subpackages¶
+ ++ +Submodules¶
+ ++++-
+
openssa.core.ooda.ooda_loop module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ooda.task.html b/openssa.core.ooda.task.html new file mode 100644 index 000000000..154cbd9aa --- /dev/null +++ b/openssa.core.ooda.task.html @@ -0,0 +1,169 @@ + + + + + + ++ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.ooda.ooda_loop module¶
+-
+
+++-
+
openssa.core.ooda.task module + + + + + + + + + + + + + + + + + + + +++ + + + \ No newline at end of file diff --git a/openssa.core.ooda_rag.builtin_agents.html b/openssa.core.ooda_rag.builtin_agents.html index 87f1cdbf7..da0950548 100644 --- a/openssa.core.ooda_rag.builtin_agents.html +++ b/openssa.core.ooda_rag.builtin_agents.html @@ -10,7 +10,7 @@ - + @@ -88,34 +88,52 @@+ + + + ++ + openssa + + + ++++ + + + ++ + + ++ + + + + + ++ ++ + +
++ +++ + + +openssa.core.ooda.task module¶
+-
+
+++-
+
openssa.core.ooda_rag.builtin_agents module¶
-
-
-
+
-
+
Subpackagesopenssa.core.ssa namespace
- Submodules
- openssa.core.slm.memory namespace
- Subpackages
- Submodules
openssa¶
-
-
Subpackages¶
@@ -124,6 +118,7 @@Subpackages
SSAProbSolver.ssa_solve()
+Subpackages
SSA Problem-Solver Streamlit Component.
-
+
Submodules
Submodules
Subpackagesopenssa.core.ooda namespace +
- openssa.utils.config module
- openssa.core.slm.memory namespace
- Subpackages
- Submodules