
[BUG] - self.llm.call CrewAgentExecutor takes in OPENAI KEY not ANTHROPIC_API_KEY. On windows machine. #1854

Open
tobiolabode opened this issue Jan 5, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@tobiolabode

Description

Hey guys,

I've been trying Claude with crewAI via LiteLLM, and it seems to pick up the OPENAI_API_KEY environment variable even when ANTHROPIC_API_KEY is set in the .env. I've been following the tutorial to set things up. I also tested the Anthropic API with a direct (non-crewAI) script, which worked fine, but I want to follow the best practices for the AI agents.

This leads to ERROR:root:LiteLLM call failed: litellm.AuthenticationError: AnthropicException - {"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}

because it passes the wrong API key. Odd behaviour; it could just be me.

I believe it has something to do with the Windows environment variables (which we all love working with 😂).

I set up a hotfix for myself inside crewAI's llm.py:

    def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
        with suppress_warnings():
            if callbacks and len(callbacks) > 0:
                self.set_callbacks(callbacks)

            try:
                params = {
                    "model": self.model,
                    "messages": messages,
                    "timeout": self.timeout,
                    "temperature": self.temperature,
                    "top_p": self.top_p,
                    "n": self.n,
                    "stop": self.stop,
                    "max_tokens": self.max_tokens or self.max_completion_tokens,
                    "presence_penalty": self.presence_penalty,
                    "frequency_penalty": self.frequency_penalty,
                    "logit_bias": self.logit_bias,
                    "response_format": self.response_format,
                    "seed": self.seed,
                    "logprobs": self.logprobs,
                    "top_logprobs": self.top_logprobs,
                    "api_base": self.base_url,
                    "api_version": self.api_version,
                    "api_key": self.api_key,
                    "stream": False,
                    **self.kwargs,
                }
                # Debug output added for the hotfix: show which key gets sent
                print('self.api_key', self.api_key)
                print('params', params)
                # Hotfix: if the OpenAI key was picked up, swap in the
                # Anthropic key (key values truncated here)
                if self.api_key == 'sk-uf...':
                    self.api_key = 'sk-ant-....'
                    params['api_key'] = self.api_key
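
A slightly less brittle variant of the same workaround (just a sketch, dropped into the same spot in call(); it reads the key from the process environment instead of hardcoding values, and assumes ANTHROPIC_API_KEY is actually set there):

    # Sketch only: override whatever key crewAI resolved with the Anthropic
    # key from the environment, but only for Claude models.
    import os

    anthropic_key = os.environ.get("ANTHROPIC_API_KEY")
    if anthropic_key and "claude" in self.model:
        self.api_key = anthropic_key
        params["api_key"] = anthropic_key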

Steps to Reproduce

I was following the quickstart tutorial.

  1. crewai create crew latest-ai-development
  2. When the CLI asks for a provider, type 2 (Anthropic)
  3. When the CLI asks for a model, choose Claude 3.5 Sonnet
  4. [optional] install packages
  5. crewai run
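
For reference, a quick way to check what the generated .env actually exposes on Windows is to load it the same way the project does (via python-dotenv, assuming that is what the template uses) and print which keys are visible. This is only a diagnostic sketch, not part of the tutorial:

    import os
    from dotenv import load_dotenv

    load_dotenv()  # load the project .env, as the crewAI template does

    for name in ("MODEL", "ANTHROPIC_API_KEY", "OPENAI_API_KEY"):
        value = os.environ.get(name)
        # print only a short prefix so no key is leaked in full
        print(name, "=", value[:10] + "..." if value else "<unset>")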

Expected behavior

Following the tutorial should produce the usual outputs and a markdown report, with the request going to Anthropic like so:

POST Request Sent from LiteLLM:
curl -X POST
https://api.anthropic.com/v1/messages
-H 'anthropic-version: ' -H 'x-api-key: sk-***************************************' -H 'accept: *****' -H 'content-type: *****'
-d '{'model': 'claude-3-5-sonnet-20240620', 'messages': [{'role': 'user', 'content': [{'type': 'text', 'text': "\nCurrent Task: Review the context you got and expand each topic into a full section for a report. Make sure the report
is detailed and contains any and all relevant information.\n\n\nThis is the expect criteria for your final answer: A fully fledge reports with the mains topics, each with a full section of information. Formatted as markdown without
'```'\n\nyou MUST return the actual complete content as the final answer, not a summary.\n\nThis is the context you're working with:\nHere's a list of 10 bullet points with the most relevant information about AI Large Language Models (LLMs) as of 2024:\n\n• Multimodal LLMs have become mainstream, with models capable of processing and generating text, images, audio, and video simultaneously. Leading examples include GPT-5 and PaLM-3, which can understand and create content across multiple modalities.\n\n• Quantum-enhanced LLMs have emerged, leveraging quantum computing principles to dramatically increase processing power and model complexity. These models can handle exponentially larger datasets and more intricate language tasks.\n\n• Ethical AI frameworks are now mandatory for LLM development and deployment in many countries. These frameworks address bias, fairness, and transparency concerns, with strict regulations on data collection and model training practices.\n\n• Personalized LLMs tailored to individual users have gained popularity. These models adapt to a user's writing style, preferences, and knowledge base over time, providing highly customized interactions and outputs.\n\n• LLMs specializing in scientific research and academic writing have revolutionized the publication process. These models assist researchers in literature reviews, experiment design, and even co-authoring papers, leading to a surge in scientific output.\n\n• Multilingual LLMs capable of seamless translation and communication across hundreds of languages have become a reality. These models have significantly reduced language barriers in global communication and commerce.\n\n• Energy-efficient LLMs have been developed, addressing concerns about the environmental impact of AI. These models use advanced hardware and optimized algorithms to reduce power consumption by up to 90% compared to their 2021 counterparts.\n\n• LLMs integrated with Internet of Things (IoT) devices have transformed smart homes and cities. These models can process and respond to real-time data from multiple sources, enabling more sophisticated automation and decision-making in urban environments.\n\n• Open-source LLMs have gained significant traction, with community-driven models rivaling proprietary ones in performance. This has democratized access to advanced AI capabilities and accelerated innovation in the field.\n\n• LLMs specialized in creative writing and storytelling have become popular tools for authors and screenwriters. These models can generate complex narratives, develop characters, and even adapt writing styles to match specific genres or authors.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:"}]}], 'stop_sequences': ['\nObservation:'], 'system': [{'type': 'text', 'text': "You are AI LLMs Reporting Analyst\n. You're a meticulous analyst with a keen eye for detail. You're known for your ability to turn complex data into clear and concise reports, making it easy for others to understand and act on the information you provide.\nYour personal goal is: Create detailed reports based on AI LLMs data analysis and research findings\n\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}], 'max_tokens': 4096}'

Screenshots/Code snippets

Running the Crew
Secret Key: sk-ant-....
# Agent: AI LLMs Senior Data Researcher
## Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.

self.api_key sk-ufeK...
params {'model': 'claude-3-5-sonnet-20240620', 'messages': [{'role': 'system', 'content': "You are AI LLMs Senior Data Researcher\n. You're a seasoned researcher with a knack for uncovering the latest developments in AI LLMs. Known 
for your ability to find the most relevant information and present it in a clear and concise manner.\n\nYour personal goal is: Uncover cutting-edge developments in AI LLMs\n\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!"}, {'role': 'user', 'content': '\nCurrent Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.\n\n\nThis is the expect criteria for your final 
answer: A list with 10 bullet points of the most relevant information about AI LLMs\n\nyou MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available 
and give your best Final Answer, your job depends on it!\n\nThought:'}], 'timeout': None, 'temperature': None, 'top_p': None, 'n': None, 'stop': ['\nObservation:'], 'max_tokens': None, 'presence_penalty': None, 'frequency_penalty': 
None, 'logit_bias': None, 'response_format': None, 'seed': None, 'logprobs': None, 'top_logprobs': None, 'api_base': None, 'api_version': None, 'api_key': 'sk-....', 'stream': False}      

Operating System

Windows 10

Python Version

3.10

crewAI Version

0.86.0

crewAI Tools Version

0.25.8

Virtual Environment

Conda

Evidence

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "....\Coding\Side_projects\AI_agents\test103\.venv\lib\site-packages\crewai\agent.py", line 333, in execute_task
    result = self.agent_executor.invoke(
  File "...Side_projects\AI_agents\test103\.venv\lib\site-packages\crewai\agents\crew_agent_executor.py", line 102, in invoke
    formatted_answer = self._invoke_loop()
  File "C...\Coding\Side_projects\AI_agents\test103\.venv\lib\site-packages\crewai\agents\crew_agent_executor.py", line 206, in _invoke_loop
    raise e
  File "...Coding\Side_projects\AI_agents\test103\.venv\lib\site-packages\crewai\agents\crew_agent_executor.py", line 115, in _invoke_loop
    answer = self.llm.call(
  File "...\AI_agents\test103\.venv\lib\site-packages\crewai\llm.py", line 182, in call
    response = litellm.completion(**params)
  File "...Side_projects\AI_agents\test103\.venv\lib\site-packages\litellm\utils.py", line 998, in wrapper
    raise e
  File "...\AI_agents\test103\.venv\lib\site-packages\litellm\utils.py", line 876, in wrapper
    result = original_function(*args, **kwargs)
  File "...Side_projects\AI_agents\test103\.venv\lib\site-packages\litellm\main.py", line 2959, in completion
    raise exception_type(
  File "...Side_projects\AI_agents\test103\.venv\lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 2189, in exception_type
    raise e
  File "...\Side_projects\AI_agents\test103\.venv\lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 504, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: litellm.AuthenticationError: AnthropicException - {"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}

  File "...\Side_projects\AI_agents\test103\.venv\lib\site-packages\litellm\litellm_core_utils\exception_mapping_utils.py", line 504, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: litellm.AuthenticationError: AnthropicException - {"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}
An error occurred while running the crew: Command '['uv', 'run', 'run_crew']' returned non-zero exit status 1.

Possible Solution

I'm going to start playing around with os.environ instead of the .env file, and I'm reading up on the docs to do so. A simple pointer would help.
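
A minimal sketch of that direction, setting the key in the process environment before the crew is created so neither crewAI nor LiteLLM has to resolve it from the .env (class and input names below come from the quickstart template, so adjust them to your project):

    import os

    # Set the key explicitly before crewAI/LiteLLM resolve credentials;
    # this sidesteps any .env handling quirks on Windows.
    os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder; read from a secrets store

    from latest_ai_development.crew import LatestAiDevelopment

    LatestAiDevelopment().crew().kickoff(inputs={"topic": "AI LLMs"})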

Additional context

It seems like crewai run may have some kind of default value for API keys somewhere; I'm not sure. I want to set up the config directly now.

@tobiolabode tobiolabode added the bug Something isn't working label Jan 5, 2025
@rayl

rayl commented Jan 9, 2025

I had the same problem. Changing "MODEL=claude-3-5-sonnet-20240620" to "MODEL=anthropic/claude-3-5-sonnet-20240620" helped. It seems the provider defaults to "openai": https://github.com/crewAIInc/crewAI/blob/main/src/crewai/utilities/llm_utils.py#L141
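
The same provider prefix can also be set in code when constructing the agent's LLM, rather than through the .env. A sketch, assuming the crewai.LLM class available in this version (the api_key argument is optional if ANTHROPIC_API_KEY is set in the environment):

    import os
    from crewai import Agent, LLM

    # The "anthropic/" prefix tells LiteLLM to route to the Anthropic backend
    claude = LLM(
        model="anthropic/claude-3-5-sonnet-20240620",
        api_key=os.environ.get("ANTHROPIC_API_KEY"),
    )

    researcher = Agent(
        role="AI LLMs Senior Data Researcher",
        goal="Uncover cutting-edge developments in AI LLMs",
        backstory="Seasoned researcher known for finding the latest developments",
        llm=claude,
    )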
