Conversation

@Marquis03

This pull request updates the logic for validating the LLM connection in the validate_llm_connection function. The main changes include adjusting the parameters for the LLM test call and expanding the success criteria to support more result formats.

Improvements to LLM connection validation:

  • Updated the success condition to accept a response if either the content or reasoning_content attribute is present and non-empty in the result, making the validation compatible with Reasoning LLM response formats.
  • Increased max_tokens from 1 to 5 and temperature from 0.1 to 1.0 in the GLOBAL_LLM.run call. The original settings (max_tokens=1, temperature=0.1) could cause some models to emit only a newline character (\n), which was stripped during content validation and produced a false negative: a working LLM connection was judged unresponsive. The new settings give the model room for a short, meaningful reply during the connection test, avoiding such misjudgments (see the sketch below).
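
For illustration, here is a minimal sketch of how the adjusted check could look. The exact signature of GLOBAL_LLM.run and the shape of its result object are assumptions inferred from the description above, not the actual diff:

```python
def validate_llm_connection() -> bool:
    """Send a tiny test prompt and report whether the LLM responded."""
    try:
        # Wider token budget and higher temperature so models that would
        # emit only "\n" under max_tokens=1 / temperature=0.1 still return
        # visible output. GLOBAL_LLM.run's call signature is assumed here.
        result = GLOBAL_LLM.run("Hello", max_tokens=5, temperature=1.0)
    except Exception:
        return False

    # Accept the response if either field is present and non-empty after
    # stripping whitespace; Reasoning LLMs may fill only reasoning_content.
    content = (getattr(result, "content", "") or "").strip()
    reasoning = (getattr(result, "reasoning_content", "") or "").strip()
    return bool(content or reasoning)
```

Stripping whitespace before the check is exactly why the old max_tokens=1 setting failed when a model's single token happened to be a newline.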

@Marquis03 Marquis03 changed the base branch from main to dev January 8, 2026 12:44