@Stepka commented Jan 13, 2026

This PR adds an optional `system_prompt` parameter to `GeneralLLM.invoke`.

The main motivation is to remove the need to rely on the deprecated `Perplexity` class for setting system-level instructions. With this change, system prompts can be passed directly to `GeneralLLM.invoke`, making the API more flexible and future-proof.

This is particularly useful for existing implementations such as `Q3TemplateBot2024`, where system prompts are currently handled through the deprecated `Perplexity` abstraction.

Changes

  • Added an optional `system_prompt` parameter to `GeneralLLM.invoke` (see the sketch after this list)

  • Allows explicit system-level instructions without relying on deprecated classes

  • Improves compatibility with current and future LLM backends
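
For illustration, here is a minimal sketch of what the extended signature could look like. It is not the exact implementation; it assumes `GeneralLLM` builds an OpenAI-style message list internally, and `_call_model` is a hypothetical internal helper:

    class GeneralLLM:
        async def invoke(self, prompt: str, system_prompt: str | None = None) -> str:
            # Prepend the system message only when one is provided, so existing
            # calls without a system_prompt behave exactly as before
            messages = []
            if system_prompt is not None:
                messages.append({"role": "system", "content": system_prompt})
            messages.append({"role": "user", "content": prompt})
            return await self._call_model(messages)  # hypothetical internal helper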

Backward compatibility

  • Existing calls to `GeneralLLM.invoke` remain unchanged

  • The new parameter is optional and does not break current usage (see the call styles after this list)
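
Concretely, both call styles below work; the second is the new, purely additive form (the model name and constructor arguments are illustrative):

    model = GeneralLLM(model="perplexity/sonar-pro")  # illustrative configuration

    # Existing style: behaves exactly as before
    response = await model.invoke("Summarize the latest news on this question.")

    # New style: system-level instructions passed directly
    response = await model.invoke(
        "Summarize the latest news on this question.",
        system_prompt="You are a concise research assistant.",
    )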

Motivation

Deprecating `Perplexity` requires an alternative way to inject system prompts. This change provides a minimal and clean solution while keeping the existing API stable.

Example

This implementation of `Q3TemplateBot2024.run_research`

    async def run_research(self, question: MetaculusQuestion) -> str:
        system_prompt = clean_indents(
            """
            You are an assistant to a superforecaster.
            The superforecaster will give you a question they intend to forecast on.
            To be a great assistant, you generate a concise but detailed rundown of the most relevant news, including if the question would resolve Yes or No based on current information.
            You do not produce forecasts yourself.
            """
        )

        # Note: The original q3 bot did not set temperature, and I could not find the default temperature of perplexity
        response = await Perplexity(
            temperature=0.1, system_prompt=system_prompt
        ).invoke(question.question_text)
        return response

can be changed to

    async def run_research(self, question: MetaculusQuestion) -> str:
        system_prompt = clean_indents(
            """
            You are an assistant to a superforecaster.
            The superforecaster will give you a question they intend to forecast on.
            To be a great assistant, you generate a concise but detailed rundown of the most relevant news, including if the question would resolve Yes or No based on current information.
            You do not produce forecasts yourself.
            """
        )

        # Note: The original q3 bot did not set temperature, and I could not find the default temperature of perplexity
        # Assumes the "researcher" llm is configured with a `perplexity/sonar-pro` model
        model = self.get_llm("researcher", "llm")
        response = await model.invoke(question.question_text, system_prompt=system_prompt)
        return response
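
For this to work, the bot's "researcher" slot needs to point at a Perplexity model. A hypothetical setup, assuming the bot accepts an `llms` mapping (the constructor arguments shown here are assumptions, not part of this PR):

    bot = Q3TemplateBot2024(
        llms={
            # Hypothetical: route research calls through the new invoke path
            "researcher": GeneralLLM(model="perplexity/sonar-pro", temperature=0.1),
        }
    )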
