
Update copilot-instructions.md: security hardening and LLM wrapper updates#8

Open
jeongmoon2006 wants to merge 5 commits into The-Pocket:main from jeongmoon2006:main

Conversation

@jeongmoon2006

📝 Overview
This PR focuses on improving security by removing hardcoded API keys, updating outdated LLM API versions (including the transition to Gemini 2.5), and expanding the supported design patterns.

🔒 Security & Configuration

  • Environment Variables: Replaced all hardcoded api_key="YOUR_API_KEY_HERE" with os.environ.get(...) across all LLM wrapper examples (OpenAI, Anthropic, Azure, Gemini).

  • Env Setup: Added a .env template file with API key placeholders.

  • Dependencies: Added python-dotenv to requirements.txt and integrated load_dotenv() into the main.py entry point.
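The env-var pattern described above can be sketched as follows. This is a minimal illustration, not the PR's exact code: the `get_api_key` helper and the `OPENAI_API_KEY` default are illustrative, and the `try/except` lets the sketch run even where `python-dotenv` is not installed.

```python
import os

# Load key=value pairs from a local .env file into the environment,
# falling back gracefully if python-dotenv is not installed.
try:
    from dotenv import load_dotenv
    load_dotenv()  # reads .env from the current directory, if present
except ImportError:
    pass

def get_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Fetch an API key from the environment instead of hardcoding it."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} in your environment or .env file")
    return key
```

Because `load_dotenv()` does not override variables already set in the environment, real deployment values always win over the `.env` placeholders.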

🤖 LLM Wrapper Updates

  • Google Gemini:

    • Renamed "PaLM API" to "Gemini".
    • Updated default model to gemini-2.5-flash.
    • Fixed broken indentation in the wrapper code.
  • Azure OpenAI:

    • Updated API version from 2023-05-15 to 2024-12-01-preview.
    • Configured endpoints and deployment names via environment variables.
  • Ollama: Updated default model from llama2 to llama3.3.

  • OpenAI/Claude: Made model selection configurable via environment variables for better flexibility.
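The "model via environment variable" change across the wrappers boils down to one small pattern, sketched below. The env-var names and fallback model strings here are illustrative assumptions (only `llama3.3` and `gemini-2.5-flash` come from this PR), not the repo's actual identifiers.

```python
import os

# Illustrative env-var names and default models for each wrapper.
# Only the Ollama and Gemini defaults are stated in the PR; the rest
# are placeholder assumptions.
DEFAULTS = {
    "openai": ("OPENAI_MODEL", "gpt-4o"),
    "anthropic": ("ANTHROPIC_MODEL", "claude-3-5-sonnet-latest"),
    "ollama": ("OLLAMA_MODEL", "llama3.3"),
    "gemini": ("GEMINI_MODEL", "gemini-2.5-flash"),
}

def resolve_model(provider: str) -> str:
    """Pick the model from the environment, falling back to a default."""
    env_var, default = DEFAULTS[provider]
    return os.environ.get(env_var, default)
```

Each wrapper then calls `resolve_model("openai")` (or its own provider key) instead of embedding a model string, so users can switch models without editing code.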

📚 New Features & Patterns

  • Design Patterns: Added 6 new patterns to the documentation with links to the cookbook:

    • Streaming, MCP, Memory, Supervisor, HITL (Human-in-the-loop), and Majority Vote.
  • MCP Integration: Added Model Context Protocol (MCP) Tools link to the utility list.

🐛 Bug Fixes & Refactoring

  • Utils Example: Replaced a buggy Gemini snippet (due to mismatched use_cache parameters) with a clean, verified OpenAI implementation.

1. Security: Replace all hardcoded api_key="YOUR_API_KEY_HERE" with os.environ.get(...) across every LLM wrapper example (OpenAI, Anthropic, Azure, Gemini)
Env config: Add .env file to project structure with API key placeholders; add python-dotenv to requirements; add load_dotenv() to main.py example

2. LLM wrappers:
Google: Fix broken indentation, rename "PaLM API" → "Gemini", use gemini-2.5-flash default
Azure: Update API version 2023-05-15 → 2024-12-01-preview, use env vars for endpoint/key/deployment
Ollama: Update model llama2 → llama3.3
OpenAI/Claude: Make model configurable via env vars

3. Design patterns: Add 6 newer patterns (Streaming, MCP, Memory, Supervisor, HITL, Majority Vote) with link to cookbook

4. Utility functions: Add MCP Tools link to utility list

5. Utils example: Replace buggy Gemini snippet (mismatched use_cache param) with clean OpenAI example
… policies

Revise copilot instructions for clarity and detail in agent coding steps

Update copilot-instructions.md: refresh outdated content
