
Feature: Fine-tune TinyLlama and Qwen2.5-coder models for Magistrala and Prism codebase #27

@drasko

Description

Is your feature request related to a problem? Please describe.

No

Describe the feature you are requesting, as well as the possible use case(s) for it.

Just as LLMs can be fine-tuned on custom datasets, so can SLMs (small language models).

We want to fine-tune:

  • TinyLlama
  • Phi-3

And we want to fine-tune them on our custom Magistrala, Prism and Cocos repositories, so that we can enhance their intelligence for code generation for our purposes.
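Before any fine-tuning run, the repositories need to be turned into a training corpus. Below is a minimal sketch of that preparation step: walking a local checkout, keeping source and docs files, and chunking them into plain-text samples suitable for causal-LM fine-tuning. The file extensions, chunk size, and sample shape are illustrative assumptions, not a fixed spec for this issue.

```python
import os

# Assumed set of file types worth training on; adjust per repository.
SOURCE_EXTENSIONS = {".go", ".py", ".md", ".proto"}

def collect_samples(repo_root, max_chars=2048):
    """Walk repo_root and return {'text': ...} samples of at most max_chars."""
    samples = []
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        for name in filenames:
            if os.path.splitext(name)[1] not in SOURCE_EXTENSIONS:
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except (OSError, UnicodeDecodeError):
                continue  # skip unreadable or binary files
            # Split long files into fixed-size chunks so each sample fits
            # comfortably in a small model's context window.
            for start in range(0, len(text), max_chars):
                chunk = text[start:start + max_chars]
                if chunk.strip():
                    samples.append({"text": chunk})
    return samples
```

A list of such dicts can be fed directly into `datasets.Dataset.from_list` and then into a standard supervised fine-tuning loop (e.g. LoRA via PEFT), whichever model is chosen.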

We want to compare:

  • Which is easier to fine-tune (better documented, simpler tooling, faster training, etc.)
  • Which gives better results after fine-tuning

Some references:

An analysis should first be done to decide whether fine-tuning or RAG is the better fit for this purpose: https://medium.com/@bijit211987/when-to-apply-rag-vs-fine-tuning-90a34e7d6d25
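To make the RAG side of that comparison concrete: instead of baking repository knowledge into model weights, RAG looks up relevant snippets at query time and prepends them to the prompt. The sketch below uses plain token-overlap scoring; a real setup would use embeddings and a vector store, but the control flow is the same. All names here are illustrative.

```python
def tokenize(text):
    """Naive tokenizer: lowercase whitespace split into a set of tokens."""
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Return the k documents sharing the most tokens with the query."""
    q = tokenize(query)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    """Assemble a RAG-style prompt: retrieved context, then the question."""
    context = "\n---\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The trade-off the linked article discusses then becomes tangible: RAG keeps repository knowledge fresh by re-indexing, while fine-tuning requires retraining whenever Magistrala, Prism, or Cocos change significantly.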

Indicate the importance of this feature to you.

Must-have

Anything else?

No response

Metadata

Labels

enhancement (New feature or request)
