Can I Finetune Llama3 Without Creating CustomDataset Function? · Issue #1362 · pytorch/torchtune · GitHub
Can I Finetune Llama3 Without Creating CustomDataset Function? #1362
Closed
@JinchuLi2002

Description

Hello,

I have a dataset in .jsonl format, where each line looks like this:
{"messages": [{"role": "system", "content": "some system msg"}, {"role": "user", "content": "some user input"}, {"role": "assistant", "content": "some expected output"}]}

I see there is an option to specify conversation_style=openai and source=json in the config files. However, the tutorial for Llama3 finetuning does not mention this.
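Concretely, the dataset section I have in mind looks roughly like the sketch below. The data_files path is just a placeholder, and I'm not certain every field name matches the current chat_dataset builder, so please correct me if this is off:

```yaml
# Rough sketch of a dataset config for OpenAI-style chat data in a .jsonl file.
# Assumptions: the chat_dataset builder accepts these fields and forwards
# data_files/split to Hugging Face load_dataset; my_data.jsonl is a placeholder.
dataset:
  _component_: torchtune.datasets.chat_dataset
  source: json
  data_files: my_data.jsonl
  split: train
  conversation_style: openai
  max_seq_len: 2048
  train_on_input: False
```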

I was able to get finetuning running, but the results weren't very good, and I'm wondering whether my workflow is correct at all. (Also, I couldn't seem to pass in the system prompt during inference testing; any ideas?)

Thank you in advance
