# Merge LoRA weights

## Description
Use the `teklia-qwen merge` command to merge the adapter layers into the base model.
| Parameter | Description | Type | Default |
|---|---|---|---|
| `--base-model` | Path or name of the base model. Can either be a local model or one from HF. | `str` | required |
| `--adapter` | Path or name of the adapter. Can either be a local adapter or one from HF. | `str` | required |
| `--output` | Path where the fully merged model will be saved. | `str` | `{adapter}-merged` |
## Example
Once you have trained a model with `teklia-qwen train`, you might want to merge the adapter layers into the base model.
- Command to use:

  ```shell
  teklia-qwen merge --base-model Qwen/Qwen3-VL-8B-Instruct \
      --adapter output/checkpoint-1200
  ```

- Output: the full model will be saved at `output/checkpoint-1200-merged/`.

  ```shell
  $ ls output/checkpoint-1200-merged/
  added_tokens.json
  chat_template.jinja
  config.json
  generation_config.json
  merges.txt
  model-00001-of-00004.safetensors
  model-00002-of-00004.safetensors
  model-00003-of-00004.safetensors
  model-00004-of-00004.safetensors
  model.safetensors.index.json
  preprocessor_config.json
  special_tokens_map.json
  tokenizer_config.json
  tokenizer.json
  video_preprocessor_config.json
  vocab.json
  ```
Note that the default output path follows the pattern `{adapter}-merged`, but you can specify a custom path with the `--output` option.
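Conceptually, merging a LoRA adapter folds the low-rank update `B @ A` (scaled by `alpha / rank`) into each adapted base weight, so inference no longer needs the separate adapter. Below is a minimal NumPy sketch of that arithmetic with toy dimensions and hypothetical variable names; it illustrates the math, not the actual `teklia-qwen` implementation:

```python
import numpy as np

# Toy dimensions (hypothetical; real model layers are far larger).
d_out, d_in, rank, alpha = 6, 4, 2, 4
scaling = alpha / rank

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(rank, d_in))    # LoRA down-projection (trained)
B = rng.normal(size=(d_out, rank))   # LoRA up-projection (trained)

# With the adapter attached, the layer computes W @ x + scaling * B @ (A @ x).
x = rng.normal(size=d_in)
adapted_out = W @ x + scaling * (B @ (A @ x))

# Merging folds the low-rank update into the base weight once and for all:
W_merged = W + scaling * (B @ A)
merged_out = W_merged @ x

print(np.allclose(adapted_out, merged_out))  # → True
```

The merged weight has the same shape as the original, which is why the merged model can be saved and loaded exactly like the base model, with no adapter files.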