
Commit a6f3ed6

document parameters
1 parent 6ed68e0 commit a6f3ed6

File tree: 1 file changed (+16, −7 lines)


src/create_lilypad_module/templates/README.md

Lines changed: 16 additions & 7 deletions
```diff
@@ -29,13 +29,22 @@ lilypad run github.com/github_username/module_repo:v0.0.0 -i input=$(echo '{"pro
 
 > \* === Required
 
-| Parameter   | Description                                                                                          | Default Value |
-| ----------- | ---------------------------------------------------------------------------------------------------- | ------------- |
-| prompt\*    | Message from the user.                                                                               | `""`          |
-| system      | System prompt for the model.                                                                         | `""`          |
-| num_ctx     | Sets the size of the context window used to generate the next token.                                 | `2048`        |
-| temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. | `0.8`         |
-| num_predict | Maximum number of tokens to predict when generating text.                                            | `-1`          |
+| Parameter      | Description                                                                                                                                                                                                                                                                                                                 | Default |
+| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
+| prompt\*       | The content of the message sent from the user to the model.                                                                                                                                                                                                                                                                  | `""`    |
+| system         | The content of the message sent from the system to the model.                                                                                                                                                                                                                                                                | `""`    |
+| mirostat       | Enables Mirostat sampling for controlling perplexity (0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0).                                                                                                                                                                                                                         | `0`     |
+| mirostat_eta   | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate results in slower adjustments, while a higher learning rate makes the algorithm more responsive.                                                                                                                     | `0.1`   |
+| mirostat_tau   | Controls the balance between coherence and diversity of the output. A lower value results in more focused and coherent text.                                                                                                                                                                                                 | `5`     |
+| num_ctx        | Sets the size of the context window used to generate the next token.                                                                                                                                                                                                                                                         | `2048`  |
+| repeat_last_n  | Sets how far back the model looks to prevent repetition (0 = disabled, -1 = num_ctx).                                                                                                                                                                                                                                        | `64`    |
+| repeat_penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) penalizes repetitions more strongly, while a lower value (e.g., 0.9) is more lenient.                                                                                                                                                                   | `1.1`   |
+| temperature    | The temperature of the model. Increasing the temperature makes the model answer more creatively.                                                                                                                                                                                                                             | `0.8`   |
+| seed           | Sets the random number seed used for generation. Setting this to a specific number makes the model generate the same text for the same prompt.                                                                                                                                                                               | `0`     |
+| num_predict    | Maximum number of tokens to predict when generating text (-1 = infinite generation).                                                                                                                                                                                                                                         | `-1`    |
+| top_k          | Reduces the probability of generating nonsense. A higher value (e.g., 100) gives more diverse answers, while a lower value (e.g., 10) is more conservative.                                                                                                                                                                   | `40`    |
+| top_p          | Works together with top_k. A higher value (e.g., 0.95) leads to more diverse text, while a lower value (e.g., 0.5) generates more focused and conservative text.                                                                                                                                                              | `0.9`   |
+| min_p          | Alternative to top_p, aiming to balance quality and variety. The parameter p is the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. | `0.0`   |
 
 ## Available Scripts
```
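The parameters documented in the table are passed to the module as a single JSON object via the `-i input=` flag of `lilypad run` (per the command in the hunk context above). A minimal sketch of building such a payload in Python follows; the exact schema the module expects is an assumption here, based only on the parameter names in the table:

```python
import json

# Hypothetical payload: keys mirror the documented parameter names.
# Which keys the module actually accepts is an assumption.
payload = {
    "prompt": "Why is the sky blue?",  # required parameter
    "temperature": 0.8,                # documented default: 0.8
    "num_ctx": 2048,                   # context window size
    "num_predict": -1,                 # -1 = infinite generation
}

# Serialize to the JSON string supplied to `-i input=`.
encoded = json.dumps(payload)
print(encoded)
```

Omitted keys fall back to the defaults listed in the table, so only `prompt` (the sole required parameter) strictly needs to be present.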
