Commit d72485a

Merge pull request #4 from DevlinRocha/ollama

Ollama

2 parents 7a7179c + 0889f5c

File tree: 9 files changed (+129 −105 lines)

package.json
Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 {
   "name": "create-lilypad-module",
-  "version": "0.0.30",
+  "version": "0.0.39",
   "description": "Create Lilypad modules with a modern Docker setup and minimal configuration.",
   "bin": {
     "create-lilypad-module": "src/create_lilypad_module/scaffold"
```

src/create_lilypad_module/templates/.env
Lines changed: 1 addition & 1 deletion

```diff
@@ -3,4 +3,4 @@ MODEL_VERSION=
 DOCKER_HUB_USERNAME=
 DOCKER_IMAGE=
 GITHUB_REPO=
-VERSION=
+VERSION=v0.0.0
```
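
The new `v0.0.0` default gives `scripts/build` a well-formed semver string to parse and increment on the first `--major`/`--minor`/`--patch` bump. For reference, a hypothetical filled-in `.env` (every value below is an illustrative placeholder, not something the template ships):

```sh
MODEL_NAME=llama2                            # hypothetical model
MODEL_VERSION=7b                             # hypothetical tag
DOCKER_HUB_USERNAME=alice
DOCKER_IMAGE=alice/llama2-module
GITHUB_REPO=github.com/alice/llama2-module
VERSION=v0.0.0                               # template default; bumped by scripts/build
```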

src/create_lilypad_module/templates/Dockerfile
Lines changed: 1 addition & 1 deletion

```diff
@@ -29,7 +29,7 @@ RUN mkdir -p ./outputs && chmod 777 ./outputs
 # Set outputs directory as a volume
 VOLUME ./outputs
 
-# Copy a script to start ollama and handle input
+# Copy source code and handle request
 COPY src ./src
 RUN chmod +x ./src/run_model
```
src/create_lilypad_module/templates/README.md
Lines changed: 42 additions & 7 deletions

````diff
@@ -15,24 +15,59 @@ To build and run a module on Lilypad Network, you'll need to have the [Lilypad CLI
 
 Your module's ready! 🎉
 
-Once your Docker image has been pushed to Docker Hub, you can run your module on Lilypad Network:
+Once your Docker image has been pushed to Docker Hub, you can run your module on Lilypad Network.
 
-> Run `git log` in your terminal to easily find the latest commit hash to use as the GitHub tag.
+> Make sure that you Base64 encode your request.
 
 ```sh
 export WEB3_PRIVATE_KEY=WEB3_PRIVATE_KEY
 
-lilypad run github.com/github_username/module_repo:github_tag -i prompt="What animal order do frogs belong to"
+lilypad run github.com/GITHUB_USERNAME/MODULE_REPO:TAG \
+  -i request="$(echo -n '{
+  "model": "MODEL_NAME:MODEL_VERSION",
+  "messages": [{
+    "role": "system",
+    "content": "you are a helpful AI assistant"
+  },
+  {
+    "role": "user",
+    "content": "what is the animal order of the frog?"
+  }],
+  "stream": false,
+  "options": {
+    "temperature": 1.0
+  }
+}' | base64 -w 0)"
 ```
 
+### Valid Options Parameters and Default Values
+
+- [Ollama Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values)
+
+| Parameter | Description | Default |
+| --- | --- | --- |
+| mirostat | Enable Mirostat sampling for controlling perplexity. (0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0) | `0` |
+| mirostat_eta | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. | `0.1` |
+| mirostat_tau | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. | `5` |
+| num_ctx | Sets the size of the context window used to generate the next token. | `2048` |
+| repeat_last_n | Sets how far back the model looks back to prevent repetition. (0 = disabled, -1 = num_ctx) | `64` |
+| repeat_penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. | `1.1` |
+| temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. | `0.8` |
+| seed | Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. | `0` |
+| stop | Sets the stop sequences to use. When this pattern is encountered the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile. | |
+| num_predict | Maximum number of tokens to predict when generating text. (-1 = infinite generation) | `-1` |
+| top_k | Reduces the probability of generating nonsense. A higher value (e.g., 100) will give more diverse answers, while a lower value (e.g., 10) will be more conservative. | `40` |
+| top_p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. | `0.9` |
+| min_p | Alternative to top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. | `0.0` |
+
 ## Available Scripts
 
 In the project directory, you can run:
 
 ### [`scripts/configure`](scripts/configure)
 
-Configure your module.
-Set the following values in the [`.env` file](.env)
+Configures your module.
+Sets the following values in the [`.env` file](.env)
 
 ```
 MODEL_NAME
@@ -48,15 +83,15 @@ Builds the Docker image and pushes it to Docker Hub.
 
 ### `--major`, `--minor`, and `--patch` Flags
 
-Increment the specified version before building the Docker image.
+Increments the specified version before building the Docker image.
 
 #### `--local` Flag
 
 Loads the built Docker image into the local Docker daemon.
 
 ### [`scripts/run`](scripts/run)
 
-Run your module.
+Runs your module.
 
 ## Learn More
 
````
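
Since the request must be Base64 encoded (per the new README callout), it is worth sanity-checking that the payload round-trips to valid JSON before submitting a job. A minimal sketch, assuming GNU `base64` and `jq` are installed (macOS's BSD `base64` has no `-w` flag, so drop `-w 0` there):

```sh
# Encode the request, decode it back, and validate the JSON with jq.
request='{"model": "MODEL_NAME:MODEL_VERSION", "messages": [{"role": "user", "content": "what is the animal order of the frog?"}], "stream": false}'
encoded=$(echo -n "$request" | base64 -w 0)
echo "$encoded" | base64 -d | jq .   # prints the parsed JSON on success
```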
src/create_lilypad_module/templates/help
Lines changed: 1 addition & 1 deletion

```diff
@@ -9,6 +9,6 @@ else
   echo "Available commands:"
   echo -e "\tscripts/configure Configure the module"
   echo -e "\tscripts/build [--local] [--major] [--minor] [--patch] Build and push a new Docker image"
-  echo -e "\tscripts/run [--local] <input> Run the module"
+  echo -e "\tscripts/run [--local] <request> Run the module"
   exit 1
 fi
```

src/create_lilypad_module/templates/lilypad_module.json.tmpl
Lines changed: 2 additions & 10 deletions

```diff
@@ -5,18 +5,10 @@
   "Spec": {
     "Deal": { "Concurrency": 1 },
     "Docker": {
-      "WorkingDirectory": "/app",
       "Entrypoint": [
-        "/app/src/run_model",
-        "--prompt", {{ .prompt }}
-        {{- if .temperature -}},
-        "--temperature", {{ .temperature }}
-        {{- end -}}
-        {{- if .max_tokens -}},
-        "--max_tokens", {{ .max_tokens }}
-        {{- end -}}
+        "/app/src/run_model", {{ .request }}
       ],
-      "Image": "dockerhub_username/image:tag"
+      "Image": "dockerhub_username/image@"
     },
     "Engine": "Docker",
     "Network": { "Type": "None" },
```

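The template now forwards a single Base64-encoded `request` input instead of the old `--prompt`/`--temperature`/`--max_tokens` flags, and the `Image` field now ends in `@`, presumably to be completed with an image digest rather than a mutable tag. As a sketch, once Lilypad renders `{{ .request }}` the container effectively executes (the Base64 string is an illustrative placeholder that decodes to `{"model": ...}`):

```sh
/app/src/run_model "eyJtb2RlbCI6IC4uLn0="
```
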
src/create_lilypad_module/templates/scripts/build
Lines changed: 2 additions & 2 deletions

```diff
@@ -13,7 +13,7 @@ if [ -z $MODEL_NAME ] || [ -z $MODEL_VERSION ] || [ -z $DOCKER_IMAGE ]; then
 fi
 
 for arg in $@; do
-  if [ $arg == "--local" ]; then
+  if [ $1 == "--local" ] || [ $1 == "-l" ]; then
     echo "Building the Docker image and loading it into the local Docker daemon..."
     local=true
   fi
@@ -37,7 +37,6 @@ for arg in $@; do
 
     VERSION="v$MAJOR.$MINOR.$PATCH"
     echo "New version: $VERSION"
-    sed -i "" "s/^VERSION=.*/VERSION=$VERSION/" .env
   fi
 
   if [ $arg == "--no-cache" ]; then
@@ -91,6 +90,7 @@ if [ -z $local ]; then
   echo "✅ Docker image built and published to Docker Hub successfully."
   echo -e "\thttps://hub.docker.com/repository/docker/$DOCKER_HUB_REPO/general"
   echo -e "\n\tscripts/run_module"
+  sed -i "" "s/^VERSION=.*/VERSION=$VERSION/" .env
 else
   echo "✅ Docker image built and loaded into local daemon successfully."
   echo -e "\n\tscripts/run_module --local"
```
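
Net effect of moving the `sed` write-back: `VERSION` in `.env` is only persisted on the publish path, so a version bump that never reaches Docker Hub (or a `--local` build) no longer advances the recorded version. A usage sketch under that reading:

```sh
# Bump the patch version and publish; .env's VERSION is updated only after
# the publish branch completes.
scripts/build --patch

# Build into the local Docker daemon; .env's VERSION is left untouched.
scripts/build --local
```
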
src/create_lilypad_module/templates/scripts/run
Lines changed: 60 additions & 21 deletions

```diff
@@ -1,36 +1,75 @@
 #!/usr/bin/env bash
 
-if [ "$#" -lt 1 ] || [ "$#" -gt 2 ]; then
-  echo "Usage: scripts/run [--local] <input>"
+CONFIG_FILE=".env"
+source $CONFIG_FILE
+
+function request {
+  if ! printenv | grep -q "WEB3_PRIVATE_KEY=."; then
+    printf "Enter your wallet private key: "
+    read -r private_key
+    WEB3_PRIVATE_KEY=$private_key
+    echo "Private key set"
+    echo "(Hint: Use 'export WEB3_PRIVATE_KEY=<PRIVATE_KEY>' to avoid this prompt in the future)"
+  fi
+  echo "Copy the JSON below to form your request:"
+  echo '
+    "messages": [{
+      "role": "system",
+      "content": "you are a helpful AI assistant"
+    },
+    {
+      "role": "user",
+      "content": "what is the animal order of the frog?"
+    }],
+    "options": {
+      "temperature": 1.0
+    }
+  '
+  printf "(Paste JSON as one line) ➔ "
+  read -r request
+  request="{\"model\": \"$MODEL_NAME:$MODEL_VERSION\", $request, \"stream\": false}"
+}
+
+if [ $# -gt 2 ]; then
+  echo "Usage: scripts/run [--local] <request>"
   echo "Example: scripts/run 'What animal order do frogs belong to?'"
   exit 1
-fi
+elif [ $# -eq 0 ]; then
+  request
+else
+  while [[ $# -gt 0 ]]; do
+    case $1 in
+      --local | -l)
+        echo "Running the Lilypad module Docker image locally..."
+        local=true
+        shift
+        ;;
+      *)
+        request=$1
+        shift
+        ;;
+    esac
+  done
 
-if [ $1 == "--local" ] || [ $1 == "-l" ]; then
-  if [ "$#" -ne 2 ]; then
-    echo "❌ Error: Input is required."
-    echo "Example: scripts/run --local 'What animal order do frogs belong to?'"
-    exit 1
+  if [[ -z $request ]]; then
+    request
   fi
-  echo "Running the Lilypad module Docker image locally..."
-  local=true
-  INPUT=$2
-else
-  INPUT=$1
 fi
 
-commit_hash=$(git log --pretty=format:%H | head -n 1)
+# Base64 encode the request
+base64_request=$(echo $request | base64 -w 0)
 
-if [ $local != true ]; then
+if [ -z $local ]; then
+  commit_hash=$(git log --pretty=format:%H | head -n 1)
   MODULE=$GITHUB_REPO:$commit_hash
   echo "Running $MODULE on Lilypad Network..."
-  echo "Original input: $JSON_INPUT"
-  echo "Base64 encoded: $BASE64_INPUT"
-  lilypad run $MODULE -i prompt=$INPUT
+  echo "Original request: $request"
+  echo "Base64 encoded: $base64_request"
+  lilypad run $MODULE -i request=$base64_request --web3-private-key=$WEB3_PRIVATE_KEY
 else
   MODULE=$DOCKER_IMAGE:$VERSION
   echo "Running $MODULE locally..."
-  echo "Original input: $JSON_INPUT"
-  echo "Base64 encoded: $BASE64_INPUT"
-  docker run $MODULE $INPUT
+  echo "Original request: $request"
+  echo "Base64 encoded: $base64_request"
+  docker run $MODULE $base64_request
 fi
```
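
The rewritten script accepts the request as a single JSON fragment, wraps it with the `model` from `.env` and `"stream": false`, then Base64-encodes it before dispatching to Lilypad Network or a local container. A usage sketch (the fragment mirrors the script's own prompt text):

```sh
# Interactive: prompts for the JSON fragment (and for the private key if
# WEB3_PRIVATE_KEY is not exported).
scripts/run

# Non-interactive: pass the fragment directly.
scripts/run '"messages": [{"role": "user", "content": "what is the animal order of the frog?"}], "options": {"temperature": 1.0}'

# Run against the locally loaded image instead of Lilypad Network.
scripts/run --local '"messages": [{"role": "user", "content": "hello"}]'
```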

src/create_lilypad_module/templates/src/run_model
Lines changed: 19 additions & 61 deletions

```diff
@@ -1,39 +1,11 @@
 #!/usr/bin/env bash
 
-# Initialize default values
-PROMPT=""
-TEMPERATURE="0.7"
-MAX_TOKENS="2048"
-
 # Create output directory if it doesn't exist
 mkdir -p /outputs
 
-echo "Input: $1" >&2
-
-# Parse command-line arguments
-while [[ $# -gt 0 ]]; do
-  case "$1" in
-    --prompt)
-      PROMPT="$2"
-      shift 2
-      ;;
-    --temperature)
-      TEMPERATURE="$2"
-      shift 2
-      ;;
-    --max_tokens)
-      MAX_TOKENS="$2"
-      shift 2
-      ;;
-    *)
-      echo "Unknown flag: $1" >&2
-      exit 1
-      ;;
-  esac
-done
-
-# Extract values from input JSON with defaults
-messages="[{\"role\": \"user\", \"content\": \"$PROMPT\"}]"
+# Parse Base64 request argument and decode to JSON
+echo "Raw request (Base64): $1" >&2
+request=$(echo "$1" | base64 -d)
 
 # Start the ollama server in the background
 echo "Starting Ollama server..." >&2
@@ -42,7 +14,7 @@ nohup bash -c "ollama serve &" >&2
 # Wait for server with timeout
 timeout=30
 start_time=$(date +%s)
-while ! curl -s http://127.0.0.1:11434 >/dev/null; do
+while ! curl -s http://127.0.0.1:11434 >/dev/null 2>&1; do
   current_time=$(date +%s)
   elapsed=$((current_time - start_time))
   if [ $elapsed -gt $timeout ]; then
@@ -55,49 +27,35 @@ done
 
 echo "Ollama server started" >&2
 
-# Prepare the chat completion request
-request=$(
-  cat <<EOF
-{
-  "model": "$MODEL_ID",
-  "messages": $messages,
-  "temperature": $TEMPERATURE,
-  "max_tokens": $MAX_TOKENS,
-  "stream": false
-}
-EOF
-)
-
 # Make the API call to Ollama's chat endpoint
 echo "Making request to Ollama..." >&2
 response=$(curl -s http://127.0.0.1:11434/api/chat \
   -H "Content-Type: application/json" \
   -d "$request")
 
-# Create JSON structure following OpenAI format
-escaped_response=$(echo "$response" | sed 's/"/\\"/g')
 formatted_response="{
-  \"id\": \"cmpl-$(openssl rand -hex 12)\",
-  \"object\": \"text_completion\",
-  \"created\": "$(date +%s)",
-  \"model\": \"$MODEL_ID\",
-  \"choices\": [{
-    \"text\": \"$escaped_response\",
-    \"index\": 0,
-    \"logprobs\": null,
-    \"finish_reason\": \"stop\"
+  'id': 'cmpl-$(openssl rand -hex 12)',
+  'object': 'text_completion',
+  'created': "$(date +%s)",
+  'model': '$MODEL_ID',
+  'choices': [{
+    'text': '$escaped_response',
+    'index': 0,
+    'logprobs': null,
+    'finish_reason': 'stop'
   }],
-  \"usage\": {
-    \"prompt_tokens\": null,
-    \"completion_tokens\": null,
-    \"total_tokens\": null
+  'usage': {
+    'prompt_tokens': null,
+    'completion_tokens': null,
+    'total_tokens': null
   }
 }"
 
 # Save debug info
 {
   echo "=== Debug Info ==="
-  echo "Input: $1"
+  date
+  echo "Request (Base64): $1"
   echo "Request to Ollama: $request"
   echo "Response from Ollama:"
   echo "$response"
```
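
Worth flagging in this hunk: `formatted_response` still references `$escaped_response`, but the line defining it is deleted, and the single-quoted keys will not parse as strict JSON downstream. Setting that aside, a sketch of exercising the new entrypoint end to end against a locally built image, mirroring what `scripts/run --local` does (assumes `.env` has been sourced):

```sh
# Compose a minimal request, Base64 encode it, and pass it to the container
# exactly as the rendered Entrypoint would.
request='{"model": "'"$MODEL_NAME:$MODEL_VERSION"'", "messages": [{"role": "user", "content": "hi"}], "stream": false}'
docker run "$DOCKER_IMAGE:$VERSION" "$(echo -n "$request" | base64 -w 0)"
```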
