Support Anura API #3

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged: 23 commits, Mar 2, 2025
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "create-lilypad-module",
-"version": "0.0.32",
+"version": "0.0.38",
"description": "Create Lilypad modules with a modern Docker setup and minimal configuration.",
"bin": {
"create-lilypad-module": "src/create_lilypad_module/scaffold"
Expand Down
2 changes: 1 addition & 1 deletion src/create_lilypad_module/templates/Dockerfile
@@ -29,7 +29,7 @@ RUN mkdir -p ./outputs && chmod 777 ./outputs
# Set outputs directory as a volume
VOLUME ./outputs

-# Copy a script to start ollama and handle input
+# Copy source code and handle request
COPY src ./src
RUN chmod +x ./src/run_model

36 changes: 25 additions & 11 deletions src/create_lilypad_module/templates/README.md
@@ -17,22 +17,35 @@ Your module's ready! 🎉

Once your Docker image has been pushed to Docker Hub, you can run your module on Lilypad Network.

-> Make sure that you Base64 encode your input.
+> Make sure that you Base64 encode your request.

```sh
export WEB3_PRIVATE_KEY=WEB3_PRIVATE_KEY

-lilypad run github.com/github_username/module_repo:v0.0.0 -i input=$(echo '{"prompt": "Which animal order do frogs belong to?", "system": "You are a helpful AI assistant", "temperature": "0.4"}' | base64 -w 0)
+lilypad run github.com/GITHUB_USERNAME/MODULE_REPO:TAG \
+  -i request="$(echo -n '{
+    "model": "MODEL_NAME:MODEL_VERSION",
+    "messages": [{
+      "role": "system",
+      "content": "you are a helpful AI assistant"
+    },
+    {
+      "role": "user",
+      "content": "what is the animal order of the frog?"
+    }],
+    "stream": false,
+    "options": {
+      "temperature": 1.0
+    }
+  }' | base64 -w 0)"
```
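As a quick sanity check before submitting a job, the encoded payload can be decoded locally to confirm it round-trips. This sketch is illustrative only — the model name is a placeholder, and `tr -d '\n'` stands in for the GNU-only `base64 -w 0` flag:

```shell
# Illustrative round-trip check (model name is hypothetical, not part of the module)
payload='{"model":"llama3:8b","messages":[{"role":"user","content":"hi"}],"stream":false}'
encoded=$(printf '%s' "$payload" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$payload" ] && echo "round-trip OK"
```

If the decoded output does not match what you meant to send, the job would fail with a malformed request, so this check is cheap insurance.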

-### Valid Parameters and Default Values
+### Valid Options Parameters and Default Values

-> \* === Required
+- [Ollama Modelfile](https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values)

| Parameter | Description | Default |
| -------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
-| prompt\* | The content of the message sent from the user to the model. | `""` |
-| system | The content of the message sent from the system to the model. | `""` |
| mirostat | Enable Mirostat sampling for controlling perplexity. (0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0) | `0` |
| mirostat_eta | Influences how quickly the algorithm responds to feedback from the generated text. A lower learning rate will result in slower adjustments, while a higher learning rate will make the algorithm more responsive. | `0.1` |
| mirostat_tau | Controls the balance between coherence and diversity of the output. A lower value will result in more focused and coherent text. | `5` |
@@ -41,7 +54,8 @@ lilypad run github.com/github_username/module_repo:v0.0.0 -i input=$(echo '{"pro
| repeat_penalty | Sets how strongly to penalize repetitions. A higher value (e.g., 1.5) will penalize repetitions more strongly, while a lower value (e.g., 0.9) will be more lenient. | `1.1` |
| temperature | The temperature of the model. Increasing the temperature will make the model answer more creatively. | `0.8` |
| seed | Sets the random number seed to use for generation. Setting this to a specific number will make the model generate the same text for the same prompt. | `0` |
-| num_predict | Maximum number of tokens to predict when generating text. (-1 = infinite generation) | `-1` |
+| stop | Sets the stop sequences to use. When this pattern is encountered the LLM will stop generating text and return. Multiple stop patterns may be set by specifying multiple separate stop parameters in a modelfile. | |
+| num_predict | Maximum number of tokens to predict when generating text. (-1 = infinite generation) | `-1` |
| top_k | Reduces the probability of generating nonsense. A higher value (e.g. 100) will give more diverse answers, while a lower value (e.g. 10) will be more conservative. | `40` |
| top_p | Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. | `0.9` |
| min_p | Alternative to the top_p, and aims to ensure a balance of quality and variety. The parameter p represents the minimum probability for a token to be considered, relative to the probability of the most likely token. For example, with p=0.05 and the most likely token having a probability of 0.9, logits with a value less than 0.045 are filtered out. | `0.0` |
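Several of the options above can be combined in one request. This is a hypothetical example, not taken from the module's docs — the model reference is a placeholder for the configured `MODEL_NAME:MODEL_VERSION`:

```shell
# Hypothetical request exercising temperature, top_k, and seed from the table above
request='{"model":"MODEL_NAME:MODEL_VERSION","messages":[{"role":"user","content":"Which animal order do frogs belong to?"}],"stream":false,"options":{"temperature":0.4,"top_k":40,"seed":42}}'
# Encode it the same way the run example does (tr -d used for portability)
printf '%s' "$request" | base64 | tr -d '\n'
```

Setting `seed` to a fixed value alongside a low `temperature` makes reruns reproducible, which is useful when comparing module versions.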
@@ -52,8 +66,8 @@ In the project directory, you can run:

### [`scripts/configure`](scripts/configure)

-Configure your module.
-Set the following values in the [`.env` file](.env)
+Configures your module.
+Sets the following values in the [`.env` file](.env)

```
MODEL_NAME
@@ -69,15 +83,15 @@ Builds the Docker image and pushes it to Docker Hub.

### `--major`, `--minor`, and `--patch` Flags

-Increment the specified version before building the Docker image.
+Increments the specified version before building the Docker image.

#### `--local` Flag

Loads the built Docker image into the local Docker daemon.

### [`scripts/run`](scripts/run)

-Run your module.
+Runs your module.

## Learn More

2 changes: 1 addition & 1 deletion src/create_lilypad_module/templates/help
@@ -9,6 +9,6 @@ else
echo "Available commands:"
echo -e "\tscripts/configure Configure the module"
echo -e "\tscripts/build [--local] [--major] [--minor] [--patch] Build and push a new Docker image"
-echo -e "\tscripts/run [--local] <input>			Run the module"
+echo -e "\tscripts/run [--local] <request>			Run the module"
exit 1
fi
5 changes: 2 additions & 3 deletions src/create_lilypad_module/templates/lilypad_module.json.tmpl
@@ -5,11 +5,10 @@
"Spec": {
"Deal": { "Concurrency": 1 },
"Docker": {
-"WorkingDirectory": "/app",
"Entrypoint": [
-"/app/src/run_model", {{ .input }}
+"/app/src/run_model", {{ .request }}
],
-"Image": "dockerhub_username/image:v0.0.0"
+"Image": "dockerhub_username/image@"
},
"Engine": "Docker",
"Network": { "Type": "None" },
4 changes: 2 additions & 2 deletions src/create_lilypad_module/templates/scripts/build
@@ -13,7 +13,7 @@ if [ -z $MODEL_NAME ] || [ -z $MODEL_VERSION ] || [ -z $DOCKER_IMAGE ]; then
fi

for arg in $@; do
-if [ $arg == "--local" ]; then
+if [ $arg == "--local" ] || [ $arg == "-l" ]; then
echo "Building the Docker image and loading it into the local Docker daemon..."
local=true
fi
@@ -37,7 +37,6 @@ for arg in $@; do

VERSION="v$MAJOR.$MINOR.$PATCH"
echo "New version: $VERSION"
-sed -i "" "s/^VERSION=.*/VERSION=$VERSION/" .env
fi

if [ $arg == "--no-cache" ]; then
@@ -91,6 +90,7 @@ if [ -z $local ]; then
echo "✅ Docker image built and published to Docker Hub successfully."
echo -e "\thttps://hub.docker.com/repository/docker/$DOCKER_HUB_REPO/general"
echo -e "\n\tscripts/run_module"
+sed -i "" "s/^VERSION=.*/VERSION=$VERSION/" .env
else
echo "✅ Docker image built and loaded into local daemon successfully."
echo -e "\n\tscripts/run_module --local"
79 changes: 54 additions & 25 deletions src/create_lilypad_module/templates/scripts/run
@@ -1,39 +1,68 @@
#!/usr/bin/env bash

if [ $# -lt 1 ] || [ $# -gt 2 ]; then
echo "Usage: scripts/run [--local] <input>"
echo "Example: scripts/run 'What animal order do frogs belong to?'"
exit 1
fi
CONFIG_FILE=".env"
source $CONFIG_FILE

if [ $1 == "--local" ] || [ $1 == "-l" ]; then
if [ $# -ne 2 ]; then
echo "❌ Error: Input is required."
echo "Example: scripts/run --local 'What animal order do frogs belong to?'"
exit 1
function request {
if ! printenv | grep -q "WEB3_PRIVATE_KEY=."; then
printf "Enter your wallet private key: "
read -r private_key
WEB3_PRIVATE_KEY=$private_key
fi
echo "Running the Lilypad module Docker image locally..."
local=true
INPUT=$2
echo "Copy the JSON below to form your request:"
printf '
"messages": [{
"role": "system",
"content": "you are a helpful AI assistant"
},
{
"role": "user",
"content": "what is the animal order of the frog?"
}],
"options": {
"temperature": 1.0
}
'
printf "\n(Paste JSON as one line) ➔ "
read -r request
request="{\"model\": \"$MODEL_NAME:$MODEL_VERSION\", $request, \"stream\": false}"
}
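Building JSON by string interpolation in shell is easy to get wrong: every double quote that must survive into the final string needs escaping inside a double-quoted assignment. A minimal sketch (the variable values here are hypothetical):

```shell
# Sketch: assembling the request string with correctly escaped quotes.
MODEL_NAME="llama3"
MODEL_VERSION="8b"
# The fragment a user would paste in (single-quoted, so its quotes pass through untouched)
partial='"messages": [{"role": "user", "content": "hi"}]'
# Escaped quotes produce literal quotes in the expanded string
request="{\"model\": \"$MODEL_NAME:$MODEL_VERSION\", $partial, \"stream\": false}"
echo "$request"
# → {"model": "llama3:8b", "messages": [{"role": "user", "content": "hi"}], "stream": false}
```

An unescaped `"` inside the double-quoted string would instead terminate it early, silently producing broken JSON, so it is worth echoing the assembled request before encoding it.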

if [ $# -gt 2 ]; then
echo "Usage: scripts/run [--local] <request>"
echo "Example: scripts/run 'What animal order do frogs belong to?'"
exit 1
elif [ $# -eq 0 ]; then
request
else
INPUT=$1
for arg in $@; do
if [ $# -eq 1 ] && ([ $1 == "--local" ] || [ $1 == "-l" ]); then
if [ $# -ne 2 ]; then
request
fi
echo "Running the Lilypad module Docker image locally..."
local=true
request=$2
else
request=$1
fi
done
fi

# Base64 encode the input
JSON_INPUT="{"messages": [{"role": "user", "content": "$INPUT"}]}"
BASE64_INPUT=$(echo $JSON_INPUT | base64)
commit_hash=$(git log --pretty=format:%H | head -n 1)
# Base64 encode the request
base64_request=$(echo $request | base64 -w 0)

if [ $local != true ]; then
if [ -z $local ]; then
commit_hash=$(git log --pretty=format:%H | head -n 1)
MODULE=$GITHUB_REPO:$commit_hash
echo "Running $MODULE on Lilypad Network..."
echo "Original input: $JSON_INPUT"
echo "Base64 encoded: $BASE64_INPUT"
lilypad run $MODULE -i prompt=$BASE64_INPUT
echo "Original request: $request"
echo "Base64 encoded: $base64_request"
lilypad run $MODULE -i request=$base64_request --web3-private-key=$WEB3_PRIVATE_KEY
else
MODULE=$DOCKER_IMAGE:$VERSION
echo "Running $MODULE locally..."
echo "Original input: $JSON_INPUT"
echo "Base64 encoded: $BASE64_INPUT"
docker run $MODULE $BASE64_INPUT
echo "Original request: $request"
echo "Base64 encoded: $base64_request"
docker run $MODULE $base64_request
fi
85 changes: 4 additions & 81 deletions src/create_lilypad_module/templates/src/run_model
@@ -3,83 +3,9 @@
# Create output directory if it doesn't exist
mkdir -p /outputs

# Parse base64 input argument and decode to JSON
echo "Raw input (base64): $1" >&2
input_json=$(echo "$1" | base64 --d)

# Parse arguments
prompt=$(echo "$input_json" | sed -n 's/.*"prompt":[[:space:]]*"\([^"]*\)".*/\1/p')
system=$(echo "$input_json" | sed -n 's/.*"system":[[:space:]]*"\([^"]*\)".*/\1/p')
mirostat=$(echo "$input_json" | sed -n 's/.*"mirostat":[[:space:]]*"\([^"]*\)".*/\1/p')
mirostat_eta=$(echo "$input_json" | sed -n 's/.*"mirostat_eta":[[:space:]]*"\([^"]*\)".*/\1/p')
mirostat_tau=$(echo "$input_json" | sed -n 's/.*"mirostat_tau":[[:space:]]*"\([^"]*\)".*/\1/p')
num_ctx=$(echo "$input_json" | sed -n 's/.*"num_ctx":[[:space:]]*"\([^"]*\)".*/\1/p')
repeat_last_n=$(echo "$input_json" | sed -n 's/.*"repeat_last_n":[[:space:]]*"\([^"]*\)".*/\1/p')
repeat_penalty=$(echo "$input_json" | sed -n 's/.*"repeat_penalty":[[:space:]]*"\([^"]*\)".*/\1/p')
temperature=$(echo "$input_json" | sed -n 's/.*"temperature":[[:space:]]*"\([^"]*\)".*/\1/p')
seed=$(echo "$input_json" | sed -n 's/.*"seed":[[:space:]]*"\([^"]*\)".*/\1/p')
num_predict=$(echo "$input_json" | sed -n 's/.*"num_predict":[[:space:]]*"\([^"]*\)".*/\1/p')
top_k=$(echo "$input_json" | sed -n 's/.*"top_k":[[:space:]]*"\([^"]*\)".*/\1/p')
top_p=$(echo "$input_json" | sed -n 's/.*"top_p":[[:space:]]*"\([^"]*\)".*/\1/p')
min_p=$(echo "$input_json" | sed -n 's/.*"min_p":[[:space:]]*"\([^"]*\)".*/\1/p')

# Initialize default values
mirostat=${mirostat:-"0"}
mirostat_eta=${mirostat_eta:-"0.1"}
mirostat_tau=${mirostat_tau:-"5.0"}
num_ctx=${num_ctx:-"2048"}
repeat_last_n=${repeat_last_n:-"64"}
repeat_penalty=${repeat_penalty:-"1.1"}
temperature=${temperature:-"0.8"}
seed=${seed:-"0"}
num_predict=${num_predict:-"-1"}
top_k=${top_k:-"40"}
top_p=${top_p:-"0.9"}
min_p=${min_p:-"0.0"}

if [[ -z "$prompt" ]]; then
echo "❌ Error: Prompt is required" >&2
exit 1
fi

echo "Prompt: $prompt" >&2

# Prepare the messages array
if [[ -z "$system" ]]; then
messages=(
"[{\"role\": \"user\", \"content\": \"$prompt\"}]"
)
else
messages=(
"[{\"role\": \"system\", \"content\": \"$system\"},
{\"role\": \"user\", \"content\": \"$prompt\"}]"
)
fi

# Prepare the chat completion request
request=$(
cat <<EOF
{
"model": "$MODEL_ID",
"messages": $messages,
"stream": false,
"options": {
"mirostat": $mirostat,
"mirostat_eta": $mirostat_eta,
"mirostat_tau": $mirostat_tau,
"num_ctx": $num_ctx,
"repeat_last_n": $repeat_last_n,
"repeat_penalty": $repeat_penalty,
"temperature": $temperature,
"seed": $seed,
"num_predict": $num_predict,
"top_k": $top_k,
"top_p": $top_p,
"min_p": $min_p
}
}
EOF
)
# Parse Base64 request argument and decode to JSON
echo "Raw request (Base64): $1" >&2
request=$(echo "$1" | base64 -d)

# Start the ollama server in the background
echo "Starting Ollama server..." >&2
@@ -107,8 +33,6 @@ response=$(curl -s http://127.0.0.1:11434/api/chat \
-H "Content-Type: application/json" \
-d "$request")

# Create JSON structure following OpenAI format
escaped_response=$(echo "$response" | sed 's/"/\\"/g')
formatted_response="{
'id': 'cmpl-$(openssl rand -hex 12)',
'object': 'text_completion',
@@ -131,8 +55,7 @@ formatted_response="{
{
echo "=== Debug Info ==="
date
echo "System: $system"
echo "Input: $prompt"
echo "Request (Base64): $1"
echo "Request to Ollama: $request"
echo "Response from Ollama:"
echo "$response"