The above image remixes the Hydra code "Filet Mignon" by AFALFL and the GLSL shader "Just another cube" by mrange, licensed under CC BY-NC-SA 4.0 and CC0 respectively.
Patchies is a tool for building interactive audio-visual patches in the browser with JavaScript and GLSL. It's made for creative coding; patch objects and code snippets together to make visualizations, simulations, soundscapes and artistic explorations 🎨
Try it out at patchies.app - it's open source and free to use 😎
Patchies lets you use the audio-visual tools and libraries that you know (and love!), together in one place. For example:
- P5.js, a library for creative coding and making art.
- Hydra, a live-coding video synthesizer.
- Strudel, a TidalCycles-like music environment.
- ChucK, a programming language for real-time sound synthesis.
- ML5.js, a friendly machine learning library for the web.
- SwissGL, a minimal WebGL2 wrapper for shaders.
- GLSL fragment shaders, for complex 3D visual effects.
- Tone.js, a framework for creating interactive music in the browser.
- Web Audio API, for powerful audio synthesis and processing.
- HTML5 Canvas API, for custom 2D graphics.
- ...as well as writing JavaScript code directly.
Patchies is designed to mix textual coding and visual patching, using the best of both worlds. Instead of writing long chunks of code or patching together a huge web of small objects, Patchies encourages you to write small and compact programs and patch 'em together.
If you haven't used a patching environment before: patching is a visual way to program by connecting objects together. Each object does one thing, e.g. generates sound, renders visuals, or computes values. You connect the output of one object to the input of another to create a flow of data. We call the whole visual program a "patch" or "patcher".
This lets you see the program's core composition and its intermediate results, such as audio, video and message flows, while using tools you're already familiar with that let you do a lot with a little code. This works through Message Passing, Video Chaining and Audio Chaining, which are heavily inspired by tools like Max/MSP, Pure Data, TouchDesigner and VVVV.
Here's a simple Patchies patch that uses Message Passing and Video Chaining together:
It contains a JS random walker (using code from The Nature of Code) which handles `add` and `clear` messages. On each frame, it ticks the walker then sends the `[x, y]` position to a `p5` object, which draws points on the canvas. The `p5` object then pipes the image to a chain of Hydra nodes, which masks and diffs the visuals:
Try out the patch here to see how it works.
"What I cannot create, I do not understand. Know how to solve every problem that has been solved." - Richard Feynman
- Go to patchies.app.
- Use your mouse to pan and zoom the canvas.
  - Scroll up: zoom in. Scroll down: zoom out.
  - Drag on empty space to pan the canvas.
- Press `Enter` to create a new object.
  - Type to search for object name. Try `hydra`, `glsl` or `p5`.
  - `Arrow Up/Down` navigates the list. `Enter` inserts the object. `Esc` closes the menu.
- Click on the "+ objects" button on the bottom left to see a list of objects you can create.
  - Drag the object name from the bottom bar onto the canvas to create it.
  - This is slower than `Enter`, but it lets you see all objects at a glance 👀
- Click on an object to select it. The outline color should change when an object is selected.
  - If you can't drag an object, click on the title of the object and drag it.
  - Once selected, drag the object to move it around.
- Press `Delete` to delete the selected object.
- When hovering the mouse over an object, you'll see floating icon buttons such as "edit code" and "play/stop" on the top right.
  - Use "Edit Code" to open the code editor.
  - Press `Shift + Enter` while in a code editor to run the code again. This helps you to make changes to the code and see the results immediately.
- `Ctrl/Cmd + K` brings up the command palette.
  - You can do many actions here, such as toggling fullscreen, importing/exporting patch files, saving/loading patches in your browser, setting API keys, opening a secondary output screen, toggling FPS monitors, toggling bottom bars and more.
- Click on the handle on the top or bottom of an object, and drag to connect it to another object.
  - Top handles are inputs; bottom handles are outputs.
  - You can connect multiple outlets to a single inlet.
To create shareable links, click on the "Share Link" button on the bottom right. You can also use "Share Patch" from the command palette.
You can use the Shortcuts button on the bottom right to see a list of shortcuts. Here are some of the most useful ones:
- `Click on object / title`: focus on the object.
- `Drag on object / title`: move the object around.
- `Scroll up`: zoom in.
- `Scroll down`: zoom out.
- `Drag on empty space`: pan the canvas.
- `Enter`: create a new object at the cursor position.
- `Ctrl/Cmd + K`: open the command palette to search for commands.
- `Shift + Enter`: run the code in the code editor within the selected object.
- `Delete`: delete the selected object.
- `Ctrl + C`: copy the selected object.
- `Ctrl + V`: paste the copied object.
Each object can send messages to other objects, and receive messages from other objects.
In this example, two `slider` objects send their values to an `expr $1 + $2` object, which adds the numbers together. The result is sent as a message to the `p5` object, which displays it.
Here are some examples to get you started:
- Create two `button` objects, and connect the outlet of one to the inlet of the other.
  - When you click on the first button, it will send a `{type: 'bang'}` message to the second button, which will flash.
- Create a `msg` object with the message `hello world` (you can hit `Enter` and type `m hello world`). Then, hit `Enter` again and search for the `message-console.js` preset. Connect them together.
  - When you click on the message object, it will send the string `hello world` to the console object, which will log it to the virtual console.
In JavaScript-based objects such as `js`, `p5`, `hydra`, `canvas`, `strudel`, `dsp~` and `tone~`, you can use the `send()` and `recv()` functions to send and receive messages between objects. For example:
```js
// In the source `js` object
send('Hello from Object A')

// In the target `js` object
recv((data) => {
  // data is "Hello from Object A"
  console.log('Received message:', data)
})
```
This is similar to the second example above, but using JavaScript code.
The `recv` callback also accepts a `meta` argument in addition to the message data. It includes the `inlet` field, which lets you know which inlet the message came from.

You can combine this with `send(data, {to: inletIndex})` to send data to only a particular inlet, for example:
```js
recv((data, meta) => {
  send(data, {to: meta.inlet})
})
```
In the above example, if the message came from inlet 2, it will be sent to outlet 2.
In `js`, `p5`, `hydra`, `canvas`, `dsp~` and `tone~` objects, you can call `setPortCount(inletCount, outletCount)` to set the exact number of message inlets and outlets. For example, `setPortCount(2, 1)` ensures there are 2 message inlets and 1 message outlet.
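Here is a minimal sketch of a `js` object that combines `setPortCount` with `meta.inlet` to route messages (the gating condition is just an illustration):

```js
// two message inlets, one message outlet
setPortCount(2, 1)

recv((data, meta) => {
  // meta.inlet is 0 for the first inlet, 1 for the second
  if (meta.inlet === 0) {
    send(data) // pass only first-inlet messages through
  }
})
```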
See the Message Passing with GLSL section for how to use message passing with GLSL shaders to pass data to shaders dynamically.
You can chain visual objects together to create video effects and compositions, by using the output of a visual object as an input to another.
The above example creates a `hydra` object and a `glsl` object that produce patterns, and connects them to a `hydra` object that subtracts the two visuals using `src(s0).sub(s1).out(o0)`.
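As a sketch, the code inside the subtracting `hydra` object could look like this (using `setVideoCount` from the hydra section below):

```js
// expose two visual inlets as the s0 and s1 sources
setVideoCount(2)

// subtract the second visual from the first, and render to output o0
src(s0).sub(s1).out(o0)
```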
This is very similar to shader graphs in programs like TouchDesigner, Unity, Blender, Godot and Substance Designer.
To use video chaining:

- Try out the presets to get started quickly.
  - Pipe presets (e.g. `pipe.hydra`, `pipe.gl`) simply pass the visual through without any changes. This is the best starting point for chaining.
  - Hydra has many presets that perform image operations (e.g. `diff.hydra`, `add.hydra`, `sub.hydra`) on two visual inputs; see the hydra section.
  - Check out the docs of each visual object for more fun presets you can use.
- The visual object should have at least one visual inlet and/or outlet, i.e. orange circles on the top and bottom.
  - Inlets provide visuals into the object, while outlets output visuals from the object.
  - In `hydra`, you can call `setVideoCount(ins = 1, outs = 1)` to specify how many visual inlets and outlets you want. See the hydra section for more details.
  - For chaining `glsl` objects, you can dynamically create sampler2D uniforms. See the glsl section for more details.
- The visual object should have code that takes in a visual source, does something, and outputs a visual. See the above presets for examples.
- Connect the orange outlets of a source object to the orange inlets of a target object.
  - Try connecting the orange visual outlet of `p5` to an orange visual inlet of a `pipe.hydra` preset, and then connect the `hydra` object to a `pipe.gl` preset. You should see the output of the `p5` object being passed through the `hydra` and `glsl` objects without modification.
- Getting lag and slow patches? See the Rendering Pipeline section on how to avoid lag.
Similar to video chaining, you can chain many audio objects together to create audio effects and soundscapes.
The above example sets up an FM synthesizer audio chain that uses a combination of `osc~` (sine oscillator), `expr` (math expression), `gain~` (gain control), and `fft~` (frequency analysis) objects to create a simple synth with frequency modulation.
For a more fun example, here's a little patch by @kijjaz that uses `expr~` to create a funky beat:
If you have used an audio patcher before (e.g. Pure Data, Max/MSP, FL Studio Patcher, Bitwig Studio's Grid), the idea is similar.
- You can use these objects as audio sources: `strudel`, `chuck`, `ai.tts`, `ai.music`, `soundfile~`, `sampler~`, `video`, `dsp~` and `tone~`, as well as the web audio objects (e.g. `osc~`, `sig~`, `mic~`).
  - VERY IMPORTANT: you must connect your audio sources to `dac~` to hear the audio output, otherwise you will hear nothing. Audio sources do not output audio unless connected to `dac~`. Use `gain~` to control the volume.
  - See the documentation on audio objects for more details on how these work.
- You can use these objects to process audio: `gain~`, `fft~`, `+~`, `lowpass~`, `highpass~`, `bandpass~`, `allpass~`, `notch~`, `lowshelf~`, `highshelf~`, `peaking~`, `compressor~`, `pan~`, `delay~`, `waveshaper~`, `convolver~`, `expr~`, `dsp~` and `tone~`.
- Use the `fft~` object to analyze the frequency spectrum of the audio signal. See the Audio Analysis section on how to use FFT with your visual objects.
- You can use `dac~` to output audio to your speakers.
Here is a non-exhaustive list of the objects available in Patchies.
These objects support video chaining and can be connected to create complex visual effects:
- P5.js is a JavaScript library for creative coding. It provides a simple way to create graphics and animations, but you can do very complex things with it.
- If you are new to P5.js, I recommend watching Patt Vira's tutorials on YouTube, or on her website. They're fantastic for both beginners and experienced developers.
- Read the P5.js documentation to see how P5 works.
- See the P5.js tutorials and OpenProcessing for more inspiration.
- You can also use ML5.js and Matter.js in your P5 sketch to do machine learning and 2D physics simulations.
  - Use the `loadLibrary` function to load the libraries asynchronously. For example:

    ```js
    let Matter
    let ml5

    async function setup() {
      createCanvas(252, 164)
      pixelDensity(4)

      Matter = await loadLibrary('matter')
      ml5 = await loadLibrary('ml5')
    }

    function draw() {
      clear()
      fill(255, 255, 100)
      ellipse(126, 82, 80, 80)
    }
    ```
- You can call these special methods in your sketch (a combined example follows below):
  - `noDrag()` disables dragging the whole canvas. You must call this method if you want to add interactivity to your sketch, such as adding sliders or mousePressed events. You can call it in your `setup()` function.
    - When `noDrag()` is enabled, you can still drag the "p5" title to move the whole object around.
  - `send(message)` and `recv(callback)`, see Message Passing.
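For example, here is a minimal sketch of a `p5` object that collects incoming `[x, y]` messages and draws them as points, similar to the walker example above (the canvas size is arbitrary):

```js
let points = []

function setup() {
  createCanvas(252, 164)
  noDrag() // we want interactivity inside the sketch
}

// collect [x, y] positions sent from an upstream object
recv(([x, y]) => points.push([x, y]))

function draw() {
  background(0)
  stroke(255)
  for (const [x, y] of points) point(x, y)
}
```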
- Hydra is a live coding video synthesizer created by Olivia Jack. You can use it to create all kinds of video effects.
- See the Hydra documentation to learn how to use Hydra.
- Try out the standalone editor at Hydra to see how Hydra works.
  - Use the "shuffle" button on the editor to get code samples you can use. You can copy them into Patchies. Check the license terms first.
- You can call these special methods in your Hydra code:
  - `setVideoCount(ins = 1, outs = 1)` creates the specified number of Hydra source ports. `setVideoCount(2)` initializes the `s0` and `s1` sources with the first two visual inlets.
  - The full Hydra synth is available as `h`.
  - Outputs are available as `o0`, `o1`, `o2`, and `o3`.
  - `send(message)` and `recv(callback)` work here, see Message Passing.
- Try out these presets to get you started:
  - `pipe.hydra`: passes the image through without any changes.
  - `diff.hydra`, `add.hydra`, `sub.hydra`, `blend.hydra`, `mask.hydra`: perform image operations (difference, addition, subtraction, blending, masking) on two video inputs.
  - `filet-mignon.hydra`: example Hydra code "Filet Mignon" from AFALFL. Licensed under CC BY-NC-SA 4.0.
- GLSL is a shading language used in OpenGL. You can use it to create complex visual effects and animations.
- You can use video chaining by connecting any visual objects (e.g. `p5`, `hydra`, `glsl`, `swgl`, `bchrn`, `ai.img` or `canvas`) to the GLSL object via the four visual inlets.
- You can create any number of GLSL uniform inlets by defining them in your GLSL code.
  - For example, if you define `uniform float iMix;`, it will create a float inlet for you to send values to.
  - If you define the uniform as `sampler2D`, such as `uniform sampler2D iChannel0;`, it will create a visual inlet for you to connect video sources to.
- See Shadertoy for examples of GLSL shaders.
  - All shaders on the Shadertoy website are automatically compatible with `glsl`, as they accept the same uniforms.
- I recommend playing with The Book of Shaders to learn the GLSL basics!
- Try these presets for GLSL to get you started:
  - `red.gl`: solid red color.
  - `pipe.gl`: passes the image through without any changes.
  - `mix.gl`: mixes two video inputs.
  - `overlay.gl`: puts the second video input on top of the first one.
  - `fft-freq.gl`: visualizes the frequency spectrum from audio input.
  - `fft-waveform.gl`: visualizes the audio waveform from audio input.
  - `switcher.gl`: switches between six video inputs by sending an int message of 0 - 5.
You can send messages into the GLSL uniforms to set the uniform values in real-time. First, create GLSL uniforms using the standard GLSL syntax. For example, these two declarations add two dynamic inlets to the GLSL object:
```glsl
uniform float iMix;
uniform vec2 iFoo;
```
You can now send a message of value `0.5` to `iMix`, and send `[0.0, 0.0]` to `iFoo`. When you send messages to these inlets, it will set the internal GLSL uniform values for the object. The type of the message must match the type of the uniform, otherwise the message will not be sent.
If you want to set a default uniform value for when the patch gets loaded, use a `loadbang` object connected to a `msg` object or a slider. `loadbang` sends a `{type: 'bang'}` message when the patch is loaded, which you can use to trigger a `msg` object or a `slider` to send the default value to the GLSL uniform inlet.
Supported uniform types are `bool` (boolean), `int` (number), `float` (floating point number), `vec2`, `vec3`, and `vec4` (arrays of 2, 3, or 4 numbers).
- SwissGL is a wrapper for WebGL2 that lets you create shaders in very few lines of code. Here is how to make a simple animated mesh:

  ```js
  function render({t}) {
    glsl({
      t,
      Mesh: [10, 10],
      VP: `XY*0.8+sin(t+XY.yx*2.0)*0.2,0,1`,
      FP: `UV,0.5,1`,
    })
  }
  ```

- See the SwissGL examples for some inspiration on how to use SwissGL.
  - Right now, we haven't hooked the mouse and camera up to SwissGL yet, so a lot of what you see in the SwissGL demo won't work in Patchies yet. PRs are welcome!
- You can use HTML5 Canvas to create custom graphics and animations. The rendering context is exposed as `ctx` in the JavaScript code, so you can use methods like `ctx.fill()` to draw on the canvas.
- You cannot use DOM APIs such as `document` or `window` in the canvas code. This is because the HTML5 canvas runs as an offscreen canvas on the rendering pipeline.
- You can call these special methods in your canvas code:
  - `noDrag()` disables dragging the whole canvas. This is needed if you want to add interactivity to your canvas, such as adding sliders. You can call it in your `setup()` function.
  - `send(message)` and `recv(callback)`, see Message Passing.
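As an illustration, here is a minimal sketch of a `canvas` object that animates a rectangle with `ctx` and `requestAnimationFrame` (the sizes, colors and loop structure are assumptions):

```js
let x = 0

function render() {
  const { width, height } = ctx.canvas
  ctx.clearRect(0, 0, width, height)

  // a rectangle sweeping across the canvas
  ctx.fillStyle = 'tomato'
  ctx.fillRect(x, height / 2 - 20, 40, 40)
  x = (x + 2) % width

  requestAnimationFrame(render) // schedule the next frame
}

render()
```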
- Butterchurn is a JavaScript port of the Winamp Milkdrop visualizer.
- You can use it as a video source and connect it to other visual objects (e.g. `hydra` and `glsl`) to derive more visual effects.
- It can be very compute-intensive. Use it sparingly, otherwise your patch will lag. It also runs on the main thread; see the rendering pipeline section for more details.
- Load and display images from URLs or local files.
- Supports video chaining - can be used as a texture source for other visual objects.
- Messages:
  - `string`: load the image from the given URL.
- Load and play videos from URLs or local files.
- Supports audio and video chaining - can be used as a texture and audio source for other objects.
- Messages:
  - `bang`: restart the video.
  - `string`: load the video from the given URL.
  - `{type: 'play'}`: play the video.
  - `{type: 'pause'}`: pause the video.
  - `{type: 'loop', value: false}`: do not loop the video.
- Set the final output that appears as the background.
- The endpoint for video chaining pipelines.
- Determines what the audience sees as the main visual.
- Use `console.log()` to log messages to the virtual console.
- Use `setInterval(callback, ms)` to run a callback every `ms` milliseconds.
  - The code block has a special version of `setInterval` that automatically cleans up the interval on unmount. Do not use `window.setInterval` from the window scope as that will not clean up.
- Use `requestAnimationFrame(callback)` to run a callback on the next animation frame.
  - The code block has a special version of `requestAnimationFrame` that automatically cleans up on unmount. Do not use `window.requestAnimationFrame` from the window scope as that will not clean up.
- Use `send()` and `recv()` to send and receive messages between objects. This also works in other JS-based objects. See the Message Passing section above.
- Use `setRunOnMount(true)` to run the code automatically when the object is created. By default, the code only runs when you hit the "Play" button.
- Use `setPortCount(inletCount, outletCount)` to set the number of message inlets and outlets you want. By default, there is 1 inlet and 1 outlet.
  - Use `meta.inlet` in the `recv` callback to distinguish which inlet the message came from.
  - Use `send(data, { to: inletIndex })` to send data to a specific inlet of another object.
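Putting these helpers together, a minimal sketch of a `js` object (the one-second tick is arbitrary):

```js
setRunOnMount(true) // run as soon as the object is created
setPortCount(1, 1) // one message inlet, one message outlet

recv((data) => {
  console.log('received:', data) // logs to the virtual console
})

// this special setInterval cleans itself up on unmount
setInterval(() => {
  send({ type: 'bang' }) // tick downstream objects once per second
}, 1000)
```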
- Use
- Evaluate mathematical expressions and formulas.
- Use the `$1` to `$9` variables to create inlets dynamically. For example, `$1 + $2` creates two inlets for addition, and sends a message with the result each time inlet one or two is updated.
- This uses the expr-eval library from silentmatt under the hood for evaluating mathematical expressions.
- There are so many mathematical functions and operators you can use here! See the expression syntax section.
- Very helpful for control signals and parameter mapping.
- You can also create variables, and expressions can be multi-line. Make sure to use `;` to separate statements. For example:

  ```
  a = $1 * 2;
  b = $2 + 3;
  a + b
  ```

  This creates two inlets, and sends the result of `(inlet1 * 2) + (inlet2 + 3)` each time inlet one or two is updated.
- You can also define functions to make the code easier to read, e.g. `add(a, b) = a + b`.
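For instance, a hedged sketch that combines a function definition with dynamic inlets (the `clamp01` helper is hypothetical, following the syntax above):

```
clamp01(x) = min(max(x, 0), 1);
clamp01($1 / $2)
```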
- Run Python code directly in the browser using Pyodide.
- Great for data processing, scientific computing, and algorithmic composition.
- Full Python standard library available.
- Sends the `{type: 'bang'}` message when clicked.
- Messages:
  - any: flashes the button when it receives any message, and outputs the `{type: 'bang'}` message.
- Store and send predefined messages.
- Click to send the stored message to connected objects.
- Good for triggering sequences or sending configuration data.
- You can hit `Enter` and type `m <message>` to create a `msg` object with the given message.
  - Example: `m {type: 'start'}` creates a `msg` object that sends `{type: 'start'}` when clicked.
- Examples:
  - `100` sends the number 100.
  - `hello` or `"hello"` sends the string "hello".
  - `{type: 'bang'}` sends the object `{type: 'bang'}`. This is what `button` does.
- Messages:
  - `{type: 'bang'}`: outputs the message.
- Continuous value control with a customizable range.
- Perfect for real-time parameter adjustment.
- Outputs numeric values that can control other objects.
- Hit `Enter` and type these short commands to create sliders with specific ranges:
  - `slider <min> <max>`: integer slider control. Example: `slider 0 100`.
  - `fslider <min> <max>`: floating-point slider control. Example: `fslider 0.0 1.0`. `fslider` defaults to the `-1.0` to `1.0` range if no arguments are given.
  - `vslider <min> <max>`: vertical integer slider control. Example: `vslider -50 50`.
  - `vfslider <min> <max>`: vertical floating-point slider control. Example: `vfslider -1.0 1.0`. `vfslider` defaults to the `-1.0` to `1.0` range if no arguments are given.
- Messages:
  - `{type: 'bang'}`: outputs the current slider value.
  - `number`: sets the slider to the given number within the range and outputs the value.
- Create a multi-line textbox for user input.
- Messages:
  - `{type: 'bang'}`: outputs the current text.
  - `string`: sets the text to the given string.
- Strudel is a live coding environment based on TidalCycles. You can use it to expressively write dynamic music pieces, as well as create complex audio patterns and effects.
- See the Strudel workshop to learn how to use Strudel.
- Check out the Strudel showcase for inspiration on how people use Strudel.
- Use `Ctrl/Cmd + Enter` to re-evaluate the code.
- Don't forget to connect the `dac~` object to hear the audio output.
- Limitations:
  - `recv` only works with a few functions, e.g. `setcpm`, right now. Try `recv(setCpm)` to automate the cpm value.
- Please consider supporting the development of TidalCycles and Strudel at OpenCollective!
- ChucK is a programming language for real-time sound synthesis and music creation.
- Great for algorithmic composition and sound design.
- Runs in the browser via WebChucK.
- Actions:
  - Replace Shred (`Ctrl/Cmd + \`): replaces the most recent shred.
  - Add Shred (`Ctrl/Cmd + Enter`): adds a new shred.
  - Remove Shred (`Ctrl/Cmd + Backspace`): removes the most recent shred.
  - Click on the gear button to see the list of running shreds. Remove any shred by clicking on the "x" button.
- Supports a wide range of audio processing, control, and utility objects.
- Create a textual object by pressing `Enter`, then type the name of the object you want to create.
- Hover over an inlet name to see a tooltip describing the inlet's type and the values it accepts.
  - Try hovering over a `gain~` object's gain value (e.g. `1.0`) to see the tooltip.
These objects run at control rate, which means they process messages (control signals), but not audio signals.

- `mtof`: Convert MIDI note numbers to frequencies
- `loadbang`: Send bang on patch load
- `metro`: Metronome for regular timing
- `delay`: Message delay (not audio)
- `adsr`: ADSR envelope generator
Most of these objects are easy to re-implement yourself with the `js` object, as they simply emit messages, but they are provided for your convenience! For example, see the `mtof` sketch below.
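A minimal sketch re-implementing `mtof` in a `js` object, using the standard MIDI-to-frequency formula:

```js
setPortCount(1, 1) // one message inlet, one message outlet

// convert an incoming MIDI note number to a frequency in Hz
recv((note) => {
  // A4 (MIDI note 69) is 440 Hz; each semitone is a factor of 2^(1/12)
  send(440 * Math.pow(2, (note - 69) / 12))
})
```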
These objects run at audio rate, which means they process audio signals in real-time. They are denoted with a `~` suffix in their names.
Audio Processing:

- `gain~`: Amplifies audio signals with gain control.
- `osc~`: Oscillator for generating audio waveforms (sine, square, sawtooth, triangle).
- `lowpass~`, `highpass~`, `bandpass~`, `allpass~`, `notch~`: Various audio filters.
- `lowshelf~`, `highshelf~`, `peaking~`: EQ filters for frequency shaping.
- `compressor~`: Dynamic range compression for audio.
- `pan~`: Stereo positioning control.
- `delay~`: Audio delay line with configurable delay time.
- `+~`: Audio signal addition.
- `sig~`: Generate constant audio signals.
- `waveshaper~`: Distortion and waveshaping effects.
- `convolver~`: Convolution reverb using impulse responses.
  - To input the impulse response, connect a `soundfile~` object to the `convolver~` object's `message` inlet. Then, upload a sound file or send a URL as an input message.
  - Then, send a `{type: "read"}` message to the `soundfile~` object to read the impulse response into the `convolver~` object.
  - The sound file must be a valid impulse response file. It is usually a short audio file with a single impulse followed by a reverb tail. You can clap your hands in a room and record the sound to create your own impulse response.
- `fft~`: FFT analysis for frequency domain processing. See the audio analysis section for how to read the FFT data.
Sound Input and Output:

- `soundfile~`: Load and play audio files with transport controls.
  - Use `soundurl~ <url>` to load audio files and streams from URLs directly.
  - Try `soundurl~ http://stream.antenne.de:80/antenne` to stream Antenne Bayern live radio.
- `sampler~`: Sample playback with triggering capabilities.
- `mic~`: Capture audio from microphone input.
- `dac~`: Send audio to speakers.
- You can re-implement most of these audio objects yourself using the `dsp~`, `expr~` or `tone~` objects. In fact, the default `dsp~` and `tone~` objects are simple sine wave oscillators that work similarly to `osc~`.
- Most of the audio objects correspond to Web Audio API nodes. See the Web Audio API documentation on how they work under the hood.
- Similar to `expr`, but runs at audio rate for audio signal processing.
- This uses the same expr-eval library as `expr`, so the same mathematical expression will work in both `expr` and `expr~`.
- This is useful for creating DSPs (digital signal processors) to generate audio effects.
- It requires an audio source to work. You can use `sig~` if you just need a constant signal.
- It accepts many DSP variables:
  - `s`: current sample value, a float between -1 and 1.
  - `i`: current sample index in the buffer, an integer starting from 0.
  - `t`: current time in seconds, a float starting from 0.
  - `channel`: current channel index, usually 0 or 1 for stereo.
  - `bufferSize`: the size of the audio buffer, usually 128.
  - `samples`: an array of samples from the current channel.
  - `input`: first input audio signal (for all connected channels), a float between -1 and 1.
  - `inputs`: every connected input audio signal.
  - `$1` to `$9`: dynamic control inlets.
- Examples (see also the tremolo sketch after this list):
  - `sin(t * 440 * PI * 2)` creates a sine wave oscillator at 440Hz.
  - `random()` creates white noise.
  - `s` outputs the input audio signal as-is.
  - `s * $1` applies gain control to the input audio signal.
  - `s ^ 2` squares the input audio signal for a distortion effect.
- You can create variables from `$1` to `$9` to create dynamic control inlets.
  - For example, `$1 * 440` creates one message inlet that controls the frequency of a sine wave oscillator.
  - You can then attach a `slider 1 880` object to control the frequency.
- WARNING: Please use the `compressor~` object with an appropriate limiter-esque setting after `expr~` to avoid loud audio spikes that can and will damage your hearing and speakers. You have been warned!
- Here are some patches you can play with!
This is similar to `expr~`, but it takes a single `process` JavaScript function that processes the audio. It essentially wraps an `AudioWorkletProcessor`. The worklet is always kept alive until the node is deleted.

Try out some patches that use `dsp~` to get an idea of its power:
Here's how to make white noise:
```js
function process(inputs, outputs) {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * 2 - 1 // random samples between -1 and 1
    }
  })
}
```
Here's how to make a sine wave oscillator at 440Hz:
```js
function process(inputs, outputs) {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      let t = (currentFrame + i) / sampleRate
      channel[i] = Math.sin(t * 440 * Math.PI * 2)
    }
  })
}
```
You can use the `counter` variable, which increments every time `process` is called. There are also a couple more variables from the worklet global scope that you can use:
```js
const process = (inputs, outputs) => {
  counter // increments every time process is called
  sampleRate // sample rate (e.g. 48000)
  currentFrame // current frame number (e.g. 7179264)
  currentTime // current time in seconds (e.g. 149.584)
}
```
You can use `$1`, `$2`, ... `$9` to dynamically create value inlets. Messages sent to the value inlets will be set within the DSP. The number of inlets and the size of the `dsp~` object adjust automatically.
```js
const process = (inputs, outputs) => {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * $1 - $2
    }
  })
}
```
In addition to the value inlets, we also have standard message inlets. Use `setPortCount(inletCount)` to set the number of message inlets. By default, there is no message inlet. Then, use `recv` to receive messages from the message inlets.
```js
setPortCount(2)

recv((msg, meta) => {
  if (meta.inlet === 0) {
    // do something
  }
})
```
You can even use both value inlets and message inlets together in the DSP.
```js
let k = 0

recv((m) => {
  // you can use value inlets `$1` ... `$9` anywhere in the JavaScript DSP code.
  k = m + $1 + $2
})

const process = (inputs, outputs) => {
  outputs[0].forEach((channel) => {
    for (let i = 0; i < channel.length; i++) {
      channel[i] = Math.random() * k
    }
  })
}
```
The `tone~` object allows you to use Tone.js to create interactive music. Tone.js is a powerful Web Audio framework that provides high-level abstractions for creating synthesizers, effects, and complex audio routing.

By default, `tone~` comes with sample code for a sine oscillator.
The Tone.js context gives you these variables:

- `Tone`: the Tone.js library.
- `inputNode`: a GainNode from the Web Audio API for receiving audio input from other nodes.
- `outputNode`: a GainNode from the Web Audio API for sending audio output to connected nodes.

Try out these presets:

- `poly-synth.tone`: polyphonic synthesizer that plays chord sequences.
- `lowpass.tone`: low pass filter.
- `pipe.tone`: directly pipes input to output.
Code example:
```js
// Process incoming audio through a filter
const filter = new Tone.Filter(1000, 'lowpass')
inputNode.connect(filter.input.input)
filter.connect(outputNode)

// Handle incoming messages to change frequency
recv((m) => {
  filter.frequency.value = m
})

// Return a cleanup function to properly dispose of Tone.js objects
return {
  cleanup: () => filter.dispose(),
}
```
- Receive MIDI messages from connected devices.
- Outputs note, velocity, and control change data.
- Perfect for musical controllers and hardware integration.
- Send MIDI messages to external devices or software.
- Control external synthesizers and DAWs.
- Supports note, CC, and system messages.
- Sends messages across patches over WebRTC.
- When creating objects, type `netsend <channelname>` to create a `netsend` object that sends messages to the specified channel name. Example: `netsend drywet`.
- Receives messages across patches over WebRTC.
- When creating objects, type `netrecv <channelname>` to create a `netrecv` object that receives messages from the specified channel name. Example: `netrecv drywet`.
These objects can be hidden via the "Toggle AI Features" command if you prefer not to use AI:
- Generate text using AI language models.
- Create dynamic content, lyrics, or procedural text.
- Integrates with message system for interactive generation.
- Generate images from text prompts using AI.
- Create visual content programmatically.
- Supports video chaining as texture source.
- Generate musical compositions using AI.
- Create backing tracks, melodies, or soundscapes.
- Outputs audio that can be processed by other objects.
- Convert text to speech using AI voices.
- Create dynamic narration or vocal elements.
- Outputs audio for further processing.
- Render Markdown text as formatted content.
- Perfect for documentation, instructions, or dynamic text display.
- Supports full Markdown syntax including links and formatting.
The `fft~` audio object gives you an array of frequency bins that you can use to create visualizations in your patch.

First, create a `fft~` object and set the bin size (e.g. `fft~ 1024`). Then, connect the purple "analyzer" outlet to the visual object's inlet.
Supported objects are `glsl`, `hydra`, `p5`, `canvas` and `js`.
- Create a `sampler2D` GLSL uniform inlet and connect the purple "analyzer" outlet of `fft~` to it.
- Hit `Enter` to insert an object, and try out the `fft-freq.gl` and `fft-waveform.gl` presets for working code samples.
- To get the waveform (time-domain analysis) instead of the frequency analysis, you must name the uniform exactly `uniform sampler2D waveTexture;`. Using other uniform names will give you frequency analysis.
You can call the `fft()` function to get the audio analysis data in the supported JavaScript-based objects: `hydra`, `p5`, `canvas` and `js`.
- IMPORTANT: Patchies does NOT use the standard audio reactivity APIs in Hydra and P5.js. Instead, you must use the `fft()` function to get the audio analysis data.
  - See the section below on converting existing P5 and Hydra audio code for why this is needed and how to convert existing code.
- `fft()` defaults to the waveform (time-domain analysis). You can also call `fft({type: 'wave'})` to be explicit.
- `fft({type: 'freq'})` gives you frequency spectrum analysis.
- Try out the `fft.hydra` preset for Hydra.
- Try out the `fft-capped.p5`, `fft-full.p5` and `rms.p5` presets for P5.js.
- Try out the `fft.canvas` preset for the HTML5 canvas.
  - Because the canvas lives on the rendering pipeline, it has a lot more delay than `p5` in retrieving the audio analysis data, so the audio reactivity will not be as tight as `p5`.
  - On the upside, `canvas` will not slow down your patch if you chain it with other visual objects like `hydra` or `glsl`, thanks to running on the rendering pipeline.
- The `fft()` function returns an `FFTAnalysis` class instance which contains helpful properties and methods:
  - Raw frequency bins: `fft().a`
  - Bass energy as a float (between 0 and 1): `fft().getEnergy('bass') / 255`. You can use these frequency ranges: `bass`, `lowMid`, `mid`, `highMid`, `treble`.
  - Energy between any frequency range as a float (between 0 and 1): `fft().getEnergy(40, 200) / 255`
  - RMS as a float: `fft().rms`
  - Average as a float: `fft().avg`
  - Spectral centroid as a float: `fft().centroid`
- Where to call `fft()`:
  - `p5`: call it in your `draw` function.
  - `canvas`: call it in a draw function gated by `requestAnimationFrame`.
  - `js`: call it in your `setInterval` or `requestAnimationFrame` callback, for example:

    ```js
    setInterval(() => {
      let a = fft().a
    }, 1000)
    ```

  - `hydra`: call it inside arrow functions for dynamic parameters, for example:

    ```js
    let a = () => fft().getEnergy('bass') / 255

    src(s0).repeat(5, 3, a, () => a() * 2)
    ```
- Q: Why not just use the standard Hydra and P5.js audio reactivity APIs like `a.fft[0]` and `p5.FFT()`?
  - A: The `p5-sound` and `a.fft` APIs only let you access microphones and audio files. In contrast, Patchies lets you FFT any dynamic audio source 😊
    - You can FFT-analyze your own audio pipelines, such as your web audio graph, and other live audio coding environments like Strudel and ChucK.
    - It makes the API exactly the same between Hydra and P5.js. No need to juggle two.
- Converting Hydra's audio reactivity API into Patchies:
  - Replace `a.fft[0]` with `fft().a[0]` (un-normalized int8 values from 0 - 255).
  - Or replace `a.fft[0]` with `fft().f[0]` (normalized float values from 0 - 1).
  - Instead of `a.setBins(32)`, change the FFT bins in the `fft~` object instead, e.g. `fft~ 32`.
  - Instead of `a.show()`, use the above presets to visualize FFT bins.
  - Using the value to control a variable:

    ```js
    // before, in standalone Hydra:
    osc(10, 0, () => a.fft[0] * 4).out()

    // after, in Patchies:
    osc(10, 0, () => fft().f[0] * 4).out()
    ```
-
- Converting P5's p5.sound API into Patchies:
  - Replace `p5.Amplitude` with `fft().rms` (RMS as a float between 0 and 1).
  - Replace `p5.FFT` with `fft()`.
  - Replace `fft.analyze()` with nothing - `fft()` is always up to date.
  - Replace `fft.waveform()` with `fft({ format: 'float' }).a`, as P5's waveform returns values between -1 and 1. Using `format: 'float'` gives you a Float32Array.
  - Replace `fft.getEnergy('bass')` with `fft().getEnergy('bass') / 255` (normalized to 0 - 1).
  - Replace `fft.getCentroid()` with `fft().centroid`.
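For example, a hedged before-and-after sketch of a bass-reactive circle in `p5` (the sizes are arbitrary):

```js
// before, with p5.sound:
//   let fft = new p5.FFT()
//   let bass = fft.getEnergy('bass') // 0 - 255

// after, in Patchies:
function draw() {
  const bass = fft().getEnergy('bass') / 255 // normalized to 0 - 1

  background(0)
  ellipse(width / 2, height / 2, 20 + bass * 100)
}
```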
If you dislike AI features (e.g. text generation, image generation, speech synthesis and music generation), you can hide them by activating the command palette with `Ctrl/Cmd + K`, then searching for "Toggle AI Features". This will hide all AI-related objects and features, such as `ai.txt`, `ai.img`, `ai.tts` and `ai.music`.
Tip: Use objects that run on the rendering pipeline, e.g. `hydra`, `glsl`, `swgl`, `canvas` and `img`, to reduce lag.
Behind the scenes, the video chaining feature constructs a rendering pipeline based on framebuffer objects (FBOs), which let visual objects copy data to one another at the framebuffer level, with no back-and-forth CPU-GPU transfers needed. The pipeline makes use of Web Workers, WebGL2, Regl and OffscreenCanvas (for `canvas`).
It creates a shader graph that streams the low-resolution preview onto the preview panel, while the full-resolution rendering happens in the frame buffer objects. This is much more efficient than rendering everything on the main thread or using HTML5 canvases.
Objects such as `hydra`, `glsl`, `swgl`, `canvas` and `img` run entirely on the web worker thread and are therefore very high-performance.
In contrast, objects such as `p5` and `bchrn` run on the main thread: at each frame, we need to create an image bitmap on the main thread, then transfer it to the web worker thread for rendering. This is much slower than using FBOs and can cause lag if you have many `p5` or `bchrn` objects in your patch.