Advice required for "Vibe Coding" with Max (AI LLM coding assistants)

Michael Freeman:

Hi,

I should qualify my reasons here.

  1. I personally have a form of health problem that responds very well to assistance from AI coding assistants.

  2. My intention is not to "replace people who create for Max with AI". I would never want to see or support such a thing!

  3. I see AI as simply an assistant to the software engineer. Think of the Iron Man scene where he uses AI to assist in creating his suit. It's not the AI that creates the suit!

So, with that out of the way, here's some background.

I have been using Claude Code, Gemini CLI, and Codex CLI for patch creation and troubleshooting, with varying levels of success. The coding assistant often falls down conceptually when jumping from pure code generation to the visual programming nature of Max. The LLM expects exactly that, language, and often gets confused even if it can decode and create the raw patching JSON.
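To give a sense of the problem: even a trivial two-object patch expands into something like the following (an abridged sketch of the .maxpat structure, written here as a Python dict; real files carry much more metadata, and every id, patching_rect, and inlet/outlet count has to stay mutually consistent, which is exactly where assistants tend to slip):

# abridged .maxpat structure for a sine oscillator wired to an audio output
two_object_patch = {
    "patcher": {
        "boxes": [
            {"box": {"id": "obj-1", "maxclass": "newobj", "text": "cycle~ 440",
                     "numinlets": 2, "numoutlets": 1, "outlettype": ["signal"],
                     "patching_rect": [50.0, 50.0, 70.0, 22.0]}},
            {"box": {"id": "obj-2", "maxclass": "ezdac~",
                     "numinlets": 2, "numoutlets": 0,
                     "patching_rect": [50.0, 100.0, 45.0, 45.0]}},
        ],
        "lines": [
            # connect outlet 0 of the oscillator to inlet 0 of the dac
            {"patchline": {"source": ["obj-1", 0], "destination": ["obj-2", 0]}},
        ],
    }
}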

So I thought I'd cracked this when I found the maxpy project, which translates a maxpat into Python and vice versa. At first the AI does better with this, and I made some jumps forward on a thorny problem in a patch of mine based on the assistant's suggestions. However, when it comes to actually getting the assistant to revise and write patches, the whole thing quickly becomes bogged down in misplaced objects and misunderstandings of Max concepts.

So I went back to the drawing board and started looking at using pure JavaScript, which I'm currently testing with Codex.

So my question is: do Max users have any experience with this? Could JavaScript be the correct approach? Or am I never going to find what I'm looking for without an LLM specifically trained on Max examples and documentation? I don't know if Cycling 74 would even consider doing such a thing.

Iain Duncan:

FWIW, I have yet to see an LLM-generated example posted for Max that isn't a non-functional mess of Max, Gen, JavaScript, and stuff that has nothing to do with Max. The JavaScript examples people have posted have been just as much of a giant mess.

Not saying it won't happen, but so far everything I've seen indicates that trying to use LLMs for Max is a waste of time. We have people posting their frustrations here and on other online Max forums (Reddit, Facebook) every week. The mix of paradigms plus the small amount of training data seems to be too much for them.

Michael Freeman:

Yes, my tests with JavaScript ended up in the same place!

I have $1000 of Azure credits and am looking at fine-tuning gpt-5-codex on Max patches and documentation as training material. See https://www.perplexity.ai/search/on-azure-how-do-i-train-an-exi-qmcqOKZdQ0ae28vz.ucnfw#0
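In case it's useful to anyone, here is a minimal sketch of how the training file might be assembled, assuming the chat-style JSONL format that Azure OpenAI fine-tuning expects; the folder name, system message, and prompt wording are placeholders:

import json
import pathlib

records = []
for maxpat in pathlib.Path("training_patches").glob("*.maxpat"):
    # pair a short instruction with the full patch JSON as the target output
    records.append({
        "messages": [
            {"role": "system", "content": "You write valid Max .maxpat JSON."},
            {"role": "user", "content": f"Create a Max patch named {maxpat.stem}."},
            {"role": "assistant", "content": maxpat.read_text()},
        ]
    })

# one JSON object per line, as the fine-tuning endpoint requires
with open("max_finetune.jsonl", "w") as out:
    for record in records:
        out.write(json.dumps(record) + "\n")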

Has anyone done this kind of thing before?

As to Cycling 74 copyright: I don't plan to run the resulting fine-tuned LLM publicly at the moment, and I would consult with Cycling 74 before proceeding beyond the experimental stage.

Sven H:

I've also used maxpy to make a fairly complex Max patch; with a bunch of iterative prompting it worked well for me!

Are you getting bad results due to the model hallucinating nonexistent methods?

I'd say a better form of training would be to reverse-engineer a bunch of existing Max patches into maxpy code examples, e.g.:

import maxpy as mp

# create an empty patcher
patch = mp.MaxPatch()
# place a 440 Hz sine oscillator and a stereo audio output
osc = patch.place("cycle~ 440")
dac = patch.place("ezdac~")
# patch the oscillator into the dac
patch.connect([osc, dac])

^ and feed those in as training examples.
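Automating that conversion could look something like this rough sketch; it assumes the standard .maxpat layout (patcher -> boxes/lines) and the maxpy calls shown above, and it ignores inlet/outlet indices, attributes, and subpatchers:

import json

def maxpat_to_maxpy(path):
    with open(path) as f:
        patcher = json.load(f)["patcher"]
    code = ["import maxpy as mp", "", "patch = mp.MaxPatch()"]
    var_for_id = {}
    for i, entry in enumerate(patcher.get("boxes", [])):
        box = entry["box"]
        # fall back to the maxclass for UI objects that carry no text
        text = box.get("text", box["maxclass"])
        var_for_id[box["id"]] = f"obj{i}"
        code.append(f'obj{i} = patch.place("{text}")')
    for entry in patcher.get("lines", []):
        line = entry["patchline"]
        src = var_for_id[line["source"][0]]
        dst = var_for_id[line["destination"][0]]
        code.append(f"patch.connect([{src}, {dst}])")
    return "\n".join(code)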

Or simply use RAG to give it the docs as it needs them.
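A minimal retrieval sketch, assuming you have plain-text dumps of the Max reference pages and an embeddings endpoint (the model name and the docs dict are placeholders); the top-scoring pages would be pasted into the assistant's context before it writes any maxpy code:

import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# docs maps an object name to its reference page text,
# e.g. {"cycle~": "cycle~ is an interpolating oscillator ...", ...}
def top_k_pages(docs, question, k=3):
    names = list(docs)
    page_vecs = embed([docs[n] for n in names])
    q_vec = embed([question])[0]
    # cosine similarity between the question and every reference page
    scores = page_vecs @ q_vec / (
        np.linalg.norm(page_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    return [names[i] for i in np.argsort(scores)[::-1][:k]]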

Depends on what you're trying to build though?