Unit tests for Max/MSP objects (and where's the Max 5 SDK, eh?)
Jun 21, 2008 at 4:21pm
Hello again. As you can probably guess, I’m setting up for what looks like quite a few months of development (though I already have a nice 2-way driver for the Frontier Tranzport http://www.frontierdesign.com/Products/TranzPort, a truly fantastic and cheap little wireless unit, which I intend to contribute in the next few weeks…)
I’ve realized that I’m deeply suspicious of any code I write that doesn’t have unit tests; frankly, I don’t believe it works.
I also have issues with breakage in general; I find I make “tiny” changes to a patch and some other patch somewhere else breaks. Of course, this is due to my mistaken perception of what “tiny” really is but if I had testing I could make a small change, verify it, commit my work, and continue.
My thoughts are as follows.
* There need to be two levels of unit testing, one at the C/C++ level, one at the Max level.
* The C++ unit testing depends on the C++ framework for development. I downloaded the Max 4.6 SDK (any ETA for the Max 5 SDK?) and will peruse it. I’m favourably impressed with the simplicity and clarity of its documentation so far; on the other hand, I have nothing yet to test.
* C++ unit testing I can work out: but how to unit test Max objects?
1. Create two well-known send/receive (s/r) pairs called UnitTest and UnitTestResult
2. Patches to be tested contribute one or more “unit test patches” to a single central unit test patch (the “main test”) for the project.
3. A unit test patch for a patch can be that patch, its .maxhelp, or some new patch.
4. A unit test patch must receive UnitTest and send to UnitTestResult in at least one place.
5. A unit test patch has a name that must be unique over all unit tests in the same main test.
6. To start the test, the main test sends a start message to UnitTest.
7. Each unit test must first reply with a start message to UnitTestResult.
8. Each unit test must subsequently send exactly one of two other messages, ok or fail, to UnitTestResult.
9. A unit test that has sent a start message to UnitTestResult but hasn’t sent an ok or fail message is said to be “stalled”.
10. In the main test patch, the unit test indicator is red if any fail message has been received since the last start was sent, yellow if some tests are stalled, and green if all tests that were started have replied ok.
* Creating mocks appears to be impossible.
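The indicator logic in steps 6–10 can be sketched outside Max to check that the protocol hangs together. Below is a minimal Python model of it; all names here (`TestBoard`, `receive`) are hypothetical illustrations for this sketch, not Max objects:

```python
# Hedged sketch of the proposed UnitTest/UnitTestResult protocol:
# a board collects start/ok/fail messages from named test patches
# and reports the red/yellow/green state described above.

class TestBoard:
    def __init__(self):
        self.started = set()   # tests that have sent "start"
        self.finished = {}     # test name -> "ok" or "fail"

    def receive(self, name, message):
        """Model a message arriving on the UnitTestResult receive."""
        if message == "start":
            self.started.add(name)
            self.finished.pop(name, None)  # a re-run clears the old result
        elif message in ("ok", "fail"):
            self.finished[name] = message

    def stalled(self):
        # Tests that sent "start" but neither "ok" nor "fail" yet (step 9).
        return self.started - set(self.finished)

    def color(self):
        # Step 10: red beats yellow beats green.
        if "fail" in self.finished.values():
            return "red"
        if self.stalled():
            return "yellow"
        return "green"
```

For example, a board with one test that has started but not replied reports yellow; once every started test replies ok, it reports green; any fail makes it red.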
Jun 29, 2008 at 10:18pm
Quote: firstname.lastname@example.org wrote on Sat, 21 June 2008 09:21
I think you are entering unexplored territory here. Keep in mind most Max users don’t have a software engineering background. If you come up with a good system, please share.
If I have really complicated logic I tend to drop into java or ruby where I can unit test my code. Personally, I don’t want to add extra stuff to my patches just for testing purposes.
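The "drop into another language" approach might look like the following. This is a made-up illustration in Python (the post mentions Java and Ruby); `transpose` is an invented example function, not code from any actual Max project:

```python
# Illustration of moving complex logic out of a patch into code
# that a standard unit test framework can exercise.
import unittest

def transpose(note, semitones):
    """Transpose a MIDI note number, clamping to the valid 0-127 range."""
    return max(0, min(127, note + semitones))

class TransposeTest(unittest.TestCase):
    def test_up_an_octave(self):
        self.assertEqual(transpose(60, 12), 72)

    def test_clamps_high(self):
        self.assertEqual(transpose(120, 12), 127)

    def test_clamps_low(self):
        self.assertEqual(transpose(3, -12), 0)
```

Run with `python -m unittest` against the file; the patch then only wires the tested logic into the Max world.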
For the patch breakage issue, that’s something you get a better handle on over time. The most important part is figuring out how to properly modularize your patches into abstractions. If the abstractions are simple and focused on doing one thing well, they have fewer bugs. Remember the KISS principle!
I make help files for my important abstractions that demonstrate all the features. These help files more or less serve as my unit tests, but it’s a manual testing effort.
When you reuse abstractions across projects, it’s often a good idea to create a new copy of the abstraction for the current project. That way, if you need to modify anything, you won’t break all your old patches. It also helps you stay organized: you can copy all the abstractions into the project folder and keep everything together.
Max will make you good at debugging ;)
Jun 30, 2008 at 4:07pm
Well I am a software engineer and a musician. And god knows
And by the way, where is the new SDK?
Jun 30, 2008 at 4:20pm
Quote: Anthony Palomba wrote on Mon, 30 June 2008 18:07
We recently had a very productive workshop in Paris, where a preliminary version of the still-being-finalized SDK was unveiled. So fear not, the new SDK is coming. I don’t believe that we announced a release date, though, so, well, thanks for your patience as we finish it up. It’ll come out when it’s ready.
Jun 30, 2008 at 9:47pm
Quote: Anthony Palomba wrote on Mon, 30 June 2008 18:07
I believe Tom is one of the few people (like me) who want to use Max as a platform for actual software engineering (i.e. patches complex enough to benefit from known software engineering techniques).
On a smaller scale, I always make test data for small pieces of logic in Max and check whether I agree with the output, but so far never for the parts of my patch that depend on a lot of communication with the outside world. With a recent server-side Java project, though, I started doing a lot of unit testing. So Tom, I am very interested to see where you’re going with this.
In your scheme, I think what’s currently missing is the equivalent of assert statements. But I’m quite sure that the setup you describe could very well be made to work in Max.
True, unless you introduced nifty semi-automatic renaming of receives/forwards. I guess the patch you are testing would need to use special abstractions instead of send/receives.
* Testing user interfaces appears difficult or impossible.
But this is always the case with unit tests, no?
* Simulating time (so unit tests don’t run in real time) seems difficult or impossible.
Yeah, but here too, with multiple threads, unit tests generally don’t apply or get really difficult.
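One common workaround for the "simulated time" problem is a virtual clock: instead of the real-time scheduler, a test scheduler dispatches delayed events instantly. This is a toy Python sketch of that idea under stated assumptions (nothing here corresponds to an actual Max API; all names are invented):

```python
# Hedged sketch: a virtual scheduler so time-dependent logic can be
# tested without actually waiting. Delays fire instantly when the
# test advances the clock.
import heapq

class VirtualScheduler:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0  # tie-breaker so same-time events keep insertion order

    def schedule(self, delay_ms, callback):
        """Register callback to fire delay_ms after the current virtual time."""
        heapq.heappush(self._queue, (self.now + delay_ms, self._seq, callback))
        self._seq += 1

    def run_until(self, t_ms):
        """Fire every event due up to t_ms, jumping the clock forward instantly."""
        while self._queue and self._queue[0][0] <= t_ms:
            self.now, _, cb = heapq.heappop(self._queue)
            cb()
        self.now = t_ms
```

A test can then schedule a "metronome tick" 100 ms out and assert it fired, without the test itself taking 100 ms of wall time.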
* This appears to be best for testing logic or edge cases.
* However, most of my bugs at least are logic bugs or edge cases.
Jun 30, 2008 at 10:07pm
Even in C or C++, unit tests are really hard to write for code whose output is audio or video. I worked on an image editor for a few years, and automated testing of sections of that code was very difficult.
You can try and come up with a “fingerprint” for audio buffers, but unless you’re dealing with fairly simple audio paths, and keeping _everything_ in the audio domain, there is enough uncertainty in scheduling time to make comparing audio buffers non-trivial. Throw in any random, or hard to determine randomish control source, and things get really hard to test.
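The "fingerprint" idea might be sketched like this: reduce a buffer to a few aggregate statistics and compare those within a tolerance, rather than comparing sample-by-sample. This is a rough illustration only; the statistics and tolerance here are arbitrary choices, not recommendations, and the post's caution about scheduling jitter and randomish control sources still applies:

```python
# Hedged sketch of comparing audio buffers by aggregate "fingerprint"
# statistics with a tolerance, instead of exact sample equality.
import math

def fingerprint(buf):
    """Reduce a buffer of float samples to (rms, peak) statistics."""
    rms = math.sqrt(sum(x * x for x in buf) / len(buf))
    peak = max(abs(x) for x in buf)
    return (rms, peak)

def buffers_match(a, b, tol=0.01):
    """True if the two buffers' fingerprints agree within tol."""
    fa, fb = fingerprint(a), fingerprint(b)
    return all(abs(x - y) <= tol for x, y in zip(fa, fb))
```

Note this only catches gross differences; two buffers with the same RMS and peak but different content in time would still "match", which is one reason audio comparison stays non-trivial.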
Jun 30, 2008 at 10:30pm
Jul 1, 2008 at 7:20am
Quote: barry threw wrote on Tue, 01 July 2008 00:30
Jul 1, 2008 at 7:43am
Quote: barry threw wrote on Mon, 30 June 2008 16:30
Senior member, eh? So why act like a junior?
Jul 1, 2008 at 1:51pm
Thanks for a lot of good comments.
Let’s start with
0. I had no idea of doing this for audio, I quake with fear at the very thought. Actually, I simply never do audio with Max/MSP, I’m really interested in routing and processing MIDI for performance.
1. I should also point out that this wasn’t “me looking for things to waste my time” (not that I don’t do that :-D ).
It’s “me doing Max development and having all sorts of subtle logic bugs that suck up a whole evening with nothing to show for it.”
Also, since I’m just in the first couple of months of (re)starting development I’m contemplating moderate investments that might pay off over a year or so.
I want to spend *less* time programming and more time playing.
Within my time programming I want to spend *less* time doing stressful debugging and more time making things.
2. My thought was that I might be able to do a lot of good by testing the 15% of my Max code with the most logic.
3. I’m also thinking of doing the complex logic in another language, as others have mentioned too. However, I haven’t yet gotten around to looking into native Python (aside from downloading the project that might make this possible).
4. I already have a set of .maxhelp patches for pretty well everything I’ve written; I was wondering how to extend them to get at least some automated testing without much extra work.
5. The system I proposed before doesn’t involve changing the patches you’re testing: you can think of it as adding a receive and a send to your maxhelp for those patches.
It might be several weeks before I have time to continue with this because I’m away for a lot of this time, but I’ll report any progress here.
My guess is that I’ll make a small unit test framework (two new patches with only light complexity inside) and use it to test just two or three of the most problematic, logic-heavy of my patches to date; hopefully this will take much less time than all this writing did (but that’s why it’s called “design” :-D).
(Classic story: we had a terrible problem with the flagship product in a previous company; it was an applet that did animation under all versions of Java back to 1.1 and we had an intermittent crashing bug under some IE/JVM combinations. We had a series of emergency meetings, came up with theories and ran a bunch of experiments, narrowed down the cause, and eventually came up with a fix, which was in fact quite short. Management, who were extremely ignorant of software development, were pissed: “If you hadn’t spent all that time yakking, you could have fixed this last week.”)
Jul 1, 2008 at 7:53pm
Jul 1, 2008 at 8:45pm
Jul 1, 2008 at 8:56pm
I don’t know if this is helpful or not, but we have just begun
The way it works is this:
The current version of the ruby script (testrunner.rb) is here:
Some components used by the system (need to be in the searchpath) are
Some example test patches are here:
Hopefully this is somewhat relevant. The system in Jamoma is not
Jul 1, 2008 at 9:09pm
No, what I’m saying is that it’s not interesting for at least some
But this isn’t a pissing contest.
Jul 1, 2008 at 11:45pm
I would also like to say that I didn’t necessarily mean to sound as
Indeed, I don’t feel strongly against unit tests, I guess one of the
It seems that Jamoma uses one faucet, with udp send and receive…many
Mattijs, do you generally structure your patches with one inlet and
So, sorry for the noise, which got carried on longer than it might have.
> But perhaps you are trying to say that this discussion is not
Jul 2, 2008 at 8:59pm
Quote: barry threw wrote on Wed, 02 July 2008 01:45
Happy to hear that, I think Tom’s initiative is great. And it applies to my current situation, maybe that’s why seeing it criticized in a blunt way might have gotten me carried away. Anyway, happy to have you aboard, barry.
Tim, cool to hear that you guys keep venturing into larger-scale software engineering with Max. You reminded me that I still have to give Jamoma some proper attention and send you the feedback.
> Mattijs, do you generally structure your patches with one inlet and
An example of how I structure my patches can be found on my user page, see the MPC Studio patch.
For smaller algorithms (one network with a few small subpatchers at the most and clearly defined in- and outputs), I highly prefer patch cords. But for sharing data between larger subsystems patch cords get messy so I use wireless connections. The problem with send/receives is that they are global, but we found a beautiful solution for that, as you can see ;)
Jul 4, 2008 at 8:27pm
Barry Threw schrieb:
I have a debug abhaXion in my collection, though I still use usually
I am sure the new debuggin’ modes of max 5 will reveal some extra
But what am I talking, I first have to look up these coders buzz words
As I am not the oo expert, I will appreciate any explanation what the
Jul 4, 2008 at 8:46pm
Quote: Stefan Tiedje wrote on Fri, 04 July 2008 13:27
Unit tests are automated. Usually there is a whole suite of tests that checks most of the code in an entire system after someone changes something. This is also known as “regression testing” and is really important if you work on a big project (think hundreds of thousands or millions of lines of code) with a lot of developers. This is how open source development for Linux, etc, can function with people from all over the world hacking on it.
So I think the real discussion here is, given a good help file that tests the functionality of an object/abstraction, how can we wrap that file with some scripting so that the test can be automated as much as possible.
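The skeleton of such a wrapper script might look like the following. This is a hypothetical sketch (Jamoma's actual runner, mentioned earlier in the thread, is the Ruby script testrunner.rb); the `transport` object here is an assumed interface standing in for whatever channel, such as UDP/OSC, carries messages to and from the running patch:

```python
# Hedged sketch of a scripted test runner: send each test message to
# the patch over some transport, wait for a reply, and compare it
# against the expected value. A missing reply counts as a stall.

def run_tests(transport, cases, timeout_s=2.0):
    """cases: list of (message_to_send, expected_reply) pairs.
    Returns a list of (message, expected, actual) failures."""
    failures = []
    for message, expected in cases:
        transport.send(message)
        # A real transport would return None on timeout (a "stalled" test).
        reply = transport.receive(timeout=timeout_s)
        if reply != expected:
            failures.append((message, expected, reply))
    return failures
```

In a real setup the transport would wrap a UDP/OSC socket talking to Max; for testing the runner itself, a fake transport with canned replies is enough.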
For most Max users, this isn’t something they’ll ever need to worry about. But as Mattijs points out, maybe more “serious” developers will start using Max and be concerned about these things. If I was selling an application made with Max, I would start worrying about it.
Oh, and it has nothing to do with OO. It’s general software engineering process. Like inspections for civil engineering.
Jul 6, 2008 at 9:07am
Adam Murray schrieb:
Ah, that makes sense, you change an object and test its functionality
Seems tricky, usually you just know if the main application fails there
This creates a lot of overhead in coding for setting up the tests, don’t
Jul 6, 2008 at 3:55pm
I just want to give another point of view about testing in general.
I’m working on a project that uses robot percussionists playing oil cans, controlled by a mixed software environment (the show control system is written in Max; the robot control system is open-source robotics software).
Aug 20, 2008 at 1:33am
Hello, all. As you can see, I have some time for Max/MSP again :-D and so I’m following up on old threads.
I did in fact write a unit testing framework for Max programs, but as we all suspected it probably wasn’t worth the effort.
You can see it here: http://code.google.com/p/max-swirly/source/browse/#svn/trunk/max-swirly/testing
Warning: it might need other things from the max-swirly patches (an informal place for me to put things people might want).
If there were interest, I’d clean it up or explain it.