Max as an Introduction to Learning Functional Reactive Programming?

Sophia's icon

Hi, all! I've been off the forums for nearly four years now and glad to see them still going strong. After a long time using Max I decided to take a break and focus on applying the techniques I had learned with newer (mainly web-based) technologies for visualization like WebGL. I feel like a totally different person, so it's really cool knowing this community remains constant...especially now that I feel like my relationship with Max is coming full circle :)

I don't know how many of you are familiar with "functional reactive programming," or FRP, which has recently become a trendy paradigm for increasingly complex web apps and for native development as well. What's struck me, just beginning to dip my toes in, is how similar it is to visual dataflow languages like Max (minus the runtime for live coding, though I imagine those exist).

I generally try to sandbox myself from new web technologies in order to focus on the logic side of things, so what really pushed me towards experimenting with FRP was realizing I have a rather poor memory. I have trouble keeping track of procedural code past even 100 lines or so, mainly because it's often highly repetitive. Repeated function calls are bad enough, but if I can't see the state of everything stored in one place, my mental model of the program starts to deteriorate. So I resolved to keep my code as short as possible by not repeating anything (DRY vs. WET, if you're into dev trends) and to explore frameworks that facilitate that.

This is why I like Max so much! You can see everything at once. But where Max lost me was when I started using it to build really large patches meant for distribution as standalones. I think the addition of presentation mode was partly an acknowledgment of all the spaghetti patches we had been looking at for years as people used Max for actual development, and of how different they were from the simple elegance exemplified by the tutorials. Yet in the search for that elegance and legibility in more complex apps, I've somehow returned to something that looks a lot like Max.

Functional programming has been around since the beginning of computing, and I have a ton to learn just to get up to speed with it. But even without a deep understanding, the connection is easy to see: in a language like Max, which manipulates streams of data instead of manually tying together disparate variables, we use [expr] as a Swiss Army knife of computation. Even though we have access to objects for procedural constructs like if/then/else statements, we all know how rarely they're the best tool for the job in this environment.
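
To make that concrete, here's a toy sketch in plain JavaScript (arrays standing in for streams; the [expr] comparisons are loose analogies, not real Max semantics) of the difference between hand-updated state and pure stream transformations:

```javascript
// Imperative style: shared state, updated by hand at each step.
let total = 0;
const seen = [];
function onValue(v) {   // easy to forget one of these updates
  seen.push(v);
  total += v;
}
[1, 2, 3].forEach(onValue);

// Stream style: each result is a pure function of the stream,
// the way an [expr] box is a pure function of its inlets.
const stream = [1, 2, 3];
const doubled = stream.map(v => v * 2);        // like [expr $i1 * 2]
const sum = stream.reduce((a, b) => a + b, 0); // a running total
```

In the stream version there is nothing to keep in sync: `doubled` and `sum` are derived, not maintained.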

It's the *reactive* part that is both newer and, I think, more relevant here. It's easy to imagine how, as web interfaces become increasingly complex, there'd be demand for frameworks that automatically update their state based on your logic, rather than storing variables in multiple places and, in the worst cases imo, updating them manually through increasingly convoluted event calls (this is why I loathe Objective-C). This is hard for a lot of web devs to wrap their heads around, but I think for Maxers it's totally intuitive.
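
For anyone wondering what "automatically update their state" boils down to, here's a minimal sketch in plain JavaScript. The `cell`/`computed` pair is entirely made up for illustration, not any real framework's API:

```javascript
// A toy "reactive cell": derived values recompute automatically
// when their inputs change, instead of being updated by hand
// from event callbacks scattered across the program.
function cell(initial) {
  let value = initial;
  const listeners = [];
  return {
    get: () => value,
    set(v) { value = v; listeners.forEach(fn => fn()); },
    onChange(fn) { listeners.push(fn); },
  };
}

// A computed cell re-runs fn whenever any input cell changes.
function computed(fn, inputs) {
  const out = cell(fn());
  inputs.forEach(c => c.onChange(() => out.set(fn())));
  return out;
}

const width  = cell(3);
const height = cell(4);
const area   = computed(() => width.get() * height.get(), [width, height]);
width.set(10);  // area recomputes on its own, no manual update call
```

This is exactly the behavior a Max patch cord gives you for free when a number box feeds an [expr].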

Take a look at this really good tutorial on getting started with Rx.js, the JavaScript implementation of one of the most popular reactive coding frameworks, which is available in a ridiculous number of languages. What really struck me was the diagrams he uses to visualize streams. Look familiar?! The semantics behind reactive programming bear an uncanny similarity to an interpreted version of Max, yet the syntax to specify all that ends up being much more concise than traditional procedural code. (Technically this is not purely functional, but increasingly languages are adding support for multiple programming paradigms. Apple's Swift is a great example...if only it didn't still rely on Cocoa and KVO, sort of its own version of spaghetti patches imo.)
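
To give a flavor of those stream semantics without pulling in Rx.js itself, here's a toy observable in plain JavaScript. The `makeStream` helper and its `map`/`filter` methods are my own sketch of the idea, not the real Rx API:

```javascript
// A minimal observable: subscribers are pushed each new value,
// and map/filter build derived streams, like patch cords in Max.
function makeStream() {
  const subs = [];
  return {
    subscribe(fn) { subs.push(fn); },
    push(v) { subs.forEach(fn => fn(v)); },
    map(f) {             // a derived stream, like an [expr] box
      const out = makeStream();
      this.subscribe(v => out.push(f(v)));
      return out;
    },
    filter(pred) {       // pass values through conditionally, like a gate
      const out = makeStream();
      this.subscribe(v => { if (pred(v)) out.push(v); });
      return out;
    },
  };
}

// Usage: a "slider" stream feeding a derived "frequency" stream.
const slider = makeStream();
const freqs = [];
slider
  .filter(v => v >= 0)          // ignore out-of-range input
  .map(v => 220 + v * 660)      // like [expr 220 + $f1 * 660]
  .subscribe(v => freqs.push(v));
slider.push(0);
slider.push(1);
slider.push(-1);                // filtered out
```

Each chained call reads like a patch cord between two objects, which is why the marble diagrams in Rx tutorials look so familiar to Maxers.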

I'd love to hear if any other Maxers are getting into FRP, especially if you use a traditionally functional language (I know there are a lot of Lisp types on here). This post is only partially informative; I'm also scraping for resources for learning FRP and functional programming that would ease the learning curve for someone with a background in Max and live visualization in general. I'm going to get around to fully working through that MIT Scheme book as a foundation, and probably to learning Haskell, but I wonder what else is out there that might let me leverage the intuitive understanding of data streams I developed from years of playing with Max. Because I was rubbing my head a lot at the beginning of that Rx tutorial, but once I got to the diagrams it was like...of course!

Peter McCulloch's icon

Thanks for sharing this! I don't have experience with this framework, so I can't say how it compares, but you might also want to check out Clojure and ClojureScript. It's a Lisp that runs on the JVM (or can compile to JS) and has a lot of really great features and an active community. It has things like promises, futures, and lazy sequences, which might be of interest, and there are quite a few libraries. Nick Cassiel has done some work with Clojure in Max, and IIRC that's in the C74 projects section. There's also a live coding platform for it called Overtone that you might find interesting, with SuperCollider as the audio engine.

Roman Thilenius's icon

thanks for another reminder that currently in max7 building applications for distribution is a real mess.

it is a bit like someone being afraid of all the power theoretically contained in the environment being unleashed by accident.

Sophia's icon

Max was created for live performance and then turned out to be a great tool for education and prototyping. I realize people have been developing apps for distribution with it for decades now, and C74 has actually always done a great job optimizing it for speed, at least according to the profiles I've run. But I don't see how they could provide a manageable or pleasant environment for large app development without severely limiting the things that make it great: the fact that it's visual and live.

Do you know what goes into actually scheduling multiple threads of data? Max does this based on where you place the objects on a two-dimensional plane. Would you rather have to explain to everyone who uses it for performance that they now need to specify the order in which each object executes, because that allows for faster and more manageable app development? Would you as an app developer still appreciate Max's visual environment if it came with the disclaimer that how it runs live is likely to differ from how your compiled standalones run?

I just don't see how C74 could make app development easier without completely altering the way it's functioned for three decades. Presentation mode was necessary to launch M4L, and although I've used it myself for standalones, I now see that "quick fix" as a signal that your patch is too large and should employ more bpatchers. You know what moving further in that direction looks like? Exactly what I've described above, i.e. something already available through many different open-source configurations that are quite stable thanks to corporate development (ever heard of React.js...?).

And at what cost? I honestly have no idea what you mean by "all the power." Pretty much everything you can do in Jitter you can do in a browser with native optimization. And I know good audio libraries are harder to come by, but...they still exist?

Graham Wakefield's icon

FRP is really nice. Although it is becoming the rage in web circles, it's not actually so new -- there's at least a reference from 1997 (http://dl.acm.org/citation.cfm?id=258973), and Hudak was using it for animation, I believe. I agree, it's probably quite simple to grasp from a Max user's point of view. At least, the first time I read Conal Elliott's papers on FRP a decade ago, I was struggling to see what was so novel...but I was already too deeply immersed in visual dataflow. Now I get it more.

BTW for learning Scheme I really appreciated this book (online free, but also worth getting a 2nd-hand copy if you can):
https://en.wikipedia.org/wiki/Structure_and_Interpretation_of_Computer_Programs

Thanks for popping up a really interesting thread.

Sophia's icon

I had no idea Fran was that old! Fran + Haskell seems the most serious implementation, but probably Clojure with React.js would be easiest to start with. You know I'm just getting into FRP, so I had no idea it was originally developed for animation rather than GUIs. Seems there's a really cool secret history of functional programming and digital media. Anything else I should look into along those lines?

And yes, that's the MIT book I was referring to. I got a hard copy when I was in middle school, but only made it through the first chapter, I think (in other words, just enough to learn the basics of the language), and then sold it a few years back. I'm going to follow the HTML version and see if I can complete the whole thing. I'm planning on going back to school for CS, and I think if I can pull off the exercises in chapters four and five I'll be a good ways ahead of most students. Not many introductory programming courses have you coding interpreters and register machines...

Peter McCulloch's icon

For SICP, there are some free videos of Sussman and Abelson teaching it in the 80s. I'm not really sure why a synthesized harpsichord rendition of "Jesu, Joy of Man's Desiring" is used for the intro music, but the teaching is great nonetheless.

Roman Thilenius's icon

"I just don't see how C74 could make app development easier without completely altering the way it's functioned for three decades."

i was referring to the problem in max 7 here, which includes some 400 MB of externals and stuff in every standalone you attempt to build.

this makes it more or less a no-go to share such apps. you're better off distributing your patch as a patch and pointing people to the runtime download. which i wouldn't mind - but the download is as big as the standalone would be.

"I honestly have no idea what you mean by 'all the power.' Pretty much everything you can do in Jitter you can do in a browser with native optimization."

well, you are a code geek by definition, and so you always have the option to choose between languages before you start a project. i am not. my world of programming is more or less limited to max/msp. i don't understand the lisp substructure of kyma and my "browser" magic is limited to html 4 and css.
on my older computers i have quite a setup of custom utilities made with max. i use them for audio batching, text processing, bb code processing, different kinds of calculators... i feel like with max 7 this option has been taken away from me.

yes i know, there are a dozen new features on the other hand, one of which is related to the new free runtime model, which fits perfectly with what i do. :) but it would be nice if the old features would not vanish so often.

Sophia's icon

Roman, can you please email C74 about all of this instead of commenting on a thread that has nothing to do with it? I don't have Max 7, but two points seem obvious:

1. I looked at several standalones in the projects area and all of them are a fraction of the size of these "400 MB of externals" you claim are compiled into all standalones whether they're used or not. So that seems to be a problem with your build.

2. Max *is* a programming language. Not only that, it's a programming language with a very steep learning curve. And, actually relevant to this thread, the programming techniques you learn in Max are applicable to other languages/environments.

So what if I told you that for the past 10 years (a little after you would have stopped learning HTML, if you never learned any part of v5) there's been an HTML tag that works both like jit.gl.sketch AND like a much easier 2D drawing API, like intro-level Processing? No new programming techniques, just the ones you're used to, implemented in plain old HTML. True, some of the frameworks I've referenced here do take some learning, but about a day if you follow their tutorials and sample projects...not the year it took me to fully get the hang of Max.

"Code geek" is kind of a funny term. I often see people who picked up full-stack web dev through a month-long course use it to describe anyone with an allegiance to a decades-old, highly specialized native language. Then in cases like this I see people who've used a highly technical and specialized native language for over a decade use it to excuse not wanting to learn a "new" HTML tag.

Personally, what I'm about is finding the easiest way to do things and as a result I'm constantly relearning what "easy" means for me.

Sophia's icon

Back to the focus...any more examples of functional programming used for media art or graphics work before this recent trend? The standard history of graphics goes something like: everything was proprietary, then OpenGL and Direct3D came along, and here we are. But of course there were people developing methods for hardware-independent, high-level graphics coding in the 80s and 90s, and even for digital art; it just seems they've been largely forgotten outside of academia.

I think this might be highly relevant right now generally, but it's especially serendipitous for me because the project I'm currently porting to several reactive html frameworks as a learning tool actually uses sprites (yeah...) to turn short video clips into raster-based animations I can mess around with in realtime.

And Peter, I looked into those videos and am definitely going to watch them as I follow along. I find recorded lectures help me a lot, and the awesome thing about these is that they're from 1986 yet have better quality than a lot of university lectures released as podcasts today. That's because they're not from MIT, but rather from a summer course taught at Hewlett-Packard and recorded for some HP television network.

Graham Wakefield's icon

Probably worth looking through past proceedings of FARM (http://functional-art.org), at the very least. There's a long history of Lisp variants in computer music too, but it is predominantly within academia. Nyquist is a Lisp embedded in Audacity (https://en.wikipedia.org/wiki/Nyquist_(programming_language)), there's the Kyma software/hardware system, which also sits on a Lisp, and more recently Alex McLean's Tidal language (sitting on Haskell) for live coding (http://toplap.org/tidal/)...plenty (plenty!) more.

Sophia's icon

I'm learning a ton just from reading Conal Elliott, and I'll probably look into Hudak next even though music isn't my background. I've put off Elliott's papers on Fran until I have a better foundation in functional programming, but I found this Google talk pretty relevant to Max/MSP: https://www.youtube.com/watch?v=faJ8N0giqzw

Elliott is kind of fascinating to me as an example of what I'm calling "self-sandboxing": basically ignoring the rush of new technologies to focus on principle-based problem solving, often, it seems, even to his detriment. His whole career has been on the periphery of some of the most important developments in computing, yet he has addressed them in completely different ways while seemingly unaware of comparable efforts until much later (both mainstream and otherwise: everything built on top of OpenGL AND visual programming). The result is that he's repeatedly been decades ahead without recognizing his own precedence, and seemingly unable to engage with people developing projects he inspired (e.g. he gave this talk at Google in '07 yet never collaborated with Facebook on React.js or anything similar). Actually, in another, much more technical keynote he gave at Lambda Jam 2015, he criticizes the recent trend in web dev for not employing true FRP principles. And in the same talk he refers to SGI/nVidia/GPUs in general as essentially abusing economies of scale to do "the same old thing," i.e. imperative programming.

But this Google talk is mainly him demonstrating FRP principles for animation using an environment for Haskell called Eros (?) that I think would be interesting for anyone into Jitter. The audience keeps bringing up visual programming (Apple's Automator as an analog to UNIX piping, though someone in the YouTube comments mentions Max), and he makes clear that the difference for him is that he hides "intermediary action." He also jettisons the sequentiality of procedural programming, which in Max takes the form of bangs and metros. We've seen this become a problem both in interfacing with OpenGL and, more to Elliott's point, in inhibiting modularity, which is why C74 introduced the gen objects, which do not use bangs or metros. Gen patching is to some degree more like Elliott's "tangible values."

But to me "hiding intermediary action" really gets to the essence of why it's difficult to build large applications in Max. His paradigm is something like presentation mode: exposing snapshots of state while the logic is kept wholly separate. He explains this more towards the end. Visual programming is first-order, meaning values and functions are differentiated. This strips functional programming of one of its foremost advantages: the ability to represent functions as values. Practically it results in increasing complexity as data flows down, essentially the dataflow version of WET programming.
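
A quick illustration of what "functions as values" buys you, sketched in JavaScript (all the names here are made up for the example):

```javascript
// In a first-order visual language, a chain of [expr] boxes must
// be duplicated everywhere it's needed. With functions as values,
// the chain itself is a value we can name, pass around, and compose.
const compose = (...fns) => x => fns.reduceRight((v, f) => f(v), x);

const scale  = k => x => x * k;                  // each stage is a value...
const offset = b => x => x + b;
const clip01 = x => Math.max(0, Math.min(1, x));

// ...so the whole pipeline is a value too, reusable anywhere.
// Applies scale first, then offset, then clip.
const toGain = compose(clip01, offset(0.1), scale(0.5));
```

`toGain` can now be handed to any other function, stored in a data structure, or further composed, which is exactly what a patch cord can't carry.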

Much of what contributes to Max's steep learning curve is the multiplicity of objects, which is more like learning several entire libraries simultaneously than like learning a language first (it seems the majority of my own threads on this forum amounted to simply asking which obscure object did what I needed). By composing functions, Elliott is able both to sidestep that pitfall of visual programming and to simplify the GUI. So this key aspect of functional programming, fusing functions by treating them as values, also solves the presentation mode issue. It means that when he writes the functions in Haskell that make up the content of the GUI (in other words, the part that generally scares off Maxers), that too is much more concise than patching mode in Max.

In fact, his blue sky question at the end is whether this paradigm can actually lead to *better* UI design than approaching it from a solely design perspective separated from the code. In other words, as I’ve already hinted at, was presentation mode a step backwards in UI design? Conal Elliott says yes. Along with patch cords and bangs, so take that fwiw ;)

Sophia's icon

Just taking notes for myself at this point. Reading Elliott describe FRP reminded me of this gem I happened upon from Alan Kay commenting on fellow Maxer Robert Edgar's blog. I love how he criticizes the very platform he's using for separating the program from the interface instead of using WYSIWYG editing. Same as how I'm typing this right now, in fact...

Graham Wakefield's icon

Thanks for the interesting reading! I'm responding to a few things you've said, and even though I seem to disagree on a few points, please don't take this negatively, I really appreciate you posting this here -- I'm hoping this is constructive:

I'm not sure that I'd map bangs & metros to the sequentiality of imperative programming -- metros are pretty much the starting points of discrete event streams, which FRP is supposed to handle as well as continuous ones, and bangs are just empty events. Qmetros in particular aren't even predictable. If anything it is [trigger] that maps better to the sequentiality of imperative programming -- that's the object we have to add to a patch to make sure subsections occur in a particular order. That, and the implicit right-to-left ordering of outlets. Other visual patching environments (e.g. Unreal's Blueprints) make execution order an explicit, separate patch-line from the data flow. And that's good, because often we want to make sure one thing happens before another. I think what Elliott is arguing for is a different question.

Also I can say with some confidence :-) that Gen was not introduced to avoid bangs and metros. It was introduced so that we can patch at lower levels without hitting walls of performance degradation. (Since you've been reading some good CS literature, I can describe it like this: where a Max patcher is essentially an interpreter in which each object is a pre-compiled library of code, a Gen patcher is more like a source code document that is compiled en masse to machine code at each edit.) What this means is that algorithms working at per-pixel or per-sample levels can entirely avoid the memory and CPU hits of going through interpreters and make best use of the hardware, at orders of magnitude faster than what an equivalent Max patcher could do. A simple gen~ filter is hundreds of times faster than an equivalent MSP patcher in a poly~ @vs 1 -- which means you can have 100 filters, e.g. a resonator bank, where previously you could only afford one. Before, to write a resonator bank, you would have to either find an external that does it or write a new external in C. Now you can patch it and edit it while it runs, which I guess does improve modularity a little -- but more importantly it is a massive improvement over C in terms of exploratory experience. But in terms of *concepts*, the gen~ patcher is not really different from a Max/MSP patcher.

Regarding "why it’s difficult to build large applications in Max"... You're right that the ability to treat functions and values as the same thing is very powerful -- I think of this conceptually as a powerful abstraction, but practically as a powerful protocol -- it means all kinds of interesting metaprogramming stuff is just right there when you need it. In Max we just can't send objects down patch cables. But that's really a feature of functional programming per se, not FRP in particular. And I'm not sure it is fair to say that it is the lack of this capability that results in complexity. For decades researchers have been putting out announcements that 'system X was really hard to write large applications in, so we designed framework Y'. I suspect that it is really hard to build large applications in *any* system. A beautifully simple piece of functional code can be incredibly hard to understand. The kinds of problems that, for example, composers or artists using Max try to solve are often messy ones. A solution to a messy problem embeds the complexity in the solution, either explicitly or implicitly. The more explicit, the closer it gets to spaghetti (term used both in Max and JS), the more implicit, the more it depends on your brain to fill in the complexity. And we're not very good at that. At least in a spaghetti patch -- and I get to see a lot of them, thanks to students :-) -- I get to drop in a number box or whatever at any point, and tweak things, to grasp what is actually happening. Max was way ahead of the curve in terms of responsive, live, immediate etc. programming in this way. Being able to drop in a number box / scope / pwindow in a running patcher, mess with its algorithm, without stopping and changing cognitive workspaces is a fantastic UI.

To be honest, I have never needed presentation mode (nor do I use styles...). I arrange my patchers so I can see the connections between UI and the algorithms, and encapsulate the algorithms so that they don't use too much screen space until I want to work on them. And I don't think of patchers as ever being finished products. I want to represent my system to myself well, in such a way that I can return to it years later and still be able to explore it, and so that others can understand how it works too. I don't know today what I will want to embed my patch into next year, so I design it to be flexible rather than polished: I keep parameters out in the open, I minimize dependencies, I comment a lot, and I mostly try not to hide things.

About the learning curve of the large multiplicity of objects in Max, I agree this is a difficult part at the start. But I would argue that there are 50 or so 'primitive' objects that are the most commonly used -- a few UI objects, things like metro, trigger, pack/unpack, etc. -- which can be learned very quickly. You can compose these into more complex processes and wrap them in a subpatcher or abstraction, which is effectively a composite function: it has exactly the same interface as a primitive object (inlets & outlets). Where the objects proliferate is in dealing with domain-specific stuff, external libraries, and efficiency limitations (to some extent things like Gen are allowing us to compose these from simpler primitives) -- and of course the legacy of a twenty-year-old system. I see this in other multi-domain languages & systems too (visual and textual). Is the number of common Max objects really much different from the number of methods in the Node.js manual?

To bring it back to FRP -- I do think it is interesting to think about the parallels & divergences between the FRP proposition and Max (bearing in mind that they are not two of a kind). I think it may be helpful to be concrete: what is it that FRP adds to functional programming? Can FRP be expressed visually? Is there something that can be expressed in FRP that cannot be expressed in visual data flow (e.g. sending functions down patch-cords)? What kinds of exploratory experiences does that make possible? etc.

Sophia's icon

Thanks for the insight, Graham.

I guess most of my opinions on this reflect the way I've come to realize I personally think, so they're necessarily highly subjective, but also a bit out of the norm as far as how most people learn programming, so I figure they could help others who think this way and were probably drawn to Max as a result.

First of all, I fundamentally disagree that messy problems imply messy solutions. For me, this "compression" in problem solving is the whole point of programming. Also, messy problems aren't at all unique to artists. Sure, probably in comparison to programmers working on projects where they get to define the parameters, but certainly not compared to domains like scientific computing, which tend to aim for elegance even more than commercial software development, which for no good reason produces probably the most bloated code I've seen.

What that elegance *means* is where this becomes subjective. Personally I very much subscribe to Rich Hickey's maxim of simplicity over ease (http://www.infoq.com/presentations/Simple-Made-Easy), which is fitting since someone else mentioned Clojure above. A lot of this is probably because, as mentioned, I've realized I have a very poor memory for unrelated concepts and a rather good one when I can structure them semantically. Interestingly, I was just reading about how this is the exact opposite of how people on the autism spectrum think; studies show they can often memorize nonsensical strings of words as readily as meaningful sentences.

But it seems from what you wrote that you question the very dichotomy between simplicity and complexity in programming, and of course I have to strongly disagree with that too. A lot of it seems to come from conflating that spectrum with easy/hard. The fact is you *always* have to interpret code to some degree to know what it does. However, in my experience this capacity for programmatic inference is a skill one can readily improve, whereas the capacity for memory is much more hardwired (yes, I know mnemonics exist...). And the more my brain is able to infer about a program, the smaller the mental map of it I have to hold in order to predict its behavior. Encapsulation alone doesn't solve this problem, since your modules would have to be designed well enough that you only think about their parameters and not what's inside, which itself implies really elegant code...and then you're just in a rhetorical loop from there.

So that aim for semantic brevity is why I'm currently digging into functional programming, and also probably why Max is one of the top two languages I've programmed in by volume and the only one of the two I enjoy (JavaScript is out of pure necessity). Although I obviously find it fascinating, I've never taken a computer science class in my life. To the contrary, my motivation here is something that may be akin to some form of learning disability. And since I'm still learning the basics of functional programming, let alone true FRP implementations (Max has the "reactive" part covered as well as anything), I can't answer many of your questions. So I'll just try to explain some of my initial statements regarding Elliott's work, mainly my understanding so far of the differences between his conception of FRP and visual dataflow. Although it's important to first note that, for such a staunch theory nerd, Elliott's conception of FRP has *always* been visual and always meant to enable artists (I know you're aware of the latter, but it seems maybe not the former...I was certainly pleasantly surprised to find out).

You're probably right that bangs and metros aren't what Elliott means by sequentiality, but I highly doubt he'd approve of them either. For me the issue is they introduce an element of unpredictability to patching. I've even written patches that exploit that unpredictability to produce probabilistic sequencing, for example making flicker films by sending two videos to one window and tweaking the metros.

A lot of what functional programming seems to be about generally is eliminating unpredictability while still coding at a very high level. I've been reading a lot about applications of this in embedded software, particularly a project by the national research lab in Australia that developed a version of the L4 microkernel in Haskell and then demonstrated that it was formally verifiable. In other words, what used to be fault-resistant is now virtually fault-proof. The downside is that the main application seems to be drone warfare...but then again, computer science has always been funded by the military-industrial complex, so it's hard for any of us not to be complicit unless we go full Ted Kaczynski.

More palatable, but much more shadowy in terms of public knowledge, is the application of the same embedded architecture to hardware-level security, most notably the iPhone's "secure enclave," which I don't think I need to add any more discussion of than one can already find on every tech blog in the world. I know this all might sound out of left field, but it's a great edge case in preventing runtime bugs in general. Plus, even though I've only developed embedded applications on the AVR, I really enjoyed it and constantly think of it as an example of the "self-sandboxing" that intrigued me about Conal Elliott as an individual. Very little changes when it comes to 8-bit microcontrollers, which is really refreshing given my memory issues and the fact that the main venue for programming during my lifetime has been the web, i.e. the fastest-changing development environment out there.

Now the other source of unpredictability I've found with Max, things that are more strictly bugs and runtime errors, comes from hardware-level interaction, as with OpenGL. This is where Elliott loses me, and I'm almost certain you'd agree. His graphics environments for artists seem ridiculously lacking in power at this point...especially considering he was working on Fran at the same time his colleagues at Microsoft Research were developing Direct3D. This is what I meant when I brought up his reference to graphics developers abusing economies of scale. Actually, designing a specialized processor for matrix math with a cross-platform, hardware-level API was exactly the opposite of sitting around waiting for CPUs to double in speed.

Not sure what else to add. I’m really happy to hear you don’t use presentation mode! To me many of the tutorials are the epitome of well designed patching.

Graham Wakefield's icon

Hello

Out-of-the-norm thinking is great; that's why I'm enjoying this thread! I didn't mention it before, but I really appreciate your thoughts on 'self-sandboxing' and 'abusing economies of scale', for example :-)

Sorry, I was probably a bit off the cuff with my definition of messy. I find that elegant design (including programming) revolves around fitting the dimensions of the target application (e.g. a composition, or a sci-viz) to the dimensions of the material of the solution (the programming language(s)) in such a way that affordances, including mental capacities, are maximized, and arbitrariness due to the solution's engineering is hidden. That is compression, I agree. I guess it's a Christopher Alexander view too, and he also noted that the problem may still be messy. Its presentation to us may be elegant, but the messiness is still inside; otherwise it would not be a solution to the problem. There's an old Ashby quote I forget exactly, but something about how the requisite variety (what I mean by messiness) of a solution must be equal to or greater than the requisite variety of the problem, and I believe we usually vastly underestimate the requisite variety of a problem.

I think I'm not disagreeing. It is very true that organizing the components of a solution along semantic symmetries makes it easier to grasp. (As you mention, recurrent structures are easier to keep in immediate consciousness. I also tried to utilize semantic symmetries in the design of the gen operators, but had to balance that with the community's familiarity with Max/MSP concepts and terms.) It's just that we have a tendency to buy into our designs beyond their real applicability. What concerns me most is that solutions are often developed that over-emphasize elegance in the presentation and boundaries of their affordances, leading toward a 'finished' product, and in doing so close off access to (possibly hidden, possibly messier than apparent) dimensions that were actually important in the problem. A particularly problematic case for me is the use of static hierarchies that hide their structure and make it immutable. It's rather like seeing the solution to an equation without being able to trace the working out, or admiring the design of a computer without the ability to replace its parts. It solves an immediate goal but closes itself off from the future. (By a different token, I think scientific code is the most bloated because it is often single-purpose -- it just needs to work, by whatever means is most easily thrown together, because the code itself is not the real outcome -- and this also makes it rather unportable and useless in the longer term. Very little code I've seen written by scientists is in any way elegant.)

And I'm definitely not conflating simplicity with easy, complexity with hard. That "you *always* have to interpret code to some degree to know what it does" is exactly what I meant, that the messiness is still necessarily there. It may be presented elegantly, but in following it through by inference, the messiness re-emerges.

It's interesting how you contrast memory-based and inference-based thinking in the programming space; I haven't seen that discussed so much. One thing I appreciate in Max's evolution has been the increasing cognitive support that the interface brings. I don't have to remember things so much anymore because the autocomplete, inline help, patcher/documentation search, etc. provide much more extended thinking space. I really appreciate the chef's table metaphor -- more things being ready-at-hand. In a way the dependency on memory-based work is much less than it used to be.

I think of encapsulation as the general phenomenon of making new words. As in, once you have a name for a particular thing/process/concept, it is easier to reason about it, alone or with others. Just like language it necessarily means learning new words. Generally new words appear with greater frequency in science. The danger of encapsulation is similar: it is easy to forget the underlying complexity and use words carelessly.
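To make the word-making idea concrete, here's a throwaway JavaScript sketch (the names `clamp` and `crossfade` are just illustrative, not from any library): once a recurring bit of wiring gets a name, you reason with the name rather than the wiring, and the new words compose into further words.

```javascript
// Encapsulation as word-making: a recurring patch of logic gets a name,
// and from then on we think with the name, not the arithmetic inside it.
const clamp = (x, lo, hi) => Math.min(hi, Math.max(lo, x));

// A second "word" built from the first; its definition reads almost
// like a sentence because clamp already has a name.
const crossfade = (a, b, t) => {
  const mix = clamp(t, 0, 1);
  return a * (1 - mix) + b * mix;
};

console.log(crossfade(0, 10, 0.25)); // 2.5
```

The danger is exactly as above: once `crossfade` is a word, it's easy to forget it silently clamps its third argument.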

A bang is just a pure discrete event (an entirely undifferentiated utterance). I'm pretty sure Elliott has this in his taxonomy.
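If it helps, here's a rough hand-rolled sketch of that idea in JavaScript (my own minimal construction, not any particular FRP library's API): an event is just something subscribers react to, and a bang is the degenerate case where the occurrence carries no value at all.

```javascript
// A minimal FRP-style event stream. A "bang" is an occurrence with no
// payload: subscribers react to *that* it happened, not to any value.
function makeEvent() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    // Firing with no argument is the analogue of a Max bang.
    fire(value) { subscribers.forEach(fn => fn(value)); },
  };
}

const bang = makeEvent();
let count = 0;
bang.subscribe(() => { count += 1; }); // reacts to the bare occurrence
bang.fire(); // an undifferentiated utterance: no data, just "now"
bang.fire();
console.log(count); // 2
```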

Graham

Sophia's icon

Yes, the more that I think about it, the memory vs. inference balance is the important discovery for me personally in wanting to explore these subjects in order to write better and more enjoyable code. It makes me wonder if autistic people make really good programmers. I mean, I'm sure they have their own problems (often being forced to do things the hard way has its benefits), but maybe they'd pick things up faster and be seen as "better" in the more standard productive sense. According to that study, I'd fit the anxious pattern more, in the sense that they think a reduced ability to factor new sensory data into decision making is the root of anxiety disorders. I can't find the link, but it was about variations in the mental application of Bayes' theorem to explain various mental illnesses.

Re: scientific computing, I think it depends on what specifically we're talking about. Like you, I much prefer writing code for myself, and that's much of what computational science is. It should be modular and easily modifiable, but being freed from the concerns of a user/developer distinction carries benefits I've yet to pinpoint beyond just skipping the necessity of a pretty GUI that hides the program's functioning. But if you're talking about more limited-use software for statistical analysis? Mostly garbage. I can't even get into it. This project I was about to do gave me the choice of either a really boring yet accessible API or a really powerful command-line tool that requires hand-formatting a SQL database, is too slow to run live, and isn't designed in a way amenable to modification even if I'm okay with stripping functionality. I decided to abandon it.

And I suppose you've convinced me to reconsider bangs. Or reconsider how I use them, that is? Jitter sort of invites me to use them as probabilistic streams, and that's often an element of confusion. Perhaps metro and qmetro are due for a redesign in version eight? I can imagine all kinds of built-in functionality that would allow messing with them while maintaining precision in ways not possible in patchers. Or maybe that's something I could already do with gen? I actually haven't upgraded to Max 7, but that's an exciting idea if it's able to sync the timing really precisely. Maybe I could build my own metro!
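For what it's worth, here's roughly what I mean by building my own metro, sketched in JavaScript rather than gen (so purely hypothetical as far as Max is concerned): a naive setInterval drifts because each tick is scheduled relative to the previous one, but scheduling every tick against an absolute start time keeps the error from accumulating.

```javascript
// A hypothetical "metro" as a self-correcting timed event stream.
// Each tick is rescheduled against tick * intervalMs from the start
// time, so per-tick timer jitter doesn't accumulate into drift.
function metro(intervalMs, onBang) {
  const start = Date.now();
  let tick = 0;
  let timer = null;
  function schedule() {
    tick += 1;
    const target = start + tick * intervalMs; // absolute deadline
    timer = setTimeout(() => {
      onBang();
      schedule(); // re-aim at the next absolute deadline
    }, Math.max(0, target - Date.now()));
  }
  schedule();
  return { stop() { clearTimeout(timer); } };
}

// Usage: bang every 500 ms until stopped.
const m = metro(500, () => console.log('bang'));
setTimeout(() => m.stop(), 2600);
```

None of this approaches the sample-accurate timing the Max scheduler (or gen) can give you, of course; it's just the drift-correction idea in miniature.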

ajoe's icon

Sorry for the late reply, but I found it interesting to stumble onto this thread. I used Max a few years ago on some music projects but had pretty much forgotten about the software.

Recently, doing research on functional reactive programming, I found several pieces of software that referenced Max as inspiration for their work. So I downloaded the demo and am very impressed with the ease of building a fully reactive and distributed network system.

My distributed network of computers running Max includes 2 Macs, 1 Win 10 machine, and an iPad running the Mira app -- all communicating in real time.

If anyone is interested in experimenting with a LAN-distributed system between instances of Max, check out the mxj net.maxhole object, which offers zero-config connectivity.

Thanks for creating the topic. It was great reading with some very good references.