In Need of Major Help with Max/MSP for Undergraduate Project!
Hey Guys,
First off, this is what my project is about:
We want to build a system that uses a webcam to capture our users and a projector to display their image on a wall. The system should add a "speech bubble" above each projected user. Users would text us with their cellphones, and their text messages would appear in the bubble above their head.
We would also like the user to be able to physically interact with the projected speech bubble: poking, pushing, squishing, and bouncing it around the projected space on the wall.
We are a team of 5 people, but we all have almost no experience with Max/MSP. So if you have an answer to some of our questions or problems, please be very clear, or we probably won't understand. :(
Here are some of the major problems we are having:
1. How do we draw something like a speech bubble in Max/MSP?
- Should we use an image of a speech bubble and superimpose it onto the camera feed in Max/MSP?
- Or is there an actual way to draw the speech bubble in Max/MSP itself?
- We can draw the oval, but we cannot figure out how to attach the little triangle tail of the speech bubble.
2. How do we take the text messages we receive from our SMS gateway service and superimpose them onto the speech bubble?
- How do we read the lines of messages from the server into Max/MSP?
- How do we create/edit the text in the bubble in real time?
3. We want to use something like a hand-gesture recognition system so that users can interact with the speech bubble with their hands.
- What is the easiest way to make Max/MSP recognize a hand gesture to (for example) pop the speech bubble? Would we use blobs, centroids, or collision detection?
- Are there any other Jitter objects you would suggest we use?
Paul Notzold, creator of TxTual Healing (http://www.txtualhealing.com/), has created projects similar to what we are trying to achieve. The main difference is that his projects mostly involve anonymous users, while we actually want to project the image of our users onto a display screen/wall.
Does anyone have any idea how he was able to achieve what his project does?
Thank you in advance for taking the time to help us.
Alright guys, here are a few ideas...
1. Your best bet in my view would be to quickly draw one up in Paint and then import it into your patch using the 'fpic' object.
2a. Not too sure about connecting the cellphone to Max; I would probably look into Bluetooth. Try the 'hi' object, which interfaces external hardware with Max, and look at connecting it to a 'umenu' to access Bluetooth. You can then store the strings of characters in a 'coll' object.
2b. You can display the messages in real time using the 'thispatcher' object. Look at its help file and the links to other patches within the help menu. You can give the message boxes scripting names (declared in the inspector), then send messages such as 'script show var1' and 'script hide var1' to 'thispatcher' to show and hide the message box named 'var1'. You can then trigger these messages as necessary, and the messages stored in 'coll' can be routed to message boxes as required.
3. Here I would recommend using an Arduino. The Arduino is a small circuit board that you connect via USB (although it reads off the serial port). You can attach various sensors to the board (in your case motion sensors, infrared sensors, or ultrasound sensors), and using the Arduino software you can write code in C to declare the active pins on the board as either inputs or outputs. After a bit of fiddling the Arduino interfaces with Max/MSP, and you can build your patches to suit your needs depending on the inputs received from the sensors. A rough sketch of the Arduino side is below.
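Just to give you an idea, here is a minimal Arduino sketch. It assumes a setup you would have to adapt: a motion sensor wired to digital pin 2 (the pin number and sensor type are my assumptions, not something fixed), sending a single byte over serial whenever motion starts, which a 'serial' object in Max set to the same baud rate can then pick up and use to trigger your "pop the bubble" logic.

    // Assumed setup: a motion sensor on digital pin 2 (change to match your wiring).
    const int sensorPin = 2;   // hypothetical input pin for the motion sensor
    int lastState = LOW;       // previous sensor reading

    void setup() {
      pinMode(sensorPin, INPUT);  // declare the pin as an input
      Serial.begin(9600);         // same baud rate as the 'serial' object in your Max patch
    }

    void loop() {
      int state = digitalRead(sensorPin);
      if (state == HIGH && lastState == LOW) {
        Serial.write(1);          // send one byte to Max when motion is first detected
      }
      lastState = state;
      delay(20);                  // simple debounce / rate limit
    }

On the Max side you would read this with a 'serial' object (e.g. 'serial a 9600', polled by a metro) and route the incoming byte to whatever part of the patch handles the bubble.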
It seems reasonable to suggest that Paul Notzold may have used a similar method, although I'm sure there are hundreds of ways he could have gone about producing such a system.
Any problems or suggestions just let me know,
Good Luck! Tom