Best computer (graphics card, etc.) for Jitter

phiol's icon

Hi all,

My Apple MacBook Pro 13" (2011, i7, 8 GB RAM, SSD) laptop is dying and I am planning on getting a new computer.
It would be primarily for Jitter and interactive & generative video work.
I was thinking of the Mac Pro desktop, but it's an expensive decision.
I was also thinking I could maybe switch to a PC.
I believe I would get more bang for my buck with a PC.

Bottom line is, I am only interested in getting
a good computer that best serves Jitter and the physics world it offers.
I know nothing about gaming computers.
Would one of those be my best bet?

I am really curious to know your opinions

Thanks a lot

Phiol

Andro's icon

Pretty sure a desktop PC with a 1 GB graphics card will suffice.
I have a MacBook Pro from 4 years ago with a 1 GB Radeon card, and it works like a charm, though it is on the pricey side. For me portability is paramount, and I picked Mac for its stability.
A 512 MB card should do the job as well; I suppose it depends on what you're using as content.
Lots of video files means you need lots of RAM; lots of OpenGL means less RAM and a better card, as OpenGL is all calculated on your GPU.

dtr's icon

It would be interesting to set up a benchmark test for this: one or a couple of OpenGL-heavy Jitter patches, then share our results with each other to compare.

Myself, I use a mid/high-end Nvidia GTX 670 gaming graphics card for my multi-display OpenGL Jitter work. I'm sure I'm not using it anywhere near its full capabilities, but I like knowing that the bottleneck is not the hardware I'm using but my own coding. Its CUDA support is handy for GPU-accelerated video editing in Adobe Premiere as well, and I like playing a game every once in a while.

I came to the same conclusion, that performant Apple gear is too expensive for me, and started building my own Hackintosh systems a couple of years ago. Some of the software elements I now use are Windows-only, so it was easy to put in another SSD with Win7 on it and get on with it. I have one system with an i7 2600K CPU and I'm just building a new one with an i5 4690K. Together with the big video card, they shred through Max/Jitter work.

Of course, inefficient patching will destroy any system's performance. Fast hardware alone isn't a magic bullet; you need patching skills to match it.

phiol's icon

Hey, thanks a lot for the tips, guys. I really appreciate it.
I also wonder what Rob Ramirez thinks is best for getting the physics behaviors to run smoothly.

As mentioned, I have an MBP 13": 2.7 GHz Intel Core i7, 8 GB 1600 MHz DDR3 RAM,
Intel HD Graphics 3000 512 MB.
I opened it up a few months ago, swapped the HD for an SSD, took out the CD drive
and popped in one of these:
http://store.mcetech.com/Merchant2/merchant.mvc?Screen=PROD&Product_Code=OBSXGB-UNB&Category_Code=STORHDOPTIBAY&Product_Count=0
then put my original HD in the OptiBay, so now I have two drives. Having an SSD as the main drive makes a crazy difference.

Coding and interests:

Everything I do, I try to keep on the GPU side.
For videos I use vade's optimized playback technique, with uyvy into [jit.gl.slab @file cc.uyvy2rgba.lite.jxs].

Basically my stuff is very similar to what silicat does: http://vimeo.com/61475043
Interactivity using the Kinect, and loads of PNGs for stop-motion animation nodes linked to the jit.phys objects.
(Quite honestly, I don't know whether the physics objects use the CPU or the GPU.)

Little things like keeping the jit.gl.videoplane/gridshape at @dim 2 2 for my PNGs help quite a bit on the CPU side.
Also, for stop motion, it's better to generate it in real time, with all images (with the PNG subjects positioned) loaded into a ton of jit.gl.texture objects
and a counter flipping through them, than to use a "baked" method, i.e. building your stop motion in Final Cut Pro and then using a
jit.gl.pix lumakey technique in Jitter to maintain a cutout of the background.
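For illustration, the counter part of that idea could be a small [js] like this (a rough, untested sketch; the texture names and frame count here are made up, adjust them to however your jit.gl.texture objects are actually named):

// rough sketch: step through pre-loaded, named jit.gl.texture objects
// bang once per animation frame; route the outlet to a jit.gl.videoplane
inlets = 1;
outlets = 1;

var prefix = "frame_";   // assumed naming scheme: frame_0, frame_1, ...
var numframes = 24;      // assumed frame count
var index = 0;

function bang() {
    // "texture <name>" tells the videoplane which pre-loaded texture to draw
    outlet(0, "texture", prefix + index);
    index = (index + 1) % numframes;
}

function reset() {
    index = 0;
}

function frames(n) {     // change the loop length on the fly
    if (n > 0) numframes = n;
}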

Also, I've noticed that for node motion control (transform: position/scale/rotatexyz),
using jit.anim.drive & jit.anim.path seems to run a lot smoother than using MSP into [snapshot~ 33] to create motion,
and it's more CPU friendly as well.

Benchmark:

I would be happy to make benchmarking patches for GPU and CPU.
What do you suggest would be good for this?

- jit.gl.multiple with large matrix dimensions?
- loads of jit.gl.node objects going through generative shader chains?

Thanks again guys

brendan mccloskey's icon

No graphics advice here, but DA-YUM that silicat stuff is impressive. Magical and seamless integration between movement and media.

phiol's icon

Oh yeah, I forgot to ask you guys (coming from the music world, I've never bought or even seen a graphics card):
do you have to install them internally, or are they external like sound cards?

Andro, did you install your 1 GB Radeon card yourself?

Thanks again guys :-)

phiol

Andro's icon

Nope, the card was built into my MacBook Pro.

dtr's icon

This is a large, higher-end graphics card: http://cdn4.wccftech.com/wp-content/uploads/2013/05/GTX-770.jpg
It plugs into a PCIe 16x slot inside your computer. With laptops you're stuck with whatever the manufacturer installed.

Benchmark:

I would be happy to make benchmarking patches for GPU and CPU.
What do you suggest would be good for this?

- jit.gl.multiple with large matrix dimensions?
- loads of jit.gl.node objects going through generative shader chains?

Those two would make sense. I'd keep video out of it, as HD/CPU/PCIe/etc. bottlenecks come into play then.
The render window should have fixed dimensions, as different display resolutions would skew the results. Window sync should be off so that the fps isn't capped by the display refresh rate.

I'd also add one with a very large window, like the situation when using a TripleHead2Go for multiple display outputs: 3 x 1920x1080 = 5760x1080.

phiol's icon

Thanks for the tips, DTR.
I'll try a few tests and get back to you.

In the end I decided to buy the fastest MBP; I need the portability.

It has two graphics processors:
- Intel Iris Pro Graphics
- NVIDIA GeForce GT 750M with 2 GB of GDDR5 memory and
automatic graphics switching

I guess it will do for now.
I'll try patching in Jitter the way I do in MSP, i.e. where chains are muted ([vst~] with (disable 1), [poly~]/[thispoly~] with (mute 1)).

I guess if I set all of a chain's GL objects to @automatic 0 and gate the bang going to the chain,
and set all physics chains to @enable 0 when not in use,
the GPU and CPU should stay low.
Do you have better suggestions?
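For illustration, something like this small [js] helper is what I have in mind for muting a whole chain (a rough, untested sketch; its outlet would be routed to that chain's jit.gl and jit.phys objects, and the same toggle would close a [gate] on the chain's bang):

// rough sketch: mute/unmute a render + physics chain with one toggle
inlets = 1;
outlets = 1;

function msg_int(on) {
    var v = (on != 0) ? 1 : 0;
    outlet(0, "automatic", v);  // jit.gl.* objects: stop drawing on the context bang when 0
    outlet(0, "enable", v);     // jit.gl.* / jit.phys.* objects: fully disable when 0
}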

Thanks again DTR and Andro

phiol

Pedro Santos's icon

In the topic "Particle system benchmark" I've come to the conclusion that at least with AMD drivers, Windows 7 performance was far superior than Mac OS X 10.8 using the same computer.
I would definitely use a custom built Windows machine for that purpose and choose a nice graphics card.

Regarding the benchmark utility, I would concentrate on two features: draw-call performance with high-poly-count geometry, and shader processing performance (a Gaussian blur cascade with 10 to 20 instances). These should be separate benchmarks, because otherwise one result would influence the other without us being able to tell where the bottleneck is.

I second DTR's suggestions and add that the qmetro should be configured to a very low value like 2.

phiol's icon

Shoot, I just read your post, Pedro. Thanks a lot though :-)
I will try all of these suggestions.

Mac vs. PC mmm. I know I was just not ready for the sex-change.

thanks

dtr's icon

Do you have better suggestions ?

With OpenGL Jitter it is of the utmost importance to be strict about what happens on the CPU and what happens on the GPU. Channeling matrices back and forth between CPU and GPU kills performance. So once your chain is operating in GPU territory (jit.gl objects, generally) you want to keep it there and, for example, not process matrices with jit.op/jit.expr objects. Use the equivalent jit.gl objects or shaders instead.

jit.pwindow objects in the chain are another way to kill performance. Of course they're very useful for monitoring during development, but be sure to disable them when in performance mode.

dtr's icon

In the topic "Particle system benchmark" I’ve come to the conclusion that at least with AMD drivers, Windows 7 performance was far superior than Mac OS X 10.8 using the same computer.
I would definitely use a custom built Windows machine for that purpose and choose a nice graphics card.

I got curious and tested it too. I see some fps differences between OSX 10.8.5 and Win7 Pro on my system, but they're not huge, and they're contradictory between points and tri_grid modes. I used the 2nd patch in the thread.

WIN7
Points mode = 110fps
Tri_grid mode = 70fps

OSX
Points mode = 105fps
Tri_grid mode = 80fps

Nvidia GTX670 2GB (latest Nvidia drivers on Win7, OSX-included drivers on OSX)
i7 2600k CPU
8GB RAM
Max 6.1.8

MrMaarten's icon

I'm also curious about different performances. Would be nice to have a reference. Why don't we make a patch and post the results?

dtr's icon

Have you read the previous posts in this thread?

MrMaarten's icon

sorry, posted at the same time...

I'll post my results tomorrow

MrMaarten's icon

Got around to it now. Somewhat odd results; maybe someone with the same MacBook can confirm?

MacBook Pro Retina - GeForce GT 750M 2 GB - 2.3 GHz i7 Haswell, 16 GB RAM

Win 8.1 - both 32 and 64bit
125 point mode
30 tri-grid

Mac Os 10.9.5
100 point mode
28 tri-grid

Odd that my tri_grid numbers are so far below DTR's.

phiol's icon

Here are my results

MacBook Pro, OS X 10.7.5 (early 2011)
Processor: 2.7 GHz i7
Memory: 8 GB
SSD hard drive
Intel HD Graphics 3000, 512 MB

Tested on the two patches from Pedro Santos's thread.

Patch 1 performance (with Rob Ramirez's @cache_mode immediate)

SHAPE1
@cache_mode vertexarray
fps 15

@cache_mode immediate
fps 22

SHAPE2
@cache_mode vertexarray
fps 8.5

@cache_mode immediate
fps 8.3

Patch 2 performance

points
fps jumps around from 65-95

tri_grid
fps jumps around from 50-60

DTR, for your results in patch 1,
did you guys try what Rob Ramirez suggests in that thread?
"the trick is to set the jit.gl.gridshape @cache_mode attribute to "immediate" and enable @displaylist.
with those settings, i get performance comparable to max 5. hope this helps."
As noted, I saw a small difference in patch 1 with @cache_mode set to "immediate".

Pedro Santos's icon

It's good to know that the difference between Mac and Windows isn't as extensive as it was in my case...
It could be related to the drivers being more optimised, as these are more recent graphics cards (I was using an AMD Radeon HD 4870 1 GB, by the way).
Or it could be that Nvidia's drivers are more optimised on OS X.

Anyway, what's really strange is the discrepancy between DTR's and MrMaarten's results...

MrMaarten's icon

Yeah, the discrepancy is really strange, especially when a 2011 MacBook gets even higher fps on tri_grid. I think I'll pay the Genius Bar a visit!

Pedro Santos's icon

MrMaarten, could the difference be in the way you tested? Original window size or fullscreen?
Theoretically, tri_grid mode should be more expensive with a larger window size because it needs to paint more pixels... just a thought...

vichug's icon

Hey guys, I have some (hopefully non-fabricated) memories from not so long ago of someone talking about unofficial OS X graphics card drivers working better than the native ones... does that ring a bell for anybody?

MrMaarten's icon

Hi Pedro, I tested in the standard small window (320x240?). In fullscreen it is much worse, I now see. I really think there is something wrong with my laptop (I already had suspicions, but they were vague). It is good to have tests like this, with numbers that reflect a kind of standard use, so you can also check whether there is something wrong with the hardware!

Just to be sure, this is the patch I used:

[Max patch attached]

phiol's icon

Yep that's the patch I used as well

dtr's icon

Hey guys, I'm not sure that patch is entirely suitable for comparative tests. I was just curious to see if I'd notice Mac vs Win differences like Pedro Santos did. It could be a good base for one of the benchmark patches we're talking about, maybe with some changes like fixing the window size. Also, that patch has CPU elements (jit.p.* particle objects, matrix operations, channeling the matrix to the GPU) before the GPU part. We should be clear about what we want to test with each patch: just OpenGL GPU, or combined system performance of GPU+CPU+memory+HD, etc.

dtr's icon

Btw, fullscreen 1920x1080 I get around 20fps in tri_grid mode.

dtr's icon

Concerning the GTX 670 vs the GT 750M (M for the mobile/laptop version, less powerful than the equivalent model number in the desktop range), these are entirely different beasts, eh. Desktop vs laptop, high vs mid level. You'd expect to see differences.

But one thing you'd want to check is Menu > Options > OpenGL Status. Does it list your GT 750M as the renderer, or the Intel HD/Iris graphics? Maybe it doesn't switch to the discrete card as it should?

MrMaarten's icon

I agree, DTR. Fixing the window size is the obvious one (maybe 640x480 is a good guide?).
What was in Pedro's patch that made such a big difference between Mac and PC? Can we see the patch?

I think what we tested now was CPU and combined CPU/GPU. Those are valuable numbers. We just need one more chain that generates in jit.gl.pix and/or gen?

[Max patch attached]

Edit: I realize that jit.gen is not on the GPU, so I looked at another example patch.
Maybe something like the patch Wesley Smith made (https://cycling74.com/forums/sharing-is-hairy-brains-gen-particles/)?

[Max patches attached]

Once we agree on the patches we can make a good interface so the tests are uniform. We then can post the results in a new clean thread (Results GPU CPU tests)?

----

One more thing about the original question: my findings might be skewed because of the defect my MacBook Pro Retina (GT 750M) seems to have, but I think I can sum up some advantages and disadvantages of having worked with it.
Advantages:
- good battery life
- compact
- good screen (big real estate, etc.), good keyboard, good OS and all that (less messing with drivers, etc.)
- 2 Thunderbolt/Mini DisplayPort ports and HDMI: this means 3 external displays can be run! Fast SSD RAIDs can also be hooked up to the Thunderbolt ports, and they even daisy-chain to external monitors (so you can put the monitor at the end of the chain).

Disadvantages:
- the 2880x1800 screen makes the GPU work a bit harder just for the screen. The screen is doing some scaling in OS X (showing 1920x1200, for instance); if you turn this off with ResX you gain performance but work with tiny text.
- in a PC gaming laptop you can find a faster GPU like the 850M or 860M (with GDDR5). This seems a step up (http://www.game-debate.com/gpu/index.php?gid=2143&gid2=1715&compare=geforce-gtx-860m-2gb-vs-geforce-gt-750m-2gb-gddr5). I saw an MSI laptop with that GPU for €1400. It only has two screen outputs (HDMI and Mini DisplayPort or VGA) and only USB 3.0, no Thunderbolt. But USB 3.0 is also fast. I am now considering getting this laptop (although I find Windows a drag to work in). Does anybody have experience with MSI or Lenovo laptops? (e.g. these: http://www.notebookcheck.net/MSI-reveals-GS60-Ghost-and-GS70-Stealth-gaming-laptops-with-GeForce-GTX-870M-860M-graphics.114176.0.html)
- price/performance: going full circle, maybe the MBPr is good enough for years to come and has a better user experience, but maybe with a good Windows laptop you get better bang for buck and raw performance?

dtr's icon

I agree, DTR. Fixing the window size is the obvious one (maybe 640×480 is a good guide?).
What was in Pedro's patch that made such a big difference between Mac and PC? Can we see the patch?

1. VGA 640x480? It's 2014, let's make it HD 1920x1080 ;) (All joking aside, I'd like these tests to be close to real-world situations. I think most of us like to run at least HD.)

2. It's the one you tried. The difference between OSes seems to come down to worse ATI drivers on OS X. We shouldn't generalize, though; perhaps it's just that particular model/generation of ATI cards.

Pedro Santos's icon

I didn't notice one of the cards is Mobile. That makes sense, then...


Regarding that patch, I agree that it is not a good general benchmark example. It was only meant to showcase a bug/lack of optimization in jit.gl.multiple.
That patch is very CPU limited. If you turn off the jit.gl.mesh geometry rendering you probably won't notice a huge difference, but if you change the dim of the particle system from 100000 to 10000 you will see the fps go way up. So, the main bottleneck here is the CPU and not the GPU...

Anyway, I think the benchmark should measure at least 4 components:

CPU processing:
- regular Jitter matrix operations
- physics processing

GPU processing:
- geometry rendering
- shader processing

These should be as isolated as possible.
The resolution could be HD (1280x720) in order to accommodate both laptops and desktops easily.
The routine could be automated, cycling through each test for x seconds and capturing an average framerate for each one.
These values could be stored in a text file or shown in a GUI that could easily be copy/pasted to the forum, including captions.
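The capture part could be a small [js] along these lines (a rough, untested sketch just to illustrate the idea; the message names are made up):

// rough sketch of the fps-averaging part of the automated routine:
// send "phase <name>" when a test starts, bang it once per rendered frame,
// send "report" when the test ends. Averages could then go to a coll,
// a text file or a GUI for copy/pasting to the forum.
inlets = 1;
outlets = 1;

var label = "none";
var frames = 0;
var t0 = 0;

function phase(name) {          // start of a test phase
    label = name;
    frames = 0;
    t0 = Date.now();
}

function bang() {               // one bang per rendered frame
    frames++;
}

function report() {             // average fps over the whole phase
    var secs = (Date.now() - t0) / 1000.;
    if (secs <= 0 || frames < 1) return;
    var avg = frames / secs;
    post(label + ": " + avg.toFixed(1) + " fps average\n");
    outlet(0, label, avg);
}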

MrMaarten's icon

Sounds like an excellent idea, Pedro, to automate the FPS capturing! I am still learning a lot of the details of Max/Jitter, so I can't get involved in developing the tests themselves, but I would like to help with the capture/write-to-text-file bit (if needed).

It would be nice if the tests are also doable on older hardware, so we get a good overall view of scores. It is a comparative test anyway; to get any meaning from the numbers, you don't want them to max out too fast. Maybe 1280x720 is fine. And let's do the fullscreen test as well and put the screen resolution in the results. Let's keep the test to one monitor for ease of use?

How shall we accomplish the tests?

dtr's icon

Ok, I guess 1280x720 is suitable, though I insist on a multi-screen test as well (I use 4 projectors and 1 monitor in my main projects). NB: one doesn't need all the display hardware for it; it's enough to make a large render window, and it doesn't matter if it doesn't fit on your one laptop screen. I'll happily make such a patch, with multiple jit.gl.camera objects for the multiple display views and so on, though only after the 1st of November.

Would be great to have the automated logging. I can't contribute much patching till november though.

MrMaarten's icon

DTR, OK, if the multi-screen test can also be performed on a single-screen setup then it's good to include it! I think a test patch like this will be very interesting for future reference (for hardware decisions, among other things), so unless someone else does it first, after Nov 1 is great :)

I'll work something out with the logging and saving, and post when I have something.

phiol's icon

Hi all

Finally got a new Mac and re-ran the benchmark patch, and strangely,
I'm getting very similar results to MrMaarten on the 2nd patch.

Mac Os 10.9.4
fps 96 point mode
fps 27 tri-grid

On my old 2011 mac I got
Mac Os 10.7.5
fps 65-95 point mode
fps 50-60 tri-grid

Weird

MrMaarten's icon

Yeah, it is very weird. It might be a single-thread thing where only the raw GHz matters, or something.

What are the specs of yours?

I also did the test on a 2008 2.7 GHz i7 (Hackintosh) and got 55 fps for tri_grid (70 for points). And my friend's 2010 2.9 GHz Hackintosh got 75 fps!

I also ran the test in the Apple Store and got 70 fps on a new MacBook Pro Retina, I think a 2.5 GHz i7 (I didn't check, because I thought the difference was too big to be explained by 200 MHz). Mine is a 2.3 GHz i7.

Then I demanded a replacement logic board, because I'd already had issues with it that I couldn't really put my finger on (unusual frame drops during VJing). But now the test comes back the same! :(

I haven't had time to go back to the store and test the theory with lower clocked i5 etc.
I think I will go tomorrow to check the other laptops. The test works as a standalone on a USB stick.

phiol's icon

Hi MrMaarten

My specs (a 1-day-old computer):

OS X 10.9.4
2.8 GHz quad-core i7 with Turbo Boost up to 4 GHz
16 GB RAM
Nvidia with 2 GB VRAM

MrMaarten's icon

Well, that is really strange indeed. I just got back from the Apple Store... (I already had this nagging feeling when I got my repair back!)

I tested most of the configurations they had (they didn't have an MBP with a dedicated graphics card; I even ended up asking).

These are the results (tri_grid first, points second):
macbook air i5 2.8
74 fps
105fps

macbook i5 2.5 with dvd
74 fps
104 fps

2.2 i7 iris pro macbook pro retina
80fps
80fps

1.4 i5 air
66 fps
75 fps

Imac 2.7 i5
105 fps
106 fps

-----

So there you have it: not a single one below 60 fps, not even the 1.4 i5.

I think I've got what they call in the industry a lemon, and I am going back for repairs until I also get at least 70 fps ;)

I don't know what else it could be.
(There were a lot of 2.2 i7 MBPrs with Intel Iris graphics, and they all had the same result...)

phiol's icon

A lemon:

Or maybe the way the fps is calculated behaves differently on different machines.
Logically this sounds improbable, but maybe?

This thread was started because I was about to invest $3000 in a computer for graphics.

Sheesh.

thanks for your input mrmaarten

MrMaarten's icon

I don't know how close you live to an Apple Store, but if it's nearby you can see how your computer should perform. Or ask someone you know.

Here in Amsterdam they didn't have the MBPr with a dedicated graphics card in the store. I asked and they said they could show them, but the 'special' room was occupied at that time (I went close to closing time). I am not going back for that (I am going to demand a computer that generates at least 70 fps ;) ), but they are sympathetic about it needing to work and about seeing what can be done.

But for me it is clear: a 1.4 i5 MacBook Air performs better, a 2.2 i7 MacBook Pro performs better, and our laptops sit in between and beyond those. I don't see a reason why they should perform 250% worse unless they're broken. That is the only logical explanation to me ;)

phiol's icon

Alright, thanks for the input, MrMaarten. I'll go to a Mac store and let you know in 2 weeks.

Thanks

phiol's icon

Just tried it on my GF's computer.
It's a 27" iMac:

OS X 10.8.5
3.2 GHz Core i5
8 GB RAM
Nvidia GeForce GT 755M 1 GB graphics card

results
points = 109 fps
tri grid = 34fps

What might be obvious to you but wasn't for me right away is that the size of the jit.pwindow matters.
When I resize the window to the smallest it can go,
I get 105 fps with tri_grid and with points.

What size of jit.pwindow were you using for your test?

Christopher Overstreet's icon

I am thinking of buying the new 5K iMac and am wondering about the AMD card's performance. Hopefully someone can test that or has feedback on AMD cards on the Mac?

MrMaarten's icon

I will be away till Nov 1st, but after that I wouldn't mind going to the Apple Store to run the test on the 5K iMac, or whenever they arrive in the store.

dtr's icon

Peoplez, as stated earlier in this thread and demonstrated by Phiol, the test you're using was not made for, and is not suitable for, this purpose. Already something as simple as the non-fixed window size is throwing it off. It was made to demonstrate a bug. Don't jump to conclusions based on it.

MrMaarten's icon

It's a comparative test: they all have the same window size on opening. So the test is the same, but the results are different. I will do the test again with a fresh install to be sure (though I think Phiol also did that), but even then there shouldn't be that big of a difference.

What else could explain the difference? I am ready to try anything...

And I am looking forward to the other tests ;)

dtr's icon

from pedro earlier in this thread:

Regarding that patch, I agree that it is not a good general benchmark example. It was only meant to showcase a bug/lack of optimization in jit.gl.multiple.
That patch is very CPU limited. If you turn off the jit.gl.mesh geometry rendering you probably won’t notice a huge difference, but if you change the dim of the particle system from 100000 to 10000 you will see the fps go way up. So, the main bottleneck here is the CPU and not the GPU…

MrMaarten's icon

I understand that it is not a general benchmark example. For the iMac 5K sure we wait for the better benchmark.

But it is actually the CPU we are talking about in this case.
For my and PHIOL's laptops it indicates something wrong: all the computers ran the same test with the same window size. And it was our CPU's that were lacking. (The hardware I tested ranged from between 2008 and 2014, all had higher fps in the CPU test). What else then a hardware problem can cause the difference?

Pedro Santos's icon

Hello, again! Here's a better Jitter benchmark utility I just built. It could be a start for a more definitive tool with the help of the community.
It's also theoretically expandable to include other scenes...

For your reference here's the data for my custom-built desktop:
CPU: 97.8
GPU Geometry 1: 110.8
GPU Geometry 2: 98.3
GPU Pixel Shaders: 96.5

Windows 7 (64 bit), Max 6.1.9 (32 bit)
Intel Core 2 Quad Q9400 @ 3.2 GHz, 6GB RAM
AMD HD 4870 1GB RAM

jitter_benchmark-v1.0.maxpat
phiol's icon

Yeah, what's crazy is that by changing the window size (to the smallest) on my GF's iMac i5,
I got 107 fps. When I did the same on my MBP (changed to smallest) I did not get a big difference;
maybe I gained 20 fps.

Also, I've been working on physics patches with stop-motion animation, etc...

The biggest performance differences I've noticed:
- if the patch is in edit mode, I lose 30-40 fps
- on the CPU side, if I scroll so the jit.fpsgui isn't visible, I go from 65% CPU down to 25%.

So, like vade mentioned a while back in his "Movie Optimization Methods" blog post,
found here: http://abstrakt.vade.info/?p=147
He was really right, and not just for movie playback but for physics as well.
He was really right. but not just for movie playback, for physic as well.

Here is one of his statements:

DO NOT create unnecessary UI objects that constantly update – Jitter shares its drawing thread with the Max UI – the more you draw on screen, the less time Jitter has to draw to your window. If you REALLY need all that stuff on your screen blinking, set the screen refresh rate lower in performance options – and use qlim to limit the rate of updating.

No really, DO NOT create unnecessary UI elements, even hidden ones. Max has to keep track of them. Simply hiding them wont do it. Replace number boxes with [float] and [int], etc.

And finally, some behaviour on my new 4-day-old MBP 15":

If I leave the patch running and do not interact with it (just let the animation and physics do their thing),
it weirdly goes from 60% CPU to 28% CPU in Activity Monitor,

as if the tasks are being spread out over different cores.

Anyway, I am also up for benchmarking any patch,
one that would show what my machine can do. I'm too much of a noob to know
how to build such a thing.

thanks all

phiol

phiol's icon

Wow!! thanks Pedro I had not seen your patch. You're the man :-)

mbp 15" (4 days old)
OSX 10.9.4
Processor 2.8 GHz Intel Core i7
Memory 16 GB 1600 MHz DDR3
Graphics NVIDIA GeForce GT 750M 2048 MB

CPU: 109.8
GPU Geometry 1: 6.3
GPU Geometry 2: 26.5
GPU Pixel Shaders: 82.9

dtr's icon

Pedro, you are the man :)

I'll have to take a look at the guts of this later. I'm packing for the start of a tour to Germany and Lithuania tomorrow as I'm typing this, literally.

Desktop system:
i5 4690k 3.5GHz
GA-Z87X-OC mobo
EVGA (Nvidia) GTX670 2GB FTW gfx card
8GB RAM 1600Mhz
Win 7 pro 64 bit
Max 6.1.9 32bit

CPU: 157.9
GPU Geometry 1: 10.6
GPU Geometry 2: 54.3
GPU Pixel Shaders: 245.8

Strange, strange result on GPU Geometry 1 compared to Pedro... Changing the OpenGL readback mode doesn't change anything. Any theories? Is this an AMD vs Nvidia thing?

MrMaarten's icon

Great, Pedro! Maybe we should start a new thread where we post the results?

But here are mine also in the mean time:

Macbook Pro retina
2.3 Ghz i7 4850HQ
16Gb RAM
Geforce 750M 2GB VRAM

CPU: 109.0
GPU Geometry 1: 3.4
GPU Geometry 2: 25.0
GPU Pixel Shaders: 92.6

----
Hackintosh 10.8.5
2.66Ghz i7-920
18GB Ram
Geforce GTX 660 2Gb VRAM

CPU: 89.9
GPU Geometry 1: 8.0
GPU Geometry 2: 40.6
GPU Pixel Shaders: 108.8

-----
Hackintosh 10.9.5
3 Ghz i7 920(?)
8Gb RAM
ATI RADEON HD 5000 1024MB

CPU: 108.7
GPU Geometry 1: 19.9
GPU Geometry 2: 103.0
GPU Pixel Shaders: 108.8

----

Very good to see these comparisons! Still wrapping my head around them...
I'll do some more Apple Store tests when I'm back: also packing for performances in Spain and Portugal...

--

Good luck with your tour, DTR! I saw your performance in Den Bosch once (I believe it was Integration 0.2) and it was really good on many levels!

dtr's icon

I'm really wondering how Pedro's AMD HD 4870 can blow everyone else away tenfold... An Nvidia GTX 670 is supposed to kill it, isn't it?
http://www.videocardbenchmark.net/compare.php?cmp[]=30&cmp[]=35
http://www.hwcompare.com/12594/geforce-gtx-670-vs-radeon-hd-4870-1gb/
My system in general is faster too. I do seem to recall there's one specific area where AMD excels versus Nvidia, GPGPU if I'm not mistaken. That isn't at play here, is it?

Good luck with your tour, DTR! I saw your performance in Den Bosch once (I believe it was Integration 0.2) and it was really good on many levels!

Ha, great, thanks! That would be Integration.03. That version is almost prehistoric to me now :D
Have a good one too!

Pedro Santos's icon

Hi, guys. Your GPU Geometry results are really strange, indeed.
I guess the polygon count of the two geometry scenes is unusually high:
(scene 1 has 1764 instances of a sphere with dim 48 48)
(scene 2 has 3 different gridshapes, each with dim 320 320)

Anyway, compared to yours, my graphics card is ancient. Regarding the MBP, there's the difference of the cards being mobile variants, and they're on OS X.
But for you guys with better desktop cards (and processors) in Windows, the results are really strange. AMD vs NVIDIA? ...

DTR, your assumption about GPGPU processing is correct. The latest Nvidia cards weren't very good at OpenCL processing compared to AMD, but that's not what's at play here... it's pure geometry calls, I guess...

I've added "cache_mode immediate" the jit.gl.gridshape object present in the GPU Geometry 1 test (based on jit.gl.multiple).
Does this make any difference?

jitter_benchmark-v1.01.maxpat
phiol's icon

Here are my results.
mbp 15" (4 days old)
OSX 10.9.4
Processor 2.8 GHz Intel Core i7
Memory 16 GB 1600 MHz DDR3
Graphics NVIDIA GeForce GT 750M 2048 MB

1st version
CPU: 109.8
GPU Geometry 1: 6.3
GPU Geometry 2: 26.5
GPU Pixel Shaders: 82.9

2nd version
CPU: 109.6
GPU Geometry 1: 6.4
GPU Geometry 2: 27.2
GPU Pixel Shaders: 84.9

not much difference

dtr's icon

@pedro: I'm having breakfast before driving off to Berlin as we speak... Will be able to test again only in a week.

(crunchy with benchmarks, the nerd's perfect breakfast ;)

jninek's icon

Lenovo W520
2.2 GHz
16 GB RAM
Nvidia Quadro 2000M
CPU: 121.3
GPU Geometry 1: 38.2
GPU Geometry 2: 367.0
GPU Pixel Shaders: 29.3

Adrian Wyard's icon

Hi all, may I resurrect this thread from two years ago? I need to buy a Windows desktop machine to do reactive realtime visualizations responding to Kinect. Are there any Max/Jitter considerations I should bear in mind, or just get the baddest machine I can afford?

I suppose there might be a standout OpenGL implementation on one particular GPU architecture? Or can I just refer to the specs on any graphics card and expect Jitter performance to reflect those?

I'll guess that Max/Jitter do better with lots of RAM and/or VRAM? What's a good amount if every millisecond counts?

Thanks.

Andro's icon

1 - The Kinect has huge latency regardless of your machine. Like, a really huge delay. Check videos of dancers + Kinect and you'll see what I mean.
2 - The new Kinect is a lot faster; there's still latency, a lot less, but it's still there.
Do you need skeletal tracking? Then you need a Kinect. If you just need silhouettes from infrared data, skip the Kinect and get a super-fast infrared camera and a less powerful PC, then do blob + bounding-box tracking.
Range? 2 to 3 meters? 2 to 6 meters? Following dancers? 1 person? 3 people? Fast movement? Slow movement? Dark space or a lot of ambient light? These questions are more important to answer than the power of the machine. So a bit more info about what you want to do with it might help you more.

Adrian Wyard's icon

Thanks, Andro. My main application is tracking the hands of an orchestra conductor and generating reactive visuals. I've been investigating the latency issues in some other threads, so you'll find discussion and recent timings in the links below, but it boils down to this: between input latency (probably Kinect) and display device latency there's almost no time left for generating the visualization - hence the need for a powerful PC and graphics card that can do something satisfactory in a few milliseconds.

Adrian Wyard's icon

Quick question: I see high end modern gaming rigs come with dual graphics cards - would Max take advantage of that sort of setup?

Jesse's icon

No, as far as I know Max does not take advantage of multiple graphics cards.

The Pascal line of Nvidia graphics cards is quite capable. Commenting on specific performance concerns without more details about the reactive visuals is not possible. I'd generally over-spec your system; you can always upgrade the graphics card later if you find it necessary. To that end, a GTX 1080 is a great starting point.

Pedro Santos's icon
Adrian Wyard's icon

Thanks, I'll post over there. From a quick review of that thread it looks as though I should expect better performance from a Windows machine with a newer graphics card.

Andro's icon

Hi Adrian, glad to see you're getting your research done, but there are a few things I'd like to point out.
- Your GPU being as fast as light itself won't make the Kinect go any faster. The Kinect is the bottleneck, not the computer. I've used it extensively and just ditched it for experiments requiring anything lower than 33 ms. For dancers or fast movement it's just too slow.
- Even if you grab an incredibly low-resolution texture (say 256 x 256) from the Kinect, it's only Max that is under less strain, not your GPU. As far as I know there is no way around this.
- Have you tried the Leap Motion? It doesn't have the greatest range, but it does super-fast and super-accurate sampling of hands. It can get confused by orientation, but if you're only tracking hand position (the Kinect can't track fingers, so it's the same for you as the user) then it's great. Plus it has settings for trading speed against accuracy.
- Even a GTX 760 does everything in a few milliseconds, pretty much the same as a GTX 1070 (the 1080 is faaaaaar too much money for that slight incremental increase in GPU power). Video memory determines how much data your GPU can hold and process at the same time, not how fast it is.
- Fast reactive visuals boil down to efficient programming. Will you use 3D models? Do they have a low or high poly count? Is it only frag shaders on the GPU? Every single one of those questions is more important than which GPU you end up getting. With a 6-year-old MacBook Pro with a 512 MB GPU I had faster reactive visuals than people with 1 GB of GPU memory. Why? I understood what I was doing and trying to achieve.

I hope some of this info will help you out. If you have any more in depth questions please feel free to ask.

Christopher Overstreet's icon

I got a somewhat older desktop PC to do VR stuff and use the Kinect 2. I've been mostly on Macs my whole life, so I know very little about building/upgrading PCs. I am thinking of buying the superclocked GeForce GTX 1060. Any reason I shouldn't? Any cards in the same price range ($260) that I should be looking at?

Thanks! Looking forward to getting back into the VR stuff again!

ChristopherO

Adrian Wyard's icon

Thanks for your comments, Andro. I have considered Leap Motion, but sadly the range is just too short for a typical conductor.

In terms of graphics approach, I'm resigned to being constrained by whatever visuals can be computed and drawn quickly. There may be some head scratching on how to make that look interesting, but I'm not too worried. And yes, to keep it quick I'll need to work with someone who knows what they're doing. I'm also confident that there are visual tricks that can somewhat mask latency.

At this point I'm going to do proof-of-concept tests for both a camera and the Kinect 2. The camera I'm using now is 60 fps, and to be honest, in the (not very representative) tests I've done so far, the lag on the Kinect seems no worse than the camera's, i.e. almost but not quite instantaneous. Do you think I'll ultimately need to go faster than 60 fps?

dtr's icon

GTX1060 would be ample for what most of us can throw at it using Jitter. Jitter patching skills would likely be a much bigger bottleneck.

For reference my performance PC is an i5 4690k with GTX 670 and 8GB RAM. I'm rendering 4x 1280x800 contexts output on 4 projectors, plus a monitoring render window on a 5th screen, 4.1 audio synthesis, Kinect v1 skeleton tracking, sensor gloves. Kinect processes at 30Hz, the sensor gloves' 6 sensors come in at 60Hz, rendering runs at around 100Hz for snappy response.

Kinect skeleton tracking indeed always has noticeable lag. What I do is give my audiovisual material physical dynamics and inertia so that the lag blends with it and the whole gains organic character.
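
One simple way to get that kind of inertia is a one-pole smoother on the incoming joint positions, for example in a [js] (a rough, untested sketch to illustrate the idea, not a recipe):

// rough sketch: one-pole smoothing ("inertia") on an incoming x y z list,
// e.g. a Kinect joint position. Smaller factor = more inertia / smoothing.
inlets = 1;
outlets = 1;

var factor = 0.15;
var state = null;

function list() {
    var v = arrayfromargs(arguments);
    if (state === null || state.length !== v.length) {
        state = v.slice();               // initialise on the first frame
    }
    for (var i = 0; i < v.length; i++) {
        state[i] += (v[i] - state[i]) * factor;
    }
    outlet(0, state);
}

function setfactor(f) {                  // 0 < f <= 1
    factor = Math.max(0.001, Math.min(1., f));
}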