Hardware recommendations
-
this is true; after the discovery of this very useful command (Test.time_display) we don't need big models anymore. It is rather helpful to play with different style settings to find out how they influence the performance.
of course it still makes sense to have some scenes with different poly counts, to check whether performance decreases proportionally to model complexity or whether there are differences between hardware (for example, that one graphics card is exactly as fast as the others with low poly but much faster with high poly). Anyway, my test results with the city model:
(Core2Duo 3.00 GHz, 2 GB RAM, nVidia Quadro FX 1700)
(Hardware Acceleration, Fast Feedback, Anti-Aliasing 4x)
Scene 1: 30.9 fps
Scene 7: 0.4 fps
and in Jackson's Cube model:
(17.1; 16.8) 16.8 fps
ps: nevertheless, we should design a beautiful model that makes the whole process fun to watch. I think we first have to set up such a file, where we mind every factor that is important to know (textures, styles (like profiles), transparency (faster/nicer), low/high poly count, beauty, ...) and then we have to ask a Ruby coder to write an automated script that runs the "Test.time_display" command, saves the result, proceeds to the next scene... and finally displays all the gathered info in a window (like it does now after every test).
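Something along these lines might be a starting point for that script - a rough sketch only, and it assumes Test.time_display returns the elapsed time in seconds (on some builds it may just print to the Ruby Console instead):

    model = Sketchup.active_model
    report = []
    model.pages.each do |page|
      model.pages.selected_page = page   # jump to this scene
      seconds = Test.time_display        # run the built-in timing test (return value assumed)
      report << "#{page.name}: #{seconds} s"
    end
    UI.messagebox(report.join("\n"))
    # or log to a text file instead:
    # File.open("benchmark_results.txt", "w") { |f| f.puts(report) }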
oh yes, is it possible to read out the hardware settings with Ruby (or even the hardware components of the computer)?
-
@plot-paris said:
ps: nevertheless, we should design a beautiful model that makes the whole process fun to watch. I think we first have to set up such a file, where we mind every factor that is important to know (textures, styles (like profiles), transparency (faster/nicer), low/high poly count, beauty
Why? I'm not sure anyone is bothered about how pretty a 6-second fps test is. We know textures slow down a model, we know shadows slow down a model, etc., etc. They have almost no relevance to judging overall how users' PCs and Macs cope with SU orbiting. I'd say the simpler and smaller the file, the better; otherwise we'll just see loads of confused and confusing results.
-
why not put the test in a beautiful wrapping that pleases the eye?
but probably you are right - I am looking at it from an architect's/artist's point of view, and this is a test solely for technical reasons...
-
Wow, I can't believe my suggestion bore fruit so quickly! Thanks, guys!!! The Ruby suggestion sounds great! Incidentally, I am thinking of getting a Quadro FX 1700. What's the biggest model anyone's done with it?
-
Plot-Paris
Are you using an Intel Xeon processor?
-
Using Jackson's model, on the third run using "Test.time_display" typed in the ruby console I got 17.2 fps.
Intel Q6600, DFI P35 "Blood Iron", 4x OCZ 1GB DDR2-800, nVidia FX 3500 256MB (driver: 169.96_quadro_winxp2k_international_whql), programs HDD = WD 36GB Raptor (10000RPM), data HDD = WD 250GB Caviar (7200RPM), 22" Samsung 226BW at 1600x1050. Hardware acceleration and Fast feedback on.
-
@chango70 said:
Plot-Paris
Are you using an Intel Xeon processor?
no, it is the Intel Core 2 Duo E6850
-
I just played around a bit with the cube test model.
-
at first I achieved 18.6 fps (1.8 fps slower than this morning, when my computer was relaxed)
-
then I grouped all the cubes, with no significant difference.
-
then I created a component out of all the cubes. now I got 16.4 fps
-
I made 24 copies of this component and put them on a hidden layer. Now the result was 17.5 fps.
Now I am completely confused. Does anyone have an explanation for that?
-
I'd suggest that there is a degree of random (or seemingly random) error in the test, i.e. having the components on a hidden layer doesn't really affect the results; it's just a bit of +/- either way. The same would explain the difference between your first test (when your computer was relaxed) and the second test.
-
I just did some tests and found out that the consistency increases immensely with the poly count.
some figures:
my city model, scene 1 (3,328 polygons):
the framerate varied from 53.7 fps to 56.9 fps (maximum difference in time: 0.093 seconds)
the same model, scene 2 (180,496 polygons; more than 50x bigger):
the framerate was always 2.5 fps (maximum difference in time: 0.067 seconds)
here we see that the timing was more consistent than in the low-poly scene...
-
@jackson said:
How did you find out about that? That's fantastic!
It came from some random little post on these boards, actually. I think I'd searched for "benchmark" or something, and in an inconspicuous thread about benchmarks someone was just like... um, why don't you just run this ruby? Didn't look like anyone even took note of it at the time.
-Brodie
-
@plot-paris said:
I just did some tests and found out that the consistency increases immensely with the poly count.
some figures:
my city model, scene 1 (3,328 polygons):
the framerate varied from 53.7 fps to 56.9 fps (maximum difference in time: 0.093 seconds)
the same model, scene 2 (180,496 polygons; more than 50x bigger):
the framerate was always 2.5 fps (maximum difference in time: 0.067 seconds)
here we see that the timing was more consistent than in the low-poly scene...
I think that would be my reason for wanting a semi-complex benchmark. I like your idea of having a number of scenes with varying complexity and a script that would run Test.time_display, log the results, cycle to the next scene, etc., and give you a final report at the end (in a txt file would be great). Also, like you said, something that could log your settings along with that would be fabulous.
I'm thinking 8 scenes. The first 4 would be a pretty simple model, which would run the 4 combinations of textures and shadows on/off. The next 4 scenes would be the same thing but with a more complex model (see the sketch after the settings list below for cycling those combinations).
I think something like your city model would be fine, although I think all those punched openings are probably more intensive than necessary for the shadows. Adding textures would let us get a feel for that as well.
I think that would give us a better idea of the effect that the CPU and GPU have on the varying geometry, materials, and shadows.
As far as settings, the following is what I'd consider standard...
GPU Settings
Clock Speed: Default
Fan Speed: 100%
3D Settings: Default
SU Settings
Anti-Aliasing: x0 (or perhaps x4?)
Hardware Acceleration: ON
Display Settings
Resolution: 1200x1024
CPU Settings
External Programs Running: NONE
-Brodie
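For the texture/shadow combinations mentioned above, something like this might work - a rough sketch, assuming the RenderingOptions key "Texture" and the ShadowInfo key "DisplayShadows" (both worth verifying on your build), and that Test.time_display returns elapsed seconds:

    # Sketch only -- the key names are assumptions; list what your build exposes with:
    #   Sketchup.active_model.rendering_options.each_pair { |k, v| puts "#{k} = #{v}" }
    model = Sketchup.active_model
    [[false, false], [true, false], [false, true], [true, true]].each do |tex, shadows|
      model.rendering_options["Texture"]  = tex      # textured faces on/off
      model.shadow_info["DisplayShadows"] = shadows  # shadows on/off
      seconds = Test.time_display                    # assumed to return seconds
      puts "textures=#{tex} shadows=#{shadows}: #{seconds} s"
    end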
-
@unknownuser said:
I'd missed this post before with your benchmark file. I ran the ruby I mentioned (Test.time_display) a few times with shadows and textures on (although I don't think there are any textures) and got an fps between 23.0 and 23.5 in scene 1. In scene 7 I got 0.2 fps, which took an agonizing 404 seconds to cycle through. Even without shadows on I only got 0.5 fps, which took 158 seconds.
-Brodie
I'm beginning to question my sanity. Nothing seems to make sense with my results. On my home computer, which is in every way inferior to my work computer, I actually got better results. On scene 7, for example, my fps was still 0.2 but it took 377s instead of 404s, which is noticeable.
The only thing I can think of is maybe there's a fair-sized difference in performance when running at a lower screen resolution. I'll test that later and see what I get.
-Brodie
-
On the city block test, I got 9.1 fps on scene 1 and 0.2 fps over 451 seconds on scene 7. On the cube test all three runs were generally the same for me: 5.4-5.6 fps and 12.95-13.06 seconds.
Looks like I'm the slow kid on the block with my 4 year old Sony Laptop, 1.73 Ghz, 1Gb, GeForce Go 6200, hardware and feedback turned on and AA at 4x.
-
just to cheer you up...
my 5-year-old laptop with a Celeron processor and 1 GB of RAM achieved 1.7 fps with the cube test!
-
Oh my my, sounds like processor speed is of lower priority...
-
@bellwells said:
Looks like I'm the slow kid on the block with my 4 year old Sony Laptop, 1.73 Ghz, 1Gb, GeForce Go 6200, hardware and feedback turned on and AA at 4x.
The AA will make a huge difference; I'm amazed you have it turned on at all on your laptop. My lappie is coming up on 3 years old and I always have AA turned off - I can't afford the slow frame rate when working, and as much as 4x AA'd lines look lovely, I much prefer the fine crisp aliased lines - I find them much easier to select. Of course I apply AA or resize in PS for presentation images and animations.
-
@jackson said:
@bellwells said:
Looks like I'm the slow kid on the block with my 4 year old Sony Laptop, 1.73 Ghz, 1Gb, GeForce Go 6200, hardware and feedback turned on and AA at 4x.
The AA will make a huge difference; I'm amazed you have it turned on at all on your laptop. My lappie is coming up on 3 years old and I always have AA turned off - I can't afford the slow frame rate when working, and as much as 4x AA'd lines look lovely, I much prefer the fine crisp aliased lines - I find them much easier to select. Of course I apply AA or resize in PS for presentation images and animations.
I'd be interested to see what your fps results would be if you go back and forth between x0 and x4 AA. I ran the script both ways and was shocked to find almost no difference at all (x4 was actually faster, but probably well within the margin of error).
-Brodie
-
just tried it out with the city model, scene 6 (more than 400,000 polygons).
the time difference was 0.5 seconds over a duration of more than a minute.
that means the time improvement when switching from 4x anti-aliasing to 0x was only 0.7%; the framerate was practically the same.
so it doesn't seem to make much difference whether anti-aliasing is switched on or off (strange - I believe I remember switching off AA with a particularly big model and getting much better framerates some months ago...)
-
Tried the cube model on a Dell with dual 1.86 GHz quad-core Xeons and 4 MB cache (basically giving 8 processors) - the firm's rendering machine.
Graphics card: Quadro FX 3450/4000 SDI, 256 MB
Original cube - 14.3 fps
Select faster transparency quality - 21.1 fps
Turn off transparency - 41.6 fps
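For what it's worth, those transparency switches could probably be scripted too - a sketch only, assuming the RenderingOptions keys "ModelTransparency" and "TransparencySort" exist on your build and that lower sort values mean faster quality:

    # Sketch only -- key names and value meanings are assumptions; check with:
    #   Sketchup.active_model.rendering_options.each_pair { |k, v| puts "#{k} = #{v}" }
    ro = Sketchup.active_model.rendering_options
    ro["TransparencySort"] = 0       # assumed: lower value = faster/lower quality
    puts "faster transparency: #{Test.time_display} s"
    ro["ModelTransparency"] = false  # assumed: turns transparency off entirely
    puts "transparency off: #{Test.time_display} s"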