Rendering and HyperThreading
-
@pixero said:
My experience is that 2 cores are faster
But HT doesn't mean two cores - it's one core with two virtual CPUs.
A dual-core at 3GHz != a single-core with HT at 3GHz, even though both list two 3GHz CPUs.
@notareal said:
I'd say give it a test try; usually you can disable hyperthreading from the BIOS. I think hyperthreading might not give a 100% boost compared to actual cores (i7), maybe more like 50-70%. But if it's an old CPU I don't know the result... it might actually be worse.
All the computers are controlled remotely - so accessing the BIOS is not easy, as I'd need to hook up each machine with a monitor, mouse and keyboard.
I've been searching the net - but been unable to work out whether there is an overhead to having HT enabled for single-purpose processes.
I wonder if it's only worth it when you multi-task with different applications.
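As a side note, one thing can be checked without hooking up a monitor: Windows will report how many physical cores versus logical processors it sees, which tells you whether HT is currently on. A rough, Windows-only sketch using GetLogicalProcessorInformation (error handling trimmed, not tested on these particular boxes):

#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformation(NULL, &len);  // first call only reports the required buffer size
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(&info[0], &len)) return 1;

    int cores = 0, logical = 0;
    for (std::size_t i = 0; i < info.size(); ++i) {
        if (info[i].Relationship != RelationProcessorCore) continue;
        ++cores;                                          // one entry per physical core
        for (ULONG_PTR mask = info[i].ProcessorMask; mask != 0; mask >>= 1)
            logical += static_cast<int>(mask & 1);        // logical CPUs belonging to this core
    }
    std::printf("physical cores: %d, logical processors: %d (HT looks %s)\n",
                cores, logical, logical > cores ? "on" : "off");
    return 0;
}
-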
hmm.... missed this the first time I read the article:
http://en.wikipedia.org/wiki/Hyper-threading#Performance
@unknownuser said:
The performance improvement seen is very application-dependent, however when running two programs that require full attention of the processor it can actually seem like one or both of the programs slows down slightly when Hyper Threading Technology is turned on. This is due to the replay system of the Pentium 4 tying up valuable execution resources, equalizing the processor resources between the two programs which adds a varying amount of execution time. (The Pentium 4 Prescott core gained a replay queue, which reduces execution time needed for the replay system. This is enough to completely overcome that performance hit.)
-
But then you've got opposing info:
http://www.pcpro.co.uk/blogs/2010/05/09/hyper-threading/
http://ixbtlabs.com/articles3/cpu/ci7-turbo-ht-p1.html
But they both referred to i7 - not sure if it applies to older P4 HT...
-
@thomthom said:
@pixero said:
My experience is that 2 cores are faster
But HT doesn't mean two cores - it's one core with two virtual CPUs.
A dual-core at 3GHz != a single-core with HT at 3GHz, even though both list two 3GHz CPUs.
@notareal said:
I'd say give it a test try; usually you can disable hyperthreading from the BIOS. I think hyperthreading might not give a 100% boost compared to actual cores (i7), maybe more like 50-70%. But if it's an old CPU I don't know the result... it might actually be worse.
All the computers are controlled remotely - so accessing the BIOS is not easy, as I'd need to hook up each machine with a monitor, mouse and keyboard.
I've been searching the net - but been unable to work out whether there is an overhead to having HT enabled for single-purpose processes.
I wonder if it's only worth it when you multi-task with different applications.
My experience from quad cores and the i7 gives the impression that with the i7 it's better to enable hyperthreading (at least for Thea Render). But of course there may be some other reasons why the i7 works better... before that I was in the AMD camp... so no experience with P4 hyperthreading.
-
Did find this clock-for-clock comparison; the i7 is the best performer. It doesn't help much for comparing older generations... but at least it gives some clue:
http://www.legionhardware.com/articles_pages/clock_for_clock_core_i5core_i7core_2_quad_and_phenom_ii_x4_performance,1.html
Here are some old P4 Prescott/Northwood tests with HT vs non-HT rendering performance. Based on that I would keep HT on with Prescott/Northwood as well:
http://www.hardcoreware.net/reviews/review-195-6.htm
-
As others have commented, it depends on what you're doing with those virtual cores.
Hyperthreading gives each of those virtual cores its own flow control, but they share (for example) floating-point processing resources (i.e. there is physically one unit in the chip that actually does floating point, but it's shared between the HT cores).
So if your software has a mix of logic statements and floating-point crunching, you get a win. If, as is the case with pretty much anything 3D, you're crunching lots of floating point, you'll get less of a win. All the tests I've ever done in the past showed it to be a marginal win, but basically that shows it for what it is: a sop to the DB guys that Intel was courting at the time.
So wrt graphics, keep in mind these cores are virtual. If your floating point units are maxed out doing dot-products, no amount of switching virtual cores will help. Yer need more dilithium crystals, Captain.
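If you want to see this for yourself, a crude scaling test along these lines will show it (just a sketch - the iteration count is arbitrary and it's pure floating point, nothing like a real renderer). On an HT machine, the step from "one thread per physical core" to "one thread per logical core" should show a much smaller gain than the earlier steps:

#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Pure floating-point grind - a stand-in for ray/shading maths.
static double fp_work(std::size_t iterations) {
    double a = 1.0001, b = 0.9999, acc = 0.0;
    for (std::size_t i = 0; i < iterations; ++i) {
        acc += a * b;        // keeps the shared FP unit busy
        a *= 1.0000001;
        b *= 0.9999999;
    }
    return acc;              // returned so the loop isn't optimised away
}

int main() {
    const std::size_t total = 400000000;  // arbitrary, fixed amount of work
    const unsigned max_threads = std::thread::hardware_concurrency();  // logical CPUs
    for (unsigned n = 1; n <= max_threads; ++n) {
        std::vector<double> results(n);
        auto start = std::chrono::steady_clock::now();
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < n; ++t)
            pool.emplace_back([&results, t, total, n] { results[t] = fp_work(total / n); });
        for (auto& th : pool) th.join();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::printf("%2u thread(s): %lld ms\n", n, static_cast<long long>(ms));
    }
    return 0;
}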
Adam
-
But does it add any overhead for a task such as rendering?
-
@thomthom said:
But does it add any overhead for a task such as rendering?
It's a hardware feature. There isn't any cost to speak of.
However, if you're maxed out on floating point, then no amount of multithreading will help you. Indeed, there is a cost associated with scheduling, running and generally managing a thread, but this is down to how efficient your OS is at switching threads and any overhead your application introduces in splitting a task amongst N threads.
Traditionally, Windows has been poor at thread switching - so much so that they introduced yet another concept called a "Fiber" (geddit!), which was meant to be a fast context-switching object... which is what a Thread was meant to be in the first place. slaps head.
And your renderer will have a fixed cost associated with each additional thread it brings to the rendering task.
These 2 things conspire to make for a non-simple answer.. ;-(
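To put a rough number on that overhead, a toy test like this (a sketch only - sizes and thread counts are arbitrary, and a real renderer would use a worker pool rather than spawning raw threads) chops a fixed job into more and more threads; once you're past the number of cores that can actually do the work, the extra threads mostly just add creation and switching cost:

#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// A fixed-size job, chopped into ever more pieces.
static void chunk(double* out, std::size_t iterations) {
    double acc = 0.0;
    for (std::size_t i = 0; i < iterations; ++i)
        acc += static_cast<double>(i) * 1.0000001;  // stand-in for per-pixel work
    *out = acc;
}

static long long run_with(unsigned thread_count, std::size_t total_iterations) {
    std::vector<double> results(thread_count);
    auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < thread_count; ++t)
        pool.emplace_back(chunk, &results[t], total_iterations / thread_count);
    for (auto& th : pool) th.join();
    return std::chrono::duration_cast<std::chrono::milliseconds>(
               std::chrono::steady_clock::now() - start).count();
}

int main() {
    const std::size_t total = 200000000;            // arbitrary fixed amount of work
    const unsigned counts[] = {1, 2, 4, 8, 16, 64, 256};
    for (unsigned n : counts)
        std::printf("%3u threads: %lld ms\n", n, run_with(n, total));
    return 0;
}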
Adam
-
I think my conclusion is that I won't bother to hook up all the machines to mess with the BIOS for this.
cheers.