> I would think (as a layman) that it would be better to just throw more processors at the problem than try
> and hack up the code for better single-processor speed, considering today's (and yesterday's) multi-core
> machines and that the speed losses came as a cost of more features.
IMO that idea is a microcosm of why the world's going right down the toilet... :}
So how bad does one allow the basic rendering inefficiency to become? 50% worse? 100% worse? Unless the code does a
decent job of rendering on a single core, leaning on lots of threads is just being lazy IMO. Come on guys, I thought
the idea of all this sort of thing was to write good code, not just any-old-code-will-do...
This is the same slippery slope that gfx cards have led us down - nobody bothers writing efficient engines any more;
they just let ever-faster hardware boost the speed.
And hey, aren't we trying to save the planet and all that? (at least we were yesterday, anyway.) What we're
effectively saying here is that it's ok to knowingly use significantly more power to achieve the same render time,
when a bit of effort would cut those times by more than a quarter.
I say get the basic rendering system working well first, then think about threading it out. As it stands, what we
have right now is a build where a dual-300 Octane is no better than a single-400 Octane. Not good IMO.
By definition (I would have thought), any extra efficiency in the code is automatically passed on to every thread
when running on multiple cores. That's a major saving in time, energy and hence money.
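To put some rough numbers on that point, here's a minimal sketch (my own illustration, not from the original post) assuming an idealised renderer with perfect parallel scaling. The function name and the 25% figure are just for demonstration:

```python
def render_time(total_work, cores, per_core_speed=1.0):
    """Idealised render time, assuming the work splits perfectly across cores."""
    return total_work / (cores * per_core_speed)

baseline_work = 100.0   # arbitrary work units for one frame
optimised_work = 75.0   # same frame after a 25% single-core efficiency win

for cores in (1, 2, 4):
    before = render_time(baseline_work, cores)
    after = render_time(optimised_work, cores)
    print(f"{cores} core(s): {before:.1f} -> {after:.1f} "
          f"(saved {before - after:.2f})")
```

The 25% reduction holds at every core count, so the single-core optimisation is never "wasted" by adding hardware; the absolute time and energy saved actually grow with each extra machine running the code.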
People often moan about MS apps getting all bloaty. We should be on our guard against freeware software going the same way.