Thank you for all the insight you have given on this topic - but in all honesty the technical details about graphics and what this compositor is capable of are slightly above my head. I hope someone else was able to take it all in.
**BUT** let me not give up so easily. Let me ask in terms I understand, and let me know if you can answer them.
So let's say I have two *pipes* - I always kind of thought of that as two "screens". Let's say on one screen/pipe I have an OpenGL fullscreen 1920x1200 app running the "atlantis" demo, and on the second screen I have the "swarm" OpenGL screensaver demo running at 1920x1200. Would the compositor combine both screens into a composite of both running at 1920x1200?
Or let's say I have "atlantis" running on one screen at 1920x1200 and "atlantis" running on the other screen at 1920x1200 - will the end result (using the AA software/extension, and whatever else) give me a "nicer" looking "atlantis" @ 1920x1200 out of the compositor?
sorry if these seem very newb and dumb
Yes, that's effectively what a 'graphics pipe' is by my understanding. While the term is loosely used, it is distinctly different from 'graphics pipeline' (which can use multiple pipes o.0), which refers to the entire backend process done via the OpenGL API. A graphics 'pipe' usually refers to a subsystem which can process data for an output, so it tends to have its own frame buffer, video memory and glorified matrix calculator... a GE & DM, or multiple GEs and DMs configured to work as one pipe.
In both examples, two screens are used, so the overall screen real estate up for grabs is 3840x1200, but the compositor handles a max of 1920x1200. Rather than thinking of the configuration outputting to two different screens, think of the pipes outputting the same signal to both screens. Pointless in most cases, and pointless to output to the compositor, since in 'Pixel Average' mode both signals would average to the same image o.0. However, with two pipes, the same frame can be rendered by each pipe at a slightly different time slice between frames. This isn't the same as jittering the frustum and accumulating (ftp://ftp.sgi.com/opengl/contrib/blythe ... de124.html), which must be done by the program (and requires rendering via an accumulation buffer)... it means all programs can be anti-aliased through careful manipulation of the graphics pipes (time slices and swap rates), with the accumulation done by the compositor rather than by the accumulation buffer on a single pipe (a single pipe would require a frame to be rendered 4 times; the compositor can accumulate the renders of 4 different pipes). This means no extra knowledge is required by the program, so it doesn't need to implement the AA, which is done automatically by the 'graphics pipeline'. It knows nothing...
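To make the 'Pixel Average' idea concrete, here's a toy sketch (not SGI's actual implementation - the frame data and function names are invented for illustration) of averaging several sub-frames per pixel, which is all the compositor effectively has to do to produce the AA result:

```python
# Toy model of the compositor's 'Pixel Average' mode: N sub-frames
# (each rendered by a different pipe at a slightly different time
# slice) are averaged pixel-by-pixel into one output frame.
# Frames here are just 2D lists of grayscale values, 0-255.

def pixel_average(frames):
    """Average N equally sized frames pixel-by-pixel."""
    n = len(frames)
    height = len(frames[0])
    width = len(frames[0][0])
    return [
        [sum(frame[y][x] for frame in frames) / n for x in range(width)]
        for y in range(height)
    ]

# Four tiny sub-frames of a moving edge; averaging softens the
# transition, which is the anti-aliasing effect described above.
subframes = [
    [[0, 255], [0, 255]],
    [[0, 255], [0, 255]],
    [[255, 255], [0, 255]],
    [[0, 255], [255, 255]],
]
print(pixel_average(subframes))  # edge pixels land between 0 and 255
```

The point is that the application never sees any of this: each pipe just renders its frame as normal, and the averaging happens downstream in hardware.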
So in your example, a single instance of Atlantis would be run on a configuration of, say, 4 x VPro (configured as a single 'hyperpipe'). The configuration implements the 'graphics pipeline' (Atlantis uses the OpenGL API, which is effectively a black box from the point of view of the Atlantis application, which simply passes the vertices and texture data through the OpenGL API). The configuration will tell all of the pipes in the 'hyperpipe' to render each frame at slightly different time slices, producing 4 slightly different images, which the compositor then accumulates together and displays at the required frame rate. Producing a lovely AA image. This is configured using the sgicombine utility and 'Pixel Average' mode.
The other modes divide the output of the screen as desired (quadrants) or into vertical/horizontal regions. How it does this, idk, but I can take a wild guess and say 1/4 of the screen requires 1/4 of the bandwidth, since culling will eliminate most of the vertices from the scene except for the ones needed to be drawn by that pipe. This means it can fill at 4 x the speed, which means 4 of them can scale to produce the single image at either 4 x the frame rate, or the same frame rate as a single pipe but with 4 x the complexity, since it can shove 4 x the number of vertices through the same pipe (because the scene it's drawing is 1/4 of the size). Of course this is all theoretical; other factors will come into play, such as some regions having far more vertices than others, meaning there won't be a linear scale, but certainly an improvement. AA seems the best use of this thing. Unsure if the end effect warrants the amount of hardware required to achieve it, but nonetheless an interesting setup imo.
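That "won't be a linear scale" point can be sketched with some back-of-envelope arithmetic. Assuming (as a simplification, not anything from SGI's docs) that each pipe's frame time is proportional to the vertices falling in its region, the slowest region gates the composite, so the speedup over a single pipe is roughly total work divided by the biggest region's work:

```python
# Rough model of screen-division scaling: each pipe renders one region,
# the compositor must wait for the slowest pipe, so the speedup over a
# single pipe handling everything is total_work / max(region_work).
# Vertex counts below are made-up illustrative numbers.

def speedup(region_vertices):
    """Theoretical speedup when regions are split across pipes."""
    total = sum(region_vertices)
    return total / max(region_vertices)

# Perfectly even split across 4 quadrants -> the ideal 4x scaling:
print(speedup([1000, 1000, 1000, 1000]))  # 4.0

# One busy quadrant -> sub-linear scaling, as noted above:
print(speedup([2500, 500, 500, 500]))  # 1.6
```

So even a badly unbalanced scene still improves over one pipe, just nowhere near 4 x.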
anyone who tinkers with sgi's isn't dumb in my book
The compositor for sale on eBay is the original device released in 2002. A year later, in 2003, SGI released a revised version (rev B), which features support for a multi-compositor setup that does not exist on the first compositor: https://techpubs.jurassic.nl/library/ma ... 02-001.pdf
I can't see anything about combining 2 compositors, but it does seem to be able to take 4 x pipes provided by O350s rather than 4 x pipes provided by 2 x V-bricks? And it mentions being able to use 3 x pipes o.0