Additive Blending Conclusion - Questions and Answers
The previous conclusion page gave a broad outline of the successes and failures of additive blending across the different combinations of card, system and API. Below are a number of questions and answers that should help clarify the points and issues raised.
Bug or Feature?
It's clear that the methodology employed exploits sw3d in a manner that was not intended and should not have existed. Therefore it's unclear whether this behaviour will remain, or be consistent, in future versions of sw3d, though hopefully a new version will actually support additive blending without the need for a workaround.
Fill Textures - to alpha channel or not?
Initially it was standard practice to use an alpha channel (mask or clear) texture for the fill layers. However, tests have shown that simple plain textures with no alpha channel will also work, the caveat being that the global shader blend value must be set to 0. As yet no tests of the performance difference between the two types of fill texture have been made, but it's assumed that plain textures would be faster. There is still a case for alpha channel fill textures, though: if you want a number of additive textures that all use the same mask, then instead of adding the mask to each image used as a texture, a single alpha mask texture can be used in the fill layers to achieve the same effect.
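As an illustration, here is a minimal Lingo sketch of the plain-texture approach, assuming a card reporting two texture units (so one fill layer plus the additive layer). All member, model and texture names are placeholders, not part of the original demos.

  -- Build an additive shader using a plain (no alpha channel) fill
  -- texture in layer 1 and the additive texture in layer 2.
  w = member("scene") -- the sw3d member (placeholder name)
  s = w.newShader("additiveShader", #standard)
  s.textureList[1] = w.newTexture("fillTex", #fromCastMember, member("plainFill"))
  s.textureList[2] = w.newTexture("glowTex", #fromCastMember, member("glowImage"))
  s.blendFunctionList[2] = #add
  s.blend = 0 -- required when the fill texture has no alpha channel
  w.model("myModel").shaderList[1] = s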
What about the NextShader lingo command?
This is an undocumented and therefore unsupported feature of sw3d, which MM/Intel might remove or change at any time at their discretion. Although no actual documentation exists, it has been deduced that it operates in the following manner. nextShader is a property of a shader: a pointer (child) to another shader in the sw3d member. When a mesh is rendered with the shader, if it points to a nextShader, that shader will be rendered immediately after the first; in this way it's possible to chain together many shaders. However, it appears that the shader states are reset for each nextShader, so it doesn't look as though it has any benefit in regards to additive blending. Although this in itself might be useful to avoid additively blending a texture if you happen to use a shader where the number of layers is the card's max textureunits+1.
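Based on that deduced behaviour, usage would presumably look something like the following sketch; since nextShader is undocumented, the exact semantics here are an assumption, and all names are placeholders.

  -- Chain shaderB so it renders immediately after shaderA on the
  -- same mesh. nextShader is undocumented, so this is deduced usage.
  w = member("scene")
  sA = w.newShader("shaderA", #standard)
  sB = w.newShader("shaderB", #standard)
  sA.nextShader = sB
  w.model("myModel").shaderList[1] = sA -- sB renders straight after sA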
Being unsupported comes at a price: once a shader has a nextShader assigned to it, you cannot remove it. Attempting to set shader.nextShader = 0 or VOID will cause Director to crash fatally. The workaround is to simply delete the actual shader, then rebuild it without the nextShader being set, but it's unclear whether this leaves invalid pointers hanging around, which may also cause crashes.
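In Lingo the workaround amounts to something like this sketch (names again placeholders); note that it recreates the shader rather than ever assigning 0 or VOID to nextShader.

  -- Never do this, it causes a fatal crash:
  --   sA.nextShader = 0
  -- Instead delete and rebuild the shader without a nextShader set.
  w = member("scene")
  w.deleteShader("shaderA")
  sA = w.newShader("shaderA", #standard)
  w.model("myModel").shaderList[1] = sA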
Why does API factor into getting additive support?
The main reason is that DX5, and occasionally OpenGL (see the questions below), does not support multi-texturing, forcing everything to be rendered in a single pass. As getHardwareInfo is context sensitive, the textureunits value returned when running DX5 is always 1, no matter how many units the card actually has. This caused problems because early demos simply grabbed the max textureunits at the beginning of the movie and didn't take into account the difference between DX5 and DX7, which often meant one or the other would fail to produce the AB effect, even though the card could clearly do both.
Is the Mac getHardwareInfo reliable?
In short, no, especially in regard to textureunits, where it seems to always be incorrect, usually returning a value of 8.
Why does getHardwareInfo return different values?
This lingo command is API context sensitive. In DX7, or OpenGL (on PC only), it will correctly report the number of actual textureunits on the card. However, because DX5 only supports one texture unit, under that API it will always return 1.
The startup renderer has been known to affect whether some APIs display additive blending. In particular, starting in DX5 used to cause problems for DX7, but it is believed that this is due to the API context sensitive getHardwareInfo return value for textureunits. Another report stated that starting in OpenGL caused problems with DX5, but this could be due to the same issue.
Really, this just goes to show how important it is to always query getHardwareInfo after every API switch.
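For example, a short Lingo sketch of switching renderer and re-querying; the renderer symbol is one of the standard Lingo values, and the member name is a placeholder.

  -- Switch API, then query getHardwareInfo() again, since its
  -- textureUnits value is context sensitive (DX5 always reports 1).
  getRendererServices().renderer = #directX7_0
  hwInfo = member("scene").getHardwareInfo()
  maxUnits = hwInfo.textureUnits -- only valid for the current API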
Why doesn't self diagnosis work in openGL?
Self diagnosis relies on obtaining an accurate screenshot of the sw3d sprite, in order to compare certain pixels and determine if additive blending is being applied. Alas, on both Macs and PCs, any method of grabbing the screen whilst running OpenGL results in the image being rendered through the OpenGL software mode (AFAIK). In this mode mipmapping/nearfiltering has no effect, so textures are blocky, but worse, it performs like DX5, requiring only a single fill layer plus the additive layer and ignoring the number of textureunits on the card. It may be possible to grab an actual screenshot of the sw3d sprite using an xtra, but this has yet to be tested.
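For reference, the self diagnosis test amounts to something like the following sketch, assuming imaging Lingo can grab the stage; the sprite channel, test point and expected colour are all placeholders.

  -- Grab the sw3d sprite area from the stage and test a pixel that
  -- should only be bright if additive blending is being applied.
  -- Under openGL this grab comes back via software mode, so the
  -- result can be misleading, as explained above.
  grab = (the stage).image.crop(sprite(1).rect)
  testColor = grab.getPixel(point(10, 10))
  additiveOK = (testColor = rgb(255, 255, 255))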
What happens with multiple monitors?
Two things. Generally, OpenGL appears to be disabled, or if enabled it falls back to OpenGL software mode. Also, the getHardwareInfo command can only report on the primary driver/card, so its information may be incorrect. The best solution is to disable all but one monitor.
What happened to my additive blending?
One keen tester discovered that if you have a dcr running with additive blending correctly set up, then open another dcr, the original dcr is liable to lose the blend effect completely. At a guess, this has something to do with all Shockwave/sw3d instances sharing a single 3D API at a time: the other dcr forced sw3d into a different API from the one the original dcr had set up for additive blending to work. The failure may be permanent for the duration of all the Shockwave instances.
Why doesn't it work on card X?
There has yet to be a single card on which the effect reportedly fails; all have at least one API and approach that works. However, this is not to say there never will be a case of it failing, and ignoring future 3D card development, the three most obvious reasons for such a failure are:
1. The user has set their sw 3d render property to 'Always use API X', which overrides and locks out changes made by the dcr/projector. Being unable to change the API may not be a problem if it's already set to the most appropriate one, but if not, then it's impossible to fix from within the movie. The only courses of action are an xtra to fiddle with the user's sw 3d render property (not nice), or alerting them to the issue and explaining how to fix it; a simple detection sketch follows this list.
2. OpenGL is being used, and the user has been forced into OpenGL software mode. This can happen for several reasons, though usually it means their card or setup is not capable of accelerating 3D at all.
3. Multiple monitors can cause several issues, so they are worth considering.
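For case 1, a hedged detection sketch in Lingo: attempt the switch and read the renderer back, on the assumption that a locked preference simply causes the set to be ignored.

  -- Try to switch API; if the renderer is locked the value won't change.
  wanted = #directX7_0
  getRendererServices().renderer = wanted
  actual = getRendererServices().renderer
  if actual <> wanted then
    msg = "The 3D renderer is locked to " & string(actual)
    msg = msg & ". Please set the Shockwave 3D renderer to Auto."
    alert(msg)
  end if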
All material including text, images, source, results and demos are copyright © 1996-2003 Noisecrime Productions.