
Tuesday, January 25, 2011

OpenGL 2D Screen Filters in the BGE - Part 1

Hey, there. So today, we'll look into using some simple GLSL (OpenGL Shading Language) functions with the Blender Game Engine to make some 2D screen filters. In theory, you could implement the same idea in other game engines, even ones that use a different shading language, like Microsoft's HLSL (High Level Shading Language).


The principle behind a 2D screen filter is not complex at all. The idea is that every pixel on-screen runs a small script that you write, in which you control that pixel's color. You perform math on the pixel's color, and you can even pass in external variables, like a timer or a percentage. It's actually surprisingly simple. Today, we'll make a really simple 2D screen filter that swaps the color channels around: the green value ends up in the red channel, blue ends up in green, and red ends up in blue. The only real wrinkle is that the 2D filter itself is written in GLSL, which is its own C-like language, even though we communicate with the BGE using Python. However, the code isn't very complex, and if you've seen C for even a little while, there's probably nothing too surprising here. So, let's take a look at the source code:
---------------------------------------------------------------------------------------------------------------------------

from bge import logic

cont = logic.getCurrentController()
obj = cont.owner

filter = cont.actuators['Filter 2D'] # Get the 2D filter actuator

# Notice that while the BGE uses Python, 2D filters (and 3D shaders) are written in GLSL, which is its own C-like language. So, the syntax and style of our script here is Python until
# we start to write the filter script below, where we switch to GLSL (hence the semicolons and //-style comments).

filter.shaderText = """              
    uniform sampler2D bgl_RenderedTexture;

   
    void main(void)
    {
        vec4 color = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st); // This line fetches this pixel's color from the texture that the BGE renders; as the coordinates, it uses
                    // the texture coordinate of each pixel in the viewport - gl_TexCoord[0]. (I believe gl_TexCoord[1] holds the coordinates
                    // scaled to the size of the window instead.)
        gl_FragColor = color.gbra;  // This line sets the final color of each pixel (gl_FragColor) from the color vector, in which r is the red channel,
        // g is green, b is blue, and a is the alpha channel. color.gbra picks the channels in the order g, b, r, a, so the old green value lands
        // in the red channel, blue lands in green, and red lands in blue.
    }
   
"""

if 'init' not in obj:
    cont.activate(filter)   ## The BGE keeps a 2D filter active once the 2D filter actuator has been activated; notice that we only do this once,
    obj['init'] = 1         ## because activating the 2D filter takes a bit of time, so it's far more efficient than activating it every frame

---------------------------------------------------------------------------------------------------------------------------
Nothing here is particularly remarkable except the shader script itself. We just get the 2D filter actuator connected to the Python controller running this script, set its shader text, and then activate the filter once. Simple. So the real work is understanding the shader script itself.

    uniform sampler2D bgl_RenderedTexture;

    void main(void)
    {
        vec4 color = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st); // This line fetches this pixel's color from the texture that the BGE renders; as the coordinates, it uses
                    // the texture coordinate of each pixel in the viewport - gl_TexCoord[0]. (I believe gl_TexCoord[1] holds the coordinates
                    // scaled to the size of the window instead.)
        gl_FragColor = color.gbra;  // This line sets the final color of each pixel (gl_FragColor) from the color vector, in which r is the red channel,
        // g is green, b is blue, and a is the alpha channel. color.gbra picks the channels in the order g, b, r, a, so the old green value lands
        // in the red channel, blue lands in green, and red lands in blue.
    }


The first line declares the rendered texture that the BGE created, which we will be manipulating. It is what's called a sampler2D, or essentially a 2D image. The void main(void) line simply starts the main body of the shader. The first line inside it creates a 4-component vector that contains the unaltered color of the pixel. The next line is also the last one: gl_FragColor = color.gbra. This simply sets the final color of the pixel from the color vector variable. Notice, though, that the channels are rearranged. You see, normally, the color of a pixel is in the format RGBA, where R is red, G is green, B is blue, and A is the alpha channel. By writing .gbra, we pick the channels in the order green, blue, red, alpha - so the old green value ends up in the red channel, blue ends up in green, and red ends up in blue. That's it for the shader script, and we end up with a switched-up color palette, just like that! Download the source blend file here.
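To make the channel mapping concrete, here's a tiny plain-Python model of the .gbra swizzle (no BGE required; the function name is just for illustration):

```python
def swizzle_gbra(color):
    """Model GLSL's color.gbra: pick the g, b, r, a channels, in that order."""
    r, g, b, a = color
    return (g, b, r, a)

# A pure red pixel: its red value lands in the blue channel, so it comes out blue.
print(swizzle_gbra((1.0, 0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0, 1.0)
```

Run a pure green pixel through it and you'll see the green value land in the red channel, matching what the filter does on screen.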

The next tutorial will implement a more complex 2D screen filter. Keep coming back, and I'll keep making more tutorials. Have fun!

12 comments:

  1. Just to let you know, GLSL is its own language - it's not C++. However, it is based on C (not C++).

    Nice to see a 2d filter tutorial though. I hope to see some more. :)

  2. Oops - yeah, I get these mixed up. I'll fix the tutorial. Thanks!

  3. Color the non-code part - it would make reading easier.

  4. Wow, that's cool.

    Keep posting - awaiting more tutorials.

    Really, hats off to you, champ.

  5. Thanks - I'll try to get another tutorial up soon.

  6. Need clarifications:

    Your code is:

    uniform sampler2D bgl_RenderedTexture; #line 1

    void main()
    {
    vec4 color = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st);#line 2
    gl_FragColor = color.gbra;#line 3
    }


    The code is clear, but am I understanding it correctly?
    As per my understanding, line 1 declares a sampler2D (is it a module?) which allows access to

    bgl_RenderedTexture. (Am I correct that this declaration lets us access bgl_RenderedTexture?)

    bgl_RenderedTexture is a BGL-defined variable which always stores the frame drawn on the game screen.
    More clearly: bgl_RenderedTexture stores the 2D frame that is going to be drawn on screen.
    (Am I correct?)

    Now a 2D frame contains 4 values per pixel - RGBA. Therefore we would need a 4-component vec variable to store this frame.



    Line 2:

    vec4 color = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st);

    We create a vector of length 4 to store the RGBA value of the frame.

    color is the name of the variable, of type vec4.

    The value stored in color is texture2D(bgl_RenderedTexture, gl_TexCoord[0].st).

    That means the vec4 variable color stores a 2D texture, an image (= the viewport?).
    This 2D texture is our current frame (bgl_RenderedTexture), and the coordinate system we use is the texture

    coordinate of each pixel in the viewport - gl_TexCoord[0].

    Now please clarify: are the viewport and the current frame different?

    And what is the difference between gl_TexCoord[1] and gl_TexCoord[0]?



    Line 3:

    gl_FragColor = color.gbra;

    A fragment means a pixel along with all the other information necessary to color it. (Am I right?)

    color is a 2D image; now, using color.gbra, we swap its RGBA values.

    Therefore line 3 assigns each fragment's color to be color.gbra.

    I am not clear on line 3.

    Clarification needed:

    Is gl_FragColor used for accessing fragments of the current frame (the viewport in a Blender game)?

    So does this by default access the current frame's fragments?

  7. Can I have your Gmail ID?

    Mine is aonegamer(at)gmail(dot)com

  8. I'll clarify a bit.
    Line 1) sampler2D is a type, and 'bgl_RenderedTexture' is a uniform variable of that type. 'bgl_RenderedTexture' allows us to access the pixels that are rendered in the scene in the BGE. Yes, we would need a 4-component vector to store the value for each individual pixel, unless we pass the sampled value directly to gl_FragColor, which is the final color of the pixel.

    Line 2) The viewport is also the current frame texture. Without a screen filter, we are basically seeing bgl_RenderedTexture with no effects.

    I'm not exactly sure of the difference in texture coordinates in technical terms, but gl_TexCoord[0] corresponds to the pixels themselves, while gl_TexCoord[1] corresponds to the window coordinates. For example, if we wanted to blur only a portion of the screen, like the right half, we would use gl_TexCoord[1], not gl_TexCoord[0].

    Line 3) gl_FragColor refers to the final output color of each individual _pixel_ onscreen. An example would be
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    which would make the whole screen red. gl_FragColor doesn't access the current texture's pixels by itself - we fetch the pixel information ourselves with the texture2D function. Hope that helped.
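    For example, a filter that tints only part of the screen might look something like this (a rough sketch, untested - which texture coordinate set is appropriate may depend on your BGE version):

    ```glsl
    uniform sampler2D bgl_RenderedTexture;

    void main(void)
    {
        vec4 color = texture2D(bgl_RenderedTexture, gl_TexCoord[0].st);
        // Tint only the right half of the screen red; the horizontal
        // coordinate runs from 0.0 (left edge) to 1.0 (right edge).
        if (gl_TexCoord[0].s > 0.5)
            color.r = 1.0;
        gl_FragColor = color;
    }
    ```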

  9. Only one confusion I have:

    As we have applied a transform that operates on one pixel, how is it applied to all the pixels in the viewport?
    gl_FragColor works on one pixel,
    and vec4 stores one pixel,
    so which keyword helps in looping this over all the pixels in the image?

    Does the sampler2D class help in doing so?

  10. The reason the effect we coded operates on all pixels is because this is a 2D filter, and so the code runs for each pixel in the rendered image. There isn't any keyword that's making this work for all pixels - the 2D filter does this for us.
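    In plain-Python terms, what the engine effectively does for us is something like this (a conceptual sketch only - on the GPU the pixels are processed in parallel, not in a loop, and these names are just for illustration):

    ```python
    def apply_filter(image, shader):
        """Run a per-pixel 'shader' function over a 2D list of (r, g, b, a) tuples."""
        return [[shader(pixel) for pixel in row] for row in image]

    # Our channel swap, written as a per-pixel function (GLSL's color.gbra):
    swap = lambda c: (c[1], c[2], c[0], c[3])

    image = [[(1.0, 0.0, 0.0, 1.0)] * 2] * 2  # a 2x2 all-red "image"
    print(apply_filter(image, swap))          # every pixel comes out blue
    ```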
