Monday, March 24, 2014

Touch Based Android Tetris Implementation

Introduction

In my earlier entry, I talked about how I published my first app on Google Play. Even though it was an extremely simple and experimental application, I learned quite a lot from it, especially about the process of getting an app published on Google Play. This time, I felt it was time to do something more interesting in a game-related context and since Tetris is usually the most common answer to "what game should I start with?", well, you guessed it, I chose to implement a Tetris clone.
This app wasn't released on the Play Store, but instead its source has been released... There are probably a lot of things that could have been done better, but I decided to go with what works, instead of over-complicating things. After the first functional code was developed, the reusable parts were somewhat refactored so they can be reused in my future developments. The end result can be seen in the source code. The following sections will mostly be a summary of the concepts I learned during the development of this Tetris game. These concepts range from design patterns to Tetris itself.
Since my phone (and testing device) has a screen resolution of 320 x 480 pixels, the game has been coded with this resolution in mind.
Let's start with some general concepts of an Android application.

Android application: main thread vs rendering thread

An Android application can consist of one or more of the following components: activities, services, broadcast receivers and content providers. The Tetris game consists of a single activity, since this component is the basic choice if you want to provide a window / user interface in your app. As soon as the application is started, a new process is launched and only a single thread, the main thread or UI thread, is used for its execution. Special care must be taken not to block this thread: blocking it for several seconds (about 5) comes with the risk of your app showing an ANR, "Application Not Responding", dialog to the user.
In order to define the look of the activity, you need to create a view. For this app, a GLSurfaceView was chosen since this view allows the developer to use OpenGL to define custom graphics.
When using a GLSurfaceView, it is interesting to notice how the GLSurfaceView.Renderer will be executed in its own thread; let's call this the render thread. This thread is particularly useful for the implementation of the game loop.
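As a minimal illustration, the activity setup could look roughly like the sketch below. TetrisActivity and GameRenderer are illustrative names, not the actual classes of the source; the point is simply that the renderer's callbacks run on their own thread.

public class TetrisActivity extends Activity {

    private GLSurfaceView mGLView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new GLSurfaceView(this);
        // The renderer callbacks (onSurfaceCreated, onSurfaceChanged, onDrawFrame)
        // are executed on a dedicated render thread, not on the main/UI thread.
        mGLView.setRenderer(new GameRenderer(this)); // GameRenderer: hypothetical renderer class
        setContentView(mGLView);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause();   // pause the render thread together with the activity
    }

    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();  // resume the render thread
    }
}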

Game loop

For this application, the game loop will be an infinite loop where the game is continually updated and rendered at the fastest speed allowed by the processor; figure 1 illustrates the concept.
Figure 1: Simplest game loop
The gameobjects will be updated and rendered in the update() and render() functions respectively. This functionality is implemented in the onDrawFrame() function of a GLSurfaceView.Renderer and is executed by the rendering thread.

Since the speed of execution of this gameloop is a function of the processor speed, the game will not run equally fast on different devices. Therefore, a "gametick" has been defined, allowing update() to be called only after a specific amount of time has passed and thus inducing a rather "discrete" behavior of the gameobjects (e.g. a Tetromino will drop one line only after a certain amount of time has passed):

    @Override
    public void onDrawFrame(GL10 gl) {
        //Log.d(TAG, "onDrawFrame() called");
               
        float deltaTS = (System.nanoTime() - mTime) / 1000000000.0f;
        mTime = System.nanoTime();
        mTimeAccS += deltaTS;
        if(mTimeAccS > 0.1f) { //0.1f determines the gametick time (in seconds)
            update(mContext, mTimeAccS);
            mTimeAccS = 0.0f;
        }
               
        render();

    }

We see that each time onDrawFrame() is entered, the elapsed time deltaTS since the previous frame is added to the accumulator mTimeAccS. As soon as mTimeAccS is bigger than the gametick, the logic of updating the gameobjects is executed and the accumulator is reset.

Interactivity - Input Manager - Consumer/Producer

A game requires at least some form of interactivity. In the case of Tetris, the absolute minimum is the rotation and the movement of the Tetrominoes by the player. In our implementation, the Tetromino is rotated when the player slides a finger upward or downward on the screen, and it is moved to the right or left according to the movement of the finger.

These "screen touch events" are centralized by our Input Manager which implements the OnTouchListener interface, that requires the implementation of a callback function, onTouch(). The callback will be called by the main thread when an event is detected, which allows us to store the events in a buffer. Please note however that this buffer will be consumed from the render thread, since this is where our game logic resides. In order to avoid conflicts in reading and writing to the buffer, a double buffering scheme with a synchronization mechanism has been put in place:
  • Only the buffer that is referenced by "ActiveBuffer" will be written with events in the main thread (producer).
  • getMotionEventBuffer() will be called from the render thread and will change the "ActiveBuffer" to reference the other buffer and will return a reference to the buffer that was previously active. The render thread will "consume" these events.
  • The synchronization between the 2 threads is performed by an intrinsic lock.
You can find the code for the input manager below:
public class InputManager implements OnTouchListener {
    private static final String TAG = "INPUTMANAGER";
  
    private List<MotionEvent> mMotionEventBuffer1;
    private List<MotionEvent> mMotionEventBuffer2;
    private List<MotionEvent> mMotionEventBufferActive;
          
    public InputManager() {
        mMotionEventBuffer1 = new ArrayList<MotionEvent>();
        mMotionEventBuffer2 = new ArrayList<MotionEvent>();
        mMotionEventBufferActive = mMotionEventBuffer1;
    }
  
    @Override
    public boolean onTouch(View v, MotionEvent e){
        synchronized(this) {
            if(e.getAction() == MotionEvent.ACTION_DOWN) {
                Log.d(TAG, "ACTION_DOWN");
            } else if(e.getAction() == MotionEvent.ACTION_MOVE){
                Log.d(TAG, "ACTION_MOVE");
            } else if(e.getAction() == MotionEvent.ACTION_UP){
                Log.d(TAG, "ACTION_UP");
            }
            mMotionEventBufferActive.add(MotionEvent.obtain(e));
        }
        return true;
    }
  
    public void clear(){
        synchronized(this){
            mMotionEventBuffer1.clear();
            mMotionEventBuffer2.clear();
        }
    }
  
    public List<MotionEvent> getMotionEventBuffer() {
        synchronized(this) {
            if(mMotionEventBufferActive == mMotionEventBuffer1){
                mMotionEventBufferActive = mMotionEventBuffer2;
              
                return mMotionEventBuffer1;
            } else {
                mMotionEventBufferActive = mMotionEventBuffer1;
              
                return mMotionEventBuffer2;
            }
        }
    }
}
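On the render thread, consuming the buffered events could then look something like the following sketch. The field mInputManager and the method handleMotionEvent() are illustrative names, not the actual game code.

        // Executed from the render thread (e.g. inside update()):
        List<MotionEvent> events = mInputManager.getMotionEventBuffer();
        for (MotionEvent e : events) {
            handleMotionEvent(e); // e.g. rotate or move the current Tetromino
            e.recycle();          // the events were copied with MotionEvent.obtain()
        }
        events.clear();           // empty the buffer before it becomes the active buffer again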

GameScenes

The game has been subdivided into 2 gamescenes. A gamescene is nothing more than a set of gameobjects, where every gameobject can be updated and rendered. There is only 1 gamescene active at each moment. Hereafter an overview of the different gamescenes:
  • The gamescene, used for the actual game, consists of the following gameobjects (this can be seen in figure 3 on the left):
    • A background texture: this texture does not change and is always used as the background.
    • A "tetris-grid": The screen area was subdivided in a ficitional grid of 20 by 30 squares (so each square is 16 by 16 pixels big). The playing field were the Tetrominoes fall down is what I call, the tetris-grid, it is 10 cells wide and 22 cells high. In the code this is represented by an array of shorts. An empty array represents an empty playing field; all cells are empty.
    • A moving Tetromino: a Tetris block that is being manipulated on the tetris-grid. Each gametick, the Tetromino drops a single cell. As soon as it hits the bottom or a block underneath it, the block reaches its end of life and the corresponding cells of the tetris-grid are marked as occupied.
    • The next Tetromino: well, the name pretty much says it all.
  • The game-over screen is reached as soon as the "next Tetromino" can no longer become a "moving Tetromino" (it immediately collides with an occupied cell of the tetris-grid as soon as it comes onto the grid). This gamescene consists of:
    • A background texture
    • A "Game Over" texture
    • A "Restart?" texture. This texture will actually behave as a button, since a touch event on the "Restart?" texture will be interpreted by the gamescene as a "click".

Figure 3: on the left, how the game looks when it is running; on the right, the game-over screen



Tetromino implementation

It was interesting to realize that a Tetromino can be abstracted as a 4x4 grid where some fields are occupied. 

Figure 2: Tetrominoes and their orientations
We can see that instead of writing transformation matrices for the rotation of the Tetrominoes, 4 states are sufficient to describe the rotation of each block. As a disadvantage, we lose the genericity of the solution and the rotation states of each block need to be hardcoded.
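As a hedged illustration of this idea (the encoding in the actual source may differ), the four rotation states of a piece can simply be stored as hardcoded 4x4 grids, and rotating becomes nothing more than stepping to the next state:

    // The S piece as an example: four hardcoded rotation states, each a 4x4 grid
    // where 1 marks an occupied cell (for the S piece, states 2 and 3 repeat 0 and 1).
    private static final short[][][] S_PIECE_STATES = {
        { {0, 0, 0, 0},
          {0, 1, 1, 0},
          {1, 1, 0, 0},
          {0, 0, 0, 0} },   // state 0

        { {0, 1, 0, 0},
          {0, 1, 1, 0},
          {0, 0, 1, 0},
          {0, 0, 0, 0} },   // state 1

        { {0, 0, 0, 0},
          {0, 1, 1, 0},
          {1, 1, 0, 0},
          {0, 0, 0, 0} },   // state 2

        { {0, 1, 0, 0},
          {0, 1, 1, 0},
          {0, 0, 1, 0},
          {0, 0, 0, 0} }    // state 3
    };

    private int mRotationState = 0;

    private void rotate() {
        // Rotating is just advancing to the next hardcoded state.
        mRotationState = (mRotationState + 1) % 4;
    }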

Block rotation and movement requires collision detection

In order to evaluate new positions and rotations of a Tetromino (e.g. due to a timer tick or input), a rather simple collision detection scheme was implemented. Its pseudocode can be seen below:
1. Store the current Tetromino state
2. New Tetromino state equals Tetromino state after rotation / movement
3. If Tetromino moves out of the playing field or the Tetromino collides with another Tetromino
    3a. Restore the Tetromino state from 1
4. Else
    4a. Keep the new Tetromino state
Basically, it comes down to: "check whether the updated state would cause a collision".
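In Java, the pseudocode above could be translated into something like the following sketch, reusing the grid constants and the isOccupied() helper sketched earlier. The Tetromino accessors are illustrative names, not the actual API of the source:

    private void tryMoveOrRotate(Tetromino piece, int deltaCol, int deltaRow, int deltaRotation) {
        // 1. Store the current Tetromino state.
        int oldCol = piece.getCol();
        int oldRow = piece.getRow();
        int oldRotation = piece.getRotation();

        // 2. The new state equals the state after the movement / rotation.
        piece.setPosition(oldCol + deltaCol, oldRow + deltaRow);
        piece.setRotation((oldRotation + deltaRotation) & 3); // wrap around the 4 rotation states

        // 3. If the Tetromino leaves the playing field or overlaps an occupied cell...
        if (collides(piece)) {
            // 3a. ...restore the state stored in step 1.
            piece.setPosition(oldCol, oldRow);
            piece.setRotation(oldRotation);
        }
        // 4. Otherwise the new state is simply kept.
    }

    private boolean collides(Tetromino piece) {
        for (int y = 0; y < 4; y++) {
            for (int x = 0; x < 4; x++) {
                if (!piece.isCellFilled(x, y)) {
                    continue; // empty cell of the piece's 4x4 grid
                }
                int col = piece.getCol() + x;
                int row = piece.getRow() + y;
                if (col < 0 || col >= GRID_WIDTH || row < 0 || row >= GRID_HEIGHT) {
                    return true; // outside the playing field
                }
                if (isOccupied(col, row)) {
                    return true; // overlaps an already occupied cell of the tetris-grid
                }
            }
        }
        return false;
    }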

Game state and state transitions between gamescenes

Since there can be only 1 gamescene active at a given moment in time, the game keeps track of a state that identifies which gamescene is currently active.
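A minimal sketch of this bookkeeping inside the renderer's update() could look as follows; the scene fields and their methods are illustrative names, not the actual API of the source:

    private enum GameState { PLAYING, GAME_OVER }

    private GameState mState = GameState.PLAYING;

    private void update(Context context, float deltaS) {
        switch (mState) {
            case PLAYING:
                mGameScene.update(deltaS);
                if (mGameScene.isGameOver()) {
                    mState = GameState.GAME_OVER; // switch to the game-over scene
                }
                break;
            case GAME_OVER:
                mGameOverScene.update(deltaS);
                if (mGameOverScene.restartRequested()) {
                    mGameScene.reset();
                    mState = GameState.PLAYING;   // restart the actual game
                }
                break;
        }
    }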

Rendering

Basically 2 concepts were sufficient to render this Tetris game:
  • Render a colored square
  • Render a textured rectangle with blending
Rendering a colored square was used to render an entire Tetromino, since each block can be subdivided into several squares. If you have a look at the figure with the block rotation states, you can see that the decision whether a square of the grid needs to be rendered simply depends on a boolean value: the given field is either filled or it isn't.
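Rendering such a colored square with the OpenGL ES 1.x API (the GL10 parameter already seen in onDrawFrame()) could look roughly like the sketch below. The buffer setup and method names are illustrative, and an orthographic projection is assumed to have been set up elsewhere:

    private FloatBuffer buildSquare(float x, float y, float size) {
        float[] vertices = {
            x,        y,           // bottom-left
            x + size, y,           // bottom-right
            x,        y + size,    // top-left
            x + size, y + size     // top-right
        };
        FloatBuffer fb = ByteBuffer.allocateDirect(vertices.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(vertices).position(0);
        return fb;
    }

    private void drawColoredSquare(GL10 gl, FloatBuffer square, float r, float g, float b) {
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glColor4f(r, g, b, 1.0f);                      // one flat color for the whole square
        gl.glVertexPointer(2, GL10.GL_FLOAT, 0, square);  // 2D positions
        gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);    // two triangles as a strip
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
    }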
Rendering a textured rectangle with blending was used to create the visualization of the button and the "Game Over" text. Nothing fancy as far as the textures go, though; just some simple programmer art to understand the concept of creating a texture with transparency.

Some random thoughts - Further exploration

  • A textured rectangle can also be seen as a sprite. However the sprites used here are static. Dynamic sprites might be an interesting feature!
  • Would it be possible to put another Android view on top of the GLSurfaceView?
  • Only 1 screen resolution is supported - how to support multiple screen resolutions?
  • It might be interesting to have some means to render text from a texture atlas?

Conclusion

I finally managed to write my first game with a custom game engine for Android. A lot of interesting subjects were explored and a lot was learned from this implementation. In line with this post on Hobby GameDev, I certainly got some ideas for future exploration! Now onto a more innovative implementation!

References

Since there is no point in reinventing the wheel (even though I created my custom game engine, but hey, as a hobby, I can indulge myself), several references were used during this work.

Wednesday, December 11, 2013

Ant Colony Live Wallpaper - First Android App - Post-Mortem

Introduction

It all started with a Samsung Galaxy Young S6310 I bought myself after the plain old cellphone decided to die on me. Our kids soon discovered the gaming possibilities of the phone: My eldest [5] got rather interested in this side-scrolling adventure game called "wind-up knight" by Robot Invader and my youngest kid [3], actually 2nd youngest kid now, enjoys himself a lot with "Bubble burst" from Androgames.
It has been pretty cool to see how our eldest explains and even assists his younger brother in "wind-up knight", which still seems to be a bit more challenging than his usual "Bubble Burst". Anyway, I guess this experience made me want to have a look at the Android platform and I decided to write a rather small application based on some simple concepts: I decided to make a live wallpaper with particles, which resulted in Ant Colony Live Wallpaper.

Some reference material

It is always interesting to have some reference material and since I'm getting used to reading books on Kindle, I've gotten the Kindle version of "Beginning Android Games, 2nd edition".
If you have a look at the Android statistics, it also shows that today, 1/12/2013, over 98% of Android devices support OpenGL ES 2.0, so a book about OpenGL ES 2.0 seemed a must have as well.

Live wallpaper

Android allows you to develop an animated wallpaper. However, it doesn't seem to support OpenGL out of the box and that is what drove me to use glwallpaperservice. It basically boils down to calling your draw-call in the onDrawFrame() function in a class inherited from GLSurfaceView.Renderer, which in turn is set as Renderer in a GLEngine object. The latter is instantiated in onCreateEngine in a GLWallpaperService object.
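Based on the description above, wiring things together looks roughly like the sketch below. Treat this purely as an assumption-laden outline: the concrete class names (AntColonyWallpaperService, AntColonyEngine, AntColonyRenderer) are illustrative, and the exact signatures offered by glwallpaperservice may differ per version of the library.

    // Rough outline only; the glwallpaperservice API details are assumptions, not verified here.
    public class AntColonyWallpaperService extends GLWallpaperService {
        @Override
        public Engine onCreateEngine() {
            return new AntColonyEngine();
        }

        class AntColonyEngine extends GLEngine {
            AntColonyEngine() {
                // The renderer's onDrawFrame() is where the actual drawing happens.
                setRenderer(new AntColonyRenderer());
                setRenderMode(RENDERMODE_CONTINUOUSLY);
            }
        }
    }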

AdMob

Since the app-market is pretty well known for its advertisement banners, I also decided to include an AdMob ad in my application. For a live wallpaper, the most convenient option was to include it in the live wallpaper's settings screen. Integrating this was no issue at all.

Google play

Well, publishing the app on Google Play went fairly smoothly as well. I guess it took about 30 minutes to get the app published after I signed up (and paid the one-time $25 sign-up fee), and this only because I had to take additional screenshots, since it wasn't immediately clear to me that I had to upload at least 3 of them.

Next up

I did enjoy writing this little app for the Android platform and the most interesting thing for me was that I've gotten some experience with publishing an app on Google play and that I have had a chance to learn something about Android development. Needless to say, it was quite an eye-opener to see how easy it actually was to get an app out there! Next up will be a little touch-based Tetris clone with a custom written game engine.
Also, there was some pretty exciting news earlier on about the Oculus Rift coming to Android. Imagine a device like the NVIDIA Shield being patched into the Rift... Looks promising to me anyway.

Saturday, August 17, 2013

The Graphics Pipeline

Tutorial - Win32 and OpenGL 4.3 - Part3

1. Introduction

This article is mainly intended to give some introductory background information about the graphics pipeline in a triangle-based rendering scheme and how it maps to the different system components. We'll only cover the parts of the pipeline that are relevant to understand the rendering of a single triangle with OpenGL (which will be covered in a later post).

2. Graphics Pipeline 

The basic functionality of the graphics pipeline is to transform your 3D scene, given a certain camera position and camera orientation, into a 2D image that represents the 3D scene from this camera's viewpoint. We'll start by giving an overview of this graphics pipeline for a triangle based rendering scheme in the following paragraph. Subsequent paragraphs will then elaborate on the identified components.

2.1. High-level Graphics pipeline overview

We'll discuss the graphics pipeline from what can be seen in figure 1. This figure shows the application running on the CPU as the starting point for the graphics pipeline. The application will be responsible for the creation of the vertices and it will be using a 3D API to instruct the CPU/GPU to draw these vertices to the screen.
We'll typically want to transfer our vertices to the memory of the GPU. As soon as the vertices have arrived on the GPU, they can be used as input to the shader stages of the GPU. The first shader stage is the vertex shader, followed by the fragment shader. The input of the fragment shader will be provided by the rasterizer and the output of the fragment shader will be captured in a color buffer which resides in the backbuffer of our double-buffered framebuffer. The contents of the frontbuffer of the double-buffered framebuffer are displayed on the screen. In order to create animation, the front- and backbuffer will need to swap roles as soon as a new image has been rendered to the backbuffer.

Figure 1: Functional Graphics Pipeline


2.2. Geometry and primitives

Typically, our application is the place where we want to define the geometry that we want to render to the screen. This geometry can be defined by points, lines, triangles, quads, triangle strips... These are so-called geometric primitives, since they can be used to generate the desired geometry. A square, for example, can be composed out of 2 triangles and a triangle can be composed from 3 points. Let's assume we want to render a triangle: you can then define 3 points in your application, which is exactly what we'll do here. These points will then reside in system memory. The GPU will need access to these points; this is where a 3D API, such as Direct3D or OpenGL, comes into play. Your application will use the 3D API to transfer the defined vertices from system memory into the GPU memory. Also note that the order of the points cannot be arbitrary. This will be discussed when we consider primitive assembly.

2.3. Vertices

In graphics programming, we tend to add some more meaning to a vertex than its mathematical definition. In mathematics you could say that a vertex defines the location of a point in space. In graphics programming however, we generally add some additional information. Suppose we already know that we would like to render a green point, then this color information can be added. So we'll have a vertex that contains location as well as color information. Figure 2 clarifies this aspect, where you can see a more classical "mathematical" point definition on the left and a "graphics programming" definition on the right.

Figure 2: Pure "mathematics" view on the left versus a "graphics programming" view on the right
2.4. Shaders - Vertex shaders

Shaders can be seen as programs, taking inputs and transforming them into outputs. It is interesting to understand that a given shader is executed multiple times in parallel for independent input values: since the input values are independent and need to be processed in exactly the same way, we can see how the processing can be done in parallel.
We can consider the vertices of a triangle as independent inputs to the vertex shaders. Figure 3 tries to clarify this with a "pass-through" vertex shader. A "pass-through" vertex shader will take the shader inputs and pass them to its output without modifying them: the vertices P1, P2 and P3 from the triangle are fetched from memory, and each individual vertex is fed to a vertex shader instance, all of which run in parallel. The outputs from the vertex shaders are fed into the primitive assembly stage.

Figure 3: Clarification of shaders
2.5. Primitive assembly

The primitive assembly stage will break our geometry down into the most elementary primitives such as points, lines and triangles. For triangles it will also determine whether they are visible or not, based on the "winding" of the triangle. In OpenGL, a counter-clockwise wound triangle is considered front-facing by default and will thus be visible. Clockwise wound triangles are considered back-facing and will thus be culled (removed from rendering) when back-face culling is enabled.

2.6. Rasterization

After the visible primitives have been determined by the primitive assembly stage, it is up to the rasterization stage to determine which pixels of the viewport will need to be lit: the primitive is broken down into its composing fragments. This can be seen in figure 4: the cells represent the individual pixels; the pixels marked in grey are the pixels that are covered by the primitive and indicate the fragments of the triangle.


Figure 4: Rasterization of a primitive into 58 fragments
We see how the rasterization has divided the primitive into 58 fragments. These fragments are passed on to the fragment shader stage.

2.7. Fragment shaders

Each of these 58 fragments generated by the rasterization stage will be processed by fragment shaders. The general role of the fragment shader is to calculate the shading function, which is a function that indicates how light will interact with the fragment, resulting in a desired color for the given fragment. A big advantage of these fragments is that they can be treated independently from each other, meaning that the shader programs can run in parallel. After the color has been determined, this color is passed on to the framebuffer.

2.8. Framebuffer

From figure 1, we already learned that we are using a double-buffered framebuffer, which means that we have 2 buffers, a frontbuffer and a backbuffer. Each of these buffers contains a color buffer. Now the big difference between the frontbuffer and the backbuffer is that the frontbuffer's contents are actually being shown on the screen, whereas the backbuffer's contents are basically (I'm neglecting the blend stage at this point) being written by the fragment shaders. As soon as all our geometry has been rendered into the backbuffer, the front- and backbuffer can be swapped. This means that the frontbuffer becomes the backbuffer and the backbuffer becomes the frontbuffer.
Figure 1 and figure 5 represent these buffer swaps with the red arrows. In figure 1, you can see how color buffer 1 is used as the color buffer for the backbuffer, whereas color buffer 2 is used for the frontbuffer. The situation is reversed in figure 5.

Figure 5: Functional Graphics Pipeline with swapped front- and backbuffer
This last paragraph concludes our tour through the graphics pipeline. We now have a basic understanding of how vertices and triangles end up on our screen.

3. Further reading


If you are interested in exploring the graphics pipeline in more detail and reading up on, e.g., other shader stages or the blending stage, then, by all means, feel free to have a look at this.
If you want to have an impression of the OpenGL pipeline map, click on the link.
This article is also maintained on Gamedev.net.

Wednesday, July 3, 2013

Selecting your graphics card with NVIDIA Optimus

Introduction to Win32 and OpenGL 4.3 - Part2

1. Summary

This is part 2 of the "Introduction to Win32 and OpenGL 4.3" series. In this post, we'll have a look at NVIDIA Optimus and selecting your Graphics card: we'll want to make sure we have a graphics card that supports OpenGL 4.3 in the first place!

2. OpenGL 4.3

2.1. Laptop

When I wanted to get OpenGL 4.3 running, I was immediately confronted with the presence of an integrated and a discrete graphics card on my laptop: my application kept returning an OpenGL 4.0 context! It turns out that the answer was provided by glewinfo.exe, a program provided with GLEW.
When we look at the "NVIDIA Control Panel", we see that the preferred graphics processor has been set to "Auto-select", which basically means that the integrated graphics card will be selected.

Preferred Graphics Processor is set to Auto-select

We can confirm the selected graphics card by running glewinfo.exe. A part of the glewinfo.exe output can be found below:

---------------------------
    GLEW Extension Info
---------------------------

GLEW version 1.9.0
Reporting capabilities of pixelformat 3
Running on a Intel(R) HD Graphics 4000 from Intel
OpenGL version 4.0.0 - Build 9.17.10.2843 is supported

Our assumption is confirmed: the integrated graphics processor, in this case an "Intel HD Graphics 4000" GPU, was selected. However, we see that the supported OpenGL version is limited to 4.0! To date (25/06/2013), I haven't found a driver that enables OpenGL 4.3 for this GPU. When we set the preferred graphics processor to High-Performance NVIDIA processor, as can be seen in the following picture, we will select the NVIDIA graphics card:

Selecting the high-performance NVIDIA processor
Indeed, glewinfo returns the following:

---------------------------
    GLEW Extension Info
---------------------------

GLEW version 1.9.0
Reporting capabilities of pixelformat 1
Running on a GeForce GT 630M/PCIe/SSE2 from NVIDIA Corporation
OpenGL version 4.3.0 is supported

We can see that the NVIDIA graphics card was selected; moreover, OpenGL 4.3 is supported on this card! (Please see the NVIDIA website for the graphics cards and minimum driver version required for OpenGL 4.3.)
Now, in order to save precious battery power of your PC, you most likely don't want to enable the NVIDIA graphics card by default, nor do you want to ask the user of your application to select the graphics card manually. So there must be a way to do this programmatically. Indeed, NVIDIA provides the answer by its NVIDIA Optimus technology.
It turns out that it doesn't need to be more complex than exporting the following variable in order to select the NVIDIA card from within your application:

 extern "C" {  
      _declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;  
 }  

Some other possibilities are explained in more depth here.

2.2. Desktop 

OpenGL 4.3 was relatively straightforward to get up and running on my desktop. I just needed the latest driver for my NVIDIA GT 640, which unlocked OpenGL 4.3.

3. Future post

Now that we have a graphics card and we are certain that it supports OpenGL 4.3, we'll set up our OpenGL 4.3 context in our Win32 application in the next post.

Monday, June 17, 2013

GLEW and a basic Win32 Application

Introduction to Win32 and OpenGL 4.3 - Part1

1. Introduction.

This blog post is split into multiple parts and will summarize the findings from my early experiments and programming with OpenGL 4.3 in a Win32 environment. The reader will find some detailed explanations about setting up a Win32 application with OpenGL as the 3D rendering API. More specifically, OpenGL will be used to render a Gouraud-shaded cube. This blog post is mainly intended as a tutorial/how-to.

Hereafter some details of the environment that was used:
  • OS: Windows 7 64-bit
  • GPU: 
    • Nvidia GT 630M
    • Nvidia GT 640
  • Visual Studio express 2010
This blog post limits itself to a 32-bit application and its debug build.

2. Preparing the Visual Studio project.

Before we start implementing and explaining the code, we'll set up the required dependencies for this application. Actually, the only dependency that requires some minimal effort is GLEW. The other dependency, OpenGL, can just be set as an additional import library in the application's project settings.

GLEW, the OpenGL Extension Wrangler Library.

OpenGL relies on extensions to extend the functionality of its core. In order to spare you the work of loading all extensions yourself, GLEW is there to help. I decided to discuss the static library; you could use the shared library as well, but I preferred to compile the GLEW code into the application in order to avoid an additional DLL.
In order to integrate GLEW in the application, I felt it was easiest to download the source, unzip it, make a copy into C:\Program Files (x86)\glew-1.9.0\ and then just compile the glew_static project, found in C:\Program Files (x86)\glew-1.9.0\build\vc10\. This results in a static 32-bit debug library glew32sd.lib in C:\Program Files (x86)\glew-1.9.0\lib\.

This last path will thus need to be referenced in your project's "Additional Library Directories" as can be seen in the figure below:

Referencing glew32sd.lib in "Additional Library Directories"
Since the application will be using a static library for GLEW, we need to define the preprocessor definition GLEW_STATIC in our application.

Defining the preprocessor definition GLEW_STATIC
In order to access the functionality offered by GLEW, which is basically your OpenGL rendering calls, you'll need to include GL\glew.h. Therefore, you'll need to add C:\Program Files (x86)\glew-1.9.0\include\ as an Additional Include Directory.

Setting the "Additional Include Directory"
More information on the compilation, installation and usage of GLEW can be found on their website.

OpenGL

OpenGL is directly provided by Windows in the default Visual Studio include paths, so you'll just need to set OpenGL32.lib as an additional dependency.

3. Basic Windows application.

A basic window application requires you to write some minimal code for the creation of the window. However, in order to make the window interactive, we also need to take the message-driven architecture of Windows into account, i.e. we'll need to create a message loop to treat the messages that Windows queues for our application.

We start by creating the window, where the rendering will take place:
  • First, you'll have to instantiate a window class, WNDCLASS
    • Among other things, this class allows you to define the window procedure, WNDPROC. This callback function will be called every time we dispatch a message to our window-to-be. Please refer to "Writing the Window Procedure" for some more in-depth information.
  • Then, you'll need to register this class with a call to RegisterClass().
  • Finally, you can create your window with CreateWindow(); it's important to note that this function will return a handle to your window. Then you'll want to show and update your window.
Once we have correctly initialized our window, we can move on to the creation of the message loop. The message loop will actually be the place where your program will be spending most of its time. You can find hereafter an example of such a message loop:

 while(msg.message != WM_QUIT)  
 {  
      if(PeekMessage(&msg, 0, 0, 0, PM_REMOVE))  
      {  
           TranslateMessage(&msg);  
           DispatchMessage(&msg);  
      }  
      else  
      {  
           //Do Other stuff  
      }  
 }  

The important functions here are PeekMessage() and DispatchMessage(); TranslateMessage() is mentioned for completeness.
  • PeekMessage(): This function will look in the queue to verify if messages are available.
  • TranslateMessage(): translates virtual-key messages into character messages, which are posted back to the message queue.
  • DispatchMessage(): if the message on the queue is intended for the window we created, then the corresponding window procedure we created earlier will be called. 
Other code can come in the else-branch of the if-else clause. This way, if no messages are available on the queue, we can let the processor handle other work (e.g. render calls). Now that we have covered a basic Windows application, we are ready to move on to the creation of an OpenGL rendering context.

4. Closure
This article is already getting a bit long, so I'll wrap it up here now that we have some insight into our basic Windows application and the required dependencies. The second part of this article will probably be finished somewhere next week. We'll have a look at initializing OpenGL 4.3, GLEW, and vertex and fragment shaders. Feel free to leave some comments. Have fun coding and experimenting!

Saturday, January 26, 2013

Pixel shader expecting shader resource view

I have been experimenting with a game engine and 1 "Ubershader" as an effect for all materials. The "Ubershader" enabled several features like diffuse, specular (Blinn and Phong models), normal maps, texture maps... for all materials. The output window of my Visual Studio environment kept spamming me with a lot of D3D10 INFO messages along the lines of:
D3D10 INFO: ID3D10Device::DrawIndexed: The Pixel Shader unit expects a Shader Resource View at Slot 1, but none is bound. This is OK, as reads of an unbound Shader Resource View are defined to return 0. It is also possible the developer knows the data will not be used anyway. This is only a problem if the developer actually intended to bind a Shader Resource View here.  [ EXECUTION INFO #353: DEVICE_DRAW_SHADERRESOURCEVIEW_NOT_SET]
The root cause of these INFO messages was that not all textures were bound (used) for every material. I found a solution to remove these INFO messages by creating different techniques and different pixel shader declarations, e.g.:

// Default Technique
technique10 Default {
    pass p0 {
        SetRasterizerState(NoCulling);
        SetVertexShader(CompileShader(vs_4_0, MainVS()));
        SetGeometryShader( NULL );
        SetPixelShader(CompileShader(ps_4_0, PSDefault()));
    }
}

// Diffuse texture technique
technique10 TexturedDiffuse {
    pass p0 {
        SetRasterizerState(NoCulling);
        SetVertexShader(CompileShader(vs_4_0, MainVS()));
        SetGeometryShader( NULL );
        SetPixelShader(CompileShader(ps_4_0, PSDiffuseTexture()));
    }
}

// technique: Normal texture, Diffuse texture and Blinn model
technique10 NormalTexturedDiffuseSpecularBlinn {
    pass p0 {
        SetRasterizerState(NoCulling);
        SetVertexShader(CompileShader(vs_4_0, MainVS()));
        SetGeometryShader( NULL );
        SetPixelShader(CompileShader(ps_4_0, PSNormalTexturedDiffuseSpecularBlinn()));
    }
}
The corresponding Pixel shader functions thus become:

// Pixel shader Default
float4 PSDefault(VS_Output input) : SV_TARGET {
    return PS(input, false, false, false, false);
}

// Pixel shader Diffuse Texture
float4 PSDiffuseTexture(VS_Output input) : SV_TARGET {
    return PS(input, false, true, false, false);
}

// Pixel shader Normal Texture, Diffuse Texture and Blinn model for specular component
float4 PSNormalTexturedDiffuseSpecularBlinn(VS_Output input) : SV_TARGET {
    return PS(input, true, true, true, true);
}
The "main" pixel shader then looks like something as:

// The Pixel Shader
float4 PS(VS_Output input, uniform bool bHasNormalTexture, uniform bool bHasDiffuseTexture, uniform bool bSpecularBlinn, uniform bool bHasTextureSpecularLevel) : SV_TARGET {
    ...

    //NORMAL
    float3 newNormal = CalculateNormal(input.Tangent, input.Normal, newTexCoord, bHasNormalTexture);
    //DIFFUSE
    diffColor += CalculateDiffuse(newNormal, bHasDiffuseTexture, newTexCoord);
    //SPECULAR
    specColor += CalculateSpecular(viewDirection, newNormal, newTexCoord, bSpecularBlinn, bHasTextureSpecularLevel);

    ...

    return float4(finalColor, opacity);
}
After I refactored the effect in the above way, I effectively removed all related D3D10 INFO messages. Nevertheless, the effect got a bit more complicated: it looks like a lot of different techniques will be required if our materials combine various features from the ubershader. But this is probably a subject on its own. Maybe someone has a more elegant solution?

For a correct understanding of the problem, I'm still looking for an interpretation of these specific D3D10 INFO messages and whether they inform the developer of an impact on the performance of the effect. If the unbound resource views are totally harmless, then why would we want these kinds of INFO messages?
A clean output window seems to come at a high price given the increased complexity of the effect.

Sunday, July 15, 2012

Rudimentary snake game - simple analysis

This post analyzes a rudimentary C++ snake game that I found on YouTube. A commonly asked question on several game developer fora is "where should I begin?". At the time, I was myself looking for some small and easy code that could show me OpenGL's usage in a game and get me going on the subject. Once I started to copy/implement/think about the snake code, I noticed what other interesting things were behind this code besides OpenGL.

E.g. when you look at how the code has been put together, you notice that the Field class actually contains all the objects that should be represented in the gameworld (snake blocks, fruit blocks and invisible blocks, all arranged in a 2-dimensional array of blocks). The Snake class, on the other hand, contains the code to actually manipulate and move all objects around in the gameworld. The Painter class knows how to draw and render the different objects. Finally, there is the Game class that manages all the events in the gameworld.

For now I'm still a bit naive concerning game engines, but I would definitely dare to draw a parallel between a physics component of a game engine and the Snake class, since the Snake class from this example also takes care of computing the next state of the different blocks in the gameworld.

If I would have to answer "where should I begin?", I would say: "Just do!". Nothing will give you more experience/insights/new questions than having a go at a challenge. During the process, don't be afraid of having a look at how others have solved several challenges before you...

I hope you enjoyed this second post.