Escape-llc's Diary

Coordinator
Jun 27, 2008 at 11:58 AM
I am working on a physics-based game, with 2-d physics and 3-d model rendering.  Models are created via XSI.  I must say the integration was pretty effortless, even though I was overwhelmed by lots of new ideas blahblah.
First off, I am totally new to game programming, but not programming in general.  This is a very interesting way of looking at things.
I don't like using inheritance much for design; it is more for implementation these days.  Interfaces are king now!
Since I wanted to build a game, I also wanted to build a platform at the same time so I could build another game easier next time.  I'm just like that.
So the update and draw callbacks are delegated out to lists of "packages", which each layer implementation manages on behalf of the controller (the ScreenManager).
I am going with "flat" being more efficient than digging down through levels of virtual methods.  The delegate-based coupling helps reinforce this flat view, which is more akin to what one expects when building GUI "event handlers" in a typical IDE.
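Roughly, the shape of it is something like this; a minimal sketch, where the type and member names are just for illustration and not the actual Games Core API:

using System.Collections.Generic;
using Microsoft.Xna.Framework;

// One delegate type per phase; a layer's packages are just flat lists of these.
public delegate void UpdatePackage(GameTime gameTime);
public delegate void RenderPackage(GameTime gameTime);

public class Layer
{
    public readonly List<UpdatePackage> UpdatePackages = new List<UpdatePackage>();
    public readonly List<RenderPackage> RenderPackages = new List<RenderPackage>();
}

public class ScreenManager
{
    readonly List<Layer> layers = new List<Layer>();

    public void Update(GameTime gameTime)
    {
        // No virtual-method chains; just walk the flat lists and invoke.
        foreach (Layer layer in layers)
            foreach (UpdatePackage package in layer.UpdatePackages)
                package(gameTime);
    }

    public void Draw(GameTime gameTime)
    {
        foreach (Layer layer in layers)
            foreach (RenderPackage package in layer.RenderPackages)
                package(gameTime);
    }
}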
Using a stack to manage the input delegation is also a key concept that can't be expressed well with virtual method chains.  This makes it really straightforward to implement modal input, without resorting to "flags" and such.
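The input stack is just as simple; again an illustrative sketch, not the real class names:

using System.Collections.Generic;
using Microsoft.Xna.Framework.Input;

// Whoever is on top of the stack gets the input.  Pushing a handler makes it
// modal; popping restores the previous one.  No flags required.
public delegate void InputHandler(KeyboardState keyboard, MouseState mouse);

public class InputRouter
{
    readonly Stack<InputHandler> handlers = new Stack<InputHandler>();

    public void Push(InputHandler handler) { handlers.Push(handler); }
    public void Pop() { handlers.Pop(); }

    public void Update()
    {
        if (handlers.Count == 0) return;
        // Only the top-most handler sees the input this frame.
        handlers.Peek()(Keyboard.GetState(), Mouse.GetState());
    }
}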
Coordinator
Jul 2, 2008 at 12:21 AM

I had a real problem with getting collisions to work!!  I finally had to set breakpoints in the Farseer code, and found out I need CollisionGroup == 0 or it doesn't "recognize" the collision.  So, it was finding it all right, but the notification machinery was not configured to its liking.
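The fix itself is trivial once you know; something like this, with placeholder geometry names:

// Placeholder geom variables; the point is CollisionGroup back at its
// default of 0 so Farseer's collision notification fires for these geoms.
ballGeom.CollisionGroup = 0;
paddleGeom.CollisionGroup = 0;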

After I made that little tweak, it is all gravy!!

Coordinator
Jul 2, 2008 at 12:22 AM
Just took 30 minutes and converted an existing XNA example project to Games Core.  Posted in the Releases.
Coordinator
Jul 5, 2008 at 12:11 PM
Superimposing text over the 3D model was a snap with Viewport.Project().
Just remember to use Matrix.Identity for your world transform, if you are using 3D world coordinates.
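Something along these lines; a sketch where the camera matrices, model position, and font are whatever the scene already has:

// Project a 3D world position into screen space for a text overlay.
// Note Matrix.Identity as the world transform, since modelPosition is
// already in world coordinates.
Vector3 screen = GraphicsDevice.Viewport.Project(
    modelPosition, projection, view, Matrix.Identity);
spriteBatch.Begin();
spriteBatch.DrawString(font, "Player 1", new Vector2(screen.X, screen.Y), Color.White);
spriteBatch.End();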
Coordinator
Jul 5, 2008 at 12:13 PM
Built an "overhead view" for the model, which is superimposed on the main gameplay screen using a viewport.
Also took the opportunity to do more heavy refactoring, as I am still settling in on an organization for the sub-layers, but it is good now.
Refactored the vector control accordingly.  The vector control is a mouse-drag area, implemented in a viewport, that captures delta data for "dragging" in world coordinates.
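The overhead view boils down to swapping viewports around the draw call; a rough sketch, with the placement, the overhead camera, and the draw helper as placeholders:

// Swap in a smaller viewport for the overhead view, draw, then restore.
Viewport full = GraphicsDevice.Viewport;
Viewport overhead = full;
overhead.X = full.Width - 210;
overhead.Y = 10;
overhead.Width = 200;
overhead.Height = 150;

GraphicsDevice.Viewport = overhead;
DrawScene(overheadView, overheadProjection);    // the layer's normal scene draw
GraphicsDevice.Viewport = full;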
Coordinator
Jul 7, 2008 at 11:47 AM
Edited Jul 7, 2008 at 11:49 AM
Basic gameplay is going now.  Added full-screen switching via F2 key release, three different game lengths, and more physics tweaking.
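The F2 handling is just an edge detection against the previous frame's keyboard state; roughly this, with previousKeyboard and graphics as instance fields:

// Toggle on key *release*: F2 was down last frame, is up this frame.
KeyboardState keyboard = Keyboard.GetState();
if (previousKeyboard.IsKeyDown(Keys.F2) && keyboard.IsKeyUp(Keys.F2))
{
    graphics.ToggleFullScreen();    // graphics is the GraphicsDeviceManager
}
previousKeyboard = keyboard;        // saved for next frame's comparison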
I'm finding it necessary to refactor to one-time, instance-level allocation to reduce GC pressure; per-frame allocation seems to degrade the frame rate over time.  This will be a continuing theme to make sure everything runs smoothly.
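The pattern is simply to hoist per-frame allocations up to the instance; a small example with made-up names:

// Before: allocating inside Update()/Draw() every frame feeds the GC.
// After: allocate once at the instance level and reuse the storage.
private readonly List<Vector2> contactPoints = new List<Vector2>(64);

public void Update(GameTime gameTime)
{
    contactPoints.Clear();    // reuse the same backing array each frame
    // ... fill contactPoints from the physics step ...
}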
All we need now is some audio and some overlay graphics.  Oh yes, and my daughter keeps complaining that there's no background, i.e. stands, walls, scoreboard, etc.  I told her to get into XSI and make some.  And oh yes, make me some characters too!
Coordinator
Jul 11, 2008 at 12:09 PM

Got basic audio going.  It's nice to have a powerful tool like XACT, but hell, can't it do some transcoding for me?  Good thing I already have some wave-manipulation tools!

XACT certainly makes for high overhead if you just want to bootstrap a couple sounds into your game, but I understand the need for it.  After all, I am all for highly-configurable anything.
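For reference, the runtime side of XACT really is just a few lines; the paths and cue name here are placeholders for whatever the XACT project defines:

// The settings, wave bank, and sound bank files come out of the XACT tool.
AudioEngine audioEngine = new AudioEngine(@"Content\Audio\GameAudio.xgs");
WaveBank waveBank = new WaveBank(audioEngine, @"Content\Audio\Wave Bank.xwb");
SoundBank soundBank = new SoundBank(audioEngine, @"Content\Audio\Sound Bank.xsb");

soundBank.PlayCue("bounce");

// Once per frame, so XACT can service its internal state.
audioEngine.Update();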

Next up is integrating some GUI elements into the game.  After all, players need to enter names, configure options, etc.

Coordinator
Jul 16, 2008 at 4:23 PM

Made some more tweaks to the physics.  I am not getting quite the collision dynamics I want; there's too much "drift".

Preparing for render-target-based transitions.  This will allow the ScreenManager to process everything, without any "cooperation" from the layers.

For example, a fade-in effect is just an alpha transition over time.  I've typically seen this implemented in a "cooperative" fashion, where the rendering implementation used an externally-updated "master alpha" value in all its drawing logic.  Indeed I did this as well, but wrapped it up in the update-package framework.

However, that way sucks, mainly because of the cooperative nature, i.e. my code has to be consciously aware of any ongoing effects and include them in rendering calls.  This is especially problematic if I am rendering a model with BasicEffect instances.

Of course, the way to handle this is to use a render target for the rendering, and then realize that into a texture, and apply effects to the texture when putting it to the back buffer.  That way, the layer just renders normally, and any effect can be applied.

This is quite nicely facilitated by having a collection of render packages.  The screen manager controls whether and to where any rendering logic gets invoked.  Thus, a render target can be put into place if necessary, before the layer's packages are invoked.  If an effect is active, the ScreenManager allows it to realize a texture (from its render target) and apply it to the back buffer however it wants (e.g. alpha-blending).  If no effects are active, rendering goes directly to back buffer.  This also extends nicely to multi-target effects, e.g. cross-fade.
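A rough sketch of that flow against the XNA 2.x render-target API; the package-drawing call and the fade alpha stand in for the framework's own pieces:

// Created once up front, matched to the back buffer size.
RenderTarget2D target = new RenderTarget2D(GraphicsDevice,
    GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height,
    1, SurfaceFormat.Color);

// Per frame: redirect the layer's normal rendering into the render target.
GraphicsDevice.SetRenderTarget(0, target);
GraphicsDevice.Clear(Color.Black);
DrawLayerPackages(gameTime);                // the layer renders as usual
GraphicsDevice.SetRenderTarget(0, null);    // back to the back buffer

// Realize the texture and apply the effect; here, a simple fade-in.
Texture2D frame = target.GetTexture();
byte alpha = (byte)(fadeAlpha * 255);       // fadeAlpha animates 0 -> 1 over the transition
spriteBatch.Begin(SpriteBlendMode.AlphaBlend);
spriteBatch.Draw(frame, Vector2.Zero, new Color(255, 255, 255, alpha));
spriteBatch.End();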