
Archive

Tag: OpenGL-ES

If, like us, you have a rendering API based on OpenGL ES1, you should start feeling… uncomfortable. Moreover, as I have accumulated knowledge about 3D rendering, I have come to think that the design of our rendering engine is outdated.

There are not many resources out there for learning OpenGL-ES3, so I will try to track my progress on the blog.

OpenGL-ES3 is backwards compatible with OpenGL-ES2. To learn about the latter, I recommend Philip Rideout’s “iPhone 3D Programming”, available online.

Migrating from OpenGL-ES1 to OpenGL-ES3 is not a trivial matter, so I will attack the problem from several angles:

  1. (Re)building an OpenGL-ES(1?)/2/3 renderer from scratch.
  2. Integrating the new renderer with our game engine.
  3. Understanding OpenGL-ES2/3.
A template

Apple provides an introduction to OpenGL-ES on iOS. Recent documents may require an Apple ID.

In Xcode, when creating a new project, we find an “OpenGL Game” template. As well as providing a stub for new OpenGL-ES applications, it doubles as an example featuring spinning, shaded cubes. As such it is comprehensive:

  • Demonstrates basic vertex and fragment shaders
  • Demonstrates how to render geometry
  • Demonstrates how to use the depth buffer
  • Demonstrates the use of basic matrix maths
While the template targets ES2, it should be forward compatible.

This is a ~400-line program, structured like this:

  • setup
    • init EAGLContext (viewDidLoad)
    • setup OpenGL
      • load shaders
        • compile a vertex shader
        • compile a fragment shader
      • link program
  • rendering loop
    • draw view (glkView: drawInRect:)
    • update
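For orientation, the ‘load shaders’ step above boils down to something like this (a minimal sketch using the standard ES2 calls – the helper names are mine, not the template’s):

GLuint compileShader(GLenum type, const char* source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);
    GLint status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (!status) { glDeleteShader(shader); return 0; } // compile failed
    return shader;
}

GLuint linkProgram(GLuint vertexShader, GLuint fragmentShader) {
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    glLinkProgram(program);
    GLint status = 0;
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (!status) { glDeleteProgram(program); return 0; } // link failed
    return program;
}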

A catch with this ‘sample program’ is that finding documentation for it isn’t easy. A Stack Overflow item explains some of the code, and a YouTube video introduces the template.

Ray Wenderlich describes an alternative setup on his blog.

I will not cover this template in detail here. I started refactoring the template to create a simple (yet usable) renderer.

Today I grabbed an iPhone 4 (not quite at the price you get it for in the US or the UK), and that gave me an immediate opportunity to judge the work I did on Antistar to support the so-called Retina display.

As a reminder, development started on a 2nd gen iPod touch. If you have followed this blog on and off, you already know that getting the game to run at frame rate on the older iPods has been a constant struggle. Then I bought an iPad, and an iPhone 3GS along with it. The original plan was to do a separate version for the iPad, and performance tuning on the 3GS. The ‘iPad project’ sat in a cupboard for 8 to 12 weeks.

Optimizing for 3GS and iPod 3rd gen

On the 3GS, it became clear very quickly that there was spare processing time. Since I had already implemented depth of field (DoF) balancing (less detail in the background when running out of processing time), this translated immediately into… more detailed images.

But I still felt I hadn’t reached the limits. So I figured out a way I could introduce antialiasing without directly relying on the GL (remember, multisampling is a late addition on iDevices): rendering to a canvas 1.5× as big as the screen, and letting the system resize and antialias the picture.
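I won’t reproduce the exact code here, but on iOS 4+ one way to get this effect looks roughly like the following (a sketch under my assumptions – not necessarily how Antistar does it):

// Sketch: supersample by making the GL backing store 1.5x the view
// size and letting Core Animation filter it back down (assumed
// approach; requires recreating the renderbuffer afterwards).
eaglView.contentScaleFactor = 1.5f;                  // 1.5x backing store
eaglView.layer.minificationFilter = kCAFilterLinear; // smooth downscale
// renderbufferStorage:fromDrawable: now allocates the larger buffer,
// and glViewport must use the backing size, not the view size.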

Well. That looked pretty good, and the game was still running very smoothly. Adding antialiasing did cause the DoF balancer to kick in at times, but nothing like the iPod 2nd gen.

The ‘Retina incident’

Now we get to the part that finally cost me nearly $1,000 (that’s what I got my iPhone 4 for!) and somehow produced, as collateral, a universal app running on all devices.

I postponed getting iOS4 on any of our devices. I also postponed getting the latest SDK. Seriously, if you’re about to release a game, what do you want? Do you want to fix all the bugs, then look into device / OS specific issues, or do you want to make a mess?

The first big surprise came from the iPhone4 simulator, and a short email clearly indicating that all iPhone games would run on the iPhone4 anyway.

Surely I did want to support the Retina display (and the game does!). That still left me guessing what the performance of the device might be. With vague echoes suggesting that the iPad and the iPhone 4 ran pretty much the same hardware, the best way to find out was to run the game on an iPad. So I migrated the build to universal, and got to work.

At this stage, I was in for the second big surprise, and not a good one: given its bigger screen, the iPad’s horsepower is worth something between the iPhone 3GS and the iPod touch when it comes to 3D rendering (assuming you take away GPU optimizations, which I don’t have running yet). Meaning…

…meaning that the DoF balancer kicked in much earlier on the iPad than it did on the 3GS. Kind of regrettable considering we have a much bigger screen, providing a more immersive experience, somewhat at the expense of having to handle a heavier, clumsier device.

And then what? Well, this is what I did:

  • Disabled antialiasing on the iPad. The game looked beautiful in a different way, and surely ran smoother and better than on the 2nd gen iPod touch. No great loss then (and no negative feedback from players).
  • Limited the canvas size to 1.5× the iPhone’s – much the same as on the 3GS, but not rendered in quite the same way. Instead of antialiasing by blending pixels, we’re running somewhat under the maximum definition.

Where the user experience comes in…

Players reported that the iPhone 4 renders ‘just a little better than the iPhone’. If you’re comparing with an iPhone 3GS, this is necessarily true: we get a picture that’s just a little crisper on the iPhone 4. Other players asked me if the game really supports the Retina display. And it really does! But not quite in the way mip-mapped textures or a 2D game would look on your iPhone 4.

I don’t feel my players on iPhone 4 are very happy. The truth is, they’d rather have less depth of field (and not know about it) than less definition. Because increased definition is what they dropped the bucks for.

So what’s left for me to do is drastically improve the engine performance. Because I don’t want anybody to feel they don’t get what they paid for.

Oh my. Think about it: it’s a $3 title and it needs to run on a screen that’s 4 to 6 times as big as your PS3’s telly.

Well, I was supposed to provide better support for rendering ‘dummies’ – non-committal symbols representing game actors, to be used in game prototyping so we’re not tied to producing assets whenever we need to declare new characters and props.

Today I’m integrating realtime 3D character animations with my isometric view – how’s that for staying focused?

Isometrics again

While working out and testing the Blender-to-GL export, I’ve set up my isometric view more reliably – the original setup was fiddly.

My unit conversions are getting complicated:

  • First I worked in pixels. This is great for 2D sprites, with 32 pixels amounting to roughly 1 metre.
  • Working with Blender, I usually map 1 world unit to 1 metre.
  • To save space, my models are encoded as signed short values. To do this, I premultiply everything by 1000 (smallest resolvable unit = 1 millimetre) – see the sketch after this list.
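To keep track, here’s a small set of helpers expressing those conversions in one place (a sketch – the names, and the idea of centralizing them, are mine):

// Hypothetical helpers centralizing the unit conversions above.
#define MODEL_UNITS_PER_METRE 1000.0f  // models store millimetres as shorts
#define PIXELS_PER_METRE      32.0f    // legacy 2D scale: 32 pixels ~ 1 metre

static inline float metresFromModel(short modelUnits) {
    return modelUnits / MODEL_UNITS_PER_METRE;
}
static inline float metresFromPixels(float pixels) {
    return pixels / PIXELS_PER_METRE;
}
static inline float pixelsFromMetres(float metres) {
    return metres * PIXELS_PER_METRE;
}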

Time to tidy up. Here’s my new setup for the orthographic projection:

[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);

// setup an orthographic projection (no perspective)
glMatrixMode(GL_PROJECTION);
glLoadIdentity();

// The following indicate how many world units we fit in the width
// and height of the screen. So if you set this to, say, 10x15, you
// show 10 world units across and 15 up.
//
// I just map pixels to world units, so at this point I ensure that
// 1 GL unit is 1 pixel.
//
// The depth parameter extrudes the screen into a box. Whatever's
// inside the box is shown, the rest isn't.
float width=GL_RENDER_AREA_WIDTH;
float height=GL_RENDER_AREA_HEIGHT;
float depth=GL_ISO_RENDER_DEPTH*5;
glOrthof(-width/2, width/2, -height/2, height/2, -depth/2, depth/2);

// Now we reset the 'model view'. In other words we're fitting the
// world inside our box.
// I rotate the world 30 degrees. This creates an isometric projection,
// except I'm not rotating 45 degrees on the Y axis, which would give
// the classical 'lozenge tile' isometric effect.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(30, 1.0f, 0, 0);

// At this point I rescale by 'pixels per metre'.
// By default, 32 pixels is now 1 metre.
glScalef(pixelsPerMetre, pixelsPerMetre, pixelsPerMetre);

// Here I setup/clear the depth buffer.
glDepthMask(GL_TRUE);
glClearDepthf(1.0f);
glDepthFunc(GL_LEQUAL);
glEnable(GL_DEPTH_TEST);

// Clear with some nondescript dark gray.
glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Render the world.
[contentRenderer render];

// Display the result.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];

OK, now I need to correct everything down the chain. In passing, I removed a ‘32’ scaling factor because both my actors and tiles already used world units. The data model still uses pixel units; I won’t touch that for now.

Blender Sprites?

I corrected the tile display – that’s not too important because tiles are really just stand-ins. I’m more excited about integrating my character animation from Blender. Pending the next serious article about the export/import code, drafted and tested this weekend, the main (and limited) interest of the integration code lies in the transformation sequence.

In the code below, ‘actor’ refers to the data model object. actor3d is a wrapper around geometry data. Since geometry data can be rendered using various methods, I’ve defined rendering code in a separate class, Actor3DRenderer.

Here’s the code:

// eval orientation angle
float angle=0;
Vector* u = actor.orientation;
if (u.length < 0.001f) {
    angle = 0;
} else {
    // Here the angle is tested against the y axis because the
    // orientation (from my legacy 2D isometrics) describes the
    // actor's direction in the model's x-y plane, so I substitute
    // y for z.
    Vector* v = [[Vector alloc] initWithX:0.0f y:1.0f z:0.0f];
    // [Vector angle_deg:with:] returns the same value for x positive
    // and x negative, so I flip the angle to get the actual rotation.
    angle = [Vector angle_deg:u with:v];
    if (u.x < 0) {
        angle = -angle;
    }
}

// transform and draw
glPushMatrix();
// We need to rescale by 32 because the model's coordinates are in
// pixels (eventually the rescale factor should be passed down the
// render graph).
float tx = (actor.location.x - originX) / 32.0f;
float ty = (actor.location.y - originY) / 32.0f;
glTranslatef(tx, 0, ty);
glRotatef(angle, 0, 1, 0);
[Actor3DRenderer renderActor:actor3d action:@"nod" frame:actor.phase];
glPopMatrix();


Right – that’s it for integration so far. I’ll have more to do later in two essential areas:

  • Selecting the animation frame to be displayed. The model’s ‘phase’ should be determined by the view. This is counter-intuitive (the model should drive the view, not the reverse) but logical (the animator/designer determines the action’s duration). At this point, phase is still determined (arbitrarily) by the model.
    I could export animation durations separately, but I don’t think this is a good idea. This is data duplication, and is kind of dishonest. On the other hand, it is likely that animation durations will require tuning later, so my bet is that I should (a) actually define animation durations somewhere else and (b) provide code that remaps animation frames to actual game loop frames (see the sketch after this list).
  • Mapping/matching model actions (ids) to animation names (strings) – I’ve hardwired an action name for now because I have no safety guards against an undefined action – an action for which there is no matching animation.
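For (b), the remap could be as simple as the following (a sketch with hypothetical names – the tuned duration would come from a separate table, not from the export):

// Map a game-loop phase onto an animation's frames, given a tuned
// duration in game frames (hypothetical helper, not the final design).
static inline int animationFrameForPhase(int phase,
                                         int durationInGameFrames,
                                         int animationFrameCount) {
    float t = (phase % durationInGameFrames) / (float)durationInGameFrames;
    return (int)(t * animationFrameCount); // in [0, animationFrameCount - 1]
}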

I have now resolved importing Blender data into my game (complete article with source code will be available next).

In this article I’m covering the code structure changes required to migrate from a ‘2D isometric view’ (CocoaTouch rendering) to a 3D isometric view (OpenGL-ES rendering). The calculations helped me understand glOrthof a little better.

My original plan was to provide first a ‘dummy view’ to allow level design without having to create dummy assets for every other character or tile. I’ll probably still do this, but because the dummy view required vector graphics, I dived head first into OpenGL ES – the downside being that this cannot integrate directly with my current view code.

Starting from the GLES2Sample example, I have done the following:

  1. Modified it to remove GL ES2 support (at least for now).
  2. Added an ‘ESContentRenderer’ interface. This could be temporary; for now it’s just an acceptable way to separate GL setup/teardown from content rendering. ES1Renderer now calls [ESContentRenderer render] within its own render function.
  3. Arranged for EAGLView to extend my ActionView class (ActionView just converts simple taps into something like clicks).
  4. Created a StageView3D subclass of EAGLView. StageView3D is like my 2D StageView class. This is for consistency.
  5. Added a StageViewRenderer class – this implements ESContentRenderer and is meant to do the actual rendering work.

I half mean to define a common interface for StageView3D and StageView (2D). As it stands, most methods in StageView were private – the view knows about the model, not the reverse – so there’s little to gain in doing this.

With my new setup, I can instantiate StageView3D in place of StageView (duplicated the constructor). EAGLView owns the contentRenderer and passes it to ES1Renderer (could be renamed FrameRenderer). StageView3D extends EAGLView and provides StageViewRenderer as content renderer.

The above realizes two goals:

  • Put GL drawing in a separate class, part of my view rendering – EAGLView and ES1Renderer are part of my GL boilerplate.
  • Keep the new view setup compatible with my existing controllers.

I’ve been tempted to fire a paint message from ES1Renderer instead. I could handle that within StageView3D, which would keep it more like StageView. For now I don’t see the difference, and I feel a little suspicious about dispatching an event that allows classes to contribute drawing independently.

I still need to replicate the drawing of the actors, props and board, and (until I enable gl picking) I need to make sure that my isometric view is compatible with my existing (2D) picking code.

After I am finished with setting up my StageContentRenderer and StageView3D:

  • StageView3D duplicates StageView code for 2D picking. I won’t try to factor this out just yet (I will eventually migrate this to 3D picking I guess…)
  • StageContentRenderer graces me with a beautiful gray screen.

Drawing Actors, Props and Terrain with GL

I already have a SpriteRenderer protocol. This was originally meant to support several implementations and hardly specifies anything, so I can likely create a new implementation for actor rendering; tile and prop rendering was too simple to deserve separate classes.

Pixels to World Coordinates

Previously, my sprite coordinates were defined in pixels. I have no reason to change that (and mess up my 2D picking code – I still need it for now).

When rendering the first sprite, I just bypassed applying a translation.
I then used glPushMatrix/glPopMatrix to apply and remove the translation. First I directly used pixel coordinates (expecting to see the sprite go off the screen) – I got a gray screen again (no sprite), but also an interesting error in the console:

GLES2Sample[3130] <Error>: CGContextSaveGState: invalid context

Nothing happened when tracing GL calls. Aha! This is because I left some 2D rendering code (calls to drawImage) in place, as I’ve set up only one sprite. Since I’ve managed to run this a few times (without the transformation) and could render the model, this is something that can be ignored. Nice.

I learned a few things while trying to get this right; to set up my isometric view, I use:

glOrthof(-width/2, width/2,-height/2,height/2, -depth/2, depth/2);

Where:
- width is the width of the window/view we are rendering to.
- height is the height of the window/view we are rendering to.
- depth represents the bounds of the field of view. For now I use a value comparable to the width or height.

So what glOrthof does is fit a box taken from the 3D world into the target UIView.
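A quick sanity check (my numbers, not from the post above): glOrthof maps [left, right] linearly onto the normalized range [-1, 1], so with a 320-unit-wide box a point at x = 160 lands exactly on the right edge of the view:

// glOrthof(-160, 160, -230, 230, -230, 230);
// x_ndc = (2*x - (right + left)) / (right - left) = x / 160
//   x = 160 -> x_ndc = 1.0  (right edge of the view)
//   x = 0   -> x_ndc = 0.0  (centre of the view)
//   x = 200 -> x_ndc = 1.25 (outside the box: clipped)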

I got a few gray screens while fiddling with that. Common causes:

  • translating the target too much
  • using too small a value for the depth parameter
  • making the width/height too large (apparently if the model is too small, no pixel is rendered)

My test model is just a unit box. When the width and height map about 1 world unit to 1 pixel, this box is hardly visible. I think using 1 metre = 1 world unit is kind of nice (considering how it feels on the Blender grid), so I apply a scaling parameter for world-unit-to-pixel conversion. For now I’d say 1 metre is about 32 pixels (my sprite for the player was 32×64 pixels).

Finally, here’s the code for my view/camera setup…

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float width=320;
float height=460;
float depth=460;
glOrthof(-width/2, width/2,-height/2,height/2, -depth/2, depth/2);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(60,1,0,0);

…and the code for my model transformation. Note that all this is ‘nearly correct, not quite correct’: the isometric view now tracks X coordinates correctly, but we have a 2:1 ratio on Y coordinates, and I’ll need to correct that later.

glPushMatrix();
float tx=actor.location.x-originX; // (see note for 'origin')
float ty=actor.location.y-originY;
NSLog(@"translate to: x=%f,y=%f",tx,ty);
glTranslatef(tx,0,ty);
glScalef(32, 32, 32);
// draw the model...
glPopMatrix();

The ‘origin’ is the position of the player avatar, because I like to keep it at the middle of the screen. Either way the isometric view needs a kind of ‘camera location’, so there would always be an origin – but then it should be composed once and for all as part of the view transform setup.
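Composing it once would look something like this (a sketch based on the setup above – the translation values are my guess, not tested code):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glRotatef(60, 1, 0, 0);
// 'Camera': centre the view on the avatar once per frame, instead of
// subtracting originX/originY inside every actor's transform.
glTranslatef(-originX, 0, -originY);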

Uh-oh. That took quite a while to get through – next time I’ll do the board and props, and it should be a little easier.

This tutorial/quick reference explains the steps to set up lighting in OpenGL-ES 1.1 (this is fixed-function lighting; OpenGL-ES 2.0 drops it in favour of shaders).

Lighting requires that you prepare your models first, i.e. set/update materials and provide per-face/per-vertex normals.

1. Enable lighting; optionally, enable per-vertex/per-face color and smooth shading.

glEnable(GL_LIGHTING);
glEnable(GL_COLOR_MATERIAL);
glShadeModel(GL_SMOOTH);

2. Enable up to 8 light sources:

glEnable(GL_LIGHTn); // e.g. GL_LIGHT0

3. Configure your light sources:

Common to all light types, set the light color for the ambient, diffuse and specular light components. Ambient makes a light illuminate every point in the scene, diffuse makes a light illuminate objects around it, and specular adds a ‘shiny’ spot to your models.

glLightfv(GL_LIGHTn, GL_AMBIENT, color4f );
glLightfv(GL_LIGHTn, GL_DIFFUSE, color4f );
glLightfv(GL_LIGHTn, GL_SPECULAR, color4f );

note: color4f is an array of floats with rgba, e.g. {1,0,0,1} is red.

3a. Directional or positional light

glLightfv(GL_LIGHTn, GL_POSITION, vector4f );

note: vector4f is x,y,z + a w component.
w=0 creates a directional light (x,y,z is the light direction), like the sun.
w=1 creates a positional light, like a fireball*.

3b. Spotlight

glLightfv(GL_LIGHTn, GL_POSITION, vector4f );
glLightfv(GL_LIGHTn, GL_SPOT_DIRECTION, vector3f );
glLightf(GL_LIGHTn, GL_SPOT_CUTOFF, angle); // angle is 0 to 180
glLightf(GL_LIGHTn, GL_SPOT_EXPONENT, exp); // exponent is 0 to 128

note: high exponent values make the light stronger at the middle of the
light cone.

4. Attenuation

Attenuation makes the power of a light source fade as the model gets further from the light source. It may slow down rendering.

glLightf(GL_LIGHTn, attenuation, value)

where attenuation is one of:

GL_CONSTANT_ATTENUATION
GL_LINEAR_ATTENUATION
GL_QUADRATIC_ATTENUATION
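Putting the steps together, here’s what a single white positional light might look like (illustrative values, not taken from a particular project):

// Illustrative setup: one white positional light with a little ambient
// and a linear falloff.
GLfloat white[]    = {1.0f, 1.0f, 1.0f, 1.0f};
GLfloat dimGray[]  = {0.2f, 0.2f, 0.2f, 1.0f};
GLfloat position[] = {0.0f, 5.0f, 0.0f, 1.0f}; // w=1: positional light

glEnable(GL_LIGHTING);
glEnable(GL_COLOR_MATERIAL);
glShadeModel(GL_SMOOTH);

glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_AMBIENT,  dimGray);
glLightfv(GL_LIGHT0, GL_DIFFUSE,  white);
glLightfv(GL_LIGHT0, GL_SPECULAR, white);
glLightfv(GL_LIGHT0, GL_POSITION, position); // transformed by the current model view matrix
glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION, 0.1f);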

(*) that won’t actually draw a fireball :), just the lighting effect caused by something like a fireball.

Last time we looked at drawing triangles with the GLES2Sample project (you can get it from the Apple iPhone dev library). Let’s modify this project a little so we can do interesting things.

The first problem is that the ‘model’ is spinning. This is done using:

glRotatef(3.0f, 0.0f, 0.0f, 1.0f);

You can change 3.0f to 0.0f or comment out this line of code to test. Here’s the code fragment, still within ES1Renderer:

// from ES1Renderer.m in GLES2Sample
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// left, right, bottom, top
glOrthof(-10.0f, 10.0f, -15.0f, 15.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
// this causes the square to rotate
glRotatef(3.0f, 0.0f, 0.0f, 1.0f);

This code is puzzling. How does it work? What’s a matrix anyway? Can we use it to set up an isometric view? Yes! We want to tilt the ‘model’ – 60 degrees around one axis, 45 degrees around another. Tilt = rotate, so let’s put glRotatef to work(!).


glRotatef(angle,x,y,z)

Where:
- angle is the angle we rotate by.
- x,y,z is a vector to rotate around.

For example:

glRotatef(60,1,0,0) rotates your model around the X axis – like holding a skewer in both hands and making it spin.
glRotatef(45,0,1,0) rotates your model around the Y axis – like a horse on a carousel.

// updated to try creating an isometric view
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// left, right, bottom, top
glOrthof(-10.0f, 10.0f, -15.0f, 15.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
// this is trying to setup the isometric view.
glRotatef(60,1,0,0);
glRotatef(45,0,1,0);

This should give us an isometric view – tilted so we look from above and from the side. Run this code.
Well… actually this gives me a headache. What we really want is to rotate just once, then stop. There are several ways to do this:

1 – Since it’s animating, there’s probably a time-loop running somewhere, so we could stop the time-loop. But since we’re making a game, we probably want to leave the time-loop running.
2 – We could use a latch to rotate only once within (void)render:, so that the function rotates the first time through, then never again. That’s pretty dumb. Not elegant.
3 – Is there a way we can just specify the angle at every frame, rather than rotate again at every frame? Yes.

glLoadIdentity() happens to be the call that clears the rotation of the ‘model’.

Given that, it seems we are already clearing the rotation at every frame – glLoadIdentity is already there! Let’s call this function again anyway, just before we call glRotatef():

glMatrixMode(GL_PROJECTION);
glLoadIdentity();  // this call was already there.
// left, right, bottom, top
glOrthof(-10.0f, 10.0f, -15.0f, 15.0f, -1.0f, 1.0f);
glMatrixMode(GL_MODELVIEW);
// this causes the square to rotate
glLoadIdentity();  // hard headed: calling glLoadIdentity again.
glRotatef(60,1,0,0);
glRotatef(45,0,0,1);

Run this. Yes – now that works. Notice that I swapped the y for the z coordinate to make it look right. This is easy to get wrong at the beginning.

*Why* does it work?

1 – GL doesn’t know about your ‘model’. It just draws stuff. That’s the first answer to the question ‘how does it know how to rotate that particular model?’: it doesn’t. We’re not rotating the model, we’re rotating the view. So we rotate the view first and draw next, and it looks like we have rotated the model.
2 – GL retains all state assigned to it. glLoadIdentity() actually affects a target defined by glMatrixMode(). Notice that there are two calls to glMatrixMode:

glMatrixMode(GL_PROJECTION) – sets the so-called projection matrix as the target of transformations such as glRotatef, glOrthof etc.
glMatrixMode(GL_MODELVIEW) – sets the so-called model view matrix as the target of transformations.

The first call to glLoadIdentity() actually resets the projection. This implies that, before we added the second call to glLoadIdentity after glMatrixMode(GL_MODELVIEW), the model view matrix was not reset between two invocations of render: – and that explains the animated rotation.

The projection matrix defines the ‘3D style’, such as isometric (no distortion) or perspective (further objects are smaller).
The model view matrix defines the rotation of the camera or ‘the whole world’.
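My summary of the mechanics (a paraphrase, not a quote from the GL documentation):

// Conceptually, every vertex v ends up transformed as:
//   screen position = Projection * ModelView * v
// Both matrices persist between frames; glLoadIdentity() resets only
// the matrix currently selected by glMatrixMode() - forget one reset
// and its transformations accumulate, frame after frame.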

The aim of this tutorial is to show how to get started with OpenGL ES for iPhone/iPod touch; OpenGL ES 2.0 is only available on the iPhone 3GS, so this article focuses on 1.1.

The article is based on the GLES2Sample project from the Apple iPhone dev library. This project is ideal as a starting point for a 3D app/game for several reasons:

  • It is compatible with OpenGL-ES 1.1/2.0 and can choose the right version at runtime.
  • It provides a run loop, and will select the best strategy to set up that run loop according to whatever version of iPhone OS is running on the target.
  • We can ignore the boilerplate and start playing (or indeed, developing) right away.

In short, OpenGL-ES on iPhone requires setting up a special UIView and a run loop. You’ll need a good reason to modify the default setup. OpenGL rendering itself is all done within the ES1Renderer and ES2Renderer classes.

If you want to learn more about the boilerplate, it’s in the example (and there is Apple documentation to explain it). If you want to know how to get started with basic OpenGL commands, live in ignorance and bliss, and learn more and more about OpenGL-ES, this article is just for you.

  1. Download and open the GLES2Sample project from the Apple iPhone dev library.
  2. In EAGLView.m comment out the following line:
    renderer = [[ES2Renderer alloc] init];
    This makes sure we are only using the ES1 renderer (otherwise the changes described in this article will have no effect). An alternative could be to set your device version to 3.0.
  3. Open ES1Renderer.m
  4. Search for - (void) render
  5. Notice the const GLfloat and const GLubyte arrays. The first array contains (2D) coordinates (vertices). The second array contains colors in RGBA format. Skip to glVertexPointer (a few lines down, same function):
    - glVertexPointer passes the array of vertices to the GL.
    - glEnableClientState(GL_VERTEX_ARRAY) tells the GL that you indeed want to use the passed vertex data.
    - glColorPointer passes colors (to be associated with vertices) to the GL.
    - glEnableClientState(GL_COLOR_ARRAY) tells the GL that you indeed want to use the passed colors.
    - glDrawArrays tells the GL to use the data you passed/enabled to draw triangles.

Here’s the code fragment anyway:

const GLfloat squareVertices[] = {
-0.5f, -0.5f,
0.5f,  -0.5f,
-0.5f,  0.5f,
0.5f,   0.5f,
};
const GLubyte squareColors[] = {
255, 255,   0, 255,
0,   255, 255, 255,
0,     0,   0,   0,
255,   0, 255, 255,
};
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();	// this clears the projection
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();	// this clears the model transform
glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
glRotatef(60.0f, 1.0f, 0.0f, 0.0f); // rotate around x axis
glRotatef(45.0f, 0.0f, 0.0f, 1.0f); // rotate around z axis

glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

glVertexPointer(2, GL_FLOAT, 0, squareVertices);
glEnableClientState(GL_VERTEX_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, squareColors);
glEnableClientState(GL_COLOR_ARRAY);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

NOTE: The above code fragment is reproduced with Apple’s permission (as per ES1Renderer.m license in the ES2Example)

This is pretty much how the GL works. Some GL calls are used to pass data, other GL calls are used to effect commands on this data.

Let’s have a look at the above functions in detail:

1) in glVertexPointer(2, GL_FLOAT, 0, squareVertices), the parameters are:
- 2 is the number of coordinates per vertex. That can be 2 or 3.
- GL_FLOAT is the vertex format. I think float is usually fine – modern CPUs/GPUs tend to be optimized for float rather than integer calculations (not intuitive, but maybe true), and floats are easy to use.
- 0 is the stride. Let’s keep this at zero to save explaining what stride is.
- squareVertices is your vertices, as defined in the array.
2) in glColorPointer(4, GL_UNSIGNED_BYTE, 0, squareColors),
- 4 is the number of components per color: red, green, blue + alpha.
- GL_UNSIGNED_BYTE means color components range from 0 to 255. You can have floating point colors in the range [0.0–1.0] if you prefer. I don’t know which is faster; maybe the unsigned byte version.
3) in glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
- GL_TRIANGLE_STRIP means draw triangles in a sequence, reusing the previous 2 vertices for each new triangle. You can use GL_TRIANGLES instead: not as efficient, but for making things by hand, GL_TRIANGLES is logical – you want 2 triangles, you pass 6 vertices (see the sketch after this list).
- 0 is the index of the first vertex you want to draw from.
- 4 is the number of vertices to use.
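To make the comparison concrete, here’s the same square drawn with GL_TRIANGLES instead (my variation on Apple’s fragment above):

// The same square as two explicit triangles: 6 vertices instead of
// the strip's 4, with two of them duplicated.
const GLfloat squareAsTriangles[] = {
    -0.5f, -0.5f,    0.5f, -0.5f,   -0.5f,  0.5f,  // triangle 1
     0.5f, -0.5f,    0.5f,  0.5f,   -0.5f,  0.5f,  // triangle 2
};
glVertexPointer(2, GL_FLOAT, 0, squareAsTriangles);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, 6); // 6 = number of vertices, not an index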

Usually, to do some low level 3D rendering, you draw triangles. GL also lets you draw lines and points.

OpenGL is the IT industry standard for drawing triangles, lines and points – and a lot more, all the way up to the best realtime 3D games around.

OpenGL-ES1 is ‘OpenGL light’ suitable for mobile platforms such as the iPhone.

ES2 is the high-end version of OpenGL-ES, available on the iPhone 3GS and maybe other platforms. ES2 has shaders; ES1 doesn’t.

Do I need to know OpenGL/ES to make (mobile) 3D games?

Not really, I don’t think so. You can learn a game engine instead, such as Unity, SIO2, Panda3D, Irrlicht, Ogre.

So why would I learn OpenGL/ES?

OpenGL has been around for more than 15 years. If you want to become a 3D graphics programmer, it’s probably a good idea to learn both OpenGL and a game engine.

OpenGL is low level. Compared to using a game engine, knowing how to use the GL lets you be creative in different ways.

OpenGL is a little weird, but a lot of it is actually simple. Drawing shapes is a low level, yet simple concept. If you like ‘drawing and programming in 3D’, then OpenGL could be a better choice than a high level game engine that encourages/constrains you to just get your models from a 3D artist and fire animation commands.

OpenGL is an API that maps directly onto modern graphics hardware. Some people use the GL to do weird stuff that’s not really related to graphics, just because GPUs are fast and full of maths.

I’ve been hesitating a lot between re-learning the GL and learning a game engine. I’ll probably do both in the end but for now, because I don’t need serious physics, probably can’t afford having a lot of textures and I quite like drawing, I thought OpenGL may be a reasonable choice. Also, I’m currently creating a very high level scripting API to a 3D platform, so I like to look at things from the other end in my spare time.

Some GL calls are used to pass data, other GL calls are used to effect commands on this data.

The data you pass is usually retained; e.g. if you change the ‘drawing color’, the same color is used until you change it again.
If you have never seen GL code before, this may feel awkward – why not just have commands like triangle(a,b,c,color) or the like? The reason is that GL is built for speed rather than convenience. Passing lots of vertices at once reduces communication between the CPU and GPU. Passing a color just once, then painting all shapes of the same color together, also reduces that communication. And reusing coordinates on the fly when drawing two triangles (this is what TRIANGLE_STRIP is for) reduces both memory usage and CPU–GPU communication.
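As a tiny illustration of retained state (my example, reusing the square from the earlier tutorial):

glDisableClientState(GL_COLOR_ARRAY);  // use the 'current color' rather than per-vertex colors
glColor4f(1.0f, 0.0f, 0.0f, 1.0f);     // set red, once
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // drawn red
// ...every draw from here on stays red, until glColor4f is called again.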