Progress and Rebuttals June 20, 2003
It’s been a while since the last editorial update, but they just come so SLOWLY. I mean, I had to write my own editorial just to have some content (although I would have anyway). Fortunately, we’ve got two editorials today: the second in Angelo’s Progress Police series, this time focusing on graphics, and my rebuttal to Mr. Cruz’s piece on games as art. Read them, then send me editorials.
Progress Police Part 2: In the Trenches - Graphics
Last time, I gave an overview of how the current state of video game development companies is affecting the creative health of the industry, especially when it comes to development for consoles. This article begins to go into some detail at one of the most basic levels of it all: the programming. Programming is a major area of game development, covering everything from basic player input to the more sophisticated routines governing in-game objects. The major divisions of programming are typically graphics, game mechanics, and the user interface. Most video games also make some use of artificial intelligence (AI) and sound, but neither is as essential as the first three. Each area of game programming has come a long way since the dawn of games. Or has it? In this article, we'll examine the part of game programming which has arguably seen the most progress: graphics.
The part of game programming that typically gets the most attention and exposure is graphics. The degree of engineering a graphics engine requires varies with each type of game, but every game can be made or broken by the quality of its graphics. In the beginning, game "graphics" were more like ASCII art: based on user input, the system drew a certain pattern of characters. This is so primitive that it's not really worth examining, and many would exclude it from their definition of graphics altogether. Graphics are typically characterized as two-dimensional or three-dimensional, where "dimension" describes how the software models its world, not the screen itself. All screen images are ultimately two-dimensional, so 2D graphics have a natural one-to-one mapping to the screen - the geometry needs no transformation for the purpose of display - while 3D objects must undergo what's called a projection to be displayed on the screen.
Two-dimensional graphics engines have their beginnings in points, a point being one pixel on the screen. A line segment connects two points. Pong was programmed using nothing but points and straight lines, and everybody loved it! Moving points and lines on the screen was easy: turn off one pixel and light up the one next to it. There's not much you can do with points and lines, though, so bringing in the polygon (a closed shape made of three or more connected vertices) was the next logical step toward gaming utopia. Moving polygons is as easy as moving lines, and they could even be filled with color! The concept of shape did complicate things, though. A line is easy to rotate about its center, because finding the center is a simple calculation. Rotating a shape took a bit more work, because every vertex has to swing around the shape's center at once, each tracing its own arc. Needless to say, programming games was not all fun and games. Then came the sprite, and the sprite changed EVERYTHING.
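To make the rotation problem concrete, here's a minimal Python sketch (my own illustration, not code from any engine the article discusses): every vertex is translated so the centroid sits at the origin, rotated, and translated back.

```python
import math

def rotate_polygon(vertices, angle):
    """Rotate every vertex of a polygon about the polygon's centroid
    by `angle` radians, so the whole shape turns in place."""
    cx = sum(x for x, _ in vertices) / len(vertices)
    cy = sum(y for _, y in vertices) / len(vertices)
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    rotated = []
    for x, y in vertices:
        dx, dy = x - cx, y - cy                    # vertex relative to center
        rotated.append((cx + dx * cos_a - dy * sin_a,
                        cy + dx * sin_a + dy * cos_a))
    return rotated

# A 2x2 square centered on (1, 1), turned a quarter turn: the square lands
# on itself, but each vertex has moved to its neighbor's old spot.
corners = rotate_polygon([(0, 0), (2, 0), (2, 2), (0, 2)], math.pi / 2)
```

The point of the exercise: one rotation for a line is two of these vertex calculations; a shape multiplies the work by its vertex count, which mattered on early hardware.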
A sprite is a static image of any shape that is drawn to the screen by skipping a designated transparent color (often black or white) and drawing everything else. Sprites can be associated with any underlying shape for the purposes of movement or collision detection, but the typical shape used is a rectangle (also called a bounding box). Sprites were not only more visually appealing, but also easier to deal with. Any kind of linear movement with a rectangle is simple, and the image - along with any changes it was to undergo during movement - could be farmed out to a team of artist slaves. Life was good! Of course, there were a few resulting issues to work out, but for the first time the programmer could focus on what the objects did in an abstract sense, without worrying about how they looked on screen. The bounding box is still a powerful tool for making graphics, though certain types of games (such as fighters) benefit from more complex bounding polygons, in order to make the experience a little more real for the player.
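Part of the bounding box's appeal is how cheap it makes collision detection. As an illustrative sketch (assuming the common x, y, width, height convention), two axis-aligned rectangles overlap exactly when neither is entirely to one side of the other - four comparisons, no trigonometry:

```python
def boxes_overlap(a, b):
    """Axis-aligned bounding-box test.  Each box is (x, y, width, height)
    with the origin at its top-left corner.  Returns True if they overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return (ax < bx + bw and bx < ax + aw and   # overlap on the x axis
            ay < by + bh and by < ay + ah)      # overlap on the y axis

# The player's box clips the enemy's box...
hit = boxes_overlap((0, 0, 10, 10), (5, 5, 10, 10))
# ...but a box far off to the right is never a hit.
miss = boxes_overlap((0, 0, 10, 10), (20, 0, 5, 5))
```

Fighters that need tighter hit detection swap the rectangle for per-limb boxes or polygons, but the cheap rectangle test usually runs first to rule out the easy misses.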
Somewhere in the middle of all this, companies like Silicon Graphics, Inc. and Autodesk were delving into 3D graphics research, the fruits of which appeared in games like Descent, one of the first fully 3D video games of note. On paper, adding a third dimension isn't really that difficult a concept (being just an extension of the plane), but in computational terms it was dramatically more complex. One of the biggest issues was the projection of a 3-dimensional scene onto the 2-dimensional screen. Some models would contain non-planar polygons - polygons whose vertices don't all lie on a single flat plane - yielding the tell-tale ugly black triangles in the middle of that cool giant robot model. The worst problem was that it was SLOW on a computer. It turned out to be nothing that a little expensive specialty hardware couldn't fix, though. Of course, for video game development, especially on consoles that generally performed worse than other contemporary machines, some corners had to be cut.
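The projection step itself is mathematically small, which makes it easy to see where the cost actually hides (doing it for thousands of vertices per frame). Here's a hedged Python sketch of a basic perspective projection; the names and screen conventions are my own, not any particular engine's:

```python
def project(point, d, width, height):
    """Perspective-project a 3D point given in camera space (camera at
    the origin, looking down +z) onto a width-by-height pixel screen.
    d is the distance from the eye to the projection plane."""
    x, y, z = point
    if z <= 0:
        return None                        # behind the viewer: don't draw it
    sx = (x * d / z) + width / 2           # the perspective divide by z...
    sy = (-y * d / z) + height / 2         # ...then shift origin to screen center
    return (sx, sy)

# A point straight ahead lands dead center; nearer points spread outward.
center = project((0, 0, 5), 1.0, 640, 480)
```

The divide by z is what makes distant things small, and per-vertex divides were exactly the kind of arithmetic that early consumer hardware choked on.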
One of the early shortcuts was a technique called raycasting, which constructed a scene that was really 2D in the calculations but looked 3D; Wolfenstein 3D was built on it, and Doom used closely related tricks. For each vertical strip of the screen, the method casts a ray out from the player's position and draws a wall slice whose height depends on how far away the nearest wall is in that direction. Probably the most popular corner cutters are billboards (2D objects that look the same no matter which direction you face them from), restrictive camera angles or obscure lighting (limiting the number of objects that have to be drawn at one time), and - oddly enough - sprites.
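A toy version of the idea, in Python, with the caveat that this naive step-by-step ray march is my simplification - real raycasters walk the grid with a smarter (DDA-style) algorithm, but the concept is identical:

```python
import math

# A tiny grid map: '#' is a wall, '.' is open floor.  Raycasting engines
# stored the world as exactly this kind of flat 2D grid.
MAP = [
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def cast_ray(px, py, angle, step=0.01, max_dist=20.0):
    """March a ray from (px, py) in small steps until it enters a wall
    cell, and return the distance traveled."""
    dx, dy = math.cos(angle), math.sin(angle)
    dist = 0.0
    while dist < max_dist:
        x, y = px + dx * dist, py + dy * dist
        if MAP[int(y)][int(x)] == "#":
            return dist
        dist += step
    return max_dist

# One ray per screen column; the wall slice drawn for that column gets
# taller as the wall gets closer - that scaling IS the 3D illusion.
distance = cast_ray(2.5, 1.5, 0.0)
wall_slice_height = 200 / distance
```

All of the math is 2D - there is no z anywhere - which is why these games ran acceptably on hardware that true 3D would have flattened.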
Fast-forwarding a bit, past the Quake-induced boom in video hardware development, today's video game consoles and video cards are steadily gaining greater 3D capability, though there is still a long way to go. At least now the simple shapes that would put the Sony PlayStation in a world of hurt are no longer even a concern, and there's even room for manipulating light sources and effects in a quasi-raytracing fashion*. So how has progress been in the graphics programming arena? Pretty good, but just like in any other area of research, there's a catch: money, time, and the market.
The market is not only the biggest boon to research, it is also its biggest limiter. The consumer market loves consoles, and every new game console carries with it a new application programming interface (API). What's bad about this? The graphics programmers - who rely on the performance of the hardware more than anyone else on a game project - are forced to reinvent the wheel. In more extreme cases (Sony's PlayStation 2), developers discover that the API provided is inadequate, and even more wheels are reinvented. This is where the other catch comes in: time. The first project on a new console is almost doomed to have graphics issues, because little extra time is given to work through the problems that crop up while learning a new machine. And time is money, right? Not always. In the console world, you are going to lose money if you don't follow the herd to the newest and greatest gaming machine. So even if migrating costs more money and time, it is a necessary evil.
The PC world is similar, but not as bad. The main difference is that the API is lord, and hardware is developed to support it. The power is split between Microsoft's DirectX and OpenGL, the open graphics library developed primarily by Silicon Graphics, Inc. and supported by the open source community. Both APIs are supported in hardware by almost every video card maker, though DirectX can only be used on the Windows operating system. The advantages of both are market availability and a free license for development. The disadvantage is - as with consoles - changes in the API: DirectX changes directly, while OpenGL changes through extensions. The reason this does not affect PC game development much is that many programmers have chosen to code their performance-critical routines in assembly language, which frees them to focus on adding flash to their rendering engines with the new API functionality.
Where does graphics development go from here? Aside from the obvious idea of making a better wheel if you have to reinvent it, it is desirable to learn from history. History shows development teams trying to do more with a particular machine than they should have, and gameplay has suffered in many of those cases. The other obvious answer is to learn from simulation research outside of video games; the yearly SIGGRAPH conference is always a good place for picking up new ideas, and there are others. Finally, today's developers must be conscious of their current corner-cutting techniques, and know when they no longer need them. Regardless, for every couple of bad examples of graphics development there has been a significant contribution to the field, so this area of development is still doing very well, and showing few signs of slowing down.
Next time I'll talk about another major area of development: the user interface. Stay tuned until next time! Or don't. I don't care.
* Raytracing is the slowest but most accurate 3D rendering method, responsible for the gorgeous and highly detailed full-motion video sequences in games, and kin to the offline renderers behind the family-favorite Pixar films. It differs from real-time rendering in many ways, the most notable being that no clipping, hidden-surface shortcuts, or detail reductions occur. Calculations for each object are made with the highest accuracy the machine allows, and the resulting image is a true-color representation of the scene, showing near-perfect lines and lighting effects.
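The kernel of a raytracer is an intersection test run for every ray against every object, which is where all that time goes. A hedged Python sketch of the classic ray-sphere case (quadratic formula in disguise; my own illustration, not from any renderer named above):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along a ray (direction must be a unit vector)
    to the nearest intersection with a sphere, or None on a miss."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    b = 2 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None                     # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2      # nearer of the two roots
    return t if t > 0 else None

# Looking straight down +z at a unit sphere five units away:
t_hit = ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

Multiply that by a million rays, then again for shadows and reflections, and the footnote's "slowest but most accurate" verdict explains itself.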
- Angelo
Damian:
I’ve gotta give it to Angelo, he certainly does his research (or sounds like he does - rebuttals, anyone?). It shouldn’t come as much of a surprise that graphics, more than almost any other feature of a game, get the most innovation, since they are the most prominent part of the medium. I think most developers would really enjoy seeing APIs that are more comprehensive, and quite a few companies forget that providing incredible system hardware is only part of the job of making a system appealing to developers. Hopefully we’ll see a trend of hardware designers creating better APIs for their machines, but until then, hang in there, all you dedicated programmers.
A Little Bit Warhol, A Little Bit Picasso
When we think of art, what comes to mind first? Perhaps we picture painting or sculpture; perhaps poetry, dance, or music; maybe even movies and television. I’ll wager, however, that the first image is rarely video games. Video games are toys, hobbies, diversions from the everyday life we lead, allowing us to take on an alternate persona in often fantastic worlds. In his recent editorial, Mr. Cruz laid out his case against video games becoming an art form. He claimed that if video games were lifted to the realm of art, the field would suffer at the hands of the artist - that the artist’s vision would become more important than the entertainment value of the game. After all, as Mr. Cruz alluded to, Schoenberg’s music is art, though that doesn’t mean it’s very enjoyable.
Yet, I take issue with Mr. Cruz’s argument on two levels, the first addressing his concerns about video games becoming art, and the second delineating how video games are already art.
The criticisms leveled at the “game-as-art” phenomenon fail to take into consideration the reverse side of the coin. As I take Mr. Cruz’s argument, he fears that developers, if given artistic license, will tend to create games which focus more on the aesthetic nature of the medium than on entertainment value. The result would be a slew of games which, while visually, aurally, or otherwise superficially impressive, would not necessarily be enjoyable to play. Corporations, whose main focus is the bottom line, would instead concern themselves with providing gamers an entertaining experience rather than an aesthetically pleasing but otherwise dull title. Entertained gamers are more likely to buy future games, and to recommend current entertaining titles to their friends.
This argument has a fundamental flaw, unfortunately. Corporations, fixated as they are on profit margins, will try to deliver entertainment only so long as the cost/benefit ratio stays positive. This explains the “sequel-mania” we find in series such as Dragon Quest, Breath of Fire, and the Castlevania GBA titles. Game after game expresses only minor variations on the theme created in the first, with the errant deviation being rare (Breath of Fire: Dragon Quarter would be one example). Instead, these games capitalize on the series’ name recognition to get gamers to buy them. Many companies sacrifice the true potential of a game or a series because they know people will buy on brand name alone. There is no need to innovate, and thus we gamers often get a bland product as the result.
On the other hand, many “artist/designers” have brought out products of high quality because they had a vision for the game. Metal Gear Solid 2 could have merely been a rehash of the first game (and you can argue that, for the most part, it is). However, Hideo Kojima had a definite vision for this game as more than just a sequel to make money on. His presentation of story and atmosphere won rave reviews from most of the gaming press, and most gamers who bought the title found it quite enjoyable. The same could be said of Will Wright, the creator of the Sim titles, whose vision brought about one of the most popular game series ever known, and the biggest-selling PC title to date, The Sims. For the first time, a video game seriously reached across gender lines to attract female players - a goal that corporate Mattel and its Barbie license could not achieve.
Don’t get me wrong, I do believe that both corporate concerns and artistic vision can be accommodated in a video game. Hironobu Sakaguchi had a vision for the Final Fantasy games beyond just the bottom line, and they turned out to be extremely popular and enjoyable (though individual titles can and will be debated). Shigeru Miyamoto had a vision for the Mario and Zelda titles, and they were successful while still being fun and enjoyable. True, every now and then an artistic vision goes wrong, but at least gamers get the sense that the developer is trying to create something of quality, rather than delivering the same old tripe repackaged.
This brings me to the second point I wish to make regarding video games as an art form. While Mr. Cruz worries about the dark possibilities of video games eventually becoming an art form, he misses the fact that video games are already an art form. People like Kojima and Wright, Miyamoto, Sakaguchi and Spector - they are artists who use the medium of the disc or cartridge to devise and present interactive art that is at once aesthetically pleasing and entertaining. Many games employ a combination of artistic styles (visual, aural, literary, etc.) to create a more immersive experience for the gamer, to pull that gamer into the game itself. Pulling off fancy graphics tricks may have its roots in technical inspiration, but it takes the demands of artistic expression for invention to be born. It requires inspiration to create music that fits a theme. The storyline in an RPG is the result of a creative expression of ideas and concepts that parallels the novel. Granted, a lot of these games can seem like so much drivel when they come out, but there has also been a LOT of “bad” art.
In an age where many artists work for companies, producing art for the purposes of marketing a product and making money, how can we judge game designers any differently? Both may have a duality of purpose behind the creation of their art, but it’s still their art (copyrights aside), and many a starving artist has managed to produce works of genius while in the employ of another. How many works did the Catholic Church commission from the Renaissance masters? Didn’t the Medici family sponsor a great deal of talent? Nobody would deny that the ceiling of the Sistine Chapel is art.
The bottom line is not to fear video games as an art form: artistic vision can raise quality just as surely as the desire for profit can. And after all, I’m willing to put up with a SimPhilosophy game in exchange for the score of other great Sim games out there, aren’t you?
- Damian Thomas
Damian:
Yes, rebuttals are welcome. I tend to stay out of the editorial arena, because it makes figuring out what to say here difficult. So I’ll just leave it at that.