
How Hard Can It Be?

In Video Games on April 27, 2010 at 2:47 pm

An immutable facet of gaming is the challenge: the difficulty of acquiring or achieving the goals presented within the context of the game itself.  As an empathetic connection is formed through each successive obstacle — where we feel the same exultation in triumph or the same dread when beset by further setbacks — an appropriate challenge becomes indelible.  When presented with an impossible task we should strive with every ounce of our effort and just barely cross the finish line, our avatars mirroring our exertion and sharing in the experience; when such monumental tasks are reduced to trivialities, our emotional investment is cheapened.  Likewise, when trivial tasks are insurmountable, resentment brews, frustration simmers, and our hatred of the game is transferred to its plot and characters.  This may be lessened if we are assured we face the same challenges as the characters, rather than challenges imposed through a severe segregation of gameplay and its in-game justification.

Just how many games truly allow you to excel through your own merits, skill and talent?  As integral as challenge is to developing that empathetic connection — the one that allows us to transcend the boorish nature of manipulating our hand-held input devices in what amounts to only the illusion of a three-dimensional visage — you would naturally assume that success, and failure, is granted exclusively through your own efforts; that you would not be bolstered through a severe separation of in-game advantages, or crippled through a truly unfair, suspension-of-disbelief-shattering disadvantage.  Think on this: given our current limitations of interaction, the scope of your natural prowess is funneled narrowly through a limited set of physical and mental abilities: your hand-eye coordination and reflexes, and your ability to absorb information, handle multiple variables and make sound tactical judgments.

Games may artificially — meaning outside the scope of your natural abilities — increase or reduce the margin by which your gameplay experience is affected by your own abilities.  This should be considered separate from difficulty, and more a matter of calibrating a game along the axes of genre and desired scope of gameplay; some games simply don’t want to be about certain things.  Your physical traits may be bolstered by something as simple as an aiming reticule, commonly accepted and expected in the interface of every shooter out there; even games with interfaces as minimalist as Metro 2033’s still use them.  You may be assisted by some degree of auto-aiming or lock-on capability, reducing the necessity for absolute accuracy or the need to track targets, as seen in most console shooters like Halo but notably absent in the most recent Turok title.  “Bullet time” makes it easier to react quickly to new or multiple stimuli; you are not any faster than before, but the game has slowed to accommodate you.  Of course, many of these can be stylistic elements as well — Max Payne would be diminished without its slow-motion action sequences — consistent with the setting or tone, and lacking them would make you feel like you aren’t quite allowed to walk the talk the game presents.

Mental capabilities are more difficult to gauge, but may be broken down into simply manipulating the flow of information.  On one side you must absorb and evaluate data, potentially from multiple sources in a myriad of formats, all towards making sound decisions and tactical judgments — while predictions of how such judgments will affect the game world loop back around to provide yet more data to consider, and so on, circularly.  In easing this burden, games should streamline their interfaces to facilitate data acquisition and management.  More aggressively, the game may have automatic processes designed to deal with certain data criteria, which may or may not be set by players.  The difficulty may then be inflated or deflated through how well these processes act in the players’ stead, or the degree to which they accommodate and implement the players’ tactical judgment.  These methods should be well familiar to fans of real-time strategy games, ramped up to eleven in cyclopean titles such as Sins of a Solar Empire and Supreme Commander, as well as their more intriguing implementations in games like Final Fantasy XII, which stands out drastically when compared to its peers, and Demigod, which boils resource and army management down to how well you play a singular avatar.

Considering the above, whence comes difficulty?  Factor in luck and time investment and your answer shall be made apparent.  Luck — or more accurately, the random factor well apart from any player-manageable aspect — and time investment, which simply means the game won’t let you truly play it until it has decided you have played enough.  MMOs are huge contenders on both factors, relying on an ever-increasing gradation of time investment to ensure consistent subscriptions, between the time it takes to level and the random loot drops.  RPGs are also guilty of this; Persona 3 is a particularly egregious offender: while it has two dozen characterization-based side quests, it still requires you to slog through a dungeon to level up at least once per in-game month, to say nothing of potentially losing hours of game time to randomly dying from an instant-death spell.

I feel a great sense of accomplishment in games that allow me to succeed based primarily on my own capabilities; this feeling may be enhanced by increasing the game’s specifically delineated “difficulty level.”  In good games this will bring the enemies up to my skill level, reacting more quickly and more accurately, or responding more appropriately or more aggressively to my own actions.  In less good games, they simply have more health or do more damage, or become psychic outright and gain truly unfair advantages.  Early games had to utilize the latter as it was actually impossible to have enemies that operated within the confines of the game rules; today this should be less acceptable, but all the same the player was, and is, still allowed to win through the sweat on their own brow.

It is when games severely limit your capabilities, or rely inordinately upon random factors or time investment, that I begin to feel truly cheated — that the game is hiding its shallow nature behind an obfuscating curtain of artificial difficulty.  When I’m really just playing a game for its story, this becomes unbearable as I’m forced to replay whole segments due to an inconvenient game over.  In cases such as these, I feel truly justified in lowering the difficulty — essentially opting out of the gameplay portion of the game — to experience the plot, or to have the experience adjusted down to where I actually feel like I’m playing capable individuals overcoming obstacles instead of being obliterated by them.  Likewise, this may soothe my irritation at an overly severe segregation between the game’s play and the justification presented in-game for why it must be so.  In both cases, I am allowed to enjoy the plot once more, or even bring the game down to enjoyable levels.

Does this make me less of a gamer?  I do not know what stigma may be attached to riding out on lesser difficulties; I don’t think it is something oft discussed in gaming communities.  There certainly are bragging rights for higher difficulties, and I do recall being told, in no uncertain terms, that I had not done anything but waste my time in abstaining from Halo’s legendary difficulty level.  All the same, no gamer would think for a moment that Doom’s nightmare difficulty represents anything but grim determination; the game itself proudly proclaims it isn’t even remotely fair.  Yet more gamers use unfair settings to acquire a challenge where the AI may falter, as in Supreme Commander’s specific “cheating” AIs that have resource and build-time advantages over the player.

Some gamers may bristle at this, but many games simply do not ask much actual skill of the player, whether through the degree by which they reduce the effectiveness of the player’s abilities or because they simply do not offer an avenue by which those abilities may affect the game.  Many, many RPGs fall into this category, requiring no hand-eye coordination at all and minimizing tactical considerations by not providing enough information to act upon; or else you may simply lack the appropriate avenue for acting upon such information.  Outside of these considerations, the difficulty may be lessened in all respects by simply leveling up a bit more.  Some games are harsher about this than others: World of Warcraft has what players call “gear checks”: encounters with timers or periods of focused activity that, regardless of actual player skill, will be met with failure unless the participants have acquired gear of sufficient quality.  To solve this, simply invest sufficient time in a dungeon more suited to your gear level to acquire better gear.

This all may show some bias; given all of the above, it can easily be seen that shooters and strategy games require the most skill to best, and both genres are better represented on the PC than on consoles.  Compare this to the standard mouse-and-keyboard vs. gamepad debate and you might see where I’m coming from when I segregate my platform of choice along genre boundaries.  Once again and as always, in matters of taste — even of difficulty and skill — there is no argument.


On Ebert, That Most Vile Fiend!

In The Gaming Community, Video Games on April 22, 2010 at 1:00 pm

Whether games are art continues to behave like unto a boomerang, entering and re-entering the consciousness of the gaming community during topically relevant periods of unrest.  While the source of the discord may change from incident to incitation, the incandescent remark remains the same: someone somewhere had the gall to deny gamers their pastime’s artistic merit, significance or, more broadly, its taxonomic schema of even being art.  Most recently we have the gaming community’s longtime “rival” Roger Ebert, brought into our collective consciousness by Penny Arcade.  Before proceeding you may wish to familiarize yourself with Mr. Ebert’s relevant blog posting, but it should by no means be necessary.

Given the vivacious and vitriolic response the last time Mr. Ebert stopped by for a chat, it should be unsurprising that his latest comments, lacking acquiescence, are viewed as caustic and closed-minded even though the actual content of his posting is cordial, respectful and laden with several disclaimers arguing subjectivity in matters so intrinsic to taste.  He dismisses the presented definition of art outright, instead presenting his stance that one generally knows art when one sees it; this is as accurate a definition of art as we’re liable to find, as even among fields we would believe to be inherently artistic — paintings, sculptures, et al. — there are enormous, decades-old debates as to what specifically constitutes art, when they aren’t arguing over whether an individual piece fits the bill.  Generally, the only safe bet as to whether a given work is art is whether the creators are long dead and art-history majors are learning about it at university — and even then, it’s just a bet.  Hence his disclaimer.

Of course, his dismissal of his own opinion would seem hollow if said opinion did not first present itself, and his “Video Games Can Never Be Art” blog post certainly doesn’t disappoint, though it is inaccurately named: Mr. Ebert concedes that never is quite a long time to be right about something and is confident he will posthumously be proven wrong.  This isn’t a morbidly self-effacing measurement of his remaining life; rather, this is a declaration that while there are no games that would presently qualify as art, he generously concedes that this may change in an uncertain number of decades, likely exceeding this generation’s lifespan.  Hence the vitriol.

It’s strange to reconsider his article in that he is essentially agreeing with the notion that games are in an incredibly primitive stage of not-yet art, as the subject of his opinions, one Kellee Santiago, compares the current accomplishments and merits to cave paintings, chicken scratches and very, very early cinema.  This rather seems like a self-proclaimed genius receiving a “D” on their report card, only to turn around and demand recognition for not failing.  I don’t think it’s necessarily wrong of Mr. Ebert to rebuke this sort of attitude.  More importantly, I believe the gaming community is failing to understand what the manifold duties of a critic are!  It’s not their duty to applaud effort or work in progress, but to critically appraise a final work.  Those that produce such content should take it as their duty to at least acknowledge these critical evaluations as more than simple opinions, if not to use them to improve future works.

Of course, Mr. Ebert’s cursory dismissal of WACO, Braid and Flower is hardly comprehensive, but I believe the overall point stands: he’s doing more good for the industry by rebuking its desire for acceptance, since the appropriate reaction should be to prove him wrong rather than to present elongated, melodramatic treatises on how he’s wrong.  Seeking the approval of a movie critic is barking up the wrong tree anyway, since games aren’t movies, and the parts of games that are most like movies actually remove that which makes them games in the first place.  Understanding the key components of games as goals, rules, challenges and interaction should make us all realize that the vaunted cutscene, the currency through which our plots are purchased, contradicts the very nature of gaming.  Some developers understand this, whether through Resident Evil 4’s frenetic quick-time events, Assassin’s Creed’s shifting camera angles and pacing protagonist, Quantic Dream’s games existing as fully interactive cutscenes or Bioshock’s meta-plot deconstruction; some games truly grasp the delicacy of deploying plot without destroying the game in the process.  Some games get away with it to lesser degrees due to their consistent portrayal, and for this I can’t praise Mirror’s Edge enough for its dedication to allowing its consistent, cohesive aesthetic to drive the major themes and plot home, with its use of first-person viewpoint in a clean, dystopian city set among symbolic primary coloring.

But it is time for developers and gamers to stop deluding ourselves and start challenging ourselves.  Games are written last and built first, the needs of the plot secondary to the primary considerations of gameplay, engine requirements and the whims of everyone else with a say in the final product: producers, marketing, executives and, in most cases, even the rest of the development team.  Actual writers don’t tend to be brought on board except under very special circumstances, or when there is a need to proofread or otherwise clean up a script — and again, this happens near the end of the development cycle and comes laden with requirements regarding the setting or justifications for the gameplay.  The number of games that employ actual literary devices like symbolism, allegory or true consideration of framing, theme and the place of the narrator or player is negligible.  However exceptional these games are, it is important to understand that they are indeed exceptions.

The quality of the writing is just one of the myriad problems that must be faced; I understand that you might hold certain games dear because you love their plots, enjoy the characters and fondly recall the vistas you visited on your voyage to save their world, but know that it is as much in the telling as in what’s being told — and we haven’t quite settled on just how interactive our interactive storytelling should be.


In The Gaming Community, Video Games on April 19, 2010 at 1:00 pm

Piracy has always been a hot-button issue with PC gamers and development companies, so much so that it is almost pointless to discuss it; you already have your opinion, your moral justification, your marketing directive or your contractual obligation to your publisher and little else needs to be said.  And, as much as I hate to toss the hat of my opinion onto the mountainous pyre of internet discourse on the subject, I do actually wish to contribute.

Succinctly, to my fellow gamers: piracy exists.  It exists, and it’s widespread enough that every developer needs some kind of strategy acknowledging it wherever deploying their software is concerned, so cease acting as though it’s an extraordinarily isolated population of gamers, or that utilizing any DRM at all puts a developer way off base.  Stop acting surprised that a game requires an access key, disc verification and online authentication — and really, who are you fooling with your protest against online authentication at set intervals?  If your internet went down you’d be screaming at the loss anyway.  Game companies: just as piracy does indeed exist, know that it will always exist, no matter how many millions of dollars you spend wishing to the contrary.  There are actually people out there, on the internet, who love pirating so much that they’re dedicated to hacking apart your copy protection like unto a climber ascending Mount Everest (you know, because it’s there).  It doesn’t even matter that they could sell their secrets to your DRM consulting firm for so much money they could pay the copyright fines and rebuy everything; they honestly prefer pirating and enabling others to do so.  So stop disenfranchising honest gamers who have to put up with your nonsensically draconian DRM; they’re going to wish they had pirated the game anyway, since pirates don’t need to bother with DRM.

Your target demographics are gamers who buy everything, and gamers who sometimes pirate when they feel like it.  That’s it.  Forget about the career pirates — they’re never going to buy anything, ever, anyway, and the more difficult you make it for them, the more honest gamers feel unfairly targeted or punished for that which they have not done — or for something they only do when, say, a game company dicks over its customers with crappy DRM.  Please stop pretending that the number of illegal copies of your software equals a like amount of lost profits that would be yours, if only.  That’s a lie your DRM consultants told you to convince you to buy their DRM.  And stop spending so much on DRM in the first place, unless you intend to put “difficult to steal” on the back-of-the-box list of features.

Those two demographics either already have every reason to buy games legitimately, or they really want a reason to do so.  Give it to them through magnanimity.  Be a good game company that makes good games; proclaim loudly that you hope to make more of the kind of game people like, if only it sells well; incorporate fan critique into your content releases; and boldly state that the goal of your DRM is to be as unobtrusive as possible.  The last thing you want to do is to give these fans a reason to abstain from purchasing your game, piracy notwithstanding.

And speaking of your DRM, ask your vendor to abide by a few principles.  First, it should be non-invasive, meaning you’re transparent about what it is and what it does, and what it does affects only the bundled software, and only under conditions truly indicative of having pirated the game (having Daemon Tools installed doesn’t count).  Second, DRM should never hinder normal gameplay or ever, feasibly, prevent someone from playing a game they’ve purchased.  Lastly, your DRM should give as much as it takes: if you require access keys, online authentication, installation limits, re-verification and completely non-transferable usage, consider granting users disc-free access, binding the online verification to an online profile that updates with the players’ achievements and progress, adding matchmaking through this system, and finally granting easy, no-headaches access to patches and DLC.  Steam is a big success story in no small part because people are largely unaware that it’s a DRM vehicle — quite an impressive feat if you consider the outrage around the initial installation and online authentication of Half-Life 2.

It’s no secret that PC gaming is a shadow of what it once was; games like Fallout, Baldur’s Gate and Unreal, games that defined PC gaming, now see their successors hosted on consoles for which they are primarily developed, with the PC version being ported.  It’s an expensive hobby when you consider that consoles cost a fraction of a cutting-edge PC, and most gamers are taking advantage of the HD TV revolution to enjoy crisper graphics on larger screens than your typical PC user.  Not to mention all of their games are tracked in a single online profile and matchmaking standard, while PC users muddle about with everything, from the lowly Gamespy to the hotly criticized Games for Windows Live, but never will all of their games and achievements be located under one roof.

So why work that much harder to make PC gaming even less appealing?

On Additional Content

In The Gaming Community, Video Games on April 12, 2010 at 1:00 pm

Having finally vanquished the final boss, triumphed over the last puzzle or accomplished the ultimate task in your latest game of choice, you are disappointed in only one area: that it ended.  A brief sojourn, utilizing eco-friendly green transportation, to your favored vendor of all materials gaming allows you to purchase: more game.  Cash relinquished and disc in hand, you return home and your adventures continue.

Did you just purchase a sequel, DLC or an expansion?  Perhaps a new game described as a spiritual successor to the one you’ve just completed; maybe the DLC offers enough content to be a bargain expansion pack; perchance the expansion is a radical offering that does not even require the original to play.  More darkly and ever cynically, you might have wasted your money: the DLC offers but a modicum of new content, the sequel is just new levels, or the expansion scarcely approaches the offerings of the main game.  Whatever the case may be, you’ve been inducted into the latest brand of nomenclature-based fiscal dissatisfaction — that is, if you hadn’t been already.

Gamers are obsessed with this, completely fixated on ensuring that what they’ve purchased is what it “should” be; this is compounded by an assumption of standardized pricing for gaming content: new games are sixty dollars, so additional content available at fractional cost should grant a proportionately and appropriately fractional amount of content.  If it contains a small amount of content, it’s DLC; slightly larger, an expansion; larger still, a sequel.  Short expansions may be criticized as glorified DLC; samey sequels might be pegged as “full-priced” expansions.  But then, the accepted content-to-price ratios vary between each of these classes…

As it stands, sixty dollars should grant you thirty-plus hours of progression-based, story-driven RPG content; five to fifteen hours of single-player action in first- or third-person shooter/action/adventure games (watch this value drop if there’s no multiplayer); and, for sandbox or puzzle games, enough depth and enough to do to be sufficiently addictive and fun.  Already we see the difficulty in fully grasping these values and assigning definitive judgment against these products: value is dependent upon genre, but is it really fair to hold some games to a higher standard than others for that sake?  Using a more limited comparison, we might restrict ourselves strictly to comparing add-on content to the game for which it’s released.  A ten-dollar DLC, priced at one-sixth of the full game, should then provide a like proportion of content compared to the full game, right?
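The naive proportionality test described above can be sketched as a quick calculation.  This is purely illustrative — the function name and figures are assumptions for the sake of the example, not anything from an actual storefront:

```python
# A minimal sketch of the naive "proportional content" test: does an
# add-on's content scale with its price, relative to the base game?
# All names and numbers here are illustrative assumptions.

def value_ratio(price, hours, base_price=60.0, base_hours=30.0):
    """Compare an add-on's content-per-dollar to its base game's.

    Returns 1.0 when content scales exactly with price; below 1.0
    the add-on delivers proportionally less than the base game did.
    """
    return (hours / base_hours) / (price / base_price)

# A ten-dollar DLC (one-sixth of a sixty-dollar game) "should" carry
# one-sixth of a thirty-hour game's content, i.e. five hours:
print(value_ratio(10, 5))            # exactly proportional: 1.0
print(round(value_ratio(10, 2), 2))  # a two-hour DLC falls well short
```

Of course, as the following paragraphs argue, a single hours-per-dollar number is exactly the kind of yardstick that breaks down in practice.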

A major, glossed-over consideration is that, while we tend to value games primarily by time played — games encourage this by tracking the statistic — there’s much more to this content, and its value, than how long it takes to complete.  Imagine if Rockstar lowered the movement speeds in Grand Theft Auto: it would then be a longer game, right?  Now imagine they released a DLC that increased the movement speeds, thus making the game “shorter.”  Does not value lose its attachment to time, under these circumstances?

Halo received some criticism over whether its map packs were worth the money: three maps for ten dollars provided nowhere near equivalent value, when the main game offered a full single-player experience as well as a dozen multiplayer maps.  What justifies this?  Well, designers at Bungie would no doubt assert that the multiplayer experience is valued more highly than the single-player by those they would consider the market for these map packs; after all, a rabid Halo fan will likely extend their Halo experience to many times the number of hours one might spend in the single-player campaign.  Additionally, these maps bring production assets and materials that carry value on a separate track from simple time investment.  At the very least, these new maps are made with data collected from thousands of hours of playtime, literally sculpting new maps out of the favored strategies and predilections of previous matches.  Lastly, these maps are designed to extend the entirety of the Halo multiplayer experience, injecting something new and fresh into map rotations that may have grown over-familiar to fans.  With this in mind, evaluating DLC becomes abstract — much more than a formula of time expended.

Expansions suffer from similar judgments.  Ideally, an expansion should extend the original game but not overtake it in terms of content.  A rough rule, by no means backed by any policy, is that expansions tend to contain anywhere from one-third to one-half of the content of the original while costing anywhere from half to four-fifths as much (these distinctions may be slightly muddled by the introduction of stand-alone expansions — games that could technically be defined as sequels but whose lesser offerings and lower prices are excused by the naming convention).  Each Morrowind or Oblivion expansion adds a land mass approximately one-third the size of the main game, or provides a like proportion of questing and exploration, priced at thirty dollars per.  Strategy games like Dawn of War and Supreme Commander offer new, shorter campaigns, factions and units, and are generally considered great value by all despite costing forty dollars to the new game’s fifty; compare them to prior games like Starcraft, whose expansion offered “just” six new units and a new campaign but was highly valued by fans who praised the consistent quality across its products.  Here, we’re much more accepting of a less-than-one-to-one ratio.
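The rough rule above — one-third to one-half of the content at half to four-fifths of the price — can be made concrete with a simple band check.  The helper below is hypothetical, sketched only to pin down the ratios:

```python
# A hedged sketch of the customary expansion band described above:
# one-third to one-half of the base game's content, at one-half to
# four-fifths of its price. Purely illustrative.

def fits_expansion_band(content_fraction, price_fraction):
    """True when an expansion falls inside the rough customary band."""
    return (1/3 <= content_fraction <= 1/2
            and 1/2 <= price_fraction <= 4/5)

# A thirty-dollar expansion to a sixty-dollar game, adding a third of
# the original's content, sits comfortably inside the band:
print(fits_expansion_band(1/3, 30/60))   # True
# The same content priced at one-sixth of the original would be judged
# by DLC standards instead, and falls outside the band:
print(fits_expansion_band(1/3, 1/6))     # False
```

Note that the band deliberately accepts less content per dollar than one-to-one, which is precisely the double standard the next paragraph points out.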

Comparing the standards applied to DLC against those applied to expansions, we can see that gamers hold DLC to the stricter standard: we’re willing to accept a much lower ratio of dollars to gaming value if it’s called an expansion, whereas DLC is easily criticized for not offering at least a one-to-one relationship.

Sequels have it harder.  Bioshock 2, while receiving favorable marks, has been slammed for being a full-priced expansion, offering very, very little in the way of “new” content: identical gameplay, equivalent weapons, the same plasmids, tonics and enemies — why wasn’t this an expansion or DLC?  In this particular instance, Bioshock 2 reused very little of its predecessor’s assets: not only did its engine change (it’s now on the latest iteration of the Unreal Engine), but so did the art direction, requiring that the models be remade, retextured and reanimated.  Every aspect of the game was revised, from plasmid behavior to weapon and ammo types, usage and upgrades.  And, of course, you’re exploring entirely new areas of Rapture after a full ten years of decay and flooding, likely presenting a considerable challenge to the art department — and it shows, with levels altogether more beautiful than the original’s.  Oh, and multiplayer was added too.  While I would not have minded paying an expansion-pack price for this game, I would have definitely considered it a bargain.  Would it help to consider it a full-priced standalone expansion?

In the future, if you take the full scope of what’s being presented, how it’s built and what it actually grants into consideration, maybe you’ll feel less ripped off and maybe just a bit happier with your purchase.  After all, what kind of price tag can you truly place on not shelving your favorite game for a little while longer?

On Intellectual Property

In The Gaming Community, Video Games on April 7, 2010 at 5:31 pm

Go back more than five or ten years and you wouldn’t have been familiar with the term “intellectual property.”  A more cynical and short-winded man might simply state that the phrase owes its ubiquity to a paradigm shift in game (and other) marketing, but such an attitude would deny the shift’s reflection in game development strategy — at the risk of presenting a much shorter article.  Apart from both, gamers have become much more aware of their hobby, due in no small part to a saturation of related informational material, which is why we know anything about it at all.

It’s important that we’re aware of so-called IPs, as the severity of their effect on our mutual, shared pastime cannot be denied or overstated, affecting, as it does, the games we play, how we play them, and the settings and characters through which these goods are delivered.  Just look at the first-quarter releases: Bioshock 2, Mass Effect 2, Aliens Vs Predator, Bad Company 2, Supreme Commander 2, S.T.A.L.K.E.R.: Call of Pripyat, Dragon Age: Awakenings, Final Fantasy XIII; the list continues along similar lines throughout the year with Fallout: New Vegas and a potential new entry in The Elder Scrolls.  Certainly this could be dismissed on the grounds that these are all sequels to titles that sold well, but bear in mind that the current generation of gamers expects more reason for a sequel than a successful original title.

It is imperative to recall, fondly if you wish, the days of sequels and expansions that offered naught but more game to play.  Most first-person shooters fell into this mold, but the number of forgettable expansions to Starcraft that offered only additional campaigns is worth a mention.  We expect more from today’s sequels and expansions — which some might sketch out as a continuation of the story, a chance to find out what happened to cherished characters and popular figures.  These are added as part of the normal extra abilities-features-weapons-spells combo meal, but what it really boils down to is that we want more of that particular world; we are no longer satisfied with a cursory summation of plot and universe through the game’s manual or a brief text-scroll of an introduction.  We want to know what’s over there, what’s truly behind the insurgency, or whether the bad guys are just well-intentioned extremists with additional stories that demand to be told.

This fleshing out, broadening of scope and attention to detail may all be owed to the increased push for bankable intellectual properties.  With the drastic increase in production costs, game development has become a risky venture in which the foolhardy introduction of unfamiliar ideas, concepts and settings may very well court disaster.  At the same time, the very last situation a developer would desire is demand for a sequel to a universe that has effectively been closed to further installments — to say nothing of base material stretched perilously thin in the originating title and definitively broken at the mere thought of a sequel.  New game concepts need to be well realized enough to support multiple titles, potentially relying on the strength of the setting over the introduction of new gameplay tenets, which should be granted at a pace appropriate to sustain interest and stave off stagnation or criticism.

Mass Effect and Dragon Age are probably the best examples of this realization: both settings are conceptualized around simple concepts but deployed with care and detail enough to support many games of differing genres.  Fictionalized accounts of events that occur centuries before the current series border on the didactic; more importantly, every character, reaction, plot point and attitude makes sense.  These are worlds built for living in — and for fabricating an endless stream of bankable tie-ins.  As yet Mass Effect has barely more than a few games and novels, and an accompanying art book that foreshadows the depth and consideration given to the setting.  Whether unrealized or simply taking it slow, Dragon Age marches ahead with a pen-and-paper role-playing supplement for those who wish to take their Dragon Age along a different vector.

Other franchises have benefited from this attention to detail: EA had a full scientific white paper written for tiberium, the alien substance and fuel for conflict within Command and Conquer, leading to a much more well-realized and consistent portrayal in that series, while the world of Gears of War was fabricated from the ground up to grant reason and rationale to both the lavish architecture and the politics and structure of the COG.

Those who might disagree with granting so much to this paradigm shift would deny the symbiotic relationship games and gamers have.  Yes, we’ve long since realized that a paragraph in the manual is a poor substitute for plot, but the fleshing out of ideas and details to present a world truly worthy of interaction — rather than presenting a non-interactive, “deep” story — should be the logical next step.

The downside is living in a market that suffers from sequelitis and a dearth of original content, owing to the massive undertaking, in both risk and investment, that producing a new intellectual property entails — but at least our games make sense, and our plots aren’t relegated to keycard placement.