
December 2010

Game Design is Dead

For videogames, the art and craft of game design is dead. In its place we are left only addiction design and geek design, which is after all only a particular form of addiction design.

If game design means the creation of tools for play tailored to the needs of specific groups of players, the rising cost of development on the power consoles has strangled this possibility out of existence in the blockbuster market. The concern there is in attracting, addicting, and retaining a community of players – and as ever, it's chiefly the adolescent boys (and their adult successors) who are sufficiently compulsive to be gouged regularly for 60 bucks a pop. Other players are unwilling to pay so much for their play.

Because of the sheer scale of modern development costs, the brands in the blockbuster space overshadow teams in importance by a wide margin. It is no coincidence that newly-matured cash cow Call of Duty has two different developers working on it. The new mega-brands will increasingly demand steps like these if they are to sustain the 10-20 million strong communities of addicts at their core.

Addiction is also on sale elsewhere. Via a more expressly social fictional world, a wider demographic containing geeks of all stripes is willing to pay 15 bucks a month to feed compulsive reward schedules like slot jockeys playing in teams. Comparable to Call of Duty, World of Warcraft sustains a community of some 10-20 million addicts. Most are happy with their habit; others lose all sense of proportion once they get hooked on communal fantasy life.

Against this, the only creative counterweight comes from the indie programmers, since at the opposite scale of development the market is open solely to those who program, and any artists they choose to bring along for the ride. Incredible games for geeks are made in this space, but geek design is ultimately a limited, self-serving form of game design. Geeks themselves have great respect for those who program the kinds of games geeks want to play – but this kind of respect is simply fandom in another guise.

Furthermore, geek design inevitably revolves around addictiveness, since the typical geek's buttons are ready and eager to be pushed. What makes games like Dwarf Fortress and Minecraft so beloved by their fans (myself included) is the powerful freedom to set your own goals within the constraints of the fictional world. The former presents this freedom as challenge ('losing is fun' is the game's motto), while the latter drives highly compulsive play by compelling the churning of resources in pursuit of the player's own projects. Both approaches are recognisably addictive for any player geeky enough to overcome the apparent strangeness of the representation, and the complexity of the processes to be learned.

And what of the non-geeks for whom such games are arcane and unappealing? The diversions market seems to have graduated from its experimental phase (the Casual game gold rush having concluded with just three viable genres: match 3, time management and hidden object) and gone straight to packaging addiction in more accessible wrappers. Zynga's Farmville takes the RPG reward schedules that originated with Dungeons & Dragons (and are now central to both Call of Duty and World of Warcraft) out to the masses with massive profitability as the result. Thus all game design has become addiction design.

But of course, I speak in a cavalier fashion of addicts and addiction. We don't talk of 'sports addicts' with respect to habitual sports fans (although we sometimes speak of 'soap addicts' with respect to TV shows). There's a thin, treacherous line between fan and addict, but as long as the profit motive rules, everyone will be looking for 'sticky' content – marketing speak for 'addictive'. While there are many forms of enjoyment, they all relate to the same biological mechanism (dopamine and the nucleus accumbens), and all have the potential to become habit-forming. TV, music, films, books, websites – whatever the medium, the corporate mission remains the same: find the sticky brands and strip-mine.

Which is why it's a shame that geek design always tends towards addiction design. If anything might avoid commercialism, it should be these small projects, but the centrality of the programmer seems to allow compulsive play to trump artistry every time. If it were not for extraordinary outfits like Tale of Tales, newsgaming, Prize Budget for Boys and thatgamecompany, there might be no art games outside of the growing cloud of 'artlets' by individuals such as Mory Buckman, Ferry Halim, Rod Humble, Deirdra Kiai, Jordan Magnuson and others too numerous to mention. The struggle for respect that digital games face is intimately connected with the difficulty of connecting revenue to artistry – a problem that afflicts art in all its forms, but which is acute in the case of infinitely reproducible media, where there is no object to acquire the mystique that drives the auction market for conventional art.

Game design is dead. Designing addiction has taken its place. But the medium of videogames is still very much alive, and exciting possibilities lurk around every corner. Geek design still has remarkably original visions of play to explore (albeit, only compulsive play), blockbuster games are achieving incredible new high watermarks for polish, and when it comes to interactive artworks we can only guess at what might be possible. But it is all overshadowed by the commercial exploitation of addiction, a professional niche I find myself working in, and not without reservations. We live, as the saying goes, in interesting times. For the time being, that also means we live in addictive times.

Enjoy your winter festivities! ihobo returns in January.

Avatar and Doll: Entering Fictional Worlds

What is it that we call an avatar? How do we actually use this term, and are the things we tend to call avatars really the entities most deserving of this title?

The term avatar, if it is to be deployed usefully, refers to the means by which the player of a game interacts with its fictional world. But in that case, the avatar as a representation is secondary to the avatar per se – surely the parser was the source of interaction in Colossal Cave Adventure, which after all did not represent the player at all. If we presume (following game design as make-believe) that the avatar is the prop which prescribes that the player imagines they are in the fictional world of the game, it is the verbal representation of the adventure game that achieves this, by virtue of its second person narration. The MUD shows this is not just an idle observation: MUD players certainly had avatars, even without any explicit representation.

But if there need be no graphical representation for the avatar, then there is a distinction between the avatar as the source of interaction and the avatar as representation. The latter is merely a doll that prescribes we imagine how we appear within the fictional world, not that we are present in the fictional world. (To put this another way: the character sheet in a tabletop role-playing game need not have a picture or a written description to function as the player's avatar.) Thus most of the things that are conventionally called avatars are merely dolls representing the avatar – what we might term an avatar doll.

Consider the alternative: that what we mean by avatar is precisely the prop that specifies our appearance in the fictional world. Then MUD players only have avatars if they type a description of their character (or if the name alone were considered to represent them in that world). This is tolerable, but it shows that our focus has become what we communicate to other players about our appearance in the world, which would be meaningless in a single player game (although a case could be made that players can communicate representative aspects to themselves, or that game makers can do this to their players). We could enter the fictional world of a MUD without a name or a description, so the avatar cannot be either of these things. It therefore cannot be representational at all.

Also consider that the first person shooter, in its classic form, shows only a gun and perhaps an arm – is the arm our avatar? Or the gun? We might prefer to recognize that the gun implies our ability to interact with the fictional world and thus deserves to be called the avatar, but this is a case where there is little or no doll to be found. Indeed, this distinguishes first person perspective from third person in most cases, although the first person computer RPG often still shows the doll on an inventory screen. Overall, the doll seems secondary to the avatar as such when we consider games presented in a first person perspective.

The situation is similar with the racing game. In first person, the dashboard reinforces our presence in the fictional world, but even without this we would still be driving in that world; the dashboard is closer in function to a doll than to an avatar. Similarly in third person: the car is the equivalent of a doll (or perhaps more precisely, a model our doll is presumed to be driving), but we would still have the avatar without the car model, since in first person any explicit representation is entirely optional.

Thus we can conclude that the doll (or model) in any game is a prop that prescribes we imagine the details of our presence in the fictional world, whereas the avatar is our capacity to act within that world. Crucially, this suggests the avatar is not representational; it is wholly functional. And this completely transforms our expectation of what the avatar must be. Given this, nothing that serves a representative function deserves the term 'avatar', and whatever does so may better be considered a 'doll' or 'avatar doll'. The doll implies the avatar, but the avatar is in no way dependent upon a doll. We act in fictional worlds even without a doll, and thus the avatar must not be a representational prop at all.

Move vs Kinect: The Future of Console Controllers?

Anything Nintendo can do, Sony and Microsoft can do better, more expensively and too late to make a difference.

Sony recently launched its Super Wiimote (called Move), continuing its long-term copycat policy of taking Nintendo's successful ideas and refining them slightly. At $100 to $130 (depending on how completely you want to copy the Wii's control schemes), Move is not going to sell many extra PS3s for Sony, although as you'd expect from a product coming four years after the original it's a quality piece of kit. Reviewers everywhere rightly observe that even with sensitivity superior to Wii MotionPlus, Move is hardly going to be an essential purchase for gamer hobbyists, since no-one is looking for a step down in control precision, and the 'Mii Too' software line-up smacks of a complete failure of imagination.

Against this, Microsoft have released their Super EyeToy (called Kinect). I have to hand it to Microsoft: it's a bold strategy to eschew copying the successful new peripheral in town and instead copy an older, less successful device. To their credit, it's an impressively flashy device, but at $150 it's hardly going to shift 360s by doubling the retail price. I have to say, Kinect would be an outstanding basis for an arcade cabinet – perhaps Microsoft should chat to their old friends at the ghost of Sega about this – but its huge virtual footprint scuppers any chance of it becoming a 'must have' for most 360 owners. Right now, investing in a Kinect requires something of a leap of faith (as CNET shrewdly observed), since the device promises more than the software currently delivers.

The bottom line is that neither Move nor Kinect is really about taking on Nintendo in this round of the console squabbles. This bout is already over and Nintendo won a decisive victory, while Microsoft earns an honourable mention for just barely wriggling out of last place, albeit with an ongoing failure to be profitable. The big question, the one that still matters, is 'which input mechanisms will be key to the digital games market in 2012?'

Why 2012? Well, Sony have always been refreshingly forthright about their plans: they run a ten-year hardware cycle with six years between new iterations: PlayStation in 1994, PS2 in 2000, PS3 in 2006, so PS4 in 2012. The timing isn't hard to calculate, but the strategy is harder to fathom. Imagine you're in Sony's shoes... You have to start work on your new hardware but you don't know what interface it's going to need. No console has ever shipped with multiple control mechanisms (nor is any likely to do so, since most hardware already sells as a loss leader), so you have to pick just one of the three approaches on offer, each with very different advantages and disadvantages.

Sony can't afford to lose their gamer hobbyist fanbase, because internal pressure within the electronics giant demands shiny and impressive hardware that doesn't come cheaply. Only the hobbyists are willing to splash out on a console costing more than $250, and appealing to them requires a standard gamepad controller or something with superior precision to it – and nothing makes that latter proposition a likely horse to bet on right now. However, Nintendo have now proved what I've been saying for decades: the standard gamepad is a confusing, intimidating device for mass market players, who need something simpler if they're going to have any fun. (It is not a coincidence that the DS stylus and Wii Remote are the first game controllers since Pac-Man that can be operated one-handed.) If Sony want to recapture anything like the 120 million installed base of the PS2 – and of course they do – they have to cover both bases in one control scheme. That's why Move is so important to Sony: not to make money now, but to see if this is the way forward for the PS4 controller.

What of the situation facing Microsoft? They don't really want to be forced into a new hardware cycle because they have still only barely recovered from the haemorrhaging of cash their last two consoles inflicted upon the entertainment division. Kinect is an attempt to generate the same kind of 'new way to play' buzz that the Wii enjoyed in the hope of finding the next big interface, but right now neither their technology nor their interface design is even remotely good enough to break into this wider market, especially not at a cost of $300 all-in (50% more than a Wii). The most promising aspect of the new device – voice control – is still a long way off in real terms, and if Microsoft had what was needed in this regard they'd be pushing it out much further than just the entertainment division. As anyone who has been stuck talking to an automated call centre program knows, voice recognition just isn't up to scratch yet.

Ultimately, I'm doubtful the fate of the next generation of game consoles is going to be determined by a purely gestural interface like EyeToy or Kinect. Make-believe theory suggests that kinaesthetic mimicry is always improved by a physical prop – as CNN observed in their review of Kinect, "games that would be better enhanced with a physical device in hand feel flat." To put this another way: it's more fun to pretend you're shooting a real gun than to pretend you're shooting when you have nothing in your hand, so until gesture detection beats mechanical controllers for accuracy (i.e. sometime between the future and never), Kinect is a novelty and not the next big thing in interfaces. On the whole, this suggests the pressure is off Microsoft for the time being. They have little to gain from being first mover next time around, and everything to gain from waiting as long as possible before committing to a new machine, and a new interface.

My impression is that Nintendo don't have another ace up their sleeve right now, having already played their best hand in twenty years with the Wii and DS – the latter set to overtake the PS2 as the most successful console of all time any day now. But no software developer outside of the Kyoto-based global corporation has really dealt with the problem that making games for the mass market means more than just simplifying the interface; it also requires the creation of entirely new development cultures, able to make games for people other than diehard gamers. This gives Nintendo a corner on the lucrative wider audience for games: those who play few different games, but who consequently contribute gigantic sales figures to the games that do appeal to the masses. Neither Sony nor Microsoft have the developer talent in this space to supplant Nintendo right now, allowing the venerable company to effectively coast for a few years until they develop something innovative enough to push the boat out again. If there isn't another revolution waiting in the wings – and it's certainly not clear that there is this time – Nintendo can afford to drag their feet when it comes to a new home console, although I bet the distant sound of Sony's bugles can be heard all too clearly in Nintendo head office.

Ultimately, this puts Sony in the uncomfortable position of being forced to move first into uncharted (or at least, poorly understood) waters. The first mover advantage only works when you have something new to offer, and that has never been Sony's strong point. Conventional games industry wisdom tends to think that when all else fails, you just pump up the power – but if that didn't work this time around, you can bet it isn't going to be the decisive factor next time either. What Sony desperately needs is permission to be truly disruptive, to make their new machine better adapted to our schizophrenic new marketplace rather than to simply crank it up a notch. But the electronics conglomerate is neither agile nor prescient enough to move against its own corporate momentum. Hence, PS4 in 2012 – even if Sony don't yet know what kind of controller it's going to have to ship with.