Game Design Feed

Metagame vs Structure

What is a metagame and how is it different from a game’s structure?

The structure of a game is the framework of the design that compels players to keep playing over the long term. There are numerous different game structures, including narrative structures (linear, branching, threaded), geographical structures (sequential, hub-and-level, open world), and character advancement structures (class-and-level, advantages/perks, and so on). The metagame, by contrast, is the social consequence of releasing a game into a community of players: an ever-changing set of tactical and strategic considerations that have to be taken into account if players are going to remain engaged with the game’s community.

Understanding the distinction between these two concepts is crucial to effective game design (although it isn’t strictly necessary to understand these concepts by these specific names, of course). In this piece, I hope to disentangle some confusions about the relationship between game structure and metagame, to emphasise the benefits of thinking carefully about both, and to raise some concerns about the ‘monetisation metagame’. Any game designer worth their salt is already thinking about both structure and metagame, and it can be helpful to see where these terms come from, and how they relate to your own mental model for understanding games.

Game Structures

The first games with significant structures may well have been the strategy games that influenced Dungeons & Dragons (1974), from which the concept of ‘campaign’ was inherited and spread to the wider community through TSR’s underground hit. D&D’s use of ‘campaign’ as a narrative structure that linked individual scenarios into a dynamic story-telling medium was innovative, and extremely influential on videogames. But it was the invention of character advancement that was the real structural innovation of the first tabletop role-playing game, generating long-term play by asking players to acquire experience points (XP) in order to gain levels, and thus increase in both power and narrative potential. This structure provides a powerful and compelling player experience in part because it combines the strengths of narrative progression with the compulsion of ‘prizes’ to be won, all linked together in a reward schedule far more sophisticated than anything B.F. Skinner considered.

Around the same time, videogames were experimenting with very simplistic structures necessitated by their technological limitations. Early arcade games were built upon lives or a timer system: players played the game until they were out of lives, or until the timer ran out. This drove the microtransaction economy of the arcade: coin drops. When you ran out of lives, you put in another coin to start again and, after Atari’s Gauntlet (1985), to continue. On home computers and consoles, which were purchased at a fixed price, there was no need for such a frantic style, and the compact structure of the arcades quickly gave way to new fictional geographies that supported longer play times.

Games like the Stamper brothers’ Atic Atac (1983) and Matthew Smith’s Jet Set Willy (1984) changed the way structure worked by opening up the geography of the game world for exploration. Later, with the addition of a primitive save function, Nintendo’s The Legend of Zelda and Metroid (both 1986) took this further by mediating how players progressed: special items could be collected that allowed the players to reach new areas, driving curiosity and supporting more compelling exploration play. Today, a significant proportion of AAA videogames have settled upon an open world structure descended from Grand Theft Auto (1997) and its key influence, Elite (1984), a format which from the earliest days combined the advancement systems of D&D with the fictional geographies of early home videogames. It is a powerful – but expensive to develop – combination.

For game designers, decisions about structure provide ways to get maximum value from minimum development expense. A good character advancement system can wring a lot of player hours out of the same core content, and an expansive geography can provide similar benefits, either through reuse of tiled content or via procedural generation (or a combination of the two). Structural design decisions determine how long players will play a game before they feel it has been ‘completed’, and as such they cross over into narrative design for most games. As a result, structure is the core of the game design process for a great many styles of game.

Metagames

Whereas structural design is foundational to game design in general, metagame design is always player experience design at the level of the community. It is perhaps most commonly encountered in the sense of the distribution of tactics and strategies in a particular player community, but the term originally had an even wider sense when first used by Richard Garfield, designer of Magic: The Gathering, in a presentation at GDC in 2000:

Definition of Metagame: My definition of metagame is broad. It is how a game interfaces with life. A particular game, played with the exact same rules will mean different things to different people, and those differences are the metagame. The rules of poker may not change between a casino game, a neighborhood nickel-dime-quarter game, and a game played for matchsticks, but the player experience in these games will certainly change. The experience of roleplaying with a group of story oriented players and playing with some goal oriented power gamers is entirely different, even though the underlying rules being played with may be the same.

It is immediately apparent how the metagame is distinct from structure, since the structure is part of the internal design task of the development team, while the metagame is how the game interacts with its player community. It is strictly an internal-external split that positions these two terms against one another. That Garfield coined the term is fitting, since Magic: The Gathering is a superlative example of a metagame in action: the balance of cards used in player-constructed decks is constantly in flux as a result of the changes in the pool of available cards. One of the chief factors affecting whether a new card is considered for release by Wizards of the Coast is its effect on the metagame: is it going to shake things up and keep it interesting? Is it going to annoy too many players? Is it fatal to a particular style of play?

The name ‘metagame’ is well chosen: ‘meta’ from the Greek μετά meaning ‘after’ or ‘beyond’. The metagame happens both after a game is released, and beyond its core design. There were, of course, metagames before Magic: The Gathering… Steve Jackson Games’ Car Wars (1981) supported a fascinating metagame via the Uncle Albert’s Auto Stop and Gunnery Shop faux advertisements in Autoduel Quarterly magazine. The designers of the game faced very similar issues to those Wizards of the Coast would encounter, in terms of what the newly-created weapons and defensive options would do to the player tactics in their tabletop battle game. Today, we see significant metagames in MOBAs in the context of character choices and team balance, in choices of gym defenders in Niantic’s Pokémon Go AR game, and more or less anywhere that the player community bears an influence upon the further development of a game – which, thanks to analytics, means almost everywhere.

For some reason (probably a blend of ignorance and an innocent coining of a ‘new’ term), Bungie called the campaign scoring in Halo 3 (2007) ‘the meta-game’. This should not be confused with metagames in Garfield’s sense, since Bungie’s ‘meta-game’ is in actuality structural in nature. It is not that this term is ‘wrong’ so much as it is not helpful. From Bungie’s perspective, it probably seemed as if the individual FPS battles were ‘the game’ and so any game layer above this could be called ‘the meta-game’. This does make sense in terms of the original Greek term… it’s just not helpful, because it is clearly just a matter of game structure. To insist on calling solely the real-time action ‘the game’ is to claim that Bungie doesn’t sell games at all, but rather software that happens to have games embedded inside. That’s strictly correct. But it’s not in any way helpful.

Designing for the metagame is a serious challenge, because you don’t know what you have until it’s out in the world. Even closed betas aren’t really a test for how this will pan out (although having this data is always an asset!), since what a subset of players do is radically distinct from what a wider community of players will end up doing. As game designers, we plan for the metagame – we want one, if at all possible – and then we have to work hard to keep the meta from stagnating. Maintenance of the metagame is where the craft of game design and the art of community management collide, and successful companies are those that can make these different practices work together.

Monetisation as Metagame

A new set of circumstances for game design was created by the rise and flourishing of the free-to-play, microtransaction-driven business model (circumstances quite unlike those fostered by the ‘free version, paid version’ freemium model it directly descends from). The monetisation strategies that developers pursue for acquiring revenue from microtransactions constitute a metagame, one that risks pitting the player and the developer against each other. It could be argued this was already the case for, say, Magic: The Gathering, which generated absurd revenue from its booster pack business model (a form of material microtransaction, you might say!).

What is apparent, whichever way the lines are drawn, is that games that published periodic expansions, sequels, or DLC – like Car Wars or Super Smash Brothers – used their metagames to maintain community interest in the brand, and thus support the fanbase. The fans bought the new games or expansions because they were enjoying playing the game, and the metagame maintenance was a service the developer provided to the fanbase in order to maintain a positive relationship and keep its core business strong. The developers’ best interests were served by this – but so too were the players’ best interests. It was a cybervirtuous relationship.

In monetisation by microtransaction (‘free to play’, but also more than this, since console games have recently discovered the ‘pay-and-pay-more’ business model), the metagame will cease to be a community service the moment the developer is making decisions based purely upon how best to extract value from the player community. For instance, while Niantic’s gym overhaul was healthy for Pokémon GO’s metagame, some players have alleged that the developer has choked the supply of healing items while simultaneously adding these to the (monetised) shop. This risks being perceived by players as a move against them in the monetisation metagame. After all, it can hardly be argued that Niantic were losing money on the 65 million player behemoth. (It is not clear whether this particular allegation is well-founded, but such is the perception of some players at the very least.)

Compare the arrangement of the new Raid battles in Pokémon GO, and the ticket system (Raid Passes) that drives it. A new feature was added to the game, expanding its play and giving players something new to do. To recoup the cost of developing and testing the system, Niantic sell Premium Passes in the game shop, which can effectively be purchased with real money. To ensure everyone gets to take part, they give one Raid Pass away for free every day. This system strikes an effective balance between providing value to the players, and ensuring Niantic’s work is financially compensated. There is no equivalent claim to be made about monetising healing items, which does not obviously add value to the player experience, although the accusation in this particular case hinges on whether Niantic intentionally reduced the supply of these items from free sources (otherwise, this is merely the provision of another purchase option in the shop).

Game development is expensive, and the companies that undertake it deserve to be compensated for the work they do. However, when the metagame strays into value extraction and away from community satisfaction, something has gone wrong. It is worth noting that this can be extremely damaging for a game – the addition of microtransactions to Overkill’s Payday 2 as a result of pressure from publisher 505 Games very nearly sank the franchise, until Starbreeze (who own Overkill) bought back the rights in a $30 million deal the likes of which the games industry had never seen before. Behind this unprecedented legal agreement was the intention to keep the player community happy with the game they were playing. There is always more money to be made when you have a thriving community of contented players.

Metagames are important to the success of a game, both commercially and creatively. As such, the monetisation metagame is something that developers ought to be careful about playing. The most honourable question to ask about every proposed change should always be: “what extra value is the player getting for their money?” Whenever a change is introduced that is founded upon the question “what extra value are we extracting from the players?”, the monetisation metagame has turned toxic.

Agree? Disagree? Comments are always welcome!


Why Niantic's Gym Gamble Could Pay Off

What does Niantic hope to achieve with its substantial update of the gym system in Pokémon GO, already in progress? Is this about bringing in more players, maintaining the existing community, or improving monetisation?

Almost all news services this week have reported on the major update to Pokémon GO that is rolling out globally right now (the Android update is already available, and the iOS update is not far behind). The hugely successful Augmented Reality (AR) game still pulls in some 65 million monthly active users (MAU), which, while not in the same league as the behemoth Candy Crush Saga at 405 million, still makes it one of the world’s most popular games. For context, the entirety of Activision’s product line (including Call of Duty and Destiny) makes up 40 million MAU, and all Blizzard’s monthly active users (including World of Warcraft and Overwatch) amount to 41 million. Whichever way you look at it, Pokémon GO is a serious player in the market for games.

The changes coming in the new update primarily involve the gym system – but not the battles themselves, which remain unchanged. Rather, Niantic are redefining the way pokémon are stationed in gyms, providing new ways for players to engage with their local gyms and (in a further update coming a month later) adding Raids that are modelled on the endgame concepts popularised by World of Warcraft. You can read the most complete description of the changes in Niantic’s support article about the update.

Most sources have correctly reported that the new system limits gyms to only one pokémon of each species, transforming the metagame and ending the days of gyms containing a depressing multitude of Blissey, the franchise’s tedious ubertank. Most mention the arrival of the new Motivation system and the consequent retiring of the old Prestige system, which asked players to train at friendly gyms to increase the number of pokémon that could be stationed there. However, few have commented on the significance of these changes, and very few mention the most crucial aspect of this overhaul: players who have spent less time in the game now have a chance of participating in the gym battle system.

This, indeed, appears to be the primary motivation for Niantic’s overhaul. The current gym system suffers from major king-of-the-castle problems: players who have been playing from the beginning are now at approximately level 30-36, and can field pokémon with more than 3,000 Combat Points (CP), the measure of the little beasties’ strength in battle. Crucially, these high CP monsters lock up friendly gyms and make it very difficult for new players to have any involvement whatsoever in what’s going on in their local battlegrounds. New players are frequently unable to do much at gyms until they clear about level 20, and even then, there is an inescapable feeling that they are outclassed by the ‘big guns’ who have been playing for longer. This is never a healthy state for a player community: newcomers are all too easily discouraged.

The update not only limits the number of big guns in each gym by allowing just one pokémon of each species; the new Motivation system also radically changes the nature of gym defence. Motivation is a morale system – as the gym defenders are defeated in battle (or whenever they lie idle and don’t get to fight at all), they lose Motivation, which reduces their effective CP value. This makes them easier to defeat, which lowers their Motivation further, until eventually they give up and go home. Taking a gym is now about the pokémon chosen to defend it, not about Prestige, which in the old system could be reduced by battling just a few of the gym’s defenders.

Crucial to the success of this new system is this line from Niantic’s notes:

To help balance different Pokémon strengths and abilities, stronger Pokémon generally lose motivation more quickly than those that are not as strong.

Now the actual game implications of this design element will not become clear until the update has fully rolled out and Niantic turn the gyms on, but it is clear how this is intended to function: putting in ultra-strong 3,000 CP pokémon is no longer the only viable strategy for gym defence. These pokémon lose morale faster, and thus will presumably get kicked out of gyms faster. The new dominant strategy for gym defence, therefore, depends on finding a sweet spot in the strength of the monsters deployed to gyms – too strong, and they will suffer a crisis of morale too easily; too weak, and they’ll roll over in battle without putting up any serious resistance. The fact that there is a sweet spot to discover – and that it could be different in each local area – revitalises the gym system in a way that is highly likely to reinvigorate the interest of its existing players, as well as potentially bring back some that left it over its first year.
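To make the design space concrete, here is a toy sketch (in Python) of how a morale mechanic of this kind might behave. Every number in it – the decay curve, the CP scaling, the threshold at which a defender gives up – is invented for illustration; Niantic have not published their actual formulas.

# Toy sketch of a morale-style gym defence mechanic, in the spirit of the
# Motivation system described above. All numbers and curves are invented
# for illustration -- this is not Niantic's formula.

def motivation_decay_per_loss(cp: int) -> float:
    """Stronger defenders lose a larger share of motivation per defeat (hypothetical scaling)."""
    return min(0.30, 0.05 + cp / 20000)

def effective_cp(cp: int, motivation: float) -> int:
    """A defender fights at reduced strength as its motivation drops."""
    return int(cp * max(0.0, motivation))

# Example: compare a 3,000 CP ubertank with a mid-range 1,800 CP defender.
for base_cp in (3000, 1800):
    motivation = 1.0
    losses = 0
    while motivation > 0.2:          # below 20%, assume it leaves the gym
        motivation -= motivation_decay_per_loss(base_cp)
        losses += 1
    print(f"{base_cp} CP defender lasts ~{losses} defeats "
          f"(final effective CP: {effective_cp(base_cp, motivation)})")

Run with these made-up numbers, the weaker defender actually holds the gym through more defeats than the ubertank, which is exactly the kind of sweet spot the paragraph above anticipates.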

There’s an important ambiguity, however. Although it is something most Pokémon GO players are unaware of, the Prestige system for gyms (which is being retired) created a role for weaker pokémon as ‘prestigers’ who trained at friendly gyms to raise the Prestige and increase the number of available slots. The second-string monsters were relevant since bigger Prestige bonuses were awarded for using weaker pokémon to train, creating interest in a whole raft of relatively weak pokémon that would otherwise be useless. (I have been maintaining an army of Furrets for this very reason.) With Prestige going, there is a big question about whether the only useful gym attackers will now be the high CP pokémon – effectively undoing a great many of the benefits of the new system. However, if defenders lose more Motivation for being beaten by weaker pokémon (which would be sensible, but that doesn’t mean Niantic have done it…) there could be seriously interesting questions about which team to take into each and every gym battle.

The community is showing overwhelming support for the new changes, although there are some concerns about alterations to the rewards for defending gyms. Previously, you could collect PokéCoins (the in-game currency that can be purchased with real money) once every 21 hours, amounting to 10 coins per pokémon defending gyms, up to a ceiling of 100 per day. The 100 per day cap still stands, but now you only get coins when your pokémon loses its morale and returns from the gym. This creates substantial uncertainties that some players are already getting anxious about. It’s not clear whether this will end the strategy of ‘gym squatting’: players still have a motive to deploy many pokémon into gyms, and it’s not up to them when they return, which creates worries about yet further advantages to ‘gym shavers’ who (against Niantic’s terms and conditions) run multiple accounts belonging to different teams so they can manipulate the gym situation to their personal advantage.

What is very clear about Niantic’s plans with this gym update is that it is all about strengthening the community around local gyms. Players of all levels can give berries to defenders of their team to help them stay on – allowing players to effectively ‘vote with their berries’ as to which gym defenders they want to keep. Furthermore, players earn status at gyms, levelling up gym badges that confer advantages. These changes make a player’s relationship with their local gym stronger, which could have beneficial results for player experience. Add to this the Raids (which will start in July) that will create community events at local gyms, and this could give a serious boost to the sense of player community that Pokémon GO engenders.

Making a radical change to core game mechanics is always a risk, but Niantic seem to have thought this one through quite carefully, and it certainly has the potential to invigorate the existing player community and bring back a few players who left out of either boredom or frustration. There are still a lot of unanswered questions, particularly in terms of whether the Motivation system truly breaks the pokémonopoly of Blissey, Dragonite, Snorlax, Rhydon, Gyarados, and Vaporeon, which dominate every gym I have visited anywhere in the world. But even the possibility of thoroughly shaking up the metagame is one that players of Pokémon GO are excited about. Let the battle for the pokémon gyms begin – again!

Are you a Pokémon GO player sad to lose the Prestige system, or excited about having to only fight one Blissey per gym? Leave a comment – we’d love to hear from you!


Cyberamicable Game Design

Is it possible to design videogames to encourage friendships? This is a question about whether cyberamicable games are a possibility, and it’s one that Dan Cook’s recent work on friendship-building design patterns goes a long way towards answering.

Earlier this year, Dan Cook published a long report from his November 2016 Project Horseshoe visit, entitled Game Design Patterns for Building Friendships. This is precisely a discussion about what I have been calling cybervirtue, in the context of games, and I want to suggest that games that are designed to encourage friendship would be cyberamicable, since we call ‘amicable’ a person who gets on well with others, or who forms friendships easily.

Here’s an extract from Dan’s piece:

Games that lack the tools for disclosing personal info between two people will never facilitate deep relationships. They may never even facilitate shallow relationships since players see that there will never be a long term future for any relationship they form in the game. However, disclosure is a highly risky action and teams will often try to cut it from their designs. Sharing information before a relationship is strong enough can result in broken or antagonistic relationships.

There’s a ton of useful and thought-provoking ideas here, and it’s well worth a look for anyone working in the space of multiplayer games. Check it out!


Brian Green on Online Anonymity

Over on Psychochild’s Blog, Brian Green has a fantastic four part series exploring the relationship between privacy and anonymity, and arguing against the idea that removing anonymity would address the problem – both because this means giving up privacy, which we value, and because it is not practical to do so. Highly recommended reading for game designers and anyone interested in online abuse and privacy:

  • Part 1 looks at the relationship between privacy and anonymity, and the key questions about anonymity.
  • Part 2 examines the harms entailed in removing anonymity.
  • Part 3 makes the case for the impossibility of enforcing public identity and restricting anonymity.
  • Part 4 looks at dealing with the problems of online behaviour, and the changes that might be required.

You can read some brief responses from me over at Only a Game, and I shall respond in full in about two weeks’ time with a piece entitled Lessons from the MUD. Watch this space!


The Greatest Detective Game Ever Made

Over at Kotaku, Paul Walker-Emig has a wonderful piece on my first game as lead designer and writer, Discworld Noir. It’s called Discworld Noir: The Greatest Detective Game Ever Made, which is very flattering, especially since (as Paul admits) this game is mostly unknown, or otherwise forgotten. Here’s an extract from the start of the piece:

The forgotten Discworld Noir’s greatness hangs on a simple design element: the notebook. All the other artefacts of the hardboiled detective are there in this noir-inflected take on Terry Pratchett’s Discworld: the trenchcoat and trilby protagonist Lewton wears, treading through the rain that forever hammers the streets; a femme fatale straight from the big book of archetypes; storylines and characters taken wholesale from the pages of Chandler and Hammett; a cool jazz soundtrack evocative of the golden age of the PI. But it is clues and deduction that define the detective. There is the notebook, and then everything else is superficial.

What’s more, Paul’s piece has flushed out some Discworld Noir fans from the woodwork! Here’s a tweet by Dave Gilbert* (The Shivah, The Blackwell Legacy, Emerald City Confidential) confessing that Noir was an influence:

Dave Gilbert Tweet

This means a great deal to me, not only because Dave is a brilliant indie developer, but because I’ve always lamented not having influenced anyone else’s design work. The notebook in Noir, as Paul draws out, was a big moment for me as a game designer and narrative designer, and I was always disappointed that it sank without a trace. It seems this was not the case!

You can read the entirety of Discworld Noir: The Greatest Detective Game Ever Made over at Kotaku.

*Not that Dave Gilbert, the other one with the really amazing indie career.


Game Inventories

Game Inventories was a serial in five parts that ran here at ihobo.com from August 31st to September 28th 2016. Within it, pairs of games are examined, and the lineage connections between them are considered, especially in connection with inventory practices. Each of the parts ends with a link to the next one, so to read the entire serial, simply click on the first link below, and then follow the “next” links to read on.

Here are the five parts:

  1. Game Inventories (1): Minecraft and Dungeon Master
  2. Game Inventories (2): The Bard's Tale and Dungeons & Dragons
  3. Game Inventories (3): Diablo and Daggerfall
  4. Game Inventories (4): Resident Evil 4 and X-Com
  5. Game Inventories (5): EverQuest and MUDs

Special thanks to Erlend Grefsrud, Griddle Octopus, Doug Hill, Jacobo Luengo, Sketchwhale, Oscar Strik, VR Sam, Worthless Bums, and José Zagal for contributing to brief discussions on Twitter that helped shape this short serial. Additionally, and this is always the case when I talk of the history of games, I am indebted to my friend and colleague Richard Boon.

If you enjoyed this serial, please leave a comment. Thank you!


Game Inventories (5): EverQuest and MUDs

One final element of Minecraft’s inventory practices remains unaccounted for: the bar at the bottom that allows rapid access to the contents of the inventory. This is clearly an inventory practice that makes no sense at the tabletop, yet it will hardly be a surprise at this point to demonstrate that it too descends from a lineage that traces its departure point to Dungeons & Dragons. In this case, the pivotal game is Sony Online Entertainment’s EverQuest (1999), one of the first of the ‘graphical MUDs’ – what would become known as a Massively Multiplayer Online Role-playing Game or MMORPG. Pictured at the top here is an inventory window from EverQuest II (2004), which shows another conventional grid inventory, with the bottom three rows marked with keyboard shortcuts: this is what EverQuest termed a hotbar, and what would come to be known as a quickbar (styled in Minecraft’s case as a quick-bar).

Tracing the practices of MMOs, or indeed any game that is run as a service, requires significantly greater effort than investigating games that were released as products. Games-as-a-service mean constant changes and updates, and this makes the archaeology difficult to perform adequately. Nonetheless, the picture here shows a very early (perhaps the first) form of the hotbar in the original EverQuest. The player is able to customise its contents by placing different actions (at this point primarily described in words, e.g. “Melee Attack”) onto the bar, where they can be quickly clicked with the mouse, or activated with a hotkey. The name ‘hotbar’ is clearly a reference to the concept of a ‘hotkey’, which has its origin in the graphical interfaces of computer operating systems.
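As a rough illustration of what is going on under the hood, here is a minimal sketch of the data structure such a hotbar implies: a fixed row of slots, each bound to a key, each holding whatever action the player has assigned to it. The class and the example bindings are invented for illustration – this is the general idea, not EverQuest’s actual implementation.

# Minimal sketch of a hotbar/quickbar: slots bound to keys, each holding
# whatever action the player has dragged onto it. Names are invented.

from typing import Callable, Dict, List, Optional

class Hotbar:
    def __init__(self, keys: List[str]):
        # One slot per key; empty slots hold None until the player fills them.
        self.slots: Dict[str, Optional[Callable[[], None]]] = {k: None for k in keys}

    def assign(self, key: str, action: Callable[[], None]) -> None:
        """Player customisation: place an action onto a slot."""
        self.slots[key] = action

    def press(self, key: str) -> None:
        """A hotkey press (or mouse click on the slot) triggers the stored action."""
        action = self.slots.get(key)
        if action:
            action()

# Usage: bind a couple of text-described actions, as early EverQuest did.
bar = Hotbar(keys=["1", "2", "3"])
bar.assign("1", lambda: print("Melee Attack"))
bar.assign("2", lambda: print("Cast: Minor Healing"))
bar.press("1")   # -> Melee Attack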

It appears to be EverQuest’s early competitor Dark Age of Camelot (2001) that coined the term quickbar, and as with all games of this style, the design varied radically throughout its life. The images above depict one of the last versions of the iconography used (top), and the original green and gold iconography (bottom). The functionality, however, remains parallel to the equivalent practices of EverQuest.

Computer RPGs were already moving towards this kind of customisable inventory practice as the available hardware resources increased and games took advantage of this to add more functionality. The action bar at the bottom of the screen in Baldur’s Gate (1998) is a proto-quickbar, even though inventory items are a small part of the space allocated for it. Similarly, Diablo II (2000) offers a quickbar-like system that is presented as being part of the world of the game by linking its functionality to belt items. Each belt provides the capacity to access potions, with different belts having varying capacities. However, by Diablo III, this experiment had merged with the main lineage of quickbar practices, which blossomed in the MMORPGs. To appreciate why, we should examine the two decades before the first MMORPGs, and the lineages of the original multiplayer fictional worlds: MUDs.

 

MUD1 and its Descendants (1978)

When I met Richard Bartle in Dundee this year for the first international joint conference of DiGRA and FDG, where he was giving a keynote, I asked him about the influences that fed into MUD1, the 1978 game that took a simple database, hooked it up with a parser, and connected it to the outside world with BT’s packet switch stream (a precursor to the internet). Bartle was keen to play down the influence of the text adventures, admitting that the parser idea had come from them, but suggesting that if it hadn’t come from games like Colossal Cave Adventure (1977) or Zork (1977/1980) it would have come from elsewhere. I’ve already suggested we should set aside such counter-factual reasoning: a history is a narrative that connects the events that occurred, and we should not be too distracted by mere possibilities when constructing one.

Similarly, while Bartle and his co-designer Roy Trubshaw had played Dungeons & Dragons, which clearly serves as an influence in the trajectory of the MUDs, Bartle was keen to point to single-player games of his own devising, similar in form to the early tabletop games, as influences on the making of MUD1. This isn’t entirely surprising, since while it was D&D that spread the practices widely by being published, there were numerous proto-RPGs in circulation in the time preceding it. The collision of tabletop player practices with the world practices of novels created unique conditions for the creation of new player practices focussed on narrative play, out of which springs the explosion of inventiveness for which D&D is a key locus of influence.

The early MUDs, however, were much more exercises in world building and community play than adaptations of Dungeons & Dragons. It is the LP MUDs (1989) and especially the DIKU MUDs (1990), originating in Sweden and Denmark respectively, that saw in the MUDs the opportunity to (yet again) adapt D&D for computer form, repeating what had happened back in 1974 on the PLATO educational network. From its first publication through to the early 1990s, wherever there was an opportunity to adapt the various player practices of D&D into a computerised form, it was taken.

The inventory systems of all these games remain resolutely in the style of the early text adventures, and thus in the form of D&D: a list of words. A text command ‘inventory’, often available as just ‘i’, would list all the items that the player was carrying in a simple linear list. Each item was specified in the design of the game, either as a unique object (in most adventure games) or as a class to be instanced (in computer RPGs and MUDs). As long as these games were represented in text, there was no possibility of it being otherwise.

Where, then, is the connection to the highly customisable quickbar? Players of MUDs often found that there were actions (or clusters of actions) that they needed to perform frequently, and swiftly hit upon a solution: running additional software alongside their connection to the MUD that supported macros. A macro was simply a script of text commands coupled to a key press that triggered it, typically (but not exclusively) one of the function keys (F1-F12), which were ideally suited for such purposes. Later client software for MUDs began to build these macro systems in automatically, because the player practices had become dependent upon the macro concept for smooth play. Note also that it was the players who added this element to the MUDs, with no involvement from the game developers.
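Since macro syntax varied from client to client, here is a deliberately generic sketch of the idea: a key bound to a short script of text commands that is sent to the MUD when pressed. The key names and command strings are invented for illustration, and do not reproduce any particular client’s syntax.

# Generic sketch of the MUD macro idea: a function key bound to a short
# script of text commands. Commands and bindings here are invented.

MACROS = {
    "F1": ["kill goblin", "get coins from corpse"],
    "F2": ["say Well fought, everyone!"],
    "F3": ["wear shield", "wield longsword"],
}

def send_to_mud(command: str) -> None:
    # Stand-in for writing a line to the telnet connection.
    print(f"> {command}")

def on_key_press(key: str) -> None:
    """Expand a bound key into its scripted sequence of commands."""
    for command in MACROS.get(key, []):
        send_to_mud(command)

on_key_press("F3")   # one press sends both equipment commands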

Because the developers of EverQuest were MUD players, they appear to have been drawn to providing customisable interface elements like the hotbar, thus accelerating the development of what would come to be called the quickbar: these were (on this reading) a graphical substitute for macros, a customisable element that could be tailored to the individual player’s practices. MUDs required more actions in part because they brought together multiple players, which necessitated communication and performance that are irrelevant in a single-player game. MMORPGs inherited this requirement, and developed the quickbar practices to deal with it.

Here, in this final element of Minecraft’s inventory design, is an example of why examining the history of games as player practices can reveal aspects that are invisible if they are examined solely as artefacts, since it is only through the actions of the players that the practices of games are sustained. The design of every game is conditioned by the conservation of player practices, which sustains those practices that are effective at satisfying the visceral or imaginative needs of players. Every example within this serial serves to elucidate this point, and to show how games are never isolated objects: they are always embedded in the manifold of player practices responsible for their creation, and which they then contribute to maintaining.

The player is the heart of the game, and game design conserves player practices because designers are also players. We can trace lineages not because successful games are the rare exception that borrow their practices from earlier games, but because games that borrow the majority of their practices from earlier games are best positioned to be successful – especially if they can manage to bring something new to the table in the process. Notch probably did not play tabletop Dungeons & Dragons, or The Bard’s Tale, or Dungeon Master, or UFO: Enemy Unknown, or EverQuest, but the inventory practices of Minecraft nonetheless inherit the successful variations that these games introduced upon a bedrock of established player practices.

With thanks to Erlend Grefsrud, Griddle Octopus, Doug Hill, Jacobo Luengo, Sketchwhale, Oscar Strik, VR Sam, Worthless Bums, José Zagal and, always when I talk of the history of games, to my friend and colleague Richard Boon.


Game Inventories (4): Resident Evil 4 and X-Com

The Attaché Case inventory in Resident Evil 4 (2005) feels like a logical next step from Diablo II’s grid inventory five years earlier. Both involve positioning multi-cell objects in a fixed size grid, with weapons varying in size. Resident Evil 4, however, takes the idea one step further, allowing the player to rotate the items, and providing a ‘swapping space’ to store items temporarily while the player fiddles with the layout. It’s an ingenious and absorbing design that most players loved, although a few complained about the way it broke them out of the world of the game (the aesthetic flaw I have called rupture).
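For readers who want the underlying idea spelled out, here is a minimal sketch of the core check behind an attaché-case-style grid: does a w × h item, optionally rotated through 90 degrees, fit at a given cell? This is the general technique, not Capcom’s code; the grid size and item shapes are invented for illustration.

# Minimal sketch of a multi-cell grid inventory with optional rotation.
# Grid dimensions and item sizes below are invented for illustration.

class GridInventory:
    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.occupied = [[False] * width for _ in range(height)]

    def fits(self, x: int, y: int, w: int, h: int) -> bool:
        """True if a w x h item can be placed with its top-left corner at (x, y)."""
        if x + w > self.width or y + h > self.height:
            return False
        return all(not self.occupied[y + dy][x + dx]
                   for dy in range(h) for dx in range(w))

    def place(self, x: int, y: int, w: int, h: int, allow_rotation: bool = True) -> bool:
        """Place the item, trying a 90-degree rotation if the upright fit fails."""
        for (pw, ph) in ((w, h), (h, w)) if allow_rotation else ((w, h),):
            if self.fits(x, y, pw, ph):
                for dy in range(ph):
                    for dx in range(pw):
                        self.occupied[y + dy][x + dx] = True
                return True
        return False

case = GridInventory(width=10, height=6)
print(case.place(0, 0, w=2, h=5))   # first rifle stored upright: True
print(case.place(6, 0, w=5, h=2))   # too wide to lie flat here, fits rotated: True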

Yet if we examine this game from the perspective of player practices then what we are dealing with is not a progression from Diablo II at all – for that particular game was never released in Japan, and is vanishingly unlikely to have been an influence upon the design of Resident Evil 4. Indeed, the player practices that condition the creation of this particular post-survival horror game are primarily those of the Resident Evil franchise itself, which represents an entirely parallel development of the grid inventory concept from largely different influences. I’ve warned previously about the danger of bringing in counter-factuals to examine the history of game design – but when it comes to this instance, we get an alternative history of a single design element because its actual history was different.

We do not, however, cast aside the influence of Dungeons & Dragons in this version of events, even though the game never had a serious following in Japan. Rather, it was once again Wizardry that took the design of the original tabletop RPG and brought its influence to Japan – firstly, as Henk Rogers’ Black Onyx (1984), which was Japan’s first computer RPG, and soon after as Dragon Quest (1986), and from there into the Japanese RPG lineage, a complete analysis of which would be a major project in itself. (Henk Rogers, incidentally, was a D&D player at the University of Hawaii, where he was studying business – and immediately saw how Wizardry’s adaptation of the tabletop RPG practices was a way to make money; he just had to bring it somewhere new…)

The original Resident Evil back in 1996 (Biohazard in Japan) has two key points of influence. From a technical standpoint, it is clearly inspired by Alone in the Dark (1992), which in turn was inspired by the hugely influential tabletop RPG Call of Cthulhu (1981), written by Sandy Petersen (who would go on to be a level designer for id Software, bringing Lovecraft’s Cthulhu mythos into their games). From a player practice perspective, however, it does not seem that Capcom’s team was coming from Alone in the Dark at all, but rather from an obscure Japanese RPG entitled Sweet Home (1989), itself adapted from a Japanese horror movie of the same name. This is hardly a secret: Tokuro Fujiwara, who made Sweet Home, was one of the two key people responsible for directing Capcom’s first Resident Evil game, and it is relatively clear that Sweet Home provides the rough draft of Resident Evil’s inventory practices – including the idea of limiting space in the inventory, the use of save rooms to store items, and individual character items like the lockpick and lighter.

The player practices of the survival horror genre are centred around the inventory, and the limitations therein that Sweet Home pioneered. Having rendered the inventory as a grid for the first game, Capcom went on in Resident Evil 2 (1998) to make some weapons take up two spaces in the inventory, adding to the difficult decisions that had to be made. While reviewers complained about the limitations of the inventory, and the surreal quality of the Item Box that shares items between all save rooms, when Capcom eventually removed these in their final survival horror game, Resident Evil Zero (2002), the inventory system unravelled completely. Players were forced to choose a room in which to lay out all of their belongings, which was even more surreal than the Item Boxes!

Alas, the decision to give up on the player practices of the survival horror game had already been made by the time Resident Evil Zero shipped: Mikami-san had been ordered to make an action game, which is where Resident Evil 4 came from. On the foundations of Capcom’s own inventory practices, the ‘perfect’ grid inventory of Resident Evil 4 was born. But once survival horror had been traded for action, the stop-and-start inventory eventually had to go to make room for multiplayer, and the perfection of the grid inventory in Japan ultimately proved a dead end.

 

UFO: Enemy Unknown AKA X-Com (1994)

The connection between the inventory in UFO: Enemy Unknown and that of Diablo that it inspired is readily apparent: here, for the first time anywhere in the world, is a grid inventory where the equipment items take up variable cells of the grid, creating interesting decisions when equipping characters. This game, known as X-Com: UFO Defense in the United States where it enjoyed tremendous commercial success, was to found a hugely successful strategy franchise, yet its influence is nowhere more apparent than in its provision of the multi-cell grid inventory practices to Diablo.

Just as the case of Resident Evil 4 depended upon a contiguous set of player practices from one linear sequence of games, so the influence that led to Diablo’s grid inventory came from a contiguous sequence of games, in this case those of the British programmer and game designer Julian Gollop. From the age of 14, Gollop was playing Dungeons & Dragons and the strategy boardgames of Avalon Hill that had inspired it. Unlike everyone else considered in this serial, Gollop was strongly influenced by the design of strategic boardgames, and began creating games on 8-bit home computers that adapted the player practices of these games.

Rebelstar Raiders (1984), pictured right, was one of Gollop’s first experiments with putting a strategy boardgame onto a computer, in this case the ZX Spectrum. With no AI at all, the game could only be played with two players, a limitation that was fixed with the sequels Rebelstar (1986) and Rebelstar II (1988). While the latter two games did allow for changes in weaponry, none of these titles featured an inventory system, which is not surprising since the strategy boardgame practices they had adapted never used an inventory concept either. This was the invention of D&D and the tabletop precursors that inspired it.

It is with Laser Squad (1988), pictured left, that Gollop begins to combine D&D-style differentiated characters – and thus inventories – with the player practices he had developed across his Rebelstar games, the last of which had been released earlier in the same year. As the screenshot makes clear, each member of the player’s squad has a name and an inventory of weaponry, shown with small icons. The more equipment a squad member carries, the more rapidly they run out of action points, and thus tire. The player practices of X-Com descend directly from Laser Squad – indeed, it was originally conceived as Laser Squad II, and both games combine an RPG-like differentiation of characters with the practices of a strategy wargame.

The step up to the full grid inventory with multi-cell weapons in X-Com feels like a substantial progression from Laser Squad, and six years separate the two games (although the last edition of Laser Squad, for PC, wasn’t released until 1992). It seems likely that in the intervening period, Gollop encountered Dungeon Master, and hence the grid inventory. However, the effect of multi-cell items on the grid inventory concept results in a substantial shift in the player practices, as Diablo made clear. I speculate that the influence here might have come from Steve Jackson’s classic tabletop autoduellist wargame Car Wars (1981), for which allocating weaponry to the limited spaces available in the chassis was a major element. Since Gollop worked upon the 8-bit videogame for Games Workshop’s 1983 dodgy knock off, Battlecars, it seems likely he was exposed to Car Wars player practices. But perhaps, like Resident Evil’s parallel lineage of grid inventories, Gollop just hit upon the idea on his own as he continued to develop his own unique lineage of strategic videogames.

Next week, the final part: EverQuest and MUDs


Game Inventories (3): Diablo and Daggerfall

One key aspect of Minecraft’s inventory practices is absent from all of the examples previously examined: crafting. Indeed, crafting formed no part of Dungeons & Dragons player practices until the 3rd edition in 2000, and none of the early computer role-playing games descending from it feature this concept. The means of creating magical items that tabletop D&D offered has next to no influence, however. It seems to be Diablo II (2000) that largely establishes the player practice of crafting through the introduction of the Horadric Cube (pictured left), a secondary grid inventory of 3x4 spaces that includes a button to transmute its contents into a new magical item. This provided a means for players to create endgame items beyond waiting for them to drop, and although its function was considered fairly arcane, it was nonetheless a central part of many players’ experiences of this game.

That the crafting box in Minecraft resembles that of the Horadric Cube is not coincidental: Diablo (1996) and Diablo II (2000) were so commercially successful that it is these games (and those of the Elder Scrolls series, discussed below) that anchor the conservation of player practices in Western-style computer role-playing games from this point onward. The Japanese computer RPG lineage, which also traces its heritage back to D&D via The Black Onyx (1984) and Wizardry (1980) before it, would tell a different story, and one that exceeds the scope of this serial to adequately trace. Again, as I pointed out last week, we can tell whatever counter-factual stories we like about the ways things could have gone, but they cannot nullify the influences upon the actual history of games.

A less influential aspect of Diablo II’s crafting practices is items with sockets, which can be fitted with gems in order to power up the item in question. Like the Horadric Cube, sockets attempt to make the overflowing treasure tables of Diablo something other than a demand to go to the shop and dump the junk. Ernest Adams has joked that computer role-playing games make their players into “itinerant second-hand arms dealers” – the practices added to Diablo II attempt (rather unsuccessfully, as it happens) to offset the root causes of this, and crafting ever since has built upon this idea. The only other major contributor to early crafting practices appears to be Daggerfall, discussed below.

The original Diablo is one of the games that synthesises influences both from tabletop Dungeons & Dragons, and its computer game inheritors. Co-creators Erich and Max Schaefer had played in the kind of mindless dungeon bash style of D&D that was common (but by no means universal) in the early days of the hobby:

We wanted to do an RPG how we’d played Dungeons & Dragons as kids: hit monsters and gain loot. Our mission was that we wanted the minimum amount of time between when you started the game up to when you were clubbing a skeleton.

Indirect influences came in via the other co-creator, David Brevik, who had played Moria (1975/8) and Angband (1990), two early roguelike games descended ultimately from the unimaginatively titled dnd (1974) on the PLATO educational computer network, written in the same year that tabletop D&D appeared.

The inventory in Diablo has another key point of influence, however, namely Julian Gollop’s X-Com: UFO Defense (1994), originally entitled UFO: Enemy Unknown. The influence here was in the idea of breaking out of the original grid inventory concept, which allocated one square to an item, by having items take up multiple spaces. As the crop of Diablo’s inventory screen to the right shows, weapons take up between three and six spaces in the grid, in various configurations, an inventory practice established by and descending from X-Com, which all three of the Diablo creators mentioned above point to as their inspiration for the interface design. This thread will be picked up next week.

 

Daggerfall (1996)

The same year that Diablo was released, Bethesda were working on a follow-up to their first Elder Scrolls game, Arena (1994). Just as Condor (later, Blizzard North) had been engaged in player practices from both the tabletop and from computer RPGs, Bethesda’s influences came from both lineages, with Ultima Underworld: The Stygian Abyss (1992) mentioned as informing the design of Arena. But Bethesda were deeply involved with the narrative practices of tabletop role-playing, and were far more interested in role play than the simple kill-and-level rule play that inspired Diablo. Daggerfall marks the point at which their influences change from Dungeons & Dragons to later tabletop role-playing systems, particularly Steve Jackson Games’ GURPS (1986).

An interview on Gamespy with designer Ted Peterson (archived in this forum discussion) makes the connection explicit when discussing Daggerfall’s character creation system:

Julian [LeFay] and I had decided to go with a skill-based advancement system rather than Arena’s kill-the-monster-and-advance system, so each of the classes had been assigned different skill sets. Given that, it made sense to allow players to create their own classes assigning their own skills. Then, thinking about GURPS, we added additional bonuses and special abilities and disabilities that the player could assign. I’ve always enjoyed character creation systems in games of all kinds. I don't like playing Gamma World, but even now when I'm bored, I'll sometimes roll the dice and see what kind of mutations my character would develop if I actually wanted to play the game.

The move here is towards the second generation of tabletop role-playing games, like GURPS, with their emphasis on putting the player in charge of character creation and away from dice-rolling practices, like Gamma World (which was TSR’s post-apocalyptic game, using rules very close to D&D). White Wolf’s Vampire: The Masquerade (1991) has also been mentioned as feeding into the world of Daggerfall, with clans of vampires added into Tamriel for this iteration of the Elder Scrolls franchise.

A striking aspect of the inventory screen in Daggerfall is its division into categories like Weapons & Armour, Magic Items, Clothing & Misc, and Ingredients. As noted last week, this was a common aspect of Dungeons & Dragons character sheets, but it hadn’t been used much in computer RPGs. The influence of tabletop practices is also felt in Bethesda’s crafting systems. Arena had a spell creation system that was clearly a modular version of D&D’s fixed-definition spells. Daggerfall, on the other hand, has a more detailed spell and weapon enchantment system, drawing very clearly from the GURPS concepts of Advantages and Disadvantages, which would go on to influence Fallout (1997).

As noted in respect of Diablo, tabletop RPGs hadn’t had a motive to include crafting systems, but the excessive volumes of loot earned in computer RPGs (a product, in part, of much faster-paced play) created a need to find other things to do with items than just sell them. For Daggerfall, the system that most resembles future crafting practices is the Potion Maker, pictured above. Certain items in the game were characterised as Ingredients and could be combined in a Mixing Cauldron, accessed from Temples or the Assassins’ Guild. Mixing could be done freely, or recipes (acquired as treasure drops) could be used to operate the Mixing Cauldron automatically.

Although Daggerfall’s Mixing Cauldron is narrower in scope than Diablo II’s Horadric Cube, both are initiating the same kind of player practice: one where the inventory is not just a source of equipment for immediate use (as in Dungeons & Dragons), or simply fodder for sale (as in most computer RPGs prior to the 90s), but a set of active elements that can be combined in different patterns to get better equipment. These player practices made no sense at the tabletop, where complex look-up tables would be required. But on the computer, the availability of automation within the game systems kicked off experimentation with crafting as soon as there was sufficient memory space for such luxuries. It was thus the widespread adoption of the CD drive around 1993 that opened the door to crafting practices, which have thrived ever since.

That technological limitations play a role in delaying the onset of crafting can be seen by considering one of the few examples of crafting prior to the 90s: David Jones’ Finders Keepers (1985). This budget gem (Mastertronic sold it in the UK for £2, one fifth of the price of most games at that time) had a five-slot inventory of the style we have already seen in The Bard’s Tale. The twist was that certain items would automatically react in the player’s inventory to form new items, e.g. the Bar of Lead and the Philosopher’s Stone would switch to a Bar of Gold, while Sulphur, Saltpetre, and Charcoal would produce Gunpowder – used to escape the castle in the ZX Spectrum version, and resulting in the player blowing up (Game Over!) on the Commodore 64. But unlike Diablo and Daggerfall, this game failed to influence even its own sequels, which never again experimented with crafting. The 8-bit era does, however, play an important part in the story of game inventories, which we will turn to next week.
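The common thread running from Finders Keepers’ automatic reactions to Daggerfall’s Mixing Cauldron and Diablo II’s Horadric Cube is a simple lookup from a set of held items to a crafted result. Here is a toy sketch using the two recipes mentioned above; real implementations layer quantities, quality rolls, and failure chances on top of this, and whether an ingredient like the Philosopher’s Stone is consumed is a design choice glossed over here.

# Toy sketch of crafting as a lookup from a set of held items to a result.
# Recipes are the ones named in the paragraph above; everything else is simplified.

from typing import Dict, FrozenSet, Set

RECIPES: Dict[FrozenSet[str], str] = {
    frozenset({"Bar of Lead", "Philosopher's Stone"}): "Bar of Gold",
    frozenset({"Sulphur", "Saltpetre", "Charcoal"}): "Gunpowder",
}

def craft(inventory: Set[str]) -> Set[str]:
    """Apply the first recipe whose ingredients are all present, consuming them."""
    for ingredients, result in RECIPES.items():
        if ingredients <= inventory:                 # subset test
            return (inventory - ingredients) | {result}
    return inventory

print(craft({"Bar of Lead", "Philosopher's Stone", "Lantern"}))
# -> {'Bar of Gold', 'Lantern'} (set order may vary)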

Next week: Resident Evil 4 and X-Com


Game Inventories (2): The Bard's Tale and Dungeons & Dragons

The Bard’s Tale (1985)

The ridiculously over-titled Tales of the Unknown, Volume 1: The Bard’s Tale was released in 1985, two years before Dungeon Master, which, as shown last week, introduced the grid inventory practice that persists all the way to the present day. It’s a measure of the influence of The Bard’s Tale (as it came to be known) that the sequel drops the Tales of the Unknown branding and goes for The Bard’s Tale II: The Destiny Knight (1986). It ultimately goes on to complete a trilogy of games that are fondly remembered, despite not ranging very far from the practices that Sir-Tech’s Wizardry: Proving Grounds of the Mad Overlord (1980) established five years earlier in adapting tabletop Dungeons & Dragons (D&D) to the computer.

As the screenshot above shows, each character in the party is allowed eight items in their inventory. The ordering of the items makes no difference, and equipped items are distinguished from unequipped items by an asterisk. The Warrior shown in this picture has a shield, various items of armour, and a halberd to attack with. His appearance in the display window in the top left is utterly unaffected by what is equipped, but unlike Wizardry the images of both party members and monsters that appear in this corner are pleasingly animated, which added to the appeal at the time of its release.

There is one difference between Wizardry’s inventory and that in The Bard’s Tale: the former used a question mark to indicate items that had not been identified, a player practice invented by D&D but largely maintained only in Rogue (1980) and its descendants. The Bard’s Tale eliminates this practice for simplicity, and indeed can be seen as a refinement of its immediate predecessor, Wizardry, knocking out all manner of clunky elements that had either been inherited from its tabletop forebear (like identifying magical items) or thrown in for good measure (such as adding attribute points on top of random base values during character generation).

The true extent of the conservation of player practices between Wizardry and The Bard’s Tale can be felt with the latter’s sequel, The Destiny Knight, which allowed players to import their party of characters not only from The Bard’s Tale but also from Wizardry or Ultima III (1983). This was possible because the player practices that all three games were built upon were all intimately tied to D&D. Wizardry’s attributes of Strength, IQ, Piety, Vitality, Agility, and Luck become Strength, IQ, Constitution, Dexterity and Luck in The Bard’s Tale. It thus reverted Vitality and Agility to D&D’s terms Constitution and Dexterity, maintained Strength (also from D&D), kept Intelligence as IQ, and dropped Wizardry’s Piety, which had been renamed from D&D’s Wisdom. The only thing new in the computer games is Luck, which replaces D&D’s Charisma, since early computer RPGs struggled to implement any kind of character interaction, and The Bard’s Tale takes this attribute directly from Wizardry.

It will thus come as no surprise that Michael Cranford, the game designer and programmer at Interplay who was responsible for almost every aspect of The Bard’s Tale except its art, was not only playing Dungeons & Dragons at the tabletop (frequently as Dungeon Master) but also playing a great deal of Wizardry. Like the team at FTL responsible for Dungeon Master two years later, Cranford wanted to create a ‘Wizardry Killer’, and with The Bard’s Tale achieved a streamlined perfection of the player practices of that earlier game, as well as bringing in a few of the new player practices TSR had added in Advanced Dungeons & Dragons, such as changing classes – which is precisely where the Bard character class had come from in the first place.

 

Dungeons & Dragons (1974)


It is hopefully clear at this point that for almost twenty years after its original release, TSR’s Dungeons & Dragons was the wellspring from which the player practices of computer role-playing games were established. D&D had many influences from the tabletop scene preceding it, not least of which were the wargames of Charles S. Roberts’ Avalon Hill, but the sheer degree to which the D&D rules were distributed throughout the US (primarily via college players) – both by purchase and through unlicensed copies – made this the definitive version of tabletop role-playing player practices, the version that was conserved by the computer variants.

The tabletop successors of D&D become a part of the story only a decade or so later, e.g. Steve Jackson Games’ GURPS (1986), which gives rise to Interplay’s later Fallout (1997). All the way through the 70s and 80s, D&D was feeding its player practices directly or indirectly into computer games. This is particularly clear in terms of the First Person Shooter lineage. As noted last week, the original controls of id Software’s games were inherited from the cursor key movement of The Bard’s Tale and Wizardry, not to mention that the developers were avid players of D&D itself, with one campaign specifically inspiring DOOM (1993). Furthermore, the open world genre equally ties back to Dungeons & Dragons via Traveller (1977) and Space Opera (1980), two early tabletop RPGs that were beloved by David Braben and Ian Bell, whose Elite (1984) would be a major influence on Grand Theft Auto (1997).

In terms of inventories, Dungeons & Dragons effectively invents them (although not the name...) – or rather, acquires this practice from early non-commercial tabletop role-playing games, and then becomes the locus of the conservation of player practices by being so widely distributed. The core element of the inventory practices is the use of a set for representation instead of mere function. Sets had enjoyed millennia of productive use in board and tile games before Dungeons & Dragons used them to create a revolutionary new player practice: the character sheet. In collecting together all manner of fields (Name, Class, Attributes, Alignment and so forth) into one set of sets (a superset), D&D was creating a means of representing entities that would be picked up by early videogames and gradually explored over the decades that followed.
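Seen from a programmer’s perspective, the character sheet is simply a record of fields, with the inventory as nothing more than a written list inside it. Here is a minimal sketch of that ‘set of sets’; the example character and the items in the list are invented for illustration.

# Minimal sketch of the character sheet as a 'set of sets': named fields,
# one of which (the inventory) is just a list. Example values are invented.

character_sheet = {
    "Name": "Fenric the Bold",          # hypothetical example character
    "Class": "Fighter",
    "Alignment": "Lawful",
    "Attributes": {
        "Strength": 16,
        "Dexterity": 12,
        "Constitution": 14,
    },
    "Inventory": [                      # a subset of the items the rules define
        "Longsword",
        "Shield",
        "Rope (50 ft)",
        "Torch",
    ],
}

# The 'superset' view: every field, itself a set (or list) of values.
for field, value in character_sheet.items():
    print(field, "->", value)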

Within the superset of data that was the written character sheet, the inventory was nothing more or less than a written list, recording a subset of all of the possible items specified by the written rules. The original white box game of 1974 did not have a pre-designed character sheet, and players recorded all the text and numbers required to specify their characters without any established format. However, there was soon a thriving experimentation in home-printed character sheets, like the one depicted above that was created by Bob Rupport in 1975.

Many other variations followed until TSR established an official printed character record sheet in 1977, a 1980 variation of which appears to the right. In fact, as both these examples attest, the early tabletop RPG inventory was quite frequently multiple written lists: Rupport’s version divides the inventory into sections named Weapons/Armour, Magic Equipment, and Other Equipment, while TSR’s official version provides separate boxes for Magic Items and Normal Items, stressing the importance of Magic Items (acquired as treasure) to character advancement, both in the tabletop game and in its immediate successors. At heart, this choice to list items in two separate sets had no representative force, and it is hardly surprising that Wizardry and The Bard’s Tale didn’t bother to make the distinction. They also limited the size of the list to eight items (the paper sheets having no such limitation) as a practical matter tied to the keyboard: number keys could be used easily to select which item was to be used.

The tremendous influence of Dungeons & Dragons, in terms of the use of sets for representation (inventories, character sheets), character progression (experience points, levels), and codifying game worlds (dungeon, village, wilderness) may inevitably invite the retort: if it had not been D&D, it would have been something else. However, this counter-factual argument is not as important as it feels to game designers as they try to shrug off their debts to their direct influences. The 1980 release of Wander (a rival claimant to being the first text adventure, but nowhere near as influential as the D&D-descended Colossal Cave Adventure) has an inventory command listing the items you are carrying; perhaps this was also in the original 1974 release. But game design ideas do not exist in a Platonic void from which they are magically plucked. Game designers are players, and they are embedded in the player practices of the games they play. Once this is accepted, the legacy of Dungeons & Dragons is unavoidable and undeniable, and it comes into focus as the most influential game ever published.

Next week: Diablo and Daggerfall