Monthly Archives: November 2011

The Lonely Psychopath And The Tragedy Of The Commons

The topic of today’s overly long blog post is ‘permanence in online worlds’. A few months ago, as a response to the blog post I made about morality[1], I was quizzed on why there was a necessity to formalise a morality system at all in a game. Many games don’t bother with such consequences, and where they have some kind of moral impact for what you do it’s based on having done certain things within the game. If you kill the little fairy princess, then people respond to that specific act rather than as a result of you being ‘chaotic evil versus fairy princesses’. If you can do that in games, why is there a need to introduce an artificial mechanism for tracking morality at all?

Permanence of action in online worlds is the answer – the need to create a world that is (largely) ‘same as it ever was’ has some pretty loopy implications for the way in which game systems must be represented. Online worlds require a sense of continuity (in some sense, a Groundhog Day of continuity) that single player, ‘once through to the end’ games don’t. The problem is that players can come in at the start of your game at any time, and likewise bow out at any point they like. In a single player game, the world ends with you[2]. In an online game, the world continues whether you are there or not.

You can’t really tell long, sweeping stories in an online world. At best, you can tell small stories that interlink. Moreover, you can’t be ‘the guy who did the thing’ in an online world, because there will be a thousand other people who are also ‘the guy who did the thing’. Likewise, the need to provide for people to ‘log in’ at any point they wish means that you need to have a consistent game world – the guy with the starting quest always has to be there, or you need to completely drop the idea of having fixed game content at all. The boss at the end of the instance always has to be there. The components for the spells you want to cast always have to be there (or at least, they always have to be replenished). There are limits to how dynamic a world you can create in an MMO or similar without having to dramatically redefine what kind of game you are offering.

So, the actions a player takes within an online world lack real persistence, and that is the biggest limitation with regards to the kind of stories you can tell. Mostly people are prepared to buy into the conceit that they did impossible things and that makes them mighty, but the suspension of disbelief that you can count on rapidly fades away when you have people saying ‘You! You’re the monster who killed our princess’ and she’s fluttering away RIGHT OVER THERE, GUYS! SHE’S ALIVE! You *can* do that, but to my mind it horribly breaks immersion and the conceit that people buy into is fragile enough as it stands.

Your actions have no permanence, and that also means your actions do not necessarily have consistency. You might kill that fairy princess a hundred times, and save her a hundred times. How then am I supposed to use your previous actions as a baseline for how the game world responds to you?

Warcraft uses ‘phasing’ to get around this to a degree – zones change as you perform mighty deeds within them. It’s very neat to see a zone altered by your actions, but again it’s a conceit and it’s very fragile. It’s also very technically limited when players within different phases of the content must interact. Either they are ghosts you cannot see or interact with (which is weird if it’s a friend of yours and you are supposedly adventuring together[3]), or they seem to be fighting imaginary monsters while you sit there sipping on a cool glass of soda (which is confusing). Multiplayer games require multiplayer approaches. While it’s possible[4] for everyone to have their very own instance of the world in which they adventure, at that point you cease to have a multiplayer game – you have a single player game with some shared content.

‘Why can’t you just have a game world that genuinely does maintain permanence’, is the question that raises itself next. I say, as if they are universal truths, that NPCs must respawn and quests must reset. ‘Is there a basis for assuming that must be true?’

Well, let’s look at it via a thought experiment. Let’s call this ‘ConsequenceWorld’, a persistent, multiplayer online game designed around real permanence of your actions. There are certain things we can infer from this, but they are not necessarily core to the idea. ConsequenceWorld *implies* a world of permanent player death, but it doesn’t have to. We’ll assume it doesn’t, just for the sake of this discussion. ConsequenceWorld implies a fixed economy, but again it doesn’t have to. We’ll assume again that it doesn’t.

What ConsequenceWorld does have is a finality to your game actions. If you kill an NPC, they stay killed forever. If there are respawns, they have a fixed limit – ‘there are 1000 people in this village. When you’ve killed them all, the village is gone forever’. Perhaps respawning villagers will take on the mantle of their predecessors, so you won’t necessarily always get your quest from ‘Bill The Farmer’, because if he’s dead ‘Dave the Blacksmith’ will take over, and so on and so on until everyone in the village is dead and the quest becomes impossible to attain.
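To make that concrete, here’s a minimal sketch (in Python, with entirely invented names and numbers – this is just the thought experiment in code, not how any real game works) of what a finite, succession-based quest pool might look like:

```python
# A sketch of ConsequenceWorld-style quest-giver succession: a finite
# villager pool, nobody respawns, and the quest passes down the line
# of succession until there's nobody left to give it.

class Village:
    def __init__(self, villagers):
        self.villagers = list(villagers)  # the whole, finite population

    def quest_giver(self):
        # Whoever is first in the remaining pool holds the quest.
        return self.villagers[0] if self.villagers else None

    def kill(self, name):
        if name in self.villagers:
            self.villagers.remove(name)  # dead forever

village = Village(["Bill the Farmer", "Dave the Blacksmith", "Sue the Miller"])
village.kill("Bill the Farmer")
print(village.quest_giver())  # Dave the Blacksmith takes over
village.kill("Dave the Blacksmith")
village.kill("Sue the Miller")
print(village.quest_giver())  # None - the quest is gone for good
```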

ConsequenceWorld also means that if you steal something from a shop, it stays stolen – it doesn’t respawn. If you destroy a building, it stays destroyed. If you hunt the wildlife to extinction, they stay extinct. If you empty a mine of its ore, it remains empty. If you run out of food, everyone starves to death. In short, every action you take in the game is likely to be negative sum.

There’s a phrase in economics that beautifully describes what happens next – ‘the tragedy of the commons’ – the situation where exploitation of a shared limited resource eventually leads to its permanent depletion, to the detriment of everyone. It’s in nobody’s interest for this to happen, but simultaneously it’s in nobody’s interest to be the one person who abstains from exploiting the resource. In our scenario, the common resource is the pool of harvestables such as XP, materials and quests. If someone kills all the quest givers, then that quest is no longer available for the rest of the people exploiting the resource, meaning the pool has become depleted. This in turn creates an additional pressure on other harvestable resources. If you’ve killed all the orcs, you need to go somewhere else for your XP – but you’ve just depleted one of the XP sources, which means that if the same number of people are looking for XP their attentions are going to be directed towards a more limited set of possibilities. This in turn depletes those resources that remain more rapidly. As each resource is used up, it speeds up the rate of depletion of the others.
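You can watch that spiral happen in a toy simulation. This is a sketch with arbitrary numbers rather than a model of any real game: a fixed amount of harvesting pressure is spread across whatever pools remain, so every pool that empties concentrates the pressure on the survivors:

```python
# Toy model of the commons spiral: constant total demand, spread over
# the surviving resource pools. Pool sizes are arbitrary.

pools = {"orcs": 300, "wolves": 500, "bandits": 800}
demand_per_tick = 60  # total harvesting pressure, held constant

tick = 0
while any(v > 0 for v in pools.values()):
    tick += 1
    live = [k for k, v in pools.items() if v > 0]
    share = demand_per_tick / len(live)  # pressure per surviving pool
    for k in live:
        pools[k] = max(0, pools[k] - share)
        if pools[k] == 0:
            print(f"tick {tick}: {k} depleted")

# Output: the orcs last 15 ticks, the wolves only 7 more, the bandits
# only 5 more - each depletion accelerates the next.
```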

The end result should be pretty obvious – you end up with a ghost world where there is nothing to do and no way to advance. New players log on and say ‘What? Where is the actual game?’, and older players say ‘A newbie! Kill him for his delicious sweetbreads!’. If you’ve ever played Fallout 3 in Mass Murder Mode[5], you’ll find that the world pretty rapidly becomes a very lonely, very dull place. There are *some* respawns in Fallout 3, but not enough to really make up for the rate at which you consume. Likewise for Human Revolution – the dead stay dead, and while that is a great reminder of the way in which you chose to play the game, it does guarantee that you fast become a very *lonely* psychopath.

I’m not saying that such a game would be unplayable, I’m just saying that such a game model doesn’t lend itself to the kind of online worlds that most of us enjoy. It would be a horrible gameplay experience to start up Deus Ex and be left with the carnage of the last person who played it. You wouldn’t be able to do the quests, you wouldn’t be able to gear up or get ammo – but it wouldn’t be a problem, because there wouldn’t be anyone left to kill. If you want to have this kind of permanent outcome in a multiplayer, online world then you need to be developing something substantively different from what everyone else is doing. You’d need to be working on creating something like Project Zomboid, which is literally about creating a scenario whereby you try to stave off absolute depletion of all your available resources for as long as possible. Project Zomboid has the tremendous tag line ‘this is the story of how you died’, which sets the tone perfectly – it’s not a game where you are expected to emotionally invest in a long, persistent session. There might be mileage in an online game that has permanence of action but resets when everything is gone, but personally I think that kind of design is great for experimentation, and great for single player games, but makes for a shitty persistent multiplayer game.

So, the lack of persistence of the consequences of actions requires a different approach, and that often works via a proxy system. If I can’t base gameplay systems on your previous actions, I should be able to base them on what your actions *imply* about you. I might not be able to say ‘you killed that princess’, but I can say ‘you’re a pretty mean-spirited killer’. I might not be able to say ‘You have always treated our faction fairly’, but I can say ‘You have earned this faction rep as the cumulative total of your actions’. These things may be an artificial abstraction of the deeds you have performed in the game, but they serve the purpose of letting you ensure game actions have consequence. For some games, that doesn’t matter – for others, the theme may demand some measure of adherence to a system like this. The Star Wars MMO won’t feel like Star Wars if you’re not feeling the impact of choosing to do light side versus dark side actions. If you can kill, murder, rape and maim without it impacting on your ‘Good Jedi’ credentials, the game is going to struggle to maintain your buy-in to the underlying game philosophy. At the same time, for all the reasons outlined above, the game can’t simply base your reputation on the fact you went into the temple and killed all the ‘younglings’ – because they’re going to be back for the next respawn.
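Mechanically, a proxy system like this is nothing more than folding each deed into a set of abstract counters at the moment it happens. Here’s a sketch of the idea – the action tags and weights are entirely invented for illustration:

```python
# The specific act is forgotten; only its moral weight persists, and
# that weight survives the victim respawning five minutes later.

reputation = {"alignment": 0, "jedi_order_rep": 0}

DEEDS = {
    "rescue_villager": {"alignment": +5, "jedi_order_rep": +10},
    "kill_youngling":  {"alignment": -50, "jedi_order_rep": -100},
}

def record_deed(deed):
    for proxy, delta in DEEDS[deed].items():
        reputation[proxy] += delta

record_deed("kill_youngling")
record_deed("rescue_villager")
print(reputation)  # {'alignment': -45, 'jedi_order_rep': -90}
```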

Abstract proxies for ingame actions then are the tool that you have available as a designer for multiplayer online worlds. They may not be a more elegant weapon for a more civilized age, but they allow you to cover quite a lot of the gap between the desire to add consequence to actions and the limitations of the format.

Drakkos.

[1] http://drakkos.co.uk/blog/blog.c?action=filter&blog=mud+commentary&id=506

[2] Which was a pretty great game I thought, especially the soundtrack.

[3] Players within different phases of content often report problems such as one player seeing a herb that doesn’t exist to the other.

[4] Although not technically feasible for most online games given the size and scope involved.

[5] Just kill everyone once you’re done with them and then steal their cola.


Simple Systems, Complex Consequences – Part Three In An Occasional Series

Depth in games is very much a thing in which I am interested. I like games in which I can lose myself, and I like feeling that there is an opportunity for genuine mastery to be built within a set of rules. It sometimes confuses people then that I am so dismissive of complex game systems. So today I am going to write up part three of this occasional series of Epitaph’s Immutable Rules, and the rule is this: Have simple systems, but complex consequences.

Complex game systems are (in my view) those in which there are too many steps required to accomplish a goal, and one (or more) of those steps is so shrouded in external factors that you cannot *quickly* extrapolate a likely outcome from your available data. At the end of the scale farthest away from complex, we might have something like a D&D ‘difficulty check’, whereby you are given a number (let’s say 10) and a twenty-sided dice to roll. You know what the likely outcome is (50/50 chance you’ll accomplish what it is you’re trying to do – we want to roll over the difficulty check) and it doesn’t take more than a trivial understanding of arithmetic to arrive at that understanding. It is, however, a tremendously inexpressive system because it doesn’t take into account context. It is then, *too* simple.

We move a little bit farther up the complexity scale when we add in ‘modifiers’. Calculating modifiers is an additional step before we roll the dice, and it will influence the likely outcome of your action. Modifiers are generally expressed as a plus (which makes things easier) or a minus (which makes things harder), and are applied to the dice roll. Given the job of climbing a rope (a difficulty check of 5), you might modify the roll for being in the rain (-5 modifier) and for the player carrying a full pack (-5 modifier). Now it gets a little bit trickier to work out the likely outcome. It’s still relatively simple – your target is 5, and you are going to roll a twenty-sided dice and subtract ten from the result. That means that you need to roll at least a sixteen to pass the check. There’s a 25% chance you’ll pass that check. Again, straightforward and something you can pretty much intuitively tell ‘at a glance’.
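If you want that arithmetic written down, here’s a quick sketch of the rope example, keeping to the ‘roll over the target’ convention used above (folding roll penalties into the target is equivalent to subtracting them from the roll):

```python
# Chance of passing a d20 difficulty check, with modifiers.

def pass_chance(target, modifiers):
    effective_target = target - sum(modifiers)  # -5 rain, -5 pack => 15
    faces_that_pass = max(0, min(20, 20 - effective_target))
    return faces_that_pass / 20

print(pass_chance(10, []))       # 0.5  - the plain 'roll over 10' check
print(pass_chance(5, [-5, -5]))  # 0.25 - the rainy, full-pack rope climb
```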

You build complexity into game systems each time you add a step or increase the amount of calculation that goes on at a particular single step. You also increase the complexity each time you have the factors from an earlier step feed into future steps. Each time, you are making it more difficult for people to judge the consequences of their actions because they need to spend more mental energy computing things, and remembering all of the things that should go into those computations. Even in the relatively streamlined fourth edition of D&D you can end up with mind-bogglingly complex situations such as ‘Okay, you need to roll a 10 to hit that target with your bow, but you’re at +2 because you moved to there and got combat advantage, but they get a +2 because they are prone. First though you have to roll against your periodic damage and use your free shift given to you by the use of the daily power John used at the start of the encounter, and then…’. The more complex the situation is, the more chance things get forgotten, ‘forgotten’[1], or approximated[2]. When working with a human doing the calculations (as in a role playing game), this all has a real impact on game balance and if it gets out of hand it’s no better than just flipping a coin except that it’s a hundred times more frustrating.

Complexity can also be a consequence of bad ‘chunking’. Individually in a D&D combat round, nothing is very complicated – almost all of D&D revolves around the idea of the difficulty check, and that as we have seen is a very simple system. The problem really comes from the number of places which have consequences that impact on that difficulty check. I need to pay attention to what the enemies are doing, what my allies are doing, what I’m doing, what’s happened to me in previous rounds, and what my friends and enemies have been doing in previous rounds. There is no logically discrete point at which the consequences of those actions are ‘finished’ except at the very end of combat. I am constantly needing to feed new information into my decision making processes, and that’s good in a way (it makes combat feel fluid), but each new piece of information is a new level of complexity in my gameplay.

Ongoing effects (which you take damage from every round until you save) and player power bonuses are especially troublesome from this perspective. To take a random example of this, one of the warlord powers in 4E has an effect that says… ‘Until the end of the encounter, when you or an ally hits the target, that attack also deals ongoing 5 damage’. By itself, a very cool power – but everyone in your team has very cool powers like that, and they’re all using them too. One power by itself is a great flavouring in a combat, but combine that one with the power that gives people a +1 to all defences while they stand within a particular set of grid co-ordinates. Then add in the passive bonus granted by warlords that gives everyone on your side who can see you a +2 to initiative and the +1 shield bonus that was applied to everyone who was around Samantha at the time she used her power. Suddenly, combat becomes a number crunching exercise rather than a system of deciding narrative outcomes. You’re now spending your time filling in columns of a spreadsheet rather than saying ‘I will hit that guy, because that is exciting to me’.

Chunking the numbers can greatly reduce that complexity by limiting where previous steps in a process impact on future steps, or by encapsulating their impact in some simpler abstracted mechanic. There is a huge difference in our ability to mentally process ‘I get a +5 to damage against this guy’ as opposed to ‘I get a +8 to damage, but he gets -5 from armour, but my weapon is +2 and I’m currently in my offensive stance which gives me a +2 versus his -2 from being in a defensive stance’. The game impact is identical, but what differs is the ability of people to comprehend that game impact.
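In code terms, the difference is trivial – which is rather the point. The engine can happily keep the itemised list; the player should only ever have to reason with the collapsed figure (the labels below are made up):

```python
# The same net bonus, unchunked and chunked.

itemised = {"base": +8, "his armour": -5, "my weapon": +2,
            "my offensive stance": +2, "his defensive stance": -2}

net_bonus = sum(itemised.values())
print(f"+{net_bonus} to damage against this guy")  # +5 - the chunk
```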

Chunking also refers to the way in which earlier parts of a process feed into future parts – it’s about simplifying the inputs and outputs of stages and discarding (or abstracting) those elements that no longer matter. If you’re making a sword blade from a metal bar, I don’t need to know any more about how good a miner you are or how good the ore was – I just need to know how good this bar is. Thus, when I smelt the ore I take my own skills and the quality of the ore, and then abstract that into a single measure that is ‘quality of this metal bar’. That’s a form of chunking both by collapsing two points of data into one, and by having the production of the bar of metal being a logical end point in a process.

If I’m making a sword using a sword blade, I don’t need to know how good the metal was, I just need to know how good the sword blade is. We can accomplish good ‘chunking’ by making stages of a workflow temporally independent – we need to assume that stages of a workflow aren’t being followed to the letter, one after another with no pauses. We need to assume that if someone makes a sword blade that the next step (making a sword) may happen in a month’s time or even by someone entirely different. When we assume that, we need to make decisions about how much information we need and how we can realistically get hold of it. That in turn encourages us to be frugal with our requirements. We need to assume a paucity of data, and that greatly limits what our necessary inputs will be. That in turn greatly simplifies the decision making for our players.
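Here’s what that might look like as code – the quality formulas are placeholders, but notice that each stage’s only link to the past is the single quality number carried by its input:

```python
# Temporally independent crafting stages: each output carries one
# quality figure, and that figure is all the next stage may ask about.

def smelt(ore_quality, mining_skill):
    # Ore quality and mining skill collapse into the bar; after this,
    # neither matters again.
    return {"item": "metal bar", "quality": (ore_quality + mining_skill) / 2}

def forge_blade(bar, smithing_skill):
    # Possibly a month later, possibly by someone else entirely - the
    # bar's quality is the only input we need from the past.
    return {"item": "sword blade", "quality": (bar["quality"] + smithing_skill) / 2}

bar = smelt(ore_quality=60, mining_skill=80)
blade = forge_blade(bar, smithing_skill=70)
print(blade)  # {'item': 'sword blade', 'quality': 70.0}
```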

When a game is running on a computer, you don’t need to worry about how well someone can understand the rules in order to evaluate them – on Epitaph, the code we have for doing a skill check is monstrously complicated when compared to the single dice roll that drives D&D. That’s okay from our side, because computers can do all of that math flawlessly and all but instantly. It’s also okay from the side of a player, because most of the calculations aren’t related to ‘pass’ or ‘fail’, they’re related to things like ‘did you get a TM from this action’, or turning our single digit difficulty and difficulty modifiers into an actual numerical target. It’s not the complexity of the math that’s the problem in computer game systems, it’s the tractability of player decisions.

My view on games is that they are engines for generating scenarios in which people must make interesting decisions. Too much complexity takes away from how interesting a decision is, because the two responses we have as individuals are ‘calculating the math’, in which case it’s simply picking the better number, or ‘making a decision based on imperfect understanding’, which is moving dangerously towards ‘making random decisions’.

Complexity seems like an excellent way to make player decisions matter, but they only *matter* if they’re being made from a position of understanding. It’s fine to force people to make snap judgements on complex game situations, but in doing so you need to ensure that the situation can be comprehended (although not necessarily optimised) at a glance. If someone has two seconds to decide what they’re going to do, then they should be spending those two seconds weighing up their own synthesized view of all the factors, not simply saying ‘well, I can’t know here so I’m just going to do the first one’. The only way to really do that is to make sure that each ‘chunk’ of the decision they are making is tractable. An interesting decision might be ‘I am low on health so I need to take out the main damage dealer. I’m closer to the guy with the scimitars than I am to the guy with the rocket launcher, but the rocket launcher is doing most of the damage and I think it’s probably better overall to spend the time getting to him’. The rule is basically the same for decisions with no time constraints – if you can’t make the decision before you’ve spent five minutes with a calculator, your game is too complex. Calculators are for optimisation of decisions, they shouldn’t be *requisite*.

It’s not complexity by itself that leads to deep games, it’s the interaction of game systems and the consequences of your interactions within them. All the stages in constructing a sword might be very simple, but when taken together they may have a very deep impact that is informed by all the decisions you made – the type of hilt you chose, the type of cross-guard, the material and shape of the blade, your own skills as a crafter, the random number gods, etc, etc. All of these can continue to be factors provided you make those ‘player decision’ parts of the process as simple as possible. ‘Steel is a better metal than gold for durability, but less valuable. I am making a sword blade, so I am interested only in the durability. Thus, I will make my sword blade out of steel’. That is a decision anyone can make. ‘This is the best sword blade I have produced, but it is curved. I want a straight blade, but the quality of the straight one I made is quite bad. Thus, I will go with the curved blade even though it is not quite what I want’.

By virtue of making all those small decisions, you can end up with a complex outcome… choosing a curved blade has an impact on the type of damage, the material impacts on weight, durability and value. It’s not that my outcomes are any simpler, it’s just that the factors that went into the outcomes were chunked well and each individual decision was not overly taxing. Thus – simple systems, but with complex consequences.

Drakkos

[1] As in, you remember about them but you know nobody else does and so you sneakily don’t remind the DM that you should be taking 20 ‘spiders in my veins’ damage every turn.

[2] God, there are fifteen modifiers here. Let’s call it an even 10.


Drip-feeding Of New Content

At the moment, those of us working on the development of Epitaph are doing so with ‘live ammo’. It’s one of the reasons I don’t like having too many players around – actual players make it difficult to be cavalier with whether or not the game is working. The nature of development and the language we use here is such that sometimes things break as a result of new features. LPC is a tremendously flexible language, and the virtual machine that the driver provides is a pretty forgiving beast. However, it’s not perfect and sometimes the smallest additions can stop other things from working, or even loading[1]. Working with live ammo means I can leave that kind of thing broken until we next reboot, or until I next need that object myself (whichever is sooner). It’s just easier for us.

When the game opens to players though, that’s no longer appropriate – so I wanted to talk a little about how I’m planning for new features and bug-fixes to be handled on Epitaph when having players is the norm (hopefully!).

In my experience, MUDs are particularly bad for having a good regime in place for this[2]. Many MUD engines require a full reboot if any changes have been made to anything particularly core, and those reboots are scheduled as and when features are completed or when it’s just time for the driver to reboot. Those that allow you to fix or change code in a running game (as with LPC) subtly encourage you to work directly on the active code. While that’s perfectly fine if all you’re doing is coding rooms in your own work-area, it becomes increasingly perilous as you descend into the deep, dark places of the mudlib[3].

Still, when working with multiple developers, it’s hard to do anything other than work on the live game unless a formal process is in place. Software tools can go some way towards making collaboration less painful (revision control, for example, is all but mandatory), but in the end those are technical tools to address a social problem. The problem is that developers in these circumstances consider ‘temporary crashes’ a necessary evil of development. After a while, players too begin to simply accept that these things happen – when an error occurs in the game, someone will invariably say ‘maybe they just changed something’ rather than ‘why is this game suddenly causing runtimes all over the place?’

In especially egregious cases, developers with no-one who can really scold them for it may become so cavalier about changes that they don’t even test to see if their changes will compile properly. They’ll make what they think is a foolproof change, and they’ll mess up the syntax and not even notice. They’ll think ‘That problem I identified earlier will be fixed for when the object is next updated’, but never update it themselves to double-check. Then they log off…

Partially this is all a result of the usual developer attitudes of MUD development[4], and partially it’s a resourcing, policy and architecture issue. How else are you *supposed* to do it?

Well, look at any commercial MMO that has long term, active development – they never just change code arbitrarily in your client as they’re developing it. Instead, they have a more sophisticated pipeline model – developers work with their own sandboxed versions of the game (there are probably several segments of the pipe here, but I don’t recall ever hearing the process described) before they push a ‘testing’ version onto one of the test servers. When the changes have been properly explored and stress tested, the testing version becomes a ‘release candidate’, and shortly after that the release candidate is pushed onto the real game as a patch[5]. The result is a much more ‘professional’ experience – you aren’t expected to simply shrug off a broken game.

The difficulty here is that it requires three things to be in place for this to work. You need the resources to run a ‘development’ version of your MUD, you need a policy that *requires* all developers to write new code on the development sandbox, and you need an architecture in place that allows for changes in the development version to be pushed easily into the live game. If you don’t have all three of these, then all you *can* do is work with live ammo.

For the purposes of most people, having a culture where people are simply accepting of errors is the easiest solution. But, there is another aspect to adopting a system like formal patching… it makes for a more exciting community, and it makes for a better packaged game.

A new patch for Warcraft is exciting, and it’s exciting for the entire period between it being deployed on a test realm to some time after it’s made live in the real game. A new patch is a source of discussion, a focus for anger and despair, and a promise of things to come. It’s a sign that you have something to which you can look forward[6]. If you deliver features as you develop them, you get a general low-level buzz of interest as people notice the new addition, but what you lose is the impact of having a large chunk of things ready at once. Think of it this way – as a child you probably got more presents[8] over a year than you got at Christmas or your birthday, but which do you remember best? Which did you and your friends talk about most?

Anticipation is a powerful tool, and you neglect it at your peril. Patches, provided you publicise them ahead of time, are a great way of generating anticipation of new features. Before the achievements were made live on Discworld, I made a news post saying ‘achievements are coming, get ready’. What happened from that point is that people started to generate excitement around a coming new feature. I’m not saying that was the only reason it ended up being so popular, but it was one of the reasons so much attention was directed to the system when it went live shortly afterwards.

Imagine then that you have twenty quests ready to go into the game over the course of six months. If you add them in one at a time then even if you post about it, all people will say is ‘a new quest? Yeah, cool’. If however you announce the ‘questing patch’ and then deliver them all at once, people will be talking about the upcoming patch with greater intensity and anticipation. That’s a great energy to have running through your game even if it does mean that the people who finished writing their quests first have to wait until the patch date to see people enjoy them.

So, patches are exciting – but they also have a second benefit for your game experience in that they allow you to package and theme new content in a way that greatly improves how it is delivered.

One of the difficult issues in game development is that content takes much longer to create than it does to consume. Genua took us four years to bring into the game, and I’d say it got ‘consumed’ within a few weeks. People had seen the areas, done the quests, played with the new game systems, and then went ‘yeah, that’s all fine’ and went back to optimising xp/hr. As a reward-to-effort ratio for developers, it’s soul-destroying.

However, if you have a formal patching regime you can be more cunning in how you package up new content. Let’s take one simple example – a long running game which is about to get a new area, and a new faction system. An area gets consumed within a matter of days, but if you have co-ordinated the development of these two features so that the new area is also highly integrated with the factions, you can ‘control’ the rate of consumption. You can lock off areas until enough faction standing has been accumulated, or you can block some parts of the area forever based on competing faction allegiances. That’s something you can only do if you have the development of the systems working side by side, and that in turn is only possible if the developer culture isn’t about drip-feeding things into the game as they are ‘ready’. The former encourages collaboration to increase impact, the latter encourages self-contained, isolated systems.
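Mechanically, that kind of gating can be as simple as an entry check against accumulated standing. A sketch, with faction names and thresholds invented for illustration:

```python
# Hypothetical faction-gated areas: positive thresholds demand loyalty,
# negative ones demand enmity (some doors close forever to the loyal).

AREA_GATES = {
    "inner_sanctum": {"faction": "ashen_order", "standing": 5000},
    "rival_enclave": {"faction": "ashen_order", "standing": -1000},
}

def can_enter(area_name, player_standings):
    gate = AREA_GATES[area_name]
    have = player_standings.get(gate["faction"], 0)
    if gate["standing"] >= 0:
        return have >= gate["standing"]
    return have <= gate["standing"]

print(can_enter("inner_sanctum", {"ashen_order": 6200}))  # True
print(can_enter("rival_enclave", {"ashen_order": 6200}))  # False - too friendly
```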

The neat thing about packaging is that it can work for almost any combination of game features – you can almost always think up a theme that is sufficiently all-encompassing that you can link up the major additions. If you add in an extensive pottery system, a new high level instance and a goat herding subgame, you can package that up easily by having the pottery system provide components for the instance and the goat herding, and have the instance feed back both rare raw components for the pottery, and perhaps epic goats for the herding system.

However…

One of the real dangers here is that you make your patches *too* big and *too* all-encompassing. Patches are exciting because they are relatively unusual, but not so unusual that they are an annual event *if you’re lucky*. Patches should be small enough to produce regularly (say, every six months) but also large enough to get people salivating.

In any case, the end result is that you get finer grained control over how players consume the content you put forward. Player fun must always be foremost in your mind of course, but it’s possible to increase the effort/reward ratio without simply adding grinds for the sake of grinds. It’s all about how you present it.

Lots of benefits then come from a formal patching system, but it does require a lot of back-end to make it work, and you have to be willing to sacrifice a little ‘agility’ because of the need to co-ordinate deployment.

Step one in setting up this back-end is to actually have a development sandbox. For most MUDs, this will be as simple as having another version of the MUD running off of a common code base, but ideally you want it on an entirely different computer or virtualisation shard. You want it so that if someone puts an infinite loop in their code then it doesn’t matter except to other developers, and so you want it to be held as far as possible from the ‘live’ game as you can.

Step two is to *make* people use this sandbox. Having a sandbox by itself is a cost in the cost/benefit analysis developers make, and humans are ruthless cost/benefit optimisers. You can’t make it optional, because a patching system requires everyone to be working from the same page. If someone makes a single change to the live code, then it is no longer possible to simply push a patch into the game because the patch will override the change that was made. Everyone has to be doing *all* their development on the development shard[9].

Step three is to make it a ‘pres butan’ task to deploy a new patch[10]. It shouldn’t be an awful, dreadful chore to push the patch onto the live game[11]. It should be almost entirely automated to reduce both errors in following the process, and to make it something that isn’t dependent on someone being able to make the time to do it. So much in MUD development doesn’t happen, or happens slowly, because people have real life ‘shit to do’, and you don’t want that to be the reason your patches are delayed. If patches aren’t being pushed into the game quickly enough then it makes your game seem static to players, and will frustrate the hell out of your developers.
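To give a flavour of what ‘pres butan’ might amount to, here’s a hedged sketch – the paths, the tag convention and the idea of a blessed release candidate are assumptions for illustration, not a description of Epitaph’s actual setup:

```python
# One-button patch deployment: promote a tagged release candidate from
# the development shard to the live game, with no hand-copying of files.

import subprocess
import sys

DEV_REPO = "/mud/dev"   # where *all* development happens (step two!)
LIVE_DIR = "/mud/live"  # what players actually connect to

def deploy_patch(tag):
    # Refuse to deploy anything that isn't a blessed release candidate.
    if not tag.startswith("rc-"):
        sys.exit(f"{tag} is not a release candidate")
    # Check out exactly the tagged state, so half-finished files can't leak.
    subprocess.run(["git", "-C", DEV_REPO, "checkout", tag], check=True)
    # Mirror it onto the live game in one step.
    subprocess.run(["rsync", "-a", "--delete",
                    DEV_REPO + "/lib/", LIVE_DIR + "/lib/"], check=True)
    print(f"{tag} is live - schedule the update/reboot pass")

if __name__ == "__main__":
    deploy_patch(sys.argv[1])
```

Note how the --delete flag quietly enforces step two – any change made directly to the live code is obliterated by the next patch, exactly as described above.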

If you’re willing to make the sacrifices and put in place a technical architecture to support the process, patches are an easy way to really harden your development processes, improve the effort/reward ratio for developers, generate community buzz, and just look like a more professional game entirely. We’re certainly going to be doing this for Epitaph when we become version 1.0.

Drakkos

[1] As an example of this, LPC has a programming structure called a ‘class’ (which is really just a C/C++ struct). If you add a new field to a class and update an object, that object will fail to load if a differently sized version of the class exists in an inherit or, more painfully, in the general include that defines it. Thus, even an object that doesn’t use the class at all may fail to load because of the size mismatch, simply because something along its inherit chain has the #include.

[2] And not only am I guilty of everything I’m going to say here, I knew *at the time* I was guilty and just didn’t care very much.

[3] When doing the achievements on Discworld, the handler touched on almost every core system in the game, and was undergoing huge amounts of development. The result of this was that breaking the handler (which happened often) could result in any related piece of code breaking too. On a game with 100ish players online at any one time.

[4] The set of ‘people who are professional software engineers’ and the set of ‘people who want to run a game’ don’t overlap particularly.

[5] There’s also a more agile ‘hotfix’ system, but even that doesn’t work by changing code in the live game until it’s been written and tested somewhere else.

[6] If you’re one of those people so invested in a game that a +3% buff to your periodic damage is a source of excitement.[7]

[7] I’ve been one of those people.

[8] Here, ‘present’ is being defined as anything extra that was given to you for no reason other than people being nice to you.

[9] At the same time, one of the things that makes a MUD more intimate than an MMO is that you can get to know the developers, and developers are part of the social fabric. You want them chatting to players, because that limits the ‘us versus them’ mentality. There then needs to be some kind of bridge between the development sandbox and the live game. Perhaps you simply share talker channels between them.

[10] Although you’ll want to limit it so that it needs someone at the top of the org chart to be able to pres the butan at all.

[11] Imagine if the process was ‘Get a list from RCS of all the files that have been changed, then copy them by hand to the live game, making sure at all points not to over-write a change that has been made there’


Viability In Game Builds – Part Two Of An Occasional Series

The second of the immutable rules at the core of Epitaph is – every build should be viable. Now, I know what you are thinking. You are thinking ‘Drakkos, every game has that as a rule, you are derivative and not very smart’. Wow guys, not cool. There’s no need to be like that, I’m going to elaborate – it’s actually a relatively complex issue, and there’s no need to be mean. Not cool at all, guys.

Anyway.

Let’s start off by defining what I mean by a ‘build’. A build is a neat short-hand term for the aggregation of your various character creation choices. On Epitaph, a build would consist of a selection of knacks, your alignment, profession, stats, bought commands and XP spent on skill levels.

You’d usually be building a character for a particular mental conception of who or what that character is. We’re doing some online WH40K[1] forum RPG at the moment, and part of that character creation process is the selection of talents, skills and (if you have a character with psionic powers) their mental abilities. Some of these come from the character class and race you choose, but some you pick from a starting XP budget. My guy is an eight foot tall genetic freak who can torture with his mind, so I have ‘built’ him with talents that suit how I want to play him. His psy powers and talents are largely centred around his ability to intimidate and interrogate – I don’t know how effective that will be in the game, so I am building him for RP purposes. He’s got the skills that support my mental view of who the character is.

Every game that has any kind of player choice at all in builds will let you do that, because effectiveness is irrelevant to letting you make the choices that best serve your conception of ‘your guy’. Maybe you won’t be able to pick skills and such that are exactly what you have in mind (because they’re not available), but within the limits of the customisation you have available you have all the choice in the world. In that respect, viability is a non-issue because there is no way in which you can fail in building a character.

If you don’t care about character conception and instead are all about maximising your ability to perform some in-game function, then builds can be something you can get right or wrong. You can make choices that are sub-optimal or just plain goofy. You can also though be guided by the community who will be able to give you a selection of ‘cookie cutter’ builds that meet your requirements. The more tightly controlled your character customisation process is, the fewer cookie cutter builds that you have to choose from. In WoW for example, there’s usually only one (or maybe two) talent builds per specialisation per class that are truly effective for gameplay. ‘Bad’ builds still confer character advantage, but they suffer from various problems – they either contain bonuses that often don’t apply (+10 to damage while in forest environments, when you spend 99% of your time in cities), bonuses that weren’t the ‘best in slot’[2], bonuses that are irrelevant to the game role you want to play (+20 to lockpicking for a character who only ever kicks doors down), or even bonuses that are actually penalties (-10% generated threat when you are a tank).

You can’t stop people making ‘bad’ builds, and within the context of the game world you can’t really stop people from being their own worst enemy. So, right away the immutable rule has an exception – ‘except in cases where people really haven’t thought through the consequences of their choices’.

Leaving aside that special case though, most games fall down dramatically on simultaneously offering a wide range of builds *and* making those builds viable in the game. You either get one or the other, and only very rarely both.

Let’s talk next about what I mean by ‘viable’. Within the context of a single player game, viable would mean ‘can complete the game without undue penalties’. Within the context of a multiplayer game, viable would mean ‘can advance to high ability commensurate with the amount of time that can be invested’.

The single player experience is easier to define, so let’s start there with two examples – Fallout 3 and Deus Ex: Human Revolution. Both of these games are remarkable for the amount of choice they give you, and for the way in which the game bends the knee to accommodate the weird combination of things you ended up picking. They offer multiple paths to complete almost every objective. If you have to get to a room inside a building, and that room happens to be blocked by a dozen heavily armed guards, then you have choices: You can fight your way in, sneak your way to the destination building, bribe your way through and other scenario specific options. They have ways to progress that work for both the person building an optimal build for a particular role, and for generalists who can do a bit of everything. Almost every objective has at least two paths to completion.

However…

Deus Ex in particular has a horrendous flaw in this regard in that you can play right up until the first boss battle in any way you want, and then it’s clobberin’ time. That boss battle is pure shoot ‘em up with no option to use any other skills. If you built your guy around sneaking and hacking, you’ll find yourself incredibly penalised by the encounter (which is not easy even if you built your guy like some kind of metal Rambo). The boss fights are mandatory, and thus while you *can* defeat them without having the appropriate build, you are hugely disadvantaged for not adopting a traditional ‘FPS’ skill-set. These builds are still ‘viable’, but not remotely catered for in that specific encounter[3]. Fallout 3 also has its moments when you’re either forced into a particular kind of encounter or are hugely penalised for not having invested in skills you find you suddenly need. By and large though both these games are tremendously forgiving of alternate play-styles, and they serve as excellent examples to which games should be aspiring.

Here though, note what I said about ‘viability’ – that you can complete the game. Viable doesn’t mean that you can get a 100% score in the game – you might not be able to do every quest or mission (ideally you would, but it’s not core to ‘viability’), you might not be able to advance as quickly or as effectively as other builds, and you might not be able to access all side content. Ideally, in cases where this is true, all builds should be penalised roughly equivalently (if you’re a sneaky guy you might miss a combat oriented mission, and if you’re a psycho killer[4] you might not be able to complete a mission that involves super-high competence in hacking). That’s fine for single player, in my view, because you can always replay to get the content you missed the last time around.

Things get different though when you accommodate multiple players in a persistent game world, because suddenly you’ve introduced a new yardstick – ‘how do I measure up to other people?’. This in itself is tremendously complex because everyone uses different yardsticks for this – some will use number of achievement points, some will use their skill bonuses/levels, some will use their xp/hr rates, some will use top ten tables, and so on and so on. In addition, the fact that you don’t have a ‘main campaign’ for people to complete (because your game is persistent) means that a lot of the measures you can use for single player games simply don’t apply. There’s no such thing as ‘completing’ a game like Warcraft – even if you beat the biggest boss of the current game, the next patch or expansion will bring another one your way. At best, you can conquer all the *current* challenges.

For games like this, ‘viability’ is more than just being able to make it to the end – it’s about being able to achieve equivalent amounts of prestige and reputation. It’s about being *wanted* for group content, and it’s about being able to progress skills at a roughly equivalent rate to everyone else. As we often do in these blog posts, let’s look at Discworld for an example of this *not* working.

Discworld gives you a reasonable amount of choice in builds – lots of skills on which to spend XP, a stat system that has very substantial impact on your viability in various roles, your choice of guild and for most guilds, your choice of specialisation within that guild. You have a lot of freedom to build the character you want to be. Sadly though, most characters don’t have viability unless they build for combat – combat is not only the most effective playing style to level your characters, in many cases it’s the *only* way[5]. Quests and achievements are useful for levelling, but only up to a point because they are a fixed, consumable resource and you can’t ever redo them. Getting taskmaster advances is great, but also very limited – the DW taskmaster is very stingy, and it’s nowhere near adequate as a primary method of advancement. No, on Discworld if you want to excel in any role, you have to fight. If you want to be King/Queen Pottery, then in order to advance those pottery skills quickly you’re going to have to kill a lot of stuff.

Partially the problem there is that the only currency of advancement is XP, and the optimal route to XP is fighting, because most other in-game activities scarcely reward it. In cases where XP is rewarded, the rewards are usually parsimonious to the point of irrelevance.

A better way to build viability in these kind of games is to reward everything. Making pots should give you XP, and it should give you non-trivial amounts. Successfully intimidating an NPC should give you XP, and again it should give you non-trivial amounts. In one respect, Discworld does this with its command XP system, but the rewards are based only on spending guild points and not actually accomplishing an in-game goal. Our philosophy on Epitaph is that XP is rewarded for consuming an available resource – you can kill an NPC precisely once (until it regens). You can make precisely one pot from those materials (you can make more when you get more materials). You can bribe that NPC precisely once (but it’ll forget at some point in the future). In addition, the XP reward scales for all actions in the same way that it scales for killing NPCs – you get more XP the higher a level of item you are creating, or the higher a level of NPC you are charming, or the higher a level of computer you are hacking.
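As a sketch of that philosophy (with an invented formula – as noted below, we haven’t done the balancing pass yet), the key points are that every activity flows through the same level-scaled reward, and that a consumed resource can’t be cashed in twice:

```python
# 'Reward everything' XP: potting, bribing, hacking and killing all
# flow through one level-scaled formula, keyed to a consumed resource.

consumed = set()  # resources already cashed in for XP

def award_xp(player, action, target_id, target_level):
    if (action, target_id) in consumed:
        return 0  # this pot/bribe/corpse has already paid out
    consumed.add((action, target_id))
    xp = 50 * target_level  # same scaling whatever the activity
    player["xp"] += xp
    return xp

crafter = {"xp": 0}
award_xp(crafter, "craft_pot", "pot#1813", target_level=12)
award_xp(crafter, "craft_pot", "pot#1813", target_level=12)  # no double-dip
print(crafter["xp"])  # 600
```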

By ensuring that some kind of resource is consumed through command use, we both give a way to normalise XP rewards *and* counteract the problem of someone sitting in a room grinding away at their pot making by just hitting the same alias over and over again[6]. You can make a ‘living’ by being a crafter and a crafter alone – sure, you’ll need to secure some source of the raw materials, but that doesn’t *require* you to go gather them yourself. You can hire someone with some of the profits of your crafting business. That shouldn’t be too difficult – gathering also rewards XP for each resource found and each resource successfully harvested. You can be viable focusing just on various kinds of crafting, you can be viable focusing on various kinds of gathering. You can be viable focusing on stealth and hacking… a wide variety of XP rewards generates viability.

Now, note here again the word is ‘viability’ and not ‘equivalence’. While we still haven’t yet done a pass over the XP costs and rewards in the system, when we do the goal isn’t 100% equality in all possible builds. It’s almost guaranteed that some builds will be *more* viable than others in terms of generating advancement, but the intention here is that it’s based on another external factor such as risk versus reward. Sitting in a room making pots is a ‘safe’ activity – you don’t need to worry (as much) about being molested by hordes of the undead. If that activity gives you the same advancement prospects as going out and killing zombies, then we’d have the unfortunate situation of the riskiest activity being the least profitable by virtue of the risk – if the risk is meaningful (in terms of accumulated downtime), then the safest activity will by simple mathematical inevitability become the most effective when all other things are equal. That’s all kinds of true in real life of course, but it’s bad design in a game. Viability means that you can achieve high station according to whatever yardstick you feel is appropriate – it doesn’t mean an *optimal* path to that high station.

What we’re looking to do then is make every build viable (except in cases where people don’t think through the consequences of their choices), while also ensuring that those who take risks in the grim darkness of the zombie apocalypse are rewarded in a way that is commensurate with the risks they are taking.

Drakkos

[1] Black Crusade, to be specific. We haven’t started yet, but it promises to be a lot of fun provided we don’t encounter any catering.

[2] A term which means the best thing you could have possibly chosen out of a limited selection.

[3] As it happens, it turns out the guys at Eidos aren’t very happy with the boss battles either – they’d been outsourced to another company due to time constraints, and it really shows.

[4] Qu’est-ce que c’est

[5] Leaving aside idlechasing here, because that is an accidental and extremely undesirable path to viability. Basically, I don’t rate it as playing, and this post is all about viability when *playing* a game. It can’t be a viable play-style if play is not actually involved.

[6] Here, we assume that getting the raw materials is a non-trivial act in itself. If you can just buy them at a shop then it’s neither fun, challenging nor interesting game design.
