Proficiency and bounded accuracy in D&D Next

In my last post, I wrote about how the Dungeons & Dragons Next proficiency bonus jams all the tables and rules for attack bonuses, saving throw bonuses, and check bonuses into a single rising bonus. This consolidation yields a simpler system, but the proficiency mechanic influences every corner of the game.

Attack roll tables from D&D Rules Cyclopedia

Proficiency bonuses increase slowly compared to similar bonuses in earlier versions of the game. They top out at a mere +6 at 19th level. This slow progression stems from a principle the designers called bounded accuracy, because none of the designers come from the marketing team. Actually, “accuracy” refers to bonuses to the d20 rolls made to hit, land spells, and make checks. Accuracy is “bounded” because the game no longer assumes characters will automatically gain steep bonuses as they advance to higher levels. See the Legends and Lore post “Bounded Accuracy” for more.

Bonus to attack

Before third-edition D&D, armor class never rose much. In “‘To Hit’ vs. Armor Class,” longtime D&D designer Steve Winter charts the progression between to-hit rolls and AC. Steve concludes, “In AD&D, as characters advance up the level scale, they constantly gain ground against the monsters’ defenses. A 15th-level fighter doesn’t just hit lower-level monsters more often; he hits all monsters, even those of his own level, more reliably than before.”

This meant that rising attack bonuses eventually made attack rolls into a formality. Mechanically that works, because in early editions, as fighters gained levels, their damage output increased not because each blow dealt more damage, but because they hit more often.

But attack rolls benefit D&D for two reasons:

  • Hit-or-miss attack rolls add fun. To-hit rolls offer more drama than damage rolls, and the rolls provide intermittent positive reinforcement to attacks. See “Hitting the to-hit sweet spot” for more.
  • If to-hit bonuses overwhelm armor bonuses, armor and armor class become meaningless to high-level combatants. Perhaps this finally explains the chainmail bikini.

To keep attack rolls meaningful, fourth edition makes ACs rise automatically, even though nothing in the game world justifies the rise. (You might say that the rise in AC reflects combatants’ rising ability to evade attacks, but a rise in hit points reflects the same slipperiness.) The steep rise in AC meant that lower-level creatures couldn’t hit higher-level combatants and forced all battles to feature combatants of similar levels. In 4E, physical armor just provides a flavorful rationale for the AC number appropriate for a level and role.

D&D Next returns to the older practice of making armor class a measure of actual armor, or at least something equivalent. At high levels, the game keeps to-hit rolls meaningful by limiting the proficiency bonus to that slight +6 at 19th level. With such a small bonus, to-hit rolls never climb enough to make armor pointless. For more, see “Bounded accuracy and matters of taste.”

In the last public playtest, and for the first time in D&D history, every class shares the same attack bonuses. In Next, characters don’t stand out as much for how often they hit as for what happens when they hit.

Bonus to checks

In third and fourth editions, characters gained steep bonuses to skill checks as they advanced in levels. Each game managed the bonuses in a different way, and each approach led to different problems.

In 3E, characters who improved the same skills with every level became vastly better at those skills than any character who lacked the skill. Eventually, DCs difficult enough to challenge specialists become impossible for parties that lacked a specialist. On the other hand, DCs easy enough to give non-specialists a chance become automatic for specialists. By specialist, I don’t mean a hyper-optimized, one-trick character, just a character who steadily improved the same skills.

In 4E, skills grant a constant +5 bonus, and every character gains a half-level bonus to every check, so everyone gets steadily better at everything. This approach means that no character grows vastly better than their peers at the same level. It does mean that by level 10, a wizard with an 8 strength gains the ability to smash down a door as well as a first-level character with an 18 strength. To keep characters challenged, and to prevent suddenly mighty, strength-8 wizards from hulking out, 4E includes the “Difficulty class by level” table, which appears on page 126 of the Rules Compendium. With this table in play, characters never improve their chance of making any checks; they just face higher DCs. Most players felt like their characters walked a treadmill that offered no actual improvements.
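
A quick sketch of the arithmetic behind that door-smashing comparison, using the half-level bonus described above and the standard (score - 10) / 2 ability modifier, rounded down:

    def ability_mod(score):
        # Standard D&D ability modifier: a score of 8 gives -1, a score of 18 gives +4.
        return (score - 10) // 2

    def check_bonus_4e(level, score):
        # 4E untyped check bonus: half level (rounded down) plus the ability modifier.
        return level // 2 + ability_mod(score)

    print(check_bonus_4e(level=10, score=8))   # Strength-8 wizard at 10th level: 4
    print(check_bonus_4e(level=1, score=18))   # Strength-18 character at 1st level: 4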

For more on checks in 3E and 4E, see “Two problems that provoked bounded accuracy.”

With the proficiency bonuses, D&D Next attempts to thread a needle. High-level bonuses should not reach so high that challenges for proficient characters become impossible for the rest. But the bonuses should go high enough to give proficient characters a chance to stand out and shine.

At the top end, a 19th-level character with a suitable 20 ability score and proficiency will enjoy a +11 to checks. This bonus falls well within the 1-20 range of a die roll, so most tasks within reach of a specialist also fall within the reach of a lucky novice. If anything, the maximum +11 for a talented, proficient, level-20 superhero seems weak.

Two bonuses form that +11, the proficiency bonus and the ability modifier. To me, a proficiency bonus that starts at +2 at level 1 and rises to +6 at level 19 threads the needle well enough.

New characters gain a +2 proficiency bonus as opposed to the +4 or +5 skill bonuses in the last two editions. This paints new D&D Next characters as beginners, little better than untrained. New characters must rely on talent to gain an edge.

However, talented characters barely gain any edge either. Typical new characters gain a +3 ability modifier from their highest score. I’ve shown that ability modifiers are too small for checks. Players make 11.3 attack rolls for every 1 check, according to plausible research that I just made up. With so many attacks, a +3 to-hit bonus lands extra hits. With so few checks, a +3 bonus ranks with the fiddly little pluses that the designers eliminate in favor of the advantage mechanic.

The playtest package’s DM Guidelines advise skipping ability checks when a character uses a high ability score: “Take into account the ability score associated with the intended action. It’s easy for someone with a Strength score of 18 to flip over a table, though not easy for someone with a Strength score of 9.” The D&D Next rules demand this sort of DM intervention because the system fails to give someone with Strength 18 a significant edge over a Strength 9 character. The result of the d20 roll swamps the puny +4 bonus. In practice, the system math makes flipping the table only slightly easier at Strength 18.
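
To put rough numbers on that, here is a minimal sketch; the DC of 10 is an illustrative guess, not a value from the playtest packet:

    def chance(modifier, dc):
        # Chance that d20 + modifier meets or beats the DC,
        # ignoring any automatic-success or automatic-failure rules.
        successes = max(0, min(20, 21 - (dc - modifier)))
        return successes / 20

    print(chance(+4, 10))   # Strength 18: 0.75
    print(chance(-1, 10))   # Strength 9:  0.5

Both characters succeed often and both fail often, which is why the guidelines tell the DM to skip the roll entirely.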

Update: The published game grants level-one characters a +2 proficiency bonus as opposed to the +1 that appeared in the final playtest.

In a curious move, the final public playtest packet eliminates the Thievery skill. Instead, the designers opt to make thieves proficient with thieves’ tools. Why? This results from the elimination of fiddly little pluses such as the +2 once granted by thieves’ tools. Without the +2, why bother with the tools? Now thieves need the tools to gain their proficiency bonus. Somewhere, sometime, a confused player will add the proficiency bonus they assume they have for Thievery to a bonus for the tools, and double-dip two bonuses.

Next: Saving throw proficiency and ghouls

62 thoughts on “Proficiency and bounded accuracy in D&D Next”

  1. Chris

    I just wanted to send a quick thanks for your articles. I’m very interested in D&D Next and consistently find your thoughts on the mechanics of D&D insightful. Keep up the good work.

  2. Don Holt

    Good article. It really demonstrates two of my pet peeves about the “fun” in D&D.

    The first is the “arms race” of D&D 4e. Unmatched-level characters meant there are never any David and Goliath stories in 4e; the mechanic doesn’t allow it.

    The second is the DC mechanic. Once upon a time, “save” rolls were required against a given trait. In early D&D, this meant you had to roll 3D6 under an appropriate trait. At that time a +1 – +3 to a trait because of a skill was a big difference. Why? Because the distribution of results rolling 3D6 is a bell curve, versus the flat-line distribution of the 1D20 roll against a DC.

    So what does this difference in result distributions mean? For average characters, a small bonus meant large increases in their chance of success, while still ensuring a reasonable chance of failure. A 50% chance could become an 85% chance. Low-stat characters with small bonuses have their virtually nonexistent chance of success increased to at least some nominal chance of success. Characters with training and high stats were almost assured of success, while their untrained counterparts would almost certainly fail every now and then.
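
    (A minimal sketch of that roll-under math, assuming the bonus simply raises the number you must roll under; how big the jump is depends on the trait you start from.)

        from itertools import product

        def chance_under(target):
            # Chance that 3d6 rolls strictly under the target number.
            rolls = [sum(dice) for dice in product(range(1, 7), repeat=3)]
            return sum(r < target for r in rolls) / len(rolls)

        print(chance_under(11))      # an average trait of 11: 0.5
        print(chance_under(11 + 3))  # the same trait with a +3 bonus: ~0.74
        # Away from the extremes, the same +3 on a d20 roll-under check is always a flat 0.15.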

    There are (and always have been) systems that avoid these bad game mechanics. I’m encouraged to see these changes in D&D Next.

    1. Ken

      “In early D&D, this meant you had to roll 3D6 under an appropriate trait. At that time a +1 – +3 to a trait because of a skill was a big difference. Why? Because the distribution of results rolling 3D6 is a bell curve, versus the flat-line distribution of the 1D20 roll against a DC.”

      This was never the case in any edition of D&D. Stats definitely aided in saving throw rolls but that was it.

  3. SittingBull

    Bounded accuracy fails for several reasons.

    First, the attack bonus difference between high- and low-level characters is small (and can verge on negligible), so against enemies with low HP (e.g. a group of kobolds), there won’t be much noticeable difference between the success of high- and low-level characters. Why? Because the difference in their “accuracy” is now supposed to be represented with damage instead (a truly idiotic move), but doing more damage is basically irrelevant against low-HP enemies. This was pointed out to WotC repeatedly, but they just ignored it.

    Secondly, some types of attacks (like poison daggers or arrows) are not necessarily intended to do a lot of damage, but instead rely on being able to hit to deliver their special effect. Again, the low-level character can succeed almost as well as a high-level character at doing this. Multiple workarounds have been attempted, but none of them really make sense. The only way to fix this and maintain verisimilitude is to go back to the more logical system of higher-level characters being significantly better at hitting than low-level ones.

    I could go on, but it became apparent a year ago that WotC is simply not going to listen to those of us who are better at math than they are. This drastic change to “bounded accuracy” has made a lot of people very angry, and WotC has probably lost them forever.

    1. Don Holt

      The way to fix this problem is not the arms race, i.e. level being the most important factor in hitting.

      The way to fix it is to do something logical, such as have characters take countermeasures which affect the to-hit probability. Thus by scaling the hit probabilities to slightly favor higher-level characters and giving them bonuses to avoid being hit if they take certain actions, you can get a system that works well. If you also give increasing advantages to the outnumbering attackers and their position relative to the defender, and add a cost mechanism to each action a character takes, you have a system where the outcomes are much more dependent on the actions of the combatants, and not just their level.

      1. SittingBull

        That’s way too complicated. You shouldn’t have to take “certain actions” to avoid being hit by lower level characters. Your vastly greater experience should enable you to do so without having to spend special actions every time you are attacked. Also, as a DM I don’t want to have to figure out the advantages/disadvantages of each kobold each and every round based on their positions.

        Un-bounding the accuracy doesn’t lead to an “arms race” or “treadmill” if you do defenses like they do in SAGA: you have 3 defenses, Fort, Ref, and Will, and they go up as you level, but not necessarily at even rates because your class choices (and other things) enhance certain ones over others. We have dozens of SAGA characters, and their defenses vary from character to character as a result of a multitude of factors. Never once has any of us felt like we are on a treadmill. The problem is that 4E made people feel that way, and a lot of those people assume that the only way to fix it is by butchering the accuracy system completely, throwing out the baby with the bathwater. And that just causes problems that are even worse.

        Furthermore, “bounding” the accuracy just makes no sense for verisimilitude. A high-level archer should be way, WAY more accurate than a low-level one, period. The high-level guy is picking off those kobolds with absolute ease, while the low-level guy is struggling. Their damage should be relatively the same, however. That’s the only way it makes sense.

        1. Don Holt

          The problem in the unbounded system is the “cannot fail” situation. If you feel that players have the right to win no matter what actions they take, then the “cannot fail” situation is fine. So if the story is all predetermined except for exactly how the players kill their opponents, then there is nothing wrong with the players-always-win mentality.

          On the other hand, if the story is being created based on what the characters do and losing doesn’t necessarily mean death for the characters, you have more options. You can consider what actions the players take and the situation they find themselves in (like # of attackers per defender.) An archer firing into a crowd at close range is more likely to hit than an archer firing at great distance at a single small target darting between cover. Level has an effect but so does situation.

          You are correct about damage being the same. Damage should be fixed (no die roll needed) and effects based on body part hit.

        2. Don Holt

          And D&D has never been about verisimilitude, as there is no “fun” in that (according to some). For example, explain to me how fighting with a sword automatically makes your character better with a bow? In D&D, as your character’s level goes up, all his abilities increase – whether or not he’s used any of them in reaching the next level.

          The concept of a character’s level has got to be one of the worst impediments to role play. It sets the whole tenor of the game to “let’s kill it, because doing so increases the character’s abilities.” Players are often only rewarded for taking actions that stifle role play.

          1. SittingBull

            Any experienced gamer knows that the level of verisimilitude varies between gaming groups. But D&D has always attempted to provide a good degree of verisimilitude, e.g. presenting a wide variety of medieval weapons and armor, and representing them as best they could (not to mention other things like all the detailed places and organizations).

            Why? Because those details make the fantasy elements more believable, enhancing the overall experience, just like all the great fantasy storytellers try to do (see Game of Thrones). Realistic detail helps create what is called the “suspension of disbelief”. Maybe you don’t care about that stuff, but many of us wouldn’t play a game without it in a million years.

            So if there is some really easy mechanical way to represent a character getting more accurate with a weapon, like a straightforward mathematical progression that increases significantly over time, we’d have to be complete idiots not to use it. If other parts of the game mechanics need tweaking, fine. But an advancing accuracy progression is the smartest thing in the entire game.

          2. SittingBull

            Regarding your comment on the weapon skills, I think it would be perfectly fine to have your various weapon skills increase independently.

            But that has nothing to do with whether we have bounded accuracy or not.

            Two entirely separate issues.

          3. Don Holt

            Well if you want verisimilitude, then look at Chivalry and Sorcery, which is what I play. I agree completely that the details make the story and have stated so many times over the years.

            From my perspective D&D is very abstracted. With D&D you have the complication of mechanics, but without the benefits of having any basis in believability or truly adding details to the story. Such things as hit locations and taking specific actions in fighting add to the story, not picking someone else’s description off a menu, as in 4e.

            My comment about weapon skill was just to point out the fallacy in your verisimilitude statement, and as you point out has nothing to do with the bounded accuracy.

            What I am saying is that bounded accuracy has the ring of truth, because every now and then the little guy does win. Scaling the combat system so this event can occur is desirable in my opinion and much preferable to the alternative.

          4. SittingBull

            I didn’t like 4E at all, and they are abandoning that model anyway, so it is irrelevant to me.

            And I never said that D&D needed perfect realism or verisimilitude… that is probably impossible anyways. However, my players certainly don’t want it to have any LESS verisimilitude than it had before. That would definitely be unacceptable, and the fact is that the boneheaded approach known as “bounded accuracy” totally obliterates the verisimilitude we had before, destroying all believability completely. It makes the game into a farce.

            The little guy sometimes wins? So are you going to pit your low-level characters up against some high-level bad guy? So that they can what… have a slightly better chance of not getting killed? Or are you going to pit your high-level characters against low-level enemies, just so they might actually get unlucky and die once in a while? Either way, I don’t see the point. Why on earth would you want that in your game?

            And you seem to be forgetting that D&D is supposed to be a game of heroes: they get powerful and get to the point where they shouldn’t be threatened by low-level characters anyway (just like a master swordsman IRL would never be threatened by a low-level swordsman). That makes the game more realistic and more fun, as the players really feel like their characters are truly getting good, and not just a little better than low-level schmucks. The fact is that advancement in skills/abilities/etc is one of the most attractive elements of D&D. If you want a game without advancement, then it should be something else, and not D&D.

    2. DM David Post author

      Hi SittingBull,

      You outline two key problems with flatter accuracy bonuses. I described both problems in my post Changing the balance of power. I think multiple attacks can lessen the problem versus low-HP enemies. The problem of poison and touch attacks might be handled by giving such attacks accuracy bonuses, as third edition did for touch attacks, and then relying on saving throws as the primary defense. That part of the game remained under development when the playtest ended.
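
      As a back-of-the-envelope sketch of why extra attacks help against low-HP enemies (the kobold hit points, hit chances, and damage below are illustrative assumptions, not playtest numbers):

          def expected_kobolds_dropped(attacks, hit_chance, avg_damage, kobold_hp=5):
              # Rough expected kobolds dropped per round, assuming any hit that deals
              # at least a kobold's hit points drops one kobold (no overkill carryover).
              drops_per_hit = 1.0 if avg_damage >= kobold_hp else avg_damage / kobold_hp
              return attacks * hit_chance * drops_per_hit

          print(expected_kobolds_dropped(attacks=1, hit_chance=0.65, avg_damage=7.5))  # low level: ~0.65
          print(expected_kobolds_dropped(attacks=3, hit_chance=0.75, avg_damage=7.5))  # extra attacks: ~2.25

      Raising the damage barely moves the first number; adding attacks does.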

      None of these measures nullify the problems raised by bounded accuracy, but every design forces trade-offs.

      Still, I understand your pain. My post In D&D Next, ability modifiers are too small for the ability check mechanic called out my biggest beef with the new design.

      Dave

      1. SittingBull

        Everyone getting multiple attacks as they level up just makes the game slower without really adding anything fun, as we saw with previous editions of D&D. It’s just more dice-rolling.

        SAGA uses a progressive accuracy bonus (thus avoiding the problem that bounded accuracy creates with low HP enemies), but it also has a better defense system than any of the D&D games: 3 different defenses that improve over time (Fort, Ref, and Will), but their exact values can vary quite a bit depending on your class choices (and other choices), so it’s not a 1-to-1 advancement with respect to the attack bonuses (i.e. no treadmill effect). It works better than any other system I’ve ever seen, AND it makes sense with respect to verisimilitude. Everyone I know loves it.

        Regarding poison, we already have a perfect solution to that too: the poison DC system that we’ve been using prior to 5E. The poison on a dagger should remain the same poison (with the same DC) regardless of who picks up that dagger. It also has the same DC if it is sitting in a vial, not being used as a weapon by anyone. With bounded accuracy, however, they have to invent some convoluted system for it to work at all, and even then it still doesn’t make sense.

        If something isn’t broken, then “fixing” it to compensate for other problems just tends to make more problems, and in this case it doesn’t fix the actual problem anyway.

      2. SittingBull

        Naturally, I agree with you about the problems of bounded skill advancement in 5E.

        That +5 is indeed swamped by the d20 roll (and this is precisely the root of the problem I highlighted in my post about setting the skill DC’s), but for some reason the designers consistently ignored that problem without fixing it. And that’s because there are really only two possible fixes:

        1) Go back to a progressive system, so that characters’ bonuses can reach 20 or so, allowing them to match the range of values that the d20 is creating. This means that their bonuses will eventually get high enough that they are not constantly subject to randomness, thus allowing experts to really feel like experts.

        OR

        2) Use a d6 instead. Those +5 bonuses would be fine if the random part (the roll) had a range of about that same amount (5 or so).
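
        (A rough illustration of the second option: how often a +5 edge actually decides an opposed roll on each die. The opposed-roll framing is only for illustration.)

            from itertools import product

            def chance_bonus_side_wins(bonus, die_size):
                # Chance that a roller with the bonus beats an unbonused roller on the
                # same die (ties not counted as wins).
                pairs = list(product(range(1, die_size + 1), repeat=2))
                return sum(a + bonus > b for a, b in pairs) / len(pairs)

            print(chance_bonus_side_wins(5, die_size=20))  # ~0.70: the d20 still decides a lot
            print(chance_bonus_side_wins(5, die_size=6))   # ~0.97: on a d6 the +5 nearly always decides it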

  4. Don Holt

    Who said we were talking only about D&D? We are talking about role-playing systems.

    Sounds like you are in favor of a game where the principal purpose is to feed egos.

    My interest in an RPG is group story development. I have no interest in a system where thousands run through the same obstacles over and over. And I don’t want a game without advancement; I just think that D&D advancement is less than ideal. D&D doesn’t have the elements needed for good story development, such as imagination, fine detail, and believable actions by characters. It does promote overblown heroic play, good for a few laughs.

    Have your players answer the following question: What was the name of the last NPC you helped? Or where that character lived? Usually they can tell you all about what they defeated (killed) or the last ability they gained. I hope you realize the difference, and appreciate how DMDavid is talking about expanding the way D&D is played.

    1. SittingBull

      I’m talking about D&D, just as I have from the beginning. If you like some other game better (as you seem to do), then fine, go play it. I don’t mind, really. You don’t see me trying to force a D&D-style system onto some other game, do you?

      And “feed egos”? Not even close. We absolutely LOVE role-playing, first and foremost. There are no egos at our table. Most of us are scholars and artists in our 30’s and 40’s, and we spend more time together developing characters than just about anything else that we do. But we also really like having a mechanical system in place that allows significant mathematical character advancement, just as we always have. You seem to be assuming that gamers like us only think in extreme absolutist terms, but nothing could be further from the truth when it comes to my group… we like it all: the role-playing, the realistic elements, the fantasy elements, the mathematical elements, etc.

      Furthermore, if you think that all of D&D is “overblown heroic play, good for a few laughs,” as you said, then why do you even care? You wouldn’t be playing it anyway. If you were, then you’d know that in the Dungeon Master’s Guides there have been sections on playing different kinds of campaigns (e.g. high-magic, low-magic, etc.). So again, you seem to be making wild assumptions about how other people play these kinds of games.

      Lastly, you should have noticed that I mentioned Game of Thrones as an example, a story in which magic is scarce, and instead the main focus is on the character development and political intrigue. That is very much the style of D&D that we play, and have always played.

      1. Don Holt

        Not wild assumptions, but what I have experienced with many many D&D groups. If you are not playing typical D&D then good for you. I’d be the first to applaud. I do much the same with character development and political intrigue, but without such a dark undertone as Game of Thrones.

        I have given up on D&D twice already (1980 & 2011), and am not inclined to try it a third time. It has neither the detail, mechanics, nor mathematics I consider to be reasonable. The game is too abstract for me, and if you are going to be that abstract, just play Amber. Something like 7th Sea at least uses panache to mix up the turn order a bit.

        And high magic and low magic are relative terms. What may be low magic for D&D is probably high magic from my perspective.

        I still find an ever-escalating bonus system too problematic. I prefer a system that calculates a meta value for combat ability, binds this ability to a value range (once you’re the best fighter, increasing level doesn’t really matter), creates a hit matrix based on the differential combat level, takes into account time and fatigue usage, and makes actions taken (feint, parry, dodge, etc.) affect the result.

        1. SittingBull

          Ok, so you don’t like D&D and you aren’t going to play it anymore, but for some reason you are advocating for changes to D&D that will make everyone I know hate it? Weird. You don’t see me advocating for changes to hockey. Why? Because I don’t like hockey and don’t care what game rules/system it uses.

          Secondly, you do realize that we are rolling a d20, correct? The d20 creates a wide range of variability, meaning that you need significant increases in the bonuses of combat abilities/skills over time to make those improvements truly tangible at the gaming table.

          Otherwise, if the increases over time are minimal (as in a “bounded” system), then the variability of the d20 totally overwhelms those small differences and makes the players feel like their characters aren’t really getting any better at all. In fact, a lower-level character can literally seem as good or better than someone higher in level, just because the d20 is doing almost all the work within the mathematical system, creating a nearly random system in which all plausibility is destroyed. We’ve tested this repeatedly, and there is no way around it. Bounded accuracy might work okay with a d6 system, but it fails miserably with a d20. It’s just boneheaded math.

          1. Don Holt

            I’m invested in this discussion for 3 reasons.

            1) I like a game where the mechanics work. If everyone believed, as you say, that the unbounded system works, then why have there been so many attempts at so many different systems (all under the name D&D)? If you’re going to argue that we should just keep it the way it was before 4e, then why not argue what was wrong with 1st? In fact, for an ability check (which precedes the skill check/DC), why didn’t the game retain the 3D6 check? This check is much superior to the D20 check because the trinomial distribution of 3D6 has much more verisimilitude to real life than does the flat D20 distribution.

            I contend the “bonehead” part of the game is D20, not the bounded accuracy. If you’re dealing with a 3D6 check, small bonuses/minuses make huge differences.

            D20 was only adopted to help the young and mathematically challenged. I’m pretty sure most of my players are comfortable with 3D6 (or in my case 2D10). So D20 is really only a marketing tool to appeal to a younger and broader audience. Mathematically, almost any other system is better than a flat distribution.

            2) I want GM’s to think about their game. Do we all have to follow lockstep with what first Gygax and then Wizards decided is game canon?
            I’m glad to see you sticking with your 3e in the face of pressure to move to 4e or Next. Maybe in 10 years, when even fewer people play 3e, you’ll understand where I’m coming from, where big bucks for marketing move people away from doing reasonable things.

            3) Where else do people get to discuss what is reasonable? (Even if we disagree.)

          2. SittingBull

            “1) I like a game where the mechanics work. If everyone believed, as you say, that the unbounded system works, then why have there been so many attempts at so many different systems (all under the name D&D)?”

            Prior to D&D Next, the progressive accuracy system has been the same since I started playing D&D in the 1980’s. Other things have changed (saving throws, etc.), but not the progressive accuracy system.

            The designers specifically said that one of the reasons why they changed to bounded accuracy in D&D Next is to make it easier for DM’s to use the same monsters over and over. This is meant to simplify the game for younger, less savvy DM’s who are not capable of making simple adjustments to monsters (or inventing their own).

            To be fair to WotC, that explanation is the ONE thing they got right about bounded accuracy. It does indeed allow you to use the same monsters over and over without adjustment (or without much), but it fails to deliver on every other promise that they made about it. Now, why anyone would want to use the same monsters over and over I have no idea. It sounds really boring to me, and my players would hate it if I did that.

          3. DM David Post author

            I think the d20 comes from Gary’s enchantment with the latest gaming technology circa 1973: the d20 itself.

            As you say, D&D Next in particular fails to make exceptional characters mathematically much better than average ones. The problem stands out with non-combat checks because players make far fewer of them. With all the combat rolls, a +1 makes a significant difference. Outside of combat, most checks amount to coin flips. Because of this, when I DM D&D Next, I plan to reserve checks for cases when a character’s lack of skill and ability leaves significant uncertainty about the character’s chance of success. The game’s dungeon master guidelines advise this approach. This advice serves to patch a serious flaw in the system.

            Dave

          4. SittingBull

            I don’t see it as being any better for combat, since the range of the d20 (as we have talked about) totally dwarfs those small improvements in accuracy. In other words, being +1 better at something doesn’t mean much if the d20 is giving you anywhere from a 1 to a 20 in a completely random distribution. Thus, the d20 variable plays a much more significant role in determining the outcome than those small improvements to attack bonuses. In fact, we’ve done repeated playtests with bounded accuracy in which characters a few points better than others cannot tell a difference at all. The randomness of the d20 is so large that it essentially “masks” those subtle improvements, and the players feel like their characters are never getting much better, if at all. Furthermore, the more combat-oriented characters don’t seem much better than the less combat-oriented ones, for the same reason: the bonuses gained over time (for combat types) are just way too small compared to the others.

            My players hate it, and instead want your bonus gained from experience (i.e. from leveling) to be the most important factor… and of course the only way to do that is to allow that bonus to increase over time enough to “challenge” the randomness of the d20. That means getting up to +20 or so as they reach the higher levels.

  5. SittingBull

    DM David: “In 3E, characters who improved the same skills with every level became vastly better at those skills than any character who lacked the skill. Eventually, DCs difficult enough to challenge specialists become impossible for parties that lacked a specialist.”

    Exactly as they should. No one but an expert lockpicker should be able to pick a really difficult lock. If anyone could do it, then it defeats the purpose of having a specialist who masters certain skills. (Incidentally, those “skill characters” are one of my favorite types to play.)

    “On the other hand, DCs easy enough to give non-specialists a chance become automatic for specialists. ”

    Well of course. If a lock is easy to pick, then why on earth should a master lockpicker have any difficulty whatsoever? They could do it in their sleep. And it’s the same for other skills and combat abilities: characters shouldn’t just get a little better at the things they focus on, but a LOT better. That’s the only way it makes sense. And the only way to effectively represent that is with an advancing mathematical progression that delivers significant improvements over time, i.e. an “unbounded” system.

    DM David: “In 4E, skills grant a constant +5 bonus, and every character gains a half-level bonus to every check, so everyone gets steadily better at everything. This approach means that no character grows vastly better than their peers at the same level. It does mean that by level 10, a wizard with an 8 strength gains the ability to smash down a door as well as a first-level character with an 18 strength.”

    Well, 4E messed that up. In 3E and SAGA, breaking down a door is a STR check, just as it should be. It isn’t tied to a skill, and thus you don’t have those problems. It made perfect sense. Why they would make it so illogical in 4E is beyond me.

    1. DM David Post author

      Hi SittingBull,

      DM David: “In 3E, characters who improved the same skills with every level became vastly better at those skills than any character who lacked the skill. Eventually, DCs difficult enough to challenge specialists become impossible for parties that lacked a specialist.”

      This isn’t a problem because it’s unrealistic; it’s a problem because it creates an adventure design challenge. An adventure designer could choose to set very high DCs that challenged players who specialized in a skill, but these DCs were impossibly high for non-specialists, so if the party lacked a specialist in a particular skill, the task became flat-out impossible. Alternately, a designer could set DCs low enough to give non-specialists a chance, but these DCs granted the specialists an automatic success.

      Obviously, you feel comfortable dealing with this dilemma. Fair enough.

      And yes, 4E messed that up.

      Dave

      1. SittingBull

        Clearly, that’s just a misunderstanding of how an adventure should be designed. Smart DM’s don’t design a part of an adventure in which the ONLY way to proceed depends on having character type A or character type B (like a lock with a really high DC that only a master thief can pick, or whatever). It’s perfectly FINE to have locks with really high DC’s, but that should never be a mandatory way of proceeding through an adventure. It’s just common sense.

        So something like that (a lock with a high DC) might enable the party to gain some advantage in the adventure somehow, OR they could gain some other advantage in some other way that does not depend on having a certain type of specialist. Adventures should be designed to allow the players to decide how to proceed, and they will decide based on their strengths/weaknesses/strategies/whims/etc.

        Furthermore, the idea that we need to completely overhaul the entire mathematical system just because some DM’s make boneheaded decisions is ludicrous. This pattern keeps coming up with respect to bounded accuracy (e.g. DM’s who want to use the same monsters over and over): fixing the wrong thing, and thus creating more problems than we had before.

  6. Don Holt

    So a master lock picker never encounters a lock rusted completely shut? You are suggesting that there is never any condition where the expert might fail?

    Experts who fail and novices who succeed make good stories.

    1. SittingBull

      I never said that at all.

      Any lockpicker could encounter a lock rusted completely shut.

      And sure, experts can fail sometimes, but not on easy locks that novices could pick. That wouldn’t make the slightest bit of sense.

      1. Don Holt

        But do you give characters the chance to fail or is it always automatic?

        So you have to decide a priori the lock is rusted shut? How would it occur as a random event in the game?

        If you don’t have a mechanism for these random events, aren’t you limiting the story by either deciding beforehand or making the failure event impossible to occur? Something to think about.

        1. SittingBull

          Don Holt: “But do you give characters the chance to fail or is it always automatic?”

          In my games, rolling a 1 is an automatic failure.

          Ok, so let’s focus on the skill thing, since you seem to care a lot about it but haven’t really explained how you think it should be done.

          As you know, I prefer a system in which the skill increases are mathematically significant over time in relation to the range of the d20 roll. Why? Because that’s the only way to have DC’s that make sense, e.g.: 15, 20, 25, 30, etc. A novice lockpicker may only be able to succeed at an easier lock (DC 15) about half the time, but the master should have no problem, succeeding 95% of the time (with 1 being an automatic failure… maybe the lockpick was oily and it slipped for a second… or whatever).

          Harder locks will become harder and harder for novices to open, if not impossible. That makes sense, because otherwise there wouldn’t be much point in being an expert who spends an entire career specializing in your skills if instead a group of novices could accomplish the same thing after a few tries. Would there?

          So DC’s and skill bonuses need to get progressively harder and harder. It’s the only way to allow characters to truly become an expert at something. Experts should be MUCH better than novices, like 75% to 100% better (15 to 20 points better with their bonus).

          But if you instead “bound” the skill advancement so that the experts are only 20% or 25% better than the novices (i.e. 4 or 5 points better with their bonus), then where do you set your range of DC’s? You end up having to cram the ENTIRE range of DC’s (from very easy, to somewhat easy, to intermediate, to somewhat hard, to very hard) into such a small range that it becomes completely ridiculous.

          So if you want to bound the skills, where do you set your DC’s?

          1. Don Holt

            I’ve said it over and over again (look at the very first reply I made to this topic, where I explained); you just don’t like the answer. DROP D20!

            Why would you accept a 5% failure rate for an expert? Why not 1/216, or about .5%? Do you understand that +3 or -3 on a tri/bi-nomial can be much greater than 15%? I’d be glad to explicitly write out the distribution of 2D6 and the sum of 2D6 probabilities (the probability of rolls less than or equal to a number between 2 and 12) if this would help you see my point.

            And for a hit table, you probably want a Chi-square distribution with the steepness centered around the area where the combatants are roughly the same level. This is what I used for my C&S game. (I think the Chi square is the sum of binomial probabilities.)

          2. SittingBull

            Again, you want to make parts of the game more complicated, but you keep ignoring the main problem, which is that you can’t bound the improvement range to be so much smaller than the range that the d20 (or 3d6) is providing. All that gives us is a crazy system that doesn’t make sense.

            As far as expert failure rates? That’s a very minor problem compared to the big problem here, so whether their auto-fail rate is 5% or 2% or .5% isn’t all that important.

            And they are not going to drop d20, so that’s not really much of an answer, either.

          3. Don Holt

            Abandoning D20 is the only thing that DOES make sense if you think about it. Most die results fall in the medium range, so bonuses have a large effect: experts with a small chance to fail, and novices who can succeed, but at rates lower than 5%.

            Let’s look at the 3D6 distribution and bonuses:

            Raw Talent   Success%    +1      +2      +3      +4
                 3           0        0.5     1.9     4.6     9.3
                 4           0.5      1.9     4.6     9.3    16.2
                 5           1.9      4.6     9.3    16.2    25.9
                 6           4.6      9.3    16.2    25.9    37.5
                 7           9.3     16.2    25.9    37.5    50.0
                 8          16.2     25.9    37.5    50.0    62.5
                 9          25.9     37.5    50.0    62.5    74.1
                10          37.5     50.0    62.5    74.1    83.8
                11          50.0     62.5    74.1    83.8    90.7
                12          62.5     74.1    83.8    90.7    95.4
                13          74.1     83.8    90.7    95.4    98.1
                14          83.8     90.7    95.4    98.1    99.5
                15          90.7     95.4    98.1    99.5    99.5
                16          95.4     98.1    99.5    99.5    99.5
                17          98.1     99.5    99.5    99.5    99.5
                18          99.5       ?

            Small bonuses (1-4) work exactly as you would wish them to. Characters with little talent (underlying ability) stand at least some chance when trained. Characters with average ability, when trained, are greatly improved. Characters with high ability are improved, but there are diminishing returns.

            Also note the relative improvement. 5% going to 25% (D20+4) is a 5x improvement.
            A character with an ability of 5 has a 1.9% chance, but (3D6-4) gives this character a 25.9% chance, over a 13x improvement.

            +4 is always going to give D20 a 20-point advantage, but with (3D6-4) an ability of 9 receives nearly a 50-point improvement. This would mean people of slightly lower than average ability improve dramatically with a fair amount of training.

            I don’t see how going back to 3D6 and using smaller modifiers needlessly complicates the game. In fact, I think conceptually it is simpler and truer to real life. Once a master of a task, more training has little effect.

            Combat rolls should be analogous.
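
            (For anyone who wants to check or extend these figures, a short sketch that regenerates them; the raw math reaches 100% where the table keeps a 99.5% cap.)

                from itertools import product

                ROLLS = [sum(dice) for dice in product(range(1, 7), repeat=3)]

                def success_pct(trait, bonus=0):
                    # Chance (in percent) of rolling 3d6 strictly under the trait,
                    # with the bonus subtracted from the roll.
                    return 100 * sum(r - bonus < trait for r in ROLLS) / len(ROLLS)

                for trait in range(3, 19):
                    print(trait, [round(success_pct(trait, b), 1) for b in range(5)])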

          4. SittingBull

            So with 3d6, if you want to make your expert experience automatic failures 5% of the time, you would set 3, 4, and 5 as the auto-fail values. With d20, it would obviously be just rolling a 1, as we’ve been doing. Either way, he fails about 5% of the time.

            I don’t necessarily dislike the 3d6 method (as it does reduce the overall range), but it still doesn’t solve the bigger problem of bounded accuracy. However, if you combined the 3d6 method with +1 to the bonus at every other level (instead of the slower “bounded” progression), then it just might work. Then the effective range and the bonuses would match much better, and people would actually be able to notice their characters’ improvement.

          5. Don Holt

            Also notice that your assumptions about experts being only 20-25% better than novices using 3D6 are completely false. The range of 3D6 is wider: 0-99.5% rather than the 5-95% of D20, even though the VARIANCE of D20 is much larger than the variance of 3D6. In English, D20 has fewer possibilities and more randomness.

            In addition to training, innate ability helps determine how training will affect a character’s ultimate chance of success with 3D6.

          6. SittingBull

            I was talking about bounded accuracy and the d20… 20-25% better using a d20. I never mentioned 3d6 in that entire post.

            So don’t put words in my mouth, ok?

          7. SittingBull

            Also, when I’ve mentioned the ranges, I was obviously talking about the range of RESULTS (e.g. 1 to 20). Not the range of the percentages.

  7. Don Holt

    Some of my “fun” in the game, as GM, is having to come up with plausible explanations as to why what should have happened didn’t or what should not have happened did. These things happen rarely, but when they do, they should be made interesting. I prefer systems where experts do sometimes fail and amateurs sometimes get lucky.

    Rather than deciding that a lock is rusted shut beforehand, only after the expert failed would that “fact” become true. And of course if an amateur attempts the lock first and fails, this would not mean that the lock is rusted shut, only that the amateur failed. It also might mean that the amateur’s knife blade is now broken off in the lock, increasing the difficulty for the expert.

    1. BurgerBeast

      I absolutely hate this idea. If your players like it, that’s fine, play that way. Attempting to justify it, however, bothers me. Here’s why:

      (1) The game world should be consistent and make sense. It should not undergo random change as the game proceeds. The iron door isn’t iron any more, because the Str 8 wizard was able to break it down? The mayor of the town develops a fear of halflings because the halfling succeeded on an intimidate check? No thanks. You’re the DM. It’s your responsibility and obligation to make the world make sense and function accordingly.

      (2) It completely ignores the point of the DC system. Set the DC so that it works as intended. If the DC system misrepresents the phenomenon, then use a different system to determine the outcome. There’s nothing wrong with making certain checks just a straight comparison to an ability (characters with strength 14+ can lift this statue, 16+ to carry it) or using a different die (e.g. Dex check 1d4 DC 8 means characters with Dex 23 always succeed).

      (3) In many cases the DC system doesn’t work properly with auto-success on a natural 20 and auto-failure on a natural 1. It is hard to imagine a master craftsman, spending three days at a forge, failing to make a serviceable dagger. Likewise, it’s hard to imagine an untrained blacksmith spending three years at forge and crafting a legendary blade. As a DM, it’s your job to make the conditions of the world determine the DC of the task. It’s not your job to make the conditions of the world adapt to player check results. Let the mechanics serve the game, don’t make the game serve the mechanics.

      (4) You’ve created a new reality (the lock is rusted) that has consequences in the game world. The lock has gone from “really easy to pick” to “impossible to pick” because one expert failed his check. How do you rationalize this when the next player tries to pick the lock?

  8. SittingBull

    Don Holt: “D20 was only adopted to help the young and mathematically challenged.”

    Rolling a d20 or rolling 3d6 is no harder or easier for anyone. You don’t have to be good at math to roll dice, nor do you have to understand the mathematical implications of the two methods in order to use them.

    “I’m pretty sure most of my players are comfortable with 3D6 (or in my case 2D10). So D20 is really only a marketing tool to appeal to a younger and broader audience. Mathematically, almost any other system is better than a flat distribution.”

    I fail to see how marketing a game as either d20 or 3d6 would affect much of anything. Again, anyone can roll dice, and I’ve never heard of any new players saying that they were attracted to the game because the d20 is somehow “easier” or “more appealing” or anything like that.

    And ‘better’ is a relative term. The choice between 3d6 and 1d20 is dependent on how much randomness you want. If you want the result of the rolls to vary more widely (making it more common for people to fail at their well-trained skills, for example), then the d20 is good for that because it is a random distribution.

    3d6, on the other hand, provides more consistency, creating a nice bell curve that makes those extreme lucky/unlucky results a lot less common.

    So you seem to be arguing against yourself. You seem to want a more random game when it comes to the rolls for success or failure, yet you prefer 3d6 over d20, even though the 3d6 has a lot less random distribution.

    1. Don Holt

      You are confused between randomness and variance. The flat distribution has wide variance which I abhor in RPG’s. Both distributions are random (assuming the dice aren’t loaded and the players are rolling fairly).

      You are trying to compensate for wide variance by overwhelming the die roll.

      I merely allow the unlikely to occur. You are calling this randomness. It is not. It is acknowledging the fact that something with low expectation (this is another technical term in statistics) has occurred. Since I know that the rolls outside the middle of my distribution are extremely unlikely, I can let improbable events occur and rest assured that they are rare. When every event is 5%, you don’t have that freedom.

      1. SittingBull

        But the point is that we need a system that can provide us with a sensible range of results, regardless of which method is used. (Incidentally, I actually wouldn’t mind using 3d6 instead of 1d20, but it doesn’t even come close to fixing our real problem.)

        The problem is that you can’t provide a reasonable DC range (or AC range) when combining bounded accuracy with either of those ranges (1-20 or 3-18). You end up having to cram all the DC’s (or AC’s) into such a small range that it becomes absurd. The only way out of that mess is to allow the bonuses to increase much more progressively so that you can spread out your DC’s (or AC’s) to allow players to really experience their characters’ improvements. Otherwise, the highest DC’s (or AC’s) aren’t going to be very far apart from the low ones, and then the system becomes completely illogical.

        1. Don Holt

          I think I disagree with your opinion that people keep getting considerably better with more practice. This is true initially, but soon you hit a point of diminishing returns for time spent.

          The tri/binomial (discrete probabilities) or the normal curve (continuous probabilities) models what I think is a fact of life.

          You keep saying:
          “Otherwise, the highest DC’s (or AC’s) aren’t going to be very far apart from the low ones, and then the system becomes completely illogical.”
          This is false. The number needed may not be all that different, but the probability of achieving that number is vastly different. 18 occurs 6 times less frequently than 15 in 3D6. 1 and 20 occur with equal frequency in D20.

          Your statements seem to reflect a doggedness to retain the flat distribution. In fact, your example of achieving a 5% failure rate was how to force the trinomial distribution to become flat. I am satisfied with the model where my players have only a .5% failure rate, one tenth of D&D’s failure rate.

          What I feel is unrealistic is that at every level characters get 5% better, and always have a 5% failure rate – after gaining a high enough level. This creates the “arms race” in a game. Would it not be better to find a way around this issue? And of course there is. Use the “saving throw” that D&D first used.

          Attacks (AC issue) are slightly different, but an analogous operation is possible. Better options are also available, but a D&D abstracted method works too.

          1. SittingBull

            OK, so show me your DC range. I’ve asked you for it repeatedly, but you keep talking around it.

          2. Don Holt

            I’ve told you repeatedly, it’s binomial. It is not 3D6, but 2D10, as my game’s character traits are determined by 2D10. So:
            2 1/100
            3 2/100
            4 3/100
            etc.
            All “DCs” are saves against the underlying trait.
            skills/training are bonuses (subtractions) to the die roll. The result must be under your ability. Exactly like D&D version 1 (maybe 0), except for the 2D10 substitution for 3D6.

            (Plus is bad, minus is good.)
            Bonuses stop at about -5, though there may be one up to +6 for archery combat against fast flying creatures, such as shooting a hawk dropping from the sky, when it was NOT coming at you. Or say splitting an arrow coming at you. Archery combat is a 2D6 roll.

  9. SittingBull

    Don Holt: “I’m glad to see you sticking with your 3e in the face of pressure to move to 4e or Next. Maybe in 10 years, when even fewer people play 3e, you’ll understand where I’m coming from, where big bucks for marketing move people away from doing reasonable things.”

    I don’t know how you could have missed it, but Pathfinder is the continuation of 3.5E through the open gaming license. It became quite a success, while at the same time WotC mishandled 4E and caused its demise. Pathfinder is also the most popular game at all of our local gaming centers, bringing in larger groups every week than 4E, or any other game for that matter.

    And by the way, the big bucks for marketing are exactly what caused WotC to move away from 3E/3.5E in the first place. WotC simplified and homogenized the game with 4E in an attempt to bring in a younger/less experienced audience, just as a lot of corporations who make games do (e.g. video games).

    5E is another attempt at this “corporate strategy,” just in a different way. These corporate designers get dollar signs in their eyes, thinking they can somehow win all these new fans by simplifying and taking things OUT of the game, but usually what happens is that they change the game so much that it no longer provides the appeal for the more experienced players of the earlier versions of the game.

    Bounded accuracy is a perfect example of this kind of “marketing” strategy, making a lot of false promises, completely changing the fundamental math of the game, and creating even greater problems than we had before. It’s the most salient example of a gimmick in D&D that I have ever seen.

    1. Don Holt

      Some people have trouble adding 3+5+6, but can use their fingers to add 6 to their roll of 9.

      D20 became popular for use by younger players. Before D20, percentile, 1D6, 1D6-1D6 (6,1 is not 1,6), and 2D6 were the dice rolls used in miniatures and wargames. Except for 2D6, result tables were used to yield the distribution of results the game authors desired. Lots of people had trouble subtracting 18 from 65. BTW, you wanted to roll under the percentage on percentile dice rolls, just as you wanted to roll 3D6 under your ability. Your bonus is subtracted from the dice roll. Again, the uninitiated and marketing types demanded that high is good.

      My thoughts: Any game where you have to overwhelm the die roll is a bad game. Change the distribution and use reasonable bonuses. And certainly never have a bonus that continues to increase indefinitely.

    1. SittingBull

      Are you sure you are thinking far enough ahead? Yes, the 3d6 gives a different distribution curve, but the DC’s will also need to be changed accordingly.

      All that ultimately matters at the gaming table for these kinds of rolls is the chance of success at passing a certain threshold, and the DM’s set those chances of success by assigning DC’s and such. The vast majority of the time those thresholds are going to be between a 25% and a 75% chance of success. DM’s who regularly set the chances of success beyond those percentages are probably not going to have very happy players.

      So for something relatively hard, say for 25% chance of success, that would correspond with rolling at least a 16 with a d20 roll, and it doesn’t matter if I get a 16, 17, 18, 19, or 20, nor does it matter what the individual percentages are for rolling any specific one of those values (in this case each is 5%, obviously). I will succeed with anything 16 or higher, i.e. a 25% chance of success. That’s all that really matters.

      Likewise, with the 3d6 method, something with about a 25% chance corresponds to rolling a 13 or higher, and it makes no difference whether I get a 13, 14, 15, 16, 17, or 18. As long as the roll is 13 or higher, I’m good. It also makes no difference that I only have a 1.39% chance of rolling a 17, or a 0.46% chance of rolling an 18. The only thing that matters to my success is that I pass the threshold, for which I have about a 25% chance.

      So with one method you need to roll a 16, 17, 18, 19, or 20 to succeed at a hard task (a 25% chance).

      And with the other method, you need to roll a 13, 14, 15, 16, 17, or 18 to succeed at the task (about a 25% chance).
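
      As a quick check of those two thresholds, here is a small Python sketch (an editorial illustration using the numbers in this comment):

      ```python
      from itertools import product

      def d20_at_least(target):
          # Chance of rolling target or higher on a single d20.
          return sum(1 for r in range(1, 21) if r >= target) / 20

      def three_d6_at_least(target):
          # Chance of rolling target or higher on 3d6 (216 equally likely sums).
          sums = [a + b + c for a, b, c in product(range(1, 7), repeat=3)]
          return sum(1 for s in sums if s >= target) / len(sums)

      print(f"d20 16+: {d20_at_least(16):.1%}")       # 25.0%
      print(f"3d6 13+: {three_d6_at_least(13):.1%}")  # 25.9%
      ```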

      3d6 works great for ability scores, but its effects on threshold rolls aren’t really all that impressive. It could be made to work, perhaps as well as d20 or maybe even slightly better, but it’s not the best thing since sliced bread.

      And there is something else to consider: sometimes players and characters want to try things that are really hard in hopes of getting lucky. So with the d20, “gambling” on rolling a 20 in order to accomplish something really wild is more fun for the player, because the chance of rolling that highest value is actually not that small (5%, obviously).

      With the 3d6, getting an 18 is extremely rare (less than half a percent chance), so players aren’t going to see much point in taking those “gambles”, since they virtually never succeed. That’s less fun.

      And again, as a DM you have to think of the next implication (in this case, what the player might do next). So in d20, if they have the time, they might be able to try again. All of a sudden, attempting to accomplish something really hard might actually work. Maybe they will succeed in the nick of time, and it will be a moment to remember. Players really like this kind of thing.

      With 3d6, trying again to get that 18 is pretty much just as pointless as trying the first time. Not so fun for the players.
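
      The odds behind that “gamble and retry” point are easy to put into numbers. Here is a short sketch (an editorial illustration; the per-roll chances are the ones cited above) of hitting the top result at least once over several attempts:

      ```python
      # Per-roll chance of the maximum result on each method.
      p_d20_top = 1 / 20    # natural 20: 5%
      p_3d6_top = 1 / 216   # an 18 on 3d6: about 0.46%

      # Chance of seeing the top result at least once in n attempts.
      for n in (1, 3, 5):
          d20 = 1 - (1 - p_d20_top) ** n
          d6s = 1 - (1 - p_3d6_top) ** n
          print(f"{n} attempt(s): natural 20 {d20:.1%} vs. 3d6 18 {d6s:.1%}")
      ```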

      So you should always think beyond the simple statistics and consider player psychology: 1) Players want to feel like their characters are truly getting better, and 2) Players want to be able to roll the highest value sometimes, enabling them to “save the day” or whatever. That drama is fun for players, and it can even enhance a group’s level of bonding with one another (either IN or OUT of game, or both). These kinds of psychological factors must always be considered when making a mechanical change, and too often people neglect the psychology of the game entirely (exactly like WotC did with bounded accuracy).

  10. Don Holt

    Could you explain to me why your psychological factors are anything other than boosting player ego? There’s nothing wrong with a player-ego-boosting game. Probably 99% of players prefer your type of game. But on the player-ego-versus-shared-story continuum, aren’t you stating that your game’s objectives depend more on satisfying player ego than mine do?

    I like a more intellectual game. Most people don’t see how that type of game can be fun, but then I liked math class.

    I like a “fair” game, where, by the rules, both NPCs and PCs have the same chances of winning. What makes the difference is what my players have their characters do. My players enjoy thinking about what they should do, when they should fight, when they should withdraw, when they need help, and when they can give their character their own expertise (such as horseback riding). My players live in the world, and more important than level is a character’s reputation and position in society. Even so, technically, combat in my game is more complex than D&D’s, though not any more difficult.

    So I do consider psychological factors, but my players and I enjoy the intellectual and knowledge components of the game we play rather than defeating the next group of bad guys each week. I certainly had the opportunity to follow the more traditional model of RPG play, but I count myself fortunate to have had early GMs who had other things in mind. Both of these GMs predated the widespread popularity of D&D.

    Enjoy your D20 and unbounded accuracy. At some point, say when you start running games for your children when they’re 11 to 15, you might want to reconsider your game objectives.

    1. SittingBull

      So you actually believe that my considering player psychology is purely about boosting egos? That’s a pretty massive leap of logic there.

      No, it’s about keeping the game moving and avoiding boredom. As I said, there are no egos at our table. I stopped playing with players like that many years ago. In fact, with our group we actually spend most of our time working on story and character ideas. We also build characters together, and even share them. There is no “ownership” for most of our characters. The story comes first.

      So please stop making these wildly inaccurate strawman arguments. You obviously have no idea what goes on at our table. I guarantee that our game is just as intellectual as yours (if not more so), just in a different style.

      1. Don Holt

        No, I don’t assume anything about what goes on at your table. I depend on what you tell me.

        First, much of the language, such as “truly getting better” or “save the day,” describes ego-boosting concepts.

        Second, you use a system with an inherent bias, by design, for the PCs to come out on top.

        Third, you are objecting to a change in that system that would decrease that PC bias.

        Please tell me more about how you play.

        Do you place your players in historical situations, such as the start of the French Revolution, where mayhem is breaking out all around?

        Do your players routinely use a fighting withdrawal, as Sam Houston did against Santa Anna? How about an ambush?

        Do you have your players kill something most sessions?

        Would your players be disappointed if they never gained a level?

        So when your players are considering standing up for a principle, do they only debate the grayness of the issue at hand or do they also consider the possibility they might lose? And could they lose? What happens if they lose?

        1. SittingBull

          I’m having a hard time taking these last comments of yours seriously.

          Firstly, characters getting better in a game where they are SUPPOSED to get better and acquire more power over time can only be translated as “ego-boosting” to you? Give me a break. For us, it’s all about the story. We like developing characters, and part of that development in games like D&D is having the characters get more powerful within the context of the story. If you think that CHARACTERS gaining power over time automatically translates to PLAYER ego-boosting, then you are gravely misunderstanding the line that many of us draw between player and character. I honestly can’t believe that you made those comments. I thought you were a lot smarter than that.

          “Ego players” have a hard time distinguishing between player and character, but as I have made clear, we are pretty much the exact opposite of those kinds of players. In our campaign, some of our characters die, and we are frequently introducing new characters into the story. Each of us has a lot of characters that we’ve made, and the very suggestion that we are ego-driven players is, quite frankly, insulting, especially considering how far it is from the truth.

          Secondly, the DM sets the difficulty of enemies, and yes, often those enemies are a little less powerful than the heroes. They wouldn’t be heroes, after all, if they died early on and we never got a chance to develop their personalities and histories. Sometimes they do die, however, and sometimes the bad guys win. It all plays a part in the bigger story we are weaving. Again, the very suggestion that our table is somehow ego-driven is about as far from the truth as I can possibly imagine.

          And third, I wouldn’t object if they switched to 3d6, but I don’t think it is some great panacea either. And I see nothing even REMOTELY “ego-boosting” about using the d20 and having results at the far ends of that range (1s and 20s) come up more often. It can be interesting and memorable within the story when we roll 20s, and it can also be interesting when we roll 1s. Ego has absolutely nothing to do with any of it, and I find it extremely bizarre that you would jump to that conclusion.

          Plus, since we are most interested in keeping the story moving, using the d20 is just easy. We don’t want dice to get in the way of our storytelling. A player rolls the d20, and then we move on to the next thing. Most of our time is spent in conversation/interaction, not in die-rolling.

          1. Don Holt

            The questions are serious. I don’t mean to be insulting.

            I’m glad you spend most time in conversation.

            One of the questions I omitted was how frequently you think something unusual should happen: one in 10, one in 100, or less often?

            On average, a 1 or a 20 comes up on every tenth roll. So if you don’t have a lot of combat rolls because you spend your time in conversation/interaction (as do I), then 10% of the time a roll is made, something either extraordinarily good or extraordinarily bad happens.

            The other question I have is this: do you (as DM) plan the general outline of the story beforehand, or does the group make it up as they go along?

            I have to have a lot of rolls in my game. When my players ask a question I don’t know the answer to, meet a new random NPC, or make a statement or take an action that makes me think of a question, I need help determining what the answer should be. I use a DC (actually a percentile check) to see whether the answer I’m about to craft is favorable, neutral, or unfavorable to the character in question. I never know what is going to happen nor how it will turn out. If every 10th roll were extraordinary, I’d be exhausted by the end of the evening. We’ll usually run through 200-400 “questions” in a 3-hour session.
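
            A rough Python sketch of that kind of percentile “question” roll might look like the following; the even three-way split between favorable, neutral, and unfavorable is an assumption for illustration, since the comment doesn’t spell out the thresholds (low rolls are treated as favorable, matching the roll-under convention described elsewhere in this thread):

            ```python
            import random

            def question_roll(favorable_max=32, unfavorable_min=67):
                # Percentile oracle: 00 is best, 99 is worst.
                # The cutoffs here are assumed, not the commenter's actual numbers.
                roll = random.randint(0, 99)
                if roll <= favorable_max:
                    return roll, "favorable"
                if roll >= unfavorable_min:
                    return roll, "unfavorable"
                return roll, "neutral"

            # e.g. "Does the stranger know anything about the missing caravan?"
            print(question_roll())
            ```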

          2. SittingBull

            Sometimes a 1 or a 20 results in something extraordinary, while other times it just means someone failed at something, or just did really well at something. It depends on a whole host of factors, like what they are doing in the first place and how hard it is. If you are in combat, and you roll a 1, well then you just missed, perhaps in an embarrassing way, but that’s the end of it. We move on.

            Sometimes a 20 could be the only possible way of success (based on the DC), or a 1 could be the only possible way of failure. Those times the result tends to be more dramatic and surprising.

          3. SittingBull

            Sure, I plan general outlines of stories. I usually have it somewhat open-ended so that the party can go in different possible directions within the story, but there is usually a central goal that they are pursuing. So to just make up an example, if they are trying to track down an assassin, they could go about it in different ways, but the plot point of catching the assassin is central to that particular part of the story.

            We also discuss larger plot arcs as a group, and sometimes someone else will take over the DM duties for various reasons, with whatever we/they decide to run as an adventure always fitting in somehow with the overall plot of the campaign. As I said, with us it’s a group effort, and we even create characters together (one of our favorite things to do). Someone might have a cool idea for a character, and then someone else might have another idea that we incorporate into that character as well. So we tend to refer to them as “our” characters, not “mine” or “yours”.

        2. BurgerBeast

          On the first point, about ego-boosting language: not when those phrases describe what really happened. Sometimes people do truly get better, and sometimes one character truly does save the day, whether ego is present or not.

          On the second, the claim of an inherent bias: this isn’t true. The bias, if there is one, comes from the tasks the PCs are set against (i.e., the DCs compared to the range of results), not from the method of randomization or the distribution of results.

          On the third: again, not so. Bias depends on task difficulty within the context of the chosen system, not on the inherent properties of the system itself.

          Most of the ‘questions’ you pose in this post are completely irrelevant.

  11. Don Holt

    So at 5%, I can see why a 1 is just a miss. What would you have happen at 0.5%? Would the weapon go flying or be shattered? This is why I like a distribution with extreme tails. Something truly extraordinary should happen when those results occur, because they are truly rare events.

    So if a character is at -20 to do something and needs a 10, and they roll a 20, do they still accomplish the task, as would the character at -11? You don’t differentiate between those failing by 1 and those failing by 10?

    We let the computer generate the base character, selecting the first character who has some high ability or trait and one fairly high ability or trait, with the ability/trait chosen by the player. This way players can get the type of character they want, but there will be bad abilities and traits mixed in as well. (Horoscope sign or aspect and family position are examples of traits.)
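
    A minimal sketch of that kind of computer-assisted generation, with the ability list and the “high” and “fairly high” thresholds chosen purely for illustration (the comment doesn’t give the actual values):

    ```python
    import random

    ABILITIES = ["STR", "INT", "WIS", "DEX", "CON", "CHA"]  # assumed list

    def roll_3d6():
        return sum(random.randint(1, 6) for _ in range(3))

    def generate(preferred, secondary, high=15, fairly_high=13):
        # Reroll whole characters until the player's chosen ability is high
        # and a second chosen ability is fairly high; every other score
        # stands, bad ones included.
        while True:
            character = {name: roll_3d6() for name in ABILITIES}
            if character[preferred] >= high and character[secondary] >= fairly_high:
                return character

    print(generate("DEX", "INT"))
    ```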

    We then just start playing the character, and as the players ask questions about their character, including possessions, friends, skills, etc., they roll percentile, with 00 being as favorable as they can get and 99 meaning the worst possible outcome. (I’m now thinking I’d like to switch from percentile to 3d6, as an 18 is harder to make than a 00.) Other players can ask questions too. We don’t have to spend more than about 5 minutes to get a new character into the game. We let gameplay determine what else we need to know. We have characters that are siblings, friends, and arch-enemies.

    One of the neat aspects of an unpublished game called Adventure! was that alignment was a dynamic function of what the character did, and as one reached one end or the other of the spectrum, it became harder to get in enough good or bad deeds during a session.

  12. Pingback: Our Favorite D&D Stats Sites (So Far!) | Ludus Ludorum

  13. Pingback: “God Will Be Cut”: Giving Nonmagical Equipment the Hattori Hanzo Edge | Ludus Ludorum

  14. Pingback: Why D&D’s d20 Tests Make Experts Look Inept and How to Make the Best of It | DMDavid
