Scatterplots, Scattershots, and Scatterbrains: Using Graphic Modeling in Assessing the Optimal Size of Trivia Teams


Principal investigators: Dr. Edward Mickolus, PhD, Rodney Dangerfield Professor Emeritus of Econometric Acyrology and Omphaloskepsis, Cranberry Lemon University (and former political science editor of the Journal of Irreproducible Results, AIR's predecessor), and Susan Schjelderup, MPA, CQM, CQA, former Quality Assurance Manager. Field data collection team: Micheline and Gary Bruce, BE, QTP, DAD; Robert Creal, PhD; Nancy Creal, MSW; Ken Curtis, BA; Michael Dalbey, MA; Shawn Davis, MPA; Karen and John Florio; Cheryl Ann Gorton, DNP; George E. Gorton, PhD; James Hunt, MA; Emily Serafa Manschot, MA; Ciana Mickolus, PsyD; Candyce Troyer, MA; Maureen Wildey, MBA; William Wildey, BA; Richard Willits, JD; Kay and Steve Woodford, PIPs (previously important people); et al.

Abstract

Attentive to the popularity of such question-and-answer television shows as Beat the Geeks, Jeopardy!, Weakest Link, and Who Wants to Be a Millionaire?, plus the old standby Christmas gift Trivial Pursuit, bars and restaurants around the world have seized upon hosting question-and-answer trivia games, offering prizes sometimes in the form of cash and sometimes in bar scrip. With Covid-19-attributable financial difficulties forcing many universities to cut professorial payrolls, many adjunct and other non-tenured faculty have turned to alternative sources of supplemental income. These include playing competitive trivia at local watering holes, where, with sufficient success, winning coupons and cash can offset the cost of one's dining experiences. This article examines one of the inputs to successful trivia playing: optimal team size. Results spoiler alert: 6 ± 2.

Hypothesis:

Does the size of a trivia team predict a winning score?

Method:

Our data collection team played weekly (and sometimes daily) trivia games between 2014 and 2021, totaling circa 1,000 games, virtually and in person, in 14 bars and other venues (including cruise ships) in Michigan, Florida, Massachusetts, New Hampshire, North Carolina, California, Europe, and Australia. Team composition included professors and professionals specializing in typical trivia categories, plus overall generalists with diverse experiences. Team size ranged from singletons to 20 members.

We identified one independent variable as our predictor of success: the number of members on a trivia team. There are several ways to measure success (our dependent variable):

  • Team rank at halftime
  • Team points at halftime
  • Team rank at end of final round
  • Team points at end of final round

In some variants of the game, a heavily weighted grand finale question determines final rankings and points. This gives us a fifth and sixth measure of success (see the Technical Appendix for why we did not incorporate these additional measures):

  • Team rank at end of finale
  • Team points at end of finale

We normed these ordinal and interval measures into a merged Team Victory variable, giving greater weight to the final outcome (i.e., did our team win?).
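For readers attempting replication, here is a minimal sketch of how such a merged score might be computed. The weights, field names, and normalization choices below are our own illustrative assumptions, not the exact coefficients used in the study:

```python
# Illustrative sketch: combine normalized rank and points measures into a
# single "Team Victory" score. Weights are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class GameResult:
    rank_half: int       # team rank at halftime (1 = best)
    points_half: float   # team points at halftime
    rank_final: int      # team rank at end of final round
    points_final: float  # team points at end of final round
    n_teams: int         # number of teams competing
    max_points: float    # maximum attainable points

def team_victory(g: GameResult, w_final: float = 0.7) -> float:
    """Return a 0-1 score; higher is better. The final round gets
    weight w_final, the halftime snapshot gets 1 - w_final."""
    def norm_rank(rank: int) -> float:        # 1.0 for first place
        return 1.0 - (rank - 1) / max(g.n_teams - 1, 1)
    def norm_points(points: float) -> float:  # fraction of max points
        return points / g.max_points if g.max_points else 0.0

    half = 0.5 * (norm_rank(g.rank_half) + norm_points(g.points_half))
    final = 0.5 * (norm_rank(g.rank_final) + norm_points(g.points_final))
    return (1 - w_final) * half + w_final * final
```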

We also examined the following additional independent variables in relation to team success:

  • Diversity of professional and hobby interests of team members
  • Difficulty of questions

Estimating the difficulty of questions (they're easy when you know the answer; hard when you don't) is the most problematic of these measures. In hosting our own trivia games for some 300 participants, and daily virtual games for 30 participants, we attempted to front-load the easy questions in the first half, then used Bayesian modeling techniques (guessing) to predict performance across the teams. Our predictions show a reasonable fit to eventual field observations. Figure 1 shows that our predictions of incorrect answers (i.e., a stand-in for question difficulty) approximated teams' performance.
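As a rough illustration of the "Bayesian modeling techniques (guessing)" described above, the sketch below estimates a question's difficulty as the posterior mean rate of incorrect answers under a Beta-Binomial model. The prior parameters are assumptions chosen for illustration, not values from our study:

```python
# Beta-Binomial sketch of question difficulty (incorrect-answer rate).
# The Beta(a, b) prior encodes the host's guess; observations update it.
def posterior_difficulty(wrong: int, attempts: int,
                         prior_a: float = 2.0, prior_b: float = 2.0) -> float:
    """Posterior mean probability that a team answers incorrectly."""
    return (prior_a + wrong) / (prior_a + prior_b + attempts)

# Example: 7 of 10 teams missed the question -> difficulty estimate ~0.64
if __name__ == "__main__":
    print(round(posterior_difficulty(wrong=7, attempts=10), 2))
```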

Figure 1: Predicted vs Actual Incorrect Answers

In an effort to enhance the odds of winning (maximizing the Team Victory variable) at any given venue, we assayed the frequencies of question categories to permit team members to select from among the most frequently recurring question categories and study up for the next week's trivia game. (Some bars use question-writing services; some permit their game hosts to write the questions themselves. In both instances, certain category-selection patterns pop up.) Figure 2 is a typical bar histogram of a question-writing service during our period of investigation.
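A minimal sketch of the category tally behind Figure 2 follows; the category labels are invented for illustration and need not match any particular question-writing service:

```python
# Count how often each trivia category appears across past games so a team
# can assign members to study up on the most frequent ones.
from collections import Counter

past_questions = [
    "Geography", "Sports", "Music", "Geography", "Science",
    "Music", "Geography", "History", "Sports", "Music",
]  # placeholder data; in practice, transcribed from past answer sheets

category_counts = Counter(past_questions)
for category, count in category_counts.most_common(3):
    print(f"{category}: {count} appearances -- assign a specialist")
```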

Figure 2: Trivia Nation Category Frequency

Using multiple teams of varying sizes and compositions over a six-year period, we developed scatterplots of team performance, with the dependent variable Team Victory (with the lowest number, 1, indicating the best team rank) on the Y axis and the number of team members on the X axis. To obtain multiple measures, we also charted the individual variables that comprise the Team Victory variable. A typical scatterplot (shown in Figure 3) resembles the illustrative chart that first appeared at https://statacumen.com/teach/S4R/PDS_book/interpreting-the-scatterplot.html
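For readers replicating Figure 3, here is a minimal matplotlib sketch using the same axis conventions (Team Victory rank on the Y axis, team size on the X axis). The data points below are arbitrary placeholders meant only to show the plotting mechanics, not our field observations:

```python
# Scatterplot sketch: team size (X) vs. Team Victory rank (Y, 1 = best).
import matplotlib.pyplot as plt

team_sizes = [1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 15, 20]   # placeholder values
final_rank = [9, 7, 5, 3, 2, 1, 2, 3, 5, 6, 8, 10]      # placeholder values

plt.scatter(team_sizes, final_rank)
plt.gca().invert_yaxis()            # rank 1 (best) plotted at the top
plt.xlabel("Number of team members")
plt.ylabel("Team Victory rank (1 = best)")
plt.title("Team performance vs. team size")
plt.show()
```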

Results:

While there is no straight-line correlation between the two variables, we found an impressive tendency for our teams with 4-8 members to finish consistently in the money (bar payouts tend to go to teams winning, placing, and showing, i.e., finishing in ranks 1, 2, or 3). Having too few or too many team members increases the likelihood of losing.

Interpretation:

Our statistical results dovetail with commonsense wisdom on team composition. A team with too few people will tend to have amassed less specialized knowledge (assuming savants are in short supply, which might not be the case in college towns). Teams with large numbers of players will find it difficult to agree on a single answer, will be unable to hear each other (especially the quiet-spoken and timid) in noisy bars, and will run out of time to submit a consensus answer. Team cohesion can also suffer if two or more persuasive but incorrect alpha members dominate discussion. Flow of liquor and quality of bar food may also have an effect on team performance, although this will require further investigation (N = 1,000 is not sufficiently large for us to draw conclusions, even with our Bayesian guessing).

The Bottom Line: Try for a team of six, plus or minus two (depending upon other social commitments and the availability of babysitters), mixing experts and generalists.

Further Research:

One of the principal investigators is a certified Quality Manager, Quality Auditor, and Baldrige Award examiner who is using Total Quality Management techniques to improve the speed and accuracy of hosting trivia games. Figure 4 displays the hosting trivia team's efforts to reduce cycle time (length of game) to coincide with the need to vacate the venue before the evening cleaning staff arrives. A follow-up article will provide other real-world applications of these techniques.

Figure 4: Bar Trivia Full Life Cycle

Technical Appendix:

Our editors deleted the following terms from the article, but you might find this language helpful when you attempt to replicate our findings:

Periods: Most bar trivia games have a series of questions, divided in half, capped by a more complex finale question for which the points bet are added if answered correctly or deducted if incorrect. These finale questions usually depend heavily upon luck, often offering teams a 1-in-24 (4 factorial) chance of getting them correct. These data were not included because they are irrelevant to the difficulty factor and contributed substantially to team unruliness.

Wagers: Teams usually bet points on questions, which are offered in tranches of 3-4, with 3 tranches per round. Wagers are usually 1-3-5 points (each used only once per tranche) in the first half and 2-4-6 in the second half. Finale wagers generally range from 0 to 15 or 20 points; some are "bet it all," unacceptably skewing the data. In many cases, such reliance upon finales rewards mediocre teams that happen to hit on a 1-in-24 wager.
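To see why finales skew the data, a quick back-of-the-envelope calculation under the 1-in-24 (4 factorial) assumption above; the 20-point wager is simply an example within the stated 0-20 range:

```python
# Expected value of a finale wager when the answer is effectively a
# 1-in-24 guess (ordering four items: 4! equally likely permutations).
import math

p_correct = 1 / math.factorial(4)   # 1/24, roughly 0.042
wager = 20                          # example bet within the 0-20 range

expected_points = p_correct * wager - (1 - p_correct) * wager
print(f"P(correct) = {p_correct:.3f}")
print(f"Expected finale points = {expected_points:.1f}")  # about -18.3
```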

Social: All teams correctly answered a question, usually celebrated by a facility-wide toast.

Anti-Social: No teams correctly answered a question, usually marked by a facility-wide toast of shame.
