Final thoughts after the voting closed


RPG Superstar™ General Discussion


3 people marked this as a favorite.

I was cramming in as many votes as I could before the voting closed moments ago. I saw a few of my personal favorites come by plus a few that I had not seen yet.

Major props to Petty Alchemy and everyone who took the time to update the Items Seen List after voting their hearts out. The thread let me and others track the progress of our items as well as keep track of our favorites.

I am glad that I will be working while the Top 32 and the alternates are announced, as it will keep me occupied and make the time fly by.

No matter the outcome, I had an absolute blast, and I already have a few items in mind for next year.

Champion Voter Season 6, Champion Voter Season 7, Champion Voter Season 8, Champion Voter Season 9

Pathfinder Maps, Pathfinder Accessories, Starfinder Society Subscriber; Pathfinder Roleplaying Game Superscriber

I have to get ready for work right now, but I will come back to this thread later to put in some thoughts on the voting this year.

Dedicated Voter Season 9

2 people marked this as a favorite.

Originally I was kind of pissed because my item didn't even survive until the first cull, as it was clearly better than the things I was voting on. Around the time of the third or fourth cull that changed: I started to see some decent items, and even a few that were better than mine. After the 4th cull, most of the items I was voting on were clearly better written than my own, and I've actually enjoyed voting since the fifth cull.

So while I'm still disappointed, I enjoyed the experience and have a better idea of the submission quality for the next go-around. I'm also still unsure why I was DQ'd, but I look forward to the item feedback thread. I've already received some very helpful feedback from one forum-goer and look forward to seeing more suggestions for improvement.

Oh, and looking at the Items Seen sheet, I can see why I was initially pissed: my sequential "voting blocks" were consistently full of items that were culled in the next pass.

RPG Superstar 2012 Top 32 , Dedicated Voter Season 6, Star Voter Season 7, Dedicated Voter Season 8, Star Voter Season 9 aka SmiloDan

I never saw my item. :-(

I know it survived the first two culls. Not sure about the rest.

Star Voter Season 8, Dedicated Voter Season 9

2 people marked this as a favorite.

I think the voting was very strange this year. I like the fact that there were gradual culls to leave us with the best 200 or so items, but I think that at that point it should have been judges who selected the 32 best items.

I think it is weird to leave it to the public vote, especially since people can vote as many times as they like. Some people voted 5,000+ times, which gave their opinions a lot more weight than others'. With a few people voting 5,000+ times, the items are not necessarily the overall voters' choice; their opinions have too much impact on the results.

I guess Paizo is experimenting, but for the credibility of the contest I really think judges should pick the Top 32. That is not to say the Top 32 we ended up with is bad, though; I think there are very good entries up there, and some personal favorites for sure.

RPG Superstar Season 9 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka theheadkase

5,000 votes is about 2% of all votes (~250k). That 2% was spread over many items...so affecting 1 item significantly more than the others is not possible.

Marathon Voter Season 8, Dedicated Voter Season 9

Kiel Howell wrote:
5,000 votes is about 2% of all votes (~250k). That 2% was spread over many items...so affecting 1 item significantly more than the others is not possible.

Statistically speaking, 2% for one person is a lot. A theoretical cabal of a couple of champions and several marathoners would be able to set the trend and decide a big chunk of the Top 32 if most of the data from the other voters is fuzzy.

I don't think this is the case, to be honest, and I personally enjoyed this year's first round, but the algorithm could be manipulated by someone with the resources and will to do it.

RPG Superstar Season 9 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka theheadkase

But the important bit...

The 2% is spread over X pairings (with ~650 items, roughly 650×649/2 ≈ 211,000 possible pairings). Now, you're only likely to see ~5,000 pairings at champion level, and the likelihood of the exact same pairing happening more than, let's say, 5 times is pretty low. This year was a little different in that the multiple smaller culls drop the field of pairings each time, but you are still talking ~200 items, which is ~200×199/2 ≈ 20,000 different pairings after the final cull.

Long story short: unless you're getting the same item against other items all ~5,000 times, that 2% very quickly becomes < 0.1% after being spread out over all the possible pairings.

I would really like someone who's great at formulae to come up with whatever equation it would be. Assume 650 items initially, 5 culls diminishing the pool each time, and 5,000 votes. It would be SUPER interesting.
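In lieu of a closed-form equation, a quick Monte Carlo sketch gets at the same question. Everything here is an assumption made for illustration: the post-cull pool sizes (the real ones weren't published), the even split of votes across phases, and the convention that the lowest-numbered items are the ones that survive each cull:

```python
import random
from collections import Counter

def simulate_voter(pool_sizes=(650, 520, 420, 340, 270, 220), votes=5000, seed=0):
    """Simulate one voter's random pairings across a shrinking item pool.

    pool_sizes are placeholders: the field before voting and after each
    of the 5 culls. Votes are split evenly across the six phases.
    """
    rng = random.Random(seed)
    pair_counts = Counter()   # how often each exact pairing came up
    item_counts = Counter()   # how often each item appeared in any pairing
    per_phase = votes // len(pool_sizes)
    for size in pool_sizes:
        for _ in range(per_phase):
            a, b = rng.sample(range(size), 2)   # one random pairing
            pair_counts[frozenset((a, b))] += 1
            item_counts[a] += 1
            item_counts[b] += 1
    return pair_counts, item_counts

pairs, items = simulate_voter()
top_pair = pairs.most_common(1)[0][1]
top_item = items.most_common(1)[0][1]
print("most-repeated exact pairing:", top_pair, "times")
print("most-seen single item:", top_item, "views,",
      "%.2f%% of this voter's votes" % (100 * top_item / 5000))
```

Under these assumptions, no exact pairing repeats often, but the long-surviving items rack up many more views than the 2%-of-total figure alone suggests, which is the crux of the disagreement above.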

RPG Superstar Season 9 Top 32 , Star Voter Season 6, Marathon Voter Season 7, Dedicated Voter Season 8, Marathon Voter Season 9

Kiel Howell wrote:

But the important bit...

The 2% is spread over X pairings (with ~650 items, roughly 650×649/2 ≈ 211,000 possible pairings). Now, you're only likely to see ~5,000 pairings at champion level, and the likelihood of the exact same pairing happening more than, let's say, 5 times is pretty low. This year was a little different in that the multiple smaller culls drop the field of pairings each time, but you are still talking ~200 items, which is ~200×199/2 ≈ 20,000 different pairings after the final cull.

Long story short: unless you're getting the same item against other items all ~5,000 times, that 2% very quickly becomes < 0.1% after being spread out over all the possible pairings.

I would really like someone who's great at formulae to come up with whatever equation it would be. Assume 650 items initially, 5 culls diminishing the pool each time, and 5,000 votes. It would be SUPER interesting.

Maybe after I finish my map. And finish getting some draft ideas solidified for the next round. And the one after that >.> and that... Hopefully.

RPG Superstar Season 9 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka theheadkase

Hehe, I'm too busy/lazy myself to do it! 2 day deadline for map and 2 day deadline for large software project at work!!!!

Silver Crusade Marathon Voter Season 9

1 person marked this as a favorite.

Kiel I kinda disagree with your assessment.

Although it is hard for any one voter to affect a single item even with a large count of votes, Feros and LeBlanc alone account for nearly 1 in 20 of the TOTAL votes (it may actually be more than 1/20, since the blog only says "almost a quarter of a million" with no qualifier, and we don't know how many votes they cast past 5,000 each). An opinion with that much weight can greatly affect the outcome of the entire contest.

This also does not account for the workshopping or pit-crewing that goes on. Two people who were in chat workshopped 4 items that made the list, and one of those two made it as well. Knowing who made an item, and having provided some input on it, also slants voting for certain items, because let's be honest: you want your creative input to succeed. I'm not questioning anyone's ethics, but numbers-wise, having multiple people with some seeded investment in your item makes them more likely to vote for you, especially if they are your friends. So if you add up a Champion voter and multiple Marathon voters who all put forth items, those collective votes amount to something significant in the overall contest, favoring a handful of items.

Keep in mind that with these rules for advancement, it is a popularity contest put out to the public. The people voting may have a wealth of design knowledge and experience, or practically none; their design philosophy could be anywhere on the spectrum. My father, whom I love dearly and who has no experience with item design or Pathfinder, voted around 100-200 times because I told him I had entered. Per vote, his votes carry as much weight as those of experienced, tested designers.

With that said, for all those who did not make the Top 32+4: do not be discouraged or assume your item is inferior to those that did; it just might not have been to the voters' taste. Any player of PFS knows that in order to keep a game balanced, designers make choices that are not always the popular ones (Crane Wing, or aasimar favored class bonuses?), so keep your head held high.

Before anyone says that I am biased or jealous because I did not make it: I had a blast hanging out and talking to everyone in the Google chat and enjoyed my experience here. My item was thrown together and probably didn't deserve to make the Top 32. I'm waiting to see if I made the top 100. I will be posting my item to the CMI thread, along with my own critique of it.

edit - Removed names, because I thought it would be better that way.

Also, you forget that it is a beatpath: with voting by comparison, each vote has an impact on all items rated.

Star Voter Season 8, Dedicated Voter Season 9

1 person marked this as a favorite.

Well put Loradin. Those are basically the points I was trying to make and you took the time to express and develop them properly.

I personally hope that they go back to having judges pick out the winners next time around. I also hope that they keep letting the public thin out the herd with progressive culls like they did this time, until the sample is small enough for judges to deal with.

2% of votes for one person is enormous, Kiel, and since more than one person voted 5,000+ times, it absolutely has an impact on what ends up in the Top 32. That being said, congratulations on making it to Round 2, and I hope your map design is going well. Good luck for the remainder of the contest.

RPG Superstar 2014 Top 32 , Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9

I wonder if the judges' comments will be harsh and honest if an item they really disliked, or one with major flaws, made it in.

In the past, at least all the judges had to like the items to some degree; now it's possible that an item which appeals to the masses may have issues that the experienced eyes of the judges would never have allowed into the Top 32.

RPG Superstar Season 9 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka theheadkase

Good points, Loradin and LordCoSaX. I'm still pretty positive that the spread of that 2% of votes (it won't be MUCH greater than 2%) over the pairings doesn't allow a single voter to sway any single item's ranking significantly...but I need to see an actual equation to know if I'm right.

Back to my map!!

Sczarni RPG Superstar 2014 Top 16 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Dedicated Voter Season 9 aka Arkos

16 people marked this as a favorite.

Time to nerd out.

RPGSS Open Call voting is done using the Schulze beatpath method, which creates a huge overall ranking. It's a modification of a series of preference-based voting methods, and is one that is currently used in many circles. Specifically, rather than a single ballot ranking all entries from best to worst, we get the pairing method we're all familiar with. Unfortunately, any kind of preference voting suffers from a few important flaws which are mitigated only under ideal circumstances. I feel as though some of these flaws are leading to the complaints we've seen, though I'm certainly not prepared to suggest any sort of major overhaul. I do, however, have some suggestions.
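For anyone who wants to poke at the method itself, a Schulze ranking can be computed from a pairwise-preference matrix with a widest-path (Floyd-Warshall-style) pass. This is a minimal sketch with toy ballots, not Paizo's actual implementation or tallies:

```python
from itertools import permutations

def schulze_ranking(candidates, pairwise_wins):
    """Rank candidates by the Schulze beatpath method.

    pairwise_wins[(a, b)] = number of voters preferring a over b.
    Returns candidates sorted best-first.
    """
    d = pairwise_wins
    # Strength of the direct link a->b (0 unless a beats b head-to-head).
    p = {(a, b): d.get((a, b), 0) if d.get((a, b), 0) > d.get((b, a), 0) else 0
         for a, b in permutations(candidates, 2)}
    # Widest-path pass: a beatpath's strength is its weakest link;
    # keep the strongest such path between every pair of candidates.
    for i in candidates:
        for j in candidates:
            if i == j:
                continue
            for k in candidates:
                if k == i or k == j:
                    continue
                p[j, k] = max(p[j, k], min(p[j, i], p[i, k]))
    # A candidate ranks above a rival if its beatpath there is stronger
    # than the rival's beatpath back.
    wins = {c: sum(1 for o in candidates if o != c and p[c, o] > p[o, c])
            for c in candidates}
    return sorted(candidates, key=wins.get, reverse=True)

# Toy example: three items, hypothetical head-to-head tallies.
tally = {("A", "B"): 8, ("B", "A"): 4,
         ("B", "C"): 7, ("C", "B"): 5,
         ("C", "A"): 6, ("A", "C"): 6}
print(schulze_ranking(["A", "B", "C"], tally))  # -> ['A', 'B', 'C']
```

Note that A ranks above C here despite tying head-to-head, because A has a strong indirect path through B; that "indirect paths count" property is exactly what makes concentrated, correlated voting blocs interesting to analyze.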

Disclaimer: I voted often this year and in years past. I love the entries that made it in, though I certainly wanted to get into the Top 32 this year. I also teach Game Theory and have trouble not thinking about things like voting methods and strategic voting, and I’ve used this competition as a class analysis in the past. So while this critique may have some feels behind it, I also plan to make sure it has some solid math as well. Take it as you will.

The Nitty Gritty:
Any preference-based voting system generally fails to prevent tactical voting. On a ballot of five names with two major contenders, voting one of those contenders into last place in order to tank their chances of competing with the other is a solid example of tactical voting. The Schulze beatpath method claims to be resistant to tactical voting, since the tactics of a single voter on a single pairwise vote are outweighed by the "unpredictable" masses voting as they see fit.

"Unpredictable" is an important word. It implies that either everyone is voting tactically their own way, or that one voter cannot predict the votes of another. Unfortunately, the current state of the competition makes many votes more predictable than one might think.

  • Snark and Praise trends: Each of these threads presents ideas about submissions in a positive or negative way. Because they are highly public, voters become accustomed to voting in a particular way. Keywords like "filigree" and "blood" mean that items including those words have a tougher time progressing simply because a mass of voters downvotes them due to public opinion. If I can use the Snark and Praise threads to determine how I expect people to vote, then votes are no longer "unpredictable."

  • Items Seen thread: This thread allowed voters a great deal of insight into the process at any given moment, along with the ability to read the text of every single item. This resource would make tactical voting a simple process, due to the ability to create a specific preference list using the example process outlined above.

  • Workshopping: I hate to say this, because I believe workshopping is a good thing as a designer, but folks who workshop are almost certainly voters. If I know how a group of people are going to vote, then those voters are not "unpredictable." Simple as that.

  • High-Volume Voters: Champion voters, especially those who post often on the Snark or Praise threads, and who workshop with others, are especially predictable and have a large amount of sway on the competition. If five champion voters control 10% of the votes and act predictably, then the voting system is not ideal.

Past years have avoided these pitfalls by giving a group of judges final say over the list (which was generated using these same pitfalls, but let's leave that aside for now). How else could we get public voting to a place where it can sincerely create a top 100 list?

  • Total Silence: Allow no public conversation of any kind about the voting process. This seems unenforceable and not fun for voters.

  • No Workshopping: Force every item to be the sole product of a single mind. Also unenforceable and not good for the process of item creation.

  • Limit High-Volume Voters: Create an artificial maximum for votes per day or per competition. I don't see how this can be done at this point, and I don't think it's worthwhile.

Unfortunately, I think these pitfalls are unavoidable given the nature of the competition. I think that judges are one way to mediate these problems, by taking the final rankings out of the hands of a flawed public voting scheme. Unless that returns, we have the current method: a voting system where unintentional systemic biases lead to tactical voting.

I also don’t suggest the judges are anywhere near as fair as the ideal Schulze beatpath system. I simply want to point out that we are currently doing all the things that make Schulze beatpath unfair.

I would and could get more specific, but I feel like that sends me more into “sad also-ran” contention rather than being an analysis of the process.

I just want to say again that this year’s process created a good outcome. I don’t see the Top 36 as unworthy of their place in any way. They are great representations of the good things that come from this competition. I'm actually hesitating to hit submit because I enjoy this competition so much. However, I do want to give some legitimacy to some of the complaints about this year’s process and to hope that some analysis leads to a change in how the process is done in the future.

If you read this whole thing, you are my hero.

RPG Superstar Season 9 Top 32 , Dedicated Voter Season 8, Marathon Voter Season 9 aka Petty Alchemy

1 person marked this as a favorite.

You teach Game Theory, Rich? You're my hero.
---
Workshopping is not an agreement to vote for the workshopped item.
The majority of the people I workshopped for did not see my item, and I found I held items I workshopped to a higher standard when voting.

Star Voter Season 6, Star Voter Season 7, Star Voter Season 8, Star Voter Season 9

Rich Malena wrote:

Time to nerd out.

RPGSS Open Call voting is done using the Schulze beatpath method,

Rich -- Thank you for this fairly thorough (oh how I know you could go into more detail, I can see it in-between the words) discussion of Paizo's methodology.

And I think it sums up my questions and also own responses to this year's voting.

In the end, I'm fine with the system and think it will play out well in the long run. The person who makes it to the very end will still likely have proven they have the chops to design.

One of the biggest things, in my mind, is that any serious attempt to "prevent" the tainting of data cuts into the "community" of RPGSS. As much as I love the chance to get a gig with Paizo, or as much as I enjoy (or don't) voting, what in large part has drawn me back for 4+ seasons is the "friendly faces," the online community, and the kinship with fellow RPGSSers.

I'm glad I read this before I started critiquing the Top 32+4. I think I'll soften my comments slightly --- the way I voted clearly isn't how everyone voted, etc.

Anyways, thank you for the post.

Sczarni RPG Superstar 2014 Top 16 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Dedicated Voter Season 9 aka Arkos

Isaac V wrote:
You teach Game Theory, Rich? You're my hero.

Awwww, shucks. :)

Isaac V wrote:

Workshopping is not an agreement to vote for the workshopped item.

The majority of the people I workshopped for did not see my item, and I found I held items I workshopped to a higher standard when voting.

That is true. The likelihood of seeing workshopped items is overall greater than seeing a single item, but not by a ton. The larger problem is simply that a "workshop" is similar to a voting bloc and may have similar guidelines for how they vote during the competition. Even without a nefarious agreement, it does create a bias.

Sorry if I implied something wicked happening! I was looking at the unintentional bias rather than some kind of plot!

RPG Superstar Season 9 Top 32 , Dedicated Voter Season 8, Marathon Voter Season 9 aka Petty Alchemy

It was hands down my favorite class in university. I would've loved to specialize in it, but I had no idea what I'd be doing to earn my daily bread afterwards.

And to clarify, I meant that the majority of the time I did not send my item for workshopping to a person I workshopped for. So maybe they saw it in the voting stage, but they didn't know it was mine.

It's also possible to create an unfavorable bias when workshopping, as someone might get offended or the like. I don't know if this actually happened to anyone, but it's certainly possible.

I do get what you're saying though.

Star Voter Season 8, Dedicated Voter Season 9

Fascinating post, Rich. It would seem that reverting to judges would indeed be a step in the right direction, although it's not a perfect solution (I doubt such a solution exists).

I also want to mention that although I am very critical of how the voting system was handled, it doesn't mean I think the people who made it to Round 2 don't deserve it.

However, I think it's important to highlight the method's failings, as Rich did so well in his first post.

Scarab Sages RPG Superstar 2015 Top 16 , Dedicated Voter Season 8, Star Voter Season 9 aka Rusty Ironpants

Rich, I just want to clarify that the Items Seen thread did not provide voters with the ability to read the text of every single item. The tracking spreadsheet just provided the name and last reported seen date of each item.

Shadow Lodge Dedicated Voter Season 7, Dedicated Voter Season 8, Dedicated Voter Season 9

Pathfinder Maps, Pathfinder Accessories Subscriber; Pathfinder Roleplaying Game Superscriber; Starfinder Superscriber
Russ Brown wrote:
Rich, I just want to clarify that the Items Seen thread did not provide voters with the ability to read the text of every single item. The tracking spreadsheet just provided the name and last reported seen date of each item.

And item type/slot.

RPG Superstar Season 9 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka theheadkase

Thanks for chiming in Rich!

Star Voter Season 6, Dedicated Voter Season 7, Star Voter Season 8

Russ Brown wrote:
Rich, I just want to clarify that the Items Seen thread did not provide voters with the ability to read the text of every single item. The tracking spreadsheet just provided the name and last reported seen date of each item.

There was a document that the item seen spreadsheet linked to that contained someone's full-text list of items. That link was later removed.

Sczarni RPG Superstar 2014 Top 16 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Dedicated Voter Season 9 aka Arkos

Garrett Guillotte wrote:
Russ Brown wrote:
Rich, I just want to clarify that the Items Seen thread did not provide voters with the ability to read the text of every single item. The tracking spreadsheet just provided the name and last reported seen date of each item.
There was a document that the item seen spreadsheet linked to that contained someone's full-text list of items. That link was later removed.

Ah, I missed that it was removed. I just remember that it was on the site early in the process.

RPG Superstar 2009 Top 16, 2012 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka Epic Meepo

2 people marked this as a favorite.

I agree with 90% of the culls and I didn't see too many surprises in the Top 32, so I think the process did a fairly good job of identifying high-quality items. In fact, this is the first season where I haven't seen a single Top 32 pick that made me ask, "How did that get in here?"

That being said, based on my Marathon voting, I saw some areas where the voting system might have broken down a bit, especially after the final cull. Going on purely anecdotal evidence, the biggest problem I saw was the tendency of certain items to "follow" me as I voted. In other words, my nearly 2,000 votes were not evenly distributed across the available selection of items.

Looking at only the final 223 items, I voted on each item an average of ten times. (I saw my own item seven times and one of the items I workshopped thirteen times, for example.) However, there were eleven items I saw fewer than five times each and thirteen items I saw more than fifteen times each. That's 10% of the final 223 that received a disproportionate number of my votes.

Let's look at the two most extreme cases. There was one item in the final 223 that I never saw. There was another item in the final 223 that I saw twenty times (and down-voted nineteen times, though I could see how other voters with different preferences might have liked it). That happenstance was pure chance. There was an equal probability that my viewing numbers for those two items would have been reversed.

For the sake of argument, let's say the item I never saw was one I would have really liked and up-voted almost every time I saw it. Let's also say that both of the items in question were middle-of-the-road items for voters other than myself. If my viewing numbers had been reversed, that would have meant 10 fewer down-votes and 10 more up-votes for the item I never saw (but would have liked) and 10 fewer down-votes and 10 more up-votes for the item I saw twenty times (but didn't like).

Would those additional up-votes have affected the chances of those two items? The answer depends on lots of complicated things, but I don't think it's unreasonable to speculate that an extra 10 up-votes might have bumped either of two items already in the top 223 up a few spots. If one of those items finished in 37th place, it might have missed out on the Top 32 simply because its number of views by yours truly was a statistical outlier.

TL;DR: Item views need to be better distributed across all voters. When Marathon and Champion voters see a small number of items twenty or more times, their influence on those randomly chosen items is disproportionately large.

Sczarni RPG Superstar 2014 Top 16 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Dedicated Voter Season 9 aka Arkos

Eric Morton wrote:
A really interesting conjecture with statistical evidence.

DATA! I wish I could see it all. I really appreciate that you've noted a specific and quantifiable moment here. This is the kind of analysis that can really detail how the process is working with asymmetric voting patterns. I would love to see how those twenty votes compare to the total votes for that item, how that played out over time, whether high-volume voters have significant sway... all sorts of interesting questions show up!

Why didn't I get into big data? Oh well.

Paizo Employee Chief Technical Officer

4 people marked this as a favorite.
Eric Morton wrote:
Item views need to be better distributed across all voters.

Each entry ends up being voted on (roughly) the same number of times, and (even more roughly) by the same number of people. Statistically speaking, the difference between "roughly" and "precisely" should be noise.
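For what it's worth, the "noise" is easy to eyeball with a simulation. Assuming (purely for illustration) 223 items, one voter averaging ten views per item, and uniformly random pairings, the natural spread of per-item view counts looks like this:

```python
import random
from collections import Counter

def view_spread(n_items=223, mean_views=10, trials=200, seed=1):
    """Average min/max per-item view counts for one voter, assuming
    every pairing is an independent uniform random draw.

    n_items and mean_views are illustrative figures, not Paizo's data.
    """
    rng = random.Random(seed)
    n_votes = n_items * mean_views // 2  # each vote shows two items
    lows, highs = [], []
    for _ in range(trials):
        views = Counter()
        for _ in range(n_votes):
            a, b = rng.sample(range(n_items), 2)
            views[a] += 1
            views[b] += 1
        counts = [views[i] for i in range(n_items)]
        lows.append(min(counts))
        highs.append(max(counts))
    return sum(lows) / trials, sum(highs) / trials

lo, hi = view_spread()
print("average minimum views per item: %.1f" % lo)
print("average maximum views per item: %.1f" % hi)
```

Even under this idealized model, a wide gap between the least-seen and most-seen items is what chance alone produces; telling whether the real pairing engine deviates from that would take the actual logs.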

Scarab Sages RPG Superstar 2015 Top 16 , Dedicated Voter Season 8, Star Voter Season 9 aka Rusty Ironpants

Garrett Guillotte wrote:
Russ Brown wrote:
Rich, I just want to clarify that the Items Seen thread did not provide voters with the ability to read the text of every single item. The tracking spreadsheet just provided the name and last reported seen date of each item.
There was a document that the item seen spreadsheet linked to that contained someone's full-text list of items. That link was later removed.

Ah, I never saw that link. I agree that info shouldn't be available.

Star Voter Season 6, Dedicated Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9

Rich Malena wrote:

Time to nerd out.

If you read this whole thing, you are my hero....

Yeah! I'm somebody's hero. I win! I win!

Thanks for the insight, Rich.
I think I can talk about workshopping with some credibility (I get asked to do it a lot, for some reason :). But I still have 30-some Top 32 items to review, and I'm just dotting the thread for now.

I will post an anecdote, however. I ran into an item that did something similar to one of the items I had critiqued. It was late in voting (after the 3rd or 4th cull): "Dang, this is close to 'Petra's' item. Double dang, it is way better than Petra's item. OK, this is a keeper; it is going on my list. Sorry, Petra."

Lo and behold: that item made it into the Top 32.

Sczarni RPG Superstar 2012 Top 32 , Champion Voter Season 6, Champion Voter Season 7, Champion Voter Season 8, Champion Voter Season 9

2 people marked this as a favorite.

Where does the 250k total vote number come from? I haven't seen a blog or an official post with that number.

As for Feros and me, we each knew the other's item after the voting started (Day 3 or 4, I think), and neither of us made it into the Top 36 spots. I am pretty sure we both have the same integrity in voting and chose the superior item every time. I liked his item a bunch, but in my opinion some of the items in this year's Top 32+4 were superior to his and received the vote over his. I won't lie and say his item always received an impartial vote from me, though: if I found it paired with an item with similar mojo, he received my vote, because I know he delivers on a consistent basis and I wouldn't need to check his math or template.

Champion voters have never made the Top 32+ since public voting started (95% sure, at least). And if this year's Champion voters had much of an effect, Feros would have survived the 5th cull and/or made the cut, and I could possibly have made the cut. This year my item accounted for 0.2% of my votes cast, not enough to make any difference. Even the year I cast 8k+ votes, I didn't make the top 100 list (I must be over 25k lifetime votes cast by now).

I kept a detailed voting log for items I liked. I only down-voted Feros twice, both times against items that made the Top 32. Not sure how he voted on mine, but he pointed out flaws in my item. Of the 6 items whose creators I knew, 4 made the Top 32; each had 1-3 down-votes from me and 14, 17, 22, and 24 up-votes.

There is an item that made the Top 32 that I only up-voted a few times prior to the 4th cull. I voted on that item a lot (it seemed to stalk me): 16 down-votes after the 5th cull alone. I thought it was the 2nd-worst item of its kind in the entire contest this year.

There is one item I never down-voted the 19 times I saw it and it didn't make the Top 32+.

I do agree with Eric Morton about items seeming to follow me. I would see a group of items heavily for a couple of hours. I saw the largest variety of items between 2000 and 0400 PST. I once voted on the same item in 5 different pairings in a row, and once saw my own item in 2 pairings in a row.

Champion Voter Season 6, Champion Voter Season 7, Champion Voter Season 8, Champion Voter Season 9

1 person marked this as a favorite.
Pathfinder Maps, Pathfinder Accessories, Starfinder Society Subscriber; Pathfinder Roleplaying Game Superscriber
Thomas LeBlanc wrote:
Where does the 250k total vote number come from? I haven't seen a blog or an official post with that number.

They don't give the final number, but Owen references it in the Top 32 Blog Post:

Owen K.C. Stephens wrote:
...Nearly a quarter of a million votes went into this selection, and we culled the bottom percentage of items 5 times throughout the contest, leaving us with just the top third of entries for the final push through the weekend...

As for my take on it: I have entered all four years I have voted. Not once have I made the Top 32+4 in that time. I have seen my item many times throughout these contests, and my vote has not won the day. I made the top 100 a few times, but that's it.

I knew Monica Marlowe's item early on last season. I up-voted it a number of times before I knew it was hers, and I down-voted it a number of times after I knew it was hers. Thomas and I are VERY serious about voter integrity and about not allowing friendship and camaraderie to get in the way of giving the people running the show the information they need. I liked Thomas' item, but it wasn't as good as others, so though I mostly up-voted it, I down-voted it a number of times.

We both want each other to get into Round 2 of this contest. But we only vote the number of times we do because we love this contest and what it represents. Could we become a cabal and up-vote the ones we like? In theory, but it runs against our natures and our desire to see the best items win. So it is never going to happen. Both of our items had faults that cost us and kept us from proceeding. Welcome to RPG Superstar: nothing comes easy.

Now take this into consideration: since two of the Champion voters make it their business to vote with integrity, what does that do to anyone trying to (lamely) game the system? We are now bulwarks against the very influence being discussed here. Our votes counter theirs, making the whole exercise pointless. This makes me feel that the lack of sleep and general strain that often comes with voting is completely worth it.

So there is another view: at least two of the Champion voters are the defenders of this contest from voting cabals and dishonest team builders. And if I have any say in it, we always will be.

Star Voter Season 6, Dedicated Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9

Please keep in mind: no one's integrity is being questioned. Indeed, Feros' and Thomas' integrity makes the point (predictable voting behaviors). Rich and others are merely pointing out that predictable voting is a weakness of the system, and that statistical outliers in this method are predictable. :)

That said, Feros & LeBlanc: WTF didn't you upvote my item!?! Didn't you see its grollicks genius?!?

;)

Sczarni RPG Superstar 2012 Top 32 , Champion Voter Season 6, Champion Voter Season 7, Champion Voter Season 8, Champion Voter Season 9

1 person marked this as a favorite.
Feros wrote:
Welcome to RPG Superstar: nothing comes easy.

Sounds like the blog title that should announce the next season, Season 10!

Sczarni RPG Superstar 2012 Top 32 , Champion Voter Season 6, Champion Voter Season 7, Champion Voter Season 8, Champion Voter Season 9

Curaigh wrote:

That said Feros & LeBlanc: WTF didn't you upvote my item!?! Didn't you see it grollicks genius?!?

;)

I think it may have been on my DQ list...

(Kidding, I don't know which was yours)

EDIT: I did submit about 10% of the items for DQ.

RPG Superstar 2015 Top 8 , Star Voter Season 7, Star Voter Season 8

I'm not noticing any big gap in quality between this year's winners and other years'. Maybe that "wisdom of crowds" stuff isn't so crazy.
Makes me think there should be a spell a la commune with nature but set in an urban environment -- the caster mystically communes with the hordes of intelligent creatures around him and puts together a pastiche of perspectives that answers a few questions about the area.
A priest of Erastil should be all over that!

RPG Superstar 2009 Top 16, 2012 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka Epic Meepo

Vic Wertz wrote:
Eric Morton wrote:
Item views need to be better distributed across all voters.
Each entry ends up being voted on (roughly) the same number of times, and (even more roughly) by the same number of people. Statistically speaking, the difference between "roughly" and "precisely" should be noise.

I'm not worried about the number of votes per item or the number of voters per item, since those are first-order effects. Variations from those numbers will become vanishingly small as the number of votes greatly exceeds the number of items (a few hundred).

I'm more concerned about the numbers of views of each item per voter, which are second-order effects. Variations from the averages for those numbers won't become vanishingly small until the number of votes greatly exceeds the number of item-voter combinations (several million).

If all voters had the same preferences, that wouldn't be an issue, since every item is equally likely to get a large number of views from at least one voter. However, it matters which voter sees which item a large number of times, and that pairing is chosen entirely at random.

I don't think there is any way for Marathon voters, Champion voters, or voting blocks to game the system, since they are all checks and balances against one another. And I don't think they can single-handedly elevate items they like into the Top 32.

I do, however, get the feeling that the fates of items on the threshold of a cutoff are determined more by random chance than by the voters, because a random subset of the voting public (as opposed to the voting public as a whole) will have cast the deciding votes on those items.

That effect wouldn't be able to catapult an item at risk of getting eliminated in the first cull into the Top 32 or anything like that, but I think it could mean the difference between Alternate and Top 32, or the difference between eliminated in the fourth cull and eliminated in the fifth cull.

Star Voter Season 6, Dedicated Voter Season 7, Star Voter Season 8

Eric Morton wrote:
That effect wouldn't be able to catapult an item at risk of getting eliminated in the first cull into the Top 32 or anything like that, but I think it could mean the difference between Alternate and Top 32, or the difference between eliminated in the fourth cull and eliminated in the fifth cull.

Agreed; it's the edges between the Top 32+4 and the next few items where curation could put some of the voting concerns to rest, even if the judges only shuffle the Top 40. The difference between being 33rd and 32nd, or 36th and 37th, is massive.

If it has to stick to voters, one last cull on the final day could narrow the field to the top 100, or even the top 50, allowing even more of the votes to go toward resolving those edge cases.

Nobody's ever going to be happy, though. (Well, nobody but the Top 32+4!)

RPG Superstar 2009 Top 16, 2012 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka Epic Meepo

1 person marked this as a favorite.

I think the voters did fine without judge intervention. I would just like to see the algorithm tweaked to spread the items around better. That way, I'd feel more confident that most voters' thoughts on an item have been heard before I get an opportunity to down-vote that item nineteen times.

Instead of generating a random item pair each time someone casts a vote, perhaps the system could assign a (behind-the-scenes) string of pre-generated item pairs to each account before the start of voting. That would let Paizo better regulate the distribution of items to voters. No one would get twenty times as many votes on a given item as anyone else; everyone would see unique items more frequently without having to wade through constant repeats; and Paizo could encourage people to cast even more votes by offering incentives like, "Every Marathon voter is guaranteed to see each item at least once."

That's the sort of change I'd like to see. I think it would make voting both more reliable and more enjoyable for voters.

Paizo Employee Chief Technical Officer

Eric Morton wrote:
Instead of generating a random item pair each time someone casts a vote, perhaps the system could assign a (behind-the-scenes) string of pre-generated item pairs to each account before the start of voting.

We don't generate a random pair. But we're looking at distribution over the voting body as a whole rather than to individual voters. With the current participation levels, we are in the neighborhood of having each possible pairing voted on by someone*, and we can only make that happen if we assign the pairings on the fly. If we preassigned pairings to people before we knew their participation levels, there's a good probability that many pairings would be assigned to people who never vote on them.

*Using (but please note that I am not confirming) the Items Seen List's estimate of 688 entries, there would be 236,328 different pairings, which aligns pretty neatly with Owen's tally of "nearly a quarter of a million votes."
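As a sanity check on that footnote, the combinatorics are easy to reproduce in Python (a quick sketch using the same unconfirmed 688-entry estimate):

```python
import math

# Unordered head-to-head pairings among an estimated 688 entries:
# C(688, 2) = (688 * 687) / 2
print(math.comb(688, 2))  # → 236328
```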

RPG Superstar 2009 Top 16, 2012 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka Epic Meepo

Fair point.

Perhaps a better way to reduce repeat viewings would be to have a sequential (non-random) central list of every possible item pairing. As each voter enters the voting booth for the first time, they are assigned a random position on that list and see the corresponding item pair. The algorithm that assigns this position to the voter would specifically target unseen item pairs. Each time someone casts a vote, their position on the list increases by a number of spaces equal to the first prime number greater than the number of items in the running. That would greatly reduce the number of repeat items seen (since a voter almost always skips from the block with Item X on the left to the block with Item X+1 on the left) while also guaranteeing that as many different item pairings as possible are seen.

Example: If you have 688 items again, you could generate a sequential list that has all 687 pairs with Item #1 on the left followed by all 687 pairs with Item #2 on the left, etc. The first time each voter arrives in the voting booth, they start at a random position corresponding to an unseen item pair. Each time they cast a vote, they move 691 spaces down the list. That should give you the maximum number of item pairs seen while still having relatively few repeat views per voter.
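A rough sketch of that walk in Python, scaled down to a hypothetical 8-item pool so the whole list stays visible (the helpers `next_prime` and `pairing_walk` are illustrative names, not part of any proposal): because the stride is a prime larger than the item count, it shares no factors with the list length, so a voter cycles through every position before seeing any pairing twice.

```python
def next_prime(n):
    """First prime strictly greater than n (hypothetical helper)."""
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    k = n + 1
    while not is_prime(k):
        k += 1
    return k

def pairing_walk(num_items, start, votes):
    """Walk the sequential pair list, striding by the first prime
    greater than the item count after each vote."""
    # Block layout from the example: all pairs with item 0 on the left,
    # then all pairs with item 1 on the left, and so on.
    pairs = [(a, b) for a in range(num_items)
             for b in range(num_items) if a != b]
    stride = next_prime(num_items)
    return [pairs[(start + v * stride) % len(pairs)] for v in range(votes)]

# Demo with 8 items: the list holds 8 * 7 = 56 ordered pairs, the stride
# is 11, and gcd(11, 56) = 1, so 56 votes hit 56 distinct pairings.
seen = pairing_walk(8, start=0, votes=56)
print(len(set(seen)))  # → 56
```

With 688 items the stride would be 691, as described, and the full list of 688 × 687 = 472,656 ordered pairs would be exhausted before any voter saw a repeat.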

Marathon Voter Season 9

Aren't repeat viewings kind of helpful, though? I mean, I don't want my one vote on Awesome Item of Coolness to be negative just because it happened to be paired against Awesome Item of Coolness With Better Formatting.

Star Voter Season 6, Dedicated Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9

Kobold Cleaver wrote:
Aren't repeat viewings kind of helpful, though? I mean, I don't want my one vote on Awesome Item of Coolness to be negative just because it happened to be paired against Awesome Item of Coolness With Better Formatting.

That is kind of Eric's point.

Voter A: Awesome Item of Coolness < Awesome Item of Coolness With Better Formatting
Voter B: Awesome Item of Coolness > Awesome Item of Coolness With Better Formatting
The end results are different when Voter A votes on a specific item more often than Voter B.

Paizo Employee Chief Technical Officer

1 person marked this as a favorite.

But that's ameliorated by the fact that Voter A (or Voter B) isn't likely to be comparing those exact two items against each other repeatedly—instead, when he does see repeats, he's comparing each of them against different opponents. In the system we're using, that's building a more complete chain of his preferences, and that's actually valuable data.

For example, Voter A tells us he prefers Item 1 to Item 2, and he prefers Item 1 to Item 3, and he prefers Item 4 to Item 1. With our system, if every voter did exactly the same thing, Item 4 would win, Item 1 would come in second, and Items 2 and 3 would tie for third. The fact that Item 1 has been seen three times more often does not give an advantage to Item 1—but it *does* give us a more accurate picture of what the relative opinions of all of the items are. This is what we WANT.

In a system that just credits wins, you'd have a different winner. You'd have 66% of the votes for item 1, 33% of the votes for item 4, and no votes for items 2 or 3. Under a system like that, the fact that Item 1 had been seen more than the others would be a very big deal indeed.

RPG Superstar 2009 Top 16, 2012 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Marathon Voter Season 8, Marathon Voter Season 9 aka Epic Meepo

Vic Wertz wrote:
But that's ameliorated by the fact that Voter A (or Voter B) isn't likely to be comparing those exact two items against each other repeatedly—instead, when he does see repeats, he's comparing each of them against different opponents. In the system we're using, that's building a more complete chain of his preferences, and that's actually valuable data.

But does your method check individual voters' chains of preference? My understanding was that there was a single chain of preference based upon aggregate data collected for each individual item pair (i.e., 75% of voters chose Item 1 over Item 2 and 65% of voters chose Item 2 over Item 3, so Item 1 beats Item 2 beats Item 3 without needing to check which specific voters up-voted each item).

Voters' individual chains of preference would only matter if your system calculated multiple chains of preference (one per voter), averaged the placement each item got within those multiple chains of preference, and then ranked items from highest average placement to lowest average placement.

In a voting method where there is a single chain of preference based upon aggregate data for each individual item pair, repeat viewings by a single voter provide no more information than an equal number of viewings by multiple different voters.

Dark Archive Star Voter Season 7, Star Voter Season 8, Marathon Voter Season 9

I think I understand the Schulze Beatpath now. Here is a basic example as far as I understand it, am I on the right track?

Spoiler:

Assuming 4 entries in a competition.

entries :: votes | winner
1 vs. 2 :: 13-27 | Item 2
1 vs. 3 :: 25-17 | Item 1
1 vs. 4 :: 16-21 | Item 4
2 vs. 3 :: 32-8 | Item 2
2 vs. 4 :: 21-22 | Item 4
3 vs. 4 :: 19-18 | Item 3

This tally would leave items 2 & 4 tied for first place while items 1 & 3 are tied for third. Ties such as these are then resolved by seeing which item in the tie was victorious when they were pitted directly against each other. Item 4 beat item 2; item 1 defeated item 3.

Therefore the final ranking would be:
First: Item 4
Second: Item 2
Third: Item 1
Last: Item 3
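Close: Schulze proper ranks by strongest beatpaths rather than by raw win counts with a head-to-head tiebreaker, though for this example the two approaches happen to agree. A sketch of the beatpath computation (the standard winning-votes variant, with strongest paths found via a Floyd-Warshall-style pass over the six results above):

```python
# Schulze beatpath for the four-item example above (winning-votes variant).
# d[i, j] = number of voters preferring item i over item j.
ITEMS = [1, 2, 3, 4]
d = {(1, 2): 13, (2, 1): 27,
     (1, 3): 25, (3, 1): 17,
     (1, 4): 16, (4, 1): 21,
     (2, 3): 32, (3, 2): 8,
     (2, 4): 21, (4, 2): 22,
     (3, 4): 19, (4, 3): 18}

# p[i, j] starts as the direct link strength: d[i, j] if i beats j, else 0.
p = {(i, j): d[i, j] if d[i, j] > d[j, i] else 0
     for i in ITEMS for j in ITEMS if i != j}

# Relaxation pass: a path is only as strong as its weakest link, and we
# keep the strongest path between each ordered pair of items.
for k in ITEMS:
    for i in ITEMS:
        for j in ITEMS:
            if len({i, j, k}) == 3:
                p[i, j] = max(p[i, j], min(p[i, k], p[k, j]))

# Item i outranks j when p[i, j] > p[j, i]; rank by number of items beaten.
ranking = sorted(ITEMS, reverse=True,
                 key=lambda i: sum(p[i, j] > p[j, i] for j in ITEMS if j != i))
print(ranking)  # → [4, 2, 1, 3]
```

Note that these results actually form a cycle (2 beats 1, 1 beats 3, 3 beats 4, 4 beats 2), which is exactly the situation the beatpath method exists to untangle.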

Paizo Employee Chief Technical Officer

Eric Morton wrote:
Vic Wertz wrote:
But that's ameliorated by the fact that Voter A (or Voter B) isn't likely to be comparing those exact two items against each other repeatedly—instead, when he does see repeats, he's comparing each of them against different opponents. In the system we're using, that's building a more complete chain of his preferences, and that's actually valuable data.
But does your method check individual voters' chains of preference?

No—you're right—it was inaccurate to say we're "building a more complete chain of his preferences, and that's actually valuable data;" it would be more accurate to say "we're collecting more data on his preferences between a greater variety of different pairs, and that's actually valuable data." But the point is still the same: the extra volume of input we get from marathon voters is giving us new and useful data about different pairings rather than increasing the weight of that individual's preferences about any given pairing.

Liberty's Edge RPG Superstar Season 9 Top 32 , Marathon Voter Season 9 aka Thrawn007

Eric Morton wrote:


I'm more concerned about the numbers of views of each item per voter, which are second-order effects. Variations from the averages for those numbers won't become vanishingly small until the number of votes greatly exceeds the number of item-voter combinations (several million).

I'm not worried about this one either. I WAS worried during the voting. In about 1,500 votes total, I had a single item come up 37 times. It was a good item; I definitely felt it would make the top 100, but I didn't feel it should be top 32. It was simply above average. Of those 37 votes, I upvoted it 35 times, since it kept getting matched against items that were inferior to it. I was really concerned that upvoting it over and over was going to push it into the top 32, even though I didn't feel it belonged there. However, since my votes were a small part of the bigger picture, things averaged out, and the item made the top 100, but not the top 32.

I had 5 other items that had 20+ votes and 95% or better upvote rates that were in similar situations. In all 5 of those cases, they made 100, not 32, just like I felt they should.

So at least for me...the final results showed that the system worked as intended, even though I was looking at a small sample size.

Liberty's Edge Star Voter Season 9

I feel like it was a Total Waste of My Time...

Considering I never saw any of the Top 32 Items...

Wondering Why I wasted hours seeing the Same items...

Would I like to Win the Contest..Sure that is Why I am Entered..

So what would have been a Better Effort on my Part...wasting time voting or just doing my own thing...

Would I rather design my own stuff and offer it for Sale privately or Even run a Kickstarter..Sure...I believe I would have a better chance of even 1 sale of 1 Item than ever winning here. YES

RPG Superstar Season 9 Top 32 , Marathon Voter Season 6, Marathon Voter Season 7, Champion Voter Season 8, Marathon Voter Season 9 aka GM_Solspiral

4 people marked this as a favorite.
JPSTOD wrote:

I feel like it was a Total Waste of My Time...

Considering I never saw any of the Top 32 Items...

Wondering Why I wasted hours seeing the Same items...

Would I like to Win the Contest..Sure that is Why I am Entered..

So what would have been a Better Effort on my Part...wasting time voting or just doing my own thing...

Would I rather design my own stuff and offer it for Sale privately or Even run a Kickstarter..Sure...I believe I would have a better chance of even 1 sale of 1 Item than ever winning here. YES

Judging solely by the item you presented, your reaction to the feedback given, and the general tone of your posts... You have a very hard road ahead of you if your intention is to launch a kickstarter.
