Your complaints about show ratings are well-founded and have been discussed in the past: attendance bias (+), recency bias (+), downrating to offset attendance and recency bias (-), people using the scale differently (e.g., compressing the scale to only two values, 4 and 5), and so on. I've actually got some ideas about how to overcome these problems, and will send you a PM. See below.
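To illustrate the scale-compression problem: two raters can agree completely on which shows were better, yet produce very different raw averages if one only hands out 4s and 5s. A minimal sketch (the ratings here are made up for illustration) shows that standardizing each rater's scores within their own scale recovers the agreement:

```python
# Hypothetical illustration: two raters agree on the ordering of three shows,
# but one compresses the scale to 4s and 5s while the other uses the full
# 1-5 range. Z-scoring each rater's ratings puts them on a common footing.

def zscore(xs):
    """Standardize a list of ratings to mean 0, standard deviation 1."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

full_range = [1.0, 3.0, 5.0]   # rater who uses the whole 1-5 scale
compressed = [4.0, 4.5, 5.0]   # rater who only hands out 4s and 5s

print(zscore(full_range))   # roughly [-1.22, 0.0, 1.22]
print(zscore(compressed))   # identical standardized values: the orderings agree
```

Averaging raw scores would rank these raters' shows very differently; averaging the standardized scores would not.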
@MOO_PHUNK: This post arose from my interest in venue effects, which came from a larger econometric model based on 3.0-era data accounting for # songs/set, bustouts, debuts, vacuum solos, narrations, and a few other things. I'll be updating this model after the 2020 Mexico shows. PM me if you want to see it.
@PHISH21: I don't follow what you're saying; seeing the graph would probably clear it up. PM me for my email address so you can send it.
@DREAMER: it's a standard graph generated by Stata. Minitab, though...that takes me back almost three decades!
To All: I'm thinking of creating a show ratings panel of perhaps fifty respondents for the short Fall tour in early December. I'd ask that participants listen to every show at least once, and then rate all seven shows during a specific window AFTER the Fall tour but BEFORE the NYE run. This should help control for respondent fixed effects and some of the biases inherent in the .net ratings.
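The panel idea above can be sketched in a few lines. With a balanced panel (every respondent rates every show), subtracting each respondent's own mean rating removes their fixed effect (their overall generosity or stinginess), leaving relative show scores that are comparable across raters. The data below are invented for illustration:

```python
# Hypothetical sketch of respondent fixed effects via within-rater demeaning.
# Assumes a complete panel: every rater scores every show.

from collections import defaultdict

def show_scores(ratings):
    """ratings: list of (rater, show, score) tuples; returns demeaned
    average score per show, with each rater's fixed effect removed."""
    by_rater = defaultdict(list)
    for rater, _show, score in ratings:
        by_rater[rater].append(score)
    rater_mean = {r: sum(s) / len(s) for r, s in by_rater.items()}

    adjusted = defaultdict(list)
    for rater, show, score in ratings:
        adjusted[show].append(score - rater_mean[rater])
    return {show: sum(d) / len(d) for show, d in adjusted.items()}

panel = [
    ("a", "12/3", 5.0), ("a", "12/4", 4.0),   # generous rater
    ("b", "12/3", 3.0), ("b", "12/4", 2.0),   # tough rater, same ordering
]
print(show_scores(panel))  # → {'12/3': 0.5, '12/4': -0.5}
```

Both raters prefer 12/3 by a point, and the demeaned scores reflect exactly that, even though their raw averages differ by two points.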