How can they pick the scale beforehand and maintain the validity of 'such and such scaled score is such and such percentile'?
How can they maintain percentile equivalence between tests if they only curve you against the people with whom you take the test?
They see how difficult the questions are by giving them to test takers as experimental (unscored) sections, and then they use those test takers' performance on the questions to determine the scale. Or at least, so I have been led to believe.
Yes, I read something similar, but it may be more complicated. From what I read (can't remember where), the percentiles should be comparable to one of the '96 tests (can't remember which one). The implication is that if I compare my score to someone who took the LSAT in, say, 1997, our scores should be comparable even though the exams differed in difficulty. If that's correct, only looking back 3 years may skew the comparison depending on how test takers are faring. Ultimately, I believe the goal is to have scores that the adcoms can compare: a 180 on the October 2004 exam should mean the same thing as a 180 on the October 2000 exam. Otherwise the adcoms are comparing apples and oranges. Of course, this is difficult to achieve in reality, but I'm assuming LSAC does whatever it can to make the comparison as reliable as possible.
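The idea being described is basically score equating: raw scores on a new, possibly harder form are mapped onto the scale of a reference form so that a given scaled score corresponds to the same percentile on both. Here's a toy sketch of the equipercentile approach in Python. All of the data, the 120-180 scale, and the function names are made up for illustration; this is not LSAC's actual procedure.

```python
def percentile_rank(score, scores):
    """Fraction of test takers scoring at or below `score`."""
    return sum(1 for s in scores if s <= score) / len(scores)

def equate(raw_score, new_form_scores, ref_form_scores, scale):
    """Map a raw score on the new form onto the reference form's scale.

    `scale` maps reference-form raw scores to scaled scores. We find the
    reference raw score whose percentile rank best matches the new-form
    percentile rank, then look up its scaled score. (Real equating
    interpolates smoothly; this nearest-match version just shows the idea.)
    """
    target = percentile_rank(raw_score, new_form_scores)
    best = min(sorted(set(ref_form_scores)),
               key=lambda s: abs(percentile_rank(s, ref_form_scores) - target))
    return scale[best]

# Made-up data: the new form was slightly harder, so raw scores run lower.
ref_form = [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]
new_form = [38, 43, 48, 52, 57, 62, 67, 72, 77, 82]
# Toy linear scale: raw 40 -> 120 scaled, raw 85 -> 180 scaled.
scale = {s: 120 + (s - 40) * 60 // 45 for s in ref_form}

# A raw 57 on the harder new form lands at the 50th percentile, the same
# percentile as a raw 60 on the reference form, so both get scaled 146.
print(equate(57, new_form, ref_form, scale))  # -> 146
```

The experimental sections mentioned above would feed into building `scale` in the first place: item difficulty estimated from pretest takers lets LSAC set the raw-to-scaled conversion before the operational test is ever scored.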