You know, every ranking is arguable to some degree, and each seems to rest on a potentially useful methodology, though I do not know what Hon. Thomas Brennan (Cooley Law) and the Princeton Review are using.
The Lawdragon ranking is based strictly on the prestige of practicing lawyers, which is a somewhat useful gauge. But one problem with this methodology, aside from the fact that it ignores crucial factors included in Leiter, USNWR, and Princeton Review, is that the acclaim of many of these lawyers can be attributed to many factors besides the schools they attended. Any attempt to infer a causal relationship between the lawyers' stellar performances and their law schools (which they often attended 20 years ago or more) is tenuous because their careers have also been shaped by training/mentoring, additional education, their individual work ethics, and even luck. Can any of those factors be attributed to their schools? Perhaps, but in what measurable way(s)? Moreover, the schools themselves have changed a great deal since these attorneys attended them. Harvard, for one, continues to coast on the momentum it gained in the early 1900s and on a self-fulfilling prophecy that continually drives great talent to the school. This isn't to say that Harvard doesn't deserve its high ranking, only that its relative rating/ranking (and that of other so-called "elite" law schools) may be inflated.
USNWR ignores qualities that, despite what one poster deems an undergraduate-centered approach, are very important considerations in picking a law school. One absolutely should care about career prospects, campus environment, and diversity (e.g., how welcoming a school is to older students and minorities), to name a few. Yet, despite offering some useful standards of measurement, some of its categories (such as the number of volumes in the library) may have little importance; a school can inflate that count simply by retaining outdated, seldom-used materials. Moreover, the weights applied to those measurements are arbitrary. How does USN choose these weights, and what makes the magazine the arbiter of what's most or least important in determining a law school's "quality"? Surveys of members of "peer" institutions are also problematic for obvious reasons: these peers have built-in, self-interested motives for downgrading the competition, and judges and attorneys often wind up grading schools with which they have little familiarity. The use of casually administered peer assessments therefore borders on the irresponsible. In addition, the ranking has proven vulnerable to "gaming" in the reporting of factors such as per-student expenditures and the GPAs and LSATs of incoming classes. One increasingly popular tactic is the use of part-time and transfer admissions, directing arguably lesser-credentialed students into those programs so that their numbers do not count against the USNWR LSAT/GPA grade for "student quality." To counteract this gaming of GPA/LSAT stats, USNWR should consider redistributing the weights applied to GPA/LSAT and/or including the numbers from part-time students and transfers in its assessments. For these and other reasons, the USNWR ranking also has great limitations.
Leiter's Educational Quality Rankings are, much like the USNWR rankings, useful but flawed because of the arbitrary weights applied to certain categories. Are frequently cited professors and journals necessarily indicative of law school quality? Beyond that, Leiter ignores important factors that the other rankings include.
The bottom line is that each of the rankings brings something valuable to the table. The most useful ranking method would incorporate the best aspects of all of the current methods.