Law School Discussion

Nine Years of Discussion

Author Topic: Oct 2008 Impressions & Test taker surge implications  (Read 1552 times)

nantucket

  • Newbie
  • *
  • Posts: 3
    • View Profile
    • Email
Oct 2008 Impressions & Test taker surge implications
« on: October 05, 2008, 02:49:23 PM »
So impressions from different threads seem to say that, on a scale of 1-10, difficulty was pretty average:

Games 6.5
LR 4.5
RC 5

I for one expect (/hope) a 170 = -11.  But one wildcard is really throwing me: the huge surge of test takers that seems nearly inevitable.  I'm expecting at least a 20% increase in takers for the October test from people trying to wait out the recession in grad school.  What does this mean, though?

I have three friends who were going into business, now can't get the kind of jobs they had expected, and so are taking the LSAT; I hardly think I'm unique in this experience.  However, I think a disproportionate share of this group will test in the 150s: 1) less preparation and a "just give it a shot" mentality, and 2) the fact that the testers who couldn't get jobs are likely, relatively speaking, a less able group than those who already landed decent jobs.

How all this will REALLY affect the curve I don't know, and I think it could be speculated about in different ways.

Thoughts?

ChiGirl

  • Sr. Citizen
  • ****
  • Posts: 122
    • View Profile
Re: Oct 2008 Impressions & Test taker surge implications
« Reply #1 on: October 05, 2008, 02:51:40 PM »
LR was a 4.5 on the exam? Not bad. There's hope! :) I'm assuming that with your scale, 10 is the most difficult?

Quote from: nantucket on October 05, 2008, 02:49:23 PM
So impressions from different threads seem to say that, on a scale of 1-10, difficulty was pretty average: Games 6.5, LR 4.5, RC 5. [...]

!закон и право!

  • Sr. Citizen
  • ****
  • Posts: 1599
    • View Profile
Re: Oct 2008 Impressions & Test taker surge implications
« Reply #2 on: October 05, 2008, 02:53:42 PM »
It won't affect the curve at all. First, the curve is determined before you even sit for the test. Second, the test takers for the October LSAT (even if they were incorporated into the curve) are just a drop in the bucket, considering that you're compared against every test taker from the last three years.

philosopher

  • Full Member
  • ***
  • Posts: 16
    • View Profile
    • Email
Re: Oct 2008 Impressions & Test taker surge implications
« Reply #3 on: October 05, 2008, 03:25:15 PM »
Quote from: !закон и право! on October 05, 2008, 02:53:42 PM
It won't affect the curve at all. First, the curve is determined before you even sit for the test. [...]

exactly.
"People pay for what they do, and still more for what they have allowed themselves to become; and they pay for it very simply: by the lives they lead."
--James Baldwin

nantucket

  • Newbie
  • *
  • Posts: 3
    • View Profile
    • Email
Re: Oct 2008 Impressions & Test taker surge implications
« Reply #4 on: October 05, 2008, 03:33:40 PM »
Quote from: !закон и право! on October 05, 2008, 02:53:42 PM
It won't affect the curve at all. First, the curve is determined before you even sit for the test. [...]

Makes sense; I was under the impression that the scale was only partly predetermined.  What about percentile reporting bands: is the reported percentile based solely on your test, or on others as well?

!закон и право!

  • Sr. Citizen
  • ****
  • Posts: 1599
    • View Profile
Re: Oct 2008 Impressions & Test taker surge implications
« Reply #5 on: October 05, 2008, 03:38:25 PM »
Quote from: nantucket on October 05, 2008, 03:33:40 PM
Makes sense; I was under the impression that the scale was only partly predetermined.  What about percentile reporting bands: is the reported percentile based solely on your test, or on others as well?

It depends on your score only. But the method of determining the band depends largely on the reliability of the test itself.

The SEM, or standard error of measurement, for the LSAT corresponds, I believe, to a roughly 95% (two-tailed) confidence level. That means your score is reliable, with about 95% confidence, within the range of the SEM around your individual score. The SEM for the LSAT is about 2.6 points (rounded up to 3). It is also partly determined by the average increase or drop in score for retakers.

So your score band will be 157-163 if you land a 160, for instance.
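The arithmetic above can be sketched in a few lines. This assumes the band is simply the scaled score plus or minus the SEM rounded up to a whole point (the 2.6 → 3 rounding mentioned above), clipped to the 120-180 reporting scale; the helper name and structure are my own illustration, not anything LSAC publishes:

```python
import math

# LSAT scores are reported on a 120-180 scale; the SEM is roughly
# 2.6 points, which rounds up to 3 for the reported band.
SCALE_MIN, SCALE_MAX = 120, 180
SEM = 2.6

def score_band(scaled_score, sem=SEM):
    """Return the (low, high) score band: score +/- ceil(sem), clipped to the scale."""
    half_width = math.ceil(sem)
    low = max(SCALE_MIN, scaled_score - half_width)
    high = min(SCALE_MAX, scaled_score + half_width)
    return low, high
```

Under these assumptions, `score_band(160)` gives the 157-163 band from the example, and a score near either end of the scale gets a band clipped at 120 or 180.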

sexonthebeach15

  • Full Member
  • ***
  • Posts: 29
    • View Profile
Re: Oct 2008 Impressions & Test taker surge implications
« Reply #6 on: October 05, 2008, 03:47:43 PM »
They always look at adjusting the curve after test performance, and after looking at performance on the experimental sections, to equate across administrations.  They have to do this, and they do it every time.

They have an expected level of performance and then re-evaluate whether the test questions accurately reflected test-taker ability, more as a function of how the questions performed than of the examinees.

For example, assume that on one administration all examinees scored a 180.  If they actually answered enough questions correctly to obtain that score, then they would award everyone a 180, as long as the test questions functioned properly and assessed ability the way the exam makers intended.

However, say that the same test takers take the test along with a larger group at the next administration and obtain a 160 on the predetermined scale for that exam.  While the test makers wouldn't have access to data about this sample pool (mainly because it's highly unlikely), they WOULD be in a position to examine whether the significant score decrease (in this hypothetical) was due to the examinees' performance or a failure on the part of some test questions.  They wouldn't necessarily drop the questions, but it is possible the questions didn't assess as well as intended among that population of test takers, and they might therefore adjust the scale somewhat to compensate and more accurately reflect assessed ability across test forms.

Differences in performance are inevitable because the content of tests varies significantly, and some test takers might respond differently to different kinds of content (e.g., an RC passage about economics versus one about purple loosestrife).

Haven't you all heard of the famous SAT question that ETS faced criticism for and adjusted its scale over? (This is one of many, but it was well publicized. The scale is usually more flexible for verbal than for math because of the subjective nature of many "verbal" questions, which, as we all know, populate the LSAT more than they do the SAT; that's part of the reason score reporting takes as long as it does, because they have to statistically analyze the test before reporting final scores.)

It was an old analogy question: OARSMEN : REGATTA ::

White students performed significantly better on this question (by about 40%) than other students, so ETS adjusted its scale to essentially credit everyone for the question, reasoning that it was flawed in that it didn't consistently assess test-taker ability.  This is admittedly not the same as the situation I explained above, because it was adjusting for consistency within the same examinee population rather than across administrations, but it still embodies the same general principle: ensuring the score is an accurate, comparable measure of ability.

An example from the recent exam (without specifics): the last game was difficult.  They would look at it and determine whether the difficulty stemmed from the questions, causing almost everyone to miss certain ones, or whether the questions were perfectly fair and this sample of test takers just botched it.  In the former case they might adjust the scale, comparing anticipated performance on that game against performance across the entire test, whereas in the latter case they wouldn't adjust it (again making the same aforementioned assumption).
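A toy version of the kind of item review described above, in classical-test-theory terms: compute each item's proportion correct ("p-value") among examinees, then flag items whose observed difficulty strays far from the difficulty anticipated from pretesting. The function names and the 0.15 tolerance are arbitrary illustrations of the idea, not LSAC's actual procedure:

```python
def item_p_values(responses):
    """Proportion of examinees answering each item correctly.

    responses: list of per-examinee lists of 0/1 (1 = correct).
    """
    n = len(responses)
    n_items = len(responses[0])
    return [sum(r[i] for r in responses) / n for i in range(n_items)]

def flag_items(observed, anticipated, tolerance=0.15):
    """Indices of items whose observed difficulty deviates sharply
    from the pretested (anticipated) proportion correct."""
    return [i for i, (obs, ant) in enumerate(zip(observed, anticipated))
            if abs(obs - ant) > tolerance]
```

In this sketch, an item that pretested at 60% correct but was answered correctly by only 25% of the operational sample would be flagged for review, which is roughly the "did the question or the examinees fail?" check the post describes.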

!закон и право!

  • Sr. Citizen
  • ****
  • Posts: 1599
    • View Profile
Re: Oct 2008 Impressions & Test taker surge implications
« Reply #7 on: October 05, 2008, 03:52:37 PM »
They don't adjust the scale much ex post; when items are removed from scoring, the adjustment is minor. All test items are pre-equated and evaluated well in advance of the test ever being published. It really doesn't matter how the composite group performs on the test; the assumption is simply that a disproportionate number of high scorers is a statistical anomaly that is otherwise statistically insignificant. The SEM makes an inbuilt adjustment for variations of this kind.

The entire basis for the validity of any test, from a psychometric standpoint, is being able to project a Gaussian distribution onto the composite test that can be applied to the general population of test takers (in this case, pre-law hopefuls). Everything is done in advance, and the very fact that you get your scores back so quickly indicates that significant adjustments have not been made. Readjusting a scale significantly would involve months of analysis. It would also involve comparative sampling of other groups to retest the validity of the scale.
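The "project a Gaussian distribution" idea can be made concrete with a minimal sketch: once a normal distribution is assumed for the reference population, any raw score maps directly to a percentile rank. This is purely illustrative; the LSAT's actual equating is item-based, and the mean and standard deviation below are invented numbers:

```python
from statistics import NormalDist

def percentile_under_normal(raw_score, mean, sd):
    """Percentile rank of a raw score under an assumed Gaussian
    distribution for the reference population of test takers."""
    return NormalDist(mu=mean, sigma=sd).cdf(raw_score) * 100
```

Under these made-up parameters, a raw score at the population mean sits at the 50th percentile, and one standard deviation above lands around the 84th, which is the sense in which the scale can be fixed before anyone sits for a particular administration.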

In any case, this applies primarily to the LSAT and other scholastic aptitude batteries. For IQ tests, re-equating is done much more frequently.