Law School Discussion

Nine Years of Discussion

Author Topic: LSAT / GMAT  (Read 5242 times)

lobby

  • Newbie
  • Posts: 1
Re: LSAT / GMAT
« Reply #10 on: March 17, 2006, 12:11:56 AM »

Depends on how you take it.

BigRedWarEagle

  • Sr. Citizen
  • Posts: 161
Re: LSAT / GMAT
« Reply #11 on: March 17, 2006, 08:43:17 AM »
My scores:

163 LSAT
630 GMAT
You are not your job.

The Repo Man

  • Newbie
  • Posts: 1
Re: LSAT / GMAT
« Reply #12 on: March 17, 2006, 08:48:03 AM »
163(90%) = 710 (95%)

the bum next to me

  • Newbie
  • Posts: 2
Re: LSAT / GMAT
« Reply #13 on: March 17, 2006, 10:40:10 PM »
Yeah, theoretically, The Repo Man.

tayyab

  • Newbie
  • Posts: 1
Re: LSAT / GMAT
« Reply #14 on: May 14, 2006, 07:50:32 AM »
163(90%) = 710 (95%)

The equivalency is accurate.
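
For what it's worth, the simplest way to sanity-check a claim like this is to line the two scales up by percentile. Here's a rough sketch of that lookup in Python; the percentile tables below are made up purely for illustration (the real ones come from LSAC and GMAC score reports), and note that a pure percentile match isn't quite the same thing as "the same person would score X on one and Y on the other," since the two tests draw different applicant pools.

[code]
# Hypothetical percentile tables, for illustration only; the real tables
# come from LSAC and GMAC score reports.
LSAT_PCT = {158: 75, 163: 90, 167: 95, 172: 99}
GMAT_PCT = {640: 75, 690: 90, 710: 95, 760: 99}

def gmat_at_same_percentile(lsat_score):
    """Find the GMAT score whose (hypothetical) percentile is closest
    to the percentile of the given LSAT score."""
    target = LSAT_PCT[lsat_score]
    return min(GMAT_PCT, key=lambda g: abs(GMAT_PCT[g] - target))

# With these made-up tables, a 90th-percentile LSAT (163) matches 690,
# not 710; the "163 = 710" claim equates the *same person's* likely
# scores, which is a different (population-dependent) question.
print(gmat_at_same_percentile(163))
[/code]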

babelfish

  • Newbie
  • Posts: 1
Re: LSAT / GMAT
« Reply #15 on: May 22, 2006, 11:51:37 AM »
The GMAT CAT grading is fun ... supposedly it adjusts the level of difficulty until you're getting approximately 50% right and 50% wrong, so that your score sits in the middle of the Gaussian curve.  That's one difference between it and the LSAT, where everyone is measured against the same reference group centered around 150.  It's more noticeable in the score bands: the LSAT has about a 12% error band at the 95th percentile, while the GMAT's is closer to 5%.  Also, as you're probably aware, the earlier questions are more critical in the adjustment process.  (My guess is they use some sort of Kalman filter, such that the entire history is captured in the state vector, but that's a side point.)
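
If you want to see the "adjust until you're at about 50/50" idea in miniature, here's a toy sketch in the spirit of a one-parameter logistic (IRT-style) model. To be clear, the update rule, step size, and probability model are my own illustration, not GMAC's actual algorithm (and it's not a Kalman filter); it just shows how each response nudges the ability estimate toward the level where you'd be getting about half the items right.

[code]
import math
import random

def p_correct(ability, difficulty):
    """Logistic (one-parameter IRT style) chance of a correct answer."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def run_toy_cat(true_ability, n_questions=30, step=0.5):
    """Toy adaptive test: each item is pitched at the current estimate,
    so once the estimate is close to the taker's level they answer
    roughly half of the remaining items correctly."""
    estimate = 0.0                               # no information at the start
    for _ in range(n_questions):
        item_difficulty = estimate               # next item targets the estimate
        correct = random.random() < p_correct(true_ability, item_difficulty)
        estimate += step if correct else -step   # nudge up on a hit, down on a miss
    return estimate

random.seed(0)
runs = [run_toy_cat(true_ability=1.5) for _ in range(2000)]
print(sum(runs) / len(runs))   # averages out near the true level of 1.5
[/code]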

Now, if you did OK on the LSAT verbal, you'll kill the GMAT verbal for two reasons. One is obviously your verbal abilities; the second is that the GMAT population tends to have stronger quantitative skills than verbal skills. Since an item's difficulty rating is based on the share of test-takers who miss it, stronger verbal skills artificially boost your score.
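
To put a number on that "artificial boost": your percentile (and the difficulty rating of each item) depends on who else is sitting the test. A quick hypothetical illustration, with invented numbers, of how the same verbal ability lands at different percentiles in a more verbal pool versus a more quantitative pool:

[code]
from statistics import NormalDist

# Hypothetical verbal-ability distributions for two applicant pools
# (numbers invented purely to illustrate the percentile effect).
lsat_pool = NormalDist(mu=0.5, sigma=1.0)   # more verbally inclined pool
gmat_pool = NormalDist(mu=0.0, sigma=1.0)   # more quantitatively inclined pool

your_verbal_ability = 1.0
print(f"Percentile among LSAT takers: {lsat_pool.cdf(your_verbal_ability):.0%}")  # ~69%
print(f"Percentile among GMAT takers: {gmat_pool.cdf(your_verbal_ability):.0%}")  # ~84%
[/code]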

My GMAT diagnostic, taken cold, was a 650, mostly on the strength of the verbal section. I scored in the upper 90s percentile-wise on verbal (I need to brush up on sentence correction), but only around the 40th percentile on the quantitative section, which totally sucks since my undergrad and grad degrees are in electrical/computer engineering. The good part is that it's entirely high-school math, and so far it seems pretty easy to improve on (just a matter of jogging the memory).

If you haven't already, try these:
1) PowerPrep software (only two tests, so save them until the end)
2) The Official Guide for GMAT Review
3) Kaplan CAT tests. Keep in mind the general consensus is that these run more difficult than the actual test, by roughly 70 points, so don't be discouraged.

Marty

  • Newbie
  • Posts: 0
Re: LSAT / GMAT
« Reply #16 on: May 23, 2006, 04:17:36 AM »
The CAT scoring system determines your GMAT Quantitative and Verbal scores by accounting for not only the number of questions you answer correctly but also the difficulty level of the questions you answered correctly. Your reward — in terms of points — for responding correctly to a difficult question is greater than for an easier question. Of course, the scoring system for a non-adaptive test can also account for difficulty level — simply by assigning greater weight to more difficult questions. But the adaptive feature creates a certain dynamic — a self-adjustment mechanism — that continually homes in on your level of ability in each test area.
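
As a back-of-the-envelope illustration of "harder questions are worth more," here's a toy scorer that weights each correct answer by the item's difficulty. The weights and the 200-800-style scale are invented; the actual GMAT scoring model is proprietary, so treat this only as a picture of the idea.

[code]
def toy_score(responses):
    """responses: list of (difficulty, correct) pairs,
    with difficulty on a 1 (easy) to 5 (hard) scale."""
    earned = sum(difficulty for difficulty, correct in responses if correct)
    possible = sum(difficulty for difficulty, _ in responses)
    return round(200 + 600 * earned / possible)   # map onto a 200-800-style scale

# Two takers, each with 3 of 4 correct -- but one got the harder items right.
easy_hits = [(1, True), (2, True), (4, True), (5, False)]
hard_hits = [(1, False), (2, True), (4, True), (5, True)]
print(toy_score(easy_hits), toy_score(hard_hits))   # 550 750 -- same count, different score
[/code]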

The scoring system accounts for a third factor as well: the range of cognitive abilities tested by the questions you answered correctly — within each of the two multiple-choice sections. The Quantitative section, for example, embraces a variety of substantive areas: number theory, arithmetical operations, algebra, geometry, statistical reasoning, interpretation of graphical data, and so forth. Also, the Quantitative section employs two distinct question formats: Problem Solving and Data Sufficiency. Problem Solving questions gauge your ability to work to a numerical solution, whereas Data Sufficiency questions stress your ability to reason quantitatively. Proving to the CAT that you can handle a variety of substantive areas in both question formats will boost your GMAT score.

As for how the CAT quantifies this third factor, the calculation involves the statistical concept of standard deviation. The greater the deviation among your areas of ability, the lower your score. In other words, the GMAT rewards generalists — test takers who demonstrate a broad range of competencies — while punishing less versatile test-takers who are not as well-rounded in terms of their skill sets. I don't want to overstate the significance of this third factor, though. The other two — number of correct responses and difficulty level — are the primary determinants of your score.
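
To make the "deviation among your areas of ability" point concrete, here's a minimal sketch: take the fraction correct in each substantive area and subtract a small penalty proportional to the standard deviation across areas. The penalty weight is made up, and since this is a secondary factor anyway, it's only meant to show the shape of the adjustment.

[code]
from statistics import mean, pstdev

def breadth_adjusted(per_area_accuracy, penalty=0.5):
    """per_area_accuracy: dict of area -> fraction of items answered correctly.
    Greater spread across areas lowers the adjusted figure (toy model)."""
    accuracies = list(per_area_accuracy.values())
    return mean(accuracies) - penalty * pstdev(accuracies)

generalist = {"algebra": 0.7, "geometry": 0.7, "data_sufficiency": 0.7}
specialist = {"algebra": 1.0, "geometry": 0.9, "data_sufficiency": 0.2}
print(round(breadth_adjusted(generalist), 2))   # 0.70 -- no spread, no penalty
print(round(breadth_adjusted(specialist), 2))   # same mean, but ~0.52 after the penalty
[/code]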

Why is the scoring system designed to account for this third factor? Because the GMAC recognizes that crack mathematicians or grammarians don't necessarily make good business managers. It's people who can put it all together — people with an overall package of quantitative, verbal, and analytical skills — who are most likely to succeed in B-school and beyond.

Hermioni

  • Newbie
  • Posts: 2
Re: LSAT / GMAT
« Reply #17 on: May 23, 2006, 03:41:46 PM »
ETS claims that the CAT's adaptive feature enables a more accurate measurement of your cognitive abilities relative to other test-takers than the old paper-based test did, even with fewer questions. The primary advantage — in terms of fairness — of adaptive testing over non-adaptive testing, whether computer-based or paper-based, has to do with the distribution of scores. Consider two GMAT test-takers, X and Y. Suppose that X has great difficulty with every question type at even low difficulty levels, while Y can handle any question type at even the highest difficulty level. Because the GMAT CAT adapts to individual ability, and awards fewer points for correct responses to easy questions than to difficult ones, the difference between the GMAT scores for X and Y might be far greater than if they had taken the same fixed bank of questions. In other words, a non-adaptive test does not allow for as wide a distribution of scores.

To the extent that the CAT creates a broader distribution of scores, it is a better means of comparing the cognitive abilities of test-takers. This is a statistics concept that's really pretty easy to understand on a non-technical level. Scores for multiple test-takers that all cluster closely together are less reliable for the purpose of comparing ability levels than more widely distributed scores are. That's all nice and dandy, but with only 27 scored Quantitative questions and 31 scored Verbal questions, not to mention the wide variety of question types within each section, how can the CAT possibly make a fair assessment of your abilities?
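
Here's a rough way to see the spread argument in numbers (all figures hypothetical). If every observed score carries the same amount of measurement noise, then the farther apart a test spreads two takers' true scores, the more often it will rank them in the right order:

[code]
from statistics import NormalDist

def prob_right_order(true_gap, noise_sd):
    """Chance that two test-takers' observed scores come out in the right
    order, given a true-score gap and independent normal measurement error."""
    diff_sd = (2 ** 0.5) * noise_sd            # sd of the difference of two scores
    return NormalDist().cdf(true_gap / diff_sd)

# Same hypothetical 30-point standard error of measurement in both cases:
print(f"{prob_right_order(100, 30):.0%}")   # spread 100 points apart -> ~99%
print(f"{prob_right_order(20, 30):.0%}")    # spread only 20 points apart -> ~68%
[/code]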

This drawback is not unique to the GMAT; you can say the same about almost any standardized exam. The greater the number of questions, the more accurate the assessment — all else being equal. But all else is not necessarily equal. During a longer test, endurance becomes a factor — one that can undermine the purpose of the test in the first place. Also, since the advent of the CAT, test-takers can take the GMAT far more often than they could under the old paper-based system, and the more often a test-taker takes the GMAT, the more reliable the overall measurement.

In an ideal world, a more extensive battery of tests spread over several weeks — or even months — perhaps including an oral component, would be fairer. But it comes down to a tradeoff between fairness and administrative efficiency. The testing service couldn't provide such a test on an affordable basis, especially considering that more than a quarter-million GMAT tests are administered every year!

With the CAT, at the beginning of the test the computer has no information about your ability. It is the computer's job to determine your ability by generating questions and, based on your responses, selecting an appropriate follow-up question for you. This necessitates an important departure from traditional paper-and-pencil testing: the computer will administer different questions to different test-takers. Computer adaptive tests are not standardized tests in the sense that all test-takers get the same test. Thus, it is to your advantage to be on the receiving end of hard questions.

In theory, the computer is generating questions for the purpose of finding your true ability level. With each additional question, the computer acquires more information about that level. The less information the computer has (the fewer questions you have answered), the more your answer to the next question will tell it about your ability; the more information it has (the more questions you have answered), the less your answer to the next question will tell it. This means that your answers to earlier questions have a greater impact on your score than your answers to later questions! Your answers during the first half of each multiple-choice section will weigh much more heavily than those during the second half.
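
A rough way to see why the early answers dominate: think of each answer as one more observation of your ability, so the remaining uncertainty in the estimate shrinks roughly like one over the square root of the number of answers, and a single new answer can only move the estimate by an amount on the order of that remaining uncertainty. The shrinkage rule below is my simplification, not the actual algorithm:

[code]
# Toy model: how much one additional answer can move the ability estimate
# after n answers, assuming uncertainty shrinks like 1/sqrt(n).
initial_uncertainty = 1.0

for n_answered in (1, 5, 10, 20, 35):
    remaining_uncertainty = initial_uncertainty / n_answered ** 0.5
    print(f"after {n_answered:2d} answers, one more answer can shift "
          f"the estimate by roughly {remaining_uncertainty:.2f}")
[/code]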

The questions on the CAT do not all count the same toward a student's score; a correct answer on one question may raise the score much more than a correct answer on another question. By contrast, on a paper-based test all questions are weighted equally, regardless of difficulty. This is a critical difference, because it means that some questions on a CAT are more "important" than others and thus demand more time and attention from the student looking to maximize his or her score.

Use books to brush up on your math skills, to review rules of grammar, to identify your weak areas, and for exercises and drills that help strengthen those weak areas. Use software to determine your optimal pace, to acclimate yourself to the computer interface, and to measure your performance. Taking paper-based practice tests may be worthwhile. As long as they accurately reflect the style and difficulty level of the actual GMAT, they’re quite useful for additional practice. By the same token, you shouldn’t assume that any GMAT software product will be a reliable predictor of your performance on the actual GMAT. Keep in mind that some GMAT software products are better than others — both in terms of replicating the style and difficulty level of actual GMAT questions and in terms of forecasting your scores on the actual GMAT. So choose your test-prep software carefully.

thatsy

  • Newbie
  • Posts: 2
Re: LSAT / GMAT
« Reply #18 on: May 24, 2006, 01:25:31 PM »
try the other one

bennybm

  • Full Member
  • Posts: 21
Re: LSAT / GMAT
« Reply #19 on: May 24, 2006, 01:39:53 PM »
I was 710 (95%) on the GMAT and 164 (91%) on the LSAT.

Best of luck
In - GSU, UGA
Out - UF
Withdrawn - Memphis, Kentucky, South Carolina, Mercer, Samford, FSU