Thursday, August 20, 2009

PROGRESS? IT DEPENDS HOW YOU MEASURE IT!

By Guy Brandenburg, retired DCPS math teacher

Question: Can the statisticians in the group check the following claim, made in the intro to the PBS segment on DCPS progress under Chancellor Rhee?

“…almost half of elementary students are now on grade level, according to the city’s year-end DC-CAS test. That may not sound like much, but when Rhee took over, only 29% were on grade level in math.”

Answer: Well... sort of. But one could also make a much less favorable claim about Rhee's record: in reading, DCPS students are ALMOST, but not quite, back up to the levels they scored at in 2005!!!! In math, under the current leadership, they are still about 6 percentage points behind!

See for yourself. Here are the percentages of students at the elementary school level scoring ‘advanced’ or ‘proficient’ in reading and math over the past 7 years, taken from the DCPS–OSSE–NCLB website. (Use the buttons at the top right-hand side of the page.) I rounded to the nearest whole percent, and the testing company involved did change at one point, from Harcourt/Pearson to some other firm. (Guess when!)

Year   Reading   Math
2003     44%      54%
2004     46%      56%
2005     51%      58%
2006     37%      27%
2007     38%      30%
2008     45%      41%
2009     48%      48%

However, there is a problem with equating the SAT-9 or DC-CAS category 'proficient' with the concept of 'being on grade level'. I will try to explain.

By one widely-used definition, 'being on grade level' means scoring at or above the 50th percentile on some nationally-normed [or perhaps internationally-normed] test of whatever kids at that grade level are supposed to be learning. If Johnny is at the '50th percentile', that does not mean he got exactly half the questions right. It merely means that about half the kids at that grade level scored worse than Johnny did, and roughly half scored better. (For simplicity, I am leaving out all the other kids who got exactly the same score as Johnny.)
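To make the arithmetic concrete, here is a little sketch in Python. Everything in it – the class size, the scores, Johnny's 16 right answers – is invented for illustration; the point is only that a percentile rank depends on how the other kids did, not on what fraction of the questions Johnny got right.

# Hypothetical example: a percentile rank compares Johnny to the other
# test-takers; it says nothing about what fraction of questions he answered.

def percentile_rank(score, all_scores):
    """Percent of test-takers who scored strictly below the given score."""
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

# Invented raw scores for a class of 20 kids on a 40-question test.
class_scores = [8, 10, 11, 12, 13, 14, 14, 15, 15, 16,
                16, 17, 18, 19, 20, 22, 25, 28, 31, 35]

johnny = 16   # Johnny got only 16 of the 40 questions right...

print(percentile_rank(johnny, class_scores))   # prints 45.0
# ...yet about half the class scored below him, so he lands near the
# 50th percentile – 'on grade level' by the definition above.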

So (you are wondering), what's the problem? Isn't the DC-CAS a nationally-normed test? In a word, NO. It ain't. As far as I know, almost none of the questions were tried out on students in any other jurisdiction. Instead, DCPS or OSSE contracted with some company that makes tests, gave them the learning 'standards' ('objectives' if you haven't kept up with the jargon) and asked them to write a test. I am pretty sure that some DCPS teachers were asked to review the questions, or even to write some of them. I spent an afternoon a couple of years ago, with other teachers, helping to weed out questions that we thought were no good, but I no longer remember if that was for the DC-CAS or the DC-BAS. Confusing, huh? The DC-BAS was/is a series of practice tests supposedly designed to help kids prepare for the real thing – the DC-CAS. B versus C.

So, the DC-CAS and the SAT-9* are both 'criterion-referenced' tests rather than 'norm-referenced'. Let me illustrate the difference by looking at a hypothetical, impoverished, corrupt, 3rd-world country where the typical child – let’s call her Rubina – has illiterate parents who can only afford the school fees for about half their children (typically the boys), and then only for a few months a year and only for a few years. After that, the children need to go to work doing whatever. Rubina, being a girl, doesn’t get sent to school at all.

A norm-referenced test of all of the 11-year-old kids in that country might show that if Rubina knows the local alphabet, can write her own name and a few other words, and can add whole numbers that have no more than 2 digits, then she is about on par with the median kid of her age. So, half of the other kids in the country know and can do less than Rubina, but the other half of the 11-year-olds know and can do more than she does. In some cases, other students in her country know a great deal more indeed, attending schools on a par with any in the world.

A criterion-referenced test, on the other hand, might even consist of very similar questions [though probably a lot fewer of the extremely basic ones that might be on the norm-referenced test], or it might not. The big difference would be the scoring: some appointed committee or individual would decide in advance what they think 11-year-old children should know and be able to do. They would then designate the lowest scores as, perhaps, 'failing', or 'moron' [a term actually invented here in the USA by Henry Goddard, one of the 'fathers' of testing psychology], or 'below basic', or whatever; and the top scores as 'superior', 'honors', 'gifted and talented', or 'advanced', or whatever. In our hypothetical 3rd-world country, perhaps only a few percent of the cohort of 11-year-olds would do well enough to be considered ‘on grade level’. Little Rubina doesn’t stand much of a chance of passing.
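For anyone who wants to see the distinction spelled out, here is a rough Python sketch. Everything in it is made up – the cohort of scores, Rubina's raw score, and the committee's cut-offs – but it shows the key point: the norm-referenced result only compares Rubina to the other 11-year-olds who were tested, while the criterion-referenced result compares her raw score to cut-offs that somebody decided on ahead of time.

# Invented example contrasting the two scoring approaches described above.

def norm_referenced(score, cohort_scores):
    """Percentile rank: where does this child stand relative to the cohort?"""
    below = sum(1 for s in cohort_scores if s < score)
    return 100.0 * below / len(cohort_scores)

def criterion_referenced(score, cut_scores):
    """Label based on cut scores chosen in advance (sorted low to high)."""
    label = 'below basic'
    for cutoff, name in cut_scores:
        if score >= cutoff:
            label = name
    return label

# Hypothetical raw scores (out of 100) for twenty 11-year-olds in the
# country described above: most have had little schooling, a few attend
# schools on a par with any in the world.
cohort = [0, 2, 3, 5, 5, 6, 7, 8, 9, 10, 12, 14, 15, 20, 35, 60, 75, 85, 90, 95]

# Hypothetical cut-offs picked by the appointed committee.
cuts = [(40, 'basic'), (60, 'proficient'), (80, 'advanced')]

rubina = 10   # knows the alphabet, her own name, some 2-digit addition

print(norm_referenced(rubina, cohort))       # 45.0 – right around the median for her country
print(criterion_referenced(rubina, cuts))    # 'below basic' – nowhere near 'on grade level'

The same raw score comes out looking respectable under one system and hopeless under the other, which is the whole point.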

Sound familiar?

And perhaps a great hue and cry would arise in said country to replace all the no-good teachers with 2-year Peace Corps volunteers or similar missionary types from wealthier countries overseas. Also to get rid of the failing public schools, to privatize everything dealing with social services, and so on. Not to improve society as a whole, heavens, no! It's the TEACHER, that's all! Oh, yes, by all means, let's pay the teachers for test results.

Whoops. I forgot! Actually, they already have that pay-for-performance system in a number of 3rd-world countries: the government doesn't pay teachers a living wage, so only the kids whose parents have enough money to pay their child's teacher for private tutoring actually pass the tests. I have heard that ALL of those who pay, pass, one way or another. Hmmm....

Again, does that sound familiar?

So, do 'proficient' and/or 'advanced' mean the same thing as 'being on grade level'?

That depends on how you define the terms, doesn’t it? And it also depends on who’s writing the tests, and how they score them. I think that I have had to administer tests from at least 4 different companies during my 31 or so years in DCPS. You can define things so that Rhee is doing great, or just the opposite.
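Just to show how much room there is to maneuver, here is one last invented sketch: the very same set of scores, measured against two different 'proficient' cut-offs, produces two opposite headlines. (The scores and cut-offs below are hypothetical, not taken from any real DC-CAS data.)

# Invented example: same scores, two different cut-offs for 'proficient'.

scores = [22, 35, 38, 41, 44, 47, 50, 53, 55, 58,
          60, 62, 65, 68, 70, 73, 78, 82, 88, 94]   # hypothetical raw scores

def percent_proficient(all_scores, cutoff):
    """Percent of students at or above the chosen 'proficient' cut score."""
    at_or_above = sum(1 for s in all_scores if s >= cutoff)
    return 100.0 * at_or_above / len(all_scores)

print(percent_proficient(scores, 45))   # 75.0 – "most kids are proficient!"
print(percent_proficient(scores, 65))   # 40.0 – "most kids are failing!"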

And speaking of Rhee and test scores, does anybody actually recall seeing the raw data showing that she actually performed that miracle when she taught in Baltimore? Last I heard, they all got 'lost'. Read the Daily Howler and search the archives for several fascinating articles about Frau Rhee.

Guy Brandenburg, Washington, DC

My home page on astronomy, mathematics, education:

http://home.earthlink.net/~gfbranden/GFB_Home_Page.html

or else http://tinyurl.com/r6fh2

=============================

"Education isn't rocket science. It's much, much harder."

(Author unknown)

* There are those who claim the SAT-9 was actually norm-referenced.

1 comment:

Linda/RetiredTeacher said...

In her podcast on testing, Michelle Rhee suggested that she believes it is OK to teach to the test because (she thinks) the test and the curriculum are one and the same. I also got the impression from this podcast that children are being drilled on exact test items. Of course, this would completely invalidate the test. If any teachers are evaluated based on these invalid scores, their dismissals should be vigorously contested in court. It wouldn't be difficult to get a testing expert to testify about the invalidity of these scores.

Parents and teachers should insist on truth in testing. The yearly tests should be completely different each year and they should be administered, collected and scored with tight security. Please don't settle for less.