
Saturday, June 20, 2009

Switching Back to Kubuntu 8.04 "Hardy Heron"

About a week ago I replaced my well-worn installation of Kubuntu 8.04 "Hardy Heron" with the sleeker, newer Kubuntu 9.04 "Jaunty Jackalope." I hadn't made the jump to Kubuntu 8.10, the first release to include KDE 4, because there were plenty of reports about bugs and incomplete features. After waiting through another release cycle, I figured it was time to stop falling behind the tech curve and upgrade. Surely I had been missing something.

Instead of finding a bunch of new indispensable features and conveniences, I'm afraid KDE 4.2 is still not ready for my desktop. My biggest gripe? File management. It's not so much a debate between Konqueror and Dolphin; frankly, neither is doing what I want it to do. In KDE 3, I could hover over the icon of a picture, and a pop-up would give me all sorts of good information: date, owner, file size, image dimensions, and a little bit of EXIF data. Neither Konqueror nor Dolphin does that now, although I've read that Dolphin's information panel should include that in KDE 4.3. I don't want to wait that long.

Another big disappointment has been Amarok 2. Podcast handling is very stripped down compared to 1.4: there's no pane to view information about a podcast and no way to create folders to organize podcasts. I also don't like how loading a song into the playlist plays it automatically, and today I ran into significant problems with reading ID3 tags. That might have been the ultimate deal breaker: Amarok insisted Laura Branigan's '80s classic "Self Control" was by Moby, and then listed an album for Moby with no songs in it. I can't trust an application that mismanages ID3 tags.

But at least Amarok has a KDE 4 version for me to complain about. Kaffeine, my favorite video player, has not yet found its way into KDE 4. I didn't like its replacement, Dragon Player, and while I always have VLC installed because it's just so useful, I don't like it as much as Kaffeine.

I know KDE 4 will eventually work out the bugs and be ready for prime time (by my standards), but right now I can't name enough functional advantages it's giving me over 8.04, the long-term support (LTS) release. It's supported until April of 2011, so I'll have plenty of time to evaluate other new releases in the meantime. I used to love running Debian experimental and installing the latest packages, bugs or no bugs, but those days are over. Give me stable features; give me KDE 3 (for now).

Wednesday, June 17, 2009

The Myth of the Slipping Math Student?


I've been teaching in Colorado for six years, and there's always been a troubling pattern in our state standardized math scores. As students progress from 3rd to 10th grade, the percentage that scores proficient and advanced declines dramatically. Here are the percentages of students scoring proficient and advanced at each grade level, averaged over all the years the test has been given (typically 2002-2008):

Grade   Avg. % P+A
3       69
4       69
5       61
6       55
7       44
8       43
9       34
10      29


The easiest explanation (and the one I've tended to believe) is that students' abilities are, in fact, slipping as they get older. That would be a good assumption if the test at each grade level were equally difficult. But what if the test questions were, on average (and adjusted for grade level), more difficult as students got older? Is it fair to assume a test with increasingly difficult questions would result in lower scores, even with sophisticated score scaling systems that take question difficulty into account?

Fortunately, the state releases "item maps" that describe the difficulty of each item on every test. Using 4 points for an advanced item, 3 points for a proficient item, 2 points for a partially proficient item, and 1 point for an unsatisfactory item, we can come up with an average difficulty for the CSAP at each grade level (a quick sketch of that calculation follows the table). Let's add that column to our table:

Grade   Avg. Difficulty   Avg. % P+A
3       2.43              69.25
4       2.43              68.5
5       2.53              61.14
6       2.69              55.43
7       2.96              44
8       3.04              42.86
9       3.13              34
10      2.96              28.86
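
To make that point scale concrete, here's a minimal sketch, in Python, of how an average difficulty could be computed from an item map. The item counts below are hypothetical, made up purely for illustration; a real item map lists the performance level of every question on the test.

# Hypothetical item counts for one grade level's test. A real CSAP
# item map lists every question and its performance level; these
# numbers are invented for illustration only.
item_counts = {
    "unsatisfactory": 10,
    "partially proficient": 20,
    "proficient": 25,
    "advanced": 15,
}

# The point scale described above: 1 through 4.
points = {
    "unsatisfactory": 1,
    "partially proficient": 2,
    "proficient": 3,
    "advanced": 4,
}

total_items = sum(item_counts.values())
total_points = sum(points[level] * count for level, count in item_counts.items())
avg_difficulty = total_points / total_items

print("Average item difficulty: %.2f" % avg_difficulty)

Repeating that for each grade level's item map would give the "Avg. Difficulty" column above.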


This begs for regression analysis. How strong is the correlation between the difficulty of the questions and the scores?


The correlation is surprisingly strong: the coefficient of determination (R squared) is 0.88, meaning that average item difficulty accounts for 88% of the variance in the test scores. 88%? That's big. Statistics rarely tell the whole story, but 88% raises serious doubts that it's just a matter of slipping math students. Why wouldn't the state want to maintain a steady average difficulty from grade to grade? Wouldn't that make year-to-year performance comparisons more reliable?
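
For anyone who wants to check that figure, here's a small sketch in plain Python (no stats library) that recomputes the correlation and R squared straight from the table above.

# Data copied from the table: average item difficulty and average
# percent proficient + advanced, grades 3 through 10.
difficulty = [2.43, 2.43, 2.53, 2.69, 2.96, 3.04, 3.13, 2.96]
pct_pa = [69.25, 68.5, 61.14, 55.43, 44, 42.86, 34, 28.86]

n = len(difficulty)
mean_x = sum(difficulty) / n
mean_y = sum(pct_pa) / n

# Pearson correlation coefficient r, then the coefficient of
# determination R squared = r * r.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(difficulty, pct_pa))
var_x = sum((x - mean_x) ** 2 for x in difficulty)
var_y = sum((y - mean_y) ** 2 for y in pct_pa)

r = cov / (var_x * var_y) ** 0.5
print("r = %.2f, R squared = %.2f" % (r, r * r))  # roughly -0.94 and 0.88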