Testing Season Special: Best Predictors of District Growth

Testing season is once again upon us. Over the next two weeks, thousands of Tennessee students will take their TCAP and End of Course exams. Their results will then be used to calculate actual versus expected growth using models developed at the University of Tennessee by William Sanders, which predict where students should end up based on past growth and factors such as income, ethnicity, and poverty rate. I'll be running a few pieces over the next couple of weeks looking at testing policies and providing some insights related to data and policy. This post in particular looks at the power of individual variables to predict student growth scores.

In general, I support the construction and use of growth models like TVAAS. They are not perfect, but when constructed well they give us a reasonable predictive measure to compare against actual student growth. They should not be taken as the entire story, but rather as one of many snapshots that can be used to evaluate student, teacher, school, and district performance.

That said, many people think it's foolish to look at holistic measures like TVAAS because they believe the vast majority of test scores can be explained by one or two individual variables.  While I can't recreate the entire TVAAS system, I thought it would be interesting to do some simple analysis on the predictive power of some often-cited culprits of low educational achievement here in Tennessee.  I was able to find information on the following five for almost all districts in Tennessee:

  1. Percent of the population classified as economically disadvantaged
  2. Percent of the population below the poverty line
  3. Per pupil expenditures
  4. Student teacher ratio
  5. Household income

Fortunately, the state provides a large database of all past TVAAS scores for districts.  Using this, I pulled the 3-year-average growth data for every district in the state, with all subjects averaged together (EOC only, not TCAP).  I then made a simple graph of growth versus each individual category and inserted a trend line.  The results are shown below, followed by some brief observations.
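For those curious about the mechanics, here is a minimal sketch of how one of these charts can be produced. The file name and column names are hypothetical placeholders I made up for illustration; the state's actual export uses its own layout.

```python
# Minimal sketch: scatter one district-level variable against 3-year growth and add a trend line.
# "district_data.csv" and its column names are hypothetical placeholders, not the state's actual file.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

df = pd.read_csv("district_data.csv")        # one row per district
x = df["poverty_rate"]                        # swap in income, per-pupil spending, etc.
y = df["growth_3yr_avg"]                      # 3-year-average TVAAS growth, EOC subjects averaged

slope, intercept = np.polyfit(x, y, deg=1)    # ordinary least-squares trend line

plt.scatter(x, y, alpha=0.6)
plt.plot(x, slope * x + intercept, color="red", label=f"trend: {slope:+.2f} points per unit")
plt.xlabel("District poverty rate (%)")
plt.ylabel("3-year average growth")
plt.legend()
plt.show()
```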

[Charts: 3-year growth vs. household income, percent economically disadvantaged, district poverty rate, student/teacher ratio, and per-pupil expenditures]

Here are my five key takeaways from this little experiment:

Every Category Contains Large Variance -  The first thing you should see is a HUGE amount of variation regardless of the measure used.  Districts at the same level on each graph can vary by as much as 15-20 growth points.  This suggests that no one variable, whether poverty, income, class size, or education spending, should be considered the "silver bullet."  If there were one, we would see the data cluster much more tightly around the trend line.

Measures of Economic Success Have Some Predictive Power, but They Aren't the Only Culprit - All three graphs (income, poverty, and percent economically disadvantaged) suggest that a district's economic success does have some predictive power for student growth.  For example, an extra $100 of income in a district is predicted to raise growth by 0.01 points, while a 1% increase in the poverty rate is predicted, on average, to reduce growth by 0.23 points.

That said, the impact is small in all three cases and considerable variation exists, suggesting that poverty alone is NOT the huge predictor of educational outcomes that critics of reformist policies often believe it to be. We can't simply say "it's poverty" when the data clearly show wide variation even at specific poverty levels.
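If you want to put numbers on both the slope and the spread, a simple linear regression reports the trend-line coefficient alongside an R² that says how much of the variation a single variable actually explains. This is a sketch, again assuming the same hypothetical data file; the coefficients quoted above came from my trend lines, not from this snippet.

```python
# Sketch: slope and R-squared for growth vs. a single predictor (hypothetical column names).
import pandas as pd
from scipy.stats import linregress

df = pd.read_csv("district_data.csv")
result = linregress(df["poverty_rate"], df["growth_3yr_avg"])

print(f"slope: {result.slope:+.2f} growth points per 1% increase in poverty")
print(f"R^2:   {result.rvalue ** 2:.2f} (share of growth variation explained by poverty alone)")
```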

Increased Student/Teacher Ratios Actually RAISE Scores - I found this one VERY interesting, given that student/teacher ratio and its cousin, class size, are often cited as decisive policy levers by reformers and anti-reformers alike.  I calculated the student/teacher ratio for each district from online data by dividing students enrolled by the number of teachers (not a perfect measure, but it's the best I can do) and then graphed it against growth rates.

What we find is that each extra student per teacher is predicted to increase growth by 0.26 points.  I plan to do some additional digging on this one to see what the research suggests about class sizes, but it should be noted that no district had a student/teacher ratio above 22, so perhaps they simply haven't reached the threshold at which growth starts to drop.
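For reference, this is roughly how the ratio was assembled, with the same caveat: it simply divides total enrollment by the reported teacher count, so it can't distinguish early-grade classrooms from high-school ones. Column names are again hypothetical.

```python
# Sketch: derive a crude student/teacher ratio and fit a trend line against growth.
import pandas as pd
import numpy as np

df = pd.read_csv("district_data.csv")
df = df.dropna(subset=["students_enrolled", "teacher_count", "growth_3yr_avg"])
df["student_teacher_ratio"] = df["students_enrolled"] / df["teacher_count"]

slope, _ = np.polyfit(df["student_teacher_ratio"], df["growth_3yr_avg"], deg=1)
print(f"predicted change in growth per extra student per teacher: {slope:+.2f} points")
```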

Expenditures Per Pupil Do Count – Every extra dollar spent is predicted to increase growth by 0.0006 points.  This might not seem huge, but when you extrapolate it into the thousands of dollars, a $1000 increase in expenditures is projected to increase growth by 0.6 points, and a $3000 difference means a predicted 1.8 point difference in growth.  Again, though, we see wide variation even between districts with similar spending levels, so it's difficult to draw any definitive conclusions.
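The extrapolation is just the trend-line slope multiplied by the spending difference; a quick sanity check:

```python
# Extrapolating the per-pupil spending trend line (slope read off the chart above).
slope_per_dollar = 0.0006  # predicted growth points per extra dollar spent per pupil
for delta in (1000, 3000):
    print(f"${delta:,} more per pupil -> {slope_per_dollar * delta:+.1f} predicted growth points")
```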

No Singular Culprit Exists - So what can we take away from this? Hopefully this data makes the point that there is no one clear culprit for poor performance in our school districts.  All of these factors play a role, and we are best served by a comprehensive plan that addresses each of them.  It also suggests to me that we should be wary when anyone claims to know the silver bullet for fixing education.

Please also feel free to post in the comments if you would like to see any other variables graphed against student growth scores!  I picked these five because they are easy to find and often cited as crucial players in educational inequity, but I'm willing to take a look at others as well. Ones I would have loved to run but couldn't find data for are: growth vs. teacher evaluation scores, growth vs. percent of kids enrolled in Pre-K, and growth vs. charter school concentration.

[Update 4/28/14, 4:00 PM: thanks to Nashville Native for pointing out that I didn't include my data sources.  Here are my sources for each individual measure:]

Percent economically disadvantaged 

Per Pupil Expenditures

Income

Student/Teacher Ratio, poverty rate, and 3-year growth averages can all be found on the State of Tennessee's website for multiple years at this page.  I pulled the data several months ago, and when I just tried again the links weren't working, unfortunately, but you can find it all here.

You can find similar growth data at this portion of the Department of Education's website (not in Excel format, unfortunately).

Follow Bluff City Education on Twitter @bluffcityed and look for the hashtags #iteachiam and #TNedu to find more of our stories.  Please also like our page on Facebook.

 




21 replies

  1. Since most people who are calling for lower class sizes are calling for them in the early grades (K-2), your graph isn’t particularly relevant. I would love class sizes of 15 in the early grades and around 25 or 30 in the high school grades. Since your data can’t distinguish between the ratios in early versus later grades, it’s not useful. Look at the research, such as the Tennessee STAR study, performed by professional researchers, to see whether lower class sizes increase student achievement in the lower grades (the research shows that they do). https://www.princeton.edu/futureofchildren/publications/docs/05_02_08.pdf

  2. Also, poverty level IS the huge predictor of educational outcomes when you look at achievement, not growth. Achievement is the end goal, isn’t it? So that is what we should focus on in this particular discussion, not growth.

  3. From that article: “The research base is currently insufficient to support the use of VAM for high-stakes decisions about individual teachers or schools”

    • Page 5:
      “For all of these reasons, most researchers have concluded that VAM is not appropriate as
      a primary measure for evaluating individual teachers”

      Emphasis here on “primary.” They don’t discount using it in some capacity. That’s why I specify that VAM is a snapshot and should be one of many measures, along with classroom observations, measures of professional development, student surveys, etc. Personally, I don’t think it should count for more than 33% at most, and in all fairness it should be lower. Additionally, teachers in non-tested subjects shouldn’t have to be evaluated on school-wide data; we need to come up with alternative methods like the portfolio system for art teachers created here in Memphis.

      • “In general, such measures should be used only in a low-stakes fashion when they are part of an integrated analysis of what the teacher is doing and who is being taught.”

        This report, written by experts in statistics, does not support your position. In fact, these experts are firmly against what you just suggested.

      • If we’re going to trade quotes from studies I think the one from the RAND study best states my position (found here http://www.rand.org/content/dam/rand/pubs/monographs/2004/RAND_MG158.pdf):
        “It is not clear that VAM estimates would be more harmful than alternative methods currently being used for test-based accountability.” They do recommend further research to identify alternatives to VAM, but right now it’s the best direct measure we have of teacher impact on student educational outcomes, and it can be used in a limited capacity.

  4. I think the argument is more or less settled on the use of VAM for the evaluation of teachers, schools, or districts. It’s junk. Too many scholarly organizations have presented arguments against it. The major red flag, for Tennessee, is that TVAAS has never been available for open scrutiny and peer review, as it’s a proprietary measure that costs the state $1.7 million a year.

  5. Also, you need a source for your data.
