Many schools and teachers are curious about worldwide averages and distributions of component scores, since the IBO does not include them in their Statistical Bulletin.
At IB Score Reports, we see results from many different schools, so we decided to construct and share a descriptive analysis based on the data we have.
We encourage you to read this entire article before viewing the following tables, but we're including links here, near the top, for convenience.
We created a large sample by pooling the results of 64 different schools over the last five years, 2012 through 2016. Results were gathered from each school's component CSV files.
The 64 schools represent a wide range of programs, from well established to relatively new, from large enrollment to small. The sample includes schools from Asia, Europe, the U.S., and elsewhere. All of the schools participated in May exams, and all had results in each of the last five years.
Schools do not all offer the same subjects, and enrollments between subjects vary substantially. We included subjects only when we had results from at least 100 students and 10 schools in each year.
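The inclusion rule above can be sketched in plain Python. The record shape and field names here are our illustration, not the actual layout of the component CSV files:

```python
from collections import defaultdict

YEARS = range(2012, 2017)  # May sessions 2012 through 2016

def eligible_subjects(records, min_students=100, min_schools=10):
    """Keep a subject only when, in every year, it has results from at
    least `min_students` students and `min_schools` schools.

    `records` is an iterable of (subject, year, school_id, n_students)
    tuples -- a hypothetical shape for rows pooled from the schools'
    component CSV files.
    """
    student_totals = defaultdict(int)   # (subject, year) -> student count
    school_sets = defaultdict(set)      # (subject, year) -> school ids
    for subject, year, school_id, n_students in records:
        student_totals[(subject, year)] += n_students
        school_sets[(subject, year)].add(school_id)
    subjects = {key[0] for key in student_totals}
    return sorted(
        s for s in subjects
        if all(student_totals[(s, y)] >= min_students
               and len(school_sets[(s, y)]) >= min_schools
               for y in YEARS)
    )

# Toy data: one subject clears the bar every year; the other does not.
records = (
    [("MATHEMATICS SL", y, f"school-{i}", 20) for y in YEARS for i in range(12)]
    + [("GERMAN B HL", y, "school-1", 50) for y in YEARS]
)
print(eligible_subjects(records))  # ['MATHEMATICS SL']
```

Note that a subject must clear both thresholds in every one of the five years; a single weak year excludes it entirely.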
Many subjects in our sample vastly exceeded this cut-off, while others just met it. In our sample, MATHEMATICS SL has results from more than 2,000 students and all 64 schools each year, whereas GERMAN B HL averages about 110 students and 20 schools each year.
Our filter left us with 39 subjects, for which we display results for all components.
Exact counts of students and schools in each subject each year are presented here:
Component Scores – 2012–2016 – Student and School Counts – 64 Schools
The first thing we observed is that the average scores for each subject in our sample were generally higher than the corresponding world averages. Collectively, the students in our 64-school sample outperformed the world in nearly every subject in every year.
You can see all of the sample and world averages, as well as the differences between them, here:
Subject Scores – 2012–2016 – Averages – 64 Schools
Given that the subject averages in our sample tend to run higher than the world averages, the same is likely true of the component averages.
Nonetheless, it is interesting to look at the differences between components within each subject.
Averages and distributions of component scores in our sample are presented here:
Component Scores – 2012–2016 – Averages and Distributions – 64 Schools
The column entitled “Max 5-yr Avg MINUS Min 5-yr Avg” merits an explanation.
That column provides a quick way to see which subjects have the greatest and least spread between their max and min 5-year average component scores.
Let's look at three subjects.
Take MUSIC HL first. The highest 5-year average for any MUSIC HL component (seen in the 6th column of numbers) was 5.4, for SOLO PERFORMING. The lowest 5-year average in that same subject (again, in the 6th column) was 4.1, for PAPER ONE. The difference between those two is 1.3.
In both PHYSICS HL and PHYSICS SL, the largest spread occurs between PRACTICAL WORK and one of the exam papers (PAPER ONE for HL and PAPER THREE for SL).
Note: The spread column is computed before rounding. For PHYSICS HL, the actual values are 5.74 minus 4.65, which is 1.09; that rounds to the displayed 1.1, even though the displayed averages themselves round to 5.7 and 4.7 (which would suggest a spread of only 1.0).
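The rounding order can be sketched with Python's `decimal` module, using the PHYSICS HL values quoted above (the half-up display rounding is our assumption):

```python
from decimal import Decimal, ROUND_HALF_UP

TENTH = Decimal("0.1")

hi = Decimal("5.74")  # PHYSICS HL: highest 5-year component average
lo = Decimal("4.65")  # PHYSICS HL: lowest 5-year component average

spread = hi - lo                                      # 1.09, computed before rounding
shown_spread = spread.quantize(TENTH, ROUND_HALF_UP)  # displayed as 1.1
shown_hi = hi.quantize(TENTH, ROUND_HALF_UP)          # displayed as 5.7
shown_lo = lo.quantize(TENTH, ROUND_HALF_UP)          # displayed as 4.7

# Subtracting the displayed (rounded) averages would suggest 1.0,
# not the 1.1 that actually appears in the spread column:
assert shown_hi - shown_lo == Decimal("1.0")
assert shown_spread == Decimal("1.1")
```

`Decimal` is used here only to keep the decimal arithmetic exact; the same rounding-order effect appears with ordinary floats.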
The differences between the average scores of each component tend to be reflected in the score distributions, which can be seen in the small bar charts at the end of each row.
There are several other subjects that have large score differences between components.
Often, those differences are between an internal assessment and an external exam. But not always.
HISTORY SL, for instance, shows a full-point gap between the IA (5.4) and PAPER TWO (4.4), and an identical gap between PAPER ONE (5.4) and PAPER TWO.
Again, these differences are reflected in the five-year distributions of scores and the corresponding bar charts.
Notice that three times as many students earned 7’s on PAPER ONE and the IA as on PAPER TWO. (The same is true of 6’s.) Also, no student earned lower than a 3 on PAPER ONE, and yet 5% earned a 2 or 1 on PAPER TWO.
These findings are intriguing, but the most important insights about component scores will come from your own school’s data, generated by your own program, teachers, and students. (Our first case study demonstrates that nicely.)
If you haven’t already, we encourage you to review component score trends within each of your subjects. We can help with that, of course, since our report package includes displays of every component of every subject at your school, like this:
Trends in your own data can then be compared with trends happening at other schools to gain additional insights. For example, if students at your school are performing better on Paper One than Paper Two in some particular subject, is that trend also present in the data from other schools? If scores have been rising or declining on a particular component at your school, is that trend present in the data from other schools?
Although our sample of 64 schools is quite diverse in terms of location, enrollment, and length of offering the IB Diploma, we do not claim that it is statistically representative of all IB schools, all IB schools in any particular region, or all IB Score Reports client schools. One should be cautious about generalizing our findings. Our goal was to provide a glimpse of what’s happening with component scores, using the best data we have available.
Getting started is easy. Just click here to send us an email: firstname.lastname@example.org