Dixons Cottingley Academy

Assessment

Secondary Phase Summative Assessment Principles

At Dixons, the most important assessment data in our academies comes from formative assessment. Every day, we use specific and repetitive minute-by-minute formative assessment and other leading indicators to help shape students’ learning. However, for our Trust and academy leaders, analysing summative data, even though it is lagged, can also help to inform interventions and dynamic resource allocation both in the moment and over time.

Curriculum alignment

  • In each EBacc subject, there is an agreed minimum body of knowledge (substantive and procedural) that all academies are expected to cover by the end of Years 7, 8 and 9. How that knowledge is sequenced and taught over each year is up to the individual academy. In any given year, each academy can also extend the body of knowledge if it wishes.
  • Leaders are also committed to aligning KS4 exam specifications and long-term plans in the EBacc. Once again, each academy has the autonomy to sequence and teach knowledge as it sees fit.
  • A department with the confidence of the Executive will be allowed to innovate (including teaching a different specification); however, if its results do not match or exceed those of the best departments across our Trust, it will be expected to align to the direction set by the cross-cutting team champion.
  • If several academies decide to align further, other academies must still be afforded the right to align only to the minimum body of knowledge and/or to innovate. However, all academies must continue to engage fully in cross-cutting teams.

Summative Collection Cycle

  • Summative assessments need to be far enough apart that students have the chance to meaningfully improve. Across the large domains of content that most summative assessments sample, students will not make particularly rapid improvements.
  • Because each academy has the autonomy to sequence and teach knowledge as it sees fit, we can be confident that the same content will have been taught in every academy by the end of each year, but not by the end of any given week or term.
  • Therefore, once again, if summative assessments are used too frequently, there are risks:
    • students and teachers get demoralised because hard work in class is not showing up as improvement
    • students and teachers start to focus on short-term tactics which will lead to improvement on the summative assessment (but not lead to real improvement in learning)
  • This is why our Trust sets common assessments only once a year (towards the end of Cycle 3), and we do not expect our academies to set more than one mid-year summative assessment (at the end of either Cycle 1 or Cycle 2).

Assessment Scope

  • Up until the end of Year 10, common assessments set by our Trust are cumulative (rather than unit-based or global) in scope:
    • Cumulative assessments take advantage of the spacing effect: if you have already studied something, studying it again after a delay can produce a huge amount of learning
    • Knowing there will be a cumulative summative assessment changes the way most students study (for the better). Research suggests that simply telling students that there will be a cumulative assessment may enhance their learning
    • Students often underestimate the value of repeated studying and they do not like cumulative assessments for the very reason they ought to be used: preparing for them requires more time and energy devoted to understanding and remembering content
    • Cumulative assessments are a desirable difficulty: they enhance learning but students do not like them
  • Once students have covered enough of the curriculum, common assessments can be full past papers i.e. global in scope. This shift in scope is unlikely to happen until the end of Year 10; it is curriculum-driven and decided by leaders on cross-cutting teams.
  • Global assessments may be useful in reassuring teachers about their predictions; however, sitting full papers too early is unlikely to enhance learning and may cause greater anxiety amongst students.

Standardisation

  • Perhaps the most important concept in assessment is validity: Daniel Koretz, Professor of Assessment at Harvard University, says that ‘validity is the single most important criterion for evaluating achievement testing’.
  • The actual result on an assessment does not matter in itself; what matters are the inferences that we can make from that result. We need to be sure that the assessments we use are capable of supporting significant inferences. The process of establishing this is called validation.
  • One difficult aspect of the validity of an assessment involves sampling. Most assessments designed to produce a summative inference do not directly test the entire domain. So, when thinking about the validity of a summative inference, we nearly always need to consider what domain we are trying to measure and how the assessment has sampled from that domain.
  • A second vital assessment concept is reliability: the consistency of assessment. One factor that can have a big impact on reliability is agreement between markers. Therefore, internal and external (where possible) moderation is carried out across our Trust.
  • Reliability is particularly important to consider when it comes to measuring progress. When we measure progress, we are often looking at the difference in performance between one test and the next. As such, we have two sets of potential measurement error to deal with: the error on the first test and the error on the second. Understanding reliability helps us to understand whether students really have made progress or not. As a crude rule of thumb, we judge individual students’ percentile-rank improvement with a +/- 5% margin, but class and academy averages with more precision, because averaging across many students reduces the effect of measurement error.
  • Where possible, our common assessments are designed externally to make them more valid and reliable. For example, externally designed assessments are less familiar and less predictable for teachers. If some teachers see the assessment in advance, they might distort the curriculum (or advice regarding revision topics) in a manner that improves assessment performance but not genuine learning across the wider knowledge domain.
  • Our common assessments (Y7-11) challenge students and provide an experience that matches that of final GCSE examinations. Academies are expected to ensure that assessments are undertaken in standardised conditions where students and teachers have standardised perceptions of the importance of the assessment.
  • For leaders to make valid inferences across classrooms, or across academies, they need to understand how the stakes are being framed for all students taking the test, even those who are not in their own academy.

Approximation and Grading

  • In Years 7, 8, 9 and 10 (up to the end of Cycle 2), we simply record and report raw scores from assessments as they are (with academy averages to provide benchmarks), as raw scores are clear, factual and judgement-free.
  • We only start to report GCSE grades to students and their families once students sit full past papers that are global in scope. This is because GCSE grades are intended to be used to judge a student’s understanding of a full specification.
  • At the end of the year (only), for all years, we use colours to recognise students with high attainment (purple) or who have made large shifts from their baseline (purple), as well as to prioritise students for academy-leader-driven intervention (red).

Our full assessment principles can be accessed in the document section below.

Assessment Windows

Year 11 Mock Exams – 14 November – 25 November 2022

Year 7 Cycle Exams – 30 January – 3 February 2023

Year 8 Cycle Exams – 9 January – 13 January 2023

Year 9 Cycle Exams – 30 January – 3 February 2023

Year 10 Cycle Exams – 9 January – 13 January 2023

Year 11 Mock Exams – 13 March – 17 March 2023

Year 7–10 Cycle 3 Assessments – 12 June – 23 June 2023