The prime responsibility of a Registered Training Organisation (RTO) is to certify the competence of individuals. Certification must only be issued to a learner whom the RTO has assessed as meeting the requirements of the training product as specified in the relevant training package or VET accredited course (Clause 3.1). This may have been achieved through training delivered by the RTO or skills acquired elsewhere (Clause 1.12). Credible assessments are therefore an integral element of the VET system, and it is essential that RTOs undertake their assessments in accordance with the Standards for RTOs.
Assessment is defined in the Standards as: “the process of collecting evidence and making judgements on whether competency has been achieved, to confirm that an individual can perform to the standard required in the workplace, as specified in a training package or VET accredited course” (Standards for RTOs, Glossary, p7).
In VET there are fixed performance standards set to reflect industry needs. These are specified as units of competency, and all aspects of the unit’s requirements must be demonstrated for a learner to be judged competent. These rigorous requirements maximise consistency, reliability and validity. In competency-based VET we can have variable inputs to meet individual needs (Clause 1.7), but our task is to achieve uniform outputs that are meaningful and relevant to the workplace (Clause 1.5).
VET assessors and evidence-gatherers
In VET, trainers and assessors must meet the following strict criteria:
- be trained as assessors (Clauses 1.14 & 1.15);
- have the vocational competence they are assessing (Clause 1.13a);
- have current industry skills relevant to the assessment (Clauses 1.13b & 1.6b);
- have current knowledge and skills in VET that informs their assessment (Clause 1.13c); and
- maintain the currency of all of the above through professional development (Clause 1.16).
It is necessary for each VET assessor to bring all this expertise to the task of building and managing assessment tools and processes. If the task is approached this way, then valid assessment tools, valid assessment processes and valid assessment outcomes are most likely. The Standards for RTOs also reduce the room for error by stipulating specific quality requirements for the development of assessment tools and assessment judgement processes (Clause 1.8) and the validation of assessment outcomes (Clauses 1.9 – 1.11 and Clause 1.25).
Interpreting units of competency
When interpreting units of competency, the first aspect to consider is ‘the standard required in the workplace’. The unit has been written in consultation with industry with the express purpose of defining a precise target for training delivery and a standard for assessment.
Competency means “the consistent application of knowledge and skill to the standard of performance required in the workplace. It embodies the ability to transfer and apply skills and knowledge to new situations and environments.” (Standards for RTOs, Glossary, p8). The format of Training Packages has changed, and as a result the unit of competency may have one of two possible structures – ‘new’ and ‘old’.
Common terms used within training packages
Some information within the unit is provided as guidance for the assessor, to position the unit within the workplace context. The assessor needs to draw upon current industry skills to make the most of this guidance.
Most of the information provided in the unit states non-negotiable mandatory requirements: aspects that must be included in the evidence used to make the assessment judgement. Assessors need to use their vocational competence and industry skills to interpret these requirements accurately and consistently.
Evidence must be gathered for all mandatory requirements; any evidence that does not relate to these requirements must be ignored. All mandatory requirements must have been demonstrated before the assessor can make a decision of “competent”.
Apart from any prerequisite units, all required evidence is either evidence of possession of knowledge or evidence of the performance of a skill.
Evidence-gathering methods
Given that it is necessary to explicitly assess knowledge and explicitly assess skill, it is not surprising to note that there are two different methods of evidence-gathering. What are these two assessment methods?
- show; and
- tell.
When it comes to gathering evidence of a skill, we ask the candidate to show us the skill - to perform the skill. When we are gathering evidence of knowledge, we ask the candidate to tell us the answer to a question. These do not have to be separate assessment events. While we are observing the performance of a skill we can ask questions and record answers, but they are two different assessment methods. It is the RTO assessor’s responsibility to develop a set of assessment tools to gather evidence of knowledge and evidence of skills. These tools can be used by the RTO assessor and also by others as third-party evidence-gatherers. (Note that when a third-party evidence-gatherer is not a trainer or assessor, Clauses 1.17 – 1.20 may apply).
While the RTO assessor developing and managing the assessment tools and processes must meet the five criteria listed above, the evidence-gatherer only needs to be able to use the assessment tools correctly. If the evidence-gatherer is not the RTO assessor, they must work under the direction of the RTO assessor to ensure that the rules of evidence are adhered to.
Third-party evidence-gatherers may be other RTO staff, employers, supervisors or others who are appropriately placed to use the assessment tools. They may simply ask questions and record answers (knowledge) and/or observe performances and record their observations (skills). Evidence-gatherers may be included in the assessment validation and verification processes. Evidence-gatherers are merely reporting their observations; they are not making assessment judgements.
Having a range of potential evidence-gatherers greatly expands our assessment reach; we can gather evidence in a wide range of locations, including a candidate’s workplace. Our evidence-gathering methods, observation (“show”) and question and answer (“tell”), must meet all the rules of evidence (Clause 1.8b). These rules are: validity, sufficiency, authenticity and currency.
Now that we know what to assess (unit requirements), how to assess (question and answer, and observation) and who can gather the evidence, we need to make an assessment plan. The purpose of the plan is to ensure that every candidate has the opportunity to demonstrate competence. We might deviate from the plan due to a change in circumstances, to be ‘flexible’ and ‘fair’, but we must have a plan that will support the gathering of valid, sufficient, current and authentic evidence so that a valid and reliable judgement of competence can be made (Clauses 1.1 and 1.3).
Planning valid evidence-gathering
The Standards for RTOs are silent about how assessment processes must be managed: they speak of the quality of the assessment outcomes, but not of the means by which those outcomes can be achieved. The following section provides guidance to assist RTOs in demonstrating compliance with the Standards for RTOs. Additional guidance on assessment is also provided in the TAC workshops on Assessment, available via the TAC Webinar Recordings and Resources webpage, and in the publication Assessment in the VET Sector, second edition (2016), published by the Department of Training and Workforce Development.
There are many unit requirements to assess, and each one of them must be addressed (sometimes more than once), so we can start with a list of unit requirements. The list must consider every mandatory unit requirement, including:
- any prerequisite units;
- the elements and their performance criteria;
- the performance evidence;
- the knowledge evidence; and
- the assessment conditions.
The above list will shape evidence-gathering processes and judgements (and may also be used to guide training delivery), so before we continue we need to ask someone else to verify the list (Clauses 1.4 and 1.8a).
A common problem for RTOs is fragmented assessment of an element. It is important that the element be observed as a whole, not as different parts in different assessment events. When we are planning to assess a particular element, the assessment event must encompass all of its performance criteria. Consider the example below.
BSBAUD504 Report on a quality audit
Element | Performance Criteria
---|---
2. Prepare report | 2.1 Provide objective evidence relating to the need for reduction, elimination and prevention of non-conformance as the basis for the audit report. 2.2 Produce audit report according to specified audit requirements. 2.3 Present audit report to auditee and other stakeholders.
- The evidence required would be the presentation of a completed audit report (at least one).
- When making a judgement about the evidence (the report), the assessor will use the performance criteria to determine whether the report meets the required standard.
- The performance evidence for this unit of competency provides more information about the requirements of the audit report. Specifically: “Interpret audit results and produce a detailed audit report containing detailed analysis according to specified requirements.”
- The assessor will also take these requirements into consideration when determining whether the audit report meets the required standard.
Now we need to determine the “how, when, where and who” to gather evidence for each unit requirement. Once again we need to draw upon our current industry skills and vocational competence to imagine a reasonable set of tasks to observe and question/answer opportunities that together will enable us to gather enough evidence for all of the unit requirements.
In the same way that we verified the list of evidence required for assessment, we must also verify the assessment tasks that have been created. The approach adopted by many RTOs is a mapping or matrix of unit requirements and assessment tasks. This may be represented in a table with each of the unit requirements listed down the left-hand side and a column for each assessment task. For each mandatory unit requirement, one or more assessment tasks are mapped to show which assessment task addresses it, and which specific part (item number) of that task does so.
Some example lines of such a table might look something like this:

Unit requirement | Observation 1 | Observation 2 (with set questions) | Question/answer test | Observation 3
---|---|---|---|---
Prerequisite unit 1 | | | |
Prerequisite unit 2 | | | |
Element 3 (all performance criteria) | Part 5 | Part 2 | |
Element 4 (all performance criteria) | Part 6 | Part 4 | |
Knowledge evidence item 7 | | Q6 | Q2 |
In this example, there are two required prerequisites. These must have been achieved at the RTO (or at another RTO, Clause 3.5) before a judgement of “competent” can be made.
Underneath the prerequisites are four evidence-gathering events; one of the observations is combined with some set questions. Element 3 (and all its performance criteria) is observed in part 5 of Observation 1 and in part 2 of Observation 2. Element 4 (and all of its performance criteria) is observed in part 6 of Observation 1 and part 4 of Observation 2. Note that Element 4 is not assessed by question and answer. Also in this example, item 7 of the knowledge evidence is assessed in Q6 of the question/answer part of the second assessment event, and again in Q2 of the question/answer test; note that it is not assessed through observation. Observation 3 does not address any of the requirements listed in this section of the mapping.
This looks straightforward, so what can go wrong? Mapping of this kind can reveal a number of issues and help us avoid them: unit requirements that no assessment task addresses, and assessment items that gather evidence no requirement calls for.
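For RTOs that keep such a mapping electronically, the coverage check can be automated. The following is a minimal sketch in Python, using hypothetical requirement and task names taken from the example above; it is illustrative only and not a tool prescribed by the Standards.

```python
# Illustrative only: mandatory unit requirements mapped to the assessment
# task items that address them (hypothetical names from the example above).
mapping = {
    "Element 3 (all performance criteria)": ["Observation 1, part 5", "Observation 2, part 2"],
    "Element 4 (all performance criteria)": ["Observation 1, part 6", "Observation 2, part 4"],
    "Knowledge evidence item 7": ["Observation 2, Q6", "Question/answer test, Q2"],
    "Knowledge evidence item 8": [],  # hypothetical unmapped requirement: a coverage gap
}

# Reveal coverage gaps: requirements with no assessment task mapped to them.
for requirement, task_items in mapping.items():
    if not task_items:
        print(f"Coverage gap: '{requirement}' is not addressed by any assessment task")
```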
Note that we have not listed performance criteria in the table. The performance criteria of an element are bound to that element and must always be assessed together, in the same event, when the element is being assessed. The performance criteria are used to determine whether the element has been met, so assessing them separately would invalidate the assessment of the element. Later we will use the performance criteria to guide the writing of an observation checklist for each element.
Performance criteria can be listed in the mapping and this can provide for more precision in the linkages between unit requirements and assessment items, but this must not be an invitation to assess performance criteria from an element in different tasks.
When the mapping is complete, each column provides a specification for observation and/or questions for each assessment task. If assessment tools have been purchased, the same mapping process will enable us to ensure that the tools provide valid and sufficient evidence; if not, the mapping will guide us to create supplementary tasks or questions to close the gaps. RTOs should always check the accuracy of purchased mapping documents and are responsible for ensuring validity.
We can also use the mapping in a number of other ways, as described below.
We may find that in delivery and assessment it is convenient to cluster units of competency. This does not change the mapping; it just means that the assessment activities will gather evidence for a number of units, but we still have a separate mapping for each unit. Later we will see that we can also use the mapping as a tool to keep a progressive record of each individual’s skills and knowledge, and ultimately as a tool to make the assessment judgement. The mapping table is a useful tool to ensure (and demonstrate) that the rules of evidence of validity and sufficiency are met in the planning process.
Managing the gathering of evidence and record keeping
The RTO assessor must plan and create (or select) the evidence-gathering tools, and manage the evidence-gathering process, including giving direction to any third-party evidence-gatherers. In developing evidence-gathering tools from the mapping, the RTO assessor needs to develop a set of real or simulated workplace tasks that will provide the candidate with an opportunity to display their skills, and a set of oral or written questions which will enable the candidate to display their knowledge.
Real or simulated workplace tasks include a set of instructions for the candidate (task, outcomes, equipment, materials, conditions etc.), a set of directions for the observer (typical workplace observations, reasonable flexibility, recording and reporting requirements, questions and typical answers etc.), and an observer checklist to guide observation and record observations and answers. To capture the specific requirements of each element, every performance criterion of that element should be included in the checklist, contextualised to the set task and the assessment environment. The observer is not making assessment judgements; they are making observations, recording them and reporting them to the RTO assessor.
Each oral or written question/answer task would include instructions for the candidate (answering mode, time etc.) and a set of directions for the administrator (time, conditions, typical answers, recording and reporting requirements etc.). The administrator is not making assessment judgements; they are recording answers and reporting them to the RTO assessor. A process is required to record the evidence being gathered for each candidate, and whether that evidence is satisfactory or not. This enables the RTO to track the candidate’s progress. For example, as observation checklists and questions and answers are completed, the RTO assessor could record the outcomes on an assessment record or a modified copy of the assessment mapping.
If a modified copy of the assessment mapping was used, it would now look like the following, with each mapped cell recording the outcome of the corresponding evidence-gathering:

Unit requirement | Observation 1 | Observation 2 (with set questions) | Question/answer test | Other evidence
---|---|---|---|---
Element 3 (all performance criteria) | Part 5: satisfactory | Part 2: not yet gathered | |
Element 4 (all performance criteria) | Part 6: satisfactory | Part 4: not observed | | Workplace observation: satisfactory
Knowledge evidence item 7 | | Q6: satisfactory | Q2: satisfactory |
We could also keep a column empty so that any other unplanned evidence could be noted against the appropriate unit requirement(s), or to manage reasonable adjustments where alternative evidence is gathered, or to record credible evidence offered as a part of an RPL process.
This mapping/record document provides a continuous record of an individual candidate’s progress, indicating where supplementary training and assessment might be needed, and can be used to identify common problems that the cohort of learners might be experiencing with particular aspects of the unit.
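Where the progressive record is kept in software rather than on paper, it might be modelled along the following lines. This Python sketch uses hypothetical names and a simplified satisfactory/not satisfactory outcome; it is an illustration, not a prescribed record-keeping format.

```python
from collections import defaultdict
from datetime import date

# One candidate's progressive evidence record: unit requirement ->
# list of (task item, satisfactory?, date recorded). Hypothetical structure.
record = defaultdict(list)

def record_outcome(requirement: str, task_item: str, satisfactory: bool) -> None:
    """Record one evidence-gathering outcome reported by an evidence-gatherer."""
    record[requirement].append((task_item, satisfactory, date.today()))

# Example: outcomes reported from Observation 1 and the question/answer test.
record_outcome("Element 3 (all performance criteria)", "Observation 1, part 5", True)
record_outcome("Knowledge evidence item 7", "Question/answer test, Q2", True)

# Per requirement, show how much satisfactory evidence has accumulated so far.
for requirement, outcomes in record.items():
    ok = sum(1 for _, satisfactory, _ in outcomes if satisfactory)
    print(f"{requirement}: {ok} satisfactory item(s) of {len(outcomes)} gathered")
```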
Over time the record for each candidate will accumulate, until a judgement has to be made of their competence. The Principles of Assessment (Clause 1.8b) state:
- assessments must be fair - candidate needs are considered in the design and delivery of the assessment process, including reasonable adjustments, and candidates are fully informed about the processes to be followed;
- assessments must be flexible - the assessment process can be modified to meet candidate needs, including offering RPL and varying the range of contexts where assessment might take place;
- assessments must be reliable - assessment processes are so structured as to ensure that outcomes do not vary from one assessor to another; and, above all
- assessments must be valid - the assessment judgement is made on credible evidence of the achievement of all the unit’s mandatory requirements and no other factors.
The assessment mapping/record document can also be used to ensure that evidence gathered for assessments associated with the Recognition of Prior Learning are based on the same criteria (Clause 1.8), and to support the evidence-gathering processes of partner organisations delivering training and assessment on behalf of the RTO (Clause 2.1).
Making the assessment judgement
When making the assessment judgement, assessors determine whether competency has been achieved by asking whether all of the unit requirements have been met, rather than whether all the assessment tools have been completed. The tools belong to the RTO and are not nationally recognised; only the unit is nationally recognised, so the unit must be the basis of the assessment judgement and national certification (Clause 3.1). As we already have the record from the mapping of the student’s evidence against the unit’s requirements, all we need to do is add one more column to record the assessor’s decisions with regard to each requirement, and a final summation, as suggested in the following table:

Unit requirement | Evidence recorded | Assessor’s decision
---|---|---
Element 3 (all performance criteria) | Observation 1, part 5: satisfactory | Demonstrated once only: not yet demonstrated
Element 4 (all performance criteria) | Observation 1, part 6: satisfactory; workplace observation: satisfactory | Demonstrated
Knowledge evidence item 7 | Q6: satisfactory; Q2: satisfactory | Demonstrated
Overall judgement | | Not yet competent
The assessor notes that Element 3 has been demonstrated only once; if, as in this example, the RTO requires every skill to be demonstrated twice, it cannot be signed off as demonstrated. Element 4 was not seen in Observation 2 but was subsequently seen in the workplace, so that requirement is met. Knowledge item 7 has been demonstrated. As Element 3 has not been fully demonstrated, the overall judgement is “not yet competent”.
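Expressed as a decision rule: “competent” can only be recorded when every mandatory requirement has the amount of satisfactory evidence the RTO’s assessment strategy calls for. The sketch below implements that rule in Python for the worked example above, assuming (as the example does) a policy of seeing every skill demonstrated twice; the names and counts are illustrative.

```python
# Satisfactory demonstrations recorded per mandatory requirement, matching the
# worked example: Element 3 seen once; Element 4 and knowledge item 7 seen twice.
demonstrations = {
    "Element 3 (all performance criteria)": 1,
    "Element 4 (all performance criteria)": 2,  # Observation 1 plus workplace evidence
    "Knowledge evidence item 7": 2,             # Q6 and Q2
}

REQUIRED = 2  # this RTO's illustrative policy: every requirement demonstrated twice

def judge(demos: dict) -> str:
    """'Competent' only if every mandatory requirement meets the evidence policy."""
    return "competent" if all(n >= REQUIRED for n in demos.values()) else "not yet competent"

print(judge(demonstrations))  # prints: not yet competent
```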
This mapping/recording/judgement document, together with a set of evidence-gathering tools provides detailed information supporting remedial or supplementary training (particularly if delivery resources are also mapped), and provides an evidence trail for potential appeals (Clause 6.2) or validation (Clauses 1.9 – 1.11).
A VET assessment system needs to ensure assessments are consistent and are based on the Principles of Assessment and the Rules of Evidence (Clause 1.8). The Standards for RTOs require that RTOs conduct regular validation of assessment processes and outcomes (Clause 1.8 and Clauses 1.9 – 1.11).
Validating VET assessment processes and outcomes
The following discussion is designed to link the information above to the requirements detailed in the TAC Fact Sheet Assessment Validation and the Standards for RTOs. The validation of assessment is a requirement of the Standards for RTOs in Clauses 1.8 – 1.11, 1.25, 2.2 and 2.4, and in the Glossary page 13. This Fact Sheet indicates that there are three phases where validation is needed.
1. Firstly, validation needs to be built in to the process of developing assessment tools and systems, verifying that the planned tools and processes cover all mandatory unit requirements and meet the rules of evidence.
2. The second phase is the evidence-gathering stage, where we need to verify that the tools and processes are being used as planned.
3. The third phase is the making of assessment judgements, where we need to verify that judgements are based on the evidence gathered against all of the unit’s mandatory requirements.
It is necessary to build validation processes or review points into all three phases for all units of competency. This is reflected in the validity requirements of Clause 1.8 of the Standards for RTOs and is intrinsic to the development, revision and delivery process for each unit of competency. This validation is a process of reviewing documentation relating to assessment policies and procedures, assessment plans, assessment tools, judgement tools, assessor qualifications and experience, the management of third-party evidence-gatherers, and the information provided to candidates before, during and after assessments. It can be conducted by the developer or the user of the assessment tools.
In contrast, Clauses 1.9 – 1.11 require that assessment practices and judgements be systematically validated on a five-year cycle. This is the validation of assessment systems, assessment products, assessment outcomes and assessors for a qualification, skill set, or stand-alone units not delivered as part of a qualification or skill set. It is designed to provide assurance that the RTO’s assessment development and delivery processes are used properly and lead to accurate and credible outcomes. It involves observing actual assessment activities and evidence-gatherers to verify compliance with planned processes, interviewing candidates, and following the evidence trail to the judgement of competence.
Clauses 1.9 – 1.11 define conditions for this validation process, including planning, timing, coverage, personnel and resulting action. Note that an assessor whose assessments are being validated cannot be part of the validation team for those assessments (Clause 1.11).
Validation conducted under Clauses 1.9 – 1.11 will cover the three phases listed above, but in addition will review the actual assessments and assessment outcomes of a sample of candidates for stand-alone units, or a sample of candidates from a sample of units from each qualification or skill set. More detailed information is provided in the TAC Fact Sheet Assessment Validation and the DTWD publication Assessment in the VET Sector.
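Where an RTO selects its validation sample in software, the sampling described above might look like the following Python sketch. The unit codes, candidate identifiers and sample sizes are hypothetical placeholders, not figures drawn from the Standards.

```python
import random

# Hypothetical units in a qualification and the candidates assessed in each.
candidates_by_unit = {
    "UNITA001": ["C01", "C02", "C03", "C04"],
    "UNITB002": ["C02", "C05", "C06"],
    "UNITC003": ["C01", "C07", "C08", "C09"],
}

UNIT_SAMPLE = 2       # placeholder: units sampled from the qualification
CANDIDATE_SAMPLE = 2  # placeholder: candidates sampled per sampled unit

for unit in random.sample(sorted(candidates_by_unit), UNIT_SAMPLE):
    pool = candidates_by_unit[unit]
    chosen = random.sample(pool, min(CANDIDATE_SAMPLE, len(pool)))
    print(f"Validate {unit}: review assessment evidence and outcomes for {chosen}")
```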
What are the auditors looking for?
During an audit the auditors are looking for evidence that you have established an assessment system that meets training package requirements, that you are using assessment tools that meet the Rules of Evidence, and that you have a process for making assessment judgements that meets the Principles of Assessment. This will include being able to show that:
- all unit requirements have been identified;
- valid, current, authentic and sufficient evidence is being gathered for all required aspects of competency (this may be demonstrated by a mapping or similar);
- assessment strategies are suitable for the learner cohort and can be adjusted to meet learner needs;
- assessment strategies and tools reflect current industry practices;
- assessor instructions and marking guides are available to support reliability;
- clear instructions are provided to make a competency-based assessment decision using evidence gathered;
- all those involved in developing and using the assessment tools have the necessary credentials; and
- all the above also apply to assessments conducted for the recognition of prior learning (RPL).