In the realm of education there is a wide variety of pedagogic strategies and theories, but to substantiate these we must have a proven form of validation that both assures the success of the educational experience itself and verifies the learning outcomes of the students. We may find, however, that students, and on occasion even educators, do not see the point of an assessment as part of a course, feeling that participation alone is enough. So why should we validate their knowledge?
To overcome this we must justify the validation itself, yet from various readings the issue often appears to arise from the use of the wrong assessment type (Gibbs & Simpson 2004). The challenge is to find the right assessment to validate the learning experience: the assessment must measure the intended outcomes and be an appropriate means of testing for its audience. For instance, a form of validation suited to a school student may not be appropriate in a professional environment.
In an educational setting, assessment is traditionally used as a form of validation of the students' work and development. We most commonly see this applied in an empirical manner, where the goal is to assess components of knowledge built around a rote-learning strategy: the idea is that students have memorised and absorbed the information given to them and can reproduce it in a test environment. This approach relates to Skinner and his behaviourist theories (Skinner 1979), where the output sought is an automated response, and, at a deeper level, to Gagné's nine events of instruction (Gagné & Briggs 1974), in which stage eight, 'Assess Performance', leads from the ability simply to reproduce information towards the final stage, 'Enhance Retention' of the knowledge itself. This is a summative approach to assessment.
In higher education learning environments, however, we tend to see the focus of assessment move away from these traditional empirical methods and towards an ideal of involvement and participation. This may be because school education is largely geared towards preparing students to pass state examinations, so development comes second to an empirical result, whilst further and higher education is mostly aimed at developing the student to function in a working environment, where continual growth and collaboration are generally more sought after than rote memorisation.
With this in mind, we can see that the assessments created for these higher learning institutes try to be more open in their approach. The focus shifts from memorisation to rationalisation: the student now needs to research for themselves and develop according to their own interests, and the ability to rationalise and discuss their findings becomes the main area of importance. The assessments can now be more varied, and a formative approach can be used to show the students' knowledge and understanding, drawing on portfolios and participation-based assessments through which we can gain an overview of development, or even break it down into increments.
In these formative scenarios, where we can see development happening through the stages, the idea is that we can assess the quality of the students' achievements whilst they are still in the process of learning. That is to say, although the students are being tested on their performance, the test itself may be open-ended. In the formative scenario the student cannot know too much; a limit may be reached within the time frame, but that limit is set by the student and their involvement. This is in direct opposition to the summative approach, where the goal is to achieve only the designated levels being tested. This could, in turn, almost be perceived as putting a cap on the education of the student, e.g. "learn this course material and that is all you will need to know to pass the exam". This is not to say that summative assessment does not have its own benefits, which we will touch on later, but it raises the question of how, or even whether, we match our assessment to our desired learning outcomes.
Assessment is by no means limited to the academic environment; in industry there are many valid reasons to use assessment on a regular basis, from initial interviewing to progression and development. These again tend to be a mixture of formative and summative assessments, depending on the area that needs to be gauged.
A summative assessment in these situations may not just be a knowledge test; for example, it may take the form of a technical test when applying for a computing role, or a mathematics test for an actuarial position. In some circumstances we may use a language test to assess an individual's skill with a language before they are brought into a multicultural working environment; even though an interview may be more informative, it is not always feasible due to time constraints and location, but in deciding on the appropriate assessment method we still consider whether the candidate is at an appropriate level for the business. The difference in this sort of scenario is that, unlike the academic environment, the business treats the assessment as a tool used not to find the limits of the individual's knowledge but to establish whether they have the skill level required for the role, which is why the summative approach is just as effective here, and in fact more cost- and time-effective.
In a more developmental direction, however, we can see how in some creative careers an ongoing portfolio of work is kept to display not just one's ability but how it has changed over time. This was once common mainly in art and design careers, but is now also used in technical roles, for example to display a history of software development. Some employers will go so far as to develop entire training structures to build an individual for a role. In these, from my own experience in industry, the trainee keeps an ongoing portfolio of work and development and completes a final test as well; this shows the individual's potential for development whilst demonstrating definite knowledge of certain areas. All of this is done within reason for the role: in certain areas an individual would need to recall information directly from memory, in others they would have time to research. To ensure the candidate is properly versed, the testing must be appropriately matched to the intended skills.
I have mentioned the importance of finding the right assessment to match the learning targets, but how do we do this? There seems to be no set instruction on how to approach the matter, but there are noted considerations on how we should broach the subject. A primary concern, as mentioned above, is to be clear in your targets: a well-defined outcome is essential in directing us towards the correct assessment. There are two perspectives on this point, the student's and the educator's. Does the assessment technique benefit the student, and does the proposed assessment method actually demonstrate knowledge or understanding of the topic for the educator to assess? As noted, in some business scenarios an assessment does not necessarily need to benefit the candidate, but rather serves as a method of skill testing. This too is a consideration: why create a portfolio when all you need to know is that the person has a basic understanding? It is of no benefit to either party and creates undue work.
So when we say educational assessment, why bother? The simple answer is: how could we not bother? On reflection, when we discuss assessment it may not always be apparent, especially to those being assessed, what the benefits and reasoning behind the method and approach are. This can mostly be overcome simply by explaining the logic behind the practice in the given scenario. Each assessment is, or at least should be, there with a purpose: in education it is used to develop the student, and in business it is a means to gauge performance. Without assessment we could not validate the skills or knowledge of an individual or group. The challenge, though, is not when to use assessment but which assessment to use. Each assessment should be well planned to fit the desired outcomes; if it is appropriately selected and implemented, a better learning experience can be created.
Bibliography
Gagné, R.M. & Briggs, L.J., 1974, Principles of Instructional Design, Holt, Rinehart and Winston, Texas.
Gibbs, G. & Simpson, C., 2004, 'Conditions Under Which Assessment Supports Students' Learning', Learning and Teaching in Higher Education, Issue 1.
Nitko, A.J. & Brookhart, S.M., 2006, Educational Assessment of Students, Pearson, accessed 22 February 2010, from http://wps.prenhall.com/chet_nitko_education_5/47/12089/3094917.cw/-/t/index.html
Skinner, B.F., 1979, The Shaping of a Behaviorist, Alfred A. Knopf, New York.