
Alternative assessment

College: College of Education for Human Sciences     Department: Department of English     Stage: 4
Course instructor: Munir Ali Khudair Rabie       1/27/2012 3:40:55 PM

Self-assessment is one example of what is increasingly
called 'alternative assessment'. Alternative assessment
is usually taken to mean assessment procedures
which are less formal than traditional testing, which
are gathered over a period of time rather than being
taken at one point in time, which are usually formative
rather than summative in function, which are often
low-stakes in terms of consequences, and which are claimed
to have beneficial washback effects. Although such
procedures may be time-consuming and not very
easy to administer and score, their claimed advantages
are that they provide easily understood information,
they are more integrative than traditional tests and
they are more easily integrated into the classroom.
McNamara (1998) makes the point that alternative
assessment procedures are often developed in an
attempt to make testing and assessment more responsive
and accountable to individual learners, to promote
learning and to enhance access and equity in
education (1998: 310). Hamayan (1995) presents a
detailed rationale for alternative assessment, describes
different types of such assessment, and discusses procedures
for setting up alternative assessment. She also
provides a very useful bibliography for further reference.
A recent special issue of Language Testing, guest-edited
by McNamara (Vol. 18, No. 4, October 2001)
reports on a symposium to discuss challenges to the
current mainstream in language testing research,
covering issues like assessment as social practice,
democratic assessment, the use of outcomes based
assessment and processes of classroom assessment.
Such discussions of alternative perspectives are closely
linked to so-called critical perspectives (what
Shohamy calls 'critical language testing').
The alternative assessment movement, if it may be
termed such, probably began in writing assessment,
where the limitations of a one-off impromptu single
writing task are apparent. Students are usually given
only one, or at most two tasks, yet generalisations
about writing ability across a range of genres are
often made. Moreover, it is evidently the case that
most writing, certainly for academic purposes but
also in business settings, takes place over time,
involves much planning, editing, revising and redrafting,
and usually involves the integration of input
from a variety of (usually written) sources. This is in
clear contrast with the traditional essay which usually
has a short prompt, gives students minimal input,
minimal time for planning and virtually no opportunity
to redraft or revise what they have produced
under often stressful, time-bound circumstances. In
such situations, the advocacy of portfolios of pieces
of writing became a commonplace, and a whole
portfolio assessment movement has developed, especially
in the USA for first language writing (Hamp-Lyons
& Condon, 1993, 1999) but also increasingly
for ESL writing assessment (Hamp-Lyons, 1996) and
also for foreign language (French, Spanish, German,
etc.) writing assessment.
Although portfolio assessment in other subject
areas (art, graphic design, architecture, music) is not
new, in foreign language education portfolios have
been hailed as a major innovation, supposedly overcoming
the drawbacks of traditional assessment. A
typical example is Padilla et al. (1996) who describe
the design and implementation of portfolio assessment
in Japanese, Chinese, Korean and Russian, to
assess growth in foreign language proficiency. They
make a number of practical recommendations to
assist teachers wishing to use portfolios in progress
assessment.
Hughes Wilhelm (1996) describes how portfolio
assessment was integrated with criterion-referenced
grading in a pre-university English for academic
purposes programme, together with the use of contract
grading and collaborative revision of grading
criteria. It is claimed that such an assessment scheme
encourages learner control whilst maintaining
standards of performance.
Short (1993) discusses the need for better assessment
models for instruction where content and language
instruction are integrated. She describes examples
of the implementation of a number of alternative
assessment measures, such as checklists, portfolios,
interviews and performance-tasks, in elementary and
secondary school integrated content and language
classes.
Alderson (2000d) describes a number of alternative
procedures for assessing reading, including
checklists, teacher-pupil conferences, learner diaries
and journals, informal reading inventories, classroom
reading aloud sessions, portfolios of books read,
self-assessments of progress in reading, and the like.
Many of the accounts of alternative assessment are
for classroom-based assessment, often for assessing
progress through a programme of instruction.
Gimenez (1996) gives an account of the use of
process assessment in an ESP course; Bruton (1991)
describes the use of continuous assessment over a full
school year in Spain, to measure achievement of
objectives and learner progress. Haggstrom (1994)
describes ways she has successfully used a video
camera and task-based activities to make classroom-based
oral testing more communicative and realistic,
less time-consuming for the teacher, and more
enjoyable and less stressful for students. Lynch (1988)
describes an experimental system of peer evaluation
using questionnaires in a pre-sessional EAP summer
programme, to assess speaking abilities. He concludes
that this form of evaluation had a marked effect on
the extent to which speakers took their audience
into account. Lee (1989) discusses how assessment
can be integrated with the learning process, illustrating
her argument with an example where pupils prepare,
practise and perform a set task in Spanish
together. She offers practical tips for how teachers
can reduce the amount of paperwork involved in
classroom assessment of this sort. Sciarone (1995) discusses
the difficulties of monitoring learning with
large groups of students (in contrast with that of
individuals) and describes the use, with 200 learners
of Dutch, of a simple monitoring tool (a personal
computer) to keep track of the performance of individual
learners on a variety of learning tasks.
Typical of these accounts, however, is the fact that
they are descriptive and persuasive, rather than
research-based, or empirical studies of the advantages
and disadvantages of alternative assessment. Brown
and Hudson (1998) present a critical overview of
such approaches, criticising the evangelical way in
which advocates assert the value and indeed validity
of their procedures without any evidence to support
their assertions. They point out that there is no such
thing as automatic validity, a claim all too often made
by the advocates of alternative assessment. Instead
of 'alternative assessment', they propose the term
'alternatives in assessment', pointing out that there
are many different testing methods available for
assessing student learning and achievement. They
present a description of these methods, including
selected-response techniques, constructed-response
techniques and personal-response techniques.
Portfolio and other forms of alternative assessment
are classified under the latter category, but Brown
and Hudson emphasise that they should be subject to
the same criteria of reliability, validity and practicality
as any other assessment procedure, and should be
critically evaluated for their 'fitness for purpose',
what Bachman and Palmer (1996) called 'usefulness'.
Hamp-Lyons (1996) concludes that portfolio scoring
is less reliable than traditional writing rating; little
training is given and raters may be judging the writer
as much as the writing. Brown and Hudson emphasise
that decisions for use of any assessment procedure
should be informed by considerations of
consequences (washback), the significance and need
for, and value of, feedback based on the assessment
results, and the importance of using multiple sources
of information when making decisions based on
assessment information.
Clapham (2000b) makes the point that many
alternative assessment procedures are not pre-tested
and trialled, their tasks and mark schemes are therefore
of unknown or even dubious quality, and despite
face validity, they may not tell the user very much at
all about learners' abilities.
In short, as Hamayan (1995) admits, alternative
assessment procedures have yet to 'come of age', not
only in terms of demonstrating beyond doubt their
'usefulness', in Bachman and Palmer's terms, but
also in terms of being implemented in mainstream
assessment, rather than in informal class-based assessment.
She argues that consistency in the application
of alternative assessment is still a problem, that
mechanisms for thorough self-criticism and evaluation of
alternative assessment procedures are lacking, that
some degree of standardisation of such procedures
will be needed if they are to be used for high-stakes
assessment, and that the financial and logistic viability
of such procedures remains to be demonstrated.