The National Curriculum of the Islamic Republic of Iran (2013) proposes an "active, self-confident communicative approach" (an indigenous version of the communicative approach) for teaching English in public schools (Curriculum and Textbook Development Office, 2014). The goals of learning English in junior high schools are domain-specific themes derived from upstream policy documents. The curriculum does not directly address a plan for testing, apart from a general ability to read and write essays by the end of senior high school. Thus, textbook developers have codified a set of functions and notions for the textbooks (Nikoopoor, 2013) and a testing format that depends on the textbook and the learners' level, in line with current trends in language testing and assessment regulations. So far, no research has investigated this testing format.
These textbooks are intended to suit Iranian absolute beginners as well as learners in urban and rural areas who have already been learning English. To this end, "Prospect 1" and "Prospect 2" cover content from the personal domain, mainly oral communication skills. Consequently, teacher-developed proficiency tests, the test format recommended by the developers, must focus on evaluating both oral and written skills (Curriculum and Textbook Development Office, 2014).
Accordingly, textbook developers suggest both formative and summative oral and written assessments, in line with the current semi-traditional quantitative assessment trend in junior high schools. These assessments are:
- workbook and other in-class student activities as formative assessment;
- a writing and reading comprehension test as the final written assessment;
- a speaking (aural-oral) test as part of the oral assessment; and
- a listening-writing test (Foreign Language Department of Curriculum and Textbook Development Office, 2017).
Trends in Language Testing
Traditionally, language tests were either pen-and-paper tests or performance tests. The former usually assess separate language skills in the form of familiar exam questions, whereas in the latter, language skills are assessed through simulated real-life communication acts, which require trained raters and an agreed rubric by which the raters judge the testee.
In recent decades, the testing trend has shifted from merely quantitative pen-and-paper tests to an alternative type. Although there has been some controversy about exactly what it replaces, the consensus is that the new testing trend is a substitute for the traditional one (AssessmentTrends). In language learning, alternative assessment uses communication for meaningful purposes, with activities that emphasize learners' strengths and scoring systems different from the preceding fashion, e.g. checklists, portfolios, projects, rubrics, etc. (NCLRC, 2007).
Approaches to Language Testing
Following the emergence of different schools of linguistics and psychology, a number of theories of language learning, teaching, and testing were proposed. Generally, there have been three broad changes in approaches to testing over the past century. The rationales and procedures proposed by different scholars caused some of these approaches to branch further.
The Structural Approach: Psychometric-Structural Testing
This approach was the offspring of applying behavioral psychology and structural linguistics to language teaching. The elements of the four levels and the four ways of mobilizing language use served as the criterion for proficiency, whereas there was no criterion for performance (Baker, 1989, p. 32). Consequently, knowledge of language was atomized, decontextualized, and tested in isolation (McNamara, 2000). This approach mainly uses item formats that enable designers to build atomistic tests whose items are independent of one another and that return easily quantifiable, reliable data for further analysis, e.g. multiple-choice, true-false, etc. (item_format).
The above-mentioned characteristics make such tests valuable; yet the same factors make them inadequate. For example, breaking language apart from its context causes the loss of some crucial properties of language and creates others it does not have. Moreover, assessing isolated language components does not necessarily add up to a satisfactory whole (Weir, 1991). These tests also fail to address everyday tasks, and they assess only knowledge about the language, not real-life communication. Such shortcomings, along with the decline of the approach's underlying theory, made test designers think twice.
The Integrative Approach
Integrative tests focused on almost everything that psychometric-structural testing had banned for the sake of achieving communication; however, they do not involve the use of functional language (Baker, 1989). Instead of seeing language as a set of habits, they viewed language as a system whose subskills are measured simultaneously (Farhady, Ja'farpur, & Birjandi, 2009). The approach had no sampling criteria because language was considered indivisible (Baker, 1989). The structure proposed by Oller was simple, i.e. it had no structure at all.
Later, Oller (1979) developed a pragmatic testing model based on the Gestalt concept of closure and the integrative approach. He proposed cloze and dictation as reliable alternatives to other integrative tests. Several merits made these tests credible; for example, they were relatively easy to construct and to score objectively. Such tests have two distinguishing criteria. They should engage: a) the testee's developing grammatical system; and b) the mapping of linguistic sequences onto extralinguistic context, i.e. meaning.
This approach failed to target real-life communication because its proponents claimed that setting real-life constraints, such as time, on a linguistic test was sufficient for communication (Weir, 1991). This was not true, since cloze and dictation can demonstrate neither the existence of communicative competence nor its performance through the use of language skills (Farhady, Ja'farpur, & Birjandi, 2009).
While integrative testing is believed to be the opposite pole of discrete-point testing on a continuum, in most cases it is an adjunct to discrete-point testing rather than an alternative to it. In practice, the standards of neither model are achieved in pure form. The only merit of the integrative model over the discrete-point model is its theoretical grounding (Farhady, Ja'farpur, & Birjandi, 2009).
Communicative Language Testing
In the 1970s, Hymes proposed a new theory of language stating that knowing a language involves more than knowing its grammatical rules, and that the communicative context has its own culturally related rules (Heaton, 1990). Therefore, the aim of communicative language teaching, derived from Hymes' theory, was to incorporate tasks that simulate everyday language use, i.e. how to communicate.
In the 1970s, as grammar-centered teaching fell out of favor, the goal of language teaching broadened from accuracy alone to both fluency and accuracy. Therefore, instead of specifying a set of grammar and vocabulary to learn, curricula and courses were redesigned based on learners' needs (Richards, 2006). In fact, mastering linguistic knowledge was no longer the only goal of language learning.
The advent of CLT was accompanied by a shift in assessment, from the traditional quantitative trend to a new alternative, the qualitative-descriptive one. This shift was advantageous for several reasons. For example, since this trend provides a detailed description of each performance level, the reliability of scoring increases. Also, both learner and teacher know which aspects of language learning require further development. In a nutshell, this trend is more consistent with the changes and classroom practices that CLT seeks than the traditional one.
Over the past 40 years, several models of communicative language ability (CLA) have been proposed by different scholars. These models are considered the starting point for curriculum, course design, and assessment. Some of these models draw on empirical research to define the components of CLA, e.g. Canale and Swain (1980); Bachman and Palmer (1996); Douglas (2000); and Purpura (2004). Such approaches aim to reflect how L2 learners use these components. They provide potential targets of assessment for different purposes and contexts rather than prescribing a test-development guide. Another approach uses the firsthand experience and opinions of experts with clear ideas and skills in L2 communication to develop curriculum and assessment standards, like the ESL standards for pre-K-12 tests. These standards describe learners' achievements without an explicit model of proficiency or systematic inquiry to verify them. Similarly, the CEFR attempts to specify what learners "can do" with language at various proficiency levels through a set of statements. Generally, these statements are based on a set of general competences and a set of language-specific ones considered essential for communication. In yet another approach, the components of CLA are defined in terms of how language is used through skills, i.e. listening, reading, writing, and speaking, rather than the formal elements of language, e.g. the iBT TOEFL test-design framework (Purpura, 2008). The textbook developers seem to have used something between the second and third approaches to CLA since, as mentioned earlier, no research-based framework has been developed for the textbooks.
As mentioned, a communicative test should assess the learner's ability to apply knowledge of the language in a meaningful real situation. Therefore, such tests need to follow certain principles. Many scholars believe that test developers must consider the following principles while designing their tests:
Start from somewhere: Test developers should clearly express what testees need to perform to use the target language in a specific context, and what criteria are appropriate to measure it. In fact, test developers should know what they are going to test.
Concentrate on content: Test developers must gear the topics and tasks of a test to test takers' age, proficiency level, interests, and goals/needs. In other words, test content must be relevant to test takers' future needs; otherwise it will fall short of their expectations.
Bias for best: Test developers should make sure that testees are well prepared for their tests. If this is not the case, e.g. learners are not familiar with the format of the test, teachers and test developers must provide ways to make them ready for the test.
Characteristics of a Communicative Language Test
Along with the principles mentioned, researchers believe that tests designed on communicative principles require certain characteristics that distinguish them from the tests of preceding approaches.
The purpose of a communicative test, as mentioned, is to measure language proficiency; therefore, one must account for where, when, how, with whom, and why the language is to be used, on what topics, and with what effects. Moreover, test developers must bear in mind that there is no single best test. In other words, whether a test is good depends on the purposes for which it is developed (Weir, 1991).
In test construction, developers must ensure that the sample used in the test is as representative as possible. Also, according to McNamara (2000), the test must engage the learner in an "extended act of communication either receptive, productive or both". In addition, these tests must have the distinguishing feature of communicative tests, i.e. they must attend to the social roles that candidates will take on in real-life settings. Therefore, test constructors must identify the skills and performance conditions that are components of language use in particular situations. For example, the aural channel of a test should reflect the interactive nature of typical spoken language. Furthermore, these properties should be woven together so that they result in meaningful communication like that which the testee will probably encounter in future language use. Finally, as mentioned in sections 2.2 and 2.3.3, the alternative scoring trend is an adjunct to communicative language tests; therefore, these tests must take a holistic, qualitative approach to assessing productive skills.
Regarding test content, Weir (1991) believes that communicative tests should possess the following characteristics:
- They should be authentic: the more authentic, the better. Test developers should use unsimplified language (genuine input); if simplification is needed, it should be applied to other dimensions of the test, e.g. the length of the text, the difficulty of the grammar rules, etc.
- The input of these tests must be unpredictable to test takers.
- Finally, the content must invoke creative output on the part of the learner.
The format of a communicative test does not necessarily follow a special rule; rather, it demands explicitness at the test design and evaluation stages. Many researchers believe that, like pragmatic tests, communicative tests must assess the integrative performance of language skills. Again, it is worth mentioning that communicative tests need not necessarily look different, though there must be a rationale for using a particular test format, e.g. cloze or multiple-choice (Weir, 1991). Lastly, test items must be as direct as possible.
The Prospect series seems to be based on such an approach.