ASSIGNMENT NO. 1
WRITTEN BY: MADIHA AFZAL
PROGRAMME: B.ED (1.5)
SEMESTER: 1st
Question No:1
What are the types of assessment? Differentiate between assessment for learning, assessment of learning and assessment as learning.
Answer:
Types of Assessment:
"As coach and facilitator, the teacher uses formative assessment to help support and enhance student learning. As judge and jury, the teacher makes summative judgments about a student's achievement..." (Atkin, Black & Coffey, 2001). Assessment is a purposeful activity that aims to facilitate students' learning and to improve the quality of instruction.
Based upon the
functions that it performs, assessment is generally divided into three types:
assessment for learning, assessment of learning and assessment as learning.
a) Assessment for Learning (Formative
Assessment)
Assessment for learning is a continuous, ongoing assessment that allows teachers to monitor students on a day-to-day basis and to modify their teaching based on what the students need to be successful. This assessment provides students with the timely, specific feedback that they need to enhance their learning. The essence of formative assessment is that the information it yields is used, on one hand, to make immediate instructional decisions and, on the other hand, to provide students with timely feedback that enables them to learn better. If the primary purpose of assessment is to support high-quality learning, then formative assessment ought to be understood as the most important assessment practice.
Assessment for learning has many unique characteristics. For example, this type of assessment is treated as "practice." Learners should not be graded for skills and concepts that have just been introduced; they should be given opportunities to practice.
Formative assessment helps teachers to determine next steps during the
learning process as the instruction approaches the summative assessment of
student learning. A good analogy for this is the road test that is required to
receive a driver's license. Before the final driving test, or summative assessment, the learner practices by being assessed again and again so that deficiencies in the skill can be identified. Another
distinctive characteristic of formative assessment is student involvement. If
students are not involved in the assessment process, formative assessment is
not practiced or implemented to its full effectiveness. One of the key components of engaging students in the assessment of their own learning is providing them with descriptive feedback; another is student record keeping, which also helps the teacher to assess beyond a "grade," to see where the learner started and the progress being made towards the learning goals.
b)
Assessment of Learning (Summative Assessment):
Summative assessment or assessment of learning is used to evaluate
students’ achievement at some point in time, generally at the end of a course.
The purpose of this assessment is to help the teacher, students and parents know how well a student has completed the learning task. In other words, summative evaluation is used to assign a grade to a student, which indicates his/her level of achievement in the course or program. Assessment of learning is basically designed to provide useful information about the performance of the learners rather than immediate and direct feedback to teachers and learners; therefore it usually has little effect on learning. Even so, high-quality summative information can help and guide the teacher to organize their courses and decide their teaching strategies, and on the basis of the information generated by summative assessment, educational programs can be modified.
Many experts believe that all forms of assessment have some formative
element. The difference only lies in the nature and the purpose for which
assessment is being conducted.
Question No:2
What do you know about the taxonomy of educational objectives? Write in detail.
Answer:
Following the
1948 Convention of the American Psychological Association, a group of college
examiners considered the need for a system of classifying educational goals for
the evaluation of student performance. Years later and as a result of this
effort, Benjamin Bloom formulated a classification of "the goals of the
educational process". Eventually, Bloom established a hierarchy of
educational objectives for categorizing the levels of abstraction of questions that commonly occur in educational settings (Bloom, 1956). This classification is generally referred to as Bloom's Taxonomy. Taxonomy means 'a set of classification principles', or 'structure'. The following are the six levels in this taxonomy: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. The detail is given below:
Cognitive
domain: The cognitive
domain (Bloom, 1956) involves the development of intellectual skills. This
includes the recall or recognition of specific facts, procedural patterns, and
concepts that serve in the development of intellectual abilities and skills.
There are six levels of this domain starting from the simplest cognitive
behaviour to the most complex. The levels can be thought of as degrees of difficulty. That is, the earlier ones must normally be mastered before the next ones can take place.
Affective
domain: The affective
domain is related to the manner in which we deal with things emotionally, such
as feelings, values, appreciation, enthusiasms, motivations, and attitudes. The
five levels of this domain include: receiving, responding, valuing,
organization, and characterizing by value.
Psychomotor
domain: Focus is on
physical and kinesthetic skills. The psychomotor domain includes physical
movement, coordination, and use of the motor-skill areas. Development of these
skills requires practice and is measured in terms of speed, precision, distance,
procedures, or techniques in execution. There are seven levels in this domain, from the simplest behaviour to the most complex: perception, set, guided response, mechanism, complex or overt response, adaptation, and origination.
Overall, Bloom's taxonomy is related to the three Hs of the education process: Head, Heart and Hand.
Writing Cognitive Domain Objectives:
Cognitive abilities in this taxonomy are arranged on a continuum ranging from the lower to the higher: Knowledge, Comprehension, Application, Analysis, Synthesis, Evaluation. An analogy depicting the taxonomy of learning objectives is assembling blocks to build a pyramid: the knowledge level creates the basis for the foundation from which the higher-level skills are built. When writing educational objectives, a teacher must know that a good objective uses a verb that clearly indicates the type of observable behaviour. The following table will not only help you to understand the levels of the cognitive domain but will also guide you on which action verbs can be used to state objectives at each particular level.
| Learning Objective / Level | Description | Action Verbs to be used to state objectives |
| --- | --- | --- |
| Knowledge | The first level of learning is knowledge. Knowledge can be characterized as awareness of specifics and of the ways and means of dealing with specifics. The knowledge level focuses on memory or recall, where the learner recognizes information, ideas and principles in the approximate form in which they were learned. | to arrange, to define, to describe, to identify, to list, to label, to name, to order, to recognize, to recall, to relate, to repeat, to reproduce, to state, to underline |
| Comprehension | Comprehension is the next level of learning and encompasses understanding. Has the knowledge been internalized or understood? The student should be able to translate, comprehend, or interpret information based on the knowledge. | to choose, to compare, to classify, to describe, to demonstrate, to determine, to discuss, to discriminate, to explain, to express, to identify, to indicate, to interpret, to label, to locate, to pick, to recognize, to relate, to report, to respond, to restate, to review, to select, to tell |
| Application | Application is the use of knowledge. Can the student use the knowledge in a new situation? It can also be the application of theory to solve a real-world problem. The student selects, transfers, and uses data and principles to solve a problem. | to apply, to classify, to demonstrate, to develop, to dramatize, to employ, to generalize, to illustrate, to interpret, to initiate, to operate, to organize, to practice, to relate, to restructure, to rewrite, to schedule, to sketch, to solve, to use, to utilize, to transfer |
| Analysis | Analysis involves taking apart a piece of knowledge, the investigation of the parts of a concept. It can only occur if the student has obtained knowledge of and comprehends a concept. The student examines, classifies, hypothesizes, collects data, and draws conclusions. | to analyze, to appraise, to calculate, to categorize, to compare, to conclude, to contrast, to criticize, to detect, to debate, to determine, to develop, to distinguish, to deduce, to diagram, to diagnose, to differentiate, to discriminate, to estimate, to examine, to evaluate, to experiment, to inventory, to inspect, to relate, to solve, to test, to question |
| Synthesis | Synthesis is the creative act. It is the taking of knowledge and the creation of something new. It is an inductive process, one of building rather than one of breaking down. The student originates, integrates, and combines ideas into something that is new to him/her. | to arrange, to assemble, to collect, to compose, to construct, to constitute, to create, to design, to develop, to devise, to document, to formulate, to manage, to modify, to originate, to organize, to plan, to prepare, to predict, to produce, to propose, to relate, to reconstruct, to set up, to specify, to synthesize, to systematize, to tell, to transmit |
| Evaluation | Evaluation is judgment or decision making. The student appraises, assesses or criticizes on the basis of specific standards and criteria. | to appraise, to argue, to assess, to attach, to choose, to contrast, to consider, to critique, to decide, to defend, to estimate, to evaluate, to judge, to measure, to predict, to rate, to revise, to score, to select, to support, to standardize, to validate, to value, to test |
Bloom's
Taxonomy underpins the classical 'Knowledge, Attitude, Skills' structure of
learning. It is such a simple, clear and effective model, both for explanation
and application of learning objectives, teaching and training methods, and
measurement of learning outcomes. Bloom's Taxonomy provides an excellent
structure for planning, designing, assessing and evaluating the teaching and learning process. The model also serves as a sort of checklist, by which you
can ensure that instruction is planned to deliver all the necessary development
for students.
Bloom's
Revised Taxonomy:
Bloom’s former students Lorin Anderson and David Krathwohl revised Bloom’s Taxonomy in the 1990s, and the revised taxonomy was published in 2001.
Key to this is the use of verbs rather than nouns for each of the categories
and a rearrangement of the sequence within the taxonomy. They are arranged
below in increasing order, from Lower Order Thinking Skills (LOTS) to Higher
Order Thinking Skills (HOTS).
Defining Learning Outcomes:
Learning
outcomes are the statements indicating what a student is expected to be able to
do as a result of a learning activity. The major difference between learning objectives and outcomes is that objectives focus on the instruction (what will be given to the students), whereas outcomes focus on the students (what behavioural change they are expected to show as a result of the instruction).
1. Different Definitions of Learning Outcomes:
Adam, 2004 defines
learning outcomes as: A learning outcome
is a written statement of what the successful student/learner is expected to be
able to do at the end of the module/course unit, or qualification. (Adam,
2004)
The Credit Common
Accord for Wales defines learning outcomes as: Statements of what a learner can
be expected to know, understand and/or do as a result of a learning experience.
Learning
Outcome:
An expression of what
a student will demonstrate on the successful completion of a module. Learning outcomes:
• are related to the level of the learning;
• indicate the intended gain in knowledge and skills that a typical student will achieve;
• should be capable of being assessed.
2. Difference between Learning Outcomes and Objectives:
Learning outcomes and objectives are often used synonymously, although they are not the same. In simple words, objectives are concerned with teaching and the teacher’s intentions, whereas learning outcomes are concerned with students’ learning. However, objectives and learning outcomes are usually written in similar terms. For further
detail check the following website.
http://www.qualityresearchinternational.com/glossary/learningoutcomes.htm
3. Importance of Learning Outcomes:
Learning
outcomes facilitate teachers more precisely to tell students what is expected
of them. Clearly stated learning outcomes:
• help students to learn more effectively.
They know where they stand and the curriculum is made more open to them.
• make it
clear what students can hope to gain from a particular course or lecture.
• help instructors select the appropriate
teaching strategy, for example lecture, seminar, student self-paced, or laboratory
class. It obviously makes sense to match the intended outcome to the teaching
strategy.
• help
instructors more precisely to tell their colleagues what a particular activity
is designed to achieve.
• assist in
setting examinations based on the content delivered.
• help in the selection of appropriate
assessment strategies.
4. SOLO
Taxonomy:
The SOLO taxonomy stands
for: Structure of Observed Learning
Outcomes
The SOLO taxonomy was developed by Biggs and Collis (1982) and is further explained by Biggs and Tang (2007). This taxonomy is used by Punjab for assessment. It describes levels of increasing complexity in a student's understanding of a subject, through five stages, and it is claimed to be applicable to any subject area. Not all students get through all five stages, of course, and indeed not all teaching is designed to take them that far.
1. Pre-structural: here students are simply acquiring bits of unconnected information, which have no organisation and make no sense.
2. Unistructural: simple and obvious connections are made, but their significance is not grasped.
3. Multistructural: a number of connections may be made, but the meta-connections between them are missed, as is their significance for the whole.
4. Relational: the student is now able to appreciate the significance of the parts in relation to the whole.
5. Extended abstract: the student is making connections not only within the given subject area, but also beyond it, and is able to generalise and transfer the principles and ideas underlying the specific instance.
Preparation of
Content Outline:
First you must understand what content is. In this regard, content refers to the major subject matter that will be included in a measuring device. For example, a test of General Science may include diagrams and pictures of different plants, insects, animals, or living and non-living things that constitute the test. A psychomotor test, such as conducting an experiment in the laboratory, might require setting up the apparatus for the experiment. For an effective measuring device, the content might consist of a series of statements for which the students choose the correct or best answer. Most tests taken by students are developed by teachers who are already teaching the subject for which they have to develop the test; therefore, selection of test content might not be a problem for them. Selection and preparation of content also depend on the type of decisions a teacher has to make about the students. If the purpose of a test is to evaluate the instruction, then the content of the test must reflect age appropriateness. If a test is made for making decisions regarding selection, then the content might be of a predictive nature; this type of test domain will provide information about how well the student will perform in the program. A teacher should know that the items selected for the test should come from the instructional material which the teacher has covered during teaching. You may have heard students react during an examination that 'the test was out of course'. This indicates that the teacher, while developing the test items, did not consider the content that was taught to the students; the items included in the test might not have been covered during the instruction period.
This implies that the content from which the test items are to be taken should be well defined and structured. Without setting the boundary of the knowledge, behaviour, or skills to be measured, the test development task will become difficult and complex, and as a result the assessment will produce unreliable results. Therefore a good test represents the taught content to the maximum extent; a test which is representative of the entire content domain is a good test.
Therefore it is imperative for a teacher to prepare an outline of the content that will be covered during the instruction. The next step is the
selection of subject matter and designing of instructional activities. All
these steps are guided by the objectives. One must consider objectives of the
unit before selection of content domain and subsequently designing of a test.
It is clear from the above discussion that the outline of the test content should be based on the following principles:
1. Purpose of
the test (diagnostic test, classification, placement, or job employment)
2.
Representative sample of the knowledge, behaviour, or skill domain being
measured.
3. Relevancy of the topic with the content of
the subject
4. Language of the content should be according
to the age and grade level of the students.
5. Developing a table of specification.
A test which meets the criteria stated in the above principles will provide reliable and valid information for correct decisions regarding the individual.
Now, keeping these principles in view, go on to the following activity.
Preparation of
Table of Specification:
It
has been discussed earlier that the educational objectives play a significant
role in the development of classroom tests.
The reason is that the preparation of a classroom test is closely related to the curriculum and educational objectives, and we have also explained that a test should measure what was taught. A way of ensuring that there is similarity between classroom instruction and test content is the development and application of a table of specification, which is also called a test blueprint. As the name implies, it specifies the content of a test. It is a two-way framework which ensures the congruence between classroom instruction and test content, and it is one of the most popular procedures used by test developers for defining the content domain. One dimension of the table reflects the content to be covered and the other dimension describes the kinds of student cognitive behaviour to be assessed. Table 2.1 provides the example of a table of specification; a small illustrative sketch is also given below.
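To make the two-way framework concrete, here is a minimal sketch in Python that builds and prints a hypothetical table of specification; the content areas, cognitive levels, and item counts are invented for illustration only and are not taken from Table 2.1.

```python
# Hypothetical table of specification (test blueprint): rows are content
# areas, columns are cognitive levels, cells are planned numbers of items.
content_areas = ["Plants", "Animals", "Matter and energy"]
levels = ["Knowledge", "Comprehension", "Application"]

# Planned item counts per (content area, level) -- illustrative values only.
blueprint = {
    "Plants":            {"Knowledge": 4, "Comprehension": 3, "Application": 2},
    "Animals":           {"Knowledge": 3, "Comprehension": 3, "Application": 2},
    "Matter and energy": {"Knowledge": 3, "Comprehension": 2, "Application": 3},
}

# Print the two-way framework with row and column totals.
print(f"{'Content area':<20}" + "".join(f"{lv:>15}" for lv in levels) + f"{'Total':>10}")
for area in content_areas:
    row = blueprint[area]
    print(f"{area:<20}" + "".join(f"{row[lv]:>15}" for lv in levels)
          + f"{sum(row.values()):>10}")

col_totals = [sum(blueprint[a][lv] for a in content_areas) for lv in levels]
print(f"{'Total':<20}" + "".join(f"{t:>15}" for t in col_totals)
      + f"{sum(col_totals):>10}")
```

In practice the cell values are chosen so that the weight given to each content area and each cognitive level mirrors the emphasis it received during instruction.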
Question No:3
How will you
define attitude? Elaborate its components.
Answer:
Attitude:
Attitude is a posture, action or disposition
of a figure or a statue. It has also been defined as a mental and neural state of readiness, organized
through experience, exerting a directive or dynamic influence upon the
individual's response to all objects and situations with which it is related.
Attitude is the state of mind with which you approach a task, a challenge, a
person, love, life in general. The definition of attitude is “a complex mental
state involving beliefs and feelings and values and dispositions to act in
certain ways”. These beliefs and feelings differ due to various interpretations of the same events by different people, and these differences occur due to inherited characteristics.
(i) Components of Attitude:
1. Cognitive Component: This refers to that part of attitude which is related to the general knowledge of a person; for example, a person holds the idea that smoking is injurious to health. Such an idea is called the cognitive component of attitude.
2. Affective Component: This part of attitude is related to statements which affect another person. For example, in an organization a report is given to the general manager in which it is pointed out that the sales staff are not performing their due responsibilities. The general manager forwards a written notice to the marketing manager to negotiate with the sales staff.
3. Behavioural Component: The behavioural component refers to that part of attitude which reflects the intention of a person in the short run or long run. For example, before producing and launching a product, a report is prepared by the production department which states the intentions for the near future and the long run, and this report is handed over to top management for a decision.
(ii) List of Attitudes:
In the broader sense of the word there are only three attitudes: a positive attitude, a negative attitude, and a neutral attitude. But in a general sense, an attitude is known by what it is expressed through. Given below is a list of attitudes that are expressed by people, and are more than the personality traits which you may have heard of, know of, or might even be carrying: • Acceptance • Confidence • Seriousness • Optimism • Interest • Cooperativeness • Happiness • Respectfulness • Authority • Sincerity • Honesty
Intelligence
Tests:
Intelligence involves the ability to think,
solve problems, analyze situations, and understand social values, customs, and
norms. Two main forms of intelligence are involved in most intelligence
assessments:
• Verbal Intelligence is the ability to
comprehend and solve language-based problems; and
• Nonverbal
Intelligence is the ability to understand and solve visual and spatial
problems. Intelligence is sometimes
referred to as intelligence quotient (IQ), cognitive functioning, intellectual
ability, aptitude, thinking skills and general ability.
Intelligence tests are psychological tests that are designed to measure a variety of mental functions, such as reasoning, comprehension, and judgment. An intelligence test is often defined as a measure of general mental ability. Of
the standardized intelligence tests, those developed by David Wechsler are
among those most widely used. Wechsler defined intelligence as “the global
capacity to act purposefully, to think rationally, and to deal effectively with
the environment.” While psychologists generally agree with this definition,
they don't agree on the operational definition of intelligence (that is, a
statement of the procedures to be used to precisely define the variable to be
measured) or how to accomplish its measurement.
The goal of intelligence tests is to obtain an idea of the person's
intellectual potential. The tests center around a set of stimuli designed to
yield a score based on the test maker's model of what makes up intelligence.
Intelligence tests are often given as a part of a battery of tests.
(i) Types of Intelligence Tests:
Intelligence tests (also called instruments) are published in several forms:
(a) Group
Intelligence tests usually consist of a paper test booklet and scanned scoring
sheets. Group achievement tests, which assess academic areas, sometimes include
a cognitive measure. In general, group tests are not recommended for the
purpose of identifying a child with a disability. In some cases, however, they
can be helpful as a screening measure to consider whether further testing is
needed and can provide good background information on a child's academic
history.
(b) Individual
intelligence tests may include several types of tasks and may involve easel
test books for pointing responses, puzzle and game-like tasks, and question and
answer sessions. Some tasks are timed.
(c)
Computerized tests are becoming more widely available, but as with all
tests, examiners must consider the needs of the child before choosing this
format.
(d) Verbal
tests evaluate your ability to spell words correctly, use correct grammar,
understand analogies and analyze detailed written information. Because they
depend on understanding the precise meaning of words, idioms and the structure of the language, they strongly favour native speakers of the language in which the test has been developed. If you speak English as a second
language, even if this is at a high standard, you will be significantly
disadvantaged in these tests. There are two distinct types of verbal ability
questions, those dealing with spelling, grammar and word meanings, and those
that try to measure your comprehension and reasoning abilities. Questions about
spelling, grammar and word meanings are speed tests in that they don’t require
very much reasoning ability. You either know the answer or you don’t.
(e) Non-verbal
tests are comprised of a variety of item types, including series completion,
codes and analogies. However, unlike verbal reasoning tests, none of the
question types requires learned knowledge for its solution. In an educational context, these tests are
typically used as an indication of a pupil’s ability to understand and
assimilate novel information independently of language skills. Scores on these
tests can indicate a pupil’s ability to learn new material in a wide range of
school subjects based on their current levels of functioning.
(ii) Advantages:
In general,
intelligence tests measure a wide variety of human behaviours better than any
other measure that has been developed. They allow professionals to have a uniform
way of comparing a person's performance with that of other people who are
similar in age. These tests also provide information on cultural and biological
differences among people. Intelligence tests are excellent predictors of academic achievement and provide an outline of a person's mental strengths and weaknesses. Many times the scores have revealed talents in many people, which have led to an improvement in their educational opportunities. Teachers, parents, and psychologists are able to devise individual curricula that match a person's level of development and expectations.
(iii) Disadvantages:
Some researchers argue that intelligence tests
have serious shortcomings. For example, many intelligence tests produce a
single intelligence score. This single score is often inadequate in explaining the multidimensional nature of intelligence. Another problem
with a single score is the fact that individuals with similar intelligence test
scores can vary greatly in their expression of these talents. It is important
to know the person's performance on the various subtests that make up the
overall intelligence test score. Knowing the performance on these various
scales can influence the understanding of a person's abilities and how these abilities
are expressed. For example, two people have identical scores on intelligence
tests. Although both people have the same test score, one person may have
obtained the score because of strong verbal skills while the other may have
obtained the score because of strong skills in perceiving and organizing
various tasks. Furthermore, intelligence
tests only measure a sample of behaviors or situations in which intelligent
behavior is revealed. For instance, some intelligence tests do not measure a
person's everyday functioning, social knowledge, mechanical skills, and/or
creativity. Along with this, the formats of many intelligence tests do not
capture the complexity and immediacy of real-life situations. Therefore,
intelligence tests have been criticized for their limited ability to predict
non-test or nonacademic intellectual abilities. Since intelligence test scores
can be influenced by a variety of different experiences and behaviors, they
should not be considered a perfect indicator of a person's intellectual
potential.
Personality
Tests
Your personality is what
makes you who you are. It's that organized set of unique traits and
characteristics that makes you different from every other person in the world.
Not only does your personality make you special, it makes you "you". Personality has been defined as "the particular pattern of behavior and thinking that prevails across time and contexts, and differentiates one person from another." The goal of
psychologists is to understand the causes of individual differences in
behavior. In order to do this one must firstly identify personality
characteristics (often called personality traits), and then determine the
variables that produce and control them. A personality trait is assumed to be
some enduring characteristic that is relatively constant as opposed to the
present temperament of that person which is not necessarily a stable
characteristic. Consequently, trait theories are specifically focused on
explaining the more permanent personality characteristics that differentiate
one individual from another. For example, things like being; dependable,
trustworthy, friendly, cheerful, etc.
A personality
test is completed to yield a description of an individual’s distinct
personality traits. In most instances,
your personality will influence relationships with your family, friends, and
classmates and contribute to your health and wellbeing. Teachers can administer
a personality test in class to help students discover their strengths and
developmental needs. The driving force
behind administering a personality test is to open up lines of communication
and bring students together to have a higher appreciation for one another. A personality test can provide guidance to teachers on which teaching strategies will be most effective for their students. Briefly, a personality test can benefit your students by:
• increasing productivity;
• helping classmates get along better;
• helping students realize their full potential;
• identifying teaching strategies for students;
• helping students appreciate other personality types.
(i)
Types of Personality Tests:
Personality tests are used to determine your
type of personality, your values, interests and your skills. They can be used
to simply assess what type of person you are or, more specifically, to
determine your aptitude for a certain type of occupation or career. There are many different types of personality
tests such as self-report inventory, Likert scale and projective tests.
(a)
Self-report Inventory
A self-report inventory is a type of
psychological test often used in personality assessment. This type of test is
often presented in a paper-and-pencil format or may even be administered on a
computer. A typical self-report inventory presents a number of questions or
statements that may or may not describe certain qualities or characteristics of
the test subject. Chances are good that you have taken a self-report inventory
at some time in the past. Such questionnaires are often seen in doctors’ offices,
in on-line personality tests and in market research surveys. This type of
survey can be used to look at your current behaviors, past behaviors and
possible behaviors in hypothetical situations.
(i) Strengths
and Weaknesses of Self-Report Inventories
Self-report inventories are often a good solution when researchers need to administer a large number of tests in a relatively short space of time. Many self-report inventories can be completed very quickly, often in as little as 15 minutes. This type of questionnaire is an affordable option for researchers faced with tight budgets. Another strength is that the results of self-report inventories are generally reliable and valid. Scoring of the tests is standardized and based on norms that have been previously established.
However, self-report inventories do have their weaknesses. For example, people are able to exercise deception while taking self-report tests (Anastasi
& Urbina, 1997). Another weakness is that some tests are very long and
tedious. For example, the MMPI takes approximately 3 hours to complete. In some
cases, test respondents may simply lose interest and not answer questions
accurately. Additionally, people are sometimes not the best judges of their own
behavior. Some individuals may try to hide their own feelings, thoughts and
attitudes.
(ii) Types of Self-Report Inventories:
• Myers-Briggs Inventory: first designed to help suit people's personalities to jobs; identifies a 'type' of person rather than 'traits' in people
• MMPI and MMPI-2: used to assess personality and mental health
• 16 Personality Factor Questionnaire: identifies a person’s traits
• The Big Five: identifies where a person sits on a scale of five traits
(b) Likert
Scale
A Likert Scale is a type of
psychometric scale frequently used in psychology questionnaires. It was
developed by and named after organizational psychologist Rensis Likert. A
Likert item is simply a statement which the respondent is asked to evaluate
according to any kind of subjective or objective criteria; generally the level
of agreement or disagreement is measured. It is considered symmetric or
"balanced" because there are equal amounts of positive and negative
positions. Often five ordered response
levels are used, although many psychometricians advocate using seven or nine
levels. The format of a typical five-level Likert item, for example, could be:
1. Strongly
disagree
2. Disagree
3. Uncertain
4. Agree
5. Strongly agree
Likert scaling is a bipolar scaling
method, measuring either positive or negative response to a statement.
Sometimes an even-point scale is used, where the middle option of "Neither
agree nor disagree" is not available. This is sometimes called a "forced
choice" method, since the neutral option is removed. The neutral option
can be seen as an easy option to take when a respondent is unsure, and so
whether it is a true neutral option is questionable. It has been shown that
when comparing between a 4-point and a 5-point Likert scale, where the former
has the neutral option unavailable, the overall difference in the response is
negligible.
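As a small, hedged illustration (not part of the unit text), the Python sketch below scores a short, hypothetical five-level Likert questionnaire, including one negatively worded item that is reverse-scored; the item wordings, responses, and scoring rule are assumptions made for the example.

```python
# Hypothetical five-point Likert scoring: 1 = Strongly disagree ... 5 = Strongly agree.
# One negatively worded item is reverse-scored before the responses are summed.

items = [
    {"text": "I enjoy working in groups.",     "reverse": False},
    {"text": "I find group work frustrating.", "reverse": True},
    {"text": "Group tasks help me learn.",     "reverse": False},
]

# One respondent's answers on the 1-5 scale (illustrative values only).
responses = [4, 2, 5]

def score(item, answer):
    # Reverse-scored items are flipped so that a higher score always means a
    # more positive attitude (5 -> 1, 4 -> 2, ..., 1 -> 5).
    return 6 - answer if item["reverse"] else answer

scores = [score(item, ans) for item, ans in zip(items, responses)]
print("Item scores:", scores)                  # [4, 4, 5]
print("Attitude score:", sum(scores), "out of", 5 * len(items))
```

Reverse-scoring the negatively worded item keeps all item scores pointing in the same direction, which is what makes the symmetric 1-to-5 coding usable as a simple summed attitude measure.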
(c) Projective
tests
A projective test is
a personality test designed to let a person respond to ambiguous stimuli,
presumably revealing hidden emotions and internal conflicts. In psychology, a
projective test is a type of personality test in which the individual offers
responses to ambiguous scenes, words or images. This type of test emerged from
the psychoanalytic school of thought, which suggested that people have
unconscious thoughts or urges. These projective tests were intended to uncover
such unconscious desires that are hidden from conscious awareness.
(i) How Do Projective Tests Work?
In many projective tests, the participant is shown an ambiguous image
and then asked to give the first response that comes to mind. The key to
projective tests is the ambiguity of the stimuli. According to the theory
behind such tests, clearly defined questions result in answers that are
carefully crafted by the conscious mind. By providing the participant with a
question or stimulus that is not clear, the underlying and unconscious
motivations or attitudes are revealed.
(ii) Types
of Projective Tests
There are a
number of different types of projective tests. The following are just a few
examples of some of the best-known projective tests.
(a) The Rorschach Inkblot Test
The Rorschach Inkblot
was one of the first projective tests and continues to be one of the
best-known. Developed by Swiss psychiatrist Hermann Rorschach in 1921, the test
consists of 10 different cards that depict an ambiguous inkblot. The
participant is shown one card at a time and asked to describe what he or she
sees in the image. The responses are recorded verbatim by the tester. Gestures,
tone of voice and other reactions are also noted. The results of the test can
vary depending on which of the many existing scoring systems the examiner
uses.
(b) The
Thematic Apperception Test (TAT)
In the Thematic
Apperception Test, an individual is asked to look at a series of ambiguous
scenes. The participant is then asked to tell a story describing the scene,
including what is happening, how the characters are feeling and how the story
will end. The examiner then scores the test based on the needs, motivations and
anxieties of the main character as well as how the story eventually turns out.
(iii) Strengths and Weaknesses of Projective Tests
• Projective
tests are most frequently used in therapeutic settings. In many cases,
therapists use these tests to learn qualitative information about a client.
Some therapists may use projective tests as a sort of icebreaker to encourage
the client to discuss issues or examine thoughts and emotions.
• While projective tests have some benefits, they also have a number of weaknesses and limitations. For example, the respondent's answers can be heavily influenced by the examiner's attitudes or the test setting. Scoring projective tests is also highly subjective, so interpretations of answers can vary dramatically from one examiner to the next.
Question No: 4
What are the types of test questions? Also write their advantages and disadvantages.
Answer:
This method is quite important. Through questions, an attempt is made to ascertain and evaluate the knowledge of students in regard to the subject. This method ensures participation. The teacher should ask questions and the students should be encouraged to ask questions.
About this
method a renowned author says, "If the teacher does not know the answer, he should admit it and either ask the students to find it in the textbook or offer to find out the answer himself. No teacher can answer all the questions that can be asked with a yes or no. The students should be asked such questions which compel them to think the matter over. If a student cannot answer the question fully, his partial answer should be accepted and another student may be asked to improve upon it. The teacher himself should be in the regular habit of reading the latest texts, and students should also be asked to find answers in authoritative texts".
In this method
the teacher controls the situation. Generally, an informal lesson is developed by means of the question-answer method.
Advantages of Question-Answer Method:
(i) It can be
used in all teaching situations.
(ii) It helps
in developing the power of expression of the students.
(iii) It is
helpful to ascertain the personal difficulties of the students.
(iv) It provides a check on preparation of
assignments.
(v) It can be
used to reflect student’s background and attitude.
(vi) It is
quite handy to the teacher when no other suitable teaching method is available.
Disadvantages:
(i) It requires a lot of skill on the part of
teacher to make a proper use of this method.
(ii) It may sometimes mar the atmosphere of the class.
(iii) This method is generally quite embarrassing for timid students.
(iv) It is time-consuming.
It’s good to regularly review the advantages and disadvantages of the
most commonly used test questions and the test banks that now frequently
provide them.
1.
MULTIPLE-CHOICE QUESTIONS
Advantages
·
Quick and easy to score, by hand or electronically
·
Can be written so that they test a wide range of higher-order thinking
skills
·
Can cover lots of content areas on a single exam and still be answered in
a class period
Disadvantages
·
Often test literacy skills: “if the student reads the question carefully,
the answer is easy to recognize even if the student knows little about the
subject” (p. 194)
·
Provide unprepared students the opportunity to guess, and with guesses
that are right, they get credit for things they don’t know
·
Expose students to misinformation that can influence subsequent thinking
about the content
·
Take time and skill to construct (especially good questions)
2.
TRUE-FALSE QUESTIONS
Advantages
·
Quick and easy to score
Disadvantages
·
Considered to be “one of the most unreliable forms of assessment” (p.
195)
·
Often written so that most of the statement is true save one small, often
trivial bit of information that then makes the whole statement untrue
·
Encourage guessing, and reward correct guesses
3.
SHORT-ANSWER QUESTIONS
Advantages
·
Quick and easy to grade
·
Quick and easy to write
Disadvantages
·
Encourage students to memorize terms and details, so that their understanding
of the content remains superficial
4.
ESSAY QUESTIONS
Advantages
·
Offer students an opportunity to demonstrate knowledge, skills, and
abilities in a variety of ways
·
Can be used to develop student writing skills, particularly the ability
to formulate arguments supported with reasoning and evidence
Disadvantages
·
Require extensive time to grade
·
Encourage use of subjective criteria when assessing answers
·
If used in class, necessitate quick composition without time for planning
or revision, which can result in poor-quality writing.
5.
QUESTIONS PROVIDED BY TEST BANKS
Advantages
·
Save instructors the time and energy involved in writing test questions
·
Use the terms and methods that are used in the book
Disadvantages
·
Rarely involve analysis, synthesis, application, or evaluation
(cross-discipline research documents that approximately 85 percent of the
questions in test banks test recall)
·
Limit the scope of the exam to text content; if used extensively, may
lead students to conclude that the material covered in class is unimportant and
irrelevant
We tend to think that
these are the only test question options, but there are some interesting
variations. The article that prompted this review proposes one: Start with a
question, and revise it until it can be answered with one word or a short
phrase. Do not list any answer options for that single question, but attach to
the exam an alphabetized list of answers. Students select answers from that
list. Some of the answers provided may be used more than once, some may not be
used, and there are more answers listed than questions. It’s a ratcheted-up
version of matching. The approach makes the test more challenging and decreases
the chance of getting an answer correct by guessing.
Remember, students do
need to be introduced to any new or altered question format before they
encounter it on an exam.
Question No: 5
Construct a test, administer it and ensure its reliability.
Answer:
What does the term reliability mean? Reliability means trustworthiness. A test score is called reliable when we have reasons to believe that the score is stable and objective. For example, if the same test is given to two classes and is marked by different teachers, and it still produces similar results, it may be considered reliable. Stability and trustworthiness depend upon the degree to which the score is free of chance error. There are several types of reliability:
i) Inter-Rater or Inter-Observer Reliability: to assess the degree to which different raters/observers give consistent estimates of the same phenomenon. That is, if two teachers mark the same test and the results are similar, this indicates inter-rater or inter-observer reliability.
ii) Test-Retest Reliability: to assess the consistency of a measure from one time to another. When the same test is administered twice and the results of both administrations are similar, this constitutes test-retest reliability. The fact that students may remember the items, or may mature after the first administration, creates a problem for test-retest reliability.
iii) Parallel-Form Reliability: to assess the consistency of the results of two tests constructed in the same way from the same content domain. Here the test designer tries to develop two tests of a similar kind; if, after administration, the results are similar, this indicates parallel-form reliability.
iv) Internal Consistency Reliability: to assess the consistency of results across items within a test; it is the correlation of the individual item scores with the entire test.
v) Split-Half Reliability: to assess the consistency of results by comparing two halves of a single test; these halves may be the even and odd items of the test (a brief computational sketch of this coefficient follows the list).
vi) Kuder-Richardson Reliability: to assess the consistency of the results using all the possible split halves of a test.
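As a brief computational sketch of the split-half approach mentioned above (the six-item test and the score data are invented for illustration), the Python snippet correlates odd-item and even-item half scores and then applies the Spearman-Brown correction.

```python
# Split-half reliability sketch: correlate odd-item and even-item half scores,
# then apply the Spearman-Brown correction for full test length.
from statistics import correlation  # Python 3.10+

# Each row is one student's scores (1 = correct, 0 = wrong) on a 6-item test.
scores = [
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 0],
]

odd_halves  = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_halves = [sum(row[1::2]) for row in scores]   # items 2, 4, 6

r_half = correlation(odd_halves, even_halves)      # half-test correlation
r_full = (2 * r_half) / (1 + r_half)               # Spearman-Brown correction

print(f"Half-test correlation: {r_half:.2f}")
print(f"Estimated full-test reliability: {r_full:.2f}")
```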
Let's discuss each of these in turn.
Inter-Rater or Inter-Observer Reliability:
Whenever we observe the behaviour or activities of humans, we have to think about a procedure that will give reliable and consistent results. For this, two or more observers are assigned to observe the students or teachers. So how do we determine whether two observers are being consistent in their observations? We should establish inter-rater reliability by considering the similarity of the scores awarded by the two observers. After all, if we use data to establish reliability and we find that reliability is low, we have to focus upon the criteria established for the observation. If the criteria are first tried out in the actual situation, this may help to develop reasonable criteria for the observation and make it more objective.

There are two major ways to estimate inter-rater reliability. If your measurement consists of categories -- the raters are checking off which category each observation falls in -- you can calculate the percent of agreement between the raters. For instance, let's say you had 100 observations that were being rated by two raters. For each observation, the rater could check one of three categories. Imagine that on 86 of the 100 observations the raters checked the same category. In this case, the percent of agreement would be 86%. It is a crude measure, but it does give an idea of how much agreement exists, and it works no matter how many categories are used for each observation.

The other major way to estimate inter-rater reliability is appropriate when the measure is a continuous one. There, all you need to do is calculate the correlation between the ratings of the two observers. For instance, they might be rating the overall level of activity in a classroom on a 1-to-7 scale. You could have them give their rating at regular time intervals (e.g., every 30 seconds). The correlation between these ratings would give you an estimate of the reliability or consistency between the raters. One might think of this type of reliability as "calibrating" the observers.

There are other things one could do to encourage reliability between observers, even without estimating it. For instance, consider a psychiatric unit where every morning a nurse had to do a ten-item rating of each patient on the unit. Of course, it is difficult to count on the same nurse being present every day, so there is a need to find a way to assure that any of the nurses would give comparable ratings. One way to do this is to hold weekly "calibration" meetings where all of the nurses' ratings for several patients are compared and the nurses discuss why they chose the specific values they did. If there were disagreements, the nurses would discuss them and attempt to come up with rules for deciding when they would give a "3" or a "4" for a rating on a specific item. Although this is not an estimate of reliability, it probably goes a long way towards improving the reliability between raters.
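To mirror the two approaches just described, here is a minimal Python sketch that computes percent agreement for categorical ratings and a correlation for continuous ratings; all rating data are invented for illustration.

```python
# Two ways of estimating inter-rater reliability, as described above.
from statistics import correlation  # Python 3.10+

# 1) Percent agreement for categorical ratings: each rater assigns one of
#    three categories ("A", "B", "C") to the same ten observations.
rater1 = ["A", "B", "B", "C", "A", "A", "C", "B", "A", "C"]
rater2 = ["A", "B", "C", "C", "A", "B", "C", "B", "A", "C"]

agreements = sum(1 for x, y in zip(rater1, rater2) if x == y)
percent_agreement = 100 * agreements / len(rater1)
print(f"Percent agreement: {percent_agreement:.0f}%")   # 80%

# 2) Correlation for continuous ratings: both observers rate classroom
#    activity on a 1-to-7 scale at regular time intervals.
observer1 = [3, 5, 4, 6, 2, 7, 5, 4]
observer2 = [3, 6, 4, 5, 2, 7, 6, 4]

r = correlation(observer1, observer2)
print(f"Inter-rater correlation: {r:.2f}")
```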