Useful Computer-Adaptive Testing Computer Program...

UCAT

Computer-Adaptive Testing (CAT) by Microcomputer

You may download the self-extracting, self-installing file ucatzip.exe (size: 130K; Microsoft Professional BASIC, 1999). Then run "ucatzip.exe".

Note: Memo 40 was written at a time when the CAT experts were saying "CAT computer programs have to be big and complicated!" So I wrote a program that did more than theirs in only a few pages of code. BASIC is an excellent language for this type of project. Today I would use Visual Basic, which supports graphics very well, or, if the CAT program were to run on webpages, Java. Only a small change to Memo 40 is needed for rating scales, but it does require a decision. Most CAT methods are based on "maximum information" - with rating scales these target respondents at the center of the rating scale. Essentially, the central rating-scale item difficulty substitutes for the dichotomous item difficulty as far as item selection is concerned, and www.rasch.org/rmt/rmt122q.htm is used for the estimation. I prefer a CAT method based on "maximum falsification" - the tests are slightly longer, but yield a greater variety of expected responses. It is more complicated, so start with the "maximum information" method, and if respondents say "this test is too bland", switch to "maximum falsification": www.rasch.org/rmt/rmt91d.htm

Selecting and asking targeted questions to measure ability quickly.

Taking tests is a necessary, but often nasty, part of education. You, or any other test-taker, will probably find being tested a more agreeable experience if the testing session is short and the questions you are asked are targeted at your own level of ability. The computer can present this kind of test. It can quickly find out your level of knowledge by selecting, asking and scoring appropriate questions that create a personalized test adapted to your performance.

Here is a program, in Microsoft BASIC, which determines your competence by choosing appropriate questions to ask you from a disk file of multiple-choice questions about a subject. When you give a right answer, the computer raises its estimate of your ability. If you give a wrong answer, its estimate of your ability is lowered. The program selects the next question to give you based on that estimate. When the estimate is accurate enough, testing stops, and you are told how you scored.

How do we measure ability?

Mike Linacre and Ben Wright discuss CAT, ca. 1990

Our first problem is how to give a numerical value to your ability. What number corresponds to easy? What number means high ability? Measuring your ability seems quite different from measuring your height. You can find your height by using a tape measure, marked out in inches, to measure from the floor to the top of your head. We can do the same sort of thing for educational tests, but with an important difference. We cannot measure ability directly as we can height, so we have to estimate it indirectly. We do this by observing ability in action: answering test questions right or wrong. A geography test is a method of allowing you to show your ability in geography in a way we can observe. Your answers become the means by which we measure your knowledge of geography. A computer can count up the number of correct answers and give that as your score, which is a rough indication of your ability.

How can we score an adaptive test?

When everybody is asked a different selection of questions, we can no longer make comparisons using a count of right answers alone. You may be asked the 20 easiest questions or the 10 hardest. A score of 5 on a hard test could be better than 15 on an easy test. To get round this problem, each question is given a numerical rating of its difficulty. Then the computer can give you a numerical estimate of your ability based on the difficulties of the questions you got right and wrong. You can directly compare this estimate with the ability estimates of others taking the test, no matter which questions you were asked.

You may wonder why we find an ability "estimate". In fact, no test (computerized or pencil-and-paper) can exactly determine your ability. A test can only give us a good indication of what your ability really is, in other words, an estimate.

Turning answers into measurements

Take a closer look at a geography test. We know some people are good at geography, and some are weak at it. Some geography questions are hard and some are easy. So let's imagine a vertical ruler whose scale measures knowledge of geography. Along this ruler we would like to arrange the people so that those with a poor knowledge are at the bottom, and those with a strong knowledge are at the top. Easy questions about geography, which just about everyone gets right, also belong at the low end of the ruler, while hard questions, which just about everyone gets wrong, belong at the top.

What will probably happen when we give a geography test? If a knowledgeable person (shown as tall by our ruler) takes an easy item (shown as short), he will very likely get it right. If a person with little geography knowledge (shown as short) takes a difficult question (shown as tall), he will very likely get it wrong. If someone of intermediate ability in geography (mid-size on our ruler) tries a question of intermediate difficulty (mid-size too), the result will be a toss-up. Of course we cannot be sure how anyone will answer any question; we can only talk about what will probably happen.

Georg Rasch, a Danish mathematician, imagined this scale in the 1950s. The major challenge was to make it independent of which people took which questions (Rasch, 1980). For adaptive testing, we too must have a scale that is unaffected by the fact that everyone is answering a different selection of questions from our question file.

Rasch discovered he could make a practical scale by using as his "inch" a unit of measurement based on the logarithm of the odds of someone getting the correct answer to a particular question. Let's say we know that a particular person has a 75% chance of succeeding on a question about state capitals. This 75% probability of success means he has 3 chances of succeeding to 1 of failing, so the scale value is the natural logarithm of 3/1: loge(3) = 1.1 logarithmic units ("logits"). A 50% chance of success (and so a 50% chance of failure) is 1 chance of succeeding to 1 of failing, giving a scale value of loge(1/1) = loge(1) = 0. Numbers like 1.1 are often difficult to think about, so in this program we multiply these values by 9.1024 and then add 100, so that 1.1 becomes a scaled score of 110 units, and 0, the midpoint between success and failure, becomes 100 units.
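These two conversions can be sketched in Python (this mirrors the program's FNL/FNU functions described later, but the function names here are mine):

```python
import math

SCALE = 9.1024  # report units per logit, as used by the program

def to_units(logit):
    """Logits (internal scale, centered on 0) -> report units (centered on 100)."""
    return 100 + SCALE * logit

def to_logits(units):
    """Report units -> logits."""
    return (units - 100) / SCALE

# 75% chance of success = odds of 3:1 = ln(3) ~ 1.1 logits ~ 110 units
# 50% chance of success = odds of 1:1 = ln(1) = 0 logits  = 100 units
```

For example, to_units(math.log(3)) gives approximately 110, and to_units(0.0) gives exactly 100.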

When we look at test results overall, we can work out the measurement of every question in the test, and of everyone who has taken it, along the log-odds scale, by discovering the numerical values which most closely reflect what actually happened.

How the program uses the measurement scale

If the computer program knew your unit measurement, and the unit measurement of the question you are about to answer, it could calculate your chance of success. To start with, it does not know your measurement, so it makes a guess, and asks you a question of about that difficulty, a question for which you have apparently about a 50% chance of success. If you get it right, the program increases its estimate of your measurement, but if you get the question wrong the program decreases your measurement. Then you are asked another question targeted on your new ability measurement. Your final measurement is the place on the scale where you would probably succeed on the questions below you and fail on the questions above you. The approach used in this program is based on developments of the Rasch theory by Professor Benjamin Wright (Wright & Stone 1979; Wright & Masters 1982).
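As a toy illustration of this cycle, here is a minimal Python sketch. It is my own simplification: the respondent is a deterministic stand-in, and the estimate moves by a shrinking up/down step rather than by the program's actual estimation rule, which is described in detail later in this article.

```python
def adaptive_sketch(bank, true_ability, max_items=10):
    """Toy adaptive test: ask the unasked item nearest the current estimate
    (about a 50% chance of success), then nudge the estimate up after a right
    answer and down after a wrong one. All values are in logits."""
    estimate = 0.0
    asked = set()
    for n in range(1, max_items + 1):
        # pick the unasked question closest to the current ability estimate
        item = min((i for i in range(len(bank)) if i not in asked),
                   key=lambda i: abs(bank[i] - estimate))
        asked.add(item)
        # deterministic stand-in respondent: right iff the item is below them
        right = true_ability > bank[item]
        estimate += (1.0 / n) if right else -(1.0 / n)
    return estimate
```

Run against an item bank spanning -2 to +2 logits, the estimate walks toward the stand-in respondent's true ability, overshooting less with each additional question.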

What taking a test looks like

Figure 1 shows a multiple choice question as it appears on your computer screen. The text for this screen was read from the question file. The computer selects which questions to give you. Each question has a one-line question text, and five alternative answers. You press the number on your keyboard matching your chosen answer. The computer then asks another question, until it has measured your ability accurately enough.

At the end of the test session you get a summary report like figure 2. Each line shows the identifying number of a question you took, its difficulty measurement, the answer you selected, and whether the answer was right or wrong. If your answer was quite contradictory to your estimated ability, because either you got a very easy question wrong or a very hard question right, you would see the word "SURPRISINGLY" in front of "RIGHT" or "WRONG". "SURPRISINGLY" can indicate many things: lucky guesses, careless slips, special knowledge or even mistakes in writing the questions.

What the computer is doing during the test

On starting the test, the program assumes your measurement to be near the mid-range of ability, 100 units. A question of around 100 units is asked. While you are reading the question, the computer calculates what your ability estimate would be if you got the question wrong, and also your ability if you got it right. It then finds a question between these estimates which will be the next question it asks you. Meanwhile, you finish reading the first question, and key in the number corresponding to your choice of the correct answer. The computer checks whether this answer is correct or incorrect, and updates your ability estimate with one of the new estimates it has already calculated. It immediately displays the question already chosen to be next. You start reading this question and the computer sets about calculating possible ability levels and choosing the best question to give you next. You key in your answer once more. This process continues until the computer has calculated your ability sufficiently accurately.

At the end of the test, the computer displays a summary report of how you did. It also adds this report to the test history file on disk, which is used for re-estimating the difficulty of test items. For everyone who takes your test, the computer records name, estimated ability, each question asked, answer chosen, and whether it was correct. The computer also reports whether this particular response is much as expected, right or wrong, or surprisingly right (perhaps a lucky guess), or surprisingly wrong (perhaps a careless error).

Constructing the file of questions

Frequently, the hardest part of the testing process is making up the questions and alternative answers. You will have to do this and type them into a file on your computer. Figure 3 shows the first few questions in my file of geography questions, which I typed in using my text editor. You can use the non-document mode of a word-processor or whatever other means you have available. You can build files for whatever topics you wish, but each file should contain questions for only one area, such as geography or math, so that you are measuring ability in only one area at a time. When you make up your questions, follow the layout in Figure 3. Each question has 10 lines. The first line is a question-identifying number for your own reference. Numbers must be in ascending order, but not every number has to be used, so that you can add or delete questions from your question file as you develop your test. The second line is the question, which can be up to 250 characters long but must all be typed as one line (no Return or "soft-return" codes). The third through seventh lines each hold one of the five alternative answers, one of which must be the correct one; again, each answer can be up to 250 characters long. I'm sure you've taken plenty of multiple-choice tests, so you'll know better than to have two correct answers, no correct answer, or answers that do not fit the grammar of the question. On the eighth line, enter the number of the correct answer: 1, 2, 3, 4 or 5. "1" means the first of the five alternatives is the correct answer. If you get this wrong yourself, the program will report a lot of surprisingly wrong answers when competent test-takers consistently fail to select your incorrectly specified "right" answer.

On the ninth line, you have to make an educated guess. One problem we face with a new test is that we don't know how hard the questions are. But, after a few people have taken your test, the computer can estimate the questions' difficulties. The trouble is we need some sort of value to start with, so, the ninth line of each question contains your initial estimate of the difficulty of the question you have just written. This will be a number in the range of 1 to 200 with 100 being "average" difficulty. If you really have no idea of a particular item's difficulty, you can always enter it as 100. The computer may alter this substantially later, when you ask it to re-estimate item difficulties.

Finally, leave the tenth line blank, and then key in the next question.

Put as many questions as you like in your file of test questions, up to the limit of 51 I have allowed in the program. Twenty questions is a good starting point. You can add more questions at any time, but when you add to your test, or make other changes, use new numbers, so that the program does not get confused between an old question, now deleted or changed, and a new question which happens to have the same identifying number.
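The 10-line record layout above can be checked mechanically before you run a test. Here is a hedged Python sketch of a reader for a freshly typed question file (before re-estimation prepends extra values to the ninth line); all names are mine, not the program's:

```python
def parse_questions(lines):
    """Parse the 10-line question records described above.
    Raises ValueError on a malformed record."""
    questions, last_id = [], 0
    for start in range(0, len(lines) - 8, 10):
        rec = lines[start:start + 10]
        qid = int(rec[0])                 # line 1: identifying number
        if qid <= last_id:
            raise ValueError(f"identifier {qid} is not in ascending order")
        correct = int(rec[7])             # line 8: number of the correct answer
        if not 1 <= correct <= 5:
            raise ValueError(f"question {qid}: correct answer must be 1-5")
        difficulty = int(rec[8])          # line 9: initial difficulty estimate
        if not 1 <= difficulty <= 200:
            raise ValueError(f"question {qid}: difficulty must be 1-200")
        questions.append({"id": qid, "text": rec[1], "answers": rec[2:7],
                          "correct": correct, "difficulty": difficulty})
        last_id = qid
    return questions

def read_question_file(path):
    with open(path) as f:
        return parse_questions([line.rstrip("\n") for line in f])
```

Running this over your file before testing catches out-of-order identifiers, bad answer numbers, and out-of-range difficulties in one pass.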

Improving ability measurements - re-estimating question difficulties

You can review the test history file on disk using your text editor, or "type" it to the screen, or "print" it on your computer printer. An example is shown in figure 4. It contains a complete log of each testing session.

After several people have taken your test, you can re-estimate the difficulty levels of the questions and also re-estimate the previous test-takers' abilities, so that they more closely correspond with the way your test is behaving overall. This is done in much the same way we estimated your ability when you took the test. Once new difficulty and ability estimates have been calculated, they are included in the question and test history files and written out to disk. Figure 5 shows how the new question difficulty and its range are included before the previous difficulty estimate in the question file. Figure 6 shows how the revised ability estimate and range are included before the previous ability and range in the test history file.

Detailed explanation of the BASIC program

Now let's see how our program uses the question file and conducts a test. You can follow along in the BASIC listing in figure 8, if you like. I have included the name and purpose of each BASIC variable in figure 7. For those of you keying this program into your computer, you may leave out all program lines which contain only comments, and all comments at the end of lines. Comments start with a '. Keep the same line numbers as I have for those lines you do type in.

In lines 30 and 40, you can see that the arrays are subscripted to allow for a maximum of 51 questions and 51 test-takers. You can allow more questions and people by changing all the 50's to some larger number.

In line 50, the random number generator is initialized. This program uses random numbers to give everyone a different series of questions which meet the test requirements.

The next line contains the function definitions (FNU and FNL in figure 7) that convert between report units (centered on 100) and logits, the internal unit, centered on 0.

The accuracy with which we can calculate an ability estimate is known, statistically, as the standard error of estimation. It indicates the size of the zone above and below your ability estimate in which your true ability probably lies. All measuring techniques have some sort of standard error associated with them, but it is usually not reported, and so measurements are frequently thought to be more exact than they really are. In this program, the maximum standard error is set, in line 80, at about 7 units (.6 logits), so that if your ability is reported as 100 units, your true ability is probably between 93 and 107. (This program ignores the standard error of the item difficulties, which could increase the size of the probable zone by 20%.)

You then input, in line 100, the name of your file of questions. In a subroutine starting at line 860, the 10 lines of text corresponding to each question are loaded into the question text array. A check is made to ensure that the indicated correct answer on line 8 is in the range, 1 to 5, of valid options. The question difficulty on line 9 is checked to be between 1 and 200 units and converted from external units to logits. If an error is found, the program stops and reports the approximate line number where the error was found. After successfully reading the file we return to the main program.

In line 110, you are asked the name of a file in which to store the test history. These should all be accumulated in the same file so that it can be used for test refinement later. You are then given, in line 120, the option to conduct a test. After that, in line 140, you may also re-estimate the question difficulties. (You may do this as often as you like, but it only makes sense to do it after testing a number of people). If you wish to do neither, the testing session is finished.

You can use this program for giving other tests, by specifying different question and test history files.

Test Session Program Logic

A testing session starts, in line 180. You enter your name, followed by the Enter (Return) key. You always need to press the Enter (Return) key in order for the computer to accept your response.

In line 200, the computer randomly assigns a lower limit to your ability in the range 91-95 units (-1.0 to -0.5 logits). The program works in logits internally to simplify the math. The initial upper limit to your ability estimate is put 9 units (1 logit) higher. The initial standard error of estimation, the accuracy estimate, is set to the desired accuracy level.

The computer then flags all questions as unasked in line 220, initializes the counters of questions asked and your score so far, and selects the first question. The question selection subroutine, in line 460, sets the selected question to zero. If our current standard error is better than the required accuracy, or we have asked all the questions, we immediately return without selecting another question. The question file is in its original order, so that in order not to give the same sequence of questions, we use the random number generator to give us the position in the question array from which to start looking for a question that meets our needs. We want a question that we have not asked, and whose difficulty is between our higher and lower ability estimates. This means the question is of a difficulty appropriate to our estimate of your ability. When we find a suitable question, we return. If we have been through the question array without finding one, we ask the last question we found that has not been asked.
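In outline, the selection logic might look like this in Python (an approximation of the BASIC subroutine, not its exact code):

```python
import random

def select_question(difficulties, asked, low, high):
    """Scan from a random start for an unasked question whose difficulty
    lies between the low and high ability estimates; if none qualifies,
    fall back to the last unasked question seen. Returns an index, or
    None when every question has already been asked."""
    n = len(difficulties)
    start = random.randrange(n)
    fallback = None
    for step in range(n):
        q = (start + step) % n
        if asked[q]:
            continue
        if low <= difficulties[q] <= high:
            return q          # well-targeted and not yet asked
        fallback = q          # remember the last unasked question
    return fallback
```

Starting the scan at a random position is what gives each test-taker a different sequence of questions from the same file.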

More sophisticated techniques can be used for question selection to increase efficiency and also to check for knowledge gaps and special knowledge. The technique here is intended to minimize code and select the next question quickly.

If a question has been found, which should always happen the first time, we go to the subroutine at line 330. This displays the question and its possible answers on the screen. Though each line of text in the question file (the question itself and the possible answers) occupies only one line, these lines can be up to 250 characters long. The display routine at line 313 will split them into several lines on the screen. The subroutine also updates the various arrays to show this question has been asked, but does not wait for your answer. In line 380, the computer calculates an expected score based on the previous ability estimate and the difficulty of the items encountered so far, including the one just presented. The expected score is the sum of the probabilities of success on the questions. Thus, if your ability estimate is the same as a question's difficulty, your probability of success is the same as your probability of failure, that is 50% or .5, so for this question your expected score is .5. For the test so far, we add all these probabilities together to get your expected score so far. We also get an idea of how accurately we know your ability estimate by calculating its variance, which is the sum of the product of the probability of success and the probability of failure on each question. The standard error of estimation is the reciprocal of the square root of that variance.
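These calculations can be sketched in Python (the success probability here is the Rasch model's 1/(1 + e^(difficulty - ability)), with everything in logits; function names are mine):

```python
import math

def rasch_p(ability, difficulty):
    """Probability of a right answer under the Rasch model (logits)."""
    return 1 / (1 + math.exp(difficulty - ability))

def expected_score_and_se(ability, difficulties):
    """Expected score: sum of success probabilities over the questions taken.
    Variance: sum of p*(1-p). Standard error: 1/sqrt(variance)."""
    probs = [rasch_p(ability, d) for d in difficulties]
    expected = sum(probs)
    variance = sum(p * (1 - p) for p in probs)
    return expected, variance, 1 / math.sqrt(variance)
```

For a single question exactly on target, p = .5, so the expected score is .5 and the variance is .25; each additional well-targeted question adds another .25 to the variance and so shrinks the standard error.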

At this point, two actual scores are possible: you can get the question displayed on the screen right or wrong. We first assume that you get the answer wrong, so that your total count of correct answers, your score so far, will not change. In line 430, this gives a low ability estimate, which we find by adjusting our previous estimate by the difference between your actual score and what the program expected you to score, divided by the expected score variance. However, we do not want our estimate to change too quickly due to careless mistakes or other quirks, so we ensure that the variance used is never less than one. We then calculate your ability if you were to get the answer to the current question right. This gives a higher estimate of your ability, found by adding to the low ability estimate the value of one more right answer.
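The two provisional estimates can be written out as a short Python sketch of the update just described (names are mine, not the program's):

```python
def update_ability(ability, score, expected, variance):
    """Return (low, high): the ability estimate if the question on screen is
    answered wrong, and if it is answered right. The variance is floored at 1
    so a single careless slip cannot move the estimate too far."""
    variance = max(variance, 1.0)
    low = ability + (score - expected) / variance   # answer assumed wrong
    high = low + 1.0 / variance                     # one more right answer
    return low, high
```

For example, with a current estimate of 0.0 logits, 2 right answers against an expected score of 2.5 and a variance of 2.0, the low estimate is -0.25 and the high estimate is 0.25; the next question is chosen between these two values.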

On returning to the main program at line 250, we still do not know your response to the previous question, but we assume it is going to take some time for you to read all the text that is displayed and to make a decision, so we go to line 460 to select the next question you are going to see. That question is selected by starting at a random point in the question file and selecting the first question between the high and low ability estimates, or the last question that has not yet been asked.

The next subroutine, at line 240, returns to the screen to ask what you have chosen as the right answer to the question you have been reading. Only a number in the range 1 to 5 is allowed (followed by pressing the Enter key). After receiving a selection, the computer notes it in an array and, if it is correct, updates your score and replaces the newly calculated low ability estimate with the high ability estimate. The "low" ability estimate is now our best idea of your real ability. The test can be stopped at any time by pressing the Ctrl and S keys together and then Enter, which is useful if you want to end the test early.

Back to line 250, and, if a new question has been selected, we go round again and display it. If no question has been selected, either because there are none left or, more desirably, because our current standard error of estimation is better than the accuracy required, we fine-tune our ability estimate with one more re-estimation cycle in line 270. We do not increase the number of questions asked, so what before was our low estimate of ability now becomes our most likely estimate of ability. We then tell you the test is over and display, using the subroutine at 610, the range and likely value of your ability measurement.

When the test supervisor returns to the keyboard, we display all the questions taken and results obtained so far with the subroutine at line 640. We also write this information, in even more detail, onto the test history file. The word "SURPRISINGLY" is added, in line 760, to those answers which represent an unexpected right or wrong response to a question which is more than 18 units (2 logits) harder or easier than our ability estimate.

At this point you can give another test, or re-estimate question difficulties. When you first constructed the question file, you may have had to guess which are the easy and hard questions, and particularly what precise difficulty value to give them. However, after a dozen or so people have taken your test, you may want the computer to re-estimate the difficulty levels of your questions based on what actually happened.

The computer will do this when you request re-estimation at line 140. In the subroutine at 960, it will read the test history file and note, for each person, what ability they were estimated to have and which questions they answered correctly and incorrectly. The reason for obtaining the test-takers' ability estimates is to give us a starting point for the re-estimation procedure, and also to enable us to keep the same mean ability for the test-takers so that the re-estimation procedure alters their estimates as little as possible. After reading in the responses, we ignore all test-takers and questions for whom we do not have at least one right and one wrong answer. For a big question file, it may take a good number of tests before all questions can be re-estimated. At line 1250, we make sure we have enough left to be able to continue with re-estimation.

At line 1270, we make an adjustment for statistical bias because we are using our data to correct our estimates. We now refine our estimates, starting in line 1340, in the same way we did when calculating abilities, but this time we change the question difficulties and test-taker abilities simultaneously. We do this through 10 cycles of re-estimation which generally means that there remains no significant difference between any pair of expected and observed scores. During this procedure, at line 1480, we keep the average test-taker ability fixed so that we minimize changes in the test-takers' estimates.
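A hedged Python sketch of this alternating re-estimation follows. It is not the program's code: the statistical bias adjustment and the filtering of all-right/all-wrong records are omitted, and all names are mine.

```python
import math

def rasch_p(b, d):
    """Probability of a right answer: ability b, difficulty d, in logits."""
    return 1 / (1 + math.exp(d - b))

def reestimate(responses, abilities, difficulties, cycles=10):
    """responses[p][q] is 1 (right), 0 (wrong), or None (not asked).
    Alternates Newton-style updates of abilities and difficulties,
    then recenters so the mean ability stays fixed."""
    target = sum(abilities) / len(abilities)
    for _ in range(cycles):
        # update each test-taker's ability from their own responses
        for p, row in enumerate(responses):
            taken = [q for q, r in enumerate(row) if r is not None]
            probs = [rasch_p(abilities[p], difficulties[q]) for q in taken]
            var = max(sum(x * (1 - x) for x in probs), 1e-6)
            abilities[p] += (sum(row[q] for q in taken) - sum(probs)) / var
        # update each question's difficulty from everyone's answers to it
        for q in range(len(difficulties)):
            takers = [p for p in range(len(responses))
                      if responses[p][q] is not None]
            probs = [rasch_p(abilities[p], difficulties[q]) for p in takers]
            var = max(sum(x * (1 - x) for x in probs), 1e-6)
            # more right answers than expected means the item is easier
            difficulties[q] -= (sum(responses[p][q] for p in takers)
                                - sum(probs)) / var
        # shift both scales together so the mean ability stays at its target
        shift = target - sum(abilities) / len(abilities)
        abilities[:] = [a + shift for a in abilities]
        difficulties[:] = [d + shift for d in difficulties]
    return abilities, difficulties
```

Shifting abilities and difficulties by the same amount preserves every expected score while keeping the test-takers' mean fixed, which is what minimizes the changes to their earlier estimates.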

After the re-estimation procedure is completed at line 1550, the question file is written to disk with the new difficulty estimate, and probable range inserted at the front of the ninth line of each question. We also copy the test history file adding, in line 1700, below each test-taker's name his revised ability estimate and its probable range. You may now continue testing, or exit the program.

Linacre J.M. (1987) UCAT: a BASIC computer-adaptive testing program. MESA Memorandum number 40, MESA Psychometric Laboratory. (ERIC ED 280 895)

Author's biographical note:

John Linacre is a consultant who has written educational and social science software, including several commercially available programs.

References:

Rasch, G., Probabilistic Models for Some Intelligence and Attainment Tests. Chicago: University of Chicago Press, 1980

Wright, B.D. & Stone, M.H., Best Test Design: Rasch Measurement. Chicago: Mesa Press, 1979

Wright, B.D. & Masters, G.N., Rating Scale Analysis: Rasch Measurement. Chicago: Mesa Press, 1982

Automating the UCAT system

To simplify operation, the following DOS command-line switches are available:

/D - go into debug mode: display item difficulties and answers while each item is administered.

/Annn - set the standard error of the final estimate (which determines the length of the test) in CHIP units (= logits * 9.1024).

/S - supervisor conducts the test: reports scores to the screen and allows item bank recalibration.

/Ifilename - name of the item bank of test questions, e.g. /Imathbank. If the name ends in ".SEC", the item bank is in secure format: /Imathbank.sec

/Pfilename - name of the data file for person responses, e.g. /Pstudent. If the name ends in ".SEC", the person data is in secure format: /Pstudent.sec

e.g. to give a standard secure CAT test without a supervisor, but with an S.E. of .5 logits = 4.5 CHIP units:
C:>UCAT /Iitembank.sec /Pdata.sec /A4.5

Security:

The program "SECURE" converts between secured ".SEC" and non-secured files. Put this on your diskette, but not on the students'!

At DOS prompt:

c:>SECURE itembank
write secured file "itembank.sec" from file "itembank"

c:>SECURE itembank.sec
write unsecured file "itembank" from file "itembank.sec"

Figure 1: The CAT screen

Question identifier: 2

Please select the correct answer to the following question:

Which country is in the continent of Africa?

The answer is one of:

1 . Australia
2 . Bolivia
3 . Cambodia
4 . Nigeria
5 . Romania

Type the number of your selection here:

Figure 1. The computer chooses and displays a multiple-choice question of the appropriate level of difficulty.

Figure 2: The Examinee Report

Summary report on questions administered to Fred

Identifier Difficulty Answer Right/Wrong
2 96 4 RIGHT
24 99 2 RIGHT
1 104 3 WRONG
25 114 5 RIGHT
7 106 2 WRONG
13 111 4 RIGHT
12 105 2 RIGHT
15 109 1 WRONG
3 85 2 SURPRISINGLY WRONG
18 103 3 RIGHT

Fred scored in the range from 101 to 115 at about 108 after 10 questions

Figure 2. At the end of the test session a summary of the test session is displayed, as well as an estimate of the test-taker's ability.

Figure 3: The Item Bank

1
Which city is the capital of West Germany ?
Berlin
Bonn
Dortmund
Hamburg
Weimar
2
104

2
Which country is in the continent of Africa?
Australia
Bolivia
Cambodia
Nigeria
Romania
4
96

7
Which city is the state capital of New York?
Albany
Buffalo
New York City
Rochester
Syracuse
1
106

Figure 3. Example of questions entered in the question file using a text editor. Each question has an identifying number (in ascending order, but gaps are allowed), then the text of the question, the 5 possible answers, the number of the correct answer (1-5), and a preliminary estimate of the question's difficulty, relative to 100.

Figure 4: Examinee File

Test-taker's name: George
Estimated ability: 108
Probable ability range: 101 - 115

Question identifier: 2
Estimated difficulty: 96
Question text:Which country is in the continent of Africa?
Answer: 1 , Australia
This answer is: WRONG

Question identifier: 24
Estimated difficulty: 99
Question text:Which country has no sea coast?
Answer: 4 , Switzerland
This answer is: RIGHT

Figure 4. Details of each test session are written to the test-taker file on disk. They include the test-taker's name and estimated ability, the range in which it probably lies, and then each question he was asked and how he answered it.

Figure 5: Recalibrated Item Bank

1
Which city is the capital of West Germany ?
Berlin
Bonn
Dortmund
Hamburg
Weimar
2
106, 99 - 113, 104

Figure 5. Your question file, with the reestimated difficulty and range inserted before your difficulty estimate on the ninth line of the question.

Figure 6: Re-estimated Examinee Ability File

Test-taker's name: George
Revised estimated ability: 106
Probable ability range: 100 - 112
Estimated ability: 108
Probable ability range: 101 - 115


Question identifier: 2
Estimated difficulty: 96
Question text:Which country is in the continent of Africa?
Answer: 1 , Australia
This answer is: WRONG

Figure 6. The test-taker file, showing a revised ability estimate included before the ability estimate made at the time of the interactive test.

Figure 7: Explanation of BASIC variable names

Variable Description
-------- -----------
ABILITY Estimate of ability, sometimes assumes next answer wrong
ABILRIGHT Estimate of ability, if next answer right
ACCURACY Maximum width of likely zone of estimate, =2*standard error
ANSWERS$ Valid responses to questions: Update the list if responses not "12345"
BIAS Adjustment for statistical bias
CURSCOL% Current cursor column on screen
CURSROW% Current cursor row on screen
FNL Function to convert external units to logits (-10 to +10)
FNU Function to convert logits to external units (1-200)
I Subscript index and numerical working variable
KEYSTR$ Last key pressed
L Subscript index and numerical working variable
MAXITEMS Number of questions in item file (calculated by program)
MAXPERSONS Maximum number of persons to re-estimate (set by user)
MSG$ Message to be sent to screen
N Numerical working variable
NAM$ Name of test taker
P Current location in test-taker array
PABILITY() Estimated test-takers' abilities
PADJ Total of previous ability estimates
PANSWER() Answers keyed in by test-taker to questions asked
PASKED Number of questions asked or number of answers to a question
PEXP Expected score by test-taker
PQUESTION() Location in QTEXT$ array of questions asked test-taker
PRESULT Count of questions asked a test-taker
PSCORE() Count of test-taker's correct answers, i.e. raw score
PSE() Standard error of estimation (accuracy) of ability estimates
PSUM Sum of all ability estimates
PTOTAL Total number of test-takers being reestimated
PVAR Variance of expected score for a test-taker
Q Location in question arrays
QASKED() Number of times question has been asked
QCOUNT Number of questions in question file
QDIFF() Difficulty estimates for questions
QEXP() Estimated score based on ability and difficulty estimates
QFIL$ Name of question text file
QSCORE() Count of number of correct answers to each question
QSELECT Location of next question to be displayed in question array
QTEXT$(,10) Text of questions and answer options (10 lines per question)
QTOTAL Total number of questions being reestimated
QVAR() Variance of expected score by test-takers on a question
RECOUNT Working variable to control recounting and reestimating
RESIDUAL Difference between actual and estimated scores
RESPONSE$ Response to the question
RESULT%(,) Answers by test-takers to questions: 1=correct 0=incorrect -1=unknown (not taken)
SE Standard error of estimation of ability measure
SUCCESS Probability of correct answer by test-taker to question
TEXT$ Text data
TFIL$ Name of file of test-taker's abilities and responses
TREVFIL$ Name of revised test-taker file
VALID$ Valid responses for pressed keys

Figure 7. Names and usage of variables used in the BASIC program. Arrays are dimensioned for MAXPERSONS test-takers (500 in the listing).
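FNL and FNU in the table implement the conversion between logits and the program's 1-200 reporting scale, declared near the top of the listing as external units = logits * 9.1024. The same pair as a quick Python sketch (names follow the listing; nothing here is new logic):

```python
SCALE = 9.1024  # reporting units per logit, as declared in the listing

def fnu(logits):
    """FNU: convert logits to the 1-200 reporting scale, rounded to an integer."""
    return round(logits * SCALE)

def fnl(units):
    """FNL: convert reporting units back to logits."""
    return units / SCALE
```

So an ability of 11 logits reports as 100 units, and (like CINT in BASIC) Python's round() rounds halves to the nearest even integer.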

Program 1: The CAT program (Microsoft Professional BASIC - 1999).

Copy and paste this into a DOS-text or ASCII file.
Then edit it with a small font size to avoid unintentional line breaks.

'$DYNAMIC
' Computer-adaptive test presentation and scoring program
' Written by John Michael Linacre 1986 - modify freely
' Line break with \
' Forced end of line with @
' select nearest if none good - line 485
' Join the Rasch Measurement SIG
' Modify this code for your own Logit to Reporting Unit Conversion
' external units are scaled: 10 units = 1.1 logits!
' External units=(logits*9.1024): logits=(external units/9.1024)
DEF FNU (i) = CINT(i * 9.1024): DEF FNL (TEXT$) = VAL(TEXT$) / 9.1024
' End of modification
IF INSTR(UCASE$(COMMAND$), "/D") > 0 THEN debug% = -1 'show debug information
i% = INSTR(UCASE$(COMMAND$), "/A") ' set accuracy limit
IF i% > 0 THEN
ACCURACY = FNL(MID$(COMMAND$, i% + 2))
ELSE
ACCURACY = .7 'measure ability within zone of .7 LOGITS
END IF
IF INSTR(UCASE$(COMMAND$), "/S") > 0 THEN super% = -1 '/S supervisor mode
' prints number of items in bank
i% = INSTR(UCASE$(COMMAND$), "/I") 'item bank
IF i% > 0 THEN
j% = INSTR(i%, UCASE$(COMMAND$) + " ", " ")
QFIL$ = MID$(COMMAND$, i% + 2, j% - i% - 2)
END IF
i% = INSTR(UCASE$(COMMAND$), "/P") 'person file
IF i% > 0 THEN
j% = INSTR(i%, UCASE$(COMMAND$) + " ", " ")
tfil$ = MID$(COMMAND$, i% + 2, j% - i% - 2)
END IF
MAXPERSONS = 500 'Update to reflect maximum number of persons
RANDOMIZE TIMER 'set random number generator so it differs every time
CLS : PRINT "Preparing to administer test questions .."
IF QFIL$ = "" THEN INPUT "What is the name of your file of questions"; QFIL$
GOSUB 850 'Find how many options AND HOW MANY ITEMS
IF super% THEN PRINT "There are" + STR$(maxitems) + " items in the bank, with" + STR$(maxanswer%) + " options each"
DIM QASKED(maxitems), QDIFF(maxitems), QEXP(maxitems), QSCORE(maxitems)
DIM QTEXT$(maxitems, MEASURE%), QVAR(maxitems), result%(MAXPERSONS, maxitems)
IF maxitems > MAXPERSONS THEN MAXPERSONS = maxitems'TO ALLOW ENOUGH ROOM
DIM PABILITY(MAXPERSONS), PANSWER(MAXPERSONS), PQUESTION(MAXPERSONS)
DIM PSCORE(MAXPERSONS), PSE(MAXPERSONS)
GOSUB 870 'Read in the questions
IF tfil$ = "" THEN INPUT "What is the name of your file of test-takers"; tfil$
120 IF super% THEN MSG$ = "Do you want to give a test?": GOSUB 810
IF (RESPONSE$ = "Y") OR NOT super% THEN
GOSUB 180: GOSUB 2660 'administer test and write results
IF super% GOTO 120'Administer another test
END IF
IF super% THEN
MSG$ = "Do you want the computer to reestimate question difficulties?"
GOSUB 810: IF RESPONSE$ = "Y" THEN GOSUB 960: GOTO 120'reestimate
PRINT "Then we have finished. Review the responses in file " + tfil$
END IF
WHILE LEN(INKEY$) > 0: WEND
PRINT "Thank you! - Please press any key to conclude test session"
WHILE LEN(INKEY$) = 0: WEND
SYSTEM
' Conduct a test session
180 CLS : PRINT "Welcome to a Computer-administered test session!": PRINT
INPUT "Please type your name here:", nam$
ABILITY = qmean * (.45 + .1 * RND)'Starting ability is between 90 and 95
ABILRIGHT = ABILITY + 1: SE = ACCURACY'Upper ability estimate 100-105
FOR Q = 1 TO qcount: QASKED(Q) = 0: NEXT Q'0 means question not asked
PASKED = 0: presult = 0: GOSUB 460'select starting question
WHILE QSELECT <> 0: GOSUB 330'put questions and update ability estimates
GOSUB 460: GOSUB 540: WEND'choose next question, check previous answer
IF PASKED = 0 THEN RETURN' no questions asked
' Do another estimation to refine the measurement and finish test
GOSUB 380: CLS : PRINT : PRINT "You have finished your test."
IF super% THEN
PRINT "You"; : GOSUB 610'Display ability estimate
PRINT nam$ + ", please call the test supervisor now."
MSG$ = "Is the test supervisor at the keyboard?": RESPONSE$ = "N"
WHILE RESPONSE$ <> "Y": GOSUB 810: WEND: GOSUB 640
END IF
RETURN'Show test results
'PRINT multiple LINE TEXT AND ONE SPACE
313 WHILE (LEN(TEXT$) > 79) or (instr(text$,"@") > 0)
' @ is the forced end of line code
lx=instr(mid$(text$,1,80),"@")
if lx > 0 then
print left$(text$,lx-1)
else
IX = 1
WHILE IX <= 79:
LX = IX: IX = INSTR(IX + 1, TEXT$ + " ", " ")
WEND
PRINT left$(TEXT$,LX)
END IF
TEXT$ = MID$(TEXT$, LX + 1)
WEND: PRINT TEXT$: RETURN
' Display the question on the screen and update ability estimate
330 CLS : PRINT "Question identifier:"; QTEXT$(QSELECT, 1): PRINT
PRINT "Please select the best answer to the following question:"
PRINT : TEXT$ = QTEXT$(QSELECT, 2): GOSUB 313
PRINT : PRINT "The answer is one of:": PRINT
FOR i = 1 TO maxanswer%: TEXT$ = MID$(ANSWER$, i, 1) + ". " + QTEXT$(QSELECT, i + 2): GOSUB 313: NEXT i
PASKED = PASKED + 1: PQUESTION(PASKED) = QSELECT: QASKED(QSELECT) = 1'This question
IF debug% THEN 'REPORT THE STATUS SO FAR
CURSROW% = CSRLIN: CURSCOL% = POS(0) ' SAVE POSITION
LOCATE 24, 1, 0 ' PENULTIMATE ROW
PRINT " item: SEQU NO DIFFICULTY ANSWER person: SCORE MEASURE SE";
LOCATE 25, 1, 0
PRINT USING " #### ##### \ \ ### ##### #####"; PASKED; FNU(QDIFF(QSELECT)); QTEXT$(QSELECT, correct%); presult; FNU(ABILITY); FNU(SE);
LOCATE CURSROW%, CURSCOL%, 1 'RESTORE POSITION
END IF
380 PEXP = 0: PVAR = 0: FOR P = 1 TO PASKED'Estimate ability based on current score
SUCCESS = 1 / (1 + EXP(QDIFF(PQUESTION(P)) - ABILITY))'Probability of success
PEXP = PEXP + SUCCESS: PVAR = PVAR + (SUCCESS * (1 - SUCCESS)): NEXT P'sum
SE = SQR(1 / PVAR)'standard error of estimation = accuracy
IF PVAR < 1 THEN PVAR = 1'limit change in estimates
ABILITY = ABILITY + ((presult - PEXP) / PVAR)'ability so far
ABILRIGHT = ABILITY + (1 / PVAR): RETURN' ability if next answer right
' Select useful next question if needed for accuracy and available
460 QSELECT = 0: IF ACCURACY > SE OR PASKED = qcount THEN RETURN
n = INT(qcount * RND) + 1'Starting point to look for suitable question
ABILHALF = (ABILRIGHT + ABILITY) * .5: QSELECT = 0
FOR QQ = n + 1 TO qcount + n
IF QQ > qcount THEN Q = QQ - qcount ELSE Q = QQ
IF QASKED(Q) = 1 THEN 500
i = QDIFF(Q): IF i >= ABILITY AND i <= ABILRIGHT THEN QSELECT = Q: RETURN'found one
IF (QSELECT = 0) OR (ABS(i - ABILHALF) < QHOLD) THEN QSELECT = Q: QHOLD = ABS(i - ABILHALF)' nearest available
500 NEXT QQ
RETURN 'If none are very close, default to last possibility
' Get and check person's answer to question: update ability if right
540 PRINT ': PRINT QTEXT$(PQUESTION(PASKED), CORRECT%), QDIFF(PQUESTION(PASKED))
PRINT "Type the number of your selection here:";
VALID$ = ANSWER$ + CHR$(19) 'Valid responses to questions on screen + Ctrl-S
GOSUB 1900
IF RESPONSE$ = CHR$(19) THEN PASKED = PASKED - 1: QSELECT = 0: RETURN'FORCE END
n = VAL(RESPONSE$): PANSWER(PASKED) = n'Update answer array, update score
i = VAL(QTEXT$(PQUESTION(PASKED), correct%))'Determine correct answer
IF n = i THEN presult = presult + 1: ABILITY = ABILRIGHT'Update if correct
RETURN
' Display estimates of ability
610 PRINT " scored in the range from "; LTRIM$(STR$(FNU(ABILITY - SE))); " to "; LTRIM$(STR$(FNU(ABILITY + SE)));
PRINT " at about "; LTRIM$(STR$(FNU(ABILITY))); " after "; ltrim$(str$(PASKED)); " questions.": RETURN
' Record person's ability and answers on disk
640 PRINT "Summary report on questions administered to " + nam$
PRINT "Identifier", "Difficulty", "Answer", "Right/Wrong"
FOR P = 1 TO PASKED: Q = PQUESTION(P): n = PANSWER(P)
IF n = VAL(QTEXT$(Q, correct%)) THEN i = 1: TEXT$ = "RIGHT" ELSE i = -1: TEXT$ = "WRONG"
IF (ABILITY - QDIFF(Q)) * i < -2 THEN TEXT$ = "SURPRISINGLY " + TEXT$
PRINT QTEXT$(Q, 1), FNU(QDIFF(Q)), n, TEXT$
NEXT P
PRINT nam$; : GOSUB 610: RETURN'Display estimated ability
' This routine checks for Yes/No answers - no Enter key required
810 IF LEN(MSG$) < 61 THEN PRINT MSG$; ELSE PRINT MSG$
PRINT " Yes or No (Y/N):"; : VALID$ = "NY": GOSUB 1900: RETURN
' Load the question file (9 lines per question +blank) into an array
'FIND NUMBER OF OPTIONS IN QUESTION FILE
850 IF INSTR(UCASE$(QFIL$), ".SEC") > 0 THEN qsec% = -1
n = 0: i = 0: OPEN QFIL$ FOR INPUT AS #1: TEXT$ = "A"
WHILE NOT EOF(1) AND (TEXT$ + " " <> " "): GOSUB 1800: i = i + 1: WEND
maxanswer% = i - 5: ANSWER$ = LEFT$("123456789", maxanswer%)
correct% = maxanswer% + 3: MEASURE% = maxanswer% + 4
' FIND NUMBER OF ITEMS
maxitems = 1: WHILE NOT EOF(1): maxitems = maxitems + 1
FOR i = 1 TO maxanswer% + 4: GOSUB 1800: NEXT i
IF NOT EOF(1) THEN GOSUB 1800: IF TEXT$ <> "" THEN PRINT "Blank line expected at line" + STR$(n): GOTO 930'WE HAVE AN ERROR
WEND
CLOSE #1: RETURN
'READ IN QUESTIONS FILE
870 qcount = 0: n = 0: qmean = 0
OPEN QFIL$ FOR INPUT AS #1: WHILE NOT EOF(1)
qcount = qcount + 1: i = 0
WHILE i < maxanswer% + 4: i = i + 1: GOSUB 1800
QTEXT$(qcount, i) = TEXT$
WEND: IF NOT EOF(1) THEN GOSUB 1800
IF VAL(QTEXT$(qcount, 1)) <= VAL(QTEXT$(qcount - 1, 1)) THEN 930'Check ID
i = VAL(QTEXT$(qcount, correct%))
IF i < 1 OR i > maxanswer% THEN PRINT "Incorrect answer: " + QTEXT$(qcount, correct%): GOTO 930'Check correct answer
i = FNL(QTEXT$(qcount, MEASURE%))
IF i < FNL("1") OR i > FNL("2000") THEN PRINT "Incorrect difficulty: " + QTEXT$(qcount, MEASURE%): GOTO 930'Units ok?"
QDIFF(qcount) = i
qmean = qmean + i
WEND: CLOSE #1: IF qcount > 0 THEN qmean = qmean / qcount: RETURN'if all ok
PRINT "No questions found"
930 PRINT "Error in question file, " + QFIL$ + ", at or before line "; n
PRINT "Test session ended": STOP
' Reestimation routine for question and test-taker measurements
960 PRINT "Reading test-takers' answers..."
PASKED = 0: OPEN tfil$ FOR INPUT AS #2: WHILE NOT EOF(2)
LINE INPUT #2, TEXT$: IF INSTR(TEXT$, "Test-taker") = 0 THEN 1030
' We have another test-taker - set his responses to unknown
PASKED = PASKED + 1: FOR Q = 1 TO qcount: result%(PASKED, Q) = -1: NEXT Q
PABILITY(PASKED) = 0: GOTO 1110
' Read previous estimate of test-taker's ability
1030 i = INSTR(TEXT$, "ability"): IF i = 0 OR PABILITY(PASKED) > 0 THEN 1050
PABILITY(PASKED) = FNL(MID$(TEXT$, i + 8)): GOTO 1110
1050 i = INSTR(TEXT$, "identifier"): IF i = 0 THEN 1090'is this a question id?
Q = VAL(MID$(TEXT$, i + 11))'Question identifier - look up in table
FOR i = 1 TO qcount: IF Q = VAL(QTEXT$(i, 1)) THEN Q = i: GOTO 1110
NEXT i: Q = 0: GOTO 1110'if not found flag as zero which is unused
1090 IF INSTR(TEXT$, "RIGHT") > 0 THEN result%(PASKED, Q) = 1: GOTO 1110'save answer
IF INSTR(TEXT$, "WRONG") > 0 THEN result%(PASKED, Q) = 0'1=right 0=wrong
1110 WEND: CLOSE #2
1120 PRINT "Totalling scores...": QTOTAL = 0: PTOTAL = 0: recount = 0
FOR Q = 1 TO qcount: QASKED(Q) = 0: QSCORE(Q) = 0: NEXT Q
FOR P = 1 TO PASKED: presult = 0: PSCORE(P) = 0: FOR Q = 1 TO qcount
n = result%(P, Q): IF n < 0 THEN 1180
presult = presult + 1: QASKED(Q) = QASKED(Q) + 1
PSCORE(P) = PSCORE(P) + n: QSCORE(Q) = QSCORE(Q) + n
1180 NEXT Q: IF presult = 0 THEN 1210
IF PSCORE(P) > 0 AND PSCORE(P) < presult THEN PTOTAL = PTOTAL + 1: GOTO 1210
recount = 1: FOR Q = 1 TO qcount: result%(P, Q) = -1: NEXT Q
1210 NEXT P: FOR Q = 1 TO qcount: IF QASKED(Q) = 0 THEN 1240
IF QSCORE(Q) > 0 AND QSCORE(Q) < QASKED(Q) THEN QTOTAL = QTOTAL + 1: GOTO 1240
recount = 1: FOR P = 1 TO PASKED: result%(P, Q) = -1: NEXT P
1240 NEXT Q
IF PTOTAL < 2 OR QTOTAL < 2 THEN PRINT "Not enough data to reestimate": RETURN
IF recount = 1 THEN 1120
' was this part of the R. Smith problem?
BIAS = 1 '(QTOTAL - 1) * (PTOTAL - 1) / (QTOTAL * PTOTAL)'allow for statistical bias
FOR Q = 1 TO qcount: IF QASKED(Q) <> 0 THEN QDIFF(Q) = FNL(QTEXT$(Q, MEASURE%)) / BIAS
NEXT Q: PADJ = 0: FOR P = 1 TO PASKED
IF PSCORE(P) > 0 THEN PABILITY(P) = PABILITY(P) / BIAS: PADJ = PABILITY(P) + PADJ
NEXT P 'Sum current abilities to determine average ability level
' Now perform reestimation for 10 iterations.
PRINT "Reestimating for"; PTOTAL; "test-takers and"; QTOTAL; "questions"
recount = 1: Cycle% = 1
WHILE recount > 0 OR maxresidual > .1: recount = 0: Cycle% = Cycle% + 1: maxresidual = 0
PRINT "Estimation cycle no."; Cycle%
PSUM = 0: FOR Q = 1 TO qcount: QEXP(Q) = 0: QVAR(Q) = 0: NEXT Q
FOR P = 1 TO PASKED: IF PSCORE(P) = 0 THEN 1470
PEXP = 0: PVAR = 0: FOR Q = 1 TO qcount: IF QASKED(Q) = 0 THEN 1420
IF result%(P, Q) = -1 THEN 1420'Look at each valid answer
SUCCESS = 1 / (1 + EXP(QDIFF(Q) - PABILITY(P)))'Probability of success
QEXP(Q) = QEXP(Q) + SUCCESS: PEXP = PEXP + SUCCESS'Accumulate estimated scores
n = SUCCESS * (1 - SUCCESS): QVAR(Q) = QVAR(Q) + n: PVAR = PVAR + n'sum variance
1420 NEXT Q
RESIDUAL = PSCORE(P) - PEXP'difference between actual and estimated
IF ABS(RESIDUAL) > maxresidual THEN maxresidual = ABS(RESIDUAL)
IF PVAR > 1 THEN RESIDUAL = RESIDUAL / PVAR'amount to adjust by
PABILITY(P) = PABILITY(P) + RESIDUAL'new ability estimate
PSE(P) = 1 / SQR(PVAR): PSUM = PSUM + PABILITY(P)'standard error + ability sum
1470 NEXT P: PSUM = (PSUM - PADJ) / PTOTAL'What is change in mean ability?
FOR P = 1 TO PASKED: IF PSCORE(P) > 0 THEN PABILITY(P) = PABILITY(P) - PSUM
NEXT P 'Keep mean ability of test-takers constant
FOR Q = 1 TO qcount: IF QASKED(Q) = 0 THEN 1540'reestimate questions
RESIDUAL = QSCORE(Q) - QEXP(Q)'difference between actual and estimated
IF ABS(RESIDUAL) > maxresidual THEN maxresidual = ABS(RESIDUAL)
IF QVAR(Q) > 1 THEN RESIDUAL = RESIDUAL / QVAR(Q)'amount to adjust by
QDIFF(Q) = QDIFF(Q) - RESIDUAL'new question difficulty estimate
1540 NEXT Q: WEND: PRINT "Reestimation complete."
INPUT "What is the name of the updated question file"; QFIL$
OPEN QFIL$ FOR OUTPUT AS #1: FOR Q = 1 TO qcount' write out all questions
FOR i = 1 TO correct%: PRINT #1, QTEXT$(Q, i): NEXT i: IF QASKED(Q) = 0 THEN 1600
i = QDIFF(Q) * BIAS: QDIFF(Q) = i: SE = BIAS / SQR(QVAR(Q))' new difficulties
PRINT #1, FNU(i); ","; FNU(i - SE); "-"; FNU(i + SE); ","; ' insert in line 9
1600 PRINT #1, QTEXT$(Q, MEASURE%): PRINT #1, "": NEXT Q:
FOR Q = 1 TO qcount
PRINT #1, Q; FNU(QDIFF(Q) * BIAS)
NEXT Q
CLOSE #1'Append old estimate
' Now rewrite the test-taker file with revised abilities
1620 INPUT "What is the name of the revised test-taker file"; TREVFIL$
IF tfil$ = TREVFIL$ THEN 1620'must be a different file
OPEN TREVFIL$ FOR OUTPUT AS #1: PASKED = 0'read previous test-taker file
OPEN tfil$ FOR INPUT AS #2 'output revised test-taker file
WHILE NOT EOF(2): LINE INPUT #2, TEXT$: PRINT #1, TEXT$'copy over
IF INSTR(TEXT$, "Test-taker") = 0 THEN 1720'is this next test-taker ?
PASKED = PASKED + 1: IF PSCORE(PASKED) = 0 THEN 1720'is his ability revised?
ABILITY = PABILITY(PASKED) * BIAS: SE = PSE(PASKED) * BIAS'remove bias
PRINT #1, "Revised estimated ability:"; FNU(ABILITY)
PRINT #1, "Probable ability range:"; FNU(ABILITY - SE); "-"; FNU(ABILITY + SE)
1720 WEND
' output the matrix of Responses
PRINT #1, ""
FOR P = 1 TO PASKED
x$ = "Responses="
FOR Q = 1 TO qcount: x$ = x$ + LEFT$(LTRIM$(STR$(result%(P, Q))), 1): NEXT Q
PRINT #1, x$
NEXT P

CLOSE #2: CLOSE #1: tfil$ = TREVFIL$: RETURN'Use new test-taker file
' READ IN THE NEXT LINE OF THE DATA FILE
1800 TEXT$ = ""
LINE INPUT #1, ttt$
IF qsec% THEN
FOR tti% = LEN(ttt$) TO 1 STEP -1
ttx% = ASC(MID$(ttt$, tti%, 1))
IF ttx% >= 32 THEN MID$(ttt$, tti%, 1) = CHR$((ttx% AND 224) + (((ttx% AND 31) + 16) AND 31))
NEXT tti%
END IF
ttt$ = RTRIM$(ttt$): n = n + 1
IF LEN(ttt$) > 0 THEN
' continuation is \, forced end of line is @
WHILE (RIGHT$(ttt$, 1) = "\") or (RIGHT$(ttt$, 1) = "@")
if RIGHT$(ttt$, 1) = "\" then MID$(ttt$, LEN(ttt$), 1) = " "
TEXT$ = TEXT$ + ttt$
LINE INPUT #1, ttt$
IF qsec% THEN
FOR tti% = LEN(ttt$) TO 1 STEP -1
ttx% = ASC(MID$(ttt$, tti%, 1))
IF ttx% >= 32 THEN MID$(ttt$, tti%, 1) = CHR$((ttx% AND 224) + (((ttx% AND 31) + 16) AND 31))
NEXT tti%
END IF
ttt$ = RTRIM$(ttt$): n = n + 1
WEND
END IF: TEXT$ = TEXT$ + ttt$
RETURN
' READ IN A VALID KEY - VALID RESPONSES IN VALID$
1900 CURSROW% = CSRLIN: CURSCOL% = POS(0): RESPONSE$ = "ZZ"
1901 WHILE INSTR(VALID$, RESPONSE$) = 0
LOCATE CURSROW%, CURSCOL%, 1
RESPONSE$ = INKEY$: WHILE LEN(RESPONSE$) = 0: RESPONSE$ = INKEY$: WEND
RESPONSE$ = UCASE$(RESPONSE$)
WEND
PRINT RESPONSE$;
LOCATE CURSROW%, CURSCOL%, 1' CONFIRM OR DENY
KEYSTR$ = INKEY$: WHILE LEN(KEYSTR$) = 0: KEYSTR$ = INKEY$: WEND
IF KEYSTR$ <> CHR$(13) THEN RESPONSE$ = KEYSTR$: GOTO 1901
WHILE LEN(INKEY$) <> 0: WEND
RETURN
2660 OPEN tfil$ FOR APPEND AS #1
pl$ = "Test-taker's name: " + nam$: GOSUB 658
pl$ = "Estimated ability:" + STR$(FNU(ABILITY)): GOSUB 658
pl$ = "Probable ability range:" + STR$(FNU(ABILITY - SE)) + "-" + STR$(FNU(ABILITY + SE)): GOSUB 658
pl$ = "Score =" + STR$(presult) + " out of" + STR$(PASKED): GOSUB 658
pl$ = "": GOSUB 658
rstring$ = STRING$(qcount, "-")
FOR P = 1 TO PASKED: Q = PQUESTION(P): n = PANSWER(P)
pl$ = "Question identifier:" + QTEXT$(Q, 1): GOSUB 658
pl$ = "Estimated difficulty:" + STR$(FNU(QDIFF(Q))): GOSUB 658
pl$ = "Question text:" + LEFT$(QTEXT$(Q, 2), 50): GOSUB 658
pl$ = "Answer:" + STR$(n) + ", " + LEFT$(QTEXT$(Q, n + 2), 50): GOSUB 658
' Find if answer is right or wrong and if unexpectedly so.
IF n = VAL(QTEXT$(Q, correct%)) THEN
i = 1: TEXT$ = "RIGHT"
MID$(rstring$, Q, 1) = "1"
ELSE
i = -1: TEXT$ = "WRONG"
MID$(rstring$, Q, 1) = MID$("ABCDEF", n, 1)'output the wrong distractor
END IF
IF (ABILITY - QDIFF(Q)) * i < -2 THEN TEXT$ = "SURPRISINGLY " + TEXT$
pl$ = "This answer is: " + TEXT$: GOSUB 658: pl$ = "": GOSUB 658'blank line after
NEXT P
pl$ = "Responses=" + rstring$ + " " + nam$: GOSUB 658
CLOSE #1
RETURN
658 IF INSTR(UCASE$(tfil$), ".SEC") > 0 THEN
FOR tti% = LEN(pl$) TO 1 STEP -1
ttx% = ASC(MID$(pl$, tti%, 1))
IF ttx% >= 32 THEN MID$(pl$, tti%, 1) = CHR$((ttx% AND 224) + (((ttx% AND 31) + 16) AND 31))
NEXT tti%
END IF
PRINT #1, pl$
RETURN
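For readers who prefer to follow the logic outside BASIC, here is a minimal Python sketch of the two central subroutines: the ability update at line 380 (expected score and its variance, a Newton-style step, and the standard error) and the question selection at line 460 (prefer an unasked item whose difficulty lies between the current estimate and the estimate-if-right; otherwise take the nearest available item). Everything is in logits, and the function names are mine, not the listing's:

```python
import math
import random

def update_ability(ability, difficulties, score):
    """One update of the Rasch ability estimate, as in subroutine 380.
    difficulties: logit difficulties of the questions asked so far.
    score: number of correct answers so far."""
    expected = 0.0   # expected raw score at current ability (PEXP)
    variance = 0.0   # model variance of the raw score (PVAR)
    for d in difficulties:
        p = 1.0 / (1.0 + math.exp(d - ability))  # probability of success
        expected += p
        variance += p * (1.0 - p)
    se = math.sqrt(1.0 / variance)               # standard error of estimation
    variance = max(variance, 1.0)                # limit the size of each step
    ability += (score - expected) / variance     # move toward the ML estimate
    return ability, se

def select_question(ability, ability_if_right, difficulties, asked):
    """Pick the next item, as in subroutine 460: scan from a random start,
    return the first unasked item targeted between the two estimates,
    else the unasked item nearest their midpoint."""
    target = (ability + ability_if_right) / 2.0
    best, best_gap = None, float("inf")
    start = random.randrange(len(difficulties))
    for step in range(len(difficulties)):
        q = (start + step) % len(difficulties)
        if q in asked:
            continue
        d = difficulties[q]
        if ability <= d <= ability_if_right:
            return q                             # well-targeted item found
        gap = abs(d - target)
        if gap < best_gap:
            best, best_gap = q, gap              # remember nearest fallback
    return best
```

For example, after three right answers out of four items of difficulty 0 logits, starting from ability 0, the update moves the estimate to 1.0 logit with a standard error of 1.0 logit.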

Program 2: The SECURE program

Edit this with a small font size to avoid unwanted line breaks

' ascii values must be above 31
' take the low order bits and add 15 to them and then save
CLS
f$=ucase$(command$)
f%=instr(f$,".")
if f%=0 then ofile$=f$ else ofile$=mid$(f$,1,f%-1)
if instr(f$,".SEC")=0 then
ofile$=f$+".SEC"
Print "Writing secure file to "+ofile$
else
Print "Writing unsecured file to "+ofile$
end if
open f$ for input as #1
open ofile$ for output as #2
while not eof(1)
line input #1,l$
FOR i% = len(l$) TO 1 step -1
x% = ASC(MID$(l$, i%, 1))
IF x% >= 32 THEN MID$(l$, i%, 1) = CHR$((x% AND 224) + (((x% AND 31) + 16) AND 31))
NEXT i%
PRINT #2, l$
wend
close
print "converted"
system
STOP

Program 2: SECURE.BAS to encrypt and decrypt the data files.
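SECURE.BAS uses the same transform to encrypt and to decrypt: each printable byte keeps its top three bits while 16 is added modulo 32 to its low five bits, so applying the transform twice restores the original text. A Python sketch of that self-inverse scramble (the function name is mine):

```python
def secure(text):
    """Self-inverse scramble from SECURE.BAS: keep the top three bits
    of each byte and add 16 (mod 32) to the low five bits.
    Codes below 32 (control characters) pass through unchanged."""
    out = []
    for ch in text:
        x = ord(ch)
        if x >= 32:
            x = (x & 224) + (((x & 31) + 16) & 31)
        out.append(chr(x))
    return "".join(out)
```

For any ASCII string s, secure(secure(s)) == s, which is why one program serves for both directions.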







Our current URL is www.rasch.org

The URL of this page is www.rasch.org/memo40.htm