Introduction
In America today, thousands of children are meeting state compulsory education requirements at home instead of in school. In 1987, 29 states explicitly permitted instruction at home by a parent or tutor, and all of the remaining states allowed home instruction in some form (Lines, 1987). Home schooling is clearly a growing movement, not only in the United States but in Canada as well (Common & MacMullen, 1986).
The increasing prevalence of home schooling has presented educators with a number of vexing legal issues regarding, for example, the conflict between parents’ religious rights and a state’s educational interest (Harris III & Fields, 1982; Wendel, Konnert, & Foreman, 1986). One issue likely to receive increasing attention concerns the qualifications of home schooling instructors.
In the spring of 1988, the South Carolina Legislature enacted legislation that established requirements for home instruction. Among other things, the law stipulates that home schooling instructors must either (a) have a baccalaureate degree, or (b) have a high school diploma or a GED certificate and attain a passing score on the Education Entrance Examination (EEE). The EEE is a basic skills test in reading, writing, and mathematics created several years ago by the South Carolina Department of Education (SCDE) to ascertain whether prospective teacher-education candidates had basic skills in those three areas. By South Carolina law, students must pass the EEE to be admitted to one of the state’s teacher education programs.
Although the EEE was carefully constructed and rigorously appraised in connection with its original function, its suitability for this new, albeit related, function needed to be evaluated. In July 1988, SCDE issued a request for proposals for the appraisal of the EEE for use with home schooling instructors. IOX Assessment Associates (IOX) was awarded the contract.
IOX has conducted numerous test-appraisal studies. Many of those studies have been appraisals of teacher licensure tests, both basic skills tests like the EEE and subject matter tests like the NTE program tests (formerly called the National Teacher Examinations). Over the years, we have developed rigorous, fair, and legally defensible procedures for appraising the suitability of a licensure test for its intended purpose (Carlson, in press). Because of the new purpose for which the EEE was to be used, however, this test-appraisal study presented some unique issues that had to be addressed.
This paper first describes the rationale for and the procedures used in a typical teacher licensure test-appraisal study. It then summarizes the test-appraisal study of the EEE in South Carolina, discussing departures from the typical procedures as well as other substantive and procedural issues. Finally, the results of the EEE study are briefly summarized.
Rationale for the Appraisal of Teacher Licensure Tests
The primary activity in test-appraisal studies of teacher licensure tests has been the collection of what is known as “content-related evidence of validity.” The American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education, in their joint publication entitled Standards for Educational and Psychological Testing (1985), have described content-related evidence of validity as evidence regarding “the degree to which the sample of items, tasks, or questions on a test are representative of some defined universe or domain of content” (p. 10).
The nature of the “universe or domain of content” has been described in the Uniform Guidelines on Employee Selection Procedures (Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, & Department of Justice, 1978). The Uniform Guidelines were intended to help employers comply with federal law prohibiting discrimination on the basis of race, color, religion, sex, or national origin. Employment decisions such as hiring, licensing, and certification fall within the scope of these guidelines. Close examination of the guidelines reveals that tests used for licensure purposes must be directly job-related. According to these federal guidelines, the universe of content should consist of the knowledge and skills that are necessary for successful performance on the job for which the candidates are to be evaluated. As stated in the Uniform Guidelines, “selection procedures which purport to measure knowledge, skills, or abilities may in certain circumstances be justified by content validity…if that knowledge, skill, or ability is a necessary prerequisite to successful job performance” (Section 14C(1)). This view is echoed in the Standards for Educational and Psychological Testing, which states that a “rationale should be provided to support a claim that the knowledge or skills being assessed are required for competent performance in an occupation” (Standard 11.1, p. 64). In short, the knowledge or skills measured by a test that has employment implications should be those required for successful job performance.
Typical Test-Appraisal Procedures
The first task in conducting a test-appraisal study is to recruit the individuals who will review the test. For subject-matter tests (e.g., tests in English or science), these are typically teachers in the same subject area as the test; teacher education faculty are often invited as well. To obtain these individuals, nominations are solicited from school district superintendents and from deans or chairpersons of teacher education programs. From those nominated, individuals are selected and invited to participate in the test-appraisal study. Selection criteria usually include variables such as ethnicity, gender, and geographic representation.
The selected individuals who agree to participate in a test-review session are asked to independently review test items and make judgments about each item, as well as about the test as a whole. The participants often serve on one of two panels. A content review panel, typically consisting of college faculty, judges the extent to which the content on the test is covered in their teacher education programs. A job relevance panel, typically consisting of teachers, judges the extent to which the knowledge and skills tested are necessary for successful job performance. Both panels may also be asked to review the test for potential bias against examinees on the basis of characteristics such as ethnicity and gender.
Results of this procedure are summarized and reported to a decision-making body, usually the State Board of Education. The results of the test-appraisal study, as well as other factors, are considered in determining whether or not the test is suitable for its intended purpose.
The EEE Study
In planning and implementing the EEE test-appraisal study, we used the same general procedures as those just described. However, a number of issues specific to evaluating a test for use with home schooling instructors had to be addressed. These issues, as well as the resulting procedural modifications, are described below.
Building Cooperation
Teacher licensure tests have been in use in this country for years. Although many teachers oppose such tests, the tests have become part of the American educational system. Consequently, most public school educators recognize the importance of test-appraisal studies and are willing to participate.
The use of such tests for home schooling instructors, however, is new, at least in South Carolina. The state’s home schoolers’ association was strongly opposed to the legislation requiring that some home schooling instructors take the EEE. Thus, the EEE study was conducted in a more highly charged emotional environment than that typically surrounding a teacher licensure test. Recognizing this, the SCDE took steps to help ensure that the state’s home schooling instructors would participate cooperatively in the test-appraisal study.
Early in the planning stages of the EEE test-appraisal study, SCDE officials met with several key representatives of the state’s home schoolers’ association. The primary purpose of the meeting was to provide home schooling instructors with information regarding all facets of the proposed study, such as general procedures, the study’s intent, and the role that home schooling instructors would play. The meeting, however, had several ancillary benefits. It gave SCDE officials an opportunity to hear, directly, several of the concerns held by home schooling instructors. It also helped secure the support of home schooling instructors, support that might not have been attained had such an effort not been made.
For example, we learned that members of the home schoolers’ association had initially decided to refuse to participate in the study if invited. By meeting with home schooling representatives early in the planning stages, SCDE officials were able to emphasize to home schooling instructors the critical role they would play in the study. As a consequence, key members of the home schoolers’ association encouraged the participation of all home schooling instructors who were invited to be panelists.
Selection of Participants
A key issue in planning a test-appraisal study is the identification of the appropriate groups of individuals to render the various judgments regarding the test under review. As described earlier, two primary groups have typically been represented in test-appraisal studies: teachers and college faculty. The college faculty judged the extent to which the skills and knowledge covered on a test are taught in a teacher education program. The appropriateness of collecting such “adequacy-of-preparation” or “opportunity-to-learn” data has recently been challenged, however (Coyle, 1988). Many now agree that the essential element in establishing the defensibility of a licensure examination is the extent to which the tested knowledge and skills are needed for satisfactory job performance. Furthermore, the EEE assesses basic skills in reading, writing, and mathematics, skills that are most commonly taught in elementary and secondary school, not in college teacher education programs. For these reasons, the SCDE decided to convene a single test-review panel that would supply judgments regarding the job-relatedness of the EEE; judgments regarding adequacy-of-preparation would not be collected.
With that issue settled, the next issue was the identification of individuals to participate on the job-relatedness panel. Clearly, judgments regarding a test’s job-relatedness should be made by individuals who are familiar with the job demands required of the educators who must take the test (Yalow, 1983). The EEE was being proposed for use in qualifying home schooling instructors. Thus, one might argue that the review group should consist of individuals who currently serve as home schooling instructors, because they are most familiar with the demands of educating children in the home. One could also argue, however, that the job demands of a home schooling instructor in the basic skill areas of reading, writing, and mathematics are similar to those of public school educators. This would suggest that public school teachers, administrators, and teacher educators should be included as well.
In identifying which groups to include in the EEE study, we faced an interesting paradox. Certainly, home schooling instructors are in an excellent position, perhaps better than any other group, to identify the knowledge or skills necessary for satisfactory performance by a home schooling instructor. At the same time, however, many, if not all, South Carolina home schooling instructors are strongly opposed to the recently enacted home schooling legislation. As a consequence, it was thought that home schooling instructors might use the test-review opportunity to register their disapproval by rendering unduly negative appraisals of the test’s content.
This concern is not one-sided, however. A similar issue arises when one considers the inclusion of public school faculty. Many public educators are not supportive of the home schooling movement. Consequently, they might use the opportunity to manifest their biases by expressing overwhelming support for the test regardless of its relevance to the tasks of a home schooling instructor.
In an effort to minimize the chances of one group biasing the results, the SCDE decided to include representatives from three relevant groups, namely, home schooling instructors, public school teachers and administrators, and university faculty from teacher education programs. SCDE officials decided that home schooling instructors should constitute at least half of the panel, and that there should be about twice as many public school faculty as university faculty. By including multiple categories of reviewers it was hoped that members of the various groups would recognize that they were not the sole judges of the test’s suitability and, hence, would be less likely to provide biased ratings.
Recruitment of Panelists
Nominations of qualified individuals were secured by sending nomination forms to deans or chairpersons of approved teacher education programs and superintendents of public school districts in the state, including districts that currently have home schooling instructors.
Deans or chairpersons were asked to nominate faculty members who had at least two years of experience teaching courses that were part of a teacher preparation program and that focused on elementary or secondary teaching methods. District superintendents were asked to nominate home schooling instructors who had baccalaureate degrees and at least one year of experience as home schooling instructors, and/or two teachers and two principals each with at least two years of experience.
From those individuals nominated, selections were made such that there would be individuals on the panel from small, mid-size, and large districts, and from the various regions within the state. In addition, both black and white educators were selected, as were both males and females. The selected individuals were invited, by mail, to participate in the test-appraisal study.
The Test-Review Session
A total of 33 South Carolina educators participated in the study: five college faculty members, 11 public school faculty members, and 17 home schooling instructors. The panel session was held in Columbia, South Carolina, on December 8, 1988. Following opening remarks, panelists were given time to read written directions that described the judgments they were to make. Then, oral directions were read that included a summary of the more detailed written directions that the panelists had read. After panelists’ questions were answered, the test forms were distributed.
Each panelist received one form of the EEE, which consisted of 56 multiple-choice reading items, 56 multiple-choice mathematics items, and two writing assignments (called writing prompts). (During a test an examinee is to respond to only one of the two writing prompts.)
All panelists completed their review tasks independently. The primary charge to the panelists was to judge the extent to which the knowledge and skills covered on the EEE are necessary prerequisites for satisfactory performance as a home schooling instructor in South Carolina. Specifically, panelists were asked to make four kinds of judgments. For each test item and writing prompt on the test form being reviewed, panelists were to respond “yes” or “no” to both a task-relatedness and an absence-of-bias question. The task-relatedness question for the reading and mathematics items was as follows:
“Is the knowledge or skill needed to answer this item a necessary prerequisite for satisfactory performance by a home-schooling instructor in South Carolina?”
For the writing prompts, this question read:
“Is the knowledge or skill needed to write an acceptable composition in response to the prompt a necessary prerequisite for satisfactory performance by a home-schooling instructor in South Carolina?”
The absence-of-bias question was the following:
“Might this item (or writing prompt) offend or unfairly penalize any group of examinees on the basis of personal characteristics such as gender, ethnicity, religion, or socioeconomic status?”
The panelists were also asked to respond to a content-representativeness question for each of the three sections on the test, that is, reading, mathematics, and writing. After reviewing each section, panelists answered the following question by providing a percentage estimate between 0 and 100 percent rounded to the nearest 10 percent:
“What percentage of the (reading, mathematics, or writing) knowledge and skills needed by a home-schooling instructor in South Carolina is covered on this test form?”
After having reviewed the entire set of items on the test, panelists were asked to make the following overall absence-of-bias judgment:
“Considering the entire set of test items that you just reviewed, do the items, taken as a whole, offend or unfairly penalize any group of examinees on the basis of personal characteristics such as gender, ethnicity, religion, or socioeconomic status?”
Finally, panelists were given an opportunity to register (in writing) any comments or concerns regarding the proposed use of the EEE with home schooling instructors.
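Viewed as data, each panelist’s completed review can be summarized by a single record. The following minimal Python sketch illustrates one such record; the field names and layout are assumptions made here for exposition, not the study’s actual forms or files.

```python
# Illustrative record of one panelist's judgments. Field names and
# structure are assumptions for exposition, not the study's actual forms.
panelist_record = {
    # Per item and per writing prompt: two yes/no judgments, answering
    # the task-relatedness and the absence-of-bias questions.
    "item_judgments": {
        "reading_01": {"task_related": "yes", "might_be_biased": "no"},
        "math_01": {"task_related": "yes", "might_be_biased": "yes"},
        # ...one entry for each of the 112 items and 2 prompts reviewed
    },
    # Per section: a content-representativeness estimate between 0 and
    # 100 percent, rounded to the nearest 10 percent.
    "content_representativeness": {
        "reading": 80,
        "mathematics": 70,
        "writing": 70,
    },
    # One overall absence-of-bias judgment for the full item set.
    "overall_might_be_biased": "no",
    # Optional written comments on the proposed use of the EEE.
    "comments": "",
}
```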
Separating Emotions from the Judgments
During the planning stages of the EEE study, it became abundantly clear that the legislation underlying the study was emotionally charged. As mentioned previously, home schooling instructors did not support the legislation, while many public school educators did.
It was believed that these underlying emotions could interfere with the panelists’ ability to make fair judgments regarding the test’s content. Two steps were taken to address this issue. First, we acknowledged the emotions at the beginning of the review session and, at the same time, encouraged participants to try to separate their feelings about the legislation from the judgments they were being asked to make. Clearly, we did not expect the panelists simply to dismiss their deeply felt sentiments regarding the broader issue. We believe, however, that by addressing the underlying emotions rather than ignoring them, we reduced the likelihood of biased ratings by panelists.
Second, as mentioned above, all panelists were given an opportunity to register (in writing) any concerns or comments regarding the proposed use of the test. Participants were informed at the outset that their written comments would be forwarded to the State Board along with the study results. It was hoped that, given this opportunity to express their opinion about the use of the EEE with home schooling instructors, the participants would be more inclined to give the test a fair review.
Data Analysis
A number of indices were computed from the judgments about each item; only one will be described here. For each item, a per-item score was computed from the number of “yes” and “no” responses given for that item. For the task-relatedness question, the per-item score is the percentage of panelists who responded “yes” to the item. For the absence-of-bias question, the per-item score is the percentage of panelists who responded “no” (indicating a lack of bias) to the item. Then, for the reading and mathematics sections of the test, an average per-item score was computed by taking the mean of the per-item scores. Because the writing section contains two prompts and examinees select one, only the per-prompt scores were computed, not an average.
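To make the index concrete, the following Python sketch shows how per-item and average per-item scores could be computed. The response data and function names are hypothetical illustrations, not the study’s actual analysis code.

```python
# Illustrative computation of per-item and average per-item scores.
# The responses below are hypothetical; they are not the study's data.

def per_item_score(responses, favorable):
    """Percentage of panelists who gave the favorable response.

    For the task-relatedness question the favorable response is "yes";
    for the absence-of-bias question it is "no" (no bias suspected).
    """
    return 100.0 * sum(r == favorable for r in responses) / len(responses)

# Hypothetical task-relatedness judgments for three reading items,
# one list of yes/no responses per item.
reading_judgments = [
    ["yes", "yes", "no", "yes"],   # item 1
    ["yes", "no", "no", "yes"],    # item 2
    ["yes", "yes", "yes", "yes"],  # item 3
]

per_item = [per_item_score(item, "yes") for item in reading_judgments]
print(per_item)  # [75.0, 50.0, 100.0]

# For the reading and mathematics sections, the section-level index is
# the mean of the per-item scores; for the writing section, only the
# two per-prompt scores are reported.
average_per_item = sum(per_item) / len(per_item)
print(average_per_item)  # 75.0
```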
In addition to the individual item judgments, panelists made judgments regarding each of the three sections of the EEE. The panelists were asked to estimate, from 0 to 100, the percentage of content needed by home schooling instructors that was covered on the test being reviewed. These responses were averaged to obtain a content-representativeness mean percentage for each of the three test sections. Panelists were also asked to make a bias judgment regarding the entire set of items; the percentage of panelists responding either “yes” or “no” to this question was computed.
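Continuing the hypothetical sketch above, these section-level summaries reduce to a simple mean and a simple percentage; again, the figures below are invented for illustration.

```python
# Continuing the hypothetical sketch: section-level summaries.

# Content-representativeness: each panelist's 0-100 estimate (rounded
# to the nearest 10) for one section, averaged across panelists.
reading_estimates = [80, 70, 90, 60]
representativeness_mean = sum(reading_estimates) / len(reading_estimates)
print(representativeness_mean)  # 75.0

# Overall absence-of-bias: percentage of panelists answering "no" to
# the question of whether the full item set might offend or unfairly
# penalize any group of examinees.
overall_judgments = ["no", "no", "yes", "no"]
pct_free_of_bias = 100.0 * overall_judgments.count("no") / len(overall_judgments)
print(pct_free_of_bias)  # 75.0
```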
Results
In general, the results were positive, with one notable exception. The task-relatedness average per-item scores for reading and mathematics were approximately 84 and 79, respectively. The per-item score for the first writing prompt was 65, and the per-item score for the second prompt was 71.
The average absence-of-bias scores were approximately 91 and 96 for reading and mathematics, respectively. The absence-of-bias score for the first writing prompt was 44; the score for the second prompt was 91. For the overall bias judgment, approximately 75% of the panelists indicated that the test as a whole (that is, all three sections considered together) is free of bias. The mean percentage estimates of content-representativeness for reading, mathematics, and writing were approximately 75, 77, and 71, respectively.
All these results are positive, with the exception of those for the first writing prompt, which received a task-relatedness score of 65 and an absence-of-bias score of only 44. This writing prompt was designed to elicit a writing sample about a topic relevant to public schooling, which was apparently judged not to be relevant for home schooling instructors. The other prompt, which received more positive results, was about a more general topic. As pointed out earlier, however, examinees can choose the prompt to which they will respond.
Board Approval of the EEE
The study results were presented to the South Carolina Board of Education at a Board meeting in February 1989. At that meeting the Board approved the use of the EEE with home schooling instructors.
References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1985). Standards for educational and psychological testing. Washington, DC: American Psychological Association.
Carlson, R.E. (in press). An alternative methodology for appraising the job-relatedness of NTE Program tests. Applied Measurement in Education.
Common, R.W., & MacMullen, M. (1986, Summer). Home-schooling…a growing movement. Education Canada, pp. 4-7.
Coyle, K.K. (1988, April). An NTE-appraisal shift: From adequacy-of-preparation to job relevance. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.
Equal Employment Opportunity Commission, Civil Service Commission, U.S. Department of Labor, & U.S. Department of Justice. (1978). Adoption by four agencies of uniform guidelines on employee selection procedures. Federal Register, 43, 38290-38315.
Harris, J.J. III, & Fields, R.E. (1982, Fall). Outlaw generation: A legal analysis of the home-instruction movement. Educational Horizons, pp. 26-31.
Lines, P.M. (1987, March). An overview of home instruction. Phi Delta Kappan, pp. 510-517.
Wendel, J., Konnert, W., & Foreman, C. (1986, Summer). Home schooling and compulsory school attendance. School Law Bulletin, pp. 1-8.
Yalow, E.S. (1983, April). Content validity discontent. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.