Date: June 14, 1999

To: WASC Self Study Office (M 135-A)

From: David Rutherford

Subject: Final report for Learning Outcomes Assessment Grant



This report is submitted as a requirement of the Learning Outcomes Assessment Grant awarded earlier in the semester. Please note that this is an ongoing research project and that a substantially more extensive analysis is in process, scheduled to be complete by December 1999 in the form of a Master's thesis.

On behalf of myself and the faculty members involved in this study, I wish to thank you for the grant monies you made available to us. If we may be of further assistance in the WASC self study and/or visitation process, please let us know and we will be glad to participate.


Computer Aided Instruction in Geography

The Geography Department introduced a computer aided instructional program for the introductory level World Regional Geography course (Geog. 100). This program was developed in the department and uses Geographic Information System (GIS) software to provide the primary instruction of selected systematic topics in geography. Developed and tested in Spring 1998, the program was implemented on a small scale in Fall 1998 and then incorporated into five sections of the Geog. 100 class during the Spring 1999 semester. Three faculty members and a graduate assistant participated in a collaborative effort to implement the program and assess the learning taking place.

This computer aided instruction (CAI) of systematic topics in geography accounted for one-third of the total instructional time in the courses; the remaining two-thirds was devoted mostly to regional study of the world’s major geographic realms delivered using traditional instructional strategies. The CAI portion of the course was delivered during regularly scheduled course meeting times (e.g., Friday of a Monday/Wednesday/Friday course). Students worked in collaborative groups of two or three at each computer in order to complete hard-copy exercises that contained brief explanations of concepts along with questions for students to answer and directions for navigating through the instructional material available in the CAI program. Each exercise was designed to be completed in one class meeting with a minimum of instruction from a lab assistant.

The computer program and the computer based small group instructional format were developed by Dr. William Lloyd, Geography Department Chair, who currently oversees implementation and assessment as well as continuing development. Two other faculty members taught the various sections of the course during the Spring 1999 semester and worked to integrate the systematic topics of the CAI program into the regional topics of the lecture portion of the course. A graduate assistant conducted the CAI portion of the course; he also developed and implemented the evaluation and assessment program as his thesis research in Geography Education.

The literature dealing with computer aided instruction emphasizes a mosaic approach to assessment (Proctor and Richardson 1997; Reeves 1991). We are following that model using several methodologies, including student surveys, student focus groups, and an experiment that compares student learning with CAI versus a lecture instructional strategy. These were all conducted during the Spring 1999 semester and have generated substantial data. Preliminary statistical analyses have been conducted on the quantitative data, and a partial preliminary analysis has been conducted on the qualitative data. This paper reports the status of the assessment research as of the end of the Spring 1999 semester. The final report on this research is scheduled to be complete by the end of the Fall 1999 semester in the form of a Master’s thesis. The assessment originally set out to address four main questions, each of which is discussed below. A fifth question emerged during the semester and has been added.

Question #1. Is student learning of map reading skills, geographic facts and comprehension of geographic concepts more effectively achieved through use of the CAI program or traditional lecture format?

The comparative experiment was the principal means of gathering data to answer this question. One of the five course sections was excluded from the experiment: the night class, which had a mean student age of 27.48 years. The four daytime sections had younger and more consistent mean student ages of 19.16, 19.44, 19.67, and 20.71 years. The four sections were aggregated so as to produce two groups containing as equal a number of students as possible within the constraints of the course enrollments: sixty-five students in Group #1 and forty-nine in Group #2. A pretest containing eighteen questions was administered to all students at the beginning of the semester and established that no significant difference in geography knowledge existed between the two groups.

Two systematic topics were selected, and Group #1 received lecture instruction over that content while Group #2 received CAI. Students were not aware of the experiment. The two instructional strategies were closely monitored to ensure that identical material was presented; the same facts and concepts were presented in the same order and using the same maps and graphics in both instructional strategies. Student achievement was assessed by administering the same mid-term exam to all students. The exam was a multiple choice objective test that contained a total of seventy-five questions, nineteen of which came from the two systematic topics related to the experiment. The mean scores on the nineteen questions were 11.39 for Group #1 and 10.72 for Group #2. The Mann-Whitney U test revealed that this was not a significant difference (Table #2).
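For readers unfamiliar with the test, the Mann-Whitney U statistic can be computed directly by counting pairwise comparisons between the two groups' scores. The following is a minimal sketch in Python; the score values are purely illustrative, not the study data.

```python
# Minimal sketch of the Mann-Whitney U statistic (direct counting form).
# The sample scores below are hypothetical, not the actual exam data.

def mann_whitney_u(xs, ys):
    """Return (U_x, U_y): U_x counts pairs (x, y) with x > y,
    crediting 0.5 for each tied pair; U_y is the complement."""
    u_x = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u_x += 1.0
            elif x == y:
                u_x += 0.5
    u_y = len(xs) * len(ys) - u_x
    return u_x, u_y

# Illustrative per-student scores on the nineteen experiment questions
group1 = [12, 11, 13, 10, 12, 11]
group2 = [11, 10, 12, 9, 11, 12]
u1, u2 = mann_whitney_u(group1, group2)
```

The smaller of the two U values would then be compared against a critical value (or converted to a normal approximation for larger samples) to judge significance.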

This result was not surprising, and is consistent with the vast majority of research literature that has compared the overall effectiveness of CAI to other forms of instruction (Bayard-White 1985; Tiene et al. 1989; Krendl and Clark 1994). In anticipation of such a result, specific exam questions were developed using Bloom's taxonomy of educational objectives (Bloom et al. 1956) to test for differences in student achievement which might occur in the learning of facts, the comprehension of concepts, and the application of learning to new situations. Preliminary evaluation of these data has not revealed discernible overall trends. In other words, at this point we cannot state whether CAI or lecture instruction is more effective from an overall standpoint for any of these learning objectives. Further analysis of the data is needed to determine if either of the instructional strategies is more effective in improving student achievement in one or more of Bloom's educational objectives.

An additional question has come to light. Perhaps the different instructional strategies will prove to be more effective at delivering different aspects of subject content. Consequently, another approach to the analysis has emerged, that is, looking at specific subject content (both facts and concepts) to see if patterns of student learning appear. For example, comprehension of population growth rates may have been accomplished more effectively with lecture instruction while the CAI may have been a more effective means to teach a knowledge of the global distribution of major climate zones.

Question #2. To what extent does this CAI program promote interaction between students of diverse ethnic backgrounds?

For this portion of the study we are primarily attempting to determine the extent to which students experienced a multicultural work environment by participating in the self-selected, computer-based, small group instructional strategy. At the beginning of the semester, students completed a survey questionnaire in which they were asked to self-identify their ethnic affiliation using the twenty-six categories found on the California State University entrance application. Using this information, a record was kept of the ethnic composition of all the self-selected student groups that formed during the semester. All five sections of the course were involved in this portion of the study, for a total of 154 students. Seventeen different ethnicities (in addition to those who declined to state their ethnicity) were found among the 154 students. The number of ethnicities found in the class groups varied:

    Group #1: 27 total students, 11 ethnicities
    Group #2: 44 total students, 15 ethnicities
    Group #3: 33 total students, 8 ethnicities
    Group #4: 21 total students, 9 ethnicities
    Group #5: 29 total students, 7 ethnicities.

A total of 386 groupings of students occurred during the semester. This included groups of three, pairs of students, and single students working alone. Students were encouraged but not required to work in groups, and students who chose to work alone were not considered to have engaged in inter-ethnic or multicultural interaction through the instructional activity. "Groups" made up of single students comprised a mean of 21.4% of the total number of student groups that formed. Students who formed groups of two or three students of the same ethnic group made up another 16.6% of the total number of groups that formed. This means that 38% of the groups involved no inter-ethnic make-up, while the remaining 62% comprised inter-ethnic work groups.
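The three-way classification used above (students working alone, same-ethnicity groups, and inter-ethnic groups) can be tallied mechanically. The sketch below uses hypothetical group records, not the study data; each group is represented as the list of its members' ethnicity categories.

```python
# Minimal sketch: classifying student work groups by ethnic composition.
# The group records below are hypothetical, not the actual study records.
from collections import Counter

def classify(group):
    """Return the composition category for one work group."""
    if len(group) == 1:
        return "single"
    return "same" if len(set(group)) == 1 else "inter-ethnic"

# Hypothetical groups; letters stand in for ethnicity categories
groups = [["A"], ["A", "B"], ["B", "B"], ["A", "B", "C"], ["C", "C", "C"]]
counts = Counter(classify(g) for g in groups)
shares = {k: v / len(groups) for k, v in counts.items()}
```

Summing the "single" and "same" shares gives the proportion of groups with no inter-ethnic make-up, matching the arithmetic reported above (21.4% + 16.6% = 38%).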

In addition, the survey asked students to rate their agreement or disagreement, using a five-point Likert scale, with six statements related to attitudes toward cultural diversity, inter-ethnic relations, and comfort in a multicultural working environment. These surveys indicated a generally positive attitude toward these issues. A follow-up survey asking the same questions was conducted at the end of the semester. Two primary goals of this portion of the study are to compare change in attitudes over the course of the semester and to determine whether any statistically significant correlations exist between ethnicity and attitudes toward multiculturalism. Questions related to these multicultural issues were also discussed during the focus group sessions and will provide qualitative data to use in conjunction with the quantitative data.

Question #3. What is the effectiveness of this CAI program in improving students’ awareness of the process involved in locating and evaluating information?

This was originally proposed as a research question, and it is certainly an important topic to consider in our increasingly information based society. However, it was not adequately developed and incorporated as a learning objective into the courses. Consequently, the current assessment will not investigate this question.

Question #4. Does this CAI program provide a level of computer use which gently introduces computer skills to students with no computer experience while simultaneously extending the skills of more advanced users?

A section of the survey questionnaires asked students to self-evaluate their level of competence in using computers. These data have not yet been processed. We hope to compare responses from the surveys administered at the beginning and end of the semester. In addition, focus group data will be integrated into the analysis.

Question #5. How do students evaluate various instructional strategies as a use of their learning time?

Technology-based instruction is only one of six different learning environments used in the Geog. 100 course.

At the end of the Spring 1999 semester we administered a brief survey in which we asked students to rate the six learning environments as a use of the time they spent learning geography. They ranked each activity on a scale ranging from 1 (poor use of their time) to 5 (excellent use of their time). The survey was administered to students from several sections, all taught by the same instructor. A total of 81 students completed the survey. The results were as follows:

[Table: Student Rating of Mode of Instruction (columns: Mode of Instruction, Mean Rank)]

Applying the Wilcoxon signed-ranks test, we found that differences among the three highest ranked and three lowest ranked activities were not statistically significant. However, differences between the top three and bottom three were significant at the 0.05 level.
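The Wilcoxon signed-ranks statistic compares paired ratings by ranking the absolute differences between each student's two ratings. A minimal sketch follows, using hypothetical paired ratings rather than the actual survey responses.

```python
# Minimal sketch of the Wilcoxon signed-ranks statistic W.
# The paired ratings below are hypothetical, not the survey data.

def wilcoxon_w(a, b):
    """Return W = min(sum of positive ranks, sum of negative ranks),
    ranking absolute pair differences and averaging ranks across ties."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    order = sorted((abs(d), i) for i, d in enumerate(diffs))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and order[j][0] == order[i][0]:
            j += 1  # find the run of tied absolute differences
        avg = (i + 1 + j) / 2.0  # average of the ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k][1]] = avg
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical ratings of two activities by the same five students
w = wilcoxon_w([5, 4, 4, 3, 2], [3, 3, 4, 4, 4])
```

A small W (relative to the critical value for the number of non-zero pairs) indicates that students systematically rated one activity above the other.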

In general, then, students believe that their learning time is best spent working at the computer, viewing slide lectures, and watching and discussing videos. Further attempts to separate the top three using cross-tabulation and chi-square analysis yielded no significant differences among the students. Instead, we found that students who rated computers as a good to excellent use of their time also tended to have the same opinion of slide lectures and videos, while students who did not like learning with computers did not like learning with slide lectures or videos either.
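The chi-square statistic underlying that cross-tabulation analysis compares observed cell counts with the counts expected under independence. The sketch below uses a hypothetical 2x2 table, not the actual cross-tabulation of student ratings.

```python
# Minimal sketch of the chi-square statistic for a contingency table.
# The 2x2 table below is hypothetical, not the actual survey cross-tab.

def chi_square(table):
    """Return the chi-square statistic: sum over all cells of
    (observed - expected)^2 / expected, with expected counts
    derived from the row and column margins."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: rated computers high / low; columns: rated slide lectures high / low
table = [[30, 10], [10, 30]]
stat = chi_square(table)
```

A large statistic relative to the chi-square critical value (with (rows-1)(cols-1) degrees of freedom) would indicate that the two ratings are associated rather than independent, which is the pattern we observed informally.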


References

Bayard-White, C. 1985. Interactive Video Case Studies and Directory (London, Great

Bloom, B.S., Englehart, M.D., Furst, E.J., Hill, W.H. and Krathwohl, D.R. 1956.

Krendl, K. A. and Clark, G. 1994. The impact of computers on learning: research in-

Proctor, J. D. and Richardson, A. E. 1997. Evaluating the effectiveness of

Reeves, T. C. 1991. Ten commandments for the evaluation of interactive

Tiene, D., Evans, A., Milheim, W., Callahan, B., and Buck, S. 1989. The instructional