Issues In Educational Research, Vol 11, 2001

First years of laptops in science classrooms result in more learning about computers than science

Ed Stolarchuk
St Hilda's School, Southport Queensland

Darrell Fisher
Curtin University of Technology

This paper presents the findings of a study designed to assess the effectiveness of laptop computers in Years 8 and 9 science classrooms in Australian Independent Schools. Effectiveness was assessed through students' attitudinal and achievement outcomes and their perceptions of classroom environment. Attitudes to science were assessed using a scale from the Test of Science-Related Attitudes (TOSRA), achievement was measured using scales from the Test of Enquiry Skills (TOES), and students' perceptions of classroom environment were assessed using the Science Classroom Environment Survey (SCES). These quantitative instruments were administered in 1995 to over 800 students in Years 8 and 9 science classes in 14 Independent schools across four Australian states. Qualitative data were collected from two of the fourteen schools in 1996. The reliability and validity of the SCES for science laptop classroom research were confirmed. Laptops appeared to have little effect on students' perceptions of science classroom environment; however, those perceptions were more strongly associated with laptop students' attitudinal outcomes than with their cognitive achievement outcomes. The qualitative data revealed the unexpected finding that, during the first few years of using laptops in science classrooms, students may learn more about computers than science.

Introduction

Since the advent of microcomputers in 1977 (Alessi & Trollip, 1985; Okey, 1985), computers have become increasingly prevalent in schools and have been progressively incorporated into the curriculum in many countries (Hebenstreit, 1992; Okey, 1985). The same pattern occurred in Australia, where, since the late 1980s, laptop computers have become an essential item on students' back-to-school equipment lists in an increasing number of Australian Independent Schools.

The earliest studies of the effects of computers in education usually focused on four factors: achievement, attitude to learning, time to learn and cost (Okey, 1985; Oliver, 1986; Wise & Okey, 1983). To date, however, there has been little reported research on the impact laptop computers have had on students' attitudes to science and their cognitive achievement in science, and there are no published reports on the impact laptop computers have had on students' perceptions of science classroom environment. The few published laptop studies have focused on the more traditional areas of "computers in science classrooms" evaluation, for example, grade improvement, efficiency of learning, work presentation and content retention (Gardner, Morrison, & Jarman, 1993; Loader, 1993; McMillan & Honey, 1993; Mitchell & Loader, 1993; Rowe, 1993; Shears, 1995).

The availability of proven instruments for assessing science classroom environment, student attitude to science and student cognitive achievement in science made this study possible. Furthermore, students' perceptions of science classroom environment have previously been found to be positively associated with their attitudes to science and their cognitive achievement in science (Fraser, 1991; Fraser, 1994; Fraser, Walberg, Welch, & Hattie, 1987; Haertel, Walberg, & Haertel, 1981; McRobbie & Fraser, 1993), and it was of interest to see whether this same association existed in laptop science classrooms.

Classroom environment

Over the past twenty-five years, science education researchers have led the world in the field of science classroom environment research (Fraser, 1994; Fraser & Walberg, 1991), and numerous classroom environment research instruments have been validated for use in educational research. One such instrument is the Individualised Classroom Environment Questionnaire (ICEQ) developed by Fraser. The final published version of the short form of the ICEQ (Fraser, 1990) contained 25 items evenly distributed across the five scales of Personalisation, Participation, Independence, Investigation and Differentiation. Each item is responded to on a five-point, Likert-type scale ranging from 1 (Almost never) to 5 (Very often), with some items being reverse scored.

Validation statistics for the short form show that the Cronbach alpha reliability coefficients ranged from 0.69 to 0.85, indicating good internal scale consistency. Each scale's ability to differentiate between the perceptions of students in different classrooms was confirmed by calculating a one-way ANOVA for each scale, using class membership as the main effect. The ANOVA eta2 statistic calculated for each scale, representing the proportion of variance due to class membership, ranged from 0.21 to 0.39 (p < 0.001 for all scales), indicating adequate scale differentiation. The mean scale correlations ranged from 0.15 to 0.34, indicating that each scale measured a distinct, although somewhat overlapping, dimension of the classroom environment.

These reliability and validity figures confirmed that the short form of the ICEQ is a reliable and valid instrument and that it could be used with confidence to measure students' perceptions of science classroom environment. The short form of the ICEQ was the basis for the construction of the Science Classroom Environment Survey (SCES) used in this study. The major change was that the items of the ICEQ were reworded, as necessary, to ensure each one was written in the personal form. For example, the item 'The teacher is unfriendly to students' was reworded to read 'The teacher is unfriendly to me'. This was done to elicit students' perceptions of their own experiences, rather than their perceptions of the class or group experience as a whole, which was thought to be important in classrooms where laptops are used. That individuals' perceptions are based on their own perspectives was recognised over four decades ago, when Stern, Stein and Bloom (1956) differentiated between private beta press (unique individual perceptions) and consensual beta press (group perceptions).

The ICEQ did not, however, address one aspect of the science classroom environment that was of interest here: whether the laptop science classroom allowed for more student interaction and peer learning and teaching. Various classroom environment instruments were therefore reviewed to identify a scale that could meet this need. A suitable scale was found in the Constructivist Learning Environment Survey (CLES) (Taylor, Fraser, & Fisher, 1997); it is referred to as the Negotiation scale in the SCES. The final version of the SCES thus contains six scales: the five ICEQ scales and the one CLES scale. Each scale consists of five items, some of which are reverse scored, and each item is responded to on a five-point, Likert-type scale ranging from 1 (Almost never) to 5 (Very often). A description of each SCES scale, with a sample item, appears in Table 1.

Table 1: Description of scales and a sample item for each scale of the SCES

Personalisation
    Description: Emphasis on opportunities for individual students to interact with the teacher, and on concern for the personal welfare and social growth of the individual.
    Sample item: The teacher helps me if I am having trouble. (+)

Participation
    Description: Extent to which students are encouraged to participate rather than be passive listeners.
    Sample item: I ask the teacher questions. (+)

Independence
    Description: Extent to which students are allowed to make decisions and have control over their learning and behaviour.
    Sample item: The teacher decides which students I work with. (-)

Investigation
    Description: Emphasis on the skills and processes of inquiry and their use in problem solving and investigation.
    Sample item: I explain the meaning of statements, diagrams and graphs. (+)

Differentiation
    Description: Emphasis on the selective treatment of students on the basis of ability, learning style, interests and rate of working.
    Sample item: I move on to other topics if I work faster than other students. (+)

Negotiation
    Description: Emphasis on opportunities for students to explain and justify their newly developing ideas to other students, and to reflect self-critically on the viability of their own ideas.
    Sample item: Other students ask me to explain my ideas. (+)

(+) Item scored normally. (-) Item reverse scored.
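
The recoding of reverse-scored items is mechanical, and readers wishing to reproduce scale scores may find a sketch useful. The Python fragment below is our own illustration, not code from the study: the data and the function name score_scale are hypothetical, and it simply applies the usual convention that, on a 1 to 5 Likert scale, a reverse-scored item is recoded as 6 minus the raw response, so that higher scores always indicate more of the construct the scale measures.

```python
import numpy as np

def score_scale(responses, reverse_items):
    """Score one five-item SCES-style scale.

    responses: (n_students, n_items) array of raw answers on a 1-5
    Likert scale (1 = Almost never ... 5 = Very often).
    reverse_items: column indices of reverse-scored, '(-)', items.
    Returns the mean item score per student.
    """
    scored = responses.astype(float).copy()
    scored[:, reverse_items] = 6 - scored[:, reverse_items]  # 1<->5, 2<->4
    return scored.mean(axis=1)

# Illustrative only: two students, five items, item index 2 reverse scored
raw = np.array([[4, 5, 2, 4, 3],
                [3, 4, 1, 5, 4]])
print(score_scale(raw, reverse_items=[2]))  # [4.0, 4.2]
```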

Attitude and cognitive achievement

An 'attitude to science' scale, simply referred to as the Attitude scale, was attached to the SCES. It was adopted from the 'Enjoyment of Science Lessons' scale of the Test of Science-Related Attitudes (TOSRA) (Fraser, 1981) and contained five items.

Cognitive achievement was measured using three scales from the 'interpreting and processing information skills' section of the Test of Enquiry Skills (TOES) (Fraser, 1979), which was designed to test non-content specific enquiry skills of science students in grades 7 to 10. The three scales selected were 'Scales', 'Charts and Tables' and 'Graphs'.

Methodology

Numerous Independent schools in all Australian states were contacted to determine whether they used laptop computers in grades 8 and 9 science and, if so, whether they would participate in this study. The final sample consisted of 863 students in 44 grades 8 and 9 science classes in 14 Independent schools in four states. Of these 14 schools, only one had used laptop computers for more than three years. The laptop sample consisted of 433 students in 23 different laptop science classrooms. As a means of comparison, a non-laptop sample of 430 students in 21 non-laptop science classrooms was also included in the study; the non-laptop schools and classes were matched as closely as possible with the laptop sample. Each student in the sample completed the Science Classroom Environment Survey (SCES), with the Attitude scale and enquiry skill scales attached. Qualitative data were collected from students and teachers in two laptop schools.

In keeping with current learning environment research, SCES reliability and validity were confirmed by determining each scale's internal consistency (Cronbach alpha coefficient), discriminant validity (mean correlation of each scale with the other scales) and ability to differentiate between classrooms (ANOVA eta2). The associations between students' perceptions of science laptop classroom environment and students' attitudinal and cognitive achievement outcomes were investigated using both simple and multiple correlation analyses. The simple correlation (r) describes the bivariate association between a selected outcome and each scale of the instrument, in this instance the SCES. The multiple correlation, as expressed by the standardised regression weight (beta), describes the multivariate association between an outcome and a particular scale when all other scales are controlled.
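
As a concrete illustration of these two statistics, the following Python sketch computes the simple correlation r of each scale with an outcome, and the standardised regression weights beta from a regression of the standardised outcome on all six standardised scales together, along with the multiple R2. The data are synthetic and the variable names are ours; this is not the study's analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 433                                    # laptop sample size in this study
X = rng.normal(size=(n, 6))                # stand-in for six SCES scale scores
y = 0.5 * X[:, 0] + 0.2 * X[:, 3] + rng.normal(size=n)  # synthetic outcome

# Simple (bivariate) correlation of each scale with the outcome
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(6)])

# Standardised regression weights (beta): regress z-scored y on z-scored X,
# so each weight reflects one scale's association with the other five controlled
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

R2 = 1 - np.sum((ys - Xs @ beta) ** 2) / np.sum(ys ** 2)  # squared multiple R
print(r.round(2), beta.round(2), round(R2, 2))
```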

The effects laptop computers had on students' perceptions of science classroom environment were investigated by calculating an effect size (ES) for each of the SCES scales. Effect sizes were calculated using Cohen's (1977) d formula, in which the difference between the two group means for each scale is divided by the pooled standard deviation. By convention, effect sizes of 0.2 are considered small, 0.5 medium and 0.8 large.
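
The formula itself is simple. The Python sketch below implements Cohen's d with the pooled standard deviation as described above; the group means and standard deviations used here are synthetic placeholders for illustration, not the study's data.

```python
import numpy as np

def cohens_d(laptop, non_laptop):
    """Cohen's d: (laptop mean - non-laptop mean) / pooled SD."""
    n1, n2 = len(laptop), len(non_laptop)
    pooled_var = ((n1 - 1) * np.var(laptop, ddof=1) +
                  (n2 - 1) * np.var(non_laptop, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(laptop) - np.mean(non_laptop)) / np.sqrt(pooled_var)

# Illustrative only: synthetic scale scores for the two comparison groups
rng = np.random.default_rng(1)
laptop = rng.normal(2.10, 0.65, 433)       # laptop group (n = 433)
non_laptop = rng.normal(1.95, 0.65, 430)   # non-laptop group (n = 430)
print(round(cohens_d(laptop, non_laptop), 2))  # a small effect, near 0.2
```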

The researcher collected qualitative data by conducting interviews with laptop students and teachers after the quantitative data had been initially analysed. This was done to confirm the quantitative findings and to explore possible reasons for some of them. The interviews were recorded and later transcribed for ease of analysis and interpretation.

Results

Validation of the SCES questionnaire
Both the individual student and the class mean were used as units of analysis in the descriptive statistics reported in Table 2. Scale internal consistencies were confirmed by calculating a Cronbach alpha coefficient for each scale. The values obtained were satisfactory, ranging from 0.60 to 0.81 using the individual student as the unit of analysis and from 0.73 to 0.93 using the class mean.

Scale discriminant validity was confirmed by calculating the mean correlation of each of the instrument's six scales with the remaining scales. The correlations ranged from 0.08 to 0.34 using the individual student as the unit of analysis and from 0.08 to 0.45 using the class mean, indicating satisfactory scale discriminant validity. Each scale's ability to differentiate between the perceptions of students in different classrooms was confirmed by calculating a one-way ANOVA for each scale, using class membership as the main effect. The ANOVA eta2 statistic calculated for each scale, representing the proportion of variance due to class membership, ranged from 0.11 to 0.39 (p < 0.01), confirming each scale's ability to differentiate between classrooms.
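
Both statistics can be computed directly from the scored item data. The Python sketch below is our own illustrative implementation of the standard formulas, not code from the study: Cronbach's alpha from the item and total-score variances, and eta squared as the between-class share of the total sum of squares.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_students, k) matrix of scored responses for one scale."""
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_vars / total_var)

def eta_squared(scores, classes):
    """Proportion of variance in scale scores due to class membership:
    between-class sum of squares divided by total sum of squares."""
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_between = sum(len(scores[classes == c]) *
                     (scores[classes == c].mean() - grand) ** 2
                     for c in np.unique(classes))
    return ss_between / ss_total

# Hypothetical usage: scale_scores holds one score per student, and class_ids
# (a numpy array) identifies which of the 23 laptop classes each student is in.
# eta_squared(scale_scores, class_ids)
```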

Scale means are reported in Table 2, as this is the first reported use of the ICEQ scales (which form part of the SCES) in science laptop classrooms. Using the individual student as the unit of analysis, scale means ranged from a high of 3.58 for the Independence scale to a low of 1.99 for the Differentiation scale; using the class mean as the unit of analysis, the corresponding values were 3.57 and 1.97. Fraser (1990) reported class mean scale means for grades 8 and 9 science students ranging from a high of 3.60 for the Participation scale to a low of 2.20 for the Differentiation scale. Based on the reliability and validity statistics calculated for the laptop student sample in this study, the SCES is confirmed as reliable and valid for use in science laptop classroom research.

Associations between laptop student outcomes and science classroom environment
An examination of the simple correlation (r) results in Table 3 indicates that of the 12 possible relationships between science classroom environment and the outcome variables of attitude and achievement, 10 are statistically significant (p < 0.05). Both multiple correlation (R) statistics, between the set of SCES scales and each of the outcomes, are statistically significant. An examination of the multiple correlation beta weights (beta), however, reveals that only three of the 12 possible relationships are statistically significant (p < 0.05). This is still five times that expected by chance alone.

Table 2: Internal consistency (Cronbach alpha coefficient), discriminant validity (mean correlation with other scales), ability to differentiate between classrooms (ANOVA eta2), scale means and standard deviations for the science laptop student sample

Scale            Unit of     Alpha        Mean         ANOVA   Scale  Standard
                 analysis    reliability  correlation  (eta2)  mean   deviation
Personalisation  Individual  0.81         0.34         0.34*   3.26   0.85
                 Class Mean  0.93         0.45                 3.27   0.50
Participation    Individual  0.67         0.32         0.15*   3.45   0.71
                 Class Mean  0.80         0.40                 3.46   0.27
Independence     Individual  0.65         0.08         0.39*   3.58   0.77
                 Class Mean  0.84         0.08                 3.57   0.49
Investigation    Individual  0.67         0.23         0.11*   2.89   0.71
                 Class Mean  0.73         0.23                 2.88   0.23
Differentiation  Individual  0.60         0.10         0.19*   1.99   0.65
                 Class Mean  0.77         0.25                 1.97   0.29
Negotiation      Individual  0.72         0.21         0.11*   3.02   0.72
                 Class Mean  0.87         0.34                 3.04   0.25

*p < 0.01    student n = 433    class n = 23

The multiple correlation (R) statistic of 0.69 (p < 0.001) suggests that the association between students' perceptions of science laptop classroom environment, as measured by the SCES, and students' attitude to science is a strong one. Furthermore, the R2 statistic of 0.48 indicates that 48% of the variance in laptop students' attitude to science is explained by students' perceptions of science classroom environment.

The simple correlation (r) data in Table 3 indicate that all the associations between students' attitudinal outcomes and the SCES scales are statistically significant. The associations range from quite large for the Personalisation, Participation and Investigation scales to small for the Independence and Differentiation scales. All of the associations are positive except for the Differentiation scale. These findings suggest that positive student attitudes to science are promoted in science laptop classrooms that students perceive as characterised by personalisation, participation, independence, investigation and negotiation. In contrast, students' attitude to science decreases in science laptop classrooms that students perceive as characterised by differentiation, although, as noted above, this is a small association.

An examination of the standardised regression weight (beta) data for the attitudinal outcome indicates that only two of the six scales retain their statistical significance. This more conservative analysis suggests that, of the science laptop classroom characteristics identified above as promoting positive student attitudes to science, personalisation and investigation are the most influential.

Table 3: Associations between SCES scales and laptop students' attitudinal and cognitive achievement outcomes in terms of simple correlations (r) and standardised regression coefficients (beta)

                          Strength of classroom environment-outcome association
                          Attitudinal           Achievement
Scale                     r        beta         r        beta
Personalisation           0.65**   0.52**       0.22**    0.12
Participation             0.47**   0.01         0.19**    0.08
Independence              0.11*    0.02         0.13*     0.08
Investigation             0.47**   0.24**       0.05     -0.02
Differentiation          -0.10*   -0.02        -0.26**   -0.22**
Negotiation               0.33**   0.07         0.08      0.01
Multiple correlation (R)  0.69**                0.33**
R2                        0.48                  0.11

*p < 0.05    **p < 0.001    n = 433 (laptop students)

The multiple correlation (R) statistic of 0.33 (p < 0.001) indicates a significant association between science laptop classroom environment, as measured by all the SCES scales, and student cognitive achievement. The R2 statistic indicates that 11% of the variance in students' cognitive achievement can be attributed to laptop students' perceptions of their science classroom environment.

The simple correlation (r) data indicate that four of the six correlations between students' cognitive achievement and the SCES scales are statistically significant. Of these four scales, Personalisation, Participation and Independence are positively correlated and Differentiation is negatively correlated. The correlation values are generally small, suggesting that, to a small extent, students' cognitive achievement is higher in science laptop classrooms that students perceive as characterised by personalisation, participation and independence, and lower in classrooms that students perceive as characterised by differentiation.

The standardised regression weight (beta) data for the achievement outcome in Table 3 indicate that only one of the four scales identified above as statistically significant retains its significance. This more conservative analysis suggests that student cognitive achievement is most influenced by science laptop classrooms perceived as characterised by differentiation (emphasis on the selective treatment of students on the basis of ability, learning style, interests and rate of working).

Table 4: Effect sizes (ES) for laptop computers on the science classroom environment as measured by the SCES scales

Scale            Unit of analysis  Effect size (ES)*
Personalisation  Individual        -0.02
                 Class Mean         0.02
Participation    Individual        -0.04
                 Class Mean         0.00
Independence     Individual         0.01
                 Class Mean         0.15
Investigation    Individual         0.10
                 Class Mean         0.20
Differentiation  Individual         0.25
                 Class Mean         0.45
Negotiation      Individual         0.03
                 Class Mean         0.17

* ES was calculated by subtracting the non-laptop mean from the laptop mean and dividing the difference by the pooled standard deviation (Cohen's d, 1977).

Effects of laptops on students' perceptions of classroom environment
Differences between laptop and non-laptop students' perceptions of science classroom environment were examined by calculating effect sizes for each of the SCES scales. An examination of the direction of the effect sizes in Table 4 reveals that the effects laptop computers have had on science students' perceptions of their classroom environment have all been positive, except for those measured by the Personalisation and Participation scales at the individual student unit of analysis.

An examination of the magnitude of the effect sizes indicates that for the scales of Independence, Investigation and Negotiation, at the class mean unit of analysis, the effects are small (0.15 to 0.20). The effect size for the Differentiation scale, 0.45, approaches the medium effect category at the class mean unit of analysis and is small, 0.25, for the individual student unit of analysis. All other reported effect sizes are negligible, at 0.10 or smaller.

Comparison of cognitive achievement outcomes in laptop and non-laptop classrooms - qualitative findings
The qualitative data strongly supported the quantitative findings that the use of laptops in science did not help to increase student cognitive achievement. Moreover, the qualitative data clearly suggest why laptop science students' cognitive achievement was not quite as good as that of the non-laptop students.

When the laptop students were asked, 'Do you think laptops have made it easier for you to learn some of the things in science?', they indicated that laptops helped them with things such as making tables, spreadsheets, graphing, presentation, getting work done faster, editing work, projects, note taking, writing up investigations and organisation, but that laptops did not really make learning easier. One student commented, "The graphing took a long time to learn, but once you know how to do it, it saves a lot of time and stuff."

When students were asked, 'Do you think laptops affected the way you learn science, or did laptops help you understand science?' their responses indicated that basically they did not think laptops did either. They felt that laptops were only used as tools for graphing, note taking and so on; they were not used to 'teach' any of the science materials or topics. Comments such as the following support this observation.

Not really in science - it did in other subjects. In science you can't use it to do experiments and stuff but you can use it to do notes.

Laptops don't really help with our knowledge of science, except for our knowledge of computers.

It's made a lot of things easier, like reports, and labs and things - it just makes things easier.

When laptop students were asked to comment on the fact that they actually achieved slightly lower than their non-laptop counterparts, their responses included the following.

The laptop didn't help us learn more stuff in the subject, like we learned a lot about computers but not science.

It makes doing notes and stuff a lot faster, so it might give you more time to study and help in that way.

Like they expect you to know everything on the computer and if you don't then you have to waste time to find out how to do it, and you miss out on class things.

It is quite evident that students felt the laptops did not help them 'learn' more science, or learn their science better. Although the cognitive achievement test covered topics such as tables and graphing, which laptop students said they learned to do using the laptop, it appears these students may not have internalised and understood the concepts behind what they were doing to the extent that their non-laptop counterparts did. The cognitive achievement test did not involve any simulation or modelling software; it was a traditional pencil-and-paper test, because non-laptop students also had to sit it.

When teachers were asked why they thought the laptop student group did not have better cognitive achievement scores than the non-laptop group, they responded with comments such as the following.

Ah, I think that to start with the information they are putting in the computer, they don't actually take notice of. But later when this becomes second nature and they know how to get it all set up and ready to go then perhaps the information you are actually teaching them is becoming more important to them.

I guess what I'd say is that there are more questions about the actual computing than science which is maybe a bit of a worry. This could actually detract from the learning of science - they are so busy trying to learn how to do a table they actually don't pay attention to the information going in.

We were teaching a lot of computer skills, like graphing which they then used later - it was an unknown situation.

These comments indicate that students are perhaps so preoccupied with learning how to use their laptops for new purposes, such as graphing, that meaningful learning about graphing itself becomes secondary. This is a plausible explanation and perhaps the main reason why laptop students did not outperform their non-laptop counterparts.

The above teacher comments led to one more question to teachers in this area, 'Do you feel laptop students have a better understanding of computers and are more computer literate than non-laptop students?' Teachers responded with an emphatic "Absolutely, no doubt, yes. It makes school a lot more relevant for them," and "I think a lot of them, by the end of grade 8, really are quite competent and are really quite happy to use computers for all sorts of things."

Conclusions and recommendations

This study has confirmed the SCES as a valid and reliable instrument for collecting student perceptual data on science laptop classroom environment. As the SCES was based on the ICEQ, rewritten in the personal form, the study has also presented the first reliability and validity statistics for the ICEQ when written in the personal form and when used in science laptop classrooms.

The multiple correlation (R) statistics in Table 3 suggest that, of the two student outcomes, the association between students' attitudinal outcomes and their perceptions of science laptop classroom environment (R = 0.69) is just over twice as strong as that between students' cognitive achievement outcomes and those perceptions (R = 0.33). Furthermore, the R2 statistics indicate that the percentage of variance in students' attitudinal scores explained by their perceptions of science laptop classroom environment (48%) is over four times that for students' cognitive achievement scores (11%).

The effect size data in Table 4 suggest that laptops have had minimal effects on students' perceptions of science classroom environment, especially at the individual student unit of analysis.

The qualitative findings generally supported the quantitative findings and provided some insightful explanations, especially for the cognitive achievement data. The results also pointed strongly to the conclusion that students appear to learn more about computers than science during the first few years after laptops are introduced into the science classroom.

The findings are important for beginning science laptop teachers, as they point out the risk of teachers getting too caught up in helping students struggling with their laptops, at the expense of helping students understand and internalise the content and skills being taught using the laptop. Science laptop teachers must also not assume that students learn to read and interpret graphs, for example, by simply using the laptop to create graphs.

These findings are also important for schools planning to introduce a laptop program in science, for the following reasons. First, a consultant or permanent staff member with expertise and experience in teaching science using laptops must be involved when the implementation plan is being formulated and finalised, and a lead time of at least one year is necessary for adequate staff and program development. Second, an implementation plan must be developed and finalised that, at a minimum, addresses the following points.

One, how laptops will be used in the teaching of science. For an effective science laptop program, it is crucial that laptops be used to help students learn, interpret and comprehend scientific concepts, and not simply as tools for the manipulation and presentation of data or the production of scientific reports and projects.

Two, what equipment and materials will be needed to use the laptops effectively in the science classroom, and what skills teachers and students will need to acquire to effectively use this equipment and materials in conjunction with their laptops.

Three, how students are to be taught the laptop skills necessary to do science. Will individual science teachers be responsible for teaching all skills necessary for science, or will there be a centralised approach where students will be taught most skills by a specialist?

Four, teachers will need adequate professional development to gain expertise and experience in all the skills students will learn, even if these skills are taught to students by a specialist or in other subjects.

The findings are also important to schools contemplating the introduction of laptops, as they have indicated positive associations between students' perceptions of science laptop classrooms and especially their attitudinal outcomes. Furthermore, they have indicated that students' cognitive achievement in science could actually be somewhat negatively affected during the first few years after laptops are introduced into the science classroom, if the 'minimum implementation plan' described above is neglected.

References

Alessi, S.M., & Trollip, S.R. (1991). Computer-based instruction. (2nd ed.). Englewood Cliffs: Prentice Hall.

Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York: Academic Press.

Fraser, B.J. (1994). Research on classroom and school climate. In D.L. Gabel, (Ed.), Handbook of research on science teaching and learning. New York: Macmillan.

Fraser, B.J. (1991). Two decades of classroom environment research. In B.J. Fraser & H.J. Walberg (Eds.), Educational environments: Evaluation, antecedents and consequences. Oxford: Pergamon Press.

Fraser, B.J. (1990). Individualised Classroom Environment Questionnaire: Handbook and test master set. Hawthorn: The Australian Council for Educational Research Ltd, Radford House.

Fraser, B.J. (1981). TOSRA: Test of Science-Related Attitudes handbook. Hawthorn: The Australian Council for Educational Research Limited.

Fraser, B.J. (1979). Test of Enquiry Skills handbook. Hawthorn: The Australian Council for Educational Research Limited.

Fraser, B.J., & Walberg, H.J. (Eds.). (1991). Educational environments: Evaluation, antecedents, and consequences. Oxford: Pergamon Press.

Fraser, B.J., Walberg, H.J., Welch, W.W., & Hattie, J.A. (1987). Synthesis of educational productivity research. International Journal of Educational Research, 11(2), 145-252.

Gardner, J., Morrison, H., & Jarman, R. (1993). The impact of high access to computers on learning. Journal of Computer Assisted Learning, 9, 2-16.

Haertel, G.D., Walberg, H.J., & Haertel, E.H. (1981). Socio-psychological environments and learning: a quantitative synthesis. British Educational Research Journal, 7, 27-36.

Hebenstreit, J. (1992). Where are we and how did we get there? In UNESCO (Ed.), Education and informatics worldwide: The state of the art and beyond (pp. 9-65). London: Jessica Kingsley Publishers.

Loader, D. (1993). Reconstructing an Australian school. The Computing Teacher, 20(7), 12, 14-15.

McMillan, K., & Honey, M. (1993). Year one of Project Pulse: Pupils using laptops in science and English. A final report. New York: Bank Street College of Education. (ERIC Document Reproduction Service No. ED 358 822)

McRobbie, C.J., & Fraser, B.J. (1993). Associations between student outcomes and psychosocial science environment. Journal of Educational Research, 87, 78-85.

Mitchell, J., & Loader, D. (1993). Learning in a learning community: Methodist Ladies' College case study. Jolimont: Incorporated Association of Registered Teachers of Victoria.

Okey, J.R. (1985). The effectiveness of computer-based education: A review. Paper presented at the Annual Meeting of the National Association for Research in Science Teaching, French Lick Springs.

Oliver, R. (1986). Using computers in schools: A guide for teachers. Willetton: Heron Publishing.

Rowe, H.A.H. (1993). Learning with personal computers. Camberwell: The Australian Council for Educational Research.

Shears, L. (Ed.). (1995). Computers and schools. Camberwell: The Australian Council for Educational Research Ltd.

Stern, G.G., Stein, M.I., & Bloom, B.S. (1956). Methods in personality assessment. Glencoe: Free Press.

Taylor, P.C., Fraser, B.J., & Fisher, D.L. (1997). Monitoring constructivist classroom learning environments. International Journal of Educational Research, 27, 293-302.

Wise, K.C., & Okey, J.R. (1983, April). The impact of microcomputer based instruction on student achievement. Paper presented at the Annual Meeting of the National Association for Research in Science Teaching, Dallas.

Authors: Darrell Fisher is an Associate Professor in the National Key Centre for School Science and Mathematics at Curtin University of Technology, Perth WA. His research interests involve studies of classroom and school environments, teacher-student interpersonal behaviour and curriculum evaluation.

Ed Stolarchuk is Head of Faculty, Science and Technology at St Hilda's School, Southport, Queensland. His research interests involve the effects the use of laptop computers in science classrooms have on students' attitudes and cognitive achievement in science.

Please cite as: Stolarchuk, E. and Fisher, D. (2001). First years of laptops in science classrooms result in more learning about computers than science. Issues In Educational Research, 11(1), 25-39. http://www.iier.org.au/iier11/stolarchuk.html

