Issues In Educational Research, Vol 20(3), 2010

Journal ratings and the publications of Australian academics

Brian Hemmings and Russell Kay
Charles Sturt University

Journal ranking systems are becoming more commonplace and receiving increased attention from academics and their senior managers. The study reported below was designed to examine the relationship between particular background factors (e.g., gender and qualifications) of academics and the rankings of what they perceived as their most significant refereed journal publication. Survey data were gathered from academics employed at two Australian universities and these data were analysed using a variety of nonparametric statistical procedures. The results demonstrated that: 1) less than half of these academics nominated a publication ranked in the top 20% of journals in their respective field of research; 2) academics who had published greater numbers of journal articles were more likely to have higher rated publications; 3) more senior academics were more likely to publish in the higher-rated journals than their junior counterparts; and, 4) academics with doctorates were much more likely to identify an article published in the highest rated journals. The implications of these results for higher education practice are considered.


Introduction

Considerable scholarly effort has been expended on investigating those factors which affect the publication outputs of academics (Budd, 1995; Green, 1998; Pratt, Margaritas, & Coy, 1999; Raijmakers, 2003; Stack, 2004). This effort has been pronounced for a number of decades and the results of these studies have shown especially that the background factors of academics (e.g., qualifications, academic level, and gender) have been associated with publication productivity. To exemplify, Tien (2000), researching within a Taiwanese context, has demonstrated that academics holding a doctoral qualification, compared with those holding lesser qualifications, were more inclined to publish articles in refereed journals. More recently, an Australian study, undertaken by Hemmings and Kay (2007) and using a structural equation modelling approach, has highlighted the direct and significant effect that qualifications can have on publication output. That is, those with doctorates, compared to those without doctorates, were more likely to produce greater numbers of peer-reviewed works in the form of books, book chapters, conference papers, and journal articles. Hemmings and Kay (2007) also found that qualifications had an indirect effect on publication output by acting on a factor referred to as 'writing confidence'.

A consistent trend emerges when viewing the outcomes of studies examining the relationship between academic level (or seniority) and publication output. Both Blackburn and Lawrence (1995) and Sax, Hagedorn, Arredondo and Dicrisi (2002), for example, conclude from their research that academics with greater seniority tend to have greater publication output than their more junior counterparts. It has been argued that this trend reflects, in particular, the advantage that senior academics have in relation to accessing networks and resources such as doctoral students (Dundar & Lewis, 1998).

A recurrent theme in the literature relating to publishing outcomes has been the influence of gender, with the bulk of the evidence indicating that male academics, on average, produce more publications than female academics (Research Corporation, 2001). The main reason given for such a discrepancy is that females usually give more of their time and energy to parenting and marital responsibilities (Raijmakers, 2003; Sax et al., 2002; Stack, 2004).

Researchers such as Coates, Goedegebuure, van der Lee, and Meek (2008) and Watty, Bellamy, and Morley (2008) have described how the academic landscape in Australia, during the past ten years, has changed. One of the biggest changes has been the shift in emphasis from the quantity of publications an academic is expected to produce to a focus on the quality of those publications (Fairbairn, Holbrook, Bourke, Preston, Cantwell, & Scevak, 2009; Graham, 2008). This shift reflects government policy and is in line with research performance assessments in other jurisdictions such as the United Kingdom and New Zealand (see, for example, Hemmings, Rushbrook, & Smith, 2007; Jarwal, Brion, & King, 2009; Rosenstreich, 2007). Interestingly, both sides of Australian politics have instigated a push to assess research outcomes and the quality of those outcomes (Goodyear, 2008). From 2006-2007, the then Howard conservative government set up the Research Quality Framework (RQF) and the more liberal Rudd government in 2008 superseded the RQF with the Excellence in Research for Australia (ERA) initiative. One of the main differences between the two research performance assessments is that the ERA is administered by the Australian Research Council (ARC) rather than a federal government department (Watson, 2008). And, a key feature of the ERA is to benchmark Australia's research effort against international competitors (Carr, 2008; Sharpe, 2008). The ERA is still in its infancy, with the first comprehensive assessment being carried out during 2010.

At the forefront of the ERA initiative is a new journal ranking system. This system uses 181 field-of-research codes and is based on approximately 19,500 indexed journals (Lamp, 2009); by comparison, Thomson Reuters and Scopus draw on 15,000 journals. The journals in the new system are rated under four tiers or levels, namely, A* (top 5%), A (next 15%), B (next 30%), and C (bottom 50%). The final ranks were released in early 2010 following sector feedback gained from individuals, professional associations, and other groupings (Goodyear, 2008). Before this release, a set of draft ranks was made available and a lengthy consultation period ensued (Graham, 2008). Anecdotal evidence suggests that these draft rankings had a bearing on the writing and publication behaviours of some academics, especially those working in discipline areas which did not have an explicit rating system. It is important to note that journal ratings have been used for many years across a range of discipline areas (e.g., economics, marketing, and medicine), but other disciplines or sub-disciplines have no history of using formalised ratings (Fairbairn et al., 2009; Rosenstreich, 2007). Some writers see the rating process as quite problematic. Barrio Minton, Fernando, and Ray (2008), for example, argue that the process to determine ratings can either lack objectivity or be subject to bias and simple manipulation. Others such as Goodyear (2008) and Jarwal et al. (2009) contend that ratings, a proxy for journal quality, can only be a guide at best because of the possible variation in the quality of the articles appearing in a journal. And, Lamp (2009) points out that niche journals are likely to be rated poorly and that emerging journals may not be listed. This raises a question as to how new journals can gain status, and therefore a rating, given that the publishing behaviours of academics will be shaped by the ERA system and, in particular, directed towards journals with a relatively high rating.

Atkinson (2010) identifies at least two noteworthy issues in relation to the ARC journal ranking list. First, there is a lack of detailed information available on the processes employed to set the ranks; and second, considerable work still needs to be completed to reconcile differences between the journal rankings and Scopus 'journalmetrics'. Genoni, Haddow and Dumbell (2009) add support to this view by claiming that their research, based on Australian social sciences and humanities journals, calls into question the "reliance by ERA on the peer ranking of journals" (p. 13). They also assert that there is evidence that the ARC journal ranking system has become so highly influential in the Australian higher education context that it is 'out-muscling' other systems such as the Thomson Reuters Impact Factor and Scopus bibliometrics.

Generally speaking, the Australian literature pertaining to the ERA initiative does not have a widely recognised empirical base. The study reported here goes against this tradition by drawing on survey data from a sample of Australian academics. Even though the study was conducted prior to the introduction of the ERA initiative and the creation of the ARC journal ranking system, the data collected have the potential to offer some useful insights about a set of factors that could influence publication behaviour within the emerging ERA context. Specifically, the study was designed to examine the relationship between the background factors of academics (viz., gender, qualifications, and academic level) and the rankings of what they perceived as their most significant refereed journal publication.

Method

The participants (N=357) were academic staff drawn from two of the 40 Australian universities; one was a pre-Dawkins university and the other was a post-Dawkins university (Gable, 2006). The participating sample had the following characteristics. In terms of gender, 52.8 per cent were male and 47.2 per cent were female. The majority held doctoral qualifications (68.5 per cent); whereas, the remaining 31.5 per cent had gained lesser qualifications. And, the sample separated into three academic seniority levels, namely, Associate Lecturer/Lecturer [Level A/B] (49.5 per cent), Senior Lecturer [Level C] (30.1 per cent), and Associate Professor/Professor [Level D/E] (20.4 per cent). It also needs to be kept in mind that the sample comprised participants drawn from the full range of disciplines commonly taught in Australian universities. These participants were affiliated with eight categorised-research fields, including information and computing, medical and health science, and arts/humanities/social sciences.

The participants were surveyed by means of a mail-out questionnaire. A follow-up reminder to complete the questionnaire was placed on the electronic noticeboards of the two universities as a means of raising the response rate at each institution. Participants were then given up to four weeks to return their questionnaires in a pre-paid envelope. There was a response rate of approximately 36 per cent.

The questionnaire sought information of a background nature as well as answers to questions about publication history and behaviours. More than 60 per cent of the sample responded to the question that asked for a nomination of the 'most significant journal publication' they had achieved in their career, including the title of the journal and the year that it was published. Approximately 20 per cent of the sample had not produced a journal publication and the remaining participants failed to identify one of their articles. Apart from apathy, it was assumed that these non-respondents were unable or reluctant to identify a single journal article. This reluctance may have been fuelled by a concern with anonymity.

A five-point rating scale was used to rank the self-nominated journals (0 = not rated or unlisted, 1 = C rated journal, 2 = B rated journal, 3 = A rated journal, and 4 = A* rated journal). This scale was based on the ERA rankings of journals made available at the beginning of 2010. A scale categorising the total number of refereed journal articles reported by the academics was also produced. This second scale ranged from 1 (only one or two publications) to 5 (more than 20 publications). All the analyses of the data were performed using SPSS (Version 16.0).
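
To illustrate how these two scales work, a minimal recoding sketch is given below. It is offered purely as an illustration, since the analyses reported in this paper were carried out in SPSS: the Python code, the function names, and the middle cut-points for the publication-count categories (only the end categories are defined above) are assumptions rather than values taken from the study.

# A minimal sketch of the two recoding scales described above (illustrative
# only; the analyses reported in this paper were performed in SPSS 16.0).

TIER_TO_RATING = {"A*": 4, "A": 3, "B": 2, "C": 1}  # anything else -> 0

def rate_journal(era_tier):
    """Map a nominated journal's ERA tier onto the 0-4 rating scale."""
    return TIER_TO_RATING.get(era_tier, 0)  # 0 = not rated or unlisted

def categorise_output(n_articles):
    """Collapse a refereed-article count into the 1-5 output categories.
    Only the end categories (1 = one or two; 5 = more than 20) are stated
    in the paper; the boundaries between categories 2-4 are assumed here."""
    if n_articles <= 2:
        return 1
    if n_articles <= 5:    # assumed boundary
        return 2
    if n_articles <= 10:   # assumed boundary
        return 3
    if n_articles <= 20:   # assumed boundary
        return 4
    return 5

# Example: a respondent nominating a B rated journal and reporting 12 articles.
print(rate_journal("B"), categorise_output(12))  # -> 2 4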

Results

The results that follow are based on frequency, cross-tabulation, chi-square tests, a correlation analysis, and a Wilcoxon signed rank test.

As can be seen in Table 1, approximately 40 per cent of the respondents nominated a journal in one of the top two rating categories (A* or A). A similar percentage of respondents (namely, 47 per cent) listed a B or C rated journal, whereas only 13 per cent of the respondents nominated a journal that was not rated or unlisted.

Table 1: Frequency of journal rating


Rating                       Frequency    Percentage
0 (not rated or unlisted)        27           12.6
1 (C rated journal)              40           18.6
2 (B rated journal)              61           28.4
3 (A rated journal)              53           24.7
4 (A* rated journal)             34           15.8
Total                           215          100

An examination of the various bar graphs (refer to Figure 1) revealed that academics who had published many journal articles (that is, those with publication output categorised as either 4 or 5) were more likely to have higher rated publications, while those with fewer publications tended to publish articles in lower rated journals, that is, B, C, or unlisted journals. For example, 39% of the publications of category 5 publishers were at the A* level, while a similar percentage of the lowest publication group were unlisted. Moreover, the median number of journal articles written by academics whose articles were rated with an A* was 16; in contrast, for respondents with B or C rated and unlisted articles respectively, the medians were 7 and 5.

Figure 1: Percentages of published articles by journal rating and relationship to publication

Chi-square testing examines whether there is a significant difference between expected and observed frequencies within a distribution. In this study, chi-square tests were used to examine the relationships between the background characteristics of academics and the rankings of what they perceived as their most significant refereed journal publication. Three tests were carried out to examine the relationship between the nominated journal ranking and gender, qualifications held, and level of employment respectively. First, it was found that there was no significant relationship between gender and journal publication ratings [χ²(2, N = 215) = 4.52, p = .105]. Second, and in regard to academic qualifications, academics with doctorates were more likely to identify an article published in the highest rated journals [χ²(2, N = 215) = 10.63, p = .02]. Third and last, the Level C and D/E academics were significantly more likely to have articles published in the highest two rated journal categories [χ²(4, N = 215) = 13.37, p = .012].
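
As a concrete illustration of this kind of test, the sketch below runs a chi-square test of independence on a small contingency table of qualification group by collapsed rating category. The counts shown are hypothetical placeholders rather than the study's data, and Python/scipy is used here only as a stand-in for the SPSS procedures actually employed.

# Chi-square test of independence, analogous to the tests reported above.
# The contingency table is hypothetical (illustration only, not study data):
# rows are qualification groups, columns are collapsed rating categories.
from scipy.stats import chi2_contingency

observed = [
    [30, 60, 57],   # doctorate holders (hypothetical counts)
    [25, 28, 15],   # non-doctorate holders (hypothetical counts)
]

chi2, p, dof, expected = chi2_contingency(observed)
n = sum(sum(row) for row in observed)
print(f"chi-squared({dof}, N={n}) = {chi2:.2f}, p = {p:.3f}")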

Using the list of journals nominated by this particular sample of academics, a rank-order correlation between the draft and final ERA journal rankings yielded a significant result [r(198) = .70, p < .001]. Although this result is statistically significant, a correlation of this size still indicates a considerable amount of variation between the two ranking lists. Drawing on the same sample of 198 journal titles, a Wilcoxon signed rank test was also used to compare the results provided by the two ERA journal ranking procedures. The test showed that there was no overall significant difference between the ratings provided by the two procedures [z(198) = -.562, p = .574]. Taken together, the results of the rank-order correlation and the Wilcoxon signed rank test suggest that, in spite of the amount of variation between the two ranking lists, there was no substantive shift, up or down, for this sample in the move from one procedure to the other.
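
The pair of procedures used in this comparison can be sketched as follows. The paired ratings are invented placeholders, not the 198 journals analysed in the study, and scipy again stands in for the SPSS routines actually used; the sketch simply shows a rank-order (Spearman) correlation and a Wilcoxon signed rank test applied to matched draft and final ratings.

# Spearman rank-order correlation and Wilcoxon signed rank test on paired
# draft/final journal ratings coded A* = 4 ... C = 1. The twelve pairs are
# invented placeholders, not the journals analysed in the study.
from scipy.stats import spearmanr, wilcoxon

draft_ratings = [4, 4, 3, 3, 2, 2, 1, 3, 2, 4, 3, 1]
final_ratings = [3, 2, 4, 2, 1, 3, 2, 2, 1, 3, 2, 2]

rho, p_rho = spearmanr(draft_ratings, final_ratings)
w_stat, p_w = wilcoxon(draft_ratings, final_ratings)  # paired, nonparametric

print(f"Spearman r = {rho:.2f}, p = {p_rho:.3f}")
print(f"Wilcoxon signed rank: W = {w_stat:.1f}, p = {p_w:.3f}")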

Discussion

There is a paucity of information available about the ERA journal ranking system and its implications for Australian academics. Although the results of the study reported here are based on a relatively small sample of academics, they show that a number of individual factors, namely, qualifications and seniority, are linked to the publishing outcomes of the academics surveyed. These are important findings and provide a foundation for future studies. No doubt, other studies will be designed to examine how other personal attributes and, perhaps, institutional features could impact on the ERA journal rankings obtained by academics. One study could investigate why some academics target journals with a low rank (i.e., a C rating) or an unlisted rank. Such a study might show that these academics have legitimate reasons (e.g., wide circulation and readership) for their choice of publication. However, on another front, a study might reveal that some academics are either confused or quite naïve about the journal publishing process. This final point resonates with a viewpoint expressed by Fairbairn et al. (2009) about early career researchers in the education discipline. They conclude that "there is little comprehensive information about the range and scope of refereed education research journals" (p. 2).

One interesting finding worthy of additional comment was the correlation (r = .70) found between the draft and the final ERA journal rankings. This correlation was based on a total of 198 different journals across a range of disciplines and showed that there was quite a discrepancy between the two published rankings. If this relationship were to be maintained across the entire list of ERA journals, then those submitting manuscripts to draft-ranked A* and A journals may have had their manuscripts accepted, at a later stage, by the same journals but with a lower rank. Such a discrepancy has meant that some academics seeking higher honours (e.g., promotion and 'tenure'), through the targeting of A* and A journals, have been somewhat disadvantaged. Anecdotal evidence from colleagues, based in several Australian institutions, suggests that the changes between the draft and final rankings have left some academics disgruntled and frustrated. This is not surprising as some journal rankings in this study changed from A* to B and from A to C.

There are at least five limitations inherent in the study. First, the data relating to publication behaviour were self-reported and, as a consequence, could not be substantiated. Second, the actual proportion (in percentage terms) of the academic's contribution to the nominated journal article was not sought. Third, the relatively small sample size prevented any worthwhile analysis using the categorised-research fields. Fourth, the participants made their choice of the most significant journal publication without being aware of a comprehensive ranking system. Even though some of the participants may have been guided by discipline-specific journal rankings such as those used in economics, many would have made their decision without reference to a ranking system. Obviously, knowledge of the ARC rankings may have caused some to rethink their nominations. Fifth and last, greater clarification of what was meant by the term 'journal article significance' would have helped to avoid some ambiguity for participants as a number of respondents indicated that significance could mean several things, including journal prestige or practitioner impact. This is not too surprising as a study of 'high status journals', conducted by Wellington and Torgerson (2005), showed that a sample of British and American Professors of Education had rather mixed views about what constituted high status. Was it the quality of the journal's content, editorial membership, author reputation, rate of citation, or the gate-keeping process centring on entry and acceptance? A replication and extension study would need to address all five concerns raised above.

Given the current 'outputs-driven' environment (Fairbairn et al., 2009) and the increasing pressure to lift research performance across the Australian tertiary sector by targeting top ranking journals, which are often based outside Australia, the senior managers of Australian universities will make a concerted effort to encourage their research-oriented faculty to work strategically with colleagues who have national and international reputations. The competition among academics to gain publications in A* and A journals will be intense and, obviously, those who are new to academe might find the associated workplace conditions daunting. As a result, it will be critical that neophyte academics are appropriately supported through individual and group mentoring. Arguably, many academics, particularly those at junior levels and/or those without doctorates, will need to be encouraged at a number of levels to ensure greater publishing success (Hemmings & Kay, 2010; Hemmings et al., 2007). The opportunity to collaborate with an experienced colleague will often distinguish those publishing in the more prestigious journals from those publishing in the less prestigious ones.

The ERA initiative will bring many challenges and these will play out in various ways across institutions. Those universities with large numbers of experienced and highly-qualified faculty will be at a decided advantage in the journal publication stakes and will probably benefit financially once the initial round of research assessments is completed. Other universities will need to marshal substantial human and physical resources if they are to compete. Adding to the complexity of publication rating and assessment is the recent announcement (ARC, 2010) that another iteration of journal ranking by the ARC is about to take place.

Acknowledgement

The authors wish to thank Doug Hill for his insightful comments on an earlier version of the manuscript. And, special thanks to Caroline Byrne for her assistance with some of the data entry and processing.

References

ARC (2010). Review of the ERA 2010 Ranked Outlet Lists. Announcement dated 1 November and available at http://www.arc.gov.au/era/era_2012/era_2012.htm (accessed 12 November 2010)

Atkinson, R.J. (2010). Bibliometrics out, journalmetrics in! HERDSA News, 32(1). Available at http://www.roger-atkinson.id.au/pubs/herdsa-news/32-1.html (accessed 13 November 2010)

Barrio Minton, C.A., Fernando, D.M., & Ray, D.C. (2008). Ten years of peer-reviewed articles in counselor education: Where, what, who? Counselor Education & Supervision, 48, 133-143. Available at http://www.highbeam.com/doc/1G1-190196218.html (accessed 14 September 2010)

Blackburn, R.T., & Lawrence, J.H. (1995). Faculty at work: Motivation, expectation, satisfaction. Baltimore, MD: The Johns Hopkins University Press.

Budd, J.M. (1995). Faculty publishing productivity: An institutional analysis and comparison with library and other measures. College & Research Libraries, 56(6), 547-554.

Carr, K. (2008). A NEW ERA for Australian research quality assessment. Campus Review, 18(9), 5.

Coates, H., Goedegebuure, L., van der Lee, J., & Meek, L. (2008). The Australian academic profession in 2007: A first analysis of the survey results. A CHEMP-ACER Report. Melbourne, Vic.: Centre for Higher Education Management and Policy.

Dundar, H., & Lewis, D.R. (1998). Determinants of research productivity in higher education. Research in Higher Education, 39(6), 607-631. Available at http://www.springerlink.com/content/kq13440g53140n31/ (accessed 12 September 2010)

Fairbairn, H., Holbrook, A., Bourke, S., Preston, G., Cantwell, R., & Scevak, J. (2009). A profile of education journals. In P. Jeffrey (ed.), AARE 2008 Conference Papers Collection [Proceedings]. Available at http://www.aare.edu.au/08pap/fai08605.pdf (accessed 2 August 2010)

Gable, G.G. (2006). The information systems discipline in Australian universities: A contextual framework. Australasian Journal of Information Systems, 14(1), 103-22.

Genoni, P., Haddow, G., & Dumbell, P. (2009). Assessing the impact of Australian journals in the social sciences and humanities. In Proceedings ALIA Information Online 2009, Sydney, 20-22 January. Available at http://conferences.alia.org.au/online2009/docs/PresentationC16.pdf (accessed 12 November 2010)

Goodyear, P. (2008). Educational research and the ERA. AARE News, 63, July, 4-5. http://www.aare.edu.au/news/newsplus/news63.pdf

Graham, L.J. (2008). Rank and file: Assessing research quality in Australia. Educational Philosophy and Theory, 40(7), 811-815. Available at http://bulletin.edfac.usyd.edu.au/wp-content/uploads/2008/11/rank-and-file_era-2008_graham-doc.pdf (accessed 15 September 2010)

Green, R.G. (1998). Faculty rank, effort, and success: A study of publication in professional journals. Journal of Social Work Education, 34(3), 415-426.

Hemmings, B., & Kay, R. (2007, June). I'm sure I can write! Writing confidence and other factors which influence academic output. Presentation at the European College Teaching and Learning Conference, Ljubljana, Slovenia.

Hemmings, B., & Kay, R. (2010). University lecturer publication output: Qualifications, time, and confidence count. Journal of Higher Education Policy and Management, 32(2), 185-197.

Hemmings, B., Rushbrook, P., & Smith, E. (2007). Academics' views on publishing refereed works: A content analysis. Higher Education, 54(3), 307-332. Available at http://www.springerlink.com/content/t87q7114711n0j02/ (accessed 15 July 2010)

Jarwal, S.D., Brion, A.M., & King, M.L. (2009). Measuring research quality using the journal impact factor, citations and 'Ranked Journals': Blunt instruments or inspired metrics? Journal of Higher Education Policy and Management, 31(4), 289-300. Available at http://www.informaworld.com/smpp/content~content=a915615523~db=all~jumptype=rss (accessed 14 September 2010)

Lamp, J.W. (2009). At the sharp end: Journal ranking and the dreams of academics. Online Information Review, 33(4), 827-830.

Pratt, M., Margaritas, D., & Coy, D. (1999). Developing a research culture in a university faculty. Journal of Higher Education Policy and Management, 21(1), 43-55. Available at http://www.informaworld.com/smpp/content~db=all~content=a746537885 (accessed 3 August 2010)

Raijmakers, L.R. (2003). Review of factors which influence research productivity. Report 30052003 for Vaal Triangle Technikon, South Africa.

Research Corporation (2001). Determining research productivity and grant activity among science faculty at surveyed institutions. (Report No. BBB26706). Tucson, AZ. (ERIC Document Reproduction Service No. ED 469 492) Available at http://www.rescorp.org/gdresources/downloads/publications/ae-oct-2001.pdf (accessed 3 October 2010)

Rosenstreich, D. (2007). Journal reputations and academic reputations - the role of ranking studies. Presentation at the Australian and New Zealand Marketing Academy Conference, Dunedin, New Zealand, December 3-5. Available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1329974 (accessed 21 November 2009)

Sax, L.J., Hagedorn, S.A., Arredondo, M., & Dicrisi, F.A. (2002). Faculty research productivity: Exploring the role of gender and family-related factors. Research in Higher Education, 45(3), 423-446. Available at http://www.springerlink.com/content/2gghj1fqxvejl0da/ (accessed 23 September 2010)

Sharpe, M. (2008). Brave new era for Australian scholars. Arena Magazine, August-September, 35-37.

Stack, S. (2004). Gender, children and research productivity. Research in Higher Education, 45(8), 891-920. Available at http://www.springerlink.com/content/w061mtg8824139k2/ (accessed 1 September 2010)

Tien, F.F. (2000). To what degree does the desire for promotion motivate faculty to perform research? Testing the expectancy theory. Research in Higher Education, 41(6), 723-752. Available at http://www.springerlink.com/content/m32528x655071457/ (accessed 18 September 2010)

Watson, L. (2008). Developing indicators for a new ERA: Should we measure the policy impact of education research? Australian Journal of Education, 52(2), 117-128.

Watty, K., Bellamy, S., & Morley, C. (2008). Changes in higher education and valuing the job: The views of accounting academics in Australia. Journal of Higher Education Policy and Management, 30(2), 139-151. Available at http://researchbank.rmit.edu.au/eserv/rmit:2314/n2006008050.pdf (accessed 16 May 2010)

Wellington, J., & Torgerson, C.J. (2005). Writing for publication: What counts as a 'high status, eminent academic journal'? Journal of Further and Higher Education, 29(1), 35-48.

Authors: Dr Brian Hemmings is a Senior Lecturer in the School of Education, Charles Sturt University, Wagga Wagga. His research focus is on academic achievement at all education levels. Email: bhemmings@csu.edu.au

Russell Kay is an Adjunct Senior Lecturer in Education at Charles Sturt University. His research interests concentrate on schooling performance and he draws on the use of multivariate statistics.

Please cite as: Hemmings, B. & Kay, R. (2010). Journal ratings and the publications of Australian academics. Issues In Educational Research, 20(3), 234-243. http://www.iier.org.au/iier20/hemmings.html

