Nonpartisan Education Review / Articles: Volume 6, Number 5
High Stakes Accountability and High School Student Perceptions of Instructional Climate:

A Longitudinal Trend Study


Kenneth Stichter

Department of Educational Leadership

California State University, Fullerton


This longitudinal trend study investigated student satisfaction with instructional climate in one high school district. The purpose was to probe what students define as instructional climate and whether their perceptions changed during a decade of expanding emphasis on accountability in California. Archival data from six biennial surveys conducted over 10 years were analyzed using factor analysis to identify a sustained instructional climate factor. The items loading on that factor were tracked over the six survey cycles to ascertain how the components of the student satisfaction scale performed. Findings suggest that as the emphasis on standardized testing and accountability in California ramped up between 1999 and 2009, student satisfaction with instructional climate kept pace: students perceived their satisfaction with the instructional climate to be improving. Student satisfaction with instructional climate also appeared to be viewed through the lens of student experience in English and mathematics courses.

 

Introduction

             The first decade of the 21st century has asked much of America’s schools. This is especially true for high schools, where state and national accountability-driven testing has added to a previously existing regimen of tests that includes the PSAT, SAT, ACT, AP, and IB exams. Now high school students must also be prepared for state standards exams, high school exit exams, and federal academic performance expectations. For California high schools, as for high schools elsewhere, the state accountability system generates increasing amounts of data on student academic performance, which in turn generates increasing pressure to improve instruction. There can be little doubt that the current instructional climate for high school students is much different than it was a decade earlier.

             But what is the effect of all of this on student perceptions of instruction? Just how satisfied are high school students with their instructional experience? True, test results give us some insight into student achievement, but such data do little to inform us on how students view their learning experience. It seems reasonable to expect that efforts to improve instruction should also be reflected in improved student perceptions of the learning experience.

 

Purpose

             The purpose of this study was to investigate student perceptions of what constitutes instructional climate in one high school district and to further explore student satisfaction levels with instructional climate over time. Certainly student performance on standardized testing is critical feedback regarding instructional program effectiveness. However, an assumption in this study was that student attitudes about their instructional experience are also an important measure of evolving accountability efforts. This study did not correlate student satisfaction perceptions with achievement on state accountability measures. No doubt there is a relationship between student satisfaction levels and achievement outcomes, but what was of interest here was whether the pressure for instructional improvement in high school is reflected in improved student perceptions of the learning experience.

 

Literature

             The relationship between school climate and school effectiveness has been extensively explored (Witcher, 1993; Hoy & Miskel, 1996; Hoy, Hannum & Tschannen-Moran, 1998). Research has also found that school climate data are effective for assessing efforts to improve the instructional environment (Freiberg, 2003). According to Stevens and Sanchez (2003), instructional environment and instructional process are essential ingredients in an effective school climate. Instructional climate is directly related to what teachers do in class (Freiberg & Stein, 2003; Stevens & Sanchez, 2003). Since students are on the receiving end of the instructional process, their perceptions should shed light on the effectiveness of the larger learning experience in this era of high stakes accountability.

             For purposes of this study, instructional climate was defined as the shared student perceptions of what transpires in the environment and activities associated with academic experiences in the typical high school classroom. Research has found that the routines of school promote enduring perceptions of climate (Freiberg, 2003; Hoy & Feldman, 2003; Hoy & Miskel, 1996). Ellis (1988) found that climate qualities are the aggregate of complex relationships, which can be described. Freiberg and Stein (2003), in discussing the durability of a healthy learning environment, advance the idea that the lasting quality of a school climate is likely due to the aggregate of many “little things linked together over time” (p. 26). If this collective quality holds for the larger context of school climate, it is reasonable to expect that instructional climate exhibits similar characteristics.

             Also of interest to this study were Fraser’s (2003) findings that improving student achievement is closely allied with learning environment. Student perceptions have been found to be effective measures of school learning environment (Goh, Young & Fraser, 1995). But school learning environment is much more encompassing than instructional climate. Instructional climate, for purposes of this study, is more akin to Stevens and Sanchez’s (2003) definition of instructional environment and instructional process constructs that include teacher academic pedagogy and how students view their relationship with the teacher.

 

Background

The Data Source

             The district in this study is a suburban high school district in Southern California with six comprehensive high schools and a continuation high school. Total district student enrollment in 1999 was a little over 11,000; by 2009 it was just under 14,000 across the seven schools. As noted in Table 1, the ethnic makeup of the district evolved during the ten years covered by the study data. The district serves three large suburban cities, and students come to the district from four feeder districts. It was not possible to disaggregate data according to language or ethnicity subpopulations since the data were not collected along demographic lines. However, given the high response rates (Table 2), it is reasonable to surmise that responses reflect the ethnic distributions. Also, because the district annually reclassifies student standing according to units of credit earned, it was not possible to disaggregate data by self-reported student grade level, since one could not determine whether a student was reporting year in school or reclassified standing.

Table 1

Student enrollment by ethnicity from 1999 to 2009*

                      Percent of total enrollment by ethnicity
Year    Enrollment    Asian    Hispanic    White    Other
1999    11,276        16.7     43.8        34.8      4.7
2001    12,308        16.4     45.0        33.5      5.1
2003    13,078        17.2     48.1        29.0      5.7
2005    13,784        16.3     47.2        26.5     10.0
2007    13,953        17.3     46.2        25.4     10.8
2009    13,783        18.9     48.6        23.0      9.5

*Data source: California Department of Education School Reports


 

Since 1989 the district has conducted a biennial survey of all students, and since 1999 the survey has been unchanged, making it possible to compare results longitudinally. The same 50 Likert-type questions (Yes, most of the time; Yes, some of the time; No, seldom; No, not at all; and Insufficient Information/Does Not Apply) make up the student satisfaction survey. The survey is one data source in a biennial self-assessment report that is compiled, published, and publicly discussed. The 50 survey items probe a range of issues including perceptions of the learning environment, facilities, equipment, safety, student activities, communication, recognition, fiscal management, and support services. The survey process is conducted by an outside evaluator responsible for administration, data analysis, and results reporting. The actual responses are tabulated by an outside vendor, and raw and descriptive data by item are given to the evaluator and the district. Table 2 notes sample sizes and response rates for students for the six survey cycles.

 

Table 2

Survey cycles and the number of student respondents

Year     Questionnaire    Student       Number of Students    Percent
         Items            Population    Responding            Response
1999     50               11,276         9,942                88
2001     50               12,308        10,106                82
2003     50               13,078        11,285                86
2005     50               13,784        11,880                86
2007     50               13,953        11,915                85
2009     50               13,783        12,150                88
Total                     78,182        67,278                86

 

 

             Surveys have been found to be effective data-gathering instruments for investigating the complexities of school climate (Freiberg & Stein, 2003; Griffith, 1997, 1998, 2000; Stevens & Sanchez, 2003). But caution is warranted. Research has tended to find that high school students hold school climate perceptions that are considerably more negative than those of elementary and middle school students (Freiberg & Stein, 2003). Therefore, data gathered regarding high school student perceptions would be expected to be something less than flattering. In this study the effort to measure satisfaction was strengthened by the fact that all students continuously enrolled in participating schools took the survey at least twice.

 

California Standardized Testing and Reporting (STAR) Program

             During the decade of survey administration (1999-2009) outlined above, California built a school accountability system authorized under legislation enacted in the fall of 1997. The goal was a system of student assessment reflective of state content standards, which, for English language arts and mathematics, were adopted in December 1997. The California Standardized Testing and Reporting (STAR) program began testing in 1998 using a well-known “off-the-shelf” standardized achievement test in language arts and mathematics for grades 2 through 11. The state accountability program continued to develop during the first decade of the 21st century (see Figure 1). The California Public Schools Accountability Act of 1999 became the cornerstone of efforts to evaluate the academic performance of students and schools and thereby promote improved instruction across the state. Paralleling California’s accountability efforts have been federal accountability endeavors, including No Child Left Behind (NCLB). As evidenced by the matrix in Figure 1, the evolution of accountability spans the time frame of this study, the effect being ever-increasing expectations regarding teacher performance and student achievement.

 

The student survey in this study offered a unique opportunity to view, over time, student attitudes about the experiential impact of state and federal accountability efforts. It is reasonable to expect that from 1999 to 2009 there was increased pressure at high schools to improve student academic performance. Such efforts played out in classroom environments. The instructional climate to which students were exposed came under increasing scrutiny not only by administrators, parents, and community but also by high school students as clients.

 

 

[Figure 1 is a timeline matrix spanning the 1999/2000 through 2008/2009 school years. It charts the phase-in of state standards in English & math, history/social science, and science; the CA (PSAA)* provisions (STAR, the SAT9 test, the API, awards, and II/USP); the addition of statewide API** ranking; the replacement of the SAT9 with the CST***, along with the addition of a history/social science test in grades 10 and 11 and the high school exit exam; the defunding of the awards program and the addition of a science test in grades 9-11; and the addition of the NCLB AYP calculation.]

ELA & Math Standards: state standards were implemented in 1997. History/Social Science and Science standards were being implemented in 1999/2000.

*CA (PSAA) = California Public Schools Accountability Act, which included several provisions:

- STAR = Standardized Testing and Reporting program, which used a state-adopted test to evaluate schools and districts

- SAT9 = Stanford Achievement Test, the state standardized test in English/language arts and mathematics for grades 2 through 11

- API = Academic Performance Index, calculated annually for each school using standardized test results. It included an API Base score, an API Growth score, and an API Target score (the target being the difference between the base score and the expected growth score).

- Awards = Governor’s Performance Award Program, which gave funds to schools based on their growth

- II/USP = Immediate Intervention/Underperforming Schools Program, which placed sanctions on schools for poor API performance

**API Ranking = using the API, individual schools were ranked statewide and compared with similar schools

***CST = California Standards Test, designed to align with state standards; it is not a norm-referenced test

Figure 1: California Public Schools Accountability Act developments between 1999 and 2009.

 

As noted earlier, this study did not correlate achievement levels with perceptions of the learning experience. However, as a footnote of interest, the California Academic Performance Index District Reports for the decade of this study do indicate that each school, and the district overall, showed steady improvement in student achievement. This is mentioned here only to emphasize that the district and its schools were responding to the pressure of the state’s growing accountability expectations.

 

 

Methods

 

The raw data from each survey cycle since 1999 were provided to this researcher by the school district. Since the purpose of this study was to probe the alignment of student perceptions along instructional climate themes, the first task was to identify those survey items which, over six cycles, had the potential to collectively represent a stable factor that could be said to define instructional climate. To explore this potential, exploratory principal component factor analysis was applied to each survey cycle of data.
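As an illustration of the extraction step, the sketch below shows one common way to compute principal-component loadings for a single survey cycle. It assumes the responses have already been coded numerically (e.g., 4 = “Yes, most of the time” down to 1 = “No, not at all”) in a respondents-by-items matrix; the coding, function, and variable names are illustrative assumptions, not the study’s actual procedure or files.

```python
# A minimal sketch of principal-component factor extraction for one
# survey cycle. Assumes `responses` is a respondents x items numeric
# array; this coding scheme is an assumption for illustration only.
import numpy as np

def principal_component_loadings(responses: np.ndarray, n_factors: int = 1):
    """Return item loadings on the first n_factors principal components."""
    # Correlation matrix across the survey items (columns).
    corr = np.corrcoef(responses, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)      # ascending order
    order = np.argsort(eigvals)[::-1]            # largest component first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Loadings = eigenvector scaled by sqrt of its eigenvalue.
    loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
    # Share of total item variance explained by each retained component
    # (the study reports 19%-32% for the instructional climate factor).
    explained = eigvals[:n_factors] / eigvals.sum()
    return loadings, explained
```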

 

Factor analysis was deemed appropriate for this study because the biennial survey questionnaires were the same across the six cycles and because they contained a large number of items that appeared to lend themselves to groups, or dimensions, of questions (Stichter, 2008). Essentially, factor analysis is a data reduction methodology commonly used with survey data when the questionnaire includes multiple items that may be grouped around a common focus (Morgan & Griego, 1998; George & Mallery, 2003; Mertler & Vannatta, 2002). The objective is to reduce the number of variables suggested by the questionnaire items to a “smaller number of composite variables” (Morgan & Griego, 1998, p. 111). In this study factor analysis did isolate a variable dimension that was given the descriptor “instructional climate” (see Table 3), as suggested by the items that loaded into the factor scale (Kline, 2002). When the instructional climate variable was identified, factor analysis extraction data indicated that the variable accounted for 19%, 28%, 28%, 29%, 32% and 30% of the total item variance within the respective surveys (1999-2009).

The reliability of the instructional climate scale extracted from each survey cycle was important if it was to be used for trend analysis. Therefore, two criteria were used to tease out an instructional climate factor variable with common scale loadings. First, a Cronbach alpha was computed for each scale; an alpha of .70 or greater was taken to suggest acceptable internal reliability (Pedhazur & Schmelkin, 1991). Second, if the alpha was below .70, the scale could still be considered acceptable if the number of loadings was small but the intercorrelation of the items exceeded .25 (Nunnally, 1978; Griffith, 1998). Table 3 chronicles the loadings for the instructional climate factor.
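The two reliability criteria lend themselves to a compact computation. The sketch below, under the same illustrative coding assumptions as above, computes Cronbach’s alpha and the mean inter-item correlation for one extracted scale.

```python
import numpy as np

def cronbach_alpha(scale: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix (one factor scale)."""
    k = scale.shape[1]
    item_variances = scale.var(axis=0, ddof=1).sum()
    total_variance = scale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def mean_interitem_correlation(scale: np.ndarray) -> float:
    """Average of the off-diagonal item-to-item correlations."""
    corr = np.corrcoef(scale, rowvar=False)
    return corr[np.triu_indices_from(corr, k=1)].mean()

# Decision rule described in the text: accept the scale if alpha >= .70,
# or, for a short scale, if the mean inter-item correlation exceeds .25.
```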

 

Table 3

Student factor scale loadings by year

Item    Factor Scale Items                                 1999    2001    2003    2005    2007    2009
19      Satisfied with most teachers                       .691    .709    .705    .676    .695    .678
13      Satisfied with teacher grading practices           .676    .700    .698    .710    .703    .709
14      Satisfied with teacher homework practices          .582    .605    .637    .662    .606    .649
12      Satisfied with quality of teaching                 .602    .664    .672    .628    .692    .656
26      Treated respectfully by most teachers              .573    .653    .625    .572    .658    .604
27      Satisfied with learning environment                .422    .559    .531    .454    .502    .460
35      Satisfied with quality of English courses          .442    .519    .517    .496    .534    .496
37      Satisfied with quality of math courses             .338    .384    .424    .428    .404    .468
28      Satisfied with quality of courses taken overall    .400    .398    .413    .463    .512    .457

        Cronbach Alpha                                     .79     .88     .88     .87     .89     .88
        Intrascale Correlation                             .30     .43     .43     .43     .47     .44

 

 

Findings and Discussion

 

On the surface, the extracted instructional climate factor appears to represent perception through the lens of student experience in two core academic courses: English and math. While many other courses are part of the high school experience, English and math certainly dominate the instructional program and are at the heart of most efforts to measure academic performance. However, in addition to suggesting that instructional climate is closely allied with teaching practices in these two subject areas, there is the implication that English and math serve as benchmarks for the learning experience, the quality of courses overall, and whether or not students are treated respectfully by most teachers. In essence, the students in this study make the case that instructional climate is to be measured by what teachers do and how they treat students.

 

But this study was also interested in the level of satisfaction reflected in the perceptions of students. It is one thing to suggest what comprises instructional climate and quite another to gauge the level of satisfaction with that climate. To address the latter, the scale items for this factor were tracked over the six cycles by returning to the raw data for each scale and examining the positive response trend line for each item (the combined percent of “Yes, most of the time” and “Yes, some of the time” responses). The resulting data are noted in Table 4 and illustrated in Figure 2.
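A minimal sketch of this tabulation step is shown below, assuming each cycle’s raw responses are loaded as a respondents-by-items table of the survey’s response labels. The column layout and the treatment of blank responses are assumptions for illustration, not the district vendor’s actual procedure.

```python
import pandas as pd

POSITIVE = {"Yes, most of the time", "Yes, some of the time"}

def percent_positive(responses: pd.DataFrame) -> pd.Series:
    """Percent of answered items marked positive, per survey item."""
    answered = responses.notna()         # assumes blanks are stored as NaN
    positive = responses.isin(POSITIVE)
    return (100 * positive.sum() / answered.sum()).round()

# One row per survey cycle yields the Table 4 trend lines, e.g.:
# trend = pd.DataFrame({year: percent_positive(df)
#                       for year, df in cycles.items()}).T
```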

 

 

Table 4

Student satisfaction with instructional climate by year

                                                           Percent Satisfied*
Item    Factor Scale Items                                 1999    2001    2003    2005    2007    2009
19      Satisfied with most teachers                       77      75      82      86      86      87
13      Satisfied with teacher grading practices           65      68      74      78      78      79
14      Satisfied with teacher homework practices          64      65      69      75      75      75
12      Satisfied with quality of teaching                 67      72      80      84      84      85
26      Treated respectfully by most teachers              83      81      86      89      89      91
27      Satisfied with learning environment                73      75      83      88      88      90
35      Satisfied with quality of English courses          74      71      82      84      83      86
37      Satisfied with quality of math courses             63      63      75      79      78      80
28      Satisfied with quality of courses taken overall    76      78      85      88      88      89

*Equals the combined percent of “Yes, most of the time” and “Yes, some of the time” responses.

 

 

Figure 2: Percent of positive responses by survey year

 

 

The levels of satisfaction for the items in the instructional climate factor have shown steady improvement since the second survey cycle in 2001. However, the suggestion of improvement after 2001 is not meant to imply poor satisfaction before that point: several indicators of satisfaction were already high in 1999 and 2001. Put another way, in 1999 and 2001, 11 out of 18 scale items were above 70% satisfied; of those, only two were above 80%. By 2003 eight of the nine items were above 70% positive. Six years later, in 2009, all levels of positive satisfaction were at or above 75%, and seven were at or above 80% satisfied. In 2001 the range of satisfaction was from 63% to 81%; in 2009 the range was from 75% to 91%. The high school students in this study tended to view their satisfaction with the instructional climate positively in 1999 and 2001, and during the next four survey cycles these already positive satisfaction levels improved substantially.
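These counts and ranges can be reproduced directly from the Table 4 percentages; the quick check below (in pandas) is illustrative only.

```python
import pandas as pd

# Table 4 "percent satisfied" values for the nine scale items.
table4 = pd.DataFrame({
    1999: [77, 65, 64, 67, 83, 73, 74, 63, 76],
    2001: [75, 68, 65, 72, 81, 75, 71, 63, 78],
    2003: [82, 74, 69, 80, 86, 83, 82, 75, 85],
    2009: [87, 79, 75, 85, 91, 90, 86, 80, 89],
})

early = pd.concat([table4[1999], table4[2001]])
print((early > 70).sum(), "of", len(early), "items above 70% in 1999/2001")  # 11 of 18
print((early > 80).sum(), "of those above 80%")                              # 2
print((table4[2003] > 70).sum(), "of 9 items above 70% by 2003")             # 8
print("2001 range:", table4[2001].min(), "to", table4[2001].max())           # 63 to 81
print("2009 range:", table4[2009].min(), "to", table4[2009].max())           # 75 to 91
```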

Although this study did not consider student achievement results during the decade under review, it is evident that students were responding positively to teacher instructional practices. The mirrored longitudinal patterns across the items that loaded into the instructional climate factor provide insight into the attitudes of students in the district with regard to the value placed on the learning experience. The learning experience hinges on the actions of teachers in teaching and in guiding student work. But students indicate that it is more than just teaching practices: equally important were how teachers treat students and the perceived quality of the environment in which the learning takes place. One interesting phenomenon here is that students seemed to connect their experience in English and math classes with their perceptions of the instructional climate overall, perhaps because these two courses are at the core of the required academic curriculum.

 

 

Conclusions

This study supports the idea that schools are complex institutions (Ellis, 1988) and that evaluation of effectiveness should consider data derived from multiple sources over time. There is value in longitudinal measurement as opposed to single-point-in-time approaches (Pedhazur & Schmelkin, 1991). Whether the resulting data are used for formative assessment and planning purposes (Kaufman, 1995; Kaufman, Herman & Watters, 2002) or for summative evaluation of goals, policies, and decisions, such data should, in the case of high schools, include the attitudes and perceptions of students. Given their age, years of experience in school environments, and apparent willingness to render opinions, high school students are a valuable source of insight into the effectiveness of schools on multiple fronts.

Improving student achievement is not a quick fix; the effort to improve student performance is proceeding on a broad front. If, as several researchers have suggested (Hoy & Tarter, 1997; Freiberg, 2003; Goddard, Hoy, & Woolfolk Hoy, 2004), student academic performance is especially susceptible to student perceptions regarding the larger context of the school, then efforts to improve student achievement should be paralleled by efforts to strengthen the school climate. Results of this study imply that students can give us healthy insight into the impact of instructional improvement efforts.

Change does not come easily. Curriculum and instruction reform efforts have been a rough road all across America. However, the high school students in this study indicate growing satisfaction with the evolving instructional climate, especially in English and math courses. It is not a stretch to suggest that efforts to improve achievement would do well to consider the perceptions of those on the receiving end of instruction.

 


Citation: Stichter, K. (2010). High Stakes Accountability and High School Student Perceptions of Instructional Climate: A Longitudinal Trend Study. Nonpartisan Education Review / Articles, 6(5). Retrieved [date] from http://nonpartisaneducation.org/Review/Articles/v6n5.pdf


 

References

Ellis, T. I. (1988). School climate. Research Roundup, 4(2), 3-6.

Fraser, B. J. (2003). Using learning environment assessments to improve classroom and school climates. In H. J. Freiberg (Ed.), School climate: Measuring, improving, and sustaining healthy learning environments (pp. 65-83). New York: RoutledgeFalmer.

Freiberg, H. J. (2003). Introduction. In H. J. Freiberg (Ed.), School climate: Measuring, improving, and sustaining healthy learning environments (pp. 1-10). New York: RoutledgeFalmer.

Freiberg, H. J., & Stein, T. A. (2003). Measuring, improving and sustaining healthy learning environments. In H. J. Freiberg (Ed.), School climate: Measuring, improving, and sustaining healthy learning environments (pp. 11-29). New York: RoutledgeFalmer.

George, D. & Mallery, P. (2003). SPSS for windows step by step: A simple guide and reference, 11.0 update (4th ed.). Boston, MA: Allyn and Bacon.

Goddard, R. G., Hoy, W. K. & Woolfolk Hoy, A. (2004). Collective efficacy: Theoretical development, empirical evidence, and future directions. Educational Researcher, 33, 3-13.

Goh, S. C., Young, D. J., & Fraser, B. J. (1995). Psychosocial climate and student outcomes in elementary mathematics classrooms: A multilevel analysis. Journal of Educational Research, 64, 29-40.

Griffith, J. (1997). Student and parent perceptions of school social environment: Are they group based? The Elementary School Journal, 98(2), 135-150.

Griffith, J. (1998). The relation of school structure and social environment to parent involvement in elementary schools. The Elementary School Journal, 99(1), 53-80.

Griffith, J. (2000). School climate as group evaluation and group consensus: Student and parent perceptions of the elementary school environment. The Elementary School Journal, 101(1), 35-61.

Hoy, W. K. & Feldman, J. A. (2003). Organizational health profiles for high schools. In H. J. Freiberg (Ed.), School climate: Measuring, improving, and sustaining healthy learning environments (pp. 84-102). New York: RoutledgeFalmer.

Hoy, W. K., Hannum, J., & Tschannen-Moran, M. (1998). Organizational climate and student achievement: A parsimonious and longitudinal view. Journal of School Leadership, 8, 336-359.

Hoy, W. K., & Miskel, C. W. (1996). Educational administration: Theory, research, and practice (5th ed.). New York: McGraw-Hill.

Kaufman, R. (1995). Mapping educational success: Strategic thinking and planning for school administrators. Thousand Oaks, CA: Corwin Press, Inc.

Kaufman, R., Herman, J., & Watters, K. (2002). Educational planning: Strategic, tactical, and operational. Lanham, MD: Scarecrow Press, Inc.

Kline, P. (2002). An easy guide to factor analysis. New York: Routledge.

Mertens, D. M. (1998). Research methods in education and psychology: Integrating diversity with quantitative & qualitative approaches. Thousand Oaks, CA: SAGE Publications.

Mertler, C. A. & Vannatta, R. A. (2002). Advanced and multivariate statistical methods: Practical application and interpretation. (2nd ed.). Los Angeles: Pyrczak Publishing.

Morgan, G. A., & Griego, O. V. (1998). SPSS for windows: An introduction to use and interpretation in research. Mahwah, NJ: Lawrence Erlbaum Associates.

Nunnally, J. (1978). Psychometric theory. New York: McGraw-Hill.

Pedhazur, E. J., & Schmelkin, L. P. (1991). Measurement, design, and analysis: An integrated approach. Hillsdale, NJ: Erlbaum.

Stevens, C. J., & Sanchez, K. S. (2003). Perceptions of parents and community members as measures of school climate. In H. J. Freiberg (Ed.), School climate: Measuring, improving, and sustaining healthy learning environments (pp. 124-147). New York: RoutledgeFalmer.

Stichter, K. (2008). Student school climate perceptions as a measure of school district goal attainment. Journal of Educational Research & Policy Studies, 8(1), 44-66.

Witcher, A. E. (1993). Assessing school climate: An important step for enhancing school quality. NASSP Bulletin, 77(554), 1-5.