Oral Presentation AFSS/NZFSS Joint Conference 2019

Is each of these things just like the other? Testing and refining the repeatability of wetland monitoring indicator scores to improve power to detect change (#31)

Caitlin Johns 1, Jan Tilden 1, Maria Vandergragt 1
  1. Department of Environment and Science, Queensland Government, Brisbane, QLD, Australia

‘Wetland Tracker’ assessments are used by the QLD Government Wetland Science team to monitor the condition of wetlands in Queensland’s Great Barrier Reef catchments and to track change over time. Overall condition scores derive from numerous desktop and field-based indicators with categorical scores ranging from 1 to 5. Desktop indicator scores mostly derive from automated quantitative assessments of land use and vegetation mapping. However, some field indicators (e.g. pest plant cover, percentage soil disturbance, vegetation composition and structure) rely on visual cover estimates or other subjective criteria, with greater potential for scores to differ between observers. Minimising observer-based variability is essential for accurately detecting change in long-term monitoring programs. During development of the Wetland Tracker method, we evaluated observer variability in field indicator scores and its implications for power to detect change between monitoring years. First, we analysed confidence ratings and supporting notes recorded by field staff against indicator scores during a two-year trial (2015–2016) covering n = 41 wetland assessments. This helped us identify potential problem indicators and areas for improvement. We implemented method refinements, then conducted a field trial in 2017 testing the repeatability of field indicator scores between two independent teams with similar training levels assessing the same (n = 11) wetlands. Overall, we found that variability in scoring between assessment teams was low enough for us to detect differences in average wetland condition scores smaller than the desired minimum detectable effect (one point on the 13-point Wetland Tracker assessment scale), using our intended sample size of n = 40 wetlands per monitoring year.
Because observer variability, which can be a major source of extraneous variance in rapid assessment instruments, was low, we are confident in our ability to report on change using smaller sample sizes, making reporting at smaller scales (e.g. per region) viable in future.
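The kind of power calculation described above can be illustrated with a standard normal-approximation formula for the minimum detectable effect (MDE) in a two-sample comparison of means. This is a generic sketch, not the authors' actual analysis: the standard deviation of 1.5 points used below is an assumed illustrative value, not a figure reported in the abstract, and the function name is hypothetical.

```python
# Sketch of a minimum-detectable-effect (MDE) calculation for a difference
# in mean condition scores between two monitoring years, using the
# normal-approximation formula MDE = (z_{1-a/2} + z_{power}) * sd * sqrt(2/n).
# The sd value is an assumed placeholder, not the study's measured figure.
from math import sqrt
from statistics import NormalDist

def minimum_detectable_effect(sd, n_per_year, alpha=0.05, power=0.8):
    """Smallest true difference in means detectable with the given power."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return (z(1 - alpha / 2) + z(power)) * sd * sqrt(2.0 / n_per_year)

# With an assumed sd of 1.5 points on the 13-point scale and n = 40
# wetlands per monitoring year, the MDE comes out just under 1 point.
mde = minimum_detectable_effect(sd=1.5, n_per_year=40)
print(f"MDE with n = 40 wetlands per year: {mde:.2f} points")
```

Under these assumed inputs the detectable difference falls below the one-point target on the 13-point scale, which mirrors the structure of the abstract's conclusion; the real analysis would substitute the observer-variability estimates obtained from the 2017 repeatability trial.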