3320 shaares
2 results tagged "regression"
By using techniques that assess the performance impact of a build in relation to the performance characteristics (magnitude, variance, trend) of adjacent builds, we can more confidently distinguish genuine regressions from metrics that are elevated for other reasons (e.g. inherited code, regressions in previous builds, or one-off data spikes caused by test irregularities). We also spend less time chasing false alarms and no longer need to manually assign a threshold to each result: the data itself now sets the thresholds dynamically.
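The linked post does not spell out the exact method, but a minimal sketch of the idea, deriving the threshold from the spread of adjacent builds (here, median plus a few median absolute deviations) instead of hand-picking one, might look like this. The function name, metric values, and constants are illustrative, not sitespeed.io's actual implementation:

```python
import statistics

def is_regression(history, current, k=3.0, min_builds=5):
    """Flag `current` as a regression only if it falls outside the
    recent builds' own spread (median + k * MAD), so the threshold
    adapts to each metric instead of being set by hand."""
    if len(history) < min_builds:
        return False  # not enough adjacent builds to judge against
    median = statistics.median(history)
    # Median absolute deviation captures the metric's normal variance.
    mad = statistics.median(abs(x - median) for x in history)
    threshold = median + k * max(mad, 1e-9)  # avoid a zero-width band
    return current > threshold

# Example: first-visual-change times (ms) from recent builds
recent = [1180, 1210, 1195, 1225, 1190, 1205]
print(is_regression(recent, 1600))  # True: well outside the normal spread
print(is_regression(recent, 1230))  # False: within normal variation
```

Because the band is computed from each metric's own recent history, a noisy metric gets a wider band than a stable one, which is what removes the need for per-result manual thresholds.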
Peter Hedenskog, who works on the performance team at the Wikimedia Foundation, explains how this monitoring is done with sitespeed.io.