Abstract
We illustrate and compare commonly used benchmark, or reference, methods for probabilistic solar forecasting that researchers use to measure the performance of their proposed techniques. A thorough review of the literature indicates wide variation in the benchmarks implemented in probabilistic solar forecast studies. To promote consistent and sensible methodological comparisons, we implement and compare ten variants from six common benchmark classes at two temporal scales: intra-hourly forecasts and hourly resolution forecasts. Using open-source Surface Radiation Budget Network (SURFRAD) data from 2018, these benchmark methods are compared using proper probabilistic metrics and common diagnostic tools. Practical implementation issues, such as the impact of missing data and applicability for operational forecasting, are also discussed. We make recommendations for practitioners on the appropriate selection of benchmark methods to properly showcase state-of-the-art improvements in forecast reliability and sharpness. All code and open-source data are available on GitHub for reproducibility and for other researchers to apply the same benchmark methods to their own data.
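As an illustrative sketch only (not the paper's specific implementation), a widely used benchmark class in this literature is the persistence ensemble (PeEn), whose predictive quantiles are drawn from the most recent observations, and a standard proper scoring rule for evaluating such quantile forecasts is the pinball (quantile) loss. The function names and window sizes below are illustrative assumptions:

```python
import numpy as np

def persistence_ensemble(history, n_members, quantiles):
    """Persistence-ensemble (PeEn) benchmark forecast: the predictive
    quantiles are taken empirically from the n most recent observations
    (e.g., recent clear-sky index values)."""
    members = np.asarray(history, dtype=float)[-n_members:]
    return np.quantile(members, quantiles)

def pinball_loss(y_true, q_pred, quantiles):
    """Mean pinball (quantile) loss, a proper scoring rule for
    quantile forecasts.

    y_true    : observed values, shape (n,)
    q_pred    : predicted quantiles, shape (n, m)
    quantiles : quantile levels in (0, 1), shape (m,)
    """
    y = np.asarray(y_true, dtype=float)[:, None]     # (n, 1)
    q = np.asarray(q_pred, dtype=float)              # (n, m)
    tau = np.asarray(quantiles, dtype=float)[None, :]  # (1, m)
    diff = y - q
    # tau*(y - q) when the observation exceeds the quantile,
    # (tau - 1)*(y - q) otherwise
    loss = np.maximum(tau * diff, (tau - 1.0) * diff)
    return loss.mean()
```

For example, `persistence_ensemble([0.7, 0.8, 0.9, 0.85, 0.75], 5, [0.25, 0.5, 0.75])` returns the empirical quartiles of the five-observation window; lower mean pinball loss indicates a sharper, better-calibrated forecast.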
Original language | American English |
---|---|
Pages (from-to) | 52-67 |
Number of pages | 16 |
Journal | Solar Energy |
Volume | 206 |
DOIs | |
State | Published - Aug 2020 |
Bibliographical note
Publisher Copyright: © 2020
NREL Publication Number
- NREL/JA-5D00-76127
Keywords
- Benchmarking
- Irradiance
- Probabilistic forecasts
- Solar forecasts
- Solar power