Abstract
This paper documents the first phase of an investigation into reducing the runtimes of complex OpenDSS models through parallelization. Because the method appears promising, future work will quantify and further mitigate the errors arising from this process. In this initial report, we demonstrate how temporal decomposition can reduce the runtime of a complex distribution-system-level quasi-static time-series (QSTS) simulation roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed on a real distribution circuit model with 50 PV systems added, representing a mock complex PV impact study. Initializing the controls at each slice boundary reduced the induced transition errors, though small errors persist. The time savings from parallelization are significant enough that further investigation into reducing the control errors is warranted.
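The temporal decomposition described above can be sketched as follows: the simulated year is split into contiguous time slices, and each slice is solved as an independent QSTS run on its own worker process. This is a minimal illustrative sketch, not the authors' implementation; `run_slice` is a hypothetical stand-in for a per-slice OpenDSS solve (a real version would load the circuit, initialize control-element states at the slice's first timestep, and step through the slice sequentially).

```python
from concurrent.futures import ProcessPoolExecutor

def split_into_slices(timesteps, n_slices):
    """Divide a list of timestep indices into n contiguous slices."""
    size, rem = divmod(len(timesteps), n_slices)
    slices, start = [], 0
    for i in range(n_slices):
        end = start + size + (1 if i < rem else 0)
        slices.append(timesteps[start:end])
        start = end
    return slices

def run_slice(slice_steps):
    # Hypothetical stand-in for a per-slice QSTS run. A real version
    # would compile the OpenDSS model, initialize controls for the
    # first timestep, then solve each timestep in order.
    return (slice_steps[0], slice_steps[-1], len(slice_steps))

if __name__ == "__main__":
    year = list(range(35040))  # one year at 15-minute resolution
    slices = split_into_slices(year, 8)
    # Each slice runs on its own process; total wall time shrinks
    # roughly in proportion to the number of workers.
    with ProcessPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(run_slice, slices))
    for first, last, n in results:
        print(first, last, n)
```

Because each slice starts from default (rather than carried-over) control states, this decomposition is exactly what introduces the boundary errors the abstract mentions; warm-starting controls at each slice boundary is the mitigation explored in the paper.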
Original language | American English |
---|---|
Pages | 1-5 |
Number of pages | 5 |
DOIs | |
State | Published - 29 Jan 2018 |
Event | 2017 IEEE Power and Energy Society General Meeting, PESGM 2017, Chicago, United States. Duration: 16 Jul 2017 → 20 Jul 2017 |
Conference
Conference | 2017 IEEE Power and Energy Society General Meeting, PESGM 2017 |
---|---|
Country/Territory | United States |
City | Chicago |
Period | 16/07/17 → 20/07/17 |
Bibliographical note
Publisher Copyright: © 2017 IEEE.
NREL Publication Number
- NREL/CP-5D00-67889
Keywords
- OpenDSS
- Parallelization
- PV impact studies
- Python
- Quasi-static time-series
- Temporal decomposition