Real-time simulation of communications and power systems for testing distributed embedded controls

Jul 25, 2017

Authors: Karl Schoder; Ziyuan Cai; Sindhuja Sundararajan; Ming Yu; Mike Sloderbeck; Isaac Leonard; Michael Steurer 


The anticipated benefits of the next generation of electric distribution systems, namely efficiency and reliability, rely greatly on a dependable power delivery system that incorporates information and communication infrastructures. While great efforts have been expended to address the challenges of modelling and simulating either the power or the communications system, combining the two into a single, coherent system is still a field of active research. This article describes the approach taken to achieve real-time simulation capabilities for such a hybrid system to demonstrate developments performed within the FREEDM Systems Center. The objective is to provide a platform for evaluating a newly developed, power electronics-based distribution system concept that requires communication among its various components to operate properly and respond to fault conditions. This objective is achieved by extending existing real-time power system simulation capabilities with a real-time communications simulation tool and bridging the gap between the two. The issues described herein address the core modelling and simulation components and the interfaces developed to access simulation data and cope with the required amount of exchanged data.


The motivation for establishing a testbed comes from the need to demonstrate new technology and concepts before going into the field or, possibly, without even having the need or chance to demonstrate an actual installation. As the development of new and advanced systems is performed by several teams at multiple locations and institutions, an integrated, real-time-capable environment for demonstrating the operation and control of the integrated systems is seen as the best means to verify and validate the developed concepts and components. Furthermore, as algorithm design is a core part of developing engineered systems, the capability to deploy software (SW) on embedded platforms makes it possible to test systems-of-systems while going beyond simplified behaviour models and including timing effects as close as possible to the final implementation. With respect to the envisioned FREEDM Systems Center's distribution circuit, the testbed concerns the real-time simulation and execution of three core aspects: the electric power system, operation and control algorithms, and communication. As the operation and control algorithms are to be tested on embedded platforms, all subsystems need to adhere to real time. Furthermore, efforts are underway to build and demonstrate components of the FREEDM system, e.g. solid-state transformers, and these may be interfaced to the testbed described herein through power amplifiers at a later stage.

Related efforts in establishing communications-enabled power system simulators have addressed electric power flow, controls, dynamic stability, and protection. Many of these efforts are in the context of smart grid applications that build on a communications infrastructure to implement controls. Most of them build on tools designed for offline, non-real-time simulation. An early example is the Electric Power and Communication Synchronizing Simulator [1], which combines the electromechanical transient simulator power system load flow (PSLF) [2], the electromagnetic transient simulator power system computer aided design/electromagnetic transient direct current (PSCAD/EMTDC) [3], the event-driven network simulator NS2 [4], and the SW agent platform AgentHQ [5]. A comprehensive and recent overview of co-simulation platforms is given in [6,7]. Examples of testbeds that incorporate real-time simulation can be found in [8–12]. The implementation in [8] combines an alternate real-time power system platform, i.e. Opal-RT, with the network simulator OPNET and a PC-hosted central microgrid control centre. The testbed developed in [9] directly links the real-time simulated electric power system with programmable logic controllers, phasor measurement unit processing and database, and human–machine interfaces by a network implemented in hardware (HW). One of the testbeds implemented in [10] is based on the system-in-the-loop concept and therefore operates in real time. It also combines an Opal-RT-based power system simulation with an OPNET-modelled communication network, and targets phasor measurement-based applications. The grid simulator implemented in [11] targets transmission system related transient stability issues and uses phasor-based power system modelling. In [12] the authors propose a two-step approach for simulating the effects of communication in power systems.
In the first step, the effects of network and protocol designs are studied using OPNET linked to an offline power system simulator. In the second step, the resulting parameters are used to configure a non-real-time communication network emulator that acts in conjunction with the real-time power system simulator.

For the development of the FREEDM distribution system concept, and as described here, a real-time-capable platform that supports algorithm design and HW-in-the-loop testing was seen as an important enabler of new technology. The power systems of interest here are electric distribution systems that require electromagnetic transient type modelling, and the communication layer should enable interfacing multiple external controllers. This work is the outcome of efforts performed at several universities as part of the NSF FREEDM Systems Center, and this paper focuses on the concepts and aspects of implementing a real-time testbed at the Center for Advanced Power Systems, Florida State University. The remaining sections are structured as follows. The problem formulation, including the problem domains and the FREEDM distribution system concept, is presented in the following section. In the real-time testbed section, the sub-system aspects are addressed individually: the real-time power system, communication system, and distributed controls system. In the system integration section, the sub-systems are combined into a coherent, integrated system. The network case studies section provides selected case studies to demonstrate capabilities and characteristics. The discussion section addresses aspects of the real-time testbed, and the conclusion section provides concluding remarks.

Problem formulation

Problem domains

The power system aspect of such a testbed can be based on several SW simulation tools and fundamentally on three different modelling concepts, i.e. power flow/time series, dynamic stability/equivalent phasors, and transient analysis/electromagnetic transient program (EMTP) type SW packages. For this project, the capability was needed to include more detailed models of power-electronics-interfaced components for distribution system studies, in the form of average-value, though not necessarily switching-based, models. This need led to the incorporation of a real-time simulator that is based on EMTP-type algorithms. Consequently, the capability exists to include detailed models of large-scale electric transmission and/or distribution systems with point-on-wave details.

The second domain concerns distributed controls. Due to the algorithm design work as part of the FREEDM system, the choice was made to demonstrate these on embedded platforms. This decision supports incorporating real-time aspects (e.g. task scheduling) into the algorithm design early on.

The third core part of the testbed is a real-time capable network simulator. A SW implementation of communication networks should support the possibility to link all subsystems with real-time characteristics. Furthermore, the implementation has to be customisable and parameterisable to allow systematic evaluation of design choices.

Overall, this approach represents a controller HW-in-the-loop (HIL) concept that allows performing experiments on the integrated systems. Also, as the testbed is developed to ensure real-time capabilities, the possibility exists to perform power-HIL (PHIL) simulation-based testing at a later stage. During PHIL tests, selected power devices will be connected and integrated with the testbed.

FREEDM distribution system

A key concept in the FREEDM system [13] is to flexibly integrate distributed generation and storage resources and to autonomously reconfigure the system. Feeder operation should allow optimised incorporation of distributed resources, and support both grid-connected and islanded modes, i.e. microgrid capabilities. Power electronics already play an important and enabling role (i.e. enhanced controllability) today, and combined with new means of information sharing and communication, a decentralised means of system monitoring and operation is established. A graphical depiction of an example of the FREEDM system concept is shown in Fig 1.

Fig 1: FREEDM distribution system concept

Distributed controls at devices such as solid-state transformers (SSTs) [14] and fault-isolation devices are the core of feeder operation that is implemented through distributed grid intelligence (DGI) [15]. The embedded DGI devices run dedicated algorithms that establish new means for routing and sharing power, controlling voltage, and handling faults. The overall objective is to improve power and energy management capabilities and increase reliability.

The system as depicted in Fig 1, though with varying structures and numbers of components, was implemented in the form of a real-time testbed. As described in detail below, the electric power subsystems and communication network are simulated with dedicated tools and interfaced to embedded controllers.

Real-time testbed

The following sections highlight aspects of the parts involved in developing a coherent real-time testbed that embraces three core domains.

Real-time power system

The real-time simulation is realised through a dedicated real-time simulator (RTS) platform, the real-time digital simulator (RTDS®) [16], shown in Fig 2. The RTS is a parallel processing platform specifically designed for power system modelling and simulation. One of the fundamental differences to general computing platforms is the availability of and access to signals and measurements in both directions, i.e. analogue and digital I/O capabilities. Furthermore, standard protocols such as Modbus, DNP3, and IEC 61850 are supported, and the possibility exists to develop custom extensions using field-programmable gate arrays. This capability allows, e.g., implementing communication based on TCP and PCI Express, which can be interfaced with the simulation at every time step.

Fig 2: RTS

The RTS has its own accurate internal clock, but it can also be synchronised through GPS to avoid time drift when linking testbeds hosted at different sites. The discrete time step solver is typically executed with time steps of 50 µs for the network, machines, and controls, and 2 µs for power electronics-based converters with pulse-width modulation-based controls. The system is designed to meet the real-time deadline and halts if the deadline is not met.
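The fixed-step, halt-on-overrun behaviour described above can be illustrated with a small sketch. This is hypothetical Python, not the actual RTDS implementation (which runs on dedicated parallel hardware); the function name and interface are illustrative only:

```python
import time

def run_fixed_step(solve, n_steps, step_us=50):
    """Advance a solver with a fixed time step; halt on a missed deadline,
    mirroring the halt-on-overrun behaviour of the RTS described above."""
    step_s = step_us * 1e-6
    next_deadline = time.perf_counter() + step_s
    for k in range(n_steps):
        solve(k)  # one simulation step (user-supplied callable)
        if time.perf_counter() > next_deadline:
            raise RuntimeError(f"real-time deadline missed at step {k}")
        while time.perf_counter() < next_deadline:
            pass  # busy-wait to hold the fixed step boundary precisely
        next_deadline += step_s
```

A 50 µs step is only realistic on dedicated hardware; a general-purpose OS would need a far coarser step to hold the deadline reliably.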


Communication system

On the communications side of the testbed, the functionality of interest concerns the ability to model, simulate, and analyse network traffic. To this end, a dedicated SW tool was chosen that features a discrete event simulator appropriate for several communication and routing protocols. Protocols of interest are the generic Internet protocols (TCP/IP and UDP), but also real-time and custom extensions. The tool provides means of observing and debugging network traffic, e.g. round-trip delays and channel loading. The choice of a specialised tool rather than extensions to the RTS was made due to the extended features already available and the better-suited, more efficient simulation engine.

The SW tool is OPNET [17], used in combination with its system-in-the-loop module and interface for faster execution and, consequently, better soft real-time performance. Network communication features are supported through the implementation of message filters, which allow, e.g., introducing delays, dropped packets, and communication errors.

With respect to parameterisation, analysis, and reporting, of special interest are latencies and channel loading statistics. As an example, the time resolution of (packet) communication analysis is in the hundred-microsecond range, and networks with round-trip times of half a millisecond and above can be modelled in real time, which means that networks extending beyond single rooms (hubs) are feasible. Delays can also be modelled at this time resolution.
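The effect of such message filters (delaying or dropping packets before delivery) can be sketched in a few lines. The class name and interface below are hypothetical Python for illustration and do not correspond to the actual OPNET filter API:

```python
import heapq
import random

class PacketFilter:
    """Toy message filter in the spirit of the filters described above:
    drops packets with a given probability and delays the remainder by a
    fixed latency before delivery. Illustrative sketch only."""

    def __init__(self, delay_us=1000, drop_prob=0.0, seed=0):
        self.delay_us = delay_us
        self.drop_prob = drop_prob
        self.rng = random.Random(seed)
        self.queue = []   # (delivery_time_us, sequence_no, packet)
        self.seq = 0

    def submit(self, now_us, packet):
        """Packet enters the simulated network at time now_us."""
        if self.rng.random() < self.drop_prob:
            return  # packet lost
        heapq.heappush(self.queue, (now_us + self.delay_us, self.seq, packet))
        self.seq += 1

    def deliver(self, now_us):
        """Return all packets whose delivery time has been reached."""
        out = []
        while self.queue and self.queue[0][0] <= now_us:
            out.append(heapq.heappop(self.queue)[2])
        return out
```

The heap keeps deliveries ordered by time, so the filter also works with per-packet (e.g. randomised) delays.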

Distributed controls

As distributed controls and algorithms are an integral part of the FREEDM system development process, embedded computation platforms and their integration into the testbed are a major objective. Several choices exist, but currently two commercially available solutions are deployed, one based on the x86 architecture (Mamba) and one based on the ARM architecture (see [18,19], respectively). Both embedded platforms run a modified Linux with a real-time scheduler (see Fig 3). These platforms allow rapid development and deployment of the envisioned concepts and come with support for multiple Ethernet interfaces and the TCP/IP and UDP protocols.

Fig 3: Distributed control platform

The FREEDM distribution system development specifically concerns a distributed and autonomous concept that is referred to as distributed grid intelligence (DGI) [20]. The DGI builds on SW agents, i.e. computing nodes with the following features: (i) operating and managing power and energy distribution, (ii) reconfiguring if necessary, and (iii) handling a subset of faults (i.e. faults that can be isolated) through reconfiguration to re-energise the feeder.

DGI components interface with SSTs – the power actuators – and are a means of operating and controlling at a higher system level. Local ‘brokers’ coordinate group behaviour and maintain state-of-the-system information, and part of the messages exchanged support making the SSTs plug-and-play capable, i.e. SSTs can become part of the system and leave through a defined process.

System integration

After introducing the salient components of the testbed, this section reviews the system integration aspects of building a coherent real-time modelling and simulation platform. A graphical overview of the FREEDM system testbed implementation is shown in Fig 4. The individual components included are the RTS, a custom interface for exchanging power system model data, embedded platforms hosting the DGIs, and the SW network simulator. In addition, the figure shows the timing information for the real-time testbed: the RTS is a hard real-time system with fixed time steps, the simulated network is soft real time, and the number of messages is a conservative limit experienced while testing. Examples included in the discussion section below provide experimental measurements of message throughput.

Fig 4: Integrated FREEDM system testbed and timing information

With respect to subsystem performance, the RTS guarantees real-time execution or halts operation in case of time overruns. The DGIs are implemented on dedicated, embedded HW and execute using a scheduler that ensures that all tasks get a fair chance (i.e. avoids starvation). OPNET's performance in simulating a network has been compared to a HW implementation; it depends on the hosting platform (here PC/Linux), but experience shows satisfactory results for the networks of interest.

The modelled subsystems exchange data via a server-client-based communication model. Part of this process is a custom implementation to access RTS values, which is feasible through direct access to data in digital format, avoiding conversion to analogue signals.
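The principle of such a server-client exchange of simulator values can be sketched as follows. The wire format (a count-prefixed list of doubles in network byte order) is an assumption for illustration, not the actual RTS protocol:

```python
import socket
import struct
import threading

def serve_values(conn, values):
    """Server side: send a count-prefixed list of doubles (network order)."""
    conn.sendall(struct.pack(f"!I{len(values)}d", len(values), *values))
    conn.close()

def read_values(conn):
    """Client side: receive the count, then exactly that many doubles."""
    n = struct.unpack("!I", conn.recv(4))[0]
    buf = b""
    while len(buf) < 8 * n:
        buf += conn.recv(8 * n - len(buf))
    return list(struct.unpack(f"!{n}d", buf))

# Example: stream two hypothetical measurements (a bus voltage in V and
# a frequency in Hz) from a "server" thread to a "client".
a, b = socket.socketpair()
t = threading.Thread(target=serve_values, args=(a, [7200.0, 59.98]))
t.start()
received = read_values(b)
t.join()
b.close()
```

In the real testbed, the same pattern runs over Ethernet between the RTS interface and the simulation host rather than over a local socket pair.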

As there are many aspects to testbed setup and operation, efforts have been made to automate the steps necessary to configure models and systems and to start simulations. Through the development of configuration files, functions, and scripts, scenarios are executed programmatically, simulation data are stored and archived, and post-processing is supported. In addition, as several other university teams are part of the FREEDM system development, the testbed is available through remote access for cross-campus collaboration.

System integration aspects with respect to the communication networks are the following. Real (HW) networks and workstations/computing nodes interact with simulated networks. The networks are implemented via Ethernet, and the SW tool supports linking multiple physical and SW networks. As shown in Fig 5, the simulation host is the workstation simulating a desired network in SW and exchanges data with physical (real-world) networks.

Fig 5: Communication network system integration

Helper tools and application programmer interfaces (APIs) are available for configuring simulation hosts, and SW libraries either exist or can be extended (best through C-code implementations) to implement a desired network in SW. Packet filters provide a means of configuring the simulated SW network during operation.

Network case studies

To evaluate the networked platform capabilities, example studies were performed and the following summarises salient features and characteristics. The first example evaluates the HW- and SW-based communication capabilities in a switch-based network. For the purpose of this test, the probing traffic is based on UDP packets of 1024 bytes between two embedded x86-nodes. The communication timing results for the HW- and SW-based implementations are depicted in Fig 6. As shown, the average communication round trip time for the HW-based network (150 µs) is about 400 µs shorter than for the SW-based (i.e. OPNET with 550 µs) implementation. The difference in round trip times is acceptable with respect to the intended applications of the cyber physical testbed.

Fig 6: Communication round trip times between two embedded x86-nodes using a switch-based network: comparing HW and SW implementations
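The round-trip measurement itself is straightforward; the following hypothetical Python sketch shows the principle with an echo server and 1024-byte UDP probes, run here over loopback rather than between two embedded nodes as in the experiment:

```python
import socket
import threading
import time

PAYLOAD = bytes(1024)  # 1024-byte probe packets, as in the experiment above

def echo_server(sock, n):
    """Echo n datagrams back to their senders."""
    for _ in range(n):
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def measure_rtt_us(server_addr, n=50):
    """Mean UDP round-trip time over n probes, in microseconds."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    total = 0.0
    for _ in range(n):
        t0 = time.perf_counter()
        s.sendto(PAYLOAD, server_addr)
        s.recvfrom(2048)
        total += time.perf_counter() - t0
    s.close()
    return 1e6 * total / n

# Loopback demonstration: bind an echo server on an ephemeral port.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
t = threading.Thread(target=echo_server, args=(srv, 50))
t.start()
rtt = measure_rtt_us(srv.getsockname(), n=50)
t.join()
srv.close()
```

Loopback round-trip times will be far below the 150 µs HW figure reported above; the sketch only illustrates the probing method, not the measured values.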

As a second example, the capability to introduce custom delays into the SW-emulated network is evaluated. The network configuration and traffic are as described above, but a delay of 1 ms is introduced in the simulated network. The communication timing results are depicted in Fig 7 and demonstrate that the delay is emulated correctly, as the round-trip times increase from an average of about 550 µs to about 1500 µs.

Fig 7: Communication round trip times between two embedded x86-nodes using a switch-based network: SW implementation without and with 1 ms delay

To provide authentication, data integrity, and encryption/decryption for the DGI traffic, IPSec [21,22] is implemented on all the embedded x86 boards. As expected and shown in Fig 8, the introduction of IPSec increases the communication delay between the nodes. The timing results of the SW-based implementations are appropriate for emulating the expected network traffic type and volume of the envisioned FREEDM distribution system.

Fig 8: Communication round trip times between two embedded x86-nodes using a switch-based network: HW and SW implementations without and with IPSec


Discussion

What challenges can be and are being addressed by building and using this testbed? First, of most concern is the stability of algorithms as several new features are introduced to the FREEDM system concept, and convergence properties considering latencies need to be established. In case of discrepancies between expected and testbed results, the differences need to be investigated. As expected results are most often derived using an analytical approach with simplifying assumptions, the testbed provides a great means of checking the validity of assumptions and expectations, and facilitates finding the root causes of differences.

As an example, the testbed helped uncover performance issues with DGI's distributed control algorithms, as convergence times for dispatching power were higher than expected and resulted in discrepancies between analytically determined and measured testbed values [23,24]. Figs 9 and 10 show a comparison study that was performed when DGI was first ported to the embedded controllers on the HIL testbed. The data in Fig 9 was recorded by the FREEDM team from MS&T and shows DGI executing its load balance algorithm (see [15] for details on the algorithm) with five distributed controllers participating. In this experiment, the distributed controllers were run on a Windows PC cluster, and the controllers interacted with a non-real-time PSCAD power system simulation. The total simulated time for the experiment was approximately 2 s, which required just over a minute of wall-clock time to complete. In this experiment, the load balance operation completed in approximately 600 ms. Using the same initial conditions and an identical power system simulation on the RTDS, a companion experiment was performed on the HIL testbed. While similar load balance behaviour is shown in Fig 10, the time required to complete the operation is markedly different. In the non-real-time test environment, DGI was able to complete its load balance operation in less than 1 s (∼600 ms). On the HIL testbed, DGI requires more than 20 s to accomplish the same task. These discrepancies were due to unanticipated latencies in the data communications network and the assumptions made about power system dynamics. These issues were missed earlier in the less realistic system verification environment.

Fig 9: DGI load balance operation performed in non-real-time environment

Fig 10: DGI load balance operation performed on HIL testbed at CAPS

Another example concerns the network communication emulation capabilities of the HIL testbed. These capabilities allow placing embedded controllers in non-ideal (i.e. more realistic) operating environments. Figs 11 and 12 show the results of an experiment designed to investigate the quantity of network traffic generated by a varying number of distributed control processes during routine operation. Fig 11 shows the quantity of network traffic per second generated by six distributed control processes, one on each of the Mamba embedded controllers, during a typical DGI load balance cycle. The six participating controllers communicate and coordinate their efforts as expected, and generate an average network traffic of 142 packets per second over the course of the experiment.

Fig 11: Testbed distributed controller network traffic measured using OPNET, six distributed control processes communicating

Fig 12: Testbed distributed controller network traffic measured using OPNET, 24 distributed control processes communicating

The number of participating distributed control processes was then increased to 24 in order to analyse how the network traffic scales. Fig 12 shows the quantity of network traffic per second generated by 24 distributed control processes, 4 on each of the Mamba embedded controllers, during an equivalent DGI load balance cycle. Increasing the number of control processes by a factor of four resulted in an average network traffic of 920 packets per second, a factor-of-6.46 increase in network traffic.
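The superlinear scaling can be checked directly from the reported averages (computed from the rounded per-second figures, the ratio comes out slightly above the 6.46 factor quoted above, which presumably uses unrounded data):

```python
# Scaling check from the averages reported above.
procs = (6, 24)             # number of distributed control processes
pkts_per_s = (142.0, 920.0) # measured average packets per second

process_factor = procs[1] / procs[0]            # 4x more processes
traffic_factor = pkts_per_s[1] / pkts_per_s[0]  # ~6.5x more traffic

# Traffic grows faster than the process count: each added DGI process
# exchanges coordination messages with its existing peers.
print(process_factor, round(traffic_factor, 2))
```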

The spikes in network traffic shown in both Figs 11 and 12 represent periods in which DGI's group management algorithm sends extra messages to coordinate group formation among peers. These traffic bursts occur at non-periodic intervals depending on the stability of the distributed controller groups. This network traffic data was measured using OPNET and verified the suitability of the testbed to handle the maximum expected message traffic during network traffic bursts.

Third, mathematical models of physical components may ignore the influence of quantisation and measurement errors that, in actual systems, may lead to susceptibility to oscillations and/or limit cycles, e.g. when dispatching generation and shifting power among resources. Fourth, communication needs can be investigated through parametric studies, including feasible messages per second and payload sizes. Fifth, requirements can be established for (precision) clock synchronisation and state information, i.e. the need of DGI nodes to know the current state of the system, including actual time, but also the progress and status of negotiations and messages. With respect to the electric power system, components and controls are modelled at varying levels of detail, and the integrated electric system is modelled from the feeder level down to the load and generation integration at SSTs. This approach supports investigating component control behaviour as well as system stability and performance.


Conclusion

This article summarises the work performed to develop an integrated testbed for studying the FREEDM distribution system concept. By enabling modelling and simulation of a combination of electric power distribution systems, power system components, local and distributed controls, and communications, the operation and coordination of the system and its salient, power electronics-based components can be evaluated in a comprehensive and holistic manner. The testbed is an extremely valuable tool for assisting in the development process and allows testing features before actual HW and field tests are feasible. To date, the testbed has guided numerous improvements to distributed control algorithms that would not have been possible using other, less realistic system verification environments. For example, the convergence time for DGI's energy dispatching algorithm, measured experimentally using the real-time testbed, was longer than expected and differed from the convergence time predicted using non-real-time analysis. This and other issues would have been missed otherwise, and ultimately led to improvements in DGI's group formation and load balancing algorithms, as well as a better understanding of real-time timing constraints for distributed controls.


Acknowledgment

This work was supported by the ERC Program of the National Science Foundation under Award Number EEC-08212121.


References

  1. Hopkinson K.: ‘EPOCHS: A platform for agent-based electric power and communication simulation built from commercial off-the-shelf components’, IEEE Trans. Power Syst., 2006, 21, pp. 548–558, (doi: 10.1109/TPWRS.2006.873129).
  2. General Electric: PSLF, accessed May 2015.
  3. Manitoba HVDC Research Centre: PSCAD, accessed May 2015.
  4. The Network Simulator NS2, accessed May 2015.
  5. Rehtanz C. (Ed.): ‘Autonomous systems and intelligent agents in power system control and operation’ (Springer, Berlin Heidelberg, 2003).
  6. Li W. Zhang X. Li H.: ‘Co-simulation platforms for co-design of networked control systems: An overview’, Control Eng. Pract., 2014, (23), pp. 44–56.
  7. Mets K. Ojea J. A. Develder C.: ‘Combining power and communication network simulation for cost-effective smart grid analysis’, IEEE Commun. Surv. Tutor., 2014, 16, (3), pp. 1771–1796, (doi: 10.1109/SURV.2014.021414.00116).
  8. Guo F. Herrera L. Murawski R. et al.: ‘Comprehensive real-time simulation of the smart grid’, IEEE Trans. Ind. Appl., 2013, 49, (2), pp. 899–908, (doi: 10.1109/TIA.2013.2240642).
  9. Reddi R. Srivastava A.: ‘Real time test bed development for power system operation, control and cyber security’. Proc. North American Power Symp., 2010, pp. 1–6, doi: 10.1109/NAPS.2010.5618985.
  10. Bottura R. Babazadeh D. Zhu K. et al.: ‘SITL and HLA Co-simulation platforms: tools for analysis of the integrated ICT and electric power system’. EuroCon, Zagreb, Croatia, 1–4 July 2013.
  11. Anderson D. Zhao C. Hauser C. H. et al.: ‘Intelligent design real-time simulation for smart grid control and communications design’, IEEE Power Energy Mag., 2012, 10, (1), pp. 49–57, (doi: 10.1109/MPE.2011.943205).
  12. Stevic M. Li W. Ferdowsi M. et al.: ‘A two-step simulation approach for joint analysis of power systems and communication infrastructures’. Fourth IEEE/PES Innovative Smart Grid Technologies Europe (ISGT EUROPE), 6–9 October 2013, pp. 1–5, doi: 10.1109/ISGTEurope.2013.6695440.
  13. Huang A.: ‘FREEDM system – a vision for the future grid’. Proc. of the IEEE Power and Energy Society General Meeting, 25–29 July 2010, doi: 10.1109/PES.2010.5590201.
  14. Du Y. Baek S. Bhattacharya S. et al.: ‘High-voltage high-frequency transformer design for a 7.2 kV to 120 V/240 V 20 kVA solid state transformer’. Proc. of IECON 2010 – 36th Annual Conf. on IEEE Industrial Electronics Society, 7–10 November 2010, pp. 493–498, doi: 10.1109/IECON.2010.5674828.
  15. Akella R. Meng F. Ditch D. et al.: ‘Distributed power balancing for the FREEDM system’. Proc. of the IEEE Conf. on Smart Grid Communication (SmartGridComm'10), Gaithersburg, MD, 4–6 October 2010, pp. 7–12, doi: 10.1109/SMARTGRID.2010.5622003.
  16. Kuffel R. Giesbrecht J. Maguire T. et al.: ‘RTDS-a fully digital power system simulator operating in real time’. Conf. Proc. of the IEEE WESCANEX, Communications, Power, and Computing, 15–16 May 1995, vol. 2, pp. 300–305, doi: 10.1109/WESCAN.1995.494045.
  17. Riverbed Technology: OPNET, accessed May 2015.
  18. VersaLogic Corporation: Mamba, accessed May 2015.
  19. Technologic Systems: TS-7800 Single Board Computer, accessed May 2015.
  20. Crow M. L. McMillin B. Wang W. et al.: ‘Intelligent energy management of the FREEDM system’. Proc. of the Power and Energy Society General Meeting, 25–29 July 2010, doi: 10.1109/PES.2010.5589992.
  21. Cisco Systems Inc.: ‘IPSec’. white paper , 1998.
  22. Cai Z. Dong Y. Yu M. et al.: ‘A secure and distributed control network for the communications in smart grid’. Proc. of the IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC'11), 9–12 October 2011, pp. 2652–2657, doi: 10.1109/ICSMC.2011.6084071.
  23. Cai Z. Yu M. Steurer M. et al.: ‘Real-time emulation of the communication layer for microgrids using HIL simulations’. Microgrid RODEO Summit, Austin, Texas, 20 February 2014.
  24. Stanovich M. J. Leonard I. Srivastava S. K. et al.: ‘Development of a smart-grid cyber-physical systems testbed’. Proc. of the IEEE PES Innovative Smart Grid Technologies (ISGT'13), Washington, DC, 24–27 February 2013, pp. 1–6, doi: 10.1109/ISGT.2013.6497874.
Karl Schoder

Assistant research scholar, Center for Advanced Power Systems
