The purpose of this chapter is to provide a framework for what this book is to cover. What types of questions does it aspire to answer, and what kinds of knowledge can you, the reader, expect to gain by working through the material presented in this book? What are the relations between real physical systems and their mathematical models? What are the characteristics of mathematical descriptions of physical systems? We shall then talk about simulation as a problem-solving tool, and finally, we shall offer a classification of the basic characteristics of simulation software systems.
In this chapter, we shall discuss some basic ideas behind the algorithms that are used to numerically solve sets of ordinary differential equations specified by means of a state–space model. Following a brief introduction to the concept of numerical extrapolation that is at the heart of all numerical integration techniques, and after analyzing the types of numerical errors that all these algorithms are bound to exhibit, the two most basic algorithms, Forward Euler (FE) and Backward Euler (BE), are introduced, and the fundamental differences between explicit and implicit integration schemes are demonstrated by means of these two algorithms.
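The explicit/implicit distinction can be sketched on the scalar linear test problem dx/dt = λx, for which the implicit BE equation can be solved in closed form. This is a minimal illustration only; the function names and the toy problem are ours, not the book's:

```python
# Forward Euler (explicit) vs. Backward Euler (implicit) on dx/dt = lam * x.
# For this linear problem the implicit BE relation
#   x[k+1] = x[k] + h*lam*x[k+1]
# has the closed-form solution x[k+1] = x[k]/(1 - h*lam); for a general
# nonlinear f, BE would instead require an iteration (e.g., Newton).

def forward_euler(lam, x0, h, steps):
    x = x0
    for _ in range(steps):
        x = x + h * lam * x        # uses the derivative at the *old* time
    return x

def backward_euler(lam, x0, h, steps):
    x = x0
    for _ in range(steps):
        x = x / (1.0 - h * lam)    # closed-form solution of the implicit step
    return x

# Example: x' = -x, x(0) = 1, integrated to t = 1 with h = 0.1.
fe = forward_euler(-1.0, 1.0, 0.1, 10)
be = backward_euler(-1.0, 1.0, 0.1, 10)
# Both approximate exp(-1), approaching it from opposite sides.
```

Note that both approximations bracket the analytical solution exp(−1) ≈ 0.3679: FE undershoots and BE overshoots, a first hint at the complementary error behavior of the two schemes.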
The reader is then introduced to the concept of numerical stability as opposed to analytical stability. The numerical stability domain is introduced as a tool to characterize an integration algorithm, and a general procedure to find the numerical stability domain of any integration scheme is presented. The numerical stability domain of an integration method is a convenient tool to assess some of its most important numerical characteristics.
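The general procedure can be sketched as follows: applying an integration method to the linear test problem dx/dt = λx yields a discrete map x[k+1] = F(z)·x[k] with z = hλ, and the numerical stability domain is the set of z where |F(z)| ≤ 1. The amplification functions below are standard results for FE and BE; the helper names are ours:

```python
# Stability-domain check via the amplification function F(z), z = h*lam.

def fe_amplification(z):
    return 1 + z            # Forward Euler:  x[k+1] = (1 + h*lam) * x[k]

def be_amplification(z):
    return 1 / (1 - z)      # Backward Euler: x[k+1] = x[k] / (1 - h*lam)

def in_stability_domain(F, z):
    # z may be complex; the domain is where the map is non-expanding.
    return abs(F(z)) <= 1.0

# FE is stable only inside the unit circle centered at z = -1 ...
assert in_stability_domain(fe_amplification, -1.0)
assert not in_stability_domain(fe_amplification, -3.0)
# ... whereas BE is stable everywhere outside the unit circle centered at z = +1.
assert in_stability_domain(be_amplification, -3.0)
assert not in_stability_domain(be_amplification, 0.5)
```

The asymmetry visible here, i.e., FE's small bounded stability region versus BE's stability almost everywhere, is precisely what makes the stability domain such a convenient characterization tool.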
This chapter extends the ideas of numerical integration by means of a Taylor–Series expansion from the first–order (FE and BE) techniques to higher orders of approximation accuracy. The well–known class of explicit Runge–Kutta techniques is introduced by generalizing the predictor–corrector idea.
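The predictor–corrector idea behind the explicit Runge–Kutta techniques can be sketched in its simplest form, Heun's method (an explicit second-order Runge–Kutta scheme). The code below is an illustrative sketch; the names are ours:

```python
# Heun's method: a Forward Euler predictor supplies a slope estimate at the
# end of the step, and the corrector averages the two slopes.

def heun_step(f, t, x, h):
    k1 = f(t, x)                     # slope at the beginning of the step
    xp = x + h * k1                  # predictor: Forward Euler estimate
    k2 = f(t + h, xp)                # slope at the predicted endpoint
    return x + h * (k1 + k2) / 2.0   # corrector: average of the two slopes

def integrate(f, t0, x0, h, steps):
    t, x = t0, x0
    for _ in range(steps):
        x = heun_step(f, t, x, h)
        t += h
    return x

# x' = -x, x(0) = 1: second-order accuracy is visible already at h = 0.1.
x1 = integrate(lambda t, x: -x, 0.0, 1.0, 0.1, 10)
```

Compared with Forward Euler at the same step size, the error at t = 1 drops from roughly 2·10⁻² to well below 10⁻³, at the cost of a second function evaluation per step.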
The chapter then explores special classes of single–step techniques that are well suited for the simulation of stiff systems and of marginally stable systems, namely the extrapolation methods and the backinterpolation algorithms. The stability domain serves as a good vehicle for analyzing the stability properties of these classes of algorithms.
We then delve more deeply into the question of approximation accuracy. The accuracy domain is introduced as a simple tool to explore this issue, and the order star approach is subsequently introduced as a more refined and satisfying alternative.
The chapter ends with a discussion of the ideas behind step–size control and order control, and the techniques used to accomplish these in the realm of single–step algorithms.
In this chapter, we shall look at several families of integration algorithms that all have in common the fact that only a single function evaluation needs to be performed in every integration step, irrespective of the order of the algorithm. Both explicit and implicit varieties of this kind of algorithm exist and shall be discussed. As in the last chapter, we shall spend some time discussing the stability and accuracy properties of these families of integration algorithms.
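The single-evaluation property can be sketched with the second-order Adams–Bashforth method (AB2), a simple explicit multi–step scheme. The start-up step and the names below are ours; a production code would use a more careful start-up strategy:

```python
# Second-order Adams-Bashforth: one new function evaluation per step; the
# previous derivative value is stored and reused.  A single Forward Euler
# step serves as a (simplistic) start-up, since AB2 needs two past values.

def ab2_integrate(f, t0, x0, h, steps):
    t, x = t0, x0
    f_old = f(t, x)
    x = x + h * f_old                # start-up step (Forward Euler)
    t += h
    for _ in range(steps - 1):
        f_new = f(t, x)              # the only function evaluation this step
        x = x + h * (1.5 * f_new - 0.5 * f_old)
        f_old = f_new
        t += h
    return x

# x' = -x, x(0) = 1, h = 0.1: second-order accuracy with half the work of
# a two-stage Runge-Kutta method of the same order.
x1 = ab2_integrate(lambda t, x: -x, 0.0, 1.0, 0.1, 10)
```

The price paid for the cheap step is that past derivative values must be stored, which is exactly what makes step-size changes awkward for this family of methods.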
Whereas step–size and order control were easily accomplished in the case of the single–step techniques, these issues are much more difficult to tackle in the case of the multi–step algorithms. Consequently, their discussion must occupy a significant portion of this chapter.
The chapter starts out with mathematical preliminaries that shall simplify considerably the subsequent derivation of the multi–step methods.
In this chapter, we shall look at integration algorithms designed to deal with system descriptions containing second–order derivatives in time. Such system descriptions occur naturally in the mathematical modeling of mechanical systems, as well as in the mathematical modeling of distributed parameter systems leading to hyperbolic partial differential equations.
In this chapter, we shall concentrate on mechanical systems. The discussion of partial differential equations is postponed to the next chapter.
Whereas it is always possible to convert second-derivative systems to state–space form, integration algorithms that deal with the second derivatives directly may, in some cases, offer a numerical advantage.
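The standard conversion to state–space form can be sketched on a hypothetical damped mass–spring oscillator, m·ẍ + d·ẋ + k·x = 0: with x₁ = x and x₂ = ẋ, the equivalent first-order system is x₁' = x₂, x₂' = −(d·x₂ + k·x₁)/m, which any state–space ODE solver can integrate. The code and parameter values are illustrative only:

```python
# Second-derivative model m*xdd + d*xd + k*x = 0 rewritten in state-space
# form and integrated with Forward Euler (chosen for brevity only).

def oscillator_rhs(t, state, m=1.0, d=0.0, k=1.0):
    x1, x2 = state                       # x1 = position, x2 = velocity
    return (x2, -(d * x2 + k * x1) / m)

def fe_step(f, t, state, h):
    dx = f(t, state)
    return tuple(s + h * ds for s, ds in zip(state, dx))

# Undamped unit oscillator, x(0) = 1, xd(0) = 0: analytically x(t) = cos(t).
state = (1.0, 0.0)
h = 0.001
for n in range(1000):                    # 1000 steps of size 0.001 -> t = 1
    state = fe_step(oscillator_rhs, n * h, state, h)
```

For marginally stable models such as this one, the explicit scheme slowly injects energy, which is one of the reasons why algorithms operating on the second derivatives directly can be attractive.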
In this chapter, we shall deal with method–of–lines solutions to models that are described by individual partial differential equations, by sets of coupled partial differential equations, or possibly by sets of mixed partial and ordinary differential equations.
Emphasis will be placed on the process of converting partial differential equations to equivalent sets of ordinary differential equations, and particular attention will be devoted to the problem of converting boundary conditions. To this end, we shall again consult our, by now well-understood, Newton–Gregory polynomials.
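The basic conversion step can be sketched on the one-dimensional heat equation u_t = u_xx on [0, 1] with u = 0 at both boundaries: replacing the spatial derivative by a second-order central difference leaves one ODE per grid point. The grid sizes and names below are illustrative assumptions:

```python
# Method of lines for u_t = u_xx: discretize space, keep time continuous,
# then hand the resulting ODE set to any ODE solver (here: Forward Euler).

def heat_rhs(u, dx):
    # u holds the interior grid points; boundary values are fixed at zero.
    n = len(u)
    du = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        du[i] = (left - 2.0 * u[i] + right) / dx ** 2
    return du

# Initial profile: a hat function, which should decay smoothly toward zero.
n, dx, h = 9, 0.1, 0.001     # h respects the FE stability bound h < dx**2 / 2
u = [min(i + 1, n - i) * dx for i in range(n)]
for _ in range(500):          # integrate to t = 0.5
    du = heat_rhs(u, dx)
    u = [ui + h * dui for ui, dui in zip(u, du)]
```

The comment on the step size already hints at the particular difficulty of the parabolic case: the semi-discretized ODE set is stiff, with the stiffness growing as the spatial grid is refined.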
We shall then spend some time analyzing the particular difficulties that await us when numerically solving the sets of resulting differential equations in the cases of parabolic, hyperbolic, and elliptic partial differential equations. It turns out that each class of partial differential equations exhibits its own particular and peculiar types of difficulties.
In this chapter, we shall analyze simulation problems that don’t present themselves initially in an explicit state–space form. For many physical systems, it is quite easy to formulate a model where the state derivatives show up implicitly and possibly even in a non-linear fashion anywhere within the equations. We call system descriptions that consist of a mixture of implicitly formulated algebraic and differential equations Differential Algebraic Equations (DAEs). Since these cases constitute a substantial and important portion of the models encountered in science and engineering, they deserve our attention. In this chapter, we shall discuss the question of how sets of DAEs can be converted symbolically, in an automated fashion, to equivalent sets of ODEs.
In the previous chapter, we discussed symbolic algorithms for converting implicit and even higher–index DAE systems to explicit ODE form. In this chapter, we shall look at these very same problems once more from a different angle. Rather than converting implicit DAEs to explicit ODE form, we shall try to solve the DAE systems directly. Solvers that are capable of dealing with implicit DAE descriptions directly have been coined differential algebraic equation solvers, or DAE solvers. They are the focus of this chapter.
In this chapter, we shall discuss how discontinuous models can be handled by the simulation software, and in particular by the numerical integration algorithm. Discontinuous models are extremely common in many areas of engineering, e.g., to describe dry friction phenomena or impact between bodies in mechanical engineering, or to describe switching circuits in electronics. In the first part of this chapter, we shall be dealing with the numerical aspects of integrating across discontinuities. Two types of discontinuities, time events and state events, are introduced that require different treatment by the simulation software. In the second part of this chapter, we shall discuss the modeling aspects of how discontinuities can be conveniently described by the user in an object–oriented manner, and what the compiler needs to do to translate these object–oriented descriptions down into event descriptions.
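State-event handling can be sketched as follows: integrate normally, monitor a zero-crossing function, and once a step straddles a sign change, locate the event time by bisection within that step. The model (a falling mass whose height reaches zero) and all names are illustrative assumptions, not the book's code:

```python
# State-event location by bisection.  Forward Euler drives the integration;
# crossing() is the event function whose zero crossing must be located.

def fe_step(state, h):
    x, v = state
    return (x + h * v, v + h * (-9.81))    # free fall, g = 9.81 m/s^2

def crossing(state):
    return state[0]                         # event fires when height == 0

def find_event(state, h, tol=1e-10):
    # `state` is the state just before the step containing the sign change.
    lo, hi = 0.0, h
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if crossing(fe_step(state, mid)) > 0.0:
            lo = mid                        # still above ground at t + mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

state, t, h = (10.0, 0.0), 0.0, 0.01
while crossing(fe_step(state, h)) > 0.0:    # look ahead one full step
    state = fe_step(state, h)
    t += h
t_event = t + find_event(state, h)          # event time within the last step
```

Locating the event and restarting the integration there, rather than stepping blindly across the discontinuity, is what preserves the accuracy and order of the integration algorithm.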
In this chapter, we shall discuss the special requirements of real–time simulation, i.e., of simulation runs that keep abreast of the passing of real time, and that can accommodate driving functions (input signals) that are generated outside the computer and that are read in by means of analog-to-digital (A/D) converters.
Until now, computing speed has always been a soft constraint — slow simulation meant expensive simulation, but now, it becomes a very hard constraint. Simulation becomes a race against time. If we cannot complete the computations associated with one integration step before the real–time clock has advanced by h time units, where h is the current step size of the integration algorithm, the simulation is out of sync, and we just lost the race.
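The hard deadline can be sketched in a few lines: after each integration step, the simulator compares the wall clock against the next deadline, idling if it is early and recording an overrun if it lost the race. The loop structure and names are ours, and the toy model is deliberately trivial:

```python
# Hard real-time loop: each step of size h must complete before the
# real-time clock has advanced by h seconds.

import time

def simulate_real_time(f, x0, h, steps):
    x, overruns = x0, 0
    next_deadline = time.perf_counter() + h
    for _ in range(steps):
        x = x + h * f(x)                     # one (cheap) Forward Euler step
        now = time.perf_counter()
        if now > next_deadline:
            overruns += 1                    # out of sync: we lost the race
        else:
            time.sleep(next_deadline - now)  # idle until real time catches up
        next_deadline += h
    return x, overruns

# x' = -x with h = 0.02 s: 25 steps consume half a second of real time.
x_final, missed = simulate_real_time(lambda x: -x, 1.0, 0.02, 25)
```

Note that the deadlines advance by h regardless of overruns; a real-time simulator cannot "borrow" time from future steps, which is exactly why expensive comfort features such as step-size control become problematic.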
Until now, we always tried to make simulation more comfortable for the user. For example, we introduced step–size controlled algorithms so that the user wouldn’t have to worry anymore about whether or not the numerical integration meets his or her accuracy requirements. The algorithm would do so on its own. In the context of real–time simulation, we may not be able to afford all this comfort any longer. We may have to throw many of the more advanced features of simulation overboard in the interest of saving time, but of course, this means that we ourselves have to understand even better how simulation works in reality.
This chapter explores a new way of approximating differential equations, replacing the time discretization by a quantization of the state variables. We shall see that this idea will lead us to discrete event systems in terms of the DEVS formalism instead of difference equations, as in the previous approximations.
Thus, before formulating the numerical methods derived from this approach, we shall introduce the basic definitions of DEVS. This methodology, as a general discrete event systems modeling and simulation formalism, will provide us with the tools to describe, and to translate into computer programs, the routines that implement a new family of methods for the numerical integration of continuous systems.
Further, the chapter explores the principles of quantization–based approximations of ordinary differential equations and their representation as DEVS simulation models.
Finally, we shall briefly introduce the QSS method in preparation for the next chapter, where we shall study this numerical method in more detail.
This chapter focuses on the Quantized State Systems (QSS) method and its extensions. After a brief explanation concerning the connections between this discrete event method and perturbation theory, the main theoretical properties of the method, i.e., convergence, stability, and error control properties, are presented.
The reader is then introduced to some practical aspects of the method related to the choice of quantum and hysteresis, the incorporation of input signals, as well as output interpolation.
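The flavor of the method can be sketched on the scalar system dx/dt = −q(t), where q is the quantized version of the state x: instead of discretizing time, the solver hops from one quantum crossing to the next, the state following a straight line in between. This is a bare-bones sketch with the hysteresis omitted; the names are ours:

```python
# Minimal QSS1 sketch for the scalar system dx/dt = f(q), q = quantize(x).
# Between events the state is piecewise linear with slope f(q); an event
# fires whenever |x - q| reaches the quantum.

def qss1(f, x0, quantum, t_end):
    t, x = 0.0, x0
    q = x                          # quantized state (hysteresis omitted)
    while True:
        slope = f(q)
        if slope == 0.0:
            return t_end, x        # no further events will occur
        dt = quantum / abs(slope)  # time until |x - q| reaches the quantum
        if t + dt > t_end:
            return t_end, x + slope * (t_end - t)
        t += dt
        x += slope * dt
        q = x                      # event: update the quantized state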
In spite of the theoretical and practical advantages that the QSS method offers, the method has a serious drawback, as it is only first-order accurate. For this reason, a second-order accurate quantization-based method is subsequently presented that preserves the main theoretical properties characterizing the QSS method.
Further, we shall focus on the use of both quantization–based methods in the simulation of DAEs and discontinuous systems, where we shall observe some interesting advantages that these methods have over the classical discrete-time methods.
Finally, following the discussion of a real-time implementation of these methods, some drawbacks and open problems of the proposed methodology shall be discussed, with particular emphasis given to the simulation of stiff systems.