
DVCompute Simulator Tutorial


Discrete event simulation describes an activity that can be represented in terms of computations. We define simple units to represent primitive computations, and then combine these units to create computations that depend on the results of intermediate computations. This is a very general concept that comes from the functional programming world, but it can be applied to discrete event simulation too.

Moreover, we define both sequential and distributed simulation models in the same manner. While you can launch the former on your laptop, the optimistic Time Warp method allows you to create large distributed simulation models that can be launched on supercomputers. The same approach works everywhere; the same idea applies.

As the author of this method and its implementation, DVCompute Simulator, I hope you will find this approach not merely a theoretical invention, but a way to create real, practically useful models. I assume that the reader is familiar with the Rust programming language.

Time Delay

The key point is how we represent time delays in simulation. We can hold a discontinuous process for the specified time interval and then proceed with the computation.

fn hold_process(dt: f64) -> impl Process<Item = ()>

Here the function returns some object that implements the Process trait. Were it involved in the simulation, the corresponding process would be delayed while other simulation activities continued their execution concurrently. But to involve this object in the simulation, we have to create and run a combined computation that depends on it.

Creating Processes

If you are familiar with the concept of futures in Rust, then you will find this material easy to grasp, especially if you have applied the so-called combinators.

To create a primitive Process computation by the specified pure value, you can call the following function:

fn return_process<T>(val: T) -> impl Process<Item = T>

For example, the return_process(10) computation does nothing but compute the value 10 when used in the simulation. It is difficult to see why we actually need this until we introduce the following combinator, which has a long history in the world of programming.

To combine the current computation with its continuation, we can use a closure (in pseudo-code similar to Rust):

trait Process {
    type Item;

    fn flat_map<U, F>(self, f: F) -> impl Process<Item = U::Item>
        where U: Process,
              F: FnOnce(Self::Item) -> U;
}

For instance, if we want to hold the current process for 15 modelling time units and then for another 5 modelling time units after the first delay, then we can write:

    hold_process(15.0)
        .flat_map(|()| { hold_process(5.0) })

This is already a combined computation that holds the current process for 20 (= 15 + 5) modelling time units when used in the simulation. It is important to understand that this computation does nothing unless we involve it in the simulation. Later we will return to this subject.

The closure must also return a Process computation. This is why we need the return_process function mentioned above: if we have no ready computation, then we create one. As a demonstration, let us take a very abstract example that takes the current value within a Process computation and increments it:

fn inc_process(m: impl Process<Item = isize>) -> impl Process<Item = isize> {
    m.flat_map(|x| { return_process(x + 1) })
}

As usual, general concepts are difficult to grasp at first. This is indeed a general concept known as a monad. For example, the standard futures in Rust are monads. If you can apply futures with the help of combinators to create asynchronous programs, then you will be able to create and run models with the help of DVCompute Simulator. Here I just want to note that DVCompute Simulator has a solid and powerful theoretical basis, which means that you can model a very wide range of simulation activities.
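The monadic structure behind Process can be sketched in plain, self-contained Rust. The toy Comp trait below is not DVCompute's implementation; it only mirrors the return/flat_map pattern described above:

```rust
// A toy sketch (not DVCompute's actual implementation) of the monadic
// pattern: a computation that does nothing until it is run.
trait Comp {
    type Item;
    fn run(self) -> Self::Item;
}

// The counterpart of return_process: wraps a pure value.
struct Pure<T>(T);

impl<T> Comp for Pure<T> {
    type Item = T;
    fn run(self) -> T { self.0 }
}

// The counterpart of flat_map: a computation plus its continuation.
struct FlatMap<M, F>(M, F);

impl<M: Comp, U: Comp, F: FnOnce(M::Item) -> U> Comp for FlatMap<M, F> {
    type Item = U::Item;
    fn run(self) -> U::Item {
        // Run the first part, feed its result to the continuation, run that.
        (self.1)(self.0.run()).run()
    }
}

fn return_comp<T>(val: T) -> Pure<T> { Pure(val) }

// The analogue of the inc_process example above.
fn inc_comp(m: impl Comp<Item = isize>) -> impl Comp<Item = isize> {
    FlatMap(m, |x| return_comp(x + 1))
}

fn main() {
    // Nothing happens until the combined computation is run,
    // exactly as with Process in the simulation.
    let c = inc_comp(return_comp(10));
    assert_eq!(c.run(), 11);
}
```
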

Random Delay

There are plenty of different functions for generating random values in DVCompute Simulator, but we need only one of them, which will be used in the example later.

fn random_exponential_process(mu: f64) -> impl Process<Item = f64>

It holds the current process for a random interval distributed exponentially with the specified mean value. This delay is a side effect of the computation, but the computation itself returns the value of the applied time delay. This is what the type f64 describes in the resulting Process computation.

Actually, this function just generates a random value for the delay interval and then combines it with the hold_process function described above, which already implements the delay. In other words, this is a compound Process computation. Soon we will see how we can create composite computations to model some useful activity.
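How an exponentially distributed delay can be produced may be sketched in plain Rust via inverse-transform sampling. The Lcg generator and random_exponential function below are illustrative stand-ins, not DVCompute's actual generator:

```rust
// Illustration only: an exponentially distributed interval with mean `mu`
// obtained by inverse-transform sampling, dt = -mu * ln(u), u ~ U(0, 1).
// A tiny linear congruential generator stands in for a real one.
struct Lcg(u64);

impl Lcg {
    fn next_f64(&mut self) -> f64 {
        // Knuth's MMIX LCG constants.
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        // Map the top 53 bits into (0, 1].
        ((self.0 >> 11) as f64 + 1.0) / (1u64 << 53) as f64
    }
}

fn random_exponential(gen: &mut Lcg, mu: f64) -> f64 {
    -mu * gen.next_f64().ln()
}

fn main() {
    let mut gen = Lcg(12345);
    let n = 100_000;
    let mean = (0..n)
        .map(|_| random_exponential(&mut gen, 0.5))
        .sum::<f64>() / n as f64;
    // The sample mean should be close to mu = 0.5.
    assert!((mean - 0.5).abs() < 0.05);
}
```
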

Boxed Computation

This is a slightly technical detail related to the fact that Rust is a programming language without a garbage collector. Usually, Process computations, i.e. objects that implement the Process trait, are Rust objects allocated on the stack. This is very good for performance; moreover, such computations are often inlined, so there is almost no performance penalty. But to model activities with a loop-back, we need computations defined recursively, which is possible only if we allocate those computations on the heap.
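The constraint can be reproduced in plain Rust without any simulator: an opaque impl Trait return type cannot refer to itself, while a boxed trait object can, because a box pointer has a fixed size. The Step trait and countdown function below are purely illustrative:

```rust
// Plain-Rust illustration of why recursion needs heap allocation.
trait Step {
    fn run(self: Box<Self>) -> u32;
}

struct Countdown(u32);

impl Step for Countdown {
    fn run(self: Box<Self>) -> u32 {
        if self.0 == 0 {
            0
        } else {
            // Recursive structure: each step allocates the next one on the heap.
            countdown(self.0 - 1).run() + 1
        }
    }
}

// This compiles because Box<dyn Step> has a fixed size.
// A recursive `fn countdown(n: u32) -> impl Step` would not compile.
fn countdown(n: u32) -> Box<dyn Step> {
    Box::new(Countdown(n))
}

fn main() {
    assert_eq!(countdown(10).run(), 10);
}
```
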

type ProcessBox<T>

impl<T> Process for ProcessBox<T> {
    type Item = T;
}

Every Process computation can be transformed into a value of the ProcessBox type by calling the into_boxed method defined in the Process trait. The resulting object will be allocated on the heap, but it will nevertheless remain a Process computation. We just transfer the object from the stack to the heap.

trait Process {
    type Item;

    fn into_boxed(self) -> ProcessBox<Self::Item>;
}

With the help of ProcessBox, we can create Process computations defined recursively, as shown below.

Machine Process

To illustrate the concepts introduced above, we will consider a model of a machine that works for some up-time distributed exponentially. Then the machine breaks down, and a repair person repairs it over a time interval that is distributed exponentially too. After the machine is repaired, it continues working. Please note that this is a recursively defined behaviour.

const UP_TIME_MEAN: f64 = 1.0;
const REPAIR_TIME_MEAN: f64 = 0.5;

fn machine_process() -> ProcessBox<()> {
    random_exponential_process(UP_TIME_MEAN)
        .flat_map(move |up_time| {
            random_exponential_process(REPAIR_TIME_MEAN)
                .flat_map(move |repair_time| {
                    machine_process()
                })
        })
        .into_boxed()
}

To combine the different parts together, we use the flat_map combinator described before. Each continuation proceeds only after the preceding part of the computation has finished. This describes sequential behaviour.

The modelling time changes in steps, but the code itself reads as if it were linear. Moreover, the code is defined recursively as an infinite loop, yet it actually stops as soon as the simulation terminates on approaching the final time.

Merely calling machine_process does not run the discontinuous process. As noted before, the resulting computation still has to be involved in the simulation to be launched. Soon we will see how that is done.

Finally, in a real model we could add counters to gather statistics about the up-time and repair-time. They can be implemented with the help of special references, as will be described later.
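Conceptually, the machine process alternates up-time and repair-time intervals on the modelling time axis until the simulation stops. The following plain-Rust sketch, with fixed intervals instead of exponential ones and a hypothetical count_breakdowns helper, shows the pattern:

```rust
// A plain-Rust sketch of what the machine process does over modelling time:
// alternate up-time and repair-time intervals until the stop time.
// Fixed intervals replace the exponential ones to keep it deterministic.
fn count_breakdowns(up_time: f64, repair_time: f64, stop_time: f64) -> u32 {
    let mut t = 0.0;
    let mut breakdowns = 0;
    loop {
        t += up_time;                     // the machine works...
        if t >= stop_time { break; }
        breakdowns += 1;                  // ...then breaks down...
        t += repair_time;                 // ...and is repaired
        if t >= stop_time { break; }
    }
    breakdowns
}

fn main() {
    // One cycle takes 1.0 + 0.5 = 1.5 time units, so 10 breakdowns
    // fit before the stop time 15.0.
    assert_eq!(count_breakdowns(1.0, 0.5, 15.0), 10);
}
```
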

Event Handlers

The Process computation introduced above is not the only one in DVCompute Simulator. There are other simulation computations too.

A Process computation representing a discontinuous process can be run within an Event computation. The latter is often used for representing an event handler. It implies some action that occurs at the current modelling time. It cannot be interrupted, it cannot have time delays, it cannot be blocked and so on. It is just a function of time.

trait Event {
    type Item;
}

trait Process {
    type Item;

    fn run(self) -> impl Event<Item = ()>
        where Self: Process<Item = ()>;
}

The Event computation can be used for defining models in terms of the event-oriented paradigm of discrete event simulation as we will see later, while the Process computation is an instrument of the process-oriented paradigm.

Each simulation computation has a corresponding boxed representation, whose name starts the same but ends with the Box suffix. For example, the boxed representation of the Process computation is called ProcessBox. Similarly, the boxed representation of the Event computation is called EventBox. Each boxed computation is a concrete structure parameterised by some item type, and this structure implements the corresponding simulation trait. For example, EventBox is an implementation of the Event trait.

The key feature of the Event computation is that it allows us to define the event handler that will be actuated at the specified time point unless the simulation terminates earlier.

pub fn enqueue_event(time: f64, comp: EventBox<()>) -> impl Event<Item = ()>;

There are similar combinators for the Event computation, which actually allow us to define a wide range of simulation activities. Only the time delay must be represented either with the help of the just-mentioned enqueue_event function, or in terms of the Process computation, or of the more high-level Block computation that has not been introduced yet.

Thus, the Event trait is self-sufficient, but every Process computation must be run within an Event computation.
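The mechanics behind enqueue_event can be sketched as a toy, time-ordered queue of handlers in plain Rust. This is an illustration of the idea only, not DVCompute's actual event queue, and the names are hypothetical:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// A toy event queue: handlers are keyed by an activation time and
// actuated in non-decreasing time order, as enqueue_event implies.
struct EventQueue {
    heap: BinaryHeap<Reverse<(u64, usize)>>, // (time in ticks, handler id)
    handlers: Vec<Option<Box<dyn FnOnce(u64)>>>,
}

impl EventQueue {
    fn new() -> Self {
        EventQueue { heap: BinaryHeap::new(), handlers: Vec::new() }
    }

    // The analogue of enqueue_event: schedule a handler at the given time.
    fn enqueue(&mut self, time: u64, handler: impl FnOnce(u64) + 'static) {
        let id = self.handlers.len();
        self.handlers.push(Some(Box::new(handler)));
        self.heap.push(Reverse((time, id)));
    }

    // Actuate all handlers in time order unless the simulation ends earlier.
    fn run(&mut self, stop_time: u64) {
        while let Some(Reverse((time, id))) = self.heap.pop() {
            if time > stop_time { break; }
            if let Some(h) = self.handlers[id].take() {
                h(time);
            }
        }
    }
}

fn main() {
    let log = std::rc::Rc::new(std::cell::RefCell::new(Vec::new()));
    let mut queue = EventQueue::new();
    for &t in &[30, 10, 20] {
        let log = log.clone();
        queue.enqueue(t, move |time| log.borrow_mut().push(time));
    }
    queue.run(100);
    // Handlers are actuated in time order regardless of insertion order.
    assert_eq!(*log.borrow(), vec![10, 20, 30]);
}
```
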

Simulation Run

If we take some Event computation and launch it at the start time, then we receive another computation called Simulation. The latter represents some action that occurs within the simulation run. It is no longer bound to a particular modelling time point; it relates to the entire simulation.

trait Simulation {
    type Item;
}

trait Event {
    type Item;

    fn run_in_start_time(self) -> impl Simulation<Item = Self::Item>;

    fn run_in_stop_time(self) -> impl Simulation<Item = Self::Item>;
}

While the first function runs the computation at the start time, the second one runs it at the final time point of the simulation.

This is not the lowest-level simulation computation yet, but it is already the place where we can finally run our simulation of the machines we wrote before.

Simulation Specs

Now it is time to define the simulation specs with which we will be able to run the simulation. The specs are defined by the following structure, which specifies the start time of the simulation, the final time, some small time step and the random number generator type. The last two attributes require more attention.

    let specs = Specs {
        start_time: 0.0,
        stop_time: 1000.0,
        dt: 1.0,
        generator_type: GeneratorType::Simple
    };

Think of the time step as the integration time step of Euler's method or the Runge-Kutta method when integrating a set of differential equations. This attribute indeed has those roots, but it can be used for other purposes as well, for example, for defining the grid step size when collecting statistics for the deviation chart. So the time step should be quite small, but not excessively so: the total number of steps should remain reasonable.
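As a small worked example, the specs above imply the following grid size. Assuming (and this is an assumption, not documented DVCompute behaviour) that the grid includes both end points:

```rust
// Hypothetical helper: the number of grid points implied by the specs,
// assuming both the start and stop times belong to the grid.
fn grid_size(start_time: f64, stop_time: f64, dt: f64) -> usize {
    ((stop_time - start_time) / dt).round() as usize + 1
}

fn main() {
    // start_time = 0.0, stop_time = 1000.0, dt = 1.0 gives 1001 grid points.
    assert_eq!(grid_size(0.0, 1000.0, 1.0), 1001);
}
```
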

The random number generator can return a reproducible sequence of values for sequential simulation if required, but the distributed simulation mode supports only a non-reproducible pseudo-random number sequence. The latter is specified by the value GeneratorType::Simple, which generates a non-reproducible pseudo-random number sequence both in the sequential simulation mode and in the optimistic distributed simulation mode.

Given the simulation specs, we can run the specified Simulation computation and receive the result.

trait Simulation {
    type Item;

    fn run(self, specs: Specs) -> simulation::Result<Self::Item>;
}

Here the simulation::Result type is just an abbreviation of the standard Rust Result type. This method is what actually launches the entire simulation!

Running Machines

Earlier we defined the machine process. If we take two machines, then we can launch them at the start time point. The resulting model will have a type that implements the Simulation<Item = ()> trait.

    let model =
        machine_process()
            .run()
            .flat_map(move |_| {
                machine_process()
                    .run()
            })
            .run_in_start_time()
            .flat_map(move |_| {
                // an empty computation run at the final time point
                // (a return_event counterpart of return_process is assumed here)
                return_event(())
                    .run_in_stop_time()
            });

Here we do need to run an empty computation at the final time point so that the two discontinuous processes continue working until the simulation terminates. Otherwise, all activity would finish right at the start time. The computation launched at the stop time forces the event queue to process all event handlers; the discontinuous processes are implicitly transformed into such event handlers too.

The final step is to launch our simulation.

    let result = model.run(specs);

Collecting Statistics

There are two structures representing statistics summaries in DVCompute Simulator. The first structure is called SamplingStats. It is used for collecting observation-based statistics; for example, the wait time in a queue or resource. The second structure, TimingStats, is used for collecting time-persistent variable statistics; for instance, the queue size statistics in the same resource.

Both structures represent immutable data types. The usual approach is to mutate some RefComp reference within an Event computation to modify the statistics value. It is quite efficient, and it works both in the sequential and distributed simulation modes.
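The difference between the two kinds of statistics can be illustrated with self-contained, immutable toy summaries. The hypothetical SamplingSummary and TimingSummary types below merely mimic the idea, not the actual SamplingStats and TimingStats API:

```rust
// Observation-based statistics: every sample has equal weight.
#[derive(Clone, Copy)]
struct SamplingSummary { count: u64, sum: f64 }

impl SamplingSummary {
    fn new() -> Self { SamplingSummary { count: 0, sum: 0.0 } }
    // Immutable update: a new summary is returned, the old one is untouched.
    fn add(self, x: f64) -> Self {
        SamplingSummary { count: self.count + 1, sum: self.sum + x }
    }
    fn mean(&self) -> f64 { self.sum / self.count as f64 }
}

// Time-persistent statistics: each value is weighted by how long it was held.
#[derive(Clone, Copy)]
struct TimingSummary { last_time: f64, last_value: f64, weighted_sum: f64 }

impl TimingSummary {
    fn new(time: f64, value: f64) -> Self {
        TimingSummary { last_time: time, last_value: value, weighted_sum: 0.0 }
    }
    fn add(self, time: f64, value: f64) -> Self {
        TimingSummary {
            last_time: time,
            last_value: value,
            weighted_sum: self.weighted_sum
                + self.last_value * (time - self.last_time),
        }
    }
    fn time_average(&self, start_time: f64) -> f64 {
        self.weighted_sum / (self.last_time - start_time)
    }
}

fn main() {
    // Wait times 2, 4 and 6 give a mean of 4.
    let s = SamplingSummary::new().add(2.0).add(4.0).add(6.0);
    assert_eq!(s.mean(), 4.0);

    // Queue size 1 during [0, 2), then 3 during [2, 4): time average is 2.
    let t = TimingSummary::new(0.0, 1.0).add(2.0, 3.0).add(4.0, 0.0);
    assert_eq!(t.time_average(0.0), 2.0);
}
```
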

Resource, Queue, Facility, Storage and GPSS

DVCompute Simulator provides built-in simulation entities such as a limited Resource that can block discontinuous processes while the resource is busy. There are bounded and unbounded variations of the Queue type, which may also block discontinuous processes.

Also, there is a Facility that may preempt discontinuous processes with lower transact priorities. Actually, DVCompute Simulator supports an embedded domain-specific language similar to the popular GPSS modelling language, except that the models are defined in terms of the Rust programming language. It is based on the Process computation described above. To represent blocks, there is the Block computation, which can be run within the Process computation. Similarly, there is a Storage counterpart of GPSS in DVCompute Simulator too.


Signals

DVCompute Simulator extensively uses the Observable trait to implement signals internally, but you can use it in your models too. The Event and Observable computations can be used for modelling hardware such as digital integrated circuits. The event handlers may have priorities, which can be used for synchronising the signals. Also, the signals can be delayed through the event queue.

Optimistic Distributed Simulation

Everything described above applies both to sequential simulation and to distributed simulation based on the optimistic Time Warp method. You can launch the simulation on your laptop, but you can also create a distributed model and then launch it on a supercomputer with the help of DVCompute Simulator. In the latter case, the MPI protocol is used for efficient communication between logical processes.

The difference between the two modes is that you have to use different crates, although the simulation computations and entities have the same names. Only in the case of distributed simulation do these computations and entities often require that types and closures satisfy the Clone trait. The requirement is stated explicitly, and the Rust compiler will certainly inform you if you forget to satisfy it. Fortunately, it is usually easy to satisfy.

The cloning is required because the distributed simulation may have rollbacks; therefore, computations cannot be irreversible. Every time the event queue has to launch an event handler, it clones the handler first. Usually this is quite a cheap operation. The need to clone is possibly the most significant difference between the sequential and distributed simulation modes in DVCompute Simulator; they are very similar in other respects.
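The clone-before-run discipline can be shown in plain Rust. The run_with_replay function below is an illustrative sketch of the idea, not the simulator's actual event queue:

```rust
// Illustration of why handlers must be Clone in the distributed mode:
// before each (possibly re-executed) run, the queue clones the handler,
// so the original stays available for a rollback replay.
fn run_with_replay<F>(handler: F, replays: usize) -> Vec<i32>
where
    F: FnOnce() -> i32 + Clone,
{
    let mut results = Vec::new();
    for _ in 0..replays {
        // Clone before running: the original handler is never consumed.
        let h = handler.clone();
        results.push(h());
    }
    results
}

fn main() {
    let x = 20;
    // Closures capturing only Clone data implement Clone themselves.
    let handler = move || x + 1;
    // The same handler can be executed again after a rollback.
    assert_eq!(run_with_replay(handler, 3), vec![21, 21, 21]);
}
```
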

Also, in the case of distributed simulation, you create logical processes that can send messages to each other. The act of sending a message is expressed in terms of the Event computation, while the act of receiving messages is naturally expressed within the Observable computation.

Simulator Code Distribution

At the time of writing this document, DVCompute Simulator has the following structure. There are cores for both simulation modes, each called DVCompute Simulator Core. This is a binary dynamic library that implements the event-oriented strategy of discrete event simulation. You need to obtain an instance of DVCompute Simulator Core compiled for your specific operating system. In the case of distributed simulation, the core library must also be compiled for the specific MPI implementation.

Most of the simulator is distributed in source form under a license that allows you to use the simulator freely for non-commercial projects. It is easy to generate the API documentation from the source code. There are simple examples, and there are verification tests.

For instance, there is an example of a closed queueing network that allows us to estimate and compare the speed of simulation in both the sequential and distributed modes. Incidentally, this model has often been used in the literature for estimating the speed of simulators.

Also, there is a validation test for distributed simulation. Regardless of how often we launch the simulation and which computation nodes we use for running logical processes, we must always receive the same reproducible result on the same computer architecture with the same simulation specs. This is a sign that we can trust the distributed simulator core.

Finally, to use the simulator in commercial projects, you have to purchase a commercial license by contacting me at .