docs/src/index.md: 3 additions & 34 deletions
@@ -6,41 +6,10 @@ CurrentModule = ModelTesting
Documentation for [ModelTesting](https://github.com/BenChung/ModelTesting.jl).
- ModelTesting is intended to facilitate the easy testing and post-facto analysis of ModelingToolkit models.
- Test ensembles are constructed using two core abstractions:
+ ModelTesting is intended to facilitate the easy testing and post-facto analysis of ModelingToolkit models. It currently provides two key bits of functionality:
- **Rollouts**, an execution of a model. A rollout consists of a model and the data required to execute the model forward (which could include parameter values, real observations/control inputs, or synthetic components that should be plugged into the system). Rollouts can also modify the model being executed (such as introducing new equations to track conservation properties).
- **Timeseries**, which describe the state of a system as it executes forward (PDEs are TBD). A timeseries consists of time-indexed data whose values inhabit named states. Rollouts can be executed to create a timeseries, but a timeseries can also come from test data or an analytic solution.
- Timeseries can be combined in a namespace-aware way with the `<+` operator; combining timeseries requires that both are defined over an overlapping interval and can be evaluated at the same points (by default, the union of the points). Timeseries are not necessarily tabular! (A toy sketch of this combination follows after the diff.)
- The library then works by executing evaluations over rollouts. Timeseries are defined over a time span that is encoded into the timeseries and checked for mutual consistency before evaluation.
- Questions:
- * How to represent parameters? We want to be able to have multiple parameter maps and overlay them; they should be version controlled separately from the models.
- * The results data in the examples below is implicitly time-based. What do we do about timeseries that don't share a consistent time base (for example, how do we compare results between two different solvers, or between an adaptive solver and sampled real data)?
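The combination semantics described for `<+` above can be illustrated with a minimal, self-contained sketch. `ToyTimeseries` and `combine` below are hypothetical stand-ins for illustration only, not ModelTesting's actual types or API; per the description above, the real operator additionally evaluates both series on the union of their sample points by default.

```julia
# Hypothetical stand-ins: `ToyTimeseries` and `combine` are NOT ModelTesting's API.
struct ToyTimeseries
    span::Tuple{Float64,Float64}    # time interval the series is defined over
    states::Dict{Symbol,Function}   # named states, evaluable at any t in the span
end

function combine(a::ToyTimeseries, b::ToyTimeseries)
    # Combining requires that both series are defined over an overlapping interval.
    lo = max(a.span[1], b.span[1])
    hi = min(a.span[2], b.span[2])
    lo <= hi || error("timeseries must overlap to be combined")
    # Namespace-aware merge of the named states (the right series wins on collisions).
    ToyTimeseries((lo, hi), merge(a.states, b.states))
end

ts1 = ToyTimeseries((0.0, 10.0), Dict(Symbol("plant.x") => t -> sin(t)))
ts2 = ToyTimeseries((2.0, 12.0), Dict(Symbol("ctrl.u")  => t -> 0.5))
both = combine(ts1, ts2)    # defined over (2.0, 10.0), carrying both named states
```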
Lower-level version of `compare` that lets you specify the mapping from values in `new_sol` (specified as either `variable => name` or `variable => (input_name, output_name)` pairs) to
+ the column names in `reference`. The tuple-result version lets you specify what the column names are when passed to the comparison method. Since the mapping is explicit,
+ this version does not take `to_name`, but it still takes `warn_observed`. See the implicit-comparison version of `compare` for more information.
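The explicit mapping admits the two pair forms named above. The toy sketch below (hypothetical, not ModelTesting's implementation) shows one plausible normalization of both forms, assuming the plain `variable => name` form reuses the reference column name when results are passed to the comparison method:

```julia
# Toy sketch (not ModelTesting's code): normalize both pair forms into a
# (reference column, name passed to the comparison method) tuple per variable.
function resolve_mapping(mapping::Vector{<:Pair})
    Dict(var => (target isa Tuple ? target : (target, target))
         for (var, target) in mapping)
end

resolve_mapping([:x => "x_meas", :u => ("u_cmd", "u_out")])
# result: :x maps to ("x_meas", "x_meas"), :u maps to ("u_cmd", "u_out")
```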