See old documentation: [[Models Deprecated]].
= Basic concepts =

[[File:ModelConcepts.png]]
== Entities ==
 
 
''Model'' is a structure that is created for the purpose of analysing some existing or planned system. It is usually tied to a particular method of analysis, such as dynamic or steady-state simulation or model checking. What is inside the model depends strongly on the analysis method. A model is, however, always a unit that can be exported from the Simantics workspace and imported back into some other workspace.
  
A model contains one or more ''configurations''. A configuration is a description of the system being modelled. Usually (always?) one of the configurations is the root configuration that describes most aspects of the system, and the other configurations specify deviations from it. A configuration can be parametrized. Multiple configurations are used to maintain many different but related designs (cases) of the system within the same model, or to parametrize the configuration so that optimization, sensitivity analysis or a similar method can be applied to the system.
  
The main purpose of creating a model of a system is to apply some analysis to it. We call these analyses ''experiments''. An experiment points to a certain configuration, but may also contain an additional specification of how the analysis is executed, such as the simulation sequence, the list of subscribed variables, the simulation method used, etc.
  
Each individual execution of an experiment is a ''run''. What a single run generates depends on the analysis method and the experiment specification. Typical artifacts produced include (see the sketch below):

* ''State'' is an assignment of values to the properties of the components in the configuration
* ''History'' is an assignment of time series to the properties

Additionally, a run can be interactive, so that the current state being simulated can be accessed and even modified during the simulation.
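The relationships between these concepts can be summarised in a minimal sketch. The Java classes below are purely illustrative (they are not the Simantics API); they only restate the containment described above: a model owns configurations and experiments, an experiment points to one configuration and collects its runs, and a run may produce a state and a history.

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the concepts above; not the Simantics API.
class Model {
    List<Configuration> configurations = new ArrayList<>(); // one of them is the root configuration
    List<Experiment> experiments = new ArrayList<>();
}

class Configuration {
    Map<String, Object> parameters = new HashMap<>();        // configurations can be parametrized
}

class Experiment {
    Configuration configuration;                             // the configuration this experiment analyses
    List<String> subscribedVariables = new ArrayList<>();    // part of the experiment specification
    List<Run> runs = new ArrayList<>();                      // individual executions of the experiment
}

class Run {
    State finalState;                                        // values of component properties
    History history;                                         // time series of component properties
    boolean interactive;                                     // state can be inspected or modified while running
}

class State {
    Map<String, Object> propertyValues = new HashMap<>();    // may be partial
}

class History {
    Map<String, double[]> timeSeries = new HashMap<>();      // property name -> sampled values
}
</syntaxhighlight>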
  
States and histories can also be independent entities in the model that are not produced by experiment runs. They can be used as inputs to experiments.
  
Multiple runs can be executed in parallel, some on remote machines. One of the runs (states, or histories?) is the ''active experiment'', whose state is visualized in the UI.
  
Some analysis methods can store a snapshot of the state of the analysis algorithm. We call these snapshots ''IC''s. An experiment may specify an IC to be used to initialize the analysis. IC and state are slightly overlapping concepts. The main difference between them is that an IC contains the complete state of the analysis algorithm, including internal state not seen by users, in a representation that is optimized for fast initialization of the algorithm. A state, on the other hand, contains only the properties of the components in the configuration; it is optimized for efficient browsing and may be partial (not assigning a value to every possible property).
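The difference can be made concrete with a small sketch. The types below are assumptions, not Simantics interfaces: an IC is treated as an opaque, solver-specific snapshot, whereas a state is a browsable, possibly partial map from property paths to values.

<syntaxhighlight lang="java">
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Illustrative contrast between IC and State; not the Simantics API.
class InitialCondition {
    final String solverId;  // the snapshot only makes sense to the solver that produced it
    final byte[] snapshot;  // opaque dump of the complete algorithm state, optimized for fast loading

    InitialCondition(String solverId, byte[] snapshot) {
        this.solverId = solverId;
        this.snapshot = snapshot;
    }
}

class BrowsableState {
    // Property path -> value; may be partial, i.e. not every property needs an entry.
    private final Map<String, Object> values = new HashMap<>();

    void put(String propertyPath, Object value) {
        values.put(propertyPath, value);
    }

    Optional<Object> get(String propertyPath) {
        return Optional.ofNullable(values.get(propertyPath));
    }
}
</syntaxhighlight>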
  
== Analogy ==
  
Consider crash testing of cars. The configuration describes the car and possibly how the crash test dummy is positioned in it. There may be many different configurations with varying safety equipment, and we may, for example, parametrize the size of the airbag in order to find the size that minimizes head injuries. The experiment describes which configuration is used and how the crash test is executed (for example, the crashing speed). It also describes the variables that are measured during the crash. A run is one crash test. Each run produces time series of all variables that were measured, perhaps a high-speed video of the crash, and the final state of the car and the dummy after the crash.
  
== Operations ==
  
We describe here the basic operations involving models and experiments. They are not necessarily the same operations that are presented to the user in the UI, but building blocks of smaller granularity. In particular, we consider starting an experiment to be an explicit operation, although in the UI it may happen automatically. If the analysis is fast enough, even simulation results can be updated automatically when the user modifies the configuration.
  
''Running an experiment'' creates a new run and starts the corresponding runtime entities. This involves the following steps (sketched in code after the list):

* Start the actual analysis algorithm (if a remote server is used, this may include waiting for computational resources to become available)
* Initialize the algorithm state. This can be done in many ways:
** Write the configuration in a form understood by the algorithm (for example Modelica code)
** Load a previously stored IC and synchronize the algorithm state with the current configuration
** Initialize the algorithm in a "blank" state and synchronize it with the current configuration
* Run the analysis
** This phase may be interactive, so that the state of the algorithm can be monitored and mutated
** It may be possible to run the synchronization operation during the analysis
* Make the results of the analysis available

If the analysis is fast, all these phases happen almost immediately after the experiment is started.
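A possible shape of this operation is sketched below, reusing the illustrative classes from the earlier sketch. The <code>AnalysisAlgorithm</code> interface and the runner are hypothetical placeholders that only spell out the step order listed above; they are not Simantics classes.

<syntaxhighlight lang="java">
// Hypothetical sketch of the "running an experiment" steps; names are placeholders, not Simantics API.
interface AnalysisAlgorithm {
    void loadConfiguration(String solverSpecificForm); // e.g. generated Modelica code
    void loadIC(byte[] ic);                            // initialize from a stored snapshot
    void synchronize(Configuration configuration);     // make the algorithm state match the configuration
    void run();                                        // possibly interactive while running
    AnalysisResults results();
}

interface AnalysisResults { }

class ExperimentRunner {

    /** Creates a new run: initialize the started algorithm, run it and publish the results. */
    AnalysisResults run(Experiment experiment, AnalysisAlgorithm algorithm,
                        String generatedCode, byte[] storedIC) {
        // 1. The algorithm is assumed to be started already (possibly on a remote server,
        //    after waiting for computational resources to become available).

        // 2. Initialize the algorithm state in one of the alternative ways described above.
        if (generatedCode != null) {
            algorithm.loadConfiguration(generatedCode);      // e.g. Modelica code generated from the configuration
        } else if (storedIC != null) {
            algorithm.loadIC(storedIC);                      // previously stored IC...
            algorithm.synchronize(experiment.configuration); // ...reconciled with the current configuration
        } else {
            algorithm.synchronize(experiment.configuration); // "blank" start, synchronized from scratch
        }

        // 3. Run the analysis (in an interactive run, monitoring and mutation would happen here).
        algorithm.run();

        // 4. Make the results of the analysis available.
        return algorithm.results();
    }
}
</syntaxhighlight>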
  
''Synchronization'' is the operation of making the current state of an analysis algorithm compatible with a certain configuration (and parameters, if the configuration is parametrized).
  
''Save/load IC''
  
''Archive simulation results''
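The operations above could be gathered behind a single control interface. The following is only a sketch of what such an interface might look like, reusing the illustrative types from the earlier sketches; the method names are assumptions, not part of Simantics.

<syntaxhighlight lang="java">
import java.util.Map;

// Hypothetical experiment-control operations corresponding to the text above; not the Simantics API.
interface ExperimentControl {

    /** Make the current algorithm state compatible with the given (possibly parametrized) configuration. */
    void synchronize(Configuration configuration, Map<String, Object> parameters);

    /** Store a snapshot of the complete algorithm state for fast initialization later. */
    InitialCondition saveIC();

    /** Initialize the algorithm from a previously stored snapshot. */
    void loadIC(InitialCondition ic);

    /** Persist the results of a finished run, e.g. into the graph, so they outlive the runtime entities. */
    void archiveResults(Run run);
}
</syntaxhighlight>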
  
= Questions =
  
* The line between configuration and experiment is not well defined (for example, is the crashing speed in the analogy part of the configuration or of the experiment?). Experiments and configurations are probably often tied together. Also, experiments (such as simulation sequences) are parametrizable. Would it be possible to consider experiments as part of the configuration?
  
TODO Q&A:
  
;Q1. How does the selection view work?
:A1: The view is given a stack of variables (configuration-delta1-delta2-state), which are shown in a combo box
:A1: The top of the stack is the variable of the active experiment, and the other variables form its configuration tree
:A1: A property variable has a sub-property for each column (the sub-property's implementation takes care of reading and writing the value as a string)
:A2: In addition, each property variable defines one category (which may be hidden)
:A3: How should the properties be ordered?
  
;Q2. Which variables does Simantics provide?
:A2: Every configuration provides a variable (how is the beginning of the path determined?)
:A2: Every run provides a variable
:A2: Can a variable be obtained from an experiment's configuration? Or not?
:A2: An IC can provide a variable (is an IC a configuration?)
:A2: A state can provide a variable? (is a State a configuration?)
:A2: Through which variable are histories viewed? The Run variable? Will a HistoricalRun appear under the configuration in addition to the Runs?
  
;Q3. How are the different variables implemented?
:A3: A generic implementation that walks through the stack and returns the first value produced (see the sketch below).
:A3: A solver variable + a graph variable?
:A3: On what basis do valueAccessor and domainChildren fetch their data?
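The first answer can be illustrated with a small sketch. The interfaces below are hypothetical stand-ins (the real Simantics Variable interface is richer); the point is only the lookup order: the layers of the stack are consulted from the top down and the first layer that produces a value wins.

<syntaxhighlight lang="java">
import java.util.List;
import java.util.Optional;

// Hypothetical stand-in for one layer of the variable stack; not the actual Simantics Variable interface.
interface VariableLayer {
    Optional<Object> tryGetValue(String propertyPath);
}

// Generic implementation that walks the stack (e.g. state -> delta2 -> delta1 -> configuration)
// and returns the first value produced by any layer.
class StackedVariable {
    private final List<VariableLayer> stack; // ordered from top (most specific) to bottom

    StackedVariable(List<VariableLayer> stack) {
        this.stack = stack;
    }

    Optional<Object> getValue(String propertyPath) {
        for (VariableLayer layer : stack) {
            Optional<Object> value = layer.tryGetValue(propertyPath);
            if (value.isPresent())
                return value;          // first produced value wins
        }
        return Optional.empty();       // no layer produced a value
    }
}
</syntaxhighlight>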
  
;Q4. How are IC and State visible to the user?
:A4: ICs from the active run can be saved under the model. A new run can be initialized with an IC. Is an IC a configuration?
:A4: What can be done with a State?
:A4: What is the hierarchy between the concepts IC, State and Configuration?
  
;Q5. What is the handle of a run?
:* We have two kinds of runs: on the one hand those backed by an active simulator, and on the other hand those that have been archived.
:* In the new experiment control, an active simulator is referred to in the form <simulation server address>/experiments/<experiment id> (see the sketch below)
:* Runs that are monitored in the UI must have a Run resource
:* For which active experiments is a run resource brought into the (virtual) graph? For example, if another user has started a run and it should also be brought into one's own workspace.
:A5: Can we assume that every run is visible in the database as a (possibly virtual) resource, and that the required interfaces can be requested from the Run resource via adaptation?
:A5: Or through a variable (no resource needed)?
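The address form mentioned above can be captured in a small helper. This is purely illustrative: only the <simulation server address>/experiments/<experiment id> shape comes from the notes above, everything else is an assumption.

<syntaxhighlight lang="java">
// Illustrative helper for the experiment address form mentioned above; not the Simantics API.
class ExperimentHandle {
    final String serverAddress;  // address of the simulation server
    final String experimentId;   // identifier of the experiment on that server

    ExperimentHandle(String serverAddress, String experimentId) {
        this.serverAddress = serverAddress;
        this.experimentId = experimentId;
    }

    /** Formats the handle as serverAddress + "/experiments/" + experimentId. */
    String toUri() {
        return serverAddress + "/experiments/" + experimentId;
    }

    /** Parses a handle of the same form back into its two parts. */
    static ExperimentHandle parse(String uri) {
        int i = uri.lastIndexOf("/experiments/");
        if (i < 0)
            throw new IllegalArgumentException("Not an experiment handle: " + uri);
        return new ExperimentHandle(uri.substring(0, i), uri.substring(i + "/experiments/".length()));
    }
}
</syntaxhighlight>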
  
;Q6. How is the parametrization of a Run with multiple executions visible in the run's variable?
:A6: The script associated with the experiment's configuration contains the logic for setting the parameters during execution. The run's variable describes the current state.
  
;Q7. How does the solver receive its configuration?
: There are several solver-specific strategies here:
:* Send the configuration in a solver-specific form that is generated on the fly (Modelica, NuSMV)
:* Send an IC (a memory dump of the solver) and possibly synchronize afterwards
:* Send a state, to which the solver is synchronized starting from a blank state
  
;Q8. How does the solver return its results?
:* In remote simulation the client is not necessarily running when the solver has finished the computation, so the server must retain the computation results at least for some time
:* The final results are a tree-like structure of blobs; the structure depends on the kind of experiment (see the sketch below)
:* The client can, if desired, archive the results into the graph
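The result structure can be sketched as a small tree type. The names below are assumptions; the only facts taken from the notes are that the results form a tree of blobs, that the structure depends on the kind of experiment, and that the client may archive the tree.

<syntaxhighlight lang="java">
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

// Illustrative sketch of a result tree of blobs; not the Simantics API.
class ResultNode {
    final String name;                                   // e.g. a variable, chart or file name
    final byte[] blob;                                   // payload; null for pure container nodes
    final List<ResultNode> children = new ArrayList<>(); // structure depends on the kind of experiment

    ResultNode(String name, byte[] blob) {
        this.name = name;
        this.blob = blob;
    }

    /** Walks the tree and hands every blob to the archiver (e.g. for storing into the graph). */
    void archive(BiConsumer<String, byte[]> archiver) {
        if (blob != null)
            archiver.accept(name, blob);
        for (ResultNode child : children)
            child.archive(archiver);
    }
}
</syntaxhighlight>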
 
  
;Q9. How are partial and full synchronization defined with respect to solver initialization and update?
:A9: Case Balas now: when a run is started, a transient state is created that must first be synchronized (full)
:A9: In the future, states could also be stored in Balas for initialization
:A9: Through which interface is the solver initialized from a state or an IC?

[[Category: Model Development]]
