Latest revision as of 08:06, 14 June 2013
See old documentation: Models Deprecated.
Basic concepts
Entities
A model is a structure created for the purpose of analysing some existing or planned system. It is usually tied to a particular method of analysis, such as dynamic or steady-state simulation or model checking. What is inside the model depends strongly on the analysis method. A model is, however, always a unit that can be exported from the Simantics workspace and imported back into some other workspace.
A model contains one or more configurations. A configuration is a description of the system being modelled. Usually (always?) one of the configurations is the root configuration, describing most aspects of the system, and the other configurations specify deviations from it. A configuration can be parametrized. Multiple configurations are used to maintain many different but related designs (cases) of the system within the same model, or to parametrize the configuration so that optimization, sensitivity analysis or a similar method can be applied to the system.
The main purpose of creating a model of a system is to apply some analysis to it. We call these analyses experiments. An experiment points to a certain configuration but may also contain an additional specification of how the analysis is executed, such as the simulation sequence, the list of subscribed variables, the simulation method used, etc.
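The containment relationships above (model → configurations → experiments) can be sketched as a minimal data model. This is an illustrative assumption only — the class and field names here are invented for the sketch and are not Simantics API names.

```python
from dataclasses import dataclass, field

@dataclass
class Configuration:
    """Description of the system being modelled; non-root
    configurations describe deviations from their parent."""
    name: str
    parent: "Configuration | None" = None        # None for the root configuration
    parameters: dict = field(default_factory=dict)

@dataclass
class Experiment:
    """Points to a configuration and specifies how the analysis is executed."""
    configuration: Configuration
    simulation_method: str = "dynamic"
    subscribed_variables: list = field(default_factory=list)

@dataclass
class Model:
    """The exportable unit: configurations plus the experiments applied to them."""
    name: str
    configurations: list = field(default_factory=list)
    experiments: list = field(default_factory=list)
```

For example, a model could hold a root configuration plus a parametrized case (`Configuration("case-1", parent=root, parameters={"airbag_size": 0.5})`) and an experiment pointing at that case.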
Each individual execution of the experiment is a run. What a single run generates depends on the analysis method and the experiment specification. Typical artifacts produced include:
- A state: an assignment of values to the properties of the components in the configuration
- A history: an assignment of time series to those properties
Additionally, the run can be interactive, so that the current state being simulated can be accessed and even modified during the simulation.
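The two artifact kinds are closely related: a history is a time series per property, while a state is a single value per property. A minimal sketch of that relationship, assuming a hypothetical encoding where a history maps property paths to `(time, value)` pairs:

```python
def final_state(history):
    """Collapse a history (property -> [(time, value), ...]) into a state
    (property -> value) by taking the last sample of each series.
    The property paths and pair encoding are illustrative assumptions."""
    return {prop: samples[-1][1] for prop, samples in history.items()}

# A run's history of two subscribed properties...
history = {
    "tank1.level": [(0.0, 0.40), (1.0, 0.41), (2.0, 0.42)],
    "pump1.speed": [(0.0, 1450.0), (2.0, 1450.0)],
}
# ...yields the state at the end of the run.
state = final_state(history)
```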
States and histories can also be independent entities in the model that are not produced by experiment runs. They can be used as input to experiments.
Multiple runs can be executed in parallel, some on remote machines. One of the runs (states, or histories?) is the active experiment, whose state is visualized in the UI.
Some analysis methods can store a snapshot of the state of the analysis algorithm. We call these snapshots ICs. An experiment may specify an IC to be used to initialize the analysis. IC and state are slightly overlapping concepts. The main difference is that an IC contains the complete state of the analysis algorithm, including internal state not seen by users, in a representation optimized for fast initialization of the algorithm. A state, on the other hand, contains only the properties of components in the configuration; it is optimized for efficient browsing and may be partial (not assigning a value to every possible property).
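The IC/state distinction can be illustrated with a sketch: an IC is an opaque, complete dump of solver internals optimized for fast reload, while a state is a browsable, possibly partial property map. Everything here (class names, the use of `pickle` as the "fast" serialization) is an assumption for illustration, not the Simantics API.

```python
import pickle

class SolverIC:
    """Complete snapshot of the analysis algorithm, including internal
    state (e.g. solver matrices) invisible to users, stored opaquely
    in a form that restores quickly."""
    def __init__(self, solver_internals):
        self.blob = pickle.dumps(solver_internals)  # everything, not browsable

    def restore(self):
        return pickle.loads(self.blob)

def browse_state(all_properties, visible=None):
    """A state exposes only configuration-level properties and may be
    partial: here we keep just the subset of visible properties."""
    if visible is None:
        return dict(all_properties)
    return {k: v for k, v in all_properties.items() if k in visible}
```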
Analogy
Consider crash testing of cars. The configuration describes the car and possibly how the crash test dummy is positioned in it. There may be many different configurations with varying safety equipment, and we may, for example, parametrize the size of the airbag in order to find the size that minimizes head injuries. The experiment describes which configuration is used and how the crash test is executed (for example, the crashing speed). It also describes the variables that are measured during the crash. A run is one crash test. Each run produces time series of all measured variables, perhaps a high-speed video of the crash, and the final state of the car and the dummy after the crash.
Operations
We describe here the basic operations involving models and experiments. They are not necessarily the same operations that are presented to the user in the UI, but building blocks with smaller granularity. In particular, we consider starting an experiment an explicit operation, although in the UI this may happen automatically. If the analysis is fast enough, even simulation results can be updated automatically when the user modifies the configuration.
Running an experiment creates a new run by starting the corresponding runtime entities. This involves:
- Start the actual analysis algorithm (if a remote server is used, this may include waiting for computational resources to become available)
- Initialize the algorithm state. This can be done in several ways:
  - Write the configuration in a form understood by the algorithm (for example, Modelica code)
  - Load a previously stored IC and synchronize the algorithm state with the current configuration
  - Initialize the algorithm in a "blank" state and synchronize it with the current configuration
- Run the analysis
  - This phase may be interactive, so that the state of the algorithm can be monitored and mutated
  - It may be possible to run the synchronization operation during the analysis
- Make the results of the analysis available

If the analysis runs fast, all these phases happen almost immediately after the experiment is started.
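The phases above can be sketched as a run lifecycle with the three initialization strategies. This is a hedged sketch under invented names (`InitMode`, `StubSolver`, `run_experiment` are all hypothetical); a real solver would typically be a remote process, not an in-memory dictionary.

```python
from enum import Enum, auto

class InitMode(Enum):
    FROM_CONFIGURATION = auto()  # write configuration in the solver's own form
    FROM_IC = auto()             # load a stored IC, then synchronize
    FROM_BLANK = auto()          # blank state, then synchronize with configuration

class StubSolver:
    """In-memory stand-in for an analysis algorithm."""
    def __init__(self):
        self.state = {}

    def start(self):
        pass  # a remote server might queue here until resources are available

    def restore_ic(self, ic):
        self.state = dict(ic)

    def synchronize(self, configuration):
        # keep values the solver already has; add newly configured properties
        for prop, value in configuration.items():
            self.state.setdefault(prop, value)

    def run(self):
        return dict(self.state)  # "make the results available"

def run_experiment(solver, configuration, mode, ic=None):
    solver.start()
    if mode is InitMode.FROM_CONFIGURATION:
        solver.state = dict(configuration)
    elif mode is InitMode.FROM_IC:
        solver.restore_ic(ic)
        solver.synchronize(configuration)
    else:  # FROM_BLANK
        solver.synchronize(configuration)
    return solver.run()
```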
Synchronization is the operation of making the current state of an analysis algorithm compatible with a certain configuration (and its parameters, if the configuration is parametrized).
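One plausible synchronization policy, sketched as a pure function: properties removed from the configuration are dropped, newly configured properties get their configured values, and values the algorithm already holds for shared properties are kept. The real operation is solver-specific; this is only an assumption-laden illustration.

```python
def synchronize(algorithm_state, configuration):
    """Make an algorithm state compatible with a configuration
    (both modelled here as property -> value mappings)."""
    return {prop: algorithm_state.get(prop, configured_value)
            for prop, configured_value in configuration.items()}
```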
Save/load IC
Archive simulation results
Questions
- The line between configuration and experiment is not well defined (for example, is the crashing speed in the analogy part of the configuration or of the experiment?). Experiments and configurations are probably often tied together. Also, experiments (such as simulation sequences) are parametrizable. Would it be possible to consider experiments part of the configuration?
TODO Q&A:
- Q1. How does the selection view work?
- A1: The view is given a stack of variables (conf–delta1–delta2–state), which are shown in a combo box
- A1: On top of the stack is the variable of the active experiment, and the other variables form its configuration tree
- A1: A property variable has a sub-property for each column (the sub-property's implementation handles reading and writing the value as a string)
- A2: In addition, each property variable defines one category (which can be hidden)
- A3: How are the properties ordered?
- Q2. Which variables does Simantics provide?
- A2: Each configuration provides a variable (how is the beginning of the path determined?)
- A2: Each run provides a variable
- A2: Can a variable be obtained from an experiment's configuration? It cannot?
- A2: An IC can provide a variable (is an IC a configuration?)
- A2: Can a State provide a variable? (Is a State a configuration?)
- A2: Through which variable are histories browsed? The run's variable? Will a HistoricalRun appear under the configuration in addition to the Runs?
- Q3. How are the different variables implemented?
- A3: A generic implementation that walks through the stack and returns the first value produced.
- A3: Solver variable + graph variable?
- A3: On what basis do valueAccessor and domainChildren fetch their data?
- Q4. How are IC and State visible to the user?
- A4: ICs of the active run can be stored under the model. A new run can be initialized with an IC. Is an IC a configuration?
- A4: What can be done with a State?
- A4: What is the hierarchy between the concepts IC, State and Configuration?
- Q5. What is the handle of a run?
- There are two kinds of runs: those backed by an active simulator, and those that have been archived.
- An active simulator is referred to in the new experiment control in the form <simulation server address>/experiments/<experiment id>
- Runs monitored in the UI must have a Run resource
- For which active experiments is a run resource brought into the (virtual) graph? For example, another user may have started a run that we also want to bring into our own workspace.
- A5: Can we assume that every run is visible in the database as a (possibly virtual) resource, and that the necessary interfaces can be queried from the Run resource by adapting?
- A5: Or through a variable (no resource needed)?
- Q6. How is the parametrization of a multi-execution Run visible in the run's variable?
- A6: The script associated with the experiment's configuration contains the parameter-assignment logic used during execution. The run's variable describes the current state.
- Q7. How does a solver receive its configuration?
- There are several solver-specific strategies here:
- The configuration is sent in a solver-specific form generated on the fly (Modelica, NuSMV)
- An IC (a memory dump of the solver) is sent, possibly followed by synchronization
- A state is sent, to which the solver synchronizes from a blank state
- Q8. How does a solver return its results?
- In remote simulation the client is not necessarily online when the solver finishes the computation, so the server must retain the results at least for some time
- The final results are a tree-like structure of blobs; the structure depends on the kind of experiment
- The client can archive the results into the graph if desired
- Q9. How are partial and full synchronization defined with respect to solver initialization and update?
- A9: Case Balas today: when a run is started, a transient state is created that must first be synchronized (full)
- A9: In the future, states could be stored in Balas as well for initialization purposes
- A9: Through which interface is a solver initialized from a state or an IC?