docs/src/developers.md
# Developer guidelines
This page is intended for people who wish to contribute to PlantSimEngine, and indicates the various parts to bear in mind when adding in new code.
## Working on PlantSimEngine
Instructions are no different than for any other package. Use git to clone the repository [https://github.com/VirtualPlantLab/PlantSimEngine.jl](https://github.com/VirtualPlantLab/PlantSimEngine.jl).
When testing your changes, your environment will need a command such as `Pkg.develop("PlantSimEngine")` to make use of your code.
We work with VSCode and are most comfortable with that IDE for Julia development. We mostly follow the manual's [Julia style guide](https://docs.julialang.org/en/v1/manual/style-guide/).
### Testing environments
PlantSimEngine has several developer environments:

- `/PlantSimEngine/test`, to check for non-regressions.
- `/PlantSimEngine/test/downstream`, whose folder contains a few benchmarks on PlantSimEngine, PlantBioPhysics and XPalm, run as a GitHub Action, to ensure changes don't cause performance regressions in packages depending on PlantSimEngine. You'll need to have a version of those packages accessible if you wish to test them locally. These are distinct from the GitHub Action that does some integration checks to ensure no unexpected breaking changes occur.
- `/PlantSimEngine/docs`, to build the documentation. The documentation runs code, and some of the API functions' documentation is also tested as `jldoctest` instances.
In the `/PlantSimEngine/docs` environment, run `/PlantSimEngine/docs/make.jl`.
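As a sketch, building the documentation locally from the repository root typically amounts to something like the following (the exact invocation may differ depending on your setup, and it assumes a local Julia install):

```shell
# Instantiate the docs environment once, then build (sketch).
julia --project=docs -e 'using Pkg; Pkg.instantiate()'
julia --project=docs docs/make.jl
```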
### Editing benchmarks
- If you wish for a branch to be benchmarked after every commit, you need to declare it in the benchmarks GitHub Action's YAML file, [benchmarks_and_downstream.yml](https://github.com/VirtualPlantLab/PlantSimEngine.jl/blob/main/.github/workflows/benchmarks_and_downstream.yml), by adding your branch to the `on: push:` section.
- You can view benchmarks here: <https://virtualplantlab.github.io/PlantSimEngine.jl/dev/bench/index.html>. They are still somewhat WIP and not yet battle-tested.
- You may occasionally need to update or delete a benchmark, in which case you will need to manually delete it in the **gh-pages** branch, in `dev/bench/index.html`.
- The actual benchmark list is located in the `test/downstream` folder.
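As a sketch, the relevant part of that workflow file would look something like this (the branch name below is a hypothetical placeholder, not a real branch):

```yaml
# .github/workflows/benchmarks_and_downstream.yml (sketch)
on:
  push:
    branches:
      - main
      - my-feature-branch # hypothetical: add your branch here
```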
## Things to keep an eye out for
### Check downstream tests

- If your changes affect the API, they might affect a package depending on PlantSimEngine. Benchmarks can be one way to check, as some benchmarks run other packages. Otherwise, a specific GitHub Action, [Integration.yml](https://github.com/VirtualPlantLab/PlantSimEngine.jl/blob/main/.github/workflows/Integration.yml), runs other packages' test suites. If this action fails, it is likely that a breaking change was introduced that hasn't been accounted for in the downstream package. If you expected a breaking change and labelled your release as such, the action will not fail.
- Note that those tests don't appear to build the documentation, so they don't cover that.
- API changes can also affect downstream packages' documentation and tests.
### Which documentation pages may be affected by changes
You may impact several specific documentation pages depending on what you changed. Feature and API changes have obvious direct impacts, but there are some less obvious ramifications:
- Improving user errors may impact the **Troubleshooting** page.
- Extra features might also expand the **Tips and workarounds** page, as well as the 'implicit contracts' page.
- Some experimental features might be worth documenting in the dedicated **API** page, once it is added.
- Potentially, other pages such as the **Credits** page, **Key Concepts**, etc. If the API makes use of new Julia features or syntax, the **Julia basics** page is probably also worth updating.
- New examples are worth turning into doctests.
### Previewing documentation
You can preview generated documentation (assuming it was able to build) relating to your PR (example given with #128) by checking the related link: [https://virtualplantlab.github.io/PlantSimEngine.jl/previews/PR128/](https://virtualplantlab.github.io/PlantSimEngine.jl/previews/PR128/)
## Checklist before submitting PRs
- Ensure your code works
- Build the PlantSimEngine documentation and update whatever doctests were broken
- Push your commit, and let the GitHub Actions run their course
- Check the 'CI' GitHub Action and fix if necessary
- Check the downstream and benchmark GitHub Actions:
  - If benchmarks tanked, fix your code. If you need to add/update/delete benchmarks, do so.
  - If you broke an integration/downstream test, you'll need to investigate it.
  - If API changes were made, also check downstream packages' documentation.
88
-
It’s probably now safe to request a merge.
88
+
It’s probably now safe to request a merge.
89
89
90
90
### A few extra things worth doing
### Automatic model generation
A specific feature requires generating models on the fly, to enable passing vectors to `Status` objects in multi-scale simulations. More features that generate models may be added later.
The solution makes use of a somewhat brittle feature, `eval()`, with some subtleties. You can read more about the related world age problem [here](https://arxiv.org/abs/2010.07516), or [here](https://discourse.julialang.org/t/world-age-problem-explanation/9714/15).
The related file is `model_generation_from_status_vectors.jl`, which has some additional comments.
What is important to bear in mind is that if you call functions which generate models via `eval()`, you will need to return to top-level scope for those changes to become visible. You can see an example in `tests/helper_functions.jl` with the functions `test_filtered_output_begin` and `test_filtered_output`. The first function calls `modellist_to_mapping`, which creates some models on the fly to convert status vectors between a ModelList and its equivalent pseudo-multiscale mapping. The function is split in two so that it is possible to return to global scope and make the `eval()` changes publicly available. The second function is then able to run the simulation on the mapping with its generated models, and complete the test successfully.
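A minimal sketch of the pattern (the function and variable names here are hypothetical illustrations, not the actual PlantSimEngine helpers):

```julia
# A function generated via eval() inside another function is not
# visible to code running in that same function call: the caller is
# stuck in an older "world age".

function generate_model(name::Symbol)
    # Define a new top-level function on the fly (hypothetical example).
    @eval $(name)(x) = 2x
end

function generate_and_run_broken()
    generate_model(:doubler)
    return doubler(1.0)  # MethodError: the method is "too new" for this world age
end

# The working pattern splits the work in two, returning to top-level
# scope between generation and use:
generate_model(:doubler)  # step 1: generate the model, then return to top level
doubler(2.0)              # step 2: the generated method is now visible (== 4.0)
```

One could also use `Base.invokelatest` to call freshly generated code without returning to top level, at some performance cost.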
They were briefly mentioned earlier in the page, but the test banks that increase the number of tested combinations of weather data, ModelLists/mappings and tracked outputs could definitely be improved upon.
Some additional work and tests regarding tracking memory allocations, type stability, etc. would also be worth implementing and documenting.
`PlantSimEngine.jl` uses the [`ModelList`](@ref) to automatically compute a dependency graph between the models and run the simulation in the correct order. When running a simulation with [`run!`](@ref), the models are then executed following this simple set of rules:
1. Independent models are run first. A model is independent if it can be run independently from other models, using only initializations (or nothing).
2. Then, models that have a dependency on other models are run. The first ones are those that depend on an independent model, then the ones that are children of those, and then their children... until no more children are found. There are two types of children models (*i.e.* dependencies): hard and soft dependencies:
    1. Hard dependencies are always run before soft dependencies. A hard dependency is a model that is directly called by another model. It is declared as such by its parent, which lists its hard dependencies as `dep`. See [this example](https://github.com/VirtualPlantLab/PlantSimEngine.jl/blob/3d91bb053ddbd087d38dcffcedd33a9db35a0fcc/examples/dummy.jl#L39) that shows `Process2Model` defining a hard dependency on any model that simulates `process1`.
    2. Soft dependencies are then run sequentially. A model has a soft dependency on another model if one or more of its inputs is computed by another model. If a soft dependency has several parent nodes (*e.g.* two different models compute two of its inputs), it is run only once all of its parent nodes have run. In practice, when we visit a node with a parent that has not yet run, we stop visiting that branch; the node will eventually be visited from the branch of the last parent that ran.
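The soft-dependency traversal above can be sketched as a toy algorithm in plain Julia. This is a simplified illustration, independent of the actual PlantSimEngine internals, and all names are hypothetical:

```julia
# Each "model" declares the variables it needs and the one it produces.
struct ToyModel
    name::Symbol
    inputs::Vector{Symbol}  # soft dependencies: variables computed elsewhere
    output::Symbol
end

# Run order: models with no unmet inputs run first (independent models),
# then models whose parents have all run, mirroring the rules above.
function toy_run_order(models, initializations::Set{Symbol})
    available = copy(initializations)
    order = Symbol[]
    remaining = copy(models)
    while !isempty(remaining)
        i = findfirst(m -> all(in(available), m.inputs), remaining)
        i === nothing && error("cyclic or uninitialized dependency")
        m = remaining[i]
        push!(order, m.name)
        push!(available, m.output)
        deleteat!(remaining, i)
    end
    return order
end

models = [
    ToyModel(:B, [:a], :b),      # depends on A's output
    ToyModel(:A, Symbol[], :a),  # independent: needs only initializations
    ToyModel(:C, [:a, :b], :c),  # two parents: runs only after A and B
]
toy_run_order(models, Set{Symbol}())  # => [:A, :B, :C]
```

Note that `:C` is only run once both of its parents have produced their outputs, matching the "run only once all parent nodes have run" rule.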
docs/src/planned_features.md
### Varying timesteps
Currently, all models are required to use the same timestep. Some physiological phenomena within a plant tend to run on an hourly basis, others are slower. Weather data is often provided daily. Enabling different timesteps depending on the model is on the roadmap, and is planned as the next milestone.
### Multi-plant/Multi-species simulations
A goal for PlantSimEngine down the line is to be able to simulate complex scenes with data comprising several plants, possibly of different species, for agroforestry purposes.
Its current state doesn't enable practical declaration of several plant species, or multiple plants relying on similar subsets of models with partially different models or parameters.
## Minor features
## Possible features (likely not a priority)
- API enabling iterative builds and validation of mappings and ModelLists
- Build step for the models, *i.e.* a function that would write a mapping or ModelList into a Julia script for validation, improved readability and (maybe) performance (no need to traverse the dependency graph anymore)
- Improved parallelisation
- Reintroduce multi-object parallelisation in single-scale
## Other minor points
- Examples/solutions for floating-point accumulation errors
- More examples for fitting/type conversion/error propagation
- A couple of new MTG features (#106)
- Other minor bugs
- Unrolling the `run!` function
## Other
- Reproducing another FSPM?
- Diffusion model example?

The full list of issues can be found [here](https://github.com/VirtualPlantLab/PlantSimEngine.jl/issues).
docs/src/prerequisites/julia_basics.md
Many of these are also briefly presented in [this Julia Data Science](https://juliadatascience.io/julia_basics) guide, which also happens to focus on the DataFrames.jl package.
Understanding more about methods, parametric types and the typing system is usually worthwhile when working with Julia packages.
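For instance, a minimal illustration of multiple dispatch and parametric types (a toy example, unrelated to the package's actual types):

```julia
abstract type AbstractProcess end

struct Photosynthesis <: AbstractProcess end
struct Respiration <: AbstractProcess end

# Multiple dispatch: the method called depends on the concrete type.
describe(::Photosynthesis) = "fixes carbon"
describe(::Respiration) = "releases carbon"

# Parametric type: a wrapper that works for any real element type.
struct Trace{T<:Real}
    values::Vector{T}
end

total(t::Trace) = sum(t.values)

describe(Photosynthesis())  # => "fixes carbon"
total(Trace([1.0, 2.5]))    # => 3.5
```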