
Commit

More simplifications and bug fixes
Maginor committed Oct 30, 2024
1 parent 9a857fc commit 34345f8
Showing 8 changed files with 69 additions and 69 deletions.
79 changes: 29 additions & 50 deletions dev_notes/todo.txt
@@ -2,34 +2,16 @@

*** High pri ***



Dynamic module selection is essential for things like PET modules. Otherwise you get different model versions for everything...



-NIVAFjord:
-
-Need a complete rethink of horizontal fluxes. They cause too much numerical instability when there is large in/outflow for small layers.
-
-There should really be an error if there is an opening to a neighboring basin on a depth below the max depth of that basin.
-
-In Varna setup, can merge channel into beloslav to mitigate this a bit?
-Probably necessary!
-
-Moreover, they need to be proper two-way for the layer switching to work in both ways.



MobiView
If the plot x axis was set to nonsense since the first selected series was entirely NaN, it won't be fixed by selecting a non-NaN series.
-Should not be considered as was_auto_resized in this case.
+Should not be considered as was_auto_resized if it was entirely NaN, so that the autoresize can trigger again.
If the parameter search is already selected, it doesn't react to clicks.


-Dataset no longer checks correctly if the correct amount of parameter values were provided?
+Dataset no longer checks if the correct amount of parameter values were provided?
May be intentional, but then it breaks when saving it again.
-I think this only happens for a parameter group that is not used by the model.
-Dataset doesn't generate identifier for generated index set.
+I think this only happens for a parameter group that is not used by the model. Then when it tries to save it, there is a problem.
+Dataset doesn't generate identifier for generated index set, and hence par_groups that refer to it are formatted incorrectly.
Default to use same as in the model?

Got an (internal) error in the following case
@@ -39,54 +21,32 @@
layer.water.tox was distributed over tox_index with the actual dependency.
Caused a problem when the dissolved flux was to be added to the connection aggregate (lack of index set).
Probably also a problem for vert.top etc.

-It is a bit tricky for the framework to predetermine this.. But there is maybe something like this for graph aggregates
+It is a bit tricky for the framework to predetermine this.. But there is maybe something like this for graph aggregates already that could be applied here too.


Want to be able to declare something like
basin.tox.vel_air
to get tox_index dependency for the variable without basin.tox being a variable.
For now, could make it a variable with @override {1[k g]}, but it is a bit annoying.

Didn't we already make this possible? Test it!
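A minimal sketch of the workaround described above, in Mobius2-style declaration syntax (the names `tox` and `vel_air` come from the note; the exact `var(...)` signature here is an illustrative assumption, not confirmed API):

```
# Hypothetical sketch: force basin.tox to exist as a variable with a constant override,
# only so that basin.tox.vel_air can pick up the tox_index dependency.
var(basin.tox, [k g], "Basin toxin (dummy)") @override { 1[k g] }
var(basin.tox.vel_air, [m, day-1], "Toxin air exchange velocity")
```

The annoyance is the dummy `@override` variable itself; the wish is to declare `basin.tox.vel_air` directly without `basin.tox` being a variable.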


If a discrete flux goes from something that is @override, it should not be clamped to the value of what it is carrying.



Add forums on github page? (github Discussions)



Made it possible to add properties to solvers manually, but it could be error prone.
Fix errors if they appear.


-There is no guard against loading a library twice, instead you get identifier name conflict.


-@index_with for par_group causes problems if you want to reuse the module without one of the index sets..
-Was introduced to force distribution over those index sets.
-Because in external_computation it needs to know distribution at framework compile time, not model compile time.
-That info could be declared differently, like a @fully_distribute note or something.
+There is no guard against loading a library twice into the same scope, instead you get identifier name conflict.
+Should probably allow it, but ignore the second load.
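For instance (hypothetical file and library names, assuming the same `load(...)` form that is used for module loads elsewhere in this repository):

```
# Hypothetical sketch of the double-load situation.
load("stdlib/basic_math.txt", library("Basic"))
load("stdlib/basic_math.txt", library("Basic"))  # today: identifier name conflict; desired: silently ignore the repeated load
```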


-Something is broken with airsea, but only in some cases .... (has something to do with ice or longwave).
+Something is broken with airsea, but only in some rare cases .... (has something to do with ice or longwave (?) ).
In 07_model it looks like a bad interaction with mixing, numerical instability (why was it not an issue before??).
EasyTox still broken.

Not including declaration of union index set in data set can create an internal error.
wb_index : index_set("Water body") @union(sc, lake)

-Get rid of the idea of aggregating to a *compartment*. Instead, only aggregate to the index set tuple.
-Difficulties:
-Serialization name depends on compartment (should be easy to change)
-aggregation_weight depends on compartment. Make it work like unit_conversion instead?
-See other suggestion about merging unit_conversion and aggregation_weight.
-Make par_aggregate work the same as regular_aggregate.





Specific models
@@ -95,6 +55,15 @@
Also for O2 (found a difficulty with that earlier, but maybe we could resolve it?)

NIVAFjord
+
+Need a complete rethink of horizontal fluxes. They cause too much numerical instability when there is large in/outflow for small layers.
+
+There should really be an error if there is an opening to a neighboring basin on a depth below the max depth of that basin.
+
+In Varna setup, can merge channel into beloslav to mitigate this a bit?
+Probably necessary!
+
+Moreover, they need to be proper two-way for the layer switching to work in both ways.

Go to the simpler version from MyLake of parametrizing vertical mixing
Mathematically the same, just removes unnecessary parameters
@@ -207,6 +176,16 @@

*** Intermediate-pri ***

+Get rid of the idea of aggregating to a *compartment*. Instead, only aggregate to the index set tuple.
+Difficulties:
+Serialization name depends on compartment (should be easy to change)
+aggregation_weight depends on compartment. Make it work like unit_conversion instead?
+See other suggestion about merging unit_conversion and aggregation_weight.
+Make par_aggregate work the same as regular_aggregate.
+
+
+Make option() also available inside modules. Unify some of the airsea FPV stuff this way? Also try to unify some of the NIVAFjord models to not have that many files.

Make something like

computation("A computation", soil) {
7 changes: 3 additions & 4 deletions models/modules/nivafjord/basin.txt
@@ -1,13 +1,12 @@

preamble("NIVAFjord dimensions", version(0, 0, 0),
-all_basins : index_set,
-layer_idx : index_set,
+all_layer : compartment,
layer : compartment
) {

-par_group("Layer thickness") {
+par_group("Layer thickness", all_layer) {
dz0 : par_real("Stable layer thickness", [m], 1, 0.1, 5)
-} @index_with(all_basins, layer_idx)
+} @fully_distribute

par_group("Layer area", layer) {
A : par_real("Layer area", [m 2], 10000, 0, 1e6)
2 changes: 1 addition & 1 deletion models/nivafjord_lake_model.txt
@@ -57,7 +57,7 @@ It is almost possible to unify a lot of this one with nivafjord_model.txt . The
module("Atmospheric", air, temp, wind, g_rad, pressure, a_hum, rho, lwd, cos_z, a_vap, s_vap))

load("modules/nivafjord/basin.txt",
-dims : preamble("NIVAFjord dimensions", basin_idx, layer_idx, layer),
+dims : preamble("NIVAFjord dimensions", layer, layer),
module("NIVAFjord basin", air, basin, layer, water, salt, heat, temp, salinity, pressure, wind, g_rad, rho, attn, z, dz, h, area, freeze_t, sw, loc(basin.ice.indicator), vert, sw_vert, loc(sediment.heat), dims))

fjord_phyt : quantity("Fjord phytoplankton")
4 changes: 3 additions & 1 deletion models/nivafjord_model.txt
@@ -57,8 +57,10 @@ model("NIVAFjord") {
module("AirSea", "AirSea Fjord", air, basin, ice, heat, temp, precip, wind, g_rad, pressure, rho, a_hum, lwd, sw, attn, indicator,
evap, cos_z, loc(basin.freeze_t), loc(basin.area), loc(layer.water[vert.top]), loc(layer.water.heat[sw_vert.top])))

+all_layer : compartment("All layers", all_basins, layer_idx) # This is not used for anything other than to force a par group to distribute over all_basins

load("modules/nivafjord/basin.txt",
-dims : preamble("NIVAFjord dimensions", all_basins, layer_idx, layer),
+dims : preamble("NIVAFjord dimensions", all_layer, layer),
module("NIVAFjord basin", air, basin, layer, water, salt, heat, temp, salinity, pressure, wind, g_rad, rho, attn, z, dz, h, area, freeze_t, sw, loc(basin.ice.indicator), vert, sw_vert, loc(sediment.heat), dims))

load("modules/nivafjord/boundary_basin.txt",
4 changes: 2 additions & 2 deletions src/ast.cpp
@@ -1140,8 +1140,8 @@ match_declaration(Decl_AST *decl, const std::initializer_list<std::initializer_l
found_notes.insert(note_name);
}
} else if(!decl->notes.empty()) {
-decl->notes[0]->body->opens_at.print_error_header();
-fatal_error("This declaration should not have more than one body.");
+decl->notes[0]->decl.print_error_header();
+fatal_error("This declaration should not have notes.");
}

return found_match;
9 changes: 9 additions & 0 deletions src/model_application.cpp
@@ -466,6 +466,15 @@ process_par_group_index_sets(Mobius_Model *model, Data_Set *data_set, Entity_Id
}
index_sets.push_back(index_set_id);
}

+if(par_group->must_fully_distribute) {
+for(auto set_id : par_group->max_index_sets) {
+if(std::find(index_sets.begin(), index_sets.end(), set_id) == index_sets.end()) {
+par_group_data->source_loc.print_error_header();
+fatal_error("The par_group \"", par_group->name, "\" is marked as @fully_distribute in the model. That means that it must be given all the index sets that it can. It is missing \"", model->index_sets[set_id]->name, "\".");
+}
+}
+}
}
}

30 changes: 20 additions & 10 deletions src/model_declaration.cpp
@@ -185,11 +185,14 @@ load_top_decl_from_file(Mobius_Model *model, Source_Location from, String_View p
Entity_Id id = invalid_entity_id;

if(decl->type == Decl_Type::library) {

id = model->libraries.register_decl(&model->top_scope, decl);
auto lib = model->libraries[id];
lib->scope.parent_id = id;
lib->normalized_path = normalized_path;
lib->name = found_name;
+set_serial_name(model, lib, 0); // To prevent double loads of libraries.

} else if (is_module_like) {
id = model->module_templates.register_decl(&model->top_scope, decl);
auto mod = model->module_templates[id];
@@ -417,7 +420,7 @@ Par_Group_Registration::process_declaration(Catalog *catalog) {
set_serial_name(catalog, this);
scope.parent_id = id;

-// TODO: Don't allow duplicate components or index sets.
+// TODO: Don't allow duplicate components.

if(which >= 1) {
for(int idx = 1; idx < decl->args.size(); ++idx)
@@ -426,12 +429,11 @@

for(auto note : decl->notes) {
auto str = note->decl.string_value;
-if(str == "index_with") {
+if(str == "fully_distribute") {

-match_declaration_base(note, {{{Decl_Type::index_set, true}}}, 0);
+match_declaration_base(note, {{}}, 0);

-for(auto arg : note->args)
-direct_index_sets.push_back(parent_scope->resolve_argument(Reg_Type::index_set, arg));
+must_fully_distribute = true;

} else {
note->decl.print_error_header();
@@ -1017,8 +1019,16 @@ process_load_library_declaration(Mobius_Model *model, Decl_AST *decl, Entity_Id
auto lib_load_decl = decl->args[idx]->decl;
match_declaration(lib_load_decl, {{Token_Type::quoted_string}}, false, false);
std::string library_name = single_arg(lib_load_decl, 0)->string_value;

-load_library(model, to_scope, file_name, relative_to, library_name, lib_load_decl->source_loc);

+auto find_id = model->get_scope(to_scope)->deserialize(library_name, Reg_Type::unrecognized);
+if(!is_valid(find_id)) // If it doesn't exist, load it
+load_library(model, to_scope, file_name, relative_to, library_name, lib_load_decl->source_loc);
+else if(find_id.reg_type != Reg_Type::library) {
+lib_load_decl->source_loc.print_error_header();
+fatal_error("Trying to load a library \"", library_name, "\" that is named the same as another non-library entity in the same scope");
+}
+// If a library was already loaded with that name, just ignore the reattempted load.
+// TODO: What if it was a different library with the same name??
}
}

@@ -1702,8 +1712,8 @@ basic_checks_and_finalization(Mobius_Model *model) {
for(auto index_set : model->components[comp_id]->index_sets)
insert_dependency(model, group->max_index_sets, index_set);
}
-for(auto index_set : group->direct_index_sets)
-insert_dependency(model, group->max_index_sets, index_set);
+//for(auto index_set : group->direct_index_sets)
+//	insert_dependency(model, group->max_index_sets, index_set);
}
}

@@ -1731,7 +1741,7 @@ process_module_load_outer(Mobius_Model *model, Module_Load &load) {

bool allow_identifier = (module_spec->type == Decl_Type::preamble);

int which = match_declaration(module_spec,
int which = match_declaration(module_spec,
{
{Token_Type::quoted_string, Token_Type::quoted_string},
{Token_Type::quoted_string, Token_Type::quoted_string, {true}},
3 changes: 2 additions & 1 deletion src/model_declaration.h
@@ -39,8 +39,9 @@ struct
Par_Group_Registration : Registration_Base {

std::vector<Entity_Id> components;
-std::vector<Entity_Id> direct_index_sets;
+//std::vector<Entity_Id> direct_index_sets;
Decl_Scope scope;
+bool must_fully_distribute = false;

Index_Set_Tuple max_index_sets; // This one will not be correctly set until the model is fully loaded.

