Conversation
Pull request overview
This PR introduces a new modular ANN system to physXAI, enabling flexible model construction by composing neural network components programmatically. The changes include new modular expression classes, updates to existing ANN construction to support normalization and feature control, and widespread adoption of explicit keyword arguments for preprocessing classes.
Key Changes
- New modular ANN architecture system with composable expressions (ModularExpression, ModularANN, ModularModel, ModularLinear, etc.)
- Configuration schema updates adding `normalize` and `n_features` parameters to support modular model construction
- Refactored all preprocessing instantiations to use explicit keyword arguments for better clarity
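To illustrate what composing components programmatically can look like, here is a minimal sketch. It is reconstructed purely from the constructor signatures quoted in the review comments below; the import paths, feature names, and helper function are assumptions, and the shipped P_hp_modular.py example may differ.

```python
# Hypothetical sketch -- reconstructed from the signatures quoted in this
# review, not from the physXAI examples. Import paths are assumptions.
from physXAI.preprocessing.constructed import Feature
from physXAI.models.modular.modular_ann import ModularModel, ModularPolynomial


def build_modular_parts(ann_model):
    """Compose two modular expressions over shared features.

    `ann_model` is assumed to be one of the allowed model configs
    (e.g. ClassicalANNModel); its constructor is not shown in this review.
    """
    # Note: per the review comment below, instantiating Feature also
    # registers it globally in FeatureConstruction.inputs.
    t_amb = Feature('T_amb')
    t_ret = Feature('T_ret')

    # Polynomial part of the subsystem (signature as quoted below).
    poly = ModularPolynomial(inputs=[t_amb, t_ret], degree=2, interaction_degree=1)

    # Non-linear part wrapped in a trainable sub-model.
    sub = ModularModel(model=ann_model, inputs=[t_amb, t_ret], name='nonlinear_part')
    return poly, sub
```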
Reviewed changes
Copilot reviewed 23 out of 25 changed files in this pull request and generated 19 comments.
| File | Description |
|---|---|
| `physXAI/models/modular/modular_expression.py` | New core expression classes for modular architecture (192 lines) |
| `physXAI/models/modular/modular_ann.py` | Modular ANN implementations including ModularModel, ModularLinear, ModularPolynomial (278 lines) |
| `physXAI/models/modular/__init__.py` | Empty module init (modular classes not exposed in public API) |
| `physXAI/models/ann/ann_design.py` | Added model_config initialization in ANNModel base class |
| `physXAI/models/ann/configs/ann_model_configs.py` | Added normalize and n_features fields; changed n_layers validation from gt=0 to ge=0 |
| `physXAI/models/ann/model_construction/ann_models.py` | Added conditional normalization and n_features support |
| `physXAI/models/ann/model_construction/rbf_models.py` | Major refactor: single-layer RBF, gamma initialization, conditional normalization |
| `physXAI/models/ann/model_construction/residual_models.py` | Changed rescale_mean to hardcoded 0 for residual models |
| `physXAI/preprocessing/preprocessing.py` | Updated super() calls to use explicit keyword arguments |
| `physXAI/preprocessing/constructed.py` | Added input() method to FeatureBase; Feature and FeatureLag auto-register inputs |
| `unittests/modular/test_modular.py` | New test file with utility functions but no pytest test cases |
| `unittests/test_coverage.py`, `unittests/verify_installation.py` | Updated preprocessing calls to use keyword arguments |
| `executables/bestest_hydronic_heat_pump/*.py` | Updated preprocessing calls to use keyword arguments |
| `executables/bestest_hydronic_heat_pump/P_hp_modular.py` | New example demonstrating modular ANN usage |
| Empty `__init__.py` files | Removed blank lines from various `__init__.py` files |
| `build/reports/coverage.svg` | Coverage decreased from 89% to 80% |
```python
def __init__(self, name: str, **kwargs):
    super().__init__(name, **kwargs)
    FeatureConstruction.add_input(self.feature)
```
The Feature class now automatically registers itself in FeatureConstruction.inputs upon instantiation (line 175). This creates a side effect in the constructor that modifies global state, which can lead to unexpected behavior in tests or when creating temporary Feature objects. This is a subtle breaking change from the previous behavior where Feature was a simple pass-through class. Consider whether this automatic registration is always desired, or if it should be opt-in or controlled via a parameter.
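One way to make the registration opt-in, as suggested above, would be a constructor flag. A minimal sketch, assuming the existing FeatureBase/FeatureConstruction API; the `register` parameter is hypothetical and not part of this PR:

```python
from physXAI.preprocessing.constructed import FeatureBase, FeatureConstruction


class Feature(FeatureBase):
    def __init__(self, name: str, register: bool = True, **kwargs):
        super().__init__(name, **kwargs)
        # Only touch the global registry when explicitly requested, so that
        # temporary Feature objects (e.g. in unit tests) leave no side effects.
        if register:
            FeatureConstruction.add_input(self.feature)
```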
```python
allowed_models = [ClassicalANNModel, CMNNModel, LinearRegressionModel]
i = 0

def __init__(self, model: ANNModel, inputs: list[ModularExpression, FeatureBase], name: str = None, nominal_range: tuple[float, float] = None):
```
The type hint list[ModularExpression, FeatureBase] is invalid Python syntax. It should be list[Union[ModularExpression, FeatureBase]] or list[ModularExpression | FeatureBase] (Python 3.10+). This same error appears on lines 133, 162, 196, 205, and 244. The code may still work at runtime due to type hints not being enforced, but will cause issues with type checkers and IDEs.
```diff
-    def __init__(self, model: ANNModel, inputs: list[ModularExpression, FeatureBase], name: str = None, nominal_range: tuple[float, float] = None):
+    def __init__(self, model: ANNModel, inputs: list[Union[ModularExpression, FeatureBase]], name: str = None, nominal_range: tuple[float, float] = None):
```
```python
class ModularExistingModel(ModularExpression):

    def __init__(self, model: Union[Sequential, Functional, str, Path], original_inputs: list[ModularExpression, FeatureBase], trainable: bool, name: str = None):
```
Invalid type hint syntax: list[ModularExpression, FeatureBase] should be list[Union[ModularExpression, FeatureBase]] or list[ModularExpression | FeatureBase].
```diff
-    def __init__(self, model: Union[Sequential, Functional, str, Path], original_inputs: list[ModularExpression, FeatureBase], trainable: bool, name: str = None):
+    def __init__(self, model: Union[Sequential, Functional, str, Path], original_inputs: list[Union[ModularExpression, FeatureBase]], trainable: bool, name: str = None):
```
```python
class ModularPolynomial(ModularExpression):
    i = 0

    def __init__(self, inputs: list[ModularExpression, FeatureBase], degree: int = 2, interaction_degree: int = 1, name: str = None, nominal_range: tuple[float, float] = None):
```
Invalid type hint syntax: list[ModularExpression, FeatureBase] should be list[Union[ModularExpression, FeatureBase]] or list[ModularExpression | FeatureBase].
```diff
-    def __init__(self, inputs: list[ModularExpression, FeatureBase], degree: int = 2, interaction_degree: int = 1, name: str = None, nominal_range: tuple[float, float] = None):
+    def __init__(self, inputs: list[Union[ModularExpression, FeatureBase]], degree: int = 2, interaction_degree: int = 1, name: str = None, nominal_range: tuple[float, float] = None):
```
```python
def get_config(self) -> dict:
    config = super().get_config()
    config.update({})
    warning("ModularANN currently does not save architecture config.")
    return config
```
ModularANN does not save its architecture configuration, as noted in the warning message. This means saved ModularANN models cannot be reconstructed from configuration alone, breaking the save/load pattern used by other models in physXAI. Users who save a ModularANN will not be able to use model_from_config() to reload it, which is a significant limitation for production deployment and model persistence. Consider documenting this limitation prominently in the docstring.
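Until the architecture is actually serialized, a docstring along the following lines (a suggestion only, keeping the existing method body) would surface the limitation before users rely on model_from_config():

```python
def get_config(self) -> dict:
    """Return the serializable configuration of this model.

    Warning:
        ModularANN does not yet serialize its architecture (the composed
        ModularExpression graph). The returned config is therefore not
        sufficient to rebuild the model via model_from_config(); reload
        saved weights into a freshly composed ModularANN instead.
    """
    config = super().get_config()
    # warning() as imported in the original module
    warning("ModularANN currently does not save architecture config.")
    return config
```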
```python
else:
    x = input_layer

kmeans = KMeans(n_clusters=n_neurons, random_state=config['random_state'], n_init='auto')
kmeans.fit(normalization(td.X_train_single).numpy())
```
When normalize is False, the normalization variable is not defined, but line 74 unconditionally uses it to fit KMeans clusters. This will cause a NameError at runtime. The code should either use x or handle both cases separately.
```diff
-    else:
-        x = input_layer
-    kmeans = KMeans(n_clusters=n_neurons, random_state=config['random_state'], n_init='auto')
-    kmeans.fit(normalization(td.X_train_single).numpy())
+        kmeans_input = normalization(td.X_train_single).numpy()
+    else:
+        x = input_layer
+        kmeans_input = td.X_train_single
+    kmeans = KMeans(n_clusters=n_neurons, random_state=config['random_state'], n_init='auto')
+    kmeans.fit(kmeans_input)
```
```diff
 class ClassicalANNConstruction_config(BaseModel):

-    n_layers: int = Field(..., gt=0)
+    n_layers: int = Field(..., ge=0)
```
Changing n_layers from gt=0 (greater than 0) to ge=0 (greater than or equal to 0) allows zero layers, which may not make physical sense for a neural network and could lead to runtime errors in model construction code that assumes at least one layer. This is a breaking change that loosens validation constraints. If zero layers are intentional for the modular system, this should be documented, and model construction code should handle this edge case safely.
```diff
-    n_layers: int = Field(..., ge=0)
+    n_layers: int = Field(..., gt=0)
```
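If zero layers are in fact intended for the modular system, the construction code should tolerate that case explicitly. A sketch of such a guard; the Keras import, helper name, and config keys are assumptions rather than code from ann_models.py:

```python
from keras import layers  # assumption: physXAI builds its models on Keras


def build_hidden_stack(input_layer, config: dict):
    """Hidden-layer stack that degrades gracefully when n_layers == 0."""
    x = input_layer
    # Config keys are assumed names, not taken from ann_models.py.
    for _ in range(config['n_layers']):  # range(0) -> no hidden layers at all
        x = layers.Dense(config['n_neurons'], activation=config['activation'])(x)
    # With n_layers == 0 the model reduces to whatever follows (e.g. a single
    # linear output layer), which should be stated in the documentation.
    return x
```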
```python
super().__init__(name)
self.model = model
self.model.model_config.update({
```
`model` only has a `model_config` attribute if it is of type ANNModel. However, LinearRegressionModel is in the allowed models as well, and it is only of type SingleStepModel, not ANNModel. Insert an isinstance check before updating model_config.
In addition, update the type hint for the model parameter in the __init__ header.
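A sketch of the requested guard and widened type hint (class context abbreviated, imports omitted; using SingleStepModel in the hint and leaving the elided dict contents as a placeholder):

```python
from typing import Union


class ModularModel(ModularExpression):
    def __init__(self, model: Union[ANNModel, SingleStepModel],
                 inputs: list[Union[ModularExpression, FeatureBase]],
                 name: str = None, nominal_range: tuple[float, float] = None):
        super().__init__(name)
        self.model = model
        # LinearRegressionModel is only a SingleStepModel and has no
        # model_config attribute, so guard the update with an instance check.
        if isinstance(self.model, ANNModel):
            self.model.model_config.update({
                # ... entries as in the PR (not visible in this hunk)
            })
```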
Pull Request
Description
Adds modular ANNs to physXAI. Modular ANNs allow more flexibility in creating new models, e.g. subsystems with linear and non-linear inputs.
Type of Change
Required Checklist
Testing
Examples
Compatibility
`pyproject.toml` (required in `dependencies` or optional in `[project.optional-dependencies]`)
Documentation
Breaking Changes
Optional
GitHub Copilot Review