The Machine Learning Language
helps to train surrogate models. A surrogate model is used in optimisation when the objective function is unknown or too expensive to compute. For such cases, EvoAl offers surrogate-learning capabilities that can be configured with the Machine Learning Language. A simple example is shown in the following code snippet:
import "definitions" from de.evoal.surrogate.ml;
import "definitions" from de.evoal.surrogate.smile.ml;
import "data" from surrogate;
module training {
    prediction svr
        maps 'x:0'
        to 'y:0'
        using
            layer transfer
                with function 'gaussian-svr'
                mapping 'x:0'
                to 'y:0'
                with parameters
                    'ε' := 1.4;
                    'σ' := 3.0;
                    'soft-margin' := 0.15;
                    tolerance := 0.1;

    for _counter in [1 to 10] loop
        predict svr from "data.json"
        and measure
            'cross-validation'(10);
            'R²'();
        end
        and store to "svr_${_counter}.pson"
    end
}
The file consists of two sections. In the first section, a model is configured, and in the second, it is trained and persisted for later use.
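The training module above can be approximated in plain Python. The sketch below uses scikit-learn's `SVR` as a stand-in for Smile's `'gaussian-svr'`; the parameter mapping (σ to `gamma`, soft-margin to `C`) and the use of pickle instead of PSON are illustrative assumptions, not EvoAl's actual behaviour.

```python
# Rough analogue of the 'training' module: configure a Gaussian (RBF) SVR,
# then repeatedly train, cross-validate, and persist the model.
import pickle
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))           # input feature 'x:0'
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 100)   # output feature 'y:0'

sigma = 3.0
model = SVR(kernel="rbf",
            epsilon=1.4,                  # 'ε' := 1.4
            gamma=1.0 / (2 * sigma**2),   # 'σ' := 3.0 (assumed mapping)
            C=1.0 / 0.15)                 # 'soft-margin' := 0.15 (assumed mapping)

for counter in range(1, 11):              # for _counter in [1 to 10] loop
    # 'cross-validation'(10) with the 'R²'() measure
    scores = cross_val_score(model, X, y, cv=10, scoring="r2")
    model.fit(X, y)
    with open(f"svr_{counter}.pkl", "wb") as fh:  # and store to "svr_${_counter}.pson"
        pickle.dump(model, fh)
```

As in the DSL version, each iteration evaluates the configuration by 10-fold cross-validation before the fitted model is written to disk.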
Model Configuration
A surrogate model maps i input variables x:1, …, x:i to j output variables y:1, …, y:j. The Machine Learning Language
allows configuring multiple layered models as a surrogate function. An example can be seen in the following depiction:
graph TD
    x:1{x:1} --> model:1
    x:2{x:2} --> model:1
    x:3{x:3} --> model:1
    x:3 --> model:2
    x:4{x:4} --> model:2
    x:5{x:5} --> model:2
    model:1 --> z:1{z:1}
    model:1 --> z:2{z:2}
    model:2 --> z:3{z:3}
    x:1 --> model:3
    z:1 --> model:3
    z:2 --> model:3
    z:3 --> model:3
    model:3 --> y:1{y:1}
In the example, there are two model layers. The first layer contains model:1 and model:2, and the second layer contains model:3. As the diagram shows, a model may use all input features, but it does not have to.