# MAP Optimizers and solvers

Optimizers module.
## AdamOptimizer

Bases: `BaseOptimizer`

Adam optimizer for NumPyro models.

This class implements the Adam optimization algorithm.

Parameters:

Name | Type | Description | Default
---|---|---|---
`step_size` | `float` | The step size (learning rate) for the optimizer. | `0.001`

Source code in `src/prophetverse/engine/optimizer/optimizer.py`

### `__init__(step_size=0.001)`
### `create_optimizer()`

Create and return a NumPyro Adam optimizer instance.

Returns:

Type | Description
---|---
`_NumPyroOptim` | An instance of NumPyro's Adam optimizer.

Source code in `src/prophetverse/engine/optimizer/optimizer.py`
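To make the `step_size` parameter concrete, the update rule that Adam applies at each step can be sketched in pure Python. This is an illustrative sketch of the algorithm itself, not prophetverse's or NumPyro's implementation:

```python
import math

def adam_step(param, grad, m, v, t, step_size=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter (illustrative sketch)."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    param -= step_size * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Minimize f(x) = x^2 (gradient 2x) for a few steps.
x, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    x, m, v = adam_step(x, 2 * x, m, v, t, step_size=0.05)
```

Because the update is normalized by the second-moment estimate, `step_size` roughly bounds how far each parameter moves per step.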
## BaseOptimizer

Bases: `BaseObject`

Base class for optimizers in NumPyro.

This abstract base class defines the interface that all optimizers must implement.

Source code in `src/prophetverse/engine/optimizer/optimizer.py`

### `__init__()`

Initialize the BaseOptimizer.

Since this is an abstract base class, initialization does nothing.
### `create_optimizer()`

Create and return a NumPyro optimizer instance.

Returns:

Type | Description
---|---
`_NumPyroOptim` | An instance of a NumPyro optimizer.

Raises:

Type | Description
---|---
`NotImplementedError` | This method must be implemented in subclasses.

Source code in `src/prophetverse/engine/optimizer/optimizer.py`
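The contract above can be illustrated with a minimal stand-in. The class and return value below are hypothetical (a real subclass would return a `_NumPyroOptim` instance); only the shape of the interface is taken from the documentation:

```python
from abc import ABC, abstractmethod

class OptimizerInterface(ABC):
    """Minimal stand-in mirroring BaseOptimizer's documented interface."""

    @abstractmethod
    def create_optimizer(self):
        """Create and return an optimizer instance."""
        raise NotImplementedError

class FixedStepOptimizer(OptimizerInterface):
    """Hypothetical subclass implementing the required method."""

    def __init__(self, step_size=0.001):
        self.step_size = step_size

    def create_optimizer(self):
        # A real subclass would build and return a _NumPyroOptim here.
        return {"step_size": self.step_size}
```

Subclasses that omit `create_optimizer` cannot be instantiated, which is how the abstract base class enforces the interface.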
## CosineScheduleAdamOptimizer

Bases: `BaseOptimizer`

Adam optimizer with a cosine decay learning-rate schedule.

This optimizer combines the Adam optimizer with a cosine decay schedule for the learning rate.

Parameters:

Name | Type | Description | Default
---|---|---|---
`init_value` | `float` | Initial learning rate. | `0.001`
`decay_steps` | `int` | Number of steps over which the learning rate decays. | `100000`
`alpha` | `float` | Final multiplier for the learning rate. | `0.0`
`exponent` | `int` | Exponent for the cosine decay schedule. | `1`

Source code in `src/prophetverse/engine/optimizer/optimizer.py`
### `create_optimizer()`

Create and return a NumPyro optimizer with a cosine decay schedule.

Returns:

Type | Description
---|---
`_NumPyroOptim` | An instance of a NumPyro optimizer with a cosine decay schedule.

Source code in `src/prophetverse/engine/optimizer/optimizer.py`
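The schedule itself fits in a few lines. The sketch below follows the standard cosine-decay formula (as in `optax.cosine_decay_schedule`); assuming that formula is what backs this optimizer, the parameters map directly onto it:

```python
import math

def cosine_decay(step, init_value=0.001, decay_steps=100_000,
                 alpha=0.0, exponent=1):
    """Learning rate at `step` under a cosine decay schedule (sketch)."""
    t = min(step, decay_steps) / decay_steps          # progress in [0, 1]
    cosine = 0.5 * (1.0 + math.cos(math.pi * t))      # decays 1 -> 0
    return init_value * ((1.0 - alpha) * cosine ** exponent + alpha)
```

The rate starts at `init_value`, falls along a half cosine wave over `decay_steps` steps, and settles at `init_value * alpha`; `exponent` reshapes the curve (larger values decay faster early on).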
## LBFGSSolver

Bases: `BaseOptimizer`

L-BFGS solver.

This solver is often more practical than the other optimizers, since it usually requires little hyperparameter tuning to obtain good estimates. If your model does not converge with the default hyperparameters, try increasing `memory_size` or `max_linesearch_steps`, or setting a larger number of steps.

Parameters:

Name | Type | Description | Default
---|---|---|---
`gtol` | `float` | Gradient tolerance for the stopping criterion. | `1e-6`
`tol` | `float` | Function-value tolerance for the stopping criterion. | `1e-6`
`learning_rate` | `float` | Initial learning rate. | `1e-3`
`memory_size` | `int` | Memory size for L-BFGS updates. | `10`
`scale_init_precond` | `bool` | Whether to scale the initial preconditioner. | `True`
`max_linesearch_steps` | `int` | Maximum number of line-search steps. | `20`
`initial_guess_strategy` | `str` | Strategy for the initial line-search step-size guess. | `"one"`
`max_learning_rate` | `float` | Maximum allowed learning rate during line search. | `None`
`linesearch_tol` | `float` | Tolerance parameter for line search. | `0`
`increase_factor` | `float` | Factor by which to increase the step size during line search when conditions are met. | `2`
`slope_rtol` | `float` | Relative tolerance for the slope condition in line search. | `0.0001`
`curv_rtol` | `float` | Curvature-condition tolerance for line search. | `0.9`
`approx_dec_rtol` | `float` | Approximate-decrease tolerance for line search. | `0.000001`
`stepsize_precision` | `float` | Step-size precision tolerance. | `1e5`

Source code in `src/prophetverse/engine/optimizer/optimizer.py`
### `create_optimizer()`

Create and return a NumPyro L-BFGS solver instance.
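To see what `memory_size` controls: L-BFGS keeps the most recent `(s, y)` pairs (differences between successive parameters and gradients) and combines them with the classic two-loop recursion to approximate the inverse-Hessian-times-gradient product. A pure-Python sketch of that recursion, illustrative only and not the actual implementation:

```python
def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: approximate H^{-1} @ grad from stored (s, y) pairs.

    `grad` and each entry of `s_hist` / `y_hist` are lists of floats;
    histories are ordered oldest -> newest, at most `memory_size` pairs.
    """
    q = list(grad)
    stack = []
    # First loop: newest pair to oldest.
    for s, y in reversed(list(zip(s_hist, y_hist))):
        rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
        alpha = rho * sum(si * qi for si, qi in zip(s, q))
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
        stack.append((s, y, rho, alpha))
    # Scale the initial inverse Hessian (the scale_init_precond idea).
    if s_hist:
        s, y = s_hist[-1], y_hist[-1]
        gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    # Second loop: oldest pair to newest.
    for s, y, rho, alpha in reversed(stack):
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return r
```

With no stored pairs the direction reduces to plain gradient descent; each additional pair sharpens the curvature estimate, which is why increasing `memory_size` can help a model that fails to converge with the defaults.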