Package com.github.gradientgmm.optim

package optim

Type Members

  1. class ConjugatePrior extends Regularizer

    Implementation of conjugate prior regularization: an Inverse-Wishart prior over the covariance matrices, a Normal prior over the means and a Dirichlet prior over the weights.
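
    In symbols, this corresponds to the standard Normal-Inverse-Wishart plus Dirichlet structure sketched below; the hyperparameter names (Psi, nu, mu0, kappa, alpha) are generic, not this class's actual parameters:

      \Sigma_k \sim \mathcal{IW}(\Psi,\, \nu), \qquad
      \mu_k \mid \Sigma_k \sim \mathcal{N}(\mu_0,\, \Sigma_k / \kappa), \qquad
      (w_1, \dots, w_K) \sim \mathrm{Dir}(\alpha)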

  2. class GradientAscent extends Optimizer

    Implementation of standard gradient ascent.
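
    A minimal Breeze-based sketch of the ascent step, assuming a vector-valued parameter; the names are illustrative and not this class's actual API:

      import breeze.linalg.DenseVector

      // Plain gradient ascent: move the parameters along the gradient direction,
      // scaled by the step size.
      def ascentStep(params: DenseVector[Double],
                     grad: DenseVector[Double],
                     learningRate: Double): DenseVector[Double] =
        params + grad * learningRate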

  3. class LogBarrier extends Regularizer

    Regularization term of the form scale * log(det(cov)), which penalizes covariance matrices that approach singularity.
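
    A minimal sketch of evaluating this term with Breeze; the function name is illustrative:

      import breeze.linalg.{DenseMatrix, det}

      // scale * log(det(cov)). The term tends to -infinity as cov approaches
      // singularity, so adding it to the maximized objective keeps covariance
      // matrices well-conditioned.
      def logBarrier(scale: Double, cov: DenseMatrix[Double]): Double =
        scale * math.log(det(cov))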

  4. class MomentumGradientAscent extends Optimizer

    Implementation of gradient ascent with momentum, as formulated in Goh, "Why Momentum Really Works", Distill, 2017. http://doi.org/10.23915/distill.00006
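
    A minimal sketch of the momentum update in that article's notation, adapted for ascent; the names are illustrative, not this class's actual fields:

      import breeze.linalg.DenseVector

      // z <- beta * z + grad;  w <- w + alpha * z
      // beta is the momentum coefficient, alpha the step size.
      def momentumStep(w: DenseVector[Double],
                       z: DenseVector[Double],
                       grad: DenseVector[Double],
                       alpha: Double,
                       beta: Double): (DenseVector[Double], DenseVector[Double]) = {
        val zNew = z * beta + grad
        (w + zNew * alpha, zNew)
      }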

  5. class NesterovGradientAscent extends Optimizer

    Implementation of gradient ascent with Nesterov's correction.
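
    The distinctive feature is that the gradient is evaluated at a lookahead point rather than at the current iterate; a hedged sketch with illustrative names:

      import breeze.linalg.DenseVector

      // Nesterov-style ascent: evaluate the gradient at w + beta * z,
      // then take the momentum step from w. gradAt is caller-supplied.
      def nesterovStep(w: DenseVector[Double],
                       z: DenseVector[Double],
                       gradAt: DenseVector[Double] => DenseVector[Double],
                       alpha: Double,
                       beta: Double): (DenseVector[Double], DenseVector[Double]) = {
        val zNew = z * beta + gradAt(w + z * beta)
        (w + zNew * alpha, zNew)
      }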

  6. trait Optimizable extends Serializable

    Contains basic functionality for an object that can be modified by an Optimizer.

  7. trait Optimizer extends Serializable

    Contains the base hyperparameters with their respective getters and setters.

  8. trait ParameterOperations[A] extends Serializable

    Contains common mathematical operations that can be performed on both matrices and vectors. Its purpose is to avoid duplicating code in the optimization algorithms' classes.
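
    A sketch of the typeclass pattern this suggests, with hypothetical method names; one update rule then covers means (vectors) and covariances (matrices):

      import breeze.linalg.{DenseMatrix, DenseVector}

      object ParamOpsSketch {
        // Hypothetical typeclass: the algebra needed by the update rules.
        trait ParamOps[A] {
          def sum(x: A, y: A): A
          def scale(x: A, k: Double): A
        }

        implicit val vectorOps: ParamOps[DenseVector[Double]] =
          new ParamOps[DenseVector[Double]] {
            def sum(x: DenseVector[Double], y: DenseVector[Double]) = x + y
            def scale(x: DenseVector[Double], k: Double) = x * k
          }

        implicit val matrixOps: ParamOps[DenseMatrix[Double]] =
          new ParamOps[DenseMatrix[Double]] {
            def sum(x: DenseMatrix[Double], y: DenseMatrix[Double]) = x + y
            def scale(x: DenseMatrix[Double], k: Double) = x * k
          }

        // Written once, usable for both parameter types.
        def step[A](params: A, grad: A, lr: Double)(implicit ops: ParamOps[A]): A =
          ops.sum(params, ops.scale(grad, lr))
      }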

  9. trait Regularizer extends Serializable

    Contains basic functionality for a regularization term.

  10. class SoftmaxWeightTransformation extends WeightsTransformation

    Softmax mapping used to fit the weights vector.

    The precise mapping is w_i => log(w_i/w_last).
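
    A minimal sketch of the forward and inverse maps with Breeze; the function names are illustrative, not this class's actual methods:

      import breeze.linalg.{DenseVector, sum}
      import breeze.numerics.{exp, log}

      // Simplex weights -> unconstrained space: w_i => log(w_i / w_last).
      def toUnconstrained(w: DenseVector[Double]): DenseVector[Double] =
        log(w / w(-1))

      // Unconstrained space -> simplex weights via softmax (inverts the map above).
      def toWeights(x: DenseVector[Double]): DenseVector[Double] = {
        val e = exp(x)
        e / sum(e)
      }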

  11. trait WeightsTransformation extends Serializable

    Contains the basic functionality for any mapping used to fit the weights of a mixture model.

  12. class ADAM extends Optimizer

    Implementation of ADAM. See Kingma, Diederik P.; Ba, Jimmy, "Adam: A Method for Stochastic Optimization", 2014.

    Using it is NOT recommended; you should use SGD or its accelerated versions instead.

    Annotations: @deprecated
    Deprecated: (Since version gradientgmm >= 1.4) ADAM can be unstable for GMM problems and should not be used.

  13. class ADAMAX extends Optimizer

    Implementation of ADAMAX. See Kingma, Diederik P.; Ba, Jimmy, "Adam: A Method for Stochastic Optimization", 2014.

    Using it is NOT recommended; you should use SGD or its accelerated versions instead.

    Annotations: @deprecated
    Deprecated: (Since version gradientgmm >= 1.4) ADAMAX can be unstable for GMM problems and should not be used.

  14. class RatioWeightTransformation extends WeightsTransformation

    Ratio mapping used to fit the weights vector.

    The precise mapping is w_i => w_i/w_last.

    Using it is NOT recommended; use the default Softmax transformation instead.

    Annotations: @deprecated
    Deprecated: (Since version gradientgmm >= 1.4) This is an experimental, numerically unstable transformation and should not be used.
