Class com.github.gradientgmm.optim.MomentumGradientAscent

class MomentumGradientAscent extends Optimizer

Implementation of gradient ascent with momentum.

As formulated (for the descent case) in Goh, "Why Momentum Really Works", Distill, 2017. http://doi.org/10.23915/distill.00006
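
A hypothetical usage sketch based only on the setters documented in the member list below (parameter values are illustrative, not recommended defaults):

  import com.github.gradientgmm.optim.MomentumGradientAscent

  // Configure a momentum optimizer; every setter returns this.type, so calls can be chained.
  val optim = new MomentumGradientAscent()
    .setLearningRate(0.9)      // initial step size
    .setBeta(0.5)              // inertia (momentum) parameter
    .setShrinkageRate(0.95)    // per-iteration step size decay
    .setMinLearningRate(1e-2)  // lower bound on the step size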

Linear Supertypes
Optimizer, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new MomentumGradientAscent()

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. var beta: Double

    Inertia parameter

  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. def direction[A](grad: A, utils: AcceleratedGradientUtils[A])(ops: ParameterOperations[A]): A

    Compute the ascent direction.

    grad

    Current batch gradient

    utils

    Wrapper for accelerated gradient ascent utilities

    ops

    Definitions of algebraic operations for the appropriate data structure, e.g. vector or matrix.

    returns

    the ascent direction

    Definition Classes
    MomentumGradientAscent → Optimizer
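
    As an illustration only, the sketch below shows the standard momentum recursion from the Distill article, written for plain arrays rather than the library's AcceleratedGradientUtils/ParameterOperations abstractions; the names and the exact scaling are assumptions, not the actual implementation.

      // Sketch: z_{t+1} = beta * z_t + grad, where z is the running momentum term.
      def momentumDirection(grad: Array[Double],
                            prevMomentum: Array[Double],
                            beta: Double): Array[Double] =
        prevMomentum.zip(grad).map { case (z, g) => beta * z + g }
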
  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. def fromSimplex(weights: DenseVector[Double]): DenseVector[Double]

    Use the fromSimplex method from WeightsTransformation

    weights

    mixture weights

    Definition Classes
    Optimizer
  12. def getBeta: Double

  13. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  14. def getLearningRate: Double

    Definition Classes
    Optimizer
  15. def getMinLearningRate: Double

    Definition Classes
    Optimizer
  16. def getShrinkageRate: Double

    Definition Classes
    Optimizer
  17. def getUpdate[A](current: A, grad: A, utils: AcceleratedGradientUtils[A])(implicit ops: ParameterOperations[A]): A

    Compute full updates for the model's parameters. Usually this has the form X_t + alpha * direction(X_t), but it differs for some algorithms, e.g. Nesterov's gradient ascent.

    current

    Current parameter values

    grad

    Current batch gradient

    utils

    Wrapper for accelerated gradient ascent utilities

    ops

    Definitions of algebraic operations for the appropriate data structure, e.g. vector or matrix.

    returns

    updated parameter values

    Definition Classes
    Optimizer
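
    A minimal sketch of the generic form described above, X_{t+1} = X_t + alpha * direction(X_t), again written for plain arrays instead of ParameterOperations (illustrative only; alpha stands for the current learning rate):

      def plainUpdate(current: Array[Double],
                      dir: Array[Double],
                      alpha: Double): Array[Double] =
        current.zip(dir).map { case (x, d) => x + alpha * d }
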
  18. def halveStepSizeEvery(m: Int): MomentumGradientAscent.this.type

    Alternative method to set the step size's shrinkage rate: the rate is calculated automatically so that the step size shrinks by half every m iterations.

    m

    positive integer

    returns

    this

    Definition Classes
    Optimizer
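
    Since the step size after t iterations is shrinkageRate^t * learningRate (see shrinkageRate below), halving it every m iterations corresponds to shrinkageRate = 0.5^(1/m); for example, m = 10 gives a shrinkage rate of roughly 0.933.
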
  19. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  20. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  21. var learningRate: Double

    Step size

    Attributes
    protected
    Definition Classes
    Optimizer
  22. var minLearningRate: Double

    Minimum allowed learning rate. Once this lower bound is reached, the learning rate will not shrink any further.

    Attributes
    protected
    Definition Classes
    Optimizer
  23. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  24. final def notify(): Unit

    Definition Classes
    AnyRef
  25. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  26. def setBeta(beta: Double): MomentumGradientAscent.this.type

  27. def setLearningRate(learningRate: Double): MomentumGradientAscent.this.type

    Definition Classes
    Optimizer
  28. def setMinLearningRate(m: Double): MomentumGradientAscent.this.type

    Definition Classes
    Optimizer
  29. def setShrinkageRate(s: Double): MomentumGradientAscent.this.type

    Definition Classes
    Optimizer
  30. def setWeightsOptimizer(wo: WeightsTransformation): MomentumGradientAscent.this.type

    Definition Classes
    Optimizer
  31. var shrinkageRate: Double

    Rate at which the learning rate is decreased as the number of iterations grows. After t iterations the learning rate will be shrinkageRate^t * learningRate.

    Attributes
    protected
    Definition Classes
    Optimizer
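
    For example, with learningRate = 0.9 and shrinkageRate = 0.95, the effective step size after 10 iterations is 0.95^10 * 0.9 ≈ 0.54 (assuming minLearningRate has not yet been reached).
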
  32. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  33. def toSimplex(weights: DenseVector[Double]): DenseVector[Double]

    Use the toSimplex method from WeightsTransformation

    returns

    valid mixture weight vector

    Definition Classes
    Optimizer
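
    Both toSimplex and fromSimplex delegate to the configured WeightsTransformation, which maps mixture weights (non-negative, summing to one) to and from an unconstrained space. As an assumed example of such a mapping, not necessarily the library's default, a softmax-style transformation could look like this:

      import breeze.linalg.{DenseVector, sum}
      import breeze.numerics.{exp, log}

      // Map an unconstrained vector to a valid mixture-weight vector.
      def toSimplexExample(x: DenseVector[Double]): DenseVector[Double] = {
        val e = exp(x)
        e / sum(e)
      }

      // Map weights back to unconstrained space (inverse up to an additive constant).
      def fromSimplexExample(w: DenseVector[Double]): DenseVector[Double] =
        log(w)
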
  34. def toString(): String

    Definition Classes
    AnyRef → Any
  35. def updateLearningRate: Unit

    Shrink learningRate by a factor of shrinkageRate

    Definition Classes
    Optimizer
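
    A hedged sketch of the shrinkage step, combining this method's description with the minLearningRate lower bound documented above (the actual method body may differ):

      def updatedLearningRate(learningRate: Double,
                              shrinkageRate: Double,
                              minLearningRate: Double): Double =
        math.max(learningRate * shrinkageRate, minLearningRate)
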
  36. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  37. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  38. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
