Class com.github.gradientgmm.optim.ADAM

class ADAM extends Optimizer

Implementation of the ADAM optimizer. See Kingma, Diederik P.; Ba, Jimmy (2014), Adam: A Method for Stochastic Optimization.

Using it is NOT recommended; you should use SGD or its accelerated versions instead.

Annotations
@deprecated
Deprecated

(Since version gradientgmm >= 1.4) ADAM can be unstable for GMM problems and should not be used
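
Despite the deprecation the class remains constructible. A minimal sketch using only the members documented on this page; the hyperparameter values are illustrative, not library defaults:

    import com.github.gradientgmm.optim.ADAM

    val opt = new ADAM()
      .setLearningRate(0.9)
      .setBeta1(0.9)
      .setBeta2(0.999)
    opt.setEps(1e-8) // setEps returns Unit, so it cannot be chained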

Linear Supertypes
Optimizer, Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new ADAM()

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. var beta1: Double

    Exponential smoothing parameter for the first moment estimator

  6. var beta2: Double

    Exponential smoothing parameter for the second non-central moment estimator

  7. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  8. def direction[A](grad: A, utils: AcceleratedGradientUtils[A])(ops: ParameterOperations[A]): A

    Compute the ascent direction. A scalar sketch follows this entry.

    grad

    Current batch gradient

    utils

    Wrapper for accelerated gradient ascent utilities

    ops

    Definitions of algebraic operations for the appropriate data structure, e.g. vector or matrix

    returns

    Ascent direction

    Definition Classes
    ADAM → Optimizer
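
    For reference, a scalar sketch of the standard ADAM estimator this method implements (per Kingma & Ba, 2014). The library applies the same arithmetic to vectors and matrices through ParameterOperations; variable names and values here are illustrative:

      val (beta1, beta2, eps) = (0.9, 0.999, 1e-8)
      var (m, v, t) = (0.0, 0.0, 0.0)            // moment estimates and iteration counter
      val grad = 0.5                             // current batch gradient
      t += 1
      m = beta1 * m + (1 - beta1) * grad         // first-moment estimate
      v = beta2 * v + (1 - beta2) * grad * grad  // second (non-central) moment estimate
      val mHat = m / (1 - math.pow(beta1, t))    // bias-corrected estimates
      val vHat = v / (1 - math.pow(beta2, t))
      val direction = mHat / (math.sqrt(vHat) + eps)  // ≈ 1.0 on the first step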
  9. var eps: Double

    Offset term to avoid division by zero in the main direction calculations

  10. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  11. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  12. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  13. def fromSimplex(weights: DenseVector[Double]): DenseVector[Double]

    Use the fromSimplex method from WeightsTransformation

    weights

    mixture weights

    Definition Classes
    Optimizer
  14. def getBeta1: Double

  15. def getBeta2: Double

  16. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  17. def getEps: Double

  18. def getLearningRate: Double

    Definition Classes
    Optimizer
  19. def getMinLearningRate: Double

    Definition Classes
    Optimizer
  20. def getShrinkageRate: Double

    Definition Classes
    Optimizer
  21. def getUpdate[A](current: A, grad: A, utils: AcceleratedGradientUtils[A])(implicit ops: ParameterOperations[A]): A

    Compute full updates for the model's parameters. Usually this has the form X_t + alpha * direction(X_t), but it differs for some algorithms, e.g. Nesterov's gradient ascent. A scalar illustration follows this entry.

    current

    Current parameter values

    grad

    Current batch gradient

    utils

    Wrapper for accelerated gradient ascent utilities

    ops

    Definitions of algebraic operations for the appropriate data structure, e.g. vector or matrix

    returns

    Updated parameter values

    Definition Classes
    Optimizer
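
    A scalar illustration of the default additive form above (names and values are illustrative stand-ins for the generic parameter type A):

      val (current, learningRate, direction) = (1.0, 0.5, 2.0)
      val updated = current + learningRate * direction  // X_t + alpha * direction(X_t) = 2.0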
  22. def halveStepSizeEvery(m: Int): ADAM.this.type

    Alternative method to set the step size's shrinkage rate: it is calculated automatically so that the step size shrinks by half every m iterations. See the sketch following this entry.

    m

    positive integer

    returns

    this

    Definition Classes
    Optimizer
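
    The documented behaviour implies the closed form shrinkageRate^m = 1/2; the formula below is inferred from that description, not taken from the source:

      val m = 100                                 // halve the step size every 100 iterations
      val shrinkageRate = math.pow(0.5, 1.0 / m)  // ≈ 0.99309
      println(math.pow(shrinkageRate, m))         // check: 0.5 after m iterations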
  23. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  24. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  25. var learningRate: Double

    Step size

    Attributes
    protected
    Definition Classes
    Optimizer
  26. var minLearningRate: Double

    Minimum allowed learning rate. Once this lower bound is reached the learning rate will not shrink any further.

    Attributes
    protected
    Definition Classes
    Optimizer
  27. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  28. final def notify(): Unit

    Definition Classes
    AnyRef
  29. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  30. def reset: Unit

    Reset the iteration counter

  31. def setBeta1(beta1: Double): ADAM.this.type

  32. def setBeta2(beta2: Double): ADAM.this.type

  33. def setEps(x: Double): Unit
  34. def setLearningRate(learningRate: Double): ADAM.this.type

    Definition Classes
    Optimizer
  35. def setMinLearningRate(m: Double): ADAM.this.type

    Definition Classes
    Optimizer
  36. def setShrinkageRate(s: Double): ADAM.this.type

    Definition Classes
    Optimizer
  37. def setWeightsOptimizer(wo: WeightsTransformation): ADAM.this.type

    Definition Classes
    Optimizer
  38. var shrinkageRate: Double

    Rate at which the learning rate is decreased as the number of iterations grows. After t iterations the learning rate will be shrinkageRate^t * learningRate. A numerical example follows this entry.

    Attributes
    protected
    Definition Classes
    Optimizer
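
    A numerical example of the stated schedule (values illustrative; in practice the decay stops once minLearningRate is reached):

      val (learningRate, shrinkageRate) = (0.9, 0.99)
      val t = 100
      val lrAtT = math.pow(shrinkageRate, t) * learningRate  // ≈ 0.329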
  39. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  40. var t: Double

    Iteration counter

  41. def toSimplex(weights: DenseVector[Double]): DenseVector[Double]

    Use the toSimplex method from WeightsTransformation. An illustrative sketch follows this entry.

    returns

    valid mixture weight vector

    Definition Classes
    Optimizer
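
    As an illustration of what such a transformation does, here is a softmax-style map from unconstrained values to a valid mixture weight vector. This is only an example of the kind of map a WeightsTransformation can provide, not necessarily the library's implementation:

      import breeze.linalg.{DenseVector, sum}
      import breeze.numerics.exp

      // Softmax: any real vector -> positive weights summing to one (illustrative).
      def toSimplexSketch(x: DenseVector[Double]): DenseVector[Double] = {
        val e = exp(x)
        e / sum(e)
      }

      toSimplexSketch(DenseVector(0.0, 1.0, 2.0))  // ≈ DenseVector(0.090, 0.245, 0.665)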
  42. def toString(): String

    Definition Classes
    AnyRef → Any
  43. def updateLearningRate: Unit

    Shrink learningRate by a factor of shrinkageRate

    Definition Classes
    Optimizer
  44. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  45. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  46. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
