Trait

com.github.gradientgmm.optim

Optimizer

trait Optimizer extends Serializable

Contains the base hyperparameters with their respective getters and setters.

Linear Supertypes
Serializable, Serializable, AnyRef, Any
Known Subclasses

Abstract Value Members

  1. abstract def direction[A](grad: A, utils: AcceleratedGradientUtils[A])(ops: ParameterOperations[A]): A

Compute the ascent direction.

grad

Current batch gradient

utils

Wrapper for accelerated gradient ascent utilities

ops

Definitions of the algebraic operations for the appropriate data structure, e.g. vector or matrix

returns

the ascent direction
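A minimal sketch of how a concrete optimizer might implement direction, assuming plain gradient ascent (the ParameterOperations and AcceleratedGradientUtils stand-ins below are simplified placeholders, not the library's actual definitions):

```scala
// Simplified placeholders for illustration only; the real traits in
// com.github.gradientgmm.optim carry more structure.
trait ParameterOperations[A] {
  def sum(a: A, b: A): A
  def rescale(a: A, c: Double): A
}
class AcceleratedGradientUtils[A] // unused by plain gradient ascent

// Plain gradient ascent: the ascent direction is just the gradient.
class PlainGradientAscent {
  def direction[A](grad: A, utils: AcceleratedGradientUtils[A])(ops: ParameterOperations[A]): A =
    grad
}

val doubleOps = new ParameterOperations[Double] {
  def sum(a: Double, b: Double): Double = a + b
  def rescale(a: Double, c: Double): Double = a * c
}
val d = new PlainGradientAscent().direction(1.5, new AcceleratedGradientUtils[Double])(doubleOps)
// for plain gradient ascent the direction equals the gradient
```

Accelerated variants (momentum, Nesterov, ADAM) would instead combine grad with the state kept in utils through the operations in ops.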

Concrete Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  5. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  6. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  7. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  8. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  9. def fromSimplex(weights: DenseVector[Double]): DenseVector[Double]

Use the fromSimplex method from WeightsTransformation

weights

mixture weights

  10. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  11. def getLearningRate: Double

  12. def getMinLearningRate: Double

  13. def getShrinkageRate: Double

  14. def getUpdate[A](current: A, grad: A, utils: AcceleratedGradientUtils[A])(implicit ops: ParameterOperations[A]): A

Compute the full update for the model's parameters.

Usually this has the form X_t + alpha * direction(X_t), but it differs for some algorithms, e.g. Nesterov's gradient ascent.

current

Current parameter values

grad

Current batch gradient

utils

Wrapper for accelerated gradient ascent utilities

ops

Definitions of the algebraic operations for the appropriate data structure, e.g. vector or matrix

    returns

    updated parameter values
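The usual form above can be illustrated with scalars (a simplified numeric sketch; the actual method operates on the library's parameter types through ParameterOperations):

```scala
// One gradient-ascent step of the usual form X_{t+1} = X_t + alpha * direction(X_t)
val alpha = 0.1      // learning rate
val current = 2.0    // X_t (scalar stand-in for a parameter)
val grad = 3.0       // batch gradient at X_t
val updated = current + alpha * grad  // plain ascent: direction == grad
// updated is approximately 2.3
```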

  15. def halveStepSizeEvery(m: Int): Optimizer.this.type

Alternative method to set the step size's shrinkage rate: it will be calculated automatically so that the step size is halved every m iterations.

m

positive integer

    returns

    this
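Given the decay schedule shrinkageRate^t * learningRate described for shrinkageRate, halving every m iterations corresponds to solving s^m = 0.5 (a sketch of the arithmetic, assumed to match what this method computes):

```scala
// Shrinkage rate that halves the step size every m iterations: s^m = 0.5,
// hence s = 0.5^(1/m).
val m = 10
val s = math.pow(0.5, 1.0 / m)
// After m iterations the accumulated shrink factor is s^m, i.e. one half
val factorAfterM = math.pow(s, m)
```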

  16. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  17. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  18. var learningRate: Double

Step size

    Attributes
    protected
  19. var minLearningRate: Double

Minimum allowed learning rate. Once this lower bound is reached, the learning rate will not shrink any further.

    Attributes
    protected
  20. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  21. final def notify(): Unit

    Definition Classes
    AnyRef
  22. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  23. def setLearningRate(learningRate: Double): Optimizer.this.type

  24. def setMinLearningRate(m: Double): Optimizer.this.type

  25. def setShrinkageRate(s: Double): Optimizer.this.type

  26. def setWeightsOptimizer(wo: WeightsTransformation): Optimizer.this.type

  27. var shrinkageRate: Double

Rate at which the learning rate decreases as the number of iterations grows. After t iterations the learning rate will be shrinkageRate^t * learningRate.

    Attributes
    protected
  28. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  29. def toSimplex(weights: DenseVector[Double]): DenseVector[Double]

Use the toSimplex method from WeightsTransformation

    returns

    valid mixture weight vector
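The concrete mapping depends on the WeightsTransformation in use; as an illustrative example only, a softmax transformation sends any real vector to a valid mixture-weight vector (positive entries summing to one):

```scala
// Softmax: one possible simplex mapping (illustrative; the library's
// WeightsTransformation implementations may use a different mapping).
def softmaxToSimplex(x: Array[Double]): Array[Double] = {
  val shift = x.max                       // subtract the max for numerical stability
  val exps = x.map(v => math.exp(v - shift))
  val z = exps.sum
  exps.map(_ / z)
}

val weights = softmaxToSimplex(Array(0.0, 1.0, 2.0))
// weights are positive and sum to 1, i.e. a valid mixture weight vector
```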

  30. def toString(): String

    Definition Classes
    AnyRef → Any
  31. def updateLearningRate: Unit

Shrink learningRate by a factor of shrinkageRate.
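Combining this with shrinkageRate and minLearningRate, the assumed behavior can be sketched as follows (simplified; the trait's actual state handling may differ):

```scala
// Assumed behavior: multiply the step size by shrinkageRate on each call,
// but never let it fall below minLearningRate.
var learningRate = 1.0
val shrinkageRate = 0.5
val minLearningRate = 0.1

def updateLearningRate(): Unit = {
  learningRate = math.max(minLearningRate, learningRate * shrinkageRate)
}

(1 to 5).foreach(_ => updateLearningRate())
// learningRate: 1.0 -> 0.5 -> 0.25 -> 0.125 -> 0.1 -> 0.1 (clamped at the minimum)
```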

  32. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  33. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  34. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
