Minibatch size for each iteration in the ascent procedure. If None, it performs full-batch optimization
convergence flag for local optimisation
Error tolerance in log-likelihood for the stopping criterion
This prevents the seed from repeating every time step() is called, which would cause the same samples to be taken
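The seed-advancing behaviour described above can be sketched as follows; the class and method names here are illustrative assumptions, not the repository's actual API:

```scala
// Hypothetical sketch: offset the base seed by an iteration counter so
// that each call to step() draws a different mini-batch. Without the
// offset, every step would reuse the base seed and sample identically.
class MiniBatchSeed(baseSeed: Long) {
  private var iteration: Long = 0L

  def nextSeed(): Long = {
    val s = baseSeed + iteration
    iteration += 1
    s
  }
}
```

In a Spark setting the returned value would be passed as the `seed` argument of `RDD.sample(withReplacement, fraction, seed)`.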
number of components
Current loss value
Maximum number of iterations allowed
Optimization object
predict cluster membership
Streaming data
predict cluster membership
vector membership label
predict cluster membership
RDD with the points' labels
predict soft cluster membership
Streaming data
predict soft cluster membership
Array giving the membership probabilities for each cluster
predict soft cluster membership
RDD with arrays giving the membership probabilities for each cluster
Optional regularization term
random seed for mini-batch sampling
Perform local sequential gradient descent optimisation
Training data as an Array of Breeze vectors
Perform a gradient-based optimization step
Data to fit the model
Update model parameters using streaming data
Perform a gradient-based optimization step
Returns a Spark GaussianMixtureModel initialized with the current parameters
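A minimal sketch of such a conversion, assuming the model exposes its fitted weights and component Gaussians; the method name `toSparkGMM` and the accessor shapes are assumptions:

```scala
import org.apache.spark.mllib.clustering.GaussianMixtureModel
import org.apache.spark.mllib.stat.distribution.MultivariateGaussian

// Wrap the fitted parameters in MLlib's standard GaussianMixtureModel,
// whose public constructor takes the mixture weights and one
// MultivariateGaussian (mean vector + covariance matrix) per component.
def toSparkGMM(weights: Array[Double],
               gaussians: Array[MultivariateGaussian]): GaussianMixtureModel =
  new GaussianMixtureModel(weights, gaussians)
```

Exporting to the standard MLlib class lets downstream code reuse its `predict`, `predictSoft`, and model persistence utilities.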
Linear Algebra operations necessary for computing updates for the parameters
This avoids duplicating code for the Gaussian and weight updates in the optimization algorithms' classes
Optimizable gradient-based Gaussian Mixture Model. See "An Alternative to EM for Gaussian Mixture Models: Batch and Stochastic Riemannian Optimization"
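Pulling the pieces above together, a usage sketch might look like the following; every class and method name here is an assumption inferred from the docstrings, not a confirmed API:

```scala
// Hypothetical end-to-end usage of the gradient-based GMM described above.
// `points` is assumed to be an RDD of Breeze dense vectors.
val gmm = GradientGaussianMixture(k = 3)  // k: number of components

gmm.step(points)                  // gradient-based optimization step on the data

val labels = gmm.predict(points)  // RDD with the points' labels
val probs  = gmm.predictSoft(points)  // RDD with per-cluster membership probabilities

val sparkModel = gmm.toSparkGMM   // export as a standard Spark GaussianMixtureModel
```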