Posts by Scott Sievert (University of Wisconsin–Madison)

Comparing Dask-ML and Ray Tune's Model Selection Algorithms

Hyperparameter optimization is the process of finding good values for model parameters that can’t be learned from the data, such as learning rates or regularization strengths. This process is often time- and resource-intensive, especially in the context of deep learning. A good description of this process can be found in Scikit-learn’s “Tuning the hyper-parameters of an estimator,” and the issues that arise are concisely summarized in Dask-ML’s “Hyper Parameter Searches” documentation.
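As a concrete illustration (not taken from the post itself), a basic hyperparameter search with plain scikit-learn might look like the sketch below; the parameter grid and model are hypothetical choices for demonstration:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# A toy dataset standing in for real training data.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# alpha (regularization strength) is a hyperparameter: it is chosen
# before training, unlike the model weights, which are learned from data.
search = GridSearchCV(
    SGDClassifier(random_state=0),
    param_grid={"alpha": [1e-4, 1e-3, 1e-2]},
    cv=3,
)
search.fit(X, y)
print(search.best_params_)
```

Tools like Dask-ML and Ray Tune address the cost of this brute-force approach, e.g. by running trials in parallel or stopping unpromising ones early.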

Read more ...