Cheat Sheet: Supercharge Your Machine Learning Experiment Management

Comet.ml allows you to automatically track your machine learning code, experiments, hyperparameters, and results, giving you reproducibility, transparency, and faster iteration cycles.

We built it after seeing many data scientists grappling with disjointed scripts, notebooks (both Jupyter and paper ones), and complex file structures just to remember what they ran previously. Comet.ml has native support for popular machine learning frameworks like TensorFlow, Keras, PyTorch, MXNet, and more.

We’ve created a cheat sheet for users who want a handy one-page reference, or for those who need an extra push to get started.

Download it here or check out the highlights below!

Experiment Method Highlights

The core class of Comet.ml is the Experiment. An Experiment will automatically log script output (stdout/stderr), code, and command line arguments for any script, and for the supported libraries it will also log hyperparameters, metrics, and model configuration.

Note: these methods are also available for existing experiments.

set_name: set a name for your experiment. This name will appear in the Comet experiment table and in the project visualizations legend so identifying specific versions of your model is easy.

log_dataset_hash: computes a hash of whatever you give it, whether a filename or the entire contents of a file. Comparing the resulting hash against another one is a quick way to check whether two training runs used the same training data.
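The exact hashing scheme inside log_dataset_hash is internal to Comet, but the idea is easy to sketch with the standard library: identical data always produces an identical fingerprint, so mismatched hashes flag mismatched training data. The CSV snippets below are invented for illustration.

```python
import hashlib

def dataset_hash(data: bytes) -> str:
    """Return a hex digest that changes whenever the underlying data changes."""
    return hashlib.md5(data).hexdigest()

run_a = dataset_hash(b"feature1,feature2,label\n1,2,0\n")
run_b = dataset_hash(b"feature1,feature2,label\n1,2,0\n")  # same data as run_a
run_c = dataset_hash(b"feature1,feature2,label\n9,9,1\n")  # different data

same_data = run_a == run_b      # True: identical bytes, identical hash
changed_data = run_a != run_c   # True: any change to the data changes the hash
```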

log_asset: upload artifacts from your training run, whether they’re model weights or log files.

log_html: reports any HTML blob to the HTML tab on Comet.ml, where it is rendered as an iframe. For example, you can convert .ipynb files to HTML in order to log notebooks.
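Since log_html takes an arbitrary HTML string, you can assemble custom reports yourself. A minimal sketch, with invented markup and metric values (`exp` stands for a live Experiment object):

```python
# Build a small HTML report to send to the HTML tab.
rows = [("accuracy", 0.92), ("f1", 0.88)]  # hypothetical results
report = "<h2>Validation results</h2><table>"
for name, value in rows:
    report += f"<tr><td>{name}</td><td>{value}</td></tr>"
report += "</table>"

# With a live experiment, you would then call:
# exp.log_html(report)
# For notebooks, convert first: jupyter nbconvert --to html notebook.ipynb
```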

log_metric: logs a general metric (accuracy, F1 score, etc.) as well as custom metrics. Comet automatically logs certain metrics for machine learning frameworks like TensorFlow and Keras, so you do not need to report those manually.
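Custom metrics are just numbers you compute yourself before handing them over. A minimal sketch computing F1 by hand, with invented confusion-matrix counts (`exp` stands for a live Experiment object):

```python
# Hypothetical confusion-matrix counts from a validation run.
tp, fp, fn = 8, 2, 4

precision = tp / (tp + fp)   # 0.8
recall = tp / (tp + fn)      # 2/3
f1 = 2 * precision * recall / (precision + recall)

# With a live experiment, the custom metric is logged like any other:
# exp.log_metric("f1", f1)
```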

Your logged metrics will be rendered automatically as visualizations, such as line charts, in the `Charts` tab.

log_parameter: report hyperparameters such as learning rate, batch size, optimizers and more.

log_figure: upload figures and plots from matplotlib.

Logged figures and images appear in the `Graphics` tab with options to search, filter, and sort for easy discovery.

Once you start logging experiments, you can conduct meta-analysis across multiple experiments with project-level visualizations, code diffs, and more. This list only scratches the surface of the available methods and customization, but it is a great way to get started with logging experiments to Comet.ml!

You can see examples of these methods in action for libraries like PyTorch, Keras, etc. in our Comet Examples repo.

Comet.ml helps data scientists and machine learning engineers automatically track their datasets, code, experiments, and results, creating efficiency, visibility, and reproducibility.

Learn more & see a demo at


It’s easy to get started

And it's free. Two things everyone loves.