FAQ

Advanced Questions


Yes, you can export data using our REST API or Python API.


Comet applies limits to the amount of data each user can submit. These limits exist to ensure that our system can support millions of models. To read more about rate limiting, please see our docs.

Comet allows you to compute a hash of your dataset and log it to an experiment using Experiment.log_dataset_hash(). You can log other dataset information with Experiment.log_dataset_info().
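To illustrate the idea, a dataset fingerprint can be computed along these lines with the standard library. The dataset_hash helper below is hypothetical and only sketches the concept; Experiment.log_dataset_hash() does the equivalent work for you.

```python
import hashlib

def dataset_hash(rows, num_rows=1000):
    # Hash (up to) the first num_rows rows: enough to fingerprint a
    # dataset cheaply without reading it in full.
    md5 = hashlib.md5()
    for row in rows[:num_rows]:
        md5.update(str(row).encode("utf-8"))
    return md5.hexdigest()

train = [[0.1, 0.2], [0.3, 0.4]]
print(dataset_hash(train))  # identical data always yields the same hash
```

Logging a hash like this with each experiment lets you verify later that two runs trained on the same data.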

We recommend storing configuration variables, like your Comet API key, in a .comet.config file. Read more about Comet configuration here.
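For example, a minimal .comet.config might look like this (the api_key value is a placeholder):

```ini
; .comet.config — usually placed in your home directory or project root
[comet]
api_key = YOUR_API_KEY
```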

To resume a crashed experiment, you can use Comet’s ExistingExperiment objects to pick up where the experiment left off. You can use the Optimizer ID to recover an optimization sweep. Read more about continuing from a crashed or paused optimizer here.
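As a sketch (it assumes comet_ml is installed, an API key is configured, and you know the experiment key of the crashed run), resuming might look like:

```python
from comet_ml import ExistingExperiment

# previous_experiment takes the experiment key of the run to resume;
# the key below is a placeholder.
experiment = ExistingExperiment(previous_experiment="YOUR_EXPERIMENT_KEY")
experiment.log_metric("accuracy", 0.95, step=42)  # logging continues on the old run
experiment.end()
```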

Automatic logging can be turned off either by setting the environment variable COMET_DISABLE_AUTO_LOGGING to 1, or by passing the argument auto_param_logging=False when you create an experiment object. Read more about configuring automatic logging in our docs.
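Both approaches can be sketched as follows. The environment variable must be set before comet_ml is imported, and the Experiment line is illustrative (it assumes comet_ml is installed):

```python
import os

# Option 1: disable all automatic logging globally. Set this before
# comet_ml is imported, e.g. at the very top of your script.
os.environ["COMET_DISABLE_AUTO_LOGGING"] = "1"

# Option 2: disable only automatic hyperparameter logging for one run
# (requires comet_ml to be installed):
# from comet_ml import Experiment
# experiment = Experiment(auto_param_logging=False)

print(os.environ["COMET_DISABLE_AUTO_LOGGING"])
```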

There are several good ways to use Comet in multi-step, multi-process pipelines. One is to use Comet’s ExistingExperiment class to save and load an experiment at each step of a pipeline. Another is to use Comet’s context management to track different phases of experimentation. Consult our docs to read more.

Yes. You can log data samples to Comet using the Experiment.log_asset(), Experiment.log_image(), or Experiment.log_audio() methods.

The Individual Tier provides unlimited public and private projects and experiments, and the ability to run up to 50 concurrent experiments. Comet’s paid tiers provide access to greater bandwidth for concurrent experimentation, collaboration features, hyperparameter optimization and meta machine learning features, full REST API access, and more.

Yes! Comet has a full REST API you can use to set up instrumentation between your machine learning library and Comet.

Yes, Comet offers a state-of-the-art hyperparameter optimization service as part of our paid plans.

No. The Comet SDK provides offline experiments that let you log your experiment data to local disk until you are ready to upload it to Comet. You do, however, need an internet connection to use Comet’s optimizer service.

Comet has full support for Jupyter notebooks (we have a core contributor to Jupyter on our team). Even better, Comet saves execution-ordered code from experiments run using Jupyter notebooks, so collaborators can easily reproduce any experiment.

Yes, Comet Enterprise can be hosted on-premises or in a VPC. We offer multiple installation flavors, supporting anything from a single machine to distributed microservices. Comet Enterprise supports all the major cloud providers. If you’re interested in learning more about on-premises deployment, please schedule a demo with us.

Yes! Comet is completely agnostic to your choice of IDE. We have users using anything from vim and emacs to Spyder and PyCharm.

Not at all. Comet is useful for any experimentation using data and code. If you’re doing machine learning or other statistical analysis, Comet can help.

Comet provides deeper reporting and more features than TensorBoard. Comet also lets you view and manage all of your experiments from a single location, whereas TensorBoard focuses on single-experiment views and runs locally on your machine. Finally, TensorBoard does not scale to large numbers of runs, whereas Comet supports over a million experiments.

Technical Questions


Project admins can add collaborators by navigating to Settings → Collaborators from the project page. The number of collaborators per project is unlimited.

Comet automatically integrates with your project’s git repository. Comet keeps track of the working branch and last commit, and if there are any uncommitted changes when an experiment is run, Comet automatically computes a git patch so anyone can reproduce your experiment with the exact same code that was used to run it.

Comet doesn’t store any of your training data unless you explicitly log samples to your Comet experiments. If you are using git, Comet captures the state of the underlying git repository to allow experiments to be reproduced. Comet also captures system information about your computer and package dependencies for each experiment.

No, Comet does not provide training servers. Comet works with any kind of infrastructure, whether it’s your laptop or a full-fledged cloud orchestration system.

Comet supports Python, Java, and JavaScript, with SDKs for R and more languages currently under development.

General Questions


Comet works with most machine learning libraries and is easy to set up with custom libraries as well. View our quick start guide to see how to use Comet with PyTorch, Keras, TensorFlow, fastai, and more.

Follow our Release Notes to learn about new and upcoming features in Comet. Once you register for Comet, you’ll receive a monthly newsletter that includes product updates.

Reach out to us on our Slack channel or on our GitHub page and let us know what you’d like to see added to Comet!

You can report a bug by opening an issue on our GitHub page, or by reaching out to us on our public Slack channel.

Comet is free for anyone in the academic community, including students, researchers, and professors. You can apply for an academic license here.

Comet allows you to report your code, hyperparameters, metrics, dependencies, system metrics, dataset samples, models, key-value pairs, and anything else that relates to a machine learning experiment.

To start using Comet, users only need to do two things: Install the Comet SDK on your computer (pip install comet_ml), and create an experiment object (experiment = Experiment()) in your training script. That’s it! Visit our getting started page for more information.
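Putting those two steps together, a minimal script might look like this (project_name and the logged values are placeholders; it assumes comet_ml is installed and your API key is configured):

```python
from comet_ml import Experiment

experiment = Experiment(project_name="my-first-project")

# Your normal training code goes here; log whatever you want to track.
experiment.log_parameter("learning_rate", 0.001)
experiment.log_metric("accuracy", 0.92)

experiment.end()
```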

Comet is a meta machine learning platform designed to help AI practitioners and teams build reliable machine learning models for real-world applications by streamlining the machine learning model lifecycle. By leveraging Comet, users can track, compare, explain and reproduce their machine learning experiments. Backed by thousands of users and multiple Fortune 100 companies, Comet provides insights and data to build better, more accurate AI models while improving productivity, collaboration and visibility across teams.
