Experiment

An Experiment is a unit of measurable research that defines a single run with its data, parameters, code, and results.

Creating an Experiment object in your code will report a new experiment to your Comet.ml project. Your Experiment will automatically track and collect many things and will also allow you to manually report anything.

You can create multiple Experiment objects in one script (for example, when looping over multiple hyperparameters).
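A minimal sketch of such a loop; `run_sweep` and `make_experiment` are illustrative names (in practice `make_experiment` would construct `comet_ml.Experiment`):

```python
# Sketch: create one Experiment per hyperparameter value in a sweep.
# `make_experiment` stands in for a callable such as comet_ml.Experiment;
# the actual training step is elided.

def run_sweep(make_experiment, learning_rates):
    """Run one experiment per learning rate and return the experiment keys."""
    keys = []
    for lr in learning_rates:
        experiment = make_experiment()
        experiment.log_parameter("learning_rate", lr)
        # ... train the model and log metrics here ...
        experiment.end()
        keys.append(experiment.get_key())
    return keys
```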

Experiment.url

Get the URL of the experiment.

Example:

>>> api_experiment.url
"https://www.comet.ml/username/34637643746374637463476"

Experiment.__init__

__init__(self, api_key=None, project_name=None, workspace=None, log_code=True, log_graph=True, \
    auto_param_logging=True, auto_metric_logging=True, parse_args=True, \
    auto_output_logging="default", log_env_details=True, log_git_metadata=True, \
    log_git_patch=True, disabled=False, log_env_gpu=True, log_env_host=True, \
    display_summary=None, log_env_cpu=True)

Creates a new experiment on the Comet.ml frontend. Args:

  • api_key: Your API key obtained from comet.ml
  • project_name: Optional. Send your experiment to a specific project. Otherwise it will be sent to Uncategorized Experiments. If the project name does not already exist, Comet.ml will create a new project.
  • workspace: Optional. Attach an experiment to a project that belongs to this workspace
  • log_code: Default(True) - allows you to enable/disable code logging
  • log_graph: Default(True) - allows you to enable/disable automatic computation graph logging.
  • auto_param_logging: Default(True) - allows you to enable/disable hyper parameters logging
  • auto_metric_logging: Default(True) - allows you to enable/disable metrics logging
  • parse_args: Default(True) - allows you to enable/disable automatic parsing of CLI arguments
  • auto_output_logging: Default("default") - allows you to select which output logging mode to use. You can pass "native" which will log all output even when it originated from a C native library. You can also pass "simple" which will work only for output made by Python code. If you want to disable automatic output logging, you can pass False. The default is "default" which will detect your environment and deactivate the output logging for IPython and Jupyter environment and sets "native" in the other cases.
  • log_env_details: Default(True) - log various environment information in order to identify where the script is running
  • log_env_gpu: Default(True) - allows you to enable/disable the automatic collection of GPU details and metrics (utilization, memory usage, etc.). log_env_details must also be true.
  • log_env_cpu: Default(True) - allows you to enable/disable the automatic collection of CPU details and metrics (utilization, memory usage, etc.). log_env_details must also be true.
  • log_env_host: Default(True) - allows you to enable/disable the automatic collection of host information (IP, hostname, Python version, user, etc.). log_env_details must also be true.
  • log_git_metadata: Default(True) - allows you to enable/disable the automatic collection of git details
  • log_git_patch: Default(True) - allows you to enable/disable the automatic collection of the git patch
  • display_summary: Default(True) - control whether the summary is displayed on the console or not. If disabled, the summary notification is still sent.
  • disabled: Default(False) - allows you to disable all network communication with the Comet.ml backend. It is useful when you just need to work on your machine-learning scripts and relaunch them several times in a row.
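As an illustration of how these flags compose, the helper below builds a quieter experiment. `make_quiet_experiment` and `experiment_cls` are hypothetical names; in practice `experiment_cls` would be `comet_ml.Experiment`:

```python
# Sketch: compose constructor flags for an experiment with environment
# logging turned down. `experiment_cls` stands in for comet_ml.Experiment.

def make_quiet_experiment(experiment_cls, api_key, project_name=None):
    return experiment_cls(
        api_key=api_key,
        project_name=project_name,     # None sends it to Uncategorized Experiments
        log_env_gpu=False,             # skip GPU details and metrics
        log_env_cpu=False,             # skip CPU details and metrics
        auto_output_logging="simple",  # capture output made by Python code only
    )
```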

Experiment.add_tag

add_tag(self, tag)

Add a tag to the experiment. Tags will be shown in the dashboard. Args:

  • tag: String. A tag to add to the experiment.

Experiment.add_tags

add_tags(self, tags)

Add several tags to the experiment. Tags will be shown in the dashboard. Args:

  • tags: List. A list of tags to add to the experiment.

Experiment.clean

clean(self)

Clean the experiment loggers, useful in case you want to debug your scripts with IPDB.


Experiment.create_symlink

create_symlink(self, project_name)

Creates a symlink for this experiment in another project. The experiment will then be displayed in both the provided project and the original project.

Args:

  • project_name: String. The name of the project. The project must already exist.

Experiment.disable_mp

disable_mp(self)

Disables the auto-collection of metrics and the monkey-patching of machine learning frameworks.


Experiment.display

display(self, clear=False, wait=True, new=0, autoraise=True, tab=None)

Show the comet.ml experiment page in an IFrame in a Jupyter notebook or Jupyter lab, OR open a browser window or tab.

For Jupyter environments:

Args:

  • clear: to clear the output area, use clear=True
  • wait: to wait for the next displayed item, use wait=True (cuts down on flashing)

For non-Jupyter environments:

Args:

  • new: open a new browser window if new=1, otherwise re-use existing window/tab
  • autoraise: make the browser tab/window active

Experiment.end

end(self)

Use to indicate that the experiment is complete. Useful in Jupyter environments to signal Comet.ml that the experiment has ended.

In Jupyter, this will also upload the commands that created the experiment, from the beginning to the end of this session. See the Code tab at Comet.ml.


Experiment.get_callback

get_callback(self, framework, *args, **kwargs)

Get a callback for a particular framework.

When framework == 'keras' then return an instance of Comet.ml's Keras callback.

When framework == 'tf-keras' then return an instance of Comet.ml's TensorflowKeras callback.

Note:

The keras callbacks are added to your Keras model.fit() callbacks list automatically to report model training metrics to Comet.ml so you do not need to add them manually.


Experiment.get_keras_callback

get_keras_callback(self)

This method is deprecated. See Experiment.get_callback("keras")


Experiment.get_key

get_key(self)

Returns the experiment key, useful with the ExistingExperiment class. Returns: Experiment Key (String)


Experiment.get_metric

get_metric(self, name)

Get a metric from those logged.

Args:

  • name: str, the name of the metric to get

Experiment.get_name

get_name(self)

Get the name of the experiment, if one has been set.

Example:

>>> experiment.set_name("My Name")
>>> experiment.get_name()
'My Name'

Experiment.get_other

get_other(self, name)

Get an "other" value from those logged. See log_other.

Args:

  • name: str, the name of the other to get

Experiment.get_parameter

get_parameter(self, name)

Get a parameter from those logged.

Args:

  • name: str, the name of the parameter to get

Experiment.get_predictor

get_predictor(self)

Get the predictor.


Experiment.get_predictor_callback

get_predictor_callback(self, framework, *args, **kwargs)

Get a predictor callback for a particular framework.

Possible frameworks are:

  • "keras" - return a callback for keras predictive early stopping
  • "tf-keras" - return a callback for tensorflow.keras predictive early stopping
  • "tensorflow" - return a callback for tensorflow predictive early stopping

Experiment.get_tags

get_tags(self)

Return the tags of this experiment. Returns: set. The set of tags.


Experiment.log_asset

log_asset(self, file_data, file_name=None, overwrite=False, copy_to_tmp=True, step=None, \
    metadata=None)

Logs the Asset determined by file_data.

Args:

  • file_data: String or File-like - either the file path of the file you want to log, or a file-like asset.
  • file_name: String - Optional. A custom file name to be displayed. If not provided the filename from the file_data argument will be used.
  • overwrite: if True will overwrite all existing assets with the same name.
  • copy_to_tmp: If file_data is a file-like object, then this flag determines if the file is first copied to a temporary file before upload. If copy_to_tmp is False, then it is sent directly to the cloud.
  • step: Optional. Used to associate the asset to a specific step.

Examples:

>>> experiment.log_asset("model1.h5")

>>> fp = open("model2.h5", "rb")
>>> experiment.log_asset(fp,
...                      file_name="model2.h5")
>>> fp.close()

>>> fp = open("model3.h5", "rb")
>>> experiment.log_asset(fp,
...                      file_name="model3.h5",
...                      copy_to_tmp=False)
>>> fp.close()

Experiment.log_asset_data

log_asset_data(self, data, name=None, overwrite=False, step=None, metadata=None, file_name=None)

Logs the data given (str, binary, or JSON).

Args:

  • data: data to be saved as asset
  • name: String, optional. A custom file name to be displayed. If not provided, the filename from the temporary saved file will be used.
  • overwrite: Boolean, optional. Default False. If True will overwrite all existing assets with the same name.
  • step: Optional. Used to associate the asset to a specific step.
  • metadata: Optional. Some additional data to attach to the asset data. Must be JSON-encodable.

See also: APIExperiment.get_experiment_asset(return_type="json")


Experiment.log_asset_folder

log_asset_folder(self, folder, step=None, log_file_name=False, recursive=False)

Logs all the files located in the given folder as assets.

Args:

  • folder: String - the path to the folder you want to log.
  • step: Optional. Used to associate the asset to a specific step.
  • log_file_name: Optional. if True, log the file path with each file.
  • recursive: Optional. if True, recurse folder and save file names.

If log_file_name is set to True, each file in the given folder will be logged with the following name schema: FOLDER_NAME/RELPATH_INSIDE_FOLDER. Where FOLDER_NAME is the basename of the given folder and RELPATH_INSIDE_FOLDER is the file path relative to the folder itself.
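That naming schema can be sketched as follows (an illustrative reimplementation of the described behavior, not the library's own code):

```python
import os

# Sketch: reproduce the FOLDER_NAME/RELPATH_INSIDE_FOLDER name schema
# described above for log_file_name=True. The actual implementation in
# comet_ml may differ.

def asset_log_name(folder, file_path):
    folder_name = os.path.basename(os.path.normpath(folder))
    rel_path = os.path.relpath(file_path, folder)
    return "%s/%s" % (folder_name, rel_path.replace(os.sep, "/"))
```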


Experiment.log_audio

log_audio(self, audio_data, sample_rate=None, file_name=None, metadata=None, overwrite=False, \
    copy_to_tmp=True, step=None)

Logs the audio Asset determined by audio data.

Args:

  • audio_data: String or a numpy array - either the file path of the file you want to log, or a numpy array given to scipy.io.wavfile.write for wav conversion.
  • sample_rate: Integer - Optional. The sampling rate given to scipy.io.wavfile.write for creating the wav file.
  • file_name: String - Optional. A custom file name to be displayed. If not provided, the filename from the audio_data argument will be used.
  • metadata: Some additional data to attach to the audio asset. Must be JSON-encodable.
  • overwrite: if True will overwrite all existing assets with the same name.
  • copy_to_tmp: If audio_data is a numpy array, then this flag determines if the WAV file is first copied to a temporary file before upload. If copy_to_tmp is False, then it is sent directly to the cloud.
  • step: Optional. Used to associate the audio asset to a specific step.

Experiment.log_confusion_matrix

log_confusion_matrix(self, y_true=None, y_predicted=None, matrix=None, labels=None, \
    title="Confusion Matrix", row_label="Actual Category", column_label="Predicted Category", \
    max_examples_per_cell=25, max_categories=25, winner_function=None, \
    index_to_example_function=None, cache=True, file_name="confusion-matrix.json", \
    overwrite=False, step=None, **kwargs)

Logs a confusion matrix.

Args:

  • y_true: (optional) a list of target vectors containing the correct values for the y_predicted vectors. If not provided, then matrix may be provided.
  • y_predicted: (optional) a list of output vectors containing the predicted values for the y_true vectors. If not provided, then matrix may be provided.
  • labels: (optional) a list of strings naming the columns and rows, in order. By default, it will be "0" through the number of categories (e.g., rows/columns).
  • matrix: (optional) the confusion matrix (list of lists). Must be square, if given. If not given, then it is possible to provide y_true and y_predicted.
  • title: (optional) a custom name to be displayed. By default, it is "Confusion Matrix".
  • row_label: (optional) label for rows. By default, it is "Actual Category".
  • column_label: (optional) label for columns. By default, it is "Predicted Category".
  • max_examples_per_cell: (optional) maximum number of examples per cell. By default, it is 25.
  • max_categories: (optional) max number of columns and rows to use. By default, it is 25.
  • winner_function: (optional) a function that takes in an entire list of rows of patterns, and returns the winning category for each row. By default, it is argmax.
  • index_to_example_function: (optional) a function that takes an index and returns either a number, a string, a URL, or a {"sample": str, "assetId": str} dictionary. See below for more info. By default, the function returns a number representing the index of the example.
  • cache: (optional) should the results of index_to_example_function be cached and reused? By default, cache is True.
  • selected: (optional) None, or list of selected category indices. These are the rows/columns that will be shown. By default, selected is None. If the number of categories is greater than max_categories, and selected is not provided, then selected will be computed automatically by selecting the most confused categories.
  • kwargs: (optional) any extra keywords and their values will be passed on to the index_to_example_function.
  • file_name: (optional) logging option, by default "confusion-matrix.json"
  • overwrite: (optional) logging option, by default False
  • step: (optional) logging option, by default None

See the executable Jupyter Notebook tutorial at Comet Confusion Matrix.

Examples:

>>> experiment = Experiment()

# If you have a y_true and y_predicted:
>>> y_predicted = model.predict(x_test)
>>> experiment.log_confusion_matrix(y_true, y_predicted)

# Or, if you have already computed the matrix:
>>> experiment.log_confusion_matrix(labels=["one", "two", "three"],
  matrix=[[10, 0, 0],
  [ 0, 9, 1],
  [ 1, 1, 8]])

# However, if you want to reuse examples from previous runs,
# you can reuse a ConfusionMatrix instance.

>>> from comet_ml.utils import ConfusionMatrix

>>> cm = ConfusionMatrix()
>>> y_predicted = model.predict(x_test)
>>> cm.compute_matrix(y_true, y_predicted)
>>> experiment.log_confusion_matrix(matrix=cm)

# Log again, using previously cached values:
>>> y_predicted = model.predict(x_test)
>>> cm.compute_matrix(y_true, y_predicted)
>>> experiment.log_confusion_matrix(matrix=cm)

For more information, see comet_ml.utils.ConfusionMatrix.
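For illustration, the default matrix computation (an argmax winner function applied to each output vector) can be sketched in plain Python; this is a simplified reimplementation, not the library's code:

```python
# Sketch: compute a square confusion matrix (list of lists) from target and
# output vectors, using an argmax winner function as log_confusion_matrix
# does by default. Plain Python, for illustration only.

def argmax(vector):
    """Index of the largest value (the winning category)."""
    return max(range(len(vector)), key=lambda i: vector[i])

def confusion_matrix(y_true, y_predicted, n_categories):
    matrix = [[0] * n_categories for _ in range(n_categories)]
    for true_vec, pred_vec in zip(y_true, y_predicted):
        # Rows are actual categories, columns are predicted categories.
        matrix[argmax(true_vec)][argmax(pred_vec)] += 1
    return matrix
```

The resulting list of lists could then be passed as the matrix argument to log_confusion_matrix.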


Experiment.log_curve

log_curve(self, name, x, y, overwrite=False, step=None)

Log timeseries data.

Args:

  • name: (str) name of data
  • x: list of x-axis values
  • y: list of y-axis values
  • overwrite: (optional, bool) if True, overwrite previous log
  • step: (optional, int) the step value

Examples:

>>> experiment.log_curve("my curve", x=[1, 2, 3, 4, 5],
  y=[10, 20, 30, 40, 50])
>>> experiment.log_curve("my curve", [1, 2, 3, 4, 5],
  [10, 20, 30, 40, 50])

Experiment.log_dataset_hash

log_dataset_hash(self, data)

Used to log the hash of the provided object. This is a best-effort hash computation based on the md5 hash of the underlying string representation of the object data. Developers are encouraged to implement their own hash computation tailored to their underlying data source. That could be reported as `experiment.log_parameter("dataset_hash", your_hash)`.

data: Any object that, when cast to string (e.g., str(data)), returns a value that represents the underlying data.
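The best-effort computation described above amounts to something like the following sketch (illustrative; the library's exact implementation may differ):

```python
import hashlib

# Sketch: md5 hash of the string representation of the data, mirroring the
# best-effort computation described above. Illustrative only.

def dataset_hash(data):
    return hashlib.md5(str(data).encode("utf-8")).hexdigest()
```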


Experiment.log_dataset_info

log_dataset_info(self, name=None, version=None, path=None)

Used to log information about your dataset.

Args:

  • name: Optional string representing the name of the dataset.
  • version: Optional string representing a version identifier.
  • path: Optional string that represents the path to the dataset. Potential values could be a file system path, S3 path or Database query.

At least one argument should be included. The logged values will show on the Other tab.


Experiment.log_dependency

log_dependency(self, name, version)

Reports name and version to the Installed Packages tab on Comet.ml. Useful for tracking dependencies. Args:

  • name: Any type of key (str,int,float..)
  • version: Any type of value (str,int,float..)

Returns: None


Experiment.log_epoch_end

log_epoch_end(self, epoch_cnt, step=None)

Logs that the epoch finished. Required for progress bars.

Args:

  • epoch_cnt: integer

Returns: None


Experiment.log_figure

log_figure(self, figure_name=None, figure=None, overwrite=False, step=None)

Logs the global Pyplot figure, or the one passed in, and uploads its SVG version to the backend.

Args:

  • figure_name: Optional. String - name of the figure
  • figure: Optional. The figure you want to log. If not set, the global pyplot figure will be logged and uploaded
  • overwrite: Optional. Boolean - if another figure with the same name exists, it will be overwritten if overwrite is set to True.
  • step: Optional. Used to associate the figure to a specific step.

Experiment.log_histogram_3d

log_histogram_3d(self, values, name=None, step=None, **kwargs)

Logs a histogram of values for a 3D chart as an asset for this experiment. Calling this method multiple times with the same name and incremented steps will add additional histograms to the 3D chart on Comet.ml.

Args:

  • values: a list, tuple, array (any shape) to summarize, or a Histogram object
  • name: str (optional), name of summary
  • step: Optional. Used as the Z axis when plotting on Comet.ml. kwargs: Optional. Additional keyword arguments for histogram.

Note:

This method requires that step is either given here, or has been set elsewhere. For example, if you are using an auto-logger that sets step then you don't need to set it here.


Experiment.log_html

log_html(self, html, clear=False)

Reports any HTML blob to the HTML tab on Comet.ml. Useful for creating your own rich reports. The HTML will be rendered in an Iframe. Inline CSS/JS is supported.

Args:

  • html: Any HTML string. For example: experiment.log_html('<a href="www.comet.ml"> I love Comet.ml </a>')
  • clear: Default False. When clear=True, all previously logged HTML is removed.

Returns: None


Experiment.log_html_url

log_html_url(self, url, text=None, label=None)

A convenience method to add a link to a URL in the HTML tab on Comet.ml.

Args:

  • url: a link to a file or notebook, for example
  • text: text to use as the clickable word or phrase (optional; uses url if not given)
  • label: text that precedes the link

Examples:

>>> experiment.log_html_url("https://my-company.com/file.txt")

Adds html similar to:

<a href="https://my-company.com/file.txt">https://my-company.com/file.txt</a>
>>> experiment.log_html_url("https://my-company.com/file.txt",
  "File")

Adds html similar to:

<a href="https://my-company.com/file.txt">File</a>
>>> experiment.log_html_url("https://my-company.com/file.txt",
  "File", "Label")

Adds html similar to:

Label: <a href="https://my-company.com/file.txt">File</a>

Experiment.log_image

log_image(self, image_data, name=None, overwrite=False, image_format="png", image_scale=1.0, \
    image_shape=None, image_colormap=None, image_minmax=None, image_channels="last", \
    copy_to_tmp=True, step=None)

Logs the image. Images are displayed on the Graphics tab on Comet.ml.

Args:

  • image_data: Required. image_data is one of the following:
  • a path (string) to an image
  • a file-like object containing an image
  • a numpy matrix
  • a TensorFlow tensor
  • a PyTorch tensor
  • a list or tuple of values
  • a PIL Image
  • name: String - Optional. A custom name to be displayed on the dashboard. If not provided the filename from the image_data argument will be used if it is a path.
  • overwrite: Optional. Boolean - If another image with the same name exists, it will be overwritten if overwrite is set to True.
  • image_format: Optional. String. Default: 'png'. If the image_data is actually something that can be turned into an image, this is the format used. Typical values include 'png' and 'jpg'.
  • image_scale: Optional. Float. Default: 1.0. If the image_data is actually something that can be turned into an image, this will be the new scale of the image.
  • image_shape: Optional. Tuple. Default: None. If the image_data is actually something that can be turned into an image, this is the new shape of the array. Dimensions are (width, height).
  • image_colormap: Optional. String. If the image_data is actually something that can be turned into an image, this is the colormap used to colorize the matrix.
  • image_minmax: Optional. (Number, Number). If the image_data is actually something that can be turned into an image, this is the (min, max) used to scale the values. Otherwise, the image is autoscaled between (array.min, array.max).
  • image_channels: Optional. Default 'last'. If the image_data is actually something that can be turned into an image, this is the setting that indicates where the color information is in the format of the 2D data. 'last' indicates that the data is in (rows, columns, channels) where 'first' indicates (channels, rows, columns).
  • copy_to_tmp: If image_data is not a file path, then this flag determines if the image is first copied to a temporary file before upload. If copy_to_tmp is False, then it is sent directly to the cloud.
  • step: Optional. Used to associate the image to a specific step.

Experiment.log_metric

log_metric(self, name, value, step=None, epoch=None, include_context=True)

Logs a general metric (e.g., accuracy, f1).

e.g.

y_pred_train = model.predict(X_train)
acc = compute_accuracy(y_pred_train, y_train)
experiment.log_metric("accuracy", acc)

See also log_metrics

Args:

  • name: String - name of your metric
  • value: Float/Integer/Boolean/String
  • step: Optional. Used as the X axis when plotting on comet.ml
  • epoch: Optional. Used as the X axis when plotting on comet.ml
  • include_context: Optional. If set to True (the default), the current context will be logged along the metric.

Returns: None

Downsampling metrics: Comet guarantees to store 15,000 data points for each metric. If more than 15,000 data points are reported, we perform a form of reservoir subsampling: https://en.wikipedia.org/wiki/Reservoir_sampling.


Experiment.log_metrics

log_metrics(self, dic, prefix=None, step=None, epoch=None)

Logs a dictionary of key/value metric pairs. See also log_metric.
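For example (a sketch; `log_epoch_metrics` is an illustrative helper and `experiment` stands in for a real Experiment):

```python
# Sketch: log several related metrics at once for one epoch. `experiment`
# stands in for a comet_ml.Experiment instance.

def log_epoch_metrics(experiment, epoch, loss, accuracy):
    experiment.log_metrics(
        {"loss": loss, "accuracy": accuracy},
        prefix="train",  # keys are namespaced with the prefix
        step=epoch,
    )
```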


Experiment.log_model

log_model(self, name, file_or_folder, file_name=None, overwrite=False, metadata=None, \
    copy_to_tmp=True)

Logs the model data under the name. Data can be a file path, a folder path or a file-like object.

Args:

  • name: string (required), the name of the model
  • file_or_folder: the model data (required); can be a file path, a folder path or a file-like object.
  • file_name: (optional) the name of the model data. Used with file-like objects or files only.
  • overwrite: boolean, if True, then overwrite previous versions. Does not apply to folders.
  • metadata: Some additional data to attach to the data. Must be JSON-encodable.
  • copy_to_tmp: for file name or file-like; if True copy to temporary location before uploading; if False, then upload from current location

Returns: dictionary of model URLs


Experiment.log_other

log_other(self, key, value)

Reports a key and value to the Other tab on Comet.ml. Useful for reporting datasets attributes, datasets path, unique identifiers etc.

See related methods: log_parameter and log_metric

Args:

  • key: Any type of key (str,int,float..)
  • value: Any type of value (str,int,float..)

Returns: None


Experiment.log_others

log_others(self, dictionary)

Reports dictionary of key/values to the Other tab on Comet.ml. Useful for reporting datasets attributes, datasets path, unique identifiers etc.

See log_other

Args:

  • dictionary: Dict of key/values, where each value is any type of value (str, int, float...)

Returns: None


Experiment.log_parameter

log_parameter(self, name, value, step=None)

Logs a single hyperparameter. For additional values that are not hyperparameters, it's encouraged to use log_other.

See also log_parameters.

If the same key is reported multiple times only the last reported value will be saved.

Args:

  • name: String - name of your parameter
  • value: Float/Integer/Boolean/String/List
  • step: Optional. Used as the X axis when plotting on Comet.ml

Returns: None


Experiment.log_parameters

log_parameters(self, dic, prefix=None, step=None)

Logs a dictionary of multiple parameters. See also log_parameter.

e.g:

experiment = Experiment(api_key="MY_API_KEY")
params = {
  "batch_size":64,
  "layer1":"LSTM(128)",
  "layer2":"LSTM(128)",
  "MAX_LEN":200
}

experiment.log_parameters(params)

If you call this method multiple times with the same keys, your values will be overwritten. For example:

experiment.log_parameters({"key1":"value1","key2":"value2"})
On Comet.ml you will see the pairs of key1 and key2.

If you then call:

experiment.log_parameters({"key1":"other value"})
On the UI you will see the pairs key1: other value, key2: value2
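The effective result behaves like successive dictionary updates, where the later value wins per key; a sketch:

```python
# Sketch: the effective parameter state after repeated log_parameters calls
# behaves like successive dict updates (the last reported value wins per key).

def effective_parameters(*calls):
    state = {}
    for dic in calls:
        state.update(dic)
    return state
```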


Experiment.log_system_info

log_system_info(self, key, value)

Reports the key and value to the System Metric tab on Comet.ml. Useful to track general system information. This information can be added to the table on the Project view. You can retrieve this information via the Python API.

Args:

  • key: Any type of key (str,int,float..)
  • value: Any type of value (str,int,float..)

Returns: None

Example:

# Can also use ExistingExperiment here instead of Experiment:
>>> from comet_ml import Experiment, APIExperiment
>>> e = Experiment()
>>> e.log_system_info("info-about-system", "debian-based")
>>> e.end()

>>> apie = APIExperiment(previous_experiment=e.id)
>>> apie.get_system_details()['logAdditionalSystemInfoList']
[{"key": "info-about-system", "value": "debian-based"}]

Experiment.log_text

log_text(self, text, step=None, metadata=None)

Logs the text. These strings appear on the Text Tab in the Comet UI.

Args:

  • text: string to be stored
  • step: Optional. Used to associate the asset to a specific step.
  • metadata: Some additional data to attach to the text. Must be JSON-encodable.

Experiment.send_notification

send_notification(self, title, status=None, additional_data=None)

Send yourself a notification through email when an experiment ends.

Args:

  • title: str - the email subject.
  • status: str - the final status of the experiment. Typically, something like "finished", "completed" or "aborted".
  • additional_data: dict - a dictionary of key/values to notify.

Note:

In order to receive the notification, you need to have turned on Notifications in your Settings in the Comet user interface.

If you wish to have the additional_data saved with the experiment, you should also call Experiment.log_other() with this data as well.

This method uses the email address associated with your account.


Experiment.set_cmd_args

set_cmd_args(self)

Experiment.set_code

set_code(self, code, overwrite=False)

Sets the current experiment script's code. Should be called once per experiment. Args:

  • code: String. Experiment source code.
  • overwrite: Bool. If True, send the code, overwriting any previously set code.

Experiment.set_epoch

set_epoch(self, epoch)

Sets the current epoch in the training process. In Deep Learning each epoch is an iteration over the entire dataset provided. This is used to generate plots on comet.ml. You can also pass the epoch directly when reporting log_metric.

Args:

  • epoch: Integer value

Returns: None


Experiment.set_filename

set_filename(self, fname)

Sets the current experiment filename. Args:

  • fname: String. script's filename.

Experiment.set_model_graph

set_model_graph(self, graph, overwrite=False)

Sets the current experiment computation graph. Args:

  • graph: String or Google Tensorflow Graph Format.
  • overwrite: Bool, if True, send the graph again

Experiment.set_name

set_name(self, name)

Set a name for the experiment. Useful for filtering and searching on Comet.ml. Will be shown by default under the Other tab. Args:

  • name: String. A name for the experiment.

Experiment.set_os_packages

set_os_packages(self)

Reads the installed OS packages and reports them to the server as a message. Returns: None


Experiment.set_pip_packages

set_pip_packages(self)

Reads the installed pip packages using pip's CLI and reports them to the server as a message. Returns: None


Experiment.set_predictor

set_predictor(self, predictor)

Set the predictor.


Experiment.set_step

set_step(self, step)

Sets the current step in the training process. In Deep Learning, each step corresponds to feeding a single batch into the network. This is used to generate correct plots on Comet.ml. You can also pass the step directly when reporting log_metric and log_parameter.

Args: step: Integer value

Returns: None
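A typical batch loop (a sketch; `train_epoch` and `train_step` are illustrative names, and `experiment` stands in for a real Experiment):

```python
# Sketch: advance the step once per batch so that metrics logged inside the
# loop line up on the X axis. `experiment` stands in for comet_ml.Experiment;
# `train_step` is a hypothetical function that trains on one batch and
# returns the loss.

def train_epoch(experiment, batches, train_step):
    for step, batch in enumerate(batches):
        experiment.set_step(step)
        loss = train_step(batch)
        experiment.log_metric("loss", loss)
```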


Experiment.stop_early

stop_early(self, epoch)

Should the experiment stop early?


Experiment.test

test(*args, **kwds)

A context manager to mark the beginning and the end of the testing phase. This allows you to provide a namespace for metrics/params. For example:

with experiment.test():
  pred = model.predict(x_test)
  test_acc = compute_accuracy(pred, y_test)
  experiment.log_metric("accuracy", test_acc)
  # this will be logged as test accuracy
  # based on the context.

Experiment.train

train(*args, **kwds)

A context manager to mark the beginning and the end of the training phase. This allows you to provide a namespace for metrics/params. For example:

experiment = Experiment(api_key="MY_API_KEY")
with experiment.train():
  model.fit(x_train, y_train)
  accuracy = compute_accuracy(model.predict(x_train),y_train)
  # returns the train accuracy
  experiment.log_metric("accuracy",accuracy)
  # this will be logged as train accuracy based on the context.

Experiment.validate

validate(*args, **kwds)

A context manager to mark the beginning and the end of the validating phase. This allows you to provide a namespace for metrics/params. For example:

with experiment.validate():
  pred = model.predict(x_validation)
  val_acc = compute_accuracy(pred, y_validation)
  experiment.log_metric("accuracy", val_acc)
  # this will be logged as validation accuracy
  # based on the context.