Experiments

Track machine learning experiments with W&B.

Track machine learning experiments with a few lines of code. You can then review the results in an interactive dashboard or export your data to Python for programmatic access using our Public API.

Use W&B Integrations if you work with popular frameworks such as PyTorch, Keras, or scikit-learn. See our Integration guides for a full list of integrations and information on how to add W&B to your code.

[Image: an example dashboard where you can view and compare metrics across multiple runs.]

How it works

Track a machine learning experiment with a few lines of code:

  1. Create a W&B run.
  2. Store a dictionary of hyperparameters, such as learning rate or model type, into your configuration (run.config).
  3. Log metrics (run.log()) over time in a training loop, such as accuracy and loss.
  4. Save outputs of a run, like the model weights or a table of predictions.

The following code demonstrates a common W&B experiment tracking workflow:

import wandb

num_epochs = 10  # placeholder; substitute your own training schedule

# Start a run. Pass entity="<team-or-username>" to choose where the run
# is logged; otherwise it goes to your default entity.
#
# When this block exits, it waits for logged data to finish uploading.
# If an exception is raised, the run is marked failed.
with wandb.init(project="my-project-name") as run:
  # Save model inputs and hyperparameters.
  run.config.learning_rate = 0.01

  # Run your experiment code.
  for epoch in range(num_epochs):
    # Do some training...
    loss = 1.0 / (epoch + 1)  # placeholder for your real training loss

    # Log metrics over time to visualize model performance.
    run.log({"loss": loss})

  # Upload model outputs as artifacts. Replace "model.pt" with the
  # path to your saved model weights.
  run.log_artifact("model.pt", name="model", type="model")
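
If this is your first run on a new machine, authenticate first by calling wandb.login() in Python or running wandb login from the command line; the SDK then uploads data to your account as the run executes.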

Get started

Depending on your use case, explore the following resources to get started with W&B Experiments:

  • Read the W&B Quickstart for a step-by-step outline of the W&B Python SDK commands you can use to create and track a machine learning experiment.
  • Explore this chapter to learn how to:
    • Create an experiment
    • Configure experiments
    • Log data from experiments
    • View results from experiments
  • Explore the W&B Python Library within the W&B API Reference Guide.

Best practices and tips

For best practices and tips for experiments and logging, see Best Practices: Experiments and Logging.


Create an experiment

Create a W&B Experiment.

Configure experiments

Use a dictionary-like object to save your experiment configuration.
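
As a brief sketch (the project name and hyperparameter values are illustrative), you can pass a dictionary to wandb.init and read values back through the dictionary-like run.config:

import wandb

# Pass hyperparameters as a dictionary when the run starts.
with wandb.init(
  project="my-project-name",  # illustrative project name
  config={"learning_rate": 0.01, "epochs": 10, "architecture": "CNN"},
) as run:
  # run.config supports both dictionary and attribute access.
  print(run.config["learning_rate"])  # 0.01
  print(run.config.epochs)  # 10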

Projects

Compare versions of your model, explore results in a scratch workspace, and export findings to a report to save notes and visualizations.

View experiments results

A playground for exploring run data with interactive visualizations.

What are runs?

Learn about the basic building block of W&B, Runs.

Log objects and media

Keep track of metrics, videos, custom plots, and more.
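
As a short sketch (the image and table contents are synthetic placeholders), run.log accepts rich media types such as wandb.Image and wandb.Table alongside scalar metrics:

import numpy as np

import wandb

with wandb.init(project="my-project-name") as run:
  # A random RGB image standing in for a real model output.
  image = wandb.Image(np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8))

  # A small table of example predictions.
  table = wandb.Table(columns=["id", "prediction"], data=[[0, "cat"], [1, "dog"]])

  # Scalars and media log through the same call.
  run.log({"accuracy": 0.9, "example": image, "predictions": table})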

Track Jupyter notebooks

Use W&B with Jupyter to get interactive visualizations without leaving your notebook.

Experiments limits and performance

Keep your W&B pages fast and responsive by logging within these suggested bounds.

Reproduce experiments

Reproduce a W&B experiment to verify a model's results.

Import and export data

Import data from MLflow, and export or update data that you have saved to W&B.
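
As a brief sketch (the run path is a placeholder), the Public API mentioned above lets you pull a finished run's logged history into Python:

import wandb

# The Public API reads data that has already been logged to W&B.
api = wandb.Api()

# Identify the run by its "<entity>/<project>/<run_id>" path.
run = api.run("<entity>/<project>/<run_id>")

# history() returns sampled metrics as a pandas DataFrame;
# this assumes a "loss" metric was logged during the run.
df = run.history()
print(df["loss"].head())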

Environment variables

Set W&B environment variables.
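
For example (the values shown are placeholders), two commonly used variables can be set in Python before the first run starts:

import os

# Set these before starting a run so the SDK picks them up.
os.environ["WANDB_API_KEY"] = "<your-api-key>"  # placeholder; keep real keys out of source control
os.environ["WANDB_PROJECT"] = "my-project-name"  # default project for new runs

import wandb

# No project argument is needed; WANDB_PROJECT supplies it.
with wandb.init() as run:
  run.log({"loss": 0.5})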