Tensorboard Tutorial

The Ultimate Guide

Zito Relova
21 min read · Feb 16, 2022

This article was originally written for Layer.ai

Machine learning usually involves visualizing and measuring a model's performance during training. After all, it is almost impossible to improve a model if we cannot measure its performance in the first place. Many tools are available for this task. In this guide, we will focus on TensorFlow's open-source suite of tools called TensorBoard.

What is TensorBoard?

TensorBoard is a collection of tools for data visualization. It is included as part of the popular open-source machine learning library TensorFlow. How is it useful to machine learning practitioners? TensorBoard's main features include:

  • Visualizing the graph of a TensorFlow model
  • Tracking model metrics like loss and accuracy
  • Examining histograms of weights, biases, and other components in the machine learning workflow
  • Displaying non-tabular data, including images, text, and audio
  • Projecting high dimensional embeddings into a lower-dimensional space

We cannot discuss TensorBoard without mentioning TensorFlow, the library it ships with. TensorFlow is an open-source library built explicitly for machine learning applications. It was designed with deep neural networks in mind, so its support for traditional machine learning is limited. It is a second-generation system developed by Google, born out of an earlier system called DistBelief.

Google Brain built the earlier DistBelief system in 2011. As its user base quickly grew, it was simplified and refactored into the library we now know as TensorFlow, which was released to the public in 2015. Together with TensorFlow came TensorBoard, the visualization tool for examining metrics and other important components of a machine learning pipeline.

Now that we know about TensorBoard, how do we start using it?

How to use TensorBoard

Let’s discuss how to set up TensorBoard and how to run it on different platforms.

How to install TensorBoard

TensorBoard is included with the TensorFlow library, so if we have successfully installed TensorFlow, we also have TensorBoard ready to go. To install TensorFlow, we can simply open up our terminal or command prompt and type in the following command:

pip install tensorflow

If we prefer `conda`, we can install TensorBoard on its own from the `conda-forge` channel:

conda install -c conda-forge tensorboard

The `pip` command downloads the latest stable release of TensorFlow, with support for both CPU and GPU. It is also possible to install a preview build if we want to see the latest changes to the library. If we want to try out the preview build, we can run:

pip install tf-nightly

Please note that the preview builds will not be as stable as release builds, so we should use them with caution.

An alternative to installing TensorFlow directly is to run it from a Docker container. We can download the latest stable TensorFlow image by running:

docker pull tensorflow/tensorflow:latest

Launching TensorBoard

To launch TensorBoard, we should open our terminal or command prompt and run:

tensorboard --logdir=<directory_name>

We have to replace `<directory_name>` with the directory where our log data is saved. By convention, a `logs` folder is created in the directory from which we run TensorBoard and passed as the argument to this command.

After running this command, we will see the following output:

Serving TensorBoard on localhost; to expose to the network, use a proxy or pass --bind_all
TensorBoard 2.2.0 at http://localhost:6006/ (Press CTRL+C to quit)

This means that TensorBoard has successfully launched. We can go to http://localhost:6006/ to view it.

When the page first opens, we will see something like this:

Image source

What’s going on here? There are no active dashboards, so TensorBoard has nothing to display. As soon as we start adding data to the `logdir` folder, it will show up on TensorBoard.
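For instance, here is a minimal sketch (assuming the `logs` folder from above) that writes a single scalar, just to give the dashboard something to display:

import tensorflow as tf

# Write one scalar summary into the logs folder
writer = tf.summary.create_file_writer("logs")
with writer.as_default():
    tf.summary.scalar("demo", 0.5, step=0)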

Using TensorBoard with Jupyter Notebooks

If we want to use TensorBoard within Jupyter Notebooks, we will need TensorFlow installed on our computer. Once TensorFlow is installed, we can create a new notebook and run:

%load_ext tensorboard

Running this line loads the TensorBoard extension and allows us to use it for visualization. Once the extension is loaded, we can start TensorBoard with:

%tensorboard --logdir logs

Using TensorBoard with Google Colab

When using Google Colab, TensorFlow and TensorBoard are already installed when we create a new notebook. We can follow the same process as outlined for Jupyter Notebooks. First, load the extension in a notebook cell:

%load_ext tensorboard

Then start TensorBoard with `%tensorboard --logdir logs`, just as before. This should start the service and display the TensorBoard application inline.

Image source

How to run TensorBoard

Here we will get TensorBoard up and running as well as discuss the fundamentals of tracking a model’s performance during training.

How to use the TensorBoard callback

The TensorBoard callback is found in the TensorFlow library and is what enables visualizations for TensorBoard. How does it work?

According to the Keras documentation, a callback is an object that can perform actions at various stages of training. When we want to automate a task after certain periods in the training process (e.g., after each iteration/epoch), we use callbacks.
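As an illustration, here is a minimal sketch of a hypothetical custom callback (not part of TensorBoard itself) that prints the loss at the end of every epoch, showing the kind of hooks Keras exposes:

import tensorflow as tf

# A hypothetical callback that reports the loss after each epoch
class LossLogger(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print(f"Epoch {epoch}: loss = {logs['loss']:.4f}")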

Let’s look at a quick example of how the TensorBoard callback is used.

First, let’s create a simple model using TensorFlow and train it on the MNIST dataset.

import tensorflow as tf

# Load and normalize MNIST data
mnist_data = tf.keras.datasets.mnist
(X_train, y_train), (X_test, y_test) = mnist_data.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

# Define the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy'])

After compiling the model, we will need to create a callback to use when we call the `fit` method.

tf_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")

The variable we have set here can now be used when calling the `fit` method on the model. I have created a `logs` folder in my working directory and have passed it as an argument to `log_dir`. Let’s call fit and pass this in as a callback.

model.fit(X_train, y_train, epochs=5, callbacks=[tf_callback])

After calling the fit method, we can go to localhost:6006 to see the training metrics.

Above, we see a dashboard with two plots. The first shows the model's accuracy rising with each epoch; the second shows the loss decreasing every epoch. The main takeaway from these two is that training has gone well.

Running TensorBoard remotely

Aside from running locally, it is also possible to run TensorBoard remotely. This is common if we are running an experiment on a different server with more powerful GPUs. We may still want to examine the model locally in this case.

To start, we SSH into the remote server and forward its port to our local computer. Let's run this command on our local machine to forward the port:

ssh -L 6006:127.0.0.1:6006 username@server_ip

After this, we just need to start TensorBoard on the remote server. On the remote server run:

tensorboard --logdir=logs --port=6006

TensorBoard should now be running, and we can go to localhost:6006 to view it.

TensorBoard Dashboard

The TensorBoard Dashboard is composed of different elements for visualizing data. We will look at each of these in-depth in the next section.

TensorBoard Scalars

Training a machine learning model requires tracking different metrics related to its performance. This is important for spotting problems quickly and for determining whether a model is overfitting, among other things.

Using TensorBoard's Scalars Dashboard, we can visualize these metrics and debug a model much more easily. We have already seen scalars: in our first example, the loss and accuracy we plotted for the MNIST model were scalars.

The procedure for logging involves three simple steps:

  1. Create a callback
  2. Specify a directory to log the data
  3. Pass the callback when calling the fit method

This works for most scenarios, but what if we want to log a custom scalar that is not readily available? For that, we can use the Summary API of TensorFlow. As the name suggests, this particular API is used for writing summary data which can later be used for visualization and analysis.

Let’s see an example to understand this better. We will use a simple sine wave as the scalar we want to show on TensorBoard.

import numpy as np

# Specify a directory for logging data
logdir = "./logs"
# Create a file writer to write data to our logdir
file_writer = tf.summary.create_file_writer(logdir)
# Loop from 0 to 199 and log the sine value of each number
with file_writer.as_default():
    for i in range(200):
        tf.summary.scalar('sine wave', np.sin(i), step=i)

Using the `scalar` method from `tf.summary`, we can log pretty much any scalar data we want; we are not limited to losses and metrics. Below is the output on our dashboard after running the commands above.

Image source
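The same pattern extends to any per-step quantity. As a hedged sketch reusing the `file_writer` from above, we could log a hypothetical decaying learning rate per epoch:

# Log a made-up, exponentially decaying learning rate
with file_writer.as_default():
    for epoch in range(10):
        lr = 0.1 * (0.9 ** epoch)
        tf.summary.scalar('learning rate', lr, step=epoch)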

TensorBoard Images

When working with image data, we may want to view our data to look for any problems or just look at samples to ensure data quality. With TensorBoard’s Image Summary API, we can easily do this.

Let’s go back to the MNIST dataset to see how images are displayed in TensorBoard. We will follow a similar procedure to when we were logging custom scalars.

As a review, let’s first load the data using the datasets module of TensorFlow.

# Load and normalize MNIST data
mnist_data = tf.keras.datasets.mnist
(X_train, y_train), (X_test, y_test) = mnist_data.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

After this, we can take the first image and visualize it. We will reshape it first before passing it to the file writer.

# Reshape the first image
img = np.reshape(X_train[0], (-1, 28, 28, 1))
# Specify a directory for logging data
logdir = "./logs"
# Create a file writer to write data to our logdir
file_writer = tf.summary.create_file_writer(logdir)
# With the file writer, log the image data
with file_writer.as_default():
    tf.summary.image("Training data", img, step=0)

After logging the image data, we can go back to the dashboard. If we look at the Images tab, we will see the image we selected is displayed on the dashboard.

Image source
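If we want to inspect more than one sample, `tf.summary.image` also accepts a batch. Here is a short sketch reusing the same `file_writer`, with `max_outputs` controlling how many images from the batch are displayed:

# Reshape the first 25 images into a single batch
imgs = np.reshape(X_train[:25], (-1, 28, 28, 1))
with file_writer.as_default():
    tf.summary.image("25 training samples", imgs, max_outputs=25, step=0)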

TensorBoard Graphs

All TensorFlow models can be seen as a computation graph. It can sometimes be difficult to see the architecture of a model by looking at the code alone. A visual representation can benefit us greatly by making it easy to see how a model is structured. This will also ensure that the architecture we are using is what we intended or designed.

Let’s visualize the model we used earlier for the MNIST dataset. Below is the model definition.

# Define the model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

Once again, we will create a TensorBoard callback and use it when we train the model.

# Create a callback
tf_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")
# Pass in the callback when fitting the model
model.fit(X_train, y_train, epochs=5, callbacks=[tf_callback])

After training, let’s look at the dashboard and go to the Graphs tab.

The first thing we see is the op-level graph of the model. This graph shows the model architecture layer by layer. This is important to see if our model is correct and each of the layers is what we intend them to be. Note that the graph is inverted, with data flowing from the bottom to the top.

If we want, we can also see the conceptual graph. To do this, we just need to find the Tag heading in the sidebar and change it to Keras. This will bring us to the conceptual graph of the model.

Image source

This is useful if we are using a saved model and we just want to see if the structure is correct. In our model, we see a single node showing that our model is a sequential one.

TensorBoard Distributions and Histograms

TensorBoard Distributions and Histograms are another great way to track the progress of a model. We might notice that after training, several tabs show up on TensorBoard. If we go to the Distributions tab, we will see an image like the one below.

Image source

This set of graphs shows the tensors that make up the model. The horizontal axis of each graph shows the epoch number, and the vertical axis shows each tensor's values. The graphs essentially show how these distributions change as training progresses; darker areas indicate where the values are most concentrated. If our model's weights are not updating properly at each epoch, this tab will help us spot the problem.
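One caveat, assuming the Keras setup from earlier: by default, the TensorBoard callback does not record weight histograms, so these tabs may stay empty unless we set `histogram_freq` when creating the callback:

# histogram_freq=1 records weight and bias histograms after every epoch
tf_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs", histogram_freq=1)
model.fit(X_train, y_train, epochs=5, callbacks=[tf_callback])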

We see a different set of graphs representing the model’s tensors on the Histograms tab.

Image source

These graphs offer a different view of the same tensors. Each graph stacks five histograms on top of each other, one for each of the five epochs we trained. They likewise show where the tensor values tend to concentrate. Once again, this is useful for debugging misbehaving models.
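Histograms can also be written directly through the Summary API rather than the callback. Here is a small sketch with synthetic data, reusing the `file_writer` from earlier:

import numpy as np

# Log a drifting normal distribution over five steps
with file_writer.as_default():
    for step in range(5):
        values = np.random.normal(loc=step, scale=1.0, size=1000)
        tf.summary.histogram("activations", values, step=step)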

Text

Text is a commonly used type of data when creating machine learning models. A lot of the time, it can be hard to visualize text data. Thankfully with TensorBoard, we can visualize text data easily with the Text Summary API. Let’s have a look at how it works.

We'll use the text `Hello world!` as a simple example.

# Our sample text
sample_text = "Hello world!"
# Specify a directory for logging data
logdir = "./logs/text/"
# Create a file writer to write data to our logdir
file_writer = tf.summary.create_file_writer(logdir)
# With the file writer, log the text data
with file_writer.as_default():
    tf.summary.text("sample_text", sample_text, step=0)

Looking at TensorBoard, we see the text we entered in the Text tab.

Image source
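One handy detail: the Text tab renders Markdown, so we can log formatted notes as well. A brief sketch reusing the same `file_writer`:

# Markdown formatting is rendered in the Text tab
with file_writer.as_default():
    tf.summary.text("notes", "**Training notes** can use *Markdown*", step=0)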

Using the TensorBoard projector

Deep learning models commonly work with data that has a large number of dimensions. Visualizing these dimensions may give us insight into improving the performance of a model. TensorBoard has an embedding projector to allow us to do this.

First off, we need to import the projector plugin from TensorBoard.

from tensorboard.plugins import projector

For this section, we will be using the IMDB movie review dataset to visualize embeddings.

import os
import tensorflow as tf
import tensorflow_datasets as tfds

# Load the data
(train_data, test_data), info = tfds.load(
    "imdb_reviews/subwords8k",
    split=(tfds.Split.TRAIN, tfds.Split.TEST),
    with_info=True,
    as_supervised=True,
)
encoder = info.features["text"].encoder

# Create training batches
train_batches = train_data.shuffle(1000).padded_batch(
    10, padded_shapes=((None,), ())
)
# Create testing batches
test_batches = test_data.shuffle(1000).padded_batch(
    10, padded_shapes=((None,), ())
)
# Get the first batch
train_batch, train_labels = next(iter(train_batches))

In the code above, we load in and preprocess the data. Next, we create a simple model that will generate embeddings for the text. After this, we train the model for a single epoch and visualize the results.

# Create an embedding layer
embedding_dim = 16
embedding = tf.keras.layers.Embedding(
    encoder.vocab_size,
    embedding_dim)
# Configure the embedding layer as part of a Keras model
model = tf.keras.Sequential([
    embedding,
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
# Compile the model
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# Train the model for a single epoch
history = model.fit(
    train_batches, epochs=1, validation_data=test_batches
)

Once we fit the model, we just need to write the data to the `logdir`, similar to what we have done in previous sections.

# Set up a log dir, just like in previous sections
log_dir = '/logs/imdb-example/'
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

# Save labels separately in a line-by-line manner.
with open(os.path.join(log_dir, 'metadata.tsv'), "w") as f:
    for subwords in encoder.subwords:
        f.write("{}\n".format(subwords))
    # Fill in the rest of the labels with "unknown"
    for unknown in range(1, encoder.vocab_size - len(encoder.subwords)):
        f.write("unknown #{}\n".format(unknown))

# Save the weights we want to analyze as a variable
weights = tf.Variable(model.layers[0].get_weights()[0][1:])
# Create a checkpoint from embedding, the filename and key are the
# name of the tensor
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))

# Set up config
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
# The name of the tensor will be suffixed by
# `/.ATTRIBUTES/VARIABLE_VALUE`
embedding.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
embedding.metadata_path = 'metadata.tsv'
projector.visualize_embeddings(log_dir, config)

We have saved the embedding data to our `logdir`. Now we just have to run TensorBoard.

tensorboard --logdir /logs/imdb-example/

Navigate to Projector on the dropdown menu located on the top-right to see the embeddings visualized. We should see something like the image below.

Image source

TensorFlow Profiler

The TensorFlow Profiler is used to profile the execution of TensorFlow code. This is important because we want to know if the model we are running is optimized properly. To start, we need to make sure we have the profiler plugin installed.

pip install -U tensorboard_plugin_profile

Once we have the plugin installed, we can start looking at the different parts of the profiler and what they are capable of.

Overview Page

We will follow the same steps as before: create a model, then fit it with a TensorBoard callback.

# Create a callback
tf_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")
# Pass in the callback when fitting the model
model.fit(X_train, y_train, epochs=5, callbacks=[tf_callback])
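One note of caution, depending on the TensorFlow version: profiling may need to be enabled explicitly on the callback via the `profile_batch` argument. A hedged sketch that profiles batches 2 through 5:

# profile_batch selects which batches to profile; 0 disables profiling
tf_callback = tf.keras.callbacks.TensorBoard(
    log_dir="./logs", profile_batch=(2, 5)
)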

When we go to the TensorBoard page, we will see a new tab named Profile. Switching to this tab, we will see something similar to the image below.

Image source

This is the Overview Page. There is a lot of information here, so let’s break it down:

  • The Step-time graph on the top-right of the page shows which parts of the training process take the most time. In this run, the model is not input-bound; much of the time is spent launching kernels.
  • We also see recommendations we can follow to optimize the performance of the model.
  • In our case, none of our computation uses 16-bit operations, so we could potentially improve performance by moving eligible operations to reduced (16-bit) precision.

Trace Viewer

On the left-hand side, we see a dropdown menu called Tools. Here we can select the Trace Viewer to see where the bottlenecks in our model’s performance occur. The Trace Viewer shows a timeline of GPU and CPU events that happened during the profiling period.

Image source

The vertical axis shows event groups with different trace events. Trace events are collected from both CPU and GPU. Each of the rectangles is a separate trace event. We can click any one of them to focus on a trace event and analyze it. We can also drag our cursor to select multiple events at once.

Input Pipeline Analyzer

On the Tools dropdown, we also have access to the `input_pipeline_analyzer` option to see the model’s input pipeline performance based on collected data.

Image source

This essentially tells us whether the model is input-bound, meaning it spends much of its time waiting for input data rather than computing. It can also tell us which stage in the pipeline is slowing us down the most.
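If the analyzer does flag an input-bound model, a common first remedy is to overlap data loading with computation using `tf.data` prefetching. A minimal sketch, with a toy dataset standing in for a real pipeline:

import tensorflow as tf

# A toy dataset standing in for a real input pipeline
dataset = tf.data.Dataset.range(1000).map(lambda x: x * 2)
# Overlap preprocessing with model execution; AUTOTUNE picks the buffer size
dataset = dataset.prefetch(tf.data.AUTOTUNE)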

TensorFlow Stats

For a more in-depth look at different TensorFlow operations, we have another tool called TensorFlow stats. This tool shows a breakdown of the different operations the model is performing.

We see pie charts showing the different operations computed when we run the model. These can include encoding, matrix multiplications, and the many other operations needed for inference. We can see which of them take the most time and focus on those areas when optimizing the model.

Fairness Indicators Dashboard

The Fairness Indicators Dashboard helps us compute different fairness metrics for binary and multiclass classifiers. Using this dashboard, we can evaluate the fairness of our models across different runs and compare their performance between different groups.

To start using the Fairness Indicators dashboard on our data, we will need to install a few things. In our command prompt or terminal, run the following:

pip install fairness_indicators
pip install tensorboard-plugin-fairness-indicators

We can now import the fairness indicators plugin using:

from tensorboard_plugin_fairness_indicators import summary_v2

We can use `summary_v2` the same way we used `tf.summary` when logging data to the `logdir` in earlier sections. We will also need a separate directory for storing the evaluation results. Let’s do this by running the code below.

# Create a file writer to write data to our logdir
file_writer = tf.summary.create_file_writer(logdir)
# Write the evaluation results (stored in result_dir) to our logdir
with file_writer.as_default():
    summary_v2.FairnessIndicators(result_dir, step=1)

After this, we just need to run `tensorboard`:

tensorboard --logdir=logdir

We will see a separate Fairness Indicators tab when we open TensorBoard. Here we can view a breakdown of the different class values the model is predicting and their percentage difference from the baseline to determine if the model is fair.

Model Understanding with the What-If Tool Dashboard

TensorBoard comes with a What-If Tool (WIT) that can help us understand black-box classification and regression models. Using this tool, we can make predictions on a set of data and visualize the results in different ways.

It is also possible to manually or programmatically edit examples to see how changing them affects the model’s predictions.

To use WIT, we need a model and data. The models we want to explore have to be deployed using TensorFlow Serving with the classify, regress, or predict API. More information on deploying models to TensorFlow Serving is available in the official documentation.

Also, the dataset we want to make predictions on should be stored in the TFRecord format and accessible by the server where we are running TensorBoard.

After setup is complete, go to the TensorBoard Dashboard and select the What-If Tool option from the dropdown menu on the top-right of the page. We should see a page that looks like this:

Image source

Let's go over each of the fields we need to fill in. The first is the inference address where the model is served. For a locally served model, this would be `localhost:port`, with `port` replaced by the port number where we are serving the model. We also need the model name and, optionally, the model version and signature.

Next, we will need to input the path where the data is located. This should be the `TFRecord` file we discussed earlier. After this, just click the Accept button, and we should be brought to the dashboard with the model and data.

Image source

Each of the points in the dataset is now colored according to their class. There is also a lot of flexibility in how we manipulate our data. We can do operations like binning, bucketing, creating scatterplots, and coloring our data points differently.

Accessing TensorBoard Data as DataFrames

TensorBoard is primarily a GUI tool for visualizing data. However, some users may want to programmatically interact with TensorBoard data for different purposes like custom visualization and ad-hoc analysis. For this purpose, it is possible to access TensorBoard data as a DataFrame to use in a separate program later on. In this section, we will explore this process.

Note that this particular API is still at an experimental stage, so it may experience breaking changes. Another thing to note is that the `logdir` from the model needs to be uploaded to TensorBoard.dev to use this feature. The process for uploading `logdirs` to TensorBoard.dev is discussed in a separate section. After uploading the `logdirs` to TensorBoard.dev, we can proceed with the next steps.

There are a few dependencies we need for this part. First, we need to make sure we have Pandas installed. This is a Python library used for manipulating and analyzing data. Once we have that, we also need a way to visualize the data. Some commonly used libraries are Matplotlib and Seaborn. We will be using these libraries in this example.

A TensorBoard `logdir` on TensorBoard.dev is referred to as an experiment. Each experiment has a unique ID that we can use to access data programmatically.

import tensorboard as tb

# Change the experiment ID to our own
experiment_id = "insert experiment ID here"
# Get the experiment using the ID
experiment = tb.data.experimental.ExperimentFromDev(experiment_id)
# Save the scalars to a DataFrame
df = experiment.get_scalars()

Inside the `df` variable, we will now have all the `logdir` data available in a DataFrame. Below is an image of how this could look.

Image source

We can now manipulate this like we would any other DataFrame to analyze our model runs further.
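For example, here is a hedged sketch of plotting the logged accuracy with Seaborn. The `epoch_accuracy` tag is an assumption based on what Keras typically logs; adjust it to the tags present in your own experiment.

import matplotlib.pyplot as plt
import seaborn as sns

# Filter for the accuracy scalars and plot one line per run
# ("epoch_accuracy" is the tag Keras usually writes; adjust as needed)
acc = df[df["tag"] == "epoch_accuracy"]
sns.lineplot(data=acc, x="step", y="value", hue="run")
plt.show()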

Using TensorBoard with PyTorch

PyTorch is another deep learning framework that is very popular among researchers. You might be surprised that TensorBoard is actually supported in PyTorch. Here we will see how we can use these tools together.

We used the Summary API to create the file writer that would log data into the `logdir` folder when using TensorFlow. There is a similar file writer that we can use when working with PyTorch.

# Import the summary writer
from torch.utils.tensorboard import SummaryWriter
# Create an instance of the object
writer = SummaryWriter()

We can use the same MNIST dataset that we worked on when we were using TensorFlow.

import torch
import torchvision
from torchvision import datasets, transforms

# Compose a set of transforms to use later on
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,))
])
# Load in the MNIST dataset
trainset = datasets.MNIST(
    'mnist_train',
    train=True,
    download=True,
    transform=transform
)
# Create a data loader
trainloader = torch.utils.data.DataLoader(
    trainset,
    batch_size=64,
    shuffle=True
)
# Get an untrained ResNet18 model
model = torchvision.models.resnet18(pretrained=False)
# Change the first layer to accept grayscale images
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Get the first batch from the data loader
images, labels = next(iter(trainloader))
# Write the data to TensorBoard
grid = torchvision.utils.make_grid(images)
writer.add_image('images', grid, 0)
writer.add_graph(model, images)
writer.close()

After we run this, we can go to TensorBoard and see the output that we have saved.

Image source

We see that we got a similar output to the one we had when we used TensorFlow. We can use this summary writer to write data just like we would with the Summary API.
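The analogy carries over to scalars as well. As a quick sketch with made-up loss values, logging a curve with the PyTorch writer mirrors `tf.summary.scalar`:

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
# Log a synthetic, decreasing loss curve, one value per step
for step in range(100):
    writer.add_scalar("Loss/train", 1.0 / (step + 1), step)
writer.close()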

Uploading and Sharing Results with TensorBoard.dev

TensorBoard.dev is a component of TensorBoard that allows us to host machine learning experiments on the web. This includes tracking as well as sharing experiments with the world. For those that are eager to show their results, this is the tool for you. How does it all work?

The first thing we need to do is determine which TensorBoard logs we want to upload. Once we have this out of the way, we can upload it to TensorBoard.dev from the terminal or command prompt with this command:

tensorboard dev upload --logdir {logs}

The logs here will be the log files that we want to upload. Make sure not to upload any sensitive data!

We will be given a URL to authorize the upload. After authorizing the upload with a Google account, we will get a code to paste. Once we have entered the code, we will get a link to our uploaded TensorBoard logs.

Image source

Just visit the link, and we will see the logs. We can now share this link with anyone for them to see the work that we have done.

Limitations of using TensorBoard

Though TensorBoard comes with a host of tools for visualizing our data and models, it has its limitations. We will go over some of its biggest limitations to keep them in mind when using the library.

Lack of user and workspace management

TensorBoard does not have the concept of a user, as it works in a single environment. Without user or workspace separation, keeping runs organized across multiple simultaneous projects on the same machine can be challenging.

Difficult to use in a team setting

Similar to the limitation above, a TensorBoard instance cannot be shared among other users. If we have several people working on the same project, we may want a centralized dashboard and separate workspaces for each person. This is not supported in TensorBoard.

No support for data and model versioning

When tuning a model or setting values for hyperparameters, we will want to save different model and training data versions. Especially when conducting experiments, we want to look at different versions of the model and data at once. When using TensorBoard, we cannot tag a certain run or a set of data as particularly important.

Problems arise when performing a large number of runs

TensorBoard was not created with a large number of consecutive runs in mind. If we keep running our model and logging data repeatedly, we will run into UI problems that make the interface difficult to use.

No support for logging and visualizing unstructured data formats like video files

Some data types cannot be visualized in TensorBoard. In particular, video data, despite being common, is not supported. If our work involves modeling this type of data, TensorBoard will be difficult to use.

Final thoughts

We have looked at TensorBoard, the data visualization toolkit for visualizing machine learning models. We covered all the basic components of the dashboard and new, experimental features like the What-If Tool and Fairness Indicators component. With all these capabilities at our disposal, we should be able to easily view and debug the inner workings of the models we train and ultimately improve their performance.

Thank you for reading!
