TensorBoard
keras.callbacks.TensorBoard(
    log_dir="logs",
    histogram_freq=0,
    write_graph=True,
    write_images=False,
    write_steps_per_second=False,
    update_freq="epoch",
    profile_batch=0,
    embeddings_freq=0,
    embeddings_metadata=None,
)
Enables visualizations for TensorBoard.
TensorBoard is a visualization tool provided with TensorFlow. TensorFlow must be installed to use this callback.
This callback logs events for TensorBoard, including metric summary plots, training graph visualization, weight histograms, and sampled profiling.
When used in model.evaluate() or regular validation, in addition to epoch summaries there will be a summary that records the evaluation metrics against model.optimizer.iterations. The metric names are prefixed with evaluation, with model.optimizer.iterations being the step in the visualized TensorBoard.
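For instance, a minimal sketch (x_val and y_val are placeholder arrays, not from this page): reusing the callback in model.evaluate() writes the evaluation-prefixed summaries described above.
tb = keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x_train, y_train, epochs=2, callbacks=[tb])
# Evaluation metrics are logged under an "evaluation" prefix, with
# model.optimizer.iterations as the step (x_val/y_val are placeholders).
model.evaluate(x_val, y_val, callbacks=[tb])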
If you have installed TensorFlow with pip, you should be able to launch TensorBoard from the command line:
tensorboard --logdir=path_to_your_logs
You can find more information about TensorBoard in the TensorFlow documentation.
Arguments
log_dir: the path of the directory where to save the log files to be parsed by TensorBoard, e.g., log_dir = os.path.join(working_dir, 'logs'). This directory should not be reused by any other callbacks.
write_graph: whether to visualize the graph in TensorBoard. Note that the log file can become quite large when write_graph is set to True.
update_freq: "batch" or "epoch" or integer. When using "epoch", writes the losses and metrics to TensorBoard after every epoch. If using an integer, say 1000, all metrics and losses (including custom ones added via Model.compile) will be logged to TensorBoard every 1000 batches. "batch" is a synonym for 1, meaning they will be written after every batch. Note, however, that writing to TensorBoard too frequently can slow down your training, especially when used with distribution strategies, as it incurs additional synchronization overhead. Batch-level summary writing is also available via a train_step override (a rough sketch follows the subclassed-model example below); see the TensorBoard Scalars tutorial for more details.
Example
tensorboard_callback = keras.callbacks.TensorBoard(log_dir="./logs")
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
# Then run the tensorboard command to view the visualizations.
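A common pattern (not required by the API; the run_dir layout and the 1000-batch frequency below are illustrative) is to give each run its own timestamped subdirectory under the log directory and to set update_freq explicitly:
import datetime
import os

# Each run gets its own subdirectory, e.g. logs/20240101-120000, so that
# TensorBoard can compare runs side by side.
run_dir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = keras.callbacks.TensorBoard(
    log_dir=run_dir,
    update_freq=1000,  # write metrics every 1000 batches instead of per epoch
)
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])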
Custom batch-level summaries in a subclassed Model:
class MyModel(keras.Model):
    def build(self, _):
        self.dense = keras.layers.Dense(10)

    def call(self, x):
        outputs = self.dense(x)
        tf.summary.histogram('outputs', outputs)
        return outputs
model = MyModel()
model.compile('sgd', 'mse')
# Make sure to set `update_freq=N` to log a batch-level summary every N
# batches. In addition to any [`tf.summary`](https://tensorflowcn.cn/api_docs/python/tf/summary) contained in `model.call()`,
# metrics added in `Model.compile` will be logged every N batches.
tb_callback = keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])
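The train_step override route mentioned under update_freq looks roughly like this (a sketch; the SummaryModel name and the summary tag are illustrative, not part of the API). Any tf.summary call made while the callback's writer is active is recorded at the batch level:
class SummaryModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = keras.layers.Dense(10)

    def call(self, x):
        return self.dense(x)

    def train_step(self, data):
        x, y = data
        # Illustrative: emit an extra batch-level summary, then defer to the
        # default training logic.
        tf.summary.histogram('batch_inputs', x)
        return super().train_step(data)

model = SummaryModel()
model.compile('sgd', 'mse')
tb_callback = keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])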
Custom batch-level summaries in a Functional API Model:
def my_summary(x):
    tf.summary.histogram('x', x)
    return x
inputs = keras.Input(10)
x = keras.layers.Dense(10)(inputs)
outputs = keras.layers.Lambda(my_summary)(x)
model = keras.Model(inputs, outputs)
model.compile('sgd', 'mse')
# Make sure to set `update_freq=N` to log a batch-level summary every N
# batches. In addition to any [`tf.summary`](https://tensorflowcn.cn/api_docs/python/tf/summary) contained in `Model.call`,
# metrics added in `Model.compile` will be logged every N batches.
tb_callback = keras.callbacks.TensorBoard('./logs', update_freq=1)
model.fit(x_train, y_train, callbacks=[tb_callback])
Profiling
# Profile a single batch, e.g. the 5th batch.
tensorboard_callback = keras.callbacks.TensorBoard(
    log_dir='./logs', profile_batch=5)
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])
# Profile a range of batches, e.g. from 10 to 20.
tensorboard_callback = keras.callbacks.TensorBoard(
    log_dir='./logs', profile_batch=(10, 20))
model.fit(x_train, y_train, epochs=2, callbacks=[tensorboard_callback])