Suggestion regarding accumulating TB history for the clients. Each client records its TB history locally from `trainer`:
```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def get_tensorboard_history(trainer):
    # Get the log directory from trainer's logger
    log_dir = trainer.logger.experiment.log_dir
    # Create an EventAccumulator object and load all data
    event_acc = EventAccumulator(log_dir)
    event_acc.Reload()
    # Get available tags (metrics)
    tags = event_acc.Tags()['scalars']
    # Extract history for each metric
    history = {}
    for tag in tags:
        events = event_acc.Scalars(tag)
        # Each event has attributes: wall_time, step, value
        history[tag] = [(e.step, e.value) for e in events]
    return history
```
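A minimal sketch of how a client might attach this history to the outgoing Shareable, assuming NVFlare's `Shareable` (which behaves like a dict); the `history` key and the `pack_history` helper name are illustrative, not an official API:

```python
# Minimal sketch, assuming NVFlare's Shareable (a dict subclass); the 'history'
# key and the pack_history helper name are illustrative, not an official API.
from nvflare.apis.shareable import Shareable

def pack_history(trainer) -> Shareable:
    shareable = Shareable()
    shareable['history'] = get_tensorboard_history(trainer)
    return shareable
```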
Then, after sharing the histories through the Shareable object, you create a TB writer on the receiving side to accumulate them all:
```python
from torch.utils.tensorboard import SummaryWriter

tb_writer = SummaryWriter('continuous_logs')

for shareable in shareables:
    metrics_history = shareable['history']
    # Write each metric to the continuous log
    for metric_name, values in metrics_history.items():
        for step, value in values:
            global_step = (current_round * max_steps_per_round) + step
            tb_writer.add_scalar(f'continuous/{metric_name}', value, global_step=global_step)
```
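If clients do not all log the same number of steps per round, a fixed `max_steps_per_round` can leave gaps or cause overlaps on the global timeline. A hedged variant is to derive the offset from the steps actually seen in each round; the `shareables_per_round` structure below is an assumption for illustration only:

```python
# Hedged variant: derive the per-round offset from the data itself instead of a
# fixed max_steps_per_round. shareables_per_round (a list of shareable lists,
# one per round) is a hypothetical structure used only for illustration.
step_offset = 0
for round_shareables in shareables_per_round:
    round_max_step = 0
    for shareable in round_shareables:
        for metric_name, values in shareable['history'].items():
            for step, value in values:
                tb_writer.add_scalar(f'continuous/{metric_name}', value,
                                     global_step=step_offset + step)
                round_max_step = max(round_max_step, step)
    # Start the next round right after the largest step seen in this round
    step_offset += round_max_step + 1
```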
Originally posted by @farhadrgh in #3095 (comment)