Every time there is a PR that makes big changes to the UDP tracker, I run the aquatic UDP load test manually, just to make sure there is no performance regression. For example:
#873
It would be nice to do it automatically, because some changes could affect performance in non-obvious ways.
Notes
The workflow should always be executed, not only when the UDP tracker code is directly affected. Changes in the core tracker can affect the UDP tracker. In fact, it would be a good way to detect performance regressions in other parts of the code.
It can be hard to define the threshold. Ideally, we should run the test for both the target branch and the PR branch, so that the two runs are compared in the same context.
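As a rough sketch of what that comparison could look like, assuming we can extract a single throughput number (responses per second) from each load test run; the function name, the example numbers, and the 5% threshold below are all placeholders:

```rust
/// Relative regression threshold: fail if the PR branch is more than 5%
/// slower than the target branch (placeholder value, to be tuned).
const MAX_REGRESSION: f64 = 0.05;

/// Returns true if the PR throughput is an acceptable fraction of the
/// baseline throughput measured on the target branch.
fn is_within_threshold(target_responses_per_sec: f64, pr_responses_per_sec: f64) -> bool {
    pr_responses_per_sec >= target_responses_per_sec * (1.0 - MAX_REGRESSION)
}

fn main() {
    // Hypothetical numbers taken from two load test runs in the same CI job.
    let target = 380_000.0;
    let pr = 350_000.0;

    if is_within_threshold(target, pr) {
        println!("OK: no significant regression");
    } else {
        println!("FAIL: PR branch is more than 5% slower than the target branch");
        std::process::exit(1);
    }
}
```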
I think we should run benchmarks at different levels. We can start with the UDP tracker package stack:
- main app
- udp-tracker-server
- udp-tracker-core
- tracker-core
- torrent-repository (we already have benchmark tests for this package)
If we run benchmarks at different levels, we can not only detect performance problems but also know which layer is the bottleneck (if there is one). That would allow us to know where to put our effort to improve performance and get the most from it.
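For the per-layer benchmarks, something along the lines of the existing torrent-repository benchmarks could work. Below is a hedged criterion-style sketch at the tracker-core level, where the `Tracker` struct and its `announce` method are hypothetical stand-ins for the real crate API:

```rust
// A criterion benchmark sketch for the tracker-core layer. The `Tracker`
// struct and its `announce` method are hypothetical placeholders, not the
// real tracker-core API.
use std::hint::black_box;

use criterion::{criterion_group, criterion_main, Criterion};

struct Tracker;

impl Tracker {
    fn new() -> Self {
        Tracker
    }

    // Placeholder for the real announce handling in tracker-core: it would
    // normally update the torrent repository and return the peer list.
    fn announce(&self, info_hash: [u8; 20], peer_port: u16) -> usize {
        (info_hash[0] as usize + peer_port as usize) % 50
    }
}

fn announce_benchmark(c: &mut Criterion) {
    let tracker = Tracker::new();

    c.bench_function("tracker_core_announce", |b| {
        b.iter(|| tracker.announce(black_box([0u8; 20]), black_box(6881)))
    });
}

criterion_group!(benches, announce_benchmark);
criterion_main!(benches);
```

Running a benchmark like this for each layer, on both the target branch and the PR branch, would then feed the same threshold comparison as the load test.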
cc @da2ce7