Hey everyone,

Just wanted to share a quick update on some behind-the-scenes improvements I've made to the `hive-bench` and `engine-bench` utilities – the tools that power the node performance data published to the @nectarflower (for Hive nodes) and @flowerengine (for Hive-Engine nodes) accounts.
Previously, node rankings were based on simpler metrics or on performance in individual tests. While useful, those numbers didn't always reflect which nodes feel best in real-world usage.
The main improvement I've implemented is a new weighted scoring system. This system assigns different levels of importance (weights) to the various benchmark tests (like block retrieval, history lookups, API call speed, latency, config access, etc.). For example, latency and block retrieval speed might be weighted more heavily than how quickly a node responds to a simple config request, as those factors often have a bigger impact on user experience in apps and wallets.
The result is a single `weighted_score` for each node, where a higher score indicates better overall performance based on these weighted factors.
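As a rough illustration of the idea (the test names and weight values below are hypothetical placeholders, not the actual numbers hive-bench uses), a weighted score boils down to a weighted average over the tests a node completed:

```python
# Hypothetical sketch of a weighted scoring scheme; the actual test
# names and weights in hive-bench v0.2.6 may differ.

# Per-test scores for one node, normalized so higher is better.
test_scores = {
    "block": 0.90,    # block retrieval speed
    "history": 0.80,  # account history lookups
    "apicall": 0.85,  # general API call speed
    "latency": 0.95,  # round-trip latency
    "config": 0.70,   # config endpoint access
}

# Weights reflecting impact on real-world usage: latency and block
# retrieval count for more than a simple config request.
weights = {
    "block": 0.25,
    "history": 0.20,
    "apicall": 0.20,
    "latency": 0.30,
    "config": 0.05,
}

def weighted_score(scores, weights):
    """Weighted average over the tests the node actually completed."""
    completed = [t for t in weights if t in scores]
    total_weight = sum(weights[t] for t in completed)
    if total_weight == 0:
        return 0.0
    return sum(scores[t] * weights[t] for t in completed) / total_weight

print(round(weighted_score(test_scores, weights), 4))  # 0.875 with these numbers
```

Normalizing by the total weight of the *completed* tests keeps a node that skipped a low-weight test comparable to one that ran everything.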
Richer Data in Metadata
This change also means the JSON data published to the @nectarflower and @flowerengine accounts will be more informative in the upcoming runs. You can expect to see:
- Nodes Sorted by Weighted Score: The primary list of nodes will now be ordered by this new, more realistic performance score (best score first).
- Weighted Score Included: Each node entry will include its calculated `weighted_score`.
- Tests Completed Count: Each node will also show how many of the benchmark tests it successfully completed (e.g., `tests_completed: 5/5` or `4/5`). This gives a quick indicator of reliability across different test types.
- Scoring Weights Published: The metadata will also include the specific weights used in the scoring calculation, for transparency.
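To make that concrete, a node entry in the published metadata might look roughly like this (the values and exact field layout here are made up for illustration; the real shape is whatever the v0.2.6 tools emit):

```json
{
  "nodes": [
    {
      "url": "https://api.example.com",
      "weighted_score": 92.4,
      "tests_completed": "5/5"
    }
  ],
  "weights": {
    "latency": 0.30,
    "block": 0.25,
    "history": 0.20,
    "apicall": 0.20,
    "config": 0.05
  }
}
```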
Why Does This Matter?
The goal is to provide a more nuanced and useful picture of node performance. For developers using libraries like `hive-nectar` (which consume this data), or anyone manually selecting nodes, the weighted score and additional data should offer a better way to pick nodes that are not just technically working, but likely to provide a faster, smoother experience in actual applications.
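As a sketch of how a consumer might use the published data (the field names here are my assumptions, and the metadata string is a stand-in for whatever you fetch from the account's `json_metadata`), picking the top node is just a sort by `weighted_score`:

```python
import json

# Stand-in for the json_metadata string fetched from @nectarflower;
# the "nodes" / "weighted_score" / "tests_completed" field names are
# assumed for illustration.
raw_metadata = json.dumps({
    "nodes": [
        {"url": "https://node-a.example", "weighted_score": 88.1, "tests_completed": "4/5"},
        {"url": "https://node-b.example", "weighted_score": 93.7, "tests_completed": "5/5"},
    ]
})

def best_node(json_metadata: str) -> str:
    """Return the URL of the highest-scoring node in the metadata."""
    data = json.loads(json_metadata)
    top = max(data["nodes"], key=lambda n: n.get("weighted_score", 0.0))
    return top["url"]

print(best_node(raw_metadata))  # highest weighted_score wins
```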
These updates (bumping the tools to v0.2.6) will be reflected in the benchmark runs and metadata updates going forward. Keep an eye on the @nectarflower and @flowerengine accounts!
As always,
Michael Garcia a.k.a. TheCrazyGM