Update Benchmarks
Similar to #3050
We need to go through each file in benchmarks_v2 and verify it is benchmarking the same thing as the original. This is far more nuanced than updating the tests. sort-cases in particular I remember being concerned about, since it doesn't lend itself well to the new format (it makes use of Python generators; see #2276)
One quick litmus test is to run the new and old benchmarks and make sure the results are fairly similar
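As a rough sketch of that litmus test, something like the following could flag benchmarks whose new rates diverge from the old ones beyond some tolerance. The result format here (benchmark name mapped to a throughput rate) and the 20% tolerance are assumptions for illustration, not the actual arkouda benchmark output schema:

```python
# Hypothetical sketch: compare old vs new benchmark rates within a tolerance.
# The name -> rate mapping is an assumed format, not the real output schema.

def within_tolerance(old, new, rel_tol=0.2):
    """Return names of benchmarks whose new rate deviates from the old rate
    by more than rel_tol (relative difference), plus any missing entries."""
    suspect = []
    for name, old_rate in old.items():
        if name not in new:
            suspect.append(name)
            continue
        if abs(new[name] - old_rate) > rel_tol * old_rate:
            suspect.append(name)
    return suspect

old_rates = {"argsort": 3.2, "gather": 5.1, "scatter": 4.8}
new_rates = {"argsort": 3.1, "gather": 2.0, "scatter": 4.9}
print(within_tolerance(old_rates, new_rates))  # → ['gather']
```

Anything it flags would warrant a closer look at whether the v2 port is actually measuring the same operation.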
@Bears-R-Us/arkouda-core-dev while going through these, we should keep in mind whether it makes sense for something to be used as a benchmark. Ideally, benchmarks capture the core functionality that underpins most workflows. Unlike tests, I don't think we should try to benchmark as many of our functions as possible; we should focus on the ones where we really want to know if there's a performance drop-off. I think we should keep this targeted to the ones we really care about, but I'm open to other opinions!
I know some of these benchmarks were added as one-offs to see how a newly added function performed and to track it as we optimized
Once we are confident these work and cover what we are looking for, we should work with @hokiegeek2 to use the JSON(?) output to create a Grafana dashboard. Then we should work with @bmcdonald3 or @jeremiah-corrado to create a script that turns the output of these into something readable by the chpl nightly graphs
I chatted offline with @bmcdonald3, and it seems like converting from the JSON to something readable by the chpl graphs would be quite difficult. I think the better approach is to find a way to add an option for our current benchmarks to emit JSON output
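For what a JSON output option might look like, here's a minimal sketch of timing a callable and serializing the result. The field names (`benchmark`, `trials`, `avg_time_s`, `rate_gib_per_s`) are assumptions for discussion, not an agreed-upon schema:

```python
import json
import time

# Hypothetical sketch of a JSON output option for a benchmark driver.
# Field names below are placeholders, not a settled schema.

def time_it(fn, trials=3):
    """Run fn `trials` times and return the elapsed wall-clock times."""
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return times

def to_json(name, times, nbytes):
    """Serialize one benchmark's timings into a JSON record."""
    avg = sum(times) / len(times)
    return json.dumps({
        "benchmark": name,
        "trials": len(times),
        "avg_time_s": avg,
        "rate_gib_per_s": nbytes / avg / 2**30,
    })

payload = to_json("argsort", time_it(lambda: sorted(range(10**5))), nbytes=8 * 10**5)
print(payload)
```

Emitting one record per benchmark like this would be easy to pick up from a dashboard, while the existing human-readable output stays the default.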
Helpful links:
https://chapel-lang.org/docs/developer/bestPractices/TestSystem.html#a-performance-test
https://chapel-lang.org/perf/arkouda/16-node-xc/?graphs=all
https://github.com/Bears-R-Us/arkouda/blob/master/benchmarks/run_benchmarks.py