@jamesturner246 @Aurashk @miruuna this is the PR to clarify what tests should be performed for PR merges. It updates the PR template so the testing requirements are clearer; this is something I would value feedback on. The tests that should be performed are:
Looking forward to feedback from you.
Thanks Pawel, that's very helpful.
|
There are no hardware restrictions, but this code has been written to run on AlmaLinux9. If I remember correctly, @jamesturner246 got CVMFS set up on his laptop, and should be able to run these tests locally.
|
This looks good and it'll help standardize the workflow. However, running …
Thanks Miruna. Yes, it is expected that this test takes very long, but it should only have to be run when major changes are made to the core codebase, prior to merging a PR. I will make this clearer in the list of testing requirements.
|
In case it is useful, I will contribute a couple of notes regarding where we can run the …
|
Thanks Kurt, that's very helpful; it's possible 2. explains some test failures I was seeing locally last time I tried. Is it totally unpredictable what will happen in the tests without these checks for computational resources, or is there a common point of failure? One other more general thing that might be useful to know: what makes the tests run long? Is it a timed simulation of everything working meaningfully together, or is it doing a lot of computational work?
|
Another thing came to mind. What's the situation with this MQST test in the CI https://github.com/DUNE-DAQ/drunc/actions/workflows/run_mqst.yml? Judging by the actions runs, it seems to have been abandoned earlier in the year. Is this something we want to get working again?
When a session is running on a host with insufficient resources, it will likely throw errors with a number of missing/empty data products. The runtime comes from running a variety of configurations with many runs: there are 9 tests, with multiple configurations for some of them. Supposing each test takes 3 minutes, that gives you the approximate half hour of running time.
This is being held back by the development of the Subprocess process manager, in this PR.
|
In principle, the time each regression test takes to run is dominated by the amount of time spent waiting in each FSM state (e.g. trigger-enabled), for as long as the writer of the test chose. Of course, if lots of failures happen and/or a process stalls or crashes, run control transitions can take longer than usual (e.g. some of the "stop-run" transitions), and those can add a noticeable amount of extra time.
|
Hi all. As discussed last meeting, I think a special cluster account just for testing PRs would be invaluable for this workflow. Something we could perhaps hook into CI -- e.g. manually (or even auto, but maybe too noisy) trigger the full integration test suite on the cluster once the PR is marked ready for review.
|
@jamesturner246 @Aurashk @miruuna @bieryAtFnal
|
Hi @PawelPlesniak, the updated template looks reasonable to me. I've made a note to myself to revisit the template once the global bundle script is generally available. When I do that, I will update the template to reference the new script (…
|
Looking nice and very useful, thanks @PawelPlesniak. I have a couple of small suggestions.
|
|
@emmuhamm can you give a final review?
|
Hi @PawelPlesniak, thanks for this :D. Since it's an MD-based PR I'll just note my thoughts here.

**Comments in template**
I see a lot of comments in the template itself. Have you considered exposing the template so that it's visible in the docs? See the template docs.

**Nightlies**
I think in the template the developer should list which nightly was used to test this. As a reviewer, to test a PR I would normally check out a specific nightly, clone the relevant branch(es), and then run the test. If not specified, the checked-out PR branch may no longer be up to date with the rest of the nightlies. Naturally the best thing would be to have the PRs continuously up to date with the current nightly via constant rebases/merges. Actually, would this be worth sticking in the template as a 'thing to do'?

**Verbosity**
Right now the initial checklist bit is quite verbose. As a developer I would prefer just a clear set of check boxes I need to tick as required (e.g. preparing the PR for review, or reviewing someone else's). Maybe consider either having separate docs with all this info (not personally preferred), or restructuring this section and making use of the "details" functionality in GitHub Flavored Markdown (e.g. a collapsible titled "Title of collapsible" that expands to "some extra information!"). In particular, the parts I found quite verbose:

**Linting**
Should linting be part of the checklist?

**Merge responsibilities**
It still isn't clear who does the final merging. Is it the reviewer, the assignee, or the author?
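In case it helps, a minimal sketch of the GitHub Flavored Markdown "details" functionality suggested above (the title and body text are placeholders, not proposed template content):

```markdown
<details>
<summary>Title of collapsible</summary>

some extra information!

</details>
```

Note the blank line after `</summary>` -- GitHub needs it for the markdown inside the collapsible to be rendered.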
Description
Changes the structure of the PR template to prioritize testing, and introduces the requirements from other repos as a field.
No tests or further checks have been run, as this is a template issue, and does not affect the core code.
Type of change
Key checklist
- [ ] Tests pass (`python -m pytest`)
- [ ] Pre-commit checks pass (`pre-commit run --all-files`)

Further checks
(Indicate issue here: # (issue))