This session will include the following subject(s):
Continuation of third-party CI:
We now hold the hypervisor driver interface to a much higher standard than we used to. Should we hold other plug-in interfaces (scheduler, database, etc.) to a higher testing standard as well? What would that look like?
(Session proposed by Michael Still)
Raising the bar on virt CI: status and next steps:
Last cycle, we enforced the requirement that all drivers in Nova have functional CI testing systems. Next, we need to work toward a minimum level of performance in terms of job run time, coverage, and pass percentage.
In this session we should review where we are, and where we need to get to in the Juno cycle.
(Session proposed by Dan Smith)
Base feature requirements for compute drivers:
In the Icehouse cycle, we finished implementing the CI testing requirement for all compute drivers. It's now time to move on to the next topic: consistency between drivers.
The support matrix for features across the compute drivers is full of gaps. We should decide what base level of functionality is absolutely required of every driver.
- What is the required feature list? How should we decide what's required?
- What timeline should we set for meeting the baseline? What happens if it isn't met?
- How does the resulting list line up with our current set of features marked as the core API for v3?
- Should the requirements differ for a container-based driver vs. a hypervisor-based one?
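One way to frame the baseline discussion is to express the required feature list as data and mechanically diff each driver's capabilities against it, rather than maintaining the support matrix by hand. The sketch below is purely illustrative: the feature names, driver names, and the DriverCapabilities class are hypothetical and are not Nova's actual API.

```python
# Hypothetical baseline check: REQUIRED_FEATURES and the capability
# names below are illustrative, not Nova's real feature set.
REQUIRED_FEATURES = {"boot", "reboot", "snapshot", "attach_volume"}


class DriverCapabilities:
    """Records which features a compute driver claims to support."""

    def __init__(self, name, supported):
        self.name = name
        self.supported = set(supported)

    def missing_features(self):
        """Return the required features this driver does not implement."""
        return REQUIRED_FEATURES - self.supported


# Example matrix: one complete driver, one with gaps.
drivers = [
    DriverCapabilities("full_driver", REQUIRED_FEATURES),
    DriverCapabilities("partial_driver", {"boot", "reboot"}),
]

for driver in drivers:
    gaps = driver.missing_features()
    if gaps:
        print("%s is missing: %s" % (driver.name, ", ".join(sorted(gaps))))
    else:
        print("%s meets the baseline" % driver.name)
```

Keeping the baseline as a single declarative set would also make the "what happens if it isn't met" question enforceable: a gating job could fail whenever a driver's missing-features set is non-empty.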