Tuesday, May 13, 2014 • 5:30pm - 6:10pm
Test Matrices and Feature Parity


The idea for this talk originated in https://etherpad.openstack.org/p/juno-test-maxtrices. You can refer to the content there, but I have also included the current state of that etherpad here.

During the Icehouse cycle we added a few extra features to the check and gate Zuul pipelines. In short: active changes are automatically rechecked if their test results become more than 72 hours old, and if a change has check results more than 24 hours old when it is approved, it is rechecked before queuing in the gate pipeline. The goal behind these changes was to avoid stale test results, so that reviewers always have up-to-date and relevant results.
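The two staleness rules above can be sketched as a couple of predicates. This is a hypothetical illustration of the policy as described, not Zuul's actual implementation; only the 72-hour and 24-hour thresholds come from the text.

```python
from datetime import datetime, timedelta

# Thresholds taken from the description above; function names are made up.
ACTIVE_RECHECK_AGE = timedelta(hours=72)  # active changes: recheck after 72h
GATE_STALE_AGE = timedelta(hours=24)      # at approval: recheck if older than 24h

def needs_periodic_recheck(last_result_time, now=None):
    """Active change whose check results are more than 72 hours old."""
    now = now or datetime.utcnow()
    return now - last_result_time > ACTIVE_RECHECK_AGE

def needs_recheck_before_gate(last_result_time, now=None):
    """Approved change whose check results are more than 24 hours old,
    so it must be rechecked before entering the gate pipeline."""
    now = now or datetime.utcnow()
    return now - last_result_time > GATE_STALE_AGE
```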

While these changes have succeeded in keeping results up to date, they have traded gate pileups for check pileups. Arguably that was the intent, so it is not a bad thing in itself, but it would be great if we could further streamline the process for the sanity of developers and users.

Wouldn't it be great if we could come up with some sort of minimal test matrix that gives us reasonable code coverage? To date we have done this in a fairly ad hoc manner, but a formal matrix, built with an understanding of the interfaces between different pieces of code, should allow us to run fewer tests per change, increasing the overall throughput of the system.

Simplified matrix:

             TestA     TestB     TestC     TestD
DB           MySQL     MySQL     Postgres  Postgres
Hypervisor   KVM/QEMU  Docker    LXC       Xen
MQ           Rabbit    ZMQ       Qpid      $RANDOM
Cells        No        Yes       No        Yes
Networking   NovaNet   NeutronA  NeutronB  NeutronC

NeutronA: neutron + linuxbridge
NeutronB: neutron + ovs
NeutronC: neutron + opendaylight

* Note: I haven't looked at any code and have no idea if the above remotely makes sense.
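To make the coverage argument concrete, here is a small sketch that encodes the simplified matrix above and checks that the four test columns exercise every individual axis value at least once, even though the full cross product of axis values would be far larger. The data layout and function name are mine, purely for illustration.

```python
# Hypothetical encoding of the simplified matrix above: each test column
# pins one value on each configuration axis.
configs = {
    "TestA": {"DB": "MySQL", "Hypervisor": "KVM/QEMU", "MQ": "Rabbit",
              "Cells": "No", "Networking": "NovaNet"},
    "TestB": {"DB": "MySQL", "Hypervisor": "Docker", "MQ": "ZMQ",
              "Cells": "Yes", "Networking": "NeutronA"},
    "TestC": {"DB": "Postgres", "Hypervisor": "LXC", "MQ": "Qpid",
              "Cells": "No", "Networking": "NeutronB"},
    "TestD": {"DB": "Postgres", "Hypervisor": "Xen", "MQ": "$RANDOM",
              "Cells": "Yes", "Networking": "NeutronC"},
}

def covered_axis_values(configs):
    """Return the set of (axis, value) pairs exercised by the configs."""
    covered = set()
    for cfg in configs.values():
        covered.update(cfg.items())
    return covered

# The matrix has 2 DBs * 4 hypervisors * 4 MQs * 2 cells settings *
# 4 networking setups = 256 possible combinations, yet 4 columns hit
# all 16 distinct axis values.
print(len(configs), "configs cover", len(covered_axis_values(configs)),
      "axis values")
```

A more formal version of this idea would aim for pairwise (or higher-order) coverage of axis-value combinations, which is exactly where understanding the interfaces between components tells you which pairs actually matter.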

So why won't this work? Feature parity. It turns out we have done a terrible job of maintaining feature parity across the different rows of that matrix. Nova with cells and Nova without cells support different things. Neutron and nova-network don't have feature parity. Same story with the hypervisors. I think the DB layer is just about the only one that *should* work regardless of setup.

We have two intertwined problems here: we run a ton of tests, and we should be more deliberate about what we test, which requires us to care about feature parity. We should embrace both problems (and by embrace I mean actually care about them instead of ignoring them). (This is way off in left field and getting into refstack territory >_>) Maybe we should consider chopping the dead weight from that matrix and focusing on testing the things that "work".

I propose that we get everyone in a room and talk about this. We need a better story, as an overall project (OpenStack), on how we deal with feature parity within and across projects. Assuming we can come up with some concrete ideas for fixing feature parity, we should then be able to talk about fixing our test matrices too.

(Session proposed by Clark Boylan)


B302
