12:00:47 <E-P> #startmeeting Mer QA meeting 26/4/2012
12:00:47 <MerBot> Meeting started Thu Apr 26 12:00:47 2012 UTC.  The chair is E-P. Information about MeetBot at http://wiki.merproject.org/wiki/Meetings.
12:00:47 <MerBot> Useful Commands: #action #agreed #help #info #idea #link #topic.
12:01:57 <E-P> let's see how many can participate today
12:03:14 <E-P> #topic Current status
12:04:09 <E-P> from the last meeting we had one action and it was for me
12:04:18 <E-P> http://mer.bfst.de/meetings/mer-meeting/2012/mer-meeting.2012-04-19-11.00.html
12:04:49 <E-P> #info First draft for the test mapping, http://wiki.merproject.org/wiki/Talk:Quality/Test_processes
12:05:19 <lbt> back ...
12:05:21 <E-P> the mapping will change if we change the test process
12:05:32 <E-P> lbt: great to have you here
12:06:37 <E-P> #info timoph is working on https://bugs.merproject.org/show_bug.cgi?id=315
12:08:05 <E-P> #info E-P tested the eat package on nemo virtual image
12:09:10 <E-P> anything else?
12:09:41 <lbt> looking at the test definition ... makes sense
12:09:58 <E-P> we can discuss more about that in the next topic
12:10:04 <lbt> OK
12:10:38 <E-P> #topic Test constraints and mapping
12:11:11 <E-P> #info http://wiki.merproject.org/wiki/Talk:Quality/Test_processes
12:11:51 <E-P> lbt: did I miss something important from what we chatted on Monday?
12:12:12 <lbt> I haven't cross checked - I should action myself to do that :)
12:12:43 <lbt> so/but looking at : create_l2cap_connection .... I see environment: hardware .... do we need to say "needs a bluetooth peer"
12:13:09 <lbt> I don't want to be too detailed
12:13:25 <l3i> E-P: So, where would the actual test cases be defined?
12:13:27 <lbt> but somewhere that should be noted... "expects a peer setup using profile XYZ..."
12:14:13 <E-P> lbt: you are right, in that case the bluetooth peer could be some automatic server
12:14:24 <E-P> I try to avoid the master-slave stuff at the moment
12:14:49 <lbt> yep - that's exactly the kind of environmental dependency I was thinking of
12:14:50 <l3i> E-P: I mean not "how the tests are like", but how they're executed in  practice
12:15:04 <E-P> l3i: that example is missing the "steps"
12:15:22 <l3i> E-P: Anyway the idea would be to include the steps in same definition somehow?
12:15:32 <E-P> l3i: basically using our current test-definition format, extending it and removing it from the packages
12:15:53 <E-P> l3i: yes
12:16:19 <l3i> E-P: Still using XML as well, or some other format? And would there still be "test packages" as such (I'd assume not directly)?
12:16:26 <E-P> lbt: if we have environment dependencies, do those affect the test plan creation somehow?
12:16:47 <lbt> I think so - they need to be met to allow the test to be meaningful
12:16:59 <lbt> a failure when there is no peer is misleading
12:17:09 <lbt> and the automation needs to "know" somehow
12:17:13 <E-P> l3i: no decisions yet, I would like to keep the XML format because the tools are using that
12:17:25 <E-P> l3i: and at some level maybe supporting the old test packaging
12:17:32 <l3i> E-P: Ok
12:18:05 <E-P> lbt: yep, that requires that we can specify our test environments
12:18:09 * lbt has no problem with using XML or any other sane format btw
12:18:26 <E-P> which is another story
12:18:40 <lbt> E-P: OK
12:19:17 <E-P> what do you think about the idea of having test sets defined in the test-definition, like in my example?
12:19:50 <E-P> that would make the mapping a bit easier, if you don't have to define all the test cases there
12:19:57 <lbt> *nod*
12:20:29 <E-P> you could anyway do the mapping at the case level
12:20:30 <lbt> I like it
12:20:32 <l3i> E-P: Looks pretty good to me
12:20:45 <lbt> what is important is that the tests have/need state
12:21:14 <l3i> E-P: Nothing in the format example about "what components the test is testing", right?
12:21:37 <l3i> E-P: Just dependencies, but nothing to tell what the test target is
12:22:00 <l3i> E-P: I mean, on component/package level
12:22:29 <E-P> l3i: yes, the test-definition's component is missing from that
12:22:49 <l3i> E-P: You mean there would be a single component for the entire definition?
12:23:05 <lbt> I could see that being specified at the top and at a per-test level
12:23:05 <l3i> E-P: Or.. on which level? Definition of a case?
12:23:08 <E-P> l3i: no, test case could define that
12:23:14 <l3i> E-P: Ok
12:23:23 <l3i> E-P: Thought you thought so
12:23:56 <E-P> how about the mapping then...
12:24:07 <lbt> top-level for common, (eg bluez) and then maybe some specifics eg: create_l2cap_connection may test connman
12:24:27 <lbt> but xfer_1Mib_file doesn't test connman
12:24:44 <E-P> lbt: right
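Pulling together the points discussed so far (the environment dependency, set-level grouping, and a component at both the top and per-case level), a hypothetical sketch of an extended test-definition could look like this. All element and attribute names here are illustrative, not a settled format:

```xml
<!-- Hypothetical sketch only: names are illustrative, the format is undecided -->
<testdefinition>
  <suite name="bluetooth-tests" component="bluez">
    <set name="l2cap" feature="Bluetooth">
      <case name="create_l2cap_connection" component="connman">
        <!-- environment dependency: a failure with no peer is misleading,
             so the automation needs to "know" about it -->
        <environment type="hardware">
          <requires>bluetooth peer set up using profile XYZ</requires>
        </environment>
      </case>
      <case name="xfer_1Mib_file">
        <!-- inherits the top-level component (bluez); does not test connman -->
      </case>
    </set>
  </suite>
</testdefinition>
```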
12:24:53 <l3i> E-P: If you do the mapping in a separate mapping file, how well does that fit together with mapping per case?
12:25:02 <E-P> if we have in the mapping all core packages listed, that will be pretty heavy to maintain
12:25:14 <E-P> l3i: that is my question too
12:25:35 <E-P> should it be somehow automatic that we can use fields in the test definition to do the mapping?
12:25:48 <lbt> so far this file looks like what I thought a mapping file would look like
12:26:06 <lbt> barring the package bit we just mentioned
12:27:11 <l3i> E-P: I think the targets defined in test cases themselves (or on set level) should be the "master" source for "what a test tests"
12:27:13 <lbt> I'd expect to see (for clarity's sake) the actual test steps extracted to per-case files/definitions
12:27:40 <l3i> E-P: And that should be taken into account in any additional mapping outside the definition
12:28:14 <lbt> bear in mind that logically this is one huge data structure - the task is to make rational breaks to aid management
12:29:33 <lbt> this file looks a good level of detail to be useful for managing and deciding what tests to run - it helps aggregate using the sets
12:29:47 <lbt> it helps decide what set to use by looking into the components
12:29:59 <lbt> so I see it as ideal for test planning
12:30:20 <lbt> execution and maybe even test setup needs additional detail
12:30:43 <E-P> lbt: you are right, but maybe not splitting each test to single file
12:31:00 <lbt> implementation detail :)
12:31:23 <lbt> I'd expect them to be "not in this file"
12:31:36 <l3i> I think there should be some automation available to help create mappings based on the target info in test definitions. Like, listing sets out of a collection of tests that are "testing a certain component"
12:32:02 <lbt> l3i: is that backwards?
12:32:11 <lbt> automation uses mappings, not creates them
12:32:11 <E-P> l3i: if I understood you correctly, do you mean that when a package changes, the process looks at the test-definitions to see what to test?
12:32:13 <l3i> lbt: Depends on what is forwards ,)
12:32:30 <lbt> automation creates plans (ie .ks files and a list of tests/sets to run)
12:32:31 <l3i> E-P: Not necessarily
12:32:48 <l3i> E-P: When the test definitions change, the mappings should be revised
12:32:51 <lbt> ah
12:32:57 <E-P> l3i: i agree, having a tool to create sets easily eg. create set to domain connectivity
12:33:12 <lbt> yeah, that's not what I mean by automation
12:33:24 <lbt> automation here (for me anyhows)  is BOSS-like
12:33:41 <l3i> lbt: I just used the word automation to mean opposite of doing something manually
12:33:55 <lbt> *nod* a tool to analyse dependencies from the architecture and create a basic mapping
12:34:16 <l3i> In any case, whoever writes a test normally has a picture of what it's testing...
12:34:31 <l3i> And having to do mappings completely without that info in use would be illogical
12:34:35 <lbt> feature: bluetooth
12:35:00 <lbt> or domain: communications
12:35:08 <l3i> lbt: Where would the features/domains be defined
12:35:17 <l3i> and where the mapping of packages etc. to those would reside
12:35:28 <lbt> platform architecture
12:35:40 <Stskeeps> domains are in the mer arch docs, each rpm belongs to a group
12:35:49 <l3i> Stskeeps: Ok
12:35:50 <Stskeeps> group is domain
12:35:53 <lbt> http://releases.merproject.org/~carsten/GraphicsX11.png
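A minimal sketch of the kind of helper tool discussed above: it scans a hypothetical test-definition XML for per-case component info and groups cases into one set per domain, using an assumed package-to-domain mapping. In Mer that mapping would come from the platform architecture docs (rpm group = domain); here the mapping, file format, and names are all invented for illustration:

```python
import xml.etree.ElementTree as ET

# Assumed package -> domain mapping; in Mer this would be derived from
# the platform architecture docs, where each rpm group is a domain.
PACKAGE_DOMAIN = {
    "bluez": "Communications",
    "connman": "Communications",
    "xorg-x11": "Graphics",
}

# Hypothetical test-definition snippet; the real format is undecided.
DEFINITION = """
<testdefinition>
  <suite name="bt" component="bluez">
    <case name="create_l2cap_connection" component="connman"/>
    <case name="xfer_1Mib_file"/>
    <case name="x11_smoke" component="xorg-x11"/>
  </suite>
</testdefinition>
"""

def sets_by_domain(xml_text):
    """Group test cases into one set per domain, based on each case's
    component (falling back to the suite-level component)."""
    root = ET.fromstring(xml_text)
    sets = {}
    for suite in root.iter("suite"):
        default = suite.get("component")
        for case in suite.iter("case"):
            component = case.get("component") or default
            domain = PACKAGE_DOMAIN.get(component, "Unknown")
            sets.setdefault(domain, []).append(case.get("name"))
    return sets

if __name__ == "__main__":
    # e.g. "create set for communications domain" from the discussion
    print(sets_by_domain(DEFINITION))
```

This keeps the per-case target info in the definition as the "master" source, with the tool only aggregating it into coarser sets for the selection step.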
12:36:26 <l3i> Stskeeps: Ok, how about when only one package changes, I suppose you don't want to run all the tests for that domain..?
12:36:31 <lbt> something like this - but I agree that we need more scoping
12:36:53 <lbt> l3i: I think there are 2 scopes happening here
12:37:08 <lbt> one is to do with creating and managing tests
12:37:15 <lbt> the other to do with selection and execution
12:37:17 <l3i> lbt: yes
12:37:32 <l3i> lbt: And they need a clear connection too
12:37:36 <lbt> yep
12:37:59 <lbt> so your first point uses diagrams like Stskeeps' with docs and 'domain knowledge'
12:38:14 <lbt> it has to provide enough data to satisfy your 2nd point
12:38:41 <lbt> ie "when pkgA changes, what tests can I run given I have no hardware"
12:39:27 <lbt> E-P's test-definition format was written knowing how bluetooth works
12:39:52 <lbt> when looking at coverage we'd check that all packages were tested
12:40:09 <lbt> and (still in scope 1) we'd analyse that each package caused sane tests to be run
12:40:27 <lbt> scope 2 happens when we throw it over the wall :)
12:41:43 <lbt> l3i:  if we find that we run too many tests when a package changes, that's a bug in the test domain somewhere
12:42:08 <l3i> lbt: How do you measure when you're running too many tests?
12:42:16 <lbt> people complain
12:43:27 <Stskeeps> metrics, lack of qa capacity
12:43:43 <lbt> as I said, "we'd analyse that each package caused sane tests to be run" ... sane is a bit subjective
12:44:30 <E-P> we need some process how to add new tests to the mapping
12:44:48 <l3i> Hmm.. Would it make sense to 1) have each test case tell what it's supposed to test, what it requires etc., and then 2) use the mapping outside the test definition to select only the bigger blocks of tests to run, while on the case level the definition would be what "decides"..?
12:44:53 <E-P> so that we don't run too heavy tests when a package changes
12:44:59 <lbt> yep, there's an entire test development and release process E-P
12:45:13 <l3i> So, the mapping files would do only "rough selection"
12:45:52 <lbt> l3i: I see most of that in E-P's file already
12:46:00 <E-P> l3i: exactly
12:46:44 <l3i> lbt: me too, just hadn't got an explanation confirming that
12:46:57 <lbt> :)
12:47:33 <lbt> nb ... I'm a bit pedantic about names.... sry ... so I'm not keen on "Mapping/configuration file"
12:47:47 <l3i> lbt: Test selection file?
12:47:48 <lbt> I think that's practically the test plan
12:48:01 <lbt> and it looks like a .ks that implements the plan
12:48:08 <lbt> test selection is good too
12:48:20 <E-P> both fine for me
12:48:30 <lbt> some words have domain-specific meaning so I'm cautious
12:48:52 <l3i> I'd say a test plan is maybe something that's automatically generated in the process when there's a change, and when you go through the test selection and definitions
12:49:15 <l3i> to pick tests to execute in different environments
12:49:26 <l3i> After picking the tests, you'd have the plan
12:49:32 <lbt> yep
12:49:35 <E-P> true
12:49:43 <lbt> so can we use that term?
12:49:55 <l3i> lbt: I'd not use it for the "Mapping/configuration file"
12:50:05 <lbt> a test plan is a sequential set of tests on a specific platform configuration
12:50:06 <l3i> As I don't see that as a test plan
12:50:20 <l3i> lbt: Yes, that's more as I see it
12:50:41 <l3i> Hmm
12:51:32 <lbt> looking at the diagram: step 1 makes sense
12:51:38 <lbt> step 3 makes sense
12:51:39 <E-P> like I talked with timoph, these changes require lots of work and I would like us to have some automation up-and-running before making them
12:51:40 <lbt> step 2...
12:51:50 <lbt> E-P: not a problem
12:52:10 <lbt> E-P: to some degree we're blue-skying here
12:52:19 <E-P> if we use the current tools, we should get them up pretty fast
12:52:20 <lbt> I like that as it helps agree a direction
12:52:29 <l3i> E-P: What were you actually referring to by "Test plan(s)" in step 3?
12:53:13 <E-P> l3i: test plans to OTS, one test plan per device
12:53:20 <l3i> E-P: Ok, I see
12:54:40 <E-P> you are ok that the test selection file has .ks files and devices specified?
12:54:45 <l3i> Maybe that could be described on the page too
12:55:05 <lbt> so step 2... I see that as input to "Test execution planner" and I see it as all the "test-definition" files that we talked about earlier
12:55:13 <E-P> l3i: yes I will add that, that page is anyway only for discussion/draft
12:55:40 <l3i> E-P: ok
12:56:22 <lbt> so actually.... add them as another input to the execution planner
12:56:35 <E-P> nod
12:56:53 <lbt> and I think I'm seeing what that config file is now
12:57:12 <lbt> it's supposed to help determine which tests to pick and how to set up a device (like templates)
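A rough sketch of what such a test selection file might contain — picking coarse blocks of tests per device and pairing them with a .ks image configuration, as discussed above. The format, the device name, and the .ks filename are all illustrative placeholders:

```xml
<!-- Hypothetical test selection sketch: format and all names are illustrative -->
<testselection>
  <target device="nemo-virtual" image="nemo-example.ks">
    <!-- rough selection only; the per-case test definition decides the details -->
    <select set="l2cap"/>
    <select domain="Communications"/>
  </target>
</testselection>
```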
12:57:24 <E-P> one issue is where do we keep this information, it would be nice to have it in a database
12:58:06 <lbt> E-P: let's not worry too much just yet - decide how to master it and then optimise storage/retrieval later?
12:58:28 <E-P> fine for me
12:58:29 <l3i> A database whose content is automatically updated when changes to the files are committed could be useful
12:58:32 <lbt> yes
12:58:36 <E-P> yes
12:58:52 <l3i> But yes, too early
12:59:12 <E-P> I think we have now common understanding what we want and need
12:59:23 <lbt> FYI phaeron helped me fix IMG last night
12:59:54 <lbt> so that's a step closer to having an image
13:00:00 <E-P> #info test definition: adding environment dependency, what is needed from the test environment eg. needs bluetooth pair
13:00:21 <E-P> #info test environment dependency requires defining the environments
13:00:24 <lbt> next we need to update OBS to make image generation post-build easier... hopefully next week
13:01:02 <E-P> I will add task bugs to bugzilla about these features
13:01:13 <E-P> easier to track where we are and what needs to be done
13:01:30 <E-P> #info keeping the test definition in the management level, separating test steps to own file
13:01:46 <E-P> #info test definition: defining a component in the top and test case level
13:02:16 <E-P> #info test definition: defining a test set in the definition looks ok
13:02:22 <E-P> #info a tool is needed for creating test sets easily, eg. create set for communications domain
13:02:57 <E-P> #info test definition defines tests and their requirements, mapping is outside of the test definition
13:03:12 <E-P> #info test mapping selects bigger blocks of tests to run
13:03:25 <E-P> #info using test selection name instead of mapping
13:03:37 <E-P> #info setting up the current tools before implementing new process
13:03:56 <E-P> #info next we need to update OBS to make image generation post-build easier
13:04:12 <E-P> did I miss something?
13:05:01 <E-P> so next steps taking the current tools into action, adding tasks to bugzilla
13:05:19 <lbt> E-P: dual scope issue
13:06:25 <lbt> and maybe a re-draft of the diagram with 1,2 and 3 as inputs and 4 as the test-plan (as per the definition mentioned or similar)
13:07:40 <E-P> #info re-draft the diagram  1,2 and 3 as inputs and 4 as the test-plan
13:08:11 <E-P> I have to go soon, something else for today?
13:08:44 <E-P> I can create a draft plan how we take the QA test process into action
13:08:55 <lbt> nope, I have to go v. soon too
13:09:05 <l3i> Nothing from me
13:09:11 <l3i> either
13:09:18 <E-P> #action E-P Create a plan how to take the QA test process into action
13:09:21 <lbt> I'd like to see a minimal e2e solution ASAP
13:09:39 <lbt> it's always easier to grow something small :)
13:09:51 <E-P> I agree
13:10:42 <E-P> thanks for the discussion, again we are one step closer to having QA up-and-running
13:10:45 <lbt> grab me in #mer and we can get something operational
13:10:58 <E-P> ok
13:11:12 <E-P> #endmeeting