12:00:47 #startmeeting Mer QA meeting 26/4/2012
12:00:47 Meeting started Thu Apr 26 12:00:47 2012 UTC. The chair is E-P. Information about MeetBot at http://wiki.merproject.org/wiki/Meetings.
12:00:47 Useful Commands: #action #agreed #help #info #idea #link #topic.
12:01:57 let's see how many can participate today
12:03:14 #topic Current status
12:04:09 from the last meeting we had one action, and it was for me
12:04:18 http://mer.bfst.de/meetings/mer-meeting/2012/mer-meeting.2012-04-19-11.00.html
12:04:49 #info First draft for the test mapping, http://wiki.merproject.org/wiki/Talk:Quality/Test_processes
12:05:19 back ...
12:05:21 the mapping will change if we change the test process
12:05:32 lbt: great to have you here
12:06:37 #info timoph is working on https://bugs.merproject.org/show_bug.cgi?id=315
12:08:05 #info E-P tested the eat package on a Nemo virtual image
12:09:10 anything else?
12:09:41 looking at the test definition ... makes sense
12:09:58 we can discuss more about that in the next topic
12:10:04 OK
12:10:38 #topic Test constraints and mapping
12:11:11 #info http://wiki.merproject.org/wiki/Talk:Quality/Test_processes
12:11:51 lbt: did I miss something important from what we chatted about on Monday?
12:12:12 I haven't cross-checked - I should action myself to do that :)
12:12:43 so/but looking at create_l2cap_connection ... I see environment: hardware ... do we need to say "needs a bluetooth peer"?
12:13:09 I don't want to be too detailed
12:13:25 E-P: So, where would the actual test cases be defined?
12:13:27 but that should be noted somewhere ... "expects a peer set up using profile XYZ..."
12:14:13 lbt: you are right; in that case the bluetooth peer could be some automatic server
12:14:24 I'm trying to avoid the master-slave stuff at the moment
12:14:49 yep - that's exactly the kind of environmental dependency I was thinking of
12:14:50 E-P: I mean not "what the tests are like", but how they're executed in practice
12:15:04 l3i: that example is missing the "steps"
12:15:22 E-P: Anyway, the idea would be to include the steps in the same definition somehow?
12:15:32 l3i: basically using our current test-definition format, extending it and removing it from the packages
12:15:53 l3i: yes
12:16:19 E-P: Still using XML as well, or some other format? And would there still be "test packages" as such (I'd assume not directly)?
12:16:26 lbt: if we have environment dependencies, do those affect the test plan creation somehow?
12:16:47 I think so - they need to be met for the test to be meaningful
12:16:59 a failure when there is no peer is misleading
12:17:09 and the automation needs to "know" somehow
12:17:13 l3i: no decisions yet; I would like to keep the XML format because the tools are using it
12:17:25 l3i: and at some level maybe supporting the old test packaging
12:17:32 E-P: Ok
12:18:05 lbt: yep, that requires that we can specify our test environments
12:18:09 * lbt has no problem with using XML or any other sane format btw
12:18:26 which is another story
12:18:40 E-P: OK
12:19:17 what do you think about the idea of having test sets defined in the test-definition, like in my example?
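A minimal sketch of what the extended test-definition discussed here might look like, assuming the current XML schema (testdefinition/suite/set/case/step, with a per-set environments block). The <requires> element for the Bluetooth peer is hypothetical, added only to illustrate the environmental-dependency idea, and the script path is invented for illustration:

    <testdefinition version="1.0">
      <suite name="bluetooth-tests">
        <set name="l2cap-set">
          <!-- existing schema: where this set may run -->
          <environments>
            <scratchbox>false</scratchbox>
            <hardware>true</hardware>
          </environments>
          <!-- hypothetical extension: an external dependency the
               automation must satisfy, e.g. "expects a peer set up
               using profile XYZ" -->
          <requires>bluetooth peer, profile XYZ</requires>
          <case name="create_l2cap_connection">
            <!-- the "steps" the wiki example was missing -->
            <step>/opt/tests/bluetooth/l2cap-connect.sh</step>
          </case>
        </set>
      </suite>
    </testdefinition>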
12:19:50 that would make the mapping a bit easier, if you don't have to define all the test cases there
12:19:57 *nod*
12:20:29 you could still do the mapping at the case level anyway
12:20:30 I like it
12:20:32 E-P: Looks pretty good to me
12:20:45 what is important is that the tests have/need state
12:21:14 E-P: Nothing in the format example about "what components the test is testing", right?
12:21:37 E-P: Just dependencies, but nothing to tell what the test target is
12:22:00 E-P: I mean, on the component/package level
12:22:29 l3i: yes, the test-definition's component is missing from that
12:22:49 E-P: You mean there would be a single component for the entire definition?
12:23:05 I could see that being specified at the top and at a per-test level
12:23:05 E-P: Or... on which level? Definition of a case?
12:23:08 l3i: no, a test case could define that
12:23:14 E-P: Ok
12:23:23 E-P: I thought that's what you meant
12:23:56 how about the mapping then...
12:24:07 top level for the common component (e.g. bluez), and then maybe some specifics, e.g. create_l2cap_connection may test connman
12:24:27 but xfer_1Mib_file doesn't test connman
12:24:44 lbt: right
12:24:53 E-P: If you do the mapping in a separate mapping file, how well does that fit together with mapping per case?
12:25:02 if we have all core packages listed in the mapping, it will be pretty heavy to maintain
12:25:14 l3i: that is my question too
12:25:35 should it somehow be automatic, so that we can use fields in the test definition to do the mapping?
12:25:48 so far this file looks like what I thought a mapping file would look like
12:26:06 barring the package bit we just mentioned
12:27:11 E-P: I think the targets defined in the test cases themselves (or on the set level) should be the "master" source for "what a test tests"
12:27:13 I'd expect to see (for clarity's sake) the actual test steps extracted into per-case files/definitions
12:27:40 E-P: And that should be taken into account in any additional mapping outside the definition
12:28:14 bear in mind that logically this is one huge data structure - the task is to make rational breaks to aid management
12:29:33 this file looks like a good level of detail to be useful for managing and deciding what tests to run - it helps aggregate using the sets
12:29:47 it helps decide what set to use by looking at the components
12:29:59 so I see it as ideal for test planning
12:30:20 execution and maybe even test setup need additional detail
12:30:43 lbt: you are right, but maybe not splitting each test into its own file
12:31:00 implementation detail :)
12:31:23 I'd expect them to be "not in this file"
12:31:36 I think there should be some automation available to help create mappings based on the target info in test definitions. Like, listing sets out of a collection of tests that are "testing a certain component"
12:32:02 l3i: is that backwards?
12:32:11 automation uses mappings, it doesn't create them
12:32:11 l3i: if I understood you correctly, do you mean that when a package changes, the process looks at the test-definitions to see what to test?
12:32:13 lbt: Depends on what is forwards ;)
12:32:30 automation creates plans (i.e. .ks files and a list of tests/sets to run)
12:32:31 E-P: Not necessarily
12:32:48 E-P: When the test definitions change, the mappings should be revised
12:32:51 ah
12:32:57 l3i: I agree, having a tool to create sets easily, e.g. "create set for domain connectivity"
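Tying the component discussion above together: a sketch of a top-level component with per-case additions. The component attribute and its inheritance are assumptions about how the schema could be extended; bluez and connman are the examples from the discussion:

    <suite name="bluetooth-tests" component="bluez">
      <set name="l2cap-set">
        <!-- cases inherit component="bluez" from the suite -->
        <case name="xfer_1Mib_file">
          <step>/opt/tests/bluetooth/xfer-1mib.sh</step>
        </case>
        <!-- per-case addition: this case also exercises connman -->
        <case name="create_l2cap_connection" component="bluez,connman">
          <step>/opt/tests/bluetooth/l2cap-connect.sh</step>
        </case>
      </set>
    </suite>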
12:33:12 yeah, that's not what I mean by automation
12:33:24 automation here (for me anyhow) is BOSS-like
12:33:41 lbt: I just used the word automation to mean the opposite of doing something manually
12:33:55 *nod* a tool to analyse dependencies from the architecture and create a basic mapping
12:34:16 In any case, whoever writes a test normally has a picture of what it's testing...
12:34:31 And having to do mappings completely without using that info would be illogical
12:34:35 feature: bluetooth
12:35:00 or domain: communications
12:35:08 lbt: Where would the features/domains be defined?
12:35:17 and where would the mapping of packages etc. to those reside?
12:35:28 platform architecture
12:35:40 domains are in the Mer arch docs; each RPM belongs to a group
12:35:49 Stskeeps: Ok
12:35:50 the group is the domain
12:35:53 http://releases.merproject.org/~carsten/GraphicsX11.png
12:36:26 Stskeeps: Ok, how about when only one package changes? I suppose you don't want to run all the tests for that domain..?
12:36:31 something like this - but I agree that we need more scoping
12:36:53 l3i: I think there are 2 scopes happening here
12:37:08 one is to do with creating and managing tests
12:37:15 the other to do with selection and execution
12:37:17 lbt: yes
12:37:32 lbt: And they need a clear connection too
12:37:36 yep
12:37:59 so your first point uses diagrams like Stskeeps' with docs and 'domain knowledge'
12:38:14 it has to provide enough data to satisfy your 2nd point
12:38:41 i.e. "when pkgA changes, what tests can I run given I have no hardware"
12:39:27 E-P's test-definition format was written knowing how bluetooth works
12:39:52 when looking at coverage we'd check that all packages were tested
12:40:09 and (still in scope 1) we'd analyse that each package caused sane tests to be run
12:40:27 scope 2 happens when we throw it over the wall :)
12:41:43 l3i: if we find that we run too many tests when a package changes, that's a bug in the test domain somewhere
12:42:08 lbt: How do you measure when you're running too many tests?
12:42:16 people complain
12:43:27 metrics, lack of QA capacity
12:43:43 as I said, "we'd analyse that each package caused sane tests to be run" ... sane is a bit subjective
12:44:30 we need some process for adding new tests to the mapping
12:44:48 Hmm... Would it make sense to 1) have each test case tell what it's supposed to test, what it requires etc., and then 2) use the mapping outside the test definition to select only the bigger blocks of tests to run, while on the case level the definition would be what "decides"?
12:44:53 so that we don't run too heavy a set of tests when a package changes
12:44:59 yep, there's an entire test development and release process, E-P
12:45:13 So, the mapping files would do only "rough selection"
12:45:52 l3i: I see most of that in E-P's file already
12:46:00 l3i: exactly
12:46:44 lbt: me too, just hadn't got an explanation confirming that
12:46:57 :)
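As a sketch of the "rough selection" idea just agreed: the file outside the test definitions would only name domains, features and sets, leaving the case-level decisions to the definitions themselves. No format was agreed in the meeting; every element name below is illustrative:

    <!-- illustrative only: rough selection, no per-case detail -->
    <testselection>
      <selection domain="communications">
        <include feature="bluetooth"/>
        <include set="l2cap-set"/>
        <!-- case-level decisions stay in the test definitions -->
      </selection>
    </testselection>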
12:47:33 nb ... I'm a bit pedantic about names ... sorry ... so I'm not keen on "Mapping/configuration file"
12:47:47 lbt: Test selection file?
12:47:48 I think that's practically the test plan
12:48:01 and it looks like a .ks that implements the plan
12:48:08 test selection is good too
12:48:20 both are fine for me
12:48:30 some words have domain-specific meaning, so I'm cautious
12:48:52 I'd say a test plan is maybe something that's automatically generated in the process when there's a change, and when you go through the test selection and definitions
12:49:15 to pick tests to execute in different environments
12:49:26 After picking the tests, you'd have the plan
12:49:32 yep
12:49:35 true
12:49:43 so can we use that term?
12:49:55 lbt: I'd not use it for the "Mapping/configuration file"
12:50:05 a test plan is a sequential set of tests on a specific platform configuration
12:50:06 As I don't see that as a test plan
12:50:20 lbt: Yes, that's more how I see it
12:50:41 Hmm
12:51:32 looking at the diagram: step 1 makes sense
12:51:38 step 3 makes sense
12:51:39 as I discussed with timoph, these changes require a lot of work, and I would like us to have some automation up and running before making them
12:51:40 step 2...
12:51:50 E-P: not a problem
12:52:10 E-P: to some degree we're blue-skying here
12:52:19 if we use the current tools, we should get them up pretty fast
12:52:20 I like that, as it helps us agree on a direction
12:52:29 E-P: What were you actually referring to by "Test plan(s)" in step 3?
12:53:13 l3i: test plans to OTS, one test plan per device
12:53:20 E-P: Ok, I see
12:54:40 are you ok with the test selection file having .ks files and devices specified?
12:54:45 Maybe that could be described on the page too
12:55:05 so step 2... I see that as input to the "Test execution planner", and I see it as all the "test-definition" files that we talked about earlier
12:55:13 l3i: yes, I will add that; that page is only for discussion/a draft anyway
12:55:40 E-P: ok
12:56:22 so actually... add them as another input to the execution planner
12:56:35 nod
12:56:53 and I think I'm seeing what that config file is now
12:57:12 it's supposed to help determine which tests to pick and how to set up a device (like templates)
12:57:24 one issue is where we keep this information; it would be nice to have it in a database
12:58:06 E-P: let's not worry too much just yet - decide how to master it and then optimise storage/retrieval later?
12:58:28 fine for me
12:58:29 A database whose content is automatically updated when changes to the files are committed could be useful
12:58:32 yes
12:58:36 yes
12:58:52 But yes, too early
12:59:12 I think we now have a common understanding of what we want and need
12:59:23 FYI phaeron helped me fix IMG last night
12:59:54 so that's a step closer to having an image
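To make the test-plan definition above concrete: a sketch of what a generated plan might contain - a sequential set of tests on a specific platform configuration, one plan per device for OTS, with the image coming from a .ks file in the test selection. The format, device name and file names are all invented for illustration:

    <!-- illustrative only: output of the execution planner -->
    <testplan device="nemo-n900" image="nemo-handset.ks">
      <!-- sequential sets picked from the selection and definitions -->
      <run set="l2cap-set" environment="hardware"/>
      <run set="bt-file-transfer" environment="hardware"/>
    </testplan>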
13:00:00 #info test definition: adding environment dependency, i.e. what is needed from the test environment, e.g. needs a bluetooth peer
13:00:21 #info test environment dependency requires defining the environments
13:00:24 next we need to update OBS to make image generation post-build easier... hopefully next week
13:01:02 I will add task bugs to Bugzilla about these features
13:01:13 easier to track where we are and what needs to be done
13:01:30 #info keeping the test definition at the management level, separating test steps into their own file
13:01:46 #info test definition: defining a component at the top and test case level
13:02:16 #info test definition: defining a test set in the definition looks ok
13:02:22 #info a tool is needed for creating test sets easily, e.g. "create set for communications domain"
13:02:57 #info test definition defines tests and their requirements; mapping is outside of the test definition
13:03:12 #info test mapping selects bigger blocks of tests to run
13:03:25 #info using the name "test selection" instead of "mapping"
13:03:37 #info setting up the current tools before implementing the new process
13:03:56 #info next we need to update OBS to make image generation post-build easier
13:04:12 did I miss something?
13:05:01 so next steps: taking the current tools into action, adding tasks to Bugzilla
13:05:19 E-P: dual scope issue
13:06:25 and maybe a re-draft of the diagram with 1, 2 and 3 as inputs and 4 as the test plan (as per the definition mentioned, or similar)
13:07:40 #info re-draft the diagram: 1, 2 and 3 as inputs and 4 as the test plan
13:08:11 I have to go soon; anything else for today?
13:08:44 I can create a draft plan for how we take the QA test process into action
13:08:55 nope, I have to go v. soon too
13:09:05 Nothing from me
13:09:11 either
13:09:18 #action E-P Create a plan for how to take the QA test process into action
13:09:21 I'd like to see a minimal e2e solution ASAP
13:09:39 it's always easier to grow something small :)
13:09:51 I agree
13:10:42 thanks for the discussion; again we are one step closer to having QA up and running
13:10:45 grab me in #mer and we can get something operational
13:10:58 ok
13:11:12 #endmeeting