2010-01-20, 20:43 | #52 | Posts: 11,700 | Thanked: 10,045 times | Joined on Jun 2006 @ North Texas, USA
Is this now to be merged with http://talk.maemo.org/showthread.php?t=41179? I just received a notice for the 'stinks' thread to be merged over to it... the whole thread, or from which post on?
2010-01-20, 21:07 | #53 | Posts: 5,795 | Thanked: 3,151 times | Joined on Feb 2007 @ Agoura Hills Calif
2010-01-20, 21:11 | #54 | Posts: 71 | Thanked: 49 times | Joined on Sep 2009 @ Espoo
(So go on, tell me, just between ourselves... what does rm -rf / really do???)
2010-01-20, 21:19 | #55 | Posts: 11,700 | Thanked: 10,045 times | Joined on Jun 2006 @ North Texas, USA
I agree with the Texrat idea of "a single trusted gatekeeper with full control" till things are straightened out. Of course, the identity of the gatekeeper is important. I nominate Texrat.
The Following User Says Thank You to Texrat For This Useful Post:
2010-01-20, 21:39 | #56 | Posts: 434 | Thanked: 325 times | Joined on Sep 2009

2010-01-20, 21:46 | #57 | Posts: 11,700 | Thanked: 10,045 times | Joined on Jun 2006 @ North Texas, USA

2010-01-20, 21:54 | #58 | Posts: 434 | Thanked: 325 times | Joined on Sep 2009

2010-01-20, 22:10 | #59 | Posts: 11,700 | Thanked: 10,045 times | Joined on Jun 2006 @ North Texas, USA
2010-01-20, 23:45 | #60 | Administrator | Posts: 1,036 | Thanked: 2,019 times | Joined on Sep 2009 @ Germany
The Following User Says Thank You to chemist For This Useful Post:
What I wonder is:
o are there defined test cases for a project/application?
- unit tests, especially for scripted applications (Python); see the sketch after this list
=> I would expect the answer to be yes, as most projects get ported and hopefully the tests that many of them bring along get executed.
- functional tests - for, well, functionality
=> I saw these in the wiki, however not very well organized at the time I checked.
- integration tests - to check that components that can interact actually do so as expected
=> I also saw these in the wiki, just not too many of them (I was looking for N800 testing; it might have improved in the meantime).
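To illustrate what I mean by a unit test for a scripted application, here is a minimal sketch using Python's standard unittest module. The fit_within function and its behaviour are made up for the example - they are not taken from any actual maemo package, which would import the function from its own module instead:

    import unittest

    # Hypothetical function under test; a real package would import it
    # from its own module instead of defining it inline.
    def fit_within(width, height, max_side):
        """Scale (width, height) proportionally so the longer side equals max_side."""
        if width <= 0 or height <= 0 or max_side <= 0:
            raise ValueError("dimensions must be positive")
        factor = max_side / float(max(width, height))
        return int(round(width * factor)), int(round(height * factor))

    class FitWithinTest(unittest.TestCase):
        def test_landscape_is_bounded_by_long_side(self):
            self.assertEqual(fit_within(800, 480, 400), (400, 240))

        def test_square_stays_square(self):
            self.assertEqual(fit_within(100, 100, 50), (50, 50))

        def test_rejects_nonpositive_input(self):
            self.assertRaises(ValueError, fit_within, 0, 480, 400)

    if __name__ == "__main__":
        unittest.main()

Running this gives exactly the kind of per-case PASS/FAIL output that could be fed into result tracking instead of a single thumbs up.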
o are results from running these tests tracked?
This is the more interesting item from my point of view. If I understand it right, only the 'summary' from the 'testers' gets collected, and once the 'PASSED' level reaches a certain point, the application is considered tested successfully.
The problem I see there - and that is actually also the main reason for me answering on this topic at all - is that this process can easily be cheated, which in fact already happens. Another maemo developer whom I know privately asked me to please give him the thumbs up so his package finally makes it into Extras...
I guess this is not the only occasion of this happening.
I can't say what the easiest technical approach would be, but I know there is open source test tracking software for manual test cases. There is even a test extension for Bugzilla itself called 'Testopia' - it's pretty bloated and can be quite slow - so I'd look for something simpler. But I really don't care how the testing gets tracked in the end, as long as it's tracked.
On the topic of rerunning all tests on a version/release upgrade - when the test cases are sorted along several dimensions, one can select tests specifically for the areas the upgrade changed and run that subset instead of all tests; a small sketch of such tracking follows below.
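For the tracking itself, here is a rough sketch of the minimal data I would expect to be recorded per test run, using only Python's built-in sqlite3 module. The schema, the 'area' dimension used for subset selection on upgrades, and all names in it are my own assumptions, not an existing maemo.org tool:

    import sqlite3

    conn = sqlite3.connect("results.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS result (
        package  TEXT,   -- package under test
        version  TEXT,   -- version the test ran against
        testcase TEXT,   -- test case identifier
        area     TEXT,   -- dimension for subset selection, e.g. 'ui' or 'io'
        tester   TEXT,   -- who ran it, so results are attributable
        verdict  TEXT    -- 'PASS' or 'FAIL'
    )""")

    def record(package, version, testcase, area, tester, verdict):
        conn.execute("INSERT INTO result VALUES (?, ?, ?, ?, ?, ?)",
                     (package, version, testcase, area, tester, verdict))
        conn.commit()

    def cases_for_area(package, area):
        """Pick the test subset for the area an upgrade actually touched."""
        rows = conn.execute("SELECT DISTINCT testcase FROM result "
                            "WHERE package = ? AND area = ?", (package, area))
        return [row[0] for row in rows]

    record("imageviewer", "0.2-1", "opens-an-image", "io", "tester1", "PASS")
    record("imageviewer", "0.2-1", "can-rotate", "ui", "tester1", "PASS")
    print(cases_for_area("imageviewer", "ui"))

With verdicts attributable to a tester and tied to a version, a plain thumbs up with no recorded test run would at least stand out.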
o is it easy to add test cases?
In my opinion there don't have to be that many test cases - just something more than the big 'thumbs up / thumbs down'.
More important is that anyone involved can easily add tests they think cover functionality they need, and that this gets propagated. These will most likely be of varying quality, but with a test case staging area and some regular reviews I would expect coverage, especially of functional tests, to rise fast.
Also, with such a system, Nokia could add their own tests for new features as well as upcoming bugfixes, and give dedicated beta testers a better guide on what to check.
Maybe all these things got covered in the meantime; when I was interested in N800 testing a year ago, the major problem from my point of view was the missing tool to track test results.
Once you have some defined tests, it becomes possible to better determine the actual quality of an application. Interested users who would like to use it, but are not sure whether something from extras-testing might screw up their device, can then see a more detailed overview of what to expect when the package is installed.
Examples of basic tests I personally would expect a package to at least pass to get into testing at all (I would expect this is already written down somewhere); a sketch automating a couple of them follows the list:
o package integrity
- goes to /opt
- no errors from install scripts in the package
- ...
o basic function
- appears in the menu
- can be started (very important for Python apps)
- does not exceed a defined CPU usage per time when not in use
- does not exceed a defined disk space usage per time
- runs as user
- can be terminated from the GUI without leaving remains like processes, temporary files and the like
o normal function (these of course depend on the application, so I list some I'd like to see for an image viewer)
- opens an image
- can rotate
- can zoom
- can handle X amount of images in a defined time
- ...
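As a sketch of how the "goes to /opt" check could be automated, the following uses dpkg-deb to list what a .deb would install and flags anything outside the expected locations; the whitelist of allowed prefixes (menu entry, icons) is my assumption for illustration, not the official policy:

    import subprocess

    # Prefixes a package is allowed to install into; this whitelist is an
    # assumption for illustration, not the official policy.
    ALLOWED = ("/opt/", "/usr/share/applications/", "/usr/share/icons/")

    def paths_in_deb(deb):
        """List file paths a .deb would install, via 'dpkg-deb -c'."""
        out = subprocess.check_output(["dpkg-deb", "-c", deb])
        paths = []
        for line in out.decode("utf-8").splitlines():
            # dpkg-deb -c prints tar-style rows; the path is the last
            # column, prefixed with '.' (symlink rows would need extra
            # handling, which this sketch skips).
            name = line.split()[-1].lstrip(".")
            if name and not name.endswith("/"):  # skip directory entries
                paths.append(name)
        return paths

    def files_outside_opt(deb):
        """Return installed files that violate the whitelist; empty = PASS."""
        return [p for p in paths_in_deb(deb) if not p.startswith(ALLOWED)]

    if __name__ == "__main__":
        import sys
        offenders = files_outside_opt(sys.argv[1])
        if offenders:
            print("FAIL: files outside the allowed prefixes:")
            for p in offenders:
                print("  " + p)
        else:
            print("PASS")

The "no errors from install scripts" check could similarly wrap an install in a scratch environment and inspect exit codes; the point is only that these gate checks are mechanical enough to run before a human tester ever sees the package.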
When these succeed, put the package in testing and let the real features and advanced functionality of the app be tested, like changing options, attaching tags, and so on.
I myself randomly install software from testing and devel, as I generally know what I'm doing and have no problem with reflashing the device. So I would love to give feedback on what actually works, not only on the bugs I see - and that feedback could help users as well as developers.
Tell me where to report results - and please don't send me to a wiki page.
things we learned from movies
38) No matter how badly a spaceship is attacked, its internal gravity system is never damaged.