Posts: 268 | Thanked: 304 times | Joined on Oct 2009 @ Orlando, USA
#61
A 10-day quarantine may be too long for some, and I appreciate that developers need to see their package in Extras quickly, but in the real world I don't see how 10 days goes against the 'release early, release often' mantra. Ten days before an app reaches a large number of users is not too long, IMO. Perhaps the time can be reduced a bit, and maybe more for updates. On the other end, though, 5 days for new apps is not enough: we must provide a 'weekend opportunity', since most of us use our free time to contribute towards testing. Reducing it to 7 days would be preferable and would give testers at least one weekend to get around to the app.

I agree with the assessment that the current process has too many 'open to interpretation' areas, which have brought it very close to abuse.

1) Packages without bug-trackers
These are the most common. But again, this is a grey area depending on the size of the project: for a small project, like a wallpaper pack, a bug-tracker may not be all that necessary. It's not difficult to find many such apps with >10 thumbs up (e.g. Easy-chroot).

2) Optification
These are easily caught, but again there is a grey area around how much rootfs space an app may take (including or excluding dependencies) before it is categorised as not optified.

The sad part is that both of the above checks could easily be automated (at build time? at promotion?) to save energy downstream. The requirement for a bug-tracker or a mailto link would be easy to check, and I am sure some simple rules could be applied for checking optification.
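To illustrate, the two checks could be sketched roughly as below. This is only a sketch under assumptions: the `XSBC-Bugtracker` control field is the one Maemo packages actually use, but the payload representation and the 100 KB rootfs budget are illustrative, not the real autobuilder's interface or policy.

```python
def has_bugtracker(control_fields: dict) -> bool:
    """Check that the package declares a bug tracker: an http(s) URL
    or a mailto link in the XSBC-Bugtracker control field."""
    url = control_fields.get("XSBC-Bugtracker", "")
    return url.startswith(("http://", "https://", "mailto:"))

def optified_enough(payload, limit_bytes: int = 100 * 1024) -> bool:
    """Rough optification rule: payload is an iterable of
    (path, size_in_bytes) pairs from the .deb's data archive.
    Anything sizeable must live under /opt; allow a small budget
    on the rootfs (the limit here is an arbitrary example)."""
    rootfs_bytes = sum(
        size for path, size in payload
        if not path.lstrip("./").startswith("opt/")
    )
    return rootfs_bytes <= limit_bytes
```

A promotion hook could run both functions and refuse the package (or flag it for a human) when either returns False, which is exactly the "save energy downstream" idea: testers never see packages that fail the mechanical checks.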

The process as it is today is good enough, but it's not properly written up. The lack of clarity in the QA Checklist makes it hard to justify a thumbs-down against a popular app.

The idea of a dedicated super-testers group that can "override" the judgement of ordinary testers is also a good one. It complicates the process further, IMO, but it is perhaps a necessary step as more people begin to rate without a firm understanding of the goals of extras-testing.

Maemo community apps stand for reliability and authenticity. I hope we can iron out the process and come to a consensus quickly.
 

The Following 3 Users Say Thank You to archebyte For This Useful Post:
jeremiah's Avatar
Posts: 170 | Thanked: 261 times | Joined on Feb 2009 @ Gothenburg, Sweden
#62
Originally Posted by Texrat View Post
Testing attributes need to be built into the Maemo infrastructure IMO. Apps should have the right metadata wrapped around them throughout their development/deployment lifecycle, and there needs to be a Maemo-managed web-based system in place to interact with that metadata. Once that's in place, making sure testing efforts are meaningful becomes simply a matter of exposing and updating the proper App attributes at various points along the lifecycle. Right now there does not appear to be enough detail.

related to this Brainstorm: http://talk.maemo.org/showthread.php?t=38014

(sorry, Flandry, I see this was outside your intended scope)
I strongly disagree with this, Texrat: it is moving in the wrong direction, toward an over-engineered non-solution.

Note that two of the most popular Linux distros use the current packaging format, and forking away from that is a really bad idea. The Maemo build system is already too different from Debian's (e.g. optification).
 

The Following 3 Users Say Thank You to jeremiah For This Useful Post:
jeremiah's Avatar
Posts: 170 | Thanked: 261 times | Joined on Feb 2009 @ Gothenburg, Sweden
#63
Originally Posted by Flandry View Post
or half finished, depending on whether you are a pessimist or an optimist, i suppose.
  • provide Quality Assurance by requiring testing of basic essential criteria for apps destined for Extras
This is my area and I take full responsibility for it. I have written the code; I am just waiting to plug it into the new builder so it can do its thing. But that is just an excuse. I will work hard to deliver a solution for this step and have it done soon.
 

The Following 2 Users Say Thank You to jeremiah For This Useful Post:
Posts: 434 | Thanked: 325 times | Joined on Sep 2009
#64
As it happens, I made a Brainstorm this morning about this issue. I had no idea that it was already being discussed, but RevdKathy kindly pointed me to this thread. In my defence, Talk was down when I wrote it, so I could not search the forum.

Anyway, since this is being discussed, let me offer my opinion too. These are the issues I have with the current system (some of them have already been mentioned):

1. A new version in Extras-testing resets package Karma/quarantine time

For example, I have this app in Extras-testing. I know there is a little mistake in the help file: it says "time you want to countdown from" instead of "count up from". Normally I would just correct this in a few minutes, but with the current system I would lose the app's karma/quarantine time. I already lost it once, so I'm not going to do that again. Granted, the mistake won't affect the functionality of the app, but it might confuse some end users. Still, I think users would much rather have the app sooner than have a grammatically correct help file 10 days later (that is, if it even got enough votes, which is highly unlikely, so the real wait might be several weeks). In other words, the current system discourages updates!

2. Why can't bug reporting page be automated?

The preferred place for bug reporting is Bugzilla. Fine; I have nothing against that. However, with the current method I have to first release my package, send an email requesting a Bugzilla page for my package, wait for the creation of that page, and finally release another package with the correct reference to the Bugzilla page. Why could there not be a simple way to auto-create the proper Bugzilla entry, like the Optify: Auto setting? Only if additional pages or settings were needed would a request have to be sent.

3. There are no testers in Extras-devel

A developer needs testers, but those are not available in Extras-devel, especially since the user is warned that his device will explode if he activates this repository filled with malicious apps from mad developers secretly planning to take over the world. So the only real testing can happen in Extras-testing. Therefore, there should be two stages:

Beta testing

A stage where the developer can get valuable feedback from users and thus improve the app. Updates that add features are allowed.

Release candidate

A stage that is initiated by the developer himself. Yes, not all developers are clinically insane megalomaniacs! Some of them actually take pride in the quality of their work and don't want to release a product that does not work well. In this stage, only bug fixes and minor necessary functionality changes would be allowed.

For these two stages there should be some kind of unified voting system. What system? I have no idea, and I'm tired of typing.

Anyway, this was only my opinion. Not that great, I know, but at least it's mine.

Last edited by Sasler; 2010-01-19 at 12:30.
 

The Following 9 Users Say Thank You to Sasler For This Useful Post:
Flandry's Avatar
Posts: 1,559 | Thanked: 1,786 times | Joined on Oct 2009 @ Boston
#65
Originally Posted by pycage View Post
IMHO the extras-testing warning should read differently than beware, here be dragons.

Maybe something like
If you use the word "rate", most native English speakers are going to give you their opinion, not the outcome of a series of tests. This is the way I set it up when I have an app in testing and am requesting user testers:
I've promoted it to Extras-testing, which means I consider it ready for end users. Now it's up to testers to verify that. If you are willing to be a tester, please read about the Extras-testing repo and make sure the package meets the criteria in the QA Checklist. You can find the testing report page for it here.
The real problem, however, is that there is any need to plug (advertise) testing in the first place. Users eager to download an app cannot be expected to do a good job of testing it. The ones you can count on to be more impartial are those "weird" (not my word) blessed souls who go through the testing queue because they want to contribute in that way. I wouldn't mention the testing at all in a thread if I thought ten "real" tests were necessary, so I guess my opinion was revealed when I started asking for testers. The way to solve this is either to reduce the total karma required, so there is no need to appeal to users to test, or to add a separate requirement for "official testers", so that we can be sure the app is given a thorough test. The first option would be acceptable to me, but the latter (which will require more work on the infrastructure, yes) is the best, because it lets the dev solicit user testing without making users jump through hoops, and that is the process most likely to get him feedback on usability issues.

Here's what happens in the rare case that one of the solicited user testers tries to jump through the requested hoops rather than just giving a thumbs up:
Ok. I really would like to make a report, but I'm having a few issues. I have read the wiki page and it makes little sense to me.

So, a couple of questions before writing the report.

...

Maybe I'm not adequate to do testing, but I would very much like to learn. It would greatly help if there were more examples in the wiki. Right now it feels like it's only for professionals :|
And then later:
Originally Posted by slender View Post
Oh. Now I understand. I should give a thumbs up only after doing the review and copy-pasting it to the comment section.

D'oh. I already voted thumbs up after the first install, because I thought it was nice that you put it together. Same thing I have done on Brainstorm or in the download section.

Sorry to say, but this whole system is really confusing. The wiki says the system is under construction. The whole comment system should be totally different: it should first ask about the wiki sections one by one, and only after that could you give a thumbs up or down.
The current system is too much of one, too little of another.
__________________

Unofficial PR1.3/Meego 1.1 FAQ

***
Classic example of arbitrary Nokia decision making. Couldn't just fall back to the no-brainer of tagging with lat/lon if the network isn't accessible, could you, Nokia?
MAME: an arcade in your pocket
Accelemymote: make your accelerometer more joy-ful

Last edited by Flandry; 2010-01-19 at 13:09.
 

The Following 5 Users Say Thank You to Flandry For This Useful Post:
qgil's Avatar
Posts: 3,105 | Thanked: 11,088 times | Joined on Jul 2007 @ Mountain View (CA, USA)
#66
Originally Posted by fms View Post
There is no way you can trust the community to deal with the legal issues. Legal issues require someone who has received actual legal training and is acquainted with the particular legal issues.
The same could be said about analysing system performance and power management. Still, the community is happy detecting only flagrant cases of sluggish systems and drained batteries.

In your first comment you mentioned an "obvious" case, and it's this "obvious" level that the community can handle without legal training. My only point is that legal problems are as troublesome for the community as they are for Nokia, while your comment seemed to put all the responsibility and reasons for concern on Nokia alone.
 
Posts: 434 | Thanked: 325 times | Joined on Sep 2009
#67
It would be interesting to know the number of downloads per app in Extras-testing. That would give some indication of how many of those who download an app actually bother to vote for it.
 

The Following 3 Users Say Thank You to Sasler For This Useful Post:
Texrat's Avatar
Posts: 11,700 | Thanked: 10,045 times | Joined on Jun 2006 @ North Texas, USA
#68
Back to the quarantine issue... in my brief study of software QA materials the past couple of days, one thing that stuck was the emphasis on lines of code (LOC). Granted, sheer LOC doesn't tell you everything you need to know about an app, but it's a decent, rough indicator of complexity.

So maybe LOC and/or compiled file size could be *one* factor that drives quarantine length. We could create demarcation points every so many kilobytes, for instance, and relate them to days in quarantine (e.g. 1 day per 25 KB of file size or per 2500 LOC).
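The idea above can be sketched in a few lines. Everything here is illustrative: the 25 KB and 2500-LOC constants come from the post, but the clamp bounds and the "take the larger of the two" rule are my own assumptions.

```python
def quarantine_days(size_bytes: int, loc: int,
                    min_days: int = 3, max_days: int = 14) -> int:
    """Quarantine length driven by package complexity:
    one day per 25 KB of compiled size or per 2500 lines of code,
    whichever is larger, clamped to [min_days, max_days]."""
    by_size = -(-size_bytes // (25 * 1024))  # ceiling division
    by_loc = -(-loc // 2500)
    return max(min_days, min(max_days, max(by_size, by_loc)))
```

For example, a 300 KB package would sit in quarantine for 12 days under this rule, while a tiny wallpaper pack would get the 3-day floor; the clamp keeps huge apps from being stuck for months.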

Does anyone know of any best practices in this regard? We're not the first to travel this path...
__________________
Nokia Developer Champion
Different <> Wrong | Listen - Judgment = Progress | People + Trust = Success
My personal site: http://texrat.net
 
Posts: 434 | Thanked: 325 times | Joined on Sep 2009
#69
What if these Super Testers (or moderators... whatever they should be called) had the ability to override the quarantine? I mean, if an app or update had no votes from these Mega Beings, the normal quarantine would stay in place. But provided the app already had enough voters (which, btw, should be 5), one of the Celestial Creatures could give it a green light if he felt no further testing was needed.
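The override rule being proposed could be stated as a small decision function. A minimal sketch, assuming the suggested thresholds (5 votes, 10-day quarantine); the function name and parameters are hypothetical, not anything in the actual repository tooling:

```python
def can_promote(net_karma: int, days_in_testing: int,
                supertester_approved: bool,
                min_karma: int = 5, quarantine_days: int = 10) -> bool:
    """Promotion from Extras-testing to Extras:
    the vote floor always applies; the quarantine can be served
    in full OR waived early by a super-tester's sign-off."""
    if net_karma < min_karma:
        return False                 # never promote below the vote floor
    if days_in_testing >= quarantine_days:
        return True                  # normal path: quarantine served
    return supertester_approved      # early path: super-tester override
```

Note that under this rule a super-tester can only shorten the wait, never bypass the vote requirement, which addresses the "jailbreaking" worry raised later in the thread.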
 

The Following 2 Users Say Thank You to Sasler For This Useful Post:
Texrat's Avatar
Posts: 11,700 | Thanked: 10,045 times | Joined on Jun 2006 @ North Texas, USA
#70
There's merit to that, Sasler, but my thinking is that we first need to step all the way back to the entry point of the process and start applying rational methodology, as opposed to numbers and actions driven by warm fuzzies. "Jailbreaking" quarantine might just ensure that more bad apps get out. We need to add more meaning to the process steps, and especially do what we can to ensure the proposed 5 testers aren't thumbing up or down based simply on like or dislike.
 

The Following 3 Users Say Thank You to Texrat For This Useful Post: