Web Portal localization testing estimation as % of estimated functional testing efforts?
emmalee
I'm currently estimating QA effort for a large enterprise web portal (about 5k+ hrs of DEV effort / 1.5k hrs of QA), based on the Liferay / Alfresco platforms. One of the requirements is three interface languages that the user can switch between from any page (place) in the solution.
So the question is simple: is there any common practice for estimating localization testing as a percentage of overall functional testing effort?
The following assumptions / aspects also apply:
- 2 of the 3 languages are common and don't require any special skills from the QA team; the 3rd language is uncommon (though not TOTALLY different, as e.g. English and Japanese are). Are there any estimation criteria for that 3rd language, e.g. x2 / x3 the effort required for localization testing of a common language?
- Which approach is better (or easier to explain): a percentage of overall functional testing effort, OR a figure based on the approximate number of distinct web portal pages?
At the moment the following makes sense to me:
- 5% of overall testing effort (i.e. about 75 hrs) for common-language localization testing;
- Approx. 10-15%, i.e. about 200 hrs, for UNcommon-language localization testing.
For the record: the web portal will have about 120-140 different types of pages that should contain localized elements.
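To make the percentage-based approach concrete, here is a minimal sketch of the arithmetic behind the figures above. The percentages are the asker's own assumptions, not an industry standard, and the function name is illustrative.

```python
# Localization testing effort as a share of overall functional QA effort.
# Figures are the assumptions from this question, not a benchmark.

QA_HOURS = 1500  # total functional QA estimate from the question


def loc_testing_hours(qa_hours: float, pct: float) -> float:
    """Localization testing effort as a percentage of functional QA effort."""
    return qa_hours * pct / 100.0


common = loc_testing_hours(QA_HOURS, 5)     # common language -> 75.0 hrs
uncommon = loc_testing_hours(QA_HOURS, 13)  # midpoint of the 10-15% range -> 195.0 hrs

print(common, uncommon)
```

The per-page alternative would instead multiply a per-page check time by the 120-140 page types; the percentage form is usually easier to defend because it scales automatically if the functional estimate changes.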
Any suggestions / help / ideas / approaches (ideally based on real experience / cases) are highly appreciated!
P.S. No sample link available - we're still at the requirements-refinement stage.
As well as "what Joe said, in bulldozer-sized loads", I have a few pointers to add from experience with localization/translation.
To test internationalization - which will need to happen first - you'll want something really obvious, so that in a test environment your team can go through every interaction in the system and check that everything that should be translated is translated. In a sufficiently large system, this alone is extremely time-consuming, because it isn't enough to simply visit each page: you'll need to perform the actions that trigger each error message, information message, popup, etc. That should happen before the actual translation is tested. The testing I did used a bogus language (consisting of random numbers), so it was really obvious if anything wasn't translated: if you open a page and see real words anywhere that isn't pulled from the database, the internationalization hasn't been done.
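The "bogus language" trick described above is often called pseudo-localization, and a minimal sketch looks something like this. The function and resource keys are illustrative, not from Liferay or any specific framework:

```python
# Pseudo-localize every resource string so that any hard-coded,
# untranslated text in the UI stands out immediately.

def pseudo_localize(text: str) -> str:
    """Wrap the string in loud markers and pad it so truncation is also visible."""
    padding = "~" * max(1, len(text) // 3)  # simulate roughly 30% text expansion
    return f"[!! {text} {padding}!!]"


# Hypothetical resource bundle entries for illustration.
resources = {
    "login.button": "Sign in",
    "error.timeout": "Your session has expired",
}

bogus_bundle = {key: pseudo_localize(value) for key, value in resources.items()}

# Any UI text that renders WITHOUT the [!! ... !!] markers was never
# pulled from the resource bundle, i.e. internationalization is missing.
print(bogus_bundle["login.button"])  # -> [!! Sign in ~~!!]
```

The padding also gives you an early warning about the layout problems mentioned later: captions that overflow under pseudo-localization will likely overflow under real translations too.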
To test translation - this will depend on whether or not you have someone on hand who actually speaks the language. If you have a native speaker or bilingual tester, it will take about the same amount of time as internationalization, maybe a little more if the language takes a lot more space.
If you don't have a native speaker or bilingual tester on hand, figure in extra time for someone to check that the translation is actually more or less correct. The more technical the text they need to check, the more time they'll need.
Some other caveats I've run into: you'll want to allow extra time on the development end for GUI modifications: English is one of the more concise languages around. Almost every other language requires more space to display a caption than the English equivalent.
If the language is like Japanese, it requires much more memory per character. I ran into an issue where the Japanese localization of an application was overflowing buffers because the memory requirements of some 200 lines of kanji were about the same as the memory requirements of some 5000 lines of English (that particular application represented the kanji as images rather than via code pages).
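The size asymmetry is easy to demonstrate even without the images-per-glyph problem that application had: in UTF-8, one kanji or kana character takes 3 bytes while one ASCII letter takes 1, and a short Japanese phrase can carry as much meaning as a longer English sentence. A small sketch (the example strings are mine, chosen to mean roughly the same thing):

```python
# Byte-size comparison of roughly equivalent English and Japanese messages.
english = "Please enter your password"
japanese = "パスワードを入力してください"  # roughly the same message

print(len(english), len(english.encode("utf-8")))    # 26 characters, 26 bytes
print(len(japanese), len(japanese.encode("utf-8")))  # 14 characters, 42 bytes
```

So when sizing buffers, database columns, or fixed-width UI fields, budget for bytes per rendered message, not characters, or the uncommon-language build will be the one that breaks.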