[GeoNetwork-devel] Localization proposal - II

Hi all,
I have some more observations and ideas on the localization of the general GUI, possibly also relevant to multilingual metadata.

I was merging new localized strings today and got !@!&^%$@#^&*%$ once more... :-)

Main reason:
The way we maintain strings is not very effective from a maintenance point of view.

Related reason:
Everybody who adds new strings or localizes a new part seems to take the easy route and dump the new strings at the start or end of a file in fairly random order.

Consequence:
Someone (and for now that has been me, hence my complaint!) has to go through all language files and manually sort them, remove duplicates, discover missing strings and possibly declare old ones obsolete. I did this today for just one of them (Arabic, not used and so probably not the most useful one, but hey, it is the first language in the list...).

This has become an almost impossible task by now. The strings.xml file alone contains over 500 lines per language, not to mention the other 27 XML files in the loc folder (again per language), the loc XML files in each schema, and the loc files in the InterMap application.

I started flaming on some IRC channel, and the discussion that followed pointed me in the direction of a more standardized approach. There may be other approaches, so don't hesitate to mention them in response to this email.

The tools we could use are:
- XLIFF as the internal format to store the language files in. These could then be exported into the format we actually need (but maybe those could be just the same!?). See the sketch just after this list for what an XLIFF entry looks like.
- Pootle as the web interface to allow people to translate strings. We could start a discussion at the OSGeo level to see whether more OSGeo projects are interested in having this as a shared facility.
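
For reference, a single GUI string stored in XLIFF 1.2 would look roughly like the fragment below. The id, file name and Italian target are invented here for illustration; they are not taken from our actual strings.xml files. As far as I know, Pootle can work directly on files in this format, which is part of the appeal.

<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2">
  <file original="strings.xml" source-language="en" target-language="it"
        datatype="xml">
    <body>
      <!-- one translatable GUI string per trans-unit -->
      <trans-unit id="search">
        <source>Search</source>
        <target>Cerca</target>
      </trans-unit>
    </body>
  </file>
</xliff>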

I updated the localization discussion page I put up some time ago at http://trac.osgeo.org/geonetwork/wiki/i18n and I hope you can have a look there, give feedback and continue to discuss with me which solution we should go for in the future.

We can, drawing on experience with the encoding used in XLIFF (or other i18n practices), also discuss how best to tackle multilingual metadata content. Maybe some Pootle concepts could be used, but I'm saying this while completely ignorant of those concepts :-)

One thing is sure: I need some people who will take on the same tedious task I did and go through the strings.xml files to put elements in alphabetical order, remove duplicates, etc.

Ciao,
Jeroen

Hi Jeroen:

I can help with putting the strings.xml files in alphabetical order and removing duplicates.
Could you give me some guidance on how to do it, and whether I need to work on trunk or on some branch?

Regards
Godofredo Contreras


Hi Godofredo,


Maybe a good option to do this on a regular basis would be to write an XSLT that uses xsl:sort to sort all elements and grouping (xsl:for-each-group, which is only available in XSLT 2.0 I think) to remove duplicate entries.
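
A minimal sketch of what such a stylesheet could look like in XSLT 2.0, assuming strings.xml is a flat list of elements under a single root (that layout is an assumption on my part, so treat this as a starting point rather than a ready-to-run solution):

<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes" encoding="UTF-8"/>

  <!-- Copy the root element, then emit its children sorted by name,
       keeping only the first occurrence of each element name. -->
  <xsl:template match="/*">
    <xsl:copy>
      <xsl:copy-of select="@*"/>
      <xsl:for-each-group select="*" group-by="name()">
        <xsl:sort select="current-grouping-key()"/>
        <!-- "." is the first element of each group, so later duplicates are dropped -->
        <xsl:copy-of select="."/>
      </xsl:for-each-group>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>

It needs an XSLT 2.0 processor such as Saxon. Since only the first occurrence of each element name is kept, someone would still have to check by hand which of two conflicting translations should survive.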

Francois